1 Department of Economics and Business Economics, Aarhus BSS, Aarhus University
2 Department of Finance, Stockholm School of Economics
3 Department of Economics and Business Economics, Aarhus BSS, Aarhus University
We develop a theory for a general class of discrete-time stochastic control problems that, in various ways, are time-inconsistent in the sense that they do not admit a Bellman optimality principle. We attack these problems by viewing them within a game-theoretic framework, and we look for subgame perfect Nash equilibrium points. For a general controlled Markov process and a fairly general objective functional, we derive an extension of the standard Bellman equation, in the form of a system of nonlinear equations, for the determination of the equilibrium strategy as well as the equilibrium value function. Most known examples of time-inconsistent stochastic control problems in the literature are easily seen to be special cases of the present theory. We also prove that for every time-inconsistent problem, there exists an associated time-consistent problem such that the optimal control and the optimal value function for the consistent problem coincide with the equilibrium control and value function, respectively, for the time-inconsistent problem. To exemplify the theory, we study some concrete examples, such as hyperbolic discounting and mean–variance control.
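To illustrate the flavor of the equilibrium approach described above, consider non-exponential (e.g. hyperbolic) discounting in discrete time. The following is a minimal sketch in generic notation not taken from the paper: $\varphi$ is a discount function, $c$ a per-period reward, $f_n$ the equilibrium control at time $n$, and $V_n$ the equilibrium value function. The subgame perfect equilibrium condition requires each time-$n$ player to optimize given that all later players use their equilibrium controls:

```latex
% Equilibrium condition at time n (illustrative notation, not the paper's):
% the time-n player maximizes its own discounted objective, taking the
% controls f_{n+1}, ..., f_T of all later players as fixed.
\[
f_n(x) \in \arg\max_{u}
\Big\{ \varphi(0)\, c(x,u)
 + \sum_{m=n+1}^{T} \varphi(m-n)\,
   \mathbb{E}^{u}_{n,x}\big[\, c\big(X_m, f_m(X_m)\big) \big] \Big\},
\]
% Equilibrium value function: the objective evaluated along the equilibrium,
\[
V_n(x) = \varphi(0)\, c\big(x, f_n(x)\big)
 + \sum_{m=n+1}^{T} \varphi(m-n)\,
   \mathbb{E}^{f_n(x)}_{n,x}\big[\, c\big(X_m, f_m(X_m)\big) \big].
\]
```

The system is solved by backward recursion from the terminal time $T$; because the weights $\varphi(m-n)$ are re-indexed at every $n$, the continuation rewards cannot be folded into a single value function, which is why the standard Bellman equation is replaced by a coupled system of equations. With exponential discounting, $\varphi(k) = \delta^k$, the system collapses to the ordinary Bellman recursion.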
Finance and Stochastics, 2014, Vol 18, Issue 3, p. 545-592
Time consistency; Time inconsistency; Time-inconsistent control; Dynamic programming; Stochastic control; Bellman equation; Hyperbolic discounting; Mean–variance