Recursive Macroeconomic Theory, Thomas Sargent, 2nd ed., Chapter 7

Part III: Competitive equilibria and applications

Chapter 7. Recursive (Partial) Equilibrium

7.1 An equilibrium concept

This chapter formulates competitive and oligopolistic equilibria in some dynamic settings. Up to now, we have studied single-agent problems where components of the state vector not under the control of the agent were taken as given. In this chapter, we describe multiple-agent settings in which some of the components of the state vector that one agent takes as exogenous are determined by the decisions of other agents. We study partial equilibrium models of a kind applied in microeconomics. (For example, see Rosen and Topel (1988) and Rosen, Murphy, and Scheinkman (1994).) We describe two closely related equilibrium concepts for such models: a rational expectations or recursive competitive equilibrium, and a Markov perfect equilibrium. The first equilibrium concept jointly restricts a Bellman equation and a transition law that is taken as given in that Bellman equation. The second equilibrium concept leads to pairs (in the duopoly case) or sets (in the oligopoly case) of Bellman equations and transition equations that are to be solved jointly by simultaneous backward induction. Though the equilibrium concepts introduced in this chapter obviously transcend linear-quadratic setups, we choose to present them in the context of linear-quadratic examples in which the Bellman equations remain tractable.

7.2 Example: adjustment costs

This section describes a model of a competitive market with producers who face adjustment costs. (The model is a version of one analyzed by Lucas and Prescott (1971) and Sargent (1987a); the recursive competitive equilibrium concept was used by Lucas and Prescott (1971) and described further by Prescott and Mehra (1980).) The model consists of $n$ identical firms whose profit function makes them want to forecast the aggregate output decisions of other firms just like them in order to determine their own output. We assume that $n$ is a large number, so that the output of any single firm has a negligible effect on aggregate output and, hence, firms are justified in treating their forecast of aggregate output as unaffected by their own output decisions.

Thus, one of the $n$ competitive firms sells output $y_t$ and chooses a production plan to maximize

$$\sum_{t=0}^{\infty} \beta^t R_t \tag{7.2.1}$$

where

$$R_t = p_t y_t - .5 d (y_{t+1} - y_t)^2 \tag{7.2.2}$$

subject to $y_0$ being a given initial condition. Here $\beta \in (0,1)$ is a discount factor, and $d > 0$ measures a cost of adjusting the rate of output. The firm is a price taker. The price $p_t$ lies on the demand curve

$$p_t = A_0 - A_1 Y_t \tag{7.2.3}$$

where $A_0 > 0$, $A_1 > 0$, and $Y_t$ is the marketwide level of output, being the sum of the outputs of the $n$ identical firms. The firm believes that marketwide output follows the law of motion

$$Y_{t+1} = H_0 + H_1 Y_t \equiv H(Y_t), \tag{7.2.4}$$

where $Y_0$ is a known initial condition. The belief parameters $H_0, H_1$ are among the equilibrium objects of the analysis, but for now we proceed on faith and take them as given. The firm observes $Y_t$ and $y_t$ at time $t$ when it chooses $y_{t+1}$. The adjustment costs $.5 d (y_{t+1} - y_t)^2$ give the firm the incentive to forecast the market price.

Substituting equation (7.2.3) into equation (7.2.2) gives

$$R_t = (A_0 - A_1 Y_t) y_t - .5 d (y_{t+1} - y_t)^2.$$

The firm's incentive to forecast the market price translates into an incentive to forecast the level of market output $Y$. We can write the Bellman equation for the firm as

$$v(y, Y) = \max_{y'} \left\{ A_0 y - A_1 y Y - .5 d (y' - y)^2 + \beta v(y', Y') \right\} \tag{7.2.5}$$

where the maximization is subject to $Y' = H(Y)$.
Here a prime denotes next period's value of a variable. The Euler equation for the firm's problem is

$$-d(y' - y) + \beta v_y(y', Y') = 0. \tag{7.2.6}$$

Noting that for this problem the control is $y'$ and applying the Benveniste-Scheinkman formula from an earlier chapter gives

$$v_y(y, Y) = A_0 - A_1 Y + d(y' - y).$$

Substituting this equation into equation (7.2.6) gives

$$-d(y_{t+1} - y_t) + \beta \left[ A_0 - A_1 Y_{t+1} + d(y_{t+2} - y_{t+1}) \right] = 0. \tag{7.2.7}$$

In the process of solving its Bellman equation, the firm sets an output path that satisfies equation (7.2.7), taking equation (7.2.4) as given, subject to the initial conditions $(y_0, Y_0)$ as well as an extra terminal condition. The terminal condition is

$$\lim_{t \to \infty} \beta^t y_t v_y(y_t, Y_t) = 0. \tag{7.2.8}$$

This is called the transversality condition and acts as a first-order necessary condition "at infinity." The firm's decision rule solves the difference equation (7.2.7) subject to the given initial condition $y_0$ and the terminal condition (7.2.8). Solving the Bellman equation by backward induction automatically incorporates both equations (7.2.7) and (7.2.8).

The firm's optimal policy function is

$$y_{t+1} = h(y_t, Y_t). \tag{7.2.9}$$

Then with $n$ identical firms, setting $Y_t = n y_t$ makes the actual law of motion for output for the market

$$Y_{t+1} = n h(Y_t / n, Y_t). \tag{7.2.10}$$

Thus, when firms believe that the law of motion for marketwide output is equation (7.2.4), their optimizing behavior makes the actual law of motion equation (7.2.10). A recursive competitive equilibrium equates the actual and perceived laws of motion (7.2.4) and (7.2.10). For this model, we adopt the following definition:

Definition: A recursive competitive equilibrium (also often called a rational expectations equilibrium) of the model with adjustment costs is a value function $v(y, Y)$, an optimal policy function $h(y, Y)$, and a law of motion $H(Y)$ such that

a. Given $H$, $v(y, Y)$ satisfies the firm's Bellman equation and $h(y, Y)$ is the optimal policy function.

b. The law of motion $H$ satisfies $H(Y) = n h(Y/n, Y)$.

The firm's optimum problem induces a mapping $\mathcal{M}$ from a perceived law of motion $H$ for marketwide output to an actual law of motion $\mathcal{M}(H)$. The mapping is summarized in equation (7.2.10). The $H$ component of a rational expectations equilibrium is a fixed point of the operator $\mathcal{M}$.
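To make the operator $\mathcal{M}$ concrete, here is a minimal sketch in Python (the book's own programs are in Matlab) that computes $\mathcal{M}(H)$ by solving the firm's linear-quadratic problem given a belief $H$, and then searches for a fixed point by damped iteration. The parameter values are borrowed from exercise 7.1 below, with $n = 1$; damped iteration is one natural way to hunt for the fixed point, though its convergence is not guaranteed in general.

```python
import numpy as np

# Parameter values from exercise 7.1, with n = 1.
A0, A1, beta, d = 100.0, 0.05, 0.95, 10.0

def solve_lq(A, B, R, Q, beta, tol=1e-12, max_iter=100_000):
    """Iterate the Riccati map for  max sum_t beta^t (x'Rx + u'Qu)
    subject to x_{t+1} = A x_t + B u_t.  Returns P and F, where the
    optimal rule is u = -F x and the value function is x'Px."""
    P = np.zeros_like(A)
    for _ in range(max_iter):
        F = beta * np.linalg.solve(Q + beta * B.T @ P @ B, B.T @ P @ A)
        ABF = A - B @ F
        P_new = R + F.T @ Q @ F + beta * ABF.T @ P @ ABF
        if np.max(np.abs(P_new - P)) < tol:
            return P_new, F
        P = P_new
    raise RuntimeError("Riccati iteration did not converge")

def M(H0, H1):
    """Actual law of motion M(H) implied by the belief Y' = H0 + H1 Y.
    State x = (y, Y, 1)', control u = y' - y."""
    A = np.array([[1.0, 0.0, 0.0],
                  [0.0, H1,  H0 ],
                  [0.0, 0.0, 1.0]])
    B = np.array([[1.0], [0.0], [0.0]])
    # One-period return A0*y - A1*y*Y - .5*d*u^2, written as x'Rx + u'Qu.
    R = np.array([[0.0,     -A1 / 2, A0 / 2],
                  [-A1 / 2,  0.0,    0.0   ],
                  [A0 / 2,   0.0,    0.0   ]])
    Q = np.array([[-0.5 * d]])
    _, F = solve_lq(A, B, R, Q, beta)
    f0, f1, f2 = F.ravel()
    # y' = y + u = (1 - f0) y - f1 Y - f2; imposing y = Y gives the
    # actual aggregate law Y' = H0_new + H1_new * Y.
    return -f2, 1.0 - f0 - f1

# Damped fixed-point iteration H_{j+1} = (1 - lam) H_j + lam M(H_j).
H0, H1, lam = 95.0, 0.95, 0.5
for _ in range(200):
    H0_new, H1_new = M(H0, H1)
    if abs(H0_new - H0) + abs(H1_new - H1) < 1e-9:
        break
    H0 = (1 - lam) * H0 + lam * H0_new
    H1 = (1 - lam) * H1 + lam * H1_new
print(H0, H1)   # candidate rational expectations (H0, H1)
```

Exercise 7.2 below asks you to propose an iterative algorithm in exactly this spirit.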
The equilibrium just defined is a special case of a recursive competitive equilibrium, to be defined more generally in the next section. How might we find an equilibrium? The next subsection shows a method that works in the present case and often works more generally. The method involves noting that the equilibrium solves an associated planning problem. For convenience, we'll assume from now on that the number of firms is one, while retaining the assumption of price-taking behavior.

7.2.1 A planning problem

Our solution strategy is to match the Euler equations of the market problem with those for a planning problem that can be solved as a single-agent dynamic programming problem. (The method of this section was used by Lucas and Prescott (1971). It exploits the connection between equilibrium and Pareto optimality expressed in the fundamental theorems of welfare economics; see Mas-Colell, Whinston, and Green (1995).) The optimal quantities from the planning problem are then the recursive competitive equilibrium quantities, and the equilibrium price can be coaxed from shadow prices for the planning problem.

To determine the planning problem, we first compute the sum of consumer and producer surplus at time $t$, defined as

$$S_t = S(Y_t, Y_{t+1}) = \int_0^{Y_t} (A_0 - A_1 x)\, dx - .5 d (Y_{t+1} - Y_t)^2. \tag{7.2.11}$$

The first term is the area under the demand curve. The planning problem is to choose a production plan to maximize

$$\sum_{t=0}^{\infty} \beta^t S(Y_t, Y_{t+1}) \tag{7.2.12}$$

subject to an initial condition $Y_0$. The Bellman equation for the planning problem is

$$V(Y) = \max_{Y'} \left\{ A_0 Y - \frac{A_1}{2} Y^2 - .5 d (Y' - Y)^2 + \beta V(Y') \right\}. \tag{7.2.13}$$

The Euler equation is

$$-d(Y' - Y) + \beta V'(Y') = 0. \tag{7.2.14}$$

Applying the Benveniste-Scheinkman formula gives

$$V'(Y) = A_0 - A_1 Y + d(Y' - Y). \tag{7.2.15}$$

Substituting this into equation (7.2.14) and rearranging gives

$$\beta A_0 + d Y_t - \left[ \beta A_1 + d(1 + \beta) \right] Y_{t+1} + d \beta Y_{t+2} = 0. \tag{7.2.16}$$

Return to equation (7.2.7) and set $y_t = Y_t$ for all $t$. (Remember that we have set $n = 1$. When $n \neq 1$ we have to adjust pieces of the argument for $n$.) Notice that with $y_t = Y_t$, equations (7.2.16) and (7.2.7) are identical. Thus, a solution of the planning problem also is an equilibrium. Setting $y_t = Y_t$ in equation (7.2.7) amounts to dropping equation (7.2.4) and instead solving for the coefficients $H_0, H_1$ that make $y_t = Y_t$ true and that jointly solve equations (7.2.4) and (7.2.7).

It follows that for this example we can compute an equilibrium by forming the optimal linear regulator problem corresponding to the Bellman equation (7.2.13). The optimal policy function for this problem can be used to form the rational expectations $H(Y)$.
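As an illustration, here is a short, self-contained sketch (again Python rather than the book's Matlab, with the exercise 7.1 parameter values assumed) that solves the planner's Bellman equation (7.2.13) as an optimal linear regulator and reads $H(Y)$ off the planner's policy function.

```python
import numpy as np

A0, A1, beta, d = 100.0, 0.05, 0.95, 10.0   # exercise 7.1 values

# Planner: max sum_t beta^t [A0*Y - (A1/2)*Y^2 - .5*d*(Y' - Y)^2].
# State x = (Y, 1)', control u = Y' - Y, so x' = A x + B u with A = I.
A = np.eye(2)
B = np.array([[1.0], [0.0]])
R = np.array([[-A1 / 2, A0 / 2],
              [A0 / 2,  0.0   ]])
Q = np.array([[-0.5 * d]])

P = np.zeros((2, 2))
for _ in range(100_000):                     # Riccati / value iteration
    F = beta * np.linalg.solve(Q + beta * B.T @ P @ B, B.T @ P @ A)
    ABF = A - B @ F
    P_new = R + F.T @ Q @ F + beta * ABF.T @ P @ ABF
    if np.max(np.abs(P_new - P)) < 1e-12:
        break
    P = P_new

f0, f1 = F.ravel()
# Y' = Y + u = (1 - f0) Y - f1, so H1 = 1 - f0 and H0 = -f1.
H0, H1 = -f1, 1.0 - f0
print(H0, H1)    # the rational expectations law of motion H(Y)
```

The same pair $(H_0, H_1)$ should emerge as the fixed point of the iteration sketched in the previous section; exercise 7.2 asks you to check candidate pairs.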
7.3 Recursive competitive equilibrium

The equilibrium concept of the previous section is widely used. Following Prescott and Mehra (1980), it is useful to define the equilibrium concept more generally as a recursive competitive equilibrium. Let $x$ be a vector of state variables under the control of a representative agent and let $X$ be the vector of those same variables chosen by "the market." Let $Z$ be a vector of other state variables chosen by "nature," that is, determined outside the model. The representative agent's problem is characterized by the Bellman equation

$$v(x, X, Z) = \max_{u} \left\{ R(x, X, Z, u) + \beta v(x', X', Z') \right\} \tag{7.3.1}$$

where a prime denotes next period's value, and where the maximization is subject to the restrictions

$$x' = g(x, X, Z, u) \tag{7.3.2}$$
$$X' = G(X, Z) \tag{7.3.3}$$
$$Z' = \zeta(Z). \tag{7.3.4}$$

Here $g$ describes the impact of the representative agent's controls $u$ on his state $x'$; $G$ and $\zeta$ describe his beliefs about the evolution of the aggregate state. The solution of the representative agent's problem is a decision rule

$$u = h(x, X, Z). \tag{7.3.5}$$

To make the representative agent representative, we impose $X = x$, but only "after" we have solved the agent's decision problem. Substituting equation (7.3.5) and $X = x$ into equation (7.3.2) gives the actual law of motion

$$X' = G_A(X, Z), \tag{7.3.6}$$

where $G_A(X, Z) \equiv g[X, X, Z, h(X, X, Z)]$. We are now ready to propose a definition:

Definition: A recursive competitive equilibrium is a policy function $h$, an actual aggregate law of motion $G_A$, and a perceived aggregate law $G$ such that (a) given $G$, $h$ solves the representative agent's optimization problem; and (b) $h$ implies that $G_A = G$.

This equilibrium concept is also sometimes called a rational expectations equilibrium. The equilibrium concept makes $G$ an outcome of the analysis. The functions giving the representative agent's expectations about the aggregate state variables contribute no free parameters and are outcomes of the analysis. There are no free parameters that characterize expectations. (This is the sense in which rational expectations models make expectations disappear from a model.) In exercise 7.1, you are asked to implement this equilibrium concept.

7.4 Markov perfect equilibrium

It is instructive to consider a dynamic model of duopoly. A market has two firms. Each firm recognizes that its output decision will affect the aggregate output and therefore influence the market price. Thus, we drop the assumption of price-taking behavior. (One consequence of departing from the price-taking framework is that the market outcome will no longer maximize welfare, measured as the sum of consumer and producer surplus. See exercise 7.4 for the case of a monopoly.) The one-period return function of firm $i$ is

$$R_{it} = p_t y_{it} - .5 d (y_{it+1} - y_{it})^2. \tag{7.4.1}$$

There is a demand curve

$$p_t = A_0 - A_1 (y_{1t} + y_{2t}). \tag{7.4.2}$$

Substituting the demand curve into equation (7.4.1) lets us express the return as

$$R_{it} = A_0 y_{it} - A_1 y_{it}^2 - A_1 y_{it} y_{-i,t} - .5 d (y_{it+1} - y_{it})^2, \tag{7.4.3}$$

where $y_{-i,t}$ denotes the output of the firm other than $i$. Firm $i$ chooses a decision rule that sets $y_{it+1}$ as a function of $(y_{it}, y_{-i,t})$ and that maximizes

$$\sum_{t=0}^{\infty} \beta^t R_{it}.$$

Temporarily assume that the maximizing decision rule is $y_{it+1} = f_i(y_{it}, y_{-i,t})$. Given the function $f_{-i}$, the Bellman equation of firm $i$ is

$$v_i(y_{it}, y_{-i,t}) = \max_{y_{it+1}} \left\{ R_{it} + \beta v_i(y_{it+1}, y_{-i,t+1}) \right\}, \tag{7.4.4}$$

where the maximization is subject to the perceived decision rule of the other firm

$$y_{-i,t+1} = f_{-i}(y_{-i,t}, y_{it}). \tag{7.4.5}$$

Note the cross-reference between the two problems for $i = 1, 2$. We now advance the following definition:

Definition: A Markov perfect equilibrium is a pair of value functions $v_i$ and a pair of policy functions $f_i$ for $i = 1, 2$ such that

a. Given $f_{-i}$, $v_i$ satisfies the Bellman equation (7.4.4).

b. The policy function $f_i$ attains the right side of the Bellman equation (7.4.4).

The adjective Markov denotes that the equilibrium decision rules depend only on the current values of the state variables $y_{it}$, not their histories. Perfect means that the equilibrium is constructed by backward induction and therefore builds in optimizing behavior for each firm for all conceivable future states, including many that are not realized by iterating forward on the pair of equilibrium strategies $f_i$.
7.4.1 Computation

If it exists, a Markov perfect equilibrium can be computed by iterating to convergence on the pair of Bellman equations (7.4.4). In particular, let $v_i^j, f_i^j$ be the value function and policy function for firm $i$ at the $j$th iteration. Then imagine constructing the iterates

$$v_i^{j+1}(y_{it}, y_{-i,t}) = \max_{y_{it+1}} \left\{ R_{it} + \beta v_i^j(y_{it+1}, y_{-i,t+1}) \right\}, \tag{7.4.6}$$

where the maximization is subject to

$$y_{-i,t+1} = f_{-i}^j(y_{-i,t}, y_{it}). \tag{7.4.7}$$

In general, these iterations are difficult. In the next section, we describe how the calculations simplify for the case in which the return function is quadratic and the transition laws are linear. (See Levhari and Mirman (1980) for how a Markov perfect equilibrium can be computed conveniently with logarithmic returns and Cobb-Douglas transition laws; they construct a model of fish and fishers.)

7.5 Linear Markov perfect equilibria

In this section, we show how the optimal linear regulator can be used to solve a model like that in the previous section. That model should be considered to be an example of a dynamic game. A dynamic game consists of these objects: (a) a list of players; (b) a list of dates and actions available to each player at each date; and (c) payoffs for each player expressed as functions of the actions taken by all players.

The optimal linear regulator is a good tool for formulating and solving dynamic games. The standard equilibrium concept in these games, subgame perfection, requires that each player's strategy be computed by backward induction. This leads to an interrelated pair of Bellman equations. In linear-quadratic dynamic games, these "stacked Bellman equations" become "stacked Riccati equations" with a tractable mathematical structure.

We now consider the following two-player, linear-quadratic dynamic game. An $(n \times 1)$ state vector $x_t$ evolves according to a transition equation

$$x_{t+1} = A_t x_t + B_{1t} u_{1t} + B_{2t} u_{2t} \tag{7.5.1}$$

where $u_{jt}$ is a $(k_j \times 1)$ vector of controls of player $j$. We start with a finite-horizon formulation, where $t_0$ is the initial date and $t_1$ is the terminal date for the common horizon of the two players. Player 1 maximizes

$$-\sum_{t=t_0}^{t_1 - 1} \left( x_t^T R_1 x_t + u_{1t}^T Q_1 u_{1t} + u_{2t}^T S_1 u_{2t} \right) \tag{7.5.2}$$

where $R_1$ and $S_1$ are positive semidefinite and $Q_1$ is positive definite. Player 2 maximizes

$$-\sum_{t=t_0}^{t_1 - 1} \left( x_t^T R_2 x_t + u_{2t}^T Q_2 u_{2t} + u_{1t}^T S_2 u_{1t} \right) \tag{7.5.3}$$

where $R_2$ and $S_2$ are positive semidefinite and $Q_2$ is positive definite.

We formulate a Markov perfect equilibrium as follows. Player $j$ employs linear decision rules

$$u_{jt} = -F_{jt} x_t, \qquad t = t_0, \ldots, t_1 - 1$$

where $F_{jt}$ is a $(k_j \times n)$ matrix. Assume that player $i$ knows $\{F_{-i,t};\ t = t_0, \ldots, t_1 - 1\}$. Then player 1's problem is to maximize expression (7.5.2) subject to the known law of motion (7.5.1) and the known control law $u_{2t} = -F_{2t} x_t$ of player 2. Symmetrically, player 2's problem is to maximize expression (7.5.3) subject to equation (7.5.1) and $u_{1t} = -F_{1t} x_t$. A Markov perfect equilibrium is a pair of sequences $\{F_{1t}, F_{2t};\ t = t_0, t_0 + 1, \ldots, t_1 - 1\}$ such that $\{F_{1t}\}$ solves player 1's problem, given $\{F_{2t}\}$, and $\{F_{2t}\}$ solves player 2's problem, given $\{F_{1t}\}$.

We have restricted each player's strategy to depend only on $x_t$, and not on the history $h_t = \{(x_s, u_{1s}, u_{2s}),\ s = t_0, \ldots, t\}$. This restriction on strategy spaces accounts for the adjective "Markov" in the phrase "Markov perfect equilibrium."

Player 1's problem is to maximize

$$-\sum_{t=t_0}^{t_1 - 1} \left\{ x_t^T \left( R_1 + F_{2t}^T S_1 F_{2t} \right) x_t + u_{1t}^T Q_1 u_{1t} \right\}$$

subject to

$$x_{t+1} = (A_t - B_{2t} F_{2t}) x_t + B_{1t} u_{1t}.$$

This is an optimal linear regulator problem, and it can be solved by working backward. Evidently, player 2's problem is also an optimal linear regulator problem.
The solution of player 1's problem is given by

$$F_{1t} = \left( B_{1t}^T P_{1t+1} B_{1t} + Q_1 \right)^{-1} B_{1t}^T P_{1t+1} \left( A_t - B_{2t} F_{2t} \right), \qquad t = t_0, t_0 + 1, \ldots, t_1 - 1 \tag{7.5.4}$$

where $P_{1t}$ is the solution of the following matrix Riccati difference equation, with terminal condition $P_{1 t_1} = 0$:

$$P_{1t} = R_1 + F_{2t}^T S_1 F_{2t} + \left( A_t - B_{2t} F_{2t} \right)^T P_{1t+1} \left( A_t - B_{2t} F_{2t} \right) - \left( A_t - B_{2t} F_{2t} \right)^T P_{1t+1} B_{1t} \left( B_{1t}^T P_{1t+1} B_{1t} + Q_1 \right)^{-1} B_{1t}^T P_{1t+1} \left( A_t - B_{2t} F_{2t} \right). \tag{7.5.5}$$

The solution of player 2's problem is

$$F_{2t} = \left( B_{2t}^T P_{2t+1} B_{2t} + Q_2 \right)^{-1} B_{2t}^T P_{2t+1} \left( A_t - B_{1t} F_{1t} \right) \tag{7.5.6}$$

where $P_{2t}$ solves the following matrix Riccati difference equation, with terminal condition $P_{2 t_1} = 0$:

$$P_{2t} = R_2 + F_{1t}^T S_2 F_{1t} + \left( A_t - B_{1t} F_{1t} \right)^T P_{2t+1} \left( A_t - B_{1t} F_{1t} \right) - \left( A_t - B_{1t} F_{1t} \right)^T P_{2t+1} B_{2t} \left( B_{2t}^T P_{2t+1} B_{2t} + Q_2 \right)^{-1} B_{2t}^T P_{2t+1} \left( A_t - B_{1t} F_{1t} \right). \tag{7.5.7}$$

The equilibrium sequences $\{F_{1t}, F_{2t};\ t = t_0, t_0 + 1, \ldots, t_1 - 1\}$ can be calculated from the pair of coupled Riccati difference equations (7.5.5) and (7.5.7). In particular, we use equations (7.5.4), (7.5.5), (7.5.6), and (7.5.7) to "work backward" from time $t_1 - 1$. Notice that, given $P_{1t+1}$ and $P_{2t+1}$, equations (7.5.4) and (7.5.6) are a system of $(k_2 \times n) + (k_1 \times n)$ linear equations in the $(k_2 \times n) + (k_1 \times n)$ unknowns in the matrices $F_{1t}$ and $F_{2t}$.

Notice how $j$'s control law $F_{jt}$ is a function of $\{F_{is},\ s \geq t,\ i \neq j\}$. Thus, agent $i$'s choice of $\{F_{it};\ t = t_0, \ldots, t_1 - 1\}$ influences agent $j$'s choice of control laws. However, in the Markov perfect equilibrium of this game, each agent is assumed to ignore the influence that his choice exerts on the other agent's choice. (In an equilibrium of a Stackelberg or dominant-player game, the timing of moves is so altered relative to the present game that one of the agents, called the leader, takes into account the influence that his choices exert on the other agent's choices. See chapter 18.)

We often want to compute the solutions of such games for infinite horizons, in the hope that the decision rules $F_{it}$ settle down to be time invariant as $t_1 \to +\infty$. In practice, we usually fix $t_1$ and compute the equilibrium of an infinite-horizon game by driving $t_0 \to -\infty$.
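The backward recursion is mechanical enough to sketch in code. Below is a minimal Python implementation of equations (7.5.4) through (7.5.7) for time-invariant $(A, B_1, B_2)$; it is ours, not the book's (the book supplies the Matlab program nnash.m, whose objective also carries cross-product terms omitted here). Starting from the terminal condition $P_1 = P_2 = 0$ and iterating until the decision rules stop changing mimics driving $t_0 \to -\infty$; convergence is not guaranteed in general.

```python
import numpy as np

def markov_perfect(A, B1, B2, R1, R2, Q1, Q2, S1, S2,
                   tol=1e-10, max_iter=5000):
    """Iterate backward on the coupled Riccati equations (7.5.5), (7.5.7).
    Given P1_{t+1}, P2_{t+1}, equations (7.5.4) and (7.5.6) are linear in
    (F1t, F2t), so each step first solves that stacked linear system."""
    n, k1, k2 = A.shape[0], B1.shape[1], B2.shape[1]
    P1, P2 = np.zeros((n, n)), np.zeros((n, n))
    F1, F2 = np.zeros((k1, n)), np.zeros((k2, n))
    for _ in range(max_iter):
        # Solve (7.5.4) and (7.5.6) jointly for (F1, F2).
        M = np.block([[B1.T @ P1 @ B1 + Q1, B1.T @ P1 @ B2],
                      [B2.T @ P2 @ B1,      B2.T @ P2 @ B2 + Q2]])
        rhs = np.vstack([B1.T @ P1 @ A, B2.T @ P2 @ A])
        F = np.linalg.solve(M, rhs)
        F1_new, F2_new = F[:k1, :], F[k1:, :]
        # Riccati updates (7.5.5) and (7.5.7).
        L1 = A - B2 @ F2_new                 # player 1's closed-loop A
        L2 = A - B1 @ F1_new                 # player 2's closed-loop A
        P1 = (R1 + F2_new.T @ S1 @ F2_new + L1.T @ P1 @ L1
              - L1.T @ P1 @ B1 @ np.linalg.solve(B1.T @ P1 @ B1 + Q1,
                                                 B1.T @ P1 @ L1))
        P2 = (R2 + F1_new.T @ S2 @ F1_new + L2.T @ P2 @ L2
              - L2.T @ P2 @ B2 @ np.linalg.solve(B2.T @ P2 @ B2 + Q2,
                                                 B2.T @ P2 @ L2))
        if np.max(np.abs(F1_new - F1)) + np.max(np.abs(F2_new - F2)) < tol:
            return F1_new, F2_new, P1, P2
        F1, F2 = F1_new, F2_new
    raise RuntimeError("decision rules did not converge")

# Usage sketch (ours): the duopoly of section 7.4 with the exercise 7.1
# parameter values.  State x = (1, y1, y2)', controls u_i = y_{i,t+1} - y_{it}.
# That model discounts at beta while (7.5.2)-(7.5.3) are undiscounted;
# premultiplying A, B1, B2 by sqrt(beta) absorbs the discounting.
A0, A1, beta, d = 100.0, 0.05, 0.95, 10.0
A  = np.sqrt(beta) * np.eye(3)
B1 = np.sqrt(beta) * np.array([[0.0], [1.0], [0.0]])
B2 = np.sqrt(beta) * np.array([[0.0], [0.0], [1.0]])
# Minus the one-period return (7.4.3) for firm i, as x'R_i x + u_i'Q_i u_i.
R1 = np.array([[0.0,   -A0/2,  0.0 ],
               [-A0/2,  A1,    A1/2],
               [0.0,    A1/2,  0.0 ]])
R2 = np.array([[0.0,    0.0,  -A0/2],
               [0.0,    0.0,   A1/2],
               [-A0/2,  A1/2,  A1  ]])
Q1 = Q2 = np.array([[0.5 * d]])
S1 = S2 = np.zeros((1, 1))
F1, F2, P1, P2 = markov_perfect(A, B1, B2, R1, R2, Q1, Q2, S1, S2)
```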
Judd followed that procedure in the following example.

7.5.1 An example

This section describes the Markov perfect equilibrium of an infinite-horizon linear-quadratic game proposed by Kenneth Judd (1990). The equilibrium is computed by iterating to convergence on the pair of Riccati equations defined by the choice problems of two firms. Each firm solves a linear-quadratic optimization problem, taking as given and known the sequence of linear decision rules used by the other player. The firms set prices and quantities of two goods interrelated through their demand curves. There is no uncertainty. Relevant variables are defined as follows:

$I_{it}$ = inventories of firm $i$ at beginning of $t$
$q_{it}$ = production of firm $i$ during period $t$
$p_{it}$ = price charged by firm $i$ during period $t$
$S_{it}$ = sales made by firm $i$ during period $t$
$E_{it}$ = costs of production of firm $i$ during period $t$
$C_{it}$ = costs of carrying inventories for firm $i$ during $t$

The firms' cost functions are

$$C_{it} = c_{i1} + c_{i2} I_{it} + .5 c_{i3} I_{it}^2$$
$$E_{it} = e_{i1} + e_{i2} q_{it} + .5 e_{i3} q_{it}^2$$

where $e_{ij}, c_{ij}$ are positive scalars. Inventories obey the laws of motion

$$I_{i,t+1} = (1 - \delta) I_{it} + q_{it} - S_{it}.$$

Demand is governed by the linear schedule

$$S_t = d p_t + B$$

where $S_t = [\,S_{1t}\ \ S_{2t}\,]'$, $p_t = [\,p_{1t}\ \ p_{2t}\,]'$, $d$ is a $(2 \times 2)$ negative definite matrix, and $B$ is a vector of constants. Firm $i$ maximizes the undiscounted sum

$$\lim_{T \to \infty} \frac{1}{T} \sum_{t=0}^{T} \left( p_{it} S_{it} - E_{it} - C_{it} \right)$$

by choosing a decision rule for price and quantity of the form

$$u_{it} = -F_i x_t$$

where $u_{it} = [\,p_{it}\ \ q_{it}\,]'$, and the state is $x_t = [\,I_{1t}\ \ I_{2t}\ \ 1\,]'$.

In the web site for the book, we supply a Matlab program nnash.m that computes a Markov perfect equilibrium of the linear-quadratic dynamic game in which player $i$ maximizes

$$-\sum_{t=0}^{\infty} \left\{ x_t' r_i x_t + 2 x_t' w_i u_{it} + u_{it}' q_i u_{it} + u_{jt}' s_i u_{jt} + 2 u_{jt}' m_i u_{it} \right\}$$

subject to the law of motion

$$x_{t+1} = a x_t + b_1 u_{1t} + b_2 u_{2t}$$

and a control law $u_{jt} = -f_j x_t$ for the other player; here $a$ is $n \times n$; $b_1$ is $n \times k_1$; $b_2$ is $n \times k_2$; $r_1$ is $n \times n$; $r_2$ is $n \times n$; $q_1$ is $k_1 \times k_1$; $q_2$ is $k_2 \times k_2$; $s_1$ is $k_2 \times k_2$; $s_2$ is $k_1 \times k_1$; $w_1$ is $n \times k_1$; $w_2$ is $n \times k_2$; $m_1$ is $k_2 \times k_1$; and $m_2$ is $k_1 \times k_2$. The equilibrium of Judd's model can be computed by filling in the matrices appropriately. A Matlab tutorial judd.m uses nnash.m to compute the equilibrium.
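To connect this richer objective back to section 7.5, here is a derivation of how the cross-product matrices enter; it is ours, presented as a sketch rather than as the program's documented algorithm. With the other player's rule $u_{jt} = -f_j x_t$ fixed, write $\Lambda_i \equiv a - b_j f_j$ and let $P_i^{+}$ denote next period's matrix in the quadratic form for player $i$'s (negative) value. Player $i$'s first-order condition gives

$$f_i = \left( q_i + b_i' P_i^{+} b_i \right)^{-1} \left( b_i' P_i^{+} \Lambda_i + w_i' - m_i' f_j \right),$$

and the Riccati update becomes

$$P_i = r_i + f_j' s_i f_j + f_i' q_i f_i - (w_i - f_j' m_i) f_i - f_i' (w_i' - m_i' f_j) + (\Lambda_i - b_i f_i)' P_i^{+} (\Lambda_i - b_i f_i).$$

Setting $w_i = m_i = 0$ recovers equations (7.5.4) and (7.5.5).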
7.6 Concluding remarks

This chapter has introduced two equilibrium concepts and illustrated how dynamic programming algorithms are embedded in each. For the linear models we have used as illustrations, the dynamic programs become optimal linear regulators, making it tractable to compute equilibria even for large state spaces. We chose to define these equilibrium concepts in partial equilibrium settings that are more natural for microeconomic applications than for macroeconomic ones. In the next chapter, we use the recursive equilibrium concept to analyze a general equilibrium in an endowment economy. That setting serves as a natural starting point for addressing various macroeconomic issues.

Exercises

These problems aim to teach about (1) mapping problems into recursive forms, (2) different equilibrium concepts, and (3) using Matlab. Computer programs are available from the web site for the book (ftp://zia.stanford.edu/pub/sargent/webdocs/matlab).

Exercise 7.1  A competitive firm

A competitive firm seeks to maximize

$$\sum_{t=0}^{\infty} \beta^t R_t \tag{1}$$

where $\beta \in (0,1)$, and time-$t$ revenue $R_t$ is

$$R_t = p_t y_t - .5 d (y_{t+1} - y_t)^2, \qquad d > 0, \tag{2}$$

where $p_t$ is the price of output, and $y_t$ is the time-$t$ output of the firm. Here $.5 d (y_{t+1} - y_t)^2$ measures the firm's cost of adjusting its rate of output. The firm starts with a given initial level of output $y_0$. The price lies on the market demand curve

$$p_t = A_0 - A_1 Y_t, \qquad A_0, A_1 > 0 \tag{3}$$

where $Y_t$ is the market level of output, which the firm takes as exogenous, and which the firm believes follows the law of motion

$$Y_{t+1} = H_0 + H_1 Y_t, \tag{4}$$

with $Y_0$ as a fixed initial condition.

a. Formulate the Bellman equation for the firm's problem.

b. Formulate the firm's problem as a discounted optimal linear regulator problem, being careful to describe all of the objects needed. What is the state for the firm's problem?

c. Use the Matlab program olrp.m to solve the firm's problem for the following parameter values: $A_0 = 100$, $A_1 = .05$, $\beta = .95$, $d = 10$, $H_0 = 95.5$, and $H_1 = .95$. Express the solution of the firm's problem in the form

$$y_{t+1} = h_0 + h_1 y_t + h_2 Y_t, \tag{5}$$

giving values for the $h_j$'s.

d. If there were $n$ identical competitive firms all behaving according to equation (5), what would equation (5) imply for the actual law of motion (4) for the market supply $Y$?

e. Formulate the Euler equation for the firm's problem.

Exercise 7.2  Rational expectations

Now assume that the firm in exercise 7.1 is "representative." We implement this idea by setting $n = 1$. In equilibrium, we will require that $y_t = Y_t$, but we don't want to impose this condition at the stage that the firm is optimizing (because we want to retain competitive behavior). Define a rational expectations equilibrium to be a pair of numbers $H_0, H_1$ such that if the representative firm solves the problem ascribed to it in exercise 7.1, then the firm's optimal behavior given by equation (5) implies that $y_t = Y_t$ for all $t \geq 0$.

a. Use the program that you wrote for exercise 7.1 to determine which, if any, of the following pairs $(H_0, H_1)$ is a rational expectations equilibrium: (i) $(94.0888, .9211)$; (ii) $(93.22, .9433)$; (iii) $(95.08187459215024, .95245906270392)$.

b. Describe an iterative algorithm by which the program that you wrote for exercise 7.1 might be used to compute a rational expectations equilibrium. (You are not being asked actually to use the algorithm you are suggesting.)

Exercise 7.3  Maximizing welfare

A planner seeks to maximize the welfare criterion

$$\sum_{t=0}^{\infty} \beta^t S_t, \tag{6}$$

where $S_t$ is "consumer surplus plus producer surplus," defined to be

$$S_t = S(Y_t, Y_{t+1}) = \int_0^{Y_t} (A_0 - A_1 x)\, dx - .5 d (Y_{t+1} - Y_t)^2.$$

a. Formulate the planner's Bellman equation.

b. Formulate the planner's problem as an optimal linear regulator, and solve it using the Matlab program olrp.m. Represent the solution in the form $Y_{t+1} = s_0 + s_1 Y_t$.

c. Compare your answer in part b with your answer to part a of exercise 7.2.

Exercise 7.4  Monopoly

A monopolist faces the industry demand curve (3) and chooses $Y_t$ to maximize $\sum_{t=0}^{\infty} \beta^t R_t$, where $R_t = p_t Y_t - .5 d (Y_{t+1} - Y_t)^2$ and where $Y_0$ is given.

a. Formulate the firm's Bellman equation.

b. For the parameter values listed in exercise 7.1, formulate and solve the firm's problem using olrp.m.

c. Compare your answer in part b with the answer you obtained to part b of exercise 7.3.

Exercise 7.5  Duopoly

An industry consists of two firms that jointly face the industry-wide demand curve (3), where now $Y_t = y_{1t} + y_{2t}$. Firm $i = 1, 2$ maximizes

$$\sum_{t=0}^{\infty} \beta^t R_{it} \tag{7}$$

where $R_{it} = p_t y_{it} - .5 d (y_{i,t+1} - y_{it})^2$.

a. Define a Markov perfect equilibrium for this industry.

b. Formulate the Bellman equation for each firm.

c. Use the Matlab program nash.m to compute an equilibrium, assuming the parameter values listed in exercise 7.1.
Exercise 7.6  Self-control

This is a model of a human who has time-inconsistent preferences, of a type proposed by Phelps and Pollak (1968) and used by Laibson (1994). (See Gul and Pesendorfer (2000) for a single-agent recursive representation of preferences exhibiting temptation and self-control.) The human lives from $t = 0, \ldots, T$. Think of the human as actually consisting of $T + 1$ personalities, one for each period. Each personality is a distinct agent (i.e., a distinct utility function and constraint set). Personality $T$ has preferences ordered by $u(c_T)$ and personality $t < T$ has preferences that are ordered by

$$u(c_t) + \delta \sum_{j=1}^{T-t} \beta^j u(c_{t+j}), \tag{7.1}$$

where $u(\cdot)$ is a twice continuously differentiable, increasing, and strictly concave function of consumption of a single good; $\beta \in (0, 1)$, and $\delta \in (0, 1]$. When $\delta < 1$, preferences of the sequence of personalities are time-inconsistent (that is, not recursive). At each $t$, let there be a savings technology described by

$$k_{t+1} + c_t \leq f(k_t), \tag{7.2}$$

where $f$ is a production function with $f' > 0$ and $f'' \leq 0$.

a. Define a Markov perfect equilibrium for the $T + 1$ personalities.

b. Argue that the Markov perfect equilibrium can be computed by iterating on the following functional equations:

$$V_{j+1}(k) = \max_{c} \left\{ u(c) + \beta \delta W_j(k') \right\} \tag{7.3a}$$
$$W_{j+1}(k) = u[c_{j+1}(k)] + \beta W_j[f(k) - c_{j+1}(k)] \tag{7.4}$$

where $c_{j+1}(k)$ is the maximizer of the right side of (7.3a) for $j + 1$, starting from $W_0(k) = u[f(k)]$. Here $W_j(k)$ is the value of $u(c_{T-j}) + \beta u(c_{T-j+1}) + \cdots + \beta^j u(c_T)$, taking the decision rules $c_h(k)$ as given for $h = 0, 1, \ldots, j$.

c. State the optimization problem of the time-0 person who is given the power to dictate the choices of all subsequent persons. Write the Bellman equations for this problem. The time-0 person is said to have a commitment technology for "self-control" in this problem.
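To see the iteration in part b at work numerically, here is a small, self-contained sketch; the functional forms and parameters ($u = \log$, $f(k) = k^{\alpha}$, and the grid) are illustrative choices of ours, not part of the exercise.

```python
import numpy as np

# Illustrative (hypothetical) primitives: log utility, Cobb-Douglas f.
alpha, beta, delta = 0.36, 0.95, 0.7
u = np.log
def f(k):
    return k ** alpha

kgrid = np.linspace(1e-4, 1.5, 400)      # grid for k; k' is chosen on it too
W = u(f(kgrid))                          # W_0(k) = u[f(k)]: personality T

for j in range(40):                      # backward induction on (7.3a)
    c = f(kgrid)[:, None] - kgrid[None, :]    # c for each (k, k') pair
    feasible = c > 0
    objective = np.where(feasible,
                         u(np.where(feasible, c, 1.0)) + beta * delta * W[None, :],
                         -np.inf)
    choice = objective.argmax(axis=1)    # index of maximizing k' for each k
    c_star = c[np.arange(kgrid.size), choice]
    # Equation (7.4): W_{j+1}(k) = u[c_{j+1}(k)] + beta * W_j[f(k) - c_{j+1}(k)]
    W = u(c_star) + beta * W[choice]

# c_star now approximates a stationary Markov perfect consumption rule.
```

Comparing the resulting rule across values of $\delta$ (for example, $\delta = 1$ versus $\delta < 1$) illustrates the time inconsistency that the commitment problem in part c is designed to address.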
