Date of Original Version

7-2011

Type

Working Paper

Rights Management

All Rights Reserved

Abstract or Description

In two influential papers, Abreu, Pearce and Stacchetti (APS) (1986, 1990) developed set-valued dynamic programming techniques for solving certain classes of repeated games. They showed that, for these games, the set of sequential equilibrium payoffs can be computed as a fixed point of an operator analogous to the Bellman operator in dynamic programming. These methods can be extended to cover a large class of dynamic games that arise naturally in industrial organization, macroeconomics, and public finance. In the dynamic case, the object of interest is a correspondence that maps a physical state variable to sets of equilibrium payoffs. APS-based methods imply that the equilibrium payoff correspondence is a fixed point of a monotone operator and can be obtained via iteration of this operator. In principle, this iterative procedure can be implemented numerically to solve for the equilibrium payoff correspondence. However, its practical numerical implementation requires an approximation scheme for correspondences that is efficient and consistent with the underlying structure of the monotone operator. In the first part of this paper, we provide such a scheme.
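To illustrate the kind of iteration the abstract describes (though not the paper's own approximation scheme), the following sketch applies an APS-style operator to a repeated prisoner's dilemma. The candidate payoff set is approximated by a finite grid, each application of the operator keeps only payoffs supportable by some action profile and continuation value subject to one-shot incentive constraints (with the worst point of the current set as the punishment), and the set is iterated until it stops shrinking. All payoff numbers, the discount factor, and the grid resolution are illustrative assumptions.

```python
import itertools

# Stage-game payoffs for a prisoner's dilemma (illustrative numbers).
ACTIONS = [0, 1]  # 0 = cooperate, 1 = defect
U = {  # (a1, a2) -> (u1, u2)
    (0, 0): (2.0, 2.0),
    (0, 1): (-1.0, 3.0),
    (1, 0): (3.0, -1.0),
    (1, 1): (0.0, 0.0),
}
DELTA = 0.8   # discount factor (assumed)
STEP = 0.25   # grid resolution for the payoff space (assumed)

def snap(v):
    """Round a payoff vector onto the grid (the approximation step)."""
    return tuple(round(x / STEP) * STEP for x in v)

def best_deviation(a, i):
    """Player i's best one-shot deviation payoff against a's other component."""
    return max(U[(d, a[1]) if i == 0 else (a[0], d)][i] for d in ACTIONS)

def apply_B(W):
    """One application of an APS-style operator on a finite payoff set W."""
    punish = [min(w[i] for w in W) for i in (0, 1)]  # worst continuation payoff
    out = set()
    for a, w in itertools.product(U, W):
        values = [(1 - DELTA) * U[a][i] + DELTA * w[i] for i in (0, 1)]
        if all(
            values[i] >= (1 - DELTA) * best_deviation(a, i) + DELTA * punish[i]
            for i in (0, 1)
        ):
            out.add(snap(tuple(values)))
    return out

# Start from a grid covering the feasible payoff box; intersecting with the
# current set keeps the iteration monotone decreasing, so it must terminate.
lo, hi = -1.0, 3.0
grid = [lo + k * STEP for k in range(int((hi - lo) / STEP) + 1)]
W = {(x, y) for x in grid for y in grid}
while True:
    W_next = apply_B(W) & W
    if W_next == W:
        break
    W = W_next
```

At the fixed point, `W` is a grid approximation of the equilibrium payoff set; for these parameters it contains both the static Nash payoff (0, 0) and the cooperative payoff (2, 2), since mutual cooperation is supportable at this discount factor.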

Included in

Economics Commons
