Markov strategy

In game theory, a Markov strategy is one that depends only on state variables that summarize the history of the game, rather than on the full history itself.[1] For instance, the state variable can be the current play in a repeated game, or it can be some summary of a recent sequence of play.
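As an illustration (not drawn from the cited source), the following Python sketch contrasts a strategy that conditions only on a single state variable, taken here to be the opponent's most recent action, with one written as a function of the full history; the names tit_for_tat and majority_punisher are illustrative only. Whether a strategy counts as Markov depends on the choice of state: the history-based strategy below would itself be Markov under a richer state variable, such as a running count of defections.

    from typing import Optional, List

    COOPERATE, DEFECT = "C", "D"

    def tit_for_tat(state: Optional[str]) -> str:
        # Markov strategy: the action is a function of the state variable
        # alone, here the opponent's action in the previous round.
        if state is None:        # first round, no previous play to summarize
            return COOPERATE
        return state             # copy the opponent's last move

    def majority_punisher(history: List[str]) -> str:
        # For contrast: written as a function of the opponent's entire past
        # play, not of a fixed state variable.
        if history and history.count(DEFECT) > len(history) / 2:
            return DEFECT
        return COOPERATE

    # After the history C, D the Markov strategy needs only the last element,
    # while the history-based strategy rereads the whole list.
    print(tit_for_tat("D"))                 # -> "D"
    print(majority_punisher(["C", "D"]))    # -> "C"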

A profile of Markov strategies is a Markov perfect equilibrium if it is a Nash equilibrium in every state of the game.
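One common way to state this condition formally (the notation is an assumption, not taken from the cited source) is to write V_i(s; σ) for player i's continuation value from state s under the strategy profile σ. A profile σ* of Markov strategies is then a Markov perfect equilibrium if

    V_i(s; \sigma_i^*, \sigma_{-i}^*) \ \ge\ V_i(s; \sigma_i, \sigma_{-i}^*)
    \qquad \text{for every player } i,\ \text{every state } s,\ \text{and every alternative strategy } \sigma_i,

so that no player can gain by deviating at any state of the game.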

References

  1. ^ Fudenberg, Drew; Tirole, Jean (1991). Game Theory. Cambridge, MA: The MIT Press. pp. 501–40. ISBN 0-262-06141-4.