Markov strategy

From Wikipedia, the free encyclopedia
Strategy which only depends on the current state of a game

In game theory, a Markov strategy[1] is a strategy that depends only on the current state of the game, rather than the full history of past actions. The state summarizes all relevant past information needed for decision-making. For example, in a repeated game, the state could be the outcome of the most recent round or any summary statistic that captures the strategic situation or recent sequence of play.[2]
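
A minimal formalization (the notation below is illustrative rather than drawn from the cited references): in a stochastic game with state space $S$ and action sets $A_i$, a general behavior strategy $\beta_i$ for player $i$ may condition on the entire history, whereas a Markov strategy factors through the current state alone:

$$\beta_i(h_t) = \sigma_i(s_t) \quad \text{for some } \sigma_i : S \to \Delta(A_i) \text{ and every history } h_t = (s_0, a_0, \ldots, s_t),$$

where $\Delta(A_i)$ denotes the set of probability distributions (mixed actions) over $A_i$.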

A profile of Markov strategies forms a Markov perfect equilibrium if it constitutes a Nash equilibrium in every possible state of the game. Markov strategies are widely used in dynamic and stochastic games, where the state evolves over time according to probabilistic rules.
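
As an illustration (a minimal sketch, not taken from the cited references), the following Python fragment contrasts a Markov strategy with a history-dependent one in a repeated prisoner's dilemma, where the state is taken to be the opponent's most recent action:

    COOPERATE, DEFECT = "C", "D"

    def tit_for_tat(state):
        # Markov strategy: the action depends only on the current state
        # (the opponent's most recent move), not on the full history.
        return DEFECT if state == DEFECT else COOPERATE

    def grim_trigger(history):
        # History-dependent strategy: defects forever once the opponent has
        # ever defected, so it needs the full history (or an enlarged state
        # recording whether punishment has been triggered).
        return DEFECT if DEFECT in history else COOPERATE

    # Against an opponent who defects once and then returns to cooperation,
    # the two strategies first differ in round 4.
    opponent_moves = [COOPERATE, DEFECT, COOPERATE, COOPERATE]
    history = []
    for move in opponent_moves:
        state = history[-1] if history else None  # opponent's previous action
        print(tit_for_tat(state), grim_trigger(history))
        history.append(move)

Tit-for-tat is a Markov strategy with respect to this state, since its choice depends only on the opponent's previous move; grim trigger is not, because it must remember whether a defection has ever occurred.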

Although the concept is named after Andrey Markov due to its reliance on the Markov property[3] (the idea that only the current state matters), the strategy concept itself was developed much later, in the context of dynamic game theory.

References

  1. ^ "First Links in the Markov Chain". American Scientist. Retrieved 6 February 2017.
  2. ^ Fudenberg, Drew (1995). Game Theory. Cambridge, MA: The MIT Press. pp. 501–40. ISBN 0-262-06141-4.
  3. ^ Sack, Harald (14 June 2022). "Andrey Markov and the Markov Chains". SciHi Blog. Retrieved 23 November 2017.


This game theory article is a stub. You can help Wikipedia by expanding it.
