- The Markov property states that the future depends only on the present and not on the past.
- An MDP is an extension of the Markov chain; it provides a mathematical framework for modeling decision-making situations. Almost all RL problems can be modeled as an MDP.
- Refer to the section Discount factor.
- The discount factor decides how much importance we give to future rewards relative to immediate rewards.
- We use the Bellman equation for solving the MDP.
- Refer to the section Deriving the Bellman equation for value and Q functions.
- The value function specifies the goodness of a state, and the Q function specifies the goodness of an action in that state.
- Refer to the sections Value iteration and Policy iteration.
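The points above can be tied together in a minimal sketch of value iteration: repeatedly applying the Bellman optimality backup, with the discount factor weighting future rewards, until the value function converges. The two-state MDP below (its states, actions, transition probabilities, and rewards) is invented purely for illustration.

```python
GAMMA = 0.9  # discount factor: importance of future vs. immediate rewards

# A toy MDP: transitions[state][action] = list of (probability, next_state, reward).
# These numbers are made up for the example.
transitions = {
    "s0": {
        "a0": [(1.0, "s0", 0.0)],
        "a1": [(0.8, "s1", 1.0), (0.2, "s0", 0.0)],
    },
    "s1": {
        "a0": [(1.0, "s1", 2.0)],
        "a1": [(1.0, "s0", 0.0)],
    },
}

def value_iteration(transitions, gamma=GAMMA, theta=1e-9):
    """Iterate the Bellman optimality equation until the value function converges."""
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, actions in transitions.items():
            # Q(s, a) = sum over outcomes of P(s'|s,a) * (r + gamma * V(s'))
            q_values = [
                sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in actions.values()
            ]
            best = max(q_values)  # V(s) = max_a Q(s, a)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < theta:  # stop when the largest update is negligible
            break
    return V

V = value_iteration(transitions)
print(V)
```

Here the Q values computed inside the loop measure the goodness of each action in a state, and taking their maximum gives the value of the state itself; a policy could then be read off greedily from those Q values.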