- The Markov property states that the future depends only on the present and not on the past.
- An MDP is an extension of the Markov chain: it provides a mathematical framework for modeling sequential decision-making problems. Almost all RL problems can be modeled as MDPs.
- Refer to the section Discount factor.
- The discount factor decides how much importance we give to future rewards relative to immediate rewards.
- We use the Bellman equation for solving the MDP.
- Refer to the section Deriving the Bellman equation for value and Q functions.
- The value function specifies the goodness of a state, and the Q function specifies the goodness of an action in that state.
- Refer to the sections Value iteration and Policy iteration.
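The points above (MDP, discount factor, Bellman optimality backup, value iteration, and extracting a greedy policy from Q values) can be sketched together in a few lines of code. The toy 2-state, 2-action MDP below is a hypothetical example invented for illustration; the transition format `P[s][a] = [(probability, next_state, reward), ...]` and all numbers are assumptions, not taken from the text.

```python
import numpy as np

# Hypothetical toy MDP: P[s][a] is a list of (probability, next_state, reward).
P = {
    0: {0: [(1.0, 0, 0.0)],                   # state 0, action 0: stay, no reward
        1: [(0.8, 1, 5.0), (0.2, 0, 0.0)]},   # action 1: usually reach state 1 for +5
    1: {0: [(1.0, 1, 1.0)],                   # state 1, action 0: stay, +1 each step
        1: [(1.0, 0, 0.0)]},                  # action 1: go back to state 0
}
gamma = 0.9                 # discount factor: weight of future vs. immediate rewards
n_states, n_actions = 2, 2

# Value iteration: repeatedly apply the Bellman optimality backup until convergence.
V = np.zeros(n_states)
for _ in range(1000):
    Q = np.zeros((n_states, n_actions))
    for s in range(n_states):
        for a in range(n_actions):
            # Q(s, a) = sum over transitions of p * (r + gamma * V(s'))
            Q[s, a] = sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
    new_V = Q.max(axis=1)   # V(s) = max_a Q(s, a)
    if np.max(np.abs(new_V - V)) < 1e-8:
        break
    V = new_V

policy = Q.argmax(axis=1)   # greedy policy from the converged Q values
print("V =", V, "policy =", policy)
```

With this particular reward structure, the converged policy picks action 1 in both states, cycling between them to collect the +5 reward; changing the rewards or the discount factor changes which behavior value iteration prefers.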