In this chapter, we dove into the ML-Agents framework by developing a heuristic brain that uses an ML technique called Reinforcement Learning (RL) to solve several fundamental learning problems. We first explored the classic multi-armed bandit problem as a way to introduce the core concepts of RL. We then expanded on this problem by adding context, or a sense of state, which required us to modify our Value function to account for state, turning it into a Q function. While this algorithm solved our simple learning problem well, it was not sufficient for more complex problems with delayed rewards. Introducing delayed rewards required us to look at the Bellman equation and understand how we could discount rewards over an agent's steps, providing the agent with Q-value breadcrumbs as a way for it to find its way home. Finally, we loaded...
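To make that update rule concrete, here is a minimal tabular Q-learning sketch in C#. It is not the chapter's exact code: the state and action counts, learning rate, and discount factor are illustrative placeholders. The key line is the Bellman-style target, which combines the immediate reward with the discounted value of the best action in the next state.

```csharp
using System;

// A minimal tabular Q-learning sketch (illustrative only, not the chapter's code).
// StateCount, ActionCount, LearningRate, and Gamma are assumed placeholder values.
public class QLearningSketch
{
    const int StateCount = 4;      // hypothetical number of states
    const int ActionCount = 4;     // hypothetical number of arms/actions
    const float LearningRate = 0.1f;
    const float Gamma = 0.9f;      // discount factor from the Bellman equation

    float[,] q = new float[StateCount, ActionCount];

    // Bellman-style update: move Q(s, a) toward r + gamma * max over a' of Q(s', a').
    public void Update(int state, int action, float reward, int nextState)
    {
        float bestNext = float.MinValue;
        for (int a = 0; a < ActionCount; a++)
            bestNext = Math.Max(bestNext, q[nextState, a]);

        float target = reward + Gamma * bestNext;
        q[state, action] += LearningRate * (target - q[state, action]);
    }
}
```

Because each state's Q values are pulled toward the discounted value of the best next state, reward gradually propagates backward along the agent's path, which is exactly the breadcrumb effect described above.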