The Vehicle Routing Problem (VRP) is a classic distribution and transport problem: it consists of optimizing the use of a fleet of vehicles with limited capacity to pick up and deliver goods or people at geographically distributed stations. Managing these operations well can significantly reduce costs.

Temporal difference (TD) learning algorithms are based on reducing the differences between the estimates an agent makes at different times. TD learning combines ideas from the Monte Carlo (MC) method and dynamic programming (DP): like MC, it can learn directly from raw experience, without a model of the environment's dynamics; like DP, it updates estimates partly on the basis of other learned estimates, without waiting for the final outcome (bootstrapping). In this chapter, we will learn how to use TD learning algorithms to...
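To make the bootstrapping idea concrete, here is a minimal sketch of TD(0) value estimation on a toy five-state random walk. The environment, state numbering, and all parameter values are illustrative assumptions, not anything prescribed by the text; the point is the update rule, in which each state's value estimate moves toward the reward plus the current estimate of the next state's value, without waiting for the episode to finish.

```python
import random

def td0_value_estimation(num_episodes=5000, alpha=0.1, gamma=1.0, seed=0):
    """TD(0) value estimation on a 5-state random walk (hypothetical toy task).

    States 1..5; each episode starts in state 3 and moves left or right
    uniformly at random. Stepping off the right end (past state 5) gives
    reward 1; stepping off the left end gives reward 0. The true value of
    state s under this random policy is s / 6.
    """
    rng = random.Random(seed)
    V = {s: 0.5 for s in range(1, 6)}  # initial value estimates
    for _ in range(num_episodes):
        s = 3
        while True:
            s_next = s + rng.choice([-1, 1])
            if s_next == 0:        # terminated on the left
                reward, v_next, done = 0.0, 0.0, True
            elif s_next == 6:      # terminated on the right
                reward, v_next, done = 1.0, 0.0, True
            else:
                reward, v_next, done = 0.0, V[s_next], False
            # TD(0) update: bootstrap from the estimate of the next state
            V[s] += alpha * (reward + gamma * v_next - V[s])
            if done:
                break
            s = s_next
    return V
```

Because the update uses `V[s_next]` rather than the episode's eventual return, learning happens after every single step, which is exactly the contrast with the Monte Carlo method described above.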