The Vehicle Routing Problem (VRP) is a classic distribution and transport problem: it consists of optimizing the use of a fleet of vehicles with limited capacity to pick up and deliver goods or people across geographically distributed stations. Managing these operations well can significantly reduce costs. Temporal difference (TD) learning algorithms work by reducing the differences between the estimates an agent makes at different points in time. TD learning combines ideas from the Monte Carlo (MC) method and dynamic programming (DP): like MC, it can learn directly from raw experience, without a model of the environment's dynamics; like DP, it updates its estimates partly on the basis of other learned estimates, without waiting for the final outcome (bootstrapping). In this chapter, we will learn how to use TD learning algorithms to...
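The core of TD learning described above is the one-step update V(s) ← V(s) + α[r + γV(s') − V(s)], where the target bootstraps on the current estimate of the next state's value. The following is a minimal TD(0) sketch on a hypothetical toy environment (a 5-state chain with a reward of 1 on reaching the terminal state, under a random policy); the environment, state count, and hyperparameters are illustrative assumptions, not taken from the chapter:

```python
import random

def td0_chain(alpha=0.1, gamma=0.9, episodes=2000, n_states=5):
    """TD(0) state-value estimation on a toy chain MDP (hypothetical example).

    States are 0..n_states-1; the last state is terminal and yields reward 1
    on entry. The behavior policy moves left or right at random.
    """
    V = [0.0] * n_states  # value estimates, terminal state stays 0
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Random policy: step left (clipped at 0) or right with equal probability.
            s_next = max(s - 1, 0) if random.random() < 0.5 else s + 1
            r = 1.0 if s_next == n_states - 1 else 0.0
            # TD(0) update: the target r + gamma * V[s_next] bootstraps on the
            # current estimate of the successor state, unlike MC, which would
            # wait for the episode's full return.
            V[s] += alpha * (r + gamma * V[s_next] - V[s])
            s = s_next
    return V
```

After enough episodes, the estimates rise toward the terminal state, reflecting the discounted proximity of the reward; no transition model was ever consulted, only sampled transitions.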