An interesting feature of a DQN is the utilization of a second network during the training procedure, referred to as the target network. This second network is used to generate the target Q-values that are needed to compute the loss during training. Why not use just one network for both estimations, that is, for choosing the action a to take as well as for updating the Q-network? The issue is that, at every step of training, the Q-network's values shift, and if we use a constantly changing set of values to update our network, the estimates can easily become unstable: the network can fall into feedback loops between the target and estimated Q-values. In order to mitigate this instability, the target network's weights are held fixed and only slowly updated toward the primary Q-network's values. In this way, training can proceed in a more stable manner.
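The following is a minimal sketch of how such a target-network update can be implemented. The parameter containers, constants (TAU, UPDATE_EVERY), and helper functions are illustrative assumptions, not taken from the original text; the idea is simply that the target weights either track the primary Q-network slowly (a soft, Polyak-style update) or are copied over only periodically (a hard update):

```python
import numpy as np

# Illustrative parameter containers: each network is represented here
# as a list of NumPy weight arrays (hypothetical shapes).
q_weights = [np.random.randn(4, 16), np.random.randn(16, 2)]
target_weights = [w.copy() for w in q_weights]

TAU = 0.001          # soft-update rate (assumed value)
UPDATE_EVERY = 1000  # steps between hard copies (assumed value)

def soft_update(target, online, tau=TAU):
    """Slowly move the target-network weights toward the online Q-network."""
    for t, o in zip(target, online):
        t[...] = (1.0 - tau) * t + tau * o

def hard_update(target, online):
    """Copy the online Q-network weights into the target network wholesale."""
    for t, o in zip(target, online):
        t[...] = o

# Inside the training loop, one of the two schemes is typically used:
# either a periodic hard copy ...
#   if step % UPDATE_EVERY == 0:
#       hard_update(target_weights, q_weights)
# ... or a soft update applied at every step:
soft_update(target_weights, q_weights)
```

Either scheme decouples the targets from the rapidly changing online estimates, which is what breaks the feedback loop described above.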