An interesting feature of the DQN is its use of a second network during the training procedure, referred to as the target network. This second network generates the target Q-values that are used to compute the loss function during training. Why not simply use one network for both estimations, that is, for choosing the action a to take as well as for updating the Q-network? The issue is that, at every step of training, the Q-network's values change, and if we use a constantly changing set of values to update our network, the estimations can easily become unstable: the network can fall into feedback loops between the target and estimated Q-values. In order to mitigate this instability, the target network's weights are held fixed and only periodically (or slowly) updated to the primary Q-network's values. In this way, training can proceed in a more stable manner.
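The sketch below illustrates the idea under some assumptions (a small fully connected Q-network in PyTorch, hypothetical batch tensors for `states`, `actions`, `rewards`, `next_states`, and `dones`); it is a minimal illustration of the target-network mechanism, not a complete DQN implementation:

```python
import copy
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """A small Q-network mapping states to one Q-value per action (assumed architecture)."""
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64),
            nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)

q_net = QNetwork(state_dim=4, n_actions=2)   # primary Q-network (trained every step)
target_net = copy.deepcopy(q_net)            # target network starts as an exact copy
target_net.eval()

optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.99  # discount factor

def train_step(states, actions, rewards, next_states, dones):
    # Q-values of the actions actually taken, from the primary network
    q_values = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)

    # Target Q-values come from the *frozen* target network,
    # so the regression target does not shift at every gradient step
    with torch.no_grad():
        next_q = target_net(next_states).max(dim=1).values
        targets = rewards + gamma * next_q * (1.0 - dones)

    loss = nn.functional.mse_loss(q_values, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

def update_target(tau=1.0):
    # Copy the primary weights into the target network.
    # tau=1.0 is a hard (periodic) update; tau < 1 gives a slow, soft update.
    for p, tp in zip(q_net.parameters(), target_net.parameters()):
        tp.data.copy_(tau * p.data + (1.0 - tau) * tp.data)
```

In practice, `update_target()` would be called either every N training steps with `tau=1.0` or every step with a small `tau` (for example, 0.005), both of which keep the targets changing much more slowly than the primary network.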