- Is A3C an on-policy or off-policy algorithm?
- Why is the Shannon entropy term used?
- What are the problems with using a large number of worker threads?
- Why is softmax used in the policy neural network?
- Why do we need an advantage function? (For this and the entropy and softmax questions above, see the loss sketch after this list.)
- This is left as an exercise: for the LunarLander problem, repeat the training without reward shaping and see whether the agent learns faster or slower than what we saw in this chapter. (A hypothetical shaping wrapper is sketched below.)
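As a hint for the entropy, softmax, and advantage questions above, here is a minimal PyTorch sketch of an A2C-style loss. It is an illustrative assumption, not the chapter's exact code: the function name `a2c_loss`, the `entropy_beta` coefficient, and the tensor shapes are all placeholders.

```python
import torch
import torch.nn.functional as F

def a2c_loss(logits, values, actions, returns, entropy_beta=0.01):
    # Softmax turns raw network outputs into a valid probability
    # distribution over discrete actions (non-negative, sums to 1).
    log_probs = F.log_softmax(logits, dim=1)
    probs = log_probs.exp()

    # Advantage: how much better the observed return was than the
    # critic's baseline V(s). detach() stops the policy gradient
    # from flowing into the value head.
    advantage = returns - values.detach()

    # Policy-gradient term: raise the log-probability of actions
    # with positive advantage, lower it for negative advantage.
    taken_log_probs = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    policy_loss = -(advantage * taken_log_probs).mean()

    # Critic regression toward the observed returns.
    value_loss = F.mse_loss(values, returns)

    # Shannon entropy of the policy. Subtracting it from the total
    # loss penalizes premature determinism and keeps exploration alive.
    entropy = -(probs * log_probs).sum(dim=1).mean()

    return policy_loss + value_loss - entropy_beta * entropy

# Toy usage: a batch of 8 states with 4 discrete actions.
logits = torch.randn(8, 4)
values = torch.randn(8)
actions = torch.randint(0, 4, (8,))
returns = torch.randn(8)
print(a2c_loss(logits, values, actions, returns))
```

Note how three of the questions meet in one expression: softmax defines the action distribution, the advantage scales the policy-gradient term, and the entropy bonus regularizes it.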
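For the reward-shaping exercise, the sketch below shows where a shaping wrapper plugs in and how to drop it for the comparison run. The wrapper is hypothetical: `ScaledReward` and `scale` are made-up names, and the shaping actually used in the chapter may differ.

```python
import gym

class ScaledReward(gym.RewardWrapper):
    # Hypothetical shaping: rescale the environment's reward.
    # The chapter's actual shaping may be different.
    def __init__(self, env, scale=0.1):
        super().__init__(env)
        self.scale = scale

    def reward(self, reward):
        return reward * self.scale

# Environment id per classic gym; adjust to your installed version.
shaped_env = ScaledReward(gym.make("LunarLander-v2"))  # run WITH shaping
plain_env = gym.make("LunarLander-v2")                 # run WITHOUT shaping
```

Training one agent on `shaped_env` and one on `plain_env`, then comparing learning curves, is the whole experiment the exercise asks for.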