The A2C method
The first method that we will apply to our walking robot problem is A2C, which we experimented with in Part 3 of the book. This choice is natural, as A2C is straightforward to adapt to the continuous action domain. As a quick refresher, the idea of A2C is to estimate the gradient of our policy as $\nabla J = \nabla_\theta \log \pi_\theta(a|s)\,\big(R - V_\theta(s)\big)$. The policy $\pi_\theta(a|s)$ provides the probability distribution over actions given the observed state. The quantity $V_\theta(s)$ is called the critic; it estimates the value of the state and is trained with a mean squared error (MSE) loss between the critic's prediction and the value estimated by the Bellman equation. To improve exploration, the entropy bonus $\mathcal{L}_H = \sum_a \pi_\theta(a|s)\log \pi_\theta(a|s)$ is usually added to the loss.
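To make the refresher concrete, the following is a minimal sketch (not the book's code) of how these three terms could be combined for a discrete-action policy in PyTorch; the helper name and the entropy_beta coefficient are assumptions for illustration.

import torch
import torch.nn.functional as F


def a2c_loss(logits, values, actions, returns, entropy_beta=0.01):
    # Hypothetical helper combining the three A2C loss terms for a
    # discrete-action policy: logits and values are the network outputs
    # for a batch of states, actions are the actions taken, and returns
    # are the Bellman-estimated returns R
    log_probs = F.log_softmax(logits, dim=1)
    probs = F.softmax(logits, dim=1)

    # critic: MSE between the predicted V(s) and the Bellman estimate R
    value_loss = F.mse_loss(values.squeeze(-1), returns)

    # actor: policy gradient scaled by the advantage R - V(s);
    # the advantage is detached so it acts as a weight, not a gradient path
    advantage = returns - values.squeeze(-1).detach()
    log_prob_actions = log_probs[torch.arange(len(actions)), actions]
    policy_loss = -(advantage * log_prob_actions).mean()

    # entropy bonus: beta * sum(pi * log pi), added to encourage exploration
    entropy_loss = entropy_beta * (probs * log_probs).sum(dim=1).mean()

    return policy_loss + value_loss + entropy_loss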
Obviously, the value head of the actor-critic will be unchanged for continuous actions. The only thing that is affected is how the policy returned by the actor head is represented.
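To illustrate the point, here is a sketch of what such a network might look like for a continuous-action problem, assuming the common choice of a Gaussian policy whose mean and variance are produced by separate heads; the class name, hidden size, and head layout are illustrative assumptions, not taken from the text above.

import torch.nn as nn

HID_SIZE = 128  # assumed hidden layer size


class ContinuousA2CNet(nn.Module):
    """Sketch of an actor-critic network for continuous actions: the value
    head stays a single scalar, while the actor parameterizes a Gaussian
    distribution over every action dimension."""
    def __init__(self, obs_size: int, act_size: int):
        super().__init__()
        self.base = nn.Sequential(
            nn.Linear(obs_size, HID_SIZE),
            nn.ReLU(),
        )
        # mean of the action distribution, squashed into [-1, 1]
        self.mu = nn.Sequential(nn.Linear(HID_SIZE, act_size), nn.Tanh())
        # variance must be positive, hence the softplus activation
        self.var = nn.Sequential(nn.Linear(HID_SIZE, act_size), nn.Softplus())
        # critic head: scalar state value V(s), unchanged from the discrete case
        self.value = nn.Linear(HID_SIZE, 1)

    def forward(self, x):
        base_out = self.base(x)
        return self.mu(base_out), self.var(base_out), self.value(base_out)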