The PyTorch way
So far, we have developed a simple two-layer neural network in a hybrid NumPy-PyTorch style: we coded each operation line by line, as we would in NumPy, while relying on PyTorch's automatic differentiation so that we didn't have to code the backward pass ourselves.
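The hybrid style described above can be sketched as follows. This is a minimal illustration, not the exact network from earlier; the layer sizes and data here are made up for demonstration. The forward pass is written operation by operation, NumPy style, and a single call to `backward()` replaces the hand-written backward pass:

```python
import torch

# Hypothetical shapes: 10 input features, 8 hidden units, 1 output.
x = torch.randn(64, 10)
y = torch.randn(64, 1)

# Wrapping the weights with requires_grad=True tells autograd to
# record every operation performed on them.
w1 = torch.randn(10, 8, requires_grad=True)
w2 = torch.randn(8, 1, requires_grad=True)

# Forward pass, coded line by line as we would in NumPy.
h = x @ w1               # hidden pre-activation
h_relu = h.clamp(min=0)  # ReLU nonlinearity
y_pred = h_relu @ w2     # output layer

loss = ((y_pred - y) ** 2).mean()  # mean squared error
loss.backward()  # autograd fills in w1.grad and w2.grad for us
```

After `backward()`, the gradients sit in `w1.grad` and `w2.grad`, ready for a manual gradient-descent update.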
Along the way, we learned how to wrap matrices (or tensors) in PyTorch so that backpropagation can be done for us. The PyTorch way of doing the same thing is more convenient, and that is what we discuss in this section. PyTorch ships with almost all the functionality a deep learning project needs built in. If something is missing from the core, it is usually easy to build, since PyTorch provides the standard mathematical operations as tensor functions. Better still, any function you compose from PyTorch operations gets its derivative implicitly: autograd records the operations and derives the backward pass, so you never define the gradient by hand.
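To make the implicit-derivative point concrete, here is a small sketch. The activation function below (a "swish"-like function, chosen only as an example) is not written as a special PyTorch module; it is composed from core tensor operations, yet autograd produces its gradient automatically, matching the analytic derivative:

```python
import torch

def swish(t):
    # A custom function built purely from core PyTorch ops.
    # We write no backward code for it anywhere.
    return t * torch.sigmoid(t)

x = torch.ones(3, requires_grad=True)
out = swish(x).sum()
out.backward()  # autograd derives d(swish)/dx for us

# Analytic derivative of t*sigmoid(t):
#   sigmoid(t) + t*sigmoid(t)*(1 - sigmoid(t))
s = torch.sigmoid(torch.ones(3))
expected = s + 1.0 * s * (1.0 - s)
print(torch.allclose(x.grad, expected))
```

The gradient stored in `x.grad` agrees with the hand-computed derivative, even though `swish` itself was never registered with PyTorch in any special way.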
PyTorch is helpful for people who need to know the low-level...