Recurrent neural networks
Recurrent neural networks (RNNs) are a family of neural networks designed specifically to handle sequential data. They were first proposed by Rumelhart et al. (1986) in their seminal work, Learning Representations by Back-Propagating Errors. The paper borrows ideas such as parameter sharing and recurrence from earlier work in statistics and machine learning to arrive at a neural network architecture that overcomes many of the disadvantages FFNs have when processing sequential data.
Parameter sharing means using the same set of parameters in different parts of the model. Apart from a regularization effect (reusing one set of weights constrains the search space during optimization), parameter sharing lets us apply the model to inputs of different forms; in particular, RNNs can scale to much longer sequences because of it. In an FFN, each timestep (each position in the input sequence) would need its own set of parameters, so nothing learned at one position would transfer to another, and the network could not handle sequences longer than those seen during training. The sketch below makes this contrast concrete.
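The following is a minimal NumPy sketch of a vanilla RNN cell, assuming a standard tanh update; the names (W_xh, W_hh, b_h) and dimensions are illustrative choices, not a specific published implementation. The key point is that the same three parameter tensors are reused at every timestep, so one fixed set of weights can process a sequence of any length.

```python
import numpy as np

def rnn_forward(x_seq, h0, W_xh, W_hh, b_h):
    """Run a vanilla RNN over a sequence of arbitrary length.

    x_seq: array of shape (T, input_dim); T can vary per example.
    h0:    initial hidden state, shape (hidden_dim,).

    The SAME W_xh, W_hh, and b_h are applied at every timestep --
    this is the parameter sharing that lets one model handle
    sequences of different lengths.
    """
    h = h0
    hidden_states = []
    for x_t in x_seq:  # one iteration per timestep, same weights each time
        h = np.tanh(x_t @ W_xh + h @ W_hh + b_h)
        hidden_states.append(h)
    return np.stack(hidden_states)

rng = np.random.default_rng(0)
input_dim, hidden_dim = 4, 8
W_xh = rng.normal(scale=0.1, size=(input_dim, hidden_dim))
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)

# The same parameters process a 5-step and a 50-step sequence.
# An FFN with separate weights per position could not do this.
for T in (5, 50):
    hs = rnn_forward(rng.normal(size=(T, input_dim)),
                     np.zeros(hidden_dim), W_xh, W_hh, b_h)
    print(T, hs.shape)  # (5, 8) and (50, 8)
```

Note that the loop body never indexes into a per-timestep weight table: the sequence length appears only in the shape of the input, which is exactly why a single trained RNN generalizes across lengths.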