Bag of words benchmark
We came across one-hot embeddings while identifying fraudulent emails in Chapter 3, Fraud Detection with Autoencoders. The idea is to represent each word as a basis vector, that is, a vector that is zero everywhere except for a single coordinate, which is set to one. Each document (a review, in this case) is then represented as a vector of ones and zeros indicating which words it contains. We went a step further than that and used a different weighting scheme, tf-idf (term frequency-inverse document frequency).
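As a concrete illustration, here is a minimal sketch of both representations in Python using scikit-learn (the exact tooling used elsewhere in this book may differ; the two toy documents are stand-ins for real reviews):

from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

# Two toy "reviews" standing in for the real data
docs = ["this movie was great fun",
        "this movie was a terrible bore"]

# Bag of words: one column per vocabulary word;
# binary=True gives the pure one-hot (0/1) variant
bow = CountVectorizer(binary=True)
X_onehot = bow.fit_transform(docs)
print(bow.get_feature_names_out())
print(X_onehot.toarray())

# tf-idf re-weights the same representation, down-weighting
# words that appear in many documents
tfidf = TfidfVectorizer()
X_tfidf = tfidf.fit_transform(docs)
print(X_tfidf.toarray().round(2))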
Let's revisit this model, this time including n-grams (short sequences of adjacent words) rather than only single words. This will serve as our benchmark for the more sophisticated word embeddings we will build later.
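One plausible way to set up such a benchmark, again sketched in Python with scikit-learn (the texts, labels, and tooling here are illustrative, not the book's actual pipeline), is to feed n-gram tf-idf features into a simple linear classifier:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical review texts with 0/1 sentiment labels
texts = ["great fun from start to finish",
         "a terrible bore, avoid it",
         "great acting and a great story",
         "terrible pacing and a boring plot"]
labels = [1, 0, 1, 0]

# ngram_range=(1, 2) keeps single words and adds adjacent word pairs,
# so phrases such as "great story" become features in their own right
benchmark = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression())
benchmark.fit(texts, labels)
print(benchmark.predict(["what a great story"]))

A linear model over tf-idf n-grams is a common and hard-to-beat baseline for sentiment classification, which is exactly what we want from a benchmark.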
Preparing the data
The data is a subset of the Stanford Large Movie Review dataset, originally published in:
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts (2011). Learning Word Vectors for Sentiment Analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics (ACL 2011).
The data is available for download at http://ai.stanford.edu/~amaas/data/sentiment...
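Assuming the archive has been downloaded and extracted locally, a minimal loading sketch might look as follows; the aclImdb/ layout (train/pos and train/neg subfolders, one plain-text review per file) reflects how the archive typically unpacks, but adjust the paths to your setup:

from pathlib import Path

# Assumed layout: aclImdb/train/pos and aclImdb/train/neg,
# one review per .txt file; change data_dir if you extracted elsewhere
data_dir = Path("aclImdb") / "train"

def read_reviews(sentiment):
    # sentiment is "pos" or "neg"
    folder = data_dir / sentiment
    return [f.read_text(encoding="utf-8") for f in sorted(folder.glob("*.txt"))]

pos_reviews = read_reviews("pos")
neg_reviews = read_reviews("neg")
print(len(pos_reviews), len(neg_reviews))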