Deep Learning for Natural Language Processing

Deep Learning for Natural Language Processing: Solve your natural language processing problems with smart deep neural networks

By Karthiek Reddy Bokka, Monicah Wambugu, Tanuj Jain, Shubhangi Hora
€29.99
1.5 (2 Ratings)
Paperback Jun 2019 372 pages 1st Edition
eBook
€15.99 €23.99
Paperback
€29.99
Subscription
Free Trial
Renews at €18.99p/m

What do you get with Print?

  • Instant access to your digital eBook copy whilst your Print order is shipped
  • Paperback book shipped to your preferred address
  • Download this book in EPUB and PDF formats
  • Access this title in our online reader with advanced features
  • DRM FREE - Read whenever, wherever and however you want
  • AI Assistant (beta) to help accelerate your learning

Deep Learning for Natural Language Processing

Introduction to Natural Language Processing

Learning Objectives

By the end of this chapter, you will be able to:

  • Describe natural language processing and its applications
  • Explain different text preprocessing techniques
  • Perform text preprocessing on text corpora
  • Explain the functioning of Word2Vec and GloVe word embeddings
  • Generate word embeddings using Word2Vec and GloVe
  • Use the NLTK, Gensim, and Glove-Python libraries for text preprocessing and generating word embeddings

This chapter aims to equip you with knowledge of the basics of natural language processing and experience with the various text preprocessing techniques used in Deep Learning.

The Basics of Natural Language Processing

To understand what natural language processing is, let's break the term into two:

  • Natural language is a form of written and spoken communication that has developed organically and naturally.
  • Processing means analyzing and making sense of input data with computers.
Figure 1.1: Natural language processing

Therefore, natural language processing is the machine-based processing of human communication. It aims to teach machines how to process and understand the language of humans, thereby allowing an easy channel of communication between humans and machines.

For example, the personal voice assistants found in our phones and smart speakers, such as Alexa and Siri, are a result of natural language processing. They have been created in such a manner that they are able to not only understand what we say to them but also to act upon what we say and respond with feedback. Natural language processing algorithms aid these technologies in communicating with humans.

The key thing to consider in the mentioned definition of natural language processing is that the communication needs to occur in the natural language of humans. We've been communicating with machines for decades now by creating programs to perform certain tasks and executing them. However, these programs are written in languages that are not natural languages, because they are not forms of spoken communication and they haven't developed naturally or organically. These languages, such as Java, Python, C, and C++, were created with machines in mind and the consideration always being, "what will the machine be able to understand and process easily?"

While Python is a more user-friendly language that is easier for humans to learn and write code in, the basic point remains the same – to communicate with a machine, humans must learn a language that the machine is able to understand.

Figure 1.2: Venn diagram for natural language processing

The purpose of natural language processing is the opposite of this. Rather than having humans conform to the ways of a machine and learn how to effectively communicate with them, natural language processing enables machines to conform to humans and learn their way of communication. This makes more sense since the aim of technology is to make our lives easier.

To clarify this with an example, your first ever program was probably a piece of code that asked the machine to print 'hello world'. This was you conforming to the machine and asking it to execute a task in a language that it understood. Asking your voice assistant to say 'hello world' by voicing this command to it, and having it say 'hello world' back to you, is an example of the application of natural language processing, because you are communicating with a machine in your natural language (in this case, English). The machine is conforming to your form of communication, understanding what you're saying, processing what you're asking it to do, and then executing the task.

Importance of Natural Language Processing

The following figure illustrates the various sections of the field of artificial intelligence:

Figure 1.3: Artificial intelligence and some of its subfields

Along with machine learning and deep learning, natural language processing is a subfield of artificial intelligence, and because it deals with natural language, it's actually at the intersection of artificial intelligence and linguistics.

As mentioned, natural language processing is what enables machines to understand the language of humans, thus allowing an efficient channel of communication between the two. However, there is another reason natural language processing is necessary: machine learning and deep learning models work best with numerical data. Numerical data is hard for humans to naturally produce; imagine us talking in numbers rather than words. So, natural language processing works with textual data and converts it into numerical data, enabling machine learning and deep learning models to be fitted on it. Thus, it exists to bridge the communication gap between humans and machines by taking the spoken and written forms of language from humans and converting them into data that can be understood by machines. Thanks to natural language processing, the machine is able to make sense of, answer questions based on, solve problems using, and communicate in a natural language, among other things.

Capabilities of Natural Language Processing

Natural language processing has many real-world applications that benefit the lives of humans. These applications fall under three broad capabilities of natural language processing:

  • Speech Recognition

    The machine is able to recognize a natural language in its spoken form and translate it into a textual form. An example of this is dictation on your smartphones – you can enable dictation and speak to your phone, and it will convert whatever you are saying into text.

  • Natural Language Understanding

    The machine is able to understand a natural language in both its spoken and written form. If given a command, the machine is able to understand and execute it. An example of this would be saying 'Hey Siri, call home' to Siri on your iPhone for Siri to automatically call 'home' for you.

  • Natural Language Generation

    The machine is able to generate natural language itself. An example of this is asking 'Siri, what time is it?' to Siri on your iPhone and Siri replying with the time – 'It's 2:08pm'.

These three capabilities are used to accomplish and automate a lot of tasks. Let's take a look at some of the things natural language processing contributes to, and how.

Note

A body of textual data is known as a corpus (singular); multiple such bodies are known as corpora (plural).

Applications of Natural Language Processing

The following figure depicts the general application areas of natural language processing:

Figure 1.4: Application areas of natural language processing
  • Automatic text summarization

    This involves processing corpora to provide a summary.

  • Translation

    This entails translation tools that translate text to and from different languages, for example, Google Translate.

  • Sentiment analysis

    This is also known as emotional artificial intelligence or opinion mining, and it is the process of identifying, extracting, and quantifying emotions and affective states from corpora, both written and spoken. Sentiment analysis tools are used to process things such as customer reviews and social media posts to understand emotional responses to and opinions regarding particular things, such as the quality of food at a new restaurant.

  • Information extraction

    This is the process of identifying and extracting important terms from corpora, known as entities. Named entity recognition falls under this category and is a process that will be explained in the next chapter.

  • Relationship extraction

    Relationship extraction involves extracting semantic relationships from corpora. Semantic relationships occur between two or more entities (such as people, organizations, and things) and fall into one of the many semantic categories. For example, if a relationship extraction tool was given a paragraph about Sundar Pichai and how he is the CEO of Google, the tool would be able to produce "Sundar Pichai works for Google" as output, with Sundar Pichai and Google being the two entities, and 'works for' being the semantic category that defines their relationship.

  • Chatbot

    Chatbots are forms of artificial intelligence that are designed to converse with humans via speech and text. The majority of them mimic humans and make it feel as though you are speaking to another human being. Chatbots are being used in the health industry to help people who suffer from depression and anxiety.

  • Social media analysis

    Social media applications such as Twitter and Facebook have hashtags and trends that are tracked and monitored using natural language processing to understand what is being talked about around the world. Additionally, natural language processing aids the process of moderation by filtering out negative, offensive, and inappropriate comments and posts.

  • Personal voice assistants

    Siri, Alexa, Google Assistant, and Cortana are all personal voice assistants that leverage natural language processing techniques to understand and respond to what we say.

  • Grammar checking

    Grammar-checking software automatically checks and corrects your grammar, punctuation, and typing errors.

Text Preprocessing

When you answer questions on a comprehension passage, the questions relate to specific parts of the passage, and so while some words and sentences are important to you, others are irrelevant. The trick is to identify key words from the questions and match them to the passage to find the correct answer.

Text preprocessing works in a similar fashion – the machine doesn't need the irrelevant parts of the corpora; it just needs the important words and phrases required to execute the task at hand. Thus, text preprocessing techniques involve prepping the corpora for proper analysis and for the machine learning and deep learning models. Text preprocessing is basically telling the machine what it needs to take into consideration and what it can disregard.

Each corpus requires different text preprocessing techniques depending on the task that needs to be executed, and once you've learned the different preprocessing techniques, you'll understand where to use what and why. The order in which the techniques have been explained is usually the order in which they are performed.

We will be using the NLTK Python library in the following exercises, but feel free to use different libraries while doing the activities. NLTK stands for Natural Language Toolkit and is one of the simplest and most popular Python libraries for natural language processing, which is why we will be using it to understand the basic concepts of natural language processing.

Note

For further information on NLTK, go to https://www.nltk.org/.

Text Preprocessing Techniques

The following are the most popular text preprocessing techniques in natural language processing:

  • Lowercasing/uppercasing
  • Noise removal
  • Text normalization
  • Stemming
  • Lemmatization
  • Tokenization
  • Removing stop words

Let's look at each technique one by one.

Lowercasing/Uppercasing

This is one of the simplest and most effective preprocessing techniques that people often forget to use. It either converts all the existing uppercase characters into lowercase ones so that the entire corpus is in lowercase, or it converts all the lowercase characters present in the corpus into uppercase ones so that the entire corpus is in uppercase.

This method is especially useful when the size of the corpus isn't too large and the task involves identifying terms or outputs that could be recognized differently due to the case of the characters, since a machine inherently processes uppercase and lowercase letters as separate entities – 'A' is different from 'a.' This kind of variation in the input capitalization could result in incorrect output or no output at all.

An example of this would be a corpus that contains both 'India' and 'india.' Without applying lowercasing, the machine would recognize these as two separate terms, when in reality they're both different forms of the same word and correspond to the same country. After lowercasing, both occurrences would take the single form 'india,' simplifying the task of finding all the places where India has been mentioned in the corpus.

Note

All exercises and activities will be primarily developed on Jupyter Notebook. You will need to have Python 3.6 and NLTK installed on your system.

Exercises 1 – 6 can be done within the same Jupyter notebook.

Exercise 1: Performing Lowercasing on a Sentence

In this exercise, we will take an input sentence with both uppercase and lowercase characters and convert them all into lowercase characters. The following steps will help you with the solution:

  1. Open cmd or another terminal depending on your operating system.
  2. Navigate to the desired path and use the following command to initiate a Jupyter notebook:

    jupyter notebook

  3. Store an input sentence in an 's' variable, as shown:

    s = "The cities I like most in India are Mumbai, Bangalore, Dharamsala and Allahabad."

  4. Apply the lower() function to convert the capital letters into lowercase characters and then print the new string, as shown:

    s = s.lower()

    print(s)

    Expected output:

    Figure 1.5: Output for lowercasing with mixed casing in a sentence
  5. Create an array of words with capitalized characters, as shown:

    words = ['indiA', 'India', 'india', 'iNDia']

  6. Using list comprehension, apply the lower() function on each element of the words array and then print the new array, as follows:

    words = [word.lower() for word in words]

    print(words)

    Expected output:

Figure 1.6: Output for lowercasing with mixed casing of words

Noise Removal

Noise is a very general term and can mean different things with respect to different corpora and different tasks. What is considered noise for one task may be what is considered important for another, and thus this is a very domain-specific preprocessing technique. For example, when analyzing tweets, hashtags might be important to recognize trends and understand what's being spoken about around the globe, but hashtags may not be important when analyzing a news article, and so hashtags would be considered noise in the latter's case.

Noise isn't limited to words; it can also include symbols and punctuation marks (<, >, *, ?, .), HTML markup, numbers, whitespace, stop words, particular terms, text matching particular regular expressions (such as \W|\d+, which matches non-word characters and digits), non-ASCII characters, and sparse terms.

Removing noise is crucial so that only the important parts of the corpora are fed into the models, ensuring accurate results. It also helps by bringing words into their root or standard form. Consider the following example:

Figure 1.7: Output for noise removal

After removing all the symbols and punctuation marks, all the instances of 'sleepy' correspond to a single form of the word, enabling more efficient prediction and analysis of the corpus.

Exercise 2: Removing Noise from Words

In this exercise, we will take an input array containing words with noise attached (such as punctuation marks and HTML markup) and convert these words into their clean, noise-free forms. To do this, we will need to make use of Python's regular expression library. This library has several functions that allow us to filter through input data and remove the unnecessary parts, which is exactly what the process of noise removal aims to do.

Note

To learn more about 're,' click on https://docs.python.org/3/library/re.html.

  1. In the same Jupyter notebook, import the regular expression library, as shown:

    import re

  2. Create a function called 'clean_words', which will contain methods to remove different types of noise from the words, as follows:

    def clean_words(text):
        # remove html markup
        text = re.sub(r"(<.*?>)", "", text)
        # remove non-ascii characters and digits
        text = re.sub(r"(\W|\d+)", " ", text)
        # remove surrounding whitespace
        text = text.strip()
        return text

  3. Create an array of raw words with noise, as demonstrated:

    raw = ['..sleepy', 'sleepy!!', '#sleepy', '>>>>>sleepy>>>>', '<a>sleepy</a>']

  4. Apply the clean_words() function on the words in the raw array and then print the array of clean words, as shown:

    clean = [clean_words(r) for r in raw]

    print(clean)

    Expected output:

Figure 1.8: Output for noise removal

Text Normalization

This is the process of converting a raw corpus into a canonical and standard form, ensuring that the textual input is consistent before it is analyzed, processed, and operated upon.

Examples of text normalization would be mapping an abbreviation to its full form, converting several spellings of the same word to one spelling of the word, and so on.

The following are examples for canonical forms of incorrect spellings and abbreviations:

Figure 1.9: Canonical form for incorrect spellings
Figure 1.10: Canonical form for abbreviations

There is no standard way to go about normalization since it is very dependent on the corpus and the task at hand. The most common way to go about it is with dictionary mapping, which involves manually creating a dictionary that maps all the various forms of one word to that one word, and then replaces each of those words with one standard form of the word.
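
As a minimal sketch of the dictionary-mapping approach, the snippet below replaces a few made-up abbreviations and misspellings with their canonical forms; both the mapping and the sample sentence are purely illustrative:

    normalization_map = {'u': 'you', 'gr8': 'great', 'b4': 'before', 'alrite': 'alright'}

    def normalize(text):
        # replace each token with its canonical form if it appears in the map
        return " ".join(normalization_map.get(word, word) for word in text.split())

    print(normalize("u did gr8 and everything went alrite"))
    # you did great and everything went alright

In practice, such a mapping is built by hand for the specific corpus and task, which is why there is no single standard dictionary to reuse.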

Stemming

Stemming is performed on a corpus to reduce words to their stem or root form. The reason for saying "stem or root form" is that the process of stemming doesn't always reduce the word to its root but sometimes just to its canonical form.

The words that undergo stemming are known as inflected words. These words are in a form that is different from the root form of the word, to imply an attribute such as the number or gender. For example, "journalists" is the plural form of "journalist." Thus, stemming would cut off the 's', bringing "journalists" to its root form:

Figure 1.11: Output for stemming

Stemming is beneficial when building search applications due to the fact that when searching for something in particular, you might also want to find instances of that thing even if they're spelled differently. For example, if you're searching for exercises in this book, you might also want 'Exercise' to show up in your search.

However, stemming doesn't always provide the desired stem, since it works by chopping off the ends of the words. It's possible for the stemmer to reduce 'troubling' to 'troubl' instead of 'trouble' and this won't really help in problem solving, and so stemming isn't a method that's used too often. When it is used, Porter's stemming algorithm is the most common algorithm for stemming.

Exercise 3: Performing Stemming on Words

In this exercise, we will take an input array containing various forms of one word and convert these words into their stem forms.

  1. In the same Jupyter notebook, import the nltk and pandas libraries as well as Porter Stemmer, as shown:

    import nltk

    import pandas as pd

    from nltk.stem import PorterStemmer as ps

  2. Create an instance of stemmer, as follows:

    stemmer = ps()

  3. Create an array of different forms of the same word, as shown:

    words=['annoying', 'annoys', 'annoyed', 'annoy']

  4. Apply the stemmer to each of the words in the words array and store them in a new array, as given:

    stems =[stemmer.stem(word = word) for word in words]

  5. Print the raw words and their stems in the form of a DataFrame, as shown:

    sdf = pd.DataFrame({'raw word': words,'stem': stems})

    sdf

    Expected output:

Figure 1.12: Output of stemming

Lemmatization

Lemmatization is a process that is like stemming – its purpose is to reduce a word to its root form. What makes it different is that it doesn't just chop the ends of words off to obtain this root form, but instead follows a process, abides by rules, and often uses WordNet for mappings to return words to their root forms. (WordNet is an English language database that consists of words and their definitions along with synonyms and antonyms. It is considered to be an amalgamation of a dictionary and a thesaurus.) For example, lemmatization is capable of transforming the word 'better' into its root form 'good', since 'better' is just the comparative form of 'good'.

While this quality of lemmatization makes it highly appealing and more efficient when compared with stemming, the drawback is that since lemmatization follows such an organized procedure, it takes a lot more time than stemming does. Hence, lemmatization is not recommended when you're working with a large corpus.
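
As a quick illustration of the 'better' to 'good' mapping described above, the following minimal sketch uses NLTK's WordNetLemmatizer with pos='a' (adjective); it assumes NLTK and the WordNet data are available, as in the exercise that follows:

    import nltk
    from nltk.stem import WordNetLemmatizer

    nltk.download('wordnet')
    lemmatizer = WordNetLemmatizer()
    # 'a' tells the lemmatizer to treat the word as an adjective
    print(lemmatizer.lemmatize('better', pos='a'))
    # good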

Exercise 4: Performing Lemmatization on Words

In this exercise, we will take an input array containing various forms of one word and convert these words into their root form.

  1. In the same Jupyter notebook as the previous exercise, import WordNetLemmatizer and download WordNet, as shown:

    from nltk.stem import WordNetLemmatizer as wnl

    nltk.download('wordnet')

  2. Create an instance of lemmatizer, as follows:

    lemmatizer = wnl()

  3. Create an array of different forms of the same word, as demonstrated:

    words = ['troubling', 'troubled', 'troubles', 'trouble']

  4. Apply lemmatizer to each of the words in the words array and store them in a new array, as follows. The word parameter provides the lemmatize function with the word it is supposed to lemmatize. The pos parameter is the part of speech you want the lemma to be. 'v' stands for verb and thus the lemmatizer will reduce the word to its closest verb form:

    # v denotes verb in "pos"

    lemmatized = [lemmatizer.lemmatize(word = word, pos = 'v') for word in words]

  5. Print the raw words and their root forms in the form of a DataFrame, as shown:

    ldf = pd.DataFrame({'raw word': words,'lemmatized': lemmatized})

    ldf = ldf[['raw word','lemmatized']]

    ldf

    Expected output:

Figure 1.13: Output of lemmatization

Tokenization

Tokenization is the process of breaking down a corpus into individual tokens. Tokens are most commonly individual words – thus, this process usually breaks a corpus down into its individual words – but tokens can also include punctuation marks and spaces, among other things.

This technique is one of the most important ones since it is a prerequisite for a lot of applications of natural language processing that we will be learning about in the next chapter, such as Parts-of-Speech (PoS) tagging. These algorithms take tokens as input and can't function with strings or paragraphs of text as input.

Tokenization can be performed to obtain individual words as well as individual sentences as tokens. Let's try both of these out in the following exercises.

Exercise 5: Tokenizing Words

In this exercise, we will take an input sentence and produce individual words as tokens from it.

  1. In the same Jupyter notebook, import nltk:

    import nltk

  2. Download the punkt tokenizer models and import word_tokenize from nltk, as shown:

    nltk.download('punkt')

    from nltk import word_tokenize

  3. Store words in a variable and apply word_tokenize() on it, then print the results, as follows:

    s = "hi! my name is john."

    tokens = word_tokenize(s)

    tokens

    Expected output:

Figure 1.14: Output for the tokenization of words

As you can see, even the punctuation marks are tokenized and considered as individual tokens.

Now let's see how we can tokenize sentences.

Exercise 6: Tokenizing Sentences

In this exercise, we will take input text and produce individual sentences as tokens from it.

  1. In the same Jupyter notebook, import sent_tokenize, as shown:

    from nltk import sent_tokenize

  2. Store two sentences in a variable (like the string from the previous exercise, this one is actually two sentences, so we can see the difference between word and sentence tokenization) and apply sent_tokenize() on it, then print the results, as follows:

    s = "hi! my name is shubhangi."

    tokens = sent_tokenize(s)

    tokens

    Expected output:

Figure 1.15: Output for tokenizing sentences

As you can see, the two sentences have formed two individual tokens.

Additional Techniques

There are several ways to perform text preprocessing, including the usage of a variety of Python libraries such as BeautifulSoup to strip away HTML markup. The previous exercises serve the purpose of introducing some techniques to you. Depending on the task at hand, you may need to use just one or two or all of them, including the modifications made to them. For example, at the noise removal stage, you may find it necessary to remove words such as 'the,' 'and,' 'this,' and 'it.' So, you will need to create an array containing these words and pass the corpus through a for loop to store only the words that are not a part of that array, removing the noisy words from the corpus. Another way of doing this is given later in this chapter and is done after tokenization has been performed.
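
For instance, stripping HTML markup with BeautifulSoup, as mentioned above, takes only a couple of lines; this is a minimal sketch that assumes the beautifulsoup4 package is installed (pip install beautifulsoup4) and uses a made-up HTML string:

    from bs4 import BeautifulSoup

    html = "<p>This is <b>noisy</b> text scraped from a web page.</p>"
    # get_text() returns the text content with all HTML tags removed
    clean_text = BeautifulSoup(html, "html.parser").get_text()
    print(clean_text)
    # This is noisy text scraped from a web page.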

Exercise 7: Removing Stop Words

In this exercise, we will take an input sentence and remove the stop words from it.

  1. Open a Jupyter notebook, import nltk and word_tokenize, and download 'stopwords' and 'punkt' using the following lines of code:

    import nltk

    from nltk import word_tokenize

    nltk.download('stopwords')

    nltk.download('punkt')

  2. Store a sentence in a variable, as shown:

    s = "the weather is really hot and i want to go for a swim"

  3. Import stopwords and create a set of the English stop words, as follows:

    from nltk.corpus import stopwords

    stop_words = set(stopwords.words('english'))

  4. Tokenize the sentence using word_tokenize, and then store those tokens that do not occur in stop_words in an array. Then, print that array:

    tokens = word_tokenize(s)

    tokens = [word for word in tokens if not word in stop_words]

    print(tokens)

    Expected output:

Figure 1.16: Output after removing stopwords

Additionally, you may need to convert numbers into their word forms. This is also a method you can add to the noise removal function. Furthermore, you might need to make use of the contractions library, which serves the purpose of expanding the existing contractions in the text. For example, the contractions library will convert 'you're' into 'you are,' and if this is necessary for your task, then it is recommended to install this library and use it.
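
As a rough sketch of both of these ideas, the snippet below expands contractions with the contractions package and converts digits to their word forms with the num2words package; both are third-party libraries that would need to be installed separately (pip install contractions num2words), and the sample sentence is made up:

    import contractions
    from num2words import num2words

    s = "you're going to read 3 chapters today"
    s = contractions.fix(s)   # "you're" becomes "you are"
    # replace any purely numeric token with its word form
    s = " ".join(num2words(int(w)) if w.isdigit() else w for w in s.split())
    print(s)
    # you are going to read three chapters today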

Text preprocessing techniques go beyond the ones that have been discussed in this chapter and can include anything and everything that is required for a task or a corpus. In some instances, some words may be important, while in others they won't be.
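
Putting several of the techniques from this section together, a typical (though by no means universal) preprocessing pipeline for a single piece of text might look like the following sketch, which assumes NLTK and its punkt and stopwords data are available, as in the earlier exercises:

    import re
    import nltk
    from nltk import word_tokenize
    from nltk.corpus import stopwords

    nltk.download('punkt')
    nltk.download('stopwords')
    stop_words = set(stopwords.words('english'))

    def preprocess(text):
        text = text.lower()                    # lowercasing
        text = re.sub(r"(<.*?>)", "", text)    # noise removal: HTML markup
        text = re.sub(r"[^a-z\s]", " ", text)  # noise removal: digits and punctuation
        tokens = word_tokenize(text)           # tokenization
        return [t for t in tokens if t not in stop_words]  # stop word removal

    print(preprocess("<p>The weather is REALLY hot, and I want to go for a swim!</p>"))
    # ['weather', 'really', 'hot', 'want', 'go', 'swim']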

Word Embeddings

As mentioned in the earlier sections of this chapter, natural language processing prepares textual data for machine learning and deep learning models. The models perform most efficiently when provided with numerical data as input, and thus a key role of natural language processing is to transform preprocessed textual data into numerical data, which is a numerical representation of the textual data.

This is what word embeddings are: they are numerical representations in the form of real-value vectors for text. Words that have similar meanings map to similar vectors and thus have similar representations. This aids the machine in learning the meaning and context of different words. Since word embeddings are vectors mapping to individual words, word embeddings can only be generated once tokenization has been performed on the corpus.

Figure 1.17: Example for word embeddings

Word embeddings encompass a variety of techniques used to create a learned numerical representation and are the most popular way to represent a document's vocabulary. The beneficial aspect of word embeddings is that they are able to capture contextual, semantic, and syntactic similarities, and the relations of a word with other words, to effectively train the machine to comprehend natural language. This is the main aim of word embeddings – to form clusters of similar vectors that correspond to words with similar meanings.

The reason for using word embeddings is to make machines understand synonyms the same way we do. Consider an example of online restaurant reviews – they consist of adjectives describing food, ambience, and the overall experience. They are either positive or negative, and comprehending which reviews fall into which of these two categories is important. The automatic categorization of these reviews can provide a restaurant with quick insights as to what areas they need to improve on, what people liked about their restaurant, and so on.

There exist a variety of adjectives that can be classified as positive, and the same goes with negative adjectives. Thus, not only does the machine need to be able to differentiate between negative and positive, it also needs to learn and understand that multiple words can relate to the same category because they ultimately mean the same thing. This is where word embeddings are helpful.

Consider the example of restaurant reviews received on a food service application. The following two sentences are from two separate restaurant reviews:

  • Sentence A – The food here was great.
  • Sentence B – The food here was good.

The machine needs to be able to comprehend that both these reviews are positive and mean a similar thing, despite the adjective in both sentences being different. This is done by creating word embeddings, because the two words 'good' and 'great' map to two separate but similar real-value vectors and, thus, can be clustered together.
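
To make the idea of "similar words map to similar vectors" concrete, here is a toy sketch that compares made-up three-dimensional vectors with cosine similarity; real embeddings typically have 100 or more dimensions, and these numbers are purely illustrative:

    import numpy as np

    def cosine_similarity(a, b):
        # cosine of the angle between two vectors; values near 1 mean very similar
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    good = np.array([0.9, 0.1, 0.2])        # made-up embedding for 'good'
    great = np.array([0.85, 0.15, 0.25])    # made-up embedding for 'great'
    terrible = np.array([-0.8, 0.3, -0.1])  # made-up embedding for 'terrible'

    print(cosine_similarity(good, great))      # close to 1 -> similar meaning
    print(cosine_similarity(good, terrible))   # much lower -> dissimilar meaning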

The Generation of Word Embeddings

We've understood what word embeddings are and their importance; now we need to understand how they're generated. The process of transforming words into their real-value vectors is known as vectorization and is done by word embedding techniques. There are many word embedding techniques available, but in this chapter, we will be discussing the two main ones – Word2Vec and GloVe. Once word embeddings (vectors) have been created, they combine to form a vector space, which is an algebraic model consisting of vectors that follow the rules of vector addition and scalar multiplication. If you don't remember your linear algebra, this might be a good time to quickly review it.
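
If a quick refresher helps, the toy sketch below shows vector addition and scalar multiplication with NumPy; the vectors are arbitrary examples, not real word embeddings:

    import numpy as np

    v1 = np.array([1.0, 2.0, 3.0])
    v2 = np.array([0.5, -1.0, 2.0])

    print(v1 + v2)   # vector addition: 1.5, 1.0, 5.0
    print(3 * v1)    # scalar multiplication: 3.0, 6.0, 9.0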

Word2Vec

As mentioned earlier, Word2Vec is one of the word embedding techniques used to generate vectors from words – something you can probably understand from the name itself.

Word2Vec is a shallow neural network – it has only two layers – and thus does not qualify as a deep learning model. The input is a text corpus, which it uses to generate vectors as the output. These vectors are known as feature vectors for the words present in the input corpus. It transforms a corpus into numerical data that can be understood by a deep neural network.

The aim of Word2Vec is to understand the probability of two or more words occurring together and thus to group words with similar meanings together to form a cluster in a vector space. Like any other machine learning or deep learning model, Word2Vec becomes more and more efficient by learning from past data and past occurrences of words. Thus, if provided with enough data and context, it can accurately guess a word's meaning based on past occurrences and context, similar to how we understand language.

For example, we are able to create a connection between the words 'boy' and 'man', and 'girl' and 'woman,' once we have heard and read about them and understood what they mean. Likewise, Word2Vec can also form this connection and generate vectors for these words that lie close together in the same cluster so as to ensure that the machine is aware that these words mean similar things.

Once Word2Vec has been given a corpus, it produces a vocabulary wherein each word has a vector of its own attached to it, which is known as its neural word embedding; simply put, this neural word embedding is a word written in numbers.

Functioning of Word2Vec

Word2Vec trains a word against the words that neighbor it in the input corpus, and there are two methods of doing so:

  • Continuous Bag of Words (CBOW):

    This method predicts the current word based on the context. Thus, it takes the word's surrounding words as input to produce the word as output, and it chooses this word based on the probability that this is indeed the word that is a part of the sentence.

    For example, if the algorithm is provided with the words "the food was" and needs to predict the adjective after it, it is most likely to output the word "good" rather than "delightful," since there would be more instances where the word "good" was used, and thus it has learned that "good" has a higher probability than "delightful." CBOW is said to be faster than skip-gram and to have higher accuracy with more frequent words.

Figure 1.18: The CBOW algorithm
  • Skip-gram

    This method predicts the words surrounding a word by taking the word as input, understanding the meaning of the word, and assigning it to a context. For example, if the algorithm was given the word "delightful," it would have to understand its meaning and learn from past context to predict that the probability that the surrounding words are "the food was" is highest. Skip-gram is said to work best with a small corpus.

Figure 1.19: The skip-gram algorithm

While both methods seem to be working in opposite manners, they are essentially predicting words based on the context of local (nearby) words; they are using a window of context to predict what word will come next. This window is a configurable parameter.

The decision of choosing which algorithm to use depends on the corpus at hand. CBOW works on the basis of probability and thus chooses the word that has the highest probability of occurring given a specific context. This means it will usually predict only common and frequent words since those have the highest probabilities, and rare and infrequent words will never be produced by CBOW. Skip-gram, on the other hand, predicts context, and thus when given a word, it will take it as a new observation rather than comparing it to an existing word with a similar meaning. Due to this, rare words will not be avoided or looked over. However, this also means that a lot of training data will be required for skip-gram to work efficiently. Thus, depending on the training data and corpus at hand, the decision to use either algorithm should be made.
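
In Gensim's Word2Vec implementation, which we use in the next exercise, this choice is controlled by the sg parameter: 0 (the default) trains a CBOW model and 1 trains a skip-gram model. A minimal sketch with two made-up sentences:

    from gensim.models import Word2Vec

    sentences = [s.split() for s in ["the food was good", "the food was great"]]

    # sg=0 (default) trains CBOW; sg=1 trains skip-gram
    cbow_model = Word2Vec(sentences, min_count=1, sg=0)
    skipgram_model = Word2Vec(sentences, min_count=1, sg=1)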

Essentially, both algorithms, and thus the model as a whole, require an intense learning phase where they are trained over thousands and millions of words to better understand context and meaning. Based on this, they are able to assign vectors to words and thus aid the machine in learning and predicting natural language. To understand Word2Vec better, let's do an exercise using Gensim's Word2Vec model.

Gensim is an open source library for unsupervised topic modeling and natural language processing using statistical machine learning. Gensim's Word2Vec algorithm takes an input of sequences of sentences in the form of individual words (tokens).

Also, we can use the min_count parameter. It specifies the minimum number of times a word must occur in the corpus for it to be taken into consideration when generating word embeddings. In a real-life scenario, when dealing with millions of words, a word that occurs only once or twice may not be important at all and thus can be ignored. However, right now, we are training our model on only three sentences, each with only 5-6 words. Thus, min_count is set to 1, since a word is important to us even if it occurs only once.

Exercise 8: Generating Word Embeddings Using Word2Vec

In this exercise, we will be using Gensim's Word2Vec algorithm to generate word embeddings post tokenization.

Note

You will need to have gensim installed on your system for the following exercise. You can use the following command to install it, if it is not already installed:

pip install --upgrade gensim

For further information, click on https://radimrehurek.com/gensim/models/word2vec.html.

The following steps will help you with the solution:

  1. Open a new Jupyter notebook.
  2. Import the Word2Vec model from gensim, and import word_tokenize from nltk, as shown:

    from gensim.models import Word2Vec as wtv

    from nltk import word_tokenize

  3. Store three strings with some common words into three separate variables, and then tokenize each sentence and store all the tokens in an array, as shown:

    s1 = "Ariana Grande is a singer"

    s2 = "She has been a singer for many years"

    s3 = "Ariana is a great singer"

    sentences = [word_tokenize(s1), word_tokenize(s2), word_tokenize(s3)]

    You can print the array of sentences to view the tokens.

  4. Train the model, as follows:

    model = wtv(sentences, min_count = 1)

    Word2Vec's default value for min_count is 5.

  5. Summarize the model, as demonstrated:

    print('this is the summary of the model: ')

    print(model)

    Your output will look something like this:

    Figure 1.20: Output for model summary

    Vocab = 12 signifies that there are 12 different words present in the sentences that were input to the model.

  6. Let's find out what words are present in the vocabulary by summarizing it, as shown:

    words = list(model.wv.vocab)

    print('this is the vocabulary for our corpus: ')

    print(words)

    Your output will look something like this:

Figure 1.21: Output for the vocabulary of the corpus

Let's see what the vector (word embedding) for the word 'singer' is:

print("the vector for the word singer: ")

print(model['singer'])

Expected output:

Figure 1.22: Vector for the word 'singer'

Our Word2Vec model has been trained on these three sentences, and thus its vocabulary only includes the words present in this sentence. If we were to find words that are similar to a particular input word from our Word2Vec model, we wouldn't get words that actually make sense since the vocabulary is so small. Consider the following examples:

#lookup top 6 similar words to great

w1 = ["great"]

model.wv.most_similar(positive=w1, topn=6)

The positive parameter lists the words that should contribute positively to the similarity calculation; here, it simply holds the word whose neighbors we want to find.

The top six similar words to 'great' would be:

Figure 1.23: Word vectors similar to the word 'great'

Similarly, for the word 'singer', it could be as follows:

#lookup top 6 similar words to singer

w1 = ["singer"]

model.wv.most_similar(positive=w1, topn=6)

Figure 1.24: Word vectors similar to the word 'singer'

We know that these words are not actually similar in meaning to our input words at all, and that also shows up in the correlation value beside them. However, they show up because these are the only words that exist in our vocabulary.

Another important parameter of the Gensim Word2Vec model is the size parameter. Its default value is 100, and it sets the dimensionality of the word vectors – that is, the size of the neural network layer used to train the model. This corresponds to the amount of freedom the training algorithm has: a larger size requires more data but can also lead to more accurate embeddings.
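
As a quick illustration, the sketch below trains a model with size=50 and checks that each word vector then has 50 dimensions; note that in newer versions of Gensim this parameter has been renamed vector_size, so adjust accordingly:

    from gensim.models import Word2Vec

    sentences = [s.split() for s in ["she is a singer", "he is a dancer"]]

    model = Word2Vec(sentences, min_count=1, size=50)   # 50-dimensional embeddings
    print(len(model.wv['singer']))
    # 50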

Note

For more information on Gensim's Word2Vec model, click on

https://rare-technologies.com/word2vec-tutorial/.

GloVe

GloVe, an abbreviation of "global vectors," is a word embedding technique developed at Stanford. It is an unsupervised learning algorithm that builds on Word2Vec. While Word2Vec is quite successful in generating word embeddings, the issue with it is that it has a small window through which it focuses on local words and local context to predict words. This means that it is unable to learn from the frequency of words present globally, that is, in the entire corpus. GloVe, as its name suggests, looks at all the words present in a corpus.

While Word2Vec is a predictive model, as it learns vectors to improve its predictive abilities, GloVe is a count-based model. What this means is that GloVe learns its vectors by performing dimensionality reduction on a co-occurrence counts matrix. The kind of connection that GloVe is able to make is along these lines:

king – man + woman = queen

This means it's able to understand that "king" and "queen" share a relationship that is similar to that between "man" and "woman".

These are complicated terms, so let's understand them one by one. All of these concepts come from statistics and linear algebra, so if you already know what's going on, you can skip to the activity!

When dealing with a corpus, there exist algorithms to construct matrices based on term frequencies. Basically, these matrices contain words that occur in a document as rows, and the columns are either paragraphs or separate documents. The elements of the matrices represent the frequency with which the words occur in the documents. Naturally, with a large corpus, this matrix will be huge. Processing such a large matrix will take a lot of time and memory, thus we perform dimensionality reduction. This is the process of reducing the size of the matrix so it is possible to perform further operations on it.

In the case of GloVe, the matrix is known as a co-occurrence counts matrix, which contains information on how many times a word has occurred in a particular context in a corpus. The rows are the words and the columns are the contexts. This matrix is then factorized in order to reduce the dimensions, and the new matrix has a vector representation for each word.
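
To make the idea of a co-occurrence counts matrix concrete, the toy sketch below counts how often each pair of words appears within a window of one position of each other; the sentences are made up, and a real implementation (such as the Corpus class used in the next exercise) is considerably more sophisticated:

    from collections import defaultdict

    sentences = [["the", "food", "was", "good"],
                 ["the", "food", "was", "great"]]

    window = 1
    cooccurrence = defaultdict(int)
    for sentence in sentences:
        for i, word in enumerate(sentence):
            # look at the neighbors within the window on either side of the word
            for j in range(max(0, i - window), min(len(sentence), i + window + 1)):
                if i != j:
                    cooccurrence[(word, sentence[j])] += 1

    print(cooccurrence[("food", "was")])
    # 2 -> 'food' and 'was' co-occur in both sentences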

GloVe also provides pretrained word vectors that can be used if the semantics match the corpus and task at hand. The following exercise guides you through the process of implementing GloVe in Python.
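
If you decide to use the pretrained vectors instead, they are distributed as plain text files with one word followed by its vector components per line (for example, glove.6B.50d.txt from the Stanford GloVe page). A minimal sketch for loading such a file, assuming it has already been downloaded and extracted next to your notebook:

    import numpy as np

    embeddings = {}
    with open('glove.6B.50d.txt', encoding='utf-8') as f:
        for line in f:
            parts = line.split()
            # the first entry is the word; the rest are the vector components
            embeddings[parts[0]] = np.array(parts[1:], dtype='float32')

    print(embeddings['king'].shape)
    # (50,)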

Exercise 9: Generating Word Embeddings Using GloVe

In this exercise, we will be generating word embeddings using Glove-Python.

Note

To install Glove-Python on your platform, go to https://pypi.org/project/glove/#files.

Download the Text8Corpus from http://mattmahoney.net/dc/text8.zip.

Extract the file and store it with your Jupyter notebook.

  1. Import itertools:

    import itertools

  2. We need a corpus to generate word embeddings for, and the gensim.models.word2vec library, luckily, has one called Text8Corpus. Import this along with two modules from the Glove-Python library:

    from gensim.models.word2vec import Text8Corpus

    from glove import Corpus, Glove

  3. Convert the corpus into sentences in the form of a list using itertools:

    sentences = list(itertools.islice(Text8Corpus('text8'),None))

  4. Initiate the Corpus() model and fit it on to the sentences:

    corpus = Corpus()

    corpus.fit(sentences, window=10)

    The window parameter controls how many neighboring words are considered.

  5. Now that we have prepared our corpus, we need to train the embeddings. Initiate the Glove() model:

    glove = Glove(no_components=100, learning_rate=0.05)

  6. Generate a co-occurrence matrix based on the corpus and fit the glove model on to this matrix:

    glove.fit(corpus.matrix, epochs=30, no_threads=4, verbose=True)

    The model has been trained!

  7. Add the dictionary of the corpus:

    glove.add_dictionary(corpus.dictionary)

  8. Use the following command to see which words are similar to your choice of word based on the word embeddings generated:

    glove.most_similar('man')

    Expected output:

Figure 1.25: Output of word embeddings for 'man'

You can try this out for several different words to see which words neighbor them and are the most similar to them:

glove.most_similar('queen', number = 10)

Expected output:

Figure 1.26: Output of word embeddings for 'queen'

Note

To learn more about GloVe, go to https://nlp.stanford.edu/projects/glove/.

Activity 1: Generating Word Embeddings from a Corpus Using Word2Vec.

You have been given the task of training a Word2Vec model on a particular corpus – the Text8Corpus, in this case – to determine which words are similar to each other. The following steps will help you with the solution.

Note

You can find the text corpus file at http://mattmahoney.net/dc/text8.zip.

  1. Download the text corpus from the link given previously and place it alongside your Jupyter notebook.
  2. Import word2vec from gensim models.
  3. Store the corpus in a variable.
  4. Fit the word2vec model on the corpus.
  5. Find the most similar word to 'man'.
  6. 'Father' is to 'girl' as 'x' is to 'boy'. Find the top 3 words for x.

    Note

    The solution for the activity can be found on page 296.

    Expected Outputs:

Figure 1.27: Output for similar word embeddings

Top three words for 'x' could be:

Figure 1.28: Output for top three words for 'x'

Summary

In this chapter, we learned about how natural language processing enables humans and machines to communicate in natural human language. There are three broad capabilities of natural language processing: speech recognition, natural language understanding, and natural language generation.

Language is a complicated thing, and so text is required to go through several phases before it can make sense to a machine. This process of filtering is known as text preprocessing and comprises various techniques that serve different purposes. They are all task- and corpora-dependent and prepare text for operations that will enable it to be input into machine learning and deep learning models.

Since machine learning and deep learning models work best with numerical data, it is necessary to transform preprocessed corpora into numerical form. This is where word embeddings come into the picture; they are real-value vector representations of words that aid models in predicting and understanding words. The two main algorithms used to generate word embeddings are Word2Vec and GloVe.

In the next chapter, we will be building on the algorithms used for natural language processing. The processes of POS tagging and named entity recognition will be introduced and explained.


Key benefits

  • Gain insights into the basic building blocks of natural language processing
  • Learn how to select the best deep neural network to solve your NLP problems
  • Explore convolutional and recurrent neural networks and long short-term memory networks

Description

Applying deep learning approaches to various NLP tasks can take your computational algorithms to a completely new level in terms of speed and accuracy. Deep Learning for Natural Language Processing starts by highlighting the basic building blocks of the natural language processing domain. The book goes on to introduce the problems that you can solve using state-of-the-art neural network models. After this, delving into the various neural network architectures and their specific areas of application will help you to understand how to select the best model to suit your needs. As you advance through this deep learning book, you’ll study convolutional, recurrent, and recursive neural networks, in addition to covering long short-term memory networks (LSTM). Understanding these networks will help you to implement their models using Keras. In later chapters, you will be able to develop a trigger word detection application using NLP techniques such as attention model and beam search. By the end of this book, you will not only have sound knowledge of natural language processing, but also be able to select the best text preprocessing and neural network models to solve a number of NLP issues.

Who is this book for?

If you’re an aspiring data scientist looking for an introduction to deep learning in the NLP domain, this is just the book for you. Strong working knowledge of Python, linear algebra, and machine learning is a must.

What you will learn

  • Understand various preprocessing techniques for solving deep learning problems
  • Build a vector representation of text using word2vec and GloVe
  • Create a named entity recognizer and parts-of-speech tagger with Apache OpenNLP
  • Build a machine translation model in Keras
  • Develop a text generation application using LSTM
  • Build a trigger word detection application using an attention model

Product Details

Publication date : Jun 11, 2019
Length: 372 pages
Edition : 1st
Language : English
ISBN-13 : 9781838550295





Table of Contents

9 Chapters
  1. Introduction to Natural Language Processing
  2. Applications of Natural Language Processing
  3. Introduction to Neural Networks
  4. Foundations of Convolutional Neural Network
  5. Recurrent Neural Networks
  6. Gated Recurrent Units (GRUs)
  7. Long Short-Term Memory (LSTM)
  8. State-of-the-Art Natural Language Processing
  9. A Practical NLP Project Workflow in an Organization

Customer reviews

Rating distribution
1.5
(2 Ratings)
5 star 0%
4 star 0%
3 star 0%
2 star 50%
1 star 50%
Claus Jul 30, 2020
2
I am still considering sending the book straight back to the publisher, because the print of the code lines is completely illegible. It's great that Packt (probably) now produces its books via print on demand - but the print quality suffers so badly that with many lines of code you are left guessing: "What was the author trying to tell me ... which letters is this scribble supposed to represent?"
Amazon Verified review
Jen Nov 05, 2019
1
Lots of weird printing issues that make parts of the text unreadable.
Amazon Verified review

FAQs

What is the delivery time and cost of print book?

Shipping Details

USA:


Economy: Delivery to most addresses in the US within 10-15 business days

Premium: Trackable Delivery to most addresses in the US within 3-8 business days

UK:

Economy: Delivery to most addresses in the U.K. within 7-9 business days.
Shipments are not trackable

Premium: Trackable delivery to most addresses in the U.K. within 3-4 business days.
Add one extra business day for deliveries to Northern Ireland and the Scottish Highlands and islands.

EU:

Premium: Trackable delivery to most EU destinations within 4-9 business days.

Australia:

Economy: Can deliver to P. O. Boxes and private residences.
Trackable service with delivery to addresses in Australia only.
Delivery time ranges from 7-9 business days for VIC and 8-10 business days for interstate metro areas.
Delivery time is up to 15 business days for remote areas of WA, NT & QLD.

Premium: Delivery to addresses in Australia only.
Trackable delivery to most P. O. Boxes and private residences in Australia within 4-5 days of dispatch, depending on the distance to the destination.

India:

Premium: Delivery to most Indian addresses within 5-6 business days

Rest of the World:

Premium: Countries in the American continent: Trackable delivery to most countries within 4-7 business days

Asia:

Premium: Delivery to most Asian addresses within 5-9 business days

Disclaimer:
All orders received before 5 PM U.K. time will start printing on the next business day, so the estimated delivery times also start from the next day. Orders received after 5 PM U.K. time (in our internal systems) on a business day, or at any time over the weekend, will begin printing on the second business day after the order. For example, an order placed at 11 AM today will begin printing tomorrow, whereas an order placed at 9 PM tonight will begin printing the day after tomorrow.


Unfortunately, due to several restrictions, we are unable to ship to the following countries:

  1. Afghanistan
  2. American Samoa
  3. Belarus
  4. Brunei Darussalam
  5. Central African Republic
  6. The Democratic Republic of Congo
  7. Eritrea
  8. Guinea-Bissau
  9. Iran
  10. Lebanon
  11. Libyan Arab Jamahiriya
  12. Somalia
  13. Sudan
  14. Russian Federation
  15. Syrian Arab Republic
  16. Ukraine
  17. Venezuela
What is customs duty/charge?

Customs duties are charges levied on goods when they cross international borders. They are taxes imposed on imported goods, charged by special authorities and bodies created by local governments, and are meant to protect local industries, economies, and businesses.

Do I have to pay customs charges for the print book order?

Orders shipped to countries listed under the EU27 will not incur customs charges; these are paid by Packt as part of the order.

List of EU27 countries: www.gov.uk/eu-eea

For shipments to countries outside the EU27, customs duty or localized taxes may apply and will be charged by the recipient country. These charges must be paid by the customer and are not included in the shipping charges on the order.

How do I know my customs duty charges?

The amount of duty payable varies greatly depending on the imported goods, the country of origin, and several other factors, such as the total invoice amount, dimensions such as weight, and other criteria applicable in your country.

For example:

  • If you live in Mexico and the declared value of your ordered items is over $50, you will have to pay an additional import tax of 19% (i.e. $9.50 on a $50 order) to the courier service in order to receive the package.
  • Whereas if you live in Turkey and the declared value of your ordered items is over €22, you will have to pay an additional import tax of 18% (i.e. €3.96 on a €22 order) to the courier service in order to receive the package.
How can I cancel my order?

Cancellation Policy for Published Printed Books:

You can cancel any order within 1 hour of placing it. Simply contact [email protected] with your order details or payment transaction ID. If your order has already started the shipment process, we will do our best to stop it. However, if it is already on its way to you, you can contact us at [email protected] when you receive it and use the returns and refund process.

Please understand that Packt Publishing cannot provide refunds or cancel any order except in the cases described in our Return Policy (for example, where Packt Publishing agrees to replace your printed book because it arrives damaged or with a material defect); otherwise, Packt Publishing will not accept returns.

What is your returns and refunds policy?

Return Policy:

We want you to be happy with your purchase from Packtpub.com. We will not hassle you with returning print books to us. If the print book you receive from us is incorrect, damaged, doesn't work, or is unacceptably late, please contact our Customer Relations Team at [email protected] with the order number and issue details, as explained below:

  1. If you ordered an item (eBook, video, or print book) incorrectly or accidentally, please contact our Customer Relations Team at [email protected] within one hour of placing the order and we will replace the item or refund the item cost.
  2. If your eBook or video file is faulty, or a fault occurs while it is being made available to you (i.e., during download), contact our Customer Relations Team at [email protected] within 14 days of purchase and they will resolve the issue for you.
  3. You will have a choice of a replacement or a refund for the problem items (damaged, defective, or incorrect).
  4. Once the Customer Relations Team confirms that you will be refunded, you should receive the refund within 10 to 12 working days.
  5. If you are requesting a refund for only one book from a multi-item order, we will refund the appropriate single item.
  6. Where the items were shipped under a free shipping offer, there will be no shipping costs to refund.

In the unlikely event that your printed book arrives damaged or with a material defect, contact our Customer Relations Team at [email protected] within 14 days of receipt with appropriate evidence of the damage, and we will work with you to secure a replacement copy if necessary. Please note that each printed book you order from us is individually produced on a print-on-demand basis by Packt's professional book-printing partner.

What tax is charged?

Currently, no tax is charged on the purchase of any print book (subject to change based on laws and regulations). A localized VAT fee is charged only to our European and UK customers on the eBooks, videos, and subscriptions that they buy. GST is charged to Indian customers for eBook and video purchases.

What payment methods can I use?

You can pay with the following card types:

  1. Visa Debit
  2. Visa Credit
  3. MasterCard
  4. PayPal