Natural Language Processing with Java Cookbook

You're reading from Natural Language Processing with Java Cookbook: Over 70 recipes to create linguistic and language translation applications using Java libraries

Product type: Paperback
Published in: Apr 2019
Publisher: Packt
ISBN-13: 9781789801156
Length: 386 pages
Edition: 1st Edition
Authors (2): Richard M. Reese, Richard M Reese
Table of Contents (14)

Preface
1. Preparing Text for Analysis and Tokenization
2. Isolating Sentences within a Document
3. Performing Name Entity Recognition
4. Detecting POS Using Neural Networks
5. Performing Text Classification
6. Finding Relationships within Text
7. Language Identification and Translation
8. Identifying Semantic Similarities within Text
9. Common Text Processing and Generation Tasks
10. Extracting Data for Use in NLP Analysis
11. Creating a Chatbot
12. Installation and Configuration
13. Other Books You May Enjoy

Tokenization using maximum entropy

Maximum entropy is a statistical classification technique. It takes various characteristics of a subject, such as the use of specialized words or the presence of whiskers in a picture, and assigns a weight to each characteristic. These weights are eventually added up and normalized to a value between 0 and 1, indicating the probability that the subject is of a particular kind. With a high enough level of confidence, we can conclude that the text is all about high-energy physics or that we have a picture of a cat.
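
To make the idea concrete, here is a small, purely illustrative sketch (it is not OpenNLP's implementation, and the feature names and weights are hypothetical): the weights of the features that are present are summed and squashed into a value between 0 and 1 with the logistic function, which can then be read as a probability:

import java.util.List;
import java.util.Map;

public class MaxEntSketch {

    // Hypothetical feature weights, chosen only for illustration.
    private static final Map<String, Double> WEIGHTS = Map.of(
            "contains 'quark'", 2.0,
            "contains 'boson'", 1.5,
            "contains 'recipe'", -1.0);

    // Sum the weights of the active features and normalize the sum to a
    // value between 0 and 1 using the logistic function.
    static double probability(List<String> activeFeatures) {
        double sum = 0.0;
        for (String feature : activeFeatures) {
            sum += WEIGHTS.getOrDefault(feature, 0.0);
        }
        return 1.0 / (1.0 + Math.exp(-sum));
    }

    public static void main(String[] args) {
        double p = probability(List.of("contains 'quark'", "contains 'boson'"));
        System.out.printf("P(high-energy physics) = %.2f%n", p);
    }
}

OpenNLP's TokenizerME applies the same principle, using its trained model to score whether each candidate position in the text is a token boundary.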

If you're interested, you can find a more complete explanation of this technique at https://nadesnotes.wordpress.com/2016/09/05/natural-language-processing-nlp-fundamentals-maximum-entropy-maxent/. In this recipe, we will demonstrate the use of maximum entropy with the OpenNLP TokenizerME class.

Getting ready

To prepare, we need to do the following:

  1. Create a new Maven project.
  2. Download the en-token.bin file from http://opennlp.sourceforge.net/models-1.5/. Save it in the root directory of the project.
  3. Add the following POM dependency to your project:
<dependency>
    <groupId>org.apache.opennlp</groupId>
    <artifactId>opennlp-tools</artifactId>
    <version>1.9.0</version>
</dependency>

How to do it...

Let's go through the following steps:

  1. Add the following imports to the project:
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import opennlp.tools.tokenize.Tokenizer;
import opennlp.tools.tokenize.TokenizerME;
import opennlp.tools.tokenize.TokenizerModel;
  2. Next, add the following code to the main method. This sequence initializes the text to be processed and creates an input stream to read in the tokenization model. Modify the first argument of the File constructor to reflect the path to the model file:
String sampleText =
        "In addition, the rook was moved too far to be effective.";
try (InputStream modelInputStream = new FileInputStream(
        new File("...", "en-token.bin"))) {
    ...
} catch (FileNotFoundException e) {
    // Handle exception
} catch (IOException e) {
    // Handle exception
}
  3. Add the following code to the try block. It creates a tokenizer model and then the actual tokenizer:
TokenizerModel tokenizerModel =
        new TokenizerModel(modelInputStream);
Tokenizer tokenizer = new TokenizerME(tokenizerModel);
  4. Insert the following code sequence, which uses the tokenize method to create an array of tokens and then displays them:
String[] tokenList = tokenizer.tokenize(sampleText);
for (String token : tokenList) {
    System.out.println(token);
}
  5. Next, execute the program. You should get the following output:
In
addition
,
the
rook
was
moved
too
far
to
be
effective
.

How it works...

The sampleText variable holds the test string. A try-with-resources block is used to automatically close the InputStream. The new FileInputStream statement throws a FileNotFoundException, while the new TokenizerModel(modelInputStream) statement throws an IOException; both exceptions need to be handled.

The code examples in this book that deal with exception handling include a comment indicating where the exception should be handled. You are encouraged to add the appropriate handling code, which will often be a print statement or a logging call.

An instance of the TokenizerModel class is created from the en-token.bin model, which has been trained to recognize English text. An instance of the TokenizerME class serves as the tokenizer; its tokenize method is invoked with the sample text and returns an array of strings, which are then displayed. Note that the comma and the period are treated as separate tokens.
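
For reference, the fragments above can be assembled into a single runnable class. This is a minimal sketch (the class name is arbitrary) that assumes en-token.bin sits in the project root and collapses the recipe's two catch blocks into one, since FileNotFoundException is a subclass of IOException:

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import opennlp.tools.tokenize.Tokenizer;
import opennlp.tools.tokenize.TokenizerME;
import opennlp.tools.tokenize.TokenizerModel;

public class MaxEntTokenization {

    public static void main(String[] args) {
        String sampleText =
                "In addition, the rook was moved too far to be effective.";

        // Assumes the model file was saved in the project's root directory.
        try (InputStream modelInputStream = new FileInputStream(
                new File("en-token.bin"))) {
            // Create the maximum entropy tokenization model and the tokenizer.
            TokenizerModel tokenizerModel = new TokenizerModel(modelInputStream);
            Tokenizer tokenizer = new TokenizerME(tokenizerModel);

            // Tokenize the sample text and print one token per line.
            for (String token : tokenizer.tokenize(sampleText)) {
                System.out.println(token);
            }
        } catch (IOException e) {
            // FileNotFoundException is a subclass of IOException, so one catch
            // block covers both; a real application might log the exception.
            System.err.println("Could not load the tokenizer model: "
                    + e.getMessage());
        }
    }
}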

See also

You have been reading a chapter from
Natural Language Processing with Java Cookbook