Deep Learning with TensorFlow and Keras – 3rd edition

Build and deploy supervised, unsupervised, deep, and reinforcement learning models

Product type: Paperback
Published: October 2022
Publisher: Packt
ISBN-13: 9781803232911
Length: 698 pages
Edition: 3rd Edition
Authors (3): Sujit Pal, Antonio Gulli, Dr. Amita Kapoor
Table of Contents (23)

Preface
1. Neural Network Foundations with TF
2. Regression and Classification (FREE CHAPTER)
3. Convolutional Neural Networks
4. Word Embeddings
5. Recurrent Neural Networks
6. Transformers
7. Unsupervised Learning
8. Autoencoders
9. Generative Models
10. Self-Supervised Learning
11. Reinforcement Learning
12. Probabilistic TensorFlow
13. An Introduction to AutoML
14. The Math Behind Deep Learning
15. Tensor Processing Unit
16. Other Useful Deep Learning Libraries
17. Graph Neural Networks
18. Machine Learning Best Practices
19. TensorFlow 2 Ecosystem
20. Advanced Convolutional Neural Networks
21. Other Books You May Enjoy
22. Index

Character and subword embeddings

Another evolution of the basic word embedding strategy has been to look at character and subword embeddings instead. Character-level embeddings were first proposed by Zhang and LeCun [17] and have some key advantages over word embeddings.

First, a character vocabulary is finite and small; a vocabulary for English would contain around 70 symbols (26 letters, 10 digits, and the rest punctuation and other special characters), leading to character models that are also small and compact. Second, unlike word embeddings, which provide vectors for a large but finite set of words, there is no concept of out-of-vocabulary for character embeddings, since any word can be represented by the vocabulary. Third, character embeddings tend to work better for rare and misspelled words because character frequencies are far less skewed than word frequencies.
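
To make the first two advantages concrete, below is a minimal sketch, not taken from the book, of a character-level embedding model in TF 2.x Keras. The roughly 50-symbol vocabulary, the 16-dimensional embedding, and the toy binary classification head are all illustrative assumptions.

import tensorflow as tf

# Build a small character vocabulary: letters, digits, space, punctuation.
# Index 0 is reserved for padding (and, in this sketch, unknown characters).
chars = list("abcdefghijklmnopqrstuvwxyz0123456789 .,;:!?'\"-()")
char2idx = {c: i + 1 for i, c in enumerate(chars)}
vocab_size = len(char2idx) + 1  # ~50 symbols, tiny next to a word vocabulary
max_len = 100  # pad/truncate every input to 100 characters

def encode(text, max_len=max_len):
    """Map any string to a fixed-length sequence of character indices."""
    ids = [char2idx.get(c, 0) for c in text.lower()[:max_len]]
    return ids + [0] * (max_len - len(ids))

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 16, input_length=max_len),  # 16-dim character vectors
    tf.keras.layers.Conv1D(64, 5, activation="relu"),  # n-gram-like character features
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # toy binary classifier
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Even a misspelled, never-seen word maps into the same vocabulary,
# so there is no out-of-vocabulary problem at the character level:
x = tf.constant([encode("charcter embedings handle typos")])
print(model.predict(x).shape)  # (1, 1)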

Character embeddings tend to work better for applications that require the...
