Neuro-Symbolic AI

You're reading from Neuro-Symbolic AI: Design transparent and trustworthy systems that understand the world as you do

Product type: Paperback
Published: May 2023
Publisher: Packt
ISBN-13: 9781804617625
Length: 196 pages
Edition: 1st
Authors (2): Alexiei Dingli, David Farrugia
Table of Contents (12)

Preface
Chapter 1: The Evolution and Pitfalls of AI
Chapter 2: The Rise and Fall of Symbolic AI
Chapter 3: The Neural Networks Revolution
Chapter 4: The Need for Explainable AI
Chapter 5: Introducing Neuro-Symbolic AI – the Next Level of AI
Chapter 6: A Marriage of Neurons and Symbols – Opportunities and Obstacles
Chapter 7: Applications of Neuro-Symbolic AI
Chapter 8: Neuro-Symbolic Programming in Python
Chapter 9: The Future of AI
Index
Other Books You May Enjoy

Solution 1 – logic tensor networks

In our first Python NSAI example, we will implement a system based on the Logic Tensor Network (LTN) framework.

In short, LTNs are a subclass of neural networks that leverage logical propositions (that is, symbolic logic). LTNs represent the knowledge base as logical formulas and use deep learning to learn the weights of those formulas. The logical propositions act as soft constraints on the neural network's inference: if the network's output violates them, it is penalized. As a result, during training an LTN has two main objectives: 1) satisfy the logical propositions, and 2) improve its predictive performance on the target objective. Using logical propositions as model constraints therefore provides a way to integrate prior domain knowledge directly into the neural network.
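Before turning to the LTN framework itself, the following minimal PyTorch sketch illustrates the principle just described: a logical rule is compiled into a differentiable satisfaction score, and one minus that score is added to the usual supervised loss as a soft constraint. The toy data, the rule, and the `rule_satisfaction` helper are illustrative assumptions for this sketch, not part of the LTN framework's API.

```python
# A minimal sketch of the soft-constraint idea behind LTNs (illustrative only,
# not the LTN library's API): a logical rule becomes a differentiable
# "satisfaction" score, and (1 - satisfaction) is added to the supervised loss.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: 2D points labeled 1 if x0 + x1 > 1, else 0 (assumed for this sketch)
X = torch.rand(256, 2)
y = (X.sum(dim=1) > 1.0).float().unsqueeze(1)

# A small neural predicate P(x): the probability that x belongs to the positive class
model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

bce = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

def rule_satisfaction(p, x):
    # Fuzzy encoding of an assumed prior rule: forall x, x0 > 0.9 -> P(x).
    # Implication a -> b is encoded as max(1 - a, b); "forall" as the batch mean.
    antecedent = (x[:, 0] > 0.9).float().unsqueeze(1)
    implication = torch.clamp(torch.maximum(1.0 - antecedent, p), 0.0, 1.0)
    return implication.mean()

lambda_rule = 0.5  # weight of the soft constraint (assumed value)

for epoch in range(200):
    optimizer.zero_grad()
    p = model(X)
    sat = rule_satisfaction(p, X)
    # Objective 1: predictive performance; Objective 2: satisfy the proposition
    loss = bce(p, y) + lambda_rule * (1.0 - sat)
    loss.backward()
    optimizer.step()

print(f"final loss={loss.item():.3f}, rule satisfaction={sat.item():.3f}")
```

Increasing the assumed `lambda_rule` weight pushes the network to honor the rule more strictly, possibly at the cost of raw predictive accuracy, which mirrors the two competing training objectives described above.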

Interested readers can find the full LTN paper at https://arxiv.org/pdf/1606...
