Generative AI on Google Cloud with LangChain

You're reading from Generative AI on Google Cloud with LangChain: Design scalable generative AI solutions with Python, LangChain, and Vertex AI on Google Cloud

Product type: Paperback
Published: Dec 2024
Publisher: Packt
ISBN-13: 9781835889329
Length: 306 pages
Edition: 1st Edition
Author: Leonid Kuligin
Table of Contents (22 chapters)

Preface
Part 1: Intro to LangChain and Generative AI on Google Cloud
  Chapter 1: Using LangChain with Google Cloud (Free Chapter)
  Chapter 2: Foundational Models on Google Cloud
Part 2: Hallucinations and Grounding Responses
  Chapter 3: Grounding Responses
  Chapter 4: Vector Search on Google Cloud
  Chapter 5: Ingesting Documents
  Chapter 6: Multimodality
Part 3: Common Generative AI Architectures
  Chapter 7: Working with Long Context
  Chapter 8: Building Chatbots
  Chapter 9: Tools and Function Calling
  Chapter 10: Agents
  Chapter 11: Agentic Workflows
Part 4: Designing Generative AI Applications
  Chapter 12: Evaluating GenAI Applications
  Chapter 13: Generative AI System Design
Index
Other Books You May Enjoy
Appendix 1: Overview of Generative AI
Appendix 2: Google Cloud Foundations

Working with Long Context

In the previous chapter, we discussed how you can work with text in LangChain. We also briefly discussed context windows and how they limit the amount of data that large language models (LLMs) can process. Unfortunately, your users will not accept this limitation: they expect you to build applications that can give them concise summaries and answer their questions about documents that might span hundreds of pages!
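To make the limitation concrete, here is a minimal sketch of checking whether a document fits a model's context window. It uses whitespace word counting as a crude stand-in for a real subword tokenizer, and the window and output-budget numbers are illustrative assumptions, not properties of any particular model:

```python
def count_tokens(text: str) -> int:
    """Crude token estimate via whitespace splitting.

    Real models use subword tokenizers, so actual token counts
    are usually higher than this word count.
    """
    return len(text.split())


def fits_context(text: str, context_window: int, reserved_for_output: int = 512) -> bool:
    """Check whether a prompt leaves room in the window for the model's response."""
    return count_tokens(text) <= context_window - reserved_for_output


# A document of 200,000 words cannot fit an 8,192-token window:
doc = "word " * 200_000
print(fits_context(doc, context_window=8_192))  # False
```

In a real application you would replace `count_tokens` with the tokenizer that matches your model, since token counts, not word counts, are what the context window actually limits.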

The most common way of addressing this limitation is to summarize documents. Summarization lets your LLM either process more context or process the same context more efficiently within its limited context window. Luckily, thanks to their architecture, LLMs excel at summarizing long documents and extracting relevant information. In this chapter, we will discuss how you can answer user questions by summarizing documents in LangChain, and even how you can leverage the newest long-context LLMs to skip the summarization step altogether!
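The idea above can be sketched as a map-reduce summarization: split the document into chunks that fit the window, summarize each chunk, then summarize the combined chunk summaries. In this sketch, `summarize` is a placeholder for a real LLM call (for example, a Vertex AI model invoked through LangChain); here it simply keeps the first sentence of its input, and the naive splitter stands in for LangChain's smarter text splitters:

```python
def summarize(text: str) -> str:
    """Placeholder for an LLM summarization call: keep the first sentence."""
    return text.split(". ")[0].rstrip(".") + "."


def split_into_chunks(text: str, chunk_size: int = 1000) -> list[str]:
    """Naive fixed-size splitter by word count (illustrative only)."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]


def map_reduce_summarize(document: str, chunk_size: int = 1000) -> str:
    # Map step: summarize each chunk independently, so every call
    # fits within the model's context window.
    chunk_summaries = [summarize(chunk)
                       for chunk in split_into_chunks(document, chunk_size)]
    # Reduce step: summarize the concatenated chunk summaries
    # into a single final summary.
    return summarize(" ".join(chunk_summaries))
```

The map step parallelizes naturally since chunks are independent; the trade-off is that the reduce step can lose cross-chunk detail, which is one reason long-context models are attractive when they are available.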

We will cover...
