Preface
Generative AI is on the rise, and its enterprise adoption is growing quickly. Developers need to understand the new technology, and they're expected to deliver business value quickly and reduce time to market. That's where LangChain comes in: a framework for rapidly developing generative AI applications, paired with enterprise-ready, highly scalable Google Cloud services that provide foundation models, vector search, and the other capabilities such applications require.
In this book, we'll explore the basics of the LangChain framework and its core interfaces, and then we'll start building applications on Google Cloud using the Gemini model family and the Vertex AI platform. We'll learn how to compose generative AI workflows, access external knowledge, and chain large language models (LLMs) to solve specific domain problems. You'll learn commonly used patterns and techniques such as retrieval-augmented generation (RAG), processing long documents that don't fit into an LLM's context window, implementing external memory layers, using third-party APIs to extend an LLM's capabilities, and developing agentic workflows.
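To give a flavor of what building with LangChain on Google Cloud looks like, here is a minimal sketch of a prompt-to-model chain using a Gemini model on Vertex AI; the specific model name and prompt are illustrative assumptions, and the example presumes Vertex AI credentials and a Google Cloud project are already configured.

```python
# A minimal sketch: chain a prompt, a Gemini model on Vertex AI, and an output
# parser with LangChain's pipe (LCEL) syntax. Model name is an assumption.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_google_vertexai import ChatVertexAI

# Prompt template with a single input variable.
prompt = ChatPromptTemplate.from_template("Summarize in one sentence: {text}")

# Gemini model served through Vertex AI (assumes credentials are set up).
llm = ChatVertexAI(model_name="gemini-1.5-flash")

# Compose prompt -> model -> string parser into a single runnable chain.
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"text": "LangChain helps developers build LLM applications."}))
```

The chapters ahead build on this simple pattern, adding retrieval, memory, tools, and agents step by step.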