Part 2: Hallucinations and Grounding Responses
Two of the main concerns when building generative AI applications are hallucinations and keeping LLMs up to date with changes and new knowledge in the outside world. In this part, we discuss key patterns for grounding responses and for implementing memory layers outside of the LLM itself.
This part has the following chapters:
- Chapter 3, Grounding Responses on Google Cloud
- Chapter 4, Vector Search on Google Cloud
- Chapter 5, Advanced Techniques for Parsing and Ingesting Documents
- Chapter 6, Multimodality