Exploring LangChain memory
As we work with LLMs, a key challenge emerges: they cannot inherently recall past interactions. In essence, they are stateless. A stateless operation does not persist information from one request to the next, which is a problem if you want to build a chatbot. The way around this is to include the full conversation in the context: the ChatGPT client itself passes the entire conversation so far into each prompt as the dialogue progresses.
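To make the idea concrete, here is a minimal sketch of that approach in plain Python, without any LangChain classes: the full transcript is kept in a list and prepended to every request. The `fake_llm` function is a hypothetical stand-in for a real model call, included only so the example is self-contained.

```python
def fake_llm(prompt: str) -> str:
    # Stand-in for a real LLM call; reports how much context it received.
    return f"(reply given {prompt.count(chr(10)) + 1} lines of context)"

class NaiveMemoryChat:
    """Keeps the full conversation and sends it with every request."""

    def __init__(self) -> None:
        self.history: list[str] = []

    def ask(self, user_message: str) -> str:
        self.history.append(f"Human: {user_message}")
        prompt = "\n".join(self.history)  # the whole transcript as context
        answer = fake_llm(prompt)
        self.history.append(f"AI: {answer}")
        return answer

chat = NaiveMemoryChat()
chat.ask("Hi, my name is Sam.")
chat.ask("What is my name?")  # "Sam" is now present in the prompt context
```

This works, but the prompt grows with every turn, which is exactly why the more refined memory mechanisms discussed in this section exist.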
What we want our ChatGPT applications to do is offer stateful interactions, where information is remembered across requests and sessions. To achieve this, we need a memory mechanism; different representations of that memory are then included in the LLM prompt. This section is dedicated to exploring the concept of memory in the context of LLMs. We will delve into different types of memory and the challenges you'll face in using memory with LLMs before taking a deep dive into how to use LangChain to provide memory...