Working with Long Context
In the previous chapter, we discussed how you can work with text in LangChain. We also briefly discussed context windows and how they limit the amount of data that large language models (LLMs) can process. Unfortunately, your users won't accept this limitation: they expect you to build applications that can give them concise summaries and answer their questions about documents that might span hundreds of pages!
The most common way to address this limitation is to summarize documents: condensing the text lets your LLM either fit more content into its limited context window or process the same content more efficiently. Luckily, LLMs themselves excel at summarizing long documents and extracting the relevant information. In this chapter, we will discuss how you can answer user questions by summarizing documents in LangChain, and even how you can leverage the newest long-context LLMs to skip the summarization step altogether!
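As a preview, here is a minimal sketch of summarizing documents with LangChain. It assumes the `langchain`, `langchain-core`, and `langchain-openai` packages are installed; the model name and sample text are placeholders you would replace with your own:

```python
from langchain_core.documents import Document
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain.chains.combine_documents import create_stuff_documents_chain

# Any chat model supported by LangChain works here; gpt-4o-mini is illustrative.
llm = ChatOpenAI(model="gpt-4o-mini")

# The prompt must expose a {context} variable for the documents to fill.
prompt = ChatPromptTemplate.from_messages(
    [("system", "Write a concise summary of the following text:\n\n{context}")]
)

# create_stuff_documents_chain "stuffs" all documents into a single prompt call.
chain = create_stuff_documents_chain(llm, prompt)

docs = [Document(page_content="LangChain is a framework for building LLM apps...")]
summary = chain.invoke({"context": docs})
print(summary)
```

This "stuffing" approach works only while the documents fit in the context window; the rest of the chapter covers strategies for when they don't.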
We will cover...