Prompt templates
We looked at a very simple example where we provided text input and received text output. Next, we will build a simple chain where we use a templated input, apply the LLM, and parse the output.
First, why do we need a templated input? Because we want to leverage the ability of LLMs to process tasks defined in natural language. See Appendix 1 for more details on what prompt engineering and in-context learning are about.
For this example, imagine that we take item descriptions from a retail website, and we need to extract certain attributes and return structured JSON as a result. But the attributes to extract change based on the category of the item. We end up with a prompt template – in our example, a natural language description of a task that expects certain pieces as input. LangChain has a rich set of APIs to make prompt templating easier for you:
```python
from langchain.prompts import PromptTemplate
from langchain_core.output_parsers import JsonOutputParser

prompt_template...
```