Summary
In this appendix, we covered some fundamentals that engineers and product owners need to understand when adopting generative AI across their products.
First, we discussed the surprising fact that an LLM trained merely to predict a missing word in a piece of text can nevertheless perform well on many NLP tasks. Domain adaptation can occur via in-context learning, which involves describing your task in natural language and, optionally, providing a few examples. This description is called a prompt, and prompt engineers use various techniques to improve the prompt (for example, refining the task description) and thereby increase the LLM’s performance on specific tasks.
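To make the idea concrete, here is a minimal sketch of assembling a few-shot prompt: a natural-language task description followed by a handful of examples and the new input the model should complete. The sentiment-classification task, the helper name `build_few_shot_prompt`, and the example texts are all illustrative assumptions, not taken from the appendix.

```python
def build_few_shot_prompt(task_description, examples, new_input):
    """Assemble an in-context learning prompt from a task description,
    (input, output) example pairs, and the new input to be completed."""
    lines = [task_description, ""]
    for text, label in examples:
        lines.append(f"Text: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Text: {new_input}")
    lines.append("Sentiment:")  # the LLM is expected to continue from here
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each text as positive or negative.",
    [("I loved this movie!", "positive"),
     ("The service was terrible.", "negative")],
    "What a fantastic experience.",
)
print(prompt)
```

The resulting string would be sent to an LLM as-is; the model treats the examples as demonstrations and continues the text after the final "Sentiment:" line, effectively performing the task without any weight updates.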
Then, we touched on the problem of alignment – the process of making LLMs perform better along specific axes, such as being helpful or harmless. LLMs aren’t trained for these goals, so it’s not surprising that they don’t behave well on these axes by default. However...