What is a prompt?
As discussed earlier, LLMs are pretrained on large, publicly available datasets, which makes them powerful and versatile. These models typically have billions of parameters and can handle a wide range of tasks out of the box, without additional training.
Users just need to ask the right question, with relevant context, to get the best output from an LLM. The plain-text comments or questions that act as instructions to an LLM are called prompts, and the technique of asking the right questions with the corresponding context is called prompt engineering. When interacting with LLMs, it is important to provide prompts with precise information and, where needed, to supplement them with additional relevant context in order to get the most accurate results.

The same is true when interacting with code assistants, since most code assistants are backed by an LLM. As a user, you should provide a code assistant with prompts that give simple, clear instructions along with any relevant context.
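As a concrete illustration, the following is a minimal sketch contrasting a vague prompt with a precise, context-rich one. It assumes the OpenAI Python SDK and a particular model name purely for the example; neither is prescribed here, and any LLM client would be used in the same way.

```python
# A minimal sketch of prompt engineering, assuming the OpenAI Python SDK
# (pip install openai); any LLM client works similarly.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# A vague prompt: the model has to guess the language, style, and edge cases.
vague_prompt = "Write a function to parse dates."

# A precise prompt: a clear instruction plus the relevant context.
precise_prompt = (
    "Write a Python function parse_date(text: str) -> datetime.date that parses "
    "dates in the formats 'YYYY-MM-DD' and 'DD/MM/YYYY'. Raise ValueError for "
    "anything else, and include a short docstring with usage examples."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: substitute whichever model you use
    messages=[{"role": "user", "content": precise_prompt}],
)
print(response.choices[0].message.content)
```

The second prompt spells out the language, the function signature, the expected input formats, and the failure behavior; this is exactly the kind of context that helps an LLM or code assistant return an accurate result on the first attempt.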