Recent advances in AI
The first advance is large language models (LLMs). Language models themselves are a much older technique for reducing human language to something a machine can understand and use for speech recognition, translation, and more. A “model,” in broader AI terms, is something trained on a dataset that another tool or service can use to recognize patterns and make decisions based on those patterns, typically without human intervention. So, while a language model is built to work with language, there are also models built to recognize sound, video, and more.
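To make that concrete, here is a minimal sketch of using a pretrained language model to continue a prompt. It assumes the Hugging Face transformers library and the publicly available GPT-2 checkpoint, neither of which is named in the text; the point is only to show the “trained model that recognizes patterns” idea in action.

```python
# Minimal sketch (assumes the Hugging Face "transformers" library and the
# public GPT-2 checkpoint are installed/available; not named in the text).
# A pretrained language model continues a prompt by predicting likely next
# tokens -- the pattern recognition described above.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Language models can be used for", max_new_tokens=20)
print(result[0]["generated_text"])
```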
Language models were typically built on relatively small, task-specific datasets until the arrival of LLMs between roughly 2017 and 2020. Google’s 2017 “Transformer” proposal, the “T” in GPT (https://en.wikipedia.org/wiki/Generative_pre-trained_transformer), accelerated the growth and potential of LLMs by making it practical to train them on much larger datasets that often include more general content.