Generative AI is a subset of artificial intelligence (AI) that creates (i.e., generates) content based on the data and examples it was trained on. Many of today's common generative AI tools and models take natural language as input to create text, images, audio, video, code, and other media. The best-known generative AI tools, such as ChatGPT, are powered by large language models (LLMs). LLMs are trained on massive amounts of text so that they can interpret a user's input (the prompt) and generate a response by predicting the most likely sequence of words and sentences.
Although responses from generative AI tools such as ChatGPT can give the impression that the underlying model critically analyzes the user's prompt, generated outputs are in fact produced by predictive models based on patterns in the text the models were trained on. It is therefore essential to understand both the capabilities and limitations of these tools, particularly when using them for academic and research purposes.
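To make the idea of "predicting the most likely next word" concrete, here is a minimal sketch in Python. It is emphatically not how an LLM works internally (LLMs use neural networks trained on billions of documents); it is a toy bigram model that simply counts, in a tiny made-up corpus, which word most often follows each word, and "predicts" accordingly. The corpus and function names are illustrative inventions.

```python
from collections import Counter, defaultdict

# Toy training "corpus" (illustrative only -- real LLMs train on
# billions of documents, not a few sentences).
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased a mouse ."
).split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- it follows "the" more often than any other word
```

The toy model has no understanding of cats or mats; it only reproduces statistical patterns from its training text. LLMs do the same thing at vastly greater scale and sophistication, which is why their fluent output should not be mistaken for comprehension.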
Definitions according to ChatGPT (July 2023):
Generative AI
Generative AI is a subset of artificial intelligence that aims to create new data from the training data it has been provided. Generative models learn the true data distribution of the training set so as to generate new data points with some variations. These new data points can be in any form, such as images, music, speech, or text.
Large Language Models
A large language model, like me (ChatGPT), is a kind of computer program that's really good at understanding and generating human language. It's trained on lots of text from books, websites, and other sources, so it learns patterns in how words and sentences are put together. This training helps it respond to a wide range of questions and prompts, generate stories, write essays, and more. It's not actually thinking or understanding like a human, but it's good at mimicking human-like text based on what it has learned from its training data.