Glossary of Generative AI Terms for Employees


Generative AI is a fast-evolving field, and with it comes a wide range of terms and acronyms. Understanding the language of AI can provide an entry point for anyone interested in becoming more experienced with this technology.

Agents

Agents are computer systems or programs that act on their own to complete specific tasks. They can observe their environment, make decisions, and take action without needing direct human input.

AGI (Artificial General Intelligence)

AGI describes AI systems that can learn, reason, and use intelligence for a wide range of subjects, similar to a human's cognitive abilities. Unlike the majority of modern AI models, which are limited to specific tasks, AGI could potentially do anything a person can do intellectually, including using language, logic, and perception.

Annotation

Annotation refers to labeling data, such as highlighting parts of a sentence or tagging objects in an image, so that AI systems can learn from it. These labeled data sets teach AI how to recognize patterns, make sense of information, and understand nuances.

ASI (Artificial Super Intelligence)

ASI is a hypothetical form of AI whose capabilities would far surpass human intelligence. It would outperform humans in every domain, from creativity and scientific reasoning to emotional intelligence and strategic thinking.

Bias

Bias in AI can mean two things:

  1. Unfairness or skewed results in AI outputs caused by biased training data. This type of bias can show itself in several forms, including racial, political, socioeconomic, and cultural bias.
  2. A parameter that helps fine-tune a model's behavior. Bias and weights work together to help train a neural network so that it can accurately represent and work with complicated patterns.

Bot

A bot, short for robot, is a program designed to perform automated tasks. In generative AI, bots can interpret user inputs and generate responses, ranging from simple functions that follow predefined rules to more complex AI-driven bots capable of contextual interaction. Some tools, such as ChatGPT, allow users to create their own bots.

Chatbot

A chatbot is a software application designed to simulate conversation with human users through text or voice. It uses natural language processing (NLP) to interpret user inputs and respond accordingly. Chatbots can be rule-based, responding to specific keywords, or AI-powered, allowing for more contextual dialogue. Their complexity can range from answering frequently asked questions to handling multi-turn conversations across various topics.

ChatGPT

ChatGPT is an AI-powered conversational tool developed by OpenAI. It's built on a language model designed to understand and generate human-like responses based on user input. The tool has multiple models, each of which uses vast amounts of training data and complex neural network architectures to deliver responses that mimic natural conversation, assist with writing, and solve problems across a wide range of subjects.

Completion

A completion is an AI's response to your input. For example, if you type a question into ChatGPT, the model's answer is called a completion.

Conversational AI

Conversational AI refers to systems that can engage in natural-sounding dialogue with humans. These systems combine multiple AI disciplines, such as NLP, machine learning, and sometimes speech recognition, to process language inputs and generate relevant responses. They are used in customer service, virtual assistants, and personal productivity tools, which can enhance both the customer and employee experience by streamlining communication and support.

Embedding

Embeddings are a way for AI to turn words, phrases, or entire documents into numbers, or vectors, in a high-dimensional space. These vectors capture both the meaning and structure of the language, helping the AI understand how words relate to one another. For example, words that are used in similar contexts or have similar meanings end up with vectors that are close together. This helps the model recognize things like synonyms, analogies, and even subtler details like tone or sentiment.
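
As an illustration, here is a minimal sketch of how embedding similarity works, using made-up three-dimensional vectors (real embeddings have hundreds or thousands of dimensions, and the numbers below are invented for the example):

```python
import math

# Toy 3-dimensional "embeddings" -- the values are made up for illustration.
embeddings = {
    "cat": [0.9, 0.8, 0.1],
    "dog": [0.8, 0.9, 0.2],
    "car": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Measure how closely two vectors point the same way (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Words used in similar contexts end up with nearby vectors, so related
# words score higher than unrelated ones.
cat_dog = cosine_similarity(embeddings["cat"], embeddings["dog"])
cat_car = cosine_similarity(embeddings["cat"], embeddings["car"])
```

Here "cat" and "dog" score much closer to each other than either does to "car," which is how a model can recognize that the first two are related concepts.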

Few-Shot Learning

When an AI model learns to perform tasks using only a small number of examples, it's called few-shot learning. Traditional models usually need large sets of data to learn, but few-shot models can generalize from just a few samples, which is useful in areas where data is limited.
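
A few-shot setup can be as simple as placing a handful of labeled examples directly in the prompt; this hypothetical sentiment-labeling prompt sketches the idea:

```python
# A few-shot prompt: a handful of labeled examples, then a new case.
# The model is expected to infer the pattern from the examples alone.
few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: "I loved this product, it works perfectly."
Sentiment: Positive

Review: "Terrible quality, broke after one day."
Sentiment: Negative

Review: "The support team was friendly and fast."
Sentiment:"""
```

Given this input, a capable model would typically continue the pattern and label the third review, even though it was never explicitly trained on this task.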

Fine-Tuning

Fine-tuning is when you take a general AI model and retrain it on a smaller, specific data set so it performs better in a certain area. For example, you might fine-tune a language model on legal documents so it gets better at understanding legal terms. The goal is to keep all of the broad knowledge it already has but make it more specialized for your needs.

Generative AI

Generative AI is a type of AI that creates new content, such as images, videos, music, or text, based on a prompt. It's trained on existing data and uses that knowledge to generate new material that looks or sounds original. Tools like DALL-E can create images from text, while systems like Synthesia can turn text into videos. Large language models like ChatGPT work by predicting the next word in a sentence to generate human-like text.

GPT (Generative Pre-Trained Transformer)

A GPT is a type of deep learning model built on the transformer architecture. These models are first trained on massive collections of text to learn language patterns, structure, and context. Once trained, they can generate human-like text by predicting the next word in a sequence based on a given prompt.
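
Next-word prediction can be illustrated with a drastically simplified sketch that just counts which word follows which in a toy text (real GPT models use neural networks over subword tokens, not raw counts):

```python
from collections import Counter, defaultdict

# Tiny "training corpus" -- invented for illustration.
training_text = "the cat sat on the mat the cat slept on the mat"
words = training_text.split()

# Count, for each word, which words follow it and how often.
followers = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in training."""
    return followers[word].most_common(1)[0][0]
```

After "sat," the only word ever seen is "on," so the sketch predicts it; a real GPT makes the same kind of prediction, but over probabilities learned by a deep neural network.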

Hallucinations

In AI, hallucinations refer to situations where a model generates incorrect or misleading information. These errors are often caused by a model's reliance on using predictions rather than facts. Hallucinations can result from gaps in the training data, overgeneralization, or biases in how the model was created.

Inference

Inference is the phase in which a trained AI uses what it has learned to process new data or create responses. For example, when you ask ChatGPT a question, the model is performing inference to answer you.

Large Language Model (LLM)

A large language model is a version of AI trained on large amounts of text so it can understand and generate human language. These models use deep learning to perform tasks like answering questions, summarizing articles, and translating languages.

Model

An AI model is a framework designed to learn from data and then make predictions, make decisions, or generate content. It consists of algorithms, parameters, and structures that work together to identify patterns and relationships within training data. In generative AI, some models produce language-based outputs, while others focus on image generation or audio creation.

NLP (Natural Language Processing)

NLP is the field of AI, grounded in linguistics and computer science, that focuses on helping computers understand and work with human language. NLP is the foundation for tools like chatbots, virtual assistants, and translation apps.

Parameters

Parameters are the internal values that shape how an AI model works. They're what the model learns and adjusts during training to improve its accuracy. In neural networks, two of the most important types of parameters are weights and biases. Weights control how much influence an input has on a given part of the network, while biases help shift the output independently of those inputs. Together, they help the model make decisions based on patterns in the data.
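
The roles of weights and biases can be sketched with a single artificial neuron; the numbers below are arbitrary illustrations, not trained values:

```python
def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of inputs, shifted by a bias."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return weighted_sum + bias

# Weights decide how much each input matters; the bias shifts the output
# regardless of the inputs. Training adjusts both kinds of parameters.
output = neuron(inputs=[1.0, 2.0], weights=[0.5, -0.25], bias=0.1)
# 0.5 * 1.0 + (-0.25) * 2.0 + 0.1 = 0.1
```

A large model is, at heart, an enormous number of units like this one, with billions of weights and biases tuned during training.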

Prompt

A prompt is the input text given to a generative AI model to guide its output. Prompts can be as simple as a question or as complex as a structured command. The quality and clarity of a prompt significantly affect the relevance and accuracy of the model's response.

Prompt Engineering

Prompt engineering is the process of designing effective prompts that get the AI to produce specific, useful, or high-quality responses. This involves understanding how the model interprets instructions and crafting input that guides it effectively, often through strategic phrasing, formatting, or context-setting.

Reinforcement Learning

Reinforcement learning is a type of machine learning in which an agent learns by interacting with its environment and receiving feedback in the form of rewards or penalties. The agent tries a variety of actions and gradually learns which behaviors produce the most favorable outcomes. Over time, it develops a strategy that maximizes long-term rewards.
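
The reward-feedback loop can be sketched with a toy two-action problem; the reward values are made up, and the agent must discover which action pays off better:

```python
import random

random.seed(0)  # make the sketch reproducible

true_rewards = {"A": 1.0, "B": 0.2}   # hidden from the agent
estimates = {"A": 0.0, "B": 0.0}      # the agent's learned value estimates
counts = {"A": 0, "B": 0}

for step in range(200):
    # Explore 10% of the time; otherwise exploit the current best estimate.
    if random.random() < 0.1:
        action = random.choice(["A", "B"])
    else:
        action = max(estimates, key=estimates.get)
    # Noisy feedback from the environment (the "reward or penalty").
    reward = true_rewards[action] + random.uniform(-0.1, 0.1)
    # Update the running average estimate for the chosen action.
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]
```

Over the 200 steps the agent's estimate for action A climbs well above its estimate for B, so it learns to favor the behavior with the better long-term payoff.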

Retrieval Augmented Generation (RAG)

Retrieval-augmented generation combines retrieval-based and generative models. RAG allows AI to retrieve relevant information from a knowledge base and use a generative model to produce responses based on that information.
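
A toy sketch of the retrieve-then-generate flow, assuming a made-up knowledge base and simple keyword overlap in place of a real retriever:

```python
# Invented documents standing in for a company knowledge base.
knowledge_base = [
    "Our office is open Monday through Friday, 9am to 5pm.",
    "Employees accrue 15 vacation days per year.",
    "The cafeteria serves lunch from 11:30am to 1:30pm.",
]

def retrieve(question, documents):
    """Return the document sharing the most words with the question.
    (Real RAG systems typically compare embeddings, not raw words.)"""
    question_words = set(question.lower().split())
    return max(documents, key=lambda d: len(question_words & set(d.lower().split())))

question = "How many vacation days do employees get?"
context = retrieve(question, knowledge_base)

# The retrieved text is injected into the prompt so the generative model
# answers from the knowledge base instead of from memory alone.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

The key idea is the two stages: first fetch the most relevant document, then hand it to the generative model inside the prompt, which reduces hallucinations about facts the base model never saw.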

Semantic Network

A semantic network maps out how ideas or concepts are semantically related. Each concept is represented as a node, and the connections between them, known as edges, capture semantic meaning. For example, "a dog is a kind of animal" captures the semantic relationship between "dog" and "animal." Semantic networks help AI systems organize and reason about language, making it easier to interpret context and draw inferences based on relationships.
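
The node-and-edge idea can be sketched as a small set of labeled triples, letting a program follow "is a" links to draw simple inferences:

```python
# A tiny semantic network: each triple is (node, relationship, node).
edges = [
    ("dog", "is_a", "animal"),
    ("cat", "is_a", "animal"),
    ("dog", "has", "tail"),
    ("animal", "is_a", "living_thing"),
]

def is_a(concept, category):
    """Follow 'is_a' edges transitively: a dog is an animal, and an
    animal is a living thing, so a dog is also a living thing."""
    parents = {b for (a, rel, b) in edges if a == concept and rel == "is_a"}
    if category in parents:
        return True
    return any(is_a(parent, category) for parent in parents)
```

Even this tiny network supports an inference that was never stated directly: it can conclude a dog is a living thing by chaining two "is a" edges.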

Temperature

Temperature controls how creative or predictable an AI's output is. Lower values make responses more focused and safe. Higher values lead to more surprising or creative results.
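
Under the hood, temperature rescales a model's raw scores before they are turned into probabilities; this sketch shows the effect on three hypothetical candidate-word scores:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw scores into probabilities; temperature reshapes the mix.
    Dividing by a small temperature exaggerates score differences (focused
    output); a large temperature flattens them (more variety)."""
    scaled = [score / temperature for score in logits]
    exps = [math.exp(s - max(scaled)) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for three candidate next words.
logits = [2.0, 1.0, 0.1]

low_temp = softmax_with_temperature(logits, 0.5)    # sharper: top word dominates
high_temp = softmax_with_temperature(logits, 2.0)   # flatter: more surprising picks
```

At the low temperature the top-scoring word takes most of the probability mass; at the high temperature the three candidates are much closer, which is why higher temperatures produce more varied output.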

Tokens

Tokens are the smallest parts of data an AI processes. In NLP, these can be words, parts of words, or characters. AI models calculate processing costs and limits based on how many tokens your input and output use.
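
As a rough illustration, a crude word-and-punctuation split gives a feel for how token counts accumulate (real tokenizers split text into learned subword pieces, so actual counts will differ):

```python
import re

def rough_token_count(text):
    """Approximate a token count by splitting on words and punctuation.
    This is only a sketch -- production tokenizers use learned subword
    vocabularies, so their counts won't match this exactly."""
    return len(re.findall(r"\w+|[^\w\s]", text))

count = rough_token_count("Hello, world! How are you?")
# Splits into: Hello , world ! How are you ?  -> 8 pieces
```

Since usage limits and costs are billed per token, even punctuation contributes to the total, which is why a short sentence can use more tokens than its word count suggests.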

Training

Training is how an AI model learns. The AI is given large data sets and constantly adjusts its parameters to get better at predicting the right output. Every time it gets something wrong, it makes a small correction. Over time, these corrections add up, and the model becomes more accurate and capable.
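
The adjust-on-error loop can be sketched with a one-parameter "model" learning the rule y = 2x from a few examples:

```python
# Inputs paired with desired outputs (the hidden rule is y = 2x).
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0                 # the model's single parameter, initially wrong
learning_rate = 0.05    # how big each correction is

for epoch in range(100):
    for x, y in data:
        prediction = w * x
        error = prediction - y
        # Nudge the parameter slightly in the direction that shrinks the error.
        w -= learning_rate * error * x
```

Each pass makes only a small correction, but after many passes the parameter settles very close to 2.0, the value that reproduces the training data. Real training works the same way, just with billions of parameters instead of one.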

Transformer

A transformer is a deep learning model architecture that changed how AI processes language. Unlike older models that analyze words one at a time, transformers look at an entire sequence, such as a full sentence or paragraph, all at once. They use a technique called "self-attention" to figure out which words are most relevant to each other, even if they're far apart in the text. This ability to understand relationships and context across a whole passage is what makes transformers so effective at tasks like translating text, summarizing content, or generating language.
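
Self-attention can be sketched in a simplified form where each word's vector becomes a similarity-weighted average of every word's vector (real transformers additionally learn separate query, key, and value projections, which are omitted here):

```python
import math

def softmax(scores):
    """Turn raw similarity scores into weights that sum to 1."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(vectors):
    """Each position's output is a weighted average of all positions'
    vectors, weighted by dot-product similarity -- so every word can
    'look at' every other word in the sequence at once."""
    outputs = []
    for query in vectors:
        scores = [sum(q * k for q, k in zip(query, key)) for key in vectors]
        weights = softmax(scores)
        blended = [
            sum(w * vec[dim] for w, vec in zip(weights, vectors))
            for dim in range(len(query))
        ]
        outputs.append(blended)
    return outputs

# Toy 2-dimensional vectors standing in for a three-word sentence.
sentence = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
attended = self_attention(sentence)
```

Because every position attends to every other position in one step, distant but related words can influence each other directly, which is what older one-word-at-a-time models struggled with.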

Tuning

Tuning is the process of adjusting a pre-trained AI model to improve its performance on a particular task.

Zero-Shot Learning

Zero-shot learning is when an AI model can handle a task it hasn't been directly trained to do. Instead of needing examples for everything, the model uses what it already knows about how different ideas are connected to make educated guesses.
