Reference

Glossary

Short definitions for overloaded terms in the AI tooling ecosystem.

Page role

Use this page for fast definitions, not full lessons.

The glossary is the shortest layer on the site: one term, one meaning, enough context to keep the stack readable.

If you want the metaphor, the lab link, the real-world tools, or the lifecycle status for a category, jump to the catalog instead.

Beads
A distributed graph issue tracker for AI agents, exposed through the bd CLI and designed as durable task memory.
Agent
A model-driven loop that observes, reasons, chooses actions, uses tools, and evaluates progress toward a goal. See agents and agent systems.
Agent framework
A developer toolkit for building agent loops, workflows, memory, tools, routing, and multi-agent systems.
Agent runtime
The software loop and state machine around the model: planning, memory, tool routing, retries, and termination. See agents and agent systems.
AI host
The application that owns the user experience and coordinates models, context, tools, approvals, and results.
API
A programmatic interface exposed by software. Agents often call APIs directly or through wrappers and protocols.
API key
A credential used to authenticate programmatic access to a hosted API. Treat it like a secret. See API key security.
CLI tool
A command-line program. Agents like CLIs because they are composable and can often be run in a controlled workspace.
CLI AI wrapper
A terminal-facing AI application that wraps models, tools, context, approvals, and sometimes MCP or skills into one workflow.
Commercial hosted offering
A paid product where important parts run as a provider-operated service. Client code, SDKs, or docs may be public while the model or service remains proprietary.
Context
Information provided to the model for the current task: user prompt, files, tool results, memory, docs, or retrieved data.
Context window
The amount of input and output a model can consider in one request. Larger windows help with long tasks but do not remove the need for retrieval, memory, or good prompts.
Direct model provider
A company or service that exposes its own hosted models through APIs, SDKs, and product surfaces.
Embedding model
A model that turns text or other inputs into vectors for search, retrieval, clustering, or similarity workflows.
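A minimal sketch of what "vectors for similarity" means in practice: once inputs are embedded, closeness is usually measured with cosine similarity. The tiny 3-dimensional vectors here are stand-ins; real embedding models emit hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a, b):
    # Embeddings are just vectors; similarity compares their directions.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" for a query and a document.
query = [0.9, 0.1, 0.0]
doc = [0.8, 0.2, 0.1]
score = cosine_similarity(query, doc)  # closer to 1.0 means more similar
```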
Evaluation
A repeatable check that asks whether an AI tool or agent behaved as expected for a known input.
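A sketch of the "repeatable check" idea, assuming a hypothetical tool under test (here, a whitespace normalizer): run known inputs, compare against expected outputs, collect failures.

```python
def evaluate(tool_fn, cases):
    # A repeatable check: known inputs in, expected outputs compared.
    failures = []
    for given, expected in cases:
        got = tool_fn(given)
        if got != expected:
            failures.append((given, expected, got))
    return failures

# Hypothetical tool under test: normalize whitespace in a string.
def normalize(text):
    return " ".join(text.split())

failures = evaluate(normalize, [("a  b", "a b"), (" x ", "x")])
# An empty failures list means the tool behaved as expected.
```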
FOSS
Free and open-source software: software with a license that allows users to inspect, run, modify, and redistribute the code under defined terms.
Function calling
A model-provider feature where the model emits structured calls to application-defined functions.
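A sketch of the application side of function calling, assuming a hypothetical `get_weather` function and a simplified call shape; real providers differ in exact field names.

```python
import json

# Hypothetical shape of a model-emitted structured call.
model_output = '{"name": "get_weather", "arguments": {"city": "Oslo"}}'

def get_weather(city):
    # Stub standing in for an application-defined function.
    return f"Weather for {city}"

# The application, not the model, actually executes the call.
FUNCTIONS = {"get_weather": get_weather}

call = json.loads(model_output)
result = FUNCTIONS[call["name"]](**call["arguments"])
```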
Governance
Controls that keep AI systems safe and accountable: permissions, approvals, audit logs, evals, policy, and monitoring. See governance on the stack.
Gas Town
A multi-agent workspace manager for coding agents, using concepts like Mayor, rigs, polecats, hooks, convoys, and Beads-backed work state.
Hook
A callback triggered at a lifecycle point, commonly used for policy, context injection, automation, formatting, or logging. See hooks.
Hermes Agent
A persistent agent project from Nous Research that, per its public README, offers a CLI, messaging gateway, memory, skills, scheduling, subagents, and tool execution.
JSON interface
A stable input/output shape using JSON so tools are easier for agents to call, validate, log, and replay.
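A minimal sketch of a JSON interface, with an invented `{"action": ..., "args": ...}` request shape: because both sides are JSON, the call is easy to validate, log, and replay.

```python
import json

def run_tool(request_json):
    # Stable JSON in, stable JSON out.
    request = json.loads(request_json)
    if request["action"] == "add":
        return json.dumps({"ok": True, "result": sum(request["args"])})
    return json.dumps({"ok": False, "error": "unknown action"})

response = json.loads(run_tool('{"action": "add", "args": [2, 3]}'))
```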
Lab
A small hands-on exercise that builds a toy version of one AI tooling concept.
LangChain
A framework for building applications around model calls, prompts, tools, retrieval, and agent loops. It usually belongs in the framework/runtime part of the stack rather than at raw model access.
LangGraph
A graph-oriented orchestration layer in the LangChain ecosystem for stateful agent workflows and longer-running coordination.
LSP
Language Server Protocol: a standard way for editors and tools to get code intelligence such as definitions and diagnostics.
MCP
Model Context Protocol: a client-server protocol for exposing tools, resources, and prompts to AI applications. See protocols and adapters.
MCP client
The host-side component that maintains a connection to a specific MCP server.
MCP server
A program that provides tools, resources, or prompts to an AI host using the MCP protocol.
Memory
Stored information from prior interactions or project history that can be retrieved for future tasks.
Model access
The path used to call a model: subscription product, direct API, managed platform, aggregate provider, local endpoint, or local model runtime. See model access.
Model platform
A managed cloud surface that blends model access with deployment, enterprise identity, governance, evaluation, or model-catalog features. It is not just a provider API and not just a router. See managed model platforms.
Model artifact
The downloadable model file or checkpoint, such as weights or a quantized file. It is not the same thing as the runtime that serves it.
Model card
Documentation for a model, usually describing intended use, training notes, limitations, license, benchmarks, and safety considerations.
Model weights
The learned parameters of a model. Access to weights affects whether a model can be run locally, inspected, modified, or redistributed under its license.
Inference endpoint
An API address where a client sends model inputs and receives outputs. It can be hosted by a provider or exposed by local hosting software.
Inference runtime
Software that loads a model and performs inference, often exposing a CLI, server, or local API.
Local endpoint
An inference endpoint running on your own machine or local network, commonly used by local model hosts and development tools.
Local hosting software
A desktop app, CLI, or server that runs model artifacts locally and often exposes a local API for other tools to call. See local hosting and model artifacts.
Open-source local tool
A tool whose important runtime code can be inspected and run locally. It may still call hosted model APIs unless paired with local models.
OpenClaw
A local-first personal AI assistant project that, per its public README, includes a Gateway, many channels, skills, toolsets, routing, and sandbox options. See the persistent-platform stretch goal.
Orchestration
Coordination of multi-step, multi-agent, scheduled, or long-running workflows.
Persistent agent
A long-running agent system with memory, scheduling, tool access, and interfaces beyond a single chat session.
Prompt
A reusable instruction or template. In MCP, prompts are user-controlled, server-provided templates.
Provider API
A hosted model API exposed by a direct provider or aggregator for programmatic use by applications, tools, and agents.
Deployment plane
The part of a managed platform that lets users deploy, version, govern, and operate model endpoints rather than only calling a shared public API.
Quantization
A compression technique that represents model weights with fewer bits, usually reducing memory needs at some cost in speed, quality, or compatibility.
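A toy sketch of the idea behind int8 quantization: map floating-point weights onto a small range of integers with a shared scale factor, then recover approximate values on the way back.

```python
def quantize_int8(weights):
    # One shared scale maps floats onto signed 8-bit integer levels.
    scale = max(abs(w) for w in weights) / 127
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    return [q * scale for q in quantized]

weights = [0.51, -1.27, 0.02, 0.89]
quantized, scale = quantize_int8(weights)
restored = dequantize(quantized, scale)  # close to the originals, not identical
```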
Protocol adapter
A wrapper that exposes an existing capability through a protocol-shaped interface, such as turning a CLI into a tool a host can discover and call.
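A sketch of the adapter idea, with an invented `echo_text` tool: describe a CLI in a tool-shaped spec a host could discover, and translate structured arguments into a concrete command invocation.

```python
import subprocess

# Hypothetical tool description for a plain CLI (here, `echo`).
TOOL_SPEC = {
    "name": "echo_text",
    "description": "Echo a string back",
    "input_schema": {"type": "object", "properties": {"text": {"type": "string"}}},
}

def call_tool(arguments):
    # The adapter turns structured arguments into a CLI invocation.
    completed = subprocess.run(
        ["echo", arguments["text"]], capture_output=True, text=True, check=True
    )
    return completed.stdout.strip()

output = call_tool({"text": "hello"})
```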
Resource
Contextual data exposed to the model or host, such as a file, database schema, log, or API response.
Sandbox
A constrained execution environment where an agent can run commands with controlled access and reduced risk.
Skill
A packaged unit of procedural knowledge that tells an agent how and when to perform a kind of task. See skills.
Source-available
Software where source code is visible but the license is not the same as an open-source license. Read the license before assuming fork, hosting, or commercial rights.
Subscription
A paid product access model, usually tied to a human-facing app or assistant surface. It is not automatically the same thing as API access.
Tool
An executable capability the agent can invoke, from a simple shell command to a typed remote API.
Wrapper
A layer that adapts one interface into another, such as wrapping a CLI as an MCP server or adding an AI loop around shell commands. See wrappers.

Tiny examples

Terms in context

Tool

gh issue create is a tool because it performs a concrete action.
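A sketch of how an agent might shape that invocation. The command is only built here, not executed; `--title` and `--body` are real gh flags, but the title and body text are made up.

```python
def build_gh_issue_command(title, body):
    # An agent composes the CLI call from structured arguments.
    return ["gh", "issue", "create", "--title", title, "--body", body]

cmd = build_gh_issue_command("Fix login bug", "Steps to reproduce: ...")
```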

Protocol

MCP is a protocol because it gives hosts and servers a shared way to describe and call capabilities.
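A sketch of that shared shape: MCP messages are JSON-RPC 2.0 requests, and `tools/call` is the method the MCP specification defines for invoking a tool. The `echo_text` tool name and its arguments are invented for illustration.

```python
# Shape of an MCP tool invocation as a JSON-RPC 2.0 request.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "echo_text", "arguments": {"text": "hello"}},
}
# Any host and server that agree on this shape can interoperate.
```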

Skill

A code-review skill is not the review itself. It is the reusable procedure for doing one well.