Big picture

The AI tooling stack

A layered map from model access up to tools, agents, workflows, interfaces, and safety controls.

Example first

A local CLI agent is not one layer. That is the point.

A terminal agent sits near the top because it is the thing you interact with. But it also reaches downward: it calls tools, connects to protocol adapters, manages context, and relies on a model access path. Goose is one example of this shape.

The stack is less like a set of boxes and more like a cutaway view. It shows what a product is built out of.

Bottom to top

The layers

08

Governance, evaluation, and observability

Plain English: The safety rails and dashboard.

Technical view: Approvals, audit logs, traces, evals, policy, rate limits, cost controls, secret scanning, reproducibility, and deployment gates.

Examples: promptfoo, Braintrust, LangSmith, Helicone, OpenTelemetry, CI checks.
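A deployment gate of this kind can be sketched in a few lines. This is not any specific tool's API; the toy grader, cases, and threshold below are illustrative assumptions about what a CI eval check does: score outputs, compute a pass rate, and block the gate when the rate drops too low.

```python
# Minimal sketch of an eval gate, the kind of check a CI pipeline runs
# before deployment. Grader, cases, and threshold are illustrative.

def grade(output: str, expected_substring: str) -> bool:
    """Toy grader: pass if the expected substring appears in the output."""
    return expected_substring.lower() in output.lower()

def run_eval(cases, threshold=0.9):
    """Return (pass_rate, gate_ok) over a list of (output, expected) pairs."""
    passed = sum(grade(out, exp) for out, exp in cases)
    rate = passed / len(cases)
    return rate, rate >= threshold

cases = [
    ("The capital of France is Paris.", "Paris"),
    ("2 + 2 = 4", "4"),
    ("I am not sure.", "Berlin"),
]
rate, ok = run_eval(cases, threshold=0.9)  # 2 of 3 pass: gate fails
```

Real tools replace the substring grader with model-based or rubric graders, but the gate logic is the same shape.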

07

Orchestration and multi-agent coordination

Plain English: The project manager that splits and schedules work.

Technical view: Workflow engines, graph execution, background jobs, subagent delegation, retries, queues, routing, and long-running task state.

Examples: LangGraph, AutoGen, CrewAI, Temporal, Airflow, GitHub Actions, n8n.
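The retry behavior these engines provide can be sketched as a small policy wrapped around a flaky step. The function names and delays below are illustrative assumptions, not any engine's API; real workflow engines add persistence and scheduling on top of the same idea.

```python
import time

# Sketch of retry-with-exponential-backoff, the kind of policy an
# orchestration layer applies around an unreliable step.

def run_with_retries(step, max_attempts=3, base_delay=0.01):
    """Call step() until it succeeds or attempts run out."""
    for attempt in range(1, max_attempts + 1):
        try:
            return step()
        except Exception:
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))  # 0.01s, 0.02s, ...

calls = {"n": 0}

def flaky():
    """Fails twice, then succeeds: simulates a transient outage."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "done"

result = run_with_retries(flaky)  # succeeds on the third attempt
```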

06

Hosts and user interfaces

Plain English: The place where you talk to the AI and approve its work.

Technical view: Hosts own the conversation, context window policy, permission prompts, tool result display, and user interaction model.

Examples: Copilot CLI, Claude Code, Cursor, VS Code, Cline, Continue, chat apps, web dashboards.

05

Agent runtime

Plain English: The loop that keeps asking, "what should I do next?"

Technical view: Planning, tool selection, state management, memory retrieval, error handling, reflection, retry logic, and stop conditions.

Examples: ReAct loops, LangChain, OpenAI Agents SDK, Semantic Kernel, Pydantic AI, custom coding agents.

Framework note: LangChain mainly fits here; LangGraph pushes higher into orchestration.
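The "what should I do next?" loop itself is small. This sketch stubs out the model and uses a made-up tool table; the stop condition, tool dispatch, and observation history stand in for the planning, state management, and retry machinery a real runtime provides.

```python
# Minimal sketch of an agent runtime loop. The model is a stub and the
# tool table is illustrative; real runtimes call an LLM here.

def stub_model(history):
    """Stand-in for an LLM: plans one tool call, then finishes."""
    if not any(step[0] == "observe" for step in history):
        return {"action": "tool", "name": "add", "args": (2, 3)}
    return {"action": "finish", "answer": history[-1][1]}

TOOLS = {"add": lambda a, b: a + b}

def agent_loop(model, max_steps=5):
    history = []
    for _ in range(max_steps):                  # stop condition: step budget
        decision = model(history)
        if decision["action"] == "finish":
            return decision["answer"]
        result = TOOLS[decision["name"]](*decision["args"])  # tool selection
        history.append(("observe", result))     # state the next turn can see
    raise RuntimeError("step budget exhausted")

answer = agent_loop(stub_model)
```

Everything the frameworks above add, such as memory retrieval, reflection, and error handling, is elaboration on this loop.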

04

Packaging and behavior extension

Plain English: Reusable recipes and add-ons.

Technical view: Skills, plugins, prompt packs, hooks, slash commands, project instruction files, templates, and reusable workflows.

Examples: skills, project instruction files, pre-tool hooks, code review prompt packs.

03

Protocols and adapters

Plain English: Standard plugs that connect the AI to tools and information.

Technical view: Schemas and transports for discovering context, invoking tools, streaming results, authenticating, and adapting existing systems.

Examples: MCP, function calling, OpenAPI, LSP, DAP, browser automation protocols.
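A concrete way to see the "standard plug" idea is a tool declaration in the JSON-Schema style that function-calling APIs use: the host advertises the schema, the model emits a matching call, and the host validates it before executing. The "run_tests" tool below is a made-up example, not a real API.

```python
import json

# Sketch of a function-calling tool declaration in JSON-Schema style.
# The tool name and parameters are illustrative assumptions.

tool = {
    "name": "run_tests",
    "description": "Run the project's test suite and report failures.",
    "parameters": {
        "type": "object",
        "properties": {
            "path": {"type": "string", "description": "Test file or directory."},
            "verbose": {"type": "boolean"},
        },
        "required": ["path"],
    },
}

# A model response then carries a call the host checks against the schema
# before dispatching to the actual tool:
call = {"name": "run_tests", "arguments": json.dumps({"path": "tests/"})}
args = json.loads(call["arguments"])
```

Protocols like MCP generalize this handshake: discovery, invocation, and streaming results over a standard transport rather than a per-vendor format.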

02

Executable capabilities

Plain English: The actual tools that do work.

Technical view: CLIs, APIs, shell commands, scripts, test runners, package managers, databases, browsers, and cloud services.

Examples: git, rg, jq, curl, gh, npm, pytest, SQL clients, REST APIs.
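From the layers above, these tools are just processes to run: the wrapper below is a hedged sketch of how a tool layer shells out to a CLI and surfaces the exit code and output. The helper name is an illustrative assumption; `echo` stands in for any of the commands listed.

```python
import subprocess

# Sketch of wrapping a CLI as an executable capability: run a command,
# capture stdout, and report the exit code to the caller.

def run_cli(cmd: list[str]) -> tuple[int, str]:
    """Run a command and return (exit_code, stdout)."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.returncode, proc.stdout.strip()

code, out = run_cli(["echo", "hello"])
```

An agent runtime calls something like this for git, pytest, or curl, then feeds the output back into its loop as an observation.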

01

Model access, data, and compute

Plain English: The engine, the fuel, and the workshop floor.

Technical view: Hosted subscriptions, provider APIs, aggregator routers, local endpoints, model artifacts, embedding models, filesystems, databases, vector stores, containers, sandboxes, and credentials.

Examples: provider APIs, local hosting runtimes, quantized model files, Docker, E2B, local files.

Start here: model access paths.

Walkthrough

Place a task-memory graph on the stack

1

It has a CLI. A command such as bd is an executable capability, so it touches layer 02.

2

It stores durable state. The task graph acts like memory for agents and humans, so it also belongs near the foundation.

3

It shapes workflow. Ready-task detection, claiming, and dependencies push it upward into coordination.
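Ready-task detection, the coordination behavior from step 3, reduces to a check over the dependency graph: a task is ready when it is not done and every task it depends on is done. The graph and statuses below are illustrative, not the actual tool's data model.

```python
# Sketch of ready-task detection over a dependency graph.
# deps maps each task to the tasks it depends on; done is the set of
# completed tasks. Task names are made up for illustration.

deps = {
    "design": [],
    "implement": ["design"],
    "test": ["implement"],
    "docs": ["design"],
}
done = {"design"}

def ready_tasks(deps, done):
    """Tasks that are not done and whose dependencies are all done."""
    return sorted(
        task for task, needs in deps.items()
        if task not in done and all(n in done for n in needs)
    )

ready = ready_tasks(deps, done)  # "test" stays blocked behind "implement"
```

This is why the component spans layers: the graph is durable state (layer 01), the CLI is a capability (layer 02), and the readiness rule is coordination (layer 07).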