AI tooling field guide

AI tooling, without the fog.

This is a working map of AI developer tooling, written for learning first: plain-English explanations up front, then small labs where you build toy versions of the ideas yourself.

Why this exists

A practical map for a messy ecosystem.

AI tooling has a lot of overlapping names for things that sometimes do similar jobs. This guide gives those pieces a place to live: what they are, what they are for, and how they relate to each other.

The aim is not to memorize every tool. It is to build a mental model that makes new tools easier to classify when they show up.

Learning path

One reasonable pass through the site

Start here

The shortest useful way into the site

New readers usually need three moves, in order: choose how the model is reached, turn that into one stable surface, then layer tools and agent behavior on top.

Deepest start: if you want to begin from a real local model, take the detour through local hosting and model artifacts before the bootstrap step.

Site shape

Each part of the site has one main job.

The site works best when each page type stays narrow: orientation pages build the map, concept pages explain one layer, labs let you touch the machinery, and reference pages help you classify what you found.

Orientation

Use Models, Map, and Stack when you need the big picture and a first mental model.

Practice

Use Labs when you want runnable artifacts and toy implementations instead of more theory.

Reference

Use the Catalog for metaphors, real-world examples, lifecycle context, and lab links. Use the Glossary for quick definitions.

The core idea

AI tools are ways for a model to get context and take action.

A language model on its own can only produce text. The tooling ecosystem is everything we add around it so it can read files, query systems, run commands, follow reusable procedures, ask for approvals, and remember what happened before.

The names can sound intimidating: MCP, skills, hooks, wrappers, agent runtimes. Fair. But most of them answer one of three questions: what can the AI see, what can it do, and who decides what is safe or useful?

A friendly metaphor

Think of it like a workshop

The AI is not the whole workshop. It is more like a smart apprentice working inside a shop full of tools, rules, recipes, adapters, and supervisors.

The model is the reasoning engine

It reads the situation and proposes next steps. It is powerful, but it needs context and tools to do grounded work.

Examples: GPT, Claude, Gemini, local models.

Tools are the hands

Tools do concrete things: search files, call an API, query a database, run tests, create a ticket, or edit a document.

Examples: git, rg, curl, SQL clients, cloud CLIs.
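As a concrete sketch, a "tool" can be as small as a named function with a description and a side effect. The names and dict layout below are illustrative, not any particular framework's API.

```python
from pathlib import Path

# A minimal "tool": one concrete action (read a file), plus enough
# metadata that something else could discover and call it.
def read_file(path: str) -> str:
    """Return the text of a file, or an error message the model can read."""
    try:
        return Path(path).read_text()
    except OSError as e:
        return f"error: {e}"

# A toy registry: the host looks tools up by name and shows the
# descriptions to the model so it knows what its "hands" can do.
TOOLS = {
    "read_file": {"description": "Read a text file from disk.", "fn": read_file},
}
```

Note the error path returns text instead of raising: tools hand results back to a model, so even failures need to come back as something it can read and reason about.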

Protocols are the plugs

Protocols define how a host discovers tools and data without every integration being custom-built from scratch.

Examples: MCP, OpenAPI, function calling, LSP.
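The "plug" is usually just a schema: the host never sees the tool's code, only a machine-readable description it can discover and call against. The shape below mimics common function-calling schemas; the field names are illustrative, not a specific vendor's format.

```python
import json

# How a tool is advertised over a protocol: a name, a human-readable
# description, and a typed parameter schema the host can validate against.
tool_schema = {
    "name": "query_database",
    "description": "Run a read-only SQL query and return rows as JSON.",
    "parameters": {
        "type": "object",
        "properties": {
            "sql": {"type": "string", "description": "A SELECT statement."},
        },
        "required": ["sql"],
    },
}

# The host serializes this and hands it to the model as part of its
# "what you can do" context; the actual implementation lives elsewhere.
print(json.dumps(tool_schema, indent=2))
```

This separation is the whole point of a protocol: any host that speaks the schema can use any tool that publishes one, without a custom integration per pair.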

Skills are recipes

A skill tells the agent how to do a kind of task well: when to use which tool, what order to follow, and what mistakes to avoid.

Examples: code review, deploy a model, query a warehouse.
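A skill is closer to data than to code: which tools to use, in what order, and what to avoid. Real skill formats vary (often markdown or YAML files); the dict layout and field names below are purely illustrative.

```python
# A "code review" recipe sketched as structured data.
code_review_skill = {
    "name": "code-review",
    "when": "the user asks for a review of a diff or pull request",
    "steps": [
        "read the diff with the file-reading tool before commenting",
        "run the test suite and report failures verbatim",
        "flag risky patterns: secrets, broad excepts, missing tests",
    ],
    "avoid": ["rewriting code the user did not ask to change"],
}

def render_skill(skill: dict) -> str:
    """Turn the recipe into plain instructions the model can follow."""
    lines = [f"Skill: {skill['name']} (use when {skill['when']})"]
    lines += [f"{i}. {step}" for i, step in enumerate(skill["steps"], 1)]
    lines += [f"Avoid: {item}" for item in skill["avoid"]]
    return "\n".join(lines)
```

The render step matters: a skill only does anything when it ends up in the model's context as text, at the moment the matching kind of task shows up.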

Hooks are shop rules

Hooks run automatically at key moments. They can check safety, add context, run formatting, block secrets, or log what happened.

Examples: before command, after edit, before commit.
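A "before command" hook can be sketched as a check that runs automatically and can veto an action. The event name, return shape, and secret pattern below are illustrative, not a specific agent runtime's hook API.

```python
import re

# A crude pattern for things that look like credentials in a command line.
SECRET_PATTERN = re.compile(r"(api[_-]?key|secret|token)\s*=\s*\S+", re.IGNORECASE)

def before_command(command: str) -> dict:
    """Shop rule: allow or block a proposed shell command before it runs."""
    if SECRET_PATTERN.search(command):
        return {"allow": False, "reason": "command appears to contain a secret"}
    return {"allow": True, "reason": ""}
```

The key property is that the hook runs every time, regardless of what the model intended; the model proposes, the hook disposes.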

Agents are workers with a loop

An agent keeps observing, deciding, acting, and checking progress until the task is done or it needs help.

Examples: CLI agents, IDE agents, persistent assistants.
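The loop itself is tiny. In the toy below the "decision" is a hard-coded policy so the sketch stays runnable; in a real agent that step is a language model choosing among its tools.

```python
# A toy agent loop: observe, decide, act, check, repeat.
def decide(goal: int, state: int) -> str:
    """Stand-in for the model: pick the next action from the observation."""
    return "done" if state >= goal else "increment"

def run_agent(goal: int, max_steps: int = 10) -> int:
    state = 0
    for _ in range(max_steps):        # the loop always has an exit
        action = decide(goal, state)  # observe current state, decide
        if action == "done":          # check progress, stop when finished
            break
        state += 1                    # act, then go around again
    return state

print(run_agent(3))  # → 3
```

Everything else in the ecosystem plugs into one of these slots: tools make "act" more capable, hooks police it, skills shape "decide", and memory feeds "observe".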

Four quick examples

Same ecosystem, different handles

Pocket map

The ecosystem in one screen

Tiny walkthrough

Classify a new tool in about a minute

1. Name the action. If it runs tests, creates issues, edits files, or queries data, you are looking at a tool or capability.

2. Name the interface. If another app discovers and calls it through a schema, you are probably looking at a protocol or wrapper.

3. Name the decision-maker. If it chooses the next step, coordinates workers, or asks for approval, you are higher in the stack.
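The three questions can be sketched as a checklist. The checks run from the top of the stack down, since the decision-maker question trumps the others; the category labels are illustrative shorthand, and real classification still takes judgment.

```python
# Classify a new tool by asking the walkthrough's three questions.
def classify(does_action: bool, has_schema_interface: bool,
             makes_decisions: bool) -> str:
    if makes_decisions:          # question 3: who decides?
        return "agent / orchestration layer"
    if has_schema_interface:     # question 2: how is it called?
        return "protocol / wrapper"
    if does_action:              # question 1: what does it do?
        return "tool / capability"
    return "unclear: ask what it sees, does, and decides"
```

For example, a CLI that runs tests classifies as a tool, a schema it publishes for discovery classifies as a protocol layer, and anything choosing which of those to invoke next sits at the agent layer.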