The knowledge gap in isolated agents
You now know how to build skills (on-demand knowledge) and agents (isolated execution). Each is powerful on its own. But there is a problem when you use agents alone.
An agent starts with a nearly empty context: its system prompt from AGENT.md, the project's CLAUDE.md, and whatever you tell it in the delegation prompt. It does not have your team's code review checklist. It does not know your testing conventions. It does not know that your React components use named exports and co-located test files.
So what happens? The agent starts exploring. It reads your codebase to figure out conventions. It infers patterns from existing code. This takes time, consumes its context window, and the inferences are not always correct. The agent might produce a code review that checks for things your team does not care about and misses things your team considers critical.
Skills fix this. When you preload skills into an agent using the skills field in the agent's frontmatter, the full content of those skills is injected into the agent's context at startup. The agent starts with institutional knowledge already loaded — your standards, your conventions, your checklists — before it reads a single file.
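As a sketch, an agent definition with preloaded skills might look like this. The agent name, description, and skill names here are illustrative, not taken from a real project:

```markdown
---
name: code-reviewer
description: Reviews pull requests against team standards
# Hypothetical skill names. The full content of each listed skill
# is injected into this agent's context at startup.
skills:
  - code-review-checklist
  - testing-conventions
---
You are a code reviewer. Apply the team's preloaded checklist and
testing conventions before inspecting any files.
```

With this in place, the agent begins its first turn already knowing the checklist and conventions, instead of spending context-window tokens rediscovering them from the codebase.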
This is the difference between hiring a contractor who arrives on day one and needs two weeks to learn your processes, and hiring a contractor who has already read your operations manual.