majonics.net  /  est. 2026

MAJONICS

Claude Code Agents: a field guide
for engineers who already know what LLMs can do.

Domain     Agentic Architecture
Runtime    Claude Code
Audience   Technical founders & engineers

01  —  Context

You already know
what LLMs can do.

You've shipped products with language models. You've used deep research, seen it write solid analysis in one shot. You know the context window, the token limits, the difference between RAG and fine-tuning. This page isn't about any of that.

What you're looking at is the next layer — the one that turns a brilliant generalist into a programmable team. Claude Code agents aren't a smarter prompt. They're a new category of software infrastructure that happens to run on intelligence.

Whatever your software does — whatever domain it operates in, whatever integrations it relies on, whatever rules and constraints your codebase encodes — that's exactly the kind of domain-dense, multi-system environment where agents stop being interesting experiments and start being the only rational choice for shipping fast.

Below: seven concepts. One argument. One roadmap.

02  —  Architecture

Seven shifts from
prompting to programming.

Classic LLM usage is stateless, sequential, and single-threaded. Claude Code agents are a layered framework: stateful memory, isolated context windows, event-driven automation, and fine-grained permission architecture. Here's what that actually means.

Memory System

CLAUDE.md — Stateful, Hierarchical Memory

Forget pasting context at the start of every session. CLAUDE.md files load automatically and merge hierarchically: enterprise-level rules → user preferences → project architecture → directory-specific conventions. It's not a prompt. It's a persistent knowledge layer that accumulates across every session.

One CLAUDE.md that knows your DB schema, your domain constraints, your integration patterns, your test conventions — loaded before Claude writes a single line.

vs. Classic LLM: Every session starts blank. You re-explain the codebase. Claude hallucinates the schema it's never seen. You paste context. You forget to paste context.
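CLAUDE.md is free-form markdown. A minimal sketch of what a repo-root file might encode (the project name, paths, and commands are hypothetical):

```markdown
# Project: acme-crm

## Architecture
- Monorepo: `api/` (FastAPI), `web/` (React), `db/` (Postgres migrations)

## Database conventions
- All tables use `snake_case`; primary keys are `id uuid`
- Never write migrations by hand; generate them with `make migration name=...`

## Testing
- Run `make test` before declaring any task done
- Integration tests live in `tests/integration/` and never mock the DB
```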
Context Architecture

Sub-Agents: Context Isolation

Each sub-agent runs in its own isolated context window — completely separate from the main conversation. Verbose operations (test runs, log analysis, schema exploration) happen in isolation. Only the summary comes back. Your main thread stays clean.

vs. LLM: One thread. Every tool call pollutes the context. By the time you're 3 steps in, the model is fighting its own history.
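A sub-agent is a markdown file with YAML frontmatter under `.claude/agents/`. A sketch of a read-only reviewer (frontmatter fields per the Claude Code docs; the prompt body is illustrative):

```markdown
---
name: code-reviewer
description: Reviews diffs for correctness and convention violations. Use after any non-trivial edit.
tools: Read, Grep, Glob
---

You are a code reviewer. Read the changed files, check them against the
conventions in CLAUDE.md, and report findings as a short summary:
severity, file, line, one-line fix suggestion. Never edit files.
```

Note the `tools` line: this agent physically cannot write, only read and report.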
Execution Model

Parallel Execution

Spawn multiple agents to work simultaneously on independent tasks. One explores your auth module. One maps the DB schema. One reads the API layer. All in parallel, returning summaries to the orchestrator. What takes an hour sequentially takes minutes.

vs. LLM: Sequential. One question, one answer, next question. You are the orchestrator. You are the bottleneck.
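The fan-out is driven from the main thread in plain language. A prompt sketch (the paths and modules are placeholders):

```markdown
Spawn three agents in parallel; each returns a summary only:
1. Explore `src/auth/` and map its public surface.
2. Read `db/migrations/` and reconstruct the current schema.
3. Grep the API layer for every caller of the auth module.
Then synthesize: which endpoints break if we rotate the token format?
```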
Automation

Hooks: Event-Driven Automation

Shell commands that fire on lifecycle events: PreToolUse (validate before Claude touches anything), PostToolUse (run the linter after every edit), SessionStart and SubagentStop (setup per session, capture results as each agent finishes). Hooks are the CI/CD layer inside your AI workflow. No explicit prompting. Pure event-driven automation.

vs. LLM: You remind Claude to run tests. Claude forgets. You remind again. It works until you stop watching.
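Hooks are declared in `.claude/settings.json`. A sketch of the two most common wires, assuming the documented matcher/command shape (both scripts are hypothetical; each would read the tool payload as JSON on stdin):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [{ "type": "command", "command": "./scripts/validate-command.sh" }]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [{ "type": "command", "command": "./scripts/lint-changed.sh" }]
      }
    ]
  }
}
```

A PreToolUse command that exits non-zero in the blocking mode stops the tool call before it runs; PostToolUse fires after the edit lands.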
Capability Layer

Skills: Passive Capability Discovery

A SKILL.md describes a capability. Claude reads the description and activates the skill automatically when the task context matches — no slash command, no explicit invocation. The right expertise appears at the right moment. You encode your domain knowledge once. It persists forever.

vs. LLM: You're the capability router. You decide which context to paste. You forget to load the relevant domain skill mid-task.
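A skill is a SKILL.md whose frontmatter description tells Claude when to activate it. A sketch with hypothetical CRM domain rules:

```markdown
---
name: crm-field-mapping
description: Rules for mapping CRM fields across HubSpot, Pipedrive, and the internal schema. Use whenever a task touches contact or deal data.
---

## Canonical fields
- `email` is the merge key; lowercase it before comparing.
- HubSpot `lifecyclestage` maps to internal `funnel_stage`.

## Hard rules
- Never drop a source field: unmapped fields go to `raw_attributes` (jsonb).
```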
Multi-Agent Systems

Agent Teams: Mesh, Not Hub-and-Spoke

Beyond sub-agents (which report back to the main agent), Agent Teams give each worker their own independent context window plus a shared task list and mailbox — direct agent-to-agent communication. One agent owns the API layer. Another owns DB migrations. They coordinate like an actual team. The lead assigns and synthesizes. Workers execute and communicate directly.

vs. LLM: One model, one conversation. Coordination overhead is 100% on you.
Security Layer

Permission Architecture

Fine-grained tool access control per agent. A DB-reader agent gets Bash access but a PreToolUse hook blocks any non-SELECT query before it executes. A code-reviewer agent gets Read/Grep/Glob but no Write. A scaffolding agent gets full tool access but only in a git worktree. Security is infrastructure, not an afterthought.

vs. LLM: The model has exactly as much access as you gave it in the session. Which is usually too much and unaudited.
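The SELECT-only gate can be sketched as the PreToolUse hook script itself. A minimal Python sketch, assuming the documented stdin payload shape (`tool_input.command`) and the exit-code convention that 0 allows and 2 blocks:

```python
#!/usr/bin/env python3
"""PreToolUse hook sketch: allow only read-only SQL through a db-reader agent."""
import json
import re
import sys

# Statement keywords a read-only agent may execute; everything else is blocked.
READ_ONLY = ("select", "show", "explain")

def is_read_only(command: str) -> bool:
    """True iff every ';'-separated statement starts with a read-only keyword."""
    statements = [s.strip() for s in command.split(";") if s.strip()]
    if not statements:
        return False  # empty command: nothing to allow
    pattern = r"^(" + "|".join(READ_ONLY) + r")\b"
    return all(re.match(pattern, s, re.IGNORECASE) for s in statements)

def main() -> int:
    payload = json.load(sys.stdin)  # Claude Code delivers the hook payload on stdin
    command = payload.get("tool_input", {}).get("command", "")
    if is_read_only(command):
        return 0  # allow the tool call
    print(f"Blocked non-read-only query: {command!r}", file=sys.stderr)
    return 2  # exit code 2 signals Claude Code to block the call

# Entry point when installed as a hook script: sys.exit(main())
```

The agent keeps full Bash access; the hook, not the model, enforces the policy.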

03  —  The Case

Why agents beat
deep research for your use case.

Classic LLM Deep Research

  • Stateless — starts from zero every session
  • Sequential — one step at a time, you wait
  • Single context — everything pollutes the thread
  • Prompt-driven — you initiate every action
  • No tool persistence — no hooks, no lifecycle
  • Unrestricted — same access regardless of task
  • One-shot — output is a document, not a system

Claude Code Agent Architecture

  • Stateful — CLAUDE.md loads your entire codebase context
  • Parallel — N agents work simultaneously, orchestrator synthesizes
  • Context-isolated — each agent has its own clean window
  • Event-driven — hooks fire automatically on lifecycle events
  • Persistent skills — domain expertise activates by context match
  • Permission-enforced — fine-grained access per agent per task
  • Composable — agents, skills, hooks, MCP servers stack indefinitely

Scenario: unify fragmented CRM data from different tools into a single structure

Deep Research Approach

  1. You paste the data schema from Tool A — HubSpot export format
  2. Claude analyzes it, but doesn't know the schema from Tool B or Tool C
  3. You paste Tool B schema. Context is now 40% full.
  4. Claude proposes a mapping — but can't run it, can't validate against real data
  5. You implement manually. Edge cases appear. You restart the conversation.
  6. You re-explain the full context in a new session to handle the edge cases.

Agent Architecture Approach

  1. Agent 1 (Explore): maps all CRM-related files, schemas, and API contracts — in parallel
  2. Agent 2 (Data): reads Tool A, B, C schemas and identifies field overlaps — in parallel
  3. Agent 3 (Test): runs existing data validation tests, captures failures — in parallel
  4. Orchestrator synthesizes a unified schema proposal from all three agents
  5. PostToolUse hook auto-runs schema linter on every proposed transformation
  6. Full mapping + validated migration script in one session. Main context stays clean.

04  —  Roadmap

Your path to
agent-native development.

Five stages. Ordered by leverage. Each one builds on the previous. Stop at any stage and you'll already be ahead of the majority of engineering teams.

01
1–2 days

Memory & Context

Author a CLAUDE.md at your repo root. Encode your architecture, domain rules, test conventions, and integration patterns. Claude loads it every session — no re-explaining, ever.

02
3–5 days

Sub-Agents

Create specialized agents: code-reviewer (read-only), db-reader (SELECT-only via hook), debugger (isolated context). Explore your codebase in parallel. Only summaries return to the main thread.

03
1–2 weeks

Hooks & Automation

Wire lifecycle events: PreToolUse to block unsafe operations before they run, PostToolUse to auto-lint after edits, SubagentStop to log findings. CI-lite inside your AI workflow.

04
Ongoing

Skills

Encode your domain expertise as SKILL.md files. They activate automatically when Claude detects a context match. Your knowledge persists and compounds across every future session.

05
When you scale

Agent Teams

Deploy a team lead + workers with a shared task list and mailbox. Direct agent-to-agent communication. Parallel ownership of API, DB, and frontend. Cross-layer features in a single coordinated session.

End state

A development environment where Claude knows your codebase, specialized agents handle distinct concerns in isolation, automation fires on events rather than prompts, domain expertise is encoded once and reused forever, and complex features are built by a coordinated team of agents — not a single conversation thread.