Concepts
How Fellowship orchestrates AI agents with memory, specs, and code review.
How It Works
When you run a task, Fellowship orchestrates a full pipeline: it searches the Memory Graph, generates a spec, selects the right agent, implements the change, reviews it with Sentinel, and records what was learned. Each stage is covered below.
The Memory System
Fellowship's memory is a local SQLite graph — no vector database, no embeddings, no external infra. It runs on your machine, commits to your repo, and grows with every run.
The graph has two layers:
Layer 1 — Memory Graph (MCP)
The core. A structured knowledge graph exposed to agents via an MCP server. Every run, agents must search it before writing code. Every decision, gotcha, or lesson is recorded as a node.
| Node type | What it stores | Example |
|---|---|---|
| `learning` | General lessons from experience | "Always use parameterized queries" |
| `gotcha` | Technical traps with a specific fix | "Supabase pooler breaks on port 5432" |
| `decision` | Architectural choices with rationale | "JWT over sessions — stateless, scales" |
| `code-pattern` | Reusable implementation patterns | "Auth middleware pattern for Express" |
| `architecture` | System-level structural knowledge | "Event-driven: services via queues" |
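To make the table concrete, here is a hypothetical gotcha node sketched in YAML — the field names are illustrative, not Fellowship's exact schema:

```yaml
# Hypothetical gotcha node — field names are illustrative, not the exact schema
type: gotcha
title: "Supabase pooler breaks on port 5432"
body: "Describe the trap and the specific fix that worked."
tags: [supabase, postgres, connections]
```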
Mandatory Protocol
Agents follow this protocol on every run — it's enforced in the agent prompt, not just suggested:
- MANDATORY FIRST STEP: `graph_search` with task keywords
- Before writing any code: minimum 2 `graph_query` calls
- During implementation: record learnings/gotchas/decisions when friction occurs
- Post-run: graph is enriched for the next agent
Layer 2 — Markdown files (human-readable)
Alongside the graph, Fellowship maintains plain Markdown files you can read, edit, and commit:
| File | Purpose |
|---|---|
| `learnings.md` | Raw lessons extracted from runs |
| `learnings-distilled.md` | Top 30 distilled learnings (injected into the agent prompt) |
| `gotchas.md` | Technical pitfalls and warnings |
| `architecture.md` | Project architecture reference (used when MCP is unavailable) |
| `fellowship.db` | SQLite: Memory Graph + runs + decisions (gitignored) |
Spec-First Workflow
Every task gets a detailed spec before any code is written. The spec engine combines your task with graph context:
- Memory Graph results — `graph_search` finds relevant learnings, gotchas, and decisions
- Agent profile — role-specific guidelines and conventions
- Architecture — from the graph, or `architecture.md` as a fallback
This means the agent starts with a complete picture — not just your one-liner task description. You can preview any spec before running it:
# Preview the spec without executing
fellowship run "add WebSocket support" --dry-run
# Or generate just the spec file
fellowship spec "add WebSocket support"

Agents & Roles
Fellowship creates a team of specialized agents, each with a role, profile, and memory. Agents are defined in .fellowship/team.yaml and have individual profile files in .fellowship/profiles/.
A typical team might include:
- Carlos (Backend) — handles API endpoints, business logic, database queries
- Atlas (Database) — schema design, migrations, query optimization
- Merlín (Coordinator) — project-level decisions and planning
- Sentinel (Reviewer) — code review on every run
Fellowship automatically selects the right agent for each task based on the agent’s role and the task description. You can also add new agents with fellowship hire.
Writer-Reviewer Pattern
Fellowship uses a writer-reviewer pattern for every run. The coding agent (writer) implements the task, then Sentinel (reviewer) checks the code against the spec.
This creates a feedback loop:
- Writer implements the code based on the spec
- Sentinel reviews against acceptance criteria, catches bugs and edge cases
- If issues found, the writer gets feedback and fixes them
- Cycle repeats up to `maxCycles` (default: 2)
- After passing review, changes go to human review (if the `--review` flag is used)
Sentinel — The Built-in Reviewer
Every project gets Sentinel, a dedicated AI code reviewer. Unlike using another coding agent as reviewer, Sentinel has a specialized profile focused on:
- Verifying acceptance criteria from the spec
- Catching bugs and missing edge cases
- Checking architecture compliance
- Not nitpicking style or suggesting rewrites
Sentinel reviews every run automatically. Configure it via the `review:` section in `config.yaml`.
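As a sketch of that configuration (key names here are assumptions — check the generated config.yaml for the exact schema):

```yaml
# Hypothetical review settings in .fellowship/config.yaml — key names are assumptions
review:
  enabled: true      # Sentinel reviews every run by default
  maxCycles: 2       # writer-reviewer iterations before escalating (default: 2)
```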
Background Runs
Background runs let the agent work while you keep coding. Your terminal stays free — no waiting around.
# Start a background run
fellowship run "add authentication" --bg
# Check on it any time
fellowship status
fellowship log --follow
# Review when notified
fellowship review

The agent works in an isolated git worktree, so there's zero risk of conflicts with your current work. When it finishes, you get a notification.
Notifications
Fellowship notifies you when runs complete — zero config required.
Built-in (automatic)
| Platform | Sounds | Desktop Notifications |
|---|---|---|
| macOS | afplay (Glass, Basso, Ping) | osascript (native alerts) |
| Linux | paplay / aplay | notify-send |
| Windows | PowerShell SoundPlayer | PowerShell toast (Win10/11) |
Plus: terminal flash — colored bar on completion (green / red / yellow / blue).
Additional Providers
- Telegram — set bot token + chat ID in config or env vars
- Slack — set the webhook URL in the `FELLOWSHIP_SLACK_WEBHOOK` env var
- Custom — any shell command, with run details passed as env vars
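A sketch of what provider configuration could look like (key names are assumptions; only the FELLOWSHIP_SLACK_WEBHOOK variable is confirmed above):

```yaml
# Hypothetical notification config — key names are assumptions
notifications:
  telegram:
    botToken: "123456:ABC-placeholder"
    chatId: "987654321"
  custom:
    command: "./notify-hook.sh"   # receives FELLOWSHIP_* env vars on each run
```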
Environment Variables in Hooks
| Variable | Example |
|---|---|
| `FELLOWSHIP_TASK` | "add user authentication" |
| `FELLOWSHIP_AGENT` | "Carlos" |
| `FELLOWSHIP_STATUS` | completed / failed / rejected |
| `FELLOWSHIP_BRANCH` | feat/add-user-auth |
| `FELLOWSHIP_DURATION` | 145 (seconds) |
| `FELLOWSHIP_TOKENS` | 31000 |
| `FELLOWSHIP_COST` | 0.47 |
| `FELLOWSHIP_PR_URL` | https://github.com/.../pull/5 |
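These variables make custom hooks trivial. A minimal sketch of a hook script (the message format below is my own, not a Fellowship convention):

```shell
#!/bin/sh
# Sketch of a custom notification hook; Fellowship exports the FELLOWSHIP_*
# variables from the table above before invoking the command.
notify_message() {
  printf 'Fellowship: %s %s "%s" on %s (%ss, $%s)\n' \
    "$FELLOWSHIP_AGENT" "$FELLOWSHIP_STATUS" "$FELLOWSHIP_TASK" \
    "$FELLOWSHIP_BRANCH" "$FELLOWSHIP_DURATION" "$FELLOWSHIP_COST"
}
notify_message
```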
Memory Graph (MCP)
The Memory Graph is a local SQLite-backed knowledge graph that persists structured knowledge across every run. It's exposed to agents via an MCP (Model Context Protocol) server.
Node Types
| Type | Purpose |
|---|---|
| `learning` | Lessons derived from completed runs |
| `gotcha` | Technical pitfalls and warnings to avoid |
| `decision` | Architectural and design choices |
| `code-pattern` | Reusable patterns found in the codebase |
| `architecture` | High-level structural knowledge |
MCP Tools Exposed
| Tool | Description |
|---|---|
| `graph_search` | Full-text search across all nodes |
| `graph_query` | Structured query by type, tags, or relationships |
| `record_learning` | Add a new learning node |
| `record_gotcha` | Add a new gotcha node |
| `record_decision` | Add a new decision node |
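Since these are standard MCP tools, an agent's call is plain JSON-RPC under the hood. A hypothetical graph_search invocation (the argument names are assumptions):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "graph_search",
    "arguments": { "query": "websocket authentication" }
  }
}
```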
Mandatory First Step
Every run begins with a `graph_search` before any code is written. This ensures the agent consults accumulated project knowledge — gotchas, past decisions, known patterns — before starting implementation.
Organic Registration
Knowledge is recorded only when there's real friction or a meaningful insight. The graph grows gradually and deliberately rather than accumulating noise, which keeps retrieval signal high.
Graph Versioning
The Memory Graph is portable. Export it to a `.jsonl.gz` file, commit it to your repo, and anyone who clones the project can restore the full knowledge base with `fellowship import`.
What to commit
| File | Commit? | Why |
|---|---|---|
| `.fellowship/graph.jsonl.gz` | ✓ Yes | Portable graph snapshot — the knowledge baseline |
| `.fellowship/config.yaml` | ✓ Yes | Project config shared across the team |
| `.fellowship/team.yaml` | ✓ Yes | Agent definitions and role boundaries |
| `.fellowship/profiles/` | ✓ Yes | Agent profiles and guidelines |
| `.fellowship/fellowship.db` | ✗ No | Binary SQLite file — gitignored, regenerated via `fellowship import` |
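The ✗ row translates to a one-line ignore entry, sketched here:

```
# .gitignore — keep the binary SQLite database out of version control
.fellowship/fellowship.db
```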
# Export and commit the graph
fellowship export
git add .fellowship/graph.jsonl.gz
git commit -m "chore: update knowledge graph snapshot"
# New team member restores full knowledge
git clone <repo>
fellowship import

Smart Agent Selection
When you run a task, Fellowship automatically picks the most appropriate agent from your team. The selection uses a two-phase approach:
- Keyword matching — each agent's `scope` in `team.yaml` is scored against the task description
- LLM fallback — if no agent scores above `HIGH_CONFIDENCE_THRESHOLD = 0.4`, an LLM call disambiguates
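To give a feel for phase 1, here is an illustrative scorer — NOT Fellowship's actual algorithm — that rates an agent's scope by the fraction of task words found in it:

```shell
# Illustrative sketch of keyword matching — not Fellowship's real scorer.
score_scope() {  # usage: score_scope "<task>" "<scope>"
  task=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  scope=$(printf '%s' "$2" | tr '[:upper:]' '[:lower:]')
  hits=0; total=0
  for w in $task; do
    total=$((total + 1))
    # count the word as a hit if it appears (space-prefixed) in the scope
    case " $scope " in *" $w"*) hits=$((hits + 1)) ;; esac
  done
  # emit hits/total as a crude confidence score in [0, 1]
  awk -v h="$hits" -v t="$total" 'BEGIN { printf "%.2f\n", t ? h / t : 0 }'
}

score_scope "add login form" "React components, CSS, UI, forms, pages, routing"
```

With that scope, "add login form" scores 0.33 (only "form" matches, inside "forms"), which would fall below the 0.4 threshold and trigger the LLM fallback.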
You can bypass the selector with the --agent <id> flag:
# Let Fellowship pick the agent
fellowship run "add login form"
# Force a specific agent
fellowship run "add login form" --agent carlos

Improving Selection via team.yaml
A well-defined scope field in team.yaml is the most effective way to improve automatic selection. Include specific technologies, file paths, and task verbs that belong to each agent:
agents:
- id: luna
role: Frontend engineer
scope: React components, CSS, UI, forms, pages, routing, Tailwind
- id: carlos
role: Backend engineer
    scope: API endpoints, database queries, auth, middleware, services

fellowship fix
fellowship fix is a specialized command for bug resolution. It runs a three-step pipeline:
- Diagnose — analyzes the bug description against the codebase and memory graph
- Generate fix spec — produces a targeted spec scoped to the specific defect
- Run — executes the fix using the most appropriate agent
fellowship fix "payment webhook returns 500 when amount is zero"

Unlike `fellowship run`, the fix command treats the input as a bug description rather than a feature request. The spec engine focuses on root-cause analysis and minimal, targeted changes.
fellowship memory — Interactive TUI
fellowship memory (alias: fellowship mem) opens an interactive terminal UI for browsing everything Fellowship knows about your project.
fellowship memory # open TUI
fellowship mem       # alias

The TUI has four views: Dashboard (stats overview), Runs (full run history), Memory (browse learnings, gotchas, decisions, reviews), and Diff (side-by-side diff per run). Navigate with vim keys (j/k) or arrow keys. See the Commands reference for all keyboard shortcuts.
Security
Fellowship includes prompt injection protection:
- 30+ detection patterns — XML tags, ChatML tokens, role markers, identity overrides, authority markers
- Sanitization on every run — Learnings, gotchas, and architecture scanned before injection
- `fellowship sanitize` — Manual scan with severity levels
- Dry-run reports — `--dry-run` shows sanitization results before committing to a run
- `strictMode` — Optional throw-on-detection for zero-tolerance environments
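To give a feel for pattern-based detection, here is an illustrative grep-style scan over a memory file — three sample patterns only, not Fellowship's actual 30+ pattern set:

```shell
# Illustrative injection scan — sample patterns, not Fellowship's real set.
# Prints line-numbered matches; exits non-zero when the file is clean.
scan_for_injection() {  # usage: scan_for_injection <file>
  grep -nEi '<\|im_start\|>|</?system>|ignore (all )?previous instructions' "$1"
}
```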
Skills System
Fellowship ships with built-in skills and supports project-level overrides:
fellowship/skills/ ← Built-in (shipped with CLI)
code-review.md ← Sentinel's review guidelines
.fellowship/skills/ ← Project overrides (optional)
  code-review.md           ← Your custom review guidelines

Project skills override built-in skills with the same name. Profiles reference skills with `{{skill:name}}`.
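For example, a hypothetical reviewer profile fragment (the real profile layout may differ) could pull in the skill like this:

```markdown
<!-- .fellowship/profiles/sentinel.md — hypothetical fragment -->
You are Sentinel, the project's code reviewer.
Review strictly against the spec's acceptance criteria.

{{skill:code-review}}
```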
Platform Support
| Feature | macOS | Linux | Windows |
|---|---|---|---|
| Core CLI | ✓ | ✓ | ✓ |
| Sounds | afplay | paplay/aplay | PowerShell |
| Desktop notifications | osascript | notify-send | PowerShell toast |
| Terminal flash | ✓ | ✓ | ✓ |
| Keychain credentials | macOS Keychain | libsecret | Credential Store |
| Background runs | ✓ | ✓ | ✓ |
Supported Providers
| Provider | Agent CLI | Model |
|---|---|---|
| Anthropic | Claude Code | claude-sonnet-4-6, claude-opus-4-6 |
| OpenAI | Codex | gpt-4o, o3-mini |