# research-pi
Headless research orchestrator for pi (the coding agent by Mario Zechner, now stewarded by Earendil). Spins up a pi orchestrator that delegates to read-only subagents to research topics, map codebases, or plan features.
## Install

Requires Node.js ≥18, pnpm, and pi already installed.

```bash
git clone https://github.com/sleepyeldrazi/research-pi.git ~/.pi/research
cd ~/.pi/research
pnpm install && pnpm build
ln -s ~/.pi/research/bin/research ~/.local/bin/research
```

Or as a one-liner (after you host the repo):

```bash
curl -fsSL https://raw.githubusercontent.com/sleepyeldrazi/research-pi/main/install.sh | bash
```
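The symlink step assumes `~/.local/bin` is on your `PATH`. A quick sanity check (a sketch, not part of the installer):

```shell
# Warn if ~/.local/bin is missing from PATH (the symlink above lands there).
case ":$PATH:" in
  *":$HOME/.local/bin:"*) echo "ok: ~/.local/bin is on PATH" ;;
  *) echo "note: add ~/.local/bin to PATH in your shell profile" ;;
esac
```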
## Quick Start

```bash
# 1. Research a new topic
research --model k2p5 --start_research \
  --task "build a native android app with gemma 4 e4b and all 3 input modalities"

# 2. Onboard an existing project
research --model kimi-for-coding --onboarding

# 3. Plan a new feature
research --model minimax-token-plan/MiniMax-M2.7 --new_feature \
  --task "add a real-time collaborative editor to my nextjs app"
```
## Configuration

All config lives in `~/.pi/research/config.json` (created automatically on first install).
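Putting the sections below together, a minimal `config.json` might look like this (only the keys documented in this README are shown; the generated file may contain more):

```json
{
  "models": {
    "default": "kimi-for-coding"
  },
  "webSearch": {
    "mode": "extension",
    "searxngUrl": "http://192.168.178.58:7777"
  }
}
```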
### Models

Models are resolved against your `~/.pi/agent/models.json`. Use aliases, provider names, or full `provider/id` paths:

```json
{
  "models": {
    "default": "kimi-for-coding",
    "web-researcher": "k2p5",
    "paper-researcher": "minimax-token-plan/MiniMax-M2.7"
  }
}
```

- `default` — used if you don't pass `--model`
- Per-agent overrides — any subagent name can have its own model
### Web Search (the important part)

The orchestrator and subagents need a way to search the web. Three backends are supported:

| Mode | What it does | When to use |
|---|---|---|
| `extension` (default) | Embedded SearXNG extension (`web_search` / `web_fetch` tools) | You have a local SearXNG instance |
| `mcp` | Proxies to an MCP server that exposes `web_search` / `web_fetch` | You already run an MCP server (e.g. for llama.cpp) |
| `skill` | Raw curl commands against SearXNG, taught via a skill file | Fallback, no extensions needed |
#### Option A: SearXNG Extension (default)

Point it at your SearXNG instance:

```json
{
  "webSearch": {
    "mode": "extension",
    "searxngUrl": "http://192.168.178.58:7777"
  }
}
```

Then just run normally. The orchestrator and all research subagents automatically get the `web_search` and `web_fetch` tools.
#### Option B: MCP Proxy

If you run an MCP server (e.g. web-search-mcp):

```json
{
  "webSearch": {
    "mode": "mcp",
    "mcpUrl": "http://sleepy-think:3001/mcp"
  }
}
```

Then run with `--web-search-mode mcp`, or set it as the default in config.
#### Option C: Skill Fallback

No extensions, just raw curl:

```json
{
  "webSearch": {
    "mode": "skill",
    "searxngUrl": "http://192.168.178.58:7777"
  }
}
```

The `web-search-bash` skill teaches subagents how to search via curl.
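As a rough sketch of what a skill-taught search looks like (assumptions: your SearXNG instance has JSON output enabled in its settings, and the exact query shape the skill file teaches may differ):

```shell
# Build the kind of SearXNG query URL the skill would have a subagent curl.
base="${SEARXNG_URL:-http://192.168.178.58:7777}"
query="chunked prefilling"
url="$base/search?format=json&q=$(printf %s "$query" | sed 's/ /%20/g')"
echo "$url"
# The subagent would then fetch it with:  curl -fsSL "$url"
```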
### Environment Variables

| Var | Purpose |
|---|---|
| `SEARXNG_URL` | Override the SearXNG URL for `extension`/`skill` modes |
| `MCP_URL` | Override the MCP server URL for `mcp` mode |
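A minimal sketch of the assumed precedence: the environment variable, when set, wins over the corresponding `config.json` value (the config value is hard-coded here purely for illustration):

```shell
# Resolve the effective SearXNG URL: env var first, then the config value.
config_url="http://192.168.178.58:7777"   # stand-in for config.json's searxngUrl
effective_url="${SEARXNG_URL:-$config_url}"
echo "$effective_url"
```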
## Usage

### `--start_research` — Research a new topic

Spawns web and paper researchers in parallel, then synthesizes the findings into:

- `PLAN.md` — POC plan, stack, risks
- `research/web-summary.md` — web findings with links
- `research/paper-summary.md` — papers and technical reports (if relevant)

```bash
research --model k2p5 --start_research \
  --task "self-hosted LLM inference server with speculative decoding and chunked prefilling"
```
### `--onboarding` — Map an existing codebase

Spawns codebase mappers (a single mapper, or several for large repos), then writes:

- `MAP.md` — feature-to-file mapping
- `ONBOARDING.md` — per-feature guide

```bash
# Whole repo
research --model kimi-for-coding --onboarding

# Scoped to a directory
research --model kimi-for-coding --onboarding \
  --task "backend/api/v2 only"
```
### `--new_feature` — Plan a feature

Analyzes the current project, researches best practices, and writes:

- `FEATURE.md` — SOTA findings + tailored integration steps

Automatically reads existing `MAP.md` / `ONBOARDING.md` if present to fast-forward.

```bash
research --model minimax-token-plan/MiniMax-M2.7 --new_feature \
  --task "add a real-time collaborative editor to my nextjs app"
```

If the topic feels paper-adjacent (ML, algorithms, performance), the orchestrator will also spawn a paper researcher automatically.
### Other Flags

```
--output-dir <path>    Where to write files (default: cwd)
--timeout <minutes>    Per-agent timeout (default: 15)
--verbose              Stream orchestrator output to stderr
--web-search-mode      Override config: extension | mcp | skill
--mcp-url <url>        Override MCP URL
```
## How It Works

- The launcher resolves your `--model` against `~/.pi/agent/models.json`
- Spawns a headless pi orchestrator with `read`, `write`, `bash`, `grep`, `find`, `ls` tools
- The orchestrator assesses the task and spawns read-only subagents via `spawn_subagent`
- Subagents return findings as text; the orchestrator writes the final markdown files
- Status messages with timestamps are printed to stderr so you know what's happening

**Read-only guarantee:** Only the orchestrator has `write`. Subagents get `read`, `grep`, `find`, `ls` for code, or `read`, `bash` for research (bash is limited to curl/fetch). No subagent can modify your codebase.