CoreCoder

Security Audit: Pass

Health — Pass
  • License — MIT
  • Description — Repository has a description
  • Active repo — Last push 0 days ago
  • Community trust — 279 GitHub stars
Code — Pass
  • Code scan — Scanned 12 files during light audit; no dangerous patterns found
Permissions — Pass
  • Permissions — No dangerous permissions requested
Purpose

This is a minimal, lightweight AI coding agent designed as an educational blueprint. Inspired by Claude Code, it acts as an assistant that can read, edit, and execute commands on local files to fix bugs or write code, and it is compatible with almost any third-party Large Language Model (LLM).

Security Assessment

Risk Rating: Medium

As an autonomous coding agent, this tool inherently performs high-risk actions. It executes shell commands and reads or modifies local files based on instructions from an external LLM. The automated code scan found no dangerous patterns, no hardcoded secrets, and no unnecessary dangerous permissions requested by the repository itself. However, because it acts as a bridge between an external AI model and your local system's command line, its overall security is heavily dependent on the behavior of the LLM you connect it to. Any connected model could potentially be prompted to execute destructive system commands.
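The dangerous-command blocking mentioned in the audit (implemented in tools/bash.py) typically works by rejecting commands that start with a known-destructive prefix before they ever reach the shell. The sketch below is illustrative only; the blocklist contents and function name are assumptions, not CoreCoder's actual code:

```python
import shlex

# Illustrative blocklist; the real tools/bash.py may use different rules.
BLOCKED_PREFIXES = [
    ("rm", "-rf"),   # recursive force-delete
    ("sudo",),       # privilege escalation
    ("shutdown",),   # halting the machine
]

def is_blocked(command: str) -> bool:
    """Return True if the shell command starts with a known dangerous prefix."""
    try:
        tokens = shlex.split(command)
    except ValueError:
        return True  # unparseable input (e.g. unclosed quote) is rejected outright
    for prefix in BLOCKED_PREFIXES:
        if tuple(tokens[:len(prefix)]) == prefix:
            return True
    return False
```

A static prefix list like this cannot catch every destructive command (aliases, pipes, or `bash -c` wrappers can evade it), which is why the overall risk still depends on the connected model's behavior.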

Quality Assessment

Quality is exceptionally high for an open-source educational tool. The project is licensed under the permissive and standard MIT license. It is highly maintained, featuring repository activity as recent as today. The developer community shows a strong vote of confidence, with 279 GitHub stars and an active test suite. The codebase is deliberately small—distilled from over half a million lines down to approximately 950 lines—making it extremely easy for any developer to read, audit, and understand in a single sitting.

Verdict

Use with caution: the repository itself is clean, well-maintained, and transparent, but the tool inherently gives an external AI model the ability to execute shell commands and modify files on your machine.
SUMMARY

Minimal AI coding agent (~950 lines of Python) inspired by Claude Code. Works with any LLM. Think nanoGPT for coding agents.

README.md

CoreCoder

Formerly NanoCoder — renamed to avoid confusion with Nano-Collective/nanocoder. All links from the old repo redirect here automatically.

PyPI · Python · License: MIT · Tests

Chinese | English | Claude Code Architecture Deep Dive (7 articles)

512,000 lines of TypeScript → 950 lines of Python.

I spent two days reverse-engineering the leaked Claude Code source — all half a million lines. Then I stripped it down to the load-bearing walls and rebuilt them in Python. The result: every key architectural pattern from Claude Code, in a codebase you can read in one sitting.

CoreCoder is not another AI coding tool. It's a blueprint — the nanoGPT of coding agents. Read it, fork it, build your own.


$ corecoder -m kimi-k2.5

You > read main.py and fix the broken import

  > read_file(file_path='main.py')
  > edit_file(file_path='main.py', ...)

--- a/main.py
+++ b/main.py
@@ -1 +1 @@
-from utils import halper
+from utils import helper

Fixed: halper → helper.

What You Get

Claude Code's 512K lines distilled to 7 patterns that actually matter:

| Pattern | Claude Code | CoreCoder |
|---|---|---|
| Search-and-replace editing (unique match + diff) | FileEditTool | tools/edit.py — 70 lines |
| Parallel tool execution | StreamingToolExecutor (530 lines) | agent.py — ThreadPool |
| 3-layer context compression | HISTORY_SNIP → Microcompact → CONTEXT_COLLAPSE | context.py — 145 lines |
| Sub-agent with isolated context | AgentTool (1,397 lines) | tools/agent.py — 50 lines |
| Dangerous command blocking | BashTool (1,143 lines) | tools/bash.py — 95 lines |
| Session persistence | QueryEngine (1,295 lines) | session.py — 65 lines |
| Dynamic system prompt | prompts.ts (914 lines) | prompt.py — 35 lines |

Every pattern is a real, runnable implementation — not a diagram or a blog post.
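The first pattern in the table, search-and-replace editing with a unique-match requirement and a unified diff, can be sketched in a few lines. The function name and error messages here are illustrative assumptions, not CoreCoder's actual tools/edit.py API:

```python
import difflib

def edit_text(original: str, old: str, new: str, path: str = "file") -> tuple[str, str]:
    """Replace `old` with `new` only if `old` occurs exactly once; return (result, diff)."""
    count = original.count(old)
    if count == 0:
        raise ValueError("search string not found")
    if count > 1:
        raise ValueError(f"search string matches {count} times; make it unique")
    result = original.replace(old, new, 1)
    # Emit a unified diff so the user can review exactly what changed
    diff = "".join(difflib.unified_diff(
        original.splitlines(keepends=True),
        result.splitlines(keepends=True),
        fromfile=f"a/{path}", tofile=f"b/{path}",
    ))
    return result, diff
```

Requiring a unique match is the safety property: if the search string is ambiguous, the edit is refused instead of silently changing the wrong occurrence.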

Install

pip install corecoder

Pick your model — any OpenAI-compatible API works. You can export env vars or drop a .env file in your project root:

# Kimi K2.5
export OPENAI_API_KEY=your-key OPENAI_BASE_URL=https://api.moonshot.ai/v1
corecoder -m kimi-k2.5

# Claude Opus 4.6 (via OpenRouter)
export OPENAI_API_KEY=your-key OPENAI_BASE_URL=https://openrouter.ai/api/v1
corecoder -m anthropic/claude-opus-4-6

# OpenAI GPT-5
export OPENAI_API_KEY=sk-...
corecoder -m gpt-5

# DeepSeek V3
export OPENAI_API_KEY=sk-... OPENAI_BASE_URL=https://api.deepseek.com
corecoder -m deepseek-chat

# Qwen 3.5
export OPENAI_API_KEY=sk-... OPENAI_BASE_URL=https://dashscope.aliyuncs.com/compatible-mode/v1
corecoder -m qwen-max

# Ollama (local)
export OPENAI_API_KEY=ollama OPENAI_BASE_URL=http://localhost:11434/v1
corecoder -m qwen3:32b

# One-shot mode
corecoder -p "add error handling to parse_config()"

Architecture

The whole thing fits in your head:

corecoder/
├── cli.py            REPL + commands               160 lines
├── agent.py          Agent loop + parallel tools    120 lines
├── llm.py            Streaming client + retry       150 lines
├── context.py        3-layer compression            145 lines
├── session.py        Save/resume                     65 lines
├── prompt.py         System prompt                   35 lines
├── config.py         Env config                      30 lines
└── tools/
    ├── bash.py       Shell + safety + cd tracking    95 lines
    ├── edit.py       Search-replace + diff            70 lines
    ├── read.py       File reading                     40 lines
    ├── write.py      File writing                     30 lines
    ├── glob_tool.py  File search                      35 lines
    ├── grep.py       Content search                   65 lines
    └── agent.py      Sub-agent spawning               50 lines

Use as a Library

from corecoder import Agent, LLM

llm = LLM(model="kimi-k2.5", api_key="your-key", base_url="https://api.moonshot.ai/v1")
agent = Agent(llm=llm)
response = agent.chat("find all TODO comments in this project and list them")

Add Your Own Tools (~20 lines)

from corecoder.tools.base import Tool
import urllib.request


class HttpTool(Tool):
    name = "http"
    description = "Fetch a URL and return its body as text."
    parameters = {"type": "object", "properties": {"url": {"type": "string"}}, "required": ["url"]}

    def execute(self, url: str) -> str:
        # Truncate to 5,000 characters so one fetch cannot flood the model's context
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read().decode(errors="replace")[:5000]

Commands

/model <name>    Switch model mid-conversation
/compact         Compress context (like Claude Code's /compact)
/tokens          Token usage
/save            Save session to disk
/sessions        List saved sessions
/reset           Clear history
quit             Exit

How It Compares

| | Claude Code | Claw-Code | Aider | CoreCoder |
|---|---|---|---|---|
| Code | 512K lines (closed) | 100K+ lines | 50K+ lines | 1,300 lines |
| Models | Anthropic only | Multi | Multi | Any OpenAI-compatible |
| Readable? | No | Hard | Medium | One afternoon |
| Purpose | Use it | Use it | Use it | Understand it, build yours |

The Deep Dive

I wrote 7 articles breaking down Claude Code's architecture — the agent loop, tool system, context compression, streaming executor, multi-agent, and 44 hidden feature flags. If you want to understand why CoreCoder is designed this way, start there.

License

MIT. Fork it, learn from it, ship something better. A mention of this project is appreciated.


Built by Yufeng He · Agentic AI Researcher @ Moonshot AI (Kimi)

Claude Code Source Analysis — 170K+ reads, 6000 bookmarks on Zhihu
