ralph-orchestrator

Security Audit: Passed
Health: Passed
  • License — License: MIT
  • Description — Repository has a description
  • Active repo — Last push 2 days ago
  • Community trust — 2417 GitHub stars
Code: Passed
  • Code scan — Scanned 12 files during light audit, no dangerous patterns found
Permissions: Passed
  • Permissions — No dangerous permissions requested

No AI report is available for this listing yet.

SUMMARY

An improved implementation of the Ralph Wiggum technique for autonomous AI agent orchestration

README.md

Ralph Orchestrator

License · Rust · Build · Coverage · Mentioned in Awesome Claude Code · Docs · Discord

A hat-based orchestration framework that keeps AI agents in a loop until the task is done.

"Me fail English? That's unpossible!" - Ralph Wiggum

Documentation | Getting Started | Presets

Installation

Via npm (Recommended)

npm install -g @ralph-orchestrator/ralph-cli

Via Homebrew (macOS/Linux)

brew install ralph-orchestrator

Via Cargo

cargo install ralph-cli

Quick Start

# 1. Initialize Ralph with your preferred backend
ralph init --backend claude

# 2. Plan your feature (interactive PDD session)
ralph plan "Add user authentication with JWT"
# Creates: specs/user-authentication/requirements.md, design.md, implementation-plan.md

# 3. Implement the feature
ralph run -p "Implement the feature in specs/user-authentication/"

Ralph iterates until it outputs LOOP_COMPLETE or hits the iteration limit.
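The iterate-until-done behavior can be sketched as a plain loop. This is an illustrative stub, not Ralph's implementation: the "agent call" here is faked, and the completion marker check mirrors the documented LOOP_COMPLETE sentinel.

```shell
# Conceptual sketch of the Ralph loop: run the agent, check its output for
# the completion sentinel, stop at the iteration limit otherwise.
max_iters=5
i=0
status="running"
while [ "$i" -lt "$max_iters" ]; do
  i=$((i + 1))
  # Stub "agent call": pretend the agent finishes on the third pass.
  if [ "$i" -ge 3 ]; then output="LOOP_COMPLETE"; else output="still working"; fi
  case "$output" in
    *LOOP_COMPLETE*) status="complete"; break ;;
  esac
done
echo "$status after $i iteration(s)"   # → complete after 3 iteration(s)
```

If the sentinel never appears, the loop falls through after max_iters passes with status still "running", which is the iteration-limit case described above.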

For simpler tasks, skip planning and run directly:

ralph run -p "Add input validation to the /users endpoint"

Web Dashboard (Alpha)

Alpha: The web dashboard is under active development. Expect rough edges and breaking changes.


Ralph includes a web dashboard for monitoring and managing orchestration loops.

ralph web                              # starts Rust RPC API + frontend + opens browser
ralph web --no-open                    # skip browser auto-open
ralph web --backend-port 4000          # custom RPC API port
ralph web --frontend-port 8080         # custom frontend port
ralph web --legacy-node-api            # opt into deprecated Node tRPC backend

MCP Server Workspace Scope

ralph mcp serve is scoped to a single workspace root per server instance.

ralph mcp serve --workspace-root /path/to/repo

Precedence is:

  1. --workspace-root
  2. RALPH_API_WORKSPACE_ROOT
  3. current working directory
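The precedence order above can be expressed as a small lookup. This is a sketch of the documented rule only; resolve_workspace_root is a hypothetical helper, not part of the Ralph CLI.

```shell
# Resolve the workspace root using the documented precedence:
# explicit flag, then RALPH_API_WORKSPACE_ROOT, then the current directory.
resolve_workspace_root() {
  flag="$1"
  if [ -n "$flag" ]; then
    echo "$flag"                         # 1. --workspace-root flag wins
  elif [ -n "$RALPH_API_WORKSPACE_ROOT" ]; then
    echo "$RALPH_API_WORKSPACE_ROOT"     # 2. environment variable
  else
    pwd                                  # 3. current working directory
  fi
}

resolve_workspace_root "/repo/a"   # → /repo/a (flag set, env ignored)
```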

For multi-repo use, run one MCP server instance per repo/workspace. Ralph's current
control-plane APIs persist config, tasks, loops, planning sessions, and collections
under a single workspace root, so server-per-workspace is the deterministic model.

Requirements:

  • Rust toolchain (for ralph-api)
  • Node.js >= 18 + npm (for the frontend)

On first run, ralph web auto-detects missing node_modules and runs npm install.
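That first-run check amounts to testing for the dependency directory before installing. The sketch below is an assumption about the behavior, not `ralph web`'s actual source:

```shell
# Install frontend dependencies only when node_modules is missing.
needs_install() {
  [ ! -d "$1/node_modules" ]
}

dir=$(mktemp -d)
if needs_install "$dir"; then
  echo "node_modules missing: would run npm install"
fi
```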

To set up Node.js:

# Option 1: nvm (recommended)
nvm install    # reads .nvmrc

# Option 2: direct install
# https://nodejs.org/

For development:

npm install              # install frontend + legacy backend deps
npm run dev:api          # Rust RPC API (port 3000)
npm run dev:web          # frontend (port 5173)
npm run dev              # frontend only (default)
npm run dev:legacy-server  # deprecated Node backend (optional)
npm run test             # all frontend/backend workspace tests

MCP Server Mode

Ralph can run as an MCP server over stdio for MCP-compatible clients:

ralph mcp serve

Use this mode from an MCP client configuration rather than an interactive terminal workflow.
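Many MCP clients register stdio servers through an `mcpServers` map (the convention used by Claude Desktop, for example). A hypothetical entry for Ralph might look like the following; the server name `ralph` and the workspace path are placeholders, so check your client's documentation for the exact schema:

```json
{
  "mcpServers": {
    "ralph": {
      "command": "ralph",
      "args": ["mcp", "serve", "--workspace-root", "/path/to/repo"]
    }
  }
}
```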

What is Ralph?

Ralph implements the Ralph Wiggum technique — autonomous task completion through continuous iteration. It supports:

  • Multi-Backend Support — Claude Code, Kiro, Gemini CLI, Codex, Amp, Copilot CLI, OpenCode
  • Hat System — Specialized personas coordinating through events
  • Backpressure — Gates that reject incomplete work (tests, lint, typecheck)
  • Memories & Tasks — Persistent learning and runtime work tracking
  • 5 Supported Builtins — code-assist, debug, research, review, and pdd-to-code-assist, with more patterns documented as examples
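The backpressure idea above reduces to: run every gate, and reject the iteration if any fails. A conceptual gate runner, with `true`/`false` standing in for real checks such as a test suite, linter, or typechecker:

```shell
# Run each gate command in order; the first failure rejects the work.
run_gates() {
  for check in "$@"; do
    if ! $check; then
      echo "rejected: $check failed"
      return 1
    fi
  done
  echo "all gates passed"
}

run_gates true true    # → all gates passed
run_gates true false   # → rejected: false failed
```

In practice the arguments would be commands like `npm test` or `cargo clippy`; a non-zero exit from any of them sends the agent back into the loop instead of letting incomplete work through.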

RObot (Human-in-the-Loop)

Ralph supports human interaction during orchestration via Telegram. Agents can ask questions and block until answered; humans can send proactive guidance at any time.

Quick onboarding (Telegram):

ralph bot onboard --telegram   # guided setup (token + chat id)
ralph bot status               # verify config
ralph bot test                 # send a test message
ralph run -c ralph.bot.yml -p "Help the human"

# ralph.yml
RObot:
  enabled: true
  telegram:
    bot_token: "your-token"  # Or RALPH_TELEGRAM_BOT_TOKEN env var

  • Agent questions — Agents emit human.interact events; the loop blocks until a response arrives or times out
  • Proactive guidance — Send messages anytime to steer the agent mid-loop
  • Parallel loop routing — Messages route via reply-to, @loop-id prefix, or default to primary
  • Telegram commands — /status, /tasks, /restart for real-time loop visibility

See the Telegram guide for setup instructions.

Documentation

Full documentation is available at mikeyobrien.github.io/ralph-orchestrator.

Contributing

Contributions are welcome! See CONTRIBUTING.md for guidelines and CODE_OF_CONDUCT.md for community standards.

License

MIT License — See LICENSE for details.

💬 Community & Support

Join the ralph-orchestrator community to discuss AI agent patterns, get help with your implementation, or contribute to the roadmap.

  • Discord: Join our server to chat with the maintainers and other users in real-time.
  • GitHub Issues: For bug reports and formal feature requests, please use the Issue Tracker.

Acknowledgments


"I'm learnding!" - Ralph Wiggum
