Security Audit: Warn

Health — Warn
  • License — MIT
  • Description — Repository has a description
  • Active repo — Last push 0 days ago
  • Low visibility — Only 6 GitHub stars
Code — Pass
  • Code scan — Scanned 12 files during light audit, no dangerous patterns found
Permissions — Pass
  • Permissions — No dangerous permissions requested


SUMMARY

Orchestral is workflow orchestration for grounded agents.

README.md

Orchestral

Workflow orchestration for grounded agents.

Chinese version

What It Does

  • Orchestrates stateful workflows, not one-off tool calls
  • Executes typed actions with an agent loop and mini-DAG
  • Replans from real state and verifies before finishing
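The mini-DAG idea can be sketched in a few lines. This is an illustrative sketch with made-up types (`Step`, `execute_order`), not the actual orchestral API:

```rust
use std::collections::HashSet;

// Hypothetical typed step: an id plus the ids it depends on.
struct Step {
    id: &'static str,
    deps: Vec<&'static str>,
}

// Kahn-style pass: repeatedly run every step whose dependencies are done.
// Returns the execution order; panics if the plan contains a cycle.
fn execute_order(steps: &[Step]) -> Vec<&'static str> {
    let mut done: HashSet<&str> = HashSet::new();
    let mut order = Vec::new();
    while order.len() < steps.len() {
        let before = order.len();
        for s in steps {
            if !done.contains(s.id) && s.deps.iter().all(|d| done.contains(d)) {
                done.insert(s.id);
                order.push(s.id);
            }
        }
        assert!(order.len() > before, "cycle detected in plan DAG");
    }
    order
}

fn main() {
    // fetch and load are independent; transform waits on both; report is last.
    let plan = vec![
        Step { id: "fetch", deps: vec![] },
        Step { id: "load", deps: vec![] },
        Step { id: "transform", deps: vec!["fetch", "load"] },
        Step { id: "report", deps: vec!["transform"] },
    ];
    println!("{:?}", execute_order(&plan));
}
```

Steps with no unmet dependencies are eligible in the same round, which is what lets the executor run them in parallel.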

See It Work

One user command. Orchestral coordinates an MCP data source, a domain skill, and shell execution into a multi-step pipeline:

You:  "Query Q4 sales from the API, fill the Excel template with actuals
       and formulas, write a markdown summary comparing to budget."

Orchestral automatically:
  ├─ mcp__sales-api__query_sales_data  → fetch actuals from external API
  ├─ file_read budget.yaml             → load budget targets
  ├─ shell (venv python + openpyxl)    → fill Excel: values, formulas, status
  └─ file_write report.md              → generate comparison report

The planner discovers MCP tools at startup via tool_lookup, activates the xlsx skill for openpyxl guidance, and uses the skill's virtual environment to run Python. When a step fails, the agent loop observes the error and replans — no manual intervention needed.
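Deferred schema loading can be pictured as a registry that knows tool names at startup but fetches each schema only on first use. The `ToolRegistry` type and its string "schemas" below are hypothetical stand-ins, not orchestral's real MCP bridge:

```rust
use std::collections::HashMap;

// Hypothetical registry: tool names are known at startup, schemas are not.
struct ToolRegistry {
    schemas: HashMap<String, Option<String>>, // None = schema not fetched yet
    fetches: u32,                             // how many schema fetches happened
}

impl ToolRegistry {
    fn new(tool_names: &[&str]) -> Self {
        let schemas = tool_names.iter().map(|n| (n.to_string(), None)).collect();
        Self { schemas, fetches: 0 }
    }

    // tool_lookup: fetch the schema on first use, then serve it from cache.
    fn tool_lookup(&mut self, name: &str) -> Option<&str> {
        let entry = self.schemas.get_mut(name)?;
        if entry.is_none() {
            self.fetches += 1;
            // The real bridge would ask the MCP server for the JSON schema here.
            *entry = Some(format!("{{\"tool\":\"{name}\"}}"));
        }
        entry.as_deref()
    }
}

fn main() {
    let mut reg = ToolRegistry::new(&["mcp__sales-api__query_sales_data"]);
    reg.tool_lookup("mcp__sales-api__query_sales_data");
    reg.tool_lookup("mcp__sales-api__query_sales_data");
    assert_eq!(reg.fetches, 1); // second lookup hit the cache
}
```

The payoff of this pattern is that probing many MCP servers at startup stays cheap: only tools the planner actually selects ever pay the schema-fetch cost.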

Try it:

export OPENROUTER_API_KEY="sk-or-..."
cargo build -p orchestral-cli
cargo run -p orchestral-cli -- scenario \
  --spec configs/scenarios/sales_report_pipeline.smoke.yaml

Architecture

Intent → Planner (LLM) → Normalizer (DAG validation) → Executor (parallel + retry)
            ↑                                                    ↓
            └──── agent loop: observe execution result ──────────┘
  • Agent loop — planner iterates up to 6 rounds, observing results and replanning
  • MCP integration — servers probed at startup, each tool registered as a callable action with deferred schema loading via tool_lookup
  • Skills — domain knowledge (SKILL.md files) auto-discovered and injected into planner context; skill_activate for on-demand loading
  • Typed actions — document inspect/patch/verify, structured config patch/verify, shell, file I/O, HTTP
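As a rough sketch of the agent loop (the 6-round budget comes from the bullet above; `execute` is a stand-in that fails twice before succeeding, where the real system runs the DAG and feeds errors back to the planner LLM):

```rust
// Round budget: the planner replans at most 6 times.
const MAX_ROUNDS: u32 = 6;

enum Outcome {
    Done,
    Failed(String),
}

// Stand-in for "run the DAG": fails in rounds 1 and 2, succeeds in round 3.
fn execute(round: u32) -> Outcome {
    if round < 3 {
        Outcome::Failed(format!("step error in round {round}"))
    } else {
        Outcome::Done
    }
}

// Observe each round's result; stop on success or when the budget runs out.
fn agent_loop() -> (bool, u32) {
    for round in 1..=MAX_ROUNDS {
        match execute(round) {
            Outcome::Done => return (true, round), // verified, finish
            Outcome::Failed(err) => {
                // In the real loop the planner LLM sees `err` and emits a new plan.
                eprintln!("round {round}: observed: {err}");
            }
        }
    }
    (false, MAX_ROUNDS)
}

fn main() {
    let (ok, rounds) = agent_loop();
    println!("succeeded: {ok} after {rounds} rounds");
}
```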

SDK

Use Orchestral as a library — register custom actions and lifecycle hooks with a builder API:

use orchestral::{Orchestral, core::action::*};

let app = Orchestral::builder()
    .action(MyCustomAction::new())
    .hook(MyLoggingHook::new())
    .planner_backend("openrouter")
    .planner_model("anthropic/claude-sonnet-4.5")
    .build()
    .await?;

let result = app.run("Analyze the data and generate a report").await?;
println!("{}", result.message);

See examples/sdk_quickstart.rs and examples/sdk_hooks.rs.
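A builder like the one above typically collects boxed trait objects. The `Action` trait and `Builder` here are a simplified sketch of that pattern, not orchestral's real API (the actual trait lives in orchestral-core and may differ):

```rust
// Simplified sketch of action registration; not the real orchestral trait.
trait Action {
    fn name(&self) -> &'static str;
    fn execute(&self, input: &str) -> Result<String, String>;
}

struct MyCustomAction;

impl Action for MyCustomAction {
    fn name(&self) -> &'static str {
        "my_custom_action"
    }
    fn execute(&self, input: &str) -> Result<String, String> {
        // A real action would do typed work: an HTTP call, a file patch, etc.
        Ok(format!("processed: {input}"))
    }
}

#[derive(Default)]
struct Builder {
    actions: Vec<Box<dyn Action>>, // registered actions, looked up by name at plan time
}

impl Builder {
    fn action(mut self, a: impl Action + 'static) -> Self {
        self.actions.push(Box::new(a));
        self
    }
    fn names(&self) -> Vec<&'static str> {
        self.actions.iter().map(|a| a.name()).collect()
    }
}

fn main() {
    let app = Builder::default().action(MyCustomAction);
    assert_eq!(app.names(), vec!["my_custom_action"]);
}
```

Boxing behind a trait lets the planner treat built-in and user-registered actions uniformly when it selects steps by name.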

Install

# As a library
cargo add orchestral

# As a CLI tool
cargo install orchestral-cli

Quick Start

Export one provider key:

export GOOGLE_API_KEY="..."  # or OPENAI_API_KEY, ANTHROPIC_API_KEY, OPENROUTER_API_KEY

CLI:

orchestral-cli run

As a library:

let app = Orchestral::builder()
    .planner_backend("google")
    .planner_model("gemini-2.5-flash")
    .build().await?;
let result = app.run("read README.md").await?;

Project Structure

core/orchestral-core     — Pure abstractions: Intent/Plan/Step, traits, DAG executor
core/orchestral-runtime  — LLM planners, actions, MCP bridge, skill system
core/orchestral          — Facade re-exporting core + runtime
apps/orchestral-cli      — CLI + TUI (ratatui)
apps/orchestral-telegram — Telegram bot adapter

Telegram Bot

Run Orchestral as a Telegram bot — each message goes through the full orchestration pipeline:

export TELEGRAM_BOT_TOKEN="your-bot-token"
export GOOGLE_API_KEY="your-key"  # or OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.
cargo run -p orchestral-telegram

The LLM backend is configurable in code (it defaults to Google Gemini). The bot supports file operations, shell commands, and multi-turn conversations.

Current Status

  • Core orchestration loop working with agent loop + mini-DAG
  • MCP per-tool registration with deferred schema loading
  • Skill auto-discovery and on-demand activation
  • SDK with builder API, lifecycle hooks, and programmatic execution
  • Telegram bot adapter (configurable LLM backend)
  • Document and structured config typed pipelines
  • Scenario smoke tests covering core workflows

License

MIT. See LICENSE.
