# orxhestra

Health — Passed
- License: Apache-2.0
- Description: Repository has a description
- Active repo: Last push 0 days ago
- Community trust: 14 GitHub stars

Code — Passed
- Code scan: Scanned 12 files during a light audit; no dangerous patterns found

Permissions — Passed
- Permissions: No dangerous permissions requested

Security Assessment: Overall risk is Medium. The framework acts as an orchestrator that executes AI-generated actions, including reading local files and executing shell commands. It explicitly offers an `--auto-approve` flag that bypasses the default safety prompt before destructive operations such as writing files or executing shell scripts. No hardcoded secrets or dangerous permission requests were found during the scan. It requires network access to function, since it relies on external LLM APIs (OpenAI, Anthropic, or Google) to generate responses and execute code.

Quality Assessment: The project passes standard health checks. It is licensed under the permissive Apache-2.0 license and was updated very recently (last push was today). The codebase is lightweight (12 files scanned) with no dangerous patterns detected. Community trust is currently low: with 14 GitHub stars and a niche scope, it is likely an early-stage or personal project rather than a battle-tested enterprise tool.

Verdict: Use with caution. While the code itself is clean and requires explicit approval for destructive actions by default, the ability to autonomously execute shell commands and edit files via AI poses inherent risks to your local environment if not monitored closely.
Multi-agent orchestration framework for Python — turn any agent setup into a CLI or server.
Compose multi-agent AI systems with async event streaming, agent hierarchies, and built-in support for MCP and A2A protocols.
## Orx CLI
Turn any orx.yaml agent setup into an interactive terminal agent. Ships with a coding agent out of the box — or compose your own.
```shell
pip install orxhestra[cli,openai]
orx
```
```text
+-- orx - terminal coding agent ------------------------------------+
| model: gpt-5.4   workspace: ~/my-project   /help for commands     |
+-------------------------------------------------------------------+

orx> add error handling to the API routes

  > read_file(src/api/routes.py)
  > grep(pattern="raise", path=src/api/)
  > write_todos(3 tasks)

  Tasks
  * Add try/except to all route handlers  [in progress]
  - Add custom error response model
  - Write tests for error cases

  > edit_file(src/api/routes.py)
  > shell_exec(pytest tests/test_api.py)
    4 passed

  Done - added structured error handling to all 4 route handlers
  with a custom ErrorResponse model. All tests pass.
```
## Features
- Any LLM — OpenAI, Anthropic, Google via `--model gpt-5.4`, `claude-sonnet-4-6`, or `gemini-2.0-flash`
- Streaming — real-time token rendering with Markdown formatting
- Tool approval — prompts before destructive operations (write, edit, shell)
- Task planning — structured todo lists visible in the terminal
- Sub-agent delegation — spawn isolated agents for complex subtasks
- AGENTS.md memory — persistent project context across sessions
- Local context injection — auto-detects language, git state, package manager, project tree
- Context summarization — auto-compacts long conversations; also available via the `/compact` command
- Orx YAML — run any orx.yaml agent team: `orx my-agents.yaml`
## Usage

```shell
orx                              # interactive REPL (default model)
orx --model claude-sonnet-4-6    # use a specific model
orx -c "fix the failing tests"   # single-shot command
orx my-agents.yaml               # run a custom orx file
orx --auto-approve               # skip approval prompts
orx orx.yaml --serve -p 9000     # start as A2A server
```
## Commands

| Command | Description |
|---|---|
| `/model <name>` | Switch model mid-session |
| `/clear` | Reset conversation |
| `/compact` | Summarize old messages to free context |
| `/todos` | Show current task list |
| `/help` | Show all commands |
| `/exit` | Exit |
## Quickstart (SDK)

```shell
pip install orxhestra
# or
uv add orxhestra
```
```python
import asyncio

from orxhestra import LlmAgent, Runner, InMemorySessionService

agent = LlmAgent(
    name="assistant",
    model="gpt-5.4",
    instructions="You are a helpful assistant.",
)

async def main():
    runner = Runner(agent=agent, session_service=InMemorySessionService())
    response = await runner.run(user_id="user1", session_id="s1", new_message="Hello!")
    for event in response:
        print(event.content)

asyncio.run(main())
```
> [!TIP]
> For full documentation, guides, and API reference, visit docs.orxhestra.com.
## Features
- Agent ensemble - LLM, ReAct, Sequential, Parallel, and Loop agents
- Event streaming - Async event-driven architecture with real-time streaming
- Composer - Conduct entire agent orchestras declaratively with YAML
- Tools - Function tools, filesystem tools, agent-as-tool, shell, and long-running tool support
- Planners - Choreograph task execution with PlanReAct and TaskPlanner strategies
- Skills - Reusable, composable agent repertoires
- MCP - Model Context Protocol integration for tool servers
- A2A - Agent-to-Agent protocol for cross-service harmonization
- Memory - Pluggable memory stores for persistent agent context
- Tracing - Built-in support for Langfuse, LangSmith, and custom callbacks
## Agents at a glance

| Agent | Description |
|---|---|
| `LlmAgent` | Chat model agent with tools, instructions, and structured output |
| `ReActAgent` | Reasoning + acting loop with automatic tool use |
| `SequentialAgent` | Runs sub-agents in order |
| `ParallelAgent` | Runs sub-agents concurrently |
| `LoopAgent` | Repeats a sub-agent until an exit condition |
| `A2AAgent` | Connects to remote agents via the A2A protocol |
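These agent types compose freely. As a hedged sketch of a fan-out/fan-in team (the `type: parallel` key is an assumption extrapolated from the `sequential` and `loop` entries in the Composer example; check the docs for the exact key name):

```yaml
# Hypothetical orx.yaml fragment: two researchers run concurrently,
# then a writer merges their combined output.
agents:
  web_researcher:
    type: llm
    description: "Researches the topic from general knowledge."
  code_researcher:
    type: llm
    description: "Surveys relevant code and APIs."
  research:
    type: parallel            # assumed key, by analogy with sequential/loop
    agents: [web_researcher, code_researcher]
  writer:
    type: llm
    description: "Merges both research results into a report."
  pipeline:
    type: sequential
    agents: [research, writer]
main_agent: pipeline
```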
## Composer
Define entire agent orchestras in a single YAML file — no Python wiring needed. Compose LLM agents, loops, pipelines, tools, and review cycles declaratively. The example below builds a coding agent that plans, implements with filesystem + shell access, and self-reviews in a loop:
```yaml
defaults:
  model:
    provider: openai
    name: gpt-5.4

tools:
  exit:
    builtin: "exit_loop"
  filesystem:
    builtin: "filesystem"
  shell:
    builtin: "shell"

agents:
  planner:
    type: llm
    description: "Plans the implementation steps for the coder agent."
    instructions: |
      Output a numbered list of concrete steps the coder
      should execute. Each step must be an actionable file
      operation or shell command.

  coder:
    type: llm
    description: "Implements code changes with filesystem and shell access."
    instructions: |
      Follow the plan from the previous step exactly.
      Use filesystem tools to create files and shell to
      run commands. Never ask the user to do anything.
    tools:
      - filesystem
      - shell

  reviewer:
    type: llm
    description: "Reviews changes and approves or requests fixes."
    instructions: |
      Check files exist and look correct. If done, call
      exit_loop. Otherwise describe what needs fixing.
    tools:
      - exit

  dev_loop:
    type: loop
    agents: [coder, reviewer]
    max_iterations: 10

  coordinator:
    type: sequential
    agents: [planner, dev_loop]

main_agent: coordinator

runner:
  app_name: coding-agent
  session_service: memory
```
Run it as an interactive CLI or expose it as an A2A server:
```shell
orx orx.yaml                 # interactive terminal agent
orx orx.yaml --serve -p 9000 # A2A server on port 9000

# test the server
curl -X POST http://localhost:9000/ \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0", "id": "1",
    "method": "message/send",
    "params": {
      "message": {
        "role": "user",
        "parts": [{"text": "Hello!", "mediaType": "text/plain"}]
      }
    }
  }'
```
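The same `message/send` call can also be scripted. The sketch below mirrors the curl payload above using only the Python standard library; the endpoint, method name, and payload shape are taken straight from the curl example, while the response is returned as parsed JSON since its exact shape is not documented here:

```python
import json
import urllib.request

def build_send_request(text: str, req_id: str = "1") -> dict:
    """Build the JSON-RPC 2.0 'message/send' payload shown in the curl example."""
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                "parts": [{"text": text, "mediaType": "text/plain"}],
            }
        },
    }

def send_message(text: str, url: str = "http://localhost:9000/") -> dict:
    """POST the payload to a running `orx --serve` instance and parse the reply."""
    body = json.dumps(build_send_request(text)).encode("utf-8")
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires the server started with `orx orx.yaml --serve -p 9000`):
#   send_message("Hello!")
```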
## Docker

```shell
docker run -e OPENAI_API_KEY=$OPENAI_API_KEY \
  -v ./orx.yaml:/app/orx.yaml \
  nicolaimtlassen/orxhestra
```
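For a longer-lived deployment, the same container can be described with Docker Compose. This is a minimal sketch only: the `command:` override assumes the image's entrypoint accepts the same flags as the `orx` CLI, and the port mapping assumes the `--serve -p 9000` mode shown above.

```yaml
# Hypothetical docker-compose.yml for running orxhestra as an A2A server.
services:
  orx:
    image: nicolaimtlassen/orxhestra
    command: ["orx.yaml", "--serve", "-p", "9000"]  # assumes CLI-style entrypoint
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    volumes:
      - ./orx.yaml:/app/orx.yaml
    ports:
      - "9000:9000"
```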
## Documentation
- Getting Started - Installation and first agent
- Agents - Agent types and configuration
- Tools - Built-in and custom tools
- Composer - YAML-based agent composition
- Integrations - MCP and A2A setup
## Acknowledgments
This project is built on the shoulders of several outstanding open-source projects and research efforts:
- LangChain
- Google Agent Development Kit (ADK)
- LangGraph
- Model Context Protocol (MCP)
- Agent-to-Agent Protocol (A2A)
Special thanks to the open-source AI community for pushing the boundaries of what's possible with agent frameworks.