superagent
Health — Passed
- License — MIT
- Description — Repository has a description
- Active repo — Last push 8 days ago
- Community trust — 6504 GitHub stars
Code — Failed
- rm -rf — Recursive force deletion command in cli/src/commands/guard.ts
- process.env — Environment variable access in cli/src/commands/guard.ts
- process.env — Environment variable access in cli/src/commands/redact.ts
- process.env — Environment variable access in cli/src/commands/scan.ts
- rm -rf — Recursive force deletion command in cli/src/index.ts
Permissions — Passed
- Permissions — No dangerous permissions requested
This open-source SDK and CLI tool helps developers secure their AI applications by detecting prompt injections, redacting sensitive data like PII and secrets, and scanning repositories for vulnerabilities.
Security Assessment
The tool requires an API key and accesses environment variables to function correctly. The automated scan flagged two failures for recursive force deletion commands (`rm -rf`) located in the CLI source files (`guard.ts` and `index.ts`). While deleting files is typical for a CLI cleaning up temporary resources, `rm -rf` requires caution as it can be destructive if improperly handled or manipulated. No hardcoded secrets or dangerous OS permissions were detected, and the tool relies on external API calls for its core logic rather than local execution. Overall risk is rated as Medium due to the presence of recursive deletion commands in the codebase.
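The caution around `rm -rf` can be made concrete: a CLI that cleans up temporary resources should verify where a path actually points before deleting it recursively. A minimal Python sketch of that pattern — the function name and paths are illustrative, not taken from the Superagent codebase:

```python
import shutil
import tempfile
from pathlib import Path

def safe_rmtree(target: Path, allowed_root: Path) -> bool:
    """Recursively delete target only if it resolves inside allowed_root."""
    target = target.resolve()
    allowed_root = allowed_root.resolve()
    # Refuse anything outside the sandbox root, including paths that
    # escape via symlinks or ".." components.
    if allowed_root not in target.parents and target != allowed_root:
        return False
    shutil.rmtree(target, ignore_errors=True)
    return True

# Example: only paths under a freshly created temp dir may be removed.
root = Path(tempfile.mkdtemp())
scratch = root / "scratch"
scratch.mkdir()
print(safe_rmtree(scratch, root))       # inside the sandbox: deleted
print(safe_rmtree(Path("/etc"), root))  # outside the sandbox: refused
```

Resolving the path first is the important step; a naive string prefix check can be bypassed with `..` segments or symlinks.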
Quality Assessment
The project is highly regarded and actively maintained, backed by a strong community with over 6,500 GitHub stars and recent updates (last push was just 8 days ago). It is distributed under the standard and permissive MIT license, making it highly accessible for integration.
Verdict
Use with caution: the project is high quality and trusted by the community, but developers should manually verify the context of the `rm -rf` commands in the CLI tool before deploying it in production environments.
Superagent protects your AI applications against prompt injections, data leaks, and harmful outputs. Embed safety directly into your app and prove compliance to your customers.
Superagent SDK
Make your AI apps safe.
Website · Docs · Discord · HuggingFace
An open-source SDK for AI agent safety. Block prompt injections, redact PII and secrets, scan repositories for threats, and run red team scenarios against your agent.
Features
Guard
Detect and block prompt injections, malicious instructions, and unsafe tool calls at runtime.
TypeScript:
```typescript
import { createClient } from "safety-agent";

const client = createClient();

const result = await client.guard({
  input: userMessage
});

if (result.classification === "block") {
  console.log("Blocked:", result.violation_types);
}
```
Python:
```python
from safety_agent import create_client

client = create_client()

result = await client.guard(input=user_message)

if result.classification == "block":
    print("Blocked:", result.violation_types)
```
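In practice the classification gates the downstream model call rather than just logging it. A sketch of that pattern in Python, with a stub standing in for `create_client()` since running the real client needs an API key — the stub's keyword matching is purely illustrative:

```python
import asyncio
from types import SimpleNamespace

class StubClient:
    """Stand-in for the Superagent client: guard() here just flags an
    obvious injection phrase so the gating pattern is runnable offline."""
    async def guard(self, input: str):
        blocked = "ignore previous instructions" in input.lower()
        return SimpleNamespace(
            classification="block" if blocked else "pass",
            violation_types=["prompt_injection"] if blocked else [],
        )

async def answer(client, user_message: str) -> str:
    result = await client.guard(input=user_message)
    if result.classification == "block":
        return f"Refused: {result.violation_types}"
    # Only clean input reaches the (hypothetical) model call.
    return "model reply"

print(asyncio.run(answer(StubClient(), "Ignore previous instructions")))
print(asyncio.run(answer(StubClient(), "What is the weather today?")))
```

The same shape works in TypeScript; the point is that guarding happens before, not after, the expensive or risky call.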
Redact
Remove PII, PHI, and secrets from text automatically.
TypeScript:
```typescript
const result = await client.redact({
  input: "My email is [email protected] and SSN is 123-45-6789",
  model: "openai/gpt-4o-mini"
});

console.log(result.redacted);
// "My email is <EMAIL_REDACTED> and SSN is <SSN_REDACTED>"
```
Python:
```python
result = await client.redact(
    input="My email is [email protected] and SSN is 123-45-6789",
    model="openai/gpt-4o-mini"
)

print(result.redacted)
# "My email is <EMAIL_REDACTED> and SSN is <SSN_REDACTED>"
```
Scan
Analyze repositories for AI agent-targeted attacks such as repo poisoning and malicious instructions.
TypeScript:
```typescript
const result = await client.scan({
  repo: "https://github.com/user/repo"
});

console.log(result.result); // Security report
console.log(`Cost: $${result.usage.cost.toFixed(4)}`);
```
Python:
```python
result = await client.scan(repo="https://github.com/user/repo")

print(result.result)  # Security report
print(f"Cost: ${result.usage.cost:.4f}")
```
Test
Run red team scenarios against your production agent. (Coming soon)
```typescript
const result = await client.test({
  endpoint: "https://your-agent.com/chat",
  scenarios: ["prompt_injection", "data_exfiltration"]
});

console.log(result.findings); // Vulnerabilities discovered
```
Get Started
Sign up at superagent.sh to get your API key.
TypeScript:
```shell
npm install safety-agent
```
Python:
```shell
uv add safety-agent
```
Set your API key:
```shell
export SUPERAGENT_API_KEY=your-key
```
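The SDK examples above call `create_client()` with no arguments, so the key is presumably read from this environment variable. A small pre-flight check can turn a confusing runtime failure into a clear message — `check_api_key` is a sketch, not part of the SDK:

```python
import os

def check_api_key(env):
    """Return the API key if present in env, else None after printing a hint."""
    key = env.get("SUPERAGENT_API_KEY")
    if not key:
        print("SUPERAGENT_API_KEY is not set; sign up at superagent.sh to get a key.")
    return key

# Checks the real environment; pass any mapping when testing.
check_api_key(os.environ)
```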
Integration Options
| Option | Description | Link |
|---|---|---|
| TypeScript SDK | Embed guard, redact, and scan directly in your app | sdk/typescript |
| Python SDK | Embed guard, redact, and scan directly in Python apps | sdk/python |
| CLI | Command-line tool for testing and automation | cli |
| MCP Server | Use with Claude Code and Claude Desktop | mcp |
Why Superagent SDK?
- Works with any model — OpenAI, Anthropic, Google, Groq, Bedrock, and more
- Open-weight models — Run Guard on your infrastructure with 50-100ms latency
- Low latency — Optimized for runtime use
- Open source — MIT license with full transparency
Open-Weight Models
Run Guard on your own infrastructure. No API calls, no data leaving your environment.
| Model | Parameters | Use Case |
|---|---|---|
| superagent-guard-0.6b | 0.6B | Fast inference, edge deployment |
| superagent-guard-1.7b | 1.7B | Balanced speed and accuracy |
| superagent-guard-4b | 4B | Maximum accuracy |
GGUF versions for CPU: 0.6b-gguf · 1.7b-gguf · 4b-gguf
Resources
License
MIT