reprompt

Security Audit
Passed
Health — Passed
  • License — MIT
  • Description — Repository has a description
  • Active repo — Last push 0 days ago
  • Community trust — 39 GitHub stars
Code — Passed
  • Code scan — Scanned 12 files during light audit, no dangerous patterns found
Permissions — Passed
  • Permissions — No dangerous permissions requested
Purpose

This tool is an MCP server and CLI that analyzes prompts sent to AI coding assistants. It scores prompt quality, detects error loops in agent workflows, identifies leaked credentials, and compresses or distills conversation history to optimize token usage.

Security Assessment

By design, the tool accesses highly sensitive data. It explicitly scans local chat logs, prompt histories, and conversation sessions from up to nine different AI tools. While this is necessary for its core functionality (such as leaked credential detection), it means the tool inherently handles your private code snippets, debugging context, and proprietary information. The automated code scan found no dangerous patterns, hardcoded secrets, or requests for dangerous OS permissions. The tool does not appear to make unauthorized external network requests to exfiltrate this data. Overall risk is rated as Medium solely because a utility that reads widespread local AI logs is an attractive target for supply-chain attacks.

Quality Assessment

The project is in excellent health. It is actively maintained with recent repository pushes and a solid 39 GitHub stars for a highly specialized niche tool. It is distributed under the permissive and standard MIT license. The repository features an excellent README, claims a strong 95% test coverage, and provides a clear, professional CLI interface, indicating a high level of developer care.

Verdict

Safe to use, but developers should be aware that granting any extension or CLI access to read all local AI prompt histories comes with inherent privacy responsibilities.
SUMMARY

Analyze what you type into AI tools — prompt scoring, agent error loops, leaked credential detection. CLI + MCP server for 9 AI tools.

README.md

re:prompt

Analyze what you type into AI tools -- prompt scoring, agent error loops, leaked credential detection, conversation distillation.

PyPI version · Python 3.10+ · License: MIT · Tests · Coverage


reprompt demo

See it in action

$ pip install reprompt-cli
$ reprompt
  ╭─ Prompt Dashboard ─────────────────────────────────────────╮
  │  Prompts: 1,063 (295 unique)   Sessions: 890              │
  │  Avg Score: 68/100             Top: debug (31%), impl (24%)│
  │  Sources: claude-code, cursor, chatgpt                     │
  ╰────────────────────────────────────────────────────────────╯

$ reprompt score "Fix the auth bug in src/login.ts where JWT expires"
  Score: 40/100  (Fair)
  Structure: 0/25 | Context: 8/25 | Position: 20/20 | Repetition: 0/15 | Clarity: 12/15
  Tip: Include the error message -- debug prompts with errors are 3.7x more effective

$ reprompt distill --last 3 --summary
  Session: feature-dev (42 turns, 18 important)
  Key moments: initial spec → auth module → test failures → JWT fix → passing
  Context: "Building auth system with JWT refresh tokens for Express API"

$ reprompt compress "I was wondering if you could please help me refactor this code. Basically what I need is to split this function into smaller helpers and add error handling."
  Before: 28 tokens → After: 14 tokens (50% saved)
  "Help me refactor this code. Split this function into smaller helpers and add error handling."

What it does

Analyze

Command                     Description
reprompt                    Instant dashboard -- prompts, sessions, avg score, top categories
reprompt scan               Auto-discover prompts from 9 AI tools
reprompt score "prompt"     Research-backed 0-100 scoring with 30+ features
reprompt compare "a" "b"    Side-by-side prompt analysis (or --best-worst for auto-selection)
reprompt insights           Personal patterns vs research-optimal benchmarks
reprompt style              Prompting fingerprint with --trends for evolution tracking
reprompt agent              Agent workflow analysis -- error loops, tool patterns, session efficiency
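
As a sketch of what error-loop detection can look like: the snippet below flags a session when the same normalized error message recurs across consecutive turns. The turn format, normalization rules, and repeat threshold are illustrative assumptions, not reprompt's actual implementation.

```python
import re

def normalize_error(text: str) -> str:
    """Strip volatile details (line numbers, hex addresses) so repeats compare equal."""
    text = re.sub(r"0x[0-9a-fA-F]+", "0xADDR", text)
    text = re.sub(r"line \d+", "line N", text)
    return text.strip().lower()

def find_error_loops(turns: list[str], min_repeats: int = 3) -> bool:
    """True if the same normalized error appears in `min_repeats` consecutive turns."""
    streak, prev = 0, None
    for turn in turns:
        err = normalize_error(turn)
        streak = streak + 1 if err == prev else 1
        prev = err
        if streak >= min_repeats:
            return True
    return False

turns = [
    "TypeError: cannot read x, line 10",
    "TypeError: cannot read x, line 12",
    "TypeError: cannot read x, line 14",
]
print(find_error_loops(turns))  # True -- same error three turns in a row
```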

Optimize

Command                     Description
reprompt compress "prompt"  4-layer prompt compression (40-60% token savings typical)
reprompt distill            Extract important turns from conversations with 6-signal scoring
reprompt distill --export   Recover context when a session runs out -- paste into new session
reprompt lint               Prompt quality linter with GitHub Action support
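
One plausible compression layer is filler-phrase removal, sketched below. The phrase list and the whitespace-based token count are assumptions for illustration, not reprompt's 4-layer pipeline.

```python
import re

# Phrases that add tokens without adding instruction content (illustrative list).
FILLER_PATTERNS = [
    r"i was wondering if you could please\s+",
    r"basically what i need is to\s+",
    r"\bjust\b\s*",
]

def compress(prompt: str) -> tuple[str, int, int]:
    """Return (compressed prompt, tokens before, tokens after); tokens = whitespace words."""
    before = len(prompt.split())
    out = prompt
    for pat in FILLER_PATTERNS:
        out = re.sub(pat, "", out, flags=re.IGNORECASE)
    out = re.sub(r"\s+", " ", out).strip()
    return out, before, len(out.split())

text = ("I was wondering if you could please help me refactor this code. "
        "Basically what I need is to split this function into smaller helpers.")
compressed, n_before, n_after = compress(text)
print(compressed)
print(f"{n_before} tokens -> {n_after} tokens")
```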

Manage

Command                          Description
reprompt privacy                 See what data you sent where -- file paths, errors, PII exposure
reprompt privacy --deep          Scan for sensitive content: API keys, tokens, passwords, PII
reprompt report                  Full analytics: hot phrases, clusters, patterns (--html for dashboard)
reprompt digest                  Weekly summary comparing current vs previous period
reprompt wrapped                 Prompt DNA report -- persona, scores, shareable card
reprompt template save|list|use  Save and reuse your best prompts
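
A minimal sketch of the kind of pattern matching a deep privacy scan might use. The patterns below are well-known public credential formats, not reprompt's actual rule set.

```python
import re

SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"]?[\w-]{16,}"),
}

def scan_secrets(text: str) -> list[str]:
    """Return the names of all secret patterns found in the text."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

sample = "debug this: api_key = 'sk_live_abcdefghijklmnop123'"
print(scan_secrets(sample))  # ['generic_api_key']
```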

Prompt Science

Scoring is calibrated against 4 research papers covering 30+ features across 5 dimensions:

Dimension    What it measures                                   Paper
Structure    Markdown, code blocks, explicit constraints        Prompt Report 2406.06608
Context      File paths, error messages, technical specificity  Google 2512.14982
Position     Instruction placement relative to context          Stanford 2307.03172
Repetition   Redundancy that degrades model attention           Google 2512.14982
Clarity      Readability, sentence length, ambiguity            SPELL (EMNLP 2023)
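
A toy version of multi-dimension scoring that follows the point budgets shown in the demo (Structure 25, Context 25, Position 20, Repetition 15, Clarity 15). The individual feature checks are stand-ins, not the 30+ research-backed features.

```python
import re

def score_prompt(prompt: str) -> dict[str, int]:
    """Score a prompt 0-100 as the sum of five illustrative dimension checks."""
    words = prompt.split()
    scores = {
        # Structure: reward explicit formatting such as code blocks.
        "structure": 25 if "```" in prompt else 0,
        # Context: reward file paths or error text.
        "context": 25 if re.search(r"\w+/\w+\.\w+|Error", prompt) else 8,
        # Position: reward an instruction verb up front.
        "position": 20 if words and words[0].lower() in {"fix", "add", "write"} else 5,
        # Repetition: penalize duplicated words.
        "repetition": 15 if len(set(words)) == len(words) else 0,
        # Clarity: reward concise prompts.
        "clarity": 15 if len(words) <= 30 else 8,
    }
    scores["total"] = sum(scores.values())
    return scores

print(score_prompt("Fix the auth bug in src/login.ts where JWT expires"))
```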

All analysis runs locally in <1ms per prompt. No LLM calls, no network requests.

Conversation Distillation

reprompt distill scores every turn in a conversation using 6 signals:

  • Position -- first/last turns carry framing and conclusions
  • Length -- substantial turns contain more information
  • Tool trigger -- turns that cause tool calls are action-driving
  • Error recovery -- turns that follow errors show problem-solving
  • Semantic shift -- topic changes mark conversation boundaries
  • Uniqueness -- novel phrasing vs repetitive follow-ups

Session type (debugging, feature-dev, exploration, refactoring) is auto-detected and signal weights adapt accordingly.
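
The adaptive weighting can be sketched as a weighted sum of normalized signal values. The weight tables and signal values below are invented for illustration, not reprompt's calibrated weights.

```python
SIGNALS = ["position", "length", "tool_trigger", "error_recovery",
           "semantic_shift", "uniqueness"]

# Hypothetical per-session-type weight overrides; unlisted signals keep weight 1.0.
WEIGHTS = {
    "debugging":   {"error_recovery": 2.0, "tool_trigger": 1.5},
    "feature-dev": {"semantic_shift": 1.5, "length": 1.2},
}

def score_turn(signals: dict[str, float], session_type: str) -> float:
    """Weighted sum of 0-1 signal values for one conversation turn."""
    weights = WEIGHTS.get(session_type, {})
    return sum(signals.get(s, 0.0) * weights.get(s, 1.0) for s in SIGNALS)

turn = {"position": 1.0, "length": 0.4, "tool_trigger": 0.0,
        "error_recovery": 1.0, "semantic_shift": 0.0, "uniqueness": 0.7}
print(score_turn(turn, "debugging"))  # error recovery counts double in debugging sessions
```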

Supported AI tools

Tool                 Format     Auto-discovered by scan
Claude Code          JSONL      Yes
Codex CLI            JSONL      Yes
Cursor               .vscdb     Yes
Aider                Markdown   Yes
Gemini CLI           JSON       Yes
Cline (VS Code)      JSON       Yes
OpenClaw / OpenCode  JSON       Yes
ChatGPT              JSON       Via reprompt import
Claude.ai            JSON/ZIP   Via reprompt import
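
For the JSONL formats, scanning can be as simple as reading one JSON record per line and keeping user turns. The record shape below ("role" / "content" keys) is an assumed common layout, not the documented schema of any tool listed above.

```python
import json

def extract_user_prompts(jsonl_text: str) -> list[str]:
    """Parse one JSON record per line and collect the user-authored prompts."""
    prompts = []
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        record = json.loads(line)
        if record.get("role") == "user":
            prompts.append(record.get("content", ""))
    return prompts

log = "\n".join([
    json.dumps({"role": "user", "content": "Fix the JWT bug"}),
    json.dumps({"role": "assistant", "content": "Looking at src/login.ts"}),
])
print(extract_user_prompts(log))  # ['Fix the JWT bug']
```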

Installation

pip install reprompt-cli            # core (all features, zero config)
pip install reprompt-cli[chinese]   # + Chinese prompt analysis (jieba)
pip install reprompt-cli[mcp]       # + MCP server for Claude Code / Continue.dev / Zed

Quick start

reprompt scan                       # discover prompts from installed AI tools
reprompt                            # see your dashboard
reprompt score "your prompt here"   # score any prompt instantly
reprompt distill --last 1           # distill your most recent conversation

Auto-scan after every session

reprompt install-hook               # adds post-session hook to Claude Code

Browser extension

Capture prompts from ChatGPT, Claude.ai, and Gemini directly in your browser:

  1. Install the extension from Chrome Web Store or Firefox Add-ons
  2. Connect to the CLI: reprompt install-extension
  3. Verify: reprompt extension-status

Captured prompts sync locally via Native Messaging -- nothing leaves your machine.

CI integration

GitHub Action

# .github/workflows/prompt-lint.yml
- uses: reprompt-dev/reprompt@main
  with:
    score-threshold: 50   # fail if avg prompt score < 50
    strict: true          # fail on warnings too
    comment-on-pr: true   # post quality report as PR comment

pre-commit

# .pre-commit-config.yaml
repos:
  - repo: https://github.com/reprompt-dev/reprompt
    rev: v1.7.0
    hooks:
      - id: reprompt-lint

Direct CLI

reprompt lint --score-threshold 50  # exit 1 if avg score < 50
reprompt lint --strict              # exit 1 on warnings
reprompt lint --json                # machine-readable output

Privacy

  • All analysis runs locally. No prompts leave your machine.
  • reprompt privacy shows exactly what you've sent to which AI tool.
  • Optional telemetry sends only anonymous 26-dimension feature vectors -- never prompt text.
  • Open source: audit exactly what's collected.

Privacy policy

Links

Contributing

See CONTRIBUTING.md for development setup and guidelines.

License

MIT
