mcp-video

Security Audit
Overall: Warn
Health: Warn
  • License — Apache-2.0
  • Description — Repository has a description
  • Active repo — Last push 0 days ago
  • Low visibility — Only 6 GitHub stars
Code: Pass
  • Code scan — Scanned 12 files during the light audit; no dangerous patterns found
Permissions: Pass
  • Permissions — No dangerous permissions requested
Purpose
This is a Model Context Protocol (MCP) server that provides AI agents with tools to programmatically edit, create, and manipulate video files. It wraps system utilities like FFmpeg to perform operations such as trimming, merging, and filtering videos based on user prompts.

Security Assessment
Overall Risk: Low. The code scan of 12 core files found no dangerous patterns, hardcoded secrets, or requests for risky permissions. Because the tool's primary function is video processing, it inherently relies on executing local shell commands to interact with external dependencies like FFmpeg and Node.js. It does not appear to require network access to external APIs or cloud services, meaning your video data is processed entirely locally. No sensitive data access (like browser histories or private keys) is required for the tool to function.

Quality Assessment
The project has a solid foundation, utilizing the permissive Apache-2.0 license and featuring an active codebase with a recent last push. The developer has invested heavily in reliability, as evidenced by an extensive suite of 858 tests and a clear, professional README with detailed documentation. The main drawback is its extremely low community visibility; with only 6 stars on GitHub, the project has not yet been broadly peer-reviewed or battle-tested by a large user base.

Verdict
Safe to use, though users should expect a solo-maintainer level of support given the project's current low community adoption.
SUMMARY

Video editing MCP server for AI agents. 83 tools, 858 tests collected, 3 interfaces. Works with Claude Code, Cursor, and any MCP client. Local, fast, free.

README.md

mcp-video

Video editing and creation for AI agents.
Edit existing video with FFmpeg. Create new video from code with Hyperframes.

Install · Quick Start · Tools · Full Reference · Agent Discovery · Contributing · Changelog


What is mcp-video?

An open-source video editing server built on the Model Context Protocol (MCP). It gives AI agents, developers, and video creators the ability to programmatically edit and create video files.

Two modes:

  1. Edit existing video with FFmpeg — trim, merge, overlay text, add audio, apply filters, stabilize, detect scenes, transcribe, and more.
  2. Create new video from code with Hyperframes (HTML-native, Apache 2.0) — scaffold compositions, preview live, render to MP4, then post-process.

Three interfaces:

| Interface     | Best For                         | Example                                           |
|---------------|----------------------------------|---------------------------------------------------|
| MCP Server    | AI agents (Claude Code, Cursor)  | "Trim this video and add a title"                 |
| Python Client | Scripts, automation, pipelines   | editor.trim("v.mp4", start="0:30", duration="15") |
| CLI           | Shell scripts, quick ops, humans | mcp-video trim video.mp4 -s 0:30 -d 15            |

Installation

Prerequisites: FFmpeg must be installed. For Hyperframes features, you also need Node.js 22+.

# macOS
brew install ffmpeg

# Ubuntu/Debian
sudo apt install ffmpeg

Install:

pip install mcp-video
# or run without installing:
uvx mcp-video

Verify your setup:

mcp-video doctor
mcp-video doctor --json

Quick Start

As an MCP Server (for AI agents)

Claude Code:

claude mcp add mcp-video -- uvx mcp-video

Claude Desktop:

{
  "mcpServers": {
    "mcp-video": {
      "command": "uvx",
      "args": ["mcp-video"]
    }
  }
}

Cursor:

{
  "mcpServers": {
    "mcp-video": {
      "command": "uvx",
      "args": ["mcp-video"]
    }
  }
}

Then just ask your agent: "Trim this video from 0:30 to 1:00, add a title card, and resize for TikTok."

As a Python Library

from mcp_video import Client

editor = Client()

info = editor.info("interview.mp4")
clip = editor.trim("interview.mp4", start="00:02:15", duration="00:00:30")
video = editor.merge(clips=["intro.mp4", clip.output_path, "outro.mp4"])
video = editor.add_text(video.output_path, text="EPISODE 42", position="top-center", size=48)
result = editor.resize(video.output_path, aspect_ratio="9:16")

Agent-safe Python workflow

For autonomous agents, prefer inspection, pipeline chaining, and a release checkpoint:

from mcp_video import Client

client = Client()
print(client.inspect("create_from_images"))  # Real params, aliases, return type

result = client.pipeline(
    [
        {"op": "create_from_images", "images": frames, "fps": 30},
        {"op": "effect_glow", "intensity": 0.2},  # safe capped default
        {"op": "add_audio", "audio_path": "soundtrack.wav", "mix": True},
        {"op": "export", "quality": "high"},
    ],
    output_path="final.mp4",
)

checkpoint = client.release_checkpoint(result.output_path)
print(checkpoint["thumbnail"], checkpoint["storyboard"])

Agent contract:

  • Media-producing client calls return EditResult with .output_path.
  • Analysis/discovery calls return typed reports or dictionaries.
  • Client.inspect(name) exposes parameters, aliases, category, and return type.
  • Raw unexpected-keyword errors are converted into actionable MCPVideoError guidance.
  • Do not publish agent-generated video without assert_quality() or release_checkpoint() plus human visual/audio inspection.

As a CLI Tool

mcp-video info video.mp4
mcp-video trim video.mp4 -s 00:02:15 -d 30
mcp-video convert video.mp4 -f webm -q high
mcp-video template tiktok video.mp4 --caption "Check this out!"

MCP Tools

85 unique MCP tools across 10 categories, plus a search_tools meta-tool for fast discovery. All return structured JSON. See the full tool reference for complete details.

| Category        | Count | Highlights |
|-----------------|-------|------------|
| Core Video      | 29    | trim, merge, text, audio, resize, convert, filters, stabilize, chroma key, subtitles, watermark, batch, export, normalize audio |
| AI-Powered      | 11    | transcribe (Whisper), scene detect, stem separation (Demucs), upscale, color grade |
| Hyperframes     | 8     | init, render, still, preview, compositions, validate, add block, pipeline |
| Audio Synthesis | 7     | generate waveforms, presets, sequences, effects, spatial audio — pure NumPy |
| Visual Effects  | 6     | vignette, chromatic aberration, scanlines, noise, glow, mask |
| Transitions     | 3     | glitch, pixelate, morph |
| Layout & Motion | 6     | grid, pip, animated text, counters, progress bars, auto-chapters |
| Analysis        | 8     | scene detect, thumbnail, preview, storyboard, quality compare, metadata, waveform, release checkpoint |
| Image Analysis  | 3     | color extraction, palette generation, product analysis |
| Meta            | 1     | search_tools — keyword search across all tools |

Tool discovery:

from mcp_video import Client
editor = Client()
results = editor.search_tools("subtitle")  # Find subtitle-related tools

Hyperframes Integration

Create videos programmatically with Hyperframes — an HTML-native framework for video (Apache 2.0).

1. Create project     -> hyperframes_init
2. Add blocks         -> hyperframes_add_block
3. Preview live       -> hyperframes_preview
4. Render             -> hyperframes_render
5. Post-process       -> hyperframes_to_mcpvideo

See Hyperframes docs and the Python client reference.


Python Client

from mcp_video import Client
editor = Client()

See the full Python client reference for all methods and return types.


CLI Reference

mcp-video [command] [options]

See the full CLI reference for all commands and options.


Timeline DSL

For complex multi-track edits, describe everything in a single JSON object:

editor.edit({
    "width": 1080,
    "height": 1920,
    "tracks": [
        {
            "type": "video",
            "clips": [
                {"source": "intro.mp4", "start": 0, "duration": 5},
                {"source": "main.mp4", "start": 5, "trim_start": 10, "duration": 30},
                {"source": "outro.mp4", "start": 35, "duration": 10},
            ],
            "transitions": [
                {"after_clip": 0, "type": "fade", "duration": 1.0},
            ],
        },
        {
            "type": "audio",
            "clips": [
                {"source": "music.mp3", "start": 0, "volume": 0.7, "fade_in": 2},
            ],
        },
    ],
    "export": {"format": "mp4", "quality": "high"},
})
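Because the timeline is plain data, an agent can sanity-check its shape before handing it to editor.edit. The sketch below is illustrative only and not part of the library; check_timeline is a hypothetical helper whose field names simply mirror the example above:

```python
# Minimal structural check for a timeline dict, mirroring the fields used
# in the example above. Illustrative only -- not the library's validator.

def check_timeline(timeline: dict) -> list[str]:
    """Return a list of problems found; an empty list means the shape looks OK."""
    problems = []
    if not isinstance(timeline.get("tracks"), list) or not timeline["tracks"]:
        problems.append("timeline needs a non-empty 'tracks' list")
        return problems
    for i, track in enumerate(timeline["tracks"]):
        if track.get("type") not in ("video", "audio"):
            problems.append(f"track {i}: 'type' must be 'video' or 'audio'")
        for j, clip in enumerate(track.get("clips", [])):
            if "source" not in clip:
                problems.append(f"track {i}, clip {j}: missing 'source'")
            if clip.get("start", 0) < 0:
                problems.append(f"track {i}, clip {j}: negative 'start'")
    return problems

timeline = {
    "width": 1080,
    "height": 1920,
    "tracks": [
        {"type": "video", "clips": [{"source": "intro.mp4", "start": 0, "duration": 5}]},
        {"type": "audio", "clips": [{"source": "music.mp3", "start": 0, "volume": 0.7}]},
    ],
    "export": {"format": "mp4", "quality": "high"},
}

print(check_timeline(timeline))  # [] -> shape looks OK
```

Catching a malformed track locally is cheaper than a failed render round-trip.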

Templates

Pre-built templates for common social media formats:

from mcp_video.templates import tiktok_template, youtube_shorts_template

timeline = tiktok_template(video_path="clip.mp4", caption="Check this out!", music_path="bgm.mp3")
result = editor.edit(timeline)

Supports: TikTok, YouTube Shorts, Instagram Reels/Posts, YouTube Videos.


Error Handling

Structured, actionable errors with auto-fix suggestions:

{
  "success": false,
  "error": {
    "type": "encoding_error",
    "code": "unsupported_codec",
    "message": "Codec error: vp9 — Auto-convert input from vp9 to H.264/AAC before editing",
    "suggested_action": {
      "auto_fix": true,
      "description": "Auto-convert input from vp9 to H.264/AAC before editing"
    }
  }
}
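Because errors arrive as structured JSON rather than free text, an agent can branch on them mechanically. A minimal sketch using only the fields shown above; the payload is the example error, not a live response, and next_step is a hypothetical dispatcher:

```python
import json

# The structured error payload from the example above.
payload = json.loads("""
{
  "success": false,
  "error": {
    "type": "encoding_error",
    "code": "unsupported_codec",
    "message": "Codec error: vp9",
    "suggested_action": {
      "auto_fix": true,
      "description": "Auto-convert input from vp9 to H.264/AAC before editing"
    }
  }
}
""")

def next_step(result: dict) -> str:
    """Decide what an agent should do with a tool result."""
    if result.get("success"):
        return "continue"
    err = result["error"]
    action = err.get("suggested_action", {})
    if action.get("auto_fix"):
        # A fix the server says it can apply automatically.
        return f"auto-fix: {action['description']}"
    return f"escalate: {err['code']}"

print(next_step(payload))  # auto-fix: Auto-convert input from vp9 to H.264/AAC before editing
```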

Workflows

ICM-style staged pipelines for common productions — with CONTEXT.md stage contracts, references/ factory config, and runnable workflow.py scripts.

cd workflows/01-social-media-clip
python workflow.py /path/to/video.mp4

| Workflow             | Stages | Description                               |
|----------------------|--------|-------------------------------------------|
| 01-social-media-clip | 5      | Landscape → TikTok / Short / Reel         |
| 02-podcast-clip      | 6      | Highlight with chapters + burned captions |
| 03-explainer-video   | 7      | Branded explainer from scratch            |

See workflows/CONTEXT.md for the routing table.

Architecture

mcp_video/
  client/                # Python Client API (mixins per domain)
  client/meta.py         # Client discovery mixin (search_tools)
  server.py              # MCP server (87 tools + 4 resources + search_tools meta-tool)
  server_tools_*.py      # Tool registration by category
  engine.py              # Core FFmpeg engine
  engine_*.py            # Specialized engines (thumbnail, edit, probe, etc.)
  models.py              # Pydantic models
  errors.py              # Error hierarchy + FFmpeg stderr parser
  ffmpeg_helpers.py      # Shared FFmpeg utilities
  audio_engine.py        # Procedural audio synthesis
  effects_engine.py      # Visual effects + motion graphics
  transitions_engine.py  # Clip transitions
  ai_engine.py           # AI features (Whisper, Demucs, Real-ESRGAN)
  hyperframes_engine.py  # Hyperframes CLI wrapper
  image_engine.py        # Image color analysis
  quality_guardrails.py  # Automated quality checks
workflows/               # ICM staged pipelines
  CONTEXT.md             # Layer 1 routing table
  01-social-media-clip/  # Stage contract + runnable script
  02-podcast-clip/       # Stage contract + runnable script
  03-explainer-video/    # Stage contract + runnable script

Supported Formats

| Video               | Audio (extraction)       | Subtitles   |
|---------------------|--------------------------|-------------|
| MP4, WebM, MOV, GIF | MP3, AAC, WAV, OGG, FLAC | SRT, WebVTT |

Development

git clone https://github.com/pastorsimon1798/mcp-video.git
cd mcp-video
python -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"

Testing

Tests are excluded from the PyPI package. To run locally:

pip install -e ".[dev]"
pytest tests/ -v -m "not slow and not hyperframes"

See docs/TESTING.md for full test categories and CI details.

License

Apache 2.0 — see LICENSE.

Built on FFmpeg, Hyperframes, and the Model Context Protocol.

See docs/LEGAL_REVIEW.md for dependency licensing notes.
