Every gamer has an hour of footage and zero time to edit it.
I built clipforge — upload a raw gameplay video, get three ready-to-post clips back. No editing skills required.
```
Upload gameplay.mp4 (10 min)
            ↓
       clipforge AI
            ↓
┌─────────────────────────────────────┐
│ tiktok.mp4  → 60s  · 9:16 crop      │
│ youtube.mp4 → 8min · 16:9           │
│ trailer.mp4 → 90s  · cinematic      │
└─────────────────────────────────────┘
       Download as ZIP
```
The problem it solves
Streamers record everything but post almost nothing. The gap isn't motivation — it's the edit. Cutting, cropping, timing, captioning. It takes hours per video.
clipforge closes that gap to one upload.
How it works
The pipeline is five stages:
1. Scene detection — PySceneDetect finds every cut boundary in the video.
2. Highlight scoring — librosa computes RMS audio energy per scene. High energy = kill, explosion, hype moment.
3. Clip selection — top 10 scenes by score become the candidate pool.
4. Format assembly — moviepy handles cuts, crops, and compositions:
- TikTok: best moment, vertical 9:16 crop, Whisper captions burned in, max 60s
- YouTube reel: top moments sequenced with transitions, up to 8 min
- Trailer: fast cuts → slow-motion climax at 0.5x on the highest-energy scene
5. ZIP download — all three files packaged together.
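Stages 2 and 3 boil down to a rank-by-energy loop. Here is a minimal sketch of that logic in plain Python — `score_scene` and `top_scenes` are illustrative names, not the actual clipforge API, and the real pipeline gets its samples from librosa rather than hardcoded lists:

```python
import math

def score_scene(samples):
    """RMS energy of one scene's audio samples (hypothetical helper)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def top_scenes(scene_samples, n=10):
    """Rank scenes by RMS energy; keep the top n as the candidate pool."""
    scored = [(score_scene(samples), idx)
              for idx, samples in enumerate(scene_samples)]
    scored.sort(reverse=True)  # loudest scenes first
    return [idx for _, idx in scored[:n]]

# A quiet scene, a loud scene, a medium scene:
pool = top_scenes([[0.1, 0.1], [0.9, 0.8], [0.4, 0.5]], n=2)
# → [1, 2]: the loud scene ranks first, the medium one second
```

The nice property of RMS as a proxy is that it needs no game-specific model — gunfire, explosions, and a streamer yelling all push energy up.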
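The 9:16 crop in stage 4 is centered-box arithmetic. A sketch of the geometry — the function name is mine, and clipforge does the actual pixel work with moviepy:

```python
def vertical_crop_box(width, height, aspect_w=9, aspect_h=16):
    """Centered crop box (x1, y1, x2, y2) for a vertical aspect ratio.

    Assumes the source is wider than the target (landscape gameplay).
    The crop width is forced even so video encoders stay happy.
    """
    target_w = round(height * aspect_w / aspect_h)
    target_w -= target_w % 2  # even width for codecs
    x1 = (width - target_w) // 2
    return (x1, 0, x1 + target_w, height)

print(vertical_crop_box(1920, 1080))  # → (656, 0, 1264, 1080)
```

A fixed center crop is the simplest choice; a smarter version would track the action (e.g. the killfeed or crosshair region) before cropping.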
The stack
| Layer | Tech |
|---|---|
| Backend API | FastAPI + BackgroundTasks |
| Scene detection | PySceneDetect |
| Audio scoring | librosa (RMS energy) |
| Video editing | moviepy |
| Captions | OpenAI Whisper (local, base model) |
| Frontend | Next.js 15 · Tailwind CSS |
| Deploy | Railway (backend) · Vercel (frontend) |
The hardest part: getting it to deploy
The local pipeline worked perfectly. Railway kept failing.
Root cause: openai-whisper==20231117 uses pkg_resources in its setup.py. That module was removed from setuptools v71+, released in 2024. Railway's build environment pulled the latest setuptools automatically, and the whisper build exploded every time.
The fix was a Dockerfile with PIP_CONSTRAINT:
```dockerfile
FROM python:3.11-slim
RUN apt-get update && apt-get install -y ffmpeg libsndfile1 git \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY requirements.txt .
# PIP_CONSTRAINT applies to isolated build envs too
RUN echo "setuptools<71.0.0" > /tmp/constraints.txt
ENV PIP_CONSTRAINT=/tmp/constraints.txt
RUN pip install --upgrade pip "setuptools<71.0.0" && pip install -r requirements.txt
COPY . .
```
PIP_CONSTRAINT is the key. When pip builds a setup.py-based package, it does so in an isolated build environment — and it applies the constraint file there too, so even that ephemeral env gets the pinned setuptools. That one env var fixed six consecutive failed deploys.
Testing with no real video files
ML pipeline tests are usually painful — you need actual video, audio, model weights. I didn't want any of that in CI.
The solution: mock everything at the boundary.
```python
import numpy as np
from unittest.mock import patch

from pipeline.scorer import Scene, score_scenes

def test_scores_scenes_by_rms(tmp_path):
    scenes = [Scene("v.mp4", 0.0, 5.0), Scene("v.mp4", 5.0, 10.0)]
    with patch("pipeline.scorer._load_audio") as mock_audio:
        mock_audio.return_value = (np.array([0.1, 0.9, 0.9]), 22050)
        results = score_scenes(scenes)
    assert results[0].score > 0
    assert len(results) == 2
```
_load_audio, _run_detector, and _get_model are extracted as single-purpose functions specifically so they can be mocked. 21 tests, zero real video files.
Try it / build on it
GitHub: https://github.com/LakshmiSravyaVedantham/clipforge
```shell
git clone https://github.com/LakshmiSravyaVedantham/clipforge
cd clipforge/backend
pip install -r requirements.txt
uvicorn main:app --reload --port 8000
```
Built with FastAPI + moviepy + Whisper + librosa. Deployed on Railway + Vercel. MIT licensed.