HappyHorse 1.0 Video Edit API
Intelligent video editing with text prompts and reference image style guidance
Features
What HappyHorse 1.0 Video Edit API offers
Use cases
Built for
Style transfer: apply artistic styles from reference images to existing footage for brand-aligned visuals
Post-production: refined text-guided editing for advertising and marketing content
Brand adaptation: restyle footage to match brand aesthetics using reference images at scale
Creative exploration: experiment with different edit directions on the same source while preserving audio
E-commerce: transform product videos with style guidance for seasonal campaigns
FAQ
About HappyHorse 1.0 Video Edit API
What is the HappyHorse 1.0 Video Edit API?
HappyHorse 1.0 Video Edit provides intelligent video editing capabilities: you upload a video, optionally with reference images, and the model transforms it with style guidance. It is a separate model from HappyHorse 1.0 video generation (t2v, i2v, r2v).
How does Video Edit differ from the base HappyHorse 1.0 model?
The base HappyHorse 1.0 model generates new videos from scratch using text, first-frame images, or reference images. Video Edit takes an existing video as input and modifies it according to your text prompt and optional reference images, making it suited to refined post-production processing.
How many reference images can I provide?
You can provide up to 5 reference images for style guidance. The model uses them to understand the visual style you want applied to the edited video.
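The 5-image limit above is worth validating client-side before sending a request. A minimal sketch in Python, where the helper name and the field layout are illustrative assumptions; only the limit of 5 comes from the documentation:

```python
# Maximum reference images documented for HappyHorse 1.0 Video Edit.
MAX_REFERENCE_IMAGES = 5

def build_style_references(image_urls):
    """Validate the reference-image count and package the URLs.

    The list-of-dicts shape is a hypothetical payload format, not the
    documented wire format.
    """
    if len(image_urls) > MAX_REFERENCE_IMAGES:
        raise ValueError(
            f"At most {MAX_REFERENCE_IMAGES} reference images are supported, "
            f"got {len(image_urls)}"
        )
    return [{"type": "style", "url": url} for url in image_urls]
```

Checking the count locally gives an immediate error instead of a rejected API call.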
Can I preserve the original audio from the source video?
Yes. Set audio_setting to "origin" to preserve the original audio from the source video, or use "auto" to let the model decide how to handle audio.
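Putting the pieces together, an edit request combines the source video, the text prompt, optional reference images, and the audio_setting flag. A hedged sketch; the model identifier and every field name other than audio_setting (and its documented "origin"/"auto" values) are assumptions, not the published request schema:

```python
def build_edit_request(video_url, prompt, reference_images=None,
                       keep_original_audio=True):
    """Assemble a hypothetical Video Edit request payload.

    Only audio_setting and its "origin"/"auto" values are documented;
    the other keys are illustrative.
    """
    return {
        "model": "happyhorse-1.0-video-edit",   # assumed identifier
        "video_url": video_url,                  # existing footage to edit
        "prompt": prompt,                        # text-guided edit direction
        "reference_images": reference_images or [],  # up to 5 style refs
        # "origin" keeps the source audio track; "auto" lets the model decide.
        "audio_setting": "origin" if keep_original_audio else "auto",
    }
```

Defaulting to "origin" matches the creative-exploration use case above, where you try different edit directions on the same source while preserving audio.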