Decart AI Releases Lucy Edit Dev for Text-Guided Video Edits
Decart AI has launched Lucy Edit Dev, an open-weight, instruction-following video editing model now available on Hugging Face. Built to interpret free-text prompts and apply them directly to footage, the model focuses on preserving motion and composition while performing edits ranging from wardrobe swaps to full scene replacements. Explore the model card and assets at [decart-ai/Lucy-Edit-Dev](https://huggingface.co/decart-ai/Lucy-Edit-Dev).

Natural-Language Editing, Framed for Creative Workflows
For creators and brand builders, the headline is straightforward: Lucy Edit Dev translates plain-English instructions into video edits without masks, keyframes, or per-frame tinkering. In practice, that means directions like “add a patterned kimono,” “replace the background with a neon city,” or “swap the character for an astronaut” can be interpreted and executed while the subject’s motion and the camera’s composition remain intact. The model is built to follow instructions rather than the carefully engineered prompts generative models typically demand, shifting the focus from prompt engineering to straightforward art direction.
Edit the video the way you’d brief an editor. That’s the promise here: free-text guidance with results that respect your original performance, timing, and camera movement.
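For readers who want to kick the tires, here is a minimal sketch of what that brief-style workflow could look like in code. It assumes the Hugging Face repo exposes a diffusers-compatible pipeline; the loading path, the `video` and `prompt` keyword arguments, and the export step are assumptions to verify against the model card, not confirmed API.

```python
# Minimal sketch: instruction-driven video editing with Lucy Edit Dev.
# Assumes the repo exposes a diffusers-compatible pipeline; check the
# model card for the actual loading code and call signature.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video, load_video

# Load the open weights from the Hugging Face repo (trust_remote_code
# allows any custom pipeline code shipped alongside the weights to run).
pipe = DiffusionPipeline.from_pretrained(
    "decart-ai/Lucy-Edit-Dev",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # assumption: repo ships a custom pipeline
).to("cuda")

frames = load_video("input_clip.mp4")  # list of PIL frames

# The brief, phrased as an instruction rather than a generative prompt.
result = pipe(
    prompt="replace the background with a neon city",  # example from the release
    video=frames,  # assumed kwarg, mirroring other diffusers video pipelines
)

# .frames[0] follows the common diffusers video-output convention.
export_to_video(result.frames[0], "edited_clip.mp4", fps=24)
```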
Scope of Edits: What Lucy Edit Dev Targets
According to the model card, Lucy Edit Dev supports a broad slate of instruction-guided changes intended for social production, marketing, fashion and beauty, product, and experimental film work (example phrasings for each category follow the list):
- Clothing & Accessories: Change garments, colors, fabrics, hats, glasses, jewelry, and similar elements while keeping pose and identity consistent.
- Character Swaps: Replace a subject with another human or fantastical figure while maintaining the blocking and motion you shot.
- Object Insertions & Removals: Add or remove props and moving elements that fit the scene’s perspective and timing.
- Background & Scene Replacement: Place subjects into new environments, or apply sweeping style and setting transformations with temporal coherence.
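To make that taxonomy concrete, the sketch below pairs each category with the kind of phrasing the release encourages: an action verb plus a clear noun phrase. The first three echo examples from the announcement; the object-removal line is illustrative.

```python
# Example instruction phrasings per edit category (action verb + noun phrase).
EDIT_EXAMPLES = {
    "clothing_accessories": "add a patterned kimono",                 # from the release
    "character_swap": "swap the character for an astronaut",          # from the release
    "background_scene": "replace the background with a neon city",    # from the release
    "object_insert_remove": "remove the coffee cup from the table",   # illustrative
}
```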
Open-Weight Release, Research-First License
Lucy Edit Dev arrives as an open-weight model under a non-commercial license. That combination invites researchers, indie labs, and creative teams to evaluate, benchmark, and explore instruction-following video editing without commercial deployment. It also gives startups and studios a window into the approach, enabling comparisons with inference-time methods and offering a baseline for future fine-tuning in research contexts.
- Open-weight access allows running the model locally and integrating it into internal pipelines.
- Non-commercial license sets usage guardrails while the technology matures.
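For teams ready to evaluate, pulling the weights locally uses standard Hugging Face tooling; only the repo id comes from the release, the rest is ordinary `huggingface_hub` usage.

```python
# Download the open weights and model card for local, research-only use.
from huggingface_hub import snapshot_download

local_dir = snapshot_download("decart-ai/Lucy-Edit-Dev")
print(f"Weights and assets downloaded to: {local_dir}")
```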
Under the Hood: A Modern Video Stack
Decart AI notes that Lucy Edit Dev leverages a high-compression VAE and a Diffusion Transformer (DiT) video backbone associated with the Wan 2.2 5B lineage. In plain terms, that means the model is built on a contemporary video architecture optimized for temporal stability and efficiency. For readers following infrastructure trends, Wan 2.2’s open research and tooling have accelerated broader video model development; that context helps explain Lucy Edit Dev’s fluid motion preservation and responsiveness to descriptive instructions. Background on Wan 2.2 is available via the project’s repository: [Wan-Video/Wan2.2](https://github.com/Wan-Video/Wan2.2).
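To see why a high-compression VAE matters, a bit of back-of-the-envelope arithmetic helps. The 4× temporal and 16× spatial downsampling factors below are assumptions broadly in line with what has been published around the Wan 2.2 5B lineage, not specs from the Lucy Edit Dev card; the takeaway is the order-of-magnitude reduction in what the DiT has to attend over.

```python
# Back-of-the-envelope: how a high-compression VAE shrinks the sequence
# the diffusion transformer must denoise. Compression factors are
# assumptions consistent with the Wan 2.2 5B lineage, not measured values.
T, H, W = 121, 720, 1280      # example clip: ~5 s at 24 fps, 720p
t_down, s_down = 4, 16        # assumed temporal / spatial downsampling

latent_t = T // t_down + 1    # assumption: +1 for the first-frame latent
latent_h, latent_w = H // s_down, W // s_down

pixels = T * H * W
latent_cells = latent_t * latent_h * latent_w
print(f"pixel grid:  {pixels:,} positions")
print(f"latent grid: {latent_cells:,} positions "
      f"(~{pixels / latent_cells:.0f}x fewer for the DiT to attend over)")
```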
Why This Matters for Creators and Brands
If you’re editing branded content, short-form social, or pitching motion tests for a client, the cost of frame-by-frame revisions adds up fast. Instruction-following models like Lucy Edit Dev speak the creative brief directly, promising faster iteration loops with fewer technical hurdles.
- Direction over dials: Treat edits like notes, not a manual process.
- Temporal consistency: Keep the shot’s performance and camera language intact.
- Prototype quickly: Explore looks, wardrobe, or placements without reshoots.
What’s Distinct in This Release
Lucy Edit Dev leans into a few priorities that stand out in the current wave of AI video tools:
- Instruction fidelity: The model card emphasizes action verbs (“add,” “replace,” “transform”) and clear noun phrases, making the intent legible to the system without heavy prompt craft.
- Shot integrity: Edits aim to anchor to the original motion and composition, limiting jitter and drift often seen in frame-wise or effect-stack workflows.
- Pragmatic openness: Open weights can be evaluated and integrated locally under a research-first license, an important option for teams navigating data, privacy, and compliance constraints.
Fit in the Toolchain
While Lucy Edit Dev is positioned as a developer-oriented release, its availability on Hugging Face aligns with how many creative technologists already prototype: blending model checkpoints with familiar pipelines, node-based graph tools, or lightweight scripting. For the non-technical creator, the headline is simpler: models that respond to your words instead of demanding complex setup are becoming part of the everyday stack, whether you eventually reach them through a hosted service or through integrated tools in the apps you already use.
At a Glance: Lucy Edit Dev
| Aspect | What’s New / Notable | Why It Matters |
|---|---|---|
| Edit Control | Instruction-guided edits from free-text prompts | Brief like you would a human editor; fewer technical steps |
| Temporal Handling | Preserves motion and composition | Protects performances, timing, and camera moves |
| Supported Edits | Wardrobe/accessories, character swaps, object insert/remove, scene replacement | Covers core creative and commercial post needs |
| Release Model | Open-weight, non-commercial license | Run locally for research; evaluate in pipelines before production decisions |
| Architecture | VAE + DiT video stack tied to Wan 2.2 5B lineage | Modern backbone tuned for temporal coherence and efficiency |
Signal for the Market
The release underscores a larger shift: video models that edit what you shot, rather than generate from scratch, are stepping into practical roles. For startups and solo founders, that means faster proof-of-concepts and client demos. For marketing teams and brand builders, it introduces a way to localize and personalize content without sending everything back into heavy post. And for photographers and directors threading brand consistency across formats, it points to an AI layer that respects the lighting, layout, and movement you already dialed in on set.
As open-weight options expand, creative teams gain auditability: the ability to test, compare, and decide which models fit their aesthetic, legal, and operational constraints before committing to a production stack.
Context and Expectations
Lucy Edit Dev is framed as a developer-oriented release. The non-commercial license signals a research phase, and the “Dev” tag points to active iteration. For teams considering future deployment, this is a strong moment to benchmark instruction-following behavior, temporal stability, and edit coverage on representative footage. It’s also an opportunity to align with internal review processes, particularly where brand safety, likeness rights, and disclosure policies are in play.
Who Benefits Right Now
- Creators and studios exploring AI-assisted post for fashion, product, and music video content where wardrobe and set changes are frequent.
- Marketing teams testing faster A/B variants and localized spins on hero assets while preserving performance and shot design.
- Founders and toolmakers building verticalized editors that will eventually abstract this capability behind simpler UIs and brand-safe controls.
- Researchers evaluating instruction fidelity and temporal coherence across real-world footage.
Availability
Lucy Edit Dev is live now with a model card, examples, and assets hosted on Hugging Face: [decart-ai/Lucy-Edit-Dev](https://huggingface.co/decart-ai/Lucy-Edit-Dev). For readers tracking the architectural lineage referenced in the release, see the Wan 2.2 project overview and codebase at [Wan-Video/Wan2.2](https://github.com/Wan-Video/Wan2.2).
Bottom Line
Lucy Edit Dev advances a clear idea: video edits should follow instructions, not force creators into technical detours. With open weights and a research-first license, Decart AI is inviting the community to push on what instruction-guided editing can do, especially when preserving the motion language of a shot is non-negotiable. For teams building the next generation of creative tools, this release is a timely benchmark and a concrete signal of where AI-assisted postproduction is headed.