Runway just dropped Runway Characters: real-time, conversational avatars you can spin up from a single image and deploy on the web (or inside your product) via API. The headline is not “digital humans exist” (they have existed). It is that Runway is packaging expressive, live, two-way avatars into something creators and dev teams can actually ship without building a Frankenstein stack of face animation, voice, and a chatbot bolted on behind the curtain.
If you have been waiting for talking characters to move from a demo reel into a reliable interaction layer for brands, creators, and interactive experiences, this is one of the cleanest attempts yet.
What shipped
Characters is a platform plus a developer API for building intelligent avatars that talk back in real time. You provide a reference image, configure voice, personality, and knowledge, and deploy the character in a browser experience or embed it into your own app or site through Runway’s developer stack.
Runway’s pitch is simple: stop making users click through dead menus and static pages when they could just talk. Whether you buy that future or not, the execution matters: the avatars are designed to carry facial expression, eye movement, lip sync, and gestures in a way that reads as present, not PowerPoint with a mouth.
The shift here is productization. A conversational avatar is not interesting because it is animated. It is interesting because it can be deployed as a reusable interface, like chat, but with brand, tone, and performance baked in.
Where to access
There are three on-ramps, depending on how hands-on you want to be:
- Web app Characters page: app.runwayml.com/characters
- Developer platform: dev.runwayml.com
- Support docs (behavior and limits): Runway Characters overview
Runway is also framing Characters as part of a broader world model direction, but the practical story is: you can now stand up a character experience without inventing your own pipeline.
What it does well
Single-image character creation
The most creator-friendly detail is also the most underrated: you do not need a multi-angle scan, a video performance-capture session, or a full 3D rig. A single image can be enough to get a character that is expressive and conversational.
That matters because most brands already have what Characters needs: product mascots, illustrated spokescharacters, headshots, campaign stills, or a single hero image from a shoot. This turns “we need an avatar” from a production request into a creative iteration problem.
Real-time expressiveness
Runway is clearly prioritizing performance continuity: facial animation that keeps up with live conversation and does not degrade into the classic stuck-smile, jittery-mouth failure mode after 30 seconds. In creator terms: it is aiming for watchable, not just technically animated.
API-first deployment
Characters is not only a web toy; the API framing is the point. A lot of earlier avatar tools were effectively “export a clip” products. Characters is positioned as a runtime you can embed, meaning it can sit inside onboarding flows, interactive demos, learning experiences, or customer-support surfaces.
That is a different kind of value: less “make content,” more “make an experience that updates itself.”
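To make the embed idea concrete, here is a rough sketch of what wiring a character session into your own app might look like. Everything in it (the `CharacterConfig` shape, the field names, the request structure) is hypothetical and invented for illustration; only the 30-minute API session cap comes from Runway’s documented limits.

```typescript
// Hypothetical sketch of preparing a Characters session for an embedded app.
// Field and type names are placeholders, NOT Runway's actual API surface.

interface CharacterConfig {
  referenceImageUrl: string; // the single image the character is built from
  voice: string;             // a voice preset or identifier
  personality: string;       // instructions for tone and behavior
  knowledge: string[];       // material the character can draw answers from
}

// Build the body you might send to a (hypothetical) session-creation endpoint.
function buildSessionRequest(config: CharacterConfig, requestedSeconds: number) {
  return {
    character: {
      image: config.referenceImageUrl,
      voice: config.voice,
      instructions: config.personality,
      knowledge: config.knowledge,
    },
    // API integrations cap out at 30 minutes per the documented limits,
    // so clamp whatever the caller asks for.
    maxDurationSeconds: Math.min(requestedSeconds, 30 * 60),
  };
}

const body = buildSessionRequest(
  {
    referenceImageUrl: "https://example.com/mascot.png",
    voice: "warm-neutral",
    personality: "Friendly product guide. Answer only from the knowledge base.",
    knowledge: ["pricing.md", "faq.md"],
  },
  45 * 60, // ask for 45 minutes...
);
console.log(body.maxDurationSeconds); // ...clamped to the 30-minute API limit: 1800
```

The clamp is the useful habit here regardless of the real API shape: treat the documented session caps as hard constraints in your own code, rather than discovering them mid-conversation in production.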
Limits and pricing
Runway is explicit about constraints, and that is good news for anyone trying to scope real usage. Conversation caps vary based on where you run it:
| Surface | Max conversation length | Notes |
|---|---|---|
| Web app | 2 minutes | Designed for quick tries |
| Developer platform | 5 minutes | Longer testing in Runway’s environment |
| API integration | 30 minutes | Built for real sessions |
Runway’s support docs also outline credit burn: Characters are billed in 6-second chunks at 2 credits per chunk. If you are planning to put this on a high-traffic landing page, model usage fast, because conversational video can get expensive if you treat it like unlimited chat.
Runway is also offering a developer-friendly on-ramp: new users get 30 free minutes of Characters conversation time to start testing and integrating.
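The billing math above is easy to model before you ship anything. A quick sketch using the documented rate of 2 credits per 6-second chunk (rounding partial chunks up is an assumption on my part; confirm the exact rule in Runway’s billing docs):

```typescript
// Estimate credit burn for a Characters session at the documented rate:
// 2 credits per 6-second chunk. Partial chunks are rounded up here,
// which is an assumption, not a documented billing rule.

const SECONDS_PER_CHUNK = 6;
const CREDITS_PER_CHUNK = 2;

function creditsForSession(seconds: number): number {
  const chunks = Math.ceil(seconds / SECONDS_PER_CHUNK);
  return chunks * CREDITS_PER_CHUNK;
}

// A full 30-minute API session: 300 chunks * 2 credits = 600 credits.
console.log(creditsForSession(30 * 60)); // 600

// 1,000 landing-page visitors averaging 90 seconds each:
const perVisitor = creditsForSession(90); // 30 credits each
console.log(perVisitor * 1000); // 30000 credits before you blink
```

Running numbers like these against your actual traffic is the difference between a scoped experiment and a surprise invoice.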
Why this matters
Creators get a new format
We are used to generative AI shipping as assets: an image, a clip, a voiceover. Characters is a format shift: interactive performance. For creators, that opens up a new class of projects that sit between content and product:
- Interactive show hosts for livestream pre-shows, fan hubs, or community portals
- Living brand mascots that can answer questions instead of just being vibes on a homepage
- Story characters that can hold a conversation inside a narrative world, especially for web-based experiences
And yes, some of this will be cringe. But when it is done well, it is sticky in a way static video is not, because the user is not watching, they are participating.
Brands can finally match tone
The quiet win is brand control. Text chat widgets are functional, but they are rarely on brand in a way that feels intentional. A character can carry visual identity, voice identity, and behavioral rules in one place, which makes the interaction feel less like support and more like experience design.
That said, the same thing that makes Characters powerful also makes it risky: a speaking avatar amplifies errors. If your knowledge base is thin or your instructions are sloppy, the result is not a wrong chatbot answer. It is your spokesperson said something weird with full eye contact.
What to watch
Latency in real use
Real time avatars live or die on responsiveness. A two second delay in text chat is tolerable. A two second delay while a face stares into your soul is a horror short. The best demos will feel instant. The real test is how it behaves under load, across devices, and inside embedded web contexts.
Consistency across sessions
Characters will matter most when a persona is repeatable: same voice, same mannerisms, same brand brain, day after day. If creators can maintain that identity without constant babysitting, Characters becomes a reliable interface layer. If it drifts, it becomes a novelty feature you demo once and quietly retire.
Cost vs engagement
Video-based conversation is inherently heavier than text. The teams that win with Characters will be the ones who use it where it earns its keep: high-intent product education, premium onboarding, interactive demos, and ticket deflection with a human-feeling wrapper, not as a default replacement for every FAQ.
Characters is not a better chatbot. It is a more expensive interaction with a higher ceiling for trust, clarity, and conversion, if you put it in the right place.
Bottom line
Runway Characters is a real step toward shippable, expressive conversational avatars, built for creators who want personality and for teams who need deployment options beyond “export a clip.” The single-image workflow lowers the production barrier, the API framing makes it more than a gimmick, and the documented limits make it possible to scope responsibly.
If you are a creator, this is a new canvas: performance as product. If you are a brand, it is a new kind of interface: a spokesperson that can actually answer. The difference between cool and useful will come down to latency, knowledge quality, and whether you can justify the cost with real engagement outcomes, not vibes alone.






