Runway’s Mindblowing Act-One Transforms an Actor into a Cartoon From Just One Video

Runway’s Act-One works from just a single video and transforms the actor into an array of computer-generated characters.

The AI video platform Runway has unveiled a remarkable new tool that transforms a person into a computer-generated character.

Called Act-One, it takes a video of someone talking — which can be shot on just a smartphone — and uses the performance as an input to create compelling animations.

Introducing, Act-One. A new way to generate expressive character performances inside Gen-3 Alpha using a single driving video and character image. No motion capture or rigging required.

Learn more about Act-One below.

(1/7) pic.twitter.com/p1Q8lR8K7G

— Runway (@runwayml) October 22, 2024

Traditionally, this type of technology, which transposes an actor’s performance onto an animated character in films like Avatar, is complex, involving motion-capture equipment, facial rigging, and multiple reference recordings. But Runway says its mission is to “build expressive and controllable tools for artists that can open new avenues for creative expression.”

Without the need for motion-capture or character rigging, Act-One is able to translate the performance from a single input video across countless different character designs and in many different styles.

(3/7) pic.twitter.com/n5YBzHHbqc

— Runway (@runwayml) October 22, 2024

“The key challenge with traditional approaches lies in preserving emotion and nuance from the reference footage into the digital character,” Runway writes in a blog post.

“Our approach uses a completely different pipeline, driven directly and only by the performance of an actor and requiring no extra equipment.”

One of the model’s strengths is producing cinematic and realistic outputs across a robust range of camera angles and focal lengths, allowing you to generate emotional performances with previously impossible character depth and opening new avenues for creative expression.

(4/7) pic.twitter.com/JG1Fvj8OUm

— Runway (@runwayml) October 22, 2024

Act-One can be applied to a wide variety of reference images including cartoons and realistic-looking computer-generated humans, essentially deepfakes.

“The model also excels in producing cinematic and realistic outputs, and is remarkably robust across camera angles while maintaining high-fidelity face animations,” says Runway. “This capability allows creators to develop believable characters that deliver genuine emotion and expression, enhancing the viewer’s connection to the content.”

With Act-One, eye-lines, micro expressions, pacing and delivery are all faithfully represented in the final generated output.

(6/7) pic.twitter.com/R7RX6UhPEQ

— Runway (@runwayml) October 22, 2024

Runway says that users will be able to create high-quality narrative content using nothing more than a consumer-grade camera and one actor reading lines. The actor can even play different characters.

Act-One has begun rolling out to Runway users and will soon be available to everyone.
