Runway Act-Two:
Bring Any Character to Life with Your Performance

It's the next big thing in AI video. Act-Two lets you take a real performance and apply its motion, lip-sync, and expressions to any character you can imagine—from a still photo to an existing video.

Steps

Your First Act-Two Video in Three Simple Steps

The power of Runway's Act-Two is in its simplicity. You don't need a background in animation to get started. Just follow these steps to bring your creative concepts to the screen with this powerful AI video generator.

  • Upload Driving Performance

    Choose a clip with a clear view of a person's actions, expressions, and speech. This footage will act as the blueprint for the final animation.

  • Choose Your Character

    Select the character you want to animate. This step is where you decide whether to turn an image into video or modify an existing one.

  • Fine-Tune and Generate

    Before you hit "Generate," you can adjust settings for more control, such as enabling gesture motion, tuning facial expressiveness, or setting the video's duration.
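If you drive Runway programmatically rather than through the web app, the three steps above map naturally onto a single generation request. The sketch below is purely illustrative: the function name, field names (`reference`, `character`, `settings`), and the `act_two` model identifier are assumptions of ours, not Runway's documented API, so check the official developer docs before relying on any of them.

```python
# Hypothetical sketch of the three-step Act-Two workflow as a request payload.
# Every field name here is an illustrative assumption, not Runway's real API.

def build_act_two_request(performance_url: str,
                          character_url: str,
                          character_type: str = "image",
                          gesture_motion: bool = True,
                          expressiveness: int = 3) -> dict:
    """Assemble a request body mirroring the three steps:
    1) the driving performance video, 2) the character input,
    3) the fine-tuning settings."""
    if character_type not in ("image", "video"):
        raise ValueError("character input must be an image or a video")
    return {
        "model": "act_two",  # assumed model identifier
        "reference": {"type": "video", "uri": performance_url},
        "character": {"type": character_type, "uri": character_url},
        "settings": {
            # gesture control only applies when the character input is an image
            "gesture_motion": gesture_motion,
            "expressiveness": expressiveness,
        },
    }

request_body = build_act_two_request(
    "https://example.com/performance.mp4",
    "https://example.com/character.png",
)
print(request_body["model"])
```

The point of the sketch is the shape of the workflow, not the exact wire format: one performance input, one character input, and a small bag of tuning options.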

Features

Core Features of Act-Two

Act-Two is designed to give you a powerful and flexible set of tools for AI character animation. Here are the core features that make it an essential part of any modern video workflow and a clear step up from Act-One.

  • Animate Anything: Images or Videos as Your Canvas

    This is Act-Two’s standout feature. You can start with a static source and bring it to life, or take an existing video clip and direct a new performance. It’s a powerful technique known as structure-guided video synthesis, a leading approach in video to video AI.

  • Go Beyond the Face: Full Performance Control

    When you start with a character image, Act-Two lets you transfer hand and body movements from your performance video. This is a simplified form of AI motion capture that adds a rich layer of physical expression to your animation.

  • Lifelike Environments, Automatically

    To avoid the static look when you animate a photo, Act-Two intelligently adds camera and environmental motion to your scene. This creates more natural-looking shots with a cinematic feel in a single generation when using a character image.

  • Creative Freedom: A Versatile AI Animation Generator

    This model was built for versatility. Act-Two works well with a wide range of camera angles and non-human characters. Whether you want to animate a face from a photo or a full-body cartoon, the tool is designed to transfer the performance faithfully.

Act-One vs. Act-Two Comparison Table

Feature | Act-One (The Original) | Act-Two (The Upgrade)
Core Function | Generates character animation from a performance. | Generates animation OR replaces a performance in an existing video.
Character Input | Static images only | Static images or video clips
Primary Use | Bringing a still character to life. | Animating a still character or changing the performance in a video.
Technology | The foundational model, part of the Gen-3 Alpha release. | A more mature and versatile tool, part of the Gen-4 video model.

Applications

Power Up Act-Two with FaceFusion

In video synthesis, the quality of your output depends entirely on the quality of your input. A great final render requires great source footage. This is where FaceFusion comes in—helping you create higher-quality source videos with perfect lip-sync.

How FaceFusion Helps

FaceFusion gives you precise control over motion and text-to-speech, ensuring your source footage is clean, consistent, and perfectly prepped for your Act-Two projects.

  • Create Better Driving Performances

    Struggling to film the perfect source video? Use FaceFusion's Image to Video or Text to Video tools to generate a digital actor with consistent motion and flawless dialogue delivery.

  • Design Unique Character Inputs

    Want to give your character a "warm-up" before the main performance? Use FaceFusion to generate a short, subtly animated clip, then use that as your Character Input in Act-Two for a more layered result.

FAQ

Frequently asked questions

Still have questions about Runway AI and Act-Two? We've got answers.

  • What are the recommended specs for input files?

    For the best results, keep Act-Two's key specs and limits in mind:

    Cost: 5 credits per second, 3-second minimum
    Supported durations: Up to 30 seconds
    Infinite generations in Explore Mode: Yes
    Platform availability: Web
    Base prompt inputs: Driving performance: Video; Character: Image or Video
    Output resolutions: 16:9 – 1280x720 px, 9:16 – 720x1280 px, 1:1 – 960x960 px, 4:3 – 1104x832 px, 3:4 – 832x1104 px, 21:9 – 1584x672 px
    Frame rate: 24 fps
    Gesture control: Supported with character images
  • Where does the audio in the final video come from?

  • What is the difference between "video to video" and "image to video" in Act-Two?

  • Is Act-Two a good tool to make photos talk?

  • Can I use Act-Two for my commercial projects?
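The pricing figures in the specs above (5 credits per second, a 3-second billing minimum, and a 30-second maximum duration) are enough for a quick cost estimate. A minimal sketch, assuming only those published numbers; the helper name is ours:

```python
def act_two_credits(duration_seconds: float) -> float:
    """Estimate the credit cost of one Act-Two generation.

    Based on the published specs: 5 credits per second,
    a 3-second billing minimum, and a 30-second maximum duration.
    """
    if duration_seconds <= 0:
        raise ValueError("duration must be positive")
    if duration_seconds > 30:
        raise ValueError("Act-Two supports durations up to 30 seconds")
    billable = max(duration_seconds, 3)  # the 3-second minimum applies
    return 5 * billable

print(act_two_credits(10))  # a 10-second clip costs 50 credits
print(act_two_credits(2))   # short clips are billed at the 3-second minimum: 15 credits
```

So a maximum-length 30-second generation would cost 150 credits under these rates.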