Runway Gen 4 AI Video is Blowing My Mind! First Impressions
Based on MattVidPro's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Runway ML’s Gen 4 arrives with a clear jump in video realism and control—especially for character motion, physics-like effects, and background consistency—while still showing familiar failure modes like occasional anatomy drift and imperfect object interactions. In early demos and hands-on tests, the model produces scenes where lighting behaves plausibly (light passing through fabric), motion reads cleanly (cloak movement, bird wing flaps), and backgrounds stay stable without the “mushy” artifacts common in earlier generations.
The most striking improvements show up in how Gen 4 handles complex movement and coherence across shots. A character in a sandstorm keeps believable physics in clothing and lighting, while a low-depth-of-field forest walk pairs a moving subject with a readable environment. Animal motion stands out too: a vulture-like creature spreads and contracts wings with a level of timing that feels more physically grounded than typical AI motion. Even abstract sequences—like jellyfish-like movement or large-scale transformations—tend to look sharper and more defined, with less of the scribbled, hallucination-heavy texture that can undermine credibility.
Hands-on testing adds nuance. Gen 4 supports image upload and offers multiple aspect ratios (including 16:9, 21:9, 4:3, and portrait options), with generation lengths up to 10 seconds. When prompted to animate a person from an uploaded image—staring into the camera, sprinting away, and pulling the camera upward—Gen 4 follows the intent well, including dust kicked up during the run. But it still struggles with strict continuity: an “armless” test subject regains the missing arm once it sprints away, and a 10-second run repeats the same issue. Upscaling to 4K can improve detail, yet it doesn’t fully fix motion artifacts; warping can still appear when frames are paused.
Vehicle and physics prompts show both progress and limits. A “car speeding away with smoke and a fire trail” prompt generally works better in Gen 4 than in Gen 3, with more consistent fire and higher overall quality. Still, camera rotation can fail to match the exact instruction, and motion can look slightly washed out as the subject shrinks into the distance. A “truck smashing through a wall” test is hit-or-miss: sometimes the truck materializes inside the room rather than breaking through, while later generations do manage a more convincing smash—suggesting that precise physical causality remains difficult.
The model also leans into stylized and narrative-friendly animation. VHS-like “creepy footage” prompts maintain the look while delivering unsettling close-ups (a lemon held to the camera). In 3D animation tests—like a robot riding a rocket to the moon—Gen 4 can preserve character consistency and add emotion through body language, even when the prompt is relatively simple. Where 2D animation appears weaker, the broader takeaway is that Gen 4 is carving out a stronger niche in realism, 3D character performance, and cinematic motion, while leaving room for future refinement in strict anatomy control and deterministic physics.
Cornell Notes
Runway ML’s Gen 4 is presented as a step up from Gen 3 in realism, motion clarity, and controllability—particularly for character movement, lighting, and physics-like effects. Early examples show sharper backgrounds and fewer “mushy” artifacts, with convincing behavior like light passing through fabric and complex wing motion. Hands-on tests confirm practical features such as image uploading, multiple aspect ratios, and up to 10-second generations, plus optional 4K upscaling. The tradeoff: strict continuity is still unreliable (e.g., an armless character regains an arm when sprinting), and physical interactions like “truck through wall” can sometimes materialize incorrectly. Overall, Gen 4 looks strongest for cinematic, 3D-friendly animation and realism-focused prompts.
- What kinds of realism improvements stand out most in Gen 4’s early demos?
- How does Gen 4 perform when given an uploaded image and a multi-step action prompt?
- What does Gen 4’s support for duration, aspect ratio, and upscaling enable?
- Where do physics and object-interaction prompts still struggle?
- How does Gen 4 compare across animation styles—especially 3D vs 2D?
- What does the transcript suggest about Gen 4’s speed and practical workflow?
Review Questions
- In the armless-character test, what specific continuity failure occurs, and at what point in the action does it show up?
- Why might a 5-second generation be more likely to miss complex prompt details than a 10-second generation?
- Give one example of where Gen 4 improves physical realism and one example where it still produces an incorrect physical outcome.
Key Points
1. Runway Gen 4 shows sharper realism and more stable backgrounds than Gen 3, with improved lighting behavior such as light passing through fabric.
2. Complex character motion (including animal wing movement) appears more coherent, with fewer “mushy” artifacts in the environment.
3. Gen 4 supports image upload and multiple aspect ratios (16:9, 21:9, 4:3, and portrait options) with generation lengths up to 10 seconds.
4. 4K upscaling can increase detail, but it doesn’t reliably fix motion warping or anatomy/motion inconsistencies.
5. Strict continuity remains a weak spot: an armless input character regains the missing arm during sprinting, even in longer generations.
6. Physics-style prompts (smoke/fire trails, collapsing bridges, object impacts) often improve in quality versus Gen 3, but deterministic interactions like “truck through wall” can still fail or materialize incorrectly.
7. Gen 4’s strongest niche in these tests is cinematic, realistic, and 3D-friendly animation, while 2D performance is described as comparatively less dominant.