In late September 2025, OpenAI publicly introduced Sora 2, a video-generation model capable of transforming text prompts into short clips with dialogue and sound.

With its ability to simulate physical movement, environments, and characters, Sora 2 was often framed as a major leap for generative video. But was it?

Since its launch, the response has been underwhelming, and there are a few reasons for that. First, OpenAI's trailer for Sora 2 may have been misleading for certain audiences, particularly in the context of commercial production.

The promotional material showcased highly polished, cinematic visuals suggesting high-quality video generation from simple prompts; motion, camera movement, characters, and environments that looked “broadcast-ready”; and a tool capable of replacing or substantially reducing conventional production steps such as shoots, talent, and VFX. When viewed by someone in advertising or production, it could easily appear as though Sora 2 were immediately ready to deliver high-end, commercial-grade spots. However, when put to the test, it has not met any of these expectations.

For instance, a report by NewsGuard found that the tool could create convincing-looking videos, but not necessarily production-grade or always “safe” or “brand-ready.” It also stated that Sora 2 “produced realistic videos advancing false claims 80% of the time (16 of 20 prompts).”

And if you work in visual research, you’ll know this is a whole lot of prompting just to get one usable output, especially considering that, these days, a single day of visual research is expected to yield around 300 files at minimum.

It has been widely reported that users of Sora 2 note similar issues: while results can be impressive, they also face significant limitations. Guardrails block certain prompts, consistency is uneven, and many feel the tool is being used more for memes and entertainment than for professional output. It’s also worth noting that the maximum resolution for Sora videos is 1080p (1920 × 1080), which no longer meets the standard for much of today’s visual research. And although you’re theoretically allowed to download videos in full quality, availability varies across subscription tiers, as does whether the files are suitable for broadcast.

Meme / Slop Machine

Even though the tool gained substantial traction, with some reports suggesting Sora’s mobile app surpassed one million downloads within just a few days of its U.S. and Canadian launch, the majority of this usage appears to be driven by social and novelty purposes rather than professional commercial assignments.

Sora 2 has been described as impressive in certain demo contexts but is not yet viewed as production-grade for high-stakes branding. While the model has advanced in physical simulation, fine control of brand narrative, motion persistence, and live-actor substitution all remain works in progress.

In other words: the excitement is real, but the “threat narrative” remains muted and conditional.


Response from Agencies and Rights-Holders

Rights-holders and agency bodies are now in risk-management mode, asking questions like “How do we control likenesses?” and “How do we ensure brand safety?” The responses of these groups will ultimately determine how broadly tools like Sora 2 can be integrated into commercial production pipelines.


What This Means for Commercial Production Roles

Visual Research & Ideation

Here lies perhaps the clearest impact: tools like Sora 2 may serve as faster prototyping or ideation aids. For example, an agency might generate a series of treatment-concept visuals or short snippet ideas to present to a client. In theory, the cost and turnaround advantages could shorten the “visual research” phase.

At Ghost, we have already been using, when applicable, similar AI tools like Veo 3. These have been helpful, but not disruptive to our workflow so far. The impact of Sora 2 is yet to be seen. It will likely improve over time, as its creators claim, but for now it hasn’t proven to be the panacea it was promised to be.

Shooting & Full Production

At the other end of the spectrum, for full shoots, live talent, brand-specific narratives, and high-stakes broadcast spots, the verdict is still “not yet.” The modeling of characters, brand assets, live-action performance, continuity, and client scrutiny remain difficult for generative video to fully replicate. Agencies currently treat Sora 2 more as a curiosity than a disruption.

Post-Production / VFX / Motion Graphics

Here the impact potential is higher, though still not an immediate replacement. Studios and post houses may adopt elements of Sora-type workflows, such as rapid background generation or motion prototyping, but the full chain (agency brief → live shoot → post → delivery) remains firmly anchored in human expertise and oversight.


Closing Thoughts

The launch of Sora 2 sparked headlines, but in the world of advertising and production it has arrived with more questions than contracts, more possibility than practice, at least for now. Production houses, agencies, visual researchers, and the wider ecosystem would be wise to watch, experiment, and prepare, while recognising that the core of the craft — briefing, storytelling, human judgement, and brand trust — remains in our hands.

What Next?

If this resonates with you, we’ll be sharing more deep dives into the craft of treatment writing and design. Let us know if there’s a topic you’d like us to explore next.

🔗 Check our work at http://www.treatmentsbyghost.com
🔗 Job inquiries info@treatmentsbyghost.com

🔗 Follow @ghost_treatments for more insights