Sora, OpenAI’s new AI video creation platform and quasi-social network, has people concerned, according to a new survey.
In a survey of roughly 4,000 people, commissioned by AI detection and governance company Copyleaks and released last week, 83% of respondents said they’re worried that tools like Sora could be used to create fake or misleading videos. More than half described themselves as very concerned.
We’re seeing this play out in real time amid the government shutdown. Over the last few days, Fox News and Newsmax have been criticized for airing segments featuring supposed SNAP beneficiaries making threats or voicing grievances in viral videos tied to the shutdown. It was later revealed that many of these clips were Sora/AI-generated deepfakes designed to look like real people — in some cases reinforcing (very) racist stereotypes.
Back to the Copyleaks study: nearly one in four respondents (24%) believe the impact of Sora and similar tools will be “mostly negative.” Obviously, that’s bad.
Why This Matters:
AI-generated deepfakes and realistic synthetic videos aren’t limited to Sora. Because these platforms allow downloads, the content can quickly spread across ad-supported networks like TikTok, Instagram, and Facebook. That’s how the SNAP deepfakes ended up catching the attention of Fox News and Newsmax in the first place: they went viral after circulating beyond Sora. This raises the stakes for platforms, advertisers, and the adtech ecosystem alike.
For platforms, there’s a growing need to build in AI-detection and provenance tools at the source. For advertisers, it’s critical to work with independent verification providers to identify and avoid misleading or harmful AI-generated content. Not all AI content is bad, but distinguishing legitimate, creative AI video from manipulated or deceptive material will become increasingly complex — and increasingly essential to brand safety.
Experts React:
Here’s an excerpt from the Copyleaks blog on the survey:
“AI video tools are already being used to distort reality, and the public is both aware and alarmed. There’s growing consensus that while the technology holds promise, detection, transparency, and proactive safeguards are now urgent.”
Our Take:
It’s weird that ad-supported social platforms haven’t done more to address the rise of Sora-style videos. Perhaps the engagement is just too strong to resist? After all, these clips can feel like car wrecks you can’t look away from. From MLK Jr. WWE videos to Sam Altman grilling Pikachu, people are tuning in.
Interestingly, many of these videos are so outlandish that you know them when you see them. The real concern is what happens when AI-generated videos become far more subtle. As the line between real and synthetic blurs, advertisers will be stuck in a constant game of whack-a-mole — and the detection tech will have to evolve just as fast.