In February, OpenAI’s announcement of Sora, a new AI-powered video generation tool, caught the tech world by surprise. While many were intrigued by its potential, it also raised significant concerns, especially regarding its accessibility and potential for misuse. Nearly 10 months later, these concerns have only grown as the tool inches closer to becoming available to the general public. While Sora’s video quality may not have dramatically improved, its accessibility is what has many worried—particularly when it comes to misinformation.
What Sora Does: Sora uses generative AI to produce realistic videos, with the realism and accuracy of the output depending on the input and on which features are used. The quality of the videos produced so far doesn’t appear to have vastly improved since the announcement, but Sora now ships with new AI tools that let users tweak videos and enhance specific features.
At its core, Sora offers the ability to create realistic short videos. Though these videos may not be cinematic masterpieces, they are convincing enough to raise serious concerns about their potential use in fabricated content.
Accessibility and Pricing: One of the most significant aspects of Sora is its pricing and accessibility through OpenAI’s existing subscription services. While Sora is not yet widely available, OpenAI plans to make it accessible to anyone with a ChatGPT Plus or ChatGPT Pro subscription once it works through the current demand.
- ChatGPT Plus ($20/month): This plan gives users access to 50 five-second videos per month. While these short clips may not seem like much, they are still enough to potentially cause disruption if used maliciously.
- ChatGPT Pro ($200/month): The Pro plan unlocks the ability to generate 500 videos per month, with each video being up to 1080p resolution and 20 seconds long. Additionally, these videos are free of watermarks, making them harder to detect as AI-generated content.
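Some back-of-the-envelope arithmetic shows how the two tiers compare on a per-second basis. This sketch assumes every video is generated at its maximum allowed length; actual usage and any additional limits could change the numbers:

```python
# Rough cost-per-second comparison of the two subscription tiers,
# based on the limits reported above (50 five-second videos for Plus,
# 500 twenty-second videos for Pro), assuming maximum-length videos.

def cost_per_video_second(monthly_price, videos_per_month, max_seconds):
    """Dollars per second of generated video at full monthly usage."""
    return monthly_price / (videos_per_month * max_seconds)

plus = cost_per_video_second(20, 50, 5)     # Plus: 250 seconds/month
pro = cost_per_video_second(200, 500, 20)   # Pro: 10,000 seconds/month

print(f"Plus: ${plus:.2f}/s, Pro: ${pro:.2f}/s")  # Plus: $0.08/s, Pro: $0.02/s
```

By this rough measure, Pro works out to a quarter of Plus’s per-second cost while also adding longer, higher-resolution, watermark-free videos, which is exactly why the $200 barrier looks low to a motivated bad actor.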
For many individuals, $200 may be a steep price to pay for creating fake videos. However, for bad actors or organizations looking to manipulate public perception, this price is a relatively small barrier to entry.
Concerns Over Misinformation: The real danger with Sora, as many experts have pointed out, lies in its potential to spread misinformation. In a world already grappling with the rise of deepfakes, Sora could further complicate the fight against fake content. With the ability to easily create and share short videos at a relatively low cost, bad actors could flood social media platforms with misleading or entirely fabricated content.
Imagine a crisis situation where videos allegedly “prove” one side of a political argument or show events in a light that suits a particular narrative. With Sora, these videos could be convincing enough to mislead the public, leading to confusion and division. Unlike text-based misinformation, videos are often perceived as more authentic, making them particularly dangerous tools for spreading falsehoods.
Safety Features and Limitations: OpenAI has introduced several safety features to minimize the risk of misuse, including blocking copyrighted material and preventing notable figures from being incorporated into videos. However, the effectiveness of these safeguards remains uncertain. It’s possible that malicious actors could still find ways to bypass these restrictions and produce videos that contain misleading or harmful content.
The challenge lies in ensuring that these safeguards are robust enough to prevent Sora from being used as a tool for malicious purposes while still allowing creative and constructive uses of the platform.
How to Try Sora: At the moment, Sora is not yet open for general account creation. However, interested users can head to sora.com and log in using their ChatGPT account, provided they have a ChatGPT Plus or ChatGPT Pro subscription. The platform is expected to open up soon, and users will be able to explore its features once it becomes more widely available.
Sora represents both a breakthrough in AI technology and a new challenge in the ongoing fight against misinformation. While the tool’s potential for creativity and innovation is clear, its accessibility also raises valid concerns. The ability to generate convincing, customizable video content for a relatively low cost could make it easier than ever for bad actors to manipulate public perception.
As OpenAI continues to refine its safety measures, it remains to be seen whether the platform can strike a balance between allowing creative freedom and protecting against misuse. In the coming months, the real-world impact of Sora will become clearer as it enters the hands of more users and its potential for good or harm is fully realized.