The creators of ChatGPT, OpenAI, have revealed a new AI model that uses only text prompts to produce realistic-looking one-minute videos.
Summary
- Sora is a brand-new AI model developed by ChatGPT’s creators.
- The AI model creates one-minute videos from simple text prompts.
- Currently, Sora is limited to internal testers and a select group of artists and designers.
OpenAI, the company behind ChatGPT, has unveiled an astounding new AI model. The model, named Sora, can generate a full minute of video using only text commands.
Sam Altman, the CEO of OpenAI, also showcased the technology through posts on his X account. “Please respond with captions for videos you’d like to see, and we’ll start making some. We’d like to show you what sora can do,” Altman posted. He received a flood of prompts from users on the platform, and the results he shared looked strikingly realistic.
Another OpenAI team member shared a Sora-generated video, and it is hard to believe how real it looks.
According to OpenAI, Sora is capable of creating complex scenes with multiple characters, accurate motion, and elaborate backgrounds. The model not only understands user prompts but also how those elements behave in the real world.
The model’s deep understanding of language allows it to reliably interpret prompts and produce compelling characters that vividly convey emotions. According to OpenAI, “Sora can also produce multiple shots that faithfully maintain characters and visual style within a single generated video.”
Although the OpenAI Sora model has generated a lot of excitement online, well-known YouTuber Marques Brownlee, better known as MKBHD, has raised several legitimate concerns about it. “Every single one of these videos is AI-generated, and if that doesn’t concern you at least a little bit, nothing will,” he stated in his post.
Nonetheless, OpenAI is not avoiding the topic of safety. According to the company, it will be putting in place a number of vital safety precautions before deploying Sora in its products.
This entails working with red teamers, specialists in fields like bias, hate speech, and disinformation, who will thoroughly test the model to find potential weaknesses. Furthermore, OpenAI will be building tools to identify deceptive content, such as a detection classifier that can recognize videos produced by Sora.
Additionally, OpenAI will apply the safety protocols already established for DALL·E 3 in its products to Sora as well. For example, the text classifier in an OpenAI application will filter and reject input prompts that violate usage guidelines, such as those requesting extreme violence, explicit sexual content, hateful imagery, or the likeness of celebrities.
According to the company, it has also built robust image classifiers that examine every frame of generated video to ensure compliance with its usage guidelines before users are granted access.
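To make the two-stage safety pipeline described above concrete, here is a minimal, purely illustrative sketch of a prompt-rejection gate. The category names, blocked terms, and function are all hypothetical; OpenAI's actual system uses trained classifiers, not keyword matching.

```python
# Illustrative sketch only -- NOT OpenAI's actual classifier.
# A toy moderation gate in the spirit of the text filter described above:
# prompts matching a disallowed category are rejected before generation.

# Hypothetical category -> term lists; a real system would use a trained model.
BLOCKED_TERMS = {
    "extreme_violence": ["gore", "torture"],
    "hateful_imagery": ["hateful symbol"],
}


def check_prompt(prompt: str):
    """Return (allowed, violated_category) for a text prompt."""
    lowered = prompt.lower()
    for category, terms in BLOCKED_TERMS.items():
        if any(term in lowered for term in terms):
            return False, category  # reject and report which rule fired
    return True, None  # prompt passes the gate


allowed, category = check_prompt("A corgi surfing a wave at sunset")
print(allowed, category)  # True None
```

A production version would pair this pre-generation check with the per-frame image classifiers the company describes, so that both the request and the finished video are screened before anything reaches the user.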
At this time, Sora is available only to red teamers, who are assessing critical areas for potential harms and risks. OpenAI is also granting access to visual artists, designers, and filmmakers to gather feedback on how to improve the model.