OpenAI warned that Sora Turbo has “many limitations.” For instance, it “generates unrealistic physics and struggles with complex actions over long durations.”
To keep Sora from producing harmful content, OpenAI has blocked it from creating damaging forms of abuse, including child sexual abuse material (CSAM) and sexual deepfakes (the company didn’t specify restrictions on other kinds of deepfakes). At launch, uploads featuring people will be restricted, with the feature rolling out to more users over time as deepfake mitigations are refined.
All videos generated by the model will also include C2PA metadata, which helps ensure the content’s metadata discloses that it was created by Sora. Content will also carry visible watermarks, though OpenAI labels them as “imperfect.” Finally, OpenAI built an internal search tool to help verify whether content came from Sora.
For more details on the model’s performance, the company released a Sora System Card that includes the model’s data, risk assessment, external red teaming findings, learnings from Early Artist Access, and more.