OpenAI pulls the plug on Sora, the viral AI video app that sparked deepfake concerns
The move comes less than two years after the launch of the AI video app, which sent shockwaves through the media industry.
Why it matters
The shutdown of Sora signals growing tensions between AI innovation and societal concerns about deepfakes, copyright, and media integrity. It raises questions about how powerful AI tools should be governed and who decides when a technology poses too much risk to continue.
Where do you stand?
Should governments impose strict regulations on generative AI tools before they cause widespread harm, or does regulatory caution risk stifling beneficial innovation that could democratize creative tools?
When AI threatens creative industries, should society prioritize protecting artists' livelihoods through copyright restrictions, or embrace AI tools that could eventually augment and expand creative possibilities?
Does Sora's shutdown represent responsible corporate accountability for emerging risks, or does it show how vague fears about deepfakes lead companies to abandon useful tools without clear evidence of actual harm?