Artists and advocates push back on OpenAI’s Sora 2


The output is too realistic and the safeguards too weak, critics say

• Japanese creators accuse OpenAI’s new video tool of copying protected works
• Watchdogs warn of deepfake and misinformation risks
• Regulators in Japan and the EU move toward tighter AI oversight


OpenAI’s latest video-generation model, Sora 2, is drawing sharp criticism from creators, regulators, and digital-rights groups who say it threatens intellectual property, personal privacy, and public trust in media.

A coalition of Japanese entertainment companies—including Studio Ghibli, Bandai Namco, and Square Enix—has accused OpenAI of using copyrighted animation and design styles without permission to train its model. The industry group Content Overseas Distribution Association (CODA) said the company’s “opt-out” system for rights holders reverses the normal burden of consent and violates Japan’s copyright principles. CODA urged OpenAI to stop using Japanese works entirely until a legal framework is in place.

Fears of deepfakes and misinformation

Public Citizen, a U.S. consumer advocacy group, has called on OpenAI to suspend Sora 2, warning that its realistic video output could be misused to create political deepfakes or non-consensual images of individuals. Critics say the system’s safeguards are too weak to prevent impersonation or manipulation during an election year.

OpenAI has paused generation of certain figures, including Martin Luther King Jr., after family representatives objected to AI-created likenesses circulating online.

Company promises tighter guardrails

In response to the backlash, OpenAI said it will give rights holders more “granular control” over how their content is used and will expand filtering tools to block unauthorized likenesses. The company maintains that Sora 2 was trained on licensed and publicly available data and that its goal is to democratize creative video production.

Still, artists, studios, and watchdogs argue that Sora 2 exposes unresolved questions about consent, compensation, and control in the era of AI-generated media.

Regulators weigh in on AI video rules

Regulators in Japan have already raised alarms about Sora 2’s treatment of copyrighted characters and animation styles. The Japanese government formally requested that OpenAI refrain from actions that “could constitute copyright infringement” after the tool produced videos resembling popular anime and game characters. Meanwhile, Japan’s new AI Promotion Act gives the government power to investigate AI systems that may infringe citizens’ rights or harm creative industries.

In Europe, under the incoming EU Artificial Intelligence Act, AI-video systems will soon face stricter transparency and liability requirements, which could significantly affect how tools like Sora 2 handle training data, consent, and content moderation.


How global rules compare

| Region | Key regulatory framework / status | Relevant obligations & features | Implications for AI-video tools |
| --- | --- | --- | --- |
| EU | Artificial Intelligence Act (entered into force Aug 2024) | Risk-based classification of AI systems; deepfake/synthetic-media transparency (AI-generated video must be labeled and disclosed); applies to providers outside the EU offering services within its territory | A video generator must label outputs that resemble real people or events, maintain logs, and comply with content-documentation rules |
| Japan | Developing framework under the AI Promotion Act and existing IP/cyber laws | Government authority to investigate suspected IP or rights violations; deepfake or impersonation content may violate defamation and privacy laws | Tools producing non-consensual or IP-infringing videos may face enforcement even without dedicated AI legislation |
| United States | Patchwork of state and federal deepfake laws | TAKE IT DOWN Act (2025) criminalizes non-consensual intimate AI images; several states regulate political and likeness-based deepfakes | Fragmented compliance environment; AI-video tools must monitor evolving state laws and implement robust moderation |

Together, these developments show that regulators across major markets are converging on one theme: powerful generative-video tools like Sora 2 will face mounting demands for transparency, consent, and accountability as governments move to catch up with AI’s rapid evolution.

