Japan Warns OpenAI Over Using Studio Ghibli’s Art to Train Sora AI

Japan’s CODA warns OpenAI to stop using Studio Ghibli and other Japanese works to train Sora 2, its AI video-generation model.
Japan’s Content Overseas Distribution Association (CODA), which represents major Japanese rights holders including Studio Ghibli, has issued a warning to OpenAI over the alleged use of Japanese creative content to train its AI models, particularly Sora 2, the company’s advanced video generation tool. The association has demanded that OpenAI immediately cease using copyrighted Japanese material without permission.
Earlier this year, social media was flooded with “Ghibli-style” artwork — not created by the famed animation studio, but by OpenAI’s ChatGPT image generator. The tool allowed users to transform ordinary pictures into the signature Studio Ghibli aesthetic, sparking a global trend. Even OpenAI CEO Sam Altman joined in, updating his profile photo in the same whimsical style. However, the viral trend has now drawn backlash from Japanese rights holders.
In a formal letter to OpenAI, CODA expressed concern that the company’s training practices may have involved unauthorized use of Japanese art and creative works. “A large portion of content produced by Sora 2 closely resembles Japanese content or images,” the letter stated, alleging that such similarity suggests that copyrighted materials were used for machine learning. CODA also argued that these replicated visuals may constitute copyright infringement.
The association’s demands are twofold: it wants OpenAI to stop using Japanese content in its datasets and to address any inquiries from CODA’s member companies about potential copyright violations stemming from Sora 2’s outputs.
At the center of the dispute lies OpenAI’s opt-out copyright policy, under which creators must explicitly request that their works not be used for AI training. According to OpenAI, this policy applies to both its ChatGPT and Sora platforms. However, CODA argues that this approach conflicts with Japan’s copyright system, which requires prior consent from the copyright holder before any use of their material — not retroactive exclusion.
Reports also suggest that the Sora iOS app allows users to easily generate videos featuring copyrighted characters such as SpongeBob SquarePants, raising additional legal questions about how OpenAI moderates its outputs.
This controversy comes amid broader global scrutiny of AI companies over data usage and copyright protection. OpenAI has faced similar accusations from organizations such as India’s Digital News Publishers Association (DNPA) and major international media houses, which claim that their copyrighted articles and images were used to train AI models without authorization.
Adding another layer to the issue, Amazon’s recent $38 billion partnership with OpenAI, reportedly aimed at securing access to Nvidia chips and expanding AI infrastructure, could influence how the company handles copyright compliance and international legal challenges. The scale of this deal underscores the growing tension between rapid AI innovation and the rights of the creators whose works may be fueling it.
As CODA pushes for accountability, the outcome of this dispute could set a significant precedent for how AI companies engage with creative content in Japan and beyond.