Meta Denies Using Adult Content to Train AI, Calls Lawsuit ‘Baseless and Speculative’

Meta rejects claim of using pirated adult films for AI training, stating any downloads were likely personal and unrelated to company research.
Meta has firmly denied accusations that it used pirated adult videos to train its artificial intelligence systems, calling a recent lawsuit by adult film studio Strike 3 Holdings “nonsensical and unsupported.” The company emphasized that its AI models have never been trained on explicit material and suggested that any such downloads from its network were likely made by individuals for personal use.
The lawsuit, filed in the U.S. District Court for the Northern District of California, claims that Meta’s corporate IP addresses were linked to the illegal downloading of over 2,300 copyrighted adult films owned by Strike 3 Holdings via BitTorrent. The studio, which describes its catalog as “award-winning and critically acclaimed,” has accused Meta of using these videos to train its AI models—including Movie Gen, the company’s video generator, and the LLaMA large language model. Strike 3 has demanded $350 million in damages.
Meta, however, dismissed the allegations, stating that there are “no facts to suggest that Meta has ever trained an AI model on adult images or video, much less intentionally so,” according to its court filing cited by Ars Technica. The company said the lawsuit’s claims are speculative and lack credible evidence linking any alleged downloads to its AI programs.
“The far more plausible inference to be drawn from such meagre, uncoordinated activity is that disparate individuals downloaded adult videos for personal use,” Meta said in its response, rejecting the notion that the activity had any connection to company research or AI development.
Meta also argued that Strike 3’s own timeline undercuts the case. The alleged downloads date back to 2018, years before Meta began its major multimodal AI research in 2022. That gap, the company said, undermines the lawsuit’s core claim that the videos were used to train its models.
Meta further underscored that its policies strictly prohibit the use or generation of adult content in its AI systems. “We don’t want this type of content, and we take deliberate steps to avoid training on this kind of material,” the company said. It also dismissed as “inaccurate and absurd” the suggestion that pornography could in any way improve AI model performance.
Meta acknowledged the challenge of tracking every download across its vast corporate infrastructure, noting that “monitoring every file downloaded by any person using Meta’s global network would be an extraordinarily complex and invasive undertaking.” The company added that Strike 3 failed to identify who was responsible for the alleged downloads or how those activities could be tied to any Meta project.
In its filing, Meta accused Strike 3 of employing “extortive” legal tactics, citing the studio’s history of filing mass lawsuits. “Plaintiffs go to great lengths to stitch this narrative together with guesswork and innuendo, but their claims are neither cogent nor supported by well-pleaded facts,” Meta wrote.
The company has requested that the court dismiss the lawsuit entirely, reaffirming that the allegations are baseless and unsupported by evidence.