Meta Platforms has found itself in a legal and reputational spotlight this week after being accused by adult-film producer Strike 3 Holdings of downloading thousands of its adult videos for the purpose of training AI systems. Meta has categorically denied the claim, stating that any downloads tied to its network were uncoordinated, minimal, and likely carried out by individuals for “personal use” rather than as part of any AI-training programme.
The Allegations
Strike 3’s lawsuit alleges that Meta downloaded roughly 2,400 of its adult films via BitTorrent from Meta corporate IP addresses over a period of years, and may have used those files to train its generative AI video model (allegedly named “Movie Gen”). The suit further alleges that a “stealth network” of some 2,500 masked IP addresses was used to conceal the activity.
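Allegations of this kind typically rest on matching IP addresses observed in BitTorrent swarms against a company’s publicly registered address ranges. As a minimal sketch of that attribution step (the address ranges and log entries below are invented for illustration and use RFC 5737 documentation prefixes, not Meta’s actual ranges):

```python
import ipaddress

# Hypothetical corporate address ranges (RFC 5737 documentation prefixes,
# stand-ins for whatever ranges a plaintiff would look up in WHOIS records).
corporate_ranges = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

# Invented sample of peer IPs observed in a BitTorrent swarm.
swarm_log = ["203.0.113.42", "192.0.2.7", "198.51.100.9"]

def attribute(ip_str: str) -> bool:
    """Return True if the peer IP falls inside any corporate range."""
    ip = ipaddress.ip_address(ip_str)
    return any(ip in net for net in corporate_ranges)

matches = [ip for ip in swarm_log if attribute(ip)]
print(matches)  # peer IPs that fall inside the corporate ranges
```

The sketch also illustrates the limits of this kind of evidence: a match only ties a download to a network, not to a particular employee, team, or purpose, which is precisely the gap Meta’s defence exploits.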
Meta’s Response
Meta responded by filing a motion to dismiss the lawsuit, claiming:
- There are no facts suggesting Meta ever trained an AI model on adult images or videos — “much less intentionally so”.
- The downloads cited are sparse, averaging roughly 22 per year across Meta’s corporate network, which does not align with what one would expect from a systematic AI-training operation.
- The alleged downloads largely pre-date the company’s major push into “multimodal models and generative video” research, which lets Meta argue that the timeline does not line up with training use.
- Meta’s terms of service explicitly prohibit generating adult content with its AI models, undermining the claim that adult videos would have been “useful” as training data.
- The company emphasises that the downloads in question could have been made by contractors, visitors, or other external parties using Meta’s network, and were not necessarily tied to Meta employees or to Meta’s AI research.
In Meta’s words: “We don’t want this type of content, and we take deliberate steps to avoid training on this kind of material.”
Why This Matters
- For AI ethics and training data transparency: Major AI firms are under increasing scrutiny over what data they use to train large models, how they source it, and whether copyrighted or sensitive material is included. This case touches on both data provenance and copyright risk.
- For corporate network oversight: The allegation underscores how even peripheral or personal behaviour on corporate networks can become part of broader legal exposure for a firm.
- For reputation risk: Adult content is inherently a sensitive area. A large tech company being accused of pirating or seeding adult videos raises reputational issues that go beyond pure legal liability.
- For legal strategy: Meta is pushing for dismissal, arguing the evidence is speculative and lacks any direct link to AI training; the outcome could set precedents for how copyright and training-data lawsuits against AI firms proceed in future.
Key Takeaways
- Meta denies any AI training on adult pornographic content.
- The downloads in question are claimed to be minimal, uncoordinated, and for “personal use”.
- The timeline alleged by the plaintiff (Strike 3) is challenged by Meta as misaligned with its own AI research timeline.
- Meta emphasises its policy forbidding adult content generation via its AI, and says it takes steps to avoid such material in training.
- The case highlights broader questions of how large tech firms monitor internal network use, how they document training datasets, and how they manage legal risk related to data sourcing.
What Happens Next
The lawsuit by Strike 3 is still pending. Meta has asked the US District Court to throw out the case, arguing that the plaintiff’s claims rest on guesswork and innuendo rather than well-pleaded facts. The outcome may help clarify how courts treat allegations of AI training with potentially infringing content, especially when the link between data-usage and model training is indirect or contested.
Final Thoughts
Even if Meta succeeds in dismissing the suit, the case is a warning signal for tech companies investing heavily in AI: the sourcing and documentation of training data matter more than ever — not only for performance and fairness, but for legal and reputational risk. Transparent data practices, robust internal oversight of network downloads, and clear policies around restricted content are going to be increasingly important in the age of generative AI.