A Rights Reset After the Verdicts
Engineering AI’s Next Era of Responsibility, Where Rights Return to the Center
This essay was contributed by Jeff Allen, who has over 20 years of experience working with Big Tech in media and is the founder of Flikforge, an end-to-end data management platform for AI-powered video creation focused on making the entire ecosystem legally trustworthy.
For the better part of a decade, artificial intelligence companies operated in the shadows of legal ambiguity. They trained models on oceans of digital content, including books, music, images, and code, much of it scraped from the open internet without permission. These datasets were vast, messy, and, crucially, unlicensed. For years, the industry thrived on the assumption that “fair use” might shield them from liability, or at least delay the inevitable reckoning long enough to establish market dominance.
That era is closing. Once lawsuits began landing in earnest, the reality changed. Public datasets have been exhausted, high-quality proprietary material is locked behind paywalls, and lawsuits have peeled back the veneer of legitimacy that scraping once enjoyed.
The lawsuits now unfolding mark a shift: litigation is no longer just a penalty but the blunt instrument forcing AI developers to confront the need for rights-native licensing systems, solutions that, if adopted earlier, could have prevented these legal battles altogether.
The result is a pivot point in the history of AI: one where lawsuits and settlements are finally forcing companies to treat content as an asset rather than a free commodity. The companies that once fueled growth by cutting corners are now, ironically, being steered toward the licensing marketplace that authors, artists, publishers, and studios have been demanding all along, though only after years of avoidable conflict.
From Piracy to Payment
It's not just a handful of headline-grabbing lawsuits that are reshaping the industry: well over 200 copyright and antitrust cases worldwide will eventually demand resolution, brought by data owners across the globe. These lawsuits vary in nature, from claims of wholesale scraping and unauthorized ingestion of books, articles, and images to demands for payment on derivative content or AI-generated output that purportedly echoes protected work. Many are still pending, with a complex web of rulings, settlements, dismissals, and appeals in motion.
While this could take many more years to sort out, the clearest signal of a turning point came on September 5, 2025, when a federal judge declined to approve a proposed $1.5 billion settlement of a sweeping class-action lawsuit brought against Anthropic by authors including Andrea Bartz. The case centered on Anthropic’s reliance on pirated books to train its Claude chatbot. Over seven million illicit copies were allegedly in circulation inside the company’s systems, a breathtaking scale of infringement.
The judge's rejection of the proposed terms was blunt. In his ruling, he expressed concern that the settlement failed to adequately compensate authors and strongly suggested that it did not establish a clear, enforceable path toward licensed data acquisition. He wrote, “A settlement of this scale should not be a 'get out of jail free' card, but a clear blueprint for future compliance.” While the dollar figure was eye-watering, the symbolism of the court’s rejection was greater: a top-tier AI developer's attempt to settle was deemed insufficient, signaling that the “build first, litigate later” approach is unsustainable.
Critics noted the irony. To some, Anthropic’s proposed settlement looked less like contrition and more like a cost of doing business: build a $183-billion-valuation company on pirated data, then pay a fraction of that value back to the people whose work fueled it. Some see this as the standard Silicon Valley playbook: disrupt first, apologize later. The court’s rejection, however, suggests the judicial system is now committed to a deeper market correction.
If Anthropic’s courtroom setback was the warning shot, Midjourney’s ongoing battles with Hollywood have been the cannon fire, showing that even the most innovative platforms must now contend with rights enforcement not just on training data but on generative outputs themselves. In June 2025, Disney and Universal filed lawsuits accusing the image-generation platform of mass plagiarism, alleging that its tools churn out copyrighted characters ranging from Mickey Mouse to Minions without so much as a licensing agreement. By September, Warner Bros. Discovery had joined the fray, claiming Midjourney enabled users to create unauthorized versions of Superman, Bugs Bunny, Batman, and more.
The suits are scathing. Studios allege Midjourney not only trained on illegal copies of their works but also deliberately disabled safeguards that once prevented users from generating infringing content. Damages could reach up to $150,000 per violation, a sum that, when multiplied across thousands of works, could obliterate the company.
Midjourney, for its part, insists that its training falls under the umbrella of “transformative fair use” and argues that end users, not the company itself, bear responsibility for misuse. But this defense feels increasingly tenuous as courts begin to treat infringing derivative outputs as unlicensed exploitation rather than innovation.
And now, a new front has opened abroad. On September 16, 2025, Disney, Universal, and Warner Bros. Discovery filed a joint lawsuit against Chinese AI firm Minimax, accusing it of large-scale copyright infringement tied to its generative image and video tools. The case mirrors many of the Midjourney allegations but underscores the truly global nature of the fight: Western studios are extending enforcement into China’s AI sector. Its significance lies not just in its scale but in its geopolitical implications, signaling that IP enforcement in generative AI will not remain a U.S.-only issue.
These cases collectively illustrate the stark reality of how AI companies built their empires: on a foundation of borrowed, often stolen, intellectual property. Yet there is an ironic silver lining. By forcing billion-dollar companies to pay settlements, courts are accomplishing what legislation has struggled to achieve: they are making scraping and piracy financially untenable and pushing companies toward licensed data pipelines.
For the first time, authors, filmmakers, and artists may stand to benefit financially from the AI boom, rather than being trampled by it.
Toward a Rights-Native AI Economy
The question is what comes next. Court rulings and rejected settlements may punish the past, but they don’t provide a roadmap for the future. To build an AI economy that is both innovative and legitimate, the industry needs a reliable, automated infrastructure for managing rights: scalable, fast, and trustworthy. It’s not an impossible task.
We’ve seen this kind of transition before. In the early days of e-commerce, the internet was a chaotic frontier where trust was scarce. Consumers were rightly hesitant to type their credit card numbers into websites that could vanish overnight. It wasn’t until the emergence of VeriSign, SSL certificates, and standardized encryption protocols that online transactions became both safe and scalable. Only then could platforms like Amazon or eBay flourish.
AI now stands at a similar juncture. Just as e-commerce needed trust and authentication layers, AI needs a rights-native transaction system that can validate, license, and enforce the use of data at an industrial scale. Without it, companies will continue stumbling into lawsuits that could have been avoided by adopting licensing-first solutions.
Building that system requires four non-negotiable features (a sketch of how they might fit together in code follows the list):
Proof of origin: Certification embedded at the infrastructure level, so that every dataset, whether a book, image, or video, can be traced to a legitimate source and verified as unaltered.
Real-time licensing: Manual negotiations can’t match the speed of AI training. Rights holders should be able to publish works with machine-readable terms, standardized or dynamic pricing, and instant clearing of transactions. That way, a developer requesting 50,000 video clips or 500,000 book passages gets immediate approval, and royalties flow automatically.
License enforcement: Safeguards at both input and output. Models must be prevented from ingesting uncertified content, while generated outputs that borrow from licensed works should carry a digital license before being published or commercialized.
Automation and auditability: Transparent, automated ledgers that record every license transaction, data ingestion, and output approval, building trust through verifiable audit trails.
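To make these four features concrete, here is a minimal Python sketch of how they might fit together. Everything in it is a hypothetical illustration, not an existing API: the `LicenseRecord`, `RightsLedger`, `publish`, and `ingest` names are invented for this example. Proof of origin becomes a content hash, real-time licensing a published machine-readable record, enforcement a check at ingestion time, and auditability an append-only log.

```python
import hashlib
import json
import time
from dataclasses import dataclass


@dataclass(frozen=True)
class LicenseRecord:
    content_hash: str     # proof of origin: SHA-256 of the certified asset
    rights_holder: str    # who gets paid when the asset is used
    price_per_use: float  # machine-readable, standardized pricing
    allowed_use: str      # "training", "generation", or "both"


class RightsLedger:
    """Append-only record of licenses and ingestion events (auditability)."""

    def __init__(self) -> None:
        self._licenses: dict[str, LicenseRecord] = {}
        self._audit_log: list[dict] = []

    def publish(self, record: LicenseRecord) -> None:
        """A rights holder publishes machine-readable terms (real-time licensing)."""
        self._licenses[record.content_hash] = record
        self._log("publish", record.content_hash)

    def ingest(self, raw_bytes: bytes, intended_use: str) -> bool:
        """License enforcement at input: block uncertified or out-of-scope content."""
        digest = hashlib.sha256(raw_bytes).hexdigest()
        record = self._licenses.get(digest)
        if record is None or record.allowed_use not in (intended_use, "both"):
            self._log("reject", digest)
            return False
        # A production system would clear royalties to record.rights_holder here.
        self._log("ingest", digest)
        return True

    def audit_trail(self) -> str:
        """Serialize the verifiable audit trail for inspection."""
        return json.dumps(self._audit_log, indent=2)

    def _log(self, event: str, content_hash: str) -> None:
        self._audit_log.append({"ts": time.time(), "event": event, "hash": content_hash})


# Usage: a publisher certifies a book; a developer's pipeline checks it before training.
ledger = RightsLedger()
book = b"full text of a licensed book"
ledger.publish(LicenseRecord(
    content_hash=hashlib.sha256(book).hexdigest(),
    rights_holder="Example Press",
    price_per_use=0.25,
    allowed_use="training",
))
print(ledger.ingest(book, intended_use="training"))             # True: certified
print(ledger.ingest(b"scraped page", intended_use="training"))  # False: blocked
```

The design choice worth noting is that the gate sits in front of the model, not behind it: an uncertified asset never reaches training at all, which is exactly the inversion of the scrape-first posture the lawsuits are punishing.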
The rationale is simple. Without it, AI companies face endless litigation, unpredictable liabilities, and the reputational damage that comes with being labeled pirates. With it, they gain scalability, compliance, and legitimacy. Rights holders, meanwhile, receive compensation and visibility, turning the AI boom from a threat into an opportunity. And regulators gain confidence that the industry is operating within a framework that respects intellectual property rather than undermining it.
Media is dear to our hearts, and the latest Minimax lawsuit highlights a deeper truth: lawsuits absent a technical, go-forward solution do little. Courts can punish the past, but they cannot engineer the infrastructure the industry needs. Both developers and rights-holders must step up to implement scalable licensing frameworks, or we risk repeating the cycle of piracy, settlement, and renewed litigation.
In short, this is not merely a technical fix but an economic necessity. Just as SSL unlocked the commercial internet by making transactions secure, a rights-native data licensing infrastructure will unlock the next phase of AI by making innovation sustainable. The messy era of pirated training data may soon be remembered like the chaotic early internet: a frontier that never fully shed its risks, but gradually evolved toward enforceable rules, order, and scalable growth.
A platform like Flikforge is already prototyping shot-level licensing, automated contracts, and auditable provenance to show how these solutions can move from theory to practice. By demonstrating that rights-native infrastructure is not just possible but already in motion, it offers a blueprint for where the industry can head if it chooses to step up.
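To make “shot-level licensing” concrete, here is a hypothetical manifest a single video clip might carry. Every field name and value is illustrative; this is not Flikforge's actual schema, only one way the provenance, terms, and audit pieces could be expressed.

```python
# Hypothetical shot-level license manifest; illustrative only, not a real schema.
shot_license = {
    "shot_id": "scene04_shot12",
    "source_hash": "sha256:9f2c...",  # provenance: ties the clip to a certified master
    "rights_holder": "Example Studio",
    "terms": {
        "scope": "ai-training",       # what the licensee may do with the clip
        "term_days": 365,             # license duration
        "fee_usd": 12.50,             # machine-readable pricing
    },
    "audit": {
        "issued_at": "2025-09-20T14:03:00Z",
        "contract_id": "auto-7f31",   # reference to the automated contract
    },
}
```

Because every shot carries its own terms, an entire production can be licensed clip by clip without a single manual negotiation.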
Conclusion
The federal court’s rejection of Anthropic’s proposed settlement and the pending litigation against Midjourney mark a turning point in the AI industry's approach to data. Together with the new Minimax case, they expose how unscrupulous ingestion of copyrighted content has become financially and legally untenable, not just in the U.S., but globally.
Yet, they also present an opportunity: a market-driven correction that rewards original creators and punishes piracy, while encouraging regulated, transparent, and automated licensing. The open question remains whether this translates into a truly rights-native economy or simply a new cost of doing business, where large companies can settle and move on.
The legal system is forcing compliance today, but sustainable change requires structural licensing systems that avoid lawsuits altogether. In the end, the goal is a generative AI ecosystem that’s not only powerful but responsible, ethical, and built on respect for creativity.