By Jenifer Wallis

The past year brought a wave of first-of-their-kind rulings shaping how copyright law applies to artificial intelligence, especially in entertainment. But while 2025 was busy, 2026 may be the year everything truly comes to a head. Major studios like Disney, Warner Bros., and Universal have now entered the fight, filing copyright infringement lawsuits against leading AI companies.

So, where do things stand as we close out 2025, and what’s next for 2026?

Major Developments in 2025

In June 2025, Judge Alsup issued a closely watched order partially granting and partially denying summary judgment in Bartz v. Anthropic, 3:24-cv-05417 (N.D. Cal.). In short, Judge Alsup held that the books Anthropic legally purchased could be used to train its AI model under the fair use doctrine, but that the books Anthropic pirated were infringed when used to train the model. Beware of LinkedIn clickbait attempting to distill Alsup’s order into mere one- or two-word rules; the opinion is far more nuanced than that.

Following Judge Alsup’s order, the case settled in September 2025 for $1.5 billion, the largest AI copyright settlement to date, including a requirement that Anthropic destroy its data sets of pirated books.[1] While $1.5 billion sounds huge, it equates to only approximately $3,000 per infringed work across the roughly 500,000 works at issue. In comparison, willful copyright infringement can result in statutory damages of up to $150,000 per infringed work, plus attorney’s fees. By that measure, despite the headline-grabbing number, Anthropic may have gotten off lightly.

Just a few days after Judge Alsup’s order in Bartz v. Anthropic, Judge Chhabria issued an order granting summary judgment in Kadrey v. Meta, 3:23-cv-03417 (N.D. Cal.). Some outlets reported that Judge Chhabria’s order in Kadrey found that large language model (LLM) training constitutes fair use regardless of whether the underlying materials were obtained from legitimate sources or pirated. That is not what the order says.

Judge Chhabria found that while such use may be transformative, transformativeness alone doesn’t guarantee a fair use defense. He also criticized Judge Alsup’s earlier order, calling Alsup’s analogy comparing AI training to “training schoolchildren to write well” “inapt” and warning that it downplays the most important factor: harm to the market.

Perhaps the most striking part of Chhabria’s opinion comes when addressing the argument that ruling against the AI companies could hinder technological progress. Chhabria dismissed that notion, writing: “If using copyrighted works to train the [AI] models is as necessary as the companies say, they will figure out a way to compensate copyright holders for it.”

Still, Chhabria cautioned against over-reading his decision, finding that he had no choice but to grant summary judgment due to “the state of the record[.]” To avoid any possibility of doubt, Chhabria wrote:

as should now be clear, this ruling does not stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful. It stands only for the proposition that these plaintiffs made the wrong arguments and failed to develop a record in support of the right one.

While Chhabria’s language is sharp, it underscores the limited scope of this order within the evolving landscape of copyright law.

Chhabria also devoted a section to Meta’s original plan to budget $100 million to license the books needed for its language models. The company ultimately abandoned that plan after realizing there was no centralized system for obtaining such licenses at scale.

In another case, Concord Music Group, Inc. v. Anthropic PBC, 5:24-cv-03811 (N.D. Cal.), Judge Lee denied Concord’s August 2025 motion to amend its complaint to add new infringement claims related to Anthropic’s alleged use of pirated musical works. The reason for the denial remains unclear, though it may have been merely procedural: Concord may simply have missed the deadline to amend.

2026 and Beyond

Now the major studios have arrived. Disney, Warner Bros., and Universal have sued Midjourney, alleging that its text-to-image platform infringes their copyrights both in its outputs and in its training on beloved franchises, including Despicable Me, Star Wars, Shrek, and Superman, among others. Judge Alsup’s reasoning in Bartz could support an argument that training on lawfully purchased movies is fair use; Judge Chhabria’s view, however, suggests otherwise.

Another open question: what about creative works that include explicit disclaimers prohibiting AI training? Some Kindle books now include such language. Courts haven’t yet addressed whether these disclaimers are enforceable, but expect that issue to surface soon.

We’re also seeing new alliances form. Businesses are banding together to strengthen their bargaining power with AI companies. One example is The Magazine Coalition,[2] a new organization helping publishers license and protect their content for AI use. In the Coalition’s own words, it was formed “to solve the problem of AI using content they don’t own.”[3]

As noted back in 2024,[4] there is no single strategy to protect creative works from AI. Rather, it will take a combination of legislation, litigation, and licensing deals to balance innovation with creator rights. 2026 may be the year those paths start to converge.


[1] https://www.anthropiccopyrightsettlement.com/

[2] The Magazine Coalition https://magazinecoalition.com/vision/

[3] Id.

[4] Current AI Issues in the Entertainment Space https://www.munckwilson.com/news-insights/current-ai-issues-in-the-entertainment-space/

