AI rose in the wake of the cloud computing wave, at a time when the Internet had become centralized, hyper-concentrated and dominated by a handful of players. Why? Because the value of an AI service lies in the accumulation of data on which its model is trained.
Google's AI got ahead of the game by training on the millions of YouTube videos at its disposal. The result is Veo 3, a generative AI that is months ahead of the competition in video generation.
And when these companies lack sufficient data, it is hard for them to resist the temptation of acquiring these resources illegally. In February 2025, Meta admitted to pirating hundreds of thousands of books, 82 TB of text, to train its Large Language Models (LLMs).

The AI war revolves around access to the largest possible pool of training resources. Obviously, this acquisition should not happen outside the legal framework. And that is how we reach the issue at hand: defending copyright and the monetary value of the resources used to build and run these models. This is a necessary and urgent discussion, as the legal framework must adapt quickly to a rapidly changing technological landscape.
Because, let's face it, we don't really know today how to apply the existing rules governing the remuneration of intellectual property to the rapid emergence of AIs. The answer to this question is critical, and should be at the heart of the debate, since it determines the entire business model for generative AI services.
I have written on that subject before, making the point that the economic models around which these services are currently built would not be viable if authors were fairly and justly compensated.
The arrival of AI is likely to be challenged by the financial stakes involved in respecting copyrights - and disillusionment about the real capabilities of these tools in the near future.
— Mickaël Rémond, Ces modèles qu'on appelle IA
And it seems that all the major players in the field have clearly understood this point: the future of AI hinges on whether regulations emerge that promote respect for copyright and set the conditions under which artificial intelligence models may be trained.
But it remains to be seen whether governments are really willing to shoulder their responsibilities: to curb the content bulimia of the big companies and to give real weight to the authors whose work is used in LLM training.
On the one hand, lawsuits against AI companies are springing up all over the world, as in France, where Meta is being sued for copyright infringement; on the other, some governments aim to protect these companies and give them free rein.
On November 13, 2024, Shira Perlmutter testified, in her role as Director of the U.S. Copyright Office, before the Senate Subcommittee on Intellectual Property. A few months later, when she published her office's report questioning the use of copyrighted content to train AI models, she was dismissed by the Trump administration.

And now Trump's massive pet project, the bill he is trying so hard to pass in the US, contains, among other things, a 10-year moratorium on legislation regulating AI. The measure has been criticized as incredibly imprudent, including by members of his own party, who see it as a dangerous precedent.

Another example: on this side of the Atlantic, the British government has announced its intention to write into law a copyright exception allowing AI models to be trained for commercial purposes, provoking a backlash from the public and from artists such as Paul McCartney.
« So you know, if you're putting through a bill, make sure you protect the creative thinkers, the creative artists, or you're not going to have them. I think AI is great [...] but it shouldn't rip creative people off. There's no sense in that »
— Paul McCartney - BBC interview
Meanwhile, on the creators' side, fear is beginning to creep in, and online it is taking over the conversation. Authors with a more nuanced, measured stance find themselves having to defend their position to their audience. One of them is R.J. Bennett, who is also a programmer and speaks out on Instagram. That dual background means he understands the technology behind LLMs and can offer a measured, informed technical opinion on AI, while exposing the scams surrounding LLMs and the companies that exploit them.
He felt the need to make a video responding to accusations that he uses AI for his novels and creative writing, accusations made after he defended using AI for administrative tasks such as writing emails.
“You post on the internet, fire off a bunch of posts and run your mouth, you are going to take some lumps”
— R. J. Bennett - Instagram
As uncertainty grows for people trying to make a living from their creativity, it seems mandatory for anyone with an online platform to adopt a public position on AI. And between hardline anti-AI voices and those who live and die by the technology, the discourse is certainly polarized.
But now that the anti-copyright position has defenders among the powerful in the US government, it appears easier to take shots at the messengers than to fight the flaws in the system. As with any new technology, the legal framework will need to adapt.
If we want room for the legitimate uses of AI to emerge, it is the illegitimate ones that need to be tackled. So we need to be clear about how the models work, where the value lies in the chain, and how to distribute it justly.
If AI is a commodity, then the value lies in protecting knowledge
If AI is a commodity, then the value remains in protecting the knowledge that brings it to life. This is what will allow us to sort legitimate uses of AI from illegitimate ones. For example, using an AI trained on one's own data, or on medical imaging data shared as part of an Open Science approach, could be considered legitimate.
However, if we give in on copyright, under the guise of a form of fair use granted to AI operators, we abandon the field of knowledge and humanity to the same logic deployed by those who overexploit the Earth's natural resources.
In the same way that mass fishing with long driftnets was banned for its environmental impact, AI needs to be monitored and regulated, not because of any immediate danger it poses to the survival of humanity, but more prosaically to protect creators and defend their livelihoods.
The companies that operate AI cannot simply declare fair use on their own behalf while appropriating the work of others without compensation.
The AI battle is being waged in the law and in the courts. And it is now that we need to mobilize if we want to defend creation, knowledge and humanity: not against AI, which has legitimate uses, but against the bad actors, the predatory companies that steal from artists.