EUROPEAN DIGITAL DEREGULATION
The EU Digital Omnibus is not merely simplification and regulatory coordination; it is deregulation
Last week I took part in the International Week on the Legal and Social Impact of Artificial Intelligence at the University of Oviedo. Our panel was titled ‘Regulating Artificial Intelligence’, but, taking advantage of the ongoing negotiation of the Digital Omnibus (OD), I wanted to pause and analyse the scope and meaning of this proposal. It is remarkable that, for a rule in force since August 2024, one whose most substantial part was due to take effect this August, we are already facing an amendment that goes far beyond mere adjustments. We are facing a shock that threatens to destabilise a structure built on very delicate counterweights.
‘Pragmatism or capitulation?’, Professor Fernando Llano asked last week, in a press article briefly analysing the Digital Omnibus that the European Commission presented in November 2025 with the declared intention of reducing bureaucracy and simplifying compliance, but which nevertheless entails some striking reductions in protection standards and in the obligations of deployers and developers of Artificial Intelligence (AI). The Omnibus proposes reforms not only to the European AI Act, but also to the General Data Protection Regulation, the Data Act, the Data Governance Act and cybersecurity rules. Today, however, we will focus on AI. To anticipate an answer to Professor Llano’s question, I would say: some pragmatism and a great deal of capitulation.
It is well known that the EU is currently focused on the pursuit of competitiveness as the central priority of this mandate (spurred by the Draghi and Letta reports), and that the need for better harmonisation in the interaction between the various digital rules, as well as for simplification of the means of complying with them, is obvious. But we must also bear in mind that, after several years of enormous legislative effort, Europe should not forget values such as legal certainty, the protection of citizens, and the construction of a socio-legal space that responds to its values and principles (even when it is under enormous outside pressure to devalue them).
Below I will comment on some of the most substantial points of the OD, from my point of view, without any claim to be exhaustive or in-depth, and always bearing in mind that we must wait for the final negotiation to know its real scope.
1. Pandora’s Box
To begin with, conceptually speaking, the OD is an attempt at controlled demolition. As Dr Laura Caroli says, ‘they opened Pandora’s box’. Reopening the text of the AI Act just a few months after an arduous, technically difficult and politically balanced agreement is a kind of recklessness. Indeed, it seems that the shift in the ideological balance since the 2019-2024 mandate has produced a sense of opportunity for revenge among the more extreme pro-deregulation positions. If so, it is a serious mistake, because the hard-won consensus reached then was not only more representative of a diverse social and political reality than any passing conjuncture; it was a guarantee of long-term stability and legal certainty.
As if this were not enough, the proposal also comes on a very tight schedule (that 1 August date for the entry into force of the high-risk obligations), at the risk of generating chaos if the deadline is not met. In fact, the Commission has not even met its own deadlines for issuing the guidelines and compliance guides that are essential to facilitate compliance with complex rules; it seems the outcome of the trilogues has already been taken for granted. Either my way or chaos.
2. The attack on the Digital Enlightenment
Let us turn to something that, for me, was already a poorly resolved chapter in the original text and that, if anything, deserved improvement. Digital literacy is the most important tool we have, as a society and as individuals, to adapt to and take equitable advantage of the opportunities of the digital revolution, and to protect ourselves from its most serious risks. It cannot be replaced by laws, public oversight bodies or private self-regulatory mechanisms.
In the AI Act, AI literacy was an obligation of companies and a right of the workers who would have to use those systems, as well as a deferred right of citizens: if a lawyer, doctor or toy-factory operator is going to use a high-risk tool, she has the right, and the obligation, to know what she is dealing with.
Under the proposal, AI literacy no longer has to be “guaranteed”; it becomes an evanescent responsibility of the Commission and the Member States, who “should encourage” providers and deployers to take such measures through training opportunities and information resources, instead of a direct legal mandate on the company.
3. The Trojan Horse: SMEs and Small Mid-caps
Who could object to helping small and medium-sized enterprises (SMEs) comply with AI Act obligations? That, among other things, is what the guidelines and compliance guides were designed for. But that is not what this is about: from now on, SMEs will be covered by the simplified, summary regime that the law reserved for micro-enterprises (fewer than 10 employees). And not only that: the reality is that Small Mid-caps (companies with between 250 and 749 employees and turnover of up to 150 million euros, under Commission Recommendation 2025/1099) have been let in through the back door.
Well, in the European Union (EU), if we add up SMEs and these small mid-caps, we are talking about more than 99% of the business fabric. Not only is the simplified scheme for micro-enterprises extended to SMEs; it is also intended to be the general rule rather than the exception.
We have created a “general” law which, in practice, applies strictly to less than 1% of companies: an exception regime so massive that the general rule becomes the anecdote. And we shall see whether it does not end up as an incentive for large corporations to fragment their AI departments into subsidiaries that qualify as Small Mid-caps, in order to slip into that simplified regime.
4. The End of Horizontal Application
This is the most conceptually profound point, and the one where the proposal has most visibly accommodated the pressure of certain lobbies. The plan is to move all Annex I, Section A products (lifts, toys, machinery) into the Section B system. The AI Act thus ceases to apply directly to these sectors and becomes a spectre that haunts the house but has no body.
Instead of common and transparent European standards, the European Commission (and this smells of a problem with the reservation of statute) will dictate sector-specific rules separately, leading to total fragmentation. We will have one AI regime for medical devices, a different one for cars and another for pacemakers, probably with different safety standards depending on each sector’s capacity to exert pressure.
5. Ban on “AI nudes”: a new category not initially foreseen in the OD
Both Parliament and Council include it in their positions as an additional element not covered by the initial OD proposal. It is certainly a political response to the (justified) scandal that Grok generated following the presentation of the Commission’s proposal.
The proposal is to prohibit the generation or manipulation, using generative AI, of images and videos in which the person involved is identifiable and which depict that person’s intimate parts or show them participating in sexually explicit activities, without accreditable and verifiable consent. It falls to the provider to put in place measures to prevent such generation.
The Council added a specific ban on CSAM (child sexual abuse material), except when used by law enforcement to infiltrate abuse networks. The prohibition covers the generation of this content when it is the intended purpose of the system, or even a reasonably foreseeable result if preventive measures are lacking.
Exceptions are added, such as partial nudity, artistic or satirical content, or unrealistic images.
6. Article 50: transparency of AI-generated content
Connected with this, we come to what is, from my point of view, the most difficult proposal to understand. A moratorium of several months on compliance with the mandatory labelling of generative AI outputs (including deepfakes) is proposed for systems already on the market (that is, all known ones), justified by the claim that compliance is currently impossible for technical reasons. As if a handful of months were going to change that substantially…
7. Conclusion
For my conclusion I turn again to Professor Llano, whose words I share: “The Digital Omnibus is not the end of the European AI model; it is its first stress test, and the result is disturbing.”
The panorama this Omnibus draws is that of a Europe that has taken fright at its own humanistic ambition. We have gone from leading the world with an ethical and robust law, boasting about it and defending our balances with conviction, to amending ourselves without even having had the gallantry to try to defend it.
The legal certainty and generality promised by the AI Act are traded for a fragmented model in which the Commission will hold immense discretionary power. We have exchanged freshly squeezed orange juice for a tasteless substitute.


