Article 4 of the AI Act Regulation
AI applications offer great potential but also pose risks, including discrimination and disinformation. To minimise such risks and foster knowledge transfer, Brussels has stipulated in Article 4 of the AI Act that providers and operators of AI systems must take measures to ensure that their staff and other persons involved with the AI system have ‘a sufficient level of AI literacy’. As part of the Digital Omnibus, the European Commission has proposed weakening this training requirement, and there is now resistance in Parliament. Yet for at least four reasons, literacy training mandated by digital legislation serves bureaucracy rather than genuine skills acquisition. The Commission's proposal should therefore be adopted.
The content is missing
Many AI literacy training courses focus on the technical, laboratory-style features of AI, such as different variants of machine learning. These features offer little insight into the risks AI poses in real-world contexts. Figuratively speaking, the AI Act asks people to learn how to drive regardless of whether the vehicle is a toy car or an SUV. All vehicles may have four wheels and a steering wheel, but differences in equipment make them so dissimilar that generic training on wheels and steering does little to minimise actual risks. Likewise, a person who drives a toy car learns little from someone driving an SUV. The same applies to AI: the context in which AI is deployed, the time available to verify outputs, and the quality of the data shape risks and opportunities far more than technical features alone.
Training courses that take these conditions into account are currently hardly feasible, as the use of AI remains predominantly experimental and exploratory. As a result, there are few established routines or standardised AI products and services. Teaching programmes, however, rely on patterns that can be taught. Mandatory training at this stage therefore creates incentives to learn something (technical peculiarities) that has limited relevance in practice.
Technical and organisational measures are more important
Training only prevents negative consequences if the trained person consistently acts ‘correctly’ in their everyday decisions. In the digital sphere, there are more effective technical means of enforcing rules. To use a metaphor: you can morally admonish drivers to buy a parking ticket – but installing a barrier at the exit is far more reliable. Applied to AI, employees can be told not to enter trade secrets into an AI system – but it is safer to choose models and storage locations that prevent data leakage in the first place. AI is a digital product, and numerous technical and organisational safeguards can enhance its security. Yet AI literacy courses tempt organisations to start with the wrong lever.
Professions rather than technology as the starting point
AI literacy courses stand alone alongside other training courses on digital laws, such as GDPR data protection training. Such courses form small silos – yet data protection and AI are intertwined in practice. It is therefore important to take professions, goods and services as the starting point, not individual technologies. Competence requirements need to be harmonised across continuing education, training and further education programmes – not isolated in digital laws.
Agile formats instead of certificates
To encourage knowledge transfer and low‑risk AI applications, low‑threshold, sector‑specific and flexible teaching and learning formats are needed, alongside technical and organisational safeguards. Article 4 of the AI Act, however, tends to result in isolated, one‑off training courses that are detached from practical realities. The provision may be good for bureaucratic governance but offers limited practical value and should therefore be amended, as proposed by the Commission.