
AI training requirements should be reformed to be more practical

Article 4 of the AI Regulation: Good for bureaucracy, bad in practice

Article 4 of the AI Regulation leads to abstract, one-off training courses that are easy to check bureaucratically but offer little real added value in practice. What is needed instead are agile, sector-specific teaching and learning programmes. The provision should therefore be amended as part of the Digital Omnibus.


Article 4 of the AI Regulation

AI applications offer great potential, but they also carry risks, including discrimination and disinformation. To minimise such risks and encourage knowledge transfer, Brussels stipulated in Article 4 of the AI Regulation (the AI Act) that providers and deployers of AI systems must take measures to ensure that their staff and other persons dealing with AI systems on their behalf have 'a sufficient level of AI literacy'. As part of the Digital Omnibus, the European Commission has proposed weakening this training requirement, and there is now resistance in the European Parliament. For at least four reasons, however, training mandated by digital legislation serves bureaucracy rather than genuine skills acquisition. The Commission's proposal should therefore be adopted.

The relevant content is missing

Many AI training courses focus on the technical, laboratory characteristics of AI, such as the different variants of machine learning. These characteristics reveal little about the risks that AI poses in practice. Figuratively speaking, the AI Regulation requires people to learn to drive without it being clear whether the vehicle is a Bobby Car or an SUV. Both have wheels and a steering wheel, but their equipment differs so much that general training on wheels and steering wheels would do little to minimise the risks either vehicle poses; likewise, a Bobby Car driver can learn little from an SUV driver. The same applies to AI: the context in which it is used, the time available for reviewing its results and the quality of its data shape an AI system to such an extent that risks and opportunities cannot be assessed on the basis of the technology alone.

Training courses that take these framework conditions into account are currently hardly feasible, as the use of AI is still predominantly experimental and exploratory. As a result, there are few routines and few standardised AI products and services, yet teaching programmes rest on the premise that there are patterns that can be taught. Mandatory training at this stage therefore creates incentives to teach material (technical peculiarities) that has little relevance in practice.

Technical and organisational measures are more important

Training only protects against negative consequences if the trained person then acts 'correctly' in their everyday decisions. In the digital world, there are more effective technical options for enforcing rules. To use a metaphor: you can appeal to drivers' morals to buy a parking ticket, but it is safer to install a barrier at the exit. Applied to AI, you can explain to employees that no trade secrets should be entered into an AI system, but it is safer to choose models and storage locations that prevent data from leaking in the first place. AI is a digital product, and many technical and organisational measures can make it more secure. Mandatory AI training, however, tempts organisations to start in the wrong place.

Professions rather than technology as the starting point

AI training courses stand alone alongside other courses mandated by digital legislation, such as GDPR data protection training. Each of these courses is a small silo, yet data protection and AI are interlinked in practice. The starting point should therefore be professions, goods and services, not individual technologies. Competence requirements need to be harmonised across vocational education, training and continuing education programmes, not isolated in individual digital laws.

Agile formats instead of certificates

As can currently be observed, legal provisions such as Article 4 generally lead to measures whose main output is a certificate that can be presented during bureaucratic checks. What is actually needed, however, are continuous, agile teaching and learning formats, including informal ones. The legal requirement instead favours courses that are easy to document and monitor bureaucratically but that do not deliver what is really needed for knowledge transfer and risk minimisation.

Real insights instead of bureaucracy

Promoting knowledge transfer and low-risk AI applications requires low-threshold, sector-specific and flexible teaching and learning formats on the one hand, and technical and organisational safeguards on the other. Article 4 of the AI Regulation, however, tends to produce one-off, isolated training courses whose content is far removed from practical application. The provision is good for bureaucracy but of little practical use, and it should therefore be amended as the Commission has proposed.


Contact
Leonie Mader
Advisor on Artificial Intelligence
Leonie.Mader@kas.de | +49 30 26996-3319



About this series

Concise, reduced to the essentials, but always highly topical: in our series "in short", our experts summarise an issue or problem in no more than two pages.