Question: Can you give us a brief introduction to your interest and expertise in AI?
Eva Maydell: In my previous mandate, I worked on most of the important digital portfolios that enhanced innovation, contributed to the completion of the Digital Single Market and ensured technology regulation was based on our shared European values but did not stifle opportunities. All these initiatives in one way or another have touched upon data flows, data protection, algorithms and artificial intelligence. As I consider myself a lifelong learner, at the beginning of this mandate I attended tutorials at the Oxford Internet Institute on artificial intelligence, machine learning and governance. In short, the call from the researchers back then was to regulate AI in all its possible deployments. This surprised me a bit and made me think – strong regulation is usually called for either by people who know close to nothing about a topic but fear it, or by people who actually have in-depth knowledge of it. Certainly, the Oxford case was the latter.
While putting safeguards in place, we will have to focus on the opportunities that AI holds for our society and for our economy. This, in particular, is my interest: as a tech optimist, I believe that new technologies give us a chance to modernise the European economy. We have been lagging behind in the global technological revolution, and this can be clearly seen in the low levels of tech penetration in the European economy. By advancing trustworthy AI, I believe we will have a tremendous opportunity to compensate for the lag of the past decade.
Question: The EU Commission’s White Paper on AI notes that building an ecosystem of trust is a policy objective in itself, and should give citizens the confidence to take up AI applications and give companies and public organisations the legal certainty to innovate using AI. Do you agree with this objective?
Eva Maydell: Indeed, I do believe that building up trust in AI is a legitimate objective. Technology will become even more intertwined with our everyday lives, not less. Services, products and processes enhanced by AI have the potential to change the way we, humans, live. It will reinvent the way we work, learn and move, the way we eat, the way we produce, the way we build our homes, and perhaps most of all, the way we communicate with each other. This in turn will influence society as a whole.
That is why it is very important to have trustworthy AI systems. People will never see the full potential of technology that they do not trust. We need to implement the right standards and safeguards for the systems, but also to create the ethical code for the people who construct them and use them.
What is important for me is to regulate ‘smart’. I call it the Regulation for Innovation principle. In other words, our regulations should be drafted very carefully. Yes, we need to think about the possible dangers and threats, but it is equally important to see what the opportunities in AI are.
In the European Parliament, we have created a special Parliamentary committee on Artificial Intelligence in the Digital Age (AIDA). Within the new AIDA committee, I am the coordinator and spokesperson for the EPP Group. Our objective is a mix of future thinking and democratic exercise, because we have a unique opportunity to dive in deep and explore the issues by consulting with all stakeholders on short- and long-term objectives for AI. In practical terms, this means we will consult both with the companies and people who build AI systems and with the people who use them, like doctors, drivers, engineers and farmers. That is our task for the year ahead, and it is how we plan to investigate the challenges and benefits of deploying AI and its contribution to the economy.
As Coordinator of the EPP Group, my task is precisely to facilitate this investigation: what are the areas of use and implementation of AI? Once we have a clear picture of this, we can start drafting the regulation. To reiterate, regulation should not sacrifice the opportunities for innovation and the added value AI could present.
Question: Many countries in the world are trying to build regulatory frameworks to govern AI including risk-based models to trusted AI certifications. Do you believe that a global standard in this regard will bring about more trust, legal certainty and market uptake?
Eva Maydell: Without a doubt, achieving a global standard for AI would be good, but this is a very complicated task because of the different cultures and attitudes towards technology in different countries, let alone continents. Recently I spoke at an event together with the Digital Minister of Taiwan, who has implemented a radical transparency principle and data sharing – all data is open, including citizens’ data. I do not think such an approach would gather much popular support in the EU or the US.
However, I believe that when it comes to ethics, we in the EU have a very serious task ahead of us. In an ideal world, it should be the EU that creates and sets the global standard. Why? Because Europe and its civilisation have created the golden standards when it comes to democracy and humanism. However, it is precisely for that same reason that, in the real world, the EU should be really careful about how we set up our standard for governance, ethics and certification. There is a risk that our model is too restrictive. This would mean that we cannot realise the potential that AI holds for us. Secondly, if our standard is too restrictive, it will not become a global one, as the rest of the world will not accept it.
Question: What role does and should the EU take in international discussions and activities on AI governance?
Eva Maydell: It is difficult to say as we have not finished our own internal European debate. We need to go through this debate before “exporting” our vision. However, for me it is very important to base this debate about AI governance on what we know about AI implementation from our businesses and academia. We need to base our principles in AI governance on how we currently use and will use AI. We need to make the pragmatic approach meet the philosophical and ethical paradigms.
Question: As regards the protection of fundamental rights and consumer rights, do you believe we can use the existing general and specific sectorial legislation to protect against harms arising from AI?
Eva Maydell: The EU is a leader in consumer rights and the future use of AI in the sectors should not change that. Citizens will not accept anything less than the rights they already enjoy. What is more, the protection of fundamental rights and consumer rights are very important elements of the citizens’ vision for the EU.
That being said, I am happy to have seen such broad support for the EPP Group-led report on Civil Liability and AI. The European Parliament clearly stated that the right balance between legal certainty and innovation does not require major changes to the EU's legal system. The EPP Group’s goal is to fill one potential legal gap by making operators of high-risk AI systems strictly liable for the harm their applications cause when there is neither a defect nor a fault.
Question: Do you believe that specific rules are required to increase accountability when the state is the user of AI applications for public services?
Eva Maydell: There are many people who are less trusting of governments than of companies. This can be justified in many places around the world, but I am not sure that this is valid for the EU. If we have standards for AI, they should be the same for both private and public developers.
Personally, I am more concerned when it comes to the collection of data. There the state has proven that it tends to collect more data than it needs, and we are not sure how this data is secured. Cyber defence and cybersecurity have to be excellent across the EU because there is no other way of us building and operating the data spaces proposed by the Commission. These data spaces will be the lifeblood for large-scale and small-scale data science projects and will advance forecasting and efficiencies in sectors like health, finance, environment, etc. We all know that AI is a function of well-curated and high quality data. But the data cannot be abundant if the underlying infrastructure is not cyber-safe. No one will want to share data if we do not address this challenge first.
Question: The digital divide remains a key concern in the ICT for Development agenda. Governments have a duty to ensure that no one is left behind according to the SDG goals. How can regulatory frameworks enable this goal? What is the EU or Bulgaria doing about this issue?
Eva Maydell: Correct, the digital divide is a key concern in the ICT world and in the policy world as well. One of the main priorities I discussed with my team at the beginning of this mandate was precisely how to advance our efforts in closing the digital divide. I see this divide in a somewhat broader sense. Bridging the digital divide for me means making sure that every company or organisation has access to digital solutions and can participate in the digital economy. The broader benefit of this would be to empower Europeans to work and live wherever they want; citizens would not have to leave their hometowns due to lack of opportunities, but could instead hold a well-paid job without necessarily being present in an office every day.
Remote work during the lockdowns brought on by COVID-19 clearly shows that many companies can run their business from a remote area. The winning combination is the right infrastructure, processes and skills. This way their employees can still work and participate in the economy. Digitalising companies is equally empowering and advantageous.
Bridging the digital divide is also an effort of education. We should focus more efforts on creating the learning experience and environment that integrates AI in education and that transforms and improves learning. I would like to mention an example of such efforts from Bulgaria. This year, education policy-makers introduced a new National School STEM Centres Programme. It offers financial support for schools to create a model space and experience for STEM education. It includes not only the environment and technological equipment, but also teacher training, innovative teaching practices, new and innovative organisation of the learning process, and opportunities for teachers to cooperate and teach together through integrated teaching approaches. I give this example in order to highlight the need to think about AI and its role in education not only as infrastructure and technology, but also as a way to transform teaching and learning practices, to upgrade community cooperation, and to change learning and assessment processes. I see it as a whole package of innovation and future-oriented transformation.
Question: AI systems are seen as difficult to regulate because of the changing functionality of these systems. The other challenge relates to the difficulty with the allocation of responsibilities between different economic operators in the supply chain. How can we solve this issue?
Eva Maydell: Regulating fast-paced industries is not a problem per se. Medicine, pharmaceuticals, research – they are all evolving every minute. The right approach is not to regulate a technology, but to regulate the principles that this technology has to observe. In our EP jargon, we call this “technology-neutral” regulation. This means you look into possible positive applications and negative outcomes, then try to distil the very core of those principles and turn them into policy-making. As in medicine, we cannot deploy a vaccine if it has not gone through the safety requirements, because the core principle is that the product has to be safe for society. So, using the same idea with technology, we need to look into safety principles and procedures. Ethical codes, liability, insurance – all have a role to play when it comes to regulating a fast-paced technology such as AI.
Question: Opacity, unpredictability and partially or completely autonomous behaviour are key features of AI that can cause harm to humans. Despite a regulatory framework, these features can make it hard to verify compliance with existing laws and may hamper their effective enforcement. How do we combat this issue?
Eva Maydell: I tend to believe that AI is far from achieving singularity. At the beginning of any AI system is a human who has developed it. The mistakes AI may make come from the fact that humans have not set the right task, or the right conditions for this system to achieve the desired result.
In the EPP Group, we do not want an AI Agency to help enforce the regulation. My personal view is that to be able to enforce AI regulation, we must have regulatory bodies working in sync, because AI affects more than just one domain: the consumer protection regulators, national competition authorities, telecoms and communications regulators, and data protection authorities, to name a few. All this will require huge efforts from all Member States, and this is why I am happy to see that the European Commission will try to coordinate the national AI plans, as well as the national plans for investing in digital components through the Recovery and Resilience Facility. Although I am optimistic by nature, I believe we have to be realistic and acknowledge that the discussions about AI advancement happening at EU level and in some EU capitals are further along than those in other capitals and governments. Coordinating AI regulation is an ambitious task, but nonetheless an achievable one.
Question: It is a reality that many AI innovations also come from the open source community and this space is not strongly regulated. How active is the open source community in AI innovation in the EU and Bulgaria? Do you believe this area can be regulated through legislative measures? If not, then how do you suggest we regulate this area?
Eva Maydell: I am a proponent of the free flow of non-personal data because I believe that this is a driver of innovation and entrepreneurship. Furthermore, easily accessible data, open data and the open source community have definitely contributed to many of the innovations in the tech sector. Leading tech companies are already opening up their data in order to facilitate customer and supplier interconnectivity. I am not a fan of regulating spheres where there is no market failure. Nevertheless, we should not forget that the EU already has an Open Data Directive that entered into force in July 2019. This is an important milestone achieved in Europe, because it stimulates the publishing of dynamic data and the uptake of Application Programming Interfaces (APIs). Under this Directive, the Commission is currently working on defining high-value data sets, i.e. data sets of high commercial potential, which can speed up the emergence of value-added EU-wide information products. Unlocking the potential of these data sets will be a key driver for the development of AI. In Bulgaria, we have a relatively strong open data community. Several data scientists and bloggers use open data to track air quality, parking spots in the capital, etc. Even some of the big forecasts and analyses during the COVID-19 pandemic were done via publicly available data sets. Therefore, I believe that open data holds more opportunities than risks. Nevertheless, we should be vigilant.
Rule of Law Programme Asia, Konrad Adenauer Stiftung and School of Law, Strathclyde: Many thanks, Mrs Maydell.
The interview was conducted by Dr Angela Daly (Senior Lecturer of Law, University of Strathclyde) and Ms Aishwarya Natarajan (Research Associate, Rule of Law Programme Asia, KAS). We welcome your thoughts, suggestions and feedback. Dr Daly can be contacted at firstname.lastname@example.org and Ms Natarajan can be contacted at email@example.com.
 Legislation includes the Regulation on portability, the Directive on Digital Content contracts, the ePrivacy Regulation, the Regulation on removing terrorist content online, the Platform to Business Regulation, the European Cloud initiative, the Digitising Industry initiative etc.