Interviews

KAS-Strathclyde Interview Series on AI, Global Governance & Ethics: Interview with Dr Joshua Meltzer

Dr Joshua P. Meltzer is a Senior Fellow in the Global Economy and Development program at the Brookings Institution, USA. He also co-leads the Digital Economy and Trade Project.

Dr Meltzer shares his views on AI policy in the USA and the future trajectories of AI governance.

Question: Internationally, do you see there being an emerging shared approach to AI governance or divergence?

Dr Meltzer: There is a lot of goodwill amongst many governments to work towards convergence, and there has been some success, obviously, with the ability to develop common AI ethical principles, such as in the OECD. Then there is the Global Partnership on Artificial Intelligence (GPAI) effort that has kicked off, which looks to take that further. If you look at the participating countries, you get a sense of a potential group that could work towards convergence. But the devil is going to be in the detail, and it will matter what the EU does, in part because it is obviously a very big economy. It is also going to regulate in a more comprehensive way sooner than anyone else. The very nature of its regulation, and the anticipation that it will be binding with various compliance mechanisms underpinning it, will force a reckoning with different approaches. So whether the EU approach is consistent with or can sit alongside other, different approaches, or is sharply at odds with them, will become a defining point of whether we can truly talk about convergence or whether we are going to see divergence.

Question: Can you give us an insight into what is happening in the US on AI governance at the moment?

Dr Meltzer: AI policy generally has been quite a priority for the federal government. There has been a focus on this notion of US leadership in artificial intelligence, and that comprises a couple of strands. One has been emphasizing the importance of R&D, particularly federal R&D, and pushing for more resources in that space. That includes using the procurement power of the federal government to drive forms of AI research and application. AI research has been taken up in a pretty significant way in the Defense Department, in addition to elsewhere in the federal government. A huge chunk of it, of course, really sits in the private sector as well. Another element has been developing appropriate AI standards: the National Institute of Standards and Technology (NIST) in particular is taking the lead on that, and it is doing a lot of work around that topic at the moment. Preparing a workforce to be AI-ready has been another element of the federal government strategy. You see that across a number of countries: making sure that the skills are available in the workforce for developing AI and also for using AI. There has been some emphasis on international cooperation on AI. That is still, I think, at an early stage. The federal government has also recently developed AI principles, which are similar to and build off some of the other international AI principles that have come out of the OECD and elsewhere around transparency, non-discrimination and fairness, for instance.

Question: What do you think about the US approach so far? Is it going in the right direction?

Dr Meltzer: Because there is so much happening in the private sector, it raises novel opportunities, but also real challenges. I think the government has been doing a pretty good job, but it is clearly not enough. On the one hand, more resources frankly are needed around R&D and just basic research. The government continues to have a key role in making sure that it is driving a lot of the key research in areas of AI. Private companies are going to identify areas where they can capture a lot of the gains from research, and that will produce a lot of important broader social and economic benefits. But there are going to be other areas where the gains from the research may be hard to capture, or just may be so large, that you are going to get a sub-optimal level of output if you rely on the private sector alone. That applies in a range of areas, like data, including on the development front, climate change, other social inequities in the US and so forth. The government has traditionally played a central role in a lot of these technologies and should continue to do so. The broader question for the US, and this is not AI specific, is where existing regulation falls short and what additional regulation (and non-regulatory approaches) are needed. One area, for example, is privacy. There have been various attempts in the US to pass a federal privacy bill. This reflects a growing recognition, in part because of the California Consumer Privacy Act and other developments in US states, that the current approach to privacy is lacking and that a comprehensive federal approach is needed; such an approach could also help AI development by reducing some of the risks that arise from a lack of regulation. A federal US privacy bill would also become another model that affects AI regulation globally and would be an alternative to the EU and China. These differences also underscore the importance of international cooperation. There are various areas where international cooperation on AI is happening but could be expanded, including with respect to R&D, AI regulation and AI standards development, through trade agreements and in the OECD.

Question: How do you view these developments in the US vis-à-vis what has been happening in the EU? Is there synergy between the approaches, or is there conflict, tension or dialogue?

Dr Meltzer: It is probably too early to say, because the EU still has not finalized its approach to AI regulation and part of this effort is to inform that process. If you look at the EU AI White Paper, a key question is going to be how the EU identifies what is high-risk AI and how targeted that is. The role of standards will also be important, and the extent to which there is alignment. There is also the broader EU Data Strategy out there as well, which will have specific AI applications. The EU is developing the idea of data governance spaces; how it operationalizes this in practice is unclear, but it could have a significant impact on access to data, which is of course important for AI.

Question: Do you think a risk-based system of AI governance is a good approach to take?

Dr Meltzer: It seems to make sense to focus efforts on AI that presents the highest risks of harm, and that underlies the EU approach, which seems reasonable. The challenge is how to identify what is high risk. For instance, take the healthcare sector. There are going to be specific AI applications, such as making diagnostic decisions, that can have significant health implications and might be considered high risk. But there are other AI applications in the healthcare sector, such as on the patient management side or in hospital billing, that would not be high risk. This example is just to underscore that a sectoral approach would likely be too broad and that identifying what is high risk, and may require new regulation, needs to be very application-specific.

Question: What do you think about the approach in China? Is it going to have much impact on what is happening in Western countries?

Dr Meltzer: I expect that the US and China will remain in various ways at odds and in a form of competition around technology development for the foreseeable future, and that will certainly include AI. The sharp end of this dispute has been between the US and China. But the EU is clearly hardening its approach towards China on various fronts, as are Japan, Australia and others. So this is increasingly not just a US-China issue, and that is certainly part of the broader context around the importance of alignment. It is worth noting that the group of governments in GPAI does not include China.

China is clearly an important player in the AI space, by all measures. It is probably second to the US and may actually be leading in some areas of application. There are a couple of issues which make this very complex. One is: how do you manage competition with China without essentially splintering the relationship entirely, which would have very significant costs in terms of development, economics, and a further hardening of relations? What is the way to manage the risks and to identify where there are areas of cooperation? The concern in the US with respect to China is multifaceted. For one, some of China's development and use of AI is at odds with the values and ethics that are consistent with democracies. The potential for AI to be used in dystopian, authoritarian ways if it is China-led is a real concern. There are going to be those fundamental differences, which I do not think will change anytime soon and which will to some extent be structural to the relationship between China and the West. But not all AI and not all technology raises such issues, and here is where we need to look carefully at opportunities for engagement.

Another dimension to AI is its military applications. How AI gets developed, who develops it, and how it is deployed will therefore also affect national security.  That’s going to be another point of competition between the US and China.

At the same time, AI research relies on international cooperation amongst researchers. That includes lots of Chinese engagement, which is still ongoing and remains an important engine of ideas that is very much needed to push AI forward. There is a lot of new work that needs to be done on the basic science in order for AI to be fully realized, and shutting down this research collaboration as a product of the broader geopolitical tensions is going to be harmful for everyone. How you manage or sustain those forms of cooperation, given these other points of tension, is part of the complexity.

There are also tensions emerging in other areas of technology governance that matter for AI. Issues to do with standards, for instance, certainly matter.

Question: Could you tell us more about the interaction of AI governance and policy, and technical standard setting bodies and initiatives?

Dr Meltzer: There is a lot of AI standards work underway that matters for digital technologies broadly and AI specifically, and it is happening in a lot of international standards bodies, such as the IEEE and ISO/IEC. Even 5G standards and the role of 3GPP matter for AI. China has been a very strategic and strong participant in those standards bodies. Both through government representatives and private sector representatives, such as Huawei, it is taking a fairly coordinated approach to developing these standards. This points to concerns that the blurred distinction in China between government and private sector representatives affects standards setting in organizations like the ITU. Yet in other standards bodies, such as ISO/IEC (which is one country, one vote) and the IEEE (which is focused on expert representation), Chinese engagement remains important, in part as a socializing effort, and global companies, including those from China, have a strong incentive to agree on a global standard.

Question: Is divergence on standards happening now, or is there a big risk that we might see China/Chinese companies promoting or adopting certain standards, but the US and Western companies adopting others?

Dr Meltzer: I do not want to overstate the extent to which diverging standards are purely a geopolitical matter. Around previous standards such as 3G, 4G and LTE, we have seen divergent approaches pushed by different countries and different companies, ultimately converging around particular standards with winners and losers. This points to some of the benefits of competition amongst standard-setting bodies. That competition and that process is likely to play out as we move into 5G and 6G and AI standards. The mere existence of divergence coming from different bodies is not in and of itself unusual or always problematic, and is not always simply a product of US-China issues. There are US-EU splits on various technology issues as well. Japan plays a big role, South Korea too. Even with these divergences there has ultimately been a coalescing around particular standards, as it has mattered for scale and access to markets. But not only is China actively engaged in outward-looking standard setting, it also has a very closed and obviously very large domestic market, so it can legitimately develop a set of standards which it applies domestically and then exports strategically to other countries - perhaps not the US, Japan or the EU, but countries in Africa and other parts of Asia that are more linked into the Chinese economy through One Belt, One Road and the Digital Silk Road. The risk is a world splitting between Chinese-oriented regional standards and Western standards.

Question: Where do you see other BRICS countries heading on this? India is part of GPAI, but what about other BRICS countries?

Dr Meltzer: I do not know how meaningful that BRICS category is in this space any longer. I think certainly Russia has a view of the Internet which is quite aligned with the Chinese one. Russia wants a very controlled Internet along the lines of what the Communist Party has developed for China. But then Russia is not an economic power that is going to be able to drive any new standards that will be taken up beyond Russia. And maybe Russia just ends up adopting Chinese standards in various ways.

India, I think, is quite different; it sees China broadly in more strategically competitive terms. We saw that recently with the Indian government's decision to ban a whole range of Chinese apps in response to tensions at the border with China. So this spillover between geopolitical tensions and what India does on trade, digital and technology policy is already happening. India also has a very well developed services and digital sector which is already deeply plugged into the US, the EU and Japan. I would see India ultimately going more the way of the West on these types of standards than of China.

Question: How do you view some of these developments from a trade law and policy perspective? Do you think that we may see some of these tensions playing out in the WTO or in regional trade agreements?

Dr Meltzer: The broader technology-driven tensions with China are not contained in that respect, and they do come up in different approaches to trade agreements, trade negotiations and so forth. If you look at the e-commerce negotiations at the WTO, you definitely see that China has a different set of priorities from the US and even the EU. The EU is also not quite aligned with the US on various important issues there either. For instance, the US wants robust commitments in its free trade agreements on cross-border data flows and data localization. That is a broader set of policies than AI, but it certainly matters for AI. In contrast, China has more limited ambitions in its FTAs. China has a big digital sector, with large companies that want to go global: so it is balancing its domestic imperatives around the way it governs data and access to data with the needs of these tech companies as they go global, which will increasingly push in the opposite direction. So I think there is a tension there that China is managing, which means it does not ignore these issues entirely in its FTAs, but it is certainly a lot less ambitious than the US.

Question: The UK has now left the EU and is coming to the end of the Brexit transition period. Do you see the UK changing its approach in any way, diverging more from the EU and getting closer to the US?

Dr Meltzer: The UK is obviously a really important player on AI and is one of the key countries developing AI. For the UK, it is not clear yet how its approach to AI will map onto whether it needs to choose EU alignment or US alignment. There are certainly going to be some horizontal regulatory issues which the UK still needs to work out with respect to the EU that will matter for AI, such as privacy and adequacy under the GDPR. But if you look at the UK's AI strategy and its focus, for instance, on R&D and education, that is something which it can not only pursue independently in many respects, but could also do in a way which aligns with and builds off what will be happening in the EU. One area to look at is what happens in the EU around Horizon Europe and the funding streams that will be available there for AI research. As I understand it, there is still an opportunity for the UK to be part of that. So around central AI R&D, there should be a lot of UK-EU cross-programmatic collaboration.

Question: Do you see discussions in the UN sphere on AI for Good and AI and Sustainable Development Goal (SDG) objectives as being something that is in tension with international trade?

Dr Meltzer: There is a lot in the SDGs which fairly explicitly identifies digital technologies either as a goal in themselves or as key enablers. So AI potentially has an important role to play. In AI for Good, AI for sustainable development is already a part of the framework. For developing countries, there are a number of barriers and hurdles which they need to address in order to really operationalize AI effectively. These countries will need to create enabling environments for AI. You need the data to train the AI algorithms so that they are specific to the needs and idiosyncrasies of each country. An AI application for healthcare will need different types of data compared to AI for improving agricultural productivity. In all cases, access to data will certainly matter. In many respects, if you think about trade as including cross-border data flows and issues around standards and compliance and so forth, then an appropriately designed trade policy really should facilitate and complement the use and development of AI in developing countries broadly, and certainly for achieving a number of these SDGs.

Question: Do you think trade policy does this now, though?

Dr Meltzer: Well, it does not, to the extent that there are very limited rules at the moment in trade agreements on AI specifically. For many developing countries, there are certainly a range of WTO commitments that help with respect to AI, but they are limited. So there are not obvious barriers built into trade agreements, but I think that not enough is being done to specifically enable AI development to happen.

Question: Do you think that there is a need to update existing laws to better address and facilitate AI implementation?

Dr Meltzer: What we need is trustworthy AI, which means broad societal acceptance that the way AI is developed and used is consistent with the underlying ethical principles and norms that exist in that country. Now, as I said, we are beginning to coalesce around these broader notions of AI ethics. But how they now get implemented within countries, I think, is really where it is going to matter. Countries already have existing laws that are applicable to AI, whether it is consumer protection, privacy or liability, for instance. But there will be other areas where new laws and regulations may be required. One of the issues each country is going to have to deal with is how to sequence that approach. On the one hand, AI is still very new and you want to create sufficient space for experimentation, for AI to be developed. This means that you do not want to get too far ahead of it from a regulatory perspective, which risks picking winners and shutting down what might otherwise be fruitful pathways. You also risk designing approaches which ultimately are very suboptimal given where AI goes. To give you one example, a lot of the focus on AI regulation in Europe is based on the presumption that collecting huge amounts of data will continue to be the key driver of AI. It is not clear that this will necessarily be the case, given renewed interest in AI algorithms developed using forms of symbolic logic, which are less data intensive. There is always the tension that regulators will miss what is actually happening at the cutting edge of AI. On the other hand, having no regulation risks undermining trust in AI.

Governments will have different ways of thinking about where that balance is struck. The US has traditionally had a very iterative approach that is premised on a lot of engagement with the actual stakeholders that are doing the research, the investing and the design, and a lot more self-regulation than Europe is comfortable with. Europe has been much more willing to be comprehensive in its regulation upfront. There are different cultural and legal reasons for those approaches, and I think they will remain.

There is no obvious right answer, and I think that there will just be differences. More specifically, I think privacy is becoming a touchstone for a whole range of digital issues, including with respect to AI. It is, in part, the most concrete one because of the GDPR and the recent Schrems decision, and what that means not only for cross-border data flows, but also for the extent to which you can take data collected for one purpose and use it in another area, which is a feature of AI development. This has been happening a lot in the US, where, for example, data that Google collected from Gmail and search was used to train its AI speech recognition algorithm.

Question: What do you think are likely near-future developments on AI governance?

Dr Meltzer: There is ongoing work on AI standards. This is important and will continue. I think we need to do more on identifying areas for cooperative projects on AI research and at the application stage, directed towards broader common-good issues like climate change and achieving various Sustainable Development Goals. Then I think we need to level-set on where China is headed on AI and the extent to which we can work with China and find a cooperative path forward, or whether China will go off in a direction which is so at odds with core democratic principles that we need a different approach. That will require renewed forms of cooperation and focus. Whether we can do that through GPAI or need to think about doing it somewhere else, I think, is another question.
