Question: Can you give us an overview of the governance and policy situation for AI in China?
Jeffrey Ding: At the top level, the State Council's July 2017 New Generation AI Development Plan placed a fair deal of emphasis on ethics and governance, and talked about setting technical standards to ensure that AI is safe, secure, reliable, and controllable. Those top-level documents are complemented by white papers on AI standardization published by technical institutes like the China Electronics Standardization Institute (CESI) and the China Academy of Information and Communications Technology (CAICT). Because AI interacts with so many other fields, there are also related documents in areas such as privacy. The white paper on standardization also discusses how AI makes the challenge of governing personal information much more difficult, because collating different data sets increases the amount of information that can be derived about a person.
Companies are at least professing to do more self-regulation. For example, Tencent changed its whole company motto to ‘tech for good’. Baidu was, for a time, part of the Partnership on AI, one of the main international bodies, but it left recently. New institutes are also popping up, and more and more academics in China are looking at this issue as well. One of the most prominent is the Beijing Academy of AI, which released its own set of AI principles. It is starting to become a really indispensable topic: even startups like Megvii outlined in their draft IPO prospectus in Hong Kong that they would set up an external AI ethics advisory board, and they also released their own set of AI principles. So it does seem like everybody has their own set of AI principles. The tricky thing is figuring out which of those have teeth and how they are going to be enforced and conceptualized.
Question: Are ethics principles in China implemented and enforced?
Jeffrey Ding: A lot of these documents and principles, whether in China or other countries, are aspirational, so they are not going to be codified in law. One area where I have seen some interesting cases of implementation, mostly just anecdotal evidence, is privacy and personal information protection. A professor sued a wildlife park for using facial recognition as the only way to get into the park, and that lawsuit actually made it to court proceedings. That is an example of not necessarily AI-specific law, because I think the suit is based on privacy-related law, but it is a new application of privacy law to a new technology, facial recognition. For privacy, maybe the first instances of this type of regulation will be based on existing law, just seeing if and how it extends to the new domain of AI. Privacy is a burgeoning area in China, especially with the new Personal Information Protection Specification. There has been a lot of public backlash against high-profile instances of data being stolen and sold on social chat groups, so there is a lot of pressure, at least on the consumer privacy side, for more forceful interventions.
Question: Your research looks at AI policy and strategies in a historical context, but also in the context of US-China relations, is that correct? How do you see this playing out at the moment with regards to AI?
Jeffrey Ding: I do not have firm conclusions yet, but I think the main point is that we focus too much on who is the innovation leader, whereas for a lot of these general purpose technologies like AI it is more about how efficiently and comprehensively countries are able to adopt and spread this general purpose technology across a bunch of different sectors.
Comparing the US, China and Europe, it is not necessarily about who has the leading, flashy technology companies doing innovation in this field; it might be more about who has the fast-following companies that are able to adopt these groundbreaking algorithms and adapt them to processes that add real economic value. In that sense, companies like Siemens in Europe may not have the groundbreaking fundamental AI algorithms, but they might be able to adapt them to smart manufacturing contexts faster than companies in the US or China.
I do think that, at least in smart manufacturing and other fields, China is still further behind than people think, just because the overall rate of digitalization is so much lower. By that I mean baseline indicators like sensors in manufacturing plants to collect data, and the proportion and number of private companies that are on the cloud. Without these baselines, there is no way to implement AI models to increase efficiency and productivity. The slower rate of digitalization in general is due to a lot of structural factors: if companies can benefit from a dividend of really cheap labour, it makes rational economic sense not to invest so much in high-end equipment and digitalization, which almost disincentivizes these types of investments.
For cloud technology, it was Western technology giants like Amazon, Google and Microsoft that were first to develop it, and in recent years Alibaba and Tencent have really stepped up their capabilities. So in some sense there may be a latecomer advantage, where China will be able to start implementing and adopting some of these AI solutions quickly. But AI is so broad that everything will vary by domain. There may be much lower upfront costs to implementing something like voice assistants throughout an entire company; for those, it might be much easier for fast followers to take advantage of the leading innovations.
Question: Can you comment a bit about China's involvement in any international processes around AI governance, whether that is as a state or the involvement of companies like Baidu, Alibaba or Tencent?
Jeffrey Ding: The United Nations is an important space for international governance on this topic. Chinese experts and thinkers do participate in those meetings alongside leading countries from all over the world, for instance in the Group of Governmental Experts on Lethal Autonomous Weapons Systems. That is one area where there is multilateral governance of a specific application of AI.
Most of the real work on AI governance is happening in technical standard-setting organizations like the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC) and the International Telecommunication Union (ITU). Some of that standard-setting work has been critiqued as ‘lowest common denominator’ regulation. But I do think it will build a framework for any form of higher-level regulation and governance. Basic standards on what counts as AI, what counts as a safety-critical application, and how you define high-risk applications are already being developed in these standards organizations.
Question: Is there more understanding in China of these debates and developments in other parts of the world, particularly the West, compared to what people in the US and EU know about what is going on in China?
Jeffrey Ding: I think it definitely varies by subject: some areas just are not covered, or are completely off limits, like the use of AI to disproportionately target ethnic minorities in Xinjiang. Only people with VPN access are able to find out about that, and you would not be able to write openly about it. On that topic, the only sources are from outside of China.
In other areas, like consumer privacy and even some topics about privacy from government surveillance, discussion is in some contexts not as restricted. A lot of the most knowledgeable people on those topics are writing and blogging in Chinese. For example, one of the leading writers on China's data privacy scene is Samm Sacks (New America), and her research benefits from her interviews with, and careful reading of the blogs of, the people who are drafting these personal information laws in China. It just makes sense to read what they are saying.
Question: Has there been any discussion of differentiating between uses of AI by public authorities and uses of AI in the private sector in China?
Jeffrey Ding: One entry point into that is thinking about data sharing in different fields. In some fields public authorities have all the data, and in others private companies have all the data. The most prominent example is smart surveillance, where the Ministry of Public Security's different bureaus across China hold all the data and share it selectively with different companies, including facial recognition companies. There is a good case to be made that this was essential to facial recognition companies like SenseTime and Megvii becoming some of the world's leading and most valuable AI startups. Conversely, the Chinese government's attempts to build a still-evolving and nascent social credit system require collecting data on people's online behaviour, and all that data is under the jurisdiction of private companies. There has actually been some evidence that it is hard for the government to get access, and that some companies have been resisting efforts to obtain their data because it is key to their competitive success.
Some of the most important applications are going to be in places where public and private interests overlap, like smart cities: not just surveillance, but also flexible energy grid management. That is an area where there is going to have to be a lot of public-private partnership and cooperation.
Question: In our past discussions with IT industry bodies, we have heard about proposals to bring about trusted AI certifications through public private partnership. Is there something like that which is happening in China, given that all the data is with the private sector in some instances and with the state in others?
Jeffrey Ding: The closest thing that comes to mind is the AI Industry Alliance (AIIA). It is closer to an industrial lobby for AI companies, and I think they work closely with the government. For example, they recently did a project to collect application cases of AI from all these different companies in the fight against COVID-19. There is a bit of a lobbying aspect to their work, but they also try to coordinate efforts for partnership with the government. That is probably the closest thing to certification that comes to mind. Standard-setting in China is very much a top-down, government-driven effort, but in AI application areas like biometrics and general AI standardization, the government invites all the leading technology companies and universities to come and help draft the standards. So I think there is a fair deal of public-private cooperation on those efforts as well.
Question: What do you think are the likely next developments in terms of AI governance in China?
Jeffrey Ding: Both the trade war and the COVID-19 pandemic are really important. Probably the biggest connection to the trade war is the realization that ‘compute’ is a strategic lever of ‘control’ in international AI governance. For example, Chinese companies are feeling the brunt of the US entity list, including some of the leading facial recognition giants, whose supply of chips to train and run AI algorithms is very much dependent on US companies. That rule, which put those companies on the entity list for involvement in Xinjiang, is definitely a tool of governance for the US to influence, or at least attempt to influence, Chinese policy. For China, pushing for a more independent semiconductor supply is key to removing that tool of international governance over its AI ecosystem, because semiconductors are such a critical part of all these different technologies, including AI.
Question: In light of the trade war and other tensions with great powers, for the export of AI technology, is China looking more to Belt and Road countries?
Jeffrey Ding: It is too early to say how long lasting this dispute will be and what the ramifications are. I am a little hesitant to say that the trade war has really affected where Chinese companies are expanding their markets, because before the trade war they were already trying to focus on Southeast Asian countries and other developing countries in Africa. These markets are less competitive and less saturated than the American and European markets. I think that trend will continue, but I do not think it is going to largely be because of the trade war.
Question: What about the role of board accountability or corporate oversight in the Chinese context, especially given the rapid growth of AI companies in China?
Jeffrey Ding: That is a really important question, what companies are doing for oversight. Even Google had so many issues with their external ethics board. Maybe the best example of a company that is doing this well is actually Axon (which used to be Taser) in the US. They have developed a good and, I think, open, independent external ethics board on the use of facial recognition and policing. I have not seen anything like that in the Chinese case.
One thing that I have been tracking is the growth of research institutes attached to big tech companies. Alibaba, Tencent and Bytedance each have their own research institute working on policy, ethics and legal issues, similar to Google's policy and legal teams. The question is always how much of the work is actually devoted to thinking about ethical principles for the company. For example, Tencent's research institute does more of this type of ethics and governance work, whereas Alibaba's research institute is more focussed on market research and the overall legal landscape. It is also important to track how startups are setting up their own ethics initiatives, like the Megvii example I mentioned. But it would be hard to even start thinking about how to measure the level of oversight of ethics in a company like Google; if that is the starting point, it is even more difficult in the Chinese case.
Rule of Law Programme Asia, Konrad Adenauer Stiftung and School of Law, Strathclyde: Many thanks, Mr Ding.
The interview was conducted by Dr Angela Daly (Senior Lecturer of Law, University of Strathclyde) and Ms Aishwarya Natarajan (Research Associate, Rule of Law Programme Asia, KAS). We welcome your thoughts, suggestions and feedback. Dr Daly can be contacted at firstname.lastname@example.org and Ms Natarajan can be contacted at email@example.com.
For more information on the wildlife park facial recognition case, please see: https://www.bbc.co.uk/news/world-asia-china-50324342