

KAS-Strathclyde Interview Series on AI, Global Governance and Ethics: Interview with Mr Zee Kin Yeong

Mr Yeong has concurrent appointments as Assistant Chief Executive (Data Innovation and Protection Group) of the Info-communications Media Development Authority (IMDA) and Deputy Commissioner of the Personal Data Protection Commission, Singapore.

In this interview, Mr Yeong shares his views on AI policy in Singapore.


Question: Can you tell us about your work on AI?

Zee Kin Yeong: I am the Assistant Chief Executive (Data Innovation and Protection Group) of the Info-communications Media Development Authority of Singapore (IMDA) and Deputy Commissioner of the Personal Data Protection Commission (PDPC). My scope of work includes developing forward-thinking governance and innovative uses of AI and data, promoting industry adoption of AI and data analytics, and building specific AI governance and data science capabilities in Singapore. As the Deputy Commissioner of PDPC, I oversee the administration and enforcement of Singapore’s Personal Data Protection Act 2012.

I am a member of the AI Group of Experts at the OECD (AIGO), which developed the OECD Principles on AI in 2019; these principles were endorsed by the G20 that same year. I am currently a member of the OECD Network of AI Experts and an observer at the European Commission’s High-Level Expert Group on AI.

Question: Can you give us an overview of the AI governance situation in Singapore?

Zee Kin Yeong: Singapore’s vision, as articulated in our National AI Strategy, is to be a leader in developing and deploying scalable, impactful AI solutions in key sectors of high value and relevance to our citizens and businesses by 2030. One of the enablers of this Strategy is to create and sustain a progressive and trusted AI environment so that businesses and citizens can benefit from AI. Our approach to AI governance is a practical one that facilitates innovation and builds public trust in AI technologies. We hope Singapore’s approach can eventually serve as a global reference for what works. In the last two years, we have introduced several related initiatives:

  1. An industry-led Advisory Council on the Ethical Use of AI and Data, to advise the Government on responsible development and deployment of AI and provide private sector perspectives on issues arising from AI that may need policy intervention.
  2. A Model AI Governance Framework, and its companion Implementation and Self-Assessment Guide for Organisations (ISAGO) and Compendium of Use Cases. These initiatives seek to guide organisations to deploy AI responsibly.
  3. A Research Programme on the Governance of AI and Data, which harnesses expertise to generate forward-thinking practices, policies and regulations for AI and data.

The governance situation can currently best be described as emergent, with general agreement that some form of governance is necessary, and a willingness among policymakers and industry to co-create. We are keenly aware that industry has not worked with AI long enough for best practices to take root, so governance efforts have to be voluntary and light-touch for now.

Question: Can you tell us more about your work with the OECD? How does it align with your work domestically in Singapore?

Zee Kin Yeong: I am a member of the OECD Network of AI Experts (ONE AI), which was established early this year. Briefly, ONE AI provides expert input to the OECD’s analytical work on AI and helps identify possible trends and topics. It is a multi-disciplinary, multi-stakeholder group comprising experts from member countries who provide AI-specific expertise on policy, technical and business topics related to the OECD’s analytical work.

Singapore firmly believes that there is a need to continue to engage internationally on AI governance issues. In a global economy, cross-border trade and provision of goods and services will increasingly include the use of AI. The international community should therefore continue to look at these issues actively to ensure alignment and the facilitation of trade and commerce.

Question: Do you believe the ecosystem of trust in AI should be a global policy objective? Do you see regional differences within this concept? Can you provide examples from your context?

Zee Kin Yeong: Yes, we believe that trust in AI is key to its adoption and use, and should be a global policy objective, given AI’s ability to transform businesses and enhance quality of life. There are also regional efforts to develop AI ethics principles that instil trust, e.g., in the EU and the OECD. While we increasingly observe convergence of AI ethics principles, e.g., explainability, transparency, fairness, human-centredness, there are differences in practice. Thus, our view is that the principles are likely to converge, and the OECD Principles look set to be a good articulation of these norms. However, the implementation of these principles will necessarily vary across geographies and cultures, taking into account variations in societal expectations and values. As an example, the principle of explainability has to be implemented in the context of societal expectations, taking into account consumer, commercial and public policy interests. The degree and extent of information required will therefore vary across different societies and markets.

Singapore hopes to put forth our views on these common AI ethics principles by translating them into implementable practice so that organisations can objectively and verifiably demonstrate to their stakeholders that they have established proper measures to mitigate the risks associated with AI. To achieve this, Singapore developed the aforementioned Model AI Governance Framework.

First released at the World Economic Forum Annual Meeting in January 2019, Singapore’s Model AI Governance Framework is the first in Asia to provide detailed and implementable guidance to private sector organisations on the responsible use of AI. The Model Framework embodies two sets of principles:

  1. Decisions made by or with the assistance of AI should be explainable, transparent and fair; and
  2. AI solutions should be human-centric.

It maps out the key ethical principles and practices that apply to common AI deployment processes in four areas:

  1. Internal governance structures and measures;
  2. Determining level of human involvement in AI-augmented decision-making;
  3. Operations management; and
  4. Stakeholder interaction and communication.

Organisations can also use the companion Implementation and Self-Assessment Guide for Organisations (ISAGO) to assess the alignment of their governance practices with the Model Framework.

Question: Opacity, unpredictability and partially or completely autonomous behaviour are key features of AI that can cause harm to humans. Even with a regulatory framework, these features can make it hard to verify compliance with existing laws and may hamper their effective enforcement. How do we combat this issue?

Zee Kin Yeong: As mentioned in my earlier answer, operationalising principles of fairness, explainability and transparency in practice will bring us closer to achieving the desired outcomes of these principles. Implementing measures for other principles and values such as auditability, robustness, repeatability, traceability and reproducibility can further address the issues of opacity and unpredictability, and help identify the possible causes of a wrong autonomous decision made by AI. Better understanding of how these might be achieved will enable the application of extant legal principles to autonomous systems. We believe that it is necessary to invest in this area, and have therefore established a Centre for AI and Data Governance in the Singapore Management University’s School of Law. The Centre’s research and publications will inform how existing laws can deal with emerging issues.
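
None of these principles is tied to a single mechanism, but a concrete illustration may help. Below is a minimal sketch in Python of what traceability and auditability can look like in practice: each AI-assisted decision is appended to an audit log together with the model version and a hash of its inputs, so that a wrong decision can later be traced to the exact model and data that produced it. The function, file and field names here are hypothetical, for illustration only.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(model_version, inputs, output, log_path="decision_log.jsonl"):
    """Append one AI-assisted decision to an audit log (traceability)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # Recording the model version supports repeatability and reproducibility.
        "model_version": model_version,
        # A hash of the canonicalised inputs lets an auditor later verify that
        # the record has not been altered since it was written.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode("utf-8")
        ).hexdigest(),
        "inputs": inputs,
        "output": output,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Illustrative call: record a loan-scoring decision so it can be audited later.
log_decision("credit-model-v1.3",
             {"income": 58000, "tenure_months": 24},
             {"score": 0.82, "approved": True})
```

A log of this kind gives an auditor a starting point: given a disputed outcome, the record identifies which model version produced it and what inputs it saw.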

Where there are gaps and law reform efforts are required, these can also be identified and recommendations put forward. A recent example is the publication by the Singapore Academy of Law’s Law Reform Committee of a series of law reform papers in this area, Impact of Robotics and AI on the Law. In specific areas, new or revised legislation may have to be introduced. An example is the amendment to the Road Traffic Act to allow for public testing of autonomous vehicles.

Question: Do you think that there could be a regional framework for AI governance in Asia like that being proposed in the European Union? Or do you think different Asian countries will adopt their own approaches?

Zee Kin Yeong: What comes out of the EU is largely driven by the pursuit of a single market in the EU. In this sense, it is trying to achieve a harmonised set of principles – and even regulations. Asia is a very different space, so direct comparisons cannot be drawn. However, since AI governance is very much influenced by culture and societal values, there are opportunities for convergence on AI ethics principles, while acknowledging that there could be diversity in interpretation and practice due to differences in culture, social mores, and existing regulatory environments.

As such, it is not inconceivable that like-minded countries or countries from a specific geographic area develop their own regional frameworks. One example is the ASEAN Framework on Digital Data Governance, which aims for a more uniform approach towards managing data and cross-border data flows within the region. This data governance framework could also form the foundation for discussions on AI governance within ASEAN.

Question: Many countries in the world are trying to build regulatory frameworks to govern AI, ranging from risk-based models to trusted-AI certifications. Do you believe that a global standard in this regard will bring about more trust, legal certainty and market uptake? What is the regulatory framework that your country is setting up, and why?

Zee Kin Yeong: A global standard for AI governance could bring about better clarity in AI issues and concepts, create a common language, and facilitate flow of AI-enabled products and services across borders. There are quite a few international platforms that have been established to discuss global baseline norms. Singapore has been participating in these platforms to contribute to the global discourse.

We also believe that norms need to be evidenced in practice. Singapore’s Model AI Governance Framework, Implementation and Self-Assessment Guide (ISAGO) and Compendium of Use Cases are incipient efforts in this direction:

  1. Co-developed with the World Economic Forum Centre for the Fourth Industrial Revolution (WEF C4IR), ISAGO seeks to help organisations assess the alignment of their AI governance practices with the Model Framework. It also provides an extensive list of useful industry examples and practices to help organisations implement the Model Framework.
  2. The Compendium of Use Cases contains illustrations of how local and international organisations, across different sectors and of different sizes, have implemented or aligned their AI governance practices with the Model Framework and benefitted from the use of AI in their line of business.

Using the Model Framework as a basis, we are exploring AI governance certification to recognise organisations with robust AI governance practices. We are also partnering with a professional body to develop training and certification for professionals implementing AI.

Question: Do you believe that it is straightforward to differentiate between high and low risk AI? What other ways of risk mitigation exist to regulate AI?

Zee Kin Yeong: While it is possible to differentiate between high and low risk AI, regulation should not be focused on this differentiation. AI is a relatively new technology. Regulation should be driven by the effect of high risk AI on consumers and the market, and regulations should ideally be sector specific. As a technology, AI is neutral, and the same technology can have different effects in different sectors, depending on its application. The treatment of new technologies should be consistent with how older technologies are treated, as the desired outcomes and underlying principles should remain consistent despite changes in technology. This ensures consistency in our policy approaches across technologies. As an example, AI models for recognising objects may be classified as low risk when applied in the e-commerce space to search for merchandise; but the same models applied in autonomous vehicles to recognise road signs and persons or objects in the vehicle’s path may need to be subject to regulation of autonomous vehicles. Such regulations will probably not focus on the AI models alone, but will seek to ensure that the relevant system of detection and response (e.g., avoidance and braking) meets the requisite standard.

Notwithstanding this, we do recognise that AI is different from previous generations of technology. AI is able to learn and make choices and decisions autonomously. If left to operate autonomously without defined and standardised boundaries, AI technologies may be called upon to make decisions that oppose the moral and policy standards upon which societies operate and exist. Hence, we encourage organisations to conduct appropriate risk and impact assessments to determine whether their AI solutions, used in a specific context, should be considered high- or low-risk, and how these risks could be mitigated.
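
By way of illustration only, one common form such an assessment can take is a severity-by-likelihood matrix. The Python sketch below classifies the same object-recognition technology differently depending on its context of deployment, mirroring the e-commerce and autonomous vehicle example above; the thresholds are assumptions for illustration, not values prescribed by the Model Framework.

```python
def assess_ai_risk(severity, likelihood):
    """Classify an AI use case with a toy severity-by-likelihood matrix.

    severity and likelihood are each rated from 1 (low) to 5 (high). The
    thresholds are illustrative assumptions, not values taken from the
    Model AI Governance Framework.
    """
    score = severity * likelihood
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# The same object-recognition technology lands in different tiers
# depending on where it is deployed:
print(assess_ai_risk(severity=5, likelihood=3))  # AV perception -> "high"
print(assess_ai_risk(severity=2, likelihood=2))  # merchandise search -> "low"
```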

Question: As regards the protection of fundamental rights and consumer rights, do you believe we can use the existing general and specific sectoral legislation to protect against harms arising from AI?

Zee Kin Yeong: AI’s uses, impacts, and risks differ across sectors. We are cognisant that as AI becomes more pervasive and enters different sectors, sectoral regulations and guidelines will need to deal with the unique issues that are raised, and to protect against harms arising from AI. An example of sector-specific guidelines is MAS’ Principles to Promote Fairness, Ethics, Accountability and Transparency in the Use of AI and Data Analytics in Singapore’s Financial Sector (“FEAT Principles”), which set out the principles that organisations should adhere to when using AI solutions in the financial sector.

While we do not have a horizontal law to regulate AI in Singapore, organisations have to comply with existing, relevant laws and regulations when deploying AI. For example, Singapore’s Personal Data Protection Act 2012 provides a baseline framework for the collection, use and disclosure of personal data by private sector organisations. In addition, trials for autonomous vehicles should comply with the Road Traffic (Autonomous Motor Vehicles) Rules 2017.

Question: It is a reality that many AI innovations also come from the open source community and this space is not strongly regulated. How active is the open source community in AI innovation in your country? Do you believe this area can be regulated through legislative measures? If not, then how do you suggest we regulate this area?

Zee Kin Yeong: We do see a trend of AI technologies being increasingly open-sourced, both to involve the community in enhancing the technologies and to provide greater transparency and trust in them. IMDA has released data, corpora and AI toolsets to the open-source community to spur the development and innovation of AI-related applications in Singapore:

  1. National Speech Corpus (NSC) – It contains 3,000 hours of locally accented audio recordings and corresponding text transcriptions. There are more than 40,000 unique words within the text transcriptions, including local words such as “Tanjong Pagar” (a place in Singapore), “ice kachang” (a local dessert), and “nasi lemak” (a local dish). Speech corpora are used to train AI models for speech recognition, synthesis and natural language processing (NLP); a minimal loading sketch follows this list. The NSC is currently one of the largest contributions of speech data to the open-source community.
  2. Intelligent Sensing Toolbox – It is a suite of open-source tools and technologies. These sense-making AI algorithms offer businesses plug-and-play open-source code that can be quickly adapted and layered on top of existing data analytics systems to help them make better decisions.
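
To illustrate how a corpus of this kind is typically consumed, here is a minimal Python sketch that pairs audio recordings with their transcriptions before training a speech-recognition model. The directory layout and file names are hypothetical and do not reflect the NSC’s actual distribution format.

```python
from pathlib import Path

def load_speech_corpus(root):
    """Pair each .wav recording with a same-named .txt transcription.

    The layout assumed here (root/audio/*.wav, root/text/*.txt) is a
    hypothetical example, not the NSC's actual distribution format.
    """
    root = Path(root)
    pairs = []
    for wav in sorted((root / "audio").glob("*.wav")):
        txt = root / "text" / (wav.stem + ".txt")
        if txt.exists():
            pairs.append((wav, txt.read_text(encoding="utf-8").strip()))
    return pairs

# Each (audio, transcript) pair can then feed a speech-recognition training
# pipeline, e.g. to teach a model local vocabulary such as "Tanjong Pagar".
for audio, transcript in load_speech_corpus("corpus")[:3]:
    print(audio.name, "->", transcript)
```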

Besides IMDA, there are communities such as AI Singapore that create open-source solutions inspired by real-world projects and common AI requests from industry. When it comes to open-sourced AI models, one key consideration is whether AI models are protected like other computer software. The position in Singapore is likely to be that AI models can qualify as computer software. If so, they benefit from intellectual property protection and can also be treated in like manner under Free and Open Source Software (FOSS) licences. Beyond this, it may not be necessary to introduce new regulations at this time.

Generally, we believe that premature regulation of AI technologies could impede their development and deployment. This is because imposing regulations can increase compliance costs, thus discouraging adoption for nascent industry use cases. The challenge thus lies in creating an approach to AI governance that supports innovation while protecting consumer interests, in order to create trust. This is why Singapore has taken a balanced approach to AI governance: fostering public trust in AI and addressing the ethical and governance challenges arising from the use of AI, while enabling pervasive and widespread AI adoption. We want to continue to enable a business- and innovation-friendly environment and encourage companies to pursue AI innovations locally.

Question: AI systems are seen as difficult to regulate because of their changing functionality. The other challenge relates to the difficulty of allocating responsibilities between different economic operators in the supply chain. How can we solve this issue through legislation?

Zee Kin Yeong: Indeed, determining the liability of different economic operators in the supply chain is often a complex and challenging issue. Some issues may be addressed by existing laws, for example, product liability or tort. I believe that we should spend some time understanding how existing laws can apply to AI systems before looking at new legislation.

We therefore established the aforementioned Research Programme on the Governance of AI and Data Use at the Centre for AI and Data Governance (CAIDG) in the Singapore Management University School of Law in 2018, to identify and anticipate these issues and explore possible legal solutions for them. It aims to build up a body of knowledge of the various issues concerning AI and data use, so that when the issues do occur, we have a base of materials to work from. It will support the Advisory Council on the Ethical Use of AI and Data and inform Government and industry discussion on AI challenges.

Question: How do we use existing rules on board accountability and corporate governance standards to increase oversight of private entities involved in developing and deploying AI applications? What is the role of industry bodies in this context?

Zee Kin Yeong: The aforementioned Model AI Governance Framework provides guidance on corporate governance and accountability within organisations. In particular, it suggests that organisations adapt existing internal governance structures and processes as much as possible, or put in place new ones if needed, to ensure robust and appropriate oversight over how AI technologies are brought into their operations, products and services. For example, risks associated with the use of AI can be managed within the enterprise risk management structure, while ethical considerations can be introduced as corporate values.

As AI continues to evolve and new applications are constantly being discovered, industry bodies can help to promote awareness, formulate industry standards, certify AI governance practices and build communities.

Question: Do you believe that specific rules are required to increase accountability when the state is the user of AI applications for public services? Are there such rules in Singapore?

Zee Kin Yeong: We believe that it is important to ensure proper governance of all AI applications, including those used by government. For example, the use of personal data in AI applications in the public sector is governed by the Instruction Manuals and the Public Sector (Governance) Act.[1] Public agencies can also refer to the Model AI Governance Framework to improve their internal processes.

Question: The digital divide remains a key concern in the ICT for Development agenda. Governments have a duty to ensure that no one is left behind, in line with the Sustainable Development Goals (SDGs). How can regulatory frameworks and policies enable this goal?

Zee Kin Yeong: To truly become a Smart Nation and achieve our goals under the National AI Strategy, our citizens (young and old) must have the opportunity to embrace and benefit from technology, and the assurance that they can thrive in the digital age. To ensure that no one gets left behind, it is important for us to build a digital future where there are opportunities for all, and to nurture a digitally ready community that embraces ICT and emerging technologies.

Digital disruption can cause profound anxiety for individuals with limited tech knowledge and exposure. Singapore has put in place manpower development initiatives such as the TechSkills Accelerator (TeSA), a SkillsFuture initiative driven by IMDA in partnership with strategic partners such as Workforce Singapore and SkillsFuture Singapore, and in collaboration with industry partners and hiring employers.

TeSA offers various programmes to support both ICT and non-ICT professionals in upgrading and acquiring new skills and domain knowledge that are in demand, so that they can stay competitive and meet the challenges of a fast-moving digital landscape. For non-ICT professionals, IMDA offers the Tech Immersion and Placement Programme (TIPP), which converts them into industry-ready ICT professionals through short, intensive and immersive courses delivered by industry practitioners. These courses cover areas like Data Science, Machine Learning, and Applied AI. For mid-career professionals aged 40 and above, TeSA Mid-Career Advance provides opportunities to be reskilled or upskilled while holding a tech-related job. Job roles include Data Analysts, Project Managers, Software Engineers and more.

To fully integrate into an increasingly digital society, we believe that people from all walks of life should have equal opportunities to use technologies. IMDA seeks to facilitate digital accessibility. Its efforts include the Home Access Programme, which provides subsidised internet access to low-income families, and the NEU PC Plus Programme, which supports low-income students with a free or subsidised PC and free fibre broadband. These programmes aim to empower all citizens with greater digital connectivity so that no one gets left behind. IMDA also helps seniors to learn digital skills: for example, together with partners such as the National Library Board, it organises free Digital Clinics in libraries and community spaces to help seniors with their smartphones. In addition, for students from underprivileged backgrounds who may not have the opportunity to learn coding at private coding schools, IMDA and Google bring free coding classes to them so that they can learn and develop basic coding skills.

Question: What do you think will be the future of AI governance activities in Singapore over the next few years?

Zee Kin Yeong: The steps we take today will leave an indelible imprint on our collective future. Our current initiatives lay the foundation for future developments, such as the training of professionals in responsible AI deployment, and pave the way for Singapore, and the world, to better address AI’s impact on society.

AI is likely to become more pervasive and influence how we work, live and play. As a society, we must be alive to its benefits and potential pitfalls. We will need to consider the impact of AI on individuals such as consumers and employees as organisations increasingly realise the benefits of AI and adopt it. We will need to prepare for the future of work and continue to advance a human-centric approach to AI – one that facilitates innovation and safeguards public trust – to ensure AI’s positive impact on the world for generations to come.

Rule of Law Programme Asia, Konrad Adenauer Stiftung and School of Law, Strathclyde: Many thanks, Mr Zee Kin Yeong.

The interview was conducted by Dr Angela Daly (Senior Lecturer of Law, University of Strathclyde) and Ms Aishwarya Natarajan (Research Associate, Rule of Law Programme Asia, KAS). We welcome your thoughts, suggestions and feedback. Dr Daly can be contacted at a.daly@strath.ac.uk and Ms Natarajan can be contacted at aishwarya.natarajan@kas.de.


[1] The Singaporean Government’s Personal Data Protection Laws and Policies can be found here: https://www.smartnation.gov.sg/why-Smart-Nation/secure-smart-nation/personal-data-protection
