
Virtual Ethical Innovation Lecture

VEIL - Ansgar Koene

Principles, Standards and Regulation for Trustworthy AI

Summarized by Christian Herzog 

 

Highlights of the VEIL

The Ethical Innovation Hub's third Virtual Ethical Innovation Lecture featured Ansgar Koene, Global AI Ethics and Regulatory Leader at EY, senior research fellow at the University of Nottingham and chair of the IEEE P7003 Standard on Algorithmic Bias Considerations. Judging from his affiliations, Ansgar Koene is uniquely qualified to shed light on the tensions that arise (or do not, for that matter) when business self-regulation, government regulation and academic perspectives on guiding ethical principles clash.

Accordingly, the presentation covered a lot of ground, starting with a brief overview of principled approaches to AI and traversing issues such as the different facets of (self-)regulation, standardization and societal discourse, before digging a little deeper into the IEEE's work on standardizing considerations to prevent unjustified, unintended and unacceptable biases in algorithms.

Gartner Hype Cycle for AI

In his presentation, Ansgar Koene started with a brief reference to the well-known Gartner Hype Cycle, pointing out that AI technology is trailed by digital ethics, which in turn is trailed, rather significantly, by appropriate governance. Despite this, he highlighted that there is a considerable degree of consensus on which ethical principles should guide AI, as illustrated by a publication of the Berkman Klein Center for Internet & Society at Harvard University (https://cyber.harvard.edu/publication/2020/principled-ai). A reference point for the convergence of principles for ethical AI can supposedly be found in the OECD's AI Principles (https://oecd.ai/ai-principles/).

Ansgar Koene proposed that there seems to be a gap between formulating meaningful principles and developing governance tools, many of which would have to answer questions qualitatively ("Who should be accountable?") or even quantitatively ("How do we measure well-being?").

According to Ansgar Koene, the discussion typically revolves around the general premise that the many methods orbiting the term and contributing to the field of artificial intelligence offer huge potential benefits, but also carry high risks.

Pioneers in Regulating AI

According to Ansgar Koene, many AI developments and products have already been rolled out, with some causing public outrage in the face of severely adverse incidents. Many of these incidents were uncovered by investigative journalists, who deserve credit for informing both the general public and policy makers about problems with some AI developments.

Such outrage has spawned initiatives to regulate AI quickly and has also caused industry to backtrack on developments for fear of triggering further ad-hoc regulation. More principled approaches to regulation, such as the one initiated by the EU, which resulted in the High-Level Expert Group on Artificial Intelligence's Ethics Guidelines for Trustworthy AI, are allegedly slower, but may yield better and more thoughtful regulatory measures.

In fact, according to Ansgar Koene, it is quite clear that the EU aspires to be a leader in creating a governance framework for ethical and trustworthy AI.

Governance Frameworks

Ansgar Koene then went on to detail potential governance frameworks following Saurwein et al. (2015). Market solutions build on the reputational advantage of being a trusted AI provider: demand-side players withdraw from untrusted platforms, while a supply-side push for trustworthy AI is supposedly rewarded by market success. Problems with this approach arise from network effects, because AI benefits from a concentration of computing and cloud storage resources that hinders competition.

Self-regulation solutions can work at both the company and the branch/sector level. Their aim is to establish a notion of what a "good corporate citizen" is and to have companies adopt this notion in the form of a standard. While at the company level employees can have a big impact in shifting the directives of top-level management (cf. Google...), sector-level solutions seek to develop standards for improved interoperability and trustworthy business practices. While these proceedings are typically open to all, non-governmental groups find it difficult to get involved due to the time and funds required to participate effectively.

State-supported regulatory bodies are currently pooling their standardization efforts at the ISO level. The IEEE is also working at an international level; the qualitative difference is that the IEEE focuses on ethical issues, while ISO focuses on technical standardization.

State intervention can take the form of co-regulation and legislative action. In co-regulation, states and private regulators cooperate in joint institutions. An example of legislative action is France's Digital Bill, which requires every algorithm used in the public sector to be explainable.

The IEEE P7003 Standard for Algorithmic Bias Considerations

The IEEE is currently developing the P70XX series of standards to translate the principled approach of the IEEE framework "Ethically Aligned Design" into workable guidelines for development processes and business conduct.

The standard considers algorithmic systems as inherently socio-technical with significant potential impacts on society and, importantly, on its most vulnerable people. Causes for algorithmic bias are identified as stemming from

  • an insufficient understanding of the context of use
  • a failure to rigorously map decision criteria, and
  • a failure to have explicit justifications for the chosen criteria.

The standard thus attempts to make sure the underlying considerations are sufficiently addressed and documented for proper accountability.

Referring to the well-known ProPublica study of COMPAS, the allegedly racially biased recidivism risk-scoring system used in US parole and sentencing decisions (Angwin et al., 2016), Ansgar Koene pointed out the fundamental rationale underlying these kinds of AI systems: by reducing complex human individuals to simplistic, often binary, stereotypes, AI systems perpetuate and perhaps amplify the imperfections of societies instead of offering solutions. This is because the underlying ontologies are too often not questioned from an ethical perspective; rather, economic rationalization appears to be the leading motive.
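The core of the ProPublica analysis was a comparison of error rates across demographic groups, in particular the rate at which people who did not reoffend were nonetheless flagged as high risk. A minimal sketch of that kind of disparity check is below; the group labels and records are entirely synthetic and illustrative, not data from the study.

```python
# Illustrative sketch (synthetic data): comparing false positive rates
# across groups, the kind of disparity check ProPublica applied to COMPAS.
# All group names and records below are made up for demonstration.

def false_positive_rate(predictions, outcomes):
    """Share of actual negatives (did not reoffend) flagged as high risk."""
    flagged_negatives = [p for p, o in zip(predictions, outcomes) if o == 0]
    return sum(flagged_negatives) / len(flagged_negatives)

# Each record: (predicted high risk: 1/0, actually reoffended: 1/0)
group_a = [(1, 0), (1, 1), (0, 0), (1, 0), (0, 1), (0, 0)]
group_b = [(0, 0), (1, 1), (0, 0), (0, 0), (1, 1), (0, 1)]

for name, records in [("group A", group_a), ("group B", group_b)]:
    preds, outs = zip(*records)
    print(f"{name}: FPR = {false_positive_rate(preds, outs):.2f}")
# → group A: FPR = 0.50
# → group B: FPR = 0.00
```

A gap like this (equal treatment of non-reoffenders differing by group) is exactly the kind of unjustified differential outcome that documentation requirements such as those in IEEE P7003 are meant to surface and force developers to justify.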

Certification

As a final example, Ansgar Koene highlighted the IEEE's work on developing a set of certifications intended to allow the labelling of trustworthy AI products and company procedures with respect to transparency, accountability and algorithmic bias. The program, the "Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS)", is industry-driven and voluntary.

Q&A Session

Questions during the Q&A session revolved around access to the processes within the different governance frameworks and the relationship between standards and legislation. It was further stressed that standardization processes are, only in theory, open to wide-ranging and inclusive participation by all stakeholders, interest and activist groups. When asked whether he considered AI technology to be explainable in principle, Ansgar Koene replied that it is the explainability of processes that matters. The question "Who is it you are giving the explanation to?" is also helpful. In addition to technical explanations, it is important to explain why a system was built the way it was, for example with regard to economic aspects.

Summary

Regulation of AI is a highly topical issue, with civil rights activists, academia and state regulators struggling to keep up both with technology developments and with industry's self-regulation and standardization efforts. Nonetheless, many leading figures are calling for reasonable measures to be urgently included in regulation processes, while also calling for a more inclusive design of the governance processes themselves.

Further Literature

Fjeld J, Achten N, Hilligoss H, Nagy A, Srikumar M (2020) Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI. Berkman Klein Center Research Publication No. 2020-1. Available at SSRN: https://ssrn.com/abstract=3518482 or http://dx.doi.org/10.2139/ssrn.3518482

Saurwein F, Just N, Latzer M (2015) Governance of Algorithms: Options and Limitations. info, Vol. 17, No. 6, pp. 35-49. Available at SSRN: https://ssrn.com/abstract=2710400

Angwin J, Larson J, Kirchner L, Mattu S (2016) Machine Bias - There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica

Independent High-Level Expert Group on Artificial Intelligence Set Up By the European Commission (2019) Ethics Guidelines for Trustworthy AI