Wulf Loh: From Principles to Practice – An interdisciplinary framework to operationalise AI ethics

Highlights summarized by Christian Herzog

Highlights of the VEIL

The inaugural Virtual Ethical Innovation Lecture featured Dr. Wulf Loh, who studied philosophy, political science, and international and European law in Heidelberg, Bologna and Berlin. His PhD dissertation “Legitimität und Selbstbestimmung” (Legitimacy and Self-Determination), in which he undertook a normative reconstruction of international law, was published in early 2019 by Nomos.

In 2012 he joined the Chair of Philosophy of Science and Technology in Stuttgart, where he worked in the project “Be-Greifen”, part of the BMBF call on “Experiential Learning”. In 2011 he was a guest researcher at Princeton with Prof. Charles Beitz. Since September 2018, Wulf Loh has been a research associate at the International Center for Ethics in the Sciences and Humanities.

Wulf Loh spoke about an approach to bringing ethical principles into AI practice. He is part of the Artificial Intelligence Ethics Impact (AIEI) Group, which published the report “From Principles to Practice - An interdisciplinary framework to operationalise AI ethics”. The AIEI Group is jointly led by the VDE and the Bertelsmann Stiftung and brings together a wide range of leading experts and scientists in the field.

AI Ethics Needs to Move Beyond Guidelines and Become Operational

In his presentation, Wulf Loh first highlighted that there is a host of initiatives proposing guidelines for AI ethics. A colleague, Thilo Hagendorff, conducted a meta-analysis, published in 2020, that found about 90 documents proclaiming ethics guidelines for AI; this number may well have risen to 200 by the end of 2020. Even though such activity is highly laudable, the ubiquity of AI solutions requires a more stringent, transparent and verifiable way for a development team to communicate its efforts to incorporate ethical considerations into its AI development processes and solutions.

This is where the AIEI Group picked up the issue.

The Values, Criteria, Indicators, Observables Model

The AIEI Group’s approach rests on the Values, Criteria, Indicators, Observables (VCIO) Model developed by Christoph Hubig. In short, this model presents a hierarchy of abstraction layers to systematize assessments of how well an AI solution fulfils particular normative values. The assessment of a particular value is split into various criteria, which in turn are evaluated on the basis of qualifiable, or even quantifiable, indicators. Practical value conflicts manifest themselves on the indicator level. Measurements of the indicators are then given in terms of observables.
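As an illustration only, the layered structure of the VCIO model can be sketched as a nested data structure in which each value breaks down into criteria, each criterion into indicators, and each indicator into measurable observables. All names and observables below are invented for this sketch; the AIEI report elaborates its own.

```python
# Hypothetical sketch of a VCIO hierarchy for the value "transparency".
# Every criterion, indicator and observable here is illustrative, not
# taken from the AIEI Group's report.
vcio_transparency = {
    "value": "transparency",
    "criteria": [
        {
            "criterion": "explainability of decisions",
            "indicators": [
                {
                    "indicator": "documentation of the model's decision logic",
                    # Observables are the concrete, measurable facts
                    # against which the indicator is rated.
                    "observables": [
                        "a public model description exists",
                        "feature importances are reported per decision",
                    ],
                },
            ],
        },
        {
            "criterion": "disclosure of AI use",
            "indicators": [
                {
                    "indicator": "users are informed they interact with an AI",
                    "observables": ["an on-screen notice is shown before use"],
                },
            ],
        },
    ],
}

# Walking the hierarchy from the abstract value down to the observables:
for criterion in vcio_transparency["criteria"]:
    for indicator in criterion["indicators"]:
        for observable in indicator["observables"]:
            print(vcio_transparency["value"], "->", criterion["criterion"],
                  "->", indicator["indicator"], "->", observable)
```

The point of the hierarchy is that assessors never have to judge an abstract value directly; they only ever check concrete observables, whose results roll up through indicators and criteria to the value.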

The AIEI Group elaborated the VCIO model exemplarily for the values of accountability, transparency and justice, giving explicit examples of the respective criteria, indicators and observables.

Which Values?

Six values were identified by the AIEI Group: Transparency, Accountability, Privacy, Justice, Reliability and Environmental Sustainability. While most of them are well known from highly regarded AI ethics guidelines, such as those of the EU High-Level Expert Group, the AIEI Group took care to emphasize environmental sustainability as a special value that is often underrepresented in discussions. Similarly, the AIEI Group was keen to highlight the additional dimensions encompassed by the value of justice as opposed to mere fairness, such as issues of participation, redress and social justice in general.

The Ethics Label

The representation of an ethics label must strike a middle ground between honoring the complexity of ethical considerations and maintaining the simplicity needed to signal ethical alignment almost at first glance. The AIEI Group settled on a label reminiscent of the energy efficiency labels known from buying home appliances. Accordingly, the ethics label features a scale from A to G, with A being the best.

Aggregating Observables

The evaluations of the observables are then aggregated, for which the AIEI Group considered two modes. Ratings could either be averaged, opening up the possibility of trading off certain ethical values against each other. Alternatively, the final rating could be based on the minimum rating of any observable, incentivizing developers to at least bring everything up to the same level. Hybrid modes are conceivable as well. The AIEI Group did not settle on one final mode, as they consider this the task of regulators and policy-makers.
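A minimal sketch of the two aggregation modes, plus one possible hybrid, assuming all ratings are already expressed on the A-G scale of the label. The numeric mapping and the hybrid weighting are invented here, precisely because the AIEI Group left the choice of mode to regulators and policy-makers.

```python
# Illustrative aggregation of per-observable ratings on the A-G scale,
# where "A" is best. The score mapping and hybrid weight are assumptions
# for this sketch, not the report's prescription.
GRADES = "ABCDEFG"

def grade_to_score(grade: str) -> int:
    """Map 'A' -> 6 (best) down to 'G' -> 0 (worst)."""
    return len(GRADES) - 1 - GRADES.index(grade)

def score_to_grade(score: float) -> str:
    """Round a numeric score back to the nearest letter grade."""
    return GRADES[len(GRADES) - 1 - round(score)]

def aggregate_average(ratings):
    """Averaging mode: strong values can offset weak ones."""
    scores = [grade_to_score(r) for r in ratings]
    return score_to_grade(sum(scores) / len(scores))

def aggregate_minimum(ratings):
    """Minimum mode: the label is only as good as the weakest rating."""
    return score_to_grade(min(grade_to_score(r) for r in ratings))

def aggregate_hybrid(ratings, weight=0.5):
    """Hybrid mode (invented): blend the average and the minimum score."""
    scores = [grade_to_score(r) for r in ratings]
    blended = weight * (sum(scores) / len(scores)) + (1 - weight) * min(scores)
    return score_to_grade(blended)

ratings = ["A", "B", "F"]          # e.g. three observables, one rated poorly
print(aggregate_average(ratings))  # -> "C": averaging hides the weak "F"
print(aggregate_minimum(ratings))  # -> "F": the minimum mode surfaces it
```

The example makes the incentive structure concrete: under averaging, a single poor observable can be traded off against strong ones, while under the minimum mode it dominates the label.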

Ways to Use the Label

The AIEI Group suggests a range of ways in which the label could be of use: advising consumers, allowing developers to self-commit and advertise this commitment, serving as a seal of approval handed out by NGOs, giving unions and management a tool for bargaining over working conditions, or facilitating third-party certification – all of these options remain on the table.

Assessing AI Application Risk

To make the label useful for regulatory purposes, the AIEI Group complemented the VCIO-model-based ethics label with a simple risk assessment matrix. The matrix, which spans the intensity of potential harm against the degree of dependence on the AI solution, is intended to distinguish five different levels of risk.
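Assuming each of the two matrix axes is rated on a simple ordinal scale, such a lookup could be sketched as follows. The 0-4 scales and the cut-offs bucketing the matrix cells into five risk classes are hypothetical; the report defines its own classes.

```python
# Illustrative risk-matrix lookup. Both axes are assumed to be rated
# 0 (low) to 4 (high): `harm` is the intensity of potential harm,
# `dependence` the degree of dependence on the AI solution. The mapping
# of matrix cells to the five risk classes is invented for this sketch.
def risk_class(harm: int, dependence: int) -> int:
    """Return a risk class from 0 (lowest) to 4 (highest)."""
    if not (0 <= harm <= 4 and 0 <= dependence <= 4):
        raise ValueError("both axes must be rated between 0 and 4")
    combined = harm + dependence       # ranges over 0 .. 8
    # Bucket the combined score into five classes (hypothetical cut-offs).
    return min(combined // 2, 4)

print(risk_class(0, 0))  # 0: negligible harm, no dependence
print(risk_class(4, 4))  # 4: severe harm, total dependence
```

Such a matrix lets a regulator scale oversight requirements with risk class, instead of imposing a single regime on every AI application.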

Q&A Session

While Wulf Loh made clear throughout the presentation that the AIEI Group’s approach is open for debate and that good ideas are expressly invited, the Q&A session showed that experience with applications to real-world problems will reveal crucial aspects by which the approach may be fine-tuned. For instance, it was suggested that the wide range of possible applications might require adjustments and specific tailoring.

In addition, it was put up for discussion whether a reformulation of the observables’ descriptions might allow the ethics label to be skewed in favor of particular interests. There was also a lively discussion about the meaning and depth of certain ethical values and the fact that they are very much contested. Furthermore, inquiries were made as to how the actual meaning of certain ratings is settled and conveyed.


The AIEI Group’s approach of systematically assessing values in terms of criteria, indicators and observables is a proposal to bring AI ethics into widespread visibility and consideration – potentially even into the mainstream. Future work of the group may address many open questions and is open to suggestions. The lively discussions on the labelling approach also highlighted that ethics requires discourse. The AIEI Group’s work certainly facilitates this and sparks debate.

Further Literature

Hallensleben, S., Hustedt, C., Fetic, L., Fleischer, T., Grünke, P., Hagendorff, T., Hauer, M., Hauschke, A., Heesen, J., Herrmann, M., Hillerbrand, R., Hubig, C., Kaminski, A., Krafft, T., Loh, W., Otto, P., & Puntschuh, M. (2020). From Principles to Practice - An interdisciplinary framework to operationalise AI ethics. Available here.

Hagendorff, T. (2020). The Ethics of AI Ethics: An Evaluation of Guidelines. Minds and Machines. https://doi.org/10.1007/s11023-020-09517-8

Hubig, C. (2016). Indikatorenpolitik. CSSA Discussion Paper, 2016(2). Available here.