Hermann Diebel-Fischer: Ethics and Quantification: Disentangling a Relationship

Highlights summarized by Daniela Zetti and Christian Herzog

Highlights of the VEIL

The Ethical Innovation Hub's fifth Virtual Ethical Innovation Lecture featured Hermann Diebel-Fischer, a theologian currently working at Dresden University of Technology, where he is the AI ethicist in the Center for Scalable Data Analytics and Artificial Intelligence Dresden/Leipzig. He studied theology and general economics at Dresden University of Technology, where he also did his PhD in systematic theology. Hermann Diebel-Fischer is also a Young Academy Fellow at the Academy of Humanities and Sciences in Hamburg.

In his talk, Hermann Diebel-Fischer explored several facets of quantification in relation to ethics. In doing so, Diebel-Fischer gave an overview of how different ways of doing ethics (namely descriptive and normative) position themselves with respect to practices and discourses of quantification and translation. After briefly motivating why quantification needs to be discussed in relation to ethics, he delved into the question of when ethics enters decision-making: before, during and after an action, i.e., ex ante, in an ongoing situation and ex post. He further explored the hypothesis that the will to put ethics into machines is, in fact, the will to humanize machines. Hermann Diebel-Fischer explicitly does not want to offer a solution, but an optimistic outlook: by asking anew how ethics works, we may learn something new about ourselves as human beings.

Normative, Descriptive Ethics and Algorithms

Diebel-Fischer introduced ethics as a cultural tool for organizing the complex process of living together. Normative ethics deals with theories and concepts of "what is good and what is right", or bad and wrong. The theories emanating from the field work towards methods for arriving at justified and validated decisions. Descriptive ethics, by contrast, is not about coming to moral judgements. Instead, it entails a hermeneutical approach geared towards understanding and interpreting ethics. In other words: "we want to understand what is going on." Descriptive ethics thus contributes to deepening our knowledge of how humans arrive at moral problem formulations.

While normative ethics is not interested in how societies come to work out moral problems, descriptive ethics points to the problem that situations are rare "in which we only need to do the maths and it's done". If methods for arriving at justified and validated decisions could be taken literally, Diebel-Fischer adds, algorithm-like ethical evaluations would long since have been implemented in computer science. Algorithms for arriving at ethical judgments do meanwhile exist, but they often assume the moral problem or dilemma to be given at the outset. However, the hardest part is agreeing on what the problem is, which entails an ethical judgment in itself.


Utilitarian ethics as developed by Jeremy Bentham and John Stuart Mill has been refined into several modern forms, but, Diebel-Fischer claims, it has never really been adopted in a widespread manner, at least in continental Europe. For machine ethics, however, it is attractive to have a calculus of utility. This is why utilitarianism is now experiencing a renaissance within machine ethics.

However, making ethical problems amenable to algorithmic interpretation on machines requires a translation of facts, ideas, values and other items into data, and hence requires quantification. Understanding that data is the result of a translation makes this translation a worthy object of further examination.

In one of his main works, Jeremy Bentham elaborated on utilitarianism's "Felicific Calculus", where he implicitly assumed that an ethical problem has an outcome that can be regarded as a solution, and that this outcome can be reached through rational choice. Other ethical theories, such as intuitionism, however, doubt that ethical assessments are always based on reason. This makes the translation process non-trivial.
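To make the shape of such a calculus concrete, the following is a minimal, purely illustrative Python sketch. It assumes exactly what Bentham implicitly assumed: that the problem, the affected parties and the numeric scales are already given. It also reduces Bentham's dimensions to three (intensity, duration, certainty); all names and numbers are invented for illustration and are not drawn from the lecture.

```python
from dataclasses import dataclass

# Hypothetical sketch of a "Felicific Calculus": each affected party's
# pleasure (positive) or pain (negative) is scored along a few of
# Bentham's dimensions, and the option with the highest net sum wins.
# Bentham gives no canonical numeric scales; these are invented.

@dataclass
class Consequence:
    intensity: float   # strength of the pleasure (+) or pain (-)
    duration: float    # how long it lasts
    certainty: float   # probability it occurs, in [0, 1]

def hedonic_value(c: Consequence) -> float:
    # A minimal reading: expected pleasure = intensity * duration * certainty.
    return c.intensity * c.duration * c.certainty

def evaluate(option: list[Consequence]) -> float:
    # "Extent": sum over everyone affected by the action.
    return sum(hedonic_value(c) for c in option)

def choose(options: dict[str, list[Consequence]]) -> str:
    # Rational choice as assumed by the calculus: pick the maximum.
    return max(options, key=lambda name: evaluate(options[name]))
```

The sketch makes Diebel-Fischer's point visible: the arithmetic is trivial, while everything ethically hard happens in the translation that produces these numbers in the first place.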

Ethics Ex Ante, in the Process, and Ex Post

Situations of moral significance (cf. Johannes Fischer) can only really be pondered in an algorithmic, methodical way in anticipation or in hindsight. If we find ourselves forced to make an ethical judgment in the midst of a time-critical situation, however, humans appear to follow other principles. Even though some philosophers, like Joshua Greene, assume that all human decision-making is utilitarian, this remains debatable.

If we construe ethics as an endeavour that requires us to weigh different quantified options, we must operationalize concepts like justice, autonomy, etc., into quantifiable values, because machines lack a hermeneutic system that would enable them to interpret. Instead, the abstractions currently need to be predefined by humans.
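What such predefinition might look like can be sketched as follows. This is again a hypothetical Python illustration, not anything proposed in the lecture: the value dimensions and their weights are fixed by humans in advance, and the machine merely aggregates.

```python
# Hypothetical illustration: a machine can only "weigh" options along
# value dimensions that humans have operationalized beforehand.
# The dimension names and weights are invented; choosing them is
# itself the ethically loaded step, which the machine cannot perform.

VALUE_WEIGHTS = {"justice": 0.5, "autonomy": 0.3, "wellbeing": 0.2}

def weigh(scores: dict[str, float]) -> float:
    # Weighted sum over the human-predefined value dimensions.
    return sum(w * scores.get(v, 0.0) for v, w in VALUE_WEIGHTS.items())

def pick(options: dict[str, dict[str, float]]) -> str:
    # Select the option with the highest aggregate score.
    return max(options, key=lambda name: weigh(options[name]))
```

Nothing in this code interprets anything; the hermeneutic work of deciding what "justice" means and how much it counts happens entirely outside the machine.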

The Anthropological Dimension

The capacity for ethical reflection and for reconsidering our conclusions has, until now, only been found in humans, as only we can be held responsible for our actions. The will to render machines into moral machines, i.e. machines capable of moral deliberation, can be interpreted as the will to humanize machines. In turn, this trait, which is considered uniquely human, will consequently be made computable – or mechanized.

Günther Anders' notion of "Promethean Shame" denotes humans' deep awareness of the allegedly superior perfection of machines and, in turn, of their own fallibility. This implies a perception that human ethical decision-making should actually resemble the perfect abstractions employed by machines. However, it may be precisely human imperfection that proves valuable in ethical decision-making processes.

"Not a solution, but an optimistic outlook"

When translating from the qualitative to the quantitative, something is always lost. This may be significant when engaging in the endeavour of constructing moral machines.

There is a definite upside to this endeavour: when asking anew how ethics actually works, we may learn something new about ourselves. We should also ask whether we can have counterparts in society that do ethics differently than humans do.

Q&A Session

Questions during the Q&A session first revolved around the alleged infallibility and perfection of machines – ideas denying this were discussed, but it was highlighted that machines can at least work in uniform ways.

Finally, Hermann Diebel-Fischer shared his fascination with finding the reasons why ethics cannot be calculated.

Further Literature

The notion of the "Felicific Calculus" appears in Jeremy Bentham (1789): An Introduction to the Principles of Morals and Legislation. London.

Johannes Fischer (2002): Theologische Ethik. Stuttgart: Kohlhammer.