

Virtual Ethical Innovation Lecture

VEIL - Andrea Aler Tubella

Contestable Black Boxes: Implementing Transparent Moral Bounds for AI Behaviour


Summarized by Daniela Zetti


Highlights of the VEIL

Andrea Aler Tubella is a senior researcher in the Responsible Artificial Intelligence Group at Umeå University, Sweden. She is a computer scientist who studies so-called black-box technology with an emphasis on moral issues. In her VEIL she makes the case for rendering decisions taken by algorithms contestable. Her talk revolves around "the right to contest a decision from automated processing", first sketching results of interdisciplinary research that can help embed the right to contest decisions within technical processes. She then outlines an approach that makes contestability achievable by developing technical processes that can be re-run on demand and are supported by advanced records management.

Contestability – The right to contest a decision from automated processing

At the beginning, Andrea Aler Tubella outlines the basic claim associated with the concept of contestability: "Individuals affected by decisions based on predictive algorithms should have similar rights to those in the legal system, including the right to challenge them." The outcome of an automated process might not be correct, and individuals should therefore be able to challenge the decision and go to court. This claim is indeed anchored in the European General Data Protection Regulation, which says that whenever a decision relying solely on automated processing affects an individual significantly, the right to contest this decision must be guaranteed. To ensure this right in the near future, the work of various professions is needed, Andrea Aler Tubella says. Many questions still need to be clarified: "what can be contested?", "what legal framework is needed?", "what assurances are needed to guarantee the right to contest?", and "how to explain decisions so that individuals can decide whether they want to contest?". These questions await an answer, because if a decision is contested, one needs to know whether it was taken in a correct way. "We need, of course, clarity – what are the rules – in order to be able to check whether the rules were followed."

In her own research, Andrea Aler Tubella focuses on the tools and technical processes that can make sure that AI decisions are observable, understandable and contestable. Transparency and explainability are conditions that must be fulfilled if you want to come to an informed decision on whether or not to contest a [black box] decision. Accountability – to name another key concept of AI ethics – comes into play after you have contested a decision, because it addresses compensation: "if a fault is found, how should it be compensated?". Andrea Aler Tubella argues that it is the field of contestability in particular that needs more research and may provide answers.

Tools – A proposal for a contestability process

In the second part of her presentation, Andrea Aler Tubella sketches a proposal for a contestability process that combines well-established software engineering practices with rule-based approaches. She describes the main idea: to encode a system's constraints and to re-run processes after their decisions have been contested. "If a part of the process does not adhere to the specifications, some flags are raised. Then you can see where exactly the process was not correct." To arrive at such a system, stakeholders must be involved in its development. Another important factor is the language that is used: it must allow violations to be detected. You should also know which version of the algorithm was used, and be able to record inputs and track events, as in the sketch below.
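To make the proposal more concrete, here is a minimal sketch of how encoded constraints, versioned decision records, and on-demand re-runs could fit together. This is not the authors' implementation; every name in it (Constraint, DecisionRecord, contest, and so on) is hypothetical, and a real system would need a richer constraint language and persistent storage.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Any, Callable

@dataclass
class Constraint:
    """A named, machine-checkable rule over a decision's inputs and output."""
    name: str
    holds: Callable[[dict, Any], bool]  # True if the rule was respected

@dataclass
class DecisionRecord:
    """Everything needed to re-run and audit one automated decision."""
    algorithm_version: str
    inputs: dict
    output: Any
    timestamp: str

def decide_and_record(model, version, inputs, log):
    """Run the model and append a complete record so the decision can be re-run later."""
    output = model(inputs)
    log.append(DecisionRecord(version, inputs, output,
                              datetime.now(timezone.utc).isoformat()))
    return output

def contest(record, model, constraints):
    """Re-run a contested decision and flag every encoded constraint it violates."""
    output = model(record.inputs)  # re-run on the recorded inputs
    # The returned names are the "raised flags": they show where the
    # process did not adhere to its specification.
    return [c.name for c in constraints if not c.holds(record.inputs, output)]
```

Because the constraints are checked against recorded inputs and outputs rather than against the model's internals, even a genuinely black-box model could be audited this way, which fits the talk's emphasis on transparent moral bounds rather than transparent models.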

To illustrate the demands that arise for record keeping, Andrea Aler Tubella cites a "real-life" example showing how Lufthansa had to adjust its pricing policy and algorithmic decision making after Air Berlin's bankruptcy.
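The talk does not prescribe a mechanism for such record keeping, but the demands it raises (which algorithm version ran, on which inputs, when, with what outcome) could be met with something as simple as an append-only, tamper-evident event log. A minimal sketch, assuming a hash-chained log; the event fields shown here are invented for illustration:

```python
import hashlib
import json

def append_event(log, event):
    """Append an event, chaining a SHA-256 hash over the previous entry
    so that later tampering with the record becomes detectable."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    entry = {"event": event, "prev": prev_hash, "hash": digest}
    log.append(entry)
    return entry

log = []
append_event(log, {"type": "price_set", "algorithm_version": "2.3.1",
                   "route": "TXL-MUC", "price_eur": 349})
append_event(log, {"type": "decision_contested", "ref": log[0]["hash"]})
```

With such a chain, an auditor can verify that no recorded pricing decision was silently edited after the fact, which is exactly what a contestation process would need to rely on.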

Q&A Session

The Q&A session emphasized the pronounced sociotechnical, and especially juridical, nature of contestability. As the speaker put it: "what is correct for a system in one context is not correct in another one." This is why it is important that the rules are found and set by each system's developers – they cannot be defined in a universal way. Another message from the presenter was not to get distracted by black boxes: even the entirely transparent strategies guiding the use of algorithms provide plenty to work on. How can potential areas and systems be identified that would work well with the contestability processes outlined by Andrea Aler Tubella? How can appropriate criteria be defined? The last part of the discussion made the very concrete potential of transparent moral bounds in AI tangible.

Literature

Andrea Aler Tubella, Andreas Theodorou, Virginia Dignum, and Loizos Michael (2020). Contestable Black Boxes. RuleML+RR 2020. https://arxiv.org/abs/2006.05133

Henrietta Lyons, Eduardo Velloso, and Tim Miller (2021). Conceptualising Contestability: Perspectives on Contesting Algorithmic Decisions. Proc. ACM Hum.-Comput. Interact. 5, CSCW1, Article 106. DOI: https://doi.org/10.1145/3449180