
Highlights

Virtual Ethical Innovation Lecture

VEIL - J. Mark Bishop

AI is Stupid and Causal Reasoning Will Not Fix It

Highlights summarized by Christian Herzog


Highlights of the VEIL

In this Virtual Ethical Innovation Lecture, J. Mark Bishop, Professor Emeritus of Cognitive Computing at Goldsmiths, University of London, and now scientific advisor to FACT360, dismantles common misconceptions in the narratives about intelligent machines. Prominent computer scientists such as Judea Pearl, Gary Marcus and Ernest Davis have called on researchers to build genuine capabilities for causal reasoning as a fix for the abundant errors of purely associationist approaches to intelligent systems. Bishop holds that the roots of AI’s errors lie deeper: computation itself is devoid of understanding.

Read Bishop’s full paper (Bishop, 2021; see the Literature list below).

Neural Computing and the Mind Analogy

J. Mark Bishop commences by outlining the dominant cognitive paradigm behind artificial intelligence: simulating the brain is often supposed to hold the key to producing a real computational mind. A range of fascinating, seemingly magical examples of deep learning systems appeared to demonstrate true learning. For instance, concepts such as “cat” or “face” seemingly emerged from large amounts of unstructured data (YouTube videos).

However, such associations, Bishop continues, still amount to nothing more than classifications: mappings between sets of data. Even if some aspects of learning can be modelled by algorithms such as deep learning, the question remains whether this is an accurate depiction of the learning process. And even if we were to find an algorithm that closely mimics a human brain, the question would still remain whether every human brain works that way.

Bishop has serious doubts about this. Even newer paradigms such as autoencoders, large language models and generative adversarial networks, Bishop states, are fundamentally embedded in Euclidean space as weighted sums of particular quantities and discriminating hyperplanes, and, as a result, essentially non-interpretable pieces of information. Because things in the real world do not always match the quantitative pattern-separation and pattern-matching mechanics of neural networks, causal relations are difficult to infer from them. Some things are inherently discrete, or even qualitative in nature, rendering approximations by quantitative features inherently lacking.
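To make the “weighted sums and discriminating hyperplanes” point concrete, here is a minimal Python sketch (an illustration of the general idea, not code from the lecture; the weights and inputs are invented): a single neural unit classifies by checking on which side of a hyperplane an input vector falls, a purely quantitative mapping to which no meaning is attached.

import numpy as np

# A single neural unit: a weighted sum followed by a threshold.
# The weights w and bias b define a hyperplane w·x + b = 0 in
# Euclidean space; "classification" is just which side of that
# plane the input vector lies on.
w = np.array([0.8, -0.3])  # illustrative weights
b = -0.1                   # illustrative bias

def classify(x):
    # Return 1 or 0 depending on the side of the hyperplane.
    return int(np.dot(w, x) + b > 0)

print(classify(np.array([1.0, 0.5])))  # -> 1
print(classify(np.array([0.0, 1.0])))  # -> 0

Any label such as “cat” is attached by us from the outside; internally there is only geometry, which is exactly Bishop’s point about non-interpretability.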

Challenging the Computational View on the Human Mind

Based on the above, Bishop goes on to present three essential challenges to the view that the human mind is entirely amenable to ratiocination and computation. These challenges rest on three claims:

  • Computers do not understand the bits they are processing (Searle’s Chinese Room Argument)
  • Computers lack mathematical insight (Penrose’s Turing-non-Computability & Chess Problems)
  • Computers cannot experience phenomenal sensation (Bishop’s Dancing With Pixies Reductio)

In short, Searle’s “Chinese Room Argument” (Searle, 1980) is a thought experiment that undermines the idea that a computational system could actually understand the information (such as stories) it processes. In other words, Searle’s thought experiment purports to show that “syntax is not sufficient for semantics”. For Searle, it follows that minds are not based on mere computation. The experiment describes someone locked in a room with three stacks of paper containing Chinese ideographs, whose meaning the person does not understand. There is also an English-language book of rules (an algorithm) on how to correlate the three stacks of paper based purely on the form of the symbols on them. Outside the room, people identify the three stacks as “the script”, “the story” and “questions about the story”. The algorithm can be identified from the outside as a “program” that returns “answers to the questions about the story”. People on the outside can also ask questions in English, to which the person in the room can, of course, reply in English.

After a while, the person in the room follows the rules perfectly, while those on the outside engineer the rules so well that the responses become indistinguishable from those of someone who actually understands Chinese. Searle contended that, despite this, the person in the room still understands nothing of the stories: correct rule-following does not amount to understanding.
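The purely syntactic character of the rule book can be illustrated with a toy Python sketch (my illustration, not Searle’s; the symbol pairs are arbitrary): a lookup table that returns well-formed “answers” by matching the shapes of the input symbols alone.

# A toy "Chinese Room": replies are produced by matching the form
# of the input symbols against rules; meaning plays no role.
rule_book = {
    "问题一": "答案一",  # pairs matched purely by shape
    "问题二": "答案二",
}

def room(question):
    # The operator compares symbol shapes and copies out the
    # matching reply; no understanding is involved.
    return rule_book.get(question, "不知道")

print(room("问题一"))  # fluent-looking output, no comprehension inside

However fluent the output, nothing in the procedure requires, or produces, an understanding of what the symbols mean.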

Searle’s “Chinese Room Argument” has, of course, been the subject of much controversy. However the debate about it may turn out, it remains extremely difficult to epistemically establish the presence of a true cognitive state (in humans or machines), which is a separate task altogether from ontologically instantiating a cognitive state in a machine.

Sir Roger Penrose contends that “understanding” must be something not amenable to computation, referring to a Gödelian argument (Penrose, 1994, p. 150). In short, Gödel’s result states that for any sufficiently expressive formal theory or system there will always be true statements that are not provable from within the theory; i.e., such a theory cannot be both consistent and complete. Following this, computationalism, as the view that everything may be amenable to computation, may be refuted on the grounds of the so-called Penrose-Lucas argument. Lucas argued that an automaton cannot replicate the behavior of a human mathematician. The argument essentially says that “the mental procedures whereby mathematicians arrive at their judgments of truth are not simply rooted in the procedures of some specific formal system” (Penrose, 2016, p. 144).
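For reference, the Gödelian result that this argument leans on can be stated as follows (a standard textbook rendering, not a quotation from the lecture):

\[ \text{If } T \text{ is a consistent, effectively axiomatized theory extending arithmetic, then there is a sentence } G_T \text{ such that } T \nvdash G_T \text{ and } T \nvdash \lnot G_T. \]

Since, on the standard interpretation, G_T is true, the human mathematician appears to “see” a truth that the formal system T cannot prove.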

Bishop’s “Dancing with Pixies” (DwP) reductio ad absurdum (Bishop, 2002) adds to this an attempt to show that machines cannot realize raw sensation (phenomenal experience) by computational means. Bishop contends that his argument holds unless one accepts a form of “panpsychic mysterianism”, i.e., the idea that “every open physical system is phenomenally conscious”, or that all physical entities have conscious experiences. An open physical system is one that is not isolated from, and hence is in causal interaction with, its environment. The argument, which can be retraced in its entirety in (Bishop, 2021), aims to show that if computations realize phenomenal sensation, then panpsychism must hold. Otherwise, the claim that phenomenal consciousness can be realized by computation must be rejected.
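The logical skeleton of the reductio is thus a simple modus tollens (my reconstruction of the argument’s form): writing C for “computation realizes phenomenal sensation” and P for “panpsychism holds”, DwP establishes C → P, so

\[ (C \rightarrow P),\ \lnot P \ \vdash\ \lnot C. \]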

In a little more detail, the argument proceeds as follows. Bishop first shows that an open physical system (which could even be a rock) implements a particular sequence of state transitions over a finite period of time; that is to say, a computational system Q operates on known inputs I. A rock’s physical parameters, for example, will be changed by causes endogenous to the rock, such as its atoms changing state, vibrations, or atomic decay. Exogenous, or external, causal influences can stem from, e.g., gravitation. Some of these influences will act like a clock and result in non-cyclic behavior of the rock’s states. Following Hilary Putnam’s state-mapping procedure (Putnam, 2011), any execution trace of state transitions can then be realized in any open physical system. Assuming that some of these transitions implement phenomenal experience, this would mean that even a rock can be conscious, i.e., panpsychism holds.
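Putnam’s state-mapping move can be sketched in a few lines of Python (a deliberately crude toy under the assumption of a non-cyclic state trace; it is not Putnam’s own construction): because the physical states in a non-repeating run are all distinct, a one-to-one mapping onto any execution trace of the same length always exists.

# Toy version of Putnam's state-mapping: a non-repeating run of
# physical states can be relabelled so that it "implements" an
# arbitrary execution trace of equal length.
rock_states = ["r0", "r1", "r2", "r3"]               # non-cyclic physical trace
program_trace = ["q_start", "q_a", "q_b", "q_halt"]  # any desired computation

# All rock states are distinct, so a bijection exists:
mapping = dict(zip(rock_states, program_trace))

print([mapping[s] for s in rock_states])
# -> ['q_start', 'q_a', 'q_b', 'q_halt']

If executing such a trace sufficed for phenomenal experience, the rock would thereby be conscious, which is precisely the panpsychist conclusion the reductio exploits.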

However, according to Bishop, panpsychism is extremely unlikely and, hence, the argument shows that if we reject it, we must also reject the idea that any machine can realize phenomenal consciousness by simply executing a computer program.

Conclusion

Bishop contends that, in contrast to the classical cognitive science view that cognition can be computed either explicitly, implicitly or descriptively (computations on symbols that embed cognitive functions, connectionist theories of mind, and mathematical models of neurons, respectively), the three arguments (Searle’s, Penrose’s and Bishop’s) show that computation cannot realize understanding, mathematical insight or raw sensation. Accordingly, computational syntax will never completely cover human semantics. While machines may be developed to do very meaningful and complex things, they will always lack the kind of “deep understanding” that may be required to actually attribute agency.

Literature

Bishop, J. M. (2002). Dancing with Pixies: Strong Artificial Intelligence and Panpsychism. In J. M. Preston & J. M. Bishop (Eds.), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence (pp. 360–379). Oxford University Press.

Bishop, J. M. (2021). Artificial Intelligence Is Stupid and Causal Reasoning Will Not Fix It. Frontiers in Psychology, 11, 513474. https://doi.org/10.3389/fpsyg.2020.513474

Penrose, R. (1994). Shadows of the mind: A search for the missing science of consciousness. Oxford University Press.

Penrose, R. (2016). The emperor’s new mind: Concerning computers, minds and the laws of physics (Revised impression as Oxford landmark science). Oxford University Press.

Putnam, H. (2011). Representation and reality (Repr.). MIT Press.

Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–424. https://doi.org/10.1017/S0140525X00005756

J. Mark Bishop is director of the Centre for Intelligent Data Analytics, professor of Cognitive Computing at Goldsmiths, University of London, and a consultant on artificial intelligence, analytics and data ethics. He has been invited to contribute to policy at the UN, the EC and in the UK.
