Why Hinton Is Wrong About AI Taking Over (and About Conscious AI)
Recently, the Nobel Prize laureate and ACM (Association for Computing Machinery) Turing Award winner Geoffrey Hinton relaunched his ideas and fears about "AI taking over" (see this recent interview: https://www.youtube.com/watch?v=vxkBE23zDmQ), where he claims that "AI is already conscious" because - among other things - 1) it has its own goals, 2) it can create subgoals, and 3) since, if an AI device is implanted in the biological tissue of a conscious biological entity and that entity remains conscious, then the implanted device must be conscious as well (yes, he said that).

First: the fact that AI systems have goals (always provided by humans) and create subgoals to achieve them is not new (see e.g. the subgoaling procedure in the cognitive architecture SOAR, which relies on the means-end analysis heuristic already used in the General Problem Solver developed by Newell, Shaw and Simon in 1959! I repeat: 1959!). So these first two points do not make any sense (a minimal sketch of this kind of means-end subgoaling is given at the end of this post). Also the fact that AI systems are able to invent new knowledge (going beyond subgoaling procedures) to solve new problems is not new: in 2018-2019, with my colleague Gian Luca Pozzato and our students, we developed a system that used the TCL logical framework, https://www.antoniolieto.net/tcl_logic.html, to invent new knowledge for solving problems via concept combination, blending and fusion (the paper is here: https://doi.org/10.1016/j.cogsys.2019.08.005; a toy illustration of the underlying idea is also sketched at the end of this post). None of these things makes an AI system conscious: they are just general heuristic procedures that allow a system to have yet another strategy (provided by us) to perform better on unknown tasks.

In addition: the argument that an AI system can functionally replace biological cells/neurons/tissues without making the system subject to this replacement "unconscious" (and that therefore - this is how Hinton's reasoning goes - the AI system is also "conscious") is complete nonsense, since it rests on confusions at many levels. The first confusion concerns the distinction between functional and structural systems (discussed extensively in my book Cognitive Design for Artificial Minds https://www.amazon.com/Cognitive-Design-Artificial-Minds-Antonio/dp/1138207950). The second one - connected to the first - concerns the attribution to a functional component of a "structural" (i.e. cognitive/biological) explanation. This is a sort of ascription fallacy that is also described in the book (and that is very common nowadays).

More specifically: of course we can have bionic systems that are integrated - via partial or total replacement - with biological cells and tissues, but functional replacement does not imply any biological/cognitive attribution (on this, please also see this paper: https://doi.org/10.3389/frobt.2022.888199). Think, for example, of exoskeletons controlled via semi-invasive medical devices that record brain activity: these systems "function as" our biological counterpart and communicate well with other biological components, leading to locomotion. But would you say that they (i.e. the semi-invasive component implanted in our brain, to follow Hinton's unreasonable chain of reasoning) are conscious, just because the biological entity that has them implanted is conscious? That is plainly wrong and nonsense.

These kinds of claims are completely unjustified and wrong from a scientific perspective, and they have nothing to do with the typical concerns and legitimate discussions about the risks coming from the societal and ethical impact that AI, like any technology, has.
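To make concrete how old and how simple the subgoaling idea is, here is a minimal, self-contained Python sketch of GPS-style means-end analysis. To be clear: this is not the actual GPS or SOAR code, and the operators, goals and facts below are invented for illustration; it only shows the bare heuristic at work: when a goal does not hold, pick an operator whose effects would achieve it and turn its unmet preconditions into new subgoals.

```python
from typing import NamedTuple

class Operator(NamedTuple):
    name: str
    preconditions: frozenset  # facts that must already hold to apply it
    add_effects: frozenset    # facts the operator makes true

# Hypothetical toy operators, invented for illustration.
OPERATORS = [
    Operator("drive-to-shop", frozenset({"car-works"}), frozenset({"at-shop"})),
    Operator("repair-car", frozenset({"have-money"}), frozenset({"car-works"})),
    Operator("withdraw-cash", frozenset(), frozenset({"have-money"})),
]

def achieve(goal, state, depth=0):
    """Means-end analysis: if the goal does not hold, find an operator whose
    effects reduce the difference between the current state and the goal,
    and recursively set up its unmet preconditions as subgoals."""
    if goal in state:
        return state
    for op in OPERATORS:
        if goal in op.add_effects:
            for pre in op.preconditions - state:  # the subgoaling step
                state = achieve(pre, state, depth + 1)
            print("  " * depth + f"apply {op.name}")
            return state | op.add_effects
    raise ValueError(f"no operator achieves {goal!r}")

achieve("at-shop", frozenset())
# Output (deepest subgoal solved first):
#     apply withdraw-cash
#   apply repair-car
# apply drive-to-shop
```

That is the whole trick: goals and subgoals emerge from the difference between a desired state and the current one, over operators supplied in advance by the programmer. Nothing here requires, or suggests, consciousness.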
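And for the knowledge-invention point, here is a deliberately toy illustration of probabilistic concept combination in the spirit of the classic "pet fish" example. To be clear: this is not the TCL framework itself (TCL is a typicality-based description logic with a probabilistic semantics); the concepts, properties and probabilities below are invented, and the whole mechanism is reduced to picking the consistent, maximal-probability blend of the typical properties of a HEAD and a MODIFIER concept, with the head winning conflicts.

```python
from itertools import combinations

# Invented toy prototypes: {typical property: probability that it holds}.
HEAD = {      # "fish"
    "has-gills": 0.9, "lives-in-water": 0.9, "has-scales": 0.7,
}
MODIFIER = {  # "pet"
    "lives-in-a-house": 0.8, "is-affectionate": 0.6, "is-owned-by-humans": 0.9,
}
# Pairs of mutually exclusive properties.
CONFLICTS = [frozenset({"lives-in-water", "lives-in-a-house"})]

def combine(head, modifier, conflicts):
    """Keep all head properties (head dominance) and select the subset of
    modifier properties that is consistent and has maximal probability."""
    mod_keys = list(modifier)
    best, best_p = None, -1.0
    for r in range(len(mod_keys) + 1):
        for subset in combinations(mod_keys, r):
            props = set(head) | set(subset)
            if any(c <= props for c in conflicts):
                continue  # inconsistent blend: discard this scenario
            # Scenario probability: the chosen modifier properties hold,
            # the discarded ones do not.
            p = 1.0
            for k in mod_keys:
                p *= modifier[k] if k in subset else 1 - modifier[k]
            if p > best_p:
                best, best_p = props, p
    return best, best_p

proto, p = combine(HEAD, MODIFIER, CONFLICTS)
print(proto, p)
# The "pet fish" prototype keeps gills/water/scales from the head, drops
# "lives-in-a-house", and inherits "is-affectionate" and "is-owned-by-humans".
```

The point, once again: inventing a new combined concept is a combinatorial search over knowledge and constraints provided by us, not a spark of consciousness.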