Saturday, 1 February 2025

Why Hinton is Wrong about AI taking over (and about conscious AI)

Recently, Nobel Prize laureate and ACM (Association for Computing Machinery) Turing Award winner Geoffrey Hinton relaunched his ideas and fears about "AI taking over" (see this recent interview https://www.youtube.com/watch?v=vxkBE23zDmQ), where he claims that "AI is already conscious" because, among other things: 1) it has its own goals; 2) it can create subgoals; and 3) since a biological entity remains conscious when such a device is implanted in its biological tissue, the implanted device must be conscious as well (yes, he said that). First: the fact that AI systems have goals (always provided by humans) and create subgoals to achieve them is not new (see e.g. the subgoaling procedure in the cognitive architecture SOAR, which relies on the means-ends analysis heuristic already used in the General Problem Solver developed by Newell, Shaw and Simon in 1959! I repeat: 1959!). So these two points do not make any sense. The fact that AI systems are able to invent new knowledge (going beyond subgoaling procedures) to solve new problems is not new either: in 2018-2019, with my colleague Gian Luca Pozzato and our students, we developed a system that used the TCL logical framework (https://www.antoniolieto.net/tcl_logic.html) to invent new knowledge for solving problems via concept combination, blending and fusion (the paper is here: https://doi.org/10.1016/j.cogsys.2019.08.005). None of these things makes an AI system conscious: they are just general heuristic procedures that give a system yet another strategy (provided by us) to perform better on unknown tasks. In addition: the claim that an AI system can functionally replace biological cells/neurons/tissues without making the system subject to this replacement "unconscious" (and that therefore, as Hinton's reasoning goes, the AI system is also "conscious") is complete nonsense, since it involves confusion on many levels.
The first confusion concerns the distinction between functional and structural systems (discussed extensively in my book Cognitive Design for Artificial Minds https://www.amazon.com/Cognitive-Design-Artificial-Minds-Antonio/dp/1138207950). The second one, connected to the first, concerns the attribution of a "structural" (i.e. cognitive/biological) explanation to a functional component. This is a sort of ascription fallacy that is also described in the book (and that is very common nowadays). More specifically: of course we can have bionic systems that are integrated, via partial or total replacement, with biological cells and tissues, but functional replacement does not imply any biological/cognitive attribution (on this please also see this paper https://doi.org/10.3389/frobt.2022.888199). Think, for example, of exoskeletons controlled via semi-invasive medical devices that record brain activity: these systems "function as" their biological counterparts and communicate well with other biological components, leading to locomotion. But would you say that they (i.e. the semi-invasive component implanted in our brain, in this case, to follow Hinton's unreasonable reasoning chain) are conscious, just because the biological entity that has them implanted is conscious? That is plain wrong and nonsense. These kinds of claims are completely unjustified and wrong from a scientific perspective, and have nothing to do with the typical concerns and legitimate discussions about the risks coming from the societal and ethical impact of AI, as of any technology.
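To make concrete how old and how mechanical the subgoaling idea is, here is a minimal sketch of GPS-style means-ends analysis: an operator is selected because it reduces the difference between the current state and the goal, and its unmet preconditions become subgoals. This is an illustrative toy, not the actual SOAR or GPS implementation; the domain and operator names below are hypothetical.

```python
# Illustrative sketch of GPS-style means-ends analysis with subgoaling.
# The operators and the toy domain are hypothetical, made up for this example.
from dataclasses import dataclass

@dataclass
class Operator:
    name: str
    preconds: frozenset   # conditions that must hold before applying
    adds: frozenset       # conditions made true by the operator
    deletes: frozenset    # conditions made false by the operator

def achieve(state, goal, operators, trace):
    """Try to make `goal` true, recursively subgoaling on preconditions."""
    if goal in state:
        return state
    for op in operators:
        if goal not in op.adds:
            continue  # this operator cannot contribute to the goal
        new_state, ok = state, True
        for pre in op.preconds:      # subgoal on each unmet precondition
            result = achieve(new_state, pre, operators, trace)
            if result is None:
                ok = False
                break
            new_state = result
        if ok:
            trace.append(op.name)
            return (new_state - op.deletes) | op.adds
    return None  # no operator achieves the goal

# Toy domain: grabbing a banana requires standing on a chair.
ops = [
    Operator("push-chair", frozenset({"chair-in-room"}),
             frozenset({"chair-under-banana"}), frozenset()),
    Operator("climb-chair", frozenset({"chair-under-banana"}),
             frozenset({"on-chair"}), frozenset()),
    Operator("grab-banana", frozenset({"on-chair"}),
             frozenset({"has-banana"}), frozenset()),
]
plan = []
final = achieve(frozenset({"chair-in-room"}), "has-banana", ops, plan)
print(plan)  # → ['push-chair', 'climb-chair', 'grab-banana']
```

The "goal" here is supplied by the programmer, and the subgoals fall out of a fixed difference-reduction heuristic: exactly the kind of procedure that has existed since 1959, and nothing about it implies consciousness.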

Wednesday, 31 May 2023

Large Language Models, Super Human Intelligence and the Methodological Hallucinations of Modern AI

In the last months, a major source of confusion in the interpretation of the behavior exhibited by current state-of-the-art AI systems (e.g. Large Language Models like GPT-4 and ChatGPT for natural language processing) has been the wrong ascription of cognitive capabilities to them. This confusion, when genuine (and not generated ad hoc, as in this case), is factually wrong, since it prefigures Strong AI scenarios that are scientifically ungrounded (while there are reasons to urgently consider other ethical issues regarding the impact of these technologies on society, from the point of view of the biases they introduce, their potential misuse, their impact on the job market, etc.).
In particular, the expression "Strong AI" was introduced by John Searle to identify the position assuming that computational models, embodied or not, can have a "mind", a "consciousness", etc. in the same way as human beings. On the other hand, the expression "Weak AI" synthesizes the position according to which computational models can simulate human behaviour and thinking abilities but cannot be claimed to possess any kind of "real" cognitive state.

In Cognitive Design for Artificial Minds (Routledge Books/Taylor & Francis, 2021) I show how current #AI and cognitive modelling research are perfectly aligned with the weak-AI hypothesis. In particular, current AI systems can be described (at the very best) as "shallow", imprecise, and often biologically and cognitively implausible technological depictions of the biological brains from which our intelligent capabilities arise. They are what I call "functionally designed" systems: they apparently "function as" biological brains (i.e. they are able to superficially reproduce the same output), but the mechanisms determining that behavior/output are completely different from the ones we know from biology, neuroscience, physiology and cognitive psychology.

As a consequence, we cannot use theories and faculties that explain biological phenomena to interpret the behavior of such artifacts, since the differences and asymmetries between these classes of systems are enormous (a corollary of this consequence is that the discourse about the possible emergence of intentional or "conscious" signals from such systems is literally science fiction).

Now: the fact that "functionally designed" AI systems cannot be made "intelligent" or "conscious" exactly like us (or like other biological entities) also makes the fear of superintelligent machines irrelevant. We already have a huge number of systems achieving superhuman performance on a number of different tasks (ranging from computer vision to NLP). However, their incredible performance is not a symptom of the acquisition of the underlying competence that, in our brains, explains a certain intelligent behavior.

And what about the class of "constrained" artificial systems, then (i.e. those designed and implemented by explicitly taking into account neuroscientific and/or psychological theories and constraints to build computational models of cognition)? For this category of systems, built by adopting what I call in the book a "structural design approach" (Chapter 2), I show how they can be used to actually simulate the human-like mechanisms determining a given behaviour, and this can enable the understanding of some hidden mechanistic dynamics. Such computational models, therefore, can be used (and are used) to better understand biological or mental phenomena without pretending that they are the real phenomena they are modelling. To use a famous analogy proposed by Searle himself: "just as a model of the weather is not the weather, a model of the human mind is not a human mind".

As a consequence of this state of affairs, both these classes of artificial systems (which are built by following different design principles and with different goals) methodologically fall within the "weak AI" approach. This consideration, which I argue and detail extensively in Cognitive Design for Artificial Minds, is in contrast with the nowadays popular (but incorrect) vulgata that sees them as instances of the "Strong AI" hypothesis (and that is one of the main sources of confusion on these topics).

The fact that both the fields of AI and Computational Cognitive Science build models and systems falling within the weak AI hypothesis does not weaken either discipline: AI researchers, indeed, continue to build better systems and technologies with the purpose that they can (in principle) be useful for human beings and perform better than humans on specific tasks; computational cognitive scientists, on the other hand, continue to build computational simulations of biological/cognitive processes without pretending to build any system that can really be described as "intelligent" or "conscious" in the proper human sense.

It is just incredible to see, nowadays, so many comments (including those of AI experts) that fail to recognize the asymmetries between the incredible technologies that have been built and biological brains, both in what they do and in how they are built. It is about time to dismiss the alarmist claims about AI and its (hypothetical) existential risks for humanity. They represent nothing but hallucinations (generated by humans this time), determined by the methodological fog enveloping the modern AI field.

Friday, 26 August 2022

Cognitive Design for Artificial Minds on Substack!

Cognitive Design for Artificial Minds (Routledge/Taylor and Francis, 2021) https://www.amazon.com/Cognitive-Design-Artificial-Minds-Antonio-dp-1138207950/dp/1138207950/ now has its own Substack channel at https://artificialminds.substack.com! You can now subscribe to the #newsletter to get updates, news and previously unpublished content about the themes and topics of the book. Podcasts and video interviews are coming in the next months!

Wednesday, 9 February 2022

Cognitive Design for Artificial Minds nominated in the Top 3 "Best Artificial Intelligence Design Books of All Time" according to BookAuthority

I'm happy to announce that my book, "Cognitive Design for Artificial Minds", made it to BookAuthority's Best Artificial Intelligence Design Books of All Time. BookAuthority collects and ranks the best books in the world, and it is a great honor to get this kind of recognition. Thank you for all your support! The book is available for purchase on Amazon and on the Taylor and Francis website at https://doi.org/10.4324/9781315460536.

Wednesday, 14 July 2021

Cognitive Design for Artificial Minds in the list of the "Best New Artificial Intelligence Design Books To Read" according to BookAuthority


I'm happy to announce that my book, "Cognitive Design for Artificial Minds", made it to BookAuthority's Best New Artificial Intelligence Design Books:
https://bookauthority.org/books/new-artificial-intelligence-design-books?t=12uofh&s=award&book=1138207926

BookAuthority collects and ranks the best books in the world, and it is a great honor to get this kind of recognition. Thank you for all your support!

The book is available for purchase on Amazon.