Wednesday, May 31, 2023

Large Language Models, Super-Human Intelligence and the Methodological Hallucinations of Modern AI

In recent months, a major source of confusion in interpreting the behavior exhibited by current state-of-the-art AI systems (e.g. Large Language Models like GPT-4 and ChatGPT for natural language processing) has been the wrong ascription of cognitive capabilities to them. This confusion, when genuine (and not generated ad hoc, as in this case), is factually wrong, since it prefigures Strong AI scenarios that are scientifically ungrounded (while there are reasons to urgently consider other ethical issues regarding the impact of these technologies on society: the biases they introduce, their potential misuse, their impact on the job market, etc.).
In particular, the expression “Strong AI” was introduced by John Searle to identify the position according to which computational models, embodied or not, can have a “mind”, a “consciousness”, etc. in the same way as human beings. The expression “Weak AI”, on the other hand, summarizes the position according to which computational models can simulate human behaviour and thinking abilities but cannot be claimed to possess any kind of “real” cognitive state.

In Cognitive Design for Artificial Minds (Routledge/Taylor & Francis, 2021) I show how current AI and cognitive modelling research is perfectly aligned with the Weak AI hypothesis. In particular, current AI systems can be described (at the very best) as “shallow”, imprecise and often biologically and cognitively implausible technological depictions of the biological brains from which our intelligent capabilities arise. They are what I call “functionally designed” systems: they apparently “function as” biological brains (i.e. they are able to superficially reproduce the same output), but the mechanisms determining that behavior/output are completely different from the ones we know from biology, neuroscience, physiology and cognitive psychology.
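To make the “functionally designed” idea concrete, here is a hypothetical toy sketch (my own illustration, not taken from the book): two systems whose input/output behaviour is identical, while the mechanisms producing that behaviour have nothing in common.

```python
def functional_adder(a: int, b: int) -> int:
    """Machine-style mechanism: binary arithmetic performed in silicon."""
    return a + b


def counting_adder(a: int, b: int) -> int:
    """A (grossly simplified) human-like 'counting on' mechanism described in
    developmental psychology: start from the larger addend and count upwards."""
    total, steps = max(a, b), min(a, b)
    for _ in range(steps):
        total += 1  # one counting step at a time
    return total


# Behavioural equivalence: every probe yields the same output...
assert all(functional_adder(a, b) == counting_adder(a, b)
           for a in range(10) for b in range(10))
# ...yet the shared outputs tell us nothing about the inner mechanism:
# observing the behaviour does not license ascribing the counting process
# (or any other cognitive process) to the first system.
```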

As a consequence, we cannot invoke theories and faculties that explain biological phenomena in order to interpret the behavior of such artifacts, since the differences and asymmetries between these classes of systems are enormous (a corollary of this is that the discourse about the possible emergence of intentional or “conscious” states from such systems is, literally, science fiction).

Now: the fact that “functionally designed” AI systems cannot be artifacts that are “intelligent” or “conscious” exactly as we are (or as other biological entities are) also makes the fear of super-intelligent machines irrelevant. We already have a huge number of systems achieving super-human performance in many different tasks (ranging from computer vision to NLP). However, their remarkable performance is not a symptom of having acquired the underlying competence that, in our brains, explains a given intelligent behavior.
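A minimal, purely illustrative sketch of this performance/competence gap (mine, not from the post): a system that only memorizes can look super-humanly accurate on the items it was exposed to while possessing no arithmetic competence at all.

```python
import random

# 'Training' set: all additions with operands below 50.
train = [(a, b, a + b) for a in range(50) for b in range(50)]
memory = {(a, b): s for a, b, s in train}


def memorizer(a: int, b: int) -> int:
    """Pure retrieval: no addition is ever performed."""
    return memory.get((a, b), 0)


# In-distribution benchmark: flawless performance.
seen = random.sample(train, 100)
print(sum(memorizer(a, b) == s for a, b, s in seen) / len(seen))      # 1.0

# Outside the memorized range the missing competence is exposed.
unseen = [(a, b, a + b) for a in range(100, 110) for b in range(100, 110)]
print(sum(memorizer(a, b) == s for a, b, s in unseen) / len(unseen))  # 0.0
```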

And what about the class of “constrained” artificial systems, then (i.e. those designed and implemented by explicitly taking into account neuroscientific and/or psychological theories and constraints in order to build computational models of cognition)? For this category of systems too, built by adopting what I call in the book a “structural design approach” (Chapter 2), I show how they can be used to actually simulate the human-like mechanisms determining a given behaviour, and how this can enable the understanding of some hidden mechanistic dynamics. Such computational models, therefore, can be used (and are used) to better understand biological or mental phenomena without pretending that they are the real phenomena they are modelling. To use a famous analogy proposed by Searle himself: “just as a model of the weather is not the weather, a model of the human mind is not a human mind”.
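As a concrete (and deliberately toy) sketch of what a structurally designed model can look like, assume a power-law forgetting mechanism of the kind posited in the memory literature; the parameters below are purely illustrative, not fitted to any data.

```python
import math


def recall_probability(t_hours: float, decay: float = 0.5, scale: float = 1.0) -> float:
    """Predicted probability of recalling an item t hours after study,
    derived from an assumed power-law decay of memory strength."""
    strength = scale * (1.0 + t_hours) ** (-decay)
    return 1.0 - math.exp(-strength)  # map strength to a recall probability


# The model generates predictions that can be compared against human
# forgetting curves: it simulates the posited mechanism, but it does not
# thereby *have* memories, any more than a weather model has rain.
for t in (0, 1, 24, 168):
    print(f"after {t:>3} h: predicted recall = {recall_probability(t):.2f}")
```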

As a consequence of this state of affairs, both these classes of artificial systems (which are built by following different design principles and with different goals) methodologically fall within the “Weak AI” approach. This consideration, which I argue and detail extensively in Cognitive Design for Artificial Minds, is in contrast with the nowadays popular (but incorrect) vulgata that sees them as instances of the “Strong AI” hypothesis (and that is one of the main sources of confusion on these topics).

The fact that both AI and Computational Cognitive Science build models and systems falling within the Weak AI hypothesis does not weaken either of the two disciplines: AI researchers continue to build better systems and technologies that can (in principle) be useful to human beings and that can, in specific tasks, perform better than humans; computational cognitive scientists, on the other hand, continue to build computational simulations of biological/cognitive processes without claiming to build any system that can really be described as “intelligent” or “conscious” in the proper human sense.

It is simply astonishing to see nowadays how many commentators (including AI experts) fail to recognize the asymmetries between the impressive technologies that have been built and what biological brains do, and how they do it. It is about time to dismiss the alarmist claims about AI and its (hypothetical) existential risks for humanity. They represent nothing but hallucinations (generated by humans, this time) produced by the methodological fog enveloping the modern AI field.