Tuesday, April 8, 2025

Feedback by Péter Érdi

The notion of feedback represents one of the most important contributions of cybernetics to the science of (any) complex system design. In the cybernetic tradition, machines capable of actively adapting themselves to their environments via a trial-and-error process based on negative feedback (used for self-correction) were called "servomechanisms" or "negative feedback automata". A recent example of machines using this type of mechanism comes from Artificial Intelligence, since current artificial neural networks (at least the ones that are most successful nowadays) make use, in their supervised learning phase, of a well-known feedback mechanism called "backpropagation".
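As a toy illustration of this point (mine, not taken from the book or the post), the sketch below trains a tiny two-layer network on XOR with plain backpropagation: the output error is fed back through the network and used to correct the weights, in the spirit of a negative-feedback loop. The architecture, learning rate and number of steps are arbitrary choices for the example.

```python
# Minimal sketch: backpropagation as a negative-feedback loop.
# The error signal (output minus target) is propagated backwards
# and used to correct the weights at every step.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn XOR with one hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Error signal: the deviation from the target (the "negative feedback")
    error = out - y

    # Backward pass: propagate the error and correct the weights
    d_out = error * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]] as the error is driven to zero
```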


The book "Feedback, how to destroy or save the worlds" (Springer, 2024) by Péter Érdi is an intellectual journey in the science of feedback and shows, with clear examples coming from different fields, how either the design or the discovery of feedback mechanisms is crucial for the advancement of science, technology, economics and society since feedback is a crucial component of any complex system.

The book focuses not only on the well-known importance of negative feedback (e.g. for learning) but also, and more importantly, on the importance of understanding which feedback mechanisms govern the behavior of a given system and how it is possible to intervene on them (using both positive and negative corrective signals) in order to drive the system towards (biological/computational/mechanical/physical/chemical...) states of homeostasis, market stability, environmental sustainability, and so on.
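To make the contrast concrete, here is a minimal sketch (my own illustration, not an example from the book) of the difference between a negative, corrective feedback loop that pulls a variable back to a set point, and a positive, amplifying loop that drives it away. The linear dynamics and the gain values are arbitrary assumptions made only for the illustration.

```python
# Toy comparison of negative vs. positive feedback on a single variable x.

def simulate(gain, x0=1.0, setpoint=0.0, dt=0.1, steps=50):
    """Euler integration of dx/dt = gain * (x - setpoint)."""
    x = x0
    trajectory = [x]
    for _ in range(steps):
        x += dt * gain * (x - setpoint)
        trajectory.append(x)
    return trajectory

negative = simulate(gain=-1.0)  # corrective signal: deviation decays back to the set point
positive = simulate(gain=+1.0)  # amplifying signal: deviation grows without bound

print(f"negative feedback, final deviation: {negative[-1]:.4f}")
print(f"positive feedback, final deviation: {positive[-1]:.4f}")
```

The only design choice that matters here is the sign of the gain: with a negative gain every deviation produces a correction in the opposite direction (homeostasis), while a positive gain turns the same deviation into further amplification (runaway).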

If there is any chance of saving this World, putting in place powerful feedback mechanisms able to avoid catastrophic outcomes is one of the few things we can do. And, in this state of affairs, the science of feedback (Cybernetics) strikes back and seems to be even more relevant today!

Saturday, February 1, 2025

Why Hinton is Wrong about AI taking over (and about conscious AI)

Recently, Nobel Prize and ACM Turing Award winner Geoffrey Hinton relaunched his ideas and fears about "AI taking over" (see this recent interview: https://www.youtube.com/watch?v=vxkBE23zDmQ), where he claims that "AI is already conscious" because, among other things, 1) it has its own goals, 2) it can create subgoals, and 3) if it were implanted in the biological tissue of a biological entity and that entity remained conscious, then, as a consequence, the implanted device would be conscious too (yes, he said that).

First: the fact that AI systems have goals (always provided by humans) and create subgoals to achieve them is not new (see e.g. the subgoaling procedure in the cognitive architecture SOAR, which relies on the means-ends analysis heuristic already used in the General Problem Solver developed by Newell, Shaw and Simon in 1959! I repeat: 1959!). So the first two points do not make any sense (a toy sketch of this kind of means-ends subgoaling is given at the end of this post). Also, the fact that AI systems are able to invent new knowledge (going beyond subgoaling procedures) to solve new problems is not new either: in 2018-2019, with my colleague Gian Luca Pozzato and our students, we developed a system that used the TCL logical framework, https://www.antoniolieto.net/tcl_logic.html, to invent new knowledge for solving problems via concept combination, blending and fusion (the paper is here: https://doi.org/10.1016/j.cogsys.2019.08.005). None of these things makes an AI system conscious: they are just general heuristic procedures that give a system yet another strategy (provided by us) to perform better on unknown tasks.

In addition: the argument that an AI system can functionally replace biological cells/neurons/tissues without making the system subject to this replacement "unconscious" (and that therefore - this is how Hinton's reasoning goes - the AI system is also "conscious") is complete nonsense, since it rests on confusions at many levels. The first confusion concerns the distinction between functional and structural systems (discussed extensively in my book Cognitive Design for Artificial Minds https://www.amazon.com/Cognitive-Design-Artificial-Minds-Antonio/dp/1138207950). The second one - connected to the first - concerns the attribution of a "structural" (i.e. cognitive/biological) explanation to a functional component. This is a sort of ascription fallacy that is also described in the book (and that is very common nowadays). More specifically: of course we can have bionic systems that are integrated - via partial or total replacement - with biological cells and tissues, but functional replacement does not imply any biological/cognitive attribution (on this, please also see this paper: https://doi.org/10.3389/frobt.2022.888199). Think, for example, of exoskeletons controlled via semi-invasive medical devices that record brain activity: these systems "function as" our biological counterparts and communicate well with other biological components, leading to locomotion. But... would you say that they (i.e. the semi-invasive components implanted in our brain, in this case, to follow Hinton's unreasonable reasoning chain) are conscious just because the biological entity that has them implanted is conscious? That is plain wrong and nonsense.

These kinds of claims are completely unjustified and wrong from a scientific perspective, and they have nothing to do with the typical concerns and legitimate discussions about the risks coming from the societal and ethical impact that AI, like any technology, has.
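For readers unfamiliar with how unremarkable subgoaling is, here is a minimal, purely illustrative sketch of means-ends analysis in the spirit of GPS/SOAR (it is not their actual code): to achieve a goal, pick an operator whose effects include it and recursively post the operator's unmet preconditions as subgoals. The toy domain, operator names and predicates are invented for the example.

```python
# Toy means-ends analysis: operators have preconditions and effects;
# unmet preconditions become subgoals, achieved recursively.

OPERATORS = [
    {"name": "drive-to-shop", "pre": {"car-works"},             "add": {"at-shop"}},
    {"name": "repair-car",    "pre": {"have-tools"},            "add": {"car-works"}},
    {"name": "borrow-tools",  "pre": set(),                     "add": {"have-tools"}},
    {"name": "buy-groceries", "pre": {"at-shop", "have-money"}, "add": {"have-groceries"}},
]

def achieve(goal, state, plan):
    """Achieve a single goal literal, posting subgoals for unmet preconditions."""
    if goal in state:
        return True
    for op in OPERATORS:
        if goal in op["add"]:
            # Subgoal on every precondition of the chosen operator.
            if all(achieve(pre, state, plan) for pre in op["pre"]):
                state |= op["add"]        # apply the operator's effects
                plan.append(op["name"])
                return True
    return False

state = {"have-money"}
plan = []
achieve("have-groceries", state, plan)
print(plan)  # ['borrow-tools', 'repair-car', 'drive-to-shop', 'buy-groceries']
```

Nothing in this mechanical goal/subgoal decomposition says anything about consciousness: it is just a search strategy that we, the programmers, hand to the system.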