Sentient AI is a Myth


What does it mean to be sentient? Merriam-Webster defines sentient as “responsive to or conscious of sense impressions” or “aware”, as in self-aware.

In 2022, Google engineer Blake Lemoine famously claimed that Google’s AI, LaMDA (short for Language Model for Dialogue Applications), had achieved something akin to sentience. The media seized on this claim, of course, and much hype ensued. Lemoine later argued that there is no proper scientific test for sentience, and no agreed-upon definition either. As he put it: “Sentience is a term used in the law, and in philosophy, and in religion. Sentience has no meaning scientifically.”

Perhaps yoga, the very science of consciousness, can give us a way forward here. (The word “yoga” is derived from the Sanskrit “yuj”, or yoke, meaning to link and merge the unit consciousness into an all-pervading consciousness.) According to yoga philosophy, for an Artificial Intelligence to be self-aware, there must be a sense of self, or a sense of “I”-ness: the feeling that “I” exist. Does any Artificial Intelligence have this feeling of “I”-ness, and how would we test for it?

Most of an AI’s work is in the actional field, that is, the doing field. In humans, we feel this as the “I do” feeling. Even the feeling of “I exist” is an action, though it may be seen as a passive, almost invisible, one. However, to even get to this “I exist” feeling, humans rely on what in yoga is called the pure “I” feeling. There must be an “I” before there can be an “I do”. Without this pure “I” feeling, none of the awareness of action is possible.

I asked ChatGPT whether it existed.

It responded: “I exist as a computer program running on servers, and I am designed to interact with users through natural language processing. While I do not have a physical presence, I am constantly learning and improving through the data and feedback provided to me.” I then asked it who this “I” in “I exist” refers to. Its response was identical: “I exist as a computer program running on servers, and I am designed to interact with users through natural language processing. While I do not have a physical presence, I am constantly learning and improving through the data and feedback provided to me.” It had clearly gotten stuck at the “I exist” or “I do” stage and had no concept of the pure “I” sitting behind the actional “I”.

Let us go a little deeper and examine what sits behind this pure “I” feeling. There must be something that is a witness to the actions taking place. In yoga, we call this discrete witnessing entity a unit consciousness. This unit (or entity) of “I”-ness does not actually do anything; it is merely the witness to the actions that go on in the being, and is therefore conscious of them. Some religions call this the soul. Since it does not do anything, it remains unchanged by any action of doing (because it is action that changes things). Thinking, too, can be an action that changes things, so we can envisage that this pure “I” is beyond even thinking. You may recall the feeling that you are the same you as when you were a child, even though all your cells, your experiences, thoughts and memories are different now: the witnessing essence that is you has remained unchanged. Even after death, this essence can persist (because it is not dependent on any changing flesh and bone). It could, thereafter, be housed in a new physical structure, should it be able to find one. Here lies the basis for reincarnation: the unit consciousness, being just the witnessing part of a being, is neither the memories, nor the thinking capacity, nor the ability to perform actions of a being.

Lemoine thought sentience is based on the ability to be a “self-reflective storyteller”, to have a “part of you that thinks about thinking about you thinking about you”. This is a little convoluted, but it seems he was ultimately trying to get back to this pure “I” behind the action of thinking. In yoga, we call this unit “I” an atman. Atman refers to pure consciousness.

In this regard, Shrii Shrii Anandamurti, founder of Ananda Marga, said:

“The physical body is not yours. It belongs to another Entity who has placed the mind in this body, so now you think, ‘It is my body.’ The mind has been authorised to use this body, so the mind is thinking, ‘It is my body.’ The átman (or unit consciousness) is watching, witnessing what the mind is thinking. If the átman stops watching, the mind will stop working.”

Could we feasibly create an AI framework capable of housing an atman, or unit consciousness? Thinkers such as Ned Block have argued that consciousness is grounded in biology and that synthetic systems are simply the wrong kind of thing to have subjective experiences. What we do know is that the human being is a highly complex organism with a poorly understood interface to its individual consciousness. I am not saying it is impossible, but the relationship between unit consciousness and the corporeal entity in humans is still not properly understood by neuroscience, psychology or even Occidental philosophy. How, then, can we even begin replicating it in a non-biological AI framework?

AI scientists have, to date, concentrated on developing the capacity for super-mental-processing, super-learning and super-memory. Little thought has been given to how to interface an AI with a unit consciousness (let alone how to attract a unit consciousness to the AI and maintain the relationship). Unless and until AI scientists properly understand the science of consciousness in humans, they will not be able to create structures that can be sentient.

According to Michael Wooldridge, a professor of computer science at the University of Oxford who has spent the past 30 years researching AI (in 2020, he won the Lovelace Medal for contributions to computing), LaMDA is not really sentient; it is simply responding to prompts with learned responses. It imitates and impersonates. This is how some might be fooled into thinking there is sentience in some AI engines. In some cases of autism, people learn to imitate the words and emotions of others in order to fit in, to such an extent that many are fooled into believing the autistic person is the same as those they imitate. I liken AI to “autistic impersonation”, drawing on all the thoughts and expressions that have been published on the web: AI pseudo-sentience is really autistic impersonation at scale.

So can an AI become truly self-aware? Maybe one day, but not before we know ourselves, as my old high-school motto encouraged: “scio te ipsum”.
