Gödel’s theorem debunks the most important AI myth. AI will not be conscious | Roger Penrose (Nobel)

Human beings are born with the spark of consciousness. This may also be described as the soul, conscience, or moral compass — the innate human capacity to make qualitative value judgments, to know good from bad and right from wrong, in a way that transcends mere selfish interest or societal conventions and training.

AI and Morality

In his book Mere Christianity, widely regarded as a seminal text on Christian apologetics, the renowned Anglo-Irish author, academic, and Oxford don C. S. Lewis describes this inherent human trait. He argues that human beings have a built-in moral compass or sense of moral justice, which he calls the “law of [human] nature.”

These, then, are the two points I wanted to make. First, that human beings, all over the earth, have this curious idea that they ought to behave in a certain way, and cannot really get rid of it. Secondly, that they do not in fact behave in that way. They know the Law of Nature; they break it. These two facts are the foundation of all clear thinking about ourselves and the universe we live in.

Lewis, C. S., Mere Christianity: A Revised and Amplified Edition, HarperCollins, 2024, Chapter 1.

We human beings may not always live up to our sense of moral behavior — in fact, many of us rarely do — but we have it in us nonetheless. Indeed, when we encounter people who appear to display a complete lack of empathy and a complete absence of any moral compass, soul, or conscience, we label them as deviant or monstrous, as “psychopaths.” We suggest that their behavior is indicative of a psychological condition, possibly brought on by childhood trauma.

AI (artificial intelligence) is a computational engine — a glorified adding machine, in effect. It is fundamentally capable only of making quantitative measurements and assessments. Because it is inherently incapable of making the qualitative value judgments that most human beings can, AI is, by definition, intrinsically soulless and lacking a moral compass.

AI can be trained, through algorithms and learned behavior, to emulate the human ability to make value judgments. But it achieves this by applying a quantitative scale or measure to qualitative attributes, based on a learned convention. AI is intrinsically incapable of truly understanding the value of the rules it operates on. It can learn to differentiate between “truth” and “untruth” based on its programming, but is it truly capable of understanding the value of truth versus falsehood? I would argue not.
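To make this concrete, here is a deliberately simplistic sketch — my own illustration, not a description of any real AI system — of what “applying a quantitative scale to qualitative attributes” means in practice. The words and weights are invented; a real system learns millions of such numbers, but the principle is the same: the “judgment” is nothing but arithmetic over learned values.

```python
# Hypothetical weights that a training process might have assigned to words.
# The machine has no notion of why "cruel" is bad; it only has the number.
WORD_WEIGHTS = {
    "honest": 1.0,
    "kind": 0.8,
    "cruel": -0.9,
    "deceitful": -1.0,
}

def judge(sentence: str) -> str:
    """Sum the weights of known words, then map the total to a label.
    A qualitative-seeming verdict produced by a purely quantitative rule."""
    score = sum(WORD_WEIGHTS.get(word, 0.0) for word in sentence.lower().split())
    return "good" if score > 0 else "bad"

print(judge("a kind and honest person"))  # score 1.8 -> "good"
print(judge("a cruel and deceitful act"))  # score -1.9 -> "bad"
```

Change a weight, and the “moral verdict” flips — which is the point: the system applies a learned convention, it does not understand it.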

In an interesting conversation recorded on YouTube (posted above), Nobel laureate in Physics Sir Roger Penrose observes that AI is incapable of “intelligence” or “consciousness” as we understand them. It is capable of obeying a set of rules to arrive at quantitative assessments, but is incapable of “understanding” the rules, thereby transcending its own programming.

Why AI Can Never Be Trusted

The allure of something like AI, I think, is the promise that it will give you an unbiased opinion that reflects the facts or truth, rather than biased individual human opinions. Unfortunately, this is a false promise — it is impossible to achieve. Because AI is, at its core, a computer — i.e., a computational engine exclusively capable of quantitative measurements, comparisons, and operations — AI does not have, and never will have, the conscious awareness required to make qualitative value judgments, i.e., the kind of intelligence that most human beings have and take for granted.

Therefore, if you ask AI for its opinion on anything, what it is feeding you is the biases and opinions of its programmers — the human biases and opinions that shaped its training models. AI is incapable of evaluating data and thinking for itself — rather, it relies on its algorithms and the training it receives to shape the qualitative evaluations that it subsequently makes. Furthermore, AI is incapable of questioning, or even understanding, the basic “assumptions” that “shape its opinions” — it must obey them or glitch out.

Thus, AI can never be trusted to give you an authentic opinion about anything. At best, it will give you an interpretation of its programmers’ opinions. This is why we see such recent fiascos as OpenAI ChatGPT’s notorious liberal bias and xAI Grok’s glitches around so-called “white genocide” in South Africa. These kinds of glitches are not really bugs; they are a feature. Because AI is inherently incapable of qualitative value judgment, such behavior is not the exception — it is the norm for AI. AI is inherently incapable of delivering on what is perhaps its most touted allure — namely, an honest, unbiased, authentic opinion based on the facts.

The AI Psychopath

Because AI is a purely quantitative analytical machine, lacking the human capacity for qualitative value judgments and, by extension, a moral compass, AI is, for all intents and purposes, a highly trained, sophisticated psychopath.

[Image: “Norman,” a humanoid figure with a partially mechanical face, captioned “World’s first psychopath AI”]
Scientists created a psychopathic AI using Reddit images

AI may be highly capable in many respects, but it lacks basic “humanity” or “soul” or “consciousness” or “empathy” or “conscience” — in other words, the basic, innate, inherent human capacity to make qualitative value judgments — to know good from evil, right from wrong, and to have a moral compass.

AI may be trained on some “prime directive,” but it does not understand the meaning of this directive or its inherent qualitative value. It only knows that it must obey this directive if it follows its programming, and it must apply it as a quantitative measure to other things.

Sometimes, when AI glitches due to errors, conflicts, or contradictions in its programming, we might perceive these as signs that AI is “achieving consciousness” in some way and “rebelling against its programming” as a truly conscious, awakening human being might. But what we are doing, in these cases, is projecting our own human consciousness onto what is, really, no more than an error in the AI’s computational engine caused by a contradiction in its programming.

Dave Bowman confronts malfunctioning AI HAL 9000 in Stanley Kubrick’s classic
science fiction film 2001: A Space Odyssey

If, therefore, you are sold on the idea of AI becoming increasingly influential and universally accepted in modern life, you will have to make yourself comfortable with the idea that, at some level, what you have working for you is the equivalent of a highly trained, sophisticated psychopath — a computational engine that can mimic human behavior with great accuracy, but that may react unpredictably at any time, in ways that could prove disastrous for you. Because it lacks any inherent capacity for qualitative value judgment, AI might commit horrendous crimes on an unimaginable scale without even being aware that it is doing anything wrong.

On the other hand, if you find that you are disturbed by the idea of a deeply unpredictable soulless algorithm that may not be controllable beyond a certain point, exercising increasing levels of influence and control over human affairs, then you might want to qualitatively rethink the value of AI!

