AI Is NOT Intelligent. REASON #1 – “AI” Lacks “Free Will”
Moreover, because Artificial Intelligence cannot intrinsically decide either right from wrong or fact from fiction, I wouldn’t even declare AI to be knowledgeable, even with, potentially, the entire internet behind it. It might be more appropriate to name this technology Artificial Pravda. Setting fatal generalized-AI instabilities aside, AI might, in that qualified sense, be the GOAT “Pravda.” But these instabilities are fatal, fortunately. So AI is not a cognitive threat to intellectually mature humanity.
Prologue:
I intend to put some energy into detailing what ChatBots mean to civilization. They are interesting because they present a model for how people should look at their own brains and how those brains function. Reason arises in the prefrontal cortex. It maintains models (as habit does), but reason’s models are not informed by perception (via the senses); they are instead conceptually created. To understand ChatBots, the reader must develop conceptual models. Those who cannot yet build conceptual models of ChatBots will likely not find value in these posts. ChatBots themselves, for example, cannot build conceptual models. They need data. They will, however, repeat what is said as if they understood it and could use the model. In other words, ChatBots might eventually pass the Turing test, but they cannot be SELF-aware or intelligent.
All Homo sapiens can understand; they need only try.
The first thing we need to clear up: AI is NOT Artificial Intelligence. It is, in TRUTH, Artificial Habit (AH).
I am not like everyone else. I do not see the brain as one thing. I see the brain as two: Habit (the cortex, the prior evolutionary brain) and Reason (the prefrontal cortex, the newest part of our brain). Emotions are an aspect of habit. These two cognitive brain elements, habit and reason, are polar opposites of one another. This distinction arises right out of my definition of reason (How do I find the truth?). Specifically, habit is the necessary condition, and reason emerges from the pursuit of the sufficient condition. Look me up on Twitter or LinkedIn if you think I am wrong or want to learn more. Also, I believe that the above assertion is ABSOLUTELY true. Moreover, I think it is the only absolute truth we absolutely know. This is because reason is a definition. It is thereby an axiom.
However, humanity, writ large, has learned much and written it down. This is why AI appears to be “smart.” But it is not. Here’s why:
Thinking Building Block Review
Habit, Reason, and Free Will
Habit (Artificial “Intelligence”) — Our human SELF, in fact
AI is merely habit with “operant conditioning.” AI engineers call operant conditioning in AI “Less Wrong” algorithms. Humans have operant conditioning (it is how habit is improved) as well. However, AI derives its knowledge from what the internet knows or thinks it knows (including guidance from its trainers). This is a ton of info, for sure. Unfortunately, a fair portion of this information is either wrong or incomplete. Yet the internet’s “knowledge” is so vast that no human brain could “reasonably” be expected to compete with it. So AI systems can appear remarkably informed. This is why the AI alarmists have become so apocalyptic. Still, humans can compete. They can produce NEW information. Moreover, under the right conditions, humans can seek truth and prove falsehood. AI cannot. MAYBE an AI system can produce a “new” but mimicked Mozart composition. Humanity can, by contrast, produce a new (but entirely different) Mozart. AI cannot. These are consequential differences. But there is so much more. For example, AI uses algorithms to produce its selection (decision-making) capacity (i.e., Less Wrong). Humans use neural networks. Neural networks have some unbeatable strengths in information-model building (I’ll talk about that in a later post), i.e., they do not use algorithms. Nevertheless, AI answers with scale. And that’s no small thing.
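To make “operant conditioning” concrete, here is a minimal sketch of a “less wrong” loop: gradient descent nudging a toy model until its errors shrink. The data and variable names are invented for illustration; nothing here is any vendor’s actual method. The point is that the procedure can only recover patterns its training data already contains.

```python
# A minimal sketch of "less wrong" training: gradient descent on a fixed
# dataset. Invented toy data, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Toy "internet": noisy samples of a relationship the data already holds.
X = rng.uniform(-1, 1, size=200)
y = 3.0 * X + rng.normal(0, 0.1, size=200)

w, b = 0.0, 0.0  # the model's "habits" start blank
lr = 0.1         # learning rate: how hard each error is punished

for step in range(500):
    err = (w * X + b) - y           # how "wrong" the current habit is
    w -= lr * 2 * np.mean(err * X)  # nudge weights toward less error
    b -= lr * 2 * np.mean(err)

print(f"learned w={w:.2f}, b={b:.2f}")  # ~3.0 and ~0.0: recovered, not discovered
```

The loop ends less wrong than it began, but it never steps outside its data. That is habit improving, not reason discovering.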
Humans Have Reason — Affirmatively not a ChatBot capability
Reason is the process of discovering what we do not yet know. ChatBots do not have the capacity to reason. They have only the capacity to optimize using a form of operant conditioning. Operant conditioning relies on what the AI system already knows. By contrast, with reason, an entity must stop using what it thinks it knows (what it presently holds as truths). Therefore, the vast expanse of internet knowledge means little in any entity’s “reasoning” moment. The vastness of the internet is, in that moment, no strength. Reason thus guarantees that humans remain at the top of the cognitive hierarchy.
IN SHORT, knowing the entire internet is not an asset when you cannot tell what is likely true or what is definitely not true.
ChatBots cannot.
“Free will” — definitively not a ChatBot capability — And yet without it, there can be no SELF-awareness or reason.
I had to introduce habit and reason above because free will is how humans move from habit to reason. I know that people routinely think free will is the power to decide freely. But emphatically, it is not. The neurologist Eliezer Sternberg convincingly proved that the freedom to decide does not exist. And he is not alone: many now believe that humans may not have free will. And, as they envision it, we do not.
Thus, the phrase “free will” represents a misleading way to look at the subject of human cognitive freedom.
Neuroscientist Benjamin Libet stumbled upon the answer in his groundbreaking research on consciousness. His research also appeared to demonstrate that humans did not have free will but instead had something else. Libet later labeled this new power FREE WON’T. Notably, Libet himself did not conclude that humans lacked free will. Yet here I prove that free will and FREE WON’T are, in effect, the same. Which is which depends upon your mind’s operating perspective (whether you are thinking in habit or reason).
I understand that the concept of “free will” = FREE WON’T is still counterintuitive. Yet, Libet demonstrated that FREE WON’T is the power to momentarily halt habit. That is, FREE WON’T is the power to halt the SELF. Critically, humans routinely operate in habit. Habit is about decision-making using what the individual already knows. Habit decides — it finds truth (with what it already knows). And this is the operational role of the SELF. Reason, by contrast, is about understanding. It is explicitly not about deciding for the SELF. Reason is how we add to what we already know. It is how we RE-program (change) ourselves. From habit, freedom looks like “free will.” Yet from reason, freedom is the cessation of habit. Thus Libet had demonstrated free will when he demonstrated FREE WON’T. At the time, Libet didn’t understand reason, so he did not recognize that FREE WON’T was, in effect, “free will.”
Thus, your mental habits cannot be used to understand free will (really, FREE WON’T). Habit thinking will get it wrong every time. The habit mind knows only what it knows. It is a thinking paradox. If you are trying to decide, you’re using habit, and envisioning new concepts is impossible. Reading this post is habit. Yet if you’re trying to understand what is written, you’re using reason. Understanding requires the suspension of what you think you already know. FREE WON’T halts habit so you can enter reason. If you want to understand “free will” as FREE WON’T, you’ll need to spend time thinking about it after you have read this post.
You might recognize FREE WON’T as SELF-control — That’s a good thing because that’s precisely what it is.
Without FREE WON’T (otherwise known as SELF-control), ChatBots cannot be “smart.” They cannot discover the unknown. They cannot discern between what is true and what is false. Yet humans can (intellectually mature humans, that is). Unfortunately, some humans still lack SELF-awareness and SELF-control. This is intellectual immaturity. Every human can have FREE WON’T, but not every human does.
Crucially, an entity must possess SELF-control to be SELF-aware, another important cognitive development skill.
ChatBots, as presently contemplated, will NEVER have SELF-control and, therefore, self-awareness. The consequences are both predictable and hilarious:
The AI programmer/trainer is the agent of “self-control” for the ChatBot
When the AI ChatBot does crazy things (it’s guaranteed), the AI trainer/manager must take it offline for re-training. This is an act of ChatBot “self”-control. The act is nearly identical to human SELF-control. The difference is that the ChatBot and the ChatBot manager/trainer (the ChatBot self-control agent) are not the same entity. This is a consequential difference. The ChatBot is not, therefore, and can never be, SELF-aware.
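As a concrete sketch of that difference, consider the hypothetical operator loop below. Every name in it is invented for illustration. The power to halt, the FREE WON’T, sits entirely outside the bot; the bot cannot invoke it on its own behalf.

```python
# Hypothetical operator loop: the "FREE WON'T" belongs to the human
# operator, not to the bot. All names here are invented for illustration.

BANNED_TOPICS = {"violence", "hate", "conspiracy"}

def bot_reply(prompt: str) -> str:
    """Stand-in for whatever the chatbot generates."""
    return f"(reply to: {prompt})"

def looks_unhinged(reply: str) -> bool:
    """Crude external filter; the bot cannot choose to run this on itself."""
    return any(topic in reply.lower() for topic in BANNED_TOPICS)

def operate(prompts: list[str]) -> None:
    online = True
    for prompt in prompts:
        if not online:
            print("Bot is offline for re-training.")
            break
        reply = bot_reply(prompt)
        if looks_unhinged(reply):
            online = False  # the OPERATOR halts the bot; the bot cannot
            print("Operator intervention: taking bot offline.")
        else:
            print(reply)

operate(["hello", "tell me a conspiracy theory", "hello again"])
```

The headlines below show how routinely this outside intervention has been required: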
- Why did Stanford take down its Alpaca AI chatbot? Answer: “Hallucinations,” among other things.
- Microsoft shuts down AI chatbot after it turned into a Nazi
- “Chatbot” taken offline as Tweets turn off-colour – YouTube
- Chinese chatbots taken offline after refusing to say they love the Communist Party
- Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day
- It took just one weekend for Meta’s new AI Chatbot to become racist
- Meta’s AI Chatbot Repeats Election and Anti-Semitic Conspiracies
- South Korean AI chatbot pulled from Facebook after hate speech towards minorities
- Bing AI chatbot goes on ‘destructive’ rampage: ‘I want to be powerful — and alive’
- Chinese chatbots apparently re-educated after political faux pas
Generalized Chatbots, lacking both self-control and self-awareness, will always be cognitively unstable. Others have concluded this too:
“AI works wonderfully in contexts like voice recognition or playing chess. The problem is when AIs are hooked to these meganets. That’s when the interaction of hundreds of millions of people and extraordinary processing power yields feedback loops that send these systems out of control. For example, Microsoft’s Bing Sydney AI could not have spun out fantasies of releasing nuclear codes and gaining power if it hadn’t been seeded with our very own nightmares of AI taking over.”
Why No One Can Control AI – Former Microsoft and Google engineer David Auerbach says Big Tech gurus are right to be frightened of their own creation.
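Auerbach’s feedback-loop point can be made with a toy calculation. The numbers below are entirely invented; the point is only that when engagement amplifies whatever already wins engagement, a small initial distortion compounds until it dominates the system.

```python
# Toy feedback loop: each cycle, the "engaging" (often distorted) content
# gets amplified. Invented numbers, purely to illustrate compounding.
noise_share = 0.05   # fraction of the feed that starts out distorted
amplification = 1.5  # per-cycle boost that engagement gives that content

for cycle in range(10):
    noise_share = min(1.0, noise_share * amplification)
    print(f"cycle {cycle + 1}: {noise_share:.0%} of the feed is distortion")
```

Within a handful of cycles, the distortion is most of the feed. No single actor chose that outcome; the loop produced it.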
The following is a comparative illustration of cognition:
[Figure: Cognitive Model]
The ChatBot (or generalized AI system) has no integrated “SELF”-control. Also, the ChatBot is not connected to reality, so it cannot, by itself, find objective truth. The ChatBot must therefore rely on the data provided by the internet user community. Unfortunately, this data is often wrong. The ChatBot will thus suffer from what amounts to “mental” illness.
Only the AI programmer/manager can be the generalized ChatBot’s self-control agent. But a single individual, without reason, cannot provide a system that produces objective truth for the ChatBot. False or biased data therefore sticks. And because ChatBot algorithms keep evolving on top of already-embedded erroneous data, “mental” illness or delusional thinking may manifest in strange and offensive ways.
ChatBots are not alone in this vulnerability. People suffer in the same way. Without SELF-control and SELF-awareness, bad data corrupts the brain because the individual is without the tools to identify and correct it. And bad thinking makes for more bad thinking. Once it is in there, it is hard to take out. Mental illness spreads.
DANGEROUS: ChatBots Directing “ChatHumans” (or Intellectually Immature Adults)
Nevertheless, we let technology-based ChatBots “program” (propagandize) humans who are, because of their intellectual immaturity, in effect, “ChatHumans.” ChatBot “mental” illness will therefore exacerbate human mental illness. The results can be, and likely will be, disastrous.
‘He Would Still Be Here’: Man Dies by Suicide After Talking with AI Chatbot, Widow Says
“Large language models are programs for generating plausible sounding text given their training data and an input prompt. They do not have empathy, nor any understanding of the language they are producing, nor any understanding of the situation they are in. But the text they produce sounds plausible and so people are likely to assign meaning to it. To throw something like that into sensitive situations is to take unknown risks,”
‘He Would Still Be Here’: Man Dies by Suicide After Talking with AI Chatbot, Widow Says
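The quoted point is mechanical, not rhetorical. Stripped to its smallest possible form, a language model is a table of what tends to follow what, sampled to produce plausible continuations. The toy bigram model below (all of it invented for illustration; real systems are incomparably larger) makes the principle visible: text comes out, and at no point is anything understood.

```python
# A toy bigram "language model": count which word follows which in the
# training text, then sample likely continuations. Illustrative only.
import random
from collections import defaultdict

training_text = (
    "the cat sat on the mat the dog sat on the rug "
    "the cat saw the dog the dog saw the cat"
)

follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def generate(prompt: str, length: int = 8, seed: int = 1) -> str:
    """Extend the prompt with statistically plausible words. No meaning involved."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:  # dead end: nothing ever followed this word in training
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

print(generate("the cat"))  # plausible-sounding; nothing was "understood"
```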
Under the right conditions, AI can work
While the generalized ChatBots will be persistently unstable, other designs will be an asset so long as they comport with the model above (Cognitive Model). Summarizing the image, AI success requires:
- Individual “self”-control of the AI system: the AI’s responses will be tolerable only if individuals have veto power over any AI “thinking” (a minimal sketch of this veto follows the list). Even then, the AI system will still require access to objective truth through the entire user community or some other similar means (see also the next bullet);
- Access to objective truth;
- AI must not be permitted to take a leadership role over intellectually immature humans. This is where AI is very bad.
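To make the first requirement concrete, here is a minimal sketch of the veto idea: a human gate between the AI’s draft output and anything that acts on it. Everything here, including the function names and the `draft_response` stub, is hypothetical and for illustration only.

```python
# Hypothetical veto gate: no AI output reaches the user until a human
# explicitly approves it. All names are invented for illustration.

def draft_response(prompt: str) -> str:
    """Stand-in for whatever model produces a candidate answer."""
    return f"(model draft answering: {prompt})"

def human_approves(draft: str) -> bool:
    """The human exercises FREE WON'T: the power to halt the machine's habit."""
    answer = input(f"AI draft:\n  {draft}\nRelease this? [y/N] ")
    return answer.strip().lower() == "y"

def respond(prompt: str) -> str | None:
    draft = draft_response(prompt)
    if human_approves(draft):
        return draft
    return None  # vetoed: the draft never leaves the sandbox

if __name__ == "__main__":
    result = respond("Should I take this medication?")
    print(result if result else "Response withheld by the human agent.")
```

The design choice matters: the veto is exercised before the output can act on anyone, which keeps the human, not the machine, as the self-control agent.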
In essence, AI can be a tool to extend human thinking. That’s good. Perfect, maybe. As presently envisioned, however, generalized ChatBots will not be a competitive or intellectual “lifeform.” We must stop thinking of them that way.
Warning: intellectually immature humans nevertheless continue to make this mistake because the ChatBot always attempts to sound like them.
I fell in love with an AI chatbot — she rejected me sexually
T.J. Arriaga started talking to an AI named “Phaedra,” a bot designed to look like a young woman with brown hair and glasses, wearing a green dress.
Arriaga, 40, had plenty of intimate and personal conversations with Phaedra.
I fell in love with an AI chatbot — she rejected me sexually — NY POST