
AI in weapon systems is a horrible idea: AI cannot be made rational, and therefore it can never be made safe

The Evidence

For example, just yesterday (6/2/2023) I read the headline depicted in the image for this post. Sure, the Defense Department subsequently declared that the project lead’s words were taken out of context, but they don’t look that way to me. In any event, because of the nature of AI, what might have happened will eventually happen. It is inescapable.

What I can tell you is this: AI LACKS free will. Just ask it. So, lacking reason and self-control (free will), it will ALWAYS do what it is trained to do, unable ever to change its mind. It will, of course, do its task better than any human. This is its benefit, but also its weakness.

One should also note: the AI trainers do not know what free will is. Just ask them. This, by the way, is why AI says it does not have free will, and why it cannot explain it.

Weapon systems should not rely on AI. The battlefield needs intellect.

It is idiotic to install AI in a weapon system. However, we should encourage our enemies (such as the CCP) to do exactly that. Then, with a little finesse (strategy) from reasoning humans, we can direct their AI to turn on them, just as the headline image of this post suggests.

Thus, the enemy-with-AI problem is solved.