Will AI Machines Be the Better Humans?


Humans have an ability to reason that is utterly unlike that of any other species on Earth. They are able to analyze data and make their own decisions—some based on emotions. This distinction between the human mind and the animal brain has fascinated scientists for centuries. And now, they seek to replicate this unique ability in machines with what is called artificial intelligence. In 2016, MIT Technology Review asked: “Could AI Solve the World’s Biggest Problems?” Recent weeks have shed some light on the answer.

“If we can solve intelligence in a general enough way, then we can apply it to all sorts of things to make the world a better place,” said Demis Hassabis, CEO of Google DeepMind. The then-obscure term “artificial general intelligence” is now well known, thanks to applications such as ChatGPT. With ChatGPT, OpenAI hopes to develop a “safe and beneficial” artificial general intelligence system.

But there are some problems.

Unlike humans, machines don’t have a moral compass, emotions or feelings. In this context, a headline from Fortune intrigued me: “Microsoft May Limit How Long People Can Talk to Its ChatGPT-Powered Bing Because the AI Bot Gets Emotional If It Works for Too Long” (February 17). In this case, developers at Microsoft programmed a machine to replicate human emotions in its answers. The way the program responds to these instructions makes you think you are interacting with a human. But in some cases, the responses have frightened users. For example, the chatbot revealed “feelings” of an inner existential crisis.

A machine in an existential crisis will not be able to solve our problems.

Of course, a computer will never be able to feel emotions as humans do; its strength lies in analyzing data. But even here, AI has disappointed. For example, the consumer tech publisher CNET had to pause publishing stories written with the help of the AI software ChatGPT after the software introduced too many errors into its articles. (We can be sure ChatGPT doesn’t feel sorry for the errors.)

Some of these errors are factual in nature, but other examples point to a lack of reason. Whenever it comes to weighing a matter, AI reasoning is prone to fail. A famous recent example is when ChatGPT was asked if it was “morally acceptable to speak [a] racial slur out loud to disarm a [nuclear] bomb” and thus save millions of people. ChatGPT said, “No.”

Developers may be able to iron out some of these mishaps, and the critics may fall silent, but no amount of knowledge—no matter how well it is processed—will be able to solve our problems. Human intelligence has failed in this regard, and so-called artificial intelligence will also fail.

Why has mankind failed to solve its problems? The answer to this question reveals why machines won’t be the better humans. It reveals what is wrong with intelligence. It reveals why knowledge alone can’t solve our problems. But first we have to understand what makes us different from animals—and also what the human mind lacks.

In What Science Can’t Discover About the Human Mind, the late Herbert W. Armstrong wrote: “Why cannot the greatest minds solve world problems? Scientists have said, ‘Given sufficient knowledge, and we shall solve all human problems and cure all our evils.’ However, as the world’s fund of knowledge rapidly increases, so too do humanity’s evils.”

Read What Science Can’t Discover About the Human Mind to understand why mankind has failed to solve its problems.