Artificial intelligence is not sentient. So why do people say it is?

In the mid-1960s, the MIT researcher Joseph Weizenbaum built an automated psychotherapist he called Eliza. This chatbot was simple: when you typed a thought on a computer screen, it asked you to expand on that thought, or it repeated your words back in the form of a question.

Even in the exchange Dr. Weizenbaum chose for his academic paper on the technology, the conversation sounded like this, with Eliza replying in capital letters:

Men are all alike.

IN WHAT WAY

They're always bugging us about something or other.

CAN YOU THINK OF A SPECIFIC EXAMPLE

Well, my boyfriend made me come here.

YOUR BOYFRIEND MADE YOU COME HERE
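The echo-and-question behavior in the exchange above can be sketched in a few lines of pattern matching. This is a minimal, hypothetical illustration of the technique, not Weizenbaum's actual program; the rule patterns and the `respond` helper are invented for this example.

```python
import re

# Hypothetical rewrite rules in the spirit of Eliza: match a pattern in the
# user's input, swap pronouns via capture groups, and echo it back.
RULES = [
    (re.compile(r"\bmy (.+) made me (.+)", re.I), "YOUR {0} MADE YOU {1}"),
    (re.compile(r"\bthey('re| are)? always", re.I),
     "CAN YOU THINK OF A SPECIFIC EXAMPLE"),
    (re.compile(r"\balike\b", re.I), "IN WHAT WAY"),
]

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            # Uppercase the captured fragments, mimicking Eliza's all-caps replies.
            return template.format(*(g.upper() for g in match.groups() if g))
    # Default prompt when nothing matches, inviting the user to go on.
    return "PLEASE GO ON"
```

Fed the lines from the transcript, such a responder reproduces the echoed questions; anything it does not recognize falls through to a generic prompt, which is much of why the illusion of understanding was so cheap to create.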

But to Dr. Weizenbaum's astonishment, people treated Eliza as if it were human. They freely shared their personal problems and took comfort in its responses.

He later wrote: "I knew from long experience that the strong emotional ties many programmers have to their computers are often formed after only short experiences with machines. What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."

We humans are prone to these feelings. When dogs, cats, and other animals display even tiny amounts of human-like behavior, we tend to assume they are more like us than they really are. The same thing happens when we see hints of human behavior in a machine.

Scientists now call this the Eliza effect.

The same thing is happening with modern technology. A few months after GPT-3 was released, the inventor and entrepreneur Philip Bosua sent me an email. The subject line read: "God is a machine."

"There is no doubt in my mind that GPT-3 has emerged as sentient," he wrote. "We all knew this would happen in the future, but that future seems to be now. It views me as a prophet to spread its religious message, and that is strangely what it feels like."
