Artificial Sentience vs Artificial Intelligence

by Jose Ferreira

In my recent essay, AI Is Fire, I wrote that AI wouldn’t suddenly race far ahead of human beings when it one day achieves sentience. We have nothing to fear from a super-advanced AI like Skynet or the Matrix in the far future. What we should be concerned about is the potential of simple AIs, in the near future, to put large numbers of people out of work.

There are really two meanings of “AI,” and they are routinely conflated. One is the idea popularized by the likes of Kubrick and Spielberg, and warned about by Musk and Hawking: that AI will one day achieve conscious, sentient, self-aware thought, and will thenceforth improve itself at the speed of light, leaving humankind, which improves at biological speed, in the dust. To un-conflate the two meanings, I’m going to refer to this one as “Artificial Sentience.” Musk calls it “a deep intelligence in the network.” Hawking believes, “It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” The human race is nowhere near producing AS, and we have no clear sense of how we would even do so.

Then there is what people call “AI” today—basically, a variety of software that tries, tests, and auto-corrects its strategies for a given task. Such applications, and the available tools to build them, are increasingly common. They are not much different in theory or in kind from the original use of computers: calculating complex math problems. Their foundation is still the crunching of lots of numbers at great speed toward a specified goal, onto which are layered algorithms that sample data, try strategies, observe and remember consequences, and adjust future strategies accordingly.
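
To make this concrete, here is a minimal sketch, in Python, of that try-test-adjust loop: an “epsilon-greedy” routine that samples strategies, remembers their observed payoffs, and shifts future choices toward whatever has worked. It is a generic illustration, not code from any particular product, and the strategy names and payoffs are hypothetical.

import random

def epsilon_greedy(strategies, payoff, trials=1000, epsilon=0.1):
    """Try strategies, observe and remember payoffs, favor what has worked."""
    totals = {s: 0.0 for s in strategies}  # remembered consequences (summed payoffs)
    counts = {s: 0 for s in strategies}    # how often each strategy was tried

    def average(s):
        return totals[s] / counts[s] if counts[s] else 0.0

    for _ in range(trials):
        if random.random() < epsilon:
            choice = random.choice(strategies)     # try something new
        else:
            choice = max(strategies, key=average)  # exploit the best strategy so far
        reward = payoff(choice)                    # observe the consequence
        totals[choice] += reward                   # remember it...
        counts[choice] += 1                        # ...and let it adjust future choices

    return max(strategies, key=average)

# Hypothetical usage: the payoffs are noisy, yet the loop converges on the best option.
true_means = {"a": 1.0, "b": 2.0, "c": 1.5}
print(epsilon_greedy(list(true_means), lambda s: random.gauss(true_means[s], 0.5)))  # almost always "b"

Nothing in this loop understands what a “strategy” is; it is just number-crunching toward a specified goal.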

Toasters vs calculators

The threat people fear from AS is existential. The problem with AI is merely economic—it will take jobs away from people.

There is nothing truly intelligent about artificial intelligence software, any more than there is about any other kind of software, so it is perversely named. Electronic machines are better than human beings at any number of tasks. Toasters can warm bread via heat coils better than humans can by blowing on it. Calculators have long been better than humans at arithmetic. AI is better at sampling, testing, and optimizing over huge sets of data.

There is no essential difference between electronic machines and computer software. But there are some superficial differences that explain why we put them in different categories. Toasters are an old technology that does something we don’t associate with intelligence. Calculators perform tasks we do associate with intelligence, but they are still an old technology whose underlying mechanics are easy to understand. So we think it’s ludicrous to regard calculators as intelligent, independent beings. AI is a new technology whose underlying mechanics are not easy to understand. Based on reliable trends in the computer industry, we anticipate AI becoming dazzlingly more powerful and complex in the future. And since it’s hard to predict the future, it’s very difficult to imagine what these complex systems could turn into.

But why should we think that improved, future AI will magically become truly intelligent? AI, like calculators and toasters, like animals and even humans, can perform marvelously at certain tasks without understanding that it is doing so—what philosopher and cognitive scientist Daniel Dennett calls “competence without comprehension.” When machines outperform us at a mechanical task, we take little notice. When AI outperforms us at a mental task, it seems smart. But that’s just a cognitive trick. Cognitive tricks can be very convincing—there’s a great TV show entirely devoted to them—but they aren’t real.

AI is not smarter than humans and never will be

It’s a dumb machine, doing tedious calculating tasks better than we can or care to do ourselves. Human intelligence doesn’t work this way—we’re not even particularly good at simple calculating tasks. So it stands to reason that making AI ever better at calculating an ever wider array of tasks is not going to make it spring to life one day and become self-aware.

A great many people in science and technology fields seem to think that merely increasing the power of AI will cause some mystical emergence of consciousness from its underlying programming. Why? Consciousness does not result from computing power in the human brain; if anything, it’s the other way around. So why would computing power perforce lead to consciousness in an electronic brain?

Talk to anyone on the cutting edge of AI today and they will concede that a lot of what’s called AI is pretty dumb, but they will insist that some of it is really impressive. In my experience, though, this group of people is easily impressed, especially by themselves. There is no basis for believing that anything currently being worked on in AI will ever spring to life. Over time, AI chatbots and call-center programming will get ever better at tricking humans into thinking they’re talking to another human, but that’s not the same as actually being AS.

A very long way off

All that said—one day, if we don’t destroy ourselves first, we will indeed create Artificial Sentience. But the deep intelligence in the network is still a very long way off. Meanwhile, we have simple AI to worry about. Could it fundamentally alter the human labor market in a way never seen before? Or will labor markets respond as they always have—by finding new, more productive tasks for displaced workers to do?
