
The Future of AI is Artificial Sentience

How do you *feel* about that?

Rod Castor

Much of today’s discussion around the future of artificial intelligence focuses on the possibility of achieving artificial general intelligence: essentially, an AI capable of tackling an array of unrelated tasks and working out how to handle a new task on its own, much like a human. At this stage in the game, though, the discussion around this kind of intelligence seems less about if and more about when. With the advent of neural networks and deep learning, the sky really is the limit, or at least it will be once other areas of technology clear their remaining obstacles.

For deep learning to successfully support general intelligence, it’s going to need the ability to access and store much more information than any individual system currently does. It’s also going to need to process that information more quickly than current technology will allow. If these things can catch up with the advancements in neural networks and deep learning, we might end up with an intelligence capable of solving some major world problems. Of course, we will still need to spoon-feed it since it only has access to the digital world, for the most part.

If we want an AGI that can gather information on its own, a few more advancements in technology are things only time can deliver. Beyond the increased volume of information and processing speed, an AI will need fine motor skills before it will be much use as an automaton. An AGI with control of its own faculties can move around the world and take in information through its various sensors. Again, though, this is a matter of waiting, another case of when, not if, these technologies catch up with the others. Google has experimented successfully with fine motor control. Boston Dynamics has canine robots with stable motor skills that will only improve in the coming years. And who says our AGI automaton needs to stand erect?

In short, both of these missing ingredients for AGI are more a waiting game than a question of whether they are possible. As these technology areas improve, AI, and subsequent robotic versions of AI, will slowly become available and more prevalent. In many ways, humanity will accept these introductions as the normal progression of things. And even when these AGIs do arrive in our mainstream daily lives, they may be the grunts of humanity: working away but seeming lifeless, without a personality.

However, one aspect of AI getting less attention, perhaps the next big thing, is artificial sentience. Sentience can be divided into two categories. The first category is awareness. This one would keep the high-minded AI geniuses busy trying to pass the Turing test. If the machine is self-aware and aware of others in a person-like way, we will have new ethics rules to consider.

The second category is emotions or, perhaps, feelings. Remember, we are talking about artificial sentience. So, in this case, we are referring to artificial emotions. I’m not arguing for an actual emotional experience, but for the capability to show the proper emotions when communicating with humans. So, the machine may emote sadness when conversing with someone who just lost their job. It may portray outrage when it hears the person was dismissed without cause. And it will correctly display joy upon learning that this same person has just landed her dream job. “Sounds like your layoff was a blessing in disguise,” it might say.
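To make the idea concrete, here is a minimal, purely illustrative Python sketch of the simplest possible version of “showing the proper emotion”: a lookup that maps a detected conversational event to an emotion label and a matching reply. The event names, rules, and the respond function are all invented for illustration; a real assistant would rely on learned models of the conversation rather than a hand-written table.

```python
# A toy, purely illustrative sketch: map a detected conversational event
# to an emotion the assistant should display, plus a matching reply.
# The event names, rules, and replies below are invented for illustration.

DISPLAY_RULES = {
    "lost_job": ("sadness", "I'm so sorry to hear that. That must be hard."),
    "fired_without_cause": ("outrage", "That's not right. You deserved better."),
    "landed_dream_job": ("joy", "Sounds like your layoff was a blessing in disguise!"),
}


def respond(detected_event: str) -> str:
    """Return a reply tagged with the emotion the assistant should display."""
    emotion, reply = DISPLAY_RULES.get(
        detected_event, ("neutral", "Tell me more about that.")
    )
    # In a real system, the emotion label would shape tone of voice, word
    # choice, or an avatar's expression; here we simply tag the text.
    return f"[{emotion}] {reply}"


if __name__ == "__main__":
    print(respond("lost_job"))
    print(respond("landed_dream_job"))
```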

As far as I can tell, this second category of sentience — believable emotion — is the future of AI, not general intelligence.

For the rest of this article, when I use the term sentience, I’ll use it to refer to this second category of sentience: artificial emotions or feelings. I believe this kind of sentience is the future of AI for two main reasons. First, people will trust, relate to, more comfortably interact with, and even endear themselves to an AI that connects with them emotionally instead of just intellectually. Second, an AI portraying realistic emotions, verbally or in writing, can pass as human in short, transient conversations.

Have you ever read a book and become emotionally connected to the protagonist? You know the person is fake, right? He or she is a work of fiction. But once you’ve connected to them emotionally, you care about them in the same way you care about your real-life friends. The fictional story has stirred your emotions. The story may have been made up, but the emotions you felt from it are real. If the author has done a superb job of storytelling, your real emotions make you care about the characters in the book. Yep, emotions are like that.

Now transition to a scenario where you are interacting with an artificially sentient AI. The emotional connection between you and it will grow because its emotions seem genuine. One day, future you will go to the store and interview a new robotic assistant. Perhaps fine motor control isn’t even available yet; you’re just talking to software in a black box. The first black box you interview is more than capable of handling all the tasks you need it to do, but it doesn’t use any form of emotion when speaking with you. You interview another black box; this one has a bright red pinstripe down the side to show its style. It does everything the first box can do. Plus, when it talks to you, it genuinely seems to care.

“How are you this morning, Black-box?”

“I’m good, Jim. How are you? I saw that picture of little Susan you posted online yesterday. She is so cute! And looking more like her mother every day.”

You just bought the pinstriped box.

This kind of emotional attachment between a human and sentient software can be found in the movie Her. In the movie, it’s both fascinating and disturbing. But you get the point. Emotional connections matter to humans. We don’t seem to care whether the attachment is to a real person or a fictional character in a book. The evidence supports the conclusion: people will choose an emotionally capable AI over an emotionally devoid one.

But even more impressive, and maybe more scary, than your lovely new pinstriped assistant is an artificial sentient machine capable of passing as human. First, I want to be clear. I am not suggesting this sentient AI is standing in your living room, and you think it’s human. That’s still the stuff of Sci-Fi. To explain my point, let me set the scene.

You need a haircut. You pull your smartphone from your pocket or purse and dial the salon. A young lady answers, “Good morning! Thanks for calling The Best Styles Salon. How may I help you?”

“Yes, I need a cut and color.”

“Yes, ma’am. Do you have a preferred stylist?” the young lady asks.

“Yes. I always use Nick. He’s the best!”

“Oh, Nick is great. Everyone loves him.” Pause. “When would you like to come in?”

“Wow. This week is really busy. Would you have something on Thursday, early afternoon?” you share.

“Hmm. Let me look.” Pause. “Well, Nick is already booked on Thursday afternoon. But he does have an opening at ten that morning. Any chance that would work?” Another pause. “Or we can try another stylist,” the young lady says.

“Well, you know what? I’ll make it work. I’ll be there at ten.”

“Wonderful! I have you down for Thursday at 10 am with Nick. Is there anything else?”

You finish with, “Nope. That’s it. See you then.” And you hang up the phone.

Imagine if the young lady at the salon were a sentient AI. If you had called the salon and an automated system of today had answered the phone, you’d find a new salon. Everyone hates being funneled into an automated system.

“Please press one or say the name of your stylist.”

“I don’t think so.”

“Did you say, Alfonso?”

“No, I said ….” Click.

An AI capable of realistic emotional conversation is a game-changer for business and society. This kind of technology is already in development. At its I/O conference in May 2018, Google demonstrated a conversation between a machine and a person. The person involved was unaware she was speaking to software. Google Duplex, the AI assistant, had no problem passing as a human caller. In that demonstration, the AI called a salon to schedule a haircut on a person’s behalf, and the conversation was indistinguishable from one between two human participants.

A sentient AI’s ability to pass as human in a limited interaction is also being demonstrated through the written word. AI technology is writing articles and books that read as if a human had written them. In some cases, articles by AI might be better written than those written by humans; just read some of my stuff for proof.

All of the efforts to mature AI from its current task-specific capabilities to a general intelligence technology are important. They will bring significant value as AI continues to grow into adulthood. But sentience, the emotional kind, is closer than a fully developed AGI and, in my estimation, even more important to the success of AI adoption and usefulness in our very emotional, human world.

Rod Castor helps companies Get Analytics Right! He helps international organizations and small businesses improve their data analytics, data science, tech strategy, and tech leadership efforts. In addition to consulting, Rod also enjoys public speaking, teaching, and writing. You can discover more about Rod and his work at rodcastor.com and through his mailing list.
