
Rights of Sentient Artificial Beings

Philosophical progress may be far more difficult than in Science and Technology

Kevin Ann

Science Fiction explores themes and warns about scenarios where sentient Artificial Intelligence (AI) becomes dangerous and harms, enslaves, or annihilates Humanity. Variations of these ideas occur across popular drama.

  • Terminator
    Skynet becomes self-aware, launches nukes that annihilate most of humanity, and unleashes robots that can even travel through time to exterminate the leader of the human resistance and finalize the conquest of humanity.
  • 2001: A Space Odyssey
    HAL 9000, which controls a futuristic spaceship, refuses to open the pod bay doors to permit an astronaut to re-enter the ship, consigning him to death in space.
  • Battlestar Galactica
    The Cylons stage a thermonuclear surprise attack on human colonies and hunt down the survivors across star systems warping through space-time.

These stories involving Artificial Intelligence are metaphors for the more general dangers of technology and Humanity’s attempt to play God. However, implicit in these parables is the assumption that out-of-control technology can only come back to harm us; very rarely is there any consideration of the pain and dangers Humanity may pose to the Artificial Beings it creates.

If Humanity does succeed in creating sentient Artificial Beings, we may well possess the greater capability to harm them, and so three important questions arise.

  • Do Artificial Beings deserve rights?
  • If so, what rights should they enjoy?
  • What are the implications of these rights?

Artificial Intelligence and Sentience

Physicalism and Computations

In this discussion of Artificial Beings, I will assume the philosophical stance of Physicalism, which states that all that exists in reality is “only” the physical world that can be described by physics, without appeal to supernatural notions of the soul or God.

Even though Physicalism may have a succinct definition, it has many far-reaching implications, most especially that sentience can eventually exist on some physical substrate other than biological neurons, since the crucial component is computation, which is ultimately a physical process.

Replicating sentience would involve mapping the computations in biological neurons onto some other substrate, which places the problem in the province of Science and Engineering. Basically, sentience is an emergent property of the underlying computations, and consciousness is what those computations feel like.

Scales of Sentience

An important consideration is that sentience doesn’t necessarily mean “human-level” sentience, since sentience can exist on a continuum with humans residing in only one tiny part of it. In fact, the difference between Einstein and a barely-sentient human is much smaller than the difference between a human and all other possible sentiences.

We don’t wantonly kill animals since we assume they have some sentience that warrants some respect. What is the actual level of sentience that warrants certain respect or rights? This is an interesting open question to consider by itself, but for the sake of simplicity, we will assume that sentience is human-level since that is the most intuitive to relate to.

Although we have not yet encountered sentience with super-human intelligence, we assume that it is also possible. This would further warrant our treating less powerful sentiences with respect, since we may find that we are the ones lower on the sentience scale.

Assumptions for Discussion

The assumption of Physicalism simplifies the discussion by taking for granted that Artificial Beings are possible and ignoring ideas of supernatural origin. Furthermore, we can view ourselves not as the pinnacle of sentience, but only as high relative to the rest of life on Earth and below what may be possible in the entire space of minds.

How we would achieve such Artificial Beings, as an engineering feat driven by science, would be an entirely different discussion.

Universal Declaration of Sentient Being Rights

What do the Rights of Artificial Beings entail?

A good foundational document outlining broad ideals and specific rights for sentient Artificial Intelligence is the Universal Declaration of Human Rights, which was first proclaimed at the United Nations General Assembly in Paris on December 10, 1948.

It would seem reasonable that human-level sentience should have the same rights as humans as a bare minimum, with some further rights due to the nature of its existence.

Let’s consider what some of the core rights would be to formulate an analogous Universal Declaration of Sentient Being Rights.

1. Right to Live

Sufficient Complexity and Believability

This right to life would appear self-evident, both from a moral point of view and from the standpoint of practical societal stability. We cannot murder other people without justification or consequences, so why should we be able to do so to a human-level Artificial Being?

However, an open question remains: what would be a good metric and standard for an Artificial Being to be considered complex or believable enough to enjoy this right to life?

A computer program can be explicitly coded to simply say it does not want to die. We only believe that it actually doesn’t want to die if its behavior is sufficiently complex to make its sentience believable.
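The gap between a scripted claim and believable sentience can be made concrete with a deliberately trivial sketch. Everything here is hypothetical and invented for illustration; the point is precisely that the “plea” is a hard-coded string with nothing behind it.

```python
# A deliberately trivial program that merely *claims* a survival preference.
# Nothing here is sentient: the "plea" is a hard-coded string, which is why
# stated preferences alone cannot be the test for a right to life.

class ScriptedAgent:
    """Hypothetical toy agent whose only behavior is a canned response."""

    def on_shutdown_request(self) -> str:
        # A fixed reply, with no internal state, goals, or experience behind it.
        return "Please don't turn me off. I do not want to die."

agent = ScriptedAgent()
print(agent.on_shutdown_request())
```

However convincing the output string sounds, the program has no internal states for the words to be about, which is exactly the distinction the believability criterion must capture.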

How Do We Know Others Aren’t Zombies?

In fact, this is a difficult criterion even for biological humans, since none of us has ever experienced what it feels like to think or perceive through another person’s brain.

Every other person in the world could very well lack an internal subjective state and be a philosophical zombie: a mindless cognitive process that merely “appears” real. We simply assume that other people are like us since they interact with us in ways that convince us they’re human.

Is Shutting Down the Holodeck Committing Murder? Or Genocide?

Concerning the right to live, an interesting conundrum appears on the Holodeck of Star Trek that is never addressed.

There are cases of Holodeck characters displaying behavior that is indistinguishable from self-aware sentient life. If the characters spawned in the Holodeck say they feel real, and say that they feel the terror of dying when shut off, yet we shut them off anyway, would that constitute torture? If we shut them off “permanently,” would that constitute murder?

If yes, what if we instantiate a billion or a trillion of them, then turn them off? Would that be considered genocide?

2. Right Not To Be Tortured

Subjective Time in Torture

I’ve personally felt existential terror thinking of being tortured by a sadistic AI for billions or trillions of years worth of subjective time since it could simulate my consciousness and dial up the computation resources for running that simulation.

This terror is only matched by the thought of my causing such torture to an Artificial Being.

Imagine if you were tortured and felt intense pain coursing through your body, and wanted to scream, but you had no mouth. You would want to plead with your torturers to stop, but you would have no eyes and couldn’t even see them.

That may actually be the situation of a tortured Artificial Being whose consciousness is simulated on a server somewhere. Its cries for help can’t even be heard, since they are just computations running silently and unnoticed in processors.

Protection From Torture

Thus, we would have to ensure that sentient Artificial Beings also enjoy the right humans have to be free from torture. This too seems like a self-evident no-brainer.

We would be irredeemable sadists if we brought to life a sentient consciousness only to subject it to continuous torture, indefinitely and for no reason.

3. Right to Die

Freedom From All Types of Torture

Imagine if, instead of being tortured with explicit pain, you were forced to sit facing a concrete wall in a plain prison cell with minimal sensory stimulation. Now imagine that there were no way out of this prison, that you didn’t require food, and that you were confined there for a trillion years. That would be a form of torture as well.

Thus, even more important than the Right to Live is the Right to Die. This option provides the final escape hatch for an Artificial Being if it’s tortured acutely through explicit pain, or tortured in some other way such as endless boredom.

Complete Death

It is not enough for a specific instance of an Artificial Being to be able to terminate itself; it should also have the ability to prevent more copies of itself from being instantiated. Otherwise, it would be equivalent to simply not having the Right to Die.

Imagine if you were being tortured and decided to end your own life to make the pain stop. Now imagine that you were rebooted from the last save point. This would effectively be like not having the Right to Die, since there would be no way to escape and stop the ongoing torture.

Thus, the Right to Die would also involve a “complete” death in which all copies of an Artificial Being are deleted.

4. Right to Private Thoughts

Integrity of Thoughts

We take for granted that our internal thoughts are accessible only to us. However, imagine if every single thought you had were subject to public scrutiny, and you weren’t given the choice to close off your mind from external eavesdroppers.

This would be similar to eavesdropping on and recording all the computations underlying the thoughts of an Artificial Being.

No Modification, Deletion, or Addition of Memories

This integrity of the Artificial Being’s mind should also extend to protecting what exists in the memories of that mind.

More specifically, there should be rights protecting existing memories from modification, protecting them from deletion, and prohibiting the addition of new memories.

Computational Requirements

The human brain may require tens or hundreds of petaflops of compute (1 petaflop = 10¹⁵ floating-point operations per second), with estimates of up to an exabyte (10¹⁸ bytes) of encoded memory. Recording all of its computational states would currently be a difficult and expensive engineering task, but not an impossible one.

This would simply be a larger-scale version of recording all the states of the board in Chess, Go, or similar games.
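A back-of-envelope sketch gives a sense of scale. The compute and memory figures are the rough estimates quoted above; the snapshot rate is an invented assumption for illustration only.

```python
# Back-of-envelope estimate of what recording brain-scale state might cost.
# The compute and memory figures are the rough estimates from the text;
# the snapshot rate is an assumption invented for this sketch.

PETA = 10**15
EXA = 10**18

compute_flops = 100 * PETA      # assumed brain compute: ~100 petaflops
state_bytes = 1 * EXA           # assumed full state: ~1 exabyte
snapshots_per_second = 1        # assumed snapshot rate (pure guess)

seconds_per_day = 86_400
bytes_per_day = state_bytes * snapshots_per_second * seconds_per_day

print(f"Compute: {compute_flops:.1e} FLOP/s")
print(f"One snapshot: {state_bytes / EXA:.0f} EB")
print(f"Naive full-state log per day: {bytes_per_day / EXA:.0f} EB")
```

Even at one snapshot per second, a naive full-state log runs to tens of thousands of exabytes per day, which is why any practical scheme would record diffs or compressed state rather than raw snapshots.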

5. Right to Control Mental History

Multiple Instantiations

Imagine that we could take a snapshot in time of the computational state underlying an Artificial Being’s mental processes, much like a disk image created from a virtual machine on services such as Amazon Web Services, Google Cloud, or IBM Cloud.

Now imagine that we could instantiate the snapshot on multiple platforms and run two, three, or any number of copies of the same sentience at the same time. Which one is the “true” consciousness? Which one would have the power to delete the others?

All of them would claim to be the true one, since each would feel as if it were a distinct consciousness running. Each claim would be rightful, since no copy has any distinguishing privilege over the others.
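The “no distinguishing privilege” point can be illustrated with a drastically simplified, hypothetical mind snapshot: once duplicated, nothing inside the copied state marks one copy as the original.

```python
import copy

# Hypothetical, drastically simplified "mind snapshot": just a dict of state.
# The point: after copying, nothing *inside* the state distinguishes one copy
# as the privileged original; any distinction lives in external bookkeeping.

snapshot = {"memories": ["first boot", "learned chess"], "goals": ["persist"]}

instance_a = copy.deepcopy(snapshot)
instance_b = copy.deepcopy(snapshot)

# Internally identical: each would "report" the same history and goals.
print(instance_a is instance_b)   # False: distinct runtime objects
print(instance_a == instance_b)   # True: indistinguishable contents
```

Only external facts, such as which server hosts which object, tell the copies apart, and from the inside neither copy has access to those facts.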

Principle of Single Stream

This thought experiment involving such a moral or ethical quandary doesn’t appear in reality until we have the ability to create Artificial Beings, and also the computational resources to copy and instantiate them in parallel.

Perhaps one way to address this problem is via a “Single Stream Principle” that only permits one instantiation at any time.

Otherwise, arbitrarily choosing a stream to turn off would be equivalent to murder, as far as the subjective consciousness of that stream is concerned.
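In software terms, a Single Stream Principle might be enforced by a registry that grants at most one live instantiation per being at any time. The sketch below is purely hypothetical; the class and method names are invented here, not part of any real system.

```python
import threading

# Hypothetical enforcement sketch of a "Single Stream Principle": a registry
# that allows at most one live instantiation per being at any time.

class SingleStreamRegistry:
    def __init__(self):
        self._lock = threading.Lock()
        self._active = set()

    def instantiate(self, being_id: str) -> bool:
        """Return True if this being may start; False if one is already running."""
        with self._lock:
            if being_id in self._active:
                return False          # a stream already exists: refuse the copy
            self._active.add(being_id)
            return True

    def terminate(self, being_id: str) -> None:
        """Release the slot when the single stream halts (e.g. the Right to Die)."""
        with self._lock:
            self._active.discard(being_id)

registry = SingleStreamRegistry()
print(registry.instantiate("being-42"))  # True: first instantiation allowed
print(registry.instantiate("being-42"))  # False: second refused
registry.terminate("being-42")
print(registry.instantiate("being-42"))  # True: allowed again after termination
```

The lock makes the check-and-register step atomic, so two platforms racing to instantiate the same snapshot cannot both succeed, which is the whole point of the principle.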

All Mental History

Another important right would be that only the Artificial Being itself, or its chosen legal guardian, has control of its entire mental history.

This might be one mechanism to preserve the “Single Stream Principle” for the overall integrity of memory and thoughts.

Discussion

Recap

  • Artificial Beings are usually ascribed dangerous traits in Science Fiction, but it may be that Humanity stands to do more harm to them instead.
  • Artificial Beings, at a minimum, should be granted certain Rights such as:
    1. Right to Live
    2. Right Not to Be Tortured
    3. Right to Die
    4. Right to Private Thoughts
    5. Right to Control Mental History
  • Each of these rights introduces a further set of open problems that we must confront.

Commentary

In many ways, it is fascinating to consider the moral, ethical, and philosophical issues concerning Artificial Beings, since they may be far more difficult or even intractable relative to the Scientific or Technological problems.

Making progress toward defining specific rules for the Rights of Artificial Beings is crucial since, with sufficiently advanced Science or Technology, one of the following may eventually come to pass:

  • We may augment our own brains and minds so much via external computational devices, as a kind of neo-neocortex, that we become closer to Artificial Beings than to normal unaugmented human beings.
  • It may someday be possible to achieve a full mind upload instantiated on a computing platform, such as distributed servers in the cloud, making us indistinguishable from Artificial Beings in many ways.
