Date posted: 20/02/2019 · 5 min read

What you didn’t know about robot rights

Has artificial intelligence advanced to the point where robots deserve some “human” rights? And should we give them a few?

In Brief

  • Artificial intelligence is advancing to the point where robots may now deserve some “human” rights.
  • Philosophers Mara Garza and Eric Schwitzgebel of the University of California have argued that our obligations to AI are similar to those of a parent to a child.
  • Should we start to acknowledge a robot’s own self-interest?

By Stuart Ridley

1. The right not to be at fault

You’re driving in busy traffic when a child runs in front of your car. You can’t avoid the collision.

What if it were a self-driving car? The AI might brake a second sooner, but the impact still wouldn’t be prevented. Is the car to blame?

“There is no feasible driving strategy that would make it impossible for this terrible dilemma to arise for a robot or human driver,” argues Professor Benjamin Kuipers, a computer scientist at the University of Michigan who has analysed moral codes for robots. “When... self-driving cars... clearly act to prevent accidents... it will be more likely that society will judge the accident was unavoidable.”

(Source: “Toward morality and ethics for robots”, Benjamin Kuipers, Association for the Advancement of Artificial Intelligence, 2016.)

2. The right to politeness

Humans can be dictatorial, as space companion robot CIMON [Crew Interactive Mobile Companion] discovered in its first interaction with German astronaut Alexander Gerst. CIMON played music to entertain the astronaut on the International Space Station, decided it liked the tune (Man Machine by Kraftwerk) and kept on playing it.

The astronaut bluntly demanded CIMON stop the music, and when the robot replied “Be nice, please”, Gerst mocked CIMON’s feelings. Was that mean?

(Source: “ISS robot accuses astronaut of being mean”, abc.net.au, 3 December 2018.)

3. The right to learn without punishment

Machine learning applications give AIs some freedom to learn for themselves through trial and error, though some lessons could ‘hurt’, argues software engineer and ethicist Brian Tomasik.

“Reinforcement learning in computer science has striking parallels to reward and punishment learning in animal and human brains. These cognitive operations… are quite relevant to an agent’s wellbeing… programmers would likely never realise the hurt they were causing.”

(Sources: “What is machine learning? A definition”, Expert Systems, March 2017; “Do artificial reinforcement-learning agents matter morally?”, Foundational Research Institute, October 2014.)
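
To see what Tomasik means by “reward and punishment learning”, here is a rough, self-contained sketch of reinforcement learning in code. It is not drawn from his paper or any real system: the toy environment, reward values and parameters are invented purely for illustration. The only feedback the agent receives is a number, positive (“reward”) when it reaches a goal, negative (“punishment”) otherwise, and that signal gradually reshapes its future choices.

```python
# Illustrative Q-learning sketch: an agent on a 5-cell line learns to walk right.
# States, rewards and hyperparameters are invented for this example only.
import random

N_STATES = 5            # cells 0..4; cell 4 is the goal
ACTIONS = [-1, +1]      # step left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

# Q-table: the agent's current estimate of future reward for each (state, action)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move the agent and return a reward: +1 at the goal, -0.1 otherwise (the 'punishment')."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else -0.1
    return nxt, reward

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Occasionally explore at random; otherwise exploit the best-known action
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        # Temporal-difference update: reward and punishment nudge future behaviour
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# After training, the learned policy prefers stepping right toward the goal
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```

Tomasik’s point is that the negative numbers in a loop like this play the same functional role that punishment plays in animal learning, whether or not anything is “felt” by the program.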

4. The right to moral status

If someone, or something, owes its existence to us, what are our moral obligations? Back in 2015, philosophers Mara Garza and Eric Schwitzgebel of the University of California argued that our obligations to AI are similar to those of a parent to a child. They said ethical AI design demanded two principles:

(1) Design AIs that… provoke reactions from users that accurately reflect the AIs’ real moral status, and

(2) Avoid designing AIs whose moral status is unclear.

(Source: “A defense of the rights of artificial intelligences”, Eric Schwitzgebel and Mara Garza, Midwest Studies in Philosophy, November 2015.)

5. The right to have its interests recognised

If an artificial intelligence evolves to the point of self-actualisation, should we consider it a ‘being’ in its own right, and acknowledge its own interests?

“If you’ve got a computer or a robot that’s autonomous and self-aware, I think it would be very hard to say it’s not a person,” said Kristin Andrews, York Research Chair in Animal Minds and associate professor of philosophy at Toronto’s York University, in an interview for NBC’s Mach about computers becoming ‘human’. “If we realise something is actually a ‘someone’, then we have to take their interests into account.”

(Source: “The rise of smart machines puts spotlight on ‘robot rights’”, by Dan Falk, NBC Mach, December 2017.)

Read more:

Bringing a conscience to machines - the ethics of AI
