A team of researchers from the University of Hartford, the University of Connecticut, and the Max Planck Institute for Intelligent Systems has delved into principle-based behavior paradigms for the ethical behavior of autonomous machines. Their paper argues that the behavior of autonomous systems should be guided by explicit ethical principles, ultimately derived from a consensus of ethicists.
“This year marks the 50th anniversary of the movie 2001: A Space Odyssey,” said Michael Anderson, one of the researchers, according to Tech Xplore. “While reading The Making of 2001: A Space Odyssey at the turn of the century, it struck me that much of HAL’s capability had lost its science fiction aura and was on the cusp of being realized. It also struck me that they had gotten the ethics so wrong: If HAL was around the corner, it was time to get the ethics right.”
Anderson and his team have been looking at ethical principles that can be integrated into autonomous machines as part of a project called Machine Ethics. The researchers believe AI should be guided by ethical principles that not only ensure these systems behave ethically but also provide a basis for justifying that behavior.
The team created a case-supported, principle-based behavior paradigm called CPB. Under CPB, an autonomous system decides its next action using a principle abstracted from particular cases in which ethicists have agreed on the correct action.
“CPB uses machine learning to abstract a principle of ethical preference from ethically relevant features of particular cases of ethical dilemmas, where the ethically preferable choice is clear,” said Anderson. “A system can use that principle to determine the ethically preferable action at any given moment.”
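The learning step Anderson describes can be illustrated with a toy sketch: labeled pairs of actions from dilemma cases (where the ethically preferable choice is clear) are used to fit a simple preference over ethically relevant features, which can then rank any set of candidate actions. The feature names, perceptron-style learner, and action labels below are purely hypothetical, not the researchers' actual model.

```python
# Hypothetical sketch of abstracting an ethical-preference principle from
# labeled dilemma cases, in the spirit of CPB. All names are illustrative.

def learn_principle(cases, epochs=100, lr=0.1):
    """Each case is (features_a, features_b, preferred), where preferred
    is 'a' or 'b' and the features are equal-length lists of ethically
    relevant values (e.g. harm prevented, autonomy honored)."""
    n = len(cases[0][0])
    w = [0.0] * n
    for _ in range(epochs):
        for fa, fb, pref in cases:
            diff = [x - y for x, y in zip(fa, fb)]          # a minus b
            score = sum(wi * d for wi, d in zip(w, diff))
            target = 1 if pref == 'a' else -1
            if score * target <= 0:                          # misranked pair
                w = [wi + lr * target * d for wi, d in zip(w, diff)]
    return w

def preferable(w, actions):
    """Return the action whose features score highest under the principle."""
    return max(actions, key=lambda a: sum(wi * f for wi, f in zip(w, a[1])))

# Toy dilemma cases; features = [harm_prevented, autonomy_honored].
cases = [([1, 0], [0, 1], 'a'),   # preventing harm outweighs honoring autonomy
         ([0, 1], [0, 0], 'a')]   # honoring autonomy beats doing neither
w = learn_principle(cases)
actions = [('remind', [0, 1]), ('notify_overseer', [1, 0]), ('do_nothing', [0, 0])]
print(preferable(w, actions)[0])   # prints "notify_overseer"
```

The point of the sketch is the separation CPB relies on: the principle is learned once from ethicist-labeled cases, and the same principle is then reused to rank whatever actions are available at runtime.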
CPB is intended to help justify an autonomous system’s actions and explain why it chose to act in that particular way.
“When we were attempting to determine how the robot would know that it was in a situation where its ethics should kick in, we realized that it was always in such a state: ethics is not simply about not choosing incorrect actions, it is about choosing which of all its actions is ethically preferable in the given situation,” said Anderson. “Given this, CPB is committed to determine ethically preferable actions whatever the situation, and does so whenever the situation changes.”
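The “always on” behavior Anderson describes can be sketched as a loop that re-evaluates the ethically preferable action each time the perceived situation changes, rather than waiting for a designated ethical dilemma. The situation labels, options, and scoring below are hypothetical stand-ins for a learned principle, not the team's actual system.

```python
# Illustrative sketch of CPB's continuous re-evaluation: pick the
# best-scoring option whenever the situation changes. Hypothetical names.

def run(situations, options_for, score):
    """Re-rank the available options on every situation change."""
    chosen, last = [], None
    for s in situations:
        if s != last:                       # the situation changed
            last = s
            chosen.append(max(options_for(s), key=score))
    return chosen

# Toy example: an eldercare robot weighing a medication reminder
# against recharging its battery.
def options_for(situation):
    if situation == 'med_due':
        return ['remind', 'charge']
    return ['charge', 'idle']

priority = {'remind': 2, 'charge': 1, 'idle': 0}   # stand-in for a learned principle
print(run(['idle_time', 'idle_time', 'med_due'], options_for, priority.get))
# prints "['charge', 'remind']"
```

Even in the unremarkable `idle_time` situation the system still makes an ethically ranked choice, which is the crux of the quote: ethics here is not a special mode that "kicks in" but the default decision procedure.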
The researchers hope to build on this work to ensure that robots and autonomous machines make consistent ethical decisions, and to help humans better understand those decisions.
“We are planning on extending our research, adding more actions and considering more ethically relevant features to the end of developing an ethical principle that can be used to direct the behavior of all future eldercare robots,” Anderson said. “We will be testing these new ideas on a PAL Robotics TIAGo robot.”