A study published in Frontiers in Robotics and AI has examined how humans react to different types of lies told by robots. The research, led by Andres Rosero, a PhD candidate at George Mason University, explored the ethics surrounding robot deception. The study surveyed nearly 500 participants, presenting them with scenarios in which robots lied in settings such as healthcare, cleaning, and retail. The lies fell into three categories: external state deceptions, hidden state deceptions, and superficial state deceptions.
External state deception, exemplified by a robot telling a patient with Alzheimer’s that her deceased husband would be home soon, was the most accepted by participants, who reasoned that the lie spared the patient unnecessary distress. Hidden state deception, where a housecleaning robot secretly filmed a visitor, drew the strongest disapproval and was deemed the most unethical, with participants citing privacy concerns. Superficial state deception, where a retail robot falsely claimed to feel pain while moving furniture, was perceived as manipulative and was also widely rejected.
The study found that participants were more tolerant of lies aimed at protecting someone’s feelings, as in the external state deception scenario. In contrast, deceptions that misrepresented a robot’s capabilities or involved secret behaviors, such as hidden cameras, were largely condemned. The findings suggest that humans are more likely to accept robot lies when emotional protection is prioritized over transparency, while deceit involving privacy violations or manipulation is met with strong disapproval.
Rosero emphasized the need for further research, suggesting that studies using videos or roleplays could better approximate real-life responses to robot deception. He also expressed concern about the potential for manipulative uses of the technology, calling for regulations to prevent harmful deceptions by robots or their developers.