Discussions about AI, robotics and ethics often take a very theoretical approach. The input of House of Ethics in this discourse is therefore multidisciplinary, rooted in a fundamental understanding of the technology involved. As generative AI makes its way into robotics, Rocking Robots discussed the current ethical challenges with its founder, Katja Rausch.
Katja Rausch is the founder of the House of Ethics. Before that, she was a professor at the Sorbonne specializing in information systems, and later taught data ethics at numerous universities and private schools. Alongside academia, she also worked as a professional management consultant. “I have always been engaged in multiple activities, driven by a desire to hybridize,” she says. “To network, to combine things, and to create new meanings from existing ones. That is how the House of Ethics was conceived.”
House of Ethics is an interdisciplinary hub focused on ethics, bringing together academics and professionals. “This mingling of ideas, minds, and cultures is, in my view, the only way to practice independent ethics. At our hub, we have people of all genders and ages. It is truly an open think tank. We make it a point to voice our opinions politely, because inaction is also a part of ethics: if you see something wrong and do not act, that is unethical.”
Large language models
“There are different degrees of involvement, but at a minimum, you should express your opinion, especially when it involves wrongs that impact rights, privacy, and human rights. This is particularly relevant today with the proliferation of large language models that are filtering into our society and biasing its systems. We believe it is crucial to go beyond ethics to regulation and human rights, as this touches on dignity and the fundamental rights we all hold, which go beyond issues like digital manipulation and consent.”
Last year, House of Ethics launched an initiative on swarm ethics, which deals with collective ethics, an important concept for robotics. It covers swarm algorithms and how robots can coordinate autonomously. “We draw on principles from anthropology, complex systems, and digital technologies and apply them to ethics. We have observed how quickly ethics can become outdated—as seen when ChatGPT was launched in November 2022—and how regulation often lags behind. Our approach is therefore proactive and collective, unlike traditional virtue ethics, which is slower, individually focused, and imposed from above. Our perspective is horizontal: ethics for the people, by the people, and with the people.”
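The autonomous coordination she refers to can be made concrete with a small sketch (an illustrative toy, not House of Ethics code, and the topology and values are made up): in many swarm algorithms, each robot repeatedly averages its state with that of its neighbours, so the group converges on a shared value without any central controller.

```python
# Toy illustration of decentralized swarm coordination (consensus averaging):
# each agent repeatedly averages its value with its neighbours' values,
# so the group converges without any central controller.
# All numbers and the ring topology below are hypothetical.

def consensus_step(values, neighbours):
    """One round: every agent moves toward the mean of itself and its neighbours."""
    new_values = []
    for i, v in enumerate(values):
        local = [values[j] for j in neighbours[i]] + [v]
        new_values.append(sum(local) / len(local))
    return new_values

# Four agents in a ring, starting with different values (e.g. headings or positions).
values = [0.0, 10.0, 20.0, 30.0]
neighbours = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}

for _ in range(50):
    values = consensus_step(values, neighbours)

# All agents end up near the group average (15.0), with no leader involved.
```

The point of the sketch is the ethical one Rausch raises: the collective outcome emerges from local rules, so responsibility cannot be pinned on any single agent.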
Generative AI
Generative AI is a topic that affects people and society. “What really intrigued me right from the start was the modus operandi of OpenAI. I come from a technical field, and I am an ethics advocate who knows about databases. So I saw they were gathering all that data without people’s consent. I would not eat a cake baked with stolen ingredients, so why should technology be any different? Generative AI sits on ethical permafrost—there is no ethics.”
“The launch was filled with promises. But now we are talking about hallucinations, about false statements, about opinion shaping. And it is based on transformer models, which are not relational databases. What we get is a statistical probability of words. Sometimes this collage makes sense, and mostly it does not. So at the House of Ethics, we have refused to use it. It is based on technology that is not reliable for the kind of work that we do. People mistake it for something powerful and trustworthy, which it is not.”
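The “statistical probability of words” she describes can be illustrated in a few lines (a toy with made-up probabilities, not a real transformer): the model produces a probability distribution over possible next tokens and samples from it, so there is no fact lookup, and plausible-sounding but wrong continuations can always appear.

```python
import random

# Toy illustration of next-token sampling: a language model outputs a
# probability distribution over candidate next words and samples one.
# There is no query against a relational database of facts, only likelihoods.
# (All probabilities below are hypothetical, for illustration only.)

next_token_probs = {
    "Paris": 0.55,    # statistically likely continuation
    "Lyon": 0.25,
    "Berlin": 0.15,   # plausible-sounding but wrong continuation
    "cheese": 0.05,   # nonsense can still be sampled
}

def sample_next_token(probs):
    """Sample one token proportionally to its probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "The capital of France is"
print(prompt, sample_next_token(next_token_probs))
```

Because the output is sampled rather than retrieved, even a low-probability wrong token is occasionally emitted, which is one way to picture the hallucinations mentioned above.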
Safety
The combination of AI and robots brings its own challenges. “For example, the question of liability: if something goes wrong, who is liable? And more importantly, what about safety? We do not talk that much about safety, but in the hyper-connected world we live in, we need to address it differently. This touches upon human rights, especially with moving boundaries and the danger of latent harm. The cursor is moving, and we need to keep an eye on that. The big companies in Silicon Valley are immensely powerful communicators. We need to protect our human heritage. I am not an anti-technology person; I like progress, but in a responsible and inclusive way.”
She specifically mentions medical robots. “If something goes wrong with a robotic surgeon, who is liable? For example, if a patient really wants to use it and the doctor does not, can the doctor be held liable for not assisting the patient with robotics? I think the integration of robots into the medical world will raise more ethical and legal questions than in other industries, because it is very complex, with three participants involved: the robot, the doctor, and the patient.”
Accountability
The unique aspect of the ever-growing number of humanoid robots is their anthropomorphism. “Industrial robots and service robots like a Roomba, a dishwasher, or a smart fridge do not affect me that much. There is no human connective intuitiveness. But when we talk about humanoids, a different level of abstraction inside ourselves is activated, and that makes them more dangerous. Even the uncanny valley, where robots that look too human feel strange and creepy, does not prevent us from projecting human-like capabilities and emotions onto them.”
“Couple that with generative AI, which is no longer a passive tool but active and proactive. For me, that raises alarms. It can be extremely beneficial, but it is also easy to manipulate people digitally if there is no perceived risk. Anthropomorphism can erase the sense of risk we might have with, say, a car. That is where I see the danger. And here as well there is the issue of physical safety, because we are much more open to it, and of accountability. If something goes wrong, who is accountable for all of this? So we should sober up, straighten it out, and say, ‘Look, we can use it for this and for that, but we should not use it for these kinds of things.’ Right now, it is a melting pot.”