Research: The ethics of a robot therapist for kids

by Pieter Werner

Researchers at the University of Rochester Medical Center (URMC) have raised ethical concerns about the use of artificial intelligence (AI) mental health chatbots for children, citing developmental, social, and regulatory challenges. While AI-based mental health applications are becoming more common, particularly as tools to bridge gaps in the U.S. mental health system, most are designed for adults and remain largely unregulated.

In a peer-reviewed commentary published in the Journal of Pediatrics, Bryanna Moore, PhD, assistant professor of Health Humanities and Bioethics at URMC, emphasized the need to account for children’s unique cognitive and emotional development when evaluating AI tools for pediatric mental health care. According to Moore, children’s developing minds and their reliance on family and social environments make them particularly vulnerable to the limitations of AI tools. She noted that children may form attachments to AI chatbots, which could hinder the development of interpersonal relationships with peers and adults.

AI systems typically do not have access to the broader familial and social context that human therapists use to assess and treat pediatric patients. As a result, they may fail to detect signs of danger or provide appropriate interventions. Moore stated that unlike adult therapy, pediatric mental health care often involves observing and interacting with a child’s family to ensure effective treatment and safety.

Jonathan Herington, PhD, coauthor of the commentary and assistant professor in the departments of Philosophy and of Health Humanities and Bioethics at URMC, noted that AI tools may also contribute to health inequities. He explained that AI systems rely on training data that may not represent all populations equally. Without deliberate efforts to use inclusive datasets, the resulting tools may not perform adequately for children from marginalized or underrepresented groups.

Herington further cautioned that children from low-income backgrounds, who already face barriers to accessing mental health care, may disproportionately rely on unregulated AI tools in place of professional therapy. He emphasized that while AI chatbots may offer support, they should not be viewed as substitutes for human therapy.

Currently, the U.S. Food and Drug Administration has approved only one AI-based mental health app for treating major depression in adults. The lack of regulatory oversight means that AI therapy tools may be deployed without established standards for safety, efficacy, or equity.

The commentary was coauthored with Şerife Tekin, PhD, associate professor in the Center for Bioethics and Humanities at SUNY Upstate Medical University, who specializes in the philosophy of psychiatry and the ethics of AI in medicine. The authors advocate for greater transparency and ethical scrutiny in the development of AI therapy tools, particularly for pediatric use. They expressed interest in collaborating with developers to examine how ethical and safety considerations are integrated into product design and whether these tools are informed by engagement with children, parents, and healthcare professionals.
