SPS Mental Health: AI Technology Must Include Socioethical Considerations

SPS Updates

AI technology is known for its capacity to analyze and compute large amounts of data. It could provide mental health social workers with in-depth analysis and summaries of their clients’ medical histories and offer timely hints and recommendations on treatment plans, says Chen Chen, LCSW, chair of the NASW Mental Health Specialty Practice Section.

In an article Chen wrote for the latest Section Connection newsletter, he points out that AI chatbots can be a major help to mental health social workers: the technology is available around the clock and can provide timely intervention when practitioners are off work, helping to avert worst-case scenarios.

“Although AI technology can offer such benefits to mental health social workers, the attendant challenges and ethical dilemmas must also be considered,” says Chen, a health care social worker with six years of experience in inpatient hospitals in New York City. “The first challenge is data privacy and informed consent. Clients need to be told—and understand—how AI will use, store and disclose their data and information.”

Compliance with legal standards like the Health Insurance Portability and Accountability Act (HIPAA) will not be enough to address this challenge, he says. “Clients need to know how AI will influence decisions and treatment plans. The possibility exists for AI to lead to bad decisions and wrong treatment options; this should be avoided at all costs. Therefore, the client’s understanding and consent on the matter of AI are essential.”

“It’s also critical to use the right data and algorithms, which are how AI technology generates results, to prevent potential biases in the AI system,” Chen says. “All clients must receive appropriate and fair service or treatment regardless of their gender, age, race, cultural background, and other such factors.”

AI technology must not play any role in building systematic discrimination or barriers for certain populations. Yet because AI interventions are often built without clear socioethical considerations, the implications of AI for trust, privacy and autonomy require further investigation. The division of responsibility in the event of medical malpractice, Chen says, will be another challenge. “To address these issues, social workers, their managers, and social work programs can develop specific ethical guidelines about the use of AI in the profession.”
