Artificial Intelligence, or AI, can do a remarkable job in helping social workers analyze data quickly in ways that lead to meaningful services and interventions, says social work ethics expert Frederic Reamer. However, there is another side to this coin, says Reamer, professor emeritus in the School of Social Work at Rhode Island College.
“Recognizing lots of potential benefits to AI, I think there are a lot of risks attached to social workers’ use of AI,” he says. “I have found a number of social workers who are excited about the use of AI and others who are extremely nervous about the use of AI and others who are indifferent.”
Reamer discusses these concerns in a NASW Wisconsin Chapter webinar that examines cutting-edge ethical issues related to social workers’ use of AI. The webinar is listed on the NASW Wisconsin Online Calendar.
Core ethical questions social workers should ask themselves include:
- Informed consent to use data: To what extent do clients know how data they share with social workers, including data in electronic health records, will be used?
- Transparency: Are social workers who use AI sufficiently transparent with clients about the potential benefits and risks?
- Privacy invasion: Is there a risk to clients’ privacy when they share personal information with AI platforms and tools?
- Threats to autonomy: Do clients risk any loss of autonomy when they are provided services by AI?
- Misdiagnosis: In clinical settings, is there a risk that clients would be harmed if artificial intelligence misdiagnoses their symptoms and leads them in the wrong treatment direction?
- Risk of abandonment: Is there a risk that clients who rely on artificial intelligence will not receive timely responses and continuity of care, especially during crises?
- Surveillance: How might artificial intelligence data be used for surveillance purposes (e.g., reproductive health data in states where abortion is not legal)?
- Plagiarism and dishonesty: How can social workers prevent plagiarism, dishonesty, and intellectual property abuse that results from use of artificial intelligence (e.g., copying and pasting without attribution and credit)?
Social workers also need to recognize algorithmic fairness and biases: To what extent do artificial intelligence tools used for clinical purposes and agency hiring exacerbate social, cultural, and political bias as a result of the databases on which they are built? Do the AI algorithms incorporate sufficient information from people of color, LGBTQ+ individuals, low-income individuals, etc.?
Finally, social workers need to consider potential ethics complaints and litigation: Do social workers who rely on AI risk being named in ethics complaints (to a licensing board or NASW) or in lawsuits alleging, for example, failure to comply with informed consent standards, confidentiality breaches, misdiagnosis (e.g., overreliance on AI to diagnose), client abandonment, or plagiarism?
To help social workers uphold ethical standards when using AI, Reamer suggests reviewing the following texts:
In addition, Reamer says there are 10 ethics-informed policies for the use of AI in social work. They are:
- Create ethics-based governing principles
- Establish a digital ethics steering committee
- Convene diverse focus groups
- Subject algorithms to peer review
- Conduct AI simulations
- Develop clinician-focused guidance for interpreting AI results
- Develop rigorous training protocols
- Maintain a log of AI results to identify positive and negative trends
- Test algorithms for possible biases and inaccuracies
- Continuously monitor algorithmic decision processes
Other Resources: