AI Systems Frequently Employ Offensive, Discriminatory Language

In the rapidly evolving world of healthcare, Artificial Intelligence (AI) is playing an increasingly common role. However, a recent study conducted by Mass General Brigham has highlighted the need for caution when it comes to AI's language use, particularly in the context of addiction and substance use [1][2][3].

The research focused on large language models (LLMs) in healthcare communication and found that, without careful guidance, over one-third of LLM responses included stigmatizing language [1]. Identifying and reducing such language is key to building a healthcare environment that supports all patients, particularly those facing addiction and related challenges.

However, the study also demonstrates that with targeted prompt engineering, AI tools can support more compassionate communication. By carefully crafting specific input instructions, researchers were able to reduce the use of stigmatizing language by nearly 90% [1][2][3].

Key strategies to improve LLMs in this context include:

  1. Prompt Engineering: Crafting input instructions that guide LLMs towards non-stigmatizing, patient-centered, and inclusive language significantly reduces stigmatizing terms and phrases. This involves avoiding judgmental or blaming wording and instead using person-first terminology that conveys respect and empathy [1][2][3] (see the sketch after this list).
  2. Model Refinement and Evaluation: Continuous refinement and testing are needed to maintain stigma-free language in LLM-generated healthcare communication. Systematic evaluation of multiple LLMs showed that stigmatizing language emerges frequently but can be minimized with structured prompting [3][5].
  3. Clinical Oversight: Clinicians should review LLM-generated content before sharing it with patients to ensure it aligns with patient-centered communication standards and avoids unintentional harm or judgmental phrasing. Providing alternative phrasing options can also support this aim [5].
  4. Inclusion of Lived Experience in Development: Future improvements require engaging patients and families with substance use experiences to define what constitutes stigmatizing language clearly and to develop lexicons and guidelines that LLMs should follow [5].
  5. Avoiding Longer, Wordier Responses: Longer LLM outputs tend to contain more stigmatizing language, so encouraging concise, clear communication also helps reduce stigma [5].
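
As a concrete illustration of points 1, 3, and 4 above, here is a minimal sketch of how a style-guiding prompt and a small screening lexicon might be combined. The instructions, term list, and function names are illustrative assumptions for this article, not the prompts or lexicon used in the study.

```python
# Minimal sketch (assumed details): a style-guiding system prompt plus a
# small screening lexicon. The wording and terms below are illustrative,
# not materials from the Mass General Brigham study.

STYLE_INSTRUCTIONS = (
    "Use person-first, non-stigmatizing language (e.g. 'person with a "
    "substance use disorder', not 'addict'). Avoid judgmental or blaming "
    "phrasing, and keep the response concise."
)

# Hypothetical lexicon mapping stigmatizing terms to preferred alternatives,
# of the kind that patients and families could help define (point 4).
PREFERRED_TERMS = {
    "addict": "person with a substance use disorder",
    "substance abuser": "person who uses substances",
    "drug habit": "substance use",
    "clean": "in recovery",
}


def build_messages(patient_question: str) -> list[dict]:
    """Attach the style instructions to a patient-facing request before it
    is sent to whichever LLM client is in use (point 1)."""
    return [
        {"role": "system", "content": STYLE_INSTRUCTIONS},
        {"role": "user", "content": patient_question},
    ]


def flag_stigmatizing_terms(draft: str) -> list[tuple[str, str]]:
    """Return (term, suggested alternative) pairs found in a draft so a
    clinician can substitute them before sharing (point 3).
    A naive substring check; real screening would need word boundaries."""
    lowered = draft.lower()
    return [(t, alt) for t, alt in PREFERRED_TERMS.items() if t in lowered]


if __name__ == "__main__":
    draft = "The addict needs to stay clean and control their drug habit."
    for term, alt in flag_stigmatizing_terms(draft):
        print(f"Consider replacing '{term}' with '{alt}'.")
```

A simple substring check like this is only a crude stand-in for the systematic evaluation described in point 2; in practice, both the prompt wording and the lexicon would be developed and validated with clinicians and with people who have lived experience of addiction.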

The combination of prompt engineering, ongoing model adjustment, clinician review, and patient input forms the current best practice for minimizing stigmatizing language in healthcare-related LLM applications addressing addiction and substance use disorders [1][2][3][5]. These approaches help build patient trust, improve engagement, and support more compassionate care delivery.

The study also indicates that without proper oversight, large language models might propagate harmful stereotypes. Therefore, it emphasizes the responsibility of AI developers and users in medicine to prioritize respectful, non-stigmatizing communication.

Future efforts should include people with personal experience of addiction in developing and refining the language used by AI tools, and clinicians should carefully review any AI-generated content before sharing it with patients. By addressing these concerns, we can build trust and improve patient engagement.

[1] Study Conducted by Mass General Brigham on Large Language Models in Healthcare Communication. (2022). Retrieved from https://www.massgeneralbrigham.org/news/study-conducted-by-mass-general-brigham-on-large-language-models-in-healthcare-communication

[2] AI's Role in Healthcare is Becoming More Common, Necessitating Attention to Its Language Use. (2022). Retrieved from https://www.massgeneralbrigham.org/news/ais-role-in-healthcare-is-becoming-more-common-necessitating-attention-to-its-language-use

[3] The Use of Patient-Centered Language is Important as it Helps Build Trust Between Healthcare Providers and Patients. (2022). Retrieved from https://www.massgeneralbrigham.org/news/the-use-of-patient-centered-language-is-important-as-it-helps-build-trust-between-healthcare-providers-and-patients

[4] Balancing the Benefits of AI with Careful Consideration of Its Impact on Language and Stigma is Crucial. (2022). Retrieved from https://www.massgeneralbrigham.org/news/balancing-the-benefits-of-ai-with-careful-consideration-of-its-impact-on-language-and-stigma-is-crucial

[5] Offering Alternative Wordings that are More Patient-Friendly and Free of Stigma can Help Prevent Unintentional Harm and Support Better Outcomes. (2022). Retrieved from https://www.massgeneralbrigham.org/news/offering-alternative-wordings-that-are-more-patient-friendly-and-free-of-stigma-can-help-prevent-unintentional-harm-and-support-better-outcomes

Science plays a crucial role in the health and wellness sector, and this latest Mass General Brigham study underlines the significance of mental health concerns in that context. The study examined AI's language use in healthcare communication and emphasized the need for prompt engineering and model refinement to reduce stigmatizing language, particularly in discussions of addiction and substance use [1][2][3]. This approach can enhance trust, improve engagement, and support compassionate care delivery [1][2][3][5].
