
Artificial intelligence is fundamentally changing the way people communicate and understand complex topics, and these changes could have a profound impact on how clinicians and healthcare providers diagnose patients and deliver health care services.
Denise Payán, a health policy expert and associate professor of health, society, and behavior at the UC Irvine Joe C. Wen School of Population & Public Health, helped identify relevant regulatory and policy guidelines and propose strategies to address the equity implications of adopting large language models in healthcare delivery.
Payán is a co-author of a commentary published in the March 5 issue of the American Journal of Managed Care.
Large language models are advanced artificial intelligence systems that draw on vast amounts of information to generate responses that humans can understand. The use of this technology in healthcare has two sides. Potential benefits include improved operational efficiency, more precise diagnoses, and accelerated medical research. However, equity issues do exist, and, until now, few studies have considered potentially disparate uses and impacts.
The group of authors, led by Dr. Aaron Tierney of Kaiser Permanente Northern California, reviewed and analyzed eight AI regulations and guidelines that affect the integration of large language models into US healthcare delivery systems.
The authors identified three main equity issues: linguistic and cultural bias, accessibility and trust, and oversight and quality control. They then proposed solutions to address and potentially mitigate future healthcare inequities arising from the use of large language models.
“Rapid adoption and use of artificial intelligence within healthcare is exciting and promising. It can reduce inefficiencies and increase provider time with patients,” said Payán, who is also the director of the UC California Initiative for Health Equity & Action.
“However, we must fully understand – and even tread carefully – when using AI to diagnose, understand, and deliver healthcare. We hope this commentary can serve to guide best practices in AI use and healthcare delivery for all patients and communities,” she added.
Solutions shared by the group of authors include:

1. Ensure diverse representation in training data and in the teams that develop AI tools.
2. Develop techniques to evaluate the performance of AI-enabled health care tools against real-world data.
3. Ensure that AI used in health care is free of discrimination and integrates equity principles.
4. Take meaningful steps to ensure access for patients with limited English proficiency.
5. Apply AI tools to make workplaces more efficient and reduce administrative burdens.
6. Require human oversight of AI tools used in health care delivery.
7. Ensure AI tools are safe, accessible, and beneficial while respecting privacy.
Additional authors include Mary E. Reed, DrPH; Richard W. Grant, MD, MPH; and Vincent X. Liu, MD, MS, all from the Kaiser Permanente Northern California Division of Research; and Florence X. Doo, MD, of the University of Maryland School of Medicine.