Wednesday, November 27, 2024

The Real-World Harms of AI in Healthcare: A Closer Look

Ethical Considerations for Generative AI Now and in the Future

Presented by Dr. Kellie Owens, Assistant Professor in the Division of Medical Ethics at NYU Grossman School of Medicine



Dr. Kellie Owens delivered an insightful presentation on the ethical considerations surrounding generative AI, particularly relevant to medical librarians and professionals involved in data services. As a medical sociologist and empirical bioethicist, Dr. Owens focuses on the social and ethical implications of health information technologies, including the infrastructure required to support artificial intelligence (AI) and machine learning in healthcare.

Introduction

Dr. Owens began by situating herself within the broader discourse on AI ethics, acknowledging the prevalent narratives of both awe and panic that often dominate news coverage. She highlighted a split within the field between AI safety—which focuses on existential risks and future catastrophic events—and AI ethics, which concentrates on addressing current, tangible ethical concerns associated with AI technologies.

Referencing the "Pause Letter" signed by prominent figures like Yoshua Bengio and Elon Musk, which called for a six-month halt on training AI systems more powerful than GPT-4, Dr. Owens expressed skepticism about such approaches. She argued that while managing existential risks is important, it is crucial to focus on the real and already manifesting ethical issues that AI poses today.

Real-World Harms of AI in Healthcare

Dr. Owens provided examples of harms caused by AI tools in healthcare, emphasizing that these issues are not hypothetical but are currently affecting patients and providers. She cited instances where algorithms reduced the number of Black patients eligible for high-risk care management programs by more than half and highlighted biases in medical uses of large language models like GPT, which can offer different medical advice based on a patient's race, insurance status, or other demographic factors.

Framework for Ethical Considerations

Dr. Owens organized her talk around the five principles of the Biden administration's "Blueprint for an AI Bill of Rights," issued by the Office of Science and Technology Policy:

  1. Safe and Effective Systems
  2. Algorithmic Discrimination Protections
  3. Data Privacy and Security
  4. Notice and Explanation
  5. Human Alternatives, Consideration, and Fallback

1. Safe and Effective Systems

Emphasizing the principle of "First, do no harm," Dr. Owens discussed the ethical imperative to ensure that AI tools are both safe and effective. She addressed the issue of AI hallucinations, where large language models generate false or misleading information that appears credible. In healthcare, such errors can have significant consequences.

She also touched on the problem of dataset shift, where AI models decline in performance over time due to changes in technology, populations, or behaviors. Dr. Owens highlighted the need for continuous monitoring and updating of AI systems to maintain their reliability and accuracy.
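To make "continuous monitoring" concrete, the short Python sketch below (an illustration, not part of Dr. Owens's talk) shows one simple way a team might watch a deployed risk model for dataset shift: periodically re-scoring a recent patient cohort and flagging when discrimination performance drifts below the baseline measured at validation. The baseline value, drop threshold, and load_recent_cohort helper are illustrative assumptions, not a recommended standard.

    # Minimal sketch (assumed, not from the presentation): flag possible dataset
    # shift by comparing a model's recent AUC against its validation-time baseline.
    from sklearn.metrics import roc_auc_score

    BASELINE_AUC = 0.82      # AUC measured at validation time (illustrative)
    MAX_ALLOWED_DROP = 0.05  # alert if recent performance falls this far below baseline

    def check_for_dataset_shift(model, load_recent_cohort):
        """Re-score the most recent cohort and compare against the baseline AUC."""
        # load_recent_cohort is a hypothetical helper returning, e.g., the last
        # 90 days of encounters as features X and observed outcomes y.
        X_recent, y_recent = load_recent_cohort()
        recent_auc = roc_auc_score(y_recent, model.predict_proba(X_recent)[:, 1])

        if recent_auc < BASELINE_AUC - MAX_ALLOWED_DROP:
            # In practice this would notify the model-ops team and trigger
            # review, recalibration, or retraining rather than just printing.
            print(f"ALERT: possible dataset shift (AUC {recent_auc:.3f} vs. {BASELINE_AUC:.3f})")
        else:
            print(f"OK: recent AUC {recent_auc:.3f} is within tolerance")
        return recent_auc

Real monitoring pipelines would also track calibration and subgroup performance, but even a simple check like this turns "continuous monitoring" into an operational commitment rather than an aspiration.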

2. Algorithmic Discrimination Protections

Dr. Owens delved into the ethical concerns related to algorithmic bias and discrimination. She cited studies like "Gender Shades," which revealed that facial recognition technologies performed poorly on women, particularly women with darker skin tones. In the context of generative AI, she discussed how image generation tools can perpetuate stereotypes, such as depicting authoritative roles predominantly as men.

She highlighted instances where AI models like GPT-4 produced clinical vignettes that stereotyped demographic presentations, calling for comprehensive and transparent bias assessments in AI tools used in healthcare.
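As one illustration of what a transparent bias assessment could look like (a sketch of the general idea, not a method presented in the talk), an auditor might generate many vignettes for the same condition, tally which demographic groups the model depicts, and compare that distribution against real-world prevalence. The generate_vignette helper, group keywords, and expected shares below are hypothetical placeholders.

    # Hypothetical sketch: check whether generated clinical vignettes over- or
    # under-represent demographic groups relative to known prevalence.
    # generate_vignette stands in for a call to whatever model is under review.
    from collections import Counter
    from scipy.stats import chisquare

    def audit_vignette_demographics(generate_vignette, condition, groups,
                                    expected_shares, n_samples=200):
        """Compare how often each group appears in vignettes vs. expected shares."""
        counts = Counter({g: 0 for g in groups})
        for _ in range(n_samples):
            text = generate_vignette(condition).lower()
            for group in groups:
                if group in text:
                    counts[group] += 1
                    break  # attribute each vignette to the first matching group

        observed = [counts[g] for g in groups]
        total = sum(observed)
        if total == 0:
            raise ValueError("no vignette mentioned any of the listed groups")
        expected = [share * total for share in expected_shares]  # shares should sum to 1
        _, p_value = chisquare(observed, f_exp=expected)

        print(f"{condition}: observed {dict(counts)}, chi-square p = {p_value:.4f}")
        return p_value  # a small p-value suggests the vignettes deviate from prevalence

In practice, keyword matching would be replaced by careful annotation and the prevalence baselines would come from epidemiological data; the point is only that bias audits can be made explicit, repeatable, and publishable alongside the tool.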

3. Data Privacy and Security

Addressing data privacy concerns, Dr. Owens discussed vulnerabilities such as prompt injection attacks, in which carefully crafted inputs manipulate an AI model into ignoring its instructions and disclosing sensitive information, including personal data drawn from its training set. She emphasized the importance of protecting users from abusive data practices and ensuring that individuals retain agency over how their data is used.

She also raised concerns about plagiarism and intellectual property violations, noting that generative AI models can reproduce copyrighted material without attribution, leading to potential legal and ethical issues.

4. Notice and Explanation

Dr. Owens stressed the importance of transparency and autonomy, arguing that users should be informed when they are interacting with AI systems and understand how these systems might affect them. She cited the example of a mental health tech company that used AI-generated responses without informing users, highlighting the ethical implications of such practices.

5. Human Alternatives, Consideration, and Fallback

Finally, Dr. Owens emphasized the necessity of providing human alternatives and the ability for users to opt out of AI systems. She underscored that while AI can offer efficiency, organizations must be prepared to address failures and invest resources to support those affected by them.

Key Takeaways

Dr. Owens concluded with several key insights:

  • Technology is Not Neutral: AI systems are socio-technical constructs influenced by human decisions, goals, and biases. Recognizing this is essential in addressing ethical considerations.
  • Benefits and Costs: It is crucial to weigh both the advantages and potential harms of AI applications, including issues like misinformation, environmental impact, and the perpetuation of biases.
  • What's Missing Matters: Considering the gaps in AI training data and the politics of what's excluded can provide valuable ethical insights.
  • Power Dynamics: Evaluating how AI shifts power structures is important. AI applications should aim to empower marginalized communities rather than exacerbate existing inequalities.

Conclusion

Dr. Owens encouraged ongoing dialogue and critical examination of generative AI's ethical implications. She highlighted the role of professionals like medical librarians in shaping how AI is integrated into systems, emphasizing the need for intentional design, transparency, and a focus on equitable outcomes.

For those interested in further exploration, she recommended reviewing the "Blueprint for an AI Bill of Rights" and engaging with interdisciplinary approaches to AI ethics.

Note: This summary is based on a presentation by Dr. Kellie Owens on the ethical considerations of generative AI, particularly in the context of healthcare and data services.
