Wednesday, November 27, 2024

From Theory to Practice: How Educators are Using Packback to Boost Student Engagement

Instructional AI: A Master Class in Packback Adoption and Integration

Presented by Devon McGuire and Juliet Rogers



Introduction

In a recent webinar titled "Instructional AI: A Master Class in Packback Adoption and Integration", educators Devon McGuire and Juliet Rogers shared valuable insights into the integration of instructional AI tools in the classroom. The webinar aimed to guide teachers on effectively adopting Packback, an AI-powered platform designed to enhance student engagement, critical thinking, and writing skills.

Devon McGuire, the Director of Academic Innovation and Strategy at Packback, brought her extensive experience in educational technology to the session. Juliet Rogers, a veteran teacher with 24 years of experience at Pasadena Independent School District, provided a practical perspective based on her firsthand experience using Packback in her 10th-grade AVID (Advancement Via Individual Determination) elective class.

Understanding Instructional AI and Packback

Instructional AI refers to artificial intelligence tools specifically designed to augment the teaching and learning process. Unlike general AI applications, instructional AI focuses on enhancing the educational environment by supporting both instructors and students. Packback is one such platform that leverages AI to foster critical thinking and improve writing skills among students.

Devon explained that Packback has been utilizing AI since 2017, initially in higher education. Recognizing the opportunity to support students in developing college and career readiness skills, Packback formed a partnership with AVID, aligning with their mission to promote inquiry-based learning and student engagement.

Juliet Rogers' Journey with Packback

Juliet shared her motivation for incorporating Packback into her classroom. Despite her extensive teaching experience, she noticed that her students were struggling with writing, a critical skill for college success. She wanted a tool that could assist her students without turning her AVID elective into an English class.

After learning about Packback during an AVID training session, Juliet was eager to implement it. She appreciated that Packback didn't just correct student mistakes but also provided explanations, helping students learn from their errors. The platform's ability to handle tedious tasks like grading grammar, mechanics, and formatting freed Juliet to focus on more impactful teaching activities.

Implementing Packback in the Classroom

Weekly Routine and Integration with AVID Strategies

Juliet described how she integrated Packback into her weekly teaching routine. Every day, she began with a warm-up activity, and several times a week, this included a Packback assignment. The platform operates on a two-week rotation, allowing students ample time to engage with the material.

At the start of each week, students conducted a "weekly check-in" where they reviewed their grades and identified areas where they were struggling. Using this reflection, they crafted open-ended questions related to their core or college classes, aligning with AVID's Tutorial Request Form (TRF) process.

Juliet emphasized that the questions had to meet certain criteria, such as being open-ended and reaching a minimum "Curiosity Score" provided by Packback's AI. Students were also encouraged to include SAT vocabulary words in their posts, reinforcing their language skills.

Engaging in Inquiry-Based Discussions

Throughout the two-week period, students were required to respond to at least two of their peers' questions. Juliet guided them to choose classmates who hadn't received responses yet, promoting inclusivity and ensuring that all students received support.

The Packback platform's design encouraged students to engage in higher-order thinking, as they had to formulate thoughtful questions and provide substantive responses. This practice mirrored the collaborative and inquiry-based learning emphasized in AVID's strategies.

Leveraging Packback's Features

Curiosity Score and Leaderboard

The Curiosity Score is an AI-generated metric that evaluates the quality of student posts based on criteria like open-endedness, academic tone, and the use of credible sources. Juliet set a minimum Curiosity Score of 40 to encourage students to meet certain standards.

The Leaderboard feature fostered a friendly competition among students, motivating them to improve their scores and engage more deeply with the content. Juliet noted that students became excited about their progress and the recognition they received.
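Packback does not publish how the Curiosity Score is computed. Purely as a toy illustration of how a rubric might combine the criteria mentioned above (open-endedness, academic tone, and sourcing), here is a minimal Python sketch; every heuristic, keyword list, and weight below is invented for illustration and is not Packback's actual algorithm.

```python
import re

# Toy illustration only: Packback's real Curiosity Score is proprietary.
# These heuristics and weights are invented to show how a rubric combining
# open-endedness, academic tone, and sourcing *might* be structured.

OPEN_ENDED_STARTERS = ("how", "why", "what if", "to what extent", "in what ways")
INFORMAL_WORDS = {"gonna", "wanna", "lol", "stuff", "thing"}

def curiosity_score(post: str) -> int:
    """Score a post from 0-100 using three invented signals."""
    text = post.strip().lower()
    score = 0

    # Signal 1: open-endedness -- does the post pose an open question?
    if text.startswith(OPEN_ENDED_STARTERS) and post.rstrip().endswith("?"):
        score += 40

    # Signal 2: academic tone -- no informal words appear
    words = set(re.findall(r"[a-z']+", text))
    if not words & INFORMAL_WORDS:
        score += 30

    # Signal 3: sourcing -- post links to a source
    if re.search(r"https?://\S+", post):
        score += 30

    return score

post = ("Why might inquiry-based discussion improve retention "
        "(see https://example.edu/study)?")
print(curiosity_score(post))        # -> 100
print(curiosity_score(post) >= 40)  # meets the minimum of 40 -> True
```

The point of the sketch is only that a composite rubric can be made transparent to students: each signal maps to a concrete revision they can make to raise their score.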

Deep Dives for Writing Practice

In addition to the regular discussions, Juliet utilized Packback's "Deep Dives" feature for more extensive writing assignments. This tool allowed her to create custom rubrics focusing on aspects like word count, grammar, formatting, and flow. She could set specific requirements, such as using APA or MLA citation styles, which helped students prepare for college-level writing.

Juliet shared an example of a student whose writing significantly improved over time, demonstrating the effectiveness of the Deep Dives in enhancing students' writing skills. The AI provided detailed feedback, highlighting areas for improvement and guiding students through revisions.

AI-Powered Feedback and Plagiarism Detection

Packback's AI not only graded assignments but also provided constructive feedback. It pointed out issues like repetitive language or formatting errors, allowing students to learn and correct mistakes independently.

The platform also included plagiarism and AI content detection features. If a student's submission appeared to be copied or generated by AI, Packback would alert the student privately, giving them an opportunity to revise their work. This approach promoted academic integrity while educating students about ethical writing practices.
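Packback's detection methods are proprietary, but the general idea behind text-overlap plagiarism checks can be shown with a short, self-contained sketch: compare the word n-grams of two texts and flag high Jaccard similarity. This is a deliberately simplistic illustration, not how Packback (or any production detector) actually works.

```python
# Toy sketch of text-overlap detection via word-trigram Jaccard similarity.
# Production plagiarism and AI-content detectors are far more sophisticated.

def shingles(text: str, n: int = 3) -> set[tuple[str, ...]]:
    """Set of word n-grams from a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(a: str, b: str) -> float:
    """Jaccard similarity of the two texts' word trigrams (0.0-1.0)."""
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

original = "The mitochondria is the powerhouse of the cell and drives metabolism"
submission = "The mitochondria is the powerhouse of the cell according to my notes"
print(round(overlap(original, submission), 2))  # -> 0.46
```

A threshold on this similarity (say, flagging anything above 0.4) would trigger the kind of private alert described above, giving the student a chance to revise before an instructor ever sees it.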

Impact on Students and Learning Outcomes

Improved Writing Skills and Confidence

Juliet observed that her students' writing abilities improved noticeably over time. The regular practice and immediate feedback helped them develop stronger grammar, vocabulary, and critical thinking skills. Students began to produce more substantive and thoughtful responses, moving beyond superficial answers.

Preparation for College Expectations

The use of Packback familiarized students with discussion boards, a common component in college courses. Juliet's students, including alumni who returned to share their experiences, reported that they felt more prepared and confident in their college classes due to their practice with Packback.

One former student expressed that Packback "saved me" in college, highlighting the platform's role in easing the transition to higher education's writing and discussion demands.

Challenges and Student Feedback

While there was an initial learning curve and some resistance from students unaccustomed to the platform, they came to recognize its benefits over time. Juliet noted that teenagers might grumble about extra work, but the growth in their skills and confidence eventually led to appreciation for the tool.

She also addressed the prevalent issue of academic dishonesty in the digital age. By integrating Packback, she provided a structured environment that discouraged cheating and emphasized the importance of original thought and effort.

Advice for Educators Considering Packback

Juliet encouraged other educators to embrace instructional AI tools like Packback, emphasizing the support and resources available. She highlighted the importance of setting clear expectations, integrating the platform into regular routines, and using it to reinforce existing instructional strategies like AVID's WICOR (Writing, Inquiry, Collaboration, Organization, Reading) framework.

Devon added that educators interested in adopting Packback should attend general training sessions to understand the foundational aspects and ensure they have the necessary district approvals, especially regarding data privacy agreements.

Conclusion

The webinar showcased how instructional AI, when thoughtfully integrated into the classroom, can significantly enhance student engagement, writing proficiency, and readiness for college-level work. Juliet's practical application of Packback in her AVID elective class provided a roadmap for other educators seeking to leverage technology to support their teaching goals.

By focusing on critical thinking, inquiry-based learning, and ethical practices, tools like Packback can play a crucial role in preparing students for the demands of higher education and beyond.

Resources and Next Steps

  • Training Sessions: Educators can access recordings and training sessions at packback.co/webinars to familiarize themselves with the platform.
  • Support Contacts: For assistance and onboarding, teachers can reach out to Packback representatives like Madison Shay.
  • District Approval: Ensure that district consent and data privacy agreements are secured before implementation to protect student information.
  • Integration with Curriculum: Align Packback activities with existing curricular frameworks like AVID to maximize effectiveness and reinforce learning objectives.

By embracing instructional AI, educators can provide their students with the tools and skills necessary to succeed in an increasingly digital and interconnected world.

Exploring the Intersection of AI and Archives: Key Insights from Experts

Opening Keynote: AI and Archives – Applications, Implications, and Possibilities

Presented by Ray Pun, Helen Wong Smith, and Thomas Padilla at the AI and Libraries 2 Mini Conference



Introduction

In the opening keynote of the "AI and Libraries 2: More Applications, Implications, and Possibilities" mini conference, Ray Pun welcomed attendees to an engaging session focused on the intersection of artificial intelligence (AI) and archives. The event built upon the previous month's conference, which saw over 16,000 sign-ups, and celebrated Ray Pun's recent election as President-Elect of the American Library Association (ALA).

Joining Ray Pun were two distinguished professionals in the field of archives:

  • Helen Wong Smith: President of the Society of American Archivists (SAA) and Archivist for University Records at the University of Hawaii at Mānoa.
  • Thomas Padilla: Deputy Director of Archiving and Data Services at the Internet Archive.

The session aimed to explore the current landscape of AI in archives, discuss ethical considerations, and examine how AI can make born-digital collections more accessible and usable.

The Professional Landscape of AI in Archives

Helen Wong Smith's Perspective

Helen emphasized that AI integration into the archival profession offers promising opportunities for enhancing efficiency, accessibility, and deriving new insights from archival collections. However, she also highlighted several challenges that require careful management:

  • Quality and Ethics: Ensuring that AI-generated metadata and content maintain the authenticity and trustworthiness of archival records.
  • Privacy: Navigating data privacy concerns when implementing AI technologies.
  • Professional Adaptation: The need for ongoing dialogue, research, and training to effectively integrate AI while preserving the nature of archival records.

Helen introduced the concept of "paradata," which involves capturing information about the procedures, tools, and individuals involved in creating and processing information resources. She stressed that paradata is essential for maintaining the authenticity, reliability, and usability of records in the context of AI-generated content.

Thomas Padilla's Perspective

Thomas noted the significant engagement of senior leadership in exploring the potential of AI within libraries and archives. He mentioned initiatives like the ARL (Association of Research Libraries) and CNI (Coalition for Networked Information) Task Force focused on scenario planning for AI in these fields.

However, Thomas expressed concerns about the for-profit capture of library and archival work, cautioning against over-reliance on specific products or proprietary technologies. He emphasized the importance of a more holistic and less product-centric approach to AI integration, suggesting that focusing on overarching frameworks and values would be more beneficial for the profession.

Ethical Considerations in AI and Archival Processing

Thomas Padilla's Insights

Thomas highlighted the ethical complexities that arise when AI perpetuates existing societal biases, referencing Safiya Noble's work on "Algorithms of Oppression." He argued that it's insufficient to accept these biases as inevitable and stressed the need for proactive responses to address inequities exacerbated by AI technologies.

He advocated for "bias management," suggesting that while subjectivity in archival description is unavoidable, it must be anchored in consistent values that prioritize human rights and historical understanding. Thomas also called for regulatory frameworks to provide clarity and consistency in ethical approaches to AI in archives.

Helen Wong Smith's Insights

Helen echoed the importance of addressing biases in AI-generated metadata and content. She raised concerns about AI's potential to perpetuate inaccuracies and misconceptions, particularly in generative AI that produces new content based on existing data.

She emphasized the necessity of codified record-keeping practices for creators using AI, referencing Jessica Bushey's work on AI-generated images as an emergent record format. Helen reiterated the importance of paradata in documenting not just the tools used but also the reasons, methods, and contexts in which they are applied.

AI and Born-Digital Archives

Enhancing Accessibility and Usability

Helen outlined several ways AI can improve access to born-digital collections:

  • Automating metadata creation
  • Content classification
  • Natural language processing
  • Image recognition
  • Enhanced search capabilities

However, she also identified barriers to implementation, including:

  • Lack of knowledge and competencies within the archival profession
  • A shortage of reliable technologies and interoperability challenges
  • Economic constraints and personnel expertise
  • Data privacy and security concerns
  • Ethical considerations and cultural sensitivity

Addressing Backlogs with AI

Thomas discussed the potential of AI in addressing access issues for backlogged materials, particularly those in less commonly known languages. He highlighted the challenge of insufficient resources and the difficulty in hiring personnel with the necessary language skills to catalog these materials.

Thomas proposed leveraging AI advancements in world languages, possibly in collaboration with companies like Meta, to process and make these materials discoverable. He emphasized that minimal digitization combined with AI could help fulfill the access and preservation mission of archives.

Staying Informed and Managing Overwhelm

Thomas Padilla's Approach

Thomas acknowledged the feeling of overwhelm many professionals experience due to the rapid developments in AI. He recommended:

  • Adopting a utilitarian approach to AI as a tool
  • Grounding oneself in the history and values of the profession
  • Practicing careful curation of information sources
  • Utilizing platforms like LinkedIn for professional updates
  • Setting up curated Google Alerts for topics like AI in libraries, archives, and regulation

Helen Wong Smith's Resources

Helen suggested leveraging collaborative initiatives like the InterPARES Trust AI project, an international and interdisciplinary effort aimed at designing and developing AI to support trustworthy public records. The project's goals include:

  • Identifying AI technologies to address critical records and archival challenges
  • Determining the benefits and risks of using AI on records and archives
  • Ensuring archival concepts and principles inform the development of responsible AI
  • Validating outcomes through case studies and demonstrations

Helen emphasized the importance of engaging with such resources to stay informed and contribute to the ongoing dialogue around AI in archives.

Conclusion

The opening keynote provided valuable insights into the intersection of AI and archives, highlighting both opportunities and challenges. Ray Pun thanked the speakers and attendees, encouraging continued dialogue and exploration of these critical topics.

As AI technologies continue to evolve, the archival profession must navigate ethical considerations, enhance competencies, and develop strategies to leverage AI responsibly. By fostering collaboration, staying informed, and grounding practices in core values, archivists can effectively integrate AI to enhance accessibility and preservation.

Note: This summary is based on the opening keynote delivered by Ray Pun, Helen Wong Smith, and Thomas Padilla at the AI and Libraries 2 Mini Conference.

The Impact of AI on Information Literacy: Introducing the "Artificial Intelligence and Information Literacy" Course

Planning a Credit-Bearing Course on AI and Information Literacy

Presented by Alyssa Russo and David Hurley from the University of New Mexico



Introduction

Alyssa Russo, Learning Services Librarian, and David Hurley, Discovery and Web Librarian at the University of New Mexico (UNM), shared their experiences and plans for developing a credit-bearing course titled "Artificial Intelligence and Information Literacy." This presentation delved into the rationale, structure, and pedagogical approaches they considered while designing the course, aiming to integrate generative AI tools like ChatGPT into information literacy instruction.

Background and Context

The advent of ChatGPT and similar generative AI technologies prompted librarians at UNM to reconsider their approaches to information literacy instruction. Recognizing the profound impact of AI on information systems and user behavior, Russo and Hurley sought to develop a course that not only addressed the practical use of AI tools but also engaged students in critical thinking about the social and ethical implications of these technologies.

At UNM, the library operates within a unique structure, being part of the Organizational Information and Learning Sciences (OILS) program. This affiliation allows librarians to teach credit-bearing courses that explore theoretical aspects of information literacy beyond traditional library instruction. Leveraging this opportunity, Russo and Hurley aimed to create a three-credit course that would encourage students to think critically about how AI reshapes information landscapes.

Inspirational Framework

The presenters drew inspiration from Barbara Fister's perspective on information literacy, emphasizing the need to understand the architectures, infrastructures, and belief systems that shape our information environment. They recognized that generative AI challenges conventional notions of authority, value, and the processes underlying information creation and dissemination.

Hurley noted parallels between current responses to AI and past reactions to disruptive technologies like Google and Wikipedia. In the early days of the web, librarians grappled with similar concerns about information quality and authority. By examining historical responses—ranging from rejection to revolutionary integration—they identified strategies to effectively incorporate AI into information literacy education.

Course Structure and Objectives

Utilizing the ACRL Framework

To provide a solid foundation, the course was structured around the Association of College and Research Libraries (ACRL) Framework for Information Literacy for Higher Education. Each of the six frames served as a module, allowing for a comprehensive exploration of core concepts. This approach also aligned well with the eight-week accelerated format of the course, providing sufficient time for introduction, in-depth exploration, and reflection.

Hybrid Learning Model

Recognizing the benefits of both in-person and online learning, the course was designed as a hybrid. Meeting twice a week, the first session would introduce key concepts and AI tools, while the second would be student-led, fostering a community of practice. This structure aimed to balance guided instruction with collaborative learning, encouraging students to share insights and take ownership of their learning process.

Target Audience and Enrollment

The course was intended for upper-division undergraduates who had prior college-level coursework. This prerequisite ensured that students possessed foundational academic skills, enabling them to engage deeply with complex topics and contribute meaningfully to discussions and projects.

Assignments and Activities

Researchers' Notebook

A central component of the course was the "Researchers' Notebook," an iterative assignment where students documented their evolving thoughts, questions, and interactions with AI tools. This notebook aimed to make the research process visible, emphasizing the development of inquiry skills and reflective practice. By capturing moments of discovery, frustration, and dialogue with AI, students could illustrate their understanding of information literacy concepts in a tangible way.

Module Deep Dive: Research as Inquiry

Focusing on the ACRL frame "Research as Inquiry," one module exemplified the course's pedagogical approach. The objectives were to have students view research as an open-ended exploration and to formulate increasingly sophisticated questions. Activities included:

  • Question Formulation Technique: Students engaged in generating, refining, and prioritizing questions related to AI. This collaborative exercise encouraged curiosity and critical thinking, serving as a model for ongoing inquiry throughout the course.
  • Walk and Talk Activity: Adapted from the University of Arizona's Atlas of Creative Tools, this exercise involved students pairing up and discussing prompts while walking around campus. Questions like "What is curiosity to you?" and "What challenges does AI face in understanding human questions?" facilitated deeper engagement and embodied learning.

Other Modules and Activities

While the presentation focused on one module in detail, Russo and Hurley outlined plans for other modules based on the remaining ACRL frames. These included activities such as:

  • Authority Is Constructed and Contextual: Exploring how authority is established in different information sources and how AI-generated content challenges traditional notions of authority.
  • Searching as Strategic Exploration: Comparing search strategies in traditional databases versus AI tools, emphasizing iteration and strategy refinement.
  • Information Has Value: Discussing the ethical, legal, and economic implications of AI-generated content, including issues of intellectual property and environmental impact.

Challenges and Reflections

Despite their thorough planning, Russo and Hurley faced challenges in promoting and enrolling students in the course. Both were on different types of leave during critical promotion periods, resulting in insufficient enrollment for the course to run as scheduled. Initially disappointed, they reconsidered and recognized that the course content remained relevant and valuable, even as the initial hype around AI began to settle.

They emphasized that the rapidly evolving nature of AI and its integration into various aspects of society make such a course timely and essential. By sharing their experience, they hoped to inspire others to develop similar courses or integrate these ideas into existing curricula.

Conclusion and Takeaways

Russo and Hurley's presentation highlighted the importance of adapting information literacy instruction to address the challenges and opportunities presented by generative AI. By framing the course around collaborative exploration and critical engagement, they aimed to empower students to navigate and contribute to the evolving information landscape.

Key takeaways from their experience include:

  • The value of integrating established frameworks (like the ACRL frames) with new technologies to provide structure and depth.
  • The effectiveness of hybrid learning models in fostering community and active participation.
  • The importance of reflective and process-oriented assignments, such as the Researchers' Notebook, in making the research process transparent and meaningful.
  • The need for flexibility and adaptability in course planning, acknowledging that challenges like enrollment and shifting student interests may arise.
  • The relevance of addressing ethical considerations, including environmental impacts and biases inherent in AI technologies.

Final Thoughts

While their course did not run as initially planned, Russo and Hurley remain optimistic about its potential and relevance. They encouraged other educators and librarians to consider similar approaches, emphasizing that the need for critical engagement with AI and information literacy is ongoing.

Their work serves as a valuable model for integrating emerging technologies into educational practices, fostering not only skill development but also critical awareness and ethical considerations among students.

Note: This summary is based on a presentation by Alyssa Russo and David Hurley on planning a credit-bearing course on AI and information literacy at the University of New Mexico.

The Ethics of AI: Navigating the Three Cs of Generative AI

Closing Keynote: The Three Cs of Generative AI in Libraries

Presented by Reed Hepler at the AI and Libraries 2 Mini Conference



In the closing keynote of the "AI and Libraries 2: More Applications, Implications, and Possibilities" mini conference, Reed Hepler, Digital Initiatives Librarian and Archivist at the College of Southern Idaho, shared valuable insights on the use of generative AI in educational and library settings. With experience spanning education, libraries, and business training, Hepler delved into the ethical considerations and best practices surrounding generative AI tools.

Introduction

Hepler began by acknowledging the diverse perspectives educators and administrators hold regarding generative AI. He identified four primary viewpoints observed at his institution:

  1. Fear that student use of ChatGPT and similar tools creates new forms of unethical practices.
  2. Confidence that students wish to use ChatGPT effectively and constructively.
  3. Concern that generative AI undermines established systems and norms of online learning.
  4. Belief that ChatGPT can lead to innovative products and workflows enhancing instructional design and assessment.

Recognizing the need to address these concerns, Hepler introduced a framework he developed to guide ethical and effective use of generative AI: the "Three Cs."

The Three Cs of Generative AI

1. Copyright

Key Question: Who owns the rights to AI-generated products, and how are they created?

Hepler discussed the complexities of copyright in the context of generative AI, posing three critical questions:

  • What are the rights and responsibilities of the original creators whose works are used by AI?
  • What are the rights and responsibilities of users who employ AI tools?
  • Is generative AI an owner, a user, both, or neither in terms of copyright?

He clarified that copyright protects the expression of ideas in any medium and grants exclusive rights to the creator or copyright holder. However, devices, processes, ideas, public domain materials, works of the U.S. federal government, and recipes (as mere lists of ingredients) cannot be copyrighted.

Hepler emphasized that current U.S. copyright law requires human authorship for protection, raising questions about whether AI can be considered an author. He also highlighted the ongoing debates and legal challenges surrounding the fair use doctrine as it pertains to AI training on copyrighted materials.

He cited examples of copyright battles involving AI-generated works, such as "Zarya of the Dawn," and discussed the implications of using copyrighted content in AI prompts. He stressed the importance of respecting intellectual property rights and advised users to avoid inputting copyrighted material into AI tools unless they own the rights.

2. Citation

Key Question: How should AI tools and outputs be cited, and where did the information originate?

Noting the absence of standardized citation formats for AI-generated content, Hepler emphasized that the purpose of citation is to provide information about sources. He recommended including the following elements in any AI citation:

  • Tool name and version
  • Date and time of usage
  • Prompt, query, or conversation title
  • Name of the person who queried the AI
  • Link to the conversation or output, if possible
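Since no standard citation format exists, the elements above can simply be assembled mechanically. The sketch below builds a citation string from those fields; the layout is an invented illustration of Hepler's recommended elements, not an official citation style.

```python
from dataclasses import dataclass
from datetime import datetime

# Illustration only: there is no standardized AI citation format.
# This layout simply strings together the recommended elements.

@dataclass
class AICitation:
    tool: str            # tool name and version
    used_at: datetime    # date and time of usage
    prompt: str          # prompt, query, or conversation title
    user: str            # person who queried the AI
    link: str = ""       # link to the conversation, if available

    def format(self) -> str:
        parts = [
            f"{self.user}. Output from {self.tool}",
            f'prompt: "{self.prompt}"',
            self.used_at.strftime("%Y-%m-%d %H:%M"),
        ]
        if self.link:
            parts.append(self.link)
        return ", ".join(parts) + "."

c = AICitation(
    tool="ChatGPT (GPT-4)",
    used_at=datetime(2024, 3, 5, 14, 30),
    prompt="Summarize fair use factors",
    user="J. Smith",
    link="https://chat.openai.com/share/example",
)
print(c.format())
```

Whatever layout an institution settles on, capturing these fields at the moment of use is the hard part; formatting them afterward is trivial.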

He provided an example of how to cite AI-generated content in APA style, suggesting that users include their name to acknowledge their role in the creation process. He stressed that users should engage in the editing and revision of AI outputs to ensure originality and accuracy.

3. Circumspection

Key Question: What hazards—moral, ethical, educational, or otherwise—should users manage when utilizing generative AI tools?

Hepler outlined several ethical issues associated with AI outputs, including:

  • Plagiarism
  • Biases
  • Repetitiveness and arbitrariness
  • Incorrect or misleading information
  • Lack of connection to external resources

He discussed privacy concerns, highlighting how AI tools can extrapolate personal data from user inputs, even when users attempt to minimize the information they provide. He emphasized that users should never input sensitive or confidential information into AI prompts.

Hepler recommended several practices to mitigate these risks:

  • Informing users about data collection and its purposes
  • Obtaining explicit consent for data usage
  • Limiting data collection to essential information (data minimization)
  • Implementing strict access and use controls
  • Anonymizing data in prompts
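The anonymization practice above can be approximated with pattern-based redaction applied before any text reaches an AI tool. The sketch below catches only obvious identifiers (emails and US-style phone numbers); it is a minimal illustration, not a complete de-identification solution.

```python
import re

# Minimal sketch of pre-prompt redaction: strips obvious identifiers
# (emails, US-style phone numbers) before text reaches an AI tool.
# Real de-identification requires far more than two regexes.

PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), "[PHONE]"),
]

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tokens."""
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

prompt = "Patient Jane Roe (jroe@example.org, 555-867-5309) reports headaches."
print(redact(prompt))  # -> Patient Jane Roe ([EMAIL], [PHONE]) reports headaches.
```

Note that the name "Jane Roe" survives redaction, which is exactly Hepler's warning: pattern matching alone cannot prevent an AI tool from extrapolating identity, so sensitive material should be kept out of prompts entirely.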

He also discussed the importance of quality control when using AI-generated content, advising users to:

  • Use AI tools for their intended purposes
  • Engage in best practices for prompting
  • Ask the AI for its sources and verify them
  • Find external resources to support AI-generated information
  • Analyze outputs for ethical issues, accessibility, and accuracy

Privacy and Ethical Considerations

Hepler delved deeper into privacy harms associated with AI, referencing works by legal scholars such as Danielle Keats Citron and Daniel J. Solove. He noted that privacy laws often require proof of harm, which can be difficult when dealing with intangible injuries like anxiety or frustration resulting from data breaches or misuse.

He highlighted that AI tools like ChatGPT have specific terms of use that assign users ownership of the outputs generated from their inputs. However, users are responsible for ensuring that their content does not violate any applicable laws.

Hepler stressed that despite best efforts, AI tools can still extrapolate personal data, underscoring the importance of being cautious with the information provided to these systems.

Conclusion and Recommendations

Concluding his keynote, Hepler provided a list of references and resources for further exploration of the topics discussed. He reiterated the need for libraries and educators to navigate the evolving landscape of generative AI thoughtfully, balancing innovation with ethical considerations.

He encouraged attendees to remain informed about developments in AI and copyright law, to respect intellectual property rights, and to engage in responsible use of AI tools. By adhering to the "Three Cs" framework—Copyright, Citation, and Circumspection—users can harness the benefits of generative AI while mitigating potential risks.

Final Thoughts

Hepler's presentation offered a comprehensive overview of the challenges and responsibilities associated with generative AI in libraries and education. His insights serve as a valuable guide for professionals seeking to integrate AI tools into their work ethically and effectively.

Note: This summary is based on the closing keynote delivered by Reed Hepler at the AI and the Libraries 2 Mini Conference.

The Real-World Harms of AI in Healthcare: A Closer Look

Ethical Considerations for Generative AI Now and in the Future

Presented by Dr. Kellie Owens, Assistant Professor in the Division of Medical Ethics at NYU Grossman School of Medicine



Dr. Kellie Owens delivered an insightful presentation on the ethical considerations surrounding generative AI, particularly relevant to medical librarians and professionals involved in data services. As a medical sociologist and empirical bioethicist, Dr. Owens focuses on the social and ethical implications of health information technologies, including the infrastructure required to support artificial intelligence (AI) and machine learning in healthcare.

Introduction

Dr. Owens began by situating herself within the broader discourse on AI ethics, acknowledging the prevalent narratives of both awe and panic that often dominate news coverage. She highlighted a split within the field between AI safety—which focuses on existential risks and future catastrophic events—and AI ethics, which concentrates on addressing current, tangible ethical concerns associated with AI technologies.

Referencing the "Pause Letter" signed by prominent figures like Yoshua Bengio and Elon Musk, which called for a six-month halt on training AI systems more powerful than GPT-4, Dr. Owens expressed skepticism about such approaches. She argued that while managing existential risks is important, it is crucial to focus on the real and already manifesting ethical issues that AI poses today.

Real-World Harms of AI in Healthcare

Dr. Owens provided examples of harms caused by AI tools in healthcare, emphasizing that these issues are not hypothetical but are currently affecting patients and providers. She cited instances where algorithms reduced the number of Black patients eligible for high-risk care management programs by more than half and highlighted biases in medical uses of large language models like GPT, which can offer different medical advice based on a patient's race, insurance status, or other demographic factors.

Framework for Ethical Considerations

Dr. Owens structured her talk around the five key principles of the Biden administration's Office of Science and Technology Policy "Blueprint for an AI Bill of Rights":

  1. Safe and Effective Systems
  2. Algorithmic Discrimination Protections
  3. Data Privacy and Security
  4. Notice and Explanation
  5. Human Alternatives, Consideration, and Fallback

1. Safe and Effective Systems

Emphasizing the principle of "First, do no harm," Dr. Owens discussed the ethical imperative to ensure that AI tools are both safe and effective. She addressed the issue of AI hallucinations, where large language models generate false or misleading information that appears credible. In healthcare, such errors can have significant consequences.

She also touched on the problem of dataset shift, where AI models decline in performance over time due to changes in technology, populations, or behaviors. Dr. Owens highlighted the need for continuous monitoring and updating of AI systems to maintain their reliability and accuracy.
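The continuous-monitoring idea can be illustrated with a minimal sketch: compare a deployed model's rolling accuracy against its validation-time baseline and raise a flag when performance degrades past a tolerance. The baseline, tolerance, and window size below are arbitrary assumptions for illustration, not values from the talk:

```python
from collections import deque

class PerformanceMonitor:
    """Track rolling accuracy of a deployed model and flag
    possible dataset shift when it drops below a baseline."""

    def __init__(self, baseline_accuracy: float, tolerance: float = 0.05,
                 window: int = 100):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, prediction, actual) -> None:
        self.outcomes.append(1 if prediction == actual else 0)

    def rolling_accuracy(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def shift_suspected(self) -> bool:
        # Only judge once the window holds enough samples.
        if len(self.outcomes) < self.outcomes.maxlen:
            return False
        return self.rolling_accuracy() < self.baseline - self.tolerance

monitor = PerformanceMonitor(baseline_accuracy=0.90, tolerance=0.05, window=10)
for pred, actual in [(1, 1)] * 7 + [(1, 0)] * 3:  # 70% accuracy over the window
    monitor.record(pred, actual)
print(monitor.shift_suspected())  # prints True: 0.70 < 0.85
```

In practice, a flag like this would trigger investigation, retraining on recent data, or fallback to human review rather than automatic action.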

2. Algorithmic Discrimination Protections

Dr. Owens delved into the ethical concerns related to algorithmic bias and discrimination. She cited studies like "Gender Shades," which revealed that facial recognition technologies performed poorly on women, particularly women with darker skin tones. In the context of generative AI, she discussed how image generation tools can perpetuate stereotypes, such as depicting authoritative roles predominantly as men.

She highlighted instances where AI models like GPT-4 produced clinical vignettes that stereotyped demographic presentations, calling for comprehensive and transparent bias assessments in AI tools used in healthcare.

3. Data Privacy and Security

Addressing data privacy concerns, Dr. Owens discussed vulnerabilities like prompt injection attacks, where attackers manipulate AI models to reveal sensitive training data, including personal information. She emphasized the importance of protecting users from abusive data practices and ensuring that individuals have agency over how their data is used.

She also raised concerns about plagiarism and intellectual property violations, noting that generative AI models can reproduce copyrighted material without attribution, leading to potential legal and ethical issues.

4. Notice and Explanation

Dr. Owens stressed the importance of transparency and autonomy, arguing that users should be informed when they are interacting with AI systems and understand how these systems might affect them. She cited the example of a mental health tech company that used AI-generated responses without informing users, highlighting the ethical implications of such practices.

5. Human Alternatives, Consideration, and Fallback

Finally, Dr. Owens emphasized the necessity of providing human alternatives and the ability for users to opt out of AI systems. She underscored that while AI can offer efficiency, organizations must be prepared to address failures and invest resources to support those affected by them.

Key Takeaways

Dr. Owens concluded with several key insights:

  • Technology is Not Neutral: AI systems are socio-technical constructs influenced by human decisions, goals, and biases. Recognizing this is essential in addressing ethical considerations.
  • Benefits and Costs: It is crucial to weigh both the advantages and potential harms of AI applications, including issues like misinformation, environmental impact, and the perpetuation of biases.
  • What's Missing Matters: Considering the gaps in AI training data and the politics of what's excluded can provide valuable ethical insights.
  • Power Dynamics: Evaluating how AI shifts power structures is important. AI applications should aim to empower marginalized communities rather than exacerbate existing inequalities.

Conclusion

Dr. Owens encouraged ongoing dialogue and critical examination of generative AI's ethical implications. She highlighted the role of professionals like medical librarians in shaping how AI is integrated into systems, emphasizing the need for intentional design, transparency, and a focus on equitable outcomes.

For those interested in further exploration, she recommended reviewing the "Blueprint for an AI Bill of Rights" and engaging with interdisciplinary approaches to AI ethics.

Note: This summary is based on a presentation by Dr. Kellie Owens on the ethical considerations of generative AI, particularly in the context of healthcare and data services.