Thursday, November 28, 2024

The Intersection of Artificial Intelligence and Structural Racism: Understanding the Connection

Understanding Structural Racism in AI Systems

Presented by Craig Watkins, Visiting Professor at MIT and Professor at the University of Texas at Austin


Introduction

Craig Watkins discusses the intersection of artificial intelligence (AI) and structural racism, emphasizing the critical need to address systemic inequalities in the development and deployment of AI technologies. He highlights initiatives at MIT and the University of Texas at Austin aimed at fostering interdisciplinary approaches to create fair and equitable AI systems that have real-world positive impacts.

Key Points

The Impact of AI on Marginalized Communities

  • Instances where facial recognition software has falsely identified Black men, leading to wrongful arrests.
  • These cases underscore the potential of AI to replicate systemic forms of inequality if not carefully designed and monitored.

Challenges of Defining Fairness in AI

  • Machine learning practitioners have developed more than 20 distinct definitions of fairness, underscoring the concept's complexity (two of these definitions are made concrete in the sketch after this list).
  • Debate over whether AI models should be aware of race to prevent implicit biases or unaware to avoid explicit discrimination.
  • Fair algorithms may not address deeply embedded structural inequalities if they assume equal starting points for all individuals.
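
To make the tension concrete, here is a toy numeric sketch (entirely hypothetical counts, not figures from the talk) in which the same set of predictions satisfies one common fairness definition, demographic parity, while violating another, equal opportunity:

```python
# Hypothetical toy data: (group, truly_qualified, predicted_positive)
records = (
    [("A", 1, 1)] * 5 + [("A", 1, 0)] * 1 + [("A", 0, 0)] * 4 +  # group A
    [("B", 1, 1)] * 3 + [("B", 0, 1)] * 2 + [("B", 0, 0)] * 5    # group B
)

def selection_rate(group):
    """Share of the group predicted positive (what demographic parity compares)."""
    rows = [r for r in records if r[0] == group]
    return sum(r[2] for r in rows) / len(rows)

def true_positive_rate(group):
    """Share of truly qualified members predicted positive (equal opportunity)."""
    qualified = [r for r in records if r[0] == group and r[1] == 1]
    return sum(r[2] for r in qualified) / len(qualified)

for g in ("A", "B"):
    print(g, round(selection_rate(g), 2), round(true_positive_rate(g), 2))
# A 0.5 0.83   B 0.5 1.0
# Equal selection rates (demographic parity holds), but unequal true
# positive rates (equal opportunity does not) -- satisfying one
# definition can still leave another violated.
```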

Understanding Structural Racism

  • Structural racism refers to systemic inequalities embedded within societal institutions and systems.
  • It manifests in interconnected disparities across various domains, such as housing, credit markets, education, and health.
  • These disparities are often less visible and more challenging to address than interpersonal racism.

Case Study: Housing and Credit Markets

  • Homeownership is a primary pathway to wealth accumulation and access to quality education, health care, and social networks.
  • Discriminatory practices in credit markets have historically limited access to homeownership for marginalized groups.
  • AI-driven financial services aiming to address biases may inadvertently introduce data surveillance and privacy concerns.

Interconnected Systems of Inequality

  • Disparities in one system (e.g., credit markets) are linked to disparities in others (e.g., housing, education).
  • Addressing structural racism requires understanding and tackling these interconnected systems holistically.
  • Designing AI models that account for this complexity is a significant computational and ethical challenge.

The Role of Education and Interdisciplinary Collaboration

  • Emphasizes the importance of training both AI developers and users to recognize and mitigate biases.
  • Advocates for interdisciplinary approaches combining technical expertise with social science insights.
  • Highlights initiatives at MIT and UT Austin focused on integrating these perspectives into AI research and education.

Conclusion

Craig Watkins calls for the development of AI systems that not only avoid perpetuating systemic inequalities but actively work to dismantle them. He stresses the need for educating the next generation of AI practitioners and users to make ethical, responsible decisions, and to be aware of the societal impact of their work.

Key Quote

Referencing Robert Williams, a man wrongly arrested due to faulty facial recognition software:

"This obviously isn’t me. Why am I here?"

The police responded, "Well, it looks like the computer got it wrong."

This exchange underscores the profound consequences of unchecked AI systems and the urgent need for responsible design and implementation.

Demystifying Ethical AI: Understanding the Jargon and Principles

The Many Flavors of AI: Terms, Jargon, and Definitions You Need to Know


Introduction

The presenter discusses the rapidly evolving landscape of artificial intelligence (AI), particularly focusing on the terminology and jargon associated with ethical AI. Using an ice cream analogy, the presentation aims to help librarians and information professionals understand and keep up with various AI concepts to better assist their patrons, colleagues, and stakeholders.

Importance for Librarians

  • Librarians have a foundational responsibility to understand AI tools and systems.
  • AI is not just filtering information but also creating it, affecting how information is accessed and used.
  • Similar to past technological shifts (e.g., Google, Wikipedia), AI is a bellwether of change in information science.
  • Librarians need to lead the charge in ethical AI usage and education.

The Ice Cream Analogy of Ethical AI

The presenter uses different ice cream flavors to represent various terms related to ethical AI:

1. Ethical AI (Vanilla)

  • Principles and values guiding the development, deployment, and use of AI systems.
  • Focuses on fairness, accountability, and transparency.
  • Ensures AI aligns with societal values and ethical principles.

2. Responsible AI (Chocolate)

  • Actions and practices organizations should take to ensure AI is developed and used responsibly.
  • Includes risk management, stakeholder engagement, and governance.
  • Emphasizes organizational norms and the practical implementation of ethical standards.

3. Transparent AI (Strawberry)

  • AI systems where the inner workings are visible (glass-box vs. black-box systems).
  • Transparency in development processes and usage purposes.
  • Not necessarily explainable; complexity can still hinder understanding.

4. Explainable AI (Pistachio)

  • AI systems whose operations can be understood and explained to users.
  • Not always transparent; proprietary systems may be explainable without revealing inner workings.
  • Important for building trust and accountability.

5. Accessible AI (Peach)

  • AI systems that are usable by a wide range of people, including those with disabilities.
  • Focus on inclusivity in design and implementation.
  • Examples include AI with spoken captions or image descriptions.

6. Open AI (Not to be Confused with OpenAI)

  • AI systems with open-source code, open development environments, and accessible documentation.
  • Emphasizes transparency and community involvement.
  • Being open doesn't necessarily mean being ethical or responsible.

7. Trustworthy AI (Blueberry)

  • AI systems that are reliable and operate as intended.
  • Trustworthiness depends on who is assessing it and for what purpose.
  • Often paired with transparency and explainability but not guaranteed.

8. Consistent AI (Lemon Sherbet)

  • AI systems that operate reliably and produce consistent results.
  • Consistency does not imply trustworthiness or ethical behavior.
  • Consistent AI may consistently exhibit biases or other issues.

Key Takeaways

  • Terminology around AI can be misleading; terms like "transparent," "open," or "trustworthy" are not guarantees of ethical behavior.
  • Librarians should critically evaluate AI systems beyond surface-level labels.
  • Understanding these distinctions helps librarians guide users in the proper use of AI tools.

Recommendations for Staying Informed

The presenter suggests following key figures in the field of ethical AI to stay updated:

  • Timnit Gebru: Former Google researcher specializing in AI ethics.
  • Abhishek Gupta
  • Carey Miller
  • Reid Blackman: Hosts a podcast series on AI ethics and responsibility.
  • Laura Mueller
  • Ryan Carrier
  • Kurt Cagle
  • Norman Mooradian: Professor at San Jose State University with extensive research on ethical AI.

Q&A Highlights

During the question and answer session, the following points were discussed:

1. Importance of AI Literacy

  • Librarians should educate themselves and patrons about AI tools.
  • AI literacy includes understanding the limitations and proper uses of AI.

2. Transparency and Open AI

  • OpenAI's transparency has been questioned; "open" does not always mean fully transparent.
  • Critical evaluation of AI companies and their claims is necessary.

3. Explaining AI to Users

  • AI systems like ChatGPT predict likely next words based on patterns in their training data, which spans a vast range of internet content (a toy sketch of next-word prediction follows below).
  • Librarians should guide users on when and how to use AI tools appropriately.
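
As a vastly simplified illustration of what "predicting language" means, the toy bigram model below picks each next word in proportion to how often it followed the previous word in a tiny, made-up corpus. Real systems like ChatGPT use enormous neural networks trained on internet-scale text, but the predict-the-next-word framing is the same:

```python
from collections import Counter, defaultdict
import random

# Hypothetical miniature corpus; real models train on vastly more text.
corpus = "the library opens at nine . the library closes at five .".split()

# Count, for each word, which words follow it (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word(word):
    """Sample a next word in proportion to how often it followed `word`."""
    counts = following[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

word, output = "the", ["the"]
for _ in range(4):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g. "the library opens at nine"
```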

4. Understanding Algorithms

  • An algorithm is the underlying set of step-by-step instructions that dictates how an AI system operates.
  • Algorithms can embed biases depending on the training data they process, as the sketch below illustrates.
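
A minimal sketch (hypothetical data, not an example from the session) of how a seemingly neutral rule learned from skewed historical decisions simply reproduces the skew:

```python
# Hypothetical historical decisions: (group, approved), with a skew baked in.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 40 + [("B", False)] * 60

def learned_rule(group):
    """Approve a group if it was usually approved in the past."""
    outcomes = [approved for g, approved in history if g == group]
    return sum(outcomes) / len(outcomes) >= 0.5

print(learned_rule("A"))  # True  -- group A keeps getting approved
print(learned_rule("B"))  # False -- group B keeps getting denied
# Nothing in the code mentions bias, yet the skewed training data makes
# the learned rule perpetuate the historical disparity.
```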

5. FAIR AI

  • FAIR stands for Findable, Accessible, Interoperable, and Reusable.
  • Applying FAIR principles to AI and machine learning is an emerging area.

6. Hallucinations in AI

  • AI hallucinations occur when AI systems generate incorrect or fabricated information.
  • Important for librarians to educate users about verifying AI-generated content.

Conclusion

The presenter emphasizes that AI is a tool—neither inherently good nor bad—and it's crucial for librarians to stay informed and lead in ethical AI practices. By understanding the nuances of AI terminology and concepts, librarians can better assist users and influence responsible AI development and use.

Revolutionizing Library UX: Using AI to Enhance Website Usability

Improving Library Website Usability with AI

Presented by Elisa Saphier, Librarian at Central Connecticut State University (CCSU)



Introduction

Elisa discusses how librarians can leverage artificial intelligence (AI) to enhance the usability of library websites. She shares her personal experiments and insights using AI tools, particularly generative AI models like ChatGPT and Google's Gemini, to support various aspects of website usability and user experience (UX) design.

Context and Motivation

  • Elisa has extensive experience as a technologist, systems librarian, and web librarian.
  • She is co-teaching an introductory course on research with AI, focusing on information literacy.
  • Her goal is to gain practical experience with AI to understand its capabilities and limitations in improving library website usability.

Challenges with AI

  • Lack of substantial literature on using AI for library website usability improvements.
  • Common issues with AI include biases, hallucinations, ethical concerns, intellectual property rights, privacy, and environmental impacts.
  • Emphasizes the importance of a "trust but verify" approach when using AI tools.

Applications of AI in Library Website Usability

AI Chatbots

  • Discussed the potential and challenges of integrating AI-powered chatbots in libraries.
  • Noted that chatbots have been considered in libraries for years but require careful implementation.
  • Encouraged sharing experiences with AI chatbots like Springshare's LibChat or Google's Dialogflow.

Data Collection and Analysis

  • Stressed the need for collecting user data through surveys, interviews, and usage statistics to inform website improvements.
  • Mentioned the System Usability Scale (SUS) as a tool for evaluating user reactions to websites; a minimal scoring sketch follows below.
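
For reference, the standard SUS calculation turns ten 1–5 responses into a 0–100 score: each odd-numbered (positively worded) item contributes its response minus 1, each even-numbered (negatively worded) item contributes 5 minus its response, and the sum is multiplied by 2.5. A minimal scorer:

```python
def sus_score(responses):
    """Convert one respondent's ten 1-5 answers into a 0-100 SUS score."""
    if len(responses) != 10:
        raise ValueError("SUS uses exactly 10 items")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items are positively worded, even items negatively worded.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```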

User Personas and Stories

  • Used ChatGPT to generate user personas for CCSU's library website redesign.
  • Identified biases and stereotypes in AI-generated personas, such as lack of diversity and reinforcing stereotypes.
  • Highlighted the importance of involving community members to ensure accurate and respectful representations.

Customer/User Journey Mapping

  • Explored how AI can assist in creating user journey maps to understand user interactions with the library website.
  • Used AI to identify phases where users might disengage and to develop strategies to enhance user engagement.

Usability Testing

  • Suggested using AI to generate sample tasks for usability testing of the library website.
  • Referenced a compiled Google Sheet of usability tasks used by various libraries as a resource.

Analyzing User Feedback

  • Employed speech-to-text tools such as OpenAI's Whisper to transcribe and analyze audio and video feedback from users (a transcription sketch follows this list).
  • Used AI to summarize key points and extract actionable insights from user feedback.
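
A minimal transcription sketch using the open-source openai-whisper package (pip install openai-whisper); the audio file name is hypothetical, and the resulting transcript can then be handed to a generative tool with a prompt such as "summarize the main usability complaints":

```python
import whisper

model = whisper.load_model("base")  # a small, CPU-friendly model
result = model.transcribe("usability_session.mp3")  # hypothetical recording
print(result["text"])  # the plain-text transcript
```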

Improving Navigation and Information Architecture

  • Attempted to use AI for creating site maps and evaluating the website's navigation structure.
  • Faced challenges with AI not providing accurate or high-quality outputs when analyzing the CCSU site specifically.
  • Described difficulties in using AI to parse HTML code for card sorting exercises, encountering limitations in AI's understanding.

Design Inspiration

  • Used AI to identify exemplary academic library websites (e.g., MIT, Stanford, Michigan, Harvard, Oxford) for inspiration.
  • Considered analyzing these websites' navigation and terminology to adopt best practices.

Code Improvements

  • Utilized AI to improve website code, such as replacing "click here" links with more accessible and descriptive text (a sketch of flagging such links follows below).
  • Faced challenges with AI in generating code that met specific requirements, requiring multiple iterations and clarifications.
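
One way to catch these links without a generative model at all is a short BeautifulSoup pass (hypothetical HTML, not the code from the talk) that flags vague link text for human rewriting:

```python
from bs4 import BeautifulSoup

html = '<p>For today\'s hours, <a href="/hours">click here</a>.</p>'
soup = BeautifulSoup(html, "html.parser")

for link in soup.find_all("a"):
    text = link.get_text(strip=True).lower()
    if text in {"click here", "here", "read more"}:
        print(f"Vague link text {text!r} -> {link.get('href')} needs a rewrite")
```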

Usage Data Analysis

  • Explored using AI to define user conversion funnels and metrics in Google Analytics.
  • Aimed to understand user paths, engagement levels, and points where users drop off.

Reflections on AI

  • Noted that AI can provide generic or inaccurate suggestions not tailored to specific contexts.
  • Described AI as "weird" due to its unpredictable behavior and occasional misalignment with user intentions.
  • Emphasized the necessity for librarians to engage with AI critically, given its growing influence on the information ecosystem.

Conclusion

Elisa invites fellow librarians and colleagues to share their experiences and collaborate in exploring AI's potential in enhancing library services. She underscores the importance of continuous learning and adaptation in the rapidly evolving landscape of AI technologies.

AI in Academic Libraries: Enhancing Student Success

Harnessing the Potential of AI Technologies to Enhance Student Success

Presented by Muhammad Hassan, Linda Saleh, and Craig Anderson



Introduction

The presenters discuss the integration of artificial intelligence (AI) technologies in academic libraries and learning commons to enhance student success. They emphasize the importance of embracing AI tools to support students in various aspects of their academic journey, from research assistance to skill development.

Understanding Artificial Intelligence

Muhammad Hassan introduces AI as the simulation of human intelligence processes by machines. He notes that while AI has become a popular topic recently, it has been around for a long time. Key applications of AI mentioned include:

  • Expert systems
  • Natural language processing (NLP)
  • Machine vision
  • Speech recognition

AI and Student Success

The presenters highlight the role of libraries and learning commons in supporting student success. Common student inquiries include:

  • How to conduct research
  • Finding articles and resources
  • Achieving academic goals
  • Accessing workshops and support services
  • Improving well-being and efficiency

Muhammad emphasizes that addressing these needs is crucial for student success, and AI technologies can play a significant role in providing solutions.

Integrating AI into Workflows

The team discusses their proactive approach to incorporating AI into their institutional workflows:

  • Providing workshops for faculty and students on proper AI usage
  • Developing an AI policy to guide ethical and effective use
  • Encouraging faculty to learn and embed AI tools in teaching
  • Collecting and analyzing data using AI tools for insights on student behavior

Data Analysis and Predictive Modeling

Muhammad shares examples of how they use AI to analyze data:

  • Tracking library usage, tutoring sessions, and resource access
  • Using AI tools like ChatGPT to analyze large datasets quickly
  • Applying predictive analysis to determine optimal library hours and resource allocation
  • Creating heat maps to visualize peak usage times on their website (a minimal sketch follows below)
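
A minimal sketch of the heat-map idea using pandas and matplotlib (the CSV file and its single "timestamp" column, one row per visit, are hypothetical; the presenters did not specify their tooling):

```python
import pandas as pd
import matplotlib.pyplot as plt

visits = pd.read_csv("library_visits.csv", parse_dates=["timestamp"])
visits["day"] = visits["timestamp"].dt.day_name()
visits["hour"] = visits["timestamp"].dt.hour

# Rows = days, columns = hours, cells = visit counts.
grid = visits.pivot_table(index="day", columns="hour",
                          values="timestamp", aggfunc="count").fillna(0)

plt.imshow(grid, aspect="auto", cmap="viridis")
plt.yticks(range(len(grid.index)), grid.index)
plt.xticks(range(len(grid.columns)), grid.columns)
plt.xlabel("Hour of day")
plt.title("Library visits by day and hour")
plt.colorbar(label="Visits")
plt.show()
```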

Challenges with Sentiment Analysis

He notes that while AI excels at processing data, it still struggles with sentiment analysis. Libraries need to verify that the models they rely on interpret sentiment correctly and work to correct deficiencies where they find them.

Student Interactions with AI

Examples from the Learning Commons

Craig Anderson shares anecdotes illustrating how students interact with AI:

  • A student used QuillBot, an AI tool, to find articles but received fabricated references. She was unaware that the articles were not real.
  • ESL students used translation tools for assignments, which were flagged by AI detection software as plagiarized, leading to misunderstandings.
  • A professor wrongly accused students of cheating after asking ChatGPT whether it had written their papers, not realizing the tool can falsely affirm authorship.

Concerns and Misunderstandings

Students worry about being falsely accused of plagiarism due to AI tools. These examples highlight the need for proper education on AI usage and limitations.

When Not to Use AI

Muhammad addresses a question about situations where AI should not be used to ensure student success:

  1. Foundational Learning: In programming courses, students should first learn to code without AI assistance to build a solid understanding.
  2. Writing Skills: In writing-intensive courses, reliance on AI can hinder the development of essential writing abilities.
  3. Communication Skills: In communication classes, students benefit more from interacting with peers rather than AI.

He emphasizes that AI should enhance, not replace, foundational learning and interpersonal interactions.

AI as a Supplementary Tool

Analogy with Calculators

Craig draws an analogy between AI tools and calculators in education:

  • Just as calculators are introduced after students understand basic arithmetic, AI should be used after foundational skills are developed.
  • AI can then serve as a tool to enhance and advance learning.

Embracing AI Literacy

Linda Saleh discusses the importance of AI literacy and how AI tools can supplement student learning in areas beyond research:

  • Reading and comprehending scholarly articles
  • Preparing presentations and participating in scholarly conversations
  • Developing coding skills

AI Tools for Skill Development

Reading Assistance

Linda highlights AI tools that help students understand complex academic texts:

  • ChatPDF: Allows students to upload PDFs and ask questions to gain better understanding.
  • SciSpace: Provides access to open-access scholarly articles with a co-pilot feature for interactive learning.

Presentation and Public Speaking

AI tools can assist students in creating and delivering effective presentations:

  • SlidesGo, Clipchamp, SlidesAI: Help in developing visual presentations.
  • Yoodli: An AI tool that provides feedback on practice speeches, suggests improvements, and anticipates audience questions.

Coding Assistance

AI tools like Blackbox AI support students in learning programming by offering coding assistance and troubleshooting help.

Balancing AI Use and Critical Thinking

In response to concerns about AI potentially hindering critical thinking skills, the presenters emphasize:

  • AI tools should be part of a broader set of resources available to students.
  • Faculty and support services play a crucial role in ensuring students continue to develop essential skills independently.
  • Teaching students how to use AI properly is vital for their success in an evolving technological landscape.

Ethical Considerations and Policy Development

The presenters acknowledge the importance of discussing the ethics of AI use in education:

  • Institutions should have conversations about AI ethics at the start of each semester.
  • Developing clear policies and guidelines helps prevent misuse and misunderstandings.
  • Emphasizing transparency, authorship, and copyright considerations is essential.

Conclusion

The team concludes by reinforcing the potential of AI technologies to enhance student success when used appropriately. They advocate for defining what success means for students and then integrating AI tools thoughtfully to support that vision.

The Boundaries of Authorship: Can AI Be Considered an Author?

Generative AI and Authorship

Presented by Robin Kear, Academic Librarian at the University of Pittsburgh



Introduction

Robin Kear discusses the question: Can generative AI (GenAI) be an author? She explores the implications of this question, considering the rapid advancement of AI technology and its impact on authorship, creativity, and responsibility.

Can GenAI Be an Author?

Kear reflects on her concerns regarding AI's potential to become sentient or possess its own consciousness and agency. She believes that, with the current structure of generative tools, the answer is no. GenAI reacts, suggests, anticipates, and amalgamates existing content but does not create something entirely new.

AI-Generated Content and Authorship

Using an example of an image created by a human using DALL-E (an AI image generator), Kear prompts the audience to consider where authorship resides in such creations. She emphasizes the importance of understanding the human aspects of being an author and creator.

What Makes an Author?

Kear identifies four key human aspects of authorship:

  1. Creativity: The idea must originate from the individual. While influenced by experiences and environments, humans create new things that didn't exist before.
  2. Agency: Authors have the will to decide what to do with their ideas, choosing how, when, and what to produce.
  3. Moral Responsibility: Authors are morally accountable for what they put into the world, and their work should be discoverable and attributable to them.
  4. Legal Responsibility: Authors accept legal responsibility for their creations in the public and economic spheres, including the publishing industry.

Research on AI and Authorship in Academic Journals

Kear shares a research project conducted with colleague Amy Jenkins, examining how research journals are addressing AI and authorship. They analyzed top journals across various disciplines to find policies and guidance on AI authorship.

Methodology

  • Used Journal Citation Reports to identify impactful journals.
  • Selected top three journals in chosen categories based on impact factor.
  • Searched journal and publisher websites for AI authorship policies.

Findings Based on the Four Aspects of Authorship

Creativity and Agency

  • AI Cannot Be an Author: All journals agreed that an author must be a human being.
  • Lack of Agency: AI does not have the ability to act independently or be accountable.
  • AI in Images: Generally not permissible, especially in scientific contexts due to potential harm to scientific advancement.
  • Writing Assistant vs. Data Analysis: A nuanced difference exists between using AI as a writing tool and using it for data insights, which requires disclosure.

Moral Responsibility

  • Personal Accountability: Authors must be accountable for their content, hence AI cannot be an author.
  • Disclosure Requirement: Use of AI tools must be disclosed, with specifics on how and where it was used.
  • Publication Process: Different guidelines exist for authors, peer reviewers, and manuscript reviewers.
  • Confidentiality Concerns: Public AI tools like ChatGPT should not be used for peer review due to confidentiality and proprietary rights.

Legal Responsibility

  • Liability: Journals could be held liable for AI-generated content, so responsibility is shifted to the author.
  • Verification: Authors are responsible for verifying the accuracy of AI-generated content, including potential errors or plagiarism.
  • Ethical Breaches: Authors are liable for any breaches of publication ethics, even if AI tools were used.
  • Guidance from COPE: The Committee on Publication Ethics emphasizes authors' full responsibility for their manuscripts.

Reconsidering the Role of AI in Creative Endeavors

Kear poses critical questions about how we should view AI in the context of creativity:

  • Should AI be considered an assistant or helper rather than a creator?
  • Can AI serve as a sounding board for ideas or help augment human creativity?
  • Where is the ethical line between presenting something as one's own idea versus a technology-created idea?
  • Given that AI responses are derivative, what is its usefulness in creative work?

Reflection on Automated Creativity

She references the 1982 World's Fair painting robot as an early example of automated creativity, noting that while simplistic compared to current AI, it prompts consideration of the evolving role of technology in authorship.

Further Considerations

Kear discusses additional points stemming from her findings and university discussions:

  • Changing Acceptance: The use of AI in writing may become more accepted over time, potentially becoming seamless and expected.
  • Reflecting Existing Challenges: AI often mirrors societal biases and existing challenges related to transparency, integrity, and accountability.
  • Core Principles: The fundamental principles of research and publishing should continue to guide the use of AI in authorship.

Question and Answer Session

To What Extent Do Humans Also Derive from Other Content?

Response: Kear acknowledges that humans are influenced by their environment and existing works. In academic writing, literature reviews are essential for building upon previous research, but authors strive to contribute something new to the conversation.

At What Point Is AI Used or Not Used?

Response: She differentiates between general writing tools (like Microsoft Editor or Grammarly) and generative AI tools. While tools like Microsoft Copilot are still developing, she focuses on the implications of generative AI in authorship.

If a Student Uses an AI Tool to Fully Write a Paper, Who Is the Author?

Response: Kear advises against students using AI to write entire papers. Such papers may contain inaccuracies, lack depth, and could be easily identified by instructors. Students should be cautious about relying on AI for academic work.

Future Value of Writing in Editing vs. Writing Itself

Response: Currently, the value of generative AI lies in its ability to assist rather than replace human creativity. She mentions authors using AI tools based on their own work to aid in writing, but emphasizes that AI should complement, not replace, human authorship.

Conclusion

Kear concludes by emphasizing the importance of maintaining core principles in research and publishing as AI continues to evolve. Transparency, integrity, attribution, and accountability should guide any use of AI in authorship and creative endeavors.