Thursday, November 28, 2024

Demystifying Ethical AI: Understanding the Jargon and Principles

The Many Flavors of AI: Terms, Jargon, and Definitions You Need to Know


Introduction

The presenter discusses the rapidly evolving landscape of artificial intelligence (AI), particularly focusing on the terminology and jargon associated with ethical AI. Using an ice cream analogy, the presentation aims to help librarians and information professionals understand and keep up with various AI concepts to better assist their patrons, colleagues, and stakeholders.

Importance for Librarians

  • Librarians have a foundational responsibility to understand AI tools and systems.
  • AI is not just filtering information but also creating it, affecting how information is accessed and used.
  • Similar to past technological shifts (e.g., Google, Wikipedia), AI is a bellwether of change in information science.
  • Librarians need to lead the charge in ethical AI usage and education.

The Ice Cream Analogy of Ethical AI

The presenter uses different ice cream flavors to represent various terms related to ethical AI:

1. Ethical AI (Vanilla)

  • Principles and values guiding the development, deployment, and use of AI systems.
  • Focuses on fairness, accountability, and transparency.
  • Ensures AI aligns with societal values and ethical principles.

2. Responsible AI (Chocolate)

  • Actions and practices organizations should take to ensure AI is developed and used responsibly.
  • Includes risk management, stakeholder engagement, and governance.
  • Emphasizes organizational norms and the practical implementation of ethical standards.

3. Transparent AI (Strawberry)

  • AI systems where the inner workings are visible (glass-box vs. black-box systems).
  • Transparency in development processes and usage purposes.
  • Not necessarily explainable; complexity can still hinder understanding.

4. Explainable AI (Pistachio)

  • AI systems whose operations can be understood and explained to users.
  • Not always transparent; proprietary systems may be explainable without revealing inner workings.
  • Important for building trust and accountability.

5. Accessible AI (Peach)

  • AI systems that are usable by a wide range of people, including those with disabilities.
  • Focus on inclusivity in design and implementation.
  • Examples include AI with spoken captions or image descriptions.

6. Open AI (Not to be Confused with OpenAI)

  • AI systems with open-source code, open development environments, and accessible documentation.
  • Emphasizes transparency and community involvement.
  • Being open doesn't necessarily mean being ethical or responsible.

7. Trustworthy AI (Blueberry)

  • AI systems that are reliable and operate as intended.
  • Trustworthiness depends on who is assessing it and for what purpose.
  • Often paired with transparency and explainability but not guaranteed.

8. Consistent AI (Lemon Sherbet)

  • AI systems that operate reliably and produce consistent results.
  • Consistency does not imply trustworthiness or ethical behavior.
  • Consistent AI may consistently exhibit biases or other issues.

Key Takeaways

  • Terminology around AI can be misleading; terms like "transparent," "open," or "trustworthy" are not guarantees of ethical behavior.
  • Librarians should critically evaluate AI systems beyond surface-level labels.
  • Understanding these distinctions helps librarians guide users in the proper use of AI tools.

Recommendations for Staying Informed

The presenter suggests following key figures in the field of ethical AI to stay updated:

  • Timnit Gebru: Former Google researcher specializing in AI ethics.
  • Abhishek Gupta
  • Carey Miller
  • Reid Blackman: Hosts a podcast series on AI ethics and responsibility.
  • Laura Mueller
  • Ryan Carrier
  • Kurt Cagle
  • Norman Mooradian: Professor at San Jose State University with extensive research on ethical AI.

Q&A Highlights

During the question and answer session, the following points were discussed:

1. Importance of AI Literacy

  • Librarians should educate themselves and patrons about AI tools.
  • AI literacy includes understanding the limitations and proper uses of AI.

2. Transparency and Open AI

  • OpenAI's transparency has been questioned; "open" does not always mean fully transparent.
  • Critical evaluation of AI companies and their claims is necessary.

3. Explaining AI to Users

  • AI systems like ChatGPT predict language based on training data, which includes a vast range of internet content.
  • Librarians should guide users on when and how to use AI tools appropriately.
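As a rough illustration of the "predicting language" idea, here is a toy bigram model that picks the next word based purely on frequencies seen in its training text. This is a deliberately simplified sketch, not the actual architecture of ChatGPT (which uses a neural transformer over tokens), but it shows why output quality depends entirely on the training data:

```python
from collections import Counter, defaultdict

# Toy "training data": the model only knows these words, in this order.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word (a bigram table).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in training."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))   # "cat" — it followed "the" most often
print(predict_next("fish"))  # None — never seen mid-sentence in training
```

The model can only echo patterns from its corpus: ask it about a word it never saw, and it has nothing to offer. Large language models are vastly more sophisticated, but the same dependence on training data underlies both their fluency and their limitations.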

4. Understanding Algorithms

  • An algorithm is the set of step-by-step rules, expressed in code, that dictates how an AI system processes data and produces output.
  • Algorithms can embed biases based on how they process training data.
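A hypothetical sketch of how bias can enter through training data: a trivial "predict the most common past outcome per group" rule, trained on historically skewed records, simply reproduces that skew. The loan-decision scenario and the data below are invented for illustration only:

```python
from collections import Counter

# Hypothetical historical records (skewed training data): most past
# applicants from group B were denied, regardless of individual merit.
training = [
    ("A", "approve"), ("A", "approve"), ("A", "deny"),
    ("B", "deny"), ("B", "deny"), ("B", "approve"), ("B", "deny"),
]

def train_majority_rule(records):
    """Learn the most common past decision per group — a naive algorithm."""
    by_group = {}
    for group, decision in records:
        by_group.setdefault(group, Counter())[decision] += 1
    return {g: c.most_common(1)[0][0] for g, c in by_group.items()}

model = train_majority_rule(training)
print(model)  # {'A': 'approve', 'B': 'deny'} — historical bias persists
```

Nothing in the code mentions bias, yet the learned rule discriminates: the algorithm faithfully encoded whatever pattern the training data contained. Real machine-learning systems are far more complex, but the mechanism by which biased data yields biased behavior is the same in kind.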

5. FAIR AI

  • FAIR stands for Findable, Accessible, Interoperable, and Reusable.
  • Applying FAIR principles to AI and machine learning is an emerging area.

6. Hallucinations in AI

  • AI hallucinations occur when AI systems generate incorrect or fabricated information.
  • Important for librarians to educate users about verifying AI-generated content.

Conclusion

The presenter emphasizes that AI is a tool—neither inherently good nor bad—and it's crucial for librarians to stay informed and lead in ethical AI practices. By understanding the nuances of AI terminology and concepts, librarians can better assist users and influence responsible AI development and use.
