Friday, November 29, 2024

The Impact of AI on Libraries: A Boon or Doom?

Libraries and AI: Boon or Doom? – A Comprehensive Discussion




Introduction

The advent of Artificial Intelligence (AI) has sparked intense discussions across various sectors, and libraries are no exception. In Session 69 of the "Libraries in Response" series, experts gathered to explore the implications of AI on libraries and librarianship. Titled "Libraries and AI: Boon or Doom?", the session delved into how AI technologies like ChatGPT are influencing library services, the ethical considerations, and the future role of librarians in an AI-driven world.

Session Overview

The session was moderated by Don Means, founder of the Gigabit Libraries Network, and featured three distinguished speakers:

  • Dr. Soo Young Rieh – Associate Dean for Education and Professor at the University of Texas at Austin's School of Information.
  • Dr. Beth Patin – Assistant Professor at Syracuse University's School of Information Studies.
  • Dr. Joe Janes – Associate Professor at the University of Washington's Information School.

The discussion centered on the impact of AI on libraries, focusing on non-technical aspects such as ethical implications, equity, information literacy, and the future of library education.

Key Discussions and Insights

1. The Rise of AI and Its Impact on Libraries

Don Means opened the session by highlighting the rapid adoption of AI technologies like ChatGPT, noting that it reached 100 million users within two months—a record-breaking achievement. This unprecedented growth signifies a profound public interest in AI and raises questions about its implications for libraries.

The central theme of the session revolved around understanding whether AI is a boon or doom for libraries. The speakers aimed to unpack this dichotomy by exploring various facets of AI's influence on library services and the profession at large.

2. AI Literacy and Education for Librarians

Dr. Soo Young Rieh's Perspective

Dr. Rieh discussed her work on an Institute of Museum and Library Services (IMLS) grant aimed at enhancing AI literacy among librarians. She emphasized the need for continuous education to help librarians understand and leverage AI technologies effectively.

Key points from her discussion include:

  • Gap in AI Knowledge: Many librarians recognize the importance of AI but lack the resources and training to engage with it meaningfully.
  • IDEAL Institute on AI: Dr. Rieh and her colleagues established the IDEAL Institute, focusing on Innovation, Disruption, Enquiry, Access, and Learning. The program offers a week-long intensive training for librarians, covering the basics of AI, ethical implications, project management, and team-building skills.
  • Building a Community of Practice: The institute aims to create a supportive community where librarians can share ideas, collaborate on projects, and continue learning about AI beyond the initial training.
  • Challenges in Curriculum Development: Dr. Rieh highlighted the lack of AI-focused courses in library science programs, with only a small percentage offering courses on AI, machine learning, or natural language processing.

She stressed the importance of interdisciplinary collaboration, bridging the gap between computer science and library science to prepare librarians for the AI era.

3. Equity, Bias, and Representation in AI

Dr. Beth Patin's Perspective

Dr. Patin focused on the social implications of AI, particularly concerning marginalized communities and indigenous knowledge systems. She raised concerns about how AI models, which are trained on vast amounts of internet data, often exclude or misrepresent voices from historically marginalized groups.

Key points from her discussion include:

  • Bias in AI Training Data: AI models replicate existing societal biases because they are trained on data that reflects those biases.
  • Exclusion of Marginalized Voices: Communities that lack substantial digital footprints are underrepresented in AI models, leading to a continuation of epistemicide—the erasure of knowledge systems.
  • Algorithmic Reparation: Dr. Patin advocated for intentional efforts to include marginalized voices in AI training data. Libraries play a crucial role in digitizing and making accessible the histories and knowledge of these communities.
  • Critical Literacy and Librarian Training: She emphasized the need for librarians to be trained in critical race theory and information ethics to recognize and address biases in AI.
  • Impact on Information Literacy: With AI-generated content becoming more prevalent, librarians must help users develop skills to critically evaluate information sources.

Dr. Patin underscored that librarians have a responsibility to ensure that AI technologies do not perpetuate systemic inequities and that they work towards creating more inclusive and representative AI systems.

4. The Nature of Documents and AI-Generated Content

Dr. Joe Janes' Perspective

Dr. Janes brought a historical and philosophical lens to the discussion, examining how AI challenges traditional notions of documents and authorship.

Key points from his discussion include:

  • Redefining Documents: AI-generated content blurs the lines between traditional documents created by humans and machine-generated text.
  • Authenticity and Authority: Librarians must grapple with questions about the authenticity of AI-generated content and how to provide context and credibility assessments to users.
  • Cataloging Challenges: The influx of AI-generated materials poses challenges for cataloging and organizing library collections.
  • Impact on Cultural Records: AI content could become part of the cultural record, necessitating strategies for preservation and access while acknowledging its unique origins.
  • Economic Factors: Dr. Janes noted that AI-generated content is cheaper to produce, which might lead resource-strapped institutions to rely on it more heavily, potentially at the expense of quality and representation.

He highlighted the need for librarians to develop new frameworks and policies to address the complexities introduced by AI in the realm of information creation and dissemination.

5. Ethical Considerations and the Role of Librarians

The speakers collectively emphasized the ethical responsibilities of librarians in the age of AI. Key considerations include:

  • Information Literacy Education: Librarians must teach users how to critically evaluate AI-generated content, understand its limitations, and recognize potential biases.
  • Advocacy for Inclusivity: Librarians should advocate for the inclusion of diverse voices in AI datasets and work towards mitigating biases in AI systems.
  • Policy Development: There is a need for developing policies and guidelines on how libraries handle AI-generated content, including issues of citation, authenticity, and preservation.
  • Collaboration Across Disciplines: Building partnerships between library science and computer science can help create AI tools that are ethical, equitable, and aligned with the values of librarianship.
  • User Privacy and Data Ethics: Librarians must be vigilant about user privacy, especially as AI systems often rely on large amounts of data that could infringe on individual privacy rights.

6. Challenges and Opportunities for Rural and Small Libraries

An important point raised during the session was the impact of AI on rural and small libraries, which often lack resources and professionally trained staff.

Key considerations include:

  • Capacity Building: There is a need for state libraries and larger institutions to support small libraries in building capacity to engage with AI technologies.
  • Equity in Access: Ensuring that patrons in rural areas have access to the benefits of AI without exacerbating existing inequalities.
  • Training and Education: Developing scalable training programs for library staff who may not have formal library science education.
  • Community Engagement: Small libraries can play a pivotal role in educating their communities about AI and its implications.

The discussion highlighted the importance of inclusivity and support to prevent a digital divide in AI literacy and access.

7. The Future Role of Librarians

The session concluded with reflections on how librarians can navigate the evolving landscape shaped by AI:

  • Embracing AI as a Tool: Rather than viewing AI solely as a threat, librarians can leverage AI technologies to enhance services, such as automating cataloging processes or providing personalized recommendations.
  • Focus on Human-Centered Services: With AI handling routine tasks, librarians can dedicate more time to community engagement, programming, and supporting users' informational needs.
  • Continual Learning: The dynamic nature of AI necessitates ongoing professional development and staying informed about technological advancements.
  • Ethical Stewardship: Librarians must uphold ethical standards, advocating for transparency, accountability, and fairness in AI applications.

Conclusion

The "Libraries and AI: Boon or Doom?" session provided a multifaceted exploration of AI's impact on libraries. The consensus among the speakers is that AI presents both challenges and opportunities. While there are legitimate concerns about bias, equity, and the authenticity of AI-generated content, there is also potential for AI to enhance library services and empower librarians to focus on more strategic, community-oriented roles.

Librarians are called upon to be proactive in addressing the ethical implications of AI, to advocate for inclusive and fair AI practices, and to equip themselves and their patrons with the skills necessary to navigate an AI-influenced information landscape.

Thursday, November 28, 2024

Unlocking Every Child's Potential: Leveraging AI in Education

Transforming Education with Artificial Intelligence



Introduction

The speaker shares a personal journey from being a student predicted to fail academically to achieving exceptional academic success. This transformation was not due to any special innate abilities but rather a discovery of fundamental truths about how the brain naturally learns. The talk emphasizes the mismatch between traditional educational methods and the brain's natural learning processes and proposes leveraging artificial intelligence (AI) to personalize education and unlock every child's potential.

Personal Story

  • Doctors told the speaker's mother that there was a 50% chance he wouldn't survive birth and, if he did, he might be brain-damaged and unlikely to achieve much.
  • He was slow to walk, talk, and learn, struggling to maintain a C average in primary and high school.
  • Despite this, he earned a PhD in cognitive science, received a university medal for outstanding academic achievement, and ranked in the top 1% of the student population.
  • The key to this dramatic turnaround was understanding and applying basic principles of how the brain learns.

The Brain's Natural Learning Processes

Innate Thirst for Knowledge

  • Humans are hardwired to learn and derive joy from learning.
  • The highest concentration of endorphin receptors is found in the brain's learning centers.
  • We learn fundamental skills like crawling, grasping, and walking through self-directed exploration and play.

The Inverted U-Shape of Learning Pleasure

  • Endorphin release in the learning centers follows an inverted U-shape concerning familiarity.
  • Things that are too familiar are boring; things that are too unfamiliar are aversive.
  • Information on the periphery of our knowledge—slightly challenging yet achievable—is highly pleasurable.
  • Examples:
    • Facebook is addictive because it constantly provides new information at the edge of our knowledge.
    • Children naturally explore and learn through activities that extend their capabilities.

Robotic Experiments Demonstrating Natural Learning

  • The speaker conducted experiments using robots equipped with basic vision, hearing, and reflexes, along with a 'happiness' function (preference for exploring new but not overwhelming experiences); a toy illustration of this reward curve follows this list.
  • Robots initially behaved randomly but gradually learned hand-eye coordination and how to interact with their environment.
  • This mimics how children learn through self-exploration and finding joy in new experiences.
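
As a toy illustration (not the speaker's actual experiment), the 'happiness' function described in this list can be modeled as a reward that peaks at intermediate novelty, so a simple agent keeps choosing experiences just beyond what it already knows. The sweet-spot and width values below are arbitrary assumptions.

```python
import math

def happiness(novelty, sweet_spot=0.5, width=0.18):
    """Inverted-U reward: highest for moderately novel experiences,
    low for both boring (too familiar) and overwhelming (too unfamiliar) ones."""
    return math.exp(-((novelty - sweet_spot) ** 2) / (2 * width ** 2))

# Candidate experiences, each scored for novelty from 0 (fully familiar) to 1 (completely alien).
experiences = {
    "repeat a known game": 0.05,
    "slightly harder puzzle": 0.45,
    "random static noise": 0.95,
}

# A minimal agent simply picks whatever currently feels most rewarding.
choice = max(experiences, key=lambda name: happiness(experiences[name]))
print("Chosen activity:", choice)  # -> "slightly harder puzzle"
print({name: round(happiness(n), 2) for name, n in experiences.items()})
```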

The Mismatch in Traditional Education

  • Traditional classrooms present a set curriculum at a set pace, not accounting for individual differences.
  • Some students are ahead and bored; others are behind and find learning aversive.
  • Few students receive information at the optimal point for their learning—the periphery of their knowledge.
  • A study in North America showed that 63% of students are disengaged in school.
  • This disengagement is not the students' fault but a systemic issue.

Need for Transformation in Education

  • Referencing Sir Ken Robinson, the speaker asserts that education doesn't need reform but transformation.
  • Advocates for less standardization and more personalization in learning.
  • Recognizes the challenge of asking teachers, whose profession already ranks among the top 10 most stressful occupations, to personalize education for each student.

Leveraging Artificial Intelligence in Education

The speaker proposes that AI can serve as an intelligent tutor, personalizing education to match each student's learning needs.

Three Levels of AI in Education

Level 1: Rote Learning

  • AI can optimize rote learning through spaced repetition and active recall.
  • Spaced repetition involves reviewing information just before it is likely to be forgotten, strengthening memory retention (a minimal scheduling sketch follows this list).
  • Active recall (e.g., using flashcards) is more effective than passive study methods like re-reading or highlighting.
  • AI systems can track what each student knows and present the right material at the right time, which is impossible for a teacher to manage individually.
  • The speaker demonstrates a software system that allows teachers to create interactive lessons with personalized quizzes and exercises.
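
A minimal sketch of the spaced-repetition and active-recall scheduling idea described in this list. This is an illustrative simplification, not the speaker's software: the doubling interval and the reset-on-failure rule are assumptions standing in for a real model of each learner's forgetting curve.

```python
from datetime import date, timedelta

class Card:
    """One fact tracked for a single student."""
    def __init__(self, prompt, answer):
        self.prompt = prompt
        self.answer = answer
        self.interval_days = 1      # days to wait before the next review
        self.due = date.today()     # next scheduled review date

    def review(self, recalled_correctly, today=None):
        """Update the schedule after an active-recall attempt (e.g., a flashcard quiz)."""
        today = today or date.today()
        if recalled_correctly:
            # Wait roughly twice as long next time, so the card reappears
            # just before it is likely to be forgotten.
            self.interval_days *= 2
        else:
            # Failed recall: restart the spacing schedule.
            self.interval_days = 1
        self.due = today + timedelta(days=self.interval_days)

def due_today(cards):
    """Only quiz the cards whose review date has arrived."""
    return [c for c in cards if c.due <= date.today()]

# Example: quiz whatever is due, then reschedule each card.
deck = [Card("Capital of France?", "Paris"), Card("7 x 8", "56")]
for card in due_today(deck):
    card.review(recalled_correctly=True)   # pretend the student answered correctly
    print(card.prompt, "-> next review on", card.due)
```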

Level 2: Active Learning

  • Generative AI can create its own questions and problems tailored to the student's skill level.
  • For example, in music education, AI can generate music pieces that match the student's abilities and provide feedback.
  • Applicable to various subjects like touch typing, mathematics, and more.
  • Transforms the classroom by replacing traditional lectures with AI-enabled e-learning platforms.
  • Frees teachers to focus on inspiring students, explaining the importance of the material, and facilitating group projects that link learning to real-world interests.

Level 3: Integrative AI (The Future)

  • Combines generative AI with advanced technologies like virtual reality and gesture recognition.
  • Immersive learning experiences can be created, such as virtual environments for language learning or using gesture recognition to teach dance, martial arts, or sign language.
  • Aims to present exactly what the student needs to learn at that moment for optimal learning.

The Potential Impact of AI on Education

  • AI is advancing exponentially and has the potential to transform education fundamentally.
  • If implemented correctly, it can unlock every child's hidden potential and enable them to live full, rich, and valued lives.

Call to Action

The speaker emphasizes that while AI offers powerful tools, there are things everyone can do now to improve education.

"Education is the kindling of a flame, not the filling of a vessel."

– Attributed to Socrates

  • Every child needs someone to delight in them and help them find their natural joy and curiosity.
  • We should help children explore topics that fascinate them at their own pace.
  • Encourages educators and individuals to be the ones who help kindle the flame of learning in children.

Conclusion

The speaker concludes by reiterating the transformative power of aligning education with how the brain naturally learns and the pivotal role AI can play in this process. By embracing personalized learning through AI, we can move away from a one-size-fits-all education system to one that truly nurtures each child's unique potential.

The Intersection of Artificial Intelligence and Structural Racism: Understanding the Connection

Understanding Structural Racism in AI Systems

Presented by Craig Watkins, Visiting Professor at MIT and Professor at the University of Texas at Austin


Introduction

Craig Watkins discusses the intersection of artificial intelligence (AI) and structural racism, emphasizing the critical need to address systemic inequalities in the development and deployment of AI technologies. He highlights initiatives at MIT and the University of Texas at Austin aimed at fostering interdisciplinary approaches to create fair and equitable AI systems that have real-world positive impacts.

Key Points

The Impact of AI on Marginalized Communities

  • Instances where facial recognition software has falsely identified Black men, leading to wrongful arrests.
  • These cases underscore the potential of AI to replicate systemic forms of inequality if not carefully designed and monitored.

Challenges of Defining Fairness in AI

  • Machine learning practitioners have developed over 20 different definitions of fairness, highlighting its complexity (two common definitions are sketched after this list).
  • Debate over whether AI models should be aware of race to prevent implicit biases or unaware to avoid explicit discrimination.
  • Fair algorithms may not address deeply embedded structural inequalities if they assume equal starting points for all individuals.
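
To make the point about competing fairness definitions concrete, here is a small sketch (with invented toy data) comparing two widely used criteria: demographic parity, which compares positive-prediction rates across groups, and equal opportunity, which compares true-positive rates. A model can satisfy one while violating the other, which is part of why practitioners disagree.

```python
def positive_rate(rows, group):
    """Share of people in `group` who received the positive prediction (demographic parity)."""
    sub = [r for r in rows if r["group"] == group]
    return sum(r["pred"] == 1 for r in sub) / len(sub)

def true_positive_rate(rows, group):
    """Among people in `group` who truly qualified, the share predicted to qualify (equal opportunity)."""
    sub = [r for r in rows if r["group"] == group and r["label"] == 1]
    return sum(r["pred"] == 1 for r in sub) / len(sub)

# Invented loan-style records: group membership, true label (qualified?), and model prediction.
data = [
    {"group": "A", "label": 1, "pred": 1}, {"group": "A", "label": 0, "pred": 1},
    {"group": "A", "label": 1, "pred": 1}, {"group": "A", "label": 0, "pred": 0},
    {"group": "B", "label": 1, "pred": 1}, {"group": "B", "label": 1, "pred": 0},
    {"group": "B", "label": 0, "pred": 0}, {"group": "B", "label": 0, "pred": 0},
]

print("approval-rate gap:", positive_rate(data, "A") - positive_rate(data, "B"))                  # 0.75 - 0.25
print("true-positive-rate gap:", true_positive_rate(data, "A") - true_positive_rate(data, "B"))   # 1.0 - 0.5
```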

Understanding Structural Racism

  • Structural racism refers to systemic inequalities embedded within societal institutions and systems.
  • It manifests in interconnected disparities across various domains, such as housing, credit markets, education, and health.
  • These disparities are often less visible and more challenging to address than interpersonal racism.

Case Study: Housing and Credit Markets

  • Homeownership is a primary pathway to wealth accumulation and access to quality education, health care, and social networks.
  • Discriminatory practices in credit markets have historically limited access to homeownership for marginalized groups.
  • AI-driven financial services aiming to address biases may inadvertently introduce data surveillance and privacy concerns.

Interconnected Systems of Inequality

  • Disparities in one system (e.g., credit markets) are linked to disparities in others (e.g., housing, education).
  • Addressing structural racism requires understanding and tackling these interconnected systems holistically.
  • Designing AI models that account for this complexity is a significant computational and ethical challenge.

The Role of Education and Interdisciplinary Collaboration

  • Emphasizes the importance of training both AI developers and users to recognize and mitigate biases.
  • Advocates for interdisciplinary approaches combining technical expertise with social science insights.
  • Highlights initiatives at MIT and UT Austin focused on integrating these perspectives into AI research and education.

Conclusion

Craig Watkins calls for the development of AI systems that not only avoid perpetuating systemic inequalities but actively work to dismantle them. He stresses the need for educating the next generation of AI practitioners and users to make ethical, responsible decisions, and to be aware of the societal impact of their work.

Key Quote

Referencing Robert Williams, a man wrongly arrested due to faulty facial recognition software:

"This obviously isn’t me. Why am I here?"

The police responded, "Well, it looks like the computer got it wrong."

This exchange underscores the profound consequences of unchecked AI systems and the urgent need for responsible design and implementation.

Demystifying Ethical AI: Understanding the Jargon and Principles

The Many Flavors of AI: Terms, Jargon, and Definitions You Need to Know


Introduction

The presenter discusses the rapidly evolving landscape of artificial intelligence (AI), particularly focusing on the terminology and jargon associated with ethical AI. Using an ice cream analogy, the presentation aims to help librarians and information professionals understand and keep up with various AI concepts to better assist their patrons, colleagues, and stakeholders.

Importance for Librarians

  • Librarians have a foundational responsibility to understand AI tools and systems.
  • AI is not just filtering information but also creating it, affecting how information is accessed and used.
  • Similar to past technological shifts (e.g., Google, Wikipedia), AI is a bellwether of change in information science.
  • Librarians need to lead the charge in ethical AI usage and education.

The Ice Cream Analogy of Ethical AI

The presenter uses different ice cream flavors to represent various terms related to ethical AI:

1. Ethical AI (Vanilla)

  • Principles and values guiding the development, deployment, and use of AI systems.
  • Focuses on fairness, accountability, and transparency.
  • Ensures AI aligns with societal values and ethical principles.

2. Responsible AI (Chocolate)

  • Actions and practices organizations should take to ensure AI is developed and used responsibly.
  • Includes risk management, stakeholder engagement, and governance.
  • Emphasizes organizational norms and the practical implementation of ethical standards.

3. Transparent AI (Strawberry)

  • AI systems where the inner workings are visible (glass-box vs. black-box systems).
  • Transparency in development processes and usage purposes.
  • Not necessarily explainable; complexity can still hinder understanding.

4. Explainable AI (Pistachio)

  • AI systems whose operations can be understood and explained to users.
  • Not always transparent; proprietary systems may be explainable without revealing inner workings.
  • Important for building trust and accountability.

5. Accessible AI (Peach)

  • AI systems that are usable by a wide range of people, including those with disabilities.
  • Focus on inclusivity in design and implementation.
  • Examples include AI with spoken captions or image descriptions.

6. Open AI (Not to be Confused with OpenAI)

  • AI systems with open-source code, open development environments, and accessible documentation.
  • Emphasizes transparency and community involvement.
  • Being open doesn't necessarily mean being ethical or responsible.

7. Trustworthy AI (Blueberry)

  • AI systems that are reliable and operate as intended.
  • Trustworthiness depends on who is assessing it and for what purpose.
  • Often paired with transparency and explainability but not guaranteed.

8. Consistent AI (Lemon Sherbet)

  • AI systems that operate reliably and produce consistent results.
  • Consistency does not imply trustworthiness or ethical behavior.
  • Consistent AI may consistently exhibit biases or other issues.

Key Takeaways

  • Terminology around AI can be misleading; terms like "transparent," "open," or "trustworthy" are not guarantees of ethical behavior.
  • Librarians should critically evaluate AI systems beyond surface-level labels.
  • Understanding these distinctions helps librarians guide users in the proper use of AI tools.

Recommendations for Staying Informed

The presenter suggests following key figures in the field of ethical AI to stay updated:

  • Timnit Gebru: Former Google researcher specializing in AI ethics.
  • Abhishek Gupta
  • Carey Miller
  • Reid Blackman: Hosts a podcast series on AI ethics and responsibility.
  • Laura Mueller
  • Ryan Carrier
  • Kurt Cagle
  • Norman Mooradian: Professor at San Jose State University with extensive research on ethical AI.

Q&A Highlights

During the question and answer session, the following points were discussed:

1. Importance of AI Literacy

  • Librarians should educate themselves and patrons about AI tools.
  • AI literacy includes understanding the limitations and proper uses of AI.

2. Transparency and Open AI

  • OpenAI's transparency has been questioned; "open" does not always mean fully transparent.
  • Critical evaluation of AI companies and their claims is necessary.

3. Explaining AI to Users

  • AI systems like ChatGPT predict language based on training data, which includes a vast range of internet content (a toy illustration follows this list).
  • Librarians should guide users on when and how to use AI tools appropriately.
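
A toy illustration of "predicting language based on training data": a word-pair counter that always suggests the continuation it has seen most often. Real models like ChatGPT work over far larger contexts and vocabularies, but the core idea of predicting the next token from observed patterns is the same.

```python
from collections import Counter, defaultdict

# Tiny "training corpus"; a real model is trained on a vast range of internet text.
training_text = (
    "the library offers research help the library offers study rooms "
    "the librarian offers research consultations"
).split()

# Count which word follows which.
next_words = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_words[current][following] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in the training text."""
    counts = next_words.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("library"))  # -> "offers"
print(predict_next("offers"))   # -> "research" (seen twice, vs. "study" once)
```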

4. Understanding Algorithms

  • An algorithm is the foundational code that dictates how an AI system operates.
  • Algorithms can embed biases based on how they process training data.

5. FAIR AI

  • FAIR stands for Findable, Accessible, Interoperable, and Reusable.
  • Applying FAIR principles to AI and machine learning is an emerging area.

6. Hallucinations in AI

  • AI hallucinations occur when AI systems generate incorrect or fabricated information.
  • Important for librarians to educate users about verifying AI-generated content.

Conclusion

The presenter emphasizes that AI is a tool—neither inherently good nor bad—and it's crucial for librarians to stay informed and lead in ethical AI practices. By understanding the nuances of AI terminology and concepts, librarians can better assist users and influence responsible AI development and use.

Revolutionizing Library UX: Using AI to Enhance Website Usability

Improving Library Website Usability with AI

Presented by Elisa Saphier, Librarian at Central Connecticut State University (CCSU)



Introduction

Elisa discusses how librarians can leverage artificial intelligence (AI) to enhance the usability of library websites. She shares her personal experiments and insights using AI tools, particularly generative AI models like ChatGPT and Google's Gemini, to support various aspects of website usability and user experience (UX) design.

Context and Motivation

  • Elisa has extensive experience as a technologist, systems librarian, and web librarian.
  • She is co-teaching an introductory course on research with AI, focusing on information literacy.
  • Her goal is to gain practical experience with AI to understand its capabilities and limitations in improving library website usability.

Challenges with AI

  • Lack of substantial literature on using AI for library website usability improvements.
  • Common issues with AI include biases, hallucinations, ethical concerns, intellectual property rights, privacy, and environmental impacts.
  • Emphasizes the importance of a "trust but verify" approach when using AI tools.

Applications of AI in Library Website Usability

AI Chatbots

  • Discussed the potential and challenges of integrating AI-powered chatbots in libraries.
  • Noted that chatbots have been considered in libraries for years but require careful implementation.
  • Encouraged sharing experiences with AI chatbots like Springshare's LibChat or Google's Dialogflow.

Data Collection and Analysis

  • Stressed the need for collecting user data through surveys, interviews, and usage statistics to inform website improvements.
  • Mentioned the System Usability Scale (SUS) as a tool for evaluating user reactions to websites.
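
The System Usability Scale mentioned above has a fixed scoring rule: ten 1-to-5 Likert items, where odd-numbered (positively worded) items contribute the response minus 1, even-numbered (negatively worded) items contribute 5 minus the response, and the total is scaled by 2.5 to give a 0-100 score. A small helper like the following could score a batch of survey responses.

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten 1-5 Likert responses."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects exactly ten responses, each between 1 and 5")
    total = 0
    for item_number, response in enumerate(responses, start=1):
        # Odd items are positively worded, even items negatively worded.
        total += (response - 1) if item_number % 2 == 1 else (5 - response)
    return total * 2.5

# Example: one respondent's answers to the ten SUS statements.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```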

User Personas and Stories

  • Used ChatGPT to generate user personas for CCSU's library website redesign.
  • Identified biases and stereotypes in AI-generated personas, such as lack of diversity and reinforcing stereotypes.
  • Highlighted the importance of involving community members to ensure accurate and respectful representations.

Customer/User Journey Mapping

  • Explored how AI can assist in creating user journey maps to understand user interactions with the library website.
  • Used AI to identify phases where users might disengage and to develop strategies to enhance user engagement.

Usability Testing

  • Suggested using AI to generate sample tasks for usability testing of the library website.
  • Referenced a compiled Google Sheet of usability tasks used by various libraries as a resource.

Analyzing User Feedback

  • Employed tools like Whisper AI to transcribe and analyze audio and video feedback from users (see the sketch after this list).
  • Used AI to summarize key points and extract actionable insights from user feedback.
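
As a sketch of the transcription step (the file name and model size below are placeholders), the open-source Whisper model can be run locally with a few lines, and its transcript can then be passed to a summarization step.

```python
# pip install openai-whisper
import whisper

# Smaller models ("tiny", "base") are faster; larger ones ("medium", "large") are more accurate.
model = whisper.load_model("base")

# Transcribe a recorded usability-test session (placeholder file name).
result = model.transcribe("usability_session_01.mp3")
print(result["text"])
```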

Improving Navigation and Information Architecture

  • Attempted to use AI for creating site maps and evaluating the website's navigation structure.
  • Faced challenges with AI not providing accurate or high-quality outputs when analyzing the library's own website.
  • Described difficulties in using AI to parse HTML code for card sorting exercises, encountering limitations in AI's understanding.

Design Inspiration

  • Used AI to identify exemplary academic library websites (e.g., MIT, Stanford, Michigan, Harvard, Oxford) for inspiration.
  • Considered analyzing these websites' navigation and terminology to adopt best practices.

Code Improvements

  • Utilized AI to improve website code, such as replacing "click here" links with more accessible and descriptive text (see the sketch after this list).
  • Faced challenges with AI in generating code that met specific requirements, requiring multiple iterations and clarifications.
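
A sketch of the kind of clean-up described above: scanning a page for generic link text such as "click here" so each link can be rewritten with descriptive text. The URL is a placeholder, and this is only an audit pass, not an automatic fix.

```python
# pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

GENERIC_PHRASES = {"click here", "here", "read more", "more", "link"}

def audit_link_text(url):
    """List links whose visible text is too generic to describe their destination."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return [
        (a.get_text(strip=True), a.get("href"))
        for a in soup.find_all("a")
        if a.get_text(strip=True).lower() in GENERIC_PHRASES
    ]

# Placeholder URL for the library site being audited.
for text, href in audit_link_text("https://library.example.edu"):
    print(f'Generic link text "{text}" -> {href}')
```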

Usage Data Analysis

  • Explored using AI to define user conversion funnels and metrics in Google Analytics.
  • Aimed to understand user paths, engagement levels, and points where users drop off.

Reflections on AI

  • Noted that AI can provide generic or inaccurate suggestions not tailored to specific contexts.
  • Described AI as "weird" due to its unpredictable behavior and occasional misalignment with user intentions.
  • Emphasized the necessity for librarians to engage with AI critically, given its growing influence on the information ecosystem.

Conclusion

Elisa invites fellow librarians and colleagues to share their experiences and collaborate in exploring AI's potential in enhancing library services. She underscores the importance of continuous learning and adaptation in the rapidly evolving landscape of AI technologies.

AI in Academic Libraries: Enhancing Student Success

Harnessing the Potential of AI Technologies to Enhance Student Success

Presented by Muhammad Hassan, Linda Saleh, and Craig Anderson



Introduction

The presenters discuss the integration of artificial intelligence (AI) technologies in academic libraries and learning commons to enhance student success. They emphasize the importance of embracing AI tools to support students in various aspects of their academic journey, from research assistance to skill development.

Understanding Artificial Intelligence

Muhammad Hassan introduces AI as the simulation of human intelligence processes by machines. He notes that while AI has become a popular topic recently, it has been around for a long time. Key applications of AI mentioned include:

  • Expert systems
  • Natural language processing (NLP)
  • Machine vision
  • Speech recognition

AI and Student Success

The presenters highlight the role of libraries and learning commons in supporting student success. Common student inquiries include:

  • How to conduct research
  • Finding articles and resources
  • Achieving academic goals
  • Accessing workshops and support services
  • Improving well-being and efficiency

Muhammad emphasizes that addressing these needs is crucial for student success, and AI technologies can play a significant role in providing solutions.

Integrating AI into Workflows

The team discusses their proactive approach to incorporating AI into their institutional workflows:

  • Providing workshops for faculty and students on proper AI usage
  • Developing an AI policy to guide ethical and effective use
  • Encouraging faculty to learn and embed AI tools in teaching
  • Collecting and analyzing data using AI tools for insights on student behavior

Data Analysis and Predictive Modeling

Muhammad shares examples of how they use AI to analyze data:

  • Tracking library usage, tutoring sessions, and resource access
  • Using AI tools like ChatGPT to analyze large datasets quickly
  • Applying predictive analysis to determine optimal library hours and resource allocation
  • Creating heat maps to visualize peak usage times on their website
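
A minimal sketch of the heat-map step: aggregating timestamped page views by weekday and hour with pandas and plotting the grid. The CSV file and its "timestamp" column are assumed placeholders for whatever export the analytics platform provides.

```python
# pip install pandas matplotlib
import pandas as pd
import matplotlib.pyplot as plt

# Placeholder export with one row per page view and a "timestamp" column.
visits = pd.read_csv("website_visits.csv", parse_dates=["timestamp"])
visits["weekday"] = visits["timestamp"].dt.day_name()
visits["hour"] = visits["timestamp"].dt.hour

# Count visits for each (weekday, hour) cell.
grid = (
    visits.pivot_table(index="weekday", columns="hour", values="timestamp", aggfunc="count")
    .reindex(["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"])
    .fillna(0)
)

# Draw the heat map of peak usage times.
plt.imshow(grid, aspect="auto", cmap="viridis")
plt.yticks(range(len(grid.index)), grid.index)
plt.xlabel("Hour of day")
plt.title("Library website visits by weekday and hour")
plt.colorbar(label="Page views")
plt.tight_layout()
plt.show()
```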

Challenges with Sentiment Analysis

He notes that while AI excels in processing data, it still struggles with sentiment analysis. Libraries need to ensure AI models are built with proper sentiment understanding and work towards correcting deficiencies.

Student Interactions with AI

Examples from the Learning Commons

Craig Anderson shares anecdotes illustrating how students interact with AI:

  • A student used QuillBot, an AI tool, to find articles but received fabricated references. She was unaware that the articles were not real.
  • ESL students used translation tools for assignments, which were flagged by AI detection software as plagiarized, leading to misunderstandings.
  • A professor mistakenly accused students of cheating after asking ChatGPT whether it had written their papers, not realizing the tool can give misleading affirmations.

Concerns and Misunderstandings

Students worry about being falsely accused of plagiarism due to AI tools. These examples highlight the need for proper education on AI usage and limitations.

When Not to Use AI

Muhammad addresses a question about situations where AI should not be used to ensure student success:

  1. Foundational Learning: In programming courses, students should first learn to code without AI assistance to build a solid understanding.
  2. Writing Skills: In writing-intensive courses, reliance on AI can hinder the development of essential writing abilities.
  3. Communication Skills: In communication classes, students benefit more from interacting with peers rather than AI.

He emphasizes that AI should enhance, not replace, foundational learning and interpersonal interactions.

AI as a Supplementary Tool

Analogy with Calculators

Craig draws an analogy between AI tools and calculators in education:

  • Just as calculators are introduced after students understand basic arithmetic, AI should be used after foundational skills are developed.
  • AI can then serve as a tool to enhance and advance learning.

Embracing AI Literacy

Linda Saleh discusses the importance of AI literacy and how AI tools can supplement student learning in areas beyond research:

  • Reading and comprehending scholarly articles
  • Preparing presentations and participating in scholarly conversations
  • Developing coding skills

AI Tools for Skill Development

Reading Assistance

Linda highlights AI tools that help students understand complex academic texts:

  • ChatPDF: Allows students to upload PDFs and ask questions to gain better understanding.
  • SciSpace: Provides access to open-access scholarly articles with a co-pilot feature for interactive learning.

Presentation and Public Speaking

AI tools can assist students in creating and delivering effective presentations:

  • SlidesGo, Clipchamp, SlidesAI: Help in developing visual presentations.
  • Udly: An AI tool that provides feedback on practice speeches, suggests improvements, and anticipates audience questions.

Coding Assistance

AI tools like Blackbox AI support students in learning programming by offering coding assistance and troubleshooting help.

Balancing AI Use and Critical Thinking

In response to concerns about AI potentially hindering critical thinking skills, the presenters emphasize:

  • AI tools should be part of a broader set of resources available to students.
  • Faculty and support services play a crucial role in ensuring students continue to develop essential skills independently.
  • Teaching students how to use AI properly is vital for their success in an evolving technological landscape.

Ethical Considerations and Policy Development

The presenters acknowledge the importance of discussing the ethics of AI use in education:

  • Institutions should have conversations about AI ethics at the start of each semester.
  • Developing clear policies and guidelines helps prevent misuse and misunderstandings.
  • Emphasizing transparency, authorship, and copyright considerations is essential.

Conclusion

The team concludes by reinforcing the potential of AI technologies to enhance student success when used appropriately. They advocate for defining what success means for students and then integrating AI tools thoughtfully to support that vision.

The Boundaries of Authorship: Can AI Be Considered an Author?

Generative AI and Authorship

Presented by Robin Kear, Academic Librarian at the University of Pittsburgh



Introduction

Robin Kear discusses the question: Can generative AI (GenAI) be an author? She explores the implications of this question, considering the rapid advancement of AI technology and its impact on authorship, creativity, and responsibility.

Can GenAI Be an Author?

Kear reflects on her concerns regarding AI's potential to become sentient or possess its own consciousness and agency. She believes that, with the current structure of generative tools, the answer is no. GenAI reacts, suggests, anticipates, and amalgamates existing content but does not create something entirely new.

AI-Generated Content and Authorship

Using an example of an image created by a human using DALL-E (an AI image generator), Kear prompts the audience to consider where authorship resides in such creations. She emphasizes the importance of understanding the human aspects of being an author and creator.

What Makes an Author?

Kear identifies four key human aspects of authorship:

  1. Creativity: The idea must originate from the individual. While influenced by experiences and environments, humans create new things that didn't exist before.
  2. Agency: Authors have the will to decide what to do with their ideas, choosing how, when, and what to produce.
  3. Moral Responsibility: Authors are morally accountable for what they put into the world, and their work should be discoverable and attributable to them.
  4. Legal Responsibility: Authors accept legal responsibility for their creations in the public and economic spheres, including the publishing industry.

Research on AI and Authorship in Academic Journals

Kear shares a research project conducted with colleague Amy Jenkins, examining how research journals are addressing AI and authorship. They analyzed top journals across various disciplines to find policies and guidance on AI authorship.

Methodology

  • Used Journal Citation Reports to identify impactful journals.
  • Selected top three journals in chosen categories based on impact factor.
  • Searched journal and publisher websites for AI authorship policies.

Findings Based on the Four Aspects of Authorship

Creativity and Agency

  • AI Cannot Be an Author: All journals agreed that an author must be a human being.
  • Lack of Agency: AI does not have the ability to act independently or be accountable.
  • AI in Images: Generally not permissible, especially in scientific contexts due to potential harm to scientific advancement.
  • Writing Assistant vs. Data Analysis: A nuanced difference exists between using AI as a writing tool and using it for data insights, which requires disclosure.

Moral Responsibility

  • Personal Accountability: Authors must be accountable for their content, hence AI cannot be an author.
  • Disclosure Requirement: Use of AI tools must be disclosed, with specifics on how and where it was used.
  • Publication Process: Different guidelines exist for authors, peer reviewers, and manuscript reviewers.
  • Confidentiality Concerns: Public AI tools like ChatGPT should not be used for peer review due to confidentiality and proprietary rights.

Legal Responsibility

  • Liability: Journals could be held liable for AI-generated content, so responsibility is shifted to the author.
  • Verification: Authors are responsible for verifying the accuracy of AI-generated content, including potential errors or plagiarism.
  • Ethical Breaches: Authors are liable for any breaches of publication ethics, even if AI tools were used.
  • Guidance from COPE: The Committee on Publication Ethics emphasizes authors' full responsibility for their manuscripts.

Reconsidering the Role of AI in Creative Endeavors

Kear poses critical questions about how we should view AI in the context of creativity:

  • Should AI be considered an assistant or helper rather than a creator?
  • Can AI serve as a sounding board for ideas or help augment human creativity?
  • Where is the ethical line between presenting something as one's own idea versus a technology-created idea?
  • Given that AI responses are derivative, what is its usefulness in creative work?

Reflection on Automated Creativity

She references the 1982 World's Fair painting robot as an early example of automated creativity, noting that while simplistic compared to current AI, it prompts consideration of the evolving role of technology in authorship.

Further Considerations

Kear discusses additional points stemming from her findings and university discussions:

  • Changing Acceptance: The use of AI in writing may become more accepted over time, potentially becoming seamless and expected.
  • Reflecting Existing Challenges: AI often mirrors societal biases and existing challenges related to transparency, integrity, and accountability.
  • Core Principles: The fundamental principles of research and publishing should continue to guide the use of AI in authorship.

Question and Answer Session

To What Extent Do Humans Also Derive from Other Content?

Response: Kear acknowledges that humans are influenced by their environment and existing works. In academic writing, literature reviews are essential for building upon previous research, but authors strive to contribute something new to the conversation.

At What Point Is AI Used or Not Used?

Response: She differentiates between general writing tools (like Microsoft Editor or Grammarly) and generative AI tools. While tools like Microsoft Co-Pilot are still developing, she focuses on the implications of generative AI in authorship.

If a Student Uses an AI Tool to Fully Write a Paper, Who Is the Author?

Response: Kear advises against students using AI to write entire papers. Such papers may contain inaccuracies, lack depth, and could be easily identified by instructors. Students should be cautious about relying on AI for academic work.

Future Value of Writing in Editing vs. Writing Itself

Response: Currently, the value of generative AI lies in its ability to assist rather than replace human creativity. She mentions authors using AI tools based on their own work to aid in writing, but emphasizes that AI should complement, not replace, human authorship.

Conclusion

Kear concludes by emphasizing the importance of maintaining core principles in research and publishing as AI continues to evolve. Transparency, integrity, attribution, and accountability should guide any use of AI in authorship and creative endeavors.

AI in Education: How Librarians Can Lead the Way

Navigating AI in Education through a K-12 Librarian's Lens

Presented by Delandra Seals, Teaching and Learning Librarian at the University of North Carolina at Wilmington



Introduction

Delandra Seals shares insights on integrating artificial intelligence (AI) in K-12 education from a librarian's perspective. With a background in K-12 education, special education, public libraries, and higher education, she brings a comprehensive view of how AI can enhance teaching and learning.

Understanding the Evolution of AI

AI is Not New

  • AI has been gradually integrated into everyday life over the years.
  • Examples include predictive text, speech-to-text, smart devices like Alexa and Siri, and self-driving cars.
  • Students are already interacting with AI through various technologies.

Defining AI

  • AI refers to computers programmed to perform tasks that typically require human intelligence.
  • Involves algorithms, machine learning, data patterns, and predictive modeling.
  • Used in applications like facial recognition, red-light cameras, and digital assistants.

AI in Education

The Potential of AI

Sal Khan, founder of Khan Academy, envisions AI as a transformative tool in education, providing personalized tutoring to every student.

Historical Disruptions in Teaching

  • Technologies like calculators, search engines, and Google Translate have previously disrupted education.
  • Matt Miller emphasizes that education adapts and moves forward with new technologies.

Teachers' and Students' Perspectives

  • Teachers are curious about integrating AI into the classroom and concerned about academic integrity.
  • Students are interested in using AI to assist with assignments and learning challenges.
  • IT staff are evaluating the implications of AI on network security and educational policies.

Introducing ChatGPT and AI Tools

What is ChatGPT?

  • ChatGPT is a language model developed by OpenAI.
  • G: Generative – capable of generating text.
  • P: Pre-trained – trained on large datasets to understand language patterns.
  • T: Transformer – uses transformer architecture to process input and generate responses.

Capabilities and Limitations

  • Generates human-like text based on input prompts.
  • Can assist with lesson planning, idea generation, vocabulary lists, writing prompts, and feedback.
  • Limitations include potential biases, inaccuracies, outdated information (knowledge cutoff), and lack of ethical judgment.
  • Not designed for users under certain age thresholds due to privacy policies.

Privacy and Ethical Considerations

  • Privacy policies are crucial, especially in K-12 education (FERPA considerations).
  • Most AI tools are designed for users aged 13 or older.
  • Educators should review privacy policies before integrating AI tools into the classroom.

Practical Applications of AI in Education

Using AI Tools

  • Teachers and librarians can use AI for creating lesson plans, assessments, and instructional materials.
  • Examples include generating open-ended questions, scaffolding for English Language Learners (ELLs), and drafting communications.
  • AI can assist with administrative tasks like writing report card comments and responding to emails.

Prompt Engineering

  • The quality of AI-generated output depends on the specificity of the input prompts.
  • More detailed prompts yield more accurate and useful results.
  • Example: Asking Google Gemini to generate open-ended questions about "Long Way Down" by Jason Reynolds.

Examples of AI Tools

  • Google Gemini: An AI tool for generating text and ideas.
  • Bing Chat: Uses GPT-4 for search and conversational responses.
  • Microsoft Co-Pilot: Integrates with Microsoft Office for productivity enhancements.
  • YouChat: An AI-powered search assistant that can generate code, answer questions, and assist with tasks.
  • TinyWow: A tool for converting documents and media files.
  • Curipod and MagicSchool AI: Generate interactive lesson plans and presentations based on standards and grade levels.
  • Canva: Offers AI features for creating graphics and documents.

Addressing Plagiarism and Academic Integrity

  • Tools like Turnitin and GPTZero can detect AI-generated text.
  • Educators should establish policies on AI usage and plagiarism with their school communities.
  • Encourage transparency and ethical use of AI among students.

Best Practices for Integrating AI

Crafting Effective Prompts

  • Be clear about the context, purpose, audience, and desired outcome when writing prompts.
  • Use frameworks like CRAFT (Context, Role, Audience, Format, Topic) to structure prompts.
  • Example: "As an expert fourth-grade math teacher, create a lesson plan on fractions aligned with [specific standard]."

Human Oversight and Critical Thinking

  • AI is a tool to assist educators, not replace them.
  • Educators must review and verify AI-generated content for accuracy and bias.
  • Emphasize the development of creativity, critical thinking, problem-solving, empathy, and human interaction, which AI cannot replicate.

Policy Development

  • Work with school districts to develop policies regarding AI usage.
  • Consider the ethical implications and establish guidelines for students and staff.
  • Promote an environment where students feel comfortable discussing their use of AI tools.

Conclusion

AI offers numerous opportunities to enhance education by improving productivity, organization, and addressing learning gaps. Educators should embrace AI as a partner in the educational journey, leveraging its capabilities while maintaining human oversight and fostering essential skills in students.

The Impact of AI on Academic Library Research Support: Perceptions and Realities

The Impact of AI on Academic Library Research Support Services and Literature Review



Introduction

This article explores the impact of artificial intelligence (AI) on academic library research support services, with a particular focus on literature reviews. The discussion includes perceptions of academic librarians towards AI, the promotion and evaluation of AI tools, and the integration of AI into the literature review and research process.

Perceptions of Academic Librarians Towards AI

Initial Surveys and Findings

Surveys conducted between 2020 and 2022 indicated that librarians generally viewed AI as a helpful tool that would not jeopardize their employment status. Key findings included:

  • 30% of librarians did not expect significant impact from AI on library functions.
  • Little impact was anticipated on instruction (30%) or on reference services.
  • Greater concern was noted regarding collection development.
  • 67% believed AI would transform library functions positively.

Changing Perspectives in 2023-2024

Recent surveys from 2023-2024 show a shift in perceptions:

  • Only 14% of librarians believed students used AI for research.
  • By contrast, 73% of students reported using AI in their courses, with 68% admitting to inappropriate use.
  • Approximately 38% of librarians felt that AI could make them lazy and threaten their employment.

This highlights a collision between librarians' perceptions and students' actual use of AI, indicating a need for librarians to adapt to the changing landscape.

Trust and Understanding of AI

Librarians exhibit varied attitudes towards AI:

  • Some view AI as a "magic box" that works without needing to understand its inner workings.
  • Others prefer to collaborate, test, and evaluate AI tools before adopting them.
  • Trust issues arise from a lack of understanding or skepticism towards new technologies.

Technology Hype Cycle and AI

The Gartner Hype Cycle places generative AI in the "Peak of Inflated Expectations" stage, suggesting that while expectations are high, practical results may not yet meet the hype. This underscores the need for real research into AI's effectiveness in library settings.

Impact on Teaching, Learning, and Research

Librarians are concerned about AI's implications for:

  • Teaching and learning processes.
  • Discovery and research synthesis.
  • Issues of copyright, privacy, and bias.
  • Agency and authorship in academic work.
  • The future of reference and instruction services.

Positionality and Personal Engagement with AI

The Librarian's Multiple Roles

The presenter identifies as a librarian, teacher, technologist, and researcher, wearing many hats in the academic environment.

Adoption of AI Tools

Using Rogers' Diffusion of Innovation Theory, the presenter places themselves as an early adopter, having moved through awareness and interest stages to evaluating and adopting AI tools in literature review processes.

Despite being an early adopter, the presenter notes that they are surrounded by prudent individuals who are cautious or distrustful of AI.

Promotion and Evaluation of AI Tools

Library Promotion of AI Products

A survey question from Helper Systems asked, "Does your library currently offer or promote any AI products to researchers?" Findings included:

  • A slight increase in libraries promoting AI products from 13% in 2023 to 19% in 2024.
  • A significant decrease in libraries not promoting AI, indicating growing interest.

Personal Initiatives in Promoting AI

The presenter actively promotes AI tools and integrates them into practice and teaching:

  • Created a directory and evaluation of semantic search engines in 2022.
  • Presented on the automation of systematic reviews using AI tools.
  • Developed a popular guide on using AI for literature review, published in November, which has garnered over 1,500 users.
  • Participated in beta testing and consulting for AI tool development, providing valuable feedback to developers.

AI in Literature Review and Research Process

Benefits of AI in Literature Reviews

AI tools offer significant advantages in conducting literature reviews, especially systematic reviews that involve analyzing thousands of scholarly records:

  • Shortens the time required for literature searches and analysis (see the sketch after this list).
  • Assists in text mining and data synthesis.
  • Enables smaller teams to handle large-scale reviews efficiently.
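
As a rough illustration of how AI-assisted discovery can shorten the search step, a sentence-embedding model can rank candidate abstracts by semantic similarity to a research question rather than by keyword overlap. The model name and the sample abstracts below are assumptions, not tools named by the presenter.

```python
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model

question = "How does spaced practice affect long-term retention in undergraduates?"
abstracts = [
    "We study memory consolidation in adult learners using distributed practice schedules.",
    "A survey of makerspace programming in rural public libraries.",
    "Effects of retrieval practice and spacing on exam performance in a college course.",
]

# Embed the question and the candidate abstracts, then rank by cosine similarity.
question_embedding = model.encode(question, convert_to_tensor=True)
abstract_embeddings = model.encode(abstracts, convert_to_tensor=True)
scores = util.cos_sim(question_embedding, abstract_embeddings)[0]

for score, abstract in sorted(zip(scores.tolist(), abstracts), reverse=True):
    print(f"{score:.2f}  {abstract}")
```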

Teaching and Supporting Students

The presenter has redesigned literature review courses and micro-credentials to incorporate AI tools, helping students who often spend excessive time on literature reviews due to:

  • Difficulty in searching effectively.
  • Challenges in analyzing and synthesizing information.

Ongoing Research and Development

Current projects include:

  • Publishing a taxonomy and characteristics of AI discovery tools, highlighting their features, limitations, and suitability in the research process.
  • Developing an AI research assistant based on the Hopscotch Research Design Model, providing a step-by-step framework for research.
  • Working on an AI recommendation system for educational researchers.

Interactive Research Methods Lab

The presenter is a member of an Interactive Research Methods Lab, which received an innovation award for incorporating library and open access resources with an AI recommendation system. Current work involves:

  • Developing a research assistant using ChatGPT and customized language models.
  • Creating custom chatbots tailored to specific research needs.
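
As a sketch of what a custom research-assistant chatbot can look like at the prototype stage, a system prompt encoding a step-by-step research-design framework plus a running message history is often enough to start. The OpenAI client and model name are assumptions; a production version would add the lab's own retrieval, library resources, and guardrails.

```python
# pip install openai   (requires an OPENAI_API_KEY environment variable)
from openai import OpenAI

client = OpenAI()

# The system message encodes the step-by-step research-design framework.
messages = [{
    "role": "system",
    "content": (
        "You are a research methods assistant for graduate students. "
        "Guide users through one step of research design at a time: "
        "research question, conceptual framework, method, then literature search strategy."
    ),
}]

def ask(user_turn):
    """Send one user message, keep the conversation history, and return the assistant's reply."""
    messages.append({"role": "user", "content": user_turn})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    ).choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

print(ask("I want to study how first-generation students use the library. Where do I start?"))
```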

Guides and Resources

The presenter has developed guides to assist in discovering new literature using AI tools:

  • Categorizing tools based on research processes and literature review steps.
  • Including AI tools for research planning, such as developing research questions and conceptual frameworks.
  • Reviewing AI search engines and research assistants, comparing features and limitations.
  • Highlighting hybrid systems that integrate various AI technologies.

Collaborations and Institutional Support

Emphasizing the importance of collaboration, the presenter notes involvement in various institutional initiatives:

  • Member of the Office of Research Applied Technology Community, discussing AI topics.
  • Part of the Scholarship of Teaching and Learning (SoTL) group, working on AI tools to support teaching and learning scholarship.
  • Engagement with research labs in computer science and data analytics, with plans to offer a new master's program in AI.
  • Collaboration with the Digital Learning department to create resources for instructors on teaching with or without AI.

Future Aspirations

The presenter expresses a desire for an AI lab to experiment with different tools and assess their applicability in education and library services.

Conclusion

AI is increasingly impacting academic library research support services, particularly in literature reviews. While librarians' perceptions of AI are evolving, the presenter advocates for proactive engagement with AI tools to enhance research processes and support students and faculty effectively.

References

A list of reference materials is available for further reading.

Questions and Discussion

During the presentation, the following questions were addressed:

Question from Jenny Pierce:

Are you using literature review and systematic review interchangeably?

Answer: No, they are separate. The presenter has created distinct guides for traditional narrative literature reviews (commonly required for dissertations, theses, or capstones) and systematic reviews, which often require expensive platforms that some students cannot afford.

Question from Rachel:

We have been beta testing an app called CurvXR to develop support for students learning using virtual reality. Some of these science anatomy and chemistry models are impressive. What are your thoughts?

Answer: The presenter agrees that integrating tools like virtual reality (VR) and augmented reality (AR) with AI is beneficial. Through involvement in applied technology communities, the presenter discusses combining different tools; integration between these technologies and the people who use them (teachers and students) is the next step. Customization of AI tools like ChatGPT is growing, offering more tailored solutions for organizations and specific purposes.

Final Remarks

The presenter invites further questions and encourages collaboration in exploring the impact of AI on academic libraries.

The Future is Now: Exploring AI in Public Libraries

Exploring AI in Public Libraries: Programs for Communities

Presented by Arya Mala Prasad and colleagues from the Center for Technology in Government at the University at Albany



Introduction

This presentation delves into the research conducted by the Center for Technology in Government (CTG) at the University at Albany, focusing on the role of public libraries in fostering critical and inclusive civic engagement in artificial intelligence (AI) initiatives. The research team includes:

  • Arya Mala Prasad, Researcher at CTG
  • Zongshang Zhang, PhD student at Rockefeller College of Public Affairs and Policy, and Graduate Assistant at CTG
  • Mila Gasco Hernandez, Research Director at CTG and Associate Professor at Rockefeller College
  • J. Ramon Gil-Garcia, Director of CTG and Professor at Rockefeller College

Background and Motivation

AI Bias and Public Engagement

  • Increasing use of AI in various sectors such as financial services, healthcare, welfare programs, and policing.
  • Evidence of racial and other biases in AI decision-making processes.
  • Efforts at national and international levels to strengthen regulation and governance of AI systems.
  • Public engagement is seen as a mechanism to improve transparency and accountability in AI systems.

Challenges in Facilitating Public Engagement

  • Lack of technical knowledge among the general public to understand AI.
  • Need for open and accessible spaces for public participation in AI initiatives.

The Role of Public Libraries

  • Public libraries have a history of promoting digital literacy and ensuring digital inclusion and equity.
  • They offer safe and collaborative spaces for communities to discuss local issues, including the impacts of AI.
  • Libraries can empower marginalized communities to understand and engage with AI technologies that affect them.

Research Objectives

The research aims to answer the following questions:

  1. What role may public libraries play in increasing knowledge about AI in the community?
  2. How may public libraries foster inclusive civic engagement in AI initiatives?
  3. What are the opportunities, threats, benefits, and challenges of public libraries leading inclusive civic engagement in AI initiatives?

This research is part of a larger project funded by the Institute of Museum and Library Services (IMLS) and conducted in partnership with the Urban Libraries Council (ULC). The project began in August 2023 and will continue until Spring 2026.

Focus of the Current Study

The presentation focuses on the initial mapping exercise aimed at identifying and assessing the role of public libraries in increasing AI awareness and fostering inclusive civic engagement.

Specific Research Questions

  1. What are the main types of AI programs and services offered in public libraries?
  2. What is the purpose of AI programs and services, and who are the intended users?
  3. What are the main components of AI programs and services?
  4. Do the AI programs and services include individuals from marginalized communities and address the potential negative effects of AI systems?

Scope Clarification

The research focuses solely on AI programs organized for community members, excluding AI services or programs used internally by libraries for operations (e.g., search catalogs, robots, voice assistants).

Methodology

Data Collection

Data collection included three steps:

  1. Review of Library Associations: Searched publications and resources from the American Library Association (ALA), Urban Libraries Council (ULC), and the International Federation of Library Associations (IFLA) to identify popular AI programs and success stories.
  2. Systematic Website Review: Examined the websites of ULC member libraries to find AI-related events, programs, and blogs.
  3. Broad Internet Search: Conducted Google searches using keywords identified from library websites (e.g., "ChatGPT courses") to uncover additional programs.

Data collection spanned from November 2023 to February 2024, including programs that were announced or available online during or before this period. The dataset comprised 109 cases, with 97 from the United States and 12 from Canada.

Data Analysis

An inductive approach was used to classify the cases into different categories based on:

  • Purpose of the AI programs
  • Targeted participants
  • Types of partnerships involved
  • Content and components of the programs

Findings

Main Purposes of AI Programs in Public Libraries

  1. Increasing Awareness of AI: Informational programs aimed at providing a basic understanding of AI, including lectures, courses, and seminars that explain AI terminologies, technologies, benefits, and challenges.
  2. Providing Technical Training on AI: Instructional programs focused on teaching community members how to use AI applications or tools (e.g., ChatGPT, DALL·E) and offering coding classes related to AI programming.

Types of AI Programs Offered

1. Increasing Awareness Programs

  • Lectures and Courses: The most common type, featuring one-way communication from experts to the audience. Examples include:
    • AI for Communities Program: Offered by Brooklyn Public Library and San Mateo Public Library in collaboration with Women in AI Ethics, covering AI basics, generative AI, and online safety.
    • ABC of AI: An introductory course by San Jose Public Library, explaining AI terminologies and discussing benefits and risks.
  • Seminars and Conversations: Interactive discussions between participants and experts. Examples include:
    • Building the World We Want: A panel discussion on global AI governance hosted by the New York Public Library.
    • Conversation with Experts on AI: Organized by William F. Laman Public Library, featuring local university researchers.
  • Exhibitions: Interactive displays or art installations to engage the community with AI concepts. Examples include:
    • The Laughing Room: An interactive art exhibition at Cambridge Public Library in collaboration with Harvard University, demonstrating AI's ability to detect humor through voice inflections.
    • Misinfo Day Escape Room: Hosted by St. Joseph County Public Library in partnership with the University of Washington, teaching participants to identify bots, deepfakes, and misinformation.
  • Podcasts: Audio programs discussing AI topics. Examples include:
    • AI Podcast Series: By Knox County Public Library, a four-part series breaking down AI in everyday life (e.g., self-driving cars, robots).
    • Tech Talk Weekly: A 20-minute weekly podcast by Broward County Public Library, covering AI as part of broader tech news.

2. Technical Training Programs

  • Hands-On Workshops: Practical sessions teaching participants to use AI tools or programming skills.
    • Application of AI Tools: Workshops on using generative AI tools for professional skills or hobbies.
      • Example: "Using ChatGPT for Writing Effective Resumes and Cover Letters" by Brooklyn Public Library.
      • Example: Digital art creation using DALL·E at St. Louis Public Library.
    • Programming and Coding Workshops: Teaching AI programming skills.
      • Example: "After-School AI Program" at St. Joseph County Public Library, teaching coding and machine learning to teenagers.
      • Example: Hands-on AI and machine learning workshop at San Jose Public Library, culminating in participants developing their own machine learning projects.
  • Maker Space Programs: Providing access to AI-related devices and kits for experiential learning.
    • AI Maker Kits: Offered by Frisco Public Library, allowing patrons to experiment with AI technologies (recipient of a national award).
    • Tech Petting Zoo: Hosted by an unspecified library, offering devices like AI gadgets, virtual reality equipment, and 3D printers for hands-on exploration.

Role of Collaboration and Partnerships

Partnerships play a crucial role in organizing AI programs, with over 50% of libraries collaborating with external entities. Types of partners include:

  • Universities: Collaborations involve inviting experts for talks or co-hosting courses and exhibitions.
    • Example: New York University partnering with Queens Public Library to offer a five-week series on AI, focusing on ethical aspects and empowering public advocacy.
  • Nonprofits: Libraries leverage resources or co-host events with nonprofits.
    • Example: Women in AI Ethics collaborating with multiple libraries for the "AI for Communities" course.
    • Example: Code.org's "AI for Oceans" game used by libraries to teach kids about machine learning and data's role in AI.
  • Businesses: Industry experts are invited for lectures and workshops.
    • Example: Seattle Public Library's "Tech Talk 101" series featuring startup founders discussing emerging technologies, including AI.
  • Government Agencies: Limited but notable involvement.
    • Example: Boston Public Library partnering with the Mayor's Office to organize an AI course.
    • Example: Some government agencies sponsoring AI courses at local public libraries.

Observations and Opportunities

Current State

  • Public libraries are beginning to offer AI programs to increase awareness and provide technical training.
  • Most programs are one-off events or short courses rather than structured, long-term initiatives.
  • Programs often include discussions on the benefits and challenges of AI, focusing on relatable technologies like ChatGPT and voice assistants.
  • Libraries address the needs of different age groups, offering sessions for teens, adults, and seniors.

Potential for Expansion

  • Opportunity to develop more structured and long-term AI programs similar to existing digital literacy classes.
  • Need to tailor programs for marginalized communities to help them understand how AI systems impact them, especially concerning biased decision-making.
  • Lack of civic engagement opportunities within current programs; potential to include co-creation activities and facilitate broader community discussions on AI.
  • Example from Spain: The "ExperimentAI" program offered a 15-session course with co-creation opportunities, allowing participants to work with professionals to address real-world problems using AI.

Conclusion

The research indicates that while public libraries are starting to play a role in increasing AI awareness and providing technical training, there is significant room for growth. By expanding programs to include marginalized communities and fostering civic engagement, libraries can become pivotal in shaping an inclusive AI future.

Next Steps

  • Continue researching the role of public libraries in AI education and civic engagement.
  • Explore opportunities to collaborate with libraries in developing and implementing more inclusive and participatory AI programs.
  • Assess the impact of these programs on communities, especially marginalized groups.

Stay Connected

If you're interested in this research, you can follow the Center for Technology in Government (CTG) for updates.

Acknowledgments

Special thanks to San Jose State University and Future of Libraries for organizing the conference on AI and Libraries, and to the Institute of Museum and Library Services for funding the research.

Breaking Down Barriers: How Automated Tools Can Increase Faculty Participation in Open Access

Build Your Own AI Tool: Scripting with Google's PaLM and Python for Library

Presented by Eric Silverberg, Librarian at Queens College, City University of New York



Introduction

In this presentation, Eric Silverberg shares his journey in developing an automated tool to assist faculty at Queens College in depositing their scholarly articles into the institutional repository. Recognizing the low participation of faculty in the School of Education, he sought to simplify the process by leveraging Google's PaLM API and Python scripting.

Background and Motivation

The Importance of Open Access

  • Personal Commitment: Eric emphasizes the significance of making educational research openly accessible, aligning with his values and background as a classroom teacher.
  • University Mission Alignment: As a public institution, the City University of New York aims to make its research available to the public.
  • Impact on Education: Open access to research empowers policymakers, administrators, and teachers by providing them with valuable insights and data.

Challenges with Faculty Participation

  • Faculty were generally unaware of the institutional repository or found the process too cumbersome.
  • Understanding open access policies for each journal can be complex and time-consuming.
  • Manually checking policies via Sherpa Romeo for numerous publications is inefficient.

Problem Statement

The core challenge was to automate the extraction of journal names from faculty citations so that open access policies could be retrieved from Sherpa Romeo's API without manual intervention.

Initial Approach

  • Coding APA Rules: Attempted to parse citations by coding the rules of APA formatting.
  • Encountering Exceptions: Faculty citations varied significantly, with inconsistencies and creative deviations from standard formats.
  • Limitations: The approach became impractical due to the numerous exceptions, leading to excessive coding for edge cases.
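To make the brittleness concrete, here is a toy, hypothetical version of the rule-based approach (not the presenter's actual code): a single regular expression that handles tidy APA references but fails as soon as a title contains a period, the volume number is missing, or the format drifts.

# Toy rule-based extraction (illustrative only, not the presenter's code)
# Assumes a tidy APA pattern: Author, A. (Year). Article title. Journal Name, Volume(Issue), pages.
import re

APA_JOURNAL = re.compile(r"\)\.\s+[^.]+\.\s+([^,]+),\s*\d+")

def extract_journal_rule_based(citation):
    match = APA_JOURNAL.search(citation)
    return match.group(1).strip() if match else None

print(extract_journal_rule_based(
    "Smith, J. (2020). Teaching with technology. Journal of Education, 12(3), 45-67."
))  # prints: Journal of Education

print(extract_journal_rule_based(
    "Smith, J. (2020). Teaching in U.S. schools. Journal of Education, 12(3), 45-67."
))  # prints: None -- the period in "U.S." breaks the pattern

Every new edge case like this demands another rule, which is the dead end described above.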

Leveraging Google's PaLM API

Discovering PaLM

  • He learned about Google's PaLM API, which powers the language model behind Bard (now Gemini).
  • Recognized its potential for natural language understanding and processing.

Implementing PaLM for Journal Extraction

  • Simple Prompting: Used straightforward prompts like "What is the name of the journal in this citation?"
  • High Accuracy: PaLM effectively extracted journal names even from inconsistently formatted citations.
  • Automation: Enabled batch processing of citations without manually coding for formatting exceptions.

Technical Implementation

Setting Up the Environment

  1. API Key Connection: Established a connection to PaLM's API using a free API key.
  2. Selecting the Model: Chose the text generation model suitable for processing text inputs.
  3. Python Scripting: Used Python to write functions for automating the process.

Key Components of the Script

Part A: Connecting to PaLM

# Connect to PaLM API
import google.generativeai as palm
palm.configure(api_key='YOUR_API_KEY')

# Select the text generation model
models = [model for model in palm.list_models() if 'generateText' in model.supported_generation_methods]
model = models[0].name

Part B: Extracting Journal Names

# Function to get journal name
def get_journal_name(citation):
    prompt = f"What is the name of the journal in this citation?\n{citation}"
    completion = palm.generate_text(model=model, prompt=prompt, temperature=0, max_output_tokens=800)
    return completion.result
  • Temperature Parameter: Set to 0 to minimize randomness and ensure consistent outputs.
  • Max Output Tokens: Defined to control the length of the response.
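As a quick sanity check, the helper can be called on a single, made-up citation; the exact wording of the model's reply may vary.

# Example call with a fabricated citation
sample = "Smith, J. (2021). Teaching with technology. Journal of Education, 12(3), 45-67."
print(get_journal_name(sample))  # expected to print something like: Journal of Education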

Automating the Entire Process

  1. Input Data: Collected faculty citations in a spreadsheet.
  2. Journal Extraction: Used the `get_journal_name` function to populate journal names next to citations.
  3. OA Policy Retrieval: Sent journal names to Sherpa Romeo's API to get open access policies.
  4. Output Report: Generated a comprehensive report detailing OA policies for each publication.
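A minimal sketch of this pipeline, building on the `get_journal_name` helper above, is shown below. The CSV layout (a single "citation" column), the Sherpa Romeo endpoint, and its query parameters are assumptions for illustration; check the service's API documentation before relying on them.

# Sketch of the batch pipeline (illustrative; verify the Sherpa Romeo API details)
import csv
import requests

SHERPA_API_KEY = "YOUR_SHERPA_API_KEY"               # placeholder
SHERPA_URL = "https://v2.sherpa.ac.uk/cgi/retrieve"  # assumed v2 endpoint

def get_oa_policies(journal_name):
    # Query Sherpa Romeo for a journal's open access policies
    params = {
        "item-type": "publication",
        "api-key": SHERPA_API_KEY,
        "format": "Json",
        "filter": f'[["title","equals","{journal_name}"]]',
    }
    response = requests.get(SHERPA_URL, params=params, timeout=30)
    response.raise_for_status()
    return response.json()

with open("citations.csv", newline="", encoding="utf-8") as infile, \
     open("oa_report.txt", "w", encoding="utf-8") as report:
    for row in csv.DictReader(infile):                # expects a 'citation' column
        citation = row["citation"]
        journal = get_journal_name(citation)          # PaLM helper defined above
        policies = get_oa_policies(journal)
        report.write(f"Citation: {citation}\nJournal: {journal}\nPolicies: {policies}\n\n")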

Example Output

An example of the output report includes:

  • Citation: Full citation provided by the faculty.
  • Journal Name: Extracted using PaLM.
  • OA Policies: Detailed information on preprint, accepted manuscript, and final version policies.
Citation 4:
[Full Citation Here]

Journal: African Journal of Teacher Education

OA Policies:
- Submitted Manuscript: [Policy Details]
- Accepted Manuscript: [Policy Details]
- Final Version of Record: [Policy Details]

Challenges and Considerations

Dealing with Sherpa Romeo's API

  • Data Structure: The API returns data nested in complex ways, requiring careful parsing.
  • Error Handling: Implemented to manage cases where OA data was missing or incomplete.
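A rough sketch of that defensive parsing, assuming a nested response shape along the lines of Sherpa Romeo's JSON; the field names below are illustrative and should be checked against the live API.

# Defensive parsing of a nested OA policy response (field names are illustrative)
def summarize_policies(data):
    items = data.get("items", [])
    if not items:
        return "No open access data found for this journal."
    lines = []
    for policy in items[0].get("publisher_policy", []):
        for permitted in policy.get("permitted_oa", []):
            versions = ", ".join(permitted.get("article_version", []) or ["unspecified"])
            embargo = permitted.get("embargo", {}).get("amount", "none")
            lines.append(f"- Version(s): {versions}; embargo: {embargo}")
    return "\n".join(lines) or "Policy details missing or incomplete."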

Faculty Engagement

  • Planned to share the generated reports with faculty to encourage repository deposits.
  • Recognized the need for feedback to refine the tool and process.

Next Steps and Potential Enhancements

  • User Feedback: Gather input from faculty like Professor N'Dri T. Assié-Lumumba, who agreed to pilot the tool.
  • Automation of Deposits: Consider scripting the submission of articles into the repository, pending faculty permission.
  • Exploring Other APIs: Investigate alternatives like OpenAlex for OA policy data, potentially simplifying the process.
  • Improving PDF Handling: Explore methods to reverse engineer formatted PDFs back into Word documents for easier repository submissions.
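On the PDF point specifically, one possible starting point (an assumption, not the presenter's plan) would be to extract the text layer and rewrite it into a Word document, accepting that most formatting is lost.

# Possible starting point for PDF-to-Word conversion (formatting is largely lost)
# Requires the third-party packages pypdf and python-docx.
from pypdf import PdfReader
from docx import Document

reader = PdfReader("article.pdf")
text = "\n".join(page.extract_text() or "" for page in reader.pages)

doc = Document()
doc.add_paragraph(text)
doc.save("article.docx")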

Audience Questions and Responses

Is there a template available?

Answer: Yes, the code shared is largely based on Google's documentation. You can access Eric's script on GitHub and modify it for your needs.

How are citations received from faculty?

Answer: Currently, citations are obtained directly from faculty CVs. The process may evolve based on faculty feedback and scalability considerations.

Does the tool handle abbreviated journal names?

Answer: Yes, PaLM effectively recognizes and extracts abbreviated journal names, which is particularly useful in fields where abbreviations are common.

Why use Sherpa Romeo instead of OpenAlex?

Answer: Familiarity with Sherpa Romeo's API led to its initial use. OpenAlex may offer a more streamlined API, and exploring it could be beneficial for future iterations.

Can ChatGPT be used for journal name extraction?

Answer: While ChatGPT could perform similar tasks, using PaLM's API allows for automation within the script, eliminating the need for manual input and handling larger batches efficiently.

Could the process be further automated to deposit articles?

Answer: Automating the entire submission process is an intriguing idea. It would require careful consideration of repository submission protocols and faculty permissions.

Conclusion

Eric Silverberg's innovative approach demonstrates how AI tools like Google's PaLM can address practical challenges in academic libraries. By automating the extraction of journal names and retrieval of OA policies, the process becomes more efficient, encouraging greater faculty participation in open access initiatives.

The project underscores the potential of AI in streamlining workflows and enhancing access to scholarly research. Ongoing feedback and collaboration with faculty will be essential in refining the tool and maximizing its impact.

Resources and Contact Information

Eric welcomes questions, collaborations, and feedback on the project.

Acknowledgments

Special thanks to Natalie Swanberg for participating in the pilot and to all attendees for their insightful questions and engagement.