Thursday, November 28, 2024

Building AI Competency in Library Staff: The Key to Success

At the Helm of Innovation: Librarians at the Forefront of AI Engagement and Integration

Presented by the Library Team at Georgetown University's International Campus in Qatar



Introduction

The advent of artificial intelligence (AI) has ushered in a new era of opportunities and challenges in the academic landscape. Recognizing the transformative potential of AI, the library team at Georgetown University's International Campus in Qatar embarked on a proactive journey to engage with and integrate AI tools across the campus. This article delves into their comprehensive approach, highlighting staff development initiatives, experimentation with AI, faculty outreach, and the incorporation of AI into daily operations.

Staff Development: Building AI Competency

The foundation of the library's AI integration strategy was robust staff development. The acting director of library services emphasized the importance of equipping the team with the necessary resources, time, and training to navigate the evolving AI landscape.

Workshops and Training Sessions

  • ALA's AI Literacy Workshop: The team participated in the American Library Association's workshop on "AI Literacy Using ChatGPT and Artificial Intelligence in Instruction," which provided valuable insights into AI applications in educational settings.
  • Collaborative Learning: The library facilitated special sessions and collaborations with colleagues to foster a culture of continuous learning and shared expertise.

Access to AI Tools

  • ChatGPT Account: A dedicated ChatGPT account was secured for the librarians, serving as a sandbox environment to explore and understand the capabilities and limitations of AI language models.
  • Skilltype Investment: The library invested in Skilltype, a talent management and development platform that provided personalized learning paths, including AI-related courses through LinkedIn Learning.

Experimenting with AI: Collaborative Exploration

Understanding the importance of hands-on experience, the library team engaged in active experimentation with AI tools.

Inter-Institutional Collaboration

The team collaborated with other institutions within Education City, including the Qatar National Library and neighboring universities like Texas A&M and Virginia Commonwealth University. These collaborative sessions focused on:

  • Demonstrating AI Tools: Sharing knowledge about various AI applications and how they can be utilized effectively.
  • Discussing Challenges: Identifying pitfalls and limitations of AI tools to develop best practices for their use.

Creative Applications of AI

The library leveraged AI creatively to enhance their services and outreach efforts:

  • Marketing Initiatives: AI tools were used to develop innovative marketing campaigns and materials, showcasing the library's commitment to embracing new technologies.
  • Workshop Development: AI was utilized to design a series of workshops aimed at exploring AI's creative potential, catering to faculty members who were hesitant to integrate AI directly into their courses.

Faculty Outreach: Bridging the Gap

Recognizing the varying levels of acceptance and familiarity with AI among faculty, the library undertook a strategic outreach initiative.

Understanding Faculty Perspectives

The team reached out to faculty members to gauge their plans and comfort levels regarding AI integration in their courses. They discovered that:

  • Some faculty were resistant to incorporating AI, often due to a lack of familiarity or concerns about academic integrity.
  • There was a trend toward eliminating traditional research papers in favor of in-class assessments to mitigate potential misuse of AI tools.

Adaptive Support and Resources

In response, the library developed alternative strategies to support faculty and students:

  • New Workshop Offerings: They created workshops that complemented and supplemented existing information literacy sessions, focusing on ethical and effective use of AI in research.
  • Alternative Assignments: The library assisted faculty in designing alternative assignments, such as podcasting and video discussions, that leveraged technology while addressing concerns about AI misuse.

Incorporating AI into Daily Operations

The library team integrated AI tools into their everyday workflows to enhance efficiency and innovation.

Brainstorming and Content Creation

  • Utilizing AI Language Models: Tools like ChatGPT and Claude were used for brainstorming ideas, drafting content, and refining communications.
  • Enhancing Marketing Efforts: AI-generated content and images were incorporated into marketing materials, increasing engagement and showcasing the library's forward-thinking approach.

AI-Driven Projects

One notable project involved using AI to recreate book covers for a library display:

  • Image Generation: Using tools like Leonardo AI, the team reimagined existing book covers, demonstrating the creative capabilities of AI.
  • Community Engagement: The display sparked interest among students and faculty, serving as a conversation starter about the role of AI in creativity and design.

Instructional Integration: AI in the Pre-Research Process

The Instructional Services Librarian took significant steps to integrate AI into the research instruction provided to students.

Addressing Citation and Academic Integrity

By the summer of 2023, major citation styles (APA, MLA, and Chicago) had issued guidelines on citing AI tools. The library responded on several fronts:

  • Writing Center Collaboration: Partnered with the Writing Center to create a cheat sheet on how to cite AI content and tools correctly.
  • Citation Manager Gaps: Addressed issues with citation management tools like Zotero, which lacked specific item types for AI-generated content.
  • Promoting Ethical Use: Emphasized the importance of attribution and academic integrity when using AI tools in research.

Overcoming Faculty Resistance

Some faculty members prohibited the use of AI in their syllabi. To navigate this:

  • Educational Frameworks: Utilized the CLEAR framework and UNESCO publications to demonstrate ethical and effective ways to incorporate AI into academic work.
  • Non-Generative AI Tools: Introduced tools like Research Rabbit, which assist in literature mapping without generating text, alleviating concerns about plagiarism.

Integrating AI into Lesson Plans

The librarian incorporated AI tools into instruction sessions, focusing on:

  • Free and Privacy-Conscious Tools: Selected AI applications like Copilot in Microsoft Edge that protect student data and are accessible without cost.
  • Parallel with Existing Tools: Demonstrated how AI can perform similar functions to familiar tools like Credo's concept mapping, easing the transition for both faculty and students.

AI Workshop Series: Empowering the Campus Community

To further AI literacy on campus, the library launched a futuristic-themed workshop series titled "AI's Creative Edge."

Workshop Offerings

  1. Advanced Prompt Engineering: Taught participants how to use AI for brainstorming keywords and concepts to enhance database searches.
  2. Citing AI Content: Provided hands-on training on using Zotero and Grammarly to correctly cite AI-generated material.
  3. Student Perspectives: Invited students to share their experiences and discuss ethical uses of AI tools.
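
The keyword-brainstorming exercise from the prompt engineering workshop can be sketched in a few lines of Python. This is a hypothetical illustration of the kind of prompt involved, not the workshop's actual material; `build_keyword_prompt` and its wording are assumptions.

```python
# Hypothetical sketch: composing a prompt that asks a chat model to
# brainstorm database search terms for a research topic.

def build_keyword_prompt(topic: str, n_terms: int = 10) -> str:
    """Compose a prompt requesting search keywords for a given topic."""
    return (
        f"I am searching academic databases for sources on: {topic}.\n"
        f"Suggest {n_terms} keywords and subject terms, including synonyms "
        "and broader/narrower terms, formatted as a comma-separated list."
    )

prompt = build_keyword_prompt(
    "universal design for learning in higher education", n_terms=8
)
print(prompt)
```

The returned string would then be pasted into (or sent to) a chat model, and the resulting terms tried against library databases.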

Engagement and Outcomes

The workshop on citing AI content saw the highest attendance, indicating a strong interest in understanding how to use AI ethically within the bounds of academic integrity. This response highlighted the need for ongoing education and support in navigating AI's role in academia.

AI Across the Research Process

The library team developed a comprehensive framework illustrating how AI tools can be integrated at various stages of the research process:

  • Brainstorming: Tools for organizing tasks, defining topics, and generating ideas (e.g., Copilot, ChatGPT).
  • Literature Review: Non-generative AI tools for mapping literature and identifying key sources (e.g., Research Rabbit).
  • Evaluation: Using AI to verify sources, assess credibility, and filter results based on journal rankings (e.g., Consensus).
  • Citing: AI-assisted citation tools for proper attribution (e.g., Grammarly add-on with ChatGPT, integrated with Zotero).

Leadership in AI Engagement: A Collaborative Effort

The Data, Media, and Web Librarian discussed the library's leadership role in advancing AI engagement on campus.

Proactive Initiatives

  • AI Literacy Development: Embraced AI as an area of intellectual curiosity and practical application, positioning the library as a knowledge hub.
  • Workshop Series: Expanded offerings to include topics like generative AI in images, music, and video, as well as AI's impact on career development.

Creative Projects and Experimentation

  • AI-Generated Book Covers: Created a library display featuring AI-generated reimaginings of existing book covers, engaging the community in discussions about AI and creativity.
  • Teaching AI Skills: Offered instruction on prompt engineering and image generation, enabling students and staff to interact effectively with AI tools.

Advanced AI Applications

  • GPT-4 and Claude 3 Vision Features: Explored the use of AI to transcribe and analyze handwritten historical documents, enhancing access to primary sources.
  • Support for Course Development: Participated in a pilot course on learning processes and AI, addressing the ethical considerations and potential of AI in education.

Campus Collaboration and Conversations

The library facilitated campus-wide discussions and collaborations regarding AI:

  • Campus Conversations: Organized events where faculty, IT staff, admissions officers, and finance team members shared perspectives on AI's impact in their areas.
  • Faculty Workshops: Engaged with faculty to discuss AI's role in teaching and learning, offering support and resources for integration.
  • Increased Course Support: Provided enhanced support for courses incorporating AI, ensuring that students and faculty have the necessary tools and knowledge.

Overcoming Challenges and Resistance

Throughout their journey, the library encountered challenges, including resistance from faculty and staff hesitant to adopt AI tools.

Addressing Faculty Concerns

  • Demonstrating Value: Showed faculty how AI could enhance research and learning without compromising academic integrity.
  • Alternative Assignments: Assisted in designing assignments that leveraged technology while mitigating concerns about AI misuse.

Engaging Resistant Staff

  • Demonstrations and Training: Conducted sessions to showcase the practical benefits of AI, highlighting efficiency gains and new capabilities.
  • Collaborative Approach: Encouraged open dialogue and shared experiences to ease apprehensions and build confidence in using AI tools.

Conclusion

The library team at Georgetown University's International Campus in Qatar exemplifies proactive leadership in AI engagement and integration. Through dedicated staff development, innovative experimentation, strategic faculty outreach, and the incorporation of AI into daily operations, they have positioned themselves at the forefront of academic innovation.

Their efforts not only enhance the library's services but also contribute significantly to the campus's overall readiness to navigate the evolving landscape of AI in education. By fostering a culture of ethical use, continuous learning, and collaborative exploration, they are shaping a future where AI is harnessed to enrich learning, research, and creativity.

Questions and Engagement

During their presentations and workshops, the library team actively engaged with students and faculty, addressing questions such as:

  • How can AI tools be used ethically in academic work?
  • What are effective strategies for citing AI-generated content?
  • How can resistance to AI adoption among staff and faculty be overcome?

Their willingness to share resources, such as cheat sheets for citing AI content, and to collaborate across departments underscores their commitment to supporting the campus community in embracing AI responsibly and effectively.

Unlocking Hidden Treasures: The Transformative Potential of AI in Special Collections

What Can AI Do for Special Collections? Improving Access and Enhancing Discovery

Presenters: Sonia Yaco and Bala Singu



In this enlightening presentation, Sonia Yaco and Bala Singu explore the transformative potential of Artificial Intelligence (AI) in the realm of special collections. Drawing from a year-long study conducted at Rutgers University, they delve into how AI can significantly improve access to and enhance the discovery of rich archival materials.

Introduction

Special collections in libraries house a wealth of historical and cultural artifacts. However, accessing and extracting meaningful insights from these collections can be challenging due to the nature of the materials, which often include handwritten documents, rare photographs, and other hard-to-process formats.

The presenters highlight a "golden opportunity" at the intersection of rich collections, an ever-expanding set of AI tools, and a strong desire to maximize the utility of these collections. By applying AI in meaningful ways, they aim to mine this wealth of information and make it more accessible to scholars and the public alike.

The William Elliot Griffis Collection

The focal point of the study is the William Elliot Griffis Papers at Rutgers University. This extensive collection documents the lives and work of the Griffis family, who were educators and missionaries in East Asia during the Meiji period (1868-1912). The collection includes manuscripts, photographs, and published materials and is heavily utilized by scholars from Asia, the United Kingdom, and the United States.

Margaret Clark Griffis

The study specifically focuses on Margaret Clark Griffis, the sister of William Elliot Griffis. She holds historical significance as one of the first Western women to educate Japanese women. By centering on her diaries, biographies, and photographs, the presenters aim to shed light on her contributions and experiences.

Strategies for Mining the Collection

To unlock the wealth of information within the Griffis collection, the presenters employed several strategies:

  1. Extracting Text to Improve Readability: Utilizing AI tools to transcribe handwritten and typewritten documents into machine-readable text.
  2. Finding Insights in Digitized Text and Photographs: Applying natural language processing and image analysis to gain deeper understanding.
  3. Connecting Text to Images: Linking textual content with corresponding images to create a richer narrative.

Software Tools Utilized

The project explored a variety of AI tools, categorized into:

  • Generative AI for Text and Images
  • Natural Language Processing Tools
  • Optical Character Recognition (OCR) Tools
  • Other Analytical Tools

In total, they examined some 26 software tools, assessing each on cost and learning curve. The tools ranged from free, user-friendly applications like ChatGPT 3.5 to more complex, subscription-based services like ChatGPT 4.0 and the DALL·E API.

Project Demonstrations

The presenters showcased three key demonstrations to illustrate the capabilities of AI in handling special collections:

1. Improving Readability

One of the primary challenges with special collections is the difficulty in reading handwritten and typewritten documents, especially those written in old cursive styles. To address this, the team used OCR tools to convert these documents into ASCII text, making them more accessible for computational analysis.

Handwritten Material

The team focused on transcribing Margaret Griffis's handwritten diary entries. They used tools like eScriptorium, Transkribus (AM Digital), and ChatGPT-4 to process the text. Each tool had varying levels of accuracy and challenges:

  • eScriptorium: A free tool with a moderate learning curve, it achieved an initial accuracy of around 89%.
  • Transkribus (AM Digital): A commercial tool with a higher cost but offered competitive accuracy.
  • ChatGPT-4: While powerful, it faced issues with "hallucinations," generating text not present in the original material.

By combining these tools, they improved the transcription accuracy significantly. For instance, feeding the eScriptorium output into ChatGPT-4 enhanced the accuracy to approximately 96%.
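
The combined pipeline described above — raw OCR output, then an LLM cleanup pass, with accuracy checked against hand-verified lines — can be sketched as follows. This is a minimal illustration, not the Rutgers team's code: the similarity ratio is a stand-in for a formal character-accuracy metric, and the cleanup prompt wording is an assumption.

```python
import difflib

def character_accuracy(reference: str, hypothesis: str) -> float:
    """Approximate character-level accuracy as a similarity ratio
    between a hand-checked transcription and OCR/LLM output."""
    return difflib.SequenceMatcher(None, reference, hypothesis).ratio()

def build_cleanup_prompt(ocr_text: str) -> str:
    """Prompt asking a chat model to correct OCR errors without
    inventing new text (guarding against hallucination)."""
    return (
        "The following text was produced by OCR from a 19th-century "
        "handwritten diary. Correct obvious transcription errors, but do "
        "not add, remove, or invent any content:\n\n" + ocr_text
    )

# Example: measuring how far raw OCR output is from a verified line.
reference = "Arrived in Yokohama this morning."
raw_ocr = "Arrivcd in Yokoharna this rnorning."
print(round(character_accuracy(reference, raw_ocr), 2))
# build_cleanup_prompt(raw_ocr) would then be sent to the chat model,
# and the corrected output scored the same way.
```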

Typewritten Material

For typewritten documents, such as William Griffis's biography of his sister, tools like Adobe Acrobat provided efficient OCR capabilities with high accuracy. These documents were easier to process compared to handwritten materials.

2. Finding Insights with AI

Once the text was extracted, the next step was to derive meaningful insights using AI techniques:

Translation

To make the content accessible to international scholars, the team utilized translation tools:

  • Google Translate: A free tool suitable for smaller text volumes.
  • googletrans: A free, unofficial Python library that wraps Google Translate; it proved unreliable and limited in the volume it could handle.
  • Google Cloud Translation API: A paid service offering high reliability for large-scale translations.
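
Because the Cloud Translation API limits request sizes, long transcriptions would be split on paragraph boundaries before translation. The sketch below illustrates that preparation step; the chunk size and the commented client call are assumptions, not the project's actual configuration.

```python
# Split long diary text into paragraph-aligned chunks that stay under a
# per-request character limit. A paragraph longer than max_chars is kept
# whole rather than split mid-sentence.

def chunk_paragraphs(text: str, max_chars: int = 5000) -> list[str]:
    """Group paragraphs into chunks no longer than max_chars each."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        candidate = (current + "\n\n" + para) if current else para
        if len(candidate) <= max_chars:
            current = candidate
        else:
            if current:
                chunks.append(current)
            current = para
    if current:
        chunks.append(current)
    return chunks

# Each chunk would then be sent to the paid API, e.g.:
# for chunk in chunk_paragraphs(diary_text):
#     result = translate_client.translate(chunk, target_language="ja")

print(chunk_paragraphs("one\n\ntwo\n\nthree", max_chars=8))
```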

Text Analysis and Visualization

Using natural language processing tools, the team performed analyses such as named entity recognition and topic modeling. They employed Voyant Tools, a free, open-source platform that offers various analytical capabilities:

  • Identifying key entities like names, places, and dates.
  • Visualizing word frequencies and relationships.
  • Creating interactive geographic maps based on the text.
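
Outside Voyant, the same style of word-frequency analysis can be sketched with the Python standard library. The stopword list below is a tiny illustrative sample, not Voyant's actual list.

```python
import re
from collections import Counter

# Minimal stopword set for illustration only.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "on", "at", "is"}

def word_frequencies(text: str, top_n: int = 10) -> list[tuple[str, int]]:
    """Count non-stopword tokens, case-folded, most common first."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS)
    return counts.most_common(top_n)

sample = "Walked to the school in the morning. The school was quiet."
print(word_frequencies(sample, top_n=3))
```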

Photographic Grouping

With more than 400 photographs in the collection, the team sought to group images programmatically based on content similarities. By leveraging Python scripts and AI algorithms, they clustered photographs that shared visual characteristics, such as shapes, subjects, and themes.
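
The clustering step can be illustrated with a toy sketch: assume each photograph has already been reduced to a feature vector (in practice via color histograms or a CNN embedding), then group vectors by similarity. The greedy single-linkage approach and the threshold value below are assumptions, not the team's actual algorithm.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def group_by_similarity(vectors, threshold):
    """Greedy single-linkage grouping: an image joins the first group
    containing a sufficiently similar image, else starts a new group."""
    groups: list[list[int]] = []
    for i, v in enumerate(vectors):
        for g in groups:
            if any(euclidean(v, vectors[j]) <= threshold for j in g):
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

# Toy feature vectors standing in for per-image embeddings:
features = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)]
print(group_by_similarity(features, threshold=1.0))  # → [[0, 1], [2]]
```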

3. Connecting Text and Images

One of the most innovative aspects of the project was linking textual content with corresponding images to enrich the narrative:

Describing Photographs Using AI

The team used ChatGPT to generate detailed descriptions of photographs. For example, given a photograph with minimal metadata labeled "small Japanese print," ChatGPT produced an extensive description, identifying elements like traditional attire, expressions, and possible historical context.

This process significantly enhances the discoverability of images, providing researchers with richer information than previously available.
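
A request of this kind can be sketched against the current OpenAI chat completions API, which accepts base64-encoded images as content parts. The model name and prompt wording are assumptions; the actual prompts used at Rutgers were not shared, and the API call itself is left commented out.

```python
import base64

def encode_image_bytes(data: bytes) -> str:
    """Base64-encode raw image bytes for an API request body."""
    return base64.b64encode(data).decode("ascii")

def build_description_request(image_b64: str) -> dict:
    """Assemble a chat request asking for a catalog-ready description."""
    return {
        "model": "gpt-4o",  # assumed vision-capable model
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this photograph for an archival catalog: "
                         "subjects, attire, setting, and likely period."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    }

# b64 = encode_image_bytes(open("small_japanese_print.jpg", "rb").read())
# response = client.chat.completions.create(**build_description_request(b64))
```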

Adding Metadata and Generating MARC Records

Beyond descriptions, the AI tools were used to generate metadata and even create MARC records for cataloging purposes. This automation can streamline library workflows and improve access to collections.
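
The cataloging step can be sketched as mapping AI-generated metadata onto MARC tags; a library such as pymarc would serialize the actual record. The field choices follow common MARC practice, and the example values are illustrative assumptions.

```python
# Map generated metadata onto skeletal MARC fields:
# 245 = title statement, 520 = summary note, 655 = genre/form term.

def make_marc_fields(title: str, summary: str, genre: str) -> dict[str, str]:
    """Return a tag → value mapping for a minimal MARC record."""
    return {
        "245": title,
        "520": summary,
        "655": genre,
    }

fields = make_marc_fields(
    title="Portrait of a Japanese schoolteacher, ca. 1872",
    summary="AI-generated description: a seated woman in traditional dress...",
    genre="Photographs",
)
print(fields["245"])
```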

Generating Images from Text and Matching to Real Images

Taking the connection a step further, the team explored generating images based on extracted text and then matching these AI-generated images to real photographs in the collection:

  1. Extract Text Descriptions: Using ChatGPT to identify descriptive passages from the diary.
  2. Generate Images: Employing tools like DALL·E to create images based on these descriptions.
  3. Match to Real Images: Programmatically comparing AI-generated images to actual photographs in the collection to find potential matches.

While not perfect, this method opens up new avenues for discovering connections within archival materials that might not be immediately apparent.
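
One way to approximate the programmatic comparison in step 3 is perceptual hashing: hash both the AI-generated image and each archival photograph, then rank candidates by Hamming distance. This is an assumed stand-in for the team's matching method. The sketch hashes small grayscale grids directly; in practice a library like Pillow would resize real images down to such grids first.

```python
def dhash(grid: list[list[int]]) -> list[int]:
    """Difference hash: 1 if each pixel is brighter than its right
    neighbor, scanning row by row."""
    return [
        1 if row[x] > row[x + 1] else 0
        for row in grid
        for x in range(len(row) - 1)
    ]

def hamming(a: list[int], b: list[int]) -> int:
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

# Tiny synthetic grids standing in for downsampled images:
img_a = [[10, 20], [30, 5]]
img_b = [[11, 19], [28, 6]]    # visually similar to img_a
img_c = [[200, 10], [5, 250]]  # very different

ha, hb, hc = dhash(img_a), dhash(img_b), dhash(img_c)
print(hamming(ha, hb), hamming(ha, hc))  # → 0 2
```

Lower distances indicate likelier matches, which a human would then confirm.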

Limitations and Takeaways

Limitations

  • Infrastructure Needs: AI requires significant resources, including computational power, software costs, and staff time.
  • Technical Expertise: A background in programming and software development is highly beneficial. Collaboration with technical staff is often necessary.
  • Learning Curves: Many AI tools, even free ones, come with steep learning curves that can be challenging to overcome.
  • Human Intervention: AI tools are not fully autonomous and require human oversight to ensure accuracy and relevance.

Takeaways

  • Combining Tools Enhances Effectiveness: Using multiple AI tools in conjunction can yield better results than using them in isolation.
  • Start with Accessible Tools: Begin with user-friendly software like Adobe Acrobat for OCR and Google Translate for initial forays into AI applications.
  • Incorporate AI into Workflows: Integrate AI tools into existing library processes to improve efficiency and output quality.
  • Partnerships are Crucial: Collaborate with technical staff, data scientists, and computer science departments to leverage expertise.

Recommendations for Libraries

The presenters offer practical advice for libraries interested in leveraging AI for their special collections:

  1. Begin with Easy-to-Use Software: Tools like Adobe Acrobat and Google Translate can have an immediate impact with minimal investment.
  2. Experiment with Text Analysis: Use platforms like Voyant Tools to gain insights into your collections and explore new research possibilities.
  3. Enhance Metadata Creation: Utilize AI to generate or enrich metadata, improving searchability and access.
  4. Seek Funding Opportunities: Apply for grants to support more extensive AI projects, such as large-scale photograph organization.
  5. Collaborate with Technical Experts: Engage with technical staff within or outside your institution to support complex AI initiatives.

Conclusion

The presentation underscores the significant potential of AI in unlocking the hidden treasures within special collections. By improving readability, finding insights, and connecting text with images, AI tools can make collections more accessible and enhance scholarly research.

The journey involves challenges, particularly in terms of resources and expertise, but the rewards can be substantial. As AI technology continues to evolve, libraries have an opportunity to embrace these tools, transform their workflows, and open their collections more fully to the world.

Questions and Further Discussion

During the Q&A session, attendees posed several insightful questions:

  • Tools for MARC Records: The presenters used ChatGPT-4 to generate MARC records from photographs, finding it effective for creating initial catalog entries.
  • Batch Processing: When asked about processing multiple images, they noted that while interactive interfaces might limit batch sizes, using APIs and programmatic approaches allows for processing larger volumes.
  • Applying Techniques to Other Formats: The techniques discussed are applicable to manuscripts, maps, and even video materials. Tools like Whisper can transcribe audio and video content, enhancing accessibility.
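
The batch-processing point can be sketched as a loop with exponential backoff around a per-item API call. Here `describe_image` is a hypothetical placeholder for whichever call is used; the retry counts and delays are assumptions.

```python
import time

def backoff_delays(retries: int, base: float = 1.0) -> list[float]:
    """Exponential backoff schedule: base, 2*base, 4*base, ..."""
    return [base * (2 ** i) for i in range(retries)]

def process_batch(items, describe_image, retries: int = 3):
    """Call describe_image on each item, backing off and retrying on
    errors (e.g., rate limits). Items that never succeed are skipped."""
    results = {}
    for item in items:
        for delay in backoff_delays(retries):
            try:
                results[item] = describe_image(item)
                break
            except Exception:
                time.sleep(delay)  # back off, then retry
    return results

print(backoff_delays(3))  # → [1.0, 2.0, 4.0]
```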

Exploring the Possibilities of Generative AI: A Deep Dive into Research Tools

Exploring Research-Focused Generative AI Tools for Libraries and Higher Education

Hello everyone, and thank you so much for joining today's session on research-focused generative AI tools. In this presentation, we'll delve into various types of generative AI, with a particular emphasis on research tools like Consensus, Elicit, and Research Rabbit. We'll also discuss the challenges associated with generative AI and consider how these tools impact instruction and library services.



Types of Generative AI

Generative AI is a rapidly evolving field with a variety of applications. Some of the main types include:

  • Chatbots: Conversational AI systems like ChatGPT that can generate human-like text responses.
  • Image Generation and Synthesis Tools: Tools like Midjourney and NightCafe that can create images based on textual prompts.
  • Research Tools: Our focus today is on research tools such as Consensus, Elicit, and Research Rabbit, which aim to enhance the research process.
  • Music and Video Generation Tools: AI systems that can compose music or generate videos.
  • Others: The field is continually expanding, and new tools are being developed as we speak.

Research Generative AI Tools

1. Consensus

Consensus is a search engine that utilizes language models to surface and synthesize insights from academic research papers. According to their website:

"Consensus is not a chatbot, but we use the same technology throughout the product to help make the research process more efficient."

Source Material: The content comes from the Semantic Scholar database, which provides access to a wide range of academic papers.

Mission: Their mission is to use AI to make expert information accessible to all.

Example Usage:

When prompted with the question:

"How do faculty and instructional designers use Universal Design for Learning in higher education?"

Consensus provides a summary at the top of the page, analyzing the top eight papers related to the query. Below the summary, it lists the eight papers, including details like the title, authors, publication venue, and citation count.

Features:

  • Save, Cite, Share: Users can save articles, generate citations, and share them.
  • Citation Generation: Similar to many databases, Consensus can generate citations, though users should verify for minor errors.
  • Study Snapshot: Offers a synthesized overview of a paper's key points and outcomes. Note that generating a snapshot may require AI credits.

AI Credits and Premium Features:

  • AI Credits: Users have a monthly limit of 20 AI credits in the free version, which are used for premium features like generating study snapshots.
  • Premium Version: Offers additional features beyond the free version.

2. Elicit

Elicit is a research assistant that uses language models like GPT-3 to automate parts of the research workflow, especially literature reviews.

Functionality:

  • When asked a question, Elicit shows relevant papers and summarizes key information in an easy-to-use table.

Example Usage:

With the prompt:

"How should generative AI be used in libraries and higher education?"

Elicit provides a summary of the top four papers, including in-text citations that link to the sources.

Features:

  • Paper Details: Includes paper information, citations, abstract summaries, and main findings.
  • Additional Columns: Users can add more columns to the results table to customize the information displayed.

Source Material:

Elicit pulls content from Semantic Scholar, searching over 175 million papers.

3. Research Rabbit

Research Rabbit is a research platform that enables users to discover and visualize relevant literature and scholars.

Mission:

To empower researchers with powerful technologies.

Features:

  • Visualization: Provides visual representations of how papers are interconnected.
  • Explore Options: Users can explore similar work, earlier work, later work, and linked content.
  • Authors: Allows exploration of authors and suggested authors in the field.
  • Export Papers: Users can export lists of papers for further use.

Example Usage:

Starting with one or more articles, users can find similar articles, explore cited works, or see which papers cite the original article. The platform creates a network graph showing the relationships between articles.

Personal Experience:

The presenter found Research Rabbit particularly useful for organizing dissertation literature reviews.

Why Use Generative AI in Libraries?

Generative AI technology is not going away; it's becoming a mainstay in our culture and professional practices. Libraries and librarians need to consider how to respond to this technology.

Supporting Patrons

  • Should we support patrons in using these new tools or try to prevent them from using them?
  • It's a balancing act, considering the benefits and challenges.

Advancing Effectiveness and Efficiency

  • Generative AI tools claim to make research more effective and efficient.
  • Teaching students how to use and evaluate these tools prepares them for future workplaces where such technologies may be prevalent.

Personal Uses of Generative AI

  • Making Paragraphs More Concise: Using AI to refine writing.
  • Rephrasing Assistance: Helping with tricky paraphrasing tasks.
  • Creating Titles: Generating titles for presentations or programs.
  • Organizing Articles: Managing literature for dissertations or research projects.
  • Brainstorming: Generating ideas and exploring new concepts.

Challenges with Generative AI

While generative AI offers many benefits, there are significant challenges to consider.

Privacy and Lack of Transparency

  • Uncertainty about where these tools get their information and how they process data.
  • Users may unknowingly input sensitive information.

Quality and Hallucinations

  • AI can produce inaccurate information or "hallucinations," including ghost sources that don't exist.
  • Some are beginning to refer to these as "fabrications."

Biases and Blind Spots

  • AI models can perpetuate biases present in the training data.

Date Range of Content

  • Some AI tools may have outdated information, as their training data cuts off at a certain point.

Plagiarism and Academic Integrity

  • Students may misuse AI tools, leading to academic integrity violations.
  • Detection tools exist but may produce false positives.

Detection Tools and False Positives

  • Tools designed to detect AI-generated content are not foolproof.

Evaluating Generative AI Tools

The AI ROBOT Test

Developed by Hervieux and Wheatley, the AI ROBOT test is a framework for evaluating AI tools, focusing on:

  • Reliability
  • Objective
  • Bias
  • Ownership
  • Type

This framework can be used in information literacy instruction to help students and patrons critically assess AI tools.

Additional Resources

The presenter has compiled a LibGuide with articles, videos, podcasts, and other resources on generative AI.

Poll Results

In a previous presentation, attendees were polled on their views regarding generative AI.

Should Librarians Embrace Generative AI?

Most respondents believed librarians should either embrace it or respond somewhere in between embracing and avoiding. Only one person suggested that librarians should avoid it.

Which Generative AI Tools Are Potentially Useful for Your Library?

  • ChatGPT: 134 responses
  • Elicit: 3 responses
  • Perplexity: 118 responses
  • Research Rabbit: 189 responses
  • NightCafe: 40 responses
  • Other: 22 responses
  • Consensus: 103 responses

Upcoming GAL Virtual Conference

The presenter is organizing an upcoming GAL (Generative AI in Libraries) virtual conference titled:

Prompter or Perish: Navigating Generative AI in Libraries

Dates: June 11th, 12th, and 13th

Time: 1 PM to 4 PM Eastern Time

Call for Proposals: Librarians are encouraged to submit proposals and participate in the conference. For more information, visit the conference website.

Contact Information

For further questions or to continue the conversation, you can contact the presenter at:

Email: brienne.dow@briarcliff.edu

Conclusion

Generative AI is a transformative technology with significant implications for libraries and higher education. By understanding and critically engaging with these tools, librarians can better support their patrons and prepare for the future.

Thank you for attending today's session. We look forward to continuing the conversation at the upcoming GAL Virtual Conference.

Wednesday, November 27, 2024

Overcoming Challenges: How NPR Digitized Their Music Collection with AI

Practical Application of AI: Evaluating Music to Build a Music Library

Presented by Jane Gilvin, NPR's Research Archives and Data Team



Introduction

Jane Gilvin delivered a presentation on how her team at NPR utilized artificial intelligence (AI) to automate the identification of instrumental and vocal music to build a digital music library more efficiently. The session focused on the practical application of AI in music cataloging, the challenges faced, and the solutions implemented.

About Jane Gilvin and the RAD Team

  • Jane Gilvin:
    • Member of NPR's Research Archives and Data (RAD) Team for nearly 13 years.
    • Educational background in music and library science.
    • Alumna of San Jose State University's Information Science program.
    • Experience in radio since she was a teenager.
  • The RAD Team:
    • Formerly known as the NPR Library, established in the 1970s.
    • Responsible for collecting NPR programming archives.
    • Provides resources for production, including a comprehensive music collection.

NPR's Music Collection Evolution

The NPR music collection has evolved alongside technological advancements:

  • Vinyl Records: The initial collection comprised vinyl records across various genres.
  • Transition to CDs: Shifted to compact discs (CDs) as CD players became standard in production.
  • Digital Music Files: Moved towards digital files to meet the expectations of quick and remote access to music.

Challenges in Digitizing the Collection

The transition to digital presented several challenges:

  • Converting thousands of physical CDs into digital files for immediate access.
  • Ensuring metadata accuracy and consistency, especially for instrumental and vocal classification.
  • Lack of resources for continuous large-scale ingestion and cataloging of new music.

Solution: Automation with AI

The Robot and ORRIS

  • The Robot: A batch processing system capable of ripping CDs, identifying metadata from online databases, and delivering MP3 and WAV files with embedded ID3 tags.
  • ORRIS (Open Resource and Research Information System): A new database developed to allow users to search, stream, and download songs for production.

Implementing Essentia

  • Essentia: An open-source library and collection of tools for audio and music analysis, description, and synthesis.
  • Capabilities: Predicts genre, beats per minute, mood, and most importantly, classifies tracks as instrumental or vocal.
  • Training the Algorithm: Used NPR's extensive archive of over 300,000 tracks with existing instrumental and vocal tags to train the algorithm.

Accuracy and Testing

  • Human Cataloging Accuracy: Ranged from 90% to 98%, averaging around 90% due to human error and limitations.
  • Algorithm Accuracy Goal: Set at 80% to balance the usefulness and the efficiency of the process.
  • Results: The algorithm achieved an accuracy of 86%, meeting the team's criteria.
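The evaluation logic described above can be sketched in a few lines. This is a hypothetical illustration, not NPR's actual code: the tag values, helper names, and sample data are invented, and only the 80% acceptance threshold comes from the session.

```python
# Hypothetical sketch: checking automated instrumental/vocal tags against a
# human-cataloged sample, using the 80% acceptance threshold described above.

ACCEPTANCE_THRESHOLD = 0.80  # minimum acceptable agreement with human tags

def evaluate_tags(human_tags, algorithm_tags):
    """Return the fraction of tracks where the algorithm agrees with humans."""
    if len(human_tags) != len(algorithm_tags):
        raise ValueError("tag lists must cover the same tracks")
    matches = sum(h == a for h, a in zip(human_tags, algorithm_tags))
    return matches / len(human_tags)

# Toy sample: "I" = instrumental, "V" = vocal
human     = ["I", "V", "V", "I", "I", "V", "I", "V", "I", "V"]
algorithm = ["I", "V", "I", "I", "I", "V", "I", "V", "I", "I"]

accuracy = evaluate_tags(human, algorithm)
passes = accuracy >= ACCEPTANCE_THRESHOLD
print(f"accuracy={accuracy:.0%}, acceptable={passes}")
```

Running the same check periodically over a fresh human-reviewed sample is also how the quality-control monitoring described later in the session could be implemented.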

Integration and Quality Control

Building into the Ingest Process

  • Automated the instrumental/vocal tagging during the ingest process of new tracks.
  • Applied the algorithm to existing tracks that lacked instrumental/vocal classification.

User Feedback Mechanism

  • Added a feature allowing users to report incorrectly tagged songs directly from the ORRIS interface.
  • Provided a quick way for the RAD team to receive notifications and correct metadata errors.

Quality Control Measures

  • Automated spreadsheets generated during the algorithm's run allowed for immediate review of results.
  • Periodic checks to ensure the algorithm continues to perform within the acceptable accuracy range.
  • Addressed any shifts in algorithm performance due to changes in the type of music being ingested or other factors.

Demonstration

Jane provided a live demonstration of how the process works:

  1. Showed the ORRIS search interface and how users can search for and listen to tracks (e.g., Thelonious Monk, David Bowie).
  2. Demonstrated the ingestion of new albums and how the algorithm processes them to classify tracks as instrumental or vocal.
  3. Illustrated the use of the user feedback feature to report incorrect classifications.

Benefits and Outcomes

  • Significantly reduced the time and resources required for music cataloging.
  • Enabled continuous addition of new music to the library despite limited staff time.
  • Improved user satisfaction by providing a reliable point of data for instrumental and vocal tracks.

Challenges and Considerations

  • Training Data Limitations: Ensuring the training data was representative and free from bias or errors.
  • Algorithm Bias: Addressing the overrepresentation of certain genres (e.g., jazz and classical) in the training data to avoid skewed results.
  • Metadata Accuracy: Dealing with inconsistent or incorrect metadata from external sources.

Future Plans

Jane discussed potential future projects:

  • Revisiting other algorithms from Essentia, such as those predicting timbre and mood.
  • Implementing user testing and UX projects to improve data research and user experience.
  • Continuing to refine the algorithm and processes to maintain or improve accuracy.

Questions and Answers

During the Q&A session, several topics were addressed:

Copyright and Licensing Considerations

  • NPR has licenses with major performing rights organizations for the use of music in production.
  • Other libraries considering building a music collection should review legal permissions and terms of use.

Data Labeling and Ongoing QA/QC

  • The team performs periodic quality checks but does not engage extensively in data labeling projects.
  • Emphasis on monitoring algorithm performance and making adjustments as needed.

User Testing and UX Improvements

  • Future plans include conducting user testing to evaluate the effectiveness of additional algorithms (e.g., mood taxonomy).
  • Goal is to enhance the search and discovery experience for users.

Conclusion

Jane concluded by emphasizing how the application of AI allowed the RAD team to develop a less time-consuming ingest and cataloging process. This enabled the continuous growth of the music library, providing valuable resources to production staff while efficiently managing limited staff time.

Contact Information

For further information or inquiries, you can reach out to Jane Gilvin through NPR's Research Archives and Data Team.

Protecting Your Privacy: The Risks of Sharing Sensitive Data with AI Tools

Deliberately Safeguarding Privacy and Confidentiality in the Era of Generative AI

Presented by Reed N. Hedges, Digital Initiatives Librarian at the College of Southern Idaho



Introduction

Reed N. Hedges delivered a presentation focusing on the critical importance of safeguarding privacy and confidentiality when using generative artificial intelligence (AI) tools. The session highlighted the potential risks associated with sharing sensitive data with AI models and provided actionable recommendations for users and professionals in the library and information science fields.

Personal Anecdotes and the Need for Caution

Hedges began by sharing several personal anecdotes illustrating how individuals unknowingly compromise their privacy by inputting sensitive information into AI tools:

  • A user who spends long hours chatting with GPT-4, sharing more personal information with the AI than with their own spouse.
  • An individual who input all their grandchildren's data into an AI to generate gift ideas.
  • A person who provided detailed demographic data of a local social group, including identifiable information, to plan activities and programs.
  • A user who entered their entire family budget into an AI tool for financial management.

These examples underscore the pressing need for users to be more conscientious about the data they share with AI systems.

Main Point: Do Not Input Sensitive Data into AI Tools

The core message of the presentation is clear: Users should not input any sensitive or personal data into prompts for generative AI tools. This includes business information, personal identifiers, or any data that could compromise individual or organizational privacy.

Privacy Policies and Data Handling by AI Tools

Hedges highlighted specific concerns regarding popular AI tools:

  • Google Bard: Explicitly notes that human supervisors may read user data, emphasizing the importance of anonymization.
  • OpenAI's ChatGPT: Terms of use discuss the need for proprietary data protection. Users can have a more privacy-conscious session by using OpenAI's Playground or adjusting settings at privacy.openai.com/policies.
  • Perplexity AI: Evades questions about data handling and extrapolation.

The Challenge of Legal Recourse and Privacy Harms

The presentation delved into the limitations of current privacy laws:

  • Harm Requirement: Courts often require proof of harm, which is challenging when privacy violations involve intangible injuries like anxiety or frustration.
  • Impediments to Enforcement: The need to establish harm impedes the effective enforcement of privacy violations, allowing wrongdoers to escape accountability.
  • Lack of Adequate Legal Framework: The existing legal system lacks effective mechanisms to address privacy harms resulting from AI data handling.

Extrapolation and Inference by AI Tools

Generative AI models can infer additional information beyond what users explicitly provide:

  • Data Extrapolation: AI tools can infer behaviors, engagement patterns, and personal attributes from minimal data inputs.
  • Privacy Risks: Such extrapolation can inadvertently reveal sensitive information, including learning disabilities or mental health issues.
  • Example: Even generic prompts can lead to AI inferring personal details that compromise privacy.

Recommendations for Safeguarding Privacy

1. Transparency in Data Collection

  • Inform users about the data being collected and its intended use.
  • Only OpenAI's ChatGPT and Anthropic's Claude explicitly deny storing and extrapolating user data.

2. Informed Consent

  • Obtain explicit consent before collecting or using personal information.
  • Ensure users are aware of the implications of data sharing with AI tools.

3. Data Minimization

  • Limit data collection to what is absolutely essential for the task.
  • Avoid including unnecessary personal or demographic details in AI prompts.

4. Anonymization and Avoiding Sensitive Information

  • Do not include individual attributes or identifiers in AI prompts.
  • Use synthetic or generalized data where possible.
  • Be cautious even with public data, as ethical considerations remain.
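As a small illustration of the data-minimization idea, a prompt can be scrubbed of obvious identifiers before it is ever sent to an AI tool. This sketch is illustrative only: regex scrubbing catches emails and phone numbers but is not true anonymization, since names, addresses, and contextual details can still identify people.

```python
import re

# Illustrative safeguard: strip obvious identifiers (emails, phone numbers)
# from text before sending it to an AI tool. This is a partial measure, not
# real anonymization.

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b")

def scrub(text):
    """Replace obvious emails and phone numbers with placeholders."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

prompt = "Contact Pat at pat.lee@example.org or 555-123-4567 about the budget."
print(scrub(prompt))
# -> Contact Pat at [EMAIL] or [PHONE] about the budget.
```

A human reviewer, as recommended in the content-moderation step below, should still check the scrubbed prompt before submission.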

5. Implement Strict Access and Use Controls

  • Enforce a "least privilege" access model, using tools that require minimal data access.
  • Ensure staff and users are clear on what data can be input into AI tools.

6. Use Human Content Moderation

  • Have prompts reviewed by multiple individuals to screen for privacy issues.
  • This process can also enhance quality control.

7. Be Skeptical of "Secure" AI Tools

  • Avoid promising or assuming that any AI tool is completely secure.
  • Recognize that even custom AI models can be vulnerable to exploitation.

Understanding AI Terms of Service

Users should familiarize themselves with the terms of service of AI tools:

  • Ownership of Content: OpenAI states that users own the input and, to the extent permitted by law, the output generated.
  • Responsibility for Data: Users are responsible for ensuring that their content does not violate any laws or terms.
  • Data Use: AI providers may use input data for training and improving models unless users opt out.

Final Thoughts on Privacy Practices

Hedges emphasized that traditional privacy protection principles remain relevant but must be applied more diligently in the context of AI:

  • Extra Vigilance: Users must be proactive in safeguarding their data when interacting with AI tools.
  • Data Breaches are Inevitable: Even with safeguards, data breaches can occur; therefore, minimizing shared data is crucial.
  • Reassessing the Need for AI: Consider whether using AI is necessary for a given task, especially when handling sensitive information.

Conclusion

In the era of generative AI, safeguarding privacy and confidentiality requires deliberate and informed actions by users and professionals. By understanding the risks, adhering to best practices, and educating others, individuals can mitigate potential harms associated with AI data handling.

References and Further Reading

  • Danielle Keats Citron and Daniel J. Solove: "Privacy Harms" - A comprehensive paper discussing the challenges in addressing privacy violations legally.
  • Shantanu Sharma: "Artificial Intelligence and Privacy" - An exploration of AI's impact on privacy, available on SSRN.
  • Nathan Hunter: "The Art of ChatGPT Prompting: A Guide to Crafting Clear and Effective Prompts" - A book providing insights into effective AI interactions.

Links to these resources were provided during the presentation for attendees interested in deepening their understanding of AI privacy concerns.

Bridging the Gap: The Role of Librarians in Facilitating AI Integration in Library Instruction

Faculty Attitudes Toward Librarians Introducing AI in Library Instruction Sessions

Presented by Beth Evans, Associate Professor at Brooklyn College, City University of New York



Introduction

Beth Evans delivered a presentation discussing the role of librarians in introducing artificial intelligence (AI) tools in library instruction sessions. With over 30 years of experience at Brooklyn College's library, she explored faculty perspectives on the use of AI in academic settings and the potential implications for library instruction.

Background

Evans noted that AI technologies like ChatGPT have the potential to augment, support, or even replace certain library functions, such as reference services, instruction, and technical services. Recognizing the transformative impact of AI, she sought to understand faculty attitudes toward AI and whether they would welcome librarians incorporating AI tools into their instruction sessions.

Research Methodology

In the fall of 2023, Evans conducted a survey targeting faculty members at Brooklyn College. Key aspects of the survey included:

  • Distributed to 199 faculty members.
  • Received 74 responses, representing a response rate of approximately 37%.
  • Respondents came from various departments, with the largest representation from English, History, and Sociology.
  • Questions focused on faculty's introduction of AI in their courses, their attitudes toward AI, and their openness to librarians discussing AI in instruction sessions.

Survey Findings

Faculty Introduction of AI in Courses

Evans explored how faculty members addressed AI in their teaching:

  • Proactive Introduction: Some faculty included AI tools in their syllabi, assignments, or class discussions.
  • Student-Initiated Discussions: In a few cases, students brought up AI topics during classes.
  • No Introduction: A portion of faculty did not introduce AI topics at all.

Methods of Introducing AI

Among faculty who addressed AI:

  • Rule Setting in Syllabi: Establishing guidelines on AI usage in course policies.
  • Class Discussions: Engaging students in conversations about AI's role and impact.
  • Assignments Involving AI: Incorporating AI tools as part of coursework to critically evaluate their utility.

Faculty Attitudes Toward AI

Faculty responses reflected a spectrum of attitudes:

1. Prohibitive

Some faculty strictly prohibited the use of AI tools, expressing concerns about academic integrity and potential threats to human creativity and critical thinking.

2. Cautionary

Others cautioned students about relying on AI, highlighting limitations and encouraging transparency if AI tools were used.

3. Preventative

Certain faculty designed assignments that were difficult or impossible to complete using AI tools, thereby discouraging their use.

4. Proactive Utilization

A group of faculty embraced AI, integrating it into their teaching to enhance learning outcomes:

  • Using AI for media literacy discussions.
  • Employing AI to improve cover letters in business courses.
  • Assigning comparative analyses between AI-generated content and traditional research tools like PubMed.

Faculty Concerns About Librarians Introducing AI

When asked whether they were concerned about librarians introducing AI in library instruction sessions:

  • Majority Not Concerned: Most faculty members were open to librarians discussing AI tools.
  • Supportive of Librarian Expertise: Many acknowledged librarians as information experts capable of providing balanced and ethical guidance on AI.
  • Strong Opposition: A minority expressed strong opposition, fearing AI as a threat to human flourishing and academic integrity.

Additional Faculty Comments

Faculty provided further insights:

Ambivalence and Hesitation
  • Some were uncertain about AI's role and expressed a need for more understanding before fully integrating it.
  • Concerns about keeping pace with rapidly evolving technology and its implications for cheating and academic dishonesty.
Recognizing the Inevitable Presence of AI
  • Acknowledgment that AI is prevalent and students need to be educated about its use.
  • Emphasis on not burying heads in the sand and preparing students for real-world applications where AI is utilized.
Desire for Collaboration with Librarians
  • Faculty expressed interest in workshops and collaborations led by librarians to explore AI tools constructively.
  • Appreciation for librarians' efforts to assist both students and faculty in understanding AI's prevalence and uses.

Conclusion

Beth Evans concluded that while faculty attitudes toward AI vary widely, there is significant openness and even enthusiasm for librarians to take an active role in introducing and educating about AI tools in library instruction sessions. Librarians are viewed as information experts well-equipped to navigate the ethical, practical, and pedagogical aspects of AI in academic settings.

Implications for Librarians

Based on the survey findings:

  • Librarians have an opportunity to lead in AI literacy education, providing balanced perspectives on AI tools.
  • Collaboration with faculty is essential to ensure that AI integration aligns with course objectives and academic integrity policies.
  • There is a need to address concerns and misconceptions about AI, tailoring approaches to different disciplines and faculty attitudes.

Contact Information

For further information or collaboration opportunities, you can contact Beth Evans.

Note: The final slide of the presentation included an AI-generated image using the tool "Tome" with the theme "Ocean."

Navigating the Intersection of AI and Information Literacy: Essential Competencies for Librarians

Competencies for the Use of Generative AI in Information Literacy Instruction

Presented by Paul Pival, Librarian at the University of Calgary



Introduction

During the Library 2.0 Mini-Conference on AI and Libraries, Paul Pival delivered a presentation titled "Competencies for the Use of Generative AI in Information Literacy Instruction." The session focused on identifying the essential competencies that librarians should possess to effectively incorporate generative artificial intelligence (AI) into information literacy instruction.

Frameworks vs. Competencies

Paul began by distinguishing between frameworks and competencies. While frameworks serve as blueprints outlining how various components fit together (analogous to building a house), competencies are the specific skills and knowledge required to execute those plans (the materials needed to build the house).

He referenced the Association of College and Research Libraries (ACRL) Framework for Information Literacy for Higher Education, noting that it is broad enough to encompass generative AI. He highlighted that efforts are underway, led by professionals like Dr. Leo Lo, to update the framework to explicitly address generative AI.

ACRL Framework and Generative AI

Paul discussed how the six frames of the ACRL Framework relate to generative AI:

  1. Authority is Constructed and Contextual: Emphasizing the importance of assessing content critically and acknowledging personal biases when evaluating AI-generated information.
  2. Information Creation as a Process: Understanding how large language models (LLMs) generate content and accepting the ambiguity in emerging information formats.
  3. Information Has Value: Recognizing the need to cite AI-generated content appropriately and verifying the accuracy of AI-provided citations.
  4. Research as Inquiry: Utilizing AI tools to break down complex problems and enhance inquiry-based learning.
  5. Scholarship as Conversation: Engaging in dialogues with AI tools, understanding that they are conversational agents rather than traditional search engines.
  6. Searching as Strategic Exploration: Acknowledging that searching is iterative and that AI tools complement but do not replace traditional academic databases.

Essential Competencies for Librarians

Paul proposed four key competencies that librarians should develop to effectively use generative AI in information literacy instruction:

  1. Understanding How Generative AI Works:
    • Familiarity with the leading AI models, referred to as "Frontier Models," including GPT-4, Google's Gemini 1.0, and Anthropic's Claude 3.
    • Investing time (at least 10 hours per model) to become proficient with their nuances.
    • Recognizing accessibility issues, such as subscription costs and geographical restrictions, which contribute to the digital divide.
  2. Recognizing Bias in AI Models:
    • Understanding that AI models are trained on vast internet data, including biased and harmful content.
    • Acknowledging that the programming and training data may not represent diverse worldviews.
    • Being aware of potential overcorrections and content filtering issues.
  3. Identifying and Managing Hallucinations:
    • Recognizing that AI models may generate false or fabricated information, including non-existent citations.
    • Understanding the concept of "hallucinations" in AI and their implications for information accuracy.
    • Exploring solutions like Retrieval Augmented Generation (RAG) to mitigate hallucinations by incorporating domain-specific knowledge bases.
  4. Ethical Considerations:
    • Evaluating the ethical implications of using AI tools, including environmental impacts and labor practices.
    • Understanding legal issues related to copyright and content usage.
    • Considering the potential for AI tools to disseminate disinformation.

Resources and Continuous Learning

Paul emphasized the importance of continuous learning and adaptability in AI literacy, and shared several resources for further exploration during the session.

Conclusion

In conclusion, Paul highlighted that AI literacy is not static but evolves with technological advancements. He urged librarians to:

  • Educate themselves on generative AI tools and their implications.
  • Integrate AI competencies within existing information literacy frameworks.
  • Stay informed about ethical considerations and emerging issues.
  • Promote continuous learning to adapt to the rapidly changing AI landscape.

By developing these competencies, librarians can better serve their patrons and help navigate the complexities introduced by generative AI in information literacy instruction.

Contact Information

You can connect with Paul Pival on social media platforms under the handle @ppival.

AI in Libraries: Unlocking the Potential for Public Libraries

AI and Libraries: Applications, Implications, and Possibilities

Opening Keynote at the Library 2.0 Mini-Conference



Introduction

The Library 2.0 mini-conference titled "AI and Libraries: Applications, Implications, and Possibilities" was held, featuring an opening keynote panel discussion. The conference was organized by San Jose State University's School of Information, with special thanks extended to Dr. Sandra Hirsh and Dr. Anthony Chow for their leadership. The keynote was moderated by Dr. Raymond Pun, an academic and research librarian at Alder Graduate School of Education and a prominent figure in the field.

Panelists

The panel consisted of esteemed professionals from various library settings:

  • Ida Mae Craddock: School Librarian at Albemarle County Public Schools' Community Lab Schools in Virginia.
  • Dr. Brandy McNeil: Deputy Director of Programs and Services at the New York Public Library.
  • Dr. Leo Lo: Dean and Professor of the College of University Libraries and Learning Sciences at the University of New Mexico.

AI in Different Library Contexts

Public Libraries

Dr. Brandy McNeil discussed how public libraries are integrating AI to enhance both internal and external operations. Key applications include:

  • Automating FAQs and email responses.
  • Assisting with customer complaints and inquiries.
  • Creating curriculum outlines and scheduling.
  • Cataloging books and ensuring data accuracy.
  • Offering information literacy classes on AI basics.

She highlighted the establishment of an AI committee at the New York Public Library, modeled after the Library of Congress's phases of AI (understanding, experimenting, and implementing). The committee explores AI tools like Whisper AI and Devin (an "AI software engineer"), and collaborates with institutions like the Library of Congress.

School Libraries

Ida Mae Craddock shared insights from the school library perspective, noting that school librarians are often the first to encounter and integrate new technologies. AI is being used for:

  • Generating essays and leveling texts to match student reading levels.
  • Translating materials to make curriculum accessible to non-native English speakers.
  • Creating custom educational materials quickly.
  • Processing data and scheduling.

She emphasized the importance of policies guiding AI use in schools, particularly regarding student data privacy and compliance with laws like FERPA.

Academic Libraries

Dr. Leo Lo discussed the exploration of AI in academic libraries, particularly generative AI. The University of New Mexico initiated a GPT-4 exploration program involving staff from different units with varying levels of AI expertise. Applications included:

  • Generating alt text for images and editing bibliographies.
  • Developing machine-readable data management plans.
  • Facilitating staff-patron interactions using AI-generated templates and FAQs.
  • Using AI for cataloging and metadata management.
  • Assisting with administrative tasks like scheduling and email drafting.

Dr. Lo emphasized the importance of experimenting with AI to discover its potential benefits and limitations within the academic library context.

Popular AI Tools and Applications

The panelists discussed various AI tools being utilized in their respective settings:

Tools in Public Libraries

  • ChatGPT: Used for a variety of tasks, with some staff using the paid version for advanced features.
  • Canva Magic Studio: For creating promotional materials and program flyers.
  • Midjourney and Stable Diffusion: Image generation tools.
  • Microsoft Copilot and Google's Duet AI: For productivity and note-taking features.
  • Otter AI: For transcription and translation services.
  • Quick Draw by Google and Goblin Tools: For educational demonstrations of AI capabilities.
  • Adobe Firefly and Character.ai: For creative and interactive experiences.

Tools in School Libraries

  • ChatGPT: For natural language processing tasks and assisting students in generating research topics.
  • BigHugeLabs Image Editor: For easy image editing tasks.
  • Diffit: For leveling texts and generating practice questions aligned with testing cultures in schools.
  • Google Immersive Translate and Rask AI: For translating materials to support multilingual students.
  • OpenAI Codex and TabNine: For coding and creating custom AI models to process specific data.

Tools in Academic Libraries

  • ChatGPT and GPT-4: For various research and administrative tasks.
  • Claude from Anthropic and Google Bard: Alternative AI models for exploration.
  • Perplexity AI: A tool that could potentially change information discovery processes.
  • Scite.ai and Kendra: Research-oriented models for academic purposes.
  • Elsevier's Scopus AI: An AI developed by publishers to assist with academic research.

Concerns and Ethical Considerations

Policy and Privacy Issues

The panelists emphasized the importance of policies guiding AI use, especially concerning data privacy, equity, and access. Key points included:

  • Ensuring student data privacy in compliance with laws like FERPA.
  • Addressing the digital divide and information privilege associated with access to AI tools.
  • The need for clear institutional policies to guide AI use in educational settings.

Copyright and Intellectual Property

The discussion highlighted significant concerns regarding AI's impact on copyright and intellectual property:

  • Ongoing lawsuits against AI companies for copyright infringement and the use of copyrighted materials in training data.
  • The complexity of citing AI-generated content and the ethical implications of using AI outputs in academic work.
  • The need for balanced approaches to protect creators' rights while allowing AI to be used for research and educational purposes.

Bias, Equity, and Labor Practices

Other concerns included:

  • Biases present in AI models due to the data they are trained on, affecting marginalized communities.
  • Environmental impacts of large data centers required for AI processing.
  • Labor practices related to content moderation and the underpaid workforce behind AI technologies.

Resources and Staying Informed

The panelists shared various resources for librarians and professionals to stay updated on AI developments:

  • Attending conferences and workshops, such as those hosted by the Public Library Association and the American Library Association.
  • Following technology news outlets like The Verge, Mashable, Wired, CNET, and MIT Technology Review.
  • Engaging with local tech platforms and staying informed about funding opportunities and industry trends.
  • Reading reports from organizations like the Pew Research Center and the Center for an Urban Future.
  • Following thought leaders and experts in the field on social media and professional networks.
  • Utilizing library-specific publications like School Library Journal and Knowledge Quest.
  • Listening to relevant podcasts and webinars, such as those offered by Choice 360 and New York Times' "Hard Fork."

Impact on Library Workforce and Future Outlook

The panelists concluded with reflections on how AI might impact the library workforce:

  • Ida Mae Craddock expressed optimism that AI would not replace school librarians but would change certain aspects of the job, emphasizing the irreplaceable role of librarians in teaching critical thinking and fostering a love of reading.
  • Dr. Leo Lo highlighted the importance of upskilling and reskilling, suggesting that AI would change job functions rather than eliminate positions. He mentioned efforts to develop AI competencies for library workers through organizations like ACRL.
  • Dr. Brandy McNeil noted that while AI might not replace people, it could replace those who do not know how to use it effectively. She emphasized the emergence of new job roles like prompt engineering and the need for library professionals to adapt.

Conclusion

The opening keynote of the Library 2.0 mini-conference provided valuable insights into the current state and future possibilities of AI in various library contexts. The panelists highlighted both the practical applications and the ethical considerations that come with integrating AI into library services. Key takeaways include:

  • The transformative potential of AI to enhance library operations, accessibility, and user engagement.
  • The critical importance of policies, ethical considerations, and ongoing dialogue to navigate challenges related to privacy, equity, and intellectual property.
  • The need for library professionals to stay informed, adapt to new technologies, and continue their role as educators and facilitators in an evolving information landscape.

The conference emphasized that while AI presents significant opportunities for innovation, it also requires thoughtful implementation and a commitment to addressing its broader societal impacts.

Additional Information

The panelists encouraged attendees to participate in upcoming sessions of the mini-conference and to engage with resources and networks to further explore AI's role in libraries.

Revolutionizing Research: A Look at AI and Data Innovations in Higher Education

New AI and Data Innovations in the Classroom: A Roundtable Discussion

Presented by Miraj Berry, Brian Cooper, Josh Nicholson, and Joe Karaganis



Introduction

The Charleston Library Conference hosted a virtual roundtable discussion titled "New AI and Data Innovations in the Classroom". The session brought together experts in the field of educational technology to discuss the application and usefulness of AI tools and databases in higher education settings. The panelists included:

  • Miraj Berry: Director of Business Development at Overton.
  • Brian Cooper: Associate Dean of Innovation and Learning at Florida International University (FIU) Libraries.
  • Josh Nicholson: Founder and CEO of Scite.
  • Joe Karaganis: Director of Open Syllabus.

The 30-minute session, followed by a 10-minute live Q&A, aimed to explore the use cases of three innovative tools—Overton, Scite, and Open Syllabus—and their impact on teaching, learning, and library services in higher education. The discussion focused on how these tools leverage AI and large data sets to enhance content discovery, support classroom instruction, and contribute to textbook affordability and collection development initiatives.

Overview of the Tools

Overton

Overton is the world's largest database of policy documents and grey literature. It indexes over 9.3 million policy documents from more than 1,800 sources across 32,000 organizations in 188 countries. The platform makes policy documents easily searchable and discoverable by indexing their full text and linking them to academic papers, relevant people, topics, and Sustainable Development Goals (SDGs).

Miraj explained that Overton's mission is to support evidence-based policymaking by providing a platform that allows users to explore the connections between policy documents and scholarly research. Overton helps surface content that might otherwise be difficult to find, putting existing content into perspective for researchers, students, and policymakers.

Open Syllabus

Open Syllabus is an open-source syllabus archive that collects and analyzes millions of syllabi from around the world. With a database of around 20 million syllabi, the platform uses AI and machine learning to extract structured information from these documents, such as course descriptions, reading lists, and learning outcomes.

Joe highlighted that Open Syllabus aims to make the intellectual backdrop of teaching more accessible. By aggregating syllabi at scale, the platform provides insights into what is being taught, how subjects are structured, and which materials are considered central or peripheral in various fields. This information can inform curricular design, collection development, and OER (Open Educational Resources) adoption initiatives.

Scite

Scite is an AI-powered platform designed to help users better understand and evaluate research articles. By leveraging machine learning, Scite processes millions of full-text PDFs to extract citation statements, providing context on how and why an article, researcher, journal, or university has been cited.

Josh explained that Scite addresses challenges related to information overload and trust in scholarly communication. The platform offers a "next-generation citation index" that brings more nuance and context to citations, enabling users to discover, trust, evaluate, and use research more effectively. Scite also integrates with large language models to provide fact-checking and grounding against the scientific literature.

Challenges and Opportunities in Adopting AI Tools

Critical Evaluation and Adoption

The panelists discussed the importance of critically evaluating AI tools before adopting them in educational settings. Josh emphasized that while large language models like ChatGPT offer powerful capabilities, they can also produce untrustworthy or fabricated information. Therefore, it's crucial to implement guardrails, such as providing citations and allowing users to verify sources.

Joe added that the barriers to textual analysis have significantly decreased due to advancements in AI and machine learning. This democratization means that specialized capabilities are now accessible to a broader audience, but it also raises questions about data aggregation, ethical considerations, and the responsible use of AI in education.

Supporting Staff and Students

Brian shared insights from the librarian's perspective, highlighting the challenges and initiatives at FIU in supporting textbook affordability and collection development. He noted that librarians play a neutral role in fostering AI literacy among students and faculty. By creating resources like LibGuides and engaging with faculty liaisons, libraries can help navigate the complexities of AI and digital tools.

The panelists agreed that it's essential to provide advice, training, and support for staff and student consumption of these tools. This includes understanding where these technologies might be useful, testing them, and finding possible ways to package them for educational purposes.

Feedback Channels and Collaboration

Effective adoption of AI tools requires collaboration among various stakeholders, including students, teachers, librarians, and technology vendors. The panelists discussed the importance of establishing feedback channels to gather input from users and to refine the tools based on real-world needs.

Josh mentioned that libraries have a critical role in guiding researchers and students through the suite of available tools, helping them understand the strengths and limitations of each. By being proactive and embracing these technologies, libraries can better support their communities in an era of rapid technological change.

Use Cases and Impact

Overton's Application in Policy Research

Miraj highlighted how Overton supports evidence-based policymaking by making grey literature and policy documents more accessible. Researchers and students can discover policy documents related to their field of study, explore citations between policy and academic literature, and gain a broader understanding of the policy landscape.

This accessibility enables users to incorporate policy perspectives into their research and teaching, fostering a more interdisciplinary approach to education.

Open Syllabus and Curriculum Development

Joe discussed how Open Syllabus aids in curriculum development and OER adoption. By analyzing syllabi at scale, the platform can identify commonly assigned materials, trends in subject matter, and gaps in available resources. This information can inform collection development decisions and help educators select materials that align with their instructional goals.

Brian shared that FIU is leveraging Open Syllabus to map out peer-reviewed OER materials aligned with classes nationwide. By correlating these with existing classes at FIU, faculty can be informed about OER options that their peers are using, promoting textbook affordability and enhancing student success.

Scite's Role in Research and Education

Josh explained that Scite helps address the challenges of information overload and the need for trustworthy sources. By providing context to citations and integrating with large language models, Scite allows users to fact-check information and understand the credibility of sources more effectively.

In educational settings, Scite can assist students in starting quality research papers by guiding them to relevant and reliable sources, thereby enhancing the research and learning process.

The Role of Libraries and Vendors

Libraries as Facilitators

Brian emphasized that libraries are in a unique position to bridge the gap between technology and users. By engaging in new and novel ways with their constituencies, libraries can support the adoption of AI tools, promote AI literacy, and contribute to student and faculty success.

He highlighted the potential for libraries to expand their involvement in areas like institutional effectiveness and accreditation by leveraging data and insights from tools like Open Syllabus and Scite.

Vendor Collaboration

The panelists agreed that collaboration between libraries and vendors is essential for maximizing the benefits of AI tools. Vendors can support libraries by providing data, integrating with existing systems, and offering solutions that address specific institutional needs.

Miraj mentioned Overton's commitment to being a responsible data provider, focusing on ethical considerations and user needs. Josh added that understanding how these tools can be used responsibly and developing training materials are critical steps in ensuring their effective adoption.

Conclusion

The roundtable discussion highlighted the transformative potential of AI and data innovations in the classroom and library services. By leveraging tools like Overton, Open Syllabus, and Scite, educational institutions can enhance teaching and learning experiences, support evidence-based research, promote textbook affordability, and foster AI literacy among students and faculty.

The panelists underscored the importance of critically evaluating these tools, providing support and training, and fostering collaboration among stakeholders. Libraries, in particular, have a pivotal role in guiding the adoption of AI technologies and ensuring they are used ethically and effectively.

As the landscape of educational technology continues to evolve, ongoing dialogue and partnerships will be crucial in addressing challenges and harnessing opportunities to improve education in the digital age.

From Theory to Practice: How Educators are Using Packback to Boost Student Engagement

Instructional AI: A Master Class in Packback Adoption and Integration

Presented by Devon McGuire and Juliet Rogers



Introduction

In a recent webinar titled "Instructional AI: A Master Class in Packback Adoption and Integration", educators Devon McGuire and Juliet Rogers shared valuable insights into the integration of instructional AI tools in the classroom. The webinar aimed to guide teachers on effectively adopting Packback, an AI-powered platform designed to enhance student engagement, critical thinking, and writing skills.

Devon McGuire, the Director of Academic Innovation and Strategy at Packback, brought her extensive experience in educational technology to the session. Juliet Rogers, a veteran teacher with 24 years of experience at Pasadena Independent School District, provided a practical perspective based on her firsthand experience using Packback in her 10th-grade AVID (Advancement Via Individual Determination) elective class.

Understanding Instructional AI and Packback

Instructional AI refers to artificial intelligence tools specifically designed to augment the teaching and learning process. Unlike general AI applications, instructional AI focuses on enhancing the educational environment by supporting both instructors and students. Packback is one such platform that leverages AI to foster critical thinking and improve writing skills among students.

Devon explained that Packback has been utilizing AI since 2017, initially in higher education. Recognizing the opportunity to support students in developing college and career readiness skills, Packback formed a partnership with AVID, aligning with their mission to promote inquiry-based learning and student engagement.

Juliet Rogers' Journey with Packback

Juliet shared her motivation for incorporating Packback into her classroom. Despite her extensive teaching experience, she noticed that her students were struggling with writing, a critical skill for college success. She wanted a tool that could assist her students without turning her AVID elective into an English class.

After learning about Packback during an AVID training session, Juliet was eager to implement it. She appreciated that Packback didn't just correct student mistakes but also provided explanations, helping students learn from their errors. The platform's ability to handle tedious tasks like grading grammar, mechanics, and formatting freed Juliet to focus on more impactful teaching activities.

Implementing Packback in the Classroom

Weekly Routine and Integration with AVID Strategies

Juliet described how she integrated Packback into her weekly teaching routine. Every day, she began with a warm-up activity, and several times a week, this included a Packback assignment. The platform operates on a two-week rotation, allowing students ample time to engage with the material.

At the start of each week, students conducted a "weekly check-in" where they reviewed their grades and identified areas where they were struggling. Using this reflection, they crafted open-ended questions related to their core or college classes, aligning with AVID's Tutorial Request Form (TRF) process.

Juliet emphasized that the questions had to meet certain criteria, such as being open-ended and reaching a minimum "Curiosity Score" provided by Packback's AI. Students were also encouraged to include SAT vocabulary words in their posts, reinforcing their language skills.

Engaging in Inquiry-Based Discussions

Throughout the two-week period, students were required to respond to at least two of their peers' questions. Juliet guided them to choose classmates who hadn't received responses yet, promoting inclusivity and ensuring that all students received support.

The Packback platform's design encouraged students to engage in higher-order thinking, as they had to formulate thoughtful questions and provide substantive responses. This practice mirrored the collaborative and inquiry-based learning emphasized in AVID's strategies.

Leveraging Packback's Features

Curiosity Score and Leaderboard

The Curiosity Score is an AI-generated metric that evaluates the quality of student posts based on criteria like open-endedness, academic tone, and the use of credible sources. Juliet set a minimum Curiosity Score of 40 to encourage students to meet certain standards.
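
Packback's actual scoring algorithm is proprietary; the toy sketch below only illustrates the general idea described above, a rubric-style score with a minimum threshold like the 40 Juliet required. Every rule and weight here is a made-up stand-in, not Packback's method.

```python
# Hypothetical sketch of a rubric-based post score with a minimum threshold.
# All signals and weights are illustrative, not Packback's real criteria.

def curiosity_score(post: str) -> int:
    """Score a post 0-100 on a few crude rubric-like signals."""
    score = 0
    text = post.lower()
    if text.rstrip().endswith("?"):          # open-ended question
        score += 30
    if "http" in text or "source:" in text:  # cites a source (crude check)
        score += 30
    if len(post.split()) >= 25:              # substantive length
        score += 20
    if not any(w in text for w in ("lol", "idk")):  # academic tone (crude)
        score += 20
    return score

MIN_SCORE = 40  # the minimum Juliet set for her class

post = ("How might access to open educational resources change outcomes "
        "for first-generation college students? Source: AVID weekly reading.")
print(curiosity_score(post), curiosity_score(post) >= MIN_SCORE)
```

A real system would rely on trained language models rather than keyword checks, but the threshold mechanic works the same way: posts below the bar are flagged for revision before they count.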

The Leaderboard feature fostered a friendly competition among students, motivating them to improve their scores and engage more deeply with the content. Juliet noted that students became excited about their progress and the recognition they received.

Deep Dives for Writing Practice

In addition to the regular discussions, Juliet utilized Packback's "Deep Dives" feature for more extensive writing assignments. This tool allowed her to create custom rubrics focusing on aspects like word count, grammar, formatting, and flow. She could set specific requirements, such as using APA or MLA citation styles, which helped students prepare for college-level writing.

Juliet shared an example of a student whose writing significantly improved over time, demonstrating the effectiveness of the Deep Dives in enhancing students' writing skills. The AI provided detailed feedback, highlighting areas for improvement and guiding students through revisions.

AI-Powered Feedback and Plagiarism Detection

Packback's AI not only graded assignments but also provided constructive feedback. It pointed out issues like repetitive language or formatting errors, allowing students to learn and correct mistakes independently.

The platform also included plagiarism and AI content detection features. If a student's submission appeared to be copied or generated by AI, Packback would alert the student privately, giving them an opportunity to revise their work. This approach promoted academic integrity while educating students about ethical writing practices.

Impact on Students and Learning Outcomes

Improved Writing Skills and Confidence

Juliet observed that her students' writing abilities improved noticeably over time. The regular practice and immediate feedback helped them develop stronger grammar, vocabulary, and critical thinking skills. Students began to produce more substantive and thoughtful responses, moving beyond superficial answers.

Preparation for College Expectations

The use of Packback familiarized students with discussion boards, a common component in college courses. Juliet's students, including alumni who returned to share their experiences, reported that they felt more prepared and confident in their college classes due to their practice with Packback.

One former student expressed that Packback "saved me" in college, highlighting the platform's role in easing the transition to higher education's writing and discussion demands.

Challenges and Student Feedback

While there was an initial learning curve and some resistance from students unaccustomed to the platform, they came to recognize its benefits over time. Juliet noted that teenagers might grumble about extra work, but the growth in their skills and confidence eventually led to appreciation for the tool.

She also addressed the prevalent issue of academic dishonesty in the digital age. By integrating Packback, she provided a structured environment that discouraged cheating and emphasized the importance of original thought and effort.

Advice for Educators Considering Packback

Juliet encouraged other educators to embrace instructional AI tools like Packback, emphasizing the support and resources available. She highlighted the importance of setting clear expectations, integrating the platform into regular routines, and using it to reinforce existing instructional strategies like AVID's WICOR (Writing, Inquiry, Collaboration, Organization, Reading) framework.

Devon added that educators interested in adopting Packback should attend general training sessions to understand the foundational aspects and ensure they have the necessary district approvals, especially regarding data privacy agreements.

Conclusion

The webinar showcased how instructional AI, when thoughtfully integrated into the classroom, can significantly enhance student engagement, writing proficiency, and readiness for college-level work. Juliet's practical application of Packback in her AVID elective class provided a roadmap for other educators seeking to leverage technology to support their teaching goals.

By focusing on critical thinking, inquiry-based learning, and ethical practices, tools like Packback can play a crucial role in preparing students for the demands of higher education and beyond.

Resources and Next Steps

  • Training Sessions: Educators can access recordings and training sessions at packback.co/webinars to familiarize themselves with the platform.
  • Support Contacts: For assistance and onboarding, teachers can reach out to Packback representatives like Madison Shay.
  • District Approval: Ensure that district consent and data privacy agreements are secured before implementation to protect student information.
  • Integration with Curriculum: Align Packback activities with existing curricular frameworks like AVID to maximize effectiveness and reinforce learning objectives.

By embracing instructional AI, educators can provide their students with the tools and skills necessary to succeed in an increasingly digital and interconnected world.

Exploring the Intersection of AI and Archives: Key Insights from Experts

Opening Keynote: AI and Archives – Applications, Implications, and Possibilities

Presented by Ray Pun, Helen Wong Smith, and Thomas Padilla at the AI and Libraries 2 Mini Conference



Introduction

In the opening keynote of the "AI and Libraries 2: More Applications, Implications, and Possibilities" mini conference, Ray Pun welcomed attendees to an engaging session focused on the intersection of artificial intelligence (AI) and archives. The event built upon the previous month's conference, which saw over 16,000 sign-ups, and celebrated Ray Pun's recent election as President-Elect of the American Library Association (ALA).

Joining Ray Pun were two distinguished professionals in the field of archives:

  • Helen Wong Smith: President of the Society of American Archivists (SAA) and Archivist for University Records at the University of Hawaii at Mānoa.
  • Thomas Padilla: Deputy Director of Archiving and Data Services at the Internet Archive.

The session aimed to explore the current landscape of AI in archives, discuss ethical considerations, and examine how AI can make born-digital collections more accessible and usable.

The Professional Landscape of AI in Archives

Helen Wong Smith's Perspective

Helen emphasized that AI integration into the archival profession offers promising opportunities for enhancing efficiency, accessibility, and deriving new insights from archival collections. However, she also highlighted several challenges that require careful management:

  • Quality and Ethics: Ensuring that AI-generated metadata and content maintain the authenticity and trustworthiness of archival records.
  • Privacy: Navigating data privacy concerns when implementing AI technologies.
  • Professional Adaptation: The need for ongoing dialogue, research, and training to effectively integrate AI while preserving the nature of archival records.

Helen introduced the concept of "paradata," which involves capturing information about the procedures, tools, and individuals involved in creating and processing information resources. She stressed that paradata is essential for maintaining the authenticity, reliability, and usability of records in the context of AI-generated content.
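
To make the paradata concept concrete, here is a minimal sketch of what such a record might capture for one AI-assisted archival step. The field names and the tool name are illustrative assumptions, not a published standard.

```python
# Minimal sketch of a paradata record for an AI-assisted archival workflow.
# Field names are illustrative only; no standard schema is implied.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ParadataRecord:
    resource_id: str   # the archival item the AI touched
    procedure: str     # what was done (e.g. "AI-assisted subject tagging")
    tool: str          # which system produced the output
    tool_version: str
    operator: str      # the human who ran and reviewed the step
    rationale: str     # why this tool and method were chosen
    reviewed: bool     # whether a human verified the output
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = ParadataRecord(
    resource_id="records/2024/0117",
    procedure="AI-assisted subject tagging",
    tool="example-tagger",  # hypothetical tool name
    tool_version="1.2.0",
    operator="archivist@example.edu",
    rationale="Backlog reduction; tags reviewed before publication",
    reviewed=True,
)
print(json.dumps(asdict(record), indent=2))
```

The point of such a record is exactly what Helen describes: documenting not just the tool used but the reasons, methods, and human oversight behind each AI-generated output, so the resulting metadata remains trustworthy.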

Thomas Padilla's Perspective

Thomas noted the significant engagement of senior leadership in exploring the potential of AI within libraries and archives. He mentioned initiatives like the ARL (Association of Research Libraries) and CNI (Coalition for Networked Information) Task Force focused on scenario planning for AI in these fields.

However, Thomas expressed concerns about the for-profit capture of library and archival work, cautioning against over-reliance on specific products or proprietary technologies. He emphasized the importance of a more holistic and less product-centric approach to AI integration, suggesting that focusing on overarching frameworks and values would be more beneficial for the profession.

Ethical Considerations in AI and Archival Processing

Thomas Padilla's Insights

Thomas highlighted the ethical complexities that arise when AI perpetuates existing societal biases, referencing Safiya Noble's work on "Algorithms of Oppression." He argued that it's insufficient to accept these biases as inevitable and stressed the need for proactive responses to address inequities exacerbated by AI technologies.

He advocated for "bias management," suggesting that while subjectivity in archival description is unavoidable, it must be anchored in consistent values that prioritize human rights and historical understanding. Thomas also called for regulatory frameworks to provide clarity and consistency in ethical approaches to AI in archives.

Helen Wong Smith's Insights

Helen echoed the importance of addressing biases in AI-generated metadata and content. She raised concerns about AI's potential to perpetuate inaccuracies and misconceptions, particularly in generative AI that produces new content based on existing data.

She emphasized the necessity of codified record-keeping practices for creators using AI, referencing Jessica Bushey's work on AI-generated images as an emergent record format. Helen reiterated the importance of paradata in documenting not just the tools used but also the reasons, methods, and contexts in which they are applied.

AI and Born-Digital Archives

Enhancing Accessibility and Usability

Helen outlined several ways AI can improve access to born-digital collections:

  • Automating metadata creation
  • Content classification
  • Natural language processing
  • Image recognition
  • Enhanced search capabilities
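
As a toy illustration of the first item above, automating metadata creation, the sketch below extracts candidate subject keywords by frequency. Production systems use trained NLP models; this standard-library version only shows the shape of the task, and the stopword list and thresholds are arbitrary assumptions.

```python
# Toy keyword extraction for automated metadata creation. Real archival
# systems use trained NLP models; this sketch just counts frequent terms
# after dropping stopwords and very short words.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "for", "on", "is",
             "are", "this", "that", "with", "by", "from", "as", "it"}

def candidate_keywords(text: str, n: int = 5) -> list[str]:
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [word for word, _ in counts.most_common(n)]

doc = ("Born-digital archives require new preservation strategies. "
       "Preservation of born-digital records depends on metadata, "
       "and metadata quality shapes discovery of archival records.")
print(candidate_keywords(doc, 3))
```

Even this crude approach hints at why the barriers listed below matter: candidate terms still need human review for accuracy, cultural sensitivity, and consistency with existing description practices.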

However, she also identified barriers to implementation, including:

  • Lack of knowledge and competencies within the archival profession
  • A lack of reliable technologies, along with interoperability issues
  • Economic constraints and personnel expertise
  • Data privacy and security concerns
  • Ethical considerations and cultural sensitivity

Addressing Backlogs with AI

Thomas discussed the potential of AI in addressing access issues for backlogged materials, particularly those in less commonly known languages. He highlighted the challenge of insufficient resources and the difficulty in hiring personnel with the necessary language skills to catalog these materials.

Thomas proposed leveraging AI advancements in world languages, possibly in collaboration with companies like Meta, to process and make these materials discoverable. He emphasized that minimal digitization combined with AI could help fulfill the access and preservation mission of archives.

Staying Informed and Managing Overwhelm

Thomas Padilla's Approach

Thomas acknowledged the feeling of overwhelm many professionals experience due to the rapid developments in AI. He recommended:

  • Adopting a utilitarian approach to AI as a tool
  • Grounding oneself in the history and values of the profession
  • Practicing careful curation of information sources
  • Utilizing platforms like LinkedIn for professional updates
  • Setting up curated Google Alerts for topics like AI in libraries, archives, and regulation

Helen Wong Smith's Resources

Helen suggested leveraging collaborative initiatives like the InterPARES Trust AI project, an international and interdisciplinary effort aimed at designing and developing AI to support trustworthy public records. The project's goals include:

  • Identifying AI technologies to address critical records and archival challenges
  • Determining the benefits and risks of using AI on records and archives
  • Ensuring archival concepts and principles inform the development of responsible AI
  • Validating outcomes through case studies and demonstrations

Helen emphasized the importance of engaging with such resources to stay informed and contribute to the ongoing dialogue around AI in archives.

Conclusion

The opening keynote provided valuable insights into the intersection of AI and archives, highlighting both opportunities and challenges. Ray Pun thanked the speakers and attendees, encouraging continued dialogue and exploration of these critical topics.

As AI technologies continue to evolve, the archival profession must navigate ethical considerations, enhance competencies, and develop strategies to leverage AI responsibly. By fostering collaboration, staying informed, and grounding practices in core values, archivists can effectively integrate AI to enhance accessibility and preservation.

Note: This summary is based on the opening keynote delivered by Ray Pun, Helen Wong Smith, and Thomas Padilla at the AI and Libraries 2 Mini Conference.