Wednesday, November 27, 2024

Protecting Your Privacy: The Risks of Sharing Sensitive Data with AI Tools

Deliberately Safeguarding Privacy and Confidentiality in the Era of Generative AI

Presented by Reed N. Hedges, Digital Initiatives Librarian at the College of Southern Idaho



Introduction

Reed N. Hedges delivered a presentation focusing on the critical importance of safeguarding privacy and confidentiality when using generative artificial intelligence (AI) tools. The session highlighted the potential risks associated with sharing sensitive data with AI models and provided actionable recommendations for users and professionals in the library and information science fields.

Personal Anecdotes and the Need for Caution

Hedges began by sharing several personal anecdotes illustrating how individuals unknowingly compromise their privacy by inputting sensitive information into AI tools:

  • A user who spends long hours chatting with GPT-4, sharing more personal information with the AI than with their own spouse.
  • An individual who input all their grandchildren's data into an AI to generate gift ideas.
  • A person who provided detailed demographic data of a local social group, including identifiable information, to plan activities and programs.
  • A user who entered their entire family budget into an AI tool for financial management.

These examples underscore the pressing need for users to be more conscientious about the data they share with AI systems.

Main Point: Do Not Input Sensitive Data into AI Tools

The core message of the presentation is clear: Users should not input any sensitive or personal data into prompts for generative AI tools. This includes business information, personal identifiers, or any data that could compromise individual or organizational privacy.

Privacy Policies and Data Handling by AI Tools

Hedges highlighted specific concerns regarding popular AI tools:

  • Google Bard: Its privacy notice explicitly states that human reviewers may read user conversations, underscoring the importance of anonymization.
  • OpenAI's ChatGPT: The terms of use address the protection of proprietary data. Users can hold a more privacy-conscious session by using OpenAI's Playground or by adjusting settings at privacy.openai.com/policies.
  • Perplexity AI: Is evasive when asked about its data handling and extrapolation practices.

The Challenge of Legal Recourse and Privacy Harms

The presentation delved into the limitations of current privacy laws:

  • Harm Requirement: Courts often require proof of harm, which is challenging when privacy violations involve intangible injuries like anxiety or frustration.
  • Impediments to Enforcement: The need to establish harm impedes the effective enforcement of privacy violations, allowing wrongdoers to escape accountability.
  • Lack of Adequate Legal Framework: The existing legal system lacks effective mechanisms to address privacy harms resulting from AI data handling.

Extrapolation and Inference by AI Tools

Generative AI models can infer additional information beyond what users explicitly provide:

  • Data Extrapolation: AI tools can infer behaviors, engagement patterns, and personal attributes from minimal data inputs.
  • Privacy Risks: Such extrapolation can inadvertently reveal sensitive information, including learning disabilities or mental health issues.
  • Example: Even generic prompts can lead an AI to infer personal details that compromise privacy.

Recommendations for Safeguarding Privacy

1. Transparency in Data Collection

  • Inform users about the data being collected and its intended use.
  • Only OpenAI's ChatGPT and Anthropic's Claude explicitly deny storing and extrapolating user data.

2. Informed Consent

  • Obtain explicit consent before collecting or using personal information.
  • Ensure users are aware of the implications of data sharing with AI tools.

3. Data Minimization

  • Limit data collection to what is absolutely essential for the task.
  • Avoid including unnecessary personal or demographic details in AI prompts.

4. Anonymization and Avoiding Sensitive Information

  • Do not include individual attributes or identifiers in AI prompts.
  • Use synthetic or generalized data where possible.
  • Be cautious even with public data, as ethical considerations remain.
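
To make the data minimization and anonymization recommendations concrete, here is a minimal sketch (in Python) of a local screening step that strips obvious identifiers from a prompt before it is sent to any AI service. The patterns and the `redact` helper are illustrative assumptions, not something presented in the session, and a real workflow would need far broader detection (names, addresses, student IDs, and so on).

```python
import re

# Illustrative patterns only -- a real screening step would need a much
# broader set of identifiers (names, addresses, student IDs, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace obvious identifiers with generic placeholders before the
    prompt ever leaves the local machine."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REMOVED]", prompt)
    return prompt

raw = "Plan activities for Jane Doe (jane.doe@example.org, 208-555-0142)."
print(redact(raw))
# Plan activities for Jane Doe ([EMAIL REMOVED], [PHONE REMOVED]).
```

Note that the name in the example still passes through untouched, which is exactly why automated redaction should supplement, not replace, the habit of leaving sensitive details out of prompts in the first place.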

5. Implement Strict Access and Use Controls

  • Enforce a "least privilege" access model, using tools that require minimal data access.
  • Ensure staff and users are clear on what data can be input into AI tools.

6. Use Human Content Moderation

  • Have prompts reviewed by multiple individuals to screen for privacy issues.
  • This process can also enhance quality control.

7. Be Skeptical of "Secure" AI Tools

  • Avoid promising or assuming that any AI tool is completely secure.
  • Recognize that even custom AI models can be vulnerable to exploitation.

Understanding AI Terms of Service

Users should familiarize themselves with the terms of service of AI tools:

  • Ownership of Content: OpenAI states that users own the input and, to the extent permitted by law, the output generated.
  • Responsibility for Data: Users are responsible for ensuring that their content does not violate any laws or terms.
  • Data Use: AI providers may use input data for training and improving models unless users opt out.

Final Thoughts on Privacy Practices

Hedges emphasized that traditional privacy protection principles remain relevant but must be applied more diligently in the context of AI:

  • Extra Vigilance: Users must be proactive in safeguarding their data when interacting with AI tools.
  • Data Breaches are Inevitable: Even with safeguards, data breaches can occur; therefore, minimizing shared data is crucial.
  • Reassessing the Need for AI: Consider whether using AI is necessary for a given task, especially when handling sensitive information.

Conclusion

In the era of generative AI, safeguarding privacy and confidentiality requires deliberate and informed actions by users and professionals. By understanding the risks, adhering to best practices, and educating others, individuals can mitigate potential harms associated with AI data handling.

References and Further Reading

  • Danielle Keats Citron and Daniel J. Solove: "Privacy Harms" - A comprehensive paper discussing the challenges in addressing privacy violations legally.
  • Shantanu Sharma: "Artificial Intelligence and Privacy" - An exploration of AI's impact on privacy, available on SSRN.
  • Nathan Hunter: "The Art of ChatGPT Prompting: A Guide to Crafting Clear and Effective Prompts" - A book providing insights into effective AI interactions.

Links to these resources were provided during the presentation for attendees interested in deepening their understanding of AI privacy concerns.

Bridging the Gap: The Role of Librarians in Facilitating AI Integration in Library Instruction

Faculty Attitudes Toward Librarians Introducing AI in Library Instruction Sessions

Presented by Beth Evans, Associate Professor at Brooklyn College, City University of New York



Introduction

Beth Evans delivered a presentation discussing the role of librarians in introducing artificial intelligence (AI) tools in library instruction sessions. With over 30 years of experience at Brooklyn College's library, she explored faculty perspectives on the use of AI in academic settings and the potential implications for library instruction.

Background

Evans noted that AI technologies like ChatGPT have the potential to augment, support, or even replace certain library functions, such as reference services, instruction, and technical services. Recognizing the transformative impact of AI, she sought to understand faculty attitudes toward AI and whether they would welcome librarians incorporating AI tools into their instruction sessions.

Research Methodology

In the fall of 2023, Evans conducted a survey targeting faculty members at Brooklyn College. Key aspects of the survey included:

  • Distributed to 199 faculty members.
  • Received 74 responses, representing a response rate of approximately 37%.
  • Respondents came from various departments, with the largest representation from English, History, and Sociology.
  • Questions focused on faculty's introduction of AI in their courses, their attitudes toward AI, and their openness to librarians discussing AI in instruction sessions.

Survey Findings

Faculty Introduction of AI in Courses

Evans explored how faculty members addressed AI in their teaching:

  • Proactive Introduction: Some faculty included AI tools in their syllabi, assignments, or class discussions.
  • Student-Initiated Discussions: In a few cases, students brought up AI topics during classes.
  • No Introduction: A portion of faculty did not introduce AI topics at all.

Methods of Introducing AI

Among faculty who addressed AI:

  • Rule Setting in Syllabi: Establishing guidelines on AI usage in course policies.
  • Class Discussions: Engaging students in conversations about AI's role and impact.
  • Assignments Involving AI: Incorporating AI tools as part of coursework to critically evaluate their utility.

Faculty Attitudes Toward AI

Faculty responses reflected a spectrum of attitudes:

1. Prohibitive

Some faculty strictly prohibited the use of AI tools, expressing concerns about academic integrity and potential threats to human creativity and critical thinking.

2. Cautionary

Others cautioned students about relying on AI, highlighting limitations and encouraging transparency if AI tools were used.

3. Preventative

Certain faculty designed assignments that were difficult or impossible to complete using AI tools, thereby discouraging their use.

4. Proactive Utilization

A group of faculty embraced AI, integrating it into their teaching to enhance learning outcomes:

  • Using AI for media literacy discussions.
  • Employing AI to improve cover letters in business courses.
  • Assigning comparative analyses between AI-generated content and traditional research tools like PubMed.

Faculty Concerns About Librarians Introducing AI

When asked whether they were concerned about librarians introducing AI in library instruction sessions:

  • Majority Not Concerned: Most faculty members were open to librarians discussing AI tools.
  • Supportive of Librarian Expertise: Many acknowledged librarians as information experts capable of providing balanced and ethical guidance on AI.
  • Strong Opposition: A minority expressed strong opposition, fearing AI as a threat to human flourishing and academic integrity.

Additional Faculty Comments

Faculty provided further insights:

Ambivalence and Hesitation

  • Some were uncertain about AI's role and expressed a need for more understanding before fully integrating it.
  • Concerns about keeping pace with rapidly evolving technology and its implications for cheating and academic dishonesty.

Recognizing the Inevitable Presence of AI

  • Acknowledgment that AI is prevalent and students need to be educated about its use.
  • Emphasis on not burying heads in the sand and preparing students for real-world applications where AI is utilized.

Desire for Collaboration with Librarians

  • Faculty expressed interest in workshops and collaborations led by librarians to explore AI tools constructively.
  • Appreciation for librarians' efforts to assist both students and faculty in understanding AI's prevalence and uses.

Conclusion

Beth Evans concluded that while faculty attitudes toward AI vary widely, there is significant openness and even enthusiasm for librarians to take an active role in introducing and educating about AI tools in library instruction sessions. Librarians are viewed as information experts well-equipped to navigate the ethical, practical, and pedagogical aspects of AI in academic settings.

Implications for Librarians

Based on the survey findings:

  • Librarians have an opportunity to lead in AI literacy education, providing balanced perspectives on AI tools.
  • Collaboration with faculty is essential to ensure that AI integration aligns with course objectives and academic integrity policies.
  • There is a need to address concerns and misconceptions about AI, tailoring approaches to different disciplines and faculty attitudes.

Contact Information

For further information or collaboration opportunities, attendees were invited to contact Beth Evans directly.

Note: The final slide of the presentation included an AI-generated image using the tool "Tome" with the theme "Ocean."

Navigating the Intersection of AI and Information Literacy: Essential Competencies for Librarians

Competencies for the Use of Generative AI in Information Literacy Instruction

Presented by Paul Pival, Librarian at the University of Calgary



Introduction

During the Library 2.0 Mini-Conference on AI and Libraries, Paul Pival delivered a presentation titled "Competencies for the Use of Generative AI in Information Literacy Instruction." The session focused on identifying the essential competencies that librarians should possess to effectively incorporate generative artificial intelligence (AI) into information literacy instruction.

Frameworks vs. Competencies

Paul began by distinguishing between frameworks and competencies. While frameworks serve as blueprints outlining how various components fit together (analogous to building a house), competencies are the specific skills and knowledge required to execute those plans (the materials needed to build the house).

He referenced the Association of College and Research Libraries (ACRL) Framework for Information Literacy for Higher Education, noting that it is broad enough to encompass generative AI. He highlighted that efforts are underway, led by professionals like Dr. Leo Lo, to update the framework to explicitly address generative AI.

ACRL Framework and Generative AI

Paul discussed how the six frames of the ACRL Framework relate to generative AI:

  1. Authority is Constructed and Contextual: Emphasizing the importance of assessing content critically and acknowledging personal biases when evaluating AI-generated information.
  2. Information Creation as a Process: Understanding how large language models (LLMs) generate content and accepting the ambiguity in emerging information formats.
  3. Information Has Value: Recognizing the need to cite AI-generated content appropriately and verifying the accuracy of AI-provided citations.
  4. Research as Inquiry: Utilizing AI tools to break down complex problems and enhance inquiry-based learning.
  5. Scholarship as Conversation: Engaging in dialogues with AI tools, understanding that they are conversational agents rather than traditional search engines.
  6. Searching as Strategic Exploration: Acknowledging that searching is iterative and that AI tools complement but do not replace traditional academic databases.

Essential Competencies for Librarians

Paul proposed four key competencies that librarians should develop to effectively use generative AI in information literacy instruction:

  1. Understanding How Generative AI Works:
    • Familiarity with the leading AI models, referred to as "Frontier Models," including GPT-4, Google's Gemini 1.0, and Anthropic's Claude 3.
    • Investing time (at least 10 hours per model) to become proficient with their nuances.
    • Recognizing accessibility issues, such as subscription costs and geographical restrictions, which contribute to the digital divide.
  2. Recognizing Bias in AI Models:
    • Understanding that AI models are trained on vast internet data, including biased and harmful content.
    • Acknowledging that the programming and training data may not represent diverse worldviews.
    • Being aware of potential overcorrections and content filtering issues.
  3. Identifying and Managing Hallucinations:
    • Recognizing that AI models may generate false or fabricated information, including non-existent citations.
    • Understanding the concept of "hallucinations" in AI and their implications for information accuracy.
    • Exploring solutions like Retrieval Augmented Generation (RAG) to mitigate hallucinations by incorporating domain-specific knowledge bases (see the sketch following this list).
  4. Ethical Considerations:
    • Evaluating the ethical implications of using AI tools, including environmental impacts and labor practices.
    • Understanding legal issues related to copyright and content usage.
    • Considering the potential for AI tools to disseminate disinformation.
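
As a rough illustration of the Retrieval Augmented Generation approach mentioned under the third competency, the sketch below retrieves the passages from a small local knowledge base that best match a question and instructs the model to answer only from them. The keyword-overlap scoring and the `call_llm` placeholder are simplifications assumed for this example; production RAG systems use embedding-based retrieval and a real model API.

```python
# A toy Retrieval Augmented Generation (RAG) pipeline: retrieve passages
# from a local knowledge base, then ground the model's answer in them.

KNOWLEDGE_BASE = [
    "The ACRL Framework has six frames, including 'Information Has Value'.",
    "Library hours during exams are 7 a.m. to 2 a.m.",
    "Interlibrary loan requests are usually filled within five business days.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank passages by naive keyword overlap with the question."""
    terms = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda passage: len(terms & set(passage.lower().split())),
        reverse=True,
    )
    return scored[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (OpenAI, Claude, a local model, etc.)."""
    return "<model response>"

def answer(question: str) -> str:
    # Prepend the retrieved passages so the model answers from supplied
    # sources rather than from its training data alone.
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer using ONLY the context below. If the answer is not in the "
        f"context, say so.\n\nContext:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

print(answer("When is the library open during exams?"))
```

The point of the pattern is not the toy scoring but the constraint: by grounding the response in a curated, domain-specific knowledge base, the model has less room to fabricate answers or citations.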

Resources and Continuous Learning

Paul emphasized the importance of continuous learning and adaptability in AI literacy and shared several resources for further exploration during the session.

Conclusion

In conclusion, Paul highlighted that AI literacy is not static but evolves with technological advancements. He urged librarians to:

  • Educate themselves on generative AI tools and their implications.
  • Integrate AI competencies within existing information literacy frameworks.
  • Stay informed about ethical considerations and emerging issues.
  • Promote continuous learning to adapt to the rapidly changing AI landscape.

By developing these competencies, librarians can better serve their patrons and help navigate the complexities introduced by generative AI in information literacy instruction.

Contact Information

You can connect with Paul Pival on social media platforms under the handle @ppival.

AI in Libraries: Unlocking the Potential for Public Libraries

AI and Libraries: Applications, Implications, and Possibilities

Opening Keynote at the Library 2.0 Mini-Conference



Introduction

The Library 2.0 mini-conference titled "AI and Libraries: Applications, Implications, and Possibilities" was held, featuring an opening keynote panel discussion. The conference was organized by San Jose State University's School of Information, with special thanks extended to Dr. Sandra Hirsh and Dr. Anthony Chow for their leadership. The keynote was moderated by Dr. Raymond Pun, an academic and research librarian at Alder Graduate School of Education and a prominent figure in the field.

Panelists

The panel consisted of esteemed professionals from various library settings:

  • Ida Mae Craddock: School Librarian at Albemarle County Public Schools' Community Lab Schools in Virginia.
  • Dr. Brandy McNeil: Deputy Director of Programs and Services at the New York Public Library.
  • Dr. Leo Lo: Dean and Professor of the College of University Libraries and Learning Sciences at the University of New Mexico.

AI in Different Library Contexts

Public Libraries

Dr. Brandy McNeil discussed how public libraries are integrating AI to enhance both internal and external operations. Key applications include:

  • Automating FAQs and email responses.
  • Assisting with customer complaints and inquiries.
  • Creating curriculum outlines and scheduling.
  • Cataloging books and ensuring data accuracy.
  • Offering information literacy classes on AI basics.

She highlighted the establishment of an AI committee at the New York Public Library, modeled after the Library of Congress's phases of AI adoption (understanding, experimenting, and implementing). The committee explores AI tools like Whisper AI and Devin (an AI software engineer) and collaborates with institutions such as the Library of Congress.

School Libraries

Ida Mae Craddock shared insights from the school library perspective, noting that school librarians are often the first to encounter and integrate new technologies. AI is being used for:

  • Generating essays and leveling texts to match student reading levels.
  • Translating materials to make curriculum accessible to non-native English speakers.
  • Creating custom educational materials quickly.
  • Processing data and scheduling.

She emphasized the importance of policies guiding AI use in schools, particularly regarding student data privacy and compliance with laws like FERPA.

Academic Libraries

Dr. Leo Lo discussed the exploration of AI in academic libraries, particularly generative AI. The University of New Mexico initiated a GPT-4 exploration program involving staff from different units with varying levels of AI expertise. Applications included:

  • Generating alt text for images and editing bibliographies.
  • Developing machine-readable data management plans.
  • Facilitating staff-patron interactions using AI-generated templates and FAQs.
  • Using AI for cataloging and metadata management.
  • Assisting with administrative tasks like scheduling and email drafting.

Dr. Lo emphasized the importance of experimenting with AI to discover its potential benefits and limitations within the academic library context.

Popular AI Tools and Applications

The panelists discussed various AI tools being utilized in their respective settings:

Tools in Public Libraries

  • ChatGPT: Used for a variety of tasks, with some staff using the paid version for advanced features.
  • Canva Magic Studio: For creating promotional materials and program flyers.
  • Midjourney and Stable Diffusion: Image generation tools.
  • Microsoft Copilot and Google's Duet AI: For productivity and note-taking features.
  • Otter AI: For transcription and translation services.
  • Quick Draw by Google and Goblin Tools: For educational demonstrations of AI capabilities.
  • Adobe Firefly and Character.ai: For creative and interactive experiences.

Tools in School Libraries

  • ChatGPT: For natural language processing tasks and assisting students in generating research topics.
  • BigHugeLabs Image Editor: For easy image editing tasks.
  • Diffit: For leveling texts and generating practice questions aligned with testing cultures in schools.
  • Google Immersive Translate and Rask AI: For translating materials to support multilingual students.
  • OpenAI Codex and TabNine: For coding and creating custom AI models to process specific data.

Tools in Academic Libraries

  • ChatGPT and GPT-4: For various research and administrative tasks.
  • Claude from Anthropic and Google Bard: Alternative AI models for exploration.
  • Perplexity AI: A tool that could potentially change information discovery processes.
  • Scite.ai and Kendra: Research-oriented models for academic purposes.
  • Elsevier's Scopus AI: An AI developed by publishers to assist with academic research.

Concerns and Ethical Considerations

Policy and Privacy Issues

The panelists emphasized the importance of policies guiding AI use, especially concerning data privacy, equity, and access. Key points included:

  • Ensuring student data privacy in compliance with laws like FERPA.
  • Addressing the digital divide and information privilege associated with access to AI tools.
  • The need for clear institutional policies to guide AI use in educational settings.

Copyright and Intellectual Property

The discussion highlighted significant concerns regarding AI's impact on copyright and intellectual property:

  • Ongoing lawsuits against AI companies for copyright infringement and the use of copyrighted materials in training data.
  • The complexity of citing AI-generated content and the ethical implications of using AI outputs in academic work.
  • The need for balanced approaches to protect creators' rights while allowing AI to be used for research and educational purposes.

Bias, Equity, and Labor Practices

Other concerns included:

  • Biases present in AI models due to the data they are trained on, affecting marginalized communities.
  • Environmental impacts of large data centers required for AI processing.
  • Labor practices related to content moderation and the underpaid workforce behind AI technologies.

Resources and Staying Informed

The panelists shared various resources for librarians and professionals to stay updated on AI developments:

  • Attending conferences and workshops, such as those hosted by the Public Library Association and the American Library Association.
  • Following technology news outlets like The Verge, Mashable, Wired, CNET, and MIT Technology Review.
  • Engaging with local tech platforms and staying informed about funding opportunities and industry trends.
  • Reading reports from organizations like the Pew Research Center and the Center for an Urban Future.
  • Following thought leaders and experts in the field on social media and professional networks.
  • Utilizing library-specific publications like School Library Journal and Knowledge Quest.
  • Listening to relevant podcasts and webinars, such as those offered by Choice 360 and New York Times' "Hard Fork."

Impact on Library Workforce and Future Outlook

The panelists concluded with reflections on how AI might impact the library workforce:

  • Ida Mae Craddock expressed optimism that AI would not replace school librarians but would change certain aspects of the job, emphasizing the irreplaceable role of librarians in teaching critical thinking and fostering a love of reading.
  • Dr. Leo Lo highlighted the importance of upskilling and reskilling, suggesting that AI would change job functions rather than eliminate positions. He mentioned efforts to develop AI competencies for library workers through organizations like ACRL.
  • Dr. Brandy McNeil noted that while AI might not replace people, it could replace those who do not know how to use it effectively. She emphasized the emergence of new job roles like prompt engineering and the need for library professionals to adapt.

Conclusion

The opening keynote of the Library 2.0 mini-conference provided valuable insights into the current state and future possibilities of AI in various library contexts. The panelists highlighted both the practical applications and the ethical considerations that come with integrating AI into library services. Key takeaways include:

  • The transformative potential of AI to enhance library operations, accessibility, and user engagement.
  • The critical importance of policies, ethical considerations, and ongoing dialogue to navigate challenges related to privacy, equity, and intellectual property.
  • The need for library professionals to stay informed, adapt to new technologies, and continue their role as educators and facilitators in an evolving information landscape.

The conference emphasized that while AI presents significant opportunities for innovation, it also requires thoughtful implementation and a commitment to addressing its broader societal impacts.

Additional Information

The panelists encouraged attendees to participate in upcoming sessions of the mini-conference and to engage with resources and networks to further explore AI's role in libraries.

Revolutionizing Research: A Look at AI and Data Innovations in Higher Education

New AI and Data Innovations in the Classroom: A Roundtable Discussion

Presented by Miraj Berry, Brian Cooper, Josh Nicholson, and Joe Karaganis



Introduction

The Charleston Library Conference hosted a virtual roundtable discussion titled "New AI and Data Innovations in the Classroom". The session brought together experts in the field of educational technology to discuss the application and usefulness of AI tools and databases in higher education settings. The panelists included:

  • Miraj Berry: Director of Business Development at Overton.
  • Brian Cooper: Associate Dean of Innovation and Learning at Florida International University (FIU) Libraries.
  • Josh Nicholson: Founder and CEO of Scite.
  • Joe Karaganis: Director of Open Syllabus.

The 30-minute session, followed by a 10-minute live Q&A, aimed to explore the use cases of three innovative tools—Overton, Scite, and Open Syllabus—and their impact on teaching, learning, and library services in higher education. The discussion focused on how these tools leverage AI and large data sets to enhance content discovery, support classroom instruction, and contribute to textbook affordability and collection development initiatives.

Overview of the Tools

Overton

Overton is the world's largest database of policy documents and grey literature. It indexes over 9.3 million policy documents from more than 1,800 sources across 32,000 organizations in 188 countries. The platform makes policy documents easily searchable and discoverable by indexing their full text and linking them to academic papers, relevant people, topics, and Sustainable Development Goals (SDGs).

Miraj explained that Overton's mission is to support evidence-based policymaking by providing a platform that allows users to explore the connections between policy documents and scholarly research. Overton helps surface content that might otherwise be difficult to find, putting existing content into perspective for researchers, students, and policymakers.

Open Syllabus

Open Syllabus is an open-source syllabus archive that collects and analyzes millions of syllabi from around the world. With a database of around 20 million syllabi, the platform uses AI and machine learning to extract structured information from these documents, such as course descriptions, reading lists, and learning outcomes.

Joe highlighted that Open Syllabus aims to make the intellectual backdrop of teaching more accessible. By aggregating syllabi at scale, the platform provides insights into what is being taught, how subjects are structured, and which materials are considered central or peripheral in various fields. This information can inform curricular design, collection development, and OER (Open Educational Resources) adoption initiatives.

Scite

Scite is an AI-powered platform designed to help users better understand and evaluate research articles. By leveraging machine learning, Scite processes millions of full-text PDFs to extract citation statements, providing context on how and why an article, researcher, journal, or university has been cited.

Josh explained that Scite addresses challenges related to information overload and trust in scholarly communication. The platform offers a "next-generation citation index" that brings more nuance and context to citations, enabling users to discover, trust, evaluate, and use research more effectively. Scite also integrates with large language models to provide fact-checking and grounding against the scientific literature.

Challenges and Opportunities in Adopting AI Tools

Critical Evaluation and Adoption

The panelists discussed the importance of critically evaluating AI tools before adopting them in educational settings. Josh emphasized that while large language models like ChatGPT offer powerful capabilities, they can also produce untrustworthy or fabricated information. Therefore, it's crucial to implement guardrails, such as providing citations and allowing users to verify sources.
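
As one hedged sketch of such a guardrail, the snippet below checks whether every DOI cited in an AI-generated answer appears in a vetted reference list and flags anything it cannot verify. The `ALLOWED_DOIS` set and the sample answer are hypothetical, and this is far simpler than what a dedicated tool like Scite actually does; it only illustrates the principle of letting users verify sources rather than trusting model output at face value.

```python
import re

# Hypothetical list of DOIs the library has already vetted for an assignment.
ALLOWED_DOIS = {"10.1000/example.123", "10.5555/demo.456"}

# Loose DOI matcher for illustration purposes.
DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/[^\s)\]]+")

def check_citations(ai_answer: str) -> list[str]:
    """Return any DOIs cited in the answer that are not in the vetted list."""
    cited = set(DOI_PATTERN.findall(ai_answer))
    return sorted(cited - ALLOWED_DOIS)

sample_answer = (
    "Recent work (doi: 10.1000/example.123) supports this, as does "
    "another study (doi: 10.9999/fabricated.789)."
)
unverified = check_citations(sample_answer)
if unverified:
    print("Verify before trusting:", unverified)
# Verify before trusting: ['10.9999/fabricated.789']
```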

Joe added that the barriers to textual analysis have significantly decreased due to advancements in AI and machine learning. This democratization means that specialized capabilities are now accessible to a broader audience, but it also raises questions about data aggregation, ethical considerations, and the responsible use of AI in education.

Supporting Staff and Students

Brian shared insights from the librarian's perspective, highlighting the challenges and initiatives at FIU in supporting textbook affordability and collection development. He noted that librarians play a neutral role in fostering AI literacy among students and faculty. By creating resources like LibGuides and engaging with faculty liaisons, libraries can help navigate the complexities of AI and digital tools.

The panelists agreed that it's essential to provide advice, training, and support for staff and student consumption of these tools. This includes understanding where these technologies might be useful, testing them, and finding possible ways to package them for educational purposes.

Feedback Channels and Collaboration

Effective adoption of AI tools requires collaboration among various stakeholders, including students, teachers, librarians, and technology vendors. The panelists discussed the importance of establishing feedback channels to gather input from users and to refine the tools based on real-world needs.

Josh mentioned that libraries have a critical role in guiding researchers and students through the suite of available tools, helping them understand the strengths and limitations of each. By being proactive and embracing these technologies, libraries can better support their communities in an era of rapid technological change.

Use Cases and Impact

Overton's Application in Policy Research

Miraj highlighted how Overton supports evidence-based policymaking by making grey literature and policy documents more accessible. Researchers and students can discover policy documents related to their field of study, explore citations between policy and academic literature, and gain a broader understanding of the policy landscape.

This accessibility enables users to incorporate policy perspectives into their research and teaching, fostering a more interdisciplinary approach to education.

Open Syllabus and Curriculum Development

Joe discussed how Open Syllabus aids in curriculum development and OER adoption. By analyzing syllabi at scale, the platform can identify commonly assigned materials, trends in subject matter, and gaps in available resources. This information can inform collection development decisions and help educators select materials that align with their instructional goals.

Brian shared that FIU is leveraging Open Syllabus to map out peer-reviewed OER materials aligned with classes nationwide. By correlating these with existing classes at FIU, faculty can be informed about OER options that their peers are using, promoting textbook affordability and enhancing student success.

Scite's Role in Research and Education

Josh explained that Scite helps address the challenges of information overload and the need for trustworthy sources. By providing context to citations and integrating with large language models, Scite allows users to fact-check information and understand the credibility of sources more effectively.

In educational settings, Scite can assist students in starting quality research papers by guiding them to relevant and reliable sources, thereby enhancing the research and learning process.

The Role of Libraries and Vendors

Libraries as Facilitators

Brian emphasized that libraries are in a unique position to bridge the gap between technology and users. By engaging in new and novel ways with their constituencies, libraries can support the adoption of AI tools, promote AI literacy, and contribute to student and faculty success.

He highlighted the potential for libraries to expand their involvement in areas like institutional effectiveness and accreditation by leveraging data and insights from tools like Open Syllabus and Scite.

Vendor Collaboration

The panelists agreed that collaboration between libraries and vendors is essential for maximizing the benefits of AI tools. Vendors can support libraries by providing data, integrating with existing systems, and offering solutions that address specific institutional needs.

Miraj mentioned Overton's commitment to being a responsible data provider, focusing on ethical considerations and user needs. Josh added that understanding how these tools can be used responsibly and developing training materials are critical steps in ensuring their effective adoption.

Conclusion

The roundtable discussion highlighted the transformative potential of AI and data innovations in the classroom and library services. By leveraging tools like Overton, Open Syllabus, and Scite, educational institutions can enhance teaching and learning experiences, support evidence-based research, promote textbook affordability, and foster AI literacy among students and faculty.

The panelists underscored the importance of critically evaluating these tools, providing support and training, and fostering collaboration among stakeholders. Libraries, in particular, have a pivotal role in guiding the adoption of AI technologies and ensuring they are used ethically and effectively.

As the landscape of educational technology continues to evolve, ongoing dialogue and partnerships will be crucial in addressing challenges and harnessing opportunities to improve education in the digital age.

Contact Information