Translate

Search This Blog

Wednesday, November 27, 2024

The Impact of AI on Information Literacy: Introducing the "Artificial Intelligence and Information Literacy" Course

Planning a Credit-Bearing Course on AI and Information Literacy

Presented by Alyssa Russo and David Hurley from the University of New Mexico



Introduction

Alyssa Russo, Learning Services Librarian, and David Hurley, Discovery and Web Librarian at the University of New Mexico (UNM), shared their experiences and plans for developing a credit-bearing course titled "Artificial Intelligence and Information Literacy." This presentation delved into the rationale, structure, and pedagogical approaches they considered while designing the course, aiming to integrate generative AI tools like ChatGPT into information literacy instruction.

Background and Context

The advent of ChatGPT and similar generative AI technologies prompted librarians at UNM to reconsider their approaches to information literacy instruction. Recognizing the profound impact of AI on information systems and user behavior, Russo and Hurley sought to develop a course that not only addressed the practical use of AI tools but also engaged students in critical thinking about the social and ethical implications of these technologies.

At UNM, the library operates within a unique structure, being part of the Organizational Information and Learning Sciences (OILS) program. This affiliation allows librarians to teach credit-bearing courses that explore theoretical aspects of information literacy beyond traditional library instruction. Leveraging this opportunity, Russo and Hurley aimed to create a three-credit course that would encourage students to think critically about how AI reshapes information landscapes.

Inspirational Framework

The presenters drew inspiration from Barbara Fister's perspective on information literacy, emphasizing the need to understand the architectures, infrastructures, and belief systems that shape our information environment. They recognized that generative AI challenges conventional notions of authority, value, and the processes underlying information creation and dissemination.

Hurley noted parallels between current responses to AI and past reactions to disruptive technologies like Google and Wikipedia. In the early days of the web, librarians grappled with similar concerns about information quality and authority. By examining historical responses—ranging from rejection to revolutionary integration—they identified strategies to effectively incorporate AI into information literacy education.

Course Structure and Objectives

Utilizing the ACRL Framework

To provide a solid foundation, the course was structured around the Association of College and Research Libraries (ACRL) Framework for Information Literacy for Higher Education. Each of the six frames served as a module, allowing for a comprehensive exploration of core concepts. This approach also aligned well with the eight-week accelerated format of the course, providing sufficient time for introduction, in-depth exploration, and reflection.

Hybrid Learning Model

Recognizing the benefits of both in-person and online learning, the course was designed as a hybrid. Meeting twice a week, the first session would introduce key concepts and AI tools, while the second would be student-led, fostering a community of practice. This structure aimed to balance guided instruction with collaborative learning, encouraging students to share insights and take ownership of their learning process.

Target Audience and Enrollment

The course was intended for upper-division undergraduates who had prior college-level coursework. This prerequisite ensured that students possessed foundational academic skills, enabling them to engage deeply with complex topics and contribute meaningfully to discussions and projects.

Assignments and Activities

Researchers' Notebook

A central component of the course was the "Researchers' Notebook," an iterative assignment where students documented their evolving thoughts, questions, and interactions with AI tools. This notebook aimed to make the research process visible, emphasizing the development of inquiry skills and reflective practice. By capturing moments of discovery, frustration, and dialogue with AI, students could illustrate their understanding of information literacy concepts in a tangible way.

Module Deep Dive: Research as Inquiry

Focusing on the ACRL frame "Research as Inquiry," one module exemplified the course's pedagogical approach. The objectives were to have students view research as an open-ended exploration and to formulate increasingly sophisticated questions. Activities included:

  • Question Formulation Technique: Students engaged in generating, refining, and prioritizing questions related to AI. This collaborative exercise encouraged curiosity and critical thinking, serving as a model for ongoing inquiry throughout the course.
  • Walk and Talk Activity: Adapted from the University of Arizona's Atlas of Creative Tools, this exercise involved students pairing up and discussing prompts while walking around campus. Questions like "What is curiosity to you?" and "What challenges does AI face in understanding human questions?" facilitated deeper engagement and embodied learning.

Other Modules and Activities

While the presentation focused on one module in detail, Russo and Hurley outlined plans for other modules based on the remaining ACRL frames. These included activities such as:

  • Authority Is Constructed and Contextual: Exploring how authority is established in different information sources and how AI-generated content challenges traditional notions of authority.
  • Searching as Strategic Exploration: Comparing search strategies in traditional databases versus AI tools, emphasizing iteration and strategy refinement.
  • Information Has Value: Discussing the ethical, legal, and economic implications of AI-generated content, including issues of intellectual property and environmental impact.

Challenges and Reflections

Despite their thorough planning, Russo and Hurley faced challenges in promoting and enrolling students in the course. Both were on different types of leave during critical promotion periods, resulting in insufficient enrollment for the course to run as scheduled. Initially disappointed, they reconsidered and recognized that the course content remained relevant and valuable, even as the initial hype around AI began to settle.

They emphasized that the rapidly evolving nature of AI and its integration into various aspects of society make such a course timely and essential. By sharing their experience, they hoped to inspire others to develop similar courses or integrate these ideas into existing curricula.

Conclusion and Takeaways

Russo and Hurley's presentation highlighted the importance of adapting information literacy instruction to address the challenges and opportunities presented by generative AI. By framing the course around collaborative exploration and critical engagement, they aimed to empower students to navigate and contribute to the evolving information landscape.

Key takeaways from their experience include:

  • The value of integrating established frameworks (like the ACRL frames) with new technologies to provide structure and depth.
  • The effectiveness of hybrid learning models in fostering community and active participation.
  • The importance of reflective and process-oriented assignments, such as the Researchers' Notebook, in making the research process transparent and meaningful.
  • The need for flexibility and adaptability in course planning, acknowledging that challenges like enrollment and shifting student interests may arise.
  • The relevance of addressing ethical considerations, including environmental impacts and biases inherent in AI technologies.

Final Thoughts

While their course did not run as initially planned, Russo and Hurley remain optimistic about its potential and relevance. They encouraged other educators and librarians to consider similar approaches, emphasizing that the need for critical engagement with AI and information literacy is ongoing.

Their work serves as a valuable model for integrating emerging technologies into educational practices, fostering not only skill development but also critical awareness and ethical considerations among students.

Note: This summary is based on a presentation by Alyssa Russo and David Hurley on planning a credit-bearing course on AI and information literacy at the University of New Mexico.

The Ethics of AI: Navigating the Three Cs of Generative AI

Closing Keynote: The Three Cs of Generative AI in Libraries

Presented by Reed Hepler at the AI and the Libraries 2 Mini Conference



In the closing keynote of the "AI and the Libraries 2 Mini Conference: More Applications, Implications, and Possibilities," Reed Hepler, Digital Initiatives Librarian and Archivist at the College of Southern Idaho, shared valuable insights on the use of generative AI in educational and library settings. With experience spanning educational formats, library environments, and business training, Hepler delved into the ethical considerations and best practices surrounding generative AI tools.

Introduction

Hepler began by acknowledging the diverse perspectives educators and administrators hold regarding generative AI. He identified four primary viewpoints observed at his institution:

  1. Fear that student use of ChatGPT and similar tools creates new forms of unethical practices.
  2. Confidence that students wish to use ChatGPT effectively and constructively.
  3. Concern that generative AI undermines established systems and norms of online learning.
  4. Belief that ChatGPT can lead to innovative products and workflows enhancing instructional design and assessment.

Recognizing the need to address these concerns, Hepler introduced a framework he developed to guide ethical and effective use of generative AI: the "Three Cs."

The Three Cs of Generative AI

1. Copyright

Key Question: Who owns the rights to AI-generated products, and how are they created?

Hepler discussed the complexities of copyright in the context of generative AI, posing three critical questions:

  • What are the rights and responsibilities of the original creators whose works are used by AI?
  • What are the rights and responsibilities of users who employ AI tools?
  • Is generative AI an owner, a user, both, or neither in terms of copyright?

He clarified that copyright protects the expression of ideas in any medium and grants exclusive rights to the creator or copyright holder. However, devices, processes, ideas, public domain materials, works by government employees, and recipes cannot be copyrighted.

Hepler emphasized that the current copyright law requires human authorship for protection, raising questions about whether AI can be considered an author. He also highlighted the ongoing debates and legal challenges surrounding the fair use doctrine as it pertains to AI training on copyrighted materials.

He cited examples of copyright battles involving AI-generated works, such as "Zarya of the Dawn," and discussed the implications of using copyrighted content in AI prompts. He stressed the importance of respecting intellectual property rights and advised users to avoid inputting copyrighted material into AI tools unless they own the rights.

2. Citation

Key Question: How should AI tools and outputs be cited, and where did the information originate?

Noting the absence of standardized citation formats for AI-generated content, Hepler emphasized that the purpose of citation is to provide information about sources. He recommended including the following elements in any AI citation:

  • Tool name and version
  • Date and time of usage
  • Prompt, query, or conversation title
  • Name of the person who queried the AI
  • Link to the conversation or output, if possible

He provided an example of how to cite AI-generated content in APA style, suggesting that users include their name to acknowledge their role in the creation process. He stressed that users should engage in the editing and revision of AI outputs to ensure originality and accuracy.

3. Circumspection

Key Question: What hazards—moral, ethical, educational, or otherwise—should users manage when utilizing generative AI tools?

Hepler outlined several ethical issues associated with AI outputs, including:

  • Plagiarism
  • Biases
  • Repetitiveness and arbitrariness
  • Incorrect or misleading information
  • Lack of connection to external resources

He discussed privacy concerns, highlighting how AI tools can extrapolate personal data from user inputs, even when users attempt to minimize the information they provide. He emphasized that users should never input sensitive or confidential information into AI prompts.

Hepler recommended several practices to mitigate these risks:

  • Informing users about data collection and its purposes
  • Obtaining explicit consent for data usage
  • Limiting data collection to essential information (data minimization)
  • Implementing strict access and use controls
  • Anonymizing data in prompts

He also discussed the importance of quality control when using AI-generated content, advising users to:

  • Use AI tools for their intended purposes
  • Engage in best practices for prompting
  • Ask the AI for its sources and verify them
  • Find external resources to support AI-generated information
  • Analyze outputs for ethical issues, accessibility, and accuracy

Privacy and Ethical Considerations

Hepler delved deeper into privacy harms associated with AI, referencing works by legal scholars such as Danielle Keats Citron and Daniel J. Solove. He noted that privacy laws often require proof of harm, which can be difficult when dealing with intangible injuries like anxiety or frustration resulting from data breaches or misuse.

He highlighted that AI tools like ChatGPT have specific terms of use that assign users ownership of the outputs generated from their inputs. However, users are responsible for ensuring that their content does not violate any applicable laws.

Hepler stressed that despite best efforts, AI tools can still extrapolate personal data, underscoring the importance of being cautious with the information provided to these systems.

Conclusion and Recommendations

Concluding his keynote, Hepler provided a list of references and resources for further exploration of the topics discussed. He reiterated the need for libraries and educators to navigate the evolving landscape of generative AI thoughtfully, balancing innovation with ethical considerations.

He encouraged attendees to remain informed about developments in AI and copyright law, to respect intellectual property rights, and to engage in responsible use of AI tools. By adhering to the "Three Cs" framework—Copyright, Citation, and Circumspection—users can harness the benefits of generative AI while mitigating potential risks.

Final Thoughts

Hepler's presentation offered a comprehensive overview of the challenges and responsibilities associated with generative AI in libraries and education. His insights serve as a valuable guide for professionals seeking to integrate AI tools into their work ethically and effectively.

Note: This summary is based on the closing keynote delivered by Reed Hepler at the AI and the Libraries 2 Mini Conference.

The Real-World Harms of AI in Healthcare: A Closer Look

Ethical Considerations for Generative AI Now and in the Future

Presented by Dr. Kellie Owens, Assistant Professor in the Division of Medical Ethics at NYU Grossman School of Medicine



Dr. Kellie Owens delivered an insightful presentation on the ethical considerations surrounding generative AI, particularly relevant to medical librarians and professionals involved in data services. As a medical sociologist and empirical bioethicist, Dr. Owens focuses on the social and ethical implications of health information technologies, including the infrastructure required to support artificial intelligence (AI) and machine learning in healthcare.

Introduction

Dr. Owens began by situating herself within the broader discourse on AI ethics, acknowledging the prevalent narratives of both awe and panic that often dominate news coverage. She highlighted a split within the field between AI safety—which focuses on existential risks and future catastrophic events—and AI ethics, which concentrates on addressing current, tangible ethical concerns associated with AI technologies.

Referencing the "Pause Letter" signed by prominent figures like Yoshua Bengio and Elon Musk, which called for a six-month halt on training AI systems more powerful than GPT-4, Dr. Owens expressed skepticism about such approaches. She argued that while managing existential risks is important, it is crucial to focus on the real and already manifesting ethical issues that AI poses today.

Real-World Harms of AI in Healthcare

Dr. Owens provided examples of harms caused by AI tools in healthcare, emphasizing that these issues are not hypothetical but are currently affecting patients and providers. She cited instances where algorithms reduced the number of Black patients eligible for high-risk care management programs by more than half and highlighted biases in medical uses of large language models like GPT, which can offer different medical advice based on a patient's race, insurance status, or other demographic factors.

Framework for Ethical Considerations

Building her talk around the five key themes from the Biden administration's Office of Science and Technology Policy's "Blueprint for an AI Bill of Rights," Dr. Owens discussed:

  1. Safe and Effective Systems
  2. Algorithmic Discrimination Protections
  3. Data Privacy and Security
  4. Notice and Explanation
  5. Human Alternatives, Consideration, and Fallback

1. Safe and Effective Systems

Emphasizing the principle of "First, do no harm," Dr. Owens discussed the ethical imperative to ensure that AI tools are both safe and effective. She addressed the issue of AI hallucinations, where large language models generate false or misleading information that appears credible. In healthcare, such errors can have significant consequences.

She also touched on the problem of dataset shift, where AI models decline in performance over time due to changes in technology, populations, or behaviors. Dr. Owens highlighted the need for continuous monitoring and updating of AI systems to maintain their reliability and accuracy.

2. Algorithmic Discrimination Protections

Dr. Owens delved into the ethical concerns related to algorithmic bias and discrimination. She cited studies like "Gender Shades," which revealed that facial recognition technologies performed poorly on women, particularly women with darker skin tones. In the context of generative AI, she discussed how image generation tools can perpetuate stereotypes, such as depicting authoritative roles predominantly as men.

She highlighted instances where AI models like GPT-4 produced clinical vignettes that stereotyped demographic presentations, calling for comprehensive and transparent bias assessments in AI tools used in healthcare.

3. Data Privacy and Security

Addressing data privacy concerns, Dr. Owens discussed vulnerabilities like prompt injection attacks, where attackers manipulate AI models to reveal sensitive training data, including personal information. She emphasized the importance of protecting users from abusive data practices and ensuring that individuals have agency over how their data is used.

She also raised concerns about plagiarism and intellectual property violations, noting that generative AI models can reproduce copyrighted material without attribution, leading to potential legal and ethical issues.

4. Notice and Explanation

Dr. Owens stressed the importance of transparency and autonomy, arguing that users should be informed when they are interacting with AI systems and understand how these systems might affect them. She cited the example of a mental health tech company that used AI-generated responses without informing users, highlighting the ethical implications of such practices.

5. Human Alternatives, Consideration, and Fallback

Finally, Dr. Owens emphasized the necessity of providing human alternatives and the ability for users to opt out of AI systems. She underscored that while AI can offer efficiency, organizations must be prepared to address failures and invest resources to support those affected by them.

Key Takeaways

Dr. Owens concluded with several key insights:

  • Technology is Not Neutral: AI systems are socio-technical constructs influenced by human decisions, goals, and biases. Recognizing this is essential in addressing ethical considerations.
  • Benefits and Costs: It is crucial to weigh both the advantages and potential harms of AI applications, including issues like misinformation, environmental impact, and the perpetuation of biases.
  • What's Missing Matters: Considering the gaps in AI training data and the politics of what's excluded can provide valuable ethical insights.
  • Power Dynamics: Evaluating how AI shifts power structures is important. AI applications should aim to empower marginalized communities rather than exacerbate existing inequalities.

Conclusion

Dr. Owens encouraged ongoing dialogue and critical examination of generative AI's ethical implications. She highlighted the role of professionals like medical librarians in shaping how AI is integrated into systems, emphasizing the need for intentional design, transparency, and a focus on equitable outcomes.

For those interested in further exploration, she recommended reviewing the "Blueprint for an AI Bill of Rights" and engaging with interdisciplinary approaches to AI ethics.

Note: This summary is based on a presentation by Dr. Kellie Owens on the ethical considerations of generative AI, particularly in the context of healthcare and data services.

Navigating the Intersection of AI and Copyright Law in Australia

AI and Copyright Law in Australia: Exploring Options and Challenges

Presentation by an expert on the intersection of AI and Australian copyright law.



Introduction

The speaker delves into the complexities of how Australian copyright law intersects with artificial intelligence (AI), particularly generative AI. The focus is on exploring practical options for Australia to balance AI innovation with the protection of human creators in the creative industries.

Key Premises

  1. Australian Copyright Law is Unique: Australia's legal framework differs significantly from other jurisdictions, impacting how AI and copyright issues are addressed.
  2. Room for Debate: There's flexibility in how international copyright principles apply to AI, allowing Australia to make deliberate choices about its legal stance.
  3. Desirable End State: The goal is to achieve both AI innovation and deployment in Australia, alongside thriving human creators and creative industries.
  4. Practical Realities Matter: Any legal approach must consider Australia's position in the global landscape and the types of AI activities likely to occur within the country.

Generative AI in Australia

The speaker emphasizes that generative AI isn't limited to global platforms like ChatGPT or Midjourney but also includes local applications such as government chatbots and educational tools. These smaller models, often built on larger ones, are integral to various sectors in Australia, including government services and businesses.

Five Options for Addressing AI and Copyright

  1. Strict Copyright Rules (Status Quo):
    • Maintains the current strong interpretation of copyright law.
    • Results in widespread potential infringement by businesses and government entities using AI.
    • Does not lead to compensation for creators due to training occurring overseas or behind closed doors.
    • Considered a "lose-lose" scenario with a chilling effect on AI development and deployment in Australia.
  2. Classic Common Law Compromise:
    • Attempts to balance interests through complex rules and conditional exceptions.
    • Could lead to a prolonged and complicated legal process with little practical benefit.
    • Risks stalling AI innovation due to legal uncertainties.
  3. Equitable Remuneration for Creators:
    • Proposes a remunerated copyright limitation for human creators whose works are used in AI training.
    • Involves collective management organizations and statutory licensing.
    • Faces challenges in valuation, distribution, and practical implementation.
  4. Lump Sum Levy on AI Systems:
    • Suggests imposing a levy on AI systems capable of producing literary and artistic works.
    • Aims to compensate creators for potential substitution effects (displacement of human labor).
    • Not strictly a copyright issue but more akin to models like the News Media Bargaining Code.
  5. Focus on Economic Loss and Market Effects:
    • Allows AI training on copyrighted data but permits rights holders to claim compensation if they can demonstrate economic loss.
    • Acknowledges the difficulty in proving loss and valuing it appropriately.
    • Highlights the complexity of linking copyright infringement to market harm in the AI context.

Challenges and Considerations

The speaker notes that many proposed solutions have significant drawbacks, particularly in terms of practicality and potential negative impact on AI innovation in Australia. Attempts to create a balanced compromise may result in prolonged legal battles and complex regulations that fail to satisfy any stakeholders fully.

Recommended Path Forward

The speaker suggests a pragmatic approach:

  • Address Mundane but Impactful Issues: Focus on areas where immediate improvements can be made, such as text and data mining exceptions, especially for sectors outside the core creative industries.
  • Reform Liability at the Deployment Stage: Modify laws to ensure that Australian firms using AI, particularly those adopting reasonable copyright safety measures, are not unduly liable for potential infringements.
  • Consider Non-Copyright Solutions for Creator Compensation: Explore mechanisms outside of copyright law, such as levies or funds, to address the displacement effects on human creators.
  • Implement Technical Copying Exceptions: Introduce exceptions that allow for necessary technical copying during AI training and deployment without infringing copyright.

Conclusion

The speaker concludes that while the intersection of AI and copyright law presents complex challenges, a practical and focused approach can help Australia navigate these issues effectively. By addressing specific areas where legal adjustments can facilitate AI innovation while minimizing harm to creators, Australia can work towards a more balanced and forward-looking legal framework.

Questions and Discussion

The presentation ends with an invitation for questions and further discussion on the topic, emphasizing the need for ongoing dialogue to refine and implement effective solutions.

Note: This summary is based on a presentation discussing the challenges and options for addressing AI and copyright law in Australia.

The Rise of AI and Its Impact on Organizational Trends

Leadership Trends and the Impact of AI: A Conversation with DBS and NeuroLeadership Institute

Featuring Dr. David Rock and Joan, Chief Learning Officer at DBS Group



In a recent session hosted by the NeuroLeadership Institute, Dr. David Rock and Joan, Chief Learning Officer at DBS Group, discussed current trends in organizations, the role of AI, and the importance of understanding human behavior in leadership.

Opening Remarks and Acknowledgments

The session began with an acknowledgment of the traditional custodians of the land, the Gadigal people of the Eora nation in Sydney, Australia. Participants from around the world joined the conversation, highlighting the global interest in leadership and organizational trends.

Introduction to the NeuroLeadership Institute

The NeuroLeadership Institute, led by Dr. David Rock, focuses on making organizations more human and high-performing through science. With operations worldwide, the institute advises a significant percentage of major companies, including 27% of the ASX 200 and 75% of the Fortune 100.

Celebrating DBS Group's Milestone

Joan shared exciting news that DBS Group has exceeded $100 billion in market capitalization. She expressed enthusiasm about discussing leadership and organizational trends with Dr. Rock, noting their decade-long partnership.

Current Organizational Trends and the Role of AI

When asked about trends in organizations today, Dr. Rock highlighted several key points:

  • Importance of Understanding Human Behavior: With the rise of artificial intelligence, understanding how humans function is becoming increasingly critical.
  • Relevance of Neuroscience Research: The NeuroLeadership Institute's 26 years of research is more pertinent than ever, especially in navigating the AI revolution.
  • AI and Leadership: Dr. Rock emphasized that as AI advances, the need to comprehend human leadership and behavior intensifies.

Looking Ahead

The conversation hinted at deeper discussions on leadership, learning innovation, and the challenges and opportunities presented by AI in organizational contexts.

Note: This summary is based on a session hosted by the NeuroLeadership Institute featuring Dr. David Rock and Joan, Chief Learning Officer at DBS Group.

he Emergence of AI in Academic Libraries: Transforming Student Research

Exploring AI in Academic Libraries: Insights from Librarians

Presentation by Kate Ganski and Heidi Anzano at UWM Libraries



In a recent session at the University of Wisconsin-Milwaukee (UWM), librarians Kate Ganski and Heidi Anzano discussed the evolving role of artificial intelligence (AI) in academic libraries and its impact on student research and information literacy.

Opening Discussion: AI in Today's World

The session began with an interactive discussion where participants shared their experiences and insights about AI over the past semester. Key points included:

  • Environmental Impact: Concerns about the significant server space and energy consumption required for AI technologies.
  • Accessibility and Control: Recognition that large companies may dominate AI development due to high costs.
  • Student Use of AI: Observations that students are using AI not just for cheating but also as a study aid, such as generating quizzes and summarizing chapters.
  • Limitations of AI: Acknowledgment that AI tools can make mistakes and may not be effective in specialized or obscure fields.
  • Comparison to Wikipedia: Similarities in how students use AI and Wikipedia as reference tools to support their learning.

Librarians' Expertise and the Role of AI

Kate and Heidi highlighted the expertise that librarians bring to the table, especially in terms of information literacy and ethics. They discussed how AI is changing the landscape of information discovery and the importance of guiding students in this new environment.

Key areas of focus included:

  • Information Abundance: With the proliferation of AI-generated content, librarians can help students navigate and critically evaluate the vast amount of information available.
  • Information Literacy Framework: They introduced the Association of College and Research Libraries (ACRL) Framework for Information Literacy, which includes six core concepts:
    • Authority Is Constructed and Contextual
    • Information Creation as a Process
    • Information Has Value
    • Research as Inquiry
    • Scholarship as Conversation
    • Searching as Strategic Exploration
  • AI's Impact on Research Practices: Discussion on how AI tools are changing research methodologies and the need to adapt teaching strategies accordingly.

Interactive Reflection and Exercises

Participants engaged in reflection activities to identify core research practices and skills within their disciplines. They considered how these practices are being disrupted or enhanced by AI and where to focus students' critical thinking in this new context.

Challenges and Considerations

Several challenges associated with AI in academic settings were discussed:

  • Bias and Representation: AI tools may amplify existing biases in scholarly literature, underrepresenting marginalized voices.
  • Evaluation of AI-generated Content: The importance of teaching students to critically assess the reliability and validity of AI-generated information.
  • Ethical Use of AI: Addressing concerns related to privacy, data usage, and intellectual property rights.

Conclusion

The session concluded with a call to reevaluate traditional research models in light of AI advancements. Kate and Heidi emphasized the need to foster curiosity and critical thinking among students, encouraging them to question and analyze the information they encounter.

Lane, the host, wrapped up the session by highlighting additional resources and experiments for attendees to explore AI tools in research.

Note: This summary is based on a presentation by librarians Kate Ganski and Heidi Anzano discussing the intersection of AI and academic libraries.

Exploring the Evolving Relationship Between AI and Libraries

AI and Libraries: Friends or Enemies?

By Dr. Luba Pirgova-Morgan, University of Leeds



In a recent presentation, Dr. Luba Pirgova-Morgan explored the evolving relationship between artificial intelligence (AI) and libraries. Drawing from her report titled "Looking Towards a Brighter Future," completed in 2023 at the University of Leeds, she examined whether AI is a friend or foe to the library world.

AI in the Library Space: Hero or Villain?

Dr. Pirgova-Morgan posed the question of AI's role in libraries—is it a hero enhancing library services or a villain introducing challenges? She concluded that AI is a multifaceted tool that is neither inherently good nor bad. Its impact depends on how it is utilized within the library context.

On one hand, AI can be a hero by:

  • Enhancing Efficiency: Automating routine tasks, allowing librarians to focus on complex responsibilities.
  • Personalizing User Experience: Providing tailored recommendations and improving search optimization.
  • Improving Accessibility: Assisting users with disabilities through tools like text-to-speech and language processing applications.

On the other hand, AI can be a villain by introducing:

  • Bias and Inequality: Perpetuating existing biases if algorithms are not carefully designed.
  • Privacy Concerns: Handling large amounts of user data, which may infringe on privacy if not properly managed.
  • Reduction of Human Element: Potentially diminishing the value of human interaction in libraries.

AI and Libraries: Friends or Enemies?

The presentation also delved into whether AI and libraries can be friends or are destined to be enemies. Dr. Pirgova-Morgan suggested that a harmonious relationship is possible through:

  • Education and Skills Development: Librarians should develop AI-related skills to navigate the evolving landscape effectively.
  • Ethical Implementation: Libraries must address ethical considerations, ensuring AI is used responsibly.
  • User Engagement: Encouraging open dialogue with users about AI to foster understanding and trust.

She emphasized that the key to a positive relationship lies in balancing the benefits of AI with mindful awareness of its limitations.

Current Initiatives at the University of Leeds

The University of Leeds is actively exploring AI applications within its library system, including:

  • Digitizing Ancient Texts: Using AI to enhance the digitization process, making historical documents more accessible.
  • Digital Humanities Projects: Integrating AI into research workflows to support academic studies.
  • Policy Development: Engaging in debates and consultations to develop strategies for ethical AI integration.

Conclusion

Dr. Pirgova-Morgan concluded that the relationship between AI and libraries is complex but holds great potential. By establishing clear guidelines and fostering collaboration, libraries can leverage AI as a powerful ally rather than viewing it as an adversary.

For more information or to access the full report, please contact Dr. Luba Pirgova-Morgan at [email protected].

Note: This summary is based on a presentation by Dr. Luba Pirgova-Morgan discussing the intersection of AI and library services.

Saturday, November 23, 2024

Understanding Generative AI: Implications for Academic Integrity and Citation

Ethical and Productive—Considering Generative Artificial Intelligence Citation Across Learning and Research



Introduction

  • Host: Daniel Pfeiffer from Choice and LibTech Insights.
  • Speakers:
    • Kari Weaver: Learning, Teaching, and Instructional Design Librarian at the University of Waterloo.
    • Antonio Muñoz Gómez: Digital Scholarship Librarian at the University of Waterloo.
  • Context: Discussion on ethical considerations and citation practices for generative AI tools like ChatGPT in academia.

Acknowledgment of Land

  • Recognition of the traditional territories where the University of Waterloo is situated.
  • Reflection on how citation practices are influenced by colonial approaches to knowledge ownership.

Background of the Project

  • Campus Context:
    • Research-intensive university with over 42,000 students.
    • Home to the Waterloo Artificial Intelligence Institute.
  • Emergence of Generative AI:
    • Open availability of tools like ChatGPT sparked campus-wide discussions.
    • Initial focus on AI's impact on teaching, learning, and academic integrity.

Focus on Citation Practices

  • Purpose of Citation:
    • Creates an information trail and establishes academic connections.
    • Provides standardization and consistency in student assignments.
    • Supports academic integrity through transparency.
  • Challenges with AI-generated Content:
    • Difficulty in citing AI-generated outputs.
    • Lack of initial guidance from traditional citation styles.
    • Need for practical solutions for students and faculty.

Ethical Dimensions

  • Academic Integrity Concerns:
    • Fear of students using AI to cheat on assignments.
    • Issues with AI detection software misidentifying non-native English speakers.
  • Power Dynamics:
    • Discrepancy in the use of AI tools between students and instructors.
    • Data privacy concerns when student work is uploaded to detection software.
  • Reproducibility and Accountability:
    • AI outputs are inconsistent; same prompts yield different results.
    • Challenges in preserving AI-generated content for verification.

Citation in Research vs. Learning Contexts

  • Research Context:
    • AI tools generally not allowed as authors in publications.
    • AI-generated images discouraged due to reliability concerns.
    • Disclosure of AI use required in methodology sections.
  • Learning Context:
    • Adaptation of citation practices to include AI tools.
    • Encouragement for students to be transparent about AI use.

Development of Resources

  • Initial Outputs:
    • Created a LibGuide on ChatGPT and generative AI.
    • Developed infographics and annotated prompts illustrating citation practices.
  • Ongoing Work:
    • Updating resources to include guidance on citing AI-generated images and videos.
    • Exploring AI tools for literature reviews and knowledge synthesis.
  • Campus Collaboration:
    • Formed a campus-wide committee with diverse representation.
    • Contributed to faculty programming and standardized syllabus language.
    • Supported resource development in partnership with other academic units.

Library Initiatives

  • Internal Exploration:
    • Monthly sessions on AI tools like Whisper for transcription.
    • Workshops on AI and machine learning in academic libraries.
  • Interest Groups and Bibliographies:
    • Formed an interest group on AI within the library.
    • Created a Zotero bibliography with curated readings on AI topics.
  • Future Directions:
    • Participation in provincial and federal AI initiatives for academic libraries.

Q&A Session Highlights

  • Use of AI in Professional Practice:
    • Librarians using AI tools for brainstorming and instructional design.
  • Access to Paywalled Content:
    • AI tools generally cannot access content behind paywalls unless provided by the user.
  • Guidance on AI Use in Assignments:
    • Importance of transparency and attribution when students use AI for brainstorming or editing.
    • Encouragement for faculty to discuss AI expectations with students.
  • Ethical Considerations:
    • Need to address citation as a colonial practice and explore decolonized approaches.
    • Challenges with integrated AI features in tools and implications for citation.
  • Institutional Policies:
    • University of Waterloo currently has no formal policy on AI use.
    • Emphasis on ongoing conversations and collaborative efforts to address AI's impact.

Conclusion

  • Recognition of the complexities and rapid development of AI technologies.
  • Importance of grappling with ethical, practical, and pedagogical implications.
  • Encouragement for open dialogue between faculty, students, and librarians.
  • Acknowledgment of the need for adaptable approaches rather than rigid policies.

Note: This summary captures key points from a presentation discussing the ethical considerations and citation practices related to the use of generative AI tools in academic learning and research contexts.

Streamline Your Writing Process with QuillBot Flow: A Comprehensive Overview

Introduction to QuillBot Flow—Enhancing Your Writing Process



Introduction

  • Host: Gul, leading Business Development at QuillBot.
  • Team Members Present:
    • Aim: Handling administrative issues.
    • Ashish: Addressing general questions.
    • Jerry: Addressing product-related questions.
  • Audience Engagement:
    • Participants from around the world, including Tanzania, Indonesia, Scotland, France, Germany, Italy, Canada, Netherlands, Philippines, Mexico, USA, South Africa, Sri Lanka, Pakistan, and South Korea.
    • Shared favorite quotes and New Year greetings to foster community spirit.

Webinar Overview

  • Purpose: To introduce QuillBot Flow, an AI-powered writing tool designed to streamline and enhance the writing process.
  • Agenda:
    • Introduction to QuillBot and its mission.
    • Deep dive into QuillBot Flow features.
    • Interactive Q&A session.
    • Special surprise announcement for attendees.

About QuillBot

  • Founded: In 2017 by three computer science graduates from the University of Illinois—Rohan Gupta, Anil Jason, and Dave S.
  • Headquarters: Chicago, USA, and Jaipur, India.
  • Mission: To make the writing process painless and help users grow and learn as writers.
  • User Base:
    • Over 35 million monthly active users.
    • More than 50 million users globally.
  • Key Features:
    • AI writing tools for drafting, brainstorming, researching, editing, proofreading, creating citations, summarizing, and translating.
    • Ad-free platform focused on user efficiency.

Introduction to QuillBot Flow

  • Formerly Known As: QuillBot's Co-Writer.
  • Description: A comprehensive AI writing platform integrating all of QuillBot's tools in one place.
  • Demonstration Highlights:
    • Templates:
      • Options for blogs, academic papers, emails, letters, and custom templates.
    • Structure Generation:
      • Helps create an outline or flow for writing projects.
    • Research Assistance:
      • Integrated search within the platform.
      • Ability to insert researched content directly into the document.
    • QuillBot Flares:
      • Generate ideas, complete paragraphs, add examples or counter-examples.
    • Paraphrasing Modes:
      • Multiple styles (e.g., standard, fluency, formal) and multilingual capabilities.
    • Summarizer Tool:
      • Condenses long texts into key sentences or paragraphs.
    • Translation Feature:
      • Supports over 45 languages, including French, German, and Spanish.
    • Plagiarism Checker:
      • Scans documents for originality and assists with citations.
    • AI Review:
      • Offers suggestions to improve writing style and tone.
    • Suggest Text Feature:
      • Predicts the next sentence based on the current content.
    • Dictate and Listen Feature:
      • Converts speech to text and text to speech for increased productivity.

Interactive Q&A Session

  • Poll Conducted:
    • Asked attendees what they hoped to gain from the webinar.
    • Majority wanted to learn how to enhance their writing process.
  • Common Questions Addressed:
    • Differences Between QuillBot and Other Tools:
      • Multilingual paraphrasing accuracy.
      • Integrated features like summarizer and translator.
    • Subscription Options and Discounts:
      • Availability of monthly, semi-annual, and annual subscriptions.
      • Special discounts for students and educational institutions.
    • Language and Accent Adjustments:
      • Ability to choose between American, British, Canadian, and Australian English.
    • Upcoming Webinars:
      • Plans for future sessions covering various topics based on user feedback.
    • Templates and Citation Support:
      • Access to multiple templates and citation formats (APA, MLA, Chicago, etc.).
    • Device Accessibility:
      • QuillBot is accessible across different devices.
  • Feedback Encouraged:
    • Participants were invited to share topics they would like covered in future webinars.
    • Emphasized the importance of user feedback in improving QuillBot.

Special Surprise for Attendees

  • Exclusive Offer:
    • A 50% discount on the annual premium subscription.
    • Valid for 24 hours post-webinar.
    • Coupon code provided during the session.
  • How to Avail:
    • Instructions to contact support if assistance is needed with the coupon code.
    • Encouraged to reach out via email or the QuillBot website for any queries.

Conclusion

  • Gratitude Expressed:
    • Thanked attendees for their participation and engagement.
    • Expressed excitement about the overwhelming response.
  • Encouragement to Connect:
    • Invited attendees to follow QuillBot on social media for updates.
    • Encouraged sharing feedback and suggestions for future webinars.
  • Final Remarks:
    • Wished everyone a great and exciting journey ahead.
    • Anticipated how QuillBot's tools can empower users to achieve writing excellence.

Note: This summary captures key points from a webinar introducing QuillBot Flow, an AI-powered writing platform designed to enhance and streamline the writing process by integrating multiple tools into one comprehensive solution.

Navigating the AI Landscape: How Libraries Can Adapt

Libraries and AI—Challenges and Responses


Introduction

  • Host: Don from the Gigabit Libraries Network.
  • Speakers:
    • Andrew Cox: Member of the AI Special Interest Group at IFLA; Information School in Sheffield.
    • Richard Whitt: President of GLIA Foundation.
  • Series Context: Part of the "Libraries in Response" series on technology issues affecting libraries.

Context and Background

  • Libraries are facing multiple crises: COVID-19, climate change, political unrest, and AI.
  • AI is seen as both an opportunity and a challenge for libraries.
  • The importance of libraries as trusted institutions in navigating technological changes.

Challenges of AI for Libraries

  • Existential Concerns: AI's potential impact on humanity and societal structures.
  • Trust Issues: Ensuring AI agents act in the best interest of users, avoiding "double agents."
  • Digital Divide: AI might exacerbate inequalities between connected and unconnected communities.
  • Regulatory Landscape:
    • Federal and state policies are being developed to address AI.
    • Challenges in effectively regulating complex AI technologies.

Role of Libraries in the Age of AI

  • Leveraging the high trust in libraries to guide communities through AI challenges.
  • Promoting AI literacy and responsible AI use among patrons.
  • Developing AI capabilities, including data stewardship and ethical practices.
  • Potential partnerships with technology companies for AI development.

Presentations

Richard Whitt

  • Referenced Cerf's work on digital libraries and intelligent agents (knowbots).
  • Discussed the rise of AI bots and personal digital assistants.
  • Introduced the concept of "double agents" in AI that may not serve users' best interests.
  • Highlighted potential roles for libraries:
    • Providing infrastructure and connectivity.
    • Serving as repositories of trustworthy digital knowledge.
    • Acting as fiduciaries with obligations to patrons.
    • Developing AI agents aligned with library values.
    • Educating patrons on AI and digital citizenship.

Andrew Cox

  • Introduced the work of the IFLA AI Special Interest Group.
  • Presented a strategic framework for libraries responding to AI challenges.
  • Discussed the AI capability model:
    • Material Resources: Data and infrastructure needs.
    • Human Resources: Technical and business skills required.
    • Intangible Resources: Leadership, coordination, and adaptability.
  • Suggested key actions for libraries:
    • Implement responsible and explainable AI solutions.
    • Enhance data stewardship and management skills.
    • Promote AI literacy and critical understanding among patrons.
  • Addressed challenges like resource limitations and the need for collaboration and vision.

Discussion and Audience Participation

  • Practical Steps for Libraries:
    • Start small with AI projects relevant to existing services.
    • Define a clear vision for AI integration.
    • Collaborate with other libraries and institutions.
  • Partnerships with Tech Companies:
    • Potential benefits and risks of collaborating with technology firms.
    • Need for libraries to advocate for ethical AI practices.
  • Comments from Participants:
    • Diane: Shared a tool developed by her library using AI to assist patrons; emphasized the importance of prompt engineering.
    • Stephen Abram: Highlighted the need for collaborative efforts, use cases, and establishing guardrails for AI implementation.
    • Fiona: Mentioned Toronto Public Library's leadership in using AI.

Conclusion

  • Recognized that AI presents both significant challenges and opportunities for libraries.
  • Emphasized the unique position of libraries to leverage trust and promote ethical AI use.
  • Committed to ongoing discussions and exploring AI's impact on libraries in future sessions.
  • Encouraged proactive engagement with AI, focusing on community needs and responsible practices.

Note: This outline summarizes a presentation on how libraries can respond to the challenges and opportunities presented by AI, featuring insights from industry experts and audience participation.

Data Science 101: Understanding Statistical Concepts and Analysis

From Couch to Jupyter: A Beginner's Guide to Data Science Tools and Concepts



Introduction

  • Host: Manogna, Senior Data Scientist at Slalom.
  • Presenter: Kiko K., Analytic Scientist at FICO on the Scores Predictive Analytics team.
  • Background:
    • Graduated from UC Berkeley in 2019 with a degree in Applied Mathematics and Data Science.
    • Led teams integrating data science into non-traditional curricula.
    • Passionate about data science's power and community.

Workshop Overview

  • Title: "From Couch to Jupyter—A Beginner's Guide to Data Science Tools and Concepts"
  • Objective: Provide foundational knowledge and tools for beginners in data science.
  • Structure:
    • Introduction to Jupyter Notebook.
    • Basics of Python programming.
    • Understanding data structures and statistical concepts.
    • Interactive code demonstrations.
  • Resources:
    • GitHub repository with tutorial notebooks and datasets.
    • Anaconda installation guide for environment setup.

Key Topics Covered

  • Using Jupyter Notebook
    • Understanding markdown and code cells.
    • Running cells and writing code.
  • Python Basics
    • Data types: integers, floats, strings, booleans.
    • Variables and functions.
    • Arithmetic operations and function calls.
  • Data Structures
    • Arrays with NumPy.
    • Pandas Series and DataFrames.
    • Indexing and slicing data.
  • Data Manipulation and Analysis
    • Importing libraries and reading data files.
    • Handling missing data (NaN values).
    • Filtering and selecting data.
    • Basic statistical calculations: mean, median, standard deviation.
  • Practical Demonstrations
    • Working with a stroke prediction dataset from Kaggle.
    • Visualizing data distributions.
    • Imputing missing values.

Additional Resources

  • Anaconda Installation Guide: For setting up the Python environment.
  • Tutorial Notebooks: Covering various topics in more depth.
  • External Links: Videos and other learning materials for further study.

Conclusion

  • Q&A Session: Addressed audience questions on topics like:
    • Differences between Jupyter Notebook and JupyterLab.
    • Handling missing data and NaN values.
    • Differences between arrays and series.
    • Recommendations for beginners starting with data sets.
  • Final Remarks:
    • Encouraged attendees to explore provided resources.
    • Emphasized continuous learning in data science.
    • Thanked the audience for participation.

Note: The workshop aims to make data science accessible to beginners by providing hands-on experience with tools like Jupyter Notebook and Python, using practical examples and interactive code demonstrations.