Closing Keynote: The Three Cs of Generative AI in Libraries
Presented by Reed Hepler at the AI and the Libraries 2 Mini Conference
In the closing keynote of the "AI and the Libraries 2 Mini Conference: More Applications, Implications, and Possibilities," Reed Hepler, Digital Initiatives Librarian and Archivist at the College of Southern Idaho, shared valuable insights on the use of generative AI in educational and library settings. With experience spanning educational settings, library environments, and business training, Hepler examined the ethical considerations and best practices surrounding generative AI tools.
Introduction
Hepler began by acknowledging the diverse perspectives educators and administrators hold regarding generative AI. He identified four primary viewpoints observed at his institution:
- Fear that student use of ChatGPT and similar tools opens the door to new forms of unethical practice.
- Confidence that students wish to use ChatGPT effectively and constructively.
- Concern that generative AI undermines established systems and norms of online learning.
- Belief that ChatGPT can lead to innovative products and workflows that enhance instructional design and assessment.
Recognizing the need to address these concerns, Hepler introduced a framework he developed to guide ethical and effective use of generative AI: the "Three Cs."
The Three Cs of Generative AI
1. Copyright
Key Question: Who owns the rights to AI-generated products, and how are they created?
Hepler discussed the complexities of copyright in the context of generative AI, posing three critical questions:
- What are the rights and responsibilities of the original creators whose works are used by AI?
- What are the rights and responsibilities of users who employ AI tools?
- Is generative AI an owner, a user, both, or neither in terms of copyright?
He clarified that copyright protects the expression of ideas in any medium and grants exclusive rights to the creator or copyright holder. However, copyright does not protect devices, processes, ideas, public domain materials, works created by U.S. federal government employees as part of their official duties, or recipes.
Hepler emphasized that current U.S. copyright law requires human authorship for protection, raising questions about whether AI can be considered an author. He also highlighted the ongoing debates and legal challenges surrounding the fair use doctrine as it pertains to AI training on copyrighted materials.
He cited examples of copyright battles involving AI-generated works, such as "Zarya of the Dawn," and discussed the implications of using copyrighted content in AI prompts. He stressed the importance of respecting intellectual property rights and advised users to avoid inputting copyrighted material into AI tools unless they own the rights.
2. Citation
Key Question: How should AI tools and outputs be cited, and where did the information originate?
Noting the absence of standardized citation formats for AI-generated content, Hepler emphasized that the purpose of citation is to provide information about sources. He recommended including the following elements in any AI citation:
- Tool name and version
- Date and time of usage
- Prompt, query, or conversation title
- Name of the person who queried the AI
- Link to the conversation or output, if possible
He provided an example of how to cite AI-generated content in APA style, suggesting that users include their own name to acknowledge their role in the creation process. He also stressed that users should actively edit and revise AI outputs to ensure originality and accuracy.
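By way of illustration only (this is not Hepler's exact template, and the field names are hypothetical), a short Python sketch shows how the elements listed above might be assembled into a single citation string:

```python
from datetime import datetime

def format_ai_citation(user, tool, version, prompt_title, used_on, url=None):
    """Assemble a citation string from the elements listed above.

    The field names and ordering are illustrative, not a standard schema.
    """
    parts = [
        f"{user}.",                              # person who queried the AI
        f"({used_on.strftime('%Y, %B %d')}).",   # date of usage
        f'"{prompt_title}"',                     # prompt, query, or conversation title
        f"[Output from {tool} {version}].",      # tool name and version
    ]
    if url:
        parts.append(url)                        # link to the conversation, if available
    return " ".join(parts)

# Example usage with hypothetical values:
print(format_ai_citation(
    user="J. Librarian",
    tool="ChatGPT",
    version="GPT-4",
    prompt_title="Summarize fair use factors for AI training data",
    used_on=datetime(2023, 11, 15),
    url="https://chat.openai.com/share/example",
))
```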
3. Circumspection
Key Question: What hazards—moral, ethical, educational, or otherwise—should users manage when utilizing generative AI tools?
Hepler outlined several ethical issues associated with AI outputs, including:
- Plagiarism
- Biases
- Repetitiveness and arbitrariness
- Incorrect or misleading information
- Lack of connection to external resources
He discussed privacy concerns, highlighting how AI tools can extrapolate personal data from user inputs, even when users attempt to minimize the information they provide. He emphasized that users should never input sensitive or confidential information into AI prompts.
Hepler recommended several practices to mitigate these risks:
- Informing users about data collection and its purposes
- Obtaining explicit consent for data usage
- Limiting data collection to essential information (data minimization)
- Implementing strict access and use controls
- Anonymizing data in prompts
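To make the anonymization and data-minimization steps concrete, here is a minimal Python sketch; it is not a tool Hepler presented, and real redaction would require far more robust PII detection than these rough patterns:

```python
import re

# Very rough patterns for common personal identifiers; a production workflow
# would need more thorough rules or a dedicated PII-detection library.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely personal identifiers with placeholder tokens before prompting."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

# Example usage with hypothetical patron data:
raw = "Patron Jane Doe (jane.doe@example.com, 208-555-0143) asked about interlibrary loan."
print(redact(raw))
# -> "Patron Jane Doe ([EMAIL REDACTED], [PHONE REDACTED]) asked about interlibrary loan."
# Note that the name is not caught by these simple patterns, which underscores
# why automated redaction alone is not sufficient.
```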
He also discussed the importance of quality control when using AI-generated content, advising users to:
- Use AI tools for their intended purposes
- Engage in best practices for prompting
- Ask the AI for its sources and verify them
- Find external resources to support AI-generated information
- Analyze outputs for ethical issues, accessibility, and accuracy
Privacy and Ethical Considerations
Hepler delved deeper into privacy harms associated with AI, referencing works by legal scholars such as Danielle Keats Citron and Daniel J. Solove. He noted that privacy laws often require proof of harm, which is difficult to establish for intangible injuries such as anxiety or frustration resulting from data breaches or misuse.
He highlighted that AI tools like ChatGPT have specific terms of use that assign users ownership of the outputs generated from their inputs. However, users are responsible for ensuring that their content does not violate any applicable laws.
Hepler stressed that despite best efforts, AI tools can still extrapolate personal data, underscoring the importance of being cautious with the information provided to these systems.
Conclusion and Recommendations
Concluding his keynote, Hepler provided a list of references and resources for further exploration of the topics discussed. He reiterated the need for libraries and educators to navigate the evolving landscape of generative AI thoughtfully, balancing innovation with ethical considerations.
He encouraged attendees to remain informed about developments in AI and copyright law, to respect intellectual property rights, and to engage in responsible use of AI tools. By adhering to the "Three Cs" framework—Copyright, Citation, and Circumspection—users can harness the benefits of generative AI while mitigating potential risks.
Final Thoughts
Hepler's presentation offered a comprehensive overview of the challenges and responsibilities associated with generative AI in libraries and education. His insights serve as a valuable guide for professionals seeking to integrate AI tools into their work ethically and effectively.
Note: This summary is based on the closing keynote delivered by Reed Hepler at the AI and the Libraries 2 Mini Conference.