AI as a Service (AIaaS)
AIaaS provides cloud-based AI tools that libraries can adopt without heavy upfront investments in hardware or in-house expertise. Standard offerings include automated translation services, speech-to-text processing, and chatbots, which help libraries enhance user engagement and streamline operations.
Further Reading: https://en.wikipedia.org/wiki/AI_as_a_service
Algorithm
An algorithm is a finite set of instructions a computer follows to perform a specific task. In libraries, algorithms underpin search engines, recommendation systems, and automated classification, ultimately shaping how patrons find information and resources.
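As a concrete illustration, here is a minimal Python sketch of one classic algorithm, binary search, used to locate a call number in a sorted shelf list; the sample data is invented:

```python
# Binary search: repeatedly halve the search range of a sorted list.
def binary_search(sorted_items, target):
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid          # found: return the shelf position
        if sorted_items[mid] < target:
            low = mid + 1       # target sorts after the midpoint
        else:
            high = mid - 1      # target sorts before the midpoint
    return -1                   # not found

shelf = ["QA76.9", "Z665.2", "Z678.9"]   # invented call numbers, sorted
print(binary_search(shelf, "Z665.2"))    # -> 1
```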
Further Reading: https://en.wikipedia.org/wiki/Algorithm
Artificial Intelligence (AI)
AI refers to computational systems that perform tasks requiring human intelligence, such as learning, reasoning, and decision-making. Libraries leverage AI to automate repetitive workflows (e.g., metadata tagging) and offer advanced user services (e.g., virtual reference and intelligent recommendations).
Further Reading: https://en.wikipedia.org/wiki/Artificial_intelligence
Association for the Advancement of Artificial Intelligence (AAAI)
The AAAI is a professional organization committed to advancing the understanding and application of AI. Librarians track AAAI publications, conferences, and workshops to stay informed about the latest AI research and ethical guidelines, ensuring responsible library technology adoption.
Further Reading: https://aaai.org/
Autoencoder
An autoencoder is a neural network that learns to compress input data (such as images or text) into a smaller latent representation and then reconstructs it. Libraries might use autoencoders to remove noise from digitized documents or to discover latent topics in extensive text collections.
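A minimal PyTorch sketch (assuming PyTorch is installed) of a tiny autoencoder that compresses 8 input features to a 2-value latent code and reconstructs them; the data here is random, standing in for document features:

```python
import torch
from torch import nn

model = nn.Sequential(
    nn.Linear(8, 2),   # encoder: compress to a 2-d latent representation
    nn.ReLU(),
    nn.Linear(2, 8),   # decoder: reconstruct the original 8 features
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
data = torch.rand(64, 8)                  # random stand-in data

for _ in range(200):                      # train to reconstruct the input
    loss = nn.functional.mse_loss(model(data), data)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(loss.item())                        # reconstruction error shrinks
```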
Further Reading: https://en.wikipedia.org/wiki/Autoencoder
Automatic Speech Recognition (ASR)
ASR converts spoken language into written text. In libraries, ASR tools can generate transcripts for oral histories, podcasts, and event recordings, improving accessibility and enabling keyword searching of audio materials.
Further Reading: https://en.wikipedia.org/wiki/Speech_recognition
Batch Learning
Batch learning trains machine learning models on a fixed dataset at once rather than incrementally. Libraries may use batch learning for periodic tasks such as reclassifying the catalog or updating recommendation systems with newly accumulated usage data.
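A minimal scikit-learn sketch in which the model is fit once on the full accumulated dataset and then redeployed; the usage features and labels are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy usage features: [loans_last_month, renewals]; label 1 = high demand.
X = np.array([[12, 3], [2, 0], [8, 1], [1, 1], [15, 4], [0, 0]])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)   # one pass over the whole batch
print(model.predict([[10, 2]]))          # forecast demand for a new title
```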
Further Reading: https://en.wikipedia.org/wiki/Batch_learning
Bias (in AI)
AI bias occurs when a model produces skewed or unfair outcomes due to its training data or design. Librarians must be alert to bias to maintain equitable access and uphold the library's mission of fairness and inclusivity in automated services.
Further Reading: https://en.wikipedia.org/wiki/Algorithmic_bias
Bidirectional Encoder Representations from Transformers (BERT)
BERT is a Transformer-based NLP model that reads text in both directions (left-to-right and right-to-left), capturing deeper context. Libraries can adopt BERT-powered tools for sophisticated search, text classification, and automated reference assistance.
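A minimal sketch using the Hugging Face transformers package (model weights download on first use); the query sentence is invented:

```python
from transformers import pipeline

# BERT predicts the masked word from context on both sides.
fill = pipeline("fill-mask", model="bert-base-uncased")
for candidate in fill("The library offers free [MASK] to all patrons."):
    print(candidate["token_str"], round(candidate["score"], 3))
```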
Further Reading: https://en.wikipedia.org/wiki/BERT_(language_model)
Big Data
Big Data refers to datasets so large or complex that traditional data processing methods struggle with them. Libraries often handle Big Data through large-scale digitized archives, extensive usage logs, or research datasets that require advanced analytics and storage solutions.
Further Reading: https://en.wikipedia.org/wiki/Big_data
Chatbot
A chatbot simulates human conversation through text or voice interactions, often powered by natural language processing. Libraries can deploy chatbots to handle routine queries, guide patrons to resources, and provide round-the-clock virtual reference support.
Further Reading: https://en.wikipedia.org/wiki/Chatbot
Computer Vision
Computer Vision trains algorithms to understand and interpret visual content like images or videos. Libraries use it to automatically tag photographs in digital collections, perform image-based metadata extraction, or assist in identifying and categorizing scanned archival materials.
Further Reading: https://en.wikipedia.org/wiki/Computer_vision
Convolutional Neural Network (CNN)
A CNN is a type of deep neural network particularly effective for image recognition tasks. In libraries, CNNs can categorize extensive image collections, identify text in digitized documents, and power content-based image retrieval systems.
Further Reading: https://en.wikipedia.org/wiki/Convolutional_neural_network
Data Anonymization
Data anonymization strips datasets of identifying details, safeguarding individual privacy. This practice is crucial in libraries, where it allows for the safe sharing of usage or circulation data without exposing patron identities.
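A standard-library sketch that replaces patron IDs with salted one-way hashes before a circulation log is shared; strictly speaking this is pseudonymization, and real projects need a vetted anonymization policy. The records and salt are invented:

```python
import hashlib

SALT = b"rotate-me-regularly"  # keep secret; rotating it breaks linkability

def pseudonymize(patron_id: str) -> str:
    # One-way hash: the original ID cannot be recovered from the output.
    return hashlib.sha256(SALT + patron_id.encode()).hexdigest()[:12]

log = [("P10023", "Z678.9"), ("P10077", "QA76.9")]
print([(pseudonymize(pid), item) for pid, item in log])
```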
Further Reading: https://en.wikipedia.org/wiki/Data_anonymization
Data Augmentation
Data augmentation involves expanding a training dataset by applying transformations (like flipping or rotating images) or creating synthetic data. When the amount of labeled data is limited, this helps libraries improve AI model performance.
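A minimal sketch assuming torchvision and Pillow are installed; the file path is a placeholder for a digitized page:

```python
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=5),      # slight skew, as in real scans
])

page = Image.open("scan_0001.png")             # hypothetical digitized page
variants = [augment(page) for _ in range(4)]   # 4 synthetic training images
```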
Further Reading: https://en.wikipedia.org/wiki/Data_augmentation
Data Cleaning (Data Wrangling)
Data cleaning fixes or removes errors and inconsistencies in datasets. In library contexts, it ensures that catalog records, metadata, and user analytics remain accurate and trustworthy, which is vital for reliable AI-driven insights.
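A minimal pandas sketch with invented catalog rows, showing typical fixes:

```python
import pandas as pd

records = pd.DataFrame({
    "title": ["Dune", "dune ", None, "Hamlet"],
    "year":  ["1965", "1965", "1603", "1603"],
})
records["title"] = records["title"].str.strip().str.title()  # normalize case
records = records.dropna(subset=["title"])                   # drop incomplete rows
records = records.drop_duplicates()                          # remove exact duplicates
records["year"] = records["year"].astype(int)                # enforce a numeric type
print(records)
```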
Further Reading: https://en.wikipedia.org/wiki/Data_cleansing
Data Ethics
Data ethics refers to the moral considerations governing how data is collected, shared, and used. Libraries uphold data ethics to respect patron privacy, maintain public trust, and ensure fairness in AI-driven initiatives.
Further Reading: https://en.wikipedia.org/wiki/Data_ethics
Data Governance
Data governance establishes the policies and procedures for managing data availability, usability, integrity, and security. Effective data governance in libraries helps maintain consistent catalog records, safeguard patron data, and standardize data-driven decisions.
Further Reading: https://en.wikipedia.org/wiki/Data_governance
Data Lake
A data lake is a vast store of raw data kept in its native format until needed. Libraries may use data lakes to hold large-scale digital archives or research datasets for flexible access and advanced analytics, including machine learning.
Further Reading: https://en.wikipedia.org/wiki/Data_lake
Data Mining
Data mining uncovers patterns and relationships within large datasets. Libraries use it to analyze circulation logs, usage statistics, or full-text corpora, revealing insights that guide acquisitions, outreach, and collection management.
Further Reading: https://en.wikipedia.org/wiki/Data_mining
DataOps
DataOps is an agile, process-oriented methodology that integrates data management with software development best practices. In libraries, DataOps helps coordinate large data projects (e.g., linking various databases), ensuring quick, reliable insights.
Further Reading: https://en.wikipedia.org/wiki/DataOps
Data Silo
A data silo is an isolated repository accessible to one group but closed to others. Librarians strive to avoid silos so that data—from user statistics to catalog information—can be shared and integrated, enabling cohesive services and research.
Further Reading: https://en.wikipedia.org/wiki/Information_silo
Data Sovereignty
Data sovereignty holds that information is subject to the laws and governance of the nation where it's collected. Libraries hosting international resources or patron data must comply with various legal frameworks to protect user rights and privacy.
Further Reading: https://en.wikipedia.org/wiki/Data_sovereignty
Data Visualization
Data visualization presents information in graphical or pictorial formats, such as charts or dashboards. Libraries harness visualization tools to interpret usage statistics, communicate research results, and spot trends in extensive collections.
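A minimal matplotlib sketch with invented circulation figures:

```python
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar"]
checkouts = [420, 380, 510]          # invented figures
plt.bar(months, checkouts)
plt.ylabel("Checkouts")
plt.title("Monthly circulation")
plt.show()
```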
Further Reading: https://en.wikipedia.org/wiki/Data_visualization
Data Warehouse
A data warehouse stores integrated data from multiple sources, typically in a structured manner for reporting and analysis. Libraries use data warehouses to consolidate acquisitions, circulation, and budgeting data for strategic decision-making.
Further Reading: https://en.wikipedia.org/wiki/Data_warehouse
Deep Learning
Deep learning involves multi-layered neural networks that recognize intricate patterns in text, images, or other data. Libraries leverage deep learning to improve optical character recognition, item classification, and advanced recommendation algorithms.
Further Reading: https://en.wikipedia.org/wiki/Deep_learning
DevOps
DevOps integrates software development and IT operations to speed up development cycles and increase collaboration. Libraries adopt DevOps principles to streamline the deployment of new digital services, including AI-based catalog or discovery platforms.
Further Reading: https://en.wikipedia.org/wiki/DevOps
Digital Preservation
Digital preservation encompasses activities that ensure long-term access to digital content. Libraries use AI to detect file corruption, automate metadata creation, and migrate obsolete formats, safeguarding cultural and scholarly records over time.
Further Reading: https://en.wikipedia.org/wiki/Digital_preservation
Domain Adaptation
Domain adaptation focuses on transferring a model trained in one data domain to work effectively in another. Libraries might use it to adapt general NLP models for specialized collections or subject areas with limited labeled data.
Further Reading: https://en.wikipedia.org/wiki/Domain_adaptation
Doc2Vec
Doc2Vec is an algorithm that produces a numeric vector representation for entire documents, capturing semantic meaning. Libraries use it to cluster similar documents, improve search relevance, or power recommendation systems based on text similarity.
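A minimal sketch assuming the gensim package; the abstracts are invented, and a real corpus would be far larger:

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

abstracts = [
    "history of medieval manuscripts",
    "machine learning for image search",
    "preserving medieval archives",
]
corpus = [TaggedDocument(text.split(), [i]) for i, text in enumerate(abstracts)]
model = Doc2Vec(corpus, vector_size=50, min_count=1, epochs=40)

# With enough text, documents 0 and 2 (both about archives) should score closer.
print(model.dv.similarity(0, 2), model.dv.similarity(0, 1))
```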
Further Reading: https://en.wikipedia.org/wiki/Document_embedding
Edge Computing
Edge computing shifts data processing closer to the source—like local servers or user devices—instead of relying solely on cloud data centers. Libraries can benefit from reduced latency and improved privacy, especially when handling sensitive local patron data.
Further Reading: https://en.wikipedia.org/wiki/Edge_computing
Ethical AI
Ethical AI ensures that AI design and deployment respect privacy, fairness, and accountability principles. Libraries, as institutions of public trust, prioritize ethical AI to protect patron data and uphold equitable access to information.
Further Reading: https://en.wikipedia.org/wiki/Ethics_of_artificial_intelligence
Explainable AI (XAI)
Explainable AI comprises methods that make AI models' decisions understandable to humans. For librarians, XAI is vital to clarify how recommendation engines or automated classification tools produce results, preserving transparency and user trust.
Further Reading: https://en.wikipedia.org/wiki/Explainable_artificial_intelligence
Fairness, Accountability, and Transparency (FAccT)
FAccT is a movement and conference series focused on the ethical dimensions of AI. Libraries monitor FAccT research to apply best practices in data handling, ensuring that AI-driven systems align with library values of equity and inclusivity.
Further Reading: https://facctconference.org/
Feature Engineering
Feature engineering transforms raw data into meaningful attributes that improve AI model performance. In libraries, it might involve combining circulation data with user demographics to predict which resources patrons need next.
Further Reading: https://en.wikipedia.org/wiki/Feature_engineering
Federated Learning
Federated learning trains a model across decentralized devices holding local data, preventing the need to send raw data to a central server. Libraries concerned with patron privacy may use federated learning to keep sensitive information on individual devices.
Further Reading: https://en.wikipedia.org/wiki/Federated_learning
Few-Shot Learning
Few-shot learning allows AI models to recognize new classes or perform tasks with only a few examples. Libraries with rare or niche materials can use few-shot learning to accurately label and classify resources with limited training data.
Further Reading: https://en.wikipedia.org/wiki/One-shot_learning#Few-shot_learning
Generative Adversarial Network (GAN)
A GAN consists of two competing neural networks—a generator and a discriminator—working together to create realistic synthetic data. Libraries might use GANs to expand or enrich training sets for image classification or text analytics.
Further Reading: https://en.wikipedia.org/wiki/Generative_adversarial_network
Generative Pre-trained Transformer (GPT)
GPT is a family of Transformer-based language models capable of generating coherent, context-aware text. Libraries employ GPT-driven services for automated summaries, translations, or research assistance in digital reference systems.
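A minimal sketch using the transformers package, with the openly available GPT-2 standing in for larger GPT models; the prompt is invented:

```python
from transformers import pipeline

generate = pipeline("text-generation", model="gpt2")
result = generate("New acquisitions this month include", max_new_tokens=30)
print(result[0]["generated_text"])   # model continues the prompt
```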
Further Reading: https://en.wikipedia.org/wiki/GPT-3
Gated Recurrent Unit (GRU)
A GRU is a recurrent neural network that manages how much prior information to keep or discard in sequence data. Libraries might use GRUs to analyze time-series data (e.g., circulation over time) or perform more efficient text processing.
Further Reading: https://en.wikipedia.org/wiki/Gated_recurrent_unit
GPU (Graphics Processing Unit)
GPUs excel at parallel processing, making them highly suited for training and running complex AI models. Libraries that conduct AI research or support advanced computing may invest in GPU servers to accelerate deep learning workloads.
Further Reading: https://en.wikipedia.org/wiki/Graphics_processing_unit
Hadoop
Hadoop is an open-source framework for distributed storage and processing of large datasets. Libraries with massive digital archives or research data can use Hadoop clusters to efficiently manage and analyze large-scale information.
Further Reading: https://en.wikipedia.org/wiki/Apache_Hadoop
High-Performance Computing (HPC)
HPC refers to computing environments with powerful processing capabilities, enabling advanced data analysis and AI training. Academic libraries often provide HPC resources for researchers handling large datasets or complex simulations.
Further Reading: https://en.wikipedia.org/wiki/High-performance_computing
Human-Centered AI
Human-centered AI prioritizes augmenting human expertise rather than replacing it, ensuring systems align with user needs. For libraries, this means leveraging AI to support librarians' decision-making and enhance patron experiences rather than supplant personal interactions.
Further Reading: https://hai.stanford.edu/
Human-in-the-Loop
Human-in-the-loop systems incorporate human judgment or feedback at critical steps of an AI workflow. Librarians might review AI-generated catalog records or subject classifications to ensure accuracy, curbing automated errors.
Further Reading: https://en.wikipedia.org/wiki/Human-in-the-loop
Hugging Face
Hugging Face is a platform for sharing NLP models and datasets, fostering an open AI community. Libraries can use pre-trained language models for document summarization, sentiment analysis, or bilingual services.
Further Reading: https://huggingface.co/
Inference
Inference applies a trained AI model to new data to make predictions or classifications. In libraries, inference might be used to categorize newly acquired materials, recognize images, or forecast resource demand in real time.
Further Reading: https://en.wikipedia.org/wiki/Inference
Information Retrieval (IR)
IR focuses on finding relevant information within a large repository based on user queries. Libraries rely on IR principles to design efficient catalog systems and discovery layers that provide precise, fast retrieval of books, articles, and digital resources.
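A minimal scikit-learn sketch that ranks invented catalog titles against a query by TF-IDF cosine similarity:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

titles = ["Introduction to Astronomy", "Stellar Physics", "Gardening Basics"]
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(titles)

query_vector = vectorizer.transform(["astronomy for beginners"])
scores = cosine_similarity(query_vector, doc_vectors)[0]
ranked = sorted(zip(scores, titles), reverse=True)   # best match first
print(ranked)
```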
Further Reading: https://en.wikipedia.org/wiki/Information_retrieval
Intelligent Agent
An intelligent agent perceives its environment and takes action autonomously to achieve goals. Libraries might deploy agents to monitor collection usage or handle routine inventory tasks, freeing staff for more specialized work.
Further Reading: https://en.wikipedia.org/wiki/Intelligent_agent
Intelligent Virtual Agent (IVA)
An IVA is an advanced conversational system that can engage in nuanced, context-aware interactions. Libraries may use IVAs for sophisticated virtual reference services, guiding patrons through research queries more deeply than a basic chatbot.
Further Reading: https://en.wikipedia.org/wiki/Virtual_assistant#Intelligent_virtual_agents
Interoperability
Interoperability ensures that different systems, formats, and protocols work together seamlessly. Libraries seek interoperability among catalog systems, digital repositories, and external databases to provide a unified user experience.
Further Reading: https://en.wikipedia.org/wiki/Interoperability
Knowledge Discovery in Databases (KDD)
KDD is a process that uses data mining and pattern recognition to uncover insights from large databases. Libraries use KDD to reveal trends, topics, and relationships in usage data, digital text corpora, or archival collections.
Further Reading: https://en.wikipedia.org/wiki/Knowledge_discovery_in_databases
Knowledge Extraction
Knowledge extraction pulls structured facts or relationships from unstructured text. Libraries can automate metadata enrichment or build specialized databases (e.g., extracting place names, dates, or events from historical documents).
Further Reading: https://en.wikipedia.org/wiki/Information_extraction
Knowledge Graph
A knowledge graph is a network of interconnected entities and their relationships, often leveraging ontologies. Libraries can employ knowledge graphs to link authors, works, subjects, and locations, enhancing patrons' discovery and context.
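A minimal sketch assuming the networkx package; the entities and relations are invented:

```python
import networkx as nx

kg = nx.DiGraph()
kg.add_edge("Ursula K. Le Guin", "The Dispossessed", relation="wrote")
kg.add_edge("The Dispossessed", "Utopian fiction", relation="has_subject")

# Traverse: which subjects connect to this author via their works?
for work in kg.successors("Ursula K. Le Guin"):
    for subject in kg.successors(work):
        print("Ursula K. Le Guin ->", work, "->", subject)
```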
Further Reading: https://en.wikipedia.org/wiki/Knowledge_Graph
Large Language Models (LLMs)
LLMs are trained on massive text datasets and can generate or understand language with human-like fluency. Libraries use LLMs for automatic summarization, question-answering, and advanced search capabilities that interpret natural language queries.
Further Reading: https://en.wikipedia.org/wiki/Large_language_model
Linked Data
Linked Data involves publishing structured data so that it can be interlinked with other datasets, making it more useful. Libraries adopt Linked Data approaches in their catalogs, enabling enriched records that connect to external datasets for broader discovery.
Further Reading: https://en.wikipedia.org/wiki/Linked_data
Long Short-Term Memory (LSTM)
LSTMs are recurrent neural networks that handle long-range dependencies in sequential data. Libraries might use LSTMs to analyze user search histories, forecast future information needs, or interpret text that spans multiple paragraphs.
Further Reading: https://en.wikipedia.org/wiki/Long_short-term_memory
Machine Learning (ML)
ML is a subset of AI in which algorithms learn patterns from data to make predictions or decisions. In libraries, ML automates classification, aids in collection analytics, and powers recommendation engines for reading materials.
Further Reading: https://en.wikipedia.org/wiki/Machine_learning
Metadata
Metadata describes data attributes such as author, title, or publication date, enabling better organization and discovery. Libraries depend on accurate metadata to facilitate catalog searches and enhance AI-driven classification or recommendation systems.
Further Reading: https://en.wikipedia.org/wiki/Metadata
MLOps (Machine Learning Operations)
MLOps merges ML model development with reliable deployment and maintenance practices. Libraries implementing AI for cataloging or user services should consider MLOps to ensure their models remain accurate and up-to-date in production.
Further Reading: https://en.wikipedia.org/wiki/MLOps
Natural Language Generation (NLG)
NLG transforms structured data into coherent, human-readable text. Libraries can use NLG to produce automated summaries of collection statistics, create plain-language descriptions of new acquisitions, or generate user notifications.
Further Reading: https://en.wikipedia.org/wiki/Natural-language_generation
Natural Language Processing (NLP)
NLP combines linguistics and AI to enable computers to interpret, generate, and analyze human language. Libraries adopt NLP to mine text in extensive collections, build chatbots, or improve the accuracy of search queries in online catalogs.
Further Reading: https://en.wikipedia.org/wiki/Natural_language_processing
Neural Network
A neural network is a model inspired by the human brain's interconnected neurons that can learn from examples. Libraries leverage neural networks to classify text or images, power recommender systems, and enhance search relevance.
Further Reading: https://en.wikipedia.org/wiki/Artificial_neural_network
Observability
Observability involves continuously tracking metrics, logs, and other signals to understand system behavior. Libraries use observability strategies to ensure AI-driven catalog or discovery services function smoothly and can be quickly debugged if issues arise.
Further Reading: https://en.wikipedia.org/wiki/Observability
One-Shot Learning
One-shot learning enables an AI model to recognize or categorize something after seeing just one example. Libraries with rare or unique materials benefit from these techniques, which reduce the need for extensive labeled training data.
Further Reading: https://en.wikipedia.org/wiki/One-shot_learning
Online Learning (Incremental Learning)
Online learning updates the model incrementally as new data arrives, rather than retraining from scratch. Libraries might use this for real-time recommender systems that adapt to changing patron behavior or evolving trends in resource usage.
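A minimal scikit-learn sketch where partial_fit updates the model as new circulation data arrives; features and labels are invented:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")   # "log" on older scikit-learn versions
classes = np.array([0, 1])               # must be declared on the first call

# First day's data:
model.partial_fit([[12, 3], [1, 0]], [1, 0], classes=classes)
# Next day's data arrives: update incrementally, no full retrain.
model.partial_fit([[9, 2], [0, 1]], [1, 0])
print(model.predict([[10, 2]]))
```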
Further Reading: https://en.wikipedia.org/wiki/Incremental_learning
OpenAI
OpenAI is an AI research organization famous for developing advanced models like GPT. Libraries may explore OpenAI's tools for natural language understanding, automated summarization, or innovative search experiences.
Further Reading: https://openai.com/
Ontology
An ontology defines relationships between concepts in a given domain. Libraries use ontologies to structure knowledge about authors, subjects, or periods, improving digital collections' organization and semantic linking.
Further Reading: https://en.wikipedia.org/wiki/Ontology_(information_science)
Overfitting
Overfitting happens when an AI model learns noise or random fluctuations in the training data, performing poorly on new data. In libraries, overfitting can lead to inaccurate resource recommendations or misclassification of new items.
Further Reading: https://en.wikipedia.org/wiki/Overfitting
Predictive Analytics
Predictive analytics uses historical data to forecast future events or trends. Libraries use these techniques to plan budgets, manage resource demand, and anticipate collection usage patterns.
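A minimal scikit-learn sketch that fits a trend to invented monthly checkout counts and forecasts the next month:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

months = np.array([[1], [2], [3], [4], [5]])
checkouts = np.array([400, 420, 435, 455, 470])   # invented history
trend = LinearRegression().fit(months, checkouts)
print(trend.predict([[6]]))                       # forecast for month 6
```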
Further Reading: https://en.wikipedia.org/wiki/Predictive_analytics
Predictive Coding
Predictive coding automates document review by ranking the relevancy of items (often used in legal e-discovery). Libraries might apply it to expedite sorting through extensive text archives or pinpointing documents aligned with specific research needs.
Further Reading: https://en.wikipedia.org/wiki/Technology_assisted_review
PyTorch
PyTorch is an open-source machine learning framework popular for its flexible, pythonic design. Libraries or research labs may use PyTorch to develop deep learning models for classification, recommendation, or digitization projects.
Further Reading: https://pytorch.org/
Python
Python is a high-level programming language widely used in AI, data science, and automation. Thanks to its extensive ecosystem of data-centric packages, libraries often select Python to prototype AI tools like chatbots or text mining pipelines.
Further Reading: https://www.python.org/
R
R is a language designed for statistical computing and graphics. Librarians use it to clean, analyze, and visualize data in research data support or to evaluate library usage metrics.
Further Reading: https://www.r-project.org/
Recommender System
A recommender system predicts and suggests items (e.g., books or articles) a user might prefer. Libraries implement them to personalize the user experience, guiding patrons to resources aligned with their interests or research areas.
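A minimal item-to-item sketch with scikit-learn; the patron-by-title loan matrix is invented:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

titles = ["Dune", "Foundation", "Hamlet"]
# Rows = patrons, columns = titles; 1 means the patron borrowed the title.
loans = np.array([
    [1, 1, 0],
    [1, 1, 0],
    [0, 0, 1],
])
similarity = cosine_similarity(loans.T)        # title-by-title similarity

# Recommend the title most similar to "Dune" (excluding itself).
dune = titles.index("Dune")
scores = similarity[dune].copy()
scores[dune] = -1
print("Patrons who borrowed Dune may like:", titles[int(scores.argmax())])
```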
Further Reading: https://en.wikipedia.org/wiki/Recommender_system
Recurrent Neural Network (RNN)
RNNs are neural networks that handle sequential data, such as text or time series. Libraries use RNNs to process user queries, parse textual archives, or predict seasonal trends in resource circulation.
Further Reading: https://en.wikipedia.org/wiki/Recurrent_neural_network
Reinforcement Learning
Reinforcement learning trains agents through trial-and-error interactions with an environment. While more common in robotics, libraries might use it to optimize recommendation engines that adjust suggestions based on patron feedback over time.
Further Reading: https://en.wikipedia.org/wiki/Reinforcement_learning
Robotic Process Automation (RPA)
RPA uses software "bots" to automate repetitive tasks like data entry or record updating. Libraries can deploy RPA to streamline workflows, such as uploading new e-book records or batch-processing digitized content.
Further Reading: https://en.wikipedia.org/wiki/Robotic_process_automation
Scikit-Learn
Scikit-Learn is a Python library that offers user-friendly machine-learning algorithms. Librarians or staff can use it to build prototypes for classification, regression, and clustering, for instance, to categorize incoming materials or analyze user behavior.
Further Reading: https://scikit-learn.org/
Semi-Structured Data
Semi-structured data does not fit a rigid schema but includes identifiable tags or markers (like XML or JSON). Libraries handle semi-structured data in metadata records, enabling more flexible analysis and interoperability than fully unstructured content.
Further Reading: https://en.wikipedia.org/wiki/Semi-structured_data
Semantic Web
The Semantic Web aims to make web data machine-readable through defined ontologies and relationships. Libraries use Semantic Web technologies to create Linked Data catalogs, enriching the user experience with context and external resources.
Further Reading: https://en.wikipedia.org/wiki/Semantic_Web
Sentiment Analysis
Sentiment analysis classifies the attitudes or emotions expressed in text. Libraries might use sentiment analysis to evaluate feedback forms or social media posts about library services and inform improvements.
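A minimal sketch assuming nltk and its VADER lexicon; the feedback comments are invented:

```python
import nltk
nltk.download("vader_lexicon", quiet=True)   # one-time lexicon download
from nltk.sentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
for comment in ["The new study rooms are fantastic!",
                "The website search is confusing and slow."]:
    # compound score ranges from -1 (negative) to +1 (positive)
    print(analyzer.polarity_scores(comment)["compound"], comment)
```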
Further Reading: https://en.wikipedia.org/wiki/Sentiment_analysis
Spark
Apache Spark is an open-source engine for large-scale data processing. In many tasks, it offers faster performance than Hadoop's MapReduce. Libraries can use Spark to speed up text mining or run machine learning workloads across massive digital collections.
Further Reading: https://en.wikipedia.org/wiki/Apache_Spark
Structured Data
Structured data is organized into a predefined schema, such as rows and columns. Library catalogs and MARC records are classic examples. These schemas enable efficient searching, indexing, and integration with AI-driven classification or recommendation engines.
Further Reading: https://en.wikipedia.org/wiki/Structured_data
Supervised Learning
Supervised learning teaches models to classify or predict outcomes using labeled training examples. Librarians can use it to auto-tag resources (e.g., "history" vs. "art") or predict which materials patrons are likely to borrow next.
Further Reading: https://en.wikipedia.org/wiki/Supervised_learning
Synthetic Data
Synthetic data is artificially generated rather than collected from real-world events. Libraries may produce artificial data to train AI systems without exposing sensitive patron information, preserving privacy while enhancing model performance.
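A minimal scikit-learn sketch that generates artificial records so a model can be prototyped without touching real patron data:

```python
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=5,
                           n_informative=3, random_state=0)
print(X.shape, y[:10])   # 200 artificial "patron" records, no real people
```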
Further Reading: https://en.wikipedia.org/wiki/Synthetic_data
Synthetic Oversampling (e.g., SMOTE)
SMOTE (Synthetic Minority Over-sampling Technique) and similar methods balance class distributions by generating new, artificial samples. Libraries can address skewed data, such as a rare genre category, improving model accuracy.
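A minimal sketch assuming the imbalanced-learn package; the feature matrix and rare class are invented:

```python
import numpy as np
from imblearn.over_sampling import SMOTE

X = np.array([[1, 2], [2, 1], [2, 2], [1, 1], [8, 9], [9, 8]])
y = np.array([0, 0, 0, 0, 1, 1])   # class 1 (the rare genre) is scarce

# k_neighbors=1 because only two minority samples exist in this toy set.
X_balanced, y_balanced = SMOTE(k_neighbors=1).fit_resample(X, y)
print(np.bincount(y_balanced))     # classes are now balanced
```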
Further Reading: https://en.wikipedia.org/wiki/Oversampling_and_undersampling_in_data_analysis#SMOTE
TensorFlow
TensorFlow is an open-source library by Google used to build and train neural networks. Libraries explore TensorFlow to develop custom deep-learning solutions for tasks such as OCR, image classification, or advanced text analytics.
Further Reading: https://www.tensorflow.org/
Tokenization
Tokenization is a key step in NLP that breaks text into smaller units (tokens), such as words or subwords. Libraries performing text analysis on extensive collections rely on tokenization to prepare data for more complex processing, such as classification or clustering.
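A standard-library sketch using a simple regex; production systems typically use richer tokenizers (e.g., subword models):

```python
import re

def tokenize(text: str) -> list[str]:
    # Lowercase, then extract runs of word characters as tokens.
    return re.findall(r"\w+", text.lower())

print(tokenize("The Library's rare-book room opens at 9."))
# -> ['the', 'library', 's', 'rare', 'book', 'room', 'opens', 'at', '9']
```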
Further Reading: https://en.wikipedia.org/wiki/Tokenization_(language)
TPU (Tensor Processing Unit)
A TPU is a specialized chip created by Google to accelerate machine learning operations. Libraries or academic consortia with demanding AI research needs might use TPUs to train large neural network models efficiently.
Further Reading: https://en.wikipedia.org/wiki/Tensor_Processing_Unit
Training Data
Training data is the labeled information a model learns from. Libraries must ensure that the training data used for AI applications, such as automated classification, accurately represents collections and user needs to prevent biased outcomes.
Further Reading: https://en.wikipedia.org/wiki/Training,_test,_and_validation_sets
Transfer Learning
Transfer learning reuses a model trained on one task as a starting point for another, reducing required data and training time. Libraries might adopt a pre-trained language model to classify niche historical documents or domain-specific texts.
Further Reading: https://en.wikipedia.org/wiki/Transfer_learning
Transformer
A Transformer is a neural network architecture that processes data in parallel rather than sequentially, revolutionizing NLP tasks. Libraries benefit from Transformer-based tools for language translation, question-answering, or automatic summarization of extensive text collections.
Further Reading: https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)
Turing Test
The Turing Test, proposed by Alan Turing, measures a machine's ability to exhibit intelligence indistinguishable from a human. While more of a philosophical benchmark than a practical library tool, it underscores ongoing debates about AI's capabilities and limitations.
Further Reading: https://en.wikipedia.org/wiki/Turing_test
Underfitting
Underfitting occurs when a model is too simple to capture the data's underlying patterns, leading to poor performance. For example, underfitted models might fail to accurately categorize new books or produce weak library recommendations.
Further Reading: https://en.wikipedia.org/wiki/Overfitting#Underfitting
Unstructured Data
Unstructured data lacks a predefined schema and encompasses resources like text documents, images, or audio. Much of a library's digital collection is unstructured, requiring AI methods (e.g., NLP and computer vision) to extract meaningful insights.
Further Reading: https://en.wikipedia.org/wiki/Unstructured_data
Unsupervised Learning
Unsupervised learning discovers patterns in unlabeled data, grouping similar items without predefined categories. Libraries use it to unearth hidden topics in large document sets or segment patrons based on usage behaviors.
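A minimal scikit-learn sketch that clusters invented titles by TF-IDF similarity with no labels:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

titles = ["introduction to astronomy", "stars and astronomy",
          "home gardening basics", "gardening for beginners"]
X = TfidfVectorizer().fit_transform(titles)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(list(zip(titles, labels)))   # shared vocabulary should separate the topics
```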
Further Reading: https://en.wikipedia.org/wiki/Unsupervised_learning
Virtual Assistant
A virtual assistant uses voice or text interfaces to perform tasks or services based on user requests. Libraries can deploy virtual assistants to answer FAQs, help patrons navigate the catalog, or manage account inquiries.
Further Reading: https://en.wikipedia.org/wiki/Virtual_assistant
Word Embeddings
Word embeddings are vector representations of words that capture semantic relationships. Libraries use word embeddings to improve search relevance, cluster documents by topic, and detect similarities between subject terms.
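A minimal gensim sketch; the toy corpus is invented, and real collections need far more text for meaningful similarities:

```python
from gensim.models import Word2Vec

sentences = [
    ["medieval", "manuscripts", "archive"],
    ["digital", "archive", "preservation"],
    ["medieval", "archive", "preservation"],
]
model = Word2Vec(sentences, vector_size=50, min_count=1, window=2, epochs=50)
print(model.wv.most_similar("archive", topn=2))   # nearest words by vector
```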
Further Reading: https://en.wikipedia.org/wiki/Word_embedding
Zero-Shot Learning
Zero-shot learning allows a model to classify new categories it has never explicitly seen during training. Libraries with ever-expanding collections can adopt zero-shot techniques to handle emerging topics without requiring extensive labeled samples.
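A minimal transformers sketch; the candidate subject labels were never seen as training classes, and the model weights download on first use:

```python
from transformers import pipeline

classify = pipeline("zero-shot-classification",
                    model="facebook/bart-large-mnli")
result = classify(
    "A survey of coral reef restoration techniques in the Pacific.",
    candidate_labels=["marine biology", "medieval history", "economics"],
)
print(result["labels"][0])   # top-ranked label, likely "marine biology"
```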
Further Reading: https://en.wikipedia.org/wiki/Zero-shot_learning
How Librarians Benefit
These 100 terms form a strong baseline of AI knowledge, helping librarians evaluate new technologies, collaborate with IT teams, and uphold ethical standards in emerging library services. Understanding AI concepts positions libraries to innovate responsibly and deliver meaningful community support.