Cost-Aware Hierarchical Active Learning and Sub-Linear Time Embedding Based Deep Retrieval
The research in this dissertation consists of two parts: an active learning algorithm for hierarchical labels and an embedding-based retrieval algorithm.

In the first part, we present a new approach for learning hierarchically decomposable concepts. The approach learns a high-level classifier (e.g., location vs. non-location) by separately learning multiple finer-grained classifiers (e.g., museum vs. non-museum) and then combining the results. Soliciting labels at a finer level of granularity than that of the target concept is a new approach to active learning, which we term active over-labeling. In experiments on named-entity recognition (NER) and document classification tasks, we show that active over-labeling substantially improves area under the precision-recall curve compared with standard passive or active learning. Finally, because finer-grained labels may be more expensive to obtain, we also present a cost-sensitive active learner that uses a multi-armed bandit approach to dynamically choose the label granularity to target, and we show that this bandit-based learner is robust to differences in label cost and labeling budget.

In the second part, we present a Bayesian Deep Structured Semantic Model (BDSSM) that performs efficient retrieval over a large pool of candidates for real-time applications, e.g., search engines, digital ads, and recommendation systems. The efficiency is achieved by indexing items into groups based on sparse representations of their embeddings during offline pre-computation. In the online retrieval phase, the algorithm retrieves and ranks only items from indices that are relevant to the query. We explore optimization strategies that make the sparse representations sparser. In evaluation, the algorithm is compared with other popular clustering-based, hashing-based, and tree-based retrieval methods, measuring differences along multiple dimensions, including retrieval recall, embedding storage, and CPU time.
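The bandit-driven granularity choice described in the first part can be illustrated with a cost-normalized UCB1 sketch. This is a minimal illustration under assumptions, not the dissertation's actual formulation: the arm names, costs, reward function, and exploration constant are all hypothetical, and the reward stands in for an observed performance gain (e.g., validation PR-AUC improvement) per label query.

```python
import math

def choose_granularity(costs, reward_fn, budget, c=2.0):
    """Cost-normalized UCB1 over labeling granularities (arms).

    costs:     arm -> cost of one label query at that granularity
    reward_fn: arm -> observed reward for one query at that granularity
    budget:    total labeling budget
    Returns arm -> number of label queries issued.
    """
    arms = list(costs)
    counts = {a: 0 for a in arms}
    total = {a: 0.0 for a in arms}
    spent, t = 0.0, 0

    def pull(a):
        nonlocal spent, t
        counts[a] += 1
        total[a] += reward_fn(a)
        spent += costs[a]
        t += 1

    def index(a):
        # UCB index divided by cost: estimated gain per unit of budget
        mean = total[a] / counts[a]
        bonus = math.sqrt(c * math.log(t) / counts[a])
        return (mean + bonus) / costs[a]

    for a in arms:  # initialize: pull each arm once
        pull(a)
    while True:
        affordable = [a for a in arms if spent + costs[a] <= budget]
        if not affordable:
            break
        pull(max(affordable, key=index))
    return counts
```

Normalizing the UCB index by per-arm cost makes the learner favor whichever granularity delivers the most estimated improvement per unit of budget, while the exploration bonus keeps it sampling cheaper or less-tried granularities.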
We show that BDSSM outperforms these alternatives in both recall and CPU time under the same storage limit. Finally, we also show that the algorithm can support exploration when the model is repeatedly retrained.
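The offline-indexing and online-retrieval scheme described above can be sketched as an inverted index over the nonzero dimensions of the sparse embeddings. This is a minimal illustration, not the dissertation's implementation: the item vectors, dictionary-based sparse format, and dot-product scoring rule are assumptions for the sketch.

```python
from collections import defaultdict

def build_index(item_vecs):
    """Offline: invert sparse item embeddings into dimension -> postings.

    item_vecs: item_id -> {dimension: weight} sparse embedding
    """
    index = defaultdict(list)
    for item_id, vec in item_vecs.items():
        for dim, w in vec.items():
            index[dim].append((item_id, w))
    return index

def retrieve(index, query_vec, k):
    """Online: score only items sharing a nonzero dimension with the query."""
    scores = defaultdict(float)
    for dim, qw in query_vec.items():
        for item_id, w in index.get(dim, ()):
            scores[item_id] += qw * w  # accumulate the sparse dot product
    # rank candidates by accumulated score and keep the top k
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

Because scoring touches only the postings lists for the query's nonzero dimensions, work scales with the number of candidates sharing those dimensions rather than with the full item pool, which is why sparser representations directly reduce online CPU time.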
Artificial intelligence|Information technology|Computer science|Computer engineering|Information science
Mo, Yuji, "Cost-Aware Hierarchical Active Learning and Sub-Linear Time Embedding Based Deep Retrieval" (2022). ETD collection for University of Nebraska-Lincoln. AAI29167709.