Graduate Studies, UNL

Dissertations and Doctoral Documents from University of Nebraska-Lincoln, 2023–

First Advisor

Stephen Scott

Degree Name

Doctor of Philosophy (Ph.D.)

Committee Members

Ashok Samal, Bertrand Clarke, Qiuming Yao

Department

Computer Science

Date of this Version

2025

Document Type

Dissertation

Citation

A dissertation presented to the faculty of the Graduate College of the University of Nebraska in partial fulfillment of requirements for the degree Doctor of Philosophy (Ph.D.)

Major: Computer Science

Under the supervision of Professor Stephen Scott

Lincoln, Nebraska, December 2025

Comments

Copyright 2025, the author. Used by permission.

Abstract

The rapid adoption of deep learning has come at the cost of properties long valued in artificial intelligence: intelligibility and safety. This dissertation develops methods that restore these properties by coupling neural networks with symbolic structure.

First, for supervised classification, I propose a differentiable decision tree integrated with a supervised variational autoencoder. The resulting model maintains competitive accuracy and generative performance while exposing clear macro-features in its latent space, improving interpretability.
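
For intuition, a minimal sketch of a differentiable (soft) decision tree follows: each internal node routes its input with a sigmoid gate, and the prediction is a path-probability-weighted mixture of leaf class distributions. The class name SoftDecisionTree, the chosen depth, and the PyTorch framing are illustrative assumptions, not the dissertation's architecture, which additionally couples such a tree with a supervised variational autoencoder.

import torch
import torch.nn as nn

class SoftDecisionTree(nn.Module):
    """Depth-d soft decision tree (illustrative sketch, not the dissertation's model):
    every internal node routes its input left/right with a sigmoid gate, and each
    leaf holds class logits."""

    def __init__(self, in_dim: int, n_classes: int, depth: int = 3):
        super().__init__()
        self.depth = depth
        n_internal = 2 ** depth - 1                      # gating nodes, level order
        n_leaves = 2 ** depth
        self.gates = nn.Linear(in_dim, n_internal)       # one sigmoid gate per node
        self.leaf_logits = nn.Parameter(torch.zeros(n_leaves, n_classes))

    def forward(self, x):
        p_right = torch.sigmoid(self.gates(x))           # (batch, n_internal)
        batch = x.shape[0]
        path = torch.ones(batch, 1, device=x.device)     # probability of reaching the root
        node = 0
        for _ in range(self.depth):
            n_nodes = path.shape[1]
            pr = p_right[:, node:node + n_nodes]          # gates at this level
            # Children probabilities, interleaved left/right in level order.
            path = torch.stack([path * (1 - pr), path * pr], dim=2).reshape(batch, -1)
            node += n_nodes
        # Prediction = mixture of leaf class distributions weighted by path probabilities.
        leaf_probs = torch.softmax(self.leaf_logits, dim=-1)
        return path @ leaf_probs                          # (batch, n_classes)

# Example: classify 2-D latent codes into 3 classes; each output row sums to 1.
tree = SoftDecisionTree(in_dim=2, n_classes=3, depth=3)
z = torch.randn(8, 2)
print(tree(z).sum(dim=1))

Because every routing decision is a sigmoid rather than a hard threshold, the whole tree is differentiable and can be trained end to end alongside an encoder, which is what makes this style of model compatible with a variational autoencoder's latent space.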

Second, for reinforcement learning, I extend constrained Markov decision processes by specifying constraints in formal languages. The resulting formal-language-constrained MDP admits automata for state augmentation, cost shaping, and action shaping, improving safety during exploration while remaining computationally tractable.
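
As an illustrative sketch under assumed interfaces (the DFA, ConstraintWrapper, label_fn, and toy Walk environment below are hypothetical, not the dissertation's code), a regular-language constraint can be tracked by stepping an automaton alongside the environment: the automaton state augments the observation, and leaving the accepting set incurs a shaped cost.

from dataclasses import dataclass

@dataclass
class DFA:
    """Deterministic finite automaton over event labels (illustrative)."""
    start: int
    accepting: set
    trans: dict                                   # (state, label) -> next state

    def step(self, q, label):
        return self.trans.get((q, label), q)      # stay put on unlabeled events

class ConstraintWrapper:
    """Wraps an environment so the automaton state augments the observation and
    violations of the formal-language constraint add a shaped cost. The env
    interface (reset/step) and label_fn are assumptions for this sketch."""

    def __init__(self, env, dfa, label_fn, violation_cost=1.0):
        self.env, self.dfa, self.label_fn = env, dfa, label_fn
        self.violation_cost = violation_cost

    def reset(self):
        obs = self.env.reset()
        self.q = self.dfa.start
        return (obs, self.q)                                        # state augmentation

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self.q = self.dfa.step(self.q, self.label_fn(obs, action))
        cost = 0.0 if self.q in self.dfa.accepting else self.violation_cost
        return (obs, self.q), reward, cost, done, info              # cost shaping

# Toy usage: a 1-D walk where the constraint is "never visit position 3".
class Walk:
    def reset(self):
        self.x = 0
        return self.x
    def step(self, a):                            # a in {-1, +1}
        self.x += a
        return self.x, 0.0, abs(self.x) > 5, {}

dfa = DFA(start=0, accepting={0}, trans={(0, "bad"): 1, (1, "bad"): 1})
env = ConstraintWrapper(Walk(), dfa,
                        label_fn=lambda obs, a: "bad" if obs == 3 else "ok")
state = env.reset()
state, r, cost, done, _ = env.step(+1)

Action shaping follows the same pattern: before executing an action, the agent can check which automaton transition it would trigger and mask actions that would leave the accepting set.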

Third, I address safe exploration in nonlinear continuous-control tasks by integrating dynamics learned by deep neural networks with symbolic reachability analysis. Using star sets, I extract exact piecewise-affine dynamics and synthesize cycle-based trajectories that are provably safe with respect to the learned model. The hybrid architecture intervenes during reinforcement learning to prevent early failures, yielding both theoretical guarantees and empirical safety improvements.
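
One ingredient can be illustrated in isolation: within a fixed ReLU activation region, a network's input-output map is exactly affine, and that affine piece can be read off by composing layer weights with the activation mask. The sketch below assumes a plain NumPy MLP and is only a simplified stand-in for the dissertation's star-set reachability machinery.

import numpy as np

def local_affine_map(weights, biases, x):
    """For a ReLU MLP f(x) = W_L(...relu(W_1 x + b_1)...) + b_L, return the exact
    affine piece (A, c) with f(y) = A y + c for all y in the same activation
    region as x. Illustrative sketch only."""
    A = np.eye(len(x))
    c = np.zeros(len(x))
    h = np.asarray(x, dtype=float)
    for i, (W, b) in enumerate(zip(weights, biases)):
        pre = W @ h + b
        # Compose the affine map accumulated so far with this layer.
        A, c = W @ A, W @ c + b
        if i < len(weights) - 1:                  # ReLU on hidden layers only
            mask = (pre > 0).astype(float)
            h = mask * pre
            A, c = mask[:, None] * A, mask * c    # zero out inactive units
        else:
            h = pre
    return A, c

# Toy check on a random 2-layer network: the affine piece reproduces f(x).
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
bs = [rng.standard_normal(4), rng.standard_normal(2)]
x = rng.standard_normal(3)
A, c = local_affine_map(Ws, bs, x)
f_x = Ws[1] @ np.maximum(Ws[0] @ x + bs[0], 0) + bs[1]
assert np.allclose(A @ x + c, f_x)

Once the learned dynamics are expressed as affine maps over such regions, standard reachability tools (for example, star-set propagation) can bound where trajectories can go and veto unsafe actions before they are executed.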

Together, these contributions show that combining deep learning with symbolic reasoning can make machine learning models more intelligible and reliably safe, providing a foundation for trustworthy deployment in safety-critical domains.

Advisor: Stephen Scott
