Graduate Studies, UNL

Dissertations and Doctoral Documents from University of Nebraska-Lincoln, 2023–

First Advisor

Yi Qian

Degree Name

Doctor of Philosophy (Ph.D.)

Committee Members

Hamid Sharif-Kashani, Witawas Srisa-An

Department

Engineering

Date of this Version

2025

Document Type

Dissertation

Citation

A dissertation presented to the faculty of the Graduate College of the University of Nebraska in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Ph.D.)

Major: Engineering

Under the supervision of Professor Yi Qian

Lincoln, Nebraska, December 2025

Comments

Copyright 2025, the author. Used by permission.

Abstract

Distributed machine learning (DML) is a core component of modern intelligent systems, enabling collaborative training across devices such as mobile clients, vehicles, and edge networks. However, the decentralized nature of these systems introduces vulnerabilities, particularly data poisoning attacks that compromise model integrity and degrade performance. Traditional defenses, such as statistical filtering, robust aggregation, and privacy-preserving techniques, often struggle to adapt to overwhelming adversaries or to operate under strict privacy and real-time constraints. This dissertation proposes misbehavior detection schemes based on reinforcement learning (RL) and deep reinforcement learning (DRL) that dynamically identify poisoning attempts in distributed AI systems, including federated learning, model parallelism, and vehicular communication networks. The presented methods leverage behavioral patterns and aggregation-level indicators, such as gradient similarity, client contribution consistency, and clustering, to learn optimal defense policies without requiring access to raw client data. First, an RL-driven trust management mechanism is developed to detect label-flipping and backdoor attacks in federated learning by tracking client updates over time and adaptively filtering malicious contributors. Second, a combined RL and graph-based supervised framework is introduced to detect falsified position-reporting attacks in vehicular communication networks, improving resilience under dynamic mobility patterns. Finally, a scalable DRL architecture is designed to identify malicious clients in both federated and model-parallel training environments, demonstrating strong robustness even under severe adversarial conditions. Comprehensive experiments across MNIST, FMNIST, CIFAR-10, and real-world vehicular datasets show that the proposed solutions significantly improve detection accuracy, maintain global model performance, and generalize across attack intensities, surpassing existing benchmark defenses. Overall, this work advances adaptive security in distributed AI by establishing reinforcement learning as a powerful foundation for proactive, privacy-preserving, and scalable poisoning defense.
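For illustration only, the following minimal Python sketch shows the kind of aggregation-level, similarity-based trust filtering the abstract refers to: scoring each client update against the round's mean update and suppressing clients whose trust falls below a threshold. It is not the dissertation's algorithm; the function names (filter_round, update_trust), the fixed threshold, and the smoothing rate are assumptions made for the example, whereas the dissertation learns the filtering policy with RL/DRL rather than hard-coding it.

    import numpy as np

    def cosine_similarity(a, b):
        # Cosine similarity between two flattened client update vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def update_trust(trust, similarity, lr=0.1):
        # Exponentially smoothed trust score: alignment with the aggregate raises
        # trust, divergence lowers it (a hand-written stand-in for a learned policy).
        return (1.0 - lr) * trust + lr * similarity

    def filter_round(client_updates, trust, threshold=0.4):
        # One aggregation round: score each client's update against the mean update,
        # refresh its trust score, and keep only clients above the trust threshold.
        mean_update = np.mean(list(client_updates.values()), axis=0)
        for cid, upd in client_updates.items():
            sim = cosine_similarity(upd, mean_update)
            trust[cid] = update_trust(trust.get(cid, 0.5), sim)
        kept = {cid: upd for cid, upd in client_updates.items() if trust[cid] >= threshold}
        return kept, trust

    # Toy round: three benign clients and one poisoning client whose update points
    # opposite to the benign direction (as a label-flipping update often does).
    rng = np.random.default_rng(0)
    benign = rng.normal(1.0, 0.1, size=(3, 8))
    updates = {f"client{i}": benign[i] for i in range(3)}
    updates["attacker"] = -benign[0]

    kept, trust = filter_round(updates, trust={})
    print(sorted(kept))   # ['client0', 'client1', 'client2'] -- the attacker is filtered
    print(trust)

In this toy setup the attacker's update is nearly anti-aligned with the round mean, so its trust drops below the threshold in a single round, while the benign clients' trust rises; an RL agent, as proposed in the dissertation, would instead learn when and how aggressively to filter from such indicators.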

Advisor: Yi Qian

Included in

Engineering Commons
