Graduate Studies, UNL

Dissertations and Doctoral Documents, University of Nebraska-Lincoln, 2023–

First Advisor

Yi Qian

Degree Name

Doctor of Philosophy (Ph.D.)

Committee Members

Betty Love, Hamid Sharif, Kuan Zhang

Department

Engineering

Date of this Version

12-2025

Document Type

Dissertation

Citation

A dissertation presented to the faculty of the Graduate College of the University of Nebraska in partial fulfillment of requirements for the degree Doctor of Philosophy (Ph.D.)

Major: Engineering

Under the supervision of Professor Yi Qian

Lincoln, Nebraska, December 2025

Comments

Copyright 2025, Owana Marzia Moushi. Used by permission.

Abstract

Vehicular networks are susceptible to various security flaws and attacks. Cryptographic techniques are used in vehicular networks, but these alone cannot adequately secure the network. Identifying various types of attacks is therefore necessary to secure vehicular communication networks. In this dissertation, we focus on detecting various insider attacks in vehicular networks to enhance network security.

Our first contribution in this dissertation is the detection of data replay and data replay Sybil attacks in vehicular networks, in both binary and multi-class settings. A publicly available dataset, VeReMi-Extension, is used to detect these attacks. The dataset is reformulated to generate novel features aimed at accurately detecting attacks in vehicular networks. Machine learning-based methods are then applied to the reformulated dataset for attack detection. Extensive simulation results show that the proposed scheme effectively detects these attacks in both binary and multi-class scenarios.
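The following is a minimal, illustrative sketch (in Python) of the detection pipeline described above. The file name, column names, and the choice of a random forest classifier are assumptions made for illustration; the abstract does not specify the exact features or machine learning methods used.

# Illustrative sketch of ML-based attack detection on the reformulated
# VeReMi-Extension data. The file name, column names, and the random forest
# model are assumptions, not the dissertation's exact configuration.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical CSV holding the reformulated features and attack labels.
df = pd.read_csv("veremi_extension_reformulated.csv")

X = df.drop(columns=["attack_label"])           # engineered features (assumed column name)
y_multi = df["attack_label"]                    # e.g., benign / data replay / data replay Sybil
y_binary = (y_multi != "benign").astype(int)    # collapse to attack vs. benign

for task, y in [("binary", y_binary), ("multi-class", y_multi)]:
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=42)
    clf = RandomForestClassifier(n_estimators=200, random_state=42)
    clf.fit(X_tr, y_tr)
    print(f"--- {task} detection ---")
    print(classification_report(y_te, clf.predict(X_te)))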

Connected and autonomous vehicles (CAVs) leverage sensing, decision-making, and wireless communication technologies to enable autonomous driving. In particular, CAVs use various types of sensors and communication channels to share real-time data within the network. However, this real-time data sharing makes CAVs vulnerable to a range of attacks. Recently, Federated Learning (FL) has been widely applied as a decentralized technique to train machine learning (ML) models on local devices, with the updated parameters shared for secure aggregation into a global model. However, even though the original data is not shared, FL remains vulnerable to data poisoning attacks. For example, a malicious vehicle can introduce poisoned data during the training phase, degrading the performance of the global model. We propose the Malicious Client Detection Scheme (MCDS) to counter such data poisoning attacks and detect malicious clients among CAVs. In MCDS, each local client sends two key metrics, i.e., average accuracy and model weights, to the central server, which detects malicious clients and updates the global model. If the performance of any locally trained model deteriorates compared to the others, the corresponding local client is flagged as malicious and excluded from further processing, thereby mitigating data poisoning attacks. The new metric applied in MCDS allows anomalous clients to be identified without using any threshold values, so only trustworthy client updates are used to update the model. The results demonstrate that the proposed MCDS can successfully identify and mitigate data poisoning attacks from CAVs.
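Below is a minimal sketch of one MCDS-style aggregation round. The anomaly rule shown (2-means clustering on the reported average accuracies) is an illustrative stand-in for the dissertation's threshold-free metric, not its exact formulation, and the client values are hypothetical.

# Sketch of an MCDS-style round: clients report (average accuracy, weights),
# the server excludes the deteriorated cluster, then averages the rest.
# The clustering rule is an assumed stand-in for the scheme's own metric.
import numpy as np
from sklearn.cluster import KMeans

def mcds_aggregate(client_updates):
    """client_updates: list of (avg_accuracy, weight_vector) tuples."""
    accs = np.array([[acc] for acc, _ in client_updates])
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(accs)

    # Treat the cluster with the higher mean accuracy as trustworthy and
    # drop the other cluster's clients from aggregation.
    good = max(range(2), key=lambda c: accs[labels == c].mean())
    trusted = [w for (_, w), lab in zip(client_updates, labels) if lab == good]

    # Federated averaging over trusted clients only.
    return np.mean(trusted, axis=0)

# Toy round: four benign clients and one poisoned client (hypothetical values).
updates = [(0.91, np.random.rand(10)) for _ in range(4)] + [(0.42, np.random.rand(10))]
global_weights = mcds_aggregate(updates)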

In addition, we propose two other schemes focused on detecting backdoor attacks and label-flipping attacks in FL-based CAVs. For detecting backdoor attacks, we utilize SHAP and weight distance with K-means clustering, while for detecting label-flipping attacks, we utilize SHAP, Grad-CAM, and weight distance with K-means clustering. In both cases, the proposed schemes detect these attacks successfully. However, the label-flipping detection scheme incurs some false positives, whereas the backdoor detection scheme incurs none. Overall, the proposed work can successfully identify various insider attacks from CAVs and thereby enhance their security.
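As an illustration of the weight-distance and K-means component shared by both schemes, the sketch below clusters client updates by their distance from the global model and flags the farther cluster. The distance definition and decision rule are assumptions, and the SHAP and Grad-CAM steps are only indicated in comments.

# Sketch of the weight-distance + K-means step: cluster clients by the L2
# distance of their update from the global model and flag the far cluster.
# The distance choice and decision rule are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

def flag_suspicious_clients(global_weights, client_weights):
    dists = np.array([[np.linalg.norm(w - global_weights)] for w in client_weights])
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(dists)

    # Assume the cluster farther from the global model holds the backdoored
    # or label-flipped updates; SHAP / Grad-CAM evidence would then be used
    # to confirm before a client is excluded.
    bad = max(range(2), key=lambda c: dists[labels == c].mean())
    return [i for i, lab in enumerate(labels) if lab == bad]

# Toy example with hypothetical weight vectors; the last client is poisoned.
g = np.zeros(10)
clients = [np.random.normal(0, 0.05, 10) for _ in range(4)] + [np.random.normal(2.0, 0.05, 10)]
print(flag_suspicious_clients(g, clients))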

Advisor: Yi Qian
