Graduate Studies

First Advisor

Byrav Ramamurthy

Degree Name

Doctor of Philosophy (Ph.D.)

Committee Members

Lisong Xu, Massimiliano Pierobon, Yi Qian

Department

Computer Science

Date of this Version

8-2025

Document Type

Dissertation

Citation

A dissertation presented to the Graduate College of the University of Nebraska in partial fulfillment of the requirements for the degree of Doctor of Philosophy

Major: Computer Science

Under the supervision of Professor Byrav Ramamurthy

Lincoln, Nebraska, August 2025

Comments

Copyright 2025, Venkat Sai Suman Lamba Karanam. Used by permission.

Abstract

Networked systems have become increasingly complex, with new communication technologies and standards added every day. Machine Learning (ML) and Artificial Intelligence (AI) paradigms have been adopted in networks not only to solve many fundamental problems, but also to allow seamless integration of the components comprising them. The saying “let’s not reinvent the wheel” in ML/AI adoption implies that model architecture design is left to pure ML/AI researchers, while network researchers focus on input preprocessing (e.g., formatting packet data to be fed to a model), hyperparameter fine-tuning, and a trial-and-error search for the “best” result. This pipeline may work in theory, but it is not always practical, especially given the diversity of network behavior (e.g., communication patterns) across environments; even the network originally used will likely behave differently over time. This dissertation is built around the following principle: while mathematically modeling networks at large is an impossibly daunting task that no single work can address, it can be addressed across multiple arcs. Intelligent ML/AI techniques can help predict transfer patterns and resource usage, detect attacks, and even guide network infrastructure planning. For any semblance of network intelligence, modeling the statistical properties of the data is essential, whether at the micro- or macro-level and in the short or long term. This dissertation investigates the importance of careful mathematical modeling before and while adopting ML/AI techniques. We show that modeling network behavior (say, traffic patterns for a traffic forecasting model) with a mathematical model (say, a partial or ordinary differential equation, PDE or ODE) is sometimes more reliable than adopting a published ML model, even if it promises great results in its original publication. Such effort produces objective, optimization, and loss functions while keeping the ML/AI model architecture close to the baseline. We present proof-of-concept work across several selected problems inherent to networked systems, namely network management, large-scale network analysis, security and traffic classification, and traffic prediction. Each chapter of this dissertation focuses on one of these problems.
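To make the modeling-first workflow concrete, the following is a minimal Python sketch of fitting a toy traffic ODE to observed volumes, with the model residual serving as the loss function the abstract alludes to. The ODE form, parameter names, and data here are illustrative assumptions, not the dissertation's actual models.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import minimize

    # Hypothetical traffic volumes (e.g., Gbps sampled hourly over one day).
    rng = np.random.default_rng(0)
    t_obs = np.linspace(0.0, 24.0, 25)
    y_obs = 5.0 + 3.0 * np.sin(2 * np.pi * t_obs / 24.0) + rng.normal(0.0, 0.2, t_obs.size)

    def traffic_ode(t, y, a, b, c):
        # dy/dt = a*(b - y) + c*cos(2*pi*t/24): relaxation toward a mean level b
        # plus a diurnal forcing term; a toy stand-in for a transfer-behavior ODE.
        return a * (b - y) + c * np.cos(2.0 * np.pi * t / 24.0)

    def loss(params):
        # Mean squared error between the ODE trajectory and the observations;
        # this is the model-derived loss function.
        a, b, c = params
        sol = solve_ivp(traffic_ode, (t_obs[0], t_obs[-1]), [y_obs[0]],
                        t_eval=t_obs, args=(a, b, c))
        return float(np.mean((sol.y[0] - y_obs) ** 2))

    fit = minimize(loss, x0=[0.5, 5.0, 1.0], method="Nelder-Mead")
    print("fitted (a, b, c):", np.round(fit.x, 3), "MSE:", round(fit.fun, 4))

Once fitted, the calibrated ODE can forecast future traffic directly, or its residual loss can train an ML model whose architecture stays close to the baseline.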

Chapter 2 presents a runtime Deep Learning (DL)-based approach for network management and orchestration in a cross-layer optical network setting. Chapter 3 presents a mathematical framework that models the traffic records collected from separate backbone routers collectively, and helps study not only their high-level statistical properties but also the cross-correlations between different collection points (i.e., different routers). Chapter 4 presents a novel way of tackling adversarial attacks on traffic classifiers, attacks that divert the classifier from accurately classifying traffic samples. The classification model is designed to learn from encrypted packet bytes; adversarial retraining and uncertainty quantification are used to improve adversarial robustness. Chapter 5 presents a traffic prediction approach for Research and Education Networks (RENs) via a mathematical model in the form of a system of Ordinary Differential Equations (ODEs) designed to capture the underlying data transfer behavior. In Chapter 6, we study the data/cache access patterns in the US High-Energy Physics (HEP) environment and design hourly access pattern prediction techniques using Long Short-Term Memory (LSTM) and CatBoost models. In Chapter 7, we study the runtime network analysis problem, where “runtime network analysis” means any network task with strict real-time requirements. Motivated by the increasingly distributed nature of many network systems, we combine theory from online and incremental learning with the Federated Learning (FL) paradigm, yielding Runtime Online Federated Learning (ROFL).
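The pattern ROFL builds on, per-sample online updates at each client combined with federated aggregation at a server, can be illustrated with a minimal NumPy sketch. The linear model, client counts, and update rule below are illustrative assumptions, not the dissertation's actual ROFL algorithm.

    import numpy as np

    rng = np.random.default_rng(1)

    def online_sgd_step(w, x, y, lr=0.01):
        # One incremental least-squares update on a single streaming sample,
        # as in online/incremental learning.
        return w - lr * (w @ x - y) * x

    # Hypothetical setup: 3 clients, each observing its own data stream
    # drawn from a shared linear model (illustrative only).
    n_clients, n_features, n_rounds, samples_per_round = 3, 4, 50, 20
    true_w = rng.normal(size=n_features)
    global_w = np.zeros(n_features)

    for _ in range(n_rounds):
        local_ws = []
        for _ in range(n_clients):
            w = global_w.copy()
            for _ in range(samples_per_round):  # samples arrive one at a time
                x = rng.normal(size=n_features)
                y = true_w @ x + rng.normal(0.0, 0.1)
                w = online_sgd_step(w, x, y)
            local_ws.append(w)
        # FedAvg-style aggregation: the server averages the clients' models.
        global_w = np.mean(local_ws, axis=0)

    print("recovered weights:", np.round(global_w, 2))
    print("true weights:     ", np.round(true_w, 2))

Because each client updates on samples as they arrive rather than on stored batches, this scheme suits the strict real-time setting that motivates Chapter 7.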

Advisor: Byrav Ramamurthy
