Department of Computer Science and Engineering


First Advisor

Hoang-Dung Tran

Date of this Version

Spring 5-2-2024

Document Type

Article

Citation

A thesis presented to the faculty of the Graduate College at the University of Nebraska in partial fulfillment of the requirements for the degree of Master of Science

Major: Computer Science

Under the supervision of Professor Hoang-Dung Tran

Lincoln, Nebraska, May 2024

Comments

Copyright 2024, Mykhailo Ivashchenko. Used by permission.

Abstract

Deep Neural Networks (DNNs) have become a popular instrument for solving various real-world problems. The sophisticated structure of DNNs allows them to learn complex representations and features. However, their architectural complexity and reliance on floating-point arithmetic make them computationally expensive. For this reason, a more lightweight type of neural network, the Binary Neural Network (BNN), is widely used on edge devices such as microcomputers and microcontrollers. Like other DNNs, BNNs are vulnerable to adversarial attacks: even a small perturbation of the input may lead to an errant output. Unfortunately, only a few approaches have been proposed for verifying BNNs.

In this thesis, we describe an approach to verifying BNNs on a continuous input space using Star reachability analysis. This is accomplished by adapting Star reachability to the Sign layer, the key activation layer in BNNs. We discuss how the Sign activation affects the verification process compared to regular feedforward and convolutional neural networks. We demonstrate that the approach can compute both exact and overapproximate reachable sets of BNNs with Sign activation functions and use them for verification. Our experiments show that the approach efficiently constructs a complete set of counterexamples when a network is unsafe. The experimental results show that our Star-based approach is less conservative, more efficient, and more scalable than the recent SMT-based method implemented in Marabou. We also provide a comparison with the quantization-based tool Efficient Exact Verification of Binarized Neural Networks (EEVBNN). We implement and release the approach in NNV, a neural network verification tool for DNNs and learning-enabled Cyber-Physical Systems.
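As a rough, self-contained illustration of why the Sign layer complicates reachability (this is not the thesis's Star-based method, and the function names below are hypothetical), consider that a BNN's Sign activation maps each neuron to +1 or -1. Whenever an input set allows a neuron to take both negative and non-negative values, an exact analysis must split that set into two branches. The Python sketch below shows this using coarse per-neuron interval bounds in place of Star sets.

# Illustrative sketch only: the thesis uses Star sets, which keep linear
# constraints over the input; the interval bounds below are a much coarser
# stand-in, used purely to show why the Sign layer forces a case split.
import numpy as np

def sign_activation(x: np.ndarray) -> np.ndarray:
    """Sign activation as typically used in BNNs: maps each entry to {-1, +1}."""
    return np.where(x >= 0, 1.0, -1.0)

def sign_output_bounds(lower: np.ndarray, upper: np.ndarray):
    """Per-neuron output bounds of the Sign layer given input bounds.

    If a neuron's input interval straddles zero, its output can be either
    -1 or +1; this is exactly the branching an exact (Star-based) analysis
    resolves by splitting the set into the x_i < 0 and x_i >= 0 cases.
    """
    out_lower = np.where(lower >= 0, 1.0, -1.0)   # -1 unless the neuron is surely non-negative
    out_upper = np.where(upper < 0, -1.0, 1.0)    # +1 unless the neuron is surely negative
    return out_lower, out_upper

# Example: the second neuron's interval [-0.2, 0.3] straddles zero,
# so its sign output is unresolved ([-1, +1]) without a case split.
lo, up = np.array([0.1, -0.2]), np.array([0.5, 0.3])
print(sign_output_bounds(lo, up))   # ([ 1., -1.], [ 1.,  1.])

A Star-based analysis performs the same per-neuron case split but retains the linear constraints of each branch rather than collapsing them to intervals, which is what allows it to compute exact reachable sets and to be less conservative than the interval view sketched here.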

Advisor: Hoang-Dung Tran
