Department of Electrical & Computer Engineering


First Advisor

Qing Hui

Date of this Version

August 2019

Document Type

Dissertation


A DISSERTATION Presented to the Faculty of The Graduate College at the University of Nebraska In Partial Fulfillment of Requirements For the Degree of Doctor of Philosophy, Major: Electrical Engineering, Under the Supervision of Professor Qing Hui. Lincoln, Nebraska: August, 2019

Copyright 2019 Qishuai Liu


Because multi-agent systems have attracted considerable attention in recent years, consensus algorithms have gained immense popularity for building fault-tolerant systems in systems and control theory. In general, a consensus algorithm drives a swarm of agents to act as a coherent group that reaches agreement on a certain quantity of interest, which depends on the states of all the agents. The most common variant is average consensus, whose final consensus value equals the average of the agents' initial values. If we instead want the agents to locate the best area of a particular resource, average consensus fails; the algorithm is thus restricted by its inability to solve such optimization problems.
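For concreteness, the standard discrete-time average consensus iteration x_{k+1} = W x_k, with W doubly stochastic, can be sketched as follows (a toy four-agent ring, not an example from the dissertation):

```python
import numpy as np

# Discrete-time average consensus on a ring of 4 agents:
# x_{k+1} = W x_k with W doubly stochastic, so every state
# converges to the average of the initial values.
W = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])
x = np.array([1.0, 3.0, 5.0, 7.0])  # initial states, average = 4.0
for _ in range(200):
    x = W @ x
print(np.round(x, 6))  # each agent's state equals the initial average 4.0
```

Because W is doubly stochastic, the sum of the states is preserved at every step, which is exactly why the limit must be the initial average.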

In this dissertation, we aim to make the agents more intelligent so that they can handle different optimization problems. Based on this idea, we first design a new consensus algorithm that modifies the standard bat algorithm. Since the bat algorithm is a swarm-intelligence method proven suitable for solving optimization problems, this modification is straightforward: the optimization problem supplies the convergence direction. In addition, to accelerate convergence, we incorporate a term related to a flux function, which serves as an energy/mass exchange rate in compartmental modeling or a heat transfer rate in thermodynamics. This term is inspired by the speed-up and speed-down strategy observed in biological swarms. We prove the stability of the proposed consensus algorithm for both linear and nonlinear flux functions, using the matrix paracontraction tool and a Lyapunov-based method, respectively.
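A minimal sketch of what a bat-algorithm-style consensus update with a linear flux term might look like is given below. All gains, the all-to-all neighbor graph, and the toy objective are illustrative assumptions, not the dissertation's actual protocol:

```python
import numpy as np

# Toy sketch: bat-algorithm-style consensus with a linear flux term.
# Gains, the all-to-all graph, and the objective are illustrative only.
rng = np.random.default_rng(0)
n = 5
x = rng.uniform(-5.0, 5.0, n)     # agent positions
v = np.zeros(n)                   # agent velocities
f_min, f_max = 0.0, 1.0           # bat frequency range
kappa = 0.05                      # flux gain: linear flux phi(d) = kappa*d

def cost(z):
    return (z - 2.0) ** 2         # toy objective, minimum at z = 2

for _ in range(300):
    best = x[np.argmin(cost(x))]                  # current best position
    f = f_min + (f_max - f_min) * rng.random(n)   # random bat frequencies
    flux = kappa * (x.sum() - n * x)              # sum_j phi(x_j - x_i)
    v = 0.7 * v + f * (best - x) + flux           # velocity update
    x = x + v
print(np.round(x, 3))  # agents agree on a value pulled toward z = 2
```

The frequency-weighted pull toward the current best injects the objective's convergence direction, while the flux term exchanges "mass" between agents and speeds up their agreement, mirroring the two ingredients described above.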

Another direction we pursue is using deep reinforcement learning to train the agents to reach the consensus state. By letting the agents learn their input commands this way, they become more intelligent without human intervention, and we bypass the complex mathematical modeling otherwise needed to design a protocol for the general consensus problem. The deep deterministic policy gradient (DDPG) algorithm is used to plan the agents' commands in the continuous domain, and mobile robot systems are used to verify the effectiveness of the algorithm.
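The structure of a DDPG-style learner can be sketched on a toy problem (illustrative only, not the dissertation's implementation): one agent must drive its disagreement e with a reference to zero under the dynamics e' = e + u, and a linear actor u = w*e with a quadratic critic stands in for the neural networks of full DDPG:

```python
import numpy as np

# Toy DDPG-style sketch: replay buffer, target copies, TD critic
# update, and the deterministic policy gradient for the actor.
# A linear actor u = w*e and a quadratic critic
# Q(e, u) = th . [e^2, e*u, u^2] replace the neural networks.
rng = np.random.default_rng(1)
gamma, a_c, a_a, tau = 0.9, 0.005, 0.001, 0.05
w = 0.0                            # actor parameter (feedback gain)
th = np.zeros(3)                   # critic parameters
w_t, th_t = w, th.copy()           # slowly-updated target copies
buf = []                           # replay buffer of (e, u, r, e_next)

def feats(e, u):
    return np.array([e * e, e * u, u * u])

e = 2.0
for step in range(4000):
    u = float(np.clip(w * e + 0.3 * rng.standard_normal(), -1.5, 1.5))
    e_next = float(np.clip(e + u, -2.0, 2.0))     # toy dynamics e' = e + u
    r = -(e_next ** 2 + 0.01 * u ** 2)            # quadratic cost as reward
    buf.append((e, u, r, e_next))
    e = e_next if step % 20 else float(rng.uniform(-2, 2))  # periodic reset
    if len(buf) > 5000:
        buf.pop(0)
    for i in rng.integers(0, len(buf), 8):        # minibatch from replay
        be, bu, br, be2 = buf[i]
        y = br + gamma * th_t @ feats(be2, w_t * be2)  # TD target
        delta = y - th @ feats(be, bu)
        th += a_c * delta * feats(be, bu)         # critic update
        dq_du = th[1] * be + 2.0 * th[2] * w * be # dQ/du at u = w*be
        w += a_a * dq_du * be                     # deterministic policy gradient
    w_t = (1 - tau) * w_t + tau * w               # soft target updates
    th_t = (1 - tau) * th_t + tau * th

print(round(w, 3))  # learned gain; closed loop e' = (1 + w) e contracts
```

The learned gain w settles near -1, so the closed loop drives the disagreement toward zero; full DDPG replaces the linear actor and quadratic critic with deep networks but keeps exactly this update structure.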

Adviser: Qing Hui