Cooperative Learning for the Consensus of Multi-Agent Systems
Abstract
With the growing attention paid to multi-agent systems in recent years, consensus algorithms have gained immense popularity for building fault-tolerant systems in systems and control theory. Generally, a consensus algorithm drives a swarm of agents to act as a coherent group that reaches agreement on a certain quantity of interest, which depends on the states of the agents themselves. The most common consensus algorithm is average consensus, whose final consensus value equals the average of the initial values. If we want the agents to locate the best region of a particular resource, however, average consensus fails; the algorithm is therefore limited by its inability to solve such optimization problems.

In this dissertation, we want the agents to become more intelligent so that they can handle different optimization problems. Based on this idea, we first design a new consensus algorithm that modifies the general bat algorithm. Since the bat algorithm is a swarm intelligence method proven suitable for solving optimization problems, this modification is straightforward: the optimization problem suggests the convergence direction. In addition, to accelerate convergence, we incorporate a term related to a flux function, which serves as an energy/mass exchange rate in compartmental modeling or a heat transfer rate in thermodynamics. This term is inspired by the speed-up and speed-down strategy observed in biological swarms. We prove the stability of the proposed consensus algorithm in detail for both linear and nonlinear flux functions, using the matrix paracontraction tool and a Lyapunov-based method, respectively.

Another direction we pursue is using deep reinforcement learning to train the agents to reach the consensus state. By letting the agents learn the input command in this way, they become more intelligent without human intervention, and we can ignore the complex mathematical model when designing a protocol for the general consensus problem. The deep deterministic policy gradient algorithm is used to plan the agents' commands in the continuous domain, and moving-robot systems are used to verify the effectiveness of the algorithm.
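For reference, the baseline that the abstract contrasts against, standard discrete-time average consensus, can be sketched in a few lines: each agent repeatedly nudges its state toward its neighbors' states, and a connected undirected network settles at the average of the initial values. The graph, step size, and variable names below are illustrative assumptions, not taken from the dissertation.

    import numpy as np

    def average_consensus(x0, neighbors, eps=0.2, iters=200):
        """Discrete-time average consensus:
        x_i(k+1) = x_i(k) + eps * sum_{j in N_i} (x_j(k) - x_i(k)).
        For a connected undirected graph and eps < 1 / max_degree,
        every state converges to the average of x0."""
        x = np.array(x0, dtype=float)
        for _ in range(iters):
            x_new = x.copy()
            for i, nbrs in neighbors.items():
                x_new[i] += eps * sum(x[j] - x[i] for j in nbrs)
            x = x_new
        return x

    # Four agents on a ring graph; the states approach mean(x0) = 2.5.
    x0 = [1.0, 2.0, 3.0, 4.0]
    ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
    print(average_consensus(x0, ring))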
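The abstract does not state the exact form of the flux-augmented protocol, so the following is only a generic sketch of the idea it describes: replacing the linear exchange term with a nonlinear flux function that modulates the exchange rate between neighbors, in the spirit of the speed-up/speed-down strategy of biological swarms. The choice phi = tanh is a hypothetical stand-in, not the flux function analyzed in the dissertation.

    import numpy as np

    def flux_consensus(x0, neighbors, phi=np.tanh, eps=0.2, iters=200):
        """Consensus update with a nonlinear flux term:
        x_i(k+1) = x_i(k) + eps * sum_{j in N_i} phi(x_j(k) - x_i(k)).
        For an odd, increasing phi (such as tanh) on a connected
        undirected graph, the pairwise exchanges cancel, so the sum of
        the states is conserved while the agents still reach agreement
        (for a sufficiently small step size eps)."""
        x = np.array(x0, dtype=float)
        for _ in range(iters):
            x_new = x.copy()
            for i, nbrs in neighbors.items():
                x_new[i] += eps * sum(phi(x[j] - x[i]) for j in nbrs)
            x = x_new
        return x

    x0 = [1.0, 2.0, 3.0, 4.0]
    ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
    print(flux_consensus(x0, ring))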
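For the learning-based direction, the deep deterministic policy gradient (DDPG) method mentioned in the abstract is an actor-critic scheme for continuous action spaces. The sketch below shows a single DDPG update on a batch of transitions, with placeholder dimensions for a planar robot and without the replay buffer and exploration noise of a full implementation; the network sizes and hyperparameters are assumptions, not the dissertation's setup.

    import copy
    import torch
    import torch.nn as nn

    # Hypothetical dimensions: a robot observing relative neighbor
    # positions (obs_dim) and commanding a 2-D velocity (act_dim).
    obs_dim, act_dim = 6, 2

    actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                          nn.Linear(64, act_dim), nn.Tanh())
    critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(),
                           nn.Linear(64, 1))
    actor_t, critic_t = copy.deepcopy(actor), copy.deepcopy(critic)  # targets
    opt_a = torch.optim.Adam(actor.parameters(), lr=1e-3)
    opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
    gamma, tau = 0.99, 0.005

    def ddpg_step(obs, act, rew, next_obs, done):
        # Critic: regress Q(s, a) toward r + gamma * Q_t(s', pi_t(s')).
        with torch.no_grad():
            q_next = critic_t(torch.cat([next_obs, actor_t(next_obs)], dim=1))
            target = rew + gamma * (1 - done) * q_next
        q = critic(torch.cat([obs, act], dim=1))
        critic_loss = nn.functional.mse_loss(q, target)
        opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

        # Actor: deterministic policy gradient, i.e. maximize Q(s, pi(s)).
        actor_loss = -critic(torch.cat([obs, actor(obs)], dim=1)).mean()
        opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

        # Polyak-average the target networks toward the learned networks.
        for p, pt in zip(actor.parameters(), actor_t.parameters()):
            pt.data.mul_(1 - tau).add_(tau * p.data)
        for p, pt in zip(critic.parameters(), critic_t.parameters()):
            pt.data.mul_(1 - tau).add_(tau * p.data)

    # One update on a random batch of 32 transitions (placeholder data).
    B = 32
    ddpg_step(torch.randn(B, obs_dim), torch.rand(B, act_dim) * 2 - 1,
              torch.randn(B, 1), torch.randn(B, obs_dim), torch.zeros(B, 1))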
Subject Area
Electrical engineering
Recommended Citation
Liu, Qishuai, "Cooperative Learning for the Consensus of Multi-Agent Systems" (2019). ETD collection for University of Nebraska-Lincoln. AAI22588123.
https://digitalcommons.unl.edu/dissertations/AAI22588123