Department of Computer Science and Engineering


Date of this Version

5-2022

Citation

Published in the 2022 IEEE International Conference on Communications (ICC), May 2022, pp. 3202-3207.

Comments

This work was supported by the German Federal Ministry of Education and Research (BMBF) project KICK [16KIS1102K].

Abstract

Network slicing enables an operator to configure virtual network instances for diverse services with specific requirements. To achieve slice-aware radio resource scheduling, dynamic slicing resource partitioning is needed to orchestrate multi-cell slice resources and mitigate inter-cell interference. Deriving analytical solutions is challenging, however, due to the complex inter-cell interdependencies, inter-slice resource constraints, and service-specific requirements. In this paper, we propose a multi-agent deep reinforcement learning (DRL) approach that improves the max-min slice performance while satisfying the resource capacity constraints. We design two coordination schemes that allow distributed agents to coordinate and mitigate inter-cell interference. The proposed approach is extensively evaluated in a system-level simulator. The numerical results show that the proposed approach with inter-agent coordination outperforms the centralized approach in terms of delay and convergence. Compared to the baseline approach, the proposed approach achieves more than a two-fold increase in resource efficiency.
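To make the allocation and objective described in the abstract concrete, the sketch below is a minimal illustration, not the authors' implementation: it shows how a per-cell agent's raw action vector could be projected onto per-slice resource fractions that respect the cell's capacity constraint by construction, and how a max-min slice reward could be computed from per-slice rates aggregated across cells. All names (`allocate_slice_resources`, `max_min_slice_reward`) and the toy rate/interference model are hypothetical assumptions for illustration only.

```python
import numpy as np

def allocate_slice_resources(action_logits: np.ndarray) -> np.ndarray:
    """Map an agent's raw action vector to per-slice resource fractions.

    A softmax projection keeps the fractions non-negative and summing to
    one, so the per-cell resource capacity constraint is satisfied by
    construction. (Hypothetical choice; the paper may handle the
    constraint differently.)
    """
    z = action_logits - action_logits.max()        # numerical stability
    exp_z = np.exp(z)
    return exp_z / exp_z.sum()

def max_min_slice_reward(per_cell_rates: np.ndarray) -> float:
    """Max-min slice objective: reward is the worst per-slice rate.

    per_cell_rates has shape (num_cells, num_slices). Summing over cells
    gives each slice's aggregate rate; taking the minimum rewards agents
    for lifting the worst-performing slice.
    """
    per_slice_rates = per_cell_rates.sum(axis=0)
    return float(per_slice_rates.min())

# Toy illustration: 3 cells, 2 slices, random agent actions.
rng = np.random.default_rng(0)
num_cells, num_slices = 3, 2
allocations = np.stack([
    allocate_slice_resources(rng.normal(size=num_slices))
    for _ in range(num_cells)
])                                                  # rows sum to 1

# Placeholder rate model: a slice's rate in a cell grows with its
# allocated fraction and shrinks with the load other cells put on the
# same slice (a stand-in for inter-cell interference).
interference = allocations.sum(axis=0) - allocations
rates = allocations / (1.0 + interference)

print("per-cell allocations:\n", allocations)
print("max-min slice reward:", max_min_slice_reward(rates))
```

In a multi-agent DRL setup along these lines, each cell's agent would learn a policy over `action_logits`, and the coordination schemes mentioned in the abstract would determine what neighboring-cell information enters each agent's observation before the allocation step.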
