Abstract:
The efficient allocation of radio resources is an essential capability of 5G/6G radio access networks (RANs), which are expected to meet the diverse QoS requirements of highly demanding applications. To equip RANs with this capability while respecting their functional split constraints, we envision a distributed learning approach to radio resource allocation that makes the most of the Central Unit (CU) and Distributed Unit (DU) components by effectively exploiting their synergy. On the one hand, our solution, named MERGE, leverages the knowledge of radio connectivity dynamics that each DU can acquire by locally running a deep reinforcement learning (DRL) radio agent. On the other hand, it lets the CU collect such agents in a crowdsourcing fashion and then, through a meta-learning policy, select and aggregate them to create up-to-date radio agents of the right size (hence, complexity) to fit the computing constraints of the individual DUs. Our results show that MERGE matches the performance of the highest-complexity radio model in [1] with 25% lower computational requirements and, for a given computational budget, outperforms a single pruned model with a 19% increase in QoS.
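To make the CU-side workflow described above more concrete, the sketch below shows one possible way a CU could select DU-contributed policy networks that fit a target DU's compute budget and merge them by weighted parameter averaging. This is a minimal Python/PyTorch illustration under simplifying assumptions (all contributed agents share the same architecture, and complexity is measured by parameter count); the function names, selection rule, and averaging scheme are hypothetical and do not reproduce the MERGE meta-learning policy.

# Illustrative sketch only: CU-side selection and aggregation of DU radio agents.
# Assumptions (not from the paper): all agents share one architecture, complexity
# is approximated by parameter count, and merging is weighted parameter averaging.
import copy
from typing import Dict, List

import torch
import torch.nn as nn


def num_params(model: nn.Module) -> int:
    # Count trainable parameters as a simple proxy for model complexity.
    return sum(p.numel() for p in model.parameters() if p.requires_grad)


def aggregate_du_agents(du_agents: List[nn.Module],
                        weights: List[float],
                        param_budget: int) -> nn.Module:
    # Keep only the agents whose size fits the target DU's compute budget.
    selected = [(a, w) for a, w in zip(du_agents, weights)
                if num_params(a) <= param_budget]
    if not selected:
        raise ValueError("No DU agent fits the given parameter budget")

    total = sum(w for _, w in selected)
    merged = copy.deepcopy(selected[0][0])

    # Initialize the merged state: zero out floating-point tensors to accumulate
    # the weighted average; copy non-float buffers (e.g., counters) unchanged.
    merged_state: Dict[str, torch.Tensor] = {}
    for k, v in merged.state_dict().items():
        merged_state[k] = torch.zeros_like(v) if torch.is_floating_point(v) else v.clone()

    # Weighted parameter averaging over the selected agents.
    for agent, w in selected:
        for k, v in agent.state_dict().items():
            if torch.is_floating_point(v):
                merged_state[k] += (w / total) * v

    merged.load_state_dict(merged_state)
    return merged


if __name__ == "__main__":
    # Toy usage: three DU agents with the same small policy network.
    def make_policy() -> nn.Module:
        return nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

    agents = [make_policy() for _ in range(3)]
    merged = aggregate_du_agents(agents, weights=[0.5, 0.3, 0.2], param_budget=10_000)
    print("Merged agent parameters:", num_params(merged))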