Author:
Fang, Juan | Liu, Zhenzhen | Li, Shuopeng | Chen, Siqi | Yang, Huijing
Abstract:
To address the communication delay and resource shortage that arise when multiple users offload tasks simultaneously in mobile edge computing (MEC), a deep reinforcement learning algorithm based on non-orthogonal multiple access (NOMA) technology was proposed to optimize the allocation of users' communication resources. First, a taboo-tag deep Q-network algorithm was used to learn the assignment between users and subchannels in the user grouping stage; then, a deep deterministic policy gradient algorithm was used to allocate transmission power among the users sharing each subchannel. Simulation results show that the proposed algorithm performs more stably than other reinforcement learning and traditional algorithms, and that the system sum rate is significantly improved when multiple edge users offload tasks. © 2022 IEEE.
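The optimization target described in the abstract is the system sum rate of users sharing NOMA subchannels. The following is a minimal sketch, assuming a standard uplink NOMA model with successive interference cancellation (SIC); the function and parameter names are illustrative and not taken from the paper, and the example power values simply stand in for outputs of a DDPG-style power allocation policy.

import numpy as np

def noma_subchannel_sum_rate(channel_gains, powers, bandwidth_hz, noise_power_w):
    # Sum rate (bit/s) of users sharing one NOMA subchannel under uplink SIC.
    # Users are decoded in descending order of received power; each user's
    # signal is interfered only by users that have not yet been decoded.
    received = np.asarray(channel_gains) * np.asarray(powers)
    order = np.argsort(received)[::-1]                 # strongest decoded first
    sum_rate = 0.0
    for i, u in enumerate(order):
        interference = received[order[i + 1:]].sum()   # not-yet-decoded users
        sinr = received[u] / (interference + noise_power_w)
        sum_rate += bandwidth_hz * np.log2(1.0 + sinr)
    return sum_rate

# Example: two users grouped on a 1 MHz subchannel (hypothetical values).
gains = [1e-6, 4e-7]     # channel power gains
powers = [0.2, 0.1]      # transmit powers in watts, e.g. proposed by a DDPG actor
print(noma_subchannel_sum_rate(gains, powers, 1e6, 1e-13))

Summing this quantity over all subchannels would give the system sum rate that the grouping (DQN) and power allocation (DDPG) stages jointly aim to maximize.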
Keyword:
Reinforcement learning
Mobile edge computing
Resource allocation
Deep learning
Learning algorithms
Conference Name
2022 International Conference on Computing, Communication, Perception and Quantum Technology, CCPQT 2022
Classification
461.4 Ergonomics and Human Factors Engineering - 722.4 Digital Computers and Systems - 723.4 Artificial Intelligence - 723.4.2 Machine Learning - 912.2 Management
Acknowledgment
This work is supported by the Beijing Natural Science Foundation (4192007) and the National Natural Science Foundation of China (61202076), along with other government sponsors. The authors would like to thank the reviewers for their efforts and helpful suggestions, which have led to several important improvements in this work. We would also like to thank all teachers and students in our laboratory for helpful discussions.
Access Number
EI:20224913202208