Abstract:
As a promising technique, mobile edge computing (MEC) has attracted significant attention from both academia and industry. However, the offloading decision for computing tasks in MEC is usually complicated and intractable. In this paper, we propose a novel framework for offloading decisions in MEC based on deep reinforcement learning (DRL). We consider a typical network architecture with one MEC server and one mobile user, in which tasks arrive at the device as a flow over time. We model the offloading decision process for the task flow as a Markov decision process (MDP). The optimization objective is to minimize the weighted sum of offloading latency and power consumption, which is decomposed into a reward for each time slot. The DRL elements, such as the policy, reward, and value, are defined according to the proposed optimization problem. Simulation results reveal that the proposed method can significantly reduce energy consumption and latency compared with existing schemes.
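The abstract states that the weighted latency/energy objective is decomposed into a per-slot reward for the MDP. A minimal sketch of that decomposition is below; the weight `w`, the discount factor `gamma`, and the toy latency/energy values are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch only: the per-slot reward is the negative weighted sum
# of offloading latency and power consumption, as described in the abstract.
# The weight `w` and all numeric values here are assumptions for illustration.

def slot_reward(latency: float, energy: float, w: float = 0.5) -> float:
    """Reward for one time slot: costs are negated so higher reward is better."""
    return -(w * latency + (1.0 - w) * energy)

def episode_return(slots, w: float = 0.5, gamma: float = 1.0) -> float:
    """Sum of (discounted) per-slot rewards over a task flow.

    `slots` is a sequence of (latency, energy) pairs, one per time slot.
    """
    return sum((gamma ** t) * slot_reward(l, e, w)
               for t, (l, e) in enumerate(slots))

# Example: a two-slot task flow, e.g. offloaded (low latency, high energy)
# followed by local execution (higher latency, lower energy).
flow = [(0.2, 0.8), (0.5, 0.3)]
print(episode_return(flow))  # -0.9 with w = 0.5, gamma = 1.0
```

A DRL agent would then choose, per slot, whether to offload each task so as to maximize this cumulative return, which is equivalent to minimizing the weighted latency/energy sum.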
Source:
2019 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC)
ISSN: 1525-3511
Year: 2019
Language: English