Abstract:
Both caching and interference alignment (IA) are promising techniques for next-generation wireless networks. Nevertheless, most existing works on cache-enabled IA wireless networks assume that the channel is invariant, which is unrealistic given the time-varying nature of practical wireless environments. In this paper, we consider realistic time-varying channels; specifically, the channel is formulated as a finite-state Markov channel (FSMC). Since realistic FSMC models make the system highly complex, we propose a novel deep reinforcement learning approach, an advanced reinforcement learning algorithm that uses a deep Q network to approximate the action-value (Q) function. We implement the proposed approach in Google TensorFlow to obtain the optimal IA user-selection policy in cache-enabled opportunistic IA wireless networks. Simulation results show that the proposed approach significantly improves the performance of cache-enabled opportunistic IA networks in terms of the network's sum rate and energy efficiency.
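The abstract describes learning a user-selection policy over an FSMC with reinforcement learning. The sketch below illustrates the underlying action-value update on a toy FSMC using tabular Q-learning with NumPy; the paper itself approximates the Q-function with a deep network in TensorFlow, and every number, state model, and reward here is an illustrative assumption, not the paper's system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy finite-state Markov channel (FSMC): 3 quality levels per user,
# evolving by an illustrative transition matrix (assumed values).
P = np.array([[0.7, 0.3, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.3, 0.7]])
RATES = np.array([0.5, 1.0, 2.0])   # toy achievable rate per channel state

N_USERS = 3                          # action = which user to schedule
N_STATES = 3 ** N_USERS              # joint channel state over all users

def encode(states):
    # Map per-user channel states to a single Q-table index.
    idx = 0
    for s in states:
        idx = idx * 3 + s
    return idx

def step(states):
    # Each user's channel evolves independently by the FSMC.
    return [int(rng.choice(3, p=P[s])) for s in states]

# Tabular Q-learning; the paper replaces this table with a deep Q network.
Q = np.zeros((N_STATES, N_USERS))
alpha, gamma, eps = 0.1, 0.5, 0.1
states = [0] * N_USERS
for t in range(20000):
    s = encode(states)
    # Epsilon-greedy exploration over user-selection actions.
    a = int(rng.integers(N_USERS)) if rng.random() < eps else int(Q[s].argmax())
    reward = RATES[states[a]]        # sum-rate proxy: rate of the scheduled user
    nxt = step(states)
    # Standard Q-learning update toward the bootstrapped target.
    Q[s, a] += alpha * (reward + gamma * Q[encode(nxt)].max() - Q[s, a])
    states = nxt

# A sensible learned policy schedules a user whose channel is in its best state.
best = [2, 0, 0]                     # user 0 in top state, others in the worst
policy_choice = int(Q[encode(best)].argmax())
print(policy_choice)
```

In the actual networks considered by the paper, the state and action spaces are far too large for a table, which is why the Q-function is approximated by a deep neural network instead.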
Source: IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY
ISSN: 0018-9545
Year: 2017
Issue: 11
Volume: 66
Page: 10433-10445
Impact Factor: 6.800 (JCR@2022)
ESI Discipline: ENGINEERING;
ESI HC Threshold:165
CAS Journal Grade:2
Cited Count:
WoS CC Cited Count: 250
SCOPUS Cited Count: 281
ESI Highly Cited Papers on the List: 18