Abstract:
In recent years, the application of deep reinforcement learning in finance has received considerable attention from researchers. Because of the non-stationary characteristics and noisy environment of the financial market, single-scale features cannot effectively characterize the market environment. In this paper, we extract multi-scale volume-price features and trend features from financial time series through multi-scale processing and propose a deep reinforcement learning model named MSDDPG-R, based on the Deep Deterministic Policy Gradient (DDPG) algorithm. Specifically, we formulate the trading problem as a Markov Decision Process (MDP), in which the state space incorporates both single-scale and multi-scale features and the reward function combines multi-scale trend features. We evaluate the MSDDPG-R model on the SH000001, SH000300, SZ399905 and S&P 500 datasets. The results show that MSDDPG-R outperforms ablated variants that exclude individual components in terms of both return and risk, which demonstrates the validity of the multi-scale features and the trend-based reward function. © 2023 Copyright held by the owner/author(s).
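The MDP framing described in the abstract (a state that stacks single-scale and multi-scale volume-price features, plus a reward augmented with a multi-scale trend term) can be sketched as below. The scales, feature definitions, TREND_WEIGHT, and the helper functions multi_scale_features and trend_reward are illustrative assumptions for this sketch, not the paper's exact formulation.

# Hypothetical sketch of the MDP framing described in the abstract: the state
# stacks single-scale and multi-scale (downsampled) volume-price features, and
# the reward adds a trend term derived from multi-scale moving averages.
# Scales, feature definitions, and weights are illustrative assumptions only.
import numpy as np

SCALES = (1, 5, 20)        # assumed time scales (e.g. daily, weekly, monthly bars)
TREND_WEIGHT = 0.1         # assumed weight of the trend term in the reward

def multi_scale_features(close, volume, t, window=20):
    """Return single- and multi-scale volume-price features at step t."""
    feats = []
    for s in SCALES:
        c = close[max(0, t - window * s):t + 1:s]      # downsample prices by scale s
        v = volume[max(0, t - window * s):t + 1:s]     # downsample volumes by scale s
        feats += [c[-1] / c.mean() - 1.0,              # price relative to its scale mean
                  v[-1] / (v.mean() + 1e-8) - 1.0]     # volume relative to its scale mean
    return np.array(feats, dtype=np.float32)

def trend_reward(close, t, position):
    """Trend term: reward positions aligned with multi-scale moving-average slopes."""
    trend = 0.0
    for s in SCALES:
        ma_now = close[max(0, t - s):t + 1].mean()
        ma_prev = close[max(0, t - 2 * s):t - s + 1].mean()
        trend += np.sign(ma_now - ma_prev)
    return position * trend / len(SCALES)

def step(close, volume, t, position):
    """One MDP transition: next state and reward (PnL plus weighted trend term)."""
    pnl = position * (close[t + 1] / close[t] - 1.0)
    reward = pnl + TREND_WEIGHT * trend_reward(close, t, position)
    next_state = multi_scale_features(close, volume, t + 1)
    return next_state, reward

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    close = 100 * np.cumprod(1 + 0.01 * rng.standard_normal(500))
    volume = rng.integers(1_000, 10_000, 500).astype(float)
    s, r = step(close, volume, t=200, position=0.5)   # position in [-1, 1]
    print(s.shape, r)

A DDPG agent would consume the state returned by step and emit a continuous position in [-1, 1]; the reward shaping shown here is one plausible way to combine profit-and-loss with multi-scale trend alignment.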
Year: 2023
Page: 624-630
Language: English