Abstract:
Causal discovery from river runoff data aids flood prevention and mitigation strategies and has attracted attention in climate and earth science. However, most climate causal discovery methods rely on conditional-independence tests, overlooking the non-stationary characteristics of river runoff data and therefore performing poorly. In this paper, we propose a river runoff causal discovery method based on deep reinforcement learning, called RCD-DRL, to effectively learn causal relationships from non-stationary river runoff time series. The method uses an actor-critic framework consisting of three main modules: an actor module, a critic module, and a reward module. RCD-DRL first employs the actor module, built on an encoder-decoder architecture, to learn latent features from raw river runoff data, enabling the model to adapt quickly to non-stationary data distributions and to generate a causality matrix among the stations. A critic network with two fully connected layers then estimates the value of the current encoded features. Finally, the reward module, based on the Bayesian information criterion (BIC), computes the reward for the currently generated causality matrix. Experimental results on both synthetic and real datasets demonstrate that the proposed method outperforms state-of-the-art methods.
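The abstract describes the architecture only at a high level. The following is a minimal, hypothetical sketch in PyTorch of how such an actor-critic pipeline with a BIC-based reward might be wired together. All names (Actor, Critic, bic_reward), layer sizes, the pairwise edge scoring, and the linear-SEM BIC score are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the RCD-DRL-style modules described in the abstract.
# Sizes, scoring model, and interfaces are assumptions for illustration.
import numpy as np
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Encoder-decoder: maps station time series to edge probabilities."""
    def __init__(self, n_stations, seq_len, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(seq_len, hidden), nn.ReLU())
        self.decoder = nn.Linear(2 * hidden, 1)  # scores each ordered station pair

    def forward(self, x):                         # x: (n_stations, seq_len)
        h = self.encoder(x)                       # latent features per station
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        logits = self.decoder(pairs).squeeze(-1)  # (n, n) edge logits
        return torch.sigmoid(logits), h           # causality-matrix probabilities

class Critic(nn.Module):
    """Two fully connected layers estimating the value of encoded features."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(hidden, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, h):                         # pool station features, score
        return self.net(h.mean(dim=0))

def bic_reward(x, adj):
    """Negative BIC of a linear SEM implied by the sampled graph (assumed form)."""
    n_stations, seq_len = x.shape
    bic = 0.0
    for j in range(n_stations):
        parents = np.flatnonzero(adj[:, j])
        if parents.size:                          # regress node on its parents
            beta, *_ = np.linalg.lstsq(x[parents].T, x[j], rcond=None)
            resid = x[j] - x[parents].T @ beta
        else:
            resid = x[j] - x[j].mean()
        k = parents.size + 1
        bic += seq_len * np.log(resid.var() + 1e-8) + k * np.log(seq_len)
    return -bic                                   # lower BIC -> higher reward

# Usage sketch: 5 stations, 100 time steps of synthetic runoff.
x = np.random.randn(5, 100)
actor, critic = Actor(5, 100), Critic()
probs, h = actor(torch.tensor(x, dtype=torch.float32))
adj = (probs > 0.5).int().numpy()                 # hypothetical hard threshold
np.fill_diagonal(adj, 0)                          # no self-loops
print(bic_reward(x, adj), critic(h).item())
```

In a full training loop, one would sample a binary adjacency matrix from the edge probabilities, score it with the BIC reward, and use the critic's value estimate as a baseline in a policy-gradient update; the sketch omits that loop and any acyclicity handling.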
Source: APPLIED INTELLIGENCE
ISSN: 0924-669X
Year: 2024
Issue: 4
Volume: 54
Page: 3547-3565
Impact Factor: 5.300 (JCR@2022)
SCOPUS Cited Count: 1
ESI Highly Cited Papers on the List: 0