Indexed in:
Abstract:
Causal discovery from river runoff data aids flood prevention and mitigation strategies, garnering attention in climate and earth science. However, most climate causal discovery methods rely on conditional independence tests, overlooking the non-stationary characteristics of river runoff data and leading to poor performance. In this paper, we propose a river runoff causal discovery method based on deep reinforcement learning, called RCD-DRL, to effectively learn causal relationships from non-stationary river runoff time series data. The proposed method utilizes an actor-critic framework consisting of three main modules: an actor module, a critic module, and a reward module. In detail, RCD-DRL first employs the actor module, built on an encoder-decoder architecture, to learn latent features from raw river runoff data, enabling the model to quickly adapt to non-stationary data distributions and to generate a causality matrix across different stations. Subsequently, a critic network with two fully connected layers is designed to estimate the value of the current encoded features. Finally, the reward module, based on the Bayesian information criterion (BIC), calculates the reward corresponding to the currently generated causal matrix. Experimental results on both synthetic and real datasets demonstrate the superior performance of the proposed method over state-of-the-art methods.
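The abstract describes a BIC-based reward for a candidate causal matrix. As a minimal sketch of how such a reward could be computed (assuming a linear-Gaussian model; the function name `bic_reward` and the toy chain data are illustrative, not from the paper):

```python
import numpy as np

def bic_reward(X, adj):
    """BIC-style reward for a candidate causal adjacency matrix.

    X   : (n_samples, d) observations, one column per station.
    adj : (d, d) binary matrix; adj[i, j] = 1 means i -> j.
    Lower BIC is better, so the reward is its negative.
    """
    n, d = X.shape
    bic = 0.0
    for j in range(d):
        parents = np.flatnonzero(adj[:, j])
        if parents.size:
            # Least-squares fit of node j on its parent stations.
            coef, *_ = np.linalg.lstsq(X[:, parents], X[:, j], rcond=None)
            resid = X[:, j] - X[:, parents] @ coef
        else:
            resid = X[:, j] - X[:, j].mean()
        rss = float(resid @ resid) + 1e-12  # guard against log(0)
        # Gaussian-likelihood BIC: fit term plus parameter penalty.
        bic += n * np.log(rss / n) + parents.size * np.log(n)
    return -bic

# Toy example: a causal chain x0 -> x1 -> x2.
rng = np.random.default_rng(0)
x0 = rng.normal(size=500)
x1 = 0.8 * x0 + 0.1 * rng.normal(size=500)
x2 = 0.8 * x1 + 0.1 * rng.normal(size=500)
X = np.column_stack([x0, x1, x2])

true_adj = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])
empty_adj = np.zeros((3, 3), dtype=int)
# The true graph should earn a higher (less negative) reward.
print(bic_reward(X, true_adj) > bic_reward(X, empty_adj))  # True
```

In the paper's actor-critic loop, a reward of this kind would score each causality matrix produced by the actor, while the critic estimates the value of the encoded features.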
Keywords:
Corresponding author:
Email address:
Source:
APPLIED INTELLIGENCE
ISSN: 0924-669X
Year: 2024
Issue: 4
Volume: 54
Pages: 3547-3565
Impact Factor: 5.300 (JCR@2022)
Affiliated department: