Indexed in:
Abstract:
Cloud computing providers face several challenges in precisely forecasting large-scale workload and resource time series. Such prediction can help them achieve intelligent resource allocation, guaranteeing that users’ performance needs are met without wasting computing, network, and storage resources. This work applies a logarithmic operation to reduce the standard deviation of workload and resource sequences before smoothing them. Noise interference and extreme points are then removed with a powerful filter, and a Min–Max scaler is adopted to standardize the data. An integrated deep learning method for time series prediction is designed; it incorporates both bi-directional and grid long short-term memory networks to achieve high-quality prediction of workload and resource time series. Experimental comparison on Google cluster trace datasets demonstrates that the prediction accuracy of the proposed method is better than that of several widely adopted approaches. © 2020 Elsevier B.V.
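The abstract describes a preprocessing pipeline (logarithmic transform, noise and outlier filtering, Min–Max scaling) feeding a combined bi-directional and grid LSTM predictor. The following is a minimal Python sketch under stated assumptions, not the authors' implementation: the abstract does not name the filter, so a Savitzky–Golay filter is used as a placeholder, and only a plain bi-directional LSTM branch is shown because grid LSTM layers are not part of stock Keras; window sizes and layer widths are illustrative.

# Illustrative sketch only; the paper's exact filter, window sizes, and
# network configuration are not given in the abstract.
import numpy as np
from scipy.signal import savgol_filter          # stand-in for the unnamed "powerful filter"
from sklearn.preprocessing import MinMaxScaler
import tensorflow as tf

def preprocess(series: np.ndarray):
    """Log-transform, smooth, and Min-Max scale a 1-D workload/resource series."""
    logged = np.log1p(series)                                        # reduce standard deviation
    smoothed = savgol_filter(logged, window_length=11, polyorder=3)  # assumed smoothing step
    scaler = MinMaxScaler()
    scaled = scaler.fit_transform(smoothed.reshape(-1, 1))           # standardize to [0, 1]
    return scaled, scaler

def make_windows(values: np.ndarray, window: int = 30):
    """Slice a scaled series into (samples, window, 1) inputs and next-step targets."""
    X = np.stack([values[i:i + window] for i in range(len(values) - window)])
    y = values[window:]
    return X, y

def build_bilstm(window: int = 30) -> tf.keras.Model:
    """Bi-directional LSTM branch; the grid LSTM component of the paper is omitted here."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(window, 1)),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model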
Keywords:
Corresponding author information:
Email address: