
    LDformer: a parallel neural network model for long-term power forecasting*


Ran TIAN, Xinmei LI, Zhongyu MA, Yanxing LIU, Jingxia WANG, Chu WANG

College of Computer Science & Engineering, Northwest Normal University, Lanzhou 730070, China

Abstract: Accurate long-term power forecasting is important in the decision-making operation of the power grid and the power consumption management of customers, ensuring the reliable power supply of the power system and the economical operation of the grid. However, most time-series forecasting models do not perform well on long-time-series prediction tasks with a large amount of data. To address this challenge, we propose a parallel time-series prediction model called LDformer. First, we combine Informer with long short-term memory (LSTM) to obtain deep representations of the time series. Then, we propose a parallel encoder module to improve the robustness of the model and combine convolutional layers with an attention mechanism to avoid value redundancy in the attention mechanism. Finally, we propose a probabilistic sparse (ProbSparse) self-attention mechanism combined with UniDrop to reduce the computational overhead and mitigate the risk of losing some key connections in the sequence. Experimental results on five datasets show that LDformer outperforms the state-of-the-art methods in most cases when handling different long-time-series prediction tasks.

Key words: Long-term power forecasting; Long short-term memory (LSTM); UniDrop; Self-attention mechanism

    1 Introduction

Power forecasting is an important part of power system planning and the basis of the economic operation of power systems. Accurate power forecasting helps provide a reliable decision-making basis for power system planning and operation, thus reducing power production costs (Ciechulski and Osowski, 2021). Traditional prediction methods have achieved great results in meteorology, finance, industry, and transportation for short-term power forecasting problems. However, existing time-series methods still have limitations in long-time-series prediction:

1. Due to the high complexity and large scale of long-time-series data, traditional methods are limited in efficiently handling high-dimensional data and representing complex functions (Han et al., 2021). They tend to forget earlier observations and ignore the long-term dependencies of the time-series data.

    2.The model structure of traditional methods is not stable when long-term power forecasting is performed.

3. The existing attention mechanism is prone to losing key connections between sequences when capturing the dependencies among long-time-series data, leading to the degradation of prediction performance.

Aiming at the problem of long-time-series prediction, we propose LDformer, a parallel neural network model for long-term power forecasting based on the Informer framework (Zhou et al., 2021). The contributions of this study are summarized as follows:

1. We propose a parallel neural network model called LDformer, which combines the advantages of Informer and long short-term memory (LSTM) to effectively address long-term power forecasting. By using LSTM to learn the long-term correlations of time-series data, the model achieves good long-term forecasting performance.

2. We propose a parallel encoder module that combines a convolutional layer and an attention mechanism to avoid value redundancy in the attention mechanism. Several experiments on different datasets validate the effectiveness and robustness of the parallel encoder and the convolutional layer.

3. We propose a probabilistic sparse (ProbSparse) self-attention mechanism combined with UniDrop. UniDrop requires no additional computational overhead or external resources. Combined with UniDrop, the ProbSparse attention mechanism can mitigate the risk of losing some key connections in the sequence.

    2 Related works

Long-time-series prediction plays an essential role in decision-making in many fields, such as economy, transportation, medicine, hydrology, and energy (Zhang D et al., 2018; Chakraborty et al., 2019; Chuku et al., 2019; Xie and Lou, 2019; Marcjasz et al., 2020). However, a prediction model may produce inaccurate results due to the different patterns of actual time series. In this paper, we review both machine learning and deep learning approaches.

    2.1 Machine learning

In time-series prediction, the autoregressive integrated moving average (ARIMA) model is widely used in various fields. Chen JF et al. (1995) developed an adaptive ARIMA model for short-term power forecasting of a power generation system. Viccione et al. (2020) applied the ARIMA model to tank water level prediction and analysis. Many researchers have also achieved better results by improving ARIMA models or combining ARIMA with other models. Xu et al. (2017) proposed an ARIMA and Kalman filter-based approach for real-time road traffic state prediction. Khan et al. (2020) proposed a wavelet-based hybrid ARIMA model. However, although the ARIMA model has achieved great success in stationary time-series prediction, almost no purely stationary data exist in real time-series data. Therefore, the application of the ARIMA model is limited by the data characteristics and is less general. As a result, many models other than ARIMA have been applied to time-series forecasting. Syriopoulos et al. (2021) used a support vector machine (SVM) to predict shipping prices. Based on SVM, Ding et al. (2021) developed a time-series model based on the least squares support vector machine (LSSVM) and achieved better results. Nóbrega and Oliveira (2019) proposed sequential learning methods combining the Kalman filter and an extreme learning machine for regression and time-series prediction, obtaining better results. In addition to the normal case, many researchers have also modeled predictions for time-series data with outliers. Chen XY and Sun (2022) proposed a Bayesian temporal factorization framework for modeling and predicting multidimensional time series, in particular spatiotemporal data, in the presence of missing values. However, machine learning methods cannot obtain sufficiently accurate results for complex prediction problems.

    2.2 Deep learning

With the development of deep learning, researchers have found that deep learning is applicable to complex time-series problems (Kim et al., 2018). Many scholars use convolutional neural networks (CNNs) to model and predict time-series data (Guo et al., 2019; Hosseini and Talebpour, 2019). Cao and Wang (2019) applied a CNN to stock forecasting. However, a CNN is more suitable for spatial correlation than for time series. As a result, recurrent neural networks (RNNs) have emerged. Hu et al. (2020) applied RNNs to traffic flow prediction based on time-series analysis. Min et al. (2019) proposed an RNN-based intent inference model to solve the time-series prediction problem. Many studies have proven that RNNs have obvious advantages in time-series prediction (Xiao et al., 2019; Zhang L et al., 2019). However, when the time series is too long, vanishing and exploding gradients may occur in RNN training (Zheng and Huang, 2020).

LSTM has been adopted to address these problems (Karevan and Suykens, 2020). LSTM is more suitable for long time series than RNN. Therefore, many deep learning models based on LSTM have been applied to time-series prediction. Zhang TJ et al. (2019) used LSTM for gas concentration prediction. Miao et al. (2020) proposed a novel LSTM framework for short-term fog prediction, consisting of an LSTM and a fully connected layer. Their experiments showed that the framework outperformed the K-nearest neighbor (KNN), AdaBoost, and CNN algorithms. Gai et al. (2021) proposed a new parking space prediction model based on LSTM, providing more accurate predictions. Many researchers have also proposed models combining LSTM with other models (Ran et al., 2019; Wang ZP et al., 2020), which proves the feasibility of LSTM in time-series prediction. However, Khandelwal et al. (2018) showed that the effective context size of an LSTM language model is approximately 200 tokens on average, while the model sharply distinguishes only about 50 nearby tokens, which indicates that even LSTM has difficulty capturing long-term dependencies.

In recent years, the transformer has been applied in many fields to perform long-time-series prediction tasks. Wu N et al. (2020) applied the transformer model to the prediction of influenza-like illness; this method can learn complex patterns and dynamics from time-series data using the self-attention mechanism. As a general framework, the transformer can not only be applied to univariate and multivariate time-series data, but can also be used for time-series embedding. However, because the point-wise dot product in the transformer architecture is insensitive to the local context, its spatial complexity is too large. Therefore, Li et al. (2019) proposed the LogSparse transformer, which improves the prediction accuracy of time series with fine granularity and strong long-term dependence under a limited memory budget. Zhou et al. (2021) proposed the Informer and applied it to electricity consumption planning. Unlike the self-attention mechanism used in the transformer (Vaswani et al., 2017), this model uses a new ProbSparse self-attention (Zhou et al., 2021), which reduces the complexity and improves the transformer's computation time, memory usage, and efficiency. Although existing time-series prediction methods have promoted the development of the time-series field to a certain extent, there is still room for improvement. Because of the increase in data volume, many time-series prediction models overlook the deep representations of temporal data and also suffer from problems such as numerous parameters, large memory consumption, and long operation times.

Compared with previous work, LDformer not only considers the deep representation of temporal data, but also mitigates the risk of losing critical connections between sequences while keeping the complexity low. In addition, LDformer combines the convolutional layer and the attention mechanism to obtain a parallel encoder module, which prevents redundancy in the attention mechanism while improving model robustness. These methods effectively improve the prediction accuracy.

    3 Preliminaries

As shown in Eq. (1), the objective of long-term power forecasting is to use a model f to predict the power load data of a certain data point at n time steps based on historical power load data from multiple data points at m time steps.
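Eq. (1) is not reproduced in this version of the text. A hedged reconstruction consistent with the description above (the symbols X, Y, x_i, y_j, and d are our own notation, not copied from the paper) is:

```latex
% Hedged reconstruction of Eq. (1); the notation is assumed rather than taken verbatim from the paper.
% m historical time steps with d power-load features are mapped to n future values of the target point.
\mathcal{Y} = f(\mathcal{X}), \qquad
\mathcal{X} = \{x_{1}, x_{2}, \ldots, x_{m} \mid x_{i} \in \mathbb{R}^{d}\}, \qquad
\mathcal{Y} = \{y_{1}, y_{2}, \ldots, y_{n} \mid y_{j} \in \mathbb{R}\}
```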

    4 Long-time-series prediction model

The overall framework of LDformer is shown in Fig. 1. LDformer contains LSTM, encoder, and decoder structures. First, the time-series data enter the embedding layer, where the data, position, and time information are converted into a uniform dimension and then merged. The processed data then enter the LSTM, allowing more effective use of long-term temporal information. Second, the data enter the encoder, which uses a multichannel parallel mode to receive long-sequence inputs and improve model robustness. At the same time, we extract the main features by adding convolutional layers in the encoder module; each convolutional layer generates a concentrated attention feature map to reduce feature redundancy. The decoder receives long-sequence inputs and predicts the output elements in a single generative forward step. After the embedding layer, the data enter the decoder. To ensure that the output decoded at time t depends only on the outputs before time t, X_0 is used as a placeholder for the target sequence, padded with zeros. A mask is added to the first attention layer of the decoder to prevent the target information from being used early. Finally, the prediction results of the last column are output through a fully connected layer.

    Fig.1 Framework of LDformer
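To make the data flow of Fig. 1 concrete, the following PyTorch sketch traces the pipeline: embedding, LSTM, encoder, masked generative decoder, and a final fully connected projection. It is only a schematic stand-in; standard PyTorch layers replace the custom embedding, parallel-encoder, and ProbSparse modules described in Sections 4.1-4.3, and all dimensions are assumed for illustration.

```python
import torch
import torch.nn as nn

class LDformerFlow(nn.Module):
    """Schematic stand-in for the Fig. 1 data flow: embedding -> LSTM -> encoder -> decoder -> linear output."""
    def __init__(self, enc_in=7, dec_in=7, c_out=1, d_model=512, n_heads=8):
        super().__init__()
        self.enc_embed = nn.Linear(enc_in, d_model)   # placeholder for data + position + time embedding
        self.dec_embed = nn.Linear(dec_in, d_model)
        self.lstm = nn.LSTM(d_model, d_model, batch_first=True)  # deep temporal representation
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), num_layers=3)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True), num_layers=2)
        self.projection = nn.Linear(d_model, c_out)   # final fully connected layer

    def forward(self, x_enc, x_dec):
        h, _ = self.lstm(self.enc_embed(x_enc))       # embedding -> LSTM
        memory = self.encoder(h)                      # encoder over the long input sequence
        L_dec = x_dec.size(1)
        # Causal mask: the output decoded at time t depends only on outputs before t.
        mask = torch.triu(torch.full((L_dec, L_dec), float('-inf')), diagonal=1)
        out = self.decoder(self.dec_embed(x_dec), memory, tgt_mask=mask)
        return self.projection(out)                   # predictions for the target column

x_enc = torch.randn(2, 96, 7)    # (batch, input length, features)
x_dec = torch.randn(2, 48, 7)    # start tokens concatenated with the zero placeholder X_0
print(LDformerFlow()(x_enc, x_dec).shape)   # torch.Size([2, 48, 1])
```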

    4.1 Embedding layers with multiple perspectives

First, the preprocessed data are unified into the same dimension through data encoding, position encoding, and timestamp encoding, and then the final embedding result is obtained by summing the results of these three encodings.

1. Data embedding (DE): Convert the data dimension x into the uniform dimension d_model using one-dimensional (1D) convolution. The formula is as follows:

2. Position embedding (PE) (Vaswani et al., 2017): Unlike an RNN, the model processes the elements of the input sequence concurrently. Although this concurrent processing is faster than an RNN, it ignores the order of elements in the sequence, so positional embedding is used to preserve the positional information of each element. The formulae are as follows:

PE(pos, 2i) = sin(pos / 10000^{2i/d_model}),
PE(pos, 2i+1) = cos(pos / 10000^{2i/d_model}),

where pos denotes the position of the element in the sequence, d_model denotes the dimension of the element vector, and i indexes the dimensions of the element vector.

3. Time embedding: There are various methods for time embedding: month_embed, day_embed, weekday_embed, hour_embed, and minute_embed. The time slices of the datasets used in this paper are mainly in hours and minutes, so hour_embed and minute_embed are chosen to obtain the timestamp encoding results.

The sum of these three embedding components is the output of the final embedding layer. The overall embedding layer structure is shown in Fig. 2.

    Fig.2 Embedding layer structure
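As a concrete illustration of this embedding layer, the following PyTorch sketch sums a Conv1d-based data embedding, the fixed sinusoidal position embedding, and learnable hour/minute embeddings. It is our own minimal reconstruction under stated assumptions: the kernel size, the circular padding, and the learnable time embeddings are borrowed from common Informer-style implementations, not taken from the paper.

```python
import math
import torch
import torch.nn as nn

class EmbeddingLayer(nn.Module):
    """Sketch of the embedding layer: data embedding + position embedding + time embedding."""
    def __init__(self, c_in, d_model, max_len=5000):
        super().__init__()
        # Data embedding: 1D convolution mapping c_in input features to d_model channels
        # (kernel size 3 with circular padding is an assumption, not stated in the paper).
        self.data_embed = nn.Conv1d(c_in, d_model, kernel_size=3, padding=1, padding_mode='circular')
        # Position embedding: fixed sinusoidal table (Vaswani et al., 2017).
        pe = torch.zeros(max_len, d_model)
        pos = torch.arange(max_len).unsqueeze(1).float()
        div = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer('pe', pe)
        # Time embedding: hour_embed and minute_embed (learnable lookup tables; an assumption).
        self.hour_embed = nn.Embedding(24, d_model)
        self.minute_embed = nn.Embedding(60, d_model)

    def forward(self, x, hour, minute):
        # x: (batch, seq_len, c_in); hour/minute: (batch, seq_len) integer timestamps
        de = self.data_embed(x.permute(0, 2, 1)).permute(0, 2, 1)
        pe = self.pe[:x.size(1)].unsqueeze(0)
        te = self.hour_embed(hour) + self.minute_embed(minute)
        return de + pe + te          # sum of the three embedding components

x = torch.randn(2, 96, 7)
hour = torch.randint(0, 24, (2, 96))
minute = torch.randint(0, 60, (2, 96))
print(EmbeddingLayer(7, 512)(x, hour, minute).shape)   # torch.Size([2, 96, 512])
```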

The data output by the embedding layer are sent to the LSTM for feature extraction. To avoid affecting the input of the subsequent encoder, the output of the LSTM is kept consistent with the output of the embedding layer; i.e., the LSTM output dimension remains d_model. The output of the LSTM is the input of the encoder. LSTM performs better on long sequences; therefore, using LSTM can extract deep representations of the time series and improve the prediction accuracy.
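A minimal sketch of this dimension-preserving LSTM stage follows; the hidden size is set equal to d_model, which is the property stated above, while the single-layer configuration is an assumption.

```python
import torch
import torch.nn as nn

d_model = 512
lstm = nn.LSTM(input_size=d_model, hidden_size=d_model, batch_first=True)  # output stays d_model

emb_out = torch.randn(2, 96, d_model)   # output of the embedding layer
lstm_out, _ = lstm(emb_out)             # deep temporal features, fed to the encoder
print(lstm_out.shape)                   # torch.Size([2, 96, 512])
```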

    4.2 Multichannel parallel encoder module

We build the encoder module by combining the attention mechanism with the convolutional layer. It takes four channels of lengths L, L/2, L/4, and L/8, and executes them in parallel. The convolutional layer performs dimensional pruning and reduces the memory footprint before the output of an upper multihead attention layer is sent to the next one. Each encoder stack contains one fewer convolutional layer than attention layers. The multichannel parallel encoder module is shown in Fig. 3.

    Fig.3 Multichannel parallel encoder module (References to color refer to the online version of this figure)

The encoder is used mainly to extract robust long-range dependencies from time-series data. The overall architecture of the encoder is roughly the same as that of the transformer. It includes two sublayers: the multihead attention layer (ProbSparse self-attention mechanism combined with UniDrop) and the feed-forward layer composed of two linear mappings. A batch normalization layer follows each sublayer, with skip connections around the sublayers.
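The following PyTorch sketch illustrates the parallel multichannel idea: each channel processes a progressively shorter input (L, L/2, L/4, L/8) with attention and feed-forward sublayers interleaved with convolutional distilling layers, and the channel outputs are fused. It is a simplified reconstruction: standard multihead attention stands in for the ProbSparse + UniDrop attention, the max pooling inside the distilling step and the concatenation-based fusion are assumptions.

```python
import torch
import torch.nn as nn

class EncoderChannel(nn.Module):
    """One encoder channel: attention/FFN sublayers with one fewer conv (distilling) layer than attention layers."""
    def __init__(self, d_model=512, n_heads=8, n_layers=3):
        super().__init__()
        self.attn_layers = nn.ModuleList(
            [nn.MultiheadAttention(d_model, n_heads, batch_first=True) for _ in range(n_layers)])
        self.ffn_layers = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(), nn.Linear(4 * d_model, d_model))
             for _ in range(n_layers)])
        self.norms = nn.ModuleList([nn.BatchNorm1d(d_model) for _ in range(2 * n_layers)])
        # One fewer convolutional (distilling) layer than attention layers.
        self.convs = nn.ModuleList(
            [nn.Sequential(nn.Conv1d(d_model, d_model, kernel_size=3, padding=1),
                           nn.LeakyReLU(), nn.MaxPool1d(kernel_size=3, stride=2, padding=1))
             for _ in range(n_layers - 1)])

    def forward(self, x):                     # x: (batch, seq_len, d_model)
        for i, (attn, ffn) in enumerate(zip(self.attn_layers, self.ffn_layers)):
            a, _ = attn(x, x, x)
            x = self.norms[2 * i]((x + a).transpose(1, 2)).transpose(1, 2)            # skip + BatchNorm
            x = self.norms[2 * i + 1]((x + ffn(x)).transpose(1, 2)).transpose(1, 2)
            if i < len(self.convs):            # distil the sequence before the next attention layer
                x = self.convs[i](x.transpose(1, 2)).transpose(1, 2)
        return x

class ParallelEncoder(nn.Module):
    """Four parallel channels fed with inputs of length L, L/2, L/4, and L/8."""
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.channels = nn.ModuleList([EncoderChannel(d_model, n_heads) for _ in range(4)])

    def forward(self, x):                      # x: (batch, L, d_model)
        L = x.size(1)
        outs = [ch(x[:, -L // (2 ** k):, :]) for k, ch in enumerate(self.channels)]
        return torch.cat(outs, dim=1)          # fuse channel outputs (fusion scheme is an assumption)

x = torch.randn(2, 96, 512)
print(ParallelEncoder()(x).shape)              # (batch, concatenated distilled lengths, d_model)
```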

    4.2.1 ProbSparse self-attention mechanism combined with UniDrop

The ProbSparse self-attention mechanism may create some risk of losing critical connections. To address this problem, we propose a ProbSparse self-attention mechanism combined with UniDrop. Unlike the canonical self-attention mechanism and the ProbSparse self-attention mechanism, it addresses these potential problems while retaining their advantages.

The canonical self-attention mechanism consists of queries and a set of key-value pairs. The formula is as follows (Vaswani et al., 2017):
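The referenced formula is not rendered in this version; it is the standard scaled dot-product attention of Vaswani et al. (2017):

```latex
% Scaled dot-product self-attention (Vaswani et al., 2017); d denotes the query/key dimension.
\mathrm{Attention}(\mathbf{Q}, \mathbf{K}, \mathbf{V}) =
\mathrm{softmax}\!\left(\frac{\mathbf{Q}\mathbf{K}^{\top}}{\sqrt{d}}\right)\mathbf{V}
```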

Fig. 4 UniDrop structure: (a) attention; (b) feed forward; (c) output prediction (MatMul: matrix multiplication; H: hidden layer)

Define the attention of the ith row of Q', K', and V' obtained after dropout as a kernel smoother in probability form:

Dropping the constant, the sparsity measure of the ith query can be defined as

Eq. (9) requires traversing all queries and computing each dot-product pair, and the log-sum-exp (LSE) operation may have numerical stability issues. Based on this, the above formula is improved to obtain the final sparsity measure:

Based on the above steps, we obtain ProbSparse self-attention combined with UniDrop by allowing each key to attend to only the u dominant queries:
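The equations referenced in this subsection are not reproduced in this version. A hedged reconstruction, following the ProbSparse formulation of Informer (Zhou et al., 2021) with the dropout-processed matrices Q', K', V' used here (the exact notation is our assumption), is:

```latex
% Hedged reconstruction following Zhou et al. (2021); notation adapted to the dropout-processed Q', K', V'.
% Sparsity measure of the i-th query (log-sum-exp form, Eq. (9)):
M(\mathbf{q}'_i, \mathbf{K}') = \ln \sum_{j=1}^{L_K} \exp\!\left(\frac{\mathbf{q}'_i \mathbf{k}'^{\top}_j}{\sqrt{d}}\right)
 - \frac{1}{L_K} \sum_{j=1}^{L_K} \frac{\mathbf{q}'_i \mathbf{k}'^{\top}_j}{\sqrt{d}}

% Numerically stable max-mean approximation (final sparsity measure):
\bar{M}(\mathbf{q}'_i, \mathbf{K}') = \max_{j} \left\{\frac{\mathbf{q}'_i \mathbf{k}'^{\top}_j}{\sqrt{d}}\right\}
 - \frac{1}{L_K} \sum_{j=1}^{L_K} \frac{\mathbf{q}'_i \mathbf{k}'^{\top}_j}{\sqrt{d}}

% ProbSparse attention with UniDrop: \bar{\mathbf{Q}}' keeps only the top-u queries under \bar{M}.
\mathrm{Attention}(\mathbf{Q}', \mathbf{K}', \mathbf{V}') =
\mathrm{softmax}\!\left(\frac{\bar{\mathbf{Q}}' \mathbf{K}'^{\top}}{\sqrt{d}}\right)\mathbf{V}'
```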

    4.2.2 Convolutional layer

As a natural consequence of the ProbSparse attention mechanism combined with UniDrop, the feature map of the encoder contains a redundant combination of values V'. In the next layer, we use convolution to extract the dominant features so that a focused attention feature map is generated, as shown in Fig. 3, which largely reduces the temporal dimension of the input. A CNN is good at identifying simple patterns in data and composing them into more complex patterns in deeper layers. Conv1D is effective in extracting features of interest from data whose locations are not highly correlated, and it can be well applied to time-series analysis of sensor data. Therefore, we select Conv1D to extract features and set its convolution kernel size to 3. The formula is as follows:

where [·]_AB contains the basic operations in the multihead attention and attention block, and Conv1D is executed along the time dimension with the LeakyReLU activation function. LeakyReLU is a variant of ReLU: it introduces a small slope when the input value is less than 0, alleviating the sparsity of ReLU while inheriting its advantages. It can also speed up convergence, alleviate the vanishing and exploding gradient problems, and simplify computation. The LeakyReLU activation function is as follows:
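A minimal sketch of this distilling step between attention blocks is given below, where LeakyReLU(x) = x for x >= 0 and alpha * x for x < 0 with a small alpha. The max pooling that halves the temporal dimension is an assumption borrowed from common Informer-style distilling layers, not stated explicitly here.

```python
import torch
import torch.nn as nn

class DistillingConv(nn.Module):
    """Conv1D distilling layer between attention blocks: extract dominant features and shrink the time axis."""
    def __init__(self, d_model=512):
        super().__init__()
        self.conv = nn.Conv1d(d_model, d_model, kernel_size=3, padding=1)
        self.activation = nn.LeakyReLU(negative_slope=0.01)            # alpha = 0.01 is an assumed value
        self.pool = nn.MaxPool1d(kernel_size=3, stride=2, padding=1)   # halves the temporal dimension (assumption)

    def forward(self, x):              # x: (batch, seq_len, d_model) from the attention block
        x = x.transpose(1, 2)          # Conv1d expects (batch, channels, seq_len)
        x = self.pool(self.activation(self.conv(x)))
        return x.transpose(1, 2)

attn_out = torch.randn(2, 96, 512)
print(DistillingConv()(attn_out).shape)   # torch.Size([2, 48, 512])
```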

    4.3 Decoder

The decoder generates the time-series output in a single forward process, and part of its structure follows the decoder structure of the transformer, as shown in Fig. 5.

    Fig.5 Decoder for growing sequence output through forward process generation

The decoder consists of two attention layers and a feed-forward layer composed of linear mappings. The decoder's input vector is as follows:

The first attention layer is the ProbSparse self-attention mechanism combined with UniDrop, as shown in Eq. (11). In the masked multihead self-attention, the masked positions are set to -∞, which prevents each position from attending to future positions during training and avoids the autoregression problem. The second attention layer is ordinary self-attention, as shown in Eq. (5). An add & norm layer follows each attention layer. Add & norm is calculated as follows:
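The decoder-input and add & norm equations referenced in this subsection are not reproduced in this version. A hedged reconstruction following the Informer-style generative decoder (the X_token and X_0 notation is assumed) is:

```latex
% Hedged reconstruction; X_token and X_0 follow Informer-style notation (Zhou et al., 2021).
% Decoder input: a start-token segment concatenated with the zero-padded placeholder for the targets.
\mathbf{X}_{\mathrm{de}} = \mathrm{Concat}\left(\mathbf{X}_{\mathrm{token}}, \mathbf{X}_{0}\right)

% Add & norm around each sublayer: a residual connection followed by normalization.
\mathrm{AddNorm}(\mathbf{x}) = \mathrm{Norm}\big(\mathbf{x} + \mathrm{Sublayer}(\mathbf{x})\big)
```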

Finally, the prediction results are output directly through a fully connected layer.

    5 Experiments and analyses

    5.1 Datasets

We perform extensive experiments on five datasets, namely the ETTm1, ETTh1, ETTh2, PEMS03, and weather datasets. ETTm1, ETTh1, and ETTh2 are collectively referred to as the ETT dataset. Table 1 describes the datasets.

    Table 1 Datasets and prediction task descriptions

ETT (Zhou et al., 2021): The electricity transformer temperature (ETT) is a key indicator in the long-term deployment of electric power. The dataset contains two years of data collected from two cities in China. ETTm1 takes one data point every 15 min. ETTh1 and ETTh2 take one data point every hour. Each data point consists of the target value oil temperature (OT) and six types of external load values, i.e., high useful load (HUFL), high useless load (HULL), middle useful load (MUFL), middle useless load (MULL), low useful load (LUFL), and low useless load (LULL). We use a multivariate-input, univariate-output model with the six power load features to predict the target value OT. According to the time characteristics, we divide the data into training, validation, and test sets of 12, 4, and 4 months, respectively.

Weather (Zhou et al., 2021): This dataset contains local climate data for nearly 1600 U.S. locations. Each data point consists of the target value wet bulb and 11 climate features. We divide the data into training, validation, and test sets of 12, 4, and 4 months, respectively.

PEMS03 (Wang C et al., 2023): This dataset contains traffic flow data of the California highway network. It is sampled at five-minute intervals and contains data from 307 sensors over three months, from 2018/9/1 to 2018/11/30. This experiment uses the three months of traffic data from the first seven sensors. The dataset is divided into training, test, and validation sets at a ratio of 7:2:1. The seventh sensor is used as the prediction target.

    5.2 Experimental setting

In this study, the model uses both parallel and nonparallel modes in the encoder. The parallel mode has four channels executing in parallel and three stacks. The decoder contains two stacks. The sampling factor c in the top-u formula is set to 5. The dropout rate is 0.1. The number of attention heads (n_heads) is set to 8. When predicting the target sequence, the mean square error (MSE) is selected as the loss function, and the loss is propagated back from the decoder's output through the whole model. The learning rate starts at 1e-4 and decays by a factor of 2 in each epoch. The model employs the Adam optimizer, with 10 epochs and a batch size of 64. Moreover, prior to the experiments, each input dataset is standardized.
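The settings above can be summarized in a small configuration and training-loop sketch; this is a hedged illustration with a placeholder model and random placeholder data, and the halving learning-rate schedule is expressed with a standard exponential scheduler.

```python
import torch
import torch.nn as nn

config = {
    "factor": 5,            # sampling factor c in the top-u formula
    "dropout": 0.1,
    "n_heads": 8,
    "epochs": 10,
    "batch_size": 64,
    "learning_rate": 1e-4,
}

model = nn.Linear(7, 1)                    # placeholder for LDformer
criterion = nn.MSELoss()                   # MSE loss on the target sequence
optimizer = torch.optim.Adam(model.parameters(), lr=config["learning_rate"])
# Decay the learning rate by a factor of 2 after every epoch.
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.5)

for epoch in range(config["epochs"]):
    x = torch.randn(config["batch_size"], 7)    # standardized inputs (placeholder data)
    y = torch.randn(config["batch_size"], 1)
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    scheduler.step()
```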

Assessment metrics: Three evaluation indexes, MSE, mean absolute error (MAE), and root mean square error (RMSE), are used. The primary reason for choosing RMSE is that, compared to MSE and other evaluation metrics, RMSE is more intuitive, more robust, and easier to interpret when assessing model performance. Consequently, RMSE is widely adopted in many practical applications. For example, when RMSE equals 10, the regression result can be considered to differ from the true value by 10 on average. The indicator equations are as follows:
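The indicator equations are not reproduced in this version; their standard definitions, with N denoting the number of test samples, are:

```latex
% Standard definitions of the three metrics; N is the number of test samples.
\mathrm{MSE}  = \frac{1}{N}\sum_{i=1}^{N}\left(y_i-\hat{y}_i\right)^{2}, \qquad
\mathrm{MAE}  = \frac{1}{N}\sum_{i=1}^{N}\left|y_i-\hat{y}_i\right|, \qquad
\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(y_i-\hat{y}_i\right)^{2}}
```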

where y is the true value and ŷ is the predicted value.

    5.3 Experimental results and analyses

We conduct comparative experiments on five datasets and multiple prediction tasks, comparing the widely used RNN, LSTM, and gated recurrent unit (GRU) models commonly employed in long-time-series prediction, as well as the representative Informer algorithm and its nonparallel variant Informer (np), with our parallel model LDformer and its nonparallel variant LDformer (np). The datasets are described in Table 1, and the corresponding experimental results are shown in Tables 2-5.

Table 2 Performance comparison of short-time-series prediction tasks in the ETT dataset at different prediction lengths (24, 36, 48)

Table 3 Performance comparison of long-time-series prediction tasks in the ETT dataset at different prediction lengths (96, 168, 336)

Table 4 Performance comparison of short-time-series prediction tasks in the weather and PEMS03 datasets at different prediction lengths (24, 48, 96)

Table 5 Performance comparison of long-time-series prediction tasks in the weather dataset at prediction lengths of 168, 336, and 720 and the PEMS03 dataset at prediction lengths of 288, 336, and 720

Tables 2 and 3 show the prediction performance of the baseline models and our models for short- and long-time-series prediction on the ETT dataset, respectively. As the prediction length increases, the prediction performance gain of LDformer continues to grow on all datasets. This demonstrates the success of LDformer in improving prediction performance for long-time-series prediction problems. On the ETTm1 dataset, when the prediction length is 48, the MSE, MAE, and RMSE of the proposed LDformer are 37.0%, 22.5%, and 20.6% lower than those of Informer, respectively. As the prediction length increases, the advantages of LDformer become more apparent. When the prediction length is 168, the MSE, MAE, and RMSE of LDformer are reduced by 16.5%, 13.1%, and 8.6% compared to those of Informer, respectively. When the prediction length is 336 on the ETTm1 dataset, LDformer (np) achieves the optimal result, while LDformer achieves the suboptimal result.

When the prediction length on ETTh1 is 24, the MSE, MAE, and RMSE of LDformer are 74.7%, 44.3%, and 49.7% lower than those of the traditional GRU model, respectively. As the prediction length increases, Informer shows good performance on the ETTh1 dataset at prediction lengths of 96 and 168. However, the performance of LDformer is better than those of the traditional models in most cases. For long-time-series prediction on the ETTh2 dataset, our model outperforms the traditional models. This phenomenon may be caused by the anisotropy of the feature dimensions; it is beyond the scope of this paper, and we will explore it in future work. Note that ETTh2 contains a large amount of continuous null data.

Tables 4 and 5 show the prediction performance of the baseline models and the proposed models for short- and long-time-series prediction on the weather and PEMS03 datasets, respectively. For the weather dataset, when the prediction length is 168, the MSE, MAE, and RMSE of LDformer are reduced by 29.4%, 14.4%, and 16.0%, respectively, compared to those of Informer (np). For the PEMS03 dataset, when the prediction length is 336, the MSE, MAE, and RMSE of LDformer are reduced by 12.6%, 9.8%, and 6.5%, respectively, compared to those of Informer. The experiments show that the proposed models achieve better results on the PEMS03 dataset with a time interval of 5 min and the ETTm1 dataset with a time interval of 15 min.

LDformer accurately captures the temporal information of each feature: the LSTM learns the long-term dependencies of the temporal data and fully exploits the deep representations of the time series. The ProbSparse attention mechanism combined with UniDrop inherits the advantages of the ProbSparse attention mechanism while avoiding the loss of some key connections in the sequence when modeling data correlations. Meanwhile, the encoder module adopts a multichannel parallel mode to improve the robustness of LDformer.

We calculate the average error between the true and predicted values for different datasets. The error is plotted as a histogram, as shown in Figs. 6 and 7.

In short-time-series prediction (Fig. 6), LDformer has the smallest average error between the predicted and true values, and its prediction accuracy is higher than those of the other models. For long-time-series prediction (Fig. 7), LDformer still achieves good results. However, the results on the ETTh2 dataset are unsatisfactory due to the presence of null data. LDformer combines the advantages of Informer and LSTM, and uses four channels in the encoder to improve the stability of the model. It also uses ProbSparse self-attention combined with UniDrop to reduce the loss of some key connections in the sequence, improving the accuracy of long-time-series prediction.

Fig. 6 Average error between the true and predicted values in short-time-series prediction (References to color refer to the online version of this figure)

Fig. 7 Average error between the true and predicted values in long-time-series prediction (References to color refer to the online version of this figure)

We plot the average loss corresponding to different learning rates for the four models. As shown in Fig. 8, the LDformer (np) proposed in this paper converges faster than the LDformer model.

    Fig.8 Convergence of loss with a decreasing learning rate for the ETTh1 dataset at a prediction length of 24

Fig. 9 shows the training runtime profiles of several models with an increasing number of epochs. Under the early-stopping mechanism, both LDformer and LDformer (np) stop training at the fourth epoch, Informer stops at the fifth epoch, and Informer (np) stops at the sixth epoch, where it obtains its optimal result. The training time of our models is significantly less than those of the other models. The proposed ProbSparse attention mechanism combined with UniDrop reduces the number of parameters without increasing the time complexity, and the convolutional layers extract dominant features, reducing the temporal feature dimension and avoiding redundancy in the attention mechanism. These are the key factors that reduce the training time.

    Fig.9 Training time comparison

We perform ablation experiments for each innovation point to further evaluate the effectiveness of the individual components in LDformer. Table 6 shows several models with various innovation points removed.

    Table 6 Introduction to the ablation experiment modules

We compare the MSE and MAE of our models on the four prediction tasks with those of the models with individual innovation points removed. Table 7 shows that removing each innovation point affects the results. Through its memory cells and gate mechanisms, the LSTM allows LDformer to extract deep representations from time-series data and to effectively capture long-term dependencies and longer contextual information. UniDrop reduces the number of parameters and the computational complexity, improving the model's parallel computing capability. Multiple parallel channels contribute to a more stable model structure by accepting sequences of various lengths, which increases the model's robustness to input noise and variations and makes it more adaptable to different input conditions and data distributions. The convolutional layer reduces the potential complexity and computational cost of the attention mechanism by reducing the number of parameters. By combining these modules, LDformer achieves optimal results on most prediction tasks.

    Table 7 The MSE and MAE of different models for the ETTm1 dataset at prediction lengths of 48 and 168 and the ETTh1 dataset at prediction lengths of 24 and 336

    6 Conclusions

In this paper, we propose a long-term power forecasting model, LDformer, and its nonparallel variant, LDformer (np), to solve the long-time-series prediction problem, taking the power dataset ETT as an example. The LDformer model obtains more accurate prediction results without increasing the time complexity. First, the ProbSparse self-attention mechanism combined with UniDrop is proposed to replace the ProbSparse self-attention mechanism, effectively avoiding the risk of losing key connections between sequences without increasing the complexity. Second, parallel and nonparallel perspectives are compared in the encoder module. Considering the number of parameters and model stability, we use convolutional layers for feature extraction between attention modules. Finally, we combine the improved model with LSTM to capture long-range time-series information and extract deep representations of the time series, improving the prediction accuracy. Experimental results confirm that LDformer performs better on long-time-series prediction tasks for short-interval datasets. Ablation experiments have been conducted for each innovation point proposed in this study to demonstrate its feasibility. However, the presented method still has room for improvement in dealing with long-time-series prediction tasks for long-interval datasets.

    Contributors

Ran TIAN designed the research. Xinmei LI developed the methodology, curated the data, and worked on the software. Zhongyu MA conducted the investigation. Yanxing LIU processed the data. Jingxia WANG conducted the data visualization and result validation. Chu WANG verified the experimental results. Xinmei LI drafted the paper. Ran TIAN revised and finalized the paper.

    Compliance with ethics guidelines

Ran TIAN, Xinmei LI, Zhongyu MA, Yanxing LIU, Jingxia WANG, and Chu WANG declare that they have no conflict of interest.

    Data availability

    The data that support the findings of this study are available from the corresponding author upon reasonable request.
