
    LDformer: a parallel neural network model for long-term power forecasting*


    Ran TIAN,Xinmei LI,Zhongyu MA,Yanxing LIU,Jingxia WANG,Chu WANG

    College of Computer Science & Engineering, Northwest Normal University, Lanzhou 730070, China

    Abstract: Accurate long-term power forecasting is important in the decision-making operation of the power grid and in customers' power consumption management, to ensure the reliable power supply of the power system and the economical operation of the grid. However, most time-series forecasting models do not perform well on long-time-series prediction tasks with a large amount of data. To address this challenge, we propose a parallel time-series prediction model called LDformer. First, we combine Informer with long short-term memory (LSTM) to obtain deep representations of the time series. Then, we propose a parallel encoder module to improve the robustness of the model and combine convolutional layers with the attention mechanism to avoid value redundancy in the attention mechanism. Finally, we propose a probabilistic sparse (ProbSparse) self-attention mechanism combined with UniDrop to reduce the computational overhead and mitigate the risk of losing key connections in the sequence. Experimental results on five datasets show that LDformer outperforms state-of-the-art methods in most cases across the different long-time-series prediction tasks.

    Key words: Long-term power forecasting; Long short-term memory (LSTM); UniDrop; Self-attention mechanism

    1 Introduction

    Power forecasting is an important part of power system planning and the basis of the economic operation of power systems. Accurate power forecasting helps provide a reliable decision-making basis for power system planning and operation, thus reducing power production costs (Ciechulski and Osowski, 2021). Traditional prediction methods have achieved great results in meteorology, finance, industry, and transportation for short-term forecasting problems. However, existing time-series methods still have limitations in long-time-series prediction:

    1. Due to the highly complex and large-scale nature of long-time-series data, traditional methods are limited in efficiently handling high-dimensional large data and representing complex functions (Han et al., 2021). They tend to forget some data and ignore the long-term dependencies of the time-series data.

    2. The model structures of traditional methods are not stable when long-term power forecasting is performed.

    3. The existing attention mechanism is prone to losing key connections between sequences when capturing the dependencies among long-time-series data, leading to the degradation of prediction performance.

    Aiming at the problem of long-time-series prediction, we propose LDformer, a parallel neural network model for long-term power forecasting based on the Informer framework (Zhou et al., 2021). The contributions of this study are summarized as follows:

    1. We propose a parallel neural network model called LDformer, which combines the advantages of Informer and long short-term memory (LSTM) to effectively solve the problem of long-term power forecasting. By using LSTM to learn the long-term correlations of time-series data, the model achieves good long-term forecasting performance.

    2. We propose a parallel encoder module that combines a convolutional layer and an attention mechanism to avoid value redundancy in the attention mechanism. Several experiments on different datasets validate the effectiveness and robustness of the parallel encoder and the convolutional layer.

    3. We propose a probabilistic sparse (ProbSparse) self-attention mechanism combined with UniDrop. UniDrop does not need additional computational overhead or external resources. Combined with UniDrop, the ProbSparse attention mechanism can mitigate the risk of losing some key connections in the sequence.

    2 Related works

    Long-time-series prediction plays an essential role in decision-making in many fields, such as economy, transportation, medicine, hydrology, and energy (Zhang D et al., 2018; Chakraborty et al., 2019; Chuku et al., 2019; Xie and Lou, 2019; Marcjasz et al., 2020). However, a prediction model may produce inaccurate results due to the different patterns of actual time series. In this paper, we review both machine learning and deep learning approaches.

    2.1 Machine learning

    In time-series prediction, the autoregressive integrated moving average (ARIMA) model is widely used in various fields. Chen JF et al. (1995) developed an adaptive ARIMA model for short-term power forecasting of a power generation system. Viccione et al. (2020) applied the ARIMA model to tank water level prediction and analysis. Many researchers have also achieved better results by improving ARIMA models or combining ARIMA with other models. Xu et al. (2017) proposed an ARIMA and Kalman filter-based approach applied to real-time road traffic state prediction. Khan et al. (2020) proposed a wavelet-based hybrid ARIMA model. However, although the ARIMA model has achieved great success in stationary time-series prediction, there are almost no purely stationary data in real time series. Therefore, the application of the ARIMA model is limited by the data characteristics and is less general. As a result, many models other than ARIMA have been applied to time-series forecasting. Syriopoulos et al. (2021) used a support vector machine (SVM) to predict shipping prices. Based on SVM, Ding et al. (2021) developed a time-series model based on the least squares support vector machine (LSSVM) and achieved better results. Nóbrega and Oliveira (2019) proposed sequential learning methods combining the Kalman filter and an extreme learning machine for regression and time-series prediction, obtaining better results. In addition to the normal case, many researchers have also modeled predictions for time-series data with outliers. Chen XY and Sun (2022) proposed a Bayesian temporal factorization framework for modeling and predicting multidimensional time series of specific spatiotemporal data in the presence of missing values. However, machine learning methods cannot obtain sufficiently accurate results for complex prediction problems.

    2.2 Deep learning

    With the development of deep learning, researchers have found that deep learning is applicable to complex time-series problems (Kim et al., 2018). Many scholars use convolutional neural networks (CNNs) to model and predict time-series data (Guo et al., 2019; Hosseini and Talebpour, 2019). Cao and Wang (2019) applied a CNN to stock forecasting. However, a CNN is more suitable for spatial correlation than for time series. As a result, recurrent neural networks (RNNs) have emerged. Hu et al. (2020) applied RNNs to traffic flow prediction based on time-series analysis. Min et al. (2019) proposed an RNN-based intent inference model to solve the time-series prediction problem. Many studies have proven that RNNs have obvious advantages in time-series prediction (Xiao et al., 2019; Zhang L et al., 2019). However, when the time series is too long, gradient vanishing and gradient explosion may occur during RNN training (Zheng and Huang, 2020).

    LSTM has been applied to address these problems (Karevan and Suykens, 2020). LSTM is more suitable for long time series than RNNs. Therefore, many deep learning models based on LSTM are applied to time-series prediction. Zhang TJ et al. (2019) used LSTM for gas concentration prediction. Miao et al. (2020) proposed a novel LSTM framework for short-term fog prediction, consisting of an LSTM and a fully connected layer. Their experiments showed that the framework outperformed the K-nearest neighbor (KNN), AdaBoost, and CNN algorithms. Gai et al. (2021) proposed a new parking space prediction model based on LSTM, providing more accurate predictions. Many researchers have also proposed combined models of LSTM and other models (Ran et al., 2019; Wang ZP et al., 2020), which proves the feasibility of LSTM in time-series prediction. However, we can learn from Khandelwal et al. (2018) that the effective context size of a language model using LSTM is approximately 200 tokens on average. Nevertheless, the model can sharply distinguish only about 50 nearby tokens, which indicates that even LSTM has difficulty capturing long-term dependencies.

    In recent years, the transformer has been applied in many fields to perform long-time-series prediction tasks. Wu N et al. (2020) applied the transformer model to the prediction of influenza-like illness; this method can learn complex patterns and dynamics from time-series data using the self-attention mechanism. As a general framework, the transformer can not only be applied to univariate and multivariate time-series data, but can also be used for time-series embedding. However, because the point-by-point dot product in the transformer architecture is insensitive to the local context, its space complexity is too large. Therefore, Li et al. (2019) proposed the LogSparse transformer, which improves the prediction accuracy of time series with fine granularity and strong long-term dependence under a limited memory budget. Zhou et al. (2021) proposed the Informer and applied it to electricity consumption planning. Unlike the self-attention mechanism used in the transformer (Vaswani et al., 2017), this model uses a new ProbSparse self-attention (Zhou et al., 2021), which reduces the complexity and improves the transformer model's computation, memory usage, and efficiency. Although existing time-series prediction methods have promoted the development of the time-series field to a certain extent, there is still room for improvement. Because of the increase in data volume, many time-series prediction models overlook deep representations of temporal data and also suffer from problems such as numerous parameters, large memory consumption, and long running time.

    Compared with previous work, LDformer not only considers the deep representation of temporal data, but also mitigates the risk of losing critical connections between sequences while keeping the complexity low. In addition, LDformer combines the convolutional layer and the attention mechanism to obtain a parallel encoder module, which prevents redundancy in the attention mechanism while improving model robustness. These methods effectively improve the prediction accuracy.

    3 Preliminaries

    As shown in Eq. (1), the objective of long-term power forecasting is to use a model f to predict the power load data of a certain data point at n time steps based on the historical power load data of multiple data points at m time steps.
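    In a generic form consistent with this description (the exact notation of Eq. (1) may differ), the forecasting task can be written as $\hat{y}_{t+1}, \ldots, \hat{y}_{t+n} = f(\mathbf{x}_{t-m+1}, \ldots, \mathbf{x}_{t})$, where each $\mathbf{x}_j$ collects the multivariate power load values observed at time step $j$ and $\hat{y}$ denotes the predicted value of the target data point.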

    4 Long-time-series prediction model

    The overall framework of LDformer is shown in Fig. 1. LDformer contains LSTM, encoder, and decoder structures. First, the time-series data enter the embedding layer, where data, position, and time information are converted into a uniform dimension and then merged. The processed data then enter the LSTM, allowing for more effective use of long-term temporal information. Second, the data enter the encoder, which uses a multichannel parallel mode to receive long-sequence inputs and improve model robustness. At the same time, we extract the main features by adding convolutional layers in the encoder module; the convolutional layer generates a concentrated attention feature map to reduce feature redundancy. The decoder receives long-sequence inputs and predicts the output elements in one generative forward step. After the embedding layer, the data enter the decoder. To ensure that the output decoded at time t depends only on the outputs before time t, X_0 is used as a placeholder for the target sequence, padded with 0, and a mask is added to the first attention layer of the decoder to prevent the target information from being used prematurely. Finally, the prediction results of the last column are output through a fully connected layer.

    Fig.1 Framework of LDformer
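    For illustration, a minimal PyTorch-style sketch of this pipeline is given below. It reflects only the data flow described above (embedding, LSTM, parallel encoder, decoder, fully connected projection); the class name, constructor arguments, and injected submodules are ours, not those of the released implementation.

```python
import torch.nn as nn

class LDformerSketch(nn.Module):
    """Structural sketch of LDformer: embedding -> LSTM -> encoder -> decoder -> projection."""
    def __init__(self, enc_embedding, dec_embedding, encoder, decoder, d_model, c_out):
        super().__init__()
        self.enc_embedding = enc_embedding        # data + position + time embedding (Section 4.1)
        self.dec_embedding = dec_embedding
        self.lstm = nn.LSTM(d_model, d_model, batch_first=True)  # deep temporal representation
        self.encoder = encoder                    # multichannel parallel encoder (Section 4.2)
        self.decoder = decoder                    # decoder described in Section 4.3
        self.projection = nn.Linear(d_model, c_out)

    def forward(self, x_enc, x_mark_enc, x_dec, x_mark_dec):
        enc_in, _ = self.lstm(self.enc_embedding(x_enc, x_mark_enc))
        enc_out = self.encoder(enc_in)            # robust long-range features
        dec_in = self.dec_embedding(x_dec, x_mark_dec)   # start token + zero placeholder X_0
        dec_out = self.decoder(dec_in, enc_out)
        return self.projection(dec_out)           # prediction of the last column
```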

    4.1 Embedding layers with multiple perspectives

    First, the preprocessed data are unified into the same dimension through data encoding, position encoding, and timestamp encoding, and the final embedding result is obtained by summing the results of these three encodings.

    1. Data embedding (DE): Convert the data dimension x into the uniform dimension d_model using one-dimensional (1D) convolution. The formula is as follows:

    2. Position embedding (PE) (Vaswani et al., 2017): Compared to RNNs, positional embedding uses a different approach where elements within the input sequence are processed concurrently, thereby preserving the positional information of each element within the sequence. Although the processing speed of RNN is higher than that of PE, it ignores the order of elements in the sequence, so we choose positional embedding. The formulae are as follows:
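    Assuming the standard sinusoidal encoding of Vaswani et al. (2017), which this description follows, the formulae take the form $\mathrm{PE}(\mathrm{pos}, 2i) = \sin\big(\mathrm{pos}/10000^{2i/d_{\mathrm{model}}}\big)$ and $\mathrm{PE}(\mathrm{pos}, 2i+1) = \cos\big(\mathrm{pos}/10000^{2i/d_{\mathrm{model}}}\big)$,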

    where pos denotes the position of the element in the sequence, d_model denotes the dimension of the element vector, and i indexes the dimensions of the element vector.

    3. Time embedding: There are various methods for time embedding: month_embed, day_embed, weekday_embed, hour_embed, and minute_embed. The time slices of the datasets used in this paper are mainly in hours and minutes, so hour_embed and minute_embed are chosen to obtain the timestamp encoding results.

    The sum of these three embedding components is the output of the final embedding layer. The overall embedding layer structure is shown in Fig. 2.

    Fig.2 Embedding layer structure
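    A minimal sketch of this embedding layer is shown below, assuming a PyTorch implementation; the kernel size and padding of the 1D convolution and the vocabulary sizes of the time embeddings are our assumptions for illustration.

```python
import torch
import torch.nn as nn

class EmbeddingSketch(nn.Module):
    """Data embedding (1D conv) + sinusoidal position embedding + time embedding, summed."""
    def __init__(self, c_in, d_model):
        super().__init__()
        # Data embedding: map the x-dimensional input to d_model with a 1D convolution.
        self.value_conv = nn.Conv1d(c_in, d_model, kernel_size=3, padding=1)
        # Time embedding: hour-of-day and minute-slice lookup tables (hour_embed, minute_embed).
        self.hour_embed = nn.Embedding(24, d_model)
        self.minute_embed = nn.Embedding(4, d_model)   # 15-min slices -> 4 positions per hour

    @staticmethod
    def positional(seq_len, d_model, device):
        # Sinusoidal positional encoding (Vaswani et al., 2017); assumes an even d_model.
        pos = torch.arange(seq_len, device=device, dtype=torch.float32).unsqueeze(1)
        i = torch.arange(0, d_model, 2, device=device, dtype=torch.float32)
        angle = pos / torch.pow(10000.0, i / d_model)
        pe = torch.zeros(seq_len, d_model, device=device)
        pe[:, 0::2] = torch.sin(angle)
        pe[:, 1::2] = torch.cos(angle)
        return pe

    def forward(self, x, hour_idx, minute_idx):
        # x: (batch, seq_len, c_in); hour_idx, minute_idx: (batch, seq_len) integer time marks
        de = self.value_conv(x.permute(0, 2, 1)).permute(0, 2, 1)
        pe = self.positional(x.size(1), de.size(-1), x.device)
        te = self.hour_embed(hour_idx) + self.minute_embed(minute_idx)
        return de + pe + te                        # sum of the three embedding components
```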

    The data output by the embedding layer are sent to the LSTM for feature extraction. To avoid affecting the input of the subsequent encoder, this study keeps the output of the LSTM consistent with the output of the embedding; i.e., the LSTM output dimension remains d_model. The output of the LSTM is the input of the encoder. LSTM performs well on long sequences; therefore, using LSTM can extract deep representations of the time series and improve the prediction accuracy.

    4.2 Multichannel parallel encoder module

    We build the encoder module by combining the attention mechanism with the convolutional layer. It takes four channels of length L, L/2, L/4, and L/8 and executes them in parallel. The convolutional layer performs dimensional pruning and reduces the memory footprint before the output of the upper layer is sent to the lower multihead attention layer. The convolutional layer has one fewer layer than the encoder. The multichannel parallel encoder module is shown in Fig. 3.

    Fig.3 Multichannel parallel encoder module (References to color refer to the online version of this figure)

    The encoder is used mainly to extract robust long-range dependencies from time-series data. The overall architecture of the encoder is roughly the same as that of the transformer. It includes two main sublayers: the multihead attention layer (the ProbSparse self-attention mechanism combined with UniDrop) and the feed-forward layer composed of two linear mappings. A batch normalization layer follows both sublayers, with skip connections around the sublayers.
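    The sketch below illustrates this structure under an assumed PyTorch-style implementation: one encoder sublayer pair with skip connections and batch normalization, plus a wrapper that runs several encoder stacks in parallel on inputs of length L, L/2, L/4, and L/8 and concatenates their feature maps. The attention argument stands for a drop-in ProbSparse-with-UniDrop module, and the way the shortened inputs are sliced is our assumption.

```python
import torch
import torch.nn as nn

class EncoderLayerSketch(nn.Module):
    """Attention sublayer + two-linear-mapping feed-forward, each wrapped by add & batch norm."""
    def __init__(self, attention, d_model, d_ff, dropout=0.1):
        super().__init__()
        self.attention = attention               # ProbSparse self-attention combined with UniDrop
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm1 = nn.BatchNorm1d(d_model)
        self.norm2 = nn.BatchNorm1d(d_model)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):
        # x: (batch, seq_len, d_model); skip connections around both sublayers
        y = x + self.dropout(self.attention(x, x, x))
        y = self.norm1(y.transpose(1, 2)).transpose(1, 2)   # BatchNorm1d expects channels first
        z = y + self.dropout(self.ff(y))
        return self.norm2(z.transpose(1, 2)).transpose(1, 2)

def parallel_encoder(encoder_stacks, x):
    """Run the stacks on progressively halved input lengths (L, L/2, L/4, L/8) and concatenate."""
    L = x.size(1)
    outputs = [stack(x[:, -(L // 2 ** k):, :]) for k, stack in enumerate(encoder_stacks)]
    return torch.cat(outputs, dim=1)
```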

    4.2.1 ProbSparse self-attention mechanism combined with UniDrop

    The ProbSparse self-attention mechanism may create some risk of losing critical connections. To address this problem, we propose a ProbSparse self-attention mechanism combined with UniDrop. Unlike the self-attention mechanism and the ProbSparse self-attention mechanism, we consider other possible problems while retaining their advantages.

    The canonical self-attention mechanism consists of a query and a set of key-value pairs. The formula is as follows (Vaswani et al., 2017):
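    This is the standard scaled dot-product form, $\mathrm{Attention}(Q, K, V) = \mathrm{Softmax}\big(QK^{\top}/\sqrt{d}\big)V$, where $Q$, $K$, and $V$ are the query, key, and value matrices and $d$ is the dimension of the queries and keys.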

    Fig.4 UniDrop structure: (a) attention; (b) feed forward; (c) output prediction (MatMul: matrix multiplication; H: hidden layer)

    Define the attention of the i-th row of Q', K', and V' obtained after dropout as a kernel smoother in the form of a probability:

    Dropping the constant, the sparsity measure of the i-th query can be defined as

    Eq. (9) traverses all queries that require the computation of each dot-product pair, but the log-sum-exp (LSE) operation may have numerical stability issues. Based on this, the above formula is improved to obtain the final sparsity measure formula:

    Based on the above steps, we obtain ProbSparse self-attention combined with UniDrop by allowing each key to attend to only the u dominant queries:
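    For reference, these quantities follow the original Informer formulation (Zhou et al., 2021), presumably adapted here to the post-UniDrop matrices $Q'$, $K'$, and $V'$: the sparsity measure of the $i$-th query is $M(q'_i, K') = \ln \sum_{j=1}^{L_K} e^{q'_i {k'_j}^{\top}/\sqrt{d}} - \frac{1}{L_K} \sum_{j=1}^{L_K} q'_i {k'_j}^{\top}/\sqrt{d}$, its numerically stable max-mean approximation is $\bar{M}(q'_i, K') = \max_j \{ q'_i {k'_j}^{\top}/\sqrt{d} \} - \frac{1}{L_K} \sum_{j=1}^{L_K} q'_i {k'_j}^{\top}/\sqrt{d}$, and the resulting attention is $\mathrm{Attention}(Q', K', V') = \mathrm{Softmax}\big(\bar{Q}' {K'}^{\top}/\sqrt{d}\big)V'$, where $\bar{Q}'$ is a sparse matrix containing only the top-$u$ queries under $\bar{M}$.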

    4.2.2 Convolutional layer

    As a natural consequence of the ProbSparse attention mechanism combined with UniDrop, the feature map of the encoder contains redundant combinations of values V'. In the next layer, we use convolution to extract the dominant features so that a focused attention feature map is generated, as shown in Fig. 3, which largely reduces the temporal dimension of the input. CNNs are good at identifying simple patterns in data and composing them into more complex patterns in higher layers. Conv1D is effective in obtaining features of interest from data whose locations are not highly correlated, and Conv1D is well suited to time-series analysis of sensor data. Therefore, we select Conv1D to extract features and set its convolution kernel to 3×3. The formula is as follows:

    where [·]_AB contains the basic operations in the multihead attention and attention block, and Conv1D uses the LeakyReLU activation function and executes in the time dimension. LeakyReLU is a variant of ReLU. It introduces a small change when the input value is less than 0, alleviating the sparsity of ReLU while inheriting its advantages. It can also speed up convergence, alleviate the gradient vanishing and explosion problems, and simplify the calculation. The LeakyReLU activation function is as follows:
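    The standard definition, which this activation follows, is $\mathrm{LeakyReLU}(x) = x$ for $x \geq 0$ and $\alpha x$ for $x < 0$, where $\alpha$ is a small positive slope. A minimal sketch of such a convolutional step, assuming a PyTorch implementation with a kernel size of 3 along the time axis, is:

```python
import torch.nn as nn

def conv_distill_layer(d_model: int) -> nn.Module:
    """Illustrative convolutional step between attention blocks, operating on (batch, d_model, seq_len)."""
    return nn.Sequential(
        nn.Conv1d(d_model, d_model, kernel_size=3, padding=1),  # Conv1D over the time dimension
        nn.LeakyReLU(negative_slope=0.01),                      # small slope alpha for inputs below 0
        nn.MaxPool1d(kernel_size=3, stride=2, padding=1),       # halves the temporal length; our assumption,
                                                                # following Informer's distilling operation
    )
```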

    4.3 Decoder

    The decoder generates the time-series output through a forward process, and part of its structure follows the decoder structure of the transformer, as shown in Fig. 5.

    Fig.5 Decoder for growing sequence output through forward process generation

    The decoder consists of two attention layers and a feed-forward layer composed of linear mappings. The decoder's input vector is as follows:
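    Following the Informer-style decoder input described in Section 4 (a start-token segment concatenated with the zero placeholder), the input vector presumably takes the form $X_{\mathrm{de}} = \mathrm{Concat}(X_{\mathrm{token}}, X_{0})$, where $X_{\mathrm{token}}$ is a slice of the historical sequence and $X_{0}$ is the zero-padded placeholder for the target sequence.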

    The first attention layer is the ProbSparse self-attention mechanism combined with UniDrop, as shown in Eq. (11). Masked multihead self-attention sets the masked dot products to −∞, which prevents attending to future positions during training and avoids the autoregression problem. The second attention layer is ordinary self-attention, as shown in Eq. (5). An add & norm layer follows both attention layers. Add & norm is calculated as follows:
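    In the standard formulation, which this layer presumably follows, $\mathrm{AddNorm}(x) = \mathrm{Norm}\big(x + \mathrm{Sublayer}(x)\big)$, where $\mathrm{Sublayer}(\cdot)$ denotes the attention (or feed-forward) operation wrapped by the residual connection and $\mathrm{Norm}(\cdot)$ is the normalization layer.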

    Finally, the prediction results are output directly through a fully connected layer.

    5 Experiments and analyses

    5.1 Datasets

    We perform extensive experiments on five datasets, namely the ETTm1, ETTh1, ETTh2, PEMS03, and weather datasets. ETTm1, ETTh1, and ETTh2 are collectively referred to as the ETT dataset. Table 1 describes the datasets.

    Table 1 Datasets and prediction task descriptions

    ETT (Zhou et al., 2021): ETT is a key indicator of long-term electricity deployment. The dataset contains two years of data collected from two cities in China. ETTm1 takes one data point every 15 min; ETTh1 and ETTh2 take one data point every hour. Each data point consists of the target value oil temperature (OT) and six different types of external load values, i.e., high useful load (HUFL), high useless load (HULL), middle useful load (MUFL), middle useless load (MULL), low useful load (LUFL), and low useless load (LULL). We use a multivariate-input univariate-output prediction model with six power load features to predict the target value OT. According to the time characteristics, we divide the data into training, validation, and test sets of 12, 4, and 4 months, respectively.

    Weather (Zhou et al., 2021): This dataset contains local climate data for nearly 1600 U.S. locations. Each data point consists of the target value (wet bulb) and 11 climate features. We divide the data into training, validation, and test sets of 12, 4, and 4 months, respectively.

    PEMS03 (Wang C et al., 2023): This dataset contains traffic flow data of the California highway network. It is divided into five-minute intervals and contains data from 307 sensors for the three months from 2018/9/1 to 2018/11/30. This experiment uses the three months of traffic data from the first seven sensors. The dataset is divided in a 7:2:1 ratio into training, test, and validation sets. The seventh sensor is used as the target sensor for the experiment.

    5.2 Experimental setting

    In this study, the model uses both parallel and nonparallel modes in the encoder. The parallel mode has four channels executing in parallel and three stacks. The decoder contains two stacks. The sampling factor c in the top-u formula is set to 5. The dropout parameter is 0.1. The number of attention heads (n-heads) is set to 8. When predicting the target sequence, the mean square error (MSE) is selected as the loss function, and the loss is back-propagated through the whole model from the output of the decoder. The learning rate in the experimental setup starts from 1e-4 and decays by a factor of 2 in each period. The model employs the Adam optimizer as its optimization algorithm, with 10 epochs and a batch size of 64. Moreover, prior to conducting the experiments, each input dataset is standardized.
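    For clarity, the hyperparameters reported above can be summarized as follows; the dictionary keys and the commented optimizer lines are our illustrative assumptions, not the authors' released configuration.

```python
# Illustrative summary of the reported training setup (key names are ours):
config = {
    "encoder_channels": 4,     # parallel channels in the encoder
    "encoder_stacks": 3,
    "decoder_stacks": 2,
    "sampling_factor_c": 5,    # sampling factor c in the top-u formula
    "dropout": 0.1,
    "n_heads": 8,
    "loss": "MSE",
    "learning_rate": 1e-4,     # decays by a factor of 2 in each period
    "epochs": 10,
    "batch_size": 64,
}

# Assuming a PyTorch implementation, the optimizer and decay could be set up as:
# optimizer = torch.optim.Adam(model.parameters(), lr=config["learning_rate"])
# scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.5)
# criterion = torch.nn.MSELoss()
```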

    Assessment metrics: three evaluation indexes, MSE, mean absolute error (MAE), and root mean square error (RMSE), are used. The primary reason for choosing RMSE is that, compared to MSE and other evaluation metrics, RMSE is more intuitive, more robust, and easier to interpret when assessing model performance. Consequently, RMSE is widely adopted in numerous practical applications. For example, when RMSE is equal to 10, the predictions can be considered to differ from the true values by 10 on average. The indicator equations are as follows:
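    These follow the standard definitions over the $n$ predicted points: $\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2$, $\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}|y_i - \hat{y}_i|$, and $\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}$,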

    where y is the true value and ŷ is the prediction value.

    5.3 Experimental results and analyses

    We conduct comparative experiments on five datasets and multiple prediction tasks, comparing the widely used RNN, LSTM, and gated recurrent unit (GRU) models commonly employed in long-time-series prediction, as well as the representative Informer algorithm and its nonparallel variant Informer (np), with our parallel model LDformer and its nonparallel variant LDformer (np). The specific description of the datasets is shown in Table 1. The corresponding experimental results are shown in Tables 2-5.

    Table 2 Performance comparison of short-time-series prediction tasks in the ETT dataset at different prediction lengths (24, 36, 48)

    Table 3 Performance comparison of long-time-series prediction tasks in the ETT dataset at different prediction lengths (96, 168, 336)

    Table 4 Performance comparison of short-time-series prediction tasks in the weather and PEMS03 datasets at different prediction lengths (24, 48, 96)

    Table 5 Performance comparison of long-time-series prediction tasks in the weather dataset at prediction lengths of 168, 336, and 720 and in the PEMS03 dataset at prediction lengths of 288, 336, and 720

    Tables 2 and 3 show the prediction performance of the baseline models and our models for short and long time series of the ETT dataset, respectively. As the prediction length increases, the prediction performance of LDformer continues to improve relative to the baselines on all datasets. This demonstrates the success of LDformer in improving prediction performance on long-time-series prediction problems. On the ETTm1 dataset, when the prediction length is 48, the MSE, MAE, and RMSE of the LDformer model proposed in this paper are 37.0%, 22.5%, and 20.6% lower than those of Informer, respectively. As the prediction length increases, the advantages of LDformer become more apparent. When the prediction length is 168, the MSE, MAE, and RMSE of LDformer are reduced by 16.5%, 13.1%, and 8.6% compared to those of Informer, respectively. When the prediction length is 336 on the ETTm1 dataset, the LDformer (np) model gives the optimal result, while the LDformer model gives the suboptimal result.

    When the prediction length on ETTh1 is 24, the MSE, MAE, and RMSE of LDformer are 74.7%, 44.3%, and 49.7% lower than those of the traditional GRU model, respectively. After increasing the prediction length, Informer shows good performance on the ETTh1 dataset when the prediction length is 96 and 168. However, the performance of LDformer is better than those of the traditional models in most cases. On the ETTh2 dataset for long-time-series prediction, the model in this paper outperforms the traditional models. This phenomenon may be caused by the anisotropy of the feature dimensions; it is beyond the scope of this paper, and we will explore it in future work. Note that ETTh2 contains a large amount of continuous null data.

    Tables 4 and 5 show the prediction performance of the baseline models and the models proposed in this paper for short and long time series in the weather and PEMS03 datasets, respectively. For the weather dataset, when the prediction length is 168, the MSE, MAE, and RMSE of LDformer are reduced by 29.4%, 14.4%, and 16.0%, respectively, compared to those of Informer (np). For the PEMS03 dataset, when the prediction length is 336, the MSE, MAE, and RMSE of LDformer are reduced by 12.6%, 9.8%, and 6.5%, respectively, compared to those of Informer. The experiments show that the models proposed in this paper achieve better results for the PEMS03 dataset with a time interval of 5 min and the ETTm1 dataset with a time interval of 15 min.

    LDformer accurately obtains the temporal information of each feature: LSTM learns the long-term dependency of the temporal data and fully exploits the deep representations of the time-series data. The ProbSparse attention mechanism combined with UniDrop inherits the advantages of the ProbSparse attention mechanism, while preventing the attention mechanism from losing some key connections in the sequence when considering data correlation. Meanwhile, the encoder module adopts a multichannel parallel mode to improve the robustness of LDformer.

    We calculate the average error between the true and prediction values for different datasets. The errors are plotted as histograms, as shown in Figs. 6 and 7.

    In short-time-series prediction (Fig. 6), LDformer has the smallest average error between the prediction and true values, and its prediction accuracy is higher than that of the other models. For long-time-series prediction (Fig. 7), LDformer still achieves good results. However, the results on the ETTh2 dataset are unsatisfactory due to the presence of null data. LDformer combines the advantages of Informer and LSTM, and uses four channels in the encoder to improve the stability of the model. It also uses ProbSparse self-attention combined with UniDrop to reduce the loss of key connections in the sequence and improve the accuracy of long-time-series prediction.

    Fig.6 Average error of the true and prediction values in the short-time-series prediction (References to color refer to the online version of this figure)

    Fig.7 Average error of the true and prediction values in the long-time-series prediction (References to color refer to the online version of this figure)

    We plot the average loss corresponding to different learning rates for the four models. As shown in Fig. 8, the LDformer (np) proposed in this paper converges faster than the LDformer model.

    Fig.8 Convergence of loss with a decreasing learning rate for the ETTh1 dataset at a prediction length of 24

    Fig. 9 shows the training runtime profiles of several models with an increasing number of epochs. Under the early stopping mechanism, both LDformer and LDformer (np) stop training at the fourth epoch, Informer stops training at the fifth epoch, and Informer (np) stops training at the sixth epoch, where it obtains its optimal result. The training time of the models in this paper is significantly less than that of the other models. Our proposed ProbSparse attention mechanism combined with UniDrop reduces the number of parameters without increasing the time complexity. The convolutional layers extract the dominant features, reducing the temporal feature dimension and avoiding redundancy in the attention mechanism. This is the key factor that reduces the training time.

    Fig.9 Training time comparison

    We perform ablation experiments on each innovation point to further evaluate the effectiveness of the individual components of LDformer. Table 6 lists the models with the various innovation points removed.

    Table 6 Introduction to the ablation experiment modules

    We compare the MSE and MAE of our models on the four prediction tasks with those of the models with each innovation point removed. Table 7 shows that removing each innovation point affects the results. Through its memory cells and gate mechanisms, LSTM can effectively capture long-term dependencies and longer contextual information, so LDformer can extract deep representations from time-series data. UniDrop reduces the number of parameters and the computational complexity, improving the model's parallel computing capability. Multiple parallel channels contribute to a more stable model structure by accepting sequences of various lengths, which increases the model's robustness to input noise and variations and makes it more adaptable to different input conditions and data distributions. The convolutional layer reduces the potential complexity and computational cost of the attention mechanism by reducing the number of parameters. By combining these modules, LDformer achieves optimal results on most prediction tasks.

    Table 7 The MSE and MAE of different models for the ETTm1 dataset at prediction lengths of 48 and 168 and the ETTh1 dataset at prediction lengths of 24 and 336

    6 Conclusions

    In this paper, we propose a long-term power forecasting model, LDformer, and its nonparallel variant LDformer (np) to solve the long-time-series prediction problem, with the power dataset ETT as an example. The LDformer model obtains more accurate prediction results without increasing the time complexity. First, the ProbSparse self-attention mechanism combined with UniDrop is proposed to replace the ProbSparse self-attention mechanism, effectively avoiding the risk of losing key connections between sequences without increasing the complexity. Second, two perspectives, parallel and nonparallel, are compared in the encoder module. Considering the number of parameters and model stability, we use convolutional layers for feature extraction between attention modules. Finally, we combine the improved model with LSTM to capture long-range time-series information and extract deep representations of the time series to improve the prediction accuracy. Experimental results confirm that LDformer performs better on long-time-series prediction tasks for short-interval datasets. For each innovation point proposed in this study, ablation experiments have been conducted to demonstrate its feasibility. However, the method presented in this study still has potential for improvement in dealing with long-time-series prediction tasks for long-interval datasets.

    Contributors

    Ran TIAN designed the research. Xinmei LI developed the methodology, curated the data, and worked on the software. Zhongyu MA conducted the investigation. Yanxing LIU processed the data. Jingxia WANG conducted the data visualization and result validation. Chu WANG verified the experimental results. Xinmei LI drafted the paper. Ran TIAN revised and finalized the paper.

    Compliance with ethics guidelines

    Ran TIAN, Xinmei LI, Zhongyu MA, Yanxing LIU, Jingxia WANG, and Chu WANG declare that they have no conflict of interest.

    Data availability

    The data that support the findings of this study are available from the corresponding author upon reasonable request.
