
    A novel CBAMs-BiLSTM model for Chinese stock market forecasting


    Chenhao Cui and Yong Li

    School of Management, University of Science and Technology of China, Hefei 230026, China

    Abstract: The convolutional block attention module (CBAM) has demonstrated its superiority in various prediction problems, as it effectively enhances the prediction accuracy of deep learning models. However, there has been limited research testing the effectiveness of CBAM in predicting stock indexes. To fill this gap and improve the prediction accuracy of stock indexes, we propose a novel model called CBAMs-BiLSTM, which combines multiple CBAM modules with a bidirectional long short-term memory network (BiLSTM). In this study, we employ the standard metric evaluation method (SME) and the model confidence set test (MCS) to comprehensively evaluate the superiority and robustness of our model. We utilize two representative Chinese stock index data sets, namely, the SSE Composite Index and the SZSE Composite Index, as our experimental data. The numerical results demonstrate that CBAMs-BiLSTM outperforms BiLSTM alone, achieving average reductions of 13.06%, 13.39%, and 12.48% in MAE, RMSE, and MAPE, respectively. These findings confirm that CBAM can effectively enhance the prediction accuracy of BiLSTM. Furthermore, we compare our proposed model with other popular models and examine the impact of changing data sets, prediction methods, and the size of the training set. The results consistently demonstrate the superiority and robustness of our proposed model in terms of prediction accuracy and investment returns.

    Keywords: stock index prediction; BiLSTM; CBAM; MCS; SME

    1 Introduction

    Stocks play a pivotal role in financial markets, making stock indexes of great interest to regulators and investors alike. At the macro level, stock indexes are influential factors in the stability of the financial environment and economic development, as well as serving as early warning indicators for the economic climate[1]. Thus, stock indexes hold significant importance for regulators. On a micro level, the fluctuations of stock indexes directly impact investment risks and returns. Accurate prediction of stock indexes not only aids regulators in overseeing stock markets but also assists investors in making informed investment decisions. However, the prediction of stock indexes is a challenging task due to the complex factors influencing them, such as price levels, monetary policies, and market interest rates[2,3].

    Traditionally, researchers in stock index prediction have favored statistical methods, such as regression analysis[4], generalized autoregressive conditional heteroscedasticity (GARCH)[5], autoregressive integrated moving average (ARIMA)[6,7], and the smooth transition autoregressive model (STAR)[8]. However, these methods rely on assumptions of time series stationarity and linearity among normally distributed variables, which are not satisfied in real stock markets[9]. Consequently, these models exhibit poor prediction accuracy when dealing with nonlinear and nonstationary stock data[10].

    Machine learning models, particularly neural networks, have shown better performance in extracting nonlinearity and nonstationarity from financial time series compared to classical statistical models[11]. Neural networks leverage nonlinear activation functions to capture complex information in the data[12,15]. For instance, Yu et al.[13] utilized a local linear embedding dimensionality reduction algorithm (LLE) to reduce the dimensionality of factors influencing stock indexes. They then employed a back-propagation (BP) neural network to optimize stock index prediction. Recurrent neural networks (RNNs) in deep learning can effectively extract autocorrelation information due to their recurrent structure[16]. Long short-term memory (LSTM), a type of RNN, not only extracts autocorrelation information but also addresses the vanishing or exploding gradient problem through gating functions[17]. Bidirectional LSTM (BiLSTM) differs from LSTM by considering both historical and future information, enhancing sequence analysis[18]. Several studies have confirmed the predictive superiority of BiLSTM over LSTM in stock data[19-21].

    The attention mechanism (AM) is a network module that dynamically learns the weights of each feature, while the convolutional block attention module (CBAM) represents an enhanced version of the attention mechanism. CBAM introduces an attention mechanism for space and channels, enabling models to focus on essential features and disregard irrelevant ones, thereby improving the prediction accuracy of network models[14]. Additionally, CBAM effectively reduces the interference caused by redundant features[22]. Cheng et al.[23] integrated CBAM into a temporal convolutional network (TCN) to create the hybrid model TCN-CBAM for predicting chaotic time series. The experimental results demonstrate that incorporating CBAM significantly enhances the prediction accuracy of the TCN. Li et al.[24] proposed a fault diagnosis model for rolling bearings that combines a dual-stage attention-based recurrent neural network (DA-RNN), CBAM, and a convolutional neural network (CNN). By utilizing two vibration data sets from rolling bearings, they confirmed that the proposed DARNN-CBAM-CNN method improves the fault diagnosis accuracy by 1.90% compared to a DARNN-CNN method without CBAM. In the domain of gold price prediction, Liang et al.[14] highlighted that CBAM, unlike the attention mechanism, allocated weights across the two independent dimensions of channel and space, leading to better prediction accuracy in theory. Moreover, CBAM has proven effective in improving prediction accuracy in other areas, such as global horizontal irradiance (GHI) prediction[25] and PM2.5 concentration prediction[26]. However, despite the extensive research on CBAM's effectiveness in other fields, it has been relatively underutilized in stock index prediction. Furthermore, existing studies lack a detailed analysis of whether the position and quantity of CBAM in models affect prediction accuracy.

    In summary, this paper aims to leverage the proven superiority of CBAM in other prediction problems and the established effectiveness of BiLSTM in stock data. To achieve this, the paper proposes a novel model called CBAM-BiLSTM, which combines CBAM with BiLSTM to further enhance the prediction accuracy of stock indexes. The experimental data consist of two representative Chinese stock index data sets, namely, the SSE Composite Index and the SZSE Composite Index. The prediction accuracy of the models is assessed using the standard metric evaluation method (SME) and the model confidence set test (MCS). For comparison, classical models in time series prediction problems, such as BiLSTM, CNN, LSTM, CNN-LSTM, and CNN-BiLSTM, are chosen as benchmark models.

    The initial experiments focus on conducting a detailed analysis of how the position and quantity of CBAM affect the prediction accuracy of BiLSTM. The numerical results demonstrate that the proposed model exhibits significant improvements compared to BiLSTM alone, with an average reduction of 13.06%, 13.39%, and 12.48% in MAE, RMSE, and MAPE, respectively, and an average improvement of 1.98% in R². These findings confirm that the combination of CBAM and BiLSTM can further enhance the prediction accuracy of BiLSTM.

    Furthermore, the paper validates the superiority and robustness of the proposed CBAM-BiLSTM model by comparing it with other popular models and evaluating its performance under different data sets, prediction methods, and training set sizes. This analysis encompasses both prediction accuracy and investment returns.

    The innovations and contributions of this paper can be summarized as follows. First, the paper introduces a rational strategy that combines CBAM and BiLSTM to propose the advanced CBAM-BiLSTM model, thereby further improving the accuracy of stock index prediction. Second, the paper conducts a detailed analysis to investigate the impact of the position and quantity of CBAM on the prediction accuracy of BiLSTM.

    The rest of the paper is organized as follows. Section 2 presents the methodology, which provides a detailed explanation of the structure and principles of the proposed CBAM-BiLSTM model. Section 3 comprises an analysis of the experiments, including information about the experimental data, experimental design, and result analysis. Finally, Section 4 concludes the article by summarizing the key findings, discussing some shortcomings, and outlining future research plans.

    2 Methodology

    2.1 Structure and principle of CBAM-BiLSTM

    The structure of CBAM-BiLSTM is shown in Fig. 1. Multiple CBAMs and a BiLSTM are combined using a linear stacking approach, with the CBAMs placed in front of the BiLSTM to achieve a sufficiently rational distribution of attention weights over the input features. In CBAM-BiLSTM, the number of CBAMs is a hyperparameter that needs to be set artificially. Similar to other hyperparameters in a deep learning model, the number of CBAMs can be chosen as an appropriate value by comparing the prediction accuracy of models on the validation set. CBAMn-BiLSTM denotes a CBAM-BiLSTM model containing n CBAMs. For example, CBAM3-BiLSTM means that the CBAM-BiLSTM model contains three CBAM modules. The structures of CBAM and BiLSTM are described in detail below.

    Fig. 1. Structure of CBAM-BiLSTM.
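
    To make the stacking concrete, the following Keras sketch (illustrative code written for this description, not the authors' released implementation; cbam_block is the CBAM block sketched in Section 2.3, and the layer sizes are placeholder choices) builds a CBAMn-BiLSTM for a 60-step input window:

        from tensorflow.keras import layers, models

        def build_cbamn_bilstm(n_cbam, time_steps=60, n_features=1, lstm_units=64):
            """CBAMn-BiLSTM sketch: n CBAM blocks stacked linearly in front of a BiLSTM."""
            inputs = layers.Input(shape=(time_steps, n_features))
            x = inputs
            for _ in range(n_cbam):      # the number of CBAMs is a hyperparameter chosen on the validation set
                x = cbam_block(x)        # CBAM block as sketched in Section 2.3
            x = layers.Bidirectional(layers.LSTM(lstm_units))(x)
            outputs = layers.Dense(1)(x) # next-day closing price
            return models.Model(inputs, outputs)

        # e.g. CBAM3-BiLSTM: model = build_cbamn_bilstm(n_cbam=3)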

    2.2 Structure and principle of BiLSTM

    LSTM overcomes the problem of gradient disappearance or explosion that RNNs have by introducing long-term memory states and multiple gating functions. These gating functions selectively forget or remember new information in the long-term memory state, which in turn allows information useful for subsequent moments of computation to be passed and useless information to be discarded[27,28]. BiLSTM consists of two independent LSTM layers that have the same input but transfer information in opposite directions. Therefore, compared to LSTM, BiLSTM can improve the prediction accuracy by fully considering both historical and future information. The cell structure of LSTM and the network structure of BiLSTM are shown in Fig. 2.
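
    For reference, a standard formulation of the LSTM gating computations, consistent with the notation explained below (σ denotes the sigmoid function; f_t, i_t, and o_t are the forget, input, and output gates), is:

        f_t = \sigma(w_f \cdot [h_{t-1}, x_t] + b_f)
        i_t = \sigma(w_i \cdot [h_{t-1}, x_t] + b_i)
        \tilde{c}_t = \tanh(w_c \cdot [h_{t-1}, x_t] + b_c)
        c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t
        o_t = \sigma(w_o \cdot [h_{t-1}, x_t] + b_o)
        h_t = o_t \odot \tanh(c_t)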

    Here, w and b are the weight matrix and the deviation vector of the corresponding gating functions, respectively. c̃_t denotes the long-term memory state of the current input. “⊙” denotes the scalar product between vectors. “·” denotes matrix multiplication.

    In the network structure of BiLSTM, the same input data are fed to the forward LSTM layer and the backward LSTM layer, and the hidden state in the forward LSTM layer and the hidden state in the backward LSTM layer are computed. In the forward LSTM layer, forward computation is performed from time 1 to time t. In the backward LSTM layer, backward computation is performed from time t to time 1. The outputs of the current prehidden state and posthidden state are obtained and saved at each time unit. Then, the two hidden states are connected to calculate the output value of BiLSTM. Eqs. (7)-(9) represent the calculation process of BiLSTM.
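
    A standard way to write this computation, consistent with the description above and the symbols defined below (the overhead arrows mark the forward and backward hidden states), is:

        \overrightarrow{h}_t = \mathrm{LSTM}(x_t, \overrightarrow{h}_{t-1})
        \overleftarrow{h}_t = \mathrm{LSTM}(x_t, \overleftarrow{h}_{t+1})
        y_t = w_f \overrightarrow{h}_t + w_b \overleftarrow{h}_t + b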

    Here, LSTM(·) denotes the mapping of the already defined LSTM network layers. w_f and w_b denote the weight matrices of the forward LSTM layer and the backward LSTM layer, respectively. b denotes the deviation vector of the output layer.

    2.3 Structure and principle of CBAM

    CBAM can implement an attention mechanism on both space and channel, which in turn allows the model to focus on key features and ignore useless features. After features are extracted by a convolutional neural network, CBAM computes the weight mapping of the feature map from both the channel and spatial dimensions and then multiplies the weights with the input features for adaptive learning. This lightweight general-purpose module can be integrated into a variety of convolutional neural networks for end-to-end training[31]. Fig. 3 illustrates the network structure of CBAM.

    Fig. 2. Structure of LSTM and BiLSTM.

    Fig. 3. Structure of CBAM.

    From Fig. 3, the channel attention module (CAM) outputs a one-dimensional channel attention vector M_C, which is used to assign weights to each channel, indicating the importance of each channel. The spatial attention module outputs a three-dimensional spatial attention tensor M_S, which indicates which features at which locations in the three-dimensional space are key features and which are secondary features. Eqs. (10) and (11) represent the whole calculation process.
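
    In the standard CBAM notation, these two steps can be sketched as:

        F' = M_C(F) \otimes F
        F'' = M_S(F') \otimes F'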

    Here, M_C(F) represents the output of the channel attention module when the input is F. M_S(F′) represents the output of the spatial attention module when the input is F′. ⊗ represents element-wise multiplication. Pooling operations in CBAM include two types: “MaxPool” and “AvgPool”. Pooling can extract high-level features, and different pooling methods mean that the extracted high-level features are richer. From Fig. 3, we can see that Eq. (12) represents the computation process of the channel attention module, and that Eq. (13) represents the computation process of the spatial attention module.
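
    In their standard form (a sketch consistent with the symbols explained below; σ denotes the sigmoid function and [·;·] denotes concatenation), the two attention maps are computed as:

        M_C(F) = \sigma\big(\mathrm{MLP}(\mathrm{AvgPool}(F)) + \mathrm{MLP}(\mathrm{MaxPool}(F))\big)
        M_S(F') = \sigma\big(\mathrm{Conv}([\mathrm{AvgPool}(F');\ \mathrm{MaxPool}(F')])\big)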

    Here, AvgPool(·) is the average pooling of the input features. MaxPool(·) is the max pooling of the input features. MLP(·) is the output of a multilayer perceptron. Conv(·) is the output of a convolutional layer.
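
    A minimal Keras sketch of such a CBAM block for inputs shaped (time steps, channels), written for illustration (the reduction ratio, kernel size, and use of shared Dense layers are assumptions rather than the authors' settings):

        import tensorflow as tf
        from tensorflow.keras import layers

        def cbam_block(x, reduction=4, kernel_size=7):
            """1D adaptation of CBAM for a tensor shaped (batch, time_steps, channels)."""
            channels = x.shape[-1]
            # channel attention: squeeze the time axis, then weight each channel
            avg = layers.GlobalAveragePooling1D()(x)              # (batch, channels)
            mx = layers.GlobalMaxPooling1D()(x)                   # (batch, channels)
            dense1 = layers.Dense(max(channels // reduction, 1), activation='relu')
            dense2 = layers.Dense(channels)                       # shared MLP for both pooled vectors
            ca = layers.Activation('sigmoid')(layers.Add()([dense2(dense1(avg)), dense2(dense1(mx))]))
            x = layers.Multiply()([x, layers.Reshape((1, channels))(ca)])
            # spatial (temporal) attention: pool over channels, then weight each time step
            avg_s = layers.Lambda(lambda t: tf.reduce_mean(t, axis=-1, keepdims=True))(x)
            max_s = layers.Lambda(lambda t: tf.reduce_max(t, axis=-1, keepdims=True))(x)
            sa = layers.Conv1D(1, kernel_size, padding='same', activation='sigmoid')(
                layers.Concatenate(axis=-1)([avg_s, max_s]))
            return layers.Multiply()([x, sa])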

    2.4 Evaluation of model performance

    The standard metric evaluation method (SME) and model confidence set test (MCS) are used to comprehensively evaluate the performance of the models.

    2.4.1 Standard metric evaluation method

    Loss error is the difference between the observed and predicted values and is used to evaluate the prediction accuracy of models. The mean absolute error (MAE), root mean square error (RMSE), mean absolute percentage error (MAPE), and coefficient of fit (R²) are chosen to comprehensively evaluate the prediction accuracy of the models. Smaller MAE, RMSE, and MAPE values indicate a higher prediction accuracy of the models; larger R² values indicate a higher prediction accuracy of the models. The true value is y = (y_1, y_2, ···, y_n), and the predicted value is ŷ = (ŷ_1, ŷ_2, ···, ŷ_n). The equations below are expressions of metrics.
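
    With this notation, the standard definitions of the four metrics (MAPE written as a fraction, consistent with how it is reported in Section 4.2; ȳ denotes the mean of the true values) are:

        \mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n} |y_i - \hat{y}_i|
        \mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}
        \mathrm{MAPE} = \frac{1}{n}\sum_{i=1}^{n} \left|\frac{y_i - \hat{y}_i}{y_i}\right|
        R^2 = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}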

    The Sharpe ratio is an indicator that combines the returns and risk of an investment. The experiments use the Sharpe ratio to evaluate the superiority of models in terms of investment returns. The expression of the Sharpe ratio is given below.
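
    In its standard form, with the symbols explained below, the ratio is:

        \mathrm{Sharpe} = \frac{E(R_p) - R_f}{\sigma_p}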

    Here, R_p is the return sequence. E(R_p) is the mean of R_p. R_f is the risk-free rate. σ_p is the standard deviation of R_p. To facilitate calculation, let R_f = 0.

    2.4.2 Model confidence set test

    The model confidence set (MCS) test proposed by Hansen et al.[33] is used to test whether there is a significant difference between the prediction accuracy of different models. More conveniently, we can calculate the MCS p values of models to quantify each model's prediction accuracy and to visually compare the strengths and weaknesses of different models' prediction accuracy. This method is widely used to test differences in prediction performance between different predictive models[34-36].

    The MCS test is designed to test the significance of differences in the prediction accuracy of the models in a set and to eliminate the models with poor prediction accuracy. Therefore, in each test, the null hypothesis is that all models have the same prediction accuracy; that is, the null hypothesis can be written as shown below.
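
    In the standard notation of the MCS literature (a sketch, where M denotes the current set of candidate models and L_{i,t} is the loss of model i at time t), the null hypothesis can be written as:

        H_{0,M}:\; E(d_{ij,t}) = 0 \quad \text{for all } i, j \in M, \qquad d_{ij,t} = L_{i,t} - L_{j,t}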

    3 Analysis of experiments

    3.1 Data introduction

    Two representative Chinese stock index data sets, the SSE Composite Index (index code: 000001; the abbreviation for it is SHCI in this article) and the SZSE Composite Index (index code: 399106; the abbreviation for it is SZCI in this article), have been carefully selected as the experimental data for this study. The SHCI data set consists of all stocks listed on the Shanghai Stock Exchange, including A shares and B shares. It effectively captures the price movements of stocks listed on the Shanghai Stock Exchange. The SZCI data set represents a weighted composite stock index compiled by the Shenzhen Stock Exchange. It is calculated based on all stocks listed on the Shenzhen Stock Exchange, with each stock's issue weight taken into account. The time period for the two data sets spans from 2012-06-14 to 2022-08-31. Daily data for the study are obtained from the Wind database.

    In this study, the closing price of the stock indexes is chosen as the experimental data. The data are divided into three parts: training data, validation data, and test data, as illustrated in Fig. 4. The training data are utilized to calculate the weights and biases of the models. The validation data are employed to determine the optimal number of CBAMs if necessary. Finally, the trained models are evaluated on the test set. Notably, all models have been trained and converged, ensuring that there is no overfitting issue.

    Table 1 summarizes the statistics of the two data sets. Based on the results of the Jarque-Bera (JB) test, the null hypothesis of a normal distribution is rejected at the 5% significance level for both the SHCI and SZCI. Additionally, the results of the Ljung-Box test suggest that the null hypothesis of no autocorrelation up to the 20th order is rejected at the 5% significance level for both data sets, indicating the presence of long-term serial autocorrelation in SHCI and SZCI.

    3.2 Preprocessing of data

    Max-min normalization can speed up the training process of models and facilitate the convergence of models[14]. Therefore, the data are first normalized. The normalization formula is as follows.
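
    In its standard max-min form, the formula is:

        y_i \leftarrow \frac{y_i - \min(y)}{\max(y) - \min(y)}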

    Here, ← denotes assignment. y_i is the closing price at moment i. The prediction method in the experiments is to use 60 days as a time step to predict the next day and then keep sliding forward. The form of the data is shown in Fig. 5. The expression is shown in Eq. (22).
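
    A minimal NumPy sketch of this sliding-window construction (illustrative code; the function and variable names are ours):

        import numpy as np

        def make_windows(close, window=60):
            """Build samples: 60 past normalized closing prices -> next-day closing price."""
            X, y = [], []
            for i in range(len(close) - window):
                X.append(close[i:i + window])
                y.append(close[i + window])
            X = np.array(X)[..., np.newaxis]   # shape (samples, 60, 1), ready as BiLSTM input
            return X, np.array(y)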

    3.3 Related hyperparameter settings

    The experiments were conducted using Python 3.8.1 as the programming language and PyCharm 2020.1.2 (Community Edition) as the development environment. The Python libraries used include numpy 1.23.3, pandas 1.4.4, matplotlib 3.6.1, TensorFlow 2.10.0, keras 2.10.0, sklearn 1.1.3, and arch 5.3.1. To ensure reproducibility of the results, random seeds were set to 12, 1234, and 2345. To focus on the performance of the models rather than the influence of hyperparameters on the prediction results, consistent hyperparameter values were used for different models, as indicated in Table 2. The default values were retained for the remaining hyperparameters.
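
    One common way to fix these seeds across the libraries involved (a sketch; the authors' exact setup may differ) is:

        import os, random
        import numpy as np
        import tensorflow as tf

        def set_seed(seed):
            """Fix the random seeds of Python, NumPy, and TensorFlow for reproducibility."""
            os.environ['PYTHONHASHSEED'] = str(seed)
            random.seed(seed)
            np.random.seed(seed)
            tf.random.set_seed(seed)

        # each experiment is repeated with seeds 12, 1234, and 2345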

    3.4 The impact of CBAM on BiLSTM

    Despite the demonstrated superiority of CBAM in various domains, there is a lack of detailed analysis in existing studies regarding the influence of CBAM's position and quantity on prediction accuracy. To fill this gap, the experiments are divided into two parts. The first part examines the impact of the CBAM position on the prediction accuracy of BiLSTM, while the second part investigates the effect of the CBAM quantity on the prediction accuracy of BiLSTM. This approach allows for a comprehensive understanding of how the position and amount of CBAM in models can affect prediction accuracy.

    3.4.1 The impact of the position of CBAM on BiLSTM

    Fig. 4. Two stock indexes.

    Table 1. Summary statistics of data sets.

    First, the impact of the CBAM position on prediction accuracy is analyzed. Two modeling schemes are considered: CBAM in front of BiLSTM (CBAM_f_BiLSTM) and CBAM behind BiLSTM (CBAM_b_BiLSTM). Fig. 6 illustrates these two modeling schemes, and Table 3 presents the results on the test sets for both schemes. Furthermore, the robustness analysis of the two modeling schemes on the test sets is shown in Table 4.

    The values in Table 4 are explained as follows. The “model 1 / model 2” column represents the percentage optimization of model 1 compared to model 2. The “Mean” column displays the mean optimization percentage for the two modeling schemes across different data sets, while the “Std” column represents the standard deviation of the optimization percentage for the two schemes across different data sets.

    To illustrate the calculation process, let us consider the example of the MAE optimization percentage in the Mean column (-1.65%). For the SHCI data set, CBAM_f_BiLSTM/BiLSTM = (34.0270 - 35.3833)/34.0270 × 100% = -3.99%, where 34.0270 and 35.3833 are values from Table 3. Similarly, for the SZCI data set, CBAM_f_BiLSTM/BiLSTM = (30.8811 - 30.6652)/30.8811 × 100% = 0.70%, where 30.8811 and 30.6652 are values from Table 3. Thus, for MAE, the mean of CBAM_f_BiLSTM/BiLSTM is calculated as (-3.99% + 0.70%)/2 = -1.65%. The standard deviation (Std) of CBAM_f_BiLSTM/BiLSTM is computed as Std = Sqrt{(Square[-3.99% - (-1.65%)] + Square[0.70% - (-1.65%)])/2}.

    Here, Sqrt{ } represents the arithmetic square root function, and Square[ ] represents the square function.

    The optimization percentages for RMSE and MAPE are calculated in the same manner as for MAE. However, the optimization percentage for R² is calculated in the opposite way to MAE. Let us consider the example of the R² optimization percentage in the Mean column (0.11%). For the SHCI data set, CBAM_f_BiLSTM/BiLSTM = (0.9404 - 0.9448)/0.9448 × 100% = -0.47%, where 0.9404 and 0.9448 are values from Table 3. Similarly, for the SZCI data set, CBAM_f_BiLSTM/BiLSTM = (0.9466 - 0.9400)/0.9400 × 100% = 0.70%, where 0.9466 and 0.9400 are values from Table 3. Therefore, for R², the mean of CBAM_f_BiLSTM/BiLSTM is calculated as (-0.47% + 0.70%)/2 = 0.11%. The standard deviation is computed as Std = Sqrt{(Square[-0.47% - 0.11%] + Square[0.70% - 0.11%])/2} = 0.5850.
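
    The same calculations can be reproduced compactly in NumPy (a sketch using the Table 3 values quoted above; note that the population standard deviation, which divides by the number of data sets, is used):

        import numpy as np

        # MAE optimization of CBAM_f_BiLSTM relative to BiLSTM on SHCI and SZCI (values from Table 3)
        pct = np.array([(34.0270 - 35.3833) / 34.0270,
                        (30.8811 - 30.6652) / 30.8811]) * 100   # approximately [-3.99, 0.70] percent

        mean_pct = pct.mean()   # about -1.65
        std_pct = pct.std()     # population standard deviation (ddof=0), i.e., division by 2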

    Table 2. Hyperparameters of models.

    Fig. 6. Two modeling schemes.

    Table 3. SME of two modeling schemes on test sets.

    Table 4. Robustness analysis of two modeling schemes on test sets.

    Table 4 clearly shows that CBAM_f_BiLSTM outperforms CBAM_b_BiLSTM in all metrics. Additionally, CBAM_f_BiLSTM exhibits smaller standard deviations for each metric. These findings indicate that CBAM_f_BiLSTM not only achieves better prediction accuracy but also demonstrates stronger robustness. Consequently, the modeling scheme with CBAM in front of BiLSTM is considered superior.

    3.4.2 The impact of the amount of CBAM on BiLSTM

    Although CBAM_f_BiLSTM demonstrates better prediction accuracy and robustness than CBAM_b_BiLSTM, it is slightly less effective than the standalone BiLSTM model. Therefore, the analysis now focuses on increasing the amount of CBAM to examine its impact on prediction accuracy.

    Fig. 7 displays the results of models with varying amounts of CBAM on the test sets, ranging from 1 to 15. Table 5 presents the corresponding results, while Table 6 provides the robustness analysis of these models. The symbols and calculation procedure in Table 6 are consistent with those in Table 4.

    From Fig. 7, it is evident that when the amount of CBAM is set to 6 or 15, the proposed model performs poorly on SZCI, despite its good performance on SHCI. Table 6 reveals that in such cases, the standard deviation for each metric is higher compared to the other results. Additionally, the mean for each metric is negative. These observations indicate that when the amount of CBAM is 6 or 15, the model not only predicts worse than BiLSTM but also exhibits poor robustness. Based on these experimental findings, it is apparent that the prediction accuracy of CBAM-BiLSTM significantly improves compared to BiLSTM when the amount of CBAM is not equal to 1, 6, 10, or 15.

    Fig. 7. Results for models with different amounts of CBAM on test sets, which are calculated based on the data without normalization.

    Table 5. SME of models with different amounts of CBAM on test sets.

    Furthermore, Table 6 indicates that when the amount of CBAM is 8, 9, 11, 12, 13, or 14, the standard deviation for each metric is smaller, while the mean for each metric is larger. This suggests that the model not only achieves higher prediction accuracy but also maintains good robustness.

    To further validate the experimental findings presented in Table 6, the model confidence set (MCS) test is employed to analyze the prediction accuracy and robustness of the models with varying numbers of CBAM. Table 7 displays the MCS p values for the models, obtained by summing errors from different test sets. The MCS test in these experiments is implemented using the Python library arch 5.3.1, with the random seed set to 12345. It is important to note that the MCS p values are not the result of a probability calculation and do not possess probabilistic significance. Instead, larger values indicate higher prediction accuracy for the corresponding model, with a maximum value of 1.
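
    The following sketch shows how such a test can be run with the arch library (illustrative code only: loss_matrix is a hypothetical array with one column of out-of-sample losses per model, and the seeding of the bootstrap may be handled differently in the authors' setup):

        import numpy as np
        from arch.bootstrap import MCS

        np.random.seed(12345)                            # seed quoted in the paper; exact seeding of arch may differ

        # one column of out-of-sample losses (e.g. squared errors) per candidate model
        loss_matrix = np.abs(np.random.randn(500, 3))    # placeholder data for three hypothetical models

        mcs = MCS(loss_matrix, size=0.1)                 # size: significance level of the equivalence test
        mcs.compute()
        print(mcs.pvalues)                               # larger MCS p values -> model more likely to stay in the set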

    As observed in Table 7, when the amount of CBAM is set to 11, the MCS p values for the model reach 1.00 for both test statistics, indicating the model's superior prediction accuracy. In line with this, Table 6 demonstrates that, compared to BiLSTM, CBAM11-BiLSTM exhibits an average reduction of 13.06%, 13.39%, and 12.48% in MAE, RMSE, and MAPE, respectively, while showcasing an average improvement of 1.98% in R².

    3.5 Superiority and robustness of CBAM-BiLSTM

    The aforementioned experiments provide a comprehensive analysis of the impact of the position and amount of CBAM on the prediction accuracy of BiLSTM. This section further explores the superiority and robustness of CBAM-BiLSTM in terms of prediction accuracy and investment returns.

    First, the prediction accuracy of the models is examined using different prediction methods on the test sets. Subsequently, the influence of the training sample size on the prediction accuracy of the proposed model is analyzed. Finally, the experiments delve into the assessment of investment returns generated by the models on the test sets.

    Tables 8 and 9 present the outcomes of the SME and the MCS p values, respectively, on the test sets when models are based on different prediction methods. Eq. (22) illustrates the expression for autoregressive one-step prediction, while Eq. (23) showcases the expression for autoregressive multistep prediction. Additionally, Eq. (24) demonstrates the expression for multivariate one-step prediction.

    Here, y_i represents the closing price at moment i, x_i refers to the opening price at moment i, h_i denotes the highest price at moment i, l_i represents the lowest price at moment i, v_i signifies the volume at moment i, and t_i represents the turnover at moment i.
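
    Consistent with this description, the three schemes can be sketched as follows, where f denotes the trained model and the exact functional forms in the original may differ in detail:

        \hat{y}_{i+1} = f(y_{i-59}, y_{i-58}, \ldots, y_i)  (autoregressive one-step, Eq. (22))
        \hat{y}_{i+k} = f(z_{i+k-60}, \ldots, z_{i+k-1}), \quad z_j = y_j \text{ for observed days and } z_j = \hat{y}_j \text{ for already-predicted days}  (autoregressive multistep, Eq. (23))
        \hat{y}_{i+1} = f\big((y_j, x_j, h_j, l_j, v_j, t_j),\ j = i-59, \ldots, i\big)  (multivariate one-step, Eq. (24))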

    From Table 8, it is evident that the proposed model exhibits the minimum error in each prediction method. Moreover, in Table 9, the MCS p values for the proposed model are consistently 1 across all prediction methods. These findings reinforce that, compared to other popular models, the proposed model achieves the highest prediction accuracy across different data sets and prediction methods. Thus, the results in Table 8 and Table 9 validate the superiority and robustness of the proposed model in terms of prediction accuracy.

    Table 6. Robustness analysis of models with different amounts of CBAM on test sets.

    Table 7. MCS p values on test sets.

    Fig. 8 illustrates the influence of the training set size on the prediction accuracy of CBAM-BiLSTM, with R² selected as the metric. It is observed that R² remains highly consistent as the size of the training set varies across each data set. This stability reinforces the notion that the proposed model exhibits strong robustness in relation to the size of the training set.

    Fig. 8. Impact of the size of the training set on the prediction accuracy of CBAM-BiLSTM.

    Ideally, a market prediction system can be integrated as a module within a trading system, where improved prediction accuracy is expected to yield higher profits. In this context, we present experiments that utilize the proposed model as the prediction subsystem of a simple trading system. It is important to note that the overall performance of the system depends on how the predictions are utilized for trading. The trading strategy employed in our experiments is as follows: If the predicted price for day t+1 is higher than the true price for day t, the predicted label for day t is considered “up”; otherwise, it is considered “down”. When the predicted label for the next day is “up”, the trading system fully invests in the corresponding index and holds the shares until a “down” label is encountered, at which point the system closes the position.
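
    A compact backtest of this rule (illustrative code; y_true and y_pred are aligned arrays of daily true and predicted closing prices, and transaction costs are ignored):

        import numpy as np

        def backtest_long_only(y_true, y_pred):
            """Hold the index on days labeled 'up', stay in cash otherwise; return daily returns and Sharpe."""
            signal = (y_pred[1:] > y_true[:-1]).astype(float)   # 1 = invested from day t to t+1, 0 = flat
            index_ret = np.diff(y_true) / y_true[:-1]           # daily index returns
            strat_ret = signal * index_ret                      # strategy return sequence R_p
            sharpe = strat_ret.mean() / strat_ret.std()         # Sharpe ratio with R_f = 0, as in the paper
            return strat_ret, sharpe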

    Table 8. SME for different prediction methods on test sets.

    Table 9. MCS p values for different prediction methods on test sets.

    In this trading strategy, each individual prediction made by the models influences the trading performance and ultimately impacts the overall profit. To assess the performance of the trading system, we utilize the Sharpe ratio. Table 10 presents the Sharpe ratios of different predictive models on the test sets. In the table, R_p represents the return sequences of the models, E(R_p) indicates the mean of the models' return sequences, σ_p represents the standard deviation of the models' return sequences, and “Sharpe” denotes the Sharpe ratio of the models. The numerical values in Table 10 demonstrate that employing the predictions of the proposed model as the foundation for the trading strategy leads to satisfactory results. This further confirms that, in terms of investment returns, the proposed model exhibits superiority and robustness when compared to other popular models.

    3.6 Findings

    In summary, the experiments conducted a detailed analysis of the impact of the position and number of CBAMs on BiLSTM. The results confirmed that CBAM has the ability to enhance the prediction accuracy of BiLSTM, and this improvement exhibits good robustness. Furthermore, the proposed model's superiority and robustness in terms of prediction accuracy were confirmed through comparisons with other popular models, as well as by varying the prediction method and data sets. Additionally, the experiments demonstrated the model's robustness in relation to the size of the training set. Finally, the experiments affirmed the model's superiority and robustness in terms of investment returns.

    Overall, the experimental findings provide strong evidence supporting the effectiveness and reliability of the proposed model. The results indicate that integrating CBAM into the BiLSTM architecture enhances prediction accuracy and robustness across various scenarios and data sets. These findings contribute to advancing the understanding and applicability of the proposed model in real-world scenarios involving market prediction and trading systems.

    4 Conclusions

    4.1 Summary

    To address the issue of low accuracy in stock index prediction, this paper introduces a novel model called CBAM-BiLSTM, which combines multiple CBAMs with a BiLSTM architecture. The experimental evaluation is conducted using the SSE Composite Index and the SZSE Composite Index as the data sets. The performance of various models is assessed using the standard metric evaluation and model confidence set test methods. The final results demonstrate that CBAM-BiLSTM exhibits superior performance and robustness in terms of both prediction accuracy and investment returns. Moreover, the experiments include a comprehensive analysis of the impact of the CBAM position and amount on the prediction accuracy of the BiLSTM model.

    Overall, this research introduces a novel model that effectively addresses the challenge of accurate stock index prediction. Through rigorous evaluation and analysis, the proposed CBAM-BiLSTM model shows its superiority and robustness compared to other models. The findings provide valuable insights into improving prediction accuracy and investment returns in the field of stock market analysis.

    4.2 Discussion and outlook

    The proposed CBAM-BiLSTM model demonstrates its competence in predicting stock price indexes compared to other hybrid predictive models based on machine learning methods from the literature. In this study, our model achieves a minimum MAPE of 0.0084 (0.84%) and a maximum R² value of 0.9657. In previous studies, Md et al.[37] achieved an R² of 0.981 for Samsung stock. Maqbool et al.[38] obtained a MAPE of 1.55% for the HDFC bank stock price data set. Gülmez[39] achieved R² values ranging from 0.814 to 0.975 for various stock data sets. Cui et al.[40] obtained a MAPE of 0.62% for the SSE Composite Index.

    Table 10. Sharpe ratio of models on test sets.

    It is acknowledged that the robustness of model forecasting can vary between tranquil and turbulent periods due to the idiosyncratic patterns of the data; however, this issue can be further addressed through hyperparameter tuning. For instance, the number of BiLSTM modules and the number of neurons were not extensively explored in this study. Therefore, future research will aim to develop appropriate methods for selecting hyperparameters to further enhance the predictive performance of the proposed model.

    Regarding applications, future work will involve incorporating more contributing feature variables to improve the predictive performance of the model. However, it is important to note that the variables used to predict composite stock indexes and individual stock prices differ significantly. Composite stock indexes may require macro-level variables such as the GDP growth rate, inflation rate, interest rate, government fiscal policy, and international trade situation. On the other hand, individual stock prices tend to depend on internal factors of the company and the market's supply and demand relationship. Therefore, future research plans involve utilizing natural language processing techniques to extract variables that impact stock indexes, thus boosting the prediction performance. Furthermore, since Ref.[32] suggests that solely improving the precision of predictive models may not yield good investment returns, future work will also treat investment returns as a primary objective so as to enhance the practical utility of the proposed model.

    Acknowledgements

    The authors thank Dr. Peiwan Wang for helping organize the ideas in the discussion section of this article.

    Conflict of interest

    The authors declare that they have no conflict of interest.

    Biographies

    Chenhao Cui received his master's degree from the University of Science and Technology of China in 2023. His research mainly focuses on the application of deep learning in time series prediction.

    Yong Li is an Associate Professor at the University of Science and Technology of China (USTC). He received his Ph.D. degree from USTC in 2012. His research mainly focuses on FinTech and data mining.
