
    Deep Learning Modelling and Model Transfer for Near-Infrared Spectroscopy Quantitative Analysis

    2023-01-31
    Spectroscopy and Spectral Analysis (光谱学与光谱分析), 2023, Issue 1

    FU Peng-you, WEN Yue, ZHANG Yu-ke, LI Ling-qiao, YANG Hui-hua*

    1. School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin 541004, China
    2. School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing 100876, China
    3. School of International, Beijing University of Posts and Telecommunications, Beijing 100876, China

    Abstract Near-infrared spectroscopy analysis technology relies on chemometric methods that characterize the relationships between the spectral matrix and chemical or physical properties. However, samples' spectra are composed of signal and various kinds of noise, and it is difficult for traditional chemometric methods to extract effective spectral features and establish a calibration model with strong generalization performance for a complex assay. Furthermore, a calibration model established on one instrument cannot achieve the same quantitative results when applied to another instrument because of differences between the instruments. Hence, this paper presents a quantitative analysis modeling and model transfer framework based on convolutional neural networks and transfer learning to improve model prediction performance both on one instrument and across instruments. An advanced model named MSRCNN, based on a convolutional neural network, is presented; it integrates multi-scale feature fusion and a residual structure and shows outstanding generalization performance on the master instrument. Then, four transfer learning methods based on fine-tuning are proposed to transfer the MSRCNN established on the master instrument to the slave instrument. The experimental results on open-access drug and wheat datasets show that the RMSE and R2 of MSRCNN on the master instrument are 2.587 and 0.981 (drug) and 0.309 and 0.977 (wheat), respectively, outperforming PLS, SVM, and CNN. Using 30 slave-instrument samples, transferring both the convolutional and fully connected layers of the MSRCNN model is the most effective of the four fine-tuning methods, with RMSE and R2 of 2.289 and 0.982 (drug) and 0.379 and 0.965 (wheat), respectively. Performance can be further improved by increasing the number of slave-instrument samples that participate in model transfer.

    Keywords Near-infrared spectroscopy; Deep learning; Transfer learning; Multi-scale fusion; Residual convolution network; Model transfer

    Introduction

    Near-infrared analysis technology is widely used in agriculture, food, the environment, the chemical industry, and other fields, with the advantages of being non-destructive, pollution-free, and fast[1-4]. However, its application depends on chemometric methods that characterize the relationship between the spectral vector and the chemical or physical properties. Traditional chemometric methods (PLS, SVM, etc.) perform well in simple situations, but it is difficult for them to extract effective information directly from spectra contaminated by complex noise, so they often need the aid of pre-processing and wavelength-selection algorithms built on prior knowledge[5-6]. In addition, due to the limitations of existing manufacturing techniques and the aging of instruments, there are differences even among NIR spectrometers of the same type made by the same manufacturer, which makes it difficult to generalize a well-performing calibration model established on one instrument to another[7-11]. This degrades the generalization of the calibration model and hinders the application of near-infrared analysis technology. Therefore, extracting spectral features, establishing models with good predictive ability, improving model generalization, and enabling models to be shared among different instruments have always been the focus of chemometrics research[12-15].

    As one of the classical deep learning methods, the convolutional neural network has shown great potential in spectral feature extraction and has been applied to qualitative and quantitative analysis. Acquarelli et al.[16] introduced convolutional neural networks into spectrum analysis, with results superior to the classical PLS-LDA and KNN algorithms on multiple data sets. Zhang et al.[17] drew on the Inception structure to build the DeepSpectra model based on multi-scale fusion, which achieved better results than PLS and SVR; their experiments show that increasing the network width makes it easier to obtain effective spectral features. Dong et al.[18] and Jiang et al.[19] introduced a residual mechanism to construct deep ResNet networks for tracing the origin of Panax notoginseng and tobacco, respectively, further improving classification accuracy over a plain convolutional network and GA-SVM. Although CNNs show excellent predictive ability, they are essentially nonlinear, multi-layer stacked convolutional structures, whereas traditional model transfer methods apply only to PLS and other multivariate linear models and cannot be applied to the model transfer of CNN networks[20-22].

    Transfer learning refers to transferring a model trained on one data domain to another related but different data domain by exploiting the similarity of data distributions between domains[23]. Although it has been widely used in image processing, natural language processing, and other fields[24-27], research in spectral analysis is still at an initial stage. Li et al.[28] constructed a convolutional network model on a single class of drugs and then used transfer learning to transfer the calibration model to other classes, realizing the identification of multiple drug classes with a single model. Mishra et al.[29] used transfer learning to transfer a convolutional network built on many samples to a small data set, overcoming the data sensitivity of deep learning models. Mishra et al.[30] also tried to transfer convolutional network models between different instruments based on two transfer learning methods, and the experimental results on two data sets showed that this can effectively improve a model's versatility. Previous studies have shown that transfer learning can realize the transfer of convolutional models between similar data sets and can be used to alleviate the differences between measuring instruments.

    Motivated by this, this paper proposes a spectral modelling and model transfer method based on convolutional neural networks and transfer learning. First, based on an ordinary convolutional network, multi-scale fusion and residual mechanisms are introduced to construct a convolutional model with better predictive ability and generalization. Then, the model built on the master instrument is transferred to the slave instrument using the cross-domain knowledge-sharing ability of transfer learning, and the model is recalibrated using a small number of transfer-set samples. To further understand and apply transfer learning, this paper also designs four different model transfer methods to compare and discuss the impact of different transfer strategies on model transfer performance. The source code is available at https://github.com/FuSiry/DeepNirs-cnn-transfer for academic use only.

    1 Materials and methods

    1.1 Data set

    1.1.1 Pharmaceutical data set

    The first data set comes from the open-source data set released by IDRC 2002. It consists of 655 tablet samples, with a wavelength range of 600 to 1 898 nm at an interval of 2 nm, for a total of 649 sample points, measured by two instruments (A1, A2). The original spectral difference between A1 and A2 for one sample is shown in Fig.1(a). The data can be downloaded from http://www.eigenvector.com/data/tablets/index.htm.

    1.1.2 Wheat data set

    The second data set is derived from the open-source data set released by IDRC 2016 and consists of 248 wheat samples measured by three manufacturers' instruments. To facilitate comparison with existing studies, manufacturer A's instruments A1 and A2 were selected; the wheat spectra range from 730 to 1 100 nm at intervals of 0.5 nm, for a total of 741 data points. The spectral differences between A1 and A2 for one sample are shown in Fig.1(b). The data can be downloaded from http://www.idr-chambersburg.org/content.aspx?page_id=22&club_id=409746&module_id=19111.

    Fig.1 Spectrum of the same sample under different instruments

    1.2 Multi-scale fusion residual convolutional neural network

    1.2.1 Inception structure

    Prior knowledge in the image and speech fields shows that widening or deepening the network is an effective way to improve the feature-extraction ability of a model[31-32]. Previous studies in the NIR field have also shown that convolution kernels of different sizes extract different spectral information, and that an inappropriate convolution kernel will lose the most important spectral peak information[33]. In order to enlarge the receptive field of the model and obtain information at different degrees of smoothness, the Inception structure is introduced, as shown in Fig.2.

    Fig.2 Improved inception structure

    m1, m2, and m3 represent large, medium, and small convolution kernels, respectively. Large-scale convolutions can learn sparse information, while small-scale convolutions more easily learn non-sparse information. Convolutions at different scales increase the network's adaptability to the spectrum, capture spectral information at different degrees of smoothness, and thereby improve feature-extraction ability.
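As an illustrative sketch of this multi-scale idea, a 1-D Inception-style block with three parallel branches can be written in PyTorch as follows; the kernel sizes and channel counts here are assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class Inception1D(nn.Module):
    """Parallel 1-D convolutions with small, medium, and large kernels
    (m3, m2, m1 in Fig.2), concatenated along the channel axis.
    Kernel sizes and channel counts are illustrative assumptions."""
    def __init__(self, in_ch, out_ch_per_branch=8):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv1d(in_ch, out_ch_per_branch, kernel_size=k, padding=k // 2)
            for k in (3, 7, 15)  # small, medium, large receptive fields
        ])
    def forward(self, x):
        # odd kernels + symmetric padding keep the length unchanged,
        # so the branch outputs can be concatenated channel-wise
        return torch.cat([b(x) for b in self.branches], dim=1)

block = Inception1D(in_ch=1)
spec = torch.randn(4, 1, 649)   # a batch of 4 spectra with 649 points
out = block(spec)
print(out.shape)                # torch.Size([4, 24, 649])
```

Each branch sees the same input at a different scale, and the concatenation widens the network without deepening it.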

    1.2.2 Resnet structure

    Stacking network layers makes the extracted feature hierarchy richer. However, simply deepening the network means that, under the chain rule, gradients are repeatedly multiplied, eventually causing gradient vanishing or explosion. For this reason, a residual mechanism is added to the Inception structure, as shown in Fig.3.

    Fig.3 Residual mechanism

    x denotes the input, and F(x) is the output of the input after the convolution layer. During back-propagation, the gradient information propagated through F(x) and through x can therefore be received simultaneously. Even if the network is very deep, the update signal can still be obtained through the branch x. As a result, gradient vanishing is effectively suppressed.
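A minimal PyTorch sketch of the residual mechanism y = F(x) + x, with illustrative layer sizes (an assumption, not the paper's exact block):

```python
import torch
import torch.nn as nn

class ResidualBlock1D(nn.Module):
    """The identity branch x bypasses the convolution stack F, so during
    back-propagation gradients reach earlier layers directly through the
    shortcut. Layer sizes here are illustrative assumptions."""
    def __init__(self, ch):
        super().__init__()
        self.F = nn.Sequential(
            nn.Conv1d(ch, ch, kernel_size=3, padding=1),
            nn.BatchNorm1d(ch),
            nn.ReLU(),
            nn.Conv1d(ch, ch, kernel_size=3, padding=1),
            nn.BatchNorm1d(ch),
        )
    def forward(self, x):
        return torch.relu(self.F(x) + x)  # shortcut keeps the gradient path open

x = torch.randn(2, 16, 741)
y = ResidualBlock1D(16)(x)
print(y.shape)  # torch.Size([2, 16, 741])
```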

    1.2.3 The proposed network structure

    The improved convolutional network model in this paper is shown in Fig.4, where the Inception structure and residual mechanism are introduced after the first ordinary convolution layer to construct two Inception-ResNet convolutional layers.

    Fig.4 The convolutional neural network structure used in this study

    After each convolution, the ReLU activation function increases the nonlinear representation and Batch Normalization (BN) is performed. In addition, a global average (adaptive) pooling layer is added after the convolutional layers to reduce the number of network parameters, adapt to different spectral lengths, and avoid overfitting.
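A sketch of such a convolution stack, using the common Conv→BN→ReLU ordering (an assumption; the paper's exact ordering and channel counts may differ). The point of the adaptive pooling is that the feature dimension no longer depends on the spectrum length, so the same head works for the 649-point drug spectra and the 741-point wheat spectra:

```python
import torch
import torch.nn as nn

# Conv -> BN -> ReLU stack ending in adaptive (global) average pooling;
# channel counts are illustrative assumptions.
head = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=5, padding=2),
    nn.BatchNorm1d(16),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),  # collapses any spectral length to 1 point
    nn.Flatten(),
)
shapes = [head(torch.randn(2, 1, n)).shape for n in (649, 741)]
print(shapes)  # [torch.Size([2, 16]), torch.Size([2, 16])]
```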

    1.3 Transfer learning methods

    Although there are differences between instruments of different models, the spectral prediction tasks remain closely related. Therefore, transfer learning is introduced on top of the proposed model to achieve model transfer between different instruments. The convolutional layers in a CNN map the original data into a hidden feature space and are responsible for feature extraction, while the essence of the fully connected layer is to linearly transform one feature space into another, equivalent to feature weighting. How sensitive transfer between instruments is to feature extraction versus feature weighting is unknown. Considering this, four distinct transfer methods, depicted in Fig.5, were designed to investigate how transfer strategies for different parts of the network affect model transfer performance.

    Fig.5 Four different model transfer methods

    The network on the source domain is shown in Fig.4. In the first method, all layers and parameters of the network trained on the master instrument are transferred, no layer is frozen, and the whole network can be fine-tuned, as in Fig.5(a). The second method still transfers all layers and parameters but freezes the convolutional layers, so only the fully connected layer's parameters can be updated, as in Fig.5(b). In the third method, only the convolutional layers and their parameters are transferred, the fully connected layers are rebuilt and initialized, and all layers can be updated after the transfer, as in Fig.5(c). In the fourth method, only the convolutional layers are transferred with their parameters frozen, and the fully connected layers are rebuilt and initialized, as in Fig.5(d).
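The four strategies can be sketched as follows; `NIRNet` and `make_transfer` are hypothetical stand-ins for the trained MSRCNN and the transfer step, not the paper's actual code:

```python
import copy
import torch.nn as nn

class NIRNet(nn.Module):
    """Hypothetical pretrained master-instrument model: a convolutional
    feature extractor followed by a fully connected (FC) head."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv1d(1, 8, 5, padding=2), nn.ReLU(),
                                  nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.fc = nn.Linear(8, 1)
    def forward(self, x):
        return self.fc(self.conv(x))

def make_transfer(model, freeze_conv, rebuild_fc):
    """Build one of the four slave-instrument variants of Fig.5:
    (a) freeze_conv=False, rebuild_fc=False   (b) True,  False
    (c) False, True                           (d) True,  True"""
    m = copy.deepcopy(model)
    for p in m.conv.parameters():
        p.requires_grad = not freeze_conv      # freeze or fine-tune conv layers
    if rebuild_fc:                             # re-initialize the FC head
        m.fc = nn.Linear(m.fc.in_features, m.fc.out_features)
    return m

master = NIRNet()   # stands in for the model trained on the master instrument
variant_b = make_transfer(master, freeze_conv=True, rebuild_fc=False)
trainable = [n for n, p in variant_b.named_parameters() if p.requires_grad]
print(trainable)    # only the FC parameters remain trainable
```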

    1.4 Experiment preparation

    1.4.1 Experimental Environment

    The experiments were run on an AMD Ryzen 5 3600 6-core CPU and an NVIDIA GeForce RTX 2060 Super GPU. All models were implemented in Python using the PyTorch framework and the scikit-learn library.

    1.4.2 Comparative experiments

    The classical chemometric methods PLS and SVR, an ordinary convolutional neural network (CNN), and the modified multi-scale fusion residual convolutional network (MSRCNN) are compared in terms of feature-extraction ability. PLS and SVR are implemented with scikit-learn. The structure of the ordinary CNN is similar to the network shown in Fig.4, with the Inception-ResNet blocks replaced by ordinary convolutions and everything else unchanged.

    1.4.3 Model pre-training and transfer

    The model is built and trained on the master instrument, with the batch size set to 128 for the drug data set and 16 for the wheat data set. The initial learning rate on both data sets is set to 0.01 and the maximum number of training epochs to 600, using the Adam optimizer in both cases. To balance training time and training effect, a learning-rate decay strategy is introduced: the learning rate is halved if the loss on the calibration set does not decrease within 20 training cycles. In addition, training is monitored with early stopping: training is stopped early if the loss on the test set does not decrease within 60 training cycles, to prevent overfitting.
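The learning-rate decay and early-stopping logic described above can be sketched with PyTorch's `ReduceLROnPlateau`; the constant placeholder loss below is only to make the sketch self-contained, and the model is a stand-in for the MSRCNN:

```python
import torch

model = torch.nn.Linear(10, 1)                 # stand-in for the MSRCNN
opt = torch.optim.Adam(model.parameters(), lr=0.01)
# halve the learning rate if the monitored loss is flat for 20 epochs
sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.5, patience=20)

best, wait = float("inf"), 0
for epoch in range(600):
    loss = 1.0          # placeholder: a monitored loss that never improves
    sched.step(loss)    # scheduler tracks the plateau and decays the LR
    best, wait = (loss, 0) if loss < best else (best, wait + 1)
    if wait >= 60:      # early stop: 60 epochs without improvement
        break
print(epoch, opt.param_groups[0]["lr"])
```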

    The trained model built on the master instrument was transferred to the slave instrument, with the initial learning rate set to 0.001 on the drug data set and 0.01 on the wheat data set. To reduce the gradient-descent rate on the drug data, the SGD optimizer was used there, while the Adam optimizer was used on the wheat data set. The maximum number of training epochs was set to 600 for both data sets, and both used the same learning-rate decay and early-stop strategies as during pre-training.

    1.5 Model evaluation

    As shown in Eqs.(1) and (2), the model's evaluation indicators are the root mean squared error (RMSE) and the coefficient of determination (R2). RMSE measures the difference between predicted and true values, and R2 assesses the goodness of fit of the regression model.

    RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(\hat{y}_i - y_i)^2}  (1)

    R^2 = 1 - \frac{\sum_{i=1}^{n}(\hat{y}_i - y_i)^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2}  (2)
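Assuming the standard definitions of RMSE and R2 (the original equations did not survive in this copy, so this is an assumption), the two metrics can be computed as:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error: sqrt of the mean squared residual."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1 - ss_res / ss_tot)

print(rmse([1, 2, 3], [1, 2, 5]))  # ≈ 1.155
print(r2([1, 2, 3], [1, 2, 5]))    # -1.0
```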

    2 Experimental results and discussion

    2.1 Analysis of performance on the drug data set

    2.1.1 Model prediction performance

    The A1 instrument is used as the master instrument, and 14 abnormal samples are removed. The 451 samples originally marked as the test set are used as the calibration set, and the two subsets originally marked as the calibration set and the validation set are combined into one test set of 190 samples. To speed up training, the data are standardized in advance. Four methods, PLS, SVR, CNN, and MSRCNN, are then used to establish calibration models. The prediction results of these models on the A1 validation set and on A2 are shown in Table 1.
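A sketch of the standardization step, fitting the scaler on the calibration set only and applying the same statistics to the test set; the synthetic matrices here are placeholders for the actual spectra:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Placeholder spectra: 451 calibration and 190 test samples, 649 points each.
rng = np.random.default_rng(0)
X_cal = rng.normal(2.0, 3.0, (451, 649))
X_test = rng.normal(2.0, 3.0, (190, 649))

scaler = StandardScaler().fit(X_cal)   # statistics from the calibration set only
X_cal_s = scaler.transform(X_cal)
X_test_s = scaler.transform(X_test)    # test set scaled with calibration stats
print(round(float(X_cal_s.mean()), 6), round(float(X_cal_s.std()), 6))
```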

    Table 1 A1 establishes a model to directly predict the results of A1 and A2

    The convolutional models show stronger predictive performance than the classical chemometric methods. At the same time, the prediction ability of the improved MSRCNN model is further improved over the ordinary CNN and achieves the best results: the RMSE and R2 reach 2.587 and 0.981, respectively, verifying the effectiveness of introducing multi-scale fusion and the residual structure. However, whether for the convolutional networks, PLS, or SVR, direct prediction on A2 is poor. For PLS in particular, R2 is negative, meaning the model cannot fit the spectra collected by the A2 instrument at all. Although MSRCNN is the best method in the comparison, with an R2 of 0.939 that is still usable, its prediction performance is much lower than on A1. This means that directly applying the model established on A1 to spectra from A2 would result in large prediction errors.

    2.1.2 Model transfer

    The convolutional network model established on the master instrument A1 is transferred to the slave instrument A2 using the four transfer methods described in section 1.3. For comparison with existing methods, 30 samples from the A2 data set are chosen at random as the training set, and the transferred model is fine-tuned. The results of using the fine-tuned model to predict the remaining 609 samples are shown in Table 2.

    Table 2 Prediction results of the convolution model under four transfer modes

    Every transfer strategy, whatever the network structure, improves the model's prediction results on the drug set. Moreover, the MSRCNN model predicts better than the CNN model under every transfer method, indicating that a model with better predictions before transfer also predicts better after transfer. The MSRCNN model achieves the best results under the second transfer method, with RMSE and R2 reaching 2.289 and 0.982, respectively, better than the existing methods[20-22,30]. This verifies the effectiveness of the method in this paper.

    Furthermore, it is easy to see that the choice of transfer method has a considerable impact on prediction performance. Comparing the first and third transfer methods with the second and fourth shows that whether the convolutional layer is updated or frozen has little effect on transfer performance. However, comparing the first and second methods with the third and fourth shows that whether the fully connected layer is transferred has a much greater impact. For this reason, the MSRCNN model, which has the better prediction performance, is selected for further analysis. The prediction results for the 609 prediction samples are shown in Fig.6. Red and yellow points represent the predictions before and after model transfer, respectively, and the blue line represents the reference value; the closer to the blue line, the better the prediction performance.

    Fig.6 Prediction results of four model transfer by using 30 training samples

    It can be seen from the figure that the overall performance of transferring the fully connected layer is good. In contrast, after fine-tuning the transfer methods that rebuild the fully connected layer, some samples deviate significantly from the true value even though most predicted points are close to the reference value. As a result, their RMSE and R2 performance is unsatisfactory.

    For further research, the sizes of the training and validation sets are varied. The prediction performance of the four transfer methods under different sample partitions is shown in Table 3.

    Table 3 The transfer prediction results of four models with different training set sizes

    The prediction performance of all four transfer methods improves as the training set grows. The first and second transfer methods behave similarly: when the number of training samples reaches 150, performance plateaus and no longer improves, with RMSE and R2 stable at around 2.0 and 0.985, respectively. After increasing the training samples to 190, the third and fourth transfer methods also stabilize, with RMSE and R2 around 2.2 and 0.981, respectively.

    2.2 Analysis of performance on wheat data set

    2.2.1 Model prediction performance

    The wheat data set continues to use A1 as the master instrument and A2 as the slave instrument. The K-S algorithm is used to divide the data set, and abnormal sample No.187 is removed. Of the remaining samples, 190 are used as the calibration set and the rest as the validation set. The data are likewise standardized in advance. The prediction results of the PLS, SVR, CNN, and MSRCNN models established on the A1 calibration set, evaluated on the A1 validation set and on A2, are shown in Table 4.
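The K-S split refers to Kennard-Stone sample selection; a sketch of the standard algorithm (assumed to match the paper's usage) is:

```python
import numpy as np

def kennard_stone(X, n_select):
    """Kennard-Stone selection: start from the two most distant samples,
    then repeatedly add the sample whose minimum distance to the
    already-selected set is largest (max-min criterion)."""
    X = np.asarray(X, dtype=float)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise dists
    i, j = np.unravel_index(np.argmax(d), d.shape)              # farthest pair
    selected = [int(i), int(j)]
    while len(selected) < n_select:
        remaining = [k for k in range(len(X)) if k not in selected]
        min_d = d[np.ix_(remaining, selected)].min(axis=1)      # dist to set
        selected.append(remaining[int(np.argmax(min_d))])
    return selected

X = np.array([[0.0], [1.0], [5.0], [10.0]])
print(kennard_stone(X, 3))  # [0, 3, 2]
```

The selected indices form the calibration (or transfer) set; the rest go to validation.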

    Table 4 A1 establishes a model to directly predict the results of A1 and A2

    The performance degradation of all models is smaller than on the drug data set because the differences between instruments are smaller here. Even so, the feature-extraction ability of the deep learning methods is still significantly better than that of classical algorithms such as PLS and SVR, without carefully designed pre-processing or wavelength-selection algorithms. In particular, for the MSRCNN model, the RMSE and R2 on the master instrument reach 0.309 and 0.977, again the best results. Its RMSE and R2 on the slave instrument are 0.638 and 0.935, which are even better than the predictions of the PLS and SVR models on the master instrument.

    2.2.2 Model transfer

    The K-S algorithm is used to select 30 samples from the A2 instrument as the training set to fine-tune the model transferred from A1, for comparison with existing methods. The prediction results on the remaining 217 samples after fine-tuning under the four transfer modes are shown in Table 5.

    Table 5 Prediction results of the convolution model under four transfer modes

    The experimental results are similar to those on the drug data set. The MSRCNN model remains better than the CNN model, and the choice of transfer method again has a large impact on the results. The first and second transfer methods still perform similarly, and the transferred MSRCNN model achieves the best results under the second transfer method: RMSE and R2 reach about 0.379 and 0.965, respectively, better than existing research[20]. However, negative transfer occurs under both the third and fourth transfer modes, with prediction performance after transfer even lower than before. For this reason, the MSRCNN model is again selected for analysis, and its prediction results using 30 training samples are shown in Fig.7.

    It can be seen from Fig.7 that the predicted values of the models that transfer the fully connected layer's parameters are generally closer to the true values than before the transfer. However, most predicted values of the models with rebuilt fully connected layers are offset, some samples seriously so, and even unpredictable. To rule out chance and further study post-transfer performance, the sample division ratio was varied and the predictions repeated. The results are shown in Table 6.

    Fig.7 The prediction results of the transfer of the four models by using 30 samples

    Table 6 The transfer prediction results of four models with different training set sizes

    Increasing the number of training samples improves the prediction performance of the transferred model. At about 90 samples, the first and second transfer methods hit the model's bottleneck; beyond that, prediction performance stops improving and fluctuates within a certain range. After 110 samples, the third and fourth transfer methods also stabilize, but they remain considerably worse than the first and second. This demonstrates that it is difficult to retrain the fully connected layer well when the spectral data involved in re-calibration are limited.

    3 Conclusion

    Advanced pattern recognition technology is driving the remarkable growth of chemometrics. In this paper, an advanced deep learning model named MSRCNN and a model transfer method adapted to it are proposed to improve the prediction performance of the calibration model both on one instrument and across instruments. The experimental results on the drug and wheat data sets show that:

    (1) The improved multi-scale fusion residual convolutional network can use convolution kernels of different sizes to obtain spectral features at different sparsity levels. Without carefully designed pre-processing or wavelength-selection algorithms, it has stronger prediction ability than ordinary convolutional networks and classical chemometric algorithms.

    (2) The problem of model failure can be solved by using transfer learning to transfer the model established on the master instrument to the slave instrument, requiring only a few sample spectra from the slave instrument for recalibration. A model with better predictive performance before the transfer also performs better after the transfer.

    (3) The feasibility of the model transfer method based on a convolutional neural network combined with transfer learning lies mainly in the fact that the features extracted by the convolutional layers and the feature weights learned by the fully connected layer are similar across instruments. Therefore, when training samples are few, transferring both the convolutional and fully connected layers works better than rebuilding the fully connected layer. Whether the convolutional layer parameters are frozen made no obvious difference in this study.


