
    A Local Quadratic Embedding Learning Algorithm and Applications for Soft Sensing

    2022-02-13 09:18:36
    Engineering, 2022, Issue 11

    Yaoyao Bao, Yuanming Zhu*, Feng Qian*

    Key Laboratory of Smart Manufacturing in Energy Chemical Process, Ministry of Education, East China University of Science and Technology, Shanghai 200237, China

    Keywords: Local quadratic embedding; Metric learning; Regression machine; Soft sensor

    Abstract: Inspired by the tremendous achievements of meta-learning in various fields, this paper proposes the local quadratic embedding learning (LQEL) algorithm for regression problems based on metric learning and neural networks (NNs). First, Mahalanobis metric learning is improved by optimizing the global consistency of the metrics between instances in the input and output space. Then, we further prove that the improved metric learning problem is equivalent to a convex programming problem by relaxing the constraints. Based on the hypothesis of local quadratic interpolation, the algorithm introduces two lightweight NNs; one is used to learn the coefficient matrix in the local quadratic model, and the other is implemented for weight assignment for the prediction results obtained from different local neighbors. Finally, the two sub-models are embedded in a unified regression framework, and the parameters are learned by means of a stochastic gradient descent (SGD) algorithm. The proposed algorithm can make full use of the information implied in target labels to find more reliable reference instances. Moreover, it prevents the model degradation caused by sensor drift and unmeasurable variables by modeling variable differences with the LQEL algorithm. Simulation results on multiple benchmark datasets and two practical industrial applications show that the proposed method outperforms several popular regression methods.

    1. Introduction

    In the cement production process, it is essential to monitor the quality of products, such as the fineness of raw meal, the free calcium oxide content of clinkers, and so forth. However, online instrumentations for these indicators are costly and require frequent regular maintenance. In industrial practice, off-line analysis in the lab is often implemented for these indexes every 2 h or more, which results in untimely feedback for real-time control systems. These problems can be solved by soft-sensing techniques [1,2].

    Soft-sensing models originate from multivariate statistical regression models, including linear regression (LR), principal component regression (PCR), partial least squares (PLS), and some variants with regularization strategies that balance the empirical error and the complexity of the model, such as the least absolute shrinkage and selection operator (LASSO) and ridge regression [7]. Kernel strategies have been extensively studied and combined with the aforementioned algorithms to solve nonlinear regression problems [8,9]. After that, machine learning methods such as k-nearest neighbor regression (k-NNR) [10], classification and regression trees (CARTs) [11,12], and support vector regression (SVR) [13,14] have been proposed for knowledge mining in massive data. To improve the performance of a single tree model, bagging strategies are implemented in random forest (RF) algorithms [15,16]. Similarly, the prediction accuracy of boosting algorithms can be increased by combining a series of iteratively learned weak machines [17,18], such as gradient boosting machines (GBMs) and extreme gradient boosting (XGBoost). Furthermore, breakthroughs in deep learning in image and speech recognition have caused neural networks (NNs) [19,20] to become one of the most popular methods in the field of machine learning, especially when data samples are sufficient. This popularity can be attributed to NNs' powerful feature extraction capabilities with specially designed structures [21].

    Among these algorithms, k-NNR is the simplest and one of the most prevalent regression methods. It is widely used in machine learning problems because it does not require an explicit model structure or any prior knowledge of the data distribution. However, the strategy of using the average output of its k-nearest neighbors (k-NNs) as the prediction result is also this method's greatest disadvantage. Initially, the k-NNR algorithm employed the Euclidean distance metric for the measurement of sample similarities. However, the magnitudes of the input features can vary greatly; redundancies and correlations between variables can also be misleading, resulting in an impractical distance metric. To cope with this problem, a generalization of the Mahalanobis distance [22] was proposed, which is equivalent to a weighted Euclidean distance between two linearly projected images. However, in practical applications, the input features tend to have distinct contributions to the output variables. The key is to develop a reliable feature extraction model and apply classical metrics, such as the Euclidean distance and cosine similarity, to the mapped features. Locally linear embedding (LLE) reconstructs the samples in a low-dimensional space using the locally linear weighting method and achieves dimension reduction by minimizing the reconstruction error [23]. Nevertheless, the adjacency relation constructed by the classical Euclidean metric in a high-dimensional space cannot meet the needs of all classification tasks. Thus, researchers usually try to transform the input features into a scaled space [24,25] and to obtain the weight coefficients for predicting the label by means of local reconstruction in that space. However, this method depends heavily on an elegant design of the transformation model. For example, in a fuzzy transformation, the basis function and the division of fuzzy intervals may have a great influence on the prediction result, because the meaningful information contained in the output labels is not fully exploited. To address this issue, Weinberger and Saul [26] introduced the concept of Mahalanobis distance metric learning, which allows the inverse covariance matrix in the Mahalanobis distance to be replaced by any positive semidefinite matrix. Similar to the idea of linear discriminant analysis (LDA) [27], the Mahalanobis distance metric is learned by maximizing the ratio of the average within-class distance to the average between-class distance. Xing et al. [28] constructed a convex optimization problem for metric learning by taking the average between-class distance as the optimization target and the average within-class distance as the constraint. This method has been applied to semi-supervised data clustering problems.

    The above methods are mainly designed for classification problems. For regression problems, Nguyen et al. [27] established a convex optimization problem by maximizing the consistency of the input and output distances over a set of constraint triplets in the neighborhood of each instance. However, the researchers did not elaborate on the solution for the transformation matrix A in metric learning; the weight matrix W is optimized only under the condition of a given transformation matrix A. Moreover, the tradeoff parameter C tends to have a significant impact on the performance of the algorithm. Linear metric learning (LML) has limited power in feature representation, especially for high-dimensional samples such as image and text data. Deep metric learning (DML) uses deep neural network (DNN) models instead of linear transformations to extract features in order to achieve metric learning [29-31]. One of the greatest differences between LML and DML lies in the form of the loss function. For example, Song et al. [30] minimized the distances between samples from the same class and maximized, with a margin, the distances between samples from different classes. In general, these methods involve the construction of triplet sets, which consist of an anchor, a positive point, and a negative point. This implies that the methods cannot be directly applied to regression problems.

    In addition, using the average of the k-NNs as the output prediction often yields conservative results. Take the wine quality assessment dataset in the University of California, Irvine (UCI) machine learning repository as an example: the k-NNR algorithm does not distinguish well between particularly high-grade and inferior wines. So, how does a human operator predict the label? First, the operator identifies the cases in the historical data that are most similar to the current sample as references, and then modifies the label according to the changes in the input features. We summarize this process and propose the local quadratic embedding learning (LQEL) algorithm. However, the coefficient matrix of the quadratic embedding function is difficult to obtain. Fortunately, the matrix depends on the location of the expansion point, that is, the current sample mentioned above. Thus, the coefficient matrix can be estimated by NNs, taking the current sample as the input. However, an appropriate network scale must be determined; otherwise, the model becomes over-fitted. To this end, ensemble methods that integrate multiple NNs are utilized to improve the generalization ability of the NN model [20,32]. The literature shows that standardizing the output of the hidden layer in the network by batch normalization (BN) can prevent distribution changes during the training process [33], which accelerates the convergence of networks. It has also been pointed out that the dropout strategy can improve the generalization ability of the NN [34]. Moreover, superimposing a certain intensity of Gaussian noise on sample data can increase the number of training samples and thus improve the robustness of the model [35]. In general, these approaches improve the generalization of NNs in two ways. First, they increase the number of training samples; second, they add constraints to the network structure, reducing its complexity and thus improving the network's predictive ability. This paper follows the latter route.

    In this paper, metric learning is first accomplished to determine the neighborhood of a given instance by maximizing the consistency of the distances between the input and output spaces. This makes full use of the information contained in the target labels and achieves the first step of the operators' strategy. Then, a local quadratic coefficient matrix is generated by a well-trained NN to make predictions based on neighboring references; this prevents the model degradation caused by sensor drift and unmeasured variables by means of the differential compensation method. Furthermore, the other NN assigns weights to the predictions provided by different neighbors according to their confidence, which achieves a balance between the prediction errors and measurement noise, thereby minimizing the prediction errors. The parameters of these two networks can be optimized by end-to-end training with stochastic gradient descent (SGD) algorithms. Empirical studies on several regression datasets, including two practical industrial datasets from the cement production process and the hydrocracking process, show that, in most cases, the proposed method outperforms the popular regression methods.

    The rest of this paper is organized as follows. In Section 2, a metric learning model is introduced and the optimization problem is proved to be equivalent to a convex optimization problem. In Section 3, the framework of the proposed LQEL is presented. In Section 4, several empirical studies, including a validation on actual industrial cases, are reported. The conclusions and contributions of this paper are summarized in Section 5.

    2. Metric learning

    A metric distance is a function $d: \mathcal{X} \times \mathcal{X} \to \mathbb{R}_0^{+}$ that satisfies the following, for any $x^{(i)}, x^{(j)}, x^{(k)} \in \mathcal{X}$: non-negativity, $d(x^{(i)}, x^{(j)}) \ge 0$, with equality if and only if $x^{(i)} = x^{(j)}$; symmetry, $d(x^{(i)}, x^{(j)}) = d(x^{(j)}, x^{(i)})$; and the triangle inequality, $d(x^{(i)}, x^{(k)}) \le d(x^{(i)}, x^{(j)}) + d(x^{(j)}, x^{(k)})$.

    Mahalanobis metric learning (MML) adopts the generalized Mahalanobis distance

    $$d_M(u, v) = \sqrt{(u - v)^{\mathrm{T}} M (u - v)}$$

    where M is a positive definite metric matrix to be learned, and u and v are two different instances. The objective of MML is to obtain the optimal matrix M that meets the purpose of metric learning.
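    For concreteness, the sketch below (variable names are illustrative; M is built as $L^{\mathrm{T}}L$ simply to guarantee positive semidefiniteness) evaluates this generalized distance and checks its equivalence to a Euclidean distance between linearly projected images, as noted in Section 1.

```python
import numpy as np

def mahalanobis_distance(u, v, M):
    """Generalized Mahalanobis distance d_M(u, v) = sqrt((u - v)^T M (u - v))."""
    diff = u - v
    return float(np.sqrt(diff @ M @ diff))

rng = np.random.default_rng(0)
L = rng.normal(size=(3, 3))
M = L.T @ L                                   # positive semidefinite by construction
u, v = rng.normal(size=3), rng.normal(size=3)

# Factoring M = L^T L shows d_M equals the Euclidean distance between Lu and Lv.
print(mahalanobis_distance(u, v, M))
print(np.linalg.norm(L @ u - L @ v))          # same value up to floating-point error
```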

    We hope to use the information implied in the output labels to guide the direction of metric learning. The basic principle is that similar input samples lead to similar target labels. The consistency of the distances between the input and output spaces, from a statistical point of view, can be described with the Pearson correlation coefficient. Therefore, the optimization problem is formed as follows:
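    The constrained formulation itself does not survive in this extraction. As a rough numerical sketch of the objective under the reading above (function and variable names are illustrative, and the correlation is simply taken over all sample pairs), the distance consistency of a candidate metric matrix M can be scored as follows:

```python
import numpy as np

def distance_consistency(X, y, M):
    """Pearson correlation between pairwise input distances under metric M and
    the corresponding pairwise output distances; larger means more consistent."""
    d_in, d_out = [], []
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            diff = X[i] - X[j]
            d_in.append(np.sqrt(diff @ M @ diff))   # Mahalanobis distance in input space
            d_out.append(abs(y[i] - y[j]))          # distance in output (label) space
    return np.corrcoef(d_in, d_out)[0, 1]
```

    Maximizing such a score over positive semidefinite M (e.g., parameterized as $M = L^{\mathrm{T}}L$) captures the intuition of the formulation, although the paper's actual problem is solved as a convex program after relaxing the constraints.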

    3. Local quadratic embedding learning

    The scheme of the LQEL algorithm is shown in Fig. 1. To obtain the output label corresponding to sample x, the k-NNs are first determined using the conclusion of the metric learning in Section 2 (the ellipse on the left of the figure). Suppose a function F: δx → δy is learned to describe the mapping from the difference of inputs to the difference of outputs in the two spaces. Then, for each reference instance in the neighborhood, the prediction can be expanded as follows:

    Fig. 1. The scheme of LQEL.

    where $U_\delta(x_0)$ represents the $\delta$ neighborhood of $x_0$ in the metric space defined in Section 2, and $W \equiv x_0^{\mathrm{T}} A_{x_0} + B_{x_0}$ is the weight coefficient matrix of the linear mapping function.

    The result of Eq. (10) implies that a linear model could be designed for prediction in $x_0$'s $\delta$ neighborhood. The matrix W expanded on different reference points can be estimated by an independent NN, for example, by using an NN to approximate the matrix as $N(x_0) = W$. Considering that the parameter matrices $\nabla^2 g(x_0)$ and $\nabla g(x_0)$ tend to be more stable than $g(x_0)$ in most practical circumstances, the NN required here should be much simpler than one used to estimate the output label directly. In particular, when the underlying function $g$ is quadratic, the matrices $A_{x_0}$ and $B_{x_0}$ do not change with the reference point; in this case, a simple linear NN works well. In general, these procedures can effectively reduce the complexity of the model and improve its generalization.
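    A minimal sketch of this idea follows (the layer sizes, shapes, and class names are assumptions rather than the authors' exact architecture): a small network maps a reference point x0 to the local coefficient vector, and the prediction for a query x is the neighbor's label corrected by the resulting linear term in δx = x − x0.

```python
import torch
import torch.nn as nn

class CoefficientNet(nn.Module):
    """Maps a reference point x0 to the coefficients of the local linear map
    delta_x -> delta_y (a vector here, since the target is scalar)."""
    def __init__(self, n_features, n_hidden=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_features),
        )

    def forward(self, x0):
        return self.net(x0)                      # shape: (..., n_features)

def local_prediction(coef_net, x, x0, y0):
    """Predict y(x) from one neighbor (x0, y0): y ≈ y0 + W(x0) · (x − x0)."""
    W = coef_net(x0)
    return y0 + (W * (x - x0)).sum(dim=-1)
```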

    This strategy provides k estimation results for each instance, one from each nearest neighbor, but their reliabilities can vary considerably. From an intuitive perspective, the predictions given by distant neighbors tend to have high uncertainty. This implies that different weights should be assigned to each of the predictions. Prediction uncertainties caused by the presence of measurement noise can be restrained by the averaging method. Inspired by this idea, we intend to design a machine that generates different weights according to the relative location of the instance, which minimizes the expectation of the mean square error (MSE).
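    One way to realize such a weighting machine, sketched under the same illustrative assumptions as above, is a second small network that scores the relative location of the query with respect to each neighbor and normalizes the scores with a softmax, so that the final prediction is a convex combination of the k local predictions:

```python
import torch
import torch.nn as nn

class WeightNet(nn.Module):
    """Assigns a confidence weight to each neighbor-based prediction from the
    relative location (x − x0_i) of the query with respect to that neighbor."""
    def __init__(self, n_features, n_hidden=4):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(n_features, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, 1),
        )

    def forward(self, rel_locations):                     # (batch, k, n_features)
        scores = self.score(rel_locations).squeeze(-1)    # (batch, k)
        return torch.softmax(scores, dim=-1)              # weights sum to one

def combine(predictions, weights):
    """Weighted combination of the k neighbor-based predictions."""
    return (weights * predictions).sum(dim=-1)            # (batch,)
```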

    In this paper, we introduce state-of-the-art strategies for NNs, such as BN and dropout. The MSE is employed as the loss function. The parameters of the proposed model, including the weights and biases in the two NNs, are optimized by the SGD algorithm.
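    As an illustration of this end-to-end training, the two sub-network sketches above can be optimized jointly with SGD against an MSE loss; the synthetic mini-batch, learning rate, and sizes below are placeholders.

```python
import torch

n_features, k = 14, 5                           # illustrative sizes
coef_net = CoefficientNet(n_features)           # from the sketches above
weight_net = WeightNet(n_features)
optimizer = torch.optim.SGD(
    list(coef_net.parameters()) + list(weight_net.parameters()), lr=1e-2)
loss_fn = torch.nn.MSELoss()

# Synthetic stand-ins for (query, its k neighbors, their labels, query label).
x = torch.randn(32, n_features)
neighbors_x = torch.randn(32, k, n_features)
neighbors_y = torch.randn(32, k)
y = torch.randn(32)

for _ in range(100):                            # a few SGD steps
    preds = local_prediction(coef_net, x.unsqueeze(1), neighbors_x, neighbors_y)  # (32, k)
    weights = weight_net(x.unsqueeze(1) - neighbors_x)                            # (32, k)
    loss = loss_fn(combine(preds, weights), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```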

    4. Empirical studies

    Fig. 2. The model structure of the proposed method.

    To evaluate how well the proposed algorithm works, we use real-world benchmark regression datasets along with two practical industrial datasets for verification. A series of classical approaches are briefly introduced for comparison with the proposed method. Finally, the experimental results are reported with tables and figures.

    4.1. Descriptions of datasets

    4.1.1. Benchmark datasets

    The details of the datasets [37-39] are shown in Table 1. For example, the red wine dataset shown in the first line contains 1599 samples. Each record contains 12 feature variables and a target label to be predicted. The objective is to establish a mathematical model that evaluates red wine quality through color, composition, and so forth. In this case, the quality of red wine is divided into nine grades from high to low, and only samples between the third and eighth grades are included in the dataset.

    4.1.2. Powder fineness dataset

    The aim of the first practical industrial application is to make an online prediction of powder fineness in the raw meal preparation process. The details of this technological process are presented in Fig. 3. In the raw meal preparation process, raw materials that consist of three or four minerals are transported onto the center of the grinding table. The materials are continuously pushed outward across the rotating grinding table by centrifugal force. Rocks are crushed into small particles by the squeezing of the grinding rollers and the grinding table before leaving the grinding disk. When high-speed hot wind enters the mill from the bottom, finer particles are blown into the chamber, while larger particles fall to the bottom and are transported back to the entrance of the mill by a bucket elevator. High-speed airflow driven by an induced draft fan brings these finer particles into a high-efficiency dynamic classifier, where unqualified particles fall back to the mill table along the cone and are reground. Fine products gathered from the cyclones and the electric dust collector are finally transported into a homogenization silo for storage.

    Table 1 Details of the datasets used in this paper.

    Fig. 3. Process flow chart of the raw meal preparation process.

    The most important indicator of this process is the fineness of the product, which further influences the product quality and the energy consumption of the subsequent calcination process. However, samples are collected and analyzed every 2 h due to the limited capacity for manual analysis in the lab, resulting in time lags for real-time process control and, further, in fluctuations in raw meal fineness. Therefore, the aim is to estimate the powder fineness in real time from other available and relevant online variables; that is, to achieve soft sensing for raw meal fineness.

    All of the variables that may affect or represent the fineness are considered as auxiliary variables. These include the current of the draft fan, the current of the classifier, the current of the driven motor, the current of the bucket elevator that transports the product, the current of the bucket elevator that transports the rejected slags, the differential pressure, the inlet temperature, the outlet temperature, the feed quantity, and so forth. In general, an 80 μm sieve residue and a 200 μm sieve residue are considered to be the indicators of raw meal fineness, with the former being more sensitive. Therefore, the dataset is constructed with 14 auxiliary variables and one output label, with a total of 959 instances (about 4 months of data).

    4.1.3. Hydrocracking process dataset

    The simplified flow diagram of a typical hydrocracking process is shown in Fig. 4. The feedstock is mixed with externally supplied hydrogen, heated to a specified temperature, and then fed into two cascade reactors. The first reactor is loaded with a hydrotreating catalyst to remove most of the sulfur and nitrogen, as well as some heavy metal compounds. The second reactor, where the cracking reaction is completed, is loaded with a hydrocracking catalyst. In these reactors, low-temperature hydrogen is directly added to absorb the heat released by the exothermic reaction and maintain a stable temperature. The reaction product passes through a high-pressure separator to recycle unreacted hydrogen and then through a low-pressure separator to separate some light gases. Finally, the separation of the different components is achieved by a fractionation tower. Six kinds of products are collected: light end (LE), light naphtha (LN), heavy naphtha (HN), kerosene (KE), diesel (DI), and bottom oil (BO).

    Due to fluctuations in product prices and changes in market supply and demand, the yields of the different products must be adjusted accordingly in order to maximize the total profit. Therefore, it is essential to accurately predict the yield of each product in time to guide operation optimization. In this paper, we take the yield of DI as an example to establish a prediction model. In this problem, the sampling period is 4 h and the dataset covers a total of 15 months. Finally, 2052 samples with 55 related input variables, including the feed mass flow rate, the volume flow of the fresh hydrogen gas, and so forth, are collected.

    4.2. User-specified parameters

    Seven typical regression algorithms are involved in this work:

    (1) MML-based k-NNR adopts the MML approach proposed in Ref. [27]. The model first defines constraints based on triplets and then formulates the optimization problem as a convex quadratic programming problem. In this algorithm, the number of nearest neighbors $K_k$ is to be determined.

    (2) SVR achieves a tradeoff between structural risk and empirical risk by means of the regularization coefficient C, and achieves nonlinear mapping by introducing kernel methods. In this paper, different kernels, such as the linear kernel, the Gaussian kernel, and the polynomial kernel, are compared with each other, and the Gaussian kernel is demonstrated to be better for these regression problems. Thus, the regularization coefficient C and the kernel parameter γ are to be optimized; a tuning sketch is given after this list.

    Fig. 4. Process flowchart of the hydrocracking process.

    (5) NNs are effective tools for solving regression problems. We implemented strategies including BN and dropout, which have been demonstrated to be state of the art in various fields [35]. To be specific, the batch size is chosen to be 30, the proportion of dropout is 0.3, and the number of hidden neurons $N_{nh}$ is chosen by fivefold cross-validation.
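    A configuration consistent with these settings might look as follows (only the dropout rate of 0.3 and the use of BN follow the text; the depth and hidden width are illustrative):

```python
import torch.nn as nn

def make_regressor(n_features, n_hidden):
    """Feedforward regressor with batch normalization and dropout (p = 0.3)."""
    return nn.Sequential(
        nn.Linear(n_features, n_hidden),
        nn.BatchNorm1d(n_hidden),
        nn.ReLU(),
        nn.Dropout(p=0.3),
        nn.Linear(n_hidden, 1),
    )

# n_hidden would be selected by fivefold cross-validation; training then iterates
# over mini-batches of size 30 with an MSE loss, as described above.
model = make_regressor(n_features=14, n_hidden=32)
```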
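    Returning to the SVR settings in item (2), a typical way to choose C and γ for the Gaussian (RBF) kernel is a cross-validated grid search; the sketch below uses scikit-learn with toy data and purely illustrative grid values.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=200)     # toy regression target

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]}
search = GridSearchCV(SVR(kernel="rbf"), param_grid, cv=5,
                      scoring="neg_mean_squared_error")
search.fit(X, y)
print(search.best_params_)
```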

    The results in the table show that the number of nearest neighbors in the LQEL varies with the dataset. First, it depends on the scale of the dataset, which determines the density of the samples in the space. For example, in the critical assessment of protein structure prediction (CASP) dataset, the instances are sufficient for the neighbors to serve as good references for prediction. This implies that a large number of nearest neighbors can effectively improve the prediction ability of the model. However, for the industrial fineness dataset, limited samples are available for modeling. In addition, it is difficult to use the values of the instrumental variables for state representation. For example, the quantity of slag rejection in a vertical roller mill (VRM) is often evaluated by the current of the bucket elevator, but current drift occurs when regular maintenance is carried out (approximately once every 2 days), especially when lubricating oil is added. Therefore, it is necessary to pay more attention to the changes in the current. Under these circumstances, the nearest neighbors in the space may not be as instructive as those of the CASP dataset, so the model chooses a small number of neighboring samples for prediction. The table also implies that the proposed LQEL model with simple feedforward NNs can perform well in regression problems. Compared with the feedforward NN model, there are fewer hidden neurons in the LQEL model (no more than four), and a smaller number of parameters must be estimated. This reduces the model complexity, thereby improving the model's generalization.

    Table 2 Hyper-parameters employed in case study.

    4.3. Performance comparison on different datasets

    To compare the performance of the proposed method with the abovementioned classical methods, a total of nine regression problems on seven datasets were used. Each experiment was repeated 30 times, and the MSE and mean absolute error (MAE) on the test sets were recorded. Then, statistical analyses were carried out on these indexes to validate the robustness of the algorithm.
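    A sketch of this evaluation protocol is given below (the split ratio and the model factory are assumptions; only the 30 repetitions and the MSE/MAE indexes follow the text):

```python
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error
from sklearn.model_selection import train_test_split

def evaluate(model_factory, X, y, n_repeats=30, seed=0):
    """Repeat random train/test splits, refit the model each time, and collect
    the test-set MSE and MAE for later statistical analysis (e.g., box plots)."""
    mses, maes = [], []
    for r in range(n_repeats):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.2, random_state=seed + r)
        model = model_factory().fit(X_tr, y_tr)
        pred = model.predict(X_te)
        mses.append(mean_squared_error(y_te, pred))
        maes.append(mean_absolute_error(y_te, pred))
    return np.array(mses), np.array(maes)
```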

    Table 3 shows the average indexes of each algorithm on the different datasets. The best performance in each line is marked in bold. It can be seen that, for the nine verification tests listed below, the LQEL algorithm proposed in this paper achieves the best performance on most of the datasets. Moreover, the LQEL algorithm achieves a performance comparable to those of the best-performing LightGBM and RF algorithms, and it has clear advantages when compared with the other algorithms.

    Moreover, to evaluate the robustness of the algorithms, it is necessary to compare the distributions of the obtained indexes. The MSE and MAE distributions of the multiple repeated tests are shown with box plots in Figs. 5 and 6, respectively. The figures imply that the LQEL has the most remarkable stability on most datasets, except for the wine quality, CASP, and fineness datasets. Although its performance on these datasets fluctuates slightly more than that of some of the other algorithms, its overall MSE and MAE are significantly lower; that is, the algorithms with more stable performance often sacrifice precision as the cost. In particular, strategies such as dropout, batch learning, and BN are implemented in both the NN and LQEL algorithms, but the latter outperforms the former.

    Figs. 7 and 8 show scatter plots of the prediction results of the different algorithms on the two industrial datasets, in which the abscissa is the ground-truth value and the ordinate is the predicted value. The coefficient of determination (R²) marked in the top left corner indicates that the LQEL algorithm shows advantages over the other algorithms in these two soft sensing applications. This can be attributed to two aspects:

    (1) The absolute value of the variables in these industrial datasets cannot well describe the process state. The method proposed in this paper makes corrections to the nearest neighbors according to the change of auxiliary variables, which puts greater emphasis on the differences and thus reduces the risk of the above problem.

    (2) This method employs two extremely simple NNs to achieve LQEL. One NN aims to find the coefficients of the local quadratic functions, and the other realizes the weight assignment for the predictions given by the nearest neighbors. Based on these advantages, the generalization ability of the proposed algorithm can be effectively improved.

    Table 3 Performance comparison of different algorithms.

    Fig. 5. MSE box plots of algorithms tested on different datasets. MML: MML-based k-NNR; LGB: LightGBM; DML: DML-based k-NNR.

    Fig. 6. MAE box plots of algorithms tested on different datasets.

    Fig. 8. Scatter plots of the prediction results for different algorithms on the hydrocracking dataset.

    5. Conclusions

    This paper proposed the LQEL algorithm for regression problems. MML is first improved by optimizing the consistency of the distances between samples in the input and output spaces. By relaxing the constraints, the modified problem is proved to be a convex optimization problem while keeping the same solution as the original problem. Based on this, a locally quadratic embedding model is developed, and different weights are assigned to the prediction results to minimize the expectation of the prediction error. In this framework, two extremely simple NNs are implemented to learn the quadratic embedding matrix and the weight assignments of the neighboring predictions. The two sub-models are trained jointly as a unified end-to-end model, which prevents the independent two-layer optimization from getting stuck in a local optimum. The proposed LQEL model has the following advantages:

    ●A global consistency for distances in the input and output space is achieved via improved metric learning.

    ●The information contained in output labels is better exploited,which leads to a better determination of the neighborhood for a certain instance.

    ●An LQEL framework was proposed based on the local quadratic embedding hypothesis. Two specially designed networks improve generalization by simplifying the model structure from either a global or a local perspective.

    ●The experimental results show that the LQEL can achieve more precise predictions with comparable robustness while employing only lightweight NNs.

    Acknowledgments

    This work was supported by the National Key Research and Development Program of China (2016YFB0303401), the International (Regional) Cooperation and Exchange Project (61720106008), the National Science Fund for Distinguished Young Scholars (61725301), and the Shanghai AI Lab.

    Compliance with ethics guidelines

    Yaoyao Bao, Yuanming Zhu, and Feng Qian declare that they have no conflict of interest or financial conflicts to disclose.
