
WDBM: Weighted Deep Forest Model Based Bearing Fault Diagnosis Method

Computers, Materials & Continua, 2022, Issue 9

Letao Gao, Xiaoming Wang, Tao Wang and Mengyu Chang

1Department of Computer Science, City University of Hong Kong, Hong Kong, 999077, China

2School of Computer and Software Engineering, Xihua University, Chengdu, 610039, China

3Nanjing University of Aeronautics and Astronautics, Nanjing, 210008, China

4McGill University, Montreal, H3G 1Y2, Canada

Abstract: In the research field of bearing fault diagnosis, classical deep learning models suffer from having too many parameters and high computing cost. In addition, classical deep learning models are not effective in small-data scenarios. In recent years, the deep forest has been proposed; it has fewer hyperparameters and an adaptive model depth. Furthermore, the weighted deep forest (WDF) improves the deep forest by assigning a weight to each decision tree based on its accuracy. In this paper, a weighted deep forest model-based bearing fault diagnosis method (WDBM) is proposed. The WDBM is a novel bearing fault diagnosis method that not only inherits the WDF's advantages, such as strong robustness, good generalization, fewer parameters and faster convergence, but also realizes effective diagnosis with high precision and low cost under the condition of small samples. To verify the performance of the WDBM, experiments are carried out on the Case Western Reserve University bearing data set (CWRU). The experimental results demonstrate that the WDBM achieves comparable recognition accuracy with less computational overhead and faster convergence.

Keywords: Deep forest; bearing fault diagnosis; weights

    1 Introduction

Bearings are core components of mechanical equipment, and their health is critical to its performance. A bearing fault can, at the least, lead to the failure of the whole machine and reduce production efficiency. In severe cases, bearing faults can even cause casualties and have a considerable impact on manufacturing production. To ensure the correct operation of large, high-precision machinery, bearing fault diagnosis is essential. If the corresponding actions are taken in time according to the diagnosis results, the occurrence of mechanical failure accidents can be reduced or eliminated, the utilization rate of mechanical equipment can be improved, and more efficient services can be provided for the manufacturing industry. Therefore, bearing fault diagnosis plays an important role in industrialization [1].

Bearing fault diagnosis is the task of diagnosing the running state or fault of a bearing according to the measurable operating information of the mechanical bearing. Currently, the mainstream bearing fault diagnosis methods can be divided into feature engineering-based methods [2-4] and deep learning-based methods [5]. Feature engineering methods mainly exploit signal processing techniques such as frequency analysis [2], empirical mode decomposition [3] and wavelet transform [4] to extract features from bearing monitoring data, and then use machine learning classifiers, such as artificial neural networks [6], support vector machines [7] and hidden Markov models [8], to classify the extracted features for fault diagnosis. Therefore, in feature engineering methods, the feature extraction and the classification/diagnosis processes are separated. Additionally, feature engineering relies heavily on manual experience, resulting in large errors and lower diagnostic accuracy when a single intelligent classification model is used. Compared with feature engineering methods, deep learning can directly extract fault features from original bearing monitoring data for classification and diagnosis, owing to its powerful feature extraction capability and end-to-end nature, which breaks the limitations of feature engineering and has drawn great attention to research on deep learning-based fault diagnosis [5]. Tamilselvan et al. [9] proposed a deep belief network-based fault diagnosis method; in [9], deep learning is applied to the field of fault diagnosis and the advantages of deep learning-based methods are confirmed. The literature [10] showed experimentally that a bearing fault diagnosis method based on a convolutional neural network has high diagnostic accuracy. However, the model in [10] is unstable and requires a large amount of data for early learning. It is evident from [11] that a one-dimensional deep learning model based on a convolutional neural network can directly diagnose bearing signals. A convolutional neural network method is proposed in [12]; however, the method of [12] has too many training hyperparameters, resulting in excessive training and diagnosis cost. Yuan et al. [13] proposed a bearing fault diagnosis method combining SVM and PSO. Nevertheless, the method in [13] is prone to over-fitting and has poor generalization ability for complex bearing classification problems. In [14], an improved deep residual network-based bearing fault diagnosis method was presented, which was shown experimentally to reach high accuracy. Unfortunately, these deep learning-based methods [12-14] require high computation cost and perform imperfectly on small data sets. Since a bearing works normally most of the time, it is difficult to obtain real fault data samples. Although the above deep learning-based bearing fault diagnosis methods have achieved good results to a certain extent, they still have deficiencies when small samples and low cost are required. In addition, some deep learning methods have poor model stability, generalization ability and robustness.

In classical deep learning models, neural networks are stacked layer by layer to construct the deep model and improve the learning ability. However, such models contain too many hyperparameters, which results in heavy computing overhead and slow convergence. In 2018, Zhou et al. [15-17] proposed the deep forest model (DF). In DF, random forests instead of neural networks are used at each layer to construct the deep model. In addition, a multi-grained scanning mechanism is designed to extract features from the original data. Compared with deep neural networks, the DF model is simple, highly interpretable, has low computational overhead, is adaptive and scalable in complexity, and to a certain extent has strong robustness and fewer hyperparameters. Therefore, DF has attracted extensive attention and has been applied in many scenarios, such as financial analysis [18-21], medical diagnosis [22-25], remote sensing [26-28], software defect prediction [29-32] and so on.

However, in DF the average of all decision tree results is taken as the result of the corresponding forest, which ignores the accuracy differences among the decision trees. Utkin et al. [31] proposed the weighted deep forest model, which assigns a weight to each decision tree according to its accuracy. Therefore, in the weighted deep forest model, decision trees with better performance have greater influence on the result of the current level, which speeds up the termination of the cascading structure and lowers the computation overhead. The advantages of the weighted deep forest model motivate us to introduce it into the research field of bearing fault diagnosis.

In this paper, a WDF-based bearing fault diagnosis method (WDBM) is proposed. Experimental results demonstrate that the WDBM inherits the WDF's advantages, such as strong robustness, good generalization, fewer parameters and faster convergence. Additionally, the WDBM realizes effective diagnosis with high precision and low cost under the condition of small data sets.

The paper is organized as follows. Section 2 introduces the weighted deep forest model. Section 3 gives the details of the weighted deep forest model-based bearing fault diagnosis method (WDBM). The experimental results are analyzed in Section 4, including the stability analysis, the generalization performance analysis, and the fault diagnosis effect and cost analysis for normal sample data and small sample data. Finally, Section 5 concludes this paper.

    2 Weighted Deep Forest Model

The Weighted Deep Forest (WDF) model is based on the deep forest (DF) model. Therefore, like DF, WDF has the advantages of fewer parameters, learning ability on small data, robustness, and generalization. In the classical deep forest model, the result of each random forest is the average ensemble result of all its decision trees, whose classifying performance may be quite different. In WDF, each decision tree is assigned a different weight according to its accuracy. The aim of the weighting mechanism is to enhance the positive influence of decision trees with better accuracy. Therefore, WDF can end the deep cascading earlier than the classical deep forest model. WDF includes the following core parts: the multi-grained scanning structure, the weighting mechanism, and the cascading structure.

    2.1 Multi-Grained Scanning Structure

Multi-grained scanning is a structure that enhances the representation learning ability. The original sample data are scanned by small windows of different scales to achieve feature transformation, and finally representation vectors with diversity are obtained.

The multi-grained process is shown in Fig. 1. The original input is an A-dimensional sample, and a small window of dimension B with step size 1 is used for sliding sampling. Through a series of feature transformations, C = (A - B)/1 + 1 characteristic sub-sample vectors of dimension B are obtained and fed into the weighted random forest and the weighted completely random forest. After training, each forest produces a C × E characterization vector. Stitching these vectors together yields the final sample representation that is passed to the cascade structure.
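As a concrete illustration of this scanning arithmetic, the sketch below slides a window of size B over an A-dimensional vector with stride 1 and returns the C = (A - B)/1 + 1 sub-vectors. It is a minimal sketch under stated assumptions (the array sizes and function name are illustrative), not the authors' code.

```python
# Minimal sketch of the multi-grained scanning step: an A-dimensional sample is scanned
# by a window of size B with stride 1, yielding C = (A - B)/1 + 1 sub-vectors of length B.
import numpy as np

def multi_grained_scan(sample: np.ndarray, window: int, stride: int = 1) -> np.ndarray:
    """Slide a window over a 1-D sample and stack the sub-vectors row-wise."""
    a = sample.shape[0]
    c = (a - window) // stride + 1                 # number of sub-vectors
    return np.stack([sample[i * stride : i * stride + window] for i in range(c)])

x = np.random.randn(400)                           # stand-in for an A = 400 dimensional raw sample
subs = multi_grained_scan(x, window=4)             # window size B = 4, as in Tab. 2
print(subs.shape)                                  # (397, 4): C = 400 - 4 + 1 sub-vectors
# In WDF these sub-vectors are fed to a weighted random forest and a weighted completely
# random forest; the per-window class-probability outputs are concatenated and passed on
# to the cascade structure.
```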

    2.2 Weighting Mechanism

According to the principle of the classical random forest, its result is the unweighted mean of the results of all its decision trees. However, the results of the individual decision trees of a random forest may be quite different on the same data. It is obvious that if the decision trees with better results exert more influence in the cascade structure, the result may be more accurate, and the depth of the deep model may be reduced, which lowers the overall computation cost.

Figure 1: Multi-grained scanning process of WDF

The averaging method of the classical deep forest and the weighting method of WDF are illustrated in Fig. 2. Assume there are n decision trees in a random forest. After training on the data, a classifying probability distribution is obtained for each decision tree. For the i-th decision tree, its probability distribution vector is P_i = [p_{i1}, p_{i2}, p_{i3}, ..., p_{ic}], where c is the number of classes and p_{ik} is the probability that a sample belongs to the k-th class according to the i-th decision tree. If classical averaging is used, the final result is simply the average over the decision trees, as in Eq. (1).
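A reconstruction of Eq. (1), consistent with the averaging rule described above:

\[
P \;=\; \frac{1}{n}\sum_{i=1}^{n} P_i \qquad (1)
\]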

In the weighted deep forest model, a weighting mechanism is exploited instead. After training, each decision tree is assigned a weight parameter w_i, and the result of the forest is the weighted sum of the probability distribution vectors of its decision trees, as in Eq. (2).
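A reconstruction of Eq. (2), consistent with the description above; the constraint that the weights sum to one is our assumption, following common practice for weighted ensembles:

\[
P \;=\; \sum_{i=1}^{n} w_i\,P_i, \qquad \sum_{i=1}^{n} w_i = 1 \qquad (2)
\]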

    2.3 Cascading Structure

The cascading structure of the weighted deep forest model is shown in Fig. 3. At each layer of the cascading structure there are 4 weighted random forests (marked in yellow) and 4 weighted completely random forests (marked in blue). Each completely random forest contains 1000 decision trees. In each decision tree of a completely random forest, every node randomly selects a feature as the discriminant condition and generates child nodes accordingly; splitting stops when each leaf node contains only instances of the same class.

Similarly, each random forest contains 1000 decision trees. In each decision tree of a random forest, every node is split by randomly selecting √d candidate features (d is the number of input features) and then computing the Gini coefficient, as in Eq. (3).
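The Gini coefficient referred to in Eq. (3) is the standard Gini impurity, written here consistently with the symbols defined in the next paragraph:

\[
Gini(p) \;=\; \sum_{k=1}^{K} p_k\,(1-p_k) \;=\; 1 - \sum_{k=1}^{K} p_k^{2} \qquad (3)
\]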

In Eq. (3), K is the number of classes, p_k is the probability of the k-th class, and Gini(p) is the Gini coefficient. The candidate split with the best Gini value is used as the discriminant basis, and splitting stops when each leaf node contains only instances of the same class.

Compared with the classical deep forest model, the forests in WDF are weighted, i.e., the output of each weighted forest is the weighted combination of all its decision trees instead of the plain average used in the classical deep forest model. To prevent over-fitting, K-fold cross-validation is used, and the result of each layer is transmitted to the next layer. When the cascaded forest structure is extended by a new layer, the effect of all previous cascaded layers is evaluated on the validation set, and the training process ends automatically once the evaluation result can no longer be improved. The number of layers and the complexity of the cascade forest structure are therefore determined automatically by the training process, which saves a great deal of parameter tuning overhead, improves the convergence speed of the cascade forest structure, and allows the cascading structure of the weighted deep forest to maintain a stable convergence state.
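As an illustration of this grow-and-stop behaviour, the following is a minimal sketch under stated assumptions: scikit-learn's ExtraTreesClassifier stands in for the completely random forest, synthetic data replace the bearing signals, and the per-tree weighting is omitted for brevity. It is not the authors' implementation.

```python
# Sketch of cascade growth with validation-based early stopping: each layer's class-probability
# outputs are spliced onto the original features, and growth stops when validation accuracy
# no longer improves.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, n_informative=6,
                           n_classes=3, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

feats_tr, feats_val, best_acc = X_tr, X_val, 0.0
for layer in range(10):                                   # upper bound; usually stops earlier
    forests = [RandomForestClassifier(n_estimators=100, random_state=layer),
               ExtraTreesClassifier(n_estimators=100, random_state=layer)]
    proba_tr, proba_val = [], []
    for forest in forests:
        forest.fit(feats_tr, y_tr)
        proba_tr.append(forest.predict_proba(feats_tr))
        proba_val.append(forest.predict_proba(feats_val))
    acc = (np.mean(proba_val, axis=0).argmax(axis=1) == y_val).mean()
    if acc <= best_acc:                                   # new layer adds nothing: stop growing
        break
    best_acc = acc
    # splice this layer's class-probability vectors onto the original features for the next layer
    feats_tr = np.hstack([X_tr] + proba_tr)
    feats_val = np.hstack([X_val] + proba_val)
print(f"cascade stopped at layer {layer} with validation accuracy {best_acc:.3f}")
```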

Figure 2: Weighting method of WDF and averaging method of DF

Figure 3: Cascading structure of WDF

    3 Details of Weighted Deep Forest Based Bearing Fault Diagnosis Method

    3.1 Datasets

In this paper, the bearing dataset from Case Western Reserve University (CWRU) is used, which is widely accepted as the standard dataset in the field of bearing fault diagnosis for its objectivity, reliability, and good quality. The CWRU bearing fault data were collected with a motor, a torque sensor, a power meter, a 16-channel data logger, 6203-2RS JEM SKF/NTN deep groove ball bearings and electronic control equipment, and the bearing faults were introduced manually using electro-discharge machining (EDM) technology.

    3.2 Flow Chart of WDBM

The flow chart of the weighted deep forest-based bearing fault diagnosis method (WDBM) is shown in Fig. 4.

Figure 4: Flow chart of WDBM

Step 1: Using the Case Western Reserve University bearing data set, select 9 sets of fault data of the bearing inner ring, outer ring and rolling element (fan-end acceleration, 6 o'clock direction) under motor loads of 0, 1 and 2 HP, with a 12 kHz sampling frequency and fault diameters of 0.007, 0.014 and 0.021 inches respectively, together with 1 set of corresponding bearing health data. There are 10 sets of data and 10 sample categories in the experiment, with about 3 million data points in total. The data set used in this experiment is shown in Tab. 1.

    Table 1: Parameter value of data in the experiment

Step 2: Carry out data enhancement and down-sampling on the bearing data set. The time length of the data-enhancement sliding window is set to 2048/12000 s, and the proportion of data overlap is 50%. The bearing health data are enhanced to twice the amount of the fault data, and then down-sampling with random deletion is carried out to prevent the diagnosis from falling into a local optimum due to the imbalance between fault and health data. All data are then normalized and one-hot coded to obtain labeled data samples, and the training set, test set and verification set are divided in the proportion 7:2:1 (a sliding-window sketch of this step is given after Step 7). After the final data pre-processing, 58,577 data samples are obtained over classes 0-9. The length of a single data sample is 2048 sampling points, the overlap amount is 2047, and the number of samples per class is [(0, 5864), (1, 5822), (2, 5850), (3, 5864), (4, 5857), (5, 5850), (6, 5878), (7, 5885), (8, 5850), (9, 5857)].

Step 3: Input the training set into the multi-grained scanning structure of the WDBM with the step size set to 1. After a series of feature transformations, the generated representation vectors are stitched together and sent into the cascade forest structure of the WDBM for learning.

Step 4: Set the number of weighted random forests and weighted completely random forests in each layer of the cascade structure to 4 and the number of sub-trees in each completely random forest to 100; calculate the fault diagnosis performance of each sub-tree, and compute the weighted mean of all the sub-trees as the result of the forest.

Step 5: Calculate the fault diagnosis rate of the current cascading structure on the training set, and let the model automatically evaluate on the verification set whether it is necessary to extend the cascade with another layer. If it is necessary, return to Step 4; if not, stop training immediately.

Step 6: Among all extension layers, find the layer with the highest diagnosis rate on the training set as the final diagnosis result of the training set; the learning process then ends.

Step 7: Input the test set into the WDBM, cycle through Steps 3-5, find the layer with the highest diagnosis rate on the test set among all extension layers, and output the diagnosis result of this layer as the final bearing fault diagnosis rate.
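To make Step 2 concrete, the sketch below segments a signal with a 2048-point window and 50% overlap, normalizes each segment, and splits the result 7:2:1; the window length, overlap and split ratio come from Step 2, while the synthetic signal, seed and function names are our own illustrative assumptions, not the authors' code.

```python
# Sliding-window data enhancement and 7:2:1 split, as described in Step 2 (sketch only).
import numpy as np

def sliding_segments(signal: np.ndarray, length: int = 2048, overlap: float = 0.5) -> np.ndarray:
    step = int(length * (1 - overlap))            # 50% overlap -> step of 1024 points
    starts = range(0, len(signal) - length + 1, step)
    return np.stack([signal[s:s + length] for s in starts])

raw = np.random.randn(120_000)                    # stand-in for one 12 kHz bearing record
segments = sliding_segments(raw)
segments = (segments - segments.mean(axis=1, keepdims=True)) / segments.std(axis=1, keepdims=True)

rng = np.random.default_rng(0)
idx = rng.permutation(len(segments))
n_train, n_test = int(0.7 * len(idx)), int(0.2 * len(idx))
train, test, val = np.split(segments[idx], [n_train, n_train + n_test])
print(train.shape, test.shape, val.shape)         # 7:2:1 training / test / verification split
```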

    3.3 WDBM Procedure

WDBM Procedure
If T is mechanical bearing data then
    TS = multi-grained scanning of T
end if
for i = 1 to 4
    for j = 1 to N
        Use TS to train the sub-trees
        Calculate the diagnostic rate of the sub-trees
        Calculate the weights of the current sub-trees in the forest
        Feed S to the current sub-tree
    end for
    Calculate the current forest's diagnosis rate on TS and S respectively
end for
Calculate the diagnostic rates of the current hierarchical structure on TS and S respectively
If the evaluation on M indicates that the next level of the cascade structure should be extended then
    Splice the diagnosis output of the previous layer onto the original feature space to form new TS and S
    Return to Step 1 and continue
else
    Take the layer with the highest diagnostic rate on TS and output the diagnostic results of this layer
end if


Input: training set T, test set S, verification set M; N: the number of trees in each forest; TS: the training set after multi-grained scanning.
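The weighting step inside the inner loop of the procedure (train the sub-trees, compute their diagnostic rates, derive weights, and take the weighted mean as the forest output) can be sketched as below. The paper does not give the exact weighting formula, so the accuracy-proportional normalization is an assumption, and the synthetic data and scikit-learn estimators are stand-ins rather than the authors' implementation.

```python
# Per-tree weighting inside one forest: score each tree on held-out data, normalize the
# scores into weights, and combine the per-tree class probabilities as a weighted mean (Eq. (2)).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1500, n_features=20, n_informative=6,
                           n_classes=3, random_state=1)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=1)

forest = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_tr, y_tr)

# diagnostic rate (accuracy) of every sub-tree on the validation split
tree_acc = np.array([tree.score(X_val, y_val) for tree in forest.estimators_])
weights = tree_acc / tree_acc.sum()               # normalize so the weights sum to 1 (assumption)

# weighted mean of the per-tree class-probability vectors instead of the plain average
tree_proba = np.stack([tree.predict_proba(X_val) for tree in forest.estimators_])
weighted_proba = np.tensordot(weights, tree_proba, axes=1)
print("weighted-forest accuracy:", (weighted_proba.argmax(axis=1) == y_val).mean())
```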

    4 Experimental Results

    4.1 Parameter Values of Experiment

As shown in Tab. 2, 4 weighted completely random forests and 4 weighted random forests were set in the cascade forest structure; each forest contained 100 decision trees, and 2-fold and 10-fold cross-validation were used. In the multi-grained scanning structure, the size of the scanning window was set to 4, the step size of the data slice was set to 1, the number of decision trees was set to 101, and the minimum sample number in each node was set to 0.1.
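For reference, the values listed above can be collected in a single configuration object. The sketch below only restates Tab. 2 in code form; the key names are ours and do not correspond to the API of any particular deep-forest library.

```python
# Experimental parameter values from Tab. 2 (key names are illustrative).
wdbm_params = {
    "cascade_random_forests": 4,              # weighted random forests per cascade layer
    "cascade_completely_random_forests": 4,   # weighted completely random forests per layer
    "trees_per_cascade_forest": 100,          # decision trees in each cascade forest
    "cross_validation_folds": (2, 10),        # 2-fold and 10-fold cross-validation
    "scan_window_size": 4,                    # multi-grained scanning window
    "scan_stride": 1,                         # step size of the data slice
    "scan_forest_trees": 101,                 # decision trees in the scanning forests
    "min_samples_per_node": 0.1,              # minimum sample number in each node
}
```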

    Table 2: Parameter values of experiment

    4.2 Results Analysis

    4.2.1 Stability Analysis

In order to ensure the accuracy of the experimental data, the experiments in this paper were repeated 10 times and the average value was taken as the result. In addition to the WDBM, other deep learning-based models, e.g., the Convolutional Neural Network model (CNN), the Long Short-Term Memory neural network model (LSTM) and the Classical Deep Forest model (CDF), are used for comparison in the bearing fault diagnosis experiments.

In this section, the F1-score measurement index is used to verify the robustness of the WDBM, dividing the training set, test set and verification set in the proportion 7:2:1 for 9 types of fault samples and 1 type of health samples under 0 horsepower load. The calculation of the F1 score is shown in Eq. (4).
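Eq. (4) is the standard F1 score, the harmonic mean of precision and recall:

\[
F_1 \;=\; \frac{2 \times Precision \times Recall}{Precision + Recall} \qquad (4)
\]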

In Eq. (4), the F1 score is the harmonic mean of the precision and the recall rate; the related indicators are shown in Tab. 3. When the F1 score is 1, the model reaches the best possible state, and when the F1 score is 0, the model is at its worst. The closer the value is to 1, the more robust the diagnostic model is.

    Table 3: F1-score evaluation index parameter

As can be seen from Fig. 5, the performance of these deep learning-based methods is similar. However, by virtue of its strong feature extraction ability, the F1 score of the WDBM on the 10 different data sets is above 0.95, which shows that the WDBM has excellent stability and good robustness and can also adapt to complex bearing fault diagnosis.

Figure 5: F1 scores of different diagnostic models

    4.2.2 Generalization Performance Analysis

In the actual operation of mechanical equipment, bearing loads differ. In order to accurately verify the generalization performance of the WDBM, the bearing data sets under 0, 1 and 2 horsepower loads are used for fault diagnosis to verify the fault diagnosis ability of the WDBM under different load conditions. The fault diagnosis accuracy is shown in Tab. 4.

    Table 4: The accuracy of fault diagnosis of different load bearings using WDBM

As can be seen from Tab. 4, the average diagnostic accuracy of the WDBM in bearing fault diagnosis is 98.27%. For the purpose of verifying the generalization ability of the proposed method under other load conditions, bearing data sets at 1 horsepower and 2 horsepower are introduced to carry out fault diagnosis tests with the same configuration as at 0 horsepower. The experimental results show that the average accuracy of the proposed method is above 98% under the 1 horsepower and 2 horsepower load conditions, which illustrates that the WDBM has good generalization ability.

    4.2.3 The Fault Diagnosis Accuracy and Training Time Analysis

In this section, the standard bearing fault diagnosis test is set up, which selects, at 0 horsepower and a 12 kHz sampling frequency, 9 groups of bearing inner ring, outer ring and rolling element fan-end acceleration fault data and 1 group of corresponding bearing health data. The test has 10 groups of data, 10 sample categories and about 3 million data points. The data are pre-processed according to Step 2 in Section 3.2, and the learning effect of the WDBM cascade structure on the training set is shown in Fig. 6.

It can be seen from Fig. 6 that the accuracy of the WDBM exceeds 90% within 75 training iterations and reaches 99.5% at about 1450 training iterations, after which convergence maintains a stable learning effect. It is also evident from Fig. 6 that the WDBM reaches higher bearing fault diagnosis accuracy than the CDF for the same number of cascading layers. The reason behind this is that the weighting mechanism is exploited in the WDBM, so the influence of decision trees with better classification accuracy is carried into the next cascading layer, which accelerates the improvement of the classification accuracy.

Figure 6: Accuracy vs. number of cascading layers

The final diagnosis performance of the WDBM on the test set is shown in Fig. 7. The lowest diagnosis rate on data sets 0-9 is 99.23%, the highest is 99.45%, and the average diagnosis rate is 99.35%, which fully demonstrates that the proposed method has high fault diagnosis accuracy.

    Figure 7:The diagnostic effect of the proposed method on the test set

Fig. 8 compares the training time cost of diagnosing 3 million data points for each model. The proposed WDBM performs well among these diagnosis models and its diagnosis cost is the lowest, 0.62 hours lower than that of the CDF. This verifies that: (1) the WDBM has a low diagnosis cost; (2) the cascade forest structure of the WDBM converges quickly.

    Figure 8:Model training time comparison of various diagnostic methods

    4.2.4 The Fault Diagnosis Performance Analysis for Small Sample Data

In this section, fault diagnosis experiments using small sample data are conducted to analyze the performance of the bearing fault diagnosis methods through the average fault diagnosis accuracy on 4 different sizes of small-sample sets. The results are shown in Tab. 5. "5 categories" means that there is 1 category of healthy samples and 4 categories of fault samples when training the model, with each category containing 1000 data points, and so on for the other settings. When 5 categories of samples are used to train the model, the diagnosis rates of deep learning methods such as CNN and LSTM are seriously affected by the number of samples; when 20 categories of sample data are used, CNN increases by 26.59% and LSTM by 33.99%. The tree-based models, CDF and WDBM, are less affected by the amount of sample data: with 5 categories of small sample data, the diagnostic rate of the CDF is 92.53%, while the precision rate of the WDBM is 95.31%. The experimental results show that the number of training samples affects the training effect of the model to some extent. A possible reason is that, for most deep learning methods, when the sample data are relatively small, methods with more hyperparameters produce serious over-fitting.

    Table 5: Bearing fault diagnosis accuracy for small sample data

    5 Conclusion

In engineering practice, the bearing operates normally most of the time, which, combined with its working environment and other factors, makes it difficult to obtain many real fault data samples. Therefore, fault diagnosis on small sample data sets is particularly important. In this paper, the weighted deep forest is exploited for bearing fault diagnosis. The method has strong robustness, good generalization, and low cost. It can diagnose accurately on small data sets, with an accuracy rate of more than 99%, which can provide new impetus for bearing fault diagnosis technology.

Funding Statement: The work is supported by the National Key R&D Program of China (Nos. 2021YFB2700500 and 2021YFB2700503). Tao Wang received the grant, and the URL of the sponsor's website is https://service.most.gov.cn/.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
