
    Improved Logistic Regression Algorithm Based on Kernel Density Estimation for Multi-Classification with Non-Equilibrium Samples

Computers, Materials & Continua, 2019, Issue 10

Yang Yu, Zeyu Xiong, Yueshan Xiong and Weizi Li

Abstract: Logistic regression is often used to solve linear binary classification problems such as machine vision, speech recognition, and handwriting recognition. However, it usually fails on certain nonlinear multi-classification problems, such as problems with non-equilibrium samples. Many scholars have proposed methods such as neural networks, least-squares support vector machines, and the AdaBoost meta-algorithm; these methods essentially belong to machine learning. In this work, based on probability theory and statistical principles, we propose an improved logistic regression algorithm based on kernel density estimation for solving nonlinear multi-classification problems. We have compared our approach with other methods on non-equilibrium samples; the results show that our approach guarantees sample integrity and achieves superior classification.

Keywords: Logistic regression, multi-classification, kernel function, density estimation, non-equilibrium.

    1 Introduction

Machine Learning has become one of the most popular fields in recent years. It has two main tasks: 1) classification, whose goal is to assign instances to the appropriate categories, and 2) regression, whose goal is to study the relationships between samples. The most basic classification problem is binary classification, which can be solved with algorithms such as Naive Bayes (NB), support vector machine (SVM), decision tree, logistic regression, KNN, and neural networks. More generally, multi-classification problems, such as identifying handwritten digits 0~9 and labeling document topics, have gained much attention recently. To give a few examples, Liu et al. [Liu, Liang and Xue (2008)] proposed a multi-classification algorithm based on fuzzy support vector machines, which provides better classification accuracy and generalization ability than traditional One-vs-Rest methods. Tang et al. [Tang, Wang and Chen (2005)] proposed a new multi-classification algorithm based on support vector machines and a binary tree structure to solve the problem of non-separable regions.

Among the existing algorithms, support vector machines are mostly used for the multi-classification problem, but every algorithm has limitations. The logistic regression algorithm can only solve dichotomous, linearly separable classification problems. Support vector machines typically support only small training samples and find it equally difficult to deal with multi-classification problems. Naive Bayes rests on the assumption that the features are conditionally independent; once a dataset fails to satisfy this assumption, its classification accuracy is greatly affected.

To solve the problems above, namely the difficulty of handling large-scale samples, the inapplicability to multi-classification, and the uncertainty of constraint conditions, Chen et al. [Chen, Chen, Mao et al. (2013)] proposed the Density-based Logistic Regression (DLR) model, which achieves good results in practical applications. Our model builds on kernel density-based logistic regression, and we construct a new kernel function for multi-classification problems. This has three advantages: 1) it improves the classification effect; 2) it extends the DLR model to multi-classification problems; 3) it shows good generalization performance on nonlinear and unbalanced data. We describe the theoretical rationale and check the classification quality of our new model in practical applications.

The rest of the paper is organized as follows. In Section 2, we explain background knowledge, including logistic regression binary classification, multi-classification, SoftMax, and the DLR model. In Section 3, we introduce several solutions for multi-classification problems with imbalanced samples. In Section 4, we explain our approach in detail. In Section 5, we compare our approach with other methods and analyze their performance. Finally, we conclude in Section 6.

    2 Logistic regression and related knowledge

    2.1 Logistic regression

Logistic regression is based on linear regression, with a sigmoid (logistic) function applied, which is a log-odds function. Logistic regression is represented as follows,

$$ y = \sigma(z) = \frac{1}{1 + e^{-z}} \qquad (1) $$

In the sigmoid model, the output y is distributed within the range (0, 1). The curve changes very steeply when the independent variable z is near 0 and is relatively flat elsewhere, so binary classification tasks can be handled well by taking 0 as the boundary. However, it is sometimes difficult to make the representation model approximate the expected model, so a constant term b is added to the linear function,

$$ z = w^{T}x + b \qquad (2) $$

By substituting Eq. (2) into Eq. (1), we have

$$ y = \frac{1}{1 + e^{-(w^{T}x + b)}} \qquad (3) $$

Based on these formulae, assume a given dataset D = {(x_i, y_i)}, i = 1, ..., N, with x_i ∈ R^D, where D is the dimension of the samples and y_i ∈ {0, 1}. Logistic regression is then described as follows:

$$ p(y = 1 \mid x) = \sigma\!\left(w^{T}\varphi(x) + b\right) \qquad (4) $$

where w stands for the feature weights, which are the parameters to be learned, and φ is the characteristic transformation function.

In the LR model, φ is usually defined to be equal to x. The key step is to learn the unknown parameters w and b. If y in Eq. (3) is regarded as the posterior probability estimate p(y = 1|x), Eq. (4) can be rewritten as:

$$ \ln \frac{p(y = 1 \mid x)}{1 - p(y = 1 \mid x)} = w^{T}\varphi(x) + b $$

Then w can be obtained by maximum likelihood estimation. With the definition b_i = p(y_i = 1|x_i) and y_i ∈ {0, 1}, the posterior probability of a single sample is

$$ p(y_i \mid x_i) = b_i^{\,y_i} (1 - b_i)^{1 - y_i} $$

Then, the maximum likelihood function is represented as follows,

$$ L(w) = \prod_{i=1}^{N} b_i^{\,y_i} (1 - b_i)^{1 - y_i} $$

For the convenience of calculation, the negative log of the maximum likelihood function is used as the objective function to be optimized,

$$ \mathrm{Loss}(w) = -\sum_{i=1}^{N} \left[ y_i \ln b_i + (1 - y_i) \ln (1 - b_i) \right] $$

Since maximizing the likelihood is equivalent to minimizing the negative log-likelihood, the last step is to minimize this loss function.
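To make the derivation concrete, the following is a minimal sketch (ours, not the authors' code) of binary logistic regression trained by gradient descent on this negative log-likelihood; the learning rate and iteration count are arbitrary choices.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic_regression(X, y, lr=0.1, n_iter=1000):
    """Minimize the negative log-likelihood above by gradient descent."""
    N, D = X.shape
    w, b = np.zeros(D), 0.0
    for _ in range(n_iter):
        p = sigmoid(X @ w + b)        # b_i = p(y_i = 1 | x_i)
        grad_w = X.T @ (p - y) / N    # gradient of the averaged loss w.r.t. w
        grad_b = np.mean(p - y)       # gradient w.r.t. the bias b
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```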

    2.2 Density-based logistic regression

In the DLR model, φ is a function that maps x to the feature space,

$$ \varphi_d(x) = \ln \frac{p(y = 1 \mid x_d)}{p(y = 1)}, \quad d = 1, \ldots, D $$

where D is the dimension of the input data, the numerator ln p(y = 1|x_d) measures the contribution of x_d to the probability of y = 1, and the divisor p(y = 1) measures the degree of imbalance of the dataset; p(y = 1) is the proportion of data in the training set whose label is y = 1. The Nadaraya-Watson estimator is usually used to estimate p(y = k|x_d), where k = 0, 1:

$$ p(y = k \mid x_d) = \frac{\sum_{i \in D_k} K(x_d, x_{i,d})}{\sum_{i=1}^{N} K(x_d, x_{i,d})} $$

where D_k ⊂ D is the subset of data in class k, and K(x, y) is a Gaussian kernel function defined as follows,

$$ K(x, y) = \frac{1}{\sqrt{2\pi}\, h_d} \exp\!\left( -\frac{(x - y)^2}{2 h_d^2} \right) $$

where h_d is the bandwidth of the kernel density function. h_d is usually set using Silverman's rule of thumb [Silverman and Green (1986)],

$$ h_d = 1.06\, \sigma_d\, N^{-1/5} \qquad (14) $$

where N is the total number of samples and σ_d is the standard deviation of x_d.
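A small illustration of the estimators above, assuming a single one-dimensional feature column; the function and variable names are ours.

```python
import numpy as np

def silverman_bandwidth(x):
    """h_d = 1.06 * sigma_d * N^(-1/5), Silverman's rule of thumb (Eq. (14))."""
    return 1.06 * np.std(x) * len(x) ** (-1.0 / 5.0)

def gaussian_kernel(x, y, h):
    # Normalization constants cancel in the Nadaraya-Watson ratio below.
    return np.exp(-((x - y) ** 2) / (2.0 * h ** 2))

def nw_class_probability(x_d, train_d, labels, k, h):
    """Nadaraya-Watson estimate of p(y = k | x_d) on one feature dimension."""
    weights = gaussian_kernel(x_d, train_d, h)
    return weights[labels == k].sum() / weights.sum()
```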

Next, we train w with the learning algorithm until w converges. Given b_i = p(y_i = 1|x_i), the loss function based on the likelihood probability takes the same negative log-likelihood form as before,

$$ \mathrm{Loss}(w) = -\sum_{i=1}^{N} \left[ y_i \ln b_i + (1 - y_i) \ln (1 - b_i) \right] $$

2.3 Extension of logistic regression to multiple classification

Since logistic regression is a binary classification model, it must be extended for multiple classification. Common extensions include multiple binary classification models and the SoftMax model.

    2.3.1 N-logistic model

The N-logistic model generally adopts One-vs-Rest or One-vs-One. When classifying a sample, we first run each of the binary classifiers, then vote and select the category with the highest score. To avoid tied votes, we also add each classifier's class probability to the tally. The predictive accuracy of the two approaches is usually very similar, so unless the data characteristics impose a specific need, either one can be chosen.
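As a sketch of the One-vs-One voting just described (our reconstruction; `pairwise` is a hypothetical mapping from class pairs to trained binary classifiers returning a probability):

```python
import numpy as np
from itertools import combinations

def ovo_predict(pairwise, n_classes, x):
    """One-vs-One voting: every pairwise classifier votes, and each
    classifier's probability is added to the tally to break ties.
    pairwise[(a, b)](x) returns p(class a | x) for the (a, b) classifier."""
    votes = np.zeros(n_classes)
    for a, b in combinations(range(n_classes), 2):
        p_a = pairwise[(a, b)](x)     # probability that x belongs to class a
        winner = a if p_a >= 0.5 else b
        votes[winner] += 1.0          # the hard vote
        votes[a] += p_a               # soft scores prevent tied votes
        votes[b] += 1.0 - p_a
    return int(np.argmax(votes))
```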

    2.3.2 SoftMax model

SoftMax regression is a generalization of logistic regression to multiple classification problems. Its basic form is described as follows,

$$ p(y = c \mid x) = \frac{e^{w_c^{T} x}}{\sum_{k=1}^{C} e^{w_k^{T} x}} $$

At test time, for a sample x, if there is a category c such that p(y = c|x) > p(y = c*|x) for every other category c* (c* ≠ c), then x belongs to category c.
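A minimal sketch of SoftMax prediction, with the usual max-shift for numerical stability; `W` stacks the class weight vectors w_c row by row (our naming).

```python
import numpy as np

def softmax(scores):
    """Numerically stable SoftMax over the class scores w_c^T x."""
    s = scores - scores.max()   # shifting does not change the ratio
    e = np.exp(s)
    return e / e.sum()

def predict(W, x):
    """Pick the category c with the largest p(y = c | x)."""
    return int(np.argmax(softmax(W @ x)))
```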

On the question of choosing the N-logistic model or the SoftMax model, many scholars have conducted in-depth exploration. The accepted view is that one must examine whether the categories are mutually exclusive. If there is a mutual-exclusion relationship between the categories to be classified, the SoftMax classifier is the better choice; if the categories are not mutually exclusive and may intersect, the N-logistic classifier is best suited. We verify this conclusion on the corresponding datasets in Section 5.

3 Analysis of classification results with unbalanced sample proportions

In actual classification tasks, we often need to deal with unbalanced sample proportions. For example, suppose the ratio of positive to negative samples in a dataset is 10:1, with 100 positive samples and 10 negative ones. A classifier trained on such data is very likely to assign all test data to the positive class; obviously, this classifier is invalid.

For this kind of data, the traditional logistic regression method usually fails to work. In recent years, studies on the unbalanced classification problem have been very active [Ye, Wen and Lv (2009)]. In this section we introduce several common approaches to the sample-imbalance classification problem.

    3.1 Obtain more samples

For unbalanced classification, the first solution is to obtain more samples, expanding the minority class to balance the sample proportion. However, in most cases the sampling procedure requires specific conditions, so it is generally difficult to obtain more samples under the same conditions.

    3.2 Sampling methods

The general sampling method is mainly based on modifying the numbers of unbalanced samples. The research of Estabrooks et al. [Estabrooks, Jo and Japkowicz (2004)] shows that general sampling methods work well on unbalanced classification problems.

    3.2.1 Under-sampling method

The under-sampling method, also called down-sampling [Gao, Ding and Han (2008)], eliminates samples from the majority class so that the class sizes in the whole group tend to balance. The commonly used variant is the random under-sampling method. Based on N_min, the number of minority-class samples, we randomly sample from the majority class and eliminate N samples so that N_max − N = N_min, balancing the samples.
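A sketch of random under-sampling as described above; the majority label and the RNG seed are our assumptions.

```python
import numpy as np

def random_undersample(X, y, majority, rng=None):
    """Randomly discard majority-class rows until both classes have as many
    samples as the minority class (so that N_max - N = N_min)."""
    rng = rng or np.random.default_rng(0)
    maj_idx = np.flatnonzero(y == majority)
    min_idx = np.flatnonzero(y != majority)
    keep = rng.choice(maj_idx, size=len(min_idx), replace=False)
    sel = np.concatenate([keep, min_idx])
    return X[sel], y[sel]
```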

    3.2.2 Over-sampling method

The over-sampling method, also called up-sampling, increases the number of minority-class samples. Either duplicating minority samples (the random over-sampling method) or fitting new data that follow some law of the original samples can be used to balance the sample counts. One commonly used method is the Synthetic Minority Over-sampling Technique (SMOTE) [Chawla, Bowyer, Hall et al. (2002)], which analyzes the distribution of the minority samples in feature space and proposes new samples. Compared with random over-sampling, the data added by SMOTE are completely new yet follow the regular pattern of the original samples. The main idea of SMOTE is shown in Fig. 1.

For each sample x in the minority class, the Euclidean distance to every other minority sample is calculated to obtain its k nearest neighbors. A suitable sampling ratio is set according to the class proportion to determine the sampling rate N. For each minority sample x, several samples are selected randomly from its k neighbors. For each selected neighbor x_n, a new sample is constructed from the original sample according to the following equation,

$$ x_{\mathrm{new}} = x + \mathrm{rand}(0, 1) \times (x_n - x) $$
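A compact sketch of this interpolation step, assuming `X_min` holds only the minority-class rows; k and the RNG seed are free choices, not values from the paper.

```python
import numpy as np

def smote(X_min, n_new, k=5, rng=None):
    """Generate n_new synthetic minority samples via
    x_new = x + rand(0, 1) * (x_n - x), x_n a random k-neighbor of x."""
    rng = rng or np.random.default_rng(0)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        x = X_min[i]
        d = np.linalg.norm(X_min - x, axis=1)   # Euclidean distances to x
        nbrs = np.argsort(d)[1:k + 1]           # k nearest, skipping x itself
        xn = X_min[rng.choice(nbrs)]
        out.append(x + rng.random() * (xn - x))
    return np.array(out)
```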

    3.3 Modify evaluation index

For unbalanced classification, using accuracy to evaluate classifiers may introduce bias. For example, assume the ratio of positive to negative samples in a dataset is 9:1 and all samples are predicted to be positive. Although the accuracy is as high as 90%, the classifier is useless.

Figure 1: The main idea of the SMOTE method

Table 1: Confusion matrix of binary classification

Therefore, accuracy can be a biased indicator. Davis et al. [Davis and Goadrich (2006)] advocated the evaluation indices Precision and Recall; the relevant quantities are listed in Tab. 1.

Precision refers to the proportion of predicted-positive samples that are actually positive, and Recall refers to the proportion of actual positive samples that are correctly predicted:

$$ \mathrm{Precision} = \frac{TP}{TP + FP}, \qquad \mathrm{Recall} = \frac{TP}{TP + FN} $$
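The two indices computed directly from the counts in Tab. 1 (a sketch; it assumes the denominators are nonzero):

```python
import numpy as np

def precision_recall(y_true, y_pred):
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN)."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return tp / (tp + fp), tp / (tp + fn)
```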

    3.4 Use penalty items to modify the weights

If more samples are difficult to obtain directly, the sample weights can be modified instead: increase the weight of the minority-class samples and reduce the weight of the majority-class samples. Because the minority-class samples then carry higher weight, better classification results can be achieved. A commonly used method is to add a penalty item on the majority class each time the sample weights are trained. In general, we use regularization, adding a penalty term to the objective function, which also reduces the chance of overfitting [Goodfellow, Bengio and Courville (2017)]. The regularized objective function is shown below,

$$ \tilde{J}(w) = J(w) + \alpha\, \Omega(w) $$

where α is a parameter that weighs the contribution of the penalty term against the objective function. The penalty can be adjusted by controlling α: if α = 0, there is no penalty; the larger α is, the greater the penalty.

After choosing an appropriate penalty, we train on the regularized objective function. In this way, the data error and the parameter scale can both be reduced and computational efficiency improved. In practice, however, selecting the optimal penalty term is a complicated problem that requires further testing.
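A sketch combining the two ideas of this subsection: a higher weight on minority samples plus an L2 penalty as Ω(w). The weight value 5.0 and the choice of L2 are our illustrative assumptions, not values from the paper.

```python
import numpy as np

def weighted_regularized_loss(p, y, w, alpha, minority_weight=5.0):
    """Negative log-likelihood with minority samples up-weighted,
    plus the penalty alpha * Omega(w) with Omega(w) = ||w||^2."""
    sample_w = np.where(y == 1, minority_weight, 1.0)  # class 1 = minority here
    nll = -np.mean(sample_w * (y * np.log(p) + (1 - y) * np.log(1 - p)))
    return nll + alpha * np.sum(w ** 2)
```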

    3.5 Kernel-based methods

For a general classification problem, we may assume that the sample data can be classified directly by a linear model; in other words, there is a hyperplane that separates the samples and ensures correct classification. In practice, however, there is usually no such hyperplane that partitions the original data correctly, which means the data are not linearly separable. For such a problem, we can consider preprocessing the data. Following the principle of support vector machines, data in the low-dimensional space are transformed into a high-dimensional space through a nonlinear transformation, so that they become linearly separable [Zhou (2016)]. With this method, the relationship between data samples can be written as dot products. For example, the linear regression function can be rewritten as follows,

$$ f(x) = w^{T}x + b = b + \sum_{i=1}^{N} \alpha_i\, x^{T} x^{(i)} $$

where x^{(i)} is a training example and α is the coefficient vector. Replacing the dot product with a kernel function k(x, x^{(i)}) = φ(x) · φ(x^{(i)}), we get

$$ f(x) = b + \sum_{i=1}^{N} \alpha_i\, k\!\left(x, x^{(i)}\right) $$

This function is nonlinear with respect to x, while it is linear with respect to φ(x).
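A sketch of prediction in this dual form, using a Gaussian (RBF) kernel as an example; γ is a free parameter and the names are ours.

```python
import numpy as np

def rbf_kernel(x, xi, gamma=1.0):
    """Gaussian (RBF) kernel k(x, x_i) = exp(-gamma * ||x - x_i||^2)."""
    return np.exp(-gamma * np.sum((x - xi) ** 2))

def kernel_predict(alpha, b, X_train, x, gamma=1.0):
    """f(x) = b + sum_i alpha_i * k(x, x_i): linear in phi(x), nonlinear in x."""
    return b + sum(a * rbf_kernel(x, xi, gamma) for a, xi in zip(alpha, X_train))
```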

Kernel functions handle nonlinear unbalanced classification well. They let a convex optimization technique address nonlinear problems in a linear manner, guarantee convergence, and improve classification accuracy, and they simplify parameter determination to some extent. In addition, transforming the data through a kernel function is much more efficient than applying the transformation function explicitly [Goodfellow, Bengio and Courville (2017)].

An SVM converts sample data into a high-dimensional feature space through a kernel function. According to the maximum-margin principle of SVM, the optimal classification hyperplane can then be constructed in this high-dimensional feature space to realize the classification. If the classification margin can be enlarged, especially between the minority-class samples and the optimal hyperplane, the generalization performance of the classifier and the accuracy on small classes can be effectively improved. This enables the correct classification of unbalanced data [Liu, Huang, Zhu et al. (2009)].

4 Improved method of kernel density estimation model for multi-classification

We extend the DLR model to solve the multi-classification problem and design an improved multi-classification algorithm. Assuming there are C classes, for k = 1, 2, ..., C, the DLR model is defined as follows,

$$ p(y = k \mid x) = \frac{\exp\!\left(w_k^{T} \varphi_k(x)\right)}{\sum_{c=1}^{C} \exp\!\left(w_c^{T} \varphi_c(x)\right)} \qquad (25) $$

where w_k = (w_{k1}, w_{k2}, ..., w_{kD}) is the feature weighting parameter of class k, and φ_k = (φ_{k1}, φ_{k2}, ..., φ_{kD}) is the characteristic transformation function of class k.

According to the Nadaraya-Watson estimator, the probability formula of class k is obtained as follows:

$$ \varphi_{kd}(x) = \ln \frac{p(y = k \mid x_d)}{p(y = k)}, \qquad p(y = k \mid x_d) = \frac{\sum_{i \in D_k} K(x_d, x_{i,d})}{\sum_{i=1}^{N} K(x_d, x_{i,d})} $$

Finally, we need to minimize the loss function,

$$ \mathrm{Loss}(w) = -\sum_{i=1}^{N} \sum_{k=1}^{C} 1_{\{y_i = k\}} \ln p(y_i = k \mid x_i) $$

where 1_{\{y_i = k\}} is 1 if and only if y_i = k, and 0 otherwise.

We now derive the gradient of the loss function with respect to w_k,

$$ \frac{\partial\, \mathrm{Loss}}{\partial w_k} = \sum_{i=1}^{N} \left( p(y_i = k \mid x_i) - 1_{\{y_i = k\}} \right) \varphi_k(x_i) $$

We adjust the weights w_k in the direction of gradient descent until w_k converges, at which point the w_k in the model are fully trained. During testing, the same kernel-function transformation is applied to the testing data; the transformed φ(x) and the trained w_k are substituted into Eq. (25). We then compare the probabilities of the different classes and choose the class with the largest probability as the resulting category. At this point, we have completed the generalization of logistic regression to multi-classification based on the kernel density function.

To show the difference between kernel-density-estimation logistic regression and classical logistic regression, we compare the corresponding algorithms below.

In the DLR algorithm, the input x is first given a feature transformation to obtain φ before the probability in Eq. (25) is calculated; φ then replaces x as the input to the probability formula. At the same time, the probability formula changes from the sigmoid function to the SoftMax function.

After conducting experiments, we found that the differences in φ among different labels obtained with the DLR algorithm are small. This produces a large error in the final classification result: the minority-class samples cannot be discriminated at all, and the value of the loss function is not reduced by training. Therefore, we improve the construction of the kernel-function bandwidth and the preprocessing of the data with the following scheme.

Figure 2: The process of searching for the optimal coefficient

First, we tried to train the parameters of the kernel function by modifying the weight values on the basis of Eq. (14). We conducted 16 groups of experiments, as shown in Fig. 2. In the earlier experiments, because the bandwidth h_d was too large, the characteristics of the input data X itself were difficult to distinguish. Properly reducing h_d can limit the complexity of the model, thereby improving its generalization performance. Through comparison experiments, we found that changing the coefficient 1.06 in Eq. (14) to 0.02 significantly improves the accuracy of the model. According to Fig. 2, we therefore reduce the bandwidth of the kernel function in Eq. (14).

In this way, the variation of h_d is improved. However, the values of φ may then become too large and overflow in subsequent calculations. Feature scaling is a crucial step in data preprocessing: for most machine learning and optimization algorithms, scaling feature values to the same interval improves performance. To accelerate the convergence of the loss function, we therefore normalize φ with the min-max method.

The training process of the improved model is given in Algorithm 3.
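Algorithm 3 itself is not reproduced in this extraction, so the following is a hedged reconstruction of the improved training process from the description above: per-dimension bandwidths with the coefficient 1.06 reduced to 0.02, the per-class DLR transformation, min-max normalization of φ, and SoftMax gradient descent. All names and hyperparameters are ours, not the paper's.

```python
import numpy as np

def bandwidth(x):
    # Eq. (14) with the coefficient 1.06 replaced by 0.02, per Section 4.
    return 0.02 * np.std(x) * len(x) ** (-1.0 / 5.0)

def dlr_features(X, X_train, y_train, C, eps=1e-12):
    """phi_kd(x) = ln( p(y=k | x_d) / p(y=k) ), p estimated by Nadaraya-Watson."""
    D = X_train.shape[1]
    h = np.array([bandwidth(X_train[:, d]) for d in range(D)])
    prior = np.array([(y_train == k).mean() for k in range(C)])
    phi = np.zeros((len(X), C, D))
    for i, x in enumerate(X):
        K = np.exp(-((x - X_train) ** 2) / (2.0 * h ** 2))  # (N, D) kernel weights
        denom = K.sum(axis=0) + eps
        for k in range(C):
            pk = K[y_train == k].sum(axis=0) / denom        # p(y=k | x_d)
            phi[i, k] = np.log((pk + eps) / (prior[k] + eps))
    lo, hi = phi.min(), phi.max()          # global min-max normalization of phi,
    return (phi - lo) / (hi - lo + eps)    # keeping the later exp() from overflowing

def train(X, y, C, lr=0.5, n_iter=200):
    """SoftMax gradient descent on the DLR features (cf. Eq. (25))."""
    phi = dlr_features(X, X, y, C)                 # (N, C, D)
    W = np.zeros((C, phi.shape[2]))
    onehot = np.eye(C)[y]
    for _ in range(n_iter):
        scores = np.einsum('kd,nkd->nk', W, phi)   # w_k^T phi_k(x_i)
        P = np.exp(scores - scores.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)          # p(y=k | x_i)
        grad = np.einsum('nk,nkd->kd', P - onehot, phi) / len(X)
        W -= lr * grad
    return W
```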

In the next section, we conduct comparative tests and analyze the relationship between test results and training results when Algorithm 3 is used.

5 Application of the improved algorithm: datasets and verification analysis

In particular, we have implemented the following methods for testing.

1) N-logistic model, One-vs-Rest method, abbreviated as NLR.

2) N-logistic model, One-vs-Rest method, combined with the over-sampling method, abbreviated as NLR_Sample.

3) N-logistic model, One-vs-Rest method, combined with the SMOTE method, abbreviated as NLR_Smote.

4) SoftMax model.

5) SoftMax model combined with Algorithm 3, abbreviated as DLR++.

We choose three datasets for testing. The first is the fitted dataset Numb constructed by us; each data element contains 10 floating-point values ranging from 0 to 5, and the data are divided into three categories: GroupA, GroupB and GroupC. The second dataset is Iris from UCI, with four floating-point features (sepal length, sepal width, petal length, and petal width); the target value is the iris species: virginica, versicolor, or setosa. The third dataset is Wine from UCI, which uses various parameters of a wine to predict its quality. There are 11 characteristic values: volatile acidity, non-volatile acidity, citric acid, residual sugar, chlorine, total sulfur dioxide, free sulfur dioxide, sulfate, concentration, pH, and alcohol. There are three quality classes: 1, 2, and 3.

Table 2: Accuracy (%) of different methods on three datasets

Table 3: Time (s) for different methods on three datasets

Table 4: Number of iterations for training-loss convergence on three datasets

To keep the data more versatile and the classification results more persuasive, we use k-fold cross-validation and assign the dataset to training and testing sets at a ratio of 7:3. The test results are given below.

From Tab. 2 to Tab. 4, we can see that the DLR++ algorithm shows better prediction accuracy. Among the three datasets, Numb is linear, while Iris and Wine are non-linear. The results show that both the N-logistic and SoftMax models can solve the multi-classification problem well. Both the over-sampling and SMOTE methods improve the classification results on the sample-imbalance problem, raising the accuracy rate by 1.34% and 3.92% respectively. The improved kernel-density-based DLR++ model is the best among all these methods and has a clear advantage on nonlinear multi-classification problems. We can also see that the improved DLR++ model converges faster than the original logistic model, using only 1/20 as many training iterations, while the accuracy rate increases by 7.04%, at the cost of a higher running time.

From Tab. 5 and Tab. 6, we can see that the improved DLR++ model performs better on datasets of large scale and with many categories. It achieves an accuracy of 93.0% on a 10-class problem, while LR achieves 47.0%.

Table 5: Performance of DLR++ on datasets of different scales

Table 6: Performance of DLR++ with different numbers of categories

    6 Conclusion

In this paper, we propose an improved logistic regression model based on kernel density estimation that can be applied to nonlinear multi-classification problems. We have compared and tested several common logistic-regression-based algorithms. From the experimental results, we found that the sampling methods [Gao, Ding and Han (2008); Chawla, Bowyer, Hall et al. (2002)] can improve classification accuracy, but the training samples they produce differ substantially from the original samples, which destroys the data characteristics inherent in the original sample. In contrast, our improved model guarantees the integrity of the samples, has obvious advantages in classification accuracy, and shows good generalization ability with an acceptable training speed. There is still room for optimization in training, especially in the matrix-operation stage. In the future, we will reduce the size of the matrices and use block computation, which is expected to decrease training time and improve efficiency. Combining this with applications in document retrieval [Xiong and Wang (2018); Xiong, Shen, Wang et al. (2018)], we also plan to examine whether the improved method in this paper is effective for document classification, which is of interest to us.

Acknowledgement: The authors would like to thank all anonymous reviewers for their suggestions and feedback. This work was supported by the National Natural Science Foundation of China (Grant No. 61379103).
