
    Improved Logistic Regression Algorithm Based on Kernel Density Estimation for Multi-Classification with Non-Equilibrium Samples

Yang Yu, Zeyu Xiong, Yueshan Xiong and Weizi Li

Computers, Materials & Continua, 2019, Issue 10

Abstract: Logistic regression is often used to solve linear binary classification problems such as machine vision, speech recognition, and handwriting recognition. However, it usually fails on certain nonlinear multi-classification problems, such as problems with non-equilibrium samples. Many scholars have proposed methods such as neural networks, least-squares support vector machines, and the AdaBoost meta-algorithm, which essentially belong to the machine-learning category. In this work, based on probability theory and statistical principles, we propose an improved logistic regression algorithm based on kernel density estimation for solving nonlinear multi-classification problems. We have compared our approach with other methods using non-equilibrium samples; the results show that our approach guarantees sample integrity and achieves superior classification.

Keywords: Logistic regression, multi-classification, kernel function, density estimation, non-equilibrium.

    1 Introduction

Machine learning has become one of the most popular fields in recent years. There are two main tasks in machine learning: 1) classification, whose goal is to divide instances into appropriate categories, and 2) regression, whose goal is to study the relationship between samples. The most basic classification problem is binary classification, which can be solved using algorithms such as Naive Bayes (NB), support vector machine (SVM), decision tree, logistic regression, KNN, and neural networks. More generally, multi-classification problems, such as identifying handwritten digits 0~9 and labeling document topics, have gained much attention recently. To provide a few examples, Liu et al. [Liu, Liang and Xue (2008)] proposed a multi-classification algorithm based on fuzzy support vector machines, which provides better classification accuracy and generalization ability compared with traditional One-vs.-Rest methods. Tang et al. [Tang, Wang and Chen (2005)] proposed a new multi-classification algorithm based on support vector machines and a binary tree structure to solve the problem of non-separable regions.

In existing regression algorithms, support vector machines are mostly used for multi-classification problems, but they have some limitations. The logistic regression algorithm can only solve dichotomous, linearly separable classification problems. Support vector machines typically support only small training samples and are equally difficult to apply to multi-classification problems. Naive Bayes is based on the assumption that the features are conditionally independent; once a dataset does not satisfy this assumption, its classification accuracy is greatly affected.

In order to address the problems above, namely difficulty with large-scale samples, inapplicability to multi-classification, and uncertainty in constraint conditions, Chen et al. [Chen, Chen, Mao et al. (2013)] proposed the Density-based Logistic Regression (DLR) model, which performs well in practical applications. Our model builds on kernel density-based logistic regression, and we construct a new kernel function for multi-classification problems. This has three advantages: 1) it improves the classification performance; 2) it extends the DLR model to multi-classification problems; 3) it shows good generalization performance on nonlinear and unbalanced data. We describe the theoretical rationale and assess the classification quality of our new model in practical applications.

The rest of the paper is organized as follows. In Section 2, we explain background knowledge, including logistic regression for binary classification, multi-classification, SoftMax, and the DLR model. In Section 3, we introduce several solutions for multi-classification problems with imbalanced samples. In Section 4, we explain our approach in detail. In Section 5, we compare our approach to other methods and analyze their performance. Finally, we conclude in Section 6.

    2 Logistic regression and related knowledge

    2.1 Logistic regression

Logistic regression is based on linear regression with a sigmoid (logistic) function applied, which is a logarithmic probability function. Logistic regression is represented as follows,

$$ y = \frac{1}{1+e^{-z}}, \quad z = w^{T}x \tag{1} $$

In the sigmoid model, the output y is distributed within the range [0, 1]. When the independent variable z is near 0, the curve of y is very steep, while at other values y is relatively stable. Therefore, binary classification tasks can be handled well by taking z = 0 as the boundary. However, it is sometimes difficult to make this representation approximate the expected model, so a constant term b is added to the function,

$$ z = w^{T}x + b \tag{2} $$

By substituting Eq. (2) into Eq. (1), we have

$$ y = \frac{1}{1+e^{-(w^{T}x+b)}} \tag{3} $$

Based on these formulae, assume a given dataset $D=\{x_i, y_i\}$, $i=1,\cdots,N$, $x_i \in \mathbb{R}^{D}$, where D is the dimension of the samples and $y_i \in \{0,1\}$. Logistic regression is then described as follows:

$$ y = \frac{1}{1+e^{-(w^{T}\varphi(x)+b)}} \tag{4} $$

where w stands for the feature weights, which are the parameters to be learned, and φ is the feature transformation function.

In the LR model, φ is usually defined to be equal to x. The key step is to learn the unknown parameters w and b. If y in Eq. (3) is regarded as the posterior probability estimate $p(y=1 \mid x)$, Eq. (4) can be rewritten as:

$$ p(y=1 \mid x) = \frac{1}{1+e^{-(w^{T}\varphi(x)+b)}} \tag{5} $$

Then w can be obtained by maximum likelihood estimation. With the definition $b_i = p(y_i=1 \mid x_i)$ and $y = 0$ or $1$, the posterior probability for a single sample is

$$ p(y_i \mid x_i) = b_i^{\,y_i}\,(1-b_i)^{\,1-y_i} \tag{6} $$

Then the maximum likelihood function is represented as follows,

$$ L(w,b) = \prod_{i=1}^{N} b_i^{\,y_i}\,(1-b_i)^{\,1-y_i} \tag{7} $$

For convenience of calculation, the negative logarithm of the likelihood is used as the objective function to be optimized,

$$ Loss(w,b) = -\ln L(w,b) = -\sum_{i=1}^{N}\left[y_i \ln b_i + (1-y_i)\ln(1-b_i)\right] \tag{8} $$

Since maximizing the likelihood is equivalent to minimizing the negative log-likelihood, the last step is to minimize this loss function.
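To make the optimization concrete, here is a minimal NumPy sketch of binary logistic regression trained by gradient descent on the negative log-likelihood of Eq. (8). The function name `train_logistic`, the toy dataset, the learning rate, and the iteration count are our illustrative assumptions, not values from the paper.

```python
import numpy as np

def sigmoid(z):
    """Sigmoid function of Eq. (1)."""
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, lr=0.1, n_iter=1000):
    """Minimize the averaged negative log-likelihood (Eq. (8)) by gradient descent."""
    N, D = X.shape
    w, b = np.zeros(D), 0.0
    for _ in range(n_iter):
        p = sigmoid(X @ w + b)          # p_i = p(y_i = 1 | x_i)
        grad_w = X.T @ (p - y) / N      # gradient of the averaged loss w.r.t. w
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy example: two linearly separable clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
w, b = train_logistic(X, y)
pred = (sigmoid(X @ w + b) > 0.5).astype(int)
print("training accuracy:", np.mean(pred == y))
```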

    2.2 Density-based logistic regression

In the DLR model, φ is a function that maps x to the feature space,

$$ \varphi_d(x) = \ln \frac{p(y=1 \mid x_d)}{p(y=1)}, \quad d = 1, \ldots, D $$

where D is the dimension of the input data, $\ln p(y=1 \mid x_d)$ measures the contribution of $x_d$ to the probability of y = 1, and $\ln p(y=1)$ measures the degree of imbalance of the dataset. $p(y=1)$ is the proportion of data in the training set whose label is y = 1. The Nadaraya-Watson estimator is usually used to estimate $p(y=k \mid x_d)$, where k = 0, 1:

$$ p(y=k \mid x_d) = \frac{\sum_{x_i \in D_k} K(x_d, x_{id})}{\sum_{i=1}^{N} K(x_d, x_{id})} $$

where $D_k \subset D$ is the subset of data in class k, and K(x, y) is a Gaussian kernel function defined as follows,

$$ K(x,y) = \exp\left(-\frac{(x-y)^2}{2h_d^2}\right) $$

where $h_d$ is the bandwidth of the kernel density function. $h_d$ is usually set using Silverman's rule of thumb [Silverman and Green (1986)],

$$ h_d = 1.06\,\sigma\,N^{-1/5} \tag{14} $$

where N is the total number of samples and σ is the standard deviation of $x_d$.

Next, we need to train w through the learning algorithm until w converges. Given $b_i = p(y_i=1 \mid x_i)$, the loss function based on the likelihood is calculated as follows,

$$ Loss(w) = -\sum_{i=1}^{N}\left[y_i \ln b_i + (1-y_i)\ln(1-b_i)\right] $$
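The following sketch shows how the DLR feature transform could be computed per dimension with the Gaussian kernel and the Eq. (14) bandwidth. The names `gaussian_kernel` and `dlr_features` are ours, and the small clamping constants are illustrative safeguards, not part of the paper.

```python
import numpy as np

def gaussian_kernel(x, xi, h):
    """Gaussian kernel K(x, y) = exp(-(x - y)^2 / (2 h^2))."""
    return np.exp(-((x - xi) ** 2) / (2.0 * h ** 2))

def dlr_features(X_train, y_train, X, coef=1.06):
    """Map samples X into the DLR feature space phi.

    phi_d(x) = ln p(y=1 | x_d) - ln p(y=1), with p(y=1 | x_d) estimated by the
    Nadaraya-Watson estimator over the training set, one dimension at a time."""
    N, D = X_train.shape
    p_pos = np.mean(y_train == 1)                       # p(y = 1), class proportion
    phi = np.zeros((X.shape[0], D))
    for d in range(D):
        sigma = np.std(X_train[:, d])
        h = max(coef * sigma * N ** (-1.0 / 5.0), 1e-6)  # Eq. (14), clamped away from 0
        for j, x in enumerate(X[:, d]):
            k_all = gaussian_kernel(x, X_train[:, d], h)
            k_pos = k_all[y_train == 1].sum()
            p_cond = max(k_pos / max(k_all.sum(), 1e-12), 1e-12)
            phi[j, d] = np.log(p_cond) - np.log(p_pos)
    return phi
```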

2.3 Extension of logistic regression to multiple classification

Since logistic classification is a binary classification model, it must be extended for multiple classification; common extensions include multiple binary classification models or the SoftMax model.

    2.3.1 N-logistic model

The N-logistic model generally adopts One-vs.-Rest or One-vs.-One. When classifying a sample, we first run the binary classifiers and then vote, selecting the category with the highest score. To prevent ties, we also add the class probability output by each classifier to its vote, as sketched below. The predictive accuracy of these two approaches is usually very similar, so unless the data characteristics impose a specific need, either approach may be chosen.
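As a concrete illustration of the One-vs.-Rest scheme just described, here is a minimal sketch that trains one binary logistic model per class and picks the class whose classifier reports the highest probability. It reuses `train_logistic` and `sigmoid` from the earlier sketch, which is our assumption for self-containment.

```python
import numpy as np

def train_one_vs_rest(X, y, n_classes):
    """Train one binary logistic classifier per class (One-vs.-Rest)."""
    models = []
    for k in range(n_classes):
        yk = (y == k).astype(float)        # class k vs. all the others
        models.append(train_logistic(X, yk))
    return models

def predict_one_vs_rest(models, X):
    """Score every classifier and break ties with the class probabilities."""
    scores = np.column_stack([sigmoid(X @ w + b) for w, b in models])
    return np.argmax(scores, axis=1)       # highest probability wins
```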

    2.3.2 SoftMax model

SoftMax regression is a generalization of logistic regression to multi-classification problems. Its basic form is described as follows,

$$ p(y=c \mid x) = \frac{e^{w_c^{T}x}}{\sum_{c'=1}^{C} e^{w_{c'}^{T}x}} $$

At test time, a sample x is assigned to category c if, for every other category c* (c* ≠ c), $p(y=c \mid x) > p(y=c^* \mid x)$.
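A minimal, numerically stable SoftMax sketch follows; subtracting the row maximum before exponentiating is a standard stabilization trick we add, not something the paper specifies, and the weight matrix in the demo is random.

```python
import numpy as np

def softmax(Z):
    """Row-wise softmax: p(y = c | x) for every class c."""
    Z = Z - Z.max(axis=1, keepdims=True)   # stabilize the exponentials
    expZ = np.exp(Z)
    return expZ / expZ.sum(axis=1, keepdims=True)

# x belongs to the class with the largest probability.
W = np.random.default_rng(1).normal(size=(4, 3))   # 4 features, 3 classes
x = np.ones((1, 4))
print(np.argmax(softmax(x @ W), axis=1))
```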

On the question of choosing between the N-logistic model and the SoftMax model, many scholars have conducted in-depth exploration. The current consensus is that one must examine whether the categories are mutually exclusive. If there is a mutual exclusion relationship between the categories to be classified, the SoftMax classifier is the better choice, while if there is no mutual exclusion between categories and the categories intersect, the N-logistic classifier is best suited. We verify this conclusion on the corresponding datasets in Section 5.

3 Analysis of the classification results with unbalanced sample proportions

In actual classification tasks, we often need to deal with unbalanced data sample proportions. For example, suppose the ratio of positive to negative samples in a dataset is 10:1, with 100 positive instances and 10 negative ones. If such data are used to train a classifier, it is very likely that all test data will be assigned to the positive class. Obviously, this classifier is invalid.

For this kind of data, the traditional logistic regression method usually fails to work. In recent years, research on the problem of unbalanced classification has been very active [Ye, Wen and Lv (2009)]. In this section we introduce several common approaches to the sample-imbalance classification problem.

    3.1 Obtain more samples

For unbalanced classification, the first solution is to obtain more samples, expanding the minority class to balance the sample proportion. However, in most cases the sampling procedure requires specific conditions, so it is generally difficult to obtain more samples under the same conditions.

    3.2 Sampling methods

General sampling methods are mainly based on modifying the number of unbalanced samples. The research of Estabrooks et al. [Estabrooks, Jo and Japkowicz (2004)] shows that general sampling methods have a good effect on unbalanced classification problems.

    3.2.1 Under-sampling method

The under-sampling method is also called down-sampling [Gao, Ding and Han (2008)]; it eliminates some samples from the majority class so that the class sizes tend to be balanced. The commonly used method is random under-sampling. Based on $N_{min}$, the number of minority class samples, we randomly eliminate N samples from the majority class such that $N_{max} - N = N_{min}$, so the classes are balanced.
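A minimal sketch of the random under-sampling step just described; the function name is ours, and we assume binary 0/1 labels with class 1 in the minority.

```python
import numpy as np

def random_undersample(X, y, rng=None):
    """Drop random majority-class samples until both classes have equal size."""
    rng = rng or np.random.default_rng()
    idx_min = np.flatnonzero(y == 1)               # minority class
    idx_maj = np.flatnonzero(y == 0)               # majority class
    keep_maj = rng.choice(idx_maj, size=idx_min.size, replace=False)
    keep = np.concatenate([idx_min, keep_maj])
    return X[keep], y[keep]
```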

    3.2.2 Over-sampling method

The over-sampling method is also called up-sampling; it increases the number of minority class samples. Either duplicating minority samples (random over-sampling) or fitting new data that follow the pattern of the original samples can be used to balance the classes. One commonly used method is the Synthetic Minority Over-sampling Technique (SMOTE) [Chawla, Bowyer, Hall et al. (2002)]. The method analyzes the distribution of the feature space of the minority samples and synthesizes new samples. Compared to random over-sampling, the data added by SMOTE are completely new and follow the regular pattern of the original samples. The main idea of SMOTE is shown in Fig. 1.

For each sample x in the minority class, the Euclidean distance to every other minority sample is calculated to obtain its k nearest neighbors. A suitable sampling ratio is set according to the class proportion to determine the sampling rate N. For each minority sample x, several samples are selected randomly from its k neighbors. For each chosen neighbor $x_n$, a new sample is constructed from the original sample according to the following equation,

$$ x_{new} = x + rand(0,1) \times (x_n - x) $$
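A compact sketch of the SMOTE interpolation above; we use a brute-force neighbor search for clarity, and `n_new` (the number of synthetic points) is an illustrative parameter of ours.

```python
import numpy as np

def smote(X_min, k=5, n_new=100, rng=None):
    """Synthesize n_new minority samples by interpolating toward k-NN neighbors."""
    rng = rng or np.random.default_rng()
    new_samples = np.empty((n_new, X_min.shape[1]))
    for i in range(n_new):
        x = X_min[rng.integers(len(X_min))]
        # Brute-force k nearest neighbors by Euclidean distance (excluding x itself).
        dists = np.linalg.norm(X_min - x, axis=1)
        neighbors = np.argsort(dists)[1:k + 1]
        xn = X_min[rng.choice(neighbors)]
        new_samples[i] = x + rng.random() * (xn - x)   # x_new = x + rand(0,1) * (xn - x)
    return new_samples
```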

    3.3 Modify evaluation index

For unbalanced classification, using accuracy to evaluate classifiers may be biased. For example, suppose the ratio of positive to negative samples in a dataset is 9:1 and every sample is labelled positive: although the accuracy is as high as 90%, the classifier is useless.

Figure 1: The main idea of the SMOTE method

Table 1: The confusion matrix of binary classification

                     Predicted positive      Predicted negative
Actual positive      TP (true positive)      FN (false negative)
Actual negative      FP (false positive)     TN (true negative)

Therefore, accuracy can be a biased indicator. Davis et al. [Davis and Goadrich (2006)] proposed the evaluation indices Precision and Recall; the relevant quantities are listed in Tab. 1.

Precision refers to the proportion of true positives among all samples predicted positive, $Precision = \frac{TP}{TP+FP}$, and Recall refers to the proportion of actual positive samples that are correctly predicted, $Recall = \frac{TP}{TP+FN}$.
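A small sketch computing these two indices from predictions; the function name and the zero-division fallback are ours.

```python
import numpy as np

def precision_recall(y_true, y_pred):
    """Precision = TP/(TP+FP); Recall = TP/(TP+FN), for binary 0/1 labels."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```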

    3.4 Use penalty items to modify the weights

If it is difficult to sample directly, the method of modifying sample weights can be used: increase the weights of minority class samples and reduce the weights of majority class samples. Because the weights of the minority class samples are higher, they lead to better classification results for that class. The commonly used method is to add a penalty term for the majority class each time the sample weights are trained. In general, we use regularization to add a penalty term to the objective function, which also reduces the chance of overfitting [Goodfellow, Bengio and Courville (2017)]. The regularized objective function is shown below,

$$ \tilde{J}(w) = J(w) + \alpha\,\Omega(w) $$

where α is a parameter that weighs the contribution of the penalty term Ω(w) against the objective function J(w). The penalty can be adjusted by controlling α: if α = 0, there is no penalty; otherwise, the larger the α, the greater the penalty.

After choosing an appropriate penalty, training minimizes the regularized objective function. In this way, both the data error and the parameter scale can be reduced, and computational efficiency can be improved. In practice, however, selecting the optimal penalty term is a complicated problem that requires further testing.
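To make this concrete, here is a hedged sketch of a logistic loss with an L2 penalty $\alpha\|w\|^2$ and per-class sample weights; the inverse-class-frequency weighting is a common choice we assume (both classes must be present), not one the paper prescribes.

```python
import numpy as np

def weighted_regularized_loss(w, b, X, y, alpha):
    """Negative log-likelihood with inverse-frequency class weights + L2 penalty."""
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    # Minority samples get larger weights, majority samples smaller ones.
    class_weight = {0: 1.0 / np.mean(y == 0), 1: 1.0 / np.mean(y == 1)}
    sw = np.where(y == 1, class_weight[1], class_weight[0])
    nll = -np.sum(sw * (y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12)))
    return nll + alpha * np.dot(w, w)    # J(w) + alpha * Omega(w)
```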

    3.5 Kernel-based methods

For general classification problems, we can assume that the sample data can be classified directly by a linear model; in other words, there is a hyperplane that can separate the samples and ensure that the classification is correct. In practice, however, there is usually no such hyperplane that partitions the original data correctly, which means the data are not linearly separable. For such a problem, we can consider preprocessing the data. Using the principle of support vector machines, data in a low-dimensional space are transformed into a high-dimensional space through a nonlinear transformation so that they become linearly separable [Zhou (2016)]. Using this method, the relationship between data samples can be written as dot products. For example, the linear regression function can be rewritten as follows,

$$ y = b + \sum_{i} \alpha_i\, x^{T} x^{(i)} $$

where $x^{(i)}$ is the i-th training sample and α is a coefficient vector. Replacing the dot product with a kernel function $k(x, x^{(i)}) = \varphi(x)\cdot\varphi(x^{(i)})$, we get

$$ y = b + \sum_{i} \alpha_i\, k(x, x^{(i)}) $$

This function is nonlinear with respect to x, while it is linear with respect to φ(x).

Kernel functions can deal well with nonlinear unbalanced classification. They allow convex optimization techniques to address nonlinear problems in a linear manner, which guarantees convergence, improves classification accuracy, and simplifies parameter determination to some extent. In addition, evaluating the kernel function $k(x, x^{(i)})$ directly is often far more efficient than explicitly constructing the transformed vectors φ(x) and taking their dot product [Goodfellow, Bengio and Courville (2017)].
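A short sketch of the kernelized form above, using a Gaussian (RBF) kernel as the example kernel; fitting the α coefficients with a kernel ridge solve (and omitting the bias b) is our illustrative assumption.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian kernel matrix: k(a, b) = exp(-gamma * ||a - b||^2)."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def fit_kernel_regression(X, y, lam=1e-3):
    """Solve (K + lam*I) alpha = y, a kernel ridge fit for the coefficients."""
    K = rbf_kernel(X, X)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict_kernel_regression(X_train, alpha, X_new):
    """y = sum_i alpha_i * k(x, x_i), the kernelized linear form."""
    return rbf_kernel(X_new, X_train) @ alpha
```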

An SVM converts sample data into a high-dimensional feature space through a kernel function. According to the maximum-margin principle of SVMs, the optimal classification hyperplane can be constructed in the high-dimensional feature space to realize the classification. If the classification margin can be enlarged, especially between the minority class samples and the optimal hyperplane, the generalization performance of the classifier and the accuracy on small-sample classes can be effectively improved. This enables the correct classification of unbalanced data [Liu, Huang, Zhu et al. (2009)].

4 Improved method of the kernel density estimation model for multi-classification

We extend the DLR model to solve the multi-classification problem and design an improved multi-classification algorithm. Assuming there are C classes, for k = 1, 2, ..., C, the DLR model is defined as follows,

$$ p(y=k \mid x) = \frac{\exp\left(w_k^{T}\varphi_k(x)\right)}{\sum_{c=1}^{C}\exp\left(w_c^{T}\varphi_c(x)\right)} \tag{25} $$

where $w_k = (w_{k1}, w_{k2}, \ldots, w_{kD})$ is the feature weighting parameter of class k, and $\varphi_k = (\varphi_{k1}, \varphi_{k2}, \ldots, \varphi_{kD})$ is the feature transformation function of class k.

According to the Nadaraya-Watson estimator, the probability formula of class k is obtained as follows:

$$ p(y=k \mid x_d) = \frac{\sum_{x_i \in D_k} K(x_d, x_{id})}{\sum_{i=1}^{N} K(x_d, x_{id})} $$

and, by analogy with Section 2.2, the class-k feature transform is $\varphi_{kd}(x) = \ln\left(p(y=k \mid x_d)/p(y=k)\right)$.

Finally, we need to minimize the loss function,

$$ Loss = -\sum_{i=1}^{N}\sum_{k=1}^{C} 1_{\{y_i=k\}}\,\ln p(y=k \mid x_i) $$

where $1_{\{y_i=k\}}$ is 1 if and only if $y_i = k$, and 0 otherwise.

Now we present the gradient of the loss function with respect to $w_k$,

$$ \frac{\partial Loss}{\partial w_k} = \sum_{i=1}^{N}\left(p(y=k \mid x_i) - 1_{\{y_i=k\}}\right)\varphi_k(x_i) $$

This is the usual softmax cross-entropy gradient, obtained by differentiating the loss above under the model of Eq. (25).

We adjust the weights $w_k$ in the direction of gradient descent until $w_k$ converges, at which point the model is trained. During testing, the same kernel function transformation is performed on the test data. The transformed φ(x) and the trained $w_k$ are substituted into Eq. (25). We then compare the probabilities of the different classes and choose the class with the largest probability as the resulting category. At this point, we have completed the generalization of logistic regression to multi-classification based on the kernel density function.
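A minimal sketch of the training loop just described: softmax over $w_k^T \varphi_k(x)$, trained by gradient descent with the gradient above. Computing one DLR feature block per class by reusing `dlr_features` from the Section 2.2 sketch is our generalization, it assumes integer labels 0..C-1, and the learning rate and iteration count are illustrative; `softmax` is the earlier sketch.

```python
import numpy as np

def dlr_multiclass_features(X_train, y_train, X, C, coef=1.06):
    """Stack per-class DLR features: phi_k treats class k as the 'positive' class."""
    blocks = [dlr_features(X_train, (y_train == k).astype(int), X, coef)
              for k in range(C)]
    return np.stack(blocks)                        # shape (C, n_samples, D)

def train_dlr_softmax(phi, y, C, lr=0.1, n_iter=500):
    """Gradient descent on the softmax cross-entropy loss over the w_k."""
    _, n, D = phi.shape
    W = np.zeros((C, D))
    Y = np.eye(C)[y]                               # one-hot indicator 1_{y_i = k}
    for _ in range(n_iter):
        logits = np.einsum('kd,knd->nk', W, phi)   # w_k^T phi_k(x_i)
        P = softmax(logits)
        for k in range(C):
            grad_k = (P[:, k] - Y[:, k]) @ phi[k] / n
            W[k] -= lr * grad_k
        # convergence check omitted for brevity
    return W
```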

    To show the difference between kernel density estimation logistic regression and classical logistic regression,we will compare the corresponding algorithms later.

In the DLR algorithm, the input x is given a feature transformation to obtain φ before the probability in Eq. (25) is calculated; φ is then substituted for x as the input to the probability formula. At the same time, the probability formula is changed from the sigmoid function to the SoftMax function.

After conducting experiments, we found that the differences in φ among different labels obtained with the original DLR algorithm are small, there is a large error in the final classification result, the minority class samples cannot be discriminated at all, and the value of the loss function is not reduced by training. Therefore, we improve the construction of the kernel bandwidth and the data preprocessing with the following scheme.

Figure 2: The process of searching for the optimal coefficient

First, we try to train the parameters of the kernel function by modifying the weight values on the basis of Eq. (14). We conducted 16 groups of experiments, as shown in Fig. 2. In the earlier experiments, since the value of $h_d$ was too large, the characteristics of the input data X itself were difficult to distinguish. Properly reducing the bandwidth can limit the complexity of the model, thereby improving its generalization performance. Through comparison experiments, we found that changing the coefficient 1.06 in Eq. (14) to 0.02 can significantly improve the accuracy of the model. According to Fig. 2, we therefore reduce the bandwidth of the kernel function in Eq. (14).

In this way, the differences in $h_d$ are improved. However, this may cause the value of y to become too large and overflow in subsequent calculations. Feature scaling is a crucial step in data preprocessing: for most machine learning and optimization algorithms, scaling feature values to the same interval improves performance. In order to accelerate the convergence rate of the loss function, we normalize φ using the min-max method.

The training process of the improved model is given in Algorithm 3.
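The listing of Algorithm 3 itself is not reproduced in this extracted text. The following sketch shows how the two improvements would combine with the earlier training sketch, under our assumptions: the 0.02 bandwidth coefficient from Fig. 2 and min-max normalization of φ. The helper names are ours.

```python
import numpy as np

def min_max_scale(phi, eps=1e-12):
    """Scale each feature of phi to [0, 1] so the loss converges faster."""
    lo = phi.min(axis=1, keepdims=True)
    hi = phi.max(axis=1, keepdims=True)
    return (phi - lo) / (hi - lo + eps)

def train_improved_dlr(X_train, y_train, C):
    """Improved DLR training: reduced bandwidth (0.02) + min-max normalized phi."""
    phi = dlr_multiclass_features(X_train, y_train, X_train, C, coef=0.02)
    return train_dlr_softmax(min_max_scale(phi), y_train, C)

def predict_improved_dlr(W, X_train, y_train, X_test, C):
    """Apply the same transform to test data and pick the most probable class."""
    phi = min_max_scale(
        dlr_multiclass_features(X_train, y_train, X_test, C, coef=0.02))
    logits = np.einsum('kd,knd->nk', W, phi)
    return np.argmax(logits, axis=1)
```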

In the next section, we conduct a comparative test to analyze the relationship between the test results and the training results when Algorithm 3 is used.

5 Application of the improved algorithm: datasets and verification analysis

    In particular,we have implemented the following methods for testing.

1) N-logistic model with the One-vs.-Rest method, abbreviated as NLR.

2) N-logistic model with the One-vs.-Rest method, combined with the oversampling method, abbreviated as NLR_Sample.

3) N-logistic model with the One-vs.-Rest method, combined with the SMOTE method, abbreviated as NLR_Smote.

4) SoftMax model.

5) SoftMax model combined with Algorithm 3, abbreviated as DLR++.

We choose three datasets for testing. The first is the fitted dataset Numb constructed by us; each data element contains 10 floating-point values ranging from 0 to 5, and the data are divided into three categories: GroupA, GroupB and GroupC. The second dataset is Iris from UCI. There are four features, including calyx length, calyx width and petal width, with floating-point eigenvalues; the target value is the iris classification result: virginica, versicolor, or setosa. The third dataset is Wine from UCI, which uses various parameters of a wine to predict its quality. There are 11 characteristic values: volatile acidity, non-volatile acid, citric acid, residual sugar, chlorine, total sulfur dioxide, free sulfur dioxide, sulfate, concentration, pH and alcohol. There are three quality classes: 1, 2, or 3.

Table 2: Accuracy (%) of different methods on three datasets

Table 3: Time (s) for different methods on three datasets

Table 4: The number of iterations for training Loss convergence on three datasets

To keep the data more versatile and the classification results more persuasive, we use k-fold cross validation and split the dataset into training and testing sets at a ratio of 7:3. The test results are given below.

From Tab. 2 to Tab. 4, we can see that the DLR++ algorithm shows better prediction accuracy. Among the three datasets, Numb is linear, while Iris and Wine are non-linear. The results show that both the N-logistic and SoftMax models can solve the multi-classification problem well. Both the oversampling and SMOTE sampling methods can improve the classification results of the sample-imbalance problem, with accuracy increased by 1.34% and 3.92% respectively. The improved kernel density-based DLR++ model is the best among all these methods, and it has an advantage in solving nonlinear multi-classification problems. From Tab. 2 to Tab. 4, we can also see that the improved DLR++ model converges faster than the original logistic model, using only 1/20 of the training iterations. At the same time, the accuracy has been increased by 7.04%, at the cost of a higher operation time.

From Tab. 5 to Tab. 6, we can see that the improved DLR++ model performs better on large-scale datasets with many categories. It offers an accuracy of 93.0% while LR offers an accuracy of 47.0% on 10-classification problems.

Table 5: Performance of DLR++ on different scales of datasets

Table 6: Performance of DLR++ on different numbers of categories

    6 Conclusion

In this paper, we propose an improved logistic regression model based on kernel density estimation, which can be applied to solve nonlinear multi-classification problems. We have compared and tested several common logistic regression algorithms. From the experimental results, we found that the sampling methods [Gao, Ding and Han (2008); Chawla, Bowyer, Hall et al. (2002)] can improve classification accuracy, but the training samples obtained differ greatly from the original samples, which destroys the characteristics inherent in the original data. In contrast, our improved model guarantees the integrity of the samples, has obvious advantages in classification accuracy, and shows good generalization ability with an ideal training speed. There is still room for optimization in training, especially in the matrix operation stage. In the future, we will reduce the size of the matrices and use block calculation, which is expected to decrease training time and improve efficiency. Combining applications in document retrieval [Xiong and Wang (2018); Xiong, Shen, Wang et al. (2018)], we also expect to test whether the improved method in this paper is effective for document classification, which interests us.

Acknowledgement: The authors would like to thank all anonymous reviewers for their suggestions and feedback. This work was supported by the National Natural Science Foundation of China (Grant No. 61379103).
