
Breakdown point of penalized logistic regression


    DONG Yulin, GUO Xiao

    (International Institute of Finance, School of Management, University of Science and Technology of China, Hefei 230026, China)

Abstract  The breakdown point is regarded as an important measure of robustness in regression analysis. At the same time, sparse model estimation is a hot topic in data analysis. Since the breakdown point of robust estimates has received little attention in nonlinear models, we study it in binary response models. We prove that the penalized estimator of the logistic model always stays bounded, which means that its finite explosive breakdown point is 1. Moreover, we give an upper bound of the implosive breakdown point of the slope parameter. Both a simulation study and a real data application verify these points, using the approximation method and the coordinate descent algorithm.

    Keywords breakdown point; logistic regression; maximum likelihood estimator; penalization; robust estimation

The breakdown point is one of the basic tools for measuring the robustness of statistical estimators. It was first defined by Hampel[1], depending on the specific distribution of the observed data set. A distribution-free finite-sample definition of the breakdown point was proposed in Ref.[2]. Intuitively, the breakdown point is the maximum proportion of outliers that a given sample may contain without spoiling the estimator completely. Obviously, a higher breakdown point indicates a more robust estimator. This concept has been widely applied to location, scale, and regression models to describe resistance to outliers.

The study of the breakdown point has made great progress over the past 40 years. In linear regression models, it is well known that the breakdown point of the least squares method is 0, which means that a single outlier may have an arbitrarily large effect on the estimator. Robust estimators with a high breakdown point of 0.5 have been developed, e.g. the MM-estimators[3], the τ-estimators[4], and the least median of squares (LMS) and least trimmed squares (LTS) estimators[5]. The breakdown properties of some nonlinear regression estimators were also investigated in, e.g., Ref.[6]. For the logistic regression model, the breakdown behavior of the maximum likelihood estimator (MLE) is totally different from that in the linear model. Cox and Snell[7] argued that the commonly used MLE in logistic regression models is not robust. Christmann[8] showed that high breakdown point estimators do not exist in logistic regression models unless certain conditions hold, such as p ≤ n. A similar statement was made in Ref.[9] for the case where some observations of the raw data set are replaced. Neykov and Muller[10] considered the finite-sample breakdown point of trimmed likelihood estimators in generalized linear models. Khan et al.[11] adopted the least trimmed absolute (LTA) estimator for logistic model fitting; LTA is a robust estimator with a high breakdown point and outperforms the M-estimator under a certain degree of contamination. Croux et al.[12] proved that the classical MLE in logistic regression models always stays bounded when outliers are added to the raw data set. However, the aforementioned methods require low dimensionality, that is, n > p.

Due to the rapid development of science and technology and the various ways of data collection, high-dimensional data are becoming more and more common and have attracted great attention. Regularization is a powerful tool for high-dimensional problems, as it can force the model to be sparse, low-rank, smooth, and so on. Thus, many penalized estimators have been established to select useful variables in high-dimensional linear modeling. Breiman[13] proposed the nonnegative garrote for subset regression. Tibshirani introduced the lasso in Ref.[14], and the LAD-lasso was discussed in Ref.[15]. Xie and Huang[16] applied the smoothly clipped absolute deviation (SCAD) penalty to achieve sparsity in high-dimensional linear models. Therefore, the breakdown point of penalized regression is becoming more important. Alfons et al.[17] focused on some sparse estimators in linear models: the breakdown point of the sparse LTS estimator was shown to be (n − h + 1)/n, where h is the initial guess of the number of non-contaminated observations, while both the lasso and the LAD-lasso have a breakdown point of 1/n. Various penalized logistic regression estimators have also been proposed as alternatives to the non-robust MLE in logistic models, but the breakdown point of the penalized logistic regression model under high dimensionality has not been discussed yet. It can neither be derived from the results for the penalized linear model nor from a generalization of the results for unpenalized logistic regression models.

We aim to compute the breakdown point of the penalized logistic regression estimator when there are outliers in the observations. Traditional methods are often limited to n > p, but collected data sets are often of sophisticated large scale, so we emphasize not only the case n > p but also the case n < p.

The rest of this article is organized as follows. In Section 1, the replacement and addition explosive breakdown points of the penalized logistic regression estimator are obtained, and we also give the replacement implosive breakdown point of the penalized MLE. Section 2 discusses a fast convergent iterative algorithm, Section 3 presents the simulation studies, and Section 4 concludes.

We introduce some necessary notation used in this paper. If β = (β_1,…,β_p)^T ∈ ℝ^p is a p-dimensional vector, then the L_1 norm of β is defined as ‖β‖_1 = Σ_{j=1}^p |β_j| and the L_2 norm of β is defined as ‖β‖_2 = (Σ_{j=1}^p β_j^2)^{1/2}.

    1 Main results

We start with a brief introduction to the logistic regression model. Denote by X = (X_ij)_{1≤i≤p, 1≤j≤n} = (X_1,…,X_n) the matrix of predictor variables, where n is the number of observations and p the number of variables. Let Y = (Y_1,…,Y_n)^T be the vector of responses, where Y_i is either 0 or 1, i = 1,…,n.

The logistic regression model is given by

μ_i = P(Y_i = 1 | X_i) = exp(θ_i)/(1 + exp(θ_i)), θ_i = α + X_i^T β, i = 1,…,n. (1)

M-estimation of the parameters is often obtained by minimizing a certain loss function, but the quadratic loss is not suitable for classification problems. Let γ = (γ_1,…,γ_{p+1})^T = (α; β), and let Z = (Z_1,…,Z_n) be the observed sample, where Z_i = (X_i; Y_i), i = 1,…,n. Specifically, the candidate losses are:

Quadratic loss: l(γ, Z_i) = (Y_i − θ_i)^2/2;

Deviance loss: l(γ, Z_i) = −2{Y_i log(μ_i) + (1 − Y_i) log(1 − μ_i)};

Exponential loss: l(γ, Z_i) = exp{−(Y_i − 0.5)θ_i}.

For the deviance loss, minimizing the total loss over the sample amounts to minimizing

−2 Σ_{i=1}^n {Y_i log(μ_i) + (1 − Y_i) log(1 − μ_i)}. (2)
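For concreteness, the three losses can be evaluated numerically as in the following sketch; this is our illustration (the function names are ours), with θ_i supplied directly and μ_i computed as in model (1).

```python
import numpy as np

def quadratic_loss(y, theta):
    # quadratic loss: (Y_i - theta_i)^2 / 2
    return (y - theta) ** 2 / 2

def deviance_loss(y, theta):
    # deviance loss: -2{Y_i log(mu_i) + (1 - Y_i) log(1 - mu_i)},
    # with mu_i = exp(theta_i) / (1 + exp(theta_i)) from model (1)
    mu = np.clip(1.0 / (1.0 + np.exp(-theta)), 1e-12, 1 - 1e-12)  # guard against log(0)
    return -2 * (y * np.log(mu) + (1 - y) * np.log(1 - mu))

def exponential_loss(y, theta):
    # exponential loss: exp{-(Y_i - 0.5) theta_i}
    return np.exp(-(y - 0.5) * theta)
```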

Here, the regression estimator obtained by minimizing (2) is the MLE. For the binary logistic regression model, there are three possible patterns of data points: complete separation, quasi-complete separation, and overlap. Albert and Anderson[18] verified that the MLE of the logistic regression model does not exist when the data are completely or quasi-completely separated, while in the overlap situation the MLE exists and is unique. A similar result was discussed in Ref.[19]. Thus, we concentrate on the overlap situation here. On the other hand, regularization methods generally consist of a loss function and a penalty term; they are commonly used to obtain well-behaved solutions to over-parameterized estimation problems. Fan and Li[20] mentioned that a good penalty function can result in an estimator with unbiasedness, sparsity, and continuity, and asserted that bounded penalty functions can reduce bias and yield continuous solutions. Therefore, we add a penalty term P_λ(β) to the loss function. The regression estimator is then defined as

γ̂ = argmin_γ Q(γ, Z) = argmin_γ {Σ_{i=1}^n l(γ, Z_i) + Σ_{j=1}^p P_λ(|β_j|)}, (3)

where Q(γ, Z) is the objective function. Since the intercept term has no corresponding feature and mainly captures the overall shift of the data, it is not penalized. We consider three penalty functions.

L_1 penalty function,

P_λ(|β_j|) = λ|β_j|. (4)

L_2 penalty function,

P_λ(|β_j|) = λβ_j^2. (5)

SCAD penalty function,

P_λ(|β_j|) = λ|β_j|, if |β_j| ≤ λ;
P_λ(|β_j|) = −(β_j^2 − 2aλ|β_j| + λ^2)/(2(a − 1)), if λ < |β_j| ≤ aλ;
P_λ(|β_j|) = (a + 1)λ^2/2, if |β_j| > aλ, (6)

where a > 2 and λ > 0 are tuning parameters.
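The three penalties (4)-(6) can be coded directly. The sketch below is our illustration; the default a = 3.7 is shown only as the value commonly recommended for SCAD by Fan and Li, not a choice made in this paper.

```python
import numpy as np

def l1_penalty(beta, lam):
    # (4): P_lambda(|beta_j|) = lambda * |beta_j|
    return lam * np.abs(beta)

def l2_penalty(beta, lam):
    # (5): P_lambda(|beta_j|) = lambda * beta_j^2
    return lam * np.asarray(beta, dtype=float) ** 2

def scad_penalty(beta, lam, a=3.7):
    # (6): SCAD penalty, piecewise in t = |beta_j|, with a > 2 and lambda > 0
    t = np.abs(np.asarray(beta, dtype=float))
    return np.where(
        t <= lam,
        lam * t,                                                     # t <= lambda
        np.where(
            t <= a * lam,
            -(t ** 2 - 2 * a * lam * t + lam ** 2) / (2 * (a - 1)),  # lambda < t <= a*lambda
            (a + 1) * lam ** 2 / 2,                                  # t > a*lambda
        ),
    )
```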

To study the robustness of an estimator, Donoho and Huber[2] introduced two finite-sample versions of the breakdown point: the replacement breakdown point and the addition breakdown point. Generally speaking, the value is the smallest proportion of contamination that can drive the estimator's Euclidean norm to infinity or to zero, at which point the estimator becomes completely unreliable. Rousseeuw and Leroy[21] held that the breakdown point cannot exceed 50%: intuitively, if more than half of the data are polluted, we cannot distinguish the original distribution from the contaminating distribution.

    1.1 Explosive breakdown point

For the penalized MLE in logistic models, we have the following theorem.

Proof  The L_1 norm of a vector β is denoted ‖β‖_1, and the Euclidean norm ‖β‖_2. Since the two norms are topologically equivalent, there exists a constant c ≥ 1 such that ‖β‖_1 ≥ c‖β‖_2. We replace m of the observations by m outliers; then

Considering the case of γ_i = 0, i = 1,…,p+1, we have μ_i = 0.5, i = 1,…,n. The objective function becomes

We can see that the constant M only depends on p, so the bound holds both in the case n > p and in the case n < p.

As mentioned above, there are two types of finite-sample breakdown point. Zuo[22] gave the quantitative relationship between the replacement and addition breakdown points for a large class of estimators whose breakdown points are independent of the configuration of X. One might think that the influence of adding outliers differs from that of replacing some of the raw observations with outliers. From the proof of Theorem 1.1, however, it can be seen that it does not matter whether the observations are replaced or added. If we add m outliers to the raw data set, we can conclude the following theorem.

    The proof is similar to that of Theorem 1.1.

Theorem 1.1 and Theorem 1.2 show that the penalized MLE is very robust in binary logistic regression models; moreover, the theorems also apply to multi-class models. For binary data, we verified that the explosive breakdown point of the penalized MLE in logistic regression models attains the maximal value 0.5, which is a good result.

    1.2 Implosive breakdown point

    (7)

The following theorem shows that ‖β‖_2 tends to zero when 2p observations of the raw sample are replaced by outliers.

Theorem 1.3  In model (1), considering (3), the penalized MLE satisfies

Case one: β_{j_m} > 0,

≤ −(M + N)‖β‖_2 + M‖β‖_2 = −N‖β‖_2 < −Nδ = −τ.

Thus

Case two: β_{j_m} < 0,

> (M + N)‖β‖_2 − M‖β‖_2 = N‖β‖_2 > Nδ = τ.

Thus

Theorem 1.3 implies that the number of contaminated observations needed cannot exceed twice the number of independent variables, which shows the non-robustness of the estimator in this sense. The standard error of a regression estimator does not explode when the Euclidean norm of the estimator tends to zero; therefore, this kind of breakdown is more difficult to detect. Croux et al.[12] proved that the MLE of the logistic regression model breaks down to zero when several outliers are added to the data set. Here, we discuss the penalized MLE in the replacement situation, and our result is comparable.

    2 Algorithm

In this section, we concentrate on the computation of (3). Approaches applicable to linear models may be computationally infeasible for nonlinear regression analysis. In our model, the objective function is nonlinear and the normal equation is transcendental, so we can only solve the problem by numerical methods instead of algebraic ones.

There are some fast algorithms for numerical computation. Efron et al.[23] introduced least angle regression. Balakrishnan and Madigan[24] presented an online algorithm and the MP algorithm for learning L_1 logistic regression models; Wu and Lange[25] used the coordinate descent (CD) algorithm to solve lasso regressions. Friedman et al.[26] derived the generalized CD algorithm for convex optimization problems. CD was also discussed in Ref.[27], where it was shown to have computational superiority. The block CD algorithm was proved to have a linear convergence rate in Ref.[28], and Saha and Tewari[29] proved that the CD algorithm has a convergence rate of O(1/p). Since this algorithm has many excellent properties, we use it in this paper.

An approximate solution can be calculated by a convergent iterative algorithm. We transform the deviance loss function into an approximate squared loss through a Taylor expansion, so that the regression coefficients can be computed more conveniently. Note that for multi-class regression problems the Taylor expansion of the objective function becomes more complex; in such situations, one should consider other numerical methods, such as Newton's method or gradient ascent, among others.

If the loss function is the deviance loss, we have l(γ, Z_i) = −2{y_i log μ_i + (1 − y_i) log(1 − μ_i)}. For the logistic and probit models the distribution functions are known, so it is easy to obtain the first- and second-order derivatives.

For the logit model, with μ_i = e^{θ_i}/(1 + e^{θ_i}), we have

l′_{θ_i} = −2(y_i − μ_i), l″_{θ_i} = 2μ_i(1 − μ_i).

And for the probit model, with μ_i = Φ(θ_i), where Φ and φ denote the standard normal distribution and density functions, we have

l′_{θ_i} = −2{y_i/μ_i − (1 − y_i)/(1 − μ_i)}φ(θ_i).

Otherwise, if the loss function is the exponential loss l(γ, Z_i) = exp{−(y_i − 0.5)θ_i}, it does not involve μ_i, so no matter how μ_i is constructed we have

l′_{θ_i} = −(y_i − 0.5)e^{−(y_i − 0.5)θ_i}, l″_{θ_i} = (y_i − 0.5)^2 e^{−(y_i − 0.5)θ_i}.
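With the deviance-loss derivatives above for the logit link, the approximation step can be made explicit. The following working-response form is the standard IRLS-style derivation and is our reconstruction, not a formula quoted verbatim from the paper. A second-order Taylor expansion of the deviance loss around a current value θ̃_i gives

l(θ_i) ≈ l(θ̃_i) + l′(θ̃_i)(θ_i − θ̃_i) + (1/2) l″(θ̃_i)(θ_i − θ̃_i)^2,

and substituting l′(θ̃_i) = −2(y_i − μ̃_i) and l″(θ̃_i) = 2μ̃_i(1 − μ̃_i), with μ̃_i evaluated at θ̃_i, this equals, up to an additive constant,

μ̃_i(1 − μ̃_i)(z_i − θ_i)^2, where z_i = θ̃_i + (y_i − μ̃_i)/(μ̃_i(1 − μ̃_i)),

so each iteration reduces the penalized objective to a weighted penalized least-squares problem in γ.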

To find the extreme point, we set the first-order derivatives of the penalized objective function with respect to all parameters to zero. The parameters are then obtained by solving the resulting simultaneous equations.

The first part of the objective function has thus been turned into squared terms by the Taylor expansion, and its derivatives of each order are easy to find. Next consider the penalty term. Note that when the first-order derivative of the penalty cannot be taken directly, as with the L_1 penalty, we can apply the soft-thresholding method first introduced by Donoho and Johnstone[30]. Then we apply the CD algorithm: at the kth iteration,

given the initial value γ^0, every β_j, j = 1,…,p, has the following explicit solution.

For L_1 penalization,

    (8)

For L_2 penalization,

    (9)

    (10)

In practical applications, the extreme point found in this way is the desired minimizer. Owing to the fast convergence of the CD method, we can easily obtain a convergent regression estimator.
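Putting the pieces together, the following minimal sketch implements the quadratic approximation plus coordinate-wise soft thresholding for an L_1-penalized logistic fit. It is our illustration under the working-response form above: the names fit_l1_logistic and soft_threshold are ours, the objective is (1/2)Σ w_i(z_i − θ_i)^2 + λΣ|β_j|, and the exact scaling in the paper's updates (8)-(10) may differ.

```python
import numpy as np

def soft_threshold(z, lam):
    # soft-thresholding operator S(z, lambda) = sign(z) * max(|z| - lambda, 0)
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def fit_l1_logistic(X, y, lam, n_iter=1000, tol=1e-6):
    # Coordinate descent for L1-penalized logistic regression.
    # X: n x p predictor matrix; y: binary responses in {0, 1}.
    p = X.shape[1]
    alpha, beta = 0.0, np.zeros(p)
    for _ in range(n_iter):
        beta_old = beta.copy()
        theta = alpha + X @ beta
        mu = 1.0 / (1.0 + np.exp(-theta))
        w = np.clip(mu * (1.0 - mu), 1e-5, None)   # quadratic-approximation weights
        z = theta + (y - mu) / w                   # working response
        # intercept: unpenalized weighted-mean update
        alpha = np.sum(w * (z - X @ beta)) / np.sum(w)
        for j in range(p):
            # partial residual excluding coordinate j
            r_j = z - alpha - X @ beta + X[:, j] * beta[j]
            beta[j] = soft_threshold(np.sum(w * X[:, j] * r_j), lam) / np.sum(w * X[:, j] ** 2)
        # distance stopping criterion, in the spirit of (11) below
        if np.linalg.norm(beta - beta_old) < tol:
            break
    return alpha, beta
```

For example, fit_l1_logistic(X, y, lam=0.1) returns the intercept and a sparse slope vector; in practice λ would be chosen by cross-validation, as in Fig.1 below.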

    3 Simulation

In this section, simulation studies on artificial data sets with different combinations of n and p are presented. We construct two data configurations, covering the case of n > p and the case of n < p.

The first data configuration has n = 200 and p = 5, n/2, or n − 20, corresponding to the low-dimensional setting. Assume that X obeys the multivariate normal distribution, X ~ N(0, Σ), where the covariance matrix Σ = (Σ_ij)_{1≤i,j≤p} is given by Σ_ij = 0.5^{|i−j|}, which induces multicollinearity among the predictor variables. Using the coefficient vector γ with α = 0.2, β_1 = 0.5, β_2 = −0.6, β_3 = 1, β_4 = −0.8, β_5 = 0.1, and β_j = 0 for j ∈ {1,2,…,p}\{1,2,3,4,5}, the response variable Y is generated according to model (1).

The second configuration represents a moderately high-dimensional case: n = 100 observations are generated and p is n + 10, 2n, or 4n. The independent variables follow the distribution N(0, Σ) with Σ_ij = 0.1^{|i−j|}. For the regression parameters, we assume that α = 0.2, β_1 = −0.7, β_2 = 1, β_3 = −0.3, β_4 = 0.5, β_5 = −0.6, while the other slope coefficients β_j = 0. Y is again constructed according to model (1).

Denote by k the contamination percentage, k = 0, 25%, or 50%. For each data set, we consider different kinds of pollution:

    1) No contamination;

2) Vertical outliers: k of the dependent variables in model (1) are flipped, i.e., some Y_i become 1 − Y_i;

3) Leverage points: same as in 2), but for the k contaminated observations we draw the predictor variables from N(μ, Σ) instead of N(0, Σ), where μ = (20,…,20)^T.
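For reference, the data configurations and contamination schemes above can be generated as follows. This is a sketch of the first configuration (Σ_ij = 0.5^{|i−j|} and the stated coefficients); the function names make_data and contaminate are ours.

```python
import numpy as np

def make_data(n, p, rho, rng):
    # Sigma_ij = rho^{|i-j|}: AR(1)-type covariance inducing multicollinearity
    idx = np.arange(p)
    Sigma = rho ** np.abs(idx[:, None] - idx[None, :])
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    alpha = 0.2
    beta = np.zeros(p)
    beta[:5] = [0.5, -0.6, 1.0, -0.8, 0.1]   # first configuration; remaining slopes stay 0
    mu = 1.0 / (1.0 + np.exp(-(alpha + X @ beta)))
    y = rng.binomial(1, mu)                   # responses drawn from model (1)
    return X, y

def contaminate(X, y, k, rng, leverage=False):
    # vertical outliers: flip a fraction k of the responses;
    # leverage points: also shift those rows so X_i ~ N(mu, Sigma) with mu = (20,...,20)
    m = int(np.floor(k * len(y)))
    idx = rng.choice(len(y), size=m, replace=False)
    X, y = X.copy(), y.copy()
    y[idx] = 1 - y[idx]
    if leverage:
        X[idx] += 20.0
    return X, y

rng = np.random.default_rng(0)
X, y = make_data(n=200, p=100, rho=0.5, rng=rng)            # n > p case with p = n/2
Xc, yc = contaminate(X, y, k=0.25, rng=rng, leverage=True)  # 25% leverage points
```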

Several termination rules are widely adopted in iterative algorithms, such as the function-descent criterion, the distance criterion, and the gradient criterion. In the present article, we use the distance criterion. For each cycle of the coordinate descent algorithm, the specific form of the stopping criterion is

‖γ^{(k+1)} − γ^{(k)}‖_2 < ε, for a prescribed tolerance ε. (11)

The maximum number of iterations is 1000, and each regression coefficient should be of practical magnitude, e.g. |γ_j| < 10^3 for j = 1,…,p+1.

Given a parametric statistical model (sample space and family of sample distributions), a decision space, and a loss function, it is natural to specify a decision for each point in the sample space. We assume that the decision function is

Ŷ_i = 1 if P(Y_i = 1 | X_i) > 0.5. (12)

Here, 0.5 is chosen as the threshold since this is common practice. In applications, different thresholds may be selected for specific situations: if higher precision on the positive class is required, a larger threshold can be chosen; if higher recall on the positive class is required, a smaller threshold should be selected.
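Rule (12) with an adjustable cutoff is a one-liner; the threshold argument below is our addition, reflecting the precision/recall trade-off just described.

```python
import numpy as np

def predict(alpha, beta, X, threshold=0.5):
    # decision rule (12): predict Y_i = 1 when P(Y_i = 1 | X_i) exceeds the threshold
    prob = 1.0 / (1.0 + np.exp(-(alpha + X @ beta)))
    return (prob > threshold).astype(int)
```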

We polluted the raw data to different degrees and with different types of outliers. Partial simulation results follow; all simulations were performed in Matlab 2016b.

Fig.1 shows one of the cross-validation diagrams.

Fig.1 Cross-validation result with n = 200 and p = 100

Table 1 and Table 2 present the results of the logistic regression model for n > p and n < p, respectively.

Table 1 Logistic regression results with vertical outliers

    Table 2 Logistic regression results with different numbers of leverage points

Then, we change the link function to the probit link. Table 3 shows the performance of probit regression, averaged over 500 simulation runs.

    Table 3 Probit regression results with different numbers of outliers

Similarly, the MSE in Table 3 remains bounded, which means the regression estimator stays bounded. Besides the similarities between the logistic and probit regression models, previous scholars have found that the logistic regression coefficients are about 1.8 times the probit regression coefficients on the same data. Although the coefficients differ, the classification results are essentially similar, and the interpretation of the coefficients, the evaluation of the model, and the hypothesis tests are all analogous.

Furthermore, we find that the choice of loss function has an almost negligible effect on the prediction results. This may explain why the impact of different loss functions is rarely discussed for classification models.

    4 Discussion

This paper shows that the penalized MLE has a high explosive breakdown point in binary logistic regression models. More precisely, we still obtain bounded regression coefficients even when the contamination rate of the observations reaches 50%. This property is confirmed by our simulation studies: for both the logistic and the probit model, the regression estimators are robust on contaminated data sets. We also give an upper bound on the implosive breakdown point of the slope parameter.

Theorem 1.3 gives the upper bound of the implosive breakdown point of the slope parameter. However, for a robust estimator, the larger the breakdown point, the better. It is natural to ask whether the implosive breakdown point has a larger lower bound when the penalty term or the loss function is changed. For example, the sparse LTS method is quite robust in linear models; can we borrow the idea of trimming? As we know, the SCAD penalty is a famous penalty term satisfying the mathematical conditions for sparsity, so using it may lead to unexpected gains. We will pay more attention to these questions in future research.

When there is only slight multicollinearity among the explanatory variables, the MLE is consistent and asymptotically normal as n → +∞ under some weak assumptions; Fahrmeir and Kaufmann[31] studied these properties. As the sample size increases, the standard error of the parameter estimates gradually decreases. After adding a penalty term, these properties may change. As for hypothesis testing, more studies are needed, not only on the breakdown point but also on the standard error.
