
    Weighted Sparse Image Classification Based on Low Rank Representation

Computers, Materials & Continua, 2018, No. 7

    Qidi Wu, Yibing Li, Yun Lin, and Ruolin Zhou

Abstract: Conventional sparse representation-based image classification usually codes the samples independently, ignoring the correlation information that exists in the data. Hence, if we can exploit the correlation information hidden in the data, the classification result can be improved significantly. To this end, this paper proposes a novel weighted supervised sparse coding method to address the image classification problem. The proposed method first fully explores the structural information hidden in the data based on low-rank representation. It then introduces the extracted structural information into a novel weighted sparse representation model to code the samples in a supervised way. Experimental results show that the proposed method is superior to many conventional image classification methods.

Keywords: Image classification, sparse representation, low-rank representation, numerical optimization.

    1 Introduction

Image classification is a fundamental problem in computer vision that aims to assign an image to the correct category. A large body of work on image classification has emerged over the past two decades [Liu, Fu and He (2017)], and several classification frameworks have been developed to address vision tasks. Among these frameworks, sparsity-based classification has attracted much attention and has been widely studied in recent years. Sparse signal representation has proven to be an extremely powerful tool for acquiring, representing, and compressing high-dimensional signals [Wright, Ma, Mairal et al. (2010)]. In the past few years, sparse representation has been applied to many vision tasks, including face recognition [Yang, Zhang, Yang et al. (2011); Liu, Tran and Sang (2016)], image super-resolution [Yang, Chu and Wang (2010); Yang, Wright, Huang et al. (2010)], denoising and inpainting [Fadili, Starck and Murtagh (2013); Guleryuz (2006)], and image classification [Kulkarni and Li (2011); Thiagarajan and Spanias (2012)]. Owing to this wide applicability, sparse representation-based image classification has been studied in detail, with remarkable results [Tang, Huang and Xue (2016); Gao, Tsang and Chia (2010); Liu, Fu and He (2017)]. Depending on the image type, it is mainly applied in specific fields such as face recognition, hyperspectral image classification, and remote sensing image classification.

In face recognition, based on a sparse representation computed by l1-minimization, Wright et al. [Wright, Ma, Mairal et al. (2010)] proposed a general classification algorithm for (image-based) object recognition [Wright, Ganesh, Zhou et al. (2009)]. This framework provided new insights into two crucial issues in face recognition: feature extraction and robustness to occlusion. The theory of sparse representation helped predict how much occlusion the recognition algorithm could handle and how to choose training images to maximize robustness to occlusion. Recent research has shown the speed advantage of the extreme learning machine (ELM) and the accuracy advantage of sparse representation classification (SRC) in image classification. To unify this mutual complementarity and further enhance classification performance, Cao et al. [Cao, Zhang, Luo et al. (2016)] proposed an efficient hybrid classifier that exploits the advantages of both ELM and SRC. While the importance of sparsity is much emphasized in SRC and many related works, the role of collaborative representation (CR) in SRC is ignored by most of the literature. Zhang et al. [Zhang, Yang and Feng (2011)] analyzed the working mechanism of SRC and indicated that it is CR, rather than l1-norm sparsity, that makes SRC powerful for face classification; they proposed a very simple yet much more efficient face classification scheme, namely CR-based classification with regularized least squares (CRC_RLS), which achieved very competitive classification results. Many classic and contemporary face recognition algorithms work well on public datasets but degrade sharply when used in a real recognition system, mostly because of the difficulty of simultaneously handling variations in illumination, image misalignment, and occlusion in the test image. Considering this problem, Wagner et al. [Wagner, Wright, Ganesh et al. (2012)] proposed a conceptually simple face recognition system that achieves a high degree of robustness and stability to illumination variation, image misalignment, and partial occlusion. In addition, Zhang et al. [Zhang, Zhang, Huang et al. (2013)] proposed a pose-robust face recognition method to handle face recognition in the presence of large pose differences between gallery and probe faces; the method exploits the sparsity of the representation coefficients of a face image over its corresponding view-dictionary.

In hyperspectral image classification, Tang et al. [Tang, Chen, Liu et al. (2015)] presented a classification method based on sparse representation and superpixel segmentation; by refining the spectral classification results with spatial constraints, classification accuracy is improved by a substantial margin. Noting that structural information can earn extra discrimination for the classification framework [Liu, Fu and He (2017)], several methods that consider structural and contextual information have been presented. A new sparsity-based algorithm for the classification of hyperspectral imagery was proposed in Chen et al. [Chen, Nasrabadi and Tran (2011)], relying on the observation that a hyperspectral pixel can be sparsely represented by a linear combination of a few training samples from a structured dictionary; a Laplacian constraint and a joint sparsity model are used to incorporate contextual information into the sparse recovery optimization problem and improve classification performance. To overcome the problem that sparse representation-based classification (SRC) methods ignore the sparse representation residuals (i.e. outliers), Li et al. [Li, Ma, Mei et al. (2017)] proposed a robust SRC (RSRC) method that can handle outliers, and extended it to the joint robust sparsity model named JRSRC, in which pixels in a small neighborhood around the test pixel are simultaneously represented by linear combinations of a few training samples and outliers. A superpixel tensor sparse coding (STSC) based hyperspectral image classification (HIC) method was proposed in Feng et al. [Feng, Wang, Yang et al. (2017)], which explores the high-order structure of the hyperspectral image and utilizes information along all dimensions to better understand the data. For sparse-representation-based hyperspectral image classification, Sun et al. [Sun, Qu, Nasrabadi et al. (2014)] proposed a new structured prior called the low-rank (LR) group prior, which can be considered a modification of the LR prior. Zhang et al. [Zhang, Song, Gao et al. (2016)] presented a new spectral-spatial feature learning method for hyperspectral image classification that integrates spectral and spatial information into group sparse coding (GSC) via clusters, and Zhang et al. [Zhang, Zhang, Liu et al. (2015)] proposed a novel kernelized classification framework based on sparse representation that addresses the image classification problem through the similarities between images. Zeng et al. [Zeng, Li, Liang et al. (2010)] proposed a fast joint sparse representation classification method with multi-feature combination learning for hyperspectral imagery, incorporating contextual neighborhood information of the image into each kind of feature to further improve classification performance. Wang et al. [Wang, Xie, Li et al. (2015)] investigated sparse tensor discriminant analysis (STDA) for feature extraction; this was the first time STDA was applied to hyperspectral imagery (HSI), and it attempts to preserve useful structural information in the original data and obtain multiple interrelated sparse discriminant subspaces.

In remote sensing image classification, Shivakumar et al. [Shivakumar, Natarajan and Murthy (2015)] presented a novel multi-kernel based sparse representation for the classification of remotely sensed images; the sparse representation-based feature extraction is signal-dependent and thus more accurate. Song et al. [Song, Li, Dalla et al. (2014)] proposed to exploit sparse representations of morphological attribute profiles for remotely sensed image classification, using the sparse representation classification framework to exploit this characteristic of the EMAPs.

With more and more sources of image data on the Internet, image collections are becoming increasingly cluttered, and more and more images lack complete category information. Motivated by this, researchers have proposed semi-supervised and unsupervised image classification methods. To address face recognition when there are only a few, or even a single, labeled examples of the face to be recognized, a semi-supervised approach has been proposed whose main idea is as follows: a variation dictionary is used to characterize the linear nuisance variables via the sparsity framework, and prototype face images are estimated as a gallery dictionary via a Gaussian mixture model, with mixed labeled and unlabeled samples, to deal with the non-linear nuisance variations between labeled and unlabeled samples. Unsupervised learning methods for building feature extractors have a long and successful history in pattern recognition and computer vision. Huang et al. [Huang, Boureau and Le (2007)] proposed an unsupervised method to learn a sparse feature detector hierarchy that is invariant to small shifts and distortions. Zeiler et al. [Zeiler, Krishnan, Taylor et al. (2010)] presented a learning framework in which features that capture mid-level cues emerge spontaneously from image data; the approach is based on the convolutional decomposition of images under a sparsity constraint and is totally unsupervised. Poultney et al. [Poultney, Chopra and Cun (2007)] described a novel unsupervised method for learning sparse, overcomplete features; the model uses a linear encoder and a linear decoder preceded by a sparsifying non-linearity that turns a code vector into a quasi-binary sparse code vector. The major contribution of Hassairi et al. [Hassairi, Ejbali and Zaied (2016)] is to show how to extract features and train an image classification system on large-scale datasets.

In this paper, we propose a novel sparse representation-based image classification method with weighted supervised sparse coding. It first constructs the supervision coefficients with low-rank representation, which helps mine the structural information hidden in the data. The supervision coefficients are then incorporated into a weighted sparse representation framework to preserve the coding structure of similar samples. Experiments verify that the proposed method shows an advantage on public test datasets.

The remainder of the paper is organized as follows. Section 2 analyzes several related previous works. Section 3 describes the proposed model in detail. Section 4 presents the procedure for solving the model. Section 5 reports experimental results that demonstrate the superiority of the proposed method. Section 6 concludes the paper.

    2 Related work

Although the traditional spatial pyramid matching (SPM) approach works well for image classification, it has been found empirically that, to achieve good performance, traditional SPM has to use classifiers with nonlinear Mercer kernels. The nonlinear classifier incurs additional computational cost, O(n^3) in training and O(n) in testing for the SVM, where n is the number of support vectors. This implies poor scalability of the SPM approach for real applications.

To improve scalability, researchers have aimed at obtaining nonlinear feature representations that work better with linear classifiers.

Let X be a set of D-dimensional local descriptors extracted from an image, i.e. X = [x_1, x_2, ..., x_N] ∈ R^(D×N). Given a codebook with M entries, B = [b_1, b_2, ..., b_M] ∈ R^(D×M), different coding schemes convert each descriptor into an M-dimensional code to generate the final image representation. This section reviews three existing coding schemes.

    2.1 Coding descriptors in VQ

    Traditional SPM uses VQ coding which solves the following constrained least square fitting problem:
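The equation itself did not survive extraction; a hedged reconstruction using the standard VQ objective from the SPM/LLC literature, which matches the constraints described next, is:

```latex
\min_{C}\ \sum_{i=1}^{N}\ \| x_i - B c_i \|_2^2
\quad \text{s.t.}\ \ \|c_i\|_0 = 1,\ \ \|c_i\|_1 = 1,\ \ c_i \geq 0,\ \ \forall i
\tag{1}
```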

where C = [c_1, c_2, ..., c_N] is the set of codes for X. The cardinality constraint ||c_i||_0 = 1 means that there is only one non-zero element in each code c_i, corresponding to the quantization id of x_i. The non-negative l1 constraint ||c_i||_1 = 1, c_i ≥ 0, means that the coding weight for x_i is 1. In practice, the single non-zero element is found by searching for the nearest neighbor.

    2.2 Coding descriptors in ScSPM

To ameliorate the quantization loss of VQ, the restrictive cardinality constraint ||c_i||_0 = 1 in Eq. (1) can be relaxed by using a sparsity regularization term. In ScSPM [Yang, Yu, Gong et al. (2009)], this sparsity regularization term is chosen to be the l1 norm of c_i, and coding each local descriptor x_i thus becomes a standard sparse coding (SC) [Lee, Battle, Raina et al. (2007)] problem:
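The referenced Eq. (2) is missing from this extraction; the standard SC formulation used in ScSPM is:

```latex
\min_{C}\ \sum_{i=1}^{N}\ \| x_i - B c_i \|_2^2 \;+\; \lambda\,\| c_i \|_1
\tag{2}
```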

The sparsity regularization term plays several important roles. First, the codebook B is usually over-complete, i.e. M > D, so l1 regularization is necessary to ensure that the under-determined system has a unique solution. Second, the sparsity prior allows the learned representation to capture salient patterns of local descriptors. Third, sparse coding achieves much lower quantization error than VQ. Accordingly, even with a linear SVM classifier, ScSPM outperforms the nonlinear SPM approach by a large margin on benchmarks such as Caltech-101 [Yang, Yu, Gong et al. (2009)].

    2.3 Coding descriptors in LLC

Locality-constrained Linear Coding (LLC) was presented by Wang et al. [Wang, Yang, Yu et al. (2010)]. LLC incorporates a locality constraint instead of the sparsity constraint in Eq. (2), which leads to several favorable properties, including better reconstruction, local smooth sparsity, and an analytical solution. Specifically, the LLC code uses the following criteria [Yuan, Ma and Yuille (2017)]:
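The criteria themselves are not reproduced here; the standard LLC objective from Wang et al. [Wang, Yang, Yu et al. (2010)] reads:

```latex
\min_{C}\ \sum_{i=1}^{N}\ \| x_i - B c_i \|_2^2 \;+\; \lambda\,\| d_i \odot c_i \|_2^2
\quad \text{s.t.}\ \ \mathbf{1}^{\mathsf T} c_i = 1,\ \ \forall i
```

where ⊙ denotes element-wise multiplication and d_i is a locality adaptor that penalizes codebook entries far from x_i.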

From the above description of LLC coding, we can see that the locality constraint earns extra discrimination for the coding coefficients, because structural information is introduced into the coding by forcing coefficients with weak sample correlation to zero, which helps select more samples within the same subspace. Next, we propose a weighted sparse classification framework to improve the coding structure. Then, to explore the supervision information hidden in the data, a novel supervision constraint on the coding approximation is also proposed as part of the weighted sparse classification framework.

    3 Sparse image classification model based on low-rank supervision

    3.1 Low-rank representation model

Low-rank matrix recovery is a development and extension of compressive sensing theory that uses the rank as a measure of matrix sparseness. The low-rank representation model is well suited to handling high-dimensional signal data: it can be used to recover damaged observation samples and to find a low-dimensional feature space from noisy samples. Representing the image data as a matrix, with original observation data matrix X ∈ R^(m×n), the model that uses the matrix rank for sparse representation is as follows:
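The model referenced here is not reproduced in this version; a hedged sketch using the standard low-rank matrix recovery (robust PCA) formulation, with A the low-rank component and E the sparse error, is:

```latex
\min_{A,\,E}\ \operatorname{rank}(A) + \lambda\,\| E \|_0
\quad \text{s.t.}\ \ X = A + E
```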

If the sample matrix X lies in several low-dimensional subspaces, X can be linearly represented by a set of low-dimensional subspace bases A; that is, the sample set itself is chosen as the dictionary, Z is the representation coefficient matrix, and each column of Z gives the weights of the linear combination for the corresponding data vector. Imposing a low-rank constraint on the coefficient matrix Z yields the low-rank representation of the sample matrix X as follows:
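A sketch of the low-rank representation model just described:

```latex
\min_{Z}\ \operatorname{rank}(Z)
\quad \text{s.t.}\ \ X = A Z
```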

which is a non-convex optimization problem. To solve it, we usually apply a convex relaxation and replace rank(Z) with the matrix nuclear norm, giving the following formula:
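That is, the convex surrogate:

```latex
\min_{Z}\ \| Z \|_{*}
\quad \text{s.t.}\ \ X = A Z
```

where ||Z||_* denotes the nuclear norm, the sum of the singular values of Z.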

    In practical applications, the signal is likely to be polluted by noise, so the above equation can be further expressed as:
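A hedged reconstruction, using the l2,1 norm on the error term as is standard in the LRR literature:

```latex
\min_{Z,\,E}\ \| Z \|_{*} + \lambda\,\| E \|_{2,1}
\quad \text{s.t.}\ \ X = A Z + E
```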

In recent years, researchers have proposed many efficient optimization algorithms to solve the above problems, such as Iterative Thresholding, the Exact Augmented Lagrange Multiplier method, and the Inexact Augmented Lagrange Multiplier method, all of which address the nuclear norm minimization. The Exact and Inexact Augmented Lagrange Multiplier algorithms are described below; before that, we need the Augmented Lagrange Multiplier (ALM) algorithm.

    The objective function for a constrained optimization problem can be described by:
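In standard form, the constrained problem and the augmented Lagrangian that Eq. (11) refers to can be written as:

```latex
\min_{X}\ f(X)\ \ \text{s.t.}\ \ h(X) = 0,
\qquad
L(X, Y, \mu) \;=\; f(X) + \langle Y,\, h(X) \rangle + \frac{\mu}{2}\,\| h(X) \|_F^2
```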

where Y is the Lagrange multiplier, μ is the penalty coefficient (a positive number), and ⟨·,·⟩ is the inner product operator. When the values of Y and μ are appropriate, the objective function can be solved by the augmented Lagrangian algorithm. For the low-rank recovery problem above, Eq. (11) can be expressed as:
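Assuming Eq. (12) specializes the augmented Lagrangian to the low-rank recovery problem above (an assumption consistent with the variables A and E used next), it would read:

```latex
L(A, E, Y, \mu) \;=\; \| A \|_{*} + \lambda\,\| E \|_1
+ \langle Y,\, X - A - E \rangle + \frac{\mu}{2}\,\| X - A - E \|_F^2
\tag{12}
```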

which can be solved by alternating optimization updates. Fix Y and E and minimize L to obtain A; then fix Y and A and minimize L to obtain E. This is done iteratively until a convergent solution is obtained, which is called the Exact Augmented Lagrange Multiplier method. Since A and E are coupled in Eq. (12), each iteration must solve its sub-problem exactly, which makes the solution feasible, but in practice the Exact Augmented Lagrange Multiplier method requires many inner iterations, so the algorithm runs at a lower speed. On this basis, the Inexact Augmented Lagrange Multiplier method was proposed, in which A and E are each updated only once per iteration to obtain an approximate solution of the sub-problem.

    Figure 1: Ideal structure of the coefficient matrix in SRC

The core idea of sparse representation-based classification is that the test sample is expressed as a linear combination of training samples, with most of the non-zero coefficients associated with training samples from the same class as the test sample. In the ideal case, each sample lies in its own category subspace and the category subspaces do not overlap. Specifically, given C categories of images and a dictionary D = [D_1, ..., D_C], where D_c contains the training samples from category c = 1, ..., C, a new sample y belonging to category c can be represented as y = D_c a_c, where a_c is a representation coefficient vector under the sub-dictionary D_c. When the whole dictionary D is used to represent y, most of the significant elements should be located in a_c, so the coefficient vector a should be sparse. Passing from vector to matrix form, with Y = [Y_1, ..., Y_c, ..., Y_C] the entire sample set matrix containing the category samples, the coefficient matrix A will also be sparse, and in the ideal case A is a block-diagonal matrix, as shown in Fig. 1.

    3.2 Correlation matrix construction

Assume the test sample matrix Y = [y_1, y_2, ..., y_n], where y_i denotes the i-th test sample vector. Based on the above analysis, we can use the samples themselves as the dictionary and use the matrix Z to explore the correlation between samples, obtaining the low-rank representation model of the test samples as follows:
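A hedged reconstruction of this model, with the test samples Y serving as their own dictionary:

```latex
\min_{Z,\,E}\ \| Z \|_{*} + \lambda\,\| E \|_{2,1}
\quad \text{s.t.}\ \ Y = Y Z + E
```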

    3.3 Sparse image classification model based on low-rank supervision

To make better use of the correlations between samples and obtain feature representations with better discrimination capability, we use the low-rank representation matrix to impose low-rank supervision constraints on the sparse coding process and propose a new low-rank supervised sparse coding model.

    First, we can obtain a supervisory matrix from a low-rank representation matrix, as shown in the following equation:
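The two equations of this subsection did not survive extraction. A common construction in LRR-based methods, which we assume here, builds the supervising matrix by symmetrizing the optimal low-rank coefficients Z*:

```latex
W \;=\; \tfrac{1}{2}\left( |Z^{*}| + |Z^{*}|^{\mathsf T} \right)
```

A weighted supervised sparse coding model consistent with the terms described next (reconstruction fidelity, weighted l1 sparsity with weight matrix P, and a graph-smoothness supervision term driven by W) would then read, as an assumed reconstruction of model (10):

```latex
\min_{A}\ \| Y - D A \|_F^2 \;+\; \lambda\,\| P \odot A \|_1
\;+\; \eta \sum_{i,j} W_{ij}\,\| a_i - a_j \|_2^2
\tag{10}
```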

where A is the feature representation matrix and λ and η are regularization parameters. In the above model, the supervision constraint term is added to increase the coding difference between different categories, where i and j are column indices of the feature representation matrix A, P is the correlation weighting for the sparse coefficients, ⊙ denotes the element-wise multiplication operator, and W_ij denotes the element in row i and column j of the supervising matrix W. By solving model (10), we obtain a representation matrix A with stronger discrimination ability.

After obtaining the feature representation matrix A, the reconstruction error can be used for classification; the classification model is as follows:
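Eq. (16) is not reproduced here; the standard SRC decision rule, which matches the description below, is:

```latex
\operatorname{label}(y_i) \;=\; \arg\min_{k \in \{1,\dots,C\}}\ \bigl\| y_i - D_k\, a_i^{k} \bigr\|_2
\tag{16}
```

where a_i^k keeps only the entries of the i-th column of A associated with the sub-dictionary D_k.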

where a_i^k denotes the part of the i-th column of A corresponding to the k-th sub-dictionary. The label that minimizes the reconstruction error in Eq. (16), i.e. the minimizing value of k, is taken as the classification result.

    4 Solving the sparse image classification model based on low-rank supervision

This section describes in detail the process of solving the above low-rank supervised coding model. Model (10) can be transformed as follows:

Eq. (17) is a non-convex problem. We can apply an equivalent transformation to obtain the following:

where trace(·) is the trace of a matrix and L is the graph Laplacian, L = D - W. Here D is a diagonal matrix whose diagonal elements are the sums of the corresponding columns of the matrix W, namely D_jj = Σ_i W_ij, where i and j are the row and column indices. The problem is thus transformed into a convex optimization problem.
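For reference, the graph-regularization identity underlying this transformation (a known result for symmetric W, not specific to this paper) is:

```latex
\sum_{i,j} W_{ij}\,\| a_i - a_j \|_2^2 \;=\; 2\,\operatorname{tr}\!\left( A L A^{\mathsf T} \right),
\qquad L = D - W,\ \ D_{jj} = \sum_{i} W_{ij}
```

(here D denotes the diagonal degree matrix, overloading the dictionary symbol as the text does).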

For the optimization, we can derive the gradient formula of G(A) as follows:
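Writing G(A) for the smooth part of the transformed objective, an assumed reconstruction of Eqs. (19)-(20), consistent with the model form assumed above, is:

```latex
G(A) = \| Y - D A \|_F^2 + 2\eta\,\operatorname{tr}\!\left( A L A^{\mathsf T} \right),
\qquad
\nabla G(A) = 2\, D^{\mathsf T} (D A - Y) + 4\eta\, A L
```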

where ∇ denotes the gradient operator. According to Eq. (19) and Eq. (20), the iterative shrinkage algorithm can be used to solve the original model (17); the iterative solution is as follows:
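An assumed form of Eq. (21), the standard iterative-shrinkage update with step size 1/ρ and element-wise soft-thresholding S_τ(x) = sign(x)·max(|x| - τ, 0):

```latex
A^{(t+1)} \;=\; \mathcal{S}_{\lambda P / \rho}\!\left( A^{(t)} - \tfrac{1}{\rho}\,\nabla G\bigl(A^{(t)}\bigr) \right)
\tag{21}
```

Each iteration of this update solves a weighted l1-norm proximal subproblem, which is why Eq. (21) is referred to below as a weighted l1-norm minimization.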

Eq. (21) is a typical weighted l1-norm minimization problem, which can be solved with the numerical method in Guleryuz [Guleryuz (2006)]. We can then obtain the feature representation matrix A and use the classification model in Eq. (16) to classify images.
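As a concrete illustration, here is a minimal NumPy sketch of the solve-then-classify pipeline under the assumptions stated above; the model form, gradient, and update rule are reconstructions, and the weight matrix P is left as a placeholder since the paper's exact weighting is not reproduced here:

```python
import numpy as np

def weighted_ista(Y, D, W, lam=0.1, eta=0.1, P=None, n_iter=200):
    """Sketch of iterative shrinkage for the assumed model
    min_A ||Y - D A||_F^2 + eta * sum_ij W_ij ||a_i - a_j||^2 + lam * ||P (.) A||_1.
    Y: samples (d x n), D: dictionary (d x m), W: supervising matrix (n x n)."""
    m, n = D.shape[1], Y.shape[1]
    if P is None:
        P = np.ones((m, n))           # placeholder; the paper derives P from the LRR
    Lap = np.diag(W.sum(axis=0)) - W  # graph Laplacian L = Deg - W
    # Step size 1/rho, with rho an upper bound on the Lipschitz constant of grad G
    rho = 2 * np.linalg.norm(D.T @ D, 2) + 4 * eta * np.linalg.norm(Lap, 2)
    A = np.zeros((m, n))
    for _ in range(n_iter):
        grad = 2 * D.T @ (D @ A - Y) + 4 * eta * A @ Lap   # assumed Eq. (20)
        Z = A - grad / rho
        A = np.sign(Z) * np.maximum(np.abs(Z) - lam * P / rho, 0.0)  # weighted soft-threshold
    return A

def classify(y, a, D, class_index):
    """SRC-style decision (cf. Eq. (16)): choose the class whose sub-dictionary
    best reconstructs y from the class-restricted part of the code a."""
    class_index = np.asarray(class_index)
    errs = []
    for k in np.unique(class_index):
        mask = class_index == k
        errs.append((np.linalg.norm(y - D[:, mask] @ a[mask]), k))
    return min(errs)[1]
```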

    5 Experimental results and analysis

    5.1 Experimental data

    For the weighted sparse image classification method based on low-rank representation proposed in this paper, we take the ORL and COIL-20 image datasets as the experimental datasets.

    Figure 2: Two groups of face samples of ORL dataset

The ORL face dataset contains 400 face images of 40 people, taken under different lighting conditions, viewpoints, and expressions; each person has 10 face images. Fig. 2 shows some face samples of the ORL dataset.

The COIL-20 image dataset was collected by the Columbia Object Image Library and contains images of 20 objects. During shooting, each object was placed on a turntable with a black background, and the turntable was rotated through 360 degrees to change the pose of the object relative to the camera. Each object in the dataset has 72 images taken at different angles; some examples are shown in Fig. 3.

    Figure 3: Example images of COIL-20 database

In the experiment, we implemented the sparse representation-based image classification methods SRC, CRC, and LRSRC on the above two public datasets and compared them with the algorithm proposed in this paper for verification.

    5.2 Comparison and analysis of experiments

First, to verify the validity of the method proposed in this paper, we use the ORL face image dataset and visualize the obtained low-rank representation matrix and its corresponding feature representation matrix. The low-rank representation matrix is shown in Fig. 4, and the feature representation matrix is shown in Fig. 5. From these two figures, it can be observed that the representation matrix has a distinct diagonal block structure, indicating that it should, in theory, have discriminative ability for classification; the later experimental results confirm this prediction. Comparing the two figures, we can also find that, relative to the low-rank representation matrix, the diagonal-block elements of the feature representation matrix are enhanced after coding under the supervision constraint, giving it a more obvious diagonal block structure and hence better class separability.

    Figure 4: Visualization of low-rank representation matrix for ORL

    Figure 5: Visualization of sparse representation matrix for ORL

We performed simulation experiments with the above methods on the Extended YaleB face image dataset and the COIL-20 image dataset and obtained the corresponding experimental results. Each method was tested five times, and the average accuracy is taken as the final result, shown in Tab. 1. The number in parentheses denotes the number of training samples.

    Table 1: Average accuracy of each classification method

From Tab. 1, we can see that the proposed method achieves higher classification accuracy than the comparison methods on all the experimental datasets. This verifies that weighted sparse coding with low-rank supervision can greatly improve the structure of the coding, making it more discriminative.

    6 Summary

Building on the SRC method, this paper proposes a weighted sparse image classification method based on low-rank supervision. The proposed method strengthens the sample discrimination ability of the feature representation matrix by making full use of the correlation between samples to weight the sparse coding of the images, so that the sparse codes of different classes are more discriminative, which is believed to improve the accuracy of image classification. We replace the non-convex term in the model with an equivalent convex one, use the iterative shrinkage algorithm to solve it, and verify its correctness through experiments. Compared with related conventional classification methods, the method proposed in this paper performs better.

Acknowledgement: This research is funded by the National Natural Science Foundation of China (61771154).
