
    Balanced Deep Supervised Hashing

Computers, Materials & Continua, 2019, Issue 7

Hefei Ling, Yang Fang, Lei Wu, Ping Li, Jiazhong Chen, Fuhao Zou and Jialie Shen

Abstract: Recently, Convolutional Neural Network (CNN) based hashing methods have achieved promising performance for the image retrieval task. However, simultaneously tackling the discrepancy between quantization error minimization and discriminability maximization of the network outputs remains unsolved. Motivated by this concern, we propose a novel Balanced Deep Supervised Hashing (BDSH) based on a variant posterior probability to learn compact, discriminability-preserving binary codes for large-scale image data. Distinguished from previous works, BDSH searches for an equilibrium point within this discrepancy. Towards this goal, a carefully designed objective function is utilized to maximize the discriminability of the output space with the variant posterior probability of the pair-wise label, and a quantization regularizer is utilized as a relaxation from the real-valued outputs to the desired discrete values (e.g., -1/+1). Extensive experiments on benchmark datasets show that our method yields state-of-the-art image retrieval performance from various perspectives.

Keywords: Deep supervised hashing, equilibrium point, posterior probability.

    1 Introduction

We are living in an age of information explosion: every day, hundreds of billions of images are uploaded to the internet, so developing effective and efficient image search algorithms is becoming more and more important. The simplest way to search for relevant images is to sort the database images according to their distances to the query image in the feature space and return the nearest ones. For a database with billions of images, which is quite common today, searching linearly through the database is impractical due to the enormous time and memory cost. Therefore, hashing methods draw more and more attention due to their fast query speed and low memory cost [Gong and Lazebnik (2011)].

Hashing methods with hand-crafted features were a hot spot in the computer vision field for a long time. These hashing methods [Zhang, Zhang, Li et al. (2014); Shen, Shen, Liu et al. (2015); Lin, Shen, Shi et al. (2014)] have achieved good performance in image retrieval by utilizing elaborately designed features, which are more appropriate for visual similarity retrieval than for semantic similarity retrieval. With hashing approaches, the input images are mapped to compact binary codes that approximately preserve the data structure of the original space [Liu, Wang, Ji et al. (2012)]. The cost of retrieval time and storage memory can be greatly reduced, because the images are represented by binary codes (e.g., -1/+1) instead of real-valued features. On the other hand, the recent success of CNNs in many tasks, such as image classification [Krizhevsky, Sutskever and Hinton (2012)], object detection [Szegedy, Toshev and Erhan (2013); Meng, Rice, Wang et al. (2018)] and visual recognition [Chen, Chen, Wang et al. (2014); Wu, Wang, Li et al. (2018); Wang, Lin, Wu et al. (2017)], brings more possibilities for tackling the hashing problem. In these various tasks, the convolutional neural network can be regarded as a feature extractor, driven by objective functions specifically designed for the separate tasks. These promising applications of CNNs show the robustness of the learned features to scale, translation, rotation and occlusion; the features learned by convolutional neural networks can well capture the latent semantic information of images rather than appearance differences. Because of the satisfactory performance of CNNs as feature extractors, CNN-based hashing approaches, such as [Lai, Pan, Liu et al. (2015); Zhuang, Lin, Shen et al. (2016); Liu, Wang, Shan et al. (2016); Li, Wang and Kang (2016); Zhu and Gao (2017)], have been proposed to solve the hashing problem. Generally, deep hashing methods consist of two modules: i) a feature extractor and ii) a feature quantization step that encourages the CNN outputs to approximate the desired discrete values (e.g., -1/+1), as sketched in the example below.
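To make the two-module structure concrete, the following is a minimal NumPy sketch of a generic deep hashing pipeline, with a random projection standing in for the CNN feature extractor; all names and shapes are illustrative assumptions, not the paper's code.

```python
# Sketch of the generic deep hashing pipeline: (i) a feature extractor
# producing real-valued outputs (stubbed here), and (ii) quantization of
# those outputs to binary codes in {-1, +1} for Hamming-space retrieval.
import numpy as np

rng = np.random.default_rng(0)

def cnn_features(images, k=12):
    """Stand-in for the CNN feature extractor: maps images to k real values."""
    flat = images.reshape(len(images), -1)
    proj = rng.standard_normal((flat.shape[1], k))
    return np.tanh(flat @ proj)              # real-valued outputs, roughly in (-1, 1)

def quantize(features):
    """Quantize real-valued outputs to binary codes with the sign function."""
    return np.where(features >= 0, 1, -1).astype(np.int8)

def hamming_rank(query_code, db_codes):
    """Rank database items by Hamming distance to the query code."""
    k = db_codes.shape[1]
    dists = (k - db_codes @ query_code) // 2  # for +/-1 codes: (k - <b_q, b_i>) / 2
    return np.argsort(dists)

images = rng.random((100, 32, 32, 3))         # toy "database" images
codes = quantize(cnn_features(images))
print(hamming_rank(codes[0], codes)[:10])     # retrieve neighbours of image 0
```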

Figure 1: The network architecture of BDSH consists of 5 convolutional layers, 3 pooling layers and 3 fully connected layers. The objective function is elaborately designed to exploit discriminative features between image pairs and to make the network outputs approximate the desired discrete values. The binary hash codes are generated by directly quantizing the outputs with the sign function

Our goal is to map images to compact binary hash codes while preserving the discriminability of the features, so as to support efficient and effective search simultaneously. As shown in Fig. 2(a), our learning framework aims at minimizing the quantization error from the real-valued network features to the desired discrete values (e.g., -1/+1). Meanwhile, as shown in Fig. 2(b), another goal of our framework is to maximize the discriminability of the network outputs. Since it is extremely difficult to optimize a CNN-based model with a non-differentiable loss function in Hamming space, directly computing compact binary codes with a CNN-based model is challenging. As shown in Fig. 2, minimizing the feature quantization error in hashing can change the feature distribution and thus inevitably reduce the discriminability of the features [Zhu and Gao (2017)]. Among the existing hashing methods, there always exists a discrepancy between maximizing the discriminability of the network outputs and minimizing the quantization error. Inspired by this concern, we propose Balanced Deep Supervised Hashing based on a variant posterior probability to support fast and accurate image retrieval, whose objective is to search for an equilibrium point between the discriminability and the quantization error. In practice, a carefully designed objective function is proposed to maximize the discriminability of the network outputs with the variant posterior probability of the pair-wise label: we expect the distance between similar image pairs to be as small as possible and the distance between dissimilar ones to be large. Meanwhile, we adopt a quantization module as a relaxation to make the network outputs approach the desired discrete values. The main contributions of this paper can be summarized as follows:

· Based on posterior probability, we address the discrepancy between quantization error minimization and discriminability maximization. A mathematical connection between the posterior probability and the contrastive loss is made to better understand the overall objective function of our method.

· We propose Balanced Deep Supervised Hashing based on a variant posterior probability, an end-to-end framework which can effectively achieve a good balance between the quantization error and the feature discriminability.

· Experimental studies on benchmark datasets show that BDSH greatly outperforms existing methods and achieves state-of-the-art performance in image retrieval tasks.

    2 Related work

Existing hashing methods, including LSH [Gionis, Indyk and Motwani (1999)], SH [Weiss, Torralba and Fergus (2008)], ITQ [Gong and Lazebnik (2011)], LFH [Zhang, Zhang, Li et al. (2014)], LCDSH [Zhu and Gao (2017)] and others, have been proposed to improve the effectiveness of approximate nearest neighbour search because of their low storage memory and high retrieval speed. All these existing methods can be divided into two classes: data-independent hashing methods [Gionis, Indyk and Motwani (1999); Andoni and Indyk (2008)] and data-dependent hashing methods [Weiss, Torralba and Fergus (2008); Gong and Lazebnik (2011)].

In the early years, because of the lack of image data, many researchers focused on data-independent hashing methods, which use random projections to produce hash codes. Data-independent hashing methods, for example Locality Sensitive Hashing (LSH) [Gionis, Indyk and Motwani (1999)], can theoretically achieve good performance with long enough codes (32 bits or even more). However, the huge demand for code bits goes against the motivation of hashing. To overcome this limitation, data-dependent hashing methods have been proposed, which try to learn a hash function from training data to hash codes in a data-driven manner.

Data-dependent hashing methods can be further categorized into two classes: unsupervised hashing methods and supervised hashing methods. On the one hand, compared with supervised hashing methods, unsupervised hashing methods only utilize unlabelled training data to learn a hash function that produces compact hash codes. For example, Spectral Hashing (SH) [Weiss, Torralba and Fergus (2008)] defined a hard criterion for a good code that is related to graph partitioning and used a spectral relaxation to obtain a binary code, while Iterative Quantization (ITQ) [Gong and Lazebnik (2011)] attempts to minimize the quantization error of mapping the data to the vertices of a zero-centered binary hypercube. RSCMVD [Wang, Lin, Wu et al. (2015a)] proposes robust subspace clustering for multi-view data by exploiting correlation consensus. WMFRW [Wang, Zhang, Wu et al. (2015)] constructs multiple graphs, each corresponding to an individual view, and presents a cross-view fusion approach based on graph random walk to derive an optimal distance measure by fusing multiple metrics. On the other hand, supervised hashing methods are proposed to explore complex semantic similarity with supervised learning. LBMCH [Wang, Lin, Wu et al. (2015b)] learned a bridging mapping between images and tags to preserve cross-modal semantic correlation. Supervised Discrete Hashing (SDH) [Shen, Shen, Liu et al. (2015)], in which the learning objective is to produce optimal binary hash codes for linear classification, directly solved the corresponding discrete optimization without any relaxations. The methods above learn hash functions by linear projections, so they can hardly achieve satisfactory performance on linearly inseparable data. To avoid this shortcoming, Supervised Hashing with Kernels (KSH) [Liu, Wang, Ji et al. (2012)] and Binary Reconstructive Embedding (BRE) [Kulis and Darrell (2009)] were proposed to obtain compact binary codes in kernel space.

Figure 2: The distributions of network outputs in the ideal case. (a) The feature distribution when minimizing the quantization error and neglecting the discriminability. (b) The feature distribution when maximizing the discriminability and neglecting the quantization error

While the above methods have certainly improved retrieval performance to some extent, they are still based on hand-crafted features and are not able to capture the semantic structure of large-scale image data. To tackle this problem, deep learning has recently been used to learn the features and the hash function simultaneously. Deep Hashing [Liong, Lu, Wang et al. (2015)] produces a compact binary code through a non-linear deep network. Methods such as [Zhao, Huang, Wang et al. (2015); Lai, Pan, Liu et al. (2015); Zhang, Lin, Zhang et al. (2015); Wu and Wang (2018)] learn image feature representations and hash codes together with CNNs, and have achieved improved retrieval performance. Zhao et al. [Zhao, Huang, Wang et al. (2015); Lai, Pan, Liu et al. (2015); Zhang, Lin, Zhang et al. (2015)] make use of CNNs to learn a hash function that preserves the semantic relations of image triplets. DSH [Liu, Wang, Shan et al. (2016)] maximizes the discriminability of the output space with a contrastive loss [Hadsell, Chopra and Lecun (2006)], and simultaneously imposes a quantization regularizer on the real-valued outputs to approximate the desired discrete values. DPSH [Li, Wang and Kang (2016)] adopted a negative log-likelihood function similar to LFH [Zhang, Zhang, Li et al. (2014)] to maximize feature discriminability, while a quantization part is used to reduce the quantization error. LCDSH [Zhu and Gao (2017)] models the hashing problem as maximizing the posterior probability of the pairwise labels given pairwise hash codes. In formulation, the loss function of LCDSH is still a combination of a discriminability part and a quantization part; however, LCDSH is prone to maximizing the discriminability, which causes a huge quantization error.

By extracting pair-wise image features and learning binary-like codes, these hashing methods have achieved great performance on image retrieval tasks. However, there still exist some drawbacks in the objective functions of these methods, which greatly limit their practical retrieval performance. In the experiment section, we will show these details through a series of extensive experiments.

    3 Approach

Our goal is to learn a projection P from I to B that produces compact binary codes for images such that: i) the binary codes of relevant images are similar in Hamming space, and vice versa; ii) the binary codes are produced efficiently. To this end, the hash codes of semantically similar images should be as near as possible, while the hash codes of dissimilar ones should be as far apart as possible. To keep a balance between minimizing the quantization error and maximizing the discriminability of the binary codes, we propose the Balanced Deep Supervised Hashing (BDSH) method. The network architecture of BDSH is displayed in Fig. 1.

    Table 1: The notation of BDSH

    3.1 Loss function of BDSH

Given the pairwise similarity relationship S = {s_ij}, the Maximum a Posteriori estimation of the hash codes can be represented as:
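(The displayed equation, presumably Eq. (1), did not survive extraction. The following is a plausible reconstruction, assuming the standard MAP factorization over the pairwise labels; the exact displayed form is an assumption.)

```latex
% Assumed form of the missing Eq. (1): MAP estimation of the hash codes B
% given the pairwise similarity set S, factorized over the pairwise labels.
\max_{B}\; p(B \mid S) \;\propto\; p(S \mid B)\, p(B)
  \;=\; \prod_{s_{ij} \in S} p(s_{ij} \mid B)\, p(B)
```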

where p(S|B) denotes the likelihood function and p(B) is the prior distribution. For each pair of images, p(s_ij|B) is the conditional probability of s_ij given their hash codes B, which is defined as follows:
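(The displayed definition, presumably Eq. (2), is also missing. Assuming the DPSH/LCDSH-style pairwise likelihood, where Ω_ij is built from the inner product of the two codes, it would read as follows; Ω_ij and its scaling are assumptions, not taken from this paper.)

```latex
% Assumed form of the missing Eq. (2): a Bernoulli likelihood for the pairwise
% label, driven by the sigmoid of an inner-product similarity Omega_ij.
p(s_{ij} \mid B) \;=\;
  \begin{cases}
    \delta(\Omega_{ij}),     & s_{ij} = 1,\\[2pt]
    1 - \delta(\Omega_{ij}), & s_{ij} = 0,
  \end{cases}
\qquad \Omega_{ij} = \tfrac{1}{2}\, b_i^{\top} b_j
```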

where δ(x) = 1/(1 + e^(-x)) is the sigmoid function.

A deep supervised hashing method learns a mapping from I to B such that there is a suitable binary code b_i ∈ {+1, -1}^k for each image I_i. For the hashing task, semantically relevant images should be encoded to similar binary hash codes. More exactly, the binary hash codes of similar images should be as near as possible in the Hamming space, while the binary codes of dissimilar ones should be as far apart as possible. For this purpose, the objective function is naturally designed to pull the features of similar images close together in the output space and to push the features of dissimilar ones far away from each other. As a special variant of Eq. (3), the loss with respect to image pairs is defined as:
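(Eq. (4) itself is missing from this version of the text. A plausible reconstruction, assuming the contrastive form of Hadsell et al. used by DSH, with the pairwise distance expressed through inner products of the real-valued outputs u_i, u_j and margin m, is:)

```latex
% Assumed form of the missing Eq. (4): a contrastive pairwise loss on the
% real-valued outputs u_i, u_j with margin m; the exact form is an assumption.
L(u_i, u_j, s_{ij}) \;=\;
    \tfrac{1}{2}\, s_{ij}\, \langle u_i - u_j,\, u_i - u_j \rangle
  \;+\; \tfrac{1}{2}\,(1 - s_{ij})\, \max\!\big(m - \langle u_i - u_j,\, u_i - u_j \rangle,\; 0\big)
```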

where the distance between two binary-like features is computed directly through the inner product ⟨·,·⟩ and m is a threshold parameter. The first term punishes similar images that are encoded to dissimilar binary-like codes when their distance falls below the margin threshold m, and the second term penalizes dissimilar images that are encoded to similar binary-like codes. To avoid a collapsed solution, only those image pairs (similar/dissimilar) whose distances lie within the range defined by m contribute to the loss function.

However, it is very difficult to optimize Eq. (4) directly in Hamming space. To overcome this limitation, in this work we adopt a special regularizer that encourages the real-valued features to approximate the desired discrete codes (e.g., +1/-1). The regularizer is defined as:
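(The displayed regularizer, presumably Eq. (5), is also missing. Assuming a DSH-style relaxation with the L2-norm and the all-ones vector 1 mentioned below, it plausibly reads:)

```latex
% Assumed form of the missing Eq. (5): an L2 relaxation pulling |u| towards
% the all-ones vector, i.e. the real-valued outputs towards +1/-1.
R(u_i, u_j) \;=\; \big\|\, |u_i| - \mathbf{1} \,\big\|_2^2
            \;+\; \big\|\, |u_j| - \mathbf{1} \,\big\|_2^2
```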

We aim to maximize the discriminability of the real-valued network outputs and, simultaneously, to minimize the quantization error from the real values to the desired discrete values. The whole loss function can then be written as:
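(The displayed Eq. (6) is missing; combining the two assumed terms above with the weight α introduced below gives the following plausible per-pair form, written out so that the all-ones vector 1 appears explicitly.)

```latex
% Assumed form of the missing Eq. (6): the contrastive term plus the
% alpha-weighted quantization regularizer, written out for one image pair.
L_{ij} \;=\; \tfrac{1}{2}\, s_{ij}\, \langle u_i - u_j,\, u_i - u_j \rangle
   \;+\; \tfrac{1}{2}\,(1 - s_{ij})\, \max\!\big(m - \langle u_i - u_j,\, u_i - u_j \rangle,\, 0\big)
   \;+\; \alpha \Big( \big\|\, |u_i| - \mathbf{1} \,\big\|_2^2 + \big\|\, |u_j| - \mathbf{1} \,\big\|_2^2 \Big)
```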

where α is a weight parameter that controls the strength of the regularizer and 1 is a vector of all ones. Theoretically, the larger α is, the closer the network outputs are to the desired discrete values, and consequently the feature discriminability decreases sharply; more details are shown in the extensive experiments. Here the inner product ⟨·,·⟩ is used to measure the distance between the network outputs directly, and the L2-norm is adopted to encourage the real-valued features to approximate the desired discrete hash codes.
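The following NumPy sketch computes the per-pair objective in the assumed form given above (contrastive term plus α times the quantization regularizer); it mirrors only the reported hyper-parameters m = 2k and α = 10 and is not the authors' released code.

```python
# Sketch of the assumed per-pair objective: contrastive discriminability term
# plus alpha-weighted quantization regularizer (Eqs. (4)-(6) as assumed above).
import numpy as np

def pairwise_loss(u_i, u_j, s_ij, k=12, alpha=10.0):
    """u_i, u_j: real-valued network outputs of length k; s_ij: 1 similar, 0 dissimilar."""
    m = 2 * k                                       # margin, as reported in Section 3.2
    d2 = np.sum((u_i - u_j) ** 2)                   # squared distance via inner products
    discr = 0.5 * s_ij * d2 + 0.5 * (1 - s_ij) * max(m - d2, 0.0)
    quant = np.sum((np.abs(u_i) - 1.0) ** 2) + np.sum((np.abs(u_j) - 1.0) ** 2)
    return discr + alpha * quant

rng = np.random.default_rng(0)
u_i, u_j = np.tanh(rng.standard_normal(12)), np.tanh(rng.standard_normal(12))
print(pairwise_loss(u_i, u_j, s_ij=1))
```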

With this objective function, the network model can be trained by the back-propagation algorithm with the Adam method (of course, mini-batch gradient descent can also be adopted). The sub-gradients of Eq. (6) are respectively written as:
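(The displayed sub-gradients are missing as well. If Eqs. (4)-(6) take the form sketched above, the sub-gradient with respect to the real-valued output u_i would be as follows, and symmetrically for u_j; this derivation rests entirely on the assumed loss form.)

```latex
% Sub-gradient of the assumed per-pair loss L_ij with respect to u_i;
% 1[.] is the indicator function and \odot denotes element-wise product.
\frac{\partial L_{ij}}{\partial u_i} \;=\;
    s_{ij}\,(u_i - u_j)
  \;-\; (1 - s_{ij})\, \mathbb{1}\!\left[\, \|u_i - u_j\|_2^2 < m \,\right] (u_i - u_j)
  \;+\; 2\alpha\, \mathrm{sgn}(u_i) \odot \big(|u_i| - \mathbf{1}\big)
```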

    Our purpose is to minimize the overall objective function:
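(The missing displayed objective presumably sums the per-pair loss over all training pairs, minimized over the network parameters Θ:)

```latex
% Assumed overall objective: the sum of the (assumed) per-pair losses over
% the training pairs, minimized over the network parameters Theta.
\min_{\Theta}\; \mathcal{L} \;=\; \sum_{s_{ij} \in S}
  \Big[ L(u_i, u_j, s_{ij}) \;+\; \alpha\, R(u_i, u_j) \Big]
```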

    3.2 Implementation details

Our BDSH method is implemented with TensorFlow on a single NVIDIA 1080 GPU. The network architecture is illustrated in Fig. 1. The weights of the last fully-connected layer are initialized with "Xavier" initialization. In the training process, the batch size is set to 200 and the number of epochs to 100. The learning rate of the first seven layers is set to 10^(-5) and that of the last fully-connected layer to 10^(-4). The network is trained by the back-propagation algorithm with the Adam method, with beta1 set to 0.9 and beta2 to 0.999. The threshold parameter m in Eq. (4) is set to 2k (k is the hash code length), and the weighting parameter α in Eq. (6) is set to 10 to control the strength of the quantization regularizer.
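The training configuration above can be sketched as follows in PyTorch-style code (the authors implement BDSH in TensorFlow; this reproduces only the reported hyper-parameters, and the tiny backbone/hash_fc split is a hypothetical stand-in for "first seven layers" versus the last fully-connected hash layer).

```python
# Hedged sketch of the Section 3.2 training setup with assumed placeholder layers.
import torch
import torch.nn as nn

k = 12                                                         # hash code length
backbone = nn.Sequential(nn.Linear(4096, 1024), nn.ReLU())     # placeholder for pre-trained layers
hash_fc = nn.Linear(1024, k)                                   # last fully connected (hash) layer
nn.init.xavier_uniform_(hash_fc.weight)                        # "Xavier" initialization
nn.init.zeros_(hash_fc.bias)

optimizer = torch.optim.Adam(
    [{"params": backbone.parameters(), "lr": 1e-5},            # first (pre-trained) layers
     {"params": hash_fc.parameters(), "lr": 1e-4}],            # newly added hash layer
    betas=(0.9, 0.999))

batch_size, epochs = 200, 100
m, alpha = 2 * k, 10.0                                         # margin and regularizer weight
```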

    4 Experiments

    4.1 Datasets and evaluation metrics

Figure 3: The convergence rate and MAP results of our model on CIFAR-10. (a) The convergence rate w.r.t. different numbers of epochs. (b) The precision-recall curves for different hash code lengths. (c) The precision with different numbers of top returned images

We compare our proposed model with other state-of-the-art methods on two widely used benchmark datasets. (1) CIFAR-10 [Krizhevsky (2009)]: this dataset is composed of 60,000 32×32 color images, which are divided into 10 classes (6,000 images per class). It is a single-label dataset, where each image belongs to one of the ten categories. The images are resized to 224×224 before being input to the CNN-based models. (2) NUS-WIDE [Chua, Tang, Hong et al. (2009)]: this dataset has 269,648 images gathered from Flickr. It is a multi-label dataset, where each image belongs to one or more of 81 class labels. Following Liu et al. [Liu, Wang, Shan et al. (2016); Li, Wang and Kang (2016); Zhu and Gao (2017)], we only make use of the images associated with the 21 most frequent classes, where each of these classes contains at least 5,000 images. As a result, a total of 195,834 images in NUS-WIDE are used. These images are also resized to 224×224 and then used as input data for the CNN-based state-of-the-art methods as well as for our BDSH. In our experiments, we randomly sample 1,000 images (100 images per class) as the query set of CIFAR-10. For the supervised methods, we randomly sample 5,000 images (500 images per class) from the remaining images as the training set. The pair-wise label set S is constructed from the image category labels; in other words, two images I_i and I_j are considered to be similar (s_ij = 1) if they have the same label. For the unsupervised methods, we use all the remaining images as the training set. For NUS-WIDE, following the strategy in [Xia, Pan, Lai et al. (2014)], we randomly select 2,100 query images from the 21 most frequent labels (100 images per class). For the supervised methods, we randomly sample 10,500 images (500 images per class) from the remaining images as the training set. The pair-wise label set S is again constructed from the image category labels: if two images I_i and I_j share at least one positive label, they are considered to be similar (s_ij = 1), and dissimilar otherwise. We calculate the mean Average Precision values within the top 5,000 returned neighbors.
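The two pairwise-label rules described above can be written down in a few lines of NumPy; this is an illustrative sketch of the construction, not the authors' data-preparation code.

```python
# Sketch of how the pairwise label set S is built from category labels.
import numpy as np

def pairwise_labels_single(labels):
    """CIFAR-10 style: s_ij = 1 iff images i and j have the same class label."""
    labels = np.asarray(labels)
    return (labels[:, None] == labels[None, :]).astype(np.int8)

def pairwise_labels_multi(label_matrix):
    """NUS-WIDE style: s_ij = 1 iff images i and j share at least one positive label."""
    L = np.asarray(label_matrix)
    return (L @ L.T > 0).astype(np.int8)

print(pairwise_labels_single([0, 1, 0]))                # single-label toy example
print(pairwise_labels_multi([[1, 0, 1],
                             [0, 1, 1],
                             [0, 1, 0]]))               # multi-label toy example
```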

Following previous works, the mean Average Precision (MAP) for different code lengths is utilized to measure the retrieval performance of our proposed method and the other baselines.
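For reference, the following NumPy sketch computes MAP over the top-R returned neighbors (R = 5,000 in the paper) with Hamming-distance ranking, using the common convention of normalizing each query's average precision by the number of relevant items retrieved within the top R; the exact evaluation script used by the authors may differ.

```python
# Sketch of mean Average Precision (MAP) over the top-R Hamming-ranked neighbours.
import numpy as np

def mean_average_precision(query_codes, db_codes, sim, R=5000):
    """sim[q, d] = 1 if database item d is relevant to query q, else 0; codes are +/-1."""
    k = db_codes.shape[1]
    aps = []
    for q in range(len(query_codes)):
        dist = (k - db_codes @ query_codes[q]) // 2    # Hamming distance for +/-1 codes
        order = np.argsort(dist)[:R]
        rel = sim[q, order]
        if rel.sum() == 0:
            aps.append(0.0)
            continue
        precision_at = np.cumsum(rel) / (np.arange(len(rel)) + 1)
        aps.append((precision_at * rel).sum() / rel.sum())
    return float(np.mean(aps))

rng = np.random.default_rng(0)
db = np.where(rng.random((50, 12)) > 0.5, 1, -1)       # toy database codes
queries = db[:5]                                        # toy queries
sim = rng.integers(0, 2, size=(5, 50))                  # toy relevance labels
print(mean_average_precision(queries, db, sim, R=20))
```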

    4.2 Evaluation to hyper-parameter

In this part, we validate the effect of the hyper-parameters α and m. We test models with α ∈ {0, 10, 20, 30, 40, 50} and m ∈ {1, 2, 3, 4, 5}·k with k = 12. In Tab. 2(b), we report the MAP of our method with respect to different α on the CIFAR-10 and NUS-WIDE datasets, and in Tab. 2(a) the MAP with respect to different m on the same datasets. The retrieval MAP of the different models is listed in Tab. 3. Fig. 4 reports the distribution of features on the test set of CIFAR-10 with respect to different values of the hyper-parameter α, where m = 24 (k = 12). From the experimental results, we can make three observations:

· In Tab. 2(a), we can observe that different m has little effect on the MAP for hash codes with k = 12.

Table 2: MAP of the model under different settings of m and α on CIFAR-10 and NUS-WIDE

· When α = 0, the network features concentrate around 0 (Fig. 4(a)), and the MAP is quite low on both datasets (CIFAR-10 and NUS-WIDE) in Tab. 2(b). As α grows, the network outputs gradually concentrate around -1 and +1, respectively.

Figure 4: The distributions of network outputs under different settings of α (m = 24) on CIFAR-10

· Under proper settings of α and m, our method can generate compact hash codes for images. From Fig. 4 and Tab. 2(b), we can observe that the smaller α is, the more notable the discriminability of the network outputs is, while the larger α is, the closer the real-valued features are to the desired discrete hash codes. Thus there obviously exists a discrepancy in deep hashing between maximizing the discriminability and minimizing the quantization error. However, we can attempt to search for an equilibrium point to keep a balance, where images are mapped to compact binary codes by maximizing the discriminability of the network outputs while minimizing the quantization error from the real-valued features to the desired discrete hash codes.

    4.3 Comparison with the state-of-the-art

Comparative methods: We compare our method with a number of state-of-the-art hashing methods, which can be divided into three categories:

· Unsupervised hashing methods with hand-crafted features, including Spectral Hashing (SH) [Weiss, Torralba and Fergus (2008)] and Iterative Quantization (ITQ) [Gong and Lazebnik (2011)].

· Supervised hashing methods with hand-crafted features, including Latent Factor Hashing (LFH) [Zhang, Zhang, Li et al. (2014)], Fast Supervised Hashing (FastH) [Lin, Shen, Shi et al. (2014)] and Supervised Discrete Hashing (SDH) [Shen, Shen, Liu et al. (2015)].

· Deep hashing methods, including Network in Network Hashing (NINH) [Lai, Pan, Liu et al. (2015)], CNNH [Xia, Pan, Lai et al. (2014)], Deep Binary Embedding Network (DBEN) [Zhuang, Lin, Shen et al. (2016)], Deep Supervised Hashing with Pairwise Labels (DPSH) [Li, Wang and Kang (2016)], Deep Supervised Hashing (DSH) [Liu, Wang, Shan et al. (2016)] and Locality-Constrained Deep Supervised Hashing (LCDSH) [Zhu and Gao (2017)].

For the hashing methods with hand-crafted features, each image in CIFAR-10 [Krizhevsky (2009)] is represented by a 512-D GIST feature vector, and each image in NUS-WIDE [Chua, Tang, Hong et al. (2009)] is represented by a 1134-D low-level feature vector, which consists of a 64-D color histogram, a 73-D edge direction histogram, a 128-D wavelet texture, a 144-D color correlogram, 225-D block-wise color moments and a 500-D bag of words based on SIFT descriptors.

For the deep hashing methods, the raw image pixels, resized to 224×224, are directly used as inputs. We adopt the CNN-F network, pre-trained on the ImageNet dataset [Russakovsky, Deng, Su et al. (2015)], to initialize the first seven layers of our model. This initialization strategy is the same as in other deep hashing methods, including DSRH [Zhao, Huang, Wang et al. (2015)], DSH [Liu, Wang, Shan et al. (2016)], DPSH [Li, Wang and Kang (2016)] and LCDSH [Zhu and Gao (2017)].

The MAP of the different methods on the two benchmark datasets (CIFAR-10 and NUS-WIDE) is reported in Tab. 3. It can be observed that our BDSH greatly outperforms the other baselines. Although LCDSH and DPSH are also CNN-based hashing methods that use image pairs and a quantization error term, BDSH outperforms both of them.

Table 3: MAP of different hashing methods on CIFAR-10 and NUS-WIDE. The MAP for the two datasets is calculated based on the top 5,000 returned neighbors. DSH* denotes replacing the original network of DSH with CNN-F and then training the model with an initialization strategy similar to ours

Table 4: Training time (hours) of different hashing methods on CIFAR-10 and NUS-WIDE

Comparison of training time: Here we compare our method with three hashing methods, namely DPSH, DBEN and NINH, because only the source code of these methods is publicly available online. Tab. 4 shows the training time of the different hashing methods with a 12-bit code length on both the CIFAR-10 and NUS-WIDE datasets. We can see that our model is faster than DBEN and NINH, and comparable to DPSH. It is worth noting that the training time gaps between these hashing methods are due to differences in the inputs and the frameworks.

    4.4 Result analysis

The MAP of the different methods on CIFAR-10 and NUS-WIDE is reported in Tab. 3. It can be observed that our method greatly outperforms the other baselines. In general, the CNN-based methods greatly outperform the conventional hashing methods on these two datasets. Moreover, as shown in Tab. 3, we also investigate some conventional hashing methods trained with deep features extracted by the CNN-F network; their performance is significantly improved, but still inferior to our model.

    Figure 5: Examples of top 10 retrieved images and precision@10 on CIFAR-10

Although LCDSH models the hashing problem as maximizing the posterior probability of the pairwise labels given pairwise hash codes, the aim of LCDSH is to preserve the pairwise similarity rather than to minimize the feature quantization error. Because of the discrepancy between discriminability and quantization error, LCDSH incurs a huge quantization error, and its feature distribution closely approximates the one shown in Fig. 4(a). DSH utilizes a combination of a contrastive loss and a quantization error term. However, feature-quantization-based hashing can change the feature distribution, which makes the features less discriminative. For a fair comparison, we replace the network of DSH with CNN-F, but the MAP of DSH* is still inferior to our method. DPSH makes use of a posterior probability to measure the discriminability of image pairs, which is similar to LCDSH. As reported in Fig. 4, minimizing the feature quantization error in hashing can change the feature distribution and thus inevitably reduce the feature discriminability. Instead of only minimizing the quantization error or only maximizing the discriminability, we attempt to search for an equilibrium point within this discrepancy. Different from these deep hashing methods, a combination of posterior probability and contrastive loss is used to measure the discriminability, and the distribution of BDSH is shown in Fig. 4(b) and Fig. 4(e). Naturally, as shown in Tab. 3, our BDSH method outperforms current state-of-the-art methods on the CIFAR-10 and NUS-WIDE datasets. Examples of the top 10 retrieved images and precision@10 on CIFAR-10 are reported in Fig. 5. BDSH achieves effective and efficient large-scale image retrieval.

    5 Conclusion

In order to achieve an optimal balance between maximizing the discriminability and minimizing the quantization error, we propose Balanced Deep Supervised Hashing for effective and efficient large-scale image retrieval. Since the discrepancy is extremely difficult to resolve, we aim at finding an equilibrium point to ease the conflict. To demonstrate the advantages of the proposed method, an extensive experimental study has been conducted, and the results show that the proposed method greatly outperforms other hashing methods. Moreover, our method compares favourably with the other hashing methods in terms of training time and retrieval effectiveness. In future work, it will be interesting and promising to develop a theoretical framework to further optimize the performance and to apply the framework to other types of data (e.g., audio, video and text).

Acknowledgement: This work was supported in part by the Natural Science Foundation of China under Grants U1536203 and 61672254, in part by the National Key Research and Development Program of China (2016QY01W0200), and in part by the Major Scientific and Technological Project of Hubei Province (2018AAA068).
