
    Image Retrieval Based on Vision Transformer and Masked Learning


    LI Feng(李 鋒), PAN Huangsheng(潘煌圣)*, SHENG Shouxiang(盛守祥), WANG Guodong(王國棟)

1 College of Computer Science and Technology, Donghua University, Shanghai 201620, China; 2 Huafang Co., Ltd., Binzhou 256617, China

Abstract: Deep convolutional neural networks (DCNNs) are widely used in content-based image retrieval (CBIR) because of their advantages in image feature extraction. However, training deep neural networks requires a large amount of labeled data, which limits their application. Self-supervised learning is a more general approach in unlabeled scenarios. A method of fine-tuning feature extraction networks based on masked learning is proposed. Masked autoencoders (MAE) are used to fine-tune the vision transformer (ViT) model. In addition, the scheme for extracting image descriptors is discussed. The encoder of the MAE uses the ViT to extract global features and performs self-supervised fine-tuning by reconstructing the pixels of masked areas. The method works well on category-level image retrieval datasets and brings marked improvements on instance-level datasets. On the instance-level datasets Oxford5k and Paris6k, the retrieval accuracy of the base model is improved by 7% and 17%, respectively, compared with that of the original model.

    Key words:content-based image retrieval; vision transformer; masked autoencoder; feature extraction

    0 Introduction

    Currently, the mainstream technology in the field of image retrieval is content-based image retrieval (CBIR). The workflow of CBIR is to extract the visual features of the image, generate the corresponding feature descriptors, and finally use the similarity measure function for evaluation. Compact yet rich feature representations are at the core of CBIR[1]. Therefore, image feature extraction is an essential factor affecting the effectiveness of image retrieval.

    CBIR methods are mainly available at the instance level and the category level. In instance-level image retrieval, a query image of a specific object or scene is usually given. The goal is to find images containing the same object or scene. The images may have different backgrounds. In contrast, category-level image retrieval aims to find images of the same class as the query.

In the last two decades, significant progress has been made in image feature extraction, which consists of two essential phases: feature engineering and feature learning[1].

The feature engineering phase focuses on low-dimensional semantic features extracted by mathematical computation, such as color, texture, shape, and spatial location features[2]. One of the most successful handcrafted features in this area is the scale-invariant feature transform (SIFT)[3]. The bag of words (BoWs) model has achieved good results in image classification and retrieval through SIFT descriptors. However, the expressive power of features extracted by BoWs is limited, so performance on image retrieval datasets is not very good. These low-level features need to be designed manually, and the retrieval stability in different scenarios is low.

Feature learning is mainly based on deep learning networks. It has developed since 2012 with the rise of deep learning and the excellent results of deep convolutional neural networks (DCNNs) on image classification tasks[1]. In particular, with the great success of AlexNet[4] and networks such as VggNet[5], GoogleNet[6] and ResNet[7] in image classification, the exploration of DCNNs in image retrieval has also been launched. DCNNs can learn powerful feature representations with multiple levels of abstraction directly from the data, and these features work well for image retrieval. There are two main approaches to extracting features with DCNNs: using existing pre-trained models directly and fine-tuning them. It is essential to know how to use the network more effectively to extract features.

With the success of the transformer architecture in the field of natural language processing (NLP), the superiority of the transformer has also been demonstrated in image processing in the last two years. The vision transformer (ViT)[8] model has shown better results on image classification tasks. Many researchers are also investigating the use of the ViT model in downstream tasks in the image domain: for example, the detection transformer (DETR)[9] obtains state-of-the-art (SOTA) results in object detection, and the segmentation transformer (SETR)[10] obtains SOTA results in semantic segmentation. However, applications of the ViT model in image retrieval are still relatively few, and its performance there needs to be verified.

The existing network structures and their parameters were designed and trained on datasets for image classification tasks. When these networks are used to extract features on instance-level datasets, they do not work well, so it is necessary to fine-tune the networks on these datasets. Supervised fine-tuning requires much time and labor to annotate images, so self-supervised learning is preferred.

In this paper, a masked learning-based approach is proposed for fine-tuning feature extraction networks. In detail, the ViT model is used as the backbone network for feature extraction, and the self-supervised masked learning method is used to fine-tune it on images in the specific domain. The influence of the class vector and the average pooling strategy on feature extraction is discussed in detail, and a suitable feature extraction scheme is proposed for self-supervised learning. Four standard image retrieval datasets (Ukbench[11], Holidays[12], Oxford5k[13] and Paris6k[14]) are used to evaluate the performance.

    1 Related Work

    1.1 Vision transformer

The transformer[15] architecture was initially used in the natural language processing (NLP) domain and achieved SOTA results on many NLP tasks. Convolutional neural network (CNN) architectures in the visual domain cannot transform images into sequences, so a transformer block is usually added after the CNN layers. To solve this problem, the ViT model divides the image into 16 pixel×16 pixel patches and then projects each patch into a fixed-length vector. The subsequent encoding operation is the same as that of the transformer. Compared with traditional CNNs, the ViT model requires more data to achieve good results because it lacks inductive bias. Much follow-up research, for example the Swin Transformer[16], addresses the problem that training the ViT model requires a larger dataset and more time.

There are some applications of the ViT model in the field of image retrieval. Gkelios et al.[17] applied the ViT model without fine-tuning to image retrieval, compared it with some traditional methods on several datasets, and found that it worked well. Li et al.[18] used the ViT model as a backbone network to extract features and combined the features with standard deep hashing methods for compressive coding. El-Nouby et al.[19] used the ViT model as the feature extraction network and combined it with metric learning methods. Yang et al.[20] proposed an EfficientNet model and a Swin Transformer model based on the deep orthogonal fusion of local and global features (DOLG) model.

    1.2 Self-supervised learning

    In computer vision, most neural networks use supervised learning, which can be very time consuming and challenging due to the large amount of labeled data required. In contrast, deep learning networks can use self-supervised learning methods to train the network without labeled data.

There are currently three main categories of self-supervised learning in computer vision: (i) pretext tasks, which construct a task to drive the model to learn; (ii) contrastive learning, such as the MoCo[21-23] and SimCLR[24] series of algorithms, whose core idea is that the same image under different data augmentations should still yield highly consistent features; (iii) masked learning, such as BERT pre-training of image transformers (BEiT)[25] and masked autoencoders (MAE)[26], which randomly masks blocks of an image and reconstructs the masked regions from the unmasked regions.

Among them, masked learning is the self-supervised learning approach best suited to the ViT model. Compared with traditional convolutional neural networks, ViT models are also easier to combine with models from the NLP domain to build larger multi-modal general models.

More research is needed on self-supervised fine-tuning in image retrieval[1], and there are currently two main approaches. One is manifold learning[27], which fine-tunes the network by mining correlations among features to find positive and negative samples in the dataset. The other is the autoencoder approach, which reconstructs an output that is as close to the input as possible. The MAE method used in this paper is an autoencoder method.

    2 Methods

    2.1 Overview

This work focuses on fine-tuning the MAE model on image retrieval datasets and on extracting features with it. As shown in Fig.1, the part to the left of the dashed line is directly replaced by a pre-trained checkpoint, and the part to the right is the work of this article. The MAE model is pre-trained in a self-supervised way on the ImageNet dataset and is then fine-tuned with supervision on the corresponding dataset when image classification tasks are performed. In this paper, the checkpoint model obtained by self-supervised pre-training on ImageNet is used as the baseline model.

    Fig.1 System flow diagram

    The right side of the dashed line in Fig.1 shows the main workflow. The solid line represents the process of self-supervised fine-tuning of the model. The dot-dash line represents extracting features from the image retrieval dataset and building a feature library. The dashed line represents the final image retrieval process.

    2.2 Self-supervised fine-tuning

In the field of NLP, bidirectional encoder representations from transformers (BERT)[28] uses a self-supervised approach to train the transformer model: words are randomly masked with a probability of 15%, and the original words at the masked locations are then predicted. MAE is the BERT of the vision domain, and its core idea is to randomly mask a certain percentage of the image patches in a picture and then reconstruct the pixel values of these parts. The MAE model starts with self-supervised training on a large dataset, followed by supervised fine-tuning, eventually achieving SOTA results on the image classification task.

As shown in Fig.2, MAE uses an asymmetric encoder-decoder design. The encoder encodes the image patches while the decoder reconstructs them. Only the unmasked image patches are encoded in the encoding phase, whereas all image patches are processed in the decoding phase. After pre-training is completed, the decoder can be removed and only the encoder is used for the image retrieval task. This separable encoder-decoder design allows the encoder to be applied to downstream tasks easily.

    Fig.2 MAE architecture diagram[26]

The encoder architecture uses the ViT model. Taking the base model as an example, the image is usually first divided into multiple 16 pixel×16 pixel blocks. Then, each block is projected into a fixed-length vector by a linear layer, a special class embedding vector is added, and position encoding is added to all of these vectors. Finally, a series of stacked transformer blocks performs self-attention (the main difference between ViT models of different sizes is the number of stacked transformer blocks). Thus, the global features of the image can be well learned through the encoder and are suitable for image retrieval tasks. The difference between MAE and ViT is that here only the unmasked image patches need to be passed into the encoder, which speeds up the encoder's processing. The encoder also discards the MLP head used in ViT; it is added back only for subsequent supervised fine-tuning.
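
To make the token layout concrete, the following is a minimal PyTorch sketch of this embedding stage. It is an illustrative reconstruction rather than the MAE implementation itself; the class name PatchEmbedding and the zero-initialized parameters are assumptions based on the ViT-base configuration described above.

```python
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Illustrative ViT-base style tokenizer: 224x224 image -> 196 patch tokens + 1 class token."""
    def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2          # 14 * 14 = 196
        # A stride-16 convolution is equivalent to splitting the image into 16x16 blocks
        # and applying a shared linear projection to each block.
        self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))                      # class embedding vector
        self.pos_embed = nn.Parameter(torch.zeros(1, self.num_patches + 1, embed_dim))   # position encoding

    def forward(self, x):                      # x: [B, 3, 224, 224]
        x = self.proj(x)                       # [B, 768, 14, 14]
        x = x.flatten(2).transpose(1, 2)       # [B, 196, 768]
        cls = self.cls_token.expand(x.shape[0], -1, -1)
        x = torch.cat([cls, x], dim=1)         # [B, 197, 768]
        return x + self.pos_embed              # add position encoding to all vectors

tokens = PatchEmbedding()(torch.randn(2, 3, 224, 224))
print(tokens.shape)                            # torch.Size([2, 197, 768])
```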

    The self-attention mechanism in the transformer encoder module is briefly introduced. As shown in Fig.3, the left part shows the architecture of the transformer encoder part, and the right part shows the schematic diagram of self-attention. The class vector does the attention operation with all other vectors:

$\mathrm{Attention}(\boldsymbol{Q},\boldsymbol{K},\boldsymbol{V}) = \mathrm{softmax}\left(\dfrac{\boldsymbol{Q}\boldsymbol{K}^{\mathrm{T}}}{\sqrt{d_k}}\right)\boldsymbol{V},     (1)

where $\boldsymbol{Q}$, $\boldsymbol{K}$ and $\boldsymbol{V}$ are the query, key and value matrices obtained by linear projections of the input vectors, and $d_k$ is the dimension of the key vectors.

The class vector gets the global features from all vectors. While single-head attention obtains a single representation space, multi-head attention can obtain several different representation spaces, helping the model learn different image features.

    Fig.3 Architecture diagram of transformer encoder module[15]
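
As a concrete illustration of Eq. (1) restricted to the class vector, the sketch below computes a single attention head in PyTorch. The projection matrices w_q, w_k and w_v are random placeholders standing in for the learned weights, and the function name is hypothetical.

```python
import torch
import torch.nn.functional as F

def class_token_attention(tokens, w_q, w_k, w_v):
    """Single-head version of Eq. (1): the class vector attends to every token."""
    q = tokens[:, :1] @ w_q            # query from the class vector only, [B, 1, d]
    k = tokens @ w_k                   # keys from all 197 tokens, [B, 197, d]
    v = tokens @ w_v                   # values from all 197 tokens, [B, 197, d]
    scores = q @ k.transpose(-2, -1) / (k.shape[-1] ** 0.5)   # [B, 1, 197]
    return F.softmax(scores, dim=-1) @ v                      # weighted sum -> global feature, [B, 1, d]

d = 768
tokens = torch.randn(2, 197, d)
out = class_token_attention(tokens, torch.randn(d, d), torch.randn(d, d), torch.randn(d, d))
print(out.shape)                       # torch.Size([2, 1, 768])
```

In the multi-head case, this operation is repeated with separate projection matrices per head and the head outputs are concatenated, giving the several representation spaces mentioned above.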

The decoder usually uses fewer transformer blocks than the encoder, and the number of transformer blocks is set to eight in the MAE base model. However, even one block can work. A checkpoint model pre-trained on a large dataset is used in this paper. The parameters of the decoder are frozen and only the encoder is trained, which prevents the decoder from learning too much about the dataset and affecting the image representation learned by the encoder.

Another feature of MAE is its high masking rate, which can be as high as 75% while the model can still recover the original image. Because images contain a great deal of redundant information, simple interpolation could recover the image if the masking rate were too low. The random masking operation also acts as a form of image augmentation and can achieve good results when the dataset contains little data.
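
The random masking step can be sketched as follows. This is a simplified illustration of the idea (keep only a random 25% of patch tokens before the encoder), not the exact MAE implementation, and the function name random_masking is assumed.

```python
import torch

def random_masking(patch_tokens, mask_ratio=0.75):
    """Keep a random 25% of patch tokens, following MAE's high masking rate.

    patch_tokens: [B, N, D] patch embeddings (class token excluded).
    Returns the kept tokens and the indices of the kept patches.
    """
    B, N, D = patch_tokens.shape
    num_keep = int(N * (1.0 - mask_ratio))
    noise = torch.rand(B, N)                           # one random score per patch
    keep_idx = noise.argsort(dim=1)[:, :num_keep]      # patches with the lowest scores are kept
    kept = torch.gather(patch_tokens, 1,
                        keep_idx.unsqueeze(-1).expand(-1, -1, D))
    return kept, keep_idx

tokens = torch.randn(2, 196, 768)
kept, idx = random_masking(tokens)
print(kept.shape)   # torch.Size([2, 49, 768]) -- only 25% of the patches go through the encoder
```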

    2.3 Feature extraction method

In the feature extraction phase, the MAE encoder is used. Specifically, the encoder takes pre-processed (size-normalized and pixel-normalized) images as input. The data sent to the encoder are no longer masked, and the encoding operation is the same as in the standard ViT architecture. After the image is divided into patches and embedded into vectors, self-attention is performed on these vectors by the transformer blocks. There is no need for an MLP head because the focus is on feature extraction.

Taking the ViT base model as an example, after a single image passes through the encoder, a 197×768 feature matrix is obtained (one class token plus 196 patch tokens, each of 768 dimensions), as shown in Fig.4. In traditional CNN models, the feature map of the last convolutional layer is usually pooled to obtain a lower-dimensional vector for subsequent retrieval. However, under the ViT framework, the low-dimensional vector obtained by pooling the extracted vectors is less effective than using the class vector as the image descriptor. The class vector has learned a sufficient feature representation through global self-attention with the image patches, so it is an excellent global feature.

    Fig.4 Feature extraction architecture diagram
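
The two descriptor schemes compared here can be sketched as follows, assuming the encoder output has the 197×768 layout of Fig.4; the helper name image_descriptor is illustrative.

```python
import torch
import torch.nn.functional as F

def image_descriptor(encoder_output, use_cls=True):
    """Build a 768-d global descriptor from the 197 x 768 encoder output.

    encoder_output: [B, 197, 768]; token 0 is the class vector, tokens 1..196 are patch tokens.
    """
    if use_cls:
        feat = encoder_output[:, 0]                # class vector as the global feature
    else:
        feat = encoder_output[:, 1:].mean(dim=1)   # average pooling over the patch tokens
    return F.normalize(feat, dim=-1)               # L2 normalization for cosine retrieval

out = torch.randn(4, 197, 768)
print(image_descriptor(out).shape)                 # torch.Size([4, 768])
```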

    3 Experiment

This section describes the experimental environment in detail and evaluates the proposed method on four publicly available datasets.

    3.1 Dataset

The Ukbench[11] dataset has 10 200 pictures, divided into 2 550 groups of four pictures each. The size of each image is 640 pixel×480 pixel, and the images in one group are taken from different perspectives and under different lighting conditions. The Ukbench dataset uses the average number of correct results among the top four candidates to measure retrieval accuracy.

    The Holidays[12]dataset consists of 1 491 photos taken by mobile phones, mainly including some personal holiday photos and a variety of scenes such as nature, architecture and humanities. The Holidays dataset consists of 500 image groups, each representing a different scene or object, and the number of images in each group ranges from 2 to 13. The first image of each group is the query image, and the correct retrieval results are the other images of the group.

The Paris6k[14] dataset contains 6 412 photos of distinctive Paris landmarks. The collection provides 55 query images of buildings and monuments, and it covers more landmarks than Oxford5k.

The Oxford5k[13] building dataset contains 5 062 images collected from Flickr by searching for specific Oxford landmarks. The dataset has been manually annotated to generate comprehensive ground truth for 11 different landmarks. Each landmark is represented by five possible queries, giving a total of 55 queries for evaluating the object retrieval system. In the Oxford5k dataset, completely different views of the same building are labeled with the same name, which makes the collection challenging for image retrieval tasks.

In the Ukbench dataset, the recall of the first four candidates is used to evaluate the retrieval performance. There are only four correct results for each query in the Ukbench dataset, so here $N_o$ can represent the number of true-positive cases; that is, in the best case, $N_o = 4$:

$N_o = V_o$,     (2)

$P_{\text{precision}} = \dfrac{V_o}{V_o + V_u}$,     (3)

$R_{\text{recall}} = \dfrac{V_o}{V_o + V_w}$,     (4)

where $P_{\text{precision}}$ is the precision of search results; $R_{\text{recall}}$ is the recall of search results; $V_o$ is the number of true-positive cases; $V_u$ is the number of false-positive cases; $V_w$ is the number of false-negative cases.
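
A small sketch of Eqs. (2)-(4) for a single Ukbench-style query is given below; the function name and the toy image ids are illustrative.

```python
def precision_recall(retrieved, relevant):
    """Eqs. (2)-(4): precision and recall of one query from a ranked result list.

    retrieved: list of returned image ids (e.g. the top-4 candidates on Ukbench).
    relevant:  set of ground-truth ids for the query (4 per group on Ukbench).
    """
    v_o = sum(1 for r in retrieved if r in relevant)      # true positives (N_o on Ukbench)
    v_u = len(retrieved) - v_o                            # false positives
    v_w = len(relevant) - v_o                             # false negatives
    precision = v_o / (v_o + v_u) if retrieved else 0.0
    recall = v_o / (v_o + v_w) if relevant else 0.0
    return precision, recall

# Ukbench-style query: 4 relevant images, top-4 returned, 3 of them correct -> N_o = 3
print(precision_recall([1, 2, 3, 9], {1, 2, 3, 4}))       # (0.75, 0.75)
```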

For the Holidays, Paris6k, and Oxford5k datasets, the mean average precision (mAP) is generally used as the evaluation index. Based on the precision-recall curve, average precision (AP) is obtained by averaging the precision values corresponding to a set of recall values.

$A = \dfrac{1}{n}\sum_{i=0}^{n-1} P_{\text{inter}}(r_i)$,     (5)

where $A$ represents AP; $n$ represents the total number of interpolation points; $r_i$ ($i = 0, 1, \dots, n-1$) is the $i$-th interpolated recall value; $P_{\text{inter}}(r_i)$ is the interpolated precision at that recall value.

The average value of the APs over all categories is

$m = \dfrac{1}{k}\sum_{i=1}^{k} A_i$,     (6)

where $m$ represents mAP; $k$ represents the total number of categories; $A_i$ represents the AP of each category. In the best case, the maximum value of mAP is 100%.
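
The following sketch evaluates Eqs. (5) and (6) on a toy precision-recall curve; the 11-point interpolation (n = 11) is an assumption, since the paper only states that n interpolation points are used.

```python
import numpy as np

def average_precision(precisions, recalls, n_points=11):
    """Eq. (5): interpolated AP, averaged over n evenly spaced recall points."""
    precisions = np.asarray(precisions, dtype=float)
    recalls = np.asarray(recalls, dtype=float)
    ap = 0.0
    for r in np.linspace(0.0, 1.0, n_points):
        mask = recalls >= r
        p_inter = precisions[mask].max() if mask.any() else 0.0   # interpolated precision at recall r
        ap += p_inter / n_points
    return ap

def mean_average_precision(ap_values):
    """Eq. (6): mAP is the mean of the per-category (or per-query) AP values."""
    return sum(ap_values) / len(ap_values)

# toy precision-recall curve of one query
ap = average_precision([1.0, 0.5, 0.67, 0.5], [0.25, 0.25, 0.5, 0.5])
print(round(ap, 3), mean_average_precision([ap, 1.0]))
```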

    3.2 Experiment details

Distance is used to measure similarity. Typical distance measures include the Manhattan, Euclidean, cosine, Chebyshev, Hamming, and correlation distances. Here, cosine similarity is used to measure the similarity between features. Cosine similarity compares the directions of feature vectors in the inner product space.

$\cos\theta = \dfrac{\boldsymbol{a}\cdot\boldsymbol{b}}{\|\boldsymbol{a}\|\,\|\boldsymbol{b}\|}$,     (7)

where $\boldsymbol{a}$ and $\boldsymbol{b}$ are the two vectors whose similarity is to be calculated, and $\theta$ is the angle between them.

As shown in Eq. (7), the similarity range is [0, 1]. The smaller the angle between the two vectors, the closer the similarity is to 1.

The L2 normalization is used:

$\boldsymbol{y}_i = \dfrac{\boldsymbol{x}_i}{\sqrt{\sum_{j} x_{ij}^2}}$,     (8)

where $\boldsymbol{x}_i$ ($i = 1, 2, \dots, n$) represents a vector that needs L2 normalization; $x_{ij}$ is the $j$-th component of $\boldsymbol{x}_i$; $\boldsymbol{y}_i$ represents the L2-normalized vector; $n$ represents the total number of vectors to be L2-normalized.
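
Putting Eqs. (7) and (8) together, the sketch below L2-normalizes the descriptors and ranks a gallery by cosine similarity; the tensor shapes and the function name are illustrative.

```python
import torch
import torch.nn.functional as F

def rank_by_cosine(query_feat, gallery_feats):
    """Eqs. (7)-(8): L2-normalize features, then rank the gallery by cosine similarity.

    query_feat:    [D] descriptor of the query image.
    gallery_feats: [N, D] descriptors of the database images.
    """
    q = F.normalize(query_feat, dim=-1)          # Eq. (8): divide by the L2 norm
    g = F.normalize(gallery_feats, dim=-1)
    sims = g @ q                                 # after normalization, the dot product equals cos(theta)
    return sims.argsort(descending=True)         # most similar images first

gallery = torch.randn(1000, 768)
order = rank_by_cosine(torch.randn(768), gallery)
print(order[:4])                                 # indices of the top-4 candidates
```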

The experiments are carried out on a GPU workstation. The configuration includes an Intel Xeon Bronze 3204 CPU, 64 GB of memory, two RTX 2080Ti graphics cards, and the Ubuntu 20.04 operating system. The deep learning framework is PyTorch, version 1.7.1. The MAE implementation is Meta's open-source PyTorch version. Since the implementation of the ViT module in MAE relies on the Timm library, the ViT from Timm 0.3.2 is used in this paper. Timm is an excellent open-source library of visual neural network models, which aims to integrate various SOTA models and can reproduce ImageNet training results.

The fine-tuning experiments in this paper combine data from the Ukbench, Holidays, Oxford5k, and Paris6k datasets into one large dataset for training. The pre-trained checkpoint is the visualization checkpoint released with MAE, because only that checkpoint contains the decoder layer parameters; its performance is therefore slightly worse than that of the non-visualization checkpoint.

For the base model, the image needs to be resized to 224 pixel×224 pixel. The optimizer is AdamW with a base learning rate of 1.5×10⁻⁴, a weight decay of 0.05 and momentum parameters of 0.90 and 0.95; the batch size is 32 and 200 epochs are trained. Figure 5 shows the trend of the loss and the learning rate during training of the base model after the decoder is frozen. It can be seen that over 200 epochs the loss declines steadily, indicating that the training is effective.

    Fig.5 Trend of loss and learning rate during training: (a) loss; (b) learning rate
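
A minimal sketch of this fine-tuning setup is given below. DummyMAE is a stand-in for the real checkpoint (assumed here to expose encoder and decoder submodules), and the cosine learning-rate schedule is an assumption suggested by the decreasing trend in Fig. 5(b).

```python
import torch
import torch.nn as nn

class DummyMAE(nn.Module):
    """Stand-in for the real MAE checkpoint, just to make the sketch runnable."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(768, 768)
        self.decoder = nn.Linear(768, 768)

def build_finetune_optimizer(model, base_lr=1.5e-4, weight_decay=0.05, epochs=200):
    for p in model.decoder.parameters():      # freeze the decoder, as described in Section 2.2
        p.requires_grad = False
    params = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.AdamW(params, lr=base_lr, betas=(0.90, 0.95),
                                  weight_decay=weight_decay)
    # The cosine decay below is an assumption; the paper reports only the base rate and Fig. 5(b).
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)
    return optimizer, scheduler

opt, sched = build_finetune_optimizer(DummyMAE())
```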

    4 Results and Discussion

The performance of different fine-tuning methods on the Ukbench, Holidays, Oxford5k, and Paris6k datasets is listed in Table 1. Visualize-raw is the pre-trained MAE visualization checkpoint model; visualize-finetuned is the model fine-tuned from that checkpoint; visualize-nodecoder is the model fine-tuned from that checkpoint after freezing the decoder layers. The "base" in base-cls refers to the use of the base model to extract image features, and the "cls" refers to the use of the class vector to represent image features. The base model is the standard MAE model, and the large model is a model with more transformer blocks stacked in the encoder.

    Table 1 Results on various datasets using different fine-tuning methods

It can be seen that the fine-tuning method brings improvements on the different datasets. The improvement on the Ukbench and Holidays datasets is modest. However, for the Oxford5k and Paris6k datasets, the mAP of the base model increases by 7% and 17%, respectively, and that of the large model increases by 9% and 19%, respectively. The Ukbench and Holidays datasets consist of everyday photographs; such pictures account for a part of the data used in pre-training the MAE model, so the improvement after fine-tuning is small. The Oxford5k and Paris6k datasets mostly contain landmark buildings, which are less likely to appear during pre-training, so there is a noticeable improvement after fine-tuning with the method in this paper.

Table 2 shows the comparison between the retrieval results obtained by using average pooling (avg) and the cls vector as the feature, where finetuned-raw is MAE's checkpoint model with supervised fine-tuning on ImageNet. For the pre-trained model, the cls vector is significantly better than the average pooling method, and the cls vector still works better after self-supervised fine-tuning of the pre-trained model. After supervised fine-tuning of the pre-trained model, however, the average pooling method is more conducive to feature extraction. This phenomenon can be explained as follows: in self-supervised pre-training, the cls vector performs global self-attention and is already a good image descriptor; in supervised fine-tuning, the cls vector is used as the input of the classification head, so it pays more attention to specific local objects and loses some global information, and the average pooling method therefore performs better. Hence, after self-supervised fine-tuning, using the cls vector as the image feature is the better choice.

    Table 2 Comparison of average pooling and cls vector as feature recognition results

For different versions of the same deep learning network, the performance on the same dataset can differ considerably. Generally, a self-supervised model is worse than a supervised one. As shown in Table 2, there is little difference in retrieval accuracy between unsupervised fine-tuning using cls vectors and supervised fine-tuning using cls vectors on the Holidays, Oxford5k, and Paris6k datasets. Supervised training pays more attention to specific objects (or local features) during training. In image retrieval, more attention is also paid to specific objects in the image, so supervised models usually perform better in retrieval. A look at the heat maps (Fig.6) shows that supervised training pays more attention to the local objects we expect to search for, which undoubtedly improves retrieval accuracy. The left side of each group is the heat map of the self-supervised pre-trained model, and the right side is the heat map after supervised fine-tuning. Careful observation shows that the left map of each group generally pays more attention to the background of the sky and the ground, whereas the right map pays more attention to local buildings or reduces the attention to the background.

    Fig.6 Eight groups of heat maps: (a) group 1; (b) group 2; (c) group 3; (d) group 4; (e) group 5; (f) group 6; (g) group 7; (h) group 8

    5 Conclusions

In this paper, a self-supervised fine-tuning method is used to extract image descriptors in the field of image retrieval. The method achieves good results on different datasets while requiring less time for image retrieval. The self-supervised fine-tuning method in this paper obtains global features, and the global self-attention performed by the ViT architecture further strengthens them. Local features could be fused in to obtain better results. With the continuous advancement of self-supervised learning and further optimization of the ViT architecture, such methods will receive more attention in image retrieval.
