
    Image Retrieval with Text Manipulation by Local Feature Modification

2023-09-22

    ZHA Jianhong(查劍宏), YAN Cairong(燕彩蓉)*, ZHANG Yanting (張艷婷)*, WANG Jun(王 俊)

1 College of Computer Science and Technology, Donghua University, Shanghai 201620, China
2 College of Fashion and Design, Donghua University, Shanghai 200051, China

Abstract: The demand for image retrieval with text manipulation exists in many fields, such as e-commerce and Internet search. Most researchers use deep metric learning methods to calculate the similarity between the query and the candidate image by fusing the global feature of the query image and the text feature. However, the text usually corresponds to the local feature of the query image rather than the global feature. Therefore, in this paper, we propose a framework for image retrieval with text manipulation by local feature modification (LFM-IR), which can focus on the related image regions and attributes and perform modification. A spatial attention module and a channel attention module are designed to realize the semantic mapping between image and text. We achieve excellent performance on three benchmark datasets, namely Color-Shape-Size (CSS), Massachusetts Institute of Technology (MIT) States and Fashion200K (+8.3%, +0.7% and +4.6% in R@1).

Key words: image retrieval; text manipulation; attention; local feature modification

    Introduction

Image retrieval is a research hotspot in computer vision. Current image retrieval systems usually take a text or an image as input, namely text-to-image[1] and image-to-image retrieval[2]. However, in actual scenarios, a single image or plain text usually cannot accurately express the user’s intentions. Therefore, many researchers now integrate other types of data into the query in order to improve the retrieval performance, such as attributes[3-4], spatial layout[5], or modification text[6-8]. In this paper, we focus on the task of image retrieval with text manipulation. This retrieval mode is common in many fields. For example, in the field of fashion retrieval[9], users can find the fashion products they want more accurately by inputting an image and a description.

    The challenges of image retrieval tasks with text manipulation are mainly reflected in two aspects. Firstly, since different modal data exist in different feature spaces with different representations and distribution characteristics, the model should be able to handle the modal gap between the query and the target image. Secondly, the model not only needs to understand the text and image, but also needs to associate the visual features of the image with the semantic features of the text.

Considering the above challenges, most researchers[6-8] obtain the representation of the query through some method of feature fusion, and then compute the similarity between the query and the candidate images through deep metric learning methods[10]. For example, the text image residual gating (TIRG) model[6] modifies the image feature via a gated residual connection to make the new feature close to the target image feature. The joint visual semantic matching (JVSM) model[7] learns image-text compositional embeddings by jointly associating visual and textual modalities in a shared discriminative embedding space via compositional losses. The common point of these models is that they obtain the representation of the query by fusing the global feature of the image with the text feature. However, the desired modification usually corresponds to the local feature of the query image rather than the global feature, so the model should learn to modify the local feature of the query image.

The attention mechanism[11-12] has been a popular technique in recent years and is widely used in many areas of deep learning, for example, image retrieval[13-14] and scene segmentation[15]. In this paper, we use the attention mechanism to focus on the local feature of the query image and then modify it. Specifically, we first use a spatial attention module to locate the regions to be modified, then use a channel attention module to obtain the specific attributes to be modified, and finally modify them to obtain the modified query image feature.

    The main contributions can be summarized as follows.

    1) We propose an image retrieval framework based on local feature modification (LFM-IR) to deal with the task of image retrieval with text manipulation.

    2) We design a spatial attention module to locate the image regions to be modified and a channel attention module to focus on the attributes to be modified.

3) We achieve excellent performance for compositional image retrieval on three benchmark datasets: Color-Shape-Size (CSS), Massachusetts Institute of Technology (MIT) States, and Fashion200K.

    1 Proposed Method

    1.1 Network structure

Given a query image x and a modification text t, LFM-IR modifies the local feature of the query image through the text feature to obtain the modified image feature f(x, t) = φ_xt ∈ R^C, where C is the number of feature channels. The modified image feature does not change its feature space. Therefore, for the target image, we can extract its feature in the same way, and then calculate the similarity between them through a similarity function. Figure 1 shows the structure of LFM-IR. The network consists of a feature extraction module, a spatial attention module, a channel attention module and a feature modification module.

    Fig.1 Structure of LFM-IR

1.1.1 Feature extraction module

For a query image x and a target image y, we use ResNet-18 pre-trained on ImageNet to extract their features. To preserve the spatial information of the query image x, we remove the last fully connected layer when extracting its feature. Therefore, the representation of the query image is φ_x ∈ R^{W×H×C}, where W is the width and H is the height. The feature of the target image y is extracted with the complete ResNet-18 and represented by φ_y ∈ R^C. For the modification text t, we use a standard long short-term memory (LSTM) network to extract its semantic information, represented by φ_t ∈ R^C.

1.1.2 Spatial attention module

Considering that the modification text is usually related to some specific regions of the query image, we only need to modify the related regions, i.e., the image regions that need to be modified. Since the related regions are not fixed, we propose a spatial attention module to adaptively capture the related regions of the query image. Given a query image x and a modification text t, we need to obtain the spatial attention weights α_s ∈ R^{W×H}. Specifically, we transform the text feature φ_t to the same dimensions as the query image feature φ_x: we obtain the transformed text feature φ′_t ∈ R^{W×H×C} through spatial duplication, and then the spatial attention weight α_s is computed as

α_s = σ(L_c(*(φ_x ⊙ φ′_t))),    (1)

where * represents the batch normalization layer, ⊙ represents element-wise multiplication, L_c indicates a convolutional layer with a 3×3 convolution kernel, and σ is the sigmoid function. We mark the regions of the query image that need to be modified as R_1, and the regions that do not need to be modified as R_2. With adaptive attention weights, the feature of R_1 can be computed as

φ_R1 = L_1(F_avg(α_s φ_x)),    (2)

and the feature of R_2 can be computed as

φ_R2 = L_1(F_avg((1 − α_s) φ_x)),    (3)

where φ_R1 and φ_R2 are the features of R_1 and R_2, with φ_R1 ∈ R^C and φ_R2 ∈ R^C; F_avg represents a global average pooling layer, and L_1 is a fully connected layer.
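The spatial attention computation of Eqs. (1)-(3) can be sketched as follows. This is an illustrative NumPy mock-up rather than the authors' implementation: random weights stand in for the learned layers L_c and L_1, and a simple per-channel normalization stands in for batch normalization.

```python
import numpy as np

rng = np.random.default_rng(0)
W, H, C = 7, 7, 16  # toy spatial size and channel count (illustrative)

phi_x = rng.standard_normal((W, H, C))  # query image feature, W x H x C
phi_t = rng.standard_normal(C)          # text feature, length C

# Spatial duplication: tile the text feature to W x H x C.
phi_t_tiled = np.broadcast_to(phi_t, (W, H, C))

fused = phi_x * phi_t_tiled  # element-wise multiplication (the ⊙ in Eq. (1))

# Batch-norm stand-in: normalize each channel over the spatial grid.
fused = (fused - fused.mean(axis=(0, 1))) / (fused.std(axis=(0, 1)) + 1e-5)

# 3x3 convolution L_c mapping C channels to one attention map, written as
# an explicit sliding window (random weights for this sketch).
kernel = rng.standard_normal((3, 3, C)) * 0.1
padded = np.pad(fused, ((1, 1), (1, 1), (0, 0)))
logits = np.zeros((W, H))
for i in range(W):
    for j in range(H):
        logits[i, j] = np.sum(padded[i:i + 3, j:j + 3, :] * kernel)

alpha_s = 1.0 / (1.0 + np.exp(-logits))  # sigmoid -> weights in (0, 1)

# Eqs. (2)/(3): weighted global average pooling, then a linear layer L_1.
W1 = rng.standard_normal((C, C)) * 0.1  # fully connected layer (random here)
phi_R1 = W1 @ (alpha_s[..., None] * phi_x).mean(axis=(0, 1))          # related regions
phi_R2 = W1 @ ((1.0 - alpha_s)[..., None] * phi_x).mean(axis=(0, 1))  # unrelated regions
print(alpha_s.shape, phi_R1.shape)  # (7, 7) (16,)
```

Because the sigmoid keeps every weight strictly between 0 and 1, both region features see every location, just with complementary emphasis.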

1.1.3 Channel attention module

Although the spatial attention module can focus on the related regions of the query image, there are many attributes in those regions, while the modification text may concern only one of them; for example, we may modify only the size, color or shape of a certain entity. Therefore, after focusing on the related regions, we need to focus on the related attributes, i.e., the attributes that need to be modified. Specifically, we first fuse the text feature φ_t and the feature of R_1 through a simple concatenation, and then feed it into a fully connected layer to obtain the channel attention weight α_c ∈ R^C. Formally, the attention weight α_c is computed as

α_c = σ(L_2(δ(*[φ_R1, φ_t]))),    (4)

where L_2 is a fully connected layer, [ ] represents the concatenation operation, and δ is the ReLU function. With adaptive attention weights, the related attributes φ_A1 ∈ R^C can be represented as

φ_A1 = α_c ⊙ φ_R1,    (5)

and the unrelated attributes φ_A2 ∈ R^C can be represented as

φ_A2 = (1 − α_c) ⊙ φ_R1.    (6)
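Under the same caveat as before (an illustrative NumPy sketch with a random weight matrix for L_2 and a simple normalization in place of batch normalization), Eqs. (4)-(6) amount to:

```python
import numpy as np

rng = np.random.default_rng(1)
C = 16  # channel count (illustrative)

phi_R1 = rng.standard_normal(C)  # feature of the related regions (Eq. (2))
phi_t = rng.standard_normal(C)   # text feature

# Eq. (4): concatenate, normalize (batch-norm stand-in), ReLU, linear, sigmoid.
cat = np.concatenate([phi_R1, phi_t])          # [phi_R1, phi_t], length 2C
cat = (cat - cat.mean()) / (cat.std() + 1e-5)  # stand-in for the * layer
hidden = np.maximum(cat, 0.0)                  # ReLU (δ)
W2 = rng.standard_normal((C, 2 * C)) * 0.1     # fully connected layer L_2 (random here)
alpha_c = 1.0 / (1.0 + np.exp(-(W2 @ hidden)))  # sigmoid -> channel weights

# Eqs. (5)/(6): split phi_R1 into related and unrelated attributes.
phi_A1 = alpha_c * phi_R1          # related attributes
phi_A2 = (1.0 - alpha_c) * phi_R1  # unrelated attributes

# By construction the two parts sum back to phi_R1 exactly.
print(np.allclose(phi_A1 + phi_A2, phi_R1))  # True
```

The complementary weights α_c and 1 − α_c make the split lossless: nothing in φ_R1 is discarded, only re-routed between the "modify" and "keep" paths.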

1.1.4 Feature modification module

After determining the related regions and attributes, we need to modify the related attributes. Specifically, we first fuse the text feature φ_t and the related attributes φ_A1 in R_1 through a simple concatenation, and then feed it into a multilayer perceptron (MLP) to obtain the modified attribute representation φ′_A1 ∈ R^C. Formally, φ′_A1 is calculated as

φ′_A1 = L_M([φ_A1, φ_t]),    (7)

where L_M is the MLP. Then we merge φ′_A1 with the unrelated attributes φ_A2 to obtain the feature of the modified R_1, denoted as φ′_R1 ∈ R^C. Formally, φ′_R1 is calculated as

φ′_R1 = w_1 φ′_A1 + w_2 φ_A2,    (8)

where w_1 and w_2 are learnable weights. Finally, we fuse the features of the two regions to obtain the modified query image feature φ_xt:

φ_xt = w_3 φ′_R1 + w_4 φ_R2,    (9)

where w_3 and w_4 are learnable weights.
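Eqs. (7)-(9) can be sketched in the same illustrative NumPy style; here a random two-layer network stands in for the MLP L_M, and the scalar weights w_1 to w_4, which are learnable in the model, are simply fixed to 1:

```python
import numpy as np

rng = np.random.default_rng(2)
C = 16  # channel count (illustrative)

phi_A1 = rng.standard_normal(C)  # related attributes (Eq. (5))
phi_A2 = rng.standard_normal(C)  # unrelated attributes (Eq. (6))
phi_R2 = rng.standard_normal(C)  # feature of the unrelated regions (Eq. (3))
phi_t = rng.standard_normal(C)   # text feature

# Eq. (7): a two-layer MLP L_M over the concatenation [phi_A1, phi_t]
# (random weights for this sketch).
W_a = rng.standard_normal((2 * C, 2 * C)) * 0.1
W_b = rng.standard_normal((C, 2 * C)) * 0.1
hidden = np.maximum(W_a @ np.concatenate([phi_A1, phi_t]), 0.0)  # ReLU hidden layer
phi_A1_mod = W_b @ hidden  # modified attribute representation

# Eqs. (8)/(9): weighted merges; the learnable scalars are set to 1 here.
w1 = w2 = w3 = w4 = 1.0
phi_R1_mod = w1 * phi_A1_mod + w2 * phi_A2  # feature of the modified R_1
phi_xt = w3 * phi_R1_mod + w4 * phi_R2      # final modified query feature
print(phi_xt.shape)  # (16,)
```

Because only φ_A1 passes through the MLP while φ_A2 and φ_R2 are merged back unchanged, the modification is confined to the attended attributes of the attended regions.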

    1.2 Similarity learning

Following TIRG[6], we employ two different loss functions, namely the soft triplet loss and the batch classification loss. The soft triplet loss is defined as

L_ST = (1/|B|) Σ_{i∈B} Σ_{j∈B, j≠i} log{1 + exp[s(φ_xt,i, φ_y,j) − s(φ_xt,i, φ_y,i)]},    (10)

where s(·, ·) represents the similarity function, B is the batch, and φ_xt,i and φ_y,i denote the modified query feature and the target image feature of the i-th example in the batch. Following TIRG[6], we use the dot product as the similarity function. The batch classification loss is defined as

L_BC = −(1/|B|) Σ_{i∈B} log[exp(s(φ_xt,i, φ_y,i)) / Σ_{j∈B} exp(s(φ_xt,i, φ_y,j))].    (11)

According to the experience of predecessors[6], we employ the soft triplet loss for the CSS[6] and MIT States[16] datasets, and the batch classification loss for the Fashion200K[17] dataset.
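Both objectives can be illustrated with a toy NumPy sketch (not the paper's implementation): with dot-product similarities over a batch, the diagonal of the similarity matrix holds the matching query-target pairs, the soft triplet loss penalizes off-diagonal entries that outscore the diagonal, and the batch classification loss is a softmax cross-entropy over in-batch targets.

```python
import numpy as np

rng = np.random.default_rng(3)
B, C = 4, 16  # batch size and feature dimension (illustrative)

q = rng.standard_normal((B, C))  # modified query features
y = rng.standard_normal((B, C))  # target image features

sim = q @ y.T  # dot-product similarity for every query/target pair

# Soft triplet loss: the matching target (diagonal) should score higher
# than every in-batch negative (off-diagonal).
pos = np.diag(sim)
terms = []
for i in range(B):
    for j in range(B):
        if j != i:
            terms.append(np.log1p(np.exp(sim[i, j] - pos[i])))
soft_triplet = float(np.mean(terms))

# Batch classification loss: softmax cross-entropy over in-batch targets.
log_probs = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
batch_cls = float(-np.mean(np.diag(log_probs)))

print(soft_triplet > 0 and batch_cls > 0)  # True
```

Both losses vanish only as the diagonal similarities dominate, so minimizing either pulls each modified query feature toward its own target and away from the other targets in the batch.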

    2 Experiments

    In this section, we conduct extensive experiments to evaluate LFM-IR. The experiments answer the following three questions.

1) RQ1: How does LFM-IR perform on these datasets?

2) RQ2: Can the spatial attention module accurately focus on the regions that need to be modified?

3) RQ3: How do different attention modules improve model performance?

    2.1 Datasets

To evaluate LFM-IR, we conduct extensive experiments on three datasets: CSS[6], MIT States[16] and Fashion200K[17]. A short description of each dataset follows.

CSS[6] is a synthetic dataset containing complex modification texts. It contains about 16 000 training queries and 16 000 test queries; each query contains a query image, a modification text and a target image. Each image contains some geometric objects of different shapes, sizes and colors, and there are three types of modification texts: adding, removing or changing object attributes.

The MIT States[16] dataset contains approximately 60 000 images; each image has a noun label and an adjective label, which respectively represent the object and the state of the object in the image. The query image and the target image have the same noun label but different adjective labels, and the modification text is the desired state of the object. The training set contains about 43 000 queries and the test set contains about 10 000 queries.

Fashion200K[17] is a dataset in the fashion field. It contains about 200 000 images of fashion products, and each image has a corresponding multi-word fashion label. The labels of the query image and the target image differ by only one word, and the modification text is a description of the label difference. The training set contains about 172 000 queries, and the test set contains about 31 000 queries.

    2.2 Experiments settings

We choose the following models as baselines: Image only[6], Text only[6], Concatenation[6], Relationship[18], Multimodal Residual Networks (MRN)[19], Feature-wise Linear Modulation (FiLM)[20], Show and Tell[21] and TIRG[6]. In addition, we compare LFM-IR with Joint Attribute Manipulation and Modality Alignment Learning (JAMMAL)[8]. The retrieval metric is recall at rank K (R@K, K = {1, 5, 10, 50}), which represents the proportion of queries whose correctly labeled image appears in the top K retrieved images. Each experiment is repeated 5 times for a stable result, and the mean and standard deviation are reported.
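The R@K metric can be computed as in the following sketch; the similarity matrix and target indices here are toy values, not results from the paper.

```python
import numpy as np

def recall_at_k(sim, targets, k):
    """R@K: fraction of queries whose correct target is in the top-K results."""
    topk = np.argsort(-sim, axis=1)[:, :k]  # indices of the K most similar images
    hits = [t in row for t, row in zip(targets, topk)]
    return float(np.mean(hits))

# Toy similarity matrix: 3 queries x 4 candidate images.
sim = np.array([[0.9, 0.1, 0.2, 0.0],
                [0.2, 0.1, 0.8, 0.3],
                [0.5, 0.6, 0.1, 0.4]])
targets = [0, 2, 3]  # index of the correct image for each query

print(recall_at_k(sim, targets, 1))  # 2 of 3 queries hit at rank 1
print(recall_at_k(sim, targets, 3))  # all targets appear within the top 3
```

R@K is monotone in K by construction, which is why reporting several cut-offs (1, 5, 10, 50) gives a fuller picture of ranking quality than any single one.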

    2.3 Performance comparison (RQ1)

We evaluate the retrieval performance of existing models and LFM-IR on three benchmark datasets: CSS, MIT States and Fashion200K. The results are shown in Table 1, with the best number in bold. Besides, Fig. 2 shows some qualitative examples of LFM-IR; each row shows a text, a query image, and the retrieved images. The examples are from the CSS, MIT States, and Fashion200K datasets, respectively, and the green rectangle indicates the correct target image.

Fig.2 Some qualitative examples of LFM-IR: (a) CSS; (b) MIT States; (c) Fashion200K

Column 2 in Table 1 shows the R@K performance on the CSS dataset; we use the 3D version of the dataset (both query and target images are 3D). We report only R@1, which is sufficient to distinguish the models on this dataset. The results show that LFM-IR performs the best, gaining an 8.3% improvement in R@1 over JAMMAL.

Columns 3-5 in Table 1 show the results of LFM-IR and other models on the MIT States dataset, where we report R@K (K = {1, 5, 10}). The results show that LFM-IR is superior to most models and comparable to the state-of-the-art model. In particular, compared with JAMMAL, LFM-IR achieves a 0.7% improvement in R@1, but in terms of R@5 and R@10, JAMMAL performs better. We believe this is because the text information of this dataset is too simple, so the attention mechanism cannot accurately focus on the regions that need to be modified. In addition, all models except JAMMAL use ResNet-18 as the image encoder, while JAMMAL uses ResNet-101; therefore, the image features extracted by JAMMAL are more expressive.

Columns 6-8 in Table 1 show the results on the Fashion200K dataset, where we report R@K (K = {1, 10, 50}). The results show that although the modification text of this dataset is relatively simple, LFM-IR still outperforms the state-of-the-art model, gaining a 4.6% improvement in R@1 over JAMMAL.

    Table 1 Performance on CSS, MIT States and Fashion200K datasets

    2.4 Attention visualization (RQ2)

To further examine what the model learns, we visualize the spatial attention module of LFM-IR on the CSS dataset, whose text information is richer than that of the other datasets. As shown in Fig. 3, the spatial attention module pays more attention to the regions that need to be modified and less to the regions that do not. The experiment shows that the spatial attention module can accurately focus on the regions that need to be modified.

Fig.3 Visualization of the spatial attention module: (a) examples of removing an object; (b) examples of changing attributes; (c) examples of adding an object

    2.5 Ablation studies (RQ3)

To explore the effects of the spatial attention and channel attention modules on the proposed method, we conduct ablation studies on the three benchmark datasets and choose R@1 as the evaluation metric. After removing both attention modules, LFM-IR degenerates into the concatenation model. The results are shown in Table 2.

    Table 2 Retrieval performance of ablation studies

The results show that, compared with the concatenation model, adding the spatial attention module or the channel attention module can significantly improve the performance. Adding both attention modules at the same time yields better retrieval performance than adding either one alone.

    3 Conclusions

In this paper, we propose a novel method based on local feature modification for image retrieval with text manipulation. Through a unified feature space, texts are mapped to different image regions so that we can find where to modify and what to modify. The semantic mapping relationship between text and image is established by the proposed spatial attention module and channel attention module. Extensive experiments are conducted on three benchmark datasets, showing the superiority of LFM-IR. In the future, we will try to use more complex feature extraction networks to enhance the expressive ability of the features and further improve the performance of LFM-IR.
