
    Clothing Parsing Based on Multi-Scale Fusion and Improved Self-Attention Mechanism


    CHEN Nuo(陳 諾), WANG Shaoyu(王紹宇), LU Ran (陸 然), LI Wenxuan(李文萱), QIN Zhidong(覃志東), SHI Xiujin(石秀金)

    College of Computer Science and Technology, Donghua University, Shanghai 201620, China

Abstract: Due to the lack of long-range association and spatial location information, existing deep learning-based methods cannot always recover the fine details and accurate boundaries of complex clothing images. This paper presents a convolutional structure with multi-scale fusion to optimize the clothing feature extraction step and a self-attention module to capture long-range association information. The structure enables the self-attention mechanism to participate directly in the information exchange process through the down-scaling projection operation of the multi-scale framework. In addition, the improved self-attention module introduces the extraction of two-dimensional relative position information to compensate for its inability to extract spatial position features from clothing images. Experimental results on the colorful fashion parsing dataset (CFPD) show that the proposed network structure achieves 53.68% mean intersection over union (mIoU) and performs better on the clothing parsing task.

Key words: clothing parsing; convolutional neural network; multi-scale fusion; self-attention mechanism; vision Transformer

    0 Introduction

In recent years, with the continuous development of the garment industry and the steady improvement of e-commerce platforms, people have gradually begun to pursue personalized clothing matching. Virtual fitting technology enables consumers to obtain accurate clothing information and achieve clothing matching. At the same time, designers can learn consumers' shopping preferences through trend forecasting to grasp current fashion trends. For this purpose, parsing techniques that can extract various types of fashion items from complex images are a prerequisite. However, clothing parsing is limited by factors such as varying clothing styles, models and environments. To address this problem, researchers have proposed solutions from the perspectives of pattern recognition[1-2] and deep learning[3-6].

Recently, researchers have tried to introduce a model named Transformer to vision tasks. Dosovitskiy et al.[7] proposed the vision Transformer (ViT), which divides an image into multiple patches and feeds the sequence of linear embeddings of these patches, together with position embeddings, into a Transformer. This achieves surprising performance on large-scale datasets. On this basis, researchers explored integrating the Transformer into traditional encoder-decoder frameworks to adapt it to image parsing tasks. Zheng et al.[8] proposed the segmentation Transformer (SETR), which consists of a pure Transformer and a simple decoder, confirming the feasibility of ViT for image parsing tasks. Inspired by frameworks such as SETR, Xie et al.[9] proposed SegFormer, which does not require positional encoding and avoids complex decoders. Liu et al.[10] improved the patch form of ViT and proposed the Swin Transformer, which achieves better results with shifted windows and relative positional encoding.

Despite the great promise of applying the Transformer to image parsing, several challenges remain when applying the Transformer architecture to parse clothing images. First, due to time-complexity limitations, current Transformer architectures either use the flattened patches of an image as the input sequence for self-attention or feed the low-resolution feature map of a convolutional backbone into the Transformer encoder. However, for complex clothing images, feature maps of different scales affect the final parsing results. Second, the pure Transformer performs poorly on small-scale datasets because it lacks the inductive bias suited to visual tasks, whereas the clothing parsing task lacks large-scale data to meet the data requirements of a pure Transformer, given the rich variety and high resolution of practical applications.

In this paper, we propose a network named MFSANet based on multi-scale fusion and improved self-attention. MFSANet combines the respective advantages of convolution and self-attention for clothing parsing. We follow the framework design of the high-resolution network (HRNet)[11] to extract and exchange long-range association information and position information at each stage. MFSANet achieves good results on a general clothing parsing dataset and is expected to generalize well to downstream tasks.

    1 Related Work

    1.1 Multi-scale fusion

Convolutional neural networks applied to image parsing essentially abstract the target image layer by layer to extract features at each level. Since Zeiler et al.[12] proposed the visualization of convolutional neural networks, formally describing the differences between deep and shallow feature maps in terms of geometric and semantic information, more and more researchers have focused on multi-scale fusion networks. Lin et al.[13] proposed the feature pyramid network (FPN) with a multi-scale pyramidal structure. It served as a template for subsequent pyramid networks, downsampling features at the bottom layers and then recovering and fusing features layer by layer to obtain high-resolution features. Chen et al.[14] proposed DeepLab with atrous convolution, which obtains multi-scale information by adjusting the receptive field of the filter. HRNet, with its parallel convolution structure, maintains and aggregates features at each resolution, thus enhancing the extraction of multi-scale features. Inspired by the strong aggregation capability of the HRNet framework, we incorporate the long-range association information of the Transformer to achieve clearer parsing.

    1.2 Transformer

As the first work to transfer the Transformer to image tasks with few changes and achieve state-of-the-art results on large-scale image datasets, ViT splits the image into multiple patches, attaches positional embeddings, and feeds the resulting sequence into the Transformer encoder. Based on that, Wang et al.[15] constructed the pyramid vision Transformer (PVT) with a pyramid structure, demonstrating the feasibility of applying the Transformer to multi-scale fusion structures. Chu et al.[16] proposed Twins, which enhances the Transformer's representation of hierarchical features without an absolute position embedding, optimizing Transformer performance on dense prediction tasks.

    2 Methods

    2.1 Overall architecture

Figure 1 highlights the overall structure of MFSANet. We combine the characteristics of convolution and the Transformer to extract local features and long-range information, and use the multi-scale fusion structure to exchange information. Meanwhile, the inductive bias of the convolutional part of the hybrid network effectively compensates for the Transformer's reliance on large-scale training samples. We place the Transformer module on the high-resolution feature maps that have taken part in more rounds of multi-scale fusion, so that it can make fuller use of information from multiple scales while incorporating long-range association information.

    2.2 Multi-scale representation

Unlike the linear pipeline of hybrid networks such as SETR and SegFormer, our network performs multi-scale parallel feature extraction on the input image. These multi-scale features cover different local features of the image at different scales, which fully extracts the rich semantic information in clothing images and improves parsing performance. After the parallel feature extraction is completed, a fusion operation exchanges multi-scale information, so that high- and low-resolution semantic information jointly contributes to the feature map at each scale and every scale carries richer information. Specifically, for stage $k \in \{2, 3, 4\}$, represented as the blue region in Fig. 1, given a multi-scale input feature map, the output of the stage $X'$ is expressed as

$X' = \mathrm{Fusion}\big(\mathrm{BasicBlock}(X_1), \mathrm{BasicBlock}(X_2), \cdots, \mathrm{BasicBlock}(X_k)\big),$  (1)

where $X_i$ is the input feature map of the $i$th scale, Fusion denotes the fusion operation on the feature maps, and BasicBlock indicates the corresponding residual or Transformer operation (in Fig. 1, MHSA denotes multi-head self-attention and MLP denotes multi-layer perceptron). Each stage receives all the scale feature maps output from the previous stage and, after processing and fusion, generates the input feature maps of the next stage. As shown in Fig. 1, there are two basic blocks inside the stages, i.e., the Transformer basic block and the residual basic block. The former is responsible for extracting long-range association information from the high-resolution feature maps. The latter is responsible for extracting local feature information from the low- and medium-resolution feature maps and for completing the information exchange in the multi-scale fusion. In particular, the high-resolution feature map of stage 1 also passes through the residual basic block to complete the local feature extraction of the high-resolution feature map.
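To make Eq. (1) concrete, the following is a minimal PyTorch-style sketch of one fusion stage. It is not the paper's implementation: the residual/Transformer basic blocks are replaced by a plain convolutional block, and the 1×1 channel-alignment convolutions and bilinear resizing used for the cross-scale exchange are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionStage(nn.Module):
    """One multi-scale stage: a per-scale basic block followed by all-to-all fusion, as in Eq. (1)."""
    def __init__(self, channels):                    # channels: one entry per scale, high -> low resolution
        super().__init__()
        # Stand-ins for the residual / Transformer basic blocks of the paper.
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Conv2d(c, c, 3, padding=1), nn.BatchNorm2d(c), nn.ReLU(inplace=True))
            for c in channels)
        # 1x1 convolutions to align channel counts when exchanging information across scales (assumed).
        self.align = nn.ModuleList(
            nn.ModuleList(nn.Conv2d(cj, ci, 1) for cj in channels) for ci in channels)

    def forward(self, xs):                            # xs: list of feature maps, one per scale
        feats = [blk(x) for blk, x in zip(self.blocks, xs)]
        outs = []
        for i, target in enumerate(feats):            # fuse every other scale j into scale i
            fused = target
            for j, src in enumerate(feats):
                if j == i:
                    continue
                src = self.align[i][j](src)
                src = F.interpolate(src, size=target.shape[-2:], mode="bilinear", align_corners=False)
                fused = fused + src
            outs.append(fused)
        return outs

# Example: three scales with 32, 64 and 128 channels.
stage = FusionStage([32, 64, 128])
xs = [torch.randn(1, 32, 96, 64), torch.randn(1, 64, 48, 32), torch.randn(1, 128, 24, 16)]
outs = stage(xs)                                      # same shapes as the inputs, information exchanged
```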

    2.3 Improved self-attention mechanism

The standard Transformer is built on the multi-head attention (MHA) module[17], which uses multiple heads to compute self-attention in parallel and finally concatenates and projects the results, enabling the network to jointly attend to information from different representation subspaces at different locations. For simplicity, the following description considers a single attention head. The standard Transformer takes the feature map $X \in \mathbb{R}^{C \times H \times W}$ as input, where $C$ is the number of channels, and $H$ and $W$ are the height and width of the feature map. After projection and flattening, $X$ yields the self-attention inputs $Q, K, V \in \mathbb{R}^{n \times d}$, where $Q$, $K$ and $V$ are the sequences of queries, keys and values, respectively, $n = H \times W$, and $d$ is the per-head sequence dimension. The key to self-attention is a scaled dot product, computed as

$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\dfrac{QK^{\top}}{\sqrt{d}}\right)V.$
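As a reference for the formula above, here is a minimal single-head sketch in PyTorch; the shapes follow the $Q, K, V \in \mathbb{R}^{n \times d}$ notation, and the tensor sizes in the example are illustrative only.

```python
import torch

def scaled_dot_product_attention(q, k, v):
    """q, k, v: (n, d) tensors for a single head; returns the attended values of shape (n, d)."""
    d = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / d ** 0.5   # (n, n) correlation matrix before normalization
    p = torch.softmax(scores, dim=-1)             # row-wise attention weights (the matrix P)
    return p @ v

# Illustrative sizes: n = H * W tokens, d = 64 channels per head.
q = k = v = torch.randn(48 * 32, 64)
out = scaled_dot_product_attention(q, k, v)       # (1536, 64)
```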

The self-attention mechanism builds the correlation matrix between sequences through a matrix dot product, thereby obtaining long-range association information. However, this dot product also brings $O(n^2 d)$ complexity. For high-resolution images such as clothing images, $n$ is very large, so self-attention is limited by the image size; quadratic complexity is unacceptable for current clothing images, which often contain millions of pixels.

For clothing images, most local regions within the image have highly similar features, which makes much of the global inter-pixel association computation redundant. Meanwhile, theory shows that the contextual association matrix $P$ in the self-attention mechanism is of low rank for long sequences[18].

Based on this, we design a self-attention mechanism for high-resolution images, as shown in Fig. 2, and introduce a dimensionality-reduction projection operation to generate the equivalent sequences $K_r = \mathrm{Projection}(K) \in \mathbb{R}^{n_r \times d}$ and $V_r = \mathrm{Projection}(V) \in \mathbb{R}^{n_r \times d}$, where $n_r = H_r \times W_r \ll n$, and $H_r$ and $W_r$ are the reduced height and width of the input after projection, respectively. The modified formula is

$\mathrm{Attention}(Q, K_r, V_r) = \mathrm{softmax}\!\left(\dfrac{QK_r^{\top}}{\sqrt{d}}\right)V_r.$

    Fig.2 Improved self-attention mechanism architecture

Through the dimensionality-reduction projection operation, we reduce the complexity of the matrix computation to $O(n n_r d)$ without a large impact on the Transformer's effectiveness, allowing it to adapt to high-resolution clothing image inputs. Once the width and height are reduced to a fixed value or to proportionally smaller values, the complexity becomes low enough to permit self-attention operations at high resolution.
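The sketch below illustrates the down-scaling projection for a single head and makes the $O(n n_r d)$ saving concrete. The use of adaptive average pooling as the projection is an assumption; the paper only states that the spatial size of $K$ and $V$ is reduced to $H_r \times W_r$.

```python
import torch
import torch.nn.functional as F

def reduced_attention(q, k, v, hw, target=(16, 16)):
    """q, k, v: (n, d) with n = H*W given by `hw`; K and V are projected to n_r = H_r*W_r tokens."""
    (h, w), (hr, wr) = hw, target
    d = q.shape[-1]

    def project(x):                                   # (n, d) -> (n_r, d)
        x = x.transpose(0, 1).reshape(1, d, h, w)     # restore the spatial layout
        x = F.adaptive_avg_pool2d(x, (hr, wr))        # down-scaling projection (pooling is assumed)
        return x.reshape(d, hr * wr).transpose(0, 1)

    kr, vr = project(k), project(v)                   # equivalent sequences K_r and V_r
    scores = q @ kr.transpose(-2, -1) / d ** 0.5      # (n, n_r): complexity O(n * n_r * d)
    return torch.softmax(scores, dim=-1) @ vr

# For H = 144, W = 96: full attention needs n^2 = 13824^2 ≈ 1.9e8 score entries,
# while the reduced version with H_r = W_r = 16 needs n * n_r = 13824 * 256 ≈ 3.5e6.
q = torch.randn(144 * 96, 64)
k = v = torch.randn(144 * 96, 64)
out = reduced_attention(q, k, v, hw=(144, 96))        # (13824, 64)
```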

In the field of vision, there is debate about the roles of relative and absolute position encoding for the Transformer, which led us to explore relative position encoding for clothing parsing. Clothing images are highly structured, so their location features are rich in detailed information. However, the standard Transformer is permutation-equivariant and cannot extract location information. Therefore, we refer to two-dimensional relative position encoding[19] to introduce relative position information as a complement. The attention logit using the relative position from pixel $i = (i_x, i_y)$ to pixel $j = (j_x, j_y)$ is formulated as

$l_{i,j} = \dfrac{q_i^{\top}}{\sqrt{d}}\left(k_j + r_{W,\, j_x - i_x} + r_{H,\, j_y - i_y}\right),$  (2)

where $q_i$ and $k_j$ are the corresponding query and key vectors, and $r_{W,\, j_x - i_x}$ and $r_{H,\, j_y - i_y}$ are learned embeddings for the relative width and relative height, respectively. Similarly, the calculation of the relative position information is also subject to the dimensionality-reduction projection operation. As shown in Fig. 2, $R_H$ and $R_W$, the corresponding learned embeddings, are added to the self-attention mechanism after the projection and aggregation operations. Therefore, the fully improved self-attention mechanism is expressed as

$\mathrm{Attention}(Q, K_r, V_r) = \mathrm{softmax}\!\left(\dfrac{QK_r^{\top} + S_{H_r} + S_{W_r}}{\sqrt{d}}\right)V_r,$  (3)

where $S_{W_r}, S_{H_r} \in \mathbb{R}^{HW \times H_r W_r}$ are the matrices of relative position logits, containing the position information of the relative width and relative height, respectively.
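The following sketch computes the relative-position part of the attention logits in Eq. (2) (the terms involving $r_W$ and $r_H$) for a single head in a naive, unoptimized way. How these logits are then aggregated to the reduced $n_r$ grid for Eq. (3) is not spelled out in the paper, so that step is only indicated in a comment.

```python
import torch

def relative_position_logits(q, rel_w, rel_h, h, w):
    """Naive 2-D relative position logits: q_i . (r_{W, jx-ix} + r_{H, jy-iy}).
    q: (n, d) with n = h*w; rel_w: (2*w-1, d) and rel_h: (2*h-1, d) are learned embeddings."""
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    ys, xs = ys.flatten(), xs.flatten()                 # (x, y) coordinate of every token
    dx = xs[None, :] - xs[:, None] + (w - 1)            # shift relative offsets to non-negative indices
    dy = ys[None, :] - ys[:, None] + (h - 1)
    s_w = torch.einsum("id,ijd->ij", q, rel_w[dx])      # (n, n) width logits  (S_W before reduction)
    s_h = torch.einsum("id,ijd->ij", q, rel_h[dy])      # (n, n) height logits (S_H before reduction)
    # In Eq. (3) these logits are further projected/aggregated on the key side from n to n_r columns,
    # scaled by 1/sqrt(d), added to Q K_r^T and passed through the softmax.
    return s_w + s_h

# Small illustrative grid (a full 144 x 96 grid would be memory-hungry in this naive form).
h, w, d = 12, 8, 64
q = torch.randn(h * w, d)
rel_w, rel_h = torch.randn(2 * w - 1, d), torch.randn(2 * h - 1, d)
logits = relative_position_logits(q, rel_w, rel_h, h, w)   # (96, 96)
```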

    3 Experiments

    In this section, we experimentally validate the feasibility of the proposed network on the colorful fashion parsing dataset (CFPD)[20].

    3.1 Experiment preparation

A graphics processing unit (GPU) was used in the experiments to speed up training, and the AdamW optimizer was used to accelerate the convergence of the network. We trained the network for 100 epochs, using an exponential learning-rate scheduler with a warm-up strategy. In the training phase, a batch size of 4, an initial learning rate of 0.001, and a weight decay of 0.0001 were used.
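A hedged sketch of the optimizer and learning-rate schedule described above (AdamW, initial learning rate 0.001, weight decay 0.0001, exponential schedule with warm-up). The warm-up length and the decay factor are assumptions, since the paper does not report them.

```python
import torch

def build_optimizer_and_scheduler(model, warmup_epochs=5, gamma=0.97):
    """AdamW plus an exponential learning-rate schedule preceded by a linear warm-up."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-4)

    def lr_lambda(epoch):
        if epoch < warmup_epochs:                  # linear warm-up (length is an assumption)
            return (epoch + 1) / warmup_epochs
        return gamma ** (epoch - warmup_epochs)    # exponential decay afterwards (factor assumed)

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
    return optimizer, scheduler

# Typical usage: call scheduler.step() once per epoch over the 100 training epochs.
```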

    3.2 Experiment results

To verify the validity of the individual components and the overall framework proposed in this paper, we set up baseline and ablation experiments on the CFPD. The experimental results are shown in Table 1. Pixel accuracy (PA) and mean intersection over union (mIoU) are used as evaluation metrics. As shown in Table 1, compared with the baseline HRNet, the PA and mIoU of MFSANet without relative position encoding (w/o RPE) increase by 0.30% and 1.40%, while the PA and mIoU of the full MFSANet increase by 0.44% and 2.08%, respectively.

    Table 1 Results of baseline and ablation experiments on the CFPD
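For reference, the PA and mIoU metrics reported in Table 1 can be computed from a confusion matrix as in the NumPy sketch below; this is a generic definition, not the paper's evaluation code.

```python
import numpy as np

def parsing_metrics(pred, gt, num_classes):
    """Pixel accuracy (PA) and mean intersection over union (mIoU) for integer label maps."""
    mask = gt >= 0                                              # ignore pixels with negative labels, if any
    conf = np.bincount(num_classes * gt[mask] + pred[mask],
                       minlength=num_classes ** 2).reshape(num_classes, num_classes)
    pa = np.diag(conf).sum() / conf.sum()                       # correctly labelled pixels / all pixels
    union = conf.sum(axis=0) + conf.sum(axis=1) - np.diag(conf)
    iou = np.diag(conf) / np.maximum(union, 1)                  # per-class IoU, guarding empty classes
    return pa, iou.mean()
```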

To explore how MFSANet improves the accuracy of clothing parsing, the parsing results are visualized in Fig. 3. For the first example, MFSANet separates the blouse region above the shoulder and the background region between the pants and the blouse, confirming its strong ability to extract details. For the second example, MFSANet accurately segments the sunglasses between the hair and the face, and draws a clear demarcation between the shorts and the leg skin. For the third example, MFSANet successfully identifies the hand skin at the boundary of the sweater on the model's left hand, demonstrating the inter-class delineation obtained by extracting association information. MFSANet provides more consistent segmentation of the boundaries and details in image parsing, demonstrating its effectiveness and robustness.

    Fig.3 Visual results on the CFPD

In Table 2, we compare the ability of the down-scaling projection operation to reduce complexity at different target scales. For an input feature map with $H = 144$ pixels and $W = 96$ pixels, we set $H_r$ and $W_r$ to the numbers shown in parentheses in Table 2. As can be seen in the second column, smaller scales correspond to fewer parameters, which demonstrates the effect of the operation on reducing complexity. Because relative position encoding must build and learn maps of position relations, and this cost is amplified by repeated use throughout the structure, the method requires a small memory footprint; the down-scaling projection operation keeps the memory occupation of the method at a reasonable size, which confirms its necessity. Therefore, the more reasonable setting (16, 16) is used as our standard parameter.

    Table 2 Comparison of different scales in downscaling projection operation

    4 Conclusions

In this paper, a clothing parsing network based on multi-scale fusion and an improved self-attention mechanism is proposed. The network integrates the ability of self-attention to extract long-range association information into an overall multi-scale fusion architecture through an appropriate dimensionality-reduction projection operation, and incorporates two-dimensional relative position encoding to exploit the rich position information in clothing images. The proposed network can effectively utilize information from multiple aspects and accomplish clothing parsing more accurately, thus supporting practical applications in the garment field.
