
Multi-layer dynamic and asymmetric convolutions

High Technology Letters, 2022, No.3

    LUO Chunjie (羅純杰), ZHAN Jianfeng

    (Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, P.R.China)

    (University of Chinese Academy of Sciences, Beijing 100049, P.R.China)

Abstract Dynamic networks have become popular for enhancing model capacity while maintaining efficient inference, by dynamically generating the weight from over-parameters. However, they introduce many more parameters and increase the difficulty of training. In this paper, a multi-layer dynamic convolution (MDConv) is proposed, which scatters the over-parameters over multiple layers, yielding fewer parameters but stronger model capacity than scattering them horizontally. It uses an expanding form, where attention is applied to the features, to facilitate training, and a compact form, where attention is applied to the weights, to maintain efficient inference. Moreover, a multi-layer asymmetric convolution (MAConv) is proposed, which incurs no extra parameters or computation at inference time compared with static convolution. Experimental results show that MDConv achieves better accuracy with fewer parameters and significantly facilitates training, and that MAConv enhances accuracy without any extra storage or computation cost at inference time compared with static convolution.

    Key words: neural network, dynamic network, attention, image classification

    0 Introduction

Deep neural networks have achieved great success in many areas of machine intelligence. Many researchers have shown rising interest in designing lightweight convolutional networks[1-8]. Lightweight networks improve efficiency by decreasing the size of the convolutions, which also decreases the model capacity.

Dynamic networks[9-12] have become popular to enhance model capacity while maintaining efficient inference by applying attention to the weights. Conditionally parameterized convolutions (CondConv)[9] and dynamic convolution (DYConv)[10] use a dynamic linear combination of n experts as the kernel of the convolution. CondConv and DYConv bring many more parameters. WeightNet[11] applies a grouped fully-connected layer to the attention vector to generate the weight in a group-wise manner, achieving comparable accuracy with fewer parameters than CondConv and DYConv. Dynamic convolution decomposition (DCD)[12] replaces dynamic attention over channel groups with dynamic channel fusion, resulting in a more compact model. However, DCD brings new problems: it increases the depth of the weight, which hinders error back-propagation, and it increases the number of dynamic coefficients since it uses a full dynamic matrix. More dynamic coefficients make training more difficult.
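The expert-mixing idea behind CondConv and DYConv can be sketched as follows; the sizes, the toy sigmoid routing network, and all variable names are illustrative assumptions, with 1×1 convolutions written as plain matrix multiplies:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: n experts, each a 1x1 conv weight of shape (C_out, C_in).
n, c_out, c_in = 4, 8, 16
experts = rng.standard_normal((n, c_out, c_in))

def routing(x_pooled, w_route):
    """Toy routing network: sigmoid over a linear map of pooled features."""
    z = w_route @ x_pooled
    return 1.0 / (1.0 + np.exp(-z))          # attention over the n experts

w_route = rng.standard_normal((n, c_in))
x = rng.standard_normal((c_in, 32))          # C_in features at 32 spatial positions

alpha = routing(x.mean(axis=1), w_route)     # (n,) input-dependent coefficients
kernel = np.tensordot(alpha, experts, axes=1)  # dynamic kernel: sum_i alpha_i * W_i

# Applying the combined kernel once equals combining the n expert outputs.
y_combined = kernel @ x
y_per_expert = sum(a * (w @ x) for a, w in zip(alpha, experts))
assert np.allclose(y_combined, y_per_expert)
```

Because the combination is linear, only one convolution with the mixed kernel is executed at inference, which is what keeps these methods efficient despite the n-fold over-parameterization.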

To reduce the parameters and facilitate training, a multi-layer dynamic convolution (MDConv) is proposed, which scatters the over-parameters over multiple layers, giving fewer parameters but stronger model capacity than scattering them horizontally. It uses an expanding form, where attention is applied to the features, to facilitate training, and a compact form, where attention is applied to the weights, to maintain efficient inference. In CondConv and DYConv, the over-parameters are scattered horizontally. A key foundation for the success of deep learning is that deeper layers have stronger model capacity. Unlike CondConv and DYConv, MDConv scatters the over-parameters over multiple layers, enhancing model capacity with fewer parameters. Moreover, MDConv introduces fewer dynamic coefficients and is thus easier to train than DCD. Two additional mechanisms facilitate the training of the deeper layers in the expanding form. One is batch normalization (BN) after each convolution, which can significantly accelerate and improve the training of deeper networks. The other is the bypass convolution with a static kernel, which shortens the path of error back-propagation.

At training time, the attention in MDConv is applied to features, while at inference time it becomes weight attention. Batch normalization can be fused into the convolution, and the squeeze-and-excite (SE) attention can be viewed as a diagonal matrix. The three convolutions and the SE attention can then be fused into a single convolution with a dynamic weight for efficient inference. After fusion, the weight of the final convolution is dynamically generated, and only one convolution needs to be performed. In the implementation, there is no need to construct the diagonal matrix: after generating the dynamic coefficients, a broadcasting multiply can be used instead of a matrix multiply. Thus MDConv costs less memory and fewer computational resources than DCD, which generates a dense matrix.

Although dynamic attention can significantly enhance model capacity, it brings extra parameters and floating point operations (FLOPs) at inference time. Besides the multi-layer dynamic convolution, a multi-layer asymmetric convolution (MAConv) is proposed, which removes the dynamic attention from the multi-layer dynamic convolution. After training, the weights are fused once and re-parameterized as new static kernels, since they no longer depend on the input. As a result, there are no extra parameters or FLOPs at inference time compared with static convolution.

The experiments show that: (1) MDConv achieves better accuracy with fewer parameters and significantly facilitates training; (2) MAConv enhances accuracy without any extra storage or computation cost at inference time compared with static convolution.

    The remainder of this paper is structured as follows. Section 1 briefly presents related work. Section 2 describes the details of multi-layer dynamic convolution. Section 3 introduces multi-layer asymmetric convolution. In Section 4, the experiment settings and the results are presented. Conclusions are made in Section 5.

    1 Related work

    1.1 Dynamic networks

CondConv[9] and DYConv[10] compute convolutional kernels as a function of the input instead of using static kernels. In particular, the convolutional kernels are over-parameterized as a linear combination of n experts. Although this largely enhances model capacity, CondConv and DYConv bring many more parameters and are thus prone to over-fitting. Besides, more parameters require more memory. Moreover, the dynamics make training more difficult. To avoid over-fitting and facilitate training, these two methods apply additional constraints: CondConv shares routing weights between layers in a block, and DYConv uses Softmax with a large temperature instead of Sigmoid on the output of the routing network. WeightNet[11] applies a grouped fully-connected layer to the attention vector to generate the weight in a group-wise manner, achieving comparable accuracy with fewer parameters than CondConv and DYConv. To further compact the model, DCD[12] decomposes the convolutional weight, which reduces the latent space of the weight matrix and results in a more compact model.

Although dynamic weight attention enhances model capacity, it increases the difficulty of training since it introduces dynamic factors. In the extreme, the dynamic filter network[13] generates all the convolutional filters dynamically conditioned on the input. On the other hand, SE[14] is an effective and robust module that applies attention to the channel-wise features. Other dynamic networks[15-20] try to learn dynamic network structures with static convolution kernels.

    1.2 Re-parameterization

ExpandNet[21] expands a convolution into multiple linear layers without adding any nonlinearity. The expanded network can benefit from over-parameterization during training and can be compressed back to the compact form algebraically at inference. For example, a k×k convolution is expanded into three convolutional layers with kernel sizes 1×1, k×k, and 1×1, respectively. ExpandNet increases the network depth, which makes training more difficult. ACNet[22] uses asymmetric convolution to strengthen the kernel skeletons for powerful networks. At training time, it uses three branches with 3×3, 1×3, and 3×1 kernels respectively; at inference time, the three branches are fused into one static kernel. RepVGG[23] constructs the training-time model with branches consisting of an identity map, a 1×1 convolution, and a 3×3 convolution. After training, RepVGG constructs a single 3×3 kernel by re-parameterizing the trained parameters. ACNet and RepVGG can only be used for k×k (k > 1) convolutions.
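The algebraic compression that these re-parameterization methods rely on can be illustrated with 1×1 convolutions viewed as matrices (the sizes and names below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Three linear (1x1 conv) layers stacked with no nonlinearity in between,
# in the spirit of ExpandNet's vertical expansion.
c = 8
A = rng.standard_normal((16, c))    # first 1x1 conv: c -> 16 channels
B = rng.standard_normal((16, 16))   # middle layer
C = rng.standard_normal((c, 16))    # last 1x1 conv: 16 -> c channels
x = rng.standard_normal((c, 5))     # c channels at 5 spatial positions

y_expanded = C @ (B @ (A @ x))      # training-time expanded form
W_compact = C @ B @ A               # algebraic compression back to one layer
assert np.allclose(y_expanded, W_compact @ x)
```

The identity holds only because no nonlinearity separates the layers; any activation between them would make the product irreducible, which is exactly why the ablation in Section 4.5 cannot fuse the ReLU variant.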

    2 Multi-layer dynamic convolution

The main problem of CondConv[9] and DYConv[10] is that they bring many more parameters. DCD[12] reduces the latent space of the weight matrix by matrix decomposition, resulting in a more compact model. However, DCD brings new problems: (1) it increases the depth of the weight, which hinders error back-propagation; (2) it increases the number of dynamic coefficients since it uses a full dynamic matrix. More dynamic coefficients make training more difficult. In the extreme case, e.g., the dynamic filter network[13], all the convolutional weights are dynamically conditioned on the input; it is hard to train and cannot be applied successfully in modern deep architectures.

To reduce the parameters and facilitate training, MDConv is proposed. As shown in Fig.1, MDConv has two branches: (1) the dynamic branch consists of a k×k (k ≥ 1) convolution, an SE module, and a 1×1 convolution; (2) the bypass branch consists of a k×k convolution with a static kernel. The output of MDConv is the sum of the two branches.
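A minimal sketch of this two-branch forward pass, under the simplifying assumptions that k = 1 (so convolutions become matrix multiplies) and that the attention network F is a single toy layer (sizes and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

c_in, m, c_out, p = 16, 20, 8, 10        # m intermediate channels, p spatial positions
W1 = rng.standard_normal((m, c_in))      # dynamic branch: k x k conv (k = 1 here)
W2 = rng.standard_normal((c_out, m))     # dynamic branch: 1 x 1 conv
W0 = rng.standard_normal((c_out, c_in))  # bypass branch: static k x k conv
Wf = rng.standard_normal((m, c_in))      # toy one-layer attention network F

def mdconv(x):
    # SE attention between the two convolutions, conditioned on the pooled input.
    a = 1.0 / (1.0 + np.exp(-(Wf @ x.mean(axis=1))))   # sigmoid gate, shape (m,)
    dynamic = W2 @ (a[:, None] * (W1 @ x))             # attention applied to features
    bypass = W0 @ x                                    # static shortcut branch
    return dynamic + bypass

x = rng.standard_normal((c_in, p))
y = mdconv(x)
assert y.shape == (c_out, p)
```

Batch normalization after each convolution is omitted here for brevity; it adds a per-channel affine map that the compact form later folds into the weights.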

    Fig.1 Training and inference of MDConv

Unlike CondConv and DYConv, MDConv encapsulates the dynamic information into multi-layer convolutions by applying SE attention between two convolutional layers. By scattering the over-parameters over multiple layers, MDConv increases model capacity with fewer parameters than horizontal scattering. Moreover, MDConv facilitates the training of dynamic networks. In MDConv, SE can be viewed as a diagonal matrix A = diag(F(x)), where F is a multi-layer fully-connected attention network. Compared with DCD, which uses a full dynamic matrix, MDConv brings fewer dynamic coefficients and is thus easier to train. There are two additional mechanisms to facilitate the training of the deeper layers in MDConv. One is batch normalization after each convolution, which can significantly accelerate and improve the training of deeper networks. The other is the bypass convolution with a static kernel, which shortens the path of error back-propagation.

MDConv uses two convolutional layers in the dynamic branch. Three or more layers would bring the following problems: (1) more convolutions bring more FLOPs; (2) more dynamic layers are harder to train and need more training data.

Although the expanding form of MDConv facilitates training, it is more expensive, since there are three convolutional operators at training time. The compact form of MDConv can be used for efficient inference. MDConv can be defined as

y = W2 A W1 x + W0 x

where W1 and W2 are the weights of the two convolutions in the dynamic branch, A is the diagonal SE attention matrix, and W0 is the static bypass kernel. The three convolutions of MDConv can then be fused into a single convolution for efficient inference:

y = Winfer x + binfer, with Winfer = W2 A W1 + W0

where Winfer and binfer are the new weight and bias of the convolution after re-parameterization (binfer collects the biases introduced by fusing batch normalization).

In the implementation, the diagonal matrix A does not need to be constructed. After generating the dynamic coefficients, a broadcasting multiply can be used instead of a matrix multiply. Thus MDConv costs less memory and fewer computational resources than DCD, which generates a dense matrix A.
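The equivalence of the expanding and compact forms, and the broadcasting multiply that avoids materializing the diagonal matrix A, can be checked numerically for the k = 1 case (sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

c_in, m, c_out, p = 16, 20, 8, 10
W1 = rng.standard_normal((m, c_in))      # dynamic branch: first conv
W2 = rng.standard_normal((c_out, m))     # dynamic branch: 1 x 1 conv
W0 = rng.standard_normal((c_out, c_in))  # static bypass kernel
a = rng.random(m)                        # dynamic SE coefficients, diagonal of A
x = rng.standard_normal((c_in, p))

# Expanding form: attention applied to the intermediate features.
y_expand = W2 @ (a[:, None] * (W1 @ x)) + W0 @ x

# Compact form: fold A into the weights with a broadcasting multiply;
# W2 * a scales the columns of W2, i.e. it equals W2 @ diag(a) without
# ever building the dense diagonal matrix.
W_infer = (W2 * a[None, :]) @ W1 + W0
y_compact = W_infer @ x

assert np.allclose(y_expand, y_compact)
```

After fusion, inference performs a single convolution with W_infer, so the only per-input overhead is generating the m coefficients a.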

    3 Multi-layer asymmetric convolution

Although dynamic attention can significantly enhance model capacity, it still brings extra parameters and FLOPs at inference time. Besides MDConv, this paper also proposes MAConv, which removes the dynamic attention from MDConv.

In ExpandNet[21], a k×k (k ≥ 1) convolution is expanded vertically into three convolutional layers with kernel sizes 1×1, k×k, and 1×1, respectively. When k > 1, ExpandNet cannot use BN in the intermediate layer: the bias produced by BN fusion cannot pass forward through the k×k kernel and thus cannot be fused with the bias of the next layer. MAConv avoids this problem and can therefore use BN to facilitate training. Besides, MAConv uses the bypass convolution to shorten the path of error back-propagation. BN and the bypass shortcut in MAConv help the training of deep layers, and both can be compressed and re-parameterized into the compact form, incurring no extra cost at inference time.
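The BN fusion that MAConv depends on is the standard fold of BN statistics into the preceding convolution; it can be sketched for a 1×1 convolution viewed as a matrix (sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

c_in, c_out, p, eps = 8, 4, 6, 1e-5
W = rng.standard_normal((c_out, c_in))   # a 1 x 1 conv as a matrix
x = rng.standard_normal((c_in, p))

# BN statistics and affine parameters learned during training.
mean, var = rng.standard_normal(c_out), rng.random(c_out) + 0.5
gamma, beta = rng.standard_normal(c_out), rng.standard_normal(c_out)

# Training-time computation: conv followed by batch normalization.
y_bn = gamma[:, None] * ((W @ x) - mean[:, None]) / np.sqrt(var[:, None] + eps) \
       + beta[:, None]

# Inference-time: BN folded into the conv weight and a per-channel bias.
s = gamma / np.sqrt(var + eps)
W_fused = s[:, None] * W
b_fused = beta - s * mean
y_fused = W_fused @ x + b_fused[:, None]

assert np.allclose(y_bn, y_fused)
```

The per-channel bias b_fused is exactly the quantity that, in ExpandNet's k > 1 case, cannot be pushed through the following k×k kernel, which is the limitation MAConv sidesteps.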

ACNet[22] and RepVGG[23] horizontally expand the k×k (k > 1) convolution into convolutions with different kernel shapes. That hinders their use in lightweight networks, which heavily rely on 1×1 pointwise convolutions. MAConv uses asymmetric depth instead of asymmetric kernel shapes and expands the convolution both vertically and horizontally. MAConv can be used for both 1×1 and k×k (k > 1) convolutions.

4 Experiments

    4.1 ImageNet

The ImageNet classification dataset[24] has 1.28×10⁶ training images and 50 000 validation images in 1000 classes. The experiments are based on the official PyTorch example.

The standard augmentation from the official example is used for the training images: (1) random crop with scale 0.08 to 1.0 and aspect ratio 3/4 to 4/3, then resize to 224×224; (2) random horizontal flip. The validation images are resized to 256×256 and then center-cropped to 224×224. Each channel of the input image is normalized to zero mean and unit standard deviation globally. The batch size is 256, and four TITAN Xp GPUs are used to train the models.
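This pipeline matches the standard torchvision transforms of the official PyTorch ImageNet example; the per-channel normalization statistics below are the usual ImageNet values and are an assumption, since the paper only states zero mean and unit standard deviation:

```python
from torchvision import transforms

# Standard per-channel ImageNet statistics (assumed; the paper does not list them).
imagenet_norm = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                     std=[0.229, 0.224, 0.225])

train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.08, 1.0), ratio=(3/4, 4/3)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    imagenet_norm,
])

val_tf = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    imagenet_norm,
])
```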

First, comparisons are made between static convolution, DYConv[10], MAConv, and MDConv on MobileNetV2 and ShuffleNetV2. For DYConv, the number of experts is set to the default value of 4[10]. For MDConv and MAConv, the number of intermediate channels m is set to 20. For DYConv and MDConv, the attention network is a two-layer fully-connected network whose hidden units number 1/4 of the input channels. As recommended in the original paper[10], the Softmax temperature is set to 30 in DYConv. DYConv, MDConv, and MAConv replace the static convolution in the pointwise convolutional layers of MobileNetV2's inverted bottlenecks and ShuffleNetV2's blocks.

    Table 1 Top-1 accuracies of lightweight networks on ImageNet validation dataset

Comparisons are also made on k×k (k > 1) convolutions. DYConv, MDConv, and MAConv are applied in the 3×3 convolutional layer of ResNet18's residual block. Table 2 shows that MAConv is also effective on k×k (k > 1) convolutions. MDConv increases the accuracy by 2.086% with only 1×10⁶ additional parameters compared with static convolution. Moreover, it achieves higher accuracy with far fewer parameters than DYConv.

    Table 2 Top-1 accuracies of ResNet18 on ImageNet validation dataset

Table 3 Comparison of validation accuracies (%) between DCD and MDConv on MobileNetV2x1.0 trained with different amounts of data

    Fig.2 Comparison of training and validation accuracy curves between DCD and MDConv on MobileNetV2x1.0 trained with different amounts of data

    Table 4 Comparison of validation accuracies on ImageNet between MDConv and SE

    4.2 CIFAR-10

CIFAR-10 is a dataset of natural 32×32 RGB images in 10 classes, with 50 000 images for training and 10 000 for testing. The training images are zero-padded to 36×36 and then randomly cropped to 32×32 pixels, followed by random horizontal flipping. Each channel of the input is normalized to zero mean and unit standard deviation globally.

SGD with momentum 0.9 and weight decay 5e-4 is used. The batch size is set to 128. The learning rate is set to 0.1 and annealed to zero with a cosine schedule. The networks are trained for 200 epochs.
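The cosine annealing schedule described above follows the standard formula; a minimal sketch (the function name is illustrative):

```python
import math

base_lr, epochs = 0.1, 200

def cosine_lr(epoch):
    """Cosine annealing from base_lr at epoch 0 down to zero at the final epoch."""
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * epoch / epochs))

assert abs(cosine_lr(0) - 0.1) < 1e-12
assert abs(cosine_lr(epochs)) < 1e-12      # reaches zero at the end of training
assert cosine_lr(50) > cosine_lr(150)      # the rate decays monotonically
```

In PyTorch this corresponds to `torch.optim.lr_scheduler.CosineAnnealingLR` with `T_max` equal to the number of epochs and `eta_min=0`.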

MobileNetV2 with different width multipliers is evaluated on this small dataset. The setups for the different attention mechanisms are the same as in subsection 4.1. Each test is run 5 times, and the mean and standard deviation of the accuracies are listed in Table 5. MAConv increases the accuracy compared with static convolution and even outperforms DYConv on this small dataset. MDConv further improves the results and achieves the best performance, while DCD performs worst and has a large variance. This again implies that DCD brings more dynamics and is difficult to train on a small dataset.

    Table 5 Test accuracies on CIFAR-10 with MobileNetV2

    4.3 CIFAR-100

CIFAR-100 is a dataset of natural 32×32 RGB images in 100 classes, with 50 000 images for training and 10 000 for testing. The training images are zero-padded to 36×36 and then randomly cropped to 32×32 pixels, followed by random horizontal flipping. Each channel of the input is normalized to zero mean and unit standard deviation globally. SGD with momentum 0.9 and weight decay 5e-4 is used. The batch size is set to 128. The learning rate is set to 0.1 and annealed to zero with a cosine schedule. The networks are trained for 200 epochs.

MobileNetV2x0.35 is evaluated on this dataset. The setups for the different attention mechanisms are the same as in subsection 4.1. Each test is run 5 times, and the mean and standard deviation of the accuracies are reported in Table 6. The results show that the dynamic networks do not improve the accuracy over the static network; moreover, more dynamic factors lead to worse performance. For example, DCD is worse than MDConv, and MDConv is worse than DYConv. This is because dynamic networks are harder to train and need more training data, and CIFAR-100 has 100 classes, so each class has fewer training examples than CIFAR-10. MAConv achieves the best performance, 70.032%. When the training dataset is small, MAConv is still effective at enhancing model capacity, unlike the dynamic variants.

    Table 6 Test accuracies on CIFAR-100

    4.4 SVHN

The street view house numbers (SVHN) dataset includes 73 257 digits for training, 26 032 digits for testing, and 531 131 additional digits. Each digit is a 32×32 RGB image. The training images are zero-padded to 36×36 and then randomly cropped to 32×32 pixels, followed by random horizontal flipping. Each channel of the input is normalized to zero mean and unit standard deviation globally. SGD with momentum 0.9 and weight decay 5e-4 is used. The batch size is set to 128. The learning rate is set to 0.1 and annealed to zero with a cosine schedule. The networks are trained for 200 epochs.

MobileNetV2x0.35 is used on this dataset. The setups for the different attention mechanisms are the same as in subsection 4.1. Each test is run 5 times, and the mean and standard deviation of the accuracies are reported in Table 7. The results show that DCD decreases the performance compared with the static convolution; DCD is hard to train on this small dataset. DYConv, MAConv, and MDConv increase the accuracy compared with the static convolution. Among them, MAConv and MDConv achieve similar performance, better than DYConv.

    Table 7 Test accuracies on SVHN

    4.5 Ablation study

Ablation experiments are carried out on CIFAR-10 using two network architectures. One is MobileNetV2 x0.35. To make the comparison more distinct, a smaller and simpler network named SmallNet is also used. SmallNet has a first convolutional layer with 3×3 kernels and 16 output channels, followed by three blocks. Each block comprises a 1×1 pointwise convolution with stride 1 and no padding, and a 3×3 depthwise convolution with stride 2 and padding 1. The three blocks have 16, 32, and 64 output channels, respectively. Each convolutional layer is followed by batch normalization and ReLU activation. The output of the last layer is passed through a global average pooling layer, followed by a Softmax layer with 10 classification outputs. The other experiment settings are the same as in subsection 4.2.
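SmallNet as described can be sketched in PyTorch as follows; the placement of the pointwise convolution before the depthwise one follows the text, while unstated details (the stem's stride and padding, bias-free convolutions, a final linear layer with Softmax applied in the loss) are assumptions:

```python
import torch
import torch.nn as nn

def conv_bn_relu(c_in, c_out, k, stride=1, padding=0, groups=1):
    """Conv followed by BN and ReLU, as each SmallNet layer is described."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, k, stride=stride, padding=padding,
                  groups=groups, bias=False),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

class SmallNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        layers = [conv_bn_relu(3, 16, 3, padding=1)]  # stem: 3x3, 16 channels
        c = 16
        for c_out in (16, 32, 64):
            layers += [
                conv_bn_relu(c, c_out, 1),                          # 1x1 pointwise
                conv_bn_relu(c_out, c_out, 3, stride=2, padding=1,  # 3x3 depthwise
                             groups=c_out),
            ]
            c = c_out
        self.features = nn.Sequential(*layers)
        self.pool = nn.AdaptiveAvgPool2d(1)               # global average pooling
        self.classifier = nn.Linear(64, num_classes)      # Softmax lives in the loss

    def forward(self, x):
        x = self.pool(self.features(x)).flatten(1)
        return self.classifier(x)

y = SmallNet()(torch.randn(2, 3, 32, 32))
assert y.shape == (2, 10)
```

The three stride-2 depthwise convolutions reduce a 32×32 input to 4×4 before pooling, so the classifier sees a 64-dimensional vector.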

The effect of the bypass shortcut in MAConv and MDConv is investigated first. To show the effect of dynamic attention, ReLU activation is also used in place of dynamic attention in the multi-layer branch. The results are shown in Table 8, where w/ means with the bypass shortcut and w/o means without it. The results show that the bypass shortcut improves the accuracy, especially in the deeper network (MobileNetV2 x0.35). Moreover, MDConv (with dynamic attention) increases model capability compared with MAConv (without dynamic attention). Using ReLU activation instead of dynamic attention can further increase the capability; however, the weights can then no longer be fused into the compact form because of the nonlinear activation function, so the storage and computation costs at inference time are much higher than those of a single-layer convolution.

    Table 8 Effect of bypass shortcut

Next, the networks are trained using the compact form directly. Table 9 compares expanding training with compact training. The results show that expanding training improves the performance of both MAConv and MDConv. To evaluate the benefit of BN in expanding training, BN is applied after the addition of the two branches instead of after each convolution. The results show that BN in the expanding form helps the training, since it achieves better performance than the variant without BN. Moreover, the expanding form without BN helps training by itself, since it achieves better performance than compact training.

    Table 9 Effect of expanding training

The effects of different input/output channel widths and different intermediate channel widths are also investigated. Different width multipliers are applied to all layers except the first layer of SmallNet. The results are shown in Table 10: the gains of MAConv and MDConv are higher with fewer channels, which implies that over-parameterization is more effective in smaller networks. SmallNet is then trained with different numbers of intermediate channels. The results in Table 11 show that the gains of MAConv and MDConv are trivial when the intermediate channels are increased on the CIFAR-10 dataset. Using more intermediate channels means the dynamic part has more influence, which increases the difficulty of training and requires more training data.

    Table 10 Effect of input/output channel width

    Table 11 Effect of different intermediate channel width

Finally, different setups for the attention network are investigated in SmallNet with MDConv. Different numbers of hidden units are used, ranging from 1/4 to 4 times the number of input channels. Table 12 shows that increasing the hidden units improves the performance up to 4 times the input channels. Softmax, and Softmax with different temperatures as proposed in Ref.[10], are used as the gate function in the last layer of the attention network. As shown in Table 13, Softmax achieves better accuracy than Sigmoid; however, the temperature does not improve the performance.
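The temperature-scaled Softmax gate from Ref.[10] can be sketched as follows; a large temperature flattens the attention distribution, which is the mechanism claimed to ease early training:

```python
import numpy as np

def softmax_t(z, temperature=1.0):
    """Softmax gate with temperature, as used in DYConv's routing network."""
    z = np.asarray(z, dtype=float) / temperature
    z -= z.max()                       # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

z = np.array([2.0, 0.5, -1.0])         # illustrative routing logits
hard = softmax_t(z, temperature=1.0)
soft = softmax_t(z, temperature=30.0)  # large temperature -> near-uniform attention

assert abs(hard.sum() - 1.0) < 1e-12
assert soft.max() - soft.min() < hard.max() - hard.min()
```

With temperature 30, the three attention values are nearly equal, so every expert (or channel) receives gradient early in training instead of a single dominant one.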

    Table 12 Effect of the hidden layer in the attention network

    Table 13 Effect of the gate function in the last layer of the attention network

    5 Conclusions

Two powerful convolutions are proposed to increase model capacity: MDConv and MAConv. MDConv expands the static convolution into a multi-layer dynamic one, with fewer parameters but stronger model capacity than horizontal expansion. MAConv has no extra parameters or FLOPs at inference time compared with static convolution. MDConv and MAConv are evaluated on different networks. Experimental results show that both improve the accuracy compared with static convolution. Moreover, MDConv achieves better accuracy with fewer parameters and facilitates training compared with other dynamic convolutions.
