
    Multi-layer dynamic and asymmetric convolutions

    High Technology Letters, 2022, Issue 3

    LUO Chunjie (羅純杰), ZHAN Jianfeng

    (Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, P.R.China)

    (University of Chinese Academy of Sciences, Beijing 100049, P.R.China)

    Abstract Dynamic networks have become popular for enhancing model capacity while maintaining efficient inference: they dynamically generate their weights from over-parameters. However, they introduce many more parameters and make training harder. In this paper, a multi-layer dynamic convolution (MDConv) is proposed, which scatters the over-parameters over multiple layers, giving fewer parameters but stronger model capacity than scattering them horizontally. It uses an expanding form, in which attention is applied to the features, to facilitate training, and a compact form, in which attention is applied to the weights, to maintain efficient inference. Moreover, a multi-layer asymmetric convolution (MAConv) is proposed, which incurs no extra parameters or computation at inference time compared with static convolution. Experimental results show that MDConv achieves better accuracy with fewer parameters and significantly facilitates training, while MAConv enhances accuracy without any extra storage or computation cost at inference time compared with static convolution.

    Key words: neural network, dynamic network, attention, image classification

    0 Introduction

    Deep neural networks have achieved great success in many areas of machine intelligence. Many researchers have shown rising interest in designing lightweight convolutional networks[1-8]. Lightweight networks improve efficiency by decreasing the size of the convolutions, but that also decreases the model capacity.

    Dynamic networks[9-12] have become popular for enhancing model capacity while maintaining efficient inference by applying attention to the weights. Conditionally parameterized convolutions (CondConv)[9] and dynamic convolution (DYConv)[10] use a dynamic linear combination of n experts as the kernel of the convolution. CondConv and DYConv introduce many more parameters. WeightNet[11] applies a grouped fully-connected layer to the attention vector to generate the weight in a group-wise manner, achieving comparable accuracy with fewer parameters than CondConv and DYConv. Dynamic convolution decomposition (DCD)[12] replaces dynamic attention over channel groups with dynamic channel fusion, resulting in a more compact model. However, DCD brings new problems: it increases the depth of the weight, which hinders error back-propagation, and it increases the number of dynamic coefficients, since it uses a full dynamic matrix. More dynamic coefficients make training more difficult.

    To reduce the parameters and facilitate training, a multi-layer dynamic convolution (MDConv) is proposed, which scatters the over-parameters over multiple layers, giving fewer parameters but stronger model capacity than scattering them horizontally. It uses an expanding form, in which attention is applied to the features, to facilitate training, and a compact form, in which attention is applied to the weights, to maintain efficient inference. In CondConv and DYConv, the over-parameters are scattered horizontally. A key foundation for the success of deep learning is that deeper layers bring stronger model capacity. Unlike CondConv and DYConv, MDConv scatters the over-parameters over multiple layers, enhancing the model capacity with fewer parameters. Moreover, MDConv introduces fewer dynamic coefficients than DCD and is thus easier to train. Two additional mechanisms facilitate the training of the deeper layers in the expanding form. One is batch normalization (BN) after each convolution, which can significantly accelerate and improve the training of deeper networks. The other is the bypass convolution with a static kernel, which shortens the path of error back-propagation.

    At training time, the attention in MDConv is applied to the features; at inference time, it becomes weight attention. Batch normalization can be fused into the convolution, and the squeeze-and-excite (SE) attention can be viewed as a diagonal matrix. The three convolutions and the SE attention can then be fused into a single convolution with a dynamic weight for efficient inference. After fusion, the weight of the final convolution is generated dynamically, and only one convolution needs to be performed. In the implementation, there is no need to construct the diagonal matrix: after generating the dynamic coefficients, a broadcasting multiply can be used instead of a matrix multiply. MDConv therefore consumes less memory and computation than DCD, which generates a dense matrix.
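This fusion can be sketched in NumPy for the 1×1 case, where a pointwise convolution over channels is just a matrix acting on each pixel's channel vector. The shapes and variable names below are hypothetical, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
c_in, m, c_out = 8, 4, 8                  # hypothetical channel counts

W1 = rng.standard_normal((m, c_in))       # first conv of the dynamic branch
W2 = rng.standard_normal((c_out, m))      # 1x1 conv after the SE attention
W0 = rng.standard_normal((c_out, c_in))   # static bypass conv
a = rng.random(m)                         # dynamic SE coefficients for one input
x = rng.standard_normal(c_in)             # one pixel's channel vector

# Expanding form: three operators plus the bypass, attention on features.
y_expand = W2 @ (a * (W1 @ x)) + W0 @ x

# Compact form: fold the SE coefficients and both branches into one weight.
W_infer = W2 @ (a[:, None] * W1) + W0
y_compact = W_infer @ x

assert np.allclose(y_expand, y_compact)
```

Note that `a[:, None] * W1` is the broadcasting multiply standing in for `diag(a) @ W1`, so the diagonal matrix never has to be materialized.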

    Although dynamic attention can significantly enhance model capacity, it adds parameters and floating point operations (FLOPs) at inference time. Besides multi-layer dynamic convolution, a multi-layer asymmetric convolution (MAConv) is proposed, which removes the dynamic attention from multi-layer dynamic convolution. After training, the weights are fused once and re-parameterized as new static kernels, since they no longer depend on the input. As a result, there are no extra parameters or FLOPs at inference time compared with static convolution.

    The experiments show that MDConv achieves better accuracy with fewer parameters and significantly facilitates training, and that MAConv enhances accuracy without any extra storage or computation cost at inference time compared with static convolution.

    The remainder of this paper is structured as follows. Section 1 briefly presents related work. Section 2 describes the details of multi-layer dynamic convolution. Section 3 introduces multi-layer asymmetric convolution. In Section 4, the experiment settings and the results are presented. Conclusions are made in Section 5.

    1 Related work

    1.1 Dynamic networks

    CondConv[9] and DYConv[10] compute convolutional kernels as a function of the input instead of using static convolutional kernels. In particular, the kernels are over-parameterized as a linear combination of n experts. Although this largely enhances model capacity, CondConv and DYConv introduce many more parameters and are thus prone to over-fitting. More parameters also require more memory. Moreover, the dynamics make training more difficult. To avoid over-fitting and facilitate training, both methods apply additional constraints: CondConv shares routing weights between layers in a block, and DYConv uses a Softmax with a large temperature instead of a Sigmoid on the output of the routing network. WeightNet[11] applies a grouped fully-connected layer to the attention vector to generate the weight in a group-wise manner, achieving comparable accuracy with fewer parameters than CondConv and DYConv. To further compact the model, DCD[12] decomposes the convolutional weight, which reduces the latent space of the weight matrix and results in a more compact model.
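The per-input expert mixing in CondConv/DYConv can be sketched as follows (NumPy, hypothetical shapes; the routing network is replaced by a stand-in that produces arbitrary logits, since only the mixing step is being illustrated):

```python
import numpy as np

rng = np.random.default_rng(1)
n, c_out, c_in, k = 4, 8, 8, 3            # n experts of hypothetical conv shape

experts = rng.standard_normal((n, c_out, c_in, k, k))

def route(temperature=30.0):
    """Stand-in for the routing network: softmax over n logits.
    DYConv uses a large softmax temperature so the mixture stays
    near-uniform early in training, which eases optimization."""
    logits = rng.standard_normal(n)        # would come from an MLP on pooled x
    z = logits / temperature
    e = np.exp(z - z.max())
    return e / e.sum()

alpha = route()
# Dynamic linear combination of the n expert kernels.
kernel = np.tensordot(alpha, experts, axes=1)

assert kernel.shape == (c_out, c_in, k, k)
```

The combined `kernel` is then used as the weight of an ordinary convolution, so only one convolution is executed per input despite the n-fold over-parameterization.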

    Although dynamic weight attention enhances model capacity, it makes training more difficult because it introduces dynamic factors. At the extreme, the dynamic filter network[13] generates all convolutional filters dynamically conditioned on the input. On the other hand, SE[14] is an effective and robust module that applies attention to channel-wise features. Other dynamic networks[15-20] learn dynamic network structures with static convolution kernels.

    1.2 Re-parameterization

    ExpandNet[21] expands a convolution into multiple linear layers without adding any nonlinearity. The expanded network benefits from over-parameterization during training and can be compressed back to the compact form algebraically at inference. For example, a k×k convolution is expanded into three convolutional layers with kernel sizes 1×1, k×k, and 1×1, respectively. ExpandNet increases the network depth and thus makes training more difficult. ACNet[22] uses asymmetric convolution to strengthen the kernel skeletons for powerful networks. At training time, it uses three branches with 3×3, 1×3, and 3×1 kernels; at inference time, the three branches are fused into one static kernel. RepVGG[23] constructs the training-time model from branches consisting of an identity map, a 1×1 convolution, and a 3×3 convolution. After training, RepVGG constructs a single 3×3 kernel by re-parameterizing the trained parameters. ACNet and RepVGG can only be used for k×k (k > 1) convolutions.
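The algebraic compression ExpandNet relies on can be sketched for the 1×1 / k×k / 1×1 case by treating the pointwise convolutions as channel-mixing matrices (NumPy, hypothetical shapes):

```python
import numpy as np

rng = np.random.default_rng(2)
c_in, c1, c2, c_out, k = 4, 6, 6, 4, 3

P = rng.standard_normal((c1, c_in))       # first 1x1 conv (channel mixing)
W = rng.standard_normal((c2, c1, k, k))   # middle k x k conv
Q = rng.standard_normal((c_out, c2))      # last 1x1 conv (channel mixing)

# All three layers are linear, so they compose into one k x k conv:
# for every spatial tap (x, y), the fused slice is Q @ W[:, :, x, y] @ P.
W_fused = np.einsum('oa,abxy,bi->oixy', Q, W, P)

assert W_fused.shape == (c_out, c_in, k, k)
assert np.allclose(W_fused[:, :, 0, 0], Q @ W[:, :, 0, 0] @ P)
```

Because the composition is exact, the three training-time layers can be replaced by the single `W_fused` kernel at inference with no change in output.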

    2 Multi-layer dynamic convolution

    The main problem of CondConv[9] and DYConv[10] is that they introduce many more parameters. DCD[12] reduces the latent space of the weight matrix by matrix decomposition and results in a more compact model. However, DCD brings new problems: (1) it increases the depth of the weight, which hinders error back-propagation; (2) it increases the number of dynamic coefficients, since it uses a full dynamic matrix. More dynamic coefficients make training more difficult. In the extreme case, e.g., the dynamic filter network[13], all the convolutional weights are dynamically conditioned on the input; it is hard to train and cannot be applied successfully in modern deep architectures.

    To reduce the parameters and facilitate training, MDConv is proposed. As shown in Fig.1, MDConv has two branches: (1) the dynamic branch consists of a k×k (k ≥ 1) convolution, an SE module, and a 1×1 convolution; (2) the bypass branch consists of a k×k convolution with a static kernel. The output of MDConv is the sum of the two branches.

    Fig.1 Training and inference of MDConv

    Unlike CondConv and DYConv, MDConv encapsulates the dynamic information into multi-layer convolutions by applying SE attention between two convolutional layers. By scattering the over-parameters over multiple layers, MDConv increases the model capacity with fewer parameters than horizontal scattering. Moreover, MDConv facilitates the training of dynamic networks. In MDConv, the SE attention can be viewed as a diagonal matrix A:

    A = diag(F(x))

    where F is a multi-layer fully-connected attention network. Compared with DCD, which uses a full dynamic matrix, MDConv introduces fewer dynamic coefficients and is thus easier to train. Two additional mechanisms facilitate the training of the deeper layers in MDConv. One is batch normalization after each convolution, which can significantly accelerate and improve the training of deeper networks. The other is the bypass convolution with a static kernel, which shortens the path of error back-propagation.

    MDConv uses two convolutional layers in the dynamic branch. Three or more layers would bring the following problems: (1) more convolutions cost more FLOPs; (2) more dynamic layers are harder to train and need more training data.

    Although the expanding form of MDConv facilitates training, it is more expensive, since three convolutional operators run at training time. The compact form of MDConv can be used for efficient inference. With W1 denoting the k×k kernel of the dynamic branch, W2 the 1×1 kernel, W0 the static bypass kernel, and A = diag(F(x)) the SE attention, MDConv can be defined as

    y = W2 A W1 x + W0 x

    Then the three convolutions of MDConv can be fused into a single convolution for efficient inference:

    y = W_infer x + b_infer, with W_infer = W2 A W1 + W0

    where W_infer and b_infer are the new weight and bias of the convolution after re-parameterization (b_infer collects the bias terms produced by fusing batch normalization).

    In the implementation, the diagonal matrix A does not need to be constructed: after generating the dynamic coefficients, a broadcasting multiply can be used instead of a matrix multiply. MDConv therefore consumes less memory and computation than DCD, which generates a dense matrix A.
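The implementation note above rests on a simple identity: left-multiplying by a diagonal matrix is the same as broadcast row-scaling. It can be checked in a couple of lines (NumPy, arbitrary shapes):

```python
import numpy as np

rng = np.random.default_rng(3)
a = rng.random(5)                   # dynamic SE coefficients
W = rng.standard_normal((5, 7))     # weight of the following layer

# diag(a) @ W scales row i of W by a[i]; broadcasting does the same
# without ever materializing the 5x5 diagonal matrix.
assert np.allclose(np.diag(a) @ W, a[:, None] * W)
```

The broadcast form costs O(rows × cols) multiplies instead of a full matrix product, which is why MDConv's fused weight is cheap to generate.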

    3 Multi-layer asymmetric convolution

    Although dynamic attention can significantly enhance model capacity, it still adds parameters and FLOPs at inference time. Besides MDConv, this paper also proposes MAConv, which removes the dynamic attention from MDConv.

    In ExpandNet[21], a k×k (k ≥ 1) convolution is expanded vertically into three convolutional layers with kernel sizes 1×1, k×k, and 1×1, respectively. When k > 1, ExpandNet cannot use BN in the intermediate layers: the bias produced by fusing BN cannot be passed forward through the k×k kernel and therefore cannot be merged with the bias of the next layer. MAConv avoids this problem and can thus use BN to facilitate training. Besides, MAConv uses the bypass convolution to shorten the path of error back-propagation. BN and the bypass shortcut in MAConv help the training of deep layers, and both can be compressed and re-parameterized into the compact form, so there is no extra cost at inference time.
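The BN folding that MAConv relies on can be sketched for the pointwise case, where the convolution is a matrix per pixel (NumPy; gamma, beta, mu, var, and eps are the usual BN affine parameters and running statistics):

```python
import numpy as np

rng = np.random.default_rng(4)
c_out, c_in = 6, 4

W = rng.standard_normal((c_out, c_in))    # conv weight (1x1 case)
b = rng.standard_normal(c_out)            # conv bias
gamma, beta = rng.random(c_out), rng.standard_normal(c_out)
mu, var, eps = rng.standard_normal(c_out), rng.random(c_out), 1e-5

# Fold BN into the convolution: y = gamma*(Wx + b - mu)/std + beta
# becomes a single affine map with rescaled weight and shifted bias.
std = np.sqrt(var + eps)
W_fused = (gamma / std)[:, None] * W
b_fused = beta + gamma / std * (b - mu)

x = rng.standard_normal(c_in)
y_bn = gamma * ((W @ x + b) - mu) / std + beta
assert np.allclose(W_fused @ x + b_fused, y_bn)
```

The resulting `b_fused` is exactly the BN-induced bias discussed above; in MAConv's layout it can always be merged into the next layer's bias, which is what fails for ExpandNet's intermediate k×k layer when k > 1.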

    ACNet[22] and RepVGG[23] horizontally expand a k×k (k > 1) convolution into convolutions with different kernel shapes. That hinders their use in lightweight networks, which rely heavily on 1×1 pointwise convolutions. MAConv instead uses asymmetric depth rather than asymmetric kernel shapes and expands the convolution both vertically and horizontally. MAConv can be used for both 1×1 convolutions and k×k (k > 1) convolutions.

    4 Experiments

    4.1 ImageNet

    The ImageNet classification dataset[24] has 1.28 × 10^6 training images and 50 000 validation images in 1000 classes. The experiments are based on the official PyTorch example.

    The standard augmentation from the official example is used for training images: (1) random cropping with scale 0.08 to 1.0 and aspect ratio 3/4 to 4/3, then resizing to 224 × 224; (2) random horizontal flipping. Validation images are resized to 256 × 256 and then center-cropped to 224 × 224. Each channel of the input image is normalized to zero mean and unit standard deviation globally. The batch size is 256. Four TITAN Xp GPUs are used to train the models.

    First, comparisons are made between static convolution, DYConv[10], MAConv, and MDConv on MobileNetV2 and ShuffleNetV2. For DYConv, the number of experts is set to the default value of 4[10]. For MDConv and MAConv, the number of intermediate channels m is set to 20. For DYConv and MDConv, the attention network is a two-layer fully-connected network whose hidden size is 1/4 of the input channels. As recommended in the original paper[10], the Softmax temperature in DYConv is set to 30. DYConv, MDConv, and MAConv replace the static pointwise convolutions in the inverted bottlenecks of MobileNetV2 or the blocks of ShuffleNetV2.

    Table 1 Top-1 accuracies of lightweight networks on ImageNet validation dataset

    Comparisons are also made on k×k (k > 1) convolutions. DYConv, MDConv, and MAConv are applied to the 3 × 3 convolutional layer of ResNet18's residual block. Table 2 shows that MAConv is also effective on k×k (k > 1) convolutions. MDConv increases the accuracy by 2.086% with only 1 × 10^6 additional parameters compared with static convolution. Moreover, it achieves higher accuracy with far fewer parameters than DYConv.

    Table 2 Top-1 accuracies of ResNet18 on ImageNet validation dataset

    Table 3 Comparison of validation accuracies (%) between DCD and MDConv on MobileNetV2x1.0 trained with different amounts of data

    Fig.2 Comparison of training and validation accuracy curves between DCD and MDConv on MobileNetV2x1.0 trained with different amounts of data

    Table 4 Comparison of validation accuracies on ImageNet between MDConv and SE

    4.2 CIFAR-10

    CIFAR-10 is a dataset of natural 32 × 32 RGB images in 10 classes, with 50 000 images for training and 10 000 for testing. The training images are zero-padded to 36 × 36 and then randomly cropped to 32 × 32 pixels, followed by random horizontal flipping. Each channel of the input is normalized to zero mean and unit standard deviation globally.

    SGD with momentum 0.9 and weight decay 5e-4 is used. The batch size is set to 128. The learning rate is set to 0.1 and scheduled to reach zero using the cosine annealing scheduler. The networks are trained for 200 epochs.
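The cosine annealing schedule decays the rate from 0.1 at epoch 0 to zero at epoch 200; a sketch of the per-epoch value (this mirrors the standard cosine-annealing formula with a minimum rate of zero, which is assumed here to match the paper's setup):

```python
import math

def cosine_lr(epoch, total_epochs=200, base_lr=0.1):
    """Cosine annealing from base_lr down to 0, with no warm restarts."""
    return 0.5 * base_lr * (1 + math.cos(math.pi * epoch / total_epochs))

# Start of training, halfway, and end of training:
assert abs(cosine_lr(0) - 0.1) < 1e-12
assert abs(cosine_lr(100) - 0.05) < 1e-12
assert cosine_lr(200) < 1e-12
```

In PyTorch this corresponds to `torch.optim.lr_scheduler.CosineAnnealingLR` with `T_max=200` and the default `eta_min=0`.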

    MobileNetV2 with different width multipliers is evaluated on this small dataset. The setups for the different attentions are the same as in subsection 4.1. Each test is run 5 times, and the mean and standard deviation of the accuracies are listed in Table 5. MAConv increases the accuracy compared with static convolution and even outperforms DYConv on this small dataset. MDConv further improves the results and achieves the best performance, while DCD achieves the worst performance with large variance. This again suggests that DCD introduces more dynamics and is difficult to train on a small dataset.

    Table 5 Test accuracies on CIFAR-10 with MobileNetV2

    4.3 CIFAR-100

    CIFAR-100 is a dataset of natural 32 × 32 RGB images in 100 classes, with 50 000 images for training and 10 000 for testing. The training images are zero-padded to 36 × 36 and then randomly cropped to 32 × 32 pixels, followed by random horizontal flipping. Each channel of the input is normalized to zero mean and unit standard deviation globally. SGD with momentum 0.9 and weight decay 5e-4 is used. The batch size is set to 128. The learning rate is set to 0.1 and scheduled to reach zero using the cosine annealing scheduler. The networks are trained for 200 epochs.

    MobileNetV2x0.35 is evaluated on this dataset. The setups for the different attentions are the same as in subsection 4.1. Each test is run 5 times, and the mean and standard deviation of the accuracies are reported in Table 6. The results show that the dynamic networks do not improve accuracy compared with the static network; moreover, more dynamic factors lead to worse performance. For example, DCD is worse than MDConv, and MDConv is worse than DYConv. This is because dynamic networks are harder to train and need more training data, while CIFAR-100 has 100 classes, so each class has fewer training examples than CIFAR-10. MAConv achieves the best performance, 70.032%. When the training dataset is small, MAConv is still effective at enhancing model capacity, whereas the dynamic variants are not.

    Table 6 Test accuracies on CIFAR-100

    4.4 SVHN

    The street view house numbers (SVHN) dataset includes 73 257 digits for training, 26 032 digits for testing, and 531 131 additional digits. Each digit is a 32 × 32 RGB image. The training images are zero-padded to 36 × 36 and then randomly cropped to 32 × 32 pixels, followed by random horizontal flipping. Each channel of the input is normalized to zero mean and unit standard deviation globally. SGD with momentum 0.9 and weight decay 5e-4 is used. The batch size is set to 128. The learning rate is set to 0.1 and scheduled to reach zero using the cosine annealing scheduler. The networks are trained for 200 epochs.

    MobileNetV2x0.35 is used on this dataset. The setups for the different attentions are the same as in subsection 4.1. Each test is run 5 times, and the mean and standard deviation of the accuracies are reported in Table 7. The results show that DCD decreases performance compared with the static convolution; DCD is hard to train on this small dataset. DYConv, MAConv, and MDConv increase the accuracy compared with the static one. Among them, MAConv and MDConv achieve similar performance, better than DYConv.

    Table 7 Test accuracies on SVHN

    4.5 Ablation study

    Ablation experiments are carried out on CIFAR-10 using two network architectures. One is MobileNetV2x0.35. To make the comparison more distinct, a smaller and simpler network named SmallNet is also used. SmallNet has a first convolutional layer with 3 × 3 kernels and 16 output channels, followed by three blocks. Each block comprises a 1 × 1 pointwise convolution with stride 1 and no padding, and a 3 × 3 depthwise convolution with stride 2 and padding 1. The three blocks have 16, 32, and 64 output channels, respectively. Each convolutional layer is followed by batch normalization and ReLU activation. The output of the last layer is passed through a global average pooling layer, followed by a Softmax layer with 10 classification outputs. The other experiment settings are the same as in subsection 4.2.

    The effect of the bypass shortcut in MAConv and MDConv is investigated first. To show the effect of dynamic attention, ReLU activation is also used in place of dynamic attention in the multi-layer branch. The results are shown in Table 8, where w/ means with and w/o means without the bypass shortcut. The results show that the bypass shortcut improves accuracy, especially in the deeper network (MobileNetV2x0.35). Moreover, MDConv (with dynamic attention) increases model capacity compared with MAConv (without dynamic attention). Using ReLU activation instead of dynamic attention can increase it further; however, the weights can then no longer be fused into a compact form because of the nonlinear activation, so the storage and computation costs at inference time are much higher than those of a single-layer convolution.

    Table 8 Effect of bypass shortcut

    Next, the networks are trained using the compact form directly. Table 9 compares expanding training with compact training. The results show that expanding training improves the performance of MAConv and MDConv. To evaluate the benefit of BN in expanding training, BN is applied after the addition of the two branches instead of after each convolution (i.e., without BN after each convolution). The results show that BN in the expanding form helps training, since it achieves better performance than the variant without it. Moreover, the expanding form itself helps training even without BN, since it achieves better performance than compact training.

    Table 9 Effect of expanding training

    The effects of different input/output channel widths and different intermediate channel widths are also investigated. Different width multipliers are applied to all layers except the first layer of SmallNet. The results are shown in Table 10: the gains of MAConv and MDConv are higher with fewer channels, which implies that over-parameterization is more effective in smaller networks. SmallNet is then trained with different numbers of intermediate channels. The results are shown in Table 11: the gains of MAConv and MDConv are trivial when the intermediate channels are increased on the CIFAR-10 dataset. Using more intermediate channels means the dynamic part has more influence, which increases the difficulty of training and requires more training data.

    Table 10 Effect of input/output channel width

    Table 11 Effect of different intermediate channel width

    Finally, different setups for the attention network are investigated in SmallNet with MDConv. Different numbers of hidden units are used, ranging from 1/4 to 4 times the number of input channels. Table 12 shows that increasing the hidden units improves performance up to 4 times the input channels. Softmax, and Softmax with different temperatures as proposed in Ref.[10], are used as the gate function in the last layer of the attention network. As shown in Table 13, Softmax achieves better accuracy than Sigmoid; however, the temperature does not improve performance.

    Table 12 Effect of the hidden layer in the attention network

    Table 13 Effect of the gate function in the last layer of the attention network

    5 Conclusions

    Two powerful convolutions are proposed to increase model capacity: MDConv and MAConv. MDConv expands a static convolution into a multi-layer dynamic one, with fewer parameters but stronger model capacity than horizontal expansion. MAConv has no extra parameters or FLOPs at inference time compared with static convolution. MDConv and MAConv are evaluated on different networks. Experimental results show that both improve accuracy compared with static convolution. Moreover, MDConv achieves better accuracy with fewer parameters and facilitates training compared with other dynamic convolutions.
