
    MobileNet network optimization based on convolutional block attention module


    ZHAO Shuxu, MEN Shiyao, YUAN Lin

    (School of Electronics and Information Engineering, Lanzhou Jiaotong University, Lanzhou 730070, China)

Abstract: Deep learning technology is widely used in computer vision. Generally, a large amount of data is used to train the model weights in deep learning so as to obtain a model with high accuracy. However, massive data and complex model structures require more computing resources. Since people generally can only carry and use mobile and portable devices in application scenarios, neural networks face limitations in terms of computing resources, size and power consumption. Therefore, the efficient lightweight model MobileNet is used as the basic network in this study for optimization. First, the accuracy of the MobileNet model is improved by adding methods such as the convolutional block attention module (CBAM) and expansion convolution. Then, the MobileNet model is compressed by using pruning and weight quantization algorithms based on weight magnitude. Afterwards, methods such as Python crawlers and data augmentation are employed to create a garbage classification data set. Based on the above model optimization strategy, a garbage classification application is deployed on mobile phones and Raspberry Pi boards, making it more convenient to complete the garbage classification task.

    Key words: MobileNet; convolutional block attention module (CBAM); model pruning and quantization; edge machine learning

    0 Introduction

Artificial intelligence has developed rapidly in many engineering fields through deep learning and machine learning in recent years. Deep learning performs well as an effective way to achieve intelligence in the computer vision field. It can accurately solve the problems of image classification, target detection and image segmentation by using a large amount of data to train the model[1]. However, models become more and more complex in pursuit of better accuracy, resulting in higher computing requirements.

Therefore, the lightweight network model has become a new direction for deep learning[2]. Currently, efficient models are mainly obtained through two approaches.

The first is to optimize existing high-performing models, i.e., model compression. The classic model compression algorithms are listed below.

1) Weight pruning prunes a trained model and is currently the most widely used method in model compression. The usual approach is to find an effective criterion for judging the importance of parameters, cut the unimportant connections or filters to reduce model redundancy, and finally retrain the model to restore some of the accuracy lost by pruning[3].

2) Weight sharing and quantization is another efficient method to compress the model, and it can further compress a pruned network. The model is compressed by reducing the number of bits needed to represent each weight, limiting the number of effective weights and storing multiple connections that share the same weight. The commonly used weight sharing method is K-means clustering. After that, the weight parameters are re-encoded and saved according to coding principles from communication theory.

3) Huffman coding, an optimal prefix code generally used for lossless data compression, adopts variable-length code words to encode source symbols. Symbols are coded according to their probability of occurrence, so that more common symbols are represented with fewer bits.

The second idea is to build a lightweight network model directly by designing the convolution operation and network structure. Google proposed the MobileNet network model based on depthwise separable convolution, in which the convolution operation is essentially a sparse expression with less redundant information[4]. Compared with existing networks for image classification such as Inception V3 and VGG16, this network exhibits small size, few parameters and fast calculation speed. Simultaneously, Google provides a seamless and convenient mobile solution called TensorFlow Lite for MobileNet. Besides, the effectiveness of MobileNet has been demonstrated by several experiments[4].

Therefore, MobileNet is used as the basic network for optimization research in this paper and applied to a real application. The performance of the optimized model on the CIFAR-10 standard data set is presented, and an Android garbage classification application is implemented based on the optimization strategy.

    1 Model optimization

    1.1 Basic network

MobileNet is an efficient and lightweight neural network model proposed by Google for mobile and embedded devices[4]. In the MobileNet network, a new convolution called depthwise separable convolution is proposed to build the lightweight model[4]. This convolution separates the standard convolution into two new convolution operations that extract the spatial (plane) features and the channel features of the data respectively, as illustrated in Fig.1.

    Fig.1 Depthwise separable convolution
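To make the structure concrete, the following is a minimal sketch of one such block, assuming TensorFlow/Keras (which the paper's TensorFlow Lite deployment suggests); the filter count, stride and input size are illustrative rather than the paper's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def depthwise_separable_block(x, pointwise_filters, stride=1):
    """One MobileNet-style block: the depthwise conv extracts per-channel
    spatial features, the pointwise (1x1) conv mixes channel features."""
    x = layers.DepthwiseConv2D(kernel_size=3, strides=stride,
                               padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.Conv2D(pointwise_filters, kernel_size=1,
                      padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

# Example: a 32x32 RGB input (CIFAR-10 sized) passed through one block.
inputs = tf.keras.Input(shape=(32, 32, 3))
outputs = depthwise_separable_block(inputs, pointwise_filters=64)
model = tf.keras.Model(inputs, outputs)
model.summary()
```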

Meanwhile, the size of the model and the number of parameters can be adjusted with two simple hyperparameters, striking a balance between accuracy and efficiency. MobileNet significantly reduces the number of network parameters and computations compared with other advanced models on the ImageNet standard data set, while its accuracy does not decrease dramatically compared with traditional excellent networks[4]. As presented in Table 1, its classification accuracy on the ImageNet dataset is basically the same as that of networks such as VGG.

    Table 1 MobileNet comparison to popular models

The same conclusion is obtained in another set of experiments. Table 2 presents a fine-grained classification task on the Stanford Dogs dataset: the classification accuracy of MobileNet is close to that of Inception V3 while the computation and model size decrease tremendously[4].

Table 2 MobileNet for Stanford Dogs

To sum up, the MobileNet network is a valid lightweight model that can ensure classification accuracy in the task, allowing it to be deployed on edge terminals for machine learning inference. Therefore, the computing resources and communication delays of the entire system can be reduced.

    1.2 Solutions to overfitting

Neural networks often have overfitting problems. To prevent overfitting during training, an L2 regularization strategy is added to the depthwise separable convolution structure. This strategy is arranged after the depthwise convolution and before the pointwise convolution, as illustrated in Fig.2.

L2 regularization is a classic method to prevent overfitting in deep learning; it limits the complexity of the model by constraining the weight coefficients, for example by driving some elements of the weight vector toward zero or limiting the number of effective non-zero elements. Specifically, a penalty term Ω(ω) is added to the original loss function to restrict the model complexity[5]. Its mathematical expression is

L(ω; X, y) = L1(ω; X, y) + λΩ(ω),   (1)

where the penalty term takes the L2 form

Ω(ω) = ‖ω‖₂² = ∑ωᵢ².   (2)

    Fig.2 L2 regularization in network structure

For the model weight coefficient ω, the solution is to minimize the loss function, that is

ω* = arg min_ω L1(ω; X, y),   (3)

where X represents the input training data, and y denotes the output result.

The complexity of the model can only be limited by adjusting the value of the constant C to satisfy the condition

∑ωᵢ² ≤ C.   (4)

It should be ensured that the sum of squares of all ω does not exceed the parameter C. Generally, the goal is to minimize the training sample error L1 while keeping the sum of squares of ω less than C[6].
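As an illustration of how such a penalty can be attached in practice, the following is a minimal Keras sketch of a depthwise separable block with an L2 regularizer on both convolutions; the weight-decay value of 1e-4 matches the training setting reported later, but the exact layer layout is an assumption.

```python
from tensorflow.keras import layers, regularizers

def separable_block_with_l2(x, pointwise_filters, weight_decay=1e-4):
    """Depthwise separable block with an L2 penalty attached to the
    convolution weights to limit model complexity."""
    x = layers.DepthwiseConv2D(
        3, padding="same", use_bias=False,
        depthwise_regularizer=regularizers.l2(weight_decay))(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.Conv2D(
        pointwise_filters, 1, use_bias=False,
        kernel_regularizer=regularizers.l2(weight_decay))(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)
```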

    1.3 Optimization of feature extraction

After solving the problem of overfitting, the ability of the network to extract data features needs to be improved. To avoid increasing the number of layers and the complexity of the model structure as much as possible, expansion (dilated) convolution and CBAM are added to the feature extraction structure instead.

As illustrated in Fig.3, expansion convolution is added to the network structure. The convolution generally used in neural networks operates on consecutive adjacent pixels. Expansion convolution is different: its sampling points are distributed more sparsely while the number of operation parameters remains unchanged, as presented in Fig.4.

    Fig.3 Expansion convolution in network structure

    Fig.4 Expansion convolution

Expansion convolution can enlarge the receptive field of the convolution kernel and obtain more contextual information between image pixels without increasing the model parameters. In computer vision tasks, this makes it possible to understand the context of the picture better.
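A minimal Keras sketch of such a layer is given below; the filter count is illustrative and the dilation rate of 2 is only an example value.

```python
from tensorflow.keras import layers

# A 3x3 kernel with dilation_rate=2 samples every other pixel, covering a 5x5
# receptive field with the same 9 weights per channel as an ordinary 3x3 conv.
dilated_conv = layers.Conv2D(filters=32, kernel_size=3,
                             dilation_rate=2, padding="same")
```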

    After adding expansion convolution, the CBAM is added at the end of the block of each depthwise separable convolution, as exhibited in Fig.5.

    Fig.5 CBAM module in network structure

In this way, the feature extraction capability of the depthwise separable convolutional block is significantly improved at the cost of adding only a small number of operations.

CBAM is a simple and effective attention module for feedforward convolutional neural networks[7]. Given an input feature map, the module infers attention maps along two independent dimensions, channel and spatial. The attention maps are then multiplied by the input feature map to perform adaptive feature refinement, as exhibited in Fig.6. From the perspective of computing resources, CBAM is a lightweight module that can be integrated seamlessly into any CNN without requiring many additional calculations, and the whole network can still be trained end to end[7]. Experimental results on the ImageNet-1K, MS COCO and VOC 2007 datasets demonstrate that the module improves the classification and detection performance of different models, fully illustrating its effectiveness.

    Fig.6 CBAM module
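The following is a minimal sketch of a CBAM-style block in Keras, assuming the standard formulation from [7] (channel attention via a shared MLP over average- and max-pooled features, followed by spatial attention via a 7×7 convolution); the reduction ratio and kernel size are commonly used defaults, not values reported in this paper.

```python
import tensorflow as tf
from tensorflow.keras import layers

def cbam(x, reduction=8, spatial_kernel=7):
    """Convolutional block attention: channel attention followed by
    spatial attention, each multiplied onto the feature map."""
    channels = x.shape[-1]

    # Channel attention: shared MLP over global average- and max-pooled features.
    shared_dense1 = layers.Dense(channels // reduction, activation="relu")
    shared_dense2 = layers.Dense(channels)
    avg = shared_dense2(shared_dense1(layers.GlobalAveragePooling2D()(x)))
    mx = shared_dense2(shared_dense1(layers.GlobalMaxPooling2D()(x)))
    ca = tf.sigmoid(avg + mx)[:, None, None, :]   # shape (B, 1, 1, C)
    x = x * ca

    # Spatial attention: concat channel-wise mean and max maps, then a conv.
    avg_map = tf.reduce_mean(x, axis=-1, keepdims=True)
    max_map = tf.reduce_max(x, axis=-1, keepdims=True)
    sa = layers.Conv2D(1, spatial_kernel, padding="same", activation="sigmoid")(
        tf.concat([avg_map, max_map], axis=-1))
    return x * sa
```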

    1.4 Optimization strategies for training

    1.4.1 Cross-entropy loss function

After the model structure is optimized, the training of the model needs to be considered. The loss function is the most basic and critical element in model training, and the cross-entropy loss function is commonly used for training neural networks. Cross-entropy describes the closeness between two probability distributions, the real values and the predicted values, as expressed in

H(p, q) = -∑p(x)log q(x),   (5)

where p(x) denotes the probability distribution of the true classification, and q(x) represents the probability distribution of the inferred classification[8]. The smaller the value of the cross-entropy, the closer the two probability distributions[8].

Cross-entropy is used as a loss function for classification problems in machine learning[9]. The loss function over all sample data is

L = -(1/N)∑ᵢ[yᵢ log pᵢ + (1 - yᵢ)log(1 - pᵢ)],   (6)

where N represents the number of samples, y denotes the sample label, and p refers to the predicted probability. Suppose that there are two separate probability distributions p(x) and q(x) for the same random variable X; the KL divergence can be used to measure the difference between the two distributions. In a machine learning training task, the input data and labels are fixed, so the true probability distribution p(x) is determined and its information entropy is constant. Since the KL divergence measures the difference between the true distribution p(x) and the predicted distribution q(x), the smaller its value, the better the prediction; therefore, the KL divergence needs to be minimized. The cross-entropy equals the KL divergence plus a constant (the information entropy) and is easier to compute. Consequently, the cross-entropy loss function is often used to calculate the loss in deep learning.
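For concreteness, a small NumPy sketch of the cross-entropy computation over N samples is given below; the example labels and predictions are made up for illustration.

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Average cross-entropy over N samples; y_true is one-hot,
    y_pred holds the predicted class probabilities."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)   # avoid log(0)
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=1))

# Example: two samples, three classes.
y_true = np.array([[1, 0, 0], [0, 0, 1]])
y_pred = np.array([[0.7, 0.2, 0.1], [0.1, 0.2, 0.7]])
print(cross_entropy(y_true, y_pred))   # ≈ 0.357
```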

    1.4.2 Adam algorithm

On the other hand, the optimizer updates the network parameters that determine model training and model output so that they approach or reach the optimal values, thereby minimizing the loss function. Therefore, the optimization function is as important as the loss function in neural networks. The Adam algorithm is adopted to replace the SGD algorithm when the classification results are iteratively optimized. Adam is a popular algorithm in deep learning because it can quickly obtain good results. Experimental results indicate that Adam performs well in practice and compares favorably with other stochastic optimization methods (Fig.7). In the original paper, experiments verified that the convergence of the algorithm meets the expectations of the theoretical analysis.

Thus, the logistic regression algorithm on MNIST can be optimized by using the Adam algorithm. Adam can also be employed with the multilayer perceptron algorithm on MNIST and when training convolutional neural networks on the CIFAR-10 image recognition dataset. To sum up, experiments on large models and data sets demonstrate that Adam can effectively solve practical deep learning problems[10].

Fig.7 Adam on MNIST
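A minimal Keras sketch of the MNIST logistic-regression experiment trained with Adam is given below; the learning rate, batch size and epoch count are illustrative defaults rather than the settings of the original Adam paper.

```python
import tensorflow as tf

# Softmax (logistic) regression on MNIST trained with Adam.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=128)
```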

    1.5 Model compression

Based on the above strategies, the accuracy of the model can be increased significantly. In the next step, a model compression algorithm called pruning is used to limit the size and the number of parameters of the model. Pruning a trained model is currently the most widely used approach in model compression. As presented in Fig.8, the model weights are first trained, then pruned, and finally the model is retrained.

    Fig.8 Pruning method

Weight pruning refers to eliminating unnecessary values in the weight tensors. First, sort the neural network parameters according to the magnitude of the weights. Then, set some parameters to zero as needed to eliminate the low-weight connections among the layers of the neural network[11]. This brings the following advantages to the model (a minimal pruning sketch is given after the list).

1) Compression. A sparse weight tensor can be stored compactly by keeping only the non-zero values and their corresponding coordinates.

2) Speed. Sparse weight tensors allow unnecessary calculations to be skipped during model inference.
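The following is a minimal sketch of magnitude-based pruning using the TensorFlow Model Optimization toolkit, which implements this sort-and-zero scheme; the small stand-in CNN, the brief retraining schedule and the 0.6 sparsity mirror the MNIST experiment described next, but the details are assumptions rather than the paper's exact code.

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Assumed setup: a small trained Keras classifier and MNIST data, standing in
# for the three-layer CNN used in the experiment below.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile("adam", "sparse_categorical_crossentropy", ["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=128)

# Wrap the trained model so 60% of the smallest-magnitude weights are zeroed.
pruned = tfmot.sparsity.keras.prune_low_magnitude(
    model,
    pruning_schedule=tfmot.sparsity.keras.ConstantSparsity(
        target_sparsity=0.6, begin_step=0))
pruned.compile("adam", "sparse_categorical_crossentropy", ["accuracy"])

# Retrain briefly to recover accuracy; the callback keeps the pruning masks updated.
pruned.fit(x_train, y_train, epochs=2, batch_size=128,
           callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])

# Strip the pruning wrappers before exporting the sparse model.
final_model = tfmot.sparsity.keras.strip_pruning(pruned)
```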

The experiments show that the model can be compressed by several times on the MNIST dataset, as illustrated in Table 3. A CNN with three convolutional layers is built, and MNIST is used to train the model. The MNIST handwritten digit data set comes from the National Institute of Standards and Technology of the United States and is one of the best-known public data sets, commonly used as an introductory case for deep learning. It contains 70 000 samples of digit pictures drawn by 250 people of different occupations, of which 60 000 form the training set and 10 000 the test set. Each sample is a 28×28 pixel handwritten digit picture representing a digit from 0 to 9. The model is trained for 12 epochs on MNIST, and the final accuracy is 0.992. After pruning the model (with a sparsity of 0.6), the model size is compressed by nearly 4-5 times while the accuracy of the model is not reduced.

Table 3 Pruning on MNIST

Moreover, the model size can be further compressed by combining pruning with other optimization techniques such as quantization. Quantization converts the weights from 32-bit to 8-bit representation, reducing the size of the model by several times. Its performance on MNIST is verified through experiments, as presented in Table 4.

Table 4 Weight quantization on MNIST
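As a sketch of this step, post-training dynamic-range quantization with the TensorFlow Lite converter stores weights in 8 bits; `final_model` here stands for any trained Keras model, such as the pruned one from the earlier sketch.

```python
import tensorflow as tf

# Post-training dynamic-range quantization: weights of the (pruned) Keras
# model are stored as 8-bit integers instead of 32-bit floats.
converter = tf.lite.TFLiteConverter.from_keras_model(final_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_bytes = converter.convert()

with open("mnist_pruned_quant.tflite", "wb") as f:
    f.write(tflite_bytes)
print(f"Quantized model size: {len(tflite_bytes) / 1024:.1f} KB")
```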

    2 Experiments

    In this part, the effects of the original model and the optimized model are compared by using the standard data set CIFAR10.

    2.1 CIFAR-10 dataset

CIFAR-10 is a computer vision data set for universal object recognition collected by Alex Krizhevsky and Ilya Sutskever. It contains 60 000 32×32 RGB color images in 10 categories, of which 50 000 form the training set and 10 000 the test set[12], as shown in Fig.9.

    Fig.9 CIFAR10 data

    2.2 Experimental results

    2.2.1 MobileNet original model

First, the CIFAR-10 dataset is used to directly train the original MobileNet model. Before training, the relevant parameters are set: the number of classes to 10, the batch size to 64, the number of epochs to 300, and the number of iterations per epoch to 782. The dropout rate is set to 0.2 and the weight decay to 1e-4. After training, the accuracy on the test data set remains at about 0.8, as illustrated in Fig.10 (after smoothing).

    Fig.10 Accuracy of original model on test datasets
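A minimal sketch of this baseline experiment is given below, assuming the Keras built-in MobileNet; note that this built-in model does not expose a weight-decay argument, so the 1e-4 weight decay mentioned above would have to be added separately (e.g., via kernel regularizers) and is omitted here.

```python
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Keras' built-in MobileNet reconfigured for 32x32 CIFAR-10 inputs.
model = tf.keras.applications.MobileNet(
    input_shape=(32, 32, 3), weights=None, classes=10, dropout=0.2)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# 50 000 training images / batch size 64 ≈ 782 iterations per epoch.
model.fit(x_train, y_train, batch_size=64, epochs=300,
          validation_data=(x_test, y_test))
```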

    2.2.2 Result of L2 regularization optimization

To prevent overfitting, the L2 regularization strategy is added to the pointwise convolution, and the Adam algorithm and cross-entropy loss function are used to train the model. Simultaneously, the accuracy of the model is improved to 0.85 by using he_normal to replace the default weight initialization strategy glorot_uniform[13-14], as exhibited in Fig.11 (after smoothing).

    Fig.11 Accuracy of optimized model on test datasets
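For illustration, the following sketch shows how the pointwise convolution can combine the L2 penalty with the he_normal initializer in Keras, replacing the default glorot_uniform; the filter count is arbitrary.

```python
from tensorflow.keras import layers, regularizers

# Pointwise (1x1) convolution with L2 weight decay and he_normal initialization
# in place of Keras' default glorot_uniform initializer.
pointwise = layers.Conv2D(
    filters=64, kernel_size=1, padding="same", use_bias=False,
    kernel_initializer="he_normal",
    kernel_regularizer=regularizers.l2(1e-4))
```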

    2.2.3 Optimization results of feature extraction

After removing the impact of overfitting, the CBAM module is added after each depthwise separable convolution to improve the ability of the model to extract features. Besides, the standard convolution of the first layer is changed to expansion convolution to obtain a larger receptive field and more abstract features. The model accuracy finally increases steadily to 0.90, as exhibited in Fig.12 (after smoothing), and the model comparison is presented in Table 5.

    Fig.12 Optimized model accuracy on test datasets using CBAM

    Table 5 Model comparison of CBAM

    2.2.4 Model compression

After obtaining the more accurate model weights, the model is compressed. First, the model weights are sorted according to their magnitude. Then, the weights are pruned layer by layer with a pruning sparsity of 0.6. Next, the pruned model is retrained for another 50 epochs to restore the accuracy. After pruning and retraining, converting the model weights from 32-bit floats to 8-bit integers is explored to further compress the model. However, this significantly reduces the accuracy, because after pruning each of the remaining weights has a large influence on the inference results. Therefore, these weights cannot simply be quantized until a more effective quantization strategy is found.

    Through the above processing, the size of the model is compressed nearly 4-5 times, and the test results indicate that the accuracy of the model is not significantly reduced. More details are provided in Table 6.

    Table 6 Comparison of quantization and pruning

    2.3 Experimental summary

    According to the proposed optimization strategy, the accuracy of the model is improved from 0.8 to 0.85 by using the L2 regularization and Adam algorithm. Then the accuracy of the model is improved to 0.9 by CBAM and expansion convolution. Finally, the size of the model is effectively compressed through pruning and weight quantization. The model reaches a balance between accuracy and size. More details are provided in Table 7.

    Table 7 Comparison of models

    3 Application of garbage classification

    After using the standard dataset to illustrate the effect of the optimization strategy, the garbage classification problem is taken as an example to illuminate the application value of model optimization. First, collect the garbage data that frequently appears in public places as a data set. Then, train and compress the model according to the model optimization strategy. Finally, convert the training model to the corresponding format, and deploy the application on the Android system.

    3.1 Experimental environment

    The model training environment and the mobile phone deployment environment are presented in Tables 8 and 9.

    Table 8 Model training environment

    Table 9 Mobile deployment environment

    3.2 Data preprocessing

Since there is no standard dataset for garbage classification, it is necessary to create a training dataset for it. According to the regulations released by Shanghai, domestic garbage is classified into four basic categories: recyclable garbage, hazardous garbage, wet garbage and dry garbage. The data preprocessing is described as follows.

1) According to the four garbage categories in the regulations, a Python crawler is used to collect sample images of each type of garbage from the Internet. Besides, pictures taken in real life are also added to the dataset.

2) Delete unreasonable data. Invalid images with too much interfering information are deleted from the data set, as illustrated in Fig.13.

    Fig.13 Invalid data and interfering data

    3) Divide each type of data into the train data set and test data set at a ratio of 7∶3.

4) Data augmentation. Existing data are flipped or rotated to obtain more data, giving the neural network better generalization ability[15]. As presented in Figs.14 and 15, the samples are randomly cropped and flipped horizontally and vertically (a minimal augmentation sketch is given below).

    Fig.14 Random cropping of data

    Fig.15 Horizontal and vertical flipping of data

    In summary, the garbage classification data set contains 1 000 pictures in each class after the processing of the above four steps.
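A minimal tf.data sketch of the random cropping and flipping described in step 4 is given below; the image sizes, crop size and the stand-in arrays are illustrative, not the exact preprocessing used for the garbage dataset.

```python
import numpy as np
import tensorflow as tf

# Hypothetical arrays standing in for the collected garbage images (RGB) and
# their four-class labels; shapes and sizes are illustrative only.
images = np.random.rand(100, 256, 256, 3).astype("float32")
labels = np.random.randint(0, 4, size=100)

def augment(image, label):
    image = tf.image.random_crop(image, size=[224, 224, 3])   # random cropping
    image = tf.image.random_flip_left_right(image)            # horizontal flip
    image = tf.image.random_flip_up_down(image)                # vertical flip
    return image, label

train_ds = (tf.data.Dataset.from_tensor_slices((images, labels))
            .map(augment, num_parallel_calls=tf.data.AUTOTUNE)
            .shuffle(256)
            .batch(32)
            .prefetch(tf.data.AUTOTUNE))
```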

    3.3 Results and discussion

    3.3.1 Model training

First, the trained and optimized model is tested on the PC side. As illustrated in Figs.16-18, tissues and batteries are correctly identified as dry garbage (dry score = 0.999 98) and hazardous garbage (hazard score = 0.997 801), respectively. Besides, the framework tool TensorBoard is used to display the visual evaluation of the classification results on the test set, as exhibited in Figs.19 and 20. The classification accuracy on the test data set is maintained at 93%-95%, and the cross-entropy loss is stable in the range of 0.15-0.2.

    Fig.16 Test data: napkin and lead battery

    Fig.17 Napkin classification result

    Fig.18 Lead battery classification result

    Fig.19 Accuracy on test dataset

    Fig.20 Cross entropy loss function result

    3.3.2 Android mobile deployment

The Python API is used to convert the model obtained on the PC into a TFLite format file. Then, this file is put into the Android project. Finally, the application is deployed to the mobile phone through Android Studio. The classification results for empty bottles and waste food are presented in Fig.21.
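A minimal sketch of this conversion step is given below, together with a quick sanity check using the TFLite Python interpreter before the file is copied into the Android project; the placeholder network, file name and input stand in for the trained garbage classifier.

```python
import numpy as np
import tensorflow as tf

# Tiny placeholder network standing in for the trained garbage classifier,
# included only so the snippet runs end to end.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),   # four garbage classes
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("garbage_classifier.tflite", "wb") as f:
    f.write(converter.convert())

# Sanity check with the TFLite Python interpreter before copying the file
# into the Android Studio project's assets folder.
interpreter = tf.lite.Interpreter(model_path="garbage_classifier.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

dummy = np.random.rand(*inp["shape"]).astype(np.float32)  # placeholder input image
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))               # predicted class scores
```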

Referring to the published papers on the MobileNet network[16-17], the experiment basically completes the task. However, the model cannot fully learn all the features of garbage images due to the limited size of the data set and the limitations of sample collection, which may result in insufficient classification accuracy and poor robustness.

    Fig.21 Classification results on phone

    4 Conclusions

The lightweight model MobileNet is taken as the research object, and its feature extraction ability and parameter efficiency are optimized. The results demonstrate that the accuracy of the model on the standard CIFAR-10 data set is improved and the model size is effectively reduced. Besides, a garbage image classification application is deployed on Android phones. Furthermore, more data could be obtained by building a big data collection platform for garbage classification, which would further improve the robustness of the model.
