
    An attention-based prototypical network for forest fire smoke few-shot detection

    2022-09-08 12:54:04 · Tingting Li · Haowei Zhu · Chunhe Hu · Junguo Zhang
    Journal of Forestry Research, 2022, Issue 5

    Tingting Li·Haowei Zhu·Chunhe Hu·Junguo Zhang

    Abstract Most existing deep learning methods rely on large amounts of annotated data, so they are inappropriate for forest fire smoke detection with limited data. In this paper, a novel hybrid attention-based few-shot learning method, named Attention-Based Prototypical Network, is proposed for forest fire smoke detection. Specifically, the feature extraction network, which incorporates a convolutional block attention module, extracts high-level and discriminative features and further decreases the false alarm rate caused by suspected smoke areas. Moreover, we design a meta-learning module to alleviate the overfitting issue caused by limited smoke images; the meta-learning network achieves effective detection by comparing the distance between the class prototypes of support images and the features of query images. A series of experiments on forest fire smoke datasets and the miniImageNet dataset testify that the proposed method is superior to state-of-the-art few-shot learning approaches.

    Keywords Forest fire smoke detection · Few-shot learning · Channel attention module · Spatial attention module · Prototypical network

    Introduction

    Fire detection

    Forest fires are fast-spreading and difficult to extinguish due to the large amount of combustible material. They not only cause serious damage to human life and property, but also lead to the rapid degeneration of the ecological environment (Peng and Wang 2019). Therefore, early detection of forest fires is essential for disaster reduction. Image-based fire detection methods are more suitable than sensors for outdoor environments such as forests, mountainous areas and parking areas (Frizzi et al. 2016). These methods are generally divided into flame detection and smoke detection (Barmpoutis et al. 2020). Smoke is an important sign for early fire detection because it spreads faster than flame and moves over a wide area (Wang et al. 2019). Conventional image-based fire smoke detection methods rely on experience to handcraft low-level features (e.g., color, texture, shape). However, such methods can only be applied in specific scenarios, and their accuracy decreases once the environment changes (Hu and Lu 2018).

    With the rapid development of artificial intelligence and computer vision technology, deep learning has made significant strides on several challenging visual tasks (Ferreira et al. 2020; Xie et al. 2019; Hu and Guan 2020; Liu et al. 2019). Fire smoke detection methods based on convolutional neural networks (CNNs) have recently drawn much attention and have significantly improved the accuracy of fire smoke detection. Compared with traditional image-based smoke detection approaches, these methods extract depth features automatically during the feature extraction process. Moreover, they can be applied to variable wilderness environments. However, the false alarm rate of forest fire smoke detection increases when smoke accounts for only a small portion of the image (Li et al. 2018), because convolutional neural networks have difficulty focusing on small smoke and are unable to extract discriminative smoke features (Li et al. 2018). In this case, the network tends to detect by the image background rather than the smoke itself. In addition, these images often have complex backgrounds and may contain scattered areas such as shadows, clouds, haze and fog. These scattered areas are one of the key challenges of small smoke detection.

    Recently, few-shot learning (FSL) methods have become the main approach to the overfitting problem caused by limited training data. The success of fire smoke detection methods based on deep learning can be partially attributed to large amounts of annotated data (Xue and Wang 2020). However, forest fire smoke images far away from the camera are very hard to capture in real life, and there are few small smoke images in public forest fire smoke datasets. Therefore, it is necessary to propose an effective detection approach for small smoke based on limited training data. Few-shot learning follows the learning-to-learn paradigm and aims to solve a target classification task by learning a set of base classification tasks (Rizve et al. 2021). The target task dataset is divided into a support set and a query set. Few-shot learning methods are able to classify query images using a few annotated support images (Zhang et al. 2021). How to leverage information from the support images of small smoke is another key challenge in the small smoke detection scenario.

    To address the aforementioned challenges, we propose a novel hybrid attention-based few-shot learning (FSL) algorithm, called Attention-Based Prototypical Network (ABPNet), for forest fire smoke detection. The proposed method consists of two modules: a feature extraction module and a meta-learning module. First, inspired by the fact that the attention mechanism can increase the discriminative ability of features weighted by attention maps, a CNN embedded with the Convolutional Block Attention Module (CBAM) is designed as the feature extraction module to extract high-level and discriminative forest fire smoke features. Second, the meta-learning module is implemented to compare class prototypes with query targets, thereby avoiding the overfitting problem of few-shot forest fire smoke detection. Finally, the performance of the few-shot learning algorithm with hybrid attention is validated on our forest fire smoke dataset and the miniImageNet dataset.

    The remainder of this paper is organized as follows. Introduction reviews work related to fire detection, few-shot learning and attention mechanisms. The proposed ABPNet method is systematically introduced in Materials and methods. In Results and discussion, we compare the performance of our method with classical few-shot learning algorithms. Finally, the paper is concluded in Conclusions. In this section, we present the strengths and weaknesses of flame detection and smoke detection methods, then review state-of-the-art few-shot learning approaches for object detection, and finally introduce the application of different attention mechanisms.

    Image-based fire detection methods can be classified into flame detection and smoke detection. Previous image-based approaches (Ko et al. 2009; Chen et al. 2010; Yuan 2012; Yuan et al. 2013) rely on prior knowledge to extract smoke and flame features manually, which does not provide reliable robustness. Due to their ability to automatically extract features and learn complex representations, fire detection methods based on deep neural networks have drawn much attention in recent years. Tao et al. (2016) proposed a novel smoke detection framework based on CNNs that extracts smoke features automatically. Shen et al. (2018) used an optimized YOLO model for flame detection from video frames. To effectively exploit long-range motion context and spatial representation, Yin et al. (2019) designed a novel recurrent motion-space context model. These methods not only overcome the limitations of conventional image-based methods, but also remarkably improve fire detection accuracy. To further improve fire detection performance and locate fires, many studies have integrated conventional machine learning algorithms into CNNs. Maksymiv et al. (2016) integrated AdaBoost, local binary patterns (LBP) and a CNN in a smoke detection algorithm to improve smoke detection performance and reduce time complexity. Luo et al. (2017) proposed a strategy to implicitly enlarge the suspected regions and designed a CNN-based smoke detection algorithm exploiting the motion characteristics of smoke. Barmpoutis et al. (2019) combined the power of faster R-CNN with multidimensional dynamic texture analysis based on higher-order LDSs, aimed at flame detection. Although the flame detection methods above are widely used for fire detection, they may not be suitable for complex forest environments. It is almost impossible to view the flame from surveillance in time because of the amount of cover and the fact that flames have a narrower spread than smoke (Shi et al. 2018).

    Few-shot object detection

    Few-shot learning (FSL) methods (e.g., data augmentation (Perez and Wang 2017), transfer learning (Sun et al. 2020), meta-learning (Wu et al. 2020a)) have previously been employed to tackle the problem of overfitting and to relieve the difficulty and cost of large-scale image annotation (Wang et al. 2020a). Xu et al. (2017) proposed a deep domain adaptation method for forest fire smoke detection and trained it on synthetic as well as real forest fire smoke images, which demonstrated the effectiveness of synthetic smoke images in deep learning. To improve the accuracy of smoke detection, an effective dense optical flow approach based on transfer learning was designed by Wu et al. (2020b). These methods are able to mitigate the overfitting issue and reduce the false alarm rate. However, data augmentation and transfer learning can only alleviate overfitting; the problem cannot be completely eliminated (Geng et al. 2019). Recent research has focused on solving this challenge via meta-learning. The goal of meta-learning-based FSL is to extract transferable meta-knowledge from a set of labeled examples that generalizes directly to unseen classes (Hou et al. 2019). Vinyals et al. (2016) introduced a matching network with a weighted nearest-neighbor classifier useful for labeling multiple components of one-shot classification tasks. Snell et al. (2017) first proposed the prototypical network for FSL, which represents each class by the mean of its support images. Boney and Ilin (2017) extended prototypical networks to augment training data via a soft-assignment method in a semi-supervised few-shot learning scenario. Numerous improved prototypical networks have since been proposed. These methods are simple and efficient compared to recent meta-learning methods, while still producing state-of-the-art results (Snell et al. 2017).

    Attention mechanism

    Attention mechanisms improve the feature representations of networks by focusing on important features and ignoring unnecessary information, inspired by the human visual perception process (Mnih et al. 2014). Attention mechanisms were first applied to natural language processing (NLP) (Bahdanau et al. 2015) and are now widely used in computer vision tasks. To perform explicit spatial transformations of features, Jaderberg et al. (2016) proposed a new self-contained module for neural networks called the spatial transformer, which gains accuracy across several tasks. Hu et al. (2017) concentrated on the channel relationship and proposed the Squeeze-and-Excitation (SE) block. This block not only improves the representational power of a network, but also brings significant performance improvements without increasing the computational cost. To capture more sophisticated channel-wise dependencies, a number of improved SE blocks have been proposed (Wang et al. 2020b; Qin et al. 2021). The Efficient Channel Attention (ECA) module of Wang et al. (2020b), based on a local cross-channel interaction strategy, is extremely lightweight and shows good generalization ability in visual tasks such as object detection and instance segmentation. In addition, many researchers have combined spatial attention and channel attention to design more sophisticated attention modules. Woo et al. (2018) presented the CBAM, a new efficient architecture that uses both spatial and channel-wise attention. Experiments on different benchmark datasets have demonstrated the superiority of this module over channel-wise attention alone. A Convolutional Triplet Attention Module (CTAM) capable of capturing cross-dimension interaction between spatial attention and channel attention was proposed by Misra et al. (2021). This module is essentially on par with the convolution module in terms of weight and floating-point operations per second (FLOPs).

    Materials and methods

    Forest Fire Smoke dataset

    The proposed method is evaluated on our forest fire smoke (FFS) dataset and the miniImageNet dataset (Vinyals et al. 2016). The FFS dataset includes 5 categories: smoke, cloud, fog, trees and cliffs, while miniImageNet contains 100 categories. Each category in both datasets has 600 RGB images of dimensions 224×224. Figure 1 presents examples from the FFS dataset categories. The miniImageNet dataset is used as the base dataset to ensure an adequate training set, since the diversity of the training data improves the generalization ability of a few-shot learning model. Therefore, 80 classes are randomly selected from the miniImageNet dataset and split into a training set and a validation set at a ratio of 4:1, as introduced by Ravi and Larochelle (2017). The remaining 20 classes are used as a benchmark dataset, along with the FFS dataset, to evaluate the performance of our proposed method.

    Overall architecture

    In this paper, a novel hybrid attention-based few-shot learning algorithm for forest fire smoke detection is proposed. The proposed framework consists of two components: the feature extraction module and the meta-learning module. The overall framework is shown in Fig. 2.

    Fig. 1 Sample images from the forest fire smoke dataset: a smoke image, b cloud image, c fog image, d tree image, and e cliffs image

    Fig. 2 The architecture of our proposed network. Discriminative features F′ are extracted from support images x_i and the query image x_q by the feature extraction module. The prototype c_j of class j is then computed from the features of 5 support images via the meta-learning module. The Euclidean distance between c_j and the query feature F′(x_q) is calculated to determine the similarity of the two features, and finally the detection result is output

    We define a set of base tasks T1 and a K-shot target task T2. The base dataset D_base with M = 80 base classes is used for training and validation; it contains a large number of labeled samples from many classes. The larger the base dataset, the better the model's learning ability. For each episode, a subset is randomly selected from the base dataset as a training task, and each class of the subset is split into a support set and a query set. The target task in this paper is 5-way 5-shot forest fire smoke detection. The target dataset D_target with N = 5 test classes is used for testing. This dataset is divided into a support set S = {(x_i, y_i)} and a query set of images x_q, where x_i denotes an annotated support image, y_i is the corresponding label for x_i, and x_q denotes an unlabeled query image. The base and target classes are mutually exclusive. The goal is to leverage the supervision provided by the support set to correctly classify the query images.
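The episodic split described above can be sketched in a few lines. This is a minimal stand-in, assuming a dataset represented as a mapping from class names to image identifiers; the function name `sample_episode` and the toy data are illustrative, not part of the paper's code.

```python
import random

def sample_episode(dataset, n_way=5, k_shot=5, n_query=15, seed=None):
    """Sample one N-way K-shot episode: a labeled support set and a
    query set to classify, as in the 5-way 5-shot target task above.
    `dataset` maps class name -> list of image identifiers (a toy
    stand-in for real image tensors)."""
    rng = random.Random(seed)
    classes = rng.sample(sorted(dataset), n_way)   # pick N classes
    support, query = [], []
    for label in classes:
        images = rng.sample(dataset[label], k_shot + n_query)
        support += [(img, label) for img in images[:k_shot]]
        query += [(img, label) for img in images[k_shot:]]
    return support, query

# Toy dataset: 5 classes with 600 items each, mirroring the FFS setup.
toy = {c: [f"{c}_{i}" for i in range(600)]
       for c in ["smoke", "cloud", "fog", "trees", "cliffs"]}
support, query = sample_episode(toy, seed=0)
print(len(support), len(query))  # → 25 75
```

Each episode thus yields 5 × 5 = 25 support images and 5 × 15 = 75 query images, matching the test configuration used later in the paper.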

    Feature extraction module

    We propose a CNN based on the CBAM (Woo et al. 2018) as the feature extraction module. This module consists of a CNN backbone and two CBAMs, as shown in Fig. 3. Inspired by the effectiveness of CNNs in fire smoke detection, we employ a CNN to extract high-level and detailed smoke features. The CNN contains four convolution blocks. Each block comprises a 64-filter convolutional layer, an activation function layer and a max-pooling layer. Convolution is used to generate feature maps from input images. The feature maps of the t-th convolutional layer can be represented as F_t^k (k = 1, 2, ..., n_t), where n_t denotes the number of feature maps. To obtain F_t^k, we first convolve the feature maps F_{t-1} of the (t-1)-th convolutional layer with the filter W_t^k, add the bias b_t^k, and then pass the result to the rectified linear unit (ReLU) activation function Φ(·):

    F_t^k = Φ(W_t^k * F_{t-1} + b_t^k)

    Fig.3 Architecture of the feature extraction module

    where * denotes the convolution operation. In addition, to mitigate overfitting and accelerate convergence, a batch normalization layer is appended to each convolutional block after the convolutional layer (Gu et al. 2020).
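The convolution-plus-ReLU step Φ(W * F + b) can be illustrated with a toy 1-D convolution. This is a sketch only; the actual backbone uses 2-D, 64-filter convolutional layers followed by batch normalization and max-pooling.

```python
def relu(v):
    """The ReLU activation Φ: clamp negative responses to zero."""
    return max(0.0, v)

def conv1d_relu(feature, kernel, bias):
    """One feature map of a convolutional layer: slide the filter W
    over the previous layer's feature map, add the bias b, then apply
    ReLU, i.e. F_t = Φ(W * F_{t-1} + b)."""
    k = len(kernel)
    out = []
    for i in range(len(feature) - k + 1):
        s = sum(feature[i + j] * kernel[j] for j in range(k)) + bias
        out.append(relu(s))
    return out

print(conv1d_relu([1.0, 2.0, 3.0, 4.0], [0.5, 0.5], -1.0))
# → [0.5, 1.5, 2.5]
```

Each output element is one application of the filter plus bias, passed through Φ; negative pre-activations are suppressed, which is what makes the extracted features sparse and nonlinear.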

    We also adopt the CBAM to improve the representation capacity of global information. As shown in Fig. 4, the CBAM comprises two components: a 1-dimensional (1D) channel attention map M_CAM ∈ R^(C×1×1) and a 2-dimensional (2D) spatial attention map M_SAM ∈ R^(1×H×W). The former exploits the inter-channel relationship of the features, while the latter utilizes the inter-spatial relationship of the features (Woo et al. 2018). The overall attention operations can be summarized as:

    F′ = M_CAM(F) ⊗ F,  F″ = M_SAM(F′) ⊗ F′

    where F is the input feature map and ⊗ denotes element-wise multiplication.

    Fig.4 The structure of CBAM
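A heavily simplified CBAM pass can be sketched in pure Python. Two simplifications are assumptions of this sketch, not the paper's exact module: the channel map here is sigmoid(avg-pool + max-pool) per channel, omitting the shared MLP of the real CBAM, and the spatial map is sigmoid(channel-mean + channel-max) per position, omitting the 7×7 convolution.

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def cbam(feature):
    """Simplified CBAM over a C x H x W feature map (nested lists):
    channel attention M_CAM (C x 1 x 1) followed by spatial attention
    M_SAM (1 x H x W), each multiplied element-wise into the map."""
    C, H, W = len(feature), len(feature[0]), len(feature[0][0])
    # Channel attention: F' = M_CAM(F) ⊗ F
    m_cam = []
    for ch in feature:
        flat = [v for row in ch for v in row]
        m_cam.append(sigmoid(sum(flat) / len(flat) + max(flat)))
    f1 = [[[v * m_cam[c] for v in row] for row in feature[c]]
          for c in range(C)]
    # Spatial attention: F'' = M_SAM(F') ⊗ F'
    out = [[[0.0] * W for _ in range(H)] for _ in range(C)]
    for i in range(H):
        for j in range(W):
            col = [f1[c][i][j] for c in range(C)]
            m = sigmoid(sum(col) / C + max(col))
            for c in range(C):
                out[c][i][j] = f1[c][i][j] * m
    return out

refined = cbam([[[1.0, 0.0], [0.0, 2.0]], [[0.5, 0.5], [0.5, 0.5]]])
print(len(refined), len(refined[0]), len(refined[0][0]))  # → 2 2 2
```

The output keeps the input's C × H × W shape; both attention maps only rescale activations, which is why the CBAM can be inserted into any convolutional block without changing the backbone's dimensions.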

    Meta-learning module

    Inspired by the recent application of meta-learning in few-shot object detection (Finn et al. 2017; Fort et al. 2017; Song et al. 2020), we adopt the learning approach of the prototypical network (Snell et al. 2017) to solve the overfitting problem caused by limited small smoke images. The proposed network generalizes a classifier from the base dataset and is able to recognize query images from limited support images.

    The prototypical network first learns a prototype of each class from the support images and a metric space, then detects the query images by computing their distances to the prototype representation of each class. Each prototype c_j is the mean vector of the embedded support images:

    c_j = (1 / |S_j|) Σ_{(x_i, y_i) ∈ S_j} F′(x_i)

    where S_j denotes the support subset and j is the corresponding class. The prototype representation c_j of each class and the feature representation F′(x_q) of the query image are generated by the attention-based neural network.

    The distance function applied in the metric space can significantly influence the performance of prototypical networks. The squared Euclidean distance is applied to calculate the similarity between c_j and F′(x_q). The detection probability of the query image x_q is computed with a softmax and expressed as follows:

    p(y = j | x_q) = exp(−d(F′(x_q), c_j)) / Σ_{j′} exp(−d(F′(x_q), c_{j′}))

    where d(·, ·) is the Euclidean distance between two vectors.

    The learning process proceeds by minimizing the negative log-probability J = −log p(y = j | x_q) of the true class j via Adam. The loss J for a training base task is computed as follows:

    J = (1 / (N_b N_q)) Σ_{j=1}^{N_b} Σ_{q=1}^{N_q} −log p(y = j | x_q)

    where N_b is the number of classes in the base subset and N_q is the number of query samples per class.
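The prototype computation and the softmax over negative squared Euclidean distances can be sketched in pure Python. The toy 2-D feature vectors and the names `prototype` and `predict` are illustrative assumptions; in the actual network the features come from the attention-based extractor F′.

```python
import math

def prototype(support_feats):
    """Class prototype c_j: the mean of the support feature vectors."""
    n, dim = len(support_feats), len(support_feats[0])
    return [sum(f[d] for f in support_feats) / n for d in range(dim)]

def sq_euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(query_feat, prototypes):
    """p(y = j | x_q): softmax over negative squared distances to
    each class prototype, as in Snell et al. (2017)."""
    logits = [-sq_euclidean(query_feat, c) for c in prototypes]
    m = max(logits)                      # stabilise the softmax
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

# Two toy classes with 2-D features; the query sits near class 0.
protos = [prototype([[0.0, 0.0], [0.2, 0.0]]),
          prototype([[3.0, 3.0], [3.0, 2.8]])]
probs = predict([0.1, 0.1], protos)
loss = -math.log(probs[0])             # J = -log p(y = j | x_q)
print(probs[0] > probs[1], round(loss, 4))  # → True 0.0
```

Because the query feature lies close to the first prototype, its probability approaches 1 and the per-sample loss approaches 0; averaging this loss over N_b classes and N_q query samples gives the episode loss J minimized with Adam.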

    Experimental devices and evaluation criterion

    We adopt PyTorch to implement all approaches. Our model was trained via Adam with an initial learning rate of 10^-3, which was subsequently decreased by 50% every 2000 episodes. L2 regularization was adopted to mitigate overfitting. The proposed networks were trained using the Euclidean distance in the 5-shot scenario, with N_b and N_q set to 15 and 5, respectively. Tests were performed using 5-way episodes for 5-shot detection, and each class contained 15 query samples per episode.
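The step decay described above, halving the initial 10^-3 learning rate every 2000 episodes, amounts to the following schedule (a sketch; in PyTorch this would typically be a `StepLR`-style scheduler):

```python
def learning_rate(episode, base_lr=1e-3, decay_every=2000):
    """Step schedule from the training setup: the initial learning
    rate of 1e-3 is halved every 2000 episodes."""
    return base_lr * 0.5 ** (episode // decay_every)

print(learning_rate(0), learning_rate(2000), learning_rate(4500))
# → 0.001 0.0005 0.00025
```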

    The 5-way accuracy (Acc.) was employed as the evaluation metric for the proposed forest fire smoke detection method. The accuracy rate (AR), false alarm rate (FAR), detection rate (DR), recall rate (RR) and F1-score (F1) (Mao et al. 2017) were adopted as performance evaluation criteria for few-shot learning methods on the forest fire smoke dataset.
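Assuming the common definitions of these criteria (the paper follows Mao et al. 2017, whose exact formulas may differ, so this is a sketch), they can be computed from confusion counts with smoke as the positive class; the counts below are toy values:

```python
def smoke_metrics(tp, fp, tn, fn):
    """Evaluation criteria from confusion counts, treating smoke as
    the positive class.  The formulas are the common choices and are
    an assumption of this sketch (under them DR and RR coincide)."""
    ar = (tp + tn) / (tp + fp + tn + fn)        # accuracy rate
    far = fp / (fp + tn)                         # false alarm rate
    dr = rr = tp / (tp + fn)                     # detection/recall rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * rr / (precision + rr)   # F1-score
    return ar, far, dr, rr, f1

ar, far, dr, rr, f1 = smoke_metrics(tp=84, fp=4, tn=96, fn=16)
print(round(ar, 2), round(far, 2), round(dr, 2), round(f1, 2))
# → 0.9 0.04 0.84 0.89
```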

    Results and discussion

    Exploring the location of the attention mechanism on the FFS dataset

    We explored the influence of the CBAM's location in the convolutional neural network on its feature extraction capability. The CBAM can be placed in any of the convolutional blocks within the CNN backbone. Figure 5 depicts the accuracy of the CBAM at different locations on the FFS dataset. CBAM_1 means that the CBAM is applied in the first convolution block of the CNN backbone; the other annotations in Fig. 5 are analogous. The CBAM-based model outperforms the network without CBAM, and the accuracy is significantly improved by applying two CBAMs in the feature extraction module. However, when more than two were used, the accuracy decreased. The maximum test accuracy (68.61%) was achieved using the CBAM in the first and fourth convolutional blocks. This demonstrates the ability of the CBAM to improve detection accuracy by extracting discriminative features. Therefore, we take the convolutional neural network based on CBAM_14 as the few-shot feature extraction module in this paper.

    Fig.5 Few-shot detection accuracy of the CBAM under different convolutional blocks

    To evaluate the class-discriminatory ability of our proposed network, we used Gradient-weighted Class Activation Mapping (Grad-CAM) and Guided Grad-CAM (Selvaraju et al. 2020) to visualize the regions of the smoke image that support a particular prediction. The visualization results are shown in Fig. 6; Grad-CAM (1) and Grad-CAM (2) represent the attention map without the CBAM and the attention map of our proposed network, respectively. Our proposed network shows precise smoke localization to support predictive performance, and the Guided Grad-CAM can even localize tiny smoke (Fig. 6e and f).

    Fig.6 Visualization of randomly selected smoke images from the FFS dataset

    Comparison of different attention mechanisms on the FFS dataset

    We also compared the detection accuracy of the CBAM and the Convolutional Triplet Attention Module (CTAM) (Misra et al. 2021). Like the CBAM, the CTAM captures the interaction between spatial attention and channel attention without increasing model complexity or computation. As shown in Fig. 7, the CBAM and CTAM placed in both the first and fourth convolutional blocks achieved their highest accuracies of 68.61% and 64.96%, respectively. The detection accuracy of the CBAM is significantly better than that of the CTAM across locations, with a maximum discrepancy of 3.91%. Therefore, the CBAM is more suitable for forest fire smoke detection than the CTAM.

    Fig.7 Comparison of different attention modules based on detection accuracy

    Evaluation of backbone networks on the FFS dataset

    We also examined the feature extraction capabilities of different convolutional neural networks. AlexNet, VGG-16, ResNet-18, ResNet-50 and DenseNet-121 are compared with our proposed backbone network. To ensure the reliability and fairness of the experimental results, the integration of the CBAM into each backbone network follows Woo et al. (2018). The average accuracy over 100 tests is shown in Fig. 8. Compared to shallow backbone networks, our proposed backbone network is competitive given relatively similar training time (68.61%). The highest test accuracy of our proposed network is 71.2%. The deeper the backbone network, the better the model performance: DenseNet-121 achieved the highest detection accuracy (82.31%). Although detection accuracy improves significantly with depth, the training cost also increases greatly; the processing time per iteration of DenseNet-121 is more than three times that of our proposed backbone network.

    Fig.8 Comparison of different backbone networks based on detection accuracy

    Selection of distance metric on the FFS dataset

    Choosing different distance metrics may yield different model performance. To investigate the distance metric settings, we trained our proposed network and the prototypical network using the Euclidean and cosine distance functions, respectively. Figure 9 illustrates the range of fluctuations over 5 training sessions. The validation accuracy increased rapidly from 0 to 10 epochs and the loss dropped from 0 to 20 epochs, while both converged after 40 epochs. Convergence with the Euclidean distance function is significantly better than with the cosine distance function. Because some of the data are very similar, the range in Fig. 9 is not obvious. Our proposed algorithm with the Euclidean distance function has good stability in validation accuracy. The validation accuracy of our proposed method using the Euclidean distance reached approximately 68%, which is 15% higher than that of the cosine distance.

    Fig.9 Comparison of the detection accuracy of Cosine and Euclidean distance function

    As shown in Table 1, our proposed method improves the test accuracy by 8% compared to the prototypical network and also achieves the highest accuracy in 5-way 5-shot detection. The test accuracies of the prototypical network and our method using the Euclidean distance are almost 15% and 17% higher, respectively, than those using the cosine distance. Moreover, compared to the conventional prototypical network, our proposed algorithm with the attention mechanism has better feature extraction capability and achieves higher accuracy.

    Table 1 Few-shot detection accuracy on the forest fire smoke dataset

    The Euclidean distance model exhibits greater test accuracy than the cosine distance model. Thus, the Euclidean distance is more suitable for both the prototypical network and our proposed network. This is largely because, unlike the squared Euclidean distance, the cosine distance is not a Bregman divergence (Snell et al. 2017).
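One intuition for the gap is that cosine distance ignores feature magnitude, which squared Euclidean distance retains. A toy comparison (the vectors are arbitrary illustrations, not features from the network):

```python
import math

def sq_euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

# A query twice as far along the same direction as the prototype is
# indistinguishable from it under cosine distance, while squared
# Euclidean distance still separates the two.
proto, scaled = [3.0, 4.0], [6.0, 8.0]
print(cosine_distance(proto, scaled))  # → 0.0 (indistinguishable)
print(sq_euclidean(proto, scaled))     # → 25.0
```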

    Comparative test on the FFS Dataset

    We compared our proposed method on the FFS dataset with several existing few-shot methods: the matching network (Vinyals et al. 2016), the prototypical network (Snell et al. 2017), and meta-learning long short-term memory (LSTM) (Ravi and Larochelle 2017).

    To verify the effectiveness of our proposed method on the FFS dataset, we compared the evaluation criteria of classical few-shot learning algorithms. Table 2 reports the results. Our proposed method achieved the highest AR (69.83%) and DR (84%) among the models, outperforming the prototypical network by 8.46% and 9%, respectively. RR and F1 are important evaluation metrics for forest fire smoke detection models, with higher RR and F1 values indicating better performance. The classical algorithms listed may not be suitable for distinguishing smoke from suspected smoke areas; for example, the FAR of the matching network was the highest at 17.00%, four times that of our proposed network. As shown in Fig. 10, the proposed CBAM-based prototypical network achieved a promising performance on the forest fire smoke dataset in few-shot learning. In summary, our method outperforms the other methods on the forest fire smoke dataset.

    Fig.10 Confusion matrix of the proposed method

    Table 2 Comparison of performance evaluation criteria for few-shot learning methods

    MiniImageNet dataset experiment

    To further validate the performance of the proposed method, we compare it with several well-known few-shot methods, namely the matching network (Vinyals et al. 2016), the prototypical network (Snell et al. 2017) and meta-learning LSTM (Ravi and Larochelle 2017), on the original miniImageNet test dataset. As reported in Table 3, our method achieves the highest accuracy in 5-way 5-shot detection, proving its ability to extract discriminative features from few support samples.

    Table 3 Few-shot detection accuracy on miniImageNet

    Conclusions

    We proposed an attention-based prototypical network for few-shot forest fire smoke detection. First, a convolutional neural network based on the convolutional block attention module, which includes channel and spatial attention modules, was designed to extract features from support and query images. It automatically extracts high-level image features and focuses on the more discriminative features of small targets. Second, we applied a meta-learning module to alleviate the overfitting problem caused by limited data. It compares the prototypes of the support images with query features to achieve effective detection. Experiments on forest fire smoke datasets and the miniImageNet dataset show that the proposed method is more effective than recent few-shot learning approaches and achieves the highest test accuracy.
