
    Optimized Deep Learning Model for Fire Semantic Segmentation

    2022-11-11 | Songbin Li, Peng Liu, Qiandong Yan and Ruiling Qian
    Computers, Materials & Continua, 2022, Issue 9

    Songbin Li, Peng Liu, Qiandong Yan and Ruiling Qian

    1 Institute of Acoustics, Chinese Academy of Sciences, Beijing, 100190, China

    2 Loughborough University, Loughborough, LE11 3TT, United Kingdom

    Abstract: Recent convolutional neural network (CNN) based deep learning has significantly advanced fire detection. Existing fire detection methods can efficiently recognize and locate fire. However, accurate flame boundary and shape information is hard for them to obtain, which makes it difficult to conduct automated fire region analysis, prediction, and early warning. To this end, we propose a fire semantic segmentation method based on Global Position Guidance (GPG) and Multi-path explicit Edge information Interaction (MEI). Specifically, to solve the problem of local segmentation errors in the low-level feature space, a top-down global position guidance module is used to restrain the offset of low-level features. Besides, an MEI module is proposed to explicitly extract and utilize edge information to refine the coarse fire segmentation results. We compare the proposed method with existing advanced semantic segmentation and salient object detection methods. Experimental results demonstrate that the proposed method achieves 94.1%, 93.6%, 94.6%, 95.3%, and 95.9% Intersection over Union (IoU) on five test sets respectively, outperforming the suboptimal method by a large margin. In terms of accuracy, our approach also achieves the best score.

    Keywords: Fire semantic segmentation; local segmentation errors; global position guidance; multi-path explicit edge information interaction; feature fusion

    1 Introduction

    Vision-based fire detection is a difficult but particularly important task for public safety. From the existing literature, vision-based fire detection methods can be divided into two types. One is to judge whether there is a flame in an image [1-5]. The other regards the flame as an object and uses object detection based methods to detect fire [6-8]. Compared with the first type, the object detection based fire detection methods can not only recognize the existence of fire but also locate it. However, they lack accurate flame edge and shape information, which makes it hard to accurately and automatically estimate the fire area. In general, due to the lack of the precise area, shape, and location of the flame, automated fire intensity analysis, prediction, and early warning are difficult to carry out. Therefore, it is necessary to realize fire semantic segmentation in an image.

    The goal of fire semantic segmentation is to recognize whether each pixel belongs to fire (shown in Fig. 1), which is similar to image segmentation tasks. Recently, advances in image processing techniques [9,10] have boosted the state of the art to a new level for many tasks, such as semantic segmentation and salient object detection. However, it is still difficult to accurately resolve flames from a single image. The main reasons may be the different backgrounds, the multiple scales of fire at different evolving stages, and disturbance by fire-like objects. In this paper, we propose a fire semantic segmentation method based on global position guidance and multi-path explicit edge information interaction. Specifically, to alleviate the problem of local segmentation errors in the low-level feature space caused by the disturbance of fire-like objects and background noise, a global position guidance mechanism is proposed. This module uses the accurate position information of top-level features to reconstruct spatial detail information in a top-down manner. Besides, we employ a multi-path explicit edge information interaction module to organically aggregate coarse segmentation results and edge information to refine the fire boundary. In this module, we first explicitly extract edge information through strongly supervised learning, and then realize the interaction between the edge information and the coarse segmentation results through a convolutional layer.

    Figure 1: The goal of fire semantic segmentation is to recognize whether each pixel belongs to fire. Each column represents an original image and the corresponding fire semantic segmentation map. The pixels belonging to fire are marked as white, and the others are marked as black

    The main contributions of this paper can be summarized as follows:

    1) We propose a novel fire semantic segmentation method based on global position guidance and multi-path explicit edge information interaction. The experimental results show that our method achieves 94.7% average IoU on five test sets, outperforming the best semantic segmentation method and the best salient object detection method by 15.9% and 0.8%, respectively. This demonstrates that our method performs better on fire segmentation than previous state-of-the-art semantic segmentation and salient object detection methods.

    2) A global position guidance module is proposed to solve the problem of local segmentation errors in the low-level feature space. Besides, a multi-path explicit edge information interaction module based on edge guidance is utilized to organically aggregate coarse segmentation results and edge information to refine the fire boundary.

    3) A fire semantic segmentation dataset of 30,000 images is established, which is currently the first fire semantic segmentation dataset in this area. This dataset is created by synthesizing real flame regions with normal images. We randomly select 1100 images from [5] and label them to obtain the real flame regions.

    2 Related Work and Scope

    In this section, we give a summary of related works in Tab. 1. Traditional fire detection methods [11-15] mainly focus on handcrafted features, such as color, shape, texture, and motion. They have some defects, such as lacking robustness and failing to detect fire at a long distance or in a challenging environment. Recent data-driven deep learning has promoted the progress of fire detection. Fire detection methods based on deep learning can be divided into two categories: classification-based methods [1-5] and object detection-based methods [6-8]. Classification-based approaches treat fire detection as an image classification task. These methods can judge whether there is fire in an image, but cannot locate the fire. The object detection-based fire detection methods can not only recognize the existence of fire but also locate it. However, the fire position is marked with rectangular boxes, which cannot provide flame edge and shape information. The goal of fire semantic segmentation is to recognize whether each pixel belongs to fire, which is similar to image segmentation tasks. However, it is difficult to obtain good results by directly applying existing deep learning based segmentation methods [16-24] to fire detection. These methods are not specially designed for fire semantic segmentation, so their ability to discriminate fire-like objects is relatively weak, and it is difficult for them to accurately parse the fire boundary. In addition, they perform poorly on local small-scale fires. To this end, we propose a fire semantic segmentation method based on global position guidance and multi-path explicit edge information interaction. The global position guidance mechanism is proposed to alleviate the problem of local segmentation errors in the low-level feature space caused by the disturbance of flame-like objects and background noise. It uses the accurate position information of top-level features to reconstruct spatial detail information in a top-down manner. Besides, the multi-path explicit edge information interaction mechanism is proposed to organically aggregate coarse segmentation results and edge information to refine the fire boundary.

    Table 1: Summary of related works


    3 Global Position Guidance Mechanism

    The encoder based on CNNs can extract different feature representations. Top-level semantic features preserve precise fire position information, while low-level spatial detail features contain rich fire boundary information. Both are vital to fire segmentation, and the progressive fusion of different levels of features has a very significant effect on fire segmentation tasks. However, attacked by background noise and flame-like objects, the low-level fire spatial features may give rise to local segmentation errors. Consequently, the key to improving the performance of fire semantic segmentation is to restrain the offset of the low-level spatial features.

    As mentioned above, the receptive field of the top-level features is the largest among the encoded features, and their fire position information is the most accurate. Besides, when the information progressively flows from the top level to the low level, the accurate position information contained in the top-level features is gradually diluted. Thus, a top-down global position guidance mechanism is designed to directly deliver top-level position information to the low-level feature space to restrain local segmentation errors.

    In this module, the top-level features $F_t$ are output from the last layer of the encoder. Besides, we define the encoded features from the $i$-th layer as $F_i$, $i \in (1, t-1)$. First, two pointwise convolution layers with batch normalization (BN) and ReLU activation are applied to change the number of channels of $F_t$ and $F_i$ to $M$. Then, a bilinear interpolation function up-samples $F_t$ to the same size as $F_i$. The fused features $F_f$ can be denoted as:

    $$F_f = \left[\, Up(\omega_t \circledast F_t + b_t),\; \omega_i \circledast F_i + b_i \,\right]$$

    where $(\omega_i, b_i)$ and $(\omega_t, b_t)$ are the kernel weights and biases applied to $F_i$ and $F_t$ respectively, $Up$ stands for up-sampling, $\circledast$ means the convolution operation, and $[\cdot\,,\cdot]$ means concatenation. Next, another pointwise convolution layer is used to squeeze the channels of $F_f$ to $M$. So far, we obtain the relative position attention map $A_p$, which has accurate position information.
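The fusion step above (pointwise convolutions with BN and ReLU, bilinear up-sampling, concatenation, and a channel squeeze) can be sketched as a small PyTorch module. The class and argument names are ours, not the paper's; the channel count M defaults to 256 as stated in Section 5.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GPGFusion(nn.Module):
    """Sketch of one global position guidance fusion step (names are ours).

    Projects top-level features F_t and an i-th level feature F_i to M
    channels, up-samples F_t to F_i's spatial size, concatenates both, and
    squeezes the result back to M channels with a pointwise convolution.
    """
    def __init__(self, c_top, c_low, m=256):
        super().__init__()
        self.proj_top = nn.Sequential(
            nn.Conv2d(c_top, m, kernel_size=1), nn.BatchNorm2d(m), nn.ReLU(inplace=True))
        self.proj_low = nn.Sequential(
            nn.Conv2d(c_low, m, kernel_size=1), nn.BatchNorm2d(m), nn.ReLU(inplace=True))
        self.squeeze = nn.Conv2d(2 * m, m, kernel_size=1)

    def forward(self, f_top, f_low):
        # bilinear up-sampling of the projected top-level features
        t = F.interpolate(self.proj_top(f_top), size=f_low.shape[2:],
                          mode="bilinear", align_corners=False)
        # concatenate along channels, then squeeze back to M channels
        return self.squeeze(torch.cat([t, self.proj_low(f_low)], dim=1))
```

A call such as `GPGFusion(512, 64)(f_top, f_low)` returns a map at the low-level spatial resolution carrying top-level position information.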

    To further enhance the representation capability of $A_p$, we introduce efficient channel attention. The map $A_p$ is first compressed by a global pooling operation $G$ to obtain the vector $Y$, which has global contextual information:

    $$Y = G(A_p) = \frac{1}{W \times H} \sum_{i=1}^{W} \sum_{j=1}^{H} A_p(i, j)$$

    where $W, H$ denote the width and height of the input respectively. Then, an efficient fully connected layer is utilized to transform the vector $Y$ into a reconstruction coefficient $\omega$:

    $$\omega_m = \sigma\Big( \sum_{j=1}^{k} \alpha_j Y_m^j \Big), \quad Y_m^j \in \Omega_m^k, \quad m = 1, \ldots, C$$

    where $\alpha_j$ represents the weight parameters, $\sigma$ is the sigmoid activation function, $\Omega_m^k$ represents the set of $k$ adjacent channels of $Y_m$, and $C$ is the number of channels. Next, a channel-wise multiplication operation is employed to reconstruct $A_p$:

    $$A_p' = \omega \otimes A_p$$
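The channel attention described above (global pooling, a lightweight interaction over k adjacent channels, sigmoid, channel-wise reweighting) can be sketched with a 1-D convolution; the kernel size k=3 is our assumption, since the paper does not state it.

```python
import torch
import torch.nn as nn

class EfficientChannelAttention(nn.Module):
    """ECA-style channel attention sketch: global average pooling, then a
    1-D convolution over k adjacent channels, a sigmoid, and channel-wise
    multiplication. k=3 is an assumed default."""
    def __init__(self, k=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):                                        # x: (N, C, H, W)
        y = x.mean(dim=(2, 3))                                   # global pooling -> (N, C)
        w = torch.sigmoid(self.conv(y.unsqueeze(1))).squeeze(1)  # coefficients (N, C)
        return x * w[:, :, None, None]                           # channel-wise reweighting
```

Because the 1-D kernel only mixes k neighboring channels, the layer adds a handful of parameters regardless of C.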

    As shown in Fig. 2, the baseline without the GPG module produces some wrong segmentations. With the GPG module applied, the local segmentation errors are restrained.

    Figure 2: The heat map visualization results of the baseline and the global position guidance module. They demonstrate that the GPG module can effectively restrain the local segmentation errors

    4 Multi-Path Explicit Edge Information Interaction Mechanism

    Another challenge of fire semantic segmentation is edge prediction. Different from central pixels, which have higher prediction accuracy due to the internal consistency of the fire, pixels near the boundary are more prone to being misdetected. The main reasons are as follows. Compared with central pixels, the edge of the fire contains less information. Besides, diverse and complex backgrounds suppress edge information. Therefore, to solve the problem of edge segmentation errors caused by the lack of flame edge information, we need to explicitly utilize flame edge information.

    To achieve this, the edge information of the flame needs to be extracted explicitly. A simple approach is to construct an edge information extraction branch and train it through strongly supervised learning. First, we apply an edge extraction algorithm (e.g., the Canny, Sobel, or Laplace operator) to the label image $Y_{label}$ to obtain the corresponding edge annotation $Y_{edge}$. To explicitly extract the edge information, the output features $F_d^l$ of the last layer of the decoder are fed into the edge information extraction branch. This branch consists of a 3×3 convolution layer, batch normalization, and an activation function. The edge information $I_{edge}$ can be denoted as:

    $$I_{edge} = \delta\big( BN\big( \omega_e \circledast F_d^l + b_e \big) \big)$$

    where $\omega_e$ and $b_e$ represent the kernel parameters and bias respectively, and $\delta$ means the activation function. Then, we use three loss functions to train the branch:

    $$L_{bce} = -\sum_{r,c} \big[ G_{r,c} \log I_{r,c} + (1 - G_{r,c}) \log (1 - I_{r,c}) \big]$$

    $$L_{ssim} = 1 - \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}$$

    $$L_{iou} = 1 - \frac{\sum_{r,c} G_{r,c} I_{r,c}}{\sum_{r,c} \big( G_{r,c} + I_{r,c} - G_{r,c} I_{r,c} \big)}$$

    where $G_{r,c}$ and $I_{r,c}$ mean the fire edge confidence of the ground truth and the prediction map respectively, $\mu_x$ and $\mu_y$ represent the average values of the prediction and the ground truth respectively, $\sigma_*$ denotes the variance (and $\sigma_{xy}$ the covariance), and $C_1$ and $C_2$ are two small constants.
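A sketch of the edge branch and its supervision, assuming PyTorch: the branch is the 3×3 conv + BN + activation described above (a sigmoid output is our assumption), and the loss combines a cross-entropy term, a global SSIM term, and an IoU term as suggested by the symbols G, I, mu, sigma, C1, and C2; the exact combination and constants are assumptions, not the paper's stated values.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeBranch(nn.Module):
    """Edge extraction branch sketch: 3x3 conv + BN + activation mapping the
    last decoder features F_d^l to a one-channel edge map I_edge."""
    def __init__(self, c_in):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(c_in, 1, kernel_size=3, padding=1),
            nn.BatchNorm2d(1),
            nn.Sigmoid(),  # assumed activation; yields confidences in [0, 1]
        )

    def forward(self, f):
        return self.block(f)

def hybrid_edge_loss(pred, gt, c1=1e-4, c2=9e-4):
    """Assumed three-term edge loss: BCE + (1 - SSIM, computed globally
    for simplicity) + (1 - soft IoU)."""
    bce = F.binary_cross_entropy(pred, gt)
    mu_x, mu_y = pred.mean(), gt.mean()
    var_x, var_y = pred.var(unbiased=False), gt.var(unbiased=False)
    cov = ((pred - mu_x) * (gt - mu_y)).mean()
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    inter = (pred * gt).sum()
    iou = inter / (pred.sum() + gt.sum() - inter + 1e-8)
    return bce + (1 - ssim) + (1 - iou)
```

With a perfect prediction the loss approaches zero, since each term is minimized when prediction and ground truth coincide.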

    After the complementary fire edge information is obtained, we aim to aggregate the flame edge information and the flame object features to achieve information interaction, which is useful for obtaining better flame semantic segmentation results. The decoded features (flame object features) are defined as $F_d^i$, $i \in (1, l)$. Then, the information interaction can be denoted as:

    $$O_d^i = Conv\big( \big[ F_d^i,\; Up(I_{edge}) \big] \big)$$

    where $O_d^i$ stands for the refined results.

    Algorithm 1: Multi-path Explicit Edge Information Interaction
    Input: coarse results $F_d^{(i)}$, $i \in (1, l)$; edge information $I_{edge}$
    Output: refined fire prediction maps $O_d^i$
    1: if explicit edge extraction then
    2:     $I_{edge} \leftarrow F(F_d^l)$
    3:     return $I_{edge}$
    4: if edge information interaction then
    5:     while $i = 1$; $i \le l$; $i \leftarrow i + 1$ do
    6:         $F_d^i,\, I_{edge} \leftarrow Up(F_d^{(i)}),\, Up(I_{edge})$
    7:         $O_d^i \leftarrow Conv([F_d^i, I_{edge}])$
    8: return $\{O_d^i \mid i = 1, \ldots, l\}$
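The interaction loop of Algorithm 1 can be sketched as follows, assuming PyTorch: for each decoder level the edge map is up-sampled to that level's resolution, concatenated with the coarse features, and fused by a per-level convolution (`fuse_convs` and the 1×1 kernel size are our assumptions).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mei_refine(coarse_feats, edge, fuse_convs):
    """Multi-path edge interaction sketch: for each decoder level i,
    up-sample I_edge to the level's spatial size, concatenate with the
    coarse features F_d^i, and fuse with that level's convolution."""
    refined = []
    for feat, conv in zip(coarse_feats, fuse_convs):
        e = F.interpolate(edge, size=feat.shape[2:], mode="bilinear",
                          align_corners=False)
        refined.append(conv(torch.cat([feat, e], dim=1)))
    return refined
```

Each output map thus carries both the level's object features and the explicitly supervised edge cue.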

    5 Overview of Global Position Guidance and Multi-Path Explicit Edge Information Interaction Networks

    Based on the above ideas, we design a fire semantic segmentation network based on global position guidance and multi-path explicit edge information interaction. The overview of the proposed model is illustrated in Fig. 3. It consists of a deep encoder, four global position guidance modules with feature fusion operations, an explicit edge information extraction module, and a multi-path explicit edge information interaction module. The input image $X$ is fed into the encoder [5] to obtain the encoded features.

    Figure 3: The overview architecture of the global position guidance and multi-path explicit edge information interaction based fire semantic segmentation networks

    It is worth noting that the encoder includes three main parts, namely multi-scale feature extraction, implicit deep supervision, and a channel attention mechanism. First, to establish a good feature foundation for high-level semantic feature and global position information extraction, a multi-scale feature extraction module is used.

    where $A \in \mathbb{R}^{C \times H \times W}$ is the input feature, $h_{k \times k}$ means the convolution operation with a kernel size of $k \times k$, and $B$ is the output. Then, three densely connected structures [25], which permit the gradient to flow directly to earlier layers, are employed to enhance the feature representation capability. At last, the channel attention widely used in computer vision tasks is utilized. The process can be described as:

    where $o$ is the final output, $x$ means the input, $y$ is a vector that includes the global information, $\omega_2$ and $\omega_1$ are the corresponding weight matrices, and $x_{lb}$ is a reconstruction vector.
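The multi-scale feature extraction step mentioned above applies parallel $h_{k \times k}$ convolutions to the input $A$; since the paper does not spell out the kernel sizes or how the branches are merged, the kernel set (1, 3, 5) and the summation below are assumptions.

```python
import torch
import torch.nn as nn

class MultiScaleExtraction(nn.Module):
    """Sketch of multi-scale feature extraction: parallel convolutions with
    different kernel sizes over the same input. Kernel sizes and the sum
    fusion are assumptions."""
    def __init__(self, channels, kernels=(1, 3, 5)):
        super().__init__()
        # 'same' padding keeps every branch at the input resolution
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, k, padding=k // 2) for k in kernels
        )

    def forward(self, a):                    # a: (N, C, H, W)
        return sum(branch(a) for branch in self.branches)
```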

    When the encoded features are captured, we use a convolution layer to squeeze the channels of the top-level feature $F_t$ to 256. Then, the feature is fed into the GPG module to restrain the local segmentation errors of the low-level feature space. Besides, we aggregate the information progressively from the top level to the low level, like the U-Net architecture [26], through a simple feature fusion operation. At last, as mentioned in Section 4, an MEI module is used to refine the coarse segmentation results. Cross-entropy loss based supervision is applied to train the whole network. It can be represented as:

    where $L$ represents the total loss, $O_d^i$ is the fire prediction map, $j$ is the number of categories, $G$ stands for the ground truth, and $\alpha$ and $\theta$ are the weight coefficients.
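The deeply supervised objective described above can be sketched as follows, assuming PyTorch: a weighted cross-entropy term per refined prediction map plus an optional edge term; the uniform weight `alpha` and the way `theta` enters are our assumptions.

```python
import torch
import torch.nn.functional as F

def total_loss(preds, gt, alpha=1.0, edge_pred=None, edge_gt=None, theta=1.0):
    """Sketch of the total supervision: cross-entropy on each refined
    prediction map O_d^i weighted by alpha, plus an optional edge BCE term
    weighted by theta (weighting scheme assumed)."""
    loss = sum(alpha * F.cross_entropy(p, gt) for p in preds)
    if edge_pred is not None:
        loss = loss + theta * F.binary_cross_entropy(edge_pred, edge_gt)
    return loss
```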

    6 Experiments and Analysis

    In this section, we first introduce the dataset and evaluation metrics. Then we present the implementation details. Next, a series of ablation studies is conducted to verify the effect of each module. Finally, we carry out experiments on our created dataset to evaluate the performance of the proposed method. Experimental results demonstrate that our method achieves the best performance compared with existing semantic segmentation and salient object detection methods.

    6.1 Dataset and Evaluation Metrics

    In this paper, we create a fire semantic segmentation dataset (FSSD) which consists of 30,000 synthetic images and 1100 real fire images. The generation of the dataset is described as follows. First, we randomly select 1100 images from the datasets of [5] and label them carefully. Then, we extract the real flame regions and synthesize them with normal images to create the dataset. Finally, 1000 images are used to generate the training sets, and 100 images are used to generate the test sets. Some real fire images and synthetic images are shown in Fig. 4. In this paper, 26,000 images are used for training (25,000 synthetic images and 1000 real images). Besides, we divide the test images into five test sets (each includes 1000 images). To improve the performance of fire semantic segmentation, we use the dataset of [5] (except for the 1000 images used to extract the real flame regions) to pre-train the encoders of all comparison methods.

    Figure 4: Some visual examples of our created fire semantic segmentation dataset. Each column represents an original image and the corresponding annotation

    We use three measurements to evaluate all methods. Mean Absolute Error (MAE) is defined as the average pixel-wise absolute difference between the prediction map and the ground truth:

    $$MAE = \frac{1}{W \times H} \sum_{i=1}^{W} \sum_{j=1}^{H} \big| P(i, j) - G(i, j) \big|$$

    where $P$ denotes the fire semantic segmentation map and $G$ is the corresponding ground truth. Intersection over Union (IoU) is widely used in semantic segmentation [27] to evaluate the performance of an algorithm. It represents the degree of overlap between the prediction map and the ground truth:

    $$IoU = \frac{|P \cap G|}{|P \cup G|}$$

    The third evaluation metric is accuracy, which is defined as the ratio of the number of correctly predicted images (the IoU threshold is set to 0.4) to the total number of images:

    $$Accuracy = \frac{M}{N}$$

    where $M$ indicates the number of correctly predicted images and $N$ is the total number of images.
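The three metrics above are straightforward to compute with numpy; the binarization threshold used inside `iou` is our assumption, since the paper only specifies the image-level IoU threshold of 0.4 for accuracy.

```python
import numpy as np

def mae(pred, gt):
    """Average pixel-wise absolute difference between prediction and GT."""
    return float(np.abs(pred.astype(float) - gt.astype(float)).mean())

def iou(pred, gt, bin_thr=0.5):
    """Intersection over Union of the binarized maps (bin_thr assumed)."""
    p, g = pred >= bin_thr, gt >= bin_thr
    union = np.logical_or(p, g).sum()
    return float(np.logical_and(p, g).sum() / union) if union else 1.0

def accuracy(per_image_ious, iou_thr=0.4):
    """Ratio of correctly predicted images (IoU >= iou_thr) to total images."""
    ious = np.asarray(per_image_ious)
    return float((ious >= iou_thr).mean())
```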

    6.2 Implementation Details

    In this paper, we adopt EFDNet [5] pre-trained on FSSD (only for the encoder) as our backbone. In the training stage, we resize each image to 320×320 with random flipping, then randomly crop a patch of size 288×288 for training. We utilize PyTorch to implement our method. Adaptive moment estimation (Adam) is applied to optimize all parameters of the network with a batch size of 8. The hyperparameter values are shown in Tab. 2, referring to the settings in [5]. To avoid the model falling into a suboptimal solution, we adopt the "poly" learning rate policy with an initial learning rate of 1e-5 for the backbone and 0.001 for the other parts. Like [21], the maximum number of training epochs for all methods is set to 30.
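The "poly" policy mentioned above decays the learning rate polynomially toward zero over training; the exponent 0.9 below is a common default and an assumption, as the paper does not state its value.

```python
def poly_lr(base_lr, epoch, max_epoch, power=0.9):
    """'Poly' learning-rate policy sketch: lr decays as
    (1 - epoch / max_epoch) ** power. power=0.9 is assumed."""
    return base_lr * (1.0 - epoch / max_epoch) ** power
```

For example, with the paper's settings (base 0.001, 30 epochs), the rate starts at 0.001 and reaches zero at the final epoch.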

    Table 2: Hyperparameter values

    6.3 Ablation Study

    In this section, to investigate the effect of the proposed GPG and MEI modules, a series of ablation studies is performed. As illustrated in Tab. 3, the baseline, which does not contain any optimization, achieves 0.008 and 88.3% in terms of MAE and IoU, respectively. With the GPG module applied, both IoU and MAE are improved, with the MAE score decreased by 50.0% compared with the baseline. The IoU of GPG is 91.5%, which outperforms the baseline by 3.2%, demonstrating that the idea of using top-level accurate position information to restrain local fire segmentation errors is very effective. Besides, when we aggregate MEI and GPG, the performance of the proposed approach is enhanced further. In terms of MAE, the final model achieves 0.002, a further 50.0% improvement over GPG alone. Furthermore, the final model improves the IoU from 91.5% to 94.1% based on GPG.

    Table 3: The quantitative results of the ablation experiment with different components on the DS01

    6.4 Compared with Existing Deep Learning Based Segmentation Methods

    In this section, to demonstrate the performance of our method, 9 segmentation methods (5 semantic segmentation methods [16-20] and 4 salient object detection methods [21-24]) are compared. For a fair comparison, the fire semantic segmentation results of the different methods are obtained by running their released code with the default parameters. Moreover, we pre-train all encoders on FSSD.

    The quantitative comparison results on our created benchmark are illustrated in Tabs. 4 and 5. Compared with the other methods, our method achieves the best performance. In terms of MAE, the proposed method performs better on all five test sets, outperforming the other methods by a large margin. The IoU evaluation metric is widely used in semantic segmentation; our method improves it from 93.2% to 94.1% on DS01. Besides, we use accuracy as an evaluation metric for image-level fire detection. From the results, we can see that our method achieves an accuracy of 96.2%, which outperforms the other methods by a large margin (the threshold T is set to 0.6).

    Table 4: The quantitative comparison results with existing semantic segmentation methods on the FSSD dataset.The best result of each evaluation metric is highlighted in boldface

    Table 5: The quantitative comparison results with existing salient object detection methods on the FSSD dataset. The best result of each evaluation metric is highlighted in boldface

    To comprehensively compare the performance of the different methods, we present some visual results. As illustrated in Fig. 5, our method performs better than the previous semantic segmentation methods. Specifically, the proposed method not only highlights the correct fire regions clearly but also suppresses the background noise well. Besides, it is robust in dealing with flame-like objects (row 1) and low-contrast backgrounds (row 4). Moreover, compared with the other methods, the fire boundary generated by the proposed method is more accurate.

    Figure 5: Some visual results of different methods. Each row stands for one original image and the corresponding fire semantic segmentation maps. Each column represents the predictions of one method

    6.5 Analysis of Model Parameters

    In this subsection, we analyze the parameter sizes of the different methods. The results are illustrated in Tab. 6. We can see that the proposed method has only 6.9 MB of parameters, which is suitable for resource-constrained devices. Compared with the suboptimal method, it is 72.9% smaller.

    Table 6: The parameter size of different methods

    7 Conclusion

    In this paper, a method based on global position guidance and multi-path explicit edge information interaction is proposed for fire semantic segmentation. First, the existing literature shows that it is challenging to accurately separate fire from diverse backgrounds and flame-like objects. To this end, considering the accurate position information contained in top-level features, we propose a global position guidance module to restrain the feature offset in the low-level feature space, thereby correcting local segmentation errors. Besides, to obtain more accurate boundary predictions, we first explicitly extract edge information through strong supervision. Then, a multi-path information interaction is designed to refine the coarse segmentation. Experimental results on the FSSD dataset show that the proposed method outperforms previous state-of-the-art methods under three evaluation metrics.

    In future work, we intend to introduce multi-task learning to further improve the performance of the model, and multi-scale feature extraction to deal with small flame segmentation. Besides, fast and small models that can easily be deployed on resource-limited mobile devices will also be considered.

    Funding Statement: This work was supported in part by the Important Science and Technology Project of Hainan Province under Grant ZDKJ2020010, and in part by the Frontier Exploration Project Independently Deployed by the Institute of Acoustics, Chinese Academy of Sciences, under Grants QYTS202015 and QYTS202115.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
