
    Algorithm of Helmet Wearing Detection Based on AT-YOLO Deep Model

    2021-12-10 11:53:28
    Computers Materials & Continua, Issue 10, 2021

    Qingyang Zhou, Jiaohua Qin*, Xuyu Xiang, Yun Tan and Neal N. Xiong

    1 College of Computer Science and Information Technology, Central South University of Forestry & Technology, Changsha, 410004, China

    2 Department of Mathematics and Computer Science, Northeastern State University, Tahlequah, 74464, OK, USA

    Abstract: Existing safety helmet detection methods are mainly based on one-stage object detection algorithms, whose high detection speed meets real-time requirements, but they cannot accurately detect small objects or occluded objects. Therefore, we propose a helmet detection algorithm based on the attention mechanism (AT-YOLO). First, a channel attention module is added to the YOLOv3 backbone network, which adaptively recalibrates channel-wise features to improve feature utilization, and a spatial attention module is added to the neck of the YOLOv3 network to capture the correlation between any two positions in the feature map and thus enlarge the receptive field of the network. Second, we use the DIoU (Distance Intersection over Union) bounding box regression loss function, which not only improves the measurement of bounding box regression loss but also adds a normalized distance loss between the predicted boxes and the target boxes, making the network more accurate at detecting small objects and faster to converge. Finally, we explore training strategies for the network model that improve performance without increasing inference cost. Experiments show that the mAP of the proposed method reaches 96.5% and the detection speed reaches 27 fps. Compared with other existing methods, it achieves better detection accuracy and speed.

    Keywords: Safety helmet detection; attention mechanism; convolutional neural network; training strategies

    1 Introduction

    In recent years, intelligent surveillance has played an increasingly important role in our daily life. As a hotspot of computer vision, object detection provides many ideas for intelligent surveillance. Early object detection methods mainly used hand-crafted features. Zhu et al. [1] used Histograms of Oriented Gradients (HOG) to extract image features, combined with a cascade of rejectors to accelerate computation, and realized pedestrian detection. Zuo et al. [2] used Haar-wavelet transforms to model local texture attributes, successfully compensated for the extra cost of two-dimensional texture features, and realized face detection. However, traditional object detection algorithms have many drawbacks: the feature extraction algorithms generalize poorly and lack robustness on complex scenes, and sliding-window region selection is not targeted, so its time complexity is too high for practical problems.

    The development of massively parallel computing provides a technical guarantee for deep learning, and deep learning in turn provides more effective solutions for information hiding [3–7], image classification [8–10], image retrieval [11,12], object detection [13], image inpainting [14], and many other fields. So far, most state-of-the-art detection algorithms are based on deep learning. R-CNN [15] was the first to use a deep model to extract image features and generate region proposals with a sliding window, but it performs many repeated computations and its computational cost is too high. Fast R-CNN [16] integrates classification and bounding box regression into one network to reduce repeated computation, using an SPP module to generate fixed-size output. Faster R-CNN [17] feeds the features extracted from images into an RPN (Region Proposal Network), which accepts feature maps of any size, outputs the coordinates and confidence of object candidate boxes, and then classifies those boxes. Because region selection and classification are performed in separate steps, Faster R-CNN is a two-stage object detection model. As deep learning has developed, the computational cost of two-stage methods such as Faster R-CNN keeps increasing, driven by the complexity of the backbone network, the number of candidate boxes, and the complexity of the classification and regression sub-networks. YOLO [18] discards the candidate box extraction step and directly performs feature extraction, candidate box classification, and regression in one end-to-end deep convolutional network. Detection algorithms like YOLO are one-stage algorithms, which further simplify the object detection pipeline, yield a simpler network structure, and detect faster than two-stage networks.

    As an essential practical application of object detection, safety helmet wearing detection is closely related to production and daily life, and many scholars have studied it. Mneymneh et al. [19] extracted features of the worker and the helmet in the image and built a cascade over the feature points to judge whether the worker is wearing a helmet. Li et al. [20] used head positioning, color space transformation, and color feature recognition to detect helmet wearing based on pedestrian detection results. Wu et al. [21] used an improved YOLO-Dense backbone for helmet detection to improve feature resolution. Long et al. [22] used SSD (Single Shot multi-box Detector) [23] to detect helmet wearing. Chen et al. [24] introduced the K-means++ clustering algorithm to cluster helmet sizes in images and then used an improved Faster R-CNN to detect helmet wearing. However, current helmet wearing detection methods still suffer from low detection accuracy on small objects and poor generalization across scenes.

    Among current helmet-wearing detection algorithms, one-stage detectors have faster detection speed but low accuracy on small and dense targets and poor generalization across scenes, while two-stage detectors are computationally heavy and slow, making it difficult to meet the real-time requirements of helmet detection.

    To solve the above problems, we propose an AT-YOLO network model based on the attention mechanism for helmet wearing detection. In this paper, we model the correlations along the channel and spatial dimensions of features to enhance feature representation. At the same time, we optimize the dataset, loss function, and training strategy to comprehensively improve detection performance while maintaining a high detection speed. The main contributions of this paper include:

    (1) We construct a helmet dataset with more balanced categories and richer scenarios. Part of the data comes from the Safety-Helmet-Wearing-Dataset; on this basis, construction site images are collected through web crawling, video capture, and other means to expand the dataset, making its scenes richer and its categories more balanced.

    (2) We propose the AT-YOLO object detection algorithm for helmet wearing detection. The mutual dependence of features is modeled along the spatial and channel dimensions, respectively, so that the network obtains better detection results on small objects and occluded images.

    (3) We adopt the DIoU bounding box regression loss function. By combining the IoU between the predicted box and the ground truth box with their center point distance, it improves the loss measurement, raising the network's accuracy on small objects while also accelerating convergence.

    (4) We apply different training strategies in the training stage to improve network performance. This paper uses several training strategies that enhance performance without increasing inference cost, providing a useful reference for other image research.

    The rest of this paper is organized as follows. Section 2 reviews related research on attention mechanisms and object detection algorithms. Section 3 introduces the proposed method. Section 4 presents the evaluation experiments. Section 5 concludes the paper.

    2 Related Work

    2.1 You Only Look Once

    YOLO integrates the candidate region extraction, classification, and regression tasks of object detection into one end-to-end deep convolutional network. That is, the input image is inferred once, and the positions, categories, and corresponding confidence probabilities of all objects are obtained. The backbone of YOLO is similar to GoogLeNet [25], with the inception structure removed to keep the backbone simpler. Feeding an image into the YOLO model yields a 7 × 7 feature map, which divides the image into 7 × 7 regions; each region predicts object confidence, bounding box positions, and category information. The YOLO network is simple, its detection speed is fast, and its background false detection rate is low, but its detection accuracy is lower than that of the R-CNN family, and its object localization is not accurate enough.

    YOLOv2 [26] uses a new backbone network called Darknet-19, which follows the same design principles as VGG16 [27]. The network mainly adopts 3 × 3 convolutions and 2 × 2 max pooling layers; after each max pooling layer, the height and width of the feature map are halved and its channel count is doubled. YOLOv2 retains the advantage of high speed. However, its backbone is not deep enough to recognize more abstract semantic features, and each grid cell predicts too few bounding boxes, which is ineffective for targets with large scale variation.

    YOLOv3 [28] draws on the idea of ResNet [29], introduces residual structures, and builds the deeper Darknet-53. Compared with YOLOv2, it abandons pooling-based downsampling and instead downsamples the feature map by adjusting the stride of convolutional layers, obtaining more fine-grained features. YOLOv3 predicts with multi-scale fusion: similar to FPN, it fuses feature maps at three scales and detects on all of them simultaneously. Small feature maps are used to detect large objects, and large feature maps to detect small objects. Compared with YOLOv2, predicting at multiple scales gives YOLOv3 more bounding boxes covering a richer range of object sizes, closer to the real object sizes, and also strengthens small-object detection.

    However, YOLOv3 still underuses its features and performs poorly on small objects and object-dense images. In practical helmet wearing detection, small and dense objects are very common, so a better-performing network is needed to make the helmet-wearing detection system more robust.

    2.2 Attention Mechanism

    Both language and vision problems contain information closely related to the task at hand as well as irrelevant information. The attention mechanism helps an algorithm focus on vital information while ignoring the rest. In recent years, attention mechanisms have been used in various tasks with good results. Bahdanau et al. [30] were the first to use an attention mechanism for machine translation. Wang et al. [31] drew on the idea of non-local mean filtering to model correlations between arbitrary non-local features; non-local operations do not change the feature size and can be embedded in any network. Hu et al. [32] achieved first place in the 2017 ILSVRC classification task by adaptively recalibrating channel-wise feature responses with an attention mechanism. DANet [33] can be considered a specialized instance of the non-local network [31]; it solves scene segmentation by capturing rich contextual relevance through self-attention.

    Different from previous work, this paper brings attention mechanisms to the task of helmet detection. Based on YOLOv3, we add a spatial attention module and a channel attention module to capture correlations between features, enrich context information, and improve the representational power of the detector.

    3 Our Methods

    3.1 AT-YOLO Network Construction

    This paper designs the AT-YOLO deep model to optimize the feature expression and feature learning abilities of the network. Channel attention modules and a spatial attention module are added to the backbone and neck of YOLOv3, respectively. The network adaptively captures the correlations along the channel and spatial dimensions of features and models the global context, improving the feature representation ability of the object detection algorithm. The AT-YOLO helmet detection framework is shown in Fig. 1.

    Channel attention block. The Darknet-53 network uses residual links to merge features of different layers and alleviate gradient vanishing. However, it does not make good use of the dependencies between the channels of the feature map. Inspired by [32], we insert a CA-block (Channel Attention block) into each residual block in Darknet. It recalibrates the dependencies between channels by learning global features, selectively enhancing high-contribution information and suppressing low-contribution information, which improves the feature expression ability and feature utilization of the network. The CA-block we designed is shown in Fig. 2.

    Figure 1: AT-YOLO helmet wearing detection model framework

    Figure 2: Channel attention block

    Given the input feature map F_in, we first obtain the feature map F_R ∈ R^(C×H×W) through two Darknet convolutions.

    The feature map F_in yields a global feature map F_g ∈ R^(C×1×1) after global average pooling. Two 1 × 1 convolutions are then applied to capture the global receptive field while learning the dependencies among channels, producing the feature vector F'_C ∈ R^(1×1×C). Activating F'_C gives F_se, which describes the learned channel weights. Finally, the learned channel weights are used to reweight the feature map F_R, which is then combined with the residual-link feature map to obtain the final output.
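    The CA-block steps above (global average pooling, two 1 × 1 convolutions, sigmoid activation, channel reweighting) can be sketched in plain numpy. This is not the authors' TensorFlow implementation; the weight matrices w1/w2 stand in for the two 1 × 1 convolutions (a reduction ratio is assumed), and the residual addition around the block is omitted.

```python
import numpy as np

def ca_block(f_in, w1, b1, w2, b2):
    """Channel-attention (SE-style) recalibration sketch.
    f_in: feature map of shape (C, H, W).
    w1, b1 / w2, b2: weights emulating the two 1x1 convolutions."""
    f_g = f_in.mean(axis=(1, 2))                  # global average pooling -> (C,)
    z = np.maximum(w1 @ f_g + b1, 0.0)            # first 1x1 conv + ReLU (squeeze)
    f_se = 1.0 / (1.0 + np.exp(-(w2 @ z + b2)))   # sigmoid -> per-channel weights in (0, 1)
    return f_in * f_se[:, None, None]             # reweight each channel of the feature map
```

    Because each channel is scaled by a single sigmoid weight, high-contribution channels are preserved while low-contribution channels are suppressed, matching the description above.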

    Position attention block. In object detection, dense or occluded objects often cause false detections. Therefore, to obtain richer contextual information and enhance the expressive power of feature maps, this paper adds a spatial attention module, the PA-block (Position Attention block), shown in Fig. 3.

    Figure 3: Position attention block

    Suppose the input local feature map is F_in ∈ R^(C×H×W). First, two new feature maps F1 and F2, {F1, F2} ∈ R^(C×H×W), are obtained through a convolutional layer and reshaped to R^(C×N), where N = H × W. Matrix multiplication is then performed between the transpose of F1 and F2, and the result is normalized with a softmax activation to obtain the spatial attention map F_s ∈ R^(N×N):

    F_s(j, i) = exp(F1_i · F2_j) / Σ_{i=1}^{N} exp(F1_i · F2_j)

    where F_s(j, i) represents the degree of influence of the i-th position on the j-th position. The greater the connection between two locations, the more similar their feature representations.

    Simultaneously, we convolve F_in to obtain a feature map F3 ∈ R^(C×H×W) and reshape it to R^(C×N). Matrix multiplication is then performed between F3 and the transpose of F_s, and the result is reshaped back to R^(C×H×W). Finally, it is multiplied by the scale parameter α and added element-wise to F_in to obtain the final output F_out ∈ R^(C×H×W):

    F_out(j) = α Σ_{i=1}^{N} (F_s(j, i) · F3_i) + F_in(j)

    where the scale parameter α is initialized to 0 and learned during training to give greater weight to informative location features. The formula shows that each point of F_out is a weighted sum of the original feature F_in and features across all positions. Therefore, F_out has a global receptive field and selectively aggregates context information from the position attention feature map.
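    The PA-block computation can be sketched in numpy as follows. For brevity the 1 × 1 convolutions producing F1, F2, and F3 are assumed to be identity maps (in the real block they are learned); what the sketch shows is the reshape, the N × N attention map, and the α-weighted residual combination. Note that with α = 0, as at initialization, the block is an identity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=axis, keepdims=True)

def pa_block(f_in, alpha=0.0):
    """Position-attention sketch; the convs producing F1/F2/F3 are taken as identity."""
    c, h, w = f_in.shape
    n = h * w
    f1 = f_in.reshape(c, n)                 # (C, N)
    f2 = f_in.reshape(c, n)
    f3 = f_in.reshape(c, n)
    energy = f1.T @ f2                      # energy[i, j] = <F1_i, F2_j>, shape (N, N)
    f_s = softmax(energy.T, axis=1)         # f_s[j, i]: influence of position i on j
    out = (f3 @ f_s.T).reshape(c, h, w)     # each position aggregates context from all positions
    return alpha * out + f_in               # scale by alpha and add the residual F_in
```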

    3.2 DIoU Bounding Box Regression Loss Function

    In YOLOv3, the MSE (mean square error) loss function is used for bounding box regression, but MSE is problematic for evaluating predictions: it is quite sensitive to object scale. To solve this, we replace the bounding box regression loss with the DIoU [34] loss function.

    Besides, the IoU between the ground truth boxes and the predicted boxes better reflects detection quality. However, when a predicted box is completely contained in a ground truth box, their relative position is ambiguous under IoU alone, and gradient descent tends to converge slowly.

    Therefore, when training the model, this paper uses DIoU as the bounding box regression loss function, defined as:

    L_DIoU = 1 − IoU + ρ²(b, b^gt) / c²

    where IoU is the intersection over union of the predicted and ground truth boxes; b and b^gt denote the center points of the predicted and ground truth boxes; ρ(·) is the Euclidean distance between the two center points; and c is the diagonal length of the smallest enclosing box that contains both the predicted box and the ground truth box, as shown in Fig. 4.

    The DIoU loss adds a normalized center distance term to IoU. Even when the predicted box is completely contained in the ground truth box, it still provides a moving direction for the bounding box, making network convergence faster and more accurate.
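    A minimal sketch of the DIoU loss for a single pair of axis-aligned boxes, following the definition above (1 − IoU plus squared center distance over squared enclosing-box diagonal). Boxes are (x1, y1, x2, y2) tuples; batching and the rest of the YOLO loss are omitted.

```python
def diou_loss(box_p, box_g):
    """DIoU loss for one predicted box and one ground truth box."""
    # intersection area
    ix1, iy1 = max(box_p[0], box_g[0]), max(box_p[1], box_g[1])
    ix2, iy2 = min(box_p[2], box_g[2]), min(box_p[3], box_g[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    area_g = (box_g[2] - box_g[0]) * (box_g[3] - box_g[1])
    iou = inter / (area_p + area_g - inter)
    # squared Euclidean distance rho^2 between box centers b and b^gt
    cxp, cyp = (box_p[0] + box_p[2]) / 2, (box_p[1] + box_p[3]) / 2
    cxg, cyg = (box_g[0] + box_g[2]) / 2, (box_g[1] + box_g[3]) / 2
    rho2 = (cxp - cxg) ** 2 + (cyp - cyg) ** 2
    # squared diagonal c^2 of the smallest box enclosing both
    ex1, ey1 = min(box_p[0], box_g[0]), min(box_p[1], box_g[1])
    ex2, ey2 = max(box_p[2], box_g[2]), max(box_p[3], box_g[3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2
    return 1.0 - iou + rho2 / c2
```

    Unlike plain IoU loss, the center-distance term stays nonzero (and differentiable) when one box lies entirely inside the other, which is what provides the moving direction mentioned above.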

    Figure 4: Center point distance normalization

    3.3 Training Strategy

    Without changing the network structure, different training strategies yield different results. Following Zhang et al. [35], we explore several training strategies that enhance model performance without adding computation at inference time.

    Label smoothing. To prevent the network from becoming over-confident in its category predictions and overfitting, we use a label smoothing strategy. Classification models are usually trained against one-hot label encodings, but with such hard labels the model may become so confident in its predictions that it ignores the minority of samples with other labels. Therefore, this paper adopts label smoothing, which replaces the hard label with the probability:

    p = (1 − ε) · y + ε / K

    where y is the label probability from one-hot encoding and K is the number of categories. With the two categories hat and person, K = 2. We set ε = 0.01, so for a sample with true label y = 1, the smoothed probability is p = 0.995. Label smoothing avoids absolutized network predictions and improves the overall effect.
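    The smoothing formula above is a one-liner per class; with ε = 0.01 and K = 2 it maps the one-hot pair [1, 0] to [0.995, 0.005], matching the numbers in the text.

```python
def smooth_labels(one_hot, eps=0.01):
    """Label smoothing: p = (1 - eps) * y + eps / K for each class probability y."""
    K = len(one_hot)  # number of categories (K = 2 for hat/person)
    return [(1.0 - eps) * y + eps / K for y in one_hot]
```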

    Multi-scale input. Because captured natural images come in different resolutions and operating devices have different memory limits, the network should adapt to inputs of different scales, so we train the model with multi-scale input.

    During training, each iteration randomly selects images of a different size. Since the network downsamples by a total factor of 32, the input size must be a multiple of 32, e.g., {320, 352, ..., 608}. The smallest input is therefore 320 × 320 and the largest 608 × 608, for a total of ten different input sizes.

    This forces the network to learn predictions at various input scales and prevents overfitting. At smaller input sizes the network runs faster and needs less device memory; with high-resolution inputs it achieves higher accuracy.
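    The size-selection step described above amounts to drawing one of ten multiples of 32 per iteration. A sketch (the function name and seeding are illustrative, not from the paper):

```python
import random

def pick_input_size(seed=None):
    """Randomly pick one of the ten multiples of 32 in [320, 608] for this iteration."""
    sizes = [320 + 32 * k for k in range(10)]  # 320, 352, ..., 608
    rng = random.Random(seed)
    return rng.choice(sizes)
```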

    4 Experimental Results and Analysis

    4.1 Datasets

    Dataset acquisition. Part of our dataset comes from the Safety-Helmet-Wearing-Dataset, which contains 7581 pictures labeled with whether each person wears a helmet. The labels contain two detection categories, person and hat, where hat denotes the head of a person wearing a safety helmet and person denotes a person's head without one. This dataset contains many crowded-scene pictures but suffers from category imbalance, as shown in Fig. 5: the numbers of hat and person instances are 9031 and 111514, respectively, a ratio close to 1:11. It also covers few detection scenarios, which easily causes overfitting, low generalization ability, and poor multi-scene detection. We therefore used web crawlers to collect pictures of helmet wearing from the Internet, and selected video clips containing hard-hat scenes and cropped them into images to expand the dataset, alleviating the category imbalance and enhancing the generalization ability of the model. In particular, we included pictures of ordinary hats so that the network can better distinguish safety helmets from ordinary hats.

    Figure 5: Comparison of category counts before and after dataset expansion

    Data cleaning. A large number of similar pictures in a dataset degrades model performance. We therefore use the DenseNet [36] model pre-trained on ImageNet [37] to extract features for each image and compute the Euclidean distance between the feature vectors of every pair of images to measure their similarity. We manually annotate the collected images in the VOC labeling format, marking the coordinates of the upper-left and lower-right vertices of each object's bounding box together with its category, stored in XML files. Finally, we obtained 13620 pictures with 27236 hat instances and 118357 person instances, a category ratio close to 1:4 (see Fig. 5), which greatly alleviates the category imbalance of the dataset.
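    The pairwise-distance check described above can be sketched as follows. The feature extraction itself (DenseNet on ImageNet) is not reproduced here; the function takes an already-extracted (n, d) feature matrix, and the distance threshold is an assumption the paper does not specify.

```python
import numpy as np

def near_duplicates(features, threshold):
    """Return index pairs (i, j) of images whose feature vectors are closer
    than `threshold` in Euclidean distance. features: (n, d) array."""
    pairs = []
    n = len(features)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(features[i] - features[j])  # Euclidean distance
            if d < threshold:
                pairs.append((i, j))  # candidate duplicates; keep one of each pair
    return pairs
```

    The O(n²) loop is fine at this dataset's scale (~13k images); for much larger collections an approximate nearest-neighbor index would be the usual substitute.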

    Data augmentation. To avoid possible overfitting, we use six augmentation methods: random horizontal flipping, image cropping, image padding, color jittering, brightness augmentation, and mixup [38].

    4.2 Implementation Details

    Because the dataset categories are unevenly distributed, we use random sampling to select 1000 images as the test set, 500 as the validation set, and 12260 as the training set.

    All experiments are run on an Intel(R) Core(TM) i7-7800X CPU @ 3.50 GHz with 64.00 GB RAM and an Nvidia GeForce GTX 1080Ti GPU, using the TensorFlow framework.

    SGD is used as the training optimizer with an initial learning rate of 0.0001, adjusted every iteration. The momentum coefficient is 0.9, the L2 weight decay coefficient 0.0005, the batch normalization decay coefficient 0.99, and the batch size 4. We train for 400 epochs and save the model weights with the lowest validation loss.

    4.3 Ablation Experiments for Training Strategy

    To compare the impact of different training strategies, we measured the effect of label smoothing, multi-scale input, and data augmentation on detection performance, as shown in Tab. 1, where Baseline denotes the YOLOv3 baseline model, LS label smoothing, Multi-Input random multi-scale input, and DA data augmentation.

    Table 1: Network detection performance comparison under different training strategies

    Compared with YOLOv3, label smoothing increases mAP by 3.15%, showing that it effectively prevents overfitting. Multi-scale input increases mAP by 1.62%, letting the network adapt to more input sizes, and hence to the memory limits of different deployment devices, while also preventing overfitting. Data augmentation increases mAP by 1.97%; color jittering helps the network recognize scenes of different colors, and brightness augmentation helps it recognize images in dark or bright environments.

    4.4 Ablation Experiments for Attention Module

    To explore the influence of the attention mechanism on the network model, we set up the experiments in Tab. 2. We insert CA-blocks into Darknet-53 so the network focuses on channel features with larger contributions, and place the spatial attention module PA-block after the last module of the backbone to obtain richer context information. To verify the resulting improvement, the proposed network is compared with YOLOv3 at an input size of 608 × 608 under the same training strategy. The accuracy improvements are shown in Tab. 2.

    Table 2: Attention module ablation experiment

    Tab. 2 shows that, compared with YOLOv3, adding the CA-block increases accuracy by 1.72% to 95.62% at a detection speed of 27.4 FPS. Adding the PA-block increases accuracy by a further 0.68% to 96.30% while maintaining 25.7 FPS. Although the computational cost rises after the dual attention modules are added, the frame rate of typical video does not exceed 25 FPS, so the requirements of real-time video detection are still met. Moreover, training with the DIoU bounding box loss improves convergence speed and makes bounding box prediction more accurate, raising the model's test accuracy to 96.50%.

    Our detection model targets video surveillance of construction sites. Since surveillance cameras are generally far from the monitored scene and workers occupy few pixels in the video, we designed a comparative experiment on the model's detection accuracy for small objects (pixel area < 32²).

    Tab. 3 shows that the proposed network also achieves better accuracy on small objects. This is because small objects can be inferred from the contextual information aggregated by the attention mechanism, which highlights their feature representation and improves detection results.

    Table 3: Accuracy comparison on small objects

    We also compared the speed and accuracy of AT-YOLO at input sizes of 320 × 320 (AT-YOLO 320), 512 × 512 (AT-YOLO 512), and 608 × 608 (AT-YOLO 608) against YOLOv3, and drew a speed-accuracy curve. As shown in Fig. 6, although the added attention modules make AT-YOLO slower than YOLOv3 at the same input size, at the same inference speed AT-YOLO achieves higher accuracy. AT-YOLO is thus more efficient and accurate.

    Figure 6: Speed (ms) vs. accuracy (mAP)

    Figure 7: (a), (c), (e) are the detection results of YOLOv3; (b), (d), (f) are the detection results of AT-YOLO; (g) is the detection result without ordinary-hat images in the dataset; (h) is the detection result with ordinary-hat images in the dataset

    The visualization results are shown in Fig. 7, where (a), (c), (e) are the detection results of YOLOv3, (b), (d), (f) are those of AT-YOLO, (g) is the result without ordinary-hat images in the dataset, and (h) the result with them. In (a), (b) with dense and heavily occluded objects and in (e), (f) with occluded objects, some occluded objects missed by YOLOv3 are detected by AT-YOLO, since AT-YOLO extracts the surroundings of an object through the spatial attention module, making occluded objects easier to infer. In the low-resolution pair (c), (d), AT-YOLO analyzes the interdependence between channels through the channel attention mechanism, extracts more effective features, and detects low-pixel images well. In (g), (h), which contain ordinary hats, the model distinguishes ordinary hats from safety helmets more easily after ordinary hats are added to the dataset.

    4.5 Performance Comparison of Different Models

    We tested our dataset on different networks, as shown in Tab. 4. The mAP of our method is 9.34% higher than that of YOLOv3 without adding much inference cost. Compared with the two-stage method (Faster R-CNN with FPN [39]), our method is better in both accuracy and efficiency.

    Table 4: Comparison of the performance of different models

    5 Conclusion

    This paper proposes the AT-YOLO helmet wearing detection model. By introducing the attention mechanism into YOLOv3, the network's ability to model dependencies between different positions in the image is enhanced, extracting more accurate features and effectively improving feature representation. Combined with optimized training strategies, network performance improves without increasing inference cost. To verify the proposed methods, we built a dataset and conducted evaluation tests on it. The experimental results show that the proposed methods effectively improve the performance of the AT-YOLO network, providing an excellent solution for helmet-wearing detection systems in real scenarios.

    In future work, we will exploit the correlations in time series to optimize object detection in video, improving detection accuracy and speed and making the method more suitable for safety helmet wearing detection systems.

    Acknowledgement: The authors would like to thank the support of Central South University of Forestry & Technology and the support of the National Natural Science Foundation of China.

    Funding Statement: This work was supported in part by the National Natural Science Foundation of China under Grant 61772561, author J.Q, http://www.nsfc.gov.cn/; in part by the Degree & Postgraduate Education Reform Project of Hunan Province under Grant 2019JGYB154, author J.Q, http://xwb.gov.hnedu.cn/; in part by the Postgraduate Excellent teaching team Project of Hunan Province under Grant [2019]370-133, author J.Q, http://xwb.gov.hnedu.cn/; in part by the Science Research Projects of Hunan Provincial Education Department under Grant 18A174, author X.X, http://kxjsc.gov.hnedu.cn/; in part by the Science Research Projects of Hunan Provincial Education Department under Grant 19B584, author Y.T, http://kxjsc.gov.hnedu.cn/; in part by the Natural Science Foundation of Hunan Province (No. 2020JJ4140), author Y.T, http://kjt.hunan.gov.cn/; in part by the Natural Science Foundation of Hunan Province (No. 2020JJ4141), author X.X, http://kjt.hunan.gov.cn/; in part by the Key Research and Development Plan of Hunan Province under Grant 2019SK2022, author Y.T, http://kjt.hunan.gov.cn/; and in part by the Graduate Science and Technology Innovation Fund Project of Central South University of Forestry and Technology under Grant CX2020107, author Q.Z, https://jwc.csuft.edu.cn/.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
