
Precise Agriculture: Effective Deep Learning Strategies to Detect Pest Insects

IEEE/CAA Journal of Automatica Sinica, 2022, Issue 2

Luca Butera, Alberto Ferrante, Mauro Jermini, Mauro Prevostini, and Cesare Alippi

Abstract—Pest insect monitoring and control is crucial to ensure safe and profitable crop growth in all plantation types, as well as to guarantee food quality and limited use of pesticides. We aim at extending traditional monitoring by means of traps, by involving the general public in reporting the presence of insects by using smartphones. This includes the largely unexplored problem of detecting insects in images that are taken in non-controlled conditions. Furthermore, pest insects are, in many cases, extremely similar to other species that are harmless. Therefore, computer vision algorithms must not be fooled by these similar insects, so as not to raise unmotivated alarms. In this work, we study the capabilities of state-of-the-art (SoA) object detection models based on convolutional neural networks (CNN) for the task of detecting beetle-like pest insects on nonhomogeneous images taken outdoors by different sources. Moreover, we focus on disambiguating a pest insect from similar harmless species. We consider not only the detection performance of different models, but also the required computational resources. This study aims at providing a baseline model for this kind of task. Our results show the suitability of current SoA models for this application, highlighting how FasterRCNN with a MobileNetV3 backbone is a particularly good starting point for accuracy and inference execution latency. This combination provided a mean average precision score of 92.66%, which can be considered qualitatively at least as good as the scores obtained by other authors who adopted more specific models.

I. INTRODUCTION

THIS work is in the field of precise agriculture and aims at exploring different deep learning (DL) models for detecting insects in images. More in detail, this paper discusses the use of DL-based object detection models for the recognition of pest insects outdoors. In particular, we focus on the comparison of known architectures in the context of a novel dataset. Considering the field of interest, we also evaluate the computational resource requirements of the considered models.

Early pest insect identification is a crucial task to ensure healthy crop growth; it reduces the chance of yield loss and enables the adoption of precise treatments that provide an environmental and economic advantage [1]. Currently, this task is mainly carried out by experts, who have to monitor field traps to assess the presence of species which cause significant economic loss. This procedure is prone to delays in the identification and requires a great deal of human effort. In fact, human operators need to travel to the different locations where traps are installed to check the presence of the target insects and, in some cases, to manually count the number of captured insects. The current trend is to introduce machine learning techniques to help decision makers in the choice of suitable control strategies for pest insects. Even though this approach is going to automate some time-consuming tasks, it leaves monitoring of pest insects limited to traps. For some pest insects that spread outside crop fields, it can be beneficial to have additional information coming from other locations. This is the case of Popillia japonica, a dangerous phytophage registered as a quarantine organism at the European level [2]. Early detection of the presence of this insect in specific regions is extremely important, as it permits tracking its spread. In this exercise, the general public may be of great help, as they can provide valuable input by means of mobile phones. As shown in Fig. 1, our monitoring system for pest insects will integrate inputs from traps and from smartphones, with the purpose of creating a map of the presence of the pest insects. Machine learning techniques become necessary when reports come from a large pool of users that take images with different cameras, lighting conditions, framing, and backgrounds. Furthermore, pest insects are, in many cases, extremely similar to other species that are harmless (e.g., Popillia japonica is very similar to two harmless species, Cetonia aurata and Phyllopertha horticola). Therefore, computer vision algorithms must be able to distinguish between them, even in the presence of high similarities. This is especially important when the general public is involved in the process, to avoid unmotivated alarms and insect eradication actions that are useless and may damage the environment.

Throughout the years, many Computer Vision approaches have been proposed to assess the presence of pest insects in crop fields. Early solutions (e.g., [3]) were based on image classification by means of handcrafted features. These approaches suffered from a lack of generality, as most features were specific to particular species and, moreover, required great domain knowledge and design effort.

Fig. 1. Sketch of a multimodal detection system.

With the advent of DL and convolutional neural networks (CNN), the problem has started being tackled with Data Driven approaches that leveraged the abstraction power of deep architectures both to extract meaningful features and to perform the recognition task. The majority of these studies focused on Image Classification [4], [5], which assigns a class label to each image. CNNs solved this problem with the help of Transfer Learning from models pre-trained on generic datasets, such as ImageNet [6].

Only in recent years, as highlighted in [7], have some studies tackled the task of both recognizing and localizing pest insects in images. Remarkably, PestNet [8] compared an ad-hoc network to state-of-the-art (SoA) ones for the task of detecting insects in images taken in highly controlled environments, such as in traps.

However, not much work has been done on detecting insects in the open, despite the fact that exploiting the pictures any citizen could take with a smartphone can greatly improve the control of pests over the territory. In the context of a multimodal pest monitoring system like the one shown in Fig. 1, it is important to have a reliable detection algorithm that is accurate with highly diverse backgrounds and robust to the presence of harmless insects that may be mistaken for dangerous ones. In fact, in such a system, observations would come from a large number of locations and at much higher rates compared to trap-only monitoring systems. In a large-scale system, false alarms become a critical problem that requires a specific study.

In this paper, we assess the capabilities of SoA Generic Object Detection models for the detection of very similar insects in non-controlled environments. We do so as we believe that current architectures are well suited for this task and there is no need for new ones; the main problem is understanding how they compare and which ones are better choices.

We built a dataset of images taken from the Internet, with high variability in their characteristics, such as background, perspective, and resolution. The dataset contains 3 classes: the first one contains images of a pest insect, Popillia japonica; the other two classes contain images of insects commonly mistaken for our target one. Special care was put into minimizing the presence of highly similar samples. This task poses many challenges that SoA methods may have trouble with, such as the presence, in the images, of small or partially occluded insects, bad lighting conditions, or insects blending into the background. Most importantly, without any prior knowledge, the object detectors may not be able to learn the general features necessary to disambiguate two similar insects.

We then used our dataset to study the performance of three SoA object detection models across four different backbones. The goal was to:

    1) Assess the adaptability of existing models to detection of Popillia japonica.

    2) Identify the best candidate for task-specific improvements and modifications.

3) Study the computational cost/accuracy trade-off of different backbones.

With respect to the existing literature, the novelty of our work can be summed up as follows:

1) We adopt advanced filtering techniques to provide enhanced datasets. The impact on performance is well evident both in terms of accuracy and reliability. Notably, we avoid near-duplicate samples that may hinder the correctness of the model evaluation procedure in case similar pairs are split between the training and test sets.

2) We experimentally verify that designing object detectors targeted to beetle-like insects is neither required nor advisable, as general-purpose models can be successfully used after proper tuning. Furthermore, we compare these general models both from the standpoint of performance and from that of computational resources, by employing common metrics.

3) For the first time, we focus on the detection of extremely similar insects. In our application, only one insect is actually harmful and must not be confused with others closely resembling it.

4) We consider images that are very different in their characteristics and, in general, not shot in controlled environments nor with the same equipment. This, together with the previous point, sets up a highly complex detection environment that has not been previously addressed in similar studies on insects.

5) We propose a study on the impact of output thresholding on the false positive count, showing that it is possible to find a good trade-off between false alarms and the detection of the dangerous insect, also for our specific application scenario.

6) We show the applicability of Guided Backpropagation-based visualization techniques that can help experts gather insights on the insect features considered by the models. This is of particular importance since it allows validation of the algorithm’s decisions from a more qualitative standpoint. These visualizations highlight the key features of the image used for the prediction: a good detector is expected to rely on visually meaningful parts of the image.

The results we obtained show that known SoA Object Detection models are suitable for beetle-like insect recognition in the open, even in the presence of similar species. Additionally, the backbone choice can affect performance, in particular for SSD models. Transfer Learning is a necessary step, but pre-training on a generic insect dataset provided no improvements. In general, FasterRCNN with a MobileNetV3 backbone provides the best trade-off between accuracy and inference speed.

The paper is organized as follows: in Section II, we go over the state of the art for both Generic Object Detection and Pest Insect Detection. In Section III, we describe our dataset, how we created it, and how we ensured the absence of duplicates/near-duplicates. In Section IV, we briefly introduce the different models that we have compared. In Section V, we explain how we designed our experiments. In Section VI, we present and comment on our experimental results.

II. STATE OF THE ART

In this section, we describe the state of the art for our application. First, we spend a few words on Generic Object Detection; then we discuss the work that has been done specifically on insect recognition.

A. Generic Object Detection

Generic Object Detection is the task of assigning, for each relevant object instance in an image, a Bounding Box and a Label. The state of the art on this topic has grown considerably in recent years, as shown in [9], which represents an up-to-date and complete survey of Generic Object Detection.

We can divide Generic Object Detection models based on CNNs into two broad categories: Two-stage and One-stage detectors. Two-stage detectors are generally slower but more accurate. They first compute regions of interest (RoI) for each image and then compute the actual predictions for each RoI. One-stage detectors, instead, are faster at the cost of some accuracy, as they compute the predictions directly. Both detector types are composed of two parts: a feature extraction section, called backbone, and a detection head. The backbone is generally an Image Classification model stripped of the classification layer. Common examples are VGG [10] and ResNet [11].

We chose to select one representative model for each category to compare the trade-offs of the two approaches. FasterRCNN [12], for Two-Stage Detectors, is a well known architecture that uses a region proposal network (RPN) to extract the RoIs. For One-Stage Detectors we picked SSD [13], as it has shown, in the available literature, the highest inference speed for its class, while maintaining a respectable accuracy. SSD uses a set of Default Boxes which span the whole image across different scales, instead of an RoI pooling layer. To the two aforementioned models we have added a third competitor: RetinaNet [14], which is a sort of mixed breed. It is a One-Stage Detector, but its speed and accuracy are similar to those of common Two-Stage Detectors. Its peculiar characteristic is that it neither uses an RoI pooling layer nor Default Boxes, but it exploits a Focal Loss function which assigns higher importance to the foreground samples than to the background samples.

B. Pest Insects Detection

For Pest Insects Detection, the state of the art is quite narrow. The majority of the studies focus on Insect Classification, usually with an approach similar to the one introduced in [4], where known CNN-based classifiers are trained on multi-class insect datasets like IP102 [15]. This dataset is one of the few openly available of its kind and it comprises over 75 000 images of insects from 102 different species; additionally, about 19 000 images of this dataset are annotated with bounding boxes. IP102, though, presents a very long-tailed distribution, which is not ideal for learning.

Another popular branch, commonly associated with Pest Insect Detection, is the recognition of Plant Diseases. For example, [16]–[18] nominally relate to the identification of pest insects, but they actually detect their presence indirectly, by considering damages on plants.

For actual Pest Insect Detection, different studies, like [8], [19]–[23], use common DL architectures. In these examples, however, images are collected in controlled environments, by adopting standardized equipment and locations. In [21], images are taken in an open field on a wide, bright red trap. In [8], [19], [20], closed traps with a white background are used, while [22] and [23] consider images taken in grain containers. The only study closely related to ours, both in terms of approach and final goal, is [24]. In this study, images of 24 different species have been collected from the internet, and a network similar to FasterRCNN [12] with the VGG [10] backbone has been trained to perform detection. In comparison to the aforementioned studies, our work has a major difference: our images are not taken in controlled environments nor shot with the same equipment. Differently from the other works, and in particular from [24], in our work we consider insect species that exhibit high similarity. Unlike other works, we have extensively studied the capabilities of SoA models in order to find the best starting point for finer model design. Moreover, we show a duplicate-safe data acquisition procedure, while similar studies have not given relevance to this aspect, which, in our opinion, is quite important.

III. DATASET

The images for our dataset have been gathered through the internet, by scraping the results from two popular Search Engines, Google and Bing, as well as from Flickr, a known photo sharing community. We have collected images for three classes of insects: Popillia japonica (PJ, Fig. 2(a)), Cetonia aurata (CA, Fig. 2(b)), and Phyllopertha horticola (PH, Fig. 2(c)). The first one is a rapidly spreading and dangerous pest insect; the other two, even though similar to P. japonica, are harmless. P. horticola, in particular, for its morphological characteristics, is easily mistaken for P. japonica [25]. In a real-world scenario, it is really important to detect those insects correctly to avoid false alarms. For each class of insects we have collected the first 4000 results from each website, corresponding to a total of 36 000 pictures. These samples were then subject to two phases of filtering, with the purpose of obtaining a high-quality dataset. The first phase involved some automatic methods, but also manual filtering; the second phase, instead, was fully automatic.

Fig. 2. The species selected for the dataset.

A. Initial Filtering

In the initial filtering phase, we have applied three steps:

    1) Duplicate removal based on file size and on the hash of the first 1024 bytes.

    2) Unrelated removal based on a weak classifier that distinguishes images containing insects from images that do not.

    3) Manual inspection.

    After the filtering we had approximately 1200 images remaining for each class.
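For illustration, a minimal sketch of the first filtering step (duplicate removal by file size and hash of the first 1024 bytes) is given below; it assumes a flat directory of image files, and the helper names are hypothetical.

```python
import hashlib
from pathlib import Path

def duplicate_key(path: Path, head_bytes: int = 1024) -> tuple:
    # Key used to group exact duplicates: (file size, hash of the first 1024 bytes).
    head = path.read_bytes()[:head_bytes]
    return (path.stat().st_size, hashlib.md5(head).hexdigest())

def remove_exact_duplicates(image_dir: str) -> list:
    # Keep only the first file encountered for every (size, head-hash) key.
    seen, kept = set(), []
    for path in sorted(p for p in Path(image_dir).iterdir() if p.is_file()):
        key = duplicate_key(path)
        if key not in seen:
            seen.add(key)
            kept.append(path)
    return kept
```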

B. Removal of Near Duplicates

A problem that persisted through the first filtering phase was the presence of near duplicates: two or more non-identical but extremely similar images. These images, of which an example is shown in Fig. 3, are particularly harmful in assessing the performance of models. This problem is highlighted and tackled in [26] for the CIFAR Dataset [27]. We have decided to use a similar approach, with a ResNet50 [11] Image Classification Model, pre-trained on ImageNet [6], instead of a model specifically trained on our dataset, as we found the results to be sufficiently good for our application. Once the classification layer is removed, the model, given an RGB image as input, outputs a 2048-dimensional vector, which can be interpreted as an embedding of the input image. These embeddings are known to be useful, in image retrieval applications, to assess image similarity by means of common distance metrics, as stated by [28]. We have used the L2 distance and an empirically chosen distance threshold of 90, below which we have considered two images to be near duplicates. In the end, we built our sanitized dataset incrementally, by iteratively adding all the images that were distant enough from the ones already present. We chose our threshold conservatively; thus, this procedure may have ruled out some “good” samples. Nonetheless, we preferred it to having near duplicates slip through. This step removed another 5% to 7% of the samples, depending on the class. Finally, we split the dataset into training and test sets, with the usual 80%–20% proportions. The actual number of samples for each class is shown in Table I. Splits are balanced with respect to the set of insect classes that appear in the image: this approach produces subsets with reasonably similar distributions.
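A minimal sketch of this near-duplicate filter follows, assuming a torchvision ResNet50 pre-trained on ImageNet as the embedding model; the threshold of 90 is the one reported above, while the function names and preprocessing details are illustrative.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# ResNet50 pre-trained on ImageNet with the classification layer replaced by an
# identity, so that each image maps to a 2048-dimensional embedding.
backbone = models.resnet50(pretrained=True)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(path: str) -> torch.Tensor:
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return backbone(img).squeeze(0)

def build_sanitized_set(paths, threshold: float = 90.0):
    # Add an image only if its L2 distance to every already kept image exceeds the threshold.
    kept_paths, kept_embeddings = [], []
    for path in paths:
        e = embed(path)
        if all(torch.dist(e, k).item() > threshold for k in kept_embeddings):
            kept_paths.append(path)
            kept_embeddings.append(e)
    return kept_paths
```

Since each class contains only about 1200 images after the initial filtering, the quadratic cost of the pairwise comparison is not a concern.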

Fig. 3. An example of near duplicates.

TABLE I. DETECTION DATASET SAMPLE COUNTS. NUMBERS REFER TO TOTAL OBJECTS

IV. MODELS

In this section, we provide a brief explanation of the main characteristics of the considered architectures. Since all detection models work on high-level features extracted by the backbone, we first focus on describing the actual detection networks. Backbones are described later in this section.

A. Detection Models

A Detection Model tries to assign a Bounding Box and a Class Label to each relevant object in an input image. The former is generally represented by a four-value vector; the latter is a scalar that encodes the actual class name. The Object Detectors we consider are based on CNNs [29], which are implemented as a computational graph that comprises layers of 2D convolutions, nonlinear activations, and max-pooling operations. The weights of the convolution filters act as parameters of the network. These models are generally composed of two conceptual sub-networks: the backbone and the detector. The backbone is generally derived from SoA CNN Image Classifiers and outputs a set of high-level features extracted from the input image. The detector takes these features as input and, through a series of convolution and linear operations, outputs the desired predictions. Each CNN is a parametric function, whose parameters can be optimized iteratively by means of common gradient descent algorithms, such as Stochastic Gradient Descent and Adam. In general, the loss function compares the predicted output with the expected one and yields lower values when predictions are close to ground truths. Since these networks work with images, they must be robust to changes in the input that are not semantically meaningful. This is partially achieved during training with augmentation techniques that enhance data variability and improve shift, scale, and rotation invariance of the learned model. Scale invariance, usually, is also boosted by specific design choices that allow features to be implicitly computed at different scales.

We have considered the following models: FasterRCNN, SSD, and RetinaNet. We have chosen these models as they represent the state of the art when it comes to general performance and reliability in real-world scenarios.

1) FasterRCNN: FasterRCNN [12] is a well known Two-Stage detector. In the first stage, the region proposal network (RPN), shown in Fig. 4, extracts proposals from high-level feature maps with respect to a set of predefined Anchor Boxes. These proposals take the form of:

i) An objectness score: the probability that an anchor in a specific location contains an object.

ii) A box regression: a 4-value vector that represents the displacement of the anchor that best matches the object position.

Fig. 4. FasterRCNN’s RPN architecture from [12].

Features are then extracted from the best proposals by means of an RoI Pooling layer. These proposal-specific features are then passed to a regression head and a classification head; the former computes the final displacements to best fit the proposed box onto the object; the latter assigns the predicted class label. This model uses a loss function with two terms: one accounting for the classification error and one for the box regression error. The former takes the shape of a log loss over a binary classification problem; the latter is the smooth L1 loss for the box displacement regression.
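The two-term structure of this loss can be sketched as follows; this is a simplified illustration (ignoring the exact normalization and anchor sampling of the original implementation), with hypothetical tensor arguments.

```python
import torch.nn.functional as F

def two_term_loss(objectness_logits, objectness_targets,
                  box_deltas, box_targets, positive_mask):
    # Classification term: log loss on the object/background decision for each anchor.
    cls_loss = F.binary_cross_entropy_with_logits(objectness_logits, objectness_targets)
    # Regression term: smooth L1 on box displacements, only for anchors matched to objects.
    reg_loss = F.smooth_l1_loss(box_deltas[positive_mask], box_targets[positive_mask])
    return cls_loss + reg_loss
```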

2) SSD: SSD [13] belongs to the family of One-Stage detectors. To improve inference speed, even at the cost of some accuracy, features are directly computed on top of the backbone, without the aid of RoIs. Each layer extracts features at a different scale and uses them to calculate predictions; this helps achieve scale invariance. Predictions have the same form as FasterRCNN’s. Box Regression values are computed with respect to a set of Default Boxes, which are similar to the Anchor Boxes used in FasterRCNN. This model uses a loss similar to that of FasterRCNN, with a softmax log loss accounting for the classification error and a smooth L1 loss taking care of the box regression error.

3) RetinaNet: RetinaNet’s [14] architecture is straightforward: features coming from the backbone are fed into a regression head and a classification head, which predict the bounding box regressions with respect to a set of anchors and the class label, respectively.

The uniqueness of this network lies not in the architecture, but in the loss function that it uses, called Focal Loss. This loss assigns higher importance to hard-to-classify samples and, in turn, reduces the problem of the gradient being overwhelmed by the vast majority of easily classifiable samples belonging to the background class.
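A compact sketch of the binary focal loss is shown below: the factor (1 − p_t)^γ shrinks the contribution of easy samples, while α balances foreground and background. The default values of α and γ follow the common choice in [14] and are illustrative here.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha: float = 0.25, gamma: float = 2.0):
    # Standard per-sample cross entropy, kept unreduced so it can be re-weighted.
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)              # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    # Easy samples (p_t close to 1) are down-weighted by (1 - p_t)^gamma.
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()
```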

B. Backbones

The backbone is the section of a detection model that performs feature extraction. Usually, a backbone is composed of an Image Classification network stripped of its classification head. Here we briefly introduce the backbones considered in our study, which are: VGG, ResNet, DenseNet, and MobileNet. As previously stated for the detectors, these models have been chosen for their general reliability in real-world scenarios.

1) VGG: VGG [10] is one of the first very deep Image Classification networks. Its design has been surpassed by fully convolutional approaches, but it still represents a relevant baseline. Its architecture borrows from AlexNet [30], with Convolutional Layers followed by Max Pooling and an activation function. The final fully connected layers are stripped, since the model only needs to perform feature extraction.

2) ResNet: ResNet [11] is considered to be the current gold standard both as an image classifier and as a backbone. ResNet first introduced the concept of Residual Blocks, shown in Fig. 5. This is a classic convolutional block with a skip connection that allows the gradient to flow backwards freely and minimizes the problem of the Vanishing Gradient. Let x be the block input and F(x) the output of its convolutional layers; the residual block with skip connection then computes y = x + F(x), which corresponds to the computational flow in Fig. 5.
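The residual computation y = x + F(x) can be written as a small PyTorch module; the specific composition of layers inside F below is illustrative (ResNet uses two or three convolutions per block depending on the variant).

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    # Computes y = ReLU(x + F(x)): the skip connection lets the gradient flow past F.
    def __init__(self, channels: int):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.f(x))
```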

3) DenseNet: DenseNet [31] is considered a good Image Classifier, but it is not popular as a backbone. DenseNet takes the idea of skip connections a step further, with the Dense Block: a series of feature extraction blocks, each one connected to the following ones, allowing both backward gradient flow and forward information flow. This is particularly suitable when learning from small datasets. In comparison to ResNet, features from different depth levels are explicitly combined inside the Dense Block.

Fig. 5. ResNet’s residual block from [11].

4) MobileNet: MobileNet is an effort to optimize CNN-based Image Classifiers to make them run on mobile devices while minimizing the loss of accuracy. To achieve this, a number of strategies to reduce the parameter count have been employed. The convolution operations have been decomposed into a Depthwise Separable Convolution, where the operation against a C×N×N filter gets split into a convolution against a 1×N×N filter followed by a convolution against a C×1×1 one. On top of that, MobileNet uses an Inverted Residual Block that has the same purpose as the one shown in Fig. 5, but it is more efficient from the standpoint of computational resources.
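The parameter saving comes from splitting a standard convolution into a per-channel (depthwise) spatial convolution followed by a 1×1 (pointwise) convolution, as in the sketch below; the module name and sizes are illustrative.

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    # A standard convolution split into a per-channel NxN convolution (depthwise)
    # followed by a 1x1 convolution that mixes channels (pointwise).
    def __init__(self, in_channels: int, out_channels: int, kernel_size: int = 3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size,
                                   padding=kernel_size // 2, groups=in_channels, bias=False)
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))
```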

5) Feature Pyramid Network: A feature pyramid network (FPN) [32] is a building block designed to enhance the output of a feature extraction backbone with features that span the input across multiple scales. Features are computed across different scales by means of convolution operations and then merged, as shown in Fig. 6, in order to obtain a set of rich, multi-scale feature maps.

Fig. 6. FPN architecture from [32].

The FPN is very important for Object Detection networks, as multi-scale features improve performance when recognizing objects of different sizes in the same image.

V. EXPERIMENT DESIGN

In this section, we describe the experiments used to compare the different models and backbones discussed in Section IV. Additionally, we describe some experiments performed to assess the impact of pre-training on model performance.

A. Performance Evaluation of Models and Backbones

Each detection model has been combined with each of the four selected backbones to assess how this choice impacts accuracy and inference time. Backbones were all pre-trained on ImageNet and the experiments have been carried out on an Nvidia Titan V GPU.

Note that FasterRCNN and RetinaNet use FPNs, while SSD works directly on the backbone’s output, constructing multi-scale features.

Each model was trained on the same data; parameters were optimized with stochastic gradient descent (SGD) with Momentum, using Cosine Annealing with Linear Warmup as the learning rate schedule policy. This policy, shown in Fig. 7, stabilizes training in the initial epochs and allows for fine optimization in the last ones.

Fig. 7. Example of Cosine Annealing with Linear Warmup learning rate scheduling.
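A per-epoch version of this schedule can be sketched as below; the warmup length and starting factor are illustrative, not the values used in our training runs.

```python
import math

def lr_at_epoch(epoch: int, total_epochs: int, base_lr: float,
                warmup_epochs: int = 5, warmup_start: float = 0.1) -> float:
    # Linear warmup: ramp from warmup_start * base_lr up to base_lr.
    if epoch < warmup_epochs:
        frac = epoch / warmup_epochs
        return base_lr * (warmup_start + (1.0 - warmup_start) * frac)
    # Cosine annealing: decay from base_lr towards zero over the remaining epochs.
    progress = (epoch - warmup_epochs) / max(1, total_epochs - warmup_epochs)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))
```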

Table II shows the different hyperparameters considered for each Backbone/Detector pair. The number of epochs has been selected so that each model was trained to convergence. SSD was trained with a larger batch size, as its smaller memory footprint allowed it.

We employ McNemar’s test, as suggested in [33], to assess the statistical difference between the performance of pairs of models. This test is applied to a 2×2 contingency table, of the form shown in Table III, where, given a binary classification test, a and d count the samples on which the two models agree, while b and c count those on which they disagree. In the Object Detection case, each model can predict a bounding box for a particular ground truth or not, hence:

● a is the number of ground truths predicted by both models.

● d is the number of ground truths missed by both models.

● b is the number of ground truths predicted by the first model but missed by the second one.

● c is the number of ground truths missed by the first model but predicted by the second one.

McNemar’s test statistic makes use of the disagreement counts b and c; in its continuity-corrected form it reads

χ² = (|b − c| − 1)² / (b + c).    (1)

The null hypothesis is H0: pb = pc; the alternative is H1: pb ≠ pc, where pb and pc denote the theoretical probabilities of occurrence in the corresponding cells. In case of rejection of the null hypothesis, it is possible to affirm that the two models are significantly different.

Applying McNemar’s test to an Object Detection algorithm is not straightforward, as each input image may contain multiple objects and each of these objects may or may not be recognized correctly. In our case, we considered an object recognized if the model predicted a box with the following characteristics: intersection over union (IoU) with the object’s ground truth box greater than 0.5; confidence score greater than 0.5; the correct label for the object. Each prediction can match only one object. IoU for two boxes A and B, as defined in (2), is the ratio between the area of the intersection of the two boxes and the area of their union:

IoU(A, B) = area(A ∩ B) / area(A ∪ B).    (2)

Thus, its value is 1 when the two boxes perfectly overlap and decreases towards 0 as their overlap shrinks.
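The two quantities involved, the continuity-corrected McNemar statistic over the disagreement counts and the IoU between two boxes, can be sketched as below (boxes are assumed to be in (x1, y1, x2, y2) format; the p-value is obtained by comparing the statistic against a χ² distribution with one degree of freedom).

```python
def mcnemar_statistic(b: int, c: int) -> float:
    # Continuity-corrected McNemar statistic; compare against a chi-squared
    # distribution with one degree of freedom to obtain the p-value.
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)

def iou(box_a, box_b) -> float:
    # Intersection over union of two boxes given as (x1, y1, x2, y2).
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```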

TABLE II. MODELS AND HYPERPARAMETERS USED FOR TRAINING

TABLE III. EXAMPLE OF CONTINGENCY TABLE

B. Pre-Training Impact

A secondary aspect we wanted to test is the impact of pre-training on performance, for the following two different scenarios:

    1) Starting from a model completely pre-trained on the common objects in context (COCO) [34] dataset.

    2) Using a backbone pre-trained on a generic insect dataset instead of ImageNet.

We restricted this experiment to ResNet50-FPN FasterRCNN, as pre-trained COCO weights were already available. COCO is a widely known dataset, annotated for different computer vision tasks, one of which is Object Detection. To train the insect-specific backbone, we have used a custom dataset generated by combining the images from our 3-class dataset with a filtered version of IP102 [15], from which we removed insects with too few samples, duplicates, and misplaced images. To this we have added some background images that contained only plants or flowers without any insect. All the steps to ensure dataset fairness, described in Section III, have been applied to this data as well. To account for the high imbalance of this dataset, we have used a Weighted Cross Entropy loss function, assigning higher importance to less frequent classes. For these tests we have used the same hyperparameters as ResNet101-FPN FasterRCNN, which are listed in Table II.
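In PyTorch, such a weighted loss is obtained by passing per-class weights to the cross entropy criterion; the weighting scheme and class counts below are purely illustrative, not the exact values used for the IP102-based pre-training.

```python
import torch
import torch.nn as nn

# Hypothetical per-class sample counts for an unbalanced classification dataset.
class_counts = torch.tensor([5300.0, 240.0, 1150.0, 90.0])
# Weights inversely proportional to class frequency: rare classes weigh more.
weights = class_counts.sum() / (len(class_counts) * class_counts)

criterion = nn.CrossEntropyLoss(weight=weights)
logits = torch.randn(8, len(class_counts))           # a batch of 8 predictions
targets = torch.randint(0, len(class_counts), (8,))  # ground truth class indices
loss = criterion(logits, targets)
```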

VI. RESULTS

In this section, we present the results of the experiments described in Section V. Furthermore, we compare our results to the ones obtained in other studies available in the scientific literature.

A. Comparison Across Models

Table IV shows the performance of each tested model-backbone combination. The capability of the models to correctly predict the bounding boxes and to assign the labels has been evaluated by means of mean average precision (mAP). In Object Detection, though, the calculation of this parameter may introduce some ambiguity. Therefore, we have used the AP at IoU = 0.50, as specified in the COCO challenge [34]. Another relevant aspect that we have evaluated is the computational requirements, as we target smart agriculture, where low-power IoT devices are deployed in the field and edge computation may be preferable to avoid frequent, energy-expensive data transfers over the network. To evaluate the computational requirements, we have measured the inference speed in Frames per Second, both on GPU (an Nvidia GeForce RTX 2080) and on CPU (an Intel Xeon Silver 4116). These measurements do not provide a direct evaluation of the performance on IoT nodes, but rather a relative ordering of the model-backbone combinations in terms of computational requirements. These tests have been performed by averaging the inference time of 100 iterations without batching; the inference operation includes any preprocessing needed by the model, which usually is normalization and resizing of the input images. The input images were randomly generated, with a size of 1280×720 pixels.
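The timing procedure can be reproduced with a loop of the kind sketched below, using torchvision detection models (shown here with randomly initialized weights, since weights do not affect timing); exact numbers will of course depend on the hardware.

```python
import time
import torch
from torchvision.models.detection import fasterrcnn_mobilenet_v3_large_fpn

def measure_fps(model, device: str = "cpu", iterations: int = 100) -> float:
    # Average single-image inference time (no batching) on a random 1280x720 input.
    # torchvision detection models resize and normalize the input internally,
    # so preprocessing is included in the measured time.
    model.eval().to(device)
    image = torch.rand(3, 720, 1280, device=device)
    with torch.no_grad():
        start = time.perf_counter()
        for _ in range(iterations):
            model([image])
        if device.startswith("cuda"):
            torch.cuda.synchronize()
        elapsed = time.perf_counter() - start
    return iterations / elapsed

fps_cpu = measure_fps(fasterrcnn_mobilenet_v3_large_fpn(pretrained=False), device="cpu")
```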

As shown in Table IV, FasterRCNN and RetinaNet perform better than SSD, with small changes in mAP among different backbones. SSD struggles with less powerful backbones like MobileNet and VGG16, but it obtains higher scores when paired with more demanding backbones. There is no clear winner in terms of raw mAP, as the scores of the best performing model-backbone combinations are very similar. However, these architectures do not support interpretability of results, as it is not possible to open the black box and draw cause-effect indications. This is particularly true when we keep the detector unchanged and we only switch the backbone. For these reasons, the optimal architecture must be found empirically through a trial and error approach.

TABLE IV. PERFORMANCE OF THE COMPARED MODELS IN TERMS OF MEAN AVERAGE PRECISION ON THE TEST SET AND FRAMES PER SECOND AT INFERENCE TIME. THE FALSE POSITIVE COUNT FOR CLASS POPILLIA JAPONICA IS ALSO SHOWN

Regarding computation speed, SSD is the fastest on CPU; this does not come as a surprise, as this model is designed to be fast and it is especially suitable for applications where the available hardware has no GPU acceleration. On GPU, SSD is still fast, but, surprisingly, FasterRCNN is computationally lighter, with a noticeably higher FPS than all the others. This follows from the MobileNetV3 model being specifically optimized to reduce execution time; this is proportionally more relevant for FasterRCNN than for other detectors. Whether this depends on how the computation is allocated on the GPU or on some specific backbone-detector interaction is difficult to say and should be the object of further investigation, outside the scope of this work.

Table IV also shows the number of false positives produced on the test set for the class P. japonica. FasterRCNN is the model showing the fewest false positives; in particular, when this model is associated with the MobileNetV3 backbone, the number of false positives is the lowest among all models. If we consider these numbers with respect to the overall number of non-PJ insects present in our dataset, which is 530, we see that the best configuration has a false positive ratio of 2.26%. Note that this is not a formally precise false positive ratio, as the concept of negative sample is ill defined for the problem of Object Detection.

Fig. 8 shows how applying a threshold to the confidence score of predictions affects the false positive count and the mAP for each model. Every prediction below the threshold is automatically ignored. For the best performers, mAP is not greatly reduced, even at higher threshold values, while using small thresholds can still have a great impact on the number of false positives for models that produce many of them. This type of plot is really helpful in reducing false alarms while preserving detection performance.
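Applying such a threshold at inference time amounts to discarding low-confidence boxes from the detector output, for instance as in the sketch below (which assumes the torchvision detection output format, a dict with 'boxes', 'labels', and 'scores' tensors).

```python
def filter_predictions(prediction: dict, score_threshold: float = 0.5) -> dict:
    # Drop every predicted box whose confidence score falls below the threshold.
    keep = prediction["scores"] >= score_threshold
    return {key: value[keep] for key, value in prediction.items()}
```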

Fig. 8. False Positives for the PJ class and mAP variation for different prediction confidence thresholds.

Fig. 9 shows the trend of the training loss for our three models; we can notice how it smoothly converges in all cases but FasterRCNN with the MobileNetV3 backbone. The loss increase, however, seems to only marginally affect the mAP score during validation, as shown in Fig. 10(a). This tells us that a thoroughly calibrated training procedure may lead the loss of this model, which already is one of the best, to a plateau as well, possibly with better performance.

Fig. 9. Train loss for the 3 detection architectures with different backbones.

Fig. 10. Validation mAP for MobileNetV3 and DenseNet169.

Fig. 10 shows the validation mAP score for our models with the DenseNet169 and MobileNetV3 backbones. As shown in Fig. 10(b), DenseNet169 provides a smooth increase in mAP for all architectures, with minimal variance among training epochs and detectors, whilst the MobileNetV3 plot is noisier and the results among the detectors are very different. The equivalent plot for ResNet101 is closer to DenseNet169, while the one for VGG16 is closer to MobileNetV3; this suggests that deeper and more demanding backbones have more consistent behavior independently of the specific detection model, while less demanding ones have trouble with light detectors like SSD.

Fig. 11 shows the McNemar’s Test p-values for each model pair. Considering the conventional significance level of α = 0.05, we can see that the majority of the models are significantly different from one another, meaning that, even with similar performances, they have different weaknesses and strengths. This makes the choice of the best model a non-trivial decision. Furthermore, for each detector aside from SSD, the mAP score remains similar across the different backbones, suggesting that the backbone choice is not that critical.

Given the performance results together with McNemar’s test outcome, we can confidently say that a FasterRCNN detector with a MobileNetV3 backbone is the best choice for our case study, as it yields top mAP coupled with the highest speed on GPU. Moreover, its false positives are the lowest among competitors; this result is particularly relevant in this application.

When computational power is a strong constraint and no hardware acceleration is available, SSD with the MobileNetV3 backbone should be preferred instead, as it is reasonably fast without a significant drop in mAP. We comment that, even though an SSD with VGG16 backbone is almost 3 times faster on CPU, the loss in detection accuracy does not nicely trade off with the improvement in latency. Yet, this combination can be used, accepting the reduced detection performance, in systems that are highly constrained in computational resources and/or energy.

B. Effects of Pre-Training

Table V shows the performance of FasterRCNN with the ResNet50 backbone in the 3 cases described in Section V-B. For this evaluation, we only consider Mean Average Precision, since the architecture is constant. Thus, the FPS scores of the different solutions are constant too.

The highest mAP is reached when the model is fully pre-trained on COCO; pre-training the backbone on the insect dataset presented in Section V-B results, instead, in the lowest score. This is consistent with the loss trends of Fig. 12, with coco being the lowest curve and insects the highest. Results suggest that using an ad-hoc dataset to pre-train the backbone harms the overall Transfer Learning process instead of improving it. Whether this is due to the relatively small size of the dataset used or is inherent in the usage of less diverse data needs to be investigated further.

Fig. 11. McNemar’s Test on compared models. Each cell contains the p-value for the specific pair. Significance level is α = 0.05.

TABLE V. PERFORMANCE OF FASTERRCNN-RESNET50, WITH DIFFERENT PRE-TRAINING DATASETS, EXPRESSED IN TERMS OF MEAN AVERAGE PRECISION ON THE TEST SET

As shown in Fig. 13, no statistical difference between the model pre-trained on COCO and the one pre-trained on ImageNet is present, suggesting that pre-training the detection head has no significant benefits, at least in this case. The model pre-trained on generic insect images, however, is significantly different from the others. This further supports the thesis that pre-training on specific data significantly affects the learning, possibly for the worse. Hence, sticking to common Transfer Learning procedures seems to be the safer choice.

C. Visualization of Importance

In applications where the solution is implemented by means of very deep models, it is also important to assess the prediction quality in a human-readable form. Guided Gradient-weighted Class Activation Mapping (Guided GradCAM) [35] is an approach that combines the results from Guided Backpropagation [36] and GradCAM [35] to form a visually understandable image of the areas of major importance for the prediction.

Guided Backpropagation relies on the intuition that the gradient of the relevant output (e.g., the class score) w.r.t. the input image is a good indicator of which areas of the image are more relevant for the prediction. It is called guided because only positive gradients, corresponding to positive activations, are back-propagated. Instead, in GradCAM, the activation of the deepest convolutional layer of a CNN, properly weighted by the gradient of the relevant output w.r.t. said layer’s activations, is interpreted as a heatmap of the relevant regions of the input. Guided GradCAM is the product of these two pieces of information.
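As an illustration, the GradCAM half of the procedure can be sketched on a plain ResNet50 classifier as below; the guided-backpropagation half would additionally require backward hooks on the ReLU layers, and applying the technique to a full detector (as done for Fig. 14) requires hooking the backbone inside the detection model. Hook targets and names here are illustrative.

```python
import torch
import torchvision.models as models

model = models.resnet50(pretrained=True).eval()
activations, gradients = {}, {}

def forward_hook(module, inputs, output):
    activations["value"] = output.detach()

def backward_hook(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

# Hook the deepest convolutional stage of the network.
model.layer4.register_forward_hook(forward_hook)
model.layer4.register_full_backward_hook(backward_hook)

def gradcam(image: torch.Tensor, target_class: int) -> torch.Tensor:
    # Heatmap of the regions most relevant for the target class score.
    scores = model(image.unsqueeze(0))
    model.zero_grad()
    scores[0, target_class].backward()
    # Weight each channel of the activation map by its spatially averaged gradient.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    cam = (weights * activations["value"]).sum(dim=1).clamp(min=0)
    return cam / (cam.max() + 1e-8)   # normalized heatmap, to be upsampled to the input size
```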

In the case of insect recognition, these visualizations are particularly useful if an expert wants to evaluate the model based on the relevance of the highlighted insect features. In Fig. 14 we can see how the regions of highest importance are located on the head of the Popillia japonica and around its white hairs, a distinctive feature of this species.

D. Comparison with Other Studies

Our results are qualitatively similar to those of other works in the scientific literature, such as [8] and [24]. The former reached a 75.46% average mAP with a peak of 90.48% on the best predicted class; the latter topped out at 89.22% mAP. We have reached a maximum mAP of 93.3%.

However, we point out that, even though the two mentioned studies consider some of the models that we have also included in our work, this comparison can only be qualitative, for the following reasons:

1) The considered datasets are different; in particular for [8], whose data come from traps. To be more precise, we believe that our dataset is inherently more challenging. For this reason, we can consider our results to be at least as good as the ones reported by the cited works.

2) The two mentioned studies have not disclosed their approach to mAP calculation. Therefore, the reported numbers may not have the same exact meaning as in our paper.

3) In [24], the adopted backbone and the procedure used to train SSD and FasterRCNN are not specified.

4) In [8], when ResNet101 and FasterRCNN are considered, the reported results are significantly worse than ours, with a top mAP of 71.62%, as opposed to our 92.14%. This strengthens the belief that the adopted procedures are inherently different.

VII. CONCLUSIONS AND FUTURE WORK

Fig. 12. Train loss (left) and Validation mAP (right) for FasterRCNN ResNet50 with different pre-training.

Fig. 13. McNemar’s Test on FasterRCNN ResNet50 models with different pre-training datasets. The first name is the dataset used to pre-train the whole model, the second is the one used for the backbone. Significance level is α = 0.05.

In this paper, we have evaluated different combinations of models and backbones for detecting a pest insect in images that are not obtained in controlled environments. Our results demonstrate that, at least for insects similar to Popillia japonica, this task can be performed with high accuracy, even by using general-purpose models. Not only have detection performances been estimated, but also inference speed, which indicates which of the tested models is the least hungry for computational resources. The best detection performance has been reached by the combination RetinaNet-ResNet101 (mAP = 93.3%), but, on average, FasterRCNN was the best performer. The model with the best throughput on GPU is FasterRCNN paired with a MobileNetV3 backbone (FPS = 60.92). The best throughput on CPU was obtained by the combination SSD-VGG16 (FPS = 4.27). Given the statistical similarity between some of the models, we think that the critical part is the choice of the overall detection architecture rather than the specific backbone. SSD is an exception to this, as the backbone choice proved to play a significant role in the final results.

Fig. 14. Example of visualizing image importance through Guided GradCAM for a FasterRCNN model with ResNet50 backbone.

Additionally, our experiments show that pre-training on ImageNet is a suitable Transfer Learning setup for insect recognition tasks, and that pre-training on small task-related datasets seemingly has no benefits. Overall, we consider FasterRCNN with a MobileNetV3 backbone a strong baseline for insect detection, given both the good performance and the high inference speed on CPU and GPU; moreover, this model produced the lowest number of false positives for the pest insect class, which is of particular importance for this type of application.

We conclude that widely adopted generic object detection architectures are well suited for the recognition of beetle-like insects and, realistically, for insects in general. The real advance in general insect recognition would probably come from the construction of bigger datasets, with a large number of species and images, rather than from the search for particular architectures, which may, in the end, be too task-specific.

Future work should investigate the development of optimized models that take the optimum found here as a baseline and make task-specific improvements to the architecture. A few examples are: addressing hardware and embedded system resource constraints to port the solution onto mobile devices, leveraging known characteristics of the target pest insect (inductive bias), and working on methods aimed at improving the detection of small insects as well as dealing with bad lighting conditions and harsh environments. In the spirit of deep learning, we would also envisage the generation of a huge image dataset, possibly containing different species. This might be achieved by considering collaborative methods where citizens contribute by taking pictures and delivering them to a cloud validation process before enriching the database.
