
Precise Agriculture: Effective Deep Learning Strategies to Detect Pest Insects

IEEE/CAA Journal of Automatica Sinica, 2022, Issue 2

Luca Butera, Alberto Ferrante, Mauro Jermini, Mauro Prevostini, and Cesare Alippi

Abstract—Pest insect monitoring and control is crucial to ensure safe and profitable crop growth in all plantation types, as well as to guarantee food quality and limited use of pesticides. We aim at extending traditional monitoring by means of traps by involving the general public in reporting the presence of insects using smartphones. This includes the largely unexplored problem of detecting insects in images taken in non-controlled conditions. Furthermore, pest insects are, in many cases, extremely similar to other species that are harmless. Therefore, computer vision algorithms must not be fooled by these similar insects, so as not to raise unmotivated alarms. In this work, we study the capabilities of state-of-the-art (SoA) object detection models based on convolutional neural networks (CNNs) for the task of detecting beetle-like pest insects in non-homogeneous images taken outdoors by different sources. Moreover, we focus on disambiguating a pest insect from similar harmless species. We consider not only the detection performance of different models, but also the required computational resources. This study aims at providing a baseline model for this kind of task. Our results show the suitability of current SoA models for this application, highlighting how FasterRCNN with a MobileNetV3 backbone is a particularly good starting point in terms of accuracy and inference latency. This combination provided a mean average precision score of 92.66%, which can be considered qualitatively at least as good as the scores obtained by other authors who adopted more specific models.

I. INTRODUCTION

THIS work is in the field of precise agriculture and aims at exploring different deep learning (DL) models for detecting insects in images. More in detail, this paper discusses the use of DL-based object detection models for the recognition of pest insects outdoors. In particular, we focus on the comparison of known architectures in the context of a novel dataset. Considering the field of interest, we also evaluate the computational resource requirements of the considered models.

Early pest insect identification is a crucial task to ensure healthy crop growth; this reduces the chance of yield loss and enables the adoption of precise treatments that provide an environmental and economic advantage [1]. Currently, this task is mainly carried out by experts, who have to monitor field traps to assess the presence of species which cause significant economic loss. This procedure is prone to delays in the identification and requires a great deal of human effort. In fact, human operators need to travel to the different locations where traps are installed to check the presence of the target insects and, in some cases, to manually count the number of captured insects. The current trend is to introduce machine learning techniques to help decision makers in the choice of suitable control strategies for pest insects. Even though this approach is going to automate some time-consuming tasks, it leaves monitoring of pest insects limited to traps. For some pest insects that spread outside crop fields, it can be beneficial to have additional information coming from other locations. This is the case of Popillia japonica, a dangerous phytophage registered as a quarantine organism at the European level [2]. Early detection of the presence of this insect in specific regions is extremely important, as it permits tracking its spread. In this exercise, the general public may be of great help, as they can provide valuable input by means of mobile phones. As shown in Fig. 1, our monitoring system for pest insects will integrate inputs from traps and from smartphones, with the purpose of creating a map of the presence of the pest insects. Machine learning techniques become necessary when reports come from a large pool of users that take images with different cameras, lighting conditions, framing, and backgrounds. Furthermore, pest insects are, in many cases, extremely similar to other species that are harmless (e.g., Popillia japonica is very similar to two harmless species, Cetonia aurata and Phyllopertha horticola). Therefore, computer vision algorithms must be able to distinguish between them, even in the presence of high similarities. This is especially important when the general public is involved in the process, to avoid unmotivated alarms and insect eradication actions that are useless and may damage the environment.

Throughout the years, many Computer Vision approaches have been proposed to assess the presence of pest insects in crop fields. Early solutions (e.g., [3]) were based on image classification by means of handcrafted features. These approaches suffered from a lack of generality, as most features were specific to particular species and, moreover, required great domain knowledge and design effort.

Fig. 1. Sketch of a multimodal detection system.

With the advent of DL and convolutional neural networks (CNNs), the problem has started being tackled with data-driven approaches that leveraged the abstraction power of deep architectures both to extract meaningful features and to perform the recognition task. The majority of these studies focused on Image Classification [4], [5], which assigns a class label to each image. CNNs solved this problem with the help of Transfer Learning from models pre-trained on generic datasets, such as ImageNet [6].

Only in recent years, as highlighted in [7], have some studies tackled the task of both recognizing and localizing pest insects in images. Remarkably, PestNet [8] compared an ad-hoc network to state-of-the-art (SoA) ones for the task of detecting insects in images taken in highly controlled environments, such as in traps.

However, not much work has been done on detecting insects in the open, despite the fact that exploiting the pictures any citizen could take with a smartphone can greatly improve the control of pests over the territory. In the context of a multimodal pest monitoring system like the one shown in Fig. 1, it is important to have a reliable detection algorithm that is accurate with highly diverse backgrounds and robust to the presence of harmless insects that may be mistaken for dangerous ones. In fact, in such a system, observations would come from a large number of locations and at much higher rates compared to trap-only monitoring systems. In a large-scale system, false alarms become a critical problem that requires a specific study.

In this paper, we assess the capabilities of SoA Generic Object Detection models for the detection of very similar insects in non-controlled environments. We do so as we believe that current architectures are well suited for this task and there is no need for new ones; the main problem is understanding how they compare and which ones are better choices.

We built a dataset of images taken from the Internet, with high variability in their characteristics, such as background, perspective, and resolution. The dataset contains three classes: the first one contains images of a pest insect, Popillia japonica; the other two classes contain images of insects commonly mistaken for our target one. Special care was put into minimizing the presence of highly similar samples. This task poses many challenges that SoA methods may have trouble with, such as the presence, in the images, of small or partially occluded insects, bad lighting conditions, or insects camouflaged against the background. Most importantly, without any prior knowledge, the object detectors may not be able to learn the general features necessary to disambiguate two similar insects.

We then used our dataset to study the performance of three SoA object detection models across four different backbones. The goal was to:

1) Assess the adaptability of existing models to the detection of Popillia japonica.

2) Identify the best candidate for task-specific improvements and modifications.

3) Study the computational cost/accuracy trade-off of different backbones.

With respect to the existing literature, the novelty of our work can be summed up as follows:

1) We adopt advanced filtering techniques to provide enhanced datasets. The impact on performance is well evident both in terms of accuracy and reliability. Notably, we avoid near-duplicate samples that may hinder the correctness of the model evaluation procedure when similar pairs are split between the training and test sets.

2) We experimentally verify that designing object detectors targeted to beetle-like insects is neither required nor advisable, as general-purpose models can be successfully used after proper tuning. Furthermore, we compare these general models both from the standpoint of performance and from that of computational resources by employing common metrics.

3) For the first time, we focus on the detection of extremely similar insects. In our application, only one insect is actually harmful and must not be confused with others closely resembling it.

4) We consider images that are very different in their characteristics and, in general, not shot in controlled environments nor with the same equipment. This, together with the previous point, sets up a highly complex detection environment that has not been previously addressed in similar studies on insects.

5) We propose a study of the impact of output thresholding on the false positive count, showing that it is possible to find a good trade-off between false alarms and the detection of the dangerous insects also in our specific application scenario.

6) We show the applicability of Guided Backpropagation-based visualization techniques that can help experts gather insights into the insect features considered by the models. This is of particular importance since it allows validation of the algorithm's decisions from a more qualitative standpoint. These visualizations highlight the key features of the image used for the prediction: a good detector is expected to rely on visually meaningful parts of the image.

The results we obtained show that known SoA Object Detection models are suitable for beetle-like insect recognition in the open, even in the presence of similar species. Additionally, backbone choice can affect performance, in particular for SSD models. Transfer Learning is a necessary step, but pre-training on a generic insect dataset provided no improvements. In general, FasterRCNN with a MobileNetV3 backbone provides the best trade-off between accuracy and inference speed.

The paper is organized as follows: In Section II, we go over the state of the art for both Generic Object Detection and Pest Insect Detection. In Section III, we describe our dataset, how we created it, and how we ensured the absence of duplicates/near-duplicates. In Section IV, we briefly introduce the different models that we have compared. In Section V, we explain how we designed our experiments. In Section VI, we present and comment on our experimental results.

II. STATE OF THE ART

In this section, we describe the state of the art for our application. First, we spend a few words on Generic Object Detection; then, we discuss the work that has been done specifically on insect recognition.

A. Generic Object Detection

Generic Object Detection is the task of assigning, for each relevant object instance in an image, a Bounding Box and a Label. The state of the art on this topic has grown considerably in recent years, as shown in [9], which represents an up-to-date and complete survey of Generic Object Detection.

We can divide Generic Object Detection models based on CNNs into two broad categories: Two-stage and One-stage detectors. Two-stage detectors are generally slower but more accurate. They first compute regions of interest (RoI) for each image and then compute the actual predictions for each RoI. One-stage detectors, instead, are faster at the cost of some accuracy, as they compute the predictions directly. Both detector types are composed of two parts: a feature extraction section, called backbone, and a detection head. The backbone is generally an Image Classification model stripped of the classification layer. Common examples are VGG [10] and ResNet [11].

We chose one representative model for each category to compare the trade-offs of the two approaches. FasterRCNN [12], for Two-Stage Detectors, is a well known architecture that uses a region proposal network (RPN) to extract the RoIs. For One-Stage Detectors we picked SSD [13], as it has shown, in the available literature, the highest inference speed in its class, while maintaining a respectable accuracy. SSD uses a set of Default Boxes which span the whole image across different scales, instead of an RoI pooling layer. To the two aforementioned models we have added a third competitor: RetinaNet [14], which is somewhat of a mixed breed. It is a One-Stage Detector, but its speed and accuracy are similar to those of common Two-Stage Detectors. Its peculiar characteristic is that it uses neither an RoI pooling layer nor Default Boxes, but it exploits a Focal Loss function which weights the foreground samples as more important than the background samples.

B. Pest Insects Detection

For Pest Insect Detection, the state of the art is quite narrow. The majority of the studies focus on Insect Classification, usually with an approach similar to the one introduced in [4], where known CNN-based classifiers are trained on multi-class insect datasets like IP102 [15]. This dataset is one of the few openly available of its kind and it comprises over 75 000 images of insects from 102 different species; additionally, about 19 000 images of this dataset are annotated with bounding boxes. IP102, though, presents a very long-tailed distribution, which is not ideal for learning.

Another popular branch, commonly associated with Pest Insect Detection, is the recognition of Plant Diseases. For example, [16]–[18] nominally relate to the identification of pest insects, but they actually detect their presence indirectly, by considering the damage on plants.

For actual Pest Insect Detection, different studies, like [8], [19]–[23], use common DL architectures. In these examples, however, images are collected in controlled environments, by adopting standardized equipment and locations. In [21], images are taken in an open field on a wide, bright red trap. In [8], [19], [20], closed traps with a white background are used, while [22] and [23] consider images taken in grain containers. The only study closely related to ours, both in terms of approach and final goal, is [24]. In this study, images of 24 different species have been collected from the internet, and a network similar to FasterRCNN [12] with the VGG [10] backbone has been trained to perform detection. In comparison to the aforementioned studies, our work has a major difference: our images are not taken in controlled environments nor shot with the same equipment. Differently from the other works, and in particular from [24], we consider insect species that exhibit high similarity. Unlike other works, we have extensively studied the capabilities of SoA models in order to find the best starting point for finer model design. Moreover, we show a duplicate-safe data acquisition procedure, while similar studies have not given relevance to this aspect, which, in our opinion, is quite important.

III. DATASET

The images for our dataset have been gathered through the internet, by scraping the results from two popular Search Engines, Google and Bing, as well as from Flickr, a known photo sharing community. We have collected images for three classes of insects: Popillia japonica (PJ, Fig. 2(a)), Cetonia aurata (CA, Fig. 2(b)), and Phyllopertha horticola (PH, Fig. 2(c)). The first one is a rapidly spreading and dangerous pest insect; the other two, even though similar to P. japonica, are harmless. P. horticola, in particular, for its morphological characteristics, is easily mistaken for P. japonica [25]. In a real-world scenario, it is really important to detect those insects correctly to avoid false alarms. For each class of insects, we have collected the first 4000 results from each website, corresponding to a total of 36 000 pictures. These samples were then subject to two phases of filtering, with the purpose of obtaining a high-quality dataset. The first phase involved some automatic methods, but also manual filtering; the second phase, instead, was fully automatic.

Fig. 2. The species selected for the dataset.

A. Initial Filtering

In the initial filtering phase, we have applied three steps:

1) Duplicate removal based on file size and on the hash of the first 1024 bytes (a minimal sketch of this step is given below).

    2) Unrelated removal based on a weak classifier that distinguishes images containing insects from images that do not.

    3) Manual inspection.

    After the filtering we had approximately 1200 images remaining for each class.
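The exact utilities used for this first-pass filter are not disclosed in the paper; the following is a minimal sketch of such a fingerprint-based duplicate removal, assuming only the Python standard library and a hypothetical folder layout.

```python
import hashlib
from pathlib import Path

def cheap_fingerprint(path: Path) -> tuple[int, str]:
    """Fingerprint an image by its file size and the hash of its first 1024 bytes."""
    head = path.read_bytes()[:1024]
    return (path.stat().st_size, hashlib.sha256(head).hexdigest())

def remove_exact_duplicates(image_dir: str) -> list[Path]:
    """Keep only one file per fingerprint; return the retained paths."""
    seen, kept = set(), []
    for path in sorted(Path(image_dir).glob("*.jpg")):
        fp = cheap_fingerprint(path)
        if fp not in seen:
            seen.add(fp)
            kept.append(path)
    return kept

# Example (hypothetical directory name): one call per class folder.
kept_pj = remove_exact_duplicates("raw_images/popillia_japonica")
```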

B. Removal of Near Duplicates

A problem that persisted through the first filtering phase was the presence of near duplicates: two or more non-identical but extremely similar images. These images, of which an example is shown in Fig. 3, are particularly harmful when assessing the performance of models. This problem is highlighted and tackled in [26] for the CIFAR dataset [27]. We have decided to use a similar approach, with a ResNet50 [11] Image Classification model pre-trained on ImageNet [6], instead of a model specifically trained on our dataset, as we found the results to be sufficiently good for our application. Once the classification layer is removed, the model, given an RGB image as input, outputs a 2048-dimensional vector, which can be interpreted as an embedding of the input image. These embeddings are known to be useful, in image retrieval applications, for assessing image similarity by means of common distance metrics, as stated by [28]. We have used the L2 distance and an empirically chosen distance threshold of 90, below which we considered two images to be near duplicates. In the end, we built our sanitized dataset incrementally, by iteratively adding all the images that were distant enough from the ones already present. We chose our threshold conservatively; thus, this procedure may have ruled out some “good” samples. Nonetheless, we preferred it to having near duplicates slip through. This step removed another 5% to 7% of the samples, depending on the class. Finally, we split the dataset into training and test sets, with the usual 80%–20% proportions. The actual number of samples for each class is shown in Table I. Splits are balanced with respect to the set of insect classes that appear in each image: this approach produces subsets with reasonably similar distributions.
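The paper provides no code for this step; the sketch below illustrates the described procedure with torchvision's ResNet50 (assuming a torchvision version recent enough to expose the `ResNet50_Weights` enum), comparing the 2048-dimensional embeddings with the L2 distance and the threshold of 90 mentioned above.

```python
import torch
from torchvision import models

# Backbone without the classification layer: outputs 2048-d embeddings.
weights = models.ResNet50_Weights.IMAGENET1K_V1
resnet = models.resnet50(weights=weights)
resnet.fc = torch.nn.Identity()
resnet.eval()
preprocess = weights.transforms()  # resize/normalize as expected by the backbone

@torch.no_grad()
def embed(pil_image) -> torch.Tensor:
    return resnet(preprocess(pil_image).unsqueeze(0)).squeeze(0)

def build_sanitized_set(pil_images, threshold: float = 90.0):
    """Incrementally keep images whose embedding is far enough from all kept ones."""
    kept_images, kept_embeddings = [], []
    for img in pil_images:
        e = embed(img)
        if all(torch.dist(e, k, p=2) > threshold for k in kept_embeddings):
            kept_images.append(img)
            kept_embeddings.append(e)
    return kept_images
```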

Fig. 3. An example of near duplicates.

TABLE I DETECTION DATASET SAMPLE COUNTS (NUMBERS REFER TO TOTAL OBJECTS)

IV. MODELS

In this section, we provide a brief explanation of the main characteristics of the considered architectures. Since all detection models work on high-level features extracted by the backbone, we first focus on describing the actual detection networks. Backbones are described later in this section.

A. Detection Models

A Detection Model tries to assign a Bounding Box and a Class Label to each relevant object in an input image. The former is generally represented by a four-value vector; the latter is a scalar that encodes the actual class name. The Object Detectors we consider are based on CNNs [29], which are implemented as a computational graph that comprises layers of 2D convolutions, nonlinear activations, and max-pooling operations. The weights of the convolution filters act as parameters of the network. These models are generally composed of two conceptual sub-networks: the backbone and the detector. The backbone is generally derived from SoA CNN Image Classifiers and outputs a set of high-level features extracted from the input image. The detector takes these features as input and, through a series of convolution and linear operations, outputs the desired predictions. Each CNN is a parametric function, whose parameters can be optimized iteratively by means of common gradient descent algorithms, such as Stochastic Gradient Descent and Adam. In general, the loss function compares the predicted output with the expected one and yields lower values when predictions are close to ground truths. Since these networks work with images, they must be robust to changes in the input that are not semantically meaningful. This is partially achieved during training with augmentation techniques that enhance data variability and improve shift, scale, and rotation invariance of the learned model. Scale invariance, usually, is also boosted by specific design choices that allow features to be implicitly computed at different scales.

We have considered the following models: FasterRCNN, SSD, and RetinaNet. We have chosen these models as they represent the state of the art when it comes to general performance and reliability in real-world scenarios.
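The paper does not state which implementation was used; as an illustration, all three detector families (including the MobileNetV3 pairing discussed later) are available off-the-shelf in torchvision, assuming a version recent enough to accept the `weights_backbone`/`num_classes` arguments shown in this sketch.

```python
from torchvision.models import detection

NUM_CLASSES = 4  # 3 insect classes + background

# Two-stage detector: FasterRCNN with a MobileNetV3-FPN backbone.
faster_rcnn = detection.fasterrcnn_mobilenet_v3_large_fpn(
    weights_backbone="DEFAULT", num_classes=NUM_CLASSES
)

# One-stage detector: SSD300 with a VGG16 backbone.
ssd = detection.ssd300_vgg16(weights_backbone="DEFAULT", num_classes=NUM_CLASSES)

# One-stage detector with Focal Loss: RetinaNet with a ResNet50-FPN backbone.
retinanet = detection.retinanet_resnet50_fpn(
    weights_backbone="DEFAULT", num_classes=NUM_CLASSES
)
```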

1) FasterRCNN: FasterRCNN [12] is a well known Two-Stage detector. In the first stage, the region proposal network (RPN), shown in Fig. 4, extracts proposals from high-level feature maps with respect to a set of predefined Anchor Boxes. These proposals take the form of:

i) An objectness score: the probability that an anchor in a specific location contains an object.

ii) A box regression: a 4-value vector that represents the displacement of the anchor that best matches the object position.

Fig. 4. FasterRCNN’s RPN architecture from [12].

Features are then extracted from the best proposals, by means of an RoI Pooling layer. These proposal-specific features are then passed to a regression head and a classification head; the former computes the final displacements to best fit the proposed box onto the object, while the latter assigns the predicted class label. This model uses a loss function with two terms: one accounting for the classification error and one for the box regression error. The former takes the shape of a log loss over a binary classification problem; the latter is the smooth L1 loss for the box displacement regression.

2) SSD: SSD [13] belongs to the family of One-Stage detectors. To improve inference speed, even though sacrificing accuracy, features are directly computed on top of the backbone, without the aid of RoIs. Each layer extracts features at a different scale and uses them to calculate predictions; this helps achieve scale invariance. Predictions have the same form as FasterRCNN's. Box Regression values are computed with respect to a set of Default Boxes, which are similar to the Anchor Boxes used in FasterRCNN. This model uses a loss similar to that of FasterRCNN, with a softmax log loss accounting for the classification error and a smooth L1 loss taking care of the box regression one.

3) RetinaNet: RetinaNet's [14] architecture is straightforward: features coming from the backbone are fed into a regression head and a classification head, which predict, respectively, the bounding box regressions with respect to a set of anchors and the class label.

The uniqueness of this network is not in the architecture, but in the loss function that it uses, called Focal Loss. This loss assigns higher importance to hard-to-classify samples and, in turn, reduces the problem of the gradient being overwhelmed by the vast majority of easily classifiable samples belonging to the background class.

B. Backbones

The backbone is the section of a detection model that performs feature extraction. Usually, a backbone is composed of an Image Classification network stripped of its classification head. Here we briefly introduce the backbones considered in our study, which are VGG, ResNet, DenseNet, and MobileNet. As previously stated for the detectors, these models have been chosen for their general reliability in real-world scenarios.

1) VGG: VGG [10] is one of the first very deep Image Classification networks. Its design has been rather surpassed by fully convolutional approaches, but it still represents a relevant baseline. Its architecture borrows from AlexNet [30], with Convolutional Layers followed by Max Pooling and activation functions. The final fully connected layers are stripped, since the model only needs to perform feature extraction.

2) ResNet: ResNet [11] is considered to be the current gold standard both as an image classifier and as a backbone. ResNet first introduced the concept of Residual Blocks, shown in Fig. 5. This is a classic convolutional block with a skip connection that allows the gradient to flow backwards freely and minimizes the problem of the Vanishing Gradient. Let x be the block input and F(x) the output of the convolutional branch; then the residual block with skip connection computes y = x + F(x); this corresponds to the computational flow in Fig. 5.
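As an illustration of the y = x + F(x) structure (a simplification, not the exact bottleneck block used in ResNet50/101), a minimal residual block in PyTorch could look as follows.

```python
import torch
from torch import nn

class BasicResidualBlock(nn.Module):
    """Minimal residual block: y = x + F(x), with F made of two 3x3 conv layers."""

    def __init__(self, channels: int):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The skip connection lets gradients flow directly through the addition.
        return torch.relu(x + self.f(x))

block = BasicResidualBlock(64)
out = block(torch.randn(1, 64, 56, 56))  # output has the same shape as the input
```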

3) DenseNet: DenseNet [31] is considered a good Image Classifier, but it is not popular as a backbone. DenseNet takes the idea of skip connections a step further, with the Dense Block: a series of feature extraction blocks, each one connected to the following ones, allowing both backward gradient flow and forward information flow. This is particularly suitable when learning from small datasets. In comparison to ResNet, features from different depth levels are explicitly combined inside the Dense Block.

Fig. 5. ResNet’s residual block from [11].

4) MobileNet: MobileNet is an effort to optimize CNN-based Image Classifiers to make them run on mobile devices while minimizing the loss of accuracy. To achieve this, a number of strategies to reduce the parameter count have been employed. The convolution operations have been decomposed into Depthwise Separable Convolutions, where the operation against a C×N×N filter gets split into a convolution against a 1×N×N filter, applied per channel, followed by a convolution against a C×1×1 one. On top of that, MobileNet uses an Inverted Residual Block that has the same purpose as the one shown in Fig. 5, but is more efficient from the standpoint of computational resources.
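This decomposition can be written directly via the `groups` argument of `nn.Conv2d`; the sketch below is a generic illustration of a depthwise separable convolution, not MobileNetV3's exact block (which additionally uses inverted residuals, squeeze-and-excitation, and hard-swish activations).

```python
from torch import nn

class DepthwiseSeparableConv(nn.Module):
    """N×N depthwise convolution (one filter per channel) followed by a 1×1 pointwise conv."""

    def __init__(self, in_channels: int, out_channels: int, kernel_size: int = 3):
        super().__init__()
        self.depthwise = nn.Conv2d(
            in_channels, in_channels, kernel_size,
            padding=kernel_size // 2, groups=in_channels, bias=False,
        )
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))
```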

5) Feature Pyramid Network: A feature pyramid network (FPN) [32] is a building block designed to enhance the output of a feature extraction backbone with features that span the input across multiple scales. Features are computed across different scales by means of convolution operations; then they are merged, as shown in Fig. 6, in order to obtain a set of rich, multi-scale feature maps.

Fig. 6. FPN architecture from [32].

The FPN is very important for Object Detection networks, as multi-scale features improve performance when recognizing objects of different sizes in the same image.

V. EXPERIMENT DESIGN

In this section, we describe the experiments used to compare the different models and backbones discussed in Section IV. Additionally, we describe some experiments performed to assess the impact of pre-training on model performance.

A. Performance Evaluation of Models and Backbones

Each detection model has been combined with each of the four selected backbones to assess how this choice impacts accuracy and inference time. Backbones were all pre-trained on ImageNet and the experiments have been carried out on an Nvidia Titan V GPU.

Note that FasterRCNN and RetinaNet use FPNs, while SSD constructs multi-scale features directly on top of the backbone’s output.

Each model was trained on the same data; parameters were optimized with stochastic gradient descent (SGD) with Momentum, using Cosine Annealing with Linear Warmup as the learning rate scheduling policy. This policy, shown in Fig. 7, stabilizes training in the initial epochs and allows for fine optimization in the last ones.
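The exact warmup settings are not reported; the sketch below shows one way to build this schedule with PyTorch's built-in schedulers (`LinearLR` chained into `CosineAnnealingLR` via `SequentialLR`, available in recent PyTorch releases), with illustrative values rather than the paper's hyperparameters.

```python
import torch
from torch.optim.lr_scheduler import CosineAnnealingLR, LinearLR, SequentialLR

model = torch.nn.Linear(10, 2)   # placeholder model
epochs, warmup_epochs = 60, 5    # illustrative values, not the paper's settings

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = SequentialLR(
    optimizer,
    schedulers=[
        LinearLR(optimizer, start_factor=0.1, total_iters=warmup_epochs),  # linear warmup
        CosineAnnealingLR(optimizer, T_max=epochs - warmup_epochs),        # cosine decay
    ],
    milestones=[warmup_epochs],
)

for epoch in range(epochs):
    optimizer.step()   # placeholder for the actual per-batch training updates
    scheduler.step()   # step the learning rate once per epoch
```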

Fig. 7. Example of Cosine Annealing with Linear Warmup learning rate scheduling.

Table II shows the different hyperparameters considered for each Backbone/Detector pair. The number of epochs has been selected so that each model was trained to convergence. SSD was trained with a larger batch size, as its smaller memory footprint allowed it.

We employ McNemar's test, as suggested in [33], to assess the statistical difference between the performance of pairs of models. This test is applied to a 2×2 contingency table, of the form shown in Table III, where, given a binary classification test, a and d count the samples on which the two models agree, while b and c count those on which they do not. In the Object Detection case, each model can either predict a bounding box for a particular ground truth or not, hence:

● a is the number of ground truths predicted by both models.

● d is the number of ground truths missed by both models.

● b is the number of ground truths predicted by the first model but missed by the second one.

● c is the number of ground truths missed by the first model but predicted by the second one.

McNemar's test statistic makes use of the formula shown in (1)

$$\chi^2 = \frac{(|b - c| - 1)^2}{b + c}. \qquad (1)$$

The null hypothesis is $H_0: p_b = p_c$; the alternative is $H_1: p_b \neq p_c$, where $p_b$ and $p_c$ denote the theoretical probabilities of occurrences in the corresponding cells. In case of rejection of the null hypothesis, it is possible to affirm that the two models are significantly different.
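A minimal sketch of this test is shown below, assuming SciPy is available for the chi-squared survival function; b and c are the disagreement counts defined above, and the statistic follows the continuity-corrected form of (1).

```python
from scipy.stats import chi2

def mcnemar_p_value(b: int, c: int) -> float:
    """Continuity-corrected McNemar statistic and its p-value (1 degree of freedom)."""
    statistic = (abs(b - c) - 1) ** 2 / (b + c)
    return chi2.sf(statistic, df=1)

# Example: with b = 40 and c = 15 the null hypothesis p_b = p_c is rejected at alpha = 0.05.
print(mcnemar_p_value(b=40, c=15))
```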

Applying McNemar's test to an Object Detection algorithm is not straightforward, as each input image may contain multiple objects and each of these objects may or may not be recognized correctly. In our case, we considered an object recognized if the model predicted a box with the following characteristics: intersection over union (IoU) with the object's ground truth box greater than 0.5; confidence score greater than 0.5; the correct label for the object. Each prediction can match only one object. The IoU for two boxes A and B, as defined in (2), is the ratio between the area of the intersection of the two boxes and the area of their union

$$\mathrm{IoU}(A, B) = \frac{|A \cap B|}{|A \cup B|}. \qquad (2)$$

Thus, its value is 1 when the two boxes perfectly overlap and shrinks towards 0 as the overlap decreases, for instance when the boxes move away from each other or when one is much smaller than the one containing it.
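The matching rule can be summarized by the short sketch below, where boxes are (x1, y1, x2, y2) tuples; the helper names are ours and not from the paper.

```python
def iou(box_a, box_b) -> float:
    """Intersection over Union of two (x1, y1, x2, y2) boxes, as in (2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def is_recognized(pred_box, pred_label, pred_score, gt_box, gt_label) -> bool:
    """An object counts as recognized if IoU > 0.5, score > 0.5, and the label matches."""
    return iou(pred_box, gt_box) > 0.5 and pred_score > 0.5 and pred_label == gt_label
```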

TABLE II MODELS AND HYPERPARAMETERS USED FOR TRAINING

TABLE III EXAMPLE OF CONTINGENCY TABLE

B. Pre-Training Impact

A secondary aspect we wanted to test is the impact of pre-training on performance for the following two different scenarios:

    1) Starting from a model completely pre-trained on the common objects in context (COCO) [34] dataset.

    2) Using a backbone pre-trained on a generic insect dataset instead of ImageNet.

We restricted this experiment to ResNet50-FPN FasterRCNN, as pre-trained COCO weights were already available. COCO is a widely known dataset, annotated for different computer vision tasks, one of which is Object Detection. To train the insect-specific backbone, we have used a custom dataset generated by combining the images from our 3-class dataset with a filtered version of IP102 [15], from which we removed insects with too low a sample count, duplicates, and misplaced images. To this we have added some background images that contained only plants or flowers, without any insect. All the steps to ensure dataset fairness, described in Section III, have been applied to this data as well. To account for the high imbalance of this dataset, we have used a Weighted Cross Entropy loss function, assigning higher importance to less frequent classes. For these tests, we have used the same hyperparameters as ResNet101-FPN FasterRCNN, which are listed in Table II.

VI. RESULTS

In this section, we present the results of the experiments described in Section V. Furthermore, we compare our results to the ones obtained in other studies available in the scientific literature.

A. Comparison Across Models

Table IV shows the performance of each tested model-backbone combination. The capability of the models to correctly predict the bounding boxes and to assign the labels has been evaluated by means of mean average precision (mAP). In Object Detection, though, the calculation of this parameter may introduce some ambiguity. Therefore, we have used the $AP^{IoU=.50}$ as specified in the COCO challenge [34]. Another relevant aspect that we have evaluated is the computational requirements, as we target smart agriculture, where low-power IoT devices are deployed in the field and edge computation may be preferable to avoid frequent, energy-expensive data transfers over the network. To evaluate the computational requirements, we have measured the inference speed in Frames per Second, both on GPU (an Nvidia GeForce RTX 2080) and on CPU (an Intel Xeon Silver 4116). These measurements do not provide a direct evaluation of the performance on IoT nodes, but rather a relative ordering of the model-backbone combinations in terms of computational requirements. These tests have been performed by averaging the inference time of 100 iterations without batching; the inference operation includes any preprocessing needed by the model, which usually consists of normalization and resizing of the input images. The input images were randomly generated, with a size of 1280×720 pixels.
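A minimal sketch of such a latency measurement under the stated protocol (100 un-batched runs on randomly generated 1280×720 inputs, preprocessing included) is shown below; the exact harness used by the authors is not disclosed, and the sketch assumes a torchvision-style detector that accepts a list of image tensors.

```python
import time
import torch

@torch.no_grad()
def measure_fps(model, device: str = "cpu", iterations: int = 100) -> float:
    """Average un-batched inference time over random 1280x720 RGB inputs."""
    model.eval().to(device)
    start = time.perf_counter()
    for _ in range(iterations):
        image = torch.rand(3, 720, 1280, device=device)  # random image, values in [0, 1]
        model([image])  # torchvision detectors take a list of images and normalize/resize internally
        if device == "cuda":
            torch.cuda.synchronize()
    return iterations / (time.perf_counter() - start)
```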

As shown in Table IV, FasterRCNN and RetinaNet perform better than SSD, with small changes in mAP among different backbones. SSD struggles with less powerful backbones like MobileNet and VGG16, but it obtains higher scores when paired with more demanding backbones. There is no clear winner in terms of raw mAP, as the scores of the best performing model-backbone combinations are very similar. However, these architectures do not support interpretability of results, as it is not possible to open the black box and draw cause-effect indications. This is particularly true when we keep the detector unchanged and only switch the backbone. For these reasons, the optimal architecture must be found empirically through a trial-and-error approach.

TABLE IV PERFORMANCE OF THE COMPARED MODELS IN TERMS OF MEAN AVERAGE PRECISION ON THE TEST SET AND FRAMES PER SECOND AT INFERENCE TIME. FALSE POSITIVE COUNTS FOR CLASS POPILLIA JAPONICA ARE ALSO SHOWN

Regarding computation speed, SSD is the fastest on CPU; this does not come as a surprise, as this model is designed to be fast and it is especially suitable for applications where the available hardware has no GPU acceleration. On GPU, SSD is still fast, but, surprisingly, FasterRCNN with the MobileNetV3 backbone is computationally lighter, with a noticeably higher FPS than all the others. This follows from the MobileNetV3 model being specifically optimized to reduce execution time; this optimization is proportionally more relevant for FasterRCNN than for the other detectors. Whether this depends on how the computation is allocated on the GPU or on some specific backbone-detector interaction is difficult to say and should be the object of further investigation, outside the scope of this work.

Table IV also shows the number of false positives produced on the test set for the class P. japonica. FasterRCNN is the model showing the fewest false positives; in particular, when this model is associated with the MobileNetV3 backbone, the number of false positives is the lowest among all models. If we consider these numbers with respect to the overall number of non-PJ insects present in our dataset, which is 530, we see that the best configuration has a false positive ratio of 2.26%. Note that this is not a formally precise false positive ratio, as the concept of negative sample is ill-defined for the problem of Object Detection.

Fig. 8 shows, for each model, the influence of a threshold on the prediction confidence score on the false positive count and on the mAP. Every prediction below the threshold is automatically ignored. For the best performers, mAP is not greatly reduced even at higher threshold values, while even small thresholds can still have a great impact on the number of false positives for models that produce many of them. This type of plot is really helpful in reducing false alarms while preserving detection performance.
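Applying such a threshold at inference time amounts to discarding low-confidence predictions; a minimal sketch on a torchvision-style output dictionary (keys `boxes`, `labels`, `scores`) is shown below.

```python
def apply_confidence_threshold(prediction: dict, threshold: float = 0.5) -> dict:
    """Keep only the boxes/labels/scores whose confidence exceeds the threshold."""
    keep = prediction["scores"] > threshold
    return {key: value[keep] for key, value in prediction.items()}
```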

Fig. 8. False positives for the PJ class and mAP variation for different prediction confidence thresholds.

Fig. 9 shows the trend of the training loss for our three models; we can notice how it smoothly converges in all cases but FasterRCNN with the MobileNetV3 backbone. The loss increase, however, seems to only marginally affect the mAP score during validation, as shown in Fig. 10(a). This tells us that a thoroughly calibrated training procedure may lead the loss of this model, which already is one of the best, to a plateau as well, possibly with better performance.

Fig. 9. Training loss for the three detection architectures with different backbones.

Fig. 10. Validation mAP for MobileNetV3 and DenseNet169.

Fig. 10 shows the validation mAP score for our models with the DenseNet169 and MobileNetV3 backbones. As shown in Fig. 10(b), DenseNet169 provides a smooth increase in mAP for all architectures, with minimal variance among training epochs and detectors, whilst the MobileNetV3 plot is noisier and the results among the detectors are very different. The equivalent plot for ResNet101 is closer to DenseNet169, while the one for VGG16 is closer to MobileNetV3; this suggests that deeper and more demanding backbones have more consistent behavior independently of the specific detection model, while less demanding ones have trouble with light detectors like SSD.

Fig. 11 shows the McNemar's Test p-values for each model pair. Considering the conventional significance level of α = 0.05, we can see that the majority of the models are significantly different from one another, meaning that, even with similar performances, they have different weaknesses and strengths. This makes the choice of the best model a non-trivial decision. Furthermore, for each detector aside from SSD, the mAP score remains similar across the different backbones, suggesting that the backbone choice is not that critical.

Given the performance results together with McNemar's test outcome, we can confidently say that a FasterRCNN detector with a MobileNetV3 backbone is the best choice for our case study, as it yields top mAP coupled with the highest speed on GPU. Moreover, its false positive count is the lowest among competitors; this result is particularly relevant in this application.

When computational power is a strong constraint and no hardware acceleration is available, SSD with the MobileNetV3 backbone should be preferred instead, as it is reasonably fast without a significant drop in mAP. We comment that, even though an SSD with a VGG16 backbone is almost 3 times faster on CPU, the loss in detection accuracy does not trade off nicely with the improvement in latency. Yet, this combination can be used, accepting the reduced detection performance, in systems that are highly constrained in computational resources and/or energy.

B. Effects of Pre-Training

Table V shows the performance of FasterRCNN with the ResNet50 backbone in the three cases described in Section V-B. For this evaluation, we only consider mean average precision, since the architecture is constant; thus, the FPS scores of the different solutions are constant too.

The highest mAP is reached when the model is fully pre-trained on COCO; pre-training the backbone on the insect dataset presented in Section V-B results, instead, in the lowest score. This is consistent with the loss trends of Fig. 12, with coco being the lowest curve and insects the highest. Results suggest that using an ad-hoc dataset to pre-train the backbone harms the overall Transfer Learning process instead of improving it. Whether this is due to the relatively small size of the dataset used or is inherent in the usage of less diverse data needs to be investigated further.

Fig. 11. McNemar’s Test on the compared models. Each cell contains the p-value for the specific pair. Significance level is α = 0.05.

TABLE V PERFORMANCE OF FASTERRCNN-RESNET50, WITH DIFFERENT PRE-TRAINING DATASETS, EXPRESSED IN TERMS OF MEAN AVERAGE PRECISION ON THE TEST SET

As shown in Fig. 13, no statistical difference between the model pre-trained on COCO and the one pre-trained on ImageNet is present, suggesting that pre-training the detection head has no significant benefits, at least in this case. The model pre-trained on generic insect images, however, is significantly different from the others. This further supports the thesis that pre-training on specific data significantly affects the learning, possibly for the worse. Hence, sticking to common Transfer Learning procedures seems to be the safer choice.

C. Visualization of Importance

In applications where the solution is implemented by means of very deep models, it is also important to assess the prediction quality in a human-readable form. Guided Gradient-weighted Class Activation Mapping (Guided GradCAM) [35] is an approach that combines the results from Guided Backpropagation [36] and GradCAM [35] to form a visually understandable image of the areas of major importance for the prediction.

Guided Backpropagation relies on the intuition that the gradient of the relevant output (e.g., the class score) w.r.t. the input image is a good indicator of which areas of the image are more relevant for the prediction. It is called guided because only positive gradients, corresponding to positive activations, are back-propagated. Instead, in GradCAM, the activation of the deepest convolutional layer of a CNN, properly weighted by the gradient of the relevant output w.r.t. that layer's activations, is interpreted as a heatmap of the relevant regions of the input. Guided GradCAM is the product of these two pieces of information.
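For a plain classifier, the GradCAM part can be reproduced with a couple of hooks, as sketched below; this is a simplification of what the paper uses, since for a detector obtaining the relevant output score of a specific box is more involved, and Guided Backpropagation additionally requires modifying the ReLU backward passes.

```python
import torch

def grad_cam(model, target_layer, image, class_index):
    """Heatmap of the regions most relevant to `class_index` for a classification model."""
    activations, gradients = {}, {}
    fwd = target_layer.register_forward_hook(lambda m, i, o: activations.update(value=o))
    bwd = target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(value=go[0]))

    score = model(image.unsqueeze(0))[0, class_index]
    model.zero_grad()
    score.backward()
    fwd.remove(); bwd.remove()

    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)    # per-channel gradient averages
    cam = torch.relu((weights * activations["value"]).sum(dim=1))  # weighted sum over channels
    return cam / (cam.max() + 1e-8)                                # normalized heatmap
```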

In the case of insect recognition, these visualizations are particularly useful if an expert wants to evaluate the model based on the relevance of the highlighted insect features. In Fig. 14 we can see how the regions of highest importance are located on the head of the Popillia japonica and around its white hairs, a distinctive feature of this species.

D. Comparison With Other Studies

Our results are qualitatively similar to those of other works in the scientific literature, such as [8] and [24]. The former reached a 75.46% average mAP with a peak of 90.48% on the best predicted class; the latter topped at 89.22% mAP. We have reached a maximum mAP of 93.3%.

However, we point out that, even though the two mentioned studies consider some of the models that we have also included in our work, this comparison can only be qualitative for the following reasons:

1) The considered datasets are different; this holds in particular for [8], whose data come from traps. To be more precise, we believe that our dataset is inherently more challenging. For this reason, we can consider our results to be at least as good as the ones reported by the cited works.

2) The two mentioned studies have not disclosed their approach to mAP calculation. Therefore, the reported numbers may not have exactly the same meaning as in our paper.

3) In [24], the adopted backbone and the procedure used to train SSD and FasterRCNN are not specified.

4) In [8], when ResNet101 and FasterRCNN are considered, the reported results are significantly worse than ours, with a top mAP of 71.62%, as opposed to our 92.14%. This strengthens the belief that the adopted procedures are inherently different.

VII. CONCLUSIONS AND FUTURE WORK

Fig. 12. Training loss (left) and validation mAP (right) for FasterRCNN ResNet50 with different pre-training.

Fig. 13. McNemar’s Test on FasterRCNN ResNet50 models with different pre-training datasets. The first name is the dataset used to pre-train the whole model, the second is the one used for the backbone. Significance level is α = 0.05.

In this paper, we have evaluated different combinations of models and backbones for detecting a pest insect in images that are not obtained in controlled environments. Our results demonstrate that, at least for insects similar to Popillia japonica, this task can be performed with high accuracy, even by using general-purpose models. Not only has detection performance been estimated, but also inference speed, which indicates which of the tested models is the least demanding in terms of computational resources. The best detection performance has been reached by the combination RetinaNet-ResNet101 (mAP = 93.3%), but, on average, FasterRCNN was the best performer. The model with the best throughput on GPU is FasterRCNN paired with a MobileNetV3 backbone (FPS = 60.92). The best throughput on CPU was obtained by the combination SSD-VGG16 (FPS = 4.27). Given the statistical similarity between some of the models, we think that the critical part is the choice of the overall detection architecture rather than of the specific backbone. SSD is an exception to this, as the backbone choice proved to play a significant role in the final results.

Fig. 14. Example of visualizing image importance through Guided GradCAM for a FasterRCNN model with a ResNet50 backbone.

Additionally, our experiments show that pre-training on ImageNet is a suitable Transfer Learning setup for insect recognition tasks, and that pre-training on small task-related datasets seemingly has no benefits. Overall, we consider FasterRCNN with a MobileNetV3 backbone a strong baseline for insect detection, given both the good performance and the high inference speed on CPU and GPU; moreover, this model produced the lowest number of false positives for the pest insect class, which is of particular importance for this type of application.

We conclude that widely adopted generic object detection architectures are well suited for the recognition of beetle-like insects and, realistically, for insects in general. The real advance in general insect recognition would probably come from the construction of bigger datasets, with a large number of species and images, rather than from the search for particular architectures which may, in the end, be too task-specific.

Future work should investigate the development of optimized models that take the found optimum as a baseline and make task-specific improvements to the architecture. A few examples are: addressing hardware and embedded-system resource constraints to port the solution to mobile devices, leveraging known characteristics of the target pest insect (inductive bias), and working on methods aimed at improving the detection of small insects as well as dealing with bad lighting conditions and harsh environments. In the spirit of deep learning, we would also envisage the generation of a huge image dataset, possibly containing different species. This might be achieved by considering collaborative methods where citizens contribute by taking pictures and delivering them to a cloud validation process before they enrich the database.
