
    Fusion of Region Extraction and Cross-Entropy SVM Models for Wheat Rust Diseases Classification

    2023-12-15 03:57:12 · Deepak Kumar, Vinay Kukreja, Ayush Dogra, Bhawna Goyal and Talal Taha Ali
    Computers, Materials & Continua, 2023, Issue 11

    Deepak Kumar, Vinay Kukreja, Ayush Dogra★, Bhawna Goyal and Talal Taha Ali

    1Department of Computer Science & Engineering, Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, 140401, India

    2Department of Electronics & Communication Engineering, Chandigarh University, Punjab, 140413, India

    3Department of Dental Industry Techniques, Al-Noor University College, Nineveh, 41018, Iraq

    ABSTRACT Wheat rust diseases are among the major fungal diseases of wheat, causing yield quality losses of 15%-20% every year. Wheat rust diseases are identified either by experienced evaluators or by computer-assisted techniques. Experienced evaluators take time to identify the disease, which is highly laborious and costly. If wheat rust diseases are predicted at early development stages, fungicides can be sprayed earlier, which helps to increase wheat yield quality. To address the limitations of experienced evaluators, a combined region extraction and cross-entropy support vector machine (CE-SVM) model is proposed for wheat rust disease identification. In the proposed system, a total of 2300 secondary-source images were augmented through flipping, cropping, and rotation techniques. The augmented images were preprocessed by histogram equalization. The preprocessed images were then applied to region extraction convolutional neural networks (RCNN): the Fast-RCNN, Faster-RCNN, and Mask-RCNN models for wheat plant patch extraction. Different layers of the region extraction models construct a feature vector that is later passed to the CE-SVM model. As a result, the Gaussian kernel function in CE-SVM achieves a high F1-score (88.43%) and accuracy (93.60%) for wheat stripe rust disease classification.

    KEYWORDS Wheat rust diseases; agricultural; region extraction models; intercropping; image processing; feature extraction; precision agriculture

    1 Introduction

    Wheat is one of the most important staple crops in the world [1], providing a major source of food for billions of people. However, the growth and productivity of wheat crops are often threatened by a variety of diseases, including fungal, bacterial, and viral infections [2]. These diseases can cause significant damage to plants, leading to reduced yields and decreased grain quality. Generally, professionals are responsible for making decisions about the need to use pesticides. There are several methods for recognizing wheat diseases [3,4], including visual inspection, laboratory analysis, and the use of digital tools such as image analysis. Image analysis, in particular, has gained increasing attention as a promising approach for the rapid and accurate recognition of wheat diseases. By leveraging the power of computer vision and machine learning [5], image analysis can assist in the early detection and diagnosis of diseases, which helps to reduce yield quality losses. Over the past few years, automated solutions that incorporate artificial intelligence techniques and smartphone applications have been used in automated plant protection [6]. Among all types of wheat diseases, wheat rust diseases are a major threat to the global food supply, and their recognition is crucial for efficient crop management. Wheat rust [7,8] is a plant disease caused by fungal pathogens belonging to the genus Puccinia. There are three main types of wheat rust: stem rust, leaf rust, and stripe rust. These diseases can cause significant yield loss and reduce the quality of wheat grain. Stem rust is the most damaging, as it can kill the entire plant [9], while leaf rust and stripe rust mainly affect the leaves and stems. To control wheat rust diseases, farmers [10] can spray different fungicides at the flowering stage. Image processing, image segmentation, machine learning, and deep learning have become increasingly popular tools in the field of plant disease recognition. Image processing techniques are used to preprocess and enhance the quality of the images captured of the wheat plant. Image segmentation [11] is then used to separate the infected parts of the plant from the healthy parts. Machine learning and deep learning algorithms are then trained on these segmented images to accurately recognize and classify the type of rust disease present in wheat plants.

    1.1 Wheat Rust

    Rust is a fungal disease whose spread is governed by cultivar resistance and environmental conditions [12]. Wheat rust diseases are caused by fungal pathogens that can be transmitted in a few different ways. The main mode of transmission for wheat rust is through the airborne spores of the fungal pathogen. When the spores are released from infected wheat plants, they can be carried by wind currents to infect other nearby wheat plants. There are three main types of wheat rust diseases: stem rust, stripe rust, and leaf rust.

    1.1.1 Stem Rust

    This disease is caused by the fungus Puccinia graminis f. sp. tritici, known as PGT [13], which attacks the stems, leaves, and spikes of wheat plants. It can cause significant damage to the crop, leading to yield losses of up to 100%. PGT typically develops in warm, moist environments, and symptoms of infection are typically expressed as masses of brick-red urediniospores on the leaf sheaths.

    1.1.2 Stripe Rust

    This disease is caused by the fungus Puccinia striiformis (PST), which attacks the leaves and spikes of wheat plants. It can cause yellow stripes on the leaves and lead to yield losses of up to 60%. The PST pathogen germinates in temperate regions with cool and wet weather [14]. Yellow-orange spores are produced as the pustules mature. As the disease progresses, the tissues surrounding the pustules become brown and dry.

    1.1.3 Leaf Rust

    Leaf rust is caused by the fungus Puccinia triticina (PT) [14], which attacks the leaves of wheat plants. Yellowish-orange pustules appear on the leaves, leading to defoliation and yield losses of up to 50%. The pathogen grows in areas with mild temperatures and high humidity. A summary of the rust diseases along with their pathogens is presented in Table 1.

    Table 1: Summary of rust diseases

    1.2 Deep Learning-Based Image Recognition Process

    One way to recognize the type of rust disease is to develop a deep learning-based image recognition process, which involves breaking down the disease [2,15] into its parts and identifying the visual features that distinguish it from other diseases or healthy plants. The recognition process involves image preprocessing, image segmentation, feature extraction, and image classification phases. The phases of the recognition process are described as follows:

    1.2.1 Image Processing

    It is an important step in image recognition tasks. It involves applying a set of techniques to improve the quality of images. Image preprocessing techniques are applied to improve the visibility of the features [16-18] in the image and to remove any unwanted noise to produce enhanced images.

    1.2.2 Image Segmentation

    Image segmentation is the process of dividing an image into multiple segments or regions of interest, and it is a common technique used in computer vision for disease detection in crops [19]. One-stage segmentation models such as YOLACT [6], RetinaNet [20], and YOLOV5 [21] can simultaneously perform both feature extraction and object detection in a single pass. These models have shown promising results in detecting wheat rust diseases in images. Two-stage segmentation models, such as region-based convolutional neural networks (RCNN) and their variants Mask-RCNN [12], Fast-RCNN [16], and Faster-RCNN [17], separate feature extraction and object detection into two stages. The first stage generates region proposals, and the second stage classifies each proposal as either a disease or a non-disease infection. These models typically require more computational resources, but they can provide more accurate and fine-grained segmentation results compared to one-stage models. Hence, one-stage and two-stage segmentation techniques allow for highly automated and accurate recognition of wheat rust diseases, contributing to improved crop management and increased food security.

    • Simple segmentation: It is a basic segmentation technique that divides an image into different regions based on simple features [17]. This type of segmentation is useful for computer vision object detection and tracking tasks.

    • Stage-wise segmentation: Stage-wise segmentation finds multiple objects and assigns each object to its object class [18,19]. Recognition results show that the large variety of object categories in real scenarios can be distinguished, and that instances of objects belonging to the same class, which are subject to intra-class appearance variation, determine the computing cost of the segmentation algorithm. Efficient real-time operation implies lower computational expense, such as reduced memory/storage needs and reduced CPU load in seconds.

    • Process of stage-wise segmentation: Instance segmentation is a two-step process. First, it takes an image as input. After inputting an image, the region proposal network (RPN) aligns the region of interest (ROI) in the image. The second step of instance segmentation is the classification process. The classification and localization [20] of an image are achieved through a fully connected network layer. With the help of the softmax function, multiple classes in the image can be classified. The regression in the fully connected network (FCN) layer produces the bounding boxes of each classified class in an image. With the development of deep learning techniques, many frameworks based on instance segmentation have been developed. The process of instance segmentation in terms of classification and localization is shown in Fig. 1.

    Figure 1: Process of stage-wise segmentation
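The two-step flow described above (RPN proposals, then softmax classification) can be illustrated with a toy NumPy sketch; the boxes, raw class scores, and class names below are invented for illustration and are not from the paper's models:

```python
import numpy as np

def softmax(scores):
    # Numerically stable softmax over the last axis.
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy stand-in for the two-stage flow: the RPN yields ROI boxes,
# and the second stage yields raw class scores per ROI.
roi_boxes = np.array([[10, 10, 60, 60], [30, 40, 90, 120]])  # (x1, y1, x2, y2)
class_scores = np.array([[2.0, 0.5, 0.1],                    # background, rust, healthy
                         [0.2, 0.3, 3.0]])

probs = softmax(class_scores)      # per-ROI class probabilities
labels = probs.argmax(axis=1)      # predicted class index per ROI
```

The regression branch would then refine each `roi_boxes` entry for its predicted class; here the boxes are kept as-is for brevity.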

    1.2.3 Feature Extraction

    This involves identifying the visual features that distinguish the disease from other parts of the image. These can include texture, color, shape, and size features. It is often necessary to extract features from segmented regions after performing image segmentation [21,22]. Feature extraction involves identifying and quantifying certain properties of the image segments that can be used for various purposes, such as object recognition and classification.

    1.2.4 Image Classification

    It is the process of categorizing an image into a predetermined set of classes. This involves analyzing the image and identifying its distinguishing features [23-25], such as color, texture, and shape, to determine which category it belongs to. Once these features have been extracted, they can be used as input to a machine learning or deep learning model to perform image classification. The purpose of image classification after feature extraction is to predict the category or class of a new image based on the extracted features.
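As a sketch of this feature-then-classifier step, the snippet below trains scikit-learn's SVC with a Gaussian (RBF) kernel, the kernel family the paper's CE-SVM uses; the 2-D feature vectors and class labels are synthetic, invented purely for illustration:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical 2-D feature vectors (e.g., mean hue and texture energy)
# for two classes: 0 = healthy, 1 = stripe rust. Values are illustrative.
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.15, 0.25],
              [0.8, 0.9], [0.9, 0.8], [0.85, 0.95]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="rbf")            # Gaussian kernel SVM
clf.fit(X, y)                      # train on the extracted features
pred = clf.predict([[0.12, 0.18], [0.88, 0.92]])
```

In the actual pipeline, `X` would be the feature vectors produced by the region extraction models rather than hand-picked numbers.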

    1.3 Problem Definition

    Color contrast and noise variance are important factors that create issues in the image recognition process. Image background complexity describes the characteristics of an image. There are many aspects, such as different types of noise and color contrast, which often resemble the area of interest itself [26,27]. The presence of undesirable or random fluctuations in pixel values can damage image quality and generate noise variance in images. Color contrast issues are caused by a lack of diversity in color intensity, which can result in a muted appearance [28]. These issues need to be handled in a better way.

    1.4 Highlights of the Study

    • This study designs a combined approach of region extraction and the CE-SVM model to improve the classification accuracy of wheat rust disease recognition. The CE-SVM model performs multi-class classification by classifying different types of rust diseases effectively.

    • Using an augmented rust disease dataset ensures complex feature learning, which improves rust disease differentiation.

    • The study offers early identification and quick disease control strategies that increase the crop production rate.

    1.5 The Major Contribution of This Study

    The major contributions of this study are described as follows:

    • The current study improves classification accuracy by recognizing wheat rust diseases through the integration of region extraction and CE-SVM models.

    • The application of CE-SVM shows the flexibility of machine learning for multiclass rust disease classification and helps accurate decision-making in farming environments.

    • The proposed model helps minimize crop losses, optimize resource allocation, and support sustainable agricultural practices.

    1.6 Paper Organization

    The structure of this paper is as follows: Related work is described in Section 2. The proposed method, along with materials and methods, is described in Section 3. The results of the system assessment are presented in Section 4. Lastly, the conclusion is given in Section 5.

    2 Related Work

    Crop diseases greatly affect local crop production in agricultural environments. The first group of methods uses handcrafted features to distinguish the symptoms of disease. The second group of methods uses digital imaging to diagnose diseases as well as examine diseased components from a microscopic to a regional scale. In addition, estimates of various types of crop leaf characteristics are also considered when assessing the magnitude of leaf damage to the plant. There have been several recent studies on recognizing wheat rust diseases using image segmentation models. These studies have used various deep-learning techniques such as YOLOV5, YOLACT, and Mask-RCNN, among others. They demonstrate the effectiveness of using image segmentation models for recognizing wheat rust diseases and highlight the potential of these techniques for practical applications in agriculture.

    The authors [2] detected Fusarium head blight (FHB) wheat disease and its severity in wheat spikes through the Mask-RCNN technique. With the help of the Mask-RCNN technique, a detection rate of 77.76% was achieved on wheat spikes and diseased areas of FHB.

    The study [3] aimed to improve the accuracy of ear segmentation in winter wheat crops at the flowering stage. The researchers [4] used semantic segmentation, a type of deep learning, to identify and segment the ears in the images. The results showed that the semantic segmentation method improved the accuracy of ear segmentation compared to traditional methods. The improved accuracy could be useful for breeders and agronomists in monitoring crop growth and yield. The detection of FHB wheat disease in wheat spikes has also been performed with the Mask-RCNN technique [5]. A total of 166 images were captured at the University of Minnesota. The Mask-RCNN technique achieved 99.80% in 27,000 iterations for FHB disease recognition.

    The severity of northern leaf blight (NLB) disease in maize leaves has also been studied [6]. A total of 900 images were used for training and validation, and 300 images were used for testing. Out of the 300 test images, NLB disease was found in 296 images. With the help of YOLACT++, an MIoU of 84.91% and a recall of 98.02% were achieved for NLB disease in maize leaves.

    Wheat heads have been detected through the YOLOV4 model [7]. With the help of YOLOV4, an accuracy of 94.5% was achieved in a real-time environment. For the detection of spikes in wheat, the YOLOV5 model has been used [8]. This model achieves a 94.10% average accuracy rate for spike detection in the wheat plant.

    The FHB disease and its severity in wheat spikes have been detected through the Mask-RCNN technique [9,11]. With the help of the Labelme data annotation technique, a total of 3754 wheat spike sub-images were annotated. Throughout the experimentation, the Mask-RCNN model achieved 77.16% and 98.81% for wheat spikes and FHB disease, respectively. For wheat ear counting in a complex background, two models, Faster-RCNN and RetinaNet, have been used [10]. Among the total images, 365 images at the filling stage and 350 at the mature stage were used for experimentation. During experimentation, the RetinaNet model (97.22%) achieved a higher R² after transfer learning compared to Faster-RCNN (87.02%).

    The macro disease index of wheat stripe rust has been calculated through the Segformer model at the autumn stage [12]. The segmentation rate for stripe rust disease was effectively increased with a data augmentation technique, with which an F1-score of 86.60% was obtained for Segformer on the wheat stripe rust macro disease index.


    Wheat stripe rust disease has been identified through UAV images [20]. With the help of the PSPNET model, a generalization vector of stripe rust disease was calculated. The PSPNET model was compared with SVM and U-Net models and achieved higher classification accuracy (98%) than the other models.

    The main aim of the study [21] was to evaluate the levels of damage caused by Fusarium head blight in wheat crops using an improved YOLOV5 computer vision method. The researchers improved the YOLOV5 algorithm, a popular object detection tool, to accurately identify and classify infected wheat heads in digital images. The improved YOLOV5 method was tested on a dataset of wheat head images, and the results showed that it was able to accurately assess the levels of damage caused by the disease.

    In the study [22], a deep learning model called Mask-RCNN was used to detect Wheat Mosaic Virus (WMV) in wheat images. The model was trained on a dataset of infected and healthy wheat plant images and was able to detect the virus with high accuracy. With the Mask-RCNN implementation, wheat leaves and mosaic virus disease were detected with 97.16% accuracy.

    The authors [23] proposed a new network architecture called Automatic Tandem Dual Blendmask Networks (AT-DBMNet) to automatically diagnose the FHB severity level by analyzing images of wheat spikes. The AT-DBMNet architecture consists of two sub-networks, each of which uses a different type of attention mechanism to weigh the importance of different parts of the image. The results of the study showed that AT-DBMNet outperformed other state-of-the-art methods in terms of accuracy and computational efficiency, demonstrating the potential of this approach for improving FHB diagnosis in wheat crops. The authors [24] collected images of wheat spikes infected with loose smut and used MRCNN to analyze the images and predict the severity of the disease. They compared the results from MRCNN with those from manual inspection and found that MRCNN had high accuracy, with a coefficient of determination (R²) of 0.93 and a root mean squared error (RMSE) of 4.23%. The authors concluded that MRCNN is a promising method for quantifying the severity of loose smut in wheat crops, offering an efficient and accurate alternative to manual inspection. The authors [29] used an RCNN algorithm to detect and classify wheat aphids in images, which was improved with the addition of a mask-scoring module. Parallel processing allowed for real-time analysis of multiple images. The technique was tested on a dataset of wheat plant images and demonstrated high accuracy in identifying the presence of wheat aphids and their severity.

    3 Proposed Work

    The novel wheat rust detection model consists of image acquisition, data augmentation, histogram equalization, and four region classifier models. The main aim of the wheat rust detection model is to classify the type of rust in the wheat plant with different region extraction models. The overall flow of the wheat rust detection model is shown in Fig. 2. The wheat rust model consists of six phases: data acquisition, data augmentation, image enhancement, dataset annotation, region extraction models, and a hybrid classifier for wheat rust disease classification.

    Figure 2: Novel wheat rust disease detection model

    3.1 Data Acquisition

    This is the first step of wheat rust disease recognition. There are three types of wheat rust, namely, yellow rust, black rust, and brown rust. Yellow rust occurs as stripes on the wheat plant and is therefore called stripe rust. Brown rust occurs on wheat leaves. Black rust develops on the stems of wheat plants. The dataset was collected from GitHub, Kaggle, and UCI repository secondary sources [17,30-32]. The dataset details from the different secondary sources are shown in Table 2.

    Table 2: Detailed description of the dataset

    The wheat-healthy and rust disease datasets were collected from Kaggle, GitHub, and other internet sources [30,32]. With the help of these secondary sources, 756 images of yellow rust, 594 images of brown rust, 381 images of black rust, and 621 wheat-healthy images were gathered. Samples of the wheat images gathered from secondary sources are shown in Fig. 3.

    Figure 3: Samples of wheat gathered images

    3.2 Data Augmentation

    The quantity of data supplied during training heavily influences how accurately deep learning models make predictions [20]; prediction accuracy improves with a large amount of data. With the help of secondary dataset sources such as Kaggle, GitHub, and other sources, a total of 2352 wheat images were gathered. The goal of data augmentation is to artificially increase the diversity of the training data, making the model more robust to variations in the input data. For example, in object detection, data augmentation can add rotation, scaling, and translation to the original images to help the model better learn to recognize objects under different conditions. A larger dataset is directly related to improved disease prediction accuracy. The size of the dataset is increased through data augmentation techniques, which help to improve the rust disease classification accuracy. Data augmentation is the second step of preprocessing. Three main types of data augmentation techniques, flipping, cropping, and rotation [33], have been implemented to increase the dataset size. The results of the flipping, cropping, and rotation augmentation techniques are shown in Fig. 4. The flipping technique flips the image horizontally. Cropping shows the resizing effect on an image; normally, the images have been cropped to a size of 224 × 224 pixels. The image is rotated right or left on an axis between 1° and 359° to perform rotation augmentations. The rotation degree parameter has a significant impact on the safety of rotation augmentations. First, data augmentation is carried out, and four rotation angles, 45°, 135°, 210°, and 320°, are taken into consideration. As a result of data augmentation, the dataset is increased by more than 60% over the original data, which is impactful for improving rust disease prediction accuracy. The limitations of rotation techniques are described as follows:

    • Rotating or flipping an image may result in semantic issues. If an image contains text or objects with certain orientations, for example, flipping or rotating the image may result in incorrect or impossible formations.

    • Augmenting an image with rotations or flips might introduce occlusions or overlaps between objects that were not present in the original. This may confuse a model during training and result in inaccurate predictions.
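A minimal NumPy sketch of the three augmentation operations on a toy image array follows; note that arbitrary rotation angles such as 45° or 135°, as used in the paper, require an imaging library, so only a 90° rotation is shown here, and the toy image content is invented:

```python
import numpy as np

# Toy 8-bit "image": height 300, width 400, 3 channels,
# with a white left half so the flip is visible.
img = np.zeros((300, 400, 3), dtype=np.uint8)
img[:, :200] = 255

flipped = img[:, ::-1]      # horizontal flip
cropped = img[:224, :224]   # crop to 224 x 224 pixels, as in the paper
rotated = np.rot90(img)     # 90-degree rotation; arbitrary angles
                            # (45, 135, ...) need an imaging library
```

Each operation produces a new training sample without changing the disease label of the image.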

    Figure 4: Representation results of data augmentation techniques

    3.3 Image Enhancement

    This is the main step of preprocessing. To improve the visibility of rust regions, contrast enhancement is important. The contrast of these regions is improved through histogram equalization. Histogram equalization [3] improves the brightness of an image through its frequency distribution. The main aim of histogram equalization is to improve the brightness of rust disease on each wheat plant part so that the disease can be easily predicted on each part. In histogram equalization, the image is applied as input for histogram generation, which is known as histogram computation. Once the histogram [9] is generated, the local minima of the histogram are calculated, called the normalized sum minima. Based on the local minima, the histogram is partitioned. After histogram partitioning, the grayscale levels are determined. Once the grayscale levels have been calculated, histogram equalization is applied to each partition of the image, a step known as the transformation. The results of histogram equalization before and after contrast enhancement are shown in Fig. 5.
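For illustration, standard global histogram equalization can be written in a few lines of NumPy; this is a simpler variant than the partition-based scheme described above, and the low-contrast test image is synthetic:

```python
import numpy as np

def equalize_histogram(gray):
    """Global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Map each grey level through the normalized cumulative distribution.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[gray]

# Low-contrast toy image: values squeezed into [100, 150].
low = np.random.default_rng(0).integers(100, 151, size=(64, 64)).astype(np.uint8)
high = equalize_histogram(low)   # now spans the full [0, 255] range
```

The partition-based scheme in the text would apply this same transformation separately to each histogram partition delimited by the local minima.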

    Figure 5: Effective results of histogram equalization

    As a result of data augmentation, 14,112 images were generated, which is effective for improving the training accuracy. The augmented images have been used for wheat rust disease prediction. During data augmentation, a small number of images became too blurry and noisy, which reduces the prediction accuracy. As a consequence, contrast enhancement is needed to help improve the diagnosis rate. Among the augmented images, a total of 1263 images were discarded due to low contrast and high blur, which made it impossible to enhance their contrast through histogram equalization. Thus, a final dataset of 12,849 augmented wheat disease images was produced and used to identify rust diseases.

    3.4 Data Annotation

    The annotation of wheat plant images was performed using the Computer Vision Annotation Tool (CVAT). The CVAT tool was used to annotate the experimental data. The performance of each object detection model was then compared using the annotated mask images against the predicted masks [19]. The labelled images are shown in Fig. 6.

    Figure 6: Annotated wheat plant images

    3.5 Region Extraction Models

    For better image representation, computer vision techniques, namely deep learning, are the most accurate way to address a variety of image recognition functions, including image recognition, fine-grained recognition, object detection and classification, and image acquisition, across a variety of databases [3]. Image annotation is very helpful in the training process, but it requires millions of parameters for estimation. Object classification is performed through a CNN model. However, determining how many parts of the plant are affected by rust disease cannot be done by the CNN model alone. Finding each object in an image with bounding boxes according to their interest, and classifying them, is done through object detection models. The object detection models are based on region proposals [7]. Instance segmentation is a combination of object detection and localization. The object detection models considered here are based on RCNN models. Region-proposal-based procedures, especially the RCNN series of methods, have gained a high segmentation rate in terms of their performance. Descriptions of the RCNN, Fast-RCNN, Faster-RCNN, and Mask-RCNN models follow.

    3.5.1 RCNN

    Finding the number of regions according to their interest and locating their bounding boxes in an image is achieved through the RCNN model. The RCNN model is designed especially for detecting multiple objects in an image. The RCNN model follows a selective search algorithm. The main aim of the RCNN model is to detect multiple objects and draw bounding boxes [20] around all the objects in an image. With the help of the selective search algorithm, information about the ROI is extracted, and the ROI is presented as a rectangle. The objects that fall under the ROI are classified through an SVM model. The main backbone of RCNN for classifying each object in an image is the SVM model. The structure of the RCNN model in terms of classification, feature extraction, and regression is shown in Fig. 7.

    Figure 7: Structure of the RCNN model

    The RCNN model consists of four modules. The first module is the selective search algorithm, which combines similar regions and forms groups based on color, shape, and size. The second module is the ROI. The selective search algorithm extracts multiple regions, and the combination of all regions, namely R1 and R2, is known as an RPN. The RPN is applied to the CNN model for feature extraction [22]. Three types of features, texture, color, and shape, are extracted by the CNN. For feature extraction, the CNN uses a pre-trained model such as ResNet-50. The third module is classification, which is achieved through SVM; the SVM performs multi-class classification of the different object classes, such as C1 and C2. The fourth module of RCNN is regression. The regression module draws the boundary of each classified object in an image in the form of a rectangle; the bounding box encloses each classified object. With the help of selective search, ResNet-50, the SVM model, and regression, the bounding box of each object is made. Based on the selective search algorithm, the RCNN model covers only 2000 regions. Processing this range of regions in the RCNN takes 47-50 s.
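The grouping idea behind selective search (the first RCNN module) can be sketched with a crude stand-in that greedily merges regions whose mean colours are similar; the regions, colours, and threshold below are invented for illustration, and this is not the actual selective search algorithm:

```python
import numpy as np

def merge_similar(regions, threshold=30.0):
    """Crude stand-in for selective search's grouping step: greedily
    merge regions whose mean colours are within `threshold` of each other."""
    merged = []
    for box, colour in regions:
        for group in merged:
            if np.linalg.norm(np.array(colour) - np.array(group["colour"])) < threshold:
                group["boxes"].append(box)   # similar colour -> same group
                break
        else:
            merged.append({"colour": colour, "boxes": [box]})
    return merged

# Hypothetical initial regions: (bounding box, mean RGB colour).
regions = [((0, 0, 50, 50), (200, 40, 40)),
           ((40, 0, 90, 50), (210, 50, 45)),    # close in colour -> merged
           ((0, 60, 50, 110), (30, 180, 30))]
groups = merge_similar(regions)
```

Real selective search also merges on texture, size, and shape compatibility, and iterates hierarchically; colour similarity alone is shown here to keep the sketch short.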

    3.5.2 Fast-RCNN

    To solve the problems of the RCNN model, the Fast-RCNN model was developed. The model is named Fast-RCNN because it detects objects faster than the RCNN model [7]. Hence, it can increase the accuracy of object detection as well as classification. In the Fast-RCNN model, the input image is given to a CNN model; mostly, the pre-trained VGG16 model has been used as the CNN. The pre-trained CNN generates a convolutional feature map, and the selective search algorithm extracts the regions from the image; VGG16 is therefore the heart of the Fast-RCNN model. The regions are combined to make the region proposals. Through the ROI pooling layer, the region proposals are resized [9]. The resized region proposals are given as input to the fully connected (FC) layer. The FC layer has two branches: one branch applies softmax for classification, and the other produces the category-specific bounding-box regression. The RCNN is slower than the Fast-RCNN because in Fast-RCNN there is no need to extract the 2000 region proposals every time and apply each to the CNN model; instead, the convolution operation is performed once on the image, and the feature map is obtained directly from it. The structure of the Fast-RCNN model is shown in Fig. 8.

    Figure 8: Structure of the Fast-RCNN model

    In the Fast-RCNN model, the ROI pooling layer is also known as the spatial pyramid pooling layer. The spatial pyramid layer resizes the combined region proposals into squares. The output from the ROI pooling layer, O_spp, is determined by the number of region proposals N and the image size img_size (height and width in pixels). Each bounding box has a target class u, whose value indicates foreground or background: if u ≥ 1, the proposal is a foreground object and a bounding box is made using the regression target v; if u = 0, no bounding box is made, as u = 0 indicates a background region of pixels.
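The core idea of ROI pooling, resizing an arbitrary region to a fixed grid by max-pooling over bins, can be sketched in NumPy on a single-channel toy feature map; this is an illustration of the mechanism, not the paper's implementation:

```python
import numpy as np

def roi_pool(feature_map, box, out_size=2):
    """Max-pool one ROI of a 2-D feature map down to a fixed
    out_size x out_size grid, the core of the ROI pooling layer."""
    x1, y1, x2, y2 = box
    roi = feature_map[y1:y2, x1:x2]
    # Split the ROI's rows and columns into out_size roughly equal bins.
    h_bins = np.array_split(np.arange(roi.shape[0]), out_size)
    w_bins = np.array_split(np.arange(roi.shape[1]), out_size)
    out = np.empty((out_size, out_size))
    for i, hb in enumerate(h_bins):
        for j, wb in enumerate(w_bins):
            out[i, j] = roi[np.ix_(hb, wb)].max()   # max over each bin
    return out

fmap = np.arange(36, dtype=float).reshape(6, 6)     # toy feature map
pooled = roi_pool(fmap, box=(0, 0, 4, 4))           # 4x4 ROI -> 2x2 output
```

Because every ROI ends up the same fixed size, all proposals can be batched through the same FC layers regardless of their original dimensions.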

    3.5.3 Faster-RCNN

    The RCNN and Fast-RCNN extract regions from the image through a selective search algorithm, which [19,20] takes considerable time. The RCNN pipeline consists of region proposal, feature extraction, region classification, and bounding boxes for the classified regions. The Fast-RCNN model uses the VGG16 model as the backbone for feature extraction and classifies the object proposals, but it still relies on the selective search algorithm for region extraction, which has a negative impact and decreases the performance of the Fast-RCNN model.

    To overcome the issues of RCNN and Fast-RCNN, the Faster-RCNN model has been developed. Faster-RCNN is the combination of a region proposal network (RPN) and Fast-RCNN. The main objective of Faster-RCNN is to detect objects in much less time than the RCNN and Fast-RCNN models. The Faster-RCNN model is composed of the RPN and a feature extraction stage; the features of an image are extracted through a pre-trained CNN. A sliding window moves over the feature map to propose class-agnostic objects. The main goal of the RPN is to produce a set of proposals; the RPN module generates a class probability as well as a label for each object. These proposals feed two sibling heads of the FC layer: one sibling performs classification, and the other performs bounding-box regression. Anchor boxes provide a predetermined set of reference boxes with various sizes and aspect ratios, used when the RPN first predicts object locations; these boxes are designed to capture the scale and aspect ratio of typical objects and are centred on each sliding-window position. For k anchors per position, the RPN generates 2k classification scores and 4k bounding-box coordinates: the 2k classification scores distinguish the foreground and background of regions, and the 4k regression coordinates are encoded over the k region proposals.
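The anchor arithmetic above (k anchors per sliding-window position, hence 2k scores and 4k coordinates) can be sketched as follows; the scales, ratios, and function name are illustrative assumptions rather than values from the paper.

```python
import itertools
import math

def make_anchors(center_x, center_y, scales=(64, 128, 256), ratios=(0.5, 1.0, 2.0)):
    """Generate the k = len(scales) * len(ratios) anchor boxes (x1, y1, x2, y2)
    centred on one sliding-window position, as the RPN does."""
    anchors = []
    for scale, ratio in itertools.product(scales, ratios):
        w = scale * math.sqrt(ratio)  # ratio > 1 gives a wider box
        h = scale / math.sqrt(ratio)
        anchors.append((center_x - w / 2, center_y - h / 2,
                        center_x + w / 2, center_y + h / 2))
    return anchors

anchors = make_anchors(112, 112)
# k = 9 anchors here, so the RPN head would emit 2k = 18 objectness
# scores and 4k = 36 box coordinates for this position.
```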

    Because the proposal function is translation invariant, if an object is translated within the image, the same function can still predict the corresponding proposal at its new location.

    The design of multi-scale anchors is the key to handling scale variation without extra computation. After feature extraction, bounding boxes containing relevant objects are found as regions, and filters are applied to keep the top-scoring anchors. The ROI pooling layer extracts the features corresponding to the relevant objects found by the RPN as a new tensor. Thus, the Faster-RCNN model classifies each object in bounding-box form along with its coordinates: the RPN generates the region proposals, and the Fast-RCNN head recognizes the multiple objects in those regions. The structure of the Faster-RCNN model is shown in Fig. 9.

    In the Faster-RCNN model, a mini-batch of 256 anchors is sampled from one image, and the RPN is trained with the help of these anchors. All anchors of an image may be combined in terms of their similar features; during this combination, the network may slow down and take considerable time.

    Figure 9: Structure of the Faster-RCNN model

    3.5.4 Mask-RCNN

    To overcome the issues of Faster-RCNN, the Mask-RCNN model has been developed. Mask-RCNN is the combination of an RPN and a classifier [22]. First, the input image is given to a pre-trained CNN; generally, a model such as ResNet-50 is used for feature mapping. With the help of the pre-trained model and a binary classifier, multiple regions of interest (proposals) are generated. The RPN offers the object bounding boxes, and the classifier generates a binary mask for every class. The ROI pooling network makes a bounding box for each object and warps it to fixed-size dimensions. The warped features are applied to the FC layer as input [24], which performs multiclass classification and bounding-box regression for each object in an image. The warped features are also fed to a mask classifier, a combination of two convolution layers that generates the binary mask of each ROI; the mask classifier allows the network to produce a mask for each class without competition between classes. Thus, the Mask-RCNN model generates [26] three outputs for each candidate object: a class label, a bounding box in coordinate form, and a warped binary mask. Mask-RCNN localizes and classifies multiple objects in a single image. The structure of the Mask-RCNN model is shown in Fig. 10.

    During training over the different iterations, the total loss of Mask-RCNN is described as

    L = Lcls + Lbox + Lmask,

    where Lcls is the classification loss and Lbox is the bounding-box regression loss, both generated by the RPN and detection head, and Lmask is the per-pixel mask loss.
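Under the standard Mask-RCNN convention, the three loss terms are simply summed; below is a small sketch, assuming a per-pixel binary cross-entropy for the mask term and scalar classification and box losses supplied by the other heads.

```python
import math

def mask_rcnn_loss(l_cls, l_box, mask_logits, mask_targets):
    """Total Mask-RCNN loss L = Lcls + Lbox + Lmask, where Lmask is the
    mean binary cross-entropy over the predicted mask pixels."""
    bce = 0.0
    for logit, target in zip(mask_logits, mask_targets):
        prob = 1.0 / (1.0 + math.exp(-logit))  # sigmoid per pixel
        bce += -(target * math.log(prob) + (1 - target) * math.log(1 - prob))
    l_mask = bce / len(mask_logits)
    return l_cls + l_box + l_mask
```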

    Overall, the region extraction models are summarized in Table 3.

    Table 3: Description of region extraction models

    3.6 Hybrid Classifier

    The hybrid classifier consists of a feature extraction and a classification model. After the segmented regions are extracted through the region extraction models, the disease features are extracted through the ResNet-50 pre-trained model [23]. Once the disease features have been extracted, rust diseases can be identified by a cross-entropy support vector machine model [34]. The ResNet-50 model passes the image patches through different convolution layers for feature extraction. After rust disease feature extraction, the CE-SVM model calculates the probability of each rust disease through the probability distribution function, which yields a cross-entropy vector for each rust disease. Once the vector of each rust disease has been generated, the set of vectors is given as input to a multiclass SVM. The multiclass SVM has four split margins that are used to characterize the type of rust disease features. The whole process of disease feature classification is known as the cross-entropy SVM (CE-SVM) algorithm. The diagrammatic representation of the hybrid classifier is shown in Fig. 11.
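The paper does not spell out the cross-entropy vector computation, but a plausible minimal sketch is to soften the ResNet-50 disease logits into a probability distribution and take the per-class cross-entropy; the resulting vectors would then train a multiclass SVM (e.g., scikit-learn's SVC with an RBF kernel). The four-class layout and the function names are assumptions.

```python
import numpy as np

def softmax(logits):
    """Stable softmax turning raw scores into a probability distribution."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def cross_entropy_vector(logits):
    """Per-class cross-entropy -log p(c) of the disease logits; the class
    with the smallest entry is the most probable one."""
    p = softmax(np.asarray(logits, dtype=float))
    return -np.log(p)

# Hypothetical logits for [stem rust, stripe rust, leaf rust, healthy]:
vec = cross_entropy_vector(np.array([2.0, 0.5, 0.1, -1.0]))
```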

    Figure 10: Structure of the Mask-RCNN model

    Figure 11: Hybrid classifier for wheat rust disease detection

    4 System Assessments

    In this section, the results obtained from the different region extraction models (RCNN, Fast-RCNN, Faster-RCNN, and Mask-RCNN) with different pre-trained models, along with their hybrid classifier, are evaluated through performance parameters to answer the following research questions:

    RQ1: How do the region extraction models improve the segmentation rate and localization compared with the previous state-of-the-art segmentation techniques?

    RQ2: Are there any advantages of region extraction models using data augmentation techniques?

    RQ3: Are there any significant changes in the performance of region extraction models along with the hybrid classifier?

    RQ4: Is there any comparison between one-stage and two-stage segmentation models?

    Toward answering each research question: RQ1 covers the segmentation rate comparison of the region extraction models against one-stage and two-stage object detection models. RQ2 shows that the augmented data are helpful for region extraction in the form of patches, which is beneficial for increasing the segmentation rate as well as the training and testing accuracy of the classifier. RQ3 defines the role of the hybrid classifier, with its high-level rust feature extraction and classification of those features. In RQ4, the CE-SVM defends the classification accuracy without losing any rust features.

    4.1 Experimental Setup

    All the region-based classifier experiments were performed on an Ubuntu Server 18.04 Dell EMC PowerEdge R840 four-way rack server with an Intel Xeon(R) Gold 5120 processor and an Nvidia Tesla P100 GPU. The region-based classifiers were executed in a PyTorch Python notebook, using libraries such as Keras, TensorFlow, pandas, and h5py to run the region extraction as well as the hybrid classifier models. A total of 12,849 augmented images were used for training and testing purposes. Among the augmented images, a total of 1500 images were randomly selected for data annotation purposes.

    4.2 Parameter Defining

    This phase defines the region extraction model and hybrid classifier parameters, which is beneficial for estimating the desired output for a given input image. The parameter details of each model are defined as follows.

    4.2.1 A Framework of Region Extraction Models

    For the identification and location of each object in a single image, four types of region extraction models (RCNN, Fast-RCNN, Faster-RCNN, and Mask-RCNN) were used in this study. The RCNN model uses the image size in terms of its height and width dimensions; during training, it uses the selective search algorithm for region proposals together with a learning rate, number of epochs, and batch size. The Fast-RCNN model uses the selective search algorithm for region proposals, bounding-box threshold values, and iterations in terms of epochs for recognition. The Faster-RCNN model uses RGB images with fixed dimensions (224 × 224) and the VGG16 model for feature extraction; for the bounding boxes of each object in an image, it uses an Intersection over Union (IoU) threshold of 0.6. The VGG16 model has 32 filters from the middle layer to the last layer, and after applying Faster-RCNN to an image, objects are recognized with a pooling output size of 7 × 7. Additionally, the Mask-RCNN model uses ResNet-50 for feature maps and takes 2 images per GPU; during iterations, it generates the RPN and mask losses. The pre-trained network parameters used in each object detection model, such as image size, epochs, and batch size, are shown in Fig. 12.
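The settings above can be collected into a single configuration fragment (values taken from this section; the key names are illustrative, not from the paper's code):

```python
# Hyperparameters reported for the region extraction models.
train_cfg = {
    "input_size": (224, 224),        # Faster-RCNN RGB input
    "roi_pool_output": (7, 7),       # pooling output size
    "bbox_iou_threshold": 0.6,       # Faster-RCNN IoU for bounding boxes
    "backbones": {"Faster-RCNN": "VGG16", "Mask-RCNN": "ResNet-50"},
    "images_per_gpu": 2,             # Mask-RCNN batch per GPU
}
```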

    Figure 12: Refined pre-trained network parameters of region extraction models

    4.2.2 Hybrid Classifier Framework

    The hybrid classifier is a combined phase consisting of feature extraction and the CE-SVM model. The ResNet-50 model is used as the backbone for feature extraction, and the CE-SVM model is employed to classify rust disease features. The parameter details of the hybrid classifier are shown in Table 4.

    Table 4: Parameter details of the hybrid classifier

    4.3 Result Analysis

    The main aim of the region extraction models is to delineate the wheat rust diseased part along with its bounding box in an effective manner. To analyze the performance of the region extraction models and the hybrid classifier, four research questions were planned; the analysis of each research question is presented in result-oriented form as follows.

    4.3.1 RQ1: Segmentation Rate and Localization

    Region extraction models such as RCNNs can improve the segmentation rate and localization over previous state-of-the-art segmentation techniques for wheat rust disease recognition by incorporating domain-specific knowledge into the model. In this context, the region extraction models can be trained on annotated images of wheat plants, with the regions of interest (ROIs) defined as the regions of the plants that contain rust; the model then learns to recognize the characteristic rust patterns [12]. The two-stage segmentation models (RCNN, Fast-RCNN, Faster-RCNN, and Mask-RCNN) were compared with the one-stage models YOLACT++, YOLOv5, SSRNET, RetinaNet, and R-FCN for wheat rust disease identification in terms of segmentation rate and localization. For both families, these rates can vary depending on several factors, including the size and complexity of the dataset [12], the quality of the annotations, and the specific architecture and training procedures used for each model.

    In general, two-stage segmentation models tend to achieve higher segmentation rates and more accurate localization than one-stage models, as they incorporate additional context from the image and refine the localization of objects. For training the two-stage segmentation models, 1500 labeled images were considered as ground truth. The segmentation and localization rates were determined with respect to the local bounding box of each wheat rust object in an image. Training ran for a maximum of 50 epochs with a momentum of 0.9; the weight decay was set to 0.0001 and the learning rate to 0.001, which is well suited to small batches with quick convergence. When an ROI had an IoU with a ground-truth box greater than 0.5, it was considered positive; otherwise, it was considered negative, with positives and negatives sampled at a 1:1 ratio. The RPN anchors covered three aspect ratios: 0.5, 1, and 2. We ran the mini-batch with two images per GPU to extract the wheat rust patches.
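The positive/negative ROI rule above hinges on the IoU computation; a self-contained sketch, assuming boxes given as (x1, y1, x2, y2) corner tuples:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# An ROI with iou(...) > 0.5 against a ground-truth box counts as positive.
```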

    The segmentation rate and the localization of each rust object in different wheat images over different numbers of epochs are shown in Figs. 13 and 14, respectively. Both the localization and the segmentation rate improve as the number of epochs increases, and the two-stage segmentation models remain above the one-stage models throughout. Among the two-stage segmentation models, Mask-RCNN achieves the highest segmentation (0.97) and localization (0.69) rates at a high number of epochs for wheat rust disease object localization.

    Figure 13: Segmentation rate of one-stage and two-stage segmentation models

    Figure 14: Localization rate of one-stage and two-stage segmentation models

    4.3.2 RQ2: Improving the Generalization of Region Extraction Models

    There are several advantages of using region extraction models with data augmentation techniques for wheat rust disease detection:

    • Improved accuracy: Data augmentation techniques such as rotation, cropping, and flipping can help increase the diversity of the training dataset [10,11], which can improve the accuracy of the region extraction model.

    • Reduced overfitting: Overfitting is a common problem in deep learning models. By using data augmentation techniques [12,23], the model can be trained on a larger and more diverse dataset, which can reduce overfitting.

    • Robustness to variations: Data augmentation can also help increase the robustness of the model to variations in the input data, such as different lighting conditions and angles.

    • Better generalization: By training the model on a larger and more diverse dataset, it can generalize better to unseen data [24,25], resulting in improved performance on new images.

    Overall, data augmentation techniques can significantly improve the performance of region extraction models for wheat rust disease detection.
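The three augmentations used in this work can be sketched on a NumPy image array; the fixed 90-degree rotation and the centre crop to half size are simplifying assumptions, since the paper does not state its exact angles or crop windows.

```python
import numpy as np

def augment(image):
    """Return the flipped, rotated, and cropped variants of an H x W image."""
    flipped = np.fliplr(image)               # horizontal flip
    rotated = np.rot90(image)                # 90-degree rotation
    h, w = image.shape[:2]
    cropped = image[h // 4:h // 4 + h // 2,  # centre crop to half size
                    w // 4:w // 4 + w // 2]
    return flipped, rotated, cropped
```

Applying such transforms to the secondary source images is how the dataset grows to the 12,849 augmented images used for training and testing.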

    4.3.3 RQ3: Performance Evaluation

    Rust objects have been found on the wheat stem, stripe, and leaf parts. Once the patches have been extracted, the rust disease class is easily determined by the hybrid classifier. The performance of the region extraction models was measured through the mean IoU and mean average precision (mAP), estimated from the ground truth and predicted images [21]; the labeled image is considered the ground truth image. The patches extracted by the different region extraction models, compared against the corresponding ground truth images, are shown in Fig. 15.

    Figure 15: Patch extraction by the RCNN, Fast-RCNN, Faster-RCNN, and Mask-RCNN models

    The performance of each extracted rust disease patch was measured between the ground truth and the predicted sub-image, evaluated through the IoU, mean IoU (MIoU), and mean average precision (mAP) parameters. A detailed description of the performance achieved by each region extraction model is shown in Table 5. Across the region extraction models, a total of 63,485 different patches of wheat rust diseases were extracted, which was useful for capturing the dynamicity of the rust disease features.

    Table 5: Performance achieved by each extraction model for rust disease patch extraction

    At each stage of patch extraction, the 63,485 rust disease patches were used as input to the feature extraction module. The extracted patches have a size of 32 × 32 pixels along with their bounding-box locations. The ResNet-50 model is used for feature extraction, and the extracted features are used for classification. Based on the region extraction model's outcome for each extracted patch, the contour of the combined image was extracted. To create the feature vector for rust infections, features such as contour area, perimeter, roundness, and Hu invariant moments were extracted through the ResNet-50 feature extraction model. As a result, our work shares the same attributes as the trained model and initializes the network using those features, so that the intended result is accomplished by the trained model. Transfer learning can minimize the computing load [2], hasten network convergence, and address the underfitting issue brought on by insufficient labeled training data. In this study, a substantial portion of our dataset was used to fine-tune the trained model according to the properties of the component images. The extracted features were then used for classification in the CE-SVM classifier, which acts as a multiclassifier that distinguishes [34] the three types of wheat rust diseases on the stem, stripe, and leaf plant parts. The multiclass classifier was trained on the input feature vectors, one category at a time: the entry for the true category of each rust disease feature vector was set to 1, and it was set to 0 when no rust disease feature was identified. The number of CE-SVM iterations was set to 100, and polynomial, linear, and Gaussian kernel functions were trained to measure the training accuracy of the CE-SVM classifier.

    In the hybrid classifier, a 70:30 ratio of patches was used for training and testing. Recall is the ratio of true positives to the sum of true positives and false negatives; precision is the proportion of true positives among all predicted positive instances; the F1-score combines precision and recall; and accuracy is the percentage of correctly predicted labels. The performance of the three CE-SVM kernel functions for wheat rust disease classification is shown in Table 6.
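The four metrics just defined follow directly from the confusion-matrix counts; a minimal sketch (binary counts for one class, with the per-class aggregation scheme assumed):

```python
def classification_metrics(tp, fp, fn, tn):
    """Precision, recall, F1-score and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp)      # true positives among predicted positives
    recall = tp / (tp + fn)         # true positives among actual positives
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy
```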

    Table 6: Performance of the CE-SVM classifier with three kernel functions for wheat rust disease classification

    The performance of the CE-SVM for the different wheat rust disease classes, along with different numbers of feature vector samples, was measured for the stem, stripe, and leaf rust diseases. Among the three kernel functions in the CE-SVM, the Gaussian function has higher precision, recall, F1-score, and accuracy than the linear and polynomial kernel functions. The accuracy achieved by the Gaussian function is 93.60%, which is sufficient for wheat rust disease classification. The proposed model was also tested on different datasets, namely the CGIAR dataset [23,24] and the wheat leaf dataset [25], to measure its generalization; here the CGIAR dataset (83.67%) outperforms the wheat leaf dataset (79.87%) for wheat rust disease classification.

    4.3.4 RQ4: Comparison of Two-Stage Segmentation with One-Stage Segmentation Models for Wheat Rust Disease Classification with a Hybrid Classifier

    The main factors for comparing two-stage segmentation models with one-stage segmentation models, together with their cross-entropy support vector machine classifier, are as follows:

    • Speed: One-stage models are faster than two-stage models because they localize and classify in a single forward pass through the network, whereas two-stage models first generate proposals and then classify them [8,10].

    • Accuracy: Two-stage models tend to have better accuracy because of their more complex pipeline, but this comes at the cost of slower speed [11,22].

    • Memory: One-stage models tend to use less memory as they have fewer stages, but again, this may come at the cost of lower accuracy [12,24].

    • Flexibility: Two-stage models can be more flexible because they can incorporate different loss functions, such as cross-entropy and support vector machine losses, which can improve accuracy [34].

    • Model size: One-stage models are typically smaller than two-stage models, which makes them easier to deploy in resource-constrained environments [35].

    It is important to note that the trade-offs among speed, accuracy, memory, flexibility, and model size depend on the specific use case for classifying wheat rust diseases.

    4.4 Comparison of the Proposed Method with the Previous State-of-the-Art Approaches

    In this section, the region extraction models (RCNN, Fast-RCNN, Faster-RCNN, and Mask-RCNN) along with the hybrid classifier are compared with previous state-of-the-art approaches. For example, Fusarium head blight (FHB) has been detected through the Mask-RCNN model [2], achieving a 77.76% detection rate on wheat spikes and diseased areas of FHB. A single-stage segmentation model, YOLOv5, has been used to count the number of wheat spikes in wheat plants; once the wheat spikes were counted, however, classification was not performed by the authors [8,9,11,12]. Hence, classification is performed by the proposed approach to validate the classifier's results. A detailed comparison is shown in Table 7.

    Table 7: Results comparison of the proposed method with the previous state-of-the-art approaches

    5 Conclusion and Future Work

    Wheat rust diseases caused by fungal pathogens pose a substantial threat to global wheat production and food security; early detection and accurate prediction of these diseases can minimize yield quality losses. In this paper, four different region extraction models (RCNN, Fast-RCNN, Faster-RCNN, and Mask-RCNN) along with the CE-SVM model have been employed to classify three types of wheat rust diseases. First, a total of 2352 wheat rust and healthy wheat plant images were gathered from secondary sources. Second, three basic data augmentation techniques (flipping, cropping, and rotation) were applied to the secondary source dataset to improve the training speed as well as the classification accuracy; through data augmentation, a total of 12,849 augmented images were used for patch extraction in the region extraction models. The patches were extracted through data annotation in the training phase of the region extraction models; the annotated patches are considered ground truth and the extracted patches are considered predictions, which enables the IoU calculation. Several types of invariant-moment, hue, and area features were extracted from the informed patches using the ResNet-50 pre-trained model. The CE-SVM method complements the RCNN models by providing a robust classification framework: among all the kernel functions, the Gaussian function in the cross-entropy SVM model achieves the highest classification accuracy (93.60%) for wheat stripe rust disease. In the future, the RCNN models will be fine-tuned to yield results comparable to the kernel functions, supporting the claim that superior segmentation results provide increased classification accuracy. Object detection and segmentation models can successfully extract feature information from images, specifically disease-affected regions; however, the method relies strongly on precise disease area localization, and incorrect region extraction might negatively affect the categorization outcomes. By focusing on key image regions, using region extraction models in conjunction with CE-SVM for wheat rust disease classification has the potential to improve accuracy and interpretability. The proposed combined approach can also assist farmers and agronomists in making informed decisions, such as optimizing fungicide application or implementing resistant cultivars, to mitigate the impact of wheat rust diseases effectively. The benefits of efficient wheat rust disease classification using a hybrid region extraction model and the CE-SVM method include timely disease detection, improved crop management, enhanced decision-making, resource optimization, and disease monitoring and surveillance.

    Acknowledgement: The authors would like to thank Chitkara University for its support.

    Funding Statement: The authors received no specific funding for this study.

    Author Contributions: Conceptualization, D.K., V.K., A.D., B.G.; methodology, D.K., V.K., A.D., B.G.; software, V.K., A.D., B.G., T.T.; formal analysis, D.K., V.K., A.D., T.T.; data correction, V.K., B.G., T.T.; writing-original draft preparation, D.K., V.K., A.D., B.G., T.T.; writing-review and editing, D.K., V.K., B.G., T.T.; supervision, V.K., A.D., B.G., T.T.

    Availability of Data and Materials: There is no associated data with this article.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
