
    Mango Leaf Disease Identification Using Fully Resolution Convolutional Network

    Computers, Materials & Continua, Issue 12, 2021 (published online 2021-12-15)

    Rabia Saleem, Jamal Hussain Shah*, Muhammad Sharif and Ghulam Jillani Ansari

    1 COMSATS University Islamabad, Wah Campus, 47040, Pakistan

    2 University of Education, Lahore, 54000, Pakistan

    Abstract: Given the high demand for mango, the king of all fruits, it is the need of the hour to curb its diseases to fetch high returns. Automatic leaf disease segmentation and identification remain a challenge due to variations in symptoms. Accurate segmentation of the disease is the key prerequisite for any computer-aided system to recognize the diseases of a mango plant leaf, e.g., Anthracnose and apical necrosis. To solve this issue, we propose a CNN-based fully convolutional network (FrCNnet) model for segmenting the diseased part of the mango leaf. The proposed FrCNnet directly learns the features of each pixel of the input data after applying some preprocessing techniques. We evaluated the proposed FrCNnet on a real-time dataset provided by the mango research institute, Multan, Pakistan, and compared its segmentation performance with available state-of-the-art models, i.e., Vgg-16, Vgg-19, and Unet. The proposed model's segmentation accuracy is 99.2% with a false negative rate (FNR) of 0.8%, which is much higher than that of the other models. We conclude that with FrCNnet, the network learns more prominent and more specific features of the input image, resulting in better segmentation performance and disease identification. Accordingly, this automated approach helps pathologists and mango growers detect and identify these diseases.

    Keywords: Mango plant leaf; disease detection; CNN; FrCNnet; segmentation

    1 Introduction

    Plant diseases, being insidious, are becoming a nightmare for agricultural countries. Such diseases cause colossal losses to national economies by decreasing both the quantity and quality of fruit and field crops. Pakistan is one of the countries that earn precious foreign exchange by cultivating diverse crops, including fruits and vegetables, grown in various parts of the country. Therefore, a need exists to employ smart techniques based on image processing and computer vision research to identify these diseases. In recent times, researchers have been working hard to devise novel computer-based solutions for identifying diseases of mango plants at an early stage. This helps farmers protect their crop until it is harvested, reducing economic losses [1]. Mango is the main popular fruit of Pakistan and is famous all over the world for its distinct flavor, texture, and nutritive value. Nowadays, farmers are worried about the effects of various diseases on mango plants, which arise naturally from climate change and other associated factors. Thus, it is imperative to protect this king of fruit plants to ensure the availability of sweet and healthy mangoes for the rest of the world. Unfortunately, few techniques exist to deal with fruit plant wellness, and only a small number of techniques have been reported to date for mango plant diseases [2]. This is due to the complex and complicated pattern of the diseased part on the leaf of the mango plant. Hence, to proceed in this resilient research area, efficient and robust techniques are required to cure mango plants. For this purpose, images were captured using digital and mobile cameras and used as a baseline for identifying and detecting disease on mango plant leaves. The visualization (detection and identification) of a disease is not an easy task, as most harmful diseases are found on the stems and leaves of the mango plant. To tackle these issues and understand the leaf anatomy, much emphasis is placed on precise identification of disease on the leaves of the mango plant. A few common mango leaf diseases have been described [3], including Blossom Blight, Anthracnose, and Apical Necrosis, which are mostly found on the leaves of mango plants.

    Automated techniques play an important role in identifying and detecting mango leaf diseases to handle such challenging issues. A few automated techniques have been reported, such as segmentation and classification using K-means and a support vector machine (SVM), respectively, to detect the unhealthy region [4]. Further, a deep learning approach has been employed to determine the type of mango [5], followed by a semantic segmentation-based method for counting the number of mangoes on plants in an orchard using CNN [6], and leaf disease identification of the mango plant using deep learning [2]. Hence, the discussion above reflects a dire need for an innovative and efficient automated technique for the detection, identification, and classification of disease on the leaves of the mango plant.

    Keeping this perspective in mind, this article focuses on the following issues: (1) removal of image noise arising from camera adjustment and the environment, (2) tracking changes in disease shape, color, size, and texture, (3) tackling variation in the background and disease spot, (4) proper and accurate segmentation of the diseased spot despite its similarity with the healthy parts of the leaf, and (5) robust and useful feature extraction and fusion for classification at the later stage. The key contributions of the intended work are listed below:

    (i) The local-contrast-haze-reduction (LCHR) approach is extended to enhance the infected regions in the query images and reduce their noise.

    (ii) A deep learning-based segmentation method is proposed for leaf disease detection. A comparison with the latest available deep learning models (Vgg-16, Vgg-19, and Unet) is performed to authenticate the proposed model.

    (iii) Color, texture (LBP), and geometric features are extracted, and canonical correlation analysis (CCA) based fusion of the extracted features is performed. The neighborhood component analysis (NCA) technique is applied to select the best features. Ten different types of classifiers are then implemented for recognition/identification.

    (iv) A comparison is also performed with existing articles on mango disease. To establish the authenticity of the proposed system, the classification results are also computed under three different strategies.

    The rest of the article has the following sections: Section 2 highlights the literature review, while Section 3 depicts the details of the proposed work. The experimentation and results are discussed in Section 4, and the paper is concluded in Section 5.

    2 Literature Review

    In recent times, various methods have been developed for plant leaf disease detection. These are broadly categorized into disease detection and classification methods. Most techniques use a deep convolutional network for segmentation, feature extraction, and classification, considering images of mango, citrus, grapes, cucumber, wheat, rice, tomato, and sugarcane. Likewise, these methods are also suitable for fruits and their leaf diseases, since diseases can be correctly identified only if they are precisely segmented and relevant features are extracted for classification. Machine learning techniques are generally used for disease symptom enhancement and segmentation support, and to calculate texture, color, and geometric features. Further, these techniques are also helpful for classifying the segmented disease symptom. All of this can be applied to make an automated computer-based system work efficiently and robustly.

    Iqbal et al. [7], in their survey, discussed different detection and classification techniques and concluded that all of these are at an infancy stage; not much work has been performed on plants, especially the mango plant. Further, they discussed almost all the methods, considering their advantages, disadvantages, challenges, and the image processing concepts used to detect disease.

    Sharif et al. [8] proposed an automated system for the segmentation and classification of citrus plant disease. In the first phase, they adopted an optimized weighted technique to segment the diseased leaf symptom. In the second phase, various descriptors, including color, texture, and geometric features, are combined. Finally, the optimal features are selected using a hybrid feature selection method consisting of PCA, entropy, and a skewness-based covariant vector. The proposed system achieved above 90% accuracy for all types of diseases. A few prominent articles have been reported on classification of wheat grain [9], apple grading based on CNN [10], crop characterization by a CNN-based target-scattering model [11], plant leaf disease and soil-moisture prediction using a support vector machine (SVM) for feature classification [12], and UAV-based smart farming for the classification of crops and weeds using geometric features. Safdar et al. [13] applied watershed segmentation to segment the diseased part. The suggested technique was applied to a combination of three different citrus datasets and achieved 95.5% accuracy, which is remarkable for segmenting the diseased part. Febrinanto et al. [14] used the K-nearest-neighbour approach for segmentation, with a detection rate of 90.83%. Further, the authors segmented the diseased part of the citrus leaf using a two- and nine-cluster approach, incorporating an optimal minimum bond parameter of 3%, which led to a final accuracy of 99.17%.

    Adeel et al. [15] achieved 90% and 92% accuracy for segmentation and classification, respectively, on grape images from the PlantVillage dataset. Results were obtained by applying the local-contrast-haze-reduction (LCHR) enhancement technique and LAB color transformation to select the best channel for thresholding. After that, color, texture, and geometric features were extracted and fused by canonical correlation analysis (CCA). An M-class SVM performed the classification of the finally reduced features. Zhang et al. [16] presented GoogLeNet and Cifar10 networks to classify diseases on maize leaves; their models achieved the highest accuracy compared with VGG and AlexNet when classifying nine different types of maize leaves. Gandhi et al. [17] worked on identifying plant leaf diseases using a mobile application. They used generative adversarial networks (GAN) to augment the images; classification was then performed using a convolutional neural network (CNN) deployed in a smartphone application. Durmuş et al. [18] classified diseases of tomato plant leaves taken from the PlantVillage database. Two deep learning architectures, AlexNet and SqueezeNet, were tested, with training and validation done on the Nvidia Jetson TX1. Bhargava et al. [19] used different fruit images and grouped them into red, green, and blue for grading purposes. They subtracted the background using the split-and-merge algorithm, then implemented four different classifiers, viz. K-nearest neighbor (KNN), support vector machine (SVM), sparse representative classifier (SRC), and artificial neural network (ANN), to classify quality. The accuracies achieved for fruit detection were 80% for KNN (with K = 10), 85.51% for SRC, 91.03% for ANN, and 98.48% for SVM. Among all, SVM proved most effective in quality evaluation, and the results obtained were encouraging and comparable with state-of-the-art techniques. Singh et al. [20] proposed a classification method for leaves infected with Anthracnose disease on mango plants. A multilayer convolutional neural network (MCNN) was developed for this purpose and implemented on 1070 self-collected images, raising the classification accuracy to 97.13%. Kestur et al. [6] proposed a deep learning segmentation technique called MangoNet in 2019, which achieved an accuracy of 73.6%. Arivazhagan et al. [2] used a convolutional neural network (CNN) and achieved an accuracy of 96.6% on mango plant leaf images; in that technique, only feature extraction was performed, with no preprocessing.

    Keeping in view the scanty literature, we propose a precise deep learning approach for the segmentation of the diseased leaves of the mango plant, called a full resolution convolutional network (FrCNnet). We extend the U-net deep learning model, a well-known deep learning segmentation technique. This method directly segments the diseased part of the leaf after some preprocessing.

    This paper proposes a novel, precise deep learning approach for the segmentation and classification of mango plant leaves. After segmenting the diseased part, three feature types (color, texture, and geometric) are extracted for fusion and classification. The fused feature vector is then reduced using a PCA-based feature selection approach to obtain an optimal feature set and enhanced classification accuracy. The dataset used for this work is a collection of self-collected images captured using different types of image-capturing gadgets. The subsequent sections detail the work done in this regard.

    3 Proposed Work

    The first and foremost task for this work was preparing a dataset: a collection of RGB images gathered from various mango-producing areas of Pakistan, including Multan, Faisalabad, and Lahore. The collected images were of dissimilar sizes, so they were resized to 256×256 after being annotated by expert plant pathologists. The workflow of the proposed technique is shown in Fig. 1. The major steps of the proposed technique are: 1) preprocessing of images, consisting of image resizing and contrast enhancement/stretching followed by data augmentation and ground-truth generation; 2) use of the fine-tuned proposed FrCNnet to segment the resulting enhanced image using the deep learning-based proposed model (codebook); 3) color, texture, and geometric feature extraction, followed by feature fusion and reduction. Finally, classification is performed using ten classifiers.

    Figure 1:Framework diagram of basic concept and structure of a proposed automated system

    3.1 Preprocessing

    Preprocessing is the initial step adopted for this work. Its purpose is to enhance image quality, which helps achieve better segmentation and classification accuracy. A detailed description of each step is given below:

    3.1.1 Resizing and Data Augmentation

    We had a collection of 2286 images of diseased mango leaves. A few of the images, i.e., 2.88% (66 out of 2286), were distorted after the resizing operation and were discarded. It is well known that deep learning architectures are data-hungry algorithms requiring a large number of images to obtain proper results. Thus, keeping in view the nature of deep learning architectures, the remaining 2220 images were augmented by rotating and flipping them horizontally, vertically, and both horizontally and vertically, using the power-law transformation with gamma = 0.5 and c = 1. By doing this, we had 8880 images available for training the deep learning algorithms. Tab. 1 shows the distribution of the images of the dataset.
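    As a minimal sketch of this augmentation step (original plus horizontal, vertical, and combined flips, each power-law corrected with gamma = 0.5 and c = 1), assuming NumPy arrays for images; the function name `augment` is illustrative, not from the paper:

    ```python
    import numpy as np

    def augment(image, gamma=0.5, c=1.0):
        """Return the image plus three flipped variants, each power-law corrected.

        Power-law (gamma) transform: s = c * r**gamma on [0, 1] intensities.
        The exact augmentation recipe is an assumption based on the paper's
        description (horizontal, vertical, and combined flips).
        """
        img = image.astype(np.float64) / 255.0
        img = c * np.power(img, gamma)          # power-law transformation
        variants = [
            img,                                # original orientation
            np.fliplr(img),                     # horizontal flip
            np.flipud(img),                     # vertical flip
            np.flipud(np.fliplr(img)),          # horizontal + vertical flip
        ]
        return [(v * 255.0).clip(0, 255).astype(np.uint8) for v in variants]
    ```

    Applying this to each of the 2220 retained images yields 2220 × 4 = 8880 images, matching the count stated above.
    
    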

    Table 1:Distribution of training and testing data

    Images 1 to 7104 were allocated for training and images 7105 to 8880 for testing. An equal image size of 256×256 was employed throughout the current study. Only diseased images from the whole dataset were used for segmentation, and the remaining 4440 healthy images were used for classification.

    3.1.2 LCHR Contrast Stretching

    In image processing techniques, image quality matters greatly for the subsequent performance of the system [21]. This approach helps reduce errors affecting the system's accuracy and execution cost. It can be done by removing noise from the image, since noise can mislead the system into falsely detecting the region of interest. Moreover, image enhancement is useful for extracting the different types of features used in the classification process. So, in this paper, we extend the local-contrast-haze-reduction (LCHR) approach [15], which helps solve a few major issues: 1) noise removal, 2) background correction, 3) regularization of image intensity, and 4) differentiation between diseased and healthy regions.

    Suppose I(r,s) is the original input RGB image with dimensions 256×256. The idea is to improve the local contrast of the diseased area of the input images using an n×m window, as follows.

    where β denotes the contrast parameter with values between 0 and 1, the global mean is denoted by μ, and σ is the standard deviation. To obtain the final enhanced function, the local minimum, denoted by M, is computed. The definitions of the local minimum (M) and standard deviation (σ) are as under.

    Further, the contrast function (F) uses the aforementioned computed values of the local minimum M and standard deviation σ. Hence, the formula for the contrast function becomes:

    To conclude this step, simple probability theory is used to compute the complement of the local contrast-enhanced image F(I), since the image and its complement sum to 1, as given under:

    where the complement of the image is denoted by C(I), as shown in Eq. (5). In this way, we have regenerated an existing haze reduction technique [22] that helps to clear the diseased regions by removing the noise. The complete illustration of this section is shown in Fig. 2.
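    The exact LCHR equations (Eqs. 1–5) are not reproduced in this excerpt, so the following is only an illustrative sketch built from the stated ingredients: a contrast parameter β, global mean μ and standard deviation σ, a sliding-window local minimum M, and a complement satisfying F(I) + C(I) = 1 on normalized intensities. The precise stretch formula is an assumption:

    ```python
    import numpy as np

    def local_contrast_stretch(I, win=15, beta=0.8):
        """Sketch of local contrast enhancement plus a complement step.

        The formulas here are illustrative assumptions, not the paper's
        exact Eqs. (1)-(5): a per-window minimum drives the stretch, and
        the complement satisfies F(I) + C(I) = 1 on [0, 1] intensities.
        """
        I = I.astype(np.float64) / 255.0
        mu, sigma = I.mean(), I.std()
        # sliding-window local minimum M (simple loop version for clarity)
        pad = win // 2
        P = np.pad(I, pad, mode='edge')
        M = np.empty_like(I)
        for r in range(I.shape[0]):
            for s in range(I.shape[1]):
                M[r, s] = P[r:r + win, s:s + win].min()
        # contrast function F: stretch intensities away from the local minimum
        F = np.clip(beta * (I - M) / (sigma + 1e-8) + mu, 0.0, 1.0)
        C = 1.0 - F                      # complement: F(I) + C(I) = 1
        return (F * 255).astype(np.uint8), (C * 255).astype(np.uint8)
    ```

    The returned pair corresponds to panels (b) and (c) of Fig. 2; the haze-reduction step of panel (d) would be applied on top of the complement.
    
    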

    Figure 2:Different steps of LCHR contrast stretching (a) input image (b) contrast-enhanced image (c) complement of an image (d) haze reduction image

    Afterward, a method was designed to generate ground truth masks for the diseased leaves. The ground truth segmentation masks were provided for training and testing to calculate the matching accuracy of the segmented image. Fig. 3 shows the images and their ground truth masks.

    Figure 3:Images with their ground truth masks

    3.2 Proposed FrCNnet Architecture/Model

    To obtain pixel-wise classification from segmented images, a CNN is generally employed. The CNN has two important parts: 1) convolutional layers with subsampling and 2) upsampling.

    The convolutional layer extracts deep features from the input image using different filters. Subsampling reduces the size of the extracted feature map [23], eliminating its redundancy and reducing computational time [24,25] to avoid overfitting. This is because the reduced features represent the label of the input image. Subsampling reduces the spatial resolution of the features of an input image for pixel-wise segmentation.

    In the upsampling part of the convolutional neural network, we propose a novel deep learning-based segmentation method named FrCNnet. This novel idea eliminates the concatenation operation from the architecture, so the full-resolution features of the input pixels are preserved. Here we adopt transposed convolution, an upsampling technique that expands the size of images by applying some padding to the original image after the convolution operation (convolutional layer). Every pixel can be represented as a training sample in this case. The proposed method consists of 51 layers and is inspired by the Unet model [26], which has more layers than the proposed model for deep segmentation. It is a customization of the layers of the Unet model in which the concatenation operation used in Unet is omitted from the proposed FrCNnet. Tab. 2 shows the details of each layer of the proposed system.
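    To make the upsampling mechanism concrete, here is a minimal transposed-convolution sketch for a single channel. It illustrates the general operation only, not FrCNnet's actual kernels or layer configuration:

    ```python
    import numpy as np

    def transposed_conv2d(x, kernel, stride=2):
        """Minimal transposed convolution (up-convolution) sketch.

        Each input pixel scatters its value, weighted by the kernel, into
        a larger output map; output size = (in - 1) * stride + kernel_size.
        This mirrors the upsampling idea described for FrCNnet, not its
        exact layers.
        """
        h, w = x.shape
        k = kernel.shape[0]
        out = np.zeros(((h - 1) * stride + k, (w - 1) * stride + k))
        for i in range(h):
            for j in range(w):
                out[i * stride:i * stride + k,
                    j * stride:j * stride + k] += x[i, j] * kernel
        return out
    ```

    With a 2×2 kernel and stride 2, a 2×2 input becomes a 4×4 output, i.e., the spatial resolution doubles at each such layer until the output map matches the input resolution.
    
    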

    Table 2: Layer details of the proposed FrCNnet architecture

    The CNN transforms feature maps F using convolution operations [27,28]. The basic components of this network are the convolution operator '*', the filter kernels W, and the activation function φ(·). Therefore, the Nth feature map of layer L can be calculated as:

    where b is the bias for each feature map generated by each layer. After each convolution operation, we apply a non-linear activation function via rectified linear units (ReLUs). The ReLU activation function is used for its efficiency and robustness and to counter the vanishing gradient problem, instead of the tanh and sigmoid functions described in [29] and [30]. ReLU can be defined as φ(x) = max(0, x).

    The max-pooling operation in deep learning is used to resolve the overfitting problem of deep layers. In the proposed architecture (FrCNnet), the max-pooling function is used after convolutional layers 2, 5, 7, 9, 11, 13, and 14; Tab. 2 gives the dimensions of this operation. The purpose of max-pooling is to downsample the input (the image, output matrix, or hidden layer) by reducing the dimensions of the matrix. It also allows making assumptions about the features contained in the binned region. After that, the transposed convolutional layers are used to expand the size of the image, by implementing certain padding techniques after the convolutional layer. Softmax (multinomial logistic regression) is the classifier and the final layer of the proposed FrCNnet architecture; as a result, the resolution of the output maps is the same as that of the input maps. Then the overall cross-entropy loss function (the class output, or pixel classification, layer) is calculated between the ground truth (Z) and the predicted segmented map (Ẑ) to minimize the overall loss L of each pixel throughout training. The pixel classification layer automatically ignores pixel labels undefined during training and outputs a categorical label for each image pixel processed by the network.
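    The max-pooling operation described above can be sketched as follows for a single channel with non-overlapping 2×2 windows; this is the generic operation, not FrCNnet's specific pooling configuration:

    ```python
    import numpy as np

    def max_pool2d(x, size=2, stride=2):
        """2x2 max-pooling sketch: downsample by keeping the maximum of
        each non-overlapping window, as used after several FrCNnet
        convolutional layers."""
        h = (x.shape[0] - size) // stride + 1
        w = (x.shape[1] - size) // stride + 1
        out = np.empty((h, w))
        for i in range(h):
            for j in range(w):
                out[i, j] = x[i * stride:i * stride + size,
                              j * stride:j * stride + size].max()
        return out
    ```

    Each 2×2 window collapses to its maximum, halving both spatial dimensions while keeping the strongest activation in each binned region.
    
    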

    The aforementioned cross-entropy loss function is used when a deep convolutional network is applied for pixel-wise classification [31]. The objective of training deep learning models is to optimize the weight parameters at every layer; a single optimization cycle proceeds as follows [32]. First, forward propagation is performed sequentially to obtain the output of each layer. Second, using the loss function error between the ground truth and the predicted output, the accuracy is measured at the last output layer for the dataset. Third, back propagation is performed to minimize the error on the training dataset. As a result, the weights of the proposed architecture are updated using the training data.
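    The pixel-wise cross-entropy computation, including the skipping of undefined pixel labels mentioned above, can be sketched as follows. The `ignore` sentinel value is an assumption for illustration:

    ```python
    import numpy as np

    def pixel_cross_entropy(logits, target, ignore=-1):
        """Pixel-wise softmax cross-entropy sketch.

        logits: H x W x C class scores; target: H x W integer labels.
        Pixels labeled `ignore` are skipped, mirroring how undefined
        pixel labels are ignored during training.
        """
        z = logits - logits.max(axis=-1, keepdims=True)   # numerical stability
        log_p = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
        mask = target != ignore
        rows, cols = np.nonzero(mask)
        # mean negative log-probability of the true class over valid pixels
        return -log_p[rows, cols, target[rows, cols]].mean()
    ```

    Minimizing this average per-pixel loss via back propagation is what drives the weight updates described in the training cycle above.
    
    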

    3.2.1 Feature Extraction and Fusion

    Feature extraction is a very useful technique in machine learning and computer vision. Providing useful and efficient information while avoiding redundancy is the major concern of feature classification. For recognizing disease on plant leaves, texture and color features are the more useful descriptors: the texture feature represents the texture analysis, while the color feature captures the color information of the diseased part. We extracted geometric, LBP, and color features to classify diseases on mango leaves. The feature extraction, fusion, and classification framework is shown in Fig. 4.

    Figure 4:Framework of feature extraction, fusion and classification

    The detailed description of each phase is as follows. In the training phase, the LCHR-enhanced images are passed through the FrCNnet, and the resulting segmented map is used for feature extraction. To calculate LBP and geometric features, the LCHR-enhanced images are converted into gray-scale images, and the features are then extracted as shown in Fig. 4.

    After extracting the features, canonical correlation analysis (CCA) is used to fuse them into a single matrix. Moreover, the neighborhood component analysis (NCA) reduction technique removes about 50% of the features, and the remainder are fed to an SVM to train a model. In the testing phase, LCHR contrast images are supplied to the proposed system, which extracts the features and segments the diseased part. After that, the texture and geometric features are computed from the segmented map of the images, as shown in Fig. 4. Finally, recognition in terms of appropriate labels is performed by matching the trained labels with the fused and reduced features.

    Color Features: Color features play an indispensable role in the recognition and detection of the diseased part of plants, as every disease has its own shade. Color features are extracted in this article for recognizing disease on mango plant leaves, as each disease has its own shade and pattern. Various color spaces, including red, green and blue (RGB); hue, intensity and saturation (HIS); luminance with a, b components (LAB); and hue, saturation and value (HSV), are applied to the images after enhancing them using the LCHR approach. Eight types of statistical metrics are utilized to extract each type of color feature (divided into three separate channels): area, mean, entropy, standard deviation, variance, kurtosis, skewness, and harmonic mean, expressed in Eqs. (9)–(16), respectively. The dimension of the combined feature vector for all color spaces is 1×108, where each color channel has dimension 1×9. Therefore, the final size of the color feature vector after concatenating all the features in a single matrix is n×108, where n is the total number of images (training and testing); it is denoted by (CF).

    The formulae for the statistical metrics are as under:
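    Since Eqs. (9)–(16) are not reproduced in this excerpt, the following sketch computes the listed descriptors for one color channel using standard textbook definitions, which may differ in detail from the paper's exact formulas:

    ```python
    import numpy as np

    def channel_stats(ch):
        """Statistical descriptors for one color channel, following the
        metric list in the text (area, mean, entropy, standard deviation,
        variance, kurtosis, skewness, harmonic mean). Standard definitions
        are assumed, since Eqs. (9)-(16) are not shown here.
        """
        x = ch.astype(np.float64).ravel()
        mean, std = x.mean(), x.std()
        var = std ** 2
        # histogram-based Shannon entropy over 8-bit intensity bins
        hist, _ = np.histogram(x, bins=256, range=(0, 256))
        p = hist[hist > 0] / x.size
        entropy = -(p * np.log2(p)).sum()
        # central-moment skewness and kurtosis (eps guards a flat channel)
        skew = ((x - mean) ** 3).mean() / (std ** 3 + 1e-12)
        kurt = ((x - mean) ** 4).mean() / (var ** 2 + 1e-12)
        # harmonic mean is defined on strictly positive values
        pos = x[x > 0]
        hmean = pos.size / (1.0 / pos).sum() if pos.size else 0.0
        area = x.size                      # channel "area" = pixel count
        return [area, mean, entropy, std, var, kurt, skew, hmean]
    ```

    Concatenating such per-channel vectors across the color spaces yields the combined color feature vector described above.
    
    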

    Texture Features: The local structure and texture of any object can be determined by the Local Binary Pattern (LBP) method, which extracts information from local patterns rather than individual pixel levels. To handle the texture complexity of an image, this old but simple approach is used [33]. The definition of LBP is given as under:

    The LBP vector, represented by ψ(LBP), is 1×59 for a single image and n×59 for all the images in the dataset.
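    A minimal sketch of the basic per-pixel LBP code for a 3×3 neighbourhood is shown below. The paper's 1×59 vector additionally histograms these codes over the uniform-pattern set, which is omitted here:

    ```python
    import numpy as np

    def lbp_code(patch):
        """Basic 3x3 Local Binary Pattern code for the center pixel: each
        of the 8 neighbours contributes a bit that is set when the
        neighbour is >= the center value. The 59-bin uniform histogram
        used in the paper is built on top of such codes and is not
        shown in this sketch."""
        center = patch[1, 1]
        # neighbours in clockwise order starting from the top-left corner
        order = [(0, 0), (0, 1), (0, 2), (1, 2),
                 (2, 2), (2, 1), (2, 0), (1, 0)]
        code = 0
        for bit, (r, c) in enumerate(order):
            if patch[r, c] >= center:
                code |= 1 << bit
        return code
    ```

    Sliding this over a gray-scale image and histogramming the codes produces the texture descriptor fed into the fusion stage.
    
    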

    Geometric Features: Geometric features, sometimes known as shape features, are generally used to recognize objects. Their significance is to perceive the object by collecting geometric data including edges, lines, corners, spots (blobs), and folds (ridges). Six types of geometric features are used in this work, represented numerically in Eqs. (18)–(23):

    The major and minor axes are denoted by their respective symbols in Eqs. (18)–(23), the area is represented by A, the perimeter by M′a, the orientation by θ, and the extent by E. In Eqs. (18)–(23), d and e are the distances from the focus point in an ellipse, F represents the distance between the focus points, P, Q, and R are parameters representing distances from the central point, L represents the length, and W represents the width. The geometric feature vector has size n×6, where n is the total number of images; it is denoted by ψ(GV).

    Feature Fusion and Reduction: In feature fusion, numerous features are combined into a single vector. The fusion process has shown considerable progress in pattern recognition applications because each feature has its own characteristics and qualities. Combining all the features gives much-improved performance but expands the computation cost; therefore, we used CCA, which fuses features so as to maximize their correlation. Aside from its benefits, feature fusion has issues as well: computational time increases and dimensionality problems occur if the fused features are not equally represented, and combining features into a single vector may introduce redundant information. Therefore, in this work, neighborhood component analysis (NCA) based feature reduction is performed to remove the extra dimensionality and redundant information after canonical correlation analysis (CCA) is applied for feature fusion.

    Eq. (24) shows the correlation between the color, texture, and geometric features to be fused, where D=[d1,d2,...,dn], E=[e1,e2,...,en], and F=[f1,f2,...,fn] are the sample matrices. Formally, CCA can be solved as:

    where Cdef = DEF^T defines the covariance matrix between the feature sets and Cdd = DD^T, Cee = EE^T, Cff = FF^T represent the covariances within the three feature sets. When the matrices within the feature sets are non-singular, CCA can be obtained by computing a generalized eigenproblem, which is formulated as:

    Let Xd=[xd1,xd2,...,xd,m], Xe=[xe1,xe2,...,xe,m], Xf=[xf1,xf2,...,xf,m] denote the three projection direction matrices, where the vector triples (xdi, xei, xfi), i = 1,...,m, correspond to the m largest generalized eigenvalues. The fused feature obtained from the three modalities is as under:

    where Z(i) denotes the fused vector, obtained by maximizing the correlation between the color, texture, and geometric features and discarding irrelevant information. To obtain the final classification results, it is sometimes important to remove redundant information; therefore, the neighborhood component analysis (NCA) reduction method eliminates redundant information and selects only the good, informative features. The selected features are fed to classifiers to obtain the classification results, which are shown in the results section.
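    For intuition, here is a two-view CCA sketch via the whitened cross-covariance SVD. The paper fuses three feature sets through a generalized eigenproblem; a pairwise version is shown for clarity, and the regularization `eps` is an assumption to keep the within-set covariances invertible:

    ```python
    import numpy as np

    def cca(X, Y, k=1):
        """Two-view canonical correlation analysis sketch.

        Solves CCA via the SVD of the whitened cross-covariance matrix.
        The paper's three-set formulation is analogous; this pairwise
        version is shown for clarity only.
        """
        X = X - X.mean(axis=0)
        Y = Y - Y.mean(axis=0)
        n = X.shape[0]
        eps = 1e-8                       # assumed regularizer
        Cxx = X.T @ X / n + eps * np.eye(X.shape[1])
        Cyy = Y.T @ Y / n + eps * np.eye(Y.shape[1])
        Cxy = X.T @ Y / n

        def inv_sqrt(C):
            # inverse square root via eigendecomposition of a PSD matrix
            w, V = np.linalg.eigh(C)
            return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

        K = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
        U, s, Vt = np.linalg.svd(K)
        A = inv_sqrt(Cxx) @ U[:, :k]     # projection directions for view X
        B = inv_sqrt(Cyy) @ Vt[:k].T     # projection directions for view Y
        return A, B, s[:k]               # s holds the canonical correlations
    ```

    The fused vector can then be formed by concatenating the projected views X@A and Y@B, which are maximally correlated by construction; redundant components would subsequently be pruned by the NCA step described above.
    
    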

    3.2.2 Training and Testing

    In recent times, agricultural applications, especially those for mango plants, still suffer from a shortage of data. This is due to the complexity and cost of the collection process, the plant's availability in limited places, and the labeling cost during data acquisition [34]. These factors motivated us to collect data from different mango-producing areas of Pakistan to overcome this issue. We adopted two strategies: 1) data augmentation and 2) transfer learning [14,24–26,35]. No technique had previously been proposed for segmentation using deep learning in the agricultural sector, especially for the mango plant. To evaluate the results, we trained the Vgg-16, Vgg-19, and Unet models on our dataset and compared them with the proposed FrCNnet architecture, using the train/test distribution mentioned above in Tab. 1. The evaluation of the proposed network's performance and its optimization were carried out independently. We adopted a recent deep learning approach, the double cross-validation scheme, which is more reliable and more commonly used than the trial-and-error method. Finally, the segmented images were fed into the system for feature extraction, fusion, and reduction, and then for classification. Training took about 18 h for 100 epochs with a batch size of 20, and the segmentation of a single leaf took about 7.5 s. To implement the whole algorithm and its experiments, we used a personal computer with the following specifications: an Intel Xeon processor with a CPU frequency of 2.2 GHz, 32 GB RAM, and an NVIDIA GeForce GTX 1080 GPU, with the algorithm coded in Matlab 2018(b) on 64-bit Windows.

    4 Results and Discussion

    The accuracy-based results of the proposed segmentation technique are presented first, followed by the classification results obtained by employing different benchmark classifiers. A detailed description of all the results is given in the following sections.

    4.1 Segmentation Results

    In this part, the segmentation performance of the proposed model is compared with other state-of-the-art deep learning models. For acquiring segmentation results, the dataset was split into 7040 images for training and 1760 images for testing; the Vgg-16, Vgg-19, and Unet models were also implemented on this dataset to compare against the proposed model. These models were fine-tuned using the same receptive fields and feature maps as the proposed model. The quantitative results are arranged in tabular form and also visualized graphically. At the 1st epoch, the matching accuracy of the segmented image region with the ground truth is 62.88% for Vgg-16, 68.58% for Vgg-19, 91.7% for Unet, and 99.2% for the proposed FrCNnet, which is much higher than all the other models. At the 10th epoch, the accuracy of Vgg-16 rises to 67.1% and that of Vgg-19 to 72.9%, while Unet remains at 91.7% and FrCNnet remains at its highest level of 99.2%. At the 100th epoch, Vgg-16 improves considerably to 77.4%, Vgg-19 shows a minimal increase to 73%, Unet again remains at 91.7%, and the proposed FrCNnet architecture stays at 99.2%, higher than all the other models, as shown in Tab. 3.

    Table 3: Percentage-accuracy segmentation results obtained using Vgg-16, Vgg-19, Unet and FrCNnet at the 1st, 10th and 100th epochs
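The matching accuracy reported above is pixel-wise agreement between the predicted segmentation and the ground-truth mask. A minimal sketch of this metric (the paper's exact evaluation code is not given, so this is an assumption of the standard definition):

```python
import numpy as np

def pixel_accuracy(pred_mask, gt_mask):
    """Fraction of pixels where the predicted segmentation mask matches
    the ground-truth mask. Both masks are label arrays of equal shape."""
    assert pred_mask.shape == gt_mask.shape
    return float(np.mean(pred_mask == gt_mask))

gt = np.zeros((4, 4), dtype=int)
gt[1:3, 1:3] = 1                      # a 2x2 diseased region
pred = gt.copy()
pred[1, 1] = 0                        # one mislabeled pixel out of 16
print(pixel_accuracy(pred, gt))       # 15/16 = 0.9375
```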

    4.2 Classification

    The proposed classification results are twofold: first, classification is performed for each disease class against the healthy class; second, classification is performed for all disease classes together, including the healthy class. All these results are obtained using 10-fold cross-validation: the feature set is divided into 10 subsets, of which 1 subset is used for testing and the remaining 9 for training. This whole process is repeated 10 times and the average value over the ten iterations is reported. For comparison with the proposed technique, 10 state-of-the-art classifiers are selected: Linear Discriminant, Linear-Support-Vector-Machine (LSVM), Quadratic-SVM (QSVM), Cubic-SVM, Fine KNN, Medium KNN, Cubic K-Nearest-Neighbor (CKNN), Weighted-KNN, Ensemble-Subspace-Discriminant (ESD), and Ensemble-Subspace-KNN. Performance is measured using sensitivity, specificity, AUC (Area Under the Curve), FNR (False Negative Rate), and accuracy. A detailed analysis of the proposed system's results is presented in this section, in both graphical and tabular form.
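The 10-fold averaging described above can be sketched as follows. The scorer passed in is a deliberately trivial 1-nearest-neighbour stand-in just to exercise the loop; the actual benchmark classifiers listed above are not reimplemented here.

```python
import numpy as np

def ten_fold_accuracy(features, labels, train_and_score):
    """Average accuracy over 10 folds: each fold in turn is the test set,
    the other 9 folds form the training set."""
    folds = np.array_split(np.arange(len(labels)), 10)
    scores = []
    for i, test_idx in enumerate(folds):
        train_idx = np.hstack([f for j, f in enumerate(folds) if j != i])
        scores.append(train_and_score(features[train_idx], labels[train_idx],
                                      features[test_idx], labels[test_idx]))
    return float(np.mean(scores))

def nn_score(Xtr, ytr, Xte, yte):
    """Toy 1-NN classifier on the first feature column, returning accuracy."""
    d = np.abs(Xte[:, None, 0] - Xtr[None, :, 0])
    return float(np.mean(ytr[d.argmin(axis=1)] == yte))

X = np.linspace(0, 1, 100).reshape(-1, 1)
y = (X[:, 0] > 0.5).astype(int)
print(ten_fold_accuracy(X, y, nn_score))
```

Note that this toy split is sequential for brevity; in practice the folds would be shuffled or stratified before splitting.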

    4.2.1 Test 1: Anthracnose vs. Healthy

    In this test, classification of Anthracnose vs. healthy images was performed. We had 4440 images of Anthracnose-diseased leaves and an equal number of healthy images. Tab. 4 shows that an accuracy of 98.9% is achieved by the Linear-SVM, which is superior among all the other competing classifiers. The values of the other measures, namely Sensitivity, Specificity, Area Under the Curve (AUC), and False Negative Rate (FNR), are 0.01, 0.98, 0.99, and 1.1, respectively.

    4.2.2 Test 2: Apical Necrosis vs. Healthy

    In this test, classification of apical necrosis vs. healthy images was performed. We had 4440 images of apical-necrosis-diseased leaves and an equal number of healthy images. Tab. 5 shows that an accuracy of 97.1% is achieved by the Quadratic-SVM, which is superior among all the other competing classifiers. The values of the other measures, namely Sensitivity, Specificity, Area Under the Curve (AUC), and False Negative Rate (FNR), are 0.02, 0.96, 0.99, and 2.9, respectively.

    Table 4: Classification results of Test 1

    Table 5: Classification results of Test 2

    4.2.3 Test 3: All Diseases vs. Healthy

    The results presented in this section are twofold: (i) CCA-based feature fusion (color, texture, and geometric features) and (ii) NCA-based feature reduction. Tab. 6 shows the classification results obtained after CCA-based feature fusion. An accuracy of 96.2% is achieved by the Cubic-SVM, which is superior among all the other competing classifiers. The values of the other measures, namely Sensitivity, Specificity, Area Under the Curve (AUC), and False Negative Rate (FNR), are 0.02, 0.94, 0.99, and 3.8, respectively.

    Tab. 7 shows the classification results obtained after NCA-based feature reduction. The accuracy of 98.9% achieved by both the Quadratic-SVM and the Cubic-SVM is superior among all the other competing classifiers. For both classifiers, the values of Sensitivity, Specificity, Area Under the Curve (AUC), and False Negative Rate (FNR) are 0.01, 0.98, 0.99, and 1.1, respectively. Accuracy improves here because NCA-based feature reduction removes the redundant features produced by CCA-based feature fusion.
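The metrics reported in these tests all follow from the binary confusion counts of a diseased-vs-healthy split. A small sketch of the standard definitions (the counts below are hypothetical, chosen only to be consistent with a 4440 + 4440 image test):

```python
def binary_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, FNR (%), and accuracy (%) from the
    confusion counts of a binary diseased-vs-healthy test."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    fnr = 100.0 * fn / (tp + fn)                  # reported as a percentage
    accuracy = 100.0 * (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, fnr, accuracy

# Hypothetical counts for a balanced 4440 + 4440 image test set.
sens, spec, fnr, acc = binary_metrics(tp=4391, fn=49, tn=4391, fp=49)
print(round(sens, 3), round(spec, 3), round(fnr, 1), round(acc, 1))
```

With these assumed counts the accuracy works out to about 98.9% and the FNR to about 1.1%, matching the scale of the numbers in the tables.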

    Table 6: Classification results of Test 3 (CCA-based feature fusion)

    Table 7: Classification results of Test 3 (NCA-based feature reduction)

    4.3 Discussion

    The obtained results are in conformity with the findings of Ramesh et al. (2019), who introduced a mango counter named Mangonet using a deep learning segmentation technique that attained an accuracy of 73.6%. The segmentation accuracy of the proposed model, 99.2%, is much improved compared to Mangonet, as shown in Tab. 8. The classification accuracy obtained after feature extraction (color, texture, and geometric) and NCA-based feature reduction is 98.9%, which improves on Arivazhagan et al. (2018), who obtained an accuracy of 96.6%, and on Udhay et al. (2019), who presented a multilayer convolutional neural network for classifying mango leaves infected by anthracnose disease with a classification accuracy of 97.13%.

    Table 8: Comparison of different CNN models with the proposed model

    5 Conclusion

    This paper introduced the FrCNnet technique, which segments the diseased part of a mango leaf so that it can be classified and identified promptly and properly. We generated three types of ground-truth masks to compare the results of the proposed model. The images were given to the system, and encoder-decoder layers (down-sampling with convolution and ReLU, up-sampling with deconvolution and ReLU, softmax, and pixel classification) were used in this work. Unlike previous state-of-the-art deep learning approaches, FrCNnet produces full-resolution features of the input images, which leads to improved segmentation performance. The limited processing time also makes it feasible for pathologists to segment the diseased leaf part in practice. The proposed model outperformed the Vgg-16, Vgg-19, and Unet models. The results were evaluated on images of mango plant leaves provided by the mango research institute, Multan. After segmentation, features were extracted, fused through CCA-based fusion, reduced through NCA-based reduction, and then fed into the classifiers. The classification accuracy of 98.9% achieved by the Quadratic-SVM and Cubic-SVM is superior to the already available work. In the future, a larger number of images will be required to improve the segmentation performance of each class.
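The full-resolution property can be illustrated with a toy encoder-decoder: pooling halves the spatial size, a matching up-sampling stage restores it, and the final per-pixel softmax therefore yields exactly one prediction per input pixel. This is a shape-level sketch only, not the FrCNnet architecture itself.

```python
import numpy as np

def downsample(x):
    """2x2 max pooling with stride 2 (toy stand-in for an encoder stage)."""
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample(x):
    """Nearest-neighbour up-sampling by 2 (toy stand-in for deconvolution)."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def pixel_softmax(scores):
    """Per-pixel softmax over class scores of shape (n_classes, H, W)."""
    e = np.exp(scores - scores.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

x = np.random.default_rng(1).normal(size=(8, 8))
encoded = downsample(downsample(x))                   # 8x8 -> 2x2
decoded = upsample(upsample(encoded))                 # 2x2 -> 8x8 again
probs = pixel_softmax(np.stack([decoded, -decoded]))  # 2-class pixel map
print(decoded.shape, probs.shape)
```

The decoded map has the same spatial size as the input, so every input pixel receives its own class-probability vector, which is the property that lets the network label diseased regions at full resolution.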

    Acknowledgement: We are thankful to the Mango Research Institute, Multan, and Tara Crop Sciences Private Limited for providing the data (images).

    Funding Statement:The authors received no specific funding for this study.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
