
    A New Method for Scene Classification from the Remote Sensing Images

    2022-08-24 12:56:20
    Computers, Materials & Continua, 2022, Issue 7

    Purnachand Kollapudi, Saleh Alghamdi, Neenavath Veeraiah, Youseef Alotaibi, Sushma Thotakura and Abdulmajeed Alsufyani

    1Department of CSE, B V Raju Institute of Technology, Narsapur, Medak, Telangana, India

    2Department of Information Technology, College of Computers and Information Technology, Taif University, Taif 21944, Saudi Arabia

    3Department of Electronics and Communications, DVR&DHS MIC Engineering College, Kanchikacharla, Vijayawada, A.P., India

    4Department of Computer Science, College of Computer and Information Systems, Umm Al-Qura University, Makkah 21955, Saudi Arabia

    5Department of ECE, P.V.P Siddhartha Institute of Technology, Vijayawada, India

    6Department of Computer Science, College of Computers and Information Technology, Taif University, Taif, 21944, Saudi Arabia

    Abstract: The mission of classifying remote sensing pictures based on their contents has a range of applications in a variety of areas. In recent years, a lot of interest has been generated in researching remote sensing image scene classification. Remote sensing image scene retrieval and scene-driven remote sensing image object identification are included in remote sensing image scene understanding (RSISU) research. In the last several years, the emergence of deep learning (DL) methods has allowed new approaches to remote sensing image classification to achieve major breakthroughs, providing new research and development possibilities for RS image classification. A new network called Pass Over (POEP) is proposed that utilizes both feature learning and end-to-end learning to solve the problem of picture scene comprehension using remote sensing imagery (RSISU). This article presents a method that combines feature fusion and extraction methods with classification algorithms for remote sensing scene categorization. The POEP offers two advantages. First, multi-resolution feature mapping is done using the POEP connections, which combine the several resolution-specific feature maps generated by the CNN, resulting in critical advantages for addressing the variation in RSISU data sets. Secondly, we are able to use enhanced pooling to make the most use of the multi-resolution feature maps that include second-order information. This enables CNNs to better cope with RSISU issues by providing more representative feature learning. The data for this paper is stored in a UCI dataset with 21 types of pictures. In the beginning, the pictures were pre-processed; then the features were retrieved using an integration of the RESNET-50, Alexnet, and VGG-16 architectures. The characteristics are amalgamated and sent to the attention layer; after the characteristics have been fused, classification takes place. We utilize an ensemble classifier in our classification algorithm that combines the architectures of a Decision Tree and a Random Forest. The optimum findings are established via performance analysis and comparison analysis.

    Keywords: Remote sensing; RSISU; DL; RESNET-50; VGG-16

    1 Introduction

    Information obtained through remote sensing, which provides us with important data about the Earth’s surface, may enable us to precisely measure and monitor geographical features [1]. The growth in the number of remote sensing pictures is due to recent improvements in earth observation technologies. This has heightened the urgency of finding ways to make full use of the expanding volume of remote sensing pictures for intelligent earth observation. Thus, to make sense of large and complicated remote sensing pictures, it is crucial to comprehend them completely. As a difficult and still-unsolved issue in understanding remote sensing data, research on scene categorization [2,3] of remote sensing pictures has been quite active. Correctly labelling remote sensing pictures with pre-set semantic categories, as illustrated in Fig. 1, is the function of remote sensing image classification. Advanced remote sensing picture scene classification [4] research, which includes many studies on urban planning, natural hazards identification, environment monitoring, vegetation mapping, and geospatial item recognition, has occurred due to the significance of these fields in the real world [5,6].

    Figure 1: Classifying remote sensing imagery

    Assigning a specific semantic name to a scene, such as “urban” or “forest,” is an example of the categorization of land-use scenes. An increase in satellite sensor development is enabling a massive rise in the amount of high-resolution remote sensing picture data. In order to create intelligent databases, it is essential to use robust and efficient categorization techniques on huge remote sensing pictures. Classifying aerial or satellite pictures using computer vision methods is very interesting. For example, the bag-of-visual-words (BOVW) paradigm groups the local visual characteristics collected from a series of pictures and creates a set of visual words (i.e., a visual vocabulary). A histogram then records how often each visual word appears in a given picture. The BOVW model has been useful in the classification of remote sensing images of land-use scenes, which has been a particularly excellent use of the model. However, it ignores the spatial information in the pictures. By integrating texture information into remote sensing land-use picture data, the BOVW model’s performance may be enhanced. Fig. 2 shows the development of remote sensing picture classification as a progression from pixel-level to object-level to scene-level categorization. Due to the variety of remote sensing picture classification systems, we choose to use the generic phrase “remote sensing image classification” rather than “remote sensing image classification technology.” In general, scholars worked to categorize remote sensing pictures by labelling each pixel with a semantic class, since the spatial resolution of early remote sensing images was extremely poor; this is comparable to how things are represented in the early scientific literature. Furthermore, this is still an ongoing research subject for multispectral and hyperspectral remote sensing picture analysis.

    Figure 2: Classification of remote sensing images on three different levels

    Computational time and memory utilization have seen important advancements in computer vision. Classifiers, on the other hand, are required to have significant generalization ability while also producing high performance. Remote sensing imagery characterization is a growing area of study. Additional remote sensing image analysis performance measures have been found using the feature-based method, which is an additional step beyond data mining strategies. Classification of images is an important use of computer vision in this field. Our main goal is to advance machine learning methods for remote sensing picture categorization. The information included in satellite pictures, such as buildings, landscapes, deserts, and structures, is categorized and analysed over time using images including satellite imagery [7].

    This paper presents a method that combines feature fusion and extraction with classification algorithms for remote sensing scene categorization. The POEP offers two advantages. First, multi-resolution feature mapping is done using the Pass Over connections, which combine the several resolution-specific feature maps generated by the CNN, resulting in critical advantages for addressing the variation in RSISU data sets. Secondly, we are able to use enhanced pooling to make the most use of the multi-resolution feature maps that include second-order information. This enables CNNs to better cope with RSISU issues by providing more representative feature learning. In the beginning, the pictures were pre-processed; then the features were retrieved using an integration of the RESNET-50, Alexnet, and VGG-16 architectures. The characteristics are amalgamated and sent to the attention layer; after the characteristics have been fused, classification takes place. We utilize an ensemble classifier in our classification algorithm that combines the architectures of a Decision Tree and a Random Forest. The optimum findings are established via performance analysis and comparison analysis.

    The remainder of the article is structured as follows: Section 2 presents relevant literature on the categories that have been observed. Section 3 outlines the proposed process. Section 4 presents the results. A summary of conclusions is found in Section 5.

    2 Related Works

    Only a few iterations are required for the RSSCNet model recommended by Sheng-Chieh et al. [8] when used in conjunction with a two-stage cyclical learning rate training policy and the no-freezing transfer learning technique. It is possible to get a high degree of precision in this manner. Using data augmentation, regularization, and an early-stopping approach, the issue of restricted generalization observed during fast deep neural network training may be addressed as well. The model and training methods presented in that article outperform existing models in terms of accuracy, according to the findings of the experiments. To be effective, this approach must concentrate on picture rectification pre-processing for cases where outliers are suspected, and combine various explainable artificial intelligence analysis technologies to enhance interpretation skills. Kim et al. [9] proposed a self-attention feature selection module integrated with a multi-scale feature fusion network for few-shot remote sensing scene categorization, referred to as SAFFNet. For a few-shot remote sensing classification task, informative representations of images with different receptive fields are automatically selected and re-weighted for feature fusion after refining network and global pooling operations. This is in contrast to the pyramidal feature hierarchy used for object detection. The support set in the few-shot learning task may be used to fine-tune the feature weighting value. The proposed remote sensing scene categorization model is tested on three publicly accessible datasets. To accomplish more efficient and meaningful training for the fine-tuning of a CNN backbone network, SAFFNet needs fewer unseen training samples.

    A fusion-based approach for remote sensing picture scene categorization was suggested by Yin et al. [10]. Front-side fusion, middle-side fusion, and rear-side fusion are the three kinds of fusion modes that are specified, each with its typical techniques. Many experiments were conducted, and various fusion mode combinations were tested. On widely used datasets, model accuracy and training efficiency results are shown. Random crop + multiple backbones + averaging is the most effective technique, as shown by the results of these experiments. Different fusion modes and their interactions are studied for their characteristics. Research on the fusion-based approach with a particular structure must be conducted in detail, and an external dataset should be utilized to enhance model performance. Campos-Taberner et al. [11], using Sentinel-2 time series data, sought to better comprehend a recurrent neural network for land use categorization in the European Common Agricultural Policy (CAP) setting. Using predictors to better understand network activity allows the significance of predictors throughout the categorization process to be addressed. According to the results of the study, Sentinel-2’s red and near-infrared bands contain the most relevant data. The characteristics obtained from summer acquisitions were the most significant in terms of temporal information. These findings add to the knowledge of decision-making models used in the CAP to carry out the European Green Deal (EGD), intended to combat climate change, preserve biodiversity and ecosystems, and guarantee a fair economic return for farmers. This approach should put more emphasis on making accurate predictions.

    An improved land categorization technique combining Recurrent Neural Networks (RNNs) and Random Forests (RFs) has been proposed for different research objectives by Xu et al. [12]. They made use of satellite image spatial data (i.e., time series). Pixel- and object-based categorization are the foundations of their experimental classification. Analyses have shown that this new approach to remote sensing scene categorization beats the alternatives now available by up to 87%, according to the results. This approach should concentrate on the real-time use of big, complicated picture scene categorization data. For small sample sizes with deep feature fusion, a new sparse representation-based approach is suggested by Mei et al. [13]. To take full advantage of CNNs’ feature learning capabilities, multilevel features are first retrieved from various levels of CNNs. Features can be extracted without labeled samples using existing well-trained CNNs, e.g., AlexNet, VGGNet, and ResNet50. The multilevel features are then combined using sparse representation-based classification, which is particularly useful when only a limited number of training examples are available. This approach outperforms several current methods, particularly when trained on small datasets such as UC-Merced and WHU-RS19. For the categorization of remote sensing high-resolution pictures, Petrovska et al. [14] developed a two-stream concatenation technique. Aerial images were first processed using convolutional neural networks (CNNs) pre-trained on ImageNet datasets. After the extraction, a convolutional layer’s PCA-transformed features and the average pooling layer’s retrieved features were concatenated to create a unique feature representation. In the end, the final set of characteristics was classified using an SVM classifier. The design was tested on two different sets of data, and the architecture’s outcomes were similar to those of other cutting-edge approaches. If a classifier has to be trained with a tiny ratio of the training dataset, the suggested approach may be useful. The UC-Merced dataset’s “dense residential” picture class, for example, has a high degree of inter-class similarity, and this approach may be an effective option for classifying such datasets. The correctness of this procedure must be the primary concern.

    An end-to-end local-global-fusion feature extraction (LGFFE) network for more discriminative feature representation was proposed by Lv and colleagues [15]. A high-level feature map derived from deep CNNs is used to extract global and local features from the channel and spatial dimensions, respectively. To capture spatial layout and context information across various areas, a new recurrent neural network (RNN)-based attention module is first suggested for local characteristics. The relevant weight of each area is subsequently generated using gated recurrent units (GRUs), which take a series of image-patch characteristics as input. By concentrating on the most important area, a rebalanced regional feature representation may be produced. The final feature representation is obtained by combining local and global features. End-to-end training is possible for feature extraction and feature fusion. However, this approach has the disadvantage of increasing the risk of misclassification due to a concentration on smaller geographic areas. Hong et al. [16] suggest the use of CTFCNN, a CaffeNet-based technique for effectively investigating a pre-trained CNN’s discriminating abilities. First, the pre-trained CNN model is used as a feature extractor to acquire convolutional features from several layers, FC features, and FC features based on local binary patterns (LBPs). The discriminating information from each convolutional layer is then represented using an improved bag-of-view-words (iBoVW) coding technique. Finally, various characteristics are combined for categorization using weighted concatenation. The proposed CTFCNN technique outperforms certain state-of-the-art algorithms on the UC-Merced dataset and the Aerial Image Dataset (AID), with overall accuracy up to 98.44% and 94.91%, respectively. This shows that the suggested framework is capable of describing the HSRRS picture in a specific way. The categorization performance of this technique needs improvement. When generating discriminative hyperspectral pictures, Ahmed and colleagues [17] stressed the significance of spectral sensitivities. Such a representation’s primary objective is to enhance picture content identification via the use of only the most relevant spectral channels during processing. The fundamental assumption is that each image’s information can be better retrieved using a particular set of spectral sensitivity functions for a certain category. Content-Based Image Retrieval (CBIR) evaluates these spectral sensitivity functions. Specifically for hyperspectral remote sensing retrieval and classification, they provide a new HSI dataset for the remote sensing community. This dataset and a literature dataset have both been subjected to a slew of tests. Findings show that the HSI provides a more accurate representation of picture content than the RGB presentation because of its physical measurements and optical characteristics. As the complexity of sensitivity functions increases, this approach should be refined. Considering the drawbacks of existing methods, we propose the Pass Over network for remote sensing scene categorization, a novel hybrid feature learning and end-to-end learning model that combines feature fusion and extraction with classification algorithms. In the beginning, the pictures are pre-processed; then the features are retrieved using an integration of the RESNET-50, Alexnet, and VGG-16 architectures. The characteristics are amalgamated and sent to the attention layer; after the characteristics have been fused, classification takes place. We utilize an ensemble classifier that combines the architectures of a Decision Tree and a Random Forest. The optimum findings are established via performance analysis and comparison analysis.

    3 Research Methodology

    The suggested network is part of the Hybrid Feature Learning [18] and End-to-End Learning Model category of networks. The proposed technique may be trained end-to-end, which improves classification performance compared to existing feature-based or feature learning-based approaches. The Conv2D_3, Conv2D_4, and Conv2D_5 convolutions of Alexnet are combined with the Conv2D_3, Conv2D_4, and Conv2D_5 convolutions of VGG-16 and of Resnet-50. Instead of performing picture pre-processing, the suggested approach eliminates it altogether [19,20]. The proposed approach has the advantage of requiring a considerably smaller number of training parameters; the proposed network requires a tenth of the parameters of its competitors. Because of the limited number of parameters needed by the proposed method, we are more likely to avoid the overfitting issue when training a deep CNN model on relatively small data sets. This is a significant innovation. Fig. 3 depicts the entire architecture.

    Figure 3: Overall architecture of proposed method

    For remote sensing-based scene categorization, we developed an effective and efficient feature extraction approach using machine learning classifiers. The UCI dataset had 21 classes when it was first used. Dimensionality reduction [21,22] with noise removal has been used to pre-process this data. Feature extraction based on the RESNET-50, VGG-16, and Alexnet architectures was then carried out. This data has been merged based on the multi-layer feature fusion (MFF) model. The attention layer then focuses on only certain important items. The characteristics are then retrieved and categorized as a result of this procedure. The machine learning classifiers Random Forest [23] and Decision Tree were used to classify the retrieved features. Fig. 4 depicts the suggested methodology’s implementation architecture.
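    As a rough sketch of the final classification stage, the ensemble of a Decision Tree and a Random Forest over fused feature vectors might look as follows. The fused features and labels here are random placeholders, and the feature dimension and hyperparameters are our assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, VotingClassifier

rng = np.random.default_rng(0)
# Placeholder for fused CNN features: 210 samples, 128-D vectors, 21 classes
X = rng.normal(size=(210, 128))
y = rng.integers(0, 21, size=210)

# Majority-vote ensemble combining a Decision Tree and a Random Forest
ensemble = VotingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier(max_depth=10, random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ],
    voting="hard",
)
ensemble.fit(X, y)
predictions = ensemble.predict(X)
```

    In practice the inputs would be the attention-weighted fused feature vectors rather than random data.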

    Pre-processing techniques for DR may take a variety of forms. The DR is adopted in light of the following benefits. The amount of memory needed to store the data decreases as its dimensionality shrinks [24-26]. Fewer dimensions require shorter training and calculation durations. Most feature extraction techniques struggle when dealing with data that has many dimensions. DR methods effectively deal with the multi-collinearity of various data characteristics and remove the redundancy within them. Finally, the data’s smaller size makes it easier to visualize.

    Figure 4: Architecture for implementing the suggested approach

    3.1 Hybrid Feature Learning and End-to-End Learning Model

    Fig. 5 depicts the proposed network’s design, which makes use of the RESNET-50, VGG-16, and ALEXNET backbones. Three convolution layers are utilized to convolute the input, while the rest are skipped via Pass Over connections, as stated before. If the multi-resolution feature maps are designated X1, X2, and X3, a matrix is formed by concatenating them along the channel dimension. The resulting multi-resolution feature maps are then aggregated using a multi-scale pooling layer. The FC layer and SoftMax layer follow. We next go through the two newest additions: Pass Over connections and multi-scale pooling.

    Figure 5: Architecture of the proposed POEP network

    For illustrative purposes, the backbone consists of the off-the-shelf Resnet-50, Alexnet, and VGG-16. The Pass Over connection operation and a multi-scale pooling approach combine the feature maps from several layers. SVD refers to the singular value decomposition, whereas Vec indicates the vectorization process. Concat refers to the concatenate operation. CWAvg stands for channel-wise average pooling, whereas Avg indicates average pooling on the network as a whole. The planned POEP network is classified as an end-to-end learning system (also known as a hybrid system). Our methodology may be trained using a hybrid feature learning and end-to-end learning strategy, which enhances classification performance in comparison to hand-crafted feature-based methods or feature learning-based techniques. It also exhibits competitive classification performance compared to existing methods. In comparison to other approaches, ours has the benefit of needing a much smaller set of training parameters: the parameters needed by our POEP network’s competitors are reduced by 90%. As a result of our methodology’s fewer parameters, we are more likely to avoid overfitting problems while training a deep CNN model on a small data set. Alexnet and Resnet-50 are used as Pass Over connections in the suggested approach [27-30].

    3.2 Multi-Layer Aggregation Passover Connections

    Let’s say there are three sets of feature maps accessible, all with the same resolution.

    X1 ∈ R^(H×W×D1), X2 ∈ R^(H×W×D2), and X3 ∈ R^(H×W×D3). To get the multi-resolution aggregated feature map X, the following connection method is used:

    X = [X1; X2; X3] ∈ R^(H×W×(D1+D2+D3))

    In this case, [X1; X2; X3] represents concatenation along the third (channel) dimension. Fig. 6 shows an example of the Pass Over connection method for three different feature maps. There are two reasons for aggregating multi-layer feature maps with Pass Over links [31-33]. First, in classification and object recognition tasks, scale variance is an issue that must be addressed; the CNN model naturally generates feature maps in a pyramidal form through its hierarchical layers.

    Second, the information contained in the feature maps generated from different levels is complementary. An example using feature maps from different layers of Alexnet is shown in Fig. 7. By using the Pass Over connection, one may take advantage of the feature maps’ diverse set of characteristics to improve classification precision.
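    The aggregation itself can be sketched in a few lines of numpy: each map is average-pooled down to a common spatial resolution and the results are concatenated along the channel axis. The function names, and the assumption that the spatial sizes divide evenly, are ours:

```python
import numpy as np

def avg_pool2d(x, k):
    """Average-pool an H x W x D feature map with a k x k window and stride k."""
    H, W, D = x.shape
    return x.reshape(H // k, k, W // k, k, D).mean(axis=(1, 3))

def pass_over_concat(feature_maps, target_hw):
    """Pool each map down to target_hw x target_hw, then concatenate along
    channels, giving X = [X1; X2; X3] with D1 + D2 + D3 channels."""
    pooled = []
    for m in feature_maps:
        k = m.shape[0] // target_hw
        pooled.append(avg_pool2d(m, k) if k > 1 else m)
    return np.concatenate(pooled, axis=-1)
```

    For instance, maps of shapes 16×16×D1 and 8×8×D2 pooled to an 8×8 grid concatenate into an 8×8×(D1+D2) tensor.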

    Figure 7: Graphical example illustrating the feature maps extracted from different layers of Alexnet for three different images. (a) Input image. (b) Feature map from the third convolutional layer. (c) Feature map from the fourth convolutional layer. (d) Feature map from the fifth convolutional layer

    Note that average pooling is used to combine feature maps with varying spatial resolutions. Prior to concatenating the feature maps, CWAvg pooling is used to decrease the number of channels in each set by a factor of 2. A comprehensive mathematical description of CWAvg pooling follows.

    Let Y = [Y1; Y2; ...; YL] ∈ R^(H×W×L) denote the 3-D feature map tensor. Assuming stride k, CWAvg pooling is defined as

    Zj = (1/k) · (Y(j−1)k+1 + ... + Yjk), j = 1, ..., L/k

    Consequently, Z = [Z1, Z2, ..., ZL/k] ∈ R^(H×W×(L/k)) is produced as the output feature map tensor. In practice, we choose k such that L is divisible by k, so that L/k is an integer.
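    A direct numpy transcription of this pooling, assuming as above that L is divisible by k:

```python
import numpy as np

def cwavg_pool(y, k):
    """Channel-wise average pooling: average each group of k consecutive
    channels of an H x W x L tensor, producing H x W x (L/k)."""
    H, W, L = y.shape
    assert L % k == 0, "choose k so that L is divisible by k"
    return y.reshape(H, W, L // k, k).mean(axis=-1)
```

    With k = 2 this halves the channel count, as used before the concatenation step.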

    Forward Propagation of Multi-scale Pooling

    The forward propagation of multi-scale pooling is performed as follows for a feature matrix X ∈ R^(D×N), where D = D1 + D2 + D3 is the dimensionality of the features and N = H × W is the number of features. To begin, the multi-scale second-order matrix C is calculated:

    C = (1/N) X X^T ∈ R^(D×D)

    Next, the eigendecomposition C = U Σ U^T is computed, where U is the eigenvector matrix and Σ the eigenvalue matrix of C. The vectorization of F is denoted f. Because the matrix F is symmetric, only the rows and columns in its upper triangle need to be vectorized, so the vector f has dimension D(D+1)/2.
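    A rough numpy illustration of the forward pass is given below. The covariance-style definition of C follows the reconstruction above, and for simplicity we take F = C, i.e., we skip any spectral normalization of the eigenvalues; both choices are our assumptions:

```python
import numpy as np

def multiscale_pool_forward(X):
    """X: D x N feature matrix -> f: vectorized upper triangle of the
    second-order matrix, length D(D+1)/2. Here F = C (no eigenvalue
    normalization), which is a simplifying assumption."""
    D, N = X.shape
    C = X @ X.T / N            # D x D symmetric second-order matrix
    iu = np.triu_indices(D)    # upper triangle: D(D+1)/2 entries
    return C[iu]
```

    For D = 3 features the pooled descriptor f has 3·4/2 = 6 entries, regardless of N.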

    Backward Propagation of Multi-scale Pooling

    Multi-scale pooling uses global and structured matrix calculations instead of the conventional max or average pooling methods, which treat the spatial coordinates of the intermediate variable (a matrix or a vector) separately. To calculate the partial derivative of the loss function L with respect to the multi-scale pooling input matrix, we use the matrix back-propagation technique. We first treat ∂L/∂F, ∂L/∂U, and ∂L/∂Σ as the partial derivatives transmitted from the higher FC layer. The chain rule expression is as follows:

    The variation of the relevant variable is denoted d(·). The matrix inner product is represented by the symbol :, with A : B = tr(A^T B). The following formulation may be derived from (5):

    Putting (6) into (5), ∂L/∂U and ∂L/∂Σ are obtained as follows:

    Given ∂L/∂U and ∂L/∂Σ, let us compute ∂L/∂C through the eigendecomposition (EIG) of C, with C = U Σ U^T. The full chain rule expression is:

    As in (6), the variation of matrix C may be obtained.

    Eqs. (8) and (9) may be combined with the properties of the matrix inner product and of the EIG to obtain the following partial derivative of the loss function L with respect to C:

    where ∘ denotes the Hadamard product, (·)sym denotes the symmetrization operation, (·)diag is (·) with all off-diagonal elements set to 0, and K is computed from the eigenvalues σ as shown in the following:

    Further details on calculating (7) and (10) are given above. Lastly, given ∂L/∂C, the partial derivative of the loss function L with respect to the feature matrix X has the form:

    3.3 Network Architecture for Proposed VGG-16

    Tab. 1 lists the VGG-16’s architectural specs. It has 3 FC layers and five blocks of 3×3 convolutional layers, each with a stride size of 1. The stride for the 2×2 pooling layers is 2, while the input picture size in VGG-16 is 224×224 by default. Every time a pooling layer is applied, the feature map is shrunk by a factor of two. The final 7×7 feature map with 512 channels is expanded into a vector with 25,088 (7×7×512) elements before the FC layers are applied.

    VGG-16 uses five convolutional layer blocks to process 224×224 input images. As the network deepens, the number of 3×3 filters per block increases. Convolutions use a stride of 1 and pad their inputs to maintain spatial resolution. Max-pooling layers, with 2×2 windows and a stride of 2, separate the blocks. Three FC layers follow the convolutional layers. Finally, the soft-max layer is applied, where the class probabilities are calculated. The complete network model is shown in Fig. 8.
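    The size bookkeeping above can be verified with a few lines of arithmetic. This is only the standard VGG-16 block layout, not the paper's code:

```python
def vgg16_feature_shapes(input_hw=224):
    """Track spatial size and channel count through VGG-16's five conv blocks.
    3x3 convs (stride 1, 'same' padding) keep the spatial size; the 2x2
    max-pool (stride 2) closing each block halves it."""
    blocks = [(2, 64), (2, 128), (3, 256), (3, 512), (3, 512)]  # (convs, channels)
    hw, shapes = input_hw, []
    for _, channels in blocks:
        hw //= 2                      # effect of the block's max-pool
        shapes.append((hw, hw, channels))
    return shapes

# Final map: 7 x 7 x 512, flattened to 25,088 features for the first FC layer.
```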

    3.4 Ensemble Classifier [Random Forest and Decision Tree]

    The random forest is a combined classifier created by merging K basic decision trees. It is built from the original data as follows.

    Table 1: Architectural parameters for VGG-16

    Figure 8: Network model for VGG-16

    From the original dataset, randomly select sub-datasets (xk, yk) ⊂ (X, Y) to construct the classifiers hk(x); the combined classifier can then be described as

    H(x) = arg max_y Σ_{k=1}^{K} I(hk(x) = y)

    The random forest method generates K training subsets from the original dataset using the bagging sampling approach. Approximately two-thirds of the original dataset is used for each training subset; samples are drawn at random with replacement. In each draw, a given sample’s chance of being selected is 1/m, so the probability of not being selected is (1 − 1/m). The probability that a sample is never collected in m draws is therefore (1 − 1/m)^m, which approaches 1/e ≈ 0.368 as m → ∞. That is to say, each cycle of random sampling and bagging misses approximately 36.8% of the data in the training set. These unsampled data, about 36.8% of the total, are referred to as “Out of Bag” (OOB). Because they have not been fitted by the training-set model, they may be used to evaluate its generalization capability. Bagging is used to create K decision trees from the K training subsets. The decision tree method in random forests uses the CART algorithm, which is quite popular right now. The CART algorithm’s nucleus is its node-splitting technique: nodes are split using the GINI coefficient.
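    The ~36.8% out-of-bag figure follows directly from the limit (1 − 1/m)^m → 1/e, which a quick check confirms:

```python
import math

def oob_fraction(m):
    """Probability that a given sample is never drawn in m draws with
    replacement from a set of m samples: (1 - 1/m)^m."""
    return (1.0 - 1.0 / m) ** m

# As m grows, oob_fraction(m) approaches 1/e ~ 0.368, the OOB share.
```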

    The Gini coefficient is a measure of the likelihood that a randomly chosen sample from the set will be misclassified. A smaller Gini index indicates a purer collection, i.e., a lower chance that the chosen sample is misclassified.

    That is the Gini index (Gini impurity) = (probability of the sample being selected) * (probability of the sample being misclassified).

    1. When the probability that a sample belongs to the kth category is pk, the likelihood that the sample is misclassified is (1 − pk).

    2. A sample may belong to any of the K categories; summing over them gives Gini(p) = Σ_{k=1}^{K} pk(1 − pk) = 1 − Σ_{k=1}^{K} pk².

    3. For a two-class problem, Gini(p) = 2p(1 − p).

    If a feature value is used to split a sample set into D1 and D2, there are only two sets: D1 contains the samples that equal the given feature value and D2 those that do not. CART (classification and regression) trees are binary trees; features with multiple values are handled through a sequence of binary splits.

    To measure the purity of the resulting subsets, the sample set D is divided in two using the partition "feature = a certain feature value", and the weighted Gini index of the two subsets is computed.

    This means that when a feature A has more than two values, each possible value Ai is tried in turn as the dividing point of sample set D (where Ai denotes a possible value of characteristic A), and the purity Gini(D, Ai) of the resulting subsets is calculated. The value with the lowest Gini index among all candidates Gini(D, Ai) is then selected: that division is the optimal split point for sample set D.
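Under these definitions, choosing the optimal split point amounts to minimizing the weighted Gini of the two subsets. A minimal sketch (plain Python; the toy records and field names are hypothetical):

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def split_gini(D, feature, value):
    """Weighted Gini(D, Ai) after splitting on feature == value vs feature != value."""
    d1 = [r["label"] for r in D if r[feature] == value]
    d2 = [r["label"] for r in D if r[feature] != value]
    n = len(D)
    return len(d1) / n * gini(d1) + len(d2) / n * gini(d2)

def best_split(D, feature):
    """Try each value Ai of the feature; keep the one with the lowest Gini(D, Ai)."""
    return min({r[feature] for r in D}, key=lambda v: split_gini(D, feature, v))

data = [
    {"texture": "smooth", "label": "river"},
    {"texture": "smooth", "label": "river"},
    {"texture": "rough", "label": "forest"},
    {"texture": "mixed", "label": "forest"},
]
print(best_split(data, "texture"))  # "smooth" separates the two classes perfectly
```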

    3.5 Algorithm Description of the Random Forest

    After random sampling, each resulting decision tree can be trained on its data. By the design of random forests, the decision trees are highly independent of one another, which in turn ensures the independence of the result produced by each tree. The remaining work consists of two parts: training each decision tree to produce a result, and voting to select the optimal solution among those results. Fig.9 shows the tree.

    Algorithm stages may be summed up by the following description:

    Step1: Each node of the decision tree is built from s features chosen at random from the dataset's full feature set S. The number s does not change as the tree grows.

    Figure 9: Classification tree topology

    Step2: Uses the GINI technique to divide the node.

    Step3: Train each decision tree on its training subset

    Step4: Vote to determine the optimal solution. Definition 1: for a group of classifiers h1(x), h2(x), ..., hk(x), and a random vector (X, Y) drawn from the dataset, the margin function is defined as

    mg(X, Y) = avk I(hk(X) = Y) - maxj≠Y avk I(hk(X) = j)

    where avk denotes the average over the k classifiers,

    and I(·) is the indicator function: it is 1 if the expression in parentheses holds true and 0 otherwise.

    The margin function measures the extent to which the average vote for the correct class exceeds the average vote for any other class; the larger its value, the more reliable the classification.

    The generalization error is as follows:

    PE* = PX,Y(mg(X, Y) < 0)

    For a set of decision trees with random parameter sequences Θ1, Θ2, ..., Θκ, as the number of trees increases the error converges to

    PX,Y( PΘ(h(X, Θ) = Y) - maxj≠Y PΘ(h(X, Θ) = j) < 0 )

    As described in the random forest approach, the random selection of both the samples and the attributes helps prevent over-fitting.
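The sampling, training, and voting steps above can be sketched end to end; this toy version (plain Python, illustrative only, with hypothetical class votes) bootstraps the data for each tree and resolves the final label by majority vote:

```python
import random
from collections import Counter

def bootstrap(data, rng):
    """Step 1's sampling: draw with replacement; ~36.8% of rows land out of bag."""
    return [rng.choice(data) for _ in data]

def majority_vote(predictions):
    """Step 4: each tree votes; the modal class wins."""
    return Counter(predictions).most_common(1)[0][0]

rng = random.Random(42)
data = list(range(100))
sample = bootstrap(data, rng)
oob = set(data) - set(sample)
print(len(oob) / len(data))  # empirically close to 0.368

# Hypothetical votes from K = 5 trees on one image:
print(majority_vote(["river", "beach", "river", "river", "forest"]))  # "river"
```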

    4.Experimental Setup

    The UC Merced Land Use dataset contains 21 scene classes: agricultural, airplane, baseball diamond, beach, buildings, chaparral, dense residential, forest, freeway, golf course, harbor, intersection, medium residential, mobile home park, overpass, parking lot, river, runway, sparse residential, storage tanks, and tennis court. Each class has 100 images, each 256×256 pixels in size.

    4.1 Performance Evaluation

    Fig.10 shows the confusion matrices and ROC curves of Alexnet with the Decision Tree classifier, Alexnet with the Random Forest classifier, Resnet-50 with the Decision Tree classifier, Resnet-50 with the Random Forest classifier, VGG-16 with the Decision Tree classifier, and VGG-16 with the Random Forest classifier for the 21 classes (agricultural, airplane, baseball diamond, beach, buildings, chaparral, dense residential, forest, freeway, golf course, harbor, intersection, medium residential, mobile home park, overpass, parking lot, river, runway, sparse residential, storage tanks, and tennis court). The ROC curves are plotted as true positive rate against false positive rate.

    Tab.2 shows the extracted feature classes after concatenating the Conv2D_3, Conv2D_4, and Conv2D_5 layers of Alexnet with the corresponding Conv2D_3, Conv2D_4, and Conv2D_5 layers of VGG-16 and of Resnet-50.
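The fusion step can be illustrated with a small sketch (NumPy; the layer shapes here are hypothetical stand-ins, not the exact Alexnet/VGG-16/Resnet-50 dimensions): each convolutional feature map is globally average-pooled to a vector, and the vectors are concatenated into one descriptor for the classifier.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for Conv2D_3 / Conv2D_4 / Conv2D_5 activations, shaped (H, W, C)
conv3 = rng.standard_normal((28, 28, 384))
conv4 = rng.standard_normal((28, 28, 384))
conv5 = rng.standard_normal((28, 28, 256))

# Global average pooling collapses each map to a C-length vector;
# concatenation fuses the three stages into a single feature descriptor.
fused = np.concatenate([m.mean(axis=(0, 1)) for m in (conv3, conv4, conv5)])
print(fused.shape)  # (1024,)
```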

    Figure 10: The confusion matrix and ROC curve for the 21 classes of (a) & (b) Alexnet with the decision tree classifier, (c) & (d) Alexnet with the random forest classifier, (e) & (f) Resnet-50 with the decision tree classifier, (g) & (h) Resnet-50 with the random forest classifier, (i) & (j) VGG-16 with the decision tree classifier, (k) & (l) VGG-16 with the random forest classifier

    Table 2: Featured extracted classes

    The analysis shows that feeding the input to VGG-16, Resnet-50, or Alexnet alone yields lower accuracy than the proposed model; the time required to train the existing models is shown in Tab.3.

    Table 3: Comparison between the existing and proposed time requirements

    For RESNET50, the training time per epoch is 10.185 min with an accuracy of 0.0633, while the proposed feature extraction takes 52 min and classification takes 15 s. The accuracy of Alexnet [21] is about 90.21%, the training accuracy of Resnet50 is about 62.01%, VGG16 reaches 91.85%, and the proposed architecture achieves the highest accuracy, about 97.3%.

    5 Conclusion

    This paper proposes the Pass Over network for remote sensing scene categorization, a novel hybrid feature-learning and end-to-end learning model. The Pass Over connection procedure, followed by a multi-scale pooling approach, introduces two new components: pass over connections and feature maps from various levels. In addition to combining multi-resolution feature maps from different layers of the CNN model, the Pass Over network can exploit high-order information to achieve more representative feature learning. The existing ALEXNET, VGG16, and RESNET50 models were found to be less accurate than the proposed model; the proposed feature extraction takes 52 min, with a classification time of 15 s. Alexnet's accuracy is 90.21%, Resnet50's training accuracy is 62.01%, VGG16 reaches 91.85%, and the proposed architecture obtains the highest accuracy, about 97.3%.

    Acknowledgement:We deeply acknowledge Taif University for supporting this study through Taif University Researchers Supporting Project Number (TURSP-2020/115), Taif University, Taif, Saudi Arabia.

    Funding Statement:This research is funded by Taif University, TURSP-2020/115.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
