
    A New Method for Scene Classification from the Remote Sensing Images

    2022-08-24 12:56:20
    Computers, Materials & Continua, 2022, Issue 7

    Purnachand Kollapudi, Saleh Alghamdi, Neenavath Veeraiah, Youseef Alotaibi, Sushma Thotakura and Abdulmajeed Alsufyani

    1Department of CSE, B V Raju Institute of Technology, Narsapur, Medak, Telangana, India

    2Department of Information Technology, College of Computers and Information Technology, Taif University, Taif, 21944, Saudi Arabia

    3Department of Electronics and Communications, DVR&DHS MIC Engineering College, Kanchikacharla, Vijayawada, A.P., India

    4Department of Computer Science, College of Computer and Information Systems, Umm Al-Qura University, Makkah, 21955, Saudi Arabia

    5Department of ECE, P.V.P Siddhartha Institute of Technology, Vijayawada, India

    6Department of Computer Science, College of Computers and Information Technology, Taif University, Taif, 21944, Saudi Arabia

    Abstract: The task of classifying remote sensing images based on their contents has applications in a variety of areas. In recent years, remote sensing image scene classification has attracted considerable research interest. Remote sensing image scene understanding (RSISU) research covers remote sensing image scene retrieval and scene-driven remote sensing image object identification. The emergence of deep learning (DL) methods in recent years has led to major breakthroughs in remote sensing image classification, opening new research and development possibilities for RS image classification. We propose a new network, called Pass Over (POEP), that combines feature learning and end-to-end learning to address remote sensing image scene understanding. This article presents a method that combines feature fusion and extraction with classification algorithms for remote sensing scene categorization. POEP offers two advantages. First, multi-resolution feature mapping is performed using the Pass Over connections, which combine the resolution-specific feature maps generated by the CNN; this is critical for handling the variation in RSISU data sets. Second, enhanced pooling makes full use of the multi-resolution feature maps, which carry second-order information, enabling CNNs to cope better with RSISU problems through more representative feature learning. The data for this paper come from a UCI dataset with 21 image classes. The images are first pre-processed, and features are then extracted using an integration of the ResNet-50, AlexNet, and VGG-16 architectures. The extracted features are fused and passed to an attention layer, after which classification takes place. Our classification algorithm uses an ensemble classifier built from a Decision Tree and a Random Forest, and the optimal results are identified through performance and comparison analysis.

    Keywords: Remote sensing; RSISU; DL; RESNET-50; VGG-16

    1 Introduction

    Information obtained through remote sensing, which provides important data about the Earth’s surface, enables precise measurement and monitoring of geographical features [1]. The number of remote sensing images is growing rapidly thanks to recent improvements in earth observation technologies, which has made the search for ways to fully exploit these images for intelligent earth observation increasingly urgent. To make sense of large and complex remote sensing images, it is crucial to understand them completely. Because understanding remote sensing data is a difficult and still unsolved problem, research on scene categorization [2,3] of remote sensing images has been very active. The function of remote sensing image classification is to correctly label remote sensing images with pre-set semantic categories, as illustrated in Fig. 1. Advanced research on remote sensing image scene classification [4], spanning urban planning, natural hazard identification, environment monitoring, vegetation mapping, and geospatial object recognition, has emerged because of the real-world significance of these fields [5,6].

    Figure 1: Classifying remote sensing imagery

    Assigning a specific semantic label to a scene, such as “urban” or “forest,” is an example of land-use scene categorization. Advances in satellite sensors are enabling a massive rise in the amount of high-resolution remote sensing image data. In order to create intelligent databases, it is essential to apply robust and efficient categorization techniques to huge collections of remote sensing images. Classifying aerial or satellite images using computer vision methods is of great interest. For example, the bag-of-visual-words (BOVW) paradigm clusters the local visual features collected from a set of images to create a set of visual words (i.e., a visual vocabulary); a histogram then records how often each word appears in a given image. The BOVW model has proven particularly effective for classifying remote sensing images of land-use scenes. However, it ignores the spatial information in the images. Integrating texture information into remote sensing land-use image data can enhance the BOVW model’s performance. Fig. 2 shows the development of remote sensing image classification as a progression from pixel-level to object-level to scene-level categorization. Given the variety of remote sensing image classification systems, we use the generic phrase “remote sensing image classification” rather than “remote sensing image classification technology.” Historically, scholars categorized remote sensing images by labelling each pixel with a semantic class, because the spatial resolution of early remote sensing images was extremely poor; this is comparable to how scenes are represented in the early scientific literature. Pixel-level labelling remains an active research subject for multispectral and hyperspectral remote sensing image analysis.
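The BOVW histogram step described above can be sketched as follows, assuming the visual vocabulary has already been learned (e.g., by k-means); the vocabulary, descriptors, and sizes below are illustrative, not taken from the paper:

```python
import numpy as np

def bovw_histogram(descriptors, vocabulary):
    """Assign each local descriptor to its nearest visual word and
    count occurrences, yielding a fixed-length image representation."""
    # pairwise distances: (n_descriptors, n_words)
    dists = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
    words = dists.argmin(axis=1)                       # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()                           # normalised histogram

# toy example: 4 visual words in 2-D, 6 local descriptors from one image
vocab = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
desc = np.array([[0.1, 0.1], [0.9, 0.1], [0.1, 0.9],
                 [0.9, 0.9], [0.0, 0.1], [0.1, 0.0]])
h = bovw_histogram(desc, vocab)
```

As noted in the text, such a histogram discards where in the image each word occurred, which is why spatial or texture information must be added separately.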

    Figure 2: Classification of remote sensing images on three different levels

    Computational time and memory utilization have seen important advances in computer vision. Classifiers, however, must combine strong generalization ability with high performance. Remote sensing image characterization is a growing area of study. Feature-based methods, a step beyond data mining strategies, have yielded additional performance measures for remote sensing image analysis. Image classification is an important application of computer vision in this field. Our main goal is to advance machine learning methods for remote sensing image categorization. The information contained in satellite images, such as buildings, landscapes, deserts, and structures, is categorized and analysed over time [7].

    This paper presents a method that combines feature fusion and extraction with classification algorithms for remote sensing scene categorization. POEP offers two advantages. First, multi-resolution feature mapping is performed using the Pass Over connections, which combine the resolution-specific feature maps generated by the CNN; this is critical for handling the variation in RSISU data sets. Second, enhanced pooling makes full use of the multi-resolution feature maps, which carry second-order information, enabling CNNs to cope better with RSISU problems through more representative feature learning. The images are first pre-processed, and features are then extracted using an integration of the ResNet-50, AlexNet, and VGG-16 architectures. The extracted features are fused and passed to an attention layer, after which classification takes place. We use an ensemble classifier built from a Decision Tree and a Random Forest, and the optimal results are identified through performance and comparison analysis.

    The remainder of the article is structured as follows: Section 2 presents relevant literature on the categories that have been observed. Section 3 outlines the proposed process. Section 4 presents the results. Section 5 summarizes the conclusions.

    2 Related Works

    The RSSCNet model recommended by Sheng-Chieh et al. [8] requires only a few iterations when used with a two-stage learning-rate training policy and no-freezing transfer learning, achieving a high degree of precision. Using data augmentation, regularization, and an early-stopping approach, the limited generalization observed during fast deep neural network training can also be addressed. According to the experiments, the model and training methods presented in that article outperform existing models in terms of accuracy. To be effective, the approach must concentrate on image-rectification pre-processing when outliers are suspected, and combine various explainable artificial intelligence technologies to enhance interpretability. Kim et al. [9] proposed SAFFNet, a self-attention feature selection module integrated with a multi-scale feature fusion network for few-shot remote sensing scene categorization. For a few-shot remote sensing classification task, informative representations of images with different receptive fields are automatically selected and re-weighted for feature fusion after refining network and global pooling operations, in contrast to the pyramidal feature hierarchy used for object detection. The support set in the few-shot learning task can be used to fine-tune the feature weighting values. The proposed remote sensing scene categorization model is tested on three publicly accessible datasets. SAFFNet needs fewer unseen training samples to accomplish efficient and meaningful fine-tuning of a CNN backbone network.

    Yin et al. [10] suggested a fusion-based approach for remote sensing image scene categorization, specifying three kinds of fusion modes: front-side fusion, middle-side fusion, and rear-side fusion, each with typical techniques. Extensive experiments test various fusion-mode combinations, and model accuracy and training-efficiency results are reported on widely used datasets. Random crop + multiple backbones + averaging proved the most effective technique, and the characteristics of the different fusion modes and their interactions are studied. The fusion-based approach with a particular structure still needs detailed research, and an external dataset should be utilized to enhance model performance. Campos-Taberner et al. [11] used Sentinel-2 time series to better understand a recurrent neural network for land-use categorization in the context of the European Common Agricultural Policy (CAP). Using predictors to interpret network activity makes it possible to assess the importance of each predictor throughout the categorization process. According to the results, Sentinel-2’s red and near-infrared bands contain the most relevant data, and among the temporal information, the features obtained from summer acquisitions were the most significant. These findings add to the knowledge of decision-making models used in the CAP to carry out the European Green Deal (EGD), intended to combat climate change, preserve biodiversity and ecosystems, and guarantee a fair economic return for farmers. The approach should place more emphasis on making accurate predictions.

    Xu et al. [12] proposed an improved land categorization technique combining Recurrent Neural Networks (RNNs) and Random Forests (RFs) for different research objectives, making use of spatial satellite image data (i.e., time series). Pixel- and object-based categorization form the foundation of their experimental classification. Analyses show that this approach to remote sensing scene categorization beats the available alternatives by up to 87%. The approach should concentrate on real-time use of large, complicated image scene categorization data. For small sample sizes with deep feature fusion, Mei et al. [13] suggested a new sparse-representation-based approach. To take full advantage of CNNs’ feature learning capabilities, multilevel features are first retrieved from various levels of existing well-trained CNNs, e.g., AlexNet, VGGNet, and ResNet-50, without requiring labeled samples. The multilevel features are then combined using sparse-representation-based classification, which is particularly useful when only a limited number of training examples is available. This approach outperforms several current methods, particularly when trained on small datasets such as UC-Merced and WHU-RS19. For the categorization of high-resolution remote sensing images, Petrovska et al. [14] developed a two-stream concatenation technique. Aerial images were first processed using convolutional neural networks (CNNs) pre-trained on ImageNet. After extraction, a convolutional layer’s PCA-transformed features and the average pooling layer’s features were concatenated to create a unique feature representation, and the final set of features was classified with an SVM classifier. The design was tested on two different datasets, and its outcomes were similar to those of other cutting-edge approaches. The suggested approach may be useful when a classifier has to be trained on a small fraction of the training dataset. The UC-Merced dataset’s “dense residential” image class, for example, has a high degree of inter-class similarity, and this approach may be an effective option for classifying such datasets. The correctness of the procedure must remain the primary concern.

    Lv et al. [15] proposed an end-to-end local-global-fusion feature extraction (LGFFE) network for more discriminative feature representation. A high-level feature map derived from deep CNNs is used to extract global and local features from the channel and spatial dimensions, respectively. To capture spatial layout and context information across different regions, a new recurrent neural network (RNN)-based attention module is introduced for local features. The relevant weight of each region is then generated using gated recurrent units (GRUs), which take a sequence of image-patch features as input. By concentrating on the most important region, a re-balanced regional feature representation can be produced, and the final representation is obtained by combining local and global features. Feature extraction and feature fusion can be trained end-to-end. However, this approach increases the risk of misclassification because it concentrates on smaller geographic areas. Hong et al. [16] suggested CTFCNN, a CaffeNet-based technique for effectively investigating a pre-trained CNN’s discriminating ability. First, the pre-trained CNN model is used as a feature extractor to acquire convolutional features from several layers, FC features, and FC features based on local binary patterns (LBPs). The discriminating information from each convolutional layer is then represented using an improved bag-of-view-words (iBoVW) coding technique. Finally, the various features are combined for categorization using weighted concatenation. The proposed CTFCNN technique outperforms certain state-of-the-art algorithms on the UC-Merced dataset and the Aerial Image Dataset (AID), with overall accuracy up to 98.44% and 94.91%, respectively, showing that the framework can describe the HSRRS image in a specific way. The categorization performance of this technique needs improvement. When generating discriminative hyperspectral images, Ahmed et al. [17] stressed the significance of spectral sensitivities. The primary objective of such a representation is to enhance image content identification by using only the most relevant spectral channels during processing. The fundamental assumption is that each image’s information can be better retrieved using a particular set of spectral sensitivity functions for a certain category; Content-Based Image Retrieval (CBIR) evaluates these spectral sensitivity functions. Specifically for hyperspectral remote sensing retrieval and classification, the study provides a new HSI dataset for the remote sensing community, and both this dataset and a literature dataset have been subjected to extensive tests. Findings show that the HSI provides a more accurate representation of image content than the RGB presentation because of its physical measurements and optical characteristics. As the complexity of sensitivity functions increases, this approach should be refined. Considering the drawbacks of existing methods, we propose the Pass Over network for remote sensing scene categorization, a novel hybrid feature learning and end-to-end learning model that combines feature fusion and extraction with classification algorithms. The images are first pre-processed, and features are then extracted using an integration of the ResNet-50, AlexNet, and VGG-16 architectures. The extracted features are fused and passed to an attention layer, after which classification takes place. We use an ensemble classifier built from a Decision Tree and a Random Forest, and the optimal results are identified through performance and comparison analysis.

    3 Research Methodology

    The suggested network belongs to the hybrid feature learning [18] and end-to-end learning category of networks. The proposed technique can be trained end-to-end, which improves classification performance compared to existing feature-based or feature-learning-based approaches. The convolutions Conv2D_3, Conv2D_4, and Conv2D_5 of AlexNet are combined with the VGG-16 Conv2D_3, Conv2D_4, and Conv2D_5 and with the ResNet-50 Conv2D_3, Conv2D_4, and Conv2D_5. Rather than performing image pre-processing, the suggested approach eliminates it altogether [19,20]. The proposed approach has the advantage of requiring a considerably smaller number of training parameters; the proposed network requires a tenth of the parameters of its competitors. Because of the limited number of parameters needed by the proposed method, we are more likely to avoid overfitting when training a deep CNN model on relatively small data sets. This is a significant innovation. Fig. 3 depicts the entire architecture.

    Figure 3: Overall architecture of proposed method

    For remote sensing scene categorization, we developed an effective and efficient feature extraction approach using machine learning classifiers. The UCI dataset contains 21 classes. Dimensionality reduction [21,22] with noise removal is used to pre-process the data. Features are then extracted based on the ResNet-50, VGG-16, and AlexNet architectures and merged using the multi-layer feature fusion (MFF) model. The attention layer then focuses on just the most important items. Finally, the resulting features are retrieved and categorized: the machine learning classifiers Random Forest [23] and Decision Tree are used to classify the extracted features. Fig. 4 depicts the suggested methodology’s implementation architecture.
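The final classification stage described above can be sketched with scikit-learn; this is a hedged illustration, not the authors' exact configuration, and the random vectors below merely stand in for the fused CNN features of two scene classes:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# stand-ins for fused 8-D feature vectors of two scene classes (e.g. forest vs. urban)
X = np.vstack([rng.normal(0.0, 0.3, (40, 8)), rng.normal(2.0, 0.3, (40, 8))])
y = np.array([0] * 40 + [1] * 40)

# ensemble of a single Decision Tree and a Random Forest, voting on the label
clf = VotingClassifier(
    estimators=[("dt", DecisionTreeClassifier(random_state=0)),
                ("rf", RandomForestClassifier(n_estimators=50, random_state=0))],
    voting="hard",
)
clf.fit(X, y)
acc = clf.score(X, y)
```

Hard voting simply takes the majority label across the two base learners, matching the ensemble-of-trees idea used later in Section 3.4.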

    Pre-processing techniques for dimensionality reduction (DR) can take a variety of forms, and DR is used here for its benefits. The amount of memory needed to store the data decreases as its size shrinks [24-26]. Fewer dimensions mean shorter training and calculation times. Most feature extraction techniques struggle with high-dimensional data. DR methods effectively deal with the multi-collinearity of data characteristics and remove redundancy within them. Finally, the data’s smaller size makes it easier to visualize.
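A dimensionality reduction step of this kind can be sketched with a plain SVD-based PCA; this is an illustration of the general idea, not the authors' specific DR method, and the random matrix stands in for the image descriptors:

```python
import numpy as np

def pca_reduce(X, k):
    """Project feature vectors onto their top-k principal components,
    shrinking storage and training time as described in the text."""
    Xc = X - X.mean(axis=0)                     # centre the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                        # (n_samples, k)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 50))                  # 50-D features stand in for descriptors
Z = pca_reduce(X, 10)                           # reduced to 10 dimensions
```

Because the projection directions are orthogonal, the reduced features are decorrelated, which also addresses the multi-collinearity issue mentioned above.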

    Figure 4: Architecture for implementing the suggested approach

    3.1 Hybrid Feature Learning and End-to-End Learning Model

    Fig. 5 depicts the proposed network’s design, which makes use of the ResNet-50, VGG-16, and AlexNet backbones. Three convolution layers are used to convolve the input, while the rest are skipped via Pass Over connections, as stated before. If the multi-resolution feature maps are designated as X, a matrix is formed along the channel dimension of the feature maps. The resulting multi-resolution feature maps are then aggregated using a multi-scale pooling layer, followed by the FC layer and the SoftMax layer. We now describe the two new additions: Pass Over connections and multi-scale pooling.

    Figure 5: Architecture of the proposed POEP network

    For illustrative purposes, the backbone consists of the off-the-shelf ResNet-50, AlexNet, and VGG-16. The Pass Over connection operation and a multi-scale pooling approach combine the feature maps from several layers. SVD refers to singular value decomposition, Vec indicates the vectorization operation, and Concat refers to concatenation. CWAvg stands for channel-wise average pooling, whereas Avg indicates average pooling over the network as a whole. The proposed POEP network is classified as an end-to-end (hybrid) learning system. Our methodology can be trained using a hybrid feature learning and end-to-end learning strategy, which enhances classification performance compared to hand-crafted feature-based methods or feature-learning-based techniques, and it exhibits competitive classification performance compared to existing methods. In comparison to other approaches, ours has the benefit of needing a much smaller set of training parameters: the parameters needed by the POEP network’s competitors are reduced by 90%. With fewer parameters, we are more likely to avoid overfitting while training a deep CNN model on a small data set. AlexNet and ResNet-50 are used as Pass Over connections in the suggested approach [27-30].

    3.2 Multi-Layer Aggregation Passover Connections

    Suppose three sets of feature maps with the same spatial resolution are available, X1 ∈ R^(H×W×D1), X2 ∈ R^(H×W×D2), and X3 ∈ R^(H×W×D3). The multi-resolution aggregated feature map X is obtained with the Pass Over connection method:

    X = [X1; X2; X3] ∈ R^(H×W×(D1+D2+D3))

    where [X1; X2; X3] denotes concatenation along the third (channel) dimension. Fig. 6 shows an example of a Pass Over connection method for three different feature maps. There are two reasons for aggregating multi-layer feature maps with Pass Over links [31-33]. First, scale variance in classification and object recognition tasks must be addressed, and the CNN model can naturally generate feature maps with a pyramidal form through its hierarchical layers.
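The Pass Over aggregation of same-resolution maps is a channel-wise concatenation; a minimal numpy sketch follows, with illustrative channel depths:

```python
import numpy as np

H, W = 7, 7
X1 = np.random.rand(H, W, 256)   # e.g. features from an earlier conv layer
X2 = np.random.rand(H, W, 384)   # intermediate conv layer
X3 = np.random.rand(H, W, 512)   # final conv layer
# [X1; X2; X3]: concatenate along the third (channel) dimension
X = np.concatenate([X1, X2, X3], axis=2)
```

The spatial grid is unchanged, while the channel depth becomes D1 + D2 + D3, exactly as in the aggregation formula above.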

    Second, the information contained in the feature maps generated at different levels is complementary. Fig. 7 shows feature maps from different layers of AlexNet for demonstration purposes. The Pass Over connection exploits the feature maps’ diverse set of characteristics to improve classification precision.

    Figure 7: Graphical example illustrating the feature maps extracted from different layers of AlexNet for three different images. (a) Input image. (b) Feature map from the third convolutional layer. (c) Feature map from the fourth convolutional layer. (d) Feature map from the fifth convolutional layer

    Note that average pooling is used to combine feature maps with varying spatial resolutions.Prior to concatenating the feature maps, CWAvg pooling is used to decrease the number of channels in each set by a factor of 2.The following is a comprehensive mathematical explanation of CWAvg pooling.

    Let Y = [Y1; Y2; ...; YL] ∈ R^(H×W×L) be the 3-D feature map tensor. With stride k, CWAvg pooling averages each group of k consecutive channels:

    Zj = (1/k) Σ_{i=(j-1)k+1}^{jk} Yi,   j = 1, ..., L/k

    Consequently, Z = [Z1; Z2; ...; Z(L/k)] ∈ R^(H×W×(L/k)) is produced as the output feature map tensor. In practice, we choose k so that L is divisible by k, i.e., L/k is an integer.
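CWAvg pooling with stride k, as defined above, averages each group of k consecutive channels; a minimal sketch:

```python
import numpy as np

def cwavg_pool(Y, k):
    """Channel-wise average pooling: average every k consecutive channels,
    reducing an H x W x L tensor to H x W x (L/k)."""
    H, W, L = Y.shape
    assert L % k == 0, "L must be divisible by k"
    return Y.reshape(H, W, L // k, k).mean(axis=3)

# toy tensor: 2 x 2 spatial grid, 4 channels, pooled with stride k = 2
Y = np.arange(2 * 2 * 4, dtype=float).reshape(2, 2, 4)
Z = cwavg_pool(Y, 2)
```

Each output channel j is the mean of input channels (j-1)k+1 through jk, matching the formula above.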

    Forward Propagation of Multi-scale Pooling

    The forward propagation of multi-scale pooling is performed as follows for a feature matrix X ∈ R^(D×N), where D = D1 + D2 + D3 is the dimensionality of the features and N = H×W is the number of features. First, the multi-scale matrix C is calculated.

    Here C = UΣU^T, where U is the eigenvector matrix and Σ the eigenvalue matrix of C. The vectorization of F is denoted f. Because the matrix F is symmetric, only the rows and columns in its upper triangle need to be vectorized, so the vector f has dimension D(D+1)/2.
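The text does not preserve how F is derived from the eigendecomposition, so the sketch below vectorises the upper triangle of the second-order matrix C itself (an assumption for illustration); the output dimension D(D+1)/2 matches the text:

```python
import numpy as np

def multiscale_pool(X):
    """Second-order pooling sketch: form the D x D matrix C from a
    D x N feature matrix, eigendecompose it, and vectorise the upper
    triangle, giving a D*(D+1)/2-dimensional descriptor f."""
    D, N = X.shape
    C = (X @ X.T) / N                 # D x D symmetric second-order matrix
    sigma, U = np.linalg.eigh(C)      # C = U diag(sigma) U^T, as in the text
    F = C                             # assumption: F taken as C itself here
    iu = np.triu_indices(D)           # F is symmetric: upper triangle suffices
    return F[iu]

D, N = 6, 49                          # D = D1 + D2 + D3, N = H x W
f = multiscale_pool(np.random.rand(D, N))
```

For D = 6 the descriptor has 6·7/2 = 21 entries rather than the 36 of the full matrix.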

    Backward Propagation of Multi-scale Pooling

    Multi-scale pooling uses global and structured matrix calculations instead of the conventional max or average pooling methods, which treat each spatial coordinate of the intermediate variable (a matrix or a vector) separately. To calculate the partial derivative of the loss function L with respect to the multi-scale pooling input matrix, we use the matrix back-propagation technique. We first treat ∂L/∂F, ∂L/∂U, and ∂L/∂Σ as partial derivatives propagated from the higher FC layer. The chain rule expression is as follows:

    The variation of the relevant variable is denoted by d(·). The “:” operation is defined by A : B = trace(A^T B). The following formulation may be derived from (5):

    Substituting (6) into (5), ∂L/∂U and ∂L/∂Σ are obtained as follows:

    Given ∂L/∂U and ∂L/∂Σ, we compute ∂L/∂C through the eigendecomposition (EIG) of C, with C = UΣU^T. The full chain rule expression is:

    As in (6), the variation of matrix C may be obtained.

    Eqs. (8) and (9), together with the properties of the matrix inner product and of the EIG, yield the following partial derivative of the loss function L with respect to C:

    where ∘ denotes the Hadamard product, (·)_sym denotes the symmetric operation, (·)_diag is (·) with all off-diagonal elements set to 0, and K is computed from the eigenvalues σ as shown in the following:

    Further details on calculating (7) and (10) are given below. Lastly, given ∂L/∂C, the partial derivative of the loss function L with respect to the feature matrix X takes the form:
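The partial derivatives garbled above follow the standard matrix back-propagation rules for an eigendecomposition layer; a hedged reconstruction consistent with the operators defined in the text (the authors' exact equations may differ in detail) is:

```latex
\frac{\partial L}{\partial C}
  = U\left\{\left(K^{\top}\circ\left(U^{\top}\,\frac{\partial L}{\partial U}\right)\right)_{\mathrm{sym}}
  + \left(\frac{\partial L}{\partial \Sigma}\right)_{\mathrm{diag}}\right\}U^{\top},
\qquad
K_{ij}=\begin{cases}\dfrac{1}{\sigma_i-\sigma_j}, & i\neq j,\\[4pt] 0, & i=j.\end{cases}
```

Here U, Σ, ∘, (·)_sym, (·)_diag, and K are exactly the quantities defined in the surrounding text.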

    3.3 Network Architecture for Proposed VGG-16

    Tab. 1 lists the VGG-16 architectural specifications. It has 3 FC layers and 5 blocks of 3×3 convolutional layers, each convolution having a stride of 1. The 2×2 pooling layers have a stride of 2, and the default input image size in VGG-16 is 224×224. Every pooling layer shrinks the feature map by a factor of two. Before the first FC layer is applied, the final 7×7 feature map with 512 channels is flattened into a vector of 25,088 (7×7×512) elements.

    VGG-16 processes 224×224 images through five convolutional blocks. As the blocks deepen, the number of 3×3 filters increases. Convolutions use a stride of 1, with the layer inputs padded to maintain spatial resolution. Max-pooling layers separate the blocks, using 2×2 windows with a stride of 2. Three FC layers follow the convolutional layers. Finally, the soft-max layer is applied, and here the class probabilities are calculated. The complete network model is shown in Fig. 8.
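The feature-map arithmetic above can be verified directly: five stride-2 poolings reduce 224 to 7, and flattening the final 7×7×512 map yields 25,088 values. A sketch, using the standard VGG-16 per-block channel counts:

```python
size = 224
channels = [64, 128, 256, 512, 512]   # filters per VGG-16 convolutional block
for c in channels:
    size //= 2                        # each 2x2 max-pool with stride 2 halves the map
flat = size * size * channels[-1]     # vector length fed to the first FC layer
```

Running this confirms the 7×7 spatial size and the 25,088-dimensional FC input quoted in the text.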

    3.4 Ensemble Classifier [Random Forest and Decision Tree]

    The random forest is a combined classifier created by merging K basic decision trees.

    Table 1: Architectural parameters for VGG-16

    Figure 8: Network model for VGG-16

    From the original dataset (X, Y), sub-datasets (Xk, Yk) are randomly selected to construct the classifiers hk(x); the combined classifier can then be described as,

    The random forest method generates K training subsets from the original dataset using the bagging sampling approach. Approximately two-thirds of the original dataset is used for each training subset; samples are drawn at random with replacement. On each draw from a set of m samples, the chance of a given sample being selected is 1/m, so the probability of not being selected is (1 − 1/m); after m draws, the probability that a sample was never selected is (1 − 1/m)^m, which approaches 1/e ≈ 0.368 as m → ∞. That is to say, each cycle of random sampling and bagging misses approximately 36.8% of the data in the training set. This unsampled portion, about 36.8% of the data, is referred to as “Out of Bag” (OOB). Since these data have not been fitted by the training-set model, they can be used to evaluate the model’s generalization capability. Bagging sampling is used to create K decision trees from the K training subsets. Random forests’ decision trees use the CART algorithm, which is very popular at present. The core of the CART algorithm is its node-splitting technique: node splitting is performed using the GINI coefficient.
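The limiting value quoted above can be checked numerically: the probability that a given sample is never drawn in m draws with replacement is (1 − 1/m)^m, which tends to 1/e:

```python
import math

m = 100000
p_never = (1 - 1 / m) ** m   # chance a given sample is never drawn in m tries
limit = math.exp(-1)         # 1/e ~ 0.3679, the "Out of Bag" fraction
```

Already at m = 100000 the two values agree to several decimal places, which is why roughly 36.8% of samples end up out of bag regardless of dataset size.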

    In other words, the Gini coefficient measures the likelihood that a randomly chosen sample from a set will be misclassified. A smaller Gini index indicates a lesser chance that the chosen sample will be misclassified, i.e., that the collection is purer; a larger Gini index signifies a less pure collection.

    That is, the Gini index (Gini impurity) = (probability of the sample being selected) × (probability of the sample being misclassified).

    1. When the probability that a sample belongs to the k-th category is pk, the likelihood that the sample is misclassified is (1 − pk).

    2. A sample may belong to any of the K categories, so the Gini index sums the misclassification probability over all K categories: Gini(p) = Σ_k pk(1 − pk) = 1 − Σ_k pk^2.

    3. For a two-class set, Gini(p) = 2p(1 − p).

    If a feature is used to split a sample set into D1 and D2, there are only two subsets: D1 contains the samples that equal the given feature value and D2 contains those that do not. CART (classification and regression) trees are binary trees, so features with multiple values are handled through binary splits.

    To find out how pure the two resulting subsets of sample set D are, D is divided in two at the partitioning value A = a, and the weighted Gini index of the split is computed: Gini(D, A = a) = |D1|/|D| · Gini(D1) + |D2|/|D| · Gini(D2).

    This means that when a feature has more than two values, the purity Gini(D, Ai) of the subsets must be calculated with each value Ai used in turn as the dividing point of sample set D (where Ai represents a possible value of characteristic A). The lowest Gini index among all feasible values Gini(D, Ai) is then determined; based on the data in sample set D, that division is the optimal split point.
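A minimal sketch of choosing the optimal CART split point by weighted Gini, on hypothetical toy data (the helper names `gini_of`, `split_gini`, and `best_split` are illustrative, not from the paper):

```python
from collections import Counter

def gini_of(labels):
    # Gini impurity of a list of class labels.
    if not labels:
        return 0.0
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def split_gini(values, labels, a):
    """Weighted Gini of splitting D into D1 (feature == a) and D2 (rest)."""
    d1 = [y for x, y in zip(values, labels) if x == a]
    d2 = [y for x, y in zip(values, labels) if x != a]
    n = len(labels)
    return len(d1) / n * gini_of(d1) + len(d2) / n * gini_of(d2)

def best_split(values, labels):
    # CART chooses the candidate value with the lowest weighted Gini.
    return min(set(values), key=lambda a: split_gini(values, labels, a))

xs = ['a', 'a', 'b', 'b', 'c']   # hypothetical feature values
ys = [0, 0, 1, 1, 1]             # hypothetical class labels
print(best_split(xs, ys))        # 'a' separates the two classes perfectly
```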

    3.5 Algorithm Description of the Random Forest

    After the random sampling process, the resulting decision trees can be trained on the data. By the design of random forests, the decision trees have a high degree of independence from each other, and this also ensures the independence of the results produced by each tree. The remaining work then consists of two tasks: training each decision tree to produce a result, and voting to select the optimal solution from those results. Fig.9 shows the tree.

    Algorithm stages may be summed up by the following description:

    Step1: At each node of a decision tree, s candidate features are chosen at random from the dataset's full set of S characteristics. The number s does not vary as the tree grows.

    Figure 9: Classification tree topology

    Step2: Split the node using the Gini technique described above.

    Step3: Train each decision tree on its own training subset.

    Step4: Vote to determine the optimal solution. Definition 1: for a group of classifiers h1(x), h2(x), ..., hk(x), and a vector (X, Y) produced at random from the dataset, the margin function is set to mg(X, Y) = av_k I(hk(X) = Y) - max_{j≠Y} av_k I(hk(X) = j),

    where I(·) is the indicator function: it is 1 if the expression in parentheses holds true and 0 otherwise.

    The margin function measures by how much the average number of votes for the correct class exceeds the average number of votes for any incorrect class. The larger its value, the more reliable the classification.

    The generalization error is PE* = P_{X,Y}(mg(X, Y) < 0).

    For a set of decision trees grown from the i.i.d. random sequences Θ1, Θ2, ..., Θκ, as the number of trees increases the error converges to P_{X,Y}(P_Θ(h(X, Θ) = Y) - max_{j≠Y} P_Θ(h(X, Θ) = j) < 0).
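The margin function of Definition 1 can be sketched directly, assuming we are given the per-tree votes for a single sample (the `margin` helper name is illustrative):

```python
def margin(votes, y):
    """mg(X, Y) = av_k I(h_k(X) = Y) - max_{j != Y} av_k I(h_k(X) = j)."""
    k = len(votes)
    correct = sum(v == y for v in votes) / k          # av_k I(h_k(X) = Y)
    wrong = max((sum(v == j for v in votes) / k       # best competing class
                 for j in set(votes) if j != y), default=0.0)
    return correct - wrong

# Hypothetical votes of 5 trees for one sample whose true label is 1:
print(round(margin([1, 1, 1, 0, 2], 1), 2))  # 0.4 = 3/5 - 1/5
```

A positive margin means the ensemble classifies the sample correctly; the generalization error is the probability of a negative margin.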

    As described in the random forest approach, the random selection of both the samples and the attributes may be utilized to prevent over-fitting.
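Steps 1 and 4 above (bootstrap sampling and majority voting) can be sketched schematically in Python; this is an illustrative toy with hypothetical helper names, not the paper's implementation:

```python
import random
from collections import Counter

random.seed(0)  # deterministic toy run

def bootstrap(data, labels):
    """Step 1 (bagging): draw len(data) samples with replacement;
    the indices never drawn form the out-of-bag (OOB) set."""
    idx = [random.randrange(len(data)) for _ in range(len(data))]
    oob = [i for i in range(len(data)) if i not in set(idx)]
    return [data[i] for i in idx], [labels[i] for i in idx], oob

def majority_vote(predictions):
    """Step 4: the forest outputs the most common per-tree prediction."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical per-tree predictions for one test image:
print(majority_vote(['river', 'river', 'beach']))  # river
```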

    4.Experimental Setup

    The UC Merced Land Use dataset contains 21 scene classes: agricultural, airplane, baseball diamond, beach, buildings, chaparral, dense residential, forest, freeway, golf course, harbor, intersection, medium residential, mobile home park, overpass, parking lot, river, runway, sparse residential, storage tanks, and tennis court. There are 100 pictures in each class, each of which is 256×256 pixels in size.

    4.1 Performance Evaluation

    Fig.10 shows the confusion matrix and the ROC curve for six combinations: Alexnet with the decision tree classifier, Alexnet with the random forest classifier, Resnet-50 with the decision tree classifier, Resnet-50 with the random forest classifier, VGG-16 with the decision tree classifier, and VGG-16 with the random forest classifier. The 21 image classes are agricultural, airplane, baseball diamond, beach, buildings, chaparral, dense residential, forest, freeway, golf course, harbor, intersection, medium residential, mobile home park, overpass, parking lot, river, runway, sparse residential, storage tanks, and tennis court; each ROC curve is drawn as the true positive rate against the false positive rate.

    Tab.2 shows the featured extracted classes after concatenating the Conv2D_3, Conv2D_4, and Conv2D_5 layers of Alexnet with the Conv2D_3, Conv2D_4, and Conv2D_5 layers of VGG-16, and also with the Conv2D_3, Conv2D_4, and Conv2D_5 layers of Resnet-50.

    Figure 10: The confusion matrix and ROC curve for the 21 classes of (a) & (b) Alexnet with the decision tree classifier (c) & (d) Alexnet with the random forest classifier (e) & (f)Resnet-50 with the decision tree classifier (g) & (h) Resnet-50 with the random forest classifier(i) & (j) VGG-16 with the decision tree classifier (k) & (l) VGG-16 with the random forest classifier

    Table 2: Featured extracted classes

    The analysis shows that when the input is fed to VGG-16, Resnet-50, or Alexnet alone, the accuracy is lower than that of the proposed model; the time taken to train the existing models is shown in Tab.3.

    Table 3: Comparison between the existing and proposed time requirements

    For RESNET50, {‘Training Time Per Epoch’: 10.185 min, ‘Accuracy’: 0.0633}, whereas the proposed feature extraction takes 52 min and the classification takes 15 s. The accuracy of Alexnet [21] is about 90.21%, the training accuracy of Resnet50 is about 62.01%, VGG16 reaches 91.85%, and the proposed architecture model achieves the highest accuracy of about 97.3%.

    5 Conclusion

    This paper proposes the Pass Over network for remote sensing scene categorization, a novel hybrid feature-learning and end-to-end learning model. It introduces two new components: pass over connections, followed by a multi-scale pooling approach over the feature maps from various levels. In addition to combining multi-resolution feature maps from various layers of the CNN model, the Pass Over network can also use high-order information to achieve more representative feature learning. The experiments show that the existing ALEXNET, VGG16, and RESNET50 models are less accurate than the proposed model, and that training the existing models takes longer than the proposed feature extraction (52 min) and classification (15 s). Alexnet's accuracy is 90.21%, Resnet50's training accuracy is 62.01%, VGG16's is 91.85%, and the suggested architecture model obtains the highest accuracy of 97.3%.

    Acknowledgement:We deeply acknowledge Taif University for supporting this study through Taif University Researchers Supporting Project Number (TURSP-2020/115), Taif University, Taif, Saudi Arabia.

    Funding Statement:This research is funded by Taif University, TURSP-2020/115.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
