
    ENSOCOM: Ensemble of Multi-Output Neural Network's Components for Multi-Label Classification

    Computers, Materials & Continua, 2022, Issue 9

    Khudran M. Alzhrani

    Department of Information Systems, Al-Qunfudhah Computing College, Umm Al-Qura University, Al-Qunfudhah, Mecca, Saudi Arabia

    Abstract: Multitasking and multi-output neural network models jointly learn related classification tasks from a shared structure. Hard parameter sharing is a multitasking approach that shares hidden layers between multiple task-specific outputs. The output layers' weights are essential in transforming aggregated neuron outputs into task labels. This paper redirects multi-output network research to show that an ensemble of output-layer predictions can improve network performance on multi-label classification tasks. Output layers initialized with different weights act as multiple semi-independent classifiers that can predict non-identical label sets for the same instance. An ensemble of a multi-output neural network in which every output layer learns the same multi-label classification task can outperform a neural network with a single output layer. We propose an ensemble strategy over the output layers' components of a multi-output neural network for multi-label classification (ENSOCOM). The baseline and proposed models are selected based on the size of the hidden layer and the number of output layers to evaluate the proposed method comprehensively. The ENSOCOM method improved the performance of the neural networks on five different multi-label datasets based on several evaluation metrics. The methods presented in this work can substitute the standard label representation and prediction generation of any neural network.

    Keywords: Ensemble learning; multi-label classification; neural networks

    1 Introduction

    Accelerating advances in intelligent computer modeling have promoted the adoption of such models as solutions to complex tasks. Classification problems that traditional single-class and multiclass intelligent models cannot address are handled with task-oriented learning techniques. In classification tasks, ensemble methods are often referred to as multiple classifier systems that combine the predictions of multiple learners to improve performance and robustness. One clear advantage of multiple classifier systems over an individual classifier is reducing the variance [1], which improves the predictive accuracy of the collection of models.

    Ensemble learning is considered homogeneous when all the base classifiers are of the same type; a neural network ensemble consists only of neural networks. Homogeneous ensemble learning methods fall into two general categories known as bagging and boosting. The classifiers in bagging methods learn in parallel, whereas the classifiers in boosting methods learn sequentially. The classifiers' predictions are combined based on a fusion strategy. However, current neural network ensemble methods require building multiple neural network models. In practice, compiling and running the multiple neural network models is often sequential.

    Neural network architecture is adjustable and can be modified depending on the classification task and dataset. The flexibility of deep neural networks enabled researchers to design novel architectures suited for a wide range of classification applications. Multi-output neural networks provide the ability to perform multiple tasks simultaneously [2]. The need for models that map a single input to multiple outputs is emerging in many fields [3,4]. The outputs might represent various tasks or data types [5]. Multi-label classification is one of the domains that benefited from multi-output neural networks [6].

    Unlike multiclass and single-class classification, instances in multi-label classification can be labeled with one or more classes. The input features are mapped to a non-fixed number of overlapping classes for each instance, making the learning process more complicated. Assume the input space is represented by X, which are the instances of the multi-label dataset D. The attributes extracted from the dataset D are denoted as A, all possible labels for the instances X of D as L, and the output space as Y. Hence, the association between dataset instances and attributes is represented as X ∈ A1 × A2 × ... × Af, where f is the number of input attributes in the entire set. Only a subset of the attributes is associated with an instance Xi in the dataset D. Every possible combination of labels from the set L is called the label powerset P(L). The number of labels in the set L is its cardinality, k = |L|. The dataset D consists of all instance attributes and all possible label combinations, A1 × A2 × ... × Af × P(L).
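
    For illustration, the label space described above is typically encoded as a binary indicator matrix. The following is a minimal sketch using hypothetical label names and scikit-learn's MultiLabelBinarizer; the utility and the toy labels are assumptions made only for this example, not part of the paper's pipeline:

```python
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical label sets for three instances Xi of a multi-label dataset D;
# here L = {politics, sports, tech}, so k = |L| = 3.
label_sets = [{"sports"}, {"politics", "tech"}, {"sports", "tech"}]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(label_sets)   # binary indicator matrix of shape (|X|, k)
print(mlb.classes_)                 # ['politics' 'sports' 'tech']
print(Y)                            # [[0 1 0]
                                    #  [1 0 1]
                                    #  [0 1 1]]
```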

    This work combines three concepts, multi-label, multi-output, and ensemble learning, to construct and improve the performance of a neural network. In the conventional multi-output neural network for multi-label classification, each output layer is dedicated to predicting a single class label. We extend the traditional approaches of classifying multi-label data in neural networks by restructuring the label representation in the output layers. Output layer-based classifiers create multiple end paths within the neural network with differently initialized weights and separate backpropagation updates. Therefore, the proposed method can make multiple predictions for the same instance using a single neural network. Other neural network ensemble methods require several neural networks to be built and run separately. We are unaware of previous studies that utilized a single neural network to learn different outcomes for the same output. The purpose of this work is to test the ability of the ENSOCOM method to outperform the results of a neural network for multi-label classification. In order to limit the influence of neural network factors on the models' outcomes, a shallow multilayer perceptron (MLP) is used for the baseline and ENSOCOM models. However, the proposed methodology can extend any neural network by restructuring its final layer. The contributions of this paper are summarized as follows:

    • We developed a methodology to construct multiple classifiers from a single multi-output neural network. The set of classifiers can be trained simultaneously without recompiling.

    • We designed a strategy to select the best model for each output-based classifier in the neural network.

    • We introduced an ensembling method that mitigates the effect of data imbalance in multi-label classification.

    • The proposed models outperformed the baselines in multi-label classification problems based on several evaluation metrics.

    The rest of the paper is organized as follows: In Section 2, works related to the ensemble of multi-label classification are briefly discussed and reviewed. Details on the design and specifications of the proposed work are presented in Section 3. Statistical information and a brief review of the datasets are listed in Section 4. In Section 5, experiments and results are illustrated to evaluate the performance of the proposed method. The future work and conclusions of the paper are discussed in Section 6.

    2 Related Work

    Problem transformation learning techniques reduce multi-label classification complexity by converting labels to a more straightforward form. Binary Relevance (BR) is one of the earliest transformation methods that assign a classifier to each label [7,8]. In traditional supervised learning settings, classifiers learn to map between a feature set and a class or an object, as in text [9,10] and image classification problems [11] or object detection [12], which is often represented by a single exclusive label for each instance. For multi-label problems, BR imitates the traditional approaches by assigning a classifier for each label that predicts the label's presence or absence for all unseen examples. One of the drawbacks of BR is the lack of correlation among instance labels, which might influence the models' prediction performance [13]. Therefore, multiple research papers proposed extensions to BR that include label correlation in the learning process. One of the most popular approaches to provide correlation capabilities to BR methods is Classifier Chains (CC) [14,15]. The CC technique builds a chain of classifiers, each predicting a single label's association with an instance, by passing the classifier's feature space to the following one. Sharing the feature space with the subsequent classifier mitigates the label independence issue of the traditional BR method. However, CC performance is highly susceptible to the order of classifiers in the chain because label information sharing occurs between adjacent classifiers. One way to improve CC is the Ensemble of Classifier Chains (ECC), which sums the votes generated by randomly structured CCs to limit the effect of classifier order in the chains.
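
    As a brief illustration of the two transformation strategies, the following sketch trains Binary Relevance and a Classifier Chain on a toy label matrix with scikit-learn; the data and the base classifier are assumptions made only for the example and are not part of the paper's experiments:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier, ClassifierChain

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))                        # toy feature matrix
Y = (rng.random(size=(100, 3)) > 0.7).astype(int)    # toy binary label matrix, k = 3

# Binary Relevance: one independent base classifier per label.
br = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)

# Classifier Chains: each classifier also receives the previously predicted labels
# as extra features, so label correlations can be exploited; the chain order matters.
cc = ClassifierChain(LogisticRegression(max_iter=1000),
                     order="random", random_state=0).fit(X, Y)

print(br.predict(X[:2]))
print(cc.predict(X[:2]))
```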

    Label Powerset (LP) is another approach that reduces label entanglement and enables correlation in multi-label data by combining the associated set of labels into a single atomic label [16,17]. The combination of labels allows treating the multi-label problem as a standard multiclass classification. Since every unique combination of labels is considered a single label, there might be many newly constructed classes, and their number will vary depending on the dataset. One can train the classifier on the most relevant labels by pruning the less important ones [18].

    On the other hand, some learning approaches adapt to the multi-label problem by extending existing algorithms that work with the standard multiclass classification task. For instance, a variation of the K-nearest neighbor algorithm (ML-KNN) [19] determines the K-nearest neighbors in the training set. The maximum a posteriori (MAP) principle then identifies labels associated with each unseen instance based on statistical information obtained from the neighbors' label sets. Similarly, decision trees, a well-known algorithm for multiclass classification, effectively predict hierarchical multi-label instances without label transformation [20]. Labels in hierarchical multi-label data [21] are structured to organize the label set associated with an instance into parent and child labels, assuming label dependency. Other approaches rely on optimizing classifiers, such as SVM, to better fit algorithm parameters on multi-label data by using a label ranking method based on a cost function [22].

    Customizability is one of the deep neural network features that provided solutions to diverse multi-label applications. Deep neural networks are apt to implement both problem transformation and adaptation techniques. A learning network with a single RNN layer was trained to predict the labels of several multi-label datasets [23]. The experiment showed that the adapted version of the neural network outperformed several multi-label learning techniques, including a variation of the RNN network that implemented the classifier chains method.

    A shallow multilayer perceptron with a single layer and a deep one with two layers were trained on patients' examination records to detect chronic diseases, such as diabetes and hypertension [24]. The records are labeled with zero or more diseases totaling eight unique classes. The authors used several traditional classifiers and the multilayer perceptron network to learn from the original multi-label sets and the unique sets (LP). The paper showed that the deep neural networks reported higher accuracy than the other classifiers but a lower F1-score on the multi-label sets. A bidirectional RNN with an attention mechanism was employed to predict the labels of three biomedical text datasets in [25]. The proposed framework had two outputs, one for the number of labels associated with the instance and the second for the label set. The authors claimed that their proposed method is favorable compared to other deep neural network frameworks and BR. Besides emphasizing the neural networks themselves, the Hierarchical Label Set Expansion method introduced in [26] added more complexity to the multi-label data by expanding the label sets. The approach is suitable for hierarchical multi-label datasets since it finds missing labels between descendants and ancestors to append to the label sets.

    Due to the difficulty of handling multi-label classification problems, ensemble methods that use deep neural networks are considered promising solutions. For example, the ensemble of classifiers for multi-label classification has applications in video object recognition. One work proposed using an ensemble of two deep neural networks, GoogLeNet and VGGNet, to detect surgical tools in laparoscopic videos [27]. Before training the networks, the surgical tool labels were combined into a single label. The two neural networks' outputs were averaged to compute the final prediction. The ensemble method outperforms the paper's other baselines based on the mean average precision metric.

    The use of an ensemble of five different neural network frameworks to predict tweets' labels is presented in [28]. The comparison was made between two ensemble approaches, namely stacked and weighted. A stacked ensemble of the five classifiers was conducted by concatenating the output of all models before passing it into two fully connected layers. The weighted ensemble sums the product of a tensor, the neural network models' output, over a single axis to construct a one-dimensional vector representing a tweet's labels. The paper reported that the weighted ensemble approach performed slightly better than the stacked one.

    3 Methodology

    3.1 Single and Multi-Output Deep Neural Networks

    Changes are made to the conventional DNN structure to adapt to MLC problems that expect each sample in the dataset to be marked with one or more labels. As seen in Fig. 1, two general approaches represent multiple labels at the network's end. The first approach approximates the MLC problem in a DNN by creating multiple output layers equal to the number of all possible labels of a dataset. In the standard multi-output neural network for MLC, each output layer has a single neuron that takes its inputs from the preceding layer. The neuron sums the dot product of the inputs and their corresponding weights, which is often computed by matrix manipulation.

    Figure 1: Single vs. multi-output neural networks for multi-label classification

    In most cases, the weights in the network layers are trainable to determine the influence or importance of the neurons' input values. Classes are exclusive in binary and multiclass classification problems, so any given instance will be labeled with one class. Every output layer in the standard multi-output neural network for MLC acts similarly to the output layer of a single-classification network. The outcomes returned by the output layers' neurons are separately passed into an activation function.

    The activation function produces a score representing a confidence degree that predicts the label's presence or absence in an instance. In other words, the expected output of the activation function after thresholding is either 0 or 1. The simplest thresholding method is scaling values less than 0.5 down to 0 and higher ones up to 1. The Sigmoid function has this property: it squashes the neuron's output to produce a floating-point number. However, it is worth noting that there is another way to conduct binary classification problems in DNN. It is possible to perform binary classification with two neurons in the output layer instead of one.

    The Sigmoid function will not be appropriate for standard multi-output networks with two neurons in each output layer. Instead, this can be done with a well-known activation function that measures the probability of a class belonging to an instance, namely SoftMax. Each output layer passes its outcome to the SoftMax function, which produces a probability distribution over all the possible classes. The sum of the probability distribution generated by the SoftMax activation function equals 1. The class that receives the highest probability among the output components is designated as the prediction of that output. The SoftMax function for binary and multiclass classification is mathematically defined as follows:

    SoftMax(z_i) = exp(z_i) / Σ_{j=1}^{K} exp(z_j),  for i = 1, ..., K,

    where z_i is the outcome of the i-th output component and K is the number of classes.

    However, assigning a separate output layer for each unique class is challenging to manage. The second approach that a DNN can take to perform MLC tasks (Fig. 1) is limiting the number of outputs to a single output layer with multiple neurons equal to the number of unique labels in the dataset. Correspondingly, the output layer has a single vector with multiple components that allocates all possible classes. This approach requires an activation function that measures the degree of confidence for every class independently. The Sigmoid activation function is known for its ability to produce a score for each possible label independently, with values ranging between 0 and 1. The Sigmoid function is defined as:

    σ(z) = 1 / (1 + e^(−z))
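
    A minimal NumPy sketch of the two activation functions and the 0.5 thresholding described above; the logits are hypothetical values used only for illustration:

```python
import numpy as np

def softmax(z):
    # Probability distribution over mutually exclusive classes; the scores sum to 1.
    e = np.exp(z - z.max())            # shift for numerical stability
    return e / e.sum()

def sigmoid(z):
    # Independent confidence score in (0, 1) for every label component.
    return 1.0 / (1.0 + np.exp(-z))

logits = np.array([1.2, -0.3, 0.4])    # hypothetical output-layer activations
print(softmax(logits))                 # one distribution -> a single winning class
print(sigmoid(logits) >= 0.5)          # per-label decision after 0.5 thresholding
```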

    The main difference between the multi-output and single-output DNN for multi-label classification is that, in the multi-output network, the loss is optimized over each output layer, one output layer per class, whereas in a single-output-layer network the loss optimization is done for each output component, one class per component. Nonetheless, both are valid approaches and are widely used in practice.
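
    The following Keras sketch contrasts the two structures of Fig. 1 under assumed dataset dimensions; the layer sizes and names are illustrative, not the paper's exact configuration:

```python
from tensorflow import keras
from tensorflow.keras import layers

n_features, n_labels = 100, 5          # assumed dataset dimensions

# Approach 1 (Fig. 1): one output layer per label, a single sigmoid neuron each.
inp = keras.Input(shape=(n_features,))
hidden = layers.Dense(300, activation="relu")(inp)
per_label_outputs = [layers.Dense(1, activation="sigmoid", name=f"label_{i}")(hidden)
                     for i in range(n_labels)]
multi_output_net = keras.Model(inp, per_label_outputs)

# Approach 2 (Fig. 1): a single output layer with one sigmoid neuron per label.
single_output_net = keras.Sequential([
    keras.Input(shape=(n_features,)),
    layers.Dense(300, activation="relu"),
    layers.Dense(n_labels, activation="sigmoid"),
])
```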

    3.2 Ensemble of Output Components in Multioutput Neural Network

    This paper proposes another approach that takes advantage of DNN flexibility in specifying the number of output layers. The ensemble of output components of a multi-output neural network (ENSOCOM) combines properties of the single-output and multi-output neural networks for MLC to improve performance. Before going into the details of the ENSOCOM framework, we briefly highlight the learning process of a DNN.

    Deep neural networks are a collection of neurons organized and grouped in multiple layers. A neural network with a single hidden layer is often referred to as a shallow neural network. In multi-layer neural networks, the upper layers are often used to extract features from the dataset, and the lower ones are fully-connected layers. All neurons or nodes are connected to the preceding and posterior layers in fully-connected networks, hence the name. Neurons aggregate the incoming connections from the previous layer and apply an activation function. Several well-documented activation functions in the literature for intermediate layers improve the network's ability to learn complex patterns. The connections between neurons' edges and the bias vector are the learnable parameters of the neural network, also known as the weights W. These weights are updated in the training phase to map input space attributes to a label or a subset of all possible labels. The connection between two layers can be mathematically defined as follows:

    a_ij = σ( Σ_k w_ijk · a_(i−1)k + b_ij )

    where a_ij is the result of the activation function σ for neuron j in layer i, w_ijk are the weights from neuron k in the previous layer to neuron j in the current layer, and b_ij is the bias for neuron j in the current layer i.

    The final layer in the network is an output layer that makes predictions based on the values of all learnable parameters. The network's performance is measured by a cost function that considers the neural network weights W, the biases B, the input of an instance S_r, and the expected output of the training instance E_r. The cost function can be expressed as:

    C = C(W, B, S_r, E_r)

    The cost function for predicting a training instance is used to find the error of the output layer δ, as in the following equation:

    δ_L = ∇_a C ⊙ σ′(z_L),

    where z_L denotes the weighted inputs to the output layer neurons and ⊙ is the element-wise product.

    In backpropagation, the weights are updated according to a gradient descent optimizer to minimize the error of the cost function. Therefore, a gradient measures the relationship between changes made to the network weights and the resulting change in the network error.

    The initialized weights are essential in network optimization since they are considered the starting point for reaching the global optimum [29]. Although the weights are initialized before the actual learning starts, they impact the final outputs of the network. Therefore, several weight initialization mechanisms can be found in the literature. Zero initialization is one of the simplest initialization methods, where all weights are set to zeros. However, initializing weights with zeros results in all layers receiving the exact same update, preventing the network from learning a different function at each neuron. Initializing weights with different values breaks the symmetry of neurons located in the same layer with identical inputs and activation functions. While there are multiple weight initialization approaches, we are interested in weights initialized from a random distribution.

    As with any layer, the output layer of the deep neural network also has trainable weights. The output layer weights are the first to be updated in the backpropagation phase. Fig. 2 illustrates the lower end of the network of the proposed framework that ensembles the multi-output neural network's output components (ENSOCOM).

    Figure 2:Ensemble of output layers components

    ENSOCOM is a neural network with multiple output layers. Each output layer has the same number of neurons, representing the classes in the multi-label set. The output layers are identical regarding their inputs, activation functions, and expected outputs. However, the weights of the output layers are initialized separately and are not the same. Hence, the output layers will have distinct predictions and costs in the first iteration of forward propagation. Since the updates made to the output layers differ depending on their cost functions, the starting point for finding the global optimum is not identical across output layers. The more the network is updated in every iteration, the closer each output layer gets to the global optimum. Because the output layers are initialized with different weights, their outcomes will differ. We argue that ENSOCOM will search along different paths to minimize the error for each output layer, resulting in learning more patterns.

    ENSOCOM enables the network to make multiple predictions for the same class without constructing additional networks. Since the output layers in ENSOCOM make predictions regarding all the possible labels in a component set, the Sigmoid function is employed in each output layer. The Sigmoid function takes the inputs and independently produces a confidence degree for each component in the output layer. As noted before, the predictions made for each corresponding component across output layers are not necessarily the same. In other words, ENSOCOM constructs multiple prediction models that share the same input and hidden layer within a single neural network. Pseudocode 1 illustrates the entirety of the proposed ENSOCOM methodology.
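
    The architecture can be sketched in Keras as follows; the helper name, hidden size, and seeds are assumptions made for illustration, and the compilation and training settings appear separately in Section 5.1:

```python
from tensorflow import keras
from tensorflow.keras import layers, initializers

def build_ensocom(n_features, n_labels, hidden_size=500, n_outputs=4):
    """One shared hidden layer feeding several identical sigmoid output layers
    whose weights are initialized with different seeds, so every output layer
    starts from a different point in weight space."""
    inp = keras.Input(shape=(n_features,))
    hidden = layers.Dense(hidden_size, activation="relu",
                          kernel_initializer=initializers.GlorotUniform(seed=0))(inp)
    outputs = [
        layers.Dense(n_labels, activation="sigmoid", name=f"out_{i}",
                     kernel_initializer=initializers.GlorotUniform(seed=i + 1))(hidden)
        for i in range(n_outputs)
    ]
    return keras.Model(inp, outputs)
```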

    The neural network iterates over the training examples during the training phase to improve the network accuracy and minimize the error rate. Network performance on the training set is not a good indicator of the model's ability to accurately predict the labels of unseen data. After each epoch, the model is evaluated on unseen examples, also known as the validation set. In single-output-layer neural networks, the model that performed best on the validation set is often selected for evaluation on the testing set. The same concept can be applied to multi-output neural networks by choosing the best-performing model on the validation set for every output layer. The weights of the prediction model are saved if the output layer's performance on the validation set in the current epoch is better than in the previous ones. The best-model selection process is independent and considers only the accuracy of a single output layer at a time. A group of best-fitted models on the validation set will be created by the end of the training phase. It is worth mentioning that the best models for the output layers do not have to be from the same epoch. In other words, the model weights are saved for every output layer separately regardless of the epoch. The final number of saved model weights equals the number of output layers in the neural network. The best saved model weights are associated with the corresponding output, and they are used later in the testing phase.
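
    A hypothetical sketch of this per-output selection loop; the function name, the epoch-by-epoch fitting, and the use of binary accuracy as the per-output validation criterion are assumptions consistent with the description above, not the paper's exact code:

```python
import numpy as np

def train_with_per_output_selection(model, X_tr, Y_tr, X_val, Y_val,
                                    n_outputs, epochs=300, batch_size=128):
    """After every epoch, keep a separate snapshot of the network weights for each
    output layer whose validation accuracy improved over its previous best."""
    best_acc = np.zeros(n_outputs)
    best_weights = [None] * n_outputs
    for _ in range(epochs):
        model.fit(X_tr, [Y_tr] * n_outputs, epochs=1,
                  batch_size=batch_size, verbose=0)
        val_preds = model.predict(X_val, verbose=0)          # list of (n_val, n_labels) arrays
        for i, p in enumerate(val_preds):
            acc = ((p >= 0.5).astype(int) == Y_val).mean()   # binary accuracy of output i
            if acc > best_acc[i]:
                best_acc[i] = acc
                best_weights[i] = model.get_weights()        # snapshot tied to output i
    return best_weights
```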

    In the testing phase, one can loop through the models' saved weights and load them into the neural network architecture. As seen in Algorithm 1, each output layer is considered an independent classifier for classifying multi-labeled instances. The saved weights associated with an output layer are loaded into the neural network. The entire network, including all the output layers, is set with the saved weights to predict the instances in the testing set. The predictions made with the output layer associated with the saved weights are used in the ensemble phase; predictions made by the other output layers are discarded. The exact process is repeated for all the output layers (classifiers). The outcome of the whole procedure is a multidimensional array of predictions made by the classifiers. The size of the multidimensional array is (Len(OL), Len(X), Len(L)), which are the numbers of output layers, instances, and all possible labels, respectively.
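
    A sketch of this testing-phase loop, continuing the hypothetical helpers above (the function name and variable names are assumptions):

```python
import numpy as np

def collect_output_predictions(model, best_weights, X_test, n_outputs):
    """Restore each output layer's best snapshot in turn and keep only the
    predictions made by that output layer; the rest are discarded."""
    per_classifier_preds = []
    for i in range(n_outputs):
        model.set_weights(best_weights[i])           # load classifier i's saved weights
        preds = model.predict(X_test, verbose=0)     # predictions from all output layers
        per_classifier_preds.append(preds[i])        # keep output layer i only
    # Shape (Len(OL), Len(X), Len(L)): output layers, instances, possible labels.
    return np.stack(per_classifier_preds)
```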

    [Pseudocode 1: The ENSOCOM training, per-output model selection, and ensemble prediction procedure]

    Now that we have multiple prediction sets for the same class per instance, we must determine the final prediction set. Labels in classification problems are represented as predefined values known as categorical or nominal data. The categorical data could be strings or integer numbers, which limits the learning process. The number of classes is not identical for all instances in multi-label classification. However, having a different number of labels per instance raises some implementation difficulties. Therefore, the label set for every instance is represented by a binary vector that indicates each label's presence or absence. The collection of label sets associated with all instances in the dataset is an (X, L) binary matrix.

    The multidimensional array consisting of all predictions made by the best model per output layer is used in the ensemble process. Fig. 3 shows a snapshot of the ensemble process for a single instance, where multiple predictions are transformed into binary vector form. Each component in an output layer's vector is converted to a 0 or 1. A bitwise OR operation is then applied to all the corresponding components generated by the output-based models. This strategy focuses on capturing the positive labels predicted by the models while ignoring the negative ones. One clear advantage of this approach is limiting the data imbalance problem in multi-label classification by giving labels with low representation in the dataset a better chance of being detected. Also, the larger the number of output layers (classifiers), the higher the chances that some output-based models detect patterns not seen by the other models. Of course, one clear drawback of this fusion technique is an increased false positive rate.
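
    A minimal NumPy sketch of the OR fusion, assuming the (output layers, instances, labels) prediction array from the previous sketch; the toy scores are illustrative only:

```python
import numpy as np

def ensemble_or(all_preds, threshold=0.5):
    """all_preds has shape (output layers, instances, labels) and holds confidence
    scores; a label is predicted positive if ANY output-based classifier predicts it."""
    binary = (all_preds >= threshold).astype(int)    # per-classifier binary label vectors
    return np.bitwise_or.reduce(binary, axis=0)      # OR fusion across output layers

toy = np.array([[[0.9, 0.2, 0.4]],                   # classifier 1, one instance, 3 labels
                [[0.3, 0.7, 0.1]]])                  # classifier 2
print(ensemble_or(toy))                              # [[1 1 0]]
```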

    Figure 3:Fusion of multiple predictions by OR operation for a multilabel instance

    4 Datasets

    Five datasets are selected from various domains to better represent multi-label classification problems. The datasets are well-cited, standardized, and come with stratified training and testing sets. The domains covered by the datasets are text, biology, and images, as shown in Tab. 1. The Table shows several statistics concerning the selected datasets, such as the numbers of instances and labels. All the datasets discussed and experimented on in this paper were initially published on MULAN [30], an open-source repository accompanying a Java library for learning from multi-label datasets.

    Table 1: Multilabel datasets

    The models' performance on the multi-label classification tasks can be measured in terms of accuracy and speed. The Scikit-multilearn library provides Python compatibility for easier access to and manipulation of these datasets. A larger number of instances provides classifiers with more information that can help identify patterns between the feature space and the labels. Nominal attributes often represent names or categories related to the instance; zip codes, gender, or hair color are examples of nominal attributes. The numerical attributes in the datasets have values that can be measured. One can assume that the larger the number of labels, the more difficult it is to find all labels associated with an instance. Cardinality is another statistic in the dataset table that indicates the average number of labels per instance. In addition, density is measured by dividing cardinality by the number of labels. Finally, distinct is the number of unique label combinations in the dataset.
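
    These three statistics can be computed directly from a binary label matrix; a small sketch with toy values:

```python
import numpy as np

def label_statistics(Y):
    """Cardinality, density, and distinct label sets of a binary label matrix Y."""
    cardinality = Y.sum(axis=1).mean()          # average number of labels per instance
    density = cardinality / Y.shape[1]          # cardinality divided by |L|
    distinct = len({tuple(row) for row in Y})   # number of unique label combinations
    return cardinality, density, distinct

Y = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0], [1, 0, 1]])   # toy label matrix
print(label_statistics(Y))    # (1.75, 0.583..., 3)
```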

    The datasets are generalized, meaning all features are already extracted and ready for training without preprocessing. The following is a brief description of each of the datasets. BibTeX [31] is one of the most recognized text datasets for multi-label classification. An instance in BibTeX contains bibliography items associated with articles, books, or theses. The BibTeX dataset was constructed by collecting items tagged in users' submissions to Bibsonomy, a publication sharing system. The main application of the BibTeX dataset is to support a recommender system that automatically identifies tags of unseen BibTeX instances. Tab. 1 shows that BibTeX has the highest number of labels, distinct label sets, and features compared to the other datasets.

    The second dataset, Genbase [32], belongs to the biology domain and contains protein sequences with classes representing protein families. A protein sequence can belong to one or more protein family classes. It has the lowest number of instances and features among the multi-label classification datasets. On the other hand, the Medical dataset [33] is anonymized clinical text collected from the Department of Radiology at Cincinnati Children's Hospital Medical Center. The clinical texts are radiology reports that include ICD-9-CM code assignments indicating clinical history and radiologists' impressions. Except for the Genbase dataset, the Medical dataset has fewer instances than the other three but has a relatively high number of features and labels.

    In addition to the previous datasets, the Scene dataset [34] includes natural scene images labeled with classes such as beach and sunset. The Scene dataset has the lowest number of labels and distinct label sets. Finally, the Yeast dataset [22] is constructed from micro-array expression data and phylogenetic profiles representing the gene set. Functional classes, such as metabolism and aging, are assigned to each yeast gene. As shown in Tab. 1, genes can be associated with many functional classes, resulting in high cardinality and density values.

    5 Results and Discussion

    5.1 Neural Network Architectures and Experiments Setup

    Deep neural networks are categorized into multiple types based on their architectures and tasks. The dominant types of DNN are Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and Artificial Neural Networks (ANN), also known as feedforward neural networks. CNNs are suited for feature engineering and extraction from images, texts, and audio. On the other hand, RNNs are often associated with learning that requires time series or specifically sequenced data. Multilayer Perceptrons (MLP) are made purely of fully-connected layers and are a special class of feedforward neural networks. Fully-connected layers are the building blocks of MLPs, and they are placed at the lower end of CNNs and RNNs. The multi-label datasets we are experimenting on are generalized and standardized for competitive benchmarking. Since the proposed methodology's objective is to reconstruct the final layer of the neural network, we employ multiple variations of shallow MLP networks to control the experiment environment efficiently. The proposed methodology is extendable to any neural network architecture for multi-label classification.

    The information flow in feedforward neural networks is always forward with no feedback connections; otherwise, the network would be considered an RNN. The classifier maps an instance x_i to the predefined categories y_i by learning the best possible values of the parameters θ, which can be represented as y_i = f(x_i; θ). Tab. 2 shows the entirety of the models built for the experiments along with their numbers of parameters. The baseline models have three layers: an input, a hidden, and an output layer. The other models have a single input layer, a hidden layer, and multiple output layers. The number of neurons in the hidden layer is 200, 300, 400, or 500 for both the baseline models and the models that implement the proposed methodology. To evaluate the proposed methodology's effectiveness in improving MLP network performance on multi-label classification problems, we set the number of output layers to 4, 8, 16, and 20. We implemented 20 neural network models, 4 of which are baseline models, and the remaining 16 are multi-output neural networks.

    Table 2: Neural Networks hidden layer sizes,output layers,and trainable parameters statistics

    The number of trainable parameters steadily increases whenever we add output layers to the neural network (Tab. 2). The trainable parameters from a single output layer to 20 output layers increased by 60%, 30%, 36%, 27%, and 69% for the BibTeX, Genbase, Medical, Scene, and Yeast datasets, respectively. The difference in the increment percentage between datasets is attributed to the number of labels, cardinality, distinct label sets, and other dataset characteristics.

    The output of the hidden layer is transformed using a rectified linear activation function (ReLU) to achieve non-linearity in the mapping between inputs and outputs. The hidden layer and the output layers' weights are initialized using a Glorot uniform initializer that draws samples from a uniform distribution within a predefined range. Although the samples are drawn randomly, seeds are assigned to the random number generators in the global session, the programming language virtual environment, and external libraries to enable experiment reproducibility. Adam is used as the optimization algorithm with the following settings: a learning rate of 0.001, the first exponential decay rate set to 0.9, and the second exponential decay rate set to 0.9. Binary cross-entropy is the loss function for every component in the output layers, with binary accuracy as the evaluation metric in training. The number of training epochs is 300, with the batch size set to 128. Finally, 10% of the training set is used for validation during training.
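
    A sketch of this training configuration in Keras, assuming the `build_ensocom` construction from Section 3.2; the toy training arrays and dimensions are placeholders, not the paper's data:

```python
import numpy as np
from tensorflow import keras

# Toy stand-ins for a training split; in the experiments these come from the datasets.
X_train = np.random.default_rng(0).normal(size=(256, 100)).astype("float32")
Y_train = (np.random.default_rng(1).random(size=(256, 5)) > 0.7).astype("float32")

# `build_ensocom` is the hypothetical helper sketched in Section 3.2.
model = build_ensocom(n_features=100, n_labels=5, hidden_size=500, n_outputs=4)

# Adam settings as stated above; the second decay rate of 0.9 follows the text
# (the Keras default would be 0.999).
optimizer = keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.9)
model.compile(optimizer=optimizer,
              loss="binary_crossentropy",             # loss for every output component
              metrics=["binary_accuracy"])

model.fit(X_train, [Y_train] * len(model.outputs),    # same multi-label target per output
          epochs=300, batch_size=128,
          validation_split=0.1)                       # 10% of the training set for validation
```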

    5.2 Evaluation Metrics

    Unlike single-class and multiclass classification, evaluating multi-label classification models is not straightforward. Most multi-label datasets are imbalanced, invalidating the reliability of traditional accuracy as an evaluation metric. Several evaluation metrics for multi-label classification measure the performance of the classification models. The following is a list of the metrics used in this work with a brief description; a usage sketch follows the list.

    • Weighted F1-Measure: The standard F1-measure is often described as the harmonic mean of precision and recall. Precision can be obtained by dividing the number of true positives (TP) predicted by the model by all the instances marked as positive, which includes the false positives (FP).

    On the other hand, recall is the number of true positives predicted by the model divided by the number of positive instances that the model should have predicted.

    The weighted F1-Measure differs from the standard F1-measure by giving each class in the dataset a weight equal to the class probability. The classes with more instances receive higher weights than those with fewer instances.

    • Micro-F1 Measure: In the micro F1-measure, all the true positives, false positives, and false negatives are computed globally regardless of the classes. Then the micro-precision and micro-recall are calculated based on the micro TP, FP, and FN. Finally, the Micro F1-Measure is obtained by finding the harmonic mean of the micro-precision and micro-recall. The micro F1-Measure is a better evaluation metric for multi-label classification than the weighted F1-Measure since it treats all instances equally.

    • Macro-F1 Measure: The Macro-F1 Measure does not account for data imbalance because it treats all the classes in the dataset equally. The TP, FP, and FN are computed for each class, and the per-class scores are then averaged. Because the classifier will most likely be skewed towards the classes with more examples, the macro score can be dominated by poor performance on the rare classes.

    • Samples-F1 Measure: This metric is one of the better evaluation metrics among the variations of the F1-measure. The Samples-F1 Measure is computed by averaging over the TPs, FPs, and FNs of each instance, which is very helpful in evaluating the performance of classifiers on multi-label classification problems.

    • Subset Accuracy (Exact Match Ratio): Subset Accuracy is calculated by dividing the number of instances for which all class labels in the label set have been correctly predicted by the total number of instances. While subset accuracy is a good metric for evaluating the classifier's ability to identify instances' exact class sets, it disregards the correctly predicted labels within the label sets of the other instances.

    • Hamming Loss: Hamming loss counts the misclassified labels within an instance's label set. It computes the exclusive-or of the predicted label set and the instance's true label set, Y_(i,l) ⊕ X_(i,l), to derive the loss of that instance. The Hamming loss is the total number of mislabels divided by the total number of labels.

    • Ranking Loss: Ranking loss is the percentage of incorrectly ordered classes relative to correctly ordered classes. The lower the ranking loss of a classifier, the better it is. The ground-truth binary matrix of the ordered labels is defined as y ∈ {0,1}^(n_instances × n_classes). Each class is associated with a score denoted as ŷ ∈ R^(n_instances × n_classes). The ranking loss can be expressed as follows:

    RankingLoss(y, ŷ) = (1 / n_instances) Σ_i |{(k, l) : ŷ_ik < ŷ_il, y_ik = 1, y_il = 0}| / ( ‖y_i‖_0 · (n_classes − ‖y_i‖_0) )

    where ‖·‖_0 is the ℓ0 norm and |·| is the cardinality of the set.
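
    The sketch below computes these metrics with scikit-learn on toy predictions; the toy matrices are illustrative only, and the ranking loss takes confidence scores rather than thresholded labels:

```python
import numpy as np
from sklearn.metrics import (f1_score, accuracy_score, hamming_loss,
                             label_ranking_loss)

Y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])           # toy ground truth
Y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 0]])           # toy binary predictions
scores = np.array([[0.8, 0.2, 0.4], [0.1, 0.9, 0.3], [0.7, 0.4, 0.2]])  # confidences

print(f1_score(Y_true, Y_pred, average="weighted", zero_division=0))
print(f1_score(Y_true, Y_pred, average="micro", zero_division=0))
print(f1_score(Y_true, Y_pred, average="macro", zero_division=0))
print(f1_score(Y_true, Y_pred, average="samples", zero_division=0))
print(accuracy_score(Y_true, Y_pred))           # subset accuracy (exact match ratio)
print(hamming_loss(Y_true, Y_pred))
print(label_ranking_loss(Y_true, scores))       # ranking loss uses the raw scores
```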

    5.3 Experiments Results

    We examine the performance of the 20 intelligent models in classifying the labels of the 5 multi-label classification datasets. As seen in Figs. 4-8, the models behaved differently on the BibTeX, Genbase, Medical, Scene, and Yeast datasets. We selected four evaluation metrics out of the seven included in this work to be plotted in the figures for simplicity and clarity: Weighted-F1 Measure, Subset Accuracy, Hamming Loss, and Ranking Loss. At some point, the ENSOCOM models outperformed those with a single output layer on all the datasets. Sometimes models with the same number of output layers and hidden layer size improved on some metrics, but not all.

    Figure 4: Shows the evaluation of multiple variations of ENSOCOM models in comparison to single-output network models based on the W-F1↑, Sub-Acc↑, Ham-Loss↓, and Rank-Loss↓ metrics for the BibTeX dataset

    Figure 5:Shows the evaluation of multiple variations of ENSOCOM models on the Genbase dataset

    Figure 6:Shows the evaluation of multiple variations of ENSOCOM models on the Medical dataset

    Figure 7: Shows the evaluation of multiple variations of ENSOCOM models on the Scene dataset

    Figure 8: Shows the evaluation of multiple variations of ENSOCOM models on the Yeast dataset

    Fig. 4 shows that the multi-output models achieved better scores on the BibTeX dataset depending on the number of output layers. For instance, the best Weighted-F1 score for networks with 100 and 500 neurons in the hidden layer was obtained with 16 output layers. The models with 300 and 400 neurons in the hidden layer reported their highest W-F1 score with 20 output layers. Models with fewer neurons in the hidden layer had better Subset Accuracy with more output layers. The improvement of the models in terms of the Hamming Loss score is much more evident than for the previous two metrics; all the ENSOCOM models reported better Hamming Loss than the single-output model. Similar to Weighted-F1, models with fewer neurons need more output layers to outperform the single-output model on the Ranking Loss metric. To evaluate the proposed methodology better, Tab. 3 shows the mean of all the evaluation metrics achieved by the ENSOCOM models. By comparing the single-output performance to the mean of the results obtained from the ENSOCOM models, we found that ENSOCOM models with 16 output layers outperformed the other models on all metrics except Ranking Loss.

    Table 3: Illustrates a comprehensive evaluation of the ENSOCOM and single-output network models over hidden layer sizes on the BibTeX dataset

    The performance of the ENSOCOM models is better recognized on the Genbase dataset. The single-output-layer models never outperformed the ENSOCOM models on any evaluation metric regardless of the number of output layers, as illustrated in Fig. 5. The difference between the ENSOCOM models' and single-output models' performance becomes apparent in Tab. 4. The Table shows that the ENSOCOM model with 500 hidden layer neurons reported the highest scores for all evaluation metrics.

    The size of the hidden layer and the number of output layers impact the performance of the ENSOCOM models on the Medical dataset. Models with larger hidden layers are better than those with smaller ones. We can see a tradeoff between the Weighted-F1 Measure and the Hamming Loss evaluation metric, where the ENSOCOM models reported better loss but a lower F1-score than the single-output model (Fig. 6). However, the overall performance of ENSOCOM models with 500 neurons in the hidden layer outperformed the baseline models (Tab. 5). Moreover, the ENSOCOM model variations outperformed all the Scene dataset baseline models, see Fig. 7. The difference between the baseline models' performance and that of the proposed models is more significant for the small-sized models than for the larger ones. However, models with more neurons in the hidden layer showed the potential for improvement with more output layers. Unlike the previous datasets, the best overall performance of the ENSOCOM models was not limited to models with a high number of neurons (Tab. 6). The same applies to the performance of the ENSOCOM models on the Yeast dataset (Fig. 8 and Tab. 7). Hence, these two datasets do not need models with many output layers for most evaluation metrics. We further validated the proposed method's performance by computing a paired t-test to compare the means of the results obtained by the single-output and ENSOCOM models on all the datasets. The P-value of the paired t-test is 8.882e-15, which means the null hypothesis is rejected and the improvement of ENSOCOM over the single-output models is statistically significant.
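
    A paired t-test of this kind can be computed with SciPy as sketched below; the paired values are hypothetical placeholders, not the paper's actual results:

```python
from scipy import stats

# Hypothetical paired values: each pair is one (dataset, metric, configuration) result
# for the single-output baseline and the matching ENSOCOM model.
single_output = [0.41, 0.55, 0.63, 0.72, 0.60]
ensocom       = [0.44, 0.59, 0.66, 0.74, 0.63]

t_stat, p_value = stats.ttest_rel(ensocom, single_output)
print(t_stat, p_value)   # a small p-value rejects the null hypothesis of equal means
```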

    Table 4:Illustrates a comprehensive evaluation of the baseline and proposed models on the Genbase dataset

    Table 5: Comprehensive evaluation of baseline and ENSOCOM models on the Medical dataset

    Table 6: Comprehensive evaluation of baseline and ENSOCOM models on the Scene dataset

    Table 7: Comprehensive evaluation of baseline and ENSOCOM models on the Yeast dataset

    The previous experiments showed that ENSOCOM network models are better than traditional ones. On some datasets, the degree of improvement is more significant than on others. To study the reasons behind this, we conduct a correlation analysis between the degree of improvement of all ENSOCOM models and the datasets' cardinality, density, and distinct label sets. We used two correlation algorithms known as Spearman's rank correlation and Pearson correlation. Tab. 8 shows the correlation coefficients between the evaluation metrics and the size of the hidden layers in ENSOCOM. There is no clear pattern for either coefficient over the hidden layer size variations. The correlation coefficients for the loss metrics are more consistent over the different network models than those for Weighted-F1 and Subset Accuracy.
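
    Both coefficients can be obtained from SciPy as sketched below; the paired values are hypothetical placeholders rather than the paper's measurements:

```python
import numpy as np
from scipy import stats

# Hypothetical pairs: a dataset statistic (e.g., cardinality) against the improvement
# an ENSOCOM model achieved on some metric, one value per dataset.
dataset_stat = np.array([2.40, 1.25, 1.25, 1.07, 4.24])
improvement  = np.array([0.020, 0.015, 0.010, 0.008, 0.030])

print(stats.pearsonr(dataset_stat, improvement))     # linear correlation, p-value
print(stats.spearmanr(dataset_stat, improvement))    # rank correlation, p-value
```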

    Table 8: Spearman Rank and Pearson Correlation values of Card., Dens., and Dist. to the ENSOCOM models' metrics

    Spearman's rank and Pearson agree on the negative relationship between a dataset's distinct label sets and Weighted-F1: the higher the number of distinct label sets in the dataset, the lower the Weighted-F1. Moreover, Subset Accuracy suffers from the same problem and is highly affected by the number of distinct label sets in the multi-label datasets. On the other hand, higher dataset density results in better Subset Accuracy, so both move in the same direction. While density has a negative relationship with Hamming Loss and Ranking Loss, this is beneficial since the lower the loss, the better the model. Also, Pearson consistently reports a significantly negative relationship between cardinality and Hamming Loss. Spearman's rank agrees with Pearson on the negative relationship between cardinality and Hamming Loss, but with lower confidence.

    6 Conclusion

    ENSOCOM showed that an ensemble of multiple output-layer-based classifiers sharing the same hidden layer reports better results than a conventional single-output-layer neural network. By experimenting with various multi-label data domains, ENSOCOM improved the network's prediction performance regardless of the data type. We found that the optimal number of output layers and hidden layer sizes are data-dependent and do not generalize. The change in the number of parameters from single to multiple output layers also depends on the dataset's statistical characteristics. There remain many research directions to explore further applications of ENSOCOM. The proposed method can be incorporated into more complex deep neural networks for better evaluation.

    Acknowledgement:The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code:(22UQU4340018DSR02).

    Funding Statement:The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code:(22UQU4340018DSR02).

    Conflicts of Interest:The author declares that they have no conflicts of interest to report regarding the present study.
