
    Sentiment Analysis on Social Media Using Genetic Algorithm with CNN

2022-03-14 09:25:16
    Computers, Materials & Continua, 2022, Issue 3

    Dharmendra Dangi,Amit Bhagat and Dheeraj Kumar Dixit

Department of Mathematics and Computer Applications, Maulana Azad National Institute of Technology (MANIT), Bhopal, Madhya Pradesh, India

Abstract: There are various intense forces causing customers to share evaluative data on social media platforms and microblogging sites. Today, customers throughout the world share their points of view on all kinds of topics through these sources. The massive volume of data created by these customers makes it impossible to analyze such data manually. Therefore, an efficient and intelligent method for evaluating social media data and their divergence needs to be developed. Various tools and techniques are now available for automatically classifying sentiments. Sentiment analysis here involves determining people's emotions from their facial expressions, and it can be performed for any individual based on a specific incident. The present study describes the analysis of an image dataset using CNNs with PCA intended to detect people's sentiments (specifically, whether a person is happy or sad). This process is optimized using a genetic algorithm to obtain better results. Further, a comparative analysis has been conducted between the different models generated by changing the mutation factor, performing batch normalization, and applying feature reduction using PCA. These steps are carried out across five experiments using the Kaggle dataset. The maximum accuracy obtained is 96.984%, which is associated with the Happy and Sad sentiments.

    Keywords: Sentiment analysis; convolutional neural networks; facial expression; genetic algorithm

    1 Introduction

The continuous increase in social awareness worldwide has been accompanied by an increase in the popularity of social networks such as Twitter. Twitter is one of the most popular social media platforms, where anyone can post tweets to freely express their thoughts and feelings about anything. Low internet fees, inexpensive portable devices, and social responsibility have encouraged people to tweet about various events. For these reasons, Twitter contains a massive amount of data. Tweets cannot exceed 140 characters, meaning that people need to choose their words carefully when expressing their sentiments. They can also augment their posts with images to express their feelings.

Sentiment analysis (or opinion mining) is the assessment of people's opinions or emotions from the text and image data that they provide through, for instance, their tweets. For any given individual, sentiment analysis is based on the specific incident that they are expressing their feelings about [1].

Facial identification comprises three phases: detecting a face, extracting its features, and recognizing the face (Fig. 1). When analyzing facial expressions, much care must be taken to enhance the process of capturing facial expressions that correspond to human characteristics, as this could provide an effective way for people to interact with facial recognition systems.

    Figure 1: Three main phases of face recognition

Accurate face recognition has countless uses. Connecting the regions of a face to various expressions serves purposes such as identifying people; controlling access; recording mobile videos; and improving applications like video conferencing, forensics, human-computer interfaces, automatic monitoring, and cosmetology [2].

Many techniques are currently being used to improve face identification methods. Such techniques are highly diverse in terms of what factors they consider: some techniques focus on environmental features, whereas others might deal with direction, appearance, the effect of light, or facial features [3].

Deep learning techniques play an essential role in the classification process. Deep learning is a subcategory of machine learning that deals with the architecture of neural networks [4]. Various deep learning algorithms have been designed to solve many complex real-world problems. Convolutional neural networks (CNNs) are relevant to the present study, as they are used for classification purposes [5]. In this work, CNNs are used to classify images according to the emotion or sentiment they convey. A genetic algorithm (GA) is also used to perform a hyperparameter search to determine which CNN performs best for the task at hand. The GA's hyperparameters are tuned across five different experiments to attain optimal results.

The remainder of this article is organized as follows: Part 2 discusses existing sentiment analysis techniques. Part 3 describes the integration of the CNN model with the GA for classification. Part 4 presents and analyzes the simulation results. Part 5 presents the conclusions and suggests future research directions.

2 Literature Review

The concept of face recognition has been widely studied since it is relevant to everyone, and the corresponding techniques are easy to use, non-obtrusive, and can be extended if one is willing to accept the extra cost. Many complementary concepts have also been developed over the past two decades through deliberations and research. Such research shares similarities with work in fields such as surveillance, secure terminals, closed-circuit television (CCTV) control, client validation, human-computer interfaces (HCIs), and intelligent robots, among others.

Several facial recognition techniques have been proposed, including techniques that recognize face architecture. However, no consensus has been reached regarding which design is best, which matters most in unconstrained surroundings. One approach, for instance, hybridizes PCA and the artificial bee colony (ABC) algorithm with a CNN for enhancement according to demand. The final evaluation in that work uses the false acceptance rate, false rejection rate, and accuracy [6].

Sentiment classification for movie reviews has been performed using a hybrid classification design. The incorporation of various features and classification techniques, namely the naïve Bayes-genetic algorithm (NB-GA), has been studied, and its accuracy has been assessed. The hybrid NB-GA is more effective than the base classifiers, and the GA remains more effective than NB [7].

The polarity of a document is also an essential aspect of text mining. Feature engineering with tree kernels has been discussed previously [8]. This technique yields better results than other techniques. The authors who proposed this technique defined two classification models (i.e., two-way and three-way classification). In two-way classification, sentiments are classified as either positive or negative; in three-way classification, sentiments are classified as positive, negative, or neutral.

The authors considered a tree-based technique for representing tweets in the tree kernel method. The tree kernel-based model achieved better accuracy than the best feature-based model. The results showed that this technique performed 4% better than a unigram model.

A hierarchical sentiment analysis approach can also be used for cascaded classification. The authors cascaded three classifications (objective vs. subjective, polar vs. non-polar, and positive vs. negative) to construct a hierarchical model. This model was compared with a four-way classification (objective, neutral, positive, negative) model. The hierarchical model outperformed the four-way classification model [9].

A domain-specific feature-based model for movie reviews has been developed by [10]. Here, the aspect-based technique is used, which analyzes text movie reviews and assigns a sentiment label to them based on the aspect. Each aspect is then aggregated from multiple reviews, and the sentiment score of a specific movie is determined. The authors used a sentiment WordNet-based technique for feature extraction and to compute document-level sentiment analysis. The results were compared with those obtained using Alchemy API. The feature-based model provided better results than the Alchemy API-based model.

In short, aspect-wise sentiment results are better than document-wise results. A sentiment classifier model was previously constructed to classify tweets as positive, negative, or neutral. Specific face recognition techniques have also been involved in the design of holistic, feature-based, and hybrid techniques. These designs comprise principal component analysis (PCA) and two-dimensional principal component analysis (2DPCA), derived from PCA [11].

A model has been developed using PCA, SVM, and the GA for facial recognition. This model presented the highest accuracy (99%) on the face database of the Institute of the Chinese Academy of Sciences [12]. An automatic selection of a CNN architecture using a GA has been developed for image classification. An experiment was conducted to estimate the accuracy of the models, with the CNN-GA model achieving the highest accuracy (96.78%) [13].

    A genetic-based approach was designed to detect unknown faces via matching with training images.The results yielded by this approach were compared with those provided by PCA and LDA.The GA-based approach outperformed the PCA and LDA approaches [14].

In other work, a hybrid face recognition system was developed by combining CNNs and SVM. The best CNN architectures were obtained with a GA. The last CNN layer was then combined with SVM to optimize the results [15]. Other hybrid designs comprise techniques based on modular eigenfaces [16]. In addition, researchers have applied k-means clustering and its derivatives to face recognition applications due to their computational efficiency. Autoencoders, including deep autoencoders [17], have been employed broadly and remain complex aspects of face recognition. Although CNN-based techniques require a long training time, they have become the most broadly used techniques for all image processing simulations and facial recognition applications because of their favorable spatial feature consideration capabilities.

Very few works have investigated the use of CNNs and GAs for facial recognition purposes. Therefore, in this work, CNNs are implemented with a GA to improve facial recognition results. Moreover, an analysis of changing the mutation factor has not previously been performed in sentiment analysis. A detailed description of the methodology is given in the next section.

    The main contributions made in this paper are as follows:

• Sentiment analysis is an essential topic of significant relevance in the modern world. This paper describes the use of CNNs with GAs for sentiment analysis.

    • This paper serves as a reference for anyone who wants to work on the advancement of CNNs and GAs.

    • A comprehensive study across five experiments shows how varying the mutation factor in the GA affects a CNN's accuracy.

    • The final model provides favorable results for feature reduction using PCA. The results show that dimensionality reduction significantly improves accuracy.

    3 Proposed Methodology

Coding and translating techniques were integrated for the proposed technique. Initially, the data were processed and modified to match the dataset to the proposed method. Fig. 2 provides the architectural design for the proposed sentiment analysis. The primary task was to design a CNN capable of localizing human faces and accurately classifying their facial emotions. Faces were broadly classified as either Happy (positive emotion) or Sad (negative emotion). The main problem faced when training deep learning architectures is the proper selection of hyperparameters, which ultimately governs the overall predictive capability of the model. The hyperparameter selection process used here was inspired by natural evolution, in which a species becomes more powerful and adaptive through a continuous process of mutation and natural selection.

    The models were defined such that the hyperparameters represented a unique signature of the models, similar to the DNA of real-world individuals.The complete model training process is outlined as follows:

1. A group of random models with unique hyperparameter combinations was generated as the first-generation models [18].

    2. All models were trained with the facial emotion recognition dataset, and each model's accuracy was calculated as its fitness score.

    3. Top-performing models from the previous generation were selected. New models were then generated from these top-performing models by combining their hyperparameters.

    4. For the newly generated models (i.e., the next generation), the process was repeated (starting from step 2) until the desired model was obtained.

    Figure 2: Overview of the architectural design

    3.1 Dataset

The dataset was taken from a Kaggle facial expression recognition challenge [19]. The dataset contained cropped 48x48 greyscale face images, each labeled with the corresponding emotion, collected from multiple sources including social media, movies, and the Internet. The overall dataset was divided into seven emotion categories (Angry, Disgust, Fear, Happy, Sad, Surprised, and Neutral). The Happy and Sad subsets (see Fig. 3) of the dataset were the most suitable for the current task.

    Figure 3: Sample of image dataset (Happy and Sad)

    3.2 Dataset Pre-Processing

The Happy and Sad subclasses of the dataset were used as positive and negative sentiments, respectively, during training. Six thousand samples were randomly selected from the dataset for each class. These samples were then used for training and validation. The sampled dataset was split into 80% training and 20% validation subsets by random selection. The images' pixel values were also divided by 255 to normalize them to the range 0-1, thus making the data input into the model more generalizable. This process, in turn, improved the learning and regulation of the model weights [20].
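As a sketch of the pre-processing just described (the array names `images` and `labels` and the function name are illustrative, not from the paper), the normalization and 80/20 split might look like:

```python
import numpy as np

def preprocess(images, labels, train_frac=0.8, seed=42):
    """Normalize pixel values to [0, 1] and split into train/validation sets.

    `images` is assumed to be a uint8 array of shape (N, 48, 48) and
    `labels` a 0/1 array (Sad/Happy).
    """
    images = images.astype("float32") / 255.0  # scale 0-255 pixels into 0-1
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(images))         # random 80/20 split
    cut = int(train_frac * len(images))
    train_idx, val_idx = idx[:cut], idx[cut:]
    return (images[train_idx], labels[train_idx],
            images[val_idx], labels[val_idx])
```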

    3.3 General CNN Architecture

The overall CNN architecture was divided into three blocks, with each block containing two convolution layers and one MaxPooling layer. These blocks were then followed by a Flatten layer and a Dense layer, which were finally linked to a two-node Dense output layer, with each node representing a class from our dataset (i.e., positive and negative). The basic block used in the model architecture is depicted below in Fig. 4.

    Figure 4: CNN model architecture block

The convolution layers used in the blocks use "same padding" to ensure that the input and output image sizes of these convolution layers are the same. As a result, the output of each block is just half of the input size (e.g., 48x48xc1 → block → 24x24xc2). In this way, we ensured a proper input size was available for the next block regardless of the filter size and filter counts used in the block.
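Assuming TensorFlow/Keras (which the paper uses for its training callbacks), the block and the three-block model described above could be sketched as follows; the 64-unit Dense layer and activation choices are illustrative assumptions, not taken from the paper:

```python
from tensorflow.keras import layers, models

def build_block(filter_size, filter_count):
    """One architecture block: two same-padding Conv2D layers followed by
    a 2x2 MaxPooling layer that halves the spatial dimensions."""
    return [
        layers.Conv2D(filter_count, filter_size, padding="same", activation="relu"),
        layers.Conv2D(filter_count, filter_size, padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
    ]

def build_model(filter_sizes, filter_counts, input_shape=(48, 48, 1)):
    """Assemble the three-block CNN with a Flatten/Dense head."""
    model = models.Sequential([layers.Input(shape=input_shape)])
    for fs, fc in zip(filter_sizes, filter_counts):
        for layer in build_block(fs, fc):
            model.add(layer)
    model.add(layers.Flatten())
    model.add(layers.Dense(64, activation="relu"))
    model.add(layers.Dense(2, activation="softmax"))  # Happy / Sad
    return model
```

With a 48x48 input, the spatial size halves through each block (48 → 24 → 12 → 6) before the Flatten layer.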

The output image dimension for any given padding value p is derived with the formula in Eq. (1) [21]:

    n_out = ⌊(n + 2p − f) / s⌋ + 1    (1)

    where

    n → input image dimension, p → padding to be used, f → filter size in the same dimension,

    s → filter stride value for the same direction

When the above formula is applied to both input image dimensions (i.e., height and width), the output image size is calculated using Eq. (2):

    n1_out × n2_out = (⌊(n1 + 2p1 − f1) / s1⌋ + 1) × (⌊(n2 + 2p2 − f2) / s2⌋ + 1)    (2)

    where

    subscript 1 → values corresponding to the first dimension of the images (height),

    subscript 2 → values corresponding to the second dimension of the images (width)

In cases where same padding is used, the required output size must equal the input size in both dimensions, as shown in Eq. (3):

    n = ⌊(n + 2p − f) / s⌋ + 1    (3)

    Therefore, to keep the input image and output image size equal for each convolution layer, the padding value needs to be updated dynamically according to the corresponding filter size, input and output image size, and filter stride value.

The padding value required to implement same padding can be derived from Eq. (3) as Eq. (4) [22]:

    p = ((n − 1)s − n + f) / 2, which for stride s = 1 reduces to p = (f − 1) / 2    (4)
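Eqs. (1) and (4) can be captured in two small helper functions (the function names are illustrative):

```python
import math

def conv_output_dim(n, p, f, s):
    """Eq. (1): output dimension of a convolution along one axis."""
    return math.floor((n + 2 * p - f) / s) + 1

def same_padding(n, f, s=1):
    """Eq. (4): padding that keeps the output size equal to the input size.
    For stride 1 this reduces to p = (f - 1) / 2."""
    return ((n - 1) * s - n + f) / 2
```

For example, a 3x3 filter with stride 1 on a 48-pixel axis needs padding 1 to preserve the size.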

Since the Convolution2D layers used same padding as described above, the input image and output filter map sizes were the same for both Convolution2D layers in the block. Finally, these layers were followed by the MaxPooling layer, which produced the block's output.

    Using the output image dimension formula, we can define the output image size for the blocks, as the input image size and output image size were the same for the Convolution2D layers.Also, the input image size for the MaxPooling layer was the same as that of the input image size of the block.

    For the MaxPooling layer, filter size →(2x2), filter stride →(2x2), padding →0.

Therefore, the output image dimension formula reduces to Eq. (5) [23]:

    n_out = ⌊(n − 2) / 2⌋ + 1 = n / 2 (for even input size n)    (5)

Using a MaxPooling layer with the above configuration therefore halves the input image size. Combining the output of the same-padding Convolution2D layers with the MaxPooling layer, the resulting block output size can be calculated using Eq. (6):

    (n1, n2, nc) → block(fs, fc) → (n1/2, n2/2, fc)    (6)

    where

    n1 → input image height, n2 → input image width, nc → number of channels in the input image,

    fs → filter size for the block, fc → filter count for the block

The above equation clarifies that each block halves the image in both dimensions (height and width). Also, the output channel count equals the block's filter count value.

Only the filter size and filter count are needed for the blocks to function completely. Therefore, a block can be uniquely defined based solely on the values of these two parameters. The same square filter size and filter count are used in both Convolution2D layers. The MaxPooling2D layer's kernel/filter size is fixed at 2x2.

Since the CNN architecture comprises three blocks (see Fig. 5), a CNN model's complete architecture can be defined if the filter sizes and filter counts of each block are known. These filter sizes and filter counts represent the genes of the CNN model or individual.

    Figure 5: Model gene visualization

The filter sizes and filter counts depicted in the above figure are represented with the help of two Python lists: one for filter sizes and another for filter counts. Each list's length equals the number of blocks (three in the present case). Based on the above figure, the model gene can be represented as follows:

    ((Filter_size_1, Filter_size_2, Filter_size_3), (Filter_count_1, Filter_count_2, Filter_count_3))
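A minimal sketch of this gene representation in Python, using the selection ranges from the experiments (the helper name `random_gene` is an assumption for illustration):

```python
import random

# A gene uniquely defines one CNN individual: per-block filter sizes
# and filter counts (three blocks, as in the architecture above).
def random_gene(num_blocks=3, max_filter_size=20, max_filter_count=100):
    """Sample a random gene; the ranges mirror the GA configurations
    used in the experiments."""
    sizes = tuple(random.randint(1, max_filter_size) for _ in range(num_blocks))
    counts = tuple(random.randint(1, max_filter_count) for _ in range(num_blocks))
    return (sizes, counts)
```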

    3.4 Genetic Algorithm Approach

The primary task of the GA approach is to find the filter size and filter count combination, or gene (collectively), that produces the best results for the problem at hand when used to generate a CNN.

    Steps

1. Random initialization of the population

    • Random filter size and filter count values are drawn from the given selection range.

    • The population size is maintained at 10 individuals.

    2. Training of individuals

    • Individuals are trained with the given training dataset.

    • TensorFlow's early stopping callback is also employed to stop training if the validation loss has not decreased over the last 10 epochs.

    3. Calculating individuals' fitness

    • Each individual's validation accuracy is considered as its fitness.

    4. Selecting the top-performing individuals based on their fitness

    • The top four individuals are selected based on their fitness scores.

    5. Repopulating the population with the selected individuals

    • Ten combinations are generated from the top four individuals to produce 10 new child individuals.

    6. Repeating the process from step 2 to step 5 until an individual with the desired fitness is obtained
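The steps above can be sketched as follows. The `fitness_fn` argument stands in for training a CNN and returning its validation accuracy (far too expensive to inline here); the exact crossover and mutation operators are illustrative assumptions consistent with the description:

```python
import random

def crossover(parent_a, parent_b):
    """Mix two parent genes position by position."""
    (sa, ca), (sb, cb) = parent_a, parent_b
    sizes = tuple(random.choice(pair) for pair in zip(sa, sb))
    counts = tuple(random.choice(pair) for pair in zip(ca, cb))
    return (sizes, counts)

def mutate(gene, max_filter_size=15, max_filter_count=100):
    """Randomly replace one filter size and one filter count."""
    sizes, counts = list(gene[0]), list(gene[1])
    sizes[random.randrange(len(sizes))] = random.randint(1, max_filter_size)
    counts[random.randrange(len(counts))] = random.randint(1, max_filter_count)
    return (tuple(sizes), tuple(counts))

def run_ga(fitness_fn, init_population, max_generations=10,
           needed_fitness=0.9, mutation_factor=0.6, top_k=4):
    """Steps 2-6 above: evaluate, select the top performers, repopulate
    by crossover, and mutate a fraction of the children."""
    population = list(init_population)
    best_gene, best_fit = None, -1.0
    for _ in range(max_generations):
        scored = sorted(((fitness_fn(g), g) for g in population), reverse=True)
        if scored[0][0] > best_fit:
            best_fit, best_gene = scored[0]
        if best_fit >= needed_fitness:
            break
        parents = [g for _, g in scored[:top_k]]
        children = [crossover(*random.sample(parents, 2))
                    for _ in range(len(population))]
        population = [mutate(c) if random.random() < mutation_factor else c
                      for c in children]
    return best_gene, best_fit
```

A cheap stand-in fitness function (e.g., a function of the gene values alone) is enough to exercise the loop before plugging in real CNN training.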

    4 Performance Analysis

    This section describes the simulation analysis performed for face sentiment recognition using the proposed CNN-GA.

    4.1 Simulation Arrangement

A complete simulation of the proposed CNN-GA architecture was carried out in Python. The experiments were run on a PC with Windows 10, 4 GB of RAM, and an Intel i5 processor.

    4.2 Experimental Analysis

The proposed CNN-GA is simulated based on the true negative rate (TNR), true positive rate (TPR), and accuracy. All of this is evaluated using the classification report of the best model. The ROC-AUC curve for the best model is also shown. The classification report describes the accuracy, recall, precision, F1 score, and support. These values can be calculated with a confusion matrix [24]. The formulas used to calculate accuracy, recall, precision, and F1 score are as follows:

Accuracy: Accuracy measures how close the classifier's predictions are to the true labels, as determined by Eq. (7):

    Accuracy = (TP + TN) / (TP + TN + FP + FN)    (7)

A true positive (TP) occurs when a value is predicted to be positive and is confirmed to be positive. A false positive (FP) occurs when a value is predicted to be positive but is actually negative. A true negative (TN) occurs when a value is predicted to be negative and is confirmed to be negative. A false negative (FN) occurs when a value is predicted to be negative but is actually positive.
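Given confusion-matrix counts, the classification-report values can be computed as follows (a generic sketch of the standard definitions, not the authors' code):

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard metrics from confusion-matrix counts (Eq. (7) for accuracy)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)        # true positive rate (TPR)
    f1 = 2 * precision * recall / (precision + recall)
    tnr = tn / (tn + fp)           # true negative rate (TNR)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1, "tnr": tnr}
```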

Mean generation fitness: This parameter summarizes the overall performance of all N individuals in a given generation during GA training, as shown in Eq. (8):

    mean generation fitness = (fitness_1 + fitness_2 + … + fitness_N) / N    (8)

    Pseudo Code for the CNN-GA:

Pseudo Code 1: CNN-GA
    1. Initialize a population of random individuals with constrained hyperparameters
    2. Train every individual and compute its validation accuracy as its fitness
    3. Select the top-performing individuals
    4. Generate the next generation via crossover and mutation of the selected individuals
    5. Track the best-performing individual found so far
    6. Repeat from step 2 until the desired fitness or the generation limit is reached

Pseudo Code 1 outlines the overall approach followed by the GA to find the best-performing hyperparameter configuration. The process starts with the initialization of a population of randomly generated individuals with constrained parameters. All individuals are then trained, and their capabilities and performance are verified. Next, a new generation of individuals is formed via the crossover and mutation of the best-performing individuals from the previous generation. Throughout this process, the best-performing individual, defined as the one whose gene or hyperparameter configuration best fits the problem at hand, is tracked.

    4.3 Simulation Results

The architecture is simulated with different hyperparameters to obtain better results. Five different experiments are conducted to improve the accuracy of the model. The models are compiled using the 'adam' optimizer. As techniques ranging from basic to advanced are considered in this work to increase the model's accuracy, it is understandable that the models evolved across generations of the GA-based approach.

    Experiment 1: All default values

    The configurations used for this initial baseline experiment for the GA training are:

population_size = 10, max_filter_size = 20, max_filter_count = 100, max_generations = 10, max_epochs = 20, mutation_factor = 0.7, num_blocks = 3, needed_fitness = 0.9

    Results obtained:

    The best model had a fitness value of 0.847 with gene ((5, 1, 1), (72, 83, 101)).The maximum mean generation fitness obtained was 0.6437.The ratio of dummy individuals to total individuals was 92:100.

    Observations:

Since this is a two-class classification problem, an individual that always predicts a single class can easily achieve 50% accuracy. We refer to such individuals as dummy individuals. Due to the high mutation factor (i.e., 70%), variance was high among the repopulated individuals, so a significant part of the population did not properly inherit their parents' genes.

    As a result, most models had approximately 50% accuracy (dummy individuals). Since the maximum filter size was 20, the block's input size was sometimes smaller than the filter size. Because those blocks served as identity layers (i.e., they merely passed the input through to the output without applying filters), they contributed nothing to individuals' fitness. Only 8% of individuals learned properly; all others were dummy individuals. The result was weak child generations.

    Experiment 2: Mutation factor 0.3

In this experiment, we tried to solve the problem observed in Experiment 1 (i.e., the high number of dummy individuals, which cannot specialize in the classification problem at hand). This problem was addressed by reducing the mutation factor from 0.7 to 0.3 (i.e., 30% of the individuals in the new generation were mutated).

    Configuration used for the GA:

population_size = 10, max_filter_size = 20, max_filter_count = 100, max_generations = 10, max_epochs = 20, mutation_factor = 0.3, # earlier = 0.7, num_blocks = 3, needed_fitness = 0.9

    Results obtained:

The best model in this experiment had a fitness value of 0.8894 with gene ((2, 6, 6), (93, 29, 16)). The maximum mean generation fitness obtained was 0.8626. The ratio of dummy individuals to total individuals was 11:100.

    Observations:

By reducing the mutation factor, we obtained a high number of learner individuals that could efficiently classify emotions. However, further observation showed that many individuals remained the same across generations due to the low mutation factor. The whole process depended substantially on the first generation's genes, which prevented the population from acquiring different genes that, in turn, could have led to better results.

    Experiment 3: Mutation factor 0.6

Reducing the mutation factor limited the GA's ability to generate high-performing mutated individuals, so the properties of generated individuals resembled those of randomly generated individuals. Therefore, in this experiment, a mutation factor of 0.6 was selected (i.e., 60% of the generated individuals were mutated).

    The configurations used in this experiment are as follows:

population_size = 10, max_filter_size = 15, max_filter_count = 100, max_generations = 10, max_epochs = 20, mutation_factor = 0.6, # default = 0.7, num_blocks = 3, needed_fitness = 0.9

    Results:

The best model in this experiment had a fitness value of 0.8794 with gene ((3, 2, 6), (101, 64, 16)). The maximum mean generation fitness obtained was 0.7983. The ratio of dummy individuals to total individuals was 31:100.

    Observations:

These configurations provided better results in terms of the learner and dummy individual counts. In this case, 69% of the individuals were capable of learning to classify our dataset; however, we still failed to achieve the desired fitness. The problem of redundant blocks serving as identity layers was also reduced when max_filter_size was decreased. While investigating the reason for the lower accuracy, we examined the individuals' training and validation losses, which revealed heavy overfitting to the training data.

    Experiment 4: Mutation factor 0.6 and batch normalization

As observed in Experiment 3, we achieved a reasonable dummy-to-learner ratio, and the problem of blocks working as identity layers was also reduced. Even so, the individuals exhibited overfitting, resulting in high training accuracy (approx. 96-98%) and low validation accuracy (80-83%). To overcome this problem, we added batch normalization and dropout layers to the blocks (see Fig. 6). At this point, the blocks were as follows:

    Figure 6: Modified CNN model architecture block with additional regularization layers

    Configurations used for the GA in this experiment:

population_size = 10, max_filter_size = 12, # default = 20, max_filter_count = 100, max_generations = 10, max_epochs = 20, mutation_factor = 0.6, # earlier = 0.7, num_blocks = 3, needed_fitness = 0.9

    Results obtained:

The best model in this experiment had a fitness value of 0.8738 with gene ((2, 3, 6), (93, 38, 16)). The maximum mean generation fitness obtained was 0.8536. The ratio of dummy individuals to total individuals was 0:100.

    Observations:

After adding batch normalization, all of our models became learners, a strong sign that the population generates better next-generation individuals, as indicated by an upward trend in mean generation fitness. However, inspection of the training and validation accuracies showed that the individuals were still overfitting. When the best individual from this experiment was tested on real-world images, it could not classify the sentiments correctly, showing poor generalization.

    Experiment 5: Using the Happy and Sad subsets

After a thorough investigation, the overfitting was found to be due to an insufficiently large Angry subset in the original dataset. To overcome this, Experiment 4 was repeated with the same configurations on the Happy and Sad subsets of the original dataset. This yielded very promising results, as the best individual's fitness reached 0.9038.

    The following configurations were applied to the GA:

population_size = 10, max_filter_size = 15, # default = 20, max_filter_count = 100, max_generations = 10, max_epochs = 20, mutation_factor = 0.6, # default = 0.7, num_blocks = 3, needed_fitness = 0.9

    Results:

The best model in this experiment had a fitness value of 0.9054 with gene ((3, 4, 16), (101, 98, 101)). The maximum mean generation fitness obtained was 0.8687. The ratio of dummy individuals to total individuals was 2:100.

    Observations:

• The overfitting problem was reduced by shifting to the Happy and Sad subsets of the dataset.

    • The model generalized well to real-world images.

    Generalized results:

• Fig. 7 below depicts the mean generation fitness values of all the experiments in a single graph. As can be seen, the overall results and learning abilities increased with each experiment when we used optimized GA configurations.

    Figure 7: Mean generation fitness with all mutation factors

    4.3.1 Model’s Hyperparameter Space Visualization with PCA

Hyperparameter optimization is a very time-consuming process because the overall parameter space to be traversed is so vast that it is impossible to test every possible hyperparameter combination. Assuming that a hypothetical model has x hyperparameters governing its overall performance, the number of possible combinations is the product of the number of discrete values each hyperparameter can take.

    In this study, the possible hyperparameter combinations for the provided parameter ranges are depicted in Tab.1.

    Table 1: Hyperparameter, range, and total discrete values for all possible combinations

    Due to the massive number of possible combinations, it is crucial to employ an appropriate hyperparameter optimization technique that efficiently finds the best possible combination from among all possible combinations in a way that satisfies the problem at hand.
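For the ranges used in the experiments (filter sizes up to 20 and filter counts up to 100 per block, three blocks), counting every integer value gives:

```python
# Size of the hyperparameter space for the GA's selection ranges:
# each of the 3 blocks has a filter size in 1..20 and a filter count in 1..100.
num_blocks = 3
filter_size_options = 20
filter_count_options = 100
combinations = (filter_size_options * filter_count_options) ** num_blocks
print(combinations)  # 8,000,000,000 possible CNN configurations
```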

The grid search and random search methods cover most of the parameter space in a structured way. Still, these approaches sometimes miss the best-performing parameter region due to a lack of information about the model's fitness in different regions. Our GA approach overcomes this problem by traversing the parameter space while considering the model's fitness; in other words, the GA approach focuses on regions with high model fitness (see Fig. 8).

    The first generation of the GA was randomly initialized. As a result, hyperparameter combinations were selected randomly from the overall hyperparameter space, as in the random search approach. The individuals of each subsequent generation were then generated based on the performance of the models of previous generations, which steered the search through the parameter space in a more performance-centric manner.
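The generation-to-generation loop can be sketched as below. This is a minimal illustration, assuming elitism, single-point crossover, and per-position mutation; the paper does not specify these exact operators, and the gene ranges used here are hypothetical:

```python
import random

random.seed(0)

# A gene is six integers: three filter sizes followed by three filter counts
# (flattened from the ((s1, s2, s3), (c1, c2, c3)) form used in the paper).
SIZE_RANGE, COUNT_RANGE = (2, 16), (16, 128)  # illustrative bounds

def random_gene():
    return ([random.randint(*SIZE_RANGE) for _ in range(3)] +
            [random.randint(*COUNT_RANGE) for _ in range(3)])

def crossover(a, b):
    cut = random.randint(1, 5)        # single-point crossover
    return a[:cut] + b[cut:]

def mutate(gene, rate=0.1):
    out = gene[:]
    for i in range(6):
        if random.random() < rate:    # re-sample the mutated position
            lo, hi = SIZE_RANGE if i < 3 else COUNT_RANGE
            out[i] = random.randint(lo, hi)
    return out

def next_generation(population, fitness, elite=2):
    ranked = sorted(population, key=fitness, reverse=True)
    new_pop = ranked[:elite]          # keep the best individuals unchanged
    while len(new_pop) < len(population):
        # Parents are drawn from the fitter half of the ranked population.
        p1, p2 = random.sample(ranked[:len(ranked) // 2], 2)
        new_pop.append(mutate(crossover(p1, p2)))
    return new_pop

# Toy fitness: a stand-in for the validation accuracy of the trained CNN.
pop = [random_gene() for _ in range(8)]
pop = next_generation(pop, fitness=lambda g: sum(g[3:]))
print(len(pop))
```

In the actual method, the fitness call would train and evaluate a CNN built from the gene, which is what makes each generation expensive but performance-aware.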

    The GA’s performance in the hyperparameter search within this parameter space can be better understood if all models are depicted in a single graph. In such a graph, each point represents a model’s unique gene, or set of hyperparameters (similar to the depictions of the grid search and random search methods presented in the figure above). Considering three blocks per model, with one filter size and one filter count per block, a model can be uniquely represented as the combination of the six hyperparameters considered in this research. Because six hyperparameters (or, to put it another way, six-dimensional values) are required to represent a single model, it is impossible to visualize such values on a 2D plane.

    Figure 8: Assuming a two-hyperparameter model, the traditional hyperparameter optimization techniques (i.e., grid search and random search) traverse the hyperparameter space as depicted in the figure. Each point in the figure represents a unique hyperparameter combination formed by the intersection of specific hyperparameter values from the corresponding axes, thus resembling a unique model configuration selected using the respective approach

    Figure 9: The reduction of six-dimensional data to two-dimensional data (named Reduced Hyperparameter 1 and Reduced Hyperparameter 2) with the help of PCA for the sake of visualizing the model’s hyperparameter combinations on a 2D plot

    PCA was performed on these six hyperparameters to transform the six-dimensional data into two-dimensional data. Thus, the same model can be represented using just two values instead of six while retaining most of the important information (Fig. 9).
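The reduction step can be sketched with scikit-learn’s PCA. The gene values below are randomly generated placeholders standing in for the hyperparameter combinations actually produced by the GA:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Each row is one model's six hyperparameters:
# three filter sizes followed by three filter counts (illustrative values).
genes = np.column_stack([
    rng.integers(2, 17, size=(50, 3)),    # filter sizes
    rng.integers(16, 129, size=(50, 3)),  # filter counts
])

# Project the six-dimensional genes onto the first two principal components,
# yielding (Reduced Hyperparameter 1, Reduced Hyperparameter 2) per model.
pca = PCA(n_components=2)
reduced = pca.fit_transform(genes)

print(reduced.shape)
```

The two columns of `reduced` are what get scattered on the 2D plot, with one point per evaluated model.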

    Based on the obtained two-dimensional values, all models obtained by the GA hyperparameter search are plotted in the 2D plot presented in Fig. 10. The figure provides a visualization of how the GA traverses the parameter space.

    Figure 10: A plot of all hyperparameter combinations generated and analyzed by the GA after being reduced into two-dimensional data.Hyperparameter combinations generated in the respective experiments are depicted with corresponding colors

    Final Model Training with Best Model Genes:

    Considering all experiments together, the best-performing model was the one with genes ((3, 4, 16), (33, 98, 101)) in Experiment 5, as it had a fitness value of 0.9054. When the model with these gene values was trained individually for a significant number of epochs, it achieved 96.984% accuracy. The ROC curve, precision-recall curve, and classification report for this best model are as follows:

    Confusion Matrix:

    The accuracy can be calculated using Eq. (7) from the normalized confusion matrix (Fig. 11) as follows:

    Accuracy = 96.984%
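As a sketch of this calculation, accuracy is the fraction of correct predictions on the diagonal of the binary confusion matrix. The counts below are illustrative placeholders, since the actual matrix entries appear only in Fig. 11:

```python
# Binary-classification accuracy from confusion-matrix counts:
# accuracy = (TP + TN) / (TP + TN + FP + FN).
# The counts are hypothetical, not the paper's actual results.
tp, tn, fp, fn = 480, 490, 15, 15

accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"{accuracy:.3%}")
```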

    Precision-Recall Curve and ROC Curve:

    Figure 11: Confusion matrix

    The precision-recall curve corresponding to the result is given in Fig. 12. The ROC-AUC curve reflecting the accuracy is depicted in Fig. 13.
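Both curves can be computed from the classifier’s predicted probabilities with scikit-learn. The labels and scores below are synthetic stand-ins; a real run would use the trained CNN’s outputs on the Happy/Sad test set:

```python
import numpy as np
from sklearn.metrics import auc, precision_recall_curve, roc_curve

rng = np.random.default_rng(0)

# Hypothetical ground-truth labels and predicted probabilities.
y_true = rng.integers(0, 2, size=200)
y_score = np.clip(y_true * 0.7 + rng.normal(0.15, 0.2, size=200), 0, 1)

# Precision-recall pairs across thresholds (Fig. 12)
precision, recall, _ = precision_recall_curve(y_true, y_score)

# False/true positive rates across thresholds, and the area under the
# resulting ROC curve (Fig. 13)
fpr, tpr, _ = roc_curve(y_true, y_score)
print(f"AUC = {auc(fpr, tpr):.3f}")
```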

    Figure 12: Precision-recall curve

    Figure 13: ROC curve with AUC

    Tab.2 summarizes the hyperparameters and experiments.

    Table 2: Hyperparameters and best fitness for each experiment

    4.3.2 Comparison with Previous Works

    A comparison shows where this work stands relative to existing approaches. A result of 96.984% was obtained for the binary classification of Happy and Sad images across a set of experiments in which the GA hyperparameters were varied to find the best CNN architecture. Strictly speaking, experimental results should be compared only when they share the same setup and environment. Nevertheless, comparisons between the current results and those of other works are presented in Tab. 3.

    Table 3: Comparisons with previous works

    5 Conclusion

    This paper developed an algorithm that combines CNNs with a GA and generalizes well across different CNN architectures. The result is a fully autonomous training method that finds the best hyperparameter configurations for the problem at hand. The research objective was successfully achieved by designing a generalized GA-based CNN hyperparameter search strategy.

    The proposed approach was examined using a Kaggle face sentiment dataset [18]. A comparison with the top-performing approaches in the field revealed that this approach achieves up to 96.984% accuracy. Moreover, the proposed approach is automatic, making it easy to use, even for users without comprehensive knowledge of CNNs or GAs.

    The scope of advancement of this work is described as follows:

    • In this study, the GA was applied only to filter sizes and filter counts for three blocks. The experimental protocol can be extended to more hyperparameters in the future.

    • Better algorithms can be employed for the crossover and mutation of new individuals.

    • A better initialization technique for first-generation individuals could reduce overall training time.

    • A comprehensive discussion could be held regarding variations among crossover methods in the GA so that sentiments can be analyzed more accurately.

    • Sentiment analyses using other nature-inspired optimization algorithms (e.g., particle swarm optimization, honey bee optimization, ant colony optimization) can be evaluated.

    • Other optimization techniques, such as hill climbing, simulated annealing, and Gaussian adaptation, could perhaps achieve better results.

    The most notable limitations of this work are as follows:

    • The GA-based hyperparameter search is quite a slow process when large parameter spaces are involved.

    • The improper initialization of first-generation individuals could lead to inefficient individuals.

    • The minimum and maximum limits of the parameter space play a vital role in convergence speed.

    Funding Statement:The authors received no specific funding for this study.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
