
    Hyper-Parameter Optimization of Semi-Supervised GANs Based-Sine Cosine Algorithm for Multimedia Datasets

Computers, Materials & Continua, 2022, Issue 10

Anas Al-Ragehi1, Said Jadid Abdulkadir1,2,*, Amgad Muneer1,2, Safwan Sadeq3 and Qasem Al-Tashi4,5

1 Computer and Information Sciences Department, Universiti Teknologi PETRONAS, Seri Iskandar, 32610, Perak, Malaysia

2 Centre for Research in Data Science (CERDAS), Universiti Teknologi PETRONAS, Seri Iskandar, 32610, Malaysia

3 Mechanical Engineering Department, Universiti Teknologi PETRONAS, Seri Iskandar, 32610, Perak, Malaysia

4 Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA

5 University of Albaydha, Albaydha, Yemen

Abstract: Generative Adversarial Networks (GANs) are neural networks that allow models to learn deep representations without requiring a large amount of training data. Semi-supervised GAN classifiers are a recent innovation in GANs, in which a GAN is used to classify generated images both as real or fake and into multiple classes, like a general multi-class classifier. However, GANs have a sophisticated design that can be challenging to train, because obtaining the proper set of parameters for all of the models (generator, discriminator, and classifier) is complex. As a result, training a single GAN model on different datasets may not produce satisfactory results. Therefore, this study proposes an SGAN (Semi-Supervised GAN Classifier) model. First, a baseline model was constructed. The model was then enhanced by leveraging the Sine Cosine Algorithm (SCA) and the Synthetic Minority Oversampling Technique (SMOTE). SMOTE was used to address class imbalances in the datasets, while SCA was used to optimize the weights of the classifier models. The optimal set of hyperparameters (learning rate and batch size) was obtained using grid search and manual search. Four well-known benchmark datasets and a set of evaluation measures were used to validate the proposed model. The proposed method was then compared against existing models, and the results on each dataset were recorded, demonstrating the effectiveness of the proposed model. The proposed model achieved improved test accuracy scores of 1%, 2%, 15%, and 5% on four benchmark multimedia datasets: Modified National Institute of Standards and Technology (MNIST) digits, Fashion MNIST, Pneumonia Chest X-ray, and the Facial Emotion Detection dataset, respectively.

Keywords: Generative adversarial networks; semi-supervised generative adversarial network; sine-cosine algorithm; SMOTE; principal component analysis; grid search

    1 Introduction

Since the invention of Generative Adversarial Networks (GANs), they have been extensively, and almost exclusively, applied to image generation. GANs are trained using two adversarial networks, a Discriminator and a Generator, working against each other under a min-max objective. Since 2014, many variants of the original GAN [1] have been developed to improve the image generation task, e.g., the Style Generative Adversarial Network (StyleGAN) [2]. StyleGAN is the architecture behind the famous website [3] that can generate realistic human faces that do not exist. Another famous variant is the Wasserstein GAN (WGAN) [4]. In the original GANs and in Deep Convolutional Generative Adversarial Networks (DCGANs), the Jensen-Shannon divergence is minimized to perplex the discriminator so that it cannot distinguish between real and fake images. In WGANs, the Earth-Mover Distance (EMD) is minimized instead; this small change results in much better and more stable image generation. Another notable improvement is the Big Generative Adversarial Network (BigGAN) [5], a recent state-of-the-art model trained on ImageNet [6]. Another is the Progressive GAN, in which the authors progressively add new blocks of convolutional layers to both the generator and the discriminator models, which learn from real image samples to generate high-resolution images. Pix2Pix GAN [7] also has several exciting applications, such as translating edge-maps into photo-realistic images [8]. Moreover, advanced technological tools such as Web 3.0 manipulate images to increase their quality or extract useful information from them; however, to optimize efficiency and avoid wasted time, images must be processed after capture in a post-processing step [9]. The authors in [10,11] indicated that “Due to the small size and dense distribution of objects in UAV vision, most of the existing algorithms are difficult to detect effectively, while GAN can be used to generate more synthetic samples, including samples from different perspectives and those from the identical perspective yet having subtle distinctions, thus enabling deep neural network training.”
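For reference, the min-max objective of the original GAN [1], in which the discriminator D and the generator G play a two-player game over real data x and noise z, can be written as:

$$\min_G \max_D V(D,G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z))\right)\right]$$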

Semi-supervised learning is a machine learning technique in which only a small portion of the data fed to the model during training is labeled, while the rest is unlabeled. This distinguishes it from supervised learning: in semi-supervised learning, a small amount of labeled data is combined with a large amount of unlabeled data during training, whereas supervised learning deals with labeled training data only. Semi-supervised learning is a special case of weak supervision. It is considered the way to go for many recent problems, as it curbs many of the overfitting problems that arise when training on huge amounts of unlabeled, noisy data. Another reason semi-supervised learning is gaining popularity is that labeled data is often scarce, since labeling requires human annotators, specialized equipment, and time-consuming tests [12]. All generative models are applications of semi-supervised learning: the labeled samples are the actual samples, while the generated samples are unsupervised. There are two main avenues for improvement in a GAN-based classifier: improve the GAN as a whole, and/or target specific class labels and generate samples accordingly. Conditional GANs (cGANs) [13] have been proposed for the latter. Additional information, such as class labels connected with the input images, can improve the GAN. This enhancement could take the form of more stable training, faster training, or higher-quality generated images.

From this concept of cGANs, researchers have developed methods for semi-supervised classification of generated images: SGAN [14], the Auxiliary Classifier GAN (AC-GAN) [15], and the External Classifier GAN (EC-GAN) [16]. The discriminator of a typical GAN discriminates between real and fake samples. Via transfer learning, the same architecture can be used to build a classifier that differs only in the output layer, with which the generated images can be classified as required during the semi-supervised training of the SGAN. In the Semi-Supervised GAN, the discriminator model is updated to predict the labels of the required classes alongside the Real/Fake prediction. Hence, the same discriminator and classifier models can be trained to predict the classes; that is how semi-supervised GAN classifiers work. Similarly, the constant push for improvement in image classification has led to two improved types of GAN classifiers: the Auxiliary Classifier GAN, which has two separate dense output layers, one for identifying fake/real samples and the other for the multi-class classification task [16], and the External Classifier GAN, developed in July 2021 by high-school student Ayaan Haque, which contains three models: a generator, a discriminator, and an external multi-class classifier [17].

Hyperparameters lie at the heart of any machine learning or deep learning architecture. Without tuned hyperparameters, it is impossible to obtain the best results for a task using a particular model, and searching for the correct set of hyperparameters manually is a very tedious task [18,19]. There are various computational methods and algorithms for performing hyperparameter tuning. Some computational methods include Random Search [20-22] and Grid Search [23,24]. There are also algorithms such as Population-Based Training (PBT) [25], the Grey Wolf Optimizer (GWO) [26,27], Bayesian Optimization [28], and the Bayesian Optimization Hyperband method (BOHB) [29] that can be used for optimization. In this study, the proposed SGAN model was trained on four benchmark datasets: (i) the MNIST Digits dataset, (ii) the MNIST Fashion dataset, (iii) the Pneumonia Detection Chest X-Ray dataset, and (iv) the Facial Emotion Recognition dataset.

Once a working baseline was set up, the authors progressed towards finding an optimal set of hyperparameters, such as the learning rates of the Generator and Discriminator models. The authors then applied a hybrid metaheuristic optimization algorithm, the Sine Cosine Algorithm (SCA), for hyperparameter tuning, to obtain better results on the same datasets more efficiently than Grid Search and manual optimization. Finally, the results of each method were compared. Section 2, which follows this section, highlights the background and related work of the study. Section 3 discusses the proposed research methodology, while Section 4 describes the experimental results of the proposed model. Lastly, Section 5 concludes the research and outlines future work.

    2 Related Work

In generative adversarial networks, there are two models, the Generator (G) and the Discriminator (D), that are set against one another in an adversarial fashion. The G model takes random noise as input and outputs a fake image that is then fed to the D model. For its part, the D model tries to classify whether the image it received from the generator is real or fake. The task of the G model is to fool the D model and generate realistic fake images after epochs of training and backpropagation. The GAN architecture and the entire classification process are shown in Fig. 1.

GANs were originally used for image synthesis from random noise. Hence, they were applied to fields such as fake image synthesis, data augmentation, and conditional image synthesis. However, recent advancements have resulted in more diversified applications of GANs. One such application is the classification of fake images generated by the Generator model. This field exploded with the advent of Conditional GANs. Conditional GANs are an extension of the min-max generative modelling framework (GANs) whereby the required class is passed as part of the input to the Generator and the Discriminator, along with the random noise and the generated fake sample, respectively. This results in the generation of images belonging to a particular desired class. The cGAN paper, though preliminary, opened up a whole new domain of interesting GAN applications. Fig. 2 presents the conditional-GAN architecture.
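As a minimal sketch of this conditioning mechanism (not the exact architecture from [13]; layer sizes and the embedding width are illustrative assumptions), a cGAN generator in Keras can embed the class label and concatenate it with the noise vector:

import numpy as np
from tensorflow.keras import layers, Model

def build_conditional_generator(latent_dim=100, n_classes=10):
    # Noise vector and integer class label as separate inputs
    noise = layers.Input(shape=(latent_dim,))
    label = layers.Input(shape=(1,), dtype="int32")
    # Embed the label and flatten it into a dense vector
    label_vec = layers.Flatten()(layers.Embedding(n_classes, 50)(label))
    # Merge the label information with the noise
    merged = layers.Concatenate()([noise, label_vec])
    x = layers.Dense(7 * 7 * 128, activation="relu")(merged)
    x = layers.Reshape((7, 7, 128))(x)
    # Upsample 7x7 -> 14x14 -> 28x28
    x = layers.Conv2DTranspose(128, 4, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(128, 4, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2D(1, 7, padding="same", activation="tanh")(x)
    return Model([noise, label], out)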

After this, many new GAN architectures were designed to solve the problem of image classification (both supervised and semi-supervised). Some of them are discussed below:

2.1 Semi-Supervised Classification with GANs (SGAN)

In this method, the authors classify a data point x into K classes using a standard classifier built as an extension of the discriminator model. The discriminator model is thus designed with multiple output channels: (1) a binary classifier for identifying fake vs. real samples, and (2) K channels consisting of probabilities derived by applying the softmax activation function to the elements of a K-dimensional output vector. The K output logits are passed through the softmax activation layer to predict the probability of the generated sample belonging to each of the K classes in the output column. This technique is widely known as SGAN [30,31].
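A minimal Keras sketch of this design, under assumed layer sizes (the paper does not list its exact architecture): the discriminator and classifier share one convolutional body and differ only in their output heads.

from tensorflow.keras import layers, Model

def build_sgan_models(img_shape=(28, 28, 1), n_classes=10):
    # Shared convolutional body used by both output heads
    img = layers.Input(shape=img_shape)
    x = layers.Conv2D(64, 3, strides=2, padding="same")(img)
    x = layers.LeakyReLU(0.2)(x)
    x = layers.Conv2D(128, 3, strides=2, padding="same")(x)
    x = layers.LeakyReLU(0.2)(x)
    features = layers.Flatten()(x)
    # Discriminator head: sigmoid output for real vs. fake
    d_out = layers.Dense(1, activation="sigmoid")(features)
    # Classifier head: softmax over the K classes
    c_out = layers.Dense(n_classes, activation="softmax")(features)
    return Model(img, d_out), Model(img, c_out)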

2.2 Auxiliary Classifier GANs (AC-GAN)

The AC-GAN is another type of GAN classifier. It changes the discriminator so that, instead of taking the class labels as input (as in Conditional GANs), it predicts the class label of the generated image passed into the discriminator via an auxiliary classifier. This makes the training process more stable, so the generator can produce high-quality images once its weights and biases are trained through forward and backward propagation. In this method, the authors pass the class labels along with the random noise at the Generator end; however, unlike in cGANs, they do not pass the labels at the Discriminator end. As before, the discriminator model must estimate both whether the input image is real or fake and the image's class label. There are two dense output layers: one for classifying the sample as fake or real, and the other for categorical multi-class classification into the K classes to which the image may belong.

2.3 External Classifier GANs (EC-GAN)

This is a semi-supervised GAN architecture in which the fake images produced by the generator are used to improve image classification. Generally, the existing models for classification with GANs (AC-GAN, SGAN) share the same discriminator and classifier models, with the only difference being the output layer. EC-GAN instead attaches a GAN's generator to a classifier, hence the name, as opposed to sharing a single architecture for discrimination and classification. The promising results of the algorithm could prompt new research on how to use artificial data for many different machine learning tasks and applications. Thus, there are three separate models: a generator, a discriminator, and a multi-class classifier. The discriminator is trained in the classical fashion for GANs, and the same goes for the classifier. In EC-GANs, all the real samples must have labels assigned to them. The generated images are used as additional inputs to supplement the classifier during training. The architecture follows a semi-supervised approach because the generated images do not have any labels: a generated image and its label are only kept if the model predicts the sample's class with a high probability (the labeling is done through a process of pseudo-labeling) [32].
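A sketch of the pseudo-labeling step described above; the confidence threshold and the generator/classifier objects are illustrative assumptions rather than the exact EC-GAN settings from [32]:

import numpy as np

def pseudo_label_batch(generator, classifier, latent_dim=100, n=64, threshold=0.9):
    # Generate a batch of fake images from random noise
    noise = np.random.normal(size=(n, latent_dim))
    fake_images = generator.predict(noise, verbose=0)
    # Softmax class probabilities from the external classifier
    probs = classifier.predict(fake_images, verbose=0)
    # Keep only the samples the classifier labels with high confidence
    confident = probs.max(axis=1) >= threshold
    pseudo_labels = probs.argmax(axis=1)[confident]
    return fake_images[confident], pseudo_labels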

The Google Brain paper notes that GANs are sensitive to hyperparameter optimization, as the benefit of a well-tuned configuration can outweigh the choice of cost function itself. The performance of the cost functions can therefore fluctuate across different hyperparameter settings, and when this happens developers cannot tell whether the GAN model is fundamentally not working or simply needs a lengthy tuning process. Hyperparameter tuning during GAN training requires patience: cost functions cannot work well without time spent on tuning, since new cost functions may introduce hyperparameters with sensitive performance. Vanishing gradients also arise during GAN training, because gradients are multiplied by small values through the backpropagation process; this causes the neural network to stop learning and distorts the accuracy. Therefore, enhancing the architectures of GAN networks leads to better performance in terms of prediction accuracy. Many factors affecting GAN performance can be explored for a particular task to improve prediction accuracy. For instance, generating the GAN's weights using meta-heuristic algorithms (such as the Sine Cosine Algorithm used in this study) rather than in the traditional manner can lead to better performance and overcome the limitations of existing work. This is what the proposed research aims to achieve through the weight initialization of the discriminator and classifier networks of the GAN.

    3 Proposed Methodology

This study aims to build a single fine-tuned Generative Adversarial Network classifier architecture that can efficiently perform binary and multi-class classification tasks, and to apply the population-based meta-heuristic Sine Cosine Algorithm [22,33,34] to initialize the parameters of the first layer, namely its weights and biases. The authors then compared the results of the SCA-based model and the baseline model, looking for improvements in train and test accuracies. Fig. 3 shows the proposed research methodology of this study.

    3.1 Benchmark Datasets Description

The authors benchmarked the proposed model on four datasets, all of which are image-based classification datasets:

• MNIST Digits Dataset [35] - Contains handwritten examples of the digits 0-9, with labels indicating which digit each drawing represents.

• Fashion Dataset [36] - Contains examples of 10 types of garments, such as shirts, pants, boots, and headwear, with their corresponding labels.

• Pneumonia Detection from Chest X-Rays [37] - Contains two classes: chest X-ray images of healthy vs. pneumonic lungs.

• Facial Emotion Detection Dataset [38] - A tricky dataset with large images covering seven different classes of facial emotions.

After that, the authors developed a Semi-Supervised GAN classifier; this is the proposed baseline model. The authors then fine-tuned hyperparameters such as the learning rates and batch sizes of the SGAN model by manual searching. First, the authors preprocessed the data into a pixel-value table, with the scaled pixel values as features and the image labels as targets. Next, the authors combined the generator, discriminator, and classifier models. In a broad sense, the classifier model is similar to the discriminator model; the only difference is the output layer: the discriminator has a sigmoid layer for classifying real/fake images, while the classifier has a softmax layer for multi-class classification. The authors trained the proposed baseline model for classification on all the benchmark datasets. Datasets 3 and 4 had class imbalances, leading to poorer results than expected. Hence, the authors tried two techniques to deal with the problem: (i) the Synthetic Minority Oversampling Technique (SMOTE) [39] and (ii) Principal Component Analysis (PCA) [40,41]. Furthermore, the authors built a script applying the Sine Cosine Algorithm to initialize the weights and biases of the discriminator and classifier models. The same could not be done for the generator model because its first layer is a dense/fully connected layer; as a result, there were too many parameters to train them all practically. After building all the combinations of models, the authors trained them on all four benchmark datasets, expecting improvements from using the Sine Cosine Algorithm to initialize the parameters instead of doing so randomly.
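A sketch of how a flat SCA solution vector can be mapped onto the first layer of the discriminator or classifier, as described above. The fitness definition here is an assumption: it treats validation error as the quantity SCA minimizes and presumes the Keras model is compiled with an accuracy metric.

import numpy as np

def set_first_layer(model, solution):
    # Find the first layer that actually has trainable parameters
    first = next(layer for layer in model.layers if layer.get_weights())
    w, b = first.get_weights()
    # Split the flat solution vector into weight and bias blocks
    first.set_weights([solution[:w.size].reshape(w.shape),
                       solution[w.size:w.size + b.size].reshape(b.shape)])

def fitness(solution, model, x_val, y_val):
    # Inject the candidate parameters, then score the model; SCA minimizes this
    set_first_layer(model, solution)
    _, acc = model.evaluate(x_val, y_val, verbose=0)
    return 1.0 - acc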

    3.2 Synthetic Minority Oversampling Technique

SMOTE is applied to tabular datasets with a high degree of imbalance between the output classes, i.e., very few examples for some classes and many examples for others. SMOTE is an oversampling technique that oversamples examples from the minority classes so that the number of examples in each minority class becomes equal to that of the majority classes [42]; it can therefore be considered a data augmentation technique. SMOTE works by selecting examples that are close together in feature space, drawing a line between them, and creating a new instance at a point along that line [42]. More specifically, a random example from the minority class is chosen first. Then, k of its nearest neighbors are found (typically, k = 5). A neighbor is chosen at random, and a synthetic example is created in feature space at a randomly chosen point between the two examples. The class neighborhoods are determined with the k-nearest neighbors (KNN) method, where k is generally taken to be 5; new samples are then generated on an imaginary line between two samples of the minority class. In this way, as many new samples as required can be generated; thus, SMOTE generates new samples only for the minority classes. However, some researchers suggest combining random undersampling of the majority classes with oversampling of the minority classes to obtain the best results. From the above discussion, one drawback of SMOTE is that it does not consider the majority classes at all. As a result, when there is no clear distinction between the samples of the majority and minority classes, SMOTE often fails to generate well-formed augmented samples.
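A minimal example of this oversampling step on a flattened pixel table, using the imbalanced-learn library with k = 5 neighbors as noted above (the toy data here is random and purely illustrative):

import numpy as np
from collections import Counter
from imblearn.over_sampling import SMOTE

# Toy imbalanced "pixel table": 900 samples of class 0, 100 of class 1
X = np.random.rand(1000, 784)
y = np.array([0] * 900 + [1] * 100)

smote = SMOTE(k_neighbors=5, random_state=42)
X_res, y_res = smote.fit_resample(X, y)
print(Counter(y_res))  # both classes now have 900 samples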

    3.3 Principal Component Analysis

PCA was also applied to the datasets for dimensionality reduction. Nonetheless, the idea did not lead to any improvement in the accuracy scores; tuning the number of retained dimensions did not significantly change the scores either, so the authors discarded the idea. PCA was implemented with the “auto” SVD solver. A default policy based on X.shape and n_components selects the solver: if the input data is larger than 500 × 500 and the number of components to extract is less than 80% of the smallest dimension of the data, the more efficient “randomized” method is enabled; otherwise, the exact full SVD is computed and optionally truncated afterwards.
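The solver policy described above matches scikit-learn's PCA; a brief illustration on toy data:

import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(2000, 784)  # e.g., flattened 28x28 images
# With svd_solver="auto", the "randomized" method is used here because the
# data exceeds 500x500 and 100 < 80% of min(X.shape); otherwise full SVD runs.
pca = PCA(n_components=100, svd_solver="auto")
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)  # (2000, 100)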

    3.4 Sine-Cosine Algorithm

The Sine Cosine Algorithm is a recent advancement in metaheuristic population-based optimization algorithms [43-48], proposed by Seyedali Mirjalili in 2016. As is common for algorithms of this family, the optimization process consists of moving the individuals of the population, which represent candidate solutions to the problem, within the search space. For this purpose, SCA uses the trigonometric sine and cosine functions. At each step, it updates the solutions according to the following equations:

$$X_i^{t+1} = X_i^t + r_1 \times \sin(r_2) \times \left|r_3 P_i^t - X_i^t\right|, \quad r_4 < 0.5 \quad (1)$$

$$X_i^{t+1} = X_i^t + r_1 \times \cos(r_2) \times \left|r_3 P_i^t - X_i^t\right|, \quad r_4 \ge 0.5 \quad (2)$$

where $X_i^t$ denotes the position of the current agent in the $i$th dimension at iteration $t$, and $P_i^t$ is the position of the best solution so far in the $i$th dimension. The random parameters are $r_1$, $r_2$, $r_3$, and $r_4$: $r_1$ dictates the region of the next position (between the current solution and the destination, or outside it), $r_2$ defines how far the movement towards or away from the destination should be, $r_3$ assigns a random weight to the destination, and $r_4$ switches between the sine and cosine components. To balance exploration and exploitation, $r_1$ is decreased linearly:

$$r_1 = a - t\,\frac{a}{T} \quad (3)$$

where $T$ indicates the maximum number of iterations, $t$ is the current iteration, and $a$ is a constant.

Algorithm 1: SCA
Input:
- Set the lower and upper bounds of the solutions X
- Set the population size
- Initialize the agents of the search space randomly
- Specify the maximum number of iterations T
Output: the best solution found (X*)
1: while t ≤ T do
2:     Evaluate every candidate solution
3:     Update the best solution found so far (X*)
4:     Update r1, r2, r3, and r4
5:     Update the agents' positions in the search space using Eqs. (1) and (2)
6: end while
7: return X*
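A compact NumPy sketch of Algorithm 1 and Eqs. (1)-(3); the sphere function used at the end is only a placeholder fitness:

import numpy as np

def sca(fitness, dim, lb, ub, n_agents=30, max_iter=200, a=2.0):
    # Random initial population inside the box bounds
    X = np.random.uniform(lb, ub, size=(n_agents, dim))
    best_x, best_f = None, np.inf
    for t in range(max_iter):
        # Evaluate candidates and track the best solution (Algorithm 1, steps 2-3)
        for i in range(n_agents):
            f = fitness(X[i])
            if f < best_f:
                best_f, best_x = f, X[i].copy()
        r1 = a - t * (a / max_iter)  # Eq. (3): decreases linearly from a to 0
        for i in range(n_agents):
            r2 = 2 * np.pi * np.random.rand(dim)
            r3 = 2 * np.random.rand(dim)
            r4 = np.random.rand(dim)
            step = np.abs(r3 * best_x - X[i])
            # Eqs. (1)-(2): sine move if r4 < 0.5, cosine move otherwise
            X[i] = np.where(r4 < 0.5,
                            X[i] + r1 * np.sin(r2) * step,
                            X[i] + r1 * np.cos(r2) * step)
            X[i] = np.clip(X[i], lb, ub)  # keep agents inside the bounds
    return best_x, best_f

best, val = sca(lambda x: np.sum(x ** 2), dim=5, lb=-10, ub=10)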

SCA is currently regarded as one of the fastest population-based methods for finding near-optimal solutions. It is considered a meta-heuristic algorithm because it is a generalized algorithm that can be applied to a variety of problems; it is not problem-dependent.

    4 Experimental Results

This section describes and discusses the findings of the proposed GAN model in this study. The first subsection focuses on the evaluation metrics used to assess the performance of the proposed model on the four different datasets.

    4.1 Evaluation Metrics

In this paper, four standard metrics are used to thoroughly evaluate the effectiveness and predictive performance of the proposed models: Accuracy, Precision, Recall, and F1-score. All four are defined in terms of the entries of the confusion matrix, described next.

As the name suggests, the confusion matrix is output as a matrix and describes the complete performance of the model. Fig. 4 shows an example confusion matrix of SGAN performance on the Pneumonia Chest X-ray dataset.

    There are four essential terms:

• True Positives: cases in which the proposed model predicted YES (1) and the actual output was also YES (1) (i.e., 379 in the figure above).

• True Negatives: cases in which the proposed model predicted NO (0) and the actual output was NO (0) (i.e., 169 in the figure above).

• False Positives: cases in which the proposed model predicted YES (1) but the actual output was NO (0) (i.e., 221 in the figure above).

• False Negatives: cases in which the proposed model predicted NO (0) but the actual output was YES (1) (i.e., 11 in the figure above).
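Using the counts from Fig. 4, the four metrics follow directly from their standard definitions; a short worked illustration:

# Counts taken from the confusion matrix in Fig. 4
TP, TN, FP, FN = 379, 169, 221, 11

accuracy  = (TP + TN) / (TP + TN + FP + FN)
precision = TP / (TP + FP)
recall    = TP / (TP + FN)
f1        = 2 * precision * recall / (precision + recall)
print(f"acc={accuracy:.3f} prec={precision:.3f} rec={recall:.3f} f1={f1:.3f}")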

    4.2 Optimal Hyperparameters

The hyperparameters for the SGAN implementation on the various datasets included the training batch size and the learning rates of the discriminator/classifier and of the GAN as a whole. The hyperparameters were selected using Grid Search and Manual Search over a discrete search space chosen after careful consideration by the authors. The information regarding the various hyperparameters is given in Tabs. 1 and 2.
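A sketch of the selection loop; the candidate values and the train_and_evaluate_sgan routine are hypothetical stand-ins, since the actual grid is the one given in Tab. 1:

import itertools

learning_rates = [1e-4, 2e-4, 5e-4]  # illustrative candidates only
batch_sizes = [32, 64, 128]

best_cfg, best_acc = None, 0.0
for lr, bs in itertools.product(learning_rates, batch_sizes):
    # Hypothetical helper: trains the SGAN and returns test accuracy
    acc = train_and_evaluate_sgan(lr=lr, batch_size=bs)
    if acc > best_acc:
        best_acc, best_cfg = acc, (lr, bs)
print("best:", best_cfg, best_acc)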

Table 1: Grid search space hyperparameters

Table 2: Optimal hyperparameters

    4.3 Results of the Proposed Model

The Sine Cosine Algorithm was implemented on the data oversampled using SMOTE; the authors call this the SMOTE+SCA result. The authors expected better results than from all the previously tested models, because the dataset imbalance had been fixed and the parameters were initialized using the SCA. For the Fashion MNIST dataset, the authors saw only around a 1% improvement for both the train and the test data. For the Pneumonia Detection dataset, they obtained an overwhelming increase of around 21% in training accuracy and around a 15% increase in test accuracy over the baseline results. The most satisfying results were obtained for the Facial Emotion Detection dataset: the authors obtained over 30% accuracy for both train and test data for the first time across all the combinations of methods tried. Tab. 3 shows the results of combining the proposed GAN model with the SMOTE and SCA methods.

Table 3: SMOTE+SCA results

    4.4 Baseline and Balanced Dataset Results

The baseline results consist of the classification results on the four datasets, where the model is a simple Semi-Supervised GAN classifier trained and tested directly on the pixel-value tables of the images. The results show that benchmark datasets like the MNIST Digits dataset perform very well under baseline conditions, largely because handwritten digits are easy to classify for a classifier as complex and fine-tuned as the SGAN. However, on a difficult dataset like the facial emotion detection dataset, where the model must predict a person's facial emotion from an image of their face, the same complex SGAN classifier becomes counterproductive and yields inaccurate results in most cases. Evidently, GANs are difficult to train, and it can be difficult to achieve good results. Tab. 4 shows the baseline model results.

Table 4: Baseline results

Next, the authors applied SMOTE, a minority oversampling technique. Of the four datasets, MNIST digits and MNIST fashion are balanced, whereas the pneumonia detection chest X-ray and facial emotion detection datasets have class imbalances. The authors expected results similar to the baseline for the balanced datasets and significant improvements for the imbalanced ones. However, the results for both the MNIST digits and MNIST fashion datasets were poorer than the baseline results, which was somewhat expected since these datasets are balanced. For the pneumonia chest X-ray dataset, which was heavily imbalanced, the authors obtained very significant improvements of almost 20% on the train data and around 7% on the test data. For the facial emotion detection dataset, which was also imbalanced, the model did not show much improvement, which could be due to the complexity and size of the examples in the dataset. Another reason SGAN could not perform better on the facial emotion dataset is that the GAN architecture might not have converged to a minimum-error state, even after exhaustive grid-search hyperparameter tuning. Tab. 5 shows the proposed model results after applying the SMOTE technique.

Table 5: SMOTE results

All the results are summarized in Fig. 5. The proposed SCA method showed outstanding improvement in performance over the SGAN baseline. The MNIST Digits and Fashion datasets show small or negative changes on applying SMOTE, due to their already balanced nature. Furthermore, when SCA was used for weight initialization, there was a considerable improvement over the SGAN and SGAN-SMOTE baselines; similar observations were made on the Pneumonia Chest X-ray and Facial Emotion Detection datasets. However, on the Fashion-MNIST dataset, the performance of the SGAN baseline model decreased slightly after SMOTE was applied. This is likely because the dataset was already balanced, so SMOTE could not change the label distribution.

    4.5 Comparison with Literature

This section compares the proposed GAN baseline model with state-of-the-art models. Different deep learning methods have been used to perform classification in the literature, and some of them serve as comparison benchmarks against our best-performing model. On the MNIST dataset, our proposed model yields a test accuracy of 95.28%. The ProjectionNet paper, by comparison, reports a similar performance of 95%. More recent developments, such as a perceptron with a tensor-train layer, reach 98.2%, which is considerably higher than our baseline model's performance. Tab. 6 shows the comparison of the proposed method with related literature contributions.

On the Fashion MNIST dataset, our proposed model yields a test accuracy of 94.86%. Similar neural-network work such as CapsNet and the Graph Convolutional Network achieves accuracy scores of 77% and 46%, respectively. On Fashion MNIST, our proposed model surpasses machine learning methods such as Naive Bayes, Decision Trees, and Bayesian Forest by a considerable margin, as can be seen in Tab. 6. Furthermore, on the Pneumonia Chest X-ray detection dataset, our proposed model's performance is slightly lower than that of pre-trained models such as DenseNet121. This might be because those architectures are pre-trained on large-scale datasets such as ImageNet, which gives them a massive advantage over non-pre-trained models.

Table 6: Comparison of the proposed method with related literature contributions


    5 Conclusion

In this study, the authors proposed an SGAN classifier model that performs binary and multi-class classification accurately with the help of the population-based meta-heuristic Sine Cosine Algorithm (SCA). While previous works mostly focused on a single dataset, we implemented the proposed architecture on four different benchmark datasets to show its efficiency. The datasets used were the MNIST digits dataset, the MNIST Fashion dataset, the Pneumonia Detection from Chest X-Rays dataset, and the Facial Emotion Detection dataset. Class imbalance was addressed using the SMOTE method, which led to a substantial improvement in the model's performance on the imbalanced datasets, whereas results remained the same on datasets with balanced class distributions. PCA, however, did not yield better results. One reason PCA might have failed is that the differentiating characteristics of the classes are not reflected in the variance of the variables, since PCA does not consider class information when computing the principal components. Furthermore, applying the Sine Cosine Algorithm (SCA) to initialize the weights and biases of the discriminator led to a substantial improvement in the model's accuracy scores. Lastly, this work could be expanded by implementing the SCA algorithm to initialize the weights and biases of the Generator efficiently, without making it too computationally expensive. It would also be worthwhile to experiment with more benchmark datasets, such as CIFAR-10 and CIFAR-100. To further improve the results, Auto-Encoders could be used, where the sheer number of parameters would benefit from effective initialization through SCA.

Author Contributions: Conceptualization, A.A.; methodology, A.A., S.J.A. and A.M.; software, A.A.; validation, A.A. and A.M.; formal analysis, A.A., S.J.A. and A.M.; data curation, A.A. and A.M.; writing—original draft preparation, A.A., A.M. and S.S.; writing—review and editing, S.J.A. and Q.A.; visualization, A.A., A.M. and S.S.; supervision, S.J.A.; project administration, S.J.A.; funding acquisition, S.J.A. All authors have read and agreed to the published version of the manuscript.

Funding Statement: This research was supported by Universiti Teknologi PETRONAS, under the Yayasan Universiti Teknologi PETRONAS (YUTP) Fundamental Research Grant Scheme (YUTPFRG/015LC0-308).

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
