
Electroencephalography (EEG) Based Neonatal Sleep Staging and Detection Using Various Classification Algorithms

2023-12-15 03:57:06 Hafza Ayesha Siddiqa, Muhammad Irfan, Saadullah Farooq Abbasi and Wei Chen
Computers, Materials & Continua, 2023, Issue 11

    Hafza Ayesha Siddiqa,Muhammad Irfan,Saadullah Farooq Abbasi and Wei Chen

1 Center for Intelligent Medical Electronics, Department of Electronic Engineering, Fudan University, Shanghai, 200433, China

2 Department of Biomedical Engineering, Riphah International University, Islamabad, 45320, Pakistan

ABSTRACT Automatic sleep staging of neonates is essential for monitoring brain development and the maturity of the nervous system. EEG-based neonatal sleep staging provides valuable information about an infant's growth and health, but is challenging due to the unique characteristics of neonatal EEG and the lack of standardized protocols. This study aims to develop and compare 18 machine learning models using an Automated Machine Learning (autoML) technique for accurate and reliable multi-channel EEG-based neonatal sleep-wake classification. The study investigates the feasibility of autoML without extensive manual feature selection or hyperparameter tuning. The data were obtained from neonates at a post-menstrual age of 37 ± 05 weeks. 3525 30-s EEG segments from 19 infants were used to train and test the proposed models. Twelve time- and frequency-domain features were extracted from each channel, and each model receives the common features of nine channels as an input vector of size 108. Each model's performance was evaluated using a variety of metrics. The maximum mean accuracy of 84.78% and kappa of 69.63% were obtained by the autoML-based Random Forest estimator, the highest accuracy reported to date for EEG-based neonatal sleep-wake classification. For the autoML-based Adaboost Random Forest model, accuracy and kappa were 84.59% and 69.24%, respectively. The high performance achieved by the proposed autoML-based approach can facilitate early identification and treatment of sleep-related issues in neonates.

KEYWORDS AutoML; Random Forest; adaboost; EEG; neonates; PSG; hyperparameter tuning; sleep-wake classification

    1 Introduction

Sleep is a physiological process found in nearly all animal species. Approximately one third of a person's life is dedicated to sleep [1]. A healthy life requires sleep, as sleep deprivation leads to severe medical complications, such as cognitive impairment and even death. Sleep is a naturally recurring state of the brain and body. It is characterized by altered consciousness, comparatively inhibited sensory activity, reduced muscle activity with inhibition of almost all voluntary muscles, and reduced interaction with the surroundings. Neonates spend most of their time asleep.

Why is sleep important to a baby's development? Eliot et al. in 1999 proposed that in one second a baby makes up to 1.8 million new neuronal connections in the brain, and that what a baby feels, sees, hears, and smells determines which of these connections will remain [2]. Thiedke et al. in 2001 reported that infants spend most of their time asleep and that a notable proportion of that sleep is in the Rapid Eye Movement (REM) stage [3]. The authors in [4,5] suggested that sleep is necessary during the early development of a baby's brain and body. Clinically, the main indicator of brain maturation in infants is the Sleep-Wake Cycle (SWC) [6,7]. The SWC describes the 24-h daily sleep-wake pattern, normally consisting of 16 h of wakefulness in the daytime and the remaining 8 h of sleep at night [6,7]. In particular, neonatal sleep should be preserved and encouraged in the neonatal intensive care unit (NICU). Infants can suffer from many serious sleep-related problems, including sleep apnea, infantile spasms, blindness, irregular sleep-wake cycle, non-24-h sleep-wake cycle, Down syndrome, and nighttime sleep disturbances [8]. Neonatal sleep staging is also important to reduce the risk of sleep-related infant deaths, which are tragic and devastating outcomes during sleep [9]. These deaths fall into two main categories. Sudden Infant Death Syndrome (SIDS) is the sudden, unpredictable death of an infant aged less than one year, often during sleep, whose exact cause is unknown; it is the most common cause of death for infants under one year. Accidental Suffocation or Strangulation in Bed (ASSB) occurs when something in the sleep environment blocks an infant's airway during sleep. Polysomnography (PSG) is a comprehensive sleep study used to detect sleep and sleep problems [10]. PSG records brain waves, heart rate, respiration, blood oxygen level, and eye and leg movements during sleep. The electrical activity of the brain is recorded with an Electroencephalogram (EEG), a process called Electroencephalography. Brain cells communicate with each other through electrical impulses, and an EEG tracks and records the resulting brain-wave patterns. In the past, researchers have demonstrated the practicability of automatic sleep classification algorithms with PSG signals, out of which EEG is considered the most reliable signal for both adults [11-13] and neonates [14-16].

EEG patterns differ between infants and adults. A comparison of adult and infant EEG reveals that infant EEG patterns have smaller amplitudes, and different maturational changes occur within the first three years of an infant's life [17]. Several EEG-based algorithms have therefore been developed for automated neonatal sleep classification. Most of the existing algorithms designed for EEG-based neonatal sleep staging do not distinguish 'wake' as a distinct state, while other algorithms classify sleep stages according to different characteristics of EEG signals, including Low Voltage Irregular (LVI), Active Sleep II (AS II), High Voltage Slow (HVS), and Trace Alternant (TA)/Trace Discontinue (TD) [16,18-20]. The process of brain maturation starts during AS and wake.

Main contributions: The main aim of this research is to design an algorithm that avoids the intermixing of multiple sleep stages by categorizing wake and sleep as distinct stages, improving on the accuracy of [21]. Therefore, in this paper, 18 different autoML-based algorithms are presented for sleep-wake categorization: Random Forest, Adaboost Random Forest, Decision Tree (DT), Adaboost DT, Support Vector Machine (SVM), Adaboost SVM, Gaussian Naive Bayes (GNB), Quadratic Discriminant Analysis (QDA), Linear Discriminant Analysis (LDA), KNeighbours (KN), Ensemble Extra Tree Classifier (ETC), ETC with GridSearchCV, Multi-Layer Perceptron (MLP), Voting Classifier (Logistic Regression (LR), DT Classifier, Support Vector Classifier (SVC)), Stacking Classifier (KN, LR, MLP, Random Forest), Gradient Boosting (GB), Extreme Gradient Boosting (XGB), and LR. Among these 18 autoML-based algorithms, this study also aimed to identify the one achieving the highest accuracy. The research comprises three parts: (1) feature extraction, (2) autoML-based hyperparameter tuning, and (3) classification of sleep and wake. A total of twelve features were extracted from the multichannel EEG data, and all estimators were then trained and tested one by one. Furthermore, the proposed methodology is compared with reference [21] on the same dataset, resulting in an improvement in accuracy and kappa.

The rest of the article is organized as follows: Section 2 presents the related work. The methodology is proposed in Section 3. Classification results using the proposed methods are reported in Section 4 and discussed in Section 5. The conclusion of the research is presented in Section 6. Furthermore, Section 7 gives future recommendations.

    2 Related Work

The first application of EEG to the study of human sleep behavior was made in 1937 by Loomis et al. [22]. Loomis's pioneering work has since led to numerous algorithms for sleep classification using deep and machine learning [14-16,23-30]. Using least-squares support vector machine (LS-SVM) classifiers, De et al. proposed an advanced model to estimate a neonate's Postmenstrual Age (PMA) during Quiet Sleep (QS) and to classify sleep stages [28]. Cluster-based Adaptive Sleep Staging (CLASS) was designed by Dereymaeker et al. for automatic QS detection and to highlight its role in brain maturation [14]. Based on EEG data, the authors in [15] developed an SVM algorithm that tracks neonatal sleep states and identifies QS with an efficiency of 85%. On the basis of multi-channel EEG recordings, Pillay et al. developed a model to automatically classify neonatal sleep using Hidden Markov Models (HMM) and Gaussian Mixture Models (GMM). They found HMMs superior to GMMs, with a Cohen's kappa of 0.62. Later, they used a Convolutional Neural Network (CNN) based algorithm to classify two and four sleep states [29]. Another study classified QS with an enhanced Sinc-based CNN on EEG data, achieving a mean kappa of 0.77 ± 0.01 (with 8-channel EEG) and 0.75 ± 0.01 (with one bipolar EEG channel) [31]. Using publicly available single-channel EEG datasets, Rui et al. classified neonates' sleep patterns into Wake, N1, N2, N3, and REM using the MRASleepNet module [32]. Hangyu et al. designed MS-HNN in 2023 for the automatic classification of newborns' sleep using two, four, and eight channels [33]. To extract more features from sleep signals involving temporal information, they employed multiscale convolutional neural networks (MSCNN), squeeze-and-excitation (SE) blocks, and temporal information learning (TIL). However, none of the algorithms above included the waking state as a separate state in infants. Table 1 summarizes the literature review, including papers, datasets, and their contribution to classifying wake as a distinct state.

    Table 1: Summary of the literature review,including papers,datasets,and their contribution to classifying wake as a distinct state

    3 Methodology

This section describes the designed autoML-based estimators in detail. Fig. 1 illustrates the step-wise flowchart of the proposed methodology, which can be elaborated in the following steps.

Figure 1: Step-wise flowchart of the proposed methodology

    3.1 Dataset

EEG was recorded from 19 infants in the NICU of the Children's Hospital of Fudan University (CHFU), China. The Research Ethics Committee of the CHFU approved the study (Approval No. (2017) 89). In general, each neonate was kept under observation for 2 h while data were recorded, and at least one sleep cycle was observed during these 2 h. A complete 10-20 electrode installation includes "FP1-2", "F3-4", "F7-8", "C3-4", "P3-4", "T3-4", "T5-6", "O1-2", and "Cz" (17 electrodes). Depending on where an electrode is placed on the head, a letter indicates the lobe or location being assessed: pre-frontal, frontal, temporal, parietal, occipital, and central are represented by the letters FP, F, T, P, O, and C, respectively. Of the 19 EEG recordings, 15 contain all the electrodes specified except "T5-6", "F7-8", and "O1-2" (11 electrodes). For the remaining four recordings, "T5-6", "F7-8", "Cz", and "O1-2" were not recorded, leaving 10 electrodes [21]. The multichannel EEG was obtained using the NicoletOne EEG system. Fig. 2 shows the electrode locations for the 10 electrodes used in this study based on the 10-20 system. Note that Nz indicates the root of the nose and Iz the inion (occipital protuberance).

Figure 2: Electrode locations for the 10 electrodes used in this study based on the 10-20 system

    3.2 Visual Sleep Scoring

In this stage, two professionally trained doctors visually annotated the EEG segments into three main categories, i.e., sleep, wake, and artifacts. One doctor labeled segments by defining sleep, wake, and artifactual regions and is referred to as the primary rater (PR). The second doctor, the secondary rater (SR), verified the first doctor's annotations and also annotated the regions on which the PR was undecided. Non-cerebral characteristics as well as the EEG were utilized when identifying sleep and wake stages. Moreover, during the annotation process, the doctors also took the videos from the NICU into consideration.

    3.3 Pre-Processing

EEG recordings were acquired at 500 Hz, the actual recording frequency. To remove noise and artifacts, these recordings were pre-processed. During the pre-processing phase, the following steps are carried out:

1. First, a FIR (Finite Impulse Response) filter was applied in EEGLAB to band-pass filter the EEG recordings between 0.3 and 35 Hz. This frequency range contains relatively few artifacts and little noise, captures most sleep-related EEG activity, falls within the bandwidth of most EEG electrodes and amplifiers, and has become the de facto standard for EEG-based sleep classification.

2. The filtered multi-channel EEG signals are then segmented into 30-s epochs [38].

    3.A label is assigned to each epoch after segmentation.

4. The EEG recordings were contaminated with artifacts and noise during the recording and processing phases. Artifacts are removed on the basis of the annotations made by the well-trained doctors (PR and SR). After pre-processing, 3535 segments are left for training and testing. This study uses 70% of the data for training and 30% for testing the models.
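The filtering and epoching steps above can be sketched in Python. This is a minimal illustration on synthetic data; the filter length and the (channels × samples) array layout are assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.signal import firwin, filtfilt

FS = 500          # sampling rate in Hz, as recorded
EPOCH_SEC = 30    # epoch length used in the paper

def preprocess(eeg, fs=FS, band=(0.3, 35.0), numtaps=1001):
    """Band-pass filter a (channels x samples) EEG array and cut 30-s epochs."""
    taps = firwin(numtaps, band, pass_zero=False, fs=fs)   # FIR band-pass 0.3-35 Hz
    filtered = filtfilt(taps, [1.0], eeg, axis=-1)         # zero-phase filtering
    samples_per_epoch = EPOCH_SEC * fs
    n_epoch = filtered.shape[-1] // samples_per_epoch
    usable = filtered[:, : n_epoch * samples_per_epoch]
    # -> (epochs, channels, samples-per-epoch)
    return usable.reshape(eeg.shape[0], n_epoch, samples_per_epoch).transpose(1, 0, 2)

rng = np.random.default_rng(0)
fake = rng.standard_normal((9, FS * 90))   # 9 channels, 90 s of synthetic "EEG"
epochs = preprocess(fake)
print(epochs.shape)                         # (3, 9, 15000)
```

Each resulting epoch would then be labeled and screened for artifacts as described in steps 3 and 4.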

    3.4 Feature Extraction

In this step, 8 time-domain and 4 frequency-domain features are extracted from each EEG channel. These 12 features per channel are concatenated across the nine channels to generate an input vector of size 108. The Fast Fourier Transform (FFT) is used to extract the frequency-domain features of the EEG channels: the mean frequencies of the Delta (0.5-3 Hz), Theta (3-8 Hz), Alpha (8-12 Hz), and Beta (12-30 Hz) bands. Table 2 provides a brief description of these four EEG bands. Each EEG channel's time- and frequency-domain features are listed in Fig. 1.

    Table 2: A brief description of four EEG bands
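The per-band mean-frequency computation can be sketched as follows. This is a hedged illustration: the paper does not specify its exact spectral estimator, so a plain FFT periodogram with a power-weighted mean is assumed here.

```python
import numpy as np

FS = 500
BANDS = {"delta": (0.5, 3), "theta": (3, 8), "alpha": (8, 12), "beta": (12, 30)}

def band_mean_frequencies(epoch, fs=FS):
    """Power-weighted mean frequency in each band for one single-channel epoch."""
    freqs = np.fft.rfftfreq(epoch.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(epoch)) ** 2
    feats = []
    for lo, hi in BANDS.values():
        m = (freqs >= lo) & (freqs < hi)                  # bins inside the band
        feats.append(np.sum(freqs[m] * power[m]) / np.sum(power[m]))
    return np.array(feats)

rng = np.random.default_rng(1)
x = rng.standard_normal(15000)             # one 30-s epoch at 500 Hz
f = band_mean_frequencies(x)
print(f.shape)                              # (4,)
```

Repeating this per channel and appending the eight time-domain features per channel yields the 108-dimensional input vector.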

    3.5 Hyper-Parameter Tuning

To apply a machine learning model to different problems, hyperparameter optimization is required [39]. Choosing the best hyper-parameters will generally improve a model's performance. Many automatic optimization techniques are available, such as autoML hyper-parameter tuning, RandomizedSearchCV, and GridSearchCV. While RandomizedSearchCV and GridSearchCV require users to specify a range of parameters to test, autoML hyper-parameter tuning is time-saving and automatically selects the most appropriate values. Using grid search or Bayesian optimization strategies, autoML tunes hyperparameters, selects search strategies, and evaluates multiple models using performance metrics. A dynamic search space is updated based on results, a stopping criterion terminates the search, and the best configuration is selected for the best performance of the model. By automating these processes, one can reduce manual effort, improve configurations, and enhance the performance of machine learning models. Therefore, autoML hyper-parameter tuning is used. In this study, the AutoMLSearch class from the EvalML library was used to tune hyperparameters. The data were split into training and testing sets; AutoMLSearch was initialized with the training data, a hyperparameter search was conducted, the best pipeline was selected based on performance, the pipeline was evaluated on the test set, and the results were stored. Through this automated process, hyperparameters were explored efficiently and optimal configurations were identified for each algorithm. Each of the 18 algorithms used in this study, as well as the MLP employed in [21], has its own set of parameters, listed in Table 3. n_estimators represents the number of trees in the forest, whereas random_state controls the randomness of the model and can only take non-negative integer values.

    Table 3: List of all the parameters for each algorithm
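For contrast with the EvalML-based autoML search used in the study, a manual GridSearchCV search of the kind mentioned above (where the user must specify the candidate values) can be sketched as follows. Synthetic data stands in for the 108-dimensional EEG feature vectors, and the grid values are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for the 108-dimensional EEG feature vectors
X, y = make_classification(n_samples=300, n_features=108, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# A user-specified grid -- autoML, by contrast, proposes candidate values itself
grid = {"n_estimators": [100, 300], "max_depth": [None, 10]}
search = GridSearchCV(RandomForestClassifier(random_state=21), grid, cv=3)
search.fit(X_tr, y_tr)
print(search.best_params_, round(search.score(X_te, y_te), 2))
```

AutoML tools automate exactly this loop, plus model selection, over a dynamically updated search space.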

    3.6 Random Forest

Random Forest is a powerful learning algorithm. It is an ensemble technique, because it makes the final decision by combining the results of many decision trees [40]. Besides being flexible and easy to use, it can be applied to both classification and regression [41]. Because Random Forest averages the predictions of many trees, it reduces variance and is resistant to overfitting. In addition, it handles missing values. To select the features that contribute most to classification, it provides relative feature importance. The feature importance in each decision tree is calculated as [42]:

$$fi_c = \frac{\sum_{j \in S_c} ni_j}{\sum_{j \in \text{all nodes}} ni_j} \quad (1)$$

where $fi_c$ is the feature importance of column $c$ in a particular decision tree, $S_c$ is the set of nodes that split on column $c$, $ni_j$ is the node impurity decrease at node $j$, and the denominator is the total node impurity decrease in the whole decision tree. Node impurity measures how well a tree splits the data. In this case, the child-node weights are:

$$w_{left(i)} = \frac{n_{left}}{n_i}, \qquad w_{right(i)} = \frac{n_{right}}{n_i} \quad (2)$$

where $n_{left}$ is the number of samples in the left child node, $n_{right}$ is the number of samples in the right child node, and $n_i$ is the number of samples at node $i$ [43]. As a result, the $ni_i$ can be calculated as follows:

$$ni_i = C_i - w_{left(i)}\,C_{left(i)} - w_{right(i)}\,C_{right(i)} \quad (3)$$

where $C_i$ is the impurity (e.g., Gini) at node $i$ for the column being split. Now, the value of $fi_c$ from Eq. (1) can be normalized between 0 and 1 by dividing it by the sum of all feature importance values, giving $normfi_c$.

The final importance of a feature across all Random Forest trees is then obtained by summing its normalized importance over the trees and dividing by the total number of trees in the designed Random Forest model:

$$RFfi_c = \frac{\sum_{t=1}^{T} normfi_{c,t}}{T} \quad (4)$$

where T is the total number of trees. The more trees a Random Forest has, the stronger the forest is. Random Forest first selects random samples from the provided dataset, then constructs a decision tree for each sample. After each decision tree has been evaluated, it gives a predicted result. Finally, the result with the most votes is selected as the final prediction. Fig. 3 illustrates the general working mechanism of the Random Forest classifier on a dataset. By setting random_state to 21 and using 1000 n_estimators in this research, an autoML-based Random Forest estimator is applied to the EEG data. Its accuracy and kappa are 84.78% and 69.63%, respectively, which are the maximum accuracy and kappa for the categorization of sleep-wake stages.
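The Random Forest settings reported above (1000 trees, random_state = 21) can be sketched with scikit-learn. The dataset here is synthetic stand-in data, so the scores this prints are illustrative, not the paper's results.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 108-feature EEG vectors (sleep = 1, wake = 0)
X, y = make_classification(n_samples=500, n_features=108, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=1000, random_state=21)  # paper's settings
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(round(accuracy_score(y_te, pred), 2), round(cohen_kappa_score(y_te, pred), 2))

# Relative feature importances (Eq. 1-4): normalized, they sum to 1
print(round(clf.feature_importances_.sum(), 2))   # 1.0
```

The `feature_importances_` attribute corresponds to the tree-averaged, normalized importances described above.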

    3.7 Adaboost Algorithm

Adaboost trains and deploys a set of trees in series, which is why it is called an ensemble method. It works on the boosting principle, in which data samples misclassified by a previous weak classifier are reclassified by the next weak classifier. All the weak classifiers are connected in series to produce a strong classifier at the end. Basically, Adaboost combines a number of shallow decision trees, called stumps, in the boosting process. When the first decision tree/model is trained and deployed, the records falsely classified by the first model are given more priority [44]. These records are emphasized in the input to the second model, which is trained to focus on the weaknesses of the previous tree. The weights of the previously misclassified samples are boosted so that the next tree concentrates on correctly classifying them. The process continues until a strong classifier is generated. One can increase classification accuracy by adding weak classifiers in series; however, this may result in severe overfitting [45]. Fig. 4 illustrates the general working mechanism of the Adaboost classifier on a dataset with two classes and two features. As can be seen from Fig. 4, weak learner 2 improves on the results of weak learner 1, producing a strong learner that has the decision boundaries of both. The Adaboost algorithm works by assigning weights to the data points [44]:

$$w_n = \frac{1}{N}, \qquad n = 1, 2, 3, \ldots, N \quad (5)$$

where N represents the total number of data points. The next step is to determine which stump classifies the data well: the tree with the lowest Gini index is selected [40]. After that, the stump's performance and total error are calculated as follows [40]:

$$\text{performance} = \frac{1}{2}\ln\!\left(\frac{1 - TE}{TE}\right) \quad (6)$$

$$TE = \sum_{n\,:\,\text{misclassified}} w_n \quad (7)$$

The total error TE must be between 0 and 1, where 1 represents a useless stump and 0 a perfect stump. Now, to calculate the updated weights of the data points, the following formula is used [40]:

$$w_n^{new} = w_n \cdot e^{\pm\,\text{performance}} \quad (8)$$

Figure 3: General diagram of Random Forest classifier implementation on a dataset

If a data point is correctly classified, the exponent in the weight update is negative (its weight shrinks); otherwise it is positive (its weight grows). The weights are then updated and normalized accordingly, and all these steps are repeated until a low training error is obtained. An autoML-based Adaboost Random Forest estimator is applied to the EEG data by setting random_state to 15 and using 1000 n_estimators; as the base estimator, a Random Forest classifier with 2000 n_estimators was used. For the categorization of sleep-wake stages, its accuracy and kappa came out as 84.59% and 69.24%, respectively. Since this research was designed to achieve accurate classification of neonatal sleep and wake from EEG data, emphasis was placed on Random Forest and Adaboost Random Forest due to their superior performance in terms of accuracy and all other performance parameters. However, a brief explanation of the other 16 algorithms is given below.
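The weight-initialization and re-weighting steps described above can be sketched directly from the formulas. This is a minimal NumPy illustration of one boosting round, not the paper's implementation.

```python
import numpy as np

def adaboost_round(weights, correct):
    """One AdaBoost re-weighting step.
    weights: current sample weights (sum to 1); correct: boolean array per sample."""
    total_error = weights[~correct].sum()                  # TE, in (0, 1)
    performance = 0.5 * np.log((1 - total_error) / total_error)
    # misclassified samples get exp(+performance), correct ones exp(-performance)
    new_w = weights * np.exp(np.where(correct, -performance, performance))
    return new_w / new_w.sum(), performance                # normalize to sum to 1

w = np.full(5, 1 / 5)                                      # initial weights w_n = 1/N
w, perf = adaboost_round(w, np.array([True, True, True, False, True]))
print(round(w[3], 3))                                      # 0.5: the mistake is boosted
```

After the update, the single misclassified sample carries half of the total weight, so the next stump concentrates on it.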

Figure 4: General diagram of Adaboost classifier

Decision Tree: A decision tree is made up of nodes representing features, branches representing decision rules, and leaves representing outcomes. It recursively splits the data on the most advantageous feature to maximize information gain or minimize impurity. In this study, the classification is based on a single decision tree.

Adaboost DT: In Adaboost DT, a DT is used as the base estimator. As each tree is trained, its mistakes are corrected by the next, producing a sequence of decision trees. Final predictions aggregate all trees, with more accurate trees given higher weight.

SVM: This classifier separates classes by finding an optimized hyperplane that maximizes the margin between classes while minimizing classification errors. Kernel functions map the data into higher-dimensional spaces so that linear separation can be achieved.

Adaboost SVM: Adaboost SVM uses an SVM as the base estimator. Each SVM in the sequence corrects the mistakes of the previous one, and the SVMs are combined into a final prediction, with more accurate SVMs given greater weight.

GNB: Naive Bayes is a classification algorithm based on Bayes' theorem, which assumes features are independent. In GNB, continuous features are assumed to follow a Gaussian distribution. Class-conditional probabilities are computed from the observed features, and the class with the highest likelihood is predicted.

QDA: QDA classifies data using quadratic decision boundaries. For each class, a quadratic function estimates the class-conditional probability density, and QDA predicts the class with the highest posterior probability given the observed features.

LDA: LDA assumes that each class follows a Gaussian distribution. Class-conditional probability densities and prior probabilities are estimated for each class, and the most probable class for a given set of features is determined from the posterior probability of the class given those features.

KN: In KN, the k nearest training samples in the feature space are considered when making a prediction. Majority voting among the k nearest neighbors determines the class label.

ETC: Like Random Forest, Extra Trees is an ensemble learning method. Multiple decision trees are built using randomly generated split points, and the split with the lowest impurity is selected. A majority vote or an average of all predictions yields the final prediction.

ETC with GridSearchCV: GridSearchCV optimizes the hyperparameters of ETC and improves its performance.

MLP: MLP neural networks transform inputs into the desired outputs through a series of non-linear transformations, learning optimal weights and biases along the way. Backpropagation adjusts the weights and biases during training to minimize the error between predicted and actual outputs.

Voting Classifier: The proposed Voting Classifier combines the predictions of multiple classifiers, including LR, DT, and SVC, using a voting strategy (e.g., majority voting or weighted voting). The final prediction is based on the combined decisions of all classifiers.

Stacking Classifier: The proposed Stacking Classifier combines multiple base classifiers, including KN, LR, MLP, and Random Forest, through a meta-classifier. The meta-classifier predicts the final outcome from input features derived from the base classifiers, which are trained on the training data before the meta-classifier is fitted.

GB: Gradient boosting integrates multiple weak learners (e.g., decision trees) in a stepwise fashion, each trained to correct the previous learner's mistakes. The predictions of all weak learners are combined, with the predictions of the most accurate learners given higher weight.

XGB: XGB, or XGBoost, is a gradient boosting method optimized for large datasets. Through regularization and a more efficient tree-construction algorithm, it reduces overfitting and improves upon traditional gradient boosting.

LR: LR is a linear classification algorithm that maps features to a probability of belonging to a class. The coefficients of the linear model are estimated by maximum likelihood, and a logistic function converts the linear output into probabilities.

The list of parameters for all the algorithms used in this study is presented in Table 3.
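As an illustration of the Stacking Classifier described above (KN, LR, MLP, and Random Forest base learners feeding a meta-classifier), a scikit-learn sketch on synthetic data might look like the following. The logistic-regression meta-classifier and all hyperparameter values are assumptions for the sketch, not the paper's configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the 108-feature sleep/wake vectors
X, y = make_classification(n_samples=300, n_features=108, random_state=0)

base = [
    ("kn", KNeighborsClassifier()),
    ("lr", LogisticRegression(max_iter=1000)),
    ("mlp", MLPClassifier(max_iter=500, random_state=0)),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
]
# Base predictions become the input features of the meta-classifier
stack = StackingClassifier(estimators=base, final_estimator=LogisticRegression())
stack.fit(X, y)
print(stack.predict(X[:3]).shape)   # (3,)
```

Replacing `StackingClassifier` with `VotingClassifier` and dropping the meta-classifier gives the Voting Classifier variant described above.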

    4 Results

The proposed scheme is tested and evaluated via different performance metrics such as the confusion matrix, accuracy, Cohen's kappa, recall, precision, Matthews correlation coefficient (MCC), F1-score, specificity, sensitivity, ROC (Receiver Operating Characteristic) curve, and precision-recall curve. Mathematically, accuracy [42], Cohen's kappa [46], recall [47], precision [47], MCC [46], F1-score [44], sensitivity [28,45], and specificity [28,45] are computed as follows:

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \quad (9)$$

$$\kappa = \frac{p_o - p_e}{1 - p_e} \quad (10)$$

$$\text{Recall} = \frac{TP}{TP + FN} \quad (11)$$

$$\text{Precision} = \frac{TP}{TP + FP} \quad (12)$$

$$MCC = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}} \quad (13)$$

$$F1 = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}} \quad (14)$$

$$\text{Sensitivity} = \frac{TP}{TP + FN} \quad (15)$$

$$\text{Specificity} = \frac{TN}{TN + FP} \quad (16)$$

where $p_o$ is the observed agreement (the accuracy) and $p_e$ the agreement expected by chance.

The experimentally computed values of all the above-mentioned performance metrics for all the proposed algorithms are shown in Table 4.

    Table 4: Experimentally computed values of the proposed algorithms

    4.1 Confusion Matrix

Classification models are evaluated using a confusion matrix, which shows predicted versus actual classification information. The binary-class confusion matrix is shown in Fig. 5a. In Fig. 5a, with sleep as the positive class, TP represents true positives (sleep predicted as sleep), TN represents true negatives (wake predicted as wake), FP represents false positives (wake predicted as sleep), and FN represents false negatives (sleep predicted as wake). The confusion matrix for the autoML-based Random Forest classifier is illustrated in Fig. 5b. Table 5 shows the values of TP, FN, FP, and TN from the confusion matrices of all applied models.

Figure 5: Confusion matrices. (a) General binary classification confusion matrix, (b) Confusion matrix for the autoML-based Random Forest classifier

    Table 5: Experimentally computed values from the confusion matrices of the proposed algorithms
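The performance metrics above can be computed directly from confusion-matrix counts such as those in Table 5. The counts in this sketch are made up for illustration and are not taken from the paper.

```python
def metrics_from_confusion(tp, fn, fp, tn):
    """Standard binary metrics from confusion-matrix counts (sleep = positive)."""
    total = tp + fn + fp + tn
    acc = (tp + tn) / total
    recall = tp / (tp + fn)                 # = sensitivity = TPR
    precision = tp / (tp + fp)
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_yes = ((tp + fp) / total) * ((tp + fn) / total)
    p_no = ((tn + fn) / total) * ((tn + fp) / total)
    kappa = (acc - (p_yes + p_no)) / (1 - (p_yes + p_no))
    return dict(accuracy=acc, recall=recall, precision=precision,
                specificity=specificity, f1=f1, kappa=kappa)

m = metrics_from_confusion(tp=450, fn=50, fp=100, tn=400)   # illustrative counts
print(round(m["accuracy"], 2), round(m["kappa"], 2))         # 0.85 0.7
```

Note how kappa is noticeably lower than accuracy, as in Table 4, because it discounts agreement expected by chance.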

    4.2 ROC Curve

A ROC curve shows a model's performance at every classification threshold. Two parameters are plotted: the TPR (True Positive Rate) and the FPR (False Positive Rate). The TPR is the same as recall, already defined in Eq. (11), and the FPR is defined as:

$$FPR = \frac{FP}{FP + TN} \quad (17)$$

The closer the ROC curve is to the top-left corner, the better the model is at categorizing the data. In ROC graphs, the AUC is the Area Under the Curve, with values ranging from 0 to 1. Excellent models have an AUC near 1, indicating good separation capability. An AUC near 0 means the model's predictions are largely inverted, while an AUC of 0.5 means the model cannot separate the classes at all. Fig. 6a shows the ROC curve for the autoML-based Random Forest classifier.

Figure 6: Performance curves. (a) ROC curve for the autoML-based Random Forest classifier, (b) Precision-recall curve for the autoML-based Random Forest classifier

    4.3 Precision-Recall Curve

For different thresholds, the precision-recall curve shows how precision varies with recall. Classifiers with high precision and recall have a large area under the curve, and their curves approach the upper-right corner of the graph. Fig. 6b shows the precision-recall curve for the autoML-based Random Forest classifier.

In general, ROC curves and precision-recall curves deal with class imbalance differently. ROC curves are relatively insensitive to class imbalance and measure how well a classifier distinguishes between the two classes, whereas precision-recall curves are sensitive to class imbalance and measure how well a classifier predicts the positive class, which makes them more informative when the classes are imbalanced. In addition, the ROC curve allows selection of a threshold that balances the trade-off between the false positive and true positive rates. In Fig. 7, the ROC and precision-recall curves for all the classifiers are compared.
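Both curves are computed by sweeping a threshold over the classifier's scores. A small sketch with toy labels and scores (not the paper's data) shows the mechanics:

```python
import numpy as np
from sklearn.metrics import auc, precision_recall_curve, roc_curve

# Toy scores: positives (1) tend to score higher than negatives (0)
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])
scores = np.array([0.1, 0.3, 0.4, 0.6, 0.5, 0.7, 0.8, 0.9])

fpr, tpr, _ = roc_curve(y_true, scores)        # FPR = FP / (FP + TN), Eq. (17)
print(round(auc(fpr, tpr), 2))                  # 0.94 -- area under the ROC curve

prec, rec, _ = precision_recall_curve(y_true, scores)
pr_area = auc(rec, prec)                        # area under the PR curve
print(0.0 <= pr_area <= 1.0)
```

The one misranked pair (the positive scored 0.5 below the negative scored 0.6) is what pulls the ROC AUC below 1.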

    5 Discussions

There are several algorithms for EEG-based automatic neonatal sleep staging, but most researchers have not defined wake as a distinct state. In most cases, wake and AS I are combined to form an LVI stage, which causes the two sleep stages to be intermixed. Previously, a multilayer perceptron neural network was designed for the classification of sleep and wake as distinct states, achieving an accuracy of 82.53% [21]. This study aimed to improve the accuracy and kappa of [21] by using the same dataset and proposing 18 different classification models. First, EEG recordings were obtained from 19 infants in the NICU at the CHFU, China; the data come from neonates with a post-menstrual age of 37 ± 05 weeks. In EEGLAB, a FIR filter was used to band-pass filter the recordings between 0.3 and 35 Hz. After filtering, the multi-channel EEG signals were segmented into 30-s epochs and each epoch was assigned a label. Finally, artifacts were removed based on the annotations by the doctors (PR and SR), leaving 3535 segments for training and testing after pre-processing. A total of 12 features were extracted, 8 in the time domain and 4 in the frequency domain. An input vector of size 108 was created by combining the 12 features from each of the 9 EEG channels. The most significant features were in the frequency domain: four frequency bands (delta, theta, alpha, and beta) were extracted using the FFT, and the mean frequency of each band was determined. Feature scaling was applied after feature extraction, followed by automatic optimization through autoML hyperparameter tuning to improve the performance of the models. As input, all autoML-based classifiers take a vector of size 108 consisting of the joint attributes of the nine channels. 30% of the total EEG data is used for testing these classifiers and the remaining 70% for training, and every algorithm was trained and tested with the same data.

Figure 7: Performance curves. (a) A comparison of the ROC curves for all classifiers, (b) A comparison of the precision-recall curves for all classifiers

    This study applied 18 different machine learning algorithms and compared their performance. Since this research was designed to achieve accurate classification of neonatal sleep and wake states from EEG data, the emphasis was placed on Random Forest and Adaboost Random Forest, which outperformed all other models in accuracy and every other performance parameter. Each of the 18 algorithms used in this study, as well as the MLP employed in [21], has its own set of hyper-parameters, which are listed in Table 3. The experimentally computed results are presented in Table 4. These results are computed for a binary class, i.e., sleep or wake. As Table 4 shows, the autoML-based Random Forest achieves an accuracy of 84.78% and a kappa of 69.63%, the maximum accuracy and kappa reported for sleep-wake categorization. In addition, the accuracy and kappa values of the autoML-based Adaboost Random Forest and the stacking classifier have also improved: for the autoML-based Adaboost Random Forest, accuracy and kappa were 84.59% and 69.24%, respectively; for the stacking classifier, 84.21% and 68.44%. For [21], the accuracy and kappa were 82.7% and 65%, respectively.

    The confusion matrix for the autoML-based Random Forest is illustrated in Fig. 5b, while the confusion-matrix values for the other algorithms are listed in Table 5. It is clear from Table 5 that the TP and TN counts for the autoML-based Random Forest are very large compared to all other applied algorithms. Moreover, Fig. 6a shows the ROC curve for the autoML-based Random Forest classifier. Because the ROC curve lies close to the top-left corner, this model categorizes the data well; its AUC of 0.91 indicates that it is capable of good separation. A precision-recall plot is shown in Fig. 6b. Because the curve stays close to the top-right corner and the area under it is high, it can be concluded that this classifier achieves both high precision and high recall, which makes it better at classification.

    The proposed autoML-based Random Forest classifier also leads in recall, precision, MCC, F1-score, sensitivity, and specificity. The experimentally computed recall, precision, MCC, F1-score, sensitivity, and specificity values for the autoML-based Random Forest are 89.68%, 81.01%, 70%, 85.13%, 80.14%, and 89.68%, respectively. Furthermore, for [21], the sleep F1-score, wake F1-score, sensitivity, and specificity are 81.45%, 82.55%, 83.29%, and 81.73%, respectively. Table 6 gives a performance comparison between the proposed study and existing work on the same dataset. As Table 6 shows, the authors in [36] achieved higher accuracy because they used video-EEG data rather than aEEG data; however, infants' faces and voices appear in video-EEG recordings, which raises privacy concerns. Thus, based on all of the above discussion and evidence, the autoML-based Random Forest outperforms all other algorithms in accuracy, kappa, and all other performance parameters.
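    The binary evaluation metrics discussed above (accuracy, sensitivity, specificity, precision, F1-score, MCC, and Cohen's kappa) can all be derived from the four entries of a 2x2 confusion matrix. The sketch below, in plain Python, shows the standard formulas; the counts passed in at the bottom are hypothetical and do not correspond to the actual confusion matrices in Table 5.

    ```python
    import math

    def binary_metrics(tp, tn, fp, fn):
        """Standard binary-classification metrics from a 2x2 confusion matrix."""
        n = tp + tn + fp + fn
        accuracy = (tp + tn) / n
        sensitivity = tp / (tp + fn)      # recall for the positive (sleep) class
        specificity = tn / (tn + fp)      # recall for the negative (wake) class
        precision = tp / (tp + fp)
        f1 = 2 * precision * sensitivity / (precision + sensitivity)
        # Matthews correlation coefficient
        mcc = (tp * tn - fp * fn) / math.sqrt(
            (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
        # Cohen's kappa: observed agreement corrected for chance agreement
        p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
        kappa = (accuracy - p_chance) / (1 - p_chance)
        return dict(accuracy=accuracy, sensitivity=sensitivity,
                    specificity=specificity, precision=precision,
                    f1=f1, mcc=mcc, kappa=kappa)

    # Hypothetical counts, for illustration only
    m = binary_metrics(tp=900, tn=600, fp=211, fn=103)
    print({k: round(v, 4) for k, v in m.items()})
    ```

    A perfect classifier (fp = fn = 0) yields 1.0 for every metric; kappa and MCC fall toward 0 as predictions approach chance level, which is why they complement raw accuracy on imbalanced sleep-wake data.
    
    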

    This study's main limitation is its small sample size of only 19 subjects; the performance and effectiveness of the algorithm may improve with a larger dataset. Moreover, artifacts were removed manually at the preprocessing stage. A future study could design a model that removes these artifacts automatically, which would enable the proposed method to be used practically in the NICU. It would also be possible to categorize more sleep stages, such as Active Sleep (AS), Quiet Sleep (QS), and Wake. Similarly, performance could be enhanced by utilizing more data.

    6 Conclusion

    In this study, multi-channel EEG data and 18 different autoML-based estimators are utilized to classify neonates' sleep-wake states. Each of these estimators takes a vector of size 108 as input, containing the joint attributes of nine channels. For training and testing of the proposed approach, 3525 30-s segments of EEG recordings from 19 infants were used. The data were obtained from neonates at post-menstrual age 37 ± 05 weeks. Random Forest, Adaboost Random Forest, DT, Adaboost Decision Tree, SVM, Adaboost SVM, GNB, QDA, LDA, KNeighbours, Ensemble ETC, ETC with GridSearchCV, MLP, Voting Classifier (LR, DT Classifier, SVC), Stacking Classifier (KNeighbours, LR, MLP, Random Forest), GB, XGB, LR, and the method of reference [21] were applied to the same dataset and their results compared. Compared to the study in [21], which classified sleep and wake stages using MLP neural networks, this study achieved higher accuracy: a maximum accuracy of 84.78% and kappa of 69.63% for the autoML-based Random Forest. The study shows that multi-channel EEG signals can be successfully classified by autoML-based approaches for neonatal sleep-wake classification, which can help healthcare providers in the early identification and treatment of sleep-related issues in neonates.
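    The pipeline summarized above (a 108-dimensional feature vector per 30-s segment, fed to a Random Forest for binary sleep-wake classification) can be sketched as follows. This is a minimal illustration assuming scikit-learn: the data are synthetic stand-ins for the real, non-public EEG features, the labels are generated artificially, and the hyper-parameters shown are placeholders, not the tuned values listed in Table 3.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score, cohen_kappa_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_segments, n_features = 3525, 108        # 30-s segments x (9 channels x 12 features)
    X = rng.normal(size=(n_segments, n_features))
    # Synthetic sleep(1)/wake(0) labels driven by two features, for illustration only
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)  # placeholder settings
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print(f"accuracy={accuracy_score(y_te, pred):.3f}  "
          f"kappa={cohen_kappa_score(y_te, pred):.3f}")
    ```

    With real EEG features, the held-out split would be stratified by subject so that segments from one infant never appear in both the training and test sets.
    
    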

    7 Future Recommendations

    In the future, the accuracy could be improved, as could the ability to classify further sleep states, i.e., Active Sleep (AS), Quiet Sleep (QS), and Wake. To enhance the performance of the proposed methodology, more data can be utilized. Moreover, the artifacts were removed manually in the preprocessing stage, but in the future an automatic removal model can be designed to remove these artifacts.

    Acknowledgement: Not applicable.

    Funding Statement: This research work is funded by the Chinese Government Scholarship. The findings and conclusions of this article are solely the responsibility of the authors and do not represent the official views of the Chinese Government Scholarship.

    Author Contributions: Study conception and design: Hafza; data collection: Saadullah; analysis and interpretation of results: Hafza, Wei, Muhammad; draft manuscript preparation: Hafza, Saadullah. All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials: The dataset analyzed during the current study is not publicly available due to lack of permission from the Children's Hospital affiliated with Fudan University, but the codes are available from the corresponding author on reasonable request.

    Ethics Approval: The authors would like to note that the neonate pictured in Fig. 1 is the first author's own daughter; consent has been given to use the picture for publication.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
