
    Multi-Modality and Feature Fusion-Based COVID-19 Detection Through Long Short-Term Memory

    2022-11-11 10:44:26
    Computers, Materials & Continua, 2022, Issue 9

    Noureen Fatima, Rashid Jahangir, Ghulam Mujtaba, Adnan Akhunzada, Zahid Hussain Shaikh and Faiza Qureshi

    1Center of Excellence for Robotics, Artificial Intelligence, and Blockchain, Department of Computer Science, Sukkur IBA University, Sukkur, Pakistan

    2Department of Computer Science, COMSATS University Islamabad-Vehari Campus, Pakistan

    3Faculty of Computing and Informatics, University Malaysia Sabah, Kota Kinabalu, 88400, Malaysia

    4Department of Mathematics, Sukkur IBA University, Sukkur, Pakistan

    Abstract: The Coronavirus Disease 2019 (COVID-19) pandemic poses worldwide challenges that surpass the boundaries of country, religion, race, and economy. The current benchmark method for the detection of COVID-19 is reverse transcription polymerase chain reaction (RT-PCR) testing. Although this testing method is accurate enough for the diagnosis of COVID-19, it is time-consuming, expensive, expert-dependent, and violates social distancing. In this paper, we propose an effective multi-modality and feature fusion-based (MMFF) COVID-19 detection technique using deep neural networks. For multi-modality, we utilized the cough, breath, and voice samples of healthy as well as COVID-19 patients from the publicly available COSWARA dataset. Several useful features were extracted from the aforementioned modalities and fed as input to a long short-term memory (LSTM) recurrent neural network for classification. An extensive set of experimental analyses was performed to evaluate the performance of our proposed approach. The experimental results showed that our proposed approach outperformed four recently published baseline approaches. We believe that our proposed technique will assist potential users to diagnose COVID-19 without the intervention of any expert in a minimal amount of time.

    Keywords: COVID-19 detection; long short-term memory; feature fusion; deep learning; audio classification

    1 Introduction

    The novel COVID-19 was declared a pandemic by the World Health Organization (WHO) because of its rapid spread. Its growth trajectory began on January 4, 2020, which constrained most countries to take serious precautionary measures, such as lockdowns and dedicated isolation facilities in hospitals, to keep the infection rate at a minimum. As of mid-January 2021, the number of confirmed cases of COVID-19 had crossed 95 million with more than two million deaths. Because of its devastation, COVID-19 has put millions of lives at stake in 221 countries and territories. The global effort to address the challenge has empowered two leading companies, Moderna and Pfizer-BioNTech, which developed vaccines for the disease with reported efficacy exceeding 90% [1]. Furthermore, dozens of vaccines are under clinical trials and around 20 are in their last stages of testing.

    One of the most effective methods to control the spread of COVID-19 is self-isolation. The isolation period for COVID-19 takes two weeks on average [2]. The most prominent symptom found in COVID-19 patients is failure of the respiratory system in the guise of dry cough and dyspnea; a more severe condition causes rhinorrhea and sore throat. SARS-CoV-positive patients, after 7-10 days of infection, may show unconventional radiographic variations in the lungs, thereby indicating pneumonia. About 70% to 90% of CoV-positive patients may suffer from lymphocytopenia [3].

    Real-time PCR is the most practiced method for quantifying the unique sequence of the virus in the designated gene, ribonucleic acid (RNA), with results available in 2-48 h [4]. This method, though generally employed to diagnose COVID-19, is inadequate to control the disease for certain reasons: a) a dearth of skillful paramedical staff [5]; b) by the time an RT-PCR test detects COVID-19 in a patient, the virus has often already spread.

    COVID-19 cases have increased with such rapidity that they have brought about an outgrowth of proposals for technological solutions in healthcare. Certainly, the need to develop modest, economical, fast, and accurate testing procedures for COVID-19 diagnosis has become pivotal for healthcare, policymaking, and economic growth in several nations. The main focus of this study is to use machine learning-based and/or deep learning-based techniques to provide an efficient model for the diagnosis of COVID-19 as a cheaper alternative to the traditional RT-PCR test.

    Audio signals generated by the human body (e.g., sighs, breathing, heart, digestion, and vibration sounds) have often been used in clinical research to diagnose diseases. Researchers have used the human voice to diagnose the early symptoms of diseases: for example, Parkinson's disease correlates with the softness of speech, while vocal tone, pitch, rhythm, rate, and volume correlate with invisible illnesses such as post-traumatic stress disorder [6].

    Deep learning (DL) is an area of artificial intelligence that enables the creation of end-to-end models that achieve promising results from input data, without the need for manual feature extraction [7-9]. The strength of these models is that they learn rich features from the given raw data instead of relying on human-engineered features. Deep learning models work effectively because their multi-layered architecture extracts more features than human engineering can. Deep learning techniques have been successfully applied to many problems, for instance, arrhythmia detection [4,10], skin cancer classification [11], breast cancer detection [12], brain disease classification, image segmentation [13], and many others.

    Several researchers have employed machine learning and DL techniques to detect COVID-19 through various modalities including X-ray images [14] and patient voice, breathing, and cough sounds [15-19]. For instance, Imran et al. [17] employed machine learning-based and deep learning-based approaches to identify COVID-19 patients using the cough modality and achieved 93% classification accuracy. Bagad et al. [15] applied a DL-based approach to identify COVID-19 from cough sounds and yielded 90% sensitivity. In the aforementioned studies, the authors used only one modality to detect COVID-19, and their obtained accuracies can be further improved. Furthermore, several studies have reported that the combination of multi-modality features and their fusion can generate more robust results [20,21]. To facilitate this, Sharma et al. [18] provided an open-access dataset named COSWARA for the detection of COVID-19. This dataset includes several modalities such as breathing sounds, patient voice, and cough sounds, along with other attributes such as smoker status, temperature, etc. Sharma et al. [18] used the COSWARA dataset to detect COVID-19 patients and achieved 63% accuracy using the random forest algorithm. However, this accuracy was low and can be enhanced with newer ML-based algorithms. To overcome this, Grant et al. [22] used the same COSWARA dataset and obtained 87% accuracy. There are two major limitations in the work done by Bagad et al. [15] and Imran et al. [17]. First, the authors used only one modality, the patient sound modality. Second, the accuracy is low and can be further improved with more features from multiple modalities and with more robust deep neural network algorithms. Many researchers have shown that a single modality in the field of medicine is sometimes ineffective for differentiating the complex details of a disease [23] and have strongly suggested using multi-modality for better results. Keeping those suggestions in view, we have proposed the MMFF technique (MMFFT).

    To address this issue, we propose a multi-modality and feature fusion-based (MMFF) technique using a deep neural network for the classification of COVID-19 patients on the COSWARA dataset. We tested our hypothesis by utilizing the cough, breath, and voice samples of healthy as well as COVID-19 patients. Several useful features were extracted from the aforementioned modalities and then fed as input to a long short-term memory recurrent neural network for classification. An extensive set of experimental analyses was performed to evaluate the performance of our proposed approach. The experimental results showed that our proposed approach outperformed four recently published baseline approaches.

    This research aims to develop an effective multi-modality and feature fusion-based COVID-19 detection technique using deep neural networks. For multi-modality, we use the cough, breath, and voice samples of healthy as well as COVID-19 patients from the publicly available COSWARA dataset.

    The purpose of this proposed study is to answer the following questions:

    RQ1: How is MMFFT beneficial for the diagnosis of COVID-19 from voice, cough, and breath samples?

    RQ2: What dataset has been utilized for the diagnosis of COVID-19?

    RQ3: How do we deal with the class imbalance problem in the COSWARA dataset?

    RQ4: What techniques are employed to classify whether a sound sample belongs to a healthy or a COVID-19 patient?

    RQ5: How does our proposed model perform compared to four existing baseline techniques?

    We believe that our proposed methodology, “a multi-modality and feature fusion-based COVID-19 detection through Long Short-Term Memory (LSTM)”, can be effective in enabling a multi-modality-based technology solution for point-of-care detection of COVID-19. Moreover, this method provides COVID-19 detection results easily, within 2-3 min, and without violating social distancing. This research will also provide new directions to researchers who pursue work on COVID-19 detection.

    The rest of this study is arranged as follows. Section 2 describes existing work on COVID-19 detection techniques using machine learning or deep learning algorithms. Section 3 presents the proposed technique and research methodology. Section 4 presents the experimental results. Section 5 presents a theoretical analysis of the obtained results in the form of a discussion. Finally, Section 6 concludes the research.

    2 Related Work

    In this section, we discuss the algorithms and methods developed by researchers for COVID-19 detection. Several researchers have employed machine learning and deep learning to detect COVID-19 from various modalities, including medical images and audio signals.

    Image-Based Literature: Immense work has been done on radiology images for infectious COVID-19 disease diagnosis using artificial intelligence techniques. A study by [24] proposed a novel COVIDX-Net framework for automatic identification or confirmation of COVID-19 from X-ray images using seven different Convolutional Neural Network (CNN) models, namely DenseNet121, VGG19, ResNetV2, Xception, MobileNetV2, InceptionV3, and InceptionResNetV2. The experimental results revealed that the proposed COVIDX-Net achieved the best performance with the DenseNet121 and VGG19 DL models. Similarly, [14] introduced COVID-Net, based on a deep CNN, for COVID-19 detection from open-source and publicly available CXR images. This study also investigated an explainability method to analyze the critical factors related to COVID-19 patients to help clinicians. The proposed COVID-Net framework was evaluated on the COVID test data and obtained 92.4% accuracy.

    To handle the availability issue of COVID-19 test kits, [25] proposed a quick alternative for the diagnosis of COVID-19 by implementing an automatic detection system. The authors employed five pre-trained CNN models (InceptionV3, ResNet50, ResNet101, ResNet152, and Inception-ResNetV2) to detect COVID-19-infected patients using X-ray radiographs. The model was implemented with four classes (normal, bacterial pneumonia, viral pneumonia, and COVID-19) using 5-fold cross-validation. The performance results show that the ResNet50 model outperformed the other four models, achieving 96.1%, 99.5%, and 99.7% accuracy on three different datasets.

    Audio Signals Literature: Besides using X-ray images, various researchers have been working for years on the use of respiratory sound (audio) to diagnose illness and recognize sound patterns. COVID-19 detection using deep learning has achieved excellent results on medical images. However, the biggest challenge with medical image modalities is that a patient with a positive RT-PCR result may not yet show a chest infection at the time of admission, because chest infection may occur only one or two days after the onset of symptoms [26].

    A brief summary of the audio-based literature is shown in Tab. 1. Authors have worked on multi-modalities to classify COVID-19 and healthy patients [20,22,24,27], but some limitations remain. First, the authors did not apply feature fusion techniques to make their models generalized; a generalized model built on fused features is more reliable, stable, and accurate. Second, the authors of [22,24] used the Random Forest (RF) classifier and achieved 66.74% and 87.5% accuracy, respectively. However, the RF classifier is slower than an LSTM, and classification performance can slow down when the RF contains a large number of trees. In a previous study, the authors of [16] used a CNN model on a crowdsourced dataset with two modalities, cough and breath, and achieved an AUC of 80.46%. Moreover, using the same dataset, the authors of [28] applied VGGish algorithms and obtained an 80% Area Under the Curve (AUC). In addition, the authors of [17,22,24] did not address dataset imbalance.

    Table 1: Audio-based literature

    3 Proposed MMFF Technique

    This section presents in detail the philosophy behind the construction of our proposed MMFF technique. To summarize, a publicly available multi-modality dataset was used for the detection of COVID-19. Afterward, several speech preprocessing techniques were employed to remove non-discriminative information from the audio signals. Subsequently, several discriminative features were extracted from each preprocessed modality to generate a master feature vector (MFV). Finally, the generated MFV was fed as input to the Long Short-Term Memory (LSTM) recurrent neural network algorithm to construct the COVID-19 classification model. The dataset we used for the experiments was imbalanced; thus, audio augmentation techniques were employed to overcome the class imbalance problem. The details are available in the subsequent sections.

    3.1 Data Collection

    COSWARA is a publicly available dataset [18] released by the Indian Institute of Science (IISc) Bangalore. This dataset contains audio files of breathing, cough, and speech sounds of normal subjects as well as COVID-19 patients. The samples were collected from all regions of the world except Africa, as shown in Fig. 1. Recordings were collected for four sound categories: cough sounds (shallow and deep), breathing sounds (slow and fast), vowel sounds (a, e, and o), and counting from 1 to 20 (at a slow and a fast pace). There are 1400 patients' records: 97 belong to COVID-19 patients while the rest belong to healthy subjects. All the sounds have a uniform sampling rate of 44.1 kHz; some audio files contain noise. Besides these sound modalities, the COSWARA dataset also contains 26 more features of healthy as well as COVID-19 patients, including the patient's current health status, age, country, smoker or non-smoker status, and others. Fig. 1 shows the subjectivity analysis of each age group for female and male genders: group 1 indicates ages up to 21, group 2 the 26-45 age group, group 3 the 46-65 age group, and group 4 ages above 65. Additionally, it shows the percentage of each item calculated in relation to each category.

    Figure 1: Pie chart of country-wise classification and gender-wise subjectivity analysis

    3.2 Proposed Master Features Vector

    This section discusses the composition of the feature fusion vector and the extraction of time-domain and frequency-based features. As shown in Algorithm 1, all five feature sets are combined into a single set named the master feature vector (MFV). In the COSWARA dataset, each patient has a folder identified by its P_ID. Each patient folder contains nine types of audio recordings (cough deep, cough shallow, vowel a, vowel e, vowel o, counting fast, and so on). An MFV is built for each of the nine audios of a patient. The five features are discussed in the following paragraphs.

    Zero-Crossing Rate (ZCR) features were first extracted from the patient audio. The zero-crossing rate measures how often a signal crosses zero within a given time frame, that is, the rate of sign changes along the signal (for example, from positive to negative or negative to positive). Zero crossing is an important feature in audio processing because it helps differentiate percussive from pitched audio signals: percussive audio has a random zero-crossing rate across a buffer, whereas pitched audio has a more constant value.
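    A minimal NumPy sketch (an illustration, not the paper's implementation) shows this property: a pitched tone yields a low, steady zero-crossing rate, while noise changes sign at almost every other sample.

```python
import numpy as np

def zero_crossing_rate(frame: np.ndarray) -> float:
    """Fraction of adjacent sample pairs whose signs differ."""
    signs = np.signbit(frame)
    return float(np.mean(signs[:-1] != signs[1:]))

# A low-frequency tone crosses zero only a handful of times per second,
# while white noise crosses roughly half the time.
t = np.linspace(0, 1, 8000, endpoint=False)
tone = np.sin(2 * np.pi * 4 * t)                 # 4 Hz sine
noise = np.random.default_rng(0).standard_normal(8000)
print(zero_crossing_rate(tone))   # close to 0
print(zero_crossing_rate(noise))  # close to 0.5
```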

    Root Mean Square (RMS) features were extracted from the patient audio. RMS is computed from a short-time Fourier transform and provides the loudness of the audio. This feature is important because it gives a rough idea of the loudness of an audio signal.

    Spectral Centroid (SCD) indicates where the center of mass of the spectrum is located and is calculated as the magnitude-weighted mean of the frequencies present in the signal. If the frequencies of an audio signal are distributed evenly across frames, the spectral centroid lies near the center of the spectrum; if high frequencies dominate at the end of the audio signal, the spectral centroid moves toward the end.

    Spectral Roll-Off (SRO) indicates the frequency below which a specified percentage of the total spectral energy lies. The roll-off describes how the frequency content is cut off, measured in dB per octave, where an octave is a doubling of frequency. The spectral roll-off is used to differentiate harmonic from noisy audio.

    Mel Frequency Cepstral Coefficients (MFCC): Humans do not perceive pitch linearly; various frequency scales have been formulated to denote the way humans hear the distances between pitches. The Mel scale is one of them: a Mel is a unit of measure based on humans' perceived frequency. MFCCs are computed through the following sub-processes: framing the audio, windowing, the discrete Fourier transform (DFT), taking the logarithm of the magnitude, warping the frequencies onto the Mel scale, and finally the discrete cosine transform (DCT).

    3.3 Multi-Modality Fusion

    Various researchers have employed machine learning and DL techniques to detect COVID-19 through a single modality, including X-ray images, patient voice, breathing, and cough sounds. However, there is no single prominent symptom found in COVID-19 patients; it may be dry cough, breathing problems, sore throat, or fever. Therefore, we should not rely on a single modality to detect COVID-19. For instance, a patient may be COVID-19 positive but have no cough symptom; similarly, it is not necessary that a patient has a breathing problem and other symptoms at the same time. Many researchers have shown that a single modality in the field of medicine is sometimes ineffective for differentiating the complex details of a disease [23] and have strongly suggested using multi-modality for better results. Keeping those suggestions in view, we have proposed an effective multi-modality and feature fusion-based COVID-19 detection technique through LSTM, in which we combine three modalities: voice, cough, and breath.

    3.4 Balanced Sampling and Data Augmentation

    Data augmentation is a technique that creates new training samples from existing training data to increase the quantity and diversity of a dataset. This technique has been proven to effectively alleviate model overfitting [29]. Data augmentation not only improves overall performance but also makes the data distribution more invariant, which leads to variance reduction [30]. Standard signal augmentation methods were applied to the raw audio signals, with the best parameter selection as given by Nanni et al. [31]. These are Gaussian noise, time stretch, pitch shift, and changing the speed. The details of each are given below:

    Gaussian Noise (GN) is added to the raw audio samples with a variance between 0 and 1. Gaussian noise generates a new raw audio sample while preserving the character of the original. It is very important to choose the right hyperparameter for the noise amplitude, denoted σ: a large σ makes the classifier difficult to learn, while a small σ barely disturbs the signal. In this proposed methodology, σ was selected from [0.004, 0.005] following a uniform distribution.

    Time Stretch (TS) is an audio augmentation technique that changes the tempo and length of an audio clip without changing its pitch. A stretch rate is applied to the original audio to generate the new audio. The rate lies in [0.18, 1.25], and its selection is a bit tricky: if the rate is > 1, the audio signal speeds up, and if it is < 1, the signal slows down and the length of the original clip increases. In this proposed methodology, the rate was selected from [0.5, 0.8] following a uniform distribution.

    Pitch Shift (PS) is a technique that generates a new sound without changing the tempo by shifting the pitch of the waveform by n_steps semitones. Pitch shift is the reciprocal of time stretch. The value of n_steps should be between [-4, 4]. In this proposed methodology, n_steps was selected from [-4, -2] following a uniform distribution.

    Changing Speed (CS) is similar to changing the pitch, but here the time series is stretched by a fixed rate. In this proposed methodology, the rate was selected from [0.5, 0.8] following a uniform distribution.

    Algorithm 1: Algorithm to Construct Master Feature Vector
    Input: a path to the main folder of the COSWARA dataset
    Output: master feature vector comprising 5 features of all audios
    1  folders ← count the total number of subfolders in COSWARA folders/P_ID
    2  Initialize variables: assign zero to variables i and j
    3  while (i < folders) do            // read all the subfolders in the main folder
    4    audio_files ← count the total number of wav files in the subfolder
    5    while (j < audio_files) do     // read each wav file one by one to extract features
    6      compute zero-crossing rate from audio j using librosa.feature.zero_crossing_rate
    7      compute root mean square from audio j using librosa.feature.rms
    8      compute spectral centroid from audio j using librosa.feature.spectral_centroid
    9      compute spectral roll-off from audio j using librosa.feature.spectral_rolloff
    10     compute MFCC from audio j using librosa.feature.mfcc
    11     convert each computed feature into a single-column matrix
    12     concatenate these single columns with the previous audio files' master vector
    13     MFV = zero-crossing rate + RMS + spectral centroid + spectral roll-off + MFCC
    14   end
    15 end

    Signal Speed (SS): in this augmentation technique, the signal is rolled off by a speedup factor in the range [0.8, 1.2]. We applied the roll-off at factors of 0.8 and 1.1 to the audio signals.

    Imbalanced class problems are found in many classification tasks [27]. A class imbalance problem occurs when the number of instances of one or more classes is considerably greater than that of another class [32]. In the COSWARA dataset, 6% of the records are COVID-19 patients and 94% are healthy subjects. For such a severely imbalanced dataset, up-sampling or down-sampling is difficult to apply, because up-sampling introduces uncertainty while down-sampling wastes a large portion of the data [33]. In the proposed methodology, we alleviate the class imbalance by using data augmentation to create diversity in the dataset and balance it. We had 80 COVID-19 patients' recordings; applying 5 different augmentations, each with two parameter settings, generated 2 × 5 × 80 = 800 new augmented sounds.

    3.5 Long Short-Term Memory

    The Long Short-Term Memory [34] architecture consists of three main gates, namely the input, forget, and output gates [35]. The hidden state is computed using these three gates: Eq. (1) represents the network's input gate, Eq. (2) the candidate memory cell, Eq. (3) the activation of the forget gate, Eq. (4) the calculation of the new memory cell's value, and Eqs. (5) and (6) the network's final output. Moreover, b denotes a bias vector, W a weight matrix, and x_t the input to the memory cell at time t, whereas i, c, f, and o indicate the input, cell memory, forget, and output gates respectively, as shown in Fig. 2.
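    The bodies of Eqs. (1)-(6) did not survive extraction. For reference, the standard LSTM formulation that the description above corresponds to (with x_t the input at time t, h_t the hidden state, and ⊙ denoting elementwise multiplication) is:

```latex
\begin{align}
i_t &= \sigma\left(W_i\,[h_{t-1}, x_t] + b_i\right) \tag{1}\\
\tilde{c}_t &= \tanh\left(W_c\,[h_{t-1}, x_t] + b_c\right) \tag{2}\\
f_t &= \sigma\left(W_f\,[h_{t-1}, x_t] + b_f\right) \tag{3}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \tag{4}\\
o_t &= \sigma\left(W_o\,[h_{t-1}, x_t] + b_o\right) \tag{5}\\
h_t &= o_t \odot \tanh(c_t) \tag{6}
\end{align}
```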

    Figure 2:LSTM network

    LSTM is known for learning long-term dependencies in sequences. It can retain information over a long period and decide which information to keep or discard; it has been shown that LSTM performs better than vanilla RNNs and CNNs on such tasks [10]. The LSTM network controls the flow of sequences through a gated mechanism organized in cells; a single LSTM cell is shown in Fig. 2. LSTM is known for its cell state, whose main purpose is to build a bridge that lets information flow between the gates. In Fig. 2, Ct represents the new cell state and Ct-1 the old cell state; the path between the two is drawn as a horizontal line along which the cells interact. The gates control the flow of information: the pink circles represent pointwise multiplication operators and the yellow boxes represent sigmoid functions. The sigmoid outputs values between 0 and 1, where 1 means everything passes through and 0 means nothing does. The forget gate checks which information should be excluded and which should be included; the decision is made by a sigmoid function applied to the previous output and the current input. The input structure consists of two neural network layers, sigmoid and tanh: the sigmoid layer decides which values need to be updated, while the tanh layer creates a vector of new candidate values. Once the new candidate has been created, the cell state is updated by combining the forget-gated previous state with the input-gated candidate, as shown in Eqs. (4) and (5). Finally, the output of the sequence is calculated via the output gate: a sigmoid layer decides which part of the sequence to send to the output layer, the tanh layer maps the new cell state into (-1, 1), and the sigmoid value is multiplied by the output of the selected information.
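    The gate interactions described above can be made concrete with a small NumPy sketch of a single LSTM step. This is an illustrative toy with random weights, not the paper's trained network; the stacked [i, f, o, g] weight layout is our choice.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step. W has shape (4H, D+H), b has shape (4H,);
    the four gate pre-activations are stacked as [i, f, o, g]."""
    H = h_prev.shape[0]
    z = W @ np.concatenate([x_t, h_prev]) + b
    i = sigmoid(z[:H])            # input gate: how much of the candidate to write
    f = sigmoid(z[H:2 * H])       # forget gate: how much of c_prev to keep
    o = sigmoid(z[2 * H:3 * H])   # output gate: how much of the state to expose
    g = np.tanh(z[3 * H:])        # candidate cell values in (-1, 1)
    c_t = f * c_prev + i * g      # new cell state (Eq. 4)
    h_t = o * np.tanh(c_t)        # new hidden state / output (Eq. 6)
    return h_t, c_t

rng = np.random.default_rng(0)
D, H = 3, 4                       # toy input and hidden sizes
W = rng.standard_normal((4 * H, D + H)) * 0.1
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for x in rng.standard_normal((5, D)):  # run a length-5 input sequence
    h, c = lstm_step(x, h, c, W, b)
print(h.shape, c.shape)
```

    Because h is an output-gated tanh of the cell state, every component of h stays strictly inside (-1, 1) no matter how long the sequence runs.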

    Recently, many researchers have employed LSTM for audio classification and emotion detection [10] and have reported LSTM's strong performance in audio processing. A customized LSTM architecture was designed to classify COVID-19 and non-COVID patients. The architecture consists of 1 input layer, 4 hidden layers, and 1 output layer, as shown in Tab. 2. We tried different configuration settings to obtain the best performance from the LSTM: RMSprop, Adam, and SGD optimizers were employed with different dropout and batch-size settings. The best-fit setting used the Adam optimizer with a 0.5 dropout value and a batch size of 216. The input layer has 512 neurons, and each hidden layer has half as many neurons as the previous layer's output. To reduce overfitting, a dropout layer was added to the network at layer 3; many researchers have successfully employed the dropout technique to overcome overfitting in LSTMs. The neuron selection process must be done carefully, because too few neurons may cause underfitting, whereas too many may cause overfitting [10]. Each hidden layer uses the Rectified Linear Unit (ReLU) activation function, which passes positive inputs through unchanged and outputs zero otherwise. Finally, the output layer applies the sigmoid function and has a single neuron, since the number of output neurons depends on the number of classes. In total, we applied four dropouts to overcome the overfitting issue.

    3.6 Evaluation Matrices

    The performance of each COVID-19 detection model was evaluated using different evaluation metrics: accuracy (training, validation, and testing), F1-score, precision, and recall. For each sound class, detection was measured against the labels, and the numbers of false positives (FP), true positives (TP), false negatives (FN), and true negatives (TN) were computed from the confusion matrix of each prediction. These evaluation metrics have been widely employed for the evaluation of various disease detection and classification systems [10]. Precision is the ratio of correctly predicted labels for a specific class to all predicted labels of that class; it evaluates how well the proposed models detect the actual respiratory sound. Recall is the ratio of correctly predicted labels for a specific class to the actual labels of that class; it measures how many positive instances are accurately detected. Accuracy computes the proportion of accurately detected respiratory sound classes out of the total number of sound signals. F1-score computes the weighted harmonic mean, a balanced ratio, of recall and precision.
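    All four metrics follow directly from the confusion-matrix counts. A short sketch (the counts below are illustrative only, not the paper's results):

```python
def metrics_from_confusion(tp: int, fp: int, fn: int, tn: int):
    """Precision, recall, accuracy and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp)                      # correct positives / predicted positives
    recall = tp / (tp + fn)                         # correct positives / actual positives
    accuracy = (tp + tn) / (tp + fp + fn + tn)      # all correct / all samples
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of P and R
    return precision, recall, accuracy, f1

# Hypothetical counts for illustration only.
p, r, a, f1 = metrics_from_confusion(tp=80, fp=20, fn=2, tn=98)
print(round(p, 3), round(r, 3), round(a, 3), round(f1, 3))  # 0.8 0.976 0.89 0.879
```

    Note that on an imbalanced dataset such as COSWARA, precision, recall, and F1 are more informative than accuracy alone.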

    4 Results

    This section is dedicated to the results and discussion of the experiments performed to evaluate the performance of our proposed MMFFT technique. First, we evaluated the performance of the individual modalities of the COSWARA dataset, in order to compare single-modality experimental results with multi-modality results. Second, our proposed MMFFT was evaluated using multi-modality and feature fusion with the LSTM classifier. Third, we compared the performance of our proposed MMFFT technique with augmentation against raw data. Finally, we compared our proposed MMFFT with four existing baseline techniques [16,18,22,28]; the objective of this comparison was to measure the accuracy of MMFFT against the baselines. The experiments were performed on Windows 10 with a GPU (GeForce MX130), using the Python language.

    4.1 Individual COSWARA Modality Result

    In this setting, from each of the nine modalities, we extracted five different types of features to prepare an MFV. The constructed MFV of each individual modality was then fed as input to the LSTM to evaluate its results. The results of each modality are shown in Tab. 4. As shown in the table, we achieved accuracies between 89% and 97% on the training set, 88% and 93% on the validation set, and 88% and 96% on the test set.

    The training accuracies observed for individual modalities were between 92% and 93%. The highest validation accuracy (93%) was observed on the CHCM dataset, followed by the CSCM, CFCM, and BSCM datasets (92%). The lowest validation accuracy (89%) was observed on the VACM dataset. The highest test set accuracy (94%) was observed on CHCM, followed by the CNCM and VECM datasets (92%). The lowest test set accuracy (88%) was observed on VFCM and VOCM. We also report the confusion matrices of the test set analyses in Fig. 3 to show the true positive, true negative, false positive, and false negative values of each analysis. As can be seen from Fig. 3, most of the instances were classified correctly into their respective classes. Furthermore, as can be seen from Tab. 2, the single-modality models fit just right: they were neither overfitted nor underfitted. However, the results for many modalities still need further improvement. Therefore, to further improve the results, we propose the fusion of multi-modalities in the construction of the classification model in Section 4.2.

    Figure 3:Confusion matrix of each modality

    Table 2: LSTM configuration

    4.2 Multi-Modality Result

    This section presents the multi-modality results, shown in Tab. 3, used to evaluate the performance of our proposed MMFFT technique. In this setting, we first prepared nine MFVs (one from each modality) and then combined all nine MFVs into one super MFV. Finally, we fed this super MFV as an input to the LSTM algorithm to evaluate the performance of our proposed MMFFT technique.
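The fusion step described above is a concatenation of the nine per-modality MFVs into one super MFV. A minimal sketch, where the per-modality feature dimension is hypothetical:

```python
import numpy as np

# Hypothetical per-modality MFVs: nine modalities, each reduced to a
# flat feature vector (dimension of 5 chosen only for illustration).
rng = np.random.default_rng(0)
modality_mfvs = [rng.normal(size=5) for _ in range(9)]

# Feature fusion: concatenate all nine MFVs into one super MFV,
# which is then fed to the LSTM classifier.
super_mfv = np.concatenate(modality_mfvs)
```

The order of concatenation must be fixed across samples so that each position in the super MFV always corresponds to the same modality and feature.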

    Table 3: Performance of combined modalities using LSTM

    We ran 100 epochs to train our proposed LSTM model. However, the highest training accuracy was observed after 12 epochs, beyond which the model started to overfit. The fusion of all nine modalities showed better results than any single modality: the multi-modality experiments achieved 94% testing accuracy and a 96.5% F1-score. Fig. 4 shows the confusion matrix of our proposed COVID-19 classification for multi-modality. As can be seen from this figure, most of the instances were correctly classified into their respective classes, except six instances of healthy patients that were classified as COVID-19 patients by our proposed MMFFT-based model.
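The observation that accuracy peaked at epoch 12 and then degraded is the classic trigger for early stopping. A minimal sketch of the monitoring logic (the training loop itself and the patience value of 3 are assumptions, not taken from the paper):

```python
def best_stopping_epoch(val_acc_history, patience=3):
    """Return the 1-indexed epoch whose weights should be kept:
    stop once validation accuracy has not improved for `patience`
    consecutive epochs, keeping the best epoch seen so far."""
    best_epoch, best_acc, waited = 0, float("-inf"), 0
    for epoch, acc in enumerate(val_acc_history, start=1):
        if acc > best_acc:
            best_epoch, best_acc, waited = epoch, acc, 0
        else:
            waited += 1
            if waited >= patience:
                break
    return best_epoch

# Synthetic history: accuracy peaks at epoch 4 and then degrades.
history = [0.80, 0.85, 0.90, 0.93, 0.92, 0.91, 0.90]
```

In a framework such as Keras the same behaviour is provided by an early-stopping callback with best-weight restoration, rather than a hand-rolled loop.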

    Figure 4:Confusion matrix for COVID-19 detection with and without data augmentation

    4.3 Multi-Modality with Data Augmentation

    This section presents the results for multi-modality with data augmentation, shown in Tab. 3. The COSWARA dataset is imbalanced in nature: the instances of the COVID-19 class are far fewer in number than those of the healthy class. We employed five different augmentation techniques to increase the number of minority-class instances, as discussed in Section 3.5. After performing the audio augmentation, we prepared nine MFVs (one from each modality) and then combined all nine MFVs into one super MFV. Finally, we fed this super MFV as an input to the LSTM to evaluate the performance of our proposed MMFFT technique with audio augmentation. We ran 100 epochs to train our proposed model with audio augmentation using the LSTM. The fusion of all nine modalities with audio augmentation improved performance by 2% (96% testing accuracy) compared to experimental setting-II (94% testing accuracy). Fig. 4 shows the confusion matrix of experimental setting-III. As can be seen, 20 instances were false positives, 2 were false negatives, and the rest of the instances were classified correctly into their respective classes. This shows that audio augmentation yielded marginally higher results compared to multi-modality alone.
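The minority-class augmentation described above can be illustrated with waveform-level transforms. The three below (noise injection, time shift, gain scaling) are common audio-augmentation choices shown only as representative examples; the paper's actual five techniques are specified in Section 3.5:

```python
import numpy as np

rng = np.random.default_rng(42)

def add_noise(signal, scale=0.005):
    """Inject low-amplitude Gaussian noise into the waveform."""
    return signal + scale * rng.standard_normal(signal.shape)

def time_shift(signal, max_frac=0.1):
    """Circularly shift the waveform by up to max_frac of its length."""
    shift = int(rng.integers(1, int(len(signal) * max_frac)))
    return np.roll(signal, shift)

def gain_scale(signal, low=0.8, high=1.2):
    """Randomly scale the amplitude (loudness perturbation)."""
    return signal * rng.uniform(low, high)

# Generate augmented copies of one minority-class (COVID-19) sample.
sample = np.sin(np.linspace(0.0, 20.0 * np.pi, 4000))
augmented = [f(sample) for f in (add_noise, time_shift, gain_scale)]
```

Each transform preserves the sample length, so augmented waveforms flow through the same feature-extraction pipeline as the originals.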

    Fig. 4 shows the confusion matrix of our proposed COVID-19 classification for multi-modality with augmentation. In this setting, for the COVID-19 class, 264 instances were correctly classified whereas 2 instances were misclassified. For the non-COVID class, 341 instances were classified correctly and 20 were misclassified. A possible reason for the misclassifications is that the data of the two classes are similar or their sequences closely resemble each other.
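The confusion-matrix counts reported above (264 true positives, 2 false negatives, 341 true negatives, 20 false positives, with COVID-19 as the positive class) can be turned into the headline metrics directly, as a quick check that they reproduce the ~96% accuracy and F1-score:

```python
# Confusion-matrix counts from the augmented multi-modality setting.
tp, fn = 264, 2   # COVID-19 instances: correctly detected / missed
tn, fp = 341, 20  # healthy instances: correct / flagged as COVID-19

accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
```

With these counts the model misses almost no COVID-19 cases (recall ≈ 0.99) at the cost of 20 false alarms among healthy subjects.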

    4.4 Comparing the Result of Proposed MMFFT with Baselines Methods

    To show the effectiveness of our proposed COVID-19 classification coupled with LSTM, we compared the performance of our proposed MMFFT technique with four baseline techniques recently published in the literature. The detailed results are shown in Tab. 4. The authors of [18] reported 66.74% accuracy on the test set. These results were further improved in [22], where the authors applied feature fusion and a Random Forest classifier and obtained 87.5% classification accuracy. In [16] and [28], the authors used a crowdsourced dataset with cough and breath modalities and fed the features of these modalities to CNN and VGGish algorithms, obtaining AUCs of 80.46% and 80%, respectively. As can be seen from Tab. 4, our proposed MMFFT technique using LSTM outperformed the baseline techniques, with a 17% improvement in accuracy.

    Table 4: Performance of each modality with feature fusion and LSTM

    5 Discussion

    This section presents the analysis and significant results of the proposed MMFFT classification model using the COSWARA dataset. The proposed MMFFT technique obtained reliable and improved classification performance. A rigorous experimental evaluation on the complex, challenging, and publicly available standard COSWARA dataset showed that the MMFFT technique is less complex, more accurate, more reliable, and more effective than the existing baseline techniques. Several studies have proposed COVID-19 detection models to classify COVID-19 and healthy patients and claim high accuracy. However, such baseline models suffer from four major limitations. First, their datasets are highly imbalanced; thus, the results reported in the baseline studies may not be applicable on a wider scale. Second, the researchers used only one modality, such as the patient sound modality or the cough modality. Third, the overall accuracy of these models is low. Finally, most of the classification models employ traditional ML approaches. This section critically analyzes the obtained results and justifies why our proposed MMFFT technique outperformed both the single-modality datasets and the existing baseline techniques. In addition, this section discusses how data augmentation affected the results. Finally, it provides an error analysis of the instances misclassified by our proposed MMFFT technique.

    To overcome the issues of the baseline classification models, we proposed the MMFFT technique to classify COVID-19 and healthy patients using the publicly available COSWARA sound modalities dataset. The experimental results showed that the proposed MMFFT technique is effective at classifying COVID-19 and healthy patients. Furthermore, the experimental results showed that the proposed multi-modality and feature fusion-based technique is more effective than single-modality datasets. In addition, the experimental results showed that our proposed MMFFT technique outperformed the four existing baseline techniques by achieving 96% accuracy. Finally, because the COSWARA dataset is highly imbalanced, we applied audio augmentation to balance the dataset and to evaluate whether balanced, augmented data improve the classification results. Our experiments showed that data augmentation improves the overall classification performance.

    Our findings show that a single-modality dataset can classify COVID-19 and healthy patients with 88% to 96% accuracy. However, such a single-modality dataset can show an error of 4% to 12%. A possible reason for this error is the inability of the features to produce discriminative and representative patterns for COVID-19 and healthy patients. The single-modality models were well fitted, being neither overfitted nor underfitted; however, the results of many modalities still needed further improvement. COVID-19 symptoms vary from patient to patient [36]: 45% of COVID-19 patients have breathlessness symptoms, 14% have severe respiratory dysfunction, and 41.7% have voice, swallowing, and laryngeal sensitivity issues. Therefore, we should not rely on a single modality to detect COVID-19.

    Several recent studies have shown that multi-modality and feature fusion yield promising results in the field of medicine, and most of them suggest utilizing multi-modality techniques to obtain robust results. Therefore, to further improve the classification results obtained from the single-modality feature fusion-based technique, we applied the MMFFT technique with multi-modality, which showed promising results compared to the single-modality results. The number of misclassified instances was reduced significantly, with an error of only 4%. A possible reason for the improved performance is that multi-modality with feature fusion increases diversity and fuses diverse information, improving reliability, robustness, and generalization. Although the proposed MMFFT technique performed better than any single modality, we noticed that the COSWARA dataset was imbalanced; the main cause of the remaining non-significant performance differences may be the highly imbalanced class problem [37].

    As per our hypothesis, the proposed MMFFT technique performed better than any single modality. Moreover, the COSWARA dataset is imbalanced, and previous studies have reported that augmentation techniques can improve overall classification accuracy. Therefore, we used audio augmentation techniques. The fusion of all nine modalities fed to the LSTM with audio augmentation improved performance by 2% (96% testing accuracy) compared to experimental setting-II (94% testing accuracy); precision improved by 1%, recall by 2%, and the F1-score by 2%. A possible reason for the improved performance is that multi-modality with feature fusion increases diversity and fuses diverse information, improving reliability, robustness, and generalization. We intend to further improve the overall performance of MMFFT by using audio Generative Adversarial Network (GAN) techniques to generate COVID-19 audio samples and balance the dataset.

    Our experimental results showed that the proposed MMFFT technique outperformed the four baseline techniques [16,18,22,28]. In these baseline studies, the authors worked on multi-modalities to classify COVID-19 and healthy patients, but some limitations remain. First, the authors did not apply feature fusion techniques to make their models generalized; a generalized model employing feature fusion is more reliable, stable, and accurate. Second, the authors of [18,22] used a Random Forest (RF) classifier and achieved 66.74% and 87.5% accuracy, respectively. However, the RF classifier is slower than the LSTM, and its classification can slow down when the forest contains a large number of trees. In a previous study, the authors of [16] used a CNN model on a crowdsourced dataset with two modalities, cough and breath, and achieved an AUC of 80.46%. Moreover, using the same dataset, the authors of [28] applied the VGGish algorithm and obtained an AUC of 80%. In addition, the authors did not address dataset imbalance [22].

    To overcome these limitations, we employed the LSTM algorithm on the COSWARA dataset. In this study, we applied feature fusion on multi-modalities and improved accuracy by 17% compared to the previous studies mentioned above. Our experimental results showed a high correlation between the MFV and classification accuracy. In addition, the proposed technique significantly improves overall performance on multi-modalities with augmentation techniques. Settings II and III in Tab. 3 show that our proposed MMFFT technique can be used as a secondary tool to classify healthy as well as COVID-19 patients without violating the social distancing rule.

    The clinical significance of our study is that the proposed MMFFT methodology can enable a multi-modality-based technology solution for point-of-care detection of COVID-19, resulting in quick detection. This method provides COVID-19 detection results easily, within 2-3 min, without violating social distancing. The proposed model can be integrated into an Android app to detect COVID-19 within minutes, so that anyone across the world could use such an app and benefit from the technology. Moreover, this research provides new directions for researchers pursuing work on COVID-19 detection.

    The major limitation of the proposed MMFFT is that we balanced the dataset using data augmentation to avoid bias. However, data augmentation generates synthetic data, which may not be contextually realistic. Therefore, the COSWARA dataset alone was not sufficient for COVID-19 patients.

    6 Conclusion and Future Directions

    This study proposed an effective MMFFT technique to classify healthy and COVID-19 patients from multi-modality audio files using the COSWARA dataset. In multi-modality, we used nine different modalities, and from each modality five different features were extracted. These features were then fused to create a super MFV, which was fed as an input to the LSTM algorithm for classification. Our experimental results showed that our proposed technique achieved an accuracy of 96% with the LSTM classifier, improving on the four baseline techniques by 17%-20%. Furthermore, the dataset we used for the experiments was highly imbalanced; thus, we employed audio augmentation techniques to overcome the class imbalance issue. We evaluated our proposed technique on both balanced and imbalanced data and found that augmentation improved the overall performance. Our promising results show that the proposed MMFFT technique can be utilized as a secondary tool for classifying healthy as well as COVID-19 patients without violating the social distancing rule. Moreover, it can be adopted in many other application areas of audio processing and classification, including sentiment analysis, gender classification, and speaker identification. In future work, we will design an automatic COVID-19 diagnosis tool from spectrograms using CycleGAN and transfer learning.

    Funding Statement:This research has been financially supported by University Malaysia Sabah.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
