
    Early-Stage Segmentation and Characterization of Brain Tumor

Computers Materials & Continua, 2022, Issue 10

Syed Nauyan Rashid, Muhammad Hanif, Usman Habib, Akhtar Khalil, Omair Inam and Hafeez Ur Rehman

1Department of Computer Science, National University of Computer and Emerging Sciences Islamabad, Peshawar Campus, Pakistan

2Faculty of Computer Science and Engineering, Ghulam Ishaq Khan (GIK) Institute of Engineering Sciences and Technology, Topi, Pakistan

3IFahja Pvt Limited, Peshawar, Pakistan

4Department of Electrical and Computer Engineering, COMSATS University Islamabad, Pakistan

Abstract: Gliomas are the most aggressive brain tumors, caused by the abnormal growth of brain tissues. The life expectancy of patients diagnosed with gliomas decreases sharply; most gliomas are diagnosed at later stages, and on average patients do not survive 14 months after diagnosis. The only way to minimize the impact of this disease is early diagnosis. Magnetic Resonance Imaging (MRI) scans, because of their better tissue contrast, are the most frequently used modality for assessing brain tissues. The manual classification of MRI scans takes a considerable amount of time, and dealing with MRI scans manually is also cumbersome, which affects classification accuracy. To address this problem, researchers have proposed automatic and semi-automatic methods that automate the brain tumor classification task. Although many techniques have been devised, existing methods still struggle to characterize the enhancing region, because its low variance gives poor contrast in MRI scans. In this study, we propose a novel deep learning based method consisting of a series of steps, namely: data pre-processing, patch extraction, patch pre-processing, and a deep learning model with tuned hyper-parameters, to classify all types of glioma regions with a focus on the enhancing region. Our trained model achieved better results for all glioma classes, including the enhancing region. The improved performance of our technique can be attributed to several factors. Firstly, the non-local means filter in the pre-processing step improves image detail while removing irrelevant noise. Secondly, the architecture we employ can capture the non-linearity of all classes, including the enhancing region. Overall, the segmentation scores achieved on the Dice Similarity Coefficient (DSC) metric for the normal, necrosis, edema, enhancing, and non-enhancing tumor classes are 0.95, 0.97, 0.91, 0.93, and 0.95, respectively.

Keywords: Segmentation; CNN; characterization; brain tumor; MRI

    1 Introduction

Tumors are mainly caused by the excessive and abnormal growth of tissues; this abnormal growth later takes the shape of a mass. Similarly, brain tumors are caused by the abnormal growth of brain tissues [1]. Gliomas are types of brain tumors that start in the brain and eventually spread towards the spinal cord. They are the most aggressive kind of brain tumors [2], resulting in many deaths worldwide. The common types of gliomas are astrocytomas, brain stem gliomas, ependymomas, mixed gliomas, oligodendrogliomas, and optic pathway gliomas. Gliomas can be graded into High-Grade Gliomas (HGG) and Low-Grade Gliomas (LGG), with HGG being extremely aggressive and infiltrative, whereas LGG are less aggressive and non-infiltrative [2,3]. Common symptoms of gliomas are headaches, seizures, personality changes, weakness in the arms, numbness in the face or legs, problems with speech, nausea, vomiting, vision loss, and dizziness. Gliomas are diagnosed through medical history and examination, brain scans (MRI and CT), and biopsy. Brain tumors are life-threatening diseases; on average, patients only survive 14 months after diagnosis [4]. The main reason for patient deaths is late diagnosis. The aim of this research is to save as many lives as possible by developing an automatic classification method that produces timely results with high accuracy.

This research will help doctors plan the treatment of patients without waiting long for MRI scans to be verified by oncologists regarding tumor localization. It will also help surgeons check the location of tumors instantly during surgery before removing them. Follow-up care for cancer can be done by checking whether the tumor has regrown after surgery, without waiting for MRI scans to be assessed by oncologists. In a nutshell, this research is going to help save the lives of patients through early diagnosis, so that proper treatment can be started in the early stages of cancer and mortality rates can be reduced. MRI (Magnetic Resonance Imaging) scans are mostly used for the diagnosis of brain tumors as they provide 3-dimensional views of the human brain [2]. Oncologists assess these MRI scans by manually classifying tumors, but due to the structural complexity of MRI scans, the task of identifying gliomas is very time-consuming, and manually speeding up this process usually comes at the expense of inaccurate results [2]. In order to speed up the classification process and produce highly accurate results, research communities are working to develop automatic or semi-automatic methods for the classification of gliomas [2,5]. However, there are a number of challenges in accomplishing this task due to the variable shape, size, and location of tumors. These tumors also disturb the appearance of the surrounding tissues, which makes classification very hard. In addition, MRI scans pose problems such as intensity inhomogeneity [6] and different intensities among the same sequences of scans [7]. On top of these challenges, the enhancing tumor class differs from the other classes: its contrast in MRI scans is very low because these abnormalities are in their initial growth phase, which makes them appear dark and hence harder to detect.

Over the past few years, many proposals have emerged that produce high classification results on BRATS 2013 and BRATS 2015 [5]. One of the best works is by Pereira et al. [8], in which they achieved high classification accuracy for all tumor classes except the enhancing tumor class, which lies around 77% on the small validation dataset of BRATS 2013. These results degrade further on the larger BRATS 2015 dataset. The main goal of this research work is to propose an automatic classification method that improves the classification accuracy of the enhancing tumor class.

Since winning the ImageNet challenge in 2012 [9], deep learning has solved many complex image recognition and computer vision problems. So, to overcome this problem, we employ deep learning as our classification model and use image pre-processing techniques to remove noise from MRI scans. The novelty of our work lies in the unique pre-processing applied to the MRI scans, the patch creation method, the construction of the deep learning architecture, and the selection of well-tuned hyper-parameters. We experimented with pre-processing the MRI scans and tuned the deep learning classifier until the right combination of hyper-parameters was achieved. After the hyper-parameters were tuned and the MRI scans pre-processed, we ran the experiments in a 10-fold cross-validation setting. We achieved remarkable results in terms of Dice similarity score for all classes, including the enhancing region class, when benchmarked on the BRATS 2015 dataset.

To automate the process of brain tumor classification, researchers have employed several methods. Among these, machine learning based methods stand out. These methods are supervised in nature and can be broadly classified into manual feature-based methods and automatic feature-based methods. Manual features can be extracted through generalization, transformation, or other similar techniques applied to the raw pixels to form feature vectors. Many feature extraction methods have been employed by researchers, for example: context encoding [10-12], gradients [10,13], first-order and fractal-based texture [10,12-15], physical properties [16], and brain symmetry [10,13,16]. Using these feature extraction methods, authors have employed supervised learning models for classification such as Conditional Random Fields (CRF) [10,13,17,18] and Support Vector Machines (SVM) [17,18]. The best results were produced by Random Forests (RF): Tustison et al. [16] employed RF in a two-stage segmentation method in which the output of the first classifier was given as input to the second classifier to improve classification accuracy, Geremia et al. [19] presented a hierarchical adaptive RF scaling from rough to finer texture scales, and Meier et al. [20] employed semi-supervised RF in their work. Manual feature extraction techniques combined with RF or SVM produced good classification results, but not impressive enough to be used in clinical practice [16-20]. The prime reason is that brain tissue has a highly variable and complex structure, so it is hard to produce high-quality features that would make it easy for classifiers to label data accurately. Also, some methods produced impressive results on labeled data for all tumor classes except the enhancing tumor class. This is typically because enhancing tumor cells are in the early phases of growth and mostly hard to detect. Eventually, automatic feature-based methods (i.e., the second class of methods, most prominently deep learning networks) were introduced to fill this feature engineering gap [21-23].

Deep learning is a supervised learning approach that carries out automatic feature extraction without the intervention of experts. It is also known as end-to-end learning because raw data is given as input to the model and no external intervention is required for training: the model extracts features automatically from the raw data and, on the basis of these extracted features, classifies the data into the given classes. Deep learning has been winning computer vision and image recognition challenges since 2012 [9]. Due to this breakthrough, deep learning has been employed for brain tumor classification. Since deep learning crafts features automatically, the trend has shifted towards the design of architectures instead of handcrafted features. Recently there have been many deep learning proposals in the field of brain tumor classification [24-29]. Zikic et al. [24] used a shallow CNN with standard 2D multi-channel convolutions. The CNN operated as a sliding window over the 3D space by taking a patch at each point. The BRATS 2013 dataset was used for training. For pre-processing, inhomogeneity correction was applied to each channel of the dataset; after that, the median of each channel was set to zero and images were down-sampled by a factor of 2. Stochastic gradient descent with momentum was used as the optimizer, and 2-fold validation was applied to the dataset. The inputs used for validation were also down-sampled by a factor of 2 before being given to the model for prediction. The scores reported for the complete tumor class were good, whereas for the enhancing tumor and core regions the scores were moderate; the model proposed by Zikic outperformed an RF classifier. Urban et al. [25] presented a novel CNN-based architecture that used 3D filters and took inputs in the form of 3D voxels. The 3D CNN model comprises three spatial dimensions and one channel dimension, so a convolutional layer has to deal with 4-dimensional data at a time. The network consists of multiple convolutional layers in which filters are convolved over the inputs. Gradient descent was used as the optimization function and the hyperbolic tangent function was used as the activation function. To speed up training, Urban employed GPUs. During training, the synthetic data was left out because it does not have variable intensities and has few artifacts. For pre-processing of the inputs, mean CSF was applied. The proposed pipeline ranked second in the BRATS 2014 challenge, but despite this, the DSC score for the enhancing tumor class was moderate. Davy et al. [27] proposed a pipeline employing a two-pathway CNN network whose main target was the smaller and larger context of the pixels. The model was trained on the BRATS 2013 dataset. The inputs were 2D patches of size 32×32 extracted from the axial plane. The N4ITK filter was applied only to the T1 and T1c modalities to remove intensity inhomogeneity, and zero mean and unit variance normalization was applied to each MRI modality.

Since training 3D convolutional neural networks is computationally very expensive, authors have opted to use 2D filters [26-29]. Havaei et al. [26] used a Deep Neural Network (DNN) in their method. Segmentation was done slice by slice, due to the lack of resolution in the third dimension; these 2D slices were from the axial plane, and the DNN model was trained on 2D patches. Havaei et al. proposed two DNN architectures: a two-pathway architecture and cascaded architectures. The two-pathway architecture had two streams, one with larger receptive fields and the other with smaller receptive fields; the motivation was to capture both a larger and a smaller context of visual details. The two paths were then concatenated and the output produced by a softmax layer. The cascaded architecture was employed because most CNNs do not give accurate segmentation at the boundaries between two or more classes. It concatenates the output of the first CNN with the input slices, and this concatenated stream is given as input to the second CNN. The concatenation was done in three ways: input concatenation, local pathway concatenation, and pre-output concatenation. The models were evaluated on dice, specificity, and sensitivity metrics. The best results were achieved by the input cascade CNN: classification scores for complete tumors were excellent but moderate for core and enhancing tumors. Lyksborg et al. [27] carried out their method in four steps: pre-processing, segmentation of the whole tumor, refinement of the segmentation, and segmentation of sub-regions from the whole tumor. Pre-processing was done by applying the N4 method, which overcomes the intensity inhomogeneity problem. For segmentation of the whole tumor, an ensemble of 3 convolutional neural networks was employed, each given the same inputs but in different planes, i.e., axial, coronal, and sagittal. Refinement of the segmentation was done using cellular automata, which helped to smooth the edges at the boundaries. For the segmentation of the tumor sub-regions, another ensemble of 3 CNNs was used, with the same sequence of inputs given in the axial, coronal, and sagittal planes to each CNN. The models were evaluated by dice, positive predictive value, and sensitivity metrics. The scores achieved for the whole, core, and enhancing tumors were moderate.

Rao et al. [28] used the BRATS 2015 dataset for training. The pipeline proposed by Rao employed four different CNN classifiers to train on the four MRI modalities, and the outputs of these four classifiers were concatenated and given to an RF classifier as input. The inputs were pre-processed on patches using the ITK library. The patches were prepared by a naive histogram classification algorithm which extracted Cerebral Spinal Fluid (CSF) patches; the patch size was 32×32. The activation function used in the CNN model was ReLU, and training was done with stochastic gradient descent. Dvorak et al. [30] developed a brain tumor segmentation pipeline on the BRATS 2015 dataset. The problem was divided into three sub-problems: classification of the whole tumor, the core tumor, and the enhancing tumor, each being a binary classification problem. To carry out training, Dvorak created a label dictionary used for the binary classification in all three sub-problems. The model used for classification was a CNN with convolution and pooling layers in alternating order; the filter size was 5×5, 24 convolutional filters were used, and a 2D slice was given as input to the model. For pre-processing, N4 bias field correction was applied, and image intensities were normalized using the average intensity and standard deviation. The pipeline performed very well for all tumor classes except the enhancing tumor class. For a large image dataset, the authors in [31] formalized the effect of convolutional layers on the performance of a CNN and concluded that a significant improvement over prior configurations can be achieved with a network depth of 16 to 19 weighted layers.

The work proposed in this paper is inspired by the work of Pereira et al. [8]. In their work, pre-processing was first applied to correct bias field distortion using the N4ITK method, and the intensity normalization method proposed by Nyul et al. was applied to overcome intensity inhomogeneity across MRI scans. Next, 2D patches were extracted from all four modalities and pre-processed by applying zero mean and unit variance normalization. Deeper CNN models with smaller kernel sizes were used so that more convolutional layers could be stacked without over-fitting the training data; stacking more layers increases the number of weights, so more information can be stored in the form of weights. Data augmentation was also used to avoid over-fitting. The model was evaluated by Dice Similarity Coefficient (DSC), Positive Predictive Value (PPV), and sensitivity. The results achieved were excellent and Pereira et al. won the BRATS 2013 challenge. Despite all this, the classification accuracy of the enhancing tumor region was mediocre. The prime reason for the low classification scores of the enhancing tumor class is its low contrast, as these tumors are in the early phases of growth, which makes them difficult for classifiers to identify.

    2 Proposed Methodology

Brain tumors are among the most fatal tumors and spread rapidly across brain tissues. In order to diagnose brain tumors in their early stages, in this work we propose an automatic segmentation method that helps oncologists classify brain tumors into five different classes to facilitate treatment. An overview of the proposed method is shown in Fig. 1 below. The method consists of five steps, namely: MRI pre-processing, patch creation, patch pre-processing, weights tuning, and the classification model.

    Figure 1:Proposed method for classification of enhancing tumor regions

    2.1 MRI Pre-Processing

The MRI scans usually contain inherited noise due to the bias field distortion, which causes intensity variation across tissues in an image. CNN models are noise-tolerant only up to a certain level; experiments have shown that they perform considerably better when given pre-processed data than un-processed data. The Probability Density Function (PDF) of the 2D MRI slices indicates the presence of Gaussian noise. To remove this Gaussian noise, we employ the non-local means filter [32], as it performs very well for Gaussian noise removal while preserving the edge information that is crucial for the segmentation task. Conventional local mean filters take the mean of the pixels within a window, whereas the non-local means filter averages pixels with similar intensity values within a defined search window. The mathematical form of the non-local means filter [32] is given as:

$$u(p) = \frac{1}{C(p)} \sum_{q \in \Omega} v(q)\, f(p, q), \qquad C(p) = \sum_{q \in \Omega} f(p, q)$$

where $\Omega$ is the image domain, $u(p)$ is the filtered value at point $p$ of an image, $v(q)$ is the original value at point $q$, $f(p, q)$ is the weighting function, and $C(p)$ is the normalizing factor.
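The paper does not name a specific implementation of this filter; as an illustration only, the denoising step could be applied per slice with scikit-image's denoise_nl_means, where the filter strength and window sizes below are assumptions rather than the authors' settings.

```python
# Illustrative denoising of one 2D MRI slice with a non-local means filter.
# scikit-image is used here as an example implementation; patch_size,
# patch_distance and h are assumed values, not the authors' configuration.
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

def denoise_slice(slice_2d: np.ndarray) -> np.ndarray:
    """Apply non-local means denoising to a single modality slice."""
    slice_2d = slice_2d.astype(np.float64)
    sigma_est = float(np.mean(estimate_sigma(slice_2d)))  # rough noise estimate
    return denoise_nl_means(
        slice_2d,
        h=1.15 * sigma_est,   # filtering strength (assumed)
        sigma=sigma_est,
        patch_size=5,         # comparison patch size (assumed)
        patch_distance=6,     # search window radius (assumed)
        fast_mode=True,
    )
```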

    2.2 Patch Creation

Once the 2D slices of all modalities are pre-processed and the noise is removed, 2D patches are created which are then used for training the CNN model. To create these patches, we use the 2D slices of all 4 modalities, namely T1, T1c, T2, and FLAIR, along with the ground truth. Next, we randomly search for a pixel whose ground truth value matches the class of interest. Once such a pixel is found, we crop a region around it to extract the patch; the cropped region has the same size as the ground truth patch. In our case, we used a moderate 31×31 patch size to accommodate variation in the tumor lesions. After cropping a region as a patch, it is ensured that it contains at least 50% of pixels of the class for which the patch is being created; otherwise the patch is discarded and a new one is created until this condition is met.
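The following is a minimal sketch of this patch-creation rule as described above; the function and variable names are ours, and the border handling is an assumption.

```python
# Illustrative 31x31 patch extraction: pick a random pixel of the target class
# from the ground truth, crop the patch from all four modalities, and keep it
# only if at least 50% of the ground-truth patch carries the target label.
import numpy as np

PATCH = 31
HALF = PATCH // 2

def extract_patch(modalities, ground_truth, target_class, rng=None, max_tries=1000):
    """modalities: (4, H, W) array of T1, T1c, T2, FLAIR slices; ground_truth: (H, W) labels."""
    rng = rng or np.random.default_rng()
    ys, xs = np.where(ground_truth == target_class)
    if ys.size == 0:
        return None  # the slice contains no pixel of the target class
    for _ in range(max_tries):
        i = rng.integers(ys.size)
        y, x = ys[i], xs[i]
        # skip centers too close to the border to crop a full 31x31 patch (assumed handling)
        if y < HALF or x < HALF or y + HALF >= ground_truth.shape[0] or x + HALF >= ground_truth.shape[1]:
            continue
        gt_patch = ground_truth[y - HALF:y + HALF + 1, x - HALF:x + HALF + 1]
        # keep the patch only if >= 50% of its pixels belong to the target class
        if np.mean(gt_patch == target_class) >= 0.5:
            return modalities[:, y - HALF:y + HALF + 1, x - HALF:x + HALF + 1], gt_patch
    return None
```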

    2.3 Patch Pre-Processing

The next step in the proposed methodology is pre-processing the created patches. The main goal of patch pre-processing is to make the training process converge faster. For that, we normalize the patches to zero mean and unit variance, so that the intensity values of all patches lie in a comparable range, which helps the CNN converge faster. The patch normalization is done using the following equation:

$$\hat{x} = \frac{x - \mu}{\sigma}$$

where $\mu$ and $\sigma$ represent the mean and standard deviation of patch $x$, respectively.
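A one-line sketch of this normalization step; the small epsilon guard is our addition to avoid division by zero on constant patches.

```python
# Zero-mean, unit-variance normalization of a patch (illustrative).
import numpy as np

def normalize_patch(patch: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Normalize a patch so that its intensities have zero mean and unit variance."""
    return (patch - patch.mean()) / (patch.std() + eps)
```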

    2.4 Training Weights

The trained weights are what enable the CNN model to predict the class labels, so tuning them well boosts prediction accuracy. In the proposed method, the CNN model is trained using 2D patches along with the associated ground truth patch. During training, 10-fold cross-validation is applied to check the model performance on different cases. Once training is complete, the best trained weights are saved. These trained weights are then used in step 5 of the proposed method to predict the class label for each patch.
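Below is a minimal sketch, under stated assumptions, of this training and best-weight selection loop using scikit-learn's KFold and a Keras ModelCheckpoint callback. `build_model` is a placeholder for the CNN of Section 2.5, and the batch size and callback settings are illustrative, not the authors' exact configuration.

```python
# Illustrative 10-fold cross-validation with best-weight saving. X holds the
# pre-processed patches (N, 31, 31, 4) and y the one-hot class labels (N, 5).
import numpy as np
from sklearn.model_selection import KFold
from tensorflow.keras.callbacks import ModelCheckpoint

def cross_validate(build_model, X, y, n_splits=10, epochs=12, batch_size=128):
    kfold = KFold(n_splits=n_splits, shuffle=True, random_state=42)
    fold_scores = []
    for fold, (train_idx, val_idx) in enumerate(kfold.split(X)):
        model = build_model()
        checkpoint = ModelCheckpoint(
            f"best_weights_fold{fold}.h5",
            monitor="val_accuracy",   # keep the weights of the best validation epoch
            save_best_only=True,
            save_weights_only=True,
        )
        history = model.fit(
            X[train_idx], y[train_idx],
            validation_data=(X[val_idx], y[val_idx]),
            epochs=epochs, batch_size=batch_size,
            callbacks=[checkpoint], verbose=2,
        )
        fold_scores.append(max(history.history["val_accuracy"]))
    return float(np.mean(fold_scores))  # average validation accuracy over the folds
```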

    2.5 Model

The class labels are predicted using a CNN model. CNN models have been producing state-of-the-art results since their breakthrough in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) [22]. CNNs are very similar to simple neural networks: they have neurons that are given inputs and in return produce some output, and these neurons have trainable weights and biases. In a CNN, there are convolutional layers in which the inputs are convolved with a kernel, producing feature maps as a result. In the training phase, the weights of the kernels are adjusted as the error is back-propagated. Since connections in convolutional layers are sparse, there are fewer weights to train compared to fully connected layers, where dense connections are made. The purpose of the kernels is to extract features from the data, such as edges or blotches of color. Variable kernel sizes can be used, depending on the neighborhood and the amount of information required to learn [24,33,34].

In practice, convolutional layers are stacked on top of each other. Each convolutional layer extracts feature maps that become more abstract layer after layer. A simple CNN contains convolutional layers, dense (fully connected) layers, pooling layers, a softmax layer, activation functions, a loss function, and regularization parameters.

1) Initialization: Weight initialization is used to achieve convergence and to keep the signal propagating through the network. For the initialization of weights, we use Xavier initialization [35], given below:

$$\mathrm{Var}(W) = \frac{2}{n_{in} + n_{out}}$$

where $n_{in}$ and $n_{out}$ are the number of inputs and outputs of a layer, respectively. Xavier initialization [35] helps keep the variance of the signals propagating through the network stable.

2) Activation Function: The activation function produces the output of a node given its input. In our model, we use the rectified linear unit (ReLU), which performs better than the traditional sigmoid or hyperbolic tangent functions. ReLU is defined as:

$$f(x) = \max(0, x)$$

3) Pooling: Pooling is used for down-sampling in a CNN to reduce the computational load. Either average pooling [9] or max pooling can be used, depending on the need. Pooling should not be used in the early layers, as important information might be lost. In our model, we use max pooling.

4) Regularization: Regularization is used during training to avoid over-fitting. In order to generalize over all training examples, we use regularization to discard a certain amount of the signal. In our model, we apply dropout [36,37] in the dense and convolutional layers.

5) Architecture: A combination of convolutional, pooling, fully connected, and softmax layers forms a convolutional neural network. The arrangement of the layers and their configurations used to build the proposed model for improving the classification accuracy of the enhancing tumor region is shown in Tab. 1 below; an illustrative Keras sketch follows the table.

    Table 1:Proposed CNN model for improving the classification accuracy of enhancing tumor region
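Since the contents of Tab. 1 are not reproduced in this text, the sketch below only illustrates how a patch-based CNN of the kind described in this section can be assembled in Keras (small convolutional kernels, ReLU activations, max pooling, dropout, dense layers, and a softmax over the five classes). The layer counts, filter sizes, dropout rates, and the optimizer are assumptions, not the configuration of Tab. 1.

```python
# Illustrative patch-based CNN for five-class patch classification.
# NOT the exact layer configuration of Tab. 1; all sizes are assumed.
from tensorflow.keras import layers, models, initializers

def build_model(patch_size=31, n_modalities=4, n_classes=5):
    init = initializers.GlorotNormal()  # Xavier initialization, as in Section 2.5
    model = models.Sequential([
        layers.Conv2D(64, 3, padding="same", activation="relu",
                      kernel_initializer=init,
                      input_shape=(patch_size, patch_size, n_modalities)),
        layers.Conv2D(64, 3, padding="same", activation="relu", kernel_initializer=init),
        layers.MaxPooling2D(2),            # down-sampling after the early conv blocks
        layers.Conv2D(128, 3, padding="same", activation="relu", kernel_initializer=init),
        layers.Conv2D(128, 3, padding="same", activation="relu", kernel_initializer=init),
        layers.MaxPooling2D(2),
        layers.Dropout(0.25),              # regularization in the convolutional part
        layers.Flatten(),
        layers.Dense(256, activation="relu", kernel_initializer=init),
        layers.Dropout(0.5),               # regularization in the dense part
        layers.Dense(n_classes, activation="softmax"),
    ])
    # Optimizer choice is assumed; the loss matches Section 2.5 (categorical cross-entropy).
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```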

6) Loss Function: The loss function calculates the difference between the predicted value and the actual value (ground truth). We use categorical cross-entropy as the loss function for our model.
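For reference (the standard definition, not reproduced from the paper), for a one-hot ground-truth vector $y$ and predicted softmax probabilities $\hat{y}$ over the five classes, the categorical cross-entropy is:

$$L(y, \hat{y}) = -\sum_{c=1}^{5} y_c \log \hat{y}_c$$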

7) Model Output: Finally, once the input patch is processed by the deep learning model, a class label is produced from one of the five classes, namely: normal, necrosis, edema, enhancing, and non-enhancing. Each class receives a score between 0 and 1. A class with a score higher than 0.5 is considered a positive prediction; otherwise it is a negative prediction. The model can predict multiple labels at the same time.
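A small sketch of this decision rule; the trained `model`, the patch tensor layout, and the class ordering are placeholders.

```python
# Illustrative conversion of model output scores into class predictions,
# using the 0.5 threshold described above.
import numpy as np

CLASS_NAMES = ["normal", "necrosis", "edema", "enhancing", "non-enhancing"]

def predict_labels(model, patch, threshold=0.5):
    """Return every class whose score exceeds the threshold for one pre-processed patch."""
    scores = model.predict(patch[np.newaxis, ...], verbose=0)[0]  # shape: (5,)
    return [(name, float(s)) for name, s in zip(CLASS_NAMES, scores) if s > threshold]
```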

    3 Experimental Results

In this section, we present an insight into the dataset and study the significance of the proposed method through experimental evaluation. First, we discuss the configurations and hyper-parameters required to set up the experiment. Then we describe the metrics used to evaluate the performance of our classification model. Lastly, we explain the results of our experiments in detail.

A. Dataset

The experiments are conducted on the BRATS 2015 training database [5,38]. The BRATS dataset has both real patient data and synthetic data along with their ground truth. The dataset is divided into two parts, Low-Grade Gliomas (LGG) and High-Grade Gliomas (HGG), with LGG being less aggressive than HGG. There are 220 samples of HGG and 54 samples of LGG. Each patient sample has four modalities, namely T1, T1 Contrast (T1c), T2, and T2 FLAIR. In the T1 modality, tissues with high fat content appear bright and compartments filled with water appear dark. The T1c modality has higher contrast than T1 but is otherwise similar. In the T2 modality, compartments filled with water appear bright and high fat content appears dark, whereas T2 FLAIR is similar to T2 but with comparatively longer Echo Time (TE) and Repetition Time (TR). We conduct the experiments on HGG because samples of necrosis and enhancing tumor are scarce in the LGG dataset. Figs. 2a-2d give a visual overview of all four MRI modalities.

    Figure 2:Imaging modalities in BRATS 2015 dataset

B. Setup

To make the experiment reproducible, the hyper-parameter settings of the model are given in Tab. 2. For training the model, we extracted 125,000 patches from the HGG samples of the BRATS dataset. The CNN model was developed using Keras [39] with the TensorFlow-GPU [40] back-end.

    Table 2:Hyper-parameters of the proposed method

C. Example

In this section we present an example of our proposed method, visualizing the output of each step: 2D slice extraction, MRI pre-processing, 2D patch extraction, and 2D patch pre-processing. The dataset used in this example is BRATS 2015.

1) 2D Slice Extraction: The 2D slices are extracted from the volumes of all four MRI modalities. Each 3D volume contains 155 slices. The extracted 2D slices can be seen in Fig. 3 along with their ground truth.

2) MRI Pre-Processing: The histograms of the 2D MRI slices of all 4 modalities indicate the presence of Gaussian noise, which is mainly due to the intensity inhomogeneity problem in MRI scans. We applied the non-local means filter to the 2D MRI slices; its denoising effect can be seen in Fig. 4.

3) 2D Patch Extraction: We extracted an equal number of patches for each tumor class. The patches of each tumor class can be seen in Fig. 5. These patches are then used for training the CNN model.

4) 2D Patch Pre-Processing: The extracted patches are normalized so that the CNN classifier converges faster and the training process speeds up. For normalization, zero mean and unit variance were applied to the patches.

    Figure 3:2D slices extracted from MRI volume

    Figure 4:2D slices of T1,T1c,T2 and T2 flair after non-local means filter is employed for denoising

D. Evaluation

Once the training phase is completed, the next phase is evaluating the performance of the model. To do that, the model needs to be evaluated on certain metrics; accuracy alone cannot be used to measure a model's performance, as it has its own shortcomings. For evaluating our trained model, we used the Dice Similarity Coefficient (DSC), Positive Predictive Value (PPV), Sensitivity, Negative Predictive Value (NPV), False Positive Rate (FPR), Recall, and F-1 score. A brief introduction to these metrics is given below, where a True Positive (TP) is an outcome in which the model correctly predicts the positive class, a True Negative (TN) is one in which the model correctly predicts the negative class, a False Positive (FP) is one in which the model incorrectly predicts the positive class, and a False Negative (FN) is one in which the model incorrectly predicts the negative class.

    Figure 5:Extracted 2D patches of normal,necrosis,edema,enhancing and non-enhancing tumor class

1) Dice Similarity Coefficient (DSC): This measures the overlap between two samples, given as:

$$DSC = \frac{2\,TP}{2\,TP + FP + FN}$$

2) Positive Predictive Value (PPV) or Precision: The proportion of cases correctly identified as belonging to class $c$ among all cases the classifier assigns to class $c$, given as:

$$PPV = \frac{TP}{TP + FP}$$

3) Sensitivity: The proportion of actual positives identified as positive, given as:

$$Sensitivity = \frac{TP}{TP + FN}$$

4) Negative Predictive Value (NPV): The probability that samples predicted as negative truly do not have the disease, given as:

$$NPV = \frac{TN}{TN + FN}$$

5) False Positive Rate (FPR): The proportion of negative events wrongly categorized as positive, given as:

$$FPR = \frac{FP}{FP + TN}$$

6) Recall: The proportion of cases correctly identified as belonging to class $c$ among all cases that truly belong to class $c$, given as:

$$Recall = \frac{TP}{TP + FN}$$

7) F-1 Score: The harmonic mean of precision and recall, reaching its best value at 1 (perfect precision and recall) and worst at 0, given as:

$$F1 = \frac{2 \times Precision \times Recall}{Precision + Recall}$$
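The sketch below computes these per-class metrics directly from the definitions above. It assumes flattened ground-truth and predicted label maps for one fold, assumes the class of interest actually occurs in them, and the helper name is ours.

```python
# Illustrative per-class computation of the evaluation metrics defined above,
# starting from flattened predicted and ground-truth label maps.
import numpy as np

def per_class_metrics(y_true, y_pred, class_id):
    """Compute DSC, PPV, sensitivity, NPV, FPR and F1 for one tumor class."""
    t = (np.asarray(y_true) == class_id)
    p = (np.asarray(y_pred) == class_id)
    tp = np.sum(t & p)
    tn = np.sum(~t & ~p)
    fp = np.sum(~t & p)
    fn = np.sum(t & ~p)
    dsc = 2 * tp / (2 * tp + fp + fn)
    ppv = tp / (tp + fp)             # precision
    sensitivity = tp / (tp + fn)     # equal to recall
    npv = tn / (tn + fn)
    fpr = fp / (fp + tn)
    f1 = 2 * ppv * sensitivity / (ppv + sensitivity)
    return {"DSC": dsc, "PPV": ppv, "Sensitivity": sensitivity,
            "NPV": npv, "FPR": fpr, "F1": f1}
```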

E. Results

In this section, we analyze the effect of the key components of the experiment and discuss the acquired results. BRATS is a multi-class MRI dataset. We employed patch-wise training for the CNN model instead of training on whole MRI scans. A comparison of accuracy against the number of training epochs is presented in Fig. 6. To report the accuracy scores, we trained the model for 12 epochs. From the 1st epoch the training accuracy is in the low 80s whereas the validation accuracy is in the high 80s. After the 5th epoch, both training and validation accuracy increase into the mid-90s, after which the trend becomes static with no further improvement. So, after the 12th epoch, we stop the model training, as the loss has converged and we do not want the model to over-fit the training data.

    Figure 6:Trend of accuracy during 12 epochs of training

To check the loss trend during the training phase, we trained the model for 12 epochs and the results are plotted in Fig. 7. Training loss starts at 0.55 and drops steeply until the 2nd epoch; after the 2nd epoch, it keeps decreasing slowly until the 12th epoch, when training is stopped. Validation loss starts from 0.35 and drops gradually until the 2nd epoch, after which it decreases sluggishly to 0.19. After the 12th epoch, we stop training in order to prevent the classifier from over-fitting the training data. In Fig. 8, we present the classification report on the Precision, Recall, and F-1 Score metrics for each tumor class. The scores for the enhancing tumor class are 0.94, 0.94, and 0.94 for Precision, Recall, and F1-Score, respectively. We obtained impressive results for the enhancing tumor class along with high Precision, Recall, and F-1 Scores for all the remaining classes as well. The scores obtained by the proposed method range between 0.91 and 0.98, which is in the high range for tumor classification.

    Figure 7:Trend of loss during 12 epochs of training

    Figure 8:Classification report of proposed model on precision,recall and F1-score

As seen from the results in Figs. 6-8, the trained model performs very well on the validation data; however, to examine the predictive performance of the classifier, new or unseen examples of data need to be given.

To check this, we employed K-fold cross-validation, in which K-1 folds are used for training and the remaining fold is used for validation; the process is repeated K times, with each fold used exactly once for validation. In this experiment, we used 10-fold cross-validation. The DSC is used to compare the similarity between two samples. The performance of our proposed method on the DSC metric can be seen in Tab. 3: the average DSC score for all tumor classes over the 10 folds of validation is 0.94, and the average DSC score for the enhancing tumor over the 10 folds is 0.93.

The sensitivity metric is used to check whether the actual positives are identified as positive. In Tab. 3 we report the sensitivity values achieved by the proposed method: the average sensitivity score for all tumor classes is 0.94 over the 10 folds of validation, whereas the average sensitivity score for the enhancing tumor class is 0.92.

    Table 3:Result produced by our proposed method on the BRATS 2015 dataset

Similarly, Tab. 3 also presents the performance of our proposed CNN model on the Positive Predictive Value (PPV) metric, whose purpose is to check how often the predicted positives are true positives. For all tumor classes the average PPV score is 0.94 over the 10 folds of validation, and for the enhancing tumor class the average PPV score is 0.93. The difference between the values is due to the variation in the sample examples.

Tab. 3 also shows the trend of the Negative Predictive Value (NPV) over the 10 folds of cross-validation. The purpose of NPV is to evaluate how accurately the classifier predicts negative classes. The average NPV scores remain in excess of 0.98 for all classes. Our main objective was to increase the performance of the enhancing tumor class, and the table shows that its scores are very consistent, with an average NPV of around 0.98. Next, we look at the False Positive Rate (FPR), which measures the percentage of false labels assigned to a sample during the testing phase. In Tab. 3 we can see that the FPR score is low for all tumor classes: the average lies around 0.014, and for the enhancing tumor class the average remains around 0.01517 throughout the 10 folds of validation.

    4 Discussion

The main goal of this work is to devise an automatic classification method for the enhancing tumor class with improved accuracy, while achieving high accuracy for all the other tumor classes as well. This is a challenging task due to the complex symmetry and variable shape of brain structure. Secondly, the enhancing tumor class has low classification accuracy because these tumors are in their initial phases of growth and appear dark on MRI, which makes them difficult to identify.

In the proposed solution we have improved the classification accuracy for the enhancing tumor class. This was mainly achieved by applying a pipeline of pre-processing and a customized CNN classifier, followed by post-processing. We observed that the PDF of the MRI scans shows Gaussian noise, and to remove it we applied the non-local means filter [32]. After the MRI was pre-processed, we created 2D slices from the 3D MRI volumes; the reason for using 2D slices is that training a model with 3D voxels is computationally very expensive. The extracted patches were taken from the axial plane. Once the 2D patches were created, they were normalized to achieve faster convergence during training of the CNN model. Finally, each pre-processed patch was given as input to the model, which in return predicted the class label for the given patch.

For training the model we performed MRI pre-processing, patch extraction, patch pre-processing, and tuning of the training weights for the CNN model. The model was trained for 12 epochs, and 10-fold cross-validation was applied to check the CNN model's performance on easy and hard examples. Once the 10 folds of cross-validation were completed, the best weights were saved and used in the model for prediction of class labels. After training, the next step was evaluating the performance of the CNN model. Accuracy alone cannot be used for evaluation, as it comes with its own shortcomings; for this reason, we used the Dice Similarity Coefficient (DSC), Specificity, Negative Predictive Value (NPV), False Positive Rate (FPR), Precision, Recall, and F-1 Score to evaluate the performance of the CNN model. The reported scores on all these metrics were impressive not only for the enhancing tumor class but also for all the other tumor classes. The classification accuracy problem for the enhancing tumor region was addressed by using small patches of size 31×31, which are small enough to represent a class, and by ensuring that the created patches for a certain class contain at least 50% pixels of that class. Due to this novelty, the accuracy increased markedly for the enhancing tumor class and for the other classes as well. We also used an equal number of samples for all classes during the training phase, which removes the class-imbalance problem and prevents the classifier from making biased predictions.

    5 Conclusion

The objective of this research was to devise an automatic method that improves the classification accuracy for the enhancing tumor region without degrading the accuracy of the other tumor classes. For this purpose, we proposed an automatic classification method based on a CNN model that increases the classification accuracy of the enhancing tumor region while also achieving high classification scores for the other tumor classes.

Future work can extend these patch-wise classifications into full segmentations. Besides this, different models, e.g., U-Nets or V-Nets, with slight variations can be employed to improve the prediction confidence on larger datasets.

    Funding Statement:The authors received no specific funding for this study.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
