
    Automated brain tumor segmentation on multi-modal MR image using SegNet

2019-08-05 01:45:28 | Salma Alqazzaz, Xianfang Sun, Xin Yang, and Len Nokes
Computational Visual Media, 2019, Issue 2

    Salma Alqazzaz(), Xianfang Sun, Xin Yang, and Len Nokes

Abstract Accurate and fully automatic algorithms for brain tumor segmentation have the potential to improve disease detection and treatment planning. Glioma, a type of brain tumor, can appear at different locations with different shapes and sizes. Manual segmentation of brain tumor regions is not only time-consuming but also prone to human error, and its performance depends on the pathologist's experience. In this paper, we tackle this problem by applying a fully convolutional neural network, SegNet, to 3D datasets for four MRI modalities (Flair, T1, T1ce, and T2) for automated segmentation of brain tumor and sub-tumor parts, including necrosis, edema, and enhancing tumor. To further improve tumor segmentation, the four separately trained SegNet models are integrated by post-processing to produce four maximum feature maps by fusing the machine-learned feature maps from the fully convolutional layers of each trained model. The maximum feature maps and the pixel intensity values of the original MRI modalities are combined to encode useful information into a feature representation. Taking the combined feature as input, a decision tree (DT) is used to classify the MRI voxels into different tumor parts and healthy brain tissue. Evaluating the proposed algorithm on the dataset provided by the Brain Tumor Segmentation 2017 (BraTS 2017) challenge, we achieved F-measure scores of 0.85, 0.81, and 0.79 for whole tumor, tumor core, and enhancing tumor, respectively. Experimental results demonstrate that using SegNet models with 3D MRI datasets and integrating the four maximum feature maps with the pixel intensity values of the original MRI modalities has the potential to perform well on brain tumor segmentation.

Keywords brain tumor segmentation; multi-modal MRI; convolutional neural networks; fully convolutional networks; decision tree

    1 Introduction

Glioma is one of the most common types of primary tumor occurring in the brain. Gliomas grow from glial cells and can be categorized into low and high grade gliomas. High grade gliomas (HGG) are more aggressive and highly malignant, with a life expectancy of at most two years, while low grade gliomas (LGG) can be benign or malignant and grow more slowly, with a life expectancy of several years [1]. Accurate segmentation of brain tumor and surrounding tissues such as edema, enhancing tumor, non-enhancing tumor, and necrotic regions is an important factor in assessing disease progression, therapy response, and treatment planning [2]. Multi-modal magnetic resonance imaging (MRI) is widely employed in clinical routine for diagnosis and for monitoring tumor progression. MRI has been one of the most popular imaging techniques as it facilitates tumor analysis by visualizing tumor spread; it also gives better soft tissue contrast than other techniques such as computed tomography (CT) and positron emission tomography (PET). Moreover, multi-modal MRI protocols are normally used to evaluate brain tumor tissues, as different sequences can separate different tissues based on their properties. For example, T1-weighted images are good at separating healthy tissues in the brain, while T1ce (contrast enhanced) images help to delineate tumor boundaries, which appear brighter because of the contrast agent. Edema around tumors is detected well in T2-weighted images, while FLAIR images are best for differentiating edema regions from cerebrospinal fluid (CSF) [3, 4].

Gliomas have complex structure and appearance, and require accurate delineation in images. Tumor components are often diffuse, with weak contrast; their borders are often fuzzy and hard to distinguish from healthy tissue (white matter, gray matter, and CSF), making them hard to segment [5]. All these factors make manual delineation time-consuming, expensive, and prone to operator bias. Automatic brain tumor segmentation from MRI would solve these issues by providing an efficient tool for reliable diagnosis and prognosis of brain tumors. Therefore, many researchers have considered automated brain tumor segmentation from MRI images.

Recently, convolutional neural networks (CNNs) have attracted attention in object detection, segmentation, and image classification. For the BraTS challenge, most CNN-based methods are patch-wise models [5-7]. These methods take only a small region as input to the network, which disregards the global image content and label correlations. Additionally, these methods take a long time to train.

The CNN architecture is modified in several ways in fully convolutional networks (FCNs). Specifically, instead of making patch-wise probability distribution predictions as in a CNN, FCN models predict a probability distribution pixel-wise [8]. In the method of Ref. [9], different MRI modalities are stacked together as different input channels of the deep learning model. However, the correlation between different MRI modalities was not explicitly considered. To overcome this problem, we develop a feature fusion method to select the most effective information from different modalities. A model is proposed to deal with multiple MRI modalities separately and then incorporate spatial and sequential features from them for 3D brain tumor segmentation.

In this study, we first trained four SegNet models on 3D datasets, with the Flair, T1, T1ce, and T2 modalities as input data. The output of each SegNet model is four feature maps, which represent the scores of each pixel being classified as background, edema, enhancing tumor, or necrosis. The highest scores in the same class from the four SegNet models are extracted, giving four feature maps with the highest scores. These feature maps are combined with the pixel values of the original MRI modalities and taken as the input to a DT classifier to further classify each pixel. Our results demonstrate that this proposed strategy can perform fully automatic segmentation of tumor and sub-tumor regions.

    The main contributions of this paper are as follows:

· A brain tumor segmentation method that uses 3D information from the neighbors of the slice in question to increase segmentation accuracy for single-modality MR images.

    · Effective combination of features extracted from multi-modal MR images, maximizing the useful information from different modalities of MR images.

· A decision tree-based segmentation method which incorporates features and pixel intensities from multi-modal MR images, giving higher segmentation accuracy than using single-modality MR images.

    · Evaluation on the BraTS 2017 dataset showing that the proposed method gives state-of-the-art results.

    2 Related work

Many methods have been investigated for medical image analysis; promising results have been provided by computational intelligence and machine learning methods in medical image processing [10]. Brain tumor segmentation from multi-modal MRI scans is still a challenging task, although various advanced automated segmentation methods have recently been proposed to solve it.

Here, we review some relevant work on brain tumor segmentation. For machine learning methods other than deep learning, Gooya et al. [11], Zikic et al. [12], and Tustison et al. [13] present some typical works in this field. Discriminative learning techniques such as SVMs, decision forests, and conditional random fields (CRFs) have been reviewed in Ref. [2].

One common aspect of classical discriminative models is that their implementation is based on predefined features, as opposed to deep learning models, which automatically learn a hierarchy of increasingly complex features directly from data, resulting in more robust features [5]. Pereira et al. [7] used two different CNNs for the segmentation of LGG and HGG. The architecture in Ref. [5] involves two pathways: a local pathway that focuses on the information in a pixel's neighborhood, and a global pathway that captures global contextual information from an MRI slice, to perform accurate brain tumor segmentation. A dual-stream 11-layer network with a 3D fully connected CRF as post-processing was presented in Ref. [14]. An adapted version of DeepMedic with residual connections was employed for brain tumor segmentation in Ref. [15].

Patch-wise methods contain many redundant convolutional calculations, yet only explore spatially limited contextual features. To avoid using patches, an FCN with deconvolution layers can be used to train an end-to-end, pixel-to-pixel CNN for pixel-wise prediction with the whole image as input [8]. Chang [16] demonstrated an algorithm that combines an FCN and a CRF. Shelhamer et al. [8] suggested using skip connections to join high-level features from deep decoding layers with appearance features from shallow encoding layers, to recover spatial information lost during downsampling. This method has demonstrated promising results on natural images and is also applicable to biomedical images [17]. Ronneberger et al. [9] and Çiçek et al. [18] used the U-Net architecture, which consists of a down-sampling path to capture contextual features and a symmetric up-sampling path that enables accurate localization, with a 3D extension. However, depth information is ignored by 2D-based approaches. In contrast, Lai [19] used depth information by implementing a 3D convolution model which utilizes the correlation between slices. However, a 3D convolution network requires a large number of parameters, and on a small dataset it is prone to overfitting.

In Refs. [5, 20], different modalities were treated as different input channels to the deep learning methods, so the correlations between them were not well exploited. Our proposed method utilizes the correlations between different MRI modalities by processing a 3D MRI dataset for each MRI modality separately with a SegNet model, combining the feature maps of the last deconvolution layer of each trained SegNet model with the pixel intensity values of the original MRI modalities, and feeding them into a classifier.

    3 Approach

Our brain tumor segmentation algorithm aims to locate the entire tumor volume and accurately segment the tumor into four sub-tumor parts. Our method has four main steps: a pre-processing step to construct 3D MRI datasets, a training step to fine-tune a pretrained SegNet for each MRI modality separately, a post-processing step to extract four maximum feature maps from the SegNet models' score maps, and a classification step to classify each pixel based on the maximum feature maps and the MRI pixel values. Figure 1 shows the pipeline of our proposed system using SegNet networks.

    3.1 Data pre-processing

    In our study, MRI intensity value normalization is important to compensate for MRI artifacts, such as motion and field inhomogeneity, and also to allow data from different scanners to be processed by a single algorithm. Therefore, we need to ensure that the value ranges match between patients and different modalities to avoid initial biases of the network.

Firstly, to remove unwanted artifacts, N4ITK bias field correction is applied to all MRI modalities [21]. If this correction is not performed in the pre-processing step, artifacts cause many false positives, resulting in poor performance. Figure 2 shows the effect of applying bias field correction to an MR image. Higher intensity values, which can lead to false positives in the predicted output, are observed near the bottom left corner of the first scan. The second scan has better contrast near the edges after removing the bias.
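The paper cites the N4ITK algorithm [21] but does not name a specific tool. As a minimal sketch, assuming SimpleITK and a hypothetical input file flair.nii.gz, the correction could look like:

```python
import SimpleITK as sitk

# Read one modality as float (N4 requires a real-valued pixel type).
image = sitk.ReadImage("flair.nii.gz", sitk.sitkFloat32)  # hypothetical path

# Rough head mask via Otsu thresholding, as in the SimpleITK N4 example.
mask = sitk.OtsuThreshold(image, 0, 1, 200)

# Estimate and remove the smooth multiplicative bias field.
corrected = sitk.N4BiasFieldCorrection(image, mask)
sitk.WriteImage(corrected, "flair_n4.nii.gz")
```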

Intensity values across MRI slices have been observed to vary greatly, so a normalization pre-processing step is also applied in addition to bias field correction, to bring the mean intensity value and variance close to 0 and 1, respectively. Equation (1) shows how the normalized slice value I_n is computed:

I_n = (I − μ) / σ    (1)

where I is the original intensity value of the MRI slice, and μ and σ are the mean and standard deviation of I, respectively.

    Fig. 1 Pipeline of our brain tumour segmentation approach.

    Fig. 2 An MRI scan (a) before and (b) after N4ITK bias field correction.

Additionally, removing the top and bottom 1% of intensity values during the normalization process brings the intensity values within a coherent range across all images for the training phase. To remove a significant portion of unnecessary zeros in the dataset, and to save training time by reducing the huge memory requirements of 3D datasets, we trimmed black background from the images for all modalities to get input images of size 192×192.
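To make the pre-processing concrete, here is a minimal NumPy sketch of the normalization with 1% outlier clipping and a crop to 192×192; the function names and the center-crop strategy are assumptions, as the paper does not specify how the background was trimmed.

```python
import numpy as np

def normalize_slice(slice_2d, clip_pct=1.0):
    """Clip the top/bottom 1% of intensities, then z-score normalize (Eq. (1))."""
    lo, hi = np.percentile(slice_2d, [clip_pct, 100.0 - clip_pct])
    clipped = np.clip(slice_2d, lo, hi)
    return (clipped - clipped.mean()) / (clipped.std() + 1e-8)

def crop_background(slice_2d, size=192):
    """Center-crop to size x size, trimming black background (assumed strategy)."""
    h, w = slice_2d.shape
    top, left = (h - size) // 2, (w - size) // 2
    return slice_2d[top:top + size, left:left + size]
```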

As shown in Fig. 1, the main step in pre-processing is 3D database construction. Since there are four modalities in the MRI dataset for each patient, we took them as four independent inputs. When processing the jth slice, we also use the (j−1)th and (j+1)th slices to take advantage of 3D image information. To do so, the three adjacent slices of each modality are taken as the three color channels of an image and used as a 3D input.
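A minimal sketch of this construction, assuming a volume stored as a (slices, H, W) NumPy array; repeating edge slices at the volume boundaries is an assumption, as the paper does not state its boundary handling:

```python
import numpy as np

def make_3d_input(volume, j):
    """Stack slices (j-1, j, j+1) of one modality as a 3-channel image."""
    lo = max(j - 1, 0)                    # repeat first slice at the boundary
    hi = min(j + 1, volume.shape[0] - 1)  # repeat last slice at the boundary
    return np.stack([volume[lo], volume[j], volume[hi]], axis=-1)  # (H, W, 3)
```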

    3.2 Brain tumor image segmentation by SegNet networks

The semantic segmentation model in Fig. 3 takes full-size images as input for feature extraction in an end-to-end manner. The pretrained SegNet is used, and its parameters are fine-tuned using images with manually annotated tumor regions. In the testing process, the final SegNet model is used to create predicted segmentation masks for tumor regions in unseen images. The motivation for using SegNet instead of other deep learning networks is that SegNet has a small number of parameters, does not need high computational resources like DeconvNet [23], and is easier to train end-to-end. Moreover, in a U-Net network [9], entire feature maps in the encoders are transferred to the corresponding up-sampling decoders and concatenated to give decoder feature maps, which leads to high memory requirements, while in SegNet only pooling indices are reused, needing less memory.

    Fig. 3 (a) Architecture of the SegNet; (b) SegNet which uses max pooling indices to up-sample the feature maps and convolve them with a trainable decoder filter bank [22].

    In our network architecture, the main idea used from FCN is to change the fully connected layers of VGG-16 into convolutional layers. This not only helps in retaining higher resolution feature maps at the deepest encoder outputs, but also reduces the number of parameters in the SegNet encoder network significantly (from 134M to 14.7M). This enables the classification net to output a dense feature map which keeps spatial information [22].

The SegNet architecture consists of a downsampling (encoding) path and a corresponding upsampling (decoding) path, followed by a final pixel-wise classification layer. In the encoder path, there are 13 convolutional layers which match the first 13 convolutional layers of the VGG-16 network. Each encoder layer has a corresponding decoder layer; therefore, the decoder network also has 13 convolutional layers. The output of the final decoder layer is fed into a multi-class soft-max classifier to produce class probabilities for each pixel independently.

The encoder path consists of five convolution blocks, each of which is followed by a max-pooling operation with a 2×2 window and stride 2 for downsampling. Each convolution block is constructed from several layers of 3×3 convolution combined with batch normalization and an element-wise rectified linear nonlinearity (ReLU). There are two layers in each of the first two convolution blocks, and three layers in each of the next three blocks. The decoder path has a symmetric structure to the encoder path, except that the max-pooling operation is replaced by an upsampling operation. Upsampling takes the output of the previous layer and the max-pooling indices of the corresponding encoding layer as input. The output of the final decoder, which is a high dimensional feature representation, is fed into a soft-max classifier layer, which classifies each pixel independently; see Fig. 3. The output of the soft-max classifier is a K-channel image, where K is the number of desired classes, with a probability value at each pixel.
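The defining mechanism here is that the decoder upsamples with the stored max-pooling indices rather than transferring whole encoder feature maps. A minimal PyTorch sketch of one encoder/decoder pair (not the authors' MATLAB implementation) illustrates this:

```python
import torch
import torch.nn as nn

class SegNetBlock(nn.Module):
    """One encoder/decoder pair showing SegNet's index-based unpooling."""
    def __init__(self, channels=64):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True))
        self.pool = nn.MaxPool2d(2, stride=2, return_indices=True)
        self.unpool = nn.MaxUnpool2d(2, stride=2)
        self.dec = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True))

    def forward(self, x):
        x = self.enc(x)               # conv + batch norm + ReLU
        x, indices = self.pool(x)     # keep pooling indices, not feature maps
        x = self.unpool(x, indices)   # sparse upsampling via stored indices
        return self.dec(x)            # densify the sparse map by convolution
```

Because only the indices are stored, the decoder's memory cost is far lower than U-Net's concatenation of full encoder feature maps.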

    3.3 Post-processing

As described in Section 3.2, four SegNet models are adapted and trained separately for segmentation of brain tumors from multi-modal MR images. The earlier layers of the SegNet models learn simple features like circles and edges, while the deeper layers learn complex and finer useful features. The machine-learned features in the last deconvolution layer of each SegNet model form four score maps, corresponding to the four classification labels (background, necrosis, edema, and enhancing tumor). From the resulting 16 feature maps, four maximum score maps are constructed by taking, for each class, the highest activation across the four models. The values in each maximum activation feature map represent strong features that include all hierarchical features (at higher resolution), helping to increase classification performance. To further increase the information available for classification, a feature vector is generated by combining the four maximum score maps with the pixel intensity values of the original MRI modalities. Finally, the encoded feature vector is fed to a DT classifier to classify each MRI image voxel into tumor and sub-tumor parts. The reason for using a DT as the classifier in this work is that it has been shown to provide high performance for brain tumor segmentation [2]. The selection process for the maximum feature maps and their location in the SegNet architecture are illustrated in Fig. 4.
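A minimal NumPy sketch of this fusion, assuming the four models' per-class score maps are stacked in one array (the array layout and function names are illustrative, not the authors' code):

```python
import numpy as np

def fuse_max_feature_maps(score_maps):
    """Element-wise max over the four models' score maps.

    score_maps: (4 models, 4 classes, H, W) -> (4 classes, H, W),
    keeping each class's strongest response across modalities.
    """
    return score_maps.max(axis=0)

def build_feature_vectors(max_maps, modalities):
    """Concatenate max score maps with raw pixel intensities per voxel.

    max_maps: (4, H, W); modalities: (4, H, W) Flair/T1/T1ce/T2 slices.
    Returns an (H*W, 8) feature matrix to be fed to the decision tree.
    """
    feats = np.concatenate([max_maps, modalities], axis=0)  # (8, H, W)
    return feats.reshape(8, -1).T
```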

    3.4 SegNetMax DT

As described above, the four maximum score maps are combined with pixel intensity values and treated as feature vectors. The feature vectors are then presented to a DT classifier. In this phase, the maximum number of splits or branch points is specified to control the depth of the tree. Different tree depths for the DT classifier were examined and tuned on the training datasets; optimal generalization and accuracy were obtained with a tree of depth 15. 5-fold cross-validation was used to evaluate the classification accuracy.
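As a sketch of this step with scikit-learn (the authors used MATLAB's classification learner, so the API differs; the synthetic X and y stand in for the real voxel features and labels):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-ins: 8 features per voxel (4 max score maps +
# 4 modality intensities) and 4 tissue classes.
rng = np.random.default_rng(0)
X = rng.random((1000, 8))
y = rng.integers(0, 4, size=1000)

clf = DecisionTreeClassifier(max_depth=15, random_state=0)  # tuned depth 15
scores = cross_val_score(clf, X, y, cv=5)                   # 5-fold CV
print(f"mean CV accuracy: {scores.mean():.3f}")
clf.fit(X, y)  # final fit on all training voxels
```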

    3.5 Training and implementation details

The proposed algorithm was implemented using MATLAB 2018a and run on a PC with an Intel Core i7 CPU and 16 GB RAM under Windows 7. Our implementation was based on the MATLAB deep learning toolbox for semantic segmentation and its classification learner toolbox for training the DT classifier. The whole training process for each model took approximately 3 days on a single NVIDIA Titan XP GPU. We minimized the loss function on the training set using stochastic gradient descent, with parameters set as follows: learning rate = 0.0001, maximum number of epochs = 80.
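For readers who prefer Python, here is a runnable PyTorch analogue of the reported optimizer settings (the paper itself used MATLAB's toolboxes); the tiny stand-in model and dummy tensors are assumptions made for self-containment:

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 4, 3, padding=1)  # stand-in for the full SegNet
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)  # learning rate 0.0001
criterion = nn.CrossEntropyLoss()

x = torch.randn(2, 3, 192, 192)         # dummy 3-channel 192x192 inputs
y = torch.randint(0, 4, (2, 192, 192))  # dummy 4-class label maps

for epoch in range(80):                 # maximum number of epochs = 80
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
```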

    4 Experiments and results

    Fig. 4 Selection process of maximum feature maps. (a) Background. (b) Edema. (c) Enhancing tumor. (d) Necrosis. (e) Maximum feature maps.

All 285 patient subjects with HGG and LGG in the BraTS 2017 dataset were included in this study [2, 24]. 75% of the patients (158 HGG and 57 LGG) were used to train the deep learning model, and 25% (52 HGG and 18 LGG) were assigned to the testing set. For each patient, there were four types of MRI sequence (Flair, T1, T1ce, and T2). All images were segmented manually by one to four raters, using three labels (1: necrotic and non-enhancing tumor, 2: peritumoral edema, 4: GD-enhancing tumor). The segmentation ground truth for each subject was approved by experienced neuro-radiologists. Figure 5 shows the MRI modalities and their ground truth.

The model performance was evaluated on the test set. For practical clinical applications, the tumor structures are grouped into three different tumor regions, defined as follows (a label-mask sketch is given after the list):

· The complete tumor region, including all four intra-tumor classes (necrosis and non-enhancing tumor, edema, and enhancing tumor; labels 1, 2, and 4).

    · The core tumor region (as above but excluding edema regions, labels 1 and 4).

    · The enhancing tumor region (only label 4).
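A minimal NumPy sketch of deriving these three evaluation regions from a BraTS label map (the function name is illustrative):

```python
import numpy as np

def region_masks(label_map):
    """Boolean masks for the three evaluation regions of a BraTS label map."""
    whole = np.isin(label_map, [1, 2, 4])  # complete tumor: all labels
    core = np.isin(label_map, [1, 4])      # tumor core: excludes edema
    enhancing = label_map == 4             # enhancing tumor only
    return whole, core, enhancing
```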

For each tumor region, the segmentation results were evaluated quantitatively using the F-measure, which measures the overlap between the manually defined brain tumor regions and the predictions of the fully automatic method, as follows:

F = 2 × Precision × Recall / (Precision + Recall)

where Precision = TP/(TP + FP) and Recall = TP/(TP + FN), with TP, FP, and FN the numbers of true positive, false positive, and false negative voxels, respectively.
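Equivalently, F = 2TP/(2TP + FP + FN), which for binary masks is the Dice coefficient. A short NumPy sketch applying it to the region masks above:

```python
import numpy as np

def f_measure(pred_mask, gt_mask):
    """F-measure (Dice) between boolean predicted and ground-truth masks."""
    tp = np.logical_and(pred_mask, gt_mask).sum()   # true positives
    fp = np.logical_and(pred_mask, ~gt_mask).sum()  # false positives
    fn = np.logical_and(~pred_mask, gt_mask).sum()  # false negatives
    return 2 * tp / (2 * tp + fp + fn + 1e-8)       # epsilon avoids 0/0
```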

From our preliminary results, we observed that our 3D model can detect brain tumors accurately even though we trained each MRI modality separately instead of combining the four MRI modalities as input, as in other studies. The high accuracy comes from the fact that the network architecture is able to capture fine 3D details of tumor regions from adjacent MRI slices (j−1, j, j+1) of the same modality. Consequently, the convolutional layers can extract more features, which is extremely helpful in improving the performance of brain tumor segmentation. Moreover, relatively accurate brain tumor segmentation was achieved by extracting the four maximum feature maps combined with the pixel intensity values of the original MRI images. The score maps are obtained from the last deconvolution layer of each SegNet model because this layer includes all hierarchical features containing finer details (at higher resolution), which gives accurate brain tumor detection results.

    Table 1 gives evaluation results for the proposed method on the BraTS 2017 Training dataset for four MRI modalities, while Table 2 compares our method with other methods.

From Table 1 it can be seen that SegNetMax DT performs better than the individual SegNet models. As explained in Section 3.4, only the highest scores for each specific sub-tumor region are selected for classification, which is why the highest accuracy is obtained using SegNetMax DT.

    Table 1 Segmentation results for the BraTS 2017 dataset

Fig. 5 (a) Whole tumor visible in FLAIR; (b) tumor core visible in T2; (c) enhancing and necrotic tumor component structures visible in T1ce; (d) final labels of the observable tumor structures: edema (yellow), necrotic/cystic core (light blue), enhancing core (red).

    Table 2 Comparison of our method and other methods on the BraTS 2017 dataset

Table 2 shows that our method gives better results for core and enhancing tumor segmentation, though its complete tumor segmentation accuracy is not better than that of Refs. [25] and [26]. This is because our method has relatively low detection accuracy for edema. However, we consider the core and enhancing regions to be much more important than the edema region, so it is worth sacrificing accuracy of edema detection to increase accuracy of core and enhancing tumor detection.

Figure 6 shows some visual semantic segmentation results of the SegNet models and the SegNetMax DT method from an axial view.

    5 Discussion and conclusions

Fig. 6 Segmentation results of the SegNet models and the SegNetMax DT method. (a)-(g): MRI slices, ground truth, SegNet1 (Flair), SegNet2 (T1), SegNet3 (T1ce), SegNet4 (T2), and SegNetMax DT, respectively.

In this study, the publicly available BraTS 2017 dataset was used. A DT and four SegNet models were trained with the same training dataset, which includes ground truth. A testing dataset without ground truth was used for system evaluation. Our experiments show that the SegNet architectures with 3D datasets and the post-processing presented in this work can efficiently and automatically segment brain tumors, completing segmentation for an entire volume in four seconds on a GPU-optimized workstation. However, some models, such as those trained on T1 and T2 (SegNet2 and SegNet4), do not give accurate results, because the T1 and T2 MRI modalities only give information related to healthy tissue and the whole tumor, rather than other sub-parts of the tumor such as necrosis and enhancing tumor. To tackle this problem, maximum feature maps from all SegNet models were combined, so that only strong and useful features from all SegNet models are presented to the classifier. The four MRI modalities were trained separately for multiple reasons. Firstly, different modalities have different features, so it is faster to train them using several simple models rather than one complex model. Secondly, features directly related to the specific modality of each SegNet model can be extracted, providing clinicians with modality-specific information. Finally, one of the most common MRI limitations is the prolonged scan time required to acquire different MRI modalities, so depending on a single modality to detect a brain tumor can sometimes be a good way to save time in clinical applications.

It is worth mentioning that in the proposed method, the training stage is time-consuming, which could be considered a limitation, but the prediction phase rapidly processes the testing dataset to provide semantic segmentation and classification. Although our method can segment core and enhancing tumors better than state-of-the-art methods, it is not better at segmenting complete tumors. However, further post-processing techniques could improve the accuracy of our method, and the SegNet models could be saved as trained models and refined using additional training datasets. Consequently, a longitudinal study using different FCN and CNN architectures should be undertaken over time to increase the proposed system's performance.

    Acknowledgements

We would like to thank NVIDIA for their kind donation of a Titan XP GPU.
