
    Automated brain tumor segmentation on multi-modal MR image using SegNet

    2019-08-05 01:45:28
    Computational Visual Media, 2019, Issue 2

    Salma Alqazzaz, Xianfang Sun, Xin Yang, and Len Nokes

    Abstract The potential of improving disease detection and treatment planning comes with accurate and fully automatic algorithms for brain tumor segmentation. Glioma, a type of brain tumor, can appear at different locations with different shapes and sizes. Manual segmentation of brain tumor regions is not only time-consuming but also prone to human error, and its performance depends on pathologists' experience. In this paper, we tackle this problem by applying a fully convolutional neural network, SegNet, to 3D data sets for four MRI modalities (Flair, T1, T1ce, and T2) for automated segmentation of brain tumor and sub-tumor parts, including necrosis, edema, and enhancing tumor. To further improve tumor segmentation, the four separately trained SegNet models are integrated by post-processing to produce four maximum feature maps by fusing the machine-learned feature maps from the fully convolutional layers of each trained model. The maximum feature maps and the pixel intensity values of the original MRI modalities are combined to encode interesting information into a feature representation. Taking the combined feature as input, a decision tree (DT) is used to classify the MRI voxels into different tumor parts and healthy brain tissue. Evaluating the proposed algorithm on the dataset provided by the Brain Tumor Segmentation 2017 (BraTS 2017) challenge, we achieved F-measure scores of 0.85, 0.81, and 0.79 for whole tumor, tumor core, and enhancing tumor, respectively. Experimental results demonstrate that using SegNet models with 3D MRI datasets and integrating the four maximum feature maps with pixel intensity values of the original MRI modalities has potential to perform well on brain tumor segmentation.

    Keywords brain tumor segmentation; multi-modal MRI; convolutional neural networks; fully convolutional networks; decision tree

    1 Introduction

    Glioma is one of the most common types of primary tumour occurring in the brain. Gliomas grow from glial cells and can be categorized into low and high grade gliomas. High grade gliomas (HGG) are more aggressive and highly malignant, with a life expectancy of at most two years, while low grade gliomas (LGG) can be benign or malignant, and grow more slowly, with a life expectancy of several years [1]. Accurate segmentation of brain tumor and surrounding tissues such as edema, enhancing tumor, non-enhancing tumor, and necrotic regions is an important factor in assessment of disease progression, therapy response, and treatment planning [2]. Multi-modal magnetic resonance imaging (MRI) is widely employed in clinical routine for diagnosis and for monitoring tumor progression. MRI has become one of the most popular imaging techniques as it facilitates tumour analysis by visualizing tumour spread; it also gives better soft tissue contrast than other techniques such as computed tomography (CT) and positron emission tomography (PET). Moreover, multi-modal MRI protocols are normally used to evaluate brain tumor tissues, as they can separate different tissues using a specific sequence based on tissue properties. For example, T1-weighted images are good at separating healthy tissues in the brain, while T1ce (contrast-enhanced) images help to separate tumor boundaries, which appear brighter because of the contrast agent. Edema around tumors is detected well in T2-weighted images, while FLAIR images are best for differentiating edema regions from cerebrospinal fluid (CSF) [3, 4].

    Gliomas have complex structure and appearance, and require accurate delineation in images. Tumor components are often diffuse, with weak contrast. Their borders are often fuzzy and hard to distinguish from healthy tissue (white matter, gray matter, and CSF), making them hard to segment [5]. All these factors make manual delineation time-consuming, expensive, and prone to operator bias. Automatic brain tumor segmentation using MRI would address these issues by providing an efficient tool for reliable diagnosis and prognosis of brain tumors. Therefore, many researchers have considered automated brain tumor segmentation from MRI images.

    Recently, convolutional neural networks (CNNs) have attracted attention in object detection, segmentation, and image classification. For the BraTS challenge, most CNN-based methods are patch-wise models [5-7]. These methods take only a small region as input to the network, which disregards global image content and label correlations. Additionally, these methods take a long time to train.

    Fully convolutional networks (FCNs) modify the CNN architecture in several ways. Specifically, instead of making patch-wise probability predictions as in a CNN, FCN models predict a probability distribution pixel-wise [8]. In the method of Ref. [9], different MRI modalities are stacked together as different input channels to deep learning models. However, the correlation between different MRI modalities was not explicitly considered. To overcome this problem, we develop a feature fusion method to select the most effective information from different modalities. A model is proposed to deal with multiple MRI modalities separately and then incorporate spatial and sequential features from them for 3D brain tumour segmentation.

    In this study, we first trained four SegNet models on 3D data sets with Flair, T1, T1ce, and T2 modalities as input data. The outputs of each SegNet model are four feature maps, which represent the scores of each pixel being classified as background, edema, enhancing tumor, and necrosis. The highest scores in the same class from the four SegNet models are extracted, giving four feature maps with the highest scores. These feature maps are combined with the pixel values of the original MRI modalities, and are taken as input to a DT classifier to further classify each pixel. Our results demonstrate that this proposed strategy can perform fully automatic segmentation of tumor and sub-tumor regions.

    The main contributions of this paper are as follows:

    · A brain tumour segmentation method that uses 3D information from the neighbors of the slice in question to increase segmentation accuracy for single-modality MR images.

    · Effective combination of features extracted from multi-modal MR images, maximizing the useful information from different modalities of MR images.

    · A decision tree-based segmentation method which incorporates features and pixel intensities from multi-modal MRI images, giving higher segmentation accuracy than single-modal MR images.

    · Evaluation on the BraTS 2017 dataset showing that the proposed method gives state-of-the-art results.

    2 Related work

    Many methods have been investigated for medical image analysis; promising results have been provided by computational intelligence and machine learning methods in medical image processing [10]. The problem of brain tumour segmentation from multimodal MRI scans is still a challenging task, although recently various advanced methods of automated segmentation have been proposed to solve this task.

    Here, we will review some of the relevant works on brain tumour segmentation. For machine learning methods other than deep learning, Gooya et al. [11], Zikic et al. [12], and Tustison et al. [13] present some typical works in this field. Discriminative learning techniques such as SVMs, decision forests, and conditional random fields (CRFs) have been reviewed in Ref. [2].

    One common aspect of classical discriminative models is that their implementation is based on predefined features, as opposed to deep learning models, which automatically learn a hierarchy of increasingly complex features directly from data, resulting in more robust features [5]. Pereira et al. [7] used two different CNNs for the segmentation of LGG and HGG. The architecture in Ref. [5] involves two pathways: a local pathway that focuses on the information in a pixel's neighborhood, and a global pathway that captures global contextual information from an MRI slice to perform accurate brain tumour segmentation. A dual-stream 11-layer network with a 3D fully connected CRF as post-processing was presented in Ref. [14]. An adapted version of DeepMedic with residual connections was employed for brain tumour segmentation in Ref. [15].

    Patch-wise methods involve many redundant convolutional calculations, yet explore only spatially limited contextual features. To avoid using patches, an FCN with deconvolution layers can be trained end-to-end, pixel-to-pixel, for pixel-wise prediction with the whole image as input [8]. Chang [16] demonstrated an algorithm that combines an FCN and a CRF. Shelhamer et al. [8] suggested using skip connections to join high-level features from deep decoding layers with appearance features from shallow encoding layers, to recover spatial information lost during downsampling. This method has demonstrated promising results on natural images and is also applicable to biomedical images [17]. Ronneberger et al. [9] and Çiçek et al. [18] used the U-Net architecture, which consists of a down-sampling path to capture contextual features and a symmetric up-sampling path that enables accurate localization, with a 3D extension. However, depth information is ignored by 2D approaches. Lai [19] used depth information by implementing a 3D convolution model which exploits the correlation between slices. However, a 3D convolution network requires a large number of parameters, and on a small dataset it is prone to overfitting.

    In Refs. [5, 20], the input data to the deep learning methods were treated as different modality channels, so the correlation between modalities is not well used. Our proposed method utilizes the correlations between different MRI modalities by processing 3D MRI data sets for each MRI modality separately with a SegNet model, combining the feature maps of the last deconvolution layer of each trained SegNet model with the pixel intensity values of the original MRI modalities, and feeding them into a classifier.

    3 Approach

    Our brain tumor segmentation algorithm aims to locate the entire tumor volume and accurately segment it into four sub-tumor parts. Our method has four main steps: a pre-processing step to construct 3D MRI datasets, a training step to fine-tune a pretrained SegNet for each MRI modality separately, a post-processing step to extract four maximum feature maps from the SegNet models' score maps, and a classification step to classify each pixel based on the maximum feature maps and the MRI pixel values. Figure 1 shows the pipeline of our proposed system using SegNet networks.

    3.1 Data pre-processing

    In our study, MRI intensity value normalization is important to compensate for MRI artifacts, such as motion and field inhomogeneity, and also to allow data from different scanners to be processed by a single algorithm. Therefore, we need to ensure that the value ranges match between patients and different modalities to avoid initial biases of the network.

    Firstly, to remove unwanted artifacts, N4ITK bias field correction is applied to all MRI modalities [21]. If this correction is not performed in the pre-processing step, artifacts cause many false positives, resulting in poor performance. Figure 2 shows the effect of applying bias field correction to an MR image. Higher intensity values, which can lead to false positives in the predicted output, are observed in the first scan near the bottom left corner. The second scan has better contrast near the edges after removal of the bias.

    Intensity values across MRI slices have been observed to vary greatly, so a normalization pre-processing step is applied in addition to bias field correction, to bring the mean intensity value and variance close to 0 and 1, respectively. Equation (1) shows how the normalized slice value In is computed:

    In = (I − μ)/σ    (1)

    where I is the original intensity value of the MRI slice, and μ and σ are the mean and standard deviation of I, respectively.

    Fig. 1 Pipeline of our brain tumour segmentation approach.

    Fig. 2 An MRI scan (a) before and (b) after N4ITK bias field correction.

    Additionally, removing the top and bottom 1% of intensity values during the normalization process brings the intensity values within a coherent range across all images for the training phase. To remove a significant portion of unnecessary zeros in the dataset, and to save training time by reducing the huge memory requirements of 3D data sets, we trimmed black parts of the image background from all modalities to obtain input images of size 192×192.
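    As a concrete illustration of this pre-processing, the snippet below clips the top and bottom 1% of intensities and then applies z-score normalization. It is a minimal sketch in plain Python (the paper's implementation is in MATLAB); the function and parameter names are our own, not from the authors' code, and the 192×192 cropping is a separate, trivial indexing step.

```python
import statistics

def preprocess_slice(pixels, clip_frac=0.01):
    """Clip the top/bottom 1% of intensities, then z-score normalize.

    `pixels` is a flat list of intensity values for one MRI slice.
    Illustrative sketch; names are not from the paper's MATLAB code.
    """
    ordered = sorted(pixels)
    k = int(len(ordered) * clip_frac)
    lo, hi = ordered[k], ordered[-k - 1]
    # Clamp outliers into [lo, hi] before normalizing.
    clipped = [min(max(p, lo), hi) for p in pixels]
    mu = statistics.fmean(clipped)
    sigma = statistics.pstdev(clipped)
    # Zero-mean, unit-variance output, as in Eq. (1).
    return [(p - mu) / sigma for p in clipped]
```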

    As shown in Fig. 1, the main step in pre-processing is 3D database construction. Since there are four modalities in the MRI dataset for each patient, we took them as four independent inputs. When processing the jth slice, we also use the (j−1)th and (j+1)th slices to take advantage of 3D image information. To do so, the three adjacent slices for each modality are taken as the three color channels of an image and used as 3D input.
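    The slice-stacking step can be sketched as follows. This is an illustrative Python fragment (names are ours); it simply skips the first and last slices as centres, since the paper does not state how volume boundaries are handled.

```python
def make_pseudo_3d_inputs(volume):
    """Pair each slice with its two neighbours as three 'colour' channels.

    `volume` is a list of 2D slices for one modality; slice j is stacked
    with slices j-1 and j+1, giving one 3-channel input per interior slice.
    """
    inputs = []
    for j in range(1, len(volume) - 1):
        inputs.append((volume[j - 1], volume[j], volume[j + 1]))
    return inputs
```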

    3.2 Brain tumor image segmentation by SegNet networks

    The semantic segmentation model in Fig. 3 takes full-size images as input for feature extraction in an end-to-end manner. The pretrained SegNet is used, and its parameters are fine-tuned using images with manually annotated tumor regions. In testing, the final SegNet model is used to create predicted segmentation masks for tumor regions in unseen images. The motivation for using SegNet instead of other deep learning networks is that SegNet has a small number of parameters, does not need high computational resources like DeconvNet [23], and is easier to train end-to-end. Moreover, in a U-Net network [9], entire feature maps from the encoders are transferred to the corresponding up-sampling decoders and concatenated to give the decoder feature maps, which leads to high memory requirements, whereas SegNet reuses only the pooling indices, needing less memory.

    Fig. 3 (a) Architecture of the SegNet; (b) SegNet which uses max pooling indices to up-sample the feature maps and convolve them with a trainable decoder filter bank [22].

    In our network architecture, the main idea adopted from FCN is to change the fully connected layers of VGG-16 into convolutional layers. This not only helps retain higher-resolution feature maps at the deepest encoder output, but also significantly reduces the number of parameters in the SegNet encoder network (from 134M to 14.7M). This enables the classification network to output a dense feature map which preserves spatial information [22].

    The SegNet architecture consists of a downsampling (encoding) path and a corresponding upsampling (decoding) path, followed by a final pixel-wise classification layer. In the encoder path, there are 13 convolutional layers, which match the first 13 convolutional layers of the VGG-16 network. Each encoder layer has a corresponding decoder layer; therefore, the decoder network also has 13 convolutional layers. The output of the final decoder layer is fed into a multi-class soft-max classifier to produce class probabilities for each pixel independently.

    The encoder path consists of five convolution blocks, each of which is followed by a max-pooling operation with a 2×2 window and stride 2 for downsampling. Each convolution block is constructed from several layers of 3×3 convolution combined with batch normalization and an element-wise rectified linear unit (ReLU). There are two layers in each of the first two convolution blocks, and three layers in each of the next three blocks. The decoder path has a structure symmetric to the encoder path, except that the max-pooling operation is replaced by an upsampling operation. Upsampling takes the outputs of the previous layer and the max-pooling indices of the corresponding encoding layer as input. The output of the final decoder, which is a high-dimensional feature representation, is fed into a soft-max classifier layer, which classifies each pixel independently; see Fig. 3. The output of the soft-max classifier is a K-channel image, where K is the number of desired classes, with a probability value at each pixel.
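    The distinctive part of this design is upsampling via stored max-pooling indices. The toy functions below mimic that mechanism on plain Python 2D lists: pooling records where each maximum came from, and the decoder places values back at exactly those positions (zeros elsewhere, to be densified by subsequent convolutions). This is an illustrative sketch, not the SegNet implementation.

```python
def max_pool_2x2(x):
    """2x2 max pooling with stride 2, also returning argmax positions,
    mimicking how SegNet stores pooling indices for its decoder.
    `x` is a 2D list with even height and width."""
    h, w = len(x), len(x[0])
    pooled, indices = [], []
    for i in range(0, h, 2):
        prow, irow = [], []
        for j in range(0, w, 2):
            window = [(x[i + di][j + dj], (i + di, j + dj))
                      for di in (0, 1) for dj in (0, 1)]
            val, pos = max(window)          # value and where it came from
            prow.append(val)
            irow.append(pos)
        pooled.append(prow)
        indices.append(irow)
    return pooled, indices

def unpool_2x2(pooled, indices, h, w):
    """SegNet-style upsampling: place each pooled value back at its
    stored index; every other position stays zero."""
    out = [[0] * w for _ in range(h)]
    for prow, irow in zip(pooled, indices):
        for val, (i, j) in zip(prow, irow):
            out[i][j] = val
    return out
```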

    3.3 Post-processing

    As described in Section 3.2, four SegNet models are adapted and trained separately for segmentation of brain tumors from multi-modal MR images. The earlier layers of the SegNet models learn simple features like circles and edges, while the deeper layers learn complex and useful finer features. The machine-learned features in the last deconvolution layer of each SegNet model form four score maps, corresponding to the four classification labels (background, necrosis, edema, and enhancing tumor). From the resulting 16 feature maps, four maximum score maps are constructed by taking, for each label, the highest score across the four models. The values in each maximum feature map represent strong features that include all hierarchical features (at higher resolution), helping to increase classification performance. To further increase the information available for classification, a feature vector is generated by combining the four maximum score maps with the pixel intensity values of the original MRI modalities. Finally, the encoded feature vector is fed to a DT classifier to classify each MRI voxel into tumor and sub-tumor parts. The reason for using a DT as the classifier in this work is that it has been shown to provide high performance for brain tumour segmentation [2]. The selection process for the maximum feature maps and their location in the SegNet architecture are illustrated in Fig. 4.
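    Per voxel, the fusion step amounts to taking, for each class, the maximum score across the four models and appending the voxel's four modality intensities. A minimal Python illustration (names are ours, not from the paper's MATLAB code):

```python
def fuse_features(score_maps, intensities):
    """Build the per-voxel feature vector for the DT classifier.

    `score_maps[m][c]` is model m's score for class c (background,
    necrosis, edema, enhancing tumor) at one voxel; `intensities`
    holds that voxel's value in Flair, T1, T1ce, and T2.
    Output: 4 maximum class scores followed by 4 intensities.
    """
    n_classes = len(score_maps[0])
    max_scores = [max(model[c] for model in score_maps)
                  for c in range(n_classes)]
    return max_scores + list(intensities)
```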

    3.4 SegNetMax DT

    As described above, the four maximum score maps are combined with pixel intensity values to form feature vectors, which are presented to a DT classifier. In this phase, the maximum number of splits or branch points is specified to control the depth of the tree. Different tree depths were examined and tuned on the training datasets; optimal generalization and accuracy were obtained with a tree of depth 15. 5-fold cross-validation was used to evaluate the classification accuracy.

    3.5 Training and implementation details

    The proposed algorithm was implemented using MATLAB 2018a and run on a PC with an Intel Core i7 CPU and 16 GB RAM under Windows 7. Our implementation was based on the MATLAB deep learning toolbox for semantic segmentation and its classification learner toolbox for training the DT classifier. The whole training process for each model took approximately 3 days on a single NVIDIA Titan XP GPU. We minimized the loss function on the training set using stochastic gradient descent, with parameters set as follows: learning rate = 0.0001, maximum number of epochs = 80.

    4 Experiments and results

    Fig. 4 Selection process of maximum feature maps. (a) Background. (b) Edema. (c) Enhancing tumor. (d) Necrosis. (e) Maximum feature maps.

    All 285 patient subjects with HGG and LGG in the BraTS 2017 dataset were included in this study [2, 24]. 75% of the patients (158 HGG and 57 LGG) were used to train the deep learning model and 25% (52 HGG and 18 LGG) were assigned to the testing set. For each patient, there were four types of MRI sequences (Flair, T1, T1ce, and T2). All images were segmented manually by one to four raters, using three labels (1: necrotic and non-enhancing tumor, 2: peritumoral edema, 4: GD-enhancing tumor). The segmentation ground truth for each subject was approved by experienced neuro-radiologists. Figure 5 shows the MRI modalities and their ground truth.

    The model performance was evaluated on the test set. For practical clinical applications, the tumor structures are grouped into three different tumor regions defined by

    · The complete tumor region, including all intra-tumor classes (necrosis and non-enhancing tumor, edema, and enhancing tumor; labels 1, 2, and 4).

    · The core tumor region (as above but excluding edema regions, labels 1 and 4).

    · The enhancing tumor region (only label 4).
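    This grouping of the annotation labels into evaluation regions can be written down directly; the snippet below is an illustrative sketch using the label conventions listed above.

```python
# BraTS label conventions: 1 = necrosis/non-enhancing, 2 = edema, 4 = enhancing.
REGIONS = {
    "whole":     {1, 2, 4},   # complete tumor
    "core":      {1, 4},      # whole minus edema
    "enhancing": {4},
}

def region_mask(labels, region):
    """Binary mask for one evaluation region from per-voxel labels."""
    wanted = REGIONS[region]
    return [1 if lab in wanted else 0 for lab in labels]
```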

    For each tumor region, the segmentation results were evaluated quantitatively using the F-measure, which measures the overlap between the manually defined brain tumor regions and the segmentation predictions of the fully automatic method, as follows:

    F-measure = 2|P ∩ T| / (|P| + |T|)

    where P is the set of voxels predicted to belong to the region and T is the corresponding manually delineated region.
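    Given a binary mask for a predicted region and for the corresponding ground truth, this overlap-based F-measure (equivalent to the Dice coefficient in the binary setting) can be computed as below; a minimal sketch with illustrative names.

```python
def f_measure(pred_mask, gt_mask):
    """Overlap between predicted and ground-truth binary masks:
    F = 2|P ∩ T| / (|P| + |T|).
    Masks are flat 0/1 lists over the same voxels; assumes at least
    one positive voxel in the two masks combined."""
    inter = sum(p and t for p, t in zip(pred_mask, gt_mask))
    return 2 * inter / (sum(pred_mask) + sum(gt_mask))
```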

    From our preliminary results, we observed that our 3D model can detect brain tumors accurately even though we trained each MRI modality separately instead of combining the four MRI modalities as input, as in other studies. The high accuracy comes from the fact that the network architecture is able to capture fine 3D details of tumor regions from adjacent MRI slices (j−1, j, j+1) of the same modality. Consequently, the convolutional layers can extract more features, which is extremely helpful in improving brain tumor segmentation performance. Moreover, relatively accurate brain tumor segmentation was achieved by combining the four maximum feature maps with the pixel intensity values of the original MRI images. The score maps are obtained from the last deconvolution layer of each SegNet model because this layer includes all hierarchical features containing finer details (at higher resolution), which gives accurate brain tumor detection results.

    Table 1 gives evaluation results for the proposed method on the BraTS 2017 Training dataset for four MRI modalities, while Table 2 compares our method with other methods.

    From Table 1 it can be seen that SegNetMax DT performs better than the individual SegNet models. As explained in Section 3.4, only the highest scores for each specific sub-tumour region are selected for classification, which is why the highest accuracy is obtained using SegNetMax DT.

    Table 1 Segmentation results for the BraTS 2017 dataset

    Fig. 5 (a) Whole tumor visible in FLAIR; (b) tumor core visible in T2; (c) enhancing and necrotic tumor component structures visible in T1ce; (d) final labels of the observable tumor structures noticeable: edema (yellow), necrotic/cystic core (light blue), enhancing core (red).

    Table 2 Comparison of our method and other methods on the BraTS 2017 dataset

    Table 2 shows that our method gives better results for core and enhancing tumor segmentation, though its complete tumor segmentation accuracy is not better than that of Refs. [25] and [26]. This is because our method has a relatively low detection accuracy for edema. However, we consider the core and enhancing regions to be much more important than the edema region, so it is worth sacrificing some accuracy in edema detection to increase the accuracy of core and enhancing tumour detection.

    Figure 6 shows some visual semantic segmentation results of the SegNet models and the SegNetMax DT method from an axial view.

    5 Discussion and conclusions

    Fig. 6 Segmentation results of SegNet models and the SegNetMax DT method. (a)-(g) MRI slices, ground truth, SegNet1 (Flair), SegNet2 (T1), SegNet3 (T1ce), SegNet4 (T2), and SegNetMax DT, respectively.

    In this study, the publicly available BraTS 2017 dataset was used. A DT and four SegNet models were trained on the same training dataset, which includes ground truth; a testing dataset without ground truth was used for system evaluation. Our experiments show that the SegNet architectures with 3D datasets and the post-processing presented in this work can efficiently and automatically segment brain tumors, completing segmentation of an entire volume in four seconds on a GPU-optimized workstation. However, some models, such as those trained on T1 and T2 (SegNet2 and SegNet4), do not give accurate results, because the T1 and T2 MRI modalities only give information related to healthy tissue and the whole tumor rather than sub-parts of the tumor such as necrosis and enhancing tumor. To tackle this problem, the maximum feature maps from all SegNet models were combined, so that only strong and useful features from all SegNet models are presented to the classifier. The four MRI modalities were trained separately for several reasons. Firstly, different modalities have different features, so it is faster to train them using different simple models rather than one complex model. Secondly, specific features directly related to the modality of each SegNet model can be extracted, providing clinicians with modality-specific information. Finally, one of the most common MRI limitations is the prolonged scan time required to acquire multiple MRI modalities, so depending on a single modality to detect a brain tumor can sometimes be a good way to save time in clinical applications.

    It is worth mentioning that in the proposed method the training stage is time-consuming, which could be considered a limitation, but the prediction phase rapidly processes the testing dataset to provide semantic segmentation and classification. Although our method can segment core and enhancing tumors better than state-of-the-art methods, it is not better at segmenting complete tumors. However, further post-processing techniques could improve the accuracy of our method, and the SegNet models could be saved as trained models and refined using additional training datasets. Consequently, a longitudinal study using different FCN and CNN architectures should be undertaken over time to increase the performance of the proposed system.

    Acknowledgements

    We would like to thank NVIDIA for their kind donation of a Titan XP GPU.
