
    Construction of a convolutional neural network classifier developed by computed tomography images for pancreatic cancer diagnosis

    World Journal of Gastroenterology, 2020, Issue 34

    Han Ma, Zhong-Xin Liu, Jing-Jing Zhang, Feng-Tian Wu, Cheng-Fu Xu, Zhe Shen, Chao-Hui Yu, You-Ming Li

    Abstract

    Key Words: Deep learning; Convolutional neural networks; Pancreatic cancer; Computed tomography

    INTRODUCTION

    Pancreatic ductal adenocarcinoma (PDAC), commonly referred to as “pancreatic cancer”, is the most common solid malignancy of the pancreas; it is aggressive and challenging to treat[1]. Pancreatic cancer is a highly lethal malignancy with a very poor prognosis[2]. Despite recent advances in surgical techniques, chemotherapy, and radiation therapy, the 5-year survival rate remains a dismal 8.7%[3]. Most patients with pancreatic cancer have nonspecific symptoms, and the disease is often found at an advanced stage. Only 10%-20% of patients present with localized disease, at which stage complete surgical resection and chemotherapy offer the best chance of survival, with a 5-year survival rate of approximately 31.5%. The remaining 80%-90% of patients miss the chance to benefit from surgery because of general or local metastases at the time of diagnosis[4,5].

    Currently, effective early diagnosis remains difficult and depends mainly on imaging modalities[6]. Compared with ultrasonography, magnetic resonance imaging (MRI), endoscopic ultrasonography, and positron emission tomography, computed tomography (CT) is the most commonly used imaging modality for the initial evaluation of suspected pancreatic cancer[7,8]. CT scans are also used to screen asymptomatic patients at high risk of developing pancreatic cancer. Patients with pancreatic cancer who are diagnosed incidentally during an imaging examination for an unrelated disease have a longer median survival time than those who are already symptomatic[9]. The sensitivity of CT for detecting pancreatic adenocarcinoma ranges from 70% to 90%[10]. The protocol of choice for pancreatic cancer diagnosis is thin-section, contrast-enhanced, dual-phase multidetector computed tomography[11].

    Recently, owing to promising achievements in deep neural networks and increasing medical needs, computer-aided diagnosis (CAD) systems have become a new research focus. There have been some initial successes in applying deep learning to assess radiological images; deep learning-aided decision-making has been used to support pulmonary nodule and skin tumor diagnoses[12,13]. Given the high morbidity of pancreatic cancer, efforts should be made to develop CAD systems that distinguish pancreatic cancer from benign tissue, and an advanced discrimination method for pancreatic cancer is therefore needed. A convolutional neural network (CNN) is a class of neural network models that can extract features from images by exploiting the local spatial correlations present in images. CNN models have been shown to be effective and powerful for a variety of image classification problems[14].

    In this study, we demonstrated that a deep learning method can achieve pathologically certified pancreatic ductal adenocarcinoma classification using clinical CT images.

    MATERIALS AND METHODS

    Data collection and preparation

    Dataset: Between June 2017 and June 2018, patients with pathologically diagnosed pancreatic cancer at the First Affiliated Hospital, Zhejiang University School of Medicine, China, were eligible for inclusion in the present study. Patients with a CT-confirmed normal pancreas were also randomly collected during the same period. All data were retrospectively obtained from patients’ medical records. Images of pancreatic cancers and normal pancreases were extracted from the database. All cancer diagnoses were based on pathological examinations, either by pancreatic biopsy or by surgery (Figure 1). Participants gave informed consent to allow the data collected from them to be published. Because of the retrospective study design, we informed all the participants verbally, and patients who did not want their information to be shared could opt out. Subject information was anonymized at the collection and analysis stages. All methods were performed in accordance with the approved guidelines, and the Hospital Ethics Committee approved the study protocol. A total of 343 patients were pathologically diagnosed with pancreatic cancer from June 2017 to June 2018. Of these patients, 222 underwent abdominal enhanced CT in our hospital before surgery or biopsy. We randomly collected 190 patients with a normal pancreas who underwent enhanced CT. Thus, among the 412 enrolled subjects, 222 were pathologically diagnosed with pancreatic cancer, and the remaining 190 with a normal pancreas were included as a control group.

    Imaging techniques: Multiphasic CT was performed following a pancreas protocol on a 256-channel multidetector row CT scanner (Siemens). The scanning protocol included unenhanced and contrast material-enhanced biphasic imaging in the arterial and venous phases after intravenous administration of 100 mL of ioversol at a rate of 3 mL/s using an automated power injector. Images were reconstructed at 5.0-mm thickness. For each CT scan, one to nine images of the pancreas were selected from each phase. Finally, datasets of 3494 CT images from 222 patients with pathologically confirmed pancreatic cancer and 3751 CT images from 190 patients with a normal pancreas were collected.

    Deep learning technique

    Figure 1 Examples from the dataset.

    Data preprocessing: We adopted a CNN model to classify the CT images. A CNN requires the input images to be the same size. Thus, we first cropped each CT image starting at the center to transform it into a fixed 512 × 512 resolution. Each image was stored in the RGB color model, which is a model with red, green, and blue light merged together to reproduce multiple colors, and thus consisted of three-color channels (i.e., red, green, and blue). We normalized each channel of every image using 0.5 as the mean and the standard deviation. This normalization was performed because all the images were processed by the same CNN, and the results might improve if the feature values of the images were scaled to a similar range.
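    Below is a minimal preprocessing sketch, assuming a PyTorch/torchvision pipeline (not the authors' actual code), that center-crops each slice to 512 × 512 and normalizes the three channels with a mean and standard deviation of 0.5:

    ```python
    # Preprocessing sketch (assumed torchvision pipeline, not the authors' exact code).
    # Center-crops each CT slice to 512 x 512 and normalizes the RGB channels
    # with mean 0.5 and standard deviation 0.5, as described above.
    from torchvision import transforms

    preprocess = transforms.Compose([
        transforms.CenterCrop(512),                     # crop around the image center to 512 x 512
        transforms.ToTensor(),                          # HWC uint8 [0, 255] -> CHW float [0, 1]
        transforms.Normalize(mean=[0.5, 0.5, 0.5],      # per-channel normalization
                             std=[0.5, 0.5, 0.5]),
    ])

    # Example usage on a single slice stored as an RGB image file:
    # from PIL import Image
    # image = Image.open("ct_slice.png").convert("RGB")   # hypothetical file name
    # tensor = preprocess(image)                          # shape: (3, 512, 512)
    ```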

    CNN: In this work, we designed a CNN model to classify the pancreatic CT images to assist in pancreatic cancer diagnosis. The architecture of our proposed CNN model is presented in Figure 2. Our model consisted primarily of three convolutional layers and a fully connected layer. Each convolutional layer was followed by a batch normalization (BN) layer that normalized the outputs of the convolutional layer, a rectified linear unit (ReLU) layer that applied an activation function to its input values, and a max-pooling layer that conducted a down-sampling operation. We also adopted an average-pooling layer before the fully connected layer to reduce the dimensions of the feature values fed into the fully connected layer. Following the work by Srivastava et al[15], a dropout rate of 0.5 was used between the average-pooling layer and the fully connected layer to avoid overfitting and improve performance. We also tried Spatial Dropout[16] between each max-pooling layer and its following convolutional layer but found that such dropouts degraded performance, so we did not apply Spatial Dropout. As input, the network takes the pixel values of a CT image, and it outputs the probability that the image belongs to a certain class (e.g., the probability that the corresponding patient has pancreatic cancer). The CT images were fed into our model layer by layer: the input to each layer is the output of the previous layer, and each layer performs specific transformations on its input values and passes the processed values to the next layer.
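    For illustration, the following sketch shows how an architecture like the one described above could be written in PyTorch; the channel counts and kernel sizes are assumptions, since the exact hyper-parameters are reported in the Supplementary Material:

    ```python
    # Illustrative sketch of a small CNN matching the description above: three blocks of
    # Conv -> BatchNorm -> ReLU -> MaxPool, then average pooling, dropout 0.5, and a
    # fully connected layer. Channel counts and kernel sizes are assumptions.
    import torch.nn as nn

    class PancreasCNN(nn.Module):
        def __init__(self, num_classes: int = 2):
            super().__init__()

            def block(in_ch, out_ch):
                return nn.Sequential(
                    nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                    nn.BatchNorm2d(out_ch),
                    nn.ReLU(inplace=True),
                    nn.MaxPool2d(kernel_size=2),
                )

            self.features = nn.Sequential(
                block(3, 16),    # 512 -> 256
                block(16, 32),   # 256 -> 128
                block(32, 64),   # 128 -> 64
            )
            self.avgpool = nn.AdaptiveAvgPool2d(1)   # reduce each feature map to one value
            self.dropout = nn.Dropout(p=0.5)         # dropout between pooling and the FC layer
            self.fc = nn.Linear(64, num_classes)     # 2 for binary, 3 for ternary classification

        def forward(self, x):
            x = self.features(x)
            x = self.avgpool(x).flatten(1)
            x = self.dropout(x)
            return self.fc(x)                        # raw logits; softmax gives class probabilities
    ```

    In such a sketch, switching between a binary and a ternary classifier only requires changing the output size of the fully connected layer, mirroring the flexibility described below.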

    Figure 2 Architecture of our convolutional neural network model.

    The convolutional layers and the pooling layers require several hyper-parameters, whose settings are given in the Supplementary Material. There, we also discuss these layers in sequence: first the convolutional layer, then the batch normalization (BN) layer, the rectified linear unit (ReLU) layer, the max-pooling and average-pooling layers, and finally the fully connected layer, followed by the hyper-parameter settings for our model.

    Training and testing the CNN: We collected three types of CT images: Plain scan, venous phase, and arterial phase, and built three datasets from the collected images based on the image type. Each dataset may include several images collected from one patient. To divide a dataset into training, validation, and test sets, we first collected the identity documents (IDs) of all the patients in the dataset. Each patient was labeled as “no cancer (NC)”, “with cancer at the tail and/or body of the pancreas (TC)”, or “with cancer at the head and/or neck of the pancreas (HC)”. For each label, e.g., “no cancer”, we randomly placed 10% of the patients with this label into the validation set, 10% into the test set, and the remaining 80% into the training set. Notably, images of the same patient appear in only one set.
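    A minimal sketch of such a patient-level split (an assumed helper, not the authors' code) could look as follows:

    ```python
    # Patient-level split sketch: for each label (NC, TC, HC), 10% of patients go to
    # validation, 10% to test, and 80% to training, so all images from one patient
    # end up in exactly one set.
    import random
    from collections import defaultdict

    def split_by_patient(patient_labels, seed=42):
        """patient_labels: dict mapping patient ID -> label in {"NC", "TC", "HC"}."""
        rng = random.Random(seed)
        by_label = defaultdict(list)
        for pid, label in patient_labels.items():
            by_label[label].append(pid)

        train, val, test = [], [], []
        for label, pids in by_label.items():
            rng.shuffle(pids)
            n_val = n_test = len(pids) // 10          # 10% validation, 10% test per label
            val.extend(pids[:n_val])
            test.extend(pids[n_val:n_val + n_test])
            train.extend(pids[n_val + n_test:])       # remaining ~80% for training
        return train, val, test
    ```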

    All patients and their CT images were marked with one of the three labels, i.e., “no cancer”, “with cancer in the tail of the pancreas”, and “with cancer in the head of the pancreas”. For each dataset, we could treat the TC and HC patients together as “with cancer (CA)”. Then, we trained a binary classifier to classify all the CT images. We also trained a ternary classifier to determine the specific cancer location. Our proposed approach was flexible enough to be used as either a binary classifier or a multiple-class classifier; we needed only to specify the hyper-parameter of the fully connected layer to control the classifier type.

    Given a dataset and the number of target classes (denoted as n), we trained our model on the training set with a mini-batch size of 32. After each training iteration, we used the cross-entropy loss function to calculate the loss between the predicted results (i.e., the probability distribution P output by the fully connected layer) of our model and the ground truth (denoted as G), computed as Formula 1: Loss(P, G) = −Σi Gi log(Pi), where the index i ranges over the n target classes.

    This loss was used to guide the updates of the weights in our CNN model; we used Adam as the optimizer. The statistics of each dataset are presented in Table 1.

    After updating the model, we calculated the accuracy (see section Evaluation below) of the new model on the validation set to assess the quality of the current model. We trained our model for a maximum of 100 epochs, and the model with the highest accuracy on the validation set was selected as the final model. A 10-fold cross-validation process was used to evaluate our techniques. We randomly divided the images in each phase into 10 folds, 8 of which were used for training, 1 served as the validation set, and the remaining one was used to test the model. The entire process was repeated 10 times, so that each fold was used as the test set once, and the average performance was recorded. We evaluated the effectiveness of our CNN model on the test sets in terms of accuracy, precision, and recall (see section Evaluation below).
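    As a rough sketch of this training procedure, assuming the PyTorch model sketched above and standard DataLoader objects (none of which are the authors' actual code), the loop below combines the cross-entropy loss, the Adam optimizer, and validation-based model selection:

    ```python
    # Training sketch: mini-batches, cross-entropy loss, Adam optimizer, up to 100 epochs,
    # and selection of the model with the best validation accuracy.
    import copy
    import torch
    import torch.nn as nn

    def evaluate_accuracy(model, loader, device):
        """Fraction of correctly classified images in a data loader."""
        model.eval()
        correct, total = 0, 0
        with torch.no_grad():
            for images, labels in loader:
                preds = model(images.to(device)).argmax(dim=1)
                correct += (preds == labels.to(device)).sum().item()
                total += labels.size(0)
        return correct / total

    def train(model, train_loader, val_loader, device, max_epochs=100):
        model.to(device)
        criterion = nn.CrossEntropyLoss()            # cross-entropy between prediction P and ground truth G
        optimizer = torch.optim.Adam(model.parameters())

        best_acc, best_state = 0.0, None
        for epoch in range(max_epochs):
            model.train()
            for images, labels in train_loader:      # mini-batches (e.g., size 32)
                images, labels = images.to(device), labels.to(device)
                optimizer.zero_grad()
                loss = criterion(model(images), labels)
                loss.backward()
                optimizer.step()

            val_acc = evaluate_accuracy(model, val_loader, device)
            if val_acc > best_acc:                   # keep the best model on the validation set
                best_acc, best_state = val_acc, copy.deepcopy(model.state_dict())

        model.load_state_dict(best_state)
        return model
    ```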

    Table 1 Statistics of our datasets

    Evaluation: We evaluated our approach on the three datasets in terms of both binary and ternary classification, measuring its effectiveness with widely adopted classification metrics: Accuracy, precision, and recall. Accuracy is the proportion of images that are correctly classified (denoted as TP) among all images (denoted as All) over all classes. The precision for class Ci is the proportion of images correctly classified as class Ci (denoted as TPi) among all images classified as class Ci (denoted as TPi + FPi). The recall for class Ci is the proportion of images correctly classified as class Ci (denoted as TPi) among all images that actually belong to class Ci (denoted as Alli). These metrics are calculated as follows:

    Accuracy = TP/All; Precision for class Ci = TPi/(TPi + FPi); Recall for class Ci = TPi/Alli.

    We relied mainly on accuracy because it measures the overall quality of a classifier on all classes rather than on a single class Ci. In cancer detection, the class-wise recall and precision are interpreted as follows:

    Sensitivity = Recall in cancer detection = (The correctly predicted malignant lesions)/(All the malignant lesions);

    Specificity = Recall in detecting noncancer = (The correctly predicted nonmalignant cases)/(All non-malignant cases);

    Precision in cancer detection = (The correctly predicted malignant lesions)/(All images classified as malignant).
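    These metrics can be computed directly from a confusion matrix; the sketch below (plain NumPy, with illustrative class indices) follows the definitions above:

    ```python
    # Metric sketch: accuracy, per-class precision, and per-class recall from a confusion matrix.
    import numpy as np

    def classification_metrics(y_true, y_pred, num_classes):
        cm = np.zeros((num_classes, num_classes), dtype=int)
        for t, p in zip(y_true, y_pred):
            cm[t, p] += 1                                        # rows: true class, columns: predicted class

        accuracy = np.trace(cm) / cm.sum()                       # TP / All over all classes
        precision = np.diag(cm) / np.maximum(cm.sum(axis=0), 1)  # TPi / (TPi + FPi)
        recall = np.diag(cm) / np.maximum(cm.sum(axis=1), 1)     # TPi / Alli
        return accuracy, precision, recall

    # For the binary case with class 1 = "with cancer" and class 0 = "no cancer":
    # sensitivity = recall[1]  (recall in cancer detection)
    # specificity = recall[0]  (recall in detecting non-cancer)
    ```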

    Evaluation between deep learning and gastroenterologists

    Ten board-certified gastroenterologists and 15 trainees participated in the study, and the accuracy of their image classifications was compared with the predictions of the deep learning technique. Each gastroenterologist or trainee classified the same set of 100 plain-scan images randomly selected from the test dataset of the deep learning technique. The human response time was approximately 10 s per image. The classifications made by the board-certified gastroenterologists and trainees were compared with the results of the deep learning model.

    Statistical analysis

    We performed statistical analyses using SPSS 13.0 for Windows (SPSS, Chicago, IL, United States). Continuous variables are expressed as mean ± SD and were compared using Student’s t-test. The χ2 test was used to compare categorical variables. A value of P < 0.05 (two-tailed) was considered statistically significant.
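    For readers without SPSS, an equivalent analysis could be sketched with SciPy as follows; the numerical values are placeholders for illustration only, not the study data:

    ```python
    # Statistical-analysis sketch using SciPy rather than SPSS (placeholder data).
    import numpy as np
    from scipy import stats

    # Student's t-test for a continuous variable (e.g., age) between two groups:
    age_cancer = np.array([63, 65, 70, 58, 61])     # placeholder values
    age_control = np.array([60, 59, 66, 55, 62])    # placeholder values
    t_stat, p_value = stats.ttest_ind(age_cancer, age_control)

    # Chi-square test for categorical variables (e.g., correct vs incorrect classifications
    # by two groups of readers), given as a 2 x 2 contingency table of placeholder counts:
    table = np.array([[90, 10],
                      [75, 25]])
    chi2, p, dof, expected = stats.chi2_contingency(table)

    # A two-tailed P < 0.05 is considered statistically significant.
    ```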

    RESULTS

    Characteristics of the study participants

    Among the 412 enrolled subjects, 222 were pathologically diagnosed with pancreatic cancer, and 190 with a normal pancreas were included as a control group. The characteristics of the enrolled participants, classified by the presence or absence of pancreatic cancer, are shown in Table 2. The mean age was 63.8 ± 8.7 years in the cancer group (range, 39-86 years; 124 men/98 women) and 61.0 ± 12.3 years in the non-cancer group (range, 35-83 years; 98 men/92 women). The two groups had no significant differences in age or gender (P > 0.05). In the cancer group, 129 tumors were located in the head and neck of the pancreas and 93 in the tail and body of the pancreas. The median tumor size in the cancer group was 3.5 cm (range, 2.7-4.3 cm).

    Performance of the deep convolutional neural network used as a binary classifier

    Datasets of 3494 CT images obtained from 222 patients with pathologically confirmed pancreatic cancer and 3751 CT images from 190 patients with a normal pancreas were included; the statistics of each dataset are presented in Table 1. We labeled each CT image as “with cancer” or “no cancer”. We then constructed a binary classifier using our CNN model with 10-fold cross-validation on 2094, 2592, and 2559 images in the plain scan, arterial phase, and venous phase, respectively (Table 1).

    The overall diagnostic accuracy of the CNN was 95.47%, 95.76%, and 95.15% on the plain scan, arterial phase, and venous phase, respectively. The sensitivity of the CNN (known as recall in cancer detection - the correctly predicted malignant lesions divided by all the malignant lesions) was 91.58%, 94.08%, and 92.28% on the plain scan, arterial phase, and venous phase images, respectively. The specificity of the CNN (known as recall in detecting non-cancer - the correctly predicted nonmalignant cases divided by all nonmalignant cases) was 98.27%, 97.57% and 97.87% on the three phases, respectively. The results are summarized in Table 3.

    The differences in accuracy, specificity, and sensitivity among the three phases were not significant (χ2 = 0.346, P = 0.841; χ2 = 0.149, P = 0.928; χ2 = 0.914, P = 0.633, respectively). The sensitivity of the model is considerably more important than its specificity and accuracy, because the purpose of a CT scan is cancer detection. Compared with the arterial and venous phases, the plain phase offered comparable sensitivity, easier acquisition, and lower radiation exposure. Thus, these results indicate that the plain scan alone might be sufficient for the binary classifier.

    Comparison between CNN and gastroenterologists for the binary classification

    Table 4 shows the results of the image evaluation of the test data by ten board-certified gastroenterologists and 15 trainees. The accuracy, sensitivity, and specificity in the plain phase were 81.0%, 84.4%, and 80.4%, respectively. The gastroenterologist group had significantly higher accuracy (92.2% vs 73.6%, P < 0.05), specificity (92.1% vs 79.2%, P < 0.05), and sensitivity (92.3% vs 72.5%, P < 0.001) than the trainees.

    As described in the Methods section, ten board-certified gastroenterologists and 15 trainees participated in the study, and their image classification accuracy was compared with that of the deep learning technique used as a binary classifier. The accuracy of the gastroenterologists, trainees, and the CNN was 92.20%, 73.60%, and 95.47%, respectively. Both the CNN and the board-certified gastroenterologists achieved higher accuracy than the trainees (χ2 = 21.534, P < 0.001; χ2 = 9.524, P < 0.05, respectively). However, the difference between the CNN and the gastroenterologists was not significant (χ2 = 0.759, P = 0.384). Figure 3 shows the receiver operating characteristic (ROC) curves for the binary classification of the plain scan.
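    An ROC curve and AUC such as those shown in Figure 3 can be obtained from the model's predicted cancer probabilities; the sketch below uses scikit-learn with placeholder labels and scores:

    ```python
    # ROC/AUC sketch for the binary classifier (placeholder labels and predicted probabilities).
    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                    # placeholder ground-truth labels
    y_score = np.array([0.1, 0.4, 0.9, 0.8, 0.7, 0.2, 0.6, 0.3])   # placeholder P(cancer) outputs

    fpr, tpr, thresholds = roc_curve(y_true, y_score)   # points along the ROC curve
    auc = roc_auc_score(y_true, y_score)                # area under the ROC curve
    ```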

    Performance of the deep convolutional neural network as a ternary classifier

    We also trained a ternary classifier using our CNN model and evaluated it by 10-fold cross-validation (Table 1). The overall diagnostic accuracy of the ternary classifier was 82.06%, 79.06%, and 78.80% on the plain scan, arterial phase, and venous phase, respectively. The sensitivity for detecting cancers in the tail of the pancreas was 52.51%, 41.10%, and 36.03% on the three phases, respectively. The sensitivity for detecting cancers in the head of the pancreas was 46.21%, 85.24%, and 72.87% on the three phases, respectively.

    The differences in accuracy and specificity among the three phases were not significant (χ2 = 1.074, P = 0.585; χ2 = 0.577, P = 0.749). The difference in sensitivity for cancers in the head of the pancreas among the three phases was significant (χ2 = 16.651, P < 0.001), with the arterial phase having the highest sensitivity. However, the difference in sensitivity for cancers in the tail of the pancreas among the three phases was not significant (χ2 = 1.841, P = 0.398). The results are summarized in Table 5.

    Table 2 Characteristics of study participants

    Table 3 Performance of the binary classifiers

    Table 4 Diagnostic accuracy of the binary classifiers in plain scan: Convolutional neural network vs gastroenterologists and trainees

    DISCUSSION

    In this study, we developed an efficient pancreatic ductal adenocarcinoma classifier using a CNN trained on medium-sized datasets of CT images. We evaluated our approach in terms of both binary and ternary classification, with the aims of detecting and localizing masses. For the binary classifier, the performance on the plain, arterial, and venous phases did not differ; on the plain scan, its accuracy was 95.47%, sensitivity 91.58%, and specificity 98.27%. For the ternary classifier, the arterial phase had the highest sensitivity for detecting cancer in the head of the pancreas among the three phases, but overall it achieved only moderate performance.

    Artificial intelligence has made great strides in bridging the gap between human and machine capabilities. Among the available deep learning architectures, the CNN is the most commonly applied algorithm for analyzing visual images; it can receive an input image, assign weights to various aspects of the image, and distinguish one type of image content from another[17]. A CNN comprises an input layer, an output layer, and multiple hidden layers. The hidden layers typically consist of convolutional layers, BN layers, ReLU layers, pooling layers, and fully connected layers[14]. The CNN acts like a black box and can make judgments independent of prior experience or the human effort involved in creating manual features, which is a major advantage. Previous studies showed that CT had a sensitivity of 76%-92% and an accuracy of 85%-95% for diagnosing pancreatic cancer, depending on the readers' ability[18,19]. Our results indicate that our computer-aided diagnostic system has comparable detection performance.

    Table 5 Performance of the ternary classifiers

    Figure 3 Receiver operating characteristic curves and AUC values for the binary classification of the plain scan using the convolutional neural network model. Each trainee’s prediction is represented by a single green point, and the blue point is their average. Each gastroenterologist’s prediction is represented by a single brown point, and the red point is their average. ROC: Receiver operating characteristic.

    The primary goal of a CNN classifier is to detect pancreatic cancer effectively; thus, the model needs to prioritize sensitivity over specificity. In the binary classifier, all three phases had high accuracy and sensitivity, with no significant differences among them, indicating the potential of the plain scan for tumor screening. The comparable sensitivity on the plain phase can be explained by the tumor sizes in our study and by the redundant information contributed by the arterial and venous phases. In the current study, most tumors were larger than two centimeters, making it easier to assess tumor morphology and size on the plain scan. In addition, plain-scan images contain less noisy and unrelated information, so it is relatively easy for our CNN model to distill pancreatic cancer-related features from them. The accuracy of the binary classifier on the plain scan was 95.47%, its sensitivity 91.58%, and its specificity 98.27%. When compared with the judgments of gastroenterologists and trainees on the plain phase, the CNN model achieved good performance: the accuracy of the CNN and the board-certified gastroenterologists was higher than that of the trainees, although the difference between the CNN and the gastroenterologists was not significant. We executed our model on an Nvidia GeForce GTX 1080 GPU when performing classifications; its response time was approximately 0.02 seconds per image. Compared with the 10 s average reaction time required by the physicians, our CNN model cannot stably outperform gastroenterologists, but it can process images much faster and is less prone to fatigue. Thus, binary classifiers might be suitable for screening purposes in pancreatic cancer detection.

    In our ternary classifier, the accuracy differences among the three phases were also not significant. Regarding sensitivity, the arterial phase had the highest sensitivity for finding malignant lesions in the head of the pancreas. As the typical appearance of an exocrine pancreatic cancer on CT is a hypoattenuating mass within the pancreas[20], the complex vascular structure around the head and neck of the pancreas could explain the better performance of the CNN classifier in detecting head and neck lesions in the arterial phase. It is worth noting that an unopacified superior mesenteric vein (SMV) in the arterial phase may cause confusion in tumor detection. However, the SMV has a relatively fixed position in CT images, accompanied by the superior mesenteric artery, which may help the classifier distinguish it from a tumor. Further studies in pancreatic segmentation should be carried out to address this problem. We also tested a ternary classification because surgeons choose the surgical approach based on the location of the mass in the pancreas: the conventional operation for pancreatic cancer of the head or uncinate process is pancreaticoduodenectomy, whereas surgical resection of cancers located in the body or tail of the pancreas involves a distal subtotal pancreatectomy, usually combined with a splenectomy. Compared with the gastroenterologists, the performance of the ternary classifier was not as good, because when the physicians judged that a mass existed, they also knew its location.

    Many CNN applications for evaluating organs have been reported, including Helicobacter pylori infection, skin tumors, liver fibrosis, colon polyps, and lung nodules[12,13,21-23], as well as applications for segmenting prostates, kidney tumors, brain tumors, and livers[24-27]. CNNs also have potential applications for pancreatic cancer, mainly focusing on pancreas segmentation by CT[28,29]. Our work concentrates on the detection of pancreatic cancer, and the results demonstrate that, on a medium-sized dataset, an affordable CNN model can achieve comparable performance in pancreatic cancer diagnosis and can be helpful as an assistant to doctors. Another interesting work, by Liu et al[30], adopted the faster R-CNN model, which is more complex and harder to train and tune, for pancreatic cancer diagnosis. Their model was trained on mixed images from different phases and achieved an AUC of 0.9632, whereas we trained three classifiers for the plain scan, arterial phase, and venous phase, respectively. Our results indicate that the plain scan, which is easier to obtain and involves lower radiation, is sufficient for the binary classifier, with an AUC of 0.9653.

    Our study has several limitations. First, we used only pancreatic cancer and normal pancreas images in this study; thus, our model was not tested with images showing inflammatory conditions of the pancreas, nor was it trained to assess vascular invasion, metastatic lesions, or other neoplastic lesions, e.g., intraductal papillary mucinous neoplasm. In the future, we will investigate the performance of our deep learning models in detecting these diseases. Second, our dataset was created from a database with a pancreatic cancer/normal pancreas ratio of approximately 1:1; thus, the risk of malignancy in our study cohort was much higher than the real-world rate, which made the model's task easier. Therefore, distribution bias might have influenced the entire study, and further studies are needed to clarify this issue. A third limitation is that although the binary classifier achieved the same accuracy as the gastroenterologists, the classifications were based on the information obtained from a single image. We speculate that if the physicians were given additional information, such as the clinical course or dynamic CT images, their classification would be more accurate. Further studies are needed to clarify this issue.

    CONCLUSION

    We developed a deep learning-based, computer-aided pancreatic ductal adenocarcinoma classifier trained on a medium-sized dataset of CT images. The binary classifier may be suitable for disease detection in general medical practice. The ternary classifier could be adopted to localize the mass, with moderate performance. Further improvement in the performance of the models would be required before they could be integrated into a clinical strategy.

    ARTICLE HIGHLIGHTS

    ACKNOWLEDGEMENTS

    We would like to thank all the participants and physicians who contributed to the study.
