
    Artificial intelligence-assisted esophageal cancer management: Now and future

    2020-10-22
    World Journal of Gastroenterology, 2020, Issue 35

    Yu-Hang Zhang, Lin-Jie Guo, Xiang-Lei Yuan, Bing Hu

    Abstract Esophageal cancer poses diagnostic, therapeutic and economic burdens in high-risk regions. Artificial intelligence (AI) has been developed for diagnosis and outcome prediction using various features, including clinicopathologic, radiologic, and genetic variables, and can achieve inspiring results. One of the most recent tasks of AI is to use state-of-the-art deep learning techniques to detect both early esophageal squamous cell carcinoma and esophageal adenocarcinoma in Barrett’s esophagus. In this review, we aim to provide a comprehensive overview of the ways in which AI may help physicians diagnose advanced cancer, make clinical decisions based on predicted outcomes, and detect precancerous lesions or early cancer from endoscopic images. Pertinent studies have surged in number over the past two years, with large datasets and external validation from multiple centers, and some have achieved expert-level performance of AI in real time. Future studies with larger training and external validation datasets, aiming at real-time video processing, are imperative to improve pre-trained computer-aided diagnosis algorithms and produce diagnostic efficacy similar or even superior to that of experienced endoscopists. Meanwhile, supervised randomized controlled trials in real clinical practice are essential for solid conclusions that meet patient-centered satisfaction. Notably, ethical and legal issues regarding the black-box nature of computer algorithms should be addressed, for both clinicians and regulators.

    Key Words: Artificial intelligence; Computer-aided diagnosis; Deep learning; Esophageal squamous cell cancer; Barrett’s esophagus; Endoscopy

    INTRODUCTION

    Esophageal cancer (EC) is one of the ten most prevalent malignancies worldwide, ranking seventh in incidence and sixth in mortality in 2018[1]. The major histological types are squamous cell carcinoma (SCC), which is predominant worldwide, and adenocarcinoma (AC), which is more prevalent in Caucasian populations[2-4]. Data collected from 12 countries indicate that the incidence of AC will probably increase dramatically up to 2030, while the incidence of SCC will continuously decrease[5]. It is estimated that EC causes 7.8 absolute years of life lost (95%CI: 2.3-12.7)[6]. Although EC is not the most common cause of admission or readmission to hospital[7], it certainly imposes economic burdens. A cohort study conducted in the United Kingdom showed that the mean net costs of care per 30 patient-days for AC were $1016, $669, and $8678 for the initial phase, continuing-care phase, and terminal phase, respectively[8]. The cost grows with increasing tumor node metastasis (TNM) stage at first diagnosis[8].

    EC is clearly a serious health threat, imposing economic burdens on both high-income and low-income countries. Therefore, early diagnosis and evidence-based expert opinions on selecting the optimal treatment modality are crucial for reducing these burdens. Although various diagnostic methodologies [including endoscopic ultrasonography (EUS), chromoendoscopy, optical coherence tomography (OCT), high-resolution microendoscopy (HRM), confocal laser endomicroscopy (CLE), volumetric laser endomicroscopy (VLE), and positron emission tomography (PET)] as well as serologic and genetic predictors have been developed to improve diagnostic accuracy and predict outcomes, inter-observer variability in interpreting images and heavy workloads limit their clinical efficiency[9-11]. A practical tool that can improve accuracy and reduce workload is urgently needed for clinical practice.

    Artificial intelligence (AI), which mimics the cognitive behavior of the human mind, has become an emerging hot spot globally in various disciplines. Numerous models have been attempted for machine learning (ML), and the terminologies are described in previous studies[12,13]. ML models are trained on datasets to extract and transform features, thereby achieving classification and prediction through self-learning[13-15]. In gastroenterology, AI-based technologies, characterized by deep learning (DL) as the state-of-the-art machine learning algorithm, have been developed mainly to identify dysplasia in Barrett’s esophagus (BE), SCC, gastric cancers, and Helicobacter pylori in the upper gastrointestinal (UGI) tract[16], and to diagnose polyps, inflammatory bowel diseases, celiac disease, and gastrointestinal (GI) bleeding in the lower GI tract[17]. Various models have been developed and studied to detect anatomical structures, discriminate dysplasia, and predict therapeutic and survival outcomes of EC. The ultimate goal of AI is to assist physicians and patients in making superior data-based diagnoses and decisions. In the following sections, we will (1) provide an overview of AI applications in the diagnosis and prediction of advanced cancer; (2) describe computer-aided diagnosis (CAD) for early detection of esophageal SCC (ESCC) and esophageal adenocarcinoma (EAC) based on optical imaging; and (3) outline limitations of the existing studies and future perspectives. We searched the PubMed database using the terms “esophageal cancer” and “artificial intelligence” for papers published up to March 1, 2020, and initially obtained 172 studies. After exclusion of 128 items, 44 research articles that provided detailed data were included in the review and discussion (Figure 1).

    IMPLICATIONS FOR DIAGNOSIS AND THERAPEUTIC DECISIONS

    EC is highly malignant, and the 5-year survival rate of late-stage EC is less than 25%[18]. Radical therapies, including surgery, chemotherapy, radiotherapy or their combination, are essential to improve survival outcomes. Accurate diagnosis, precise staging, optimal modality selection, and prediction of responsiveness and survival outcome are all necessary for sound clinical decisions. However, in clinical practice these decisions are made mainly on the basis of current guidelines and expertise. AI technologies have therefore been developed to enhance the reliability of those decisions in an individualized manner.

    Diagnosis

    One of the important roles of AI is to detect malignant lesions. In 1996, Liu et al[19] proposed a tree-based algorithm called PREDICTOR to classify patients with dyspeptic symptoms into EC, which achieved a discriminating accuracy of 61.3%, with sensitivity (SEN) and specificity (SPE) of 94.9% and 39.8%, respectively. In 2002, a probabilistic network-based decision-support system was developed, which could correctly predict the cancer stage of 85% of tested data in reasonable time[20]. In the same year, a robust classifier, the artificial neural network (ANN), which imitates the neural network of the human brain, was adopted to distinguish BE from EC[21]. The ANN was trained using 160 genes selected by significance analysis of microarrays (SAM) from cDNA microarray data of esophageal lesions. This ANN outperformed cluster analysis by correctly diagnosing all of the tested samples. Kan et al[22] also combined an ANN with 60 SAM-extracted gene clones to accurately predict lymph node metastasis in 86% of all SCC cases, with SEN and SPE of 88% and 82%, respectively, better than clustering or predictive scoring. Kan et al[22] suggested that AI is a potential tool to detect lymph node metastasis when the SEN of computed tomography (CT), EUS, and PET is insufficient[23,24]. Since tumor risk factors have complex nonlinear correlations, a fuzzy neural network, trained by a hybrid of the chaotic optimization algorithm and error back propagation (EBP), was able to correctly diagnose 87.36% of ESCC and 70.53% of dysplasia[25]. This fuzzy-logic-based model outperformed traditional statistics, such as the multivariate logistic regression model previously described by Etemadi et al[26].

    While symptoms are not quite reliable and gene analysis or PET scans are expensive, a simpler noninvasive detection method may be more practical. Li et al[27] combined a support vector machine (SVM), a traditional classifier, with surface-enhanced Raman spectroscopy in order to distinguish the serum spectra of EC patients from those of healthy controls. Ultimately, a combination of SVM with principal component analysis (PCA) on the basis of a radial basis function (RBF), namely the RBF PCA-SVM algorithm, exhibited the greatest efficacy among the tested models, with accuracy, SEN, and SPE of 85.2%, 83.3% and 86.7%, respectively.
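    The PCA-SVM combination described above can be sketched in a few lines with scikit-learn. The synthetic "spectra", the class shift, and all parameter values below are illustrative assumptions, not the settings of the original study:

    ```python
    # Sketch of an RBF PCA-SVM pipeline analogous to the serum-spectra
    # classifier described above (hypothetical synthetic data; the original
    # study used surface-enhanced Raman spectra of serum samples).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Simulate 60 "spectra" with 200 intensity values each; the two classes
    # differ only in a few bands, which PCA is meant to compress and retain.
    n_per_class, n_features = 30, 200
    healthy = rng.normal(0.0, 1.0, (n_per_class, n_features))
    cancer = rng.normal(0.0, 1.0, (n_per_class, n_features))
    cancer[:, :5] += 2.0  # class-discriminative shift in 5 bands (assumed)
    X = np.vstack([healthy, cancer])
    y = np.array([0] * n_per_class + [1] * n_per_class)

    # PCA reduces the high-dimensional spectra before the RBF-kernel SVM.
    model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
    model.fit(X, y)
    acc = model.score(X, y)  # training accuracy on the toy data
    ```

    In practice the reported 85.2% accuracy would come from held-out data, not the training set used in this toy fit.
    
    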

    Outcome prediction

    Another significant role of AI is to predict the prognosis of EC based on various demographic, clinicopathologic, hematologic, radiologic, and genetic variables. Surgery and neoadjuvant chemotherapy, radiotherapy or chemoradiotherapy are important definitive modalities for advanced EC. Selecting the optimal strategy with a superior predicted outcome is of vital importance.

    Traditionally, the TNM staging system is used as a predictor. However, a previous study showed that it is not very accurate[28]. Hence, multiple computational algorithms have been developed to assist more reliable predictions. In 2005, Sato et al[29] trained an ANN to predict survival outcome. They found that the best predictive accuracy was obtained with 65 clinicopathologic, genetic and biologic variables for 1-year survival and 60 variables for 5-year survival. The area under the ROC curve (AUC), SEN, and SPE were 0.883, 78.1%, and 84.7% for 1-year survival, and 0.884, 80.7%, and 86.5% for 5-year survival, respectively. Similar results with higher SEN and SPE were achieved by another ANN model predicting the 1- and 3-year post-operative survival of EC and esophagogastric junction cancer[30]. These two ANNs both outperformed the TNM staging system[29,30].
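    Since SEN, SPE, and AUC recur as headline metrics throughout this review, a minimal sketch of how they are computed from a model's outputs may be useful; the labels and scores below are invented toy values:

    ```python
    # How the SEN/SPE/AUC figures quoted in this review are computed.
    import numpy as np

    def sen_spe(y_true, y_pred):
        """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        tp = np.sum((y_true == 1) & (y_pred == 1))
        fn = np.sum((y_true == 1) & (y_pred == 0))
        tn = np.sum((y_true == 0) & (y_pred == 0))
        fp = np.sum((y_true == 0) & (y_pred == 1))
        return tp / (tp + fn), tn / (tn + fp)

    def auc(y_true, scores):
        """AUC as the probability that a positive case is ranked above a
        negative one (equivalent to the area under the ROC curve)."""
        y_true, scores = np.asarray(y_true), np.asarray(scores)
        pos, neg = scores[y_true == 1], scores[y_true == 0]
        wins = (pos[:, None] > neg[None, :]).sum() \
            + 0.5 * (pos[:, None] == neg[None, :]).sum()
        return wins / (len(pos) * len(neg))

    # Toy example: three cancers, three controls, model scores in [0, 1].
    y_true = [1, 1, 1, 0, 0, 0]
    scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.1]
    y_pred = [1 if s >= 0.5 else 0 for s in scores]
    sen, spe = sen_spe(y_true, y_pred)  # 2/3 and 2/3 on this toy data
    ```

    Note that SEN and SPE depend on the chosen decision threshold (0.5 here), whereas AUC summarizes performance across all thresholds.
    
    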

    Figure 1 Flow chart of study selection and logic arrangement of review. BE: Barrett’s esophagus; EAC: Esophageal adenocarcinoma; OCT: Optical coherence tomography; ESCC: Esophageal squamous cell carcinoma.

    In addition to ANNs, other models have been proposed to solve specific problems. A prognostic scoring system, using serum C-reactive protein and albumin concentrations, was fused with expertise by fuzzy logic[31]. The proposed model could perform 1-year survival prediction with an AUC of 0.773. Hierarchical forward selection (HFS), a wrapper feature-selection method, was developed to address the problem of small sample size[32]. In this SVM-validated model, clinical and PET features were learned to predict disease-free survival. The results showed that HFS achieved the highest accuracy of 94%, with robustness of 96%. Robustness could be further increased to 98% when prior knowledge was incorporated into HFS (pHFS).
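    A greedy forward-selection loop of the kind wrapper methods such as HFS build on can be sketched as follows. The feature names, the additive utility, and the size penalty are all hypothetical stand-ins for the cross-validated SVM score a real wrapper would use:

    ```python
    # Minimal greedy forward feature selection: at each step, add the
    # feature that most improves the score; stop when nothing helps.
    def forward_select(features, score, max_features=None):
        """Greedily grow a feature subset that maximizes score(subset)."""
        selected, best = [], score([])
        remaining = list(features)
        while remaining and (max_features is None or len(selected) < max_features):
            gains = [(score(selected + [f]), f) for f in remaining]
            top_score, top_f = max(gains)
            if top_score <= best:
                break  # no candidate improves the current subset
            selected.append(top_f)
            remaining.remove(top_f)
            best = top_score
        return selected, best

    # Toy stand-in score: per-feature utility minus a complexity penalty
    # (hypothetical values; a real wrapper would score with the classifier).
    utility = {"suv_max": 0.5, "tumor_len": 0.3, "age": 0.1}
    score = lambda subset: sum(utility[f] for f in subset) - 0.05 * len(subset) ** 2

    chosen, s = forward_select(list(utility), score)
    # chosen == ["suv_max", "tumor_len"]: adding "age" no longer pays off.
    ```

    The wrapper's strength for small samples is that the score is evaluated with the downstream model itself, at the cost of many model fits per step.
    
    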

    Treatment decision

    Based on the condition and prognosis of the patient, an individualized treatment strategy is needed. For instance, when chemotherapy is prescribed for a patient, what is the optimal medication with the appropriate dosage and period? Generally, clinicians make decisions according to their own experience, guidelines or consensus. However, those recommendations are often fixed, and human errors are sometimes inevitable. A group of Iranian experts attempted to train a multilayer neural network with particle swarm optimization and EBP algorithms in order to determine the dosage of chemotherapy[33]. Encouragingly, the accuracy of both particle swarm optimization and EBP was 77.3%. Zahedi et al[33] were positive about its future application as a supplementary decision-making system.

    While the majority of decisions are made before treatment, is it possible to make real-time treatment decisions? The answer is yes. Maktabi et al[34] tested a relatively new hyperspectral imaging system. They found that an SVM was able to detect cancerous tissue with 63% SEN and 69% SPE within 1 s. It is promising that hyperspectral imaging may assist surgeons in identifying tumor borders intra-operatively in real time.

    Treatment response

    A good treatment response is crucial for subsequent therapeutic decisions and outcome prediction[35]. Endeavors have been made to select candidate factors that correlate with responsiveness to treatment. However, it is often extremely laborious to test these numerous variables in clinical trials. AI technologies are potentially powerful tools for this selection.

    One indicator is the genetic biomarker. In 2010, Warnecke-Eberz et al[36] reported their use of an ANN to predict the histopathologic responsiveness of treatment-naïve patients to neoadjuvant chemoradiotherapy by analyzing 17 genes using TaqMan low-density arrays. Their results were promising, with 85.4% accuracy, 80% SEN, and 90.5% SPE. Radiology is another important indicator for assessing tumor regression after treatment. One rationale for this exploration is that tumor heterogeneity exists within radiologic images[37]. The standardized uptake values of 18F-fluorodeoxyglucose in PET imaging were reported to have predictive potential[38]. However, this predictive power is limited[39] due to confounding factors such as intra-observer variation. Ypsilantis et al[40] adopted a three-slice convolutional neural network that could automatically extract features from pre-treatment PET scans to predict response to chemotherapy. It achieved a moderate accuracy of 73.4%, with SEN and SPE of 80.7% and 81.6%, respectively, which outperformed other ML algorithms trained on handcrafted PET-scan features. Recently, CT radiomics obtained three months after chemoradiotherapy were combined with dosimetric features of gross tumor volume and organs at risk to identify non-responders. Jin et al[41] found that these combined features, trained with an extreme gradient boosting model plus PCA, achieved an accuracy of 70.8%, with an AUC of 0.541.

    While tumor regression is an important indicator of responsiveness, post-treatment distant metastasis, which correlates with survival outcome, is also vital for evaluating responsiveness. In order to predict post-operative distant metastasis of ESCC, SVM models incorporating clinicopathological and immunohistological variables were established[42]. The SVM model with four clinicopathological features and nine immunomarkers had the best performance, with accuracy, SEN, SPE, positive predictive value, and negative predictive value of 78.7%, 56.6%, 97.7%, 95.6%, and 72.3%, respectively. A least-squares SVM model was also proposed to predict post-operative lymph node metastasis in patients who received chemotherapy preoperatively, by exploiting preoperative CT radiomics[43]. Tumor length, thickness, CT value, and the long-axis and short-axis sizes of the largest regional lymph node were analyzed. The model reached an AUC of 0.887.

    In addition to its diagnostic and predictive value, AI has learned to identify meaningful alterations at the molecular and genetic level. In 2017, Lin et al[44] compared serum chemical element concentrations between ESCC patients and healthy controls, and found that nearly half of the elements differed between the two groups. They then trained several classifiers to perform the discrimination, with random forest being the best (98.38% accuracy) and SVM the second (96.56% accuracy). Later, Mourikis et al[45] developed a robust sysSVM algorithm to identify 952 genes that promoted EAC development, using 34 biological features of known cancer genes. They called these rare and highly individualized genes "helper" genes, which function alongside known drivers.

    AI may be a feasible option to help determine an optimal treatment strategy. This was evidenced by a study of 13365 EACs from 33 cancer centers worldwide, which incorporated a random forest algorithm and found that the predicted survival of AI-generated therapy was superior to that of actual human decisions[46]. However, most of the ML algorithms described above were developed for advanced cancer. Diagnosing EC at an early stage contributes to a far better outcome when treatment is undertaken appropriately. This is highly dependent on the development of optical imaging technologies that can directly visualize the morphology of esophageal lesions.

    MORPHOLOGY-BASED CAD

    In recent decades, endoscopic optical imaging techniques have advanced rapidly, providing endoscopists with fine inspection of the morphology of the esophageal mucosa, micro-vessels, and even cells. Beyond white light imaging (WLI) and magnifying endoscopy (ME), emerging OCT, CLE, VLE, and HRM techniques have been developed to diagnose BE[47-49]. Meanwhile, the diagnosis of SCC relies more on chromoendoscopy and intra-epithelial papillary capillary loops (IPCLs) observed under narrow band imaging (NBI) plus ME[50]. Although these modalities have yielded preferable diagnostic value, the interpretation of these images requires expert experience (with inter- and intra-observer variability[51]), and processing large datasets is laborious and time consuming. Researchers in medicine and information engineering have collaborated to develop different AI models for this purpose.

    BE versus dysplasia or EAC

    The current screening and surveillance recommendation for BE is endoscopic examination plus random biopsy[52], which is limited by sampling error. AI models trained on various endoscopic modalities and pathologies aim to overcome these shortcomings (Table 1).

    Endoscopy: In 2009, German experts developed a content-based image retrieval framework[53]. In this framework, novel color-texture features were combined with an interactive feedback loop. The algorithm could correctly recognize 95% of normal mucosa and 70% of BE from 390 training images, with a moderate inter-rater reliability of 0.71. The authors suggested that the CAD system might be incorporated into the endoscopic system to help less experienced clinicians. In 2013, van der Sommen et al[54] tried an SVM algorithm, which could automatically identify and locate irregularities of the esophagus on high-definition endoscopy with an accuracy of 95.9% and AUC of 0.99, taking a first step towards CAD. Later, these authors used a CAD system to automatically recognize regions of interest (ROI) in dysplastic BE[55]. The SVM-based classification yielded SEN and SPE of 83% each at the per-image level, and 86% and 87%, respectively, at the per-patient level. However, the f-score of the system, which indicates similarity with the gold standard, was lower than that of experts.

    In order to improve the outcome, Horie et al[56] were the first to adopt a deep CNN [Single Shot MultiBox Detector (SSD)] model to detect EC from WLI and NBI images in 2018. Only 8 EACs were used in that study. The diagnostic accuracy for EAC was 90%, and the per-patient SEN for both WLI and NBI was 88%. The system processed one image in only 0.02 s, which is promising for real-time work. This ability of SSD to detect EAC was assessed in another study, in which it outperformed the authors' regional-based CNN (R-CNN), Fast R-CNN and Faster R-CNN in both precision and speed, achieving an F-measure, SEN, and SPE of 0.94, 96% and 92%, respectively[57]. The authors stated that SSD works faster due to its single-forward-pass network architecture. A CNN was then validated in a more recent study to detect early dysplastic BE[58]. The system was pretrained on ImageNet, then trained with 1853 images and tested with 458 images. The CNN accurately detected 95.4% of the dysplasia, with 96.4% SEN and 94.2% SPE. One highlight of this study is that it included WLI and NBI images, as well as images with standard focus and near focus. Another highlight is its ability to deal with real-time videos.

    Beyond the above-mentioned CNNs, another CNN built upon a residual net (ResNet) was introduced. Ebigbo et al[59] tested this system on two databases, Augsburg and Medical Image Computing and Computer-Assisted Intervention, with SEN over 90% on both. Later, de Groof et al[60] used a custom-made hybrid ResNet/U-Net pretrained on GastroNet to distinguish non-dysplastic BE from dysplasia. The system was trained using state-of-the-art ML techniques (transfer learning and ensemble learning) and validated on five sequential datasets, with accuracies of 89% and 88% on the two external validation datasets, slightly superior to the model pre-trained on ImageNet in the supplementary ablation experiment.
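    The transfer-learning recipe used by these systems (a backbone pretrained on ImageNet or GastroNet, with only a task head trained on the limited endoscopic data) can be illustrated in miniature. Here a fixed random projection stands in for the frozen pretrained backbone, and all data, label rules, and dimensions are synthetic assumptions:

    ```python
    # Minimal numpy illustration of transfer learning: the feature
    # extractor is frozen; only the classification head is trained.
    import numpy as np

    rng = np.random.default_rng(42)

    def extractor(x, W):
        """Frozen 'backbone': a fixed projection plus ReLU, never updated."""
        return np.maximum(x @ W, 0.0)

    # Synthetic "images" flattened to 64-dim vectors; hypothetical labels.
    n, d, h = 200, 64, 32
    X = rng.normal(size=(n, d))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    W = rng.normal(size=(d, h)) / np.sqrt(d)  # pretrained weights, frozen

    # Train only the logistic-regression head by plain gradient descent.
    F = extractor(X, W)
    w, b = np.zeros(h), 0.0
    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
        w -= 0.5 * (F.T @ (p - y) / n)
        b -= 0.5 * float(np.mean(p - y))

    acc = float(np.mean(((F @ w + b) > 0) == (y == 1)))
    ```

    The point of the recipe is data efficiency: the small trainable head needs far fewer labeled images than training the whole backbone from scratch, which is why it suits small endoscopic datasets.
    
    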

    Endomicroscopy: In 2017, Hong et al[61] reported their experience in adopting a CNN as a classifier to distinguish intestinal metaplasia (IM), gastric metaplasia (GM) and neoplasia (NPL) of BE using endomicroscopic images. The total accuracy was 80.77%. It performed well for IM and NPL; however, it could not identify GM in the tested samples. VLE is an advanced imaging technique that can provide a 3-mm-deep, full-circumference scan of the esophagus and is commercially available (NvisionVLE Imaging System). In the same year, Swager et al[62] reported the first attempt to use CAD to detect NPL by adopting histology-correlated ex-vivo VLE. The authors used eight separate ML algorithms trained with clinically inspired features. They found that the "layering and signal decay statistics" feature performed best, with AUC, SEN, and SPE of 0.95, 90%, and 93%, respectively. Similar results were obtained by van der Sommen et al[63], with a maximum AUC of 0.93 in identifying early EAC in BE. Notably, the authors discovered that a scanning depth of 0.5-1 mm was the most appropriate range for classifying tissue categories.

    Table 1 Computer-aided endoscopic diagnosis for dysplastic Barrett’s esophagus

    AdaBoost: Adaptive boosting; AUC: Area under the ROC curve; BE: Barrett’s esophagus; CAD: Computer-aided diagnosis; CBIR: Content-based image retrieval; CC: Mucosa of cardia; CNN: Convolutional neural network; DA: Discriminant analysis; EAC: Esophageal adenocarcinoma; EP: Epithelium; HGD: High-grade dysplasia; HRM: High-resolution microendoscopy; Knn: K-nearest neighbor; LDA: Linear discriminant analysis; LogReg: Logistic regression; LOO: Leave-one-out cross-validation; LR: Linear regression; NA: Not available; NB: Naïve Bayes; NBI: Narrow band imaging; OCT: Optical coherence tomography; PCA: Principal component analysis; PDE: Principal dimension encoding; R-CNN: Regional-based CNN; RF: Random forest; SEN: Sensitivity; SPE: Specificity; SSD: Single shot multibox detector; SVM: Support vector machine; VLE: Volumetric laser endomicroscopy; WLI: White light imaging.

    Since VLE produces an overwhelming number of images in a short time, a real-time CAD system is more helpful in actual clinical practice. In 2019, Trindade et al[64] reported a video case illustrating an intelligent real-time image-segmentation system that employed three established features to dynamically highlight abnormal VLE images with color during the endoscopic procedure. They are now conducting a multicenter RCT (NCT03814824) to further validate this CAD system. While most studies use a single frame to include the ROI, a recent study added neighboring VLE frames to the pathology-correlated ROI[65]. Promisingly, this multi-frame analysis combined with PCA improved on the performance of single-frame analysis, from an AUC of 0.83 to 0.91. Meanwhile, the novel CAD system needs only 1.3 ms to automatically differentiate non-dysplastic BE from dysplasia in one image, which is also promising for a real-time setting.

    While previous studies employed ex-vivo scan images, the subsequent study conducted by van der Putten et al[66] used in-vivo histology-correlated images. In addition, they used principal dimension encoding (PDE) to encode images into score vectors. They combined this PDE with traditional ML algorithms, e.g., random forest and SVM, to classify the degree of dysplasia (high-grade dysplasia vs early EAC). They obtained an AUC of 0.93 and an F1 score of 87.4%, which outperformed some traditional DL classifiers, such as SqueezeNet and Inception.

    Another kind of endomicroscopic technique is HRM. Shin et al[67] designed an automated image-processing algorithm extracting epithelium morphology and BE glandular architecture features, plus a classification algorithm, which distinguished NPL from dysplasia in BE with an accuracy, SEN and SPE of 84.9%, 88%, and 85%, respectively, in the validation dataset. This quantitative CAD is cost-effective and may be applied in clinical settings after improvement of image-acquisition quality and processing speed.

    OCT: OCT is also a noninvasive imaging technique that can detect BE, dysplasia and early EAC, complementing routine endoscopy. In 2006, Qi et al[68] attempted to extract image features using a center-symmetric auto-correlation method, and a PCA-based CAD algorithm was used for classification. A total of 106 pathology-paired images were included for training, yielding an accuracy of 83%, SEN of 82% and SPE of 74% in distinguishing non-dysplastic BE from dysplasia. In general, the accuracy of OCT in identifying dysplasia is not satisfactory, which limits its application[69].

    In addition to endoscopic images, pathologic morphology has also been studied. Sabo et al[70] employed an ANN-validated computerized nuclear morphometry model (pseudostratification, pleomorphism, chromatin texture, symmetry and orientation) to discriminate the degree of dysplasia in BE. The model was able to differentiate non-dysplastic BE from low-grade dysplasia with an accuracy of 89%, and low-grade dysplasia from high-grade dysplasia with an accuracy of 86%.

    ESCC

    ESCC is the dominant histological type of esophageal cancer worldwide. Diagnosing early cancer depends mainly on endoscopic screening, which also produces a large number of images that need special training to interpret. AI technologies have also been explored globally to address this issue (Table 2).

    Endoscopy: In 2016, Liu et al[71] designed an algorithm called joint diagonalization principal component analysis, which correctly detected 90.75% of EC with an AUC of 0.9471. To improve the performance of CAD systems, Horie et al[56] made the first attempt to use DL to diagnose ESCC with a large number of endoscopic images. The CNN had a diagnostic accuracy of 99% for ESCC, 99% for superficial cancer, and 92% for advanced cancer. The SEN of the CNN was 97% at the per-patient level and 77% at the per-image level. Later, in 2019, Cai et al[72] proposed a novel CAD system based on a deep neural network (DNN). They used only standard WLI images to train the model. The DNN-CAD model could detect 91.4% of early ESCC, higher than senior endoscopists. With this model, the average diagnostic performance of endoscopists improved satisfactorily in terms of accuracy, SEN, and SPE. However, these studies excluded magnified images.

    Later, Ohmori et al[73] evaluated both ME and non-ME images [including WLI and NBI/blue laser imaging (BLI)] using an SSD-based CNN to recognize SCC. The accuracy for ME, non-ME + WLI, and non-ME + NBI/BLI was 77%, 81%, and 77%, respectively, all with high SEN and moderate SPE. The results were similar to those of the experienced endoscopists tested in the study. Zhao et al[74] conducted another study evaluating ME + NBI images with a double-labeling fully convolutional network. This system used ROI labels and segmentation labels to delineate IPCLs based on the AB classification of the Japan Esophageal Society[75]. The study showed that senior observers had significantly higher diagnostic accuracy than mid-level and junior ones. The model reached a diagnostic accuracy of 89.2% and 93% at the lesion and pixel level, respectively, for distinguishing type A, B1 and B2 IPCLs, similar to the senior group. Specifically, the model had a higher sensitivity for type A IPCLs than clinicians (71.5% vs 28.2%-64.9%), which might avoid unnecessary radical treatment. Instead of identifying IPCL patterns, the study conducted by Nakagawa et al[76] aimed to predict invasion depth. The authors developed two separate SSD-based CNNs for ME and non-ME images. The ability of the system to correctly distinguish EP/submucosal (SM) 1 cancers from SM2/SM3 cancers was 91%, 92.9%, and 89.7% for the ME + non-ME, non-ME and ME images, respectively. Regarding M and SM cancers, the differentiating accuracy was 89.7%, 90.3%, and 92.3% for the total, non-ME, and ME images, respectively. The performance of this CAD model was also comparable to experts, but much faster.

    A processing speed over 30 images/s is necessary for dynamic video analysis[56]. Although Horie et al[56], Ohmori et al[73] and Nakagawa et al[76] reported that their systems could process one image in 0.02, 0.027, and 0.033 s, respectively, they did not test the systems on real-time videos. After Cai et al[72] had validated the efficacy of their DNN-CAD model, they split the video into images and then reassembled them, enabling the model to delineate early cancer in real time. Everson et al[77] validated another CNN investigating IPCLs using sequential still images in real time at 0.026 to 0.037 s per image. The CNN could differentiate type A from type B IPCLs with a mean accuracy of 93.3%. Last year, Luo et al[78] reported a multicenter comparative study exploiting 1036496 endoscopic images to construct the Gastrointestinal Artificial Intelligence Diagnostic System (GRAIDS) based on the DeepLab V3+ concept. GRAIDS yielded a diagnostic accuracy for UGI cancer ranging from 91.5% to 97.7% across the internal, external, and prospective validation datasets, with favorable sensitivities similar to experts and superior to non-experts. They also incorporated the CAD model into endoscopic videos in real time, with a highest speed of 0.008 s per image and latency less than 0.04 s. However, they did not report their outcomes by distinct histology. Recently, Guo et al[79] specially developed a CNN-CAD system built on the SegNet architecture, aiming at real-time application in clinical settings. In this study, 13144 NBI (ME + non-ME) images and 80 video clips were employed. In the image dataset, the SEN, SPE, and AUC were 98.04%, 95.03% and 0.989, respectively. In the video dataset, the per-frame SEN for non-ME and ME was 60.8% and 96.1%, respectively; the per-lesion SEN for both non-ME and ME was 100%. When they analyzed 33 original videos of the full-range normal esophagus, they acquired a SPE of 99.95% and 90.9% for per-frame and per-case analysis, respectively. The ability of this model to process each frame in a maximum of 0.04 s with latency less than 0.1 s sets a good example for future model optimization for real-time applications[80].
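    The 30 images/s threshold quoted above translates directly into a per-image time budget, against which the reported processing times can be checked (a trivial sketch; the dictionary keys are shorthand for the studies cited):

    ```python
    # Real-time video analysis needs > 30 frames/s, i.e. at most ~0.033 s
    # per image; check the per-image times reported in the cited studies.
    def meets_realtime(seconds_per_image, fps_required=30):
        """True if the per-image time sustains the required frame rate."""
        return (1.0 / seconds_per_image) >= fps_required

    per_image_times = {
        "Horie": 0.02,       # 50 frames/s
        "Ohmori": 0.027,     # ~37 frames/s
        "Nakagawa": 0.033,   # ~30 frames/s, borderline
        "Guo (max)": 0.04,   # 25 frames/s, below the threshold
    }
    capable = {k: meets_realtime(t) for k, t in per_image_times.items()}
    ```

    Note that throughput alone is not sufficient for a usable system: the latency figures reported by Luo et al and Guo et al matter separately, since a fast but delayed overlay would lag behind the endoscopist's view.
    
    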

    Table 2 Computer-aided endoscopic diagnosis for early esophageal squamous cell cancer

    Endomicroscopy: In 2007, Kodashima et al[81] used ImageJ software to label the borders of nuclei in endo-cytologic images from 10 ESCC patients. They found that the computer-labelled nuclear area of ESCC was significantly different from that of normal tissue, demonstrating the diagnostic potential of computers. HRM is another low-cost tool that can illustrate the esophageal epithelium at the cellular level, compensating for the low specificity of iodine staining, and it is also more cost-effective than CLE. In 2015, Shin et al[82] developed a two-class linear classification algorithm using nuclei-related features to identify neoplastic squamous mucosa (HGD + cancer). It resulted in an AUC, SEN, and SPE of 0.95, 87%, and 97% for the test dataset and 0.93, 84%, and 95% for the validation dataset, respectively. However, the application of this system in real-time practice requires acceleration of the analysis speed. To solve this problem and reduce equipment cost, a smaller, tablet-interfaced HRM with a real-time algorithm was developed by Quang et al[83]. The algorithm was able to automatically identify SCC with an AUC, SEN, and SPE of 0.937, 95%, and 91%, respectively, comparable to the results achieved by the first-generation, bulky laptop-interfaced HRM[82] or the combination of Lugol chromoendoscopy and HRM[84].

    STUDY LIMITATIONS AND FUTURE PERSPECTIVES

    Limitations

    The exciting and promising findings of various CAD models have been summarized in detail above. Research is ongoing worldwide because none of these studies was perfect. Their limitations and problems are driving forces for evolution and innovation. We hereby discuss several major drawbacks that limit the strength of the studies.

    Firstly, the most frequently mentioned drawback is insufficient training sample size. The number of endoscopic images employed by the majority of studies ranged from 248 to approximately 7000 (Tables 1 and 2). The limited amount of training data, lack of imaging variability, and single-center design are likely to cause overfitting[85], which attenuates the ability of AI models to perform well on unseen datasets and leads to unstable results[12,55]. To overcome this problem, various strategies have been adopted, such as segmenting the images to augment the datasets or using 5-fold or even 10-fold cross-validation to make fuller use of limited data. Recently, the size of datasets has been greatly enlarged in several studies[56,73,79], the largest of which included over one million UGI images from six centers[78]. Therefore, further multicenter studies including large datasets with different kinds of images (i.e., WLI, NBI, ME, and non-ME) harvested by different endoscopic systems for SCC and AC are likely to produce robust and externally generalizable results. In addition, different AI algorithms tested on prospective external datasets need to be developed to increase the diversity of AI technology[13].
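    The k-fold cross-validation mentioned above partitions a small dataset so that every image is used for validation exactly once, giving a more stable performance estimate than a single train/test split. A minimal sketch, assuming nothing beyond the index arithmetic:

```python
# Minimal sketch of k-fold cross-validation for a small image dataset.
# `train`/`val` are index lists; model training itself is out of scope here.

def k_fold_indices(n_samples, k):
    """Yield (train_idx, val_idx) pairs partitioning range(n_samples) into k folds."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        val_set = set(val)
        train = [i for i in range(n_samples) if i not in val_set]
        yield train, val
        start += size

# Example: 10 images, 5 folds
folds = list(k_fold_indices(10, 5))
covered = sorted(i for _, val in folds for i in val)
print(covered)  # every index appears in validation exactly once
```

Averaging the model's SEN/SPE/AUC over the k validation folds is what produces the cross-validated figures reported in the small-dataset studies.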

    Secondly, selection bias is another contributor to limited generalizability. Most of the previous studies were retrospective and used only high-quality images; suboptimal images with mucus, blur, or blood were excluded. Additionally, the unbalanced distribution of lesion types (SCC vs AC, type B1 vs B2 and B3 IPCLs), different numbers of images per patient, and non-uniform processing methods for different lesions might all introduce bias into the results. Further prospective RCTs will be required in the future.
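    One common mitigation for the unbalanced lesion-type distributions noted above is to weight each class by its inverse frequency during training, so that rare classes contribute more to the loss. The class names and counts below are invented for illustration; the studies cited here do not report using this exact scheme.

```python
# Sketch: inverse-frequency class weights for an unbalanced lesion-type
# distribution (hypothetical counts for illustration only).

def inverse_frequency_weights(class_counts):
    """Weight each class by total / (n_classes * count), so rarer classes
    receive proportionally larger weights in the training loss."""
    total = sum(class_counts.values())
    k = len(class_counts)
    return {cls: total / (k * n) for cls, n in class_counts.items()}

weights = inverse_frequency_weights({"B1": 800, "B2": 150, "B3": 50})
print(weights)
```

Weighting only rebalances the loss; it cannot substitute for collecting more examples of the rare classes, which is why prospective multicenter data collection remains necessary.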

    Thirdly, almost all of the studies employed still images to train AI models. Not until recently did researchers validate their efficacy on endoscopic videos in a real-time manner. Future video-based research is needed to narrow the gap between study settings and clinical practice.

    Future perspectives

    Gold standard: Consensus-based ground truth for lesions is preferred over a single expert’s annotation. Committees of expert endoscopists and pathologists from different countries should be formed to improve the precision of annotation. In addition, AI should play a role in helping endoscopists recognize lesions and target biopsies for gold-standard pathological examination, rather than replacing our “job”.
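    The consensus labeling advocated above can be operationalized as a simple majority vote across annotators, with non-majority cases deferred (for example, to pathology). The labels and quorum rule below are a hypothetical sketch, not a published annotation protocol.

```python
# Sketch of consensus-based ground truth: accept a lesion label only when
# a majority of expert annotators agree; otherwise defer the case.
from collections import Counter

def consensus_label(annotations, quorum=None):
    """Return the majority label across experts, or None when no label
    reaches the quorum (default: strict majority)."""
    counts = Counter(annotations)
    label, votes = counts.most_common(1)[0]
    needed = quorum if quorum is not None else len(annotations) // 2 + 1
    return label if votes >= needed else None

print(consensus_label(["cancer", "cancer", "dysplasia"]))  # majority reached
print(consensus_label(["cancer", "dysplasia"]))            # tie -> defer
```

Raising the quorum (e.g., requiring unanimity) trades annotation yield for label precision, which is exactly the trade-off an international committee would need to decide on.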

    Hardware upgrade: Computers equipped with powerful GPUs are needed to run more sophisticated algorithms and process large volumes of graphical data, in order to achieve the goal of real-time recognition.

    Pre-training database: ImageNet and GastroNet have been introduced, which store massive datasets of manually labeled images. These databases should be constantly enriched, since CAD models with prior knowledge tend to have better discriminative ability[60].

    Cost-effectiveness analysis: When a novel diagnostic method is introduced into clinical practice, whether it is cost-effective is an important issue. A recent multicenter add-on analysis revealed that AI can reduce the cost of colonoscopic management of polyps[86]. Since medical cost is one of the major concerns for both patients and governments, it is necessary to assess whether AI can improve the diagnostic performance for EC while reducing the cost of unnecessary examinations and radical therapies. Future studies concerning medical cost and reimbursement should be conducted in different countries with different healthcare and insurance systems to address this issue.

    Ethics and legality: To trust AI or not to trust it is a real question. While AI technology in medicine has taken a giant leap, with the potential to improve the performance of clinicians of different experience levels and reduce error, the black-box[87] nature of ML algorithms raises genuine doubts[88]. Can we trust the results of AI when they lack explainability? What should we do with these computer-generated results? Are they admissible as legal evidence? Challenges for legislation, regulation, insurance, and clinical practice are inevitable. Supervised RCTs and AI participation in clinical workflows are needed to provide solid evidence that AI is acceptable within the range of legal and ethical concerns[89]. Nevertheless, the trend toward AI is irreversible. The ultimate role of AI in medicine might be that of a supervised task performer[90].

    CONCLUSION

    In this manuscript, we provided a comprehensive review of AI technology in diagnosis, treatment decision-making, and outcome prediction for EC. We searched only the PubMed database for clinical research and applications; issues regarding computer science and image processing are beyond our scope. CAD systems have evolved from traditional ML algorithms to neural network-based DL, and from still-image analysis to real-time video processing. AI can improve non-experts’ performance while correcting erroneous classifications by experts[78]. Research with larger datasets and more reliable CAD models is being conducted worldwide. It is promising that AI may facilitate early cancer screening, surveillance, and treatment in high-risk regions. However, it is noteworthy that patients’ consent and satisfaction are of first priority.
