
    Gastric Tract Disease Recognition Using Optimized Deep Learning Features

    2021-12-11 13:30:44

    Computers, Materials & Continua, 2021, Issue 8

    Zainab Nayyar, Muhammad Attique Khan, Musaed Alhussein, Muhammad Nazir, Khursheed Aurangzeb, Yunyoung Nam, Seifedine Kadry and Syed Irtaza Haider

    1 Department of Computer Science, HITEC University, Taxila, 47040, Pakistan

    2 Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, 11543, Saudi Arabia

    3 Department of Computer Science and Engineering, Soonchunhyang University, Asan, Korea

    4 Department of Mathematics and Computer Science, Faculty of Science, Beirut Arab University, Beirut, Lebanon

    Abstract: Artificial intelligence aids for healthcare have received a great deal of attention. Approximately one million patients with gastrointestinal diseases have been diagnosed via wireless capsule endoscopy (WCE). Early diagnosis facilitates appropriate treatment and saves lives. Deep learning-based techniques have been used to identify gastrointestinal ulcers, bleeding sites, and polyps. However, small lesions may be misclassified. We developed a deep learning-based best-feature method to classify various stomach diseases evident in WCE images. Initially, we use hybrid contrast enhancement to distinguish diseased from normal regions. Then, a pretrained model is fine-tuned, and further training is done via transfer learning. Deep features are extracted from the last two layers and fused using a vector length-based approach. We improve the genetic algorithm using a fitness function and kurtosis to select optimal features that are graded by a classifier. We evaluated a database containing 24,000 WCE images of ulcers, bleeding sites, polyps, and healthy tissue. The cubic support vector machine classifier was optimal; the average accuracy was 99%.

    Keywords: Stomach cancer; contrast enhancement; deep learning; optimization; feature fusion

    1 Introduction

    Stomach (gastric) cancer can develop anywhere in the stomach [1] and is curable if detected and treated early [2], for example, before the cancer spreads to lymph nodes [3]. The incidence of stomach cancer varies globally. In 2019, the USA reported 27,510 cases (17,230 males and 10,280 females) with 11,140 fatalities (6,800 males and 4,340 females) [4]. In 2018, 26,240 new cases and 10,800 deaths were reported in the USA (https://www.cancer.org/research/cancer-facts-statistics/allcancer-facts-figures/cancer-facts-figures-2020.html). In Australia, approximately 2,462 cases were diagnosed in 2019 (1,613 males and 849 females) with 1,287 deaths (780 males and 507 females) (www.canceraustralia.gov.au/affected-cancer/cancer-types/stomach-cancer/statistics). Regular endoscopy and wireless capsule endoscopy (WCE) are used to detect stomach cancer [5,6]. Typically, WCE yields 57,000 frames, and all must be checked [7]. Manual inspection is not easy and must be performed by an expert [8]. Automatic classification of stomach conditions has been attempted [9]. Image preprocessing is followed by feature extraction, fusion, and classification [10], and image contrast is enhanced by contrast stretching [11]. The most commonly used features are color, texture, and shape. Some researchers have fused selected features to enhance diagnostic accuracy [12]. Recent advances in deep learning have greatly improved performance [13].

    The principal conventional techniques used to detect stomach cancer are least-squares saliency transformation (LSST), a saliency-based method, contour segmentation, and color transformation [14]. Kundu et al. [15] sought to automate WCE frame evaluation employing LSST followed by probabilistic model-fitting; LSST detected the initially optimal coefficient vectors. A saliency/best-features method was used by Khan et al. [16] to classify stomach conditions using a neural network; the average accuracy was 93%. Khan et al. [7] employed deep learning to identify stomach diseases. Deep features were extracted from both original WCE images and segmented stomach regions; the latter were important in terms of model training. Alaskar et al. [17] established a fully automated method of disease classification. Pretrained deep models (AlexNet and GoogleNet) were used for feature extraction, and a softmax classifier was used for classification. A fusion of data processed by the two pretrained models enhanced accuracy. Khan et al. [10] used deep learning to classify stomach disease, employing Mask RCNN for segmentation and fine-tuning of ResNet101; the Grasshopper approach was used for feature optimization. Selected features were classified using a multiclass support vector machine (SVM). Wang et al. [18] presented a deep learning approach featuring superpixel segmentation. Initially, each image was divided into multiple slices and superpixels were computed. The superpixels were used to segment lesions and train a convolutional neural network (CNN) that extracted deep learning features and performed classification. The features of segmented lesions were found to be more useful than those of the original images. Xing et al. [19] extracted features from globally averaged pooled layers and fused them with the hyperplane features of a CNN model to classify ulcers; here, the accuracy was better than that afforded by any single model. Most studies have focused on training segmentation, which improves accuracy; however, the computational burden is high. Thus, most existing techniques are sequential and include disease segmentation, feature extraction, reduction, and classification. Most existing techniques focus on initial disease detection to extract useful features, which are then reduced. The limitations include mistaken disease detection and elimination of relevant features.

    In the medical field, data imbalances compromise classification. In addition, various stomach conditions have similar colors. Redundant and irrelevant features must be removed. In this paper, we report the development of a deep learning-based automated system employing a modified genetic algorithm (GA) to accurately detect stomach ulcers, polyps, bleeding sites, and healthy tissue.

    Our primary contributions are as follows. We develop a new hybrid method for color-based disease identification. Initially, a bottom-hat filter is applied and the product is fused with the YCbCr color space; dehazed colors are used for further enhancement. A pretrained AlexNet model is fine-tuned and further trained using transfer learning. Also, deep learning features are extracted from FC layers 6 and 7 and fused using a vector length-based approach. Finally, an improved GA that incorporates fitness- and kurtosis-controlled activation functions is developed.

    The remainder of this paper is organized as follows. Our methodology is presented in Section 2. The results and a discussion follow in Section 3. Conclusions and suggestions for future work are presented in Section 4.

    2 Proposed Methodology

    Fig. 1 shows the architecture of the proposed method. Initial database images are processed via a hybrid approach that facilitates color-based identification of diseased and healthy regions. AlexNet was fine-tuned via transfer learning and further trained using stomach features. A cross entropy-based activation function was employed for feature extraction from the last two layers; these were fused using a vector length approach. A GA was modified employing both a fitness function and kurtosis. Several classifiers were tested on several datasets; the outcomes were both numerical and visual.

    Figure 1: Architecture of the proposed methodology

    2.1 Color-Based Disease Identification

    Early, accurate disease identification is essential [20,21]. Segmentation is commonly used to identify skin and stomach cancers [22]. We sought to identify stomach conditions in WCE images. To this end, we employed color-based discrimination of healthy and diseased regions; the latter appear black or near-black. We initially applied bottom-hat filtering and then dehazing. The output was passed to the YCbCr color space for final visualization. Mathematically, this process is presented as follows.

    Given Δ(x), a database of four classes c1, c2, c3, and c4, consider X(i,j) ∈ Δ(x), an input image of dimension N × M × k, where N = 256, M = 256, and k = 3. Bottom-hat filtering is applied to image X(i,j) as follows:

    Xbot(i,j) = (X(i,j) · s) − X(i,j)

    where the bottom-hat image is represented by Xbot(i,j), s is a structuring element of size 21, and · is the closing operator. To generate the color, a dehazing formulation is applied to Xbot(i,j) as follows [23]:

    Xhaz(i,j) = (Xbot(i,j) − Light) / t(x) + Light

    Here, Xhaz(i,j) represents the haze-reduced image of the same dimension as the input image, Light represents the internal (atmospheric) color of the image, and t(x) is the transparency, with values in [0,1]. Then, the YCbCr color transformation is applied to Xhaz(i,j) for the final infected-region discrimination. The YCbCr color transformation is defined as follows [24]:

    Y = 0.299 R + 0.587 G + 0.114 B
    Cb = 128 − 0.168736 R − 0.331264 G + 0.5 B
    Cr = 128 + 0.5 R − 0.418688 G − 0.081312 B

    Here, the red, green, and blue channels are denoted R, G, and B, respectively. The visual output of this transformation is shown in Fig. 2. The top row shows original WCE images of different infections; the dark areas in the bottom row mark the identified disease-infected regions. These resultant images are utilized in the next step for deep learning feature extraction.
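    The three-stage enhancement described above can be sketched with plain numpy. This is an illustrative re-implementation, not the authors' MATLAB code: the naive loop-based morphology, the t0 lower bound in the dehazing step, and all function names are our assumptions; only the structuring-element size (21) and the standard BT.601 YCbCr coefficients follow the text.

```python
import numpy as np

def bottom_hat(gray, k=21):
    """Grayscale bottom-hat: closing(X, s) - X with a k x k square
    structuring element (the paper uses s = 21)."""
    pad = k // 2
    def filt(img, op):
        # Naive sliding-window max/min filter, clear but slow.
        padded = np.pad(img, pad, mode="edge")
        out = np.empty_like(img, dtype=float)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                out[i, j] = op(padded[i:i + k, j:j + k])
        return out
    closed = filt(filt(gray, np.max), np.min)   # dilation then erosion
    return closed - gray

def dehaze(img, light, t, t0=0.1):
    """Haze-model recovery X = (Xbot - Light) / t + Light; the t0 lower
    bound (standard practice, our addition) avoids division by zero."""
    return (img - light) / max(t, t0) + light

def to_ycbcr(rgb):
    """Standard RGB -> YCbCr transformation (BT.601, full range)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)
```

    A flat region yields a zero bottom-hat response, which is why the filter highlights only small dark structures against their surroundings.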

    Figure 2: Visual representation of contrast stretching results

    2.2 Convolutional Neural Network

    A CNN is a form of deep learning that facilitates object recognition in medical imaging [25], object classification [26], agriculture [27], action recognition [28], and other fields [29]. Classification is a major issue. Unlike most classification algorithms, a CNN does not require significant preprocessing. A CNN features three principal hierarchical layer types. The first two (convolution and pooling) are used for feature extraction (weights and biases). The last is usually fully connected and derives the final output. In this study, we use a pretrained version of AlexNet as the CNN.

    2.2.1 Modified AlexNet Model

    AlexNet [30] facilitates fast training and reduces over-fitting. The AlexNet model has five convolutional layers and three fully connected layers. The hidden layers employ the ReLU activation function, and the last layer uses a softmax function for final classification [31]. Each input is of dimension 227 × 227 × 3. The dataset is denoted Δ, and the training data are represented by Acd ∈ Δ; each Acd belongs to the real numbers R. The first-layer output is computed as:

    a(1) = s(m(1) · Acd + ρ(1))

    Here s(.) denotes the ReLU activation function and ρ(1) denotes the bias vector. m(1) denotes the weights of the first layer and is defined as

    m(1) ∈ R^(F(1) × F(0))

    where F denotes the fully connected layer sizes. The input of each layer is the output from the previous layer:

    a(n) = s(m(n) · a(n−1) + ρ(n))

    Here, a(n−1) and a(n) are the outputs of the second-to-last and last fully connected layers, respectively. Moreover, m(2) ∈ R^(F(2) × F(1)); therefore, a(Z) denotes the last fully connected layer, which helps extract the high-level features.

    The training loss is the cross-entropy:

    W(a) = −Σ_{v=1}^{U} p_v log(Q_v)

    Here, W(a) denotes the cross-entropy function, U indicates the overall number of classes, p_v is the true probability of class v, and Q_v is the predicted probability. The overall architecture of AlexNet is shown in Fig. 3.
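    The cross-entropy loss is straightforward to implement. A minimal numpy sketch (a one-hot true distribution p is assumed; the eps clip, our addition, guards against log(0)):

```python
import numpy as np

def cross_entropy(p, q, eps=1e-12):
    """W(a) = -sum over the U classes of p_v * log(Q_v), where p is the
    true (one-hot) distribution and q the predicted probabilities."""
    q = np.clip(np.asarray(q, dtype=float), eps, 1.0)
    return float(-np.sum(np.asarray(p, dtype=float) * np.log(q)))
```

    A confident correct prediction yields a loss near zero; mass placed on the wrong class inflates the loss.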

    Figure 3: Architecture of the AlexNet model

    2.2.2 Transfer Learning

    Transfer learning [32] is used to further train a model that has already been trained, improving its performance on the new task. A source domain and its learning task are given, together with a target domain and its learning task (m, r), where r ≠ m, and with the corresponding training data labels. We fine-tuned the AlexNet architecture and removed the last layer (Fig. 4). Then, we added a new layer featuring ulcers, polyps, bleeding sources, and normal tissue; these are the target labels. Fig. 5 shows that the source data were derived from ImageNet and that the source model was AlexNet. The number of source classes/labels was 1,000. The modified model featured four classes (see above) and was fine-tuned. Transfer learning delivered the new knowledge to create a modified CNN used for feature extraction.
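    Conceptually, this fine-tuning step swaps the 1,000-way ImageNet head for a 4-way head trained on top of frozen features. A numpy sketch of that idea, with random stand-in features instead of real FC7 activations; the sizes, learning rate, and synthetic labels are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)     # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Stand-in for frozen pretrained features (FC7 of AlexNet gives 4096-d;
# we use 10-d toy features here).
n, d = 200, 10
feats = rng.normal(size=(n, d))
# Four synthetic classes determined by the signs of the first two features.
labels = (feats[:, 0] > 0).astype(int) * 2 + (feats[:, 1] > 0).astype(int)
onehot = np.eye(4)[labels]

# The original 1000-way head is discarded; a new 4-way softmax head is
# trained from scratch while the feature extractor stays frozen.
W = np.zeros((d, 4))
b = np.zeros(4)
lr = 0.5
for _ in range(500):
    probs = softmax(feats @ W + b)
    grad = probs - onehot                    # softmax cross-entropy gradient
    W -= lr * feats.T @ grad / n
    b -= lr * grad.mean(axis=0)

acc = (softmax(feats @ W + b).argmax(axis=1) == labels).mean()
```

    Only the small head is updated, which is what makes transfer learning cheap relative to training the whole network.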

    Figure 4: Fine-tuning of the original AlexNet model

    Figure 5: Transfer learning for stomach infection classification

    2.3 Feature Extraction & Fusion

    Feature extraction is vital; features represent the object to be classified [33]. We extracted deep learning features from layers FC6 and FC7; the resulting vectors are F1 and F2. The original feature size was N × 4096: 4,096 features were extracted for each image. However, the accuracies of the individual vectors were inadequate. Thus, we combined the two vectors into one, fusing the information based on vector length, as follows.

    The resultant feature length is N × 8192. This feature length is large, and many features will be redundant or irrelevant. We minimized this issue by applying a mean threshold function that compares each feature to the mean m of the fused vector:

    Fsel(i) = Ffus(i) if Ffus(i) ≥ m, discarded otherwise

    Thus, only fused features ≥ m are selected before proceeding to the next step; the other features are ignored. Then, the optimal features are chosen using an improved GA (IGA).
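    The fusion and mean-threshold steps reduce to a concatenation and a column filter. A minimal numpy sketch with toy sizes; interpreting the rule as keeping features whose mean activation reaches the global mean m is our reading, and the random stand-in matrices replace the real FC6/FC7 activations:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for the two deep feature matrices (N x 4096 each in the
# paper; tiny toy sizes here).
f1 = rng.normal(size=(5, 8))
f2 = rng.normal(size=(5, 8))

# Serial (vector length-based) fusion: concatenate along the feature axis.
fused = np.concatenate([f1, f2], axis=1)    # N x 16 (N x 8192 in the paper)

# Mean-threshold filter: keep only columns whose mean activation is at
# least the global mean m of the fused vector.
m = fused.mean()
keep = fused.mean(axis=0) >= m
selected = fused[:, keep]
```

    Since the global mean equals the average of the column means, at least one column always survives the filter.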

    2.4 Modified Genetic Algorithm

    A GA [34] is an evolutionary algorithm applied to identify optimal solutions among a set of original solutions. In other words, a GA is a heuristic search algorithm that organizes the best solutions into spaces. GAs involve five steps: initialization (including population initialization), crossover, mutation, selection, and reproduction.

    Initialization. The maximum number of iterations, population size, crossover percentage, offspring number, mutation percentage, number of mutants, and the mutation and selection rates are initialized. Here, the iteration number is 100, the population size 20, the mutation rate 0.2, the crossover rate 0.5, and the selection pressure 7.

    Population Initialization. We initialize the GA population (here, size 20). Each individual is initialized randomly from the fused vector and evaluated using a fitness function; here, the softmax function with the fine k-nearest neighbor (F-KNN) method is used. Non-selected features undergo crossover and mutation.

    Crossover. Crossover mirrors chromosomal recombination: two parents are combined to create a child. Here, the uniform crossover rate is 0.5. Mathematically, crossover can be expressed as follows:

    C = u ⊙ P1 + (1 − u) ⊙ P2

    Here, P1 and P2 are the selected parents, and u is a random binary mask in which each element is drawn with probability 0.5. Visually, this process is shown in Fig. 6.

    Figure 6: Architecture of crossover

    Mutation. To impart unique characteristics to the offspring, one mutation is created in each offspring generated by crossover; the mutation rate is 0.2. Then, we use the Roulette Wheel (RW) [35] method to select chromosomes. RW selection is probability-based.

    In Eq. (16), the sorted population is yδ, the last population is Ol, and β1 is the selection pressure, which is 7. When mutation is done, a new generation is selected.
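    Roulette-wheel selection gives each chromosome a selection probability proportional to its quality. A minimal sketch; since the paper grades chromosomes by error rate, inverting the error into a score before normalizing is our assumption:

```python
import numpy as np

def roulette_wheel(fitness, rng):
    """Probability-proportional selection. `fitness` holds per-chromosome
    error rates, so lower error maps to a higher selection chance."""
    scores = 1.0 / (np.asarray(fitness, dtype=float) + 1e-9)
    probs = scores / scores.sum()              # normalize to a distribution
    return int(rng.choice(len(probs), p=probs))
```

    Over many draws, the chromosome with the lowest error dominates the selections while weaker chromosomes retain a small chance, which preserves diversity.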

    Selection and Reproduction. Crossover and mutation facilitate chromosome selection by the RW method; thus, the selection pressure is moderate rather than high or low. All offspring engage in reproduction, and then fitness values are computed. The chromosomes are illustrated in Fig. 7. They are evaluated using the fitness function, where the error rate is the measure of interest. Then, the old generation is updated.

    This process continues until no further iteration is possible. A vector has been obtained, but it remains of high dimension. To reduce the length, we added an activation function based on kurtosis:

    kurt(f) = E[(f − μ)^4] / σ^4

    This value is computed after iteration is complete and used to filter the selected features (chromosomes); those that do not fulfill the activation criterion are discarded.

    The final selected vector, of dimension N × 1726, is passed to several machine learning classifiers for classification.
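    The modified GA can be sketched end-to-end in numpy. This is an illustrative toy, not the authors' implementation: a leave-one-out 1-NN error stands in for the F-KNN fitness, the data are synthetic, rank-based parent picking from the better half replaces the full RW scheme, and the kurtosis threshold (6.0) and reduced generation count are our assumptions; the population size (20), crossover rate (0.5), and mutation rate (0.2) follow the text.

```python
import numpy as np

rng = np.random.default_rng(2)

def fitness(mask, X, y):
    """Stand-in fitness: leave-one-out 1-NN error on the selected columns
    (the paper grades chromosomes with the fine-KNN error rate)."""
    if not mask.any():
        return 1.0                         # empty chromosome: worst error
    Z = X[:, mask]
    d = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)            # a sample may not vote for itself
    return float((y[d.argmin(axis=1)] != y).mean())

def uniform_crossover(p1, p2, r, rate=0.5):
    pick = r.random(p1.size) < rate        # uniform crossover, rate 0.5
    return np.where(pick, p1, p2)

def mutate(chrom, r, rate=0.2):
    return chrom ^ (r.random(chrom.size) < rate)   # bit-flip, rate 0.2

def kurtosis(v):
    """Sample kurtosis E[(v - mu)^4] / sigma^4 for the post-filter."""
    mu, sd = v.mean(), v.std()
    return float(((v - mu) ** 4).mean() / (sd ** 4 + 1e-12))

# Toy feature matrix: only columns 0 and 1 carry the class signal.
X = rng.normal(size=(60, 12))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

pop = rng.random((20, 12)) < 0.5           # population of binary masks
for _ in range(30):                        # the paper runs 100 iterations
    errs = np.array([fitness(c, X, y) for c in pop])
    pop = pop[errs.argsort()]              # best chromosomes first
    children = [mutate(uniform_crossover(pop[rng.integers(10)],
                                         pop[rng.integers(10)], rng), rng)
                for _ in range(10)]        # parents from the better half
    pop[10:] = children                    # replace the worse half

best = pop[int(np.argmin([fitness(c, X, y) for c in pop]))]
# Kurtosis-controlled post-filter; the threshold 6.0 is our assumption.
final = [int(j) for j in np.flatnonzero(best) if kurtosis(X[:, j]) < 6.0]
```

    The post-filter runs once after the evolutionary loop, so it shrinks the surviving chromosome without adding per-generation cost.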

    Figure 7: Demonstration of chromosomes

    3 Results and Analysis

    3.1 Experimental Setup

    We used 4,000 WCE images and employed 10 classifiers: the Cubic SVM, Quadratic SVM, Linear SVM, Coarse Gaussian SVM, Medium Gaussian SVM, Fine KNN, Medium KNN, Weighted KNN, Cosine KNN, and Bagged Tree. Of the complete dataset, 70% was used for training and 30% for testing, with 10-fold cross-validation. We used a Core i7 CPU with 14 GB of RAM and a 4 GB graphics card. Coding employed MATLAB 2020a and MatConvNet (for deep learning). We measured sensitivity, precision, the F1-score, the false-positive rate (FPR), the area under the curve (AUC), accuracy, and computational time.
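    All of the reported metrics can be derived from a single confusion matrix. A small generic helper, written as a sketch; macro-averaging the per-class values, as the paper's tables appear to do, is our assumption:

```python
import numpy as np

def metrics(cm):
    """Sensitivity, precision, F1, FPR (macro-averaged) and accuracy from
    a confusion matrix cm[i, j] = count of true class i predicted as j."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    fn = cm.sum(axis=1) - tp               # missed members of each class
    fp = cm.sum(axis=0) - tp               # false alarms for each class
    tn = cm.sum() - tp - fn - fp
    sens = tp / (tp + fn)
    prec = tp / (tp + fp)
    f1 = 2 * sens * prec / (sens + prec)
    fpr = fp / (fp + tn)
    acc = tp.sum() / cm.sum()
    return sens.mean(), prec.mean(), f1.mean(), fpr.mean(), acc
```

    A perfectly diagonal matrix yields sensitivity, precision, F1, and accuracy of 1 with an FPR of 0, which is a useful sanity check before applying it to real predictions.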

    3.2 Results

    The results are shown in Tab. 1. The highest accuracy was 99.2% (Cubic SVM). The sensitivity, precision, and F1-score were all 99.00%; the FPR was 0.002, the AUC was 1.00, and the computational time was 83.79 s. The next best result was that of the Quadratic SVM, with associated metrics (in the above order) of 98.75%, 99.00%, 99.00%, 0.002, 1.000, and 78.52 s. The Cosine KNN, Weighted KNN, Medium KNN, Fine KNN, MG SVM, Coarse Gaussian SVM, Linear SVM, and Bagged Tree accuracies were 97.0%, 98.0%, 96.7%, 98.9%, 98.9%, 93.3%, 96.9%, and 96.8%, respectively. The Cubic SVM scatterplot of the original test features is shown in Fig. 8: the first panel refers to the original data and the second to the Cubic SVM predictions. The good Cubic SVM performance is confirmed by the confusion matrix shown in Fig. 9. Bleeding was accurately predicted 99% of the time, as were healthy tissue and ulcers; the polyp figure was >99%. The ROC plots of the Cubic SVM are shown in Fig. 10.

    Next, we applied our improved GA. The results are shown in Tab. 2. The top accuracy (99.8%) was afforded by the Cubic SVM, accompanied by sensitivity of 99.00%, precision of 99.25%, F1-score of 99.12%, FPR of 0.00, AUC of 1.00, and a time of 211.90 s. The second highest accuracy was 99.0%, achieved by the Fine KNN, accompanied by (in the above order) values of 99.0%, 99.25%, 99.12%, 0.00, 1.00, and 239.08 s. The Cosine KNN, Weighted KNN, Medium KNN, Quadratic SVM, MG SVM, Coarse Gaussian SVM, Linear SVM, and Bagged Tree achieved accuracies of 99.0%, 99.5%, 98.7%, 99.6%, 99.6%, 96.2%, 98.3%, and 98.3%, respectively. The Cubic SVM scatterplot of the original test features is shown in Fig. 11: the first panel refers to the original data and the second to the Cubic SVM predictions. The good Cubic SVM performance is confirmed by the confusion matrix shown in Fig. 12, in which the four classes are healthy tissue, bleeding sites, ulcers, and polyps. Bleeding was accurately predicted 99% of the time, healthy tissue <99% of the time, and ulcers and polyps >99% of the time. The ROC plots of the Cubic SVM are shown in Fig. 13.

    Table 1: Classification accuracy of the proposed optimal feature selection algorithm (testing feature results)

    Figure 8: Scatter plot of testing features after applying the GA

    Figure 9: Confusion matrix of the cubic SVM for the proposed method

    Figure 10: ROC plots for selected stomach cancer classes using the cubic SVM after applying the GA

    Table 2: Classification accuracy of the proposed optimal feature selection algorithm using training features

    Figure 11: Scatter plot of training features after applying the GA

    Figure 12: Confusion matrix of the cubic SVM

    Figure 13: ROC plots for selected stomach cancer classes using the cubic SVM after applying the GA

    3.3 Comparison with Existing Techniques

    In this section, we compare the proposed method with existing techniques (Tab. 3). In a previous study [7], CNN feature extraction, fusion of different features, selection of the best features, and classification were used to detect ulcers in WCE images; the dataset was collected at the POF Hospital, Wah Cantt, Pakistan, and the accuracy was 99.5%. Another study [9] described handcrafted and deep CNN feature extraction from the Kvasir, CVC-ClinicDB, a private, and ETIS-Larib PolypDB datasets; the accuracy was 96.5%. In another study [15], an LSST technique using probabilistic model-fitting was used to evaluate a WCE dataset; the accuracy was 98%. Our method employs deep learning and a modified GA. We used the private dataset of the POF Hospital, and the Kvasir and CVC datasets, to identify ulcers, polyps, bleeding sites, and healthy tissue. The accuracy was 99.8% and the computational time was 211.90 s. Our method outperforms the existing techniques.

    Table 3: Proposed method's accuracy compared with published techniques

    4 Conclusion

    We automatically identify various stomach diseases using deep learning and an improved GA. WCE image contrast is enhanced using a new color discrimination-based hybrid approach. This distinguishes diseased and healthy regions, which facilitates later feature extraction. We fine-tuned the pretrained AlexNet deep learning model for the classifications of interest and employed transfer learning to further train it. We fused features extracted from two layers; this improved both local and global information. We removed some redundant features by modifying the GA fitness function and using kurtosis to select the best features. This improved accuracy and minimized computational time. The principal limitation of the work is that the features are of high dimension, which increases computational cost. We will resolve this problem by employing DarkNet and MobileNet (the latest deep learning models [36,37]). In addition, localizing the disease region before feature extraction would accelerate execution.

    Acknowledgement: The authors extend their appreciation to the Deanship of Scientific Research at King Saud University for funding this work through research group no. (RG-1438-034). The authors thank the Deanship of Scientific Research and RSSU at King Saud University for their technical support.

    Funding Statement: This research was supported by a Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE) (P0012724, The Competency Development Program for Industry Specialist) and the Soonchunhyang University Research Fund.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
