
    A Novel Light Weight CNN Framework Integrated with Marine Predator Optimization for the Assessment of Tear Film-Lipid Layer Patterns


    Bejoy Abraham, Jesna Mohan, Linu Shine and Sivakumar Ramachandran,*

    1Department of Computer Science and Engineering, College of Engineering Muttathara, Thiruvananthapuram, Kerala, 695008, India

    2Department of Computer Science and Engineering, Mar Baselios College of Engineering and Technology, Thiruvananthapuram, Kerala, 695015, India

    3Department of Electronics and Communication Engineering, College of Engineering Trivandrum, Kerala, 695016, India

    ABSTRACT Tear film, the outermost layer of the eye, is a complex and dynamic structure responsible for tear production. The tear film lipid layer is a vital component of the tear film that provides a smooth optical surface for the cornea and wets the ocular surface. Dry eye syndrome (DES) is a symptomatic disease caused by reduced tear production, poor tear quality, or excessive evaporation. Its diagnosis is a difficult task due to its multifactorial etiology. Out of the several clinical tests available, the evaluation of the interference patterns of the tear film lipid layer forms a potential tool for DES diagnosis. An instrument known as the Tearscope Plus allows rapid assessment of the lipid layer. A grading scale composed of five categories is used to classify lipid layer patterns. The reported work proposes the design of an automatic system employing lightweight convolutional neural networks (CNN) and nature-inspired optimization techniques to assess the tear film lipid layer patterns by interpreting the images acquired with the Tearscope Plus. The designed framework achieves promising results compared with the existing state-of-the-art techniques.

    KEYWORDS Dry eye syndrome; Tearscope Plus; tear film; deep neural networks

    1 Introduction

    The eyes are the most delicate and complex organs that a human being possesses. The ocular surface represents the eye's outer surface, which consists of the cornea and the conjunctiva. This outer surface has a complex and dynamic covering called the tear film, which acts as an interface between the eye and the external environment. The tear film is a three-layer structure consisting of the innermost mucous layer, the middle aqueous layer, and a delicate anterior lipid layer. The tear film lipid layer, composed of polar and non-polar lipids, provides a smooth covering for the cornea and impedes evaporation from the ocular surface. Abnormal conditions of the tear film due to poor tear quality, reduced tear production, or excessive tear evaporation lead to Evaporative Dry Eye (EDE) syndrome.

    Dry eye syndrome is a symptomatic disease that affects a wide range of the population. Disease-related difficulties are most common in persons over 50 years old, but they are also on the rise among young adults, which experts attribute to the ubiquity of smartphones and computers. The prolonged wearing of contact lenses may also contribute to the prevalence of dry eyes among the young population [1]. A recent survey, conducted when face masks were widely worn against the spread of SARS-CoV-2, reported dry eye-related symptoms in the general population [2]. Owing to its multifactorial etiology, several clinical tests exist for its diagnosis. The evaluation of the interference patterns in images of the tear film lipid layer can provide diagnostic information about this disease. The Tearscope Plus allows clinicians to rapidly assess the lipid layer thickness and grade these patterns into one of five categories.

    The International Dry Eye Workshop (DEWS) established that dry eye syndrome (DES) is a multifactorial disease with distinct manifestations [3]. The symptoms include visual disturbance, discomfort in the eyes, and tear film instability leading to potential damage to the ocular surface. Moreover, the disorder causes an increased osmotic concentration of the tear film and inflammation of the ocular surface. Statistics show that the disease is prevalent among 5%-50% of the general population [1].

    The lipid layer plays a significant role in restricting evaporation during the inter-blink period and affects tear film stability. The Tearscope Plus, an instrument designed by Guillon, allows the evaluation of lipid layer thickness using five primary grades of interference patterns: Color Fringe, Open Mesh-work, Wave, Closed Mesh-work, and Amorphous. The visual appearance and colour of the interference patterns provide prognostic features of the structural regularity and thickness of the lipid layers. The manual screening of tear film images for identifying different patterns is very cumbersome. Moreover, direct observation of the tear film is complex and poses great difficulties in diagnosing DES. In this paper, an assessment of tear film stability through lipid layer pattern analysis is reported. We aim to design an automatic system to classify four different tear film lipid layer patterns, namely Color Fringe, Open Mesh-work, Wave and Closed Mesh-work, defined by Guillon [4]. Images of the Amorphous category are not used in the study, as this pattern rarely appears during disease diagnosis [5]. We employ deep learning (DL) techniques to classify the lipid layer patterns in tear film images captured using the Tearscope Plus into these four categories. To the best of our knowledge, this is the first work in the literature that uses DL-based techniques for tear film classification.

    In Table 1, we provide the characteristic features of the four interference patterns used in the present study.

    2 Related works

    The literature features a large volume of research work on eye imaging. Compared to retina-based blood vessel segmentation studies [6-9], the number of research publications on tear film imaging is much smaller. The research contributions [5,10,11] published by the VARPA group are the only references available for this study. The works reported were based on machine learning techniques employing hand-crafted features. The published results used experiments with different colour channels, feature descriptors and feature selection techniques. The major work reported in [5] used texture and colour features extracted from RGB, grayscale and L*a*b colour components. The texture features were generated using Gabor filters, Butterworth filters, Markov Random Fields, the Discrete Wavelet Transform, and the co-occurrence matrix. Feature selection was performed using the consistency-based filter, Correlation-Based Feature Selection, and INTERACT, followed by classification using an SVM classifier. The remaining works [10,11] are only subsidiary to [5] and hence are not discussed again.

    Table 1: Examples of representative images obtained from the VARPA dataset along with the typical characteristics of each class, namely Open Mesh-work, Closed Mesh-work, Wave, and Color Fringe

    The proposed framework achieves classification efficiency via two stages. First, rather than using highly complex neural network architectures for training, our proposed technique employs a lightweight CNN architecture inspired by two lightweight pre-trained mobile CNNs, namely EfficientnetB0 [12] and MobilenetV2 [13], that is simple enough to reduce computational loads while still providing high accuracy when trained on Tearscope images. Moreover, deployment of the network on mobile devices is possible with lightweight CNNs. Second, instead of relying on an end-to-end deep learning framework for classifying tear film lipid layer patterns, we employ machine learning techniques to classify concatenated features extracted from the lightweight CNN architecture. The following sections provide a detailed explanation of the technique, which includes the method for generating features and then classifying them into different lipid pattern groups.

    To summarize, the research works present in the literature for classifying Tearscope images use texture features derived from various colour channels. In addition, to reduce the processing time, the extracted feature set is passed through feature selection algorithms before it is finally fed to a classifier. In our work, instead of using handcrafted features, we employ convolutional neural networks for tear film classification. The main contributions are:

    i. A novel deep learning framework for the classification of tear film images.

    ii. The proposed design integrates lightweight convolutional neural networks (CNN) with the Marine Predator Algorithm, a nature-inspired optimization technique, for the classification.

    iii. The proposed framework utilizes graph cut segmentation to extract the region of interest (ROI) from the Tearscope images, whereas state-of-the-art techniques employed complex segmentation algorithms requiring manual interventions.

    3 Materials Used

    The image datasets used in the present study were obtained from the Faculty of Optics and Optometry, University of Santiago De Compostela, Spain. The images were captured using an instrument named Tearscope Plus, and the annotations were made by a group of optometrists. The datasets are publicly available for research on the website of the VARPA research group (tinyurl.com/5cdf8cep). The dataset features are elaborated below:

    1. VOPTICAL V_l1 dataset: The VOPTICAL l1 (V_l1) dataset contains 105 images of the preocular tear film taken under optimum illumination conditions and acquired from healthy subjects with dark eyes, aged from 19 to 33 years. The dataset includes 29 Open Mesh-work, 29 Closed Mesh-work, 25 Wave, and 22 Color Fringe images. All the images have a spatial resolution of 1024×768 pixels and were acquired with the Tearscope Plus.

    2. VOPTICAL L dataset: The VOPTICAL L (V_L) dataset contains 108 images of the preocular tear film taken under optimum illumination conditions and acquired from healthy subjects aged from 19 to 33 years. The dataset includes 30 Open Mesh-work, 28 Closed Mesh-work, 27 Wave and 23 Color Fringe images. All the images have a spatial resolution of 1024×768 pixels and were acquired with the Tearscope Plus.

    4 Method

    The architecture of the proposed framework is shown in Fig. 1. The Tearscope eye images are initially passed through a graph cut segmentation module, which removes the region outside the iris to extract the ROI. The segmented ROI is then applied to a combination of two lightweight pre-trained mobile CNNs, namely EfficientnetB0 [12] and MobilenetV2 [13]. The features extracted from the last fully connected layers of EfficientnetB0 and MobilenetV2 are applied as input to the Marine Predator Algorithm for feature selection. The selected features are finally used to train a k-nearest neighbor (KNN) classifier, which classifies the images into four categories. The various stages of the proposed framework are explained in the following sections.

    Figure 1: Architecture of the proposed DL framework for the classification of tear film lipid layer patterns. From the input image, the ROI is initially segmented, followed by feature extraction and prediction

    4.1 Segmentation

    The segmentation of the ROI in the Tearscope images is accomplished using the graph cut technique [14,15]. The tear film area is effectively segmented out from the remaining anatomical structures present in an image. The segmentation procedure involves the generation of a network flow graph based on the input image. The image is represented as a graph structure with each pixel forming a vertex or node. Each pixel is connected by edges weighted by the affinity or similarity between its two vertices. The algorithm cuts along weak edges, achieving the delineation of objects in the image. The user needs to specify background and foreground seeds to perform the segmentation of the intended ROI. We used the publicly available graph cut implementation in the MATLAB Image Segmenter application (https://tinyurl.com/2p89v8k6) to segment the ROI present in a given image. Fig. 2 shows a few sample images and their corresponding segmented counterparts.
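    A minimal sketch of a comparable graph-cut style ROI extraction, assuming OpenCV's GrabCut as a stand-in for the interactive MATLAB Image Segmenter tool used in the paper (the file name and seed rectangle are hypothetical):

```python
import cv2
import numpy as np

# Load a Tearscope image (hypothetical file name) and prepare GrabCut buffers.
image = cv2.imread("tearscope_sample.png")
mask = np.zeros(image.shape[:2], dtype=np.uint8)
bgd_model = np.zeros((1, 65), dtype=np.float64)
fgd_model = np.zeros((1, 65), dtype=np.float64)

# Assumed rough rectangle around the iris, playing the role of the foreground seed.
rect = (150, 100, 700, 550)

# Iteratively cut the pixel graph along weak edges to separate foreground from background.
cv2.grabCut(image, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Keep definite and probable foreground pixels; everything outside the ROI is set to zero.
roi_mask = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
roi = image * roi_mask[:, :, None]
cv2.imwrite("segmented_roi.png", roi)
```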

    4.2 Feature Extraction

    The proposed model utilizes two lightweight pre-trained mobile CNNs, MobilenetV2 [13] and EfficientnetB0 [12], for feature extraction. Mobile CNNs have fewer parameters and are faster than conventional CNNs, and deployment of networks on mobile devices is possible with lightweight CNNs [16]. Both models accept images of size 224×224×3 as input. MobilenetV2 uses depthwise separable convolutions as its basic building block. In addition, the network uses linear bottlenecks between layers to remove nonlinearities. Shortcut connections are used between the bottlenecks to provide faster training and improve performance.

    EfficientnetB0, developed by Tan et al. [12], has the mobile inverted bottleneck as its main building block. The model uses a compound scaling method which balances network width, depth and resolution for better performance. Both CNNs were pre-trained on the ImageNet database [17], which has images belonging to 1000 different classes. One thousand features were extracted from the last fully connected layer of each CNN. The concatenated feature set, consisting of 2000 features, is passed to the nature-inspired Marine Predator Algorithm for feature selection.
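    A minimal sketch of this feature-extraction step, assuming PyTorch/torchvision implementations of the two backbones (the segmented ROI file name is a placeholder):

```python
import torch
from PIL import Image
from torchvision import models, transforms

# Both backbones are loaded with ImageNet weights; the 1000-dimensional output of the
# last fully connected layer of each network is used as its feature vector.
mobilenet = models.mobilenet_v2(weights="IMAGENET1K_V1").eval()
effnet = models.efficientnet_b0(weights="IMAGENET1K_V1").eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),                        # both networks expect 224x224x3 input
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),      # ImageNet normalisation
])

def extract_features(image_path: str) -> torch.Tensor:
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        f_mobile = mobilenet(x)                           # 1 x 1000
        f_eff = effnet(x)                                 # 1 x 1000
    # Concatenate the two 1000-dimensional vectors into a 2000-dimensional descriptor.
    return torch.cat([f_mobile, f_eff], dim=1).squeeze(0)

features = extract_features("segmented_roi.png")          # hypothetical segmented ROI image
```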

    Figure 2: Sample images obtained from the VARPA datasets and their corresponding ROI images extracted using the graph-cut technique. The first and third rows correspond to the raw sample images obtained from the V_l1 and V_L datasets, respectively. Similarly, the second and fourth rows correspond to the extracted ROI images obtained using the graph-cut technique. (a, e, i, m) represent Closed Mesh-work, (b, f, j, n) Color Fringe, (c, g, k, o) Open Mesh-work and (d, h, l, p) Wave tear film pattern images

    4.3 Feature Selection

    The selection of relevant features is done using an optimization technique, namely the Marine Predator Algorithm (MPA) [18]. Among marine predators and prey, the predators forage using Brownian and Lévy random movements as their major strategy, and this strategy is adopted in MPA. Similar to most metaheuristics, MPA is a population-based method in which the initial solutions are uniformly distributed over the search space as the first trial [18]. A set of n members of the prey is selected as the initial population, with each member initialized between the lower and upper bounds of the solution space using the following equation:
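    $$\vec{X}_{0} = \vec{X}_{\min} + \vec{r} \otimes (\vec{X}_{\max} - \vec{X}_{\min})$$

    where $\vec{X}_{\min}$ and $\vec{X}_{\max}$ are the lower and upper bounds of the search space, $\vec{r}$ is a vector of uniform random numbers in $[0, 1]$, and $\otimes$ denotes entry-wise multiplication [18].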

    4.3.1 High-Velocity Ratio

    This is the phase where the velocity of the prey is greater than that of the predator [18]. At each iteration in this phase, the prey positions are updated as follows:
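    $$\vec{stepsize}_{i} = \vec{R}_{B} \otimes (\vec{Elite}_{i} - \vec{R}_{B} \otimes \vec{Prey}_{i}), \qquad \vec{Prey}_{i} = \vec{Prey}_{i} + P \cdot \vec{R} \otimes \vec{stepsize}_{i}, \qquad i = 1, \ldots, n$$

    where $\vec{Elite}_{i}$ denotes the elite (top predator) positions, $\vec{R}_{B}$ is a vector of normally distributed random numbers representing Brownian motion, $\vec{R}$ is a vector of uniform random numbers in $[0, 1]$, and $P = 0.5$ [18].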

    4.3.2 Unit Velocity Ratio

    In this phase, both the predator and the prey have the same velocity. This phase includes both exploration (predator) and exploitation (prey) [18]. The rule used in this phase is that when the velocity is unity, the prey moves following the Lévy strategy and the predator moves using the Brownian strategy. The mathematical representation of the rule is as follows:
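    For the first half of the population (prey moving with Lévy steps),

    $$\vec{stepsize}_{i} = \vec{R}_{L} \otimes (\vec{Elite}_{i} - \vec{R}_{L} \otimes \vec{Prey}_{i}), \qquad \vec{Prey}_{i} = \vec{Prey}_{i} + P \cdot \vec{R} \otimes \vec{stepsize}_{i}, \qquad i = 1, \ldots, n/2$$

    and for the second half (predator moving with Brownian steps),

    $$\vec{stepsize}_{i} = \vec{R}_{B} \otimes (\vec{R}_{B} \otimes \vec{Elite}_{i} - \vec{Prey}_{i}), \qquad \vec{Prey}_{i} = \vec{Elite}_{i} + P \cdot CF \otimes \vec{stepsize}_{i}, \qquad i = n/2, \ldots, n$$

    where $\vec{R}_{L}$ is a vector of Lévy-distributed random numbers and $CF = \left(1 - \frac{Iter}{Max\_Iter}\right)^{2\,Iter/Max\_Iter}$ is an adaptive parameter controlling the predator step size [18].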

    4.3.3 Low-Velocity Ratio

    In this phase, the predator moves faster than the prey, and the phase is associated with high exploitation capability [18]. The predator follows the Lévy strategy while the prey moves in either Brownian or Lévy motion. The update is as follows:
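    $$\vec{stepsize}_{i} = \vec{R}_{L} \otimes (\vec{R}_{L} \otimes \vec{Elite}_{i} - \vec{Prey}_{i}), \qquad \vec{Prey}_{i} = \vec{Elite}_{i} + P \cdot CF \otimes \vec{stepsize}_{i}, \qquad i = 1, \ldots, n$$

    with the same notation as in the previous phases [18].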

    4.4 Classification Using KNN

    The k-nearest neighbor (KNN) classifier has been extensively used in the classification of biomedical data. Classification is carried out by comparing a given test sample with training data of a similar nature. The training samples are defined by n attributes, and each sample represents a point in an n-dimensional space. The KNN classifies an unknown sample x0 by searching for the k training samples that are closest to x0 [19]. These k training samples represent the k nearest neighbours of x0. The new sample is assigned the most frequent class label associated with the k nearest neighbours. The closeness of the unknown sample x0 to the training instances is determined using the Euclidean distance metric [19]. The Euclidean distance to the k neighbours is evaluated as
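    $$d(N, M) = \sqrt{\sum_{j=1}^{F} (N_{j} - M_{j})^{2}}$$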

    where Nj and Mj represent a specific attribute in the two samples being compared, and j runs from 1 to F, where F is the number of attributes used [20]. The value of k is set empirically (k = 3) in the proposed method.
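    A minimal sketch of this classification step, assuming scikit-learn's KNeighborsClassifier and placeholder feature vectors standing in for the MPA-selected features:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Placeholder data standing in for MPA-selected feature vectors and the four
# lipid layer pattern labels (0..3); the real features come from the previous stage.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(80, 200)), rng.integers(0, 4, size=80)
X_test, y_test = rng.normal(size=(20, 200)), rng.integers(0, 4, size=20)

# k = 3 with the Euclidean distance metric, as used in the proposed method.
knn = KNeighborsClassifier(n_neighbors=3, metric="euclidean")
knn.fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, knn.predict(X_test)))
```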

    5 Results and Discussion

    The main objective of our work is to develop a general-purpose technique that can be tweaked and used in a variety of contexts. Although our technique is similar to a number of existing deep learning classification techniques, we focus on reducing computational complexity and memory requirements by using lightweight CNN frameworks, which could also result in improved classification efficiency. Our goal is to propose a method for accurately classifying lipid layer patterns while lowering the number of computational operations (such as convolution, pooling, batch normalisation, and activations) and the amount of memory required to run the system, two key factors affecting the computational complexity of deep learning-based systems. A framework that produces positive results along these lines can be adapted for a range of scenarios, including deployment on mobile devices. Further exploration of the current approach in each context, however, was deemed outside the scope of the current study.

    The experiments were performed on a machine with an Intel Core i7 CPU, 16 GB RAM and an NVIDIA GTX 1060 GPU with 6 GB memory, using MATLAB. For the performance evaluation of the proposed pipeline, we computed accuracy, precision, recall, F1-score, and kappa score.

    To perform the experiments, we randomly partitioned the data into two folds, each containing 50% of the data, for training and validation. The proposed model, comprising EfficientnetB0, MobilenetV2, Marine Predator feature optimization and a KNN classifier, showed an accuracy of 98.08% and 98.15% on datasets V_l1 and V_L, respectively. The confusion matrices and the various performance measures are displayed in Fig. 3 and Table 2, respectively.

    Figure 3: (a) Confusion matrix corresponding to dataset V_l1, (b) confusion matrix corresponding to dataset V_L

    Table 2: The optimum performance obtained for the proposed pipeline, i.e., the result obtained for the framework composed of MobilenetV2, EfficientnetB0, the Marine Predator Algorithm and the KNN classifier

    The proposed architecture was selected based on the extensive experiments conducted on the VARPA datasets. In the following, the performance evaluation metrics of the different experiments are shown, which underline the effectiveness of the proposed pipeline.

    5.1 Comparison of Results Achieved Using Various CNNs

    In the proposed study, instead of using traditional CNN frameworks, we used lightweight CNNs for feature extraction. The use of lightweight CNNs reduces both the computational complexity and the number of model parameters of the proposed system. The combination of EfficientnetB0 and MobilenetV2 was selected based on experimental analysis using various lightweight CNNs. First, we performed experiments using four different lightweight CNNs (MobilenetV2, EfficientnetB0, Shufflenet, NasnetMobile). The best performing CNNs were MobilenetV2 and EfficientnetB0. Then we combined the features extracted using MobilenetV2 and EfficientnetB0. The combination of features from MobilenetV2 and EfficientnetB0 showed an improvement over the performance of the various CNNs when used individually.

    The experiments were performed using the V_l1 and V_L datasets. The results achieved using various pre-trained CNNs are shown in Table 3. Fig. 4 shows the comparison of accuracy achieved using various pre-trained CNNs on both datasets.

    Table 3: Result of the proposed pipeline using various CNNs

    Table 3 (continued)

    | Dataset | CNN | Class | Precision | Recall | F1-score | Kappa score | Accuracy |
    | V_l1 | MobilenetV2 + EfficientnetB0 | Closed Mesh-work | 1.0 | 0.93 | 0.97 | 0.974 | 98.08 |
    | | | Color Fringe | 1.0 | 1.0 | 1.0 | | |
    | | | Open Mesh-work | 0.93 | 1.0 | 0.97 | | |
    | | | Wave | 1.0 | 1.0 | 1.0 | | |
    | V_L | MobilenetV2 + EfficientnetB0 | Closed Mesh-work | 1.0 | 0.93 | 0.97 | 0.975 | 98.15 |
    | | | Color Fringe | 1.0 | 1.0 | 1.0 | | |
    | | | Open Mesh-work | 0.94 | 1.0 | 0.97 | | |
    | | | Wave | 1.0 | 1.0 | 1.0 | | |

    Figure 4: Comparison of accuracy achieved using various pre-trained CNNs in dataset V_l1 and dataset V_L

    The proposed network offers an improvement in accuracy of about 2% on the V_l1 dataset and 4% on the V_L dataset. A similar improvement can be seen in all the other performance metrics we considered. The importance of an accurate diagnosis cannot be overstated, because diagnostic errors cause delays and mistakes in treatment that can be fatal. An accurate diagnosis is critical to avoid wasting precious time on the wrong course of treatment.

    5.2 Comparison of Results Achieved Using Various Feature Selection Methods

    A set of experiments was also conducted to evaluate the performance of the Marine Predator feature selection algorithm. The results suggest that the proposed MPA-KNN strategy is capable of selecting the most relevant and optimal characteristics. It outperformed the well-known metaheuristic algorithms we put to the test. The performance of commonly used optimization algorithms for feature selection, namely Particle Swarm Optimization (PSO), the Cuckoo Search Algorithm (CS), Artificial Butterfly Optimization (ABO) and Harmony Search (HS), was compared against MPA. Table 4 and Fig. 5 illustrate the performance of the various feature selection techniques in combination with the proposed multi-CNN and KNN classifier. The results achieved using MPA outperform the other techniques on both datasets.

    Table 4: Results achieved using various feature selection methods

    Table 4 (continued)

    | Dataset | Feature selection | Class | Precision | Recall | F1-score | Kappa score | Accuracy |
    | V_L | ABO | Closed Mesh-work | 0.93 | 0.81 | 0.87 | 0.876 | 90.74 |
    | | | Color Fringe | 0.91 | 1.0 | 0.95 | | |
    | | | Open Mesh-work | 0.88 | 1.0 | 0.93 | | |
    | | | Wave | 0.92 | 0.86 | 0.89 | | |
    | V_l1 | HS | Closed Mesh-work | 0.93 | 0.93 | 0.93 | 0.948 | 96.15 |
    | | | Color Fringe | 1.0 | 1.0 | 1.0 | | |
    | | | Open Mesh-work | 0.93 | 0.93 | 0.93 | | |
    | | | Wave | 1.0 | 1.0 | 1.0 | | |
    | V_L | HS | Closed Mesh-work | 1.0 | 0.88 | 0.93 | 0.95 | 96.3 |
    | | | Color Fringe | 1.0 | 1.0 | 1.0 | | |
    | | | Open Mesh-work | 0.94 | 1.0 | 0.97 | | |
    | | | Wave | 0.92 | 1.0 | 0.96 | | |

    Figure 5: Comparison of accuracy achieved using various feature selection techniques in dataset V_l1 and dataset V_L

    5.3 Analysis of Computational Complexity

    Table 5a shows the computational time taken for feature extraction followed by feature selection and classification (in seconds) for each of the networks. Even though the results achieved using the proposed network are better, its computational time is higher than that of the other networks. Table 5b provides the execution time taken by the various feature selection techniques. All the techniques except ACO took very little time for feature selection.

    Table 5a: Execution time using various networks

    Table 5b: Execution time using various feature selection techniques

    The proposed framework is a combination of two lightweight CNNs, namely MobilenetV2 and EfficientnetB0. Hence, the feature vector used for classification has a large dimension, as it contains the features derived using the two lightweight CNNs. With regard to processing time, the lightweight CNNs execute faster when used singly than in combination. It is worth noting that, even when used in combination, the execution times are only 30.50 s and 32 s for the datasets V_l1 and V_L, respectively, which is short enough for real medical applications.

    5.4 Comparison of Results Achieved Using Various Classifiers

    The next set of experiments was conducted to evaluate the performance of KNN against other classifiers. The WEKA tool was used for classification using Naive Bayes (NB), BayesNet, SVM and Random Forest (RF). The features selected using MPA were passed to the classifiers, and the default parameters in WEKA were used for the classification. The results of the classification are displayed in Table 6 and Fig. 6 and show the superior performance of the KNN classifier.

    Table 6: Analysis of performance of various classifiers

    Table 6 (continued)

    | Dataset | Classifier | Class | Precision | Recall | F1-score | Kappa score | Accuracy |
    | V_L | Random Forest | Closed Mesh-work | 0.833 | 0.833 | 0.833 | 0.7025 | 77.7778 |
    | | | Color Fringe | 0.75 | 0.9 | 0.818 | | |
    | | | Open Mesh-work | 0.929 | 0.684 | 0.788 | | |
    | | | Wave | 0.625 | 0.769 | 0.69 | | |

    Figure 6: Comparison of accuracy achieved using various classifiers in dataset V_l1 and dataset V_L

    5.5 Analysis of the Impact of ROI Segmentation

    In addition to the experiments performed using various CNN frameworks and feature selection techniques, we also checked the effect of ROI segmentation on the performance evaluation. Initially, the images were applied directly to the proposed network architecture without performing segmentation of the ROI. In the second step, using graph-based segmentation, the ROI was extracted from the raw images and passed to the network. The experiments show that the best results are obtained for the ROI-based analysis. Table 7 and Fig. 7 illustrate the impact of segmentation on the model's performance.

    Table 7: Analysis of performance of the model after graph cut segmentation

    Table 7 (continued)

    | Dataset | Segmentation | Class | Precision | Recall | F1-score | Kappa score | Accuracy |
    | V_l1 | After segmentation | Closed Mesh-work | 1.0 | 0.93 | 0.97 | 0.974 | 98.08 |
    | | | Color Fringe | 1.0 | 1.0 | 1.0 | | |
    | | | Open Mesh-work | 0.93 | 1.0 | 0.97 | | |
    | | | Wave | 1.0 | 1.0 | 1.0 | | |
    | V_L | After segmentation | Closed Mesh-work | 1.0 | 0.93 | 0.97 | 0.975 | 98.15 |
    | | | Color Fringe | 1.0 | 1.0 | 1.0 | | |
    | | | Open Mesh-work | 0.94 | 1.0 | 0.97 | | |
    | | | Wave | 1.0 | 1.0 | 1.0 | | |

    Figure 7: Analysis of the effect of segmentation on the model's performance

    5.6 Comparison with State-of-the-Art Methods

    Table 8 shows the performance comparison of the proposed method with the state-of-the-art methods. The proposed method achieved superior performance compared to all other methods except [10]. However, the results reported in [10] are based solely on the model's performance on dataset V_l1, whereas the proposed pipeline achieved promising results on both datasets.

    Table 8: Comparison of results with state-of-the-art methods

    Table 8 (continued)

    | Work | Method | Dataset | Number of images | Accuracy |
    | Ramos et al. [21] | Color and texture features + SVM | V_l1 | 105 | 91.43 |
    | Bolón-Canedo et al. [22] | Color and texture features + ReliefF + SVM | V_l1 | 105 | 92.00 |
    | Remeseiro et al. [10] | Color and texture features + PCA + SVM | V_l1 | 105 | 98.10 |
    | Peteiro-Barral et al. [23] | MCDM + Rank Correlation | V_l1 | 105 | 96.00 |
    | Remeseiro et al. [24] | Color and texture features + feature selection + SVM | V_l1 | 105 | 94.29 |
    | | Color and texture features + feature selection + SVM | V_L | 108 | 91.67 |

    5.7 Limitations and Future Scope

    The major hindrance in DES research is the unavailability of standard datasets for validating the developed algorithms. Even though the proposed method achieved promising results, the number of images available in the datasets was small. Hence, to ensure the robustness of the method, the proposed algorithm needs to be tested on larger datasets. Another limitation of the study is the non-inclusion of Amorphous images in the classification framework. It is worth noting that the Amorphous class of images rarely occurs during diagnosis, and hence similar research works also avoid classifying the Amorphous category of images.

    The proposed architecture employs pre-trained CNNs, trained on non-medical images, for the classification of lipid layer patterns. As future work, we propose to compile a dry eye disease dataset sufficiently large to train a CNN from scratch. A custom-made CNN trained on a larger dry eye disease dataset could provide better results. Deployment of the network on mobile devices is also planned as future work.

    6 Conclusion

    The study described in this paper presents a novel method integrating deep learning with nature-inspired feature selection techniques for the diagnosis of dry eye syndrome. Among the various pre-trained CNNs and nature-inspired feature selection techniques, the combination of MobilenetV2 and EfficientnetB0 with the Marine Predator Algorithm demonstrated the best performance. The proposed method achieved promising results on the experimented datasets. Even though the method produced significant results, empirical studies on larger datasets are required to confirm the robustness of the proposed technique. After more trials, the method can be used as a computer-aided tool for assisting clinicians.

    Funding Statement: The authors received no specific funding for this study.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
