
    Automatic counting of retinal ganglion cells in the entire mouse retina based on improved YOLOv5

Zoological Research, 2022, Issue 5

Jing Zhang, Yi-Bo Huo, Jia-Liang Yang, Xiang-Zhou Wang, Bo-Yun Yan, Xiao-Hui Du, Ru-Qian Hao, Fang Yang, Juan-Xiu Liu, Lin Liu*, Yong Liu, Hou-Bin Zhang*

    1 MOEMIL Laboratory, School of Optoelectronic Information, University of Electronic Science and Technology of China, Chengdu, Sichuan 610054, China

    2 Key Laboratory for Human Disease Gene Study of Sichuan Province, Sichuan Provincial People’s Hospital, University of Electronic Science and Technology of China, Chengdu, Sichuan 610072, China

ABSTRACT Glaucoma is characterized by the progressive loss of retinal ganglion cells (RGCs), although the pathogenic mechanism remains largely unknown. To study the mechanism and assess RGC degeneration, mouse models are often used to simulate human glaucoma and specific markers are used to label and quantify RGCs. However, manually counting RGCs is time-consuming and prone to subjective bias. Furthermore, semi-automated counting methods can produce significant differences under different parameter settings, thereby precluding objective evaluation. Here, to improve counting accuracy and efficiency, we developed an automated algorithm based on an improved YOLOv5 model, which uses five channels instead of one, with a squeeze-and-excitation block added. The complete number of RGCs in an intact mouse retina was obtained by dividing the retina into small overlapping areas, counting the cells in each area, and then merging the results using a non-maximum suppression algorithm. The automated quantification results showed very strong correlation (mean Pearson correlation coefficient of 0.993) with manual counting. Importantly, the model achieved an average precision of 0.981. Furthermore, the graphics processing unit (GPU) calculation time for each retina was less than 1 min. The developed software has been uploaded online as a free and convenient tool for studies using mouse models of glaucoma, which should help elucidate disease pathogenesis and potential therapeutics.

Keywords: Retinal ganglion cell; Cell counting; Glaucomatous optic neuropathies; Deep learning; Improved YOLOv5

    INTRODUCTION

    Glaucoma describes a group of diseases characterized by optic papillary atrophy and depression, visual field loss, and hypoplasia (Casson et al., 2012). It is caused by intraocular pressure (IOP)-associated optic neuropathy with loss of retinal ganglion cells (RGCs) (Berkelaar et al., 1994). By the time glaucoma presents with typical visual field defects, such as blurred or lost vision, the loss of RGCs may already be as high as 50%. Thus, apparent changes in the number of RGCs provide the basis for early glaucoma diagnosis (Balendra et al., 2015).

To study the mechanisms underpinning glaucoma, various animal models, especially mouse models, have been developed to mimic the features of the disease. However, to determine the progression of glaucoma, it is necessary to quantify the number of RGCs in the retina. A variety of techniques (Mead & Tomarev, 2016) have been used for RGC labeling with neuronal markers, such as βIII-tubulin (Jiang et al., 2015), or immunolabeling with RGC-specific markers, such as BRN3A (Nadal-Nicolás et al., 2009), RNA-binding protein with multiple splicing (RBPMS) (Rodriguez et al., 2014), or γ-synuclein.

However, after using the appropriate markers, the total number of RGCs must be estimated by manual counting. Not only is this a time-consuming process, but it is also susceptible to subjective bias: long hours of work can tire taggers, resulting in counting errors. To overcome this issue, several software programs for automatic RGC labeling have been developed, offering fast detection, high accuracy, good objectivity, and the ability to assess RGC degeneration. Geeraerts et al. (2016) developed a freely available ImageJ-based (Abràmoff et al., 2004; Collins, 2007) script to semi-automatically quantify RGCs in entire retinal flat-mounts after immunostaining with the RGC-specific transcription factor BRN3A, although the process requires manual parameter adjustment. Guymer et al. (2020) developed automated software to count immunolabeled RGCs accurately and efficiently, with the ability to batch process images and perform whole-retinal analysis to generate isodensity maps. However, this software cannot effectively detect RGCs with low contrast and brightness.

The key to accurate measurement of the entire RGC population is efficient processing of RGC-specific immunolabeled images. Ideally, the image background should be a single light color, and the RGCs should have high brightness and distinct edges (Figure 1A). However, image quality can be affected by various factors, such as operator experience and experimental procedures (Figure 1B-E), giving rise to the following issues:

    (1) Small and densely clustered RGCs

As shown in Figure 1B, RGCs are typically very small (about 15×15 pixels) and are densely distributed throughout the retina (distance of ≤20 pixels between cells). In addition, many RGCs overlap with each other and are not positioned on the same horizontal plane, resulting in lower image contrast and brightness for some RGCs, as shown in Figure 1C.

    (2) Complicated retinal background

Other factors may lead to low-quality and noisy images, as shown in Figure 1D. When images of the entire retina are captured using confocal microscopy, images from multiple fields of view need to be stitched into one complete retinal image, which can leave clearly visible stitching seams, as shown in Figure 1E.

    (3) Large image pixels

Whole retinal images are more than 8 000×8 000 pixels in size and contain more than 30 000 RGCs. Thus, both manual counting and algorithmic detection require a great deal of time.

As one of the most widely used artificial intelligence technologies, deep learning (Schmidhuber, 2015) allows computers to learn pattern features automatically and integrate feature learning into the model, reducing the incompleteness caused by hand-crafted features. In contrast, traditional machine learning algorithms rely on manually designed features that must specify how each feature describes the object or image to be classified. Consequently, deep learning algorithms generally outperform traditional machine learning approaches for this task.

Many deep learning approaches have been developed for real-time object detection, including several popular CNN-based detectors, e.g., RetinaNet (Lin et al., 2017), Faster R-CNN (Ren et al., 2017), and YOLO (Bochkovskiy et al., 2020; Jocher et al., 2020; Redmon & Farhadi, 2017, 2018).

    Figure 1 Examples of ideal and real images of mouse RGCs

To address the RGC recognition problem, we developed a deep learning model based on YOLOv5 (Jocher et al., 2020). The model can effectively identify RGCs with low brightness and contrast without being affected by background noise. With graphics processing unit (GPU) computing, an 8 000×8 000 pixel image can be processed in less than 1 min. Furthermore, combined with the proposed deep learning model, we developed an automatic recognition software tool for counting BRN3A-labeled RGCs in whole-mount mouse retinas, which can process images in batches, generate RGC heat maps (Wilkinson & Friendly, 2009), and export results in comma-separated value (CSV) format. Our work has the following innovations:

(1) As RGCs do not lie on the same focal plane under the microscope, a five-channel rather than a single-channel input was used so that RGCs captured at different focal positions are all represented.

(2) A squeeze-and-excitation (SE) block (Hu et al., 2020) was added to the original network to detect blurred cells and increase detection accuracy.

    (3) The developed software is fully automated and open source for convenient and cooperative use, which should help deepen our understanding of glaucoma and drug development.

    MATERIALS AND METHODS

    Network architecture

The basic detector (Figure 2) is divided into three parts. The first part performs feature extraction and consists of convolution, batch normalization, an activation function (Gulcehre et al., 2016), an SE block (Hu et al., 2020) (details in the next section), and cross-stage partial (CSP) structures (Wang et al., 2020). Convolution is critical for extracting image features through deep learning. Batch normalization (Ioffe & Szegedy, 2015) can accelerate model convergence and alleviate gradient vanishing to a certain extent, making the training of deep network models easier and more stable. The activation function introduces nonlinearity into the neural network to strengthen its learning ability. Here, we combined convolution, batch normalization, and activation function into a CBL module to increase expressive ability. The CSP structure contains the CBL module and multiple convolutions, which can effectively reduce repeated gradient information in the backbone of the convolutional neural network and integrate gradient changes into the feature map from beginning to end, thereby reducing model parameters.
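As a concrete illustration of the CBL module described above, the following is a minimal PyTorch sketch of a convolution + batch normalization + SiLU block; the channel counts and kernel size are illustrative assumptions, not the exact configuration used in the paper.

```python
import torch
import torch.nn as nn

class CBL(nn.Module):
    """Convolution + batch normalization + SiLU activation (the CBL module)."""
    def __init__(self, in_channels, out_channels, kernel_size=3, stride=1):
        super().__init__()
        padding = kernel_size // 2  # keep spatial size when stride == 1
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                              stride=stride, padding=padding, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

# Example: one five-channel 512x512 tile passes through a single CBL block.
x = torch.randn(1, 5, 512, 512)
print(CBL(5, 32)(x).shape)  # torch.Size([1, 32, 512, 512])
```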

The second part performs feature description, using PANet to aggregate upsampling operations, CBL modules, and CSP structures (Wang et al., 2019). As a feature pyramid network structure, PANet enhances the bottom-up path during feature extraction, thereby strengthening the semantic and location information of the feature pyramid and effectively utilizing feature information.

The third part performs detection and consists of three bounding box priors (i.e., “anchors”) (Redmon & Farhadi, 2017). Each feature point predicts three bounding boxes to detect regions of interest of different sizes, finally outputting the prediction boxes and classes.

    Squeeze-and-excitation block

In the feature extraction part, an SE block (Hu et al., 2020) was added to the CBL module of the basic model. The SE block recalibrates channel-wise feature responses by modeling the interdependence between channels. The squeeze step obtains a global descriptor of the current feature map by adaptive average pooling over each feature map layer. The excitation step then produces a weight for each channel, and the re-weighted feature map is used as the input of the next network layer. This significantly improves performance at a small computational cost. The SE block structure is depicted in Figure 3. Unlike the original description of the SE block, SiLU (Ramachandran et al., 2018) was used instead of ReLU (Howard et al., 2017) (Supplementary Figures S1, S2), as ReLU can leave neuron weights un-updated.
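The following is a minimal PyTorch sketch of such an SE block with SiLU in the excitation step; the reduction ratio of 16 is a common default and an assumption here, as the paper does not state the value used.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation block with SiLU in the excitation step."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)           # global average pooling
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.SiLU(),                                    # SiLU instead of ReLU
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                                 # per-channel weights in [0, 1]
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.squeeze(x).view(b, c)                    # squeeze: (B, C)
        w = self.excite(w).view(b, c, 1, 1)               # excitation: channel weights
        return x * w                                      # re-weight the feature map

x = torch.randn(2, 64, 32, 32)
print(SEBlock(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```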

    Figure 2 Illustration of the network architecture pipeline

    Figure 3 Squeeze-and-excitation block (Hu et al., 2020)

    Training data pre-processing

We used five-channel rather than single-channel input images (see the Data source section for a detailed description). As the whole images were too large to be input into the detection network (Van Etten, 2018), each image was divided into smaller 512×512 pixel images with 20% overlap. If no RGC was found in a small image, it was treated as background. Adding all background images to the training set would result in missed detections, whereas adding none would result in false detections (Supplementary Figure S3). Thus, we added one-fifth of the background images to the training set in addition to all small images containing RGCs.
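A minimal sketch of this tiling step, assuming point labels for RGC centers and a simple random rule for keeping roughly one-fifth of the background tiles, could look as follows (the helper names are hypothetical):

```python
import random
import numpy as np

def tile_positions(length, tile, step):
    """Top-left offsets along one axis, making sure the far border is covered."""
    pos = list(range(0, max(length - tile, 0) + 1, step))
    if pos[-1] != length - tile:
        pos.append(length - tile)
    return pos

def tile_image(image, labels, tile=512, overlap=0.2, bg_keep=0.2):
    """image: (C, H, W) array; labels: list of (x, y) RGC centers in image coordinates."""
    step = int(tile * (1 - overlap))                      # about 20% overlap between tiles
    _, h, w = image.shape
    kept = []
    for y0 in tile_positions(h, tile, step):
        for x0 in tile_positions(w, tile, step):
            patch = image[:, y0:y0 + tile, x0:x0 + tile]
            cells = [(x - x0, y - y0) for x, y in labels
                     if x0 <= x < x0 + tile and y0 <= y < y0 + tile]
            if cells:
                kept.append((patch, cells))               # always keep tiles with RGCs
            elif random.random() < bg_keep:
                kept.append((patch, []))                  # keep roughly 1/5 of background tiles
    return kept

tiles = tile_image(np.zeros((5, 2048, 2048), dtype=np.uint8), [(100, 200)])
print(len(tiles))
```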

    Training data augmentation

As RGCs exhibit rotation invariance, we adopted the following data augmentations: rotation in [-10°, 10°], translation in [-10, 10] for x and y, random zoom in [0.5, 1.5], and vertical and horizontal flips. In addition, mosaic data augmentation was used (Bochkovskiy et al., 2020): four images were randomly sampled each time from the overall data, then randomly cropped and spliced to synthesize new images (Supplementary Figures S4, S5).
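As an illustration of the mosaic step, the following is a minimal sketch that splices one quadrant from each of four randomly sampled tiles into a new image; label remapping and the geometric augmentations listed above are omitted for brevity.

```python
import random
import numpy as np

def mosaic(tiles, size=512):
    """tiles: list of (C, size, size) arrays; returns one spliced (C, size, size) image."""
    imgs = random.sample(tiles, 4)
    cx = random.randint(size // 4, 3 * size // 4)         # random splice point, x
    cy = random.randint(size // 4, 3 * size // 4)         # random splice point, y
    out = np.zeros_like(imgs[0])
    out[:, :cy, :cx] = imgs[0][:, :cy, :cx]               # top-left quadrant
    out[:, :cy, cx:] = imgs[1][:, :cy, cx:]               # top-right quadrant
    out[:, cy:, :cx] = imgs[2][:, cy:, :cx]               # bottom-left quadrant
    out[:, cy:, cx:] = imgs[3][:, cy:, cx:]               # bottom-right quadrant
    return out

tiles = [np.random.randint(0, 255, (5, 512, 512), dtype=np.uint8) for _ in range(8)]
print(mosaic(tiles).shape)  # (5, 512, 512)
```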

    Inference

Given the large size of the whole retinal images, we constructed 512×512 sliding windows (with 20% overlapping areas) from top to bottom and left to right and detected RGCs in each window. Efficient non-maximum suppression (NMS) (Neubeck & Van Gool, 2006) was applied to the detection results of each sliding window to eliminate duplicate counts. All results were then merged via the relative position of each sliding window in the retinal image, with NMS used again to suppress duplicate counts in the overlapping areas (Figure 4).
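A minimal sketch of the merging step, assuming each sliding window has already been passed through the detector to produce boxes and scores, is shown below; it shifts the window detections into whole-retina coordinates and applies a final global NMS (here via torchvision) to remove duplicates in the overlaps.

```python
import torch
from torchvision.ops import nms

def merge_windows(window_results, iou_threshold=0.5):
    """window_results: list of (x_off, y_off, boxes[N,4] in window coords, scores[N])."""
    all_boxes, all_scores = [], []
    for x_off, y_off, boxes, scores in window_results:
        shifted = boxes + torch.tensor([x_off, y_off, x_off, y_off], dtype=boxes.dtype)
        all_boxes.append(shifted)
        all_scores.append(scores)
    boxes = torch.cat(all_boxes)
    scores = torch.cat(all_scores)
    keep = nms(boxes, scores, iou_threshold)   # suppress duplicates across overlaps
    return boxes[keep], scores[keep]

# Two overlapping windows detect the same cell; only one box survives.
w1 = (0, 0, torch.tensor([[500., 500., 515., 515.]]), torch.tensor([0.9]))
w2 = (409, 409, torch.tensor([[91., 91., 106., 106.]]), torch.tensor([0.8]))
boxes, scores = merge_windows([w1, w2])
print(len(boxes))  # 1
```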

    Animals

The experimental mice were maintained under 12-h light/12-h dark cycles in the animal facility of the Sichuan Provincial People’s Hospital. All handling and care procedures followed the guidelines of the Association for Research in Vision and Ophthalmology (ARVO) for the use of animals in research. All experimental protocols were approved by the Animal Care and Use Committee of the Sichuan Provincial People’s Hospital (approval No.: 2016-36).

    Whole-mount immunostaining of mouse RGCs

    Figure 4 Inference for whole retina

Adult C57BL/6J mice (3-12 months) were euthanized by cervical dislocation after anesthesia. The eyeballs were enucleated and fixed in 4% paraformaldehyde (PFA) in phosphate-buffered saline (PBS) for 20 min on ice. The sclera and retinal pigment epithelium (RPE) were peeled from the retina, which was then cut into four quadrants and fixed for an additional 12 h. After cryoprotection in 30% sucrose for 2 h, the retina was permeabilized and blocked with 5% normal donkey serum containing 0.5% Triton X-100 for 1 h at room temperature, followed by immunostaining with mouse anti-BRN3A primary antibodies (Abcam, USA, 1:200 dilution) for 3 d at 4 °C (Figure 5) and Alexa Fluor 488- or 594-conjugated donkey anti-mouse secondary antibodies (ThermoFisher, USA, 1:300 dilution). Fluorescent signals were acquired using a Zeiss LSM900 confocal microscope (Zeiss, Germany).

    Generation of glaucoma mouse models

The glaucoma mouse model was generated by inducing ganglion cell death using N-methyl-D-aspartate (NMDA), as described previously (Lam et al., 1999; Li et al., 1999). Briefly, 1 μL of NMDA (Sigma, USA) in PBS was injected into the intravitreal space of adult mice with a microsyringe (Hamilton, USA) through the pars plana, as per a previous study (Yang et al., 2021). NMDA triggers RGC death through excessive stimulation of glutamate receptors. RGC death is attributed to NMDA excitotoxicity in several retinal diseases (Evangelho et al., 2019; Kuehn et al., 2005; Kwon et al., 2009), and progressive loss of RGCs is a characteristic feature of glaucoma. The mice were sacrificed by cervical dislocation 1, 2, or 3 d after injection with NMDA, representing different stages of RGC loss.

    Figure 5 RGC image acquisition process

    Data source

RGC images were acquired as described in the Methods section. As RGCs do not lie on the same horizontal plane in the retina, we selected five different acquisition planes perpendicular to the Z axis of the microscope platform to ensure that most RGCs were clearly captured in at least one of the five images.
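A minimal sketch of assembling the five focal-plane acquisitions into one five-channel input tensor might look as follows; the file names are hypothetical placeholders.

```python
import numpy as np
import torch
from PIL import Image

def load_five_channel(paths):
    """paths: five grayscale images of the same field, one per focal position."""
    planes = [np.asarray(Image.open(p).convert("L"), dtype=np.float32) / 255.0
              for p in paths]
    return torch.from_numpy(np.stack(planes, axis=0))     # shape: (5, H, W)

# Hypothetical usage; file names are placeholders, not the study's actual data:
# x = load_five_channel([f"retina_plane_{i}.tif" for i in range(5)])
# print(x.shape)  # e.g., torch.Size([5, 8000, 8000])
```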

Our dataset contained 14 complete retinal images, including 10 from healthy mice (termed “normal” or “NOR”) and four from mice with RGC degeneration (termed “degenerative” or “DEG”), as summarized in Table 1. The size of each retinal image was 8 000×8 000 pixels, and all RGCs in each image were marked by experienced researchers; the manual marking discrepancy between different researchers was within 5%. Eleven images (eight normal and three degenerative) were used as the training and validation dataset and three images (one degenerative and two normal) were used as the test dataset. Following the training data pre-processing described above, the 11 complete retinal images with manually labeled RGCs were divided into 2 237 small images (512×512 pixels), which were split into training and validation sets at a 9:1 ratio. The other three complete retinas were used to compare the neural network output against the manually labeled results.

Our algorithm was based on the PyTorch framework, and the experiments were carried out on Ubuntu with a 2.50 GHz Intel(R) Xeon(R) E5-2678 v3 CPU, a GTX TITAN XP GPU, and 64 GB RAM. For the neural networks, the learning rate was 0.0001 and the batch size was eight.
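For orientation, a minimal sketch of this training configuration (learning rate 0.0001, batch size 8) in PyTorch is shown below; the model, loss, data, and choice of optimizer are placeholders and assumptions, not the paper's actual implementation.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder model and random data; the real detector, loss, and dataset differ.
model = torch.nn.Conv2d(5, 1, 3, padding=1)
dataset = TensorDataset(torch.randn(8, 5, 512, 512), torch.randn(8, 1, 512, 512))
loader = DataLoader(dataset, batch_size=8, shuffle=True)      # batch size of eight
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)     # lr 0.0001; optimizer choice is an assumption

for images, targets in loader:
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(images), targets)  # placeholder loss
    loss.backward()
    optimizer.step()
```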

    Statistical evaluation metrics

We used three statistical evaluation metrics commonly applied in medicine, together with average precision (AP), which is widely used in object detection.

The statistical measure r-squared (r²) represents the proportion of variance in a dependent variable that is explained by an independent variable or variables in a regression model. While correlation explains the strength of the relationship between an independent and a dependent variable, r² explains the extent to which the variance of one variable explains the variance of a second variable.
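A minimal sketch of computing Pearson r and r² for per-frame manual versus automated counts (illustrative numbers only):

```python
import numpy as np

def pearson_and_r2(ground_truth, automated):
    """Pearson r and r-squared for per-frame manual versus automated counts."""
    gt = np.asarray(ground_truth, dtype=float)
    auto = np.asarray(automated, dtype=float)
    r = np.corrcoef(gt, auto)[0, 1]      # Pearson correlation coefficient
    return r, r ** 2                     # for simple linear regression, r2 = r squared

print(pearson_and_r2([120, 80, 45, 200], [118, 83, 44, 205]))
```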

    Bland-Altman plots (Myles & Cui, 2007) can quantify the consistency between two quantitative measurements by establishing consistency limits, which are calculated using the mean and standard deviation of the difference between two measurements.
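A minimal sketch of computing the Bland-Altman bias and limits of agreement, assuming the conventional ±1.96 standard deviation limits (illustrative numbers only):

```python
import numpy as np

def bland_altman_limits(manual, automated):
    """Bias and 95% limits of agreement between two measurements."""
    diff = np.asarray(automated, dtype=float) - np.asarray(manual, dtype=float)
    bias = diff.mean()                                   # mean difference
    sd = diff.std(ddof=1)                                # standard deviation of differences
    return bias - 1.96 * sd, bias, bias + 1.96 * sd      # lower limit, bias, upper limit

print(bland_altman_limits([120, 80, 45, 200], [118, 83, 44, 205]))
```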

    AP metrics

In deep learning, precision refers to the fraction of relevant instances among the retrieved instances, while recall is the fraction of the total relevant instances that are retrieved. A precision-recall (PR) curve is simply a graph with precision values on the y-axis and recall values on the x-axis. The area enclosed by the PR curve and the x- and y-axes, i.e., AP, is a widely used evaluation index in object detection. Precision and recall are calculated as follows:

$$\mathrm{Precision}=\frac{TP}{TP+FP},\qquad \mathrm{Recall}=\frac{TP}{TP+FN}$$

where TP indicates true positives, FP indicates false positives, and FN indicates false negatives.

    Table 1 Complete retinal data

The area under the PR curve, i.e., AP, is calculated as follows:

$$AP=\int_{0}^{1}P(R)\,\mathrm{d}R\approx\sum_{k}P(R_{k})\,\Delta R_{k}$$

where the integral is replaced with a sum over the sampled recall positions.
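A minimal sketch of this discrete AP computation over a sampled precision-recall curve, using the common monotone (interpolated) precision; the exact interpolation scheme used in the paper is not stated and is an assumption here:

```python
import numpy as np

def average_precision(recall, precision):
    """recall, precision: arrays for one class, sorted by increasing recall."""
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([1.0], precision, [0.0]))
    p = np.maximum.accumulate(p[::-1])[::-1]   # interpolated (monotone) precision
    dr = np.diff(r)                            # width of each recall step
    return float(np.sum(dr * p[1:]))           # the sum replaces the integral

print(average_precision(np.array([0.5, 0.8, 0.95]), np.array([0.98, 0.90, 0.85])))
```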

    RESULTS

    Accuracy and speed of new automatic RGC counting algorithm

After training, we selected the model weights with the best validation performance to test the network. The whole retinal image was divided into smaller images (512×512 pixels) with a step of 400 pixels, such that adjacent small images overlapped by about 20%. We synthesized the results as described in the Inference section. The whole image could be counted in less than 1 min. We fused the five-channel input image into an RGB (red, green, blue) image to display the results, with red representing the retina and RGCs and green representing the detection boxes (Figure 6). The detection results for two small images (512×512 pixels) are shown in Figure 6A, B. As indicated by the arrow, we did not count RGCs cut in half by the image boundary, as such incomplete RGCs are displayed and detected in adjacent small images. The results for the whole retina and a partial enlargement of the retina are shown in Figure 6C, D, respectively.

Our test set contained three complete retinal images, including two normal images (RGCs-N1 and RGCs-N2) and one degenerative image (RGCs-D1). We output the network detection results for each image.

Compared with the manual counting results, the error of automatic detection was less than 5% (Table 2), and there was a significant difference in the number of RGCs between normal and degenerative mice. Both manual labeling and automatic detection identified more than 20 000 RGCs in normal mice, but fewer than 10 000 RGCs in degenerative mice, which could serve as a reference in future studies.

    For detailed statistical analysis, each retinal image was divided into multiple smaller images according to spatial area to compare the accuracy of spatial distribution and density of cells. Based on image size, the three test retinal images were divided into 317, 296, and 267 smaller images, respectively.

Linear regression analysis was used to model the relationship between ground truth and automatic RGC counts, and Bland-Altman plots were used to investigate the consistency between these two variables (Table 3). Linear regression analysis showed good agreement between the automated counts and ground truth (y = 0.95x - 1.82, Pearson r = 0.994, n = 317 frames; y = 0.99x + 2.84, Pearson r = 0.999, n = 296 frames; and y = 1.00x + 1.75, Pearson r = 0.985, n = 267 frames for the N1, N2, and D1 retinal images, respectively) (Figure 7). The Bland-Altman plots of the BRN3A-labeled retinal images (agreement limits of [-35, 11], [-11, 11], and [-3.9, 7.4], respectively) indicated no systematic difference between automated and manual counting.

    Figure 6 Curated examples of model on our test set, with confidence threshold of 0.3 used for display

    Table 2 Calculation results of test images

    Table 3 Statistical evaluation metrics for test images

    Figure 7 Linear regression analysis and Bland-Altman plots of ground truth (GT) versus automated counts

For object detection, we used AP to measure network performance (Table 4). In our computing environment, the mean frames per second (FPS) was 40.6 and the mean AP was 0.981, demonstrating high calculation accuracy and speed.

To verify its effectiveness, we compared our method with other commonly used object-detection algorithms, as shown in Table 5. We tested Faster R-CNN (Ren et al., 2017) and RetinaNet (Lin et al., 2017) in the MMDetection (Chen et al., 2019) framework and found that they were not suitable for dense images containing more than 200 RGCs. The YOLOv5 family provides four network versions (s, m, l, and x) that differ in the number of blocks, all of which were tested. Although our method was slower than YOLOv5-l (40.6 FPS vs 41.1 FPS), its AP (0.981) was higher.

    Table 4 Evaluation of object detection for test images

    Table 5 Comparison of deep learning algorithms

    Performance of new algorithm compared to existing counting methods

    We compared our approach with existing RGC counting methods, including the freely available ImageJ-based script developed by Cross et al. (2020), the machine learning script based on CellProfiler open-source software developed by Dordea et al. (2016), and the automated deep learning method used to quantify RBPMS-stained RGCs established by Masin et al. (2021). As shown in Figure 8, the methods proposed by Cross et al. (2020) and Masin et al. (2021) were unable to detect low-contrast RGCs, while the method proposed by Dordea et al. (2016) failed to separate RGCs in contact with each other. In contrast, our approach used deep learning to avoid these problems and achieve high counting accuracy.

We used the same data and statistical methods for testing. The number of RGCs obtained by the different methods is shown in Table 6. Our method achieved the lowest error rate and highest calculation accuracy (error rates of 2.97%, 0.53%, and 3.14% for N1, N2, and D1, respectively). We split the three test images according to image size and performed linear regression analysis (Table 7). Both our approach and that of Masin et al. (2021) fit well, although our method showed better correlation (y = 0.95x - 1.82, Pearson r = 0.994, n = 317 frames; y = 0.99x + 2.84, Pearson r = 0.999, n = 296 frames; and y = 1.00x + 1.75, Pearson r = 0.985, n = 267 frames for N1, N2, and D1, respectively).

We designed ablation experiments (Table 8) to demonstrate the validity of our model. We used YOLOv5-m as the baseline and added two improvements. Incorporating the SE block in the CBL module of the basic model increased the AP on the test set from 0.924 to 0.926, demonstrating that the SE block was effective. In addition, using five-channel rather than single-channel input further increased the AP on the test set to 0.981.

    Figure 8 Comparison of RGC counting methods

    Table 6 Quantitative comparisons of different RGC counting methods for test images

    Table 7 Linear regression analysis of test images

    Table 8 Ablation experiments

User-friendly features of the software

To facilitate operation of the algorithm and the display, modification, and analysis of results, we developed automatic RGC labeling software based on C++ and Qt5. Our proposed recognition algorithm was integrated into the software, and the detection results are shown in Figure 9. Users can zoom in and out to display the results (Supplementary Figure S6), and the report can be manually refined. The software can also process images in batches, display RGC heat maps, and export results in CSV format. Of the five columns in the report, the first two represent the coordinates of the upper left corner of each rectangular box in the whole image, the third and fourth list the widths and heights, respectively, and the fifth provides the confidence value of the detection box.
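A minimal sketch of reading such a five-column CSV report (upper-left x, upper-left y, width, height, confidence) could look as follows; the file name and confidence threshold are hypothetical.

```python
import csv

def read_rgc_report(path, min_confidence=0.3):
    """Return detections (x, y, w, h, conf) above a confidence threshold."""
    cells = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            x, y, w, h, conf = map(float, row)
            if conf >= min_confidence:
                cells.append((x, y, w, h, conf))
    return cells

# Hypothetical usage:
# cells = read_rgc_report("retina_N1_detections.csv")
# print(len(cells), "RGCs above the confidence threshold")
```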

    A heat map was generated based on the RGC labeling results (Figure 9). Figure 9D shows the density distribution of RGCs in normal mice and Figure 9E shows the distribution of RGCs in degenerative mice. Differences in quantity and spatial distribution between normal and degenerative retinas are easily seen in the heat map.
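A minimal sketch of building such a density heat map by binning detection-box centers into a coarse 2D histogram is shown below; the bin count is an assumption and the software's actual rendering may differ.

```python
import numpy as np

def density_map(cells, image_size=8000, bins=80):
    """cells: list of (x, y, w, h, conf); returns a bins x bins count grid."""
    centers_x = [x + w / 2 for x, y, w, h, _ in cells]
    centers_y = [y + h / 2 for x, y, w, h, _ in cells]
    grid, _, _ = np.histogram2d(centers_y, centers_x, bins=bins,
                                range=[[0, image_size], [0, image_size]])
    return grid  # higher counts correspond to denser RGC regions

grid = density_map([(100, 200, 15, 15, 0.9), (110, 210, 15, 15, 0.8)])
print(grid.sum())  # 2.0
```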

    DISCUSSION

Our newly developed automatic RGC counting algorithm showed good performance on poor-quality retinal images, including noisy and low-contrast RGCs in RGCs-N1 and high-density RGCs in RGCs-N2. To explore the limitations of our algorithm, we labeled and tested a retina (RGCs-N3) with obvious stitching lines and a high proportion of low-contrast RGCs with blurred edges. The image and results are shown in Figure 10 and Table 9, respectively. The test results showed that the accuracy of our algorithm depended on the edge information of the RGCs: if the edge information was weak, the performance of the algorithm worsened. Blurred RGC edges were generally caused by inconsistent staining. The better the image quality, the more accurate the algorithm. However, the algorithm was not effective at detecting RGCs located within a large area of high background staining (Figure 11).

    Figure 9 Automated software for RGC labeling

    Figure 10 Test results for poor-quality images

    Table 9 Limitation analysis experiments

Automatic RGC counting still faces many challenges. Because RGCs lie in different layers, certain RGCs in an image are always blurred, increasing the difficulty of counting. Other issues, such as RGCs labeled by different techniques, small and dense RGCs, and poor image quality caused by manual operation, will require better methods in the future.

To address the time-consuming problem of manual RGC counting in mouse models, we developed an improved YOLOv5 algorithm based on an analysis of RGC characteristics. Although we focused on RGC counting, our proposed method could be applied in other fields, e.g., red blood cell and lymphocyte counting. Replacing manual counting with automated counting algorithms should greatly reduce the burden on researchers, and extending the algorithm and software to such applications is a direction for future work.

When studying mouse models of glaucoma, it is important to determine the progression of RGC loss by counting these cells throughout the retina. Manual and semi-automatic methods are susceptible to observer-dependent factors and can be time-consuming and inaccurate. To address these challenges, we adopted a deep learning approach, improved the YOLOv5 network, and developed an open-source RGC recognition software tool. Our automated RGC counting software demonstrated high accuracy, speed, and objectivity. RGC degeneration can also be analyzed by combining the generated RGC heat maps. Our software provides a convenient tool to accurately assess RGC loss in mouse models of glaucoma, which has important implications for exploring the potential mechanisms and treatments of glaucoma.

    SUPPLEMENTARY DATA

Supplementary data to this article can be found online. Our software has been uploaded online for free (https://github.com/MOEMIL/Intelligent-quantifying-RGCs) along with the publication of this study, and we hope to receive feedback on any potential bugs or issues.

    Figure 11 Test result for retinal image with noise and blurred-edge RGCs

    COMPETING INTERESTS

    The authors declare that they have no competing interests.

    AUTHORS’ CONTRIBUTIONS

J.Z., L.L., and H.B.Z. jointly conceptualized the project, designed the experiments, analyzed the data, and supervised the project. J.L.Y. and F.Y. performed the animal experiments and manual RGC counting. J.Z., Y.B.H., X.Z.W., B.Y.Y., X.H.D., R.Q.H., J.X.L., L.L., and Y.L. developed and optimized the algorithm. J.Z. and H.B.Z. wrote and edited the manuscript. All authors read and approved the final version of the manuscript.

    ACKNOWLEDGMENTS

    We would like to express our thanks to Professor Yu-Tang Ye and staff at the MOEMIL Laboratory for help in manually counting the samples used in this study.
