
HCSP-Net: A Novel Model of Age-Related Macular Degeneration Classification Based on Color Fundus Photography

2024-05-25 14:39:38 Cheng Wan, Jiani Zhao, Xiangqian Hong, Weihua Yang and Shaochong Zhang
Computers, Materials & Continua, 2024, Issue 4

Cheng Wan, Jiani Zhao, Xiangqian Hong, Weihua Yang* and Shaochong Zhang*

1 College of Electronic and Information Engineering/College of Integrated Circuits, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106, China

2 Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, 518040, China

ABSTRACT Age-related macular degeneration (AMD) ranks third among the most common causes of blindness. As the most conventional and direct method for identifying AMD, color fundus photography has become prominent owing to its consistency, ease of use, and good quality in extensive clinical practice. In this study, a convolutional neural network (CSPDarknet53) was combined with a transformer to construct a new hybrid model, HCSP-Net. This hybrid model was employed to tri-classify color fundus photographs into normal macula (NM), dry macular degeneration (DMD), and wet macular degeneration (WMD) based on clinical classification manifestations, thus identifying and resolving AMD as early as possible with color fundus photography. To further enhance the performance of this model, grouped convolution was introduced without significantly increasing the number of parameters. HCSP-Net was validated using an independent test set. The average precision of HCSP-Net in the diagnosis of AMD was 99.2%, the recall rate was 98.2%, the F1-Score was 98.7%, the PPV (positive predictive value) was 99.2%, and the NPV (negative predictive value) was 99.6%. Moreover, a knowledge distillation approach was adopted to develop a lightweight student network (SCSP-Net). The experimental results revealed a noteworthy enhancement in the accuracy of SCSP-Net, rising from 94% to 97%, while remarkably reducing the parameter count to a quarter of that of HCSP-Net. This attribute positions SCSP-Net as a highly suitable candidate for deployment on resource-constrained devices, which may provide ophthalmologists with an efficient tool for diagnosing AMD.

KEYWORDS Computer-aided diagnosis; deep learning; age-related macular degeneration; transformer

    1 Introduction

According to a study published in The Lancet Global Health, the number of patients with severe visual impairment or blindness due to age-related macular degeneration (AMD) is anticipated to reach 288 million by 2040, bringing a significant burden to ophthalmologists [1]. As reported in some studies, AMD is influenced by age, genetics, and complex environmental factors (such as smoking and diet) [2,3]. However, the specific pathogenesis of AMD remains unclear. Aging of macular tissues is believed to be the primary cause of AMD, which is more prevalent in individuals over the age of 50 years [4–6]. The aging population poses serious challenges to global eye health [1,3,7]. Although AMD can exert serious effects on patients' health, it has not yet attracted a high degree of societal attention or investment in medical resources. Insufficient knowledge and medical resources pose dire threats to patients and hinder the limited number of ophthalmologists from providing convenient and comprehensive diagnostic services for such a large patient population.

AMD can be categorized into normal macula (NM), dry macular degeneration (DMD), and wet macular degeneration (WMD) based on its clinical manifestations and imaging features, as shown in Fig. 1 [3,8,9]. Over 80% of patients with AMD present with DMD, defined by the early development of drusen and the later development of geographic atrophy, resulting in gradual vision loss and vision distortion [6,10]. In contrast, WMD develops from DMD, in which drusen of varying sizes form rapidly in the maculae. As the affected area rapidly invades the surrounding tissues, this condition can cause severe vision loss or even blindness [11]. In this study, a classification based on clinical manifestations could aid ophthalmologists in making quick diagnoses and selecting effective therapies.

Figure 1: (A) Normal macula; (B) dry macular degeneration; (C) wet macular degeneration

It is important to note that conventional diagnostic techniques have been improved by deep-learning (DL) technology. DL techniques can be employed to avoid the misdiagnoses of subjective assessments and to make efficient and accurate diagnoses by objectively analyzing large amounts of data. This can reduce the burden on physicians and facilitate better therapeutic outcomes. The convolutional neural network (CNN) is the main model used in DL. As the depth of a CNN increases, the backpropagation algorithm can be utilized to solve the contribution distribution problem of each layer of the network, so that the model can be applied to prediction. Medical images feature uniform specifications, ease of use, and high quality in long-term clinical practice. The use of DL to process medical images exhibits extremely broad application possibilities and outperforms human experts in diagnosing certain diseases [12–16]. Priya et al. proposed a probabilistic graphical model and a series of image-preprocessing techniques to classify AMD [17]. To improve the visibility of lesions, they first extracted the green channel. Then, discrete wavelet transforms in combination with the Kirsch operator were used to locate vessels and identify potential lesions. This optimized preprocessing pipeline effectively extracted pathologically discriminative features, achieving a classification accuracy of 96%. In contrast to single-model approaches, Grassmann et al. proposed an integrated framework of convolutional neural networks based on random forests for the automated detection of macular degeneration [18]. Through the integration of multiple independently trained CNNs, the integrated model demonstrated superior classification performance compared with individual human experts. Motozawa et al. constructed another dual-model approach for diagnosing AMD [19]. The first model was designed to differentiate between NM and AMD, and the second was used to detect exudative changes in AMD, differentiating between DMD and WMD. However, this approach required the construction of two separate convolutional neural networks and achieved a diagnostic accuracy of only 93.9% for AMD. Vague et al. explored the application of multimodal image analysis methods in three cohorts: young normal, old normal, and dry macular maculopathy [20]. The results showed a diagnostic accuracy of 96% for this task when combined with multimodal training, thus validating the strength of multimodality for the diagnosis of macular lesions. However, multimodal models are more complex than ordinary ones and require more computational resources and time for training and optimization.

Nonetheless, there are still limitations in accurately diagnosing AMD through existing models. Firstly, they depend heavily on manual feature engineering and image preprocessing, and more automation and intelligence are required. Secondly, sample imbalance still needs to be addressed. Thirdly, the architectures of existing models are not lightweight enough for efficient deployment in real-world applications. Given that, a more automated and efficient diagnostic framework was introduced in this study to overcome these challenges. The proposed HCSP-Net, a DL model with a hybrid architecture, aims to tackle the issues resulting from uneven data distribution. Notably, HCSP-Net achieves an impressive diagnostic accuracy of 99%. Furthermore, knowledge distillation was employed to develop SCSP-Net, a lightweight network with enhanced feasibility on resource-limited platforms, with only 1.05 MB of parameters.

    2 Structure

    2.1 Data Acquisition

This study obtained ethical approval to collect retinal images from the Shenzhen Eye Hospital for research purposes, following the principles outlined in the Declaration of Helsinki [21]. To address the privacy concerns associated with patient data usage, careful steps were taken before color fundus photographs were incorporated into the dataset. Utilizing the OpenCV toolkit, all patient-related information, including patient name, age, and date of diagnosis, was systematically removed from the fundus photographs [22]. During the data preprocessing phase, which involved transforming fundus photographs into binary images, a 10×10 operator was applied to eliminate any remaining patient-related information. Subsequently, contour screening (extracting the contour with the largest area) was performed to determine and crop the retinal area. Therefore, it is important to highlight that this study does not disclose patient-related statistical information.

Recognizing the critical role of image quality in the data analysis of models, 745 data points were carefully selected, excluding those with inferior exposure. Besides, a meticulous approach using random number seeds was used for dataset division. This ensured a balanced proportion of each macular class in both Datasets A and B, which achieved an equitable data distribution and mitigated the impact of subjective factors, thus ensuring fairness in the experimental process. Additional details about the data division are presented in Table 1. Owing to the limited number of dry maculae (only half as many as the other two classes), the network's generalization and convergence may be severely hampered by the uneven spread of data [23]. Section 2.2 of this paper provides details of the approach we used to address this.

    Table 1: Number of fundus images in Datasets A and B from clinical classification of AMD

    2.2 Composition of Model Structure

Convolutional neural networks have been extensively applied to image processing and are constantly evolving, with many excellent networks emerging [24–26]. However, when an image is processed locally by CNNs via convolution and pooling, the influence of the context surrounding the image is ignored. This study combines a convolutional neural network with a transformer to access both the specific and the general contextual information of an image. The core feature extraction component of HCSP-Net is an improved convolutional neural network, CSPDarknet53, which reduces the number of model parameters while resolving redundant gradients [24]. The structure of HCSP-Net is shown in Fig. 2. The improved CSPDarknet53 is the backbone of YOLOv5 (You Only Look Once-v5) and comprises CSPDarknet53 and SPPF. YOLOv5 comprises four models with various depths and widths: S, M, L, and X. In this study, the backbone of the S model, which has the smallest width and depth, was used to construct a lightweight model.

    Figure 2: The structure of HCSP-Net

In addition, images are deformed when the model's input stage uniformly scales them to a fixed resolution. In this study, the SPPF module was retained in the design of the model, making HCSP-Net more resistant to object deformation [27]. Yet, the original SPPF module relies solely on the maximum pooling layer to emphasize the most salient or active features within a given region, disregarding the others. In this study, it was held that after maximum pooling, further integration of features was needed to ensure the effective capture of information from the entire feature map. Consequently, a more intricate integration strategy was adopted in the SPPF module. A convolutional operation was incorporated after each maximum pooling layer to enhance the network's perceptual capabilities. Additionally, in consideration of the model's parameter count, the conventional convolution operation was substituted with grouped convolution, aiming to reduce parameters without compromising performance. The enhanced SPPF module is illustrated in Fig. 3. The introduction of grouped convolution provided HCSP-Net with more potent feature extraction and processing capabilities. This established a robust foundation for the model to adeptly handle diverse and dynamic data in practical applications.
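The described enhancement, a grouped convolution inserted after each maximum pooling stage of SPPF, can be sketched in PyTorch as follows. The channel count, number of groups, and fusion layer are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class ESPPF(nn.Module):
    """Sketch of the enhanced SPPF described above: three stacked 5x5
    max-pools, each followed by a grouped 3x3 convolution, then a 1x1
    fusion conv over the concatenated branches. Channel sizes and the
    number of groups are illustrative assumptions."""
    def __init__(self, c, groups=4):
        super().__init__()
        self.pool = nn.MaxPool2d(kernel_size=5, stride=1, padding=2)
        # Grouped convs integrate features after each pooling stage
        self.gconv = nn.ModuleList(
            nn.Conv2d(c, c, 3, padding=1, groups=groups) for _ in range(3)
        )
        self.fuse = nn.Conv2d(4 * c, c, 1)  # fuse input + 3 pooled branches

    def forward(self, x):
        ys = [x]
        for g in self.gconv:
            ys.append(g(self.pool(ys[-1])))
        return self.fuse(torch.cat(ys, dim=1))
```

With `groups=4`, each 3×3 convolution uses roughly a quarter of the parameters of its ungrouped counterpart, which is the parameter-saving trade-off the text describes.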

Figure 3: ESPPF: Enhanced SPPF module with GCONV (grouped convolution)

The transformer, a model based on a self-attention mechanism originally proposed by Google for machine translation tasks, has demonstrated superiority over conventional convolutional neural networks in processing long sequence data and modeling global dependencies [28]. It has become a pivotal technique in various domains, including image processing [28–31]. The classification module employs a Transformer block to capture long-term dependencies in the data while preserving sequence information [28]. The structure of the Transformer block is illustrated in Fig. 4. Besides, in light of the uneven data distribution and the shared characteristics of DMD and WMD, a transformer module was incorporated to enhance the model's ability to distinguish between the two. This addition seeks to mitigate the impact of data imbalance on the model's performance.

Of note, owing to the small size of the feature map output by the convolutional neural network (7×7), a single-head self-attention mechanism was used directly for the calculations. The self-attention can be calculated as follows:

Attention(Q, K, V) = softmax(QK^T / √d_k) V

where Q, K, and V are the query, key, and value matrices obtained by linear projections of the input, and d_k is the dimension of the keys.

After the calculation by the Transformer module, its output was transformed into a 3×1 vector using a linear layer, where each element represents the probability of the input corresponding to NM, DMD, or WMD, respectively.
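The single-head self-attention step above can be written compactly in numpy. This is a generic sketch of scaled dot-product attention, not the authors' code; the projection matrices here are hypothetical stand-ins for the learned weights.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    """Single-head self-attention over a sequence x of shape (n, d):
    Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    q, k, v = x @ wq, x @ wk, x @ wv
    d_k = k.shape[-1]
    scores = softmax(q @ k.T / np.sqrt(d_k))  # (n, n) attention weights
    return scores @ v
```

For the 7×7 feature map in the text, the input would be flattened to 49 tokens before attention is applied.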

    2.3 SCSP-Net Boosted by Knowledge Distillation

The Teacher-Student Training (TST) method is a widely recognized approach to knowledge distillation [32]. TST involves the construction of a teacher network with significant depth and width, alongside a comparatively lightweight student network. The trained teacher network is then used to supervise the training of the student network, aiming to enhance its overall performance. In this study, the well-trained HCSP-Net served as the teacher network, while a lightweight student network, SCSP-Net, was constructed (refer to Table 2 for the model structures). HCSP-Net and SCSP-Net exhibited notable distinctions in channel configuration and module settings. Notably, SCSP-Net featured half the number of channels across the entire network compared with HCSP-Net. In addition, differences in the number of modules between the two models were observed at specific layers. For example, in the third C3 layer, HCSP-Net incorporated three duplicate C3 modules, whereas SCSP-Net employed only two duplicate stacked modules. HCSP-Net's parameters amounted to 4.19 MB, while SCSP-Net, through a reasonable reduction in width and depth, achieved a parameter count of 1.05 MB, approximately one-fourth of that of HCSP-Net. The amalgamation of lightweight design and knowledge distillation rendered the deployment of SCSP-Net on embedded devices more pragmatic and viable. In the knowledge distillation process, the generalization performance of the student model was improved by transferring the soft labels of the teacher model (HCSP-Net). This enabled SCSP-Net to effectively incorporate knowledge from HCSP-Net.

Table 2: Architectures of HCSP-Net and SCSP-Net. In the table, 'n' represents the number of modules, 'c' the number of channels, 'w' the width of the feature map, and 'h' the height of the feature map

    Figure 4: The structure of the Transformer block

The loss function comprises two key components, namely, the KL divergence and the cross-entropy loss. These components measure, respectively, the difference between the output distributions of the teacher and student networks and the degree to which the student network matches the true labels. The specific expressions for the KL divergence and cross-entropy loss are as follows:

L_KL = KL(γ‖p) = Σ_i γ_i log(γ_i / p_i)

L_CE = CE(y, p) = −Σ_i y_i log p_i

L = α · L_KL + (1 − α) · L_CE

In the above equations, the symbol α represents the weighting coefficient in the loss function, γ represents the soft labels output by the teacher network, p represents the predicted values of the student network, and y represents the true label of the input image.
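The combined distillation loss can be sketched in numpy as below. The weighting coefficient value used here is an illustrative assumption; the paper does not state its value.

```python
import numpy as np

def kd_loss(teacher_soft, student_prob, y_onehot, alpha=0.5, eps=1e-12):
    """Knowledge-distillation loss as described above:
    L = alpha * KL(gamma || p) + (1 - alpha) * CE(y, p),
    where gamma are the teacher's soft labels, p the student's predicted
    probabilities, and y the one-hot true label. alpha = 0.5 is an
    illustrative choice, not the authors' setting."""
    kl = np.sum(teacher_soft * np.log((teacher_soft + eps) / (student_prob + eps)))
    ce = -np.sum(y_onehot * np.log(student_prob + eps))
    return alpha * kl + (1 - alpha) * ce
```

The KL term pulls the student's distribution toward the teacher's soft labels, while the cross-entropy term keeps it anchored to the ground truth; when the student matches the teacher exactly, the KL term vanishes and only the hard-label loss remains.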

    2.4 Implementation

The HCSP-Net model was built in PyTorch based on Python 3.7.11, and a GPU (NVIDIA GeForce RTX 1080) was used for the experiments [33]. Owing to the limited data collected, Dataset A was expanded by flipping the images horizontally and vertically, with the final amount of data expanded to four times the original size. To prevent the model from overfitting, data enhancement techniques, including color space variation, random luminance-contrast variation, translation scaling, and random orientation rotation, were applied to the fundus images with probabilities of 0.2, 0.2, 0.5, and 1.0, respectively. In the training process, Dataset A was divided according to a ratio of 8:2 for model training and validation, respectively, and the weight file with the lowest epoch number and the highest accuracy was saved as the optimal model.
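The four-fold expansion of Dataset A can be reproduced with simple array flips. This assumes one reading of the text (original, horizontal flip, vertical flip, and both flips); the authors' exact augmentation code is not given.

```python
import numpy as np

def expand_four_fold(images):
    """Quadruple a dataset by adding flipped copies of each image:
    original, horizontal flip, vertical flip, and both flips combined."""
    out = []
    for img in images:
        out.extend([img,
                    np.flip(img, axis=1),        # horizontal flip
                    np.flip(img, axis=0),        # vertical flip
                    np.flip(img, axis=(0, 1))])  # both flips
    return out
```

Flips are label-preserving for fundus classification, which is why they are a safe way to multiply a small dataset before the probabilistic augmentations listed above are applied during training.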

    2.5 Statistical Method

The Scikit-learn toolkit was used to conduct the statistical study [34]. The precision, recall, F1-Score, PPV (positive predictive value), and NPV (negative predictive value) of HCSP-Net for NM, DMD, and WMD were determined using binary classification indicators. The receiver operating characteristic (ROC) curves and the areas under them (AUC) were also computed. AUC values were classified as poor diagnostic values if they were between 0.5 and 0.70, average diagnostic values if they were between 0.75 and 0.85, and excellent diagnostic values if they were above 0.85.

The multi-classification indicator, the Kappa value, was used to assess the degree of agreement between the true diagnostic findings and the CSPDarknet53 and HCSP-Net results. The Kappa value ranges from 0 to 1, and a higher Kappa value indicates greater agreement between the model's predicted and actual results. The Kappa value was calculated as follows:

κ = (p0 − pe) / (1 − pe)

where p0 represents the overall classification accuracy. pe can be calculated as follows:

pe = (Σ_i a_i · b_i) / n²

where a_i represents the number of actual samples in class i, b_i represents the number of samples predicted for class i, and n represents the total number of samples. The Jaccard similarity coefficient is also a simple and effective statistical indicator of similarity and diversity. It is defined as the ratio of the number of elements in the intersection of two sets to the number of elements in their union, which can be expressed as follows:

J(A, B) = |A ∩ B| / |A ∪ B|

    where A and B represent the real and predicted label sets,respectively.
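The two agreement statistics above can be computed directly from label vectors. This is a generic sketch; the macro-averaged, per-class form of the Jaccard coefficient used here is one common reading of the set definition above, not necessarily the authors' exact computation.

```python
import numpy as np

def kappa_score(y_true, y_pred):
    """Cohen's kappa from the formulas above: kappa = (p0 - pe) / (1 - pe),
    with pe = sum_i a_i * b_i / n^2."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = len(y_true)
    classes = np.union1d(y_true, y_pred)
    p0 = np.mean(y_true == y_pred)                         # observed agreement
    a = np.array([np.sum(y_true == c) for c in classes])   # actual counts
    b = np.array([np.sum(y_pred == c) for c in classes])   # predicted counts
    pe = np.sum(a * b) / n ** 2                            # chance agreement
    return (p0 - pe) / (1 - pe)

def macro_jaccard(y_true, y_pred):
    """Per-class Jaccard |A ∩ B| / |A ∪ B| (i.e., TP / (TP + FP + FN)),
    macro-averaged over the classes."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    scores = []
    for c in np.union1d(y_true, y_pred):
        inter = np.sum((y_true == c) & (y_pred == c))
        union = np.sum((y_true == c) | (y_pred == c))
        scores.append(inter / union)
    return float(np.mean(scores))
```

Scikit-learn's `cohen_kappa_score` and `jaccard_score` implement the same quantities and would be the natural choice in practice, given that the study uses Scikit-learn for its statistics.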

    3 Results

    3.1 Model Performance Evaluation

The confusion matrix provided a more intuitive view of the classification performance of the models. CSPDarknet53 correctly identified 100% of NM cases; however, owing to a lack of data and the similarities between DMD and WMD in terms of lesion features, it was difficult for the model to distinguish between them, as shown in Fig. 5A. In Dataset B, 26.3% of DMD cases and 2.4% of WMD cases were incorrectly categorized. Notably, with the addition of the transformer module, HCSP-Net (SPPF) reduced the misdiagnosis of DMD, indicating an improved ability to capture the difference between DMD and WMD. Following a careful integration of the features extracted by the SPPF through the addition of grouped convolution, only one DMD lesion was misdiagnosed as a WMD lesion by HCSP-Net (ESPPF). The corresponding confusion matrices for the two improved networks based on CSPDarknet53 are shown in Figs. 5B and 5C, respectively. Hence, the problem of uneven data distribution was resolved, resulting in improved model efficiency.

Figure 5: Confusion matrices. (A) CSPDarknet53; (B) HCSP-Net (SPPF); (C) HCSP-Net (ESPPF). 0 denotes normal macula, 1 dry macular degeneration, and 2 wet macular degeneration

To objectively evaluate the performance of HCSP-Net, the receiver operating characteristic (ROC) curve was used in this study. Fig. 6 illustrates the AUC values for NM, DMD, and WMD, which were 1.0000, 0.9974, and 0.9880, respectively.

Figure 6: ROC curve

    3.2 Enhanced SPPF Performance

In the comparison between SPPF and ESPPF, identical parameter settings, such as the learning rate, optimizer, and weight decay, were used to ensure fair experimental results. As shown in Table 3, HCSP-Net (ESPPF) outperforms the HCSP-Net (SPPF) model, with a 3.1-percentage-point improvement in the kappa value (from 95.3% to 98.4%) and a 4.7-percentage-point improvement in the Jaccard value (from 92.8% to 97.5%).

    Table 3: Comparison of Kappa and Jaccard values between CSPDarknet53 and HCSP-Net

In addition, the impact of SPPF and ESPPF on the convergence of the model was explored in this study. As depicted in Fig. 7, the initial 30 training epochs reveal a noteworthy observation: HCSP-Net (ESPPF) demonstrates a significantly higher initial accuracy compared with HCSP-Net (SPPF). Furthermore, throughout the entire training process, the accuracy curve of HCSP-Net (ESPPF) steadily ascends, ultimately converging. This indicates that HCSP-Net (ESPPF) is adept at capturing underlying patterns and features in the data during the early stages of model training, effectively mitigating the impact of data noise.

    3.3 SCSP-Net Performance Insights

Compressing the depth and width of HCSP-Net resulted in an increase in the training time of the model and a significant decrease in diagnostic accuracy to only 94%, as shown in Fig. 8A. This study introduced HCSP-Net (ESPPF), with an accuracy of 99%, as a teacher network to guide the training of SCSP-Net and address this issue. Under the supervision of the teacher network, SCSP-Net successfully reduced the model's confusion between DMD and WMD, and the accuracy improved to 97% on Dataset B (as shown in Fig. 8B). Drawing on the knowledge of the teacher network can not only alleviate the performance degradation caused by compressing HCSP-Net but also provide more precise guidance for SCSP-Net to perform better when facing complex disease classification tasks.

Figure 7: Comparison of the training set accuracy between HCSP-Net (SPPF) and HCSP-Net (ESPPF)

Figure 8: Confusion matrices. Subfigure (A) illustrates SCSP-Net without knowledge distillation, while subfigure (B) depicts SCSP-Net after knowledge distillation

According to the statistics in Table 4, the number of parameters and the time complexity of the different models were comprehensively analyzed in this study. Notably, in HCSP-Net (SPPF), replacing the conventional fully connected layers with Transformer modules reduced the number of parameters. However, despite the decrease in the number of parameters, the time complexity did not show a significant declining trend. This indicated that introducing Transformer modules may face particular challenges in improving computational efficiency, requiring further in-depth research and optimization. In addition, after grouped convolutions were introduced, HCSP-Net (ESPPF) showed increased parameters and time complexity compared with the baseline model and HCSP-Net (SPPF). Although inevitable, this increase was acceptable considering the superior performance of HCSP-Net (ESPPF). It is noteworthy that by compressing the width and depth of the model, the number of parameters and the time complexity of SCSP-Net were reduced by 74.90% and 72.97%, respectively, demonstrating its advantages in lightweight design.

    Table 4: Comprehensive analysis of model parameters and computational complexity

    3.4 Ablation Experiment

In this study, the HCSP-Net model was constructed based on an innovative amalgamation of the enhanced CSPDarknet53 and the Transformer module. This model exhibited substantial distinctions compared with conventional counterparts, such as ResNet50, EfficientNetV2, InceptionV3, and ViT. The underlying rationale for this fusion design lay in the deliberate exploitation of the robust feature extraction capabilities of CSPDarknet53 and the utilization of the Transformer module to capture intricate long-term dependencies within the data, thereby facilitating a more profound exploration of abstract features within images. Regarding feature engineering, incorporating the Transformer module empowered HCSP-Net to discern and capture long-term dependencies inherent in image data more effectively, thus extracting expressive and discriminative features. Furthermore, the ESPPF module played a pivotal role during the initial stages of model training, significantly reducing the impact of noise on the model. This aspect was of great significance when addressing the challenges associated with the data heterogeneity and inhomogeneity prevalent in medical images. To further demonstrate the superior performance of HCSP-Net, the performance of several models was evaluated using binary classification indicators. The experimental results are listed in Table 5. All models in this experiment were trained and validated on Dataset A and tested on Dataset B. The obtained test results were considered the ultimate performance indicators for each model. HCSP-Net (ESPPF) performed best on all metrics compared with the other models, achieving a precision of 99.2%, recall of 98.2%, F1-Score of 98.7%, PPV of 99.2%, and NPV of 99.6%. The ablation experiment also confirmed the superior performance of HCSP-Net, as the addition of the transformer and ESPPF modules led to an improvement in model performance.

Table 5: HCSP-Net performance evaluation: Precision, recall, F1-Score, PPV, and NPV

Moreover, to further validate the generalization capability of HCSP-Net (ESPPF), a validation experiment was performed on a publicly available dataset containing 11 types of retinal fundus images (https://www.kaggle.com/datasets/kssanjaynithish03/retinal-fundus-images/data). NM, DMD, and WMD images were extracted from this dataset to construct an external validation set. Then, HCSP-Net (ESPPF) was retrained and tested on this validation set. As shown in Fig. 9, HCSP-Net (ESPPF) achieves a classification accuracy of 100% on the test set of 256 images. This validation result demonstrated that HCSP-Net (ESPPF) has a strong generalization ability in distinguishing NM, DMD, and WMD fundus images and can adapt to unseen new datasets, laying a foundation for subsequent clinical applications.

Figure 9: Confusion matrix. 0 denotes normal macula, 1 dry macular degeneration, and 2 wet macular degeneration

    4 Discussion

Age-related macular degeneration (AMD) is a major cause of irreversible vision damage in individuals over the age of 50, affecting the health of millions of people worldwide [5,7,10]. Early detection and treatment of AMD help slow disease progression. However, with the increasing number of patients with AMD, it is difficult for ophthalmologists to provide a comprehensive diagnostic service for this large patient population; therefore, the use of DL techniques to intelligently analyze AMD is of great clinical importance.

t-Distributed stochastic neighbor embedding (t-SNE) is a nonlinear dimensionality reduction algorithm utilized for visualizing high-dimensional data [37]. As neural networks lack interpretability, t-SNE was employed in this study to reduce the dimensionality of the last hidden layer in both the CSPDarknet53 and HCSP-Net (ESPPF) models. Characteristic probability distribution maps for the various types of macular degeneration were generated to aid in interpreting the neural network models, as shown in Fig. 10. When applied to the macular degeneration triple-classification task, the CSPDarknet53 model exhibited a noticeable overlap between the sample data of DMD and WMD. This suggested that CSPDarknet53 did not effectively learn the distinctive features necessary for differentiating between the two conditions. Conversely, the visualization results of HCSP-Net showed a low percentage of sample data overlap, indicating that this model had an improved ability to distinguish between DMD and WMD. This underscored the enhanced capacity of this model to learn the distinctions between the two conditions, successfully differentiating them. Additionally, it was apparent from the probability distribution plots that the probability plots of DMD and WMD overlapped, further emphasizing the relationship between DMD and WMD, namely that WMD develops in the context of DMD. Clinically, reliably differentiating DMD from WMD is critical for appropriate treatment decisions. Thus, the difficulty of CSPDarknet53 in separating these classes could lead to delays in treating WMD or unnecessary treatment for DMD due to incorrect classification. Meanwhile, the clearer separation achieved by HCSP-Net demonstrated its potential to support more accurate clinical diagnosis and management of these AMD subtypes. These visualization results complement our analysis, offering a more comprehensive understanding of the model's behavior and its implications for clinical decision-making.
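The t-SNE step above can be sketched with Scikit-learn. The random features below are hypothetical stand-ins for the penultimate-layer activations of the models; only the projection to two dimensions is the point of the sketch.

```python
import numpy as np
from sklearn.manifold import TSNE

# Project penultimate-layer features (here, random stand-ins of dimension 64
# for three hypothetical classes) to 2-D for scatter-plot visualization.
rng = np.random.default_rng(0)
features = np.vstack([rng.normal(loc=m, size=(30, 64)) for m in (0.0, 3.0, 6.0)])
embedded = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(features)
print(embedded.shape)  # (90, 2)
```

Each row of `embedded` can then be colored by its class label (NM, DMD, or WMD) to produce the kind of overlap visualization discussed above.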

Figure 10: t-SNE dimensionality reduction visualization. (A) CSPDarknet53; (B) HCSP-Net (ESPPF)

Moreover, the HCSP-Net architecture was examined and evaluated experimentally. Various model architectures were tested, such as replacing one C3 module at a time, at different locations, with the transformer module, and replacing all C3 modules with transformer modules. However, none of these models performed as well as HCSP-Net, which may be explained from two aspects. First, the small amount of data used in this study prevented the transformer from learning all the characteristics of each macula, which it would need to gain superiority over the convolutional neural network model [22]. Second, after pre-training on the COCO dataset, the improved CSPDarknet53 was loaded with weights and was able to derive general features from the images. In this instance, changing the intermediate CSPDarknet53 architecture at random would nullify the impact of the pre-trained weights. The Transformer module successfully integrated the information from the backbone feature extraction module without erasing the pre-trained weights, thereby enhancing the performance of HCSP-Net and allowing the model to learn new features. This allows the model to detect minute variations between the various maculae, thereby improving the model's performance while preserving its lightweight characteristics.

HCSP-Net, as constructed in this study, had a parameter count of 4.19 MB. Further reducing the model parameters to enhance its deployment on mobile devices was the challenge addressed in this study. Building upon HCSP-Net (ESPPF), knowledge distillation was employed to guide the training of SCSP-Net, resulting in an accuracy improvement from 94% to 97%, with parameters accounting for less than a quarter of those of HCSP-Net. In CSPDarknet53 and HCSP-Net, no DMD case was misdiagnosed as NM, indicating that these models can capture the differences between the two conditions. However, even with the assistance of knowledge distillation, SCSP-Net exhibited misdiagnoses, incorrectly identifying DMD as NM. The experimental results highlighted the significant impact of model depth and width on learning data features and indicated that larger models are more capable of extracting abstract features from the data. Consequently, striking a balance between model detection accuracy and operational speed while reducing model parameters is a key point for the future optimization of SCSP-Net.

An innovative approach was also introduced in this study by replacing the conventional fully connected classification method with a Transformer model based on the self-attention mechanism. Additionally, improvements were achieved by incorporating grouped convolution to expedite model convergence and enhance robustness, leading to the development of HCSP-Net (ESPPF) and achieving favorable diagnostic outcomes. All training images were sourced from individuals diagnosed with AMD, and professional ophthalmologists meticulously labeled the dataset. HCSP-Net (ESPPF) exhibited exceptional classification precision (99.2%), recall (98.2%), F1-Score (98.7%), PPV (99.2%), and NPV (99.6%). Compared with other studies, our research obviated the need for complicated data preprocessing steps, such as single-channel data extraction or vascular localization [17]. Furthermore, HCSP-Net avoided the complexity of ensemble learning and the construction of dual models to enhance accuracy [18,19]. While multimodal approaches have been explored for accuracy improvement, acquiring diverse data types poses significant challenges in the medical domain [20]. Therefore, a straightforward and efficient diagnostic method that circumvents the complexities associated with data preprocessing and model assembly was presented in this study.

    Nevertheless, there are still some limitations in this study. First, although DL has been immensely successful in many domains, acquiring large amounts of medical data is frequently challenging owing to ethical and privacy concerns. This study used data augmentation to increase image diversity during preprocessing and image inversion (flipping) to extend the dataset. However, the experimental results showed that HCSP-Net (SPPF) still required an increased number of training rounds to cope with fluctuations in accuracy during the early training period. This ongoing need for improvement highlights the difficulty of working with limited data in DL-based research. Second, a three-class classification study was conducted based on the leading clinical criteria; expanding the dataset with more diverse cases may help further verify the robustness of the proposed methods. Third, after compressing the depth and width of the model, the diagnostic accuracy of SCSP-Net decreased significantly, so further balancing the accuracy and efficiency of the model remains a limitation of this study. Finally, the model must be integrated into clinical workflows, and user studies are required to assess its acceptance; this will be an essential direction for further research.
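The flip-based dataset extension mentioned above can be realized with a simple sketch like the following (nested lists stand in for image arrays; the exact transforms used in the study are not specified, so this is only a plausible illustration):

```python
def hflip(image):
    # Mirror each row: left-right flip of an H x W image.
    return [row[::-1] for row in image]

def vflip(image):
    # Reverse the row order: top-bottom flip.
    return image[::-1]

def extend_with_flips(images):
    # Triple the dataset with horizontal and vertical flips of every image.
    out = []
    for img in images:
        out.extend([img, hflip(img), vflip(img)])
    return out

dataset = [[[1, 2], [3, 4]]]   # one tiny 2 x 2 stand-in for a fundus image
augmented = extend_with_flips(dataset)
```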

    This initial study focused on model development and evaluation on a limited dataset. An important direction for future work is integrating the model into clinical workflows and evaluating its acceptance by healthcare professionals through user studies. In addition, as more AMD data with refined grading become available, the model can be retrained and refined to enhance accuracy and maintain long-term reliability. In the meantime, SCSP-Net can be enhanced by leveraging the success achieved with HCSP-Net, thus providing improved support for medical professionals in diagnosing AMD.

    5 Conclusion

    In conclusion, AMD is a prevalent retinal disease that can cause blindness, making prompt diagnosis critical. This study demonstrated the viability of using DL technology to aid in the classification of AMD. HCSP-Net, a classification model combining a convolutional neural network and a Transformer, was constructed and achieved an AMD diagnostic accuracy of 99%. The model can support the early diagnosis of AMD based on color fundus photography, offering valuable assistance to clinicians. In particular, it can provide strong support for the early diagnosis and screening of AMD in primary care, assist in detecting early AMD, and inform reasonable treatment recommendations, thereby enhancing the visual quality of patients.

    Acknowledgement: We gratefully acknowledge the support of the Shenzhen Fund for Guangdong Provincial High-Level Clinical Key Specialties, the Sanming Project of Medicine in Shenzhen, and the Shenzhen Science and Technology Planning Project.

    Funding Statement: Shenzhen Fund for Guangdong Provincial High-Level Clinical Key Specialties (SZGSP014), Sanming Project of Medicine in Shenzhen (SZSM202311012), and Shenzhen Science and Technology Planning Project (KCXFZ20211020163813019).

    Author Contributions: CW and JZ: analyzed and discussed the data and drafted the manuscript. XH: analyzed and discussed the data, and collected and labeled the data. SZ and WY: designed the research, collected and labeled the data, and revised the manuscript.

    Availability of Data and Materials: The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

    Ethics Approval: This study was approved by the Medical Ethics Committee of Shenzhen Eye Hospital (Approval Code: 2023KYPJ015; Approval Date: February 24, 2023).

    Conflicts of Interest: The authors declare that this study was conducted without any commercial or financial relationships that could be construed as a potential conflict of interest.
