
    Deep learning-based automated grading of visual impairment in cataract patients using fundus images①

    High Technology Letters, 2023, No. 4

    JIANG Jiewei(蔣杰偉), ZHANG Yi, XIE He, GONG Jiamin, ZHU Shaomin,WU Shanjun,LI Zhongwen

    (1. School of Electronic Engineering, Xi’an University of Posts and Telecommunications, Xi’an 710121, P.R.China)

    (2. School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325000, P.R.China)

    (3. Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, P.R.China)

    Abstract

    Key words: deep learning, convolutional neural network (CNN), visual impairment grading, fundus image, efficient channel attention

    0 Introduction

    Cataract is a typical visual disorder with lens opacity caused by various factors such as genetics, infection, trauma, and aging[1]. According to reports[2-3], cataract is the leading cause of reversible blindness and visual impairment globally. It is estimated that there are approximately 208 million cataract patients in China[4], who suffer from poor visual acuity to varying degrees. Clinically, two alternative treatment strategies are available for cataracts according to the severity of visual impairment[5]. Conservative treatment with drug intervention can be used for cataract patients with a best corrected visual acuity (BCVA) greater than or equal to 0.3. In contrast, early surgery is an appropriate choice to prevent the deterioration of visual acuity for mature cataract patients with BCVA lower than 0.3[6]. Correspondingly, the degree of visual impairment caused by cataracts can be categorized into two groups: mild visual impairment caused by cataract (MVICC) and moderate to severe visual impairment caused by cataract (MSVICC). Therefore, it is necessary to grade visual impairment in cataract patients to determine specific treatment strategies.

    Clinically, the assessment of BCVA for cataract patients usually relies on manual inspection by ophthalmologists, which is a time-consuming and labor-intensive process[7]. In particular, large-scale screening of BCVA is limited by the sparse and uneven distribution of ophthalmologists, and many suspicious patients may not be accurately diagnosed in a timely manner[8]. To overcome these limitations of manual diagnosis in ophthalmology, it is therefore essential to develop a diagnostic algorithm for the automatic grading of visual impairment in cataract patients.

    Recently, artificial intelligence has attained remarkable performance in the automatic diagnosis of various diseases based on medical images, such as skin cancer[9], diabetic retinopathy[10], glaucoma[11], and age-related macular degeneration[12]. Several studies have also developed artificial intelligence-based systems for cataract diagnosis. Guo et al.[13] employed a wavelet transform method to extract features from fundus images and performed automatic screening and grading of cataract, with accuracies of 90.9% and 77.1%, respectively. A hybrid global convolutional neural network (CNN) was applied to learn high-level semantic features from original images, in which the principle of characterizing cataracts was analyzed using a layer-by-layer deconvolution technique[14]. A 16-layer lightweight CNN was proposed to improve the classification performance of cataract diagnosis and reduce the number of parameters, giving it the potential to be deployed on mobile terminals[15]. Zhang et al.[16] employed a superimposed multi-feature fusion method for a six-category grading system of cataracts. Moreover, an artificial intelligence platform for the multi-hospital collaborative management of congenital cataracts was proposed by our team, providing diagnosis, risk grading, and treatment recommendations for patients with congenital cataract[17]. In addition, there have been several studies on cataract diagnosis based on smartphone and slit-lamp images. Askarian et al.[18] achieved satisfactory experimental results based on ocular images captured by smartphones, using a luminance transformation algorithm for feature extraction and support vector machines (SVM) for cataract identification. Young et al.[19] combined several deep learning algorithms and proposed an automatic grading system for cataract based on slit-lamp and retro-illuminated images, which is effective in recommending appropriate treatments to cataract patients.

    Although the aforementioned studies have investigated the automatic screening and grading of cataracts, they did not address the automatic grading of visual impairment for cataract patients. Along with cataract diagnosis, the accurate grading of visual impairment is an important clinical reference for assessing the severity of cataract and implementing individualized treatment. However, there are many similarities among the different visual impairment categories, and the small blood vessels that distinguish them are only subtly different and often blurred. This poses a considerable challenge for the design of high-accuracy diagnosis algorithms based on deep learning. Moreover, the cataract fundus dataset used in this study is imbalanced: the number of MSVICC samples is smaller than the number of normal samples or MVICC samples. An imbalanced dataset easily causes classifiers to produce a relatively high false-negative rate and weak generalization ability.

    To address these issues, in this study, a multi-scale efficient channel attention convolutional neural network (MECA_CNN) is proposed for the automatic grading of visual impairment in cataract patients. First, the contrast-limited adaptive histogram equalization algorithm is used to enhance the contrast of fundus images to highlight blurred small blood vessels. Second, the MECA_CNN is employed to extract high-level semantic features and classify visual impairment into three grades: normal, MVICC, and MSVICC. In the MECA_CNN, an efficient channel attention mechanism is utilized to extract multi-scale features of fundus images and focus on lesion-related regions to the greatest extent. To avoid the loss of fine-grained fundus features during the feature extraction process, asymmetric convolutional modules are embedded in the residual unit. Moreover, an asymmetric loss function is applied to address the higher false-negative rate and weak generalization ability caused by the imbalanced dataset. Third, the generalization ability of the MECA_CNN for visual impairment grading is explored on an external clinical dataset. Using this automatic grading strategy, the severity assessment of visual impairment could potentially help ophthalmologists formulate individualized treatment strategies for cataract patients.

    1 Methods

    1.1 Overall grading framework of MECA_CNN

    The framework of the automatic grading system of visual impairment is shown in Fig.1, consisting of three main parts: pre-processing, feature extraction, and grading result output. In pre-processing, the contrast-limited adaptive histogram equalization algorithm is applied to enhance the contrast of fundus images. Data augmentation techniques, including random cropping, random rotations around the image center, and horizontal and vertical flips, are adopted to enlarge the original training dataset by 6 times, which increases the diversity of the dataset and prevents overfitting and bias problems during training (a sketch of such a pipeline is given below). During the feature extraction process, the shallow network is deepened to improve the feature extraction capability of the model, the efficient channel attention mechanism is introduced into the residual module, and the 3×3 conventional convolution is replaced with asymmetric convolution to form a MECA_CNN block. Finally, the softmax classifier with the asymmetric loss function is used to output the grading results.
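    As a reference for the augmentation step described above, the following is a minimal PyTorch/torchvision sketch of a comparable pipeline; the crop size, rotation range, and flip probabilities are illustrative assumptions rather than the exact settings used in this study.

```python
import torchvision.transforms as T

# Hedged sketch of the described augmentation: random cropping, rotation
# around the image centre, and horizontal/vertical flips. Exact parameters
# (crop size, degree range, probabilities) are assumptions.
train_transforms = T.Compose([
    T.RandomResizedCrop(224, scale=(0.8, 1.0)),  # random cropping
    T.RandomRotation(degrees=30),                # rotation around the image centre
    T.RandomHorizontalFlip(p=0.5),               # horizontal flip
    T.RandomVerticalFlip(p=0.5),                 # vertical flip
    T.ToTensor(),
])
```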

    Fig.1 Framework of automatic visual impairment grading in cataract patients using MECA_CNN

    1.2 Pre-processing

    According to the examination result of the visual acuity card, visual impairment can be classified into three grades: MVICC, MSVICC, and normal sample. For MVICC and MSVICC, blurred color and texture features are present in fundus tissues such as blood vessels, the optic disc, and the macula. To improve these blurred features, the contrast-limited adaptive histogram equalization (CLAHE) algorithm[20] is employed to enhance the contrast of the fundus image in the pre-processing stage, so that the features of blood vessels can be easily distinguished from background features, which is beneficial for visual impairment grading. Specifically, the local histogram of the fundus image is first calculated, and then its contrast is adjusted by redistributing the brightness of the image. In addition, a contrast clipping technique is introduced to avoid amplifying noise. Using the CLAHE algorithm, small blood vessels are enhanced to improve the performance of visual impairment grading. Typical original fundus images and the corresponding pre-processed results using CLAHE are presented in Fig.2. The CLAHE algorithm not only enhances the features of large blood vessels, the optic disc, and macular tissue, but more importantly improves the clarity of the blurred small blood vessels.
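    For illustration, a minimal OpenCV sketch of CLAHE applied to the luminance channel of a color fundus image is given below; the clip limit and tile grid size are assumed defaults, not necessarily the values used in this study.

```python
import cv2
import numpy as np

def clahe_enhance(bgr_image: np.ndarray, clip_limit: float = 2.0,
                  tile_grid_size: tuple = (8, 8)) -> np.ndarray:
    """Apply CLAHE to the luminance (L) channel of a colour fundus image."""
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l_channel, a_channel, b_channel = cv2.split(lab)
    # Contrast-limited equalization of local histograms with clipping
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid_size)
    l_enhanced = clahe.apply(l_channel)
    lab_enhanced = cv2.merge((l_enhanced, a_channel, b_channel))
    return cv2.cvtColor(lab_enhanced, cv2.COLOR_LAB2BGR)
```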

    Fig.2 Representative original and enhanced fundus images using CLAHE pre-processing algorithm.

    1.3 Improved multi-scale deep residual network

    As is well known, features at three scales exist simultaneously in the same fundus image: large-scale features such as the optic disc and macula, meso-scale features such as arterial vessels, and small-scale features such as capillaries. To utilize these multi-scale features, a multi-scale deep residual network (Res2Net50)[21] was chosen as the backbone network to construct residual-like connections with a hierarchy inside a single residual block, replacing the 3×3 convolution kernel in the residual unit of the traditional ResNet50[22]. Specifically, the features of the input image are divided into four subsets x_i (i = 1, 2, 3, 4). Each subset has the same spatial size and a channel number that is 1/4 of the input features. Except for x_1, each x_i is processed by a 3×3 convolution, denoted as K_i(·), whose output is denoted by y_i. The output of K_(i-1)(·) is added to the subset x_i and then fed into K_i(·). To reduce the number of parameters and increase the number of subsets, the 3×3 convolution is omitted for x_1. Thus, y_i can be formalized as Eq.(1).

    y_i = x_i,                      i = 1
    y_i = K_i(x_i),                 i = 2                (1)
    y_i = K_i(x_i + y_(i-1)),       2 < i ≤ 4

    Due to this combinatorial effect, the output of the Res2Net module contains different combinations of receptive fields. Finally, the y_i are concatenated and fed into a 1×1 convolution to fuse the multi-scale outputs of the Res2Net residual unit. By expressing multi-scale features at a fine-grained level, this technique significantly expands the receptive field of the model, thereby enabling the module to effectively extract fine-grained features from fundus images.
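    The hierarchical split-and-convolve computation of Eq.(1) can be sketched in PyTorch as follows; this is a simplified illustration of the Res2Net-style connection with four subsets, not the exact implementation of the MECA_CNN block.

```python
import torch
import torch.nn as nn

class Res2NetSplit(nn.Module):
    """Sketch of the hierarchical residual-like connections of Eq.(1):
    the input channels are split into `scales` subsets; the first subset
    passes through unchanged, and each following subset is convolved after
    adding the previous subset's output."""
    def __init__(self, channels: int, scales: int = 4):
        super().__init__()
        assert channels % scales == 0
        self.scales = scales
        width = channels // scales
        self.convs = nn.ModuleList(
            nn.Conv2d(width, width, kernel_size=3, padding=1, bias=False)
            for _ in range(scales - 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        xs = torch.chunk(x, self.scales, dim=1)
        ys = [xs[0]]                              # y1 = x1 (no convolution)
        prev = None
        for i in range(1, self.scales):
            inp = xs[i] if prev is None else xs[i] + prev
            prev = self.convs[i - 1](inp)         # y_i = K_i(x_i + y_(i-1))
            ys.append(prev)
        # Concatenated outputs; a 1x1 convolution outside this module fuses them.
        return torch.cat(ys, dim=1)
```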

    Fig.3(a) shows the detailed structure of the MECA_CNN block; Fig.3(b) and Fig.3(c) show its two sub-blocks, namely the asymmetric convolution block and the efficient channel attention block. Specifically, to avoid information loss during the fine-grained feature extraction process, an asymmetric convolution block (ACB)[23] is applied in the Res2Net residual unit instead of three 3×3 convolution modules. The ACB module enhances the network’s ability to extract high-level features from fundus images through three parallel convolutional kernels of 3×3, 1×3, and 3×1, as shown in Fig.3(b). After the convolution operations, the features are merged and normalized. In addition, the ACB module helps reduce the number of model parameters and improve computational efficiency, which makes it applicable to lightweight models. Furthermore, because of the similarities between MVICC and MSVICC, it is not easy to distinguish them. To address this problem, the first convolutional layer with a 7×7 kernel in the Res2Net50 network is replaced with three convolutional layers with 3×3 kernels, so that fine-grained local features are extracted from fundus images, as shown in Fig.1. The ACB module not only maintains the same receptive field, but also enhances the capability of feature extraction.
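    A minimal sketch of such an asymmetric convolution block is shown below; the branch-wise batch normalization and fusion details of the original ACB are simplified here into a single normalization after summation, which is an assumption for illustration.

```python
import torch
import torch.nn as nn

class AsymmetricConvBlock(nn.Module):
    """Sketch of an ACB: three parallel 3x3, 1x3, and 3x1 convolutions whose
    outputs are summed and then normalized (simplified fusion)."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.square = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.hor = nn.Conv2d(in_ch, out_ch, kernel_size=(1, 3), padding=(0, 1), bias=False)
        self.ver = nn.Conv2d(in_ch, out_ch, kernel_size=(3, 1), padding=(1, 0), bias=False)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The three branches together keep the receptive field of a single 3x3 kernel.
        return self.bn(self.square(x) + self.hor(x) + self.ver(x))
```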

    Fig.3 Structure diagram of the multi-scale efficient channel attention block

    1.4 Attention mechanism

    In recent years, attention mechanisms have gained widespread usage in the field of computer vision. The squeeze-and-excitation (SE) network is the first channel attention mechanism. Based on the SE, the efficient channel attention (ECA) module[24] was proposed, which removes the fully connected (FC) layer of the original SE and introduces an adaptive 1D convolution. As shown in Fig.3(c), H and W denote the height and width of the input feature map, respectively, C represents the number of channels, and k represents the kernel size of the 1D convolution. The non-linear mapping for k is defined as shown in Eq.(2).

    k = ψ(C) = | log2(C)/γ + b/γ |_odd                (2)

    where |t|_odd represents the nearest odd number to t, and the values of b and γ are set to 1 and 2, respectively. The value of k is determined adaptively, which reduces the computing resources required. In this study, after extracting fine-grained features in the Res2Net residual unit, an efficient channel attention module is applied so that the MECA_CNN focuses on the lesion characteristics of cataracts. First, on the basis of the output of the previous convolutional layer, a global average pooling operation is carried out for each feature-map channel. Then the local dependence between channels is established using a one-dimensional convolution. After the one-dimensional convolution, the Sigmoid activation function maps the output values to the range 0 to 1, which allows the output to be interpreted as the attention weight of each channel. Finally, the attention weights are multiplied with the original feature map to achieve channel-level attention weighting. The structure of a multi-scale efficient channel attention (MECA) block is presented in Fig.3(a). Multiple MECA blocks are stacked to form the multi-scale efficient channel attention convolutional neural network, as shown in Fig.1. This design fully considers the lesion features of cataracts, enhancing the expression of lesion features while suppressing noise, so that the detailed features of blood vessels and the optic disc in the fundus image can be learned, which is helpful for visual impairment grading.
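    The ECA computation described above (global average pooling, an adaptive-size 1D convolution with kernel size k from Eq.(2), and sigmoid gating) can be sketched as follows; this is an illustrative implementation, not the authors' code.

```python
import math
import torch
import torch.nn as nn

class EfficientChannelAttention(nn.Module):
    """Sketch of an ECA module: channel descriptors from global average pooling,
    a 1D convolution across channels with adaptive kernel size k, and a sigmoid gate."""
    def __init__(self, channels: int, gamma: int = 2, b: int = 1):
        super().__init__()
        t = int(abs(math.log2(channels) / gamma + b / gamma))
        k = t if t % 2 else t + 1                 # |t|_odd: nearest odd number
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) -> per-channel descriptors: (N, C, 1, 1)
        y = x.mean(dim=(2, 3), keepdim=True)
        # 1D convolution over the channel dimension captures local cross-channel interaction
        y = self.conv(y.squeeze(-1).transpose(1, 2)).transpose(1, 2).unsqueeze(-1)
        return x * self.sigmoid(y)                # channel-wise re-weighting
```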

    1.5 Asymmetric loss function

    The cataract fundus dataset used in this study is imbalanced. As shown in Table 1, the number of normal samples is comparable to the number of MVICC samples, whereas the number of MSVICC samples is smaller than either. This imbalance can easily increase the risk of higher false-negative rates and weak generalization for classifiers. To solve this problem, the asymmetric loss (ASL_Loss)[25] function is applied in the training process, as shown in Eq.(3).

    L_+ = (1 − p)^(γ+) log(p),    L_− = (p_m)^(γ−) log(1 − p_m)                (3)

    where p_m denotes the shifted probability, as shown in Eq.(4); L_+ and L_− represent the losses of positive and negative samples, respectively; and γ+ and γ− are the positive and negative focusing parameters, respectively.

    p_m = max(p − m, 0)                (4)

    By assigning a larger weighting factor to the minority category, the ASL_Loss function can focus on the minority category.
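    A minimal sketch of an asymmetric loss of this form is given below, written in the sigmoid one-vs-rest style of Ref.[25]; the focusing parameters and the probability margin used here are assumed values, since the exact settings of this study are not reported.

```python
import torch
import torch.nn as nn

class AsymmetricLoss(nn.Module):
    """Sketch of an asymmetric loss (ASL): negative samples use a shifted
    probability p_m and a larger focusing parameter than positive samples.
    gamma_pos, gamma_neg and the margin m are assumed hyper-parameters."""
    def __init__(self, gamma_pos: float = 0.0, gamma_neg: float = 4.0, margin: float = 0.05):
        super().__init__()
        self.gamma_pos, self.gamma_neg, self.margin = gamma_pos, gamma_neg, margin

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # logits: raw scores (N, C); targets: one-hot float tensor (N, C)
        p = torch.sigmoid(logits)
        p_m = (p - self.margin).clamp(min=0)      # shifted probability, Eq.(4)
        loss_pos = targets * (1 - p).pow(self.gamma_pos) * torch.log(p.clamp(min=1e-8))
        loss_neg = (1 - targets) * p_m.pow(self.gamma_neg) * torch.log((1 - p_m).clamp(min=1e-8))
        return -(loss_pos + loss_neg).sum(dim=1).mean()
```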

    Table 1 Distribution of fundus images for visual impairment grading

    2 Experiments and results

    2.1 Dataset

    All fundus images were derived from routine follow-up consultations for cataract patients between November 2017 and March 2022 at the School of Ophthalmology and Optometry and Eye Hospital of Wenzhou Medical University. Each image was described and labeled by two experienced ophthalmologists in a double-blind fashion according to the examination result of the visual acuity card, and a third ophthalmologist was consulted in the case of disagreement. The distribution of the fundus dataset is shown in Table 1, including 3058 normal images (Normal), 2582 images of mild to moderate visual impairment caused by cataract with BCVA greater than or equal to 0.3 (MVICC), and 1358 images of severe visual impairment caused by cataract with BCVA less than 0.3 (MSVICC). In this study, the dataset was randomly divided into a training set (Train), a validation set (Val), and a test set (Test) in the ratio of 70%, 15%, and 15%. The external test set used in this study was obtained from Ningbo Eye Hospital, and each of its images was labeled using the same double-blind annotation method.

    2.2 Experimental environment

    The MECA_CNN was developed using the PyTorch deep learning framework (Torch 1.7.1, Torchvision 0.8.2) and trained with four NVIDIA TITAN RTX GPUs in parallel. To accelerate parameter convergence, a batch size of 64 was used on each GPU, and the initial learning rate and number of training epochs were set to 1e-03 and 80, respectively. The learning rate was then reduced to 1/10 of its previous value every 20 epochs. During training, the model’s performance was evaluated on the validation set, and the model with the highest validation accuracy was saved as the optimal model.
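    The reported schedule (initial learning rate 1e-03, divided by 10 every 20 epochs, 80 epochs) can be reproduced with a standard PyTorch optimizer and a StepLR scheduler, as sketched below; the model, loss, and data loader are hypothetical stand-ins, and the optimizer choice is an assumption.

```python
import torch
import torch.nn as nn
from torch.optim import Adam
from torch.optim.lr_scheduler import StepLR

# Dummy stand-ins so the sketch is self-contained; the real model, loss,
# and dataloader are not reproduced here.
model = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(3, 3))
criterion = nn.CrossEntropyLoss()
train_loader = [(torch.randn(4, 3, 224, 224), torch.randint(0, 3, (4,)))]

optimizer = Adam(model.parameters(), lr=1e-3)            # initial learning rate 1e-03
scheduler = StepLR(optimizer, step_size=20, gamma=0.1)   # divide lr by 10 every 20 epochs

for epoch in range(80):                                  # 80 training epochs
    model.train()
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```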

    2.3 Evaluation indicators and statistical analysis

    The performance of the MECA_CNN in discriminating Normal, MVICC, and MSVICC is evaluated by calculating the confusion matrix, accuracy, sensitivity, specificity, the receiver operating characteristic (ROC) curve, and the area under the ROC curve (AUC), as shown in Eqs (5)-(7).

    Accuracy = (TP + TN) / (TP + TN + FP + FN)                (5)
    Sensitivity = TP / (TP + FN)                              (6)
    Specificity = TN / (TN + FP)                              (7)

    In this study, statistical analyses are conducted using Python 3.8.0 and the scikit-learn package. TP, FP, TN, and FN denote the numbers of true positives, false positives, true negatives, and false negatives, respectively. The Wilson score approach is used to calculate the 95% confidence intervals (CIs) for accuracy, specificity, and sensitivity, while the empirical bootstrap with 2000 resamples is used for AUC.
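    A sketch of the per-class metrics of Eqs (5)-(7) together with the Wilson score interval is given below; the helper functions are illustrative and assume a one-vs-rest binarization of the three grades.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def wilson_ci(successes: int, total: int, z: float = 1.96):
    """95% Wilson score interval for a proportion (accuracy, sensitivity, specificity)."""
    p = successes / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = z * np.sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return centre - half, centre + half

def per_class_metrics(y_true, y_pred, positive_class):
    """One-vs-rest accuracy, sensitivity, and specificity, Eqs (5)-(7)."""
    t = np.asarray(y_true) == positive_class
    p = np.asarray(y_pred) == positive_class
    tn, fp, fn, tp = confusion_matrix(t, p, labels=[False, True]).ravel()
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity
```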

    2.4 Performance of the MECA_CNN in the internal and external test datasets

    To validate the performance and generalization ability of the MECA_CNN, the model is evaluated on the internal and external test datasets. On the internal test dataset, the MECA_CNN achieves accuracy, sensitivity, and specificity of 91.3%, 89.9%, and 92.0% for MVICC; 93.2%, 78.5%, and 96.7% for MSVICC; and 98.1%, 98.0%, and 98.1% for normal samples, as shown in Fig.4(c). The performance of the MECA_CNN on the external test dataset is comparable to that on the internal dataset: 88.7%, 91.3%, and 87.4% for MVICC; 92.0%, 75.8%, and 99.0% for MSVICC; and 96.0%, 96.3%, and 95.8% for normal samples, as shown in Fig.4(d). The detailed confusion matrices calculated on the internal and external test datasets are shown in Fig.4(a) and (b).

    2.5 Exploring the impact of different modules on the performance of the MECA_CNN

    To explore the impact of the different modules of the MECA_CNN on the overall performance, a set of ablation experiments is conducted with the same basic network, Res2Net50. By sequentially adding different modules to the basic network, five groups of ablation experiments are constructed and their differences are compared. Specifically, Model_1 represents the basic network Res2Net50; Model_2 represents the model with the first 7×7 convolutional layer replaced by three 3×3 convolutional layers; Model_3 represents the model with the 3×3 convolutional layer in the residual unit replaced by an asymmetric convolution; Model_4 represents the model with the ECA module adopted; Model_5 represents the model with the ASL_Loss function used; and MECA_CNN represents the model that includes all four improvements mentioned above. The results of the ablation experiments are shown in Table 2. The ablation experiments demonstrate that the proposed MECA_CNN outperforms the other five models. It is worth noting that the MECA_CNN is specifically suited to the task of visual impairment grading. All of the above improvements and ablation experiments are performed on the same dataset of fundus images.

    Table 2 Performance comparison of the MECA_CNN and its ablation experiments

    2.6 Performance comparison of the MECA_CNN and conventional CNNs

    To further examine the performance of the MECA_CNN model for visual impairment grading, three conventional CNNs are selected for comparison in this study, including a deep residual network (ResNet50), a multi-scale deep residual network (Res2Net50), and a dense convolutional network (DenseNet121). All models are trained and tested on the same dataset of fundus images. The detailed experimental results are compared in Table 3.

    As shown in Table 3, the statistical results show that the average accuracy of the MECA_CNN model over the three grades of visual impairment is 94.2%, which exceeds the three conventional CNNs by 2.9%, 3.8%, and 2.6%, respectively. The average sensitivity of the MECA_CNN model is 88.8%, which is higher than the three conventional CNNs by 4.3%, 5.1%, and 3.9%, respectively. The average specificity of the MECA_CNN model is 95.6%, which is higher than the three conventional CNNs by 2.0%, 2.4%, and 1.8%, respectively.

    The t-SNE method is further used to analyze the discriminative ability of the extracted high-level features. The high-level features obtained from the MECA_CNN and the conventional CNNs on the internal test dataset are mapped into a two-dimensional space to visually show their ability to distinguish between the different classes. As shown in Fig.5, the circular points represent normal samples, the square points represent MVICC patients, and the triangular points represent MSVICC patients. Notably, the discrimination ability of the conventional CNNs is obviously inferior to that of the MECA_CNN. The t-SNE visualization shows that the MECA_CNN has better separability of high-level features for visual impairment grading.
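    A t-SNE visualization of this kind can be produced with scikit-learn as sketched below; the feature matrix and labels here are random placeholders standing in for the penultimate-layer activations of each model, and the perplexity is an assumed setting.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Placeholder high-level features (N, D) and labels (N,); in practice these
# would be the activations extracted from the trained model on the test set.
features = np.random.rand(300, 512)
labels = np.random.randint(0, 3, 300)            # 0: Normal, 1: MVICC, 2: MSVICC

embedded = TSNE(n_components=2, perplexity=30, init="pca",
                random_state=0).fit_transform(features)

markers = {0: "o", 1: "s", 2: "^"}               # circle, square, triangle as in Fig.5
for cls, name in enumerate(["Normal", "MVICC", "MSVICC"]):
    pts = embedded[labels == cls]
    plt.scatter(pts[:, 0], pts[:, 1], marker=markers[cls], label=name, s=12)
plt.legend()
plt.title("t-SNE of high-level features (sketch)")
plt.show()
```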

    Table 3 Performance comparison of the MECA_CNN and conventional CNNs for visual impairment grading in the internal test dataset

    Fig.5 Visualization of the separability for the high-level features extracted by the MECA_CNN and three conventional CNNs in the internal test dataset using t-SNE

    The ROC curve is a comprehensive evaluation metric for classification models; a curve closer to the upper-left corner indicates a superior classifier. As shown in Fig.6, the ROC curves and AUC values of the MECA_CNN and the three conventional CNNs on the internal test dataset are compared. The experimental results indicate that the performance of the MECA_CNN is superior to that of the three conventional CNNs. Specifically, compared with DenseNet121, the AUC values of the MECA_CNN on the three grades of visual impairment are improved by 0.2%, 3.7%, and 2.1%, respectively. Compared with ResNet50, the AUC values are improved by 0.4%, 2.4%, and 1.9%, respectively. Compared with Res2Net50, the AUC values are improved by 0.2%, 2.9%, and 2.8%, respectively.

    Fig.6 ROC curves and AUC of MECA_CNN and conventional CNNs in the internal test dataset

    2.7 Visualization heatmaps of visual impairment grading

    To explore the rationality of the MECA_CNN model, the gradient-weighted class activation mapping (Grad-CAM) method was used to generate visualization heatmaps, which highlight the cataract-related regions that the MECA_CNN model focuses on the most. Grad-CAM is an explainability technique that leverages the gradients of a target concept flowing into the last convolutional layer of the MECA_CNN to generate a localization map highlighting the regions in the image that are important for predicting that concept. The redder regions in the visualization heatmaps represent the areas on which the MECA_CNN model has focused the most, indicating their significance as cataract-related features.

    For normal fundus images, more red regions are highlighted in the heatmap because of the large number of clear vessels. For fundus images of MVICC and MSVICC, fewer red regions are highlighted because of the deeper turbidity of the vessels around the optic disc. The visualization heatmaps are thus consistent with the quantitative experimental results of the MECA_CNN model. Representative heatmap examples of MVICC, MSVICC, and normal samples are displayed in Fig.7, which illustrates the rationality of the MECA_CNN for discriminating MVICC, MSVICC, and normal samples.
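    A compact Grad-CAM sketch in PyTorch is given below to illustrate the procedure (hooking the last convolutional layer, back-propagating the target-class score, and weighting the activations by the channel-averaged gradients); it is a generic illustration rather than the exact implementation used here, and `model`, `image`, and `target_layer` are assumed inputs.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_class, target_layer):
    """Minimal Grad-CAM sketch: `image` is a (C, H, W) tensor, `target_layer`
    is the last convolutional layer of `model`."""
    activations, gradients = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: activations.append(o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))

    model.eval()
    score = model(image.unsqueeze(0))[0, target_class]   # target-class score
    model.zero_grad()
    score.backward()                                     # gradients flow into target_layer
    h1.remove(); h2.remove()

    acts, grads = activations[0], gradients[0]           # (1, C, h, w)
    weights = grads.mean(dim=(2, 3), keepdim=True)        # channel-averaged gradients
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))  # weighted combination
    cam = F.interpolate(cam, size=image.shape[1:], mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze().detach()  # normalized heatmap in [0, 1]
```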

    Fig.7 Representative heatmaps for visual impairment grading of cataracts

    3 Discussion

    In this study, an automatic assessment model, the MECA_CNN, is developed for visual impairment grading in cataract patients. In the MECA_CNN, an efficient channel attention mechanism and an asymmetric convolutional module are explored to extract multi-scale features focusing on small blood vessels, the optic disc, and the macula. Detailed comparative experiments demonstrate that the proposed MECA_CNN can effectively discriminate MVICC and MSVICC from normal samples. The MECA_CNN outperforms other conventional CNNs on both the internal and external test datasets, which verifies its effectiveness and generalization ability. Moreover, the Grad-CAM technique provides an interpretable path for the visual impairment grading of cataracts.

    Due to its reliable performance, the MECA_CNN can be applied as a remote medical screening tool for the early detection of cataract patients with abnormal visual acuity. In underdeveloped areas and remote mountain villages, the limited number of ophthalmologists and medical resources makes it impossible to assess visual impairment in cataract patients in a timely manner, so patients cannot receive appropriate treatment. In addition, less-educated residents tend to neglect eye health and seek medical attention only when cataracts have seriously affected visual acuity. By that time, the visual impairment is often severe and may be accompanied by complications; only surgical treatment can be implemented, and postoperative recovery is often not ideal. Therefore, based on fundus images and corresponding annotations by senior ophthalmologists, the trained MECA_CNN model can be deployed in the clinic to assist doctors in automatically grading visual impairment in cataract patients, providing patients with an appropriate treatment protocol.

    The performance of the MECA_CNN is superior to that of other conventional CNNs for several reasons. First, the CLAHE algorithm is employed to enhance the quality of fundus images so that blurred small blood vessels can be discerned from background noise. Second, an efficient channel attention mechanism enables the MECA_CNN to extract fine-grained features, which is beneficial for distinguishing MVICC from MSVICC; conventional CNNs ignore these fine-grained features and thus potentially produce biased predictions. Third, to fully account for the problem of imbalanced datasets in clinical practice, the ASL_Loss function is adopted so that the MECA_CNN focuses more on the minority MSVICC category. In addition, transfer learning, asymmetric convolution, and the multi-scale residual module are adopted to extract multi-scale features and enhance the performance of the MECA_CNN. Compared with Res2Net50, the accuracy, sensitivity, and specificity of the MECA_CNN for MVICC are improved by 3.8%, 9.2%, and 0.6%, respectively. Similar improvements are observed for the grading of MSVICC and normal samples.

    To illustrate the interpretability of the MECA_CNN, the Grad-CAM technique is applied to generate heatmaps visualizing the regions to which the MECA_CNN pays the most attention for the final grading. Six typical fundus images are presented to show the visualization results. For normal fundus images, more red regions highlight the optic disc, macula, and blood vessels in the heatmap. For MVICC and MSVICC images, only the optic disc and its surrounding blurred blood vessels are highlighted. This interpretability exploration further facilitates the application of the MECA_CNN in real-world clinics, so that ophthalmologists and patients can readily understand the reasons for the final grading results inferred by the MECA_CNN.

    This work has several limitations. First, although the MECA_CNN method provides a practical strategy for visual impairment grading, the sensitivity for MVICC and MSVICC is still slightly low, which may be attributed to the strong similarity of the phenotypes of these two categories. The next step is to analyze the characteristics of fundus images and explore fine-grained feature extraction and classification algorithms. Second, this study only explores the automatic grading of visual impairment based on fundus images; epidemiological data, including age, medical history, and other underlying diseases, remain under-investigated. By combining electronic medical records and other optical images, multimodal fusion algorithms will be explored to provide valuable supplements for the comprehensive assessment of visual impairment. In addition, due to the limited external test data, the generalization of the MECA_CNN has not been fully verified. As more fundus images of cataracts and the corresponding BCVA values are collected and annotated, the generalization of the MECA_CNN will be further verified to ensure that the deployed model can be effectively applied in real clinical practice.

    4 Conclusions

    In this study, a feasible MECA_CNN method is proposed for the automated grading of visual impairment using an efficient channel attention mechanism and a multi-scale deep residual network. The MECA_CNN model exhibits excellent performance in identifying MVICC and MSVICC from normal samples on both the internal and external test datasets. The experimental results verify that the proposed method is superior to other conventional methods. The ablation experiments demonstrate that the combination of the several improved modules in the MECA_CNN is beneficial to its optimal performance. Interpretability analysis and external test dataset validation indicate that the MECA_CNN method has good rationality and generalization ability in clinical applications. This study may also provide a valuable reference for the automated analysis of visual impairment in other eye diseases.
