
    Predicting visual acuity with machine learning in treated ocular trauma patients

2023-07-20 10:30:44

Zhi-Lu Zhou, Yi-Fei Yan, Jie-Min Chen, Rui-Jue Liu, Xiao-Ying Yu, Meng Wang, Hong-Xia Hao, Dong-Mei Liu, Qi Zhang, Jie Wang, Wen-Tao Xia

    1Department of Forensic Medicine, Guizhou Medical University, Guiyang 550009, Guizhou Province, China

    2Shanghai Key Laboratory of Forensic Medicine, Shanghai Forensic Service Platform, Institute of Forensic Science,Ministry of Justice, Shanghai 200063, China

    3The SMART (Smart Medicine and AI-based Radiology Technology) Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University,Shanghai 200444, China

    4School of Communication and Information Engineering,Shanghai University, Shanghai 200444, China

    5Basic Medical College, Jiamusi University, Jiamusi 154007,Heilongjiang Province, China

Abstract ● AIM: To predict best-corrected visual acuity (BCVA) by machine learning in patients with ocular trauma who were treated for at least 6mo.

● KEYWORDS: ocular trauma; predicting visual acuity; best-corrected visual acuity; visual dysfunction; machine learning

    INTRODUCTION

Ocular trauma is the leading cause of blindness in young adults and children[1]. Approximately 19 million people worldwide are visually impaired or even blind because of ocular trauma[2]. Owing to the complexity and diversity of ocular trauma and the great differences in post-treatment recovery, clinicians and forensic doctors pay close attention to the recovery of post-traumatic visual acuity (VA). To better understand visual recovery after surgery or trauma, many researchers have investigated the relationship between visual recovery and macular[3-4], retinal[5-6], and vitreal[7] injuries caused by diseases or trauma. Hou et al[8] found that macular injury involving the fovea or a thin optic nerve fiber layer results in severe vision loss. Phillips et al[9] found that, for injuries of the lens and anterior chamber, timely treatment and effective control can effectively aid the recovery of VA. Liang et al[10] found that postoperative VA was positively correlated with cube volume and cube average thickness (CAT).

With the advent of electrophysiology, new progress has been made in the examination of visual functional conductivity. Studies have found that VA can be assessed by the amplitude waveform and latency period of the pattern visual evoked potential (PVEP) within the scope of vision[11]. However, in forensic medicine, the actual VA cannot be determined when performing visual function tests because of the patient's lack of cooperation. Various methods have been proposed to address this problem. At present, the most common are the fogging test, distance transformation, and electrophysiology, but the first two are affected by many subjective factors. The waveform of the PVEP can be affected by several factors, such as unstable resistance, misalignment of the eye when the patient fixates on the stimulus screen, and poor cooperation. Therefore, the results of the pattern visual evoked potentials need to be combined with the findings of the ocular examination, the site of the ocular injury, and the magnitude of the acting forces. During the literature review, we found that the ocular trauma score (OTS) takes full account of structural changes to the eye and has been used to predict VA in patients with ocular injuries since 2002[12]. Some researchers argue that OTS can provide objective, real, and effective information for the prognosis of ocular trauma, and it has also been found to be closely related to the severity of ocular trauma and to prognosis[13-14]. Xiang et al[15] found that OTS helped identify situations that disguise or exaggerate VA loss in patients with ocular trauma.

Interdisciplinary cross-collaboration has become a new trend, and research on the application of artificial intelligence to ophthalmology is maturing. Chen et al[16] used an artificial neural network-based machine-learning algorithm to automatically predict VA after ranibizumab treatment for diabetic macular edema. Huang et al[17] predicted VA and best-corrected visual acuity (BCVA) using a feed-forward artificial neural network (ANN) and an error back-propagation learning algorithm in patients with retinopathy of prematurity. Rohm et al[18] used five different machine-learning algorithms to predict VA in patients with neovascular age-related macular degeneration after ranibizumab injections. Wei et al[19] developed a deep learning algorithm based on optical coherence tomography (OCT) to predict VA after cataract surgery in highly myopic eyes. Murphy et al[20] developed a fully automated three-dimensional image analysis method for measuring the minimum linear diameter of macular holes and derived an inferred formula to predict postoperative VA in idiopathic macular holes.

However, most of the above studies are restricted to single-disease settings, and no work has so far addressed the assessment of VA after ocular trauma. This study utilized the relationship between eyeball structure and vision, extracted features from ophthalmology examinations, and introduced OTS scores combined with machine-learning techniques to develop a model for predicting BCVA. In addition, the weight of each feature in the model was visualized using a SHAP (SHapley Additive exPlanations) map to explore the importance of the above features in the task of BCVA prediction.

    MATERIALS AND METHODS

Ethical Approval As this study was a retrospective analysis, our ethics committee ruled that approval was not required for this study, and the requirement for individual consent was waived.

Materials All internal experimental data were obtained from the Key Laboratory of the Academy of Forensic Science, Shanghai, China. The cases were reviewed to evaluate eligibility based on clinical data, OCT images (Heidelberg, Germany; Carl Zeiss, Goeschwitzer Strasse, Germany), and fundus photographs (Carl Zeiss, Goeschwitzer Strasse, Germany). As of October 2021, the datasets comprised 1589 eyes, including 986 traumatic eyes and 603 healthy eyes. Our inclusion criteria were as follows: 1) differing degrees of ocular trauma; 2) the time since injury was at least 6mo; 3) the therapy records were complete; 4) the BCVA after recovery was proven to be real. Patients with ocular or other systemic diseases likely to affect the VA and patients with poor cooperation were excluded. The test dataset, collected while these models were being trained and validated, was also obtained from the Key Laboratory of the Academy of Forensic Science (Shanghai, China) using the same inclusion and exclusion criteria. From January 2022 to April 2023, the test dataset comprised 100 eyes, including 71 traumatic eyes and 29 healthy eyes, after removing the cases that did not meet the inclusion criteria.

Optical Coherence Tomography Images OCT images of the internal dataset were obtained via two different devices: Heidelberg and Zeiss. Because the OCT images were obtained from two machines, and to avoid the effect of fusing data from two different machines, we divided the data into group I (data from Zeiss), group II (data from Heidelberg), and group III (data from Zeiss and Heidelberg, namely all data). Each group was further divided into a traumatic group (group A) and a healthy-plus-traumatic group (group B). For example, IA represented the traumatic group of Zeiss, IIB represented the traumatic and healthy eye group of Heidelberg, and IIIA represented the traumatic eyes of all data.

Figure 1 Process of sample selection and source of the variables OCT: Optical coherence tomography; UVA: Uncorrected visual acuity; GOTS: Grading of ocular trauma score; COTS: Classification of ocular trauma score; IVA: Initial vision acuity; CAT: Cube average thickness; CST: Cube subfield thickness; RNFL: The average thickness of the retinal nerve fiber layer; RNFL-S: Superior RNFL; RNFL-I: Inferior RNFL; RNFL-N: Nasal RNFL; RNFL-T: Temporal RNFL; ASV-ANM: Ratio of the abnormal to the normal macular area; CDR: Cup to disc ratio.

OCT images of the test dataset, which comprised 100 eyes, were also obtained via the Zeiss and Heidelberg devices.

Feature Extraction We obtained 17 variables (Figure 1), which consisted of six variables from clinical data, eight extracted from OCT images, and three extracted from fundus photos. Table 1 shows the OTS scoring and grading process[12]. BCVA was obtained using a projector-eye chart (NIDEK, Aichi, Japan). For the convenience of statistical analysis, we converted the decimal VA to the logarithm of the minimal angle of resolution (logMAR) VA[21]. To extract variables from fundus photographs, we used the fundus photo reader software RadiAnt DICOM Viewer (Medixant, Poznan, Poland) to measure the areas or lengths that we needed. To decrease the error, the data were measured by the same person, and the average of two measurements was taken. The main variable extraction process is shown in Figure 2.

Vision Acuity Conversion The VA was converted to its logMAR equivalent, with counting fingers being assigned a value of 1.9, hand motion 2.3, light perception 2.7, and no light perception 3.0[19,21].
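This conversion is straightforward to script; the following is a minimal Python sketch of the mapping described above, assuming acuities are recorded either as decimal values or as one of the qualitative grades listed (the helper name and rounding precision are illustrative).

```python
import math

# Fixed logMAR values for non-numeric acuities, as used in this study.
QUALITATIVE_VA = {
    "counting fingers": 1.9,
    "hand motion": 2.3,
    "light perception": 2.7,
    "no light perception": 3.0,
}

def to_logmar(va):
    """Convert a decimal visual acuity (e.g. 0.5) or a qualitative
    grade (e.g. 'hand motion') to its logMAR equivalent."""
    if isinstance(va, str):
        return QUALITATIVE_VA[va.strip().lower()]
    if va <= 0:
        raise ValueError("Decimal acuity must be positive")
    return round(-math.log10(va), 2)  # logMAR = -log10(decimal VA)

# Example: decimal 0.3 corresponds to roughly 0.52 logMAR.
print(to_logmar(0.3), to_logmar("counting fingers"))
```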

Overview of Machine-Learning Analytic Systems We proposed a BCVA analysis system based on machine-learning methods, including the prediction and grading of BCVA using the Extreme Gradient Boosting (XGB) model, and combined it with a post-hoc model interpretation method, namely SHAP, to analyze the importance of the model's input features.
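As a rough illustration of this analysis system, the sketch below fits an XGBoost regressor on a hypothetical 17-feature matrix and derives per-sample Shapley values with the shap library; the data, shapes, and hyperparameters are placeholders rather than the study's actual settings.

```python
import numpy as np
import shap
from xgboost import XGBRegressor

# Hypothetical feature matrix (17 ophthalmic variables) and logMAR BCVA targets.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 17))
y = rng.normal(loc=0.4, scale=0.3, size=200)

# Fit the gradient-boosted tree model used for BCVA prediction.
model = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.1)
model.fit(X, y)

# Post-hoc interpretation: Shapley values quantify each feature's
# contribution to every individual prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)               # shape: (n_samples, n_features)
mean_importance = np.abs(shap_values).mean(axis=0)   # global importance ranking
print(mean_importance.argsort()[::-1][:5])           # indices of the top-5 features
```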

Figure 2 Process of variable extraction from fundus photos Area 1 is the abnormal area of the macula, 2 is the normal area of the macula, and 1/2 is the ratio of the abnormal to the normal macular area (ASV-ANM). Area 3 denotes the abnormal area of the optic disc, 4 denotes the normal area of the optic disc, and 3/4 denotes the Comus. Area 6 denotes the vertical diameter of the optic cup, 5 denotes the vertical diameter of the optic disc, and 6/5 denotes the cup to disc ratio (CDR).

A flowchart of the experiment is shown in Figure 3. First, to determine the importance of each feature, all available features were used to predict BCVA using the XGB model and the SHAP method; features were then filtered by combining the least absolute shrinkage and selection operator (LASSO) with an independent sample t-test.

Second, to complete the regression and classification tasks, the features obtained after screening were randomly divided into training and validation datasets by a fivefold cross-validation method using the four models. Then, we performed ablation experiments on the screened features and investigated the role of each feature in the corresponding task using SHAP. Finally, the eligible test dataset was used to further validate the best-performing model and the best variables.

Figure 3 The flowchart of our experiment UVA: Uncorrected visual acuity; GOTS: Grading of ocular trauma score; COTS: Classification of ocular trauma score; IVA: Initial vision acuity; CAT: Cube average thickness; CST: Cube subfield thickness; RNFL: The average thickness of the retinal nerve fiber layer; RNFL-S: Superior RNFL; RNFL-I: Inferior RNFL; RNFL-N: Nasal RNFL; RNFL-T: Temporal RNFL; ASV-ANM: Ratio of the abnormal to the normal macular area; CDR: Cup to disc ratio; BCVA: Best-corrected visual acuity; SVR: Support vector regression; RFR: Random forest regressor; BYR: Bayesian ridge; XGB: Extreme gradient boosting; SVM: Support vector machine; LR: Logistic regression; RFC: Random forest classifier; MAE: Mean absolute error; RMSE: Root mean square error.

Table 1 The input variables' statistics

Feature Selection The optimal combination of features was selected using the SHAP method combined with the LASSO and an independent sample t-test. SHAP is an additive explanatory model inspired by cooperative game theory, in which all features are considered "contributors" to the model. The model generates a contribution value, the Shapley value, for each predicted sample, which is the value assigned to each feature in the sample (i.e., the importance of each feature in the model). The LASSO method is widely used in model improvement and selection; it selects features by compressing their coefficients through a penalty function so that the coefficients of unimportant features shrink to zero. The independent sample t-test is a common method in statistical analysis that can be used to test whether the difference between the means of two types of samples is significant.

The specific steps were as follows: 1) All features of group III were input into the model to predict BCVA, and the Shapley value of this model corresponding to each feature was calculated by the SHAP method and ranked. 2) All features of group III were input into the LASSO method, and the features it retained were designated group L. 3) The group III data were divided into two groups with a logMAR of 0.3 as the critical value, an independent sample t-test was performed on all features between these two groups, and the features with significant differences were designated group T. 4) The intersection of groups L and T was determined, and features with overly small Shapley values were removed to derive the final feature set.
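A minimal sketch of this selection procedure, assuming the variables are held in a pandas DataFrame and that the mean absolute Shapley value of each feature has already been computed, is shown below; the function name, thresholds, and column handling are illustrative.

```python
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

def select_features(df: pd.DataFrame, target: str, shap_rank: dict,
                    cutoff: float = 0.3, min_shap: float = 1e-3) -> list:
    """Intersect LASSO-selected and t-test-significant features, then
    drop those whose mean |Shapley value| is negligible (illustrative thresholds)."""
    X = df.drop(columns=[target])
    y = df[target]

    # Group L: features with non-zero LASSO coefficients.
    lasso = LassoCV(cv=5).fit(StandardScaler().fit_transform(X), y)
    group_l = set(X.columns[np.abs(lasso.coef_) > 0])

    # Group T: features differing significantly between eyes with
    # logMAR <= 0.3 and logMAR > 0.3.
    good, poor = X[y <= cutoff], X[y > cutoff]
    group_t = {c for c in X.columns
               if ttest_ind(good[c], poor[c], equal_var=False).pvalue < 0.05}

    # Final set: intersection of L and T, minus near-zero SHAP features.
    return [c for c in group_l & group_t if shap_rank.get(c, 0.0) >= min_shap]
```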

Extreme Gradient Boosting Extreme Gradient Boosting (XGB) is a tree-ensemble model widely used in Kaggle and many other machine-learning competitions with good results. Its inference is computed by fitting each new tree to the residuals of the preceding model. XGB is an optimized gradient tree boosting system that improves computational speed through algorithmic innovations, such as parallel and distributed computation and approximate greedy search, and it controls overfitting by adding regularization coefficients and residual learning to the loss function. In addition, XGB can learn from sparse data and has good generalization ability.

Regression and Classification Models The features used in this experiment were extracted from a small sample of clinical variables and OCT images. Considering the high applicability of machine learning to small sample sets, we selected the XGB, support vector regression (SVR), Bayesian ridge (BYR), and random forest regressor (RFR) regression models, using the filtered features as model inputs and a grid search approach to find the optimal hyperparameters to predict BCVA for each of the three data groups. For the BCVA classification task, we classified the BCVA of all patients into two categories using logMAR equal to 0.3 as the threshold value. Patients with logMAR less than or equal to 0.3 were assigned label 1, and patients with logMAR greater than 0.3 were assigned label 0. For this binary classification task, we also used four machine-learning models, XGB, support vector machine (SVM), logistic regression classifier (LRC), and random forest classifier (RFC), with the filtered features as input and grid-searched hyperparameters to complete the classification of BCVA.
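The sketch below illustrates this setup for the regression task with scikit-learn and xgboost; the hyperparameter grids are placeholders rather than the grids actually searched, and the classification task would follow the same pattern with the four classifiers and classification metrics.

```python
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import BayesianRidge
from xgboost import XGBRegressor

# Candidate regressors and (illustrative) hyperparameter grids.
candidates = {
    "XGB": (XGBRegressor(), {"n_estimators": [100, 300], "max_depth": [2, 3, 4]}),
    "SVR": (SVR(), {"C": [0.1, 1, 10], "kernel": ["rbf", "linear"]}),
    "BYR": (BayesianRidge(), {"alpha_1": [1e-6, 1e-4]}),
    "RFR": (RandomForestRegressor(), {"n_estimators": [100, 300]}),
}

def fit_best(X, y):
    """Grid-search each model family with five-fold cross-validation and
    return the best estimator and its cross-validated MAE."""
    cv = KFold(n_splits=5, shuffle=True, random_state=42)
    best = {}
    for name, (est, grid) in candidates.items():
        search = GridSearchCV(est, grid, cv=cv,
                              scoring="neg_mean_absolute_error")
        search.fit(X, y)
        best[name] = (search.best_estimator_, -search.best_score_)  # (model, MAE)
    return best
```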

Evaluation Standards For the three experimental datasets, we used a five-fold cross-validation method to separate the training and validation sets. For the prediction of corrected VA, we used the Pearson correlation coefficient (PCC), mean absolute error (MAE), and root mean square error (RMSE) to measure the accuracy of the model predictions, where y_i is the true value of corrected VA, ŷ_i is the predicted corrected VA, and n is the number of samples.
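For reference, the standard definitions of these metrics in the notation just given are:

```latex
\mathrm{MAE}=\frac{1}{n}\sum_{i=1}^{n}\left|y_i-\hat{y}_i\right|,\qquad
\mathrm{RMSE}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i-\hat{y}_i\right)^{2}},\qquad
\mathrm{PCC}=\frac{\sum_{i=1}^{n}\left(y_i-\bar{y}\right)\left(\hat{y}_i-\bar{\hat{y}}\right)}
{\sqrt{\sum_{i=1}^{n}\left(y_i-\bar{y}\right)^{2}}\,\sqrt{\sum_{i=1}^{n}\left(\hat{y}_i-\bar{\hat{y}}\right)^{2}}}
```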

For the classification of corrected VA classes, we used accuracy, sensitivity, specificity, and precision as evaluation metrics, where TP, TN, FP, and FN denote the numbers of true positives, true negatives, false positives, and false negatives, respectively, and samples with logMAR >0.30 represent positive samples.
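The corresponding standard definitions in terms of these counts are:

```latex
\mathrm{Accuracy}=\frac{TP+TN}{TP+TN+FP+FN},\quad
\mathrm{Sensitivity}=\frac{TP}{TP+FN},\quad
\mathrm{Specificity}=\frac{TN}{TN+FP},\quad
\mathrm{Precision}=\frac{TP}{TP+FP}
```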

Statistics The experiments were performed on a Dell computer with an Intel(R) Core(TM) i7-10870H CPU @ 2.20 GHz and 32 GB RAM. The model development was performed using Python (version 3.10) with the scikit-learn library (version 1.0.2), and statistical analyses were performed using a commercially available statistical software package (SPSS Statistics; IBM, Chicago, USA).

Figure 4 Consistency of BCVAs between prediction and ground truth Agreement assessed using Bland-Altman analysis for the predicted value of BCVA and the gold standard in groups A (A) and B (B). In the plots, the solid lines represent the actual mean difference (bias), and dotted lines show the 95% limits of agreement. BCVA: Best-corrected visual acuity.

    Table 2 The performance predicted in four models

    Table 3 The performance predicted in four models

The distribution of variables was described by calculating the mean and standard deviation of each continuous variable for all data. Continuous variables of group A were compared with those of group B using the independent samples t-test. The consistency between the predicted and actual values was verified using the Bland-Altman diagram. PCC was used to analyze the correlation between the predicted and actual values.
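The Bland-Altman bias and 95% limits of agreement reported in the Results can be computed as in the short sketch below (the function name is illustrative; the 1.96 factor is the usual normal-approximation multiplier).

```python
import numpy as np

def bland_altman_limits(y_true, y_pred):
    """Return the mean difference (bias) and the 95% limits of agreement
    between predicted and ground-truth logMAR BCVA."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    diff = y_pred - y_true
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)          # half-width of the 95% limits
    return bias, bias - half_width, bias + half_width  # (bias, lower, upper)
```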

    RESULTS

The characteristics of the input variables are listed in Table 1. The average age in all groups was close to 44y. The number of men was higher than that of women in each group. After feature selection, the optimal variables for predicting BCVA were uncorrected VA (UVA), grading of ocular trauma score (GOTS), cube subfield thickness (CST), the average thickness of the retinal nerve fiber layer (RNFL), superior RNFL (RNFL-S), Comus, the ratio of the abnormal to the normal macular area (ASV-ANM), and cup to disc ratio (CDR).

Best-Corrected Visual Acuity Prediction Performance Analysis In Tables 2 and 3, we compare the performance of the four machine-learning algorithms after fivefold cross-validation when classifying the dataset into three groups. XGB obtained better results in groups IA, IIA, and IIIA, with MAEs of 0.32±0.03, 0.30±0.04, and 0.32±0.02, respectively, and RMSEs of 0.45±0.05, 0.40±0.05, and 0.42±0.03, respectively. The best results were obtained with the random forest model in the three data groups IB, IIB, and IIIB, with MAEs of 0.20±0.01, 0.20±0.01, and 0.20±0.02, and RMSEs of 0.24±0.03, 0.33±0.02, and 0.33±0.03, respectively.

Figure 4 shows a Bland-Altman plot assessing the agreement between the model predictions and the ground truth. The 95% confidence intervals and mean deviations for the consistency of predictions for group III are presented separately. The 95% confidence intervals for groups IIIA and IIIB are -0.47 to 0.89 logMAR and -0.49 to 1.04 logMAR, with mean deviations of 0.21 and 0.28, respectively.

Best-Corrected Visual Acuity Grade Classification Performance Analysis The experimental results were evaluated using five-fold cross-validation and a grid search approach to determine the best parameters for the model. The qualitative results shown in Tables 4 and 5 indicate that XGB obtained better results for all groups using the same combination of features than the other three methods. Sensitivity, precision, specificity, and accuracy for group IIIA classification were 0.92±0.02, 0.86±0.03, 0.71±0.07, and 0.85±0.03, respectively, and in group IIIB 0.82±0.03, 0.82±0.04, 0.90±0.02, and 0.87±0.01, respectively. The sensitivity of group A was higher than the specificity in all three datasets, indicating that the prediction accuracy for BCVA≤0.3 (logMAR) was higher than that for BCVA>0.3 (logMAR).

    Table 4 The performance of the classified model

    Table 5 The performance of the classified model

Test Dataset Since the XGB model had the best performance in both the regression and classification tasks, the test dataset was used to determine the prediction and classification performance of the XGB model. As shown in Tables 6 and 7, the XGB model demonstrated stable, promising results with an MAE of 0.20, RMSE of 0.29, and PCC of 0.96. The sensitivity, precision, specificity, and accuracy were 0.83, 0.92, 0.95, and 0.90, respectively. Figure 5 shows the confusion matrices of the XGB model on group IIIB of the internal dataset and on the test dataset. To understand the role of each feature in the model, we visualized the importance of the features of the best-performing XGB model using SHAP, where importance refers to the extent to which each feature contributes to the model's predicted results.

    Table 6 The result of regression between the internal dataset and test dataset

XGB: Extreme gradient boosting; PCC: Pearson correlation coefficient; MAE: Mean absolute error; RMSE: Root mean square error. Group IIIB represents all samples in the internal dataset.

As shown in Figures 6 and 7, the UVA played a key role in both the XGB prediction and classification models, and the importance of the remaining features varied across the two tasks, with GOTS, RNFL-S, and CST showing a greater contribution to the models in both tasks.

    Table 7 The result of classification between the internal dataset and test dataset

    DISCUSSION

Figure 5 The confusion matrices of the XGB model A: All samples in the internal dataset of the XGB model; B: The test dataset of the XGB model. The predicted label = 0 with true label = 0 represents the correct prediction of visual acuity > 0.3 logMAR, and the predicted label = 1 with true label = 1 represents the correct prediction of visual acuity ≤ 0.3 logMAR. XGB: Extreme gradient boosting.

Figure 6 Plot of weights of the different features for the BCVA prediction task The global feature importance plots for all groups, with the horizontal coordinates indicating the Shapley value for each sample corresponding to each feature and the color of each point representing the magnitude of that sample's feature value. A, B, C, D, E, and F represent groups IA, IB, IIA, IIB, IIIA, and IIIB, respectively. BCVA: Best-corrected visual acuity.

Figure 7 Plot of weights of different features for the BCVA classification task BCVA classification task in all groups, where class 1 represents BCVA≤0.3 and class 2 represents BCVA>0.3. A, B, C, D, E, and F represent groups IA, IB, IIA, IIB, IIIA, and IIIB, respectively. BCVA: Best-corrected visual acuity.

To ensure judicial justice, we should clarify the cause of visual injury and confirm that the BCVA obtained by ophthalmology examination is reliable. In China, a decline in VA caused by accidental or intentional injury can lead to compensation for the victim or punishment of the perpetrator. Therefore, some patients feign severe vision loss or blindness. VA in ocular trauma is usually worse in patients with posterior segment involvement. Therefore, a forecast model that can assess BCVA might be helpful for the accurate judgment of forensic workers. Although some patients cooperate poorly in practice, high-resolution OCT and fundus photographs can reveal morphological changes, so the factors affecting VA can be identified. In recent years, machine learning has been extensively applied, and it has been found that an OCT scan of the macula can provide millions of morphological parameters affecting VA[22-23]. Previous studies have mostly focused on the diagnosis and classification of eye diseases. Owing to the complex and changeable nature of ocular trauma, such studies are relatively few.

Several studies have developed machine-learning algorithms to predict VA in patients with ocular or systemic diseases. Some of them used OCT images of the macula[19], and some used clinical data and measurement features from OCT (such as central retinal thickness)[24]. Others used basic information, such as disease type or condition, age, and sex[25].

As shown in Table 1, the initial VA, UVA, GOTS, classification of ocular trauma score, RNFL, ASV-ANM, CDR, and BCVA were worse in group A than in group B. Our results also suggest several rules of thumb: men were more likely to suffer ocular injuries than women, and the RNFL was more easily affected than the macula lutea. The data from OCT images, fundus photos, and clinical information were used to predict BCVA in patients with ocular trauma using the XGB, SVR, BYR, and RFR models, and another four models were used to accomplish the classification task. The results reveal that these models can predict BCVA in most patients with ocular trauma and show promising performance. As expected, the best predictor variables automatically selected by the model included UVA, GOTS, CST, RNFL, RNFL-S, Comus[26], ASV-ANM, and CDR. This outcome coincides with the consensus that VA is closely related to eyeball structure. We can observe that the predicted values are well correlated with the ground truth values (PCC>0.7), and the Bland-Altman plot shows good consistency between the gold standard and predicted values. The XGB model had the best performance in group A, and the RFR model had the best results in group B. In the forensic clinical assessment of visual function, whether the recovered vision after ocular trauma is below 0.3 logMAR can be used as a basis for assessing the degree of impairment and disability. To improve the efficiency and accuracy of identifying pseudo visual loss, this experiment was combined with the corresponding conditions of the visual function assessment, and a dichotomous experiment with a 0.3 logMAR cut-off was finally performed. From the classification results, the XGB model had the highest accuracy in all groups, and sensitivity was always greater than specificity in group A. We speculate that this reflects sample imbalance: in group A, there were more eyes with BCVA≤0.3 logMAR than eyes with BCVA>0.3 logMAR, but in group B, the increase in the number of healthy eyes led to an increase in the number of eyes with BCVA>0.3 logMAR. Finally, to prove the generalization of the model, we combined OCT images captured by the two machines to predict VA and compared the outcomes with those of groups I and II; no significant difference in prediction or classification outcomes was observed between the groups. We tested the model with additional data to determine how well it ultimately performed the regression and classification tasks; this test set was not involved in the training or gradient descent process of the model. It therefore makes sense to use an independent test set to evaluate the best model with the best variables, and our results show that the model also performs well on this unexposed test set. On the test dataset, the regression model again showed stable and promising performance, with an MAE of 0.20 and RMSE of 0.29, and the classification performance was also good.

The advantages of this experiment are as follows. Our experiments were designed based on the relationship between changes in eye structure and VA. This innovative experimental design may help evaluate VA after injury. We divided the data into three groups to avoid errors due to OCT images being obtained from different systems. Our study has some limitations. First, the data were extracted manually from the OCT images and fundus photos, so some potentially useful features were lost. Second, the sample size needs to be increased to further improve the robustness and generalizability of the machine-learning models. Finally, an error between the predicted and actual values still exists.

In the future, we plan to directly input OCT images and fundus photos into the model for VA prediction and to continually increase the sample size to optimize the model. We also expect to develop an open platform with optimized software using real-world clinical data, in which ophthalmic images are input into the model to directly obtain the VA and assist in accurate diagnosis.

In conclusion, owing to the complex and changeable conditions of ocular trauma, the prognosis of vision is difficult to clarify. This study is based on the relationship between changes in eye structure and vision and on the increasing application of artificial intelligence in ophthalmology. We used four different machine-learning models to predict BCVA and identified useful predictor variables. The model can be used to predict VA and may be helpful for the auxiliary analysis of postoperative VA in clinical ophthalmology.

    ACKNOWLEDGEMENTS

Foundations: Supported by the National Key R&D Program of China (No.2022YFC3302001); the Human Injury and Disability Degree Classification (No.SF20181312); the National Natural Science Foundation of China (No.62071285).

Conflicts of Interest: Zhou ZL, None; Yan YF, None; Chen JM, None; Liu RJ, None; Yu XY, None; Wang M, None; Hao HX, None; Liu DM, None; Zhang Q, None; Wang J, None; Xia WT, None.
