
    Robust Length of Stay Prediction Model for Indoor Patients

Computers, Materials & Continua, 2022, Issue 3

Ayesha Siddiqa, Syed Abbas Zilqurnain Naqvi, Muhammad Ahsan, Allah Ditta, Hani Alquhayz, M. A. Khan and Muhammad Adnan Khan

1Department of Mechatronics and Control Engineering, University of Engineering and Technology, Lahore, 54000, Pakistan

2Department of Information Sciences, Division of Science and Technology, University of Education, Lahore, 54000, Pakistan

3Department of Computer Science and Information, College of Science in Zulfi, Majmaah University, Al-Majmaah, 11952, Saudi Arabia

4Riphah School of Computing & Innovation, Faculty of Computing, Riphah International University, Lahore Campus, Lahore, 54000, Pakistan

5Pattern Recognition and Machine Learning Lab, Department of Software, Gachon University, Seongnam, 13557, Korea

Abstract: Due to unforeseen climate change, complicated chronic diseases, and the mutation of viruses, the top challenge for hospital administrations is to know the Length of stay (LOS) of patients with different diseases. Hospital management does not know exactly when an existing patient will leave the hospital; this information could be crucial, because it would allow management to admit more patients. As a result, hospitals face many problems in managing available resources and in admitting new patients for prompt treatment. Therefore, a robust model needs to be designed to help hospital administrations predict patients' LOS and resolve these issues. For this purpose, a very large dataset (more than 2.3 million patients' records) related to New York hospital patients, containing information about a wide range of diseases including Bone Marrow, Tuberculosis, Intestinal Transplant, Mental illness, Leukaemia, Spinal cord injury, Trauma, Rehabilitation, Kidney and Alcoholic patients, HIV patients, Malignant Breast disorder, Asthma, Respiratory distress syndrome, etc., has been analyzed to predict the LOS. We selected six Machine learning (ML) models: Multiple linear regression (MLR), Lasso regression (LR), Ridge regression (RR), Decision tree regression (DTR), Extreme gradient boosting regression (XGBR), and Random forest regression (RFR). The selected models' predictive performance was checked using R-square and Mean square error (MSE) as the performance evaluation criteria. Our results revealed the superior predictive performance of the RFR model, both in terms of R-square score (92%) and MSE score (5), among all selected models. Through Exploratory data analysis (EDA), we conclude that most stays were between 0 and 5 days, with a mean stay of 5.3 days per patient, and that patients older than 50 years spent more days in the hospital. Based on the average LOS, the results revealed that patients with diagnoses related to birth complications spent more days in the hospital than patients with other diseases. These findings can help predict the future length of hospital stay of new patients, which will help hospital administrations estimate and manage their resources efficiently.

    Keywords: Length of stay; machine learning; robust model; random forest regression

    1 Introduction

Just as any organization's success depends on up-to-date information for its smooth functioning, hospital administrations' utmost desire is to have updated data about admitted patients and their stay in the hospital. Since emergency cases are increasing day by day worldwide due to climate change, outbreaks such as COVID-19 [1], and population growth, it has become a severe issue for hospital administrations to deal with large inflows of patients. Most of the time, hospital management does not know when an existing patient will leave the hospital; this information could be crucial, as it would allow them to admit more patients [2]. Since patients' Length of stay (LOS) has always remained unpredictable due to complicated issues such as the mutation of viruses and chronic diseases, hospital administrations face many problems in managing available resources and in admitting or facilitating new patients [3]. Therefore, it is essential to design models that could help hospital administrations predict patients' LOS.

    2 Related Work

Machine learning (ML) has been widely used to predict the future based on the past behavior of data. A variety of ML models have been used to predict the LOS of patients, including unsupervised and supervised ML models [4,5]. In unsupervised and supervised ML, the model is trained on an unlabeled and a labeled dataset, respectively [6]. However, the supervised ML framework is more appropriate for a regression task like the one we address here. Therefore, in this study, the following supervised ML models, i.e., Multiple linear regression (MLR), Lasso regression (LR), Ridge regression (RR), Decision tree regression (DTR), Extreme gradient boosting regression (XGBR), and Random forest regression (RFR), have been selected and compared to predict the LOS of different diseased patients.

In the past, different ML techniques have been used to predict hospital LOS. Patients' stay in hospitals is expected to increase due to the rise in cardiovascular diseases and the ageing of the population. This problem affects the healthcare system, with hospitals facing decreased bed capacity and, as a result, increased overall cost. To address this issue, in [7], a total of 16,414 cardiac patients were selected for the analysis of LOS prediction using ML models (i.e., Support vector machine (SVM), Bayesian network (BN), Artificial neural network (ANN), and RFR). The researchers concluded that the RFR model outperformed the others with the highest accuracy score of 0.80. Morton et al. used supervised ML techniques such as MLR, SVM, Multi-task learning (MTL), and the RFR model to predict the short-term and long-term LOS of diabetic patients. After comparing the results, it was concluded that SVM was more effective in predicting short-term patients' stay [8]. Bacchi et al. pre-processed the data of 313 patients and applied different ML techniques such as ANN, Natural language processing (NLP), and SVM to predict LOS and discharge information. Their study revealed the ANN technique's effectiveness in predicting the LOS with the highest accuracy of 0.74 [9]. Patel et al. compared the performance of various combinations of variables for predicting hospital mortality and diabetic patients' LOS. They concluded that the best combination of variables for predicting LOS with an LR model was age, race, insurance status, type of admission, PR-DRG, and severity calculation [10].

Walczak et al. used ANN techniques (i.e., Backpropagation (BP), Radial basis function (RBF), and Fuzzy ARTMAP) to predict the illness level and hospital LOS of trauma patients. They found that the combination of BP and Fuzzy ARTMAP produced optimal results [11]. Yang et al. used data of 1080 burn patients and applied SVM and Linear regression techniques to predict the LOS at three different stages: admission, acute, and post-treatment. The study concluded that SVM regression performed better than the other regression techniques for LOS prediction across the different stages of burn patients [12]. Another group selected 896 surgical patients and applied supervised ML models (i.e., Local Gaussian regression (LGR), SVM, and RFR) to make predictions about the LOS [13]. For this purpose, they divided the patients into two groups, Urgent-operational (UO) and non-Urgent-operational (non-UO), and found that blood sugar for the UO group and blood pressure for the non-UO group were the most influential variables in predicting the LOS. Their findings also revealed that the RFR model was the most accurate ML technique for predicting the LOS. Finally, Liu et al. used a dataset of seventeen hospitals in northern California and applied mixture models of Linear regression and Logistic regression to predict the LOS in hospitals [14]. They showed that the Laboratory acute physiology score (LAPS) and the Comorbidity point score (COPS) helped boost the models' efficiency.

A comparative analysis of existing techniques to predict the LOS is shown in Tab. 1. It has been observed that most of these studies are limited to small patient datasets and focus on only one or two specific diseases to calculate the LOS [8-11,13].

    Table 1: Comparative analysis of related work


For general recommendations to hospital administrations, we have selected a large dataset, i.e., more than 2.3 million patients, covering a range of diseases including Heart Transplant, Lung Transplant, Burn Patients, Bone Marrow Transplant, Mental illness diagnoses, Liver Transplant, Intestinal Transplant, Schizophrenia, Respiratory System Diagnosis, Acute Leukemia, Eating disorder, Bipolar disorder, Trauma, Spinal disorders & injuries, Rehabilitation, Kidney Patients, Alcoholic Patients, Dialysis Patients, Skin Patients, HIV Patients, Malignant Breast disorder, Asthma, Cardiac/Heart Patients, Cancer, Illness Severity, Surgery, Accident Patients, Respiratory distress syndrome, Abnormal Patients, etc. The above data are related to New York hospitals and contain patients' information such as duration of stay, gender, age, race, ethnicity, type of admission, discharge year, and some other essential variables. The main objectives of this study are to explore the dataset to find hidden patterns among the variables and to apply different supervised ML models to identify a robust model for making future predictions of the hospital LOS of different diseased patients. In this study, we also calculate feature importance scores with the RFR model to identify which features are relevant to the hospital length of stay.

    3 Methodology

The framework of the proposed study to predict the LOS of the patients is presented in Fig. 1. Below, we briefly explain the various stages of the proposed framework.

    Figure 1: A framework of the proposed study

    3.1 Data Description

In this study, we have used Inpatient De-identified data from healthdata.gov, a website managed by the U.S. Department of Health & Human Services that maintains updated health and social care data in the United States [15]. The dataset contains more than 2.3 million patients with 34 variables, listed in Tab. 2, including cost, charges, gender, age, race, ethnicity, type of admission, discharge year, etc., recorded in the year 2017.

    3.2 Data Pre-Processing

It is essential for data analysis that the data used be correct and complete, because missing values negatively affect a model's performance. For this purpose, the dataset used in this study was checked and missing values were identified. It was noticed that, among all the variables listed in Tab. 2, ten variables had missing values. Three of these ten variables, i.e., Payment Typology 2, Payment Typology 3, and Birth Weight, had a higher count of missing values than the rest and were removed from the dataset. The remaining seven variables, i.e., Hospital Service Area, Hospital County, Operating Certificate Number, Permanent Facility Id, Zip Code, APR Severity of Illness Description, and APR Risk of Mortality, had a relatively low count of missing values. Therefore, we kept these variables but removed the corresponding rows from further analysis.
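
The paper gives no code; the following is a minimal pre-processing sketch assuming a Python/pandas workflow. The file name and the exact column labels are assumptions based on Tab. 2 and may need to be adjusted to the downloaded file.

```python
import pandas as pd

# Hypothetical file name for the 2017 inpatient de-identified extract.
df = pd.read_csv("inpatient_deidentified_2017.csv")

# Inspect how many values are missing in each column.
print(df.isnull().sum().sort_values(ascending=False).head(10))

# Columns with a very high count of missing values are dropped entirely.
df = df.drop(columns=["Payment Typology 2", "Payment Typology 3", "Birth Weight"],
             errors="ignore")

# For columns with relatively few missing values, drop only the affected rows.
low_missing_cols = ["Hospital Service Area", "Hospital County",
                    "Operating Certificate Number", "Permanent Facility Id",
                    "Zip Code", "APR Severity of Illness Description",
                    "APR Risk of Mortality"]
df = df.dropna(subset=[c for c in low_missing_cols if c in df.columns])
```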

    Table 2: Description, correlation value, and missing values identification of each variable in the inpatient de-identified dataset


    3.3 Data Exploration and Visualization

Exploratory data analysis (EDA) was used to analyze the dataset and summarize its main variables [16]. In this study, univariate and bivariate analyses were applied to the variables to check the relationship between the independent variables and the target variable (LOS). Before performing both analyses, the correlation between all the input variables and the target variable (LOS) was checked. Correlation is a basic statistical concept used to quantify the relationship between variables. It ranges between -1 and +1, where -1 indicates a perfect negative correlation, +1 indicates a perfect positive correlation, and 0 means there is no correlation between the variables [17]. Tab. 2 shows the correlation between the independent variables and LOS (the target variable).

As we can see from Tab. 2, Facility Name, Ethnicity, CCS Diagnosis Code, Zip Code, Race, APR Risk of Mortality, and APR Severity of Illness Description correlate negatively with LOS. Discharge Year, Abortion Edit Indicator, Payment Typology 2, Payment Typology 3, and Birth Weight have zero correlation with LOS, while all the remaining variables correlate positively with LOS. Total Costs, CCS Diagnosis Code, and Total Charges have the highest correlation with LOS. The variables "Discharge Year" and "Abortion Edit Indicator" were removed from the dataset for further analysis because they do not correlate with the target variable (LOS).

Since LOS is the output variable, we kept it along the y-axis of the plots created for data visualization. In the dataset, the LOS of a patient who stayed more than four months is recorded as "120+". Since the exact number of days is not given, we replaced "120+" with 130 to keep the variable numeric and avoid errors.
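
As an illustration of the recoding and correlation check described above, the sketch below assumes the cleaned DataFrame `df` from the pre-processing sketch and a column literally named "Length of Stay"; both the column name and the exact "120+" string are assumptions.

```python
import pandas as pd

# Stays longer than four months are stored as a string; replace them with the
# numeric placeholder 130 and make the whole column numeric.
df["Length of Stay"] = pd.to_numeric(
    df["Length of Stay"].replace({"120+": 130, "120 +": 130}))

# Pearson correlation of every numeric column with the target variable (LOS).
numeric_cols = df.select_dtypes(include="number")
print(numeric_cols.corr()["Length of Stay"].sort_values(ascending=False))
```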

Univariate analysis (UA) was used to explore the variables of the dataset. UA summarizes each variable individually and helps identify hidden patterns in the dataset. In this study, as we can see in Fig. 2, the univariate distribution plot of LOS is displayed in the form of a normalized histogram. The plot shows that the LOS distribution is not symmetric; most of the patients stayed roughly 0-5 days, with a mean stay of 5.3 days per patient, whereas a significantly smaller number of patients stayed longer than this period.

    Figure 2: Univariate distribution plot of length of stay
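
A normalized histogram like Fig. 2 could be produced, for example, with seaborn; this sketch assumes the numeric "Length of Stay" column prepared above.

```python
import matplotlib.pyplot as plt
import seaborn as sns

los = df["Length of Stay"]
sns.histplot(los, bins=60, stat="density", kde=True)   # normalized histogram
plt.axvline(los.mean(), color="red", linestyle="--",
            label=f"mean = {los.mean():.1f} days")
plt.xlabel("Length of Stay (days)")
plt.ylabel("Density")
plt.legend()
plt.show()
```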

Next, we performed bivariate analyses to check the relationship between the independent variables and the output variable (LOS) using bar graphs. Figs. 3a-3g display the bar graphs of the variables (i.e., APR Severity of Illness, Age Group, Type of Admission, APR Risk of Mortality, CCS Diagnosis Description, Patient Disposition, and Payment Typology 1) that showed the most variation in the predictor variable (LOS). For example, the average LOS based on APR severity of illness is shown in Fig. 3a: the highest average LOS belongs to the extreme group, followed by the major group. Fig. 3b shows the average LOS of the different age groups. On average, patients older than 50 years spent more days in the hospital than patients aged 30-49 years and the remaining age groups. Fig. 3c shows the average length of hospital stay based on admission type. As we can see from Fig. 3c, patients in the "urgent" admission category spent the highest number of days on average, followed by the "emergency" category.

    Figure 3: (a) Average length of stay based on APR severity of illness.(b) Average length of stay based on different age groups.(c) Average length of stay based on admission type.(d) Length of stay vs. different APR risk of mortality.(e) Top 10 diagnoses with the longest length of stay.(f) Length of stay vs. different patient disposition.(g) Average length of stay based on different payment methods

In contrast, patients in the "not available" admission category spent the minimum number of days in the hospital on average. The average LOS based on APR risk of mortality is shown in Fig. 3d. As Fig. 3d reveals, the highest average LOS belongs to the extreme group, followed by the other groups. Based on the average LOS, as shown in Fig. 3e, patients with diagnoses related to birth complications spent more days in the hospital than patients with other diseases. Fig. 3f shows the average LOS by patient disposition. On average, the highest LOS belongs to the medical cert long term care hospital disposition, followed by the other dispositions. The average LOS for different payment types is shown in Fig. 3g. Based on the average LOS demonstrated in Fig. 3g, we conclude that the "Department of Corrections" and "unknown" categories of this feature (Payment Typology 1) have the maximum average LOS, followed by the other categories.
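
A bar chart such as those in Fig. 3 amounts to grouping by one categorical variable and averaging LOS; the sketch below uses the "Age Group" column as an example (the column name is assumed from Tab. 2).

```python
import matplotlib.pyplot as plt

avg_los_by_age = (df.groupby("Age Group")["Length of Stay"]
                    .mean()
                    .sort_values(ascending=False))
avg_los_by_age.plot(kind="bar")
plt.ylabel("Average Length of Stay (days)")
plt.title("Average LOS by age group (cf. Fig. 3b)")
plt.tight_layout()
plt.show()
```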

    3.4 Feature Selection

Feature selection is an essential part of building a good model, as ML models require informative variables for training. There were a total of 34 variables in the patients' dataset. After cleaning the dataset and performing EDA, some variables were removed due to a high count of missing values, and the EDA helped gain further insights into the data. We used the Mutual information (MI) regression technique to check the mutual dependence of the input variables on the dependent variable (LOS). The information gain of all independent variables is shown in Fig. 4. Based on the EDA, the ML models were trained using all the dataset's variables other than the five variables removed due to a high count of missing values or zero correlation with the output variable (LOS).

    Figure 4: Information gain of all independent variables based on LOS (i.e., predictor variable)
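
The mutual-information check could be implemented with scikit-learn as sketched below; the ordinal encoding of the categorical columns is our own assumption, since the paper does not state how the variables were encoded.

```python
import pandas as pd
from sklearn.feature_selection import mutual_info_regression
from sklearn.preprocessing import OrdinalEncoder

target = "Length of Stay"
X_raw, y = df.drop(columns=[target]), df[target]

# Encode every remaining column as integers so MI can be estimated.
X = pd.DataFrame(OrdinalEncoder().fit_transform(X_raw.astype(str)),
                 columns=X_raw.columns, index=X_raw.index)

mi = pd.Series(mutual_info_regression(X, y, random_state=0), index=X.columns)
print(mi.sort_values(ascending=False))   # information gain per variable (cf. Fig. 4)
```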

    3.5 Machine Learning Regression Techniques

In this study, since the target variable of the hospital dataset is a continuous numerical value, supervised ML regression algorithms were used to make predictions of the patients' LOS. The chosen ML algorithms are MLR, LR, RR, DTR, XGBR, and RFR.

    3.5.1 Multiple Linear Regression Model

The Multiple linear regression (MLR) model is an extension of Linear regression which predicts a numeric value using more than one independent variable [18]. The general equation of the MLR model is:

$$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_k x_k + \varepsilon$$

where $y$ is the output variable, $x_1, \dots, x_k$ are the input variables, $\beta_0, \dots, \beta_k$ are constant terms (the least-squares estimators), and $\varepsilon$ is the error term.
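
A least-squares fit of the MLR equation above can be obtained with scikit-learn; the sketch assumes the encoded feature matrix X and target y from the feature-selection sketch.

```python
from sklearn.linear_model import LinearRegression

mlr = LinearRegression()
mlr.fit(X, y)                      # estimates beta_0 ... beta_k by least squares

print("Intercept (beta_0):", mlr.intercept_)
print("First few coefficients:", mlr.coef_[:5])
```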

    3.5.2 Lasso Regression Model

The Lasso regression (LR) model is a subtype of the linear regression model used to shrink the number of coefficients of the regression model. The LR model is a regularized regression model, which results in a sparse model with fewer coefficients: it sets to zero the coefficients that do not contribute much to the predictions. As a result, the model becomes simpler and can perform better than the unregularized MLR model [19]. LR helps reduce the overfitting problem by zeroing the coefficients of the least important features and keeping only those features that contribute to the output predictions.

    3.5.3 Ridge Regression Model

Ridge regression (RR) is another special case of the linear regression model that helps shrink the coefficients and reduce the model's complexity. It also helps reduce multicollinearity. Unlike the LR model, the RR model does not shrink coefficients all the way to zero; instead, it makes some of the coefficient values very low or close to zero. Therefore, the features which do not contribute much to the model will have very low coefficients. As a result, the RR model helps reduce the overfitting that can arise with the MLR model [20].
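
The contrast between Lasso and Ridge can be seen directly from the fitted coefficients, as in the sketch below (X and y as above; the alpha values are illustrative, not the paper's settings).

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

lasso = Lasso(alpha=0.1).fit(X, y)   # L1 penalty: some coefficients become exactly 0
ridge = Ridge(alpha=1.0).fit(X, y)   # L2 penalty: coefficients shrink but stay non-zero

print("Lasso coefficients set to zero:", int(np.sum(lasso.coef_ == 0)))
print("Ridge coefficients set to zero:", int(np.sum(ridge.coef_ == 0)))
```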

    3.5.4 Decision Tree Regression Model and Extreme Gradient Boosting Regression Model

Decision tree regression (DTR) is a well-known ML model used for classification and regression problems. DTR builds a tree-shaped structure of variables: it breaks the data into smaller and smaller subsets while an associated decision tree is incrementally developed [21]. The DTR model can handle both numeric and categorical data [22]. The Extreme gradient boosting regression (XGBR) model is a DTR-based ensemble ML model [23] designed to increase training speed and predictive accuracy.
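
A sketch of the two tree-based regressors, assuming X and y as above and the commonly used xgboost package for XGBR; the hyperparameters shown are illustrative only.

```python
from sklearn.tree import DecisionTreeRegressor
from xgboost import XGBRegressor

dtr = DecisionTreeRegressor(random_state=0).fit(X, y)
xgbr = XGBRegressor(n_estimators=200, learning_rate=0.1,
                    random_state=0).fit(X, y)
print("DTR depth:", dtr.get_depth())
```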

    3.5.5 Random Forest Regression Model

The Random forest regression (RFR) model is a collection of multiple decision trees. The RFR model is an estimator that fits several decision trees on sub-samples of the data and uses averaging to improve accuracy and control overfitting [24]. In the case of a classification problem, a random forest uses voting: each tree makes its own prediction, and a class is assigned to a new test point based on the majority vote. In the case of regression, it takes the average of the numeric values predicted by the individual decision trees. In this way, it improves accuracy and controls the overfitting of the model [25].
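
A minimal RFR sketch, again assuming X and y as above; the number of trees is illustrative.

```python
from sklearn.ensemble import RandomForestRegressor

rfr = RandomForestRegressor(n_estimators=100, random_state=0, n_jobs=-1)
rfr.fit(X, y)

# For regression, the forest's prediction is the average of the individual trees.
print(rfr.predict(X.iloc[:5]))
```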

    3.6 Model Evaluation and Validation

Cross-validation (CV) is a very useful technique for parameter tuning in ML modeling, and most of the time it performs better than the standard validation-set approach. It divides the data into k folds, e.g., 10 folds. In each iteration, nine of the ten folds are used for training and the remaining fold for testing. This process is repeated ten times so that every fold is used for training as well as for testing, and in the end the average test accuracy is obtained [26]. One of the main advantages of CV over the simple validation-set approach is that in CV all the sample points are used for both training and testing, which is not the case in the simple validation-set approach [27]. Fig. 5 shows the working of a K-fold CV: all the folds are used in both the training and testing phases.

    Figure 5: Working of K-fold cross-validation
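
With scikit-learn, the 10-fold procedure of Fig. 5 can be written as below (X and y as above; the model is only an example).

```python
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

cv = KFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(LinearRegression(), X, y, cv=cv, scoring="r2")
print("R-square per fold:", scores.round(3))
print("Mean R-square over 10 folds:", scores.mean().round(3))
```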

After fitting the models, the next step is to measure their performance. Two important performance measures, i.e., Mean square error (MSE) and the R-square score, are used to evaluate the above-mentioned models. MSE is calculated by subtracting each predicted value from the actual value, squaring each difference, summing all the squared differences, and dividing the sum by the number of training points. The following equation gives the mathematical formula for calculating MSE:

$$MSE = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2$$

Here $n$ denotes the number of training points, $y_i$ denotes the actual value, and $\hat{y}_i$ denotes the value predicted by a model.

The second performance measure is the R-square score, also known as the coefficient of determination. R-square has a value between 0 and 1 and tells us how well a line fits the data, i.e., how well it follows the variations within a set of data [28]. Mathematically, it is given as:

$$R^2 = 1 - \frac{SS_{RES}}{SS_{TOT}}$$

where $SS_{RES}$ denotes the sum of squares of residuals and $SS_{TOT}$ denotes the total sum of squares. An R-square value of 1 indicates a perfectly fitted model, while a score of 0 means the model was unable to fit the data at all.
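
Both metrics are available in scikit-learn; the toy values below are illustrative only.

```python
from sklearn.metrics import mean_squared_error, r2_score

y_true = [3, 5, 2, 8]
y_pred = [2.5, 5.0, 2.0, 7.0]

mse = mean_squared_error(y_true, y_pred)   # (1/n) * sum((y_i - y_hat_i)^2)
r2 = r2_score(y_true, y_pred)              # 1 - SS_RES / SS_TOT
print(f"MSE = {mse:.4f}, R-square = {r2:.4f}")
```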

    4 Results and Discussion

    4.1 Experimental Analysis

After selecting the essential variables of the dataset, the six selected models, i.e., MLR, LR, RR, DTR, XGBR, and RFR, were evaluated using a 10-fold CV. For this purpose, the dataset was divided in an 80:20 ratio, i.e., 80% for training and 20% for testing. The training data were further separated, in a similar proportion, into training and validation portions. The main idea is to first train and validate each model using 10-fold CV for parameter tuning, and then test the model on the 20% test data to see its performance on unseen predictions. For the 10-fold CV, each model was built ten times on nine different folds and validated on the tenth fold; in the end, the mean R-square and MSE were taken over the ten folds. R-square tells us how well the model fits the data, and MSE is the cost function, computed as the average of the squared differences between the actual and predicted value of each record. For the best model performance, the MSE should be minimal (0 in perfect conditions) and the R-square score should be near or equal to 1. The parameter settings corresponding to the minimum average validation error were selected for prediction in the test phase.
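
The evaluation protocol described above could be coded as in the sketch below, which shows only two of the six models for brevity (X and y as above; the implementation details are our own assumptions, since the paper does not give code).

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import cross_val_score, train_test_split

# 80:20 split of the full dataset.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                     random_state=0)

models = {"MLR": LinearRegression(),
          "RFR": RandomForestRegressor(random_state=0, n_jobs=-1)}

for name, model in models.items():
    # 10-fold CV on the training portion for tuning/validation.
    cv_r2 = cross_val_score(model, X_train, y_train, cv=10, scoring="r2").mean()
    # Final fit on the training data and evaluation on the held-out 20%.
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(f"{name}: CV R2 = {cv_r2:.3f}, "
          f"test MSE = {mean_squared_error(y_test, pred):.2f}, "
          f"test R2 = {r2_score(y_test, pred):.3f}")
```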

    4.1.1 Multiple Linear Regression Model

Multiple linear regression (MLR), as mentioned before, was trained and validated using a 10-fold CV for the prediction of LOS. The MLR model was then used to predict the test set to see its performance on the test data. The MSE on the training data was 39, and the R-square score on the training data was 0.37, so both MSE and R-square showed that the MLR model's performance was very low on the training data. On the test data, the model showed an MSE of 38.49 and an R-square score of 0.371. Thus, for both training and testing, the MSE was very high and the R-square score was very low, which indicates a low performance of the MLR model.

    4.1.2 Lasso Regression Model

The Lasso regression (LR) model was applied in a way very similar to the MLR model. The LR model showed an MSE of 42.58 and an R-square score of 0.31 for the training data. For the test data, the LR model showed an MSE of 42.19 and an R-square score of 0.310. Thus, for both training and testing, the MSE was even higher than that of MLR and the R-square score was very low, resulting in low model performance.

    4.1.3 Ridge Regression Model

The Ridge regression (RR) model showed an MSE of 39 and an R-square score of 0.37 for the training data. For the test data, it showed an MSE of 38.49 and an R-square score of 0.3711. Since these results were also far from ideal, the RR model's performance was also low.

    4.1.4 Decision Tree Regression Model

The Decision tree regression (DTR) model showed an MSE of 0.002 and an R-square score of 0.999 for the training data. For the test data, it showed an MSE of 5.93 and an R-square score of 0.903. Since these results were relatively close to the ideal, this model's performance was much better than that of MLR, LR, and RR.

    4.1.5 Extreme Gradient Boosting Regression Models

The Extreme gradient boosting regression (XGBR) model showed an MSE of 5.32 and an R-square score of 0.914 for the training data. For the test data, it showed an MSE of 5.62 and an R-square score of 0.908. As these readings indicate, XGBR performed better than all the previous models.

    4.1.6 Random Forest Regression Model

The Random forest regression (RFR) model was applied in the same way as the other models. The RFR model showed an MSE of 0.76 and an R-square score of 0.987 for the training data. For the test data, it showed an MSE of 5 and an R-square score of 0.92. These results indicate the superior predictive performance of the RFR method compared to the other models.

    4.2 Discussion

We have seen that the MLR, LR, and RR models could not perform well, as indicated by their large MSE and small R-square scores. The tree-based models, i.e., DTR, XGBR, and RFR, were much better in terms of these performance measures, as presented in Tab. 3. Overall, the RFR model was found to be the best model for predicting the LOS.

From Tab. 3, it can be seen that, both in terms of R-square score and MSE score, the RFR model is the best one, followed by the XGBR model. RFR was the model in which the explanatory variables explained the variation in the output variable (LOS) with the highest R-square score of 92% and the lowest MSE score of 5 among all six models. The XGBR ensemble algorithm is the second-best model in this analysis, with an R-square score of 90.8% and an MSE score of 5.62. MLR, RR, and LR could not fit this hospital data properly and performed poorly. The proposed methodology was applied to a much larger dataset and achieved higher accuracy than the studies done in the past.

    Table 3: Mean squared error (MSE) and R-square (RS) score for the different ML models

    4.3 Features Importance

Feature importance is a technique used to identify which features/variables among all the features/variables are relevant in making predictions. Feature importance scores were calculated using the RFR model [29]. Feature importance tells us which features primarily contribute to fitting the data or explaining the variation/prediction of the output variable y [30]. It can be seen in Fig. 6 that "Total Costs", "CCS Diagnosis", and "Total Charges" are the most essential variables in terms of importance score. These results are consistent with the EDA findings, where LOS was found to have a high correlation score with these variables. Apart from these three variables, Fig. 6 also reveals the part played by other variables, although secondary, in predicting the LOS.

    Figure 6: Importance of independent variables on length of stay in random forest model
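
Importance scores like those in Fig. 6 can be read directly from a fitted random forest; the sketch assumes the `rfr` model and feature matrix X from the earlier sketches.

```python
import pandas as pd

importances = pd.Series(rfr.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))
```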

    5 Conclusion

In this study, the main objectives were to explore the Inpatient De-identified data and to build a robust model that could predict the hospital LOS of patients coming to the hospital in the future. Predicting hospital length of stay will help hospitals estimate the resources available for patients and manage them efficiently. EDA with the help of graphs was performed to develop essential insights from the data. From the EDA, we conclude that most stays were between 0 and 5 days, with a mean stay of 5.3 days per patient, and that patients older than 50 years spent more days in the hospital. Based on the average LOS, it was also observed that patients with diagnoses related to birth complications spent more days in the hospital than patients with other diseases. Six ML models were employed and evaluated using the 10-fold CV approach: Multiple linear regression (MLR), Lasso regression (LR), Ridge regression (RR), Decision tree regression (DTR), Extreme gradient boosting regression (XGBR), and Random forest regression (RFR). The results showed that RFR was the best model in terms of R-square and MSE, followed by XGBR. The feature importance scores revealed the relevance of three primary variables, Total Costs, CCS Diagnosis Code, and Total Charges, for predicting the LOS. Based on the above study, we recommend that future work involve more variables in the given dataset to build a model that could predict hospital LOS even more accurately.

Acknowledgement: Thanks to the supervisor and co-authors for their valuable guidance and support.

Funding Statement: The authors received no specific funding for this study.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
