
    Variable selection strategies and its importance in clinical prediction modelling

    Family Medicine and Community Health, 2020, Issue 1

    Mohammad Ziaul Islam Chowdhury, Tanvir C Turin

    ABSTRACT Clinical prediction models are used frequently in clinical practice to identify patients who are at risk of developing an adverse outcome so that preventive measures can be initiated. A prediction model can be developed in a number of ways; however, an appropriate variable selection strategy needs to be followed in all cases. Our purpose is to introduce readers to the concept of variable selection in prediction modelling, including the importance of variable selection and variable reduction strategies. We will discuss the various variable selection techniques that can be applied during prediction model building (backward elimination, forward selection, stepwise selection and all possible subset selection), and the stopping rule/selection criteria in variable selection (p values, Akaike information criterion, Bayesian information criterion and Mallows’ Cp statistic). This paper focuses on the importance of including appropriate variables, following the proper steps, and adopting the proper methods when selecting variables for prediction models.

    INTRODUCTION

    Prediction models play a vital role in establishing the relation between the variables used in a particular model and the outcome, and help forecast that outcome. A prediction model can identify the variables that determine the outcome, quantify their strength of association with the outcome and predict the outcome from their specific values. Prediction models have countless applications in diverse areas, including clinical settings, where a prediction model can help with detecting or screening high-risk subjects for asymptomatic diseases (to help prevent disease through early intervention), predicting a future disease (to facilitate patient-doctor communication based on more objective information), assisting in medical decision-making (to help both doctors and patients make an informed choice regarding treatment) and assisting healthcare services with planning and quality management.

    Different methodologies can be applied to build a prediction model; these techniques can be classified broadly into two categories: mathematical/statistical modelling and computer-based modelling. Regardless of the modelling technique used, one needs to apply appropriate variable selection methods during the model building stage. Selecting appropriate variables for inclusion in a model is often considered the most important and difficult part of model building. In this paper, we will discuss what is meant by variable selection, why variable selection is important, the different methods for variable selection and their advantages and disadvantages. We also use examples of prediction models to demonstrate how these variable selection methods are applied in model building. The concept of variable selection is heavily statistical, and general readers may not be familiar with many of the concepts discussed in this paper. However, we have attempted to present a non-technical discussion of the concept in plain language that should be accessible to readers with a basic level of statistical understanding. This paper will be helpful for those who wish to be better informed about variable selection in prediction modelling, to have more meaningful conversations with biostatisticians/data analysts about their projects, or to select an appropriate method for variable selection in model building using the information provided in this paper. Our intention is to provide readers with a basic understanding of this extremely important topic to assist them when developing a prediction model.

    BASIC PRINCIPLES OF VARIABLE SELECTION IN CLINICAL PREDICTION MODELLING

    The concept of variable selection

    Variable selection means choosing which variables to include in a particular model, that is, selecting appropriate variables from a complete list by removing those that are irrelevant or redundant.[1] The purpose of such selection is to determine a set of variables that will provide the best fit for the model so that accurate predictions can be made. Variable selection is one of the most difficult aspects of model building. It is often advised that variable selection should be focused more on clinical knowledge and previous literature than on statistical selection methods alone.[2] Data often contain many additional variables that are not ultimately used in model development.[3] Selection of appropriate variables should be undertaken carefully to avoid including noise variables in the final model.

    Importance of variable selection

    Due to rapid digitalisation, big data (a term frequently used to describe a collection of data that is extremely large in size, is complex and continues to grow exponentially with time) have emerged in healthcare and become a critical source of the data that have helped conceptualise precision public health and precision medicine approaches. At its simplest level, precision health involves applying appropriate statistical modelling based on available clinical and biological data to predict patient outcomes more accurately. Big data sets contain thousands of variables, which makes them difficult to handle and manage efficiently using traditional approaches. Consequently, variable selection has become the focus of much research in different areas, including health. Variable selection offers many benefits, such as improving the performance of models in terms of prediction, delivering variables more quickly and cost-effectively by reducing training and utilisation time, facilitating data visualisation and offering an overall better understanding of the underlying process that generated the data.[4]

    There are many reasons why variables should be selected, including practicality issues. It is not practical to use a large set of variables in a model. Information involving a large number of variables may not be available for all patients or may be costly to collect. Some variables also may have a negligible effect on the outcome and can therefore be excluded. Having fewer variables in the model means less computational time and complexity.[5] According to the principle of parsimony, simple models with fewer variables are preferred over complex models with many variables. Many variables in the model make the model more dependent on the observed data.[6] Simple models are easier to interpret, generalise and use in practice.[7] However, one needs to ensure that important variables are not excluded from the simple model.

    There is no set rule as to the number of variables to include in a prediction model, as it often depends on several factors. The ‘one in ten rule’, a rule that stipulates how many variables/parameters can be estimated from a data set, is quite popular in traditional clinical prediction modelling strategies (eg, logistic regression and survival models). According to this rule, one variable can be considered in a model for every 10 events.[8, 9] To illustrate, if information for 500 patients is available in a data set and 40 patients die (events) during the study/follow-up period, then in predicting mortality the ‘one in ten rule’ implies that four variables can be considered reliably in the model to give a good fit. Other rules also exist, such as the ‘one in twenty rule’,[10] the ‘one in fifty rule’[11] or the ‘five to nine events per variable rule’,[12] depending on the research question(s). Peduzzi et al[9, 13] suggested 10-15 events per variable for logistic and survival models to produce reasonably stable estimates. While there are many different rules, these rules are only approximations, and there are situations where fewer or more observations than suggested are needed.[14] If more variables are included in a prediction model than the sample data can support, the issue of overfitting (achieving overly optimistic results that do not really exist in the population, and hence failing to replicate the results in another sample) may arise, and prediction outside the training data (the data used to develop the model) will not be useful. Having too many variables (with respect to the number of observations in the data set) in a model will produce relations between the variables and the outcome that exist only in that particular data set and not in the true population, and the power (the probability of detecting an effect when it truly exists) to detect the true relationships will be reduced.[14] Including too many variables in a model may deliver results that appear important but may not be so in the true population context.[14] There are examples where prediction models developed using too many candidate variables in a small data set perform poorly when applied to an external data set.[15, 16]
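
    To make the arithmetic concrete, the short sketch below (illustrative only, not from the original paper) encodes the events-per-variable rules in Python; the numbers mirror the worked example above.

    ```python
    # Events-per-variable (EPV) rules of thumb for candidate variable counts.
    def max_candidate_variables(n_events: int, events_per_variable: int = 10) -> int:
        """How many variables the 'one in N rule' supports for a given event count."""
        return n_events // events_per_variable

    n_events = 40  # eg, 40 deaths observed among 500 patients
    print(max_candidate_variables(n_events, 10))  # 'one in ten rule'    -> 4
    print(max_candidate_variables(n_events, 20))  # 'one in twenty rule' -> 2
    print(max_candidate_variables(n_events, 50))  # 'one in fifty rule'  -> 0
    ```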

    Existing theory and literature, as well as experience and clinical knowledge, provide a general idea as to which candidate variables should be considered for inclusion in a prediction model. Nevertheless, the actual variables used in the final prediction model should be determined by analysing the data. Determining the set of variables for the final model is called variable selection. Variable selection serves two purposes. First, it helps determine all of the variables that are related to the outcome, which makes the model complete and accurate. Second, it helps select a model with few variables by eliminating irrelevant variables that decrease the precision and increase the complexity of the model. Ultimately, variable selection provides a balance between simplicity and fit. Figure 1 describes the steps to follow in variable selection during model building.

    Variable reduction strategies

    One way to restrict the list of potential variables is to choose the candidate variables first, particularly if the sample is small. Candidate variables for a specific topic are those that have demonstrated previous prognostic performance with the outcome.[17] Candidate variables for a specific topic can be selected based on subject matter knowledge before a study begins. This can be achieved by reviewing the existing literature on the topic and consulting with experts in the area.[7] In addition, systematic reviews and meta-analyses can be performed to identify candidate variables. With respect to systematic reviews, counting the number of times a variable was found important/significant in the different studies has been shown to be helpful in identifying candidate variables.[7]

    Figure 1 Variable selection steps. AIC, Akaike information criterion; BIC, Bayesian information criterion.

    Grouping/combining similar, related variables based on subject knowledge and statistical technique can also help restrict the number of variables. If variables are strongly correlated, combining them into a single variable has been considered prudent.[7] For example, systolic blood pressure and diastolic blood pressure are strongly correlated. In choosing between the two, mean blood pressure may be a better option than selecting either one of them individually.[7] However, it has also been argued that variables that are highly correlated should be excluded a priori, as they provide little independent information.[17, 18] Removing a correlated variable should not affect the performance of the model, as it measures the same underlying information as the variable to which it correlates.[5] Ultimately, both combining correlated variables and excluding them beforehand help restrict the number of variables.
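
    As a minimal illustration of this idea (hypothetical data, not from the paper), the Python sketch below checks the correlation between systolic and diastolic blood pressure and, when they are strongly correlated, replaces them with a single combined variable:

    ```python
    import pandas as pd

    # Hypothetical blood pressure readings for four patients.
    df = pd.DataFrame({"sbp": [120, 135, 148, 150], "dbp": [80, 88, 95, 96]})

    # If the two variables are strongly correlated, combine them into one.
    if df["sbp"].corr(df["dbp"]) > 0.8:
        df["mean_bp"] = df[["sbp", "dbp"]].mean(axis=1)
        df = df.drop(columns=["sbp", "dbp"])

    print(df.columns.tolist())  # ['mean_bp'] -- one variable instead of two
    ```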

    How variables are distributed can also provide an indication of which ones to restrict. Variables that have a large number of missing values can be excluded, because imputing a large number of missing values will be suspicious to many readers due to the lack of reliable estimation, a problem that may recur in applications of the model.[7, 17] Often, 5-20 candidate variables are sufficient to build an adequate prediction model.[7] Nevertheless, care must be taken in restricting variables, as one drawback is that certain variables and their effects may be excluded from the prediction model.
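
    One simple way to apply this restriction in practice (an illustrative sketch with a hypothetical data frame, not a prescription from the paper) is to drop candidate variables whose fraction of missing values exceeds a chosen threshold:

    ```python
    import pandas as pd

    # Hypothetical candidate variables; 'family_history' is mostly missing.
    df = pd.DataFrame({
        "sbp": [120, 135, None, 150],
        "family_history": [None, None, None, 1],
        "bmi": [24.1, 30.2, 27.5, 22.8],
    })

    missing_fraction = df.isna().mean()  # fraction missing per column
    keep = missing_fraction[missing_fraction <= 0.25].index.tolist()
    print(keep)  # ['sbp', 'bmi'] -- 'family_history' is excluded
    ```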

    Variable selection methods

    Once the number of potential candidate variables has been identified from the list of all available variables in the data set, a further selection of variables is made for inclusion in the final model. There are different ways of selecting variables for a final model; however, there is no consensus on which method is best.[17] There are recommendations that all candidate variables should be included in the model, an approach called the full model approach.[17] A model developed using the full model approach has advantages: the problem of selection bias is absent, and the SEs and p values of the variables are correct.[17] However, due to practical reasons and the difficulties involved in defining a full model, it often is not possible to consider the full model approach.[17]

    It has also been suggested that variable selection should start with the univariate analysis of each variable.[6] Variables that show significance (p<0.25) in the univariate analysis, as well as those that are clinically important, should be included in the multivariate analysis.[6] Nevertheless, univariate analysis ignores the fact that individual variables that are weakly associated with the outcome can contribute significantly when they are combined.[6] This issue can be solved partially by setting a higher significance level to allow more variables to show significance in the univariate analysis.[6] In general, when there are many candidate variables available and there is confusion or uncertainty regarding which variables to consider in the final model development, formal variable selection methods should be followed. Outlined below are four major variable selection methods, backward elimination, forward selection, stepwise selection and all possible subset selection, along with a discussion of their pros and cons.
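
    A minimal sketch of such a univariate screen is given below (simulated data; statsmodels is assumed for the logistic fits; the p<0.25 threshold follows the recommendation above):

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200
    candidates = {"age": rng.normal(50, 10, n),
                  "bmi": rng.normal(27, 4, n),
                  "noise": rng.normal(0, 1, n)}
    # Simulated binary outcome that depends weakly on age only.
    p = 1 / (1 + np.exp(-0.05 * (candidates["age"] - 50)))
    y = (rng.random(n) < p).astype(int)

    selected = []
    for name, x in candidates.items():
        fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)  # one variable at a time
        if fit.pvalues[1] < 0.25:                          # liberal screening threshold
            selected.append(name)
    print(selected)
    ```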

    Backward elimination

    Backward elimination is the simplest of all variable selection methods. This method starts with a full model that considers all of the variables to be included in the model. Variables are then deleted from the full model one by one until all remaining variables are considered to have some significant contribution to the outcome.[1] The variable with the smallest test statistic (a measure of the variable’s contribution to the model) below the cut-off value, or with the highest p value above the cut-off value (ie, the least significant variable), is deleted first. The model is then refitted without the deleted variable, and the test statistics or p values are recomputed. Again, the variable with the smallest test statistic or with the highest p value above the cut-off value is deleted from the refitted model. This process is repeated until every remaining variable is significant at the cut-off value. The cut-off value associated with the p value is sometimes referred to as ‘p-to-remove’ and does not have to be set at 0.05.
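
    A minimal Python sketch of this procedure is shown below (illustrative, not from the paper), assuming a pandas DataFrame X of candidate variables, a binary outcome y and statsmodels for the logistic fits; the default p-to-remove follows the liberal 0.15-0.20 range discussed later:

    ```python
    import pandas as pd
    import statsmodels.api as sm

    def backward_eliminate(X: pd.DataFrame, y, p_to_remove: float = 0.20) -> list:
        """Start from the full model; repeatedly drop the least significant variable."""
        variables = list(X.columns)
        while variables:
            fit = sm.Logit(y, sm.add_constant(X[variables])).fit(disp=0)
            pvals = fit.pvalues.drop("const")   # ignore the intercept
            worst = pvals.idxmax()              # least significant remaining variable
            if pvals[worst] <= p_to_remove:
                break                           # everything left is significant
            variables.remove(worst)             # drop it and refit
        return variables
    ```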

    Kshirsagar et al[19] developed a hypertension prediction model for middle-aged and older adults using data from two community-based cohorts in the USA. The purpose of the study was to develop a simple prediction model/score with easy and routinely available variables. The model was developed using 7610 participants and eight variables (age, level of systolic and diastolic blood pressure, smoking, family history of hypertension, diabetes mellitus, female sex, high body mass index (BMI), lack of exercise). Candidate variables were selected based on the scientific literature and numeric evidence. One of the data sets did not have information on a specific variable (family history of hypertension) used in the final model. Values for this variable were imputed; however, this approach is not ideal and often not recommended,[7] as imputing a large number of missing values can raise questions about the acceptability and accuracy of the outcome. The study applied a backward elimination variable selection technique to select variables for the final model, with a conventional p value threshold of 0.05. The study found that some important variables did not contribute independently to the outcome following multivariate adjustment. Setting a higher threshold for the p value and giving priority to clinical reasoning in selecting variables, along with statistical significance, perhaps would have allowed more important variables to enter the model.

    While a set of variables can have significant predictive ability, a particular subset of them may not. Unfortunately, neither forward selection nor stepwise selection has the capacity to identify individually less predictive variables that may never enter the model and therefore never demonstrate their joint behaviour. Backward elimination, by contrast, has the advantage of assessing the joint predictive ability of variables, as the process starts with all variables included in the model. Backward elimination also removes the least important variables early on and leaves only the most important variables in the model. One disadvantage of the backward elimination method is that once a variable is eliminated from the model it is not re-entered. However, a dropped variable may become significant later in the final model.

    Forward selection

    The forward selection method of variable selection is the reverse of the backward elimination method. The method starts with no variables in the model and then adds variables one by one until no variable outside the model can add any significant contribution to the outcome.[1] At each step, each variable excluded from the model is tested for inclusion. For each excluded variable, the test statistic or p value it would have if added to the model is calculated. The variable with the largest test statistic above the cut-off value, or the lowest p value below the cut-off value, is selected and added to the model. In other words, the most significant variable is added first. The model is then refitted with this variable, and the test statistics or p values are recomputed for all remaining variables. Again, the variable with the largest test statistic above the cut-off value or the lowest p value below the cut-off value is chosen from among the remaining variables and added to the model. This process continues until no remaining variable is significant at the cut-off level when added to the model. In forward selection, once a variable is added to the model, it remains there.[1]
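
    Under the same illustrative assumptions as the backward elimination sketch (pandas DataFrame X, binary outcome y, statsmodels fits), a forward counterpart looks like this: at each step the excluded variable with the smallest entry p value is added, and once added it stays.

    ```python
    import pandas as pd
    import statsmodels.api as sm

    def forward_select(X: pd.DataFrame, y, p_to_enter: float = 0.20) -> list:
        """Start empty; repeatedly add the most significant excluded variable."""
        selected, remaining = [], list(X.columns)
        while remaining:
            # p value each excluded variable would have if added now
            pvals = {}
            for var in remaining:
                fit = sm.Logit(y, sm.add_constant(X[selected + [var]])).fit(disp=0)
                pvals[var] = fit.pvalues[var]
            best = min(pvals, key=pvals.get)    # most significant candidate
            if pvals[best] >= p_to_enter:
                break                           # no excluded variable qualifies
            selected.append(best)               # once added, it remains
            remaining.remove(best)
        return selected
    ```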

    Dang et al[20] developed a predictive model (BariWound) for incisional surgical site infections (SSI) within 30 days of bariatric surgery. The objective was to construct a clinically useful prediction model to stratify individuals into different risk groups (eg, very high, high, medium and low). A clinically rich database, the Metabolic and Bariatric Surgery Accreditation and Quality Improvement Program, was used to develop the prediction model. An initial univariate screen was performed to identify baseline variables that were significantly associated (p<0.05) with the outcome, 30-day SSI. Variables were then checked further for clinical relevance to the outcome. Finally, a forward selection procedure (p<0.01) was applied among the variables selected in the univariate screen to build the prediction model. A total of nine variables (procedure type, chronic steroid or immunosuppressant use, gastro-oesophageal reflux disease, obstructive sleep apnoea, sex, type 2 diabetes, hypertension, operative time and BMI) identified through forward selection were included in the final model. As mentioned earlier, a p value threshold of 0.05 in univariate screening and of 0.01 in forward selection is a concern, as it creates the chance of missing some important variables in the model.

    One advantage of forward selection is that it starts with smaller models. Also, this procedure is less susceptible to collinearity (very high intercorrelations or interassociations among independent variables). Like backward elimination, forward selection also has drawbacks. In forward selection, inclusion of a new variable may make an existing variable in the model non-significant; however, the existing variable cannot be deleted from the model. A balance between backward elimination and forward selection is therefore required, which can be achieved in stepwise selection.

    Stepwise selection

    Stepwise selection methods are a widely used variable selection technique, particularly in medical applications. This method is a combination of the forward and backward selection procedures that allows moving in both directions, adding and removing variables at different steps. The process can start with either a backward elimination or a forward selection approach. For example, if stepwise selection starts with forward selection, variables are added to the model one at a time based on statistical significance. At each step, after a variable is added, the procedure checks all the variables already added to the model and deletes any variable that is not significant in the model. The process continues until every variable in the model is significant and every excluded variable is insignificant. Because of this similarity, the approach is sometimes considered a modified forward selection. However, it differs from forward selection in that variables entered into the model do not necessarily remain in the model. If stepwise selection starts with backward elimination instead, variables are deleted from the full model based on statistical significance and then added back if they later appear significant. The process is a rotation of choosing the least significant variable to drop from the model and then reconsidering all dropped variables to re-enter the model. Stepwise selection requires two separate significance levels (cut-offs) for adding and deleting variables from the model. The significance level for adding variables should be smaller than the significance level for deleting variables so that the procedure does not get into an infinite loop. Within stepwise selection, backward elimination is often given preference, as the full model is considered and the effect of all candidate variables is assessed.[7]
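
    A sketch of the forward-starting variant described above, under the same illustrative assumptions as the earlier sketches; note the entry threshold is stricter than the removal threshold, which prevents the infinite loop mentioned in the text:

    ```python
    import pandas as pd
    import statsmodels.api as sm

    def _fit(X, y, cols):
        return sm.Logit(y, sm.add_constant(X[cols])).fit(disp=0)

    def stepwise_select(X: pd.DataFrame, y, p_enter: float = 0.15,
                        p_remove: float = 0.20) -> list:
        """Forward steps with a backward check after every addition."""
        selected: list = []
        while True:
            remaining = [v for v in X.columns if v not in selected]
            entry = {v: _fit(X, y, selected + [v]).pvalues[v] for v in remaining}
            if not entry or min(entry.values()) >= p_enter:
                return selected                   # no candidate can enter
            selected.append(min(entry, key=entry.get))
            # backward step: drop any variable that is no longer significant
            while selected:
                pvals = _fit(X, y, selected).pvalues.drop("const")
                if pvals.max() <= p_remove:
                    break
                selected.remove(pvals.idxmax())
    ```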

    Chien et al[21] developed a new prediction model for hypertension risk in the Chinese population. A prospective cohort of 2506 ethnic Chinese community individuals in Taiwan was used to develop the model. Two different models, a clinical model with five variables and a biochemical model with eight variables, were developed. The objective was to identify community individuals at high risk of hypertension using the newly developed model. The variables for the model were selected using the stepwise selection method, the most common method for variable selection, which permits using both forward and backward procedures iteratively in model building. Generally, to apply a stepwise selection procedure, a set of candidate variables needs to be identified first. However, information about the candidate variables and the number of variables considered in stepwise selection was absent in this study. Although it was indicated that the selected variables were statistically associated with the risk of hypertension, without a discussion of the potential candidate variables, how variables were selected and how many were included in the model, the reader is left uninformed about the variable selection process, which raises concern about the reliability of the finally selected variables. Moreover, setting a higher significance level is strongly recommended in stepwise selection to allow more variables to be included in the model. A significance level of only 0.05 was used in this study, and that cut-off value can sometimes miss important variables in the model. This likely happened in this study, as an important variable, ‘gender’, was forcefully entered into the biochemical model even though it did not appear significant at the 0.05 level. Alternatively, the study could have used the Akaike information criterion (AIC) or Bayesian information criterion (BIC) (discussed later), which often provide the most parsimonious model.

    The stepwise selection method is perhaps the most widely used method of variable selection. One reason is that it is easy to apply in statistical software.[7] This method allows researchers to examine models with different combinations of variables that otherwise may be overlooked.[6] The method is also comparatively objective, as the same variables are generally selected from the same data set even when different persons conduct the analysis. This helps reproduce the results and validate the model.[7] There are also disadvantages to using the stepwise selection method. There is instability of variable selection if a different sample is used; however, a large effective sample size (50 events per variable) can help overcome this issue.[6] The p values obtained by this method are also in doubt, as so many multiple tests occur during the selection process. If there are too many candidate variables, the method fails to provide the best model, as some irrelevant variables are entered into the model.[16] The regression coefficients obtained by this method are also biased. The method also prevents researchers from thinking about the problem.[1] There is also criticism that stepwise and other automated variable selection processes can generate biologically implausible models.[6] Collinearity is often considered a serious issue in stepwise variable selection. Variables that best describe a particular data set are chosen by the stepwise procedure due to their high-magnitude coefficients for that data set, not necessarily for the underlying population. If there are two highly correlated variables that contribute equally to the outcome, there is a good chance that both correlated variables will be left out of the model in stepwise selection if they are individually less significant than other non-correlated variables. Conversely, if one of the two correlated variables contributes substantially better to the outcome for a particular data set and thus appears in the model, the estimate of its coefficient can be much higher in magnitude than its true population value. Additionally, potentially valuable information from its correlated variable can be lost and the results become less generalisable.

    All possible subset selection

    In all possible subset selection, every possible combination of variables is checked to determine the best subset of variables for the prediction model. With this procedure, all one-variable, two-variable, three-variable models, and so on, are built to determine which one is best according to some specific criteria. If there are K variables, then there are 2^K possible models that can be built.

    Holden et al[22] developed a model to identify the variables (ie, which combination of perceptions) that best predict bar-coded medication administration (BCMA) acceptance (intention to use, satisfaction), using cross-sectional survey data among registered nurses in the Midwest United States. An all possible subset selection procedure was used to identify combinations of variables to model BCMA acceptance most efficiently. Two different models were constructed. In model 1, the outcome of acceptance was nurses’ behavioural intention to use BCMA, while in model 2, the outcome of acceptance was nurses’ satisfaction with BCMA. A set of nine theory-based candidate variables (seven perception and two demographic) was assessed for inclusion in the models. To determine the optimal set of variables for the models, the investigators assessed every combination of the models generated by an all possible subset selection procedure using five different measures. After comparing the various models according to the five different measures, the best model was selected. Application of an all possible subset selection procedure was feasible here due to the small number of candidate variables.

    The ability to identify a combination of variables, which is not available in other selection procedures, is an advantage of this method.[7] Among the disadvantages, computing can be an issue in an all subset selection procedure, as the number of possible subsets can be huge and many models can be produced, particularly when the number of variables is large. In addition, an all possible subset selection procedure can produce models that are too small[23] or overfitted due to examining many models with multiple testing.[7] Further, a selection criterion needs to be specified in advance.
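
    Under the same illustrative assumptions as the earlier sketches, all possible subset selection can be expressed in a few lines; here AIC (one of the stopping rules discussed in the next section) is the pre-specified criterion used to rank the 2^K candidate models:

    ```python
    from itertools import combinations
    import pandas as pd
    import statsmodels.api as sm

    def best_subset(X: pd.DataFrame, y):
        """Fit every non-empty subset of candidate variables; rank them by AIC."""
        results = []
        for k in range(1, len(X.columns) + 1):
            for subset in combinations(X.columns, k):
                fit = sm.Logit(y, sm.add_constant(X[list(subset)])).fit(disp=0)
                results.append((fit.aic, subset))
        return min(results)  # (lowest AIC, best variable subset)
    ```

    Note the cost: 10 candidate variables already mean 2^10 - 1 = 1023 model fits, which is why this approach is reserved for small candidate lists.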

    Stopping rule/selection criteria in variable selection

    In all stepwise selection methods, including all subset selection, a stopping rule or selection criterion for the inclusion or exclusion of variables needs to be set. Generally, a standard significance level for hypothesis testing is used.[7] However, other criteria are also frequently used as a stopping rule, such as the AIC, BIC or Mallows’ Cp statistic. We discuss these major selection criteria below.

    P values

    If the stopping rule is based on p values, the traditional choice for the significance level is 0.05 or 0.10. However, the optimum significance level for deciding which variables to include in the model has been suggested to be 1, which exceeds the traditional choices.[18] This suggestion assumes the absence of both a few strong variables and completely irrelevant variables in the data.[18] In reality, some strong and some irrelevant variables always exist for the outcome. In such a situation, a significance level of 0.50 is proposed, which allows some variables to exit during the selection process.[18] There is also a strong recommendation for using a p value in the range of 0.15-0.20,[6] although using a higher significance level has the disadvantage that some unimportant variables may be included in the model.[6] However, we believe a higher significance level for variable selection should be considered so that important variables relevant to the outcome are not missed, and so that less significant variables that may have practical and clinical reasoning are not deleted.

    Akaike information criterion

    AIC is a tool for model selection that compares different models. Including different variables in the model provides different models, and AIC attempts to select the model by balancing underfitting (too few variables in the model) and overfitting (too many variables in the model).[24] Including too few variables often fails to capture the true relation, and too many variables create a generalisability problem.[25] A trade-off is therefore required between simplicity and adequacy of model fitting, and AIC can help achieve this.[26] A model cannot precisely represent the true relation that exists in the data, as there is some information loss in estimating the true relation through modelling. AIC tries to estimate that relative information loss compared with other candidate models. The quality of a model is considered better when the information loss is smaller, and it is important to select the model that minimises that loss. Candidate models for the specific data are ranked from best to worst according to the value of AIC.[24] Among the available models for the specific data, the model with the minimum AIC is best.[26]
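
    For reference, the standard definition (not spelled out in the text above) is AIC = 2k − 2ln(L̂), where k is the number of estimated parameters and L̂ is the maximised likelihood of the model: the second term rewards goodness of fit, while the first penalises complexity, so adding a variable lowers AIC only if it improves the log-likelihood by more than one unit.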

    AIC only provides information about the quality of a model relative to the other models and does not provide information on the absolute quality of the model. With a small sample size (relative to the number of variables/parameters), AIC often provides models with too many variables. However, this issue can be solved with a modified version of AIC called AICc, which introduces an extra penalty term for the number of variables/parameters. For a large sample size, this extra penalty term approaches zero and AICc subsequently converges to AIC, which is why it is suggested that AICc be used in practice.[24]
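
    The usual small-sample correction (a standard result, included here for reference) is AICc = AIC + 2k(k+1)/(n − k − 1), where n is the sample size; as n grows relative to k, the correction term shrinks towards zero, which is the convergence to AIC described above.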

    Bayesian information criterion

    BIC is another variable selection criterion that is similar to AIC, but with a different penalty for the number of variables (parameters) included in the model. Like AIC, BIC also balances simplicity and goodness of model fit. In practice, for a given data set, BIC is calculated for each of the candidate models, and the model corresponding to the minimum BIC value is chosen. BIC often chooses models that are more parsimonious than those chosen by AIC, as BIC penalises bigger models more due to the larger penalty term inherent in its formula.[27]
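
    For reference, BIC = k ln(n) − 2ln(L̂), with k, n and L̂ as defined for AIC above. Since ln(n) exceeds 2 once n > e² ≈ 7.4, BIC's per-parameter penalty is larger than AIC's in virtually any realistic sample, which is why it tends to select the more parsimonious model.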

    Although there are similarities between AIC and BIC, and both criteria balance simplicity and model fit, differences exist between them. The underlying theory behind AIC is that the data stem from a very complex model, there are many candidate models to fit the data, and none of the candidate models (including the best model) is the exact functional form of the true model.[25] In addition, the number of variables (parameters) in the best model may not include all variables (parameters) in the true model.[25] In other words, a best model is only an approximation of the true model, and a true model that perfectly represents reality does not exist.[24] Conversely, the underlying theory behind BIC is that the data are derived from a simple model and there exists a candidate model that represents the true model.[25] Depending on the situation, however, each criterion has an advantage over the other. There are many studies that have compared AIC and BIC and recommended which one to use. If the objective is to select the best model that will provide maximum predictive accuracy, then AIC is superior (because there is no true model, and the best model is selected to maximise predictive accuracy and represent an approximate true relation). However, if the goal is to select a correct model that is consistent, then BIC is superior (because BIC consistently selects the correct model from among the candidate models that best represents the true model).[25] For large data sets, the performance of both criteria improves, but with different objectives.[25]

    Mallows’ Cp statistic

    Mallows’ Cp statistic is another criterion used in variable selection. The purpose of the statistic is to select the best model using a subset of variables from all available variables. This criterion is most widely used in the all subset selection method. Different models derived in all subset selection are compared based on Mallows’ Cp statistic, and the model whose Cp value is lowest and closest to the number of variables plus the constant is often chosen. A small Mallows’ Cp value near the number of variables indicates that the model is relatively more precise than other models (small variance and less bias).[28]
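
    For reference, a common form of the statistic is Cp = SSEp/σ̂² − n + 2p, where SSEp is the residual sum of squares of the candidate model with p parameters, σ̂² is the mean squared error of the full model and n is the sample size; for a model with negligible bias, Cp is expected to be close to p, which is why models with Cp near the number of parameters are preferred.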

    CONCLUSION

    It is extremely important to include appropriate variables in prediction modelling, as a model’s performance largely depends on which variables are ultimately included. Failure to include the proper variables in the model produces inaccurate results, and the model will fail to capture the true relation that exists in the data between the outcome and the selected variables. There are numerous occasions when prediction models are developed without following the proper steps or adopting the proper methods of variable selection. Researchers need to be more aware of and cautious about these very important aspects of prediction modelling.

    Twitter: Tanvir C Turin @drturin

    Contributors: TCT and MZIC developed the study idea. MZIC prepared the manuscript with critical intellectual inputs from TCT. The manuscript was finalised by MZIC and TCT.

    Funding: The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

    Competing interests: None declared.

    Patient consent for publication: Not required.

    Provenance and peer review: Not commissioned; externally peer reviewed.

    Data availability statement: There are no data in this work.

    Open access: This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made are indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
