
    Kappa coefficient: a popular measure of rater agreement


Biostatistics in psychiatry (25)


    Wan TANG1*, Jun HU2, Hui ZHANG3, Pan WU4, Hua HE1,5

Keywords: interrater agreement; kappa coefficient; weighted kappa; correlation

    1. Introduction

For most physical illnesses, such as high blood pressure and tuberculosis, definitive diagnoses can be made using medical devices such as a sphygmomanometer for blood pressure or an X-ray for tuberculosis. However, there are no error-free gold standard physical indicators of mental disorders, so the diagnosis and severity of mental disorders typically depend on the use of instruments (questionnaires) that attempt to measure latent multi-faceted constructs. For example, psychiatric diagnoses are often based on criteria specified in the Fourth Edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV),[1] published by the American Psychiatric Association. But different clinicians may have different opinions about the presence or absence of the specific symptoms required to determine the presence of a diagnosis, so there is typically no perfect agreement between evaluators. In this situation, statistical methods are needed to address variability in clinicians’ ratings.

Cohen’s kappa is a widely used index for assessing agreement between raters.[2] Although similar in appearance, agreement is a fundamentally different concept from correlation. To illustrate, consider an instrument with six items and suppose that two raters’ ratings of the six items on a single subject are (3,5), (4,6), (5,7), (6,8), (7,9) and (8,10). Although the scores of the two raters are quite different, the Pearson correlation coefficient for the two sets of scores is 1, indicating perfect correlation. The paradox occurs because there is a bias in the scoring that results in a consistent difference of 2 points between the two raters’ scores on all 6 items of the instrument. Thus, although the ratings are perfectly correlated (precision), the agreement between the two raters is quite poor. The kappa index, the most popular measure of rater agreement, resolves this problem by assessing both the bias and the precision of the raters’ ratings.
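This paradox is easy to verify with software. The following minimal SAS sketch (the data set name ratings and the variable names rater1 and rater2 are hypothetical) reads the six pairs of ratings above and shows that the Pearson correlation is exactly 1, even though the two raters never assign the same score.

* Hypothetical data set holding the six pairs of ratings above;
data ratings;
   input rater1 rater2;
   datalines;
3 5
4 6
5 7
6 8
7 9
8 10
;
run;

* The Pearson correlation between rater1 and rater2 is exactly 1,
  even though the two raters never assign the same score;
proc corr data=ratings;
   var rater1 rater2;
run;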

In addition to its applications to psychiatric diagnosis, the concept of agreement is also widely applied to assess the utility of diagnostic and screening tests. Diagnostic tests provide information about a patient’s condition that clinicians often use when making decisions about the management of patients. Early detection of disease or of important changes in the clinical status of patients often leads to less suffering and quicker recovery, but false negative and false positive screening results can result in delayed treatment or in inappropriate treatment. Thus, when a new diagnostic or screening test is developed, it is critical to assess its accuracy by comparing test results with those from a gold or reference standard. When assessing such tests, it is incorrect to measure the correlation between the test results and the gold standard; the correct procedure is to assess the agreement of the test results with the gold standard.

    2. Problems

Consider an instrument with a binary outcome, with ‘1’ representing the presence of depression and ‘0’ representing the absence of depression. Suppose two independent raters apply the instrument to a random sample of n subjects. Let x_i and y_i denote the ratings of the i-th subject by the first and second rater, respectively, for i=1,2,...,n. We are interested in the degree of agreement between the two raters. Since the ratings are on the same two-level scale for both raters, the data can be summarized in a 2×2 contingency table.

To illustrate, Table 1 shows the results of a study assessing the prevalence of depression among 200 patients treated in a primary care setting using two methods to determine the presence of depression:[3] one based on information provided by the individual (i.e., the proband) and the other based on information provided by another informant (e.g., the subject’s family member or close friend) about the proband. Intuitively, we might think that the proportion of cases in which the two ratings are the same (in this example, 34.5% [(19+50)/200]) would be a reasonable measure of agreement. The problem with this proportion is that it is almost always positive, even when the ratings by the two methods are completely random and independent of each other. So the proportion of overall agreement by itself does not indicate whether or not two raters or two methods of rating are in agreement.

    Table 1. Diagnosis of depression among 200 primary care patients based on information provided by the proband and by other informants about the proband

For example, suppose that two raters with no training or experience in diagnosing depression randomly decide whether or not each of the 200 patients has depression. Assume that one rater makes a positive diagnosis (i.e., considers depression present) 80% of the time and the other gives a positive diagnosis 90% of the time. Under the assumption that their diagnoses are made independently of each other, Table 2 represents the joint distribution of their ratings. The proportion of cases in which the two raters give the same diagnosis is 74% (i.e., 0.72+0.02), suggesting that the two raters are doing a good job of diagnosing the presence of depression. But this level of agreement is purely due to chance; it does not reflect the actual degree of agreement between the two raters. This hypothetical example shows that the proportion of cases in which two raters give the same ratings on an instrument is inflated by chance agreement. This chance agreement must be removed in order to provide a valid measure of agreement. Cohen’s kappa coefficient is used to assess the level of agreement beyond chance agreement.
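Written out, the chance agreement in this hypothetical example is the sum of the probability that both raters give a positive diagnosis and the probability that both give a negative diagnosis:

\[
0.90 \times 0.80 \;+\; 0.10 \times 0.20 \;=\; 0.72 + 0.02 \;=\; 0.74 .
\]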

    Table 2. Hypothetical example of proportional distribution of diagnoses by two raters that make diagnoses independently from each other

    3. Kappa for 2×2 tables

Consider a hypothetical example of two raters giving ratings for n subjects on a binary scale, with ‘1’ representing a positive result (e.g., the presence of a diagnosis) and ‘0’ representing a negative result (e.g., the absence of a diagnosis). The results can be reported in a 2×2 contingency table as shown in Table 3. By convention, the results of the first rater are shown in the rows (x values) and the results of the second rater are shown in the columns (y values). Thus, n_ij in the table denotes the number of subjects who receive the rating i from the first rater and the rating j from the second rater. Let Pr(A) denote the probability of event A; then p_ij = Pr(x=i, y=j) represents the proportion of all cases that receive the rating i from the first rater and the rating j from the second rater, p_i+ = Pr(x=i) represents the marginal distribution of the first rater’s ratings, and p_+j = Pr(y=j) represents the marginal distribution of the second rater’s ratings.

    Table 3. A typical 2×2 contingency table to assess agreement of two raters

If the two raters give their ratings independently according to their marginal distributions, the probability that a subject is rated 0 (negative) by both raters by chance is the product of the marginal probabilities p_0+ and p_+0. Likewise, the probability that a subject is rated 1 (positive) by both raters by chance is the product of the marginal probabilities p_1+ and p_+1. The sum of these two probabilities, p_1+*p_+1 + p_0+*p_+0, is the agreement by chance, that is, the source of inflation discussed earlier. After subtracting this source of inflation from the total proportion of cases in which the two raters give identical ratings, p_11 + p_00, we arrive at the agreement corrected for chance, p_11 + p_00 − (p_1+*p_+1 + p_0+*p_+0). In 1960 Cohen[2] recommended normalizing this chance-adjusted agreement, yielding the kappa coefficient (κ):
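Dividing the chance-corrected agreement by its maximum possible value, 1 − (p_1+*p_+1 + p_0+*p_+0), gives equation (1):

\[
\kappa \;=\; \frac{(p_{11}+p_{00}) - (p_{1+}p_{+1} + p_{0+}p_{+0})}{1 - (p_{1+}p_{+1} + p_{0+}p_{+0})} \qquad (1)
\]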

This normalization produces kappa coefficients that vary between −1 and 1, depending on the degree of agreement or disagreement beyond chance. If the two raters completely agree with each other, then p_11 + p_00 = 1 and κ = 1; conversely, if the kappa coefficient is 1, then the two raters agree completely. On the other hand, if the raters rate the subjects in a completely random fashion, then the agreement is completely due to chance, so p_11 = p_1+*p_+1 and p_00 = p_0+*p_+0, which makes p_11 + p_00 − (p_1+*p_+1 + p_0+*p_+0) = 0, and the kappa coefficient is also 0. In general, when rater agreement exceeds chance agreement the kappa coefficient is positive, and when raters disagree more than they agree the kappa coefficient is negative. The magnitude of kappa indicates the degree of agreement or disagreement.

The kappa coefficient can be estimated by substituting sample proportions for the probabilities in equation (1). When the number of ratings given by each rater (i.e., the sample size) is large, the estimated kappa coefficient approximately follows a normal distribution. This asymptotic distribution can be derived using the delta method from the asymptotic distributions of the sample proportions.[4] Based on the asymptotic distribution, confidence intervals and hypothesis tests can be computed. For a sample with 100 or more ratings, this generally provides a good approximation. However, it may not work well for small sample sizes, in which case exact methods may be applied to provide more accurate inference.[4]
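The plug-in estimate simply replaces each cell and marginal probability in equation (1) with the corresponding sample proportion:

\[
\hat{\kappa} \;=\; \frac{(\hat{p}_{11}+\hat{p}_{00}) - (\hat{p}_{1+}\hat{p}_{+1} + \hat{p}_{0+}\hat{p}_{+0})}{1 - (\hat{p}_{1+}\hat{p}_{+1} + \hat{p}_{0+}\hat{p}_{+0})}, \qquad \hat{p}_{ij} = \frac{n_{ij}}{n}.
\]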

Example 1. Assessing the agreement between the diagnosis of depression based on information provided by the proband and the diagnosis based on information provided by other informants (Table 1), the kappa coefficient is computed as follows:

The asymptotic standard error of kappa is estimated as 0.063. This gives a 95% confidence interval for κ of (0.2026, 0.4497). The positive kappa indicates some degree of agreement between diagnoses based on information provided by the proband and diagnoses based on information provided by other informants. However, the level of agreement, though statistically significant, is relatively weak.

In most applications, there is usually more interest in the magnitude of kappa than in its statistical significance. When the sample is relatively large (as in this example), a low kappa representing relatively weak agreement can nevertheless be statistically significant (that is, significantly greater than 0). The degree of beyond-chance agreement has been classified in different ways by different authors, who assigned each descriptive category to somewhat arbitrary cutoff levels of kappa. For example, Landis and Koch[5] proposed that a kappa of 0.21-0.40 be considered ‘fair’ agreement, 0.41-0.60 ‘moderate’ agreement, 0.61-0.80 ‘substantial’ agreement, and 0.81-1.00 ‘almost perfect’ agreement.

    4. Kappa for categorical variables with multiple levels

The kappa coefficient for a binary rating scale can be generalized to rating scales with more than two levels. Suppose there are k nominal categories in the rating scale. For simplicity and without loss of generality, denote the rating levels by 1, 2, ..., k. The ratings from the two raters can be summarized in a k×k contingency table, as shown in Table 4. In the table, n_ij, p_ij, p_i+, and p_+j have the same interpretations as in the 2×2 contingency table (above), except that the range of the scale is extended to i, j = 1, ..., k. As in the binary example, we first compute the agreement by chance (the sum of the products of the k pairs of marginal probabilities, ∑ p_i+*p_+i for i=1,...,k) and subtract this chance agreement from the total observed agreement (the sum of the diagonal probabilities, ∑ p_ii for i=1,...,k) before estimating the normalized agreement beyond chance:
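In this notation, the kappa coefficient for a k-level scale, equation (2), is:

\[
\kappa \;=\; \frac{\sum_{i=1}^{k} p_{ii} \;-\; \sum_{i=1}^{k} p_{i+}p_{+i}}{1 \;-\; \sum_{i=1}^{k} p_{i+}p_{+i}} \qquad (2)
\]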

Table 4. Model k×k contingency table to assess agreement about k categories by two different raters

As in the case of binary scales, the kappa coefficient varies between −1 and 1, depending on the extent of agreement or disagreement. If the two raters completely agree with each other (∑ p_ii = 1), then the kappa coefficient is equal to 1. If the raters rate the subjects at random, then the total agreement is equal to the chance agreement (∑ p_ii = ∑ p_i+*p_+i), so the kappa coefficient is 0. In general, the kappa coefficient is positive if there is agreement beyond chance and negative if there is disagreement, with the magnitude of kappa indicating the degree of such agreement or disagreement between the raters. The kappa index in equation (2) is estimated by replacing the probabilities with the corresponding sample proportions. As in the case of binary scales, asymptotic theory and exact methods can be used to construct confidence intervals and make inferences.

    5. Kappa for ordinal or ranked variables

The definition of the kappa coefficient in equation (2) treats the rating categories as unordered (nominal) categories. If, however, the rated categories are ordered or ranked (for example, a Likert scale with categories such as ‘strongly disagree’, ‘disagree’, ‘neutral’, ‘agree’, and ‘strongly agree’), then a weighted kappa coefficient is computed that takes into consideration the different levels of disagreement between categories. For example, if one rater ‘strongly disagrees’ and another ‘strongly agrees’, this must be considered a greater level of disagreement than when one rater ‘agrees’ and another ‘strongly agrees’.

The first step in computing a weighted kappa is to assign a weight representing the level of agreement to each cell of the k×k contingency table. The weights in the diagonal cells are all 1 (i.e., w_ii = 1 for all i), and the weights in the off-diagonal cells range from 0 up to, but not including, 1 (i.e., 0 ≤ w_ij < 1 for all i ≠ j). These weights are then incorporated into equation (2) to generate a weighted kappa that accounts for varying degrees of agreement or disagreement between the ranked categories:
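In its standard form (the form that reduces to equation (2) as described below), the weighted kappa replaces both the observed agreement and the chance agreement with their weighted counterparts:

\[
\kappa_w \;=\; \frac{\sum_{i=1}^{k}\sum_{j=1}^{k} w_{ij}\,p_{ij} \;-\; \sum_{i=1}^{k}\sum_{j=1}^{k} w_{ij}\,p_{i+}p_{+j}}{1 \;-\; \sum_{i=1}^{k}\sum_{j=1}^{k} w_{ij}\,p_{i+}p_{+j}}
\]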

The weighted kappa is estimated by replacing the probabilities p_ij, p_i+, and p_+j with their respective sample proportions. If w_ij = 0 for all i ≠ j, the weighted kappa coefficient κ_w reduces to the standard kappa in equation (2). Note that for binary rating scales there is no distinct weighted version of kappa, since κ remains the same regardless of the weights used. Again, asymptotic theory and exact methods can be used to construct confidence intervals and make inferences.

In theory, any weights satisfying the two defining conditions (i.e., diagonal weights equal to 1 and off-diagonal weights at least 0 and less than 1) may be used. In practice, however, additional constraints are often imposed to make the weights more interpretable and meaningful. For example, since the degree of disagreement (agreement) is often a function of the difference between the ith and jth rating categories, weights are typically set to reflect adjacency between rating categories, such as w_ij = f(i−j), where f is a function that decreases in |i−j| and satisfies three conditions: (a) 0 ≤ f(x) ≤ 1; (b) f(x) = f(−x); and (c) f(0) = 1. Under these conditions, larger weights (i.e., closer to 1) are assigned to pairs of categories that are closer to each other, and smaller weights (i.e., closer to 0) are assigned to pairs of categories that are more distant from each other.

Two such weighting systems based on column scores are commonly employed. Suppose the k rating categories are assigned ordered column scores C_1 ≤ C_2 ≤ ... ≤ C_k, for example the values 0, 1, ..., k−1. The Cicchetti-Allison weight and the Fleiss-Cohen weight for each cell of the k×k contingency table are then computed as follows:
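In their usual form (the definitions implemented in SAS PROC FREQ), the two weights for cell (i, j) are:

\[
w_{ij}^{\mathrm{CA}} \;=\; 1 - \frac{|C_i - C_j|}{C_k - C_1}
\qquad \text{and} \qquad
w_{ij}^{\mathrm{FC}} \;=\; 1 - \frac{(C_i - C_j)^2}{(C_k - C_1)^2}.
\]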

Example 2. If depression is categorized into three ranked levels as shown in Table 5, the agreement of the classification based on information provided by the probands with the classification based on information provided by other informants can be estimated using the unweighted kappa coefficient as follows:

Applying the Cicchetti-Allison weights (shown in Table 5) to the unweighted formula generates a weighted kappa:

Applying the Fleiss-Cohen weights (shown in Table 5) involves replacing the 0.5 weight in the above equation with 0.75 and results in a κ_w of 0.4482. Thus the weighted kappa coefficients have larger absolute values than the unweighted kappa coefficient. The overall result indicates only fair to moderate agreement between the two methods of classifying the level of depression. As seen in Table 5, the low agreement is partly due to the fact that a large number of subjects classified as having minor depression based on information from the proband were not identified as such using information from other informants.
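The 0.5 and 0.75 weights quoted above follow directly from the weight formulas for a three-level scale, assuming the levels are scored 0, 1, and 2. Adjacent categories receive

\[
w^{\mathrm{CA}} = 1 - \frac{1}{2} = 0.5
\qquad \text{and} \qquad
w^{\mathrm{FC}} = 1 - \frac{1^2}{2^2} = 0.75,
\]

while the two extreme categories (scores 0 and 2) receive weight 0 under both schemes.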

    6. Statistical Software

Several statistical software packages, including SAS, SPSS, and Stata, can compute kappa coefficients. However, agreement data conceptually form a square table with entries in all cells, so most software packages will not compute kappa if the agreement table is non-square, which can occur when one or both raters do not use all of the rating categories when rating subjects because of biases or small samples.

    Table 5. Three ranked levels of depression categorized based on information from the probands themselves or on information from other informants about the probands

In some special circumstances the software packages will compute an incorrect kappa coefficient, namely when a square agreement table is generated despite the failure of both raters to use all rating categories. For example, suppose a scale for rater agreement has three categories, A, B, and C. If one rater only uses categories B and C, and the other only uses categories A and B, this could result in a square agreement table such as that shown in Table 6. The table is square, but the rating categories in the rows are completely different from those in the columns. Clearly, kappa values generated from this table would not provide the desired assessment of rater agreement. To deal with this problem, the analyst must add zero counts for the rating categories not used by each rater so as to create a square table with the correct rating categories, as shown in Table 7 (a SAS sketch of this adjustment is given after Table 7).


Table 6. Hypothetical example of an incorrect agreement table that can occur when two raters on a three-level scale each use only 2 of the 3 levels

Table 7. Adjustment of the agreement table (by adding zero cells) needed when two raters on a three-level scale each use only 2 of the 3 levels
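A minimal SAS sketch of this adjustment (the data set name agree3, the variable names rater1, rater2, and count, and the cell counts are all hypothetical): every cell of the full 3×3 table is entered, including the zero-count cells, and the ZEROS option on the WEIGHT statement tells PROC FREQ to keep those empty cells so that kappa is computed on the correctly structured square table.

* Hypothetical counts for the scenario above: the row rater uses only B and C,
  the column rater uses only A and B, and all remaining cells are entered as zero;
data agree3;
   input rater1 $ rater2 $ count;
   datalines;
A A 0
A B 0
A C 0
B A 10
B B 20
B C 0
C A 30
C B 40
C C 0
;
run;

proc freq data=agree3;
   tables rater1*rater2 / agree;   /* kappa is computed on the full 3x3 table */
   weight count / zeros;           /* ZEROS keeps the zero-count cells in the table */
run;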

    6.1 SAS

In SAS, one may use PROC FREQ and specify the corresponding two-way table with the AGREE option. Here is sample code for Example 2 using PROC FREQ:

PROC FREQ DATA = (the data set for the depression diagnosis study);
    TABLES (variable on result using proband) * (variable on result using other informants) / AGREE;
RUN;

PROC FREQ uses Cicchetti-Allison weights by default. One can specify WT=FC within the AGREE option (i.e., AGREE(WT=FC)) to request weighted kappa coefficients based on Fleiss-Cohen weights. It is important to check the order of the levels and the weights used in computing the weighted kappa. SAS calculates weights for the weighted kappa based on unformatted values; if the variable of interest is not coded this way, one can either recode the variable or use a FORMAT statement and specify the ORDER=FORMATTED option. Also note that data for contingency tables are often recorded as aggregated data. For example, 10 subjects with the rating ‘A’ from the first rater and the rating ‘B’ from the second rater may be combined into one observation with a frequency variable of value 10. In such cases a WEIGHT statement, “WEIGHT (the frequency variable);”, may be used to specify the frequency variable.
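Putting these pieces together, a sketch of the aggregated-data call for Example 2 might look as follows (the data set name depress3 and the variables proband, informant, and n are hypothetical):

proc freq data=depress3;
   tables proband*informant / agree(wt=fc);   /* WT=FC requests Fleiss-Cohen weights; omit it for the default Cicchetti-Allison weights */
   weight n;                                  /* n holds the count for each cell of the aggregated table */
run;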

    6.2 SPSS

In SPSS, kappa coefficients can only be computed when there are two levels in the rating scale, so it is not possible to compute weighted kappa coefficients. For a two-level rating scale such as that described in Example 1, one may use the following syntax to compute the kappa coefficient:

    CROSSTABS

    /TABLES=(variable on result using proband) BY

    (variable on result using other informants)

    /STATISTICS=KAPPA.

An alternative, easier approach is to select the appropriate options from the SPSS menus:

    1. Click on Analyze, then Descriptive Statistics, then Crosstabs.

    2. Choose the variables for the row and column variables in the pop-up window for the crosstab.

    3. Click on Statistics and select the kappa checkbox.

    4. Click Continue or OK to generate the output for the kappa coefficient.

    7. Discussion

In this paper we introduced the use of Cohen’s kappa coefficient to assess between-rater agreement, which has the desirable property of correcting for chance agreement. We focused on cross-sectional studies with two raters, but extensions to longitudinal studies with missing values and to studies that use more than two raters are also available.[6] Cohen’s kappa generally works well, but in some specific situations it may not accurately reflect the true level of agreement between raters.[7] For example, when both raters report a very high prevalence of the condition of interest (as in the hypothetical example shown in Table 2), some of the overlap in their diagnoses may reflect their common knowledge about the disease in the population being rated. This should be considered ‘true’ agreement, but kappa attributes it to chance (i.e., kappa=0). Despite such limitations, the kappa coefficient is, in most circumstances, an informative measure of agreement that is widely used in clinical research.

Cohen’s kappa can only be applied to categorical ratings. When ratings are on a continuous scale, Lin’s concordance correlation coefficient[8] is an appropriate measure of agreement between two raters, and the intraclass correlation coefficient[9] is an appropriate measure of agreement between multiple raters.

    Conflict of interest

The authors declare no conflict of interest.

    Funding

    None.

References

1. Spitzer RL, Gibbon M, Williams JBW. Structured Clinical Interview for Axis I DSM-IV Disorders. Biometrics Research Department, New York State Psychiatric Institute; 1994

2. Cohen J. A coefficient of agreement for nominal scales. Educ Psychol Meas. 1960; 20(1): 37-46

3. Duberstein PR, Ma Y, Chapman BP, Conwell Y, McGriff J, Coyne JC, et al. Detection of depression in older adults by family and friends: distinguishing mood disorder signals from the noise of personality and everyday life. Int Psychogeriatr. 2011; 23(4): 634-643. doi: http://dx.doi.org/10.1017/S1041610210001808

4. Tang W, He H, Tu XM. Applied Categorical and Count Data Analysis. Chapman & Hall/CRC; 2012

5. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977; 33: 159-174. doi: http://dx.doi.org/10.2307/2529310

6. Ma Y, Tang W, Feng C, Tu XM. Inference for kappas for longitudinal study data: applications to sexual health research. Biometrics. 2008; 64: 781-789. doi: http://dx.doi.org/10.1111/j.1541-0420.2007.00934.x

7. Feinstein AR, Cicchetti DV. High agreement but low kappa: I. The problems of two paradoxes. J Clin Epidemiol. 1990; 43(6): 543-549. doi: http://dx.doi.org/10.1016/0895-4356(90)90158-L

8. Lin L. A concordance correlation coefficient to evaluate reproducibility. Biometrics. 1989; 45(1): 255-268. doi: http://dx.doi.org/10.2307/2532051

9. Shrout PE, Fleiss J. Intraclass correlations: uses in assessing rater reliability. Psychol Bull. 1979; 86(2): 420-428

(received: 2015-01-28; accepted: 2015-02-04)

Dr. Tang is a Research Associate Professor of Biostatistics in the Department of Biostatistics at the University of Rochester. His research interests are in semi-parametric modeling of longitudinal data with missing values, smoothing methods, categorical and count data analysis, and applications of statistical methods to psychosocial research. Dr. Tang received his PhD in Mathematics from the Department of Mathematics at the University of Rochester in 2004.


    Summary: In mental health and psychosocial studies it is often necessary to report on the between-rater agreement of measures used in the study. This paper discusses the concept of agreement, highlighting its fundamental difference from correlation. Several examples demonstrate how to compute the kappa coefficient - a popular statistic for measuring agreement - both by hand and by using statistical software packages such as SAS and SPSS. Real study data are used to illustrate how to use and interpret this coefficient in clinical research and practice. The article concludes with a discussion of the limitations of the coefficient.

[Shanghai Arch Psychiatry. 2015; 27(1): 62-67. doi: 10.11919/j.issn.1002-0829.215010]

    1Department of Biostatistics and Computational Biology, University of Rochester, Rochester, NY, United States

    2College of Basic Science and Information Engineering, Yunnan Agricultural University, Kunming, Yunnan Province, China

    3Department of Biostatistics, St. Jude Children’s Research Hospital, Memphis, TN, United States

    4Value Institute, Christiana Care Health System, Newark, DE, United States

5Center of Excellence for Suicide Prevention, Canandaigua VA Medical Center, Canandaigua, NY, United States

    *correspondence: wan_tang@urmc.rochester.edu


A full-text Chinese version of this article is available for free reading and download at www.shanghaiarchivesofpsychiatry.org/cn from March 25, 2015.
