Inter-rater reliability
In statistics, inter-rater reliability (also called inter-rater agreement, inter-rater concordance, or inter-observer reliability) is the degree of agreement among raters. It is a measure of how much homogeneity, or consensus, there is in the ratings given by various judges. In contrast, intra-rater reliability is a measure of the consistency in ratings given by the same person across multiple instances. Inter-rater and intra-rater reliability are aspects of test validity. Assessments of them are useful in refining the tools given to human judges, for example by determining whether a particular scale is appropriate for measuring a particular variable. If various raters do not agree, either the scale is defective or the raters need to be re-trained.
There are a number of statistics that can be used to determine inter-rater reliability. Different statistics are appropriate for different types of measurement. Some options are the joint probability of agreement, Cohen's kappa, Scott's pi and the related Fleiss' kappa, inter-rater correlation, the concordance correlation coefficient, the intra-class correlation coefficient, and Krippendorff's alpha.
Sources of inter-rater disagreement
For any task in which multiple raters are useful, raters are expected to disagree about the observed target. By contrast, situations involving unambiguous measurement, such as simple counting tasks (e.g. the number of potential customers entering a store), often do not require more than one person performing the measurement. Measurements involving ambiguity in the characteristics of interest are generally improved with multiple trained raters. Such measurement tasks often involve subjective judgment of quality (examples include ratings of a physician's 'bedside manner', evaluation of witness credibility by a jury, and the presentation skill of a speaker).
Variation across raters in the measurement procedures and variability in the interpretation of measurement results are two examples of sources of error variance in rating measurements. Clearly stated guidelines for rendering ratings are necessary for reliability in ambiguous or challenging measurement scenarios. Without scoring guidelines, ratings are increasingly affected by experimenter's bias, that is, a tendency of rating values to drift towards what is expected by the rater. During processes involving repeated measurements, rater drift can be corrected through periodic retraining to ensure that raters understand the guidelines and measurement goals.
The philosophy of inter-rater agreement
There are several operational definitions[1] of "inter-rater reliability", reflecting different viewpoints about what constitutes reliable agreement between raters.
There are three operational definitions of agreement:
- Reliable raters agree with the "official" rating of a performance.
- Reliable raters agree with each other about the exact ratings to be awarded.
- Reliable raters agree about which performance is better and which is worse.
These combine with two operational definitions of behavior:
- Reliable raters are automatons, behaving like "rating machines". This category includes the rating of essays by computer.[2] This behavior can be evaluated by generalizability theory.
- Reliable raters behave like independent witnesses. They demonstrate their independence by disagreeing slightly. This behavior can be evaluated by the Rasch model.
Statistics
Joint probability of agreement
The joint probability of agreement is the simplest and least robust measure. It is estimated as the percentage of the time the raters agree in a nominal or categorical rating system. It does not take into account the fact that agreement may happen solely by chance. There is some question whether there is a need to 'correct' for chance agreement; some suggest that, in any case, any such adjustment should be based on an explicit model of how chance and error affect raters' decisions.[3]
When the number of categories being used is small (e.g. 2 or 3), the likelihood for 2 raters to agree by pure chance increases dramatically. This is because both raters must confine themselves to the limited number of options available, which impacts the overall agreement rate, and not necessarily their propensity for "intrinsic" agreement (an agreement is considered "intrinsic" if it is not due to chance). Therefore, the joint probability of agreement will remain high even in the absence of any "intrinsic" agreement among raters. A useful inter-rater reliability coefficient is expected (a) to be close to 0, when there is no "intrinsic" agreement, and (b) to increase as the "intrinsic" agreement rate improves. Most chance-corrected agreement coefficients achieve the first objective. However, the second objective is not achieved by many known chance-corrected measures.[4]
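As a minimal sketch (in Python, assuming NumPy is available; the category labels below are invented for illustration), the joint probability of agreement for two raters is simply the fraction of items on which their labels match:

```python
import numpy as np

def joint_probability_of_agreement(rater_a, rater_b):
    """Fraction of items on which two raters assign the same category."""
    rater_a = np.asarray(rater_a)
    rater_b = np.asarray(rater_b)
    return np.mean(rater_a == rater_b)

# Hypothetical example: two raters labelling 8 items into 3 categories.
a = ["yes", "no", "yes", "maybe", "no", "yes", "maybe", "no"]
b = ["yes", "no", "no",  "maybe", "no", "yes", "yes",   "no"]
print(joint_probability_of_agreement(a, b))  # 0.75 (6 of 8 items match)
```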
Kappa statistics
Cohen's kappa,[5] which works for two raters, and Fleiss' kappa,[6] an adaptation that works for any fixed number of raters, improve upon the joint probability of agreement in that they take into account the amount of agreement that could be expected to occur by chance. They suffer from the same problem as the joint probability in that they treat the data as nominal and assume the ratings have no natural ordering. If the data do have an order, the information in the measurements is not fully used.
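A short sketch of Cohen's kappa for two raters, again in Python with NumPy and the same kind of invented labels, shows how the observed agreement is compared with the agreement expected from each rater's marginal category frequencies:

```python
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items (nominal categories)."""
    rater_a, rater_b = np.asarray(rater_a), np.asarray(rater_b)
    categories = np.union1d(rater_a, rater_b)

    # Observed proportion of agreement.
    p_observed = np.mean(rater_a == rater_b)

    # Chance agreement: product of the two raters' marginal proportions,
    # summed over categories.
    p_chance = sum(
        np.mean(rater_a == c) * np.mean(rater_b == c) for c in categories
    )
    return (p_observed - p_chance) / (1.0 - p_chance)

a = ["yes", "no", "yes", "maybe", "no", "yes", "maybe", "no"]
b = ["yes", "no", "no",  "maybe", "no", "yes", "yes",   "no"]
print(cohens_kappa(a, b))  # about 0.61 for these invented labels
```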
Correlation coefficients
Either Pearson's r, Kendall's τ, or Spearman's ρ can be used to measure pairwise correlation among raters using a scale that is ordered. Pearson's r assumes the rating scale is continuous; the Kendall and Spearman statistics assume only that it is ordinal. If more than two raters are observed, an average level of agreement for the group can be calculated as the mean of the r, τ, or ρ values from each possible pair of raters.
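The sketch below, assuming Python with NumPy and SciPy and invented ordinal scores, averages a pairwise statistic (Pearson's r, Spearman's ρ, or Kendall's τ) over all possible rater pairs:

```python
from itertools import combinations
import numpy as np
from scipy.stats import kendalltau, pearsonr, spearmanr

# Hypothetical ordinal scores (1-5 scale) from three raters on six items.
ratings = np.array([
    [4, 2, 5, 3, 1, 4],   # rater 1
    [5, 2, 4, 3, 1, 3],   # rater 2
    [4, 3, 5, 2, 2, 4],   # rater 3
])

def mean_pairwise(ratings, statistic):
    """Average a pairwise correlation statistic over all rater pairs."""
    values = [statistic(ratings[i], ratings[j])[0]   # [0] = coefficient
              for i, j in combinations(range(len(ratings)), 2)]
    return np.mean(values)

print("mean Pearson r:   ", mean_pairwise(ratings, pearsonr))
print("mean Spearman rho:", mean_pairwise(ratings, spearmanr))
print("mean Kendall tau: ", mean_pairwise(ratings, kendalltau))
```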
Intra-class correlation coefficient
Another way of performing reliability testing is to use the intra-class correlation coefficient (ICC).[7] There are several types of ICC; one is defined as "the proportion of variance of an observation due to between-subject variability in the true scores".[8] The range of the ICC may be between 0.0 and 1.0 (an early definition of the ICC could be between −1 and +1). The ICC will be high when there is little variation between the scores given to each item by the raters, e.g. if all raters give the same or similar scores to each of the items. The ICC is an improvement over Pearson's r and Spearman's ρ, as it takes into account the differences in ratings for individual segments, along with the correlation between raters.
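There are several ICC forms; the sketch below, assuming Python with NumPy and invented scores, computes one of the simplest, the one-way random-effects ICC(1,1) of Shrout and Fleiss,[7] from the between-subject and within-subject mean squares:

```python
import numpy as np

def icc_1_1(scores):
    """One-way random-effects ICC(1,1) from a subjects x raters score matrix.

    Each subject is treated as a group in a one-way ANOVA; the k ratings of
    that subject are the observations within the group.
    """
    scores = np.asarray(scores, dtype=float)
    n_subjects, k = scores.shape

    grand_mean = scores.mean()
    subject_means = scores.mean(axis=1)

    # Between-subject and within-subject mean squares.
    ms_between = k * np.sum((subject_means - grand_mean) ** 2) / (n_subjects - 1)
    ms_within = np.sum((scores - subject_means[:, None]) ** 2) / (n_subjects * (k - 1))

    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical data: 5 subjects (rows) scored by 3 raters (columns).
scores = [[9, 10, 8],
          [6,  7, 6],
          [8,  8, 9],
          [4,  5, 5],
          [7,  6, 7]]
print(icc_1_1(scores))
```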
Limits of agreement
Another approach to agreement (useful when there are only two raters and the scale is continuous) is to calculate the differences between each pair of the two raters' observations. The mean of these differences is termed bias and the reference interval (mean ± 1.96 × standard deviation) is termed limits of agreement. The limits of agreement provide insight into how much random variation may be influencing the ratings. If the raters tend to agree, the differences between the raters' observations will be near zero. If one rater is usually higher or lower than the other by a consistent amount, the bias (mean of differences) will be different from zero. If the raters tend to disagree, but without a consistent pattern of one rating higher than the other, the mean will be near zero. Confidence limits (usually 95%) can be calculated for both the bias and each of the limits of agreement.
There are several formulae that can be used to calculate limits of agreement. The simple formula, which was given in the previous paragraph and works well for sample sizes greater than 60,[9] is
limits of agreement = mean of the differences ± 1.96 × standard deviation of the differences.
For smaller sample sizes, another common simplification[10] is
limits of agreement = mean of the differences ± 2 × standard deviation of the differences.
However, the most accurate formula, which is applicable to all sample sizes,[9] is
limits of agreement = mean of the differences ± t(0.975, n − 1) × standard deviation of the differences × √(1 + 1/n),
where n is the number of paired observations and t(0.975, n − 1) is the 97.5th percentile of Student's t-distribution with n − 1 degrees of freedom.
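A minimal sketch of these calculations, assuming Python with NumPy and SciPy and invented paired measurements, returns the bias together with either the simple large-sample limits or the t-based limits described above:

```python
import numpy as np
from scipy import stats

def limits_of_agreement(rater_a, rater_b, exact=False):
    """Bias and 95% limits of agreement for two raters on a continuous scale."""
    d = np.asarray(rater_a, dtype=float) - np.asarray(rater_b, dtype=float)
    n = len(d)
    bias = d.mean()
    sd = d.std(ddof=1)

    if exact:
        # t-based formula, usable at any sample size.
        half_width = stats.t.ppf(0.975, n - 1) * sd * np.sqrt(1 + 1 / n)
    else:
        # Simple large-sample formula (n > 60).
        half_width = 1.96 * sd

    return bias, bias - half_width, bias + half_width

# Hypothetical paired measurements from two raters.
a = [10.2, 11.5, 9.8, 12.1, 10.9, 11.3, 10.0, 12.4]
b = [10.0, 11.9, 9.5, 12.5, 11.2, 11.0, 10.3, 12.0]
print(limits_of_agreement(a, b))               # simple formula
print(limits_of_agreement(a, b, exact=True))   # t-based formula
```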
Bland and Altman[10] have expanded on this idea by graphing the difference of each point, the mean difference, and the limits of agreement on the vertical axis against the average of the two ratings on the horizontal axis. The resulting Bland–Altman plot demonstrates not only the overall degree of agreement, but also whether the agreement is related to the underlying value of the item. For instance, two raters might agree closely in estimating the size of small items, but disagree about larger items.
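A Bland–Altman plot can be sketched as follows, assuming Python with NumPy and Matplotlib and the same kind of invented paired ratings; the differences are plotted against the pair means, with horizontal lines at the bias and at the limits of agreement:

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical paired ratings from two raters on a continuous scale.
a = np.array([10.2, 11.5, 9.8, 12.1, 10.9, 11.3, 10.0, 12.4])
b = np.array([10.0, 11.9, 9.5, 12.5, 11.2, 11.0, 10.3, 12.0])

mean_ab = (a + b) / 2    # horizontal axis: average of the two ratings
diff_ab = a - b          # vertical axis: difference between ratings
bias = diff_ab.mean()
sd = diff_ab.std(ddof=1)

plt.scatter(mean_ab, diff_ab)
plt.axhline(bias, color="black", label="bias (mean difference)")
plt.axhline(bias + 1.96 * sd, linestyle="--", color="gray", label="limits of agreement")
plt.axhline(bias - 1.96 * sd, linestyle="--", color="gray")
plt.xlabel("Mean of the two ratings")
plt.ylabel("Difference between ratings")
plt.title("Bland-Altman plot")
plt.legend()
plt.show()
```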
When comparing two methods of measurement, it is of interest not only to estimate the bias and the limits of agreement between the two methods (inter-rater agreement), but also to assess these characteristics for each method within itself (intra-rater reliability). It might very well be that the agreement between two methods is poor simply because one of the methods has wide limits of agreement while the other has narrow ones. In this case, the method with the narrow limits of agreement would be superior from a statistical point of view, while practical or other considerations might change this assessment. What constitutes narrow or wide limits of agreement, or large or small bias, is a matter of practical assessment in each case.
Krippendorff's alpha
Krippendorff's alpha[11][12] is a versatile statistic that assesses the agreement achieved among observers who categorize, evaluate, or measure a given set of objects in terms of the values of a variable. It generalizes several specialized agreement coefficients by accepting any number of observers, being applicable to nominal, ordinal, interval, and ratio levels of measurement, being able to handle missing data, and being corrected for small sample sizes. Alpha emerged in content analysis where textual units are categorized by trained coders and is used in counseling and survey research where experts code open-ended interview data into analyzable terms, in psychometrics where individual attributes are tested by multiple methods, in observational studies where unstructured happenings are recorded for subsequent analysis, and in computational linguistics where texts are annotated for various syntactic and semantic qualities.
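The full statistic covers nominal, ordinal, interval, and ratio metrics; the sketch below, assuming Python with NumPy and invented codings, implements only the nominal case via the coincidence-matrix formulation (alpha = 1 − observed disagreement / expected disagreement), with missing ratings marked as NaN:

```python
import numpy as np

def krippendorff_alpha_nominal(ratings):
    """Krippendorff's alpha for nominal data.

    ratings: array of shape (raters, units); np.nan marks missing ratings.
    """
    ratings = np.asarray(ratings, dtype=float)
    values = np.unique(ratings[~np.isnan(ratings)])
    index = {v: i for i, v in enumerate(values)}
    k = len(values)

    # Coincidence matrix: each unit with at least two ratings contributes its
    # ordered pairs of values from different raters, weighted by 1 / (m - 1).
    coincidence = np.zeros((k, k))
    for unit in ratings.T:                  # columns are units
        vals = unit[~np.isnan(unit)]
        m = len(vals)
        if m < 2:
            continue                        # unpairable unit, ignored
        for a in vals:
            for b in vals:
                coincidence[index[a], index[b]] += 1.0 / (m - 1)
        for a in vals:                      # remove the m self-pairings
            coincidence[index[a], index[a]] -= 1.0 / (m - 1)

    n_c = coincidence.sum(axis=1)           # how often each value was used
    n = n_c.sum()                           # total number of pairable ratings

    disagree = 1.0 - np.eye(k)              # nominal metric: 0 if same, 1 if different
    d_observed = (coincidence * disagree).sum()
    d_expected = (np.outer(n_c, n_c) * disagree).sum() / (n - 1)
    return 1.0 - d_observed / d_expected

# Hypothetical example: 3 coders, 6 units, one missing rating.
data = [[1, 2, 3, 3, 2, 1],
        [1, 2, 3, 3, 2, 2],
        [np.nan, 3, 3, 3, 2, 1]]
print(krippendorff_alpha_nominal(data))
```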
References
- ^ Saal, F. E., Downey, R. G., & Lahey, M. A. (1980). Rating the ratings: Assessing the psychometric quality of rating data. Psychological Bulletin, 88(2), 413.
- ^ Page, E. B., & Petersen, N. S. (1995). The computer moves into essay grading: Updating the ancient test. Phi Delta Kappan, 76(7), 561.
- ^ Uebersax, J. S. (1987). Diversity of decision-making models and the measurement of interrater agreement. Psychological Bulletin, 101(1), 140.
- ^ "Correcting Inter-Rater Reliability for Chance Agreement: Why?". www.agreestat.com. Retrieved 2018-12-26.
- ^ Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1), 37-46.
- ^ Fleiss, J. L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5), 378.
- ^ Shrout, P. E., & Fleiss, J. L. (1979). Intraclass correlations: uses in assessing rater reliability. Psychological Bulletin, 86(2), 420.
- ^ Everitt, B. S. (1996). Making sense of statistics in psychology: A second-level course. New York, NY: Oxford University Press.
- ^ a b Ludbrook, J. (2010). Confidence in Altman–Bland plots: a critical review of the method of differences. Clinical and Experimental Pharmacology and Physiology, 37(2), 143-149.
- ^ a b Bland, J. M., & Altman, D. (1986). Statistical methods for assessing agreement between two methods of clinical measurement. The Lancet, 327(8476), 307-310.
- ^ Krippendorff, K. (2018). Content analysis: An introduction to its methodology (4th ed.). Los Angeles: SAGE Publications. ISBN 9781506395661. OCLC 1019840156.
- ^ Hayes, A. F., & Krippendorff, K. (2007). Answering the call for a standard reliability measure for coding data. Communication Methods and Measures, 1(1), 77-89.
Further reading
- Gwet, Kilem L. (2014) Handbook of Inter-Rater Reliability, Fourth Edition, (Gaithersburg : Advanced Analytics, LLC) ISBN 978-0970806284
- Gwet, K. L. (2008). “Computing inter-rater reliability and its variance in the presence of high agreement.” British Journal of Mathematical and Statistical Psychology, 61, 29–48
- Johnson, R., Penny, J., & Gordon, B. (2009). Assessing performance: Developing, scoring, and validating performance tasks. New York: Guilford Publications. ISBN 978-1-59385-988-6
- Shoukri, M. M. (2010) Measures of Interobserver Agreement and Reliability (2nd edition). Boca Raton, FL: Chapman & Hall/CRC Press, ISBN 978-1-4398-1080-4
External links
Wikimedia Commons has media related to Inter-rater reliability.