Intraclass correlation

The intraclass correlation is a parametric statistical method for quantifying the agreement (interrater reliability) between multiple raters with respect to several objects of observation. The corresponding measure, the intraclass correlation coefficient (ICC; Asendorpf & Wallbott 1979, Shrout & Fleiss 1979, McGraw & Wong 1996, Wirtz & Caspar 2002), assumes interval-scaled data and is usually calculated when more than two observers are present and/or several observation time points are to be compared.

To determine interrater reliability, the variance between different ratings of the same measurement object (= object of observation: a case, person, or other carrier of the characteristic) is compared with the variance across all ratings and measurement objects.

A reliable measurement can be assumed if the differences between the measurement objects are relatively large (indicating systematic differences between the observed cases) while the variance between the observers with respect to the measurement objects is small. Given high concordance of judgments (i.e., low variance between the raters' estimates), the ICC is therefore high.

As with other correlation coefficients, the ICC can take values between -1.0 and +1.0. Since reliability measures are by definition restricted to the range from 0 to 1, negative ICCs indicate a reliability of 0 (Wirtz & Caspar 2002, p. 234). In a scatter plot of the two measured values, the intraclass correlation coefficient reflects how far the value pairs deviate from the bisector (the line on which both raters' values would be identical).
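To make the contrast with an ordinary correlation concrete, here is a minimal Python sketch (the ratings and variable names are illustrative, not from the source): two raters who differ only by a constant offset still have a Pearson correlation of 1.0, even though every value pair lies off the bisector.

```python
import numpy as np

# Hypothetical ratings: rater B is uniformly stricter than rater A by 2 points.
a = np.array([3.0, 5.0, 2.0, 8.0, 6.0])
b = a - 2.0  # constant offset between the raters

# The Pearson correlation ignores the offset entirely:
r = np.corrcoef(a, b)[0, 1]
print(f"Pearson r = {r:.2f}")  # 1.00

# Deviation of the value pairs from the bisector (the line a == b):
# the squared perpendicular distance of (a_i, b_i) from it is (a_i - b_i)^2 / 2.
mean_sq_dev = np.mean((a - b) ** 2) / 2.0
print(f"mean squared deviation from the bisector = {mean_sq_dev:.2f}")  # 2.00
```

An agreement measure such as the unadjusted ICC penalizes exactly this kind of offset, as the calculation section below shows.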

Types of ICC

Up to six different types of ICC can be distinguished (Shrout & Fleiss 1979), depending on whether all raters rate all cases or different ones, and on whether the raters were drawn at random from a larger pool of raters or not. It also makes a difference whether the individual values of the raters are compared with one another or whether averaged ratings of the rater group are used (e.g., to increase stability).

The last distinction amounts to the question: is the data base formed by the raw ratings of individual raters, or by the averages of k different raters?
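For the averaged case, the single-rater and the average-of-k coefficients are linked by the Spearman-Brown formula, a standard psychometric result (the worked numbers below are illustrative, not from the source):

$$\mathrm{ICC}(k) = \frac{k \cdot \mathrm{ICC}(1)}{1 + (k - 1) \cdot \mathrm{ICC}(1)}$$

For example, a single-rater ICC of 0.60, averaged over k = 3 raters, yields $3 \cdot 0.60 / (1 + 2 \cdot 0.60) = 1.8 / 2.2 \approx 0.82$, which is why averaging increases stability.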

For the two-way model, SPSS requires a further distinction: whether the estimate is to be adjusted or unadjusted. Adjusted vs. unadjusted refers to whether mean differences between raters (e.g., a lenient vs. a strict rater) are removed from the error variance of the model or, as in the unadjusted model, are retained as part of the error variance (Wirtz & Caspar 2002). SPSS labels the adjusted model "Consistency" and the unadjusted model "Absolute Agreement". The unadjusted model thus corresponds to the stricter test.
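As a sketch of how this distinction plays out numerically, the following Python function computes both single-rater coefficients from a cases-by-raters matrix via the two-way analysis-of-variance mean squares (the data and the function name are hypothetical; the formulas follow Shrout & Fleiss 1979 and McGraw & Wong 1996):

```python
import numpy as np

def icc_single(X):
    """Single-rater ICCs for a (k cases x n raters) matrix X:
    adjusted ('Consistency' in SPSS) and unadjusted ('Absolute Agreement')."""
    k, n = X.shape
    grand = X.mean()
    # Two-way ANOVA mean squares:
    ms_cases = n * np.sum((X.mean(axis=1) - grand) ** 2) / (k - 1)   # between cases
    ms_raters = k * np.sum((X.mean(axis=0) - grand) ** 2) / (n - 1)  # between raters
    ss_total = np.sum((X - grand) ** 2)
    ms_error = (ss_total - (k - 1) * ms_cases - (n - 1) * ms_raters) \
        / ((k - 1) * (n - 1))                                        # residual
    icc_consistency = (ms_cases - ms_error) / (ms_cases + (n - 1) * ms_error)
    icc_absolute = (ms_cases - ms_error) / (
        ms_cases + (n - 1) * ms_error + n * (ms_raters - ms_error) / k)
    return icc_consistency, icc_absolute

# Hypothetical data: the second rater is uniformly stricter by 1 point.
X = np.array([[4.0, 3.0],
              [6.0, 5.0],
              [2.0, 1.0],
              [8.0, 7.0]])
c, a = icc_single(X)
print(f"consistency = {c:.3f}, absolute agreement = {a:.3f}")
# consistency = 1.000, absolute agreement = 0.930: the mean difference
# between the raters counts as error only in the unadjusted model.
```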

Calculation

The basic principle of the calculation (i.e., the mathematical model) of the ICC corresponds to an analysis of variance; here, too, the point is the decomposition of variance components and their ratio. If

  • $n$ is the number of raters,
  • $k$ is the number of measurement objects (cases),
  • $\sigma_b^2$ is the variance between the cases (= measurement objects, persons; with $df = k - 1$),
  • $\sigma_w^2$ is the variance within the cases (with $df = k(n - 1)$),
  • $\sigma_r^2$ is the variance between the raters (with $df = n - 1$), and
  • $\sigma_e^2$ is the residual variance (with $df = (k - 1)(n - 1)$),

then the following applies for the unadjusted single-rater coefficient (ICC(2,1) in the notation of Shrout & Fleiss 1979, with the $\sigma^2$ terms denoting the corresponding mean squares of the two-way analysis of variance):

$$\mathrm{ICC}_{\text{unadjusted}} = \frac{\sigma_b^2 - \sigma_e^2}{\sigma_b^2 + (n - 1)\,\sigma_e^2 + \frac{n}{k}\left(\sigma_r^2 - \sigma_e^2\right)}.$$
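Plugging in the variance estimates from the hypothetical two-rater example above ($\sigma_b^2 = 40/3$, $\sigma_r^2 = 2$, $\sigma_e^2 = 0$, with $n = 2$ raters and $k = 4$ cases) gives

$$\mathrm{ICC}_{\text{unadjusted}} = \frac{40/3 - 0}{40/3 + (2 - 1) \cdot 0 + \frac{2}{4}(2 - 0)} = \frac{40/3}{40/3 + 1} = \frac{40}{43} \approx 0.93,$$

which matches the absolute-agreement value from the code sketch, while the adjusted (consistency) coefficient for the same data is 1.0.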
