
The Stata Journal
Volume 18 Number 4: pp. 871–901




Implementing a general framework for assessing interrater agreement in Stata

Daniel Klein
International Centre for Higher Education Research Kassel
Kassel, Germany
[email protected]
Abstract.  Despite its well-known weaknesses, researchers continue to choose the kappa coefficient (Cohen, 1960, Educational and Psychological Measurement 20: 37–46; Fleiss, 1971, Psychological Bulletin 76: 378–382) to quantify agreement among raters. Part of kappa's persistent popularity seems to arise from the lack of alternative agreement coefficients available in statistical software packages such as Stata. In this article, I review Gwet's (2014, Handbook of Inter-Rater Reliability) recently developed framework of interrater agreement coefficients. This framework extends several agreement coefficients to handle any number of raters, any number of rating categories, any level of measurement, and missing values. I introduce the kappaetc command, which implements this framework in Stata.
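To give a sense of how the command introduced in the abstract is used, the sketch below shows a typical kappaetc session. The dataset layout (one variable per rater, one observation per subject), the variable names, and the particular wgt() argument are illustrative assumptions, not examples taken from the article.

. * install kappaetc from the SSC archive
. ssc install kappaetc

. * each variable holds one rater's ratings; each observation is one rated subject
. * (rater1-rater4 are hypothetical variable names)
. kappaetc rater1 rater2 rater3 rater4

. * weighted agreement coefficients for ordinal rating categories
. kappaetc rater1 rater2 rater3 rater4, wgt(quadratic)

By default the command reports a set of chance-corrected agreement coefficients for the supplied rater variables; the weighting option shown here is one way to account for the ordering of rating categories.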


Keywords: kappaetc, kappaetci, Cohen, Fleiss, Gwet, interrater agreement, kappa, Krippendorff, reliability
