Inter-rater reliability in Learner Corpus Research: Insights from a collaborative study on adverb placement

Tove Larsson, Magali Paquot, Luke Plonsky

Research output: Contribution to journal › Review article › peer-review

17 Scopus citations

Abstract

In Learner Corpus Research (LCR), a common source of errors stems from manual coding and annotation of linguistic features. To estimate the amount of error present in a coded dataset, coefficients of inter-rater reliability are used. However, despite the importance of reliability and internal consistency for validity and, by extension, study quality, interpretability and generalizability, it is surprisingly uncommon for studies in the field of LCR to report on such reliability coefficients. In this Methods Report, we use a recent collaborative research project to illustrate the pertinence of considering inter-rater reliability. In doing so, we hope to initiate methodological discussion on instrument design, piloting and evaluation. We also suggest some ways forward to encourage increased transparency in reporting practices.
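The coefficients the abstract refers to include Fleiss' kappa (listed among the keywords below), which measures agreement among several raters beyond what chance would predict. As an illustration only, and not the authors' own analysis, a minimal sketch of computing Fleiss' kappa from an item-by-category count matrix might look like the following; the example counts and the three placement categories are invented for demonstration.

import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for an (items x categories) count matrix.

    counts[i, j] = number of raters who assigned item i to category j;
    every row must sum to the same number of raters.
    """
    counts = np.asarray(counts, dtype=float)
    n_items, _ = counts.shape
    n_raters = counts[0].sum()

    # Proportion of all assignments falling into each category.
    p_j = counts.sum(axis=0) / (n_items * n_raters)
    # Expected (chance) agreement between two random raters.
    p_e = (p_j ** 2).sum()

    # Observed agreement per item, averaged over items.
    p_i = ((counts ** 2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()

    return (p_bar - p_e) / (1 - p_e)

# Hypothetical data: 3 raters code 5 adverb tokens into 3 placement
# categories (e.g. initial / medial / final); each row sums to 3 raters.
example = [
    [3, 0, 0],
    [0, 3, 0],
    [1, 2, 0],
    [0, 1, 2],
    [3, 0, 0],
]
print(round(fleiss_kappa(example), 3))  # about 0.56 for this toy data

A value of 1 indicates perfect agreement, 0 indicates agreement no better than chance, and negative values indicate agreement below chance; conventions for interpreting intermediate values vary and should be reported alongside the coefficient.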

Original language: English (US)
Pages (from-to): 237-251
Number of pages: 15
Journal: International Journal of Learner Corpus Research
Volume: 6
Issue number: 2
DOIs
State: Published - Dec 10 2020

Keywords

  • Coding errors
  • Fleiss’ kappa
  • Inter-rater reliability
  • Reporting practices
  • Study quality

ASJC Scopus subject areas

  • Education
  • Language and Linguistics
  • Linguistics and Language
