DOE PAGES
U.S. Department of Energy
Office of Scientific and Technical Information

Title: Grant Peer Review: Improving Inter-Rater Reliability with Training

Abstract

In this study, we developed and evaluated a brief training program for grant reviewers that aimed to increase inter-rater reliability, rating scale knowledge, and effort to read the grant review criteria. Enhancing reviewer training may improve the reliability and accuracy of research grant proposal scoring and funding recommendations. Seventy-five Public Health professors from U.S. research universities watched the training video we produced and assigned scores to the National Institutes of Health scoring criteria proposal summary descriptions. For both novice and experienced reviewers, the training video increased scoring accuracy (the percentage of scores that reflect the true rating scale values), inter-rater reliability, and the amount of time reading the review criteria compared to the no video condition. The increase in reliability for experienced reviewers is notable because it is commonly assumed that reviewers—especially those with experience—have good understanding of the grant review rating scale. Our findings suggest that both experienced and novice reviewers who had not received the type of training developed in this study may not have appropriate understanding of the definitions and meaning for each value of the rating scale and that experienced reviewers may overestimate their knowledge of the rating scale. Lastly, the results underscore the benefits of and need for specialized peer reviewer training.
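The abstract's two headline metrics are standard quantities: scoring accuracy is simple percent agreement with the true rating scale values, and inter-rater reliability in grant review research is typically reported as an intraclass correlation (see the Shrout and Fleiss reference in the works cited below). A minimal sketch of both computations follows, on hypothetical rater data using the NIH 1-9 scale; the paper's exact ICC variant is not stated in this record, so ICC(2,1) is assumed here purely for illustration.

# Illustration only: computes the two metrics named in the abstract on
# hypothetical data. "true_vals" holds the intended scale value for each
# of five proposal summaries; "scores" holds six raters' 1-9 NIH scores.
import numpy as np

true_vals = np.array([2, 4, 5, 7, 8])            # hypothetical true scale values
scores = np.array([[2, 2, 3, 2, 2, 4],           # rows: proposals, cols: raters
                   [4, 5, 4, 3, 4, 4],
                   [5, 5, 6, 5, 4, 5],
                   [7, 6, 7, 8, 7, 6],
                   [8, 8, 7, 8, 9, 8]])

# Scoring accuracy: percentage of scores matching the true scale value.
accuracy = np.mean(scores == true_vals[:, None]) * 100

# Inter-rater reliability as ICC(2,1), two-way random effects, single
# rater (Shrout & Fleiss, 1979). Assumed variant, not confirmed by this
# record. Mean squares come from the standard two-way ANOVA decomposition.
n, k = scores.shape
grand = scores.mean()
ms_rows = k * np.sum((scores.mean(axis=1) - grand) ** 2) / (n - 1)
ms_cols = n * np.sum((scores.mean(axis=0) - grand) ** 2) / (k - 1)
ss_err = (np.sum((scores - grand) ** 2)
          - ms_rows * (n - 1) - ms_cols * (k - 1))
ms_err = ss_err / ((n - 1) * (k - 1))
icc21 = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                              + k * (ms_cols - ms_err) / n)

print(f"scoring accuracy: {accuracy:.1f}%")
print(f"ICC(2,1): {icc21:.3f}")

On this made-up matrix the script reports roughly 63% accuracy and a high ICC, since the raters track the true values; the study's actual data and analysis are available via the DOI below.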

Authors:
 Sattler, David N. [1];  McKnight, Patrick E. [2];  Naney, Linda [3];  Mathis, Randy [3]
  1. Western Washington Univ., Bellingham, WA (United States)
  2. George Mason Univ., Fairfax, VA (United States)
  3. Oak Ridge Associated Universities (ORAU), Oak Ridge, TN (United States)
Publication Date:
2015-06-15
Research Org.:
Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States)
Sponsoring Org.:
USDOE Office of Science (SC)
OSTI Identifier:
1260644
Grant/Contract Number:  
AC05-06OR23100
Resource Type:
Accepted Manuscript
Journal Name:
PLoS ONE
Additional Journal Information:
Journal Volume: 10; Journal Issue: 6; Journal ID: ISSN 1932-6203
Publisher:
Public Library of Science
Country of Publication:
United States
Language:
English
Subject:
99 GENERAL AND MISCELLANEOUS

Citation Formats

Sattler, David N., McKnight, Patrick E., Naney, Linda, and Mathis, Randy. Grant Peer Review: Improving Inter-Rater Reliability with Training. United States: N. p., 2015. Web. doi:10.1371/journal.pone.0130450.
Sattler, David N., McKnight, Patrick E., Naney, Linda, & Mathis, Randy. Grant Peer Review: Improving Inter-Rater Reliability with Training. United States. https://doi.org/10.1371/journal.pone.0130450
Sattler, David N., McKnight, Patrick E., Naney, Linda, and Mathis, Randy. 2015. "Grant Peer Review: Improving Inter-Rater Reliability with Training". United States. https://doi.org/10.1371/journal.pone.0130450. https://www.osti.gov/servlets/purl/1260644.
@article{osti_1260644,
title = {Grant Peer Review: Improving Inter-Rater Reliability with Training},
author = {Sattler, David N. and McKnight, Patrick E. and Naney, Linda and Mathis, Randy},
abstractNote = {In this study, we developed and evaluated a brief training program for grant reviewers that aimed to increase inter-rater reliability, rating scale knowledge, and effort to read the grant review criteria. Enhancing reviewer training may improve the reliability and accuracy of research grant proposal scoring and funding recommendations. Seventy-five Public Health professors from U.S. research universities watched the training video we produced and assigned scores to the National Institutes of Health scoring criteria proposal summary descriptions. For both novice and experienced reviewers, the training video increased scoring accuracy (the percentage of scores that reflect the true rating scale values), inter-rater reliability, and the amount of time reading the review criteria compared to the no video condition. The increase in reliability for experienced reviewers is notable because it is commonly assumed that reviewers—especially those with experience—have good understanding of the grant review rating scale. Our findings suggest that both experienced and novice reviewers who had not received the type of training developed in this study may not have appropriate understanding of the definitions and meaning for each value of the rating scale and that experienced reviewers may overestimate their knowledge of the rating scale. Lastly, the results underscore the benefits of and need for specialized peer reviewer training.},
doi = {10.1371/journal.pone.0130450},
journal = {PLoS ONE},
number = 6,
volume = 10,
place = {United States},
year = {2015},
month = {jun}
}

Journal Article:
Free Publicly Available Full Text
Publisher's Version of Record

Citation Metrics:
Cited by: 32 works
Citation information provided by
Web of Science


Works referenced in this record:

Improving the peer-review process for grant applications: Reliability, validity, bias, and generalizability.
journal, January 2008


Extraneous factors in judicial decisions
journal, April 2011

  • Danziger, S.; Levav, J.; Avnaim-Pesso, L.
  • Proceedings of the National Academy of Sciences, Vol. 108, Issue 17
  • DOI: 10.1073/pnas.1018033108

A critical discussion of intraclass correlation coefficients
journal, December 1994


Intraclass correlations: Uses in assessing rater reliability.
journal, January 1979


Peer Review in the Funding of Research in Higher Education: The Australian Experience
journal, December 2001

  • Jayasinghe, Upali W.; Marsh, Herbert W.; Bond, Nigel
  • Educational Evaluation and Policy Analysis, Vol. 23, Issue 4
  • DOI: 10.3102/01623737023004343

A Reliability-Generalization Study of Journal Peer Reviews: A Multilevel Meta-Analysis of Inter-Rater Reliability and Its Determinants
journal, December 2010


Heterogeneity of Inter-Rater Reliabilities of Grant Peer Reviews and Its Determinants: A General Estimating Equations Approach
journal, October 2012


Knowledge Transfer and Exchange: Review and Synthesis of the Literature: Knowledge Transfer and Exchange
journal, December 2007


The Role of Emotion in Decision Making: A Cognitive Neuroscience Perspective
journal, October 2006


The Validity of Peer Review in a General Medicine Journal
journal, July 2011


Multiple Evaluations of Grant Proposals by Independent Assessors: Confirmatory Factor Analysis Evaluations of Reliability, Validity, and Structure
journal, January 1999


Effects of training on quality of peer review: randomised controlled trial
journal, March 2004


Calibration of measures for psychotherapy outcome studies.
journal, January 1996

Works referencing / citing this record:

Low agreement among reviewers evaluating the same NIH grant applications
journal, March 2018

  • Pier, Elizabeth L.; Brauer, Markus; Filut, Amarette
  • Proceedings of the National Academy of Sciences, Vol. 115, Issue 12
  • DOI: 10.1073/pnas.1714379115

Peer Review Practices for Evaluating Biomedical Research Grants: A Scientific Statement From the American Heart Association
journal, August 2017


Standardizing an approach to the evaluation of implementation science proposals
journal, May 2018

  • Crable, Erika L.; Biancarelli, Dea; Walkey, Allan J.
  • Implementation Science, Vol. 13, Issue 1
  • DOI: 10.1186/s13012-018-0770-5

What do we know about grant peer review in the health sciences?
journal, January 2017


Measuring bias, burden and conservatism in research funding processes
journal, January 2019


Disparities in ratings of internal and external applicants: A case for model-based inter-rater reliability
journal, October 2018


Is human classification by experienced untrained observers a gold standard in fixation detection?
journal, October 2017

  • Hooge, Ignace T. C.; Niehorster, Diederick C.; Nyström, Marcus
  • Behavior Research Methods, Vol. 50, Issue 5
  • DOI: 10.3758/s13428-017-0955-x

A novel evaluation of two related and two independent algorithms for eye movement classification during reading
journal, May 2018


Using machine learning to detect events in eye-tracking data
journal, February 2017

  • Zemblys, Raimondas; Niehorster, Diederick C.; Komogortsev, Oleg
  • Behavior Research Methods, Vol. 50, Issue 1
  • DOI: 10.3758/s13428-017-0860-3

The troubles with peer review for allocating research funding
journal, November 2019