Improving the peer-review process for grant applications: reliability, validity, bias, and generalizability.
Peer review is a gatekeeper, the final arbiter of what is valued in academia, but it has been criticized in relation to traditional psychological research criteria of reliability, validity, generalizability, and potential biases. Despite a considerable literature, there is surprisingly little sound peer-review research examining these criteria or strategies for improving the process. This article summarizes the authors' research program with the Australian Research Council, which receives thousands of grant proposals from the social science, humanities, and science disciplines and reviews by assessors from all over the world. Using multilevel cross-classified models, the authors critically evaluated peer reviews of grant applications and potential biases associated with applicants, assessors, and their interaction (e.g., age, gender, university, academic rank, research team composition, nationality, experience). Peer reviews lacked reliability, but the only major systematic bias found involved the inflated, unreliable, and invalid ratings of assessors nominated by the applicants themselves. The authors propose a new approach, the reader system, which they evaluated with psychology and education grant proposals and found to be substantially more reliable and strategically advantageous than traditional peer reviews of grant applications.
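The abstract notes that the analyses used multilevel cross-classified models, with ratings cross-classified by applicant and assessor. As a rough illustration only (not the authors' code or data), such a model can be sketched in Python with statsmodels, approximating crossed random effects via variance components; the file `ratings.csv` and the column names `rating`, `applicant_id`, `assessor_id`, and `self_nominated` are hypothetical.

```python
# Minimal sketch of a cross-classified multilevel model for peer-review
# ratings, assuming a hypothetical data set with one row per
# (applicant, assessor) rating. Illustrates the general technique named
# in the abstract, not the authors' actual analysis.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ratings.csv")  # hypothetical file

# statsmodels has no direct crossed-random-effects syntax; a common
# workaround is to treat the whole data set as one group and specify a
# variance component for each crossed factor (applicants and assessors).
df["all"] = 1
model = smf.mixedlm(
    "rating ~ self_nominated",   # fixed effect, e.g. applicant-nominated assessor
    data=df,
    groups="all",
    re_formula="0",              # no random intercept for the dummy group
    vc_formula={
        "applicant": "0 + C(applicant_id)",
        "assessor": "0 + C(assessor_id)",
    },
)
result = model.fit()
print(result.summary())

# The estimated variance components can then be used to gauge single-rater
# reliability, e.g. var(applicant) / (var(applicant) + var(assessor) + var(residual)).
```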
Main authors: | Marsh, H; Jayasinghe, U; Bond, N |
---|---|
Format: | Journal article |
Language: | English |
Published: | 2008 |
id | oxford-uuid:e14a206d-6099-40b3-8258-708b632d3e2e |
---|---|
institution | University of Oxford |
collection | OXFORD |
record_format | dspace |
publishDate | 2008 |
first_indexed | 2024-03-07T05:28:13Z |
last_indexed | 2024-03-07T05:28:13Z |