  1. NIH Peer Review: Criterion Scores Completely Account for Racial Disparities in Overall Impact Scores. Elena A. Erosheva, Sheridan Grant, Mei-Ching Chen, Mark D. Lindner, Richard K. Nakamura & Carole J. Lee - 2020 - Science Advances 6 (23). DOI: 10.1126/sciadv.aaz4868.
    Previous research has found that funding disparities are driven by applications’ final impact scores and that only a portion of the black/white funding gap can be explained by bibliometrics and topic choice. Using National Institutes of Health R01 applications for council years 2014–2016, we examine assigned reviewers’ preliminary overall impact and criterion scores to evaluate whether racial disparities in impact scores can be explained by application and applicant characteristics. We hypothesize that differences in commensuration—the process of combining criterion scores into (...)
  2. Alternative Funding Models Might Perpetuate Black-White Funding Gaps. Carole J. Lee, Sheridan Grant & Elena A. Erosheva - 2020 - The Lancet 396:955-956.
    The White Coats for Black Lives and #ShutDownSTEM movements have galvanised biomedical practitioners and researchers to eliminate institutional and systematic racism, including barriers faced by Black researchers in biomedicine and science, technology, engineering, and mathematics. In our study on Black–White funding gaps for National Institutes of Health Research Project grants, we found that the overall award rate for Black applicants is 55% of that for white applicants. How can systems for allocating research grant funding be made more fair while improving (...)
  3. Refinement: Measuring Informativeness of Ratings in the Absence of a Gold Standard. Sheridan Grant, Marina Meilă, Elena Erosheva & Carole Lee - 2022 - British Journal of Mathematical and Statistical Psychology 75 (3):593-615.
    We propose a new metric for evaluating the informativeness of a set of ratings from a single rater on a given scale. Such evaluations are of interest when raters rate numerous comparable items on the same scale, as occurs in hiring, college admissions, and peer review. Our exposition takes the context of peer review, which involves univariate and multivariate cardinal ratings. We draw on this context to motivate an information-theoretic measure of the refinement of a set of ratings – entropic (...)