000 02000nas a2200217Ia 4500
008 240802c99999999xx |||||||||||| ||und||
022 _a0034-6543
100 _aWadhwa, Mansi
_9119524
245 0 _aHow Consistent Are Meanings of Evidence-Based? A Comparative Review of 12 Clearinghouses that Rate the Effectiveness of Educational Programs
260 _bReview of Educational Research
260 _c2024
300 _a5-32
520 _aClearinghouses set standards of scientific quality to vet existing research and determine how “evidence-based” an intervention is. This paper examines 12 educational clearinghouses to describe their effectiveness criteria, to estimate how consistently they rate the same program, and to probe why their judgments differ. All the clearinghouses value random assignment, but they differ in how they treat its implementation, how they weight quasi-experiments, and how they value ancillary causal factors such as independent replication and persisting effects. A total of 1,359 programs were analyzed across 10 clearinghouses; 83% of them were assessed by a single clearinghouse and, of those rated by more than one, similar ratings were achieved for only about 30% of the programs. This high level of inconsistency seems to be mostly due to clearinghouses disagreeing about whether a high program rating requires effects that are replicated and/or temporally persisting. Clearinghouses exist to identify “evidence-based” programs, but the inconsistency in their recommendations of the same program suggests that identifying “evidence-based” interventions is still more of a policy aspiration than a reliable research practice.
650 _aClearinghouse
_9119525
650 _aExperimental Research
_930653
650 _aResearch Methodology
650 _aResearch Utilization
_9119526
700 _aCook, Thomas D.
_9119527
700 _aZheng, Jingwen
_9119528
856 _uhttps://doi.org/10.3102/00346543231152262
999 _c133497
_d133497