How Consistent Are Meanings of Evidence-Based? A Comparative Review of 12 Clearinghouses that Rate the Effectiveness of Educational Programs
Material type: Continuing resource | Publication details: Review of Educational Research; 2024 | Description: p. 5-32 | ISSN: 0034-6543
Item type | Current library | Call number | Vol info | Status | Date due | Barcode
---|---|---|---|---|---|---
Article Index | Dr VKRV Rao Library | | Vol. 94, No. 1 | Not for loan | | AI150
Clearinghouses set standards of scientific quality to vet existing research and determine how "evidence-based" an intervention is. This paper examines 12 educational clearinghouses to describe their effectiveness criteria, to estimate how consistently they rate the same program, and to probe why their judgments differ. All the clearinghouses value random assignment, but they differ in how they treat its implementation, how they weight quasi-experiments, and how they value ancillary causal factors such as independent replication and persisting effects. A total of 1,359 programs were analyzed across 10 clearinghouses; 83% of them were assessed by a single clearinghouse, and of those rated by more than one, similar ratings were achieved for only about 30% of the programs. This high level of inconsistency seems to stem mostly from clearinghouses disagreeing about whether a high program rating requires effects that are replicated and/or temporally persisting. Clearinghouses exist to identify "evidence-based" programs, but the inconsistency in their recommendations of the same program suggests that identifying "evidence-based" interventions is still more of a policy aspiration than a reliable research practice.