From the abstract:
However, the institutional adoption of policies related to the collection of assessment data or the application of data-driven decision making appears to have no relationship with student experiences or outcomes in the first year of college. Thus, findings from the current study are consistent with the small, but growing, body of literature questioning the effectiveness of accountability and assessment policies in higher education.
And from the abstract of an older study:
There are a growing number of large-scale educational Randomized Controlled Trials (RCTs). Considering their expense, it is important to reflect on the effectiveness of this approach. We assessed the magnitude and precision of effects found in those large-scale RCTs commissioned by the EEF (UK) and the NCEE (US) which evaluated interventions aimed at improving academic achievement in K-12 (141 RCTs; 1,222,024 students). The mean effect size was 0.06 standard deviations (SDs). These sat within relatively large confidence intervals (mean width 0.30 SDs) which meant that the results were often uninformative (the median Bayes factor was 0.56). We argue that our field needs, as a priority, to understand why educational RCTs often find small and uninformative effects.
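To get an intuition for why a median Bayes factor of 0.56 counts as "uninformative", here is a minimal sketch (not the study's actual method) of a Bayes factor for a single effect estimate under a normal approximation. The effect size (0.06 SDs) and CI width (0.30 SDs) come from the abstract; the N(0, 0.2²) prior on true effects under H1 is an assumption chosen purely for illustration.

```python
import math

def normal_pdf(x, sd):
    """Density of N(0, sd^2) at x."""
    return math.exp(-x * x / (2 * sd * sd)) / (sd * math.sqrt(2 * math.pi))

effect = 0.06            # estimated effect (SD units), from the abstract
se = 0.30 / (2 * 1.96)   # standard error implied by a 95% CI of width 0.30
prior_sd = 0.20          # assumed prior SD on true effects under H1 (illustrative)

# Marginal likelihood of the estimate under each hypothesis:
#   H0 (no effect):   estimate ~ N(0, se^2)
#   H1 (some effect): estimate ~ N(0, se^2 + prior_sd^2)
bf10 = normal_pdf(effect, math.sqrt(se**2 + prior_sd**2)) / normal_pdf(effect, se)
print(round(bf10, 2))  # ~0.47 under these assumptions
```

A Bayes factor this close to 1 favors neither hypothesis strongly (conventionally, values between about 1/3 and 3 are treated as anecdotal evidence), which is the sense in which results like these are uninformative: the data barely shift belief between "no effect" and "some effect".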