“Closing the loop” refers to making meaningless curricular changes based on bogus data, and it is at the heart of the assessment project. I tried to make that case a few years ago in Inside Higher Ed, but I don’t know much about stats. Dave Eubanks is a mathematician, and he does know a thing or two about the use and abuse of statistics. So it was nice to see that the AAC&U was willing to give him some space in the latest issue of Peer Review to offer a critique of the way that stats are abused in assessment.
Here is a sample. The full article is here.
Assessment practice also fails at empiricism. What is typically accepted in assessment reviews has little to do with statistics and measurement. Nor could it be otherwise. The 2016 IPEDS data on four-year degrees granted shows that half of the academic programs (using CIP codes) had eight or fewer graduates that year. Such small samples are not suitable for measuring a program’s quality, given the many sources of variation in student performance. By my calculations, fewer than 5 percent of four-year programs had enough graduates to make a serious measurement attempt worthwhile. It’s safe to conclude that most of the 80,000+ bachelor’s degree programs in the United States are not producing trustable statistics about student learning, regardless of the nominal value of their assessment reports.
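Eubanks’s point about sample size can be made concrete with a quick back-of-the-envelope calculation. This sketch assumes a 100-point rubric and a 15-point standard deviation in student scores (both numbers are made up for illustration); the n = 8 comes from the IPEDS median he cites:

```python
import math

sd = 15.0   # assumed SD of rubric scores (illustrative, not from the article)
n = 8       # median number of graduates per program, per the IPEDS figure
t95 = 2.365 # two-sided 95% t critical value for df = n - 1 = 7

# standard error of the program's mean score, and the resulting
# 95% confidence-interval half-width
se = sd / math.sqrt(n)
margin = t95 * se
print(f"95% margin of error for the program mean: ±{margin:.1f} points")
```

Under these assumptions the margin of error is roughly ±12.5 points on a 100-point scale, which is wider than a full letter grade. A year-to-year “improvement” smaller than that is indistinguishable from noise, which is exactly why closing the loop on such data is not meaningful.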