An article in IHE summarizes a study by Campus Labs (a maker of assessment software) that revealed a worrisome mismatch between institutional learning outcomes and program learning outcomes. The article also had this tidbit:
Perhaps the biggest concern cited by the researchers was the emphasis (or lack thereof) that institutions seemed to put on quantitative reasoning, which appeared far down the list of outcomes that colleges and programs sought to measure.
It's interesting that a company whose business model requires assessment offices to suspend their disbelief about quantitative matters wants more education in that area.
The best comment comes from David Eubanks:
This is a nice study of the words used in learning outcomes statements. I particularly appreciated the inclusion of the regex code used to categorize them. However, there is a large gap in reasoning here, viz., that all those words actually relate to what happens on campuses, including what students learn. The report mentions the importance of data quality, e.g. “The quality of analysis is first contingent upon the quality of data.” Here I mean all those numerical assessments of learning that the platform houses.
What is the quality of the actual assessment data? It is almost certainly very poor, given the constraints of small samples and non-expert analysis. Even summary statistics like number of samples per outcome/time and corresponding range and standard deviation of numerical measures would be helpful. In how many cases is it possible to detect student growth over time, which I would assume is the intended sign of learning? My guess is that Campus Labs could just incorporate a push-button random number generator and save a lot of people a lot of time in regrading papers with rubrics and uploading the numbers.
It’s ironic that one of the main findings is that there isn’t enough quantitative thinking.
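For anyone curious what that kind of regex categorization looks like, here is a minimal sketch in Python. The keyword patterns are my own guesses for illustration, not the expressions the study actually used:

```python
import re

# Illustrative keyword patterns for categorizing learning-outcome statements.
# These are guesses for demonstration, not the study's actual regex code.
CATEGORY_PATTERNS = {
    "quantitative reasoning": re.compile(r"\b(quantitativ\w*|numerac\w*|statistic\w*|mathemat\w*)\b", re.I),
    "critical thinking": re.compile(r"\b(critical(ly)? think\w*|analyz\w*|evaluat\w*)\b", re.I),
    "communication": re.compile(r"\b(writ\w*|oral|present\w*|communicat\w*)\b", re.I),
}

def categorize(outcome: str) -> list[str]:
    """Return every category whose pattern matches the outcome statement."""
    return [name for name, pat in CATEGORY_PATTERNS.items() if pat.search(outcome)]

if __name__ == "__main__":
    statements = [
        "Students will communicate effectively in written and oral forms.",
        "Students will apply statistical methods to interpret data.",
    ]
    for s in statements:
        print(s, "->", categorize(s) or ["uncategorized"])
```

That is roughly the whole trick: keyword matching tells you what institutions say they measure, which is exactly why it can't tell you anything about the quality of the numbers sitting underneath.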