Another Article Challenges Assessment on Data Quality

I have just become aware of an article in the AAC&U's journal Liberal Education that makes an argument strikingly similar to Dave Eubanks' recent article "A Guide for the Perplexed." In it, Douglas D. Roscoe echoes Eubanks' suggestion that the data used in assessment are of very low quality. Much of Roscoe's argument is couched in terms of costs and benefits, and he finds that assessment's costs outweigh its benefits.


The problem is that assessment data can be either cheap or good, but are rarely both. The fact is that good evidence about student learning is costly.


As an example of how difficult it is to produce a meaningful picture of how to improve a program or course, he offers this:


Even if the assessment data can be used to narrow our focus for improvement, they don’t really tell us how to improve. To do that, we need other data that tell us about the educational experiences of the students. We need to know what classes students took, in what order, and what they did in their classes; we need to know what the assignments they completed were like and what the instructors did to support their learning; we need to know what kind of cocurricular experiences they had; and we need to know something about the individual students themselves, like their work habits, natural intelligence, attitudes about their education, and mental health. These data, correlated with student outcomes data, would show us what works and what should be more broadly implemented.

He then points out that all this has already been done, maybe not specifically for your program, but by educational research on higher ed in general. He suggests that if the goal is to improve higher ed teaching, it would be cheaper and more effective to simply use the existing results of research on higher education rather than attempt the endless collection of poor-quality data.

Unfortunately, Roscoe still seems to be wedded to a top-down approach to the "improvement paradigm" that he hopes will replace assessment. Noting that what's most valuable about assessment now is the conversations it creates among faculty about ways to improve programs, he suggests this:

…it would be far better to require regular department discussions about how to improve student learning. Deans might require a report of minutes from these meetings, rather than a report on what the assessment data showed.

A lot of "requires" here. I agree that these types of conversations are useful, and when real improvement happens in academic programs, it usually begins with faculty talking of their own volition about their programs. But mandatory meetings with minutes sent to the dean seem like the antithesis of a faculty-centered improvement plan. I can easily see something like that being pencil-whipped in a couple of minutes a few times a semester. The real discussions would still go on, but they would not happen in the forced setting of a required brainstorming session.

Like Eubanks, Roscoe is an assessment guy. That two assessment insiders have made such similar arguments in the last couple of months suggests that there may be a change afoot in the assessment world. If assessors are starting to question the efficacy of their work, how much longer will the accreditors cling to assessment?