When Bad Assessment was in its formative stage, we considered a couple of different names. One was “The Committee for Sane Assessment.” In the end we settled on Bad Assessment, but something recently turned up on one of the assessment listservs that gives an idea of what a reasonable approach to assessment might look like. Its author was, no surprise here, Dave Eubanks of Furman. Here it is:
For non-assessment people, “IE” refers to “Institutional Effectiveness.”
Hi all,
In conversations, it is clear that not everyone experiences the same thing when we talk about “assessment,” but I think a substantial amount of faculty anger is generated by a mechanism with the following steps:
1. Accreditation IE reviewers, with the best intentions and filled with the zeal of service, use a narrow checkbox approach to evaluating assessment, relying on a kind of dogma instead of common sense and an understanding of institutional characteristics and goals.
2. As a consequence, more than 40% of institutions in large areas of the US are routinely found out of compliance. The non-compliance findings can rest on bewildering and baroque reasons; in some cases the IE reviewer can essentially make up new rules.
3. As a learned response, the home IE director has to cover every base that has a checkbox attached. This reduces the useful philosophy of assessment to formulas and forms, at least much of the time, and for many institutions it is necessary in order to pass the accreditation IE review.
4. During this scramble for compliance, faculty are pressured to create processes, generate data, and make decisions that show documentable activity. Because there are so many programs to report on, the results are often inauthentic and formulaic. The dogma and checkboxes get pushed down to the faculty, who are the ones who have to be made to do the work.
5. The dogma and formulas don’t reflect good assessment practice (which relies on good relations with faculty and serious consideration of their views), so faculty see the exercise, rightly, as a waste of time. If you read the IHE (Inside Higher Ed) comments, many of them say this. They are howling with outrage at being treated this way.
6. The long-suffering home IE director finally, after much effort, gets all the boxes checked, and compliance is achieved, but at the cost of alienating much of the faculty.
7. The home IE director is invited to be an IE reviewer for another institution, where he or she goes off to inflict the same pain on someone else.
This happens a lot where I’m from. If this sounds totally alien to you, then count your lucky stars, but it still affects you, because the angry vociferation will not abate until we break this cycle. It’s not the accreditors, it’s us. We have to take the accreditation standards and interpret them in such a way that a healthy cycle of review can replace the one I described. The remedy is one that we control ourselves: just stop being a jerk when you do a review. Here’s how not to be a jerk.
1. No, the institution doesn’t have 100% compliance. Yours doesn’t either. That’s not the issue. Stop judging each program as in or out of compliance and look at the big picture.
2. Data quality can’t hold up at this volume of activity, so give up on all those dogmatic rules you learned, like “grades are bad and rubrics are good.”
3. Let the institution define learning, outcomes, and data-gathering in whatever way makes sense to it. It should be able to articulate some kind of vision or approach, but don’t reduce the review to the verb tense of learning outcomes.
4. “Closing the loop” may take many forms, and not every program is going to have an assessment epiphany. Don’t force them to invent something to talk about.
I invite others to contribute to the list, or share a story about this kind of treatment.
Regards,
dave