Eubanks on “Weaponized Learning Outcomes”

Dave Eubanks has a new guest post in Inside Higher Ed.  In it he recounts his experience consulting on a court case in Tonga where an accreditor tried to shut down a university over the alleged shortcomings of its assessment program.  Unfortunately (for him) it sounds like he gave his testimony over Skype and so did not get a trip to Tonga out of it.

Dave, who is the Assistant VP for Institutional Effectiveness at Furman, is a mathematician.  Thus, unlike most other people in the assessment trade, he is quite knowledgeable (and concerned) about stats, research methods, and data quality.

He worries that assessment’s reliance on learning outcomes statements (what Bob Shireman called “Blurbs with Verbs”) has turned assessment into an exercise in meaningless box checking.

The assessment bureaucracy—those periodic checkboxy reports—can only be justified if the formal learning outcome statements and their standardized assessments are superior to the native ways faculty know their students. Otherwise we could just ask faculty how the students are doing and use course registrations and grades for data. We could look at the table of contents to find the learning outcomes.

In the article he gives a list of the benefits that assessment offices provide.  I am not sure that “benefits” is the word I would use to describe these things, but I am not someone who works with other assessment people on a regular basis.

The list:

The benefits your office probably already provides include:

  • Facilitation of external program review. This is the natural extension of faculty ways of knowing and is the most authentic way to understand a program, considering facilities, budgets, faculty numbers and qualifications, curricula, and reviewing samples of student work, for example.
  • Being an internal consultant for program development, e.g. leading discussions of curriculum coherence or identifying intuitive learning goals that span courses. This leads to more agreement about what students should be accomplishing, and helps the faculty’s natural language converge.
  • Summarizing or modeling data, when there’s enough of it to work with.
  • Coordinating assessment reporting for regulatory purposes using cookie-cutter forms, often entered into expensive software systems.

The last one is the most expensive and time-consuming but provides the least benefit to the institution. We need to get out of the checkbox-reporting business, and the sooner the better.

As a faculty member, my sense is that for most of us, the last bullet point is how we most often encounter the assessment office.

He would like to see assessment offices start taking grades seriously as a form of data.

Course grades don’t fit nicely into the learning outcome ideology. You may have been told that they provide only “indirect evidence” and are not useful as primary data for understanding learning. This is, of course, preposterous. Here are some questions you could start with:

  • What is the distribution of academic performance among students by demographic?

  • What is the distribution of course difficulty by program or courses within programs?

  • Does learning suffer when students wait to take introductory courses?

  • How reliable are grade assignments by program?

  • How well do grades predict other things you care about, like standardized tests or other assessment data, internship evaluations, and outcomes after graduation?

If assessment offices focused on this type of stuff, I think they would get a lot more support from faculty.
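
For anyone curious what the first couple of those questions might look like in practice, here is a rough sketch using pandas.  It assumes a hypothetical grades.csv with student, program, course, letter grade, and demographic columns; the file name, column names, and grade scale are all made up for illustration.

```python
# Rough sketch: grade distributions by demographic and course "difficulty"
# by program, assuming a hypothetical grades.csv with columns:
# student_id, program, course, grade, demographic.
import pandas as pd

GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

grades = pd.read_csv("grades.csv")
grades["points"] = grades["grade"].map(GRADE_POINTS)

# Distribution of academic performance by demographic group
by_demographic = grades.groupby("demographic")["points"].describe()
print(by_demographic)

# Course "difficulty" within each program, proxied by mean grade points
by_course = (
    grades.groupby(["program", "course"])["points"]
    .agg(["mean", "std", "count"])
    .sort_values("mean")
)
print(by_course)
```

Nothing fancy, but it is the kind of question grades can actually answer without writing a single learning outcome statement.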

So, read the article and visit his blog.