Shireman on Jerry Muller’s Tyranny of Metrics


By Robert Shireman

Starting in 2000 and repeating every year for most of the decade, a distinguished national committee chaired by former North Carolina Governor James B. Hunt Jr., a Democrat, released an annual report card on higher education. Color-coded maps showed every state’s A-F grade in each of several categories, including college affordability, participation, and graduation. In one category, though, every state, every year, got an “Incomplete.” That category, the one in which every state was deficient, was “learning,” because, as the authors complained, there is “no nationwide approach to assessing learning” in college, “no common benchmarks that would permit state comparisons of the knowledge and skills of college students.”

Taking a cue from Hunt’s effort, Secretary of Education Margaret Spellings, a Republican, launched her own blue-ribbon commission in 2006. The Spellings Commission and the regulatory process that followed threatened to require accrediting agencies, as gatekeepers to federal funding, to set standards for student learning “outcomes” and to make it possible to compare learning across colleges. The negative reaction was swift and noisy, raising the specter of a higher education version of No Child Left Behind. Spellings ultimately relented, but only after the accreditors said that they would require colleges to track these mysterious things labeled “student learning outcomes.”

The accreditors followed up by requiring colleges to have lists of learning outcomes for all courses and degrees. At the colleges, the reaction was frequently confusion, according to a post-mortem from an accreditor that had drunk the Kool-Aid even before the Spellings Commission (that accreditor is now doing a “reboot”). What is the difference between the course objectives that faculty have always discussed and these so-called outcomes? What is “assessment” of the outcomes, if not the assignments and tests that faculty members already use to determine (er, assess) whether students are grasping the concepts and absorbing the knowledge?

Into this maelstrom walked history professor Jerry Z. Muller. As chair of his department at Catholic University, Muller saw his role as mentoring faculty members and helping them develop as scholars and as teachers. But demands from the school’s accrediting agency forced him to spend time putting together reports using instruments that “added no useful insights to our previous measuring instruments, namely grades.” Over time, the college hired ever more data specialists and even created a position of vice president for assessment, all to do what seemed like busywork.

Muller wondered what would cause a bureaucracy to engage in such seemingly useless data exercises. So he started digging, and the result of his research is his latest book, The Tyranny of Metrics. I have to admit that my first reaction to the book, before I opened it, was skeptical. It was the hard-hitting title that worried me. One of the reasons it has taken so long for valid criticism of the outcomes-assessment regimes to be heard and heeded is that promoters dismissed faculty frustrations as obstinacy or arrogance. As the assessment consultants saw it, their insistence on an aligned, consensus-based learning systems paradigm (whatever that is) made perfect sense; the problem was self-serving faculty who did not want to be held accountable. The “tyranny” in the book’s title sounded to me like the professorial audience member in a scene from the HBO series “Silicon Valley,” who yells “Fascist!” in response to a character’s criticism of higher education. He may have had a point, but he did not make it effectively.

Fortunately, The Tyranny of Metrics is not a tirade. Far from it: in an eminently readable and well-reasoned treatise, Muller places the desire for measures in higher education in a broader historical and societal context. He shows that there is a long history of attempts to scientize and quantify, and that both the embrace and the criticism of such efforts run across the political spectrum.

Tyranny’s criticism of the use of metrics is constructive. It even concludes with a checklist to help the practitioner (all of us, really) decide when to rely on numbers alone and when to rely on expert judgment to interpret the meaning of the numbers in context. And on the way to that conclusion, Muller provides sobering stories of well-intentioned attempts to insert metrics into health care, policing, defense, and foreign aid that ultimately led to disappointment or worse, owing to the common flaws that arise whenever metrics replace, rather than supplement, human judgment.

While higher education policies are covered, Muller does not proclaim that education deserves special treatment. He certainly could have: measurement in education is particularly difficult, and the heckler in that Silicon Valley episode was right when he said that “the true measure of a college education is intangible.” Instead, Muller shows the reader that even something as numbers-based as financial markets can go wrong when supposedly objective factors completely crowd out judgment. His example is the 2008 mortgage meltdown, which was caused in large part by formulas replacing, rather than supplementing, local judgment in determining creditworthiness and home values.

There is a lot of energy and enthusiasm behind Moneyball-style efforts to use data to steer improvements in schools, colleges, and social programs. The instincts behind those efforts are honorable, and there are certainly ways that greater use of data can be helpful. Muller’s book should be used to humble and refine those efforts, for the benefit of all of us.