Hanswursthochschätzung

In his book Schottenfreude, Ben Schott created a dictionary of made-up German compound words devised to describe concepts for which English (and German) lack words. Thus “Dreikäsehochregression” describes “returning to your old school and finding everything feels so small.” “Flughafenbegrüßungsfreude” is “childish delight at being greeted at the airport.”

Hanswursthochschätzung (which he pieced together from the words for a dunderhead and esteem) means “the respect conferred on those who are conventionally wrong rather than unconventionally right.” He created the term to express an idea that Paul Krugman called “Serious Person Syndrome.”

From Krugman’s NYT blog:

Thus, you’re not considered serious on national security unless you bought the case for invading Iraq, even though the skeptics were completely right; you’re not considered a serious political commentator unless you dismissed all the things those reflexive anti-Bushists were saying, even though they all turn out to have been true; and you’re not considered serious about economic policy unless you dismissed warnings about a housing bubble and waved off worries about future crises.

I have often wondered why academic administrators continue to embrace assessment even though there is no evidence that it improves student learning or anything else. Serious Person Syndrome goes a long way toward explaining this phenomenon. As long as a critical mass of people in suits keep telling each other how important assessment is, academia will persist in this folly. After all, there is no wisdom quite as comforting as the conventional wisdom.

If there were a prize for argument by assertion…


I have had some time to reflect on the three most recent letters (there was an earlier one) that were sent to the Chronicle in response to my essay “An Insider’s Take on Assessment.”

One of them, from Josie Welsh at Missouri Southern, is largely consonant with my view on assessment. She asks why assessors ought to “trouble faculty to collect junk data when others have labored to produce empirically sound findings we can apply to the classroom?” I have long said that if people want to do real research on student learning, that’s a good thing, and faculty should seriously consider the results of careful, well-designed research on what works in the classroom.

New Batch of Responses to “An Insider’s Take on Assessment”

The Chronicle has published three more responses to “An Insider’s Take on Assessment.”

I have only looked at them briefly, but none seem to challenge my contention that there is no evidence that assessment has had a positive effect on student learning, or that the way assessment is done produces useful data. Rather, there seems to be a lot of goalpost shifting that cites the importance of getting faculty to think about what their goals are. In effect, most of them rehash the arguments advanced by Joan Hawthorne back in 2015. But more on that later.


Assessment Won’t Work in Business Schools Either

This 2016 article in the Journal of Management Education just came to my attention. In it, Donald Bacon and Kim Stewart argue that the small student sample sizes at many business schools mean that assessment results are not statistically valid. They conclude, much like Roscoe, that it would be better to use existing research for guidance than to make changes to programs based on bad data collected through assessment.

Citation and abstract are below.

Why Assessment Will Never Work at Many Business Schools: A Call for Better Utilization of Pedagogical Research

Donald R. Bacon, Kim A. Stewart


Journal of Management Education

Vol. 41, Issue 2, pp. 181–200

First Published May 9, 2016

On the long and arduous journey toward effective educational assessment, business schools have progressed in their ability to clearly state measurable learning goals and use direct measures of student learning. However, many schools are wrestling with the last stages of the journey—measuring present learning outcomes, implementing curricular/pedagogical changes, and then measuring postchange outcomes to determine if the implemented changes produced the desired effect. These last steps are particularly troublesome for a reason unrecognized in the assessment literature—inadequate statistical power caused primarily by the use of small student samples. Analyses presented here demonstrate that assessment efforts by smaller schools may never provide the statistical power required to obtain valid results in a reasonable time frame. Consequently, decisions on curricular and pedagogical change are too often based on inaccurate research findings. Rather than waste time and resources toward what essentially is a statistical dead end, an alternate approach is recommended: Schools should examine published pedagogical studies that use direct measures of learning with sufficient statistical power and utilize the findings to improve student learning.
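To get a rough sense of the power problem the abstract describes, here is a back-of-the-envelope sketch in Python using the statsmodels library. The numbers are purely illustrative and are not drawn from Bacon and Stewart’s analysis: a hypothetical pre-change vs. post-change comparison of two cohorts of 40 students each, with a modest true effect.

```python
# Illustrative only -- not Bacon and Stewart's analysis. Assumes a simple
# two-group comparison (pre-change vs. post-change cohorts) with a modest
# true effect and cohort sizes typical of a small program.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Hypothetical inputs: Cohen's d = 0.3 (a modest real improvement),
# 40 students per cohort, alpha = 0.05, two-sided test.
power = analysis.power(effect_size=0.3, nobs1=40, ratio=1.0, alpha=0.05)
print(f"Power with 40 students per cohort: {power:.2f}")  # about 0.26

# Students per cohort needed to reach the conventional 80% power threshold.
n_needed = analysis.solve_power(effect_size=0.3, power=0.8, ratio=1.0, alpha=0.05)
print(f"Students per cohort needed for 80% power: {n_needed:.0f}")  # about 175
```

Under those illustrative assumptions, a real improvement would be detected only about a quarter of the time, and reaching conventional power would require cohorts far larger than many small programs ever see, which is essentially the statistical dead end the authors describe.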

Time to Assess Assessment

A letter to the editor in the Chronicle.


It Is Time To Assess Assessment

To the Editor:

There’s no very civil way to say, “I told you so,” but then human beings seem gifted at ignoring what they have been told. Erik Gilbert, in his recent essay, “An Insider’s Take on Assessment: It May Be Worse Than You Thought” (The Chronicle, January 12), summarizes and recommends David Eubanks’ piece from the current issue of Intersection, a journal on assessment; the piece summarizes, among other things, what assessment has not done and does not do — and perhaps cannot ever do. Together with the extensive and negative comments section, it is an important and thoughtful audit of what is looking more and more like the leftover fragments of the assessment and accountability fads that have wasted so much of higher ed’s energy for the last thirty years.

Robert Birnbaum’s 2000 book, Management Fads in Higher Education, could also help us understand the sterility of these issues. And there is still no analysis of how assessment (and accountability more generally) depends on the wisdom of the assessors and so depends on philosophically alert cross-examination. Way back when the AAUP started its Journal of Academic Freedom, I wrote a piece on conceptual problems with assessment. Friends told me that I was too late, the train had already left the station. Now we are all having our noses rubbed in the remnants of dumb accountabilities, but perhaps we will be more careful next time and will listen to more thoughtful cross-examination. It is time for us to assess what our assessment work has accomplished. We can do better now if we will only stop.

John W. Powell
Professor of Philosophy
Humboldt State University
Arcata, Calif.

Another Article Challenges Assessment on Data Quality

I have just become aware of an article in the AAC&U’s journal Liberal Education that makes an argument strikingly similar to Dave Eubanks’ recent article “A Guide for the Perplexed.” In it, Douglas D. Roscoe echoes Eubanks’ suggestion that the data used in assessment are of very low quality. Much of Roscoe’s argument is couched in cost-benefit terms, and he finds assessment’s costs to outweigh its benefits.


The problem is that assessment data can be either cheap or good, but are rarely both. The fact is that good evidence about student learning is costly.


As an example of how difficult it is to produce a meaningful picture of how to improve a program or course, he offers this:


Even if the assessment data can be used to narrow our focus for improvement, they don’t really tell us how to improve. To do that, we need other data that tell us about the educational experiences of the students. We need to know what classes students took, in what order, and what they did in their classes; we need to know what the assignments they completed were like and what the instructors did to support their learning; we need to know what kind of cocurricular experiences they had; and we need to know something about the individual students themselves, like their work habits, natural intelligence, attitudes about their education, and mental health. These data, correlated with student outcomes data, would show us what works and what should be more broadly implemented.

He then points out that all this has already been done, maybe not for your specific program, but by educational research on higher ed in general. He suggests that if the goal is to improve teaching in higher ed, it would be cheaper and more effective to use the results of that research rather than attempt the endless collection of poor-quality data.

Unfortunately, Roscoe still seems to be wedded to a top-down approach to the “improvement paradigm” that he hopes will replace assessment. Noting that what’s most valuable about assessment now is the conversations it creates among faculty about ways to improve programs, he suggests this:

…it would be far better to require regular department discussions about how to improve student learning. Deans might require a report of minutes from these meetings, rather than a report on what the assessment data showed.

A lot of “requires” here. I agree that these types of conversations are useful, and when real improvement happens in academic programs, it usually begins with faculty talking of their own volition about their programs. But mandatory meetings with minutes sent to the dean seem like the antithesis of a faculty-centered improvement plan. I can easily see something like that being pencil-whipped in a couple of minutes a few times a semester. The real discussions would still go on, but they wouldn’t happen in the forced setting of a required brainstorming session.

Like Eubanks, Roscoe is an assessment guy. That two assessment insiders have made such similar arguments in the last couple of months suggests that there may be a change afoot in the assessment world. If assessors are starting to question the efficacy of their work, how much longer will the accreditors cling to assessment?


Jerry Muller on the tyranny of metrics

Jerry Muller has been writing and thinking about the consequences of what he calls the “culture of accountability” at least since he published this article in the American Interest. Now he has a piece in the Chronicle called “The Tyranny of Metrics,” which is also the title of his new book, due out next month. It deals with learning outcomes assessment (harshly) but also looks at other types of metrics employed in higher education. Of assessment he says:

Metric fixation, which seems immune to evidence that it frequently doesn’t work, has elements of a cult. Studies that demonstrate its lack of effectiveness are either ignored or met with the claim that what is needed are more data. Metric fixation, which aspires to imitate science, resembles faith.

Assessment in the News: Dave Eubanks questions data quality in assessment

In the fall issue of Intersection, the journal of the Association for the Assessment of Learning in Higher Education, Dave Eubanks of Furman University offers an insider’s perspective on the failure of assessment to fulfill its advocates’ expectation that it would improve student learning. Eubanks argues that the scale at which assessment is done causes the data that are collected to be of very low quality. Trying to improve courses or programs based on bad data is, not surprisingly, a fool’s errand.


The article can be found here. The entire issue appears as a single page, so you need to scroll down a bit to get to the article.

My own take on the implications of Eubanks’ argument is in this article in the Chronicle.