Assessment Won’t Work in Business Schools Either

This 2016 article in the Journal of Management Education just came to my attention. In it, Donald Bacon and Kim Stewart argue that the small student samples at many business schools mean that assessment results are not statistically valid. They conclude, much like Roscoe, that it would be better to use existing research for guidance than to make changes to programs based on bad data collected through assessment.

Citation and abstract are below.

Why Assessment Will Never Work at Many Business Schools: A Call for Better Utilization of Pedagogical Research

Donald R. Bacon, Kim A. Stewart


Journal of Management Education

Vol. 41, Issue 2, pp. 181–200

First Published May 9, 2016

On the long and arduous journey toward effective educational assessment, business schools have progressed in their ability to clearly state measurable learning goals and use direct measures of student learning. However, many schools are wrestling with the last stages of the journey—measuring present learning outcomes, implementing curricular/pedagogical changes, and then measuring postchange outcomes to determine if the implemented changes produced the desired effect. These last steps are particularly troublesome for a reason unrecognized in the assessment literature—inadequate statistical power caused primarily by the use of small student samples. Analyses presented here demonstrate that assessment efforts by smaller schools may never provide the statistical power required to obtain valid results in a reasonable time frame. Consequently, decisions on curricular and pedagogical change are too often based on inaccurate research findings. Rather than waste time and resources toward what essentially is a statistical dead end, an alternate approach is recommended: Schools should examine published pedagogical studies that use direct measures of learning with sufficient statistical power and utilize the findings to improve student learning.
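
To see why small samples are such a problem, here is a minimal, purely illustrative sketch (not from Bacon and Stewart's article) that uses Python's statsmodels package to estimate the statistical power of a simple two-cohort comparison. The effect size, cohort sizes, and alpha below are assumptions chosen for illustration, not figures from the paper.

# A rough illustration of the low-power problem that Bacon and Stewart describe.
# The effect size, cohort sizes, and alpha here are hypothetical values chosen
# for illustration, not figures taken from their article.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
effect_size = 0.3   # assumed small-to-medium improvement (Cohen's d)
alpha = 0.05        # conventional significance level

for n_per_cohort in (15, 30, 60, 120, 500):
    power = analysis.solve_power(effect_size=effect_size,
                                 nobs1=n_per_cohort,
                                 alpha=alpha,
                                 ratio=1.0)
    print(f"n per cohort = {n_per_cohort:4d}  ->  power = {power:.2f}")

# With cohorts of 15-30 students (common in program-level assessment), power
# stays far below the usual 0.8 target, so a real improvement of this size
# would usually go undetected.

With numbers like these, a pre/post comparison after a curricular change will mostly report "no significant difference" even when the change worked, which is the statistical dead end the abstract refers to.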

Time to Assess Assessment

A letter to the editor in the Chronicle.


It Is Time To Assess Assessment

To the Editor:

There’s no very civil way to say, “I told you so,” but then human beings seem gifted at ignoring what they have been told. Erik Gilbert, in his recent essay, “An Insider’s Take on Assessment: It May Be Worse Than You Thought” (The Chronicle, January 12), summarizes and recommends David Eubanks’ piece from the current issue of Intersection, a journal on assessment; the piece summarizes, among other things, what assessment has not done and does not do — and perhaps cannot ever do. Together with the extensive and negative comments section, it is an important and thoughtful audit of what is looking more and more like the leftover fragments of the assessment and accountability fads that have wasted so much of higher ed’s energy for the last thirty years.

Robert Birnbaum’s 2000 book, Management Fads in Higher Education, could also help us understand the sterility of these issues. And there is still no analysis of how assessment (and accountability more generally) depends on the wisdom of the assessors and so depends on philosophically alert cross-examination. Way back when the AAUP started its Journal of Academic Freedom, I wrote a piece on conceptual problems with assessment. Friends told me that I was too late, the train had already left the station. Now we are all having our noses rubbed in the remnants of dumb accountabilities, but perhaps we will be more careful next time and will listen to more thoughtful cross-examination. It is time for us to assess what our assessment work has accomplished. We can do better now if we will only stop.

John W. Powell
Professor of Philosophy
Humboldt State University
Arcata, Calif.

Another Article Challenges Assessment on Data Quality

I have just become aware of an article in the AAC&U’s journal Liberal Education that makes an argument strikingly similar to Dave Eubanks’ recent piece “A Guide for the Perplexed.” In it, Douglas D. Roscoe echoes Eubanks’ suggestion that the data used in assessment are of very low quality. Much of Roscoe’s argument is couched in terms of costs and benefits, and he finds that assessment’s costs outweigh its benefits.


The problem is that assessment data can be either cheap or good, but are rarely both. The fact is that good evidence about student learning is costly.


As an example of how difficult it is to produce a meaningful picture of how to improve a program or course, he offers this:


Even if the assessment data can be used to narrow our focus for improvement, they don’t really tell us how to improve. To do that, we need other data that tell us about the educational experiences of the students. We need to know what classes students took, in what order, and what they did in their classes; we need to know what the assignments they completed were like and what the instructors did to support their learning; we need to know what kind of cocurricular experiences they had; and we need to know something about the individual students themselves, like their work habits, natural intelligence, attitudes about their education, and mental health. These data, correlated with student outcomes data, would show us what works and what should be more broadly implemented.

He then points out that all of this has already been done, maybe not specifically for your program, but by educational research on higher ed in general. He suggests that if the goal is to improve teaching in higher ed, it would be cheaper and more effective to use the results of that research rather than attempt the endless collection of poor-quality data.

Unfortunately, Roscoe still seems to be wedded to a top-down approach to the “improvement paradigm” that he hopes will replace assessment. Noting that what’s most valuable about assessment now is the conversations it creates among faculty about ways to improve programs, he suggests this:

…it would be far better to require regular department discussions about how to improve student learning. Deans might require a report of minutes from these meetings, rather than a report on what the assessment data showed.

A lot of “requires” here. I agree that these kinds of conversations are useful, and when real improvement happens in academic programs, it usually begins with faculty talking about their programs of their own volition. But mandatory meetings with minutes sent to the dean seem like the antithesis of a faculty-centered improvement plan. I can easily see something like that being pencil-whipped in a couple of minutes a few times a semester. The real discussions would still go on, but they wouldn’t happen in the forced setting of a required brainstorming session.

Like Eubanks, Roscoe is an assessment guy. That two assessment insiders have made such similar arguments in the last couple of months suggests that there may be a change afoot in the assessment world. If assessors are starting to question the efficacy of their work, how much longer will the accreditors cling to assessment?


Jerry Muller on the tyranny of metrics

Jerry Muller has been writing and thinking about the consequences of what he calls the “culture of accountability” at least since he published this article in The American Interest. Now he has a piece in the Chronicle called “The Tyranny of Metrics,” which is also the title of his new book, due out next month. It deals with learning outcomes assessment (harshly) but also looks at other types of metrics that are employed in higher education. Of assessment he says:

Metric fixation, which seems immune to evidence that it frequently doesn’t work, has elements of a cult. Studies that demonstrate its lack of effectiveness are either ignored or met with the claim that what is needed are more data. Metric fixation, which aspires to imitate science, resembles faith.

Assessment in the News: Dave Eubanks questions data quality in assessment

In the Fall issue of Intersection, the journal of the Association for the Assessment of Learning in Higher Education, Dave Eubanks of Furman University offers an insider’s perspective on the failure of assessment to fulfill its advocates’ expectation that it would improve student learning. Eubanks argues that the scale at which assessment is done causes the data that are collected to be of very low quality. Trying to improve courses or programs based on bad data is, not surprisingly, a fool’s errand.


The article can be found here. The entire issue appears as a single page, so you will need to scroll down a bit to get to the article.

I discuss the implications of Eubanks’ argument in this article in the Chronicle.

Dual Enrollment piece on NPR’s Marketplace Weekend

Dual enrollment is growing by leaps and bounds. According to this report from Marketplace, over 1 million students are taking college courses in high school. It’s a good thing that the press is starting to look at dual enrollment. Compared to an issue like distance learning, which gets lots of attention, dual enrollment has been largely ignored, even though I would argue that it is at least as transformative as distance education.

For a more critical take on the subject, here is an article by Bad Assessment contributor Erik Gilbert.

Welcome to Bad Assessment

The demand for accountability in higher education has led to a cottage industry of “assessment experts” who claim to be able to measure, add, and compare what happens inside and outside of classrooms. While some approaches may have merit, in practice many of the schemes are useless (like counting angels) or potentially even harmful. Inside an academic discipline, an idea without merit would be met with study and dialogue that would prevent it from gaining traction. However, because these counting exercises are being promoted by accrediting agencies with enormous power over colleges, administrators and faculty are reluctant to object. Instead, they grudgingly comply and hire staff to check all the bureaucratic boxes. Meanwhile, meetings about these assessment schemes, often financed by well-meaning foundations, are usually attended only by the true believers (often the assessment staff hired by the colleges to implement them), leading the attendees to have even greater confidence that they are on the righteous path.