What would sane assessment look like?

When Bad Assessment was in its formative stage, we considered a couple of different names.  One was “The Committee for Sane Assessment.”  In the end we settled on Bad Assessment, but something recently turned up on one of the assessment listservs that gives a sense of what a reasonable kind of assessment might look like.  Its author was, no surprise here, Dave Eubanks of Furman.  Here it is:

Latest Response to Worthen’s NYT Piece

In Inside Higher Ed today, Kate Drezek McConnell has a commentary entitled “What Assessment is Really About.”  To my mind this is the most nuanced and thoughtful of the responses to Worthen that I have seen.  It is directed mostly at assessment people and tries to make the case that there are types of assessment that actually work.  It is a much more substantive response than those that just take the “Well golly, those professors are sure a bunch of grinches” approach.  However, it still does not address the core issue: why is there no empirical evidence that assessment has caused positive changes in student learning?

The comments are pretty brutal, and I think they betray the real frustration that faculty feel with the direction that assessment has taken.  That said, it’s important to remember that it’s assessment that is the problem, not the people who do assessment.  The commenter who said something about thinking of assessment professionals while using his tommy gun in Call of Duty has lost sight of the nature of the problem.  That was a totally inappropriate comment.

NYT: How to Assess What Students Have Learned

In response to Molly Worthen’s article in the Times’s Sunday Review, Ted Mitchell, president of the American Council on Education, writes this:

To the Editor:

Re “No Way to Measure Students,” by Molly Worthen (Sunday Review, Feb. 23), criticizing “a bureaucratic behemoth known as learning outcomes assessment”:

Learning assessment in higher education is simply an effort to document that students have indeed learned something. More work for faculty? You bet. It’s a lot harder than giving out the As, Bs and Cs that have been the traditional measure of student success. But it’s also far more meaningful for students, parents, policymakers and employers.

As higher education costs climb and student borrowing increases, it should come as no surprise that colleges and universities are under more pressure to demonstrate what students have gained. Thanks to the work of many dedicated faculty members and accreditors, colleges and universities are providing a richer and more complete picture of student learning than in the past. This is important and worthwhile.

Sure, we can always do better. But the demand that colleges assess learning will not slacken. One hopes faculty members will lend a hand to these efforts.

TED MITCHELL, WASHINGTON

As always, assessment’s defenders refuse to engage with the substance of the criticism.

Accountability and the University in Canada

In an essay in the LA Review of Books called “Whose University Is It Anyway?”, Ron Srigley looks at the growing power of administrators, the shrinking autonomy of the faculty, and the shifting uses of accountability.  It seems Canadian universities are going through many of the same issues that we are experiencing here in the US.

A short excerpt from a very long article:

I propose a test. A favorite trope among the administrative castes is accountability. People must be held accountable, they tell us, particularly professors. Well, let’s take them at their word and hold them accountable. How have they done with the public trust since having assumed control of the university?

NYT on the “Misguided Drive to Measure ‘Learning Outcomes'”: Assessment criticism breaks out of the trade journals

Thus far, most, possibly all, of the direct criticism of assessment has appeared in journals read by academics and a few other insiders.  That changed today with the publication of Molly Worthen’s piece on assessment in the New York Times. Curiously, it’s in the Sunday Review, even though today is most certainly a Friday.  With any luck this will help put assessment on the public’s radar in a way that no number of articles in the Chronicle or Inside Higher Ed could.

From Worthen’s article:

If you thought this task [of figuring out what students have learned] required only low-tech materials like a pile of final exams and a red pen, you’re stuck in the 20th century. In 2018, more and more university administrators want campuswide, quantifiable data that reveal what skills students are learning. Their desire has fed a bureaucratic behemoth known as learning outcomes assessment. This elaborate, expensive, supposedly data-driven analysis seeks to translate the subtleties of the classroom into PowerPoint slides packed with statistics — in the hope of deflecting the charge that students pay too much for degrees that mean too little.

Accountability

An article in The Conversation today takes up the question of why efforts at college accountability fail so often.  Assessment is, of course, just one arm of the larger effort to hold colleges accountable.

The author identifies four basic reasons why accountability fails.

  1. Colleges are often subject to conflicting incentives, so one measure of accountability may be at odds with another.
  2. Policies can be gamed.  He points to strategies that colleges use to reduce the number of student loan defaults within the three-year window that the government monitors.  These strategies don’t reduce the overall default rate, and they result in higher student debt loads, but they make colleges look better on the accountability metrics.
  3. Unclear connections.  This one is closely linked to the problems with assessment.  If a third-grade teacher, who is the only teacher in the class, has students who consistently underperform, one might reasonably infer some link between the teacher’s performance and the students’ underperformance.  In a university, where students study under dozens of different faculty in different departments, it’s difficult to identify the source of student failings or successes.
  4. Politics as usual. Colleges that should be held accountable for their failings often avoid the consequences of the accountability project because they have political pull.

Student Evaluations and Course Quality

A new article by Indiana University sociologist Fabio Rojas for the James G. Martin Center summarizes the state of the scholarship on the relationship between student course evaluations and course quality as defined by student learning.  Like learning outcomes assessment, using student evaluations as a measure of course quality has an intuitive, common-sense appeal.  However, as with assessment, that intuitive appeal is not supported by the evidence.

Although a few early studies found a link between learning and positive course evaluations, multiple studies in the last decade have disputed those early results.

In terms of evaluating the value of student evaluations of teachers, the issue appears to be settled. Student evaluations are not a good way to measure learning, Uttl et al. argued in 2017. If one believes that evidence should be used to guide policy, the verdict is clear: abolish student evaluations.

A similar statement could be applied to the assessment component of the accountability project.  So far, though, it seems to have fallen on deaf ears.

Financial Times on the Tyranny of Metrics

From Tim Harford’s review of Muller in the FT:

The Tyranny of Metrics does us a service in briskly pulling together parallel arguments from economics, management science, philosophy and psychology along with examples from education, policing, medicine, business and the military. It makes the case for professional autonomy: that metrics should be tools in the hands of teachers, doctors and soldiers rather than tools in the hands of those who would oversee them. In an excellent final chapter, Muller summarises his argument thus: “measurement is not an alternative to judgement: measurement demands judgement: judgement about whether to measure, what to measure, how to evaluate the significance of what’s been measured, whether rewards and penalties will be attached to the results, and to whom to make the measurements available”.

Bob Shireman’s review of Muller is coming soon.

The Culture of Accountability in DC’s Public Schools: Lessons for Higher Education

Today’s National Review Online has an interesting article by Max Eden and Lindsey Burke.  The DC schools have been held up as an example of how a data-driven accountability system can lead to rapid improvements.  Some schools in DC have shown astonishing improvements in attendance, graduation rates, and other metrics beloved of the experts.  Eden and Burke point out that exposés by the Washington Post and NPR show that most of this “improvement” is attributable to fraud.