I can’t figure out how to make comments visible so I am posting the comments that came in regarding Moloch below.
NAFSA podcast with Molly Worthen
Molly Worthen was a guest on a NAFSA podcast about assessment. The link to the transcript is here. Interestingly, I can’t tell from the NAFSA website what NAFSA stands for.
Paul Valery
Stop Sacrificing Children to Moloch
As someone who has been publicly critical of learning outcomes assessment for a long time, one of the questions I am often asked is: “If you are so opposed to assessment, what would you replace it with?” By way of an answer I have started resorting to this fable:
Imagine that you live in a Bronze Age village. You and everyone else in the village depend for your livelihood on subsistence farming, so you have a keen interest in the success of your crops. Because of that, you have developed a good sense of when to plant what crop, what types of soil work best with specific crops, when to weed, when to harvest, and so on. It's not scientific knowledge, but it works pretty well. Still, you are always on the lookout for ways of improving your yields. Continue reading "Stop Sacrificing Children to Moloch"
Bad Assessment on the Road this Summer
If you have been waiting for an opportunity to talk about your doubts and concerns about assessment this summer (and who would not want to use valuable summer travel time to talk about assessment), you have several opportunities in the next couple of months.
On Friday, May 4th, I will be a panelist at a meeting of the San Francisco State University chapter of the California Faculty Association. The theme of the conference is "Resisting the Neoliberal University." I will be on the morning plenary at 9:00 for the session on the "Mechanics of Managerial Takeover" and then will be in a breakout session called "Tools of Control and Authority."
Then, in early June, there will be two panels at the Association for the Assessment of Learning in Higher Education (AALHE) meeting in Salt Lake City that should be of interest to assessment doubters. Both are on June 6 and feature me, Bob Shireman, Lynn Priddy, Dave Eubanks, and Josie Welsh. The first session, at 8:00, will be on "Identifying Problems in Assessment" and the second, at 10:30, will be about "Identifying Solutions to Problems in Assessment."
The Persistence of Dubious Ideas
The Atlantic just ran an article by Olga Khazan that traces the history of the idea that people have distinctive learning styles. The notion first emerged in the 1990s and rapidly became popular with students and teachers because it seemed to offer both an explanation for why some students failed to do well in school and a solution to that problem. Students failed, the argument went, because their teachers’ instruction did not match their learning styles. The solution was to ensure that auditory learners got auditory instruction, visual learners got visual instruction, and so on. It’s an appealing idea and makes a sort of intuitive sense.
Unfortunately, the research does not support it. Learning styles have been debunked repeatedly, most recently here. Continue reading “The Persistence of Dubious Ideas”
Assessment and History
There was an article in Inside Higher Ed yesterday by Sam Wineburg, Joel Breakstone, and Mark Smith, all of the Stanford History Education Group. In it they argue that historians don't like to do assessment. They are right about that. They are also correct that many historians claim that history courses teach critical thinking.
In an effort to assess this claim, they have tested several groups of students on their ability to use historical evidence. They found that few could do it. Their conclusion is that students are not learning the critical thinking skills that historians claim that their courses teach. Continue reading “Assessment and History”
Letter to the Chronicle on Assessing Student Learning
The Chronicle has published a letter from Don Fader at Alabama. He offers a critique of some studies on class size that were summarized in an article that was published in the Chronicle. Best line:
Among other things, the studies described in this article seem to base their evaluation on student “self-reported learning outcomes.” What the heck does that mean? What students think they learned?
It’s not just learning outcomes assessment that suffers from data quality issues. Much of the research on education (and many other fields) suffers from a willingness to use, to make inferences from, and to publish based on dubious data.
Shireman on Jerry Muller’s Tyranny of Metrics
By Robert Shireman
Starting in 2000 and repeating every year for most of the decade, a distinguished national committee chaired by former North Carolina Governor James B. Hunt Jr., a Democrat, released an annual report card on higher education. Color-coded maps showed every state's A-F grade in each of several categories, including college affordability, participation, and graduation. In one category, though, every state, every year, got an "Incomplete." That category, in which every state was deficient, was "learning," because, as the authors complained, there is "no nationwide approach to assessing learning" in college, "no common benchmarks that would permit state comparisons of the knowledge and skills of college students." Continue reading "Shireman on Jerry Muller's Tyranny of Metrics"
Latest from NILOA
The National Institute for Learning Outcomes Assessment has published a list of material they see as constituting a response to Molly Worthen and, to a lesser extent, me. To me (confirmation bias alert), it sounds very defensive and I am not sure how carefully they have read the articles that they posted. Some of them, while not agreeing entirely with Worthen, seem to accept much of her argument. Eubanks and McConnell are examples of this and I am not sure why Margaret Spellings’ recent response to Bryan Caplan is included at all.
I like the farmer analogy, but it fails to apply correctly in this case. Let me rework it so it matches the true state of affairs.
For decades and decades, farmers went out in their fields and worked to produce the best crop they could. Unfortunately, few farmers had any data on the quality and quantity of their output other than anecdotes. "I once had an ear of corn," said one farmer, "who went to Oxford!" Another farmer pointed out, "Nearly all of my corn gets eaten right away! We must be doing a great job as farmers."
Although the farmers repeatedly assured the government and the community members that they were providing a good service – just "trust us," they would tell them – the rising cost of grain, which pushed many families deep into debt, and the tax credits used to prop up farmers in years of bad weather drew increasing ire from the government and taxpayers. So naturally, the government and the community members began demanding that the farmers prove that they were providing a real benefit to the community.
The farmers were appalled, insulted, and irritated that anyone would dare to doubt their expertise in farming. They had, in fact, studied farming intensely and worked long hours in difficult conditions out in the fields. The farmers also pointed out that they used farming practices that were widely accepted and commonly known to be effective. Yet this was not enough for the government and community members – they wanted to see evidence that the farms produced high-quality crops of sufficient quantity to feed their hungry community.
At the same time, some farmers had long suspected that they could grow more and better crops by modifying their farming practices. They knew there were differences in rain, sunlight, and pests each year, but did not have good data on the relationship between those conditions and the quality and quantity of their crops. So these farmers developed grading scales to evaluate the quality of the crop produced and insisted on using similarly sized baskets to gauge the quantity of crops produced each year. Now, the baskets sometimes varied a bit in size, and the grading scales were sometimes applied inconsistently, but the measures still gave farmers a better idea of what they were producing than they would have had otherwise.
Farmers tried lots of different things to improve the quality and quantity of their crops. Some prayed at the temple, some sacrificed children, some altered their planting dates, some added extra water in dry months, and others tried other activities. But since the farmers were collecting information about the quality and quantity of their yields, it was easy to determine, over time, which of these practices were making a difference. Some farmers changed nothing about their farming practices and complained that the act of measuring the quality and quantity of their crops did not lead directly to better crops. Of course, these farmers were also correct – by doing nothing differently, other than measuring their output, they had little reason to expect that their results would change.
Assessment charts a similar path. For too long, our response to concerns about the quality and value of our degrees and programs was simply "trust us." As a result, government, parents, taxpayers, and others began demanding that we provide some evidence that we were actually producing something of value. Grade data (which can be easily manipulated by instructors) and anecdotes simply weren't sufficient (for every positive anecdote it was easy to find three negative ones). At the same time, there was a growing recognition that degree program quality could be improved by using some of the same information provided for accountability purposes. Assessment is not perfect by any means and isn't always effective – but neither does measuring the output of farmers always produce gains in quality and quantity.
I have heard requests for "proof" that assessment produces gains in learning. To me this is a bit of a nonsensical request. It's like asking to compare the output of a farmer who is measuring the quality and quantity of his output with that of a farmer who isn't (to what, then, do you compare? You can't compare something to the null.). Or, let's say that as the researcher you volunteer to measure the output of the second farmer. This improves your research methodology but does little to really get at the value of assessment – that is to say, if the first farmer is simply measuring the quantity and quality of his output and isn't taking any improvement actions in response, then why would we expect there to be any differences?
In any case, I doubt I will change your mind (I note with irritation the cheap shot you take at the end of your piece on educational researchers, which represents little more than your own arrogance and ignorance), but I write to provide a different perspective on an important issue and to challenge you to be open to seeing this work in a different light.