There was an article in Inside Higher Ed yesterday by Sam Wineburg, Joel Breakstone, and Mark Smith, all of the Stanford History Education Group. In it they argue that historians don’t like to do assessment. They are right about that. They are also correct that many historians claim that history courses teach critical thinking.
In an effort to assess this claim, they have tested several groups of students on their ability to use historical evidence. They found that few could do it. Their conclusion is that students are not learning the critical thinking skills that historians claim their courses teach.
(The piece in Inside Higher Ed is a teaser for an article in the Journal of American History. I have not yet had a chance to read the article (my university library has not yet processed the latest issue and has a one-year delay on the online version), so I am responding based on what’s in Inside Higher Ed rather than the JAH article. I have looked at the website that hosts their assessment material.)
For example, of 57 students in history survey courses at two unnamed large state universities who were asked to use a 20th-century painting of the first Thanksgiving as evidence, only 2 noted that the time gap between the event and the creation of the evidence might be problematic. In another survey, they found that upper-level students did only slightly better (2 of 47) on a similar exercise.
About these exercises, they offer this caveat:
We suffer no illusions that our short exercises exhaust the range of critical thinking in history.
In fact, what their assessments seem to be testing is students’ ability to do source criticism. This is something that even grad students struggle with. Furthermore, their assessment instrument seems to assume that students either exhibit this skill or they don’t. There is also no mention of control groups. Did they compare the performance of history students to that of students who had not taken history courses? Might students with no training in history at all do even worse? Did they consider using the CLA, which purports to test a broad range of critical thinking and analytical skills, rather than their own more narrowly tailored assessments? That might have given a more nuanced sense of the extent of students’ critical thinking ability.
It’s not clear what they expect us to do based on these findings. They conclude with this:
In an age of declining enrollments in history classes, soaring college debt and increased questions about what’s actually learned in college, feel-good bromides about critical thinking and enlightened citizenship won’t cut it. Historians offer evidence when they make claims about the past. Why should it be different when they make claims about what’s learned in their classrooms?
Given that they claim to have demonstrated that students are not learning to think critically in history courses, the suggestion that we should provide evidence that they do seems bizarre. Is the idea that if we measure again we will find results we like better?
They don’t say it, but what they want is for historians to take assessment seriously.
The second paragraph contains this:
Historians aren’t great at tracking what students learn. Sometimes they even resent being asked.
The link is to Molly Worthen’s NYT article, which they seem to have interpreted as just the complaints of yet another cranky college professor who does not want to be held accountable.
The assumption seems to be that if we start measuring these things, then improvement in actual learning will follow, and that if students are not learning to think critically, it is because we have not been showing sufficient enthusiasm for assessment.
It’s worth noting that of the three authors, two are assessment guys and the other is a history education/ed psych guy. That does not mean they are wrong, but it does mean they come to the topic with at least as much of a predisposition toward assessment as people like Worthen come to it with a predisposition against it. (That was me doing some source criticism.)
I don’t doubt that there are many students who are not learning much in college. One only has to look at the evidence presented by Bryan Caplan to be convinced of that. But to then assume that this is fixable through more learning outcomes assessment than we already do seems to be more of a faith-based than an evidence-based assertion.
In the end this article is just a rehash of the standard assessment apologia. It says that assessment:
- should work (presumably the two state universities where they did their study have been doing assessment for years)
- has not yet yielded results
- is falling short only because professors just go through the motions
- would work if professors would just take it seriously
I don’t buy it.