The Persistence of Dubious Ideas

The Atlantic just ran an article by Olga Khazan that traces the history of the idea that people have distinctive learning styles. The notion first emerged in the 1990s and rapidly became popular with students and teachers because it seemed to offer both an explanation for why some students failed to do well in school and a solution to that problem. Students failed, the argument went, because their teachers’ instruction did not match their learning styles. The solution was to ensure that auditory learners got auditory instruction, visual learners got visual instruction, and so on. It’s an appealing idea and makes a sort of intuitive sense.

Unfortunately, the research does not support it. Learning styles have been debunked repeatedly, most recently here.

That the scholarly consensus is against learning styles has done little to dent popular enthusiasm for the idea.

From Khazan’s article:

The “learning styles” idea has snowballed—as late as 2014, more than 90 percent of teachers in various countries believed it. The concept is intuitively appealing, promising to reveal secret brain processes with just a few questions. Strangely, most research on learning styles starts out with a positive portrayal of the theory—before showing it doesn’t work.

Which brings us to another well-intentioned, intuitively appealing, and widely held idea that is also unsupported by evidence: learning outcomes assessment. I won’t rehash all the reasons that one ought not to accept the premises of assessment, but the short version is this:

In 2011 Trudy Banta and Charles Blaich wrote an article describing their attempt to show that learning outcomes assessment leads to actual improvements in student learning. Banta was one of the early proponents of assessment and was attempting to confirm her belief that it works. From Banta and Blaich as quoted in Eubanks:

We scoured current literature, consulted experienced colleagues, and reviewed our own experiences, but we could identify only a handful of examples of the use of assessment findings in stimulating improvements.

Dave Eubanks observes that even the “handful” of successes were probably not representative:

As Fulcher, Good, Coleman & Smith (2014) point out, the 6% of submissions that Banta & Blaich found to identify improvements is bound to be an overestimate of the actual case, since these submissions were chosen presumably on their merits, and not at random.

To Banta’s credit, she accepted her own findings, even though they did not confirm her intuition, and made them public.

Banta and Blaich were looking for evidence of the success of specific assessment efforts. What about the broader, national effort to use assessment to improve student learning?

Higher education has been engaged in assessment for over twenty years, so it’s had ample time to yield results. Are students learning more now than they used to? Did those of us who went to school more than 20 years ago receive inferior educations compared to the assessment-enhanced educations that students receive today? In January, the National Institute for Learning Outcomes Assessment (NILOA) reported that more-selective schools make less use of assessment than less-selective schools do (they are now frantically backpedaling on this statement). Does that mean students at more-selective schools are being shortchanged compared to their peers at less-selective schools?

The answer to all these questions is “no.” There is no evidence that students are learning more now. In fact, there is good reason to believe that students learn very little in college and that this has long been the case. Bryan Caplan, an economist at George Mason, provides a good summary of the evidence that students don’t learn much at all in college here. Whatever twenty years of assessment has done to American higher education, it has not moved the needle on student learning.

As is the case with learning styles, the absence of evidence for learning outcomes assessment has done little to dull the enthusiasm of its proponents. Assessors seem content with their intuition that assessment ought to work, which is more than a little surprising given their insistence that faculty provide evidence of student learning.

When the application of one leech has not cured your illness and a quick survey of the literature shows that it has not cured anyone else either, the answer is not to apply two leeches or to try to get more buy-in from the leeches. Rather, it’s time to admit that leeches don’t work and to stop using leeches.

It is interesting that ideas that have been repeatedly debunked (learning styles) or for which there is no supporting evidence (assessment) can be so durable. It reminds me of the experience of living in Tanzania in the 1980s during the last days of that nation’s experiment with “African Socialism.” Despite the obvious failures of the system, its proponents kept insisting it would work if only people (“economic saboteurs,” as they were called) would stop undermining it. The actual logic of socialism was not questioned until the economy was in such tatters that the government had to reluctantly embrace mageuzi, or reform. Unfortunately, it’s hard to imagine a crisis directly attributable to assessment that would be serious enough to force people to reexamine their commitment to the assessment ideology. Our mageuzi may still be a long way off.