Bob Shireman quoted on accreditation and conversion from for-profit to non-profit in IHE

IHE has an article today about Grand Canyon University’s effort to convert from for-profit to not-for-profit status.  Bob, who wrote an earlier report on the subject for the Century Foundation (The Covert For-Profit), is quoted extensively.

Shireman, of the Century Foundation, doesn’t believe either accreditor has done enough, though. And he said the problem has gotten worse, not better.

“Consumers trust colleges labeled ‘public’ and ‘nonprofit’ because public and nonprofit control has been effective in preventing predatory behavior, making the schools safer places for students to enroll by separating institutional control from the financial stakes of investors,” he said.

Schools like GCU are blurring the line between non-profit and for-profit by taking the university non-profit while outsourcing many of its functions to a for-profit business that is effectively the old for-profit university.

A similar and equally disturbing process is at work as traditional not-for-profit universities outsource more and more functions to for-profit business partners.  Most of these are things like food services, but some are core academic functions of the university.


Grades and Assessment

I have been traveling and ignoring the internet for the last ten days (a semi-vacation; I drove 2,700 miles alone with two springer spaniels) and am just catching up this morning.  While I was away, there was an essay in IHE that looks at grading and at practices the author contends invalidate or “contaminate” grades: rewarding attendance, giving extra credit, punishing disruptive behavior, and rewarding class participation.  I tend to agree, in that I don’t take attendance, give extra credit in only one course (because it is mandated by our assessment regime), and rarely have disruptive behavior in my classes (a real benefit of 8:00 classes).

There are two interesting things about the article.  One is that it was written by Jay Sterling Silver (awesome name), who is a law professor.  Reading it, I assumed the author was someone who regularly taught general education courses.  I am stunned to hear that law profs take attendance or give extra credit.

But the other interesting thing is this statement:

In the era of outcomes assessment, testing serves to measure, more than ever, whether students have assimilated particular knowledge and developed certain skills. A student’s mere exposure to information and instruction in skills does not, in today’s assessment regime, reflect a successful outcome. The assessments crowd wants proof that it sank in, and grades are the unit of measurement.

This triggered some interesting comments, but fewer than I would have expected.  What did get discussed at length was the difference between excused and unexcused absences.

Assuming that attending class actually contributes to students learning stuff, what is the practical difference between missing class to attend a funeral or an athletic event and just oversleeping?  Either way, you have missed the experience of being in class, and the person who attended the funeral did not learn any more or less than the person who stayed in bed.



New Piece on Assessment in IHE

I have not had a chance to sit down and really read this closely, but the comments section is quite lively, and the author’s questions seem apt.


Alex Small is the author and the title is “Some Questions for Assessophiles.”

Representative paragraph:

Do you think that faculty members who eschew your exercises don’t pay attention to how students perform when we try new things? Yes, we all know someone who drones on for an hour thrice weekly, never sees students in office hours and gives only multiple-choice tests. But are all the rest of us similarly suspect? And if an honest assessment effort demonstrated that students weren’t learning anything from Professor Droning On, what concrete steps would you actually be prepared to take?



I can’t figure out how to make comments visible, so I am posting the comments that came in regarding Moloch below.

Music Man

I like the farmer analogy, but it fails to apply correctly in this case. Let me rework it so it matches the true state of affairs.

For decades and decades, farmers went out in their fields and worked to produce the best crop they could. Unfortunately, few farmers had anything other than anecdotal data on the quality and quantity of their output. “I once had an ear of corn,” said one farmer, “who went to Oxford!” Another farmer pointed out, “Nearly all of my corn gets eaten right away! We must be doing a great job as farmers.”

Although the farmers repeatedly assured the government and the community members that they were providing a good service – just “trust us,” they would tell them – the rising cost of grain, which pushed many families deep into debt, and the tax credits, used to prop up farmers in years of bad weather, were drawing increasing ire from the government and taxpayers. So naturally, the government and the community members began demanding that the farmers prove that they were providing a good benefit to the community.

The farmers were appalled, insulted, and irritated that anyone would dare to doubt their expertise in farming. They had, in fact, studied farming intensely and worked long hours in difficult conditions out in the fields. The farmers also pointed out that they used farming practices that were widely accepted and commonly known to be effective. Yet this was not enough for the government and community members – they wanted to see evidence that the farms produced crops of high quality and in sufficient quantity to feed their hungry community.

At the same time, some farmers had long suspected that they could grow more and better crops by modifying their farming practices. They knew there were differences in rain, and sunlight, and pests each year, but did not have good data on the relationship between those conditions and the quality and quantity of the crops produced. So these farmers developed grading scales to evaluate the quality of the crop produced and insisted on using similarly sized baskets to gauge the quantity of crops produced each year. Now, the baskets were sometimes a bit differently sized, and the grading scales were sometimes inaccurately applied, but they still gave farmers a better idea of what they were producing than they would have had without such measures.

Farmers tried lots of different things to improve the quality and quantity of their crops. Some prayed at the temple, some sacrificed children, some altered their planting dates, some added additional water in dry months, and others tried other activities. But since the farmers were collecting information about the quality and quantity of yield, it was easy to determine, over time, which of these practices were making a difference. Some farmers changed nothing about their farming practices and complained that the act of measuring the quality and quantity of their crops did not lead directly to better crops. Of course these farmers were also correct – by doing nothing differently, other than measuring their output, they had little reason to expect that their results would change.

Assessment charts a similar path. For too long, our response to concerns about the quality and value of our degrees and programs was simply “trust us.” As a result, government, parents, taxpayers, and others began demanding that we provide some evidence that we were actually producing something of value. Grade data (which can be easily manipulated by instructors) and anecdotes simply weren’t sufficient (for every 1 positive anecdote it was easy to find 3 negative anecdotes). At the same time, there was a growing recognition that degree program quality could be improved by using some of the same information provided for accountability purposes. Assessment is not perfect by any means and isn’t always effective – but neither does measuring the output of farmers always produce gains in quality and quantity.

I have heard requests for “proof” that assessment produces gains in learning. To me this is a bit of a nonsensical question. It’s like asking to compare the output of a farmer who is measuring the quality and quantity of his crops to that of a farmer who isn’t (to what, then, do you compare? You can’t compare something to the null). Or, let’s say that, as the researcher, you volunteer to measure the output of the second farmer. This improves your research methodology but does little to really get at the value of assessment – that is to say, if the first farmer is simply measuring the quantity and quality of his output and isn’t taking any improvement actions in response, then why would we expect there to be any differences?

In any case, I doubt I will change your mind (I note with irritation the cheap shot at educational researchers you take at the end of your piece, which represents little more than your own arrogance and ignorance), but I write to provide a different perspective on an important issue and to challenge you to be open to seeing this work from a different perspective.

Continue reading “Comments”

Stop Sacrificing Children to Moloch

As someone who has been publicly critical of learning outcomes assessment for a long time, I am often asked: “If you are so opposed to assessment, what would you replace it with?” By way of an answer, I have started resorting to this fable:


Imagine that you live in a Bronze Age village. You and everyone else in the village depend for your livelihood on subsistence farming, so you have a keen interest in the success of your crops. Because of that, you have developed a good sense of when to plant what crop, what types of soil work best with specific crops, when to weed, when to harvest and so on. It’s not scientific knowledge but it works pretty well. Still, you are always on the lookout for ways of improving your yields. Continue reading “Stop Sacrificing Children to Moloch”

Bad Assessment on the Road this Summer

If you have been waiting for an opportunity to talk about your doubts and concerns about assessment this summer (and who would not want to use valuable summer travel time to talk about assessment), you have several opportunities in the next couple of months.

On Friday, May 4th, I will be a panelist at the San Francisco State University chapter of the California Faculty Association meeting.  The theme of the conference is “Resisting the Neoliberal University.”  I will be on the morning plenary at 9:00 for the session on the “Mechanics of Managerial Takeover” and then will be in a breakout session called “Tools of Control and Authority.”

Then, in early June, there will be two panels at the Association for the Assessment of Learning in Higher Education (AALHE) meeting in Salt Lake City that should be of interest to assessment doubters.  Both are on June 6 and feature me, Bob Shireman, Lynn Priddy, Dave Eubanks, and Josie Welsh.  The first session, at 8:00, will be on “Identifying Problems in Assessment,” and the second, at 10:30, will be about “Identifying Solutions to Problems in Assessment.”


The Persistence of Dubious Ideas

The Atlantic just ran an article by Olga Khazan that traces the history of the idea that people have distinctive learning styles. The notion first emerged in the 1990s and rapidly became popular with students and teachers because it seemed to offer both an explanation for why some students failed to do well in school and a solution to that problem. Students failed, the argument went, because their teachers’ instruction did not match their learning styles. The solution was to ensure that auditory learners got auditory instruction, visual learners got visual instruction, and so on. It’s an appealing idea and makes a sort of intuitive sense.

Unfortunately, the research does not support it. Learning styles have been debunked repeatedly, most recently here. Continue reading “The Persistence of Dubious Ideas”

Assessment and History

There was an article in Inside Higher Ed yesterday by Sam Wineburg, Joel Breakstone, and Mark Smith, all of the Stanford History Education Group.  In it they argue that historians don’t like to do assessment.  They are right about that.  They are also correct that many historians claim that history courses teach critical thinking.

In an effort to assess this claim, they tested several groups of students on their ability to use historical evidence. They found that few could do it.  Their conclusion is that students are not learning the critical thinking skills that historians claim their courses teach.   Continue reading “Assessment and History”