Interesting blog post on rubrics in IHE

I have been thinking about rubrics lately. I have always been instinctively dubious about them, and that dubiousness was redoubled when I read a recent article in Inside Higher Ed by Denise Crane, who teaches writing at Santa Clara.

She expressed her doubts:

If our goal is to foster this long-term, deep learning, we should question whether rubrics hinder that ideal. Rubrics, after all, ask students to focus on the short term. They direct students’ attention to a single writing moment and don’t encourage an expansive view of writing and all it entails.

She then surveyed her students to understand how they use rubrics and why they value them. The most telling of the results:

86% noted rubrics helped them to understand what the professor wants. That was the most popular response. 83% noted rubrics helped them “to understand assignment criteria,” and 74% noted that rubrics helped them “to know what they can do to get a better grade” or “what to check off for the assignment.”

My experience is that student writing is getting more formulaic and lifeless. I suspect that reliance on rubrics trains students to “check off” items (thesis statement? Check. Two pieces of evidence? Check.) rather than to write with any concern for originality, verve, or playfulness. In reading up on rubrics I ran into a guide to rubric writing that cautioned against including creativity as an assessment criterion. Apparently, because creativity is not taught, it ought not be considered when assessing student work. So follow the formula, do as the rubric says, don’t get too creative, and get a good grade.

Bryan Caplan has argued that one of the character traits a college degree signals to the world is conformity. The use of rubrics for grading and assessment would seem to lend support to that argument.

Usually the comments sections of these articles are an intellectual wasteland, but there were some very thoughtful comments on Crane’s piece. The best of them (or the one I was most inclined to agree with), and one entirely consonant with Caplan’s views, was this:

I share the idea that rubrics are a barrier to students learning to write. I’d like to point out that the higher education system is not designed to do that. It appears that higher ed is designed to make clear to students that their success depends on their ability to give some superior authority what they ask for. The rubric is perfect for that purpose. You do the things on the list, you get an ‘A’, or tenure, or your bonus, etc. Note that there are multiple levels to the design. At the higher level, there is a rubric called the gen ed distribution requirement. It requires that students take a semester of English Comp or two (as at my college), among other things. This is what gives most of us our jobs. That higher level rubric might also need to go away if we really want students to learn to write. To make that happen, our relationship might have to change from grading the students (this is apparently the students’ point of view) to supporting the students in the writing situations that they care about.

Seconds after I first published this, I encountered this bit of playful writing on Twitter:

Stephanopoulos tells Papadopoulos he wasn’t scrupulous with the populace about his opulence but Papadopoulos cops to operants with a monopoly on scurrilous pomposity and on and on it goes for us in a monotonous pop-political ouroboros.

There is no rubric for that.

The Academic Genome?

I can’t tell if someone is pulling our collective leg in this article in Inside Higher Ed.

Sample paragraphs:

Establish an open-source competency ecosystem. We need competencies to be open, shareable and connectable within and among both third-party and open-source platforms. Everyone will benefit from resources connected under common technical constructs. This “edusystem” of competencies will be a resource for faculty building courses, aligning learning to industry competencies and communicating out student proficiencies to transcripts and employers.

And this:

In their book Degrees That Matter, Natasha Jankowski and David Marshall describe a “learning systems paradigm” that builds on the work of hundreds of postsecondary institutions using the Degree Qualifications Profile, Tuning and the VALUE Rubrics of the Association of American Colleges & Universities to map what is their local academic genome. This work makes possible the specification and coding of digital, machine readable cyberobjectives and cybercompetencies, which are finely grained, unambiguous, actionable statements of instructional intent. The resulting continuous telemetry of real-time, digital data enables evidence-based analytics and adaptivity developments which are increasing the power and efficacy of learning systems.

Maybe you think that is great … maybe you don’t. But it is happening — all around us.

It reads like someone used a computerized jargon and cliché generator to create the article.

Campbell’s Law and Community College Completion

There is an interesting and delightfully cordial debate going on at AEIdeas (a blog at the American Enterprise Institute) about CUNY’s attempts to provide programs that help community college students complete their degrees in a timely manner. One participant, Angela Rachidi, has looked at CUNY’s programs and sees their apparent success as an example of how community colleges can develop programs that help at-risk students achieve better outcomes.

CUNY ASAP doubled graduation rates when compared to a control group, 42 percent vs. 22 percent after 3 years — similarly impressive.


Bob Shireman quoted on accreditation and conversion from for-profit to non-profit in IHE

IHE has an article today about Grand Canyon University’s effort to convert from for-profit to not-for-profit status. Bob, who earlier wrote a report on the subject for the Century Foundation (The Covert For-Profit), is quoted extensively.

Shireman, of the Century Foundation, doesn’t believe either accreditor has done enough, though. And he said the problem has gotten worse, not better.

“Consumers trust colleges labeled ‘public’ and ‘nonprofit’ because public and nonprofit control has been effective in preventing predatory behavior, making the schools safer places for students to enroll by separating institutional control from the financial stakes of investors,” he said.

Schools like GCU are blurring the line between nonprofit and for-profit by taking the university nonprofit while outsourcing many of its functions to a for-profit business that is effectively the old for-profit university.

A similar and equally disturbing process is at work as traditional not-for-profit universities outsource more and more functions to for-profit business partners. Most of these are things like food service, but some are core academic functions of the university.

Grades and Assessment

I have been traveling and ignoring the internet for the last ten days (a semi-vacation: I drove 2,700 miles alone with two springer spaniels) and am just catching up this morning. While I was away there was an essay in IHE that looks at grading practices, specifically practices the author contends invalidate or “contaminate” grades: rewarding attendance, giving extra credit, punishing disruptive behavior, and rewarding class participation. I tend to agree, in that I don’t take attendance, give extra credit in only one course (because it is mandated by our assessment regime), and rarely have disruptive behavior in my classes (a real benefit of 8:00 a.m. classes).

There are two interesting things about the article. One is that it was written by Jay Sterling Silver (awesome name), who is a law professor. Reading it, I assumed the author was someone who regularly taught general education courses. I am stunned to hear that law profs take attendance or give extra credit.

But the other interesting thing is this statement:

In the era of outcomes assessment, testing serves to measure, more than ever, whether students have assimilated particular knowledge and developed certain skills. A student’s mere exposure to information and instruction in skills does not, in today’s assessment regime, reflect a successful outcome. The assessments crowd wants proof that it sank in, and grades are the unit of measurement.

This triggered some interesting comments, but fewer than I would have expected. What did get discussed at length was the difference between excused and unexcused absences.

Assuming that attending class actually contributes to student learning, what is the practical difference between missing class to attend a funeral or an athletic event and just oversleeping? Either way you have missed the experience of being in class, and the person who attended the funeral learned no more or less than the person who stayed in bed.

New Piece on Assessment in IHE

I have not had a chance to sit down and really read this closely, but the comments section is quite lively, and the author’s questions seem apt.

Alex Small is the author, and the title is “Some Questions for Assessophiles.”

Representative paragraph:

Do you think that faculty members who eschew your exercises don’t pay attention to how students perform when we try new things? Yes, we all know someone who drones on for an hour thrice weekly, never sees students in office hours and gives only multiple-choice tests. But are all the rest of us similarly suspect? And if an honest assessment effort demonstrated that students weren’t learning anything from Professor Droning On, what concrete steps would you actually be prepared to take?

Comments

I can’t figure out how to make comments visible, so I am posting the comments that came in regarding Moloch below.

Music Man

I like the farmer analogy, but it fails to apply correctly in this case. Let me rework it so it matches the true state of affairs.

For decades and decades, farmers went out in their fields and worked to produce the best crop they could. Unfortunately, few farmers had any data on the quality and quantity of their output other than anecdotes. “I once had an ear of corn,” said one farmer, “who went to Oxford!” Another farmer pointed out, “Nearly all of my corn gets eaten right away! We must be doing a great job as farmers.”

Although the farmers repeatedly assured the government and the community members that they were providing a good service – just “trust us,” they would tell them – the rising cost of grain, which pushed many families deep into debt, and the tax credits used to prop up farmers in years of bad weather were drawing increasing ire from the government and taxpayers. So naturally, the government and the community members began demanding that the farmers prove that they were providing a genuine benefit to the community.

The farmers were appalled, insulted, and irritated that anyone would dare to doubt their expertise in farming. They had, in fact, studied farming intensely and worked long hours in difficult conditions out in the fields. The farmers also pointed out that they used farming practices that were widely accepted and commonly known to be effective. Yet this was not enough for the government and community members – they wanted to see evidence that the farms produced high-quality crops of sufficient quantity to feed their hungry community.

At the same time, some farmers had long suspected that they could grow more and better crops by modifying their farming practices. They knew there were differences in rain, and sunlight, and pests each year, but did not have good data on the relationship between those conditions and the quality and quantity of their crops. So these farmers developed grading scales to evaluate the quality of the crop produced and insisted on using similarly sized baskets to gauge the quantity of crops produced each year. Now, the baskets were sometimes a bit differently sized, and the grading scales were sometimes inaccurately applied, but they still gave farmers a better idea of what they were producing than they would have had without such measures.

Farmers tried lots of different things to improve the quality and quantity of their crops. Some prayed at the temple, some sacrificed children, some altered their planting dates, some added additional water in dry months, and others tried other activities. But since the farmers were collecting information about the quality and quantity of yield, it was easy to determine, over time, which of these practices were making a difference. Some farmers changed nothing about their farming practices and complained that the act of measuring the quality and quantity of their crops did not lead directly to better crops. Of course these farmers were also correct – by doing nothing differently, other than measuring their output, they had little reason to expect that their results would change.

Assessment charts a similar path. For too long our response to concerns about the quality and value of our degrees and programs was simply “trust us.” As a result, government, parents, taxpayers, and others began demanding that we provide some evidence that we were actually producing something of value. Grade data (which can be easily manipulated by instructors) and anecdotes simply weren’t sufficient (for every 1 positive anecdote it was easy to find 3 negative anecdotes). At the same time, there was a growing recognition that degree program quality could be improved by using some of the same information provided for accountability purposes. Assessment is not perfect by any means and isn’t always effective – but neither does measuring the output of farmers always produce gains in quality and quantity.

I have heard requests for “proof” that assessment produces gains in learning. To me this is a bit of a nonsensical question. It’s like asking to compare the output of a farmer who is measuring the quality and quantity of his output with that of a farmer who isn’t (to what, then, do you compare? You can’t compare something to the null.). Or, let’s say that as the researcher you volunteer to measure the output of the 2nd farmer. This improves your research methodology but does little to really get at the value of assessment – that is to say, if the 1st farmer is simply measuring the quantity and quality of his output and isn’t taking any improvement actions in response, then why would we expect there to be any differences?

In any case, I doubt I will change your mind (I note with irritation the cheap shot you take at the end of your piece on educational researchers, which represents little more than your own arrogance and ignorance), but I write to provide a different perspective on an important issue and to challenge you to be open to seeing this work in a new light.
