Survey on Faculty Attitudes toward Assessment

Inside Higher Ed has just published the results of a survey it conducted on technology in higher education.  Tucked away at the bottom of the article is a section on faculty attitudes toward assessment.

Assessment. As public and political pressure builds on colleges to provide evidence about their performance and value, one of the major ways of doing so — various approaches to measuring student learning — continues to be viewed with suspicion and disdain by many professors.

Survey respondents were more likely to disagree than agree that assessment efforts on their campuses have “improved the quality of teaching and learning” (38 percent disagree versus 25 percent agree) or “helped increase degree completion rates” (36 percent disagree versus 27 percent agree).

Part of their skepticism may lie in the fact that many professors don’t feel that anything useful results from the efforts. Just a quarter of instructors (26 percent) say they regularly receive data gathered from their college’s assessment efforts (52 percent say they don’t), and 28 percent agree that “there is meaningful discussion at my college about how to use the assessment information.” About a third, 34 percent, say they have used data from these assessments to improve their teaching.

The other problem for many faculty members stems from their qualms about the motivations for assessment. Nearly six in 10 respondents (59 percent) agree that assessment efforts “seem primarily focused on satisfying outside groups such as accreditors or politicians,” rather than serving students.

It looks like about 25% of faculty are assessment supporters.  The last paragraph strikes me as most interesting: nearly 60% of faculty think assessment is primarily about satisfying outsiders “such as accreditors.”  Indeed.

How accreditors brought us assessment

This summer I went to the annual meeting of the Association for the Assessment of Learning in Higher Education (AALHE) for the first time.  One of the panels I attended was a “meet the accreditors” event.  Three or four higher-ups from several accreditors were there.  It was deeply depressing.  When someone in the audience, whom I know to be quite knowledgeable about assessment and data quality, raised the question of validity, they just waved the issue away (literally a dismissive hand wave from one of the accreditors’ representatives).

I asked whether they weighed costs against benefits when deciding what demands to make of the colleges they accredit.  The answer I got can only be described as surreal.  An earnest accreditor patiently explained to me that when his organization visits schools of lesser means, the teams don’t eat at expensive restaurants; they go to “places like McDonald’s, not McDonald’s but places like that.”  Apparently the only cost he recognized as associated with accreditation was the cost of hosting the site visitors.  Not the piles of dubious data (hand wave), not the meetings where everyone goes through the charade of closing the loop, not the ever-growing assessment office: those, it seems, are not costs they consider.


Interesting blog post on rubrics in IHE

I have been thinking about rubrics lately.  I have always been instinctively dubious about them, but that dubiousness was redoubled when I read a recent article in Inside Higher Ed by Denise Crane, who teaches writing at Santa Clara.

She expressed her doubts:

If our goal is to foster this long-term, deep learning, we should question whether rubrics hinder that ideal. Rubrics, after all, ask students to focus on the short term. They direct students’ attention to a single writing moment and don’t encourage an expansive view of writing and all it entails.

She then surveyed her students to understand how they use rubrics and why they value them.  The most telling of the results:

86% noted rubrics helped them to understand what the professor wants. That was the most popular response. 83% noted rubrics helped them “to understand assignment criteria,” and 74% noted that rubrics helped them “to know what they can do to get a better grade” or “what to check off for the assignment.”

My experience is that student writing is getting more formulaic and lifeless.  I suspect that reliance on rubrics trains students to “check off” items (thesis statement: check! two pieces of evidence: check!) rather than to write with any concern for originality, verve, or playfulness.  In reading up on rubrics I ran into a guide to rubric writing that cautioned against including creativity as an assessment criterion.  Apparently, because creativity is not taught, it ought not be considered when assessing student work.  So follow the formula, do as the rubric says, don’t get too creative, and get a good grade.

Bryan Caplan has argued that one of the character traits a college degree signals to the world is conformity.  The use of rubrics for grading and assessment would seem to lend support to that argument.

Usually the comments sections of these articles are an intellectual wasteland, but there were some very thoughtful comments on Crane’s piece.  The best of them (or the one I was most inclined to agree with), and one entirely consonant with Caplan’s views, was this:

I share the idea that rubrics are a barrier to students learning to write. I’d like to point out that the higher education system is not designed to do that. It appears that higher ed is designed to make clear to students that their success depends on their ability to give some superior authority what they ask for. The rubric is perfect for that purpose. You do the things on the list, you get an ‘A’, or tenure, or your bonus, etc. Note that there are multiple levels to the design. At the higher level, there is a rubric called the gen ed distribution requirement. It requires that students take a semester of English Comp or two (as at my college), among other things. This is what gives most of us our jobs. That higher-level rubric might also need to go away if we really want students to learn to write. To make that happen, our relationship might have to change from grading the students (this is apparently the students’ point of view) to supporting the students in the writing situations that they care about.

Seconds after I first published this, I encountered this bit of playful writing on Twitter:

Stephanopoulos tells Papadopoulos he wasn’t scrupulous with the populace about his opulence but Papadopoulos cops to operants with a monopoly on scurrilous pomposity and on and on it goes for us in a monotonous pop-political ouroboros.

There is no rubric for that.

The Academic Genome?

I can’t tell if someone is pulling our collective leg in this article in Inside Higher Ed.

Sample paragraphs:

Establish an open-source competency ecosystem. We need competencies to be open, shareable and connectable within and among both third-party and open-source platforms. Everyone will benefit from resources connected under common technical constructs. This “edusystem” of competencies will be a resource for faculty building courses, aligning learning to industry competencies and communicating out student proficiencies to transcripts and employers.

And this:

In their book Degrees That Matter, Natasha Jankowski and David Marshall describe a “learning systems paradigm” that builds on the work of hundreds of postsecondary institutions using the Degree Qualifications Profile, Tuning and the VALUE Rubrics of the Association of American Colleges & Universities to map what is their local academic genome. This work makes possible the specification and coding of digital, machine readable cyberobjectives and cybercompetencies, which are finely grained, unambiguous, actionable statements of instructional intent. The resulting continuous telemetry of real-time, digital data enables evidence-based analytics and adaptivity developments which are increasing the power and efficacy of learning systems.

Maybe you think that is great … maybe you don’t. But it is happening — all around us.

It reads like someone used a computerized jargon-and-cliché generator to create the article.

Campbell’s Law and Community College Completion

There is an interesting and delightfully cordial debate going on at AEIdeas (a blog at the American Enterprise Institute) about CUNY’s attempts to provide programs that help community college students complete their degrees in a timely manner.  One participant, Angela Rachidi, has looked at CUNY’s programs and sees their apparent success as an example of how community colleges can develop programs that improve outcomes for at-risk students.

CUNY ASAP doubled graduation rates when compared to a control group, 42 percent vs. 22 percent after 3 years — similarly impressive.


Bob Shireman quoted on accreditation and conversion from for-profit to non-profit in IHE

IHE has an article today about Grand Canyon University’s effort to convert from for-profit to not-for-profit status.  Bob, who earlier wrote a report on the subject for the Century Foundation (The Covert For-Profit), is quoted extensively.

Shireman, of the Century Foundation, doesn’t believe either accreditor has done enough, though. And he said the problem has gotten worse, not better.

“Consumers trust colleges labeled ‘public’ and ‘nonprofit’ because public and nonprofit control has been effective in preventing predatory behavior, making the schools safer places for students to enroll by separating institutional control from the financial stakes of investors,” he said.

Schools like GCU are blurring the line between nonprofit and for-profit by taking the university nonprofit but then outsourcing many of its functions to a for-profit business that is effectively the old for-profit university.

A similar and equally disturbing process is at work as traditional not-for-profit universities outsource more and more functions to for-profit business partners.  Most of these are things like food services, but some are core academic functions of the university.

Grades and Assessment

I have been traveling and ignoring the internet for the last ten days (a semi-vacation; I drove 2,700 miles alone with two springer spaniels) and am just catching up this morning.  While I was away, there was an essay in IHE that looks at grading and at practices that the author contends invalidate or “contaminate” grades: rewarding attendance, giving extra credit, punishing disruptive behavior, and rewarding class participation.  I tend to agree, in that I don’t take attendance, give extra credit in only one course (because it is mandated by our assessment regime), and rarely have disruptive behavior in my classes (a real benefit of 8:00 classes).

There are two interesting things about the article.  One is that it was written by Jay Sterling Silver (awesome name), who is a law professor.  Reading it, I assumed the author was someone who regularly taught general education courses.  I am stunned to hear that law profs take attendance or give extra credit.

But the other interesting thing is this statement:

In the era of outcomes assessment, testing serves to measure, more than ever, whether students have assimilated particular knowledge and developed certain skills. A student’s mere exposure to information and instruction in skills does not, in today’s assessment regime, reflect a successful outcome. The assessments crowd wants proof that it sank in, and grades are the unit of measurement.

This triggered some interesting comments, though fewer than I would have expected.  What did get discussed at length was the difference between excused and unexcused absences.

Assuming that attending class actually contributes to student learning, what is the practical difference between missing class to attend a funeral or an athletic event and just oversleeping?  Either way you have missed the experience of being in class, and the person who attended the funeral learned no more or less than the person who stayed in bed.

New Piece on Assessment in IHE

I have not had a chance to sit down and read this closely, but the comments section is quite lively, and the author’s questions seem apt.

Alex Small is the author, and the title is “Some Questions for Assessophiles.”

Representative paragraph:

Do you think that faculty members who eschew your exercises don’t pay attention to how students perform when we try new things? Yes, we all know someone who drones on for an hour thrice weekly, never sees students in office hours and gives only multiple-choice tests. But are all the rest of us similarly suspect? And if an honest assessment effort demonstrated that students weren’t learning anything from Professor Droning On, what concrete steps would you actually be prepared to take?