California Governor seeks to track student performance from kindergarten to the workforce

John Warner of Inside Higher Ed has written a blog post that urges Gavin Newsom, the new governor of California, not to spend $10 million creating a computer surveillance system that will track students as they move through the education system and into the workforce.  Warner argues that doing this effectively will be way more complicated than Newsom thinks, will cost way more than 10 million bucks, and won’t work anyway.  To get at that third point he has a nicely annotated reading list for Governor Newsom.  Some of it will be familiar to Bad Assessment readers.  Some is new to me and thus might be new to you.

This is why I’ve compiled a reading list for Governor Newsom to consider as he makes his final decision.

For background on the limits of data and algorithms I would like Governor Newsom to read Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil, and The Tyranny of Metrics by Jerry Z. Muller.

Brisk and readable by the layperson, both books make a case for how human performance cannot be reduced to quantifiable measurements.

Next, I would like Gavin Newsom to read three books more specifically dealing with education:

The Testing Charade: Pretending to Make Schools Better by Daniel Koretz. Harvard education professor Koretz shows how our thirty-year obsession with standardization and assessment has not only led to no appreciable gains in student achievement, but how perverse incentives to improve scores have driven out subjects like art, physical education, music, and recess, while resulting in cheating and short-term test prep that has no lasting impact on learning.

Beyond Test Scores: A Better Way to Measure School Quality by Jack Schneider. In this book Schneider, an Assistant Professor of Education at UMass Lowell, reveals the shortcomings of the kinds of measurements we tend to use when we judge schools. How we think of a particular school is rooted in value judgments about what’s important to the individual. A tracking system will inevitably crowd out this nuance.

Troublemakers: Lessons in Freedom from Young Children in School by Carla Shalaby. In this portrait of students who are deemed “troublemakers” Shalaby demonstrates how subjecting students to a system which seeks standardization and quantification is damaging even to those who toe the line, and disastrous to those who exist at the margins.

There is more in his recommended reading list that you can see by clicking the link above.

If there is one thing Newsom’s proposal brings home to me it’s that assessment in K-12 and assessment in Higher Ed are increasingly related issues.  It seems that states and accreditors are anxious to replicate the “success” of No Child Left Behind and Race to the Top at the college level.  It’s sure to work this time…

The Long Reach of Accountability Culture Meets Poetry

Jerry Muller’s work reminds us that learning outcomes assessment is but one facet of a broader effort to track, quantify and surveil production and performance. His work looks at everything from police record keeping to the performance of hospitals to course assessment.

So it should come as no surprise that universities are embracing the culture of accountability not just with respect to learning but to faculty productivity too.  The best response to this I have seen so far is this new poem by Susan Harlan.  This now officially makes her my favorite living poet.



I am in the wrong business

I was flying back from a visit to Washington, DC yesterday and was seated at the front of the economy section.  The flight crew did not pull the curtain across the aisle like they usually do in what I have to assume is an effort to tamp down resentment in the back of the plane. Because of this oversight, I could see into the first class section.  In the aisle seat across and just forward of me was a woman working on a PowerPoint about assessment.  I half expected her to be carrying a big bag of cash marked “Lumina Foundation.”  Clearly, advocating for assessment pays a lot better than criticizing it.


Survey on Faculty Attitudes toward Assessment

Inside Higher Ed has just published the results of a survey they did on technology in the university.  Tucked away at the bottom of the article is a section on attitudes toward assessment.

Assessment. As public and political pressure builds on colleges to provide evidence about their performance and value, one of the major ways of doing so — various approaches to measuring student learning — continues to be viewed with suspicion and disdain by many professors.

Survey respondents were more likely to disagree than agree that assessment efforts on their campuses have “improved the quality of teaching and learning” (38 percent disagree versus 25 percent agree) or “helped increase degree completion rates” (36 percent disagree versus 27 percent agree).

Part of their skepticism may lie in the fact that many professors don’t feel that anything useful results from the efforts. Just a quarter of instructors (26 percent) say they regularly receive data gathered from their college’s assessment efforts (52 percent say they don’t), and 28 percent agree that “there is meaningful discussion at my college about how to use the assessment information.” About a third, 34 percent, say they have used data from these assessments to improve their teaching.

The other problem for many faculty members stems from their qualms about the motivations for assessment. Nearly six in 10 respondents (59 percent) agree that assessment efforts “seem primarily focused on satisfying outside groups such as accreditors or politicians,” rather than serving students.

It looks like about 25% of faculty are assessment supporters.  The last paragraph strikes me as most interesting: nearly 60% of faculty think assessment is primarily about satisfying outsiders “such as accreditors.”  Indeed.

How accreditors brought us assessment

This summer I went to the annual meeting of the Association for the Assessment of Learning in Higher Education (AALHE) for the first time.  One of the panels I went to was a “meet the accreditors” event.  Three or four higher-ups from several accreditors were there.  It was deeply depressing.  When someone in the audience, who I know is quite knowledgeable about assessment and data quality, raised the question of validity with them, they just waved that issue away (literally a dismissive hand wave from one of the accreditors’ representatives).

I asked whether they weighed costs against benefits when they decided what types of demands to make of the colleges they accredit. The answer I got can only be described as surreal.  An earnest accreditor patiently explained to me that when his organization visits schools of lesser means, the teams don’t eat out at expensive restaurants, they go to “places like McDonalds, not McDonalds but places like that.”  Apparently the only cost he recognized as associated with accreditation was the cost of hosting the site visitors.  Not the piles of dubious data (hand wave), not the meetings where everyone goes through the charade of loop closing, not the ever-growing assessment office; those, it seems, are not costs they consider.

Continue reading “How accreditors brought us assessment”

Interesting blog post on rubrics in IHE

I have been thinking about rubrics lately.  I have always been instinctively dubious about them, but that dubiousness was redoubled when I read a recent article in Inside Higher Ed by Denise Crane, who teaches writing at Santa Clara.

She expressed her doubts:

If our goal is to foster this long-term, deep learning, we should question whether rubrics hinder that ideal. Rubrics, after all, ask students to focus on the short term. They direct students’ attention to a single writing moment and don’t encourage an expansive view of writing and all it entails.

She then did a survey of her students to understand how they use rubrics and why they value them.  The most telling of the results:

86% noted rubrics helped them to understand what the professor wants. That was the most popular response. 83% noted rubrics helped them “to understand assignment criteria,” and 74% noted that rubrics helped them “to know what they can do to get a better grade” or “what to check off for the assignment.”

My experience is that student writing is getting more formulaic and lifeless.  I suspect that reliance on rubrics is training students to “check off” items (thesis statement? check! two pieces of evidence? check!) and not to write with any concern for originality, verve or playfulness.  In reading up on rubrics I ran into a guide to rubric writing that cautioned against including creativity as an assessment criterion in rubrics.  Apparently, because creativity is not taught, it ought not be considered when assessing student work.  So follow the formula, do as the rubric says, don’t get too creative, and get a good grade.

Bryan Caplan has argued that one of the character traits that a college degree signals to the world about someone who holds one is conformity.  The use of rubrics for grading and assessment would seem to lend support to that argument.

Usually the comments sections of these articles are an intellectual wasteland, but there were some very thoughtful comments on Crane’s piece. The best of them (or the one I was most inclined to agree with), and one entirely consonant with Caplan’s views, was this:



I share the idea that rubrics are a barrier to students learning to write. I’d like to point out that the higher education system is not designed to do that. It appears that higher ed is designed to make clear to students that their success depends on their ability to give some superior authority what they ask for. The rubric is perfect for that purpose. You do the things on the list, you get an ‘A’, or tenure, or your bonus, etc. Note that there are multiple levels to the design. At the higher level, there is a rubric called the gen ed distribution requirement. It requires that students take a semester of English Comp or two (as at my college); among other things. This is what gives most of us our jobs. That higher level rubric might also need to go away if we really want students to learn to write. To make that happen, our relationship might have to change from grading the students (this is apparently the students’ point-of-view) to supporting the students in the writing situations that they care about.


Seconds after I first published this I encountered this bit of playful writing on twitter:

Stephanopoulos tells Papadopoulos he wasn’t scrupulous with the populace about his opulence but Papadopoulos cops to operants with a monopoly on scurrilous pomposity and on and on it goes for us in a monotonous pop-political ouroboros.

There is no rubric for that.

The Academic Genome?

I can’t tell if someone is pulling our collective leg in this article in Inside Higher Ed.

Sample paragraphs:

Establish an open-source competency ecosystem. We need competencies to be open, shareable and connectable within and among both third-party and open-source platforms. Everyone will benefit from resources connected under common technical constructs. This “edusystem” of competencies will be a resource for faculty building courses, aligning learning to industry competencies and communicating out student proficiencies to transcripts and employers.

And this:

In their book Degrees That Matter, Natasha Jankowski and David Marshall describe a “learning systems paradigm” that builds on the work of hundreds of postsecondary institutions using the Degree Qualifications Profile, Tuning and the VALUE Rubrics of the Association of American Colleges & Universities to map what is their local academic genome. This work makes possible the specification and coding of digital, machine readable cyberobjectives and cybercompetencies, which are finely grained, unambiguous, actionable statements of instructional intent. The resulting continuous telemetry of real-time, digital data enables evidence-based analytics and adaptivity developments which are increasing the power and efficacy of learning systems.

Maybe you think that is great … maybe you don’t. But it is happening — all around us.

It reads like someone used a computerized jargon and cliche generator to create an article.

Campbell’s Law and Community College Completion

There is an interesting and delightfully cordial debate going on at AEIdeas (a blog at the American Enterprise Institute) about CUNY’s attempts to provide programs that help community college students complete in a timely manner.  One participant, Angela Rachidi, has looked at CUNY’s programs and sees their apparent success as an example of how community colleges can develop programs that help at-risk students have improved outcomes.

CUNY ASAP doubled graduation rates when compared to a control group, 42 percent vs. 22 percent after 3 years — similarly impressive.

Continue reading “Campbell’s Law and Community College Completion”