Paul Banksley on Equity

Rick Hess of the American Enterprise Institute has now done his second interview with leading Ed Consultant Paul Banksley.  Banksley, if you don’t know about him, is the Innovation Sherpa behind the groundbreaking 22nd-century skills movement.

An excerpt:

He leaned back and continued. “And ‘equity’? Equity is when we make education more equitable. There’s equality, right? But that’s not equity. Sometimes equality has kids standing on boxes outside a baseball stadium.” He pulled a printed graphic out of his pocket and pointed to it. “You see? Some kids can’t see the game. Who wants that? Just imagine shipping all those boxes to various stadiums. And then kids still can’t see. It’s crazy!!” He slammed his fist on his chair.

The funders and handlers milling around had fallen into a hush. “That is so, so true,” murmured one. Others just snapped their fingers furiously.

“But it’s not just about helping raise the floor, right?” I said. “In your 22nd-century skills TED talk, you said that we also have to ‘raise the ceiling’.”

He said, “That’s right. We’re not just for equity, we’re for excellent equity. That’s why personalization and data-richness are the secret sauce to cracking the code. We need to do what works. We need more for kids who need more—but no less for anyone else! And it’s got to be about the kids, and the future.”

There was a burst of snapping.

AAUP censures Nunez CC for firing professor who refused to fabricate assessment data

According to IHE, Nunez Community College in Louisiana, which we reported on earlier (here and here), has been censured for terminating a professor with 22 years of service.  He was fired because he protested when the college used his name on an assessment report that he believed contained fabricated data.  He contacted SACS at the time, and its response was that it did not have enough evidence to act.

From IHE:

Nunez Community College in Louisiana found its way onto AAUP’s censure list for terminating an associate professor of English who had served the institution for 22 years — over the phone. Nunez doesn’t have tenure, but AAUP maintains that professors are entitled to tenure-like due process protections based on length of service.

Nunez previously declined to comment on the specific circumstances of the case and did not respond to a request for comment about the vote. The professor says he was terminated because he refused to fabricate data on student learning outcomes for accreditation purposes. Nunez said previously that it ensures all faculty members’ academic freedom.

If there are still any questions as to whether accreditors take the integrity part of their work seriously, this seems like an answer.  The accreditors value mindless compliance and form-filling first (data quality be damned); integrity is a distant afterthought.

Educationism

The Atlantic has a new article by Nick Hanauer that challenges the notion that fixing education will fix bigger problems like income inequality or the declining social mobility in our society.

He calls this belief “educationism” and says that it is extremely attractive to the very rich because it suggests there is a way to address these problems while letting them keep all their money.

All told, I have devoted countless hours and millions of dollars to the simple idea that if we improved our schools—if we modernized our curricula and our teaching methods, substantially increased school funding, rooted out bad teachers, and opened enough charter schools—American children, especially those in low-income and working-class communities, would start learning again. Graduation rates and wages would increase, poverty and inequality would decrease, and public commitment to democracy would be restored.

What I’ve realized, decades late, is that educationism is tragically misguided. American workers are struggling in large part because they are underpaid—and they are underpaid because 40 years of trickle-down policies have rigged the economy in favor of wealthy people like me. Americans are more highly educated than ever before, but despite that, and despite nearly record-low unemployment, most American workers—at all levels of educational attainment—have seen little if any wage growth since 2000.

Educationism appeals to the wealthy and powerful because it tells us what we want to hear: that we can help restore shared prosperity without sharing our wealth or power. As Anand Giridharadas explains in his book Winners Take All: The Elite Charade of Changing the World, narratives like this one let the wealthy feel good about ourselves. By distracting from the true causes of economic inequality, they also defend America’s grossly unequal status quo.

What does this have to do with assessment?  Assessment is one facet of this approach, and in many ways a more offensive one than Hanauer’s now-abandoned strategy of giving money and time to try to “fix” education.

Assessment assumes that there is a problem in education and that it can be addressed by looking at outcomes rather than inputs like funding.  At least the educationism people are also trying to increase the amount of money going to education.

As NILOA’s director Natasha Jankowski has conceded, twenty years of assessment has done nothing to improve higher education.  Nonetheless, operations like the Lumina Foundation continue to push this agenda.  At this point we need to ask who is benefiting from it.  It’s not students, it’s not faculty, and it’s not universities.  Cui bono?

Seems familiar…

From an article in The Atlantic about what happened when researchers failed to test the fundamental premises of research on a gene supposedly related to depression.

Using data from large groups of volunteers, ranging from 62,000 to 443,000 people, the team checked whether any versions of these genes were more common among people with depression. “We didn’t find a smidge of evidence,” says Matthew Keller, who led the project.

Between them, these 18 genes have been the subject of more than 1,000 research papers, on depression alone. And for what? If the new study is right, these genes have nothing to do with depression. “This should be a real cautionary tale,” Keller adds. “How on Earth could we have spent 20 years and hundreds of millions of dollars studying pure noise?”

“What bothers me isn’t just that people said [the gene] mattered and it didn’t,” wrote the pseudonymous blogger Scott Alexander in a widely shared post. “It’s that we built whole imaginary edifices on top of this idea of [it] mattering.” Researchers studied how SLC6A4 affects emotion centers in the brain, how its influence varies in different countries and demographics, and how it interacts with other genes. It’s as if they’d been “describing the life cycle of unicorns, what unicorns eat, all the different subspecies of unicorn, which cuts of unicorn meat are tastiest, and a blow-by-blow account of a wrestling match between unicorns and Bigfoot,” Alexander wrote.

If you read the ASSESS distribution list run by AALHE, you will see a lot of discussion of the unicorn type, with very little thought about the fundamentals or concern about evidence. I can’t say I recommend signing up for the list.  I did, and it’s pretty depressing.  Or perhaps I have a copy of SLC6A4 kicking around my chromosomes.
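
Incidentally, the basic check Keller’s team ran is not conceptually exotic. Here is a minimal sketch of a candidate-gene association test in Python; the allele counts are invented purely for illustration, and the real study was of course far more sophisticated.

```python
# Minimal sketch of a candidate-gene association check: is a gene variant
# more common among depression cases than among controls? All counts below
# are invented for illustration.
from scipy.stats import chi2_contingency

# 2x2 contingency table:
# rows = cases / controls, columns = variant carriers / non-carriers
table = [
    [4_210, 5_790],    # cases:    carriers, non-carriers
    [41_800, 58_200],  # controls: carriers, non-carriers
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
# Run tables like this for every candidate gene, and a flat field of
# unremarkable p-values is what "not a smidge of evidence" looks like.
```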

Adrian Ireland on Measuring the Meaningless

Medium is doing a series called the Age of Awareness, which consists of “stories providing creative, innovative, and sustainable changes to the education system.”  It seems to be mostly focused on primary and secondary education, and Ireland’s contribution to the series uses K-12 examples, but it is still applicable to higher education.

In it he asks an important question.  Critics of assessment in higher education have mostly focused on the poor quality of the data used in assessment.  Ireland is asking a different type of question: whether education is really susceptible to measurement at all.  Even if you could measure some facets of student learning with great precision, what would that tell you?

How do I measure your experience backpacking Europe for 2 months? How do I measure that piece of music you have been making in your basement for the last year? How do I measure how you deal with negative emotions? Or your humility or gratitude towards others? I could try, but we both know that it would be so imprecise that it would have little to no value.

The question facing education and the world for that matter is: Are we brave enough to trust that something has value without having the proof of measurement? Are we willing to live with that unknown?

He concludes with this:

We have become so beholden to these external metrics that we no longer trust ourselves. We have lost our instincts for what success feels like.

We need to accept that what is considered valuable is not the same for every person. We need to accept that time spent doing immeasurable tasks is often no less valuable than time spent toward measurable ones. When the ship starts to sink we need to stop blaming the tools and subjects of measurement and instead begin to suspect the measurement itself. We need to shift from asking how do we fix these numbers? to why are we measuring this?

In the words of Russell Ackoff, we need to stop doing the wrong thing righter and start doing the right thing.

OPMs and Assessment

I have often wondered why provosts and other senior administrators don’t complain more about assessment.  Some of them have to be clever enough to know that their institutions are wasting time and resources just to satisfy accreditors.  That they don’t is at least partly attributable to the need to show that their institutions are fully committed to a culture of assessment, but I don’t think that’s the whole story.

Assessment has another use.  It lets you claim that low-quality online programs are comparable in quality to your face-to-face programs.

So it’s interesting to see that California has begun to try to rein in the Online Program Managers (OPMs) that have driven much of the growth in online programs.  IHE had an article about it yesterday, and Bob Shireman is front and center in the process of bringing the OPMs to heel.

“I think we have a problem,” said Shireman of colleges working with OPMs. “I think traditional institutions have been too quick to hand over the keys … They’re doing this on extraneous programs that they don’t really care about so that they can make money for the rest of the university, and that’s a good cause. But they’re contracting with for-profit entities that have an incentive to be quite aggressive in their recruiting and are capable of charging a lot of money just because there’s federal aid there.”

Aside from worries about predatory recruitment practices, Carey is concerned that some OPMs are taking too much academic control, which would violate federal rules. “If you look at some early OPM contracts, they talk very explicitly about the curriculum. When you look at later contracts, that language is removed because they know they’re crossing a line they’re not supposed to cross — I certainly have questions about that.”

Read the whole thing.

An earlier post on the OPM menace.

At last, McSweeney’s delivers the Assessment Erotica you were waiting for

She swept her arm across the conference table, scattering rubrics with wild abandon. “Take me,” she cried. Like a pedagogical panther, she climbed on the table and spread her body across the painstakingly collected papers. She licked her lips, which tasted of Burt’s Bees. “I want you to use me until I’m as raw as this data.”
Shouldn’t that be “these data”?

Shocking News! PLOs don’t align with ILOs!

An article in IHE summarizes a study by Campus Labs (a maker of assessment software) that has revealed a worrisome mismatch between institutional learning outcomes and program learning outcomes.  The article also had this tidbit:

Perhaps the biggest concern cited by the researchers was the emphasis (or lack thereof) that institutions seemed to put on quantitative reasoning, which appeared far down the list of outcomes that colleges and programs sought to measure.

Interesting that a company whose business model requires assessment offices to suspend their disbelief about quantitative matters wants more education in that area.

Best comment comes from David Eubanks:

This is a nice study of the words used in learning outcomes statements. I particularly appreciated the inclusion of the reg-ex code used to categorize them. However, there is a large gap in reasoning here, viz, that all those words actually relate to what happens on campuses, including what students learn. The report mentions the importance of data quality, e.g. “The quality of analysis is first contingent upon the quality of data.” Here I mean all those numerical assessments of learning that the platform houses.

What is the quality of the actual assessment data? It is almost certainly very poor, given the constraints of small samples and non-expert analysis. Even summary statistics like number of samples per outcome/time and corresponding range and standard deviation of numerical measures would be helpful. In how many cases is it possible to detect student growth over time, which I would assume is the intended sign of learning? My guess is that Campus Labs could just incorporate a push-button random number generator and save a lot of people a lot of time in regrading papers with rubrics and uploading the numbers.

It’s ironic that one of the main findings is that there isn’t enough quantitative thinking.
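
If you are curious what the word-level categorization Eubanks mentions looks like in practice, here is a minimal sketch; the categories, patterns, and sample outcomes are my own invention for illustration, not the ones from the Campus Labs report.

```python
# Minimal sketch of regex-based categorization of learning outcome
# statements, in the spirit of the Campus Labs study. The categories,
# patterns, and sample outcomes below are invented for illustration.
import re
from collections import Counter

CATEGORIES = {
    "quantitative reasoning": re.compile(r"quantitat|numerac|statistic", re.I),
    "critical thinking":      re.compile(r"critical|analy[sz]", re.I),
    "communication":          re.compile(r"writ(ing|ten)|oral|communicat", re.I),
}

def categorize(outcome):
    """Return every category whose pattern matches the outcome statement."""
    return [name for name, pat in CATEGORIES.items() if pat.search(outcome)]

outcomes = [
    "Students will communicate effectively in written and oral forms.",
    "Students will analyze arguments and think critically.",
    "Students will apply statistical reasoning to real-world data.",
]

counts = Counter(cat for o in outcomes for cat in categorize(o))
print(counts.most_common())  # which outcome language dominates?
```

Counting matches is the easy part; as Eubanks points out, nothing in the word counts tells you whether the underlying assessment numbers mean anything.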

Fendrich’s essay available without the paywall

This is a comment from Laurie Fendrich.  I thought I would put it somewhere more visible than the comments section.  Her essay “A Pedagogical Straightjacket” was the subject of this post.  The essay was first published by the Chronicle, where it remains paywalled.  Below she provides a link to a copy on her website.

I’m Laurie Fendrich, the author of this essay, and I own the copyright to it. I think things today are worse than they were when my essay was originally posted, so I am happy to see it has legs.

You can find the essay in its entirety posted on my website:

http://www.lauriefendrich.com

IHE blogger responds to IHE’s “Harsh Take” on assessment

John Warner, who blogs for IHE, has posted a piece that proposes ways to do better assessment. He has some interesting ideas about measuring things, like student food security, that contribute to or detract from a “learning atmosphere.”  I am not sure about the details of what he proposes, but he makes an important point.  The current highly bureaucratized state of assessment ignores anything that is external to the curriculum and the classroom.  I would argue that most of higher ed’s serious problems lie elsewhere and are far more complex and structural than anything that can be addressed by adding an extra critical-thinking exercise or whatever your loop closer of choice is.

I like this passage, which perfectly captures the reality of assessment as it exists at the chalkboard face (or maybe the whiteboard face would now be more appropriate).

As a frontline instructor, my role in the larger assessment regime has been largely pro-forma and somewhat mysterious. I have been asked to randomly collect artifacts that fit the “learning objectives” for the course – learning objectives imposed from somewhere above me[1] – and hand them over to some other body that does something to them, and then I do it again.

Assessment as practiced at the department and institutional level could not have been less relevant to my day-to-day work.

Are students improving at writing? The answer is yes.

How do I know? Because I do…and because students say so.

Take my word for it, except of course, taking my word for it is apparently not enough.

[1] The learning objectives are usually vague and unobjectionable and easy enough to attach to something I was planning on doing anyway.