Eubanks on “Weaponized Learning Outcomes”

Dave Eubanks has a new guest post in Inside Higher Ed. In it he recounts his experience consulting on a court case in Tonga in which an accreditor tried to shut down a university over the alleged shortcomings of its assessment program. Unfortunately (for him), it sounds like he gave his testimony over Skype and so did not get a trip to Tonga out of it.

Dave, who is the Assistant VP for Institutional Effectiveness at Furman, is a mathematician. Thus, unlike most other people in the assessment trade, he is quite knowledgeable (and concerned) about stats, research methods, and data quality.

He worries that assessment’s reliance on learning outcomes statements (what Bob Shireman called “Blurbs with Verbs”) has turned assessment into an exercise in meaningless box checking.

The assessment bureaucracy—those periodic checkboxy reports—can only be justified if the formal learning outcome statements and their standardized assessments are superior to the native ways faculty know their students. Otherwise we could just ask faculty how the students are doing and use course registrations and grades for data. We could look at the table of contents to find the learning outcomes.

In the article he gives a list of the benefits that assessment offices provide. I am not sure that “benefits” is the word I would use to describe these things, but I am not someone who works with other assessment people on a regular basis.

The list:

The benefits your office probably already provides include:

  • Facilitation of external program review. This is the natural extension of faculty ways of knowing and is the most authentic way to understand a program, considering facilities, budgets, faculty numbers and qualifications, curricula, and reviewing samples of student work, for example.
  • Being an internal consultant for program development, e.g. leading discussions of curriculum coherence or identifying intuitive learning goals that span courses. This leads to more agreement about what students should be accomplishing, and helps the faculty’s natural language converge.
  • Summarizing or modeling data, when there’s enough of it to work with.
  • Coordinating assessment reporting for regulatory purposes using cookie-cutter forms, often entered into expensive software systems.

The last one is the most expensive and time-consuming but provides the least benefit to the institution. We need to get out of the checkbox-reporting business, and the sooner the better.

As a faculty member, my sense is that for most of us, the last bullet point is how we most often encounter the assessment office.

He would like to see assessment offices start taking grades seriously as a form of data.

Course grades don’t fit nicely into the learning outcome ideology. You may have been told that they provide only “indirect evidence” and are not useful as primary data for understanding learning. This is, of course, preposterous. Here are some questions you could start with:

  • What is the distribution of academic performance among students by demographic?

  • What is the distribution of course difficulty by program or courses within programs?

  • Does learning suffer when students wait to take introductory courses?

  • How reliable are grade assignments by program?

  • How well do grades predict other things you care about, like standardized tests or other assessment data, internship evaluations, and outcomes after graduation?

If assessment offices focused on this type of stuff, I think they would get a lot more support from faculty.
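
To give a sense of what that kind of work might look like in practice, here is a minimal sketch of the second question on the list (course difficulty by program). It assumes a hypothetical registrar extract with program, course, and grade_points columns; the file name and column names are placeholders of my own, not anything from Eubanks’s article.

```python
# Minimal sketch: mean grade (a crude proxy for course "difficulty")
# by program and course. The file name and column names below are
# hypothetical placeholders, not from Eubanks's article.
import pandas as pd

# Assumed columns: program, course, grade_points (on a 0.0-4.0 scale)
grades = pd.read_csv("registrar_extract.csv")

# Mean grade and enrollment count for each course, grouped by program
difficulty = (
    grades.groupby(["program", "course"])["grade_points"]
    .agg(mean_grade="mean", n_students="count")
    .sort_values("mean_grade")
)

# The lowest-graded courses are, roughly speaking, the "hardest" ones
print(difficulty.head(10))
```

None of this requires new outcome statements or rubrics; the registrar already has the data.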

So, read the article and visit his blog.


Shireman on OPMs and the law in Inside Higher Ed

Bob Shireman of the Century Foundation, author of the classic critique of assessment “SLO Madness” and an occasional Bad Assessment contributor, has just published a long essay on the legality of a basic feature of the OPM business model. Online Program Managers are companies that, depending on who you talk to, either help colleges develop, deliver, and market online programs or use the names and accreditation of established, often public, universities as cover under which to run low-quality, high-cost online programs. The way this most often works is that the OPM gets its money by taking a percentage (usually 50% or more) of the tuition.

The most common function of an OPM, especially once the initial course development is done, is recruiting.   Because the OPM’s income stream depends on the number of students in the program, it has a strong incentive to be as aggressive as possible about recruiting.

The Higher Education Act, the federal law that sets the rules for most of higher education, has this to say about the practice of paying recruiters:

“The Institution will not provide any commission, bonus, or other incentive payment based directly or indirectly on success in securing enrollments or financial aid to any persons or entities engaged in any student recruiting …”

On the face of it, the current OPM compensation model is illegal.

The practice continues because of two sets of guidance, issued first by the Bush and then by the Obama Departments of Education. Shireman worked for the Obama administration and says that the reason they ended up allowing this compensation model was that they saw the OPMs as less bad than the for-profit colleges they were trying to bring under control. It seems he now questions the wisdom of that decision.

Since the Department of Education’s 2011 guidance on this, it has become clear that the OPMs are not the “white-hat warriors” Shireman and his colleagues thought they were.

In fact, one article and report after another (including an excellent one from the Century Foundation) has made clear that the OPMs are a menace that exploits both students and universities.

Shireman concludes with this:

But one reform does not need to await action in Congress: enforcing the statutory ban on incentive compensation to contractors. Enforcement could begin at any time, in various ways. The department, under this administration or another, could decide to rescind or revise the guidance, recognizing in hindsight that it opened up truck-size loopholes inconsistent with the statute. Or the question of the validity of the guidance could come before a judge, if a harmed student or faculty member, or a competitor school, were to file a suit against a school that is relying on the guidance. Under certain circumstances, a suit filed against the department could lead to a judge throwing out the guidance as contrary to the plain language of the statute.

Shireman has a long history of actually getting policies changed in the real world. If I were running an OPM, I’d be a little nervous now that Bob and the Century Foundation seem to be looking into the shadier corners of the OPM business model.

A Foolish Consistency: The Culture of Assessment in Higher Education

I recently learned of an article by Ryan McIlhenny in Confluence. People who know my work will be aware that I am not much of a theory guy. McIlhenny is, and his article uses Nietzsche, Adorno, and Foucault to attack the current form of assessment in higher education.

There are a couple of things I find interesting about this. He also makes much use of Jerry Muller’s The Tyranny of Metrics. Muller strikes me as a right-of-center sort of person who would probably not make a lot of use of the Frankfurt School in his work, but here he is being quoted favorably by a self-described critical theory person.

So, both left-of-center and right-of-center academics see assessment as a threat to the integrity of higher education. Who does like the current culture of assessment? Often it’s people trained in schools of education.

From the article:

And institutions are seeing an increasing number of holders of non-Ph.D. doctorates (e.g., Ed.D.s) filling administrative slots. Most Ed.D. programs, for instance, writes Dewitt Scott, “focus specially on preparing students to assume formal administrative leadership positions in education institutions.”[11] Further, many individuals with such a degree who then take an administrative position do so with very little experience in administration itself and thus, as if to jump on to a highspeed train, inherit the pressures of demonstrating the successfulness of their institution. Professional doctorates, unlike traditional Ph.D.s, are focused more on deepening the practices within a given field, tending toward an urgency not only to produce innovative practices, while often relegating theory, but also to produce immediate results.

This, the increasing prominence of people from ed schools in assessment and administration, is something I have been thinking about a lot lately, and I expect to have an article out on the topic soon. But McIlhenny has anticipated some of what I have to say.

It’s a long article, and it’s hard to find a single passage that is representative of the larger argument, but this one comes close:

So how might we break the obsessive habit of assessment and return to a posture of openness and patience integral to the production of knowledge? One strategy would be to confront the neoliberal ideology that has perverted higher education. Institutional leaders need to gain the courage to break the “foolish consistency”—the metric fixation exacerbated by a consumer culture—that has come to dominate the life of the mind. I’m not sure how to accomplish such a task without some sort of clamorous but unified barrage of criticism. A second more practical solution—one that will return us to a healthier approach to assessment—would be for faculty members to return to the teaching and scholarly outcomes articulated by their own disciplines.[43] Faculty members regularly visit such outcomes as they develop those for their courses and departments. Institutional and department outcomes should be directed from the outcomes specific to an academic discipline. Outcomes should be generated from the bottom, from departments, up. Faculty members must take the lead in articulating institutional outcomes through the use of the outcomes in their respective disciplines.[44 ]In an ideal situation, faculty bodies should have sovereignty over curriculum development, which should include assessment and program review. The sad reality is, however, that administrations have relegated faculty members to advisory positions, which is tantamount to taking away their responsibilities as primary judges of student learning.[45]

New Version of Stop Sacrificing Children to Moloch

Cultural and Pedagogical Inquiry  has published a new version of “Stop Sacrificing Children to Moloch.”

The new version uses some of Jim Scott’s ideas about the difference between expert knowledge and what he calls metis, a Greek word for the sort of seat-of-the-pants knowledge that comes from experience.


The original, which is by far the most widely read post ever on Bad Assessment, is below:


Stop Sacrificing Children to Moloch

As someone who has been publicly critical of learning outcomes assessment for a long time, one of the questions I am often asked is: “If you are so opposed to assessment, what would you replace it with?” By way of an answer I have started resorting to this fable:


Imagine that you live in a Bronze Age village. You and everyone else in the village depend for your livelihood on subsistence farming, so you have a keen interest in the success of your crops. Because of that, you have developed a good sense of when to plant what crop, what types of soil work best with specific crops, when to weed, when to harvest, and so on. It’s not scientific knowledge, but it works pretty well. Still, you are always on the lookout for ways of improving your yields.


One day a group of priests show up. They hold an information session at which they advise you that in the imperial capital (which, of course, taxes your crops), the authorities are concerned that the crop yields in the villages (and thus the tax revenues) are not what they could be. They would like to see that change.


The priests point out that most of the other villages have begun sacrificing children to Moloch in order to improve their yields. Reluctantly, your village agrees to start sacrificing children to Moloch in the hope that he will reward you with better harvests.


More on OPMs

The Hechinger Report has just done a lengthy piece on OPMs.  It mentions the Century Foundation report that came out recently and that I discussed here, but it seems too involved to have been triggered by just that report.

One issue it looks into that the Century Foundation report did not address in the same detail is whether these programs, whatever their other faults might be, are actually a good way of raising money.

Colleges often enter into these deals because they need the money. As enrollments and state appropriations have shrunk, people have started looking for new sources of income. This process is the subject of Chris Newfield’s book The Great Mistake. Newfield does not actually address the OPM issue, but none of the paths he sees public universities following in search of new revenue ever seems to fix the money problems of the schools that attempt them. OPMs, it seems, are no better than research or public-private partnerships at generating revenue.

From the article:

Gephardt, of Moody’s, said he’s “skeptical” that online graduate degree programs offered through an OPM could provide a “net revenue salvation” that would compensate for other deep revenue challenges.

As Seymour noted: “For every dollar that flows to the OPM, that’s a dollar less in revenue that can be spent on other aspects that the institution may need. Over time, is the investment in the work that the OPM is doing and the revenue that online programs will bring in worth the offset?”

Howard Lurie, principal analyst at Eduventures Research, puts it more bluntly: “The companies have done well. The schools? It depends,” he said.

OPMs are back in the news

The Century Foundation has just released a report about Online Program Managers that contains both a critique of the industry and advice for colleges that are planning to seek the services of an OPM.

The report is a wake-up call about the extent of the industry’s control over the online sector and the degree to which these businesses have succeeded in using not-for-profit schools as fronts for the extraction of large sums of tuition money from students and universities.

We learned that these partnerships are as bad as many had suspected, if not worse: in return for some superficial convenience, public universities in every corner of the United States had been putting their for-profit contractors in the driver’s seat in nearly every respect, including financial considerations. More often than not, more than half of the programs’ tuition revenue goes straight to the contractors.

One of the things I have often wondered is why more colleges don’t dump their OPMs and try to run their own online programs. Now I know: long-term contracts are set up to make it hard to switch providers or strike off on your own.

Boise State University, for example, must give Academic Partnerships (AP), its contracted OPM, two years’ notice to keep its contract from auto-renewing for another three years. What’s more, if the agreement reaches its full five-year term and Boise State manages to end the contract, the school must continue paying Academic Partnerships for each student it secured that is still taking online courses at the school. And perhaps worst of all, if Boise State terminates the agreement early, it can’t work with any other similar provider for the same programs until after what would have been the fifth anniversary of the contract. This arrangement leaves Boise State with no recovery options other than completely abandoning a program and its students if it wants to alter who or how the program is managed.

If more than half of the tuition in these programs is going to the OPM but the universities are “teaching” the courses, then, unless the universities are subsidizing these programs, the amount they spend on instruction must be much lower in these programs than in face-to-face or university-run online programs.

How this does not affect program quality is beyond me. Is it really possible to have programs that spend so little on instruction that they can give up half the tuition money and still make a profit (which is why universities have these programs, after all) without reducing the quality of the program?
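
To make the arithmetic concrete, here is a back-of-the-envelope sketch. Every number in it (tuition, the OPM’s revenue share, overhead) is a made-up assumption for illustration, not a figure from the report.

```python
# Back-of-the-envelope sketch of the OPM revenue split.
# All numbers are hypothetical and chosen only for illustration.

tuition_per_student = 30_000   # assumed program tuition
opm_share = 0.50               # assumed OPM cut of tuition revenue
overhead_share = 0.25          # assumed non-instructional university overhead

opm_cut = tuition_per_student * opm_share
university_keeps = tuition_per_student - opm_cut
instruction_budget = university_keeps - tuition_per_student * overhead_share

print(f"OPM receives:              ${opm_cut:,.0f}")
print(f"University keeps:          ${university_keeps:,.0f}")
print(f"Left over for instruction: ${instruction_budget:,.0f}")
```

Under these assumed numbers, only about a quarter of the tuition would be left for instruction, which is exactly why the question above seems worth asking.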

Surely the assessments of these courses and the accreditors that vet them would catch these problems.

If it’s really possible to run a quality online program for a fraction of the cost of a traditional program, then surely we need to bring those efficiencies into traditional classes and cut their tuition by the fraction that we pay to the OPMs in online programs.

Or if it’s really possible to run a lousy online program that still produces assessment data that satisfies accreditors, then something is seriously wrong with the assessment and accreditation process.

My gut says it’s the latter.


Indifference to evidence

IHE has an article today looking at a study of how universities make technology purchases. The short version is that they seem to rely more on gut instinct and hype than on actual research into what works and what does not.

From the article:

…researchers found a wide range of approaches to selecting new technology, but few made use of strong scientific evidence to show whether that technology has a beneficial outcome.

Peer-reviewed external research was mentioned by just a fifth of interviewees.


Incorporating externally produced research into decision-making processes in higher ed is “difficult,” the authors said.

Fortunately, as with assessment,  bad, internally produced research is cheap and easy to come by.

“Externally produced, rigorous research, such as randomized controlled trials (RCTs), is often expensive, may take too long to inform pressing decisions, and is often difficult to generalize to a decision maker’s context,” they said. On the other hand, “locally relevant, internal research, such as faculty and student surveys or pilot studies, may be more feasible to implement and may provide more timely information” but may be “less reliable for providing solid answers to questions about effectiveness for improving academic outcomes.”

Hmm…sounds like technology purchasing is using what is known in the assessment business as “actionable data.” It may not be valid or reliable or meaningful, but it’s available, and it lets you do what you wanted to do anyway, which was to buy some cool new software or make faculty fill out forms (why not both?).

“I was expecting to find more rational decision-making processes,” said Fiona Hollands, associate director of the Center for Benefit-Cost Studies of Education at Teachers College, who co-wrote the study. “I thought more institutions would start with a need or a problem and then figure out the solution, rather than starting with solutions and finding problems to solve with them.”

“There are places that literally scan the market looking for new innovative technologies, bring it in, play with it in their technology units and then try to find a use for it on campus. I find that a bit absurd,” said Hollands.

I have actually spent some time thinking about software purchases and why they seem to be the go-to solution for all problems on campus. I have concluded that most of the big problems on campus (recruitment, retention, showing that students are learning) are so complex and involve so many externalities that they are virtually insoluble without real structural change. Most administrators know this, but they also have to appear to be addressing these problems.

Buying new software is the easiest way to signal that you are taking a problem seriously. That technology purchases create multiple opportunities to add lines to CVs might be a factor too. Someone gets to lead the town meetings and workshops where “stakeholders” discuss the problem. Then someone gets to be the person who oversees the actual purchase of the software. This is the best job because it involves being wined and dined by vendors. Then, once the software has been purchased, someone gets to oversee the migration, implementation, and, of course, training.

At this point the software has already done its job. If anyone asks about retention or assessment or whatever, the vice president of whatever can say that steps have been taken. He and a bunch of other people will have new lines on their CVs. Best of all, if they bought the right software, they may have shifted most of the burden of work associated with the issue to people outside their unit, who can now enter all the retention or assessment documentation directly into the new software themselves.

So maybe it’s not that surprising that universities are not going to great lengths to test whether the technology they purchase solves the problems it purports to solve.  That may not have been the point of the purchase in the first place.


Eubanks says scrap the assessment machine

Dave Eubanks, a long-time critic of assessment’s indifference to data quality, has an excellent guest post today on John Warner’s “Just Visiting” blog at IHE.

In it he explains his path from assessment believer to skeptic. Very few people in the assessment world actually have training in stats, math, or data science. Eubanks is a mathematician by training.

It became clear that the peer-review version of assessment is almost perfectly designed to fail as data-generating exercises. If you sat down to “backwards design” a plan to waste enormous amounts of time and money, you could hardly do better than the assessment report-writing machine that we have now.

He also points to new guidance from the Department of Education that directs accreditors and assessment offices to stop wasting everyone’s time and money.

The Department of Education recently weighed in on this topic. They rewrote the handbook for accreditors to encourage more freedom in meeting student achievement standards (pg. 9):

These standards may include quantitative or qualitative measures, or both. They may rely on surveys or other methods for evaluating qualitative outcomes, including but not limited to student satisfaction surveys, alumni satisfaction surveys, or employer satisfaction surveys.

This new language is important because it challenges two of the report-writing machine’s rules, viz: that course grades don’t measure learning, and that survey data (“indirect assessment”) isn’t useful on its own.

More explicit language telling peer reviewers to back off can be found in the proposed rules for accreditors:

Assessment models that employ the use of complicated rubrics and expensive tracking and reporting software further add to the cost of accreditation. The Department does not maintain that assessment regimes should be so highly prescriptive or technical that institutions or programs should feel required to hire outside consultants to maintain accreditation. Rather than a “one-size-fits-all” method for review, the Department maintains that peer reviewers should be more open to evaluating the materials an institution or program presents and considering them in the context of the institution’s mission, students served, and resources available. (Section 602.17b, pg. 104)

In other words, scrap the machine.

My guess is that the assessment machine will rumble on for a long time. Too many people make upper-middle-class incomes from it for it to just go away, and too many senior administrators have been pretending to believe in its efficacy for too long to reverse course without eating a lot of crow.

Joseph Heller saw it coming…

Somehow I got through both high school and college without reading Catch-22. I am now about halfway through it, and it is a truly great book.

In Chapter 11 Captain Black, the squadron intelligence officer, has decided to distinguish himself from other officers by having his men sign loyalty oaths.  When other officers copy this, he finds ever more situations in which his men must sign loyalty oaths.

Mentally substitute either “assessment report” or “rubric” for “loyalty oath” as you read this passage.  It perfectly captures the absurdity of attempting to force people to buy in to a project in which they are compelled to participate.  Captain Black’s expansion of his requirements perfectly depicts the bureaucratic need to constantly extend any project.

“OK, you’re assessing student learning in their courses, but what about co-curricular learning assessment?”

From the book:

All the enlisted men and officers on combat duty had to sign a loyalty oath to get their map cases from the intelligence tent, a second loyalty oath to receive their flak suits and parachutes from the parachute tent, a third loyalty oath for Lieutenant Balkington, the motor vehicle officer, to be allowed to ride from the squadron to the airfield in one of the trucks. Every time they turned around there was another loyalty oath to be signed…

Without realizing how it had come about, the combat men in the squadron discovered themselves dominated by the administrators appointed to serve them.  They were bullied, insulted, harassed and shoved about all day long by one after the other.  When they voiced their objection, Captain Black replied that people who were loyal would not mind signing all the loyalty oaths they had to. To all the people who questioned the effectiveness of loyalty oaths, he replied that people who really did owe allegiance to their country would be proud to pledge it as often as he forced them to…

“The important thing is to keep them pledging,” he explained to his cohorts.  “It doesn’t matter whether they mean it or not. That’s why they make little kids pledge allegiance even before they know what “pledge” and “allegiance” mean.”

Comment from Rick Weber on Educationism

I was on vacation in Vermont last week and left my computer at home. I just now realized that this comment had been languishing in the approval queue for some time.

It’s a response to this post about “educationism.”


Money sounds nicer than assessment (because it is), but I doubt assessment would be a problem except for the money.

It looks to me like accreditation reports are an attempt to inject accountability into a subsidized industry selling a non-commodity product. And “assessment culture” is (as far as I know… which isn’t that far) an outgrowth of that.

Here’s a hypothesis (please correct me where I’m wrong!): The GI Bill and HEA made accreditors the gatekeepers to federal money. As that pile of rents increased, the ranks of administration grew. Throw in good old-fashioned American Calvinism and you’ve got a bunch of well-meaning bureaucrats looking for something to do. Administrators see QA engineers doing a good job producing commodity goods and don’t quite realize that, despite their reports, education isn’t something that fits into neat little boxes. At this point, we know the problem at the macro scale, but the micro-scale solution requires administrators to go out on a limb and risk looking like an idiot by breaking with the herd.