My New Piece on the Lumina Foundation and Assessment

If there is a point source for bad ideas about higher education, it’s the Lumina Foundation. They advocate both a massive expansion of access to higher education and higher graduation rates. So the plan is for colleges to take on huge numbers of new students and to ensure that they all graduate. They also want those students to get a rigorous education. How to make sure that happens? Lots of assessment.

Read the whole thing.

Maybe We Should Listen to Our Students’ Assessment of Online Education

The younger of my two sons was fascinated by mythology as a child. Many children seem to go through a dinosaur phase, but for him Greco-Roman and Norse mythology filled that niche. Later, in high school, he discovered and devoured the surprisingly large body of young adult genre fiction with mythological themes. I was originally trained in classics, so I was discreetly thrilled by his interest. When he went to college he chose to study engineering, which allowed for very little exposure to the humanities. For his one-course humanities requirement, he chose a class on Greco-Roman myth. I had switched from biology to classics because of an epiphany in a course I took to satisfy a language requirement, so I wondered whether he might have a similar experience in this class. Partway through his first semester I asked how his myth class was going. “Pretty boring,” he said. “It’s just an online class.”

 

I thought of that when I read Joshua Kim’s critique of Jeff Kolnick’s piece “Generals Die in Bed.” Interestingly, Kim agrees with much of what Kolnick said about the health and safety implications of having students in classrooms this fall, but he objects to Kolnick’s assertion that online education is inferior to traditional, face-to-face education. Kim says: “Reviews of the empirical literature consistently find ‘no significant difference’ across instructional modalities.” As Donald Larsson points out in a letter to the editor, the article Kim cites here is actually quite ambivalent about the quality of this research and does not justify Kim’s sweeping claims about the equivalence of the two approaches to education.

But it’s not just the limited nature of the research and the paucity of randomized controlled studies that should concern us.  In most of these studies of the two “modalities,” the point of comparison is learning outcomes.  The assumption seems to be that the value of a course or program can be boiled down to the extent to which students meet the course or program’s stated learning outcomes.  Anyone who has actually written learning outcomes knows that they are usually incredibly anodyne and insipid statements of the most superficial (but easily measured) aspects of a course.  Even if the literature Kim cites is correct that online courses are roughly the same as face-to-face courses when it comes to meeting learning outcomes, that is not the same thing as saying the courses are really equivalent.

There are all kinds of things going on in a class that are not captured by learning outcomes.  There is intellectual excitement, curiosity, community, engagement, doubt, skepticism, boredom, humor and who knows what else, that are not likely to be captured or measured by studies that depend on assessments of student learning.

I have no doubt that the online course my son took met its learning objectives. (Has anyone ever taught a course that did not?) Where it failed is that it had no spark to it. It failed to engage someone with a deep and abiding interest in its subject. It would be really interesting to survey students about courses that inspired them to change majors. I doubt that many students have identified their true callings or found a mentor in online classes.

In fact, it’s surprising that Kim singled out Kolnick for his transgressions against techno-utopianism. Certainly students and parents have been voicing their doubts lately about the value of online education. That students want their tuition discounted if they have to take online courses, and seem to be considering sitting out the fall rather than spending their money on online classes, suggests that they don’t share Kim’s optimism about online education. That our students, who surely have far more direct experience with the realities of online education than most of us, see it as inferior should tell us something.

A college degree says more about the person who earns it than just her capacity to meet a bunch of learning outcomes. Bryan Caplan has made a convincing case that much of the value of a college degree comes from things that have little to do with learning. He argues that the value of college derives partly (maybe mostly) from what it says about your character and habits. It shows that you can show up on time, are reasonably intelligent, and are moderately conformist.

And it’s not just students who see online degrees as inferior.  The unwillingness of colleges to indicate on transcripts that a course or a degree was taken entirely online suggests that colleges worry that employers harbor the same doubts about online education that students do.

I have a proposal for the advocates of online education. Once all this is over and people can actually choose between face-to-face and online, let’s require that college transcripts indicate whether a course or degree was taken entirely online.  If the value of those courses and programs is the same (which may well be the case for some disciplines), then students and employers should eventually treat them as equivalent.  If they don’t, well, maybe the “modality” that is having trouble attracting students or getting them hired will have to up its game or discount its programs.

Students have long distinguished between online classes and “real” classes. That both forms of delivery might be equally able to meet learning outcomes seems not to have affected their assessment of the value or effectiveness of these classes. This ought to give pause to the whole assessment movement and to the accreditors that enable it. That students, the people who have lived experience of online education, find the assurances of “experts” about the quality of these courses unconvincing should not be dismissed lightly. Rather, it should make us all question the value of learning outcomes assessment and the knowledge and expertise of the people who advocate for both online education and the assessment of learning outcomes as the be-all and end-all of higher education.

For once, let’s listen to our students.

Education Jargon Generator

Are you tired of coming up with “content” for your assessment reports? Not sure how to communicate with assessment professionals in a language they will understand? Help has arrived. There is now an Education Jargon Generator. Just push a button and you get randomly arranged bits of eduspeak pulled from real-life material produced by actual professional educators.

 

Examples:

We will engage data driven technologies through the experiential based learning process.

We will empower holistic models for our 21st Century learners.

We will enhance process-based policies across the curricular areas.

 

Your next assessment report will write itself.  Enjoy.
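The mechanics are about what you would expect: pick a verb, a buzzy modifier, and a noun at random and bolt them together. A minimal sketch of how such a generator might work, with hypothetical word lists standing in for the real harvested phrases:

```python
import random

# Hypothetical word lists -- the real generator draws its phrases from
# material produced by actual professional educators.
VERBS = ["engage", "empower", "enhance", "leverage", "facilitate"]
MODIFIERS = ["data-driven", "holistic", "process-based",
             "experiential-based", "21st Century"]
NOUNS = ["technologies", "models", "policies",
         "learning processes", "curricular areas"]

def generate_jargon(rng=random):
    """Assemble one randomly arranged sentence of eduspeak."""
    return (f"We will {rng.choice(VERBS)} {rng.choice(MODIFIERS)} "
            f"{rng.choice(NOUNS)} across the {rng.choice(MODIFIERS)} "
            f"{rng.choice(NOUNS)}.")

print(generate_jargon())
```

A dozen calls in a loop and your assessment report really does write itself.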

 

 

Rubrics, Writing and Assessment

My only experience using rubrics to assess writing came when I was an AP World History reader. I lasted only two years. It’s not easy to read hundreds of examples of the same essay, over and over, for a week. The best description I have heard of an AP reading is that it is a cross between summer camp and the gulag. The summer camp part is what happens when the grading ends and everyone heads out to play tennis or go on group runs in the evening. The gulag part is when you are stuck at a desk, reading essays through the lens of a rubric.

Back when I was a reader the formula was: 2 points for a thesis statement, 2 points for use of evidence, 2 points for “point of view” (AP-talk for rudimentary source criticism), 2 points for something that now escapes me, and then, if all those were met, up to 2 additional points based on a subjective assessment of the quality of the essay.
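That scheme is mechanical enough to write down. A minimal sketch, assuming the subjective bonus ranged from 0 to 2 and with the forgotten fourth criterion left as a placeholder flag:

```python
def score_essay(has_thesis, uses_evidence, shows_point_of_view,
                meets_fourth_criterion, quality_bonus=0):
    """Score an AP essay under the rubric as remembered: two points per
    box ticked, plus up to two subjective points -- awarded only if
    every box was ticked."""
    boxes = [has_thesis, uses_evidence, shows_point_of_view,
             meets_fourth_criterion]
    score = 2 * sum(boxes)
    if all(boxes):
        # Subjective quality bonus, clamped to the 0-2 range.
        score += min(max(quality_bonus, 0), 2)
    return score

# A semiliterate essay that nonetheless ticks every box earns 8 points.
print(score_essay(True, True, True, True))      # 8
print(score_essay(True, True, True, True, 2))   # 10
```

Note how little room the subjective part gets: quality only enters at all once every box is ticked, which is exactly why the drab, box-ticking essay and the good one are nearly indistinguishable on the score sheet.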

It worked reasonably well. Occasionally you would get a semiliterate essay that nonetheless ticked all the boxes, and you had to give it 8 points. But mostly the essays that I read were just okay; they were drab, formulaic efforts to match the requirements of the rubric, which are well known to AP teachers and carefully taught in AP courses.

The only interesting essay I read in those two summers was from a student who clearly had no interest in the rubric or in doing well on the test.  Instead, she wrote an essay about herself.  I don’t remember all the details, but in effect she was dating a younger schoolmate and both her parents and the boy’s parents disapproved.  She described her desire to finally turn eighteen and be free to make her own choices and ended by confiding that she was pregnant, something that neither her parents nor her boyfriend knew.  I have no idea whether this was true or if it was a work of fiction.  But it was well-written in the sense that it made its point and it’s the only AP essay I can still remember twenty years later.

I spend a lot of time thinking about writing, my own and other people’s—both my students’ and that of writers I admire.  (If I could steal anyone’s style and voice it would be Caitlin Flanagan’s.)

So I read with interest John Warner’s blog post in IHE called “Why Can’t My New Employees Write?” In it he quotes the work of Michelle Kenney, who says this about the rubric-heavy teaching of writing that is characteristic of the American high school:

This approach results in what high school teacher Michelle Kenney calls “good enough writing…formulaic essays devoid of creativity and well-developed critical thinking, yet proficient enough to pass a test, raise school graduation rates, or increase the number of students receiving AP credit.”

Warner compares the use of rubrics in the teaching of writing to training wheels for beginning cyclists. Training wheels allow novice riders to experience something that feels like riding a two-wheeler but is fundamentally different, and, as it turns out, training wheels actually hinder children’s ability to learn to balance a bike. The best way to teach people to ride is what he calls a balance bike (which sounds like a velocipede): a bike without pedals that riders balance while pushing along with their feet. You learn to balance the balance bike, and then it’s easy to learn to pedal a real bike.

Of course, assessment is part of what drives the use of rubrics.  Nothing does more to create the illusion of objectivity and scientific accuracy in measuring student learning than a rubric. The VALUE rubrics championed by the AAC&U (and probably the Lumina Foundation too, which seems to be the source of many bad ideas) are an excellent example of this type of thinking.

Down in the comments section of Warner’s IHE post is this comment from Warner himself:

I think it’s about anxiety, a belief that if we can’t measure it, it’s not important. I believe it’s getting worse because we have more tools of measurement, more data, and there’s a borderline magical faith that this data will show us the way.

Just as no one shows up for a bike race with training wheels on, no one uses a rubric for writing that is really intended to communicate something to a real audience.  The author of the AP essay I mentioned broke with the rubric and said something she actually wanted someone else to understand.  I have had many essays and articles rejected in my writing life (including the best essay I have ever written, which has been rejected repeatedly by all kinds of publications).  Not once has an editor told me that my essay scored 6 out of 10 on their rubric and thus would not be published.

Real writing is not done with rubrics.  Students should learn to write by doing real writing.

Plague Diary II: The Costs of Compliance

One of the earliest warning signs that the coronavirus was loose in the US came in late February, when researchers studying the prevalence of flu in Seattle tested old samples for coronavirus. They found that some of the samples they had collected earlier in the year contained the virus, so they reported this finding to health authorities. They were ordered to immediately cease testing their samples because their research subjects had only agreed to be tested for the flu, not coronavirus, and their lab was a research lab and thus not certified for clinical work.

As the New York Times piece that broke the story put it, “the Seattle Flu Study illustrates how existing regulations and red tape — sometimes designed to protect privacy and health — have impeded the rapid rollout of testing nationally… Faced with a public health emergency on a scale potentially not seen in a century, the United States has not responded nimbly.”

One of the lessons of the fight against the coronavirus has been that in a crisis, regulation can be just as harmful as it is helpful. Reason.com recently ran an article with the title “Coronavirus: 10 Public Safety Regulations Set Aside in the Name of Public Safety.” It details restrictions on who can make test kits and masks and other protective equipment that have been rescinded so that we can get more test kits, masks, and protective equipment.

Similarly, some of the regulations governing the meat packing industry are being relaxed so that increased production at plants that remain open can make up for lost capacity at plants that have been closed by coronavirus outbreaks.   The increase in the pace of production will make it harder for food safety inspectors to do their work, but the calculus seems to be that keeping the food supply chain functioning is worth the risk to food safety.

Keeping the higher education system functioning during the crisis may not be as immediately essential as supplies of masks and ventilators and food are, but in the long run how we respond now will shape the future of higher education and that in turn will affect how the recovery plays out.

So, has anyone contemplated the possibility that regulations meant to protect students in normal times might now be hindering our ability to educate them? Not really. The Higher Learning Commission, for example, has said that it wants to be flexible, but that institutions should inform it of any “adjustments.” In other words, be flexible, but don’t think you can stop documenting everything you do.

Likewise, the assessment world, which enforces the biggest portion of the reporting and documentation burden that accreditors place on colleges, shows no signs of backing off.

For my sins, I subscribe to the ASSESS listserv. The other day an ominous email from the list landed in my inbox. It is a good example of how the assessors are trying to position themselves in the current crisis. The email, which had as its subject line “Measure twice, cut what?”, offered a link to an article on a business news site that suggested using “robust marketing measurement capability” to ensure that firms “cut in the right places…while preserving the company’s ability to adapt and thrive in the future.” The email’s author suggested that this logic should be applied to universities too. Thus, as we

look at the potential cuts (programmatic and otherwise) that institutions will be making, it is essential that these decisions be made using good assessment data on the things that are most important — student learning, our mission(s), and overall institutional outcomes.

 

The assessment voice has never been more important than now!!!

It’s certainly the case that universities will be making cuts in the near future.  It’s also true that those cuts should, to the extent possible, pare away the unnecessary, the frivolous, and the counterproductive parts of institutions first.  We are going to face serious decisions about how we spend our money. How we choose to allocate scarce resources in the next year or two may permanently reshape higher education.  So having a clear sense of mission will be essential and having valid data to work with would make that task easier.

Unfortunately, assessment offices won’t be much help here. In fact, assessment offices are the last places you should look for valid data. I pointed this out years ago, and assessment insider Dave Eubanks made a more sophisticated version of the same point a little later. The only effect this has had on the world of assessment is that they have shifted from talking about “valid data” to talking about “actionable data.” But no one ever offers an explanation for why we would want to take action based on data that they tacitly acknowledge are not valid.

In fact, I expect most administrators charged with making these difficult decisions have not even considered using assessment data to decide where to make cuts. “Looks like all three philosophy students score ‘super-di-dooper’ on their critical thinking assessment and the average for business students is ‘meets expectations.’ Oh well, I guess we better cut the marketing program…”

That administrators are unlikely to seek out assessment data to help them make decisions now reflects their awareness that those data are basically meaningless when it comes to judging the quality of a program or its effects on students.

So why have they supported and funded the collection and pseudo-analysis of all those data, followed by the Kabuki theatre of “loop closing,” for so many years? You know why: it’s because the accreditors demand it.

Assessment is part of a costly culture of compliance that has grown dramatically in the last decade. Colleges are expected to be able to demonstrate that they are in compliance with an extraordinarily large number of government regulations. However worthy the aims of the rules that drive us to have FERPA trainings, Title IX offices, Diversity offices, IRBs, IBCs, IACUCs, and so on, each of those regulatory regimes creates costs. If you have ever wondered why there seem to be so many more assistant vice presidents and associate deans now, one of the reasons is that there are now a lot more rules to follow, and in most cases the burden of documenting that compliance falls on the university.

The 800-pound gorilla of higher education compliance culture is assessment. Something that was a fringe activity when I got my first teaching job twenty-some years ago has become the all-consuming, do-or-die centerpiece of accreditation. Somehow accreditors will allow universities to fritter away absurd amounts of money on athletics and other vanity projects; permit private, for-profit online program managers to flog their expensive, low-quality graduate programs while hiding behind the names of public, non-profit universities; and turn a blind eye to the mass exploitation of adjuncts. Colleges can even get away with firing people for refusing to participate in the faking of assessment data. But God help you if you can’t produce the requisite tonnage of assessment reports.

Of the many compliance requirements that universities face, none is as all-encompassing and pervasive as assessment. IRB affects some, but not all, research. Title IX affects some, but not all, students and faculty. Assessment is in every class, every program, and it demands the time of every wretch on campus who stands in front of a class. The budgeted costs of running assessment offices may be modest at most universities, but the costs of assessment in faculty time and attention exceed any other compliance requirement. And for those of you who don’t teach, I have bad news. The assessors have you in their sights too. The next big thing in mission creep is co-curricular assessment, which extends the non-benefits of assessment to activities like intramural soccer and the Quidditch team.

Just as the FDA has had to get out of the way of manufacturers of medical supplies in order to get more medical supplies and the ATF has had to look the other way to let distilleries make hand sanitizer, maybe the best thing that accreditors and the assessors could do for higher education right now is to promise to take the next year or two off.  Universities are going to have to make some wrenching and unprecedented changes in how they operate.  They will have to cut costs and rid themselves of the unproductive parts of their operations. They will need flexibility and they will need to move fast.   What better way to reduce costs, increase flexibility, and enhance morale than to take two years off from assessment?

If we are going to risk a little more E. coli in our chicken in order to have chicken at all, surely we can take a chance that the wrong action verbs are going to be deployed in some of our learning outcomes statements. I am willing to risk it if it means we can offer more education to more students. When the zombies eventually shuffle off and we stagger out of our Zoom sessions and into the light, the survivors can decide whether and how to restart assessment.

Maybe that article was right; by cutting “in the right places” now, we might just preserve “our ability to adapt and thrive in the future.”

Thinking Like Shakespeare

Update: My review of Newstok’s Thinking Like Shakespeare has now been posted here.

Inside Higher Ed posted an interview with Scott Newstok of Rhodes College today. It’s a discussion of his new book How to Think Like Shakespeare: Lessons from a Renaissance Education, which just came out from Princeton University Press. We recently received a review copy of his book from Princeton. Watch this space for a review in the next couple of weeks. It will be our second review (the first is here), and so far both of our reviews have been of books from Princeton. Other publishers of good higher ed books, Johns Hopkins, for example, please take note and start sending review copies.

Plague Diary I

Way back on March 10th, which now seems like long ago, The Atlantic ran an article called “There Are No Libertarians in a Pandemic.” It was pretty lazy in that it treated CPAC and the Trump White House as libertarian. It prompted several interesting responses from Reason, which is the most prominent libertarian journal.

For our purposes, which are, of course, examining the harms that accreditors and assessment do to higher education, the most interesting part of the debate concerns the extent to which over-regulation has hindered the response to COVID-19. The best example occurred in Washington, where it appears that the spread of the virus was detected by researchers doing a study on the flu, but IRB rules made it impossible for them to act on that knowledge. When they decided to breach IRB rules and inform the public health authorities of what they had found, they were ordered to stop testing their samples for coronavirus. In a crisis, the authorities chose to hew blindly to what the New York Times (not exactly a libertarian-minded publication) described as “red tape.” The result was a missed opportunity to slow the spread of the virus.

Since then regulations that hinder or slow the processes of finding new ways to make and distribute protective equipment and to test and develop drugs have been rolled back to aid the public health response.  So it seems that, at least on this front, the libertarians have the stronger argument.

What about in higher education?  How are the accreditors and the assessment establishment reacting?  Are we slashing through the red tape and over regulation to facilitate flexible and creative responses to the situation? Not really.

HLC says this:

Institutions may find that they need to adjust normal operations to protect the health and safety of their campus communities, while providing alternative methods of instructional activity. HLC will be as flexible as possible within the U.S. Department of Education’s expectations. If an institution needs to adjust its business operations in substantial ways (for example, reducing or suspending face-to-face class sessions), an institution should notify HLC of the adjustment, including the steps it takes to ensure quality and continuity in its instructional activity.

It looks like they want to be flexible, but they still want institutions to document and report the standard pseudo-evidence of student learning.  So no red tape removal there.

Judging from the conversations on the ASSESS listserv, the assessment world seems more concerned with continuing the assessment project than with getting out of the way and letting faculty do their best in a trying situation.

If any schools delayed moving to online or remote instruction because they feared the accreditors’ and assessors’ reactions, that would be criminal.

At my own institution a debate is taking place about grading and whether we should continue to use letter grades or move to a pass/fail model. I am agnostic in this debate, but it’s interesting to me that much of the objection to changing to the pass/fail option comes from concerns about the importance of GPAs in the more heavily accredited and regulated disciplines. Education, business, and nursing majors all have to meet certain GPA requirements, either for their undergraduate programs or licensure or to be admitted into accredited graduate programs. So, the regulation of higher education by disciplinary accreditors is limiting our ability to respond to a complex situation.

My guess as to what will happen?  Accreditors will demand lots of documentation.  Assessors will produce it.  Faculty will do what they think is best and work around the red tape where they can.  On that issue see this blog post by John Warner of IHE. It’s more anarchist than libertarian but it makes a good point.

Now, go wash your hands.

 

 

Thomas Docherty and the Clandestine University

I never thought I would say this, but I have learned something useful from the ASSESS listserv. I thought I was the only anti-assessment lurker on the list, but it seems I was wrong. Another such person could not take it anymore and broke cover by posting a link to an interesting article that is almost ten years old but that I had never encountered.

It’s by Thomas Docherty of the University of Warwick and was published in the THE in 2011.  It’s called “The Unseen University” and in it he describes what he sees as the two sides of the modern university: The Official University and the Clandestine University.

 

The Official University

describes itself by mission statements, mission groups, research reports, colourful prospectuses and websites, and YouTube videos. It prides itself on an essentially vacuous “excellence”, supposedly transparently demonstrated by various facts and figures (Information), finally settling into position in the multiplying, and often mutually contradictory, league tables that various agencies will use as a proxy for an understanding of the life of our institutions.

While the Clandestine University

is where most of us do our daily work. As academics, we do not “compete” against colleagues elsewhere for research funding; rather, we just want to do the research, and we welcome good work wherever it is done. When the research councils come up with their next Big Funding Idea, researchers will twist their activity to seem to fit the idea’s criteria, while actually carrying out their preferred research. Of course, although we know this to be the case, we cannot officially say it.

In the laboratory or library, when our experiments or readings lead away from a simple rehearsal of what the grant application said we would do, then we divert from the terms of the grant and we engage, properly, in research. We do not find what we said we would. But we cannot officially say this.

But the information generated by the Official University is powerful and it serves the interest of the powerful:

George Orwell, for one, knew that all totalitarian regimes have an interest in reducing knowledge to the level of mere information. This is the real import of Winston Smith’s job in Nineteen Eighty-Four where he “corrects” the historical record of events: “But actually, he thought as he re-adjusted the Ministry of Plenty’s figures, it was not even forgery. It was merely the substitution of one piece of nonsense for another. Most of the material that you were dealing with had no connexion with anything in the real world, not even the kind of connexion that is contained in a direct lie.”

 

For the replenishment of content in intellectual life, we go to those who operate in the shadows of the Official University: teachers, learners, researchers who are actually getting on with unquantifiable activities. Those activities require that we go into a seminar or a laboratory or a library not knowing what we will have found out when we leave. What we learn there will actually make the world darker, more mysterious, more demanding of further research and enquiry. But we cannot say this.

And, as in Orwell’s dystopia, the Official University is effectively a fantasy, dressed up in figures unconnected to reality, figures that are there to serve political ideologies and party power. And we had best not say this.

I’ve not seen a better argument for resisting assessment than this.  So keep up the fight, keep knowledge and learning alive, even if it means doing so clandestinely.

The Blockbuster and Netflix Analogy

Joshua Kim, who writes the Technology and Learning blog on the Inside Higher Ed website, has a piece today that looks at a recent book by one of the founders of Netflix. It seems that in 2000 Blockbuster had the opportunity to buy Netflix for $50 million but passed on it. Netflix is now valued at $138 billion, and Blockbuster is down to one token store and is bankrupt.

Kim is a techno-optimist of the most Pollyannaish sort.  Any innovation is good and whatever negative side effects of technology and “innovation” we see now are just details that need to be ironed out.  He also uses “fail” as a noun.  You know the type.

So it comes as no surprise that he sees higher education as the Blockbuster of the moment.  What’s the Netflix equivalent standing by to disrupt us stodgy, legacy types? The low-cost online master’s degree.

I would be the first to agree that higher education is experiencing a bubble and is probably headed for a major retrenchment.  But I don’t think the thing that pushes us over the edge is going to be cheap, online master’s degrees.

There are important differences between a degree and a DVD, and not just in the high-minded philosophical sense. One DVD of Frozen is the same as any other DVD of Frozen. Whether I get it online, buy it from a friend, get it from a store, or shoplift it from the last Blockbuster, it’s still the same thing, and no copy of the DVD is considered more desirable than another. Netflix killed Blockbuster by selling the exact same product in a way that was more convenient and often cheaper. Similarly, Jeff Bezos’ key insight when he founded Amazon was that books were a product that you could buy online without worrying about fit or the quality of the item. John Grisham’s latest book is John Grisham’s latest book whether you buy it in a store or online.

This is not the case with master’s degrees. Reputation plays a huge role in the perceived value of a degree. An MBA from a directional state school is not valued the same way that an MBA from Wharton is. So no matter how many people enter the market offering online MBAs for a quarter the price of Wharton’s, people will still place a greater value on the Wharton MBA.

It’s also the case that an online master’s is not the same thing as a face-to-face master’s. The “product” is the educational experience of the student, not the degree itself, and by definition the educational experience of the online student is different from that of a student in a face-to-face program. I know that there are people who will argue that students learn just as much or more in online programs as they do in traditional programs. This may be true, but it’s largely irrelevant. It assumes that people value a degree because of what students learn. But there are lots of good reasons to think that the value of a degree is as much about sorting (which programs were you able to get into?) and signaling (you showed up for class in clean clothes, have rudimentary social skills, and turned in all your work on time for two years) as it is about learning.

An online degree may signal things about a student, but its signals are different from those of a face-to-face degree. Netflix was able to do what it did to Blockbuster because it took the inconvenience out of renting DVDs. If the inconvenience of a face-to-face degree is, in effect, part of the product, and thus part of what gives it its value, then online degrees are selling a fundamentally different product.

The exceptions to this are the truly commodified degrees, like MEds that will get teachers a pay raise or promotion regardless of the reputation of the program. In that area the Blockbuster effect happened long ago. Those programs were swallowed up by the for-profits and OPMs years ago, and most education grad programs are now online and often controlled by OPMs. They have evolved into (or maybe always were) more of a rent imposed on teachers who want raises than an educational experience. If you want a reason to be cynical about higher education, these programs will do it for you. They siphon money from people in an already low-paying profession into the hands of for-profit OPMs and colleges. And there is no evidence that they make teachers any better at their work. They have become a fee that teachers have to pay to advance or earn raises.

But I think the traditional master’s programs are safe from the OPMs and other predators. They are probably headed for trouble as people lose interest in pursuing MAs in English and History and other traditional disciplines, but my guess is that these will just atrophy without being replaced by cheap online equivalents. If there were a market for online master’s degrees in math or Art History, the OPMs would have been all over those programs years ago.