Assessment Won’t Work in Business Schools Either

This 2016 article in the Journal of Management Education just came to my attention. In it, Donald Bacon and Kim Stewart argue that the small student samples available at many business schools mean assessment results lack the statistical power to be valid. They conclude, much like Roscoe, that schools would be better off drawing guidance from existing pedagogical research than making changes to their programs based on bad data collected through assessment.

Citation and abstract are below.

Why Assessment Will Never Work at Many Business Schools: A Call for Better Utilization of Pedagogical Research

Donald R. Bacon, Kim A. Stewart

Journal of Management Education, Vol. 41, Issue 2, pp. 181–200

First Published May 9, 2016

On the long and arduous journey toward effective educational assessment, business schools have progressed in their ability to clearly state measurable learning goals and use direct measures of student learning. However, many schools are wrestling with the last stages of the journey—measuring present learning outcomes, implementing curricular/pedagogical changes, and then measuring postchange outcomes to determine if the implemented changes produced the desired effect. These last steps are particularly troublesome for a reason unrecognized in the assessment literature—inadequate statistical power caused primarily by the use of small student samples. Analyses presented here demonstrate that assessment efforts by smaller schools may never provide the statistical power required to obtain valid results in a reasonable time frame. Consequently, decisions on curricular and pedagogical change are too often based on inaccurate research findings. Rather than waste time and resources toward what essentially is a statistical dead end, an alternate approach is recommended: Schools should examine published pedagogical studies that use direct measures of learning with sufficient statistical power and utilize the findings to improve student learning.
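To get a feel for the statistical dead end the authors describe, here is a quick back-of-the-envelope power calculation. The numbers are my own illustrative assumptions, not Bacon and Stewart's: suppose a curricular change has a small-to-medium effect on learning (Cohen's d = 0.3), and a school compares a pre-change cohort of 25 students with a post-change cohort of the same size.

```python
# A sketch of the power problem. The effect size (d = 0.3) and cohort
# size (n = 25) are illustrative assumptions, not figures from the article.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power of a two-sided, two-sample t-test comparing a pre-change cohort
# of 25 students with a post-change cohort of 25 students.
power = analysis.power(effect_size=0.3, nobs1=25, alpha=0.05)
print(f"Power with 25 students per cohort: {power:.2f}")  # roughly 0.18

# Cohort size needed to reach the conventional 80% power threshold.
n_needed = analysis.solve_power(effect_size=0.3, alpha=0.05, power=0.8)
print(f"Students per cohort for 80% power: {n_needed:.0f}")  # roughly 175
```

Under those assumptions, a school would detect a real improvement less than one time in five, and would need roughly seven times as many students per cohort to reach conventional power. That is the dead end the abstract is pointing at, and why the authors recommend leaning on published studies that already have adequate statistical power.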