Sunday, September 1, 2013

The remarkable inferiority of education research


     This morning, I’ve been doing more reading on the poor quality of education research. (Background: The latest from the “experts” (1) and (2))
     It’s pretty damned clear that, for the most part, education research is dreadful, a fact long recognized and widely discussed, especially in the last twenty or so years, as the issue of education reform has heated up, now enveloping higher education.
     What amazes me is that so many of my colleagues are unaware of education research's well-deserved reputation as dreck.
     Recently, I cited some influential works that start with that dismal “diagnosis” and suggest approaches to improving research. But not much has been done about the problem. There's been lots of gnashing of teeth. Some cursory flossing, maybe.
     One of the more interesting articles about the poor quality of education research appeared in the Chronicle of Higher Education fourteen years ago: D.W. Miller's The Black Hole of Education Research. It's a good place to start if you are among those who tend to take the education "experts" seriously. Do so at your own, and your students', peril.
     But also check out Carl F. Kaestle's earlier The Awful Reputation of Education Research (PDF).
     Here are some excerpts from Miller's article.

• The Black Hole of Education Research, by D.W. Miller, originally in the Chronicle of Higher Education, Aug. 6, 1999

     …Research on the effectiveness of reforms is often weak, inconclusive, or missing altogether. And even in areas illuminated by good scholarship, it often has little influence on what happens in the classroom.
     All disciplines produce lax or ineffective research, but some academics say that education scholarship is especially lacking in rigor and a practical focus on achievement. “Vast resources going into education research are wasted,” says James W. Guthrie, an education professor at Vanderbilt University who advises state legislators on education policy. “When you look at how the money is channeled, it’s not along productive lines at all.”
. . .
     The research-to-practice pipeline, according to scholars and educators, has sprung many leaks. Governments and foundations don’t spend enough on research, or they support the wrong kind. Scholars eschew research that shows what works in most schools in favor of studies that observe student behavior and teaching techniques in a classroom or two. They employ weak research methods, write turgid prose, and issue contradictory findings. Educators and policy makers are not trained to separate good research from bad, or they resist findings that challenge cherished beliefs about learning. As a result, education reform is often shaped by political whim and pedagogic fashion.
. . .
     A fundamental problem facing the field, says Ellen Condliffe Lagemann, a professor of education at New York University, is its fragmentation among many disparate disciplines, including statistics, anthropology, and cognitive psychology. In education, she says, “there are no common patterns of training. If you don’t have common patterns of training, it’s hard to reach agreement on what research is, much less what good research is.”
. . .
     A growing number of quantitative scholars want to revive a research tradition, prevalent in the 1960s and 1970s, of evaluating classroom practices through randomized tests. …[M]any of them see the clinical trials of medical research as a model for education scholarship. When a drug company wants to test the benefits of some new pill, they reason, it finds a group of volunteer patients and compares them to an identical control group. “Why is education research any different?” asks Paul E. Peterson, a political scientist at Harvard’s Kennedy School of Government.
     Yet such trials, routine in the sciences, are relatively rare in education research, according to Thomas D. Cook, a sociologist at Northwestern University. He recently found that such experiments have hardly been used to examine the effectiveness of common reforms, including vouchers, charter schools, whole-school reform, and continual teacher training.
     A task force sponsored by the American Academy of Arts and Sciences is urging researchers to increase the use of random sampling and control groups. Without them, advocates say, no researcher will ever be able to attribute a successful outcome to a particular policy. Any other method leaves doubt about alternative explanations, such as selection bias.
. . .
     Some scholars say that the experimental approach is resisted for ideological reasons. “The whole field of education research is dominated by an orthodoxy that the best kind of learning is the kind that spontaneously emerges,” says John E. Stone, an education professor at East Tennessee State University. The so-called learner-centered approach, he says, assumes that learning will occur when “conditions are right,” so qualitative researchers account for differences in achievement by examining the differences between one classroom and the next.
     Many scholars, financing agencies, and journal editors, he says, would prefer to ignore experimental research, which tends to show that traditional, directed instruction, with drills, lectures, and step-by-step lesson plans, works better than more-spontaneous approaches. By way of example, he points to a $1-billion federal evaluation in the 1970s of various Great Society programs designed to help low-income students. After most approaches flunked and a method called Direct Instruction came out on top, he says, the education field essentially ignored the results.
. . .
     At the nexus of most of those complaints lies the sprawling network of U.S. professional schools that train teachers. Those institutions and departments, of which there are more than 1,700, are responsible for cultivating many of the nation’s education researchers as well as for training most of the public-school educators for whom research is intended. Officials at education schools readily acknowledge that they do a poor job of training educators to distinguish good research from bad, and also of training their scholars to make their findings accessible to practitioners.…

OTHER INTERESTING READS (that I encountered this morning):

• The Awful Reputation of Education Research (PDF), Carl F. Kaestle, Educational Researcher, Vol. 22, No. 1 (Jan.-Feb., 1993), pp. 23, 26-31. (JSTOR)

• Improving the "Awful Reputation" of Education Research, Gerald E. Sroufe, Educational Researcher, Vol. 26, No. 7 (Oct., 1997), pp. 26-28 (JSTOR)

• Scientific Culture and Educational Research, Michael J. Feuer, Lisa Towne, and Richard J. Shavelson, Educational Researcher, Vol. 31, No. 8 (Nov., 2002), pp. 4-14

• The Accountability Game…, Leon F. Marzillier (Academic Senate for California Community Colleges, October 2002)

     ...The whole concept of MSLOs [measurable student learning outcomes] as the latest fad in education is somewhat akin to the now discredited fad of the '90s, Total Quality Management, or TQM. Essentially, the ACCJC adopted MSLOs as the overarching basis for accrediting community colleges based on their faith in the theoretical treatises of a movement.... After repeated requests for research showing that such use of MSLOs is effective, none has been forthcoming from the ACCJC [accreditors]. Prior to large-scale imposition of such a requirement at all institutions, research should be provided to establish that continuous monitoring of MSLOs has resulted in measurable improvements in student success at a given institution. No such research is forthcoming because there is none….

• Beyond awful: rethinking education research, Sarah M. Fine, EdNewsColorado, April 2011

     ...Last month, Ednews editor Alan Gottlieb published a commentary in which he shared a vision that can only be called bleak. In the piece, he describes a meeting where education researchers and policymakers sat down in an attempt to reconcile their differences – apparently to no avail....

* * *

Further reading:

Issues in Education Research: Problems and Possibilities (1999), edited by Ellen Condliffe Lagemann, Lee S. Shulman

Improving Student Learning: A Strategic Plan for Education Research and Its Utilization (1999)

Education Research On Trial: Policy Reform and the Call for Scientific Rigor (2009), edited by Pamela B. Walters, Annette Lareau, Sheri Ranis

* * *

• Why education ‘research wars’ leave no winners (Washington Post, Feb. 5, 2013)

     …These kinds of sloppy inferences play a dominant role in education debates and policy making, and they cripple both processes. Virtually every day, supporters and critics of individuals, policies, governance structures and even entire policy agendas parse mostly-transitory changes in raw test scores or rates as if they’re valid causal evidence, an approach that will, in the words of Kane and Staiger, eventually end up “praising every variant of educational practice.” There’s a reason why people can – and often do – use NAEP or other testing data to “prove” or “disprove” almost anything.
     Nobody wins these particular battles. Everyone is firing blanks.

• Big Surprise: Yet Another Ed Reform Turns Out to be Bogus, by Kevin Drum (Mother Jones, Jan. 28, 2013)

• More Pupils Are Learning Online, Fueling Debate on Quality (New York Times, April 5, 2011)

     …[C]ritics say online education is really driven by a desire to spend less on teachers and buildings, especially as state and local budget crises force deep cuts to education. They note that there is no sound research showing that online courses at the K-12 level are comparable to face-to-face learning….
. . .
     The growth has come despite a cautionary review of research by the United States Department of Education in 2009. It found benefits in online courses for college students, but it concluded that few rigorous studies had been done at the K-12 level, and policy makers “lack scientific evidence of the effectiveness” of online classes.
. . .
     Like other education debates, this one divides along ideological lines. K-12 online learning is championed by conservative-leaning policy groups that favor broadening school choice, including Jeb Bush’s Foundation for Excellence in Education, which has called on states to provide all students with “Internet access devices” and remove bans on for-profit virtual schools.
     Teachers’ unions and others say much of the push for online courses, like vouchers and charter schools, is intended to channel taxpayers’ money into the private sector.
     “What they want is to substitute technology for teachers,” said Alex Molnar, professor of education policy at Arizona State University....

• Classroom Technology Faces Skeptics At Research Universities (Information Week, February 8, 2013) – largely a discussion of “Technological Change and Professional Control in the Professoriate”

     …Professors at top research universities are highly skeptical of the value of the instructional technologies being injected into their classrooms, which many see as making their job harder and doing little to improve teaching and learning.
     That's the conclusion of "Technological Change and Professional Control in the Professoriate," published in the January edition of Science, Technology & Human Values. Based on interviews with 42 faculty members at three research-intensive universities, the study was funded under a grant from the National Science Foundation and particularly focuses on professors in the sciences, including chemistry and biology, with anthropology thrown in as a point of comparison….

3 comments:

Anonymous said...

Great post, Roy; thanks for all of the terrific links. I can't help but remember that Carol Gilligan was in Harvard's school of Education when she did her research that resulted in "In A Different Voice." As others have noted, her research was remarkably shoddy: tiny samples, anecdotal elements, interpreting the evidence when she was supposedly presenting facts--just shoddy as hell. It drives me crazy that that awful piece of work made her a household name among academics. (She may have done good stuff since, of course--and the discussion that the book started has been healthy and interesting. But still!)

MAH

Roy Bauer said...

The Gilligan example did occur to me, but I figured I've pissed enough people off for the time being!

Anonymous said...

http://www.washingtonmonthly.com/magazine/september_october_2013/features/americas_worst_community_colle046450.php?page=all
