This is an archival copy of material that originally appeared at:

Everyday Math: Proof that it works? Hardly.

An article in last week’s Education Week apparently has gone unnoticed by most EduBlogs. Maybe it’s because no one cares all that much about curriculum.

It announced that the US Department of Education has given Everyday Math a qualified nod as a promising program. The article can be found here (registration required), or you can find the entire article on Math-Teach’s discussion forum here.

I wrote last week (here) about the “quest for the Holy Grail” of proof that any particular textbook or instructional program “works.” When I first read this article from Education Week, I thought, “Dadgum it! Just as I pronounce the quest fruitless, the government announces that it has at least spotted the Grail in Everyday Math!”

Then I started to read more. I looked at the sometimes vitriolic back-and-forth on the Math-Teach forum on the subject. These math whizzes and math educators rip apart the studies upon which the government review is based (they also rip apart one another…what happened to gentility in academe?). I also took a look at the actual government report at the What Works Clearinghouse.

And you know what? The research remains inconclusive: The Holy Grail is still elusive. The academics (genteel or otherwise) still have a lot of work to do.

Here’s what we know about the effectiveness of Everyday Math, based on the research reviewed by the What Works Clearinghouse:

Sixty-six studies have been found to focus on the effectiveness of Elementary School Mathematics programs. Can you believe that there are so few? If proof of effectiveness is the Holy Grail, why are so few academics interested in the quest?

Of these 66 studies, 57 did not meet basic “evidence screens,” meaning that the US Department of Education does not deem these studies to have merit because of flaws in their research design. So most of the so-called “research” is thrown out from the get-go because the studies are too small, too poorly constructed, or otherwise shoddy.

Only one of the studies passed evidence standards. You read that right: only 1 of 66 studies was considered to be reliable. That’s a whopping 1.5%, for you mathematicians out there. The government’s review of this study, which focused on Scott-Foresman Addison Wesley mathematics, concluded that the program had “no discernable effect” on mathematics performance. So to repeat, the only decent study on elementary school mathematics curricula tells us that the curriculum under review has no effect one way or another on student achievement. So no Holy Grail here, folks.

Four of the 66 studies “meet evidence standards with reservations,” meaning that these studies may or may not have spotted the Holy Grail. In other words, these four studies have imperfections: they are not so bad as to force them out of consideration, but they contain flaws that may (or may not) undermine their conclusions.
As it happens, fully 61 of the 66 studies cited by the What Works Clearinghouse focus on Everyday Math, at least in part. Fifty-seven (57) of them were thrown out because they did not meet the evidence screen. None fully meets the evidence standards. Four (the same four described above) meet the standards “with reservations.”

Based on only these four studies, each of which passes Department of Education standards for evidence “with reservations,” the What Works Clearinghouse declares in its “Intervention Report” on Everyday Math that the University of Chicago program has “potentially positive effects.”

First off, focus on the adverb: potentially.

Second, let’s actually read what the report has to say about each of the four studies.

Study #1: A 1998 study reported a statistically significant impact on student learning, based on a study of 76 5th graders who used Everyday Math, compared with a control group of 91 5th graders who did not. What Works Clearinghouse reworked some of the data and determined that the effect was not statistically significant, but “substantively important.” So the study points in a direction, but does not definitively prove the effectiveness of Everyday Math.

Study #2: A 2001 study compared 3,781 4th graders in 67 different Massachusetts schools using Everyday Math with 5,102 5th graders in 78 similar Massachusetts schools that used more traditional curricula. In reviewing the data reported in the study, What Works Clearinghouse agreed with the study’s conclusions that Everyday Math led to “statistically significant improvements in overall math achievement” in 48 of the districts using Everyday Math, but only a “substantively important effect on overall math achievement” in 19 of the schools analyzed in the study. So this is pretty good, until we remember that the study itself passed the standard of evidence “with reservations.” Perhaps this study leads us toward the Holy Grail, but I’m not shouting any hosannas just yet.

Study #3: Another 2001 study compared the performance of 732 3rd, 4th, and 5th graders in 6 schools using Everyday Math with 2,704 3rd, 4th, and 5th graders in 12 similar schools using more traditional curricula. After reviewing the data and findings, What Works Clearinghouse concluded that this study measured “substantively important, but not statistically significant” effects of Everyday Math on overall math achievement. While the results are positive, there seems no reason to be dancing in the aisles over the conclusions of this study, either.

Study #4: This 1997 study reported that Everyday Math had no significant effect on student achievement, so What Works Clearinghouse categorized the effect of Everyday Math on overall math achievement as “indeterminate.”

So why was the headline in Education Week (“Much-Used Elementary Math Program Gets Qualified Nod From U.S. Ed. Dept.”) so upbeat? From what I read, the headline ought to be something like this: “Weak Research Indicates that Everyday Math Might Sorta Kinda Work, We Think, But We’re Not Really Sure: More and Better Research Needed.”

So the Grail continues to elude us. But that does not mean the religious math wars have ended. I highly recommend the amusing back and forth on the Math-Forum about these findings. If the academics would stop sniping at one another and get down to performing some decent research, we might end the bickering and start raising student achievement.

Unless…unless…the realm of education research is just a bunch of hocus pocus in which it is very, very difficult to move from the realm of faith to the realm of science. I’m just a lowly political scientist. And I find that most of the research in political science, especially in international relations (my field), is terribly unscientific because it is so darned hard to isolate variables. The same is true in education. We do our best to isolate variables, but doing so is generally very, very tough and very, very expensive.

And what are we left with? Faith. Belief. Hunches.

And Reason (for a lack of research does not mean we stop making decisions about which textbooks and instructional materials to use…we can still generate useful information upon which to base our decisions…however flawed or imperfect they may be).

So as we continue to search for the Holy Grail and dedicate our lives to the quest, let us also use our heads and make decisions based on the best information we have available. While we cannot deliver the nirvana you seek, EdVantage can help districts do their own criterion-referenced analysis of various programs to find the one that will likely best suit the needs of your teachers and students.

Mark Montgomery
Independent Textbook Analysis and Evaluation–in Math and Other Subjects
EdVantage Consulting

PS: I want to thank my friend, Mr. Person, over at TextSavvy, for once again pointing me to some new sources. I learned about the Math Forum and math-teach discussion list at Drexel University from him. I also have to thank Ken at D-EdReckoning, who wrote about this EdWeek article from a slightly different perspective…but I have a feeling he’ll agree with most of what I’ve written here.
Oh, and if you’re interested in looking up the original studies cited by What Works Clearinghouse, here are the citations.

Study #1: Carroll, W. M. (1998). Geometric knowledge of middle school students in a reform-based mathematics curriculum. School Science and Mathematics, 98(4), 188-197.

Study #2: Riordan, J. E., & Noyce, P. E. (2001). The impact of two standards-based mathematics curricula on student achievement in Massachusetts. Journal for Research in Mathematics Education, 32(4), 368-398.

Study #3: Waite, R. D. (2000). A study of the effects of Everyday Mathematics on student achievement of third-, fourth-, and fifth-grade students in a large north Texas urban school district. Dissertation Abstracts International, 61(10), 3933A. (UMI No. 9992659)

Study #4: Woodward, J., & Baxter, J. (1997). The effects of an innovative approach to mathematics on academically low-achieving students in inclusive settings. Exceptional Children, 63(3), 373-388.


4 Responses to “Everyday Math: Proof that it works? Hardly.”

  1. September 29th, 2006 | 7:11 am

    Nice post, Mark.

    I am still amazed that WWC characterised the research base behind EM as “potentially” anything. Out of the four studies that were close enough to actual real research to be considered, 3 had statistically insignificant results, 1 had indeterminate results, and 2 were conducted by researchers affiliated with EM (not in itself necessarily fatal, but the authors of the Riordan study have refused to release the data, and the study appears to be methodologically flawed).

    There is the same problem with the reading programs that were considered for funding under Reading First. Only two of them, Success for All and Direct Instruction, have a legitimate research base; the remaining commercial reading program publishers had almost no research at all supporting the effectiveness of their programs. This is just something that has almost never been done by textbook publishers: performing actual testing, or even field tryouts, on the programs before they are published to see if they actually work.

  2. December 7th, 2006 | 10:15 am

    […] My son is learning his math from the 4th grade version of Everyday Math. I have written about Everyday Math here. In some sense, the program is okay but not great, and fortunately my son’s teacher has begun supplementing the book with reinforcements of basic math facts and operations. […]

  3. dean latona
    January 28th, 2007 | 10:16 pm

    we are in an adoption year and your article was informative and entertaining and provides me with some information to pass on to our “academics” who will ultimately make the purchasing decisions. although my information may be too late as they are in a hurry to get into bed with each other and that’s enough said about that.

  4. Renee
    February 12th, 2007 | 12:57 pm

    I am a parent, not a professional. My son is in the first grade and uses EM. The concepts are confusing for him. Whatever happened to learning/memorizing simple addition and subtraction problems FIRST? In my opinion, if ‘basic math’ was taught first, it would be easier for kids to apply that (basic) knowledge to more complex problems. So, not only are we doing EM homelinks, we are also showing our son basic addition and subtraction problems at home. I know of 4 children being tutored at school. In the 1st and 2nd grades! These children are very smart. Yet, they are not able to comprehend EM. Kindergarten and first graders, in my opinion, should be taught simple math first.