Is This Math Program Proven?
Sales representatives for a large commercial textbook publisher have
dazzled your school officials with an impressive-looking presentation
on the "success" of their math program. They have included charts
and diagrams documenting the wonders produced by their product.
But you know that it doesn't add up. Kids get little or no
arithmetic practice, they're expected to come up with their own
methods (sometimes tortured, unwieldy, or just plain wrong ones) to
solve problems, and they are encouraged to become dependent on their
calculators starting in early grades.
So how could that impressive-looking research be so misleading?
In the message below, Mary Damer outlines a strategy for dissecting
what the sales representatives are saying.
From: Mary Damer
Date: Wed, 30 Dec 1998
Subject: my advice for Trailblazers math questions (and any new claim)
Here's my advice. We've seen how the U of C math program is accompanied by
a brochure of research which, when put under the magnifying glass, doesn't
meet any of the criteria for valid research. Since to date none of the
NCTM-based math programs have any valid research support, I would be
surprised if his program is the first to have that sort of support. As
consumers we shouldn't have to unmask and diligently probe whether research
claims are valid or not, but none of our educational institutions at this
point in time are committed to that type of analysis. (I'd put this
question out to John Stone if you belong to the ECC.)
I recommend asking the same kinds of questions I asked when our local supt.
told me that he had all this research to support the choice of U of C math.
First ask for copies of the research reports supporting claims of improved performance.
Tell them that you want the complete research reports because you will be
looking for the following items to determine whether the research is valid
and has a credible research design.
- SELECTION:
You want to look for SELECTION problems -- how do the researchers know
that the groups of students being compared (one group with Trailblazers,
one group without) were comparable groups? (I wouldn't be surprised if you
found out that they weren't, since it appeared that the Chicago math
program possibly compared students in regular Chicago schools with children
from the expensive private Lab School.) The researchers should have
detailed information showing that any test score differences could not be
attributed to preexisting differences in prior math achievement,
intelligence, motivation, social class, or parental involvement. Any valid
study comparing two groups of students needs to have those differences
controlled for and must provide justification for comparing the two groups.
For example, one would never conduct a valid research study comparing your
North Shore students with students from inner-city Joliet.
- HISTORY:
You want to look for HISTORY problems. How have the researchers ensured
that any reported differences between the groups (with Trailblazers and
without Trailblazers) could not instead be due to the classes spending more
time on mathematics, having more money spent on instructional materials, or
having better-trained teachers? If those variables have not been addressed,
you have no way of knowing. Sometimes shoddy researchers will compare
classes of students who are having the same "ole curriculum" with students
who have the new one. The only problem is that the teachers of the new
curriculum receive hours and hours of additional training and their classes
last for longer periods of time.
- INSTRUMENTATION:
You want to look for an INSTRUMENTATION problem. This would occur if the
test questions chosen for the final assessment instrument were items where
Trailblazer students would be expected to perform better than other
students, while the questions not used were ones where we would expect
other students to do better than the Trailblazer students. You have to have
information on how the test used to show progress was selected, or you have
no way of knowing. What are the reliability and validity of the pretest and
posttest? If you gave the two groups of students a pretest and posttest
based on NCTM puzzle-type problems that students in the Trailblazer
curriculum would be working on all year, then of course the Trailblazer
students would have an edge. It is very easy to concoct an assessment
instrument which will support things taught in the new curriculum but not
in the old. That is why you have to know about the test itself. If that
test has no validity, then the results have to be questioned. Were students
able to use calculators on the final test? Were problems requiring
calculation included on the test? Did students have to write and explain
how they solved the problems? Such questions can have dubious validity
since a rater will be assigning a score.
- SIGNIFICANCE:
Was any test of statistical significance ever performed? If it wasn't, any
differences between the performance of students in these two groups could
just as easily be a result of chance.
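
To make that last point concrete, here is a minimal sketch (not part of the
original message) of the kind of check being described: an independent
two-sample t-test on posttest scores from a Trailblazers class and a
comparison class. The score lists below are hypothetical placeholders, not
data from any actual study, and scipy is just one convenient way to run it.

    # Minimal sketch of a two-sample significance check.
    # The scores below are made-up placeholders, purely to show the mechanics.
    from scipy import stats

    trailblazers_scores = [72, 68, 75, 80, 66, 71, 78, 74]  # hypothetical posttest scores
    comparison_scores = [70, 65, 73, 77, 69, 68, 76, 72]    # hypothetical posttest scores

    # Welch's t-test: does not assume the two groups have equal variances.
    t_stat, p_value = stats.ttest_ind(trailblazers_scores, comparison_scores,
                                      equal_var=False)

    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
    # A large p-value (say, above 0.05) means the observed difference in
    # group means could easily be due to chance -- the concern raised above.
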
I would also compile a list of what third or fifth graders learned in a
more computation-oriented traditional curriculum and ask him how often
students will practice the skills needed for those computations... have
him list them from the book. Unfortunately, if it's anything like the U of
C program, "balance" means a smattering of this and that, and so students
never have the opportunity to practice more than a few problems with
subtraction with borrowing, long division, or equivalent fractions.
Manipulatives don't count... and inventing your own way doesn't count
either. Where in the program do they teach subtraction with borrowing,
long division, equivalent fractions, etc.? How do they teach the children
to do those essential operations? The most ludicrous fuzzy math stuff is
unearthed when one looks at how these skills are approached.
Hope this helps. The
2+2 Mathematically Correct site may have some other info.
Mary