
Unmaking the GRADE

There’s a blog post that’s been making the rounds the last few days—I’ve been seeing links to the cross-post at the Washington Post Answer Sheet blog, but it was originally a post called Testing Madness over at the incredibly well-named Yinzercation blog. The bulk of the note is a description, written by an anonymous middle-school teacher, of putting her students through the GRADE™ test. Well, it’s an assessment, really, but from the students’ point of view, I’m sure it smells like a test. It’s quite a powerful description, and well worth reading.

The thing is, though, that I think the anonymous teacher is conflating a few different problems with the GRADE™ package and our educational systems. I wanted to try to tease them out a bit. Before I break them down, though, I want to emphasize that (a) I have no idea if this teacher is accurately describing the test as it is written and designed to be administered, or even if this teacher exists at all rather than being an invention of Yinzercation’s Jessie B. Ramey, and (b) I have no personal knowledge of this specific test or of Pearson’s other recent assessment packages, and my knowledge of current standardized tests and assessments is based on news-reading and PTO meetings. So I’m talking out my ass, here, is what I’m saying; I’m hardly an expert. Still, I have a blog, right?

One problem is that the test isn’t very well-made. Again, I don’t know if it is really as bad as she makes it seem, but I do know that it’s really hard to make good tests. Multiple-choice tests are tricky (you have to make sure that one and only one answer is correct), and multiple-choice tests that assess language skills or other essentially murky skills and knowledge bases are profoundly tricky. Still, the existence of a crap test doesn’t argue against the use of good tests. You could argue that it’s effectively impossible to make good tests—that if Pearson, with all its resources for writing, editing, proofing and double-checking, makes crap tests, then making good tests is bound to be so expensive that no one could afford to give them. Alternately, it could be that this is just what Pearson does, and the lesson is just to avoid Pearson products whenever possible. It’s not conclusive.

Another problem is that the tests are culturally biased. The anonymous teacher talks about idiomatic expressions as well as her city kids’ unfamiliarity with cars and oil changes and the word bureau. Again, I have no knowledge of the test itself, but the history of standardized tests has pretty much been that of constant shock at discovering cultural bias. In one way, that’s connected to the first problem—if the way the test is crap is cultural bias, then the problem is a crap test. I think, though, that there’s a structural problem there that’s more than just the difficulty of making good tests. The problem is that standardized tests have to be standardized, and children are pretty fundamentally nonstandard. If we manage to take out all the references to suburban life and then take out all the references to urban life, take out the middle-class and the lower-class and the upper-class connotations, the assumptions of the immigrant and the assumptions of the native-born… take out all the regionalisms, the climate issues, the assumptions of ability or disability, religion, gender, ethnicity… if you take out any question that could benefit one kid over another kid, well, I would hope you’ll have the arithmetic left, but not much more.

This is a fundamentally irreducible problem with standardized tests: the way our nonstandard students arrive at them. In fact, this is a fundamentally irreducible problem with tests. A teacher—in a classroom with some countable number of students—can hope to mitigate the problem, or at least to account for it. The larger the units being tested, the bigger the problem grows. A district-wide test? A state-wide test? A national test? My assumption is that the line after which the test is too big, too standard to be useful is somewhere smaller than the line where Pearson can make a profit.

There’s a third problem, though, that I hadn’t really thought about until it came up in that post. If you use baseline tests at the beginning of the year—first of all, you have to use baseline tests, right? A test at the beginning of the year and another at the end. It eats up twice as much time as only doing one test, of course, but you get at least twice as much information. So you use baseline tests. But any good baseline test, or even any moderately competent baseline test given at the beginning of the year, is going to show that a bunch of the students don’t know anything that they, you know, haven’t learned yet. Ms. Ramey and the anonymous teacher attribute the negative effects of the test—students feeling “stupid,” frustrated, and ready to give up on learning—to the test being a crap test, but in fact a well-designed, well-executed baseline test at the beginning of the year will be beyond most of the students’ ability, and will presumably make the students feel stupid and frustrated.

A really good teacher, with enough time and few enough students, will presumably be able to mitigate some of the problem—with the right preparation, a community of self-confident learners (as the anonymous writer describes it as her goal to create) could find a baseline test to be a sniff of the treats in store, a glimpse of the mountain they can scale, a glint of the gold at the end of the proverbial. That would be great! Nearly as great as not doing it in the first place!

Tolerabimus quod tolerare debemus,


I do not, at all, mean to imply that this is as bad as feeling stupid and ready to give up, but in my personal experience, it was also not so great a thing to *ace* the baseline test, because, uh, if this is the material we're supposed to not know yet and will know at the end of the year, are you saying that I'm going to be spending an entire year tediously reviewing stuff I already know? Yes, yes they were saying that, and in fact I went on to turn down a CTY math scholarship (which probably would have been a fantastic experience socially) because the course covered the next year's material and I didn't want to spend another whole year knowing that I'd already seen it all. I don't know, at least without the pretest I might have had false hope of something interesting coming along.
