Wednesday, May 23, 2012

Measures of Academic Progress (MAP Testing), Year 1

  I just finished my first year of MAP testing. It's the first year that the Bainbridge Island School District has done it district-wide. We gave the math test three times, and I proctored it for each of my classes. I want to share some of my feelings about it here for others who have the data or are considering using the test.
  For those not familiar with MAP testing, it's a pretty good test as far as tests go. Students answer 52 questions, and the computer adjusts the difficulty to find the sweet spot of what each student can answer successfully. Questions get harder as students keep getting them right, and easier as they get them wrong. Ideally the test can place a student's score within a few points and compare it to national averages to see where they rank.
  Normal grade-level growth is approximately 6 points per grade, though the results shift a bit year after year. The intent is that the test remains somewhat static, so students see growth in the academic area rather than taking a test in which the bar to pass is simply raised each year. We're all too familiar with the thought that if I couldn't jump 5'6" last year, I probably can't jump 5'8" this year. MAP lets students show what they know on a continuous year-over-year scale that focuses on their advancement from last year or last test, rather than on how high they jumped compared to everyone else.
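  If it helps to picture the mechanic, here's a rough sketch in Python of how an adaptive test might walk difficulty up and down as a student answers. To be clear, this is my own toy illustration, not NWEA's actual algorithm; the starting score, step size, question count, and probability curve are all invented.

    # A rough sketch of an adaptive test -- NOT the actual MAP/NWEA algorithm.
    # Starting score, step size, and the probability curve are invented.
    import random

    def adaptive_test(true_ability, num_questions=52, start=200, step=3):
        """Walk an estimated score up on correct answers, down on misses."""
        estimate = start
        for _ in range(num_questions):
            # Chance of a correct answer falls as question difficulty
            # (pegged to the current estimate) rises above the student's level.
            p_correct = 1 / (1 + 10 ** ((estimate - true_ability) / 10))
            if random.random() < p_correct:
                estimate += step   # ask harder questions next
            else:
                estimate -= step   # ask easier questions next
        return estimate

    # A student whose "true" level is 215 should land somewhere near 215.
    print(adaptive_test(true_ability=215))

  Run it a few times and the estimate bounces around the student's level, which is also why a single sitting can land a few points high or low.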

My Results


  Looking at my homeroom, my students averaged just over a 6-point increase, with a range of -6 to +16 points between the fall and spring tests. The test projects a 6-point increase for students, and that exactly matches what I found, sort of. As is typical with statistics, no student experienced exactly a 6-point gain. The problem with this data is not its use globally, but its application locally.
  From a school-wide perspective, this means I averaged a year's worth of growth from my students in mathematics. On some level I have to consider this a success. I stepped into a teaching role five days before the start of the year, with a curriculum I was totally unfamiliar with, and still managed to pull off average growth from my students. Increases were also fairly evenly distributed across the spectrum: some students with very low fall scores improved significantly, and some students with very high initial scores improved by as much as 12 points. Statistically speaking, I've been a successful teacher.
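  For what it's worth, the arithmetic behind that average is nothing fancy. Here's a toy example with invented fall/spring score pairs, chosen only to mirror the kind of spread I described, not my students' actual data:

    # Toy fall/spring score pairs -- invented numbers, not real student data.
    fall   = [208, 215, 199, 221, 230, 212]
    spring = [214, 209, 215, 228, 242, 218]

    gains = [s - f for f, s in zip(fall, spring)]
    average_gain = sum(gains) / len(gains)

    print("Gains:", gains)                          # per-student growth
    print("Range:", min(gains), "to", max(gains))   # e.g. -6 to +16
    print("Average gain:", round(average_gain, 1))  # e.g. 6.8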

What do these results mean?


  Where things change is at the individual level, which is what each parent will see when the results are sent home. An average of 6 means that roughly as many students fell short of the projected growth as met or exceeded it. How will parents feel seeing that their student advanced half a grade for a year's worth of work? This data will be used to judge teachers, if only by the teachers themselves. Is that really a fair way to do so?
  Of course, a student with a six-point loss didn't lose a year of knowledge, just as a student with a 16-point gain didn't suddenly advance from a 6th-grade to a 9th-grade understanding of mathematics. In other classes I've seen 12- to 14-point losses. Disappointing, sure, but it doesn't mean much other than that the student had a bad day. A student who, year after year, continues to show no advancement could become cause for concern, and there's a good feeling when a student makes a noticeable gain and maintains it over time. But on an individual level, these scores are a thin string of data points that don't really indicate much to me as a teacher, or possibly to a parent.

  There's a continual fear of testing becoming a method of evaluating teacher success. While the Washington State MSP would be a travesty in this role, I'm not sure MAP solves the issue either. The fact that I have scores for 75 students helps average out inconsistencies. But using MAP as the basis, I'm an average teacher, and I think we'll find that most teachers are average teachers. Those who truly increase scores are probably focusing more on the kinds of questions that boost scores on the test, regardless of students' enjoyment and engagement.

  Again, this comes back to the question of "what does good teaching look like?" I don't know the entire answer to that question, but I'm pretty sure it doesn't look like high test scores.
