Mathematics teacher Sharon M. Hart's Valley Voices piece (Feb. 8) detailed the enormous gap between the individual student and the overall picture for a school and its district. This gap calls into question the meaningfulness of these scores. The bottom line? Without additional statistical analysis, an averaged score for a district, school, or even classroom cannot provide a meaningful picture of an individual student, nor, for that matter, of entire schools and districts.
Even assuming the test is valid, that it actually measures what it is supposed to measure, the scores must still be subjected to basic statistical analysis.
The most important statistical question is whether a change in scores is merely a random fluctuation or a truly meaningful change, well beyond chance. Is an increase of 10 points on the API simply due to chance? If so, then the "improvement" is an illusion. Unfortunately, I cannot recall a single news article that included this crucial analysis when comparing numbers, whether test scores, budgets, or population shifts.
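To make the point concrete, here is a minimal sketch in Python of the kind of check a reporter could run: a permutation test asking how often a gain at least as large as the observed one could arise from chance alone. The student scores below are entirely hypothetical, invented for illustration; they are not actual API data.

```python
import random
import statistics

def permutation_p_value(before, after, trials=10_000, seed=0):
    """Estimate how often a mean gain at least as large as the observed
    one would appear if scores were shuffled randomly between the two
    years (a one-sided permutation test)."""
    rng = random.Random(seed)
    observed = statistics.mean(after) - statistics.mean(before)
    pooled = list(before) + list(after)
    n = len(before)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[n:]) - statistics.mean(pooled[:n])
        if diff >= observed:
            hits += 1
    return hits / trials

# Hypothetical data: 30 student scores per year, widely spread,
# with the second year's average shifted up by about 10 points.
rng = random.Random(1)
year1 = [rng.gauss(650, 80) for _ in range(30)]
year2 = [rng.gauss(660, 80) for _ in range(30)]

p = permutation_p_value(year1, year2)
print(f"chance of seeing this gain by luck alone: {p:.3f}")
```

With scores this spread out, a 10-point average gain often turns out to be well within the range of random fluctuation, which is exactly the analysis missing from most reporting.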
Statistics provides our best tools for deciding objectively whether outcomes are due to chance or to an effective intervention. Without that analysis, comparing scores is meaningless, subject to predetermined interpretations (often biased, self-serving opinions).
David E. Roy