The Government is again releasing National Standards data later this month, this time in a new standardised format that will appear to make it easier to compare schools.
In this respect the Government is playing catch-up with the newspapers, which published the data in standardised formats last year, although all have so far resisted ranking schools from best to worst in National Standards league tables.
The published data appears straightforward enough: percentages of children "above", "at", "below" or "well below" the standard for their year group in each school. But there is nothing standard about what underlies the tidy rows of figures. Schools' approaches to making judgments against the National Standards are so idiosyncratic and wide-ranging that it is impossible to compare achievement accurately between any two schools, let alone make "apples with apples" comparisons across more than 2000 New Zealand primary and intermediate schools.
The extreme variability in the processes underlying National Standards judgments is well illustrated by the latest report of the Research, Analysis and Insight into National Standards (Rains) project. Across the six Rains schools being studied, judgments against the National Standards were being affected by many sources of variation at national, regional, school and classroom level.
For instance, schools are on different trajectories with the National Standards, reflecting their diverse contexts and past practices. There are differences in how schools interpret National Standards categories such as "well below", in how they match the categories to curriculum levels, and in the rigour of the data they send to the Ministry. Other school-level variations include how much schools rely on formal assessment tools rather than other evidence in making their judgments, their choice of tests or other assessment tools, and the specific details of the procedures schools use for assessment and moderation.