ANA tests important, but assessment results don’t add up

Two weeks ago the country was shocked by the dismal results from the Annual National Assessments (ANAs), which reported most notably that Grade 9 students' performance on the maths test fell from 14% to 10.8%. These standardised mathematics and language tests are given to children in grades 1 to 6 and 9 every year with the noble goal of "measuring learners' progress and establishing the level they are performing at." While the motives might be good, ReSEP economist Nicholas Spaull explains that these tests cannot be used to measure progress:

The problem is that these tests are being used as evidence of “improvements” in education when the ANAs cannot show changes over time. There is absolutely no statistical or methodological foundation to make any comparison of ANA results over time or across grades. Any such comparison is inaccurate, misleading and irresponsible. The difficulty levels of these tests differ between years and across grades, yielding different scores that have nothing to do with improvements or deteriorations necessarily, but rather test difficulty and the content covered.
Spaull is reiterating the limitations he highlighted following the publication of the 2013 results, when he stated:

For the national assessments to fulfil the function for which they were created, the results need to be comparable across grades, over time and between geographical locations. Unfortunately, given the sorry state of affairs that is the 2013 national assessment, none of these criteria are currently met.

On the 2014 results, he finds that the erratic numbers speak for themselves. Grade 1 mathematics averaged 68% in 2012, before plummeting to 59% in 2013 and bouncing back to 68% in 2014. The proportion of grade 3 students who achieved 50% or higher on the mathematics test rose from 36% in 2012 to 65% currently, which would mean we have "the fastest improving education system in recorded human history."

Spaull has done extensive analysis of data gathered from standardised tests around the world, and sees standardised testing as "one of the most important policy interventions in the last 10 years."

Testing can be an extremely useful way to monitor progress and influence pedagogy and curriculum coverage, but only if it is done properly. Testing regimes usually take between five and 10 years to develop before they can offer the kinds of reliability needed to make claims about “improvement” or “deterioration”.