The uphill battle of establishing accurate performance trends for schools

Dr Martin Gustafsson is based at the Department of Basic Education and is a member of the Research on Socioeconomic Policy group at Stellenbosch University. This article first appeared in the City Press on the 26th of May and is also available here.

The technical report which informs the newspaper article, and which was discussed at a ReSEP workshop on 18 January 2016, is available here.

By Dr Martin Gustafsson

Schools dragging SA down

The South African Institute of Race Relations, in its February 2016 Fast Facts publication, concludes that “schools drag South Africa down” and that pupil performance is declining substantially.

The latter conclusion is based almost entirely on two numbers: the number of Grade 12 pupils obtaining a score of 70% or more in mathematics in 2008, which was 25 027, and the corresponding figure for 2015, 17 452. The two figures point to a decline of around 30%. At first glance, this appears to be a national disaster.
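As a quick sanity check, the decline implied by these two endpoint figures can be computed directly (a minimal sketch using only the pass counts quoted above):

```python
# Grade 12 pupils scoring 70% or more in mathematics (figures quoted above)
passes_2008 = 25_027
passes_2015 = 17_452

# Relative decline between the two endpoint years
decline = (passes_2008 - passes_2015) / passes_2008
print(f"Decline: {decline:.1%}")  # roughly 30%
```

Note that this comparison uses only the two endpoint years, which is exactly the practice criticised later in the article.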

However, analysis done by me and others in the basic education department, to be published shortly, paints a completely different picture: large and encouraging improvements in mathematics in Grade 12. Many of the challenges facing the schooling system are accurately described by the institute’s report, but I disagree completely with the claim that the numbers point to a deterioration.

The problem is, first, that the institute looked selectively at the numbers, and ignored important figures, including a few appearing in their own report. This seems irresponsible.

Second, analysing pupil performance trends is exceedingly complex, not just in South Africa. The British education analyst John Jerrim has written extensively about how the data on mathematics trends have been spectacularly misinterpreted in his country. In South Africa, the complexities are particularly daunting in the case of Grade 12 mathematics.

So what did the institute’s analysts get wrong? They failed to point out that all of the 30% decline they refer to happened between 2008 and 2009. From 2009 to 2015, the trend, using values from all years (as one should), is a weakly positive 2% overall. The number of passes at the 70% level in 2008 was exceptionally high relative to all other years, something which should make any analyst suspicious.

The institute’s analysts also failed to point out that the overall increase in the number of physical science passes at the 70% level, over the entire 2008 to 2015 period, was a whopping 85%. In contrast to mathematics, what appears suspicious here is at least one exceptionally low value at the start of the period, in 2009. The question is why two such closely related subjects would move in completely opposing directions.

Another suspicious trend discussed in the analysis to be released by the basic education department is that the percentage of white and Indian pupils achieving high marks in mathematics has declined markedly over the 2008 to 2015 period. There appears to be no plausible explanation for this trend among these two relatively advantaged groups.

We zoomed in on a sample of particularly stable and well-performing schools, with about 4 000 mathematics candidates each year, to find explanations for the apparent anomalies. What emerged clearly is that variations across years in the difficulty of obtaining certain marks, for instance 70% in mathematics, explain most of the anomalies.

Mostly these variations are small, but in certain years they are large. Pupils who obtained 69% in the years 2012 to 2015, when levels of difficulty appeared particularly stable, would have obtained 68% in 2011, a marginally more difficult year, and at least 72% in the years 2008 to 2010. The year 2008 was a particularly easy one for obtaining high marks.

Changing the criteria for our sample did not change the picture substantially. Over the years, the mathematics examination became more difficult, while for physical science the opposite was true.

So is the problem then poor standardisation in the examinations system? Yes and no. There appears to be scope for improving the comparability of marks across years, and this is receiving the attention of the basic education department and the Council for Quality Assurance in General and Further Education and Training (Umalusi).

At the same time, it is technically impossible to achieve anything approaching perfectly comparable marks, at all mark levels, in an examination system such as ours, or in similar systems in other countries. We need to learn to live with some variation over the years and rely on other systems, such as the international testing programmes, for more rigorous assessment of trends.

When we recalibrated results for all pupils over the 2008 to 2015 period, using what we found to be equivalent scores, we found that the number of pupils achieving a 70% level of performance in mathematics increased by 27% overall.

For black African pupils the increase was 61%. Physical science improvements, on the other hand, were found to be smaller than what published statistics would suggest, but were still encouraging. By far the largest improvements were in historically disadvantaged schools and top mathematics performers are spread across more schools in 2015 than they were in 2008.

We do not dispute that the under-performance of schools is a key factor holding the country’s development back. This is made clear in the National Development Plan. However, where we do disagree strongly with the institute for race relations is the direction the schooling system has been taking in recent years.

If the movement has been in the right direction, and the improvements are as large as one might realistically expect, then one could hardly hope for more.

The evidence suggests the quality of school education is improving, that the improvements have been substantial and encouraging, and that they are helping to overcome historical race-based inequalities.

But trends seen in a few other countries, such as Brazil, suggest we should be aiming for an even steeper improvement.

This is what ongoing changes to our interventions, of which there are many, should aim to achieve. We also need a more rigorous national debate, involving a wider range of stakeholders, about the actual performance trends of schools.

Education trends point to notable improvements being made

Dr Martin Gustafsson is based at the Department of Basic Education and is a member of the Research on Socioeconomic Policy group at Stellenbosch University. This article first appeared in the Business Day on the 25th of April and is also available here.

The technical report which informs the newspaper article, and which was discussed at a ReSEP workshop on 18 January 2016, is available here.

By Dr Martin Gustafsson


The debate over whether the performance of schools is improving is a messy one, but one worth cleaning up. If one looks through the maze of often ambiguous statistics, there are a few indicators we can use to reveal actual trends. Fortunately for the country’s future, they point to notable improvements.

This is not to say all is well in the schooling system. In many respects it is not. Yet it would be an incomplete and incorrect diagnosis to ignore steps in the right direction.

John Jerrim, a British education analyst, in an interrogation of Britain’s school performance numbers, demonstrates how even in a country with supposedly good monitoring capacity it is alarmingly easy to get the trends wrong, and for this to lead to unnecessary or inappropriate policy reforms.

In SA, at the secondary school level, the matric pass rate is deeply entrenched in the policy discourse. Its dominance is perhaps unfortunate as it is an exceedingly difficult statistic to interpret because it is influenced by such a variety of factors, from demographic trends to subject selection.

One of the measures we have that gets close to being truly and simply comparable over time is our Trends in International Mathematics and Science Study (Timss) score. The Timss tests, which use common “anchor” questions across years and were administered to representative samples of schools in 2002 and 2011, point to large improvements, off a low base, in Grade 9 maths and science over the nine years.

However, legitimate questions have been raised over the ability of Timss to accurately measure progress in SA, given that Timss is designed primarily for countries with high average levels of performance, and is thus not good at differentiating between relatively weaker learners.

Patterns in our Grade 12 examinations data seem to confirm that substantial improvements in mathematics and science have occurred, and that the trend has continued beyond 2011. However, the care that needs to be taken in interpreting the data is a salutary reminder of what a minefield education performance data can be.

The analysis work I did for the Department of Basic Education was partly prompted by the insistence of the Department of Planning, Monitoring and Evaluation that a “missing middle” be addressed. Traditionally, the focus in basic education has been at a relatively low level (the numbers of learners passing a minimum threshold) and at the absolute top end (the numbers of learners passing with at least 80% in a subject, a distinction). What matters a lot for the future of engineering, financial management and other mathematically oriented fields is the number of Grade 12 learners achieving 60% or 70% in critical subjects such as maths, given that these are the cut-offs universities typically use in their entry requirements.

If one takes the Grade 12 results over the years at face value, one obtains peculiar results. Physical science outcomes rose sharply between 2008, when the current National Senior Certificate was introduced, and last year, while maths outcomes dropped. One would not expect closely related subjects to display such divergent trends. Moreover, it emerges that the percentage of white and Indian Grade 12 learners becoming “60 plus” maths performers, meaning they obtain a mark of 60% or more, has declined markedly, from 32% to 23% over the period.

For black African and coloured learners this percentage dropped too, from 5% to 4%. These race-based inequalities are, of course, at the core of the enormous challenges the schooling system must deal with. But why would the performance of, in particular, whites and Indians drop in this fashion? There is no reason to believe this trend is real.

Closer analysis of the data in fact showed that marks in different years were not exactly equivalent. This should not come as a surprise. While there is almost certainly scope for improvement to the annual marks standardisation process, it would be technically impossible for a nationwide examination system such as our matric to yield exactly comparable marks year after year, through, for instance, anchor items. Common questions would soon become widely known, and their utility thus undermined.

More or less equivalent scores were found by examining the distribution of marks in a sample of stable schools that were unlikely to experience large performance shifts over time. These were high-performing schools with stable enrolment, subject participation and demographic patterns. To illustrate, in maths it seemed a mark of 60 in 2013 was about as difficult to obtain as a mark of 59 last year, 62 in 2010 and 70 in 2008. Across many subjects, 2008 emerged as a year when it was particularly easy to obtain high marks.
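The idea of equivalent scores can be sketched as a simple lookup, using only the illustrative figures from the paragraph above (the actual study derived full mark distributions from the stable-school sample; this covers a single mark level only):

```python
# Marks judged about as difficult to obtain as a mark of 60 in the
# 2013 maths examination (illustrative values from the text; other
# years and levels would come from the full stable-school analysis)
equivalent_to_60_in_2013 = {
    2008: 70,
    2010: 62,
    2013: 60,
    2015: 59,
}

def reached_2013_level(year: int, mark: int) -> bool:
    """Did a raw mark in `year` meet the standard of 60 in 2013?"""
    return mark >= equivalent_to_60_in_2013[year]

# A raw 65 in 2008 falls short of the standard set by 60 in 2013,
# because 2008 was an easier year for obtaining high marks
print(reached_2013_level(2008, 65))
```

Recalibrating the whole system then amounts to applying a mapping like this, level by level, to every learner's raw mark before counting performers.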

Equivalent scores were used to recalibrate marks across the system. More consistent trends emerged as a result. Maths and physical science both displayed improvements over time, roughly in line with each other. Whites and Indians did not experience dramatic declines.

A key finding was that strong improvements emerged among black African and coloured learners. The number of these learners reaching the level of mathematics performance represented by a mark of 60 in 2013 increased by 66%, from around 11 300 a year in 2008-09 to 18 800 in 2014-15. The group of top performers clearly became more diverse in terms of race. The analysis does imply that universities and students should not be overly rigid in interpreting marks when they plan for the future. But the positive trends also imply that one can expect students to repeat courses and drop out less at university.

The bulk of the improvements have occurred in historically disadvantaged schools, and high performers have been found in an increasing number of schools. In 2008, 60% of Grade 12 learners were in schools with at least one “60 plus” maths student. By 2015, this figure had risen to 77%. This is important. The presence of at least one “whizzkid” in a school, whom fellow learners and even teachers can turn to for advice, can be seen as a sign of a more vibrant mathematics class.
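The school-coverage statistic described above can be computed in a few lines from learner-level marks; a minimal sketch with made-up toy data (the actual figures of 60% and 77% come from the full national dataset of recalibrated marks):

```python
# Toy records of (school_id, recalibrated maths mark); the real
# analysis uses learner-level data for the whole system
marks = [("A", 72), ("A", 41), ("B", 55), ("B", 48), ("C", 63), ("C", 30)]

# Schools with at least one "60 plus" maths performer
schools_with_top_performer = {s for s, m in marks if m >= 60}

# Share of learners attending such a school
share = sum(s in schools_with_top_performer for s, _ in marks) / len(marks)
print(f"{share:.0%}")
```

In the real statistic the denominator is all Grade 12 learners in each school, not only the maths candidates shown in this toy example.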

In the light of concerns around the effect of a large number of “progressed” learners in Grade 12 last year, one finding from the analysis is noteworthy. Across key subjects, obtaining 60% or 70% appeared no easier last year than in previous years. In fact, in maths and physical science it was slightly harder in 2015 than in the previous two years. At least at this level of the performance spectrum, historical standards were upheld.

Rather than introducing a completely new type of intensive care for, say, maths, the actual trends suggest we should instead learn from those schools and provinces that are improving fastest, and then promote practices that seem to work. More investigative data work needs to be undertaken, by a wider range of analysts. School-level subject data for Grade 12 are now generally available to researchers through an online data portal, thanks to a data sharing agreement between the Department of Basic Education and DataFirst at the University of Cape Town.

The next step should be to make learner-level data more widely available, partly through the introduction of a rigorous anonymisation process that would protect the privacy of individual citizens.