Dr Martin Gustafsson is based at the Department of Basic Education and is a member of the Research on Socioeconomic Policy group at Stellenbosch University. This article first appeared in the Business Day on the 25th of April and is also available here.
The technical report which informs the newspaper article, and which was discussed at a ReSEP workshop on 18 January 2016, is available here.
By Dr Martin Gustafsson
The debate over whether the performance of schools is improving is a messy one, but one worth cleaning up. If one looks through the maze of often ambiguous statistics, there are a few indicators we can use to reveal actual trends. Fortunately for the country’s future, they point to notable improvements.
This is not to say all is well in the schooling system. In many respects it is not. Yet it would be an incomplete and incorrect diagnosis to ignore steps in the right direction.
John Jerrim, a British education analyst, in an interrogation of Britain’s school performance numbers, demonstrates how even in a country with supposedly good monitoring capacity it is alarmingly easy to get the trends wrong, and for this to lead to unnecessary or inappropriate policy reforms.
In SA, at the secondary school level, the matric pass rate is deeply entrenched in the policy discourse. Its dominance is perhaps unfortunate as it is an exceedingly difficult statistic to interpret because it is influenced by such a variety of factors, from demographic trends to subject selection.
One of the measures we have that gets close to being truly and simply comparable over time is our Trends in International Mathematics and Science Study (Timss) score. The Timss tests, which use common “anchor” questions across years and were administered to representative samples of schools in 2002 and 2011, point to large improvements, off a low base, in Grade 9 maths and science over the nine years.
However, legitimate questions have been raised over the ability of Timss to accurately measure progress in SA, given that Timss is designed primarily for countries with high average levels of performance, and is thus not good at differentiating among relatively weak learners.
Patterns in our Grade 12 examinations data seem to confirm that substantial improvements in mathematics and science have occurred, and that the trend has continued beyond 2011. However, the care that needs to be taken in interpreting the data is a salutary reminder of what a minefield education performance data can be.
The analysis work I did for the Department of Basic Education was partly prompted by the insistence of the Department of Planning, Monitoring and Evaluation that a “missing middle” be addressed. Traditionally, the focus in basic education has been on the bottom end, the number of learners clearing a minimum pass threshold, and on the absolute top end, the number passing with at least 80% in a subject, a distinction. What matters a lot for the future of engineering, financial management and other mathematically oriented fields is the number of Grade 12 learners achieving 60% or 70% in critical subjects such as maths, given that these are the cut-offs universities typically use in their entry requirements.
If one takes the Grade 12 results over the years at face value, one obtains peculiar results. Physical science outcomes rose sharply between 2008, when the current National Senior Certificate was introduced, and last year, while maths outcomes dropped. One would not expect closely related subjects to display such divergent trends. Moreover, it emerges that the percentage of white and Indian Grade 12 learners becoming “60 plus” maths performers, meaning they obtain a mark of 60% or more, declined markedly, from 32% to 23% over the period.
For black African and coloured learners this percentage dropped too, from 5% to 4%. These race-based inequalities are, of course, at the core of the enormous challenges the schooling system must deal with. But why would the performance of, in particular, whites and Indians drop in this fashion? There is no reason to believe this trend is real.
Closer analysis of the data in fact showed that marks in different years were not exactly equivalent. This should not come as a surprise. While there is almost certainly scope for improvement in the annual marks standardisation process, it would be technically impossible for a nationwide examination system such as our matric to yield exactly comparable marks year after year through, for instance, anchor items. Common questions would soon become widely known, and their utility thus undermined.
More or less equivalent scores were found by examining the distribution of marks in a sample of stable schools that were unlikely to experience large performance shifts over time. These were high-performing schools with stable enrolment, subject participation and demographic patterns. To illustrate, in maths it seemed a mark of 60 in 2013 was about as difficult to obtain as a mark of 59 last year, 62 in 2010 and 70 in 2008. Across many subjects, 2008 emerged as a year when it was particularly easy to obtain high marks.
Equivalent scores were used to recalibrate marks across the system. More consistent trends emerged as a result. Maths and physical science both displayed improvements over time, roughly in line with each other. Whites and Indians did not experience dramatic declines.
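The recalibration described above can be sketched as equipercentile equating, a standard technique in test equating: a mark from one year is mapped onto the reference year’s scale by matching percentile ranks within the stable-school mark distributions. The sketch below uses synthetic data purely for illustration; the technical report’s exact method and data may differ.

```python
import numpy as np

def equivalent_mark(mark, year_marks, ref_marks):
    """Map a mark from one year onto the reference year's scale by
    matching percentile ranks (equipercentile equating)."""
    # Percentile rank of the mark within the comparison year's distribution
    pct = np.mean(np.asarray(year_marks) <= mark) * 100
    # The reference-year mark sitting at that same percentile rank
    return float(np.percentile(ref_marks, pct))

# Synthetic marks for stable schools in two years (illustrative only):
# the 2008 distribution is shifted upward, mimicking an "easier" year.
rng = np.random.default_rng(0)
marks_2013 = rng.normal(55, 15, 5000).clip(0, 100)  # reference year
marks_2008 = rng.normal(62, 15, 5000).clip(0, 100)  # "easier" year

# A 70 obtained in the easier year equates to a lower mark on the 2013 scale
print(round(equivalent_mark(70, marks_2008, marks_2013)))
```

Mapping every year’s marks onto a single reference scale in this way is what allows thresholds such as “60 plus” to be compared consistently across years.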
A key finding was that strong improvements emerged among black African and coloured learners. The number of these learners reaching the level of mathematics performance represented by a mark of 60 in 2013 increased by 66%, from around 11,300 a year in 2008-09 to 18,800 in 2014-15. The group of top performers clearly became more diverse in terms of race. The analysis does imply that universities and students should not be overly rigid in interpreting marks when they plan for the future. But the positive trends also imply that one can expect lower rates of course repetition and dropping out at university.
The bulk of the improvements have occurred in historically disadvantaged schools, and high performers have been found in an increasing number of schools. In 2008, 60% of Grade 12 learners were in schools with at least one “60 plus” maths student. By 2015, this figure had risen to 77%. This is important. The presence of at least one “whizzkid” in a school, whom fellow learners and even teachers can turn to for advice, can be seen as a sign of a more vibrant mathematics class.
In the light of concerns around the effect of a large number of “progressed” learners in Grade 12 last year, one finding from the analysis is noteworthy. Across key subjects, obtaining 60% and 70% appeared no easier last year than in previous years. In fact, in maths and physical science these marks were slightly harder to obtain in 2015 than in the previous two years. At least at this level of the performance spectrum, historical standards were upheld.
Rather than introducing a completely new type of intensive care for, say, maths, the actual trends suggest we should instead learn from those schools and provinces that are improving fastest, and then promote practices that seem to work. More investigative data work needs to be undertaken, by a wider range of analysts. School-level subject data for Grade 12 are now generally available to researchers through an online data portal, thanks to a data sharing agreement between the Department of Basic Education and DataFirst at the University of Cape Town.
The next step should be to make learner-level data more widely available, partly through the introduction of a rigorous anonymisation process that would protect the privacy of individual citizens.