Digging deeper into the SATs marking problem
16 Jul 2008 The Guardian
More questions about the test fiasco
Tuesday July 15, 2008
There had been warnings that the new marking contractor, ETS Europe, was experiencing difficulties. Yet there was no official acknowledgement of this until just four days before the results were due.
In June, I received an email from a key stage 3 science marker who, despite repeated attempts, had been unable to submit his marks using the ETS software. He wrote that phone calls and emails to ETS went unanswered, and that his contacts with other markers suggested the problem was widespread. Getting ETS to answer the phone at all was difficult, and promises to call back were not honoured. This was all the more frustrating because ETS had asked for early submission of completed mark lists and had even promised cash bonuses to those who complied.
Ofqual has launched an inquiry, headed by Lord Sutherland, but why did it take so long for the alarm signals to be heard? Or did the authorities know something was amiss but remain quiet, hoping that somehow ETS would still meet the deadline?
Presumably, the inquiry will answer these questions as well as establish exactly what went wrong. Earlier correspondence between the National Assessment Agency (NAA), the QCA and the government shows the QCA was confident that the changes in marking practice proposed by ETS would not cause delays. On March 27, the QCA's chief executive, Ken Boston, wrote a letter to the education secretary, Ed Balls, under the now inappropriate heading: "National curriculum tests: improving quality of marking". It said ETS had proposed four innovations to the marking process.
These "innovations" involved greater use of technology. First, there was "online mark capture", providing computer-automated addition of mark totals and assignment of levels. This was "designed to reduce clerical errors".
The second change was called "online standardisation". This was meant to provide an online replacement for a paper-based service and would "provide faster feedback to markers ... and remove the potential for losing pupils' scripts in the postal system".
The third innovation, called "online benchmarking", was meant to provide a more effective method of detecting inconsistent marking.
Taken together, wrote Boston, these changes would "improve the service to schools". As we now know, ETS in fact failed on the most basic element of the service: meeting the deadline. The NAA has suggested that the new technology was a factor, saying that "difficulties with the new marker systems caused further delays to marking and the capture of results".
It is the final paragraph of Boston's letter that, with hindsight, appears over-confident. He wrote: "We are aware that the introduction of new IT-based improvements must carry risks, but we have mitigation plans to ensure delivery is secure."
Balls clearly had his doubts. In a response dated March 30, he warned that "online systems, although they confer many benefits, are also subject to risk when first introduced". He added that he welcomed the NAA's strategy for "mitigating the risk to the tests" but wished to be kept informed of its effectiveness.
Since we now know that this strategy did not work, perhaps we can see the more recent correspondence between the NAA and Ed Balls? The Ofqual inquiry should certainly ask to see it.