Best Practice

Measuring impact: Using the Premium

Schools everywhere are looking for the most effective Pupil Premium strategies and for the best methods of showing the results of their spending decisions. Liz Twist looks at what the evidence shows to be the best approaches

From the summer term, the Pupil Premium has been increased to £900 per annum for each eligible pupil. Maintained schools in England are expected to use this to support practices which contribute to narrowing the gap between pupils who are eligible for the additional funding and those who are not. During Ofsted inspections, schools may need to provide evidence to show this.

There are two aspects to this evidence. First, schools will need to demonstrate that the particular resources, interventions and other practices they invest in are well-chosen and backed by evidence of their efficacy. Second, they will need to show that the investment has had the desired positive impact in their own context.



Investing the Pupil Premium

In January 2011, a national newspaper reported that “thousands of highly trained dogs are being primed for calls from schools looking for a sympathetic, albeit furry, ear for children with reading difficulties”.

This introduced a story about Polly, a greyhound which had visited a school once a week for a year with her owner. Children would sit beside Polly and read to her. The school suggested that all 20 children involved now had more confidence in their reading abilities after the project and that all read aloud at home, whereas previously only three had. It also suggested that 60 per cent of the children showed increases in their reading attainment.

The point of this reference is to illustrate how the results of a very small-scale project – the introduction of one dog into one school, “working” with 20 children – were assumed to justify the training of “thousands” of dogs ready to go into schools.

At around the turn of the century, researchers on both sides of the Atlantic were pulling together evidence of the impact of interventions, policies or products. In 1998, for example, a group of researchers at the National Foundation for Educational Research (NFER) in the UK, led by Greg Brooks, published What Works for Slow Readers? The Effectiveness of Early Intervention Schemes.

Several revisions of this report were subsequently commissioned by the Department for Education in England and its predecessors.

On a much bigger scale and with a broader aim of providing scientific evidence for what works in education, the What Works Clearinghouse was established in 2002 by the United States Department of Education.

This focus on using evidence of effectiveness to inform decision-making has led to a trend for governments in the UK and the US to invest considerable funds in this aspect of policy.

The UK government was inspired by the “Race to the Top” scheme, introduced during the first Obama administration with a $4.35 billion investment. There is now an organisation in England working to contribute to the evidence base for interventions: the Education Endowment Foundation (EEF), established in 2011 with a £125 million endowment from the government.

Two of the main roles of the EEF are to identify promising educational innovations that address the needs of disadvantaged children in schools in England and to evaluate these innovations to extend and secure the evidence on what works and what can be made to work on a large scale.

For schools considering how to use their Pupil Premium funds, the Sutton Trust-EEF Teaching and Learning Toolkit summarises educational research to guide decisions on deploying resources to improve the attainment of disadvantaged pupils.

The Toolkit puts in front of teachers the best available evidence on around 30 educational topics, informing both the selection of a particular strategy and its implementation in the way that has been found to be effective. Instead of relying on promotional material in catalogues or on websites, or on word of mouth and the views of “evangelists”, teachers can now refer to this independent and rigorous guide.

As the EEF points out, the Toolkit does not provide a “quick fix”: it is a digest of a large body of research. Effective use of the Toolkit depends upon teachers’ professional knowledge of the context in which they are working, the selection of an appropriate strategy, and the application of that strategy in a systematic manner.

The Toolkit makes evidence available to teachers in a very usable form: evidence which until now has often been confined to academic journals and reports written predominantly for a different readership.



Monitoring the effect of interventions

Teachers need to be able to determine the effect of any change in their practice. In this case, Ofsted will expect to see the impact that initiatives funded by the Pupil Premium have had on attainment.

One way in which this can be measured is through teacher assessment, but standardised tests also have a role in providing objective data.

Again, the EEF has provided useful guidance for teachers with its publication of the criteria used to determine the appropriateness of tests used in evaluations (see further information).

In order to measure the impact, teachers need a “before and after measure”, or a pre- and post-test.
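
As a minimal sketch of this kind of before-and-after comparison, the Python example below expresses the gain as the difference between mean standardised scores on the pre- and post-test. The pupil scores are invented purely for illustration and are not drawn from any real test.

# Minimal sketch of a pre/post comparison using standardised scores.
# All pupil scores are invented purely for illustration.

pre_scores = [88, 92, 95, 90, 85, 97, 91, 89]    # scores before the intervention
post_scores = [93, 95, 99, 94, 90, 101, 96, 92]  # scores after the intervention

def mean(scores):
    """Arithmetic mean of a list of scores."""
    return sum(scores) / len(scores)

gain = mean(post_scores) - mean(pre_scores)
print(f"Mean pre-test score:  {mean(pre_scores):.1f}")
print(f"Mean post-test score: {mean(post_scores):.1f}")
print(f"Mean gain:            {gain:.1f} standardised points")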

Teachers are often fond of specific tests with which they have become familiar. This can have some positive aspects: they may well know how performance on a particular test relates to pupils’ ability to access the curriculum, or be confident about scoring the test reliably. These elements are often, however, outweighed by the disadvantages associated with the use of outdated or otherwise unsuitable tests.

It is important that a test has had a recent standardisation so that the results reflect the attainment of pupils who have experienced a similar curriculum. It is usually, but not invariably, the case that a recent standardisation reflects a recently constructed test. This is important as it relates to the issue of validity – for example, children may be able to read the word “milkman”, but perhaps these days not all children can be expected to know to what it refers.

Another consideration in the selection of an appropriate test is the nature of the standardisation. All standardisations are based on analysing the performance of a sample of pupils on the particular test.

This sample needs to be sufficiently large to be representative of the population that the test is to be used with. Robust standardisations rely on stratified samples, i.e. the various features that can distinguish schools (such as school size, or the proportion of pupils eligible for free school meals) are represented in the sample of schools in the same proportions as across the whole population of schools.

This information should be available in technical information from the test developers. Samples involving small numbers of pupils, or schools which cluster in a geographical area for example, are unlikely to be nationally representative.
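
To make “represented in the same proportions” concrete, the short Python sketch below compares a hypothetical standardisation sample with assumed national proportions on one stratification variable (school size). All of the figures are invented for the example.

# Illustrative check of how closely a (hypothetical) standardisation sample
# mirrors national proportions on one stratification variable: school size.
# All figures are invented for the example.

national = {"small": 0.25, "medium": 0.50, "large": 0.25}  # assumed national proportions
sample = {"small": 0.24, "medium": 0.52, "large": 0.24}    # proportions in the sample

for band, national_share in national.items():
    gap = abs(sample[band] - national_share)
    print(f"{band:<6}  national {national_share:.2f}  sample {sample[band]:.2f}  gap {gap:.2f}")

# Large gaps on any stratum would suggest the sample is not
# nationally representative on that characteristic.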

Finally, teachers should interpret test scores with regard to the published confidence intervals. These allow for measurement error: tests can only sample the particular area of learning which they assess, and therefore the score a pupil achieves may vary within a few points of his or her “true score”.

A confidence band of 90 per cent, for example, indicates that teachers can have 90 per cent certainty that the true score lies within the confidence band. When the scores of two pupils are compared, if the confidence bands overlap, the difference between the scores is not significant.
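
This overlap check can be pictured with a short Python sketch. The half-width of five points and the pupil scores used here are invented for illustration; in practice the confidence band would be taken from the test’s published tables.

# Sketch of the overlap check described above. The half-width of the 90%
# confidence band and the pupil scores are invented for illustration.

HALF_WIDTH = 5  # assumed half-width of the 90% confidence band, in standardised points

def confidence_band(score, half_width=HALF_WIDTH):
    """Return the (lower, upper) bounds of the confidence band around a score."""
    return score - half_width, score + half_width

def bands_overlap(score_a, score_b, half_width=HALF_WIDTH):
    """True if two pupils' confidence bands overlap, i.e. the difference is not significant."""
    low_a, high_a = confidence_band(score_a, half_width)
    low_b, high_b = confidence_band(score_b, half_width)
    return low_a <= high_b and low_b <= high_a

print(bands_overlap(98, 104))  # True: bands 93-103 and 99-109 overlap, so not significant
print(bands_overlap(92, 105))  # False: bands 87-97 and 100-110 do not overlap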

Through the use of suitably standardised tests, teachers will have the evidence to determine which practices, policies and interventions are effective in their own contexts. This will be invaluable to inform future practice, including the use of the Pupil Premium.



• Liz Twist is the head of the Centre for Assessment at the National Foundation for Educational Research.

Further information
• For more information on NFER’s Standardised Tests, visit www.nfer.ac.uk/schools/nfer-tests.
• To access the EEF’s Toolkit, go to http://educationendowmentfoundation.org.uk/toolkit/.
• Download the EEF’s Testing Criteria at http://educationendowmentfoundation.org.uk/uploads/pdf/EEF_testing_criteria.pdf.
• To download a free PDF of all the NFER Research Insights articles, tackling issues such as assessment, parental engagement and creativity, visit www.nfer.ac.uk.