Best Practice

Pupil Premium interventions: The evaluation deficit

We should be using evidence of what works to close the attainment gaps, but are we properly evaluating our interventions? Owen Carter looks at the ‘evaluation deficit’...

It is common these days to see education as an engine for social mobility and for creating opportunity for those who have the least.

Policies like the Pupil Premium are explicitly intended to support this and address the very real challenges of education in high deprivation areas.

Quite rightly, schools therefore invest huge amounts of time, money and energy in activities to make a difference, especially to the least advantaged.

But this comes with a flipside. As Becky Allen points out in her blog series The Pupil Premium is not working, it can drive “short-term, interventionist behaviours”.

Given accountability pressures, there is an understandable tendency to just do more and more – after-school clubs, one-to-one tutoring, curriculum boosters – in order to make a difference.

But the effectiveness of this approach is being called into question. One Sutton Trust report (2018) found that even in academies set up explicitly to serve disadvantaged communities, most disadvantaged pupils were finding it increasingly difficult to improve levels of attainment.

And ImpactEd’s small-scale research suggests that only three per cent of school leaders are confident in their ability to evaluate the impact of the work they are doing – this is what I call the “evaluation deficit”.

For sustained improvement, we need to get much sharper at knowing whether what we are doing is effective, and at prioritising the strategies that do most both to close the gap and to raise the bar overall. This article offers some practical methods for starting to do just that.

Start by making good bets

For schools looking to make evidence-based change, there has been an explosion in research evidence appearing to show what works to improve outcomes for the most disadvantaged. So it makes sense to start there.

For example, at a whole-school level, research from the National Foundation for Educational Research (NFER) – as presented at the SSAT in 2016 – found that those schools which had most successfully closed the gap prioritised seven key building blocks: attainment for all, behaviour and attendance, high-quality teaching, meeting individual needs, effectively using staff, supportive leadership, and data-driven insights to support intervention programmes.

For specific strategies, the Education Endowment Foundation’s (EEF) Teaching and Learning Toolkit has made ideas such as developing metacognition and self-regulation, providing effective feedback and mastery learning common parlance in schools, with perhaps the most useful components being the research reports that sit behind their evidence summaries.

The Institute for Effective Education’s (IEE) Evidence 4 Impact database provides a similar wealth of resources. And researchers such as Barak Rosenshine provide clear syntheses of evidence that are immediately applicable to the classroom (see Tom Sherrington’s blog for an engaging summary of his work, June 2018).

What does this look like here?

Having identified an area where you think your school might be able to improve practice, you then need to think carefully about what this will look like in your school. Most projects sink or swim by implementation. Avoid initiative overload in favour of executing a few key things well.

Then you need to think through what success will look like for this initiative – defining not just what you want to do, but what you are trying to achieve with it, and how you will know if you have achieved it. These conversations will often benefit from a focus on the outcomes you are aiming for rather than prescription about what this needs to look like.

Planning for impact

With outcomes identified and an implementation strategy in place, you are ready to think about how you will evaluate the change to see whether it is achieving the outcomes you are hoping for.

First, decide what type of evidence you need for the questions you are trying to answer. For some initiatives that are relatively easy to implement, data like informal feedback from teachers may be completely sufficient evidence for what you are trying to achieve.

For more involved projects that are aiming to make a sustained difference to pupil outcomes, you may want to look at more robust measures, potentially against a control group of pupils that are not taking part in a particular programme.

You should also be selective about your outcome measures, both intermediate and longer term. The range of indicators you could look at is huge and could include:

  • Academic attainment: Consider carefully the validity and reliability of your data – national, moderated exam results and standardised assessments will give different sorts of data to classroom assessments.
  • Pastoral and school engagement measures: For example, looking at measures of behaviour, exclusions, attendance. This data will often be readily available and may be high quality.
  • Broader skills: Many initiatives will be looking to develop outcomes such as pupils’ levels of motivation, self-efficacy or metacognition. In many cases there exist questionnaires that can be used to measure these outcomes.

Once you have decided on your measures, you will typically want to have both baseline and outcome data – where were young people when you started with an initiative and where did they end up? Can you compare this against other groups that were not taking part, or even previous year groups, to put your observed impact into context?
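
If you do want to put a number on that comparison, one common approach is to calculate an effect size: the difference in average progress between pupils taking part and a comparison group, divided by the pooled standard deviation. The short sketch below is purely illustrative, using entirely hypothetical pupil scores, and is no substitute for a well-designed evaluation or specialist advice.

```python
# Illustrative only: a simple effect size (Cohen's d) comparing the progress
# of an intervention group with a comparison group. All scores are hypothetical.
from statistics import mean, stdev

# "Gain" = outcome score minus baseline score for each pupil
intervention_gains = [4, 6, 5, 7, 3, 6]
comparison_gains = [2, 3, 4, 2, 5, 3]

n1, n2 = len(intervention_gains), len(comparison_gains)

# Pooled standard deviation across the two groups
pooled_sd = (
    ((n1 - 1) * stdev(intervention_gains) ** 2
     + (n2 - 1) * stdev(comparison_gains) ** 2)
    / (n1 + n2 - 2)
) ** 0.5

effect_size = (mean(intervention_gains) - mean(comparison_gains)) / pooled_sd
print(f"Effect size: {effect_size:.2f}")  # interpret alongside professional judgement
```

Even this simple calculation is only as good as the assessments behind it, so it should be read alongside the validity and reliability caveats above.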

Sense-check all of this against workload and your existing school processes. You do not want to create a need to collect lots of new data, or to overhaul all your systems. Evaluation should reduce workload by helping you focus; it should not create more work.

What to do with your data?

This is the key question that you should plan for at the beginning. You should be aiming to produce some sort of summary that can be useful for non-expert stakeholders. But most crucially, you need to carve out some time to talk through the implications.

If your evaluation finds that what you were doing was not effective, what are you going to do with that? Will you drop the initiative entirely, redesign some components of it, do some further research?

Evaluation will not necessarily give you the answers, but what it will do is give you some evidence that you can reflect on and use alongside your professional judgement.

Conclusion

To summarise, the process for implementing and evaluating change looks like this:

  • Start with the best bets – what does the research evidence suggest?
  • Think: “What does this look like here?” and plan for quality implementation.
  • Plan for impact, with a well-considered and appropriate evaluation strategy.
  • Know what you are going to do with your data before you collect it.

If done well, a good implementation and evaluation process can maximise the chances of teachers trying something that works and can minimise the risk that interventions will fail.

Perhaps more importantly though, it can help pave the way for wider cultural change so that when we want to narrow the gap, or indeed improve any sort of outcome, our focus is not on simply doing more, but on carefully identifying the problem that we are trying to solve and rigorously assessing whether our interventions are solving it.

This offers not just the potential of improved outcomes, but also a more sustainable and healthier approach to making a difference.

  • Owen Carter is the co-founder of ImpactEd, which works to improve pupil outcomes by addressing the evaluation deficit. Visit https://impacted.org.uk/
  • Owen will be speaking at the 12th National Pupil Premium Conference organised by Headteacher Update on September 20. For details, visit www.pupilpremiumconference.com

Further information & resources