Panning for gold in Excel: how do you know what the data is telling you?

For better or worse, schools are currently awash with data. Personally I like the stuff; when I look at the columns of figures, percentages and raw values in the January DfE release of secondary school data it is literally (and I mean that literally) the same as Neo seeing the background code of the Matrix rolling down before his eyes. It’s an antidote to the January blues to comb all those juicy number columns for patterns across schools, to turn over stones looking for clues to how some schools manage to exceed or reverse expectations, and to riffle through the golden wrappers of educational toffee to see if somewhere there is the green triangle of knowledge that will illuminate a path for the next five years.

In reality it’s very hard to filter out useful information. For one thing, the data does not represent everything that goes on in a school, nor does it provide context. Still, it’s a large enough data set that you can make some generalisations, though some of the areas that seem to be characteristic of high-performing schools (insert your own definition of high-performing here) are clearly not under the control of individual schools. Funding, intake ability (particularly in English), physical location and on and on and on…so many variables that can and do influence student achievement are hidden behind the proxy values of the data dump, and it is no small task to separate out what is making the difference. It is far easier, and more seductive, to attach a narrative to one particular metric and claim that’s what is behind the success (or failure) of a particular strategy. No-one is too keen on hearing that there aren’t any clear patterns or that ‘more research is needed’. A Headteacher who needs to produce results within the short term of a three-year cycle doesn’t have the luxury of waiting for the fog to clear.

All of which is not to say the data isn’t useful or should be discarded, or worse ignored. The tired canards of ‘lies, damn lies and statistics’ or the teeth-grindingly bad ‘you can say anything with statistics’ are as empty of meaning as ‘you can say anything with words’. The latter sentence is so obviously, laughably pointless and trite that no one ever uses it as a gambit (though the recent US elections may as well have made it their campaign slogan). The same statistics can absolutely be used to represent conflicting narratives, but the point is to interrogate and question the data. And if the message that eventually comes out of the data is ‘inconclusive’ or ‘not significant’ (I know, I know, but I’m going to use the term anyway) then so be it. Let us be blunt; there have been, and will continue to be, policies set at a national level that didn’t so much borrow from a questionable interpretation of data as cosh it over the head, rifle its pockets and leave it unconscious in an alley.

What, then, should we make of the Wild West of school data, where the first to draw gains the advantage, leaving the careful and considered approach’s bullet-riddled corpse in the dust? Do we spray the room with our own narratives backed up with selective data and hope some find their mark? Not at all. The way to learn from data is to employ and champion the methodical approach. There will always be a place for the surprising, innovative and creative approaches that go against predictable patterns, but for most schools in most places, most of the hard yards are gained in the day-to-day routines that can be seen and copied. That’s not to say it’s easy, far from it, but I think that if we want to find long-term answers to some of our educational issues we will find the direction lurking among the data.

I’ll indulge myself with a story of statistical discovery. A few years back I was looking for anything we could use in school to improve the achievement of boys (like I said, it was a few years ago). Spoiler alert: I didn’t solve the issue. I began by combing the data, simply getting the in-school data and looking for anything that stuck out or showed a pattern. It was largely trial and error stuff, but something surprising quickly stood out. Our school used a four-point system to represent the ‘effort’ of each student, reported each term by each classroom teacher. One of the problems back then was that the term ‘effort’ was not clearly defined, leaving the interpretation up to an individual teacher. This meant it was possible for a student to be under-achieving according to their target but still top-scoring in their effort if the class teacher thought they were really trying but some other factor was holding them back. The interesting part, though, was to take a whole year cohort and look at their effort marks across all the subjects.

The effort marks were amazingly high. On the scale (4 being excellent, 3 good, 2 under-performing, 1 poor, though I forget the exact definitions) the vast majority of students were scoring 3 or 4 across the board. Given that our school results then were not high, something wasn’t matching up. But there was something unsurprising about the names of the students who weren’t getting the top marks: the list was dominated by students with behavioural issues. There was a clear issue with the validity of this data; in other words, it wasn’t showing what it claimed to show. What it did suggest to me was that teachers were using the effort score as a proxy for classroom behaviour. We’d inadvertently created a system that rated students on a four-point scale for disruption in lessons.
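If you want to repeat that first sanity check on your own spreadsheet export, a minimal sketch in Python with pandas might look like the following. The file name and column names (‘student’, ‘subject’, ‘effort’) are invented for illustration, not taken from any real system:

```python
import pandas as pd

# Hypothetical export: one row per student per subject, with the termly
# 'effort' score on the four-point scale (4 = excellent ... 1 = poor).
effort = pd.read_excel("effort_scores.xlsx")  # columns: student, subject, effort

# What proportion of all reported marks sit at each point on the scale?
distribution = effort["effort"].value_counts(normalize=True).sort_index()
print(distribution)
# If nearly all the weight sits on 3 and 4 while results are modest,
# that mismatch is the first thing worth questioning.
```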

Although most of the names were well known and already receiving support, the students slightly lower down the list were the under-the-radar students who wouldn’t necessarily show up as having problems at an individual level but were showing a pattern across subjects. It was, incidentally, no use here looking at average effort scores. A single low score in one subject could disproportionately pull down a mean, so instead I counted the number of subjects in which a student was scoring 2 or 1 and used that as a baseline. Reading the data like this also threw up some other issues regarding how teachers approach such data tasks: I strongly suspect that when a system is in place that would require a certain response from a teacher, there is a subconscious tendency to err on the side of positivity. We don’t want our students to fail or be a problem, so we look on the bright side and give the benefit of the doubt. It’s a problem of subjective measurement that doesn’t necessarily have a straightforward solution. I think if we’d asked teachers to input a value for ‘classroom disruption’ the data would have shown something quite different. We have a different system now, but I can imagine a few areas within a school where such measures could show up some interesting comparisons and reveal some of the (potentially uncomfortable) background choices made in subjective assessments between teachers.
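A sketch of that counting approach, carrying on the same invented file and column names as above: rather than averaging, tally the number of subjects in which each student scores 2 or below and flag those above some threshold.

```python
import pandas as pd

# Same hypothetical export: one row per student per subject.
effort = pd.read_excel("effort_scores.xlsx")  # columns: student, subject, effort

# Count, per student, the number of subjects reported at 2 or below.
low_counts = (
    effort.assign(low=effort["effort"] <= 2)
          .groupby("student")["low"]
          .sum()
          .astype(int)
          .sort_values(ascending=False)
)

# The threshold is a judgement call; here three or more low-scoring
# subjects marks the cross-subject pattern worth a closer look.
print(low_counts[low_counts >= 3])
```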

Human error and biases will creep into data. The best that can be hoped for is to reduce them as much as is reasonably possible. In-school data is not as much of a problem here as external measurements (the bottleneck of individual judgements in subjective assessments like exam marking in English can make or break a whole school for years). Realising the limitations of data, and developing the statistical literacy and competence to deal with it, are vital.
