A plea for the return of complexity and nuance to education policy

One of the most difficult skills to teach in history is the skill of evaluation. The historian (usually not before A-level, because this is a difficult business) is invited to delve into endless statistics, sources and events, and to reach a view on the most likely motives, the most convincing cause, or the most significant success or failure.

Most students, indeed most adults, find this extremely difficult. We’re human – we like to be able to say that X led to Y, or that A was definitely more important than B. We crave order, predictability and the comfort of a universe in which we know that if we do this thing here, then that thing there will follow – today, tomorrow, and forever. We don’t like randomness or extreme complexity.

The problem is that in the world of human interaction which history covers, such simplicity is rarely found. We cannot be certain that different decisions by Hitler in 1942 would not have defeated Russia and brought about a different ending to WWII; we cannot be sure that Cromwell’s motives were largely inspired by God rather than ambition; and we cannot be certain that the Profumo Scandal was the most important reason for the 1964 election result. All we can do is look at the evidence, weigh it as best we can, and argue that this case is stronger than that case. Putting a point of view requires us to present a case, but few historical cases would be likely to pass the threshold of proof required in a court of law. Weaker candidates will tend to revert to simple assertion: “It could have been reason X, Y or Z, but I think it was Z”. But to hit those higher marks, all historians must admit to the existence of valid alternative views, and acknowledge that certainty is simply not available to us. Nuance is the king of the A-grade in history.

Yet in education policy, no such doubt is allowed to exist. We have evolved, over the last thirty years, an entire educational policy edifice which is dedicated to what Ben Goldacre might call “Bad Science”, but which thousands of teachers would call “cobblers”. From the Secretary of State’s Office, down through the quangos, to the very classrooms we teach in, we have endless prescriptions of action based on the presumption of knowable certainty. Yet that certainty does not, and cannot, exist.

This blog, which comes as close to my own philosophy of education as anything I’ll ever write, was inspired by several discussions this week about both national and institutional policy. I’m going to try to work down from the DFE to the classroom, to show how I think that, at every single level, we are causing needless damage to ourselves through our hopeless quest for binary certainty.

Those whom the gods wish to destroy, they first make mad

Unlike most teachers, I worked for ten years in the DFE before moving to the front line, so I’ve seen policymaking at first hand, and one reason I became disillusioned was that it was so transparently silly. Let me give an example:

The Blair Government introduced something called the Comprehensive Spending Review. The idea was that you had to justify the budgets you spent by reference to the results of that spending. Sounds great: logical; sensible; prudent. There was just one problem with this plan, Baldrick, and that is that it was bollocks. The idea that in a field like education one can isolate the impact of a single government funding stream, often in very short timescales, is just demonstrable nonsense.

I recall one CSR in which DFE civil servants were asked to justify the budgets for policies such as the Specialist Schools programme, Excellence in Cities programme, Gifted and Talented policy, and Learning Mentors. So we officials beavered away (I was in the EiC division at the time) to produce statistics which showed that the schools in our programmes were improving faster than others, or – if evidence was reluctant to pop up in time – to gather testimonials from the headteachers gratefully receiving our largesse. You can probably predict the issue already: we were all citing the same schools! Most of these schools, by dint of being in difficult circumstances, were on the receiving end of multiple grants, and thus the improved results of school X in Birmingham were being cited as evidence of the success of its status as a specialist Languages College, but also, separately, being cited by a different team in the same CSR as evidence of the success of the Excellence in Cities policy, AND of having Learning Mentors, and so on. Often, these impacts would be claimed even though a programme had only been in place for six months, or had only touched a tiny proportion of the children in that school. Yet even if there had been just one programme per school, and it had run for seven years, it would still have been nonsense to claim that just one programme had produced any results increase.

The inputs into any school are too many to list, and the idea that the impact of just one could be quantified in any meaningful way is not credible. Moreover, we’d be very aware that even within our self-selected groups of schools, some schools hadn’t improved as fast, some had done very well indeed, and some had gone backwards. So we’d average them and claim the overall figure (if positive) meant the programme which we funded in all of them must be having a positive effect. If it was negative, we’d claim that there hadn’t been enough time for it to “bed in”. This is truly awful “evidence”, and we knew it. But there was a game to be played, and budgets to be protected. Ministers simply did not want to hear that their favourite policy, announced with great fanfare, was not the successful button-push they wanted it to be. They’d promised to “do something”, and so they did. Once it was done, there was too much political face to lose to admit that life is a bit more complex than a headline in the Daily Mail.
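The averaging sleight of hand is easy to reproduce. Here is a minimal sketch with entirely invented numbers (nothing here is real programme data): half the schools decline or flatline, yet the mean still comes out positive and can be claimed as a “programme effect”.

```python
# Hypothetical illustration: invented results changes (in points) for ten
# schools in a funded programme. Some improved, some declined, some flatlined.
changes = [4.0, -3.0, 0.5, 6.0, -2.5, 1.0, -1.0, 3.5, 0.0, -0.5]

mean_change = sum(changes) / len(changes)
improved = sum(1 for c in changes if c > 0)

print(f"Mean change: {mean_change:+.2f} points")       # a small positive average...
print(f"Schools improved: {improved}/{len(changes)}")  # ...despite only half improving
```

The positive mean says nothing about whether the programme caused anything, and it hides the schools that went backwards entirely.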

Yet the government still plays this game. Most obviously with headline policies such as Academies and Free Schools, the Government is wont to claim that its pet projects improve faster than LEA-maintained schools, as if the only possible cause of that could be their academy status (even if it were true, which it isn’t necessarily). But that same binary certainty is apparent in its curriculum reforms: coursework is bad, but final exams are good; the EBacc is good, but applied qualifications are bad, and so on.

The nonsense simplification is also applied to individual schools: Mossbourne, Harris and the West London Free School are often used by Government Ministers to show that North Korean chanting, or carpet magnates, or Michael Gove’s friends, are all that are needed to “improve” schools. Yet as I’ve shown on this very blog, it is far from clear that those schools’ results are (a) anything at all to do with their trumpeted policies or status, or (b) particularly remarkable compared with other schools with similar intakes. The latest binary cobblers is: setting = good; mixed ability = bad. Nicky Morgan wouldn’t be able to get a decent grade in a history A-level today with that sort of rhubarb.

Nor is such desperate inaccuracy confined to the Tories. I had a discussion with a well-known Labour teacher blogger last week in which he, like Tristram Hunt some months ago (https://disidealist.wordpress.com/2014/07/24/tristram-hunt-a-historian-who-doesnt-use-evidence/), claimed that the recent Sutton Trust report on academy chains showed that academy chains drove up results. It doesn’t, not remotely. What it shows, not for the first time, is that the picture is complex. Some chains seem to be doing better than others; some are doing worse than LEA-maintained schools, while others are doing better. And there are all sorts of variables to take account of at individual school level. It’s therefore just as valid to claim that sponsored academy chains actually lower school outcomes as it is to claim they raise them. To definitively back one side or the other is intellectual gobbledegook – assertion dressed up as evaluation.

It seems that the desire to be able to claim that this input results in that outcome, despite all the evidence that life is far too complex for such simplifications, is not just confined to politicians desperate for tabloid headlines, but is also feeding a very human need for simplification which goes way beyond Sanctuary Buildings.

Ofsted – proof that the Gods do in fact wish to destroy us

Ofsted makes great play of being about evidence. In fact, Ofsted judgements bear as much relationship to evidence as I do to a Martian. This blog rather nails the Ofsted-judgements-based-on-evidence myth: http://jtbeducation.wordpress.com/2014/06/29/whats-the-easiest-way-to-a-secondary-ofsted-outstanding/ However, leaving aside the fact that every HMCI has been wandering around naked for years, we can look at some of the key ways in which Ofsted also conspires in this “policy by simplistic assertion” which infects the whole system from top to bottom.

The following never ceases to amaze and appal me: I don’t know a single person, anywhere, in the whole education system, who believes that all children learn in straight lines, improving their skills, understanding and knowledge at the same, constant pace, irrespective of age, ability or interest. There is not one rational educationalist in the world who doesn’t acknowledge that learning is not linear; it is more of a twisty mountain track over hill and down dale than a Roman highway up an endless 45-degree slope. Yet the assumption of all children making the same amount of progress, at the same speed, over the same period of time, is the fundamental underpinning of the entire Ofsted inspection system. It has such power that you can see simple diagrams in school reports and on school walls all over the country, showing with those smooth, straight lines how levels will increase for all children at the same rate over time.

That. Is. Mad.

We all know that’s not how learning happens, yet in thousands of schools, when a child fails to demonstrate the perfect Straight Line Of Ofsted-Approved Progress, intervention programmes are put in place, teachers are summoned to explain why reality doesn’t match fantasy, and red boxes appear on spreadsheets. Any attempt to acknowledge the differences between individual children, the vagaries of different subjects and the slightly unknowable humanity of education is met with that classic Get Out Clause of simpletons everywhere: the “No Excuses Culture”. I’ve mentioned before that a “No Excuses Culture” is simply a “No Reality Culture”. I’m not sure a detachment from reality is the best characteristic we can hope for in our schools.

Ofsted also classes schools into one of four categories. This is, immediately, nonsense. One of my favourite anecdotes from a well-connected friend is of the time when the then HMCI (not Wilshaw) was presented with some fairly convincing evidence that there is greater difference in outcomes within schools than between schools (http://hepg.org/hel-home/issues/25_6/helarticle/behind-the-classroom-door_427). The HMCI simply refused to acknowledge the point. The concept so utterly undermined the entire nonsensical construction of “Outstanding, Good, RI, Inadequate” upon which Ofsted’s empire is built, that the presentation of evidence which challenged it was too much for the poor HMCI to take, their brain went into shutdown mode, and they could not even begin to discuss the possibility! Long silences ensued. Apparently it was really awkward, and people left the room speaking in hushed tones, concerned for the HMCI’s sanity. Yet it is inescapably true that in schools classed by Ofsted as “Outstanding” there will be teachers and subjects whose students appear to achieve worse results than in schools classed by Ofsted as “Inadequate”. But the now decades-old desire to call schools Good or Bad, to attach a binary label to a complex institution which cannot be sensibly labelled as such in all but the most extreme circumstances, is so strong that we cannot break it.
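The within-versus-between-schools point can be made concrete with a toy variance decomposition. All the figures below are invented for illustration (they are not from the linked research); the point is simply that when every school contains both strong and weak classes, the spread inside each school can dwarf the spread between school averages.

```python
from statistics import mean, pvariance

# Hypothetical illustration: invented pupil scores in three schools.
# Each school mixes strong and weak departments/classes.
schools = {
    "A": [70, 40, 65, 35, 60, 30],
    "B": [72, 42, 67, 37, 62, 32],
    "C": [68, 38, 63, 33, 58, 28],
}

# Between-school variance: variance of the three school means
between = pvariance([mean(scores) for scores in schools.values()])

# Within-school variance: average of the variances inside each school
within = mean(pvariance(scores) for scores in schools.values())

print(f"Between-school variance: {between:.1f}")
print(f"Within-school variance:  {within:.1f}")
# The differences inside each school dwarf the differences between them,
# so a single whole-school grade tells you very little about any one class.
```

On numbers like these, a four-way whole-school label is mostly noise: the school you pick matters far less than the classroom you land in.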

Finally, on his excellent blog, Jack Marwood has highlighted how Ofsted’s use of garbage “evidence” such as Raise Online has led to hugely negative consequences based on data of no greater value than pulling a numbered ball out of a tombola. http://icingonthecakeblog.weebly.com/blog/hammering-nails-in-raiseonlines-coffin

Ofsted, especially when led by a man to whom “Nuance” is an island in the Caribbean, institutionalises groundless assertion in the education system, and calls it evidence. In doing so, it causes untold damage not just to the schools which are battered by its asserted nonsense, but to the education system as a whole, because it reinforces in politicians, the media and even fellow citizens this simplistic refusal to understand education as the complex and human process that it is.

“Traditional” versus “Progressive” – a self-inflicted wound?

Much of the binary simplification which happens in our schools is driven by the national policy-making framework and Ofsted. So all those teachers currently suffocating under mountains of double- and triple-marking can thank the latest nonsense from Ofsted, who are pushing this (while occasionally claiming, on Twitter, that they are not). The current phase of marking, then getting students to comment on marking, then marking their comments on the original marking (and so on until we all disappear down the rabbit hole) has no more evidence behind it than Kinaesthetic Learning ever did. It’s just assertion, pretending to be evaluation; garbage science which causes huge downsides for no reliable upside. But what I want to focus on here is not something else Ofsted force us to do in their simplistic way, but what we are occasionally guilty of forcing upon ourselves.

I had a Twitter conversation with a fellow tweeter about the progressive v traditionalist debate. My fellow tweeter said she’d always assumed I was a “progressive” type, based on the fact, I think, that I’m very critical of current education policy (and policymakers), who usually tend towards the more “traditional” end of the spectrum.

But I think this whole debate – apologies to those who enjoy debating it – is actually just another over-simplified tendency to seek a false dichotomy of right and wrong methodology. I teach many lessons in a very old-school, didactic, teacher-from-the-front way. But I teach some lessons with groupwork, and posters, and student self-assessment and other “progressive”-style activities. My lessons with a difficult Year 9 class will be very strict, while my lessons with Year 13s are often extremely informal and relaxed. Essentially, I judge how best to approach a lesson based on all sorts of variables: age, numbers, goals, content, difficulty, original material or revision, ability range, time of day, my mood, their mood, September or December, available resources and so on. If I were to go in saying “I will not use groupwork because it is progressive, thus bad”, or “I must not talk for more than 5 minutes because that would be traditional, thus bad”, then that would be mad. Surely nobody really thinks that way?

Like all teachers trying to do their best, I exercise professional judgement about the range of inputs I could use to try to get the best results from the children in front of me. The whole progressive/traditional debate seems to me to be unhelpful in the same way that Ofsted’s and the DFE’s binary oversimplifications are. So I don’t really understand what the debate is about, because I don’t imagine that anyone genuinely believes that there’s never a place for didacticism and phonics, or that there’s never a place for groupwork and presentations. The art of teaching is to employ whichever methods seem most likely to achieve the outcomes you want for your students. Prescription is thus never helpful.

The students I teach tend to do ok. I accept Jack Marwood’s criticisms of educational statistics, and so I’m not going to sing the praises of my RaiseOnline stats, or ALPS thermometers and so on. Suffice it to say that most of my students seem to achieve what I and others think is towards the top end of their genuine potential, and most seem to enjoy themselves along the way. But I could no more isolate any one input and claim that as the main determinant of their results, than I could spread my wings and fly. I don’t know why or how it works. It just does.

Over time, I’ve developed a range of approaches which I can apply to a variety of situations. But even after ten years it’s not at all predictable: some lessons I expect to work, don’t, while others I think are deeply boring, the students love; this lesson works with some classes, but with others it doesn’t; some students are genuinely inspired by their history lessons, while others remain untouched for years; I’ve had some heroic individual successes which make me feel like Robin Williams in Dead Poets Society, but I’ve invested huge amounts of time in others to no apparent avail.

In that time, like all of us, I’ve been subject to all manner of policies which have changed. My school has changed from LEA-maintained to Academy, to part of an Umbrella Trust. I used to have to show different “learning styles” in lesson plans, but now have Personalized Learning Checklists. My CPD has come, then gone, then come again. The Government has changed from Labour to Tory, and Ofsted has gone through three HMCIs. When I started I was a manager, but now I’m a “leader”. Yet nothing, as far as I can see, has had any notable impact on me, my students, my department, or the results which emerge from us. That hasn’t stopped politicians and HMCIs from claiming that their latest policy is working through those results, however.

This is why I won’t blog on pedagogy, or take sides in a progressive v traditionalist debate. What works for me won’t necessarily work for different teachers, and anyway, I don’t really know what works for me most of the time. So I have no more right to claim that I have the answer to how to teach than Wilshaw does. And whenever Wilshaw, or Gove, or Morgan, says “this is how to do it”, then they are, instantly and without fail, wrong.

Conclusion

Life is complex. Individuals are complex. Their interactions are incredibly complex. A lesson with 30 students is not just 30 individual interactions with the teacher, but thousands of possible interactions amongst the students. And each school day is made up of hundreds of these lessons involving hundreds of teachers and thousands of students, all of whom, being human, are unpredictable and inconstant. Start multiplying those inputs, and then throw in factors such as leadership interventions, Ofsted diktats, funding issues, parental involvement, the weather, and external events in the lives of individuals and the community, and what every school comprises is a microcosm of all the incredible unpredictability and complexity of human society, with hundreds of inputs working on thousands of people to produce millions of interactions, any one of which may or may not have an impact on what a student learns on any given day. Put that way, it seems bizarre that anyone would be daft enough to come along and say “This school is Good, that one is Inadequate, and the difference is that the Good one is owned by an academy chain/teaches traditionally”. Yet they do. And it doesn’t serve anyone very well.
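The sheer scale of that complexity can be sketched with a bit of toy arithmetic (a back-of-the-envelope illustration, not a claim from any research): even before you add leadership, funding, weather or Ofsted, the possible interactions in a single lesson of thirty students explode combinatorially.

```python
from math import comb

# Hypothetical illustration: counting possible interactions in one lesson.
pupils = 30

teacher_links = pupils                 # direct teacher-pupil interactions
pupil_pairs = comb(pupils, 2)          # distinct pupil-pupil pairings
subgroups = 2 ** pupils - pupils - 1   # possible groupings of two or more pupils

print(teacher_links)   # 30
print(pupil_pairs)     # 435
print(subgroups)       # over a billion
```

Multiply that by hundreds of lessons a day and hundreds of school days, and the idea that one input can be isolated and credited with the results looks even dafter.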

So I guess this is a plea for the replacement of groundless assertion in education policy-making and debate with what the A-level history syllabus would recognise as evaluation – the nuanced avoidance of simplistic conclusions. But it’s also a plea for the return of an understanding of what education is: a series of complex human interactions which, along with unknowable numbers of other inputs, combine to create outcomes which are not necessarily either predictable or repeatable. Because if we accept that, then something magical happens: we have to accept that top-down policy-making and diktat are at best meaningless, and at worst damaging. Instead, we have to return the power of decision-making to individuals at classroom level. If we do that, then non-classroom voices – whether government, Ofsted, academics or school leaders – become legitimate sources of ideas, research, support networks and professional advice which can be used to inform the very individual professional practice which each lesson is. Which must surely be better than the current situation of non-bespoke, one-size-fits-all ticklists, orders and soundbites passed down from Great Smith Street and whichever planet Wilshaw is currently on, to be enforced irrespective of any variables – or indeed irrespective of the very reality of humanity and learning.

The tragedy of George Santayana’s oft-quoted statement about history is that it’s oft-quoted by those who then immediately go on to ignore it. One key lesson from history, it seems to me, is that we cannot be certain about most things. So why is so much educational policy, and so much of the debate, based around these false certainties of Outstanding versus Inadequate, Effective versus Ineffective, or “Progressive” versus “Traditional”? Perhaps there is no human policy area in which the unrealistic desire for simplistic, binary, pseudo-scientific certainty, has so overwhelmed the lesson of history, which is that humanity is, to put it bluntly, messy.

I have but one certainty about education, which you might call Disappointed Idealist’s First Law:

“Anyone in education who claims that there is a “best” way to do anything, is wrong.”

Not as catchy as Santayana, but I’ll work on it….


16 thoughts on “A plea for the return of complexity and nuance to education policy”

  1. A superb post – which gets to the very heart of the complexities of the educational process. As chair of governors of one secondary school and a coopted governor in another I have asked both heads to ensure that their respective SLT read and discuss this. Colin Richards HMI (retired)


  2. Fantastic post.

    Education is full of ‘bad science’ but sadly those in control of the education framework in this country seem to believe that everything can be simplified by using data.

    Madness.


    • I don’t think I’m his type. 🙂

      To be fair, if one mainly blogs on the damage done to the education system by top-down diktat and unjustifiable judgements, it’s rather hard NOT to mention Wilshaw !


  3. Hmmm. So when you want to attack one of your pet hates, Harris, Mossbourne etc, you pull the data off the DfE website to do so. But when anyone else tries to use data to evaluate schools, teachers or students, whether it be Raiseonline, Ofsted or SLTs, then it doesn’t count?


    • Bringing my earlier replies in, as they were originally posted anonymously because I was on my phone.

      I think your comment is a bit odd, for the following reasons:

      1) I don’t ever say that when anyone else uses data then “it doesn’t count”. I use data myself. I merely argue that data is never sufficient alone, and must always be contextualised. It also helps if it’s good quality data, which is the issue Jack Marwood’s blog considers in the context of Raise Online.

      2) The data I used on the posts about Mossbourne etc was precisely that – contextualisation. The results of those schools are often used by politicians to make inaccurate and simplistic statements about how their results are due to strict discipline, or the Free Schools policy, or some other single input flavour of the month. Yet the context of admissions, SEN, EAL and FSM numbers actually suggest that their results are broadly similar to other schools with similar admissions profiles. That contextual information also points, particularly in the case of Harris and WLFS, to a very disproportionate distribution of students with those characteristics compared to their local communities.

      So ultimately, your point doesn’t really make sense.


  4. Ah yes, the old ‘You disagree with me so you can’t have understood me’ line. The staple of KS3 debating societies everywhere. And you do talk about attainment data – you say in the Mossbourne blog “I was also astonished to find that my own school was significantly better at adding value to low ability students.” – but haven’t you just pointed out that the whole idea of comparing students from different schools and measuring how much ‘value’ has been added is deeply problematic?


    • I have indeed, which is precisely why I also quoted the following in the Mossbourne blog:

      “Of course, the fact that Mossbourne is comparatively advantaged in its local area doesn’t make it an easy school. Those are still highish figures for disadvantaged students, and it’s what Mossbourne does with those students which really matters.

      So what do Mossbourne’s locally advantaged, more-able cohort achieve?
      • I’m impressed with their Ebacc, at 57%. That’s very high compared to many flagship academy chains (see Sutton Trust report on how chains use equivalents and avoid academic GCSEs to boost statistics). Indeed, it’s high nationally.
      • Their capped GCSE score is 345, which is also commendable.
      • Their best 8 value-added score is 1034, which is a good score.

      All in all, I’d say Mossbourne’s results are pretty good considering their intake. They are undoubtedly achieving a lot with their students. Plaudits all round.”

      I don’t criticise Mossbourne the school at all – I praise it, in fact. I criticise politicians and others who use Mossbourne as a peg upon which to hang ridiculously simplistic arguments that Input X has caused good results, and if only every school did the same thing then all schools would get the same results.

      Like I said, you misunderstood the blog. Not being able to understand anything other than a very simple argument is also symptomatic of a KS3 debating society, but hopefully you’ll learn a lesson from this when you next attend yours.


  5. Thanks for this. I agree very much with the point about complexity. I am one of that terrible “blob”, the education academics, and I have been very concerned by the move, funded by the Education Endowment Foundation, towards a positivistic approach using the scientific method and RCTs (Randomised Controlled Trials) in education, as these tend towards the intervention-and-control-group method, which assumes a causal link and negates the varied complexity of school and social situations.

    I work a lot with training and post-training teachers and one of my ongoing mantras is “if it was simple to ‘fix’ education, do you not think it would have been done by now?” – which I think chimes with your own mantra.

    Keep blogging.

    Paul


  6. […] Continuing to be my favourite and wanna-meet-blogger, @DisIdealist writes, Job Adverts : How to Lose Applicants and Influence People and dissects a job-advert from the TES; worded far too well in today’s Gove & Wilshaw-created education system. He then goes on to share his own first law of education: “Anyone in education who claims that there is a “best” way to do anything, is wrong.” Read A Plea for the Return of Complexity and Nuance to Education Policy. […]

