The teachers’ knowledge that has no name

Why is it so much harder than it looks to write a good knowledge organiser, or any other document that identifies the core knowledge pupils need? It is just as hard to identify the right knowledge for quizzing. So often we don’t assess formatively – not because we don’t know we should – but because we lack clarity about what we should assess.

To explore why this is all so much harder than it looks, last week I asked our Ark curriculum and teaching and learning leads this question:

The first option is Albert Einstein. I’ve chosen him because he needs no introduction.

The second person pictured is Jaime Escalante. He was a remarkable maths teacher who worked in Los Angeles. He became famous for his incredible success in teaching advanced mathematics to inner-city students from disadvantaged backgrounds. His story is so impressive that a film was made about it in 1988 – called “Stand and Deliver”. Amazing.

So, which of these two pictured would be more likely to write a good year 6 or 7 maths knowledge organiser (should you want such a thing)? Einstein had huge expertise in maths, and teachers definitely do need, at the very least, the expertise they aim to give their pupils over time. But that doesn’t mean Einstein knew what you need to teach a kid, step by step, to build towards that desired level of final expertise. A degree in the subject we teach doesn’t give us that particular knowledge. We’d definitely choose Jaime to write maths knowledge organisers.

Jaime Escalante also had general teaching skills – but those wouldn’t help much with writing the maths knowledge organiser. Say we add Ruth Miskin to our mug shot selection. She’s the creator of the Read Write Inc phonics programme. Like Jaime, she must be a skilled teacher, but she has the wrong curriculum knowledge for maths. She is an expert in reading. For the maths knowledge organisers we’d still want Jaime! He’d have had a very strong mental map of progression in maths. I bet that at any teaching moment he’d know just what those kids needed to learn next.

From this little exercise we can see that the subject knowledge an expert holds is not the same as a teacher’s knowledge of the curriculum pupils need to learn, step by step, over time, to reach the desired level of expertise in the subject. Teachers need to know the steps, in subject content knowledge, that take students from novice to expert.

And even once a strong, fully resourced curriculum is in place, the decisions teachers must make in each lesson still need to be informed by that knowledge of subject progression. I love the way Michael Fordham explains how this applies to the day-to-day practices of teachers, which are often called ‘teaching skills’:

  • lesson planning
  • questioning
  • giving an explanation
  • giving feedback
  • marking

This is because the teacher’s lesson activities, e.g. formative assessment, involve a comparison between the mental map of the subject the teacher has in their head and the mental map students have in theirs. In making a formative assessment, a teacher is trying to diagnose where the gaps in… [the students’] mental map are, so that he or she can then do something about it. To make a judgement call on something as complex as this requires that the teacher’s mental map is very strong… The richer that map of the subject, the more accurately… [the teacher] will be able to identify where the pupil has weaknesses [1].

To reiterate, skilled teachers need to be sufficiently expert in the subject they are teaching, but they also need to be expert in the step-by-step progression of knowledge that builds towards the desired level of pupil expertise. This explains why there can be real variation in the quality of teaching even when teachers deliver the same or a very similar curriculum.


[1] Subject knowledge and mentoring: my talk at Teach First – Clio et cetera


Isn’t it odd that such critical teacher knowledge, so crucial to classroom success, has no name? What do we call this teacher knowledge? Some of you may have heard of Shulman’s term, ‘Pedagogical Content Knowledge’. ‘PCK’ overlaps with this unnamed teacher knowledge, but ‘PCK’ has been given so many different definitions over time that it might just be confusing to use that term. Michael Fordham talked about this knowledge as a ‘mental map’. Daisy Christodoulou, discussing assessment, talks about a ‘progression model’. With my team of Ark subject Network Leads we discussed a possible name and a definition. My team suggested the term ‘curriculum journey knowledge’, which is pretty good. PLEASE DO suggest your thoughts on what we should call this knowledge.

Finally, I’ve asked myself how teachers today can acquire this ‘curriculum journey’ knowledge in the subjects they teach. Perhaps we have to scramble back to our own memories of learning the subject at school. ‘Curriculum journey knowledge’ can also be inferred from using great curriculum materials or textbooks. And, as experienced teachers, we probably once learned from the subject expertise of our older and wiser colleagues. But what happens when those colleagues aren’t there?

Our workforce is relatively young and inexperienced. HOW are younger, less experienced or non-specialist teachers to learn crucial ‘curriculum journey knowledge’ in the subjects they teach? It won’t come from a subject degree. It won’t come from teacher training courses that focus on generic teaching strategies (as useful as those are). It won’t come from generic training in teaching and learning. And yet, as Fordham explained so eloquently, this mental map or ‘curriculum journey knowledge’ is absolutely crucial for success in all teaching activities.

At Ark this is the reason we place so much importance on co-planning. Perhaps more on this in future blogs!

Knowledge matters

I can be prone to hyperbole in my informal chat. So when I got an explosion of angry replies to this tweet of mine on Thursday, my first instinct was to assume I’d been too outspoken.

Some people leapt, without seeking clarification, to call me a liar for claiming that ten years ago ‘knowledge’ was an unacceptable word to use in schools.

The problem was that what I said – and, I’m pretty sure, any reasonable interpretation of my words – was true.

Ten years ago, if in most school staff training you stated something like, say, ‘It is crucial that pupils learn knowledge through their schooling’, this would have triggered a negative response – commonly distaste, disdain or denigration of knowledge. This wasn’t just a possible reaction; it was the uniformly predictable reaction. That likely response was what I’d meant by my assertion that the word ‘knowledge’ was unacceptable.

(If you disagree and think the situation I describe isn’t a reasonable interpretation of my words, then please believe my sincerity and hopefully it’s clear to you from this post what I did mean.)

If you had talked about knowledge as ‘crucial’ at that time, perhaps you would be told that knowledge was needed but was ‘low level’ or ‘rote’. Perhaps you would be told mere memorisation was not important. Quite likely it would be explained to you that skills mattered more because they were useful in life and knowledge could always be googled. Perhaps ‘understanding’ was championed as something separate from and superior to knowledge (implying that knowledge could only be understood as ‘rote’ or similar). The below was not in the least unusual for that time:

That’s not to say that the word knowledge didn’t appear in documentation. When defined for examination, knowledge was definitely viewed as necessary – unavoidably. Fair to say, though, that even in documentation ‘knowledge’ was often replaced with vaguer terms like ‘the learning’ (making ‘learning’ a noun in this way was endemic at that time), or actions like ‘explore’ or ‘describe’ were used with any knowledge left vague and ill-defined. For a survey of evidence demonstrating how knowledge was understood about 10 years ago, I’d recommend reading this.

In 2019 I was tasked with writing national training for inspectors which explained why knowledge mattered. The kind of response that ‘K’ word typically got, when used in educational conversation, is etched in my mind. That’s because I was going to have my work cut out to make this training ‘land’. That experience, working at national scale, gives me a clear mental marker of that moment in time and the way the word was understood in English education.

Even so, I was sorely tempted yesterday to retreat into clarification – and would have, if it hadn’t been for the insistent memory. I KNEW the responses that the word ‘knowledge’ used to typically trigger. I knew because my whole job, at one time, had revolved around needing to know. Additionally, I knew that the most vociferous tweets of support for knowledge (and against my assertion) were from people I clearly remember, over the years, as being the most anti-knowledge in twitter debates.

But how to square this claim of enduring support for ‘knowledge’ from those whom I’d observed being the quickest to dismiss its importance? At such times I like using the insight from the story of King Solomon’s Judgment. You may be familiar with the story:

Two mothers from the same household came to the king, both claiming that a baby was theirs and that the other mother’s baby had been accidentally smothered. Solomon called for a sword and proposed to cut the baby in half to resolve the conflict over whose baby it was. At this point the real mother agreed to relinquish the baby, preferring it to live, even if it was no longer her own. So far, so sensible. It was the other woman’s attitude that never made sense to me. Her jealousy meant she preferred to prevent the real mother keeping the child, even if that led to the death of the baby, and so she agreed to Solomon’s proposal. But how could anyone agree to the destruction of something as precious as a baby? Surely the woman must have been aware that such a disregard for the child’s well-being would give her away?

Most of those involved in yesterday’s twitter spat claimed to have always valued knowledge. So the value placed on ‘knowledge’ is my metaphorical baby – both sides claim to love it, but how will they respond to Solomon’s test of their commitment? Is a disdain for knowledge hiding in plain sight? This is the original tweet from yesterday.

Hmmm. If knowledge matters, why wouldn’t you want to ensure knowledge is learned? Why did none of my interlocutors want to challenge the implication here about the value of retrieval practice (let alone correct the assumption that retrieval practice was ‘a thing’ 10 years ago)? Why did this assertion get a free pass while mine was hammered? Maybe that is because many in my timeline actually disdained or denigrated knowledge – thought of it, erroneously, as ‘low level’, ‘rote‘ etc. (follow the link for an explanation of the problem with this). There’s actually plenty in my timeline that would bear out that theory.

What is really interesting is WHY the word ‘knowledge’ triggers such contradictory assertions from the same people. I’ve explored that in my blogs. In my experience many of the ways knowledge is understood are confused and just not in line with what we know about cognition. For example, skills and knowledge are NOT ‘both important’. We know they are actually not either/or choices. It would be like saying that the ingredients and the cake are ‘both important’ – just nonsensical.

And there is definitely a reawakening of those muddled old ideas in the current discussion around the government’s curriculum and assessment review. This reawakening of bad old ideas makes it important that we don’t forget what we have previously rejected, and why.

Anyway, many people recognised my description of the past and posted loads of literature from the time which evidently denigrated or devalued knowledge. I’ve copied bits below and will add more – because it is important we remember how far we’ve come.

What is most ironic is that these ideas are still rife in international education and even in Scotland and Wales. See mention in this blog:

Break it down! Getting subject progression right in the new national curriculum

Arguably, the curriculum revolution in English schools over the last 10 years has been driven by three related insights.

  1. That knowledge matters. That what we encounter in the world, what we see, hear or read, can only be understood using what we already know. Comprehension, in its broadest sense, is dependent on prior knowledge.
  2. Given that we can only make sense of new learning through the lens of what we already know, we must plan each subject curriculum step by step from simple to complex. Each small knowledge step expands the child’s ‘schema’ (web of interconnected knowledge) towards expertise.
  3. If learning stuff matters, we need to ensure pupils really DO remember what they have been taught.

Schema theory underpins the knowledge revolution that has swept through English schools. Knowledge is not held in the mind as a list of random disconnected facts but rather as an interconnected web.

Considering expertise as an interconnected web of knowledge challenges previous assumptions evident in prior iterations of our national curriculum (and there are plenty of ways the same assumptions snuck into the 2010 national curriculum too). We can recognise this ‘outdated’ curriculum thinking when we see curricular materials that dictate activities that ape the behaviours of experts – specified, presumably, to inculcate vaguely defined features of expertise, often labelled as skills (explained in my previous blog):

Do lots of observing because that is what expert scientists do!

Do lots of guessing stuff from sources because that is what expert historians do!

Do lots of reasoning – to solve problems fluently. That is what expert mathematicians do!

Do lots of reading ‘real books’ (full of unrecognisable words) because that is what fluent readers do!

These days teachers in England know better. They know the above are outcomes, and that they are not necessarily nurtured by imitating the activities of experts. A curriculum needs to break down subject content into small steps, or ‘components’. So, now we know, no worries for the new national curriculum then!

There are two problems.

Problem 1: I’ve already talked about this in previous blogs: I’m not convinced thinking at the subject level in each subject has always got to the stage of acknowledging the need to identify component knowledge. Take a look at D&T in the 2010 national curriculum to see what I mean. Take a look at maths. Whilst maths, in the 2010 national curriculum, does certainly identify components, the aims of the national curriculum (to reason and problem solve) are described as if they are free-floating skills developed through mimicry of what expert mathematicians do.

If the subject writers of the new national curriculum simply trot out old ‘outdated’ thinking on subject progression – developed before our English knowledge revolution swept through schools – then teachers will be led astray, trying to teach content which is not the right component, at the right time, to ensure pupils progress towards expertise.

Problem 2: What are these smaller chunks? And what do they lead to?

For example, in languages, curriculums have long been organised around topics, even though these are really just convenient ways of organising vocabulary, not an outline of progression in languages from simple to complex. Or languages are organised as if progress happens in modes of expression: listening, reading, writing and speaking. It has taken the Teaching Schools Council’s Modern Foreign Languages Pedagogy Review to correct these really unhelpful ways of thinking about progression in language learning. Pupils need to make progress in learning the structures of the language – phonics, vocabulary and grammar. And when this is clear, the component chunks (from simple to complex) are more easily identifiable.

Another example: the aims of the 2010 national curriculum for music are undeniably desirable outcomes. The NC aims for pupils who can perform, create and compose, or understand and explore music. Marvellous, but WHAT do those pupils need to learn to be able to do all this? Joyfully inviting kids to explore their inner musicality with a microphone, a spare oboe, violin and drums is hardly the route to expertise.

Without clarity around the nature of progression in a subject, and the components which need to build over time, teaching will not lead to growing subject expertise for pupils.

When I led Ofsted’s curriculum unit, we were faced with the question of how to help inspectors recognise high quality curriculum thinking in each subject. This was essential for reliable and valid inspection. The National Curriculum often specified only high-level aims with clarity, and sometimes subject communities had not moved on from outdated expert-mimicry curriculum thinking, which had bled into the subject content sections even of the 2010 National Curriculum.

It helps to identify useful, research-informed ways of thinking about progression in each subject to meet the aims set out in the National Curriculum. We found it most useful to identify ‘forms‘ or ‘pillars‘ of progression in each subject and, for transparency, we shared that research-informed thinking in the Research Reviews Ofsted published. We know schools found this thinking very useful.

Forms or ‘pillars’ of progression in each subject

Geography: In some subjects the 2010 National Curriculum already provides useful ways of thinking about progression. In geography, the ‘pillars’ of locational knowledge, place knowledge, human and physical features, and geographical skills/fieldwork are very useful headings for thinking about progression and what component knowledge might be. There is some geography curriculum literature that seems rather arbitrarily to call some content ‘knowledge’ and other content ‘understanding’, but this is not a big obstacle to using these useful headings for thinking about progression.

Music: In the case of music, the National Curriculum does not really provide useful ways of thinking about progression towards the stated aims. It is still common for music curriculums in schools to have only vague outlines of the specific component knowledge pupils need to learn to reach expertise. After exploring the literature on progression in music, the Research Review outlined why it was most useful to think about progression in technical, constructive and expressive forms. With these headings it is possible to think about how the music curriculum could be broken down into components for building expertise.

Physical Education: PE is interesting because the stated aims of the national curriculum are not fully shared by some of the subject community. The National Curriculum states that pupils will ‘develop competence to excel in a broad range of physical activities’. At KS3 that should be ‘from direct competition through team and individual games (for example badminton, basketball, cricket, football, hockey, netball, rounders, rugby and tennis)’. However, parts of the subject community have an aversion to ‘sport’ as a term, and really to excelling in sport being a goal at all. There is an emphasis in existing curriculum thinking on apparent ‘transferable skills’ like teamwork. But a curriculum for ‘teamwork’ (with physical activity as merely a vehicle for delivering that goal) will look really different from a curriculum designed to help pupils gain competence and excellence in playing particular sports or other activities. [By the by, there is a similar situation in Computing.]

After exploring the literature on progression (for achieving the aims as set out in the National Curriculum for physical education), the Research Review uses the ‘pillars’ of motor competence; rules, strategies and tactics; and healthy participation. With these headings it is possible to think about how the PE curriculum could be broken down into components for building expertise in different sports or activities.

Getting subject progression right really matters

For all the general discussion on a future National Curriculum, the vast bulk of the content will be decided by subject writers supervised by civil servants who will probably be unaware of the implications of the way subject aims and content are considered. Yet a compulsory National Curriculum will have a huge impact on the content and sequencing of what is taught to pupils in schools.

There isn’t one perfect way of organising thinking about progression in a subject, but there are many unhelpful approaches. Will the substance of the new National Curriculum, subject by subject, help or hinder teachers as they break down their subject for teaching lesson by lesson? Is the story of educational developments in any country one of linear progress – ever improving? Or will the subject content of the new National Curriculum send us back to the ‘Dark Ages’?

What makes marking unfair? Look harder at validity

GCSE teachers in English and humanities are frustrated about the quality of marking. Sure, it is harder to ensure examiners reach similar judgements in these subjects. But four fifths of the time requested re-marks don’t lead to changes. Teachers know the marks don’t represent what they know about their students or their scripts (when these are requested). What is going on?

Below is a shortened version of a blog I first published in 2015, explaining that the problem is probably not with the reliability of the marking. The problem is probably with the validity of the assessment – its fitness for purpose…


What would happen if I, as a history graduate, set out to write a mark scheme for a physics GCSE question? I dropped physics after year 9 but I think it is possible I could devise some instructions to markers that would ensure they all came up with the same marks for a given answer. In other words my mark scheme could deliver a RELIABLE outcome. However, what would my enormously experienced physics teacher husband think of my mark scheme? I think he’d die from laughing so hard:

“Heather, why on earth should they automatically get 2 marks just because they mentioned the sun? You’ve allowed full marks for students using the word gravity…”

After all, I haven’t a notion how to discriminate effectively between different levels of understanding of physics concepts. My mark scheme might be reliable but it would not deliver a valid judgement.

Ofqual has done research into the reliability of exam marking. Their research and that of Cambridge Assessment suggest marking is more RELIABLE than has been assumed by teachers. This might surprise teachers when results day feels like a lottery.

I don’t tend to quibble when the results day lottery goes our way but I can admit that it is part of the same problem. Marking of subjects such as history and politics will always be less reliable than in maths and we must remember it is the overall A level score (not the swings between individual paper results) that needs to be reliable. But… even so… there seems to be enormous volatility in our exam system. The following are seen in my department every year:

  • Papers where the results have a very surprising (wrong) rank order: weak students score high As while numerous students who have only ever written informed, insightful and intelligent prose get D grades.
  • Students with massive swings in grades between papers (e.g. B on one and E on the other), despite both papers being taught by the same teacher and making the same general demands.
  • Exam scripts where it is unclear to the teacher why a re-mark didn’t lead to a significant change in the result for a candidate.
  • Quite noticeable differences in the degree of volatility in results over the years, depending on paper, subject (history or politics in my case) and even exam board.
Cambridge Assessment looked into this volatility and suggested that different markers ARE coming up with similar enough marks for the same scripts – marking is reliable enough. However, the report writers then assume that all other variation must be at school/student level. There is no doubt that there are a multitude of school and student level factors that might explain volatility in results, such as different teachers covering a course, variations in teaching focus or simply a student having a bad day. But why was no thought given to whether a lack of validity explains volatility in exam results?

For example, I have noticed a trend in our own results at GCSE and A level. The papers with quite flexible mark schemes, relying more on marker expertise, deliver more consistent outcomes closer to our own expectations of the students. It looks like attempts to make our politics A level papers more reliable have simply narrowed the range of possible responses that get rewarded, limiting the ability of the assessment to discriminate effectively between student responses. Teachers know there is a problem but perhaps overemphasise the impact of inexperienced markers.

The mounting pressure on exam boards from schools has driven them to make their marking ever more reliable, but this actually leads to increases in unexpected grade variation and produces greater injustices as the assessment becomes worse at discriminating between candidates. This process is exacerbated by the loss of face-to-face standardisation meetings, leaving markers ever more dependent on the mark scheme in front of them to guide their decision making.

Perhaps exam boards need to stop doubling down on the reliability of their systems and start thinking about the validity of their assessment. Is their exam a good test of understanding of the subject, or is it testing something else? Is the mark scheme identifying a meaningful progression in subject understanding, or introducing arbitrary hoops students need to be trained to jump through? What happens when, to support reliability, examiners are only allowed to reward a narrow range of responses?

The drive for reliability can too often be directly at the expense of validity.

It is a dangerously faulty assumption that if marking is reliable then valid inferences can be drawn from the results. We know that for some time the education establishment has been rather blasé about the validity of its assessments.

  • For years we accepted data showing our country’s school children marching fairly uniformly up through National Curriculum levels, even though we know learning is not actually linear or uniform. It seems that whatever the levels were presumed to measure, they were not giving a valid snapshot of progress.
  • I’ve written about how history GCSE mark schemes assume a spurious progression in (non-existent) generic analytical skills.
  • Too often levels-of-response mark schemes are devised by individuals with little consideration of validity.
  • Dylan Wiliam points out that reliable assessment of problem solving often requires extensive rubrics which must define a ‘correct method’ of solving the problem.
  • EYFS assesses progress in characteristics such as resilience, when we don’t even know if it can be taught, and critical thinking and creativity, when these are not constructs that can be generically assessed.


My experience at A level is just one indication of this bigger problem: a lack of validity explaining what looks like unfair marking.

The National Curriculum: it’s not just a list of topics

It is usual for pressure groups to campaign for the inclusion of new items onto the national curriculum. Aside from the fact that no curriculum can include all content suggested for inclusion, these campaigns smack of naivety. Anyone who’s been working in English education for a while knows that the National Curriculum is perhaps more guidelines than actual rules. Prescribed topics do get mentioned to pupils but what it means to teach those areas is open to wildly different interpretations.

Perhaps that is about to change. The new Labour government say they are going to make the National Curriculum compulsory for all schools. It may be that, alongside changes to the specified content to be taught, there will be real checks and consequences for non-compliance by schools. Will all the Jack Sparrows, busy doing their own thing, get a bit of a shock?

I don’t think there is widespread understanding that this enforcement of the National Curriculum on all schools could have much more far-reaching consequences than just mandating some changes to the list of topics to be covered in each subject. To elaborate on my last blog, I’m going to discuss why there might be much more specification of pedagogy than currently. If the National Curriculum becomes compulsory for all schools, and that is enforced, there could be a new era of enforced, compulsory teaching approaches – and it could happen even if that wasn’t the intention.

Our latest (2010) National Curriculum sometimes does specify ‘pedagogy’, i.e. the way something should be taught, and not just the content to be covered. For example, I mentioned in my last blog how the current science content prescribes ‘verbs’ – activities associated with science – as the way certain content should be learned (see italics below from the current NC).

Pupils should be taught to:

  • notice that animals, including humans, have offspring which grow into adults
  • find out about and describe the basic needs of animals, including humans, for survival (water, food and air)
  • describe the importance for humans of exercise, eating the right amounts of different types of food, and hygiene

Learning about offspring must happen through ‘noticing’. The basic needs of animals must be ‘found out’ and ‘described’. Sure, professional scientists find things out and are attuned to ‘notice’ features of phenomena related to their study. But these children are 5 or 6 years old and the clear implication is towards ‘inquiry learning’ as a pedagogy – mimicking the activities of experts to find out basic knowledge (a great critique of this approach here). [That is aside from the illogic of implying children need to be able to ‘describe’ some things they learn, but not describe other things. The random illogic of this really hurts my head. Presumably, the inconsistent need to ‘describe’ is because everything must be learned through ‘doing’ and so some verb just must be found to accompany all knowledge specified.]

It might appear picky to say that the instruction in the National Curriculum that children should be taught to ‘observe changes across the four seasons’ is not the same as the National Curriculum listing knowledge of the four seasons children need to know. It isn’t picky in practice, because the instruction for an ‘observing’ activity choice really does often lead to pupils doing ‘observing’ without necessarily learning what was intended.

The Ofsted subject report for science found that pupils in primary schools were much more likely to take part in hands-on practical activities than pupils in secondary school. All those verbs in statutory requirements across the National Curriculum dictate what children should do (notice, describe, identify, explore) and thus specify activities. However, the knowledge to be learned from the activities was not always clear enough, or even likely to be understood or remembered by children. For example, in Reception, the Ofsted subject report outlines how sometimes, “leaders focused on activities or general topic areas such as ‘changing seasons’ or ‘floating and sinking’, without identifying what they wanted pupils to learn and why”.

The last iteration of the National Curriculum did do much to move away from “a highly doctrinaire view of teaching delivery”, which characterised the prior 2007 version. However, plenty of ‘inquiry’ pedagogy snuck through in 2010, because so much of the National Curriculum depends on the quality of what is written at the subject level. Subject content writing might easily be in the hands of those who disagree with the overall philosophy underpinning the National Curriculum. [In another blog I might find space to look at how those contradictory directions are identifiable in the current maths programme of study.]

We could take for granted our freedom to choose the best teaching pedagogies for the content. However, we used to have much less freedom, and the national curriculums for Wales and Scotland specify pedagogies over content. Will we be going back to that approach with the English National Curriculum? That would be very worrying given just how much more successful England’s curriculum has been than Scotland’s over the last decade.

The Scottish Curriculum for Excellence is dominated, across all subjects, by ‘experiences’ planned through clearly mandated activity types. The actual content to be learned, by comparison, is barely specified except in the most general terms. Despite highly detailed curriculum documentation, the actual content pupils could learn has to be inferred, e.g. see the below extract from the Scottish ‘Curriculum for Excellence’. You can see that activity types are clear, and there are also literally pages and pages of elaboration on them before you hit these content tables, which finally, sort of, mention the content to be learned.

We do know that direct instruction can be much more effective for novice learners, and so all this ‘learning through doing’ might not be as effective as other approaches. Worries go beyond the possible increased mandating of ineffective teaching strategies. As Oates outlines in his seminal 2010 paper, vague content makes effective assessment harder:

…the 2007 revisions resulted in such vague statements of content that valid testing – fair and clear to learners, teachers and parents – was severely compromised. In testing, a clear notion of ‘the construct’ – what it is that is actually being tested – is critical (Wood R 1993; Cambridge Assessment op cit; APA op cit).

Will a full-throttle return to forms of inquiry learning (like that enforced in Scotland) be mandated in the new English National Curriculum? It’s not enough to assume the Secretary of State doesn’t have that intention. Will those civil servants supervising the subject writers of the new National Curriculum even be aware, or understand, if inquiry learning approaches shape the organisation of subject content and flood the new National Curriculum?

The 2010 National Curriculum was shaped by Tim Oates’ paper, which rejected the goals of the previous 2007 curriculum and instead ensured that in the new 2010 specification:

The principal motor for driving revision of subjects in the National Curriculum should… be change in the structure and content of knowledge (Lawton D 1980; Hirst P 1974; Hirst P 1975; Jenkins E 2007).

I very much recommend Oates’ 2010 paper for anyone interested in the shaping of the new English National Curriculum.

Sharp-eyed readers will see that the quote above doesn’t just mention ‘content’ but also the ‘structure’ of knowledge. This is another way in which a National Curriculum specifies more than just a list of topics. Much more on that in my next blog.

The New National Curriculum – will intentions be ‘lost in translation’?

It’s good to see a broad panel of experts being consulted on the new National Curriculum. However, no matter what the intentions of that group, it will be subject specialists who will ‘translate’ those intentions into subject specific aims and write the body of the new national curriculum, subject by subject. Those subject experts will identify compulsory content based on their view of subject progression. Despite much great work across subject communities, I fear that broadly good intentions could be lost in the translation to a subject context.

First and foremost, will the appointed subject experts take account of what we now know about human cognition? In England, over the last decade or so, there has been a revolution in favour of evidence-based pedagogy. The gap has been bridged between insights from cognitive psychology, educational psychology and other related fields, and the application of these insights to actual teaching practice. However, over this period, the orthodoxies and mantras which underpin curriculum planning (as opposed to pedagogical practices) in many subjects have not always been reconsidered in the light of these insights. A curriculum is not just a list of topics but needs to outline a cumulative progression in complexity over time within the study of a discipline. However, some thinking about curriculum progression in some subjects has remained unchanged, preserved in aspic, from what feels like an earlier age.

I’ll explain what I mean through some subject-specific examples. One clear insight regarding human cognition that can inform subject curriculum thinking is the domain specificity of skill. It is now well known in the English education community that skilful capacity to perform in one domain, e.g. to analyse in history, has very limited transfer to another domain such as maths. Additionally, knowledge of how to solve some problem types in maths (beyond some helpful heuristics) has very limited transfer to dissimilar problem types. That is because the capacity to perform skilfully is dependent on very domain (or topic) specific knowledge of the sort experts in that domain might know.

Nevertheless, our current National Curriculum, and some discourse in mathematics education, sometimes suggests or implies that ‘reasoning’ mathematically and ‘problem solving’ are generic skills, rather than capacities dependent on very specific expertise. Similarly, it is still quite usual to hear of reading comprehension treated as a generic skill to be developed through practising certain sorts of broad activity types, rather than primarily expanded through broad knowledge of the text’s context. Will the subject experts, appointed to write each subject area of the national curriculum, consider the implications of our new and better understanding of the domain specificity of skill?

Another related field of research that needs to inform our understanding of curriculum progression is research on the difference between novices and experts. We now know that there are real differences ‘…in the mental models used by experts and the intuitive or naïve mental models of novice learners in a domain’. As Kirschner argues, just because experts use experimentation and discovery to identify new truths does not mean ‘experimentation and discovery should also be the basis for curriculum organisation’ for children who are novice learners. Despite this insight, across subjects there is still a tendency for curriculum documentation to describe expert-type activities, e.g. to ‘understand’, ‘analyse’, ‘evaluate’, rather than identify the component content pupils need to learn to gain the capacities of experts and so undertake such activities productively.

  • In science, there is common confusion between learning through enquiry (a pedagogy whose efficacy should be scrutinised anyway) and developing knowledge of scientific enquiry. In some literature, the goal of ‘working scientifically’ still tends to be defined through verb-driven attainment targets that are overly general, e.g. ‘observing closely’. Such goals do not support the teacher to identify the concepts and procedures that pupils will need to learn to gain an appreciation of what, for example, the learner could productively attend to when they ‘observe closely’. That is the knowledge of experts that makes their ‘attending’ productive.
  • In history, so many hours are still spent asking pupils, who have no contextual knowledge, to guess at insights from sources. This is justified by the presumption that this is what historians do. Mimicry of this apparent expert activity, it is assumed, will make children into historians.
  • In music, what specific, component knowledge needs to be learned, step by step, to meaningfully ‘compose and improvise’?
  • In PE, exactly what does taking part in a hockey match lead to progression in? What makes us sure of those assumptions about progression?
  • In D&T, what specific knowledge should pupils learn over time to gain some general capacity to ‘design’, ‘make’ and ‘evaluate’? What is the knowledge of an expert graphic designer, or of designers in other domains? How could that be broken down and taught?

It is a challenge to overhaul perhaps comfortable presumptions about progression in each subject. But it is the pupils who will benefit from such work, and so it is worth it. Is it a challenge the writers of the new national curriculum subject content will want to rise to? I do hope so.

Over the next weeks, I hope to share more about my thoughts on the challenges facing the writers of the national curriculum.

The education horseshoe

Legend has it that in the olden days, back in the mists of time, teachers lectured at pupils. The curriculum under this reign of torpor consisted of disconnected facts, and children generally could not understand the details drummed into their poor heads. This approach is personified in the Dickens character Thomas Gradgrind.

Gradgrind: Now, what I want is, Facts. Teach these boys and girls nothing but Facts.

“[Gradgrind’s children] had been lectured at, from their tenderest years; coursed, like little hares. Almost as soon as they could run alone, they had been made to run to the lecture-room.”

The story goes that this changed when everyone realised that lecturing was no good. Instead children needed to engage in meaningful, authentic and imaginative activities. The curriculum should focus on developing useful skills that would blossom when kids do the right sort of activities to make meaning, like group work and discovery learning.

But notice something crucial. Many of the new, enlightened generation still had the same view of knowledge – that it was isolated factoids of info that could fill the bucket of the mind – but these latter people thought that was a bad idea, whereas the other lot (parodied as Gradgrindian) apparently thought filling the bucket of the mind with factoids was good.

‘Knowing the Kings and Queens of England…are not top priority life skills. Their claim for inclusion in the core curriculum rests on the extent to which they provide engaging and effective ‘exercise machines’ for stretching and strengthening generalizable habits of mind.’

Claxton and Lucas

Arguably, the latter ‘progressive’ movement’s ideas (which I’ve rather simplified in my description above) dominated educationalists’ thinking about effective teaching through the 20th century. In England, though, that all changed a decade or so ago. This was because some teachers found out that whilst most educationalists had been doubling down on progressive notions of skill development, through apparently more meaningful authentic activity, other fields of research had been amassing extensive evidence pointing another way. These findings demonstrated, rather convincingly, that forms of direct instruction were generally much more effective than pupil-led pedagogies. Teachers learnt from fields such as cognitive psychology that knowledge was important after all, because it was needed to develop skill. In England, ‘traditionalism’ as an educational movement gained momentum.

I’d argue that these new insights have led to improvement in teaching in England. The problem is that parts of all these movements made one fatal error, and it is one that is very easy to slip into. What was it?

They focused their energies on changing the teaching activity or pedagogy. The pedagogy became their goal and the means through which they judged whether teaching was now effective (whether that was lecture, discovery learning or direct instruction). The focus in each case was on the form of teaching over the substance taught. In this way they closely resemble each other. And in practice, they all relegate knowledge (whether valued or not) to disembodied ‘stuff’, or ‘the learning’, to be taught through the appropriately sanctioned activity.

It is not a perfect analogy but when I think about this issue I’m repeatedly reminded of the horseshoe model in politics. Wikipedia says:

The horseshoe theory asserts that the far-left and the far-right, rather than being at opposite and opposing ends of a linear continuum of the political spectrum, closely resemble each other, analogous to the way that the opposite ends of a horseshoe are close together. 

In the case of the progressive and traditionalist movements there is a tendency within each to put the focus on correct pedagogy, whether that be enquiry learning or Rosenshine’s (indeed, very sensible) principles. In so doing, the curriculum or ‘what’ question becomes subservient.

In both cases the nature of the activity choice is at the forefront of thinking about quality education. In both cases knowledge is relegated to ‘stuff’ to be learned. It is understandable. Management of quality of education in school is done by non-specialists and activity choices are most amenable to management across subjects.

However, we can only make sense of what we see, hear and read in the world using what we already know. Knowledge is not and cannot be held in the mind as disparate, interchangeable facts. The analogy of a web, or schema, for how the brain structures knowledge is more helpful.

Gradgrind, with his lectures, was wrong (amongst other reasons) because he presented disparate information to pupils regardless of the fact that it was probably meaningless to the pupils concerned. Such lecturing takes no account of what pupils might make of what you say and whether it can be understood, or even digested, in one go. However, the solution was not, first and foremost, to change the pedagogy choice – that is a symptom of Gradgrind’s disease. Rather, Gradgrind needed to change his perception of what knowledge is and how it is learned. From that realisation could flow an appreciation of what might be an appropriate activity choice to ensure pupils can learn what they have the capacity to understand next, given what they already know.

Think about it. If Gradgrind had been required to use discovery learning would his plan for group-work, instead of lecture, mean he suddenly began thinking about whether or how pupils might make sense of new content he introduced? I’m picturing Gradgrind in a school today. A senior leader feeds back on a lesson observation. “Now, er, Tom, don’t you think you could make the lesson more engaging and get in some deep thinking by breaking it up a bit? I’d like to see you get the students to talk in pairs.” Does pedagogical advice of this sort change Gradgrind’s perception of what knowledge is and how it is learned? Would the teacher of today, having switched from group work to regular quizzing, also necessarily appreciate whether what they taught was being understood as they assumed?

Whether lecturing, using inquiry learning or direct instruction, it is quite possible for a teacher to continue teaching without appreciating that their endeavour should not revolve around using the right activities.

More than this, Dickens’s discomfort with a Gradgrindian view of knowledge was because that crusty old lecturer reduced a rich interconnected tapestry of ideas (with the attendant human emotions such ideas can engender) to a rubble of disconnected facts. I do sympathise with the progressive cry for ‘meaning’, even when their pedagogical solutions to a curricular problem were unhelpful.

I’d say that the most effective teachers throughout history have, at an instinctive level, appreciated that pupils need to build knowledge from what they already know. For them, the knowledge is not inert stuff, but must come to live in the child’s mind. Whatever the trend of the moment, I think effective teachers were the ones aware that the purpose of a lesson is for something to be learned and that pupils had to know enough to make sense of the new material. Successful teachers have never thought about planning as starting with activity selection. And weaker teaching has always resulted when teachers are encouraged to bypass considering first what pupils already know and what that means about what they need to learn next on a pathway to expertise.

Therefore, when pupils don’t learn what we hope, the cause is as likely to be curricular* as pedagogical.

*their grasp of what it is children need to learn to make sense of the next lesson’s content

What can ‘iballisticsquid’ tell us about teaching writing?

It once seemed obvious to me that feedback on writing was less useful when it was too context specific.

So I’d try to avoid writing a comment such as:

The example of Hitler’s appeal to the middle classes would be useful here.

Instead I’d write something that could transfer to other essays:

Give more specific examples to back up your points.

This seems in line with what Harry Fletcher-Wood wrote recently in an excellent blog on feedback. He writes the following about teacher responses designed to improve performance on the current task:

This can help students improve the current task, but its effects are limited: students are unlikely to be able to transfer what they learn about one task to another (Hattie and Timperley, 2007; Kluger and DeNisi, 1996; Shute, 2008): people struggle to recognise that they can use the solution to one problem to solve an analogous problem, unless they receive a hint to do so (Gick and Holyoak, 1980)…We may therefore consider giving more general feedback.

In fact Harry is saying something a bit more complex – more on that later – but just to say that I’ve realised it is mistaken to assume that very task-specific feedback is wasted.

‘Iballisticsquid’ helped me reach this conclusion. He is a Youtuber, a genial if rather pasty-faced young gentleman, who has made his millions early by recording himself playing computer games and posting these recordings on Youtube. He and his colleagues, such as ‘Dan TDM’ and ‘Stampycat’, have worldwide followings of primary-aged fans, including my 9-year-old son. Initially I found the whole concept bizarre. Why would anyone want to spend hours watching someone else play computer games when you can just play them yourself? True, the commentary as they play is lively, tells the viewer what the presenter is trying to do and is aimed squarely at the humour level of a nine-year-old boy, but still… The other day my son was gazing in rapt attention as ‘iballisticsquid’ played ‘Bedwars’, a game in which he defended a bed – on an island – while trying to obtain diamonds and emeralds. Suddenly I got it! Even I, seeing Mr iballistic model the game-play, felt I could have a go. Manuals and instructions would have simply made me glaze over (as they always have), but seeing iballisticsquid play, I naturally inferred the game premise, appreciated the tools at my disposal and felt empowered. Iballistic Squid and co. are superb teachers. It is fascinating that often they make mistakes and lose the game they are playing. If anything, these examples of ‘what not to do’, or ‘non-examples’, simply add to the success of their ‘teaching’ of game-playing. Children watch them play, naturally infer what is transferable to their own game-play and are thus empowered to have a go themselves.

It might seem utterly unconnected, but something similar happened when I received copious and very specific feedback on my MEd writing. If I’m honest, at the outset of the MEd I did not even appreciate the nature of academic writing, let alone how my own efforts fell short. Sometimes feedback on my drafts just took the form of examples of how my sentences could be better phrased, and feedback was nearly always pretty specific to the content. However, I learnt fast. I inferred from the examples how I should write in similar contexts. Inference is what humans do naturally IF the examples are pitched correctly so that inferences can be made.

It strikes me that my first suggestion for feedback:

‘The example of Hitler’s appeal to the middle classes would be useful here’.

… is more useful than the second because it provides much richer inference possibilities. This example allows the student to appreciate the nature of the sort of examples that are appropriate in this form of writing, the degree of specificity of those examples, and the occasions when such examples need to be used. All this can be inferred, because inference is what humans do naturally when given appropriate examples (and non-examples).

It is absolutely correct that people don’t transfer what they learn from one task to another at all easily, and this insight is one teachers must grasp. However, it is when people DO transfer an insight from one context to another that we can say that they have learnt something new. It is also the case that examples are incredibly powerful tools for learning. We make a mistake when we think of ‘inference’ as the skill to be taught when in fact it is what humans do automatically. We make a mistake when we try to teach through generalised principles. People learn through examples. (This is a central insight of Engelmann’s Direct Instruction method, which I’ll discuss further in a future blog.)

What can we do to ensure our pupils DO infer? We can repeatedly model the (carefully chosen) specific and, as ‘iballisticsquid’ instinctively appreciates, from that modelling our pupils will infer and transfer to new but similar contexts.

Finally do read Harry’s excellent post in which he explains how linking the specific with the more general in feedback can make transfer more likely.

A great teacher: iballisticsquid

Part 2: Early years assessment is not reliable or valid and thus not helpful

This is the second post on early years assessment. The first is here.

Imagine the government decided they wanted children to be taught to be more loving. Perhaps the powers that be could decide to make teaching how to love statutory and tell teachers they should measure each child’s growing capacity to love.

Typical scene in the EYFS classroom – a teacher recording observational assessment. 

There would be serious problems with trying to teach and assess this behaviour:

Definition: What is love? Does the word actually mean the same thing in different contexts? When I talk about ‘loving history’, am I describing the same thing (or ‘construct’) as when I ‘love my child’?

Transfer: Is ‘love’ something that universalises between contexts? For example, if you get better at loving your sibling, will that transfer to a love of friends, or school, or learning geography?

Teaching: Do we know how to teach people to love in schools? Are we even certain it’s possible to teach it?

Progress: How does one get better at loving? Is progress linear? Might it just develop naturally?

Assessment: If ‘loving skills’ actually exist can they be effectively measured?

Loving – a universalising trait that can be taught?

The assumption that we can teach children to ‘love’ in one context and they’ll exercise ‘love’ in another might seem outlandish but, as I will explain, the writers of early years assessment fell into just such an error in the Early Years Foundation Stage framework and assessment profile.

In my last post I explained how the priority on assessment in authentic environments has been at the cost of reliability and has meant valid conclusions cannot be drawn from Early Years Foundation Stage Profile assessment data. There are, however, other problems with assessment in the early years…

Problems of ‘validity’ and ‘construct validity’

Construct validity: the degree to which a test measures what it claims, or purports, to be measuring.

Validity: the degree to which inferences can be drawn from an assessment about what students can do in other situations, at other times and in other contexts.

If we think we are measuring ‘love’ but it doesn’t really exist as a single skill that can be developed then our assessment is not valid. The inferences we draw from that assessment about student behaviour would also be invalid.

Let’s relate this to the EYFS assessment profile.

Problems with the EYFS Profile ‘characteristics of effective learning’

The EYFS Profile Guide requires practitioners to comment on a child’s skills and abilities in relation to 3 ‘constructs’ labelled as ‘characteristics of effective learning’:

We can take one of these characteristics of effective learning to illustrate a serious problem with the validity of the assessment. While a child might well demonstrate creativity and critical thinking (the third characteristic listed), it is now well established that such behaviours are NOT skills or abilities that can be learnt in one context and transferred to another entirely different context – they don’t universalise any more than ‘loving’. In fact the capacity to be creative or think critically is dependent on specific knowledge of the issue in question. Many children can think very critically about football, but that apparent behaviour evaporates when faced with some maths. You’ll think critically in maths because you know a lot about solving similar maths problems, and this capacity won’t make you think any more critically when solving something different like a word puzzle or a detective mystery.

Creating and thinking critically are NOT skills or abilities that can be learnt in one context and then applied to another

Creating and thinking critically are not ‘constructs’ that can be taught and assessed in isolation, so there is no valid general inference about these behaviours that could be observed and reported as a ‘characteristic of learning’. If you wish a child to display critical thinking, you should teach them lots of relevant knowledge about the specific material you would like them to think critically about.

In fact, what is known about traits such as critical thinking suggests that they are ‘biologically primary’ and don’t even need to be learned [see an accessible explanation here].

Moving on to another characteristic of effective learning: active learning, or motivation. This presupposes both that ‘motivation’ is a universalising trait and that we are confident we know how to inculcate it. In fact, as with critical thinking, it is perfectly possible to be involved and willing to concentrate in some activities (computer games) but not others (writing).

There has been high-profile research on motivation, particularly Dweck’s work on growth mindset and Angela Duckworth’s on grit. Duckworth has created a test that, she argues, demonstrates that adult subjects possess a universalising trait she calls ‘Grit’. But even this world expert concedes that we do not know how to teach grit, and she rejects the use of her Grit scale in high-stakes tests. As for growth mindset, serious doubts have been raised about failures to replicate Dweck’s research findings and about studies with statistically insignificant results being used to support it.

Despite these serious questions about whether motivation can be taught, the EYFS Profile ‘characteristics of effective learning’ presume it is a trait that can be inculcated in pre-schoolers – and, without solid research evidence, that it can be reliably assessed.

The final characteristic of effective learning is playing and exploring. Of course children learn when playing. But this does not mean the behaviours to be assessed under this heading (‘finding out and exploring’, ‘using what they know in play’ or ‘being willing to have a go’) are any more universalising as traits, or any less dependent on context, than the other characteristics discussed. It cannot just be presumed that they are.

Problems with the ‘Early Learning Goals’

At the end of reception each child’s level of development is assessed against the 17 EYFS Profile ‘Early Learning Goals’. In my previous post I discussed the problems with the reliability of this assessment. We also see the problem of construct validity in many of the assumptions within the Early Learning Goals. Some goals are clearly not constructs in their own right, and others may well not be; serious questions need to be asked about whether they are universalising traits or actually context-dependent behaviours.

For example, ELG 2 is ‘understanding’. Understanding is not a generic skill; it depends on domain-specific knowledge. True, a child does need to know the meaning of the words ‘how’ and ‘why’, which are highlighted in the assessment. But while understanding is a goal of education, it can’t be assessed generically: you have to understand something, and understanding one thing does not mean you will understand something else. The same is true for ‘being imaginative’ (ELG 17).

An example of evidence of ELG 2, understanding, in the EYFS profile exemplification materials.

Are ELG 1 ‘listening and attention’ or ELG 16 ‘exploring and using media and materials’ actually universalising constructs? I rarely see qualitative and observational early years research that even questions whether these early learning goals are universalising traits, let alone looks seriously at whether they can be assessed. This is despite decades of research in cognitive psychology leading to a settled consensus that challenges many of the unquestioned constructs underpinning EYFS assessment.

It is well known that traits such as understanding, creativity and critical thinking don’t universalise. Why, in early years education, are these bogus forms of assessment not only used uncritically but allowed to dominate the precious time when vulnerable children could be benefiting from valuable teacher attention?

n.b. I have deliberately limited my discussion to a critique using general principles of assessment rather than arguments that would need to be based on experience or practice.

Early years assessment is not reliable or valid and thus not helpful

In the academic year my daughter was three, she attended two different nursery settings and took away two quite different EYFS assessments, one from each setting, at the end of the year. The disagreement between them was not a one-off mistake or due to incompetence, but inevitable, because EYFS assessment does not meet the basic requirements of effective assessment – that it should be reliable and valid*.

We have well-researched principles to guide educational assessment, and these principles can and should be applied to the ‘Early Years Foundation Stage Profile’, the statutory assessment used nationally to assess the learning of children up to the age of five. The purpose of the EYFS assessment profile is summative:

‘To provide an accurate national data set relating to levels of child development at the end of EYFS’

It is also used to ‘accurately inform parents about their child’s development’. The EYFS profile is not fit for these purposes, and its weaknesses are exposed when it is judged against standard principles of assessment design.

EYFS profiles are created by teachers when children are five, to report on their progress against 17 early learning goals and to describe the ‘characteristics of their learning’. The assessment is carried out through teacher observation. The profile guidance stresses that

‘…to accurately assess these characteristics, practitioners need to observe learning which children have initiated rather than focusing on what children do when prompted.’

Illustration is taken from EYFS assessment exemplification materials for reading

Thus the EYFS Profile exemplification materials for literacy and maths give examples of assessment only through teacher observations of children engaged in activities they have chosen (child-initiated activities). This is a very different approach from the subsequent assessment of children throughout their later schooling, which is based on tests created by adults. The EYFS profile writers no doubt wanted to avoid what Wiliam and Black (1996) call the ‘distortions and undesirable consequences’ created by formal testing.

Reaching valid conclusions in formal testing requires (Koretz, 2008, pp. 23-28) – a sketch after this list illustrates the second requirement:

  1.    Standard conditions – so there is reassurance that all children receive the same level of help
  2.    A range of difficulty in the items used for testing – carefully chosen test items will discriminate between the proficiency of different children
  3.    Careful selection of content from the domain to be covered – to ensure the items are representative enough to allow an inference about the domain
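
To make the second requirement concrete, here is a minimal sketch – my own toy model using a Rasch-style item response function, not anything from Koretz or the EYFS materials. A test whose items are all easy bunches every pupil near the ceiling; items spread across difficulties separate pupils of different proficiency:

```python
# Sketch of requirement 2 (my own toy model, Rasch-style): a test whose
# items are all easy cannot discriminate; a spread of difficulties can.
import math
import random

random.seed(2)

def p_correct(ability, difficulty):
    """Probability a pupil of given ability answers an item correctly."""
    return 1 / (1 + math.exp(-(ability - difficulty)))

def score(ability, difficulties):
    """Simulated raw score on a test made of the given items."""
    return sum(random.random() < p_correct(ability, d) for d in difficulties)

easy_test = [-4.0] * 20                              # every item very easy
spread_test = [-3 + 6 * i / 19 for i in range(20)]   # easy through hard

for ability in (-1.0, 0.0, 1.0, 2.0):
    print(ability, score(ability, easy_test), score(ability, spread_test))
# Easy test: scores bunch near 20 for everyone.
# Spread test: scores rise steadily with ability, discriminating pupils.
```

The point is not the particular model but that discrimination has to be designed into a test – which is impossible when the ‘items’ are whatever a child happens to do in play.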

The EYFS profile is specifically designed to avoid the distortions created by such restrictions, which produce an artificial test environment very different from the real-life situations in which learning will ultimately be used. However, as I explain below, in doing so the profile loses so much reliability that teacher observations cannot support valid inferences.

This is because, when assessing summatively, the priority is to create a shared meaning about how pupils will perform beyond school and in comparison with their peers nationally (Koretz, 2008). As Wiliam and Black (1996) explain, ‘the considerable distortions and undesirable consequences [of formal testing] are often justified by the need to create consistency of interpretation.’ This is why GCSE exams are not currently sat in authentic contexts, with teachers with clipboards (as in EYFS) observing children in attempted simulations of real-life situations. Teacher observation can be very useful for an individual teacher assessing formatively (deciding what a child needs to learn next), but the challenges of obtaining a reliable shared meaning nationally – the very challenges that stop observational assessment being used for GCSEs – do not just disappear because the children involved are very young.

Problems of reliability

Reliability: little inconsistency between one measurement and the next (Koretz, 2008)

Assessing child-initiated activities and the problem of reliability

The variation in my daughter’s two assessments was unsurprising given that…

  • Valid summative conclusions require ‘standardised conditions of assessment’ between settings, and this is not possible when observing child-initiated play.
  • Nor is it possible even to create comparative tasks, ranging in difficulty, that all the children in one setting will attempt.
  • The teacher cannot be sure their observations effectively identify progress in each separate area, as they have to make do with whatever children choose to do.
  • These limitations make it hard to standardise even between children within one setting, so it is unsurprising that the two nurseries built different profiles of my daughter (a toy simulation below makes the point concrete).
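
Here is that toy simulation – entirely made-up numbers, offered only to illustrate the logic, not to model the EYFS itself. Treat each setting’s judgement of a child as the child’s true attainment plus noise; observational judgements based on whatever a child happened to do carry far more noise than a standardised task, so two settings judging the same children agree far less:

```python
# Toy model (made-up numbers): two settings judge the same 500 children.
# A judgement = true attainment + noise; observation of incidental play
# is assumed far noisier than a standardised task.
import random
import statistics

random.seed(3)
N = 500
true_attainment = [random.gauss(0, 1) for _ in range(N)]

def judgements(noise_sd):
    """One setting's judgement of every child, with the given noise."""
    return [a + random.gauss(0, noise_sd) for a in true_attainment]

def correlation(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

print(correlation(judgements(1.5), judgements(1.5)))  # observational: ~0.3
print(correlation(judgements(0.3), judgements(0.3)))  # standardised: ~0.9
```

On these assumptions the two observational judgements correlate at roughly 0.3 while the standardised ones correlate above 0.9. The sketch is crude, but it shows why two settings can produce honestly different profiles of the same child.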

The EYFS Profile Guide does instruct practitioners to ‘make sure the child has the opportunity to demonstrate what they know, understand and can do’, and it does not preclude all adult-initiated activities from assessment. However, the exemplification materials reference only child-initiated activity and, of course, the guide instructs practitioners that

‘…to accurately assess these characteristics, practitioners need to observe learning which children have initiated rather than focusing on what children do when prompted.’

Illustration from EYFS assessment exemplification materials for writing. Note these do not have examples of assessment from written tasks a teacher has asked children to undertake – ONLY writing voluntarily undertaken by the child during play.

Assessing adult initiated activities and the problem of reliability

Even when some children are engaged in an activity initiated or prompted by an adult:

  • The setting cannot ensure the conditions of the activity have been standardised; for example, it isn’t possible to predict how a child will choose to approach a number game set up for them to play.
  • It’s not practically possible to ensure the same task has been given to all children in the same conditions to discriminate meaningfully between them.

Assessment using ‘a range of perspectives’ and the problem of reliability

The EYFS profile handbook suggests that:

‘Accurate assessment will depend on contributions from a range of perspectives…Practitioners should involve children fully in their own assessment by encouraging them to communicate about and review their own learning…. Assessments which don’t include the parents’ contribution give an incomplete picture of the child’s learning and development.’

A parent’s contribution taken from EYFS assessment exemplification materials for number

Given the difficulty one teacher will have observing all aspects of 30 children’s development, it is unsurprising that the profile guide stresses the importance of contributions from others to increase the validity of inferences. However, it is incorrect to claim that the input of the child or of parents will make the assessment more accurate for summative purposes. Such contributions come without any consideration of the conditions, the difficulty or the specifics of the content, creating unavoidable inconsistency.

Using child-led activities to assess literacy and numeracy and the problem of reliability

The reading assessment for one of my daughters seemed oddly low. The reception teacher explained that, while she knew my daughter could read at a higher level, the local authority guidance on the EYFS profile said her judgement must be based on ‘naturalistic’ behaviour. She had to observe my daughter (one of 30) voluntarily going to the book corner, choosing to read out loud to herself at the requisite level and volunteering sensible comments on her reading.

Illustration taken from EYFS assessment exemplification materials for reading. Note these do not include examples of assessment from reading a teacher has asked children to undertake – ONLY reading voluntarily undertaken by the child during play.

The determination to prioritise assessment of naturalistic behaviour is understandable when assessing how well a child can interact with their peers. However, the reliability sacrificed in the process can’t be justified when assessing literacy or maths. The success of explicit testing in these areas suggests they do not need the same naturalistic criteria for a valid inference to be made from the assessment.

Are teachers meant to interpret the profile guidance in this way? The profile is unclear, but while the exemplification materials include only examples of naturalistic observational assessment, we are unlikely to acquire accurate assessments of reading, writing and mathematical ability from EYFS profiles.

Five-year-olds should not sit test papers in formal exam conditions, but this does not mean that observation in naturalistic settings (whether adult- or child-initiated) is the only reasonable, or the most reliable, option. The inherent unreliability of observational assessment means results can’t support the inferences required for such summative assessment to be a meaningful exercise. It cannot, as intended, ‘provide an accurate national data set relating to levels of child development at the end of EYFS’ or ‘accurately inform parents about their child’s development’.

In my next post I explore the problems with the validity of our national early years assessment.


*n.b. I have deliberately limited my discussion to a critique using assessment theory rather than arguments that would need to be based on experience or practice.

References

Koretz, D. (2008). Measuring Up: What Educational Testing Really Tells Us. Cambridge, MA: Harvard University Press.

Standards and Testing Agency. (2016). Early Years Foundation Stage Profile. Retrieved from https://www.gov.uk/government/publications/early-years-foundation-stage-profile-handbook

Standards and Testing Agency. (2016). Early Years Foundation Stage Profile: exemplification materials. Retrieved from https://www.gov.uk/government/publications/eyfs-profile-exemplication-materials

Wiliam, D., & Black, P. (1996). Meanings and consequences: a basis for distinguishing formative and summative functions of assessment? British Educational Research Journal, 22(5), 537-548.