Any education policy proposal should be scrutinised, questioned and discussed in detail. Such debate should be conducted on the basis of accurate information. In the case of the Year 1 Phonics Check, however, a great deal of misinformation is being promulgated which is creating confusion rather than clarity.

A recent example is a blog post in EduResearch Matters by Dr Paul Gardner, who has previously written articles about the Year 1 Phonics Check and its impact in English schools, to which I provided substantive corrections.

Dr Gardner’s blog post, published last week, makes a number of claims that are incorrect. The entire post is quoted in sections here, and I respond briefly to each in turn.

The article is (ironically) called “The flawed thinking behind a mandatory phonics screening test”.

Dr Gardner: “The New South Wales Government recently announced it intends to ‘trial an optional phonics screening test’ for Year One students. This seems to be following a similar pattern to South Australia where the test, developed in the UK, was first trialled in 2017 and is now imposed on all public schools in the state.”

My response:

The SA government adopted the Year 1 Phonics Check in 2018 because the 2017 trial revealed poor decoding ability among Year 1 students. Importantly, teachers involved in the trial reported that the Check had provided new and valuable information about students’ reading skills, was easy to administer, and children enjoyed doing it. Catholic and Independent schools in South Australia have voluntarily joined the assessment program and will be using the Check from 2019.

Dr Gardner: “The idea of a mandated universal phonics screening test for public schools is opposed by the NSW Teachers Federation, but is strongly advocated by neo-liberal ‘think tanks’, ‘edu-business’ leaders, speech specialists and cognitive psychologists.”

My response:

Hundreds of teachers and principals around Australia are also advocates for the Phonics Check. Many schools are already choosing to use the Phonics Check and champion its benefits for teaching and learning. It is simply wrong to dismiss the Year 1 Phonics Check as a preoccupation of a group of non-teachers.

Dr Gardner: “The controversy surrounding the test began in England, where it has been used since 2012. As in England, advocates of the test in Australia argue it is necessary as an early diagnosis of students’ early reading.” 

My response:

Most advocates for the Phonics Check in Australia do not describe it as a diagnostic assessment in a technical sense. It is more accurately described as a curriculum-based measure that ascertains whether students have a sufficient level of phonics knowledge for them to make good progress as independent readers. Teachers should, of course, be assessing phonics and decoding throughout Foundation and Year 1, but not all schools do. The Phonics Check is designed to be administered at a crucial point in reading development and can identify children who need support if their difficulties have not been picked up before.

Dr Gardner: “No teacher would dispute the importance of identifying students in need of early reading intervention, nor would they dispute the key role that phonics plays in decoding words. However I strongly believe the efficacy of the test deserves to be scrutinised, before it is rolled-out across our most populous state, and possibly all Australian public schools. Two questions deserve to be asked about the tests’ educational value. Firstly, is it worthwhile as a universal means of assessing students’ ability in reading, especially as it will be costly to implement?”

My response:

Phonic decoding is a fundamental and essential aspect of reading development. Early acquisition of phonic decoding ability is a strong predictor of later reading comprehension. Identifying and addressing weaknesses in decoding, and in the teaching of decoding, has a dramatic impact on reading success.

The Phonics Check was methodically developed and its benefits proven over seven years in England.

So yes, it is worthwhile.

The Phonics Check is far less costly than most other assessments. It takes 5-7 minutes to administer and the estimated cost is $17 per child, including staff time. It is a highly efficient and effective assessment of early reading development.

Dr Gardner: “Secondly, does it make sense to assess students’ competence in reading by diagnosing their use of a single decoding strategy?”

My response:

There is only one decoding strategy. The Australian Curriculum’s definition of decoding reflects the use of the term in the scientific research literature on how children learn to read: “Decoding: A process of efficient word recognition in which readers use knowledge of the relationship between letters and sounds to work out how to say and read written words.”

Dr Gardner: “Perhaps these questions can be answered by interrogating the background to the test in England and by evaluating the extent to which it has been successful. What is in the test? The test, which involves two stages, consists of 40 discrete words that the student reads to their teacher. They do so, by firstly identifying the individual letter-sound (grapho-phonic) correspondences, which they then blend (synthesise) in order to read the whole word. So, in fact what is specifically being tested is a synthetic phonic approach to reading, not a phonic approach per se. It could even be argued that calling the test a ‘phonics’ check is a misnomer since analytic phonics is not included.”

My response:

This is utterly incorrect. Children are not required to sound out the word before they say it. Children can sound out and blend if they want to, or they can just say the whole word. The Phonics Check is neutral on how children read words; it only assesses whether they can.

Dr Gardner: “Students pass the test by correctly synthesising the letter blends in thirty-two of the forty words.  In order to preserve fidelity to the strategy and to ensure students do not rely on word recognition skills, the test includes 20 pseudo words. In the version used in England, the first 12 words are nonsense words.”

My response:

As noted above, the claim that children must ‘synthesise’ or sound out the words is incorrect.

The pseudo word component is to assess whether children can use phonic knowledge, however they may have acquired it, to read words they have not seen before. As children become independent readers, they continually encounter new words, so they cannot rely on their memory of words they have learned as whole words. If they cannot decode new words their reading comprehension will be limited.

Dr Gardner: “We can trace the origins of the phonics screening check in England to two influential sources: ‘The Clackmannanshire Study’ and the ‘Rose Report’. In his 2006 report on early reading, Sir Jim Rose, drew heavily on a comparative study conducted by Rhona Johnston and Joyce Watson, in the small Scottish county of Clackmannanshire. After comparing achievements in reading of three groups of students taught using different phonic methods, the two researchers concluded that the group taught by means of synthetic phonics achieved significantly better results than either of two other groups. These other groups were taught by means of analytic phonics and a mixed methods approach. Although the study received little traction in Scotland and has subsequently been critiqued as methodologically flawed, it was warmly embraced in England, especially by Rose who was an advocate of synthetic phonics.”  

My response:

Johnston and Watson have comprehensively responded to the claims that their study was methodologically flawed.

Sir Jim Rose developed the view that synthetic phonics is the most effective form of phonics instruction as a result of conducting his review, not the other way around as is implied here.

Dr Gardner: “The 2006 Rose Report was influential in shaping early reading pedagogy in England and from 2010 systematic synthetic phonics, not only became the exclusive method of teaching early reading in English schools, it was made statutory by the newly elected Conservative-Liberal Coalition under David Cameron.”

My response:

This is incorrect. The Communication, Language and Literacy Development Strategy (CLLD) was introduced in 2006 to implement the recommendations of the Rose Report that all children should have systematic synthetic phonics instruction in the early years of school as part of a comprehensive literacy program.

Dr Gardner: “The then Education Secretary, Michael Gove, and his Schools’ Minister, Nick Gibb, announced a match funded scheme in which schools were required to purchase a synthetic phonics program.”

My response:

This is incorrect. The UK government made funding available to schools to purchase synthetic phonics programs, professional development and resources. They were not ‘required’ to do so. https://www.gov.uk/government/news/funding-for-phonics-teaching-to-improve-childrens-reading

Dr Gardner: “Included in the list of recommended programs was one owned by Gibb’s Literacy Advisor. This program is now used in 25% of English primary schools.”

My response:

No source/evidence is provided for these statements.

Dr Gardner: “In 2012, Gove introduced the phonics screening check for all Year One students (5-6 year olds) in England, and in 2017, Gibbs toured parts of Australia promoting the test here. 

To what extent has the Phonics Screening Check been successful? In its first year, only 58% of UK students passed the test, but in subsequent years’ results have improved. Students who fail the test must re-sit it at the end of year Two. By 2016, 81% of Year One students passed the test, but since then there has only been an increase of 1%. Gibb cites this increase in scores, over a six-year period, as proof that the government has raised standards in reading and advocates of the test in Australia have seized upon the data as evidence in support of their case. 

At face value, the figures look impressive. However, when we compare phonics screening check results with Standard Assessment Test (the UK equivalent to NAPLAN) scores in reading for these students a year later, the results lose their shine. In 2012, 76% of Year Two students achieved the expected Standard Assessment Test level in reading, but last year only 75% achieved the same level. Clearly then, the phonics screening check is not indicative of general reading ability and does not serve as a purposeful diagnostic measure of reading.”

My response:

This comparison of 2012 SAT scores with recent results betrays ignorance of the UK assessment system. The SAT tests changed significantly in 2016, and it is now more difficult for children to achieve the ‘expected’ level in reading. Therefore, it is wrong to make comparisons between SAT results in 2012 and SAT results in 2018.

In fact, Year 2 reading results improved markedly in the years from 2012 to 2015, after more than a decade of stagnant scores. There is a high correlation between Phonics Check scores and Year 2 reading comprehension scores.

England’s results in Year 4 reading in the PIRLS assessment improved from 2011 to 2016. The children assessed in PIRLS 2016 were the first cohort to do the Phonics Check, so the Check had not yet had time to exert its positive effect on teaching and learning (which is the aim of the Phonics Check, as with any educational assessment).

Children who have done the Phonics Check have not yet reached the age for PISA assessments so it will take some years yet to see any impact on PISA scores. It must be noted, however, that any long-term impact of the Phonics Check will of course be moderated by the quality of reading instruction in subsequent years.

Dr Gardner: “In a recent survey of the usefulness of the phonics screening check in England, 98% of teachers said it did not tell them anything they did not already know about their students’ reading abilities.”

My response:

No reference/source for this survey statistic is provided, so I cannot comment on it specifically. I do not know, for example, if it was a representative sample of teachers.

However, if it is indeed a recent survey, this finding makes sense since all teachers are using synthetic phonics and they have been using the Phonics Check for seven years. They should be very familiar with their students’ decoding ability. This is not the case in Australia.

Dr Gardner: “Following the first year of the test in 2012, when only 58% of students achieved the pass mark, teachers explained that it was their better readers who were failing the test. Although these students were successfully making the letter-sound correspondences in nonsense words, in the blending phase, they were reading real words that were similar to the visual appearance of the pseudo words.”

My response:

Again, it is simply incorrect to say there is a sounding out and ‘blending phase’ of the Phonics Check.

It is a myth that ‘good readers’ fail the Phonics Check because they try to read pseudo words as real/familiar words.

The evaluation conducted by the National Foundation for Educational Research in 2015 found that: ‘Over the course of the study a small number of respondents have expressed concerns that the Check disadvantages higher achieving readers. However, as reported in Chapter 2, the analysis of the NPD data found no identifiable pattern of poorer performance on the Check than expected in those children who are already fluent readers.’ (p. 10)

This myth was also refuted in research by Castles et al. (2018), which found that “stronger word readers were less likely to make a word error response than weaker word readers, with their most prevalent type of error being another nonword that was highly similar to the target.”

Dr Gardner: “The conclusion is that authentic reading combines decoding with meaning.”

My response:

This is of course true, but accurate decoding is essential to access meaning. Children may not be familiar with the word ‘slat’, but reading it as ‘salt’ because that is a word they know does not make them better readers, and it will not give them the correct meaning.

Dr Gardner: “Furthermore, as every teacher knows, high status tests dominate curriculum content, which in this case, means that by giving greater attention to synthetic phonics, in order to get students’ through the test, there is less time to give to other reading strategies.” 

My response:

Proficiency in word reading (both for familiar and new words) is an essential component of the development of fluency and comprehension. If synthetic phonics is the most effective way of achieving proficiency in word reading, then good teachers would be wise to use it. It is not about ‘getting through the test’, it is about ensuring all children have this vital skill.

Dr Gardner: “Whilst the systematic teaching of phonics has an important place in a teacher’s repertoire of strategies, it does not appear to make any sense to make it the exclusive method of teaching reading, as is the case in England. To give it a privileged status as a test does exactly that.”

My response:

There is no expectation or encouragement for phonics to be the sole component of early reading instruction. Indeed, the Rose report advises: “Phonic work should be set within a broad and rich language curriculum that takes full account of developing the four inter-dependent strands of language: speaking, listening, reading and writing and enlarging children’s stock of words.” (p. 70)

Dr Gardner: “Perhaps this is the key reason why, in England, phonics screening check scores have improved but students’ reading abilities have not. I don’t think Australia should be heading down the same dead-end path.”

My response:

As noted above, the assertion that reading abilities have not improved is unfounded.

It is discouraging that a senior lecturer in primary English at an Australian university can produce an article that contains so many factual errors. It is disappointing that the Australian Association for Research in Education publishes a blog that does not provide supporting evidence for its claims and does not meet basic standards of factual accuracy.

If some teachers have misgivings about the Year 1 Phonics Check, this sort of misinformation goes a long way to explaining why.