Update: A slightly revised version of this post has been published in the Australian Journal of Indigenous Education https://doi.org/10.1017/jie.2020.18

 

The ‘Flexible Learning for Remote Primary Schools’ (FLRPS) program was funded by the Australian Government in 2014 to implement Direct Instruction (DI) and Explicit Direct Instruction in 34 remote and very remote schools in Western Australia, Queensland and the Northern Territory. The program was funded on the basis of extensive research showing DI’s effectiveness in improving academic outcomes, particularly for disadvantaged and minority children in the US. The FLRPS program was delivered by Good to Great Schools Australia, with an initial implementation period of three years that was subsequently extended to 2019.

Direct Instruction is a specific program of explicit instruction with a sequenced curriculum and scripted model for teaching. It is sometimes referred to as ‘big D.I.’. The teaching model known as direct instruction, or ‘little d.i.’, is a general set of principles that can be applied to any lesson in any curriculum. The FLRPS program used ‘big D.I.’.

Given the acute literacy teaching needs of Indigenous children in remote and very remote schools, it is necessary to evaluate the impacts of programs in these schools carefully. The federal government commissioned the Centre for Program Evaluation (CPE) at Melbourne University to evaluate the FLRPS program during its implementation. Their report states, with reference to the evaluation time frame, that “steady improvements in NAPLAN were observed, particularly for reading, writing and spelling.” Although statistically significant positive gains compared to control schools were not found across all NAPLAN domains, intervention schools made substantially stronger progress in writing and spelling, with large effect sizes for change from 2015 to 2017 in spelling and reading in the intervention schools. The evaluation also found that extenuating factors led to widely differing program impacts among the intervention schools, making it important to look beyond the averaged results to the school and community factors related to success.

An article published in the most recent issue of the Australian Journal of Indigenous Education purports to challenge the findings of the CPE reports. The article, by Guenther and Osborne (2020), analyses NAPLAN reading data for 25 very remote schools in the FLRPS program that used Direct Instruction. They averaged scores across schools and across years to conclude that the program was not effective and that “the intervention has a potential to be associated with educational harm to at least some students.” NAPLAN spelling and writing scores were not included in their analysis.

Guenther and Osborne’s (2020) findings were uncritically welcomed and promoted by commentators and academics who are opposed to Direct Instruction and Explicit Direct Instruction. But do they have any validity?

In a word, no. The analysis has a number of important weaknesses and one basic fatal flaw: the time period studied.

The analysis compares the average NAPLAN reading scores for a “pre-intervention” period (2012-2014) with the average NAPLAN reading scores for a “post-intervention” period (2015-2017). The problem with this should be obvious straight away — the so-called “post-intervention” period is not post-intervention at all.

The FLRPS program was announced in 2014 and the first full year of implementation was 2015, starting with 33 schools and increasing to 34 schools by the end of 2017. Therefore, the “post-intervention” data in the Guenther and Osborne study were actually collected in the first year of the intervention (in fact, the first four months of it, for the 2015 data) and the two subsequent years. When the final data set was collected in the 2017 NAPLAN assessments, the program still had six months left to run.

Direct Instruction has been shown in multiple studies to be an effective teaching program but it is completely unreasonable to expect any instructional program or method to deliver a “hey presto” significant improvement in NAPLAN scores in the first few months for children whose literacy levels are years below the expected benchmark for their age and stage of schooling. Furthermore, while NAPLAN assessments are able to capture useful information for the majority of children, they are not sensitive to changes in the foundational reading skills of children with very low levels of literacy. That is not their design or purpose.

There is also the issue of the comparison group. Guenther and Osborne (2020) compare the 25 schools in the FLRPS program to 115-120 very remote primary schools with high Indigenous populations, providing no other demographic or educational information about their comparability to the intervention schools. Remote Indigenous schools are not all exactly the same. Importantly, FLRPS is not the only direct instruction program being used in very remote schools in Western Australia, Queensland and the Northern Territory.

This means that a lack of significant growth in NAPLAN scores in the FLRPS program schools could conceivably be due to the inappropriate reporting period and/or the inability of NAPLAN to detect growth in the cohort’s reading skills. The results also cannot be generalised to all direct instruction programs – the study does not consider the instruction being provided in the comparison schools. It would be a tragedy if this flawed research undermined the solid improvements in learning being achieved by direct instruction methods in other remote and very remote schools such as the Kimberley Schools Project. Other direct instruction (‘little d.i.’) programs being used in remote and very remote schools with good outcomes include MultiLit programs and Read Write Inc.

Guenther and Osborne (2020) also report attendance rates before and during the intervention, finding there was a greater decline in attendance in intervention schools than in comparison schools. The possible reasons for this difference in attendance patterns are not explored in the article, despite the authors’ stated commitment to a “post-positivist” approach to their study, which usually employs contextualised and qualitative information in the analysis of quantitative data.

The CPE reports, however, do provide extensive contextual information about the schools, students, and communities involved. They give important detail about the range of outcomes in the FLRPS program, including some pockets of success where low literacy had been immutable for many years, and the factors associated with these outcomes. They noted that the data being analysed were collected at an early point in the implementation and were duly cautious in describing their positive findings where appropriate.

A more thorough critique and comparison of Guenther and Osborne (2020) with the CPE reports would certainly reveal more deficiencies in the former. Even so, the basic fact that their post-intervention data cannot in any way be accurately described as such is sufficient to call their conclusions about the FLRPS program into question, if not dismiss them entirely.

NB. The author of this piece, Five from Five, and MultiLit have no commercial interest in, or affiliation with, the Direct Instruction program or its developers.