In May 2017, I wrote a blog post, “Intervention: Which Programs are Effective.” It was about a new website, evidenceforessa.org, created to give districts an easy way to check an instructional program’s effectiveness before spending their Every Student Succeeds Act (ESSA) federal funds. The website states that it “provides a free, authoritative, user-centered database to help anyone – school, district, or state leaders, teachers, parents, or concerned citizens – easily find programs and practices that align to the ESSA evidence standards and meet their local needs.” Ratings range from strong (having the most evidence aligned with the ESSA standards) through moderate and promising, down to programs with no statistical evidence to support their use. The website was produced by the Center for Research and Reform in Education (CRRE) at Johns Hopkins University School of Education in collaboration with technical and stakeholder advisory groups.
According to the ESSA website, its mission seems noble. As clearly noted in the ‘Frequently Asked Questions’ section of the site:
“We believe that using proven programs is morally and fiscally responsible, increasing the likelihood of student success as well as a greater return on financial investments in education. While there are several databases with information about education programs, such as the What Works Clearinghouse and Best Evidence Encyclopedia, this is the only website explicitly tailored to making education leaders aware of specific programs that meet the ESSA evidence standards.”
At the time I wrote that blog, I was curious to see what a search on the ESSA website would turn up for Leveled Literacy Intervention (LLI) by Fountas and Pinnell, since this reading intervention program has been widely used in schools across the United States for children with reading difficulties. I was not shocked to see these words under the evidence summary: “Qualifying studies found no significant positive outcomes.” I sarcastically wrote, “How could this be? Isn’t this the ‘go-to’ program of choice?” I explained that “the books used for this program, particularly the lower level ones, have repetitive or predictable text, which basically teach children to look at pictures and remember sight words. Since such students likely have not made the connection between letter and sound correspondences, they tend to use whole word memorization as their default strategy when reading. The LLI books reinforce this contextual guessing strategy. Rather than encouraging children to read ‘left to right and all through the word,’ the LLI books are sending children mixed messages about how to read. It is not unusual to see children’s eyes jumping around to search for meaning while using ‘word solving’ strategies which do not rely on the alphabetic code of letters matching to sounds throughout the written word.”
By September 2017, to my dismay, the ESSA website posted a rating of STRONG for LLI, based on two qualifying studies! How did it go from no significant positive outcomes to strong? When I began to dig a little deeper, the website directed me to the What Works Clearinghouse (WWC) Intervention Report, a “summary of findings from a systematic review of the evidence.” When I scrolled down the WWC report to the section on “Effectiveness,” it said, “LLI had positive effects on general reading achievement, potentially positive effects on reading fluency, and no discernible effects on alphabetics for beginning readers.” I was most interested in the alphabetics domain. Alphabetics refers to the understanding that letters and letter patterns represent the sounds in spoken language, and it is a strong predictor of reading success.
Since this summary is based on the results for kindergarten to second grade, the years when these decoding skills are primarily taught, a review that reveals no discernible effect IS SIGNIFICANT!!
A reading intervention program designed for beginning readers SHOULD have positive effects in the alphabetics domain! When else would children get these foundational skills? Despite this significant deficiency, when the alphabetics domain was combined with the general reading achievement and fluency domains, the composite score pushed LLI into the strong category! So, I wondered, what was used to assess these domains? DIBELS assessments were used for alphabetics and fluency, and the Fountas and Pinnell Benchmark Assessment System (BAS) was used for general reading achievement. Fountas and Pinnell are the authors of the very intervention program being reviewed! This all started to make sense to me.
An assessment that relies on contextual guessing and cueing will show positive effects for an intervention that relies on contextual guessing and cueing!
So, the numbers don’t lie, but the interpretation leaves a lot to be desired, especially when kindergarten through second grade results are combined!
Do we really think that busy education leaders who will be purchasing evidence-based products will go any further than the “STRONG” rating on the ESSA website? Do we really think that WWC used the best evidence for giving these programs “thumbs up” ratings? Is this how Reading Recovery gets a “STRONG” review as well? Do we really think that schools will change when they can point to these websites to support what they do?
Faith Borkowsky is the Founder of High Five Literacy and Academic Coaching with over thirty years of experience as a classroom teacher, reading/learning specialist, regional literacy coach, administrator, and tutor. Ms. Borkowsky is Orton-Gillingham trained and is a Wilson Certified Dyslexia Practitioner listed on the International Dyslexia Association’s Provider Directory. She is the author of Failing Students or Failing Schools? A Parent’s Guide to Reading Instruction and Intervention. She provides professional development for teachers and school districts, as well as parent workshops, presentations, and private consultations.