Superintendent John Deasy takes over the second-largest school system in America, LAUSD, just as the district releases its own calculations of student performance based on standardized test scores. From the LA Times:
The Los Angeles Unified School District’s new school performance measure is likely to surprise many parents, who have traditionally compared schools — and at times purchased homes — based on the state’s Academic Performance Index, which rates schools on a 1,000-point index based mainly on their students’ abilities on standardized tests.
The value-added approach [instead] focuses on how much progress students make year to year rather than measuring solely their achievement level, like the API, which is heavily influenced by factors outside a school’s control, including poverty and parental involvement. Value-added analysis compares a student with his or her own prior performance, largely controlling for outside-of-school influences.
The district’s ratings, dubbed “Academic Growth Over Time,” can send parents a very different signal about a school’s performance. Take, for example, 3rd Street Elementary School in the Hancock Park neighborhood of L.A., which has an API score of 938, putting it among the highest-scoring schools in the district. Under the new growth measure, 3rd Street is one of the lowest-performing elementary schools in the district.
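The divergence between 3rd Street's API and AGT scores follows directly from how a growth model works: instead of ranking schools by raw scores, it asks whether each student scored above or below what their own prior score predicted. The actual VARC/AGT model is a far more elaborate statistical apparatus; the following is only a minimal sketch of the core idea, using hypothetical students and scores:

```python
# Minimal sketch of the "value-added" idea -- NOT the actual VARC/AGT model,
# which uses much richer controls. All students and scores are hypothetical.
# Growth is measured against a prediction from each student's own prior-year
# score; a school's "value added" is its students' average residual.

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a + b*x (prior score -> current score)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

# (prior_year_score, current_year_score, school) -- hypothetical data:
# School B's students score high in absolute terms (a high "API"-style rating)
# but gain little; School A's students score lower but outpace predictions.
students = [
    (520, 540, "A"), (600, 640, "A"), (450, 490, "A"),
    (700, 690, "B"), (650, 645, "B"), (750, 740, "B"),
]

a, b = fit_line([s[0] for s in students], [s[1] for s in students])

# Residual = actual score minus predicted score, pooled by school.
value_added = {}
for prior, current, school in students:
    residual = current - (a + b * prior)
    value_added.setdefault(school, []).append(residual)

for school, res in sorted(value_added.items()):
    print(school, round(sum(res) / len(res), 1))
```

Run on this toy data, the high-scoring School B comes out with negative value added and the lower-scoring School A with positive value added, which is exactly the kind of reversal the 3rd Street example illustrates.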
What’s troubling is that the rhetoric surrounding the use of student standardized test scores, even when year-over-year comparisons are meant to strip out external influences, alludes to other measures of teacher performance that have never been identified. What are they? Peer evaluation? Portfolio review of instructional materials? In practice, the focus has been almost exclusively on standardized test scores and “value-added” as applied to teacher performance.
Furthermore, how reliable is the data analysis conducted by the Wisconsin non-profit? The LA Times mentions it only in passing, without naming it:
The district scores are based on an analysis conducted by a nonprofit research group affiliated with the University of Wisconsin, which has a three-year, $1.5-million contract with L.A. Unified. The group has also worked with public school districts in New York City, Chicago and Milwaukee.
An interview with Pat Morrison of KPCC-FM, the Los Angeles-area NPR affiliate, reveals the Wisconsin non-profit to be the University of Wisconsin Value-Added Research Center (VARC). Professor Rob Meyer worked on the development of AGT and is also affiliated with the Wisconsin Center for Education Research (WCER). VARC has also collaborated with Edvance, a San Antonio-based education data analysis company that has worked with the Bush Institute. An “About” page acknowledging funders lists the institutional affiliations as well:
Housed in the Wisconsin Center for Education Research at the University of Wisconsin-Madison, the Value-Added Research Center is home to multiple on-going research projects, and is funded by grants from sources such as U.S. Department of Education IES/NCES, NSF, and the Joyce Foundation. Research partners include the Milwaukee Public School System, the Wisconsin Department of Public Instruction, Chicago Public Schools, and Teacher Incentive Fund grant recipients.
The Joyce Foundation and other funders of value-added research at VARC are hardly of the same ideological stripe as the Walton (Wal-Mart) Foundation, the Broad Foundation, or many of the other markedly conservative philanthropies that fund “school choice” programs and think tanks. (The Joyce Foundation funds gun control measures and other programs with an explicit social justice bent.) So the question here is, why are typically “non-partisan” foundations so wedded to the “value-added” approach?
Why embrace it when local education policy researchers based in Los Angeles, who have made area school districts the subject of longitudinal study for decades, have never come out in favor of a “value-added” approach? Why is “value-added” embraced when leading economists within California deride much of the Gates Foundation-funded research on Measures of Effective Teaching, saying that it’s “flawed” and “misinterprets its own results”?
This effort is troubling in its effects, wasteful, and focused on the wrong approach.