Michigan's new testing regime is better but should be improved

Ben DeGrow, Mackinac Center for Public Policy

Michigan has released the first round of results under its new system of assessing school performance. For a state struggling on national measures of academic achievement, the results underscore how much improvement is needed. Less clear is how fairly and accurately the new system judges the performance of each school.

The 2015 federal education law, the Every Student Succeeds Act, gives states extra flexibility to update their school rating systems. After a lengthy approval process involving the U.S. Department of Education, many states, including Michigan, have begun to report the results under new formulas.

Like Michigan’s old Top-to-Bottom rankings, the new School Index is used by state officials to decide where to spend resources to try to improve results. It’s possible that schools with chronically low rankings could face drastic sanctions, but the state has yet to close down a conventional district school for poor academic performance.

The old ranking system wasn’t very good. It could tell you which schools served a large number of low-income students: They were the ones with poor results. That’s no surprise, since low-income students generally score poorly, for a number of reasons. But the system couldn’t identify the schools that were good at helping the poorly performing students catch up. As a result, a school could be penalized for serving a large number of low-income students, even if its students were learning.

This finding is a key premise behind the Mackinac Center’s series of annual Context and Performance Report Cards. Our report cards adjust multiple years of test scores to show how a school performs compared to what might have been expected of it based on the share of low-income students in its enrollment. (Such students are defined as those who receive federally subsidized free lunches.) Some of the highest performers, like Star International Academy or Dearborn’s Iris Becker Elementary, dramatically beat the odds while serving a high-poverty student population.
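For readers who want to see the arithmetic, the general idea behind that kind of context adjustment can be sketched in a few lines of code. The Mackinac Center's exact CAP formula is not spelled out here, so the snippet below simply assumes a least-squares fit between school scores and free-lunch shares, with made-up numbers standing in for real schools.

```python
# Illustrative sketch only: assumes a simple least-squares adjustment,
# not the Mackinac Center's actual CAP formula.
import numpy as np

# Hypothetical data: each school's average test score and the share of its
# students receiving federally subsidized free lunches.
free_lunch_share = np.array([0.10, 0.35, 0.55, 0.80, 0.95])
avg_test_score   = np.array([82.0, 74.0, 66.0, 60.0, 64.0])

# Fit a line predicting score from poverty share.
slope, intercept = np.polyfit(free_lunch_share, avg_test_score, 1)
expected = intercept + slope * free_lunch_share

# A school's context-adjusted result is how far it lands above or below
# what its poverty level would predict; positive values "beat the odds."
adjusted = avg_test_score - expected
for share, adj in zip(free_lunch_share, adjusted):
    print(f"free-lunch share {share:.0%}: {adj:+.1f} points vs. expectation")
```

Under this kind of adjustment, a high-poverty school that outscores its prediction ranks ahead of a low-poverty school that merely matches its own.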

In and of themselves, CAP Scores are not the perfect way to measure how well Michigan schools raise student achievement. They do not account for how long a tested student has been enrolled in a particular school, nor what math or reading level she started out at. But in the absence of detailed, individual-level data, the CAP method still provides a reliable tool for judging school performance by recognizing that the average student in poverty has greater challenges to overcome.

The best approach is to assess how well schools enhance students’ knowledge and skills. State officials can more precisely account for this by placing significant weight on the year-to-year progress individual students make on standardized tests.

No single component has greater bearing on a school’s Index rating than student growth, which is calculated by using schoolwide averages of student progress, as well as averages of racial and other subgroups of students. Proficiency, which measures how many students met the mark for their grade level, is close behind in importance, while a host of other factors — many of them only loosely related to how much students know and can do — round out the equation.

Given the more prominent role that growth plays in the new ranking system, one might expect the link between poverty and state ratings to be weaker. And it is. In the old Top-to-Bottom rankings, 55 percent or more of the variation from school to school was explained by the rate of free-lunch students. That figure drops to 41 percent for the new School Index.
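The "variation explained" figures cited here are, in effect, the r-squared of a simple comparison between each school's rating and its free-lunch rate. A minimal sketch of that calculation follows, using made-up numbers; the 55 and 41 percent figures in the article come from the actual statewide data, not from this toy example.

```python
# Minimal sketch of "share of variation explained" (r-squared), using
# hypothetical ratings and free-lunch rates.
import numpy as np

free_lunch_rate = np.array([0.15, 0.30, 0.50, 0.70, 0.90])  # hypothetical
school_rating   = np.array([88.0, 80.0, 71.0, 60.0, 55.0])  # hypothetical

# Squared correlation = fraction of rating variation tied to poverty rate.
r = np.corrcoef(free_lunch_rate, school_rating)[0, 1]
print(f"variation explained by free-lunch rate: {r**2:.0%}")
```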

Though the poverty rate is now a less important factor in explaining why schools are ranked the way they are, that doesn’t mean that student growth has become that much more important. A closer look at the data suggests changes in the rankings have much more to do with which schools are being rated. The School Index rates about 700 more schools than the Top-to-Bottom rankings did; most of these are alternative education programs of some sort. Because these additional schools serve mostly very small and specialized populations, their ratings tend to be too noisy to mean much.

If the School Index includes only the schools that were on the last Top-to-Bottom list, the relationship between a school’s measured performance and its student poverty rate looks very much the same. In fact, it’s nearly identical: The free-lunch rate explains 56 percent of the variation in the new rankings. This correlation is far from the only thing that matters in judging a rating system’s quality, but a number that large raises concerns about its fairness and calls for a more careful investigation.

Thankfully, one key example shows what a fair and valid system might look like. Not many other states have released their full 2017 ranking data under the new federal regime. But at least one highly regarded state system does a better job of evaluating schools on student growth. In Florida, which many reformers see as one of the models in educational accountability, student poverty explains less than one-third of the variation in scores registered on state accountability reports.

Putting more weight on student growth has improved Michigan’s system of school ratings. But there’s enough doubt remaining about the ongoing link to student poverty to justify the next round of Mackinac Center CAP Scores. In other words, context still matters.

—————

Ben DeGrow is the Mackinac Center’s director of education policy.