Rationale for School Quality Metrics and Construction of Ranking Scores

By Anil Nathan and Jack Schneider, College of the Holy Cross

Introduction

This document explains the procedure used by the Boston Globe to rank schools in Massachusetts. Six categories were used to score and rank schools, with user-supplied weights determining how much each category counts. The categories are 1) Massachusetts Comprehensive Assessment System (MCAS) Mathematics Growth Score; 2) MCAS English Language Arts Growth Score; 3) School Climate (which includes graduation rates, dropout rates, and the intention to attend a two- or four-year college); 4) College Readiness (which includes SAT Writing scores and the percentage of students scoring 3 or higher on Advanced Placement tests); 5) School Resources (as measured by expenditure per student); and 6) Diversity (the calculation is explained below). The most recent available data is used for all calculations.

All variables were scaled (if necessary) and scored based on their deviations from the mean of the variable. This measure allows for some notion of distance between scores while removing some problems associated with the natural magnitude of the variables. Taking a weighted average of these deviations from the means (using category weights inputted by the user) allows for the calculation of a score, which is used to rank the schools.
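As a minimal sketch of this recipe in Python (the category names, values, and weights below are hypothetical, not the Globe's actual data or code):

```python
import pandas as pd

# Hypothetical per-school values for two categories that are already on a 0-100 scale.
schools = pd.DataFrame({
    "math_growth": [55.0, 42.0, 61.0],
    "ela_growth": [48.0, 50.0, 58.0],
}, index=["School A", "School B", "School C"])

# Deviation of each school from the mean of each variable.
devs = schools - schools.mean()

# User-supplied category weights; the weighted average of deviations is the score.
weights = pd.Series({"math_growth": 0.6, "ela_growth": 0.4})
score = (devs * weights).sum(axis=1)
print(score.sort_values(ascending=False))
```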

The remainder of the document explains the rationale for using the categories and variables listed above as well as reporting specifics about how the variables are used to calculate a final score.

MCAS Growth (Mathematics and English Language Arts)

The Student Growth Percentile (SGP) score, unique to Massachusetts, is important because it attempts to identify the value added by the school in the process of education. After all, schools working with high-achieving populations will have high standardized test scores, even if those students gain little from their coursework. Conversely, schools with low-achieving students might do an outstanding job in raising achievement scores, but still produce low overall test scores. Measuring student growth rather than net scores, then, theoretically makes it possible to compare schools with vastly different student populations. SGPs are determined by comparing one student’s history of MCAS exam scores to the scores of all the other students in the state with a similar testing history.

The median MCAS SGP scores by school from 2013 are used in this ranking system. The MCAS SGP scores naturally have a theoretical low point of 0 and a theoretical high point of 100, and so do not require scaling. Means are calculated for each variable conditional on school type, and then deviations from the relevant mean are derived for each school to use in scoring.
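Because the mean is taken conditional on school type, each school's deviation is computed within its own type group, roughly as in the following pandas sketch (school types and SGP values are hypothetical):

```python
import pandas as pd

df = pd.DataFrame({
    "school_type": ["HS", "HS", "MS", "MS"],
    "math_sgp": [55.0, 45.0, 60.0, 40.0],   # median SGPs, already on a 0-100 scale
})

# Compute the mean within each school type, then subtract it from each school's value.
type_mean = df.groupby("school_type")["math_sgp"].transform("mean")
df["math_sgp_dev"] = df["math_sgp"] - type_mean
```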

School Climate

School climate is challenging to measure. That said, graduation rates are a logical proxy for both student and adult commitment to the process of education. Graduation from high school represents a significant commitment and can signal students' feelings about the importance of schoolwork, adults' care for students, and broader community support for education. The inclusion of dropouts in such figures is also important because graduation rates are often calculated based on the percentage of enrolled students—potentially allowing a school with a high dropout rate to possess a misleadingly high graduation rate. In order to add some depth to this picture, the postsecondary ambitions of high school students—measured through surveys administered by the state—have also been included. Again, while such data correlates imperfectly with school climate, it does send a powerful signal about the degree to which students feel prepared for college work (whether at two- or four-year schools) and view education as valuable.

Both graduation and dropout rates naturally have a theoretical low point of 0 and a theoretical high point of 100, and so do not require scaling. Means are calculated for each variable conditional on school type, and then deviations from the relevant mean are derived for each school to use in scoring. To combine graduation rates, dropout rates, and college intentions into the category of school climate, the deviations from the mean for graduation rates are multiplied by .5, the deviations from the mean for dropout rates are multiplied by .3, and the deviations from the mean for college intentions are multiplied by .2. These weighted deviations from the mean are then added together to form the school climate deviation from the mean for use in scoring.
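Read literally, the combination for a single hypothetical school looks like the following (the text does not say whether the dropout deviation is sign-flipped so that lower dropout rates score higher; this sketch follows the stated weights as written):

```python
# Hypothetical deviation-from-mean values for one school.
grad_dev = 4.0      # graduation rate deviation
drop_dev = -1.5     # dropout rate deviation
college_dev = 2.0   # college-intention deviation

# Weights stated in the text: .5 graduation, .3 dropout, .2 college intentions.
climate_dev = 0.5 * grad_dev + 0.3 * drop_dev + 0.2 * college_dev
# -> 2.0 - 0.45 + 0.4 = 1.95
```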

College Readiness

College readiness can be measured in a number of ways, none of them perfect. Two relatively predictive and readily available measures, however, are SAT writing scores and the percentage of students taking AP exams earning scores of 3 or higher. Certainly they are limited. Not all students in every school take the SAT or enroll in AP courses. And, of course, it is possible to perform poorly on the SAT or on an AP exam and do quite well in college. Still, SAT writing scores have proven to be reliable predictors of college success—enough so, in fact, that writing was dropped as a separate subject test and incorporated into the general SAT. Successful work on AP exams, similarly, has proven in a number of studies to be predictive of college success, particularly in subjects like Calculus BC or Physics.

The SAT writing section is scored from a low of 200 to a high of 800. These values are scaled from 0 to 100, and then means and deviations from the relevant means are calculated. The percentage of students taking AP exams who score a 3 or higher naturally has a theoretical low of 0 and a theoretical high of 100, so no scaling is needed. Deviations from the relevant mean are then calculated. To derive a college readiness deviation from the mean, the SAT writing deviation and the AP deviation are each multiplied by .5 and added together.
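A sketch of the scaling and combination described above, with hypothetical school-level values:

```python
import pandas as pd

def scale_sat_writing(score):
    """Rescale an SAT writing score (200-800) onto a 0-100 scale."""
    return (score - 200) / (800 - 200) * 100.0

sat = pd.Series([480.0, 530.0, 610.0])   # hypothetical school-mean SAT writing scores
ap = pd.Series([22.0, 35.0, 51.0])       # hypothetical % of AP exams scored 3 or higher

# Scale the SAT scores, then take deviations from the mean of each component.
sat_dev = scale_sat_writing(sat) - scale_sat_writing(sat).mean()
ap_dev = ap - ap.mean()                  # already on a 0-100 scale

# Each component is weighted .5 and the two are summed.
college_readiness_dev = 0.5 * sat_dev + 0.5 * ap_dev
```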

School Resources

It is common wisdom among parents and policymakers that resources matter in education. Wealthier districts can lower average class sizes and recruit talented faculty, since teacher salaries and benefits make up the bulk of educational expenditures. And adequate resources also make it possible for schools to finance staff development, additional pupil support, music programs, and extra-curricular activities—all of which can enhance the academic and non-academic components of the school program. Some research does indicate that money may play a smaller role than the public imagines. Nevertheless, it is clear that money matters in schooling, providing students with a range of opportunities that might otherwise be unavailable.

There are no theoretical low and high points for expenditure per student, which is used as a proxy for school resources. The data is used to set low and high points to create a scale between 0 and 100. Deviations from the relevant mean are then calculated and used in scoring.
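A sketch of this min-max scaling, using hypothetical per-pupil expenditures as the endpoints:

```python
import pandas as pd

spend = pd.Series([11200.0, 13800.0, 17500.0])   # hypothetical per-pupil expenditures

# No natural endpoints exist, so the observed minimum and maximum define the 0-100 scale.
spend_scaled = (spend - spend.min()) / (spend.max() - spend.min()) * 100.0
spend_dev = spend_scaled - spend_scaled.mean()
```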

Diversity

All families value diversity differently. And some families intentionally select non-diverse schools and districts, believing that their children will benefit from racial or cultural homogeneity. Still, educational research indicates that a diverse mix of students can produce a range of positive outcomes for young people. Perhaps most obviously, students in diverse schools gain racial and cultural awareness, as well as an enhanced sense of openness that appears to carry forward into adult life. But such environments also appear to promote critical thinking among students. And though it appears that non-dominant groups (particularly African-American and Latino/Hispanic students) benefit the most from diverse school communities, research indicates that white students also benefit from such environments, or at the very least are not adversely affected.

The four racial groups used to measure diversity are white, black, Latino/Hispanic, and other. A notion of “perfect diversity” is introduced in which each group comprises 25% of the school’s population. The Euclidean distance between each school’s actual composition and this perfect-diversity point is then calculated. This distance naturally has a theoretical low of 0, and the data is used to determine the theoretical high in order to scale from 0 to 100 if necessary. Deviations from means are then calculated and used for scoring.
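A sketch of this distance calculation for a hypothetical school (a smaller distance means a composition closer to perfect diversity; the subsequent rescaling and deviation step follows the general recipe described above):

```python
import math

def diversity_distance(white, black, latino, other):
    """Euclidean distance (in percentage points) from the 'perfect diversity'
    point at which each of the four groups makes up 25% of enrollment."""
    shares = (white, black, latino, other)
    return math.sqrt(sum((s - 25.0) ** 2 for s in shares))

# Hypothetical school: 60% white, 15% black, 15% Latino/Hispanic, 10% other.
dist = diversity_distance(60, 15, 15, 10)   # about 40.6
```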

Overall Score and Ranking

The overall score for a school is a weighted average of the category deviations from their relevant means, where the weights are user determined. If a school is missing data in any positively weighted category, that school is not ranked. These scores are then used to rank the schools from highest to lowest.
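A sketch of this final step, with hypothetical category deviations and user weights; schools missing data in a positively weighted category are dropped before ranking:

```python
import pandas as pd

# Hypothetical category deviations; NaN marks missing data.
devs = pd.DataFrame({
    "math_growth": [3.0, -1.0, None],
    "ela_growth": [2.0, 0.5, 1.0],
    "climate": [1.5, -2.0, 0.5],
}, index=["School A", "School B", "School C"])

# Hypothetical user-supplied weights (summing to 1).
weights = pd.Series({"math_growth": 0.5, "ela_growth": 0.3, "climate": 0.2})

# Drop schools with missing data in any positively weighted category.
positively_weighted = weights[weights > 0].index
ranked = devs.dropna(subset=positively_weighted)

# Weighted average of deviations, then sort from highest to lowest score.
scores = (ranked[positively_weighted] * weights[positively_weighted]).sum(axis=1)
ranking = scores.sort_values(ascending=False)
```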

For the document with complete annotations, click here. Nathan is an Assistant Professor in the Department of Economics and Accounting and Schneider is an Assistant Professor in the Department of Education at the College of the Holy Cross.

Have more questions about the rankings and the study? Check out our previous live chat with the creators of Dreamschool.