Use Caution in Interpreting Relationship Between Offender Race and Prison Sentencing

State Supreme Court Justice Rebecca Dallet wrote recently, according to a story in the Milwaukee Journal Sentinel, that a study on race and prison sentencing in Wisconsin “confirms what I and many others have been saying, which is that we have a long way yet to go to have a system that truly treats all equally. We should continue to examine this issue and be proactive in the courts on reducing and eliminating racial bias.”

Her statement indicates that she believes a January 2020 study by the Wisconsin Court System’s Office of Research and Justice Statistics provided evidence that the courts in Wisconsin are biased against minorities — a serious accusation coming from a Supreme Court justice.

The study itself did not use the term “bias.” Rather, it alluded to “differences by race.” Dallet, however, was not the only one to apparently conclude that troubling differences in sentences handed down by her fellow judges were the result of bias. The journalist who reported her statement, Daniel Bice, referred to the report as a “racial bias study.”

Was it?

In an effort to determine if Dallet and Bice accurately portrayed the findings of the report, the Badger Institute — which has long been focused on criminal justice reform as well as the causes of and potential remedies for disparities by race — asked an expert in the methodology used in the study to review the findings and analyze whether they prove bias exists within the criminal justice system in Wisconsin.

The request was in no way an attempt to disprove the existence of racism or bias in society as a whole. It was made to help understand the limitations of the sort of analysis used by the Office of Research and Justice Statistics, an essential first step in determining whether there are policies within the system itself or, rather, in broader society that might help eventually eliminate disparities — a goal that all fair-minded Wisconsinites should strive for.

— Mike Nichols, president of the Badger Institute

Methodology used in Wisconsin Court System study can produce biased findings

By ANDREW HANSON, Ph.D. | May 14, 2021

The Wisconsin Court System’s Office of Research and Justice Statistics (ORJS) recently drafted a report titled “Race and Prison Sentencing in Wisconsin” (referred to hereafter as “the report”). The findings are sobering: Black men are 28% more likely to receive a prison sentence than white men, while Hispanic men are 19% more likely to receive a prison sentence than white men.

The top-line findings of the report represent a useful quantification of average differences between groups, one that can help the public and policymakers think about what the root causes of the disparity might be.

In using the report’s findings to inform policy discussions, it is also important to consider some major limitations of the methodology used in the study and what questions remain unanswered.

The report relies on a common technique used to analyze data in the social sciences called regression analysis.1 Such analysis relates an outcome variable of interest (in this report, the outcome is the sentence) to a host of other factors that might influence that outcome.

The report primarily focuses on the race of the offender as something that might influence prison sentencing but includes other factors in the analysis — things such as criminal history or whether the defendant took the case to trial. These other factors are considered in an attempt to make an “all else equal” comparison and isolate the impact that offender race has on sentencing (they often are referred to as the “controlling factors” in a study).

The real power of regression analysis comes from using as exhaustive a list of controlling factors as possible, so that the researcher can have confidence that the factor of interest — in this instance, race — is truly isolated.2 How exhaustive that list of controlling factors is essentially determines how accurately the study is able to isolate the factor of interest.

The ORJS report uses the following variables as control factors: initial and convicted charge severity, whether guilt was determined by trial, criminal history in the preceding five years, youth status and region of the state. Accounting for those factors, in other words, eliminates the possibility that they are responsible for the differences in sentences found by the ORJS researchers.
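To make the idea of an “all else equal” comparison concrete, here is a minimal, purely hypothetical simulation (every variable name and number is invented for illustration, not taken from the ORJS data). An outcome depends on both a group indicator and a second, correlated factor; a regression that includes that second factor as a control recovers the group’s true effect.

```python
import random

random.seed(0)
n = 50_000

# Hypothetical data-generating process (invented numbers, not ORJS data):
# the outcome y depends on group g AND a second factor d correlated with g.
rows = []
for _ in range(n):
    g = 1 if random.random() < 0.5 else 0                   # factor of interest
    d = 1 if random.random() < (0.6 if g else 0.3) else 0   # controlling factor
    y = 0.10 + 0.15 * g + 0.25 * d + random.gauss(0, 0.1)   # true g effect: 0.15
    rows.append((g, d, y))

def mean(v):
    return sum(v) / len(v)

def cov(a, b):
    """Sample covariance (cov(a, a) gives the variance)."""
    ma, mb = mean(a), mean(b)
    return sum((x - ma) * (z - mb) for x, z in zip(a, b)) / len(a)

gs, ds, ys = ([r[i] for r in rows] for i in range(3))

# Two-regressor OLS for y ~ g + d, solved from the normal equations:
# b_g = (cov(g,y)*var(d) - cov(d,y)*cov(g,d)) / (var(g)*var(d) - cov(g,d)**2)
det = cov(gs, gs) * cov(ds, ds) - cov(gs, ds) ** 2
b_g = (cov(gs, ys) * cov(ds, ds) - cov(ds, ys) * cov(gs, ds)) / det
print(round(b_g, 3))  # close to the true effect of 0.15
```

Because the regression “holds d fixed,” the estimate for g lands near its true value even though d is correlated with g.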

Given the complex nature of legal proceedings, however, it is worth considering what other factors — things that might also contribute to differences between sentences for individuals of various races — are left out of the study and not accounted for as potentially relevant to the differences in sentences by race.

Admittedly, I am not a criminal justice expert, but there are probably several categories of omitted factors in the ORJS study worth considering: education, employment status, income, for example — all things considered by judges. Characteristics of the defense attorney might also be important, e.g., public defender status, experience, education, prior experience with the judge. Other omitted factors might include the nature of the evidence and the circumstances of the crime — all things that are difficult to control for but might have an impact.

It is difficult to control for — or omit as a possible cause — all such factors.

At least some of those factors, moreover, might be correlated with race as well as the likelihood or length of a prison sentence.

Any factor that is omitted from a regression analysis and that is correlated with both the outcome and the variable of interest will cause the study to produce a biased result for the variable of interest.

Depending on the direction of correlation with the omitted factor, this bias will make the variable of interest — race in this case — look either more or less important than it actually is for the outcome being studied.

For example, the ORJS study does not include information about the use of a public defender. If, say, Black offenders are more likely to have a public defender than white offenders, and using a public defender is correlated with receiving a prison sentence more often, then the regression analysis will mistakenly attribute the public defender effect to the effect of being a Black offender because the study is missing that information.
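The public defender scenario above can be sketched numerically. In this purely hypothetical simulation (the probabilities are invented for illustration, not taken from the ORJS data), group membership has no direct effect on the outcome at all; only the omitted factor matters. A regression that leaves that factor out nonetheless “finds” a group effect.

```python
import random

random.seed(42)
n = 50_000

# Hypothetical setup (invented numbers): d = 1 means "had a public defender".
# The true model gives group g NO direct effect; only d raises the outcome.
rows = []
for _ in range(n):
    g = 1 if random.random() < 0.5 else 0
    d = 1 if random.random() < (0.6 if g else 0.3) else 0   # d correlated with g
    y = 0.20 + 0.25 * d + random.gauss(0, 0.1)              # no g term at all
    rows.append((g, d, y))

def slope(xs, ys):
    """Simple OLS slope of y on x: cov(x, y) / var(x)."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

gs = [r[0] for r in rows]
ys = [r[2] for r in rows]

# Omitting d, the regression attributes d's effect to g. The expected bias is
# 0.25 * (0.60 - 0.30) = 0.075, even though g's true direct effect is zero.
biased = slope(gs, ys)
print(round(biased, 3))
```

The estimated “group effect” comes out near 0.075 purely because the omitted factor is correlated with both group and outcome — exactly the mechanism described above.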

This logic is true for any variable that is not accounted for in a regression study. With only a short list of factors included in the ORJS study, there is a strong likelihood that the analysis does not produce a reliable estimate for how much offender race matters in sentencing.

The omitted factors, in other words, cause the study to produce a biased finding.

The ORJS study is far from the only piece of social science research to suffer from deficiencies caused by omitted variables.

Famously in economics, Alan B. Krueger (1993) studied how the advent of computer use in the workplace affected wages. Using methodology very similar to that of the ORJS study, Krueger found that workers who use a computer at their job earn 15% to 20% more than non-computer users, controlling for a host of other factors about the workers. Krueger concluded that computer use was creating a wage premium in the workforce.

But Krueger’s work did not have sufficient controlling variables — there are many other aspects of both workers and their jobs that are correlated with having higher wages and correlated with using a computer, so the study produced a biased estimate of how important computers are to earnings. To be clear: It isn’t that a computer doesn’t make workers more productive; it is just that Krueger’s study produced a biased result for how much more productive computers make workers.

This bias was famously demonstrated in work by John E. DiNardo and Jörn-Steffen Pischke (1997), who found the same estimated wage premium for workers who used a pencil on the job. Their point was that isolating the effect of a computer (or a pencil) from other factors in the workplace and about the worker, industry, employer and market is extremely difficult, and that these omitted factors drove much of the wage premium in Krueger’s work.

One of my areas of research is studying racial differences in the mortgage industry. There are myriad studies using regression analysis that show differences in loan outcomes between mortgage borrowers (or applicants) of different races. Nearly all of these studies find that minorities fare worse than whites when applying for or obtaining a mortgage. Some of these studies are extremely careful and able to control for many of the differences in borrower characteristics that could potentially cause bias.

These studies demonstrate two important facts: 1) The use of more and better control variables substantially reduces the difference in borrowing outcomes between different race borrowers; and 2) Even after painstaking use of control variables, differences in borrowing outcomes remain between race groups. I find these results useful as a starting point to examine why differences exist, a launching point to explore root causes of a troubling fact in the data, but advise using caution in interpreting the results because of the potential for omitted factors.

Some authors of these studies conclude that discrimination in the mortgage industry must be the cause of any remaining difference between race groups because of the extensive controls used. My own work in this area has shown direct evidence of discrimination happening in the mortgage industry using experiments,3 but the level of direct discrimination is not enough to explain all of the difference in average outcomes between minority and white borrowers. There are still likely to be unaccounted-for average differences between the groups (such as differences in search patterns, financial experience and networking) that matter for obtaining a mortgage.

I point to studies from the labor market and mortgage market as a caution to readers of the ORJS study who may conclude that average differences between race groups in prison sentencing are de facto evidence of racial discrimination by courts and judges.

The problems with omitted variable bias in the ORJS study are real, and the presence of these problems likely means that a true “all else equal” comparison would uncover a smaller difference in sentencing between race groups.

That is not to say that racial discrimination in sentencing does not exist; it certainly may. But the ORJS study does not produce sufficient evidence to reasonably allow one to come to that conclusion. Differences in group average sentencing by race are not something to ignore; they are likely indicative of deep differences between groups along several dimensions that contribute to outcome differences in sentencing.

It’s also possible that some of the difference is driven by policies — some of it may even be caused by racism and discrimination. Unfortunately, the ORJS study doesn’t allow us much of a window into what is causing the disparities between groups.

Andrew Hanson is an associate professor in the Real Estate Department at the University of Illinois at Chicago and a Badger Institute visiting fellow.


DiNardo, John E., and Jörn-Steffen Pischke. 1997. “The Returns to Computer Use Revisited: Have Pencils Changed the Wage Structure Too?” Quarterly Journal of Economics, Vol. 112, No. 1 (February), pp. 291-303.

Krueger, Alan B. 1993. “How Computers Have Changed the Wage Structure: Evidence from Microdata, 1984–1989.” Quarterly Journal of Economics, Vol. 108, No. 1 (February), pp. 33-60.

Wisconsin Court System, Office of Research and Justice Statistics. January 2020. “Race and Prison Sentencing in Wisconsin: Initial Outcomes of Felony Convictions, 2009-2018.”


1The difference between a prison sentence and another outcome is estimated using a “binary logistic regression model” in the report. This model accounts for the (0,1) nature of the outcome of interest (prison=1, some other punishment=0) and the shape of the underlying distribution when using a (0,1) outcome, but the model relies on the same underlying regression idea described here. Most importantly, the binary logistic regression model is subject to the omitted variable critique mentioned here.
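For readers unfamiliar with the model named in this footnote, here is a minimal pure-Python sketch of a binary logistic regression fit by gradient ascent. All data and coefficients are simulated and invented for illustration; a real analysis such as the ORJS report’s would use a statistics package.

```python
import math
import random

random.seed(1)
n = 5_000

# Simulated data (invented coefficients): the outcome is 1 with probability
# P(y=1|x) = 1 / (1 + exp(-(b0 + b1*x))), the binary logistic model.
true_b0, true_b1 = -0.5, 1.0
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [1 if random.random() < 1 / (1 + math.exp(-(true_b0 + true_b1 * x))) else 0
      for x in xs]

# Fit (b0, b1) by gradient ascent on the average log-likelihood.
b0 = b1 = 0.0
rate = 0.5
for _ in range(300):
    g0 = g1 = 0.0
    for x, y in zip(xs, ys):
        p = 1 / (1 + math.exp(-(b0 + b1 * x)))  # predicted probability
        g0 += y - p
        g1 += (y - p) * x
    b0 += rate * g0 / n
    b1 += rate * g1 / n

print(round(b0, 2), round(b1, 2))  # estimates near the true (-0.5, 1.0)
```

The fitted coefficients land near the values used to simulate the data — but, as with ordinary regression, that only holds if no relevant variable correlated with both x and y has been left out of the model.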

2Recent advances in social science research methodology tout using regression analysis in “natural experiment” settings that make it easier to identify changes to a specific variable of interest that are independent of other factors.

3This work uses experiments to ensure that there are absolutely no differences between minority and white potential clients and examines how these clients are treated by lenders, typically in terms of the information they are given or if their inquiry about credit receives a response.