On a spring afternoon in 2014, Brisha Borden was running late to pick up her godsister from school when she spotted a child's unlocked blue Huffy bicycle and a silver Razor scooter. Borden and a friend grabbed the bike and scooter and tried to ride them down the street in the Fort Lauderdale suburb of Coral Springs.
Just as the 18-year-old girls were realizing they were too big for the tiny conveyances — which belonged to a 6-year-old boy — a woman came running after them saying, "That's my kid's stuff." Borden and her friend immediately dropped the bike and scooter and walked away.
But it was too late — a neighbor who witnessed the act had called the police. Borden and her friend were arrested and charged with burglary and petty theft for the items, which were valued at a total of $80.
Compare their crime with a similar one: The previous summer, 41-year-old Vernon Prater was picked up for shoplifting $86.35 worth of tools from a nearby Home Depot store.
Prater was the more seasoned criminal. He had already been convicted of armed robbery and attempted armed robbery, for which he served five years in prison, in addition to another armed robbery charge. Borden had a record, too, but it was for misdemeanors committed when she was a juvenile.
Yet, something odd happened when Borden and Prater were booked into jail: A computer program spit out a score predicting the likelihood of each committing a future crime. Borden — who is black — was rated a high risk. Prater — who is white — was rated a low risk.
Two years later, we know the computer algorithm got it backward. Borden has not been charged with any new crimes. Prater is serving an eight-year prison term for subsequently breaking into a warehouse and stealing thousands of dollars' worth of electronics.
Scores like this — known as risk assessments — are increasingly common in courtrooms across the nation. They are used to inform decisions about who can be set free at every stage of the criminal justice system, from assigning bail amounts — as is the case in Fort Lauderdale — to even more fundamental decisions about defendants' freedom. In Arizona, Colorado, Delaware, Kentucky, Louisiana, Oklahoma, Virginia, Washington and Wisconsin, the results of such assessments are given to judges during criminal sentencing.
Rating a defendant's risk of future crime is often done in conjunction with an evaluation of a defendant's rehabilitation needs. The Justice Department's National Institute of Corrections encourages the use of such combined assessments at every stage of the criminal justice process. And a landmark sentencing reform bill pending in Congress would mandate the use of such assessments in federal prisons.
In 2014, then U.S. Attorney General Eric Holder warned that the risk scores might be injecting bias into the courts. He called for the U.S. Sentencing Commission to study their use. "Although these measures were crafted with the best of intentions, I am concerned that they inadvertently undermine our efforts to ensure individualized and equal justice," he said, adding, "they may exacerbate unwarranted and unjust disparities that are already far too common in our criminal justice system and in our society."
The sentencing commission did not, however, launch a study of risk scores. So ProPublica did, as part of a larger examination of the powerful, largely hidden effect of algorithms in American life.
ProPublica obtained the risk scores assigned to more than 7,000 people arrested in Broward County in 2013 and 2014 and checked to see how many were charged with new crimes over the next two years, the same benchmark used by the creators of the algorithm.
The score proved remarkably unreliable in forecasting violent crime: Only 20 percent of the people predicted to commit violent crimes actually went on to do so.
When a full range of crimes was taken into account — including misdemeanors such as driving with an expired license — the algorithm was somewhat more accurate than a coin flip. Of those deemed likely to reoffend, 61 percent were arrested for any subsequent crimes within two years.
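In statistical terms, that 61 percent figure is the precision of the high-risk label: of everyone flagged as likely to reoffend, what share actually was re-arrested. A minimal sketch of the comparison to a coin flip, using hypothetical counts (not ProPublica's actual Broward County tallies):

```python
# Hypothetical illustrative counts, assumed for this sketch.
flagged_high_risk = 1000           # defendants the algorithm deemed likely to reoffend
rearrested_within_two_years = 610  # of those, the number re-arrested in the follow-up window

# Precision: share of high-risk predictions that were borne out.
precision = rearrested_within_two_years / flagged_high_risk
coin_flip = 0.5

print(f"Share of high-risk predictions borne out: {precision:.0%}")   # 61%
print(f"Margin over a coin flip: {precision - coin_flip:.0%} points") # 11% points
```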
ProPublica also turned up significant racial disparities, just as Holder feared. In forecasting who would reoffend, the algorithm made mistakes with black and white defendants at roughly the same rate, but in different ways.
• The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.
• White defendants were mislabeled as low risk more often than black defendants.
Could this disparity be explained by defendants' previous crimes or the type of crimes they were arrested for? No. ProPublica ran a statistical test that isolated the effect of race from criminal history and recidivism, as well as from defendants' age and gender. Black defendants were still 77 percent more likely to be pegged as at higher risk of committing a future violent crime and 45 percent more likely to be predicted to commit a future crime of any kind.
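A statistical test of this kind is typically a logistic regression with race, criminal history, recidivism, age and gender as predictors; "77 percent more likely" then corresponds to an odds ratio of about 1.77 on the race coefficient. The sketch below shows only that coefficient-to-headline conversion, with assumed odds ratios — it is not ProPublica's actual model:

```python
import math

# Assumed odds ratios matching the figures quoted in the text.
odds_ratio_violent = 1.77  # higher odds of a high violent-risk score for black defendants
odds_ratio_any = 1.45      # higher odds of a high general-risk score

# A fitted logistic regression reports the coefficient, the log of the odds ratio.
beta_violent = math.log(odds_ratio_violent)
beta_any = math.log(odds_ratio_any)

print(f"race coefficient (violent): {beta_violent:.3f} "
      f"-> {math.exp(beta_violent) - 1:.0%} higher odds")
print(f"race coefficient (any crime): {beta_any:.3f} "
      f"-> {math.exp(beta_any) - 1:.0%} higher odds")
```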
The algorithm used to create the Florida risk scores is a product of a for-profit company, Northpointe. The company disputes ProPublica's analysis.
In a letter, it criticized ProPublica's methodology and defended the accuracy of its test: "Northpointe does not agree that the results of your analysis, or the claims being made based upon that analysis, are correct or that they accurately reflect the outcomes from the application of the model."
Northpointe's software is among the most widely used assessment tools in the country. The company, based in Traverse City, Mich., doesn't publicly disclose the calculations used to arrive at defendants' risk scores, so it isn't possible for defendants or the public to see what might be driving the disparity. (On Sunday, Northpointe gave ProPublica the basics of its future-crime formula, which includes factors such as education levels and whether a defendant has a job. It didn't share the specific calculations, which it said are proprietary.)
Northpointe's core product is a set of scores derived from 137 questions that are answered by defendants or pulled from criminal records. Race is not one of the questions. The survey asks defendants about such things as "Was one of your parents ever sent to jail or prison?" or "How many of your friends/acquaintances are taking drugs illegally?" or "How often did you get in fights while at school?" The questionnaire also asks people to agree or disagree with statements such as "A hungry person has a right to steal" and "If people make me angry or lose my temper, I can be dangerous."
The appeal of risk scores is obvious: The United States locks up far more people than any other country, and a disproportionate number of them are black. For more than two centuries, the key decisions in the legal process, from pretrial release to sentencing to parole, have been in the hands of human beings guided by their instincts and biases.
If computers could accurately predict which defendants were likely to commit new crimes, the criminal justice system could be more fair and more selective about who is incarcerated and for how long. The trick, of course, is to make sure the computer gets it right. If it's wrong in one direction, a dangerous criminal could go free. If it's wrong in the other direction, someone could unfairly receive a harsher sentence or wait longer for parole than is appropriate.
The first time Paul Zilly heard of his score — and realized how much was riding on it — was during his sentencing hearing on Feb. 15, 2013, in court in Barron County, Wis. Zilly had been convicted of stealing a push lawnmower and some tools. The prosecutor recommended a year in county jail and follow-up supervision that could help Zilly with "staying on the right path." His lawyer agreed to a plea deal.
But Judge James Babler had seen Zilly's scores. Northpointe's software had rated Zilly as a high risk for future violent crime and a medium risk for general recidivism. "When I look at the risk assessment," Babler said in court, "it is about as bad as it could be."
Babler overturned the plea deal that had been agreed on by the prosecution and defense, and imposed two years in state prison and three years of supervision.
Florida's Broward County, where Brisha Borden stole the Huffy bike and was scored as high risk, doesn't use risk assessments in sentencing. "We don't think the (risk assessment) factors have any bearing on a sentence," said David Scharf, executive director of community programs for the Broward County Sheriff's Office in Fort Lauderdale.
Broward County has, however, adopted the score in pretrial hearings, in the hope of addressing jail overcrowding. In 2008, the Sheriff's Office decided that instead of building another jail, it would begin using Northpointe's risk scores to help identify which defendants were low-risk enough to be released on bail, pending trial.
Since then, nearly everyone arrested in Broward has been scored soon after being booked. The scores are provided to the judges, who decide which defendants can be released from jail.
In 2010, researchers at Florida State University examined the use of Northpointe's system in Broward County over a 12-month period and concluded that its predictive accuracy was "equivalent" in assessing defendants of different races. They didn't examine whether different races were classified differently as low or high risk.
Broward County Judge John Hurley, who oversees most of the pretrial release hearings, said the scores were helpful when he was a new judge, but now that he has experience he prefers to rely on his own judgment.
Most crimes are presented to the judge with a recommended bail amount, but he or she can adjust the amount. Hurley said he often releases first-time or low-level offenders without any bail at all.
However, in the case of Borden and her friend Sade Jones, the teenage girls who stole a kid's bike and scooter, Hurley raised the bail amount for each girl from the recommended $0 to $1,000 each.
Hurley said he has no recollection of the case.
The girls spent two nights in jail before being released on bail.
"We literally sat there and cried," the whole time they were in jail, Jones recalled. The girls were kept in the same cell. Otherwise, Jones said, "I would have gone crazy." Borden declined repeated requests to comment for this article.