Should Courts Use Artificial Intelligence to Predict Future Criminal Behavior?


It may sound like something out of a science fiction movie, but cities across the country are already using artificial intelligence (AI) to assess whether someone accused of a crime should be released on bond prior to their trial.

The problem with using AI for this purpose, according to computer scientists and legal experts, is that algorithms sometimes make mistakes. These experts argue that court systems relying on AI to decide whether someone should be granted bail could be contributing to the country’s mass incarceration problem.

AI Experts Issue a Warning About Using Algorithms to Predict Crime

In a July 2019 opinion article published in the New York Times, two research scientists from MIT, along with a lawyer from Harvard’s Criminal Justice Policy Program, argued that pretrial risk assessment tools that use AI are “fundamentally flawed.”

According to the article, the United States jails about 500,000 people who have been accused of a crime but not yet convicted. As the MIT Technology Review states, “The U.S. imprisons more people than any other country in the world.” To put it into perspective, about 1 in 38 American adults is under some form of correctional supervision.

None of these individuals has been convicted of a crime. They are awaiting trial, and a court has determined that they can’t be trusted to show up in court at some future date, so they are denied bail and held in jail until their trial.

The problem is that not everyone in this group should be denied bail. The New York Times article goes on to note that the U.S. makes up just 4 percent of the world’s population, yet it holds 20 percent of the “global pretrial jail population.” In other words, relative to its size, the U.S. jails far more people before trial than the rest of the world.

The article states: “There are more legally innocent people behind bars in America today than there were convicted people in jails and prisons in 1980.”

Why Data Scientists Say AI Algorithms Are Flawed

The authors of the article argue that an inflated sense of risk is partly responsible for this enormous pretrial jail population. Understandably, judges don’t want to release a potentially violent person before trial.

At the same time, however, a low-risk individual shouldn’t be held in jail when they haven’t been convicted of a crime. Unnecessary pretrial detention can cause people to lose their jobs, and it can jeopardize their ability to care for their families.

In an effort to cut back on unnecessary pretrial incarceration, a growing number of U.S. cities have turned to AI-based risk assessment tools, which use statistical algorithms to predict whether someone is likely to commit a crime if released before trial.

These systems work by assessing factors such as a person’s criminal history, how long they’ve been employed, whether they rent or own a home, whether they have a mobile phone, and the city they live in.
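To make the mechanics concrete, here is a minimal sketch in Python of how a tool might fold inputs like these into a single score. Everything in it is an assumption made for illustration: the Defendant fields, the weights, and the cutoff logic are invented, since real pretrial tools are proprietary and their models are undisclosed.

```python
# Purely illustrative sketch of a pretrial risk score.
# The features, weights, and threshold below are invented for
# illustration; real tools use different inputs and undisclosed models.

from dataclasses import dataclass

@dataclass
class Defendant:
    prior_convictions: int   # criminal history
    years_employed: float    # employment stability
    owns_home: bool          # housing stability (rent vs. own)
    has_mobile_phone: bool   # reachability
    city_base_rate: float    # local arrest-rate proxy, 0.0 to 1.0

def risk_score(d: Defendant) -> float:
    """Return a score in [0, 1]; higher means 'riskier' under these
    made-up weights."""
    score = 0.0
    score += min(d.prior_convictions, 5) * 0.10  # cap the history term
    score -= min(d.years_employed, 5) * 0.04     # stability lowers score
    score -= 0.10 if d.owns_home else 0.0
    score -= 0.05 if d.has_mobile_phone else 0.0
    score += d.city_base_rate * 0.30
    return max(0.0, min(1.0, score))             # clamp to [0, 1]

# Example: a defendant with two priors, steady work, and a phone.
d = Defendant(prior_convictions=2, years_employed=4.0,
              owns_home=False, has_mobile_phone=True,
              city_base_rate=0.2)
print(f"risk score: {risk_score(d):.2f}")  # flagged if above some cutoff
```

Even in this toy version, notice that one fixed set of weights is applied to every defendant in every city, which previews the “one-size-fits-all” criticism discussed later in this article.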

While these assessments may sound worthwhile, the experts who authored the opinion piece say that “risk assessments are virtually useless for identifying who will commit violence if released pretrial.”

According to the article’s authors, these predictions are close to meaningless because violent crime is statistically rare, and rare events are extremely difficult for any statistical model to predict accurately.
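A quick back-of-the-envelope calculation shows why rarity is so damaging. All of the numbers below are assumptions chosen for illustration, not figures from the cited articles: suppose 1 percent of a screened population would actually commit a violent crime if released, and suppose a tool catches 90 percent of those people while wrongly flagging only 10 percent of everyone else.

```python
# Illustrative base-rate arithmetic; every number here is an
# assumption for the sake of the example, not a statistic from
# the cited articles.

population = 100_000        # defendants screened
base_rate = 0.01            # assume 1% would commit a violent crime
sensitivity = 0.90          # tool flags 90% of true future offenders
false_positive_rate = 0.10  # tool wrongly flags 10% of everyone else

offenders = population * base_rate          # 1,000 people
non_offenders = population - offenders      # 99,000 people

true_positives = offenders * sensitivity                # 900 people
false_positives = non_offenders * false_positive_rate   # 9,900 people

flagged = true_positives + false_positives
precision = true_positives / flagged

print(f"flagged as high risk: {flagged:,.0f}")
print(f"of those, actual future offenders: {precision:.0%}")
# Roughly 9 out of 10 people flagged 'high risk' would not have
# committed a violent crime -- the wide net in action.
```

Under these fairly generous assumptions, roughly nine out of ten people labeled “high risk” would never have committed a violent crime, which is the statistical root of the over-detention the authors describe.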

As the arithmetic above suggests, any such risk assessment system is likely to be deeply flawed or uncertain. To avoid missing the rare person who truly poses a danger, the algorithms cast a wide net, and an unfortunate side effect is that the net frequently snares people who shouldn’t be sitting in jail while they wait for their trial to begin.

Disparities Among Various Risk Assessment Tools

According to data scientists, risk assessment tools also vary a great deal depending on which type of software is being used.

The executive director of the Stanford Computational Policy Lab said he reviewed 100,000 judicial bail decisions and found that some judges released 90 percent of the individuals who appeared before them, while other judges released only around 50 percent.

As Dallas criminal defense lawyer Mick Mickelsen puts it: “Although AI may develop algorithms that identify risk factors for future criminal behavior, most often human emotions are the driving force behind a given criminal act and thus will forever elude any rationally based method of prediction.”

According to the director, the problem with today’s risk assessment tools is that they fail to review a defendant’s case on an individual basis. Instead, these tools “take a one-size-fits-all approach and are typically not tailored to the needs of specific jurisdictions.”

The director points out that different cities measure risk in different ways, which should be taken into account when a court conducts a risk assessment.

In July 2018, more than 100 organizations, including the ACLU and the NAACP, signed a petition urging cities to stop using AI risk assessment tools. Even so, according to the MIT Technology Review, the tools remain increasingly popular.

Talk to a Dallas Criminal Defense Lawyer About Your Case

If you have been charged with a crime, it’s important to discuss your case with an experienced Dallas criminal defense lawyer.

Best Dallas Criminal Defense Lawyers

Broden & Mickelsen, LLP
(T): 214-720-9552

Sources:

  1. https://www.nytimes.com/2019/07/17/opinion/pretrial-ai.html
  2. https://engineering.stanford.edu/magazine/article/can-ai-help-judges-make-bail-system-fairer-and-safer
  3. https://www.technologyreview.com/s/612775/algorithms-criminal-justice-ai/

***ATTORNEY ADVERTISING***

Prior results cannot and do not guarantee or predict a similar outcome with respect to any future case.

Mick Mickelsen is a nationally recognized criminal trial attorney with more than 30 years of experience defending people charged with white-collar crimes, drug offenses, sex crimes, murder, and other serious state and federal offenses.