Digital Justice
By Albert B. Kelly
I’ve been encouraged lately by the fact that elected leaders
have been taking a serious-minded approach to bail reform, something that
impacts the lives of many.
Without reform, millions of people would sit in jail cells
for lesser offenses because they couldn’t come up with the few hundred dollars
in bail that would let them continue with their lives while their cases were
adjudicated.
The result was devastated families and hugely unnecessary
costs to taxpayers at large. People lost their jobs, belongings, apartments,
and even children; the punishment was way out of proportion to the offense.
In terms of the larger community, allowing people’s lives to
unravel for want of a few bucks on the front end tends to get awfully expensive
on the back end through taxes, lost productivity, desperation, and dozens of
other indirect costs we might not consider.
So the move on bail reform was long overdue. But there’s
something else lurking in our data-driven, algorithm-loving world that takes
inequality and bias and quietly makes them part of society’s DNA.
Most people have no idea what “LSI-R” stands for, and that’s
probably a good thing; it’s short for “Level of Service Inventory-Revised.” To
most of us it means nothing, but for those hoping for parole and a chance to
restart their lives, it’s everything.
The Level of Service Inventory-Revised (LSI-R) is basically an
algorithm, sold by a Canadian company (Multi-Health Systems), that’s used as
an assessment tool to measure the risk that a person might commit a future
crime or otherwise pose a danger to society if released.
When someone enters prison to start their sentence, they
fill out an LSI-R form, a two-page true/false checklist with a numeric
rating system that is supposed to assess risk based on the person’s criminal
history, substance-abuse history, financial history, level of education,
personality, and attitude.
Other variables, or inputs, include family/marital status,
leisure/recreation, emotional/personal factors, a family’s economic status,
crime in the home neighborhood, friends or family with a criminal
history, and so on.
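To make the mechanics concrete, here is a minimal sketch, in Python, of how a checklist-and-weights instrument of this general kind might work. The domain names follow the descriptions above, but the individual items, the weights, and the risk cutoffs are all invented for illustration; the actual LSI-R scoring is proprietary and is not being quoted here.

# A hypothetical sketch of a checklist-style risk score. The domains
# mirror the ones described above, but the items, weights, and risk
# bands are invented; the real LSI-R scoring is proprietary.

intake_answers = {
    # True/false answers gathered at intake, grouped by domain.
    "criminal_history":   [True, True, False],  # e.g. prior convictions
    "substance_abuse":    [True, False],
    "financial":          [True, True],         # e.g. unemployed, in debt
    "education":          [True],               # e.g. no diploma
    "family_marital":     [False, True],
    "leisure_recreation": [True],
    "companions":         [True, False],        # friends/family with records
    "emotional_personal": [False],
}

weights = {
    # Per-domain weights; these numbers are made up.
    "criminal_history":   3.0,
    "substance_abuse":    2.0,
    "financial":          1.5,
    "education":          1.5,
    "family_marital":     1.0,
    "leisure_recreation": 0.5,
    "companions":         2.0,
    "emotional_personal": 1.0,
}

def risk_score(answers, domain_weights):
    """Sum of weighted 'true' answers: each flagged item adds its domain weight."""
    return sum(domain_weights[d] * sum(items) for d, items in answers.items())

def risk_band(score):
    """Map the raw score to a coarse label (cutoffs are made up)."""
    if score < 5:
        return "low"
    if score < 12:
        return "moderate"
    return "high"

score = risk_score(intake_answers, weights)
print(score, risk_band(score))  # 16.0 high

Even in this toy version, every number in the weights table is a human judgment, and items like employment and neighborhood fold a person’s circumstances, not just their conduct, into the score.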
While some states use LSI-R or something similar for both
sentencing and parole, in New Jersey it’s used as part of the parole hearing
process.
So what could be wrong with something as data-driven and “scientific” as an
algorithm?
The most glaring problem is that it’s a snapshot, frozen in
time: it captures only what was true when an individual started their term
of incarceration and does not properly account for what’s true years into a
person’s sentence.
My guess is that years in prison change a person, and while
some people change for the worse, just as many earn diplomas and degrees, gain
skills through training, become drug-free, and end up quite different from who
they were when entering prison. Does the algorithm capture any of these
variables or inputs?
While having a job lined up, a home address in a better zip
code, a stable family to embrace you, and hobbies to occupy your time are all
good things when eyeing parole, it’s not a stretch to say that an
algorithm that gives undue weight to these “variables” will disproportionately
penalize low-income and urban folks who may never have had them in the best of
times.
Maybe we need a new algorithm with new “variables,” or,
lacking that, maybe the existing algorithm needs to give greater weight to the
rehabilitative and restorative work an inmate does while incarcerated.
That’s not unreasonable when you consider that most of the
folks writing algorithms these days are upper-income, male, and somewhere
between 25 and 40 years old. Even with the best of intentions, the
algorithms they write reflect their biases about family stability, circles of
friends, zip codes, and attitudes.
But if the goal, in addition to punishment, is actually to
rehabilitate people and lower recidivism, then we need serious programs and
mechanisms geared toward rehabilitation, along with more current evaluations:
a new snapshot in time.
However, productive and successful use of those programs and
mechanisms also needs to be reflected in the algorithms that determine
release, so that an inmate knows that doing the hard, serious work of
rehabilitation (or not) will actually matter when it comes time for their
hearing.
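To picture what that could look like, here is a continuation of the earlier hypothetical sketch: fold a rehabilitation credit into the intake score, so the number in front of the parole board reflects what a person has done since intake rather than the intake snapshot alone. The program names and credit values are invented for illustration; no real instrument is being quoted here.

# Continuing the hypothetical sketch: discount the intake score with
# credits for documented in-prison progress. All names and values here
# are invented for illustration.

rehab_credits = {
    "earned_diploma_or_degree":  3.0,
    "completed_drug_treatment":  2.5,
    "vocational_training":       2.0,
    "clean_disciplinary_record": 1.5,
}

def updated_score(intake_score, completed_programs):
    """Re-score at hearing time: subtract a credit for each completed
    program, never dropping below zero."""
    credit = sum(rehab_credits.get(p, 0.0) for p in completed_programs)
    return max(0.0, intake_score - credit)

# The same person from the intake sketch, years later.
print(updated_score(16.0, ["earned_diploma_or_degree",
                           "completed_drug_treatment"]))  # 10.5

Whether credits should be subtracted this simply, and how large they should be, are exactly the design choices that, under a proprietary regime, no one outside the vendor can inspect or challenge.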
Because algorithms are proprietary, they can’t be
challenged, yet they render judgments based on assumptions that quietly get
baked into the cake. Something needs to change; otherwise all we’ve got is a
stacked deck that seems perfectly legitimate because it feels “scientific.”