
Algorithms in the Courtroom: Can AI Replace Judicial Discretion in Criminal Sentencing?

Writer - Mr. Sourav 


Justice in the Age of Code

What happens when sentencing starts looking like a score?

Around the world, criminal justice systems are experimenting with algorithmic tools that claim to predict the likelihood of reoffending. What was once a deeply human exercise—shaped by experience, moral judgment, and discretion—is increasingly being “assisted” by data-driven recommendations. The real question is not whether AI can compute faster, but whether it can participate in justice without distorting it.

Why algorithmic sentencing looks attractive

In jurisdictions like the United States, risk assessment tools (such as COMPAS) have been used to inform decisions connected to punishment and release. These systems promise three things courts badly want:

Efficiency in overloaded systems

Consistency across similar cases

Reduced bias, by replacing subjective intuition with data

But the moment we treat a risk score as neutral truth, we step onto dangerous ground.

The problem: bias doesn’t disappear—it gets encoded

Algorithms learn from historical data. And criminal justice data often reflects older inequalities—patterns of policing, arrests, prosecution choices, and socio-economic disadvantage. When that data trains a model, the result can be a system that repeats discrimination with the authority of mathematics.

The COMPAS controversy became emblematic of this fear: that certain communities could be labelled “high risk” more often, even when actual outcomes were more complicated. The lesson is simple: AI may automate bias rather than eliminate it.
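The encoding effect described above can be sketched with a toy simulation (every number and the scoring rule here are invented for illustration, not drawn from any real tool): two groups have identical true reoffending behaviour, but one is policed more heavily, so a score built purely on recorded arrests flags that group "high risk" more often.

```python
import random

random.seed(0)

# Invented parameters: both groups share the SAME true reoffending
# rate, but group A's offences are recorded twice as often.
TRUE_REOFFEND_RATE = 0.3
RECORDING_RATE = {"A": 0.8, "B": 0.4}  # chance an offence enters the data

def recorded_priors(group):
    """Simulate one person's recorded prior arrests over 5 periods."""
    offences = [random.random() < TRUE_REOFFEND_RATE for _ in range(5)]
    return sum(o and random.random() < RECORDING_RATE[group]
               for o in offences)

def naive_risk_score(priors):
    # A "data-driven" rule learned from arrest records:
    # more recorded arrests -> "high risk" label.
    return "high" if priors >= 2 else "low"

N = 10_000
counts = {}
for group in ("A", "B"):
    high = sum(naive_risk_score(recorded_priors(group)) == "high"
               for _ in range(N))
    counts[group] = high / N

# Group A is labelled "high risk" roughly three times as often as
# group B, despite identical underlying behaviour.
print(counts)
```

The disparity here comes entirely from how the data was generated, not from anything the scoring rule "knows" about either group, which is the point: a model can be formally group-blind and still reproduce the unevenness of the records it learns from.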

What judicial discretion does that AI cannot

Sentencing is not only prediction. It is also moral and legal evaluation.

A judge weighs things that are hard to convert into variables:

degree of intention and culpability

personal circumstances and the possibility of reform

proportionality between offence and punishment

the broader social context of the act

An algorithm can estimate risk. It cannot explain, as a judge must, why a particular punishment is deserved, or how it serves the purposes of criminal law (deterrence, reformation, protection of society, and fairness).

Due process and the “black box” concern

Another issue is transparency. Many systems are not fully explainable, sometimes due to technical complexity or proprietary design. That creates a serious fairness question:

How can an accused challenge a sentencing recommendation if the reasoning behind it cannot be meaningfully understood or tested?

In criminal law, where liberty is at stake, opacity is not a minor technical flaw—it is a constitutional problem.

India: not there yet, but not far from the debate

India hasn’t formally adopted AI-based sentencing tools. Still, as courts digitize and adopt legal-tech systems, the possibility of algorithmic “assistance” is no longer hypothetical. If such tools ever enter sentencing, they must be compatible with Article 14 (equality) and Article 21 (fair procedure and personal liberty)—meaning transparency, accountability, and human oversight cannot be optional.

Conclusion: AI may assist, but it cannot replace judgment

Used carefully, AI can help courts with pattern recognition, data summaries, and consistency checks. But sentencing is where law meets human dignity. A judge can be questioned, appealed, and held accountable. A score cannot.

Algorithms may inform decisions. They should not become the decision-maker.

