Pedro das Neves
Correctional services around the world continue to grapple with persistent challenges: overcrowded facilities, high recidivism rates, and insufficiently resourced rehabilitation efforts. While many of these issues must ultimately be addressed upstream in the justice system – through measures such as alternative sentencing, diversion programs, and efforts to reduce unnecessary incarceration – there is also a pressing need to strengthen the application of evidence-based rehabilitation practices within custodial and community settings.
Approaches grounded in risk-need-responsivity (RNR) principles, individualized case management, and cognitive-behavioral therapy have proven effective in promoting desistance and supporting reintegration. Building on these foundations, there is growing promise that Artificial Intelligence (AI) can serve as a powerful tool to enhance, scale, and personalize such interventions. As agencies seek more humane, effective, and equitable solutions, AI is emerging as a valuable complement to professional expertise – supporting decision-making, optimizing resources, and improving outcomes for individuals and communities alike.
From dynamic risk assessments to always-available mental health support, AI promises to enhance how we rehabilitate people in custody and during their transition back into society.
But innovation in justice must be matched by responsibility. For AI to contribute meaningfully to public safety and rehabilitation, its implementation must be guided by strong policy, ethical frameworks, and a commitment to data integrity and inclusion. In corrections, perhaps more than anywhere else, the stakes are deeply human.
The data dilemma: a fragile foundation
AI thrives on data. But in corrections, data is often fragmented, of poor quality, or riddled with bias. In many jurisdictions, basic information about inmate histories, behaviors, or program participation is still stored in paper files or non-standardized databases. Without digitization and integration, AI cannot produce reliable insights. And when the underlying data reflects systemic disparities, such as the over-policing of certain communities, those same injustices risk being amplified by algorithms.
Investments in modern, secure, interoperable Offender Management Systems (OMS), along with clear data governance standards, must be a policy priority. Only then can AI support decision-making that is both intelligent and fair.
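For the technically inclined reader, a minimal sketch of what such data governance standards imply in practice: before any record feeds an analytics pipeline, it should be screened for completeness. The field names below ("case_id", "intake_date", "program_history") are purely illustrative and not drawn from any real OMS.

```python
# Minimal sketch: screening OMS case records for completeness before analysis.
# Field names are hypothetical, for illustration only.

REQUIRED_FIELDS = {"case_id", "intake_date", "program_history"}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems found; an empty list means the record passes."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    for field in REQUIRED_FIELDS & record.keys():
        if record[field] in (None, "", []):
            problems.append(f"empty field: {field}")
    return problems

records = [
    {"case_id": "A-001", "intake_date": "2024-03-01", "program_history": ["CBT"]},
    {"case_id": "A-002", "intake_date": "", "program_history": []},
]
reports = {r["case_id"]: validate_record(r) for r in records}
```

A real governance framework would go much further – standardized vocabularies, audit trails, retention rules – but even this simple gate illustrates why digitization must precede any meaningful use of AI.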
Predictive analytics: smarter decisions, earlier interventions
AI-driven predictive analytics can help correctional agencies move from reactive to preventive strategies. Instead of relying solely on static risk factors, machine learning models can draw from a richer set of variables to forecast an individual’s likelihood of reoffending and recommend tailored interventions. Used ethically, this means resources (e.g., therapy slots, housing, education) can be prioritized for those most in need. In community corrections, real-time analytics could detect early signs of relapse or crisis – such as missed appointments or changes in behavior – and alert supervision officers before harm occurs. However, safeguards are essential. Some studies have shown how biased training data can lead to risk algorithms that overpredict reoffending among Black defendants or women. Transparency, independent audits, and the right to challenge algorithmic assessments should be mandated wherever AI influences liberty or rehabilitation outcomes.
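To make the overprediction concern concrete, the sketch below computes the kind of statistic an independent audit would examine: the false-positive rate – the share of people who did not reoffend but were nonetheless flagged as high risk – compared across two groups. All data here is synthetic and illustrative; a real audit would use actual assessment outputs and validated outcome data.

```python
# Toy audit sketch: comparing false-positive rates of a risk tool across groups.
# All records are synthetic; "A" and "B" stand in for any protected groups.

def false_positive_rate(cases):
    """Share of non-reoffenders who were wrongly flagged as high risk."""
    negatives = [c for c in cases if not c["reoffended"]]
    flagged = [c for c in negatives if c["flagged_high_risk"]]
    return len(flagged) / len(negatives) if negatives else 0.0

cases = [
    {"group": "A", "flagged_high_risk": True,  "reoffended": False},
    {"group": "A", "flagged_high_risk": False, "reoffended": False},
    {"group": "A", "flagged_high_risk": True,  "reoffended": True},
    {"group": "B", "flagged_high_risk": False, "reoffended": False},
    {"group": "B", "flagged_high_risk": False, "reoffended": False},
    {"group": "B", "flagged_high_risk": True,  "reoffended": True},
]

rates = {
    g: false_positive_rate([c for c in cases if c["group"] == g])
    for g in ("A", "B")
}
# Group A: 1 of 2 non-reoffenders flagged (0.5); Group B: 0 of 2 (0.0).
disparity = rates["A"] - rates["B"]
```

A persistent gap of this kind – here, group A bearing all the erroneous high-risk flags – is exactly what mandated transparency and audit requirements are meant to surface and correct.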
AI as mentor, therapist, and teacher
AI’s promise goes beyond risk assessment and scoring. In many systems where staff are overwhelmed or under-resourced, AI can serve as a 24/7 support tool for inmates: helping them learn, cope, and grow.
In the United States, the “Echo” AI chatbot has been deployed in prisons to offer trauma-informed, conversational support based on cognitive-behavioral therapy¹.
Inmates use it to manage anger, anxiety, and stress, reducing behavioral infractions and boosting engagement in rehabilitation programs. AI tutors may also help inmates pursue literacy, vocational training, or higher education. In Finland, incarcerated individuals contributed to AI projects by labeling data – building both digital skills and confidence for post-release employment. These innovations show that AI, if designed with empathy and purpose, can be an empowering force. Still, these tools must complement, not replace, human relationships. AI cannot replicate the full nuance of therapeutic care, nor should it be the only source of support for someone navigating trauma or reintegration.
LLMs and agentic AI: technologies for staff and inmates
The rise of Large Language Models (LLMs), like ChatGPT, and agentic AI brings new use cases into view. Virtual assistants can help probation officers summarize case histories, draft reports, or receive alerts about clients at elevated risk. Inmates can use AI mentors to understand legal rights, access reentry resources, or receive step-by-step coaching on job searches and parole compliance. Even staff training can benefit: AI-driven role-play scenarios can help correctional professionals practice de-escalation, motivational interviewing, or restorative justice dialogue. Yet these powerful models must be applied carefully. AI “hallucinations” or misjudgments in legal contexts may have serious consequences. Clear boundaries and human oversight are essential.
Avoiding a digital divide in corrections
AI adoption also raises a critical equity concern: the risk of deepening the digital divide within and between correctional systems. Already, a gap is growing between well-resourced institutions equipped with modern physical and digital infrastructure and underfunded ones still operating on paper.
Some inmates gain digital skills and access AI tutors; others leave prison unable to use a smartphone.
If not addressed, this divide could lead to unequal rehabilitation outcomes, and ultimately, unequal chances at desistance and reintegration.
National and international efforts must ensure that innovation is not limited to a few “model” prisons or privileged jurisdictions. Equity in access to digital tools must be part of any rehabilitation strategy in the 21st century.
Responsible innovation: the policy imperative
To guide this transformation, governments must adopt clear and forward-looking policies. It is essential to establish robust standards for data quality, transparency, and fairness, ensuring that AI systems operate on accurate and unbiased information. There is also a critical need to ensure that AI tools remain explainable and subject to human oversight, especially when they affect decisions related to personal liberty or access to rehabilitation opportunities².
Authorities must protect individual privacy and secure informed consent, particularly in contexts involving surveillance or the handling of sensitive personal data. At the same time, they should ensure digital access and provide training for both staff and inmates, addressing the risk of digital exclusion and enabling effective engagement with AI technologies. It is equally important to fund ongoing research, pilot initiatives, and impact evaluations to ensure that AI tools are tested, refined, and validated across diverse correctional settings.
Finally, such research and evaluation must become standard practice rather than a one-off exercise. As AI becomes more integrated into correctional and community supervision systems, policymakers must assess its real-world impacts on rehabilitation outcomes, institutional performance, fairness, and public safety. Evidence – not hype – must guide implementation, and regulatory frameworks must remain agile enough to respond to emerging risks, challenges, and lessons learned.
Towards a more humane and intelligent justice
The time is now. Policymakers can no longer ignore the accelerating role of AI in justice systems. While there is no one-size-fits-all solution – and caution is both necessary and responsible – the rapid pace of innovation calls for informed and deliberate engagement. Those who take the time to understand and responsibly adopt AI technologies – grounded in ethics, inclusion, and evidence – will be better positioned to shape a more humane and effective justice system.
Hesitation carries the risk of falling behind – but the path forward demands caution and responsibility. The deployment of AI in corrections must be accompanied by clear safeguards and inclusive strategies that leave no one behind. If done well, AI can help us finally deliver on the promise of rehabilitation: giving every individual not just a sentence, but a second chance.
¹ A comprehensive 8-month study conducted by the Center for Justice and Tech in collaboration with ClearPath Corrections found that inmates who used Echo regularly (4+ times/week) experienced a 28% drop in behavioral infractions, including violent outbursts and self-harm incidents; self-reported surveys showed a 40% increase in emotional self-awareness and ability to de-escalate during stressful moments; participation in voluntary rehabilitation programs – such as addiction recovery, GED prep, and job training – rose by 32% in Echo users; staff reported a noticeable decrease in late-night emergency calls, freeing up resources and improving safety for both inmates and personnel.
² Europe’s AI Act, adopted in 2024, serves as a valuable reference. Its risk-based framework and strict requirements for high-risk applications, particularly in the fields of law enforcement and justice, underscore the importance of human oversight, transparency, and accountability. Different jurisdictions can look to this model as they design or update their own regulatory and institutional frameworks.
Sources available upon request.
Contact the author at [email protected]
Pedro das Neves is the CEO of IPS Innovative Prison Systems and Director of ICJS Innovative Criminal Justice Solutions. With over 20 years of experience, he has led justice reform initiatives across Europe, North America, Latin America, the Middle East, and Central Asia. Pedro has worked extensively with governments, the UNODC, the European Commission, and the Inter-American Development Bank (IDB), focusing on offender management, risk assessment, PCVE, and the modernization of correctional systems. He is an expert in designing and implementing AI-powered tools and digital solutions, such as the HORUS 360 iOMS, aimed at enhancing security, rehabilitation, and reducing recidivism. Pedro is the founder and editor-in-chief of JUSTICE TRENDS magazine and serves as a board member of the International Corrections and Prisons Association (ICPA). His contributions have earned him prestigious awards, including the ICPA Correctional Excellence Award. Pedro holds advanced qualifications from renowned institutions, including the College of Europe, the University of Virginia (Digital Transformation), MIT (Digital Transformation), and the University of Chicago (Artificial Intelligence), cementing his position as a leader in innovation and digital transformation within the justice sector.