Can AI Truly Predict Crime? Insights from George Kailas

George Kailas, CEO at Prospero.ai, points out that the effectiveness of AI in this realm is a double-edged sword, fraught with ethical implications and the risk of reinforcing existing biases.

Aanchal Ghatak

Argentina has announced its ambitious plan to emerge as a leader in crime prevention through the launch of an innovative artificial intelligence program. This initiative aims to leverage AI to predict future crimes, utilizing advanced technologies for surveillance, social media monitoring, and facial recognition.

George Kailas, CEO at Prospero.ai, points out that the effectiveness of AI in this realm is a double-edged sword, fraught with ethical implications and the risk of reinforcing existing biases. By analysing historical crime data, algorithms can identify patterns and trends, offering a proactive approach to policing.

Yet the question remains: can AI truly deliver on its promise without compromising fairness and equity in the justice system? Kailas discusses the challenges and opportunities presented by this emerging technology, highlighting the need for careful implementation and oversight. Excerpts:

Can AI accurately predict future crimes?

When Tom Cruise starred in “Minority Report” in the early 2000s, his character assisted law enforcement with the prediction and prevention of crime through psychic technology. The idea that technology could predict future crimes once seemed like a Hollywood dream. Today, that movie plot is becoming a reality.

The potential benefits of AI in crime prediction are immense, from resource allocation to identifying underlying social issues that contribute to crime. However, we must approach this technology with caution.

At the University of Chicago, a team of scientists created an AI-based algorithm that utilizes historical crime data in an effort to predict future criminal occurrences. AI learns from data and follows a set of rules – which is what we call an algorithm – in order to recognize patterns for prediction. By inputting criminal history from previous years and using the developed algorithm, AI can help predict future crimes.
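To make the idea concrete, here is a minimal sketch of that pattern-learning loop: a simple model fit to historical per-area incident counts so it can score future risk. Everything below – the data, the features, and the model choice – is an illustrative assumption, not the University of Chicago team’s actual algorithm.

```python
# Minimal sketch: learn patterns from historical counts, then score risk.
# All data is synthetic; features and model are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000  # hypothetical (area, week) observations

# Illustrative features: incident counts in the prior week and prior month.
X = np.column_stack([rng.poisson(3, n), rng.poisson(12, n)])
# Synthetic target: does the following week see an incident?
# Loosely tied to recent history, plus noise.
y = (X[:, 0] + 0.25 * X[:, 1] + rng.normal(0, 2, n) > 6).astype(int)

model = LogisticRegression().fit(X, y)
print("risk, quiet area:", model.predict_proba([[1, 4]])[0, 1])
print("risk, busy area: ", model.predict_proba([[8, 30]])[0, 1])
```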

However, whether it does so accurately is a different conversation. The team at the University of Chicago reported a 90% accuracy rate for their technology’s predictions, which was tested in Chicago and in roughly eight other high-crime cities, including Los Angeles and Philadelphia, with similar results. In Argentina, law enforcement is utilizing similar technology and even building on it by implementing social media monitoring; they claim high accuracy rates as well. So, while I do believe AI can assist in predicting future crime, I think its accuracy rate will require more testing. This is not the type of classification where much of an error rate should be acceptable.
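A short illustration of why a headline accuracy figure deserves scrutiny: when crime events are rare, a model that always predicts “no crime” can score high accuracy while catching nothing, so precision and recall matter at least as much. The numbers below are invented purely for illustration.

```python
# Accuracy can look impressive on rare-event data even for a useless model.
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score

rng = np.random.default_rng(1)
y_true = (rng.random(10_000) < 0.05).astype(int)  # assume 5% of area-weeks see a crime
y_naive = np.zeros_like(y_true)                   # always predict "no crime"

print("accuracy: ", accuracy_score(y_true, y_naive))                 # ~0.95, yet useless
print("recall:   ", recall_score(y_true, y_naive, zero_division=0))  # 0.0, misses everything
print("precision:", precision_score(y_true, y_naive, zero_division=0))
```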

What are the limitations of AI in crime prediction?

When you’re in technology – or any STEM field, for that matter – things are often viewed objectively. These industries tend to lean on logic and practicality, even within their innovations and creations. We can train AI to follow the algorithms we’ve created and to analyse the data we’ve inputted, but one thing we cannot do is give AI the emotional intelligence and awareness of a real human being. Thus, a huge ethical concern surrounding this prediction technology is the potential bias it may exhibit.

This is a two-fold ethical concern. First, the technology is being trained with historical data that is littered with racial bias. It cannot recognize that it may be unfairly targeting communities of colour because, in the past, law enforcement and justice in this nation were marred by systemic oppression against minorities.

This filters into the second point of contention: the preexisting bias of law enforcement today. Reports coming out of the University of Chicago show that police officers have their own biases against certain communities, and this can result in them rushing to judgment. If not used properly and diligently, this technology could reinforce existing racial and economic biases.

The other issue sits at the core of AI generally: it is not best practice to ask an AI to do something a human cannot. And we have no evidence that people are accurate at predicting crimes, so it is always fair to ask yourself how a computer can effectively do something a human cannot.

How does AI compare to traditional methods of crime prediction?

When you compare AI to traditional crime prediction, the difference is night and day. Traditional methods rely on human intuition and historical data, but they’re reactive—waiting for crime to happen before acting. AI, on the other hand, processes mountains of real-time data, from social media to facial recognition, predicting crimes before they occur. But here’s the uncomfortable truth: while AI has incredible potential, it’s walking a fine line. Feed it biased data, and it will reinforce harmful stereotypes, leading to profiling and ethical nightmares. We’re diving headfirst into a world where algorithms decide who’s guilty before the crime even happens, so are we ready for that? Or are we setting ourselves up for a future of AI-driven injustice?

If we can figure out a way to innovate in the right directions, creating safeguards that do not currently exist, this technology has the potential to change the world for the better – especially if that includes more effective interventions that rely not on punishment and imprisonment but on the right support resources to prevent crime without aggressive tactics.

What are the ethical implications of using AI for crime prediction?

There’s no way to skirt around the reality: communities of colour, especially Black communities, are disproportionately affected by law enforcement. These groups have faced higher levels of surveillance, profiling, and general targeting compared to their white counterparts. Our incarceration system highlights this: nearly 40% of those incarcerated in this country are Black.

As I previously said, the ethical concerns surrounding this technology are very real. AI assumes biases from the historical data that is filtered into it. This can result in discriminatory policing of communities of colour. Additionally, the mistrust of law enforcement that exists in these communities only increases with the lack of transparency regarding the advancement of this prediction technology. Law enforcement’s actions based on this technology could result in deeper mistrust of the law within these communities.

What types of data are used to train AI models for crime prediction?

A significant amount of data has to go into these computing models in an effort to ensure accuracy. I think it’s imperative to consistently emphasize the importance of precision when it comes to this technology. One mistake can result in a world of trouble. Publicly available data is a primary source.

This includes historical criminal information, demographics in the area, geographic landscape, weather patterns, and socioeconomic statistics. Additionally, teams have begun monitoring social media as it has become a large source of information. Established detection sources are also still in play; these can be traffic or public/CCTV cameras and license plate readers.

License plate readers paired with AI have been shown to help solve especially violent crimes, but we need to be careful about thoughtful proliferation, given both their efficacy and the concerns shared above.
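As a rough sketch of how the sources listed above might come together, the snippet below joins hypothetical per-area crime counts, census figures, and weather into a single training table. All table and column names are invented; a real pipeline would also handle missing data, lagged features, and many more sources.

```python
# Joining hypothetical data sources into one training table, keyed by area and week.
import pandas as pd

crimes = pd.DataFrame({"area": ["A", "B"], "week": [1, 1], "incidents": [4, 1]})
census = pd.DataFrame({"area": ["A", "B"], "median_income": [32_000, 58_000]})
weather = pd.DataFrame({"week": [1], "avg_temp_c": [27.5]})

features = (
    crimes
    .merge(census, on="area")   # demographics / socioeconomic statistics
    .merge(weather, on="week")  # weather patterns
)
print(features)
```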

What algorithms and techniques are most effective for crime prediction?

When it comes to crime prediction, machine learning algorithms like decision trees, neural networks, and random forests have been particularly effective. Consider predictive policing systems like Geolitica (formerly PredPol), which use a combination of historical crime data and machine learning and have been deployed in cities like Los Angeles (which discontinued its use in 2020) to forecast where crimes are likely to occur. These systems analyse patterns in crimes such as burglaries and car thefts, identifying high-risk areas based on past data. Similarly, facial recognition technology combined with AI has been used for identifying suspects and tracking movements in real time, as seen in China’s expansive surveillance network.
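To illustrate one of the algorithm families named above, here is a minimal random-forest sketch that ranks city-grid cells by predicted risk – the kind of hotspot output a patrol planner might consume. The grid, features, and labels are synthetic stand-ins; Geolitica’s actual method is proprietary and not reproduced here.

```python
# Hotspot ranking with a random forest over a synthetic city grid.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n_cells = 400  # hypothetical 20x20 grid of city cells

# Illustrative per-cell features: burglaries and car thefts last month.
X = rng.poisson([4, 2], size=(n_cells, 2))
# Synthetic label: was the cell high-crime the following month?
y = (X.sum(axis=1) + rng.normal(0, 2, n_cells) > 8).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
risk = model.predict_proba(X)[:, 1]     # estimated risk per cell
hotspots = np.argsort(risk)[::-1][:10]  # ten highest-risk cells
print("top-10 cells by predicted risk:", hotspots)
```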

AI identifies crime hotspots and predicts trends using economic, social, and even weather data. This helps police allocate resources effectively, save time, and maximize impact, giving law enforcement a proactive edge in addressing crime.

However, despite these successes, the technology is far from perfect. False positives, biases in the data, and ethical concerns about profiling remain significant challenges. While AI can process vast amounts of information and uncover patterns humans might miss, we still have a long way to go before the technology reaches a point of true reliability—if it ever does.

How can we ensure that AI models are unbiased and do not perpetuate existing inequalities?

Mitigating bias in this technology starts with the data that is being input. Diversifying the data collected to be representative of the community it aims to serve can aid in training the algorithm to make more equitable evaluations. Of course, immense oversight is necessary. As incredible as the advances surrounding this topic have been, we still need to remember it is in its testing phase. Having groups evaluate the technology consistently and assess its levels of fairness is imperative to minimizing biased decisions.
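One concrete check such oversight groups might run is to compare false-positive rates across demographic groups, since a model that disproportionately flags people in one community is biased in exactly the way described above. The sketch below uses synthetic data and hypothetical group labels; real audits use many more metrics.

```python
# Fairness audit sketch: per-group false-positive rates on synthetic data.
import numpy as np

rng = np.random.default_rng(3)
n = 2_000
group = rng.choice(["group_1", "group_2"], size=n)  # hypothetical demographic split
y_true = (rng.random(n) < 0.1).astype(int)          # synthetic ground truth
y_pred = (rng.random(n) < 0.15).astype(int)         # stand-in model output

for g in ["group_1", "group_2"]:
    mask = (group == g) & (y_true == 0)  # people who committed no crime
    fpr = y_pred[mask].mean()            # share wrongly flagged
    print(f"{g}: false-positive rate = {fpr:.3f}")
```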

As time progresses and the usage of this technology commences, legislative oversight must come into play. For example, the US House Committee on Science, Space, and Technology or the Committee on Homeland Security might consider implementing legislation that properly regulates this technology to ensure equity.

What are the potential benefits of using AI for crime prediction?

Here are a few examples, from my point of view, of how:

• AI pinpoints crime hotspots, allowing police to allocate resources where they’re actually needed, saving time and maximizing local impact.

• It identifies trends from economic, social, and even weather data to predict where crime is likely to occur, giving law enforcement a proactive edge.

• AI can flag situations where mental health professionals or social workers are a better fit, potentially preventing crimes before they escalate.

• For understaffed departments, AI ensures that every officer or specialist is being used in the most impactful way possible, reducing wasted effort and increasing results.

How can we balance the benefits of AI with the risks?

To balance the benefits of AI, human oversight is needed, period. AI isn’t neutral; it reflects the biases of those who create it. To avoid reinforcing existing inequalities, we need diverse teams – BIPOC, women, and people of all backgrounds – actively involved in AI’s development and monitoring.

The idea that AI can be fair or unbiased without human input is misguided at best. Again, without diverse oversight, we risk building systems that exclude or harm certain groups, no matter how advanced the technology becomes.

Can you provide examples of successful or unsuccessful AI-driven crime prediction programs?

As I previously mentioned, two major executors of this technology are the team of scientists from the University of Chicago and law enforcement in Argentina. The team in Chicago reports a 90% accuracy rate and has tested the technology in eight other cities. Argentinian law enforcement has begun using this technology and claims it has been both accurate and effective; they have even added social media monitoring to their algorithm.

However, in New Jersey, we saw that not every software created for predicting crimes works. The Plainfield Police Department used software called Geolitica, known as PredPol until a 2021 rebrand, to predict crimes in the area, only to find out later that the technology had an awful accuracy rate – 0.5%, to be exact. It is argued that the system also had many ethical flaws, including the perpetuation of racial bias and disparities.

What lessons can we learn from these case studies?

The dangers of poorly executed technology in crime prediction were vividly seen in the Plainfield Police Department case. At the University of Chicago, we saw that the technology was classifying certain areas in the city – areas on the lower end of the socioeconomic spectrum – as more menacing than areas of higher wealth. If these biases prevail, communities of colour and poorer residential areas may find themselves extensively more policed without reason.

What do experts in AI and law enforcement think about the potential of AI for crime prediction?

Experts in AI see enormous potential in crime prediction, but they’re quick to point out that AI can only act on patterns it already knows. The real challenge from my perspective is that financial crimes, like fraud or insider trading, evolve constantly. AI might catch today’s schemes, but it struggles with novel, unprecedented tactics. Worse, there’s a risk that over-reliance on AI could blind institutions to crimes that don’t fit existing models. As powerful as it is, AI isn’t a crystal ball—it’s reactive, not 100% predictive, and criminals are always finding ways to exploit the gaps in its knowledge.

From a law enforcement perspective, AI offers hope for understaffed agencies, especially in tracking and predicting low-level crimes or fraud. But for departments without the budget or infrastructure, it’s a non-starter. And even with AI’s help, experts are concerned about bias in the data. AI can amplify systemic issues like profiling if it’s not used carefully. At best, it supplements policing, but it’s no magic bullet.

Many experts would also agree on how vital training data is, and would recognize that some things need to be fixed at the human “training data” level before we can hope an AI will do better.

What are the challenges and opportunities for the future of AI in crime prediction?

In theory, technology that predicts future crimes before they actually happen sounds like a weight off law enforcement’s shoulders. The idea seems to evoke a sense of increased safety and security in a world that – especially as of late – feels scary and dangerous. The opportunities that this technology can present are bountiful; however, if not executed with integrity and thoughtfulness, it can be detrimental to our society.

Mitigating implicit bias in an effort to ensure fairness and equity must be a priority for the designers of this technology. The algorithms in question have as much capacity to harm as to help. Furthermore, privacy must remain protected; data collection should not impinge on people’s right to privacy and safety. But once we work out the kinks of this technology – and we will – crime fighting might become a new addition to the powers of AI.

George Kailas

CEO, Prospero.ai

aanchalg@cybermedia.co.in
