Program Helps Law Enforcement Determine Who Is Most Likely to Commit Crime
In the movie Minority Report, set in the year 2054, an experimental Washington, D.C. police force called Precrime has completely neutralized murder in the city. Reality is scarier: new crime prediction software being rolled out in the nation’s capital is something George Orwell’s Thought Police might have found useful, an artificial intelligence system designed to gain insight into what people are thinking.
Developed by Richard Berk, a professor at the University of Pennsylvania, the software is already used in Baltimore and Philadelphia to predict which individuals on probation or parole are most likely to murder and to be murdered.
In his latest version, the one being implemented in D.C., Berk goes even further, identifying the individuals most likely to commit crimes other than murder.
If the software proves successful, it could influence sentencing recommendations and bail amounts.
“When a person goes on probation or parole they are supervised by an officer. The question that officer has to answer is ‘what level of supervision do you provide?'” said Berk.
It used to be that parole officers used the person’s criminal record, and their good judgment, to determine that level.
“This research replaces those seat-of-the-pants calculations,” he said.
Murders, despite their frequent appearance on cop dramas and the evening news, are rare crimes. On average there is one murder for every 100,000 people. Even among high-risk groups the murder rate is one in 100. Trying to predict such a rare event is very difficult, so difficult that many researchers deemed it impossible.
“It’s like trying to find the needle in the haystack,” said Berk.
New advances in computer technology, however, can sift through that haystack more quickly and more accurately than ever.
Beginning several years ago, the researchers assembled a dataset of more than 60,000 crimes of various types, including homicides. Using an algorithm they developed, they found a subset of people much more likely to commit homicide while on parole or probation. Instead of finding one murderer in 100, the UPenn researchers could identify eight future murderers out of 100.
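As a quick sanity check on those figures, the jump from one murderer in 100 to eight in 100 amounts to an eightfold lift in precision within the flagged subset. The rates below come from the article; the variable names are just illustrative:

```python
# Back-of-the-envelope check of the lift implied by the article's figures.
high_risk_rate = 1 / 100   # murder rate among high-risk parolees overall
flagged_rate = 8 / 100     # rate within the subset flagged by the algorithm

# Lift: how much more concentrated future murderers are in the flagged group.
lift = flagged_rate / high_risk_rate
print(f"lift: {lift:.1f}x")
```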
Berk’s software examines roughly two dozen variables, from criminal record to geographic location. The type of crime, and more importantly, the age at which that crime was committed, were two of the most predictive variables.
“People assume that if someone murdered then they will murder in the future,” said Berk. “But what really matters is what that person did as a young individual. If they committed armed robbery at age 14 that’s a good predictor. If they committed the same crime at age 30, that doesn’t predict very much.”
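The weight Berk places on age at first offense can be sketched in code. This is a hypothetical toy score, not Berk’s actual model (his system weighs roughly two dozen variables with machine-learning methods); the weights and variable names here are invented solely to illustrate the point in the quote above:

```python
# Illustrative sketch of a parole risk score (NOT Berk's actual model).
# Weights and variables are hypothetical; the real system uses roughly
# two dozen predictors and a machine-learning classifier.

def risk_score(age_at_offense: int, violent: bool, prior_convictions: int) -> float:
    """Return a 0-1 risk estimate from a few toy predictors."""
    score = 0.0
    # Per the article: a violent crime committed young is highly predictive,
    # while the same crime at age 30 "doesn't predict very much".
    if violent:
        score += max(0.0, (30 - age_at_offense) / 30) * 0.6
    score += min(prior_convictions, 10) / 10 * 0.4
    return round(score, 3)

# A 14-year-old armed robber scores far higher than a 30-year-old one.
young = risk_score(age_at_offense=14, violent=True, prior_convictions=3)
older = risk_score(age_at_offense=30, violent=True, prior_convictions=3)
print(young, older)
```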
Baltimore and Philadelphia are already using Berk’s software to help determine how much supervision parolees should have. Washington, D.C. is now set to use the algorithm to help predict lesser crimes as well. If those tests go well, Berk says the program could help set bail amounts and suggest sentencing recommendations.
Predicting future crimes does sound, well, futuristic, said Berk. Even his students at the University of Pennsylvania compare his research to the Tom Cruise movie “Minority Report.”
Scientifically, Berk’s results are “very impressive,” said Shawn Bushway, a professor of criminal justice at the State University of New York at Albany who is familiar with Berk’s research.
Predicting rare events like murder, even among high-risk individuals, is extremely difficult, said Bushway, and Berk is doing a better job of it than anyone else.
But Berk’s scientific answer leaves policymakers with difficult questions, said Bushway. If one group of people is labeled as high risk and monitored with increased vigilance, there should be fewer murders, a result that potential victims should welcome.
It also means that those high-risk individuals will be monitored more aggressively. For inmate rights advocates, that is tantamount to harassment, “punishing people who, most likely, will not commit a crime in the future,” said Bushway.
The New Thought Police
The National Security Agency (NSA) is developing a tool with the entire Internet and thousands of databases for a brain. The device will be able to respond almost instantaneously to complex questions posed by intelligence analysts. As more and more data is collected—through phone calls, credit card receipts, social networks like Facebook and MySpace, GPS tracks, cell phone geolocation, Internet searches, Amazon book purchases, even E-Z Pass toll records—it may one day be possible to know not just where people are and what they are doing, but what and how they think.
The system is so potentially intrusive that at least one researcher has quit, citing concerns over the dangers in placing such a powerful weapon in the hands of a top-secret agency with little accountability.
Known as Aquaint, which stands for “Advanced QUestion Answering for INTelligence,” the project was run for many years by John Prange, an NSA scientist at the Advanced Research and Development Activity. Headquartered in Room 12A69 in the NSA’s Research and Engineering Building at 1 National Business Park, ARDA was set up by the agency to serve as a sort of intelligence community DARPA, the place where former Reagan national security advisor John Poindexter’s infamous Total Information Awareness project was born. [Editor’s note: TIA was a short-lived project founded in 2002 to apply information technology to counter terrorist and other threats to national security.] Later named the Disruptive Technology Office, ARDA has now morphed into the Intelligence Advanced Research Projects Activity (IARPA).
A sort of national laboratory for eavesdropping and other spycraft, IARPA will move into its new 120,000-square-foot home in 2009. The building will be part of the new M Square Research Park in College Park, Maryland. A mammoth two million-square-foot, 128-acre complex, it is operated in collaboration with the University of Maryland. “Their budget is classified, but I understand it’s very well funded,” said Brian Darmody, the University of Maryland’s assistant vice president of research and economic development, referring to IARPA. “They’ll be in their own building here, and they’re going to grow. Their mission is expanding.”
“The technology behaves like a robot, understanding and answering complex questions,” said a former Aquaint researcher. “Think of 2001: A Space Odyssey and the most memorable character, HAL 9000, having a conversation with David. We are essentially building this system. We are building HAL.” A naturalized U.S. citizen who received her Ph.D. from Columbia, the researcher worked on the program for several years but eventually left due to moral concerns. “The system can answer the question, ‘What does X think about Y?'” she said. “Working for the government is great, but I don’t like looking into other people’s secrets. I am interested in helping people and helping physicians and patients for the quality of people’s lives.” The researcher now focuses on developing similar search techniques for the medical community.
A supersmart search engine, capable of answering complex questions such as “What were the major issues in the last 10 presidential elections?” would be very useful for the public. But that same capability in the hands of an agency like the NSA—absolutely secret, often above the law, resistant to oversight, and with access to petabytes of private information about Americans—could be a privacy and civil liberties nightmare. “We must not forget that the ultimate goal is to transfer research results into operational use,” said Aquaint project leader John Prange, in charge of information exploitation for IARPA.
Once up and running, the database of old newspapers could quickly be expanded to include an inland sea of personal information scooped up by the agency’s warrantless data suction hoses. Unregulated, the agency could ask the system to determine which Americans might pose a security risk—or have sympathies toward a particular cause, such as the antiwar movement, as was done during the 1960s and 1970s.
The Aquaint robospy might then base its decision on the type of books a person purchased online, or chat room talk, or websites visited—or a similar combination of data. Such a system would have an enormous chilling effect on everyone’s everyday activities—what will the Aquaint computer think if I buy this book, or go to that website, or make this comment? Will I be suspected of being a terrorist or a spy or a subversive?
High-tech Public Spying
High-tech eyes are everywhere, from satellite imagery to school officials spying on students at home via webcams. Wherever you look, technology is being employed in creative new ways by government officials, a trend that civil libertarians and others fear is eroding privacy rights.
“As technology advances, we have to revisit questions about what is and what is not private information,” said Gregory Nojeim, senior counsel at the Washington, D.C.-based Center for Democracy and Technology.
IBM has spent a whopping $12 billion beefing up its analytics division.
Predictive analytics gives government organizations worldwide a highly sophisticated and intelligent resource for creating safer communities by identifying, predicting, responding to and preventing criminal activity. It gives the criminal justice system the ability to draw upon the wealth of available data to detect patterns, make reliable projections and then take appropriate action in real time to combat crime and protect citizens.
If that sounds scary to you, that’s because it is. First it’s the convicted-but-potentially-recidivistic criminals. Then it’s the potential terrorists. Then it’s every one of us, in a big database, getting flagged because some combination of factors—travel patterns, credit card activity, relationships, messaging, social activity and everything else—indicates that we may be thinking about doing something against the law. Potentially, a crime prediction system could prevent a murder, a robbery, or a terrorist act.