The intersection of technology and crime-fighting presents a unique challenge for law enforcement agencies in balancing community safety with concerns about individual privacy and the potential for data misinterpretation.
Following a wave of looting and property destruction at businesses in Streeterville, River North and other Chicago neighborhoods on Aug. 10, the Chicago Police Department (CPD) posted video clips and images of persons of interest to a page on its website, and requested the public's assistance in identifying the individuals.
The City of Chicago uses remotely controlled POD (police observation device) cameras to capture footage of potential crimes. According to the CPD's website, the cameras were implemented as a pilot program in 2003, and the technology has advanced through the years to include the ability to detect gunfire and send alerts directly to Chicago's Emergency Management and Communications Center.
Maggie Huynh, a spokeswoman with the Chicago Police Department, said the department's Looting Task Force is also working closely with area tech centers to sift through the surveillance video footage and identify persons of interest.
Social media monitoring
At this time, Huynh said, there are no plans underway to upgrade the cameras for higher-resolution footage or acquire additional tech resources for use in this type of investigation. She noted the police department regularly monitors publicly available, open-source social media as part of its crime-fighting strategy.
On Aug. 14, Chicago Mayor Lori Lightfoot said police planned to create a 20-member task force to increase social media monitoring as part of an effort to deter future incidents of looting.
During an Aug. 17 news conference, Chicago Police Superintendent David Brown said the department had made 11 felony looting arrests with the help of information community members provided in response to the posted images and video footage. As of Aug. 24, more than two dozen felony arrests were detailed on the website.
Brown characterized the police department's monitoring of social media as "no different than our reading the paper or watching television news."
"Social media is an open source public forum for communication, like our newspapers, like our television news reports, and we're just watching the news on social media like we watch the news on those other formats," he said.
In an emailed statement, ACLU of Illinois Executive Director Colleen Connell expressed concern that social media monitoring by law enforcement holds "a number of dangers," including chilling protected speech and opening the door to targeting on the basis of protected speech, association, race and neighborhood. The organization filed a lawsuit against the Chicago Police Department in 2018, asking the department to turn over documents regarding social media monitoring software.
"It is therefore imperative that the City be fully transparent about the work of the new task force and ensure that its work not target Black and Latinx residents of Chicago for increased police scrutiny and encounters," Connell said in the statement. "And, these investigations using social media monitoring should not be used to discourage or dissuade people in the City from expressing their views – including strong opposition – on matters of public concern, including policing. Finally, the City should be transparent about any software used by CPD to monitor the social media of Chicago residents and how that software works."
Police departments in other cities and abroad have employed sophisticated techniques to identify suspects.
The New York Police Department has some of the nation's most advanced crime-fighting and surveillance resources, according to the New York Times. In addition to facial recognition software, the department's arsenal of tools includes drones, cell phone trackers and license plate readers. A 2018 state bill that would have allowed police to use drones for surveillance of large crowds failed in the House.
In recent years, support has grown for the use of police body-worn cameras. By 2016, according to the Bureau of Justice Statistics, nearly half of general-purpose law enforcement agencies had adopted the technology. The CPD completed an initiative to equip patrol officers with body-worn cameras in 2017.
Two years later, Chicago introduced the first of 200 patrol vehicles equipped with technology allowing the city to compare license plates with a list of stolen vehicles. That same year, the Denver Police Department began using GPS vehicle pursuit darts, which launch GPS trackers at fleeing vehicles and allow police to follow the vehicles through an app, the Denver Post reported.
The darts were designed to deter dangerous police vehicle pursuits, which claimed an average of nearly 355 lives annually in the U.S. from 1996 to 2015, according to the Bureau of Justice Statistics. Locally, a notable high-speed chase fatality occurred in Lakeview this June, when a squad car pursuing a man wanted in a homicide case struck another vehicle, killing a woman and injuring several others. The squad car was not equipped with the darts; the CPD does not use the technology.
In China, a nationwide surveillance system called Skynet employs about 200 million closed-circuit television cameras, by the country's own estimate. Three years ago, China began using these cameras in conjunction with facial recognition technology to display images of lawbreakers, alongside their government ID numbers and names, on large public screens as a means of social shaming, the Times reported in 2018.
The use of facial recognition in fighting crime has come under scrutiny in recent months amid ethical concerns. In May, the Chicago Sun-Times reported that facial recognition technology provider Clearview AI had ended its contract with the CPD. The ACLU had sued the company, alleging its technology violated an Illinois law that prohibits the use of past and present residents' biometric information without permission.
Major cities such as San Francisco and Boston have banned police and other municipal agencies from using facial recognition technology, and companies such as Microsoft have barred law enforcement agencies from using their technology for crime-fighting purposes. This summer, Santa Cruz became the first U.S. city to ban predictive policing, a controversial technique that attempts to predict crime before it happens through algorithms that analyze large data sets.
"Any kind of umbrella artificial intelligence tool, whether it's machine learning, whether it's optimization algorithms, whether it is natural language processing, the issue is, how do you know what you know?" said Desmond Patton, a Columbia University associate professor of social work who has studied the relationship between social media and gun violence, as well as diversity, equity and inclusion in AI. "We've already identified many issues with how these systems can't accurately interpret Black faces."
Patton recommended the Chicago Police Department question whether social media monitoring is the best approach and consider the limits of third-party tools, artificial intelligence and human analysts in accurately interpreting context.
"There's so much more that is said and communicated on social media before you see that aggressive or threatening post," Patton said. "It's a missed opportunity to actually do some good by listening to people and learning more about their digital lives."
Patton said he would like to see this particular tool handled by community outreach organizations that focus on violence prevention and are more connected to what is happening on the ground.
"Put them in the hands of the folks who get the language, who get the interpretations, who get the nuance, who get the emojis, how they're used," he said. "Put the money, and put the opportunity and resource in their hands, and allow them to do good work."