What artificial intelligence techniques are available for use in surveillance?
As costs go down and intelligent neural networks become much more accessible, AI-powered surveillance in public places is becoming more and more common. Convolutional neural networks (CNNs), which are designed on the assumption that the input fed to them consists of images, have been trained to carry out tasks like facial recognition, image classification (assigning a label to an image by learning from a large database of examples) and object detection (locating every instance of a type of object within a photo, such as when Facebook identifies all the faces in a photo of your friends at the mall).
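The core operation behind all of these tasks is the convolution: sliding a small filter across an image and recording how strongly each patch matches it. A minimal sketch in NumPy (the image and kernel values here are arbitrary examples, not taken from any real system):

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small kernel over a 2-D image and sum the element-wise
    products at each position (no padding, stride 1)."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A hand-written vertical-edge detector: it responds strongly where
# pixel values jump from left to right -- the kind of low-level
# feature the first layer of a CNN typically learns on its own.
image = np.array([
    [0, 0, 10, 10],
    [0, 0, 10, 10],
    [0, 0, 10, 10],
    [0, 0, 10, 10],
], dtype=float)
edge_kernel = np.array([[-1, 1],
                        [-1, 1]], dtype=float)
feature_map = conv2d(image, edge_kernel)
```

A real CNN stacks many such filters in layers and learns the kernel values from data rather than hard-coding them.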
CNNs can also engage in object tracking (following a moving object from frame to frame, for example by its heat signature) and instance segmentation (picking out each individual object in a scene and separating it from the objects around it). CNNs are now at the point where they can identify objects down to the pixel.
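The association step at the heart of object tracking can be illustrated without any neural network at all. The sketch below (names and thresholds are my own, purely for illustration) matches each tracked object to the nearest new detection from frame to frame, which is the simplest form of multi-object tracking:

```python
import math

def track_centroids(tracks, detections, max_dist=50.0):
    """Greedy nearest-centroid association: match each known track to
    the closest new detection; unmatched detections start new tracks."""
    updated = {}
    unclaimed = list(detections)
    for track_id, (tx, ty) in tracks.items():
        if not unclaimed:
            break
        nearest = min(unclaimed, key=lambda d: math.dist((tx, ty), d))
        if math.dist((tx, ty), nearest) <= max_dist:
            updated[track_id] = nearest
            unclaimed.remove(nearest)
    next_id = max(tracks, default=-1) + 1
    for det in unclaimed:
        updated[next_id] = det
        next_id += 1
    return updated

# Frame 1: two objects detected; frame 2: both have moved slightly.
tracks = track_centroids({}, [(10, 10), (100, 100)])
tracks = track_centroids(tracks, [(14, 12), (103, 98)])
```

In a surveillance system the detections would come from a CNN detector, and production trackers use motion models and appearance features rather than raw distance, but the frame-to-frame matching idea is the same.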
To make these neural networks easier to train, researchers have experimented with region proposals, layered architectures and other techniques that break the information down into smaller pieces, running it through an algorithm repeatedly until the system is able to correctly identify and classify it. Training a neural network can take weeks or even months, but the technology is getting better all the time.
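That "run the data through repeatedly until it classifies correctly" loop can be shown on the smallest possible scale with a single perceptron, a sketch of the forward-pass/error/update cycle that deep networks repeat millions of times (the toy dataset here is mine, chosen only because it is learnable):

```python
def train_perceptron(samples, labels, lr=0.1, max_epochs=100):
    """Repeatedly run the data through the model, nudging the weights
    after every mistake, until everything is classified correctly."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(max_epochs):
        mistakes = 0
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            if pred != y:
                mistakes += 1
                sign = 1 if y == 1 else -1
                w = [wi + lr * sign * xi for wi, xi in zip(w, x)]
                b += lr * sign
        if mistakes == 0:  # every sample classified correctly
            break
    return w, b

# A tiny linearly separable problem: logical AND of two inputs.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train_perceptron(samples, labels)
```

On four samples this converges in a handful of passes; with millions of images and millions of weights, the same loop is what stretches into weeks or months of training.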
How is artificial intelligence used to monitor human behavior?
The West hasn’t quite reached the level of China when it comes to patrolling every aspect of our lives, but we’re getting there. A popular area in which AI is being used to fight crime is “predictive policing.” Tools like PredPol analyze various factors and give law enforcement officers a report of which areas are most likely to suffer an outbreak of crime in the near future.
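PredPol's actual model is proprietary, but the general shape of such a tool can be sketched crudely: score each area by a recency-weighted count of past incidents and hand back a ranked patrol list. Everything below (area names, decay rate) is an invented illustration, not PredPol's method:

```python
from math import exp

def rank_hotspots(incidents, now, decay=0.1):
    """Score each area by an exponentially recency-weighted count of
    past incidents (recent events count more than old ones) and rank
    areas from highest score to lowest."""
    scores = {}
    for area, t in incidents:
        scores[area] = scores.get(area, 0.0) + exp(-decay * (now - t))
    return sorted(scores, key=scores.get, reverse=True)

# (area, day-of-incident) pairs; "downtown" has the most recent activity.
incidents = [("downtown", 9), ("downtown", 8), ("harbor", 1),
             ("harbor", 2), ("harbor", 3), ("uptown", 9)]
ranking = rank_hotspots(incidents, now=10)
```

Note what the sketch makes obvious: the output depends entirely on which incidents made it into the data in the first place, which is exactly where the ethical trouble discussed below begins.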
Although this is understandably attractive to the local police force, who can use the information to optimize their resources and patrol the right areas, the practice of predictive policing raises some serious ethical questions. Authorities are able to deploy a variety of computer vision technologies to keep tabs on citizens. AI-powered video cameras are now capable of autonomously identifying a suspicious person, tracking them and informing the police if they see fit. But how do algorithms decide who is a suspicious person?
Police have a tendency to spend more time in less affluent neighborhoods, which means more data gets collected about the residents of those neighborhoods. This in turn disadvantages people who are already marginalized, raising the odds that innocent people will be wrongly targeted by police while affluent people find it easier to get away with committing crimes. “If misused, systems based on algorithms reinforce human bias and increase inequality between groups in society,” says Virginia Eubanks, political science professor at the University at Albany.
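This feedback loop can be made concrete with a toy simulation (entirely hypothetical numbers): two neighborhoods with an identical true crime rate, patrol time allocated in proportion to past *recorded* incidents, and crimes only recorded where officers happen to be present. A small initial imbalance in the data then compounds on its own:

```python
import random

def simulate_feedback(steps=200, seed=0):
    """Two areas with the same underlying crime rate; patrols follow
    recorded counts, and only patrolled crimes get recorded."""
    rng = random.Random(seed)
    true_rate = 0.3                 # identical in both areas
    recorded = [5.0, 1.0]           # small historical head start for area 0
    for _ in range(steps):
        total = recorded[0] + recorded[1]
        patrol_share = [recorded[0] / total, recorded[1] / total]
        for area in (0, 1):
            crime_occurred = rng.random() < true_rate
            officer_present = rng.random() < patrol_share[area]
            if crime_occurred and officer_present:
                recorded[area] += 1
    return recorded

recorded = simulate_feedback()
```

Even though both areas are equally dangerous by construction, the area that started with more recorded incidents ends up with far more of them, and any algorithm trained on `recorded` inherits that distortion.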
Biased algorithms have an increasing impact on our daily lives
Biased algorithms don’t just play a role in surveillance. Over the last few years, worrying examples of biased algorithms have been cropping up in every field, from job applicant profiling to courtroom decisions to housing loans. Part of the problem is that algorithms are programmed by humans, so human bias inevitably snakes its way into the technology. Compounding the problem is the “black box” issue: since algorithms learn and make decisions on their own, it’s often impossible to tell whether a decision was based on fair factors, or whether it was skewed by false correlations. For example, just because fewer women occupy high-paying positions, that doesn’t mean an algorithm should show a female job searcher a list of lower-paying jobs than it shows a male one.
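Even when a model is a black box, its decisions can still be audited from the outside. One common first-pass fairness check is demographic parity: compare the rate of positive outcomes across groups. A sketch, using invented loan-decision data:

```python
def demographic_parity_gap(decisions, groups):
    """Difference between the highest and lowest positive-outcome
    rates across groups. A gap near 0 suggests the groups are treated
    alike on this one axis; a large gap is a red flag worth
    investigating, even when the model itself is a black box."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Hypothetical loan decisions (1 = approved) for two groups.
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap = demographic_parity_gap(decisions, groups)
```

Here group A is approved 80% of the time and group B only 20%, a gap of 0.6. A gap alone doesn't prove discrimination, but it tells auditors exactly where to start asking questions.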
In 2017, the Chicago police department came under fire for using an algorithm to compile a Strategic Subject List naming the people who were most likely to be involved in gun violence in the future, whether as a perpetrator or as a victim. The algorithm helped police identify some factors that they hadn’t previously been aware of, such as the fact that being a victim is a better predictor of future gun violence than being arrested for possession of a weapon. So in that respect, algorithms can come in handy. On the other hand, the Chicago police department was criticized for using the list to target groups of people who hadn’t yet committed crimes.
These kinds of algorithms could easily drag us back to a time before the maxim “innocent until proven guilty,” when we rushed to accuse people first and ask questions later. Is it worth putting innocent people in jail just to maximize police force efficiency?
As algorithm efficiency increases, so must the ethics of AI surveillance
Alongside the nation-states and company ethics boards rushing to put together AI ethics guidelines, organizations such as AI Now are looking at ways to remove bias from algorithms and increase their accuracy so that they can be implemented responsibly.
If we can train AI to recognize faces and track moving objects, surely we can train it not to be biased?