
The False Choice Between Privacy and Safety in Smart Surveillance

Is there really a choice to be made between privacy and safety in surveillance? As artificial intelligence continues to advance, privacy has come under increasing scrutiny. A 2019 report from the American Civil Liberties Union contains an ominous warning: AI-enabled video surveillance will soon compromise our civil liberties in dangerous ways. But the choice between privacy and safety in smart surveillance is, in many ways, a false one.

The ACLU isn’t wrong: Some technology could indeed contribute to this dystopian vision.

A dystopia, literally “a bad place,” is a community or society that is undesirable or frightening. Some systems attempt to use technology that’s not yet advanced enough, raising the specter of misuse; others invade the privacy of citizens, all for minimal security benefit.

But there are avenues of AI video surveillance that can bring greater public safety without sacrificing civil liberties.

The key is using AI to enable human security professionals to do their jobs better, not overextending AI’s capabilities to take over their jobs entirely. I see AI surveillance existing within three main categories: behavioral analysis, facial recognition, and object detection. The first two categories raise concerns. To understand what makes the last more viable, it’s important to break down the issues with the others.

AI Isn’t Advanced Enough for Behavioral Analysis

Behavioral analysis is essentially an attempt to detect so-called suspicious behavior before any crimes are committed. The ACLU’s “The Dawn of Robot Surveillance” report touches on a few areas here: human action recognition, anomaly detection, and contextual understanding. Contextual understanding is the most ambitious of the three, but researchers have yet to make it genuinely feasible, and it’s unclear whether current technology provides a path to this type of general intelligence.

The problem is that computers lack the “common sense” to relate things to the rest of the world. A computer can recognize a dog thanks to the thousands of images of dogs it’s seen, but it can’t understand the context around that dog.

AI can’t infer — it can only recognize patterns.
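To make that limitation concrete, here’s a minimal sketch of what pure pattern recognition looks like in code, assuming a pretrained ImageNet classifier from torchvision (the file name frame.jpg is just a placeholder). The model reduces an entire scene to a class label and a confidence score; nothing in its output represents intent, ownership, or context.

```python
# Minimal sketch: a pretrained classifier only maps pixels to label probabilities.
# It has no representation of what the scene means.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("frame.jpg")          # placeholder: a single video frame
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)[0]

top_prob, top_class = probs.max(dim=0)
# The model's entire "understanding" of the frame is one class index and a score,
# for example class 207 ("golden retriever") at 0.97. There is nothing here to
# reason about whose dog it is, where it is going, or whether anything is wrong.
print(int(top_class), float(top_prob))
```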

Some companies are already putting this technology into action, however. The New York Police Department has partnered with Microsoft to produce what it calls the Domain Awareness System. Part of the system would involve smart cameras that aim to detect suspicious behaviors.

But it’s not a finished product; developers have been working with officials to tweak the software since it was put in place. I’m confident that Microsoft will eventually crack the code and get this technology working, but behavioral detection is still in beta.

The one area where behavioral analysis may be feasible is in detecting theft, and that’s primarily due to the lack of other viable options. 

Without Amazon Go-style camera installations, tracking every item in a store isn’t possible — so the next-best option is to guess whether a person is suspicious based on certain detected behaviors. But that, in itself, draws civil liberties concerns. The ACLU report notes the problems inherent in identifying “anomalous” behavior and people.

The big concern with action-detecting systems, then, is that the technology isn’t yet advanced enough to produce accurate results outside of small niches such as theft. Security professionals would be left with ambiguous data to interpret — likely reinforcing existing biases rather than making objective observations.

Facial Recognition Creates a Target for Bad Actors

Facial recognition works significantly better than behavioral analysis, and many law enforcement agencies and security firms already use the technology. That’s not to say it’s anywhere near perfect. False positives are common — especially when it comes to people of color. As a result, cities like San Francisco have banned the use of facial recognition software by local government agencies.

Even if facial recognition technology were 100% accurate, it still might not stop the worst of crimes.

Acts of violence such as mass shootings are regularly perpetrated by students, family members, employees, or customers: in other words, people who “belong” in the location. It’s unlikely that a facial recognition system would flag these individuals.

How much, then, can facial recognition really do to protect people in their own neighborhoods and homes?

Then there’s the severe invasion of privacy required to make this technology work. To identify a suspect, facial recognition requires an external database of faces as well as personally identifying information to match each face to a name. A database like that is an attractive target for bad actors.
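As a rough illustration of why that database is so sensitive, here’s a hypothetical sketch of a face-matching lookup (the embedding model, watchlist records, and threshold are all invented for illustration). The system can only return a name because biometric templates are stored alongside personal details, and that coupling is exactly what an attacker is after.

```python
# Hypothetical sketch: face identification requires a database that couples
# biometric templates (embeddings) with personally identifying information.
import numpy as np

# Invented watchlist entries; a real deployment would hold far more people
# and far more personal detail.
watchlist = [
    {"name": "Jane Doe", "dob": "1990-01-01", "embedding": np.random.rand(128)},
    {"name": "John Roe", "dob": "1985-05-23", "embedding": np.random.rand(128)},
]

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, threshold: float = 0.6):
    """Return the stored PII record whose embedding best matches the probe face."""
    best = max(watchlist, key=lambda record: cosine_similarity(probe, record["embedding"]))
    if cosine_similarity(probe, best["embedding"]) >= threshold:
        return best   # a match hands back a name, birth date, and biometric template
    return None       # no match: the face stays anonymous

# The probe embedding would come from a face detected in a live video frame.
print(identify(np.random.rand(128)))
```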

Case in point: In July 2019, U.S. Customs and Border Protection announced that hackers had gained access to a third-party database containing license plate numbers and photo IDs. The agency has, in recent years, begun collecting facial recognition data and fingerprints, among other things, from foreign travelers. It’s not hard to imagine a world where hackers gain access to this kind of information, endangering individuals as a result.

We Don’t Have to Sacrifice Privacy for Safety

The issues associated with behavioral analysis and facial recognition all lead back to the human element of threat detection. That’s where object detection differs. Object detection, as its name suggests, is entirely self-contained and functions by flagging known items, not individuals. Therefore, people’s private information remains just that — private.
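Here’s a minimal sketch of what that looks like in practice, assuming torchvision’s pretrained COCO detector (the frame file, item list, and confidence threshold are illustrative, and a production system would use a model trained specifically for weapons). Note what’s missing: there is no face database, no identity lookup, and no personal information anywhere in the pipeline.

```python
# Minimal sketch: object detection flags known item types and their locations.
# The output describes things, not people.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights)
model.eval()

categories = weights.meta["categories"]                 # COCO class names
ITEMS_OF_INTEREST = {"backpack", "suitcase", "knife"}   # illustrative item types

frame = read_image("camera_frame.jpg")                  # placeholder video frame
frame = weights.transforms()(frame)

with torch.no_grad():
    detections = model([frame])[0]

for label, score, box in zip(detections["labels"], detections["scores"], detections["boxes"]):
    name = categories[int(label)]
    if name in ITEMS_OF_INTEREST and float(score) > 0.8:
        # Flag the item type and its bounding box; no one is identified.
        print(f"Flagged {name} (confidence {float(score):.2f}) at {box.tolist()}")
```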

Still, there’s room for improvement. For instance, nonthreatening but unusual items such as power tools may be flagged. And the technology cannot detect concealed weapons that aren’t otherwise raising suspicions.

No type of AI surveillance technology is perfect.

These systems are constantly being advanced and refined. But currently, object detection — flaws and all — is the best avenue for keeping citizens safe without compromising their privacy. As AI video surveillance continues to advance, we should focus on letting security staff members do their jobs better instead of trying to automate them away.

During most mass shootings, for example, police officers know little about the shooter’s location, appearance, or armaments. Without real-time updates, their ability to respond is limited.

Object detection AI surveillance systems, however, can detect weapons, abandoned objects, and other possibly threatening items with high accuracy.

Upon detection, the AI system can notify security professionals of the item’s location in real time, allowing for a nearly instant response. In the recent Virginia Beach shooting, officers took nearly 10 minutes to locate the gunman after they entered the building, and by the time they had subdued him, the attack had lasted about 40 minutes.

That may not seem like long, but when there’s an active shooter involved, every second counts. An active shooter is the exact sort of practical situation in which AI can offer real, valuable data instead of a false sense of security.
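To show what that hand-off might look like, here’s a hypothetical sketch of the notification step. Every field and function name below is an assumption, but the point stands: the alert carries only the item, its location, and a timestamp, never anyone’s identity.

```python
# Hypothetical sketch: forwarding a flagged detection to security staff in real time.
from dataclasses import dataclass
from datetime import datetime, timezone
import json

@dataclass
class Detection:
    item: str          # e.g. "rifle" or "abandoned backpack"
    camera_id: str     # which camera produced the detection
    location: str      # human-readable placement, e.g. "2nd floor, east stairwell"
    confidence: float

def send_alert(payload: str) -> None:
    # Stand-in for a real transport (push notification, SMS gateway, radio dispatch).
    print("ALERT:", payload)

def notify_security(det: Detection, min_confidence: float = 0.85) -> None:
    """Forward a high-confidence detection to responders; no personal data is included."""
    if det.confidence < min_confidence:
        return  # below threshold: leave it for routine human review
    send_alert(json.dumps({
        "item": det.item,
        "camera": det.camera_id,
        "location": det.location,
        "time_utc": datetime.now(timezone.utc).isoformat(),
    }))

notify_security(Detection("rifle", "cam-14", "main lobby, north entrance", 0.93))
```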

Personal privacy and public safety don’t have to be mutually exclusive; the supposed choice between them is a false one.

It’s time we stopped trying to overextend AI video surveillance to do the work of security professionals. Instead, let’s focus on technology that already works to provide them with real, actionable data that can help them do their jobs better. We can create a safer society without sacrificing civil liberties.

Image credit: rishabh-varshney – Unsplash


Ben Ziomek
CPO and co-founder of Actuate

He works in AI-based product development, building software that employs deep learning to automatically identify weapons in real-time security feeds.
