
Who’s Watching?

Facial Recognition in CCTV: The Implications of Surveillance

The air is a little frigid. You’re expecting an important call. You’re on your way home from class when you feel your phone vibrating in your pocket. In your parka, hat, and boots, you fumble to check your phone without pulling off your mittens. Fortunately, you only have to look at the screen to answer the call – your phone can be unlocked with your fingerprint as well as your face.

Facial recognition technologies offer a newer, more personal type of security, through which artificial intelligence and high-precision cameras enable instantaneous identification of users. These technologies are touted for their supposed promise of increased security. For national security forces, this means responding faster to violent crime and resolving investigations more quickly. For individual users, facial recognition purportedly offers control over their own personal information, whether as a means to lock access to a phone, a bank account, or other personal affairs.


Though China was the first nation to fully implement surveillance with this technology, Australia, India, and the United Kingdom have joined in trialing it over the past year. These national security systems rely on national databases of civilian profiles to identify people. More recently, facial recognition has been added to CCTV (closed-circuit television) systems as part of the modern suite of video analytics for surveillance. In addition to identifying objects and animals and logging how fast things are moving, national security systems using facial recognition CCTV can instantly identify who is in the frame. This could mean a decreased reliance on witnesses and in-person investigations: inquiries can go entirely digital, and police can arrive on scene to carry out arrests minutes after a crime is committed.
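Under the hood, systems like these typically reduce each face to a numerical “embedding” and compare it against stored profiles. The sketch below is a minimal illustration of that matching step only, assuming precomputed embeddings and a cosine-similarity cutoff; the names, vectors, and threshold are hypothetical stand-ins, not details of any deployed system.

```python
import numpy as np

EMBEDDING_DIM = 128    # a common size for a face embedding vector
MATCH_THRESHOLD = 0.8  # hypothetical similarity cutoff for declaring a match

# Stand-in for a national database: name -> precomputed face embedding.
# Real systems would derive these vectors from photos with a neural network.
rng = np.random.default_rng(seed=0)
database = {name: rng.normal(size=EMBEDDING_DIM)
            for name in ["alice", "bob", "carol"]}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embeddings (1.0 = identical direction)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(frame_embedding: np.ndarray) -> str | None:
    """Return the best-matching profile, or None if nothing clears the cutoff."""
    best_name, best_score = None, -1.0
    for name, stored in database.items():
        score = cosine_similarity(frame_embedding, stored)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= MATCH_THRESHOLD else None

# A face seen on camera: here, simulated as a noisy copy of one stored profile.
observed = database["bob"] + rng.normal(scale=0.1, size=EMBEDDING_DIM)
print(identify(observed))  # likely "bob"; a sufficiently different face prints None
```

Notice that the threshold is where the policy questions hide: set it low and the system flags more innocent passers-by; set it high and it misses the people it is looking for.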

Like any other Tuesday night, red and blue lights bounce off the snow at the intersection. You hang up the phone as you turn the corner, and are immediately stopped by an officer. “Are you so-and-so?” they ask. “We saw you on camera.” You shake your head, no. They ask you to show ID. You try to remember if you took your ID to school today.

In the late 18th century, the philosopher Jeremy Bentham proposed “the panopticon,” an architectural prison design that offered complete control of those being observed via internalized coercion. Because people in the panopticon can never tell when they are being watched, they must behave as though they always are, and are, therefore, under control.

With little regulation or policy surrounding facial recognition technology, authoritarian surveillance is entirely possible, and already happening. The use of facial recognition for surveillance is criticized on many fronts – when it works well, it poses a risk to civilian freedom and privacy, and when it doesn’t, it makes innocent people vulnerable. Big Brother Watch, a non-profit civil liberties organization that campaigns against the rise of state surveillance, produced a report estimating that facial recognition technology had high rates of both false positives and false negatives. False positives occur when the technology identifies someone incorrectly; false negatives are failures to identify someone who is in a national facial recognition database. A central point of the report is that well-working, or perfected, facial recognition would essentially turn civilians into “walking ID cards.” Conversely, when the technology was used to police concerts, festivals, and carnivals in the UK and China, the matches it reported to police were wrong over 90 per cent of the time.

The report also highlights how facial recognition technology is disproportionately inaccurate for minority groups: in the United States, it frequently misidentifies women from minority ethnic groups. This is a major concern, as racial prejudice in policing already disproportionately affects minorities. If these technologies risk widening that disparity, their “merits” should truly be called into question. The risk of racial prejudice in AI-based technologies is a recurring concern – a piece earlier this semester, titled “Is AI Racist?”, examined the fallibility of AI and its consistent issues with racial bias.
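That 90-per-cent figure is easier to grasp with a little arithmetic. When genuine suspects are a tiny fraction of a crowd, even a seemingly accurate system will be wrong most of the times it raises an alert – a base-rate effect. The numbers below are purely illustrative, not drawn from the report.

```python
# Base-rate arithmetic: why most alerts can be false even with a "good" system.
# All figures here are hypothetical, chosen only to illustrate the effect.

crowd_size = 100_000        # people scanned at a festival
suspects_present = 20       # of whom this many are actually in the watchlist
false_positive_rate = 0.01  # 1% of innocent faces incorrectly flagged as matches
false_negative_rate = 0.10  # 10% of genuine suspects missed (not flagged)

innocents = crowd_size - suspects_present
false_alerts = innocents * false_positive_rate               # innocents flagged
true_alerts = suspects_present * (1 - false_negative_rate)   # suspects flagged

total_alerts = false_alerts + true_alerts
share_wrong = false_alerts / total_alerts

print(f"alerts raised: {total_alerts:.0f}")
print(f"share of alerts that are wrong: {share_wrong:.1%}")
# With these numbers, roughly 98% of alerts point at innocent people,
# even though the system "only" misfires on 1 in 100 faces.
```

A low per-face error rate, in other words, can still produce the kind of over-90-per-cent wrongful-match figures the report describes.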

As the price of these technologies continues to fall, making it easier for other nations to follow suit, we have to ask whether we are adequately equipped for the repercussions of institutionalizing this technology and giving in to increasingly authoritarian surveillance. With instantaneous identification, advances in surveillance move us closer to a modern, and vividly real, iteration of the panopticon. We constantly have to ask – who is watching us? Should they be?
