The more accurate facial recognition gets, the more dangerous it becomes, says privacy and technology expert

Woodrow Hartzog, Professor of Law and Technology and privacy expert, and John Lloyd, Practice Lead at Securys, discuss the ethics of facial recognition technology in the lead-up to their Last Thursday in Privacy session with Jenny Brennan: “Focus on Ethics: The Global Controversy About Facial Recognition Technology”.

These preliminary conversations touched on many of the ongoing debates surrounding facial recognition technology (FRT). In response to the European Data Protection Supervisor (EDPS) calling for a moratorium on FRT, as well as on software that captures biometric data in public places, both speakers made clear that there is a great deal of uncertainty about the effectiveness of a temporary ban.

One reason for this, offered by Mr Lloyd, was that a moratorium can too easily become a tool for raising an organisation’s profile. Responding to the Association for Computing Machinery’s (ACM) call for an immediate suspension of current and future private and governmental use of FRT, he argued: “These comments only serve to exacerbate the ‘cops and robbers’ narrative, which paints the developers of the technology as villains and sets the regulators up in opposition to them rather than seeking co-operation.”

Mr Hartzog suggested that the effectiveness of any such measure will exist on a continuum. “The success of a moratorium or ban depends upon how broadly it applies and the political commitment of the institutions supporting it. Simply precluding government actors while allowing facial recognition to become entrenched in the private sector will not be effective in the long run at curbing inevitable abuses of the technology.”

“Moratoriums or voluntary suspensions instead of outright bans, while a step in the right direction, kick the can down the road while the technology still proliferates in some quarters. But certainly, any ban, moratorium, or suspension is likely to be more effective in the long run than mere procedural rules like consent or warrant requirements. These kinds of procedural rules entrench the systems and justify their use, even when it’s collectively oppressive or harmful to vulnerable and marginalized communities.”

The US government has made recent proposals to initiate a task force of government science leaders, academics, and industry representatives to create and fund a national research cloud to promote AI innovation. Mr Hartzog urged that such initiatives must be met with caution. “Of course any task force could produce useful insight and help in coalition building, but you have to be careful with these kinds of initiatives because often industry will use them as attempts to justify self-regulation or to water down attempts to meaningfully curtail the excesses and abuses of powerful digital tools.”

When discussing the relevance of the concerns surrounding FRT to the average law-abiding citizen, Mr Lloyd reminds us that every citizen is capable of some level of disobedience in the eyes of authority. “The average citizen where I live seems to break a lot of laws on a daily basis, from littering to speeding and much more.”

He adds: “One does not need to be paranoid about a surveillance society, though, to recognise that there is an overlap of liberties here which requires sensitive handling. It is naïve for anyone, especially in the UK, which leads the world in video surveillance, to believe they exist outside the impact of mass surveillance.”

Concerns about FRT’s use by law enforcement agencies were reiterated by Mr Hartzog: “Law enforcement’s use of facial recognition technology could create a pervasive atmosphere of chill. By making it easier for the police to engage in surveillance, more surveillance can occur, the mere prospect of which could routinely prevent citizens from engaging in protected activities, such as free association and free expression (from protesting to worshipping), for fear of ending up on government watchlists.”

Furthermore, according to Mr Lloyd, the use of FRT as leverage to identify those exercising their right to protest is a major concern. “Surely the key word here is not technology but recognition? Does one have the right to be hidden in a crowd, if exercising a (democratic) right to protest in public? While it is true that public protest is already subject to surveillance all over the world, […] perhaps the greater concern should be whether the facility with which people can be identified is an inhibitor to the right to protest, not least to protest against the government itself.”

The discussion ended with a final note on the potential of “ethical AI” as an appropriate route to solving some of the key challenges. Mr Hartzog criticised the “amorphous” nature of the term which, potentially, can “dull people into a false sense of security about whether AI-powered tools are hurting people or making society better or worse”.

“Certainly, AI tools can be designed to serve human values. They must if we want a more just and flourishing society. And without strong rules and frameworks to instantiate those values in the design of those tools, we surely will be worse off. But I think it is important to note that the discourse around ethical AI is a good start, but it’s not enough. We need firm, enforceable rules about how these tools are built and used to ensure they serve human values and not just the bottom line.”

To register to watch John Lloyd, Professor Woodrow Hartzog and Jenny Brennan speak on the “Focus on Ethics: The Global Controversy About Facial Recognition Technology” panel at Last Thursday in Privacy on July 30th, click here.
