“I want a moratorium on facial recognition technology”

The anonymity scarf was designed by Sanne Weekers, a student at the University of the Arts Utrecht. By overloading facial recognition systems with information, the scarf confuses them, rendering the wearer invisible. (Photo: Sanne Weekers)

We’ve gotten ahead of ourselves with some advanced technologies, particularly when it comes to biometrics and surveillance. It’s now time to hit the pause button so that scientists can address the ramifications of these innovations. Researcher Stephanie Hare argues that a moratorium on facial recognition is overdue.

A living hybrid existence

As the line between technology and humankind grows more blurred, some foresee a sort of hybrid existence in the not too distant future. How do you see this developing?

STEPHANIE HARE: In some ways, this has been the case since humans first began using tools. As I type this, I have a pen stuck behind my ear, an example of wearable tech! Anyone who carries around a notebook and pen, or a Swiss army knife, or a bag full of all the implements needed to support a baby or child when taking them out of the house, is living a hybrid existence with technology and tools. We’ve taken that to the next level in recent years with smartphones, which are minicomputers, cameras, voice recorders and communication (and surveillance) devices all in one. Some people use “wearable devices”, such as Fitbit or the Apple Watch, or they are even requesting to be microchipped. Samsung just announced smart contact lenses that can take photographs and record videos.

Stephanie Hare is a researcher exploring the nexus between technology, politics and history. She has worked as a Principal Director at Accenture Research, as a Strategist at Palantir, as a Senior Analyst at Oxford Analytica, as an Alistair Horne Visiting Fellow at St Antony’s College, Oxford, and as a Consultant at Accenture. She holds a PhD and a Master of Science from the London School of Economics and a BA in Liberal Arts and Sciences (French) from the University of Illinois at Urbana-Champaign. Her book Technology Ethics is slated for publication later this year. (Photo: Mitzi de Margary)

Technological advances are the catalyst for innovation in society, forcing lawmakers, regulators, academia, journalism, ethics and the arts to play catch-up. How do you explain the delay between progress and society’s reaction to it?

Although technology sometimes spurs innovation in society, it is also often a response to social, economic, political and even environmental changes. It’s a two-way street, but it’s even more than that: It is organic, multidimensional and holistic. Technological innovation acts on the world, and it is acted upon by it. I view it as a force, and an important one. But I don’t worship it as the force. It is one of the many lenses through which I consider the world.

Checks and balances

Nascent technologies like AI and Big Data can also be used to infringe on our rights. Scandals like Cambridge Analytica appear to be only the tip of the iceberg. What does the age of AI mean for people living in democracies?

I am very concerned about the development of biometrics and surveillance technologies in liberal democracies, which is taking place without any real challenge or checks and balances from lawmakers or regulators. This, in turn, is forcing civil liberties groups, researchers and concerned citizens to raise awareness of the risks and mount legal challenges. Biometrics are some of our most powerful data – our DNA, fingerprints, face, voice, etc. And the use of our biometrics by law enforcement, governments and the private sector without a democratic debate and legal framework to enshrine our rights in law is one of the biggest threats I see to democracies and citizens today.

I have called on lawmakers to pass laws that protect our biometric rights, since the opportunities for abuse here are many and, in the extreme scenarios, terrifying. We also need to empower our regulators so they can investigate possible abuses, take action to hold people accountable and, ideally, incentivise companies to protect people’s rights in the design and deployment of their technologies.

Do you think that companies and governments can be relied upon to ensure that some of the worst abuses don’t become reality? To police themselves?

The core concepts of transparency, accountability and responsibility do not feature in how a lot of companies and governments currently use AI and how they plan to use it. This is bad for everyone. Transparency means the ability to know how our data is being used, for what purposes, who it is shared with, how long it is kept, etc. Accountability means the ability to interrogate a decision that is made using our data. Why were we denied a mortgage, for instance, and what can we do to improve our chances the next time we apply? Why did an AI-powered recruiting tool that scans our curriculum vitae not select us to proceed to an interview? Why did AI-powered medical software upgrade our diagnosis from not serious to serious? And responsibility means taking care to ensure that using our data in any way, but especially with AI, does not violate our rights or have harmful consequences.
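To make that idea of interrogating a decision concrete, here is a minimal sketch in Python. The feature names, weights and threshold are purely hypothetical illustrations, not any real lender’s model; the point is only that an interpretable scoring system can report which factors drove a decision, rather than returning a bare yes or no.

```python
import numpy as np

# Hypothetical mortgage-scoring model: a simple logistic regression whose
# per-feature contributions can be reported back to the applicant.
# Feature names, weights and threshold are illustrative assumptions only.
FEATURES = ["income_to_loan_ratio", "years_employed", "missed_payments", "existing_debt_ratio"]
WEIGHTS = np.array([2.0, 0.3, -1.5, -2.5])
BIAS = -1.0
APPROVAL_THRESHOLD = 0.5

def score(x: np.ndarray) -> float:
    """Probability of approval under the toy model."""
    return 1.0 / (1.0 + np.exp(-(WEIGHTS @ x + BIAS)))

def explain(x: np.ndarray) -> None:
    """Report the decision and each feature's contribution to it,
    so the applicant can see what drove the outcome."""
    p = score(x)
    decision = "approved" if p >= APPROVAL_THRESHOLD else "declined"
    print(f"Decision: {decision} (score {p:.2f})")
    # Sort so the factors that hurt the application most are listed first.
    for name, w, value in sorted(zip(FEATURES, WEIGHTS, x), key=lambda t: t[1] * t[2]):
        print(f"  {name} = {value:.2f} contributed {w * value:+.2f}")

# Example applicant (values are illustrative, e.g. normalised ratios).
applicant = np.array([0.4, 2.0, 1.0, 0.9])
explain(applicant)
```

Real credit-scoring systems are far more complex, and opaque models need dedicated explanation techniques, but the principle is the same: if a system cannot produce this kind of account of its decision, it is hard to see how it can be held accountable.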

Companies try to hide behind the excuse of “intellectual property and proprietary software”, but that is unacceptable when we are talking about decision-making that affects people’s lives, that may infringe on their civil liberties, that may be riddled with bias and other forms of discrimination. It is particularly unacceptable when it is being used on taxpayers and paid for by taxpayers. If you don’t want to expose your algorithms and data gathering practices to audit and scrutiny, then you also cannot expect to get public sector contracts. This technology must be held to account.

A stress test of democratic institutions

How do our institutions need to change to adapt to the sweeping changes brought about by technological advances?

We are seeing a stress test of our democratic institutions, but this has always been the case. We have adapted them when we need to. I would like to see technology play a role in improving the functioning of democracy. For that, though, we need to be able to trust technology – and that is where we are currently so underserved. We have fake news. We have online radicalisation. We have voting machines that are hackable and many that are not backed up by paper, so there is every reason not to have confidence in them. We have lawmakers who are unable, for example, to hold Facebook CEO Mark Zuckerberg to account for his firm’s egregious abuses of millions of people’s data. The reason is that the majority of these lawmakers have not even learned about technology, which means they are in no position to craft laws that will protect us. And we have regulators, such as the US Federal Trade Commission (FTC), that fined Facebook only $5 billion for those data abuses, while simultaneously granting the company’s officers legal immunity for anything that happened before 12 June 2019. That’s a bargain for Facebook! And it sends a signal to the rest of society that the FTC is not prepared to punish Facebook.

In the United Kingdom, our parliament has been stuck in quicksand over Brexit. As a result, lawmakers have done nothing to pass laws on biometrics and surveillance technologies, even though the Science and Technology Committee in the House of Commons called for a moratorium, and the Surveillance Camera Commissioner, the Biometrics Commissioner and the Information Commissioner’s Office – all three of the main data regulators – have repeatedly urged new legislation. Yet parliamentarians do nothing, the government does nothing and, meanwhile, the police and the private sector keep rolling out this technology. So, we have to wait for a verdict on a landmark legal action to see whether this sorry state of affairs will continue.

Dystopia/Utopia

In China, we are seeing how AI can infringe on people’s rights through the social credit system, which has seen people barred from boarding planes or trains – with no appeals process. China has amassed so much data on its citizens that it can easily be used for subtle manipulation or to discriminate against or incarcerate entire segments of the population. Is this the dystopia we were warned about?

One person’s dystopia may be another’s utopia, or at least another person’s neutral. It all depends on our values and the priorities we set as a result of those values. China is building a system that reflects the values and priorities of the Communist Party and possibly also many other stakeholders. Part of the problem with studying China is that it is not a free society, so people may not be able to express their opinions about whether they consider the system there to be a dystopia.

Certainly, for the Uighur Muslims – who are monitored down to their biometrics, including their DNA, face, voice and fingerprints, and over 1 million of whom are being held against their will in concentration camps – the system in China is authoritarian and even totalitarian. But there may be other people in China who are fine with all this, and others who may not like it but don’t feel they can do anything about it.

For liberal democracies, it’s trickier: We have a tradition of human rights and civil liberties, and some of us even believe in it. So, the introduction of biometric and surveillance technologies really challenges us. The fact that we are currently introducing them with no legal framework creates an even bigger challenge. We are seeing some pushback against companies and the government gathering and keeping so much data. Companies may change before governments do on this if they sense that their consumers value privacy and data security more than the trade-offs currently on offer, which are mainly convenience and so-called “free” services (that is, free in exchange for our data). Governments may take longer to realise that all this data collection does not actually make us safer – data collection is itself a security threat. It will be interesting to see what the tipping point will be for governments to start viewing poor data security as a threat to the economy and national security. It’s only a matter of time.

A moratorium on facial recognition

Would you recommend temporary prohibitions on certain new technologies until we can determine their downsides and upsides, and until scientists have given appropriate consideration to the influence their innovations will have on societies from a human rights and democracy standpoint?

I want a moratorium on facial recognition technology use by the police, other branches of government and the private sector, with immediate effect and lasting at least until lawmakers have passed laws that enshrine our rights over data relating to our bodies. That is the bare minimum. Companies have already shown that they cannot be trusted to self-regulate on this; on the contrary, they are building, testing and deploying these technologies enthusiastically.

As for how to make scientists and technologists think about the implications and consequences of their work, many are already doing this. For those who are not, there is a wealth of scholarship in the Science and Technology Studies (STS) discipline. The discipline has existed since the inter-war period and really began to take off after the Second World War, and it offers some of the most fascinating thinking I’ve ever come across, contextualising hard science and technology in ethics, law, culture, history, sociology, gender studies, etc. It’s one of the most exciting and dynamic fields, but there is some truth to the criticism that it is not well known inside most of our technology companies, and possibly not in companies dealing with other hard sciences (defence, chemicals and biotech, for instance), nor has it received the attention it deserves in business, law, politics and journalism.

Your new book, Technology Ethics, is set for publication soon. What aspects of the topic are the focus of the book?

I hope to deliver actionable insights so that the book is as useful and provocative to a computer scientist or engineer as it is to a C-suite executive, board member, lawmaker, journalist or even an ordinary citizen who wants to engage with my main question, namely: How do we build and use technologies so that they maximise benefits and minimise harm?

This interview is part of a collection of essays and interviews by Alexander Görlach:

Shaping A New Era – how artificial intelligence will impact the next decade (PDF)
