Facial recognition has boomed in recent years, and governments and police departments around the world have been quick to implement the technology.
But now, several tech companies have stopped selling their facial recognition software.
Let’s take a look at what happened.
The downfall of facial recognition tech
It all started on June 8, 2020, when IBM announced it would exit the facial recognition market.
IBM CEO Arvind Krishna cited fears that such technology could be used to promote racial discrimination and injustice.
He also wrote a letter to Congress calling for racial justice reform:
IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency. (…) Artificial intelligence is a powerful tool that can help law enforcement keep citizens safe. But vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported.
The statement came in the wake of the death of George Floyd, shortly after Black Lives Matter protests began across the US and the world.
IBM is not alone in this fight
Shortly after, on June 10, Amazon announced it would pause police use of its facial recognition service, Rekognition, for a whole year.
This decision comes after a two-year battle between Amazon and civil liberties activists, who have voiced concerns that inaccurate face matches could lead to unjust arrests.
Amazon’s public statement shows that the Black Lives Matter protests might have also played a role in the moratorium on police use of Rekognition.
Rival Microsoft also took a stand
Microsoft has also voiced support for the Black community these past weeks.
But the real kicker came when the company announced a ban on police use of its facial recognition technology until federal regulation is in place.
We will not sell facial-recognition technology to police departments in the United States until we have a national law in place, grounded in human rights, that will govern this technology. (…) The bottom line for us is to protect the human rights of people as this technology is deployed.
Microsoft President Brad Smith, at a Washington Post Live event
Other companies, like Google, have yet to comment on their stance.
Facial recognition is still a controversial topic
These recent developments might come as a surprise, but they’re a step in the right direction.
Facial recognition plays a part in unlocking your phone, keeping an eye on who’s in your neighborhood, preventing identity theft, using cool social media filters, or even tracking cheaters in gambling.
But critics have been up in arms about the technology's privacy risks for a while now.
While there is easy-to-use software, like a VPN, to protect your online data, things are more complicated when it comes to your physical identity.
Facial recognition software can only work alongside a rich database of facial images; that's the only way to train the AI to detect faces and match them against known identities.
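At its core, that matching step usually boils down to comparing numeric "embeddings" of faces against a database and accepting the nearest one if it is close enough. Here is a minimal sketch of that idea; the embeddings, names, and the 0.6 distance threshold are all made up for illustration, not taken from any real system:

```python
import numpy as np

# Hypothetical 128-dimensional face embeddings, standing in for the
# vectors a real face recognition model would produce for each person.
rng = np.random.default_rng(0)
database = {name: rng.normal(size=128) for name in ["alice", "bob", "carol"]}

def match_face(probe, database, threshold=0.6):
    """Return the closest identity, or None if no face is close enough."""
    best_name, best_dist = None, float("inf")
    for name, embedding in database.items():
        dist = np.linalg.norm(probe - embedding)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None

# A probe embedding near "bob" matches; an unrelated face does not.
probe = database["bob"] + rng.normal(scale=0.01, size=128)
print(match_face(probe, database))                  # "bob"
print(match_face(rng.normal(size=128), database))   # None (unknown face)
```

The threshold is where false positives come from: set it too loose and strangers start matching entries in the database, which is exactly the failure mode critics worry about.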
One such example is Clearview AI, a company that claims to have devised a groundbreaking facial recognition app.
Its database has more than three billion images, but they were scraped from Facebook, YouTube, Venmo, and millions of other websites.
All this without user consent or knowledge, of course.
Other facial recognition systems get their data from CCTV footage, surveillance systems, social media apps, and sometimes even police databases.
Facial recognition software has long been criticized for its racial biases.
Machines learn to identify a face after being trained on millions of pictures of human faces. But they often have little diversity to train on.
As a result, they have difficulty recognizing and matching faces across different ethnicities.
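This kind of bias shows up as unequal accuracy across demographic groups, which is why auditors break error rates down per group rather than reporting one overall number. A minimal sketch of such an audit, using entirely made-up records rather than real results:

```python
from collections import defaultdict

# Hypothetical audit records: (demographic_group, was_match_correct).
# These numbers are invented purely to illustrate the calculation.
results = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def accuracy_by_group(results):
    """Compute match accuracy separately for each demographic group."""
    totals, correct = defaultdict(int), defaultdict(int)
    for group, ok in results:
        totals[group] += 1
        correct[group] += ok
    return {g: correct[g] / totals[g] for g in totals}

print(accuracy_by_group(results))  # {'group_a': 0.75, 'group_b': 0.25}
```

A single headline accuracy figure would hide the gap between the two groups here, which is the point Krishna's letter makes about auditing and reporting bias tests.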
No regulations in place
Technology usually evolves much quicker than the laws regulating it. And this leads to various approaches taken in courts or by privacy regulators around the world.
For example, in South Wales, it’s legal for the police to use facial recognition technology. But in San Francisco, it’s banned due to the lack of a regulatory framework. That’s a fancy way of saying there aren’t enough laws in place.
In China, facial recognition is mandatory for buying SIM cards or getting access to basic commodities.
Facial recognition and police overreach
Just like surveillance cameras, facial recognition is often touted as a security measure. That might explain why so many police departments were quick to adopt it.
It all sounds good, in theory. If you can track criminals when they go into public spaces, you can easily capture them and bring them to justice.
But facial technology is far from perfect.
For example, during a football game in 2017, facial recognition software screened the crowd. The system flagged 2,470 individuals, and over 2,000 of those fans were mistakenly identified as potential offenders.
A whopping 92% were false positives.
Lookalikes confuse the software
Given this staggering statistic, it’s easy to see how facial recognition can lead to human rights abuses.
The people of Hong Kong experienced this firsthand when facial recognition was used to target and arrest peaceful protesters.
Now, with the BLM protests in the US, things aren't looking any rosier, especially since police have a history of abusing facial recognition systems.
THREAD: Here's what EFF and our friends have put together so far to help you stay safe during protests, including pictures to help you recognize police surveillance devices, printable guides to help you protect your digital rights, and steps to take before AND after protesting.— EFF (@EFF) June 14, 2020
In a report released by the Center on Privacy & Technology, analysts explained how police agencies across the US misused the software, referencing cases from the NYPD.
On multiple occasions, when blurry or flawed photos of suspects have failed to turn up good leads, analysts have instead picked a celebrity they thought looked like the suspect, then run the celebrity’s photo through their automated face recognition system looking for a lead.
That’s a recipe for disaster, allowing unlawful arrests to happen.
What’s in store for the future?
While IBM, Amazon, and Microsoft are now pushing back against unregulated facial recognition, this isn't the end of the technology.
For Huawei, NEC, Hikvision, and Clearview AI, it’s still business as usual. It doesn’t look like facial recognition is going away anytime soon.
But what do you think?
Will the authorities feel pressured enough to adopt better legislation? Do you think this decision will affect how facial recognition is used around the world?
Let me know your thoughts in the comments section below.
Until next time, stay safe and secure!