Update, 13th July 2020: The Australian and British data protection authorities have opened a joint investigation into the controversial facial recognition technology company Clearview AI.

Facial recognition is a technology capable of verifying or identifying a person from a digital image or a frame of video footage. It is widely used for tasks ranging from unlocking your phone with your face to identifying criminals. Realising its potential, governments around the world are investing in facial recognition systems for use in law enforcement. Australia is one such country, and it is developing its own system – The Capability.
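
At its simplest, verification is a one-to-one comparison: does a new photo match a stored template of a known face? The sketch below illustrates that flow using the open-source face_recognition Python library; the file names are hypothetical, and production systems (phone unlock included) use proprietary models, so treat this as a minimal illustration of the idea rather than how any particular product works.

```python
# A minimal sketch of one-to-one face verification (the "unlock your phone"
# case), using the open-source `face_recognition` library.
# The file names below are hypothetical; any two face photos will do.
import face_recognition

enrolled = face_recognition.load_image_file("enrolled_user.jpg")
attempt = face_recognition.load_image_file("unlock_attempt.jpg")

# Each encoding is a 128-dimensional vector representing one detected face.
enrolled_encoding = face_recognition.face_encodings(enrolled)[0]
attempt_encoding = face_recognition.face_encodings(attempt)[0]

# compare_faces returns [True] if the two encodings are within the library's
# default distance threshold (0.6), i.e. likely the same person.
match = face_recognition.compare_faces([enrolled_encoding], attempt_encoding)
print("Unlocked" if match[0] else "Access denied")
```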

The Capability – Australia’s Facial Recognition System

Last year, the Australian federal and state governments agreed to develop a powerful facial recognition and surveillance system for criminal identification. The Capability’s database will hold biometric information pulled from citizens’ passports, visas, driving licences and other government-approved identification sources. To identify criminals, the images pooled in The Capability will be compared against pictures of suspects captured by surveillance equipment, such as CCTV cameras. Though this sounds promising, experts say that facial recognition technology is prone to a substantial error rate.

The error rate in facial recognition technology

The facial matching system being developed by Australia is prone to error, especially against people of colour. Australia’s facial recognition system is largely based on the model employed by the US Federal Bureau of Investigation. A US House committee investigation of the FBI’s system found high error rates in identifying African Americans. Monique Mann, a director of the Australian Privacy Foundation, says there are studies that strongly suggest a racial bias in facial recognition systems. Researchers from MIT found that commercial facial-analysis systems were most accurate for white men (an error rate of less than 1%), while for dark-skinned women the error rate was as high as 34% [1].

Prof Liz Campbell of Monash University said that in Britain and Australia, facial recognition technology was good at identifying white men. She added that this might not be an inherent bias in the algorithms; rather, the technology is good at identifying people who look like its creators, because the pool of people on whom the algorithms are tested is narrow.

Apart from the risk of bias, there is another concern that has been repeatedly raised by lawyers, academics, human rights groups and privacy experts: that the technology could be used for mass surveillance and have a profoundly chilling effect on public protest and dissent. These professionals, along with tech companies, fear that law enforcement could misuse facial recognition to make wrongful arrests at protests. Following the killing of George Floyd and the protests that erupted around the world, several tech companies took steps towards the more responsible distribution of this technology.


1. IBM

American multinational tech company IBM decided to pull out of the facial recognition market and called for a national dialogue on the technology. IBM CEO Arvind Krishna explained the decision in a public letter sent to Congress on June 8th 2020. The letter also expressed an intention to work with Congress in the pursuit of racial equality and justice, including on policy reforms that focus on the responsible use of facial recognition technology. IBM also said that it firmly opposes, and will not condone, the use of facial recognition technology offered by other vendors for mass surveillance, racial profiling, or violations of basic human rights and freedoms, saying such uses are not consistent with its values and principles of trust and transparency.

2. Microsoft

Brad Smith says Microsoft won’t sell facial recognition tech to police

On 11th June, Microsoft also backed off from selling its facial recognition products to law enforcement until there are federal laws in place regulating the technology. An MIT Media Lab study had identified racial and gender bias in facial recognition technology built by IBM, Microsoft and the Chinese tech company Megvii. On a dataset of 1,270 faces, the error rate was 1% for light-skinned men, 7% for light-skinned women, 12% for dark-skinned men and 35% for dark-skinned women [2]. Shortly after the study was released, IBM and Microsoft said they would improve their software.

SEE ALSO: Microsoft bans sale of facial recognition technology to the U.S. police department

3. Amazon

Amazon has created its own facial recognition system called Rekognition. The technology was launched in 2016 as part of Amazon Web Services (AWS), Amazon’s cloud computing division. Acknowledging the potential for misuse, Amazon put a one-year moratorium on selling the software to police departments, though experts criticised the time limit, especially when congressional action might not take place within a year. Amazon’s Rekognition is also prone to errors. A study conducted by the ACLU found the software incorrectly matched the faces of 28 lawmakers with mugshots [4]. Another study, from MIT Media Lab, found that Rekognition correctly identified the gender of lighter-skinned men, but mistook women for men 19% of the time and mistook dark-skinned women for men 31% of the time [3].
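
For context, the ACLU test was a one-to-many comparison run through Rekognition’s public API at its default 80% similarity threshold. The sketch below shows what a single pairwise comparison looks like using the real boto3 compare_faces call; the file names are hypothetical, valid AWS credentials are assumed, and this is only an illustration of the kind of query the ACLU ran, not its exact methodology.

```python
# A rough sketch of a Rekognition face comparison via boto3 (the real AWS SDK).
# File names are hypothetical; AWS credentials and region must be configured.
import boto3

client = boto3.client("rekognition")

def faces_match(photo_path, mugshot_path, threshold=80):
    """Ask Rekognition whether two photos appear to show the same person."""
    with open(photo_path, "rb") as f:
        source_bytes = f.read()
    with open(mugshot_path, "rb") as f:
        target_bytes = f.read()
    response = client.compare_faces(
        SourceImage={"Bytes": source_bytes},
        TargetImage={"Bytes": target_bytes},
        SimilarityThreshold=threshold,  # 80 is the service default the ACLU used
    )
    # FaceMatches is non-empty only if a face exceeded the similarity threshold.
    return len(response["FaceMatches"]) > 0

print(faces_match("lawmaker.jpg", "mugshot.jpg"))
```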

Though tech companies are refraining from selling their facial matching products to law enforcement in the name of gender and racial equality, one company has built facial recognition technology specifically for law enforcement – Clearview AI.

About Clearview AI

Clearview AI – Facial Recognition Technology

Clearview AI is an American facial recognition company that has developed a system that works, in essence, like an image search engine. The company sells this technology to federal agencies and law enforcement, along with some private companies. Clearview AI has been relentlessly mining pictures online to create a vast database – now 3 billion photos, possibly more – which it appears to have scraped, without permission, from Facebook, Google, Venmo, Twitter, YouTube and other sources. How does it work? Someone takes a picture of a person, uploads it to Clearview AI’s app and sees every public photo available of that person, along with links to each photo’s source.
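
Clearview’s implementation is proprietary and has never been independently vetted, but the general shape of such a system – encode every scraped face as a vector, then find the stored vectors nearest to a query face – can be sketched with the same open-source library used above. Everything here (file names, URLs, the 0.6 threshold) is illustrative, not a description of Clearview’s actual code.

```python
# A minimal sketch of one-to-many face search, NOT Clearview's actual system.
# Assumes the open-source `face_recognition` library; file names and URLs
# below are hypothetical stand-ins for scraped public photos.
import face_recognition

SOURCES = {
    "photo1.jpg": "https://example.com/profile1",
    "photo2.jpg": "https://example.com/profile2",
}

# Build the index offline: one 128-d encoding per photo with a detectable face.
index = []
for path, url in SOURCES.items():
    encodings = face_recognition.face_encodings(face_recognition.load_image_file(path))
    if encodings:
        index.append((url, encodings[0]))

def search(query_path, threshold=0.6):
    """Return source URLs whose indexed face is within `threshold` of the query."""
    query = face_recognition.face_encodings(face_recognition.load_image_file(query_path))
    if not query:
        return []  # no face detected in the query photo
    distances = face_recognition.face_distance([enc for _, enc in index], query[0])
    return [url for (url, _), d in zip(index, distances) if d <= threshold]

print(search("query.jpg"))
```

A real deployment at billions of photos would replace the brute-force distance scan with an approximate nearest-neighbour index, but the input and output – one face in, a list of source links out – are the same.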

Earlier this year, tech giants Facebook, Google and YouTube sent cease-and-desist letters to Clearview AI, and Twitter and LinkedIn took similar actions against the company. But Clearview AI CEO Hoan Ton-That, in an interview with CBS, defended his company and its practices, saying it is his First Amendment right to collect public photos. Ton-That also compared his company’s practices to what Google’s search engine does. In the interview with Errol Barnett, he said, “Google can pull in information from all different websites. If this is public and could be inside Google search engine, it can be inside ours as well”.

According to The New York Times, the system goes far beyond anything ever created by the Silicon Valley giants or the US government. Today, about 600 law enforcement agencies are using Clearview, according to the company. The NY Times also analysed the code underlying the app and found that it is built to allow pairing with augmented reality glasses, meaning the tool could potentially identify people it sees in real time. It could be used to identify protestors, or even a stranger on a train, revealing far more than just a name: friends, family, co-workers, school and college are just the surface of the information one could get from the tool. The controversial company also readily offers free trials to “active law enforcement personnel”, though it remains unclear how the company verifies this beyond requiring a government email address.

What makes the technology problematic, even scary, is that the company’s facial recognition tool has been made available to a handful of private companies as well, apart from law enforcement. What is scarier is that the technology has not been vetted by any independent expert, so its accuracy remains unclear.

Recent reports indicate that Clearview AI’s client list also includes Walmart, Best Buy, Macy’s and the NBA. However, Ton-That said this May that the company would no longer sell its product to private companies and other non-law-enforcement entities. It also terminated its existing contracts, law enforcement or otherwise, in the state of Illinois.

Recently, a security lapse at Clearview AI exposed its source code, along with its cloud storage credentials and a set of secret keys; copies of its apps were also made publicly accessible. On April 17th, TechCrunch reported that this exposed cache of information was discovered by Mossab Hussein, CSO of the security firm SpiderSilk. Hussein found that the leaked information was enough for anyone to register as a new user and access the database.

Apart from Clearview and the other tech giants, there could be several more surveillance tools that police can access. It is difficult, if not impossible, for the general public to determine which surveillance systems are used by police departments; law enforcement and tech companies are usually secretive about these products, which are by no means limited to facial recognition. In the age of information, the right to privacy is a privilege, and it seems like nobody has that privilege anymore.


SEE ALSO: Artificial Intelligence Develops a Whole New Sport Named ‘Speedgate’

For more updates and latest tech news, keep reading iTMunch

Image Courtesy

Featured Image: Jack Moreh — Facial Recognition Concept

Image 1: Microsoft

Image 2: Clearview AI Logo

Sources

[1] Hardesty, L. (2018) “Study finds gender and skin-type bias in commercial artificial-intelligence systems” MIT News [Online] Available from: http://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212 [Accessed June 2020]

[2] Buolamwini, J. (2018) “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification” MIT Media Lab [Online] Available from: http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf [Accessed June 2020]

[3] Raji, I. D. (2019) “Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products” [Online] Available from: https://www.aies-conference.com/2019/wp-content/uploads/2019/01/AIES-19_paper_223.pdf [Accessed June 2020]

[4] Snow, J. (2018) “Amazon’s Face Recognition Falsely Matched 28 Members of Congress With Mugshots” ACLU [Online] Available from: https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28 [Accessed June 2020]