
The Capability – Australia’s facial recognition tech & how accurate it might be

Update, 13th July 2020: The Australian and British data protection authorities have opened a joint investigation into the controversial facial recognition company Clearview AI.

Facial recognition is a technology that can verify or identify a person from a digital image or a frame of video footage. It is widely used for tasks ranging from unlocking a phone with your face to identifying criminal suspects. Recognising its potential, governments around the world are investing in facial recognition systems for use in law enforcement. Australia is one such country, and its system is called The Capability.
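The distinction between verifying (one-to-one: is this the claimed person?) and identifying (one-to-many: who in a database is this?) can be sketched in code. This is a simplified illustration only — the embeddings, names and the 0.8 threshold below are made-up toy values, and real systems derive embeddings from images with a neural network; it is not a description of how The Capability actually works.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def verify(probe, reference, threshold=0.8):
    """Verification (1:1): is the probe the same person as the reference?"""
    return cosine_similarity(probe, reference) >= threshold

def identify(probe, gallery, threshold=0.8):
    """Identification (1:N): best match in a gallery of known identities, or None."""
    best_name, best_score = None, threshold
    for name, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# Toy embeddings; a real system would compute these from face images.
gallery = {"alice": [0.9, 0.1, 0.2], "bob": [0.1, 0.9, 0.3]}
probe = [0.88, 0.15, 0.22]

print(verify(probe, gallery["alice"]))  # True: similarity well above threshold
print(identify(probe, gallery))         # alice
```

The threshold is the key policy knob: lowering it catches more true matches but also produces more false matches, which is where the error rates discussed below come from.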

The Capability – Australia’s Facial Recognition System

Last year, the Australian federal and state governments agreed to develop a powerful facial recognition and surveillance system for criminal identification. The Capability's database will hold biometric information pulled from citizens' passports, visas, driving licences and other government-approved identification sources. To identify criminals, the information pooled in The Capability will then be compared against images of suspects captured by surveillance equipment, such as CCTV footage. Though this sounds promising, experts say facial recognition technology is prone to substantial error rates.

The error rate in facial recognition technology

Australia's facial matching system is prone to error, especially for people of colour. It is largely based on the model employed by the US Federal Bureau of Investigation, and a US House of Representatives committee investigation of the FBI's system found high error rates in identifying African Americans. Monique Mann, director of the Australian Privacy Foundation, says studies strongly suggest a racial bias in facial recognition systems. Researchers from MIT found certain commercial facial-analysis systems were most accurate for white men (an error rate of less than 1%), while for dark-skinned women the error rate was as high as 34% [1].
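To put those figures in perspective, here is a back-of-envelope calculation using only the error rates cited above; the volume of 1,000 analyses is an illustrative assumption, not a figure from the article.

```python
# The 1% and 34% error rates are those reported in the MIT study [1];
# 1,000 analyses is a hypothetical volume for illustration.
analyses = 1000

for group, error_rate in [("white men", 0.01), ("dark-skinned women", 0.34)]:
    expected_errors = round(analyses * error_rate)
    print(f"{group}: ~{expected_errors} errors per {analyses} analyses")
```

At those rates, the same workload that misclassifies roughly 10 white men would misclassify roughly 340 dark-skinned women — the kind of disparity that worries the experts quoted below.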

Prof Liz Campbell of Monash University said that in Britain and Australia, facial recognition technology is good at identifying white men. She adds that this may not reflect an inherent bias in the algorithms themselves: the technology is good at identifying people who look like its creators, because the pool of people the algorithms are tested on is narrow.

Apart from the risk of bias, another concern has been repeatedly raised by lawyers, academics, human rights groups and privacy experts: that the technology could be used for mass surveillance and have a profoundly chilling effect on public protest and dissent. These professionals, along with tech companies, fear law enforcement could misuse facial recognition to make wrongful arrests at protests. Following the killing of George Floyd and the protests that spread around the world, several tech companies took steps towards more responsible distribution of the technology.





Image Courtesy

Featured Image: Jack Moreh — Facial Recognition Concept

Image 1: Microsoft

Image 2: Clearview AI Logo


[1] Hardesty, L. (2018) “Study finds gender and skin-type bias in commercial artificial-intelligence systems” MIT News [Online] Available from: [Accessed June 2020]

[2] Buolamwini, J. (2018) “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification” MIT Media Lab [Online] Available from: [Accessed June 2020]

[3] Raji, I. D. (2019) “Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products” [Online] Available from: [Accessed June 2020]

[4] Snow, J. (2018) “Amazon’s Face Recognition Falsely Matched 28 Members of Congress With Mugshots” ACLU [Online] Available from: [Accessed June 2020]


Riddhi Jain
Riddhi Jain is a technology content writer based in India. She has been working as a content writer since 2018 and writing in the tech domain since May 2020 — and can’t get enough of it. Riddhi completed most of her education in her hometown, Indore, graduating with a Bachelor of Business Administration, and discovered her love for writing blogs during a college internship. Once she discovered that love, she went on honing the skill (and hasn’t stopped since).