My Opening Remarks to the Access to Information, Privacy and Ethics Committee on Facial Recognition Technology

Ana Brandusescu
Mar 24, 2022

On March 21, 2022, I testified as an individual before Canada's House of Commons Standing Committee on Access to Information, Privacy and Ethics (ETHI) on the use and impact of facial recognition technology. These are my opening remarks.


Good afternoon Mr. Chair and members of the Committee. Thank you for having me here today. My name is Ana Brandusescu. I research governance and procurement of artificial intelligence (AI) technologies, particularly by government. That includes facial recognition technology (FRT). I will present two issues and three solutions today.

The first issue is discrimination

FRT is better at distinguishing white male faces than Black, brown, Indigenous, and trans faces. We know this from groundbreaking work by scholars like Joy Buolamwini and Timnit Gebru: their study found that "darker-skinned females are the most misclassified group (with error rates of up to 34.7%). In contrast, the maximum error rate for lighter-skinned males is 0.8%." FRT also generates many false positives, meaning it identifies you as someone you are not. This can lead agents of the state to arrest the wrong person. Journalist Khari Johnson recently wrote in Wired about three Black men in the US who were wrongfully arrested after FRT misidentified them. Misidentification could also cost someone a job or lead an insurance company to deny them coverage. FRT is more than problematic. The 2021 report of the House of Commons Standing Committee on Public Safety and National Security states that there is systemic racism in policing in Canada. FRT exacerbates that systemic racism.

The second issue is the lack of regulatory mechanisms

In a report I co-authored with privacy and cybersecurity expert Yuan Stevens for the Centre for Media, Technology & Democracy, we wrote that "as taxpayers, we are essentially paying to be surveilled, where companies like Clearview AI can exploit public sector tech procurement processes." Regulation is difficult. Why? Like much of Big Tech, AI crosses political boundaries. It can also evade procurement policies, as when Clearview AI offered free software trials. And because FRT is embedded in opaque, complex systems, it can be hard for a government to know that FRT is even part of a software package. In June 2021, the Office of the Privacy Commissioner (OPC) was clear that system checks are needed to ensure the Royal Canadian Mounted Police (RCMP) complies with the law when using new technologies. However, the RCMP's response to the OPC favoured industry self-regulation. Self-regulation, for example in the form of algorithmic impact assessments, can be insufficient. Much of AI regulation is essentially a volunteer activity.

So what is the way forward?

Government entities large and small have called for a ban on the use of FRT, and some have already banned it. That should be the end goal. The Montréal Society and AI Collective, which I contribute to, participated in the 2021 public consultation on the Toronto Police Services Board's draft AI policy. Here I extend some of those recommendations along with my own. I propose three solutions:

The first solution is to improve public procurement

Clearview AI got away with what it did across multiple jurisdictions in Canada because there was never a contract or procurement process involved. To prevent this, the OPC should create a policy requiring proactive disclosure of free software trials used by law enforcement, and by all of government, and create a public registry for those trials. We need to glass box the black box. We need to know what we are being sold. We also need to increase in-house AI expertise; otherwise, we cannot be certain agencies even know what they are buying. Finally, companies linked to human rights abuses, like Palantir, should be removed from Canada's pre-qualified AI supplier list.

The second solution is to increase transparency

The OPC should work with the Treasury Board of Canada Secretariat (Treasury Board) to create a second public registry, this one for AI technologies, especially AI used for law enforcement and national security purposes, and by agencies contemplating face ID for social assistance programs such as employment insurance. A public AI registry would help researchers, academics, and investigative journalists inform the public. We also need to improve our algorithmic impact assessments (AIAs). AIAs should engage more meaningfully with civil society, yet the only external non-governmental actors consulted in Canada's four published AIAs* were companies. The OPC should work with the Treasury Board to develop more specific, ongoing monitoring and reporting requirements so the public knows if the use or impact of a system has changed since the initial AIA.

The third solution is to prioritize accountability

From the inside: the OPC should follow up on the RCMP's privacy commitments and demand a public-facing report that explains in detail the use of FRT in its unit. This can be applied to all departments and agencies in the future. And from the outside: the OPC and the Treasury Board should fund and listen to civil society and community groups working on social issues, not only tech-related issues.

Thank you.

*In my speech, I mentioned three published AIAs, but there are four.

This is the first time I "appeared" and testified before the House of Commons of Canada. Renee Sieber, thank you for the extensive, invaluable feedback and continuous support. Thank you Teresa Scassa and Brenda McPhail for all the advice, and Yuan Stevens for connecting us on this and for your support and enthusiasm. Jess Reia and Blair Attard-Frost, thank you for all the insights; and Jonathan van Geuns, thank you for listening to me rehearse my five minutes a thousand times.
