Artificial intelligence (AI) is a powerful tool.
In the hands of police and other criminal justice agencies, however, AI can lead to wrongful arrests and other harms. Detroit resident Robert Williams, for example, was arrested in front of his children and held overnight in custody after an AI facial recognition system returned a false positive. He eventually learned that a flawed AI system had identified him as a suspect, and after his name sat in the system for years, he had to sue the police and local government for wrongful arrest to have it removed.
The same thing happened to Michael Oliver of Detroit and Nijeer Parks of New Jersey. The three men have two things in common: all were victims of false positives from AI facial recognition systems, and all are Black men.
Indeed, AI facial recognition systems have been shown to misidentify people of color at much higher rates, with one study finding error rates as high as 35 percent for darker-skinned women.
These examples highlight significant issues around the use of AI in policing and the law, particularly at a time when AI is being used more than ever before in the criminal justice system and in the public and private sectors.
Canada: New Laws, Old Problems
Canada is currently considering two new pieces of legislation that will have a significant impact on the use of AI in the coming years. Both pieces of legislation lack provisions to protect the public when it comes to police use of AI. As academics who study computer science, policing, and the law, we are troubled by these gaps.
In Ontario, Bill 194, or the Strengthening Cybersecurity and Building Trust in the Public Sector Act, focuses on the use of AI in the public sector.
Federal Bill C-27 would enact the Artificial Intelligence and Data Act (AIDA). While AIDA's focus is on the private sector, it also has implications for the public sector, as the government has many public-private partnerships.
Police services both own and operate AI systems themselves and also contract with private firms to perform AI analysis on their behalf.
Given this public use of private sector AI, even laws aimed at regulating private sector use of AI must set out rules of engagement for criminal justice agencies using this technology.
Protesters against Amazon's facial recognition software hold up a photo of Amazon founder and former CEO Jeff Bezos near their faces at Amazon's headquarters. (AP Photo/Elaine Thompson)
Racial Profiling and AI
AI has powerful predictive capabilities. Trained on a database of profiles, a machine-learning system can predict what someone is likely to do, or match faces caught on camera against those profiles. AI can also be used to decide where to direct police patrols based on past crime data.
These technologies seem to have the potential to increase efficiency and reduce bias, but police use of AI could increase racial profiling and unnecessary police deployment.
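The matching step described above can be sketched as a nearest-neighbour search over face "embeddings," the vectors a model extracts from each photo. This is a minimal illustration, not any vendor's actual pipeline; the names, vector size, and threshold are assumptions, and random vectors stand in for real embeddings:

```python
import numpy as np

# Stand-in "embeddings": real systems map each face photo to a vector and
# compare vectors by similarity. Random vectors here are for illustration only.
rng = np.random.default_rng(0)
database = {name: rng.normal(size=128)
            for name in ("person_a", "person_b", "person_c")}

def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def best_match(probe, db, threshold):
    """Return (identity, score) for the closest entry, or (None, score) if below threshold."""
    name, score = max(((n, cosine_similarity(probe, vec)) for n, vec in db.items()),
                      key=lambda pair: pair[1])
    return (name, score) if score >= threshold else (None, score)

# A noisy photo of a known person clears a strict threshold...
match, _ = best_match(database["person_a"] + 0.1 * rng.normal(size=128),
                      database, threshold=0.5)

# ...but a system configured to always report its top guess (threshold
# effectively disabled) will confidently name someone even for a complete
# stranger. That is how a false positive becomes a wrongful arrest.
stranger = rng.normal(size=128)
wrong, _ = best_match(stranger, database, threshold=-1.0)
```

Note that the threshold, not the model, decides whether a weak resemblance gets reported as a match; setting it too low, or treating every top guess as a lead, is one way innocent people end up flagged.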
Civil rights and privacy groups have produced a report on AI and surveillance practices that cites examples of racial bias in places where police are using AI technology, as well as a number of false arrests.
In Canada, police agencies including the Royal Canadian Mounted Police (RCMP), Toronto Police, and the Ontario Provincial Police have already been criticized by the Privacy Commissioner of Canada for using Clearview AI's technology for mass surveillance.
Clearview AI holds a database of more than three billion images scraped from the internet without consent, and matches faces against that database, a practice the Privacy Commissioner found to violate Canadian privacy laws. Toronto police have since suspended their use of the product.
The absence of law enforcement safeguards in Bill 194 and Bill C-27 could allow Canadian AI companies to enable similar mass surveillance.
The EU takes the lead
Internationally, efforts are underway to regulate the use of AI in the public sector.
So far, the European Union's AI Act offers the world's strongest protections for citizens' privacy and civil rights.
The EU AI Act takes a risk- and harm-based approach to regulating AI, requiring those who deploy AI systems to take specific measures to protect personal data and prevent mass surveillance.
In contrast, both Canadian and U.S. laws pit people's right to be free from mass surveillance against companies' desire to be more efficient and competitive.
There's still time to make changes
Bill 194 is still before the Ontario legislature, and Bill C-27 is still before the federal Parliament.
The exclusion of police and criminal justice agencies from Bill 194 and Bill C-27 is a glaring oversight, one that could bring the administration of justice in Canada into disrepute.
The Law Commission of Ontario has criticized Bill 194, saying the proposed law does not promote human rights or privacy and would permit unchecked uses of AI that could violate Canadians' privacy. The commission also noted that Bill 194 ignores the use of AI by police, prisons, courts and other criminal justice agencies, and would allow public agencies to use AI in secret.
The Canadian Civil Liberties Association (CCLA) has flagged Bill C-27 and petitioned for its withdrawal, arguing that its regulatory measures aim to increase private sector productivity and data mining rather than protect the privacy and civil liberties of Canadians.
Bill C-27 also makes no mention of police or national security agencies, even though those agencies often work with private providers on surveillance and security intelligence activities. Regulation needs to cover such partnerships.
The CCLA recommends that Bill C-27 be harmonized with the EU's AI Act and include guardrails to prevent mass surveillance and the abuse of AI powers.
These will be Canada's first AI laws, and Canada is years overdue for regulations to prevent the misuse of AI in the public and private sectors.
Changes must be made now to Bill 194 and Bill C-27 to protect Canadians at a time when criminal justice agencies are increasingly using AI.