Australia's privacy regulator has ended its investigation into Clearview AI's use of images of Australians in its facial recognition service, but there are no signs the company will comply with an order to remove the images.
Clearview AI is a facial recognition service used by law enforcement agencies around the world, including in limited trials in Australia. The company claims to have a database of over 50 billion faces collected from the internet, including social media.
In 2021, the Office of the Australian Information Commissioner (OAIC) found that Clearview AI had breached Australians' privacy by collecting these images without their consent, and ordered the company to stop collecting them and to destroy the images it had already collected within 90 days. Clearview initially appealed the decision to the Administrative Appeals Tribunal (AAT), but withdrew the appeal in August last year before the tribunal could rule, so the original decision remains in effect.
It is unclear whether Clearview has subsequently complied with the order, and the company did not respond to a request for comment.
On Wednesday, a year after Clearview withdrew its appeal, Privacy Commissioner Carly Kind announced that the OAIC would take no further action to enforce the order against Clearview.
“I have given extensive consideration to the question of whether the OAIC should invest further resources in scrutinising the conduct of Clearview AI. The company has already been the subject of an OAIC investigation and is the subject of regulatory investigations in at least three jurisdictions around the world, as well as a class action lawsuit in the United States,” she said.
“Considering all the relevant factors, I am not satisfied that further action is warranted in the particular case of Clearview AI at this time.”
In June, Clearview AI agreed to settle, for an undisclosed amount and without admitting wrongdoing, a class action brought against it for violating the privacy of Americans included in its system. The settlement has not yet been approved by the court.
A 2022 settlement with the American Civil Liberties Union (ACLU) permanently bars Clearview AI from selling its database to most private U.S. companies, and bars it from selling access to any entity in Illinois, including law enforcement agencies, for five years.
Kind said Wednesday that the kind of conduct Clearview engaged in, the mass scraping of images from the internet, has become increasingly common and troubling in the years since, including as part of efforts to train generative artificial intelligence models.
The OAIC and 11 other regulators issued a joint statement in August last year calling on the operators of publicly accessible sites to take appropriate measures to prevent personal information on their sites from being unlawfully scraped.
Kind said all regulated entities in Australia that use AI to collect, use or disclose personal information must comply with privacy laws.
“The OAIC will soon issue guidance for organisations seeking to develop and train generative AI models, including how the APPs (Australian Privacy Principles) apply to the collection and use of personal information. It will also issue guidance for organisations using commercially available AI products, including chatbots.”
Correspondence between Clearview AI and the OAIC, released after the company's AAT appeal against a freedom of information request was dismissed last year, revealed that Clearview had decided not to operate in Australia and had blocked its web crawlers from retrieving images from servers located in Australia, and that it therefore believed it was not subject to Australian jurisdiction.
Australian and UK users had been able to opt out of the system, but when the company resumed scraping the internet in January last year it took no steps to ensure that the facial images it collected did not include those of Australians stored on servers outside Australia used by social media sites such as Facebook.