At least eight people in the US have been arrested after being mistakenly identified by facial recognition software, the Washington Post reports. US police have used artificial intelligence (AI) matches to arrest suspects without seeking any other evidence.
The Washington Post analyzed police reports, court records and interviews with police, prosecutors and defense attorneys.
The reporters cautioned that they had likely captured only a small part of the problem: prosecutors rarely tell the public when they use AI tools, and only seven states legally require disclosure.
It is therefore impossible to determine the total number of wrongful arrests caused by incorrect AI matches, the Post emphasized.
False positives from AI facial recognition
As the newspaper found, in all eight known cases of wrongful arrest the police relied on AI facial recognition and failed to perform at least one basic investigative step. Had they checked alibis, compared distinguishing marks, or examined DNA or fingerprint evidence, these eight people could have been ruled out as suspects before they were arrested.
In six of the eight cases, police failed to check the suspects' alibis, and in two cases they ignored evidence that contradicted their assumptions, including crucial DNA and fingerprint evidence pointing to another possible suspect.
In five cases, police failed to collect basic evidence. The newspaper cites the arrest of a man suspected of cashing a large forged check at a bank: after the AI tool identified him, police arrested him without even checking his bank accounts.
In three cases, police ignored physical characteristics of the suspects that contradicted the AI match. The newspaper reported the case of a heavily pregnant woman arrested for a carjacking, although neither witnesses nor surveillance footage indicated that the perpetrator was pregnant.
In six cases, police did not verify witness statements. In one, a suspect identified by AI in the theft of an expensive watch was confirmed by a security guard who had not been in the store during the theft.
AI works almost perfectly in laboratory conditions
The Post acknowledged that facial recognition software works almost perfectly in laboratory conditions with clear, high-contrast photos. However, as NYU Law School researcher Cathy Kinsey pointed out, there has been no independent testing of the technology's accuracy on blurry surveillance photos. For this reason, it is difficult to estimate how often the technology misidentifies faces in real-world conditions.
Furthermore, researchers have found that people using AI tools can blindly trust the decisions those tools make. In a study by neuroscientists from University College London, subjects tended to choose the fingerprint the computer showed them first as the one most similar to the reference pattern.
“Confidence in the system prevented them from correctly evaluating similarities,” the newspaper emphasized.