AI technology has become so advanced that researchers are arguing in a new paper that we need better ways to verify that people online are humans and not AI bots.
In a paper that has yet to be peer-reviewed, researchers from Ivy League universities, OpenAI, Microsoft, and other organizations have proposed a “personhood credential” (PHC) system for proving humanity online that could replace existing checks like CAPTCHA.
But for those concerned about privacy and mass surveillance, this is a very imperfect solution that shifts the burden of responsibility onto the end user – a tactic often used in Silicon Valley.
“A lot of these plans are based on the idea that, rather than companies working hard to release safe products, society and individuals will be forced to change their behavior based on the problems that come with cramming chatbots and large language models into everything,” surveillance researcher Chris Gilliard told The Washington Post.
In their paper, the researchers proposed the PHC system because they fear that “bad actors” could use AI's massive scalability and ability to mimic human behavior online to flood the web with non-human content.
Chief among their concerns are AI's ability to churn out “human-like content that expresses human-like experiences and perspectives”; digital avatars that look, move, and sound like real humans; and AI bots that are increasingly adept at mimicking “human-like behavior on the internet,” such as “solving CAPTCHAs when requested.”
That's why the idea of PHCs is so appealing, the researchers argue: governments and other organizations providing digital services could issue a unique credential to each end user, who could then verify their personhood through zero-knowledge proofs, a cryptographic technique that lets someone prove a statement is true (here, that they hold a valid credential) without revealing the underlying data.
According to the researchers, end users can store their credentials digitally on their personal devices, helping them maintain anonymity online.
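To make the zero-knowledge step concrete, here is a minimal sketch of one classic construction that fits this description, a non-interactive Schnorr proof of knowledge: the holder proves they know the secret behind a public value without revealing the secret itself. The group parameters, function names, and overall flow below are illustrative assumptions for this article, not the protocol the paper actually specifies.

```python
import hashlib
import secrets

# Illustrative group: the multiplicative group mod a large prime.
# (A real deployment would use a vetted prime-order group or curve;
# these demo parameters are an assumption, not a safe choice.)
P = 2**255 - 19          # prime modulus
G = 2                    # generator (demo choice)
ORDER = P - 1            # exponents are reduced mod the group order

def hash_to_challenge(*values: int) -> int:
    """Fiat-Shamir: derive the challenge by hashing the transcript."""
    data = b"".join(v.to_bytes(32, "big") for v in values)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % ORDER

def issue_credential() -> tuple[int, int]:
    """Issuer side: the secret x stays on the user's device;
    only the public value y is ever shown to services."""
    x = secrets.randbelow(ORDER)    # user's secret credential
    y = pow(G, x, P)                # public commitment to it
    return x, y

def prove_possession(x: int, y: int) -> tuple[int, int]:
    """User side: prove knowledge of x with y = G^x, without revealing x."""
    r = secrets.randbelow(ORDER)    # one-time nonce
    t = pow(G, r, P)                # commitment
    c = hash_to_challenge(t, y)     # non-interactive challenge
    s = (r + c * x) % ORDER         # response
    return t, s

def verify_possession(y: int, t: int, s: int) -> bool:
    """Service side: check G^s == t * y^c; learns nothing about x."""
    c = hash_to_challenge(t, y)
    return pow(G, s, P) == (t * pow(y, c, P)) % P

# Demo: a holder proves they hold the credential behind a public value.
secret, public = issue_credential()
commitment, response = prove_possession(secret, public)
assert verify_possession(public, commitment, response)
```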
Such a system could replace or augment existing proof-of-humanity checks on the internet, such as the aforementioned CAPTCHAs or biometric verification like fingerprint scans.
While it sounds like a great solution in theory, the researchers acknowledge that PHC systems still have pitfalls.
First, it seems inevitable that many people would sell their PHCs to AI spammers, lending automated content a veneer of humanity and undermining the project's goals.
According to the paper, the organizations issuing these credentials could also become too powerful, while the system as a whole would remain a tempting target for hackers.
“One of the major challenges for the PHC ecosystem is the potential concentration of power in the hands of a few institutions, particularly PHC issuers, and also in the hands of large service providers whose decisions regarding PHC use have a significant impact on the ecosystem,” the paper states.
Credential checks could also be a source of friction for less internet-savvy users, such as older adults, who are already more likely to be targets of online fraud.
The researchers therefore argue that governments should explore the use of PHC through pilot programs.
But the PHC idea sidesteps a critical problem: a system like this simply piles another digital chore onto end users, who already have to contend with spam and other junk in their crowded digital lives, a mess that tech companies created and should be the ones to clean up.
One measure companies could take is to watermark the content their AI models generate, or to develop processes that detect the telltale signs of AI-generated data. Neither approach is perfect, but both shift the burden of responsibility back to the source of the problem: the AI bots.
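For a flavor of how the watermarking route can work, below is a toy Python sketch of one approach from the research literature, a “green list” statistical watermark: generation is biased toward a pseudorandom subset of words, and a detector later counts how often that subset appears. The word-level tokens, hash seeding, and thresholds here are all simplifying assumptions, not any company's actual scheme.

```python
import hashlib
import math

GREEN_FRACTION = 0.5  # assumed share of the vocabulary marked "green"

def is_green(prev_word: str, word: str) -> bool:
    """Pseudorandomly assign `word` to the green list, seeded by the
    preceding word so the split is reproducible at detection time."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def watermark_z_score(text: str) -> float:
    """z-score of the green-word count against the GREEN_FRACTION
    baseline; large positive values suggest watermarked (AI) text."""
    words = text.split()
    if len(words) < 2:
        return 0.0
    n = len(words) - 1  # number of (previous word, word) pairs
    greens = sum(is_green(words[i - 1], words[i])
                 for i in range(1, len(words)))
    expected = GREEN_FRACTION * n
    stddev = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (greens - expected) / stddev

# A detector would flag text scoring above some threshold (say, z > 4)
# as likely watermarked; unwatermarked human text hovers near z = 0.
```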
And if tech companies were to completely abdicate this responsibility, it would just be another black mark on Silicon Valley, which is accustomed to unleashing problems no one wants while profiting from its influence.
It's similar to the way tech companies use up huge amounts of electricity and precious water to power their AI data centers while communities, especially in drought-stricken areas, suffer the consequences of that resource drain.
And PHCs, however shiny and glamorous they look on paper, once again shift the blame.
More on AI: US Government bans fake AI reviews