AI Researchers Want to Solve the Bot Problem by Requiring ID to Use the Internet
AI researchers are worried that AI bots will gradually take over the internet, spreading like a digital invasive species. Rather than tackling the problem by restricting the proliferation of bots and AI-generated content, one research group has decided to go in the opposite direction.
In a preprint published recently, dozens of researchers propose a system under which humans would need to verify their humanity directly with another person in order to obtain a "personhood credential."
The core idea appears to be creating a system in which a person can prove they are human without having to reveal their identity or any other information. If this sounds familiar to anyone in the crypto community, that's because the research draws on blockchain-based "proof of personhood" technologies.
Digital Verification
Services like Netflix or Xbox Game Pass that charge a subscription fee typically rely on users' financial institutions to handle identity verification. This doesn't allow for anonymity, but for most people, that's an acceptable tradeoff — generally seen as just the cost of doing business.
Other services, such as anonymous forums, can't rely on user payments as proof of humanity. At minimum, these services still need to take steps to limit bots and fake accounts.
As of August 2024, for instance, ChatGPT's safeguards would prevent it from being exploited to register large numbers of free Reddit accounts. Some AI can get past human-style CAPTCHA checks, but it would take significant effort to complete the steps involved in email address verification and the rest of the account setup process.
The team, which includes experts from companies like OpenAI, Microsoft, and a16z Crypto, as well as academics from the Harvard Society of Fellows, Oxford, and MIT, argues that current restrictions can only hold for so long.
Within a few years, humanity may have to reckon with the reality that without being able to look someone in the eye, there's no reliable way to determine whether you're interacting with a real individual.
Pseudonymity
The researchers are advocating for the development of a system in which certain organizations or institutions would be designated as credential issuers. These issuers would use human verifiers to confirm an individual's humanity. Once verified, the issuer would certify that individual's credential. Importantly, issuers would not be permitted to track how those credentials are used. It remains unclear how the system could defend against cyberattacks and the looming threat of quantum-assisted decryption.
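The paper doesn't prescribe a single cryptographic construction, but one classic way for an issuer to certify a value without being able to link it back to later uses is a blind signature. The sketch below uses textbook RSA blind signatures with deliberately tiny, insecure parameters; the function names and key sizes are illustrative assumptions, not the researchers' actual design.

```python
# Illustrative-only sketch of unlinkable credential issuance via
# textbook RSA blind signatures. Insecure toy parameters; a real
# system would use a vetted library and proper key sizes.
import random
from math import gcd

# Toy issuer keypair (tiny primes for demonstration).
p, q = 1009, 1013
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))  # private signing exponent

def blind(m, n, e):
    """User blinds token m so the issuer never sees it in the clear."""
    while True:
        r = random.randrange(2, n)
        if gcd(r, n) == 1:
            break
    return (m * pow(r, e, n)) % n, r

def issue(blinded, n, d):
    """Issuer signs the blinded value after verifying the person."""
    return pow(blinded, d, n)

def unblind(blind_sig, r, n):
    """User strips the blinding factor, yielding a usable credential."""
    return (blind_sig * pow(r, -1, n)) % n

def verify(m, sig, n, e):
    """Any service can check the credential against the issuer's public key."""
    return pow(sig, e, n) == m % n

m = 424242  # credential token the user wants certified
blinded, r = blind(m, n, e)
sig = unblind(issue(blinded, n, d), r, n)
assert verify(m, sig, n, e)
```

Because the issuer only ever sees the blinded value, it cannot match the final signature to any particular issuance session, which is the unlinkability property the proposal depends on.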
On the other side, organizations interested in providing services only to verified individuals could choose to grant accounts exclusively to credential holders. This could effectively limit each person to a single account and make it impossible for bots to access these services.
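The paper leaves the service-side mechanics open, but the gating described above can be sketched in a few lines. The `register` function and the RSA-style signature check here are hypothetical stand-ins for whatever credential scheme an issuer actually deploys:

```python
# Hypothetical service-side gate: grant an account only to a valid
# credential, and allow at most one account per credential.
seen_credentials = set()

def register(credential_token, signature, n, e):
    """Return True and open an account only for a valid, unused credential."""
    # Check the issuer's signature (RSA-style verification; illustrative).
    if pow(signature, e, n) != credential_token % n:
        return False
    # Enforce one account per credential holder.
    if credential_token in seen_credentials:
        return False
    seen_credentials.add(credential_token)
    return True
```

A real deployment would need to decide how duplicate detection works without letting services collude to track users across sites, one of the open problems the researchers flag.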
According to the paper, the research does not set out to determine which centralized pseudonymity method is most effective, nor does it address the numerous potential problems such a scheme might face. The research group does, however, acknowledge these challenges and has called for further study.