Verification in an Age of Automation

I just saw that Gmail is starting to implement its own identity verification and validation system. This comes at a time when Twitter’s “blue check” system has undergone product changes that make it much less valuable, and Facebook has rolled out its own paid verification program. At the same time, the productization and subsequent prevalence of generative AI, particularly for conversational and casual text, has made it even easier to automate the mass creation of bot accounts that sound plausibly human.

Granted, spam is as old as email, and bots have always plagued social networks[1]. Up to this point, fighting these efforts amounted to detecting patterns in deterministic behaviors: figuring out common phrases in posts, where and how the communication gets sent out, and what kind of scam the spam is angling for. These endeavors tended to be low-effort and low-value, pretty much the bottom of the barrel; in fact, the more sophisticated operations tend to be linked to sovereign states, which have the resources to employ skilled operators and whose objectives are grander than scamming a quick buck.
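
For a sense of what that deterministic, pattern-based approach looks like, here is a minimal sketch of a rule-based filter. The phrases, domains, and scoring are hypothetical, chosen only to illustrate the three signals mentioned above (common phrasing, where the message comes from, and what the scam is angling for), not drawn from any real system.

```python
import re

# Hypothetical phrase list and scoring thresholds, invented for illustration;
# real anti-spam systems maintain much larger, constantly updated rule sets
# and combine many more signals.
SPAM_PHRASES = [
    r"congratulations,? you('ve| have) won",
    r"claim your (free )?prize",
    r"limited[- ]time offer",
    r"verify your account immediately",
]

def spam_score(message: str, sender_domain: str) -> int:
    """Score a message using simple, deterministic rules."""
    text = message.lower()
    score = 0

    # Common phrases: the same scam template repeated across messages.
    for pattern in SPAM_PHRASES:
        if re.search(pattern, text):
            score += 2

    # Where the communication comes from: cheap, disposable domains.
    if sender_domain.lower().endswith((".xyz", ".top")):
        score += 1

    # What outcome the scam is angling for: a link plus an urgent call to action.
    if "http" in text and ("act now" in text or "wire transfer" in text):
        score += 3

    return score

if __name__ == "__main__":
    msg = "Congratulations, you have won! Act now: http://prizes.example.xyz"
    print(spam_score(msg, "prizes.example.xyz"))  # 2 + 1 + 3 = 6, likely spam
```

The point is that every rule here is a fixed pattern an automated sender can learn and route around, which is exactly the brittleness that generative AI exploits.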

AI complicates the equation here, as it effectively raises the quality floor of bots and spam by adding an element of non-determinism. We’re still in the early game[2], but while the headlines place a spotlight on AI-propelled misinformation and disinformation, particularly in the realm of politics, the seedy underbelly is teeming with opportunities to mass-create fake accounts and have them sound human enough to perpetrate whatever scam their owners are running.

This is one of the reasons we’re seeing the movement towards verification. It doesn’t solve the problem of automated fake accounts, but it tries to raise the bar by reducing the surface area. Authenticity now requires getting past a verification process designed to weed out automation, and some of these systems charge for their “blue checks” to further balance the asymmetric costs of broadcasting spam. Services, particularly social ones, need free and easy-to-sign-up-for user accounts to get to scale, but at scale, they become vulnerable to spam and moderation challenges.

The strategy, though, implies centralization: you’re trusting Meta’s and Google’s and Twitter’s systems to verify humans correctly. Companies can employ fraud teams to act quickly and decisively against emergent scams and, at the same time, maintain parity with advancements in automation, from simple scripts to complicated AI-driven account creation and spam. Of course, bots and fake users will still get through, but corporations have enough resources to invest in combating the issue[3].

A popular sci-fi trope is the human test: as computers and AI—and eventually robots and cyborgs—become increasingly humanlike, tests are designed to split the progressively minute differences between man and machine. Given how much we’ve advanced in AI mimicry in these past few years, perhaps the most unrealistic part of these elaborate, fictional verification processes is assuming that they can stay effective for any extended amount of time.


  1. After all, bots were purportedly the reason why Musk wanted to buy Twitter in the first place.

  2. Some of these scripts don’t even bother obfuscating the telltale “As an AI language model…” disclaimer.

  3. From experience, this is also analogous to payments processing (Square) and lending (Affirm); fraud detection is a matter of survival.
