Mutual AI Escalation

One way to think about AI, at least in the short term, is that it’s a mechanism for automation and adding efficiency, particularly in areas that have so far resisted automation due to domain complexity. Our current Generative AI boom was kicked off with the release of DALL·E, but was supercharged when ChatGPT was released to the public, and has now branched out to everything from videos to music to coding¹.

Unfortunately, the human condition suggests that instead of taking advantage of these efficiencies to save time and effort and make space for leisure, we’re much more likely to engage with the hedonic treadmill and work just as much—if not more—to strive for more income relative to those around us. Why do the same amount of work in less time, when you can do more work in the same amount of time?

It’s already starting to play out, with some unintended consequences. Amidst reports of the sheer amount of labor needed to classify and annotate data for training machine learning models², it’s no surprise that some workers find it easier to use AI to make their work faster and less tedious. Soon, others will catch on to the technique, expected output will be recalibrated to the AI-assisted pace, and we’ll have circled back to doing more work for the same cost.

It gets worse when both sides are equipped with AI. The other day, I caught a colleague editing their email with ChatGPT. Nothing nefarious: they just wanted to sound more professional in their communications, but the result was so out of character and corporate that it stood out unnaturally. I’m sure others are already using AI to generate paragraphs upon paragraphs of email text for important business communications where perceived thoroughness matters. Yet all this means is that there’s a real use case for AI to summarize overly long emails; indeed, such products already exist. The end state here feels predictable, almost obviously so.

So it’ll come to pass that AI is the latest mechanism to raise the floor of utility while simultaneously layering on complexity of dubious value. For instance, over the past two decades of the internet, we’ve all learned how to Google; that became the new baseline for accessing knowledge, but the SEO race has degraded Google’s results to the point where people look to ChatGPT for better answers. Similarly, whereas the tax code was simple to start, generations of additional rules, and loopholes carved out of those rules, have produced something so Byzantine that the vast majority of people in the US have to pay for the privilege of figuring out how much they owe the government.

We’re therefore stuck in a classic Prisoner’s Dilemma: we’d all be better off agreeing not to abuse AI to generate and subsequently consume piles of superfluous information, but the incentives are too strong to ignore when everyone else seems to be getting ahead with the added tooling.
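The incentive structure above can be sketched as a toy payoff matrix. The numbers and labels are illustrative assumptions, not from the source; they just encode the dilemma: adopting AI tooling is the dominant strategy for each side, yet mutual adoption leaves everyone worse off than mutual restraint.

```python
# A hypothetical Prisoner's Dilemma payoff matrix for two workers,
# each choosing to "adopt" AI tooling or "refrain". Utility values
# are made up purely to illustrate the incentive structure.
payoffs = {
    # (worker_a, worker_b): (utility_a, utility_b)
    ("refrain", "refrain"): (3, 3),  # sane workloads all around
    ("adopt",   "refrain"): (5, 1),  # the adopter gets ahead
    ("refrain", "adopt"):   (1, 5),
    ("adopt",   "adopt"):   (2, 2),  # expectations recalibrate upward
}

def best_response(opponent_choice):
    """Pick the choice maximizing our own payoff, given the opponent's."""
    return max(["refrain", "adopt"],
               key=lambda c: payoffs[(c, opponent_choice)][0])

# Adopting dominates no matter what the other side does...
assert best_response("refrain") == "adopt"
assert best_response("adopt") == "adopt"
# ...yet mutual adoption is worse for both than mutual restraint.
assert payoffs[("adopt", "adopt")][0] < payoffs[("refrain", "refrain")][0]
```

With these (assumed) payoffs, the Nash equilibrium is mutual adoption even though it is Pareto-inferior, which is exactly the trap the paragraph describes.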

Or, to put it another way, we’re creating more Bullshit Work—perhaps en route to the creation of a new generation of Bullshit Jobs.


  1. To be fair, AI was also used extensively in the past few years for utilities like voice recognition, closed captions, and language translation, but those use cases were niche enough that none of them broke out into the mainstream.

  2. I.e., reinforcement learning from human feedback, or RLHF.
