Machine Learning the Ropes of Intuition

We tend to romanticize the manual past in our inexorable march of technological progress and automation. Artisanal craft over factory assembly lines. Personalized service over generic interactions. The professional piano tuner over…whatever app promises to coax the right pitch out of an instrument1.

A few weeks ago, I was talking with someone at a conference about what, beyond nostalgia and a reasonable aversion to change, the difference could be. If there’s anything here, we called it some form of “human intuition,” something that was hard to capture via simple rules or dogmatic processes. GenAI, though, is rewriting this narrative quickly and forcefully; LLMs are bridging the gap between idea and implementation in art and language, massively reducing the latency between the two. The AI-optimistic case still has people driving tooling enhanced by AI, imbuing the system with direction in service of very human motivations.

Machine-aided language translation is a good case study. For over a decade, linguists and computer scientists built complicated systems to facilitate translation via increasingly rich sets of rules. They worked well for common terms and phrases, but often fell short on edge cases: those tricky aspects of language that defy simple categorization, since language is inherently full of exceptions, contextual awareness, nuance, history, and even contradictions. The breakthrough came when Google applied machine learning2 to the problem space; it turned a tapestry of layered rules into a process of informed guessing based on large amounts of data.
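To make that shift concrete, here is a toy sketch in Python (my own illustration, using made-up phrases and a hypothetical aligned corpus, not anything a production translation system actually does): a rule-based lookup fails on anything it wasn’t told about, while a data-driven approach just picks the translation it has seen most often.

```python
# Toy contrast between the two approaches; my own illustration, not how any
# real translation system works.
from collections import Counter

# Rule-based: hand-written mappings break on anything unanticipated.
RULES = {"good morning": "bonjour", "thank you": "merci"}

def translate_by_rule(phrase: str) -> str:
    return RULES.get(phrase.lower(), "<no rule>")

# Data-driven: count candidate translations in a (hypothetical) aligned corpus
# and pick the most frequent one, i.e. make an informed guess from data.
ALIGNED_EXAMPLES = [
    ("good morning", "bonjour"),
    ("good morning", "bonjour"),
    ("good morning", "salut"),  # informal usage shows up in the data too
]

def translate_by_data(phrase: str) -> str:
    candidates = Counter(fr for en, fr in ALIGNED_EXAMPLES if en == phrase.lower())
    return candidates.most_common(1)[0][0] if candidates else "<no data>"

print(translate_by_rule("Good morning"))  # bonjour
print(translate_by_data("Good morning"))  # bonjour, the most frequent guess
```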

Critics and detractors of AI will say that this type of “smart” content generation is little more than probabilistic pattern matching3, that there’s no real intelligence behind the machine. But this charge of a lack of comprehension may just be a distinction without a difference. Yes, the LLM doesn’t have a person’s lifetime of experiences and emotions and anecdotes, but it does have a corpus of billions of words4 and the connections between them, and its compositions have all the appearance of intuition and conceptual understanding.
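For a rough sense of what that “probabilistic pattern matching” looks like, here is a deliberately tiny sketch in Python, just counting which word tends to follow which in a toy corpus; real LLMs use neural networks trained on billions of tokens, but the underlying move of guessing what usually comes next is similar in spirit.

```python
# A deliberately tiny sketch of next-word guessing via bigram counts; real LLMs
# use neural networks over billions of tokens, but the move is similar in spirit.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
next_word = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word[current][following] += 1

def predict(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    options = next_word[word]
    return options.most_common(1)[0][0] if options else "<unknown>"

print(predict("the"))  # 'cat', the most frequent continuation
print(predict("cat"))  # 'sat' (ties broken by first occurrence)
```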

In fact, since machines can do this tirelessly and endlessly, the sheer abundance of output diminishes its value. When ChatGPT was initially unveiled, one of its first party tricks was remixing styles from different arenas of life: Shakespearean sonnets about dysfunctional Congressional politics, rap lyrics on orthopedic surgeries, a treatise on meme cats in the style of e.e. cummings. More recently, folks have been experimenting with Google’s NotebookLM and its “audio overview” feature to generate realistic-sounding podcasts that sound remarkably similar to professionally developed audio fare. The output tends to land somewhere between amusing and “this is much better than I thought it’d generate.”

So the optimistic take here is that Generative AI is impressive in its breadth and speed but lacks the depth found in some forms of human intuition. It’s a tool that can sweep through the obvious connections, bootstrapping its user with an abundance of mundane results on which to compose something more interesting, and establishing a higher floor and baseline for better output5. And given the pace of development over the last two years, that baseline has risen very quickly. With Artificial General Intelligence (AGI) hype fading a bit, the emergence of this type of machine intuition just sets a higher bar for the human equivalent.


  1. There is a flip side to this argument: a lack of scale can cause reliability issues down the line, as with low-volume cars.

  2. Which we now colloquially call “AI.”

  3. Stephen Wolfram wrote a very long post about this some time ago; it’s still worth reading in full for demystifying how ChatGPT works.

  4. Yes, tokens and parameters are not words per se, but—close enough.

  5. As you’d expect, a bunch of students are taking the shortcut of submitting ChatGPT output verbatim for an easy grade.
