Asking an AI the Right Questions

It feels like 2022 is the year of democratizing AI. GPT-3 made headlines back in 2020 when it launched to a limited audience and its early users played around with the tool, but it took a little over a year until its API was made generally available at the end of 2021. By this summer, the AI-powered image generators DALL·E 2[1] and Stable Diffusion were also released to the general public, and now AI has expanded into everything from making music to generating short videos, though those efforts are still in their infancy.

Past advances in AI—in areas like chess or search relevance—have also been massive leaps in quality and depth, at times truly 10×’ing previous implementations. But since they’re multiplicative on top of existing, “dumber” intelligence, there’s a feeling of diminishing returns: a chess computer that could already beat a grandmaster can now do so by developing strategies through playing itself millions of times. This set of AI-powered tools feels like a revelation, I suspect because it’s enabling completely new use cases.

With those use cases come the age-old concerns around technology replacing humans. The podcast ATP had an episode that talked about which jobs will become obsolete with the advent of this type of AI, but also which new jobs get created in conjunction with its rise to prominence. Even if future advances in artificial intelligence make this prediction sound hopelessly naïve, it feels like, at this moment, there’s a real symbiosis between what AIs are good at and what humans are capable of, where we can leverage this complement to extend the edges of our limitations[2].

More specifically, the types of AI that we’re currently developing are good at absorbing vast quantities of information, then discovering and reflecting patterns within—some obvious to human minds, others less so. The magic of modern AI isn’t quite that we figured out how to replicate the essence of a human mind mechanically[3]; it’s that we can use so much data and run neural networks with so many billions of parameters that they can simulate human responses to specific tasks. That’s not to underplay our technical achievements here. With perhaps a few more iterations, this approach may produce AI that can pass the Turing test.

So far, where AI shines is on narrow tasks, where the constraints of a specific use case allow the system to produce humanlike output. There was an interview on Stratechery about the development of GitHub Copilot, and how it wasn’t so much that Copilot was learning to code like a junior developer as that it was spitting out better and better boilerplate that would pass incrementally more tests:

Right. Now, the interesting thing was in most cases it was wrong, either at questions or at code. Actually, I think on the code synthesis, I don’t remember the exact numbers, but I think on the code synthesis, maybe 20 percent of the tests would pass at first, and then over time we got up to 30, 35 percent, something like that.

The early reactions to Copilot replacing software developers were unfounded; the product doesn’t understand the code so much as it guesses at what the next line could be based on billions of other lines of code, and it ultimately requires someone who does understand the code to make judgment calls along the way[4].
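As a rough illustration of the dynamic described in that interview, a test-pass-rate harness looks something like the sketch below. This is a minimal sketch, not how Copilot was actually evaluated; the generate_candidates function is hypothetical, standing in for a call to a code-synthesis model, and everything else is standard-library Python.

    import subprocess
    import tempfile
    from pathlib import Path

    def passes_tests(candidate: str, test_code: str) -> bool:
        """Write a candidate solution plus its tests to a file and run it.

        Assumes test_code executes top to bottom (e.g., bare asserts) and
        exits nonzero on failure.
        """
        with tempfile.TemporaryDirectory() as tmp:
            path = Path(tmp) / "candidate_test.py"
            path.write_text(candidate + "\n\n" + test_code)
            result = subprocess.run(["python", str(path)],
                                    capture_output=True, timeout=10)
            return result.returncode == 0

    def pass_rate(prompt: str, test_code: str, n: int = 100) -> float:
        """Fraction of generated candidates that pass the test suite."""
        candidates = generate_candidates(prompt, n)  # hypothetical model call
        return sum(passes_tests(c, test_code) for c in candidates) / n

Under a setup like this, “20 percent of the tests would pass” is just pass_rate creeping upward as the model improves, with no understanding required of the model itself.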

A similar dynamic is playing out with AI-driven image generation. Sure, winning art competitions generates headlines, but artists and designers are using AI to help visualize ideas, generating a breadth of imagery and then selecting a handful to seed their creativity. As with the Copilot use case, it’s more accurate to think of the workflow as AI-augmented or AI-supported, where human judgment is ultimately needed to make something good out of artificial creation.

Figuring out the right input is the other part of unlocking utility from AI systems. Going back to the question of which jobs get destroyed and created with the advent of this technology, there’s a skill to formulating the right prompts and identifying useful results—from the dozens of possibilities that the system spits out—to iterate through the system again or to use as jumping-off points for human creativity. It’s akin to learning how to Google; knowing how to find the facts can be as important as, if not more important than, knowing the facts themselves[5].
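In practice, that iteration loop can be as simple as asking for several candidates at once and picking the best one by hand. Here’s a sketch using the pre-1.0 openai Python library that was current at the time; the prompt, model name, and parameters are illustrative, and the human selection step at the end is the point.

    import openai

    openai.api_key = "sk-..."  # your API key

    response = openai.Completion.create(
        model="text-davinci-002",
        prompt="Write a product description for an ergonomic split keyboard.",
        n=8,               # request several candidates in one call
        max_tokens=150,
        temperature=0.9,   # higher temperature yields more varied drafts
    )

    # Print every candidate so a human can pick one to refine,
    # feed back into another prompt, or use as a jumping-off point.
    for i, choice in enumerate(response.choices):
        print(f"--- candidate {i} ---\n{choice.text.strip()}\n")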

One way I’ve come to think about how we use AI—at least in the immediate future, barring exponential advancements in AGI—is as an unknowable black box with mostly well-defined inputs and outputs, where technical advancements push those boundaries outward so they’re progressively less defined. AI feels magical when you can give it an imprecise, vague prompt and it’s able to produce something unexpectedly good and potentially useful—a subtle difference from just emulating humans.


  1. I’ve been messing around with DALL·E for the past few weeks now, using it to populate a couple of my posts’ featured images.

  2. Of course, when we go the other direction, we get The Matrix.

  3. So-called artificial general intelligence, or AGI.

  4. There’s a joke in there about copying and pasting from Stack Overflow.

  5. We haven’t fully internalized this yet, three decades into the existence of search engines; there are still plenty of technical interviews that focus on trivia.
