AI Engineering Validation

Product development around AI is accelerating. In March alone: OpenAI launched GPT-4; Midjourney upgraded its art generator to V5; Microsoft announced AI integrations for its Office 365 suite; Google formally launched its Bard AI into beta; Adobe launched its own AI-powered image generation and editing tooling. In some cases, it’s a matter of companies productizing and releasing the fruits of their ongoing AI research after years of development and fine-tuning; in others, it’s the Cambrian explosion of utilities and tools built on top of AI-powered services.

Understandably, with how fast things are moving, there’s renewed fear of the societal impact of AI, advancing rapidly from hypothetical “AI will destroy humans” concerns to more immediate use cases and their repercussions. AI can generate essays and code in seconds, and it only takes a couple more iterations to create art suspiciously similar to existing artists’ styles. Trained models can also summarize documents, create presentations, and complement—some predict replace—online search.

As it relates to the field of software engineering, John Carmack—of Doom, id Software, and Oculus VR fame—gives his take here: software development shouldn’t be about the syntax or cleverness of the code being written, but ultimately, about the value being delivered with said software. I tend to agree! In fact, coding is a particularly powerful use case for AI, specifically the Large Language Models (LLMs) that power the ChatGPTs and Bards of the world:

  • Programming languages have simpler grammar and syntax compared to natural languages, along with a minuscule set of keywords that makes text predictions more straightforward.
  • There’s a giant corpus of training data readily available¹: semi-structured source code that’s also organized enough to be executed.
  • Most importantly, the nature of code requires it to be compiled and/or interpreted, which helps detect instances of AI hallucination and confirm correctness as a core part of the software development workflow².
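That last point can be sketched as a check-and-retry loop: compile the candidate code, and if it fails, feed the error message back for another attempt (this is the idea behind the self-correcting script in footnote 2). A minimal sketch in Python, where `generate` is a hypothetical stand-in for a real LLM call that here just walks through a fixed list of candidates:

```python
def make_generator(candidates):
    """Stand-in for an LLM: yields the next canned candidate per call."""
    it = iter(candidates)

    def generate(prompt, feedback=None):
        # A real implementation would send `prompt` plus the error
        # `feedback` to a model; this stub just returns the next snippet.
        return next(it)

    return generate


def validated_code(prompt, generate, max_attempts=3):
    """Compile candidates until one parses, feeding errors back."""
    feedback = None
    for _ in range(max_attempts):
        code = generate(prompt, feedback)
        try:
            compile(code, "<generated>", "exec")  # syntax check only
            return code
        except SyntaxError as err:
            feedback = str(err)  # would be sent back to the model
    raise RuntimeError("no syntactically valid candidate within budget")


gen = make_generator([
    "def add(a, b) return a + b",        # broken: missing colon
    "def add(a, b):\n    return a + b",  # fixed on the second attempt
])
print(validated_code("write an add function", gen))
```

Note that `compile()` only catches syntax-level hallucinations; confirming the code does the right thing still needs tests or review, which is the harder half of the validation story.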

So what does this mean for our discipline?

In the short term, as tools like GitHub Copilot have shown, AI helps augment existing development. It’s able to automate boilerplate and produce rote code, thereby reducing the tedium of copying and pasting similar code elsewhere and making minor tweaks to fit the immediate context. LLMs take this further by generating snippets of code from written descriptions, performing simple debugging, and acting as a context-aware, inline mini-search for their users.

That is, AI raises the floor for programmers. The simple stuff will be easier, cheaper and faster to accomplish with AI-generated code, which in turn will devalue this type of coding work. Yet, even rote code is needed to build apps and services, so the first folks to be disrupted will be the inexperienced, entry-level software engineers, as well as the outsourcing firms who can no longer compete on price. The emergent field of no-code apps should evolve and expand functionality rapidly with this added flexibility.

In the medium term, if the above prediction holds true, we run into the real risk of squeezing the pipeline of mid-level and senior software engineers. Just as old languages lack new practitioners to replenish the pool of available developers, AI will make obsolete the currently necessary parts of engineers’ early career development. There will still be value in reading, editing, and validating complex code; honing those skills would have to be accomplished more as an academic exercise, like practicing arithmetic by hand despite the existence of calculators.

(As an aside, I wonder whether AI will help drive more popularity towards functional programming. Being able to logically define a snippet of code’s output is kinda convenient in a world where code can be trivially generated but would require manual confirmation of correctness.)
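To make the aside concrete: a pure function’s output can be checked against a logical specification, with no surrounding state to reason about. A small sketch, where `merge_sorted` stands in for a hypothetical AI-generated snippet and `satisfies_spec` is the human-written property it must satisfy:

```python
def merge_sorted(xs, ys):
    """Hypothetical generated snippet: merge two sorted lists."""
    out, i, j = [], 0, 0
    while i < len(xs) and j < len(ys):
        if xs[i] <= ys[j]:
            out.append(xs[i])
            i += 1
        else:
            out.append(ys[j])
            j += 1
    return out + xs[i:] + ys[j:]


def satisfies_spec(xs, ys, result):
    """The logical spec: output is sorted and is a permutation
    of the combined inputs. True iff `result` is acceptable."""
    return result == sorted(result) and sorted(result) == sorted(xs + ys)


assert satisfies_spec([1, 3, 5], [2, 4], merge_sorted([1, 3, 5], [2, 4]))
```

Because the function is pure, the spec can be checked over many generated inputs automatically (property-based testing), which is exactly the kind of cheap, mechanical validation a world full of generated code would lean on.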

In the long run, innovation will be even more valuable. LLMs will remix code a billion different ways and make accessible many of the resulting permutations that have utility, so inventing something novel means clearing a higher bar and thus will be even more impressive. Optimistically, we can hope for a stable state where human programmers assisted by AI—similar to the trajectory of AI development in chess—provide peak performance over either standalone human or AI coders.

  1. Whether that code should be used as training data, though, is up for legal challenge.

  2. There’s even a script that feeds interpreter errors back to the AI so it can correct itself, repeatedly, to try to converge on the “right” script.
