Reading this article about how crappy code is in “the real world”, you would think that all software is written by developers who are either too dumb or too smart (which is just a conceited way of saying they’re too dumb to realize their deficiencies) to do their jobs. Of course, there are also external forces, unreasonable deadlines and overbearing management among them, that conspire to produce heaping piles of ugly, unmaintainable, unusable code.
It’s a pretty pessimistic take on something eating the world, isn’t it?
Self-serving attributional bias aside, I’d like to think that my fellow developers are able to see the same issues I can, particularly on a cursory reading, when they’ve taken the time to write and debug and test their code. That it looks messy now is less an issue of competence than of the tradeoffs the engineers made during the process of development.
Computer science teaches us to make tradeoffs between computing time and space; systems can consume more memory to store and cache data, or spend CPU cycles to recalculate results. In software engineering, a similar tradeoff is made between computing and developer resources1; inefficiencies, inflexibility, and messiness occur in code so as to optimize coding effort and time. The fallacy is assuming that this is rooted in incompetence or laziness.
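The classic time-space tradeoff can be sketched in a few lines. This is an illustrative example, not from the article: the same Fibonacci function written twice, once spending memory to cache results and once spending CPU cycles to recompute them.

```python
from functools import lru_cache

# Trade space for time: cache every result ever computed,
# so each value is calculated exactly once.
@lru_cache(maxsize=None)
def fib_cached(n: int) -> int:
    if n < 2:
        return n
    return fib_cached(n - 1) + fib_cached(n - 2)

# Trade time for space: store nothing and recompute
# subproblems over and over (exponential time).
def fib_recomputed(n: int) -> int:
    if n < 2:
        return n
    return fib_recomputed(n - 1) + fib_recomputed(n - 2)
```

Both return the same answers; the only difference is which resource, memory or CPU time, pays for it. The engineering tradeoff in the paragraph above is the same shape, except the scarce resource being conserved is developer time.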
Oftentimes, the old code was fit for product requirements which are now outdated. That less-than-optimal data structure or barren object structure may have been the right level of complexity for a simpler product, or one that had not anticipated having its code last for more than a week. The added cognitive burden of building and maintaining a “more perfect” system would not have been worth the time. As the proverb goes, perfect is the enemy of good.
From a productivity standpoint, if hacky, crappy code has lasted for two years despite the whining of its engineers, perhaps it works well enough and does not “desperately need to be refactored.” When evaluating code quality, we spend our efforts analyzing its modularity and cleanliness, sometimes venturing into the realm of readability and extensibility, but we rarely account for the time it takes to understand or rework the specimen. What looks like a static, objective measure of quality becomes a much more ambiguous spectrum of tradeoffs once developer effort is accounted for.
It’s just a matter of moving from “this sucks” to asking, “why does this suck?”
I hate to use that term when describing people, but it draws the proper analogy between the time and effort spent by machines versus that of programmers.↩