It’s cool to do a technology startup. Conventional wisdom has it that employees at big companies get tired of the lack of impact, the bureaucracy, and the middle management, and that the ones with smarts and ambition leave to build their own visions. Since starting a company is so damned hard, the generally accepted strategy is to find and hire talent who not only know their stuff backwards and forwards, but can also out-execute their counterparts at their previous large firms. The biggest advantage startups supposedly have is the opportunity to work on interesting things with smart people.
But then, this article brings up a great point: if Web 2.0 (and nowadays, post-Web 2.0) is where the smart engineers are flocking, why do they make seemingly amateur mistakes and build horribly engineered systems? Why store plaintext passwords and release poorly designed APIs?
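To make the first of those mistakes concrete: the standard defense against a password leak is to store only a salted, adaptive hash, never the plaintext. A minimal sketch using Python’s third-party bcrypt package might look like this (the helper names are my own illustration, not anything from the article):

```python
import bcrypt

def hash_password(plaintext: str) -> bytes:
    # gensalt() produces a random per-password salt with a tunable work factor;
    # hashpw() embeds the salt inside the resulting hash, so only this value is stored.
    return bcrypt.hashpw(plaintext.encode("utf-8"), bcrypt.gensalt())

def verify_password(plaintext: str, stored_hash: bytes) -> bool:
    # checkpw() re-derives the hash from the embedded salt and compares the result.
    return bcrypt.checkpw(plaintext.encode("utf-8"), stored_hash)
```

The point is not that this is hard; it’s a few lines. That’s precisely what makes skipping it so telling.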
I’d first point out that, as software developers, we are inherently biased towards persistence: we have an innate respect for systems that are built with scale in mind and have the flexibility to last beyond the next six months. We value stability and scalability, and look to automate repetitive work by constructing virtual abstractions of reality. The best, most virtuous programmers are lazy and enable others to be lazy.
Of course, building the right system, with useful and lasting APIs, on top of appropriate data structures optimized for common use cases, at the right level of abstraction in the models and their interactions, providing speed and redundancy and even some semblance of SLA-level reliability, is terribly hard. It requires not only smart people, but also a solid set of specifications, and time, and ultimately iteration, to create, run and maintain.
The Web 2.0 ethos runs almost completely counter to these principles. Startups, in particular, thrive on agility: fast iteration, rapid failure, and minimum viable products are the clichés thrown around the community. Small teams and small companies are uniquely able to hack together a product, test its viability, and adapt quickly to the volatile demands of a fickle audience. Trading stability, and possibly good engineering design, for speed should be a conscious decision, though, not just a default mode of operation.
It shouldn’t be surprising, then, that a lot of early code and legacy systems (particularly at startups) look unprofessional; given the constraints of product uncertainty and speed of execution, engineering quality is almost always the third leg to be sacrificed, regardless of the engineers’ skills or experience.
The question then becomes whether, at some point past the initial conception of the repository, code quality can be promoted to the forefront. Can the company muster the right group of engineers and give them the appropriate length of time necessary to build something solid, fundamental, and lasting?
Unfortunately, my personal observation tells me that these magical conditions don’t arise on their own. Broken software receives outsized scrutiny – working systems don’t call attention to themselves – and companies that have just achieved some level of success are criticized more harshly than either the unknowns or the well-established. Catastrophic failure is the swiftest catalyst for quality that I know of; it’s an earthquake that shifts business priorities and sharpens focus on the neglected and forgotten.
Even for the mythical 10x engineer.