Technological Optimization

I think of myself as a technologist, someone who sees technology as a positive force on society, a continuous endeavor that strives to move us towards better futures. It accomplishes that goal by enabling more possibility with greater efficiency: railroads over horse buggies; LEDs over incandescent bulbs; digital money transfers over shiny rock bartering. That is not to say progress has been a straight line, however; plenty of negative consequences creep in inconspicuously alongside the streamlining: the proliferation of misinformation; anxiety from social media comparisons; pollution and environmental externalities felt at a global scale. Those negative bits are the core of this excerpt, from a recent book focusing on where “Big Tech” has gone wrong.

It’s easy to see how tech—as an industry and now a force on society at large—has become so pervasive that any negative unintended consequences garner outsized attention and, increasingly, regulation. For folks like me who owe our livelihoods to tech, a part of our core epistemological foundation is simply that most of the work we do is de facto good. That served as an implicit, scrappy rallying cry when the business of software was a niche industry, but now that software has successfully taken over the world, it’s an assumption worth reexamining.

A prime example is the Facebook memo from 2016, which advocated for the company’s singular goal of enabling more connections, without worrying about any negative ramifications from those added capabilities. Since it was written, with the amount of political malfeasance and misinformation campaigns proliferating on the platform, the premise looks at best questionable, and the consequences have raised enough ire to result in congressional hearings and antitrust lawsuits¹.

That example aside, most technological advances at the cutting edge are much less controversial, or carry less damaging negative externalities. When wireless standards improved from 3G → LTE → 5G, the only interesting bits were how fast carriers could roll out the new standards, which devices supported them, and how to make the best use of the new speeds. Insofar as any backlash exists around the new commercial space race between SpaceX, Blue Origin, and Virgin Galactic, it’s mostly complaints that the billionaires funding these efforts could be spending their money on social issues instead. And although progress in autonomous driving has been slow, people have mostly cheered on the incremental improvements.

It helps that these examples deal with the hard sciences and are centered around pure engineering challenges. Making internet connections faster, space flight cheaper, and vehicular transportation more accessible are awesome improvements to their users’ quality of life. Technology is really good at making things faster and cheaper, and the initial results just make what was previously hard a bit easier. That ease increases usage and broadens access; the pace at which solar energy has gotten cheaper over the past decade is one of the best illustrations of this phenomenon.

But given enough time and maturation, the initial ease that created more usage and attracted more users starts generating second-order effects, some inevitably problematic. Right now, I’d say the intersection of technology and the social sciences falls into that category; it’s proven hard to predict how the human condition responds to communicative capabilities that are effectively free, instantaneous, and span the globe. If social networks have enabled more of the best of humanity, that benefit has now been overshadowed by letting more of the worst of humanity proliferate as well: misinformation, peer pressure, anxiety, app addiction, overt racism.

For a long time, the excuse we technologists gave ourselves is that efficiency cannot itself be a moral force, that it is, by extension, amoral. If an artificial intelligence outputs racist and sexist commentary, it’s merely a function of its training data set, which is ultimately a reflection of society amplified by the speed of microprocessors. In the US, this abdication of responsibility is codified by a section of the Communications Decency Act, commonly known as Section 230.

I wonder, though, if we should take a stronger stance: lean into the moral quandaries that arise from technology and tackle these issues head-on. In software, we understand well the power of default options and interface nudges to modify behaviors; the simple changes we can invoke have disproportionate impact because our software operates at disproportionate scale. In that sense, being on the forefront of technological development should carry a heavy responsibility.


  1. Of course, the guy who wrote it was recently promoted to CTO.
