I don’t buy the popular notion that we’re in a post-PC era.
Of course, mobile devices have proliferated far faster than PCs ever did. They are accessible in a way that PCs never managed: the combination of low cost, mobility, and ease of use has given the phone a reach that no other technological device has matched to date. On the high end, the latest smartphones are starting to rival laptops in computing power, and more than ever it seems like PCs can truly be replaced.
The huge downside to this mobile movement is that the nuts and bolts of technology have been largely abstracted away from users. Whereas in the past folks may have simply given up on learning how to use a PC, the simplicity of mobile is what has enabled widespread access. Touch interfaces have lowered the barrier to entry, particularly around communication and media consumption: my two-year-old has figured out how to unlock my iPad, launch the YouTube Kids app, pick a favorite video, and crank up the volume.¹
That said, lowering the learning curve is not the same as simplifying complexity. Even as software embeds itself into every aspect of our lives, it remains unfathomably complex beyond consumer use cases, and more users than ever are relying on software without having to really understand it or manage the complexity underneath. While that understanding is no longer a requirement for using computers, there is still unmet demand for people who can work with computers at a fundamental level, or even for those who can use sophisticated, work-related applications. This gap manifests as a new form of computer illiteracy, one that will remain relevant for the foreseeable future.
¹ Thankfully, he has yet to figure out how to disable the parental timer that I set every time he’s itching for video entertainment.↩