Of all the use cases for AI, the one that feels most dystopian and Black Mirror-esque is its ability to replicate personalities. Companies like character.ai already offer models trained on historical personas and celebrities, but a research paper late last year showed that just a 2-hour interview yields enough data to imbue an AI with the interviewee’s personality. Given the digital breadcrumbs each of us now generates daily and the exponential pace of AI research and development, there’s a clear line of sight to deepfake AIs that look and sound like us, casually generated and replicated.
The possibilities here approach the realm of hard sci-fi that would have seemed farfetched even a decade ago. In particular, I’m reminded of the novel¹ Altered Carbon, set in a world where people can be digitized and then uploaded into new bodies (“sleeves,” in the fiction’s nomenclature) when their current one expires. Granted, we can’t grow biological humanoid forms on demand and don’t have a way to truly digitize the entirety of one’s psyche, but if we consider interfacing primarily through bits and bytes, we’re suddenly not that far off from that reality.
Altered Carbon, though, does explore some of the philosophical ramifications of such capabilities. Specifically, while identities can be and often are backed up, it’s illegal and taboo to have more than one instance of a personality active at a time. When multiple people can legitimately claim to be the same person (they share not only the same personality but also the same experiences and memories, and diverge only after the copies manifest in their new bodies), it presents a moral and psychological dilemma.
Removing the base assumption that there’s only one unique “self” is antithetical to all of human history and experience. Our ingrained sense of individuality, of the value that comes from who someone is and what they do, would be trivially commoditized. And it’s so foreign a concept that we don’t have the language or framework to talk about what it would mean; the closest analogues I can think of are sci-fi tropes: hive minds that, by definition, have no individual identities, or an ur-mind formed by the mass fusion of humanity into a singular consciousness.
Of course, such questions are relevant only in a far-off future. Of more immediate concern are all the nefarious ways people will use these capabilities to manipulate, trick, and scam their fellow human beings before we’ve internalized this abundance of personality. The harm doesn’t even have to be intentional: a steady stream of articles and studies warns of growing emotional dependence on AI-powered chatbots, which always respond and are tuned to be agreeable.
For the past few years, the fear around this wave of LLM-driven AI has centered on the eventual development of AGI, or Artificial General Intelligence, that would supersede humanity as the dominant species on this planet. Before we get there, though, we’re muddling through the impact of integrating AI with humanity as we are: upending higher education; atrophying our ability to think; replacing jobs with AI and then walking it back. Various attempts at preemptive regulation or self-imposed constraints have fallen well short of AI’s rapid pace of development and competition; it may be that we have to first discover and live through an age of de-identity before we figure out how to handle it.
¹ And a Netflix adaptation.