Freddie DeBoer has a magisterially long piece about AI on his newsletter this week, which is really worth pouring a drink and sitting down to read. In it, after a compelling walk through the intellectual and cultural hubris of the 19th century and the subsequent disillusionments of the 20th, he turns to a discussion of the limitations of AI, driven as they are by the inevitable limitations of humanity. AI, despite the promises of its proponents, is probably not going to be the most disruptive force since fire was discovered. And it's not going to be, because biological life is infinitely complex, and the idea that we can somehow create processors and chips and servers that come even close to replicating it is on par with that Victorian hubris. Here is a really important paragraph from this part of the essay:
In Nicaragua, in the 1980s, a few hundred deaf children in government schools developed Nicaraguan sign language. Against the will of the adults who supervised them, they created a new language, despite the fact that they were all linguistically deprived, most came from poor backgrounds, and some had developmental and cognitive disabilities. A human grammar is an impossibly complex system, to the point that one could argue that we’ve never fully mapped any. And yet these children spontaneously generated a functioning human grammar. That is the power of the human brain, and it’s that power that AI advocates routinely dismiss – that they have to dismiss, are bent on dismissing. To acknowledge that power would make them seem less godlike, which appears to me to be the point of all of this.
The human desire to be like God, or even be God. AI is just Babel, endlessly replaying down through history.
Anyways, read Freddie for more in that vein. I want to focus on something else. I've written here fairly recently on AI and my own pessimism, and even alarm, about this new technology. What I want to do is reiterate the nature of my AI alarm, in order that I not be misunderstood. This is important because of how Freddie describes many of the loudest voices of AI alarm out there. Here he is again:
Talk of AI has developed in two superficially-opposed but deeply complementary directions: utopianism and apocalypticism. AI will speed us to a world without hunger, want, and loneliness; AI will take control of the machines and (for some reason) order them to massacre its creators.
(…)
That, I am convinced, lies at the heart of the AI debate – the tacit but intense desire to escape now. What both those predicting utopia and those predicting apocalypse are absolutely certain of is that the arrival of these systems, what they take to be the dawn of the AI era, means now is over. They are, above and beyond all things, millenarians.
https://freddiedeboer.substack.com/p/ai-or-the-eternal-recurrence-of-hubris
I am not an AI millenarian. My brand of alarm is much more mundane than that, at least in the sense of Great Events. I dislike AI – in the form of the large language models and the like being developed and marketed right now – because I believe it is dehumanizing and destructive to cultural goods. I don't worry about ChatGPT taking over the world and killing all humans. That's far from anything I think possible, for the same reasons Freddie lays out. We don't need AI for those fears to become real; humans could attempt all of it right now, and they haven't. Haven't any of us paid attention to history? Here is Freddie again:
Everything that AI doomers say that artificial intelligence will do is something that human beings could attempt to do now. They say AI will launch the nukes, but the nukes have been sitting in siloes for decades, and no human has penetrated the walls of circuitry and humanity that guard them. They say AI will craft deadly viruses, despite the fact that gain-of-function research involves many processes that have never been automated, and that these viruses will devastate humanity, despite the fact that the immense Covid-19 pandemic has not killed even a single percentage point of the human population. They say that AI will take control of the robot army we will supposedly build, apparently and senselessly with no failsafes at all, despite the fact that even the most advanced robots extant will frequently be foiled by minor objects in their path and we can’t even build reliable self-driving cars. They say that we will see a rise of the machines, like in Stephen King’s Maximum Overdrive, so that perhaps you will one day be killed by an aggressive juicer, despite the fact that these are children’s stories, told for children.
No, I just worry that the growth of AI will perpetuate the neuroses and dangers of much of our modern technoculture. AI will deepen loneliness. It will continue the devaluation of the creative arts, of the humanities, of original ideas. It will become another engine of wealth inequality and economic destruction. As my recent essay flagged, it is another attempt by humanity to escape all frictions. It will be another technology that promises us the moon and leaves the vast majority of us holding the bag while a few get richer and more powerful. In the words of Freddie, it will further the modern propensity "to avoid human." It is an idol, in the same sense as the Golden Calf that Moses raged against. It promises what it cannot deliver, and we are so desperate to hear it that we forget how to be human.
I wrote this back in the spring:
All the while, people who are being promised a bright, AI-driven future will instead get more loneliness, more monetization of our attention, and less meaningful connection. It’s already well-acknowledged that Big Tech has used the levers of addiction to make the gains they have made in our lives; this knowledge will surely be put to use in figuring out how to addict us to AI in the hopes of extracting a few more pennies from the areas of our lives that have so far escaped their pocketbooks.
Freddie states it like this: "The bitter irony of the digital era has been that technologies that bring us communicatively closer have increased rather than decreased feelings of alienation and social breakdown." He's right. And this is what I fear from AI. That it will carry us further down the path of despair and alienation and cynicism and apathy we are already traveling. That's a pretty destructive thing to unleash on ourselves. That's what I fear.
