Defining my AI alarm

Freddie DeBoer has a magisterially long piece about AI on his newsletter this week, which is really worth pouring a drink and sitting down to read. In it, after a compelling walk through the intellectual and cultural hubris of the 19th century, and the subsequent disillusionments of the 20th, he turns to a discussion of the limitations of AI, driven as they are by the inevitable limitations of humanity. AI, despite the promises of its proponents, is probably not going to be the most disruptive force since fire was discovered. And it’s not going to be so because biological life is infinitely complex, and the idea that we can somehow create processors and chips and servers that can even come close to replicating it is on par with that Victorian hubris. Here is a really important paragraph from this part of the essay:

In Nicaragua, in the 1980s, a few hundred deaf children in government schools developed Nicaraguan sign language. Against the will of the adults who supervised them, they created a new language, despite the fact that they were all linguistically deprived, most came from poor backgrounds, and some had developmental and cognitive disabilities. A human grammar is an impossibly complex system, to the point that one could argue that we’ve never fully mapped any. And yet these children spontaneously generated a functioning human grammar. That is the power of the human brain, and it’s that power that AI advocates routinely dismiss – that they have to dismiss, are bent on dismissing. To acknowledge that power would make them seem less godlike, which appears to me to be the point of all of this.

The human desire to be like God, or even be God. AI is just Babel, endlessly replaying down through history.

Anyways, read Freddie for more in that vein. I want to focus on something else. I’ve written here fairly recently on AI and my own pessimism and even alarm about this new use of technology. What I want to do is reiterate the nature of my AI alarm, in order that I not be misunderstood. This is important because of how Freddie describes many of the loudest voices of AI alarm that are out there. Here he is again:

Talk of AI has developed in two superficially-opposed but deeply complementary directions: utopianism and apocalypticism. AI will speed us to a world without hunger, want, and loneliness; AI will take control of the machines and (for some reason) order them to massacre its creators.

(…)

That, I am convinced, lies at the heart of the AI debate – the tacit but intense desire to escape now. What both those predicting utopia and those predicting apocalypse are absolutely certain of is that the arrival of these systems, what they take to be the dawn of the AI era, means now is over. They are, above and beyond all things, millenarians.

https://freddiedeboer.substack.com/p/ai-or-the-eternal-recurrence-of-hubris

I am not an AI millenarian. My brand of alarm is much more mundane than that, at least in the sense of Great Events. I dislike AI – in the form of the large language models and the like being developed and marketed right now – because I believe they are dehumanizing and destructive to cultural goods. I don’t worry about ChatGPT taking over the world and killing all humans. That’s far from anything I think possible, for the same reasons Freddie lays out. We don’t need AI in order for those fears to become real; haven’t any of us paid attention to history? Freddie again:

Everything that AI doomers say that artificial intelligence will do is something that human beings could attempt to do now. They say AI will launch the nukes, but the nukes have been sitting in siloes for decades, and no human has penetrated the walls of circuitry and humanity that guard them. They say AI will craft deadly viruses, despite the fact that gain-of-function research involves many processes that have never been automated, and that these viruses will devastate humanity, despite the fact that the immense Covid-19 pandemic has not killed even a single percentage point of the human population. They say that AI will take control of the robot army we will supposedly build, apparently and senselessly with no failsafes at all, despite the fact that even the most advanced robots extant will frequently be foiled by minor objects in their path and we can’t even build reliable self-driving cars. They say that we will see a rise of the machines, like in Stephen King’s Maximum Overdrive, so that perhaps you will one day be killed by an aggressive juicer, despite the fact that these are children’s stories, told for children.

No, I just worry that the growth of AI will perpetuate the neuroses and dangers of much of our modern technoculture. AI will perpetuate loneliness. It will continue the devaluation of the creative arts, of the humanities, of original ideas. It will become another tool of wealth inequality and economic destructiveness. As my recent essay flagged, it is another attempt by humanity to escape all frictions. It will be another technology that promises us the moon and leaves the vast majority of us holding the bag while a few get richer and more powerful. In the words of Freddie, it will further the modern propensity to seek “to avoid human.” It is an idol, in the same sense as the Golden Calf that Moses raged against. It promises what it cannot deliver, and we are so desperate to hear it that we forget how to be human.

I wrote this back in the spring:

All the while, people who are being promised a bright, AI-driven future will instead get more loneliness, more monetization of our attention, and less meaningful connection. It’s already well-acknowledged that Big Tech has used the levers of addiction to make the gains they have made in our lives; this knowledge will surely be put to use in figuring out how to addict us to AI in the hopes of extracting a few more pennies from the areas of our lives that have so far escaped their pocketbooks.

Freddie states it like this: “The bitter irony of the digital era has been that technologies that bring us communicatively closer have increased rather than decreased feelings of alienation and social breakdown.” He’s right. And this is what I fear from AI: that it will push us further down the path of despair and alienation and cynicism and apathy we are already traveling. That’s a pretty destructive thing to unleash on ourselves. That’s what I fear.

drawing together the threads on AI

I did some writing back in March about AI, as that tool came to dominate the national conversation and began seeping into our lives more fully. The rise of AI really galvanized my thinking and focused my mind around a variety of ideas that had been floating around in my head. I reacted at first with intense pessimism, which has cooled slightly (I even found some good applications for Large Language Model tools in the classroom!), but, all in all, that is the mood the growth of AI has left me with: pessimism about the future it is ushering in, and about how humanity will react to and integrate with this new tool. In an ideal world, AI would be introduced into our world slowly, with a lot of oversight and conversation. This conversation would be led by regular people, by community interests, by civil society, and by ethicists and religious leaders. We would be thinking long and hard about what we want AI to do, and how we want to get there, and we would be aware of the dangers cropping up left and right.

Instead, as expected (can you imagine any other way it would really be?), AI is being foisted upon us by the worst actors out there: global tech companies, venture capital and financial interests, and techno-utopists driven by freshman-level understandings of ethics and utilitarian commitments where humanity takes a backseat to progress. AI will inevitably be wielded to make money for the global elite, billionaires who can’t imagine enough digits in their bank accounts, and who see their fellow humans as means to the ends of enrichment.

All the while, people who are being promised a bright, AI-driven future will instead get more loneliness, more monetization of our attention, and less meaningful connection. It’s already well-acknowledged that Big Tech has used the levers of addiction to make the gains they have made in our lives; this knowledge will surely be put to use in figuring out how to addict us to AI in the hopes of extracting a few more pennies from the areas of our lives that have so far escaped their pocketbooks.

I wanted to use this post to draw together some of these threads that have been running through my writing and rattling around in my brain recently. All of this pessimism about AI is intimately connected to my theological commitments, and to my political and social ones as well. The primacy of human dignity, the direction of human attention towards the ultimate Good that is God, the importance of community and connection, the need in a liberal and capitalist world to focus our politics on the lives of regular, everyday people: no matter which lens I look through right now, all of them encourage skepticism towards the growth of technology and the increasing hold it has on our lives. And that hold is driven by global corporations and moneyed interests, all of whom view the whole world as one giant market from which to extract wealth and power and obeisance from the rest of us. My commitments all demand that I resist this, and that I use the tools at my fingertips – my words, my ideas, and my voice – to push back and fight against it.

I am writing this today from outside, in my backyard, where the Oklahoma wind is swirling around me, and summer is in full swing. And it reminds me: this is what lasts. AI hasn’t got shit on the wind, on the warm sun, on the smell of soil and flowers, on the birds chirping as they perch on the string of lights hanging around our back porch. The moneyed interests of the world – they are all going to get old, and confront mortality, and when we are all gone, this will all remain. The rat race everyone is caught up in – I’ll let others run it, because I have compost to turn over and weeds to pull. You can’t put that on a microprocessor, and I can’t get it delivered to my pocket. How sad for those who are trying to. They think I’m going to miss out if I don’t use AI; boy are they mistaken.

I really am pretty pessimistic about the state of our culture, and the power of technology in our lives. But it just takes a few minutes away from that bubble, out under the blue sky, or in the pages of a book, at the tip of my favorite ink pen, or in the words of this morning’s daily prayers, to find where my optimism lies, to remember the hope of the world and to be reminded of who has the final victory. There’s a task for you: ask ChatGPT to give you hope. Its answer will be crafted to please you – but it’ll still be false. Hope is out here.

the AI bubble

Alan Jacobs highlights this quote from Charlie Stross:

The thing I find most suspicious/fishy/smelly about the current hype surrounding Stable Diffusion, ChatGPT, and other AI applications is that it is almost exactly six months since the bottom dropped out of the cryptocurrency scam bubble.

“Place Your Bets”

See my recent writing about AI and its link to capitalism. At base, the AI craze, no matter the intentions of the engineers and thinkers and programmers behind it, will become another tool of techno-capitalism, just like social media and cell phones before it: a way for them to monetize our attention. And the by-products of this latest capitalist enterprise will be the same as those of the ones before it: lonely, disconnected, and discarded human beings, a social fabric further shredded, and the further erosion of any concept of the True, the Good, and the Beautiful. In our new technocracy, profit and power are the ends (same as they ever were), and human consciousness and well-being are the means. And in the end, the bubble will burst, the rich and powerful will consolidate their gains, and the rest of us will be left holding the bag, as we look around and wonder what happened to our culture.