Defining my AI alarm

Freddie deBoer has a magisterially long piece about AI on his newsletter this week, which is really worth pouring a drink and sitting down to read. In it, after a compelling walk through the intellectual and cultural hubris of the 19th century, and the subsequent disillusionments of the 20th, he turns to a discussion of the limitations of AI, driven as they are by the inevitable limitations of humanity. AI, despite the promises of its proponents, is probably not going to be the most disruptive force since the discovery of fire. And it's not going to be so because biological life is infinitely complex, and the idea that we can somehow create processors and chips and servers that can even come close to replicating it is on par with that Victorian hubris. Here is a really important paragraph from this part of the essay:

In Nicaragua, in the 1980s, a few hundred deaf children in government schools developed Nicaraguan sign language. Against the will of the adults who supervised them, they created a new language, despite the fact that they were all linguistically deprived, most came from poor backgrounds, and some had developmental and cognitive disabilities. A human grammar is an impossibly complex system, to the point that one could argue that we’ve never fully mapped any. And yet these children spontaneously generated a functioning human grammar. That is the power of the human brain, and it’s that power that AI advocates routinely dismiss – that they have to dismiss, are bent on dismissing. To acknowledge that power would make them seem less godlike, which appears to me to be the point of all of this.

The human desire to be like God, or even be God. AI is just Babel, endlessly replaying down through history.

Anyways, read Freddie for more in that vein. I want to focus on something else. I’ve written here fairly recently on AI and my own pessimism and even alarm about this new use of technology. What I want to do is reiterate the nature of my AI alarm, in order that I not be misunderstood. This is important because of how Freddie describes many of the loudest voices of AI alarm that are out there. Here he is again:

Talk of AI has developed in two superficially-opposed but deeply complementary directions: utopianism and apocalypticism. AI will speed us to a world without hunger, want, and loneliness; AI will take control of the machines and (for some reason) order them to massacre its creators.

(…)

That, I am convinced, lies at the heart of the AI debate – the tacit but intense desire to escape now. What both those predicting utopia and those predicting apocalypse are absolutely certain of is that the arrival of these systems, what they take to be the dawn of the AI era, means now is over. They are, above and beyond all things, millenarians.

https://freddiedeboer.substack.com/p/ai-or-the-eternal-recurrence-of-hubris

I am not an AI millenarian. My brand of alarm is much more mundane than that, at least in the sense of Great Events. I dislike AI – in the form of the large language models and the like being developed and marketed right now – because I believe they are dehumanizing and destructive to cultural goods. I don't worry about ChatGPT taking over the world and killing all humans. That's far from anything I think possible, for the same reasons Freddie lays out. We don't need AI in order for those fears to become real; haven't any of us paid attention to history?

Everything that AI doomers say that artificial intelligence will do is something that human beings could attempt to do now. They say AI will launch the nukes, but the nukes have been sitting in siloes for decades, and no human has penetrated the walls of circuitry and humanity that guard them. They say AI will craft deadly viruses, despite the fact that gain-of-function research involves many processes that have never been automated, and that these viruses will devastate humanity, despite the fact that the immense Covid-19 pandemic has not killed even a single percentage point of the human population. They say that AI will take control of the robot army we will supposedly build, apparently and senselessly with no failsafes at all, despite the fact that even the most advanced robots extant will frequently be foiled by minor objects in their path and we can’t even build reliable self-driving cars. They say that we will see a rise of the machines, like in Stephen King’s Maximum Overdrive, so that perhaps you will one day be killed by an aggressive juicer, despite the fact that these are children’s stories, told for children.

No, I just worry that the growth of AI will perpetuate the neuroses and dangers of much of our modern technoculture. AI will perpetuate loneliness. It will continue the devaluation of the creative arts, of the humanities, of original ideas. It will become another tool of wealth inequality and economic destructiveness. As I flagged in a recent essay, it is another attempt by humanity to escape all friction. It will be another technology that promises us the moon and leaves the vast majority of us holding the bag while a few get richer and more powerful. In the words of Freddie, it will further the modern propensity to seek “to avoid human.” It is an idol, in the same sense as the Golden Calf that Moses raged against. It promises what it cannot deliver, and we are so desperate to hear it that we forget how to be human.

I wrote this back in the spring:

All the while, people who are being promised a bright, AI-driven future will instead get more loneliness, more monetization of our attention, and less meaningful connection. It’s already well-acknowledged that Big Tech has used the levers of addiction to make the gains they have made in our lives; this knowledge will surely be put to use in figuring out how to addict us to AI in the hopes of extracting a few more pennies from the areas of our lives that have so far escaped their pocketbooks.

Freddie states it like this: “The bitter irony of the digital era has been that technologies that bring us communicatively closer have increased rather than decreased feelings of alienation and social breakdown.” He’s right. And this is what I fear from AI. That it will continue us down the path of despair and alienation and cynicism and apathy we are traveling. That’s a pretty destructive thing to unleash on ourselves. That’s what I fear.

not a luddite

One final note on all my pessimism about technology recently: I don’t want to give the wrong idea. I’m not anti-technology. I don’t walk around with a flip phone, I do own a television with subscriptions to all the major streamers, I play Xbox often (current gaming: F1 2022), and I have a lot of cultural content I love and consume regularly (Star Wars, sports, prestige television). I am not against creature comforts, and I do love an evening on the couch with a good show or a basketball game.

The danger I want to warn against is the seeming demand to let our technologies dictate the shape of our lives, and the growing monetization of every aspect of them. As noted here before, I am a leftist with a strong critique of modern capitalism. I’m not an out-and-out socialist; instead, I reject any totalizing ideology that tries to fit humanity and culture into a mold, and the dominant ideology in our world today is global techno-capitalism.

opting out of AI

AI has been in the news a lot lately, and I have a lot of thoughts about the topic, many of which are still amorphous and uncertain. One thing I do know is that my attitude towards AI is wary and pessimistic, in line with my more recent turn away from technology and what Paul Kingsnorth is calling “The Machine.” I’m sure I’ll have more to say on the topic in the near future, especially in the essay series I’m developing on my newsletter, but for now, this piece by Kevin Drum (who generally is more optimistic about AI) caught my eye:

Starting in November, Clarkesworld began to receive a torrent of stories written by ChatGPT—which has apparently been touted to aspiring writers as a sure-thing moneymaker by an array of scam artists. This has now gotten so out of hand that Clarkesworld is no longer accepting unsolicited submissions—for now, at least.

In other news, ChatGPT is being used to write cover letters for job hunters. Is this kosher? Or a fraudulent attempt to appear as something you’re not?

https://jabberwocking.com/chatbots-are-taking-over-a-part-of-the-world/

This is one of the biggest issues I have with this new AI-driven world of creation: it’s entirely utilitarian and capitalist in the worst way possible. So many are embracing AI because of what Kevin says here, because it’s seen “as a sure-thing moneymaker.” The only goal is to fulfill a task, to eliminate as much friction in life as possible, and to profit as quickly and as shortsightedly as possible. There is no incentive to create art or write a story in order to become a better artist; there is no thought given to the idea that writing bad cover letters over and over again eventually helps you develop the skill to write better cover letters – and in the process, to become a better, more well-rounded human being who can communicate about your strengths and weaknesses. No, the end goal is all that’s in mind, the drive to get yours as fast and as painlessly as you can. Yes, the cut-throat and immoral greed of capitalism is partly to blame here, and it makes this path rational for a lot of people in purely economic terms. But at what long-term societal and ethical cost?

As someone who spends my days teaching teenagers the art of becoming good writers, I try to communicate this message all the time: sure, you can use ChatGPT to generate your essays and answers for you. But, at the end of the day, you haven’t gained a skill, you haven’t bettered yourself, and you haven’t made it any more likely that you’ll achieve the success you want in life. In fact, you’ve done the opposite. By getting AI to do the work for you, you are well on your way to being the kind of human envisioned in a movie like WALL-E.

We should want more from life than just to acquire. And the development of skills like writing, like making art, like telling stories, like interacting with other people: these are good in and of themselves. We don’t all have to be utilitarians. We shouldn’t think of all these things as merely means to the end of amassing stuff. The good of writing a story isn’t the money you can make off of it; the good is in practicing the ancient and deeply human art of writing, for itself.

This is why I am taking an early stand in my refusal to use AI, in any way, as far as I can avoid it (I am sure there are situations where the demands of corporate global capitalism will force me to use AI, whether I want to or not, in order to exist in this modern world). I don’t want AI to write for me, I don’t want art created by it, I don’t want it to make my life “easier” (whatever that means). Being human means exercising my mind for myself, not having a computer do it for me. It’s not a worry about AI plugging us all into the Matrix or something; it’s a regard for human dignity and creativity. Will you join me in this stand against AI?