Defining my AI alarm

Freddie DeBoer has a magisterially long piece about AI on his newsletter this week, which is really worth pouring a drink and sitting down to read. In it, after a compelling walk through the intellectual and cultural hubris of the 19th century, and the subsequent disillusionments of the 20th, he turns to a discussion of the limitations of AI, driven as they are by the inevitable limitations of humanity. AI, despite the promises of its proponents, is probably not going to be the most disruptive force since fire was discovered. And it's not going to be so because biological life is infinitely complex, and the idea that we can somehow create processors and chips and servers that can even come close to replicating it is on par with that Victorian hubris. Here is a really important paragraph from this part of the essay:

In Nicaragua, in the 1980s, a few hundred deaf children in government schools developed Nicaraguan sign language. Against the will of the adults who supervised them, they created a new language, despite the fact that they were all linguistically deprived, most came from poor backgrounds, and some had developmental and cognitive disabilities. A human grammar is an impossibly complex system, to the point that one could argue that we’ve never fully mapped any. And yet these children spontaneously generated a functioning human grammar. That is the power of the human brain, and it’s that power that AI advocates routinely dismiss – that they have to dismiss, are bent on dismissing. To acknowledge that power would make them seem less godlike, which appears to me to be the point of all of this.

The human desire to be like God, or even be God. AI is just Babel, endlessly replaying down through history.

Anyways, read Freddie for more in that vein. I want to focus on something else. I’ve written here fairly recently on AI and my own pessimism and even alarm about this new use of technology. What I want to do is reiterate the nature of my AI alarm, in order that I not be misunderstood. This is important because of how Freddie describes many of the loudest voices of AI alarm that are out there. Here he is again:

Talk of AI has developed in two superficially-opposed but deeply complementary directions: utopianism and apocalypticism. AI will speed us to a world without hunger, want, and loneliness; AI will take control of the machines and (for some reason) order them to massacre its creators.

(…)

That, I am convinced, lies at the heart of the AI debate – the tacit but intense desire to escape now. What both those predicting utopia and those predicting apocalypse are absolutely certain of is that the arrival of these systems, what they take to be the dawn of the AI era, means now is over. They are, above and beyond all things, millenarians.

https://freddiedeboer.substack.com/p/ai-or-the-eternal-recurrence-of-hubris

I am not an AI millenarian. My brand of alarm is much more mundane than that, at least in the sense of Great Events. I dislike AI – in the form of the large language models and the like being developed and marketed right now – because I believe they are dehumanizing and destructive to cultural goods. I don’t worry about ChatGPT taking over the world and killing all humans. That’s far from anything I think possible, for the same reasons Freddie lays out. We don’t need AI in order for those fears to become real; haven’t any of us paid attention to history?

Everything that AI doomers say that artificial intelligence will do is something that human beings could attempt to do now. They say AI will launch the nukes, but the nukes have been sitting in siloes for decades, and no human has penetrated the walls of circuitry and humanity that guard them. They say AI will craft deadly viruses, despite the fact that gain-of-function research involves many processes that have never been automated, and that these viruses will devastate humanity, despite the fact that the immense Covid-19 pandemic has not killed even a single percentage point of the human population. They say that AI will take control of the robot army we will supposedly build, apparently and senselessly with no failsafes at all, despite the fact that even the most advanced robots extant will frequently be foiled by minor objects in their path and we can’t even build reliable self-driving cars. They say that we will see a rise of the machines, like in Stephen King’s Maximum Overdrive, so that perhaps you will one day be killed by an aggressive juicer, despite the fact that these are children’s stories, told for children.

No, I just worry that the growth of AI will perpetuate the neuroses and dangers of much of our modern technoculture. AI will perpetuate loneliness. It will continue the devaluation of the creative arts, of the humanities, of original ideas. It will become another tool of wealth inequality and economic destructiveness. As my recent essay flagged, it is another attempt by humanity to escape all frictions. It will be another technology that promises us the moon and leaves the vast majority of us holding the bag while a few get richer and more powerful. In the words of Freddie, it will further the modern propensity to seek “to avoid human.” It is an idol, in the same sense as the Golden Calf that Moses raged against. It promises what it cannot deliver, and we are so desperate to hear it that we forget how to be human.

I wrote this back in the spring:

All the while, people who are being promised a bright, AI-driven future will instead get more loneliness, more monetization of our attention, and less meaningful connection. It’s already well-acknowledged that Big Tech has used the levers of addiction to make the gains they have made in our lives; this knowledge will surely be put to use in figuring out how to addict us to AI in the hopes of extracting a few more pennies from the areas of our lives that have so far escaped their pocketbooks.

Freddie states it like this: “The bitter irony of the digital era has been that technologies that bring us communicatively closer have increased rather than decreased feelings of alienation and social breakdown.” He’s right. And this is what I fear from AI. That it will continue us down the path of despair and alienation and cynicism and apathy we are traveling. That’s a pretty destructive thing to unleash on ourselves. That’s what I fear.

Excerpt #29: restraint and limits

To argue for a balance between people and their tools, between life and machinery, between biological and machine-produced energy, is to argue for restraint upon the use of machines. The arguments that rise out of the machine metaphor – arguments for cheapness, efficiency, labor-saving, economic growth, etc. – all point to infinite industrial growth and infinite energy consumption. The moral argument points to restraint; it is a conclusion that may be in some sense tragic, but there is no escaping it. Much as we long for infinities of power and duration, we have no evidence that these lie within our reach, much less within our responsibility. It is more likely that we will have either to live within our limits, within the human definition, or not live at all. And certainly the knowledge of these limits and of how to live within them is the most comely and graceful knowledge that we have, the most healing and the most whole.

Wendell Berry, “The Use of Energy” in The Unsettling of America

don’t be significant or effective

Some of my critics were happy to say that my refusal to use a computer would not do any good. I have argued, and am convinced, that it will at least do me some good, and that it may involve me in the preservation of some cultural goods. But what they meant was real, practical, public good. They meant that the materials and energy I save by not buying a computer will not be “significant.” They meant that no individual’s restraint in the use of technology or energy will be “significant.” That is true.

But each one of us, by “insignificant” individual abuse of the world, contributes to a general abuse that is devastating. And if I were one of thousands or millions of people who could afford a piece of equipment, even one for which they had a conceivable “need,” and yet did not buy it, that would be “significant.” Why, then, should I hesitate for even one moment to be one, even the first one, of that “significant” number? Thoreau gave the definitive reply to the folly of “significant numbers” a long time ago: Why should anybody wait to do what is right until everybody does it? It is not “significant” to love your own children or to eat your own dinner, either. But normal humans will not wait to love or eat until it is mandated by an act of Congress.

Wendell Berry, “Feminism, the Body, and the Machine” in What Are People For?

One of my favorite lines of thought within good Christian theology is a critique of the desire for efficiency and significance in modern culture. I based the entire first series of my essay project at The Radical Ordinary on this critique. For Wendell Berry, it is an ongoing critique as well, and he states it so well in this essay. The world conforms itself to the demands of economics, of numbers and dollars and cents: everything must be efficient, streamlined, frictionless.

But, as Berry reminds us here, love is not efficient. Love is not significant, at least not in the way the world would view significance. It does not contort itself to meet the needs of the invisible hand of the market, but instead moves things out of its reach. As Christians, and as the Church, questions of efficiency must always be pretty far down the list of priorities in making decisions about the use of our time, resources, and love. Other things must come first.

In the newest issue of Plough Quarterly, there is a story about the Palazzo Migliori, a mansion just off Saint Peter’s Square in the Vatican, that Pope Francis had turned into a home for people with nowhere else to go. The story contemplates the divine wastefulness of turning such a beautiful and historic building into a shelter for just a few people. In this section of the piece, I am reminded of these conversations I keep having here about effectiveness, and the words of Stanley Hauerwas and Wendell Berry:

Pope Francis dining at the Palazzo Migliori

This place gives Anna a story that bends toward peace and rests there. Something about its over-the-top-ness: the carefully painted crests on the ceiling, the terrace overlooking Saint Peter’s Square, the unnecessarily good food. The visitors who know your name and your favorites and your good and bad habits, who know you need to put that cream on your foot and will banter with you until you do it. Above all it is knowing: that this place could have been a posh hotel; that some might call its current incarnation a waste; that you are not being given the bare minimum.

When we love someone, we are not thinking of how to do so efficiently; we are thinking how to do it well. Think of new parents preparing a beautiful nursery: they may buy things the child never uses, and perhaps some of that money and effort might be better used elsewhere. But we are not surprised when loving parents put more thought and work into preparing a place than is strictly necessary.

There are certain things that we know make a good place for anyone – shelter from the cold, a quiet place to sleep, a warm stew, a clean place to wash up, art, song, softness – and we can prepare these things even before we meet the recipients. Once we meet, there begins the work of making it a good place for them in particular – for Astriche, who loves chamomile; for Lioso, who is so much more tired than hungry and just wants to sleep; for Ajim and his appetite; for Anna the teller of tales.

https://www.plough.com/en/topics/justice/social-justice/princess-of-the-vatican

The mindless drive for efficiency and significance is a depersonalizing drive. Love is not depersonalized. It requires intimacy, connection, and a knowing of the other we are called to love. You can build a generic homeless shelter, sure. But you can’t build a home, or a relationship, that way. And only those relationships of love save lives and make the world a better place. And remember, you don’t need permission to act this way, or to develop a strategic 12-point plan to figure out how. Just ask, how can I show love today, or in this situation, or in this specific encounter, and then do those things. Don’t worry if it is the most effective use of your time. Don’t worry about whether it will undermine some bigger Plan. Don’t run a cost-benefit analysis. Just love, and be loved, as God wants us to be.