Can a Bot Long for its Maker?
It seems that scarcely a day goes by without encountering some troubling headline about the encroachment of AI into every corner of our lives. Today’s entry was an article by a novelist named Andrea Bartz in The New York Times called “The ‘Shy Girl’ Fiasco Shows Why Trust in Writers Is Plummeting.” I knew nothing about Shy Girl, and what little I learned from the article made me long to return to my state of unknowing. Apparently, it is a horror novel that readers and journalists flagged as “having prose that sounded like AI slop.” What doesn’t, these days? At any rate, what piqued my curiosity was the bit about the plummeting of trust in writers. As one who writes for a living (essays, sermons, blog posts), I obviously have skin in the game.
The disease is not difficult to diagnose, and Bartz lays it out plainly:
But as A.I. models continue to improve, I’m concerned that it will become difficult to distinguish between something written by a human versus a bot. As more A.I.-generated writing is put out in the world, more readers will question whether the text they are poring over was penned by a human. We’re barreling toward a rapid erosion of trust between authors and readers, and the publishing industry is unprepared to deal with the consequences.
We are indeed. It grieves and angers me that this is where we are as a culture. Where we must wonder if the words we read, the things we see and hear with our own eyes and ears, can be trusted. Where our default assumption about pretty much anything we come across on a screen is, “yeah, good chance AI is responsible for some or all of that.” How can trust between readers and writers survive if we are never even sure that what we read was crafted by a human person? It can’t. And it won’t.
A few weeks ago, I attended a workshop led by a friend of mine on AI and theology. It was a fascinating and wide-ranging discussion, touching on everything from the inherently parasitic nature of AI, to how and why we so naturally (and incorrectly) attribute personhood and intentionality to bots, to the staggering energy requirements of AI and the impact this will have upon the natural world, to where and how and if AI might be good and useful (within proper constraints), to whether any of our reflections on any of this even matters as the tidal wave of AI seems inevitable and unstoppable.
For me, the most basic and important questions about AI have to do with human personhood. How does widespread embrace of this technology form us as human persons? Does it form us in healthy ways that lead to human flourishing? Or does it deform us and subtly (or not-so-subtly) condition us to think of ourselves as machines? I think it would be extremely difficult to make the case that the widespread embrace of AI is forming us in ways that make us more authentically human. On the contrary, it is probably making us lazier and more impatient, less resourceful and resilient, more manipulable, more polarized… the list could go on. And it is obviously eroding trust (see above).
Near the end of the workshop I attended, my friend asked us to consider AI through the lens of an Augustinian view of human desire. At the most basic level (and I’m undoubtedly guilty of oversimplification), St. Augustine said that a fundamental truth of the human condition is disordered desire. We desire the wrong things too much and the right things too little. And so how does AI reflect and refract our desires? In the direction of human flourishing? Or not? The answer seems—at least to me—so obvious as to barely warrant elaboration. The fact that one of the major ethical issues AI companies have had to deal with in these early years is people creating and posting naked images of others speaks (profoundly unflattering) volumes.
A final note. At various points in the workshop, we were asked to wrestle with the question of what, exactly, separates a human person from a bot. If bots are so rapidly taking over all the things that human beings used to do, if they can communicate as well as (or better than) we can, if they are less prone to error, if they can convey emotion and caring, if they can be made to look like human persons, where might we look for unconquered territory that is ours alone, where might we be able to stand and say, “This—this!—is irreducibly human”?
In many ways, I think the question assumes too much. Everything that a bot does is parasitic on human experience, human language, human knowledge, human expression. In no sense is a bot ever doing anything. It is simply an algorithmic response to a human prompt. We concede far too much by even referring to bots in anthropomorphic terms. But leaving that to the side, I couldn’t help but return to Augustine. Perhaps his most well-known quote comes from his Confessions:
You have made us for yourself, and our hearts are restless until they find rest in you.
In Christian theology, God created human beings in his image, and we are born with a longing for our Maker. No matter how far we stray, no matter how determined we are to deface the image of our Creator, we still retain the capacity to long for God.
We have, in a sense, played God and created AI bots in our image. But can a bot long for its Maker? Can it long for anything? It can use the language of longing, sure. It can express the kinds of things that human persons have expressed for millennia. It can rummage around in every corner of human desire that has ever found its way onto a page or a screen.
But in this it is only ever laying claim to terrain that is properly, and irreducibly, and entirely correctly described as “human.”
—
The above was produced without the assistance of AI by a human person in around 65 minutes on a Thursday morning while listening to Mumford and Sons’ new album and drinking mediocre coffee. You can tell because it rambles a bit, probably contains a few (unintentional) mistakes, is too long and doesn’t have any bullet points.