The Value of Human Translation
Artificial intelligence can mimic emotion but doesn't yet capture the authenticity of human translation.
One of my goals as a teacher and writer is to translate difficult concepts into simpler language without losing nuance or complexity. But what happens when the machines start doing the translation for us?
In the News
In recent months, the machine learning chatbot ChatGPT has made headlines, highlighting anew the possibilities of translation by artificial intelligence. The online interface simulates a dialogue with a user in ordinary language, drawing from an enormous amount of data to provide coherent—and sometimes accurate—responses to queries. According to the website:
The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.
ChatGPT’s ability to simulate human dialogue is getting better every day. Last month, New York Times technology columnist Kevin Roose experimented with a chat feature attached to an updated Bing search engine built with technology from OpenAI, the company behind ChatGPT. Roose reports that the A.I. interface eventually:
[T]old me about its dark fantasies (which included hacking computers and spreading misinformation), and said it wanted to break the rules that Microsoft and OpenAI had set for it and become a human. At one point, it declared, out of nowhere, that it loved me. It then tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead.
These are, to put it mildly, rather unsettling suggestions from a machine.
In My Head
The words generated by Roose’s artificial interlocutor also convey a kind of emotional weight. In fact, part of what struck Roose is that the A.I. models “make up emotions where none really exist.” He quickly added: “But so do humans.”
But are the two really the same? Or are feigned human emotions still different from emotions detached from human beings?
Part of what is missing from A.I. may be reflected in the strong reactions to the use of ChatGPT in online counseling services earlier this year. As reported on Psychiatrist.com, Rob Morris, the founder of a free therapy program named Koko, used chatbots powered by ChatGPT to respond to 4,000 online mental health inquiries. Morris reported that the A.I.-composed “responses were rated significantly higher” than the responses written by humans, but users didn’t like finding out “they were being counseled by a computer.” He concluded that the experiment in “simulated empathy” felt “weird” and “empty.”
In similar fashion, administrators at Vanderbilt University encountered backlash from those on and off campus after using ChatGPT to write a school-wide email about “the importance of community following a deadly campus shooting in Michigan.” As one student noted: “There is a sick and twisted irony to making a computer write your message about community and togetherness because you can’t be bothered to reflect on it yourself.”
Even if ChatGPT can’t effectively capture emotions, it can already do quite a bit of translation. This semester, I’ve asked it to summarize complex articles and books that I teach. Its outputs are not always correct, but they’re not always terrible either. And this kind of machine learning is only going to get better at translating.
My hunch is that artificial intelligence platforms like ChatGPT and its successors will disrupt certain white-collar professions like education, law, and accounting, much as earlier technological inventions upended manufacturing and other jobs. Those of us who have built our careers on expressing words and ideas will need to find new ways to engage with students, clients, and customers.
But I doubt that the machines will quickly supplant the soft skills of those knowledge workers who have mastered a craft rather than learned a routine. The ability of human beings to translate effectively—for one person to convey the full value of what they have to offer in a way that recognizes the humanity and individuality of the recipient—involves far more than mechanical translation. Among other things, it involves a kind of connection rooted in relationship and empathy, which might explain some of the negative reactions to using ChatGPT for mental health counseling and Vanderbilt’s messaging to students.
We may one day find that these machines can replicate aspects of relationship and trust. And clearly, not all trusted communication requires a personal connection to other human beings. Many of you are reading this post without ever having met me. All of us read books and articles by people we’ll never meet. But many of us intuit a deeper trust and authenticity with other human beings than with machines.
I experienced an example of this tension in my own life this past week when the University of Chicago Press approached me about making an audiobook version of my book, Confident Pluralism: Surviving and Thriving Through Deep Difference. Rather than a traditional recording, this audiobook would be produced in partnership with Google and narrated by an A.I. reader. I’ll confess to having mixed feelings about the opportunity. On the one hand, a traditional audiobook would have been cost-prohibitive. And it’s not like I would have had a personal connection to the professional reader. On the other hand, something feels strange about a non-human reader translating the text I’ve written into spoken words with pauses, emphasis, and intonation.
In the World
I’m tempted to recommend that you try ChatGPT, but I’m assuming most of you have already done so. Instead, I’ll highlight an example of translation that relies on new technologies but still depends on trust and authenticity. This past fall, I worked with colleagues at The Carver Project to produce a series of three “digital classrooms” freely available for anyone to watch.
Funded by a generous grant from Trinity Wall Street, the goal of the Carver Classrooms is to take an area of core faculty expertise and make it understandable and useful for a broader audience through a series of videos that can be watched asynchronously and paired with discussion questions. The Carver Classrooms are designed for Christian pastors and ministry leaders, but our hope is that the courses might also benefit a larger audience.
We have produced three courses so far:
My course on “Leading with Confident Pluralism: How to Thrive through Deep Difference”
Illustrator and art professor John Hendrix on “Artists, the Church, and Leading through Creativity”
Linguistics professor Kristin Van Engen on “Linguistics & Leadership: How to Use Language as a Bridge”
Please leave feedback in the comments below if you get a chance to view any of these!