As noted previously in these pages here and here, I’ve run a half-dozen experiments using ChatGPT for translation, entering passages of both prose and poetry from languages I know and trying different prompts. I’ve also introduced the software into my teaching, allowing student translators to use it for their translation projects provided that they describe how they’ve used it in their process reflection pieces.
The biggest potential virtue, to my mind, is the one it shares with other computer-assisted translation tools: speed, especially when dealing with long texts that have a lot of repetition. This probably rules out literary texts, especially those with pretensions to “high” art. But formulaic works such as romance novels or low-brow mysteries might work well. In one of his YouTube videos on ChatGPT and translation, Tom Gally of the University of Tokyo makes a similar claim about popular Japanese literature. (See approx. minute 8 here: https://www.youtube.com/watch?v=5KKDCp3OaMo.)
The software seems to do much better with fiction and literary nonfiction than with poetry, especially poetry with any sort of sound painting in it. I tried several times, unsuccessfully, to get it to create slant rhymes. While the software cannot technically “hear,” recognizing and creating phonetic representations of sounds is probably something it will do better in subsequent versions.
For use with literary texts that require, let’s say, greater writerly skills in the receiving culture, I could see the software being used to generate a first draft or even an alternate draft, something like the “fast pass” that many translators will do before setting about the critical tasks of researching references, inserting hidden explanatory phrasing, differentiating the idioms of characters, controlling the pace, and attending to other nuances.
Would I use it to do this? Probably not, as my practice does not generally include a quick first draft. I usually try to polish from the very start, re-reading and revising multiple times as I proceed to the next portion of a text. This is just how I work. I would also be suspicious of how a quick first draft generated by AI might influence all my subsequent drafts, up to and including the finished text.
The biggest weakness I have found so far is what software developers call AI’s “hallucination” challenge. In short, this means that it makes shit up, and it does so with apparent confidence. In practice, given the trust problem that this creates, one would need to read whatever it generated almost as if one already had a finished translation in one’s mind. Otherwise, a spot where the AI had slipped in something hallucinated, something that sounded like it could go there, would be hard to notice. This would require not proofing so much as checking every word and phrase against the source. I mean, imagine editing the text of a translator you knew had a tendency to make shit up and pass it off as authoritative. Maybe you could get good at recognizing the patterns, predicting where the text was likely to veer into pure invention. This does not sound like a time saver.
It’s moving fast, however, so I’m keeping my eye on it. I’ll likely continue to allow my translation students to use it occasionally (with appropriate commentary and reflection) and will keep experimenting to see if there are discrete tasks I can entrust to it. These look to me to be less about translation per se at this point, given its trust problem, than about invention. I could imagine, for instance, noticing something stylistically familiar in a text, e.g., this character sounds to me like a Croatian Winston Churchill, and then asking the AI to give me some Churchillisms for my English version. I believe it could do this fairly well, as long as the character didn’t have to speak in slant rhymes.
###