Last week, the American Translators Association was in Boston for its 52nd annual conference, and so was I. The Association’s organizing efforts were excellent, as they usually are, and some 2000 people from the language industry were in attendance. My own presentation was slipped into the final time slot of the last day. I was worried the timing would cause a lot of people to miss it, but over 100 people stayed to hear me talk about speech-to-text software as a productivity booster, and to watch Dragon NaturallySpeaking dazzle and occasionally stumble in a live, unscripted demonstration. Tremendous thanks go out to everybody who showed up to the Dragon seminar, and especially to those who participated by offering their own insights. The slides can be downloaded here, and at the end of part 2 I’ll tack on a few points participants raised last Saturday that will help anyone make an informed choice about whether to take the speech-to-text plunge.
The more I talked to translators who were excited by Dragon, the clearer it became that the paucity of non-English support will be a big obstacle to wider adoption in this industry: on Windows it is available only in English, French, German, Dutch, Italian, and Spanish, and the Mac version drops even Spanish and Dutch. This limits it to those who translate into one or more of the supported languages. My understanding is that Nuance owns IBM’s ViaVoice technology, which offered some support for Asian languages, so I hope the gap reflects a failure to tap the bounty of the translation market rather than a severe technical hurdle. It is also a bit troubling that Dragon faces no serious competition in the speech-to-text market, at least if you’re looking for a tool that can enter text in any CAT tool or word processor and train itself to your voice for accuracy, which is exactly what most translators intrigued by the idea of dictating text want, hoping to save themselves both time and aching shoulders. However, I did outline a prospect for combining existing translation memory software with voice dictation that enterprising developers could look into.
The two best talks I attended on the first day of the conference (Thursday) were the ones given by translators Karen Tkaczyk and Jost Zetzsche. Karen presented in the Language Service Providers division, and her talk was chiefly addressed to translation agencies that outsource work to freelancers. She highlighted some of the ways that agencies can build relationships of communication and trust that leave translators eager to collaborate again and again, and to pass along the good word to other freelancers. These include offering greater transparency to the freelancer, volunteering critical information like whether the project will be proofread internally, and conveying a sense of what the end client wants to glean from the translation. Jost’s talk focused on the ongoing revolution in liberating data from translation memories, a decades-old software technology in which full sentences are stored as matches for their human-generated translations. These sentences are of little use unless an exact match or an 80%+ “fuzzy” match comes along in a new document, which almost never happens. Breaking them down into “subsegments” (individual words or phrases, matched with the words or phrases in the corresponding sentence in the other language that mirror them most clearly) has enabled search tools like Linguee and TAUS that offer more refined results.
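To make the fuzzy-match idea concrete, here is a minimal sketch in Python of how a translation memory lookup with an 80% similarity threshold might work. The sentence pairs are invented for illustration, and real CAT tools use more sophisticated edit-distance scoring than `difflib`’s ratio; this only shows why whole-sentence matching pays off so rarely.

```python
from difflib import SequenceMatcher

# Toy translation memory: source sentences mapped to their stored
# human translations. (Hypothetical data; a real TM holds thousands.)
tm = {
    "The pump must be inspected monthly.": "La pompe doit être inspectée chaque mois.",
    "Wear protective gloves at all times.": "Portez des gants de protection en permanence.",
}

def best_fuzzy_match(new_sentence, memory, threshold=0.8):
    """Return the most similar stored (source, translation) pair and its
    similarity score, or (None, score) if nothing clears the threshold."""
    best, best_score = None, 0.0
    for source, translation in memory.items():
        score = SequenceMatcher(None, new_sentence, source).ratio()
        if score > best_score:
            best, best_score = (source, translation), score
    return (best, best_score) if best_score >= threshold else (None, best_score)

# A near-duplicate sentence clears the 80% bar and reuses the old translation...
match, score = best_fuzzy_match("The pump must be inspected weekly.", tm)

# ...but an ordinary unrelated sentence, the common case, matches nothing.
miss, miss_score = best_fuzzy_match("Please ship the samples by Friday.", tm)
```

Subsegment search tools sidestep this all-or-nothing behavior by indexing the individual words and phrases inside each pair, so even a sentence with no close whole-sentence match can surface useful fragments.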
In the next couple of days, I’ll fill the second installment of this wrap-up with details from later talks about machine translation, French spelling reforms that have been slow to catch on, more about Dragon, and why I have no pictures of my own from Boston to put up.