A new batch of machine translation tools driven by artificial intelligence is already translating tens of millions of messages per day. Proprietary ML translation services from Google, Microsoft, and Amazon are in daily use, while Facebook charts its own course with open-source approaches. What works best for translating software, documentation, and natural language content? And where is AI-driven neural machine translation heading?

The Evolution of Neural Machine Translation

William Mamane, Head of Digital Marketing at Tomedes, a professional language services agency, had been a skeptic of machine translation. “Our company has been around for 12 years, with 50,000 plus business clients. We have championed the value of ‘human translation’ and still do.

“However, we have witnessed a steady advance in the quality of machine translation. At present, machine translation does not rival a good mother-tongue linguist, but there still is a place for AI and machine translation in the translation services value chain.”

To trace this evolution, let’s go back to the origins of AI as applied to machine translation. At a basic level, MT uses algorithms to substitute words in one language for those in another. That proves insufficient to translate successfully.

Understanding whole phrases is necessary in both the source and target languages. We can understand MT as decoding the source language and re-encoding its meaning in the target language.


There are various approaches to this challenge: some apply statistics to choose the best translation for a given phrase; others apply structured rules to select the most likely meaning. But with complex language forms such as fiction or other literature, even the best machine translation engines do not sound natural.
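The statistical idea can be illustrated with a toy phrase-table lookup. This is an invented sketch, not any production system; the phrases and probabilities here are made up for the example.

```python
# Toy statistical phrase selection: given candidate translations of a
# source phrase with (invented) corpus-derived probabilities, pick the
# most likely one. Real statistical MT scores many phrase segmentations
# and combines translation- and language-model probabilities.
phrase_table = {
    "guten morgen": [("good morning", 0.72), ("good day", 0.21), ("fine morning", 0.07)],
    "vielen dank": [("thank you very much", 0.65), ("many thanks", 0.35)],
}

def best_translation(source_phrase):
    """Return the highest-probability candidate for a known phrase."""
    candidates = phrase_table[source_phrase.lower()]
    return max(candidates, key=lambda pair: pair[1])[0]

print(best_translation("Guten Morgen"))  # -> good morning
print(best_translation("Vielen Dank"))   # -> thank you very much
```

Rule-based systems replace the probability lookup with hand-written grammatical rules; the selection step is where the two approaches differ.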

Machines do better with structured language for specific uses: weather reports, financial reports, government protocols, legal documents, and sports results. Language and idioms are limited in these cases, and the linguistic structures and formats are formulaic.

From Algorithm to Systems

In these domains, machine translation is already in daily use. Even so, it does not obviate the need for human editors and proofreaders, who identify proper names, resolve ambiguities, and decipher idioms. That supervisory, editorial, or auditing role, however, is less demanding and less time-consuming than translation itself.

On the web, automated translation started in the 1990s with Xerox's use of SYSTRAN and with AltaVista's Babel Fish. Both relied on rule-based methods to translate short texts. The popularity of both was striking: in 1996, AltaVista reported that Babel Fish fielded half a million requests in a day.

Even back in 2012, Google processed translations that would fill one million books per day, and that was before the translation revolution of the last five years.

Neural Machine Translation 

Neural machine translation (NMT) uses an artificial neural network. This deep learning technique looks at full sentences when translating, not just individual words. Neural networks require a fraction of the memory needed by statistical methods, and they work far faster.

Deep learning applications first appeared in speech recognition in the 1990s. The first scientific paper on using neural networks in machine translation appeared in 2014 and was rapidly followed by many advances in the field.

In 2015, an NMT system appeared for the first time in OpenMT, a machine translation competition. Since then, competitions have been filled almost exclusively with NMT tools.

The latest NMT approaches use what is called a bidirectional recurrent neural network, or RNN. These networks combine an encoder, which condenses the source sentence into a representation, with a second RNN, called a decoder, which predicts the words that should appear in the target language. Google uses this approach in the NMT that drives Google Translate.
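A minimal sketch of the encoder-decoder pattern, using NumPy with randomly initialized weights standing in for trained ones (all sizes and names are invented for illustration, and the bidirectional and attention parts are omitted): the encoder folds the source tokens into a single state vector, and the decoder unrolls from that state to emit target-word scores.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden, src_vocab, tgt_vocab = 8, 5, 6   # toy sizes, chosen arbitrarily

# Random parameters stand in for weights a real system would learn.
W_enc = rng.normal(size=(hidden, hidden))
U_enc = rng.normal(size=(hidden, src_vocab))
W_dec = rng.normal(size=(hidden, hidden))
V_out = rng.normal(size=(tgt_vocab, hidden))

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

def encode(src_ids):
    """Fold the source sentence, one token at a time, into a state vector."""
    h = np.zeros(hidden)
    for i in src_ids:
        h = np.tanh(W_enc @ h + U_enc @ one_hot(i, src_vocab))
    return h

def decode(h, steps):
    """Unroll the decoder from the encoder state, one target token per step."""
    out = []
    for _ in range(steps):
        h = np.tanh(W_dec @ h)
        scores = V_out @ h              # unnormalized target-word scores
        out.append(int(np.argmax(scores)))
    return out

state = encode([0, 3, 1])               # a three-token "source sentence"
print(decode(state, 4))                 # four predicted target-token ids
```

The key property the prose describes is visible here: the whole sentence passes through one shared state before any target word is chosen, rather than words being substituted one for one.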

Microsoft uses RNNs in Microsoft Translator and Skype Translator; both aim to realize the long-held dream of simultaneous translation. Harvard's NLP group recently released an open-source neural machine translation system, OpenNMT. Facebook is running extensive experiments with open-source NMT, learning from the language of its users.

Google Translate

Google Translate is a free multilingual machine translation service developed by Google to translate text. It offers a website interface and mobile apps for Android and iOS, and its API helps developers build browser extensions and software applications. Google Translate supports over 100 languages, living and dead. As of May 2017, it was serving over 500 million people daily; as of 2018, it was translating more than 100 billion words per day.

Although Google Translate is not as reliable as human translation, it is getting closer. In a 2018 study, Google asked native speakers of each language to rate Google Translate's output on a scale of 0 to 6; it scored an average of 5.43. Performance varies across languages: for example, Google Translate performs best when English is the target language and the source is a European language.

Microsoft Translator

Microsoft Translator is a multilingual machine translation cloud service integrated across multiple consumer, developer, and enterprise products. The Translator Text API has a free tier allowing two million characters per month; paid tiers permit the translation of billions of characters per month.
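Since the free tier is metered in characters, a pre-flight check like the following can help a developer stay within quota. The two-million figure comes from the article; the function and its simple length-based accounting are my own illustration, not part of Microsoft's API.

```python
FREE_TIER_CHARS = 2_000_000  # free monthly character quota cited above

def fits_free_tier(texts, used_this_month=0):
    """Return (fits, total) for a batch of strings against the free quota.

    Character-metered translation APIs generally bill by the length of
    the submitted text, so len() is used here as a rough approximation.
    """
    total = used_this_month + sum(len(t) for t in texts)
    return total <= FREE_TIER_CHARS, total

ok, total = fits_free_tier(["Hello, world!", "Guten Tag"],
                           used_this_month=1_999_990)
print(ok, total)   # this batch would exceed the free quota
```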

Speech translation via Microsoft Speech services is metered by the duration of the source audio stream. As of August 2019, the service supports 65 languages and 11 speech translation systems, powering the live conversation feature in various apps.

Microsoft takes a new approach to live translation: the initiator of a conversation gets a code to pass to participants, which lets each of them join the conversation in their preferred language.

Facebook Translator

As of 2017, Facebook performed about 4.5 billion automatic translations per day using neural networks, having swapped out phrase-based machine translation for its more than 2 billion users. Facebook opted for convolutional neural networks (CNNs) rather than recurrent neural networks (RNNs), hoping that the translated text would more closely resemble natural language.

While RNNs process information linearly and methodically, CNNs approach information as a hierarchy, which permits the recognition of non-linear relationships in the data. The technique has already proven valuable in machine vision, where it surpasses human sight on some benchmarks. For translation, it grasps context better.

A significant advantage of Facebook's translator is its multi-hop attention capability. Attention emulates how humans translate: as a rule, we don't break down a sentence all at once and then translate it; instead, we return to it again and again, checking and re-checking its meaning. The CNN mimics this process, considering the sentence repeatedly and making choices about what to translate first.

A first “glance” might start with a verb; a second might look at a noun. Translating in hops helps figure out textual relationships in context. It is also more efficient: Facebook rates its CNN nine times faster than the RNN approach Google uses.
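The multi-hop idea can be sketched as repeated soft attention over the source words. Everything below, the vector sizes, the two hops, and the way each glance updates the query, is an invented toy to show the mechanism, not Facebook's actual architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, source_states):
    """One attention 'glance': weight source words by similarity to the query."""
    weights = softmax(source_states @ query)   # one weight per source word
    context = weights @ source_states          # weighted summary vector
    return weights, context

rng = np.random.default_rng(1)
source = rng.normal(size=(5, 4))   # 5 source words, 4-dim states (toy sizes)
query = rng.normal(size=4)

# Two "hops": each glance refines the query with what the last glance found,
# so the second pass can focus on different words than the first.
for hop in range(2):
    weights, context = attend(query, source)
    query = np.tanh(query + context)           # fold the summary back in
    print(f"hop {hop}: attention weights {np.round(weights, 2)}")
```

The weights always sum to one, so each hop is a soft choice of which source words to look at, mirroring the verb-first, noun-second glances described above.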

Facebook has one more huge advantage: it studies the real-time language of its users. Its CNN trains on real sentences that reflect actual language use, a significant edge over Google, which did much of its learning from protocols of the European Union. When you post on Facebook, you're training the social network to translate more accurately.

The Bottom Line on Machine Translation

Should human translators look for a new day job? Given the rapid pace of change in neural machine translation, one might think so. The reality is different. The translation engines of Facebook, Microsoft, and Google do well at producing approximate meanings.

NMT can assist in translating, while skilled linguists finish and polish the output. In conclusion, future translators will more often be working with artificial intelligence than against it.

Reuven Koret

R.J. Koret writes about language, psychology, and technology, with a special interest in AI, wordplay, and Freudian slips.