This is the third part in The Future of Language Technology series, which explores how language delivery is changing as a result of technological developments.
With more content being created than ever before, companies are under pressure to find ways to translate their copy quickly and efficiently. Machine Translation (MT) has grown increasingly popular, but it has been far from perfect; inferior quality has been a longstanding issue. However, you can expect improvements that will enhance your localization efforts.
We’ll explore the factors that are influencing Machine Translation and let you know how to best leverage the technology to benefit your localization strategy.
Machine Translation is the automated translation of source material into another language without human intervention. Although it is a relatively new concept for the general public, Machine Translation has been around for decades.
SYSTRAN was among the first companies to develop Machine Translation systems in the late 1960s. The company cooperated with the U.S. Air Force, which set out to translate intelligence material during the Cold War. The goal was to have machines translate content well enough for human translators to understand its meaning and easily improve upon the text. Early Machine Translation engines used rule-based methods, which meant they relied on rules developed by humans or from dictionaries to execute translations. Since that time, the language technology has evolved considerably.
Visit our Machine Translation thought leadership page for the latest trends on MT.
A major development in Machine Translation happened in the 1990s, when companies like IBM started to leverage statistical models that significantly improved translation quality. Statistical Machine Translation engines were a novel technology. These engines focused on the use of advanced statistical methods and vast amounts of data from the Internet to translate growing piles of content. Google would later deploy the technology on a large scale to try to make all human knowledge searchable.
Early Statistical Machine Translation engines were much better than rule-based engines but still made a lot of errors. So, companies began to experiment with Hybrid Machine Translation engines, which commonly combined Statistical Machine Translation with Rule-Based Machine Translation. These advancements popularized Machine Translation technology and helped adoption on a global scale.
In 2017, Machine Translation made another technological leap with the advent of Neural Machine Translation (NMT). Neural Machine Translation harnesses the power of Artificial Intelligence (AI) and uses neural networks to generate translations.
Unlike the aforementioned methods, neural networks try to mimic the thought process of a translator rather than “guessing” a probable outcome. The result is a much more natural-sounding translation that captures the meaning and nuance of the sentence more accurately. This development made Machine Translation good enough not only for the comprehension or gisting of large volumes of documents, but for regular, non-mission-critical business documents as well.
Neural Machine Translation has addressed some of Machine Translation’s longstanding shortcomings, including the poor readability of automated translations and its incompatibility with certain languages, such as Korean. Efforts to improve Neural Machine Translation are ongoing. To learn more about Neural Machine Translation, read our blog post Neural Machine Translation: How Artificial Intelligence Works When Translating Language.
Lionbridge’s R&D teams estimate that Neural Machine Translation is improving by 3-7% every year. Our experts quantify this improvement with a metric called the Edit Distance. The Edit Distance counts the number of edits a human must make to the Machine Translation output for the resulting translation to be as good as a human translation.
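One common way to compute such a metric is the Levenshtein distance between the raw machine output and the post-edited text. Below is a minimal Python sketch; it is a character-level illustration only, and the sample sentences are invented. Lionbridge’s actual measurement may operate on words or use a different normalization.

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance: the minimum number of
    # insertions, deletions and substitutions needed to turn a into b.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

mt_output = "The cat sat on mat."
post_edit = "The cat sat on the mat."
edits = levenshtein(mt_output, post_edit)
# Normalizing by the length of the final text lets you compare
# post-editing effort across segments of different sizes.
effort = edits / len(post_edit)
```

A lower normalized score over time is one concrete way to see an engine “improving by 3-7% every year.”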
Neural Machine Translation will continue to advance as the demand for translation services increases and as Machine Learning gets better at automatically training Neural Machine Translation engines.
Neural Machine Translation will be adopted at faster rates in the future as the amount of content requiring localization grows exponentially.
The COVID-19 crisis has accelerated the Digital Transformation of many businesses, which has fueled the need for more translation services. At the same time, content needs to be more targeted and diversified. These market conditions will drive Machine Translation to be used for parts of the content, with or without human translation supervision.
Human translation supervision is executed through Machine Translation Post-Editing (MTPE), a hybrid of Machine Translation and traditional human translation. Post-editing follows the Machine Translation process to improve the quality of the translated text. Our blog explores when to use Machine Translation Plus Post-Editing.
Companies can expect translation services to become more affordable, at least for some languages, as a result of Neural Machine Translation. These cost reductions will enable companies to increase the number of markets they target and help them get products to these markets faster.
As the adoption of Neural Machine Translation is accompanied by Digital Transformation within the global economy, a more competitive landscape will emerge. End users will increasingly expect to receive product information in their native language. It will become the norm, not the exception, for companies to meet this expectation in all their markets.
When it comes to automating translation, Machine Translation is not the only tool in the translation toolkit. Translation Memory (TM) has been an important precursor to Machine Translation, and it will continue to serve a purpose in localization. Machine Translation and Translation Memory often work together. However, the role of Translation Memory will shift.
Developed in the early 1990s, Translation Memory is a database of past translations that a company leverages to reduce the workload of translating new content.
Translation Memory technology is implemented through computer-aided translation (CAT) tools or a Translation Memory tool (TM tool). These tools allow multiple translators working on the same piece of content to use previously translated words or phrases across different pieces of the same content.
Historically, Translation Memories have played a crucial cost-saving role that is worth underscoring: by enabling companies to reuse past translations instead of paying to retranslate the same content, they reduce both cost and turnaround time.
Although Machine Translation and Translation Memory both work to automate the translation process, they differ from one another in substantial ways.
Since Translation Memories serve as a repository, or database, of past translations, they play a passive role in generating translations: they match whole sentences or sentence fragments against the source text. In contrast, Machine Translation is a much more sophisticated technology. It actively tries to predict the most likely translation for a source text by drawing on past translations and various natural language processing techniques.
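To illustrate the passive matching role of a Translation Memory, here is a toy sketch using Python’s standard difflib to score fuzzy matches against stored segments. The segment pairs and the threshold are invented for illustration; production CAT tools use far more sophisticated matching and scoring.

```python
import difflib

# Toy Translation Memory: source segments mapped to approved translations.
tm = {
    "Press the power button to start the device.":
        "Appuyez sur le bouton d'alimentation pour démarrer l'appareil.",
    "Save your changes before closing the file.":
        "Enregistrez vos modifications avant de fermer le fichier.",
}

def tm_lookup(source: str, memory: dict, threshold: float = 0.75):
    """Return ((stored_source, translation), score) for the best fuzzy
    match, or (None, score) if no stored segment clears the threshold."""
    best, best_score = None, 0.0
    for stored_source, translation in memory.items():
        score = difflib.SequenceMatcher(None, source, stored_source).ratio()
        if score > best_score:
            best, best_score = (stored_source, translation), score
    return (best, best_score) if best_score >= threshold else (None, best_score)

match, score = tm_lookup("Press the power button to start your device.", tm)
# A high-scoring "fuzzy match" is offered to the translator for review;
# a segment with no match falls through to MT or human translation.
```

The key point the sketch makes concrete: the TM only retrieves and scores what it has already seen, whereas an MT engine generates a translation for any input.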
These technologies are complementary to one another. Together, they bolster a translator’s ability to work faster and improve productivity. They also address quality issues like consistency of terminology. Both technologies are tightly integrated and work hand in hand to deliver higher quality translations.
In the last several years, companies have embraced the use of Machine Translation and Translation Memories for their translation. In taking this step, they have turned their attention to execution and the effective implementation of the technologies.
Since Machine Translation offers far greater efficiencies than Translation Memory, and by definition draws on some form of Translation Memory, the two technologies are increasingly blending. As a result, Machine Translation is becoming the main translation productivity tool and is being deeply integrated within many translation workbenches.
With the ascent of Machine Translation technology as the leading productivity tool in the translation and localization industry, the role of Translation Memories will change. Translation Memories will become more of a training tool for Machine Translation engines, rather than a simple database of translations.
Unsupervised Machine Translation, in which Machine Translation runs without human intervention, is best suited to relatively simple text with low visibility. Traditionally, it has been used for user-generated content such as reviews, forum posts and auction listings on sites like eBay. Depending on your quality expectations, content type and purpose, Machine Translation can do a decent job of translating simple, general business documents in some languages. Its increasing use enables more companies to enjoy benefits similar to those produced by Translation Memories, but even more pronounced.
It’s important to note that translators are a limited resource. Their ability to spend less time on certain assignments will free them up to work on more projects, which will put less strain on the market as a growing number of companies vie for their services.
Machine Translation can increase translators’ capabilities by 3-5 times in some cases, allowing for more content to be localized in a shorter amount of time. With increased productivity and reduced costs, companies will be able to translate more content into more languages.
When designing your content strategy and making decisions about which markets to pursue, make sure you are mindful of the efficiencies that will result from the use of modern Machine Translation and related technologies.
Partnering with an experienced Localization Service Provider (LSP) will help you implement Machine Translation to best achieve your desired outcomes. Importantly, the partnership will enable you to create and improve your content and develop your go-to-market strategy. Some LSPs, like Lionbridge, are increasingly moving into the digital marketing space to help companies manage their whole content journey, not just the localization piece.
A carefully planned and executed localization strategy—with guidance from a strong localization partner—will help you take advantage of all the benefits of Machine Translation technology. This will free more of your resources to create additional content and/or get it into more markets with the same budgets.
To learn more about our full range of Machine Translation services, download our Machine Translation whitepaper. If you are interested in ensuring that you have the right balance between Machine Translation, Translation Memory and human translation, reach out to us.
We’re eager to understand your needs and share how our innovative capabilities can empower you to break barriers and expand your global reach. Ready to explore the possibilities? We can’t wait to help.