Discover Lilt's research in machine translation and localization
The Lilt research team works to drive the future of translation technology.
Online Adaptation of Neural Machine Translation Models
Lilt's translation models adapt to translators as they work, updating parameters automatically with each sentence translated. This tight loop allows adaptation to specific document-level and project-level vocabulary, structural patterns, and idiosyncrasies. Our research team focuses on fast and effective adaptation of state-of-the-art neural machine translation models using methods that are efficient enough to support large-scale personalized neural machine translation.
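The per-sentence update loop described above can be sketched in miniature: after each confirmed translation, the model takes one gradient step on that single example, so document-specific signal is absorbed immediately. The toy linear model, feature vectors, and learning rate below are illustrative assumptions for the sketch, not Lilt's production system.

```python
def online_adapt(weights, features, target, lr=0.1):
    """One SGD step on a single (features, target) pair.

    Mirrors the per-sentence adaptation loop: each confirmed
    translation immediately nudges the model's parameters.
    (Toy linear model with squared error; illustrative only.)
    """
    prediction = sum(w * x for w, x in zip(weights, features))
    error = prediction - target
    # Gradient of squared error w.r.t. each weight is error * feature.
    return [w - lr * error * x for w, x in zip(weights, features)]

# Toy loop: weights drift toward the document-specific signal
# carried by each newly confirmed sentence pair.
w = [0.0, 0.0, 0.0]
for x, y in [([1.0, 0.0, 1.0], 2.0),
             ([0.0, 1.0, 1.0], 1.0)]:
    w = online_adapt(w, x, y)
```

In a neural system the same loop applies, with the single sentence pair serving as a one-example fine-tuning batch.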
Measuring Immediate Adaptation Performance for Neural Machine Translation
An interactive neural machine translation system that supports localization must do more than translate full sentences in isolation: it must make suggestions about what translators will type next in context, how they will transfer formatting from the source document to the target, and what edits will be performed by reviewers. Interactive systems must take termbases, translation memories, and contextual constraints into account for all of these suggestions. Our research team focuses on the full range of automatically generated suggestions that can improve the speed and quality of human localization work across translation, reviewing, and quality assurance.
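One concrete form of in-context suggestion is prefix-matched completion against previously confirmed translations. The sketch below is a hypothetical stand-in for a translation-memory or termbase lookup, not Lilt's actual API: it ranks stored target segments by whether they extend what the translator has typed so far.

```python
def suggest_completions(typed_prefix, memory, limit=3):
    """Return stored target segments that extend the typed prefix.

    `memory` stands in for a translation memory / termbase lookup;
    in an interactive system the model's own hypotheses would be
    constrained and ranked the same way.
    """
    prefix = typed_prefix.lower()
    matches = [seg for seg in memory if seg.lower().startswith(prefix)]
    # Prefer shorter completions: less for the translator to review.
    return sorted(matches, key=len)[:limit]

memory = ["Betriebsanleitung", "Betriebssystem", "Bedienfeld", "Betrieb"]
suggestions = suggest_completions("Betr", memory)
```

A full interactive system would combine such lookups with prefix-constrained decoding from the translation model itself.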
End-to-End Neural Word Alignment Outperforms GIZA++
Human-Computer Interaction and Data Science for Localization
Lilt's human-in-the-loop approach to localization places professional translators and artificial intelligence together at the core of our operations. A broad range of human-computer interaction problems arise in this setting, from text-editing interfaces to assigning translators to project workflows. Our research team focuses on interaction design across the Lilt platform and data science across Lilt's business and translator community.
The Impact of Text Presentation on Translator Performance