Discover Lilt's research in Machine Translation and Localization

The Lilt research team is driving the future of translation technology. Our research covers three areas:

- Online Adaptation of Neural MT Models
- Interactive Neural MT
- Human-Computer Interaction for Localization

Online Adaptation of Neural Machine Translation Models

Lilt's translation models adapt to translators as they work, updating parameters automatically with each sentence translated. This tight loop allows adaptation to specific document-level and project-level vocabulary, structural patterns, and idiosyncrasies. Our research team focuses on fast and effective adaptation of state-of-the-art neural machine translation models using methods that are efficient enough to support large-scale personalized neural machine translation.
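The adaptation loop described above can be illustrated with a deliberately simplified sketch. This is not Lilt's actual system (which updates neural model parameters); it stands in for the same idea with a per-project lexicon that is updated after each confirmed translation, so later suggestions prefer the translator's chosen terminology. All names here are hypothetical.

```python
# Hedged sketch of online adaptation (illustrative only, not Lilt's system):
# after each confirmed sentence, update a per-project lexicon so that
# later suggestions prefer the terminology the translator actually chose.

from collections import defaultdict

class AdaptiveLexicon:
    def __init__(self):
        # counts[source_term][target_term] -> times the translator chose it
        self.counts = defaultdict(lambda: defaultdict(int))

    def update(self, aligned_pairs):
        """Online update from one confirmed sentence (word-aligned pairs)."""
        for src, tgt in aligned_pairs:
            self.counts[src][tgt] += 1

    def suggest(self, src):
        """Return the translation most often confirmed in this project."""
        choices = self.counts.get(src)
        if not choices:
            return None
        return max(choices, key=choices.get)

lex = AdaptiveLexicon()
lex.update([("bank", "Ufer")])   # river-bank sense in this document
lex.update([("bank", "Ufer")])
lex.update([("bank", "Bank")])   # one financial usage
print(lex.suggest("bank"))       # most frequent choice wins: Ufer
```

A neural analogue replaces the count update with a gradient step on the confirmed sentence pair, which is what makes efficiency a research concern at scale.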

Simianer, Wuebker, and DeNero (NAACL 2019)

Measuring Immediate Adaptation Performance for Neural Machine Translation

Wuebker, Simianer, and DeNero (EMNLP 2018)

Compact Personalized Models for Neural Machine Translation

Wuebker, Green, and DeNero (EMNLP 2015)

Hierarchical Incremental Adaptation for Statistical Machine Translation

Interactive Neural Machine Translation

An interactive neural machine translation system that supports localization must do more than translate full sentences in isolation: it must make suggestions about what translators will type next in context, how they will transfer formatting from the source document to the target, and what edits will be performed by reviewers. Interactive systems must take termbases, translation memories, and contextual constraints into account for all of these suggestions. Our research team focuses on the full range of automatically generated suggestions that can improve the speed and quality of human localization work across translation, reviewing, and quality assurance.
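The next-word suggestion problem above can be sketched in miniature. A real interactive system completes the translator's typed prefix with a constrained neural decoder; this hypothetical stand-in does the same with translation-memory candidates, which is enough to show the prefix constraint itself.

```python
# Hedged sketch of prefix-constrained suggestion (illustrative only):
# given the translator's partially typed target text, complete it using
# candidates from a translation memory that share that prefix.

def complete_prefix(prefix, tm_candidates):
    """Return TM candidates that extend the typed prefix."""
    return [c for c in tm_candidates if c.startswith(prefix) and c != prefix]

tm = [
    "Die Katze schläft.",
    "Die Katze sitzt auf der Matte.",
    "Der Hund bellt.",
]
print(complete_prefix("Die Katze s", tm))
# ['Die Katze schläft.', 'Die Katze sitzt auf der Matte.']
```

In a neural decoder the same constraint is enforced during beam search: the typed prefix fixes the first target tokens, and the model searches only over continuations.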

Zenkel, Wuebker, and DeNero (ACL 2020)

End-to-End Neural Word Alignment Outperforms GIZA++

Zenkel, Wuebker, and DeNero (arXiv 2019)

Adding Interpretable Attention to Neural Translation Models Improves Word Alignment

Wuebker, Green, DeNero, Hasan, and Luong (ACL 2016)

Models and Inference for Prefix-Constrained Machine Translation

Human-Computer Interaction and Data Science for Localization

Lilt's human-in-the-loop approach to localization places both professional translators and artificial intelligence technology together at the core of our operations. A broad range of human-computer interaction problems arise in this setting, from text-editing interfaces to assigning translators to project workflows. Our research team focuses on interaction design across the Lilt platform and data science across Lilt's business and translator community.

Läubli, Simianer, Wuebker, et al. (arXiv 2020)

The Impact of Text Presentation on Translator Performance

Läubli and Green (Routledge 2019)

Translation Technology Research and Human-Computer Interaction

Green, Chuang, Heer, and Manning (UIST 2014)

Predictive Translation Memory: A Mixed-Initiative System for Human Language Translation

Green, Wang, Chuang, Heer, et al. (EMNLP 2014)

Human Effort and Machine Learnability in Computer-Aided Translation

Green, Heer, and Manning (CHI 2013)

The Efficacy of Human Post-Editing for Language Translation

Meet the Research Team

John DeNero

Spence Green

Joern Wuebker

Patrick Simianer

Geza Kovacs

Sai Gouravajhala

Hannah Yan

Thomas Zenkel

Gabriel Bretschner

Aditya Shastry

Jessy Lin

Ze'ev Shamir

Yunsu Kim