AI Talk Series, Episode 1: LLM — Why is the time for AI now?
Welcome to the first blog post of our AI Talk Series, where we’ll be sharing AI insights and predictions from LILT’s co-founders and experts. Over the next few weeks, you can expect to gain a deeper understanding of large language models, upcoming trends in the localization industry, and the business implications of generative AI. Let’s dive right in!
A little background on our experts: LILT’s founders, Spence Green and John DeNero, met at Google while working on the Google Translate program. As researchers at Stanford and Berkeley, they both have experience using natural language technology to make information accessible to everyone. They were surprised to learn that Google Translate wasn’t used for enterprise products and services inside the company, and they left to start their own company, LILT, to address that need.
LILT’s AI technology is built on foundations similar to ChatGPT and Google Translate, extended with our patented contextual AI, connector-first approach, and human-adapted feedback. We sat down with Spence and John to learn more about large language models and their thoughts on AI.
What has ChatGPT done to the industry since its introduction a few months ago, and is anything really different now?
John: There has been a lot of change in the last few years. I'm not sure there's been a lot of change in the last few months, but over the last few years, we've seen one of the main goals of AI finally realized: a system that can learn from data without us constraining much of what it learns. It's amazing. Every time you do something intelligent, you have to have a representation of the world that is relevant to the thing you're trying to do. And now we have computers that can accurately pick their own representation in order to do whatever we ask of them.
Spence: You know, with ChatGPT in particular, the GPT-3 paper came out in 2020, so building these large-scale language models isn’t some breakthrough from the last two months. For most of the last 10 years, we built really task-specific systems and models. Now, with deep learning, you can use a more general learning framework for a lot of different tasks, which is an enormous change from five or six years ago.
John: I think there was a big innovation with ChatGPT. It just wasn't so much an AI innovation as an interface innovation: setting it up so that people could interact with it in a way they understood and were comfortable with, without much training.
How do you define large language models (LLMs) and generative AI?
John: Generative AI is a computer system that can generate things that previously we thought only people could generate: art, photographs, essays, and translations. So what the word ‘generative’ means is that it can create the kinds of things that human expressivity and creativity were formerly required to create.
Anytime you're writing a whole document or generating a whole image, you're doing generative AI. And the thing that makes it AI is that the outputs are different from any of the examples that it's seen before because it’s doing that synthesis. Translation is definitely generation—it's just pretty constrained about what the results should look like.
A large language model has observed vast amounts of text, usually from the web, and is set up to synthesize new text based on the patterns that it's observed.
And it can really do two things. It can do something akin to reading, which is that it can encode the text you provide in order to figure out the context for generating new text, and then it can generate new text.
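To make that concrete, here is a minimal sketch of the encode-then-generate loop John describes, using the open-source Hugging Face transformers library and a small GPT-2 checkpoint purely for illustration (this is not LILT’s production system):

```python
# Minimal sketch: a language model encodes a prompt (the "reading" step)
# and then generates a continuation (the "writing" step).
# Illustrative only; uses the open-source GPT-2 model, not LILT's technology.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "A large language model is a system that"
result = generator(prompt, max_new_tokens=40, do_sample=True)

print(result[0]["generated_text"])
```

The same pattern scales up: larger models observe more text and pick up richer patterns, but the interface stays the same, context in, new text out.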
Can you explain the difference between general and specific AI and how that applies to something like ChatGPT vs. LILT?
John: A translation model looks just like a large language model, except that the input content is in a different language than what gets generated. And it happens that translation models are good at one particular thing, which is translation, whereas language models can be used for lots of different things, because you can put any question on the input and it will generate an answer.
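As a rough illustration of that contrast (again using small open-source models rather than LILT’s systems), a task-specific translation model maps text from one language into another, while a general language model will take any prompt and generate a continuation:

```python
# Contrast between a task-specific model and a general one.
# Illustrative open-source models from Hugging Face; not LILT's technology.
from transformers import pipeline

# Task-specific: English-to-French translation, and nothing else.
translator = pipeline("translation_en_to_fr", model="Helsinki-NLP/opus-mt-en-fr")
print(translator("The report is due on Friday.")[0]["translation_text"])

# General: put any question on the input and it generates an answer.
generator = pipeline("text-generation", model="gpt2")
print(generator("Q: When is the report due?\nA:", max_new_tokens=20)[0]["generated_text"])
```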
What does the future look like, and why are you excited? What can we expect LLMs and generative AI to look like in five years?
John: If I think beyond LILT, there are cases where people's work divides into the really interesting part where they need their expertise to do it, and then there's some amount of just drudgery of having to write up a report about what they did. It almost feels like a machine could do it—if only the machine could write text. Now we have machines that can write text, and so it might be the case that somebody in that position could spend more of their time doing the stuff that they actually care about. For example, a doctor could spend more time meeting with patients and less time writing up medical reports afterward because part of that would be assisted through AI.
Spence: I think one of the things I'm excited about within LILT, which is something we talked about a long time ago and which motivated the work we did on grammatical error correction, is getting rid of the concept of a style guide and instead learning from data and previous text that a business has generated. It would be cool if the system could read past English texts and say, “This is the style guide, and this is how we generate it in French.”
* * *
Thanks for chatting, Spence and John! As businesses continue to embrace the change and opportunities that lie ahead, it will become increasingly important for global teams and leaders to invest in AI technologies to remain competitive. Tune in for the next episode of our AI Talk Series for a deeper exploration of AI, large language models, and their impact on the translation industry.