Archive.fm

Localization Fireside Chat

Unveiling Aurora AI(TM): Revolutionizing Localization with Lionbridge CTO Marcus Casal

Duration:
49m
Broadcast on:
09 Jul 2024
Audio Format:
mp3

(upbeat music)

- Well, good afternoon, everybody. This is Robin, coming at you from the Localization Fireside Chat, and welcome to another recording. Today we are recording episode number 75, and I'm honored and have the pleasure to have with me Marcus Casal, who is the Chief Technology Officer for Lionbridge. Marcus, welcome to the channel. Good to see you, as always. And I'm excited to have you with me today because we want to talk about Aurora AI, which has been the buzz on social media and in the media in general, the launch of Aurora AI by Lionbridge. You are behind all this, you and your team, obviously, because we all work with teams. And I want to make sure that we first introduce you to the audience. Over the year and a half that we've been live now, we've gathered quite a bit of an audience. If you don't mind, introduce yourself and tell us your localization story. Everybody's got a story, so you must have an interesting one. I can't wait to hear it. Go for it.

- Thank you very much, Robin. It's really a pleasure to be here. I don't know how interesting my story is, but indeed everyone has a story. Like a lot of people, I think I got into localization in some ways by accident, right? I've always been interested in languages. I grew up speaking two languages natively, growing up in Miami in a Cuban American community, so I grew up speaking Spanish and English. I studied languages in university, but I was always interested in computer programming as well. And those two things came together very interestingly when I was in graduate school and a friend of mine said, "Hey, I got a job at this localization company." It was a precursor to a precursor to a precursor to one of the big localization companies now. And I said, "Huh, what's that?" "Well, this is what we do." And I said, "Huh, that sounds kind of interesting." And many years later, maybe more years than I care to admit, here I am. Before localization, I worked as a software developer. In localization, I've worked as a localization engineer, I've managed engineering teams, and I've had a pretty large operational role. And then I moved into the technology side, helping to build the technology, not just use it, which became another dimension to it. So I've led product and development teams at Lionbridge and at several of our competitors, and ultimately I've run technology at a couple of localization companies.

- Well, excellent, and thanks for the introduction there, Marcus. You're the epitome of starting from somewhere in the localization industry and growing through that process of loving what you do, obviously, and trying to find better or more exciting jobs in the same industry that you like and you love. One of the observations I have from the 75 episodes that I've done, and I'm not sure if you've noticed something like this, is that there are three categories of how people come into the industry. One is by accident; two is on purpose, where they make a conscious decision to go into the industry. And I was alerted to a third one a few weeks ago: it's called "accidentally on purpose." And it sounds like that's what happened there. Now, being a CTO for a language company, tell me what this is about. Tell the audience what it's like to be a Chief Technology Officer for a language organization or language industry company. I'm not sure we can even characterize Lionbridge purely as a language industry company anymore; in some aspects, we've grown into a variety of facets.
- I mean, we're a global content lifecycle company in many ways. We create content, we transform it, including into multiple languages. We do that in multiple domains: gaming, traditional content, life sciences, tech. And yeah, that's really the segue into it. It's the best job in the world, right? It's really amazing, because if you look at who our customers are, we work with some of the most innovative, creative companies on the planet, who are passionate about having these global conversations with their own users, and we're enabling that. So I get to work with large tech customers, with very specialist life sciences customers, with gaming customers, and really support that global conversation and that global engagement. Because it's not always about the conversation; it's also about the functionality, about making things work globally. So as far as I'm concerned, it's the best job on the planet, and no, no one else can have it. I love it.

- Now, I want to address the elephant in the room, because some people listening to this might think, "I envy you being a CTO," and some other people would say, "I don't want to be in your seat." In this ever-evolving technology world that we're living in, how do you keep it all together?

- So that's a really good question. One of the things you mentioned in your introduction is that I've got a great team, right? There is no hero theory; none of us are individual heroes. Sure, there's work I've done that I'm proud of, and there's work I've seen you do, having worked with you for years, that I know you're very proud of, but it's a team sport, like so much else in life. And the reason we keep it together is that we have a really good team. I've got the privilege at Lionbridge of running our development teams and our product teams as well as our infrastructure, our classic IT organization, working on things from AI to our cloud strategy. So there's a lot of ground to cover, and we have really good teams distributed globally, as is necessary in our business to support our customers. I couldn't keep on top of it all myself. Fortunately, I have a good team, and we all do it together.

- And I appreciate you guys. Any time any one of the users needs help or is trying to figure out a customer solution, you guys step in and help customers achieve what they want to achieve in their desired solution. Now, the topic of today: I want to talk to you about the recently launched application, or software, by Lionbridge. The name is Aurora AI, and it was all over the media, social media, et cetera. For me, as a member of Lionbridge who is at the same time managing and running this Localization Fireside channel, it's a proud moment, to be honest with you, to see Aurora AI being launched by Lionbridge. So for those who don't know, can you give us at least a brief introduction, or an introduction of sorts, to Aurora AI? What is Aurora AI?

- Yeah, that's a great question. If you think about it, translation has existed since humans first began to speak multiple languages, right? Translation is nothing new. Our industry, localization and global content lifecycle management, has evolved because of technology. And having been in this industry a while, as you and I both have, we've seen waves of this technology come. I'm old enough to remember when translation memory came onto the scene and what a transformative change that was.
Now I can reuse what I've translated previously, I don't have to translate from scratch every time. Oh gosh, but what if that previous translation wasn't any good? Now I'm stuck with it, right? Can I not deviate from it, or am I losing creativity? So there have always been these conversations. One of the things we really decided is that we needed to build a whole new platform, and that's what Aurora fundamentally is: a whole new platform for executing global, cross-language content transformation work, really for two big reasons. One is that workflow has existed in this industry for quite some time. It's been pretty mature for at least 15 years, and it's been around in some form or another since the first globalization management systems almost 20 years ago at this point. But these were very linear, serialized workflows: I begin with some source content and then proceed through a series of steps, turning it into multiple target-language contents. And if you think about the needs of today, especially with the rise of LLMs, it isn't about linear workflow as much as it is about orchestration. Being able to take multiple inputs and decide the best path in as automated a way as possible. Not just step one, step two, step three, step four, but depending on the data at step three, for example, maybe I can skip to step six, or maybe I can go off in another direction entirely. So the ability to really orchestrate. And this, for example, is very similar to what's happened in the manufacturing sector, where a lot of very sophisticated manufacturing companies can now offer us very personalized experiences. Some of the biggest shoemakers out there can now personalize my running shoes or my trainers for my needs. What an amazing capability that is, and it's powered by this very similar type of orchestration technology that can take multiple inputs: What do I have in stock? What does the customer need? And put it all together into the right solution in a way the traditional linear TMSs couldn't. So we wanted to jump on that orchestration and lifecycle revolution. The other part that I mentioned was large language models. I think the whole world has spent the last year and a half consumed by GPT-4. And as always, there's tremendous interest. There are some people with irrational exuberance around it, and some people fall into a kind of valley of despair. But at the end of the day, this is truly transformative technology. LLMs are incredibly powerful models that can do any number of things. Now, we've had very sophisticated language models before; it's called machine translation. Machine translation is very powerful, but it can do one thing: turn stuff in one language into stuff in other languages. LLMs can produce content. They can turn stuff in one language into stuff in other languages. They can tell you something about content. They can do so much more. And in order to really leverage the power of the LLMs, we also realized we needed a much more modern execution platform. So the combination of orchestration, which gives you an incredibly powerful infrastructure to get stuff done, and LLMs, which can now give you input into what you should get done, the quality of what you have in front of you, and what your next step should be: put those together, and we decided to launch Aurora.
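To make the orchestration idea concrete, here is a minimal sketch, not Aurora's actual implementation, and with every step name and rule invented for illustration, of the difference Marcus describes: instead of a fixed step 1 → step 2 → step 3 pipeline, each step returns data, and a small routing rule decides which step runs next, so some steps can be skipped entirely.

```python
# Hypothetical sketch of data-driven orchestration (not Aurora's real code).
# Each step receives and returns a shared "job" dict; a router inspects the
# data produced so far and decides which step runs next, so steps can be
# skipped instead of always running a fixed linear sequence.

def analyze(job):
    # Pretend analysis: short, simple strings are assumed MT-friendly here.
    job["mt_suitable"] = len(job["source"].split()) < 50
    return job

def machine_translate(job):
    job["target"] = f"[MT:{job['target_lang']}] {job['source']}"
    return job

def human_translate(job):
    job["target"] = f"[HT:{job['target_lang']}] {job['source']}"
    return job

def human_review(job):
    job["reviewed"] = True
    return job

def deliver(job):
    job["delivered"] = True
    return job

STEPS = {
    "analyze": analyze,
    "machine_translate": machine_translate,
    "human_translate": human_translate,
    "human_review": human_review,
    "deliver": deliver,
}

def route(step_name, job):
    """Decide the next step from the data, instead of a fixed sequence."""
    if step_name == "analyze":
        return "machine_translate" if job["mt_suitable"] else "human_translate"
    if step_name == "machine_translate":
        return "human_review"          # MT output still gets a review here
    if step_name == "human_translate":
        return "deliver"               # skip the extra review step entirely
    if step_name == "human_review":
        return "deliver"
    return None                        # "deliver" ends the flow

def run(job):
    step = "analyze"
    while step is not None:
        job = STEPS[step](job)
        step = route(step, job)
    return job

print(run({"source": "Click Save to store your changes.", "target_lang": "fr"}))
```

A real orchestration engine expresses these routing rules declaratively rather than in hand-written code, but the skip-ahead behavior is the same idea.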
- So I'd be remiss if I didn't ask this one. Faced with developing a large project like this, Marcus, everybody in the world is faced with answering the question: build it versus buy it. What was the thought process around that, and what did you decide to do eventually?

- That's a great question. And you know, my answer is always going to be "both," right? In any modern development organization, you have to balance build versus buy. And whenever you can buy something, whenever you can partner with a best-of-breed solution, something that can do it better, faster, cheaper, more effectively than you can, you absolutely should. So for example, we built Aurora on top of a very powerful automation and orchestration platform called Camunda. They've spent well over a decade building out this solution; we weren't going to reinvent the wheel. We built on top of that and we partnered with them. Even more importantly, when it comes to the LLMs, we didn't try to create our own models. We didn't even go down the path of, hey, let me get some open-source Hugging Face models and fine-tune them. We've partnered very, very closely with Microsoft, with Microsoft Azure OpenAI, which uses the GPT models, because between OpenAI and the backing of Microsoft, they have poured billions into creating these models. We were not going to create something better than that, so we partnered with them to deliver that solution. On the other hand, when it comes to solving our own customers' last-mile problems: with LLMs, you have the same challenge, in some ways, that you had with machine translation. The people who make the models want to create the best solution for as many people as they possibly can. They don't want to customize it for customer A's needs or customer B's needs or customer C's needs, because that defeats their purpose of trying to create the best baseline option out there. So the ability for us to say, okay, for this customer's needs, I want the orchestration to look like this, I want this workflow versus that workflow, I want to invoke the LLMs to post-edit the work of machine translation, or I want to invoke the LLMs using chain-of-thought approaches to generate multilingual content itself: those kinds of customer-specific and use-case-specific solutions, that's what we're good at. So that's what we've built, partnering on top of commercial solutions from people like Camunda and Microsoft; it's all hosted in Azure, again. You really have to know when to partner with best-of-breed solutions, where you're leveraging the value that they've created, and decide where you add value. Where is our right to play? And we've decided that it's there: marrying best-of-breed technology with our intimacy with the customer's needs, and with their own users' needs, to deliver the right solution.
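As an illustration of the "invoke the LLMs to post-edit the work of machine translation" pattern Marcus mentions, here is a hedged sketch using the Azure OpenAI service he references. The deployment name, prompt wording, and environment variables are placeholders, not Lionbridge's actual configuration.

```python
# Hypothetical sketch: asking an Azure OpenAI GPT deployment to post-edit an
# MT segment. Endpoint, deployment name, and prompt are illustrative only.
import os
from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

def post_edit(source: str, mt_output: str, target_lang: str) -> str:
    """Return a lightly post-edited version of a machine-translated segment."""
    response = client.chat.completions.create(
        model="gpt-4o",  # the Azure *deployment* name in a real setup
        temperature=0,
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a professional post-editor. Correct mistranslations, "
                    "grammar, and terminology in the machine translation. Keep the "
                    "meaning of the source and change as little as possible."
                ),
            },
            {
                "role": "user",
                "content": (
                    f"Source (en): {source}\n"
                    f"Machine translation ({target_lang}): {mt_output}\n"
                    f"Post-edited translation ({target_lang}):"
                ),
            },
        ],
    )
    return response.choices[0].message.content.strip()

print(post_edit("Click Save to store your changes.",
                "Cliquez sur Sauver pour stocker vos changements.", "fr"))
```

In a real deployment, a call like this would be one node in the orchestrated workflow rather than a standalone script, and the prompt would be tuned per customer and per content type.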
- And that brings me to another point in the journey here. If I was a customer from the outside looking in, and I saw the announcement of Aurora AI on LinkedIn, in the press, et cetera, the first question I'm going to be asking myself as a customer is: what is Aurora AI's benefit to me, either as an existing customer or as one considering doing business with Lionbridge? Can you recap some of those?

- Absolutely. So this is the most modern, state-of-the-art language production platform in the industry. I say that not with arrogance, but simply because we've built it using best-of-breed development methodology. So for our customers, you've got a very robust, very scalable, very resilient solution, which means you're going to get your stuff back more quickly and predictably, because the technology works. It's also highly secure, because it's under the Microsoft Azure umbrella, so we have the combination of our own security features as well as what's afforded by Microsoft. So it is very, very good technology, and knowing that your partner is using good technology is quite positive. But even more than that is our ability to really create solutions that are optimized, not necessarily customized, right, but optimized for each customer and their use case. Again, using that orchestration approach, we can do things like automatically decide whether content is suitable for machine translation given the domain and the complexity. We have a bunch of analytics that we can run on content, and based on those analytics, you then make a decision. Aurora has a decision engine, a very good one, built into it, so that I can say, well, if the analytics indicate that this is suitable for machine translation, I can go down this pathway; if they indicate it's not, I can justify why not, and there's empiricism behind that. It might be a question of the lexical density of the content, it might be a question of the terminology: some of those you may want to change, some of those you may not, but we then have empiricism behind the decision to go down a different approach. So we can make very, very smart decisions, always driven by data and empiricism, so that we can meet our customers' needs, as opposed to just saying, okay, everything is going to go through a conventional translation, edit, proof step as we were doing 20 years ago. Not all content is the same, and it doesn't merit the same workflow.
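To ground the "analytics plus decision engine" idea, here is a toy sketch of the kind of empirical signals Marcus describes, lexical density and terminology hits, feeding a routing decision. The metrics, thresholds, and route names are invented for illustration and are not Aurora's actual rules.

```python
# Illustrative only: crude content analytics feeding a routing decision.
# A real system would use far richer signals; thresholds here are invented.
import re

STOPWORDS = {"the", "a", "an", "to", "of", "and", "or", "in", "on", "is", "are", "for"}

def analyze(text: str, term_base: set) -> dict:
    tokens = re.findall(r"[a-zA-Z']+", text.lower())
    content_words = [t for t in tokens if t not in STOPWORDS]
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "lexical_density": len(content_words) / max(len(tokens), 1),
        "avg_sentence_len": len(tokens) / max(len(sentences), 1),
        "term_hits": sum(t in term_base for t in tokens),
    }

def decide_route(metrics: dict) -> str:
    """Toy decision rules: dense or long-sentence content gets more human care."""
    if metrics["lexical_density"] > 0.75 or metrics["avg_sentence_len"] > 30:
        return "human_translation"
    if metrics["term_hits"] > 0:
        return "mt_plus_full_post_edit"   # terminology needs careful checking
    return "mt_plus_light_post_edit"

terms = {"firmware", "torque"}
m = analyze("Tighten the bolt to the specified torque before updating the firmware.", terms)
print(m, "->", decide_route(m))
```

The point is not the specific thresholds but that each routing decision is backed by measurable properties of the content, which is the "empiricism" Marcus refers to.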
- Absolutely, you're right, not all content is the same. And one of the biggest things our industry struggles with is that every customer is different. As much as we try to bring standardization to the type of work we do in the localization industry in general, if you have a hundred customers, you have a hundred models, and each customer maybe has another hundred model subsets under that. It must have been a challenge designing software like Aurora AI to accommodate that fluidity, if you will, that agility around designing solutions to meet a specific need within a customer subset or a specific industry or a specific customer set. How did you manage to think around the tremendous number of solutions you have to create now? I mean, this is not specific to Lionbridge; everybody is running into the same thing, right? So how did you manage to do that?

- That's a really good question. I think the most precise way to answer it is: you don't. Rather than trying to build every solution into your software, what you do is make sure your software is built on a standard that can accommodate whatever solutions come along. In some ways, it's similar to human language, natural language. The language you and I are speaking right now, English, has been used to produce sonnets by Shakespeare, it's been used to produce highly technical documentation for manufacturing, and it's used to produce compelling marketing content. It's the same language; it's a function of how you use it, how you assemble it for different use cases. And with Aurora, we've built it all, again partnering with Camunda, around what's called BPMN, Business Process Model and Notation, which is a specialist variant of XML that functions as the instruction set. So as long as you tell it, here's what I'm trying to do, I'm trying to compose a sonnet versus technical documentation, and you encode that properly, and that's where process modeling and process optimization come into play, the system can output whatever you want. So rather than me or my development team trying up front to think about every possible use case... You know, Robin, if you go and get us a new customer in Canada who needs something new: they need marketing content, they need these review steps, this file format, integration with this repository on the customer side. Instead of "Well, sorry, I have to go build that from a development perspective, get back to me, I'll get to it in a few sprints," we can say, yeah, no problem. We know what those building blocks are, we encapsulate them into an instruction set, and the tool can do that. So we don't have to anticipate every need up front. We have to anticipate having the instruction set, the ability, as it were, to consume recipes depending on what our customers need, to give the right output.

- And sorry for the crude analogy, but I think it's more like picking those instructions from a shelf, a library that you've built already. Am I--

- Absolutely. Absolutely so. No, it's a very good analogy. A lot of this exists, and we're assembling it together. And then when we do have to build something new, we do; we retain that capability. But as much as possible, it's about assembling things into what each customer needs.
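Robin's "shelf" analogy can be sketched as configuration-driven assembly: the building blocks live in a registry, and a customer "recipe" is just data that names which blocks to run. This is a hypothetical illustration in plain Python with invented block and recipe names; in Aurora the equivalent instruction set would be expressed in BPMN and executed by the Camunda engine, not hand-rolled like this.

```python
# Hypothetical sketch of "recipes" assembled from reusable building blocks.
# Block names and recipes are invented; a real system would encode this in BPMN.

def extract_text(job):      job["text"] = f"extracted<{job['file']}>"; return job
def machine_translate(job): job["text"] = f"mt<{job['text']}>";        return job
def llm_post_edit(job):     job["text"] = f"pe<{job['text']}>";        return job
def human_review(job):      job["text"] = f"review<{job['text']}>";    return job
def push_to_repo(job):      job["delivered"] = True;                   return job

# The "shelf": every block we already know how to run.
SHELF = {
    "extract_text": extract_text,
    "machine_translate": machine_translate,
    "llm_post_edit": llm_post_edit,
    "human_review": human_review,
    "push_to_repo": push_to_repo,
}

# Customer recipes are pure data -- adding one needs no new development.
RECIPES = {
    "marketing_canada": ["extract_text", "machine_translate", "llm_post_edit",
                         "human_review", "push_to_repo"],
    "ui_strings_bulk":  ["extract_text", "machine_translate", "llm_post_edit",
                         "push_to_repo"],
}

def run_recipe(recipe_name, job):
    for block_name in RECIPES[recipe_name]:
        job = SHELF[block_name](job)
    return job

print(run_recipe("marketing_canada", {"file": "launch_page.xliff"}))
```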
- So one of the things you spoke about earlier, Marcus, is the AI decision capability within Aurora AI. One of the inherent features of AI is the ability to learn. Has this been built into Aurora, the learning ability? Let's say I've got a process A and I'm running process A through Aurora AI, and now the AI is handling some of the decision making. As the process changes, I'm assuming the AI is learning those behaviors. Is it, or...?

- Yeah, that's a really, really fascinating area. The answer is yes, but not enough yet; there'll be more. Because that idea of having a feedback loop between what the system has decided and a downstream outcome, or downstream something, is one of the most important areas we're working on now that we have the baseline system built. You always want to be very careful around signal to noise and make sure the system is taking the right data as input for the future. But we are now opening up that space, as it were. And this is the part that to me is fascinating, and this isn't just me or Lionbridge: if you follow what's happening in AI, everyone who really knows this intimately says, look, the machines aren't coming for everyone's jobs. The machines are going to come for the jobs of people who don't know how to use AI. By being able to automate decisioning that previously had been done by humans, we're now freeing up the humans to take that next step, which is what you just mentioned: okay, what downstream data can I then feed back into the machine to help it improve itself in the future? Before, humans were too busy doing every routine task and every routine decision. If these conditions are met, then I have to do a human review. If they're not met, then I don't do a human review, unless it's August, in which case my reviewers are on holiday, so then I'm not doing a human review, right? That kind of complex matrix of decisioning has traditionally been done by humans, because the tooling we had wasn't really robust enough to do it for them. Now that it is, we're freeing up the human to say, okay, this told me this was good enough for machine translation. Does the output correlate to that? Yes, okay, cool. Or no, it didn't, so what was wrong with it? So yes, there is some feedback loop. For example: I expected this, did I get that, and then I send a signal further back into the system. But to me, one of the most fascinating things is freeing up the creativity of the humans that we have. Because the humans, at the end of the day, are the ones who bridge knowing the technology, knowing languages, which is something we're very good at, and knowing our customers, in ways that help make the system better. And by taking a lot of the routine tasks off their plate, I'm freeing them up for that higher-value analytic and cognitive work, and I think the sky's the limit on how we're going to help improve the system. It's not about the machines taking over; it's about the humans working very, very synergistically with them. This isn't a utopian dream; this is a real example. The humans are helping the machines make better decisions all the time.

- Absolutely. And you know, we always talk on the channel about the human in the loop, and there's a lot of gloom and doom out there that there's no role for the human after AI reaches, you know, full-fledged deployment, if you will. But we keep talking about the human in the loop on this channel quite a bit. Well, the software, Aurora AI, is in production based on the announcement; it's in use. Do you have any success stories you can share, Marcus? You don't have to name the customer, but maybe some of the success factors where you feel like, wow, this is good.

- Yeah, I certainly can. I mean, again, this release was relatively recent, so we're still in the early days compared to, say, workflow, which has existed for 20 years. We certainly don't have 20 years of data on Aurora. But among the things I find really important is that when it comes to purely automated activities, Aurora is significantly faster than what we were doing previously. Something that may have taken a few minutes now takes a few seconds. And you might say, well, what difference does that make? In the aggregate, if you think about typical workflows that frequently have dozens, and in some cases hundreds, of steps, if I'm shaving minutes down to seconds off a large number of those steps, in the aggregate I'm saving quite a bit of time overall. I'm also doing it in a way that is far more resilient than what we were doing before. I mean, all software has bugs, right? That's a truism of building software. And in a production system, bugs, and issues more generally, get manifested as exceptions: the system is telling you, hey, something happened that wasn't supposed to happen, I'm going to log that as an exception. And we've had a reasonable number of exceptions with traditional workflow tools. With Aurora, that number is a fraction of what it was previously, because we have more modern, resilient error handling, retry logic, et cetera.
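The "modern error handling, retry logic" point follows a standard pattern: transient failures are logged as exceptions and retried with backoff rather than stopping the workflow. The sketch below is that generic pattern, not Aurora's actual error-handling code, and the connector and job fields are hypothetical.

```python
# Generic retry-with-exponential-backoff pattern (illustrative, not Aurora's code).
import logging
import random
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("workflow")

def run_step_with_retry(step, job, max_attempts=4, base_delay=0.5):
    """Run a workflow step; log each exception and retry with backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return step(job)
        except Exception as exc:  # a real system would catch narrower error types
            log.warning("step %s failed (attempt %d/%d): %s",
                        step.__name__, attempt, max_attempts, exc)
            if attempt == max_attempts:
                log.error("step %s exhausted retries; surfacing exception", step.__name__)
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))

def flaky_connector(job):
    # Simulated transient failure, e.g. a remote repository timing out.
    if random.random() < 0.5:
        raise TimeoutError("connector timed out")
    job["synced"] = True
    return job

print(run_step_with_retry(flaky_connector, {"id": 42}))
```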
So what I'm seeing is a system that is faster, a system that is more resilient, and again, a system where, when something new is needed, when a customer needs a new recipe, I can assemble it from the things I have on my shelf and in my pantry, to use your analogy, as opposed to having to say, well, let me go and build this, and let me take a few sprints and see how long it takes. So the early data we're seeing is software that's better, faster, and more resilient, and it empowers us to more effectively solve our customers' needs, which at the end of the day is what we're in business to do.

- Absolutely. Marcus, would Aurora AI, in your opinion, be deployed across all divisions of Lionbridge or specific divisions?

- Yeah, I mean, yes, right? We're doing this incrementally, of course, and we have our phasing and the kind of stuff you would expect with any sort of global, complex technology deployment plan. But yes, part of the value comes from scale. And this ties in with one of the broader trends: this whole industry has evolved from a very craft business. And again, you are old enough to remember, or experienced enough, I should say, experienced enough, Robin, to remember when you had these small craft shops of translators and project managers and localization engineers figuring out what each customer needed. And we often did great work under those models, absolutely great work. But gosh, was it hard to scale, because what you did for one customer wouldn't necessarily translate to another customer. And what we see now is, again, think of the way some of the biggest consumer brands in the world can hyper-personalize your athletic shoes nowadays. They don't do that with craft teams of cobblers; they do it with incredibly powerful industrial automation software powering a global supply chain and manufacturing base. And that's the promise of Aurora; that's what Aurora enables. So at the end of the day, size does matter. We're one of the biggest companies in our space, and we think that the value of scale, of being able to do things globally and predictably, is part of the value. So yes, we want to scale Aurora out across the entire company, and we've got a plan to do so.

- So one of the questions we deal with on this channel is re-skilling and retooling the teams that are going to be using these new tools, this innovative new software we're putting out there, and Aurora AI is one of them. Could you speak to training the users, or how accepting were they of the new software? Was it difficult to train? What was the general reaction to the new software from the staff internally? How did they take it?

- Yeah, that's a great question. I'm going to answer it in two ways, Robin. One is, by and large, what most users care about is: can I achieve my business goals, the thing I need to do on behalf of my customer? And if, again, it's more resilient and it's faster, they're delighted to use it. So internal adoption has not been a challenge, because again, we've made it better than the software it replaced. So that's cool. The other part of it is what you said about retraining and re-skilling people. And that's been fascinating as well, because we've had a lot of people who had tremendous depth of knowledge in kind of smaller-scale solutions, right?
Whether it's craft solutions or kind of semi-industrial solutions that maybe scaled to a particular customer or a particular division, and we're saying, hey, do you want to now learn how to do this globally? Can we turn a loc engineer into a business process modeling and notation expert so that you can do this company-wide? We've invested quite a bit in training folks to do that, and we've created an entire process team, embedded within Lionbridge, right next to operations for the initial deployments; they're actually physically next to each other in one of our main production centers. And there's that feedback loop: okay, the system needs to do this because I'm trying to accomplish that, and then how do I do this globally so that every solution isn't just for this one customer? And if you look at the journeys some of our folks have been on... I mean, sure, as we talked about, I myself have gone from being a loc engineer to the CTO of one of the bigger language companies, not bad, but we have other folks who've gone from being loc engineers to now being business process experts and industrial automation experts. And that's a very fulfilling journey to be on as well. So we've been able to offer people growth and career paths, and that, of course, helps with adoption and everything else.

- Excellent. Now, Marcus, I'm not sure if you have this information, but for those customers who are already transacting, using Aurora AI through the production centers, as you mentioned earlier, any general feedback, any early general feedback on what they think? Did we hit the mark?

- Yeah, so again, we're very early here. We did this announcement just a couple of weeks ago, and we have a subset of our business going through it at present, but the early feedback is quite positive. Not surprisingly, it's not hard to get people to accept software that is better, faster, and more resilient. And again, not that what we were using before was bad, but this is a modern piece of software built in partnership with best-in-class solutions like Camunda, hosted in Azure. We invested in this, right? We didn't go for the cut-rate solutions here; we went for true best-of-breed solutions. And so our customers are seeing stuff that in many ways is faster and better than what they were seeing before. So they are all in. Now, as we continue to evolve this, some of the things we haven't even touched on yet: Aurora has excellent eventing. Everything in Aurora gets written to an event stream, which is a very technical way of saying we know absolutely everything that happens in the system. If anyone has bought a package online, you get the stream of data: your package is 10 kilometers away, now it's five kilometers away, there are two stops. This is really exciting, right? Or it's at a distribution facility, or there was an exception, because any complex system has exceptions. There was a snowstorm in Ontario, and therefore I didn't get what I needed to get. I'm trying to localize it for you.

- That's right, I know, you do it well.

- Yeah, and so all of that, as we continue to deploy it, we'll expose more of it to the customers. All of us, in our lives, personally and increasingly professionally, have this insatiable need to know exactly what's happening, exactly when it's happening, and Aurora will satisfy that need in a way that traditional localization can't: "Oh, I'm gonna give you a status report."
It's not quite the same thing as having that kind of real-time eventing. And again, it's not that our systems before were bad, but in many cases, if you look at most localization technologies out there, not just from Lionbridge but some of the commercial ones, they don't have eventing and event streams in them, because that's relatively recent technology and it would be very hard to retrofit. But when you're building a system from the ground up, you can use some of this very modern stuff that comes along, right?

- I guess, you know, eventing, that's perfect, because now I can think... you mentioned Amazon delivery; I'm thinking of my Uber guy, you know, where is he? You know, five hundred--

- Same.

- So it's the same idea, you think?

- It's the exact same idea. Indeed, eventing was first created, kind of simultaneously, by Netflix and Twitter around, you know, workflow and where stuff was in the queue, et cetera. And so, if you think about your Uber driver, your pizza, or your package from Amazon, they're all using very similar technology, where everything that happens in the system is written out. And the idea of using an event stream, and I know this gets a little geeky here, but I am the CTO, right, is that I'm writing it out to an external system, and then I can use that external system to report and slice and dice in a way where I don't have to get into the database of the actual system and therefore risk performance issues or whatever. So everything that happens, it's this constant stream of I'm writing it, I'm writing it, I'm writing it. You don't have to ask me; everything I'm doing is being written somewhere else. So you can go and query that system, and it's a wonderful thing.
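The eventing idea Marcus describes, every state change written out to an append-only stream that can be queried without touching the production database, can be sketched like this. The event names and fields are hypothetical, and a production system would publish to a broker such as Kafka or Azure Event Hubs rather than an in-memory list.

```python
# Hypothetical sketch of an append-only event stream for job tracking.
# In production this would be a broker (Kafka, Azure Event Hubs, ...), not a list.
import json
from datetime import datetime, timezone

EVENT_STREAM = []  # stand-in for an external event store

def emit(job_id, event_type, **payload):
    """Append an immutable event; consumers query this, not the workflow database."""
    EVENT_STREAM.append({
        "job_id": job_id,
        "type": event_type,
        "at": datetime.now(timezone.utc).isoformat(),
        "payload": payload,
    })

# A job's lifecycle, reported step by step as it happens.
emit("job-123", "received", file="help.xliff", languages=["fr", "de"])
emit("job-123", "mt_completed", engine="generic-nmt", seconds=4.2)
emit("job-123", "exception", step="repo_sync", reason="connector timeout")
emit("job-123", "retried", step="repo_sync", attempt=2)
emit("job-123", "delivered")

# A customer-facing "where is my package?" style query over the stream.
def timeline(job_id):
    return [e for e in EVENT_STREAM if e["job_id"] == job_id]

print(json.dumps(timeline("job-123"), indent=2))
```

Because consumers read the stream rather than the workflow database, reporting and customer-facing tracking can scale without putting load on the production system, which is the performance point Marcus makes.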
- So one of the questions, I guess, for whoever is listening to us on the customer side right now, and we have a few customers who listen to this podcast, whose time I appreciate: after listening to this discussion, is Aurora AI going to be open to customers?

- Yeah, so, you know, yes, right? In the same way that Lionbridge's technology is always open and exposed to customers. As you know, Lionbridge has long held a commercial philosophy that we don't license our technology, unlike some of our competitors. We're not saying, hey, come and give us a couple of hundred grand a year for Aurora AI, and then pay us more for services. We say, hey, pay us for services, and you get the value of this technology for yourself. So when it comes to visibility, insights, submitting work, tracking, eventing, even views into some of the decisioning, et cetera, we will absolutely expose this to customers. Are we going to license it for customers to use with other providers? No, that's not our approach; it's there to work with us. I also think about the landscape of TMS technology in general, if you want to consider it to be more than TMS, because it's really about industrial automation and orchestration of any content lifecycle process, not just traditional translation. But even if you think of it purely through the lens of translation: how many of our customers should still be licensing their own TMSs in a world in which scale matters? One of the things we see with LLMs is that you spend a lot of time on, say, prompt engineering and up-front work, and then your cost downstream of producing the words goes down. So should you be spending money on your own TMS, or should you be trying to consolidate your spend with fewer providers, in order to get more economies of scale out of each of the providers that you do have? Listen, there's no one-size-fits-all solution. But I think it's a really interesting time to think about the role of technology in this industry, and how much our customers value, or would benefit from, licensing technology of their own, where they then have to distribute their work across multiple suppliers and many of them never achieve true linguistic scale, as opposed to partnering with somebody like us, who can give you that linguistic scale and that insight without charging you a separate technology licensing fee. So I think it's going to be an interesting moment.

- And I want to mention here on the channel: I've talked to many companies on this channel before, and very rarely in this industry do I hear about somebody who is doing R&D in the services space. We're not a technology company in terms of producing software for sale, right? Lionbridge is not a software seller; we use software to provide services to customers. To put out the investment you mentioned earlier, the investment in best-of-breed and so on, there must have been quite a large investment commitment to develop something that innovative, at a time when, well, everybody tells me the industry is growing, but personally I don't think it's growing. At a time when people are retracting, we're investing, which is pretty good. I feel like it's a huge positive sign for Lionbridge and the industry in general.

- Oh, I mean, I agree. And there has been substantial investment for some time, as we've been building this over the last months and even years.

- Yeah.

- So it's considerable. But I also feel it's necessary, for the reasons we laid out at the beginning of this. If you accept that the LLM revolution, and let's leave aside just LLMs, because there are other models out there as well, that the AI revolution is a transformative moment in how language gets produced, and I think most people will accept that, and if you accept that you need better than traditional, serialized workflow to really get the value of that revolution, then to me it's somewhat axiomatic: you have to develop new technology that can combine the AI revolution with customer-specific needs, that can bridge that gap, always with humans in the loop who understand the technology, who understand the language, and who understand the customers and their customers, in order to get value. So, of course, everyone's resource-constrained. Everyone has a board they have to go to, and I spend a lot of time with ours around this. But to me, it's more a question of: how can you not do this? If you accept that AI is this transformative, how can you accept continuing with technology that doesn't fully take advantage of it? That was ultimately the calculus we had to make.

- My last topic, Marcus, if you don't mind, before we wrap it up. I know it's early.
I know you just deployed the software, but knowing you and working with you for so many years, I think you already have a vision of what's next. I'd like to talk a little bit about what's next, what's the vision down the road? Where do you see this going in the next few years, six months, however you want to measure it?

- Sure. I mean, look, we need to continue to get it deployed, right? That's number one. There's a methodology around global software deployments, about phasing things in, grouping things into trains, and you have a change management leader, and we have a great one. So we're doing all of that; we're getting it deployed across the company, which, for a global, distributed company, is no mean feat. But I think what you're really asking is, what does it enable after this? And this to me is one of the most fascinating areas, because one of the promises of AI, a promise you're not going to be able to realize unless you have very smart orchestration and infrastructure technology around it, is the ability to really blow up traditional localization paradigms. If we think about the localization industry, we've been metering weighted words for 25 years. We've been charging for that, we've been paying for it in many ways, and it's very comfortable, because we have this kind of coin of the realm that everyone accepts. But is that really the best model now, as LLMs blur the line between content creation and content localization using very sophisticated prompting strategies? Of course, you need very good orchestration to enable you to prompt in the right order, at the right time, with the right includes, using things like plugins, fine-tuning when appropriate, using specific features like Semantic Kernel, et cetera. All of that harness around the AI needs very good orchestration. But ultimately, you can get that AI to deliver content to you in multiple languages. And at that point, it isn't localization anymore, because you haven't started with a single source that was your source, as it were, or certainly was the source for your domestic market, your domestic language. Instead, you're generating content in all your languages, or at least in a good number of your languages, at once, powered by the machines. And that to me is one of the most exciting things. And even if you aren't ready to go all the way to full AI-enabled multilingual content generation, being able to use LLMs to make traditional localization processes more efficient, for example, to have the AI do part of the work that was done by a human post-editor, is also really fascinating. Because now what I can have is different services for different types of content. It isn't just the traditional one-size-fits-all: I charge you so many cents per word for French, and so many other cents per word for Chinese, without taking into account the type of content.
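The "generate content in several languages at once, rather than translating a single source" idea can be sketched with the same Azure OpenAI client as the earlier post-editing example. Again, the deployment name, brief, and prompt are placeholders; a production setup would wrap calls like this in the orchestration, terminology, and human review controls discussed above.

```python
# Hypothetical sketch: generating short marketing copy directly in several
# languages from one brief, instead of translating a finished English source.
import os
from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

BRIEF = ("Announce a new in-game winter event: two weeks long, free to enter, "
         "exclusive rewards. Tone: playful, two sentences, no exclamation spam.")

def generate(brief: str, lang: str) -> str:
    """Write original copy in the target language from a shared creative brief."""
    response = client.chat.completions.create(
        model="gpt-4o",  # Azure deployment name in a real setup
        temperature=0.7,
        messages=[
            {"role": "system",
             "content": f"You are a native-speaking {lang} games copywriter. "
                        "Write original copy that follows the brief; do not translate."},
            {"role": "user", "content": brief},
        ],
    )
    return response.choices[0].message.content.strip()

for lang in ["French", "German", "Brazilian Portuguese"]:
    print(f"--- {lang} ---")
    print(generate(BRIEF, lang))
```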
So that to me is what's really exciting, and most immediately, the ability to engage with our customers on what we like to call storytelling content. The content that you find in games, in entertainment, in sports, the stuff that really resonates with us as passionate users, as human beings engaged with the world around us. And we traditionally haven't had a great answer for that type of content. Technical content, fairly dry stuff, we do great at, right? But how do you engage with the stuff that doesn't just land with our brains, but also lands with our hearts? With the LLMs, and with generative AI in general, you have that possibility, or the power to do that. You know, that's a long way of answering the question, which is: I think we're going to have more offerings, for more types of content, that previously have been very hard to serve with traditional globalization methodologies. And I think it's an incredibly exciting time that we're in--

- The future is positive.

- Absolutely.

- Absolutely. Hey, Marcus, I want to thank you so much for coming online with me today, and thanks for sharing. I really appreciate your insight into Aurora AI, and congratulations on such a good launch, and on such an innovative solution, from what I'm hearing, from what I'm seeing, from what I'm experiencing, especially after this conversation. Thanks for enlightening us, me and the audience, on Aurora AI, and I wish you the best of luck. Any last comments before we stop the recording?

- No, thank you, Robin. I really appreciate being here. I hope this was useful to your audience. Please stay tuned: follow Lionbridge, follow Robin, who's always a great interface to a lot of people for what we do. But you ended with something that I think is really important to reinforce, which is that the future is bright. There's a lot of doom and gloom out there: oh my gosh, the machines, we're going to get into a Skynet scenario. Look, no one has a crystal ball. But what we see is that for humans who understand other humans, who understand the role of content with those humans, who understand how to get the most out of the technology, and who understand the nuance of human language, because language is such a critical part of who we are, being able to occupy and bridge those different areas, I think the future is very bright. This technology is not about replacing the people; this technology is about really freeing us to connect with content, with language, and with technology in far more meaningful ways. All the conversations I've had with customers over the last several months have just reinforced this. No one is saying, oh my gosh, I don't need anybody to help me with localization anymore. No, they do. They may need different things, they may need them at different times in the process, and they certainly will need different technology. But I think for those of us who love language and who love technology, the future absolutely is bright. And thank you for giving me the opportunity to share that.

- Oh, no problem, Marcus, always good to see you. You're welcome back on the channel for another big announcement, hopefully. And I want to thank our audience for tuning in. If you're coming in on YouTube, thank you for watching this episode, and if you're coming in on our podcast, thank you for listening in. And if you haven't subscribed to our channel, please consider subscribing, at no cost to you. And please share and comment on the content. Really appreciate it. Thank you, Marcus, appreciate it.

- Thank you.

(gentle music)