[Music] Hello and welcome to the Campus Technology Insider podcast. I'm Rhea Kelly, editor-in-chief of Campus Technology and your host. What's the state of artificial intelligence in education for 2025? It's all over the place, according to Ryan Lufkin, VP of global academic strategy at Instructure. While innovative adopters are experimenting with ways to help students engage with AI tools, others may be stuck on the idea of AI as an avenue for plagiarism and cheating. And while it's important to build trust in the technology, perhaps it's time for educators and students alike to put the power of AI to work. We talked with Lufkin about building AI literacy, international AI adoption, personalizing the academic experience, and more. Here's our chat.

Hi Ryan, welcome to the podcast.

Hey, thanks for having me.

So I thought we could just start by you telling me about yourself: your background and your role at Instructure.

Yeah, so my name is Ryan Lufkin. I'm the vice president of global academic strategy here at Instructure, the makers of Canvas. I always like to call that out because Canvas tends to be a household name, even if Instructure isn't. I've been in this role for about two years, but I've been in EdTech for 25 years now. Back in '99, I worked for an EdTech startup called Campus Pipeline, building the first HTML portal for higher ed that was customizable and personalizable, to help bring student data to life and really engage students. That kicked off my career. I've spent the vast majority of it in EdTech, both on the administrative technology side, the SIS and things like that, and on the LMS side. And I've been at Instructure for the last almost seven years. So I live and breathe EdTech. It's what I get excited about. And as we talk about some of the different trends that are impacting education, hopefully you can tell it's something I'm passionate about.

For sure. I love that, going back to the early days of portals. I remember when an HTML portal was truly groundbreaking. So we're here to talk about AI, and I imagine that's a big part of your life right now; it's one of the biggest trends in EdTech, I think. So I thought I'd ask a tough question, maybe too huge of a question: How do you characterize the current state of AI in higher education?

I mean, it's interesting, because I think it's all over the place, really. I tend to look at it through the experience of my kids. I'm lucky enough to have a 20-year-old who's a sophomore in college, and a 14-year-old who's an eighth grader. And I look at how their schools are talking to them about AI, and they're very, very different. My daughter is given much more guidance on how to use AI as a tool. AI literacy is actually part of the curriculum, along with really understanding that it's a tool we're going to be using well into the future, so how do they enable using it? For my son, it's much more viewed as a cheating tool that students should not use, and they're not being given a lot of guidance on how to use the tool at all. And that's creating a chasm, really. I'm fortunate enough in my role to be able to travel across the globe, and we see that kind of inconsistency all over the world. You've got those very innovative adopters who are trying to figure out how to use it in the classroom, how to really help students engage with the tool.
And then you have a lot of people who are still stuck on the idea that it's a cheating tool, that it has no basis in education, and frankly, I think a fear of it replacing educators or replacing jobs in the future is inhibiting adoption. So it's inconsistent across the globe, but I think it's gaining momentum.

Do you see it as a K-12 versus higher ed thing, or is it really just a school-by-school thing?

It's a school-by-school thing. I think the hesitancy is more prevalent in K-12, and honestly, not to cast any disparaging statements about K-12 educators: a lot of them came out of COVID, where they moved mountains, and just as things were getting back to normal, they were hit with AI as one more thing they have to learn. They're already incredibly time-poor and incredibly overworked, so to expect all of them to immediately become AI experts really is not realistic. But many of them are embracing these tools, and they're saving themselves time. So it's not fair to paint with broad brushes; it's just that what we're seeing is so spotty all over the globe. And again, it's not necessarily K-12 versus higher ed, it's not region versus region. On November 30th of 2022, we all started in the exact same spot with generative AI, across the globe, and we've just seen different levels of embracing that technology pretty much everywhere.

What do you expect to come in 2025 with regards to AI, compared to the past year?

Yeah, I talk a lot about the fact that we've been in the trust-building phase, right? I have a presentation I give where I put up a slide that has the Terminator and HAL from 2001: A Space Odyssey and all of these different AIs. We've been trained to believe that AI is going to be evil: when it shows up, it's going to be bad. And a lot of the headlines that came out immediately were people doing some prompt engineering and getting AI to say things: my AI threatened me, AI told me I should kill myself, things like that. When you really look at the prompting that went into it, it takes a lot of manipulation of the tools to get that response. So it's feeding on the fears that have been sown since the 1980s, and even before that, right? War of the Worlds, back in the 1950s. So that idea that aliens, AI, these are all bad things, and when they actually come along, we have that fear. So we've been in the trust-building phase, really helping people understand what AI is, what it's capable of, what it's good at, right? And I think now we're starting to move en masse beyond that fear phase and into: How do we put this innovative tool to work? How do we start saving ourselves time? You know, educators say, I'm the one that wants to write poetry, I'm the one that wants to paint pictures; AI should be doing the tough work. And I say, it can. We can do those things. We just have to move beyond the easiest implementation cases. I think we're moving that way. I think we're starting to put AI to work, and I think 2025 is going to be a very productive year. We're starting to see these models get smaller and more affordable, easier to implement, right? That's one of the reasons they're so appealing: they're so approachable.
And so I think we're going to start putting it to work in really effective ways.

It's amazing how many Terminator references come up practically every day, it feels like.

Oh, it does. It does. And this idea that they're going to become sentient. And why? I'm not sure why humans have this complex where we're pretty sure that if AI ever becomes sentient, it's going to wipe us out, right? Why do we feel that way? Why don't we think it's going to be our friend, somebody that really roots for humanity to do more and better, right? I just think it's an interesting complex we've created for ourselves culturally.

So you mentioned how you travel around the world talking about AI. I'm curious what you've learned from that international perspective, besides just the variety in the state of AI in different places.

Yeah, I mean, it's so interesting, because we actually had Marta Castellano from Areandina University in Colombia on our podcast last week. It was so interesting because she has very deep insight into the cultural aspects that impact Colombian society, certainly, but society as a whole. And she has such a positive outlook on AI, specifically its ability to personalize learning, right? That ability to really engage with students who are very difficult to engage with. They serve students across Colombia in urban areas and rural areas, those dealing with poverty issues and crime issues, and those on the other end of the spectrum who are very wealthy, right? So how do we personalize those education experiences? How do we make sure we're engaging with students, and that we identify quickly when they might be going off track and bring them back into the fold, give them help when they need it? It's just such an amazing perspective. And I think we see these thought leaders everywhere. One of the cool things, like I mentioned, is we all started at that same starting line. Very few institutions or individuals had access to generative AI prior to the end of 2022. Those that get over the fear and embrace it and start using these tools really are setting themselves up for success. And the ones that bury their head in the sand or hope it will go away are doing themselves a disservice, right? AI is here to stay, like the internet, like the calculator, like so many innovations before it. And so I love to see, across Europe, across Asia, across North America, schools of all different sizes innovating. This isn't just the MITs and the Harvards and the incredibly well-funded schools driving innovation. This is innovation at schools of all sizes, wherever you are in the world.

Are there any particular countries you think are doing it better than the U.S. in their approach to AI?

Yeah, I mean, I think the Philippines. It's been interesting because the Philippines actually had a mandate for more certificate programs to upskill their workforce, upskill their students across the country, to prepare them for the tech jobs that are coming their way. And it's been interesting to watch the universities there actually apply AI in ways that are incredibly innovative for reaching new students. A lot of the same challenges I talked about with Colombia, but they're able to create a more personalized experience, create outreach for these students, and really teach them how to use AI in a way that maybe students in the U.S.
may not be learning as rapidly, right? So they're using it as a way to close those knowledge gaps and really accelerate skills growth for their students. Incidentally, Instructure is actually opening an office in the Philippines, so we are both helping improve that upskilling and benefiting from it. That's kind of an incredible closing of the loop.

So you mentioned the potential for personalizing learning, and I'm curious what you think the biggest areas of potential are for the use of AI in teaching and learning.

Yeah, to me, that's the personalization experience. We're just scratching the surface of it. We know from the data out there that all learners learn differently, right? All students are going through different things across their academic experience. There are times when a student is not successful simply because the way an educator teaches doesn't click with them, right? But you put AI into that instance and it can fill those gaps. It can say, hey, you're more of a visual learner; what if we supported you with some visual elements beyond what the educator is providing, right? I don't ever see a world, and I say this all the time, I wish I could exclamation-point this, I don't ever see a world where educators aren't part of this process. Educators are the magic. They're the storytellers. They're the ones that create those engaging moments with students. AI allows us to fill the gaps, expand our reach, take care of the administrative tasks so we can do more, right? Focus on the teaching, the things that really create those aha moments for students. That's what's important. And I think the more we explore these different ways to personalize that experience, catch students in a more timely fashion when they might be going off track, and understand what their aptitudes really are and focus in those areas, the more AI becomes a guide. I look back at my own journey. I was a six-year college student, right? Because I didn't know what I wanted to do, and I changed my major and I shifted. I was also a first-generation college student, so I didn't have guidance from my parents on how to choose courses. What should I do? Where should I go, right? And the idea that AI could help understand a student, guide a student, and really help them find success more quickly and more affordably is just incredible. And again, like I said, we're just starting to scratch the surface of what's possible. We've been gaining the trust, and now hopefully we start putting some of these tools into play.

What do you think it takes? Is it a matter of ed tech providers getting the right tools together, integrating across campus systems? It just seems like there's a lot to be done to be able to take advantage of all that potential.

Yeah, it's both, honestly. It's a collaboration, right? And that's where it's been so interesting, because we've added a number of AI features into Canvas with the feedback of our schools and what they want to see. But by and large, they're driving a lot of the innovation, and part of it is that they want to feel the ownership.
They want to feel trust in the large language models that are being applied, that are being put in there. And we do everything we can to build that trust with the tools we use. We use AWS's Bedrock large language models, right? So we're not passing data to third-party large language models. We publish what we call our nutrition facts cards: like you would find on a box of cereal, each one has all the facts about the large language model being used, to help build that trust. And I think the biggest piece is going to be the innovation driven internally at universities and colleges, and by all means K-12 districts, and even within companies that are trying to train and support learning within their organizations. They've got to have the vision. We as vendors need to be there to support them with the technology that can facilitate that, right? And every time we have these conversations, we uncover new use cases in ways that I think are just remarkable. People ask, well, could it do this? And I'm like, yeah, it probably could. Let's make it do that. Let's try that, right? Zach Pendleton is our chief architect, and I always like to plug him because he's kind of like Willy Wonka that way: he says, we can figure that out, brings it back to his team, and they start playing with stuff to figure out how you'd apply a large language model, how you'd carve out the data that would make it smart enough to accomplish these tasks, things like that. So it's such an innovative time. It's so exciting. If you embrace it, if you get excited about it, man, there's so much opportunity. I honestly get excited about it everywhere I speak about it.

Yeah, I love that Willy Wonka comparison. It's kind of magical, you know?

You've got to have the dreamers, right? That's what it takes. The dreamers, the ones that are looking at new ways of doing things. Because AI can be applied to the traditional model, right, streamlining tasks and knocking the corners off of a user's experience in a lot of different ways. But it's the things that we haven't even thought of, the new processes, the new ways of doing things, that are really interesting. And that's one of the hardest parts with education: education works within a pretty rigid framework of regulation and requirements. You've seen institutions like Western Governors University, with their competency-based education model that predates AI, running into issues with guidelines around funding and seat time. And they're saying, well, our model doesn't actually focus on seat time; it's about getting students to competency quicker, so we're measuring competency as opposed to seat time; you're measuring the wrong metric. And it took them going back and forth with the U.S. Department of Education to really figure that out and get some changes made. We're going to run into the exact same things with AI, and a perception of: with AI, you didn't follow the exact same path to mastery of a skill, you didn't follow the exact traditional path to understanding. But that's okay, and AI is helping you get there in ways we haven't figured out yet. But it takes, like you said, schools pushing that evolution.
We have to be working with the government around that evolution. That's one of the things I think some countries did as a knee-jerk reaction: they jumped to provide more regulation, rather than sitting back and waiting until we actually had a little more insight into how AI was being applied. And I think that will be a detriment to those countries' schools. Again, we saw the initial bans on AI, right? At the beginning of 2023, we had schools all over the globe banning AI. And then we saw those bans slowly be removed, with schools saying, okay, that's not realistic; how do we apply these tools? I think in a lot of ways government regulation is doing the same thing: some of that knee-jerk regulation might dial back a little bit. There's certainly a need for regulation, but we've got to make sure we're not stifling innovation.

It's like when you talk about automating workflows: you don't necessarily want to just automate the exact same thing you were doing before. Reimagining how the work should be done is a whole other thing.

Exactly, that's spot on.

So what pitfalls remain? What should people still be worried about when it comes to using AI?

Well, the interesting thing, and again, not to keep mentioning Zach Pendleton, but he came up with a metaphor early on around eating our vegetables, right? We already worry about student data privacy, accessibility, security. We already have regulations and guidance to make sure we protect those things. We just need to make sure, as we apply these AI tools, that we're eating our vegetables and meeting those same requirements, right? We've got those guidelines; let's just make sure we don't do something dumb. But certainly student data privacy is always one of the biggest issues, along with educator intellectual property, university intellectual property, things like that. How do we make sure we don't put those at risk? Ultimately, I think the biggest challenge really is: How do we make sure students are using AI to enhance learning, not to avoid learning? Everybody's worried that we're going to see a future where students are using AI to do their homework, and educators are using AI to grade that homework, so it's AI teaching AI and no one's getting smarter in the process, right? That's what we have to avoid. We need to make sure these tools streamline our processes, save us time, and make us better as opposed to making us worse. And knowing that's an issue, and that it's such a concern for education as a whole, I think we'll get there, but it'll be a process.

I came across a quote, something you said on another podcast, The EdUp Experience, where you said we're about to face a wave of AI feral children in higher education: students who know how to use these tools, but don't know how to use them ethically. And then you emphasized the importance of AI literacy and training both students and faculty on that literacy. I love that term, AI feral children. Could you dive more into what you're talking about there?
Yeah, there's a video I came across years ago of a little girl who was handed a magazine. She tried to scroll on the magazine, then she looked at her finger, wiped it on her shirt, and tried to scroll again. From her perspective, she was used to using an iPad, right? She was just scrolling in that digital experience. And it shows how digitally native students are. As we get older, we don't all have that same perspective on the digital world. We tend to look at the world through our own experiences, and we saw it through COVID, when people would say, oh, my students are missing this or that. Well, that was your experience, and they're going to have a different experience, so let's make the most of their experience. What we're seeing now is that we've got these digitally native students. I see it through my own son, who I mentioned earlier. I look at him and his friends, and they truly don't understand why ChatGPT would be bad but Grammarly or these other tools are good. Why can they use one and not the other? To them, they're all just tools in their arsenal for accomplishing their tasks. And if the tasks can be accomplished with these tools, why wouldn't they use them, right? They just truly don't understand. So when you've got a world where primary education is not addressing or teaching AI literacy, including AI ethics and when to use these tools appropriately, these students are going to crash as a wave onto higher education, and it'll be up to college educators to correct the bad behaviors that students have already built, and in some cases remediate the learning that hasn't occurred because they've been using these tools unethically or inappropriately, right? So we've got to work together. And that's why I think the barrier between K-12 and higher ed should just be abolished, right? We've seen more and more joint enrollment. I think we draw kind of an artificial barrier, where for a student, their learning journey is their learning journey; each stage just happens to be a chapter in it. We need to make sure we're working together better as educational institutions, whatever level we're at, and understand that we're playing a role in a student's educational journey. If we're not teaching those ethics early in that journey, it suffers everywhere else across the rest of the book, right? So we've got to work together on that. And yeah, calling them AI feral children may be a little alarmist, but I do feel like we've got to get a better focus on teaching these students, in elementary school, whatever level it is: AI literacy needs to become part of our core curriculum at a very early age. Just like digital literacy, which we've been pushing for quite some time. Students are using these tools. Why are we going analog when these students are already carrying digital devices in their pockets, right? Let's fix the problem, use the tools we have, and raise awareness.

Yeah, it makes you wonder, is this something that would ever be included in, like, the Common Core, you know?

I think it should be. Honestly, we've talked a lot in the past about coding being included in the Common Core, right? This idea that you need to understand digital language and things like that.
AI and AI literacy, the ability to write AI prompts: this is, I think, very rapidly becoming a core part of our society. And we're better off as a society if people can actually recognize deepfakes. It's really hard for people to spot deepfake video and images if they don't know that AI is capable of generating them, right? They take them at face value. But if you're taught at a very young age what AI is capable of, you're going to be a lot more skeptical when you see those videos, when you see images of the Pope in a puffy jacket, right? You're going to understand that that might not be real and question it. So I think as a society, it's incredibly important we address that.

Yeah, I hope we get there. So what can you tell me about Instructure's strategy when it comes to AI moving forward?

Yeah, it's been interesting because we've worked very closely with AWS. We like to say Canvas was born in the cloud 12 years ago in partnership with AWS. AWS has built what they call their Bedrock large language models, seven large language models that they host, so we're not passing data outside of AWS, where Canvas is also hosted. And one of the cool things lately is that they've been able to start shrinking these large language models, making them more affordable, making them more compact, right? Early on, people who were very excited about AI were more concerned about the security issues and the privacy issues than the cost. And what we've seen is that these tools, across an organization, can be incredibly expensive. So we need to find ways to roll them out that aren't going to drive up the cost of technology for students. We've rolled out a number of features in Canvas, everything from translation tools, to insights tools that give educators easy analytics, to discussion summaries, where instead of having to read through hundreds of discussion posts, an educator can click a button and it'll summarize the conversation, things like that. Those are the basics, and we're going to continue to add features that don't drive up the base price of our product, because ultimately this needs to be affordable for everyone. The second piece is that we're built on the LTI standard, which I always compare to Legos for people who don't know it. It's a common language that allows education tools to communicate and work together. So if you have a third-party app that supports LTI, it'll plug directly into Canvas, and as they do updates and we do updates, it doesn't break the system; they work together seamlessly. What we're able to do is really expand our LTI plumbing, for lack of a better term, so that as schools want to build their own tools or plug in third-party tools, everything from Microsoft Copilot to Praxis AI, which is a good partner of ours, they can plug those tools in and have them power interactions throughout Canvas. It allows that flexibility. And we're seeing more and more schools say, hey, I'm going to stand up my own large language models within AWS as well, or on Canvas, and let my students experiment and build with this.
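For readers curious what "standing up your own large language model within AWS" can look like in practice, here is a minimal sketch that calls a Bedrock-hosted model through the boto3 runtime client. The region, model ID, and discussion-summary use case are illustrative assumptions, not Instructure's actual implementation; the point is simply that the prompt and completion stay inside AWS rather than going to a third-party API.

```python
import boto3

# Bedrock-hosted models are invoked through the bedrock-runtime client,
# so requests stay within your AWS environment. Region and model ID are
# assumptions for illustration.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

def summarize_discussion(posts: list[str]) -> str:
    """Summarize a list of discussion posts, similar in spirit to the
    discussion-summary feature described above (illustrative only)."""
    prompt = ("Summarize the key themes in these discussion posts:\n\n"
              + "\n---\n".join(posts))
    response = client.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # hypothetical choice
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 300, "temperature": 0.2},
    )
    return response["output"]["message"]["content"][0]["text"]

print(summarize_discussion([
    "I think the reading overstates the risk.",
    "Agreed, but the case study in chapter 3 supports it.",
]))
```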
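And since the LTI "Lego" analogy above is doing a lot of work, a concrete sketch may help: in LTI 1.3, a tool receives a signed JWT from the platform at launch and verifies it against the platform's published keys. The JWKS URL and client ID below are placeholders, and a real tool must also validate the OIDC state and nonce; this is only a sketch of the core handshake under those assumptions, not Canvas's or any vendor's exact code.

```python
import jwt  # PyJWT
from jwt import PyJWKClient

# The platform (e.g., an LMS) publishes its public signing keys at a JWKS
# endpoint; the URL below is a placeholder for your platform's endpoint.
JWKS_URL = "https://platform.example.edu/api/lti/security/jwks"
CLIENT_ID = "your-tool-client-id"  # issued when the tool is registered

def validate_lti_launch(id_token: str) -> dict:
    """Verify the signed launch JWT and return its LTI claims."""
    signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(id_token)
    claims = jwt.decode(
        id_token,
        signing_key.key,
        algorithms=["RS256"],
        audience=CLIENT_ID,
    )
    # LTI 1.3 claims are keyed by IMS Global URIs; the message type
    # distinguishes a resource-link launch from a deep-linking request.
    message_type = claims[
        "https://purl.imsglobal.org/spec/lti/claim/message_type"]
    assert message_type in ("LtiResourceLinkRequest", "LtiDeepLinkingRequest")
    return claims
```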
We've had, I believe it was the University of Central Florida, build their own search tool across Canvas, built on their own large language model. So allowing that flexibility, that choice, is really key for us. And we've gotten a lot of great feedback. We don't do much without the guidance of our customers. We run customer advisory boards and advisory councils, where they tell us what they want and what direction they want us to go. And we heard loud and clear that they were worried we'd run too fast down the road with AI and create issues. We certainly saw other vendors in the space do that. And we really said, you know what, we're going to be very measured and very deliberate with how we roll out these tools. We've gotten great feedback about that; the trust is there. The biggest piece for us, like I said: we're in the trust-building phase. We don't want to squander that. We want to make sure we build that trust together. And at the same time, we've got schools like the University of Michigan and MIT, and schools all across the globe, doing amazing things with AI, with both our tools and the tools they're plugging into Canvas.

Are there any tools that universities are creating that you would want to bring in and make part of Canvas?

I mean, honestly, there's so much around building courses and making courses engaging, right? One of the exciting things we've talked about is that we want more show and tell. I want to see more of what schools are doing. I mentioned I was at Areandina down in Bogota in October, and they were showing me some of their courses, and they've customized these courses. And I'm like, I want to show this to everyone. Can you record a video of this so I can take it back and show everybody? That's what I love about education: it's so collaborative. You don't ever have to start anything from scratch in Canvas. Through Canvas Commons, you can share learning objects, everything from a course to a module to a quiz, right? Whatever you want to do, somebody's done it, and they want to share it. We want to encourage that more with the AI tools. If schools are building AI tools, how do they share those with other members of the community, so that people get excited about them and nobody is starting from scratch? We build together better than we ever do by ourselves.

Yeah, that's pretty cool. Like a repository of tools and best practices.

Yes. I mean, from the very beginning, that's one of the things I've loved about Instructure. I've never worked for an internet company where, whatever you want to do, somebody's done it and recorded themselves doing it, and it's available in the community. All you have to do is a Google search and it'll pull up a video of one of our advocates saying, oh, you didn't know how to do that? Let me show you, right? It's amazing.

All right, I think that's a good place to end it. Thanks so much for coming on.

Well, thanks for having me. This has been a great conversation.

Thank you for joining us. I'm Rhea Kelly, and this was the Campus Technology Insider podcast. You can find us on all the major podcast platforms or visit us online at campustechnology.com/podcast.
Let us know what you think of this episode and what you'd like to hear in the future. Until next time. [Music]