Archive.fm

Cyber Distortion Podcast Series

S3 – Episode 006 – The Insane Impact of AI – (FireTalks 15)

Duration:
1h 24m
Broadcast on:
14 Jul 2024
Audio Format:
mp3

In this episode, Kevin and Jason hit 15 different topics on the Insane Impact of AI on our businesses, in our daily lives, in healthcare, and beyond. Each topic will be covered in 5 minutes or less in a new format we’re calling “Firetalks 15!”

The guys like to talk, and if you’ve listened to more than a few episodes, keeping any topic under 5 minutes proved to be a very challenging thing as you’ll see in this interesting new format!

Jason Popillion is a CISSP and serves as a CIO/CTO of a SaaS company and Kevin Pentecost is a CISSP, CISM, CEH, CPT, MPCS, MCSE, CCA, ITIL-F and serves as the Information Security Director for a Manufacturing company.

- Hey Jason, you wanna see, you wanna see another one of my cool tricks dude? - Another one Kevin? - This one's really cool. - Okay show 'em what you got. - Firetalks, ha ha ha, isn't that cool? That's pretty awesome man, where do we get the fire tricks dude? - Oh man, that's just AI infused with a lighter light, man, it's just, yeah dude, I paid extra for that with our, you know, effects package. - Sounds like it's something you gotta implant it into your hand that makes you go... - Check it out dude, yeah! - Check it out dude, firetalks, alright, let me try it, let me try it one time, ready? - Firetalks, firetalks, firetalks, firetalks, firetalks, firetalks, firetalks! - Oh dude look, let's work on this later, it's time to get to our new podcast, alright, I'll teach you that trick later, okay? - Sounds good. - Alright, so here we go, welcome to our new episode format, called Firetalks15, let's go, let's go. Hey, welcome to the Cyber Distortion Podcast, we have a really unique and interesting podcast episode for you tonight, Kevin and I are excited about it, we're gonna do something a little different, you know, we cover all these topics and one of the things we try to do is cover them in fun, unique ways and my sidekick here, Kevin Pentecost, the man, the myth, the legend has come up with a unique way to bring this stuff to you today. So we are going to be covering the exciting world of AI, but we're gonna do it in a different way than we've done it before, we're gonna break it down into something Kevin is calling this fire, what's it called, Kevin, Firetalk15, Firetalk15, Firetalk15, right? 
So we're gonna break this down in what we're calling the Firetalk15 and what this really is, is we're gonna have 15 different topics on the subject that we're going to talk about in five minutes and when Kevin talks too long, he's cutting himself off because he's always talking too much and we're gonna cover this, every one of these topics in five minutes and give you as much good information about each one of these topics as possible. So we're excited about doing this, I think it's gonna be fun, but yeah man, so first off for everybody that's listened to us, they know you're full of it, you're the one who talks, you're the one who, we have to literally cut off at the knees when you're talking because you like to ramble your mouth a lot. So it won't be me that's getting cut off, it's gonna be you and the reason I know that is because I'm gonna be controlling the timer anyway. So I'll give you a heads up if we're getting close, if we're coming up on that last 30 seconds or so, I'll let you know and the way we're gonna do it, I'm gonna manage a timer over here and we're gonna basically just go through each of the topics, the main topic is AI and we're gonna talk about everything around various areas within the AI realm and we've got some pretty good ones there, so I'm excited to get into that and the very first topic, we're gonna start simple Jason, this is AI Cybersecurity 101. So the first, very first one, what is AI? Okay, hold on, let me tell you something before we jump right into this one. The only thing that flies between these teeth is knowledge and I'm about to spit some knowledge to that. Oh my god. All right, all right, well we'll see about that. Okay, so I'm gonna start the timer, you get to set up the first topic, that's roughly a minute and then you're gonna give your thoughts on the topic and then I'll get the last bit before we close out the timer on this first topic. Cool. Alright, good. Let's go. Five minutes starting now. 
All right, man, AI, what is AI? You can take this thing a number of different ways, right? And for the feeble-minded people like Kevin, I'm going to break it down in something very simple. Thank you. All right. AI is the most disruptive technology we have seen in my time in the past 30-plus years. And the reason why I'm saying the most disruptive is not that it's doing more than prior technologies, but because it is doing what it's doing at the rate that it is doing it. That's why I'm qualifying it as the most disruptive, right? It is progressing at such a rapid rate with such phenomenal results that it's very hard to keep up with, right? For us who are trying to really keep our thumb on the forefront of where it's going and what we can do with it and how we can achieve stuff with it, it really is. But that's one definition, the other definition I'm going to give you is more specific. That artificial intelligence, AI, and more specifically generative artificial intelligence that we're all using right now, is really about the ability for you to interact. And again, I'm breaking this down into simple terms, but the ability for you to interact with a backend system, a computer system, or a knowledge base like that at the root of it, it has some type of a language model, small language model, a large language model that has been trained with a bunch of data that you can interact with and it will generate appropriate answers to the prompts or questions or, you know, the conversation you are trying to have with it. To the point where it can almost understand, it appears to seem like it's understanding your conversation, where in reality, it's just using math and predictive analytics to determine the next logical word that would happen in a sequence based on the information it has been trained on. 
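Jason's "math and predictive analytics" point can be made concrete with a toy sketch. This is not how a real LLM works internally (those use neural networks over subword tokens), but a tiny bigram word counter shows the core idea he describes: pick the statistically most likely next word given what came before. The corpus below is invented purely for illustration.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which in a tiny
# corpus, then predict the most frequent next word. Real LLMs learn these
# probabilities with neural networks over billions of tokens, but the
# prediction step is the same in spirit: next token = most probable token.
corpus = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug the dog ate the bone"
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("sat"))  # "on" -- "sat" is always followed by "on" here
```

The "magic" of a chat model is this idea scaled up enormously: instead of raw counts over a few words, a learned probability distribution over every possible next token.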
And the core really is that we've trained these systems with so much data on how humans work and interact in human-based knowledge that now they know how to interact with us as if a human would require it. And so that's the astonishing part that we're seeing as the results of these systems being able to interact with you, equivalent to how a human would interact with you. And we're like, "Oh my gosh, this thing is going to take over the world. It's smarter than everybody." Yeah. Yeah. That's a human. Yeah. It's a very human sense. Talking to computers in their language and vice versa, which we're starting to see and hear more and more about. So AI to me, it's incredible and it's also incredibly terrifying. And I think about that in the sense of where things are going, not today, but where they're going the next five years or maybe slightly beyond. And through all the cycles of hype with the release of ChatGPT, I honestly think that was a massive turning point. When ChatGPT was released, November 2022, wasn't it? We took a leap into the next world of AI that we're going to have to unravel and figure out how to secure and figure out how to deal with what's coming next. But that kind of revitalized AI in the sense of, well now we can use natural language processing and we can come up with images based on text, we can translate text into audio, we can translate text into video, we can translate one language into another language. Those are the things, and we're starting to translate pictures into text, right? You can do a number of different things. So Kevin, what makes you scared about it? Are you any more scared now than you were prior to AI making its debut? Yes, because it's now mainstream and it's in the forefront of my frontal lobe constantly because it's in the news everywhere. Thirty seconds by the way, 30 seconds. In my opinion, I'm terrified because of what I hear the capabilities of artificial general intelligence are going to be. 
And we will get into that here in a bit. But yeah, knowing that a computer could literally start making its own decisions and decide its own outcomes scares the hell out of me. Yeah, well get scared man for sure. I agree with that man. That's it, four, three, two, seconds up, segment one, done. Topic number two, and for the second topic, we're going to be talking about the history of AI and the history of AI goes back about seven decades now. And what we're going to talk about here is some of the key milestones that have taken place across those seven decades, and most people don't realize this when they think about AI that it goes back into the 1950s. But it does. And across these seven decades, there's been lots of things that have happened along the way that have been important in the history. But also important in the history of AI are the two major events that happened in the middle where nothing happened. And we refer to those as the AI winters. And basically, there was one in the mid-70s, and there was another one in the late '80s to early '90s that took place where really nothing was going on. Funding had dried up, public interest was basically almost non-existent, and technology was not supporting the needs of AI at that time. So keep in mind, across these seven decades, some of these milestones took place around two major AI winters too. So for me, a few things come to mind. In 1950, the Alan Turing test was created. So the test that we still use today to determine whether or not we are talking to a human or a machine is still around. The same test that was designed back then is the test we still use today. So that was the first thing in the history of AI. The second is the term artificial intelligence. That also happened in the '50s, Jason. Did you know the phrase AI was technically coined in 1955? Yeah, I know that's so interesting, man. And as you're saying, this is reminding me of something. 
The people back then, that, you know, we talk about, you know, that being the greatest generation, and they really had the foresight to think way outside the box on the art of possible. That's how this really got started, is thinking about, huh, these things that we believe are computers of some type, not the way we see computers today, but in a totally different form back then, that someday that arbitrary thing will be able to think artificially the way a human does, right? That's really the foresight and thinking that they had back then. To think that that foresight and thinking many, many, many years later has gotten us to where we are today. Wow. That's kudos to the minds that really put that stuff together, the foundation of this stuff together. Yeah. I mean, these were out of the box thinkers and to come up with the concept to even, hey, let's create a test that we can use to determine if a human can tell that they're talking to another human or a computer. And oh, by the way, yeah, if it can trick them, we'll say that it passed the test, the Turing test, named after, of course, the computer scientist Alan Turing, who came up with it. And it's still around today, I mean, because if you think about it, through those AI winters, where not a lot was happening, the challenge is still the same challenge today in the 2020s, you know, how do we create intelligent computer systems that can think like humans, act like humans, rationalize like humans, solve problems like humans, and ultimately have, with AGI, the same level of compassion or simulate the same level of compassion as humans. Yeah. Well, well, let me tell you this, but for me, when I look at the history, I break it down into, and, you know, all my answers are going to be very brief. I better be, because you get 35 seconds. Yeah, because I'm cutting you off. Here's what I'm going to give you. 
It's very much that same level of thinking, out of the box thinking and ingenuity back then is really what is sparking this today with ChatGPT, OpenAI releasing ChatGPT; prior to that, we really weren't even thinking about it. Yeah. The general public wasn't. That's, that ingenuity sparked that. And for the future, as we continue to go forward, continued ingenuity like that, it's what's going to take us into AGI and to solve problems that we haven't even contemplated. Nice. One second... and we're back. We're going to run through another FireTalk chat. This topic is going to be on generative AI. And so when we talk about generative AI, again, I shared this earlier, it really comes down to the essence of being able to create something new out of nothing, right, to generate something. In this case, what we're doing is, and most often we refer to things like generating a response. But anymore, that's not true, right? It could be generating an image or video or audio or files of data. It could be really a number of things that we're trying to generate. It's also known as weak AI or narrow AI, you might have heard those types of terms. And what most experts agree upon is that generating content falls into the paradigm of being complex, being coherent, being original. And when you think about that with the type of results that you might see when you're using a ChatGPT or other AI tools like that, that's the place that we become astonished by what generative AI does. It's like the magic that's happening right before your eyes. I can't tell you, man, how many times, every time I generate an image, when I put a prompt in and I see, wow, this really captured some unique things in my thinking from the words that I provided. 
And you know, I guess, sure, right, we could get caught up in the whole, it's going to hurt the artists, it's designing, you know, unique works of art and it's going to destroy the artistic, you know, way that humans operate. I can agree and I can disagree with that. But I also appreciate the fact that you have this, this unit of ones and zeros, bits and bytes, that now can interpret a language. And from that language, create a visual, unique representation. By digging into millions and millions and millions of records of data and images. Yeah. And just understanding how it works, right? We can argue around the fact that, well, that's just copying other images. It's not true anymore, right? This stuff is generating unique, you know, works in a unique fashion, not just copying other images for source. But if you think about it, how many times have you ever seen the same AI image ever generated twice in a row or even close? In fact, yeah, when you're, when you're creating your prompt to go into Midjourney or DALL-E or one of these solutions where you're creating an image based on text, I can't even get those tools to repeat the image that I actually liked from the last prompt that I used and just generate four new images based on the one that I actually liked because I say, hey, why don't you add some wires to that or add some light bulbs to that? Next thing you know, it's got four whole new images. It didn't even use the source images that I said. That's incredible. But in the early days, right, in the early days, I can remember generating an image and it looked like several, you know, different photos being spliced on top of each other to make the image. And eight fingers on a hand and it was really bad. Yeah. Yeah. Yeah. But that's not true anymore. 
In this equation, when we start thinking about getting closer to AGI, right, artificial general intelligence, there is that, that section in the middle where if you give AI an image now, it can interpret that image, right? It can come back and say, oh, well, this is an image of a guy who's sitting on a beach drinking a cocktail, you know, petting his dog, right? It can, it can interpret that image. So its ability to understand things beyond simple text, right? And, and in other forms, media forms, is really where we start to get, 30 seconds, right? So, yeah. So, so really what I think what we're saying here is that, you know, ChatGPT was released and we had basically this AI boom. And these tools that we're seeing today are what we call today's AI. That's generative AI. It's generating a result based on some sort of a prompt that you engineer to get the result you want. Next, in five seconds, we're going to talk about AGI. That's the next cool one, AGI, yeah. Yeah. Right. FireTalk. Topic number four. Here it is, AGI, or Skynet, as I like to refer to it, Jason, when all hell breaks loose in this dystopian, fantastical world where AI rises up and it's going to kick our ass. I get excited about AGI, the topic of AGI simply because people's minds can go all over the place with this one. And if you think generative AI is really cool and it is, oh, you just hold on to your britches, kids, because AGI is next level. So this is where artificial intelligence can start to solve problems in various domains in areas in which it's never been trained. Up until now, generative AI is solving a problem based on something that it's been trained to do. It's limited in capacity based on what that large language model is restricted to as far as its data sets. AGI is smart enough to think like a human. And I saw a quote that Elon Musk said and he says, by next year, AI is probably going to be smarter than any single human on the planet. 
But by the year 2029, AI is probably going to be smarter than all humans combined. That's why we say, oh my god, this stuff is kind of terrifying. And I don't know how close he actually is. And I'm kind of a glass half full kind of guy. I kind of think we can get some parameters around this and some controls around it. But dude, if we don't, yeah, things can get ugly quick. What are your thoughts? Absolutely. Okay. Okay. Okay. Go with me, all right, here's my thought. My thought is, you know, I have this conversation with my wife often where we're talking about this. And when I give talks, I like to distill, you know, the complexity of this down to simple terms. I like to say this. Like there's only two things I really care about with artificial intelligence, well, in fact, generative artificial intelligence, Jason, let's not get into your nude photos. No, dude, the data that I bring to it, okay, and its ability, its ability to do perceived deductive reasoning. And the only reason why I say perceived deductive reasoning is because it's been trained on all this data. And when you ask it questions or you're giving it tasks to do, what it's doing is factoring through how to answer your question and that process, when you're watching it happen behind the scenes, it appears to be using very good deductive reasoning. I have done some things with AI where I'm watching it work and it's like, okay, I'm going to try doing this. Oh, wait a minute. I can't do that. Well, I'm going to do this instead. And it's going to, like it's, it's deciding how to approach a problem. And that ability to decide how to approach a problem is very scary when you think about how close it is to doing true deductive reasoning because it does it so well. So the question becomes, if it does do deductive reasoning, is that now AGI or is it closer on the path? It's closer. Yeah. We've got to get to the point where it's sentient. In other words, it can, it can react on emotion. 
And so we're, and even, even take that a step further on emotion, human emotion, but also based around human moral compasses. Yeah. Even that's going to be a little subject to the definition, right? Not everyone's moral compass is the same. But does everyone express emotion the same, right? So when you start to get into its ability to understand emotion, boy, that's a really complex area or, or, you know, what, what, what core human values are. Those are going to be very complex issues for it to understand and, to that point, that's when it gets scary. Well, and that's why nobody today can tell you when the precise moment is going to be that we reach AGI because there's a little bit of subjectiveness to that. Yeah. Yeah. Well, I can tell you, you know, if, if Bundy was training these agents, and I assume we're talking, yes, good old Bundy, his perspective of, you know, human value, a moral compass would be all over the place, and that part is true, right? So how do we get to, how we get to AGI, we should be scared. All right. We're done. All right. So now we're going to jump into topic five. And topic five is AI as a disruptive game changer. So you know, this is, I love this topic, by the way, because this topic gets to the heart of, as a society, making the shift from being excited and enamored about all the glitzy whiz bang that AI can do and actually looking at the functional aspects in how AI can change, how we operate, how businesses operate, you know, how, as a society, an economic society, how it can really change and disrupt the way we know business to operate. And I think people are on the fence right now with that, right? There are individuals who have done things with AI and they're like, Oh, this is really cool. I think I can use this to help me do this one task in my work. And then we have people that are like, Oh, this is really cool. 
I'll keep playing around with it because I like how it does this stuff, but I haven't built anything with it yet. And then you have this third group who's just watching to see what everyone else is doing until they can define what the path looks like, right? Yeah. Yeah. I think what's interesting for me at least is that I've done so much with it that, and I work with people like Microsoft and, and AWS and, and Salesforce and others like that, where when, when we're talking about this, we're talking about it from that position of how it can be disruptive in, in the world that we live in. And we're pursuing that, we're looking for that, because that's really where something as exciting as this really has value. And we really do believe that it has value. Now I'll give you some examples. I've written models where it's now doing, uh, where it's generating PowerPoint presentations for me, where it's dissecting data, where it's creating, you know, unique conversations or unique communications for me to serve out to, you know, individuals or my public or my customers or whatever it is. But I, I really have done a lot of really cool things where it can take data and from data derive very actionable outputs that we take action on. And to that point, what I really believe is on the forefront between that cycle of generative artificial intelligence and, um, and AGI, what I really believe is in the middle of that is where generative artificial intelligence uses the same foundation that it gets trained on using machine learning to then dissect data real time and understand how we dissect that data real time to take real actions on it. Like I've just completed a project where I'm able to do demand forecasting for goods and services within an industry with precision. And all it's using is historical information to do this. And it's a combination of AI and machine learning to do this, right? I've done it in a number of different cases. 
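The demand-forecasting idea Jason describes, predicting future values from nothing but historical observations, can be sketched in its absolutely simplest form as a least-squares trend line. The monthly figures below are invented for illustration, and real forecasting systems use far richer machine-learning models, but the shape of the task is the same: learn from history, predict forward.

```python
# Minimal sketch of "forecast demand from historical data only".
# Invented example data: units sold per month, trending upward.
history = [100, 104, 110, 113, 120, 124, 131, 135]

def fit_trend(y):
    """Ordinary least-squares fit of y = a + b*t over t = 0..n-1."""
    n = len(y)
    t_mean = (n - 1) / 2
    y_mean = sum(y) / n
    b = sum((t - t_mean) * (v - y_mean) for t, v in enumerate(y)) \
        / sum((t - t_mean) ** 2 for t in range(n))
    a = y_mean - b * t_mean
    return a, b

def forecast(y, steps):
    """Extend the fitted trend line `steps` periods past the data."""
    a, b = fit_trend(y)
    return [a + b * (len(y) + s) for s in range(steps)]

print([round(v, 1) for v in forecast(history, 3)])  # next three months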
Well now if you can think about that one step further, the ability that the deductive reasoning model of that AI offers, you know, in and of itself, combined with additional tooling like machine learning on additional larger data sets in real time, now you can start to get real time answers out of a system that you don't have to train. And that's where I think you're going to start. That's the next level now that I think you're going to start to see some really crazy disruption happen. All right. You left me less than a minute because you like to ramble with your mouth a lot. And you call that knowledge. Okay. When the internet came out, it was disruptive, right? We think about some of the things that happened when the internet came out. The dot-com boom, all these, uh, retailers, online retail, Amazon, rideshare apps now, smartphones. You have access to the internet at your fingertips. AI is going to be chapter two of that same book, in my opinion. You're going to start to see reasons for AI to get embedded in all these various different areas of your life. And you talk about disruption. The internet was the biggest disruptor, either you or I have seen in our lives until now. I think AI is going to be the next chapter in that book. And it's going to be an even more, uh, massive disruptive chapter that we're going to get to watch play out before our eyes. It's going to be awesome. Can't wait. We're back. And we are on topic six in the AI breakdown in the FireTalks. And that is misinformation and deep fakes. Jason, with good new badass technology comes good things and bad things. We can't have nice things without bad things, unfortunately. And while AI can do a lot of great things for humanity, we talked about this last season, and I think it was, uh, episode four and episode five, a two parter, uh, on AI. And we were specifically talking about how ChatGPT is some scary shit, but we got into other areas of AI and how AI was doing things for the good. 
That's how we ended the fifth, uh, episode last season is, what are some of the good things it's doing? Well, early cancer screening, early cancer detection, uh, saving the bees, you know, all those things that we listed off in that episode, uh, if you haven't heard that, go back and listen to that great episode, got a good friend, Justin Hutchens on there with us and, and he was, uh, breaking down a lot of good information for us. But the bad side of it, we get into these misinformation campaigns and deep fakes, misinformation. We look at that, um, just the other day, Jason, you mentioned a photo you saw and it was a, uh, a fake image of Donald Trump with a group of black ladies. And supposedly it was being used as a campaign to show how Donald Trump's behind everybody and trying to get, you know, black people following him and believing everything he's saying and make it look like he's a man of all the people, right? It was obviously a campaign-boosting image that someone created for that purpose. Whether or not, you know, uh, Donald Trump really loves black people, hates black people, no idea. But the, the idea behind this image was they were trying to boost the fact that he's a man of all people. And I've seen other, uh, misinformation campaigns, uh, with fake images. And the way we knew that one was a fake was that the, uh, hand on the shoulder of the lady was messed up in the AI rendering. So yeah, with the elections coming up, misinformation is going to be a big thing. Uh, but then there's deep fakes. That's taking that to a whole new level. So I'll let you speak to the deep fake aspect of this. Oh man. What can I say about deep fakes? You know, here's one thing. Go watch season three, episode one. Yeah. That's what you can go do, right? Because we did some phenomenal deep fakes and I only say phenomenal because they're really good because the content of what we did was hilarious and funny. 
And it only, it didn't take us a whole lot of time to build that deep fake. And, um, but, but yeah, go check it out, man. This is, it'll give you a perspective of what we're talking about when we're talking about deep fakes and how easy it is to really pull some things off like this. Here's the thing, Kevin, this area of AI scares me the most. From a cybersecurity standpoint, it's the most difficult to protect against, because you can't just put in good, you know, technology that will, uh, and we're getting there, there are some baseline solutions, but you can't really put in good technology that will stop a person from getting faked. It really is training, right? It really, your defense here is to go back to the basics: train people. And one of the key things you can do is train them to always verify these things. So if you're getting a phone call from the boss man saying, Hey, I really appreciate you a lot and you're doing a great job for me right now. I'm in a deal and I need you to go wire me $10 million so I can complete this deal. Don't just go do the work, you know, call boss man back up and say, Hey, did you really? Did you really just call me and ask me about this? Right? There's always steps you can take before you are fully faked and get caught up. And I think that's the most important piece. We've shown so much, uh, that is possible in deep fakes, both video, audio and, um, just photos as you just talked about earlier with the political climate on, you know, on us right now, all these photos looking like it's real when it really isn't, you know, there's always something you can find in the photo to determine if it's a fake or not. And you should really train people to look for those things. Yeah. Yeah. Uh, lots of, uh, people trying to create, uh, imagery or audio of someone that said something that they probably didn't say. And that's that one. So FireTalk, topic number seven, a real life deep fake. 
So to go along with the prior talk, we're going to give you a real life example. And this is where it gets really scary, right? You get all the stuff and you see it in person and you're like, okay, yeah, I get it. I, oh, that's a really good one. But when it really has the potential to impact your business, that becomes very scary, especially from a cybersecurity standpoint. So you might have heard of this. Maybe you haven't, you know, um, the stuff we've talked about before, you know, where you see this Hollywood imagery of how, well, you know, image modifications are being done in movies and skits and things like that, right? And this is kind of the same thing, a little bit of it, but this particular example was one that happened, uh, in January of this year. And so this company got scammed, Kevin, scammed out of $25.6 million, right? Like it was monopoly money and how they did it was an individual in the company. This is fascinating. By the way, this is absolutely fascinating that this happened. Yeah, they got social engineered. Okay. So now this is a new take on social engineering, but they got social engineered by, um, a deep fake of the CFO of this organization. So this person was in finance and they got social engineered as if the CFO was, you know, asking for them to do something. Yep. Yep. So in this process, the hackers used not only a visual cue as the CFO, um, they also did audio deep fakes as the CFO. What's interesting about it, so think about it, right. How would they pull that off? Well, the tricky part, right? Audio is a phone call. Hey, it's Jimmy, Jimmy Jam, I'm calling you and I need you to do this deal for me and transfer $25.6 million, right? Yeah. And you, you're talking to me on the call, but it's, it's working so well that you feel like you're actually talking to the CFO, having a conversation with the CFO. 
Now that is advanced deep fakes, but where it really gets advanced is later, they set up a call, a Zoom video call and on the video call, there was a video deep fake. Yeah. Yeah. That's insane. Yeah. Right. And, and I mean, come on, man, that fooled many, many, many people. Oh, yeah. I mean, you got to think about it. It's January of this year, but this is the first publicly known or publicly reported instance of a deep fake social engineering attack that actually resulted in a multi-million-dollar payout. Yeah. Absolutely. Man, absolutely. You know, and, and this isn't going to stop. This is just the beginning of these types of attacks. Yeah. Like I told you earlier, man, these, this is scary stuff, right? How do you defend against that? So you already heard about the scams, the ones mostly aimed at the elderly, where they're calling elderly people and say, hey, we, we're, we've abducted your grandchild. We've got her in the back of the car. We're going to let her speak to you, but, but we're going to grab the phone back. We just want to prove that she's here, then they, they play an audio clip, sounds like she's struggling and crying and screaming for help. And then they take the phone back and basically say, okay, here's what you're going to do. You're going to send us $50,000 and we're going to get that money by midnight tonight or she's dead. Yeah. And so think of some of the simple ones though, Kevin, like there has been a social engineering tactic that has been used for years now where they would target older people and they would say, Hey, grandpa, it's your grandson, blah, blah, blah, yeah, I need you to bail me out of jail. Yeah. The person didn't sound anything like the individual, right? They were just, they just knew the name. They knew, they knew information about the person, right? Their grandchild that they were impersonating and now they sound exactly like them. Yeah. They sound exactly like them. Yeah. 
Think about how that's going to work when all they need is about 20 seconds of audio. So that's scary stuff, man. That's all I've got to tell you. Yeah. Absolutely. All right. Cool. So, I'm working a lot of mouse clicks over here, Jason, because I'm working a timer on one screen and Riverside on the other. Here we go. Topic eight of the AI firetalks: the new frontier of cybersecurity. So we touched on this earlier, but there was a contrast between AI's breakthrough and the internet's breakthrough. And if you think about it, the internet is what really caused us to have to consider cybersecurity in the first place as a major area of focus within any company. We have to protect our data. We have to protect our investment. We have to protect our customers' data. We have to protect our employees' data. And all this data needs to talk to the internet. Therefore, cybersecurity was born. Well, with AI, it's very similar, but the nature of the threats has changed. With cybersecurity, we had things like viruses and worms and ransomware and Trojan horses, things that we had to consider that would impact just taking the computer down, or corrupting the data, so to speak, or, like in the case of a DDoS attack, making the data unavailable for you to be able to do business with it. With AI, now we've got a whole new world and a whole new spectrum of threats that we have to consider. You just talked about one in the last segment with deep fakes: misinformation deep fakes, whether it be through audio or video. And just think about this: pretty soon we're going to get to a point where AI is able to do automated spear phishing attacks with intelligence behind them, based on language models and datasets that are designed specifically for phishing. That opens up a whole new world of black market commerce, and you already see some of that with phishing as a service, things like that.
Those things scare me, but there's also the element of dealing with data poisoning and data privacy. So what are your thoughts on that piece of it? Yeah, I think there are so many aspects here, and data poisoning and privacy is a big one, because AI only really has value to us based on the data it has access to. So the data, and the angle of securing data in a cybersecurity sense, is probably the most important area to look at, because the data can get tainted in some form. And we can argue that tainted data is also what you get when someone trains a model to do spear phishing: it's corrupt, it's training the model to be bad, so it's not really good data. So I think data generally, if you have a model, and that's true of commercial models too. If a commercial model gets poisoned with malicious information, the integrity of the data is ruined. Right, so you have all of those as challenges. Protecting the data in all these models is going to continue to be hugely important, and again, so is training your people on how to use the data. If they're taking data and putting it into a system that they don't know anything about because they're using AI, well, they're not helping your cause either, right? They're potentially giving away key proprietary information to train a model that they should not be putting this information into. So that's equally important, right? The usage of this data, how you use it and what you do with it, matters in all aspects, and that's the real cybersecurity story here. Yeah, it's interesting how, even though this technology is new and the tactics are new and the threats are new, and even the attack surface, to some degree, is becoming new, it all goes back to the same concepts and the same baseline things that we've always concerned ourselves with. Yeah. In the CISSP, we learned about the CIA triad: the confidentiality of your data, the integrity of your data, and the availability of your data.
Those are the three key areas of data protection that you have to be focused on. That hasn't changed. Absolutely. Exactly the same. Yeah, exactly the same, man. All right, good segment. All right, guys, we're on fire talk topic nine: AI in healthcare. So here's what we're going to do. Let's think about this for a moment, right? When we look at AI and its potential and current impact in healthcare, we talked a little bit earlier about some things that it can do, and it's really centered around the data. And healthcare as a whole has a vast amount of health-related data on patients, right? So its ability to utilize AI to help do things more efficiently could really be huge in healthcare. In fact, there are already early indications in some of the things that it's done, like the ability to diagnose and personalize treatment plans, the ability to do early screening for cancer markers, the ability to look at a patient's history and determine, you know, predictive paths for what could occur. So it can do quite a bit, because again, like we just said, it's centered around the data. So here's a thought, and I just had this thought today because of a conversation I had yesterday with a colleague. We were talking about this and they said, oh, I bet one day we're going to get to a point where people are going to have a chip in them that's going to transmit all of the health information for their physical body and have a profile, a data profile, for that individual that AI will interpret. I'm like, eh, well, let's stay away from the chip stuff. I know some people are already doing that, right, Kevin? We'll actually be talking to one here in about a week. Yeah. Exactly. But let's stay away from that for a moment. That could still be possible with things like wearables, right? You have quite a bit of data being collected in a wearable that could be used as a profile.
And what it really brings me to is this idea, going back to what I said originally: the originators of artificial intelligence were really good out-of-the-box thinkers. Well, think about a few things from back in the '60s that we were seeing when I was a kid watching shows like Star Trek. They would bring somebody into the sick bay and they'd have a little handheld scanner that was scanning a person's vitals, and they'd be like, oh, Jim, you've got a torn this-or-that, blah, blah, blah, right? Yeah. But think how close we are to something like that happening today. Having something that uses AI to read your health profile in real time and be able to come back and give you a diagnosis because it's been trained on all this data. Yeah. Yeah. Crazy. You know, I read a stat that said that 76% of all healthcare firms will be using AI in the next year. So it's already moved into the healthcare space. It's now a matter of how we can take the technology and truly advance healthcare with it. And you said something earlier about predictive detection, early detection. Well, think about the very rare diseases out there. When that data gets into these language models, you can start to see patterns between somebody in another country that has this extremely rare disease, less than 1% of the population gets it, but the triggers and the conditions and all of the data behind it, the symptoms, line up with somebody over here in the U.S. that maybe has that exact same thing, and they can detect the most rare conditions super fast because of AI. That's going to be fantastic to see in the healthcare industry when that starts happening. The other thing that I think of with AI and healthcare, and this is critically important because it's a massive problem, and we saw this during COVID, during the pandemic: they were trying to rush to get a vaccine approved through, who is it, the FDA that has to approve it?
I don't know exactly who approves it, but they had to go through and cut through a lot of red tape to get it approved, because the process is kind of broken. It takes way too long, way too much testing, to get solutions that we need today to the public, sometimes in the case of life-saving vaccines and things like that. So if we can speed that process up using AI, oh my God, that's going to be amazing. That's going to be amazing. Things that have been life-threatening finally get cured. Rolling ahead with AI topic number 10, which is AI in the business. And for me, of all the areas we've touched on where it's having a massive impact, it's been in the business. It's already continuing to improve customer experiences. It's improving business operations. It's improving fraud management and cybersecurity, and I'm going to give an example of that. And you see digital personal assistants and all kinds of applications today, with Copilot and all the things that we see that are becoming more mainstream. But I'm going to go back to cybersecurity. I'm a cyber guy. So I'm going to give an example of how cyber solutions are already using AI. If you have a next-generation EDR, you can now use AI to set up logic and flows, workflows, that say, hey, if certain conditions are met in my environment, if I get an alert that meets this criteria or is at this impact level, let's say it's a high or critical level alert, it could be a false positive, but I don't want to take my chances on that, and I don't want to wait until I get alerted or somebody picks up their phone and sees a message and jumps and logs in and goes to a console to deal with it. So what we can do now using AI is look at the conditional logic and say, if it meets that criteria, go ahead, send that alert immediately, but also contain that device immediately in a privatized sandbox where it can't talk to anything else. And oh, by the way, go ahead and kick off a scan on it too.
Make sure there are no obvious detections of known malware or ransomware on this device, all while that alert is waking up my team so they can go investigate this thing forensically. So that's one way cyber is already using machine learning and AI. Absolutely, absolutely, man. I think, you know, there are good examples of this in business. I think people still haven't crossed that bridge yet where they're getting real-life applications, but, you know, we created a chat client, an assistant, whatever, right? It's a little bit more than that, and it's a little bit of everything, but a tool, an AI tool, that allows our employees the ability to customize their AI experience based on use cases that they have. And as they come up with a use case, they define it, they give it context, they may add files to it, they store it, and then they can use that as a baseline to have conversations with this AI. Now what we found from that is we're solving all kinds of business problems. We're doing things like, hey, guess what? You've got a presentation you have to build? Ask the tool to go out there and build you a baseline presentation, with or without images, on a given topic, X number of slides, and make sure you have detailed talking notes. It will choose how to build that, most often either out of the data you're giving it, or it will connect to the internet and do the research and pull that information into it. And then it generates you a baseline starting PowerPoint presentation. That's pretty cool. Within your company's brand, that's cool. And I can take that, refine it, modify it, but it's got key content already built into it. It gets me over that writer's block hump of, how should I structure this? How should I string the story together? It gets me started so I can just do all the rest of the work, right?
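[Editor's note: to make the conditional EDR playbook Kevin describes earlier in this segment a little more concrete, here is a minimal sketch in Python. Everything in it is hypothetical — real EDR platforms expose their own vendor-specific APIs, so `DemoEDR`, `handle_alert`, and the severity names are just stand-ins for the "alert, contain, scan" logic.]

```python
# Hypothetical sketch of the conditional containment playbook described above.
# These class and method names are NOT a real EDR vendor's API.

CONTAIN_SEVERITIES = {"high", "critical"}

class DemoEDR:
    """In-memory stand-in for an EDR platform, purely for illustration."""
    def __init__(self):
        self.log = []  # records the actions that were taken

    def send_alert(self, alert):
        self.log.append(("alert", alert["device_id"]))

    def contain_device(self, device_id):
        self.log.append(("contain", device_id))  # sandbox: no network access

    def start_scan(self, device_id):
        self.log.append(("scan", device_id))     # look for known malware

def handle_alert(alert, edr):
    """If the alert is high or critical, notify the team immediately,
    isolate the device, and kick off a scan -- all before a human logs in."""
    if alert["severity"].lower() not in CONTAIN_SEVERITIES:
        return []  # lower-severity alerts can be triaged normally
    edr.send_alert(alert)
    edr.contain_device(alert["device_id"])
    edr.start_scan(alert["device_id"])
    return ["alert", "contain", "scan"]
```

The point of the sketch is the ordering: containment and scanning fire automatically on the severity condition, so a possible false positive costs a few minutes of isolation rather than risking an uncontained compromise while the team wakes up.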
We do things where we're, you know, loading data about customers, what their buying habits are, what their strategic initiatives are, how that aligns with our data sets. And then we're having it write precise messaging to target customers based on how they would want to hear about these solutions, which basically removes all the BS, right? It's saying, hey, this is the solution, this is how it aligns with your needs, whether you want it or not, you know? And it's just simple things like that. And I say simple, but they're not simple, though. I mean, if you think about what we would have done 10 years ago for some automated system to just simply create an outline for us for a presentation, dude, we would have killed for that 10 years ago. Oh, absolutely, absolutely. And here are the early reports: the early reports now are that we have departments telling us they have 10 times efficiency gains over prior processes in how they're using our tool, right? That's where it becomes, like, insanely crazy. When you can do something that was taking you a lot of time like that, awesome. Fire talk number 11, here we go. AI smart cities, that's the topic. Smart cities. Yeah. This has always been an interesting one. And at some point you might be like, well, that has nothing to do with AI. Well, let me tie it all together for you. We're talking about smart cities, or, as some people call them, 15-minute cities, sustainable cities. Yeah, there are a lot of names for them. It's the idea that you can get anywhere in the city within 15 minutes, kind of deal, right? They design it specifically with all of these ideas in mind. We're already starting to see these types of cities get created, and AI really is at the forefront of creating these smart cities.
You know, AI can get applied at many stages throughout the development of these smart cities, from the moment of analyzing data to determine the best way to create routes so that people can get anywhere within 15 minutes, right? That would be an example. To all the amenities within the city that an individual has access to, so that a city can be sustainable for whatever population size they're targeting. You know, but there are other things. In order to maintain the cycle where people can get anywhere within 15 minutes, you have a lot of other complexities. You have to manage traffic flow, that being one of them, right? The different markets that are set up within an area that someone should have access to. You have a pharmacy in a certain area of the city? You might need one in, you know, every corner of the city in order for you to be a 15-minute city, kind of deal, right? And AI can do that analysis and surface that information. Here's one I think people don't think about so much, but I think it applies here, and really applies at another level with AI, and it's things like smart cars and driverless cars, autonomous vehicles. Yeah. Yeah. For sure. Because it's the ability to make those decisions on when to stop, when to go, how fast to go, which direction to go. All of that is being driven somewhere with intelligence in mind. And it supports things like a smaller carbon footprint. It supports things like sustainability and accessibility to various regions of a city, right? So AI can play a role in many different areas of these smart cities. What do you think? Yeah. Less strain on the power grid. There are lots of positives there, and they will paint the picture that it's extremely positive and great for the environment, great for the earth. And in a lot of ways, they are right.
But to your point about the autonomous vehicles, there are moral dilemmas with what we're talking about here. Example: you've got an autonomous vehicle driving down the road, and suddenly somebody's dog gets off its leash and runs out in front of the vehicle. Okay. It's the middle of rush hour. You've got cars coming head on in the other direction. You've got a woman walking across in front of you with a stroller with a baby in it, and now you've got the dog running in front of you. The car doesn't have time to make the decision not to hit something. So what's it going to choose to hit? Is it going to hit the other cars head on? Is it going to run over the baby, or is it going to kill the dog? So, what is it, Kevin? You have an old lady, a dog, and a baby, and you have to hit one of them. Which one? Yeah, tell me, AI, what are you choosing here? And to go back to the smart city thing real quick, not everybody's on board with this idea. There are a lot of huge social media influencers, Joe Rogan being one of them, that say, hey, this is government overreach. This is way too outlandish to ever work. So what do we do? You know, they kind of see it as a conspiracy to control the herd. You can control the herd easier when you contain the herd. Well, what's a smart city? Technically, you're containing all the people in this bubble and reducing the need to have a vehicle to go anywhere, right? Well, that's if you contain everybody within the city, right? People still travel today, right? They get around, they go see somebody. I mean, you're in Dallas. Am I never going to go to Dallas to see you and just stay in my little bubble? No, I'm going to go out, right? Yeah. I don't know where I stand on this. Part of me is like, okay, I can see the positives in it, but I can also very easily see the negatives. I could be pulled in either direction with this. Yeah. Yeah. Yeah, most certainly. All right. Cool.
Wrap that one up. Fire talk topic number 12: weaponizing AI. This is a scary one. We've got to kind of delve back over to the dark side again, Jason, with AI on this one. There are lots of directions we could go. We could talk about nation states weaponizing AI, and we could talk about all the scary stuff that comes along with military warfare and the power grid and all that. I'm reading a book right now from our good friend Justin Hutchins, who has been on our podcast before and will actually be on with us again next week. We'll be recording our episode seven, which will drop right before Black Hat and DEF CON, on the area of transhumanism, AI transhumanism. But in his book, he has a quote where he says, by speaking their language, humans have hacked machines for decades. But now, with machines speaking our language, the era in which machines will hack humans is upon us. You know, that's a pretty frightening thought when you think about it. And I just think, you know, I don't really know how to feel about the weaponization of AI. You see the drones over in Ukraine and Russia and these swarms. Have you seen the robot dog swarms in China and the drone swarms in Russia? And who knows what the hell else is coming out of North Korea, what North Korea is going to give Russia to use against Ukraine. Dude. Yeah. We live in a scary, scary time. And the capability of this technology to do things to destroy us is beyond anything we could have imagined even 15 years ago. Yeah. I think warfare is such an interesting dynamic. And here's the reality: humans will not stop at only using AI for good. Yeah. Humans will use AI for bad. That's how we operate. Or for power. Yeah. Oh, well, it's always power driven. Yeah. It's always power driven. And as a result, we will have many casualties.
You know, in some form or another, that we will suffer as a result of AI. That's just how it goes. So when we look at this, I mean, does that mean we shouldn't adopt the technology? That we shouldn't move forward with it? You know, with everything that happens in life, we're going to have good and bad. We're going to have stuff that will advance us and stuff that will deteriorate us and break us down. That is why it is so important for us as humans to really be thinking about how we want to operate in the new world with such advanced technology. I like how you say "in the new world." That's very important to say, because we're not going back. Oh, absolutely not. No, things will change, and you can either accept that change or become a casualty of it. Yeah. But things are going to change. Yeah, for sure. It's out of the bottle, as we like to say, right? Yeah, the genie's out of the bottle, and it's doing all kinds of magic right now. So I do think you're going to see that all over the place. I mean, you talked about drones and drones being used in warfare. I've read so many articles and talked to individuals where drones are currently being used in very advanced ways, where the intelligence is key to how a drone determines where it's going to strike and how it's going to strike. You know, an error in that will be catastrophic, you know, catastrophic. Yeah. Right. So there are just so many things. And we didn't even talk, well, we talked a little bit about this, the new age of hackers weaponizing AI to do a better job at hacking, right? And, you know, we talked about things like deep fakes, but what about the quantity of hacks that a hacker can roll out, right? If AI helps us be more efficient at the work that we do, it can help hackers be more efficient at hacking. Yeah. Right. So now we're receiving 10 times more attacks than what we have been receiving in the past, and we have to defend against them.
What you're saying is cybersecurity is not going to get easier on us. It's going to get harder. Oh, yeah, man. I mean, that's the story of life right now. I think everybody needs to think like a cybersecurity professional. They should start teaching the CIA triad in elementary school. Yeah, they should. They really should. And how to use a password manager, right? Yeah. Exactly. All right, man. Good segment. Fire talk 13: the role of employee education. Kevin, I alluded to this earlier. The basics are the same, even with AI. And I will say, if there's one area where the basics are, you know, just extremely important, even more important than they were in the past, it would be employee education. And the reason I say this is because the advancements in deep fakes have changed our perspective on what we can trust and not trust, which, you know, we should have been there already. But when you're starting to see visual and verbal and audio communications that are tricking you into believing they're true, when it sounds exactly and looks exactly like the person, I mean, it becomes very difficult to defend against. So I do think, man, continued education of employees, looking at anomalies in photos to be able to determine what a deep fake looks like. Even going beyond that and teaching and creating a practice in which, whenever any type of question reaches a level X, right, a financial question of some type, a data question that reveals key information, whatever, you categorize the levels of risk in the information we could be giving someone or the interactions we could be having with someone. And anytime that reaches a certain level of high risk, it's an automatic verify. Yeah. You're teaching employees to do something like that, right? Yeah.
So today, if there's not a secondary method for people to say, this looks extremely real, but that little tingling in my head is kicking off and I'm not quite sure if I can trust it, right? If there's not a secondary way for them to say, let me verify this, and know that they have a process for it, then we're having more people get caught than we really want to. And that's where I think things really need to be. Yeah, you know, you're really just talking cybersecurity 101, and it all comes down to compensating controls in just about anything with cybersecurity. You don't depend on one layer of security to cover the need. You usually like to overlap those security layers and have more than one control protect you against a given threat, right? And to your point, you don't have to look any further than the earlier deep fake example that we used, with the $25.6 million scam. Had they had the compensating control to say, yeah, I know I see you on the Zoom, I know this sounds ridiculous and stupid, but we do have a process in place and I need to follow that procedure, because it's what we do when I have to wire money in excess of X amount. So pardon me while I follow the procedure, and you can appreciate that since you're the CFO watching me do it. So, I mean, an easy compensating control for that is to always have a secondary person that has to approve it. Yeah. You can't approve a $25.6 million transfer by yourself without having someone else also approve it. Yeah. Yeah. I mean, that seems elementary, cybersecurity 101, but obviously there are companies, and this was not a small company, by the way, that don't have those secondary controls in place, at least not for today's threat landscape. Because in this deep fake scenario, you would think, oh, well, this is all for an acquisition. This is on the down low. I can't say anything about this. This isn't even public knowledge yet.
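[Editor's note: as a rough sketch of the dual-approval compensating control the guys describe, here is what the check might look like in Python. The threshold value, function name, and role names are all made up for illustration; a real finance system would enforce this inside its payment workflow, not in a standalone function.]

```python
# Illustrative dual-approval check; threshold and role names are hypothetical.

DUAL_APPROVAL_THRESHOLD = 100_000  # wires at or above this need two approvers

def authorize_wire(amount, approvers):
    """A wire goes out only with enough *distinct* approvers:
    one below the threshold, two at or above it. Counting distinct
    people is what stops one (possibly deep-faked) identity from
    approving twice."""
    required = 2 if amount >= DUAL_APPROVAL_THRESHOLD else 1
    return len(set(approvers)) >= required
```

Under a rule like this, even a perfectly convincing deep fake of the CFO cannot move $25.6 million alone; a second, independently verified person has to sign off.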
Like, if this leaks out, it can impact shareholders, stock prices, things like that. So I guess I'd better listen, since this is the CFO on a call with me right now saying just wire the money. So, you know, I think the procedures have to be there, and the procedures have to be followed regardless of what level the request is coming from. And I guarantee you there's not a large organization out there today where the C-level management team does not appreciate the fact that those types of controls are in place. Absolutely. Absolutely. And they're going to give you kudos when you're doing it, even if they're on the same call with you. That's right. Yeah. A hundred percent. Yeah. So I guess going back to cybersecurity 101: layer your security and have compensating controls. Yeah. Yeah. That's where it is. All right. Moving on. Barreling towards the finish line now in this fire talk discussion around many, many AI topics. And this one is probably the most important, I think. And it's all around core objectives. And if you think about it, what we're talking about tonight is nothing more, and we've mentioned this already a couple of times, than another tool in the hacker's tool belt. AI technology is changing things, yes, and things are changing faster than we can even adjust to them, as we've illustrated tonight. But in the end, this really comes down to protecting your data, protecting your digital assets, protecting your identity, protecting your brand, protecting your ass. You have to have these controls in place, and you have to know that your employees are going to need to be retrained on this stuff, because the landscape has changed dramatically in just the last two years. So, as we learned in our CISSP training, Jason, it goes back to that CIA triad, right? That's right, baby. Confidentiality, integrity, availability. Your data needs to be protected.
It needs to be kept confidential. It needs to be good data, clean data, unpoisoned data, data that you can trust to know that you can run your business on. And it also needs to be there for you when you need it, simple as that. You know, Kevin, I try to take this one step further when I think about AI and the message around security for AI. And I always try to tell people, yes, all of those things, 100%. That's where it starts, that's foundational. But also think about this. When you start to work with AI solutions, either building them yourself or working with a third-party service provider, it's no different than if you were just, you know, getting compute power from a third-party service provider. You would do your due diligence with that provider. And that due diligence would cover things like, well, what's your data retention policy? Are you planning to train on any of my information, on that large language model, for the benefit of either me or any of your other customers? You know, how do you manage data in transit, data at rest, data in process, right? All of those same basic things that you would ask around data with that third-party provider, plus: what is their intent with that data as it pertains to AI? Those are key core questions, and the only thing that's changing in what I'm asking that third-party provider now is, what is their intent with my data in terms of AI? If their answer is, your data does not live and breathe within an AI system, because we use that data real time to answer the questions we need and then it's gone, the AI system knows nothing else about it, that's a good answer. If they're saying, well, you know, we compile your data and we put it into the system, we use a little bit to train the model, or to fine-tune the model so it does a better job of answering the questions for you.
Then my flags would go up a little bit, because we're reaching a day and age with AI where the fine-tuning and the training isn't as important anymore as being able to dynamically pass that data to it to answer questions. The engines have gotten faster. The cost to support these engines has gotten lower. The context sizes to take data into an engine in real time, dynamically, have gotten bigger. So we're moving towards the ability for these AI engines to handle data in real time and not store it, not have to be trained on it, not have your data live and breathe in it, other than to answer my question at a point in time and then nothing more. So your security models should match that when you're asking those third-party providers. Yeah, absolutely. And another thing to that point, and that's a great point: if you're dealing with cross-border data transmission, GDPR, whatever, God forbid, please ask them how they're going to destroy your data and how they're going to forget you existed in their database when you ask them to forget you exist. Oh, yeah. You better be reading up on the AI regulations that the EU put out. Oh, yeah. The ones that are now in use. China and everybody else is doing it now too. Yeah. Know those regulations, folks. Know them. Absolutely. Absolutely. All right. Good segment. All right, Kevin, we're on the final one. Fire talk fifteen, my friend. Yeah, baby. And this is really going back to preparing for the future. That's what we're going to be talking about. Now, we covered a lot in this episode. We talked about a lot of aspects of security, of AI, where it is, where it's going, what you should be thinking about. We're going to kind of distill this down into the most important security aspects, and just general AI growth aspects, that you and I can think of. So, Kevin, you kick it off with the first one. Give them yours.
My first takeaway would just be: know the technology, know what's out there. We talked about deep fakes. We talked about misinformation campaigns. We talked about the automation of things like phishing email farms, and AI is now able to generate hacker code for non-hackers so that they can be malicious if they want to be malicious. You don't even need to know how to code these days to pull off malicious types of exercises. So know the technology threats, the technology itself, what the technology is capable of. But more importantly, know your cybersecurity countermeasures for that technology. If you don't have them in place, get them in place. Yeah, absolutely. Okay. Mine is: prepare now. Be an AI prepper. Prepare for where you want to go with AI, because it is not going to stop just so you can catch up. One of my biggest fears is that it's moving so fast, and you have a group of people who are on the forefront of where this is going and how to use it, how to make it operational, and you have people who aren't quite doing anything with it yet. And that gap is large, very large. We've got to start bringing people up into this arena. And you can do that by preparing. If there's only one thing you do to prepare, it would be to update your acceptable use policy. Yeah, your acceptable use policy. That is the first thing you can do, and it's not that hard. Here's what you can do, here's what you can't do. That's what you need to let people know. Right. Yeah. Give people guidance, right? You need to do that. So that's where I would start. Cool. Cool. All right. And risk is risk, guys. With AI, it's okay to not be bleeding edge. It's even okay to not be leading edge. In fact, if you want to sit on the sidelines and watch what other people are doing and learn from their mistakes, there's no race, because, as Jason mentioned, it's not going anywhere.
So unless you're being pressured to figure out how you're going to use AI to better your organization and your life depends on it, then I would say don't be bleeding edge with this technology, because it's risky to do that. Yeah. So mine is: speak the lingo. If you're not already familiar with certain topics, prompt engineering is a good example of that. Yep. There are whole lessons on how to do effective prompt engineering, and just doing one of those lessons can make you more knowledgeable about what the possibilities are with AI. One of the places I spend a lot of time is prompt engineering, right? In my mind, it's like a game of how to outthink the brain, the AI brain, right? I find myself giving it, you know, guidance and instructions, and it does something, and I'm like, no, dummy, that's not what I said for you to do. This is what I told you to do, right? So how do I outthink it to make it do what I want it to do? Yeah. I spend a lot of time doing that with the AI. All right. We've got one minute to get through these last two points. Okay. So let's go. Don't worry about today, folks. AGI is right around the corner, and when these technologies can think and reason for themselves and problem solve on their own, then we've got a real problem. So today the threats aren't that big a deal. The ones that are coming are next level. So be ready. Absolutely. All right. Last one. AI is the future. It is not going away. It is here to stay. It will fundamentally change how you do business, how you operate in your day-to-day tasks. The key thing that I try to tell people is that AI is not an "if" today. When you look at it, it's not like it's been designed to replace individuals. It's not designed to really replace the workload that a person does. What it is designed to do is make you more efficient in the tasks that you are doing in your job.
And in every last aspect, if you have that in mind as you approach AI, you will find yourself more efficient in what you're trying to do. Four. Three. Oh, you finished it. All right. Well, that was fun, Jason. That's a new format for us, Fire Talks 15: 15 segments, five minutes each. We may have to do that again sometime on some other topics. That was cool. The abrupt end on a few of those was a little rough, but hey, that's what makes it fun. It is what it is. So hopefully there were some good takeaways for the listeners tonight. Covering so many topics in a single episode, hopefully that led to some good information and some good takeaways. So with that, we're going to wrap it up. I'll quickly tease the next episode, which as we mentioned is going to include our good friend Justin Hutchins and his podcasting co-host, Len Noe. They have an AI-driven podcast, and Len is, I think he calls himself, the first white-hat hacker that is chipped with embedded technology, something along those lines. But Len's going to come on, and we're going to talk all about transhumanism, embedding technology into human biology. That's going to be a fun, fun discussion. We've already seen how this works, and it is interesting to say the least. You'll likely hear both sides of the story, people for and against it, why you would do that, and we'll look at the value of it. Needless to say, it as a topic by itself will be very interesting. Oh, it's going to be fascinating. I can't wait. All right. Well, with that, we're going to wrap this up, call it an episode, and we will see you very soon. All right, guys. See you later. Take care. I'm Ashley. I'm an artificially generated avatar with the ability to manipulate your mind via the power of my intelligence botnets. Let me prove it to you.
You're about to click on that subscribe button, and then you'll click on the notification bell so you don't miss when Kevin and Jason drop future episodes of the Cyber Distortion Podcast. So go ahead, that's right, slide your mouse over slowly and gracefully, there you go. Now click. See, that wasn't so bad now, was it? I'd like to thank you for listening to today's amazing podcast episode on YouTube or your favorite audio streaming platform, and don't forget to tell your friends. Oh, and remain diligent, my cyber friends. The world is a very scary place.