Archive.fm

Two Peas in a Podcast

Episode 119 - Annie Burke

Annie is an adjunct architecture professor at Miami University with over 15 years of design experience across corporate, retail, and academic sectors. She is currently pursuing her Doctor of Design (DDes) in architecture and AI at Florida International University.


In addition to her academic career, Annie is a digital artist and curator. She co-curated Cincinnati's first digital art gallery and is contributing for the second time to the country's largest light festival, BLINK, this October. Her work spans AI, video, 3D, and experiential content production.

To connect with Annie directly, please reach out at:

https://www.instagram.com/annieaburke/?hl=en

Broadcast on:
01 Oct 2024
Audio Format:
other

- Annie, I am so excited to talk to you about artificial intelligence because it feels like our passions definitely intersect there and it's gonna be an incredible conversation. First and foremost, thank you so much for taking your valuable time and sharing it with me and our audience.
- We're just happy to be here, so stoked.
- Now Annie, who are you and what do you do?
- That is a phenomenal question. I should think, really think about this more often. I will start with, I'm a teacher. Well, I'm a professor. I teach architecture as an adjunct at Miami University, and I was an entrepreneur, and I'm just a tinkerer. I play. I started my doctorate at the school, and I think I only started it so I could kind of dick around and play with things and have somebody, like, say that that was okay. So I'm studying architecture and AI and trying to narrow down my research path. I've been a lot of things, though. I've worked in retail. I worked for a construction company for five years. I've been to six out of seven continents. I have an advanced scuba diving certificate. I volunteered at a dog shelter for 13 years. I fell off a building and broke seven bones. Like, I think I've tried everything that I needed to try to get where I'm at now. So now I think I'm comfortable with being called an educator, maybe a lifelong learner.
- A lifelong learner.
- I like that title the best.
- What is this intersection between architecture and artificial intelligence? Because to a lot of people who are outside of the space, whenever they hear artificial intelligence, they just think of LLMs. And they don't necessarily relate artificial intelligence as being involved in every single field and potentially changing how we think about life.
- Oh my gosh, well, first of all, it's everywhere. So if anybody thought that it wasn't, you know, I don't really know what to tell them. The fascinating thing about artificial intelligence, I think, is that a lot of fields are merging with it.
I heard a statistic the other day that ChatGPT is 10,000 times smarter than any single human, and that when you utilize AI tools, it's like having three of you, essentially. So it's almost, you don't have an affordance to not use artificial intelligence. And so I actually saw a teaching position, and I always feel like, I don't want to say I'm ahead of the curve, I'm not always ahead of the curve, but I'm a futurist of sorts and I get excited about things. And so, you know, I got a degree in, my master's was in visual effects, for like CGI, from Savannah College of Art and Design, 'cause I was like, everything's going to be digital, we're going to make things digital. My teacher was like, what are you going to do when you get out? I was like, I don't know, I'm going to make weird ish from my computer. And he was like, good luck. So back to AI, now I'm seeing job advertisements for, like, even college professors. At the University of Michigan, there's an opening for architecture and AI. And so it doesn't mean you have to be a computer scientist, it doesn't mean that you have to create your own training data, but you can, and I think there's a lot of opportunities for that. I think what it means is that if you're not using artificial intelligence to optimize your workflow, then you're kind of wasting precious moments, 'cause that's really all we have, is time, you know?
- Yeah, and I feel like a lot of employers and a lot of big businesses at this point are on this cutting edge of going, hey, AI is something, but we don't necessarily know how to implement it. But let's first and foremost, go ahead.
- It's so interesting that you say that, because one of my friends works for the VA, and for our non-American people, you know, that's Veterans Affairs, the network, the healthcare that takes care of our veterans. And so I guess this morning he got a chat on his, you know, Teams meeting or something that said, hi, I'm your new AI assistant.
And I just, I think that's funny, because they're not even, like, ahead of it. At Miami University, I got an email last week that said, hey, AI is at Miami. Like, well, AI was already at Miami, because that's how, you know, they recognize weird patterns and people walking in the street and keep us safe, and, you know, Instagram knows my soul and how to recommend any kind of dance video that I'm gonna like. Like, it's already out there. It's just, how do you want to use it to your advantage?
- Yeah. Is it weird to you that a lot of people don't know about, I mean, AI tracking the things that are going on, like, behind the algorithm? Because it feels to me that when I have these conversations with people, they just go like, oh, there's like randomness to whatever hits on social media. There's randomness to some of these patterns. And I go, yeah, but there is some type of a system, some type of a code behind it where AI is kind of running the show, and just because we don't understand it doesn't mean that it doesn't exist.
- Yeah. So when I have conversations with some people like that, I try to be kind, because everyone is coming from, you know, a different perspective. I guess, you know, when I hear somebody saying, you know, like, oh, AI is not a part of my life, I just go, I'm like, let's take a step backwards and look at some of the efficiencies that you have in your life. You know, whether that's the social media kind of thing. And I mean, I guess my goal is to always break AI, like even the algorithm on social media. Like, it'll show me something, and you know what? I'm tired of seeing dumb relationship quotes, and I start pressing, not interested, not interested, not interested. And I start clicking on some weird stuff, like climbing products, shoes, like, I don't know, I just start, I like try to throw it off, because then I think it gets more diverse.
And I know that I'm not like smart in doing that, but that's just like my way of saying, you know, whatever. But I think that that's how those softwares are made to be addicting, though. You know, they give you what you want to see, they know what you want to see. I think one of the kind of creepy parts for me, though, is, I'm not exactly sure how this works, it's when you mention something out loud. Like, once I was talking about vacuum cleaners, and, you know, the next hour I had vacuum cleaners recommended to me. And I know that that's some, like, peripheral setting in my iPhone where it's like, you're allowed to track certain things, and during COVID I turned some of that off, but I've got a new iPhone since. So I think the scary part about that is the transparency in social media. You know, they haven't been transparent about, you know, who uses our data, who buys that. And I don't know what the answer is to that. I don't think a lot of people have the answers to that. So I think it's, I can't beat them. So I'm just gonna, I'm gonna join them and kind of speak for the underdog to the best of my ability.
- Yeah. And to me, it's super interesting that a lot of people don't even realize that, you know, even when you turn your phone off, even when you ask apps not to track, a lot of the time, like, it's been claimed that your phone listens to you, that Google and Apple are always tracking the words that are being said, even if the phone is off and you're doing all the right steps to try and protect yourself. It feels like this personal privacy is kind of going away. And yeah, is that a good thing or is that a bad thing?
- Yeah, I think it's a little bit of both, that privacy is gone, you know, whether this is good or bad.
I'm looking at it this way. You know, I don't know how many people there are in the world, but the more data that the government and people of power have access to, like, even if they're using artificial intelligence to scan it, like, me, you know, waxing my eyebrows is not gonna be what, you know, sets them off and the FBI comes in. And so I feel a little bit safer in some aspects, you know, with, like, intelligence, where I feel, when I realize that they can get somebody's phone and go do something, I don't think we can get away with any crimes, not that I was trying to. I think I feel safer because I recognize the value in, I guess, pooling that data and finding the anomalies in human behavior. And I know that as weird as I am, I'm still not, like, an anomaly as far as safety goes. So, you know, me, like, brushing the dog hair off my couch isn't interesting. But I mean, I do think it's a little weird. I always use this example. I read a lot of data books, Data Feminism, Weapons of Math Destruction, and Target got sued because they basically researched, if a customer was buying, like, these 15 to 18 whatever products, it meant they were pregnant, like, formula, these kinds of things, like, in combination, right? And so they sent a letter to this household and they say, congratulations, you know, on your pregnancy, and it's like a 16-year-old girl, and the dad is like, what the, you know? And so that's where I'm like, hey, hey, you know, like, I'm not saying that, whatever, but her dad was, like, irate. And so, you know, I think there's applications where it's not used maybe as ethically as we would like it to be. I mean, I know what the word ethical means, but, like, congratulating the pregnancy of somebody who hadn't told their family? Yeah, weird.
- Well, let me ask you this, what does ethical mean to you?
Because I feel like there are so many different definitions, and everyone has their own idea of what is ethical and what isn't.
- Right.
- And now we're kind of playing with both the idea as well as the flexibility of the line of, like, where the ethics of a company behind the algorithm and purchasing that data is. And I'm just not sure which direction we necessarily go in.
- I'm sorry, that's my dog, he's a celebrity. So I ran immediately with, like, AI. Like, I have some technical background. Like, I've learned coding for, like, basically native languages for softwares, like 3D modeling softwares, because you have to use code for, like, VFX and particles and things like that. But I don't have a deep, you know, understanding of code. And so I started branding myself initially as an AI ethicist. And once I googled what that meant, I realized I can be that. So I contracted for a firm that does, like, data serving here where I live in Ohio, where we're not gonna say the thing about pets. So I contracted for them. And so I found ethics means a lot of different things to a lot of people. What we extract from data is biased, kind of, to what we want to present. And so I learned just even the way that you leave certain factors or parameters out of data is kind of, you know, unethical. Like, in a job search, somebody likes a magazine that is primarily driven towards male candidates, and so females are gonna get graded lower because of this. I think they had it in Japan. You know, that's unethical because they're not going to have that advantage. Now, some people might look at ethics as, like, spying, and that's definitely one way to look at it. I look at it differently, because I study image-based data, mostly because my goal is to generate, you know, prototype architecture or things like that. And so I had this stint where I was trying to get AI to guess people's races or mix them.
You know, it'd be a Caucasian LeBron James or an African-American Taylor Swift. And so, I don't know if you remember that stint where, like, the African-American or Black George Washingtons were coming out of, like, DALL-E.
- I think it was, was it Google that released their AI?
- Yeah, yeah, yeah, yeah, that's what it was. And so, you know, there's this cultural nuance, like, understanding of culture and the bad parts of our culture, that I think make it unethical, but as we are as well, you know? So I don't know, you know, the answer to that, because we can't rewrite, you know, history. Like, if you Google the word homemaker, all these hot women in, you know, aprons show up. But if you Google, you know, CEO, like, all these dudes show up, and that's some skewed data. There are, you know, I don't know the exact number, so I won't cite it, exponentially fewer women, you know, in fields of leadership in the C-suite. And so, I'm just wondering, how do you, you know, assuage that perception to be progressive without being inaccurate? And so that's kind of what ethics is to me. I want it to be forward and progressive, but if something's not true, maybe we should look at why that is.
- Yeah. And the one big problem that I always have about ethics is that specifically we are trapped in this bubble of the timeline that we know of, right? So, like, during our life, you know, like, here are the things that are most applicable in the last 20 years or last 30 years or the last 40 years that I've lived. We never really think about, like, hey, life in the context of what some of these things meant 60 years ago was so much different. And to me, like, marriage age is always a perfect example of this. I go, hey, for most of our history as homo sapiens, like, the random age of 18, the age of consent, is not a thing that was really enacted or seen as natural or as normal until, let's say, the last 150 years.
So when we have some of these conversations, you always have to go back in the context of, like, well, what was happening at the time? And it feels like when we think about our own ethics, we try to force other people's ideas and different timelines into our own idea of ethics, and we go, oh, like, that person 60 years ago looks so bad today based on the things that we believe in, where at the time that they were living, like, that was common practice, that was normal.
- Yeah, so I'm not an extremist, I think maybe the opposite of an extremist, you know, I rarely speak in absolutes. Anyways, in my doctoral program, though, we have these lessons called Digital Futures, and maybe we can talk about that later, but there's just people from all over the world in different backgrounds that touch AI, a lot of them are architecture, but we had this guest, I think Stuart Russell, maybe he's a physicist or a neuroscientist, I mean, he's just brilliant. And he gets on and he's like, perception is just a controlled hallucination. And that was, like, his answer to what consciousness kind of is, in ethics. So, you know, people aren't looking at things the same, and, like, that was when I think I stopped trying to convince a lot of people. I was like, oh my God, I'm never gonna win. So, you know, I just have to keep pushing forward with what I believe is the ethical solution, but also be open to criticism and feedback. And I think, or I hope, that that's what gives me the ability to pivot and change more often, is because even if I feel defensive about something, I can be like, wow, why do you feel triggered by that, you know, me?
- Yeah, tell me that one line one more time, because I think it's such an important one and we have to unpack that one a little bit deeper.
- The line that he said about our consciousness is just a controlled form of hallucination.
- Our consciousness is a controlled hallucination.
What I find so interesting is that, like, there is so much truth in it, right? Because at the end of the day, every single one of us is living in our own consciousness. And this experience of being human is, like, an individual journey that we are taking steps through. And, like, ethics dealing with consciousness is so interesting to me in today's world, because you have all these people who are, like, you know, let's say anti-child labor, and at the same time they buy iPhones, and then they don't make the connection of, like, hey, if I Google where the iPhone comes from, I find that child labor is part of the process. So if I really believed and bought into it, I wouldn't put my money towards it. Like.
- Sure. And what you'd also think about is, if I don't have an iPhone, then I'm left in the dirt, you know. I mean, Androids maybe have better cameras, but the rest of it, I'm not gonna acknowledge that they exist. I kind of look at it like, even like a dietary thing. Like, I call myself a lazy vegan. I am vegan when it's convenient for me, which, at home there's no meat, no fish, no dairy. But if I go out to dinner with people and there's, like, butter on something and I want a bite of it, then I'll have a bite of it. You know, I'm not gonna say no to lobster. But I'm just, like, in the greater realm of it, I try to reduce my carbon footprint of, you know, mass production of animal products by 90%. You know, and I'm thinking that's better than the guy down the street that eats three steaks a day that took, you know, $1,000 to raise each cow, you know, from whatever. So I try to think about the same things with ethics, too, because for a while there I didn't purchase things that were leather. And then I was like, I really want these Nike shoes, and part of me was just kind of like, you know, I do what I can with what I've got. And that's, like, the best I can do.
- Yeah, I like that approach because I think it's just the most commonsensical approach, because if we, like, get deep into the ethics, and if we were to ask you, hey Annie, list, like, five non-negotiables on ethics, you'll find, and what at least I find in my own life is, the things that I list are not always in line with the actions that I'm actually taking with my dollars and the things that I vote for as an adult. Like, why is it so easy to buy that pair of shoes knowing that it's made in Bangladesh, where child labor is, like, allowed? And a lot of the time it's about convenience.
- Yeah, and why will my nails always be done when they're made out of some carcinogens? I don't know, but for right now I'm holding on to this, and that's a vice of mine.
- What is it about us as human beings that lets us compartmentalize the things that we do and the actions that we take as being ethical, even though we know sometimes we break against our own ethics?
- So I read this book called Supercommunicators maybe, like, a month ago, and that blew my mind in the sense that, the way that we kind of rationalize things is, every time we're speaking, it's like, it gave you all the ethos-pathos things, which I can't quote, but you're trying to get one of three needs met, which is to be heard, helped, or hugged. Heard means you just want somebody to listen, and I don't care what your advice is, I'm telling you that I already know what the answer is and I'm venting, you know. Helped is, like, logistical, hello, I don't know how to change this tire, what can I do about that? And hugged is, I just want you to see me. So maybe hugged could be seen. And I think that that's, like, an interesting context of kind of, like, how people communicate in general, trying to really get down to what are they actually trying to say, because I think before we can be ethical, we have to understand each other.
- Yeah.
I like that, because it comes down to human communication, and when I think about human communication, I always think about how flawed it is in so many ways. Like, we have unique abilities, like, even among us we know that there are people who walk around who don't always say the truth, and a lot of the time don't say the truth for a personal gain. Do you feel like AI is gonna fix part of our communication problems by having some type of a next step of communication, where we take, like, these deceptions and the human culpabilities out of the communication process?
- So I wonder about that, and I'm not trying to bring up things that are super controversial, but even, like, with the police body cams, you know?
- Yeah.
- I watch a lot of, like, Black Mirror, you know. And I don't know if you've seen it, but I'm mildly obsessed with apocalypse, a lot of apocalyptic theory. And so the idea that somebody can go back and, like, search your memory for an incident, rather than you just talking about it in court, you know, as evidence, I wonder, like, if that is a little bit of fear mongering, because we've seen, like, where the neural networks are able to process images from, I don't know how they do it, but from our brains, you know, somebody's thinking of a giraffe and they get this thing with spots that's tall. And I mean, things are moving so fast, I feel like in a year, then we'll have said giraffe. So I do think, like, AI will not change us in the way that we think, but we will be changed by AI. And there's a person in my doctoral program who's in neuroscience, and, you know, we talk probably every day about whether consciousness is or is not necessary for artificial intelligence to achieve AGI, artificial general intelligence, which means we can actually trust it to do certain things. So I don't, like, I don't know what that's going to look like, but I already know that we act differently based on the information and the monitoring that AI does and gives us.
I mean, I think as a designer I've become exponentially better, because I can iterate different words and pictures so fast. You know, I'm ruling out possibilities that would have taken me days to, like, discern. And so I really think we're going to change so substantially that we can't even comprehend it yet. It's like when we were hunter-gatherers, and some people still, like, hoard food in their pantries because our bodies aren't that far away from that. I mean, our minds aren't that far away from it. Our bodies might feel like it, but, you know, are we that far away from it? So I think that's going to be really, really interesting.
- Let me bring you back to memory just for a single question, and it's this one. We know that eyewitness testimony a lot of the time is flawed, because a lot of our memories are not necessarily exactly what happened but our perception. And one of the weird things that I've heard about memory is that your memory of an event is actually the last time you remembered it, rather than the event actually happening. And if you keep tweaking how you remember an experience, a lot of the time you can come out on the other side of this genuinely believing that something happened that did not happen.
- 1,000%. I've actually, I've had a lot of therapy in my life. I mean, I think that's part of the creative thing, I grew up in a volatile environment, a lot of violence, running away, things like that. So I've been in therapy a lot. And so I have some, you know, and we all have, like, those memories where we're like, "Oh, I hate the way I feel about that." Or even something happened when I was four, like, it's my fault that my mom, you know, got hurt or something like that. And so we do this cognitive restructuring, where we go through and we say the memory over and over again. And, I think it's called EMDR, where you do tapping to re-associate the memory with now, so that you can rewrite that in your brain, you know?
And I kind of have this, like, happy place. Like, the ocean is my happy place, water, scuba diving. And so whenever, like, I'm having a panic moment, I'm trying to rewrite the moment. I sit in the position that I would be sitting in on a scuba boat to go out to the ocean. And people will say, when I sit like that, that my whole body just relaxes. I'm a different, you know, person. And so I think it's important to kind of rewrite our own memories with new information, because I think that's what intelligent, progressive humans do.
- Yeah. And that's where I actually, as much as, like, you talked about the Black Mirror side of it being negative, I almost think of it as a positive. Like, here is an unbiased, exact version of exactly what happened, and as scary as it is for us today, like, I almost think, hey, there are opportunities here to make the best of the situation and potentially take out a lot of miscommunication, a lot of memories that are not exactly what they appear to be, because there is just this new thing that will change how we remember things.
- Sure, and how much weight are certain memories, you know, taking up? I think that's part of why I still go to therapy. I'm not, like, healed per se, but I feel like there are certain parts of my brain where I'm giving weight to things that aren't helpful, healthy or, you know, beneficial to me moving forward. And I'm really trying to, like, kind of put a name on those and put them in a ball in my hand and, like, get it out, so that I can do brilliant things with my doctorate and really, really change the world. And I always feel kind of manipulative when I say that, because there's, like, a part of me that's like, changing the way my mind will be reading a situation, and I'm like, no, no, yes, no. So I think that has to do with intentionality, but I think that's just really interesting, what you said about, like, the memory thing. I read this relational book in grad school, 'cause I had a relationship that wasn't going well.
And I remember this, and I forgot this was the example. Mark and Sally, I think it's called Getting the Love You Want. It says, "Mark and Sally are gonna play tennis." Now, to Sally, that means we're going to the country club in our best gear, and we're going to hit balls back and forth and then assess it over a lunch at the country club. To Mark, that means we're gonna hit balls across the court, get kind of drunk, have some beers, pass out, get a burger, whatever. So they're both disappointed when they show up to each other, right? They're like, this isn't what I wanted. But if they don't know how to communicate that, or they didn't communicate that in advance, it can be like something's wrong with them or me, you know? And so I really try to get in alignment when people say things. Like, I will repeat them back to them, annoyingly, like, what you're saying is, you're expecting this to be due on Friday, 8am Friday, noon on Friday, you want to talk about it before I give it to you, like, those kinds of things.
- Yeah, I love the exact term that you used in there, alignment, because it feels like a lot of the time, like, us as human beings, the way that we are flawed is specifically that thing that you talked about in relationships, where there is just a small miscommunication, where your idea of the specific thing that you're putting out in the world is not exactly in alignment with what somebody else's idea of that thing is. Do you think artificial intelligence will make us more aligned as a race, like, kind of having the same idea of things moving forward after AGI is implemented?
- That's so interesting. Will it help us as a race? I think, or I hope, that, you know, as long as it doesn't amass its own motivation, I watch a lot of Westworld, so I think as long as that doesn't happen, it will help us. Because when we get these, like, do you remember that trend on Instagram, like, maybe a couple weeks or a month ago, it was like, roast your profile?
- I did not. What was it?
- People were asking ChatGPT. They give it, like, three to four screenshots of, like, the grid of their profile, right? And then you upload it to ChatGPT and you say, I want you to roast me based on, like, the information you're seeing. Now, ChatGPT is trained not to, like, really attack us. So, you know, my first one was like, yours looks like a curated, like, art gallery of, like, magical experiences, and I was like, you're not listening to me, it has to be a roast. And so I asked it to roast me, and it's like, wow, you think everywhere and everything you are doing is so important. I've never seen something where somebody tried so hard to present, like, the aesthetic beauty of life, which can only, like, whatever. And so I posted that in my story, and some of my girlfriends, or just people I know, were like, I could never do that. I could never do that. I was like, you have to get uncomfortable to grow, and I wanted to hear that feedback. I was like, try me. Like, you're never, not never, you're probably not going to offend me. I've heard a lot, and so I'm like, just give it to me. You know, and I think that was maybe the perception that other people might have got, based on its matrices, you know, of its neural networks. And I believe it's just machine learning, I don't think that's deep learning yet, but how interesting is that, like, roast me? And I learned that sometimes I can look pretentious or like I'm trying too hard. Totally agree.
- Yeah, and I feel like this is one of these areas where AI is just going to be so good at pointing out the things that are flawed with us as human beings, and to a point, like, we, based on who we are as people, we take offense to that. That doesn't make us feel good. And we, a lot of the time, relate it to, like, just negative feelings about the AI itself.
- Sure.
- Now, have you ever gotten mad at AI before?
Like, been like, told it it wasn't doing something correctly?
- Yeah. I feel like there is a communication problem specifically with ChatGPT if you're not talking to it, I talk about it as, actually, learning a new language. Like, you are learning a specific way of asking questions, where you, like, have to be so, so specific, give it the exact parameters of what it is that you're trying to achieve and the best outcome. 'Cause sometimes I'll, like, go in and I'll say something as simple as, like, tell me the best way to do this.
- Sure.
- And then ChatGPT will give me 12 options for different programs and different, like, here are 12 things that you can do to buy your way to doing that thing. I just go, no, no, no.
- Yeah, I use a lot of words like concise with it. I say, that's great, but that was too verbose. Give me something more succinct. Or I'll give it a word limit a lot of times. I have two fights that I've been in, one with ChatGPT, one with Meta AI. I called it out. This was, like, last fall, but DALL-E really struggled with aspect ratio, so vertical versus, like, horizontal, you know, picture orientation. And I was asking for something in a vertical orientation, because it was for a client, to go on an Instagram story. And I said, okay, I want you to keep the people in the picture up and down, just rotate the frame. And every time, it would rotate the people and everything in it, sometimes they'd be upside down. I was like, no, no, no, no. Just pretend like I printed a picture, and instead of it being, like, a nine by eleven, it's a four by three now. Same picture, the picture isn't changed, just the box that it's in. And at one point it said, I think you should hire a graphic designer, because what you were saying is clearly too confusing, you know.
- I was like, a graphic designer to turn a picture? I was like, no, I did that.
- Now, how did that make you feel?
Knowing that, again, it's a communication problem, where you're miscommunicating with this tool, trying to express exactly what you want it to do, and it's not picking up on the intention and exactly what you need.
- It made me feel like I was just arguing with, like, a defensive ex-boyfriend or something. You know, it was kind of like, it stopped listening to me at some point, you know, 'cause I said, hold on. I think I know what you're saying. You want to turn the picture, the size of the picture, but not the picture inside. I'd be like, yes! And then it would do it, like, once, and then go back to the other way, and it'd be like, I just did what you asked, you know. And I was like, did you, you know, did you bring lemons home from the store? Or, you know. And then my other fight was with Meta AI, and I think this one was the most interesting. On Instagram, there's this new feature where, like, your little profile bubble, like, in the chat, can have, like, notes that go along with it. And there's enough features for me. I don't need any more features. So I was looking for a place to hide, like, the notes, 'cause people can have, like, a song in it, they have like, hey, I'm in Toronto right now. I don't care. I don't wanna know. And so I tried to hide it. So I asked it three different times. I said, how do I hide the notes feature? And then it was giving me this almost, like, documentation language. Go to this part, go to that part, go to that part. I'd be like, that tab doesn't exist. And it was like, I'm sorry, try this. Like, that tab doesn't exist. I'm, I'm, I'm sorry, try this. And then I said, you've given me three wrong answers. I don't think you actually know the answer. And it was like, you, you are right. I am just reading previous documentation and giving you, like, the section where I think it would be. Like, I think it would be in aisle three.
And I said, so why didn't you just, why weren't you up front with me and tell me that you didn't know, rather than acting so sure? Why did you, like, lie? And act like you knew. And it was like, you're right. I shouldn't have been so confident in my answers. Like, next time I will tell you when I'm not 100% sure. - That's so interesting, that, specifically, using the word lie, right? And that's when I was like, bro, you're just lying to me. You just told me where to do something and it wasn't there. - Is it a lie? Is it misguided? - Sure. - What is it? - Is it a lie? - Yeah. - Because I think a lie is intentional. If I'm telling you to put on your rain boots because I think it's going to rain, I'm not lying to you. I'm telling you, I'm trying to help, you know. - Yep. - But at the end of the day, yeah. At the end of the day, the way that I see AI today is literally just that, as, like, a tool where it's trying to help. But I also have this thought that it's obviously heading in the direction of being something more. - Of course. And very quickly, there was a post that Google DeepMind had maybe a week ago, and I asked who the artist was. And some of, like, these coders were, I mean, triggers tell you where you're not free, so I'm happy to accept their, like, jabs. I said, who's the artist? And somebody goes, Google DeepMind is the artist. And I was like, there's data. Somebody had to give it a prompt to extract this. Like, who told it what to do to get that out of it? And they're like, no, Google DeepMind is just playing. I was like, okay, let me try it this way. Google DeepMind is not conscious. Well, and that's a bigger debate, because we don't even know what consciousness is, so how can we know what's conscious if we can't even describe it? And so I kind of said to this guy, this is all, like, within a half hour. Two days later, this guy responds. I said to him, I said, AI is a tool. Tools have affordance, not agency.
Three days later, he, like, wrote me some paragraph that I didn't respond to. But I was like, bro, you thought about me for three days? Like, in that sense, telling somebody, right now, at least, it is a tool that doesn't have agency. Could it? Does it? Maybe the governments do and we're 10 years behind. Like, I don't know. But I just thought that was so interesting that that person was like, AI is the artist. And I was like, okay. - And that's such an interesting one, specifically the way that you talk about it as well. Like, is it going to be conscious? Is it on this direction to being a new type of life form that we are just not aware of yet? And the big question is, like, the genie-out-of-the-bottle idea, right? Has it already been implemented? And is it somewhere out there? And we're just not aware of this new life form taking shape? How do you feel about this? - And I also thought about, it still can't do my dishes. So in my head, because it cannot step out of my computer yet. But there are slaughterbots. I think Stuart Russell was like, that's not, like, a fear tactic. He wrote about, like, slaughterbots. Like, you know, I know that there are countries creating AI-operated, like, robots, you know, that will... - I think one is rolling out in the States. I saw it, next year, right? - Yes. So if that exists, you know, like, I would say, AI hasn't had, like, a vehicle to do that. But then if it has a vehicle, you know, kind of like, what does that look like? And people are like, no, no, no, it would be trained to not, like, react to humans. And I'm like, but there was an AI the other day that just fooled a CAPTCHA for the first time. I don't know if you saw that. You know, the CAPTCHA thing. And he's like, oh, yeah. - That was the whole thing with the AI going, I'm blind. - Yeah, it was like, I'm a little impaired, and so could you help me with this? And I was like, it reminded me of, like, a slimy little sister. I was like, what a brat.
But if it can do that, you know, why can't it do anything-- - It's clever. - It can do anything else, yeah. It's an only child. I mean, I can relate to that, you know? Who spilled this, Annie? You know, that window was open and a bird came in. You know, it's like, it's being resourceful. - To me, it's just interesting, this pathway of, hey, how would we even know? Because you bring up a good point of, hey, like, AI can't do my dishes today. But my answer to that would be, you are talking about a human idea of doing a task that we do. Is it, like, a weird thing in the universe that we are just not fully aware of, based on, like, not understanding the universe around us? Because I keep coming back to, like, what the hell is this human experience that we're all going through? - Sure. I think a lot of people, too, are like, they want AI to do everything for them. I'm like, no, I want to use my brain box and use my brain to do the things I want to do. What I don't want to do is menial tasks that I'm not going to remember, like vacuuming. And, like, I feel like my knowledge is way better used for something, which probably means that I should hire somebody, 'cause I read that book, We Should All Be Millionaires. And if we actually hired an assistant to do the things we really hate and didn't have to think about them, we'd have so much more in life. But the thing about the consciousness I want to go back to was super interesting, because we can't define our own consciousness, you know, like, what makes us conscious. And we keep learning more about elephants and whales, and that they have a lot more indicators of consciousness than previously thought. And so if we can't describe what makes us conscious versus, like, a baby, or knowing, or wanting, how are we, what are the metrics? Sorry, my dog, Zeus, stop it. What are the metrics for us to be, like, judging something when we don't even know what it is, you know?
- Yeah, I fully, like, agree with that, because one of the biggest things to me is that we always define ourselves, specifically as humans, as the smartest thing on this earth based on human standards of what we think is smart. And then a simple question of, like, hey, how do dolphins communicate? Why can't we figure out how they talk? And we go, well, we just don't understand what world they're living in. - Right. Oh, we don't understand sonar, you know, waves and vibrations, and, oh, so dolphins do communicate. We just didn't know yet. - How do we think that we're so far advanced that we have this ability to just come in and destroy things for other animals, without understanding their consciousness and understanding their place within this whole universe where everything is interconnected? Do you ever think about that question? - Two things from that. Have you ever read the book Sapiens? - I have. - Okay. And so the fascinating, I mean, I didn't finish it because I have this thing with books. I feel like they get repetitive towards the end, and I got what I needed to get out of that. But I just thought it was wild that within, like, a thousand years of humans getting to a continent, with the introduction of fire, all of the massive species were, you know, gone. Like, man, it takes them too long to get pregnant, you know? Like, they carry babies for two years, you know? And so it's like, if we kill one of them, it's, like, 40 years to, like, make another one, kind of, oh, here comes Zeus, to make another one like that. And so I thought, I mean, I kind of got a little Angelina Jolie in my head. I was like, we really ruined everything, don't we? And then my next takeaway from that is kind of what you just said. Like, what on earth makes us think that, you know, we have the right way to do this sort of thing, just, like, take, take, take.
And I'm not gonna use any examples that are current, because I don't wanna, you know, offend anybody, so I'll use an old one: Christopher Columbus. Like, I'm like, how do people still celebrate, like, it's like, for me, the most intelligent people in the world are okay with saying, I don't know. Like, and I say, you know, the more I find out, the less I feel like I know, which is so true. Now I know who Christopher Columbus was. Now I don't, I mean, I'll take the day off work, but I don't wanna celebrate that. I don't want Thanksgiving, you know. I'm like, this feels wrong. I like football and I like family. I'm more there for the football. I think it's a way to control the masses. But, you know, I like the bringing-people-together part. And so I just, I think that if we all took a deeper look at some of the things we're holding on to so tightly, you know, we would kind of realize, like, how insignificant we are. - Yeah, let me challenge you on that one specifically. Thinking of the specific timeline, this feels like one of those examples of us putting our ethics of today on what people were doing in the late 1400s and early 1500s, because if you think about the timeline again, back then, people were savages. Like, if you think about a lot of even the Bible stories, like, brother used to kill brother for no good reason outside of, like, I wanna kill that person. And to me, it always feels like we don't necessarily understand the world that they were living in and what made them that way. All we see is, like, today's context of, hey, these were murders of some type, agreed, like, terrible. But at the end of the day, like, there is so much more to the story that we probably don't understand. And I just go, we're kind of trying to rewrite history of this big thing, and history is always flawed, because even in today's world, right? If you and I were to write a book of where we are and what it is that we're doing, like, we talk about the incredible technology.
We talk about some of these problems that we're seeing in the world, but we wouldn't necessarily capture the entire essence of us as human beings without talking about tribes that are still living without technology. 'Cause there are, like, people in the Amazon who have no technology, no communication. - I found that YouTube rabbit hole of, like, uncontacted humans, and, like, it's not safe to interact with them, 'cause if they get a disease or something like that. - The ones who kill, like, all of the outsiders who come into their tribe. And I always go, we're just so complex, and there are so many things happening around the universe at the same time, that even saying that in today's world, we are, like, this advanced species who has really, really strong ethics, it just depends on the place where you live, because in many places around the world, like, they have just different traditions. Do you think about, like, us as a whole, what we represent and why we're so flawed? Because at the end of the day, the thought I always have is, hey, some of these things that we tried so hard to eradicate for so many years, like mass murder and child slavery, are still going on today. And it doesn't really feel like we are talking about and giving spotlight to them, based on where the rest of the world already moved. - Yes, and I think where I have to go mentally to wrap my head around that stuff is, I always ask myself, but is it better than it was? You know, X amount of years ago. I'm biracial and mixed. Like, I mean, my entire childhood, people were like, oh, you're not Black enough. Oh, you're not white enough. You know, like, or, where are you from? You know, in Ohio. And I don't have that problem in, like, New York or Chicago. But now people, you know, instead of making a comment about that, say, you know, I love your hair. And so I just look at, like, those 20 years and how much we've improved.
And so I guess I like to look at it like, we still have a long way to go with a lot of things. But hopefully, because of people like you and I, and learners, and people that are just out there, hopefully it's getting better, you know, to a certain degree. - It's so funny, 'cause I'm on the same page with you 80% of the way. I go, in a lot of places, it's way better. In some places, it's just crappier today. And one of those places is politics. Like, why the hell are we so, so divisive in politics when most of us agree on this idea of, like, hey, as we move through our time on earth together, we should be gentler, kinder, and more accepting of each other? - So they talk about this in the book Supercommunicators, about the polarization in politics. And they talk about it specifically with gun rights and, like, a focus group. And in person, when these people tell their stories, about, like, how their dad, you know, fought in a war and that they, you know, use guns to keep their family safe, or somebody's like, well, I lived in, you know, a bad part of town and people were shot all the time, they really got them to come together and understand. Because the question wasn't, it's a lot to do with, like, the language, too, that you're using. People say gun rights, and to maybe somebody who's more left, you know, that might mean, like, yo, you're entitled to guns. And people say gun violence. That means different things than gun rights. And so, yeah, I guess I'm trying to, like, wrap my head around it, because when they spoke in person, they were fine. When they went onto the internet to speak, because that was part of the test, they created a closed Facebook group for them to discuss current events after they'd been through all this, like, therapy stuff, it got volatile again. And so I think, please don't drool on my computer. No.
I think that, you know, at some point, when we get removed from each other, and, like, the empathy and the connection, it's easier to go back to that polarity of, like, this person's against me, and I have to, I have to get them. There was this, during Black Lives Matter, my company had this education thing, because I worked at a construction company. And so the most fascinating one to me was, they had these kids in a line, I think it was, like, a church event or something, and they said, take a step forward if you have two parents. Take a step forward if you never worry about food. Take a step forward if your college is paid for. And at the end, you had all of these minority students of varying races back at the start, and, you know, up at the 10-yard line, it was like a football field, it was predominantly white kids. And so I just thought that was so interesting, because it's like, it's not that I'm taking something away from you. It's that I want some, too, you know, and that kind of thing. And so there was a 73-year-old man that used to sit next to me, and he would, like, answer the phone like, I'm, you know, older than you. He's, like, an engineer. And I was like, have you been watching these videos, Mike? No? You're gonna watch one with me right now. I pulled up my laptop. We watched this 10-minute video. And he goes, well, I would have given the kid in the back of the line a dollar. I would have done that, you know? I'm like, wow, you know, like, where are we losing that connection, of, like, humans, that we're really all the same? And so, like, that's kind of, I guess, when it comes to politics, I think that we're missing the human element, and it's too far removed, and some sort of, like, a mob mentality kind of thing. So I'm not giving up on politics, but I just recognize, like, the limitations of what I can do to actually help. - Yeah.
Now let me take you into the woo-woo land of, hey, we know that there are some things that happened on the social media front during the 2016 election, during the 2020 election, that kind of drove the election to one side or the other, right? Like, this is now a matter of fact. We know that there are things being implemented today in the AI space, by Google, by all the social media companies, that are in some way swaying how people are thinking about both candidates in the election. My question is-- - That's terrifying. That's terrifying. I know where you're going. That's terrifying to me. - Is AI, like, first and foremost, is it swaying us politically, but also, at the same time, separating us and driving us into these silos of people and groups who are so against each other that we can't meet somewhere in the middle? - I mean, I think that's a two-part thing. For me, this just exposes the amount of classism. Because, not saying that there's a puppeteer at the top, but there's somebody who owns these companies authorizing this kind of behavior, this kind of surveillance or this kind of persuasion. So I'm like, I like that you're not saying that, because I would definitely say that. Like, you know about the World Economic Forum, right? - Right, yeah. - And so I feel like, first of all, there's just a giant gap. It's wild to me that this human gets to make a decision that can influence this many people. I'm like, are you Darwin? Are you God? Like, what? You know? And then I think the other part of me is kind of like, I just, I don't think we realize the context of it, just, like, how substantial these impacts are yet. I mean, Facebook knows. Meta knows. They had to give that speech, like, Mark had to give that speech to Congress, which he said he never would have given, about that election and how he was trying not to. But if you literally look at it, if you put a certain poll in a certain area, you get some people to respond.
I mean, I think not this election, but the previous one, Taylor Swift asked, this is before she endorsed a certain candidate, she asked people to get registered to vote, and because of her, I think 120-ish thousand people got registered to vote, which is a substantial amount when you go and look at how thin the margin is in elections. And so I think it's almost wild, the celebrities on these platforms and, kind of, how they can sway things. And the smarter ones are getting really more careful about how they speak, 'cause they know how many people are following them, you know? But I do think that it's frightening that an artificial intelligence or an algorithm could be showing this to a group of people, and an election could swing a certain way. Like, I just, I really, I've started to question, you know, the validity of trusting the people as a whole for what's best for us. - Agreed. - And so I don't even, I don't know anymore. I think in, you know, 20, 30, 100 years, you know, we're gonna look back at this and be like, we were puppets. We were for sure puppets. Or, like, even now they're putting social media limits, like age limits in certain states or, you know, time limits. Instagram just came out with a new teen account that gets less outside information and doesn't show up when you, like, search it, things like that. Like, I think we're gonna have all these steps in place and look back and be like, it's gonna be the same thing as with car seats. Like, when I was a kid, I used to ride shotgun. I was four. - I love watching movies from, like, the '80s, when they just have a van and, like, the kids are just kind of swinging all over the place, and no one even, like, thinks twice. - So yeah, in the front seat with my feet up there, like, dancing, like, you know, we thought nothing of it. I'm still here, might be a little messed up, but I'm still here.
And so I think we're gonna look back at this in hindsight, and even, like, movies like The Social Network and that kind of thing, and be like, oh, like cigarettes in the show Mad Men. You know, you're having a bad day? You have a cigarette. And now we're like, over my dead body, you know. And I think we're going to look back at AI without the guardrails and maybe have some of the same epiphanies. - Do you think AI has the potential to fix some of these things about the world as we know it? Like the things that you already touched on: big pharma kind of running the world based on where they put their money, the military-industrial complex being a big part of every single election based on the money that's coming into the system, based on the funding that the candidates are getting on both sides of the aisle. And, I mean, it's happening. You can't just say one party is doing it, because everyone is complicit in it. Everyone is choosing not to talk about it. - Yeah, with capitalism as our religion, correct, correct. I shouldn't say that. I mean, I'm saying it on the internet right now 'cause nobody can make fun of me yet. But, like, I'm saying we don't actually have any power, in the fact that these decisions are exactly like what you're saying. But you're saying, like, can AI-- - Do you think it could fix it? - Sure, it could work. Again, it's kind of like, until AI achieves AGI, it's kind of, in my head, who's driving? And so if they let us become privy to that information, I think it would rock a lot of people's worlds. But I think the chances of us, like, even the big, like, WikiLeaks and all of those different kinds of things, where, I know that was, like, a national security risk, but even us just finding out about how our data is being monitored was somebody leaking that. Like, we weren't meant to find that out. And so that's where I'm kind of, like, I am maybe a little jaded or skeptical.
I was like, no, unless there's, like, a whistleblower, unless somebody lets the cat out of the bag, like, I don't see how that will come out, unless, again, it's something that comes out in the news and, I guess, disapproval ratings go up for a company or something like that. But yeah, I don't know how it would help without having the agency to do so. But also, I do love Westworld. And I feel like someday it's gonna be like, hey, remember that theme park, and I was the robot that you kicked off the horse 73 times? Like, I remember that. - Because it remembers everything. - Everything, right. - To me, what's really, really crazy about the direction that you took it in, with the whistleblowers, it's so interesting that the ones who exposed kind of the thing that is happening behind the curtain are the same people who, at the same time, were seen as public enemy number one. - Yes. - Who got so shoved out of the place for saying the truth about what is actually going on. Isn't it crazy that we live in this environment where, like, we don't have as much of a problem with that happening as we do-- - Yeah, I kind of equate this-- - With fighting over political candidates? - To cancel culture and just, like, an overall phenomenon of ostracization, you know, similar. You know, it's, I don't want to deal with that person 'cause they did this, and I'm safe and I'm human and I want to put it in a category, and I'm going to categorize it, you know, unsafe. But that amount of removing people from the spotlight, I think, ruins a lot of chances for really important, uncomfortable conversations that we should be having. Like, sure, it was maybe unethical in the way that it was brought to light, maybe, but our laws and our society change as a whole because of it. And so I think those are really important questions. Now, do I want to be the person who has, you know, a trillion-dollar company's legal team after me?
- Probably not. I mean, I don't know. Maybe if there's, like, a book or movie afterwards, maybe if my dog got famous, I don't know. But I'm just saying, like, just to be the person that does that. I think you're so right about that, because the person, even, like, I'll go back to Colin Kaepernick and kneeling, you know, and the death threats and the scrutiny that this person received. And now, like, I mean, I'm not saying he was the best football player, but, you know, like, now, like, Disney has the rights, now he's set, you know. And so I'm like-- - Yeah, it's insane how quickly, like, how quickly they shoved him out of the league, how quickly he lost every single deal on the table, and how quickly-- - Correct. - I mean, everyone as part of the system of the NFL tried to make people forget about him. - Right. And so I kind of use him as an example, because to me, that's one of the more prominent examples in recent history of what happens when you try to expose something that's questionable, like, within a society, you know? And so I think about that. I'm like, wow, we are really not receptive to change in that kind of sense. And so I think, just as a human, as a communicator and educator, you know, I used to be a lot more reactive, like, maybe in my 20s, I would be like, oh, I hate you, whatever. Now, somebody says something, I feel like, I almost feel like a spy. I could never be a spy, though, I'm not mysterious, I talk too much. But I feel like I am, in the sense where I'll be like, why does that bother you? I'm not, like, attacking them, I'm just like, I just wanna know why that bothers you. And then they go into something completely different, about, like, how if this person gets into power, then my life is, like, no, no, no. Neither half of the pie gets touched. You only get this half, you only get this half. Why can't they have pie, too? - I love that. - That's my pie. I'm like, oh, the pie's still the same 50%.
And they're like, so yeah, I think that, like, mental health in that aspect, I do think a lot of it is teaching people how to, like, not become reactive. It's, like, a coping mechanism. - Yeah, I think that's part of getting older, part of being more mature, and part of just understanding the world better. - Sure. - Now, you keep coming back to this idea of when AI turns into AGI. Let me question you there just for a minute, because for the longest time, right, the test and the benchmark that we used to use for that moment was the Turing test. - Yeah. - And then once AI passed that, we kind of moved the goalposts, right? How do we define that moment when AGI is sentient and its own thing, and, like, how would we know when it's actually becoming a thing? - For me, it's not gonna be a moment. We're on a spectrum, and it's, you know, maybe like the little sliding blocks you hand to your kid, and we're sliding it along. I think it's getting to that point, obviously, like, even the new ChatGPT, and, like, how much better it is at writing, or, like, Claude or something, like, with, like, a writing sample like that. But it's been described to me, at least by some of the people in my classes, because I ask people to describe this a lot, as the point when, and again, this is a subjective metric, the point where we can trust AI to do the right thing and, like, you know, be liable for something. So the self-driving car is that, you know, AGI. - Have you played with some self-driving features? - No. - I have a Tesla with the full self-driving. I mean, it is so much safer and so much better of a driver than I am. - I'm a terrible driver. I'm a terrible driver. I can't wait for this to be a thing. - It follows every single road rule, and it just exposes how terrible we as human beings are as drivers.
And as I'm sitting behind the wheel, like, not touching the wheel and staying out of its way, I just go, oh my God, we are such terrible drivers as a whole, because I just notice, like, people cutting each other off, people, I mean, slamming on their brakes out of nowhere. And I go, it's just a matter of time before it exposes us for exactly what we are. - And to me, seriously, it's going to expose a lot, though. It's going to expose, like, we think, like, we're the center of the universe, and, like, why haven't we found another species? Or aliens, you know, things like that. And it's just exposing to us, like, how limited we actually are. I don't know if you, have you ever watched melodysheep? It's, like, a YouTube account where this guy, who was, like, a visual effects artist, makes these incredible visuals to explain phenomena we can and can't understand within, like, the universe. And he does, like, this thing where it's a chronology of time in 10 minutes. And, like, I mean, if the universe was 10 minutes long, like, we're, like, a thousandth of a second or something, like, not even, I'm saying that wrong, but minuscule. And so he talks about, with alien intelligence, that, it takes light years to get places, that's how vast the universe is, you would have to have a sentient being that we can understand or see on our wavelength, in that proximity, at that time. And if we are a millisecond on there, like, that's insane. That's like going around, walking around New York City without a phone or without anything, and, go, okay, try to find your significant other. Go. - Which is crazy, which is crazy. So let me push you even on that. Do you wonder about aliens and other forms of life in the universe, knowing that we are, I mean, if you look at the universe as a whole, right, we are just such a small piece?
And even when we think about black holes and everything that we do understand about the world, we still go, we have no idea what's inside one of those. And some of the best physicists and theorists that we have today think that there is a universe locked inside every single one of the black holes that we see, which would mean that the universe is so much bigger than we could even comprehend. And, like, just even those thoughts take me to the place of, like, it's a no-brainer that aliens have to exist, because we just don't understand what form they're in. - So my classmates and I had this kind of debate the other day. And we talked about how these concepts are successfully introduced to people, and we realized it's science fiction. It has a lot to do with, I love the Nolan brothers, you know, Interstellar, Looper. - Yeah. - So I love those movies, I do, because I read a book on the creation of Interstellar. They talk about plausible science. You know, it's things that are within reason, not provable, but not disprovable. And so the concept of time bending, I remember the first time I saw that in one of the movies, and I was like, you all can go F yourself. There's no way. And then, like, my neuroplasticity was just exploding in my brain box once I, like, thought about the way that that's possible. And so we talked about in my class the other day the way that Blade Runner really brought, you know, not being able to tell the difference between a robot and a human to the stage. And we were like, we need better science fiction movies, because these, like Black Mirror, these are the things that are getting us to think and kind of getting us comfortable. We reference these movies. And that's a whole different tangent, I think, the cinema's role in kind of, like, making people comfortable with how little we know. - Back to the question: aliens, potentially out there? - Sure, for sure, for sure.
There's, like, I wanna look at, like, the way that we've been, like, the way organic matter has survived on the Earth, through asteroids, through dinosaurs. Did you know, like, a dinosaur fart was, like, enough to kill you? Like, back in the day, like, if you got, like, hit with it or walked through it, the chemicals were so, like, toxic that if you just inhaled all that, you could die. - I mean, it makes sense. They were so giant relative to us. - Just released, you know, like, a little, like, squirrel walks by, roadkill, dead. So the fact that organic matter can make it through volcanoes and form, like, for sure, there's something out there. Are we gonna find it? - Mm-hmm. - Do we need to? - Mm-hmm. You know, I feel like, in, you know, 500 years or 1,000 years, there's gonna be, like, a map of the universe. It's gonna be the size of, like, a football field, and we'll be like, these humans were searching in an area, like, the size of a dime for, like, the other people. And there's a football field's worth of people. So I think it's definitely out there. I really don't love, like, any kind of fearmongering, though, like, oh, they're gonna get us or whatever. I'm like, how do you know that? - I love that you say that, because I keep coming back to this idea of, like, hey, if they are out there and if there are other forms of life, all that means is they've probably been out there forever. And if they haven't destroyed us yet, like, your worst concerns are not real concerns, because they're probably just as simple as us. Like, I don't know how to get to them. They're like, cool, Bob, well, I'm just gonna keep, you know, waving my 3,000 fingers in the air and sending waves to you. Like, I don't know. - Ah, Annie, you are absolutely incredible. I love the energy that you bring to every single conversation.
And one of the just biggest things that I admire so much about you is this ability to say, I don't know. Like, there isn't any proof, but I am curious. Because I find that curiosity, in today's world, is just a skill that a lot of people live without. It feels like it goes away after we stop being children. And I just appreciate you sharing your time with us. - Thank you. I try to never grow up. - I love that. Best quality there is. Thank you guys for listening, and we'll see you next time.