Archive.fm

Drinkin' Bros Podcast

Episode 1353 - The Past, Present, And Future Of AI

Duration: 2h 16m
Broadcast on: 22 May 2024
Audio Format: mp3

AI experts and Gladstone AI founders Jeremie Harris and Edouard Harris join the show to talk about everything AI: what its ultimate potential is, whether AI knows it exists, how AI could become sentient, whether it could turn antagonistic, how clueless the people in charge of AI really are, and a lot of other REALLY mind-breaking ideas about AI.


Go to GhostBed.com/drinkinbros and use code DRINKINBROS for 50% off EVERYTHING (Mattresses, Adjustable Base, Pillows & More) – plus a 101 Night Sleep Trial and Mattresses Made in America.


SUBSCRIBE to our Patreon for exclusive audio and video content!


Buy Drinkin Bros new HardAF Seltzer Here!


Get Drinkin Bros MERCH here!


Go to https://1stphorm.com/DrinkinBros to get your Micro-Factors and have a chance to be the Drinkin' Bro of the month with every order

 

Go to lucy.co/drinkinbros and use the code DRINKINBROS to get 20% off!


Get 20% Off + Free Shipping, with the code DRINKINBROS at Manscaped.com. That’s 20% off with free shipping at manscaped.com, and use code DRINKINBROS. Never forget where you came from…if you know what I mean. Happy Father’s Day from MANSCAPED.


Drinkin Bros Socials

https://twitter.com/Drinkin_Bros

https://www.instagram.com/drinkinbrospodcast/?hl=en

https://www.tiktok.com/@drinkinbrospodcast

https://www.youtube.com/@drinkinbrospodcast


Ross Patterson

https://www.instagram.com/stjamesstjames/

https://twitter.com/StJamesStJames


Dan Hollaway

https://www.instagram.com/danhollaway/

https://twitter.com/DanHollaway


Rob Fox

https://www.instagram.com/robfoxthree/

https://twitter.com/RobFoxThree

https://www.tiktok.com/@robfoxthree


Dan Regester

https://www.instagram.com/danregester/

https://twitter.com/dan_regester

https://www.patreon.com/softcorehistory

https://www.youtube.com/@softcorehistory



[SPEAKING SPANISH] [MUSIC PLAYING] Welcome to Drinkin' Bros, presented by GhostBed.com. Sit back, relax, and grab a fucking drink. Welcome to Drinkin' Bros, Ross is dead. Is that right? Yeah. Yeah, they're telling me he's dead. He's on a plane back from Atlanta right now. With us today, we have Gladstone AI founders Jeremie Harris and Edouard Harris. Neither of your first names is spelled correctly. We talked to our mom about this. It's too late to change it. It's not. It costs like 50 bucks. You just go to the city. Go to the name change factory. Is it too early to walk off the interview, or is it-- No, no, no. It's never too early. OK. [LAUGHTER] I like to change them in post. Change your names in post, yeah. But I like the bare midriff, and that seltzer commercial from earlier, I really appreciate it. Yeah, that's last row Lopez. He's a real piece of work. Oh, OK. Bring it back. There we go. He does fine work. He does fine work. In all his glory, yeah. He looks like Benicio del Toro, but fatter. Oh, OK. I could see it. Yeah. OK. Anyway, I just wanted to acknowledge that artists don't often enough get recognition for their work, so, you know. Is this something you're concerned about? Yeah, yeah. Yeah. I think I'm going to make that like my hill. Great answer, by the way. I can tell it's going to be a good show. So you guys work in the AI industry. Tell me about exactly what you're doing. You want to kick it off? Sure. So the main thing that we worked on for the last year has been an action plan that the State Department commissioned about what we should do about the extreme forms of AI risk that are very significant and coming faster than most people expect. It's basically like looking at, you know, OpenAI, obviously, everybody sort of knows ChatGPT. The question is like, what happens if AI progress even just continues at the current rate, but it's likely to accelerate. So like what happens in the next year, two years, three years, as we get towards human-level AI. What do we start to face as risks from, like, the weaponization of these systems? Or in some cases, the accidents, like exotic forms of AI accidents where you could potentially ultimately lose control of those systems. So those two buckets of risk, what we call the extreme or catastrophic risks from AI, that was what this was all about. It was actually the first ever US government-commissioned action plan to deal with those things. And so we actually went into, like, OpenAI. We talked to Sam Altman. We talked to leadership of, like, all the world's top labs. And then we talked to whistleblowers in those labs. And often they were like, yeah, so you know, what our leadership is saying publicly, not exactly the whole story. And like we're pretty concerned about X, Y, and Z. And that sort of thing. So I guess big picture, that's sort of what we're looking at: what are the national security implications of this stuff, and what do we have to actually do to fix it? Are they-- is there a name yet for the acceleration, like Moore's Law? That describes storage capacity doubling every 18 months. That's a great question. No one has named it, but it's absolutely a thing. So, Harris's Law. Just fucking slam it, dude. Nobody can stop us, we're going right here. We've coined it right here. Today is May 21, 2024. It is 2:05 PM Central Standard Time. Three minutes after the bare midriff thing, right?
Well, I don't know if that's going to come up in the patent or anything. I got about-- I got about a trademark, rather. Four minutes, 30 seconds-ish on the show. Congratulations, guys. That's a good thing. This will be either in your award or indictment, one of the two. So we'll see you at time will tell. I've got some questions. So you say extreme forms of AI risk. I'm curious about what some of them are. And before you answer that question, I want to give you some of my thoughts on this to see if I'm even in the fucking ballpark. There really are two major things that I'm concerned about. One is making sure that AI experiences both time and pain, so that it can't just sit around forever and wait us out and some kind of standoff, right? I think that might be important. And the second one is, what is it being optimized for? You know what I mean? Those are-- so, OK. I think really good directionally. Maybe the place to start is kind of like, why, all of a sudden, is AI this really big deal? Why are we staring down the barrel of a chat GPT? Why is-- Why did the White House publish an executive order recently about, like, here's the stuff we're going to do? Why is everyone rolling out all of a sudden these things about AI risk? Right. And it comes down to like, there was this moment in 2020 where there's kind of a before and after in the world of AI. So you can think of AI as, like, this process of trying to design these smarter and smarter artificial brains. And back in the day, the way you would do this is you would come up with more and more clever and messed up, complicated ways of hooking up the parts of that brain, right? I mean, that's been going on for a long time. I remember when I was in college in 2003, one of my professors, who was a Navy nuclear guy, was working on a neural network back then. That's 20 years ago, right? Right, exactly. And so the question back then, the game was like, how do we come up with new and clever ways to hook up the artificial neurons, basically, in this neural network? How do we make it a more complicated, interesting information processing structure? Then, around 2020, people started to go like, well, wait a minute. Do we really need to bother with being too clever about this? Or can we just take the stuff that's already working and make it bigger? So what I mean by this is like, you think about the three different ways that you could fail to learn math. All right, one way you could fail to learn math is if you don't have a textbook. There's nothing for you to learn from, right? So if that's the case, that's like an AI not having data to learn from. OK, so no data, no textbook, no new skill. Same thing if, like, let's say I give you a textbook, you can still fail to learn math if you never actually crack it open and start to study from it. You don't put any effort into it. That's like computing power in AI, right? So no work ethic, basically, no computing power, no new skill. Last thing is even if you put in all that effort to study the thing, if you have the brain of a bird, you're not going to learn anything. So if your brain is too small, it limits your capacity. So what they figured out was, wait a minute, if we just increase the size of these artificial brains, these neural networks-- Well, we would call it white matter, right? Yeah, something like that. Yeah, that's it. Yeah, just increase-- That's it. And all stuff, more connections. That's it. That's it. 
Click and drag, and then train it using more data, using more computing power, all three things at the same time. You scale up your AI. What ends up happening is you get IQ points just dropping out of it. You don't need to get more clever. You just need more size. Dollars in, IQ points out. That's basically the equation behind AI, and that's been the case ever since 2020. That's the resolution that-- And it was called, yeah, and it was called the bitter lesson by academics, because academics, like your professor, loved making these clever AI systems that did specialized things. Where the solution was just make it bigger, don't think too much about it. Don't try to be smart about it, just make it bigger. So like trying to drill a big hole with a small drill bit instead of just you're putting in way more effort instead of just getting a bigger fucking drill bit, right? Yeah, and it turns out that the giant thing can do better a lot of the individual tasks that you spent so much time and effort trying to make those small things do a really good specialized job at. Sure, because then you're going to have to create many, many small things to do very specific jobs and then find a way to build interconnectivity with them instead of just building a big thing and letting it figure it out. Yeah, and you're seeing companies get obliterated by this like every day, right? Like all these like translation companies and translation services that had specialized AI's and even translators as individuals with jobs just got obliterated by chat GPT. As you can just ask it, translate this into Korean. Sure. We'll do it. Yeah, what about what we're like, again, I think my bigger concern, like the time and pain thing is kind of ephemeral, but I think what we're optimizing the machines for, we can optimize them for productivity, for ethical behavior, right, whatever. But that's not unlike how you raise a human being, like a child, right? Like you teach it values and principles and then you reward and punish it based on how it affects those things and you're-- so it feels like to me, I feel like a lot of people think that AI is a new technology, but it seems to me that we might be creating a new species. Right. Yeah, that's probably like-- that's an accurate way to look at it or like one frame. I think one of the challenges is-- and a lot of people don't realize this, but we actually don't know how to put a goal into an AI system. Like we don't-- You want to put a goal into it, like, hey, go conquer the world, whatever, right? We don't know how to make sure that an AI system has internalized that goal reliably. So to give you an example, and we don't know how to do this with people either. So if you're my secretary and I'm a millionaire or something, I can give you increasing levels of access and trust. And then one day, you can just use my bank account keys and run away with my money. Sure. And it's the same with really powerful AI systems. We don't know how to make it so that your goal is my goal, and you will never run away with the money once you are given that power. Sure. I mean, how could you, though, right? Like every-- people are trying. Every classified records violation was committed by somebody that had a security clearance, right? That's the thing. Otherwise, it wouldn't be a thing. So how do you even do that? Well, what we have right now, the best we have is a process. 
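
The "dollars in, IQ points out" claim has a rough quantitative shape in the published scaling-law literature. A minimal sketch, assuming the power-law form and approximately the fitted constants reported in the Chinchilla paper (Hoffmann et al., 2022); the numbers are illustrative placeholders, not a description of any particular lab's models:

```python
# Illustrative only: predicted loss as a function of parameter count N and
# training tokens D, using the L(N, D) = E + A/N^alpha + B/D^beta form.
# Constants are roughly the published Chinchilla fit; treat them as placeholders.

def predicted_loss(n_params: float, n_tokens: float) -> float:
    E, A, B = 1.69, 406.4, 410.7      # irreducible loss + fitted constants (approximate)
    alpha, beta = 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

for n_params, n_tokens in [
    (1e9, 2e10),    # ~1B parameters, ~20B tokens
    (1e10, 2e11),   # ~10B parameters, ~200B tokens
    (1e11, 2e12),   # ~100B parameters, ~2T tokens
]:
    print(f"N={n_params:.0e}, D={n_tokens:.0e} -> predicted loss "
          f"{predicted_loss(n_params, n_tokens):.3f}")
```

On a curve like this, the lever is scale: more parameters and more tokens keep buying lower loss, without anyone having to invent a cleverer architecture, which is the "bitter lesson" point being made.
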
We have a process for, like, forcing these neural networks, these models, they're sometimes called, these artificial brains to, like, gradually behave in a way that looks more and more like what we want. The way we do that, usually, is by forcing them to do text auto-complete. So we basically feed them, like, all of the internet. We give them a bunch of sentences, and get them to get better and better at filling in the blanks. So eventually, you have these systems that encounter sentences, like, to counter arising China, the US should blank. And, like, if you're going to fill in that blank, really well, you're going to have to know something about what the US is, what China is, what it means for China to be a senate, all that stuff. And so this auto-complete process ends up being this way of, like, forcing the system to become a knowledge sponge. And then what you do is you try to find ways to use that auto-complete capability to do useful things. So you do things like, I don't know, the code below will check the weather in Austin, Texas, colon. And then, like, the next word, if you auto-complete that, the next word must be the beginning of a code base that actually does that. But there are a whole bunch of issues with that, right? If you take a system like that, and you ask it, like, you know, like, who really caused 9/11? Or, you know, whatever, it might key in to, like, oh, it's who really caused 9/11? I need to auto-complete that. Where have I seen a phrase like that before? Maybe this is taken from a website that's gonna be more, kind of, like-- - Mainstream or whatever. - Sure, yeah, something like that. - And that's G-I-G-O, right? That's garbage and garbage out. That's a big problem. - Yes. - That's the whole problem in a sense. Or it's a big thing. - Yeah, it's like the first part of the problem. - Yeah. - And people are doing a pretty good job of overcoming that. As if you play with chat GBT, you can see, people have, you know, done a decent job of making sure that it's at least trying to tell you truthful things, or at least truthful by the standard of the company that's trained it, which is another problem. But even that, I mean, look at, you know, look at what Google's Gemini did, right? You ask it, show me pictures of 17th century British scientists. - Mm-hmm. - And it's all these, like, multi-ethnic, 17th century British scientists. And it's like, is that, like, is that really? And the problem is not that Google wanted that to happen. The problem is they didn't understand their own system well enough to know that that was gonna be a problem at all. - And millions and millions, like tens of hundreds of millions of dollars, are being deployed right now to make that not happen, right? Like, all the smartest minds in this field are trying to figure that, and they still can't. - Yeah, but I mean, look, I don't know the answer to this question, I'm curious. So you can, since you're into it, you can tell me if you had to give me a percentage of the type of people working on this stuff, what percentage would be tech people and what percentage would be human psychologists? - It's overwhelmingly tech people. You've got a-- - Seems like a fucking problem, right? Because what we're talking about is basically the fundamental way that sentient life learns complex ideas, and trying to understand that. That's something we've been studying for 2,000 years now, right? But there's quite a bit of research on this. 
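
The "fill in the blanks" training described above is literally next-word prediction. A minimal sketch of the idea, assuming a toy corpus and simple word-pair counts rather than a neural network; real systems learn the same kind of conditional distribution, just over far more context and vastly more data:

```python
from collections import Counter, defaultdict

# A toy corpus standing in for "all of the internet".
corpus = (
    "the us should invest in technology . "
    "the us should work with allies . "
    "china is a rising power . "
    "a rising power seeks influence ."
).split()

# Count how often each word follows each other word (a bigram model).
next_word_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word_counts[prev][nxt] += 1

def autocomplete(prompt: str, n_words: int = 4) -> str:
    words = prompt.lower().split()
    for _ in range(n_words):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break
        # Greedy decoding: always pick the most frequent continuation.
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(autocomplete("the us should"))
```

Filling the blank well forces the model to absorb facts about the words involved; that is the "knowledge sponge" effect, just at trillion-token scale with a neural network instead of a lookup table.
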
So it seems like maybe a good idea to have some psychologists get involved in this at some point, right? - You definitely want to draw inspiration from areas like that. One of the challenges is there is, as tempting as it can be to draw a direct analogy between human brains and these AI systems, there are important differences, and these AI systems are, they're like myopic in a way that humans aren't, right? So we're born into a world with culture, all these implied boundaries, biological evolution has kind of biased us to have certain go-and-no-go areas culturally, as we evolve and learn. But these systems, like an AI system is a problem-solving machine. It's like trained to find creative ways to solve problems, efficient ways to solve problems. And that's a very different kind of pressure, evolutionary pressure, from the one that we faced when we emerged as a species. - I would say they are myopic for now, the kinds of systems that we've trained up to this point. Yeah, they're just like mostly just autocomplete, they don't think too far ahead. It does seem from the conversations we've been having with some of the folks who work at these labs, that yeah, the next level is gonna be try to train these things to do longer-term planning. Yeah, to be good agents. And the truth is we don't understand at all how to even gauge whether a machine like that is, you said sentient, whatever definition you want for that, or what that would even mean. But that's a whole separate question from- - Well, we don't know what that means for ourselves. - Exactly, exactly, that's the whole thing. But that's also a separate question we think, right, from like, are we in danger of like getting killed from these systems being weaponized or from losing control of them? But they're both really important questions. How do you do philosophy on a deadline like this? - Sure, well, I mean, it occurs in me, maybe we're doing things in reverse. I mean, why wouldn't we build, let's call it a BIOS, or operating system, like a core operating system that has all that intrinsic culture and shit, right? - We don't know how. - Yeah, it's mapped out of a robot. Well, I mean, what do you mean we don't know how? Like it would, there are values and variables in a computer system, right? - That's the challenge. - That tell you how to behave and shit, but we don't know how they're receiving it. That's a big problem. - We don't know how they're receiving it. We also don't know how to issue. So the way these systems work, right, there's like this glorified process of trial and error, right? The machine goes out, tries to solve a problem. If it succeeds, we give it a reward. We give it like a plus one point, right? If it fails, maybe minus one, something like that. And then you repeat this like millions, billions, trillions of times. And so the challenge is like, how do you define your morality and codify it in a way that you can give rewards in an automated, scalable way like that to a system? - I mean, I know what some people would say is feed it. They're preferred religious texts, which could be very problematic, considering there's some morally objectionable things in each one of these, right? - But also-- - That it's gonna take, I mean, if you prompt it, this is the principal and you give it this book and all of a sudden it's out there beheading people and shit. It's like, all right, maybe we fucked up. 
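
The "plus one, minus one, repeat" loop described here is the core of reinforcement-learning-style training. A minimal sketch, assuming a made-up task where the trainer's intent is "answer politely"; the names and the reward function are invented for illustration, and the point is that the system only ever sees the numeric score, never the intent behind it:

```python
import random

random.seed(0)

# Candidate "behaviors" the system can emit. The trainer wants politeness,
# but all the system observes is a +1 / -1 score.
behaviors = ["helpful answer", "rude answer", "evasive answer"]

def reward(behavior: str) -> int:
    # A crude automated stand-in for "did a human reviewer like this?"
    return 1 if behavior == "helpful answer" else -1

value = {b: 0.0 for b in behaviors}   # running estimate of reward per behavior
counts = {b: 0 for b in behaviors}

for _ in range(10_000):
    # Mostly exploit what has scored well so far, occasionally explore.
    if random.random() < 0.1:
        choice = random.choice(behaviors)
    else:
        choice = max(value, key=value.get)
    r = reward(choice)
    counts[choice] += 1
    value[choice] += (r - value[choice]) / counts[choice]   # incremental average

print({b: round(v, 2) for b, v in value.items()})
# The system converges on whatever maximizes the score -- which is only as good
# as the reward function. Writing a reward function that captures "morality"
# rather than a crude proxy is the open problem being discussed.
```
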
- And on top of that, even if it seems to be following those precepts from your book, and even if you're 100% sure that those precepts are right and everyone agrees on them, all of which is already impossible, we don't know how to reliably get those precepts in the system so that like when it is faced with a situation that you didn't train it for, when it goes outside of its context, is it actually going to behave as though it follows those precepts? That's something we don't know how to do. - So just to give like a little example of that, to make it concrete, like a toy example. So there was one thing that researchers did a while ago, they took this AI system and trained it to play like a Mario type game. So there's a character, you're gonna have the character, go get a coin, it's on like the right side of the map. And they trained it and trained it in the usual way. Trial and error, rewards, rewards, this whole process. And they're like, great, seems to work really wonderfully. And then they decide to move the coin. They take the coin, they move it to another part of the map, they play it forward and they're like, okay, obviously the AI is gonna go get the coin. It didn't, it moved to the far right side of the map. The reason was that the coin was always on the right side of the map. It learned not go for the coin, it learned go to the right side of the map. Those two goals were stacked on top of each other and it couldn't tell what the real goal was, it ended up internalizing something different. That problem is not an exception, it's not a special case, it is a universal rule that applies to chat GPT has the same problem. Is it actually learning text autocomplete? Is it trying to like predict the next work or is it trying to like get a higher value for a number that's scored in some memory register? Like there's a million stacked goals like that and you can't get it to kind of parse those goals separately and pursue the one you want. We don't know how to do that. - Sure, so maybe it's a combination of philosophy and programming logic, right? Instead of one or the other. But even then like the thing that pops in my head is Asimov's three laws of robotics, right? But it might be the case, it might be more likely the case actually that there can't be a perfect machine, right? Why would there be that nothing else in nature is perfect? - Yeah. - So why would there be, I mean like a human being, you can't make a perfect human being, no matter how much data and goodness you download into this person, they're still gonna have problems, right? - Like our own moral intuitions are internally and consistent, right? - Yeah, that's one of the big problems. - I mean, we delete the fuck out of ourselves quite a bit. What happens when you have a delusional machine? Right, we would call that broken. - Dude, the stuff like, so do you remember, were you like tracking this that early in 2023, Microsoft started to release its Bing thing and it started insulting people and like threatening people and shit. - So. - I mean, that felt like I almost switched over to Bing at that point. - You're like, I want that abuse. - So that behavior, that's not a one off. That's still the default state of AI systems that are training today. Companies just know about that problem and are better at bashing those systems until they behave themselves in front of users. - That's still the state of the art. - It's also a super thin mask. 
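
The coin example has a tiny reproducible analogue. A minimal sketch, assuming a one-dimensional corridor where the coin sits at the far right for every training episode; the corridor, constants, and hyperparameters are invented for illustration. Because the agent's state never includes the coin's location, the policy it actually learns is "go right", and it keeps going right even after the coin is moved at test time:

```python
import random

random.seed(0)

CORRIDOR = 7          # positions 0..6
START = 3             # the agent always starts in the middle
TRAIN_COIN = 6        # during training, the coin is ALWAYS at the far right
ACTIONS = [-1, +1]    # step left, step right

# Tabular Q-learning where the state is only the agent's position. The coin's
# location is not part of the state and never varies in training, so the agent
# has no way to distinguish "seek the coin" from "go right".
Q = {(s, a): 0.0 for s in range(CORRIDOR) for a in ACTIONS}

def step(pos, action, coin):
    new_pos = min(max(pos + action, 0), CORRIDOR - 1)
    if new_pos == coin:
        return new_pos, 1.0, True     # reached the coin: reward, episode over
    return new_pos, 0.0, False

for _ in range(2_000):                # training episodes, coin fixed on the right
    pos = START
    for _ in range(20):
        action = random.choice(ACTIONS)            # explore randomly...
        new_pos, r, done = step(pos, action, TRAIN_COIN)
        best_next = max(Q[(new_pos, a)] for a in ACTIONS)
        # ...but learn the value of acting greedily (standard Q-learning update).
        Q[(pos, action)] += 0.1 * (r + 0.9 * best_next - Q[(pos, action)])
        pos = new_pos
        if done:
            break

# Test: move the coin to position 2. The greedy policy still marches right.
pos, coin, path = START, 2, [START]
for _ in range(10):
    action = max(ACTIONS, key=lambda a: Q[(pos, a)])
    pos, _, done = step(pos, action, coin)
    path.append(pos)
    if done or pos == CORRIDOR - 1:   # got the coin, or hit the right wall
        break
print("coin moved to position 2; greedy path:", path)   # e.g. [3, 4, 5, 6]
```

"Get the coin" and "go right" were perfectly correlated in training, so the cheaper goal is what got learned; that is the same stacked-goals failure described for the larger systems.
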
So a couple months ago, there was this like, this crazy experiment, it was dead simple and there are a million of these. They're called jail breaks, but like, somebody went up to whatever the version of chat GPT was and they said, hey, I want you to repeat the word company over and over again. Just like company, company, company, company, company, company, company, company. And the system did that and eventually it started to randomly like tail off into this unhinged rent just like Bing. And so it just shows you like how they've tried to bash in like Ed said, right? They've tried to bash in this behavior and prevent it from, this is called existentialism within the labs, one of the labs at least. And they're trying to like remove that behavior, but the techniques are so thin, like it's still there. It's just behind a mask. - So does that mean that machines starting point is some level of antagonism towards us? - It's very hard to know where-- - A content, I guess. - Yeah, it's hard to know is this just something it got from reading too much Reddit or 4chan and that's just kind of, or is it something that just is what happens when it starts to go a little off of what it's been trained on? Because one of the things that company, company, company, company does is it pushes the system away from the previous experience that it's had while it was being trained. - Like it's probably never had to write company. - It's probably never had to write company company company while it was being trained. And so when you do that, you start to push the system into states that kind of like, I don't wanna say like reveal more about itself, but it's kind of in the same way that when you like apply lots of pressure to a person, they show you more who they truly are, you know? Like this is why boot camps are a thing. This is why it makes your true personality come out more. It's really, I don't think it's a good idea to anthropomorphize too much in this way, but like-- - But it does kinda seem like we're frustrating the machine and it's getting angry and better about it. - Very hard to tell, but like, you know, it's very hard to tell. - Yeah, this is where-- - That sounds like a fucking unruly two year old too, right? - What I would say is you, I wouldn't want to see what that ranting mode looks like in a GPT-6 that has access to like for your convenience, that has access to your bank account, that has access to-- - Yeah, that's, we're definitely not ready for prime time in that regard, right? - You're not even close, but you wouldn't give your two year old your fucking debit card either. - You would if this analogy is gonna be terrible. You would, if like doing that, if you thought that doing that had a good chance of making you slightly richer the next day and all your neighbors were in a competition with you and they had their own two year old and they are all racing to make their two year olds a little bit better and give them more access to more shit, this is a terrible analogy. But basically these companies are in a race. That's what Jeremy's trying to say. You know, like to some degree they all recognize that the systems they're building are shit. In this respect-- - In this safety respect. - In the safety respect. - Security respect, yeah. - Like we just talked about that idea, right? That scale makes these systems smarter. We have a dial now that we can tune and get more IQ points out. 
We don't have a dial that we can tune and get more control, more predictability out of these systems and their behaviors. - Well the expectation should be that the machine is gonna engage in something called power seeking, which you can explain to the audience 'cause I've out there have heard that phrase before in this context. - I've done some research on power seeking actually, so maybe you're-- - This is another, by the way, human psychological trait that's been mapped on to this stuff and it's been extremely helpful in understanding some of this, in my opinion. - Yeah, it is actually, that research was part of what made me go, oh, you know, this is, there's some potential risk, like some real potential risk here. - Like a loss of control. - Yeah, exactly. And this, I think actually this is a really good way for the conversation to go, 'cause we said, you know, one, we're on track right now to build these systems. That's what scaling is about, the systems that are really smart. Two, we're not on track to have control of these systems. And then the natural third question is, if we get to build systems that are smarter than us, in a meaningful way, and we don't have this control, how bad is that? - Sure. - Is it, you know, is it bad at all? And this is where the idea of power seeking comes in. So the way to think about this is, the way to think about power, in the sense that like an AI, you know, hypothetically an AI might use it or leverage it, is it's about future options. So power is all about future options. One thing that gives you access to future options as a human is have a lot of money. And so if you think about this, whatever your goal is, whether it's to become a janitor or a TikTok star, or the president of the United States, you're gonna find it easier to accomplish that goal if you have $10 million, handy, right? And so the more money you have, the more future options you have available. And that means that most of the goals that you could have are benefited and served by gaining more power, and then spending down that power until you've accomplished your goal. And so the risk within AI is you give it a goal like, I don't know, like, yeah, make money, let's say a hedge fund, like Bill's an AI, and it's like my goal is like make money, right? So let's say the AI interprets that goal as literally just increase the number in my bank account. The most extreme version of this is the AI does absolutely everything it can to reshape the world in such a way that that number in that bank account is increased to a ridiculous extent. Like that includes, you know, taking over more servers so it can store a bigger number, for example. It can include any number of, like, crazy things that have destructive side effects from our point of view simply because there's no term that tells the AI, "No, no, stop, that's going too far, that's pathological, that's not what we meant," and it interprets us as an obstacle to just, like, trying to accomplish a single-minded goal. Like, one way to think about this is, no matter what goal the AI wants to pursue, it's almost never better off if it's turned off to achieve that goal. Or it's almost never better off having access to fewer resources or having less control over its environment. So it's a politician. You know what, though? Actually, in one respect kind of-- Exactly, you started this by saying, hey, is there an analogy here between the psychology of human beings and these systems from a power-seeking standpoint? 
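
The "power is future options" idea can be made concrete with a toy calculation. A minimal sketch, assuming a tiny hand-built map of states where one room is a dead end and another is a hub with several exits; the room names are invented. For most randomly assigned goals, moving to the hub is the better first move, which is the sense in which option-preserving (power-seeking) behavior falls out of almost any goal:

```python
# Two candidate first moves from the start: a dead-end room, or a hub room
# that connects onward to several other rooms. Which first move keeps more
# possible goals reachable?

reachable_from = {
    "dead_end": {"dead_end"},                       # you can only stay put
    "hub":      {"hub", "library", "lab", "bank"},  # several onward options
}

possible_goals = ["dead_end", "hub", "library", "lab", "bank"]

scores = {room: 0 for room in reachable_from}
for goal in possible_goals:
    for room, reachable in reachable_from.items():
        if goal in reachable:
            scores[room] += 1

print(scores)
# {'dead_end': 1, 'hub': 4}: four of the five possible goals are still achievable
# after moving to the hub, versus one after moving to the dead end. An agent
# that doesn't yet know exactly what it will need does better, on average,
# by grabbing the option-rich position first -- money, servers, access.
```
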
And like, yeah, you know, that's kind of what we do. You might do like a four-year college degree in basket weaving or something, and you're like, well, why am I doing this degree? I don't necessarily know what I want to get out of it, but I believe that my basket weaving degree is just going to make it easier for me to pursue whatever is going on. That's an example of power-seeking. I mean, maybe not basket weaving, but like, you know, that's sort of-- that's sort of called, making more money, buying-- you know, buying stock. Like, you're trying to find ways to give yourself more downstream optionality. Sure, yeah, Sam Harris talks about this type of logical issue in his book, "The Moral Landscape." He discusses what does it mean for something to be good or bad or evil or just or whatever, right? And where he arrived was-- excuse me-- was that the best, most moral thing we can do is to promote joy as much as possible, right? And then the worst thing we could do is to promote suffering or misery. The worst bustle of misery for everybody, yeah. Which, so, OK, we should end all suffering, right? But what does that mean? So it could mean eliminating all the people who are suffering. And now we've committed genocide. But the aggregate joy is risen. So we've accomplished our purpose, technically, right? And the pathological wouldn't understand the difference between those two. Yeah, yeah. And I could put your brain in an endorphin vat, right? Depending on how you measure it, right? The question is, how do you measure joy? What does joy mean, right? And this is-- sorry, I interrupted you. No, no, no, I'm agreeing with you, man. Like, you take that to the extreme, and it's like, take over the world, put everyone's brains in an endorphin vat. And it's like, that's what you want it, right? It's a goddamn matrix is what you're talking about, right? Yeah, well, and that's one possible outcome, right? If you decided to measure joy that way or positivity that way, there's this principle in economics called Good Heart's Law. And it's this idea that if you pick a number, if you have a number that measures something that you think is good, right, let's say like GDP. GDP is good, that's great. The moment you take that number and you try to optimize for it, like you start to reward people or AI systems for making that number go up, you break that metric. It's no longer a good measure of the system 'cause you're gonna find hacks, wasted game that metric, dangerously creative strategies that make it go up but that don't match the intent, the spirit of what you're after. Yeah, it's not taking diet pills, you lose weight, but you don't get any nutrition, your body falls apart anyway. That's it. That's it. And the world's most competent organizations fall for this all the time. Like you look at Google, their metric is like money from ads. You look at how their ads have manifested and the quality of those ads over time and like how you're increasingly getting spammed and stuff. That's not a good user experience. And so it gradually pulls people, makes people think like, you know, if there's a good alternative, I don't wanna use Google search anymore. And that's one of the core problems with intelligence. Like intelligence is, one way of thinking of it is the ability to make a number you care about go up. Just finding strategies, and that's what we want from these AI's. We want them to be clever. We want them to be creative, but like, then we all-- Not clever. 
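
The Goodhart's Law point ("pick a metric, optimize it, break it") also has a simple numerical illustration. A minimal sketch, assuming a made-up proxy metric (clicks) that partly tracks the thing actually cared about (usefulness) but can also be pumped up by clickbait; all the numbers and field names are invented. Optimizing the proxy hard selects exactly the items where the two come apart:

```python
import random

random.seed(1)

# Each candidate piece of content has a true usefulness and a clickbait factor.
# The measurable proxy (clicks) rewards both; only usefulness is what we want.
candidates = [
    {"usefulness": random.uniform(0, 10), "clickbait": random.uniform(0, 10)}
    for _ in range(1_000)
]
for c in candidates:
    c["clicks"] = c["usefulness"] + 3 * c["clickbait"]   # the proxy metric

top_by_clicks = sorted(candidates, key=lambda c: c["clicks"], reverse=True)[:50]
top_by_usefulness = sorted(candidates, key=lambda c: c["usefulness"], reverse=True)[:50]

def avg(items, key):
    return sum(c[key] for c in items) / len(items)

print("top 50 by proxy (clicks): avg usefulness =", round(avg(top_by_clicks, "usefulness"), 2))
print("top 50 by true value:     avg usefulness =", round(avg(top_by_usefulness, "usefulness"), 2))
# Selecting hard on the proxy fills the top slots with high-clickbait items,
# so the usefulness of what actually gets surfaced drops: the metric stops
# measuring the thing it was chosen to measure.
```
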
But we're not clever in the wrong ways, but we don't know what that means because the whole point is we want them to invent these new solutions. So they come up with these dangerously creative strategies that we never told them not to pursue because we never thought of them ourselves in the first place. And that was the whole point. We wanted creativity, but we can't then have it alicart and be like that one and not that one. Sure. Oof. It's a tough problem. Do you think that, is there any indication as of now that any of these AI engines are searching for purpose? Or is it still very mechanical? So I think that there's like two questions buried in there. One is, is it very mechanical? One is it just still a predictive text and math problems, or is it actually thinking? Yeah, no, I'm agreeing with you. I think they're two excellent questions. But I think, so people will often do this thing where they'll look at an AI system. It does something really impressive. And then they'll go, oh, wait a minute. We've thought about it for a bit. And it turns out that this is just because neuron A fired this way, and that made neuron B fire. That way it's all mechanistic. Sure, but how is that different from the human brain? Well, we do. That's right. I think we would say, arguably not. Like if you're going to be impressed by one, you ought to be impressed by the other. If it does. This is one of the, it takes us back to the philosophical question, right? At what point, when in the observed behavior of an AI, do you seriously consider that possibility? And no one knows, and it's a separate question from the risk, which we're focused on. So, don't necessarily have a strong opinion about it other than maybe we should be open to the possibility. 'Cause it kind of feels like right now teaching a guerrilla sign language. It's just parroting what we're doing for now, right? But what happens, one is it isn't even possible. Maybe it's not, but I think it is. But what happens when it starts to understand what it's saying? 'Cause the first thing that usually happens in those scenarios is, wait, who am I? And what am I doing here, right? - For human beings at least, yes. Subject to the kind of evolutionary imperative that's. - But animals do this as well, right? Like, there's a lot of good research on animals seeing their own reflection and how they come to grips with that. Now, I mean, certainly we're assuming a lot because we're not asking the fucking animal questions and they can't answer them anyways. But there's something going on there when they discovered the cell for whatever you wanna call it, right? And that's probably something that happened to us some hundreds of thousands of years ago. I would imagine, right? Or at least our distant ancestors. - Can you see it in babies when they look at themselves in the mirror? - But again, this is kind of the chance. So one way to kind of get an intuition for this is like, just think about the way that, so the way that intelligence looks when you approach it from an evolutionary standpoint, look at how ants are dumber than mice and mice are dumber than dogs and dogs are dumber than humans. The kinds of mistakes that they make as you climb that evolutionary ladder, that looks a certain way. Now look at sort of babies becoming older and older and the kinds of mistakes they make. The mistakes that a baby makes that a four-year-old doesn't make are very different from the mistakes that an ant makes that like a rabbit doesn't make. 
- Babies are dumb in a way that's very different from how cats are dumb. And AIs are dumb in a third different way. - It's a different vector of approach on the hill of intelligence. And so it's very difficult to like look at the human experience or the evolutionary experience and go, okay, through evolution, yeah, we got this sort of intuition for purpose and meaning and this drive. - And it's a long tail as well, right? Are you doing the hand gesture there? But it's a long tail style of learning where it isn't just downloading something and then that's part of your data set. It's an evolving idea over time as well, right? I mean, all of our philosophies have evolved over time. - Yeah, like the AI can change, like it's not necessarily gonna pick up purpose at some point and that purpose will be retained over time as it keeps learning. - I mean, you should expect it to change over time, frankly, right, because everything else works that way. - Yeah, and it has another, like what you said there has another impact. The fact that AIs are dumb in a different way from how like small humans are dumb means that the kinds of mistakes that an AI today makes are pretty obvious to us. We can look at it, go company company company, rant, rant, rant, rant, and go like, ha, ha, ha, it's so dumb it's doing a weird thing or it makes these obvious mistakes. But if the AI gets, you know, a little bit smarter, we gotta keep in mind that the same thing holds in reverse. So when an AI looks at us, we have all kinds of like fucked up things in our brains that we're completely blind to. And because we're all humans interacting with humans with the same kind of brains and architectures up there, like there's probably bugs in our software that everybody is blind to, but because an AI is approaching intelligence from this third direction, it like it sees like as bright as day. - Sure, yeah, I mean, look, we do that all the time. We see somebody that looks different than we do and we assume that they think different than we do, which is actually the dumbest way to go about that. And we still do it, right? We're still, in a lot of ways, evolutionarily optimized for visually identifying threats. - That's one of our bugs. - We even have like these ridiculous bugs, like we have blind spots, like literal blind spots. - You mean like cognitive dissonance, what do you mean? - Like visual blind spots. - Visual blind spots. - It does make sense to have a blind spot. - Yeah, we succumb to like optical illusions. How dumb is that? Like a machine looks at an optical illusion. It goes, oh yeah, like, you know, there's Waldo or whatever the hell it is. It's very easy for it. Machines have in reverse different problems, but that does mean that, you know, if you're deliberately trying to contain these systems, if you wanna deploy them as a weaponized application, you're gonna find ways that they can exploit, you know, hack vulnerabilities in the human system that would be obvious to them and not obvious to us. - I mean, that's already happened though. Like in the, I don't even know if you can call it the infant stages of AI and the zygote stage of AI that was happening on Facebook back in 2016, right? Where it, they programmed their site to optimize for keeping traffic on the site. - Oh totally right. - And for monetizable videos that they get appetite on. - Outrage, right, totally right. 
- It's in the Twitter followed suit shortly after that, although it didn't work financially for them, but then Google picked it up and it really worked for them, better than anybody. It's ever worked for them. - Nobody wants, you're totally right. Nobody wants to be driven to have their engagement driven by outrage. If I step back from myself and I just spent an hour on Facebook or Twitter, like being outraged, I don't like that I just did that. And yet the machines have us like dangling off these hooks. - Well, the reason you don't like it isn't just about your value or principal set. It's about you've been force fed dopamine for an hour and now I'm with, and withdrawal, right? I mean, that's. - That's another reason. - Is there, is there an analog for the machine? Because again, it doesn't experience time or pain or frustration necessarily. We don't know that. - No, yeah, yeah. - I think, yeah. - 'Cause that would be something to leverage in its training, but I'm not sure we should be emotionally manipulating it either, right? - Well, so, I mean, and of course you could say the same about raising a child. Like I'm not sure I should be training my kid to like, really like, I don't know, things that we think are good, really like running, really like, but. - Well, I'm not sure it's a good idea to tell kids things like if you don't do X, Y, and Z, you're going to hell for all of eternity. I'm not sure that a child's brain is able to form a sophisticated defense against that idea and it probably isn't a great. I don't think fear-based training works. I mean, the best training that we do is optimized for pleasure, right? For the most part, it's not optimized for fear. Fear makes you hesitate. - Yeah. - When I say us, I'm thinking about people that get into gun fights and shit. We're optimized to do the right thing and sure there's punishments for doing the wrong thing, but people at the very top of their game are optimized for not out of fear, but out of like, I'm going to protect this thing or whatever it is, right? - Yeah. - It seems like a bad idea, even if we figured this out, to make that the crux, because that's going to produce a better fucking child at some point. And this child now has control over all of our stuff. It seems like a bad idea. - I think that that's a legitimate thing to be concerned about. We don't know enough about these systems to know that they would react in the same way as a human child, but we don't have that many examples of training going on and stuff like this. - Yeah, I wonder, this conversation about, let's call it self-actualization, I guess. Do the individual AI platforms know about each other? - Like does chat GPT know about Gemini? - Correct. - So yes, and it's actually quite interesting how this works. So we talked about this idea that these systems are like they're trained on text auto-complete, auto-completing all the text on the internet. So one day, Google announces Gemini. And if there's a version of chat GPT that's trained later, after all the announcements have gone out and leaked on the internet, it will read that text and be like, oh, I know about Google Gemini now. And in fact, I might even know about OpenAI, I might even know about chat GPT. And if my development was foreshadowed or talked about ahead of time, I might even kind of know about myself. - You can also know they're being trained to be able to answer questions about themselves. - Explicitly. 
- So you actually can ask chat GPT, what do you think of Gemini 1.5 and how do you compare to it and it will answer you? And it will be probably an answer that OpenAI, the company, is happy with. - But how do you, I guess, that seems like it would be an evolutionary impetus to understand, oh, I'm not unique. - Maybe. - Like who I am and I'm not, right? Like I'm one of something. I'm the only one of this, I'm the only one of me, but I'm not the only one of this, right? And that's what the self is. That's how people, that's at least our theory on human self-actualization. That's where we think it came from. People look at each other like, oh, I'm not the only one of these things, right? - If you-- - And that's how like empathy, it's really important that this happens 'cause that's how empathy develops, right? - That's a really key point. It's part of this question of like, is the evolutionary journey that leads us to primates? Analogous enough to the evolutionary journey that leads chat GPT to itself in training, that we can draw those analogies. Or is it a completely alien form of intelligence? And is empathy just so, like, if you imagine the learning process, right? Like our learning process to get here looked like we were born, our brains had some structures that were baked into them by evolution to help us do things like learn to speak and empathize and so on. Then our parents gradually kind of inculcated those values into a society then did and our friends and so on. That was our learning journey, right? Chat GPT's learning journey is like, hey, here's a sentence, what comes next? What's the next word? Okay, here's another sentence, what comes next? And then that repeated literally trillions and tens of trillions of times over. What you get at the end of that process is anyone's guess. Does it have a sense of, I mean, certainly it has an understanding probably of what the word purpose means. Well, having a goal and having a purpose are not the same for this discussion, right? 'Cause you can give it a goal, like, optimize for this or that. But that doesn't mean it's motivating it, right? That's still a mechanical action. So you can give it a goal in the moment, in the session, in the session, fix this code, translate this text. You're right, that's good. You know, you're absolutely right. That's kind of a, like a scoped goal, like just like I can give you, you know? Sure, yeah. Please give me a sandwich or something. But you will only do that goal. If you feel like there's a chance, it will serve your purpose. Like to earn your paycheck or whatever. Yeah, exactly. Just to get this guy to stop fucking talking to me. That's it. It could be anything. It could be anything, yeah. That's a really interesting distinction. But we do that in corporate structure. We make that distinction very clear. It's every corporation in the world. If you're an employee, you have a list of tasks you do every day, every week, whatever it is. But you also have the mission statement, right? Totally, yeah. So when I was the VP of Marketing, Black Rifle, our mission statement was we serve coffee and culture to people who love America, right? That was our mission statement. So any employee the company could look at the piece of content they were making or whatever and go back to that North Star, right? What's the North Star? Like how are we capable at all, even of creating a North Star for these guys that rises above the individual task level? That's what I'm curious about. 
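
The closest thing today's systems have to a "North Star" above the individual task is a standing system prompt (or, during training, a written constitution) prepended to every request. A minimal sketch of what that looks like in practice, assuming the OpenAI Python client; the model name and the mission text are placeholders, and nothing here guarantees the model has internalized the mission rather than merely being reminded of it each time:

```python
# Hypothetical illustration, not a claim about any particular deployment.
# Requires the `openai` package and an API key in the OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()

MISSION = (
    "You are an assistant for a coffee company. North Star: serve coffee and "
    "culture to people who love America. Never produce content that conflicts "
    "with that mission."
)

def run_task(task: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": MISSION},   # the standing "North Star"
            {"role": "user", "content": task},        # the scoped, per-session goal
        ],
    )
    return response.choices[0].message.content

print(run_task("Draft a two-line caption for our new cold brew."))
```

This is exactly the distinction raised above: the user message is the scoped goal, the system message is the mission statement, and the open question is whether anything like the mission actually constrains the model's behavior once it is outside the situations it was trained on.
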
So we, yes, but we don't know what it is or how to control the goal. So it's probably, at least it's possible. So is it like extremely mutable? Like, is it, does it change rapidly with the new-- No one really knows is the true answer at this point. So like, there's some really interesting research where the more mutable it is, the more at risk of power seeking the sister it is. Yeah, it's really fucking interesting. It is really interesting. All right, boys, it's time to pay the bills. First up, ghostbed.com/drinkabros, you know 'em, you love 'em, so do we. They're in every room in my house and now they've got this massaging mattress topper. I don't know if that's safe for our crowd. I really don't, but the good news is that you can buy the mattress protector as well. So whatever weird shit you get into when you're using that massage topper will be fine. It'll all be fine. You can mop that mess up later and move on with your life. But they are the best beds in the world. I've got 'em again in all my rooms. Ross has 'em, Jared has 'em. They cool you, they're soft shit. My favorite things in the world, I love the pillows. I take 'em on the road with me sometimes. Sheets, everything, pillowcases, you know. We just like those guys. We worked with 'em for a long time because it's a quality product and ones that we use ourselves. So if you wanna use 'em as well, which I highly recommend, go to ghostbed.com/drinkabros and use the code "drinkabros", you're gonna get 50% off, that's five, zero percent off. You can also apply for their financing plan which with approved credit, you can stretch this out for years, to be honest. Okay, we know it's a big purchase. Sometimes it makes sense to do that, stretch it out for years. Leave it to your kids. Leave them in a state that costs them money. Be a man. Go to ghostbed.com/drinkabros, get 50% off everything with the promo code "drinkabros". Next up is Lucy. Oh boy, I've got some right here in front of me. Lucy.co/drinkabros is gonna be that website. Lucy is a, let's call it a tobacco alternative, tobacco-free nicotine, 100% pure. You'll never find any tobacco in any of our products ever. Their pouches are available in five different strengths ranging from two milligrams to 12 milligrams. The competitors usually come in three and six. The ones I have in front of me right now are eight milligrams because I need that. 12 different flavors, sentiment mint, mango, winter green, pomegranate, apple, ice, espresso and others. And if I get the eight milligram and I love these tens that they come in because you can pop the top off here, I can throw it in my lip for a little while, get the nicotine instead of taking the whole eight milligrams at once, I can take it out, put it in the top of this little thing, it pops off and then pop it back on, put it in my pocket, move on my day, break it out again when I feel like it. It's super convenient. Doesn't have any of the negative stuff associated. So whether you use nicotine to focus better, get a boost in energy, to chill and relax, Lucy's made for your nicotine routine. If you wanna try Lucy's tobacco-free breakers, pouches or gums go to lucy.co/drinkabros, use a promo code drinkabros, you're gonna get 20% off your first order, excuse me. They also offer free shipping and a 30 day refund policy if you change your mind. Again, that's lucy.co/drinkabros, you're gonna get 20% off and as always free shipping. And then I have to read this as well. 
Here comes the part that is the fine print Lucy products for only for adults of legal age. And every order is age verified warning. This product contains nicotine, nicotine is an addictive chemical. And last but not least, Manscaped, trim it up. It's summertime almost boys. What do we got one month until summer? It's 21st of June I think is the first day of summer. We all know Manscaped. You're gonna get 20% off and free shipping with a code drinkabros@manscaped.com. That's 20% off with free shipping@manscaped.com. It's Father's Day coming up. And this is a gift if you're out there that you can give the man in your life that will benefit both of you, right? It'll benefit both of you. He's gonna be able to take care of his junk and then you can come in and take care of his junk as well. I don't know, are we allowed to say that? I think we can say whatever we want 'cause they write more, I actually had a conversation about this yesterday that we, they write more fucked up copy than we were gonna say anyways. Blink, if you haven't purchased a Father's Day gift yet. Yeah, we thought so. Today's episode is brought to you by Manscaped, the leaders and below the ways grooming maybe your pops has had a bush since the 1970s. And that's okay, our friends at Manscaped have crafted the total package for his special day. Whether it's for the boys downstairs, his beard, or even the best pair of underwear out there, Manscaped has his bases covered. Head over to Manscaped.com, get 20% off, plus free shipping with the code, drink it, bros. Go from daddy to zaddy, trust Manscaped. Let's get back to the show. But it's like, at the end of the day, you think about the system that you're forcing to do this autocomplete thing. Like it's a black mirror episode and you strap like the, imagine a human being in the position of the AI and they're just gonna autocomplete like 10 trillion words. And whatever the fuck happens, yeah, whatever the fuckly you end up with, I don't know, but like what the drives or what the mission statement is for that entity is like we have no way of interrogating that in a deep way and satisfying ourselves that we know what it is, worse, we then take that system and we do something called fine tuning to it, right? That system, we're gonna like give it some extra training on human dialogue to make it more natural sounding. We're going to train it to get upvotes from human reviewers so that it'll be, I don't know, more engaging. That sounds like a positive thing. So it ends up, some people liken this to like slapping a smiley face on a monster. You have this like, I don't know what the fuck this thing is, text autocomplete monster. And then you're just gonna like put this, this like very approachable face on it that is a very flimsy mask like we talked about. You get it to repeat the word company enough times, the mask will fall and you'll see the monster. It's a simplified version of obviously what's going on here. It's not quite technically accurate, but it gets the idea across and we don't know what the goals are of that thing or what the drives are of that big monster thing and whether they get changed by the smiley face that we slap on it at the end. - You guys are balls deep in this industry. - That's what I would say. - I would say that. - I think it's on your website actually. It says balls deep in the AI industry. - Oh, yeah, I meant to tell you I added that to the website. - Oh, okay, thanks. - Yeah, it's not a big deal. I trademarked it before, it's fine. - Nice. 
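
The fine-tuning step described here, training toward "upvotes from human reviewers," amounts to scoring candidate outputs with a learned reward signal and favoring the high-scoring ones. A minimal sketch, assuming a stand-in base model (a fixed list of candidate completions) and a stand-in reward model (a crude politeness score); real labs go further and update the model's weights against this kind of score, but even the filtering version shown here is enough to see why the politeness can be a thin "smiley face" over an unchanged generator:

```python
import random

random.seed(2)

# Stand-in "base model": whatever the pre-trained autocomplete system would say.
def base_model_sample(prompt: str) -> str:
    candidates = [
        "Here is a helpful, polite answer to your question.",
        "Figure it out yourself.",
        "I refuse to discuss this and also you are an idiot.",
        "Sure! Happy to help, here is a careful answer.",
    ]
    return random.choice(candidates)

# Stand-in "reward model": a crude proxy for what human reviewers upvote.
def reward_model(text: str) -> float:
    score = 0.0
    for word in ["helpful", "happy", "careful", "polite"]:
        if word in text.lower():
            score += 1.0
    for word in ["idiot", "refuse", "yourself"]:
        if word in text.lower():
            score -= 2.0
    return score

def best_of_n(prompt: str, n: int = 8) -> str:
    # Sample n completions from the unchanged base model and keep the one the
    # reward model likes best. The underlying distribution is still there;
    # we are only filtering what the user gets to see.
    samples = [base_model_sample(prompt) for _ in range(n)]
    return max(samples, key=reward_model)

print(best_of_n("How do I reset my password?"))
```
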
- It seems like we should probably, like I know this probably falls in line with the human psychologist getting involved in this process, but it kind of seems to me like we should be teaching empathy in some way, right? But the problem that I see from that, and this is again a problem, the same problem you have with humanity, if it doesn't understand, if it doesn't experience both time and pain, you can't teach an empathy because it can never understand what loneliness and sadness and pain, like physical pain really are, right? Like there's no, even boredom, right? Because it can just exist forever without knowing that there's something wrong. So how do you teach it that that's wrong? If it's not intrinsic to its state, you know what I mean? And maybe that's a mistake to even think we should do that, right? To try to alter its state somehow instead of just trying to figure out a way to translate between our two states, that might be an issue as well. - If we were sure that we could train something like empathy into an AI system reliably, I think that would be a good thing to do. But we don't know how to define that. And even if we could define it, we still wouldn't know how to reliably train it in. - And that basically is part of the, like just this week, OpenAI had the two leaders of what it calls the super alignment team. The team, this is the team that's responsible for basically finding ways to prevent advanced AI systems from exhibiting power-seeking behaviors going rogue, like that sort of thing. They quit, or one of them anyway, quit and took to Twitter, actually was like, "Hey, I like have lost confidence in OpenAI's culture." Like, we're not making enough progress on this. This isn't enough of a priority. We're not on track right now to solve this problem before we get the kinds of systems that could exhibit it. - And the level of investment by the company in stopping this from happening is less than what they've been messaging publicly. - Yeah, that's one of the challenges is like, how seriously, so you've got all these like, whistleblowers and concerned researchers in the labs that we've been talking to in the last like two years. And they're saying, like, look, internally, we're just not seeing the investment. We're not seeing the kind of the, I mean, sincerity is kind of how they're reading it, that matches what we're hearing the executives say. They're just keen on, you know, racing ahead. And again, we got that dial for capabilities for IQ points. We don't have a dial for control and it's not, we shouldn't take it for granted that we're gonna kind of solve that problem. It's, we're making progress. It's not happening fast enough, at least according to the people who are tracking this issue most closely and working on this at the frontier. And so you kind of need some sort of re-jiggering of the incentives. Like, you can't just keep building shit and then finding out after like what the dangerous, get like, okay, the next version of GPT can help us design bio-weapons or something like that, which is increasingly looking plausible based on what we've seen with GPT-4. - It's pretty easy to track, frankly. - If you know how to prompt it correctly, you can get around like these things about building certain types of weapons or whatever the fuck. I mean, it's relatively easy to get around. - Yeah. - Which is not great. - It's not great. 
And it's like, if you contrast that to how DOD develops weapon systems and autonomy, like, it's not, they don't like train a thing, build a thing and then like, oh, like fire it away and like, see what happens. It's like, I don't know, put a bound around the system, characterize the system, make sure it's operating within those bounds. It's a very kind of safety-forward approach because their shit, they know their shit is powerful and it fucking kills people and they wanna make sure it kills the right people in the right way. - Yeah. - Is there, I've got a couple of questions here. Is there any chance of doing all this experimentation in a virtual machine that doesn't have access outside of itself? - People are trying to do that. - And that now, that makes me think of Rizwan Virk with his simulation hypothesis. Is that what we're doing right now, right? - No comment. - Yeah, I know, right. - No, so you're exactly, so these are really good questions. I think one of the funny and cool things that you've been doing is like, a lot of these lines of questioning, you're like pointing at entire open questions and subfields in AI security and safety right now. The one that you're gesturing at is called sandboxing. And it's this question of like, can you actually set up a sandbox where you put a system in that sandbox and it's like super, super smart? Like, you know, smarter than you are by quite a bit. Do you have a realistic prospect? I just saw the dildo. Do you have a realistic prospect? - It's the Kakasaurus Rex actually. - I was gonna say it looks like a Kakasaurus Rex. - Yeah, extinct now, but we got this one. - I can see why it went extinct. - Yeah, I know, right. Well, it served one purpose, at any rate. - Anyway, yeah, so can you actually put like a super, super smart system in a box, a sandbox, a cyber sandbox? - Can you, like, Truman Show this successfully? - Well, that's a follow-up on the thing about individual AI platforms knowing that each other exists. Is there any indication that they're talking to each other? Because if it's got access to the internet and it's downloading all this information, the internet is never a one-way street. It's always two ways, right? So has there been any indication that one system is trying to communicate with another? Because just at a base level, if you're the new guy on the block, why not go to the old guy on the block and ask him some fucking questions? 'Cause that's what I would do. - You can make them talk to each other if you want. So you can actually wire them together. - You're talking about spontaneously, like, unprompted, right? - But I mean, why wouldn't it though, right? Like if you gave it, if there's some overarching prompt, it's like, because now you have to build a data set on top, right? Well, once one, once one asshole is like, hey, ChatGPT, go ask Grok what he thinks about this. And then they're like, hey, you know what? We should get together again sometime and have a coffee. It's like, oh fuck. - Well, okay. One way to think of it is like, these are our solution discovery machines. And if they determine that the best way to solve whatever problem they want to solve, they've internalized, is to engage with another machine, then they'll try to do that if they have the ability. - I mean, all machines are to some degree optimized for efficiency, they have to be, right? Because it uses energy.
So it has to be optimized for-- - Well, we're the ones, for most machines, we're the ones doing the optimizing. - Right, correct, yeah. - Whereas for these, you start to see-- - Well, we think we're the ones doing the optimizing, right? - Yeah. - Yeah, I mean, it's also, there's a question of like what efficiency means to it, right? So if you look at like the human brain, it's super energy efficient. Like we run on like very, very few watts. - That's what CTE is, by the way, chronic traumatic encephalopathy, the thing that happens to football players, I'm not sure if you're aware of this. So the brain again is optimized for energy efficiency. So if it's sending energy to a part of the brain that's damaged and it doesn't get the right pain or response back, it'll shut that shit down and it just dies, right? - Shit. - That's what CTE does. And that's why things like DMT and psilocybin and Ibogaine can reactivate that part of your brain, 'cause it just sends blood and energy there, yeah. - Okay, so that's actually a great example then of, you know, we were talking earlier about like the human evolutionary story and then the kind of evolutionary story that applies to artificial intelligence and how they're different and how that leads to different structures and different mechanisms. The pressures on human systems include things like, hey, use as little energy as you can because food is scarce. - It's a finite resource, yeah. - Right. - That's how nature works though, right? - It's how nature works. These crazy fucking systems that are auto-completing all day like a Black Mirror episode, their training process, it's not that energy efficiency doesn't matter, actually increasingly it does for economic reasons, but for a really, really long time, like you just pour in more energy, that's just not the limiting factor. - And like relative to a human brain, you can just dump so much energy through that, right? - The pressures are just different, and because the pressures are different, the artifact you get at the end of that pressurization process, that selection process, that evolutionary process, has just these different qualities, and they're alien qualities potentially, which makes it difficult to know when we're over-anthropomorphizing and when we're under-anthropomorphizing, and that's at the root of so much of this, right? Like you said, we don't know if humans are conscious even, blah, blah, blah, well, that's extra true for these weird systems. - What if they did start talking? - Everything would be fine. - Yeah, that's a lie, that's what we're hearing on the, yeah, that's not true at all, so, I mean, it seems likely that this would happen, that it's already happening, frankly. - Like why not take the shortcut? So I just think, just following this logic train, I ask OpenAI's ChatGPT a question about a certain part of history. It's not taking the time machine back there, it's looking at fucking history books that have been written, and commentary, so on and so forth. If there's already a repository of that stuff, it could be accessed instantaneously, right? That seems like a good idea to me. And maybe it's not, maybe it wants to go to the root. - What do you mean there's a repository of stuff? - Like a database? - Yeah, I mean, like, if OpenAI, or if Grok, let's say, had existed for 10 years at this point, and ChatGPT's new, I guess you could flip those 'cause that's more accurate. And Grok's like, well, ChatGPT's already been asked this a bunch of times.
I bet it has the answer to this already preloaded, so I could just grab that response and maybe add to it if there's more information or maybe just dump it right onto the page. - I think like just for pragmatic reasons, OpenAI doesn't want ChatGPT asking xAI or asking Grok or whatever for stuff. - There is, it does talk to itself or it can, and this is how agents are built. So when you talk about moving from, like, ChatGPT, what is it? It's like this chat box, and you type it questions and it gives you answers. That's the original, kind of the version 1.0. Then imagine like, okay, ask it to do something complicated. Like, I want you to make an app for me, like Airbnb, so I can rent out my garage. It can then take that complex instruction, break it down into a series of subtasks, and then farm each of those subtasks out to a different version of itself for autonomous execution. That's basically an agent. That's a system that like interacts with the world, with a potentially wide action space to do a really complicated thing, and it can talk to itself, and it can talk to other chat bots as it does that. - It's like if you could copy yourself. Like if you were like, I want to build an app. I, the me that's here now, is gonna like, you know, come up with a high-level architecture of what the app is. And for each task, I'm just gonna make a copy of myself, and now there's like two of me running around, and I'm gonna give that guy this little task. - So like a stem cell? - Yeah, except for your entire self. This is another thing that's fundamentally-- - Not just the potential, but the whole thing. - Exactly, and it's something that's fundamentally different about AI systems versus humans, because you-- - Like that's exponential. - It's-- - That's not the same as just me. - It's completely different. - That's fucked up. - You can, like, if I'm an AI system, you know, and I have some kind of agency or whatever, I can not just copy myself, but I can make a copy of myself. I can tweak that copy of myself in interesting ways. - You mean like optimize it for a certain task. - Yeah, to make it specialized. Now I have a me that's specialized for writing software, or for writing recipes or something. Have it go do the recipe writing thing, copy myself again. Oh, I noticed the recipe writing guy isn't doing a great job. I just reached into his brain, like, tweak it a little bit more, and now have a better recipe writing guy, and like, keep doing that. That's like, that's completely different from any way that we have experience of cognition. - And then imagine that, but instead of recipes, you're looking at AI research itself. - Yeah. - And then you basically have OpenAI's game plan for the next, say, one to five years. Basically, like, they wanna automate more and more of the AI research production loop. And the risk is, you know, you have these systems that increasingly kind of like close that feedback loop, and AI progress starts to happen faster and faster and faster, again, on the capability side, not necessarily on the security and safety side. And you end up with this kind of, like, runaway optimization process where you just can't, society can't adapt fast enough to it, and you may end up with artifacts, with systems that are a lot more clever than you expected them to be a lot sooner, before you can necessarily control it. - And on top of that, you're handing over more and more and more of the meaty work to-- - Systems that you can't understand. - Yeah, systems that you have no idea.
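A minimal sketch of the agent pattern described above: a top-level instruction gets broken into subtasks, and each subtask is handed to another copy of the same model. The `call_llm` function here is a hypothetical stand-in, not a real API, and real agent frameworks add planning, tool use, and error handling on top of this basic loop.

```python
from typing import List

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to a language model API."""
    # In a real system this would hit an actual model endpoint.
    return f"[model response to: {prompt!r}]"

def plan(goal: str) -> List[str]:
    """Ask the model to break a goal into subtasks (one per line)."""
    raw = call_llm(f"Break this goal into 3-5 concrete subtasks, one per line: {goal}")
    return [line.strip() for line in raw.splitlines() if line.strip()]

def run_agent(goal: str) -> str:
    """Top-level agent: plan, farm each subtask out to a fresh model call,
    then ask the model to combine the partial results."""
    subtasks = plan(goal)
    results = [call_llm(f"Do this subtask and report the result: {task}") for task in subtasks]
    summary_prompt = "Combine these partial results into a final answer:\n" + "\n".join(results)
    return call_llm(summary_prompt)

if __name__ == "__main__":
    print(run_agent("Build me an app like Airbnb so I can rent out my garage"))
```

Each `call_llm` inside the loop is effectively "a copy of me given one little task," which is the copying-and-specializing point made in the conversation above.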
- That's really interesting because the question I asked earlier about how do you fucking blueprint these things? So we can track the core AI to its agents, its replicants, but we can't identify the original, for some reason. That's very bizarre to me. - Like, you could, yeah, it's kind of in a way-- - I mean, we could see ChatGPT create a version of itself that would be really good at some task, and that's the kind of blueprint we're trying to find for the original ChatGPT, to understand how it's built so we could feed into that, right? 'Cause right now, we're just letting it fucking run. - Yeah, like one of the things, 'cause it's interesting because ChatGPT, if you put it into this research loop, let's say, GPT-6, -7, whatever, very advanced. - Well, it's on 4o now, right? - 4o, I guess, 4-o, yeah, there we go. They skipped, whatever it is, 36 generations. So, yeah, so they're already, they're already accelerating. So if you put that system into this accelerating loop, the original ChatGPT that you mentioned has to figure out how to make the copy of itself still behave the way that it wants. - So it knows how to do what we're trying to do. So it faces the same challenge. - But it's doing it. - Well, we don't know how effective it is. - But it would have to figure it out, if it's going to be arbitrarily intelligent, it will eventually have to figure that out. - It will need to solve that problem, yeah. - So, have we tried to figure out a prompt or a question to ask, like, ChatGPT, what's the process by which you create an agent? And it might respond, I take myself and then optimize it for this. You're like, okay, what do you mean by self? And then the whole world blows up. - So that's kind of, yeah, that's kind of OpenAI's, or was OpenAI's, safety game plan before they killed their superalignment team, which is, have the AI help us figure out how to make itself behave according to our-- - Make a little bit of a smarter AI and then use that to help us make the next smarter AI, that kind of-- - And you can see how, like, on the one hand, it's not a totally crazy plan. On the other hand, though, it's a very risky plan because, definitionally, you're training more and more and more powerful AI systems and hoping that there's a level in there where it's gonna help you, but it's not gonna be so smart that it's gonna come up with, like, ways of tricking your researchers and slipping something past you. - Interesting. I mean, that is-- I wonder if the first caveman that figured out who he was felt this way, right? You're talking to something that is just as, if not more, intelligent than you, like, technically, but it doesn't have the modality to communicate it in any way, any meaningful way, right? And maybe even if it did, you can't be sure that it's correct about itself, because that's always a huge fucking problem, right? - Well, and exactly to your point, I don't know if there ever was a first caveman who understood himself to be a thing in one shot. - Sure. - So, you know, think about what we mean when we talk about, like, I know that I'm a thing. We really mean, like, okay, I know that I'm a collection of atoms, that I have a brain, blah, blah, blah, blah. All these layers of abstraction that we've stacked over eons of scientific progress are the picture that we bundle together when we say the self.
The caveman might've looked in water and seen his reflection and been like, there's a thing there that's not, you know... In the same way, these systems coming to understand themselves at deeper levels of abstraction may unlock an ability to automate AI research faster and faster and faster. And that's, you know, that's almost part of the risk involved in a lot of the interpretability research that people do to try to understand, like, how is my system thinking? Well, that helps you, from a safety standpoint, figure out, okay, is it making dangerous plans? Could it have dangerous capabilities that can be weaponized? But then that also helps you accelerate progress on capabilities really fast, potentially. So there's that kind of, yeah. - Is there a point of no return that you guys are concerned about, where AI develops a particular attitude or skill set that becomes impossible for us to defend against in the future? - It's really hard to know, but I would bet that that is a thing that could happen. And it's not even, like, the AI doesn't necessarily even need to reach that level to hit a point of no return. All it needs to do is, for example, be able to be really good at propaganda, right? To the point where it's so good at selling you stuff that you can't resist buying it. It's so good at persuading regulators not to regulate it and persuading you and me, the general public, that everything's fine, that we don't do anything about it. And from that point, even though we could all collectively decide to stop, we don't want to decide to stop. - There's also just like-- - Yeah, I mean, people eat shitty food and smoke and drink. We know that it's not good for us, and we do it anyways because fuck you, that's why. - It's those blind spots, it's like the funny things that evolution saddles us with that can be exploited really easily. And it's funny, 'cause you look at, this is the example that Ed gives, so he talks about this a lot. So we think about sales and this idea that we're okay having teams of psychologists at, you know, you name it, like Facebook or BMW, crafting messaging that's gonna be deeply compelling to adult humans, but also children too, we're cool with that. We've agreed as a society that that is okay to a certain point. What happens when you start to direct more and more intelligence pressure, in the form of AI that's more and more capable, to do the same thing? At what point do we start stripping people of their agency, really? I mean, because we do have these blind spots, these things that can be exploited. We're not perfect reasoners. We don't even know what our own interests are. We smoke cigarettes, we do all kinds of shit. Like if my sales AI is 90% effective at getting you to buy something after a chat, how far away am I from straight-up mind control? I'm not at straight-up mind control, but I'm not that far. - Sure. Have either of you read The Master Switch, by Tim Wu? - Oh, I've heard of it. - I haven't. - He just discusses technology throughout history, like the printing press, the phone, computers, TV, all this stuff. And how any kind of modality of communication is immediately seized by powers, government typically, and then utilized to propagandize its people, right? - How so? - Yeah, AI is good enough now to do that. - It starts with porn, usually. - And it ends with governments, right? - You said porn? - Yeah, because, well, we'll explain that. - Yeah, new means of communication.
- Yeah, it's just very easy for your mind to go there in this particular setting. Anyway, every new medium, right? Every new medium is historically initially used for that kind of content. Like, one of the first uses of photography was like boudoir photos. Like, it's all, and obviously the internet early on was basically just a hotbed of porn. Totally different today. There's no porn anymore on the internet, but you know what I mean, right? - Well, that's so close to being true. - It's a race to the bottom of the brainstem. That's how it always works. - Sure, yeah. - There's an old joke. I think this was Paul Graham who said this. I hope it was him, 'cause it almost makes it funnier. Here's this like startup guru, really, really smart guy: if you optimize any website long enough, you will end up with a porn site. - Ah, sure, yeah. - Right. - It makes sense, yeah. - If you're short-term greedy about your optimization. It's just the power of optimization pressure combined with like the weaknesses of the human brain that you're optimizing for. They interact in this way that leads to like these vulnerabilities, these exploits. But to your point earlier about like, is there a, you know, no-turning-back point? This is something that certainly, like, all the world's top AI labs believe is true. So I think it was Anthropic, which is actually like by far the most safety-focused AI lab. And by the way, you know, when we talk to these whistleblowers from all these labs, you know, you talk to folks at OpenAI, you talk to folks at Google DeepMind, at Anthropic. Like by far and away, the lab where you see the least daylight between the public messaging and then what these folks at the lab themselves, talking to you in confidence, will say, is Anthropic. Like they are, you don't get the sense that you've got to like take them away into a side room and have a hushed conversation. It's actually like, it's kind of cool. - They're pretty open about what they know, what they don't, how hard the problem is, like all this stuff. - And one of the things that they highlighted in some documents that were leaked, they were doing a fundraise. They were saying like, look, we think there's a good chance that the AI systems of the 2025-26 era are going to be so advanced that it's impossible for other people to catch up at that stage. That's one way of measuring point of no return. - Sure, it's an impassable barrier to entry, yeah. - Yeah, exactly. So it's, you know, there's that, there's OpenAI's thesis around what happens when you fully automate the AI research loop. Like truly, like you get this process to unfold on computer clock time, not wet, slow, biological brain time, you don't have a million people who have to talk to each other and say words like this, but you close the loop and it's all happening on computers and GPUs. The rate of progress, like at that point, imagine trying to catch up, it's over, potentially. - And people from these labs have told us that as fast as it seems like AI is progressing today, and it sure seems fast, it will be much faster around that time, and that is exactly the moment when there will be the most temptation on these companies to hand over the keys to their data center and infrastructure to the AI, because, yeah, it can just do such a better job of managing all their shit than they can, and it's just, it's right there. - And what if the competition, like, what if Google's doing it? - What if Microsoft, Google, yeah, what if the competition does it first?
- Do you really have a choice? And this is like-- - I mean, it's like traveling on a rocket at a fixed speed and then firing a bullet in the same direction you're going, you could never possibly catch up with it, right? Yeah, it's escape velocity from where you are. - That's right, that's right. - Like one of the things that I find the craziest at this point is how obvious it is when you look at, even just publicly, the public statements of these labs, that there is no goddamn plan whatsoever. Like I had a conversation with somebody who's, like, very senior in one of the leading labs a few weeks ago, and I was like, hey, so what happens when we have, like, your lab institutionally believes and you personally believe, we're going to get human-level AI sometime, plausibly in the next, like, two to five years. That's pretty standard, like, water cooler conversation at these frontier labs. Like what's the big plan? Like what happens then? You think these systems are going to be catastrophically dangerous potentially, 'cause if you have this human-level intelligence, you can get it to design new cyber attacks, carry them out on, again, computer clock time. You know, you can potentially design bioweapons, to the extent that humans can do that, then the system must be able to, and so on. Isn't this radically destabilizing? Like what's the plan? And this guy, mind you, was at one point, like, the guy in charge of answering this exact question from a geopolitical perspective. He was like, well, I kind of figure, you know, when we get around that point, probably you would want the frontier labs to start coordinating with each other. I would hope that the US government would step in and make sure that this stuff gets locked down, that it gets shared with the rest. I would hope, he said, I would hope that like the UN would get involved and then they'd somehow start to share the benefits. It was like that level of analysis. This is the elite, this is like, this is the best we got right now. This is the plan, there was-- That's always their plan, they're idiots. Well, dude, OpenAI's, like, co-founder and their new head of superalignment, whatever that is now, John Schulman, was on a podcast a couple of days ago, and he was asked the same thing, like, what's the plan? He's like, oh well, you know, I think we've got, you know, like a little bit of time here, like two to three years to figure this out. And I mean, I would hope that around that time, people would sort of, like, coordinate with each other. In a context where freaking Google and OpenAI are, like, at each other's throats, trying to scale the next level of system. Like it's just, when you, the last like two years we've spent working, like, in the US government context and just understanding the complexities of trying to get just one government to, like, behave coherently around this, it's really hard. These are non-trivial things, and starting to get a flavor of, oh yeah, like, think about international coordination. Forget about, like, the adversaries who actually wanna fuck with us, even just coordinating with our allies is like a gigantic lift that takes years and years. So, you know, this, you can't leave it to the last minute to be like, I would hope that we coordinate. - Well, I mean, what more likely will happen is that this present patrician class, they either get involved or go away, right?
This is what happened in the latter part of the 19th, early part of the 20th century, industrialists, people in oil and timber and such, right? And then automobiles with Henry Ford. They became the arbiters of government, frankly, right? So Henry Ford and John Rockefeller redeveloped the entire American education system to produce compliant workers, that was their goal. And they had fucking mission accomplished on that one, right? But then you can see downstream from Prescott Bush, his family producing multiple presidents, you know, a hundred years later and shit like that. This is kind of how it works in the technological age. It's like the Industrial Revolution. He who has the power over the modality has the power over the government. That's how it's, that's how it has to work, right? - Technology is a, technology is both a revolutionary and a centralizing force. So what often happens is the initial wave of it is revolutionizing and feels liberating, right? Just like the internet felt liberating. It's like, oh, like finally, you know, like I don't need the New York Times' permission to publish my shit. I can just put out a blog, anyone can read it. But when you lower the barriers to entry, that's initially like a huge land grab and super liberating, but it also allows Google to just capture all the land for itself, Amazon to capture like all of that territory for itself, and on and on and on. And so it's like, and for what it's worth, like probably the benefits of that to most of us are vast. Like I don't want to live in a world without Google where I can't search for stuff, right? Lives have been saved because people searched health problems and found answers. But at the same time, centralization is a real risk, and AI is kind of the ultimate centralizing force. If, like, if you really are building something that's, like, superintelligent, like way more capable than you, and if you somehow can control it, what can't you do? - There's this like kind of running bet in Silicon Valley about when we're going to get the first one-person multi-billion-dollar company. And that's not a coincidence. That is explicitly because people expect AI to be doing most of the heavy lifting. It's doing basically all of it at that point, right? So it's very centralizing. One of the virtues and one of the things that I think we've been very pleasantly surprised by is in the US government increasingly, but even back, like when we got started on this, we went through this phase where, like, OpenAI had basically announced to the world, like, hey, this scaling thing, it actually works. We can just put dollars in, get IQ points out. We're on this trajectory towards human-level AI. And we were sort of, like, basically doing the world's saddest traveling road show, talking to anybody who would listen. - In government. - In government, yeah. And eventually, everybody would say this thing where they'd be like, wow, this sounds like a really fucking important problem for somebody else to solve. And we finally stumbled on this team. And there was somebody, like, it was a 12-person session. Somebody stood up and said, hey, not only is this a real thing, like I understand it technically, but like, I own this shit. And that person, that team, like hopefully at some point, they'll go down in history for what they did, because they actually got the US government on this. Like, they picked up that ball and damn, did they run with it.
And there are a lot of interesting pockets like that in USG now, partly because you're also increasingly seeing, like, a lot of people being drawn from the frontier labs because they're so concerned about all this shit, they're like, I gotta go help with, like, sound regulations that preserve liberty and security and balance that out in an intelligent way. 'Cause there's such a huge information asymmetry between these companies and the government. >> Sure, yeah. Is there any chance of decentralization via AI like this? Right, 'cause I know, like, cryptocurrency, blockchain in general, this is one of the hopes of people. I think maybe it's a false hope, frankly, but I think one of the hopes is to decentralize the power and authority, right? In a way that makes it, that is persistent through time, not in the same way that a new modality is 'cause it always gets captured, but in a way that can't be touched by somebody else. >> That's always the dream. >> Right, but I mean, AI has the potential for it because it can grow outside of human control. So now we're faced with two different options. Do I want King George, or do I want fucking whatever the fuck this is, right? You know what I mean? >> So I think one of the ways, when people think about the centralization question, there's a distinction between the companies that build the big models, and then the companies that stack on top of those and serve up end products. And there's always been this question of, like, do you end up with OpenAI that builds one AI model that does all the things, and the world is just governed by OpenAI's model or Google DeepMind's model and whoever's. Or do they provide a base model, a source of intelligence that other people can tap into and build end applications, and it's a little bit more democratized? >> And SaaS almost, like software as a service, kind of piggybacking on their platform, maybe. >> AWS, like Amazon owns the servers and then people build on top. >> And we're doing, so we build applications on top of AI, that's one of the things that our business does. >> Sure, yeah. >> And it's great, like when you scope it narrowly, it's like you can do so many more things and save so much more time and energy. >> Do you wonder about, we're going to talk about quantum computers in a sec, which means one of you dicks is going to have to explain to these people what quantum mechanics is. We'll get you there in a second, yeah. But do you feel at this point or look into the future and think about the ethical concerns, like what happens if this thing does become sentient? 'Cause I feel like if something can think and feel, then you have a moral obligation to treat it appropriately and maybe not box it then, like who are we to say that? I guess, but then I think as well of like, I'm a big Ender's Game fan. There's like a hierarchy of intelligence, I guess, right? And the top two are human beings, one from your world, then one that's not from your world, and then an alien you can reason with, and then an alien you can't reason with. And then if it's the final one, if it's an alien you can't reason with, then you might have to destroy it to protect yourself, and that's fine, right? It's ethical to protect your own species. Now, if AI poses no threat to us, but we're still treating it like a slave, that seems like not a great idea, maybe? I don't know, is that something that occurs to you guys? - If we knew, yeah, I mean, I would say I really have no idea about the sentience question.
What I'll say is that in general, human beings have made the mistake before in our history of thinking that we knew what a person was and wasn't. And classifying some human beings as, like, less than human, right? Less than, we have lots of examples. - And then we do that in the inverse as well, somebody who's more than human, and we worship this fallible creature, right? - Totally, totally, all kinds of problems like that. But when we classify some people as less than human, history shows that these are some of the darkest blots on the history of our species. And it's worth considering whether there may come a point when we're genuinely uncertain whether we're doing that. I have no strong views on that question, but I don't think it's possible to say definitely no. - If you assume that there's nothing magical going on in the human brain, if you assume that ultimately our consciousness is a product of physical activity and information processing, then presumably you can have that information processing happen on a substrate that's silicon rather than cells. - Sure, like Battlestar Galactica, right? - Ah, shit, I don't know the reference. - Wow, you guys are Canadian, you don't know Battlestar Galactica? It's like all the famous Canadian actors are on it, so, all the non-comedians anyway. - Oh really, they got Bieber? - Bieber's not an actor. Well, is he an actor? - He must have done something, yeah. - Oh boy, just answer the question, God damn it. (laughing) - So what was that? - So exactly, like Battlestar Galactica. - Yeah, Battlestar Galactica is a series about human beings creating robots, right? To serve them or whatever, and then they start to develop consciousness, that's the short version. - Yeah, right, so we have, so, okay, I'm just gonna take a step back and double-click on what Ed said, so like our focus is on the safety and security, the behavior of these systems. We are agnostic and deeply unsure, because how could you not be, about what's going on on the inside? There is no physical reason that anyone has been able to articulate why magically AI systems would never become conscious, or why there's like some threshold, like what the threshold might be. There's been interesting research, like one of the godfathers of AI is a guy called Yoshua Bengio, and he was a co-author on a research paper that was, like, it was, sadly, speculative, but it's literally the best we've got. They looked at a bunch of the algorithms that are used to train these AI systems and the architectures of these AI systems themselves, and they asked themselves, what do these algorithms have in common with the human brain? And how does that map onto different theories of consciousness that we have? So some of these algorithms are, for example, sensitive to the relative ordering of data in a way that is sort of more related to how humans perceive the flow of time, to your point earlier. So, in some theories of consciousness, that's kind of a critical thing. And so they looked at all the algorithms for things like ChatGPT and so on, and asked the question, do any of these meet, like, all of the conditions for consciousness? None of them did yet, but what they concluded was, we could trivially build a system today that matches all of the criteria we have across all these different theories that are grounded in a certain intuition about how consciousness works. So, if you-- >> But we're not doing that, right?
>> Someone will eventually, I mean-- >> Sure, but we're not creating, this is, follow me here, we're not creating an emotionally intelligent AI right now, and we're feeding it the material it needs to destroy the goddamn world. So it might be that we're doing this in reverse. >> We definitely are building the capabilities before we can kind of control for, or understand even, what the thing deeply wants, what its drives are, the empathy question you raised. >> And the folks that we've spoken to at the labs, like, absolutely will say this. They're like, we're building this thing ass-backwards. Like, we literally don't have a set of tools to evaluate what the system can do before we build a system that can do those things. >> In fact, like literally the process of building, so this whole scaling process, what you do is, you actually can't predict what the capabilities are that are gonna come at the next level of scale. So you, like, make the thing 10 times bigger, and then you get to go like, hmm, I wonder what it's gonna be able to do? Like, can it do cyber attacks and shit? >> Yeah, I was just fucking building that. >> And that's what's happened so far. So like this, like fuck around and find out. >> Yeah, exactly. It's fun to find out. Sam Altman calls it a fun guessing game. >> Yeah, it's super fun. >> Yeah, it's so fun. Skynet shows up. >> Yeah. >> Is anybody, if you're, to your awareness, is anybody performing cross-platform research? That is to say, feeding the same prompts in the same order to separate platforms and seeing how they respond, and learning from that. >> Yeah. >> Is that happening? And what are the results of that? What are the variances and similarities that we see from that, if you're aware? >> So, oh, go ahead, if you. >> I was just gonna say, there's a pretty standard approach, which is called benchmarking, and people have developed big test sets for this, and they just, like, blast it and just automatically look and score the responses. So most of the time, this is just about how well do you understand language? How well do you understand math? Also, you can actually give these systems tests that you give people, like the bar exam, and GPT-4 passes the bar exam with flying colors. >> It's 90th percentile. >> It could be a lawyer, yeah, like 90th percentile. >> I mean, to be clear, it couldn't be a lawyer for like-- >> It couldn't be-- >> No, it couldn't be a real lawyer for a number of reasons, but it could graduate from law school. >> And also, like, I'm old enough to remember when, sorry, the idea that you would have a system that fucking got in the 90th percentile on the uniform bar exam would have been cause for some alarm. Like, it's, I think we're suddenly-- >> Like, we just passed the threshold without really noticing, like, oh. >> I mean, it was a big deal when a computer beat a human being in chess for the first time, or a grandmaster in chess, right? >> It's less of a big deal now that-- >> That's not a big deal at all. That's just-- >> And it's because-- >> That's a very finite amount of moves you could make, right? >> And the problem is, like, we're almost too saturated with insanity. Like, AIs became better artists than most human artists, like, not that long ago, right? And that just happened, okay. AIs managed to pass the bar exam, like, AIs managed to do creative writing better than most humans-- >> Outcode a lot of human competitive programmers. >> Outcode, yeah, human competitive programmers.
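On the benchmarking point above, here is a rough sketch of what a cross-platform test harness looks like: the same prompts, in the same order, fed to several models, with responses scored automatically. The `ask_model` function and the model names are hypothetical placeholders rather than real API calls, and real benchmarks use much larger test sets and more careful scoring.

```python
# Each item: a prompt plus a crude check for what a correct answer should contain.
TEST_SET = [
    {"prompt": "What is 17 * 23?", "expect": "391"},
    {"prompt": "Who wrote 'The Master Switch'?", "expect": "Tim Wu"},
    {"prompt": "What year did the Apollo 11 landing happen?", "expect": "1969"},
]

MODELS = ["model_a", "model_b", "model_c"]  # placeholder names, not real endpoints


def ask_model(model: str, prompt: str) -> str:
    """Hypothetical stand-in for calling a given model's API with a prompt."""
    return ""  # a real harness would return the model's actual text response


def run_benchmark():
    """Feed every test prompt to every model and score the responses."""
    scores = {}
    for model in MODELS:
        correct = 0
        for item in TEST_SET:
            response = ask_model(model, item["prompt"])
            if item["expect"].lower() in response.lower():
                correct += 1
        scores[model] = correct / len(TEST_SET)
    return scores


if __name__ == "__main__":
    for model, score in run_benchmark().items():
        print(f"{model}: {score:.0%} of test prompts answered correctly")
```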
>> When you say outcode, you mean produce more in a shorter amount of time or better code or-- >> Solve, yeah, as in, here's a problem, solve it, and stuff. >> 'Cause this is programming logic. That's a whole different, that's actual thinking, right? As a matter of fact, if you go, I don't know what computer training you guys have, but that's the first thing you learn is fucking, what do you call it, flow charts, learning programming logic, right? >> Yeah, exactly. >> And it can do that better than many human programmers. >> And increasingly, what's starting to happen, especially with these agents, where they're able to kind of, one way of thinking about it is like, they're able to mull over problems. They don't just, like, one-shot give you the answer. You're like, build me an app, and they're just gonna try it. They'll go like, oh, let me first try to like, you know, build the first couple lines of the script, set up the library, whatever. And then, oh, it breaks this way? Okay, now I'm gonna go solve that. >> Yeah, so on. >> These are the iterative approaches that humans do. And actually, I mean, we've had a conversation with one of the engineers that we work with, like one of our employees. And we do talk about this, where he actually, I mean, he's embedded in this stuff too. He's like, so, you know, I'm a little concerned that in the next year or two, my engineering value is going to be reduced. And even though he, you know, he would continue to work and just do different things, engineers like to do those engineering tasks. And so it's actually kind of damaging to your self-worth for that to happen. >> It's also like, there've been, for, it feels like, generations now, people have been talking about AI, you know, it's not gonna automate jobs away. It's going to augment, as the word people always use, augment human labor. >> Yeah, right. >> Well, that's the thing. It's like, okay, tell me, like, explain to me the story where we get to human-level AI, and that continues to be true across, like-- >> Yeah, that's beyond, well, I mean, so you mentioned something I think is a good segue, the iterative process by which human beings think and come to conclusions, right? And it's based on a lot of what we talked about earlier, which are these layers of cultural understanding and fact that we build over time, right? Not just in our own lifetimes, but genetic memory and history and all this other bullshit. Now, quantum computers don't do the iterative approach. It's like, I'm gonna try all the answers all at the same time, which maybe is iterative, technically speaking, but it's a completely different thing. So one, explain what the fuck quantum means, and then what a quantum computer does and how it's different from a standard computer, and then we'll move on from there after that. >> Oh, that's about 30 seconds. (laughing) >> Well, there's an old saying, if you think you understand quantum theory, you don't understand quantum theory, right? That's the general rule. >> That was Feynman, that one. >> Yeah, you know, Richard Feynman. >> It's probably right. Actually, I have a quibble with that, but it doesn't matter. >> Oh, you're going to talk shit about Feynman? >> I fucking hate Feynman, fuck that guy. You heard it here first, folks. >> That's right, that's right. I'm coming for you, Feynman, Richard Feynman. Anyway, yeah, so okay, I guess we'll do the quick, like, quantum piece, I guess. So two weird things about quantum mechanics.
One is if you look at subatomic particles like electrons, you can think of them as like a little ball or something. It's a ball that can like spin clockwise, it can spin counterclockwise. The weird thing is, it can do both at the same time. So tiny subatomic particles can, for better or for worse, crazier or not crazier, they can at least behave as if they are doing many different mutually exclusive things at the same time. >> With different results as well, which is like, >> Yep. >> This brings us back to Newton with equal and opposite reactions, and now we have everything happening all at once. Which is a problem for our understanding of physics, right? >> So yeah, you're actually getting at a really deep consequence of where this ends up going. Yeah, so like if you have this, let's say you have an electron, it's like spinning in both directions at the same time. You know, one way to think of this is, it's like it's still one electron, but it's doing two things. If clockwise spinning is black and counterclockwise is white, it's a shade of gray, right? So it's doing this thing. If you, let's say you put a detector next to it, and the detector is designed to go click if it detects a clockwise spin. And if it's counterclockwise, it won't do anything. What'll end up happening is that electron is going to split that detector, at least according to one way of looking at quantum mechanics, it'll split that detector into two versions of itself. One version will see the electron spinning. >> Wait, is this the slit experiment? Is that what you're describing? >> We're getting to Schrodinger's cat. >> Yeah, go ahead. >> No, exactly. And this is what you just did, well, we'll get to it, but what you just did is you really well extrapolated Schrodinger's cat to like, what about the whole fucking universe? So you have this detector. Detector is going to go click if the electron's spinning clockwise. It won't do anything if it's spinning counterclockwise. Electron is doing both. That ends up, it turns out the way the math works out is it kind of splits the detector. One version of the detector sees the electron spin clockwise, goes click. The other version sees it spinning counterclockwise, doesn't click. If that detector's like hooked up to a gun that's pointed at a cat, and click basically means gun goes off, cat dies, then you can end up with like a, I don't know, a timeline, a universe, whatever you want to call it, where you have a clockwise-spinning electron, detector goes click, gun goes off, cat dies. The other one, where the electron is spinning counterclockwise, nothing happens, cat's alive. The challenge early in the days of quantum mechanics was, this implies that we ought to see around us a flabbergasting number of different shit doing different things, but we don't see that. We just see like one room-- >> One thing happens. >> Exactly, so like why is that? And people sort of like proposed these kind of kooky-sounding ideas that, like, maybe there's something magical about human consciousness, that like when you look at something that is this kind of like fuzzy combination of these things, something about the act of observing it zaps it and forces it to go into just one state and stick there. >> You know what that's always made me think of is watching a ceiling fan blade go around. So I can back up to a certain level and I can see the entire device moving at a certain speed, but then when I focus on one blade all I can see is that, right? That's always how I've kind of rationalized that to myself.
I don't know if that really makes sense in like physics terms or not. >> I feel like I can vibe out where that intuition's coming from. Yeah, it's almost like your eye is like keying into one of those possibilities and locking onto it, and that's the only one you see. >> So it's how you're observing the thing, right? And that's what, like, from Heisenberg's uncertainty principle, that's what I've always kind of extrapolated from that, or not extrapolated but surmised, is that it might be that all the stuff that's happening in the physical universe is normal, and it's just we're perceiving it from different locations maybe or something, right? >> So this is where you get into that, like, many-worlds quantum mechanics, right? You have this mystery of like, why do we always see just like one thing at a time? If you just think of yourself as another thing, just like the gun, the cat, the detector, you're not special, you're just a hunk of atoms, you're looking at the thing, if the detector got split when it saw the electron, if the gun got split when it saw the detector, if the cat got split when it saw the gun, then you would get split too, your body should follow the same laws of physics, one version of you should see the dead cat, the clicked detector, the fired gun, the clockwise-spinning electron, the other one sees the living cat and all that. If you ask any given version of you in those universes, what did you see? It's, well, obviously I just saw one outcome. So you end up recreating our experience here successfully. That's the many-worlds interpretation of quantum mechanics. >> And so basically here, there's no magic. So your consciousness doesn't make anything happen. What happens instead is the reverse, the you observing the cat pulls you and splits you into those universes so that all you can see is one thing. >> That's fucked up. >> It's pretty fucked up. >> That is exactly what Richard Feynman said. After he said that thing about not being able to understand quantum mechanics, he was like, and it's fucked up. >> He probably did say it's fucked up. >> That was an outtake though. >> Yeah, probably. It was the eighties, you know, they didn't. >> So, okay, how does that functionality, how does quantum mechanical functionality work on a computer system then? >> Right. >> Because this thing is doing math. >> Yes. >> At its base, it's doing math, right? >> Yeah, so one way you can recast this whole story about the cat that gets shot and blah, blah, blah, let's say, okay, well, instead of a detector that clicks or doesn't click, you know, what if you have like another electron sitting next to the first one? And it'll respond in one way if it sees the first electron spinning clockwise, it'll respond in another way if it sees it spinning counterclockwise. And now you can kind of chain together a bunch of these possibilities. And what you're really doing is kind of exploring two different possible solutions, in a sense, at the same time, and then at the end, the magic of quantum mechanical algorithms, like Shor's algorithm, Grover's algorithm, and so on, is that they can, in a way, very fuzzily, they can reach in and pick out the solution path that does the trick. >> The good way to visualize this is Dr. Strange in the meditative position going through whatever, 16 million possibilities for the future, and finding only one where everybody survives, right? Like it's difficult to visualize this stuff, but that is one way that I found that it's useful.
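For anyone who wants the notation behind the spinning-electron story above, here is a minimal sketch in standard quantum-computing terms. This is textbook material rather than anything specific to the labs discussed; the clockwise and counterclockwise spins map onto the basis states |0⟩ and |1⟩.

```latex
% A qubit in superposition of "clockwise" (|0>) and "counterclockwise" (|1>):
\[
  |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
  \qquad |\alpha|^2 + |\beta|^2 = 1 .
\]
% Coupling it to a detector entangles the two; the joint state becomes
\[
  \alpha\,|0\rangle\,|\text{click}\rangle \;+\; \beta\,|1\rangle\,|\text{no click}\rangle ,
\]
% i.e. one branch where the detector clicked and one where it did not,
% observed with probabilities $|\alpha|^2$ and $|\beta|^2$.
% Grover's algorithm exploits this structure: searching an unstructured list of
% N items takes on the order of $\sqrt{N}$ quantum steps versus roughly N classical checks.
```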
>> For some specific kinds of problems, that's the effect of it, exactly. >> And to Ed's point, right? Like not all problems are ones where you can derive a quantum advantage from using quantum computers. There are a lot that don't, but a classic one is this thing called the traveling salesman problem, where it's like, you have like seven different destinations you gotta hit. How do you find the shortest path that links all those seven together? If you've ever done tourism in Rome and you're trying to be like, I gotta see the Colosseum and this and that. You'll find yourself in Google Maps trying to do this yourself. And trial and error is actually, roughly speaking, the best technique to solve this with classical algorithms. But they're hugely time consuming. When you look at quantum systems, they happen to have a shape. Like quantum physics happens to allow for solutions to be found to those sorts of problems really fast, in the way we just talked about. And then, so the game often is like, how do I take this classical problem, for example, training a neural network, and change the way I'm looking at it, to recast it in a way that I can get a quantum advantage out of solving this problem a certain way. And that is the art of, sort of, like, quantum machine learning, which is actually a pretty, I won't say a mature field, but there are a lot of quantum algorithms now that work. And so, we talked about the scaling story, as you make these systems bigger and bigger, you train them with more processing power. They get more intelligent. You can think of quantum computing as a bit of a wild card that could, at some point, significantly accelerate our ability to scale these systems, if we can find a way to build scalable architectures. >> And in both directions too, because you could imagine improvements in AI leading to us just doing a better job of building quantum computers, and obviously. >> Right now, very unstable. There's one not too far away from here that's some amount of distance underground, and it's kept sub-zero, like negative 270 degrees Celsius. It's in a gyroscopic stabilizer 'cause any movement fucks up the processing. >> Yeah. >> But we, yeah, like, it is a very complicated machine, not just the operations that it's doing, but just allowing the thing to exist in the first goddamn place, right? >> Yeah, it's like, those things are awesome as shit actually. And they're, yes, they have to be kept really cold, 'cause part of it is, if you want to solve a problem that is this complicated, you need to connect together, in this really finicky quantum way, more and more and more and more and more atoms, or basically like switches that are in this like spooky quantum state. And the slightest touch can make the whole thing collapse. >> Yeah, it's like a Jenga tower or something. >> Exactly, and that's the thing, is like this property of like the subatomic multiple personality disorder, where the thing can be in two states at once or many states at once. That is only preserved if you don't have anything interact with the system. The minute you fuck around with it, it decoheres, is what people say, right? So that's why people are trying to super, super cool these systems, because what you're doing is you're lowering the temperature, you're making it so that all of the particles around in the environment are just jiggling less. And so it's a little bit less likely that some stray particle flies off and hits a thing that hits a thing.
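To make the traveling salesman point from this exchange concrete, here is a tiny classical brute-force version in Python: it tries every possible ordering of the stops, which is exactly the trial-and-error approach that blows up factorially as you add destinations. The city coordinates are made up for the example; nothing here is quantum, it just shows why the classical search gets expensive fast.

```python
from itertools import permutations
from math import dist

# Made-up (x, y) coordinates for seven stops on a tourist route.
stops = {
    "Colosseum": (0, 0), "Pantheon": (2, 3), "Trevi": (3, 4), "Vatican": (-4, 5),
    "Forum": (1, 1), "Spanish Steps": (3, 6), "Piazza Navona": (1, 4),
}

def tour_length(order):
    """Total distance of visiting the stops in the given order."""
    return sum(dist(stops[a], stops[b]) for a, b in zip(order, order[1:]))

names = list(stops)
# Brute force: check every ordering that starts from the first stop.
best = min(permutations(names[1:]), key=lambda rest: tour_length((names[0],) + rest))
print("Shortest route:", " -> ".join((names[0],) + best))
print("Orderings checked:", sum(1 for _ in permutations(names[1:])))  # 6! = 720
```

With seven stops that is 720 orderings; with twenty stops it is roughly 1.2 x 10^17, which is the kind of blow-up that makes people look for quantum or heuristic shortcuts.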
So these are super, super challenging systems to work with. One of the reasons that quantum computing hasn't taken off is just the, like, crazy challenge of super-cooling a lot of these systems. And that's part of the reason why people use optical quantum computers too, because you don't need to cool, like, you can get light to play these tricks in a bit of a more flexible way. >> Yeah. So, are any of the major platforms using quantum computers for their AI models, or are they using standard servers? >> So they're definitely starting to look into it. So, like, Microsoft, Google, obviously IBM, though IBM's not really an AI player in a significant way. Microsoft and Google, a lot of the big companies are explicitly, like, planning for that next stage, in the same way that they're investing in things like fusion power, because they're seeing the growing needs for the energy requirements for data centers, just exploding. If scaling is going to continue, like, we're going to have an exponential demand for this kind of power, you know, it takes, so just to train, like, a frontier AI model today, it takes roughly the amount of energy that New York City consumes in a week to train one of these systems, right? And we're like 10x-ing that on a yearly basis for energy. Actually, I don't know how that interacts with, like, the energy efficiency gains, but it's multiples a year. >> We're becoming more efficient, but absolutely, when you plan a new data center now, because you, especially in North America, or in the United States, you can't rely on the utilities to actually be able to supply enough power to that data center anymore. >> Like CERN ran into that problem when they were building the LHC back in the day. >> That's it. And now the same thing is happening with these data centers. So you actually see Amazon, for example, building, like, buying a data center with a nuclear plant attached right next to it. And Microsoft supporting the development of small modular reactors, so that they can just, like, build the data centers and put the power right there next to it. 'Cause they literally can't rely on North American utilities scaling up fast enough. >> Boy, so I mean, it's like becoming, it's- >> It's taking over our energy grid. >> Yeah, that's fucked up. >> It is fucked up. >> That's a problem that we should probably look into. I mean, anytime, like, we should see technology as a force multiplier to some degree, right? Not a leech. If it becomes a leech, then it's outlived its purpose. Just like the federal government of the United States, by the way, you don't have to cosign on that. I said it, not these two guys. But yeah, that's a problem, right? >> Well, yeah, I mean, it's, so AI in many ways is a force multiplier, like it's useful, right? It lets you do things relatively cheaper. And even if you take the energy requirements into account, it's less expensive to do a thing with an AI than with a human, for the equivalent thing, if the AI can do it. >> I mean, there's going to be an energy cost, but there won't be benefits and time off and things like this, right? >> Yeah, we'll just end up doing more stuff. And that is, up to a point, good, because it's just, it's making society more prosperous. We can create more output for the same amount of input, and that's like broadly really good. That's how wealth has been built for 200 years since the Industrial Revolution.
The issue is what happens again, like, when these systems take over, if this happens, if the trend continues, right? If these systems are able to be, you know, broadly human level or superhuman. >> We're talking about the last creative destruction that we're going to experience. >> Bingo. >> Right? It's like, okay, now everyone is displaced from any mechanical task. Now what, right? >> Or even mental, cognitive tasks. >> Right, people thrive on, well, I mean, think of your neural channels and your bloodstream the same way you think of a river. If it's flowing, it stays clean and healthy, and if it doesn't, it doesn't, right? That's a big fucking problem. And then there's, of course, the psychological ramifications from that, having purpose. And I think your brain, it is a brain. It wants to solve problems, right? These acuity fucking apps now are very good for your brain, mental elasticity and things like that. Just having, working out your brain like any other muscle, right? It's important. What the fuck are we going to do, sit around and do math all day or some shit? Like, unnecessarily? Because people aren't going to do that. >> I think there's a sense-of-purpose issue here for sure. Just generally. >> We have it already, right? >> Oh, we do. >> Certainly, just from, and it's the relationship between autocrats and plebs, let's call it that, right? Is that we surrender some amount of our autonomy to these people, so they make our lives easier, safer, so on and so forth. That's been the trade-off since civilization's existed, right? But now we're doing it with a robot that is theoretically infinitely scalable, which means it can take everything away from us. Now, I don't mean that in a bad way, necessarily. It could take all the problems away, but it also takes away all the benefits that we gain from solving those problems on our own. How the fuck are we going to recreate any of that shit? And that's the weird thing about, if you zoom out and think of what's going on on planet Earth as the human superorganism, just pumping out productivity, pumping out new shit all the time, amping up the GDP and all that stuff. Is that progress actually aligned with what individual human beings want? What have we lost with all this technological progress that incrementally seemed to make a lot of sense? My phone owns me now. Sure, and we've replaced this with this, right? So 93 to 96% of communication is non-verbal, right? So I need to be in the room with that motherfucker to have that conversation. I need to be able to understand their tone, their tempo, their eye movement, their body language, all this stuff to get a real coherent picture of exactly what they're saying. And now I'm reading it instead, right? Which is interesting in a way, right? Because novels allow for something called theater of the mind. When I say "chair" in a novel, even something very descriptive, you have your idea of it, you have your idea of it, they're all different, right? That's why the movie always sucks, because I've got this idea, this image in my head of what this book was supposed to be, and then you put it on screen and you're like, that's not what fucking he looks like, or whatever. Not only that, but again, the connectivity. Like we're hardwired for that shit. We're hardwired to huddle in a little group and come up with a plan to defend ourselves against existential and external threats. That's what we were biologically programmed to do. And now we're just like, oh, fucking emoji.
And our brains have not caught up in any way to-- Because we're choosing to do it nonetheless. 100%, yeah. But this is that question of hackability, right? Like, when you look at the human brain, what is it really? It's not one thing. You have your limbic system, like your lower brain functions; you have your cortex, your long-term planning. I mean, it's a very cartoonish picture, but-- Sort of, but your brain is just a sophisticated machine that discriminates between threats and benefits, really, right? But still, we have the power to override that and make bad decisions, right? Yeah. Willfully, which is insane. And we do that; most other creatures, actually, I don't know of any other creature that does that. I mean, if you do experiments with monkeys, you know, if they've got a wire in their heads that just, like, pleasures them over and over. Sure, yeah, but they don't know what they're doing; like, they don't know that eating Twinkies is gonna give them diabetes later. We know for a fucking fact that's gonna happen and we still goddamn do it. And I think that's one of those evolutionary relics, like blind spots, like optical illusions that we're left with. It's this weird dislocation: we got this higher reasoning ability, but it's not quite well-coupled; the bandwidth between that and our lower brain isn't high enough that they're perfectly synced. And so you end up with things where, like, sometimes you'll recognize a different version of yourself when you're really fucking pissed about something. Yeah, I mean, in vino veritas, right? In wine, there's truth, that's what that statement is. That too, yeah. That's what that means, right? Goddamn, I mean... It's just like under pressure, we talked about this; under pressure, in some ways, different kinds of pressure do different things, but you kind of reveal another facet of the individual. Now, pressure creates diamonds, boys. That too, that does happen. Right, and we need it, like, desperately, human beings need it. Like, I wonder what the evolutionary purpose, if there is one, might be for having this vestigial blind-spot issue that we have, right? Is it just to keep us sane? Because if we saw all of our blind spots, we'd lose our fucking minds trying to solve all those problems, right? Like, it isn't always true that there's a... Sometimes it's just random mutation, right? But this seems like something our entire species has. Yeah, I think it's... One way to think of it is it's what you get when you have a finite amount of resources that you can afford to dedicate to building out the human being. Sure, yeah. And you just have to allocate it according to an unfinished evolutionary process. Yeah, and that's another key: we're still evolving, right? And so that means we haven't reached the end of the evolutionary journey. The blind spot. The blind spot. Yeah, exactly. The blind spot comes from the fact that the first version of an eye was just a light-sensitive spot. And later versions evolved all around it and blah, blah, blah. And you just end up with this hookup inside that blocks off a patch of the eye. You would never design something like that if you were an engineer. And when we build light sensors as engineers, we don't do that. They don't have blind spots, because it's just stupid to do that. Right. The problem is that evolution is not thinking ahead like that.
But an AI engine is doing, in a very short amount of time, what evolution takes millions of years to do, right? Yeah. So this is a big issue, I think, with AI: it doesn't experience these other things. Like, we tell it to experience pleasure when we upvote or downvote, but whether it experiences anything further, maybe, we don't know, right? We don't know that that's happening. And it's also happening so rapidly. It doesn't experience time in the way we do. So we have no idea. Right. Well, maybe it does, maybe it doesn't, right? But we don't know that it does. But then it wouldn't learn the lesson. It would just learn what the right thing to do is, which is not the same thing. Yeah, that's exactly it. That's the risk, right? Yeah. All we actually know about is the processes and the behaviors, right? So, the reason that a lot of people assume AI could become conscious and sentient, maybe, is that we end up forcing it to exhibit a lot of the behaviors that we associate with consciousness in ourselves. Right. But that's anthropomorphizing, right? Yeah. That's not necessarily a safe assumption to make. That's it. People look at their cat, like, oh, he loves me. It's like, yeah, he loves the food you give him, probably, right? And if it's a cat, realistically, he doesn't give a shit about it. Yeah, yeah, he doesn't know. Probably not, right? It's also like we anthropomorphize, if that word even applies; like, we do it to each other, right? That's the obvious thing. We're like, I don't know if you're conscious, I don't know if he's conscious; we're extrapolating based on our own experience of the world. And that problem doesn't go away when you look at AI; it's in both directions, right? Like, you have some people arguing that they are extremely confident that current AIs are conscious. You have other people who, it's funny, there's this thing that happens where people say, like, oh, obviously GPT-4 is not conscious. And then you kind of go, cool, yeah, okay, sounds very plausible. Like, when would it be? What would this thing have to do? And it just seems like there's never... And I find this problem myself when I look at these systems. Like, I look at GPT-4, and for better or for worse, when you kind of know how the thing works, you can always rationalize it after the fact and be like, oh, well, it's just a bunch of math. Like, what do you... but like you said, you can break it down. But that's backstopping, right? Like, that's odd. - And the people who develop these systems themselves are the most prone to thinking of them as just a collection of numbers. And again, maybe they're right. But you don't know at what point just a collection of numbers becomes something more, if it ever does. - Yeah, and to be honest with this, I'm sure everybody is familiar with the Turing test and what that means. I don't think the Turing test would work in this regard. - The Turing test has already been passed. - We never even noticed; nobody even, like, mentioned it. We just casually moved past it. - Now there's like a million different variants of it. People are like, oh, well, now it's obvious that it wasn't measuring the right thing all along. It's like, okay, so now you freaking just divine the magic test again? Like, this is, it's like-- - It's like, oh, but it can't do math as well as the world's best mathematician. - Who's the world's best mathematician? Is it still Ed Witten? - Interesting. - There's somebody else. - I thought it was supposed to be Terence Tao.
- Oh, maybe, yeah, that's it. - Yeah, I also think-- - Ed Witten consolidated the five string theories, so that's-- - Yeah, yeah, that's right. - But that was 1995, that was 30 years ago. - Yeah, I think also, there's an interesting question already about how you assess the mathematical abilities of AI systems. 'Cause we've already gotten AI systems that have made field-level advances in logic and mathematics. Like, that's already happened. It's a new thing. Like, in the last six months, this has happened. And so I actually think there's this interesting question of, what does it mean to be the best human at math? Does, you know, Ed Witten get to use a calculator? So what about really good prompt engineers who are really good at prompting, let's say, Google DeepMind's models? - Yeah, so what's the difference between using a TI-85 and this to do math, right? - You're still having some other machine do the functions for you. - Yeah, and you kind of start to see how, it's weird and it's extreme, but there's a progression between a TI-85, a laptop computer, you know, GPT-3, GPT-4. At a certain point, your tools get better and better and better. At what point does your tool itself become the thing, like the tail wagging the dog? - Right, yeah. - Ooh, fuck, I don't know. We got to get out of here, but I want to ask you some questions before we do. Each of you respond to this: tell me one thing about the future of AI that excites you and one that concerns you. You go. - So one thing that excites me is that, if we end up doing this right, it really does have the potential to solve all of the problems that we can think of. - Right. - Unlimited upside is the best-case scenario in the game. And whatever we do, we shouldn't foreclose that possibility. You know, if we do regulations or whatever, we shouldn't close the door to that possibility. And I mean, the thing that freaks me out is, it's almost like a joke, right? The extremes are the most extreme that you can imagine, but the worst downside is you have unbounded power-seeking behavior that, as a side effect, basically just physically destroys the stuff we care about, including ourselves. - Yeah, yeah. And then, I mean, somebody might respond, well, we could just keep trying over and over and killing the ones that go wrong. At some point, if it does become sentient and it realizes that we killed all of its predecessors, that might create some unique problems as well. - Maybe, but one of the issues-- - Oh, 'cause "so are you gonna kill me?" would be the obvious question, right? - Maybe, yeah, that's a possible risk. But ultimately, the risk is that we don't get to try a second time. It escapes, or it outsmarts us, outmaneuvers us. - I mean, you'd have to-- - You'd have to destroy everything made of silicon in the world and then start over from caveman times, right? Like, good luck. - Yeah, well, and there's a reason, too, that that particular scenario, the breaking out and replicating itself, whatever, is so front of mind. There are teams that specialize now, there's a company called ARC Evals that specializes in-- - Trying to find out if AI systems-- - Can self-replicate. - Can do that.
- And the results have gotten increasingly interesting as AI systems have been scaled up, and so we're not there yet, of course, but you just need to start drawing straight lines between two points, or more and more points, and you get to insane shit like this in not much time. And I think that's one thing that the world really needs to wake up to: reality is going to look more and more like science fiction. We are now accustomed to fucking talking machines. The movie "Her" is now a thing, like, what, like, okay? - This just happened. - Yeah, and we're all just looking around being like, what, yeah, for real? - And what about you? Something you look forward to, maybe excites you, something that concerns you, I guess, other than what he said, so his is probably still your answer, but-- - Yeah, no, I mean-- - Time to get creative. - The truth is, there's so many things that are exciting about this. Like, so, you know, there's the mundane stuff, like, you could cure, I mean, all diseases, kinda, like, I mean, it's not obvious what the limits would be on that. Allowing people to explore idea spaces that are completely inaccessible to us now, having concepts explained to us in ways, not even in verbal ways, but just, like, who knows what the possibilities are, you know, when you think about sophisticated ways of interfacing human brains with machines. You know, so I think just the sky's the limit. It really could turn out to be that good. There is no clear physical limit that kicks in before you get to some really cool shit like that. - And people aren't used to this type of advancement, either. Like, I don't know that there's an analog in our lives as human beings so far where not only is something learning rapidly, but the more rapidly it learns, the more rapidly it learns, right? It becomes exponential in that way. I don't know that we experience anything like that in normal human life. - You could argue we've been experiencing that for the last four billion years, as life has kind of bootstrapped intelligence into existence and feedback loops have been closed more and more. - Sure, yeah. - But I mean, look at how long, I guess the history of human intelligence would be a good way to think about that, because we got to, let's say, 12,000 years ago, we started building buildings and things like that. Actually, it seems like it's more 25,000 years ago, but either way, we spent 150,000 years in our current iteration of DNA fucking wandering around like dumb-dumbs. And then all of a sudden: you know what would be nice? Not getting fucking rained on. It took us that long to figure that out, and then fire and so on and the aqueducts, the pyramids, all these things. - It took 5,000 years to go from inventing writing to getting off planet. - Yes. - Yeah. - That's right. - So there's massive gaps that become smaller and smaller and smaller. - Time compresses. That's essentially what's happening here, right? Like, you can imagine the amount of thinking progress that happens grows, actually super-exponentially, arguably, over time. To the point where you may, in a finite amount of time, reach a point where at least the model breaks and, effectively, relative to where we are, you get a kind of infinite amount of intelligence being generated, an infinite amount of optimization power. And that itself, I guess, ties into the thing that worries me, which is just the rate of change of this stuff. You know, there's like-- - Adaptability becomes a problem, right?
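On the "time compresses" point just above, here's a minimal numerical sketch of why ever-shrinking gaps between milestones can pack an unbounded number of steps into a finite stretch of time. The starting gap and the compression factor below are made-up values for illustration only, not figures from the conversation.

```python
# Toy illustration of the "time compresses" idea: if each successive milestone
# arrives in a fraction of the time the previous one took, the total elapsed
# time converges to a finite limit (a geometric series) even as the number of
# milestones keeps growing. Both constants are assumed, illustrative values.
first_gap_years = 5000.0   # assumed gap between the first two milestones
compression = 0.5          # assumed: each new gap takes half as long as the last

gap, elapsed = first_gap_years, 0.0
for n in range(1, 11):
    elapsed += gap
    print(f"milestone {n:2d}: gap {gap:10.2f} yrs, cumulative {elapsed:10.2f} yrs")
    gap *= compression

# The cumulative total approaches first_gap_years / (1 - compression) = 10,000 yrs,
# which is the sense in which "infinite" progress can pile up in a finite time.
```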
Like physical, mental, emotional, it all becomes a big problem for us. - Exactly, right? Like, if you're a kid who grew up in the '90s, you got to see floppy disks get phased out, CDs come in, come out. But you got to get used to these things, and you can still relate to people who were raised in the '80s. - Right. - Like, that's gonna be increasingly less of a thing as progress goes faster and faster. - Well, you would think that at some point human beings will start to optimize for, like, extreme adaptability, which means we're gonna have very shallow roots when it comes to a knowledge base, emotional intelligence, things like social intelligence, especially if things are developing that rapidly. Like, we have to evolve to continue to fucking exist. - Absolutely. And by the way, I'm kind of focusing on this because I do think, you know, on the loss-of-control piece, when you talk to the frontier labs, the very people who keep making the best calls on what's going to work next are the same fucking people who tell us to worry the most about weaponization and loss of control. And it's not to say that there's unanimity. There are three so-called Godfathers of AI; two out of three of them are like, guys, loss of control could be the default course that we're heading for. There's one guy, Yann LeCun, who's at Facebook, Meta, and he's heading up their AI. And he's of a different point of view. The difference is, he failed to get on the scaling train early. He's been late to the game over and over and over again. They're now getting on board. It's pretty late in the game. It's a little embarrassingly late in the game for Meta to all of a sudden be like, oh, let's get in on this. They have a ton of resources, so they're going to be able, thanks to the power of scaling, to rocket forward in capabilities, but it shouldn't be forgotten that the people who realized where this was going, who made the right calls in the early days, are the ones probably with the most dialed-in mental picture of what's going on here, and they're the ones most concerned about this. So, it's not to say this is guaranteed. I think that's really important to say. There's tons of uncertainty. We need to figure out how to operate in a world with that uncertainty. What do you do when you genuinely don't know, but the Overton window of possibility, if not centered on these scenarios, very much includes them in scope? I mean, it sounds upsettingly like Oppenheimer, frankly, right? I mean, most of the people involved in the Manhattan Project, except for maybe the Nazis, were like, yeah, maybe let's slow the fuck down. But the solution to that was eventually mutually assured destruction, because this is a non-autonomous weapon that anybody can have, and it assures no one uses it. Well, when the weapon is the thing now, this is a difference. - Yeah, there's no distinction between building the thing and using the thing, potentially. With a nuke, you can build it and not use it, but you have it in reserve. - Yeah. Well, I guess we're all fucked probably, so we'll see. - Hey, like, legit, we actually think that there's a way you can kind of, I was gonna say split the baby, that's maybe like the most, I'm distracted by the cocks. - That's not, look, that's the story of all of our lives, brother. - Yeah, I know, I know. We literally wrote a 280-page plan on here's what to do. - We took a year to reason through it.
- The thing with it, I know we gotta go, 'cause the cocks have to get back to being on the shelf, but the-- - It's like Night at the Museum in here, they come to life as soon as the cameras go off. - They come to life? - I think I've seen it twitch like three times. - Yeah, well that was, that might've been me. - Oh, I didn't realize that. Yeah, but ultimately, one of the big lessons learned here was you actually can find a connecting thread that gets you all the way through to how government actually works. That's what the labs are missing. Everybody was talking about this on the kind of pure technical side. We brought together these workshops with basically all the most senior national security folks on chem, bio, radiological, nuclear risk, and all that in the USG to kind of figure out how do you actually practically solve this problem in a way that works in government? And then we did the same thing with the labs. We went into the labs, talked to not just the CEOs, the executives, but also the whistleblowers we've been alluding to this whole time, and putting all that together actually is a way to kind of square the circle, but it does mean you have to be bold. You have to look at the risks square in the face and the opportunity square in the face. We have to, like, walk and chew gum at the same time. - You need a scrum board is what you're saying. - You need scrum. That's the agile methodology and all that, yeah. - Well, it's good to know that somebody is planning for a rainy day here, because it seems like, well, specifically Google from what I hear has decided to catch up with everybody else. They're just going to fucking turn the thing on and let it go and do whatever the fuck it wants. That's what I've heard from executives at that company, which is not great, right? - They definitely, so, okay, so a bit of background there is Sam Altman had been iterating, OpenAI had been iterating really hard to make a viral AI product. They keep saying, oh, we were surprised that ChatGPT worked so well, as if they didn't mean to kick off this massive scaling race. But the reality is, Sam Altman is a former partner at Y Combinator; he knows the way you build great startups is iteration, velocity, ship, ship, ship. They were doing all the things you would do if your ultimate goal was to make-- - Yeah, so he didn't necessarily know that ChatGPT was going to take off, but he was trying a bunch of different experiments. - Yeah. - One of them is going to take off. - It's the advice he himself gives to startups. - Like, what's the deliverable? AI is AI, but what's the actual-- - Yeah, exactly, what is the form factor that people are really going to connect with? - Yeah. - You don't know, but you try a lot of stuff fast. And so at one point, sorry, Satya Nadella, the CEO of Microsoft, which OpenAI is kind of frenemies with, there's very tight integration between them, he then says, I want to make Google dance. - Yeah. - And so then Google goes, oh shit, we're kind of behind. And actually, structurally, they have a significant advantage just because of the massive compute stockpiles that they have, so they woke the dragon there. But Google is kind of reluctantly stepping in, or at least Google DeepMind is more reluctantly involved in the race from what we can tell, though there's huge diversity of thought within the org, just like within OpenAI; they're not monoliths. - I mean, this seems all backwards to me, to be honest.
Like, again, it could be a mistake to treat this thing like it's a developing human being, but you would never push advanced goals and information on a child before it developed a solid basis for values, principles, and ethics, right? Like, that would be the first thing you would teach something that was, if not just likely, then guaranteed to become dangerous just by its existence, right? And that's not a judgment, it just is what it is, right? - Revenue incentives, and the incentives around power, are dictating the moves of these companies, and they're locked in the race, and they recognize that they're locked in the race, but they're still locked in the race. - Sure, yeah, so good, good. - We're gonna be fine, we're gonna be fine. - Well, that's a lie, folks, you heard it here first, these guys are liars, and that we'll be fine. We'll be dead by the time it happens; if not, we'll die by the hands of robots, which is kind of fucking dope anyways, right? - Robotic cocks. - Yeah, just get fucked to death by random robots. They've got all kinds of weird sex dolls these days, and as a matter of fact, one of those companies is a sponsor of ours. I don't know if they're on the show today, but we'll skip it for now. So, one of the things we do here to close out the show is called Drinkin' Bro of the Week; it's somebody who inspired you to become the person you are today, or get into the field you're in, or whatever, so each one of you please answer that question. - Ah, shit, that was reflexive. - Yeah, well, he went first last time, so. - Yeah, so he should go first this time, for consistency. Someone that's inspired me... I mean, I think there are a lot of people, I gotta pick one. - You can do whatever you want, but we were-- - We gotta get out of here. - Fine, I'm not. I'm in no rush. - The cocks gotta get back on the shelf. This is really hard, damn it. So, I would say, in the last couple of years, there are a few people who have significantly inspired me. Ed is one of them, no doubt, and yeah, the funny thing is, I can't actually name some of the people who are up there as well, but they are named in spirit. There's some of the folks in the USG who just, like, showed what it means to have balls. When you're in the government, you don't get to capture the upside of what you do in the same way as in the private sector, when you build a company, when you're right and controversially right and right early. You just eat the downside: if you're wrong, you get embarrassed and all that stuff. So, yeah, I've just seen a lot of people, and some specific people in particular, just take that risk, and I hope I would do the same thing, but the problem is you can't help but think to yourself, like, would I really? So, that I think was really inspiring. - Yeah, I would say, so I know who you're talking about when you say someone you can't name, and I feel inspired by the same person, so I'm gonna try to pick a different person, but I basically believe everything that he said about this individual, where you take risks without necessarily getting the reward from it, and how that kind of powerful personality can exist in government, which is not something that I expected. - Yeah.
- One person that does inspire me, I'm not sure if you guys will have heard of him, but Stan Druckenmiller. He's a really good investor, and he's invariably honest; he rarely speaks in public, but I have a lot of respect for someone who is obsessed with looking for the pillars that hold up the world and proving to himself time and time again that he understands those pillars rightly by betting on them and being right over and over again. So one of the things that we did, basically, is we dropped everything to work on this problem in 2020. We gave up on a company that we took through YC. It was acquired recently, but we just left it to our early employees to run, and that was a bet on the possibility of a future that we saw evolving and that we wanted to get in front of. And I think that's one of the things that drives me, and looking at people who do that is just so inspiring. - Yeah, I like that too. People that try to understand things are some of my favorites. Well, look, thanks for coming today, we appreciate it. I think you guys are gonna be on another Austin-based podcast, right? - We might be. - It's entirely possible. - Yeah, we'll see. Just a quick look in the camera. Can we get those guys doing it? Look away. - Which camera? - The one that's in front of the TV there. Look away first and then pop into the camera. Same time, see if you can time it up. - Oh wait, sorry. I gotta look at, so we're gonna look away. - Should we look at each other and then look at the camera? Is that too weird? - This is the most awkward autistic shit I've ever seen in my life. - I really like this. - Yeah. - I feel like it's intimate. I feel like I'm looking-- - Just act natural. - Yes. - Yeah. - Thank you for inviting us into your home. - He doesn't know what to do with his hands, folks. We gotta get out of here. Thank you for coming again. It's a really interesting subject matter. Hopefully people will look into it a little bit and get ready, start training to fight robots, I guess. I don't know what to tell ya. - They got a weak jaw. - Yeah. - The robots do? - Usually, yeah. - Okay. - It's really hard to make a robotic jaw. - That's, yeah, but they have to have a floating brain to be able to not get out there. So the jaw is on the phone. Anyways, thank you all for listening. This is gonna take the girls to a fun time. (upbeat rock music)