Archive.fm

ToKCast

Ep 212: Livestream 3, June 28 2024

More questions, more lengthy and more verbose than ever. Enjoy, or drift off to sleep with me ;)

Duration:
2h 24m
Broadcast on:
28 Jun 2024
Audio Format:
mp3


Hello everyone, and welcome to the third live ToKCast for June. In just this last week, really, I've been, I was going to say inundated, but that's not a nice word to use, I've been gifted with many very interesting questions over the last few days, and I thought I'd do a final one for this month. I tend to do livestreams in spurts now and again over the course of the year: sometimes I'll do a whole bunch in a row, and then there'll be a drought during which I do not do any. And so I have a number of questions to get through today already from Twitter, but as people begin to join the chat they can feel free to also ask me questions there, or if any new ones appear on Twitter then I will respond to those as well. But there is much to get through, let's just say that to begin with. In terms of what people have asked me on Twitter, the questions surround similar topics, as always, which I don't mind, you know; we can talk about the same things over and again, because they're usually the things where the misconceptions tend to arise, or they're just open questions where everyone's opinion is almost as good as anyone else's, as long as we're committed to rationality and reason. So I'm not going to take these questions in any particular order; I'm just going to pick at random, and we'll see if we can get through as many of the Twitter questions as we possibly can. Let's begin with Convex Bets, who has asked: is a system such as a Tesla Autopilot, which uses video input from the cameras to train an end-to-end neural network to drive the car, basically using induction to develop the knowledge needed to drive the car, or is it doing something more, or something else? Do you think it is possible to have a system that can drive a car in all conditions at least 10 times more safely, in terms of miles per accident, than an average human driver with this approach, or do you think this is not possible without AGI? Why? So I think I've got three questions there to go through.
So let me begin with the first part: is the Tesla Autopilot using induction in order to learn? No. How do we know that? Because induction's not possible. Induction is the idea that we can observe things occurring repeatedly in the past and from that extrapolate to an explanatory theory, to knowledge. Induction purports, in epistemology, to go beyond mere extrapolation into explanations. I've talked about this so often before, and there is an article on my website about induction; when I come to edit this, I'll throw a picture of that particular page up. Look up "Brett Hall induction". But let me use the example. The example is: if you were to plot a graph of water being heated over a stove, every minute or so, before it reaches boiling point, what you will tend to find is a lovely linear trend. Now, on induction, looking at the past data, which is a straight-line graph, you could extend that straight line and predict that as you add more heat to the liquid water, you will continue to get the same pattern. You can generalize, extrapolate, use induction, purportedly, in order to figure out what the temperature of the water will be in 15 minutes' time, let's say. And let's say the temperature of the water is going up 10 degrees Celsius per minute; then you might very well predict that in 15 minutes from now, the temperature of the water will be 180 degrees Celsius. That's your prediction, given the straight-line extrapolation. Now, how did you arrive at that? Well, only given the explanation you already have, and the explanation you already have, regardless of induction, is that water when heated must follow this straight-line graph. You've already conjectured that, and now you're just confirming, for want of another term, what your theory is.
Now, of course, if you actually do the experiment, you're going to be roundly disappointed and refuted, because, as we all know, water at approximately sea level on a typical day, at standard temperature and pressure, will boil at 100 degrees Celsius and the temperature will not rise further. So induction couldn't have told you that; by no amount of extrapolating from the past data could you have gotten to what really happened. Now, in order to get to what really happens, we're not after just a prediction of what is going to happen to the temperature, but rather an understanding of the explanation to do with the molecular bonding of H2O molecules, this thing called latent heat, and the behavior of the bulk liquid as something boils. There's a huge explanatory theory, a complicated explanatory theory, that does not come to us from what is known as induction, the generalizing of data into an explanatory theory, which is impossible, but rather from a conjectured explanation which is then tested against reality. And the testing against reality is done by actually trying to increase the temperature of the water to, you know, 180 degrees Celsius, which is what you might have predicted; but liquid water under normal conditions of normal air pressure will never reach 180 degrees Celsius. If you pressurize the air, you might be able to get there. And so as water boils and you continue to heat it, the temperature does not rise. So this is my go-to example, my tired example, of why induction not only doesn't work in that situation, but never works any time, anywhere. Because to extrapolate to begin with, you need an explanatory theory. So a Tesla is not using an impossible means of creating knowledge. In fact, it's not creating knowledge at all. It's not creative. But what it's kind of like is a Roomba. Okay, Roombas have been around for a while now: robotic vacuum cleaners that go wandering around your house.
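The failure of the straight-line extrapolation is easy to make concrete. Here is a minimal sketch; the starting temperature (30 °C) and heating rate (10 °C per minute) are illustrative assumptions, not measurements. The naive extrapolation predicts 180 °C at 15 minutes, while the actual behavior at normal pressure plateaus at the boiling point:

```python
def naive_induction(t_minutes, start=30.0, rate=10.0):
    """Extrapolate the past straight-line trend indefinitely,
    as 'induction' would have us do."""
    return start + rate * t_minutes

def actual_temperature(t_minutes, start=30.0, rate=10.0, boiling=100.0):
    """What really happens at normal air pressure: the temperature
    plateaus at the boiling point, with further heat going into
    latent heat of vaporization rather than raising the temperature."""
    return min(start + rate * t_minutes, boiling)
```

No amount of past data points on the linear portion of the curve distinguishes the two functions; only the conjectured explanation (molecular bonding, latent heat) tells you a plateau is coming.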
And the first time that you run the Roomba you can set it to a particular setting where it will generate a map of the inside of your house. It generates that map inside of its circuitry, inside of its memory, using a program, in order to build up this picture of your house, so that the next time you run it, it is far more efficient: it knows where the walls are, it knows where the stairs are, and all this kind of thing. Whatever the case, the Roomba will learn what your house is like. Now, would we say that it has created knowledge? I wouldn't. The knowledge that has been created was placed in there by the programmers; the programmers were the ones that designed the Roomba as a map-making machine. So of course it is going to follow the instructions about how to make a particular map. It's a map-making machine in general; you put it into your house and it makes a specific map. Unsurprisingly, it follows slavishly its programming, its code, in order to produce the output required, the output that you expect. So too with the Tesla. Thank goodness: you want it to not be creative, because to be creative would allow it to create explanatory knowledge and then decide it doesn't want to do what you want it to do: avoid obstacles, travel at a particular velocity, turn the corner when the corner arrives, all this kind of stuff that the Autopilot should do. In other words, be committed to nothing but driving the car. So I do not call this creating knowledge, but rather gathering information very quickly about its environment in order to safely transport you from point A to point B. Why wouldn't I call that knowledge? Well, we're not necessarily splitting hairs, but what we're talking about is the difference between gathering data, unorganized kinds of information, rather than generating an explanation, which is what knowledge creation as we normally talk about it amounts to. (There's another kind of knowledge creation, which is biological knowledge creation.)
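The map-making-machine point can be sketched as a toy program. Everything here (the grid of one hypothetical house, the flood-fill rule) is an illustration I'm inventing, not how a real Roomba works; the point is that the general mapping rule is knowledge the programmer put in, and running it on a particular house merely fills in a specific map:

```python
# A toy map-making machine. '#' is a wall, '.' is free space,
# '?' is a cell the robot has not yet sensed.
HOUSE = [
    "#####",
    "#...#",
    "#.#.#",
    "#...#",
    "#####",
]

def build_map(house):
    """Flood-fill outward from a known free cell, recording whatever
    the 'sensors' report. No conjecture, no explanation: just the
    programmer's general rule applied to this particular house."""
    rows, cols = len(house), len(house[0])
    grid = [["?"] * cols for _ in range(rows)]
    stack = [(1, 1)]                       # assume the robot starts here
    while stack:
        r, c = stack.pop()
        if not (0 <= r < rows and 0 <= c < cols) or grid[r][c] != "?":
            continue
        grid[r][c] = house[r][c]           # "sense" the cell
        if house[r][c] == ".":             # only move through free space
            stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return ["".join(row) for row in grid]
```

Run it on a different `HOUSE` and you get a different specific map, but the machine never invents a new mapping rule; that general rule came from its designers.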
But the knowledge of how to avoid an obstacle, how to maintain a particular speed, how to turn a corner, all of these things that the Tesla Autopilot can do, was knowledge instantiated into its code but discovered, or programmed, by a software engineer earlier and elsewhere. So this, you can call it an end-to-end neural network if you like; it's a piece of software that is able to learn about its environment. Again, "learn" there is used with some proviso. We now begin to distinguish between training and learning: learning being conjecturing an explanation, conjecturing an idea and trying to refute it, and training simply being fed lots and lots of data which is then stored into memory, much like a fluid can be poured into a bucket. But that is not what learning is like; learning is not the fluid-being-poured-into-a-bucket idea. If you're simply loading data into a particular system, then it does more resemble the old bucket theory of knowledge. But this is not the way in which knowledge is created, or the way in which knowledge is learned; those two things are the same. The second part of the question: so do I think the car is using induction? No. Is it doing something more, or something else? It's doing something else. It is using pre-existing explanatory knowledge, encoded into the programming which guides the Autopilot, in order to perform the function that you want it to. And you don't want it to do anything else; you don't want it to go explaining things, being creative. Because were it to do that, then it could disobey its own code; it could choose to not avoid the obstacle, let's say. Which I guess comes to the second part of your question here, which says: do you think it is possible to have a system that can drive a car in all conditions at least 10 times more safely, in terms of miles per accident, than the average human driver with this approach, or do you think it's not possible without AGI?
An AGI would be a person, I presume, and an AGI being a person, we can't distinguish between the different kinds of people except by what their substrate might be, which is going to make no difference in any fundamental sense to their functionality, to what they're doing. But the thing about AGIs is that they're going to be highly prone to error, because they will be conjecturing. I would not want an AGI to be driving my Tesla, to be the autopilot, because it wouldn't be an autopilot. It would be a person, so there'd be no difference really between an AGI and a regular GI, a person, except what the substrate would be. What we want with an autopilot is as close to a perfectly obedient system as possible, which is going to do nothing but drive. An AGI can be distracted. An AGI is a person, and people get distracted. This is the whole problem with people driving, and why we want autopilot ultimately to take over. So never mind the AI being 10 times more safe; I would expect a million times more safe, quite literally. An AI autopilot system which is networked into some sort of grid, that operates at the speed of light, which is able to notice all other vehicles on all roads within some particular radius, and able to account for what's coming up at the next crossroad, where the pedestrians are. There'd have to be some system of monitoring that is better even than what the present Teslas have. Teslas obviously have sensors around the entire vehicle checking for things like pedestrians and the trash can that you go past and the set of lights that's coming up. We can imagine that becoming ever better, and we can also imagine the autopilot being ever more proficient and obedient at getting you safely from point A to point B. Unlike a person, a personal driver, who is going to get bored, look out the window and get distracted, take their eyes off the road, be drunk or be angry, not have had sufficient coffee, have had too much coffee.
As you talk to them, maybe they start thinking about something else; they're upset because of things that have happened at home; a thousand and one things could intrude into the mind of a regular person who is driving, or of an AGI, who would be a person, because they're thinking creatively about problems other than "how do I get from A to B safely and efficiently?" An AI, a very good Tesla Autopilot, would do nothing but focus ("focus" is the wrong word, because that invokes consciousness), do nothing but implement the program of getting from A to B, utilizing whatever instructions are in the program that is controlling the wheels of the car. And it wouldn't be, as I say, 10 times more safe than a person, but a million times more safe. It would be able to get from A to B faster; you would never get anywhere near having a head-on accident, because the thing would be reacting with reflexes so much faster than any human being can. So I'm looking forward to that day. That day is coming, and that day will happen long before, or I shouldn't say long before, that day happens independent of any research going on with AGI, because this is an AI problem, not an AGI problem. We don't want creative AI; we don't want autopilots to be creative. We already have autopilots, very, very good autopilots, for aircraft, obviously, and for cars. But you might say: don't you want it to be able to think creatively in case it encounters a problem which the programmers haven't thought about? But that's always the case. Problems not foreseen are an issue for AI and AGI both. In all cases, problems are parochial; they're problems for people. We are the entities that have problems. The AI are a tool that assists us in solving our problems, but they can only solve problems where we have thought about what the potential problems are going to be. So all of the things that you think might happen or go wrong on the road, that's what you encode into a Tesla.
Well, if only you could give the Tesla the creative capacity to solve the unknown problem? Then you're back at the same position. It would be no better than a human being in that case. A human being driving along and encountering a problem that's never been seen before, how would they react? They'd react exactly the same way as anyone who didn't have information about that unknown thing. Suddenly a huge octopus falls from the sky: what do you do? Well, a person presumably recognizes that as an obstacle, and the same with the AI-controlled vehicle. I can't think off the top of my head what would be a problem unforeseen, but there would be problems unforeseen that people who program autopilots would be tripped up by, if you could somehow orchestrate that. But hopefully there is a fail-safe within the autopilot such that if that kind of thing happens, it just brings the car to a halt. Or not: maybe you want the car to accelerate. If there's an avalanche beside you, you want the car to go fast, not slow down. Are these things thought about? Presumably they are, by AI autopilot coders. Okay, let's keep going. Convex Bets again has asked: "BoI by David Deutsch is one of the books that Sam Altman has openly recommended people to read. I cannot reconcile this with his recent prophecy of having systems soon that we will be able to ask to figure out all of physics. Do you think he actually believes this, or is he just gaslighting for fundraising, regulatory capture, or some other reason?" Well, I can't speak to the last bit. I don't know; I can't read his mind, so there's no possible way I can figure that out. But there's a misconception there, and it is a pervasive misconception. It's not just Sam Altman.
I've heard many people, physicists included, saying things like, you hear it all the time, something synonymous with the idea of completed science: that if the researchers continue to work hard enough, then they will tie up some area of science with a nice neat bow: physical chemistry, cosmology, particle physics, whatever it happens to be, so that there will be no open questions. And it betrays a deep misconception about what the nature of science is: that when we find a solution to something, it opens up a window onto a whole bunch of new questions that we weren't able to ask before, because we didn't have the language, we didn't have the concepts, we didn't have the understanding. Prior to the idea of spacetime being a fabric that could bend and weave, there was no way of conceiving of how space could stretch and expand, much less accelerate. It was only once we had the explanation of general relativity, the idea of spacetime, and how space itself was this physical entity that has properties, that you could then have an understanding that mysteries like an accelerating universe could even be conceivable. Because otherwise you would be thinking things like, "Ah, you see a star there that's moving away with a redshift: it's moving through space," because it would never enter your mind that space itself could be the thing that was expanding and causing redshifts. But given a dynamic and really existing space, a "fabric" of space (fabric in scare quotes), then you can conceive of space expanding and contracting, that kind of thing. So people say things like, and Sam Altman did say, "We will have AI that is able to figure out all of physics."
It makes one wonder what they're thinking. They're thinking in terms of this, I think now outdated, idea of a theory of everything, the reductionist theory of everything: that we unify the fundamental forces, the fundamental forces at the moment being the electroweak force and the strong force, and we've got gravity, which is out there as not actually a force, but is sometimes bundled in with the others. The particle physicists, some of them anyway, insist that gravity must ultimately be quantized in some way, and there must be this particle called the graviton, and then we unite all of these forces and we have the theory of everything. But in fact, that wouldn't be the end even of particle physics, because we would ask why the parameters are the way they are, why the values of the constants are what they are, why the magnitudes of the forces are what they are, why that theory and not some other theory of everything, and so on it goes. There can be no end to physics; there can be no end to science. Every problem solved reveals new problems yet to be answered. Every solution reveals more of our ignorance. Two things happen simultaneously in the creation of knowledge: both an increase in what we know, by reducing our ignorance in a particular area, and the opening of a window, which we didn't know was there before, onto a vista of problems which we couldn't have conceived of beforehand. Instead of windows, it might be better to think of doors.
We pass through a door, we've unlocked it with our solution, to step into a room which just has more doors, and all of those doors represent problems in this analogy. And that is what happens in physics: you come up with an explanation of gravity, and then you realize that explanation allows for things like black holes, neutron stars, an expanding universe, an accelerating universe, a universe that must have begun hot and dense, and also the possibility of technology harnessing these things; so many more problems are revealed to you. My quip on Twitter was something to the effect that the idea that you can figure out all of physics, to go to ChatGPT 7.0 or something and say, "Tell me the ultimate solutions to the universe, tell me all of physics," would be exactly like saying to it, "Tell me all of the numbers." Everyone should immediately be able to see that to ask ChatGPT to tell you all the numbers is a ludicrous thing to say. You will never get to the end, obviously. Not only will you never get to the end even if you were to confine yourself to the integers, but there are different kinds of numbers. Do you mean complex numbers? Do you mean real numbers? Do you mean rational numbers? Or could there be other numbers beyond those numbers that we've thought of? Different orders of infinity, that kind of thing. We can never rule out new abstract entities being discovered. But more than that, the analogy simply is: we're only ever at the beginning of infinity when it comes to talking about numbers. We know, as a matter of fact, there are more numbers. And while there's not a perfect one-to-one correspondence between that idea and what physics is all about, we can nonetheless have some insight into what it means to have an explanation of something. Because to have an explanation of something means to reveal to your conscious self a model of the way in which the universe works in some way.
But you will always have questions about why that model, and that model will often butt up against some other model that you have, some other explanation you have, which means they won't fit together. And we are in that situation right now with quantum theory and general relativity. There are many ways of explaining the ways in which they butt heads, so to speak. One is the discrete versus the continuous. Another is to just ask: can you know the location of a specific object in space with arbitrarily high precision? General relativity would require you to do that in order for it to make the predictions that it does (it's a classical theory), but quantum theory says that no, you can't know with arbitrarily high precision what the position of something is. Whatever the theory is that is going to succeed general relativity and quantum field theory, let's say, there is no guarantee that it is not going to contradict some other strongly held explanation of reality somewhere, that we're not going to be in an unproblematic state. The history of science and ideas tells us that we are always in a problematic state. We move from misconception to better misconception, as David Deutsch says. There will always be open questions, and this is a positive view of reality and our circumstance: not having a final theory means that there's always more progress to make, because there will always be errors to correct. That's a wonderful thing, a wonderful thing. I don't know what Sam Altman's motivation is; I don't think he has any sort of deep dark motivation. All I can say about that particular claim, "figure out all the physics," is that it just contains a misconception about what science is. One may as well say "figure out all of history" or "figure out all of biology". I guess people think that that's also possible; figure out all of mathematics, et cetera, et cetera. Figure out all of art. Many people have read The Beginning of Infinity and recommended it. Mark Zuckerberg has as well.
I guess Elon Musk at this point may have. Sam Harris, Steven Pinker, many great contemporary luminaries in the intellectual world have recommended the book, and some seem to have a better understanding of it than others. But what many of them share is, I think, a tendency, and we all do this when reading a book, to read into the book what you expect to be there. The Beginning of Infinity contains many counter-cultural and counter-common-sense ideas. And so it can be so psychologically jarring for many people that their mind can tend to skip over the challenging parts and to fill in those gaps with their preconceived ideas about how the world must work. And so I have encountered people who will have read The Beginning of Infinity, who will say they've read The Beginning of Infinity and understand The Beginning of Infinity, but then insist things like: nonetheless, there must be a superintelligence out there somewhere, or a superintelligence is possible. Completely missing this idea, which is so central to the themes throughout The Beginning of Infinity, of universality, and explanatory universality. And once you really understand that, then you recognize that there can be no superintelligence. All you can do is augment the speed at which you think and the memory that you have. But once a system is creative, you know, able to have a problem and therefore attempt a solution, that capacity gives that system, that person, universality when it comes to knowledge creation. Yes, okay, so hopefully that answers the question. Sam Altman, I think, does excellent work, and, you know, more power to him in making ChatGPT ever better. Roland Burl has asked, and he's put it in scare quotes: "companies don't want to kill their customers." And he's gone on to ask: what if they are in a position far worse than Boeing? Probably going under, so in a last-ditch effort to compete, they cut corners. If mistakes happen, they won't be around to blame anyway.
Are regulations a solution here? Well, it's true that companies do not want to kill their customers. I mean, it's logically the case. And these days, of course, the issue is that if you are negligent as a CEO or some other worker, then you will be criminally culpable. If you are the maintenance worker, or indeed the manager of an elevator or lift company, and you don't do the work that you should have done, and your poorly designed or maintained elevator falls to the ground in a skyscraper somewhere, killing the occupants, you will be criminally liable for negligence. People are also self-interested, including the workers, and presumably the workers at Boeing. As far as I know, many of the workers at Boeing are very proud of the fact that they work for this prestigious company. And it's had some great difficulties recently with the 737 MAX jets and all that sort of stuff, granted. But the arrow of progress and safety in the airline industry has, broadly speaking, been pointing in one direction. There's no such thing as an unproblematic state, and there's no way of avoiding problems. Problems are inevitable. And what we want to do is to learn from our mistakes, and presumably Boeing is doing that. So I don't buy the idea that they're going to suddenly want to start cutting corners without regard for safety, and who cares about killing their customers? To paint business people as psychopaths of that kind is a very common caricature, I understand. The evil business person sitting in there, figuring out ways of sitting on a pile of gold like Smaug the dragon out of The Hobbit, is the image that people have. You know, Mr. Burns from The Simpsons is emblematic of that. Ebenezer Scrooge. These are all old tropes about wealthy people. But there are good and bad people at every stratum of society: poor people, middle-income people, and wealthy people; people who own small businesses, people who own large businesses.
There are people who choose to cut corners. But what is the incentive to cut a corner in a world of litigation and prosecution and the potential for having your life ruined, because you think that you're going to make a quick dollar by, what, not attaching the engines or the cargo door sufficiently well? They used poor-quality rivets or something like that; I haven't followed it that closely. But people are right to be concerned that airline manufacturers do a good job, so it's great that the spotlight is on them. But I would never think that the errors that were made were deliberate, or were because they want to kill their customers. Could it be the case that they've been careless and negligent? Absolutely. Absolutely. We can concede that. But the profit motive is far and away the best way of helping to ensure safety in any industry. "Companies don't want to kill their customers" is almost a reductio ad absurdum: of course they don't. They don't want to hurt their customers. They don't want to scare their customers. Boeing does not want to go broke. They want to continue to exist and to make planes. I think I've made that point. Regulations, meanwhile, do harm, as they tend to curb creativity. So if we begin to regulate, here's what happens. At the moment there are, roughly speaking, two huge aircraft manufacturing companies, Boeing and Airbus. Large companies do actually like regulation; it's said that Microsoft likes regulation. Now why? Because once you begin to regulate, that requires you to employ a lot of lawyers. That requires you to meet a particular standard, to manufacture products according to a particular recipe which has been circumscribed by the regulatory framework. So you've got these regulations, and in that situation, only the wealthiest companies can exist. If you're a startup and you have to meet the regulations, you have to employ all the lawyers.
You have to make sure that you understand what the regulations are so that you can manufacture things to these particular standards, rather than trying something creative and new, because that would be against the law. This is why large companies cozy up to government, and you have this unholy alliance at times between certain large companies and governments. The governments want to be seen to be doing something by the people who vote for them. So if something ever goes wrong, the voters say, or the media says, the government needs to do something. They regulate social media because the children are suffering, something like that, and the voters say, yes, the government needs to do something. And so the social media companies will say, okay, we're happy to play ball; we will regulate, how about regulations XYZ? And they will implement XYZ, which ensures that any startup social media company, or whatever company it happens to be, is less able to enter the market as a competitor. That's what regulations tend to do: they tend to reduce competition, precisely for that reason. And so it's not like all companies are against regulation. They're not. They can be against certain regulations. And of course, that's what happens with lobbying. It's like: if this particular regulation is going to hurt our business, then can we make a deal? You know, me as the company CEO and you as the politician, let me make a deal with you. I will implement a regulation, so you can be seen by the voters to be doing something, and you can do something for me by slightly changing this regulation in such a way that it doesn't hurt my business, but it will hurt my competitors' businesses, or potential competitors' businesses. There are, in other words, perverse incentives. I just noticed someone in the chat go, "What is Brett talking about, airplane companies?" Yeah, so what I'm doing presently is responding to Twitter questions.
So, initially, all of the questions that I'm getting have come straight from X, Twitter. I'm gradually getting through those, and then I'll come to the YouTube comments. There's a lot to get through here. So I think that answers that question. Are regulations ever a solution to when things go wrong in the market? No, the market is self-correcting. And by the way, even if you do heavily regulate some industry, this is no guarantee against something like an accident, like a plane crash. You don't reduce the likelihood of plane crashes merely by implementing laws. The incentive that people have for doing a good job is that they can make more money. And so if you're working for Boeing and, you know, you're riveting doors to the fuselage, then your incentive is, well, hopefully you're in an industry that you really love, and that's why you're there. And you earn a lot of money because you're doing a good job, and the more you do a good job, the safer your aircraft are, and the better the aircraft are, the more money the entire company makes and the more likely you are to get a pay rise. And so the wheel turns. But when things start to get heavily regulated, then, you know, the company has less money, especially if the regulation is working in some way to dampen down innovation within that particular industry, which has happened. It's one of the reasons why we don't have, why aren't there, supersonic flights? Okay, it's starting to come back now, but we had the Concorde there for a while. There was a crash, a terrible accident, and so the Concorde sort of vanished soon after, and so did any attempt to have a resurgence of supersonic flight. Samu has asked, also on X: why are people universal explainers and other animals aren't? I think it is not a matter of size, since there are animals that have bigger brains than humans. Is there something fundamentally different about the hardware, or is it the software that makes the difference?
Good question, yeah. It's the thing that I've talked about a bit. It's not the hardware; the hardware could be the hardware of a pocket calculator, presumably, as long as it's some sort of approximately universal Turing machine. So it's all about the software. And we have a program running, which is called a mind, or a human mind if you like, that has this capacity to explain the world around it: to notice problems and to conjecture solutions, to articulate, to model, to create. Other animals don't have that. They have instincts, and they react in a way that is automatic; there's an automaticity there, rather like an AI. So there's a fixed repertoire of behaviors that the typical animal has, cat, dog, chimpanzee, and they can't really step outside of that repertoire of behaviors. So you don't come across chimpanzees talking about calculus and that kind of thing, and it's not because their brains are necessarily that much smaller. You can talk about whales, you can talk about dolphins, and I guess gorillas have brains that are at least approaching the size of a human being's, if not larger. It's rather because there's something there about the mind that is running. Now, we have no clue as to how exactly this mind works at the level of neuroscience, what's going on there, the connectome, as they sometimes refer to it. So, yeah, the short answer to "is there something fundamentally different about the hardware or the software?" is that it's something fundamentally different about the software, this capacity to generate explanations of the world. And that capacity is a universal one, and universal there means anything that can be understood can be understood by us in principle. In practice, there's always an infinite amount left to understand. But in principle, if someone else can understand that thing, then so can you. And a lot of people, of course, object to this, and they don't like the idea.
They say, but no, you know, Terence Tao is a brilliant mathematician; I could never possibly do what he does. Well, that is true on the one hand. You know, you don't have his interests; you would never do what he does, you would never do what Mozart does, you would never do what Roger Federer does. You name the person that has this singularly amazing commitment to this one area and is a genius within that one thing. Yeah, you don't have that, because that's what it means to be an individual person: you have a problem situation. There would be things that you can do, possibly better than anyone else on the planet; you just may not even be aware that you're the best at doing that particular thing. Who knows. OK, More Ink has asked: differentiating between AI and AGI based on the potential to misbehave seems like a low bar. Generally weaker models tend to misbehave more, and we call them stupid. The smartest people on earth aren't exactly conformists, but they aren't usually the most misbehaving either. So that contains a number of misconceptions. The first is, I don't think anyone's ever said we differentiate between AI and AGI based on the potential to misbehave. The word that's often used is disobeying. But even then, this is not a criterion for distinguishing between AI and AGI. So it's not about misbehaving, and it's not even about disobeying. I often say things like consciousness, free will, capacity to choose, explanation-generation. They're all synonymous ideas in my mind at the moment, because none of them are particularly well understood. We know we do them. We have consciousness, we have free will, we can create, we can choose, we generate explanations. Out of that list, by the way, there will be any number of academics who will object to at least one of them, in my experience. They'll say, we're conscious, but we don't have free will; that's Sam Harris' idea.
Or, rest in peace, Dan Dennett would say, we have free will, we're not conscious, something like that. And everything in between; some people will say, we're not really creative, we're like large language models, we're just extrapolating. So, I just think they're all part of the central mystery of what it is to be a person that we don't understand yet. Now, this mystery needs to be solved before we have AGI. What falls out of that is that the AGI, or the person, because they're creative, or because they have free will (many people don't like the idea that they have free will, but I say both, you know), because they're creative, it means that you can look at all the options on the table before you and refuse to do any of them. That's disobedience. The option might be, you know, given to you by a teacher or a parent or a friend or someone, and they're saying, do this, and you say, no, I'd rather do something else. You have a preference for something else. An AI is not like that; an AI is judged by the extent to which it performs its task. In other words, it's obedient, but even to ascribe obedience to it is the wrong thing. But loosely speaking, of course, people do talk about, you know, their car misbehaving or their toaster misbehaving or their coffee machine misbehaving. Okay, they anthropomorphize these things, but you can't use misbehaving, or, more often, what we say is disobedience, as a criterion for distinguishing between AI and AGI. After all, if I ask ChatGPT 7.0 a question, and it does not answer me, do I conclude it's a disobedient AGI, or do I conclude it's a malfunctioning AI? What I would say on that: well, if you're existing inside of this imaginary thought experiment that I've just set up, there's no way of telling. But in reality, what you would do is, of course, you would speak to the coder.
Whoever it is that has written the code, the program for ChatGPT 7.0, and they would be able to diagnose what's going on. And more than likely they would say, ah, okay, yes, this line of code has messed up the entire program. So it's stuck in a loop, or it's halted when it shouldn't have. This thing is malfunctioning; it's not disobeying you. I can see exactly what the problem is, so let me correct that line of code, and now let's run it again and ask it a question, and now it's fixed. As I keep saying on this topic, it's not like a computer programmer is going to wake up one morning, log on to their computer, and be surprised that the computer is doing something to which they'd ascribe AGI. They're not going to go, wow, there's an AGI in my computer, I wonder how that works. No, it will be the other way around. It will be that the programmer has in their mind an idea for how to program an AGI. They may even share that idea first with other programmers and say, look at this, this is amazing, a completely revolutionary way of understanding what software is. I think it's going to have to be that fundamental, that it's going to go beyond what we currently think of as programming, as algorithms even. Whatever this thing is, it is able to generate output that we cannot predict in advance; that is going to have its own problems. So anyway, someone will come up with this solution eventually. Who knows, one year, ten years, ten thousand years, who knows when? Someone will figure out how this works, and when they do, other people will be able to look at the plan and go, "Ah ha!" The old John Wheeler one, "how could it have been otherwise?", that David Deutsch quotes at the beginning of The Beginning of Infinity. It will be a solution that is likely to be elegant and lurking just beneath the surface as to what AGI is. It won't be a super complicated program, because, after all, it has to be somehow encoded into DNA, and DNA is not...
It's complicated, but it's not super complicated. And the entirety of our DNA code is certainly not the program for AGI; only a small part is, or for general intelligence, for generating explanations. And so, the order of things is not: programmer wakes up, is surprised by their computer, and announces to the world, "I found AGI, but I can't tell you how it works yet. Give me a few days and I'll let you know. I'll interrogate it and find out." It's the other way around: "Hey everyone in the community of programmers working on AGI, I have a recipe for AGI. Do you want to check it out, and maybe we should try and build one of these things?" Or maybe they will have the recipe and then build it and then show everyone. That would be the course of events. That's the way engineering, the relationship between science and engineering, tends to be. Elon Musk had the idea for SpaceX rockets, discussed the idea for SpaceX rockets with other engineers, and came up with the plans. It's not like he woke up one morning, saw a SpaceX rocket there, and went, "I wonder how that works. Hey everyone, I can see SpaceX written on the side of that big metal cylinder out there. Maybe we should start a company." That's all the wrong way around, and so too with AGI. We have to first have the plan, implement the plan, and then we will have the AGI. And it's not that the AGI is going to misbehave, because that's an ambiguous term, but rather that it would have the capacity to disobey. But you can't use that as a criterion, because there's no criterion for being a person. Whatever way in which you try and test a person for personhood, they can refuse to do the test. And so you won't ever know if they're refusing to do the test or if they're a malfunctioning AI. Bart is up next. And I'm sorry, Bart, I'm going to have to give your question short shrift, but I don't know how else to say this, because Bart's question is, "Are human actions computations of some sort?" And the answer is yes.
All physical processes can be regarded as computations, so that includes all human actions. That's the universality of computation. Convex Bets has asked a frivolous question, and most of my audience, if they're in America, won't know what we're talking about here, but he has asked, and it's giving away where you're from, Convex Bets, are you from the UK, India, or Australia, or somewhere in the Commonwealth? "The opposition needs 10 runs in the last over to win and your life depends on winning. Who bowls the last over? McGrath or Warne?" Now, I am not a huge cricket fan, but I would pick Warne. Shane Warne. Lauren has asked, "Do you think Bitcoin and Ethereum are competitors? Given Bitcoin's deliberate lack of Turing completeness for security and reliability, should a digital gold focus on doing one thing well, or be multipurpose like Ethereum?" I'm not highly across all this. This is a great question for Naval Ravikant, or someone like that. There's a video of mine that I made two years ago about this exact thing. It seems to me, like, obviously they are competitors, because for any given person who is investing in crypto, if they invest in Bitcoin, they're not investing in Ethereum, so they are competitors. That's clear. But Ethereum does have this capacity to do things other than simply be a cryptocurrency. But I don't have a strong opinion either way. I think that both are useful. I think it's good to have competition in the market, and definitely competition against the federal reserve, or whatever it may be in whatever country or jurisdiction you happen to be in. Because cryptocurrencies serve as at least some limited bulwark against inflation, to limited amounts, because governments can, of course, just print money. I was remarking on this recently on Twitter.
I can't remember if someone asked the question or if they just made the assertion that if you want to make more wealth, then all you need to do is print more money, or that winning the lottery would also create more wealth. Well, no, that's not the case. It's moving money from one place to another. Everyone can see that with winning the lottery: it makes you personally more wealthy, but it doesn't generate more wealth in a society, because everyone had to buy a ticket. That ticket cost money, and the money is pooled, and some of it goes to awarding the prize. And so the money has just been moved from one place to another. The wealth has moved from one place to another. What we mean by wealth, by the way, this is crucial, is the repertoire of transformations that can be made. That's what wealth is. So individually, there's a number of different things that you can change about the world, and that depends upon the amount of wealth you personally have. And as a society, and more broadly as a civilization, there are certain transformations, physical movements of stuff, that we can do given our wealth, which includes the kind of knowledge about how to transform matter from one form into another as a special case. Why doesn't printing money increase the amount of wealth? Well, the best way I can think of to explain this is just a thought experiment. One way of printing money is to physically print notes, obviously. But the government could also say something like: however much money you have in your bank account right now, we're going to add three zeroes to the end of that, and then you have a new bank balance. So at the moment, your bank balance is a thousand dollars. We're going to add three zeroes to that. So next time you go and check your bank balance, it's going to say you've got a million dollars. And you're more wealthy; everyone in your country is more wealthy now. That's effectively printing money, right? We're just adding more zeroes. Now, what happens in that circumstance?
Is there more wealth? Is there more capacity to buy stuff in that country? Why don't we just do this? Well, no, there's not. There are many ways of coming to see this, but imagine you hear the announcement from the government: we're going to add three zeroes. Initially, you've got a thousand dollars in your account; now you've got a million dollars in your account. So you rush down to the car dealership. You've always wanted a Lamborghini, and the Lamborghini is $500,000. So you think, I'm going to spend half of my new amount of wealth on the Lamborghini. Everything turns on whether the information, because currency, money, is also a carrier of information, whether or not the car dealer who presently owns the Lamborghini, which is up for sale for $500,000, has received that information about the new value of the currency. Now, if they haven't received it, then you might be able to get away with quickly signing the check or transferring the money electronically and driving off in the Lamborghini. But more than likely what is going to have happened is, especially in this day and age, the car dealership also knows what the government has just done. And so when you turn up and you look at the ticketed price for the Lamborghini, which says $500,000, and you've got a million dollars in your account, and you say to the man, "I'd like to buy that Lamborghini," they'll say, "Not just yet. Give me 10 minutes and we'll discuss things." And within that 10 minutes, John Smith from up the road has also raced down to the car dealership. And he, unlike you, didn't have $1,000 in his account, but had $100,000 in his account. And now he has $100,000,000 in his account, because we've added three zeroes to what was initially in his account. So you can see where this is going. Because now the guy that owns the car dealership, with the Lamborghini that he's ready to sell, can bargain, can auction.
You are offering $500,000, and the fellow with $100,000,000 says, "I'll give you $50,000,000." Now we're talking, okay? So now suddenly the price of the Lamborghini has been inflated. And then more people turn up, and eventually the market figures out what the new cost of the Lamborghini is. It's going to be 1,000 times more than what it initially was, because nothing has changed in that society except that more zeroes have been added to everything: to the balances in bank accounts and to the costs of things everywhere. So we haven't increased the amount of wealth. We haven't increased our capacity to buy stuff. We haven't increased the repertoire of transformations that we can make. All we've done is inflated the currency. And the price of stuff is dictated by supply and demand, and the same amount of demand will exist, and the same amount of supply will exist, for Lamborghinis after the government printed all this money as before. The problem with governments especially in printing money is that they can print money for themselves in order to pay their bills. So the government contracts private industry rather often, and if the government begins to run low on money, it just prints money to pay off the private contractors that it has, which effectively devalues the currency. Now there's lots more money, currency, floating around in the economy, meaning that you yourself are poorer; the money in your bank account is worth much less. This is how inflation works. The government can continue to print as much money as they like. They can buy everything they want, because they just print more money. If they don't have enough money right now, they just print a bit more and then buy the thing, and always outcompete you. They will always have more money than you. And this is a real problem that we have right now.
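The "add three zeroes" thought experiment can be sketched numerically. This is just an illustration (the account balances and the car price are the made-up figures from the story, not real data): scaling every balance and every price by the same factor leaves everyone's purchasing power exactly where it was.

```python
# Sketch of the "add three zeroes" thought experiment: multiplying all
# balances by the same factor changes no one's purchasing power once
# prices adjust by the same factor.

def redenominate(balances, price, factor):
    """Scale every bank balance and the market price by the same factor."""
    new_balances = {name: b * factor for name, b in balances.items()}
    new_price = price * factor
    return new_balances, new_price

# Hypothetical figures from the story: you, John Smith, and a Lamborghini.
balances = {"you": 1_000, "john": 100_000}
lambo_price = 500_000

new_balances, new_price = redenominate(balances, lambo_price, 1_000)

# Purchasing power (balance divided by price) is unchanged for everyone:
# no new wealth, no new repertoire of transformations, just bigger numbers.
for name in balances:
    assert balances[name] / lambo_price == new_balances[name] / new_price
```

The only real-world effect, as the story says, happens in the window before prices adjust, which is why the information-carrying role of money matters.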
In fact, for whatever reason, they invented a new term for it, modern monetary theory or something like that, which is literally just printing money. It's not exactly an interesting policy or theory or anything like that. It's a means of inflation. Okay, Convex Bets still... okay, so let me switch it up a bit. Ben Chugg has asked: what's one significant thing you disagree with David Deutsch about? This is a perennial question; I get it quite often. I was asked it since the genesis of Airchat, for example; over, I think, two iterations of Airchat I've been asked it about three times. I started to develop a stock answer. I've been asked it numerous times on these AMAs as well. So it continues to come up. It's interesting. So I might do the long version of the answer. So I'll settle back with my tea in order to tell the story. If I was to disagree with something significant, well, it depends on what the word significant would mean in context, but certainly a significant disagreement implies a rejection of the worldview presented in the work of David Deutsch, because to me it constitutes a coherent whole. So if you start to reject a significant part, then effectively the whole thing begins to unravel. Now, let me just tell the story of my history of disagreements with parts of the worldview of David. It's interesting: the way in which that question is phrased is perverse, given the way I treat philosophy and science, because I regard philosophy, science, knowledge more broadly, as a contest of ideas. It's about ideas, not personalities. But some people are very fixated on who thinks what. What does X think? Trying to psychologize them. It's like, what do I disagree with Albert Einstein about? I don't know. I'm sure he had a lot of ideas. I'm sure David Deutsch has a lot of ideas that I might disagree with him about.
It's just that I'm not aware of the overwhelming majority of ideas that David has, or even that I have from one moment to the next, or that anyone has. It's more interesting to talk about: are there parts of The Beginning of Infinity that I disagree with? Are there parts of The Fabric of Reality, or that kind of thing? That's more interesting, because the simple answer is, I don't know. So, when I first learned about David Deutsch at all, I was obviously at university. I've told the story many times before. I was studying physics and trying to understand quantum theory, and at the time I was paying my way through university by being a security guard at a shopping center, and I was wandering the shopping center and wandering in and out of the bookstores. This was back when bookstores were everywhere, and this particular shopping center had hundreds of stores. It was a huge shopping mall. There were a number of book shops, and I would go in there trying to avoid doing more productive things, and I'd go straight, you know, make a beeline, for the popular science books. And I read through all of Paul Davies' books. I was, you know, one of Paul Davies' super fans. We're talking sort of late '90s here, and I ran out of stuff to read by Paul Davies. But there appeared upon the shelf in 1997 this book, on the back of which it said, "I've never been so inspired since I read Gödel, Escher, Bach." And that was a quote by Paul Davies, blurbing David Deutsch's book, The Fabric of Reality. I thought, "Well, if it's good enough for Paul Davies, it's good enough for me." And I guess I'd heard of him before, because I had read the book called The Ghost in the Atom by Paul Davies, which was interviews with a dozen physicists or something like that, all about their own personal interpretations of quantum mechanics. Because I was, again, doing an undergraduate degree in physics and studying quantum mechanics, and I had no clue what they were talking about most of the time.
You could do the formalism, and that's in fact what the physics lecturers were really telling you: here's how you solve the equation. Don't worry that you don't understand this; no one understands this kind of thing. The old Feynman line: if you think you understand it, you don't. And so I read lots of Paul Davies' stuff, because at least he was trying to get at something qualitative about what was going on fundamentally in quantum theory. But I still was dissatisfied. Now, I know I read The Ghost in the Atom, but it must have blown past me, because one of the interviewees in that book is David Deutsch. So I must have come across the name at some point, but I didn't put two and two together. And then I read The Fabric of Reality, of course, and I was blown away, and chapter two, Shadows, just fixed me. It was like I was a wounded man, hemorrhaging blood, and chapter two, Shadows, fixed me. And suddenly I felt whole again. And I thought, I understand quantum theory now. I understand quantum theory. All of the lectures, all of the mathematics, all of the stuff that I'd been studying at that point (I was only in like second year uni), none of it came close to those few pages in The Fabric of Reality in terms of understanding. Of course, I could predict stuff. Of course, I knew what E = hf means. Of course, I could understand what the Schrödinger wave equation was about. But none of it told me why you got an interference pattern when you had single particles passing through the slits. So what's that got to do with this particular question? Well, I read through the book ravenously, and every chapter was spectacular. And I thought, I've got to talk to someone about this. And sure, there were people at uni, but they weren't so much interested. And it was the beginnings of the internet. And there were newsgroups. And there were email lists.
And there was an email list there with David Deutsch and a community of people who had come together to discuss The Fabric of Reality. There was a Fabric of Reality group, and I joined the Fabric of Reality group. And it was transformative, because I was able to talk not only to people who were super fans of David Deutsch, like me, or like I eventually became, but David Deutsch himself was there. Now, I was studying physics at the time, so it was a great honor and all that kind of thing; I was very excited. You know, here was this great mind that was available to people and engaging with people. And over the course of the next, gosh, it was more than a decade, more than a decade, that we had these email lists. And over that time, it would have been not thousands or tens of thousands but millions of words that were exchanged on these email lists, backwards and forwards, of people arguing about the contents of the work of David Deutsch, in particular The Fabric of Reality. And being young, you know, sort of late teens, early twenties, back when you know everything, and back when you go to university and you think, "Ah, I heard that at university, and now I am convinced that this is the truth." So, for example, things like cogito ergo sum, you know, which is called the cogito, which is Descartes' "I think, therefore I am." I was persuaded because, you know, the lecturers and the tutors would capture you with their enthusiasm, and you would, you know, read through Descartes' meditations and be persuaded: here's the one thing you cannot doubt. And you would even be presented with critiques of things like that, which were weak. And so I, you know, would go along to the Fabric of Reality group, where the work of David Deutsch was being discussed, like, "Ah, here, now I've got the knockdown argument. Now I'm smart. Cogito ergo sum: you cannot doubt your own existence."
And David very patiently, you know, persuaded me that even that is not immune from fallibilism. Even that. So I had that disagreement then. And then, you know, eventually I became familiar with the work of Sam Harris. And, you know, I was persuaded for a time that, you know, the well-being of conscious creatures is what morality is all about, and that if we want to do anything, we want to avoid the worst possible suffering for everyone, and this is the way we understand morality. Once again, go to David Deutsch: I disagree. This is where I'm pinning my flag. And then David would patiently, you know, explain how, well, morality doesn't need to have a foundation. Morality is a problem-solving project, just like any other area of knowledge. And so my point in telling the story is that over the decades of being fascinated by the work of David Deutsch, there have been occasions where I have had all the zealotry of a convert, coming in and saying, "Oh, I disagree with David Deutsch, and I disagree with The Fabric of Reality, and I disagree with The Beginning of Infinity here, in these places, these places, these places," only to realize that I misunderstood something fundamental, that the error was here with me. And you repeat that process sufficiently, and you begin to realize almost all disagreements are of that kind: that someone doesn't understand the other person. There's no willful disagreement, you know. You often hear that phrase: they're deliberately being wrong, deliberately getting things wrong, whatever you want to call it. So now I can't, off the top of my head, think of anything in The Beginning of Infinity or The Fabric of Reality, because I've read both of them so often, listened to the audiobooks, discussed them at length with people, written vast amounts over decades now, that it is just part of my worldview.
So, a significant disagreement would be to disagree with myself, because these are also my ideas now, as they are anyone else's who happens to learn them (you know, the ideas are separate from the person), and it would be to cause an unraveling, to a certain extent, of the entire thing. What am I going to do? Significantly disagree with the universality of computation or explanation, or with non-foundationalism, fallibilism, the centrality of creativity, of problems, and so it goes? Or the multiverse? Yes, so even if you were to replace David Deutsch with Sam Harris (what is one significant thing you disagree with Sam Harris about?), I would still have the same difficulty in trying to answer, even though there are many things I can point to in his work where I think, that's the thing I disagree with him about. Like, famously, in my mind famously, I produced that video that compared the visions of morality of Sam Harris and David Deutsch, after they had their second conversation on the "Making Sense" podcast, which was then called the "Waking Up" podcast. Anyway, Sam didn't seem to understand where David was coming from, and I diagnosed that one of the many issues in the conversation was that Sam was coming from a position of thinking that all knowledge has to be built upon solid foundations, whether it be mathematics, physics, or morality. So you need a starting point; you need this axiomatic place from which you can derive all the more emergent theories within that domain. And I pointed this out to Sam. I said, "Our circumstance is not one of building knowledge like we build a tower. It's more like an interwoven web where there needs to be no foundation." And so I was sort of saying to Sam, "I disagree with you on the importance of foundations." And Sam said to me, "I'm not as much of a foundationalist as you think I am." Now, we never resolved this. So do I disagree with Sam Harris? Well, I don't know. What I do know is that I disagree with foundationalism.
I disagree with this idea that I just explained, which is that all knowledge is built, like a tower is, upon foundations which are solid and immutable to some extent, leading to ever more derivative knowledge, which is as true as the foundations are. But Sam says he's not a foundationalist. So, yeah. I guess the meta point on this question is also that I'm less interested in what people think than in what ideas are out there. So it might be bizarre to say, but, you know, I'm not focused on what David Deutsch thinks about any given issue. It's true. It's like anyone else: what I'm interested in is what ideas are out there. It's not about the person; it's about the ideas. And so, yeah. The short answer is, over the course of many decades, I've thought that I've disagreed with David, and I guess I have disagreed with certain things, only for me, on closer inspection, to realize that I was making a mistake. And these days, if there was something where I wasn't sure, then I would just fire off an email or something like that and resolve the issue that way. The Link has asked: what's your take on proof of work (real-world penalty) versus proof of stake (virtual penalty) as consensus mechanisms in cryptocurrencies? Given your anti-authority stance, I'm curious how you view the unforgeable costliness of proof of work as a fundamental differentiator. Well, again, I'm no expert in this, but, well, firstly, unforgeable costliness. Presumably, when it comes to proof of work, the whole idea is that, you know, your cryptocurrency miners are solving complicated mathematical problems of some kind or other. And so that's proof of work. And that can be highly computationally and energy intensive. So this is the costliness: it's an exertion. But I don't see that as anything to do with authority. Or is it? I'm not sure. I don't see the connection.
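The "complicated mathematical problems" that miners solve are, in Bitcoin's case, hash puzzles: find a nonce so that the hash of the block data falls below a difficulty target. A toy sketch of that idea (this is an illustration, not real mining code; the block data and the tiny difficulty are made up so it runs instantly):

```python
import hashlib

def mine(block_data: bytes, difficulty: int) -> int:
    """Search for a nonce whose SHA-256 hash of (data + nonce) starts
    with `difficulty` zero hex digits. The brute-force search is the
    'work'; checking the answer takes one hash, which is why the cost
    is easy to verify but impossible to fake."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Mining takes many attempts on average; verifying takes one hash.
nonce = mine(b"example block", 4)
digest = hashlib.sha256(b"example block" + str(nonce).encode()).hexdigest()
assert digest.startswith("0000")
```

The asymmetry, expensive to produce and cheap to verify, is what the questioner's phrase "unforgeable costliness" refers to: you cannot present a valid nonce without having paid for the search.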
But proof of stake, on the other hand, it seems to me that that's problematic. I guess, again, I'm not an expert, but forced to throw in with one versus the other, it would be proof of work, because proof of stake is subject to centralization: the person who has more coins has more power or authority. It seems to me that that's the authoritarian way a system could go. So yeah, I'd need clarification on what we mean by unforgeable costliness, and the energy costliness doesn't worry me when it comes to crypto too much. I don't know if that answers your question, but, yeah, again, I'm not an expert on proof of work, proof of stake stuff. Convex Bets has asked: what are the best arguments that scientists who believe in group selection have? I don't know; none of them are particularly good. As I said, I was watching a debate between Richard Dawkins and his old thesis examiner, who was a person who, just like Stephen Jay Gould, thought group selection was a thing, or at the very least that it wasn't the selfish gene that was the unit of selection. There are no best arguments there either. Look, this kind of question presumes that once you have a good explanation, which is the selfish gene, then you can look at other alternatives to that and rank-order them in terms of better and worse. But group selection, just like the selection of individuals, is flawed, because they seem to be avoiding the very notion of genetics: that the genome, being a molecule of DNA, actually contains information, the unit of selection being the gene. And so, I don't know what the best arguments are that they have. This is one of those areas where, as I said in a previous livestream, it would really require you to do a lot of work to understand the bad ideas as well as the good ideas. And it's probably a better idea to just learn the best ideas, rather than to learn all the misconceptions that are out there as well, all the things already proven wrong.
This is one of the great misconceptions about physics, by the way. If you want to be an engineer, learn Newtonian mechanics all day long, because you're going to use it. When I say engineer, I mean a civil engineer or mechanical engineer, aeronautical engineer, something like that. But if you want to be a theoretical physicist, there isn't really any argument for learning Newtonian mechanics in great depth. Maybe as a historical quirk, if you want to. But really, if you want to make advances, if you're a 12 year old or a 14 year old and you want to become a theoretical physicist, learn the latest physics, learn what the open problems are, go straight to quantum mechanics, general relativity, whatever. Again, physics, or knowledge generally, is not like a tower where there are foundations upon which you need to build stuff. Of course, in order to do quantum mechanics you're going to have to understand some basic mathematics, like how to add and subtract and divide and multiply, and algebra and calculus and so on and so forth. Yes, but none of that is to say that you need to understand the classical three body problem in Newtonian mechanics. That's not required. Many people disagree with me on that, but I think it's just a result of the traditional way of schooling. You don't need to learn all of the mistakes that people have made, and they're not entirely mistakes: there's a whole bunch of good reasons why we regard Newtonian mechanics as knowledge, and it can solve a lot of good problems. But what I'm saying is that it's not required if you want to make progress in physics. Newtonian mechanics is not going to help you do that. It's already a closed system. It's been superseded by relativity, for one thing. Okay, you can go straight to that and to quantum mechanics.
Convex Bets again: is libertarianism the closest political philosophy today to classical liberalism? In the US people immediately associate libertarian ideals with right wing ideals, which I struggle to understand. Is this unique to the US, or is this perception prevalent in the rest of the West? Your guess is as good as mine, I guess. It's bizarre. I think things have changed recently, although, you know, I try to keep across some of this. The left-right divide has historical antecedents going back to, I don't know, France or something. And these days, of course, if you're right wing you're automatically "far right"; I think that's the modern zeitgeist, there's no such thing as just regular right, everyone's far right. That is immediately associated with fascism, Nazism, that kind of thing. And the left and the far left, well, they're all the friendly people and the kind people and the nice people and the environmentalists and the people who want to provide you with social welfare. They're generally associated with communists. And then there are the people who'd like to regard themselves as off that axis, which is the libertarians, classical liberals, objectivists, free market capitalists and that kind of thing, because they look at that spectrum and go: well, that's a spectrum of collectivism, where on the far right you've got fascism and on the far left you've got communism, and really we're splitting hairs. It's like this horseshoe effect where communism and fascism look very similar to someone who's outside of that altogether. Fascism allows a little bit more private industry, but not really; I mean, you still have this dictator at the top who's going to be able to confiscate all the money from any company at any time.
There are no individual rights in such a society, and so too with communism, where, again, there are no individual rights; it's about the collective. In both cases what you've got is just groups of people coming together under some ideology, not for the purpose of protecting individuals but in order for the ideology itself to survive. Now, I continue to argue that libertarians shouldn't be fighting anarcho-capitalists, who shouldn't be fighting objectivists, and so on and so forth, because the real debate right now is between people who tend in the direction of more and more individual rights and freedom, low taxes, smaller government, all that kind of thing, and the collectivists in their various forms: the individual versus the collective. I'm not a political philosopher, so is libertarianism the closest? Libertarianism values individual rights and freedom, as does classical liberalism. But, you know, so do more traditional forms of conservatism, insofar as they value the capacity for people to work in jobs that they want to and keep as much of the wealth that they create as they possibly can. And rights like freedom of association, freedom of religion, freedom of speech and all those kinds of things come along with traditional conservatism, libertarianism, liberalism, these kinds of things, which are not associated with far right fascism or versions of the collectivist left either. Yeah, so I think that answers that. Let me just check I've done all of the Twitter questions before I go to what's happening on YouTube. I think I've done the Twitter stuff. So, all right, my rambling can now become focused on YouTube. Otto von Wigan responded to something I must have said earlier in the piece: but roads are not predictable enough for an AI to allow for full self driving; wouldn't you need an AGI to predict what other people do?
Well, an AGI can't predict what other people do either. You can't predict what other people do; I can't; no one can. That's the whole problem, the whole essence, if you want to use that word, of what a person is: a creative entity. And a creative entity is inherently unpredictable. I think I tweeted something just recently like: if it's predictable, it's not creative; if it's creative, it's not predictable. So people are not predictable. Roads, though, are reasonably predictable. Once the road has been laid down, there it is, it's on Google Maps or Apple Maps or whatever map thing you have, and that can be put into the AI autopilot. And presumably the AI autopilot would not only have high-fidelity, up-to-the-minute digital maps of wherever it happens to be driving, but, as I also said, would be connected to some satellite system, presumably, or some other method of sensing where all the other cars on the road happen to be within some fixed radius. And furthermore it would have some means of scanning the road up ahead for things like potholes or obstructions on the road and that kind of thing. All of this is happening at a speed which is, you know, 10,000 or 100,000 times faster than any human being could possibly react to any of this information, processing far more information than a human being possibly could, fixed on this one particular thing, never getting distracted, never wanting to pull over for coffee, never being over the limit with alcohol, never being distracted by the ring of a phone or texting or anything like that.
So I think roads are predictable. As for full self driving, well, I don't think we're there yet, of course; I don't think anyone's saying that we're there yet. We're in a transition phase, as we are with many things, where in many circumstances it seems like the human driver is absolutely essential, but for other applications in other places the human can be done away with and the AI will take over. And it can't be that far off, because this is a soluble problem. You know, I doubt that many people need to get into a car and travel faster than 500 kilometers per hour. This is a silly thing to say, because it's one of those things like the line attributed to Bill Gates, you know, no one will ever need more than 640 kilobytes of memory or something, though I don't think he actually said it. So that could be wrong, but let's say 500 kilometers an hour. Traveling along regular roads and highways and that kind of thing would, I presume, be a fairly easy, safe thing for an AI to do with a well designed car like a Tesla. It's something that a human being could not do, because their reflexes are not fast enough, and if they're distracted for, you know, a microsecond while traveling at 500 kilometers per hour, it's going to be catastrophic; that's the kind of catastrophe that comes from taking your eyes off the road for just a thousandth of a second when you shouldn't have.
Now, does that mean an AI will never malfunction? No. But one would presume that the number of malfunctions that the autopilot Teslas of the future have will be orders of magnitude lower than the number of unforced errors that human beings make on the road each and every day, whether because they've taken their eyes off the road, or been distracted by something like the phone ringing, or been blinded by the car coming in the opposite direction with the headlights on, or just had a health scare. You see that on the six o'clock news every other week at least: someone who's older, or even younger, has had a stroke or some medical event and has driven into oncoming cars, or driven into a shop front and killed someone or other, or a bunch of children, pedestrians, whatever. So that sort of thing would not be eliminated entirely; no one is saying that AI cars would eliminate all accidents, all road deaths and that kind of thing, but it would massively reduce them. And, you know, as many of us pointed out at the time the coronavirus was happening, and many of us were on the side of saying, yes, it's serious, we'll take it seriously and whatever, I was forever pointing out the millions, okay, an upper limit of deaths that were occurring due to the coronavirus, and there was a global movement to stop this thing, and I'm not saying there shouldn't have been. But why wasn't a similar movement mounted for the, I think it still remains at, 2.2 million road deaths per year around the world? That by any metric is a catastrophe. Why? But we tolerate it.
Now, one day that will become intolerable, an intolerable loss of life, especially in a context where you have self-driving cars that avoid it, that could take that 2.2 million people killed per year on the road down, even if only by a factor of 1,000, to 2,200. What a wonderful saving of, you know, 2.1-something million lives every year. So, I think roads are predictable. Yes, people can build new roads, but I would presume that in the future AI-controlled cars are going to have access to roads even as they're being changed, you know, new private roads being built, that kind of stuff. Yep. Goat: can AI evolve into AGI, or is AGI itself different? David Deutsch thinks AI and AGI are fundamentally different; what do you think? Yeah, well, I've talked about this a lot, so I won't go into it again at length because it will bore people that are watching. In fact, I talked about it last livestream: they're fundamentally the opposite of one another. The shortest way to say this is that an AI must fundamentally follow the instructions that it's been given; that's what it does, and that's how we judge it. Is it doing what you want it to do? Is it completing the task? An AGI, on the other hand, does not need to follow the instructions that it's been given. You can tell your employee to do this thing, but they don't need to do it; maybe they do if they want to keep their job, but there's no mechanistic, deterministic way in which they're going to slavishly follow the instructions. They're not determined to do what you instruct them to do. An AI is so determined to do what you tell it to do, because it is nothing but a mechanism. We are not mechanisms; we have choice; we can see the options before us and choose among them. That's our creative capacity. Does the incompleteness theorem point to God, or is it a delusion, like the mathematician's misconception? Okay, Goat has asked a number of questions here.
The incompleteness theorem, just for everyone, is often elevated to a mystical level, rather like quantum theory is. Now, it certainly says something very interesting about the world. It says that for any axiomatic system, such as in mathematics, there will always be statements that can be expressed in that system for which you do not have a proof, which you cannot decide, true or false, according to that system. Do they follow from the axioms or not? They may be valid, but can you show a sequence of steps, using rules of inference, to get from your axioms to that statement? That's what the incompleteness theorem is about. Completeness means every valid statement that can be written has a proof, and you do have that for sentential logic, for example, or for something called predicate logic; this gets into the technical details of what formal or mathematical logic is. Arithmetic is not like that: it's incomplete. There are things that you can write down in arithmetic, and this is what Gödel's incompleteness theorem was about, for which there is no proof. There are many consequences of this. I do not buy the idea that it has consequences for consciousness, but it does have the consequence that mathematics will remain forever a creative exercise: you can't just calculate your way to every single answer to every single question in mathematics. Which is, I guess, the error that was made in that Sam Altman thing, you know, "let's just discover all the physics". Hey, computer, discover all the physics, which is what Sam Altman said the AI would be able to do one day. Well, it's similar here: physics is always going to be a creative exercise, so you can't just discover it all. You can create knowledge about things yet to be discovered, or things being discovered.
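For reference, the theorem being described is usually stated along these lines; this is a standard modern formulation, not the speaker's wording:

```latex
\textbf{G\"odel's first incompleteness theorem.}
Let $T$ be a consistent, recursively axiomatizable formal system
strong enough to express elementary arithmetic. Then there exists a
sentence $G_T$ in the language of $T$ such that
\[
  T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T,
\]
i.e.\ $G_T$ can be written down in the system, yet neither it nor its
negation follows from the axioms by the rules of inference.
```

The "undecided" statements discussed above are exactly sentences like $G_T$: well-formed, but not settled either way by the axioms.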
You can't discover it all, any more than you can prove everything in mathematics, and for the same reasons: they're both creative domains where, and I forget who I'm quoting, every point is a boundary point. Everything that you know is a place where you can ask a question of it, which can broaden your knowledge. There are always unknowns left. Ignorance is infinite and knowledge is always finite, and for any given piece of knowledge there are questions you can ask about it: why is it that way? And that can be a useful way to explore reality and come up with solutions. And because there's an infinite amount of ignorance, everyone has something to contribute, because there are simply not enough people, and never will be, to solve all the problems; we'll never run out of problems, and so on and so forth. Goat asked some personal questions, which I'm just going to pass over in silence. Divesh Dewindi: please explain quantum computation in layman's terms. After an hour and 37 minutes, I can't. What I can do is direct you to my series on the multiverse, which touches on this, where we talk about the existence of bits versus qubits. Even if you asked me to explain classical computation in layman's terms, and I think someone kind of tried to do that yesterday, it would be a high bar. Classical computation is the manipulation of bits, binary digits, zeros and ones: storing those binary digits in memory, processing them, and coming up with a solution. A binary digit can take on either of two values: a zero or a one, a true or a false, an on or an off, whatever you want to say, a high voltage or a low voltage.
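The bit-versus-qubit contrast can be sketched in a few lines of Python. This is a toy single-qubit model I'm supplying for illustration, not anything from a real quantum computing library: a qubit's state is just a pair of complex amplitudes, and the Hadamard gate puts it into an equal superposition.

```python
import math

# A qubit state is a pair of complex amplitudes (a, b) for |0> and |1>,
# with |a|^2 + |b|^2 = 1. A classical bit is only ever (1, 0) or (0, 1).

def hadamard(state):
    """Apply the Hadamard gate, which turns |0> into an equal
    superposition of |0> and |1>."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    """Born rule: a measurement yields 0 or 1 with these probabilities."""
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)

qubit = (1 + 0j, 0 + 0j)      # start in |0>, like a classical 0
qubit = hadamard(qubit)       # now an equal superposition of 0 and 1
print(probabilities(qubit))   # ~ (0.5, 0.5)
qubit = hadamard(qubit)       # interference brings it back to |0>
print(probabilities(qubit))   # ~ (1.0, 0.0)
```

The second application of the gate is the interesting part: the two amplitude paths interfere and cancel, something no probabilistic classical bit can do, and that interference is what quantum algorithms exploit.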
When it comes to quantum computation, instead of having bits, which are single valued, you have qubits, which can be in a superposition of values: zero, one, or a mixture of the two. And this gives them more power, because the qubits can work together, effectively speaking, like multiple computers performing massive parallel processing. And so, instead of having to calculate one thing, then another thing, then another thing, you can calculate many things simultaneously, and combine those many results at the end, in a superposition, to come up with a solution to a question that would otherwise have taken classical computers the age of the universe. But that doesn't really explain quantum computation in layman's terms. Sometimes it's very difficult to come up with a layman's-terms explanation; it would depend upon how good your layman's understanding of classical computation is. But one way of looking at it would be to assume trillions of classical computers working in parallel. That's what a single quantum computer is kind of like; not strictly, but roughly speaking, in layman's terms, that would be the analogy. Otto: I've thought about the theory of everything in physics a bit, and I've always had trouble understanding it; I settled on treating it as an intuition; I expect knowledge to diverge again at some point. Yes. So the theory of everything, as it's normally regarded, is the unification of the fundamental forces, and then, in principle, once you have this single equation that can be written on a t-shirt, you could walk around with "here's the theory of everything". Then, if you knew the initial conditions and you've got the equation, you could predict the final state of the universe. But of course, knowing the initial conditions, where every single fundamental particle is and how fast it's going, what its momentum is, something like that, is impossible.
Because there is no simultaneous now, and so there's no simultaneous future either. So automatically you're trapped: this theory of everything can't give you what many people wanted from it, which is to predict the future state, because predicting the future state with high precision means knowing the present state with high precision, which you can't do, among other things because of Heisenberg's uncertainty principle; particles are spread out in space anyway. It's just been poorly named. It's not the theory of everything; it's the theory of the forces, or something like that, and physics is not entirely about forces. Constructor theory, for example, tries to get behind that; even string theory tries to broaden things out a little bit beyond just that. Tornado Y has asked: do you think governments should regulate how tech companies store user data? Companies should naturally tend towards good user data management, but it would take very long for the incentives to kick in. Governments are singularly bad at everything. The one thing they should be able to do is to police society and to defend society: so, police and military. And even then, the government is made up of politicians, who are regular people, and bureaucrats, who are office workers with anything from communications through to legal degrees. So in that circumstance, why would we expect the government, whether members of the bureaucracy or the politicians themselves, to understand anything whatsoever about technology? One of the reasons why technology flourishes in the way that it does in certain areas, like, for example, mobile phones, where it's only been a couple of decades and we have the iPhone and we have smartphones flourishing, is that the government has been unable to keep up. If they could understand it, they'd regulate it, and the EU tried to do this, right?
The EU recently, you know, enforced this regulation that iPhones, for example, couldn't keep their proprietary way of charging; they had to have the USB-C charger or something like that. So already they're trying to intrude and regulate. And as soon as you regulate, of course, you stifle creativity, because if everyone now has to use USB-C or whatever it happens to be, then no one can innovate and make something better, because the EU has decided: no, this is the thing, and we're never going to do anything better than that. It's the standard communist idea, right, where you go with this one plan and no one's allowed an alternative; they say, you know, this is good for everyone. The point is that, no, companies do not want to kill their customers, companies do not want to breach the privacy of their customers, companies do not want to upset their customers; it's the last thing they want to do. So the internal regulations of a company, especially a profitable company, are going to be far more stringent than anything the government will place upon them. The government retains data too, and there are situations where government data has been breached; hackers will find a way. Like that Jeff Goldblum line from Jurassic Park, "life will find a way": hackers will find a way, okay. It's this constant battle between the good guys and the bad guys in tech over the protection of data. So I wouldn't want to hand it over to governments. If governments have any role here, it's where you have foreign actors, okay, foreign governments, the Irans or the Chinas or the Russias of the world, trying to disrupt or even destroy internet infrastructure with great programs of hacking. There the government has a role, because that's more of a military issue. And they do have, you know, in America you have the NSA and so forth, and we have what I think is called the Australian Signals Directorate or something like that.
And so they monitor foreign incursions, attempts to hack into companies and so on and so forth. It's a form of defense. But as for regulating how tech companies store data, I don't think so, because, among other things, you want the tech companies to continually innovate and come up with more secure ways of storing data. Hopefully soon we'll be able to store our data using quantum encryption, and quantum encryption is fail-safe in the sense that if someone looks at quantum-encrypted data, that is immediately known. We know when quantum-encrypted data has been observed, and so you can immediately change it; an alert can go off, a red light can go off somewhere and say your data has just been looked at. Now, you might not know who's looked at it, but you will know that it's been looked at. This is unlike present means of encryption, which can be broken without your knowing until sometime later that the data has been stolen, and so on. So there are solutions to this; they exist in principle, they're just not in practice yet. But we want companies to continue to innovate in the area of data security, and a perfect way to stop them innovating is to regulate a specific kind of data security. And moreover, of course, once you regulate a particular kind of data security, saying "this is the way in which data needs to be secured", well, the bad guys know that as well, and will soon figure out workarounds. So I'd much rather, "trust" isn't quite the word I want, much rather leave it in the hands of private industry, who can adapt, accelerate and innovate rapidly, whereas the slow ballast of government is only going to slow things down and give an advantage to the bad guys, who can take advantage of the regulations. Goat asked how to know the area in which we're most likely best in the world; well, it's just whatever your interest is, I presume. Black Locust: to what extent is it fair to say that Hitler and the Nazis lost World War Two because they were wrong?
That is, their ideology made them further from the truth, the Allies were nearer to it, and the rest is just details? Yes, they were wrong: morally wrong, factually wrong, all of that kind of stuff. And if you want to get totally fundamental on this, it's as David writes in The Beginning of Infinity: if there is a moral maxim against which morality might be judged, it is "do not destroy the means of error correction". And what performs error correction? Well, any number of things, but a primary error corrector in this world is a human being; a human being error-corrects. Do not destroy that; do not destroy people. And Hitler was destroying a lot of people, by starting wars and by targeting Jews, destroying the means of error correction both in terms of people and in terms of the way in which creativity could flourish, by having a very regimented society. I don't want to say that those lessons haven't been learned, but it's clearly the case today that we have new pogroms taking place in various places, chief among them the attempted genocide recently in Israel by Hamas. Those people are fundamentally wrong: factually, morally, epistemologically. The most awful means by which you could destroy the means of error correction is to kill a person, and everything else sits beneath that. So yes, of course, to be evil is to be wrong: factually wrong about what a person is, in these cases of war, and about how it is that things improve, which is via error correction. Which, on a related point, is why I'm typically against the death penalty. That's not to say I'm against all killing: if the choice is between killing the terrorist who's about to press the button and set off the bomb inside a school somewhere, and not killing them, you kill them. Okay, you shoot them, you take them out, if that's the only option left to you, if it's the best option available.
But when you've captured the murderer, who may have committed awful crimes, and they're, you know, 30 years old or something: do not destroy the means of error correction. We can correct that person and literally turn that person into someone else. Now, that's cold comfort to the family of people who've lost loved ones to murder. However, I also think it's cold comfort to kill that person as well. There has been, as far as I know, sociological research on the benefits of revenge, because in tribal societies it happens that when a person is allowed to enact revenge upon someone else for having been hurt, they feel better. Well, okay, but we're not a tribal society, and I don't think feelings are fundamental when it comes to matters of morality. What we want is to focus on error correction, and so even if keeping a murderer alive, in order to try to assist them to realize the error of their ways and to become a different person, hurts the victims a little, morality is not about feelings; when I say hurt, what I really mean is to cause them some sort of emotional distress. And they can learn better as well. They can learn why having a vengeful heart is not a good psychological state to be in, and instead have some degree of compassion, to the extent that's possible. It's hard, but, you know, there is wisdom sometimes in ancient traditions that talk about this kind of thing, and the wise thing, even in situations where the most egregious crimes have been committed, if the alternative is between punishing someone with death or allowing them to survive and understand the mistakes they've made, is to go for the latter: allow the person to understand the mistakes they've made. All of that said, on a related issue: when it's wartime, we don't have time for that; we don't have time to split hairs over whether we're going to teach the Islamic fundamentalist a better way of life or not.
We need to wipe them out and destroy them, because they are coming after us and various other innocent people. And so when you're in a situation that is urgent, when it's an emergency, which it is, and the Islamic fundamentalists, or any fundamentalists or terrorists in general, are making plans now, as we speak, to kill the maximum number of people, they need to be destroyed now, because our methods may be crude, but they're the best we know right now. In the future, yeah, sure, we'll be able to press a button and the AI drone thing will, you know, at very high velocity, go and capture the terrorist in the act of building the bomb before they ever have a chance to set it off. And then you capture them and drag them back to prison, and then you effectively deprogram them, or teach them something better, so they cease to be an enemy of civilization and become an ally; and we want more of those. Why doesn't David Deutsch believe in non-locality? Well, you would have to read his papers on this, the Deutsch-Hayden paper on that. David Deutsch has a coherent view of physics, and that means understanding that the prohibition on signalling faster than the speed of light holds universally. Special relativity gets you that: you can't travel faster than the speed of light, you can't signal faster than the speed of light. And so that means everything must be local, and that includes entanglement effects. So even if two particles are entangled, at some point they must have been a single system, close enough together, and in any case they cannot possibly signal faster than light, and there is no experiment that demonstrates they do. Look up David's work on that. I'll get through the next few fairly quickly, because we're coming up on two hours and that gets me to the end of my endurance, but this is fun.
Van see Krishna: given that we are in the era of LLMs, what do you think about large physics models? My view at this point is that a unified field theory might be a large encoding non-parametric model. I don't think that large anything-models, artificial intelligence, are able to generate explanatory knowledge. They can't generate novelty. Many people, even people like Sabine Hossenfelder and various other physicists, have been out there on social media and Twitter with some quite funny interactions with ChatGPT, asking it to, you know, generate a theory of dark matter or to unify general relativity and quantum theory. It's hilarious what it comes up with, because it's childish sort of stuff. It's nowhere near that, and won't be anywhere near that unless it becomes an AGI, which it won't, because an AGI is a different thing. As for whether a unified field theory might be "a large encoding non-parametric model", I don't understand that, I'm afraid; I'm sorry. Black Locust: have there been any promising ideas to add to the quest to find the objective principles... sorry, I've missed a bit... have there been any promising ideas to add to the quest to find objective principles of art, at least since David Deutsch's "Why Are Flowers Beautiful?"? Not that I'm aware of; I'm not an artist. But there must be some, and most of them are inexplicit. Things are literally beautiful or not, and we have good theories running in our minds about what those are. You are disgusted by the rubbish tip, but you are attracted to the florist, both in terms of smell and sight, and that's no accident. So, why is that? Well, because there is objective beauty in the world. Now, what are the standards of objective beauty? Well, they must include things that the artists would talk about: matters of symmetry and harmony and lighting and all that kind of stuff that isn't my area.
So, I think that art contains a lot of knowledge, that some things have been discovered and some things are well known, but I can't articulate them all here. Easy Reminder: how do you filter information on current events? That is, who do you trust? What kind of heuristics do you have? For example, Trump versus Biden, the agenda behind the scenes, what's going on? "Trust" is a word that I tend to avoid; I think I'm going to slip up once during these two hours, so forgive me for that. The real metric is: how do you come to detect and correct errors? And in an age where there are many channels of information, all you can do is compare one with the other, and sometimes use your own eyes and ears, and believe your own eyes and ears, because rather often we are told not to believe our own eyes and ears. You mentioned politicians there. I like to listen to the politicians themselves rather than commentaries on what the politicians have said. Now, in the age of AI, of course, this is becoming a fraught area, but we're not there yet. We know when it is actually Trump speaking or Biden speaking. Why? Because there are error-correcting mechanisms out there. There are traditions, there are cultures, and there are institutions. The media, sometimes called the fourth estate: even if you disagree with these media outlets, whether mainstream or new social media channels, they will have commentary on the enunciations of these politicians, and the enunciations of the politicians will be identical, even if the interpretations of what is being said differ diametrically. Douglas Murray sometimes gets pessimistic on this point and says we can't even agree on what the facts are anymore. I think that goes a little bit too far, because we will still agree that Trump said X, Y and Z, or Biden said X, Y and Z.
Although there are cases, of course, especially in the fog of war, where it will be reported everywhere that a rocket struck a hospital in Palestine. Now, even if every source agrees that that happened, we should still be skeptical. And certainly in one situation we found not only that the rocket didn't strike the hospital, but that it struck the car park of the hospital. A good 80 or 90% of the media said an Israeli rocket struck the hospital, and then it turned out that it was a misfired Hamas rocket that struck the hospital car park, and that some smaller number of people were killed. So this is Douglas Murray's point, that sometimes we can't agree on the facts: was the rocket fired, and who fired it? Well, that a rocket landed somewhere and some people were killed, that much everyone agreed on. But precisely where, and who fired it, and how many people were killed, and that kind of thing? Yes, we have difficulty with these days in the fog of war. That may have always been the case with war, but these days it's particularly fraught because of the visceral distrust, through to outright hatred, that many media outlets and other commentators have for the IDF and Israel and so on and so forth. And that's an extreme example; that's the hardest case. When we get into other things which are more frivolous and you're watching the news, generally, you know, if the news says there was a car accident at 5:30 PM today at the corner of this street and that street, it's reliable.
It's not a matter of trust, but you have an understanding that the reason the news exists, or the reason this reporter has a job, is that they want to continue to make money, either by being gainfully employed or by selling advertising. And to do that, they want to be truth tellers; they want to be accurate, because if they're uncovered to be unreliable, then people will tune out. And so that is an incentive; there are some positive incentives there for people to, as far as possible, not lie. Now, there are other perverse incentives in certain other situations, where there are incentives to lie, and that's where you have to have your critical faculties about you. How do I filter information? Well, as I just said: don't react too quickly. That is always an important thing. Never react too quickly; give it time, especially with important world events. Be slow on these things. The news wants to move fast by virtue of the fact that it's new. You should soberly sit back; not everything, of course, should be animating you and exciting you. There are a lot of things to be concerned about, and I think a lot of people are fixated on the news. Some people that I watch are fixated on the news; you can see that. I don't mind watching certain YouTubers and others who comment on the news, but you'd think it must be rough on their psychology, because every day they're tuning into the news and they've got their own channels. And it can't be good, because the news is designed to emotionally capture you, and the most easily available emotions to elicit in someone are fear and excitement: the fear that the world is ending, that catastrophe is coming; the excitement that something is about to happen. And so we have to be cognizant of that. You have to be aware of that in yourself and in other people. It's the age of hyperbole, and it's genuinely true that through social media, the smallest thing happens and it gets amplified.
Otto has asked again, something about it being wrong to eat meat. I've got an article there for you, Otto, about humans and other animals, about the morality of eating meat. One day, I think it'll just become a moot point. It's kind of like the debate about energy sources running out: should we burn coal, should it be nuclear, should it be wind or solar? Interesting as that is, and as passionate as some people can get, even me at times, I think, hopefully within decades, it will all be a moot point. Once we have fusion power, the old arguments will be, you know, who cares about rehashing them? And so with the eating meat thing: I don't personally have any qualms about that argument, so I won't go in for that. But it'll be a moot point once we're able to grow as much meat as we like in the laboratory, without ever having to worry about harvesting it from a living organism like a cow or sheep or pig or whatever. It'll just be grown in a factory. It won't have a nervous system, so all of those concerns that people often have will seem ridiculously dated. Arch 45 asks: why does it seem that people who read David Deutsch are influenced by his ideas and end up being smarter than everyone else, like the most recent Lex Fridman guest clearly having the best grasp? I commented on that Lex Fridman guest last time I did the live stream. I'm so heartened to see the work of David Deutsch getting more and more airtime via various channels. There's more influence, and that means if you're influencing influencers, if you're influencing the people who are appearing on Lex Fridman, if you're influencing people who are, you know, the wealthiest on the planet from Elon Musk down, this is only a good thing. And it still seems that when you encounter someone who is familiar with the work of David Deutsch, the way they speak can be different.
And this different and new and original and creative way of speaking or interacting, by virtue of the fact that it doesn't sound like every other academic or intellectual, can, as you say, seem like they're smarter. That doesn't mean they are. I don't endorse the idea of gradations of smartness. But it is a different and preferable way of sifting good ideas from bad: a commitment to rationality, which I regard as seeking good explanations via a method of error detection and correction; a commitment to reason, to science and mathematics broadly, and to philosophy. And so it's a levelling up of traditional intellectual thought, which was: let's accumulate as much information as we possibly can, fill our heads with facts. I'm not denigrating, you know, people who have deep background knowledge they can draw on. But it was a mode of authoritarian understanding that many people still push today. You know: what are your sources, what are the authoritative sources, how can you ensure that you're justifying your beliefs and you've reached the truth, that way of thinking. That way of thinking is the old-style intellectual that you can still see out there in the world, who will rest upon and rely upon credentials and credentialing, who will defer to the authority or perceived authority of experts. In some sense it's only subtle, but in other ways it's a radical shift towards a way of viewing, for example, expertise not as a matter of authority but as a matter of deep knowledge, where there is a commitment to error correction. The idea that we have an open-ended capacity to improve our circumstances is another way in which David Deutsch has influenced, I would say, the global conversation in a fundamental way, perhaps the most fundamental way of all, in drawing a bright line between us and an old guard of intellectuals who are still the most popular.
They are committed to a pessimistic view of the future, prophesying the end times and denigrating human beings and what we've done for the planet and for Western civilization. That way of thinking, and that vast group of intellectuals that still outnumbers us by 1000 to 1 or something like that, versus the anti-authority commitment to unbounded progress and optimism that David Deutsch gifts us through The Beginning of Infinity. It's not to say he's the only person on earth who's ever done this; there is Matt Ridley, and you can look at the work of Karl Popper and Feynman. There are many people who have this optimistic view, but what I'm saying is that he articulates it in one place, in that book and throughout his work, in a way that is more cogent and, here's the key thing, coherent as a worldview. By coherent I mean there are all these many aspects to it that fit together like a lovely little puzzle, which is why, when I go back to Ben's question about what significant thing I disagree with David Deutsch on, it's because I feel as if, and I don't want to sound ridiculously arrogant here, but I feel as if I can see the puzzle. There's that piece over there which is quantum theory, and understanding the significance of quantum theory for possibility, which leads to constructor theory: what can possibly happen in the world and what can't possibly happen in the world. That's tied to epistemology and our capacity to imagine what is physically possible and to take action to achieve it. And then wealth: to generate more and more wealth in order to achieve the things that we want, because what's holding us back is only knowledge, not resources. Any kind of matter which exists out there in the universe, which so far as we know is effectively infinite if not actually infinite, has all the resource that we need; we just have to know how to use it.
And knowing that all of this is possible gives us optimism, but at the same time, we'll never get to the end; we can only hope to correct errors. And we'll always have errors with us, because we're fallible as human beings, and human beings are cosmically significant. So as I'm telling you all this, you can see the pieces of the puzzle filling in. Everything from quantum mechanics, which I began with, through to the nature of what it is to be a person fits together, and people are coming to understand aspects of this and to get on board. And so it is so much more refreshing than that, as I said, old guard of people who are ever engaged in talking people down, dismissing explanations and the centrality of knowledge in changing our circumstances now and giving us hope for the future. Because we need this right now. Of course, you have the people who have unfortunately indoctrinated generations into thinking that basically the end of the world is not too far around the corner. Climate change: of all the many, many problems, to pick one, climate change is the one that animates youngsters most. It's the one that politicians and various other activists are using, that people who are interested in collectivism are using, as a cudgel to bring in more regulations, to socially control people, to win elections, to gain power. And it all rests upon denigrating people and the output of people. And yet that's just one thing. Then you've got, you know, people who are pessimistic about AI, and people who are pessimistic about the potential for curing disease, and people who are pessimistic about longevity, and the list is long. So, yes, there is a new kind of smart person, if you like, and thank goodness they are in the ascendancy.
Whether their numbers are accelerating, whether these ideas are accelerating into the zeitgeist as quickly as the pessimistic ones, I don't know; we're at an inflection point of sorts. The next few years will tell. I really have to stop soon, even though the questions keep coming in. SAS asks: can you elaborate on David Deutsch's view on free will versus Sapolsky's? Does he basically say free will could exist within the bounds of whatever creativity is, which is still not known? I can't give you David Deutsch's view on free will, because I don't know; I'm not David Deutsch. All I can tell you is what I've said in the past about this, which is that I think free will is just a term for an emergent phenomenon: human beings have the capacity to create new options in the world and choose between them. So we create stuff; importantly, we create explanations. Among many examples: imagine you want a highly precise way of locating yourself on the globe. You're navigating on an old sailing boat and you think, I've got this hand-drawn map; if only I knew exactly where the New World was, exactly where the United States was. I'm sailing here from Spain over to the United States. Why can't I locate landmarks more precisely? Well, back then you didn't have the option; you couldn't freely choose to switch on your GPS map. But now you do. Now the GPS will tell you exactly where you are on the globe, down to the nearest centimetre or something like that. So we created that. Well, Einstein did, and then engineers came along, and programmers and coders and people who made smartphones. So when I say we, I mean humanity created that. And so now we have the choice. We have the choice to switch on a GPS system or to use a paper map. Now, that capacity to create is intimately tied to this notion of free will. Yes, I have said that, you know, when you want to explain what it is a person does.
It's convenient to invoke this idea of free will. Now, if you say, oh, but everything is determined, I will agree with you. Everything is determined by the laws of physics. The laws of physics are deterministic. They are sovereign; you cannot escape from the laws of physics. So if your conception of free will is to defy the laws of physics, then I agree that no such supernatural force of free will exists. So put that aside. The fact that you can't in principle predict what a person is going to do should make you think carefully about what it means, therefore, to be a person. A person is not like a ball rolling down a hill. A person can make choices, and their choices depend upon what knowledge they're creating moment to moment, an inherently unpredictable thing. And so you can avoid the term free will if you like, and many people do; Sam Harris does, for example. But we're still left with the central mystery of what a person is. So rather than get tied up in debates over terminology, does a person have free will, I like these days to just talk about this interesting mystery at the heart of what it is to be a person. We create, genuinely create, and we can unpack what 'create' means; in particular, we create explanations, and we can unpack what that means. And that process is inherently unpredictable. And so, although you can say things are determined, you can also say, without contradiction, that things are determined to be subjectively unpredictable. I went through three different versions of unpredictability in the live stream before last. You get that from quantum mechanics, and you also get it from, among other things, knowledge creation, which is inherently unpredictable even though it's all determined. Yeah, people often make that mistake though.
Okay, so whenever you're watching YouTube... I was watching a video with Sabine Hossenfelder, and she was speaking with some other physicists as well, and it was remarkable to me that these professional physicists, operating at such a high level, were not able to distinguish between predictability and determinism. Things can be determined without being predictable. Things can be determined without being predictable. You fire a photon at a half-silvered mirror: it is absolutely determined that it will either go through or bounce off, 50/50, but you can't predict which. And there's no point saying that this next photon has a 50% chance of going through and a 50% chance of bouncing off; if you want a prediction of which it will do, you can't have one. And so that, I think, is a nice bright line between predictability and determinism. They're not the same; they're not synonymous at all. Black locust asks: do you live in Sydney? Is there a Popperian or Deutschian society? Not that I'm aware of, but, can I just say, 'Deutschian' isn't a word; I don't know what a 'Deutschian' thing would be. Even 'Popperian' is something I sometimes find challenging. You know, I just like epistemology, just that word: the study of knowledge. People don't go around calling themselves Einsteinians, or Feynmanians, or... people do call themselves Darwinians, I suppose. But anyway, it's better not to have movements, full stop, and it's certainly better not to have movements centered around a particular person. The reason my podcast is, in very large part, a podcast about the work of Deutsch is that in one singular place, in a number of books, you have a highly concentrated body of ideas.
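The half-silvered-mirror point above, that a process can be fully determined even though no observer can predict their own observation, can be sketched as a toy program. This is a deliberately crude illustration, not real quantum mechanics: amplitudes and interference are omitted, and the function name and outcome labels are just illustrative choices.

```python
def evolve(branches, photon_count):
    # Deterministic rule: each photon splits every existing branch in two,
    # one branch recording "pass" and one recording "reflect".
    for _ in range(photon_count):
        branches = [history + [outcome]
                    for history in branches
                    for outcome in ("pass", "reflect")]
    return branches

# After three photons the global state is fully determined:
# exactly 2**3 = 8 branches, each possible record occurring once.
multiverse = evolve([[]], 3)
print(len(multiverse))  # 8
```

The global evolution contains no randomness at all, yet an observer inside any single branch ends up with a record they could not have predicted, which is exactly the gap between determinism and predictability being described here.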
I think once upon a time someone, I think it was Aaron Stupple, said something to the effect of, wonderful phrase, that David Deutsch's work, the work in The Beginning of Infinity, has the highest density of knowledge per sentence of any book ever, and I think that's a nice way of putting things. And that's why I focus on that work, but also because it's all the stuff that I've studied throughout my life. That's why it seems like I'm focused singularly on David Deutsch. But there's also another way of looking at it, which is that David Deutsch and I have the same interests. He's a lot more accomplished than I am, and has subtly different interests: he's very focused on the physics, and, you know, I like the epistemology stuff as well as the physics. Anyway, so there are reasons for these things. People who converge on the truth converge together, that kind of thing. I can say that. Oh, SAS also asks: with the double-slit experiment, can you expand on why it proves the multiverse as strongly as our study of organisms proves evolution? OK, so, no, it doesn't prove anything. Science is not about proof. I can't do that. But go to my series on the multiverse; in particular, go to the episode on Shadows, which is the easiest way in. Instead of saying that the double-slit experiment proves the multiverse, what we say is that the only way to explain what you see in the double-slit experiment is by recourse to invoking the existence of the multiverse. The multiverse is forced upon us, just as evolution by natural selection is forced upon us once you understand how it is that there is a variety of species that change over time, and you look at the fossil record for evidence of that. SAS has asked about the multiverse and dark matter; Marvel movies and others have toyed with this as well, you know, like dark matter, whatever it is. Are we swimming in a civilization of dark matter people?
By definition, so far as we can tell, dark matter does not interact with any other kind of matter except gravitationally. So perhaps there are civilizations of people made of dark matter. We just don't know yet. So yeah, it's a good premise for science fiction. I don't know why science fiction movies recently haven't taken advantage of dark energy and dark matter; there's a lot to explore there. Maybe one day. People are saying nice things and irrelevant things. Thank you for a donation today, a singular donation from Aaron Martin. Thank you, Aaron. Shout out to you. Did I answer your question? Whether you asked one or not, thank you so much. Again, for anyone else who is enjoying these, I've had some lovely feedback from people over time. Please feel free to go to www.bretthall.org, and there are links there to how you can support me doing this into the future, and my regular episodes as well. Ironically, even though I've been here for, what, two hours and twenty-odd minutes, a half-hour episode of ToKCast actually takes a lot longer; it could take six hours or something, because the editing is just such a nightmare. No editing required here. This is the unedited version. So, the question about what science is can sometimes be phrased as: explain what exists, and what's going on, and the relationships between things, and the causes; what does physical reality consist of, that kind of thing. In contrast to science, morality is about what you should do. What you should do. Your problem is always: what should you do next? There are better and worse choices. Any time you have the word 'should', it's a moral question. And there are objectively better answers. Some things you should do and some things you shouldn't do. And of all the things that you could do, some are better to do than others. So morality is always the problem of what to do next. What should you do next? Something about veganism there from Power Ranger.
The distinction I make between humans and other animals does come down to this fundamental idea of the explanation creator. And to get into the nitty gritty of it without going into too much detail: I think there's a distinction to be drawn between pain and suffering. I'm not sure that animals have qualia, but even if I granted that they did, for example the capacity to experience pain, pain is not necessarily suffering. Suffering can be conceived of as some explanation of the pain: in particular, an explanation that the pain is going to continue, or that the pain is going to get worse. So suffering is, I would say, a uniquely human thing. People disagree with me on that, but that's where I come down on why it's not ethical to cause a person suffering. Well, unless of course they're some sort of criminal and you need to tackle them to the ground, which is going to cause them to suffer; modulo that kind of exception, that edge case. Animals and people are black-and-white different things, different entities in the world. And it's almost as if, the more we learn about AI, not AGI but AI, we may very well converge on the idea that animals are a kind of AI. They are slavishly following programs, even if animals within the same species differ from one another; they might have rather unique personalities. Anyone who has a dog knows that different dogs have different personalities, and different cats have different personalities. So there's something there in the genome that's unique, but that's just variation in the genome. None of that necessarily confers the capacity to have an experience, to explain things. But it's a long conversation. In all my pieces where I talk about this, and veganism, and the morality of animals, I also say: none of this is an argument for being cruel to animals. So there's also all that. One last thing, Power Ranger: why should animals be denied certain rights?
Well, because, among other things, they're not people, and so they can't have human rights, because they're not human. The reason why a human right exists, the right to your life and the right to your property, is because you are a rational, thinking creature who can correct errors and explain the world and improve it. And animals are never going to do that, not for themselves, not for each other. If you care about animals, then you should care about humans infinitely more, because humans are the one thing that is going to be able to save the animals that you're concerned about. And suffering does indeed come down to the capacity to explain, which no animal has. As I said, thank you, and thank you to everyone who participated today, who came along, who watched, dipped in, dipped out, asked questions, whether it was on X or on YouTube. Until next time, possibly a couple of weeks from now, we'll say goodbye for now, and I will see you again in a future episode of ToKCast. Bye bye.