Archive.fm

NBN Book of the Day

Shannon Vallor, "The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking" (Oxford UP, 2024)

Duration:
1h 4m
Broadcast on:
10 Jul 2024
Audio Format:
mp3

There's a lot of talk these days about the existential risk that artificial intelligence poses to humanity -- that somehow the AIs will rise up and destroy us or become our overlords. 

In The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking (Oxford UP), Shannon Vallor argues that the actual, and very alarming, existential risk of AI that we face right now is quite different. Because some AI technologies, such as ChatGPT or other large language models, can closely mimic the outputs of an understanding mind without having actual understanding, the technology can encourage us to surrender the activities of thinking and reasoning. This poses the risk of diminishing our ability to respond to challenges and to imagine and bring about different futures. In her compelling book, Vallor, who holds the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the University of Edinburgh's Edinburgh Futures Institute, critically examines AI doomers and longtermism, the nature of AI in relation to human intelligence, and the technology industry's hand in diverting our attention from the serious risks we face.

Learn more about your ad choices. Visit megaphone.fm/adchoices

Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/book-of-the-day

Welcome to the New Books Network.

Hello, and welcome to New Books in Philosophy, a podcast channel within the New Books Network. I'm Carrie Figdor, Professor of Philosophy at the University of Iowa, and I'm co-host of the channel, along with Robert Talisse, Sarah Tyson, and Malcolm Keating. Together, we bring you conversations with philosophers about their new books in a wide range of areas of contemporary philosophical inquiry. Today's interview is with Shannon Vallor, Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute at the University of Edinburgh. Her new book, The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking, is just out from Oxford University Press. There's a lot of talk these days about the existential risk that artificial intelligence poses to humanity, that somehow the AIs will rise up and destroy us or become our overlords, and it's game over for humanity. In her new book, Vallor argues that the actual, and very alarming, existential risk of AI is quite different. Because some AI technologies, such as ChatGPT or other large language models, can closely mimic the outputs of an understanding mind without actually having understanding, the technology can encourage us to surrender the activities of thinking and reasoning. This poses the risk of diminishing our ability to respond to challenges and to imagine and bring about different futures. Vallor critically examines AI doomers and longtermism, the nature of AI in relation to human intelligence, and the technology industry's hand in diverting our attention from the serious risks we face. Let's turn to the interview.

Hello, Shannon Vallor, welcome to New Books in Philosophy. Thanks, really happy to be here. I'm really looking forward to our conversation about The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking. Before we get into the interview, tell us a bit about yourself: how you came to this topic, how you became a philosopher, anything you'd like to tell us by way of background. Sure.
So my original focus in grad school was the philosophy of science, and kind of the merging or integration of analytic philosophy of science and language with phenomenological perspectives on things like reference. So it was quite far, actually, from the work that I do now. And when I started teaching, I was primarily teaching courses in the philosophy of science, but there was a course on the books at Santa Clara University, where I started teaching in 2003, called Science, Technology and Society. And it was a philosophy course, but the person who had taught it, Thomas Powers, had left the university, and I had done graduate work on the intersection of science and ethics. And I was really interested in thinking about philosophy of science more broadly, to think about the social and ethical implications. At the same time, my work in the philosophy of science was also focused on the role of technologies in producing scientific knowledge, so I was writing about things like particle accelerators and the extent to which they could really be seen as instruments of observation. Questions like that. So technology and ethics were things I was interested in, but I wasn't putting them together at the time. But I taught this course, I developed a new version of the course for Science, Technology and Society, and I did a unit on the ethics of emerging technologies. And I started talking to my students about social media technologies and smartphones and how those were changing their moral and social lives. The students just grabbed onto it so urgently. It was like they were drowning in the ocean and I had just handed them a life raft. Just the opportunity to talk about the new kinds of conflicts, the new kinds of anxieties, the new kinds of effects that they were seeing in the mediation of their social lives by these technologies was so powerful for me to see in the classroom that I got excited about writing about it. And so I was looking for a way to think about the questions that they were raising for me in the classroom, and I went back to my graduate school roots: virtue ethics was one of my side interests. And I really started putting philosophy of technology and virtue ethics together, and wrote a paper on the ethics of social media and social networking technologies. The paper really just took off and got a lot of early interest, and that convinced me that there was something really powerful here that needed to be explored. That was really the turn that I made around 2006-2007 to focusing on the ethics of emerging technologies, starting with social media, but I also very quickly got into talking about the ethics of robotics and artificial intelligence, even before these things were commercially viable in the way they are today. And I never looked back. So that's what I've been doing ever since, and it just fits with my general passions and interests as a thinker. I'm a giant nerd, always was; I was programming in BASIC when I was in middle school, and I never lost my fascination with science, with science fiction, with the way that technologies change the world, and I've just kind of built that into my philosophical career.

Well, that's quite a background, and it certainly is one of the hot topics nowadays. Philosophers, certainly like yourself, have made a big impact and continue to have a big impact.
One of the things that drew me to this book was actually a conversation I had with a friend, not in academia or anything like that, about existential risk. He had been reading about it; there's a lot, you know, sort of in the air about the existential risk of AI. And when I pressed him on what exactly the risk is, he couldn't really say. And then I kind of looked around myself, and there are all these articles in various newspapers and other online outlets about the existential risk, but nobody ever says exactly what it is. You can imagine an AI or some sort of artificial intelligence making a whole electric grid crash or something that would be really, really impactful, but it's not going to destroy humanity, right? And I kept trying to find this and I actually never found it. And so I was really interested to see what you had to say about that. So I know the whole book is essentially about this issue of the existential risk of AI. So maybe to start us off, you can just say, in broad terms, and we'll get into more details: what do you see as the genuine existential risk of AI?

Yeah, thanks. Well, I think that's partly the point. In the media, and in the kind of tech narratives that are getting pushed out, we have an incredibly misleading narrative about what the existential risk is and in what sense it's existential. As you point out, I think the likelihood of AI tools leading to the annihilation of humanity as a species is incredibly slim, which many of the proponents of existential risk admit, but their point is, look, even if it's slim, if it's not zero, then we have to do something about it. But the problem is that they often suggest that the risk is greater than the existential risks that are posed to us by things like climate change, nuclear proliferation and so forth. And that's just, in my view, plainly false, and in the view of many, if not the majority, of AI researchers as well, who know the limitations of these technologies. A lot of the scenarios where AI destroys humanity involve these tools developing consciousness or sentience, developing malign intentions, collaborating with the aim of exterminating or overtaking humanity. And these, frankly, are just science fiction fantasies, which I do explore in the book, exposing the basis of those kinds of fantasies. On the other hand, it is true that these tools can be used in ways that are incredibly unsafe and dangerous and unpredictable, and things like the electrical grid scenario that you mentioned. If we were to integrate AI into critical infrastructure in a way that was negligent, we could very well see catastrophic impacts. It would be easy to avoid doing that, though. Right now, you know, it depends on how optimistic you are about the wisdom of the people who manage these systems and integrate technology into them. You may be more or less sanguine about the prospects of people not making incredibly stupid decisions about integrating AI with a power grid or letting it anywhere near nuclear launch codes and the like. There are fairly straightforward guardrails for protecting safety-critical infrastructure from unpredictable and poorly governed AI systems.
But the point is, if something bad happens, it'll be because humans were incredibly negligent and foolish in their use of AI, not because the AI developed any genocidal intentions toward humanity. So I try to draw that distinction in the book, but I do point out that there is an existential risk. It's not existential in the sense of human extermination; it's existential in the philosophical sense of something that threatens human meaning, that threatens human agency and the openness of the future. Right? So if you think about existentialism as a philosophy, it focuses on human freedom, on the open horizon of human potential, which is the corollary of that freedom, and it also focuses on human responsibility, and the ultimate inability to excuse our actions as caused by forces outside of our control. These are the themes of the book. The kind of narrative that we're being fed right now is, unfortunately, I think, undermining our awareness of our existential responsibility, undermining our understanding of ourselves, and undermining our understanding of the openness of the future, because you get a lot of narratives that we call techno-deterministic, which suggest that the future for us is already decided by the arc of technological innovation. The idea you'll often see is that AI is going to determine what the future looks like, and that humans are really just kind of passive riders at this point on a wave of technological innovation that will deposit us into some future that we didn't have the power to choose. And that's an incredibly dangerous lie, and it serves the interests of the people who still are very much behind the wheel in terms of where technology goes and where it takes us. So I try to puncture that illusion, and in the book I highlight the ways in which technology and AI are still very much human powers, under human control, and a matter of human responsibility. And that's actually a very optimistic message in one way, because it means that the future is not set; it means that we do not have to accept futures with AI that we don't want. But in order to avoid those futures, we have to actually recover our self-understanding and confidence in human agency and power, and that's something that the book is really dedicated to doing.

So you mentioned the narratives that we're being fed about this closed future, the basic inevitability of AI taking over many things, directing us and closing off our future. Can you say a bit more about those narratives?

Sure. Well, you see them occurring in a couple of different forms, right? So there's what's been called the AI doomer narrative. The doomers are those who are convinced that artificial general intelligence, or what's sometimes called superintelligence, will arise out of the AI developments that we're seeing today, and that it will simply overtake humanity and assume control of our lives and institutions, and likely destroy us or disempower us. And what's interesting is you'll see this narrative coming in many cases from people who are actually centrally involved in AI research and development, right? So you'll see these doomer narratives coming from people like Yoshua Bengio, or Geoffrey Hinton, or people like Elon Musk, who will paint this specter of AI as this incredible danger to humanity.
And yet it is not framed, in most cases, as something that depends upon human choices we make today. It's often framed subtly or explicitly as something that's already happening to us, and that we have very little in the way of opportunities to avert. So that narrative is one that I reject, and it's incredibly dangerous; in a sense, it wants people to be too scared of AI, and too intimidated by AI, to actually intervene in the systems that we have today. It also tends to put the risks out into a time horizon where action today seems irrelevant or moot. So you have what I talk about in the book extensively, what's called the longtermist movement, which is a way of shifting the horizon of concern about existential risk into the long-term future, and saying, look, we don't need to worry about the existential risks that we're facing today so much as we need to worry about risk from AI that, yes, might arise over a far longer time horizon but is somehow considered to be more urgent for us to focus on than very near-term risks like climate change. In the book I go into and try to take down the longtermist arguments. But in essence what they suggest is that there is this theoretical risk of AI that becomes so powerful that it destroys us all, and even if it's something that could take 100 or 500 years to happen, we should be devoting all of our resources and all of our attention now to that problem. So that's one narrative. But there's also the contrasting narrative, which is the techno-utopian narrative. It's the other extreme, but it likewise encourages human passivity, and it conveys a similarly deterministic view of the future. That's the narrative that AI is going to save us all, rather than kill us all. And the idea is that these technologies are going to be so beneficial that they're going to solve all our problems for us: they're going to solve climate change for us, or so it goes; AI is going to solve poverty, AI is going to create cures for all of our diseases, perhaps even for aging itself, and the idea is, look, we don't have to work on any of our most urgent human problems right now. We just need to invest more money in AI, which will someday deliver us from all of our problems. That's an incredibly dangerous and false narrative as well. So the book is really trying to challenge both ends of the spectrum, which are equally ungrounded in reality, and also dangerous to adopt as policy, or as a practical way of thinking about what AI is, what it can do for us and what it can't. The book is trying to bring us back to a realistic estimation of what AI tools are, how they work, what their risks are, what possibilities are plausible, what possibilities are simply fantasy, and which of these narratives are really, when you look through them, clearly designed to convince us that we are not in the driver's seat anymore, and that we shouldn't try to intervene in the future that AI will bring, or that if we do intervene, it will be only upon sort of theoretical long-term risks, not the near-term harms that we're already seeing from AI. So both of these narratives tell us: hey, don't worry about things like disinformation and deepfakes and AI-driven fraud. Don't worry about things like copyright violations and the degradation of the creative industries by these tools.
Don't worry about the use of AI to amplify racial and gender discrimination in automated decision making. Don't worry about any of these near-term risks of AI that are already harming people today. And the rationale for not worrying about those things differs between these two narratives. One says, don't worry about it, because all these bad effects will be outweighed soon by all the great world-saving effects of AI. And the other narrative says, don't worry about these near-term harms of AI that need to be governed right now, because really those are a distraction from the long-term risks of AI that we should be investing money in studying. What you need to understand about both of these narratives is that practically they amount to the same policy, which is: don't regulate us. Don't put any guardrails on what the technology is doing to our communities right now. Don't try to exercise any direct control on the shape of innovation in the present. And it's very clear who benefits from that kind of policy. And so, I think often, when someone tells a story, and that story wants you to do something or not do something, it's very important to ask who benefits from me following this advice. And that often tells you how trustworthy the narrator of this particular story is.

Right. Right. So follow the money.

Yeah, exactly.

Yeah, I was going to ask, well, it sounds like, and you do go through this in the book, the people who benefit from these narratives of inaction, basically, are the tech industry.

Right. Absolutely. And when you see who's behind those narratives, both the doomer and the utopian ones, it is largely the money within the corporate ecosystem that funds those narratives. Both the longtermist narratives and the utopian narratives are funded by many of the same people working in the same companies. So that tells us something too.

So, the real risk is a bit more complicated. I mean, one of the benefits of those narratives is that they're kind of simple, and they feed into, interestingly, you go into this as well, fears by people in charge of not being in charge, which has always struck me as exactly right, because I always sort of wondered, why is it that we're so afraid these machines are going to enslave us? Like, from the perspective of most people, gosh, there's an awful lot of stuff we're not in control of. So, you know, fantasies of not being in control are hardly fantasies.

Yeah, that's right. I mean, I think that's a really interesting point, because it encourages us to look away from all the ways in which we're already unfree, all the ways in which we're already being manipulated and controlled by powerful agencies and actors. And it is essentially like a shell game: it gets us to take our eyes off of the forces that are actually constraining our agency and inhibiting things like public accountability. It takes our eyes off of those forces, and asks us to instead look at this kind of phantom, this illusion of malign, superintelligent AI overlords that are somehow just shimmering over the horizon and about to emerge and become real. And the book is very much pointing out the way in which this is a manufactured illusion. Or in some cases, I think it's sincere.
I mean, I think one of the things that I touch on in the book is our power to become captivated by our own reflections, our own mirror images, which is a metaphor I use to explain how a particular subset of AI technologies work. And I think some of the people selling this narrative do believe it. They also benefit from it, but I think in many cases it probably is a sincere belief, because I think some people have really lost a grip on the distinction between human minds and the tools that we've built to mirror them. And they really think that somehow these tools are going to generate some species of superior mind. So I also try in the book to talk extensively about why that's not going to happen, at least not from the tools that we're building today.

So, yeah, the mirror metaphor. Could you go into that a bit, and then link that metaphor to what you see as the genuine existential risk?

Yeah, sure. So, again, we hear these narratives that describe tools like ChatGPT or Gemini as new kinds of intelligence or new kinds of minds that may threaten us or outcompete us or save us. But the book points out how these tools, and here again I want to make clear, I'm not talking about all AI technologies here. AI technologies are incredibly diverse, actually, and built on many different kinds of techniques. And the current narrative basically collapses all AI into generative AI tools like ChatGPT that are built on large language models. I really want to resist that collapse. There are lots of kinds of AI we could build, and do build, that don't fit this template. But the book is about the AI technologies that do fit that template. And these are in no way intelligent, despite the word, and they're not minds. What they are is mathematical mirrors of our minds. And so when we call it artificial intelligence, really what we're saying is that these are reflections of human intelligence in software outputs. So there's a distinction between a thing and its mirror image. Right? But it can be confusing. In fact, the whole notion of a hall of mirrors, or a funhouse mirror, is the potential to create a kind of confusion where you don't know which thing is the real one and which one is the reflection. And I think the proliferation of AI reflections of human intelligence that are being marketed right now is creating this kind of hall-of-mirrors phenomenon, where we're becoming increasingly confused about where the intelligence lies. So let me explain the mirror analogy here. Think about how mirrors work. You have a surface, and the surface of that mirror is what does the reflecting, along with the physical properties of that surface; for example, you usually will have a coating on the glass that increases its reflective power. And it's the properties, the physical properties, literally the physics of it, that determine how that glass mirror will reflect light, with what degrees of magnification or distortion or fidelity.
So, in the analogy, think about something like a large language model and how it works. There's an algorithm, what we call the learning algorithm, that builds that model, and that machine learning algorithm that gives a large language model its seemingly intelligent capabilities is just a set of mathematical instructions. That's all an algorithm is: a set of mathematical rules for taking one set of numbers and turning it into a different set of numbers. That's the only thing the algorithm does. And you can think of the mathematical properties of an algorithm as similar to the physical properties of a glass mirror surface: they determine exactly how the thing will reflect its input. So for a glass mirror, the input is light, but for a large language model, the input is our language data. And so you can think of the learning algorithm that builds an AI model as the thing that establishes the properties of the reflective surface, and what patterns in the light it will be able to reflect, and with what degrees of fidelity and magnification. So there's a very similar kind of mechanics going on here; it's just that in one case the mechanics are physical, involving the properties of light, and in the other the mechanics are strictly mathematical. So if you think about training a large language model, what we do is shine a lot of light onto that surface. That is, we feed it a ton of human language data. We take all of the digitized language data that we can grab up, and for a large platform company we happen to have a lot of it, and we shine all that light on the learning algorithm. That's the training process. And that changes the properties of the mirror surface, so that it can better reflect those images the way we want. So once we've trained the model, we've got a model that will generate the new outputs we want, in the form that we want, and these are just the mathematically reflected images of the data, the light. So if you think about what a large language model does: it reflects the patterns in our language data and transforms them in ways that are determined by the properties of the algorithm, by those mathematical instructions. So if you want different kinds of outputs, you change the algorithm a little bit so that it reflects the light a little bit differently. Okay. So what we've got at the end of the day is input of data, which is for the model just a set of numbers linked to other numbers, and then the reflection comes out in the same way. But since those numbers in the data correspond to things like words and sentences, the output comes out in that form as well. Or if you train it on pixels, or on sound input, the same process happens. So we have multimodal models now that can deal with text as well as audio and image data. It all works like a mirror. It's all just sets of numbers that are linked together in ways that the algorithm can reflect and replicate. So, at the end of the day, what you have is not a new mind you've created. What you've got is a mirror that will reflect the composite or amalgamated patterns of human minds that are carried in our language data. And so you don't have a system that thinks, you don't have a system that understands; what you have is a system that can pick up and replicate the patterns from our acts of thinking and understanding. So it's a second-order phenomenon.
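To make the mirror analogy concrete, here is a minimal, hypothetical sketch of the idea in code: a toy bigram model that absorbs the word-to-word patterns in a scrap of text and then reflects them back, without any grasp of what the words mean. The BigramMirror class and the training sentence are invented for illustration only, and are not drawn from the book or the interview; real large language models are vastly larger and architecturally different, but the point that training is just turning patterns of numbers into other numbers is the same.

```python
# A toy "mirror" of language: a bigram model that only reflects statistical
# patterns found in its training text. Hypothetical illustration only; not
# the architecture of ChatGPT or any real large language model.
import random
from collections import Counter, defaultdict


class BigramMirror:
    def __init__(self):
        # For each word, count which words tend to follow it in the data.
        self.next_counts = defaultdict(Counter)

    def train(self, text):
        # "Shining light on the mirror": absorb the word-to-word patterns.
        words = text.lower().split()
        for current, following in zip(words, words[1:]):
            self.next_counts[current][following] += 1

    def generate(self, start, length=10, seed=None):
        # Reflection: emit whatever tends to follow in the training data,
        # with a little randomness (the "stochastic" part). Nothing here
        # understands goats, boats, or rivers; it only replays patterns.
        rng = random.Random(seed)
        word, output = start.lower(), [start.lower()]
        for _ in range(length):
            candidates = self.next_counts.get(word)
            if not candidates:
                break
            choices, weights = zip(*candidates.items())
            word = rng.choices(choices, weights=weights, k=1)[0]
            output.append(word)
        return " ".join(output)


if __name__ == "__main__":
    mirror = BigramMirror()
    mirror.train("the man puts the goat in the boat and the man rows the boat across the river")
    # The output imitates the surface patterns of the training sentence only.
    print(mirror.generate("the", length=8, seed=0))
```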
They simply extract the numerical patterns found in our reasoning and understanding language, and then reflect them back to us. But they don't actually do any of the reasoning or understanding themselves. And a lot of the failures of generative AI tools to solve certain kinds of problems have exposed that. So I'll give you an example. There's a common logic puzzle; sometimes the objects involved change, but it's a man and a boat on the side of a river, and the man has the challenge of bringing safely across the river a carnivore, like a wolf or a fox, an herbivore, like a goat, and then some vegetable item, like a head of lettuce or a cabbage. The idea is that the man has to get these things across the river safely. But it can't be done in one step, because one thing will just eat the other, and you can't leave certain items alone on the shore of the river, because again, one thing will just eat the other. So in order to get them across safely the man has to make many trips, and he has to think about how to combine the different items in the trips. Now, that logic puzzle has been discussed in texts, in lectures, in logic blog posts, in exam questions, a million times, probably hundreds of millions of times. So there's a lot of information about that kind of puzzle in the language data that we have shined upon these large language models. And if you ask a large language model to solve that kind of problem, it will usually get it right. Not always, because the way these algorithms work is stochastic, meaning there's some randomness in the output, and there's always a chance it will get something wrong. But what researchers found, and this came out in April, it was really interesting, was that the leading generative AI models couldn't solve a much simpler version of the task. It was something like: a man is standing on the bank of a river, he has a boat and a goat, how does he get the goat safely across the river? Now, a six-year-old can answer that for you. It does not take advanced reasoning: the man puts the goat in the boat and goes across the river, done, one step. But these models were actually not giving that answer. They were giving these incredibly complicated answers involving many crossings and many physically impossible configurations of actions. The answers being given made no sense at all, even though any reasoner could see the simple answer right in front of them. And what these answers told you is that all these systems are doing is trying to replicate the most dominant mathematical pattern in their training data. And the most dominant pattern associated with this combination of words is the more complicated version of the problem. So it keeps trying to replicate that pattern of the complicated answer. And what that tells you is that it actually doesn't understand the problem even a little bit. It doesn't understand anything about goats and boats and rivers. It doesn't understand the physical situation that's being described. It only calculates the common patterns and combinations of these words and tries to reproduce them. And that's why it fails so badly. Now you can retrain these models very quickly, and I'm sure the companies already have, so that they can discover the new pattern of the simple puzzle and generate the right answer. But even when it gets the right answer, it's doing the same thing: it's still not understanding, it's just matching the pattern more correctly.
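As an illustration of that failure mode, here is a small, hypothetical sketch of answering by pattern dominance: a frequency-weighted keyword lookup that returns whatever canned solution dominates its stored data, regardless of the question actually asked. The corpus entries, the frequencies, and the function names are all invented for illustration; this is not how any real model is implemented, only a caricature of matching the dominant pattern instead of reasoning about goats, boats, and rivers.

```python
# A toy illustration of "answering by pattern dominance" rather than by
# reasoning. Hypothetical sketch only: a frequency-weighted keyword lookup,
# not how any real language model works.

CORPUS = [
    # (how often the pattern appears in the data, keywords, canned solution)
    (100_000,  # the classic wolf/goat/cabbage puzzle is discussed everywhere
     {"man", "wolf", "goat", "cabbage", "boat", "river"},
     "Take the goat over, return alone, take the wolf over, bring the goat "
     "back, take the cabbage over, return alone, take the goat over."),
    (10,  # the trivial one-item version is barely written about
     {"man", "goat", "boat", "river"},
     "Put the goat in the boat and row across once."),
]


def tokenize(text):
    # Lowercase and strip punctuation so words can match the stored keywords.
    return {word.strip(".,?!") for word in text.lower().split()}


def answer(question):
    words = tokenize(question)

    # Score = keyword overlap weighted by how common the pattern is in the
    # "training data". The actual physical situation never enters into it.
    def score(entry):
        frequency, keywords, _ = entry
        return len(words & keywords) * frequency

    return max(CORPUS, key=score)[2]


if __name__ == "__main__":
    # The question involves only a man, a goat, and a boat, but the dominant
    # pattern drags the answer toward the complicated multi-crossing solution.
    print(answer("A man is on a river bank with a boat and a goat. "
                 "How does he get the goat across the river?"))
```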
So it's really important to understand, then, that these tools don't think, don't understand, don't reason, but they reflect the patterns found in our thinking, understanding and reasoning. And it's very easy to confuse those two things, just as it's very easy to confuse yourself with your image in a kind of funhouse room.

Well, I mean, essentially it's a more complicated version of, but at bottom the same problem that John Searle raised long ago with the Chinese room; there's no...

It's related. Yeah.

And so my question is, I mean, these things are clearly so much more dangerous, I guess, and I'd like to hear you say exactly why they are more dangerous. So it's the same problem, but it's a different problem. In what ways is it different, and why is it so much more dangerous, more risky, in what ways exactly, at this point?

Sure. So when John Searle talked about the Chinese room example, he was thinking about traditional kinds of algorithms as sets of instructions or rules that weren't statistical, but that rather say: if you receive x input, turn it into y output. So one kind of technical difference here is that that's not how these algorithms work. These algorithms are statistical, rather than conventionally rule-driven. So they produce outputs that are less predictable, but they're just as lacking in understanding as the rule book used by the person in Searle's example, which was his point. So that's fine. But what Searle wasn't thinking about at all was what would happen if you actually did successfully create something that could speak in the language of understanding without understanding, something that could use the currency of meaning without grasping meaning, something that could create without actually having anything to express within, something that could give moral instructions without having the first clue what morality is. He never thought about what the implications of that kind of discovery would be, and that's where we are today. We've created these tools that can mirror or parrot, and if you go back to the stochastic parrots paper that Margaret Mitchell, Timnit Gebru, Emily Bender and others initially developed to illustrate a similar issue, I don't like the parrot analogy as much as the mirror analogy, but it intends to make the same point. When you've got something that can mimic human understanding and thinking without doing it, what it encourages is the surrender of the activity of thinking and understanding itself. It starts to look unnecessary. Why bother thinking, why bother reasoning, why bother trying to understand, if you have a machine that can give you the same outputs without doing any of that? And so the real existential danger is the message that we're increasingly hearing: that human intelligence and human understanding are increasingly superfluous, extraneous, unnecessary, inefficient. Much better to just have the kind of behavioristic mirror that will produce the same outputs, the same words, the same kinds of images, the same kinds of decisions, without any of the thinking or understanding going on behind it.
It's incredibly dangerous for multiple reasons. First of all, I'm a virtue ethicist, so the ability to think, the ability to reason, the ability to solve problems, these are things we need more than ever. The world is getting more complicated and harder for humanity to manage in a way that allows us to collectively flourish on this planet. And so we need more wisdom than we've ever had before in order to manage that well. But wisdom, and here I'm thinking of practical wisdom in the sense that Aristotle spoke of, phronesis, is something that requires experience and practice to develop. You only get good at practical reasoning by doing it. So any technology that encourages humans to quit the business of practical reasoning is potentially dooming the human family to being unable to reason in the future, unable to solve its own problems. And if the bulk of humanity becomes inexperienced with solving problems, inexperienced with thinking and reasoning, that is an absolute catastrophe for the human future. And that is an existential risk: it means that, with the real existential risks we're facing around climate change, around food insecurity, around all kinds of things that are going to threaten the human family in the coming century, we're at this very moment potentially diminishing our ability to respond intelligently to those challenges. So that's one. The second thing is that it devalues the currency of human thinking and reasoning. It disempowers those who have knowledge, it disempowers those who have understanding. And that threatens the whole rationale for things like public education, which is already being devalued by other forces, so this contributes to the devaluing of the education of humans. Why bother educating people if they don't need to do any thinking, if all the important decisions are going to be made by machines? The third danger, and this is something I talk about a lot in the book, is that these machines are only trained on historical data, because that's what we have. So all these machines know is the past, and what they know is the past of the decisions we've already made, the past that reflects the values we've already promoted, the past that reflects the dominant perspectives articulated by those who had the most power and voice, who could therefore get their decisions and their outputs and their ideas into the public record, which historically were mostly men, mostly English-speaking men or at least men in the global north, mostly the wealthy and elite. So these mirrors don't even reflect all of humanity. They reflect a very privileged subset of humanity, and they reflect the values and decisions and thinking patterns and assumptions of those populations. Now, one of the things I talk about in the book is that it's those patterns that got us into the pickle that we're in right now, with an economic order that enriches the few at the expense of the many, something that's incredibly unsustainable; an environmental order that is unsustainable, in the midst of the sixth mass extinction and amplifying climate change; and a context in which the value of education and human understanding is going down rather than up. So if the dominant values of the past are what got us where we are, does it really seem like a sound strategy to build machines that will automate the future by replicating those patterns and carrying them into policies for the future? And yet that's exactly what we're doing.
These technologies only know what we've already done, and they're built to replicate that and push that past into the future. And in the book I point out that's not innovation. That's the past consuming our potential. Human freedom is the ability to look backwards and say, you know what, that's not working anymore; let's do something different. And we've done that many times in the past, and it's hard work, but humans do it. But we won't do it anymore if we hand over the most critical decisions and policy choices to machines that are built to ensure that the future looks exactly like the past. And that's where we are right now, at that critical juncture, and we don't have to make that choice, but we have to wake up and realize that these tools will push us further along unsustainable paths unless we refuse to follow.

Well, I understand the book is supposed to be optimistic, but two things. One is that it seems to be part of our makeup that we will be cognitively lazy when we can be. So there's a...

But not always.

No, not always. But there is a sort of tendency that if you can offload something, particularly if it involves hard thinking, we'll do it. I mean, that just seems to be a kind of fact that we need to deal with.

Really? I want to push back on that.

I mean, again, we can see examples of that everywhere.

Yeah. How would the body of human literature that has been fed into these systems even exist if that were universally true?

Well, that's part two of what I was going to say.

Sure.

Yeah, I mean, there's always an exceptional, more or less smaller percentage of humanity who are not lazy, who do innovate in all kinds of different ways. And that's how, in fact, we have the technology that we're talking about. So what I guess I'm questioning here, in terms of the optimism, and I do want you to push back, is how exactly we are supposed to push back, given the forces, the dominant forces of the past, being pushed into the future. How exactly are we supposed to divert that, given that many people are too busy trying to put food on the table with a very low salary, or something like that?

And I want to explore that, because I think what you've just said actually shows two very different, not compatible ways of understanding the issue. The first way you framed it is in terms of laziness. But the second way you framed it is very different, and I think more accurate. People aren't, by and large, prevented from thinking and creating by laziness. They're generally prevented by an absence of opportunity and resources and access. And let me tell you why I have this perspective. I'm a first-generation college student. No one in my family, not my parents, not my grandparents, not my aunts and uncles, went to university. And yet I have a PhD and a professorial chair at a major world university. How did that happen? That's not supposed to happen. And in the book I talk about the fact that even ChatGPT says it's not supposed to happen, because someone tried to create my bio for a talk where I was being introduced, using ChatGPT, last year.
And ChatGPT said I graduated from UC Berkeley. Now, the reason it said that, and I didn't, is that it's actually a far more fitting profile for my current status. Where I actually graduated from was the Cal State University system, from a low-ranked commuter university that I went to, not because I didn't have the test scores to go to UC Berkeley, but because I didn't have the money, and because I had to work full time in order to take care of myself and keep a roof over my head. And that was only possible going to a university that had as many night classes as possible, which commuter universities do. They're cheap, and they have night classes. And so that's what I did. Now, why am I pointing this out? It's because something like the California public education system was a damn miracle in terms of granting wider access to the strata of society that allow people to contribute to the pursuit and generation of new knowledge, to having influence on policy, and to being able to shape the way that the world works. In most of the world, people like me don't get access to that. Now, had I grown up in a country or a state where there was no funding for or access to that kind of opportunity, I would never have been in a position to write this book. But it wouldn't have been because I was lazy. It wouldn't have been because I wasn't interested in knowledge. As you say, it would have been because I had only been given opportunities sufficient, at a maximum, to keep food on the table. And that would have been sort of the ceiling for me. So I don't think the truth is that humans are naturally intellectually lazy or uncreative. I don't think we actually do shrink away from thinking for ourselves. You might hear in what I'm saying a bit of Enlightenment nostalgia, but I do think the Enlightenment wasn't completely wrong about this: the power to think for yourself is actually a right of every human person, and given the right kinds of access and resources and opportunities, we tend to rise to the challenge of taking advantage of that. In fact, it's often, frankly, the most privileged in society who put the least effort into thinking for themselves, into being creative. And yet the rhetoric that we're given to swallow is very much the opposite: we're trained to think that it's the people without resources who are lazy and take the path of least resistance. And yet you look at the CEOs who are running most of the major multinational corporations, you look at the way they're running their companies, and it's actually not particularly innovative or creative. They're doing very predictable kinds of things, and they're chasing the low-hanging fruit, the immediate commercial value they can extract, even at the expense of making better products. We see it with Boeing. They're doing it with a lot of the AI companies, where they're building and releasing tools that actually make the content worse and less reliable. Why? Because they can make money doing it. There's nothing particularly innovative about that. Where the innovation is happening is actually in places that have a lot less power.
So I very much think that we are being fed a narrative that deliberately encourages us to relinquish our power, and not to demand the access and the resources that allow us all to think for ourselves, to create, and to envision new futures. And I think we've had some social experiments that have proven that, given the opportunity, people will seize the field of thought, of reasoning, of understanding, of creating. This is something humans actually want to do.

So, okay. I mean, that's a radical vision, and I don't say that in a bad way. But it's kind of sad that it is radical now, because it wasn't radical in the 1970s, right? It wasn't radical in the 1960s; it was pretty straightforward. But it's getting crushed. And we are in a particular position where we can say that, and there are an awful lot of people who don't believe it at all. So I don't want to assume that the way we think about these things is at all universal. And I think that's part of the issue here: how do you get there? I mean, this is essentially a political issue. What's the roadmap to get to the goal that you've clearly articulated, from where we are now?

Yeah, it's funny that you sort of frame it as a political issue, which it is. But I want to resist, and maybe you're not suggesting it, but I want to resist the idea that its being a political issue means that our beliefs, our philosophies, are not germane to the solution.

I don't, but go ahead.

Yeah, I want to think about political problems and what drives political change. Sometimes it's grassroots resistance, but it's often grassroots resistance that is animated by philosophies. And I want to think about, you know, how did we ever get away from a world in which women were not seen as persons, where women were property, where women's capabilities to reason were not even recognized? How did we ever get away from that world? There were a lot of people who started writing about the rights of women long before women had them. There were lots of people who started writing about the abolition of slavery long before abolition was a political movement. There were philosophers writing about the illegitimacy of absolute political power and absolute monarchy long before those monarchs were removed from power. So political change requires thinking first, and requires changes in understanding. So I resist the idea that we shouldn't be optimistic just because these are political problems and we clearly don't know how to solve our political problems; it's often by thinking in new ways that new political paths open for us. Now, I don't want to suggest that we wait to make political change until we've published a bunch of books by philosophers. That's not my point, and I think the thinking gets done in a lot more places now. This is one of the benefits of the internet and digital culture: the thinking that drives society can happen much faster, it can happen in more diverse venues, and it can be participated in by audiences that don't have to have a university post in order to legitimize their voice. I think that's a benefit that we should recognize and use.
But I do think that it is through thinking and speaking, in many different forms and through many different media, that political paths are first carved for us. And so if we need new political roads, I think the best thing we can do for ourselves is begin thinking and talking about what roads need to open. So I'm hopeful that this book is contributing to a much broader conversation that's going on right now around sustainability, around political power and wealth and inequality, and around who the technology ecosystem serves. But my optimism is very cautious. I certainly don't think the future is necessarily going to realize the potential to preserve and enlarge human freedom. That's not guaranteed; we actually have to put in the work. But I see in history many indications that we can do that work. We have done that work. We tend to drop the ball and not sustain the work once it starts, and I think about the sort of periods, as I mentioned, where it was taken for granted that most people had a right to an education, and that humans are more than just output machines. I think we can reclaim that, and I hope that we do.

Okay. Well, we could keep discussing, but I think we need to wrap up; we've been at it for a while, and I certainly encourage listeners to read the book for further discussion along these lines. It's fascinating stuff. So, to end the conversation: what's on your horizon right now? This book just came out.

Yeah.

So what are you working on in the short term?

I'm still in that period which I think of as something like a postnatal period; writing the book is in some ways like giving birth, and you need some recovery. So I'm taking some recovery this summer before I decide what's next. But one of the things that always sustains that recovery for me is reading fiction, and using that to enlarge my own moral and political imagination. I think that's one of the great benefits of all forms of art. So I have books that I am looking forward to reading this summer that I hope will do that. And we'll see what comes next, but I do think the question of what technology is for, and how we realign the political and economic forces that drive technology with sustainable human flourishing, will probably be my problem to talk about for the remainder of my life. I'd love to think that that problem will go away before I die, but no, it's not going to get solved in my lifetime. So I'm pretty sure that for the rest of my career I'll be writing about how we recover, and this is something I talk about at the end of the book, how we recover the humane part of technology. Technology begins in the earliest part of human history with ways of healing ourselves and one another, ways of building shelter for one another, ways of sustaining ourselves and our communities. That's what it's for. And I think we need to get back to that, and I hope I can spend the rest of my career urging people to reclaim that mantle and that right: technology that promotes human flourishing.

So, okay, well, we need to stop recording, but thank you again for a fascinating interview.

Thank you for inviting me. It's been a pleasure.

And have a great summer of fiction.

Thanks so much, looking forward to it.
[Music]