Archive.fm

POLITICO Tech

The AI pioneer with a warning for Gov. Gavin Newsom

Washington isn’t poised to pass major AI legislation. Ottawa isn’t either. So Canadian computer scientist Yoshua Bengio, one of the “godfathers” of artificial intelligence, is looking to Sacramento. He’s urging California Gov. Gavin Newsom to sign an AI safety bill by month’s end — and facing off against influential tech executives who want it killed. On today’s POLITICO Tech, Bengio explains why he thinks California needs to regulate now. Learn more about your ad choices. Visit megaphone.fm/adchoices

Broadcast on:
13 Sep 2024
Audio Format:
other


Millions of people already count on Zelle to send and receive money. And our impact doesn't stop there. More than 40 million Americans saw our safety education content in 2023. So when you're sending money to people you know and trust, count on Zelle. Terms and conditions apply.

Hey, welcome back to POLITICO Tech. Today's Friday, September 13th. I'm Steven Overly. The tech world is still waiting on California Governor Gavin Newsom to either sign or veto a big artificial intelligence bill. As you heard in our episode last Friday, he has until the end of the month. And there are big-name players in tech and politics trying to get him to kill it. But Yoshua Bengio is one of the AI luminaries pushing Newsom to sign it.

The debate is heavily biased by the influence of a few people with very strong financial interests. It's not a new situation.

Bengio is a professor at the University of Montreal and scientific director of the Montreal Institute for Learning Algorithms. And he's notably considered one of the godfathers of AI. The last time he was on the podcast, he told me he doesn't much care for the title. Still, it makes him one of the most influential voices pushing for Newsom to take action.

We have to look back at things like what happened with climate change, what happened with tobacco, how hard a battle it has been to counter these influences, so that we can be prepared for what's coming.

On the show today, Bengio tells me why he thinks it's imperative for California to act now, even if AI isn't quite living up to the hype. Here's our conversation.

Professor, welcome back to POLITICO Tech.

Thank you for having me.

Why don't we start with the California bill? Why do you think it's so important for the governor to sign that law?

You know, we need the bill to mitigate catastrophic risks that could happen in the next few years, and it's not clear that Congress is going to act in the next few years based on the recent record. So because we don't know the timeline for machines that could be used in very, very dangerous ways, and because it takes time for legislation like this to take effect, for the government and companies to work out how to deal with it, I think it's really important to get this through now and not, like, two years from now.

Got it. It sounds like it's important to put a marker down at least and to have some regulation started on this technology.

Yeah, and in fact one of the things I really like about this bill is that even before it takes effect, I think a year and a half from now, it will create an incentive for AI labs to do more research on how to make sure their systems don't get misused in catastrophic ways, or that we don't lose human control, because they will know that the law will come into effect and that they need to do more R&D to improve their safety procedures. Right now that isn't a significant fraction of their effort, but if you compare to other areas where companies handle dangerous things, like drugs or planes, a lot of what they do is really safety oriented. We need that kind of motivation for companies to do the right thing.

You know, some tech leaders have argued the bill is too onerous or that it will hamper innovation and the development of AI. You've said the opposite. You see this as kind of a light-touch and measured approach. Why do you see it that way?

Very simple: because it doesn't prescribe a particular way of doing safety and protecting the public. Instead, it creates incentives. So there are basically two kinds of incentives.
One is through transparency and the other is through liability and the concern of lawsuits. So transparency, by forcing companies to publish their safety plans and redacted results, is going to provide an incentive not to look bad in public if what they do doesn't look coherent with the current scientific state of the art in terms of safety. And liability has a similar effect, but through a different means, which is the concern that if they fall short of where the current science of safety is by a significant margin, that would make them liable if something really bad happens. And that means the legal department of these companies is going to work for the public, because it's going to push the engineers to also do the right thing, at least within the zone of what is considered good practice in terms of safety in the scientific community.

Obviously this law, it's a state law in California. What effect do you see it having on you in Canada and on AI development beyond the borders of California?

So first of all, there are lots of false things that have been said about this law. And one of them is that the companies would leave California. I mean, if they did, they would not only have to move their headquarters but also stop doing any business with California. California being the fifth-largest economy in the world, it's very doubtful they would do that: the cost of not doing business with California would be much, much higher than just following safety procedures they have already committed to and that most of them are already following. So it's not onerous, because they voluntarily accepted to do it. So I think this is all propaganda. And in addition, of course, because California is home to most of these companies, the bill would have an effect on the whole world.

- You mentioned earlier that Congress is not really poised to pass any sweeping AI law. We've also seen in Canada that national efforts to pass an AI law have stalled. Do you think that the will to regulate AI is losing momentum?

- I wouldn't put it that way. I think what is happening is a counterreaction by a very small minority of people who have very big financial interests in wanting to compete without any regulation in their way. And it's not just in tech that we see this; in every sector, companies are fighting regulation, even though at the end of the day we end up with regulation, business continues, we have good products, and the public is protected. It's not a new thing, but it's new in computing, where there is not a lot of regulation, and there are ideologies in the tech world that are libertarian and oppose any such thing. However, contrary to, for example, what some people said, including in your last episode about this subject, it's a very small number of people who are actually against this. The tech workers are in favor of the bill. The general population in California is in favor of the bill. And if you look at the people who are speaking up against it, even researchers who are speaking up against it, often it's because they have interests in startups or big tech. And it's difficult to get unbiased opinions about these things, because a majority of AI professors are actually involved and, based on what I've heard, have been receiving phone calls pressuring them to speak against the bill.

- How worried are you that those efforts are succeeding?
That this small minority of voices with a lot of financial interests are the ones that will ultimately win out with policymakers?

- Well, that worries me. And I think that is the reason why we need to have a healthy discussion in which the facts are laid down and we let reason speak.

- It occurs to me, I'm not a computer scientist, obviously. You are, and a lot of your work these days is focused on AI safety. What is kind of the frontier of AI safety right now? How do we actually make this technology safe for the general public?

- Well, there are things we already know how to do and that should be done. Currently, the state of the art in protecting the public is to detect the capabilities an AI system has that could lead to harm with the right queries, if humans wanted to use it, for example, for cyber attacks or for massive negative impacts on society. And the idea behind this is that if an AI doesn't have the capability, say, of very strong persuasion that could disturb our political system, or of creating a very dangerous cyber attack, then it doesn't matter who uses it or what queries are used; we know that it won't be able to do these bad things. So it is not checking for the intent of the AI to do something bad, it's just checking for ability, and that is a sufficient condition. That's the basic approach. It's not perfect, and we need more R&D into stronger protections and into designing AI systems that will be safe by construction, which we don't have right now. But at least it's something that can be done. The companies have already voluntarily committed to these things in multiple instances: the Bletchley Park and Seoul agreements, the White House commitments and so on. So they should do it, but now with a bit more of a legal incentive for doing it.

>> Got it, with the force of law behind it?

>> Exactly.

>> Because I was gonna say, this week the Biden administration proposed that the most powerful AI models should report information to it about things like cybersecurity vulnerabilities. And I guess it raises the question to me: does just requiring these companies to kind of report on themselves do enough to ensure the technology is safe?

>> Well, that's where the two main arms of the bill come in. So the bill asks them to report, the same thing they've accepted to do with the White House. But also, if those reports are not satisfying, in the sense that the scientific community thinks there are gaping holes or it's dangerous, then flags will be raised in public, or the Attorney General could sue them in case there are dangerous impacts. So that's the difference. Right now, the White House can only ask; it cannot really force them to do the right thing, to use the latest approaches that AI safety institutes around the world are putting out, like the UK AI Safety Institute and the US AI Safety Institute at NIST. So that's the advantage. We're going from voluntary commitments and reporting to the White House to something that's mandatory, but essentially it says the same thing.

(gentle music)

>> Millions of people already count on Zelle to send and receive money, and our impact doesn't stop there. We're helping users spot scams and fraud. More than 40 million Americans saw our safety education content in 2023, and countermeasures like real-time fraud monitoring and detection help protect users. So when you're sending money to people you know and trust, count on Zelle. Terms and conditions apply.
>> It's so interesting to me because obviously you are a scientist by background. You're considered one of the godfathers of AI, though I know you don't care for that title, but you're now making the case to lawmakers about how to make this technology safe. How do you make that case to other engineers, to other computer scientists, the folks who are actually developing this AI and might be pushing ahead with innovation without necessarily taking all of the risks into account?

>> Okay, there's a very simple argument. Scientists don't know when we will reach AGI, human-level competence in a sufficient number of areas for AI to be potentially dangerous with the wrong goals or in the wrong hands. And the horizon of when this could happen goes from a few years, as little as two or three, to decades or more. And there's no way to know. If we look at the trend, we're clearly going in this direction, but maybe things will saturate, maybe things will accelerate. There are reasons to think that at some point they could accelerate, when AI is able to help the AI researchers accelerate their work. And from the point of view of protecting the public, well, we have to make sure that the plausible scenarios, let's say in two or three years, are guarded against. And that means acting right now in terms of legislation and starting to put pressure on the companies to do the right thing. So that's a very simple argument. It's accepting our uncertainty. And the people who are saying, oh no, it's going to take decades or whatever, they don't have any strong argument to reassure me, or other scientists, that it is necessarily going to be that way. We just don't know.

Since we last spoke on the podcast, this public sentiment has kind of been growing that AI is not as smart as it has been hyped up to be, at least so far, right? We've seen some businesses saying it's not as transformative as was promised. And some of the shine, I guess, has kind of come off the technology in a way. I wonder what you make of that and whether that line of thinking might influence this debate around AI safety.

I have lots to say about this. First of all, you really have to take a historical perspective and look back not just at the last year, but at the last decade. And then you look at the trends of capabilities and competence on various tasks. It just keeps rising, often, on many benchmarks, going above human-level abilities. And it's normal for these advances not to go in a straight line either, right? In fact, if you look at these curves, they seem to be accelerating. Now, the current abilities are already worrisome. There was a recent study out of Switzerland, from EPFL, one of the leading universities in Europe, comparing GPT-4 and humans on their ability to persuade people to change their mind on arbitrary subjects. So the ability of persuasion. Guess who wins?

I'm gonna go AI.

Yes. Now, then the question is, how much stronger is the AI at persuasion through dialogue than humans? You can see that this is already dangerous from the point of view of deploying these kinds of abilities in social networks, with bad actors trying to use them to influence our democratic processes. And there are other studies, maybe more in the national security realm, about helping terrorists, for example, making it easier for terrorists to do some things. So it's already a problem.
If we look at the trajectory over a long horizon, we're inexorably going towards greater and greater capabilities; it's just not clear how fast. And so we have to start putting up guardrails. It's very clear.

Well, Professor, thanks for being here on POLITICO Tech.

My pleasure, thanks for having me.

That's all for today's POLITICO Tech. If you enjoy our show, be sure to subscribe on Apple, Spotify, or wherever you get your podcasts. And share your favorite episode with a friend or colleague. You can also join us for a live taping of POLITICO Tech this Tuesday in Washington, D.C., at POLITICO's AI and Tech Summit. Register today at politico.com/2024 AI Tech Summit. And for more tech news, subscribe to our newsletters, Digital Future Daily and Morning Tech. Our managing producer is Amy Reese. Our producer is Afrah Abdullah. And our editors are Steve Hoiser, Daniela Cheslow, and Louisa Savage. I'm Steven Overly. See you back here on Monday.

(gentle music)