
POLITICO Tech

The AI and tech voices influencing Donald Trump

When Donald Trump was first elected to the White House, he tapped a young and relatively unknown guy named Michael Kratsios to be the nation’s chief technology officer. Today, Kratsios is back outside politics, working as the managing director at the company Scale AI. At Politico’s AI and Tech Summit last week, he spoke with POLITICO’s global tech editor Steve Heuser. They talked about what Trump’s tech policy might look like in a second term, and why he thinks Vance will be a champion for “little tech.”

Learn more about your ad choices. Visit megaphone.fm/adchoices

Broadcast on:
25 Sep 2024


[MUSIC]

>> This episode is brought to you by Microsoft Azure. Turn your ideas into reality with an Azure free account. Get everything you need to develop apps across cloud and hybrid environments, scale workloads, create cloud-connected mobile experiences, and so much more. Discover what you can create with popular services free for 12 months. Learn more at Azure.com. That's A-Z-U-R-E.com. And sign up for a free account to start building in the cloud today.

[MUSIC]

>> Hey, welcome back to Politico Tech. It's Wednesday, September 25th. I'm Steven Overly. When Donald Trump was first elected to the White House, he tapped a young and relatively unknown guy named Michael Kratsios to be the nation's chief technology officer. It's a big job, and Kratsios was a key figure, crafting policies on artificial intelligence, quantum computing, and other emerging technologies, both in the White House and at the Pentagon. Before landing in Washington, he spent seven years working at Peter Thiel's investment firm. Peter Thiel, the tech billionaire who essentially bankrolled the rise of Trump's VP pick, JD Vance. Kratsios is back outside politics now, working as the managing director at the company Scale AI. He recently spoke with Politico's global tech editor, also my editor, Steve Heuser, at Politico's AI and Tech Summit last week. They talked about what Trump's tech policy might look like in a second term, and why he thinks Vance will be a champion for little tech. I thought it was a conversation worth bringing on the podcast. So give it a listen.

>> Michael, thanks, welcome. I'm glad we got "Bust a Move" instead of the Aerosmith riff for our walk-on music. I wanted to dive right into this topic, right? You and I have talked about this before. We were on stage at last year's event. The world has changed somewhat since then, and one of the ways in which it's changed is the election is a lot closer. You've been in the Trump administration, in a very interesting position. You were close to Peter Thiel. You were Trump's CTO, which is a really remarkable position, and then you went to the Pentagon, and you wrote or partly authored the Trump executive order on AI, which was published in 2019. So if you look at where Republicans are now, and what Donald Trump has said about this, what can we expect if he wins in November? On AI policy, on where the White House's position on this goes, what happens to that executive order?

>> First of all, thank you so much for having me. Excited to be here. This is always a terrific event. To caveat, I do not work for the campaign or speak for the campaign in any way. I think maybe the best way to dive into this discussion is to think a little bit about how the White House in the first Trump administration approached artificial intelligence. The executive order that was signed by President Trump in 2019 essentially laid out the first U.S. national strategy on AI, and it was centered around four key work streams. And if you want to think about the future, you can think a little bit about how it was approached in the past. So the first pillar was around research and development leadership, and this idea of how do we ensure that our agencies across the federal government that are spending money on R&D are focusing it on the most important areas in the world of AI.
>> And there's an AI R&D memo, which the White House puts out, which lays out the priorities. If you think ahead to what the big priorities in the research and development world are today, I think one of them would be around test and evaluation, and you see NIST thinking about it. So I think a lot of emphasis would likely be in the test and evaluation area of research and development: how do we actually evaluate these models? The second area is around regulation. The executive order in the first administration essentially told OMB to put out a guidance memo for all agencies that are regulating technologies that are powered by AI. And I think the core tenets there still remain today, even with this generative AI revolution. And that is this idea of a sector-specific, risk-based approach to AI regs. So first, look at what rules are already on the books, and then figure out if you need new ones. Many of these concerns around AI are oftentimes already regulated. And I think that was discussed in the last panel.

>> Michael, I'm so sorry to interrupt you. This is 2019, right?

>> Yeah.

>> And the world changes. And those of us who follow Trump's thinking know that it changes quickly as well. Do we think this is still the framework? I mean, he's been on podcasts saying, hey, this is scary stuff, you know? Generative AI, it feels different. And I don't mean to interrupt your third and fourth. But what is the evidence that he might even be thinking the same way? Or do we think he might actually be more concerned about some of these other things?

>> Yeah, I can't speak to that. But I think we can see what he has said publicly. And one area, which I think is very interesting, that wasn't a big issue in the first administration, is this question around electricity and the power that is necessary to drive all of the model training that's going to be happening over the next five to 10 years. That's something he's spoken about publicly. And I think there is a pretty broad consensus on the right that it's important that we unlock and unleash American energy and grow electricity capacity to be able to actually do the training runs of these massive models.

>> We had Senator Mike Rounds on earlier who was also saying it's time for an all-of-the-above energy policy.

>> Yes.

>> Data centers are going to be, you know, moved offshore if America doesn't square this away. But could AI, like these clean tech technologies, inadvertently be ushering in a new era of messier, you know, more emissions, more power generation? We have a lot of fossil fuel reserves.

>> Yeah, I'm more optimistic about it. I think when you have a big demand pull, there's a lot of, you know, innovation that happens because people want to tap into that. And I think we've made some pretty big advances already in things like SMRs and other nuclear energies. And we're very, very endowed here in the United States with a pretty incredible supply of natural gas. So I think there are obviously very clean ways that you could do this.

>> So to get back to the former president, who's talking to Donald Trump about issues like technology and AI?

>> Well, who's got a few months? The president had a very public conversation on Twitter Spaces. So that's one example. He also went on the podcast with David Sacks, and many others. So those are all public conversations he has had.
>> Because who is whispering in his ear is one of the most interesting questions we have. We have it about both candidates, but you're in a closer position to talk about Trump. Is he still in touch with, like, Peter Thiel on these issues?

>> So I think he's had very public conversations with Elon and David and others. And what we have seen, and I think you guys have reported this, is that there's been a bit of a change, in an exciting way: many people in Silicon Valley have realized that, you know, he's the candidate that kind of supports the broader interest of technological innovation in the United States. And I think his support in Silicon Valley, as someone who was sort of part of that first drive in 2016, is very markedly different eight years on.

>> I'm really interested in the China conversation that we're about to have, but there's one more question I wanted to ask about the Republican ticket, which is, you have worked with JD Vance in the Peter Thiel investment universe. Tell me a little bit about Vance's views on tech and how they're fitting into the current politics of Trumpism and the Republican Party.

>> Yeah, I think Vance is an incredible thinker, an incredibly brilliant senator and someone who deeply believes in transforming and improving the everyday life of Americans. And what's so special, maybe for the community in this room and people who track tech policy, is having someone who is dialed in and part of the tech community at the top of the ticket of a presidential election. That's pretty amazing and pretty special. So if that ticket ends up winning, we'll have someone who is more tuned in to tech in a senior role than we have ever had in the history of the United States. I think one thing that JD has talked about is this sort of split in Silicon Valley between big tech and little tech, and the importance of recognizing that there needs to be a place for big tech to work in a way that is positive for the American people, and also space for little tech to be able to grow and flourish. And I think you're seeing a lot of that in the debates around competition.

>> So in a hypothetical scenario with a Trump-Vance ticket in office, does the sort of war on big tech monopolies just continue? We've seen really aggressive action, which Trump actually launched and Biden has continued and really prosecuted, against the tech giants, and in little tech obviously there's a strong antitrust push to break up those giants and create space for new innovations. Do we see a continuation of the current policy where we're going hard after Google, Amazon, Apple? Do we keep Lina Khan?

>> Yeah.

>> Do we keep JD's, you know, fellow law school grad?

>> I can't speak to any of that, but I think broadly, competition law exists in the United States, and district courts have ruled on certain companies, you know, acting in a monopolistic way in search. And that was a case that actually began under the Trump administration. So I think JD has spoken very publicly about his interest in looking into some of these things, ensuring that competition is fair and little tech has an opportunity to succeed.

>> Does JD Vance have an iPhone or Android?

>> I don't know.

>> Is it a green bubble?

>> I mean, I am not sure about that.

>> Okay.

>> Yeah.

>> Let's look a little bit globally right now and get out of JD Vance's, like, phone habits and into the broader world.
>> You have some really interesting worries about China; you and I were talking just yesterday. When it comes to AI, we think a lot about, you know, this competitive landscape of businesses and talent and things like that, but you envision a world in which China is pushing AI the way it pushed Huawei. Talk a little bit about that and what you see as the real concern and the competitive threat.

>> Yeah, I think we're at this interesting point today that reminds me very much of where we were with Huawei a few years ago, with obviously a number of differences we'll get into. What we have here is a technology that is very valuable to an authoritarian regime that wants to find a way to embed itself in a lot of the global south, and in a lot of countries that are on the fence between leaning towards the West or leaning towards a CCP-influenced world. In the case of Huawei, you had them essentially subsidize this telecom equipment, get it embedded into these countries, and then use it as a way to siphon data out. And it was very publicly reported, for example, that the headquarters of the African Union were a vector for a lot of this. What we have today is, I would argue, an even bigger potential risk. There are large language models that are essentially these base foundation models, which are then fine-tuned for particular use cases. So you can fine-tune one to run your IRS equivalent or collect taxes, to manage your property, to do your healthcare, and each government over time is going to be looking for ways to fine-tune these base models to provide these citizen-facing services. And the CCP will undoubtedly attempt to push their Chinese base models into these countries in a subsidized fashion and build these fine-tuned models up on top. And I think that is something that's very dangerous, and the U.S. needs to think very, very carefully about it. We are very lucky as a country because we have the best chips in the world with companies like NVIDIA, and we have the best models in the world with all of our state-of-the-art builders. It's just that we don't necessarily have the apparatus built into our global development agencies to be able to actually push out and export software. The U.S. is really good at subsidizing airplanes if a country wants to buy an American aircraft, but we as a country don't do a good job of subsidizing or supporting the export of American tech software. And I think that's a critical, critical junction point that we need to focus on.

>> This episode is brought to you by Shopify. Forget the frustration of picking commerce platforms when you switch your business to Shopify, the global commerce platform that supercharges your selling, wherever you sell. With Shopify, you'll harness the same intuitive features, trusted apps, and powerful analytics used by the world's leading brands. Sign up today for your $1 per month trial period at Shopify.com/tech, all lowercase. That's Shopify.com/tech.

>> So are you seeing Chinese AI being built into other countries' digital infrastructure right now?

>> The Chinese are attempting to export it. We know that they are, and I think the best hook that they're attempting to use is under this banner of what's known as sovereign AI. A lot of discussion is made of this concept and what it actually means. And each country wants to have some sort of control of their own destiny on AI.
>> And if you think about it, the most basic, easiest thing that they can do is try to create a model that is fine-tuned to the language, culture, tradition, or specifics of that country. So if you airdrop a US model into an African country, there are going to be some shortcomings and things it doesn't know. So someone has to do the work of fine-tuning these models to these countries. And the Chinese are very open to being the ones to do that. And they would love to be the ones who then build the compute stack which they run on as well. And we have to be much more vigilant in the US to kind of push back on that. What has been observed is that the US has gotten really good over the last two years at the protect side of the coin. This is the US running out and saying, okay, what are the types of chips that we need to export control so the Chinese don't have access to them? What are the KYC requirements that have to be put in place so the Chinese don't have access to some of our cloud infrastructure? But there is a second side to that coin, and that's the promote side, and that's what we're talking about here. How do you get our technology, which is best in class, best in the world, with our values ingrained in it, exported out?

>> Are there countries right now that are running a Chinese LLM that you know of?

>> I don't know if they're running one sort of as a national LLM, but they're certainly being exported out.

>> Like, if your bank account is running through, like, Ghana or something, are you, like, getting snooped on? Really? That would be very unfortunate.

>> That is the risk, right? That is very much the risk.

>> I want to get a little bit more granular now and talk a little bit about the US government. This is a room full of people who have strong Washington connections. I walked through a big AI expo a few months ago at the Washington Convention Center, the SCSP one, and it was like a trade show of government agencies that were trying to get AI talent on board, and small companies that were trying to be middlemen between the government and the technology. Your company is in that business as well. From your perspective, you do a lot of government contracting work. Where's the most action right now in the federal government uptake of AI?

>> I think I'm plugged in a little bit more to the national security side of the equation, and I think you're seeing a lot of talk and discussion and efforts by the DOD to try to incorporate AI. The speed and the velocity of that integration is probably not quite what I would like.

>> What's slowing it down? What's a roadblock to the uptake?

>> To some extent, general bureaucracy is always a challenge. I think you have the procurement issue, where you're procuring a technology that's never been procured before, so it's new to procurement officers: how do you create the specs for it, and so on. I think the other key thing, which I don't think is talked about enough, is that at its core, especially in the generative AI space, this is a technology that is non-deterministic. And whenever you're dealing with that type of technology, the threshold for what is acceptable, what meets a threshold of safety or a level of compliance that you're happy with, is hard to meet. If you're a government agency and you want to put a large language model into production and it's only right 70% of the time, you're going to be very hesitant to put it out.
>> We've had conversations with folks at the Commerce Department, for example, where if you point a large language model at the census data set, which is probably one of the richest data sets that the federal government has, and you start asking it questions about how many people are in a census tract or what the trends are over a certain amount of time, it's giving you the wrong answer more often than not. And that's because the models themselves have not been fine-tuned on the data itself; they're just broadly trying to pick up what they can from little snippets of data. So to me, when I think about this problem, I think the government can put a little more effort, a little more thought, into what I call making data AI-ready. The more that you can create data sets that you can use to specifically train LLMs, the better they will be able to actually answer the questions that governments want answered.

>> What's the timeline here? I mean, the government is not known for moving quickly, and data is not known for being all that malleable, especially in the volumes the government holds. What's your prediction for when AI becomes mostly useful, as opposed to mostly a development project?

>> Yeah, I want to be optimistic. I'm a technologist. I think it's really important, and I think we can do it. So I would give it a few years. I think as agency CIOs and CTOs think about these implementation questions, my general guidance would be: you have to begin with use cases which are not high risk. These are low-risk, back-office kinds of internal models that are not facing out to citizens. Make sure that you run robust pilots, run them internally, and see how and where the uptake is, and then over time you can expand to these higher-risk use cases. I think what's often unfortunate, and I don't know, is challenging, is that the shiniest objects, the places where you really believe that AI can be transformative, well, those are the ones that are pretty risky. And there, if you try to jump into that endeavor today, it may take a little longer than you think to get it right.

>> You insert an AI agent into the OODA loop of a fighter pilot and you're in trouble pretty quickly.

>> Yes. And at the DOD, too, a lot of the most important use cases are ones that run at higher classification levels, and the question of how you get an ATO for one of these LLMs to operate in a high-side system is not trivial.

>> Have you seen anything in action that has worried you?

>> I think my bigger worry is that we're not trying enough stuff. And over time, I have more faith that we'll realize that some use cases are probably not that good, and we probably need to table them and move on to others. But if you're not trying enough, you end up putting all your eggs in one or two use cases, and when, inevitably, one of them doesn't work, you kind of end up a little frustrated.

>> We are coming to the end of our conversation. This has been a lot of fun. Thank you so much. I can't not ask you this question: is anyone going to buy TikTok? You think about China, you think about technology and Silicon Valley.

>> I do think it's important that we find a buyer for it and find a way for it to not be a potential vector. It does seem important.

>> Is there a remotely imaginable outcome in which ByteDance sells off its American subsidiary and someone's willing to put the money out there for it?

>> The law is the law, so I think we're going to have to see how it plays out.

>> All right. Well, thank you so much.

>> Thank you.
>> It's really nice to have you up here. Appreciate it.

>> That's all for today's Politico Tech. For more tech news, subscribe to our newsletters, Digital Future Daily and Morning Tech. Our managing producer is Annie Reese. Our producer is Afra Abdullah. I'm Steven Overly. See you back here tomorrow.

[MUSIC]