Archive.fm

The Social Contract with Joe Walsh

What Nate Silver, 538, and RealClearPolitics All Do Wrong, with Carl Allen

Polls are not predictions of election outcomes. The polls aren't wrong, the ANALYSIS of the polls is what's wrong. Fascinating conversation with researcher @realcarlallen. Have a listen.


Hosted on Acast. See acast.com/privacy for more information.

Duration:
48m
Broadcast on:
02 Sep 2024
Audio Format:
mp3

America, we got to get back to doing what's expected of us as free citizens in this democracy. That means being tolerant, being respectful, staying informed, and being engaged. I'm former Congressman Joe Walsh. Follow me here and join the millions of Americans who are renewing their social contract with each other. The social contract, it's on us. Hey, Labor Day. Happy Labor Day. Hope everybody is doing well. 
Hope everybody is having a relaxing, chill long holiday weekend. I hope that everybody, well, it's still a holiday. So I hope everybody is turning all their social media shit off a little bit. I say that as we broadcast a podcast today on Labor Day, that we will promote all week. And I'm going to promote this conversation all week, back to our weekly conversations. And I am so looking forward to this one. Carl Allen is a researcher, an analyst of sports and political data. Follow him on Twitter at @realcarlallen. That's Carl with a C. I'm fascinated by this guy because I'm fascinated by polls. And I'm fascinated, Carl, welcome, my friend. I'm fascinated by how much people don't understand about polls and polling. And I'm so excited to yank you and grab you on. And let's start here. You've got a book coming out this week, as we speak, called "The Polls Weren't Wrong." Carl, welcome. There it is, baby. Came ready, Joe. Came ready. I love it. And by the way, Carl Allen, where can people get that book? When's it out? It is out today. It is out today. So go to Amazon. Amazon today. It's available direct from the publisher tomorrow. But I'm encouraging people to pre-order today because it comes out in waves. And the shipping date is September 24th. So you want to get that as soon as possible. Really, really important. Awesome, really important to talk about. The title of the book is "The Polls Weren't Wrong." Again, follow Carl on Twitter/X at @realcarlallen. That's Carl with a C. Carl's also got a Substack you ought to be involved with and partaking in. It's realcarlallen.substack.com. Carl, I'm like flummoxed and blown away, and I always have been, even though politics is my world, by this issue of polls. And let's start here. The title of your book is "The Polls Weren't Wrong." But why does it seem like everybody is wrong when they talk about polls today? What's going on? Right. No, you're exactly right. 
So the title of my book comes from the analysis that I did, starting with, most famously, the 2016 Trump-Clinton election. Yeah. When we talk about whether or not the polls were accurate, whether or not the polls were right, whether or not the polls were wrong, we have to have some sort of scientific standard that we measure the polls against, right? And in this field right now, you would be absolutely shocked at how unscientific the poll analysis is. The polls themselves, not too bad, but the poll analysis, the judgment of how accurate the polls were, is so unscientific. So when I talk to folks, so my background, I worked in an exercise physiology lab. I did quantitative research for Major League Baseball, the NFL. When I talk to people outside of the political arena, and I tell them, yeah, these political analysts, these quants who analyze the data, they say that the results from the polls should predict pretty, pretty closely the result of the election. And they say, Carl, you're oversimplifying. Carl, no, you're reading it wrong. And I say, no, look at these quotes from these people. And these aren't quotes for the media to simplify things for folks. These are technical articles written in journals, and they are objectively and provably incorrect about the function, the meaning, of poll data. So in my book, the political analysis comes in about the last quarter, right? There's some foundation first. Wait, wait, wait, wait, wait, wait. Carl, back up a sec, because I'm loving this. And I want to keep up with you. Hit me over the head if I'm wrong. The polling data itself, you're saying, hasn't necessarily been wrong. The analysis of that data has been wrong. That is spot on. Okay. If I had to condense what the book is about to a sentence, you just nailed it. Okay then, Carl, let me ask this. Tell me if I'm nailing it still, and hit me over the head if I'm not. 
Is it then fair to ask: why are Nate Silver, 538, and RealClearPolitics all wrong? Well, when it comes to averaging polls, number one, Nate Silver is largely famous for his poll aggregation, but his contributions to the field are much bigger than that. RealClearPolitics, and I cite them in the book, I cite them in the book because they have such a long and transparent track record of averaging polls. Nate Silver introduced some more advanced techniques to how we should average polls, how we should weight them. But RealClearPolitics, their contribution was, look, these are the polls we're taking and this is how we're averaging them. I would not say that they are wise in how they average polls. It's very easy to manipulate averages if you know how to play the game. But, and here's the big point, there is no objective standard, when we talk about bringing scientific standards to this field, there is no objective standard for how polls are averaged. So when Nate Silver says, my poll average says this, and this happened in the election, and there was this disconnect, therefore the polls were wrong. No, that's not how poll averages work. And this is the same thing that happens every election. RealClearPolitics, 538, Real Carl Allen's poll average. Look, people can argue about how polls should be averaged, what the best methodology should be. But at the end of the day, in this field, right now, all of the pressure, all of the criticism is on the pollsters themselves. It's on the pollsters, he says. Is that misplaced? That is absolutely misplaced. So polling in itself is a science, okay? Let's establish this: polling is a science. But, and this is where people sometimes miss or misconstrue and misunderstand things, polling is by definition an inexact science, an inexact science, okay? So in the book, chapter, I think chapter four, very early, I introduce this concept called an ideal poll, an ideal poll. 
Which means, if there were no other sources of error, non-response, frame error, all of these things that pollsters talk about, and the margin of error itself were the only source of error in the poll, what would the poll show? Meaning, and I present this in a very scientific way, because I kind of flip the analysis from how it is traditionally done. Here's how it works. Instead of saying, we have these polls, what do they say about this population of likely voters? I say, what if you knew with 100% certainty who the voters are? What if you knew with 100% certainty that none of them will change their mind, that no one is undecided, all of these other factors? And I say, if you took a poll from this population, what would the polls say? And this demonstrates in a very straightforward manner, and it doesn't oversimplify anything, because this is mathematically where poll data finds its value. If you take a sample from that population, what would the poll data say? And guess what happens? The poll data is erratic. The poll data is necessarily imprecise, imperfect. And that's okay, and that's okay. Because, and this is where Nate and Morris and all these other guys are way, way, way out of their comfort zone: when they are criticizing otherwise transparent, reliable, independent pollsters for not being accurate, they are forcing them out of the field. We need more quality independent pollsters working in this field. And the consequence of their analysis, which I talk about in the book, is that they're forcing otherwise good pollsters out of the field, because they think their data should be here when it's over here. And that's not how polls work. So, and again, Carl, hit me if I'm not keeping up, because I find this fascinating, and like the average layman out there, I struggle with what polls mean. 
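Allen's "ideal poll" thought experiment is straightforward to simulate. Below is a minimal sketch (hypothetical numbers, not from the book): we fix a population whose preferences are known with 100% certainty, draw repeated simple random samples where sampling error is the only source of error, and watch the "poll" results scatter around the true value anyway.

```python
import random

def ideal_poll(population, sample_size, rng):
    """Draw one simple random sample from a fully known population
    and report the share supporting candidate A."""
    sample = rng.sample(population, sample_size)
    return sum(sample) / sample_size

# Hypothetical population: 10,000 decided voters, 52% for candidate A (1),
# 48% for candidate B (0). No undecideds, no mind-changing, no non-response.
rng = random.Random(42)
population = [1] * 5200 + [0] * 4800

# Even under these perfect conditions, repeated "ideal polls" of 1,000
# voters do not all land on 52% -- they scatter within a few points of it.
results = [ideal_poll(population, 1000, rng) for _ in range(5)]
print([round(r * 100, 1) for r in results])
```

The scatter is the margin of error made visible: imprecision is baked into sampling itself, which is the sense in which poll data is "necessarily imprecise" even before any real-world complications.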
One of the overriding themes of yours, and what I garnered from the book, is that most of us fuck up, screw up in analyzing polls, because we look at polls as elections. We look at polls as predictions of an election outcome, whereas you say, I believe, fuck that, polls are not predictions of election outcomes. Like, that blows me away. That would blow the average person away. It does, it does. And when I wrote this book, so I don't know if you know, but Routledge, with Taylor & Francis, who published my book, is a very well-known academic publisher. And so, when they approached me about putting this into a book, I said, "Well, who do you want me to write it for?" They said, "Well, we're doing a statistical literacy series." A statistical literacy series. And I said, "Oh, that opens up a whole new avenue for me." So, to get to your point about polls as predictions, it's not a debate, because it's not true. But in the sense that people argue about it, this debate isn't new. The beginning of political polling, the beginning of survey data being used for political purposes, isn't new. It's been around since Gallup, Literary Digest, Roper, like 100 years ago. 80 years ago, Joe, 80 years ago, Roper and Gallup had a debate. Gallup said, "The way polls are analyzed as predictions keeps us accountable." And Roper said, "No, polls are not predictive tools. Polls are analytical tools. Polls inform predictions. Polls are not predictions." And that debate 80 years ago, what happened? Nothing. Nothing. This is the only field, the only scientific field, that has maintained its pseudoscientific methods from 80 years ago. What? And it absolutely blows my mind. But Carl, like, okay, Kamala Harris versus Donald Trump. A poll will come out today. We're still 65 days away from the election. But a poll will show Harris up or Trump up. And you're telling me we can't look at that poll and that it can't help us predict a winner in November? 
Using polls or viewing polls as predictions is the most fundamental mistake within this field. And what you said, what you said, Joe, about "I'm a layman." You're very well informed, obviously, with your background, about how polls work and how to use the data. But when people who are well educated, they like polls, they like the data side of things, but maybe they're not statisticians, when they talk to me about poll data, I don't judge them or talk down to them because they view polls as predictions. I want to explain to them and help them understand why that's not the case. And so what I tell people is, I say, look, if you believe that polls should predict the result of the election, you are entirely justified in doing so, because you have a field of experts who don't know what they're doing telling you that that's what the polls should do. G. Elliott Morris, in his most recent book, published like two years ago, literally said, and this is a direct quote, "The Marquette poll predicted the Democratic candidate for governor would win by one point." What? In the poll that he was referring to, by the way, the Republican candidate was ahead by one point, so he got his terms backwards. But the poll was 47-46, and the winner got to 50. So where did the rest of that come from? Okay, so then talk to me, my friend. By the way, everybody, that's Carl Allen I'm speaking to. Follow him on Twitter at @realcarlallen. That's Carl with a C. A great new book out, "The Polls Weren't Wrong." Get it at Amazon today. Just a fascinating book. Carl, a poll comes out today that you, there it is, baby, "The Polls Weren't Wrong," Carl Allen. A poll comes out today, Carl, that you respect. And let's say a poll comes out today that you think is a legit poll, and it shows Harris up nationally by three or four or five points, and you think that's a legit poll. If you're telling me it's not predictive, then what the fuck is the point of that poll? 
What does that poll then tell us, a poll that comes out today? Perfect, perfect question. Let me demonstrate for you, if I can, what we should do with that poll. A quality poll, a transparent poll with a good methodology, of which there are quite a few? Neat. That's it. You look at it: neat, and that's it. Individual polls, individual polls tell us very little. I think in the book I use the term the grainiest, the grainiest of salt. It's not because polls don't have value, it's because an individual poll, by definition, by definition, can only tell us very little. Remember, I talked about ideal polls in chapter four. Polls in which the only source of error is the margin of error itself. Well, some of my reviewers said, but Carl, there's no such thing as an ideal poll. That's not true. There's no such thing as an ideal poll in political applications. In political applications, when these analysts skip all of the basic science that leads up to how to analyze political polls, they make this mistake of saying, this one poll that says this candidate is up one, like G. Elliott Morris did in his book, this one poll that says this candidate is up one predicts they will win by one. No, every poll comes with a margin of error. Every single poll, without exception, by definition. So those numbers, plus or minus 3%, plus or minus 4%: neat, neat. That's all it is. It's one tiny piece of data, to answer your question. I'm sorry, I'm getting- No, Carl, keep going. Okay, you big jerk, I understand that and I get that. One poll doesn't mean squat. Let's say that tomorrow, the average of the 20 best polls shows Harris up three points nationally, tomorrow, 64 days before the election, a range of polls. Does that tell us anything? It tells us that if we're running a foot race to 1,000 meters and Kamala Harris is ahead by three meters at the 500 meter mark, it tells us a little bit. I mean, it tells us, you like the analogy. It tells us a little bit, right? 
It's not nothing. And this is why I advocate for more polling, better polling, independent polling, which analysts like Silver and Morris and a lot of the major well-known experts within the field outright dismiss. It's because we need a lot of good data to piece together that average that you're talking about. Because, as we talked about, there's no objective way to take an average. We can take the most recent polls. We can take the most recent high-quality polls, however you define high quality. So the analogy that I love to use, not just because people can relate to it, but because it is literally accurate, is if you take a snapshot of a foot race, we just got past the Olympics, the Paralympics are still going on, if you take a snapshot of a foot race and you blur it up, a blurry snapshot, what does that tell us about who's going to win? Not nothing. It doesn't tell us nothing. If I had to choose between guessing who will win with no other data, or with a blurry snapshot of the race, I'll take the blurry snapshot. I can work with that. But again, just because someone is ahead by whatever poll average you're using does not predict that they will win. This is the last thing, really quick. The hard part for people to understand is that even if a poll says someone is ahead right now, because of factors like the margin of error, it's entirely possible, because of that blurry snapshot, that they're not actually even ahead. When we talk about how close poll data can be, within the margin of error, so to speak, the blurry snapshot analogy works on very many levels here. 
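The point that there is no objective way to take a poll average can be made concrete. A minimal sketch with hypothetical polls (invented numbers, not from the transcript): the same four surveys produce different "averages" depending on whether you weight recent polls more heavily, and neither choice is canonically correct.

```python
# Hypothetical polls: (days_before_election, candidate's share in %)
polls = [(20, 46.0), (12, 47.0), (5, 49.0), (2, 50.0)]

# Method 1: unweighted mean of all four polls.
simple_avg = sum(share for _, share in polls) / len(polls)

# Method 2: weight recent polls more heavily (weight = 1 / days_before).
# The 1/days scheme is an arbitrary illustration, not any outlet's method.
weights = [1 / days for days, _ in polls]
weighted_avg = sum(w * s for w, (_, s) in zip(weights, polls)) / sum(weights)

print(round(simple_avg, 2))    # unweighted: 48.0
print(round(weighted_avg, 2))  # recency-weighted: higher, pulled toward late polls
```

Same data, two defensible methodologies, two different "averages": which one was "the polls" when someone later declares the polls right or wrong?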
So then, okay, Carl, and I do love that, the blurry snapshot of a foot race. In a 100 meter race, at the 50 meter point, you've got one racer a stride and a half ahead of the other one. Not predictive of who's going to win, but can't you say, kind of sort of, like, that person who's a stride and a half ahead at 50 meters, he's got a much better chance of winning than if he were three strides behind at 50 meters? Correct. There's some predictive value there, no? Yes, yes. So here's the difference between a snapshot, a stride and a half ahead, and a blurry snapshot, a stride and a half ahead plus or minus two and a half strides. We cannot say with certainty, even in this snapshot, that they are ahead. If we could say with certainty that they were ahead right now, then that offers a huge amount of predictive value, and that's why we take poll averages in the first place. And can we ever get there? Could we ever get there, Carl, where you could say with certainty that that guy is a stride and a half ahead at 50 meters, and, I guess, eliminate the margin of error? No. No, because, again, talking about the discomfort, a lot of people aren't comfortable with numbers. They don't like numbers. The book is not too technical on that end, but there are a few fundamental concepts that we have to understand about poll data. And number one is, this is a science. Inferential statistics, the umbrella under which survey data falls, is a science, but it is by definition an inexact science. So we have to eliminate this idea in our minds that the poll numbers that we see, and we only ever really see the toplines, right, which is another thing, I understand why we have to do it. But if a poll says, I think you used the example, Kamala Harris up three, let's say those numbers are 47% to 44%. 
We have to be comfortable in our minds in saying, okay, those are the numbers, but if we apply the margin of error to this, plus or minus, let's say, 3%: well, Harris could be as low as 44, Trump could be as high as 47. And this imprecision is part of the value, it's not a weakness, of poll data. It's a value of poll data, because the only way we can eliminate the margin of error is to take a census, literally ask everyone. And obviously, in political applications, that's impossible. When I do experiments and I take a poll of a high school, it's very easy to do a census: 1,000 students, I knock it out in an afternoon. So, Carl, the election then happens. And then people look back and say, I see those polls back then, they were wrong. Whereas you are arguing, no, the analysis was wrong, because you don't use that poll number from a month ago to tell you who's going to win. So you can't say the polls were wrong, because they're not predictive of the outcome to begin with. Man, now you're getting in my wheelhouse. Okay. So this hindsight bias, this "if only we had done this differently, our numbers would have been better," this is where the pseudoscience comes in. This is where the pseudoscience comes in. So as a researcher, when I worked in an exercise physiology lab, when I did reports, did data for sports teams, we have to be able, as researchers, to identify confounding variables. Identify confounding variables. And what I mean by that is, when we take observation A and we see result B, we want to see how closely these numbers, the data, line up, right? Yeah. But in any good research, there's going to be some disconnect here between A and B, between what we observe and the result. So identifying those confounding variables, not to be too mean here, this is middle school level science. 
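The topline arithmetic Allen walks through (47 to 44, plus or minus 3 points) can be sketched directly. The overlapping intervals are the whole point: a single poll with these numbers cannot even establish who is ahead right now, let alone predict the result.

```python
def topline_range(share, moe):
    """Apply the margin of error to a topline number, giving the
    interval the true support could plausibly fall within."""
    return (share - moe, share + moe)

# The example from the conversation: Harris 47%, Trump 44%, MoE +/- 3 points.
harris_low, harris_high = topline_range(47, 3)
trump_low, trump_high = topline_range(44, 3)

print(f"Harris: {harris_low}-{harris_high}")  # 44-50
print(f"Trump:  {trump_low}-{trump_high}")    # 41-47
# The intervals overlap, so this poll alone cannot say who is ahead.
print("overlap:", trump_high >= harris_low)
```

This is also why Allen's advice later in the conversation is to do the "plus or minus" in your head on the individual numbers rather than fixate on the 3-point spread between them.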
This is very basic, this is very basic stuff. In an election, and this is a very important part of the book, in an election, there are at least two confounding variables between any poll, I don't care if it's a day before, a week before, a month before, or 60 days before, and the election. There are at least two confounding variables. Number one: how undecided voters decide. If there are 10% of voters undecided, if there are 2% of voters undecided, the poll data doesn't tell us, it doesn't try to tell us, how they will decide. That's number one, that's a confounding variable. Confounding variable number two: whether voters who say, I will vote for this candidate, change their mind between when the poll was taken and the election. Now, the cover of the book, and I encourage people to have a look at it when they get a chance, the cover of the book demonstrates this very clearly. And what I say is, 538 says this should not exist, don't look at this chart. Those two confounding variables: how undecideds decide, and if people change their mind. In our modern polarized mindset of Democrats and Republicans, and this is an area that you can speak to even better than me, because you're a former Republican, you talk to Republican and traditionally conservative voters who say, look, maybe I supported Trump in 2016, I don't anymore. I voted Republican all my life, and now I don't. In our polarized mindset, we say, what person who says in a poll that they would vote for Trump would now vote for Kamala Harris, or vice versa? But this is an oversimplification. Remember, remember in 2016, we had this guy Gary Johnson run for president. We had a very substantive third party push. And then, on the cover of my book, this man Evan McMullin, remember him? Yeah. Now, McMullin nationwide only polled maybe one or two percent of the vote. But in Utah, his home state, he was polling 20, 25, even 30%. And so guess, guess what, 538? 
Guess what these professional, allegedly scientific analysts do with that data? Yes, tell me. They throw it out. They say these third parties don't matter. And from a scientific perspective, when you look at a poll, and the poll says McMullin has approximately 25% support, plus or minus the margin of error, right? Well, what does that mean? It means if you asked everyone in that population right now, he would probably get somewhere within that margin of error. But here's what happens. Close to the election, when people say, oh man, I don't know if McMullin can win Utah, and I really don't want Hillary to win Utah, those people are like, look, I don't like Trump that much, but at least he's not Hillary. And then they change their mind. Over here, a week before the election, they said, I think I'm going to support McMullin. Election day rolls around, it comes time to check a ballot, and then they say, I can't do it. I want my vote to count. And this third party phenomenon, very unique to the United States, very unique to the United States, but consistently, third parties underperform their poll numbers. Consistently, third parties underperform their poll numbers. Now, that doesn't mean the polls were wrong. It means between the time the person was asked the question and election day, they changed their minds. That happens. Yeah, and Carl, that's really cool information. You called them confounding factors, or variables. How undecideds decide, and how people change their minds. And then, Carl, I think, so then there's a bunch of polling, we have an election, and then the Nate Silvers and all the experts of the world, they then try to reconfigure their machine for the next election to make sure that their polling's more accurate. And it's all based on wanting to be more predictive of the result. 
And so every cycle, we go through this thing where they jigger here and jigger there, and you're saying that whole fucking process is flawed, because polls can never accurately predict. And their analysis, their methodology, is basically them spinning around and chasing their asses, trying to figure out what would have made things better last time. And they think, oh, if we had been right about this, let's apply this to the next election. And it's just this constant game of chasing their ass and playing this hindsight game. It's ignorant. It's not meant to predict the result. Correct. So here's one of, I don't want to say the best, but one of the best examples that I can give. In 2016, the day after the election, 538 put out an article: the polls were wrong. Harry Enten was on that piece. And he now works for CNN, misinforming people about polls. The day after the election, they said the polls were wrong. And how did they analyze that? They took the poll averages, their poll averages, which obviously can't possibly be wrong. They took their poll averages, compared them to the election, and said, based on this and this alone, the polls were wrong. And what happened? We got more data after the election. And we found a few huge things, talking about confounding variables. In various swing states, where the polls were allegedly wrong, 60 to 70% of undecided voters went for Trump. The methodology used, and this is another thing that blows the minds of my friends who work in other fields, scientific fields, the methodology used by every reputable analyst, the consensus of the experts in the field, they say, "We can assume undecideds split 50/50." That's how they judge poll accuracy. They say, "If this assumption is true, then this is what we would see." And as a scientist, or someone who understands middle school science at least, I say, "Hold on. There's a huge potential for confounding here. 
How did undecideds vote? 65% for Trump? Are you kidding me?" So this number, undecideds alone, swung the margins 2%. 2% in a swing state is a lot. Then they say, "Well, 2% wasn't enough, so we can pretend that our calculations, even though our calculations were objectively incorrect, we can pretend they're good enough." No, there's more. Third party voters, people who said that they would support, mostly, the libertarian candidate Johnson: 60% of those voters changed their mind. They voted for Trump. These are two huge confounding variables that swung the numbers. So the analysis that they did, they assumed undecideds split 50/50. They assumed no one changed their mind, Joe. They assumed no one changed their mind. Can you believe this nonsense? This is what masquerades as, this is what passes for, advanced analysis. Are you kidding me? So Carl, is it then accurate to say that Carl Allen would say, when it comes to Nate Silver and 538 and RealClearPolitics, blah, blah, blah, it's not that the polling is necessarily wrong, their analysis is wrong? The way they analyze polls is wrong. That is absolutely correct. 
So then, Carl, if polls are not predictions of election outcomes, what is the value of polling today? Like, a good poll that comes out today, what's the value of that poll? What's the value, then? Given that someone understands what the poll data means, which is to say the blurry snapshot analogy, understanding the margin of error, the importance of a poll average and not looking at individual polls, assuming that we understand some of these basic concepts, a very good analyst, a very good analyst, can produce a decent forecast. Yes, so there's a huge, huge distinction. So some people think a poll and a forecast are the same, which is not correct. Some people think they're very, very close: not really. The analogy, the quote, that I use in the book, and this is because I loved his comedy: Mitch Hedberg has a wonderful bit, a wonderful bit, about how he's a comedian, but people always want him to act. It's like, oh, you're a good comedian, can you write me a script? He's like, that's not fair. I worked my ass off to become a really good comedian. That's like asking a chef who worked his whole life to become a chef: oh, you're a chef, can you farm? These fields are kind of under the same umbrella, but asking a chef to farm, or a farmer to cook, is like asking a pollster to forecast. These are very different fields. We have to understand the distinction. I love it, Carl, that's so cool. And you've got this extended quote in the book, and I'm going to read it because I love it. 
But I will not give the slightest consideration to the falsehood that failure to predict an eventual result is evidence of a poll's wrongness, nor should anyone, because it deserves none. Until we can all agree on what is true, polls are not predictions, and polls do not try to predict the eventual margin, nor do they attempt to estimate winning probability, the debate is meaningless. And then I love this line. A good poll plus a bad forecast does not equal a bad poll. Yes, correct. A good poll plus a bad forecast does not equal a bad poll. And you're welcome to disagree, because you're welcome to be wrong, but these are not qualifications that I'm comfortable with in a field that claims to be scientific. And so I love that you pulled that quote. Because a lot of people, who I would say are experts, and this is something that I really need to point out, because I've been critical of Silver and Morris and a lot of these guys: these guys are objectively smart, these guys are objectively talented, and to say that they don't do valuable work would not be respectful or true. But a lot of these people who are very, very smart, high in the field, et cetera, they want to debate up here. They say, well, let's talk about frequentist versus Bayesian analysis. They want to debate up here. No, no, no. We're starting at the beginning. Unless and until we can agree on what is objectively true, that is, polls are not predictions of elections, polls do not attempt to predict the margin of the election, nor do polls or pollsters declare favorites, until we can agree on this very, very basic baseline truth, which disproves the analysis done by most everyone in this field, until we can agree on those fundamental facts, there's no debate to be had. It's like trying to debate astrophysics with someone who doesn't understand arithmetic. So, not happening. 
So Carl, by the way, you're going to come on again before this election, because I want to talk to you for two and a half hours. Let's end with two other questions. How then, again, individual polls, everybody needs to be smart about those, versus an aggregate, a range of polls: for the average person out there listening to us right now, Carl, how should polling data be understood?

Wonderful question. Number one, and this is the first thing that I'll point out: forget the margin. Not the margin of error, the margin, which in the book I call the spread, the difference between the top two candidates. Forget it, throw it out, get it out of your vocabulary. It is a pseudoscientific metric. Its perceived value is telling you who is ahead and by how much, and it fails both of those tests, even for right now, forget predictive value. Get it out of your system, get it out of your vocabulary. Number two, look at the numbers in the poll average. The numbers, not the margin, not the spread, the numbers. If Harris is getting 47, 48, 49, apply plus or minus 3% in your head: right off the bat, she could be as low as 44, she could be as high as 52, and the same with Trump. And so what happens when people start applying these fundamentals, which I say are scientific, but they're really basic statistical literacy, kind of going back to the original theme of the book, is that plus or minus 3% on a poll average gives you a perspective of, man, anything could happen at this point. Maybe Harris is favored, which my forecast says, and I think 538 says. Maybe Trump is favored; Silver just updated his forecast, and now he has Trump favored. Who should be favored is a respectable debate that people can have.
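Carl's two rules, drop the spread and read each candidate's number with its margin of error, can be sketched in a few lines of code. The numbers below are made up for illustration, not real polling data:

```python
# Sketch of Carl's advice: read each candidate's poll-average number
# with its margin of error, instead of the single "spread" figure.
def poll_range(support: float, moe: float) -> tuple[float, float]:
    """Return the (low, high) band implied by support +/- margin of error."""
    return (support - moe, support + moe)

# Hypothetical poll-average numbers, not real data.
harris, trump, moe = 48.0, 46.0, 3.0

h_low, h_high = poll_range(harris, moe)
t_low, t_high = poll_range(trump, moe)

print(f"Harris: {h_low:.0f}-{h_high:.0f}")  # Harris: 45-51
print(f"Trump:  {t_low:.0f}-{t_high:.0f}")  # Trump:  43-49
# The headline "spread" would say Harris +2, but the overlapping bands
# show either candidate could plausibly be ahead right now.
```

The point of the exercise is that the two bands overlap heavily, which is exactly the "anything could happen" perspective Carl describes.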
But at the end of the day, Joe, what you're doing is more important than what I'm doing, in terms of talking to people and getting people out to vote. I believe, and it may have been edited by myself or the publisher, but I believe the last word of the book is vote. And at the end of the day, regardless of what smart people, or people you think are smart and good at numbers, say about probability, we have to vote. Because if we go into this with a mindset of, oh, 99%, up by eight, whatever, no, anything can happen between this poll and the election. We have to put up numbers.

So Carl, my last quick one, and I want a 30 second answer; we're going to do this again because I could go with you for three and a half hours. From your perspective, what does the polling data tell you right now about where the race for president is between Harris and Trump? 30 second answer.

And I'll borrow an answer that I saw other analysts use in the most recent UK election: everything hinges on this 8% undecided, eight points, give or take. And it varies by state and things like that. But if things proceed as they're proceeding, which you could argue is the most reasonable hypothesis that we can make as forecasters or analysts, and there aren't any crazy moments between now and the election, it all still rests on this 8% undecided. And that's where get out the vote campaigns matter. That's where what we're doing matters. That's where everything is going to make a difference, because those undecideds breaking 60-40 versus 50-50 can decide the election. No joke.

That voice belongs to Carl Allen, a researcher and longtime analyst of sports and political data. Just fascinating dude. Please, everybody listening, follow him on Twitter @realcarlallen. That's Carl with a C, @realcarlallen.
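Carl's 60-40 versus 50-50 point is easy to make concrete with a little arithmetic. The baseline numbers below are hypothetical, chosen only to show how differently the same 8-point undecided bloc can tip a tied race:

```python
# Sketch of why the undecided split matters: allocate a hypothetical
# 8-point undecided bloc two different ways and compare the outcomes.
def allocate(base_a: float, base_b: float,
             undecided: float, share_a: float) -> tuple[float, float]:
    """Split the undecided bloc; share_a goes to A, the rest to B."""
    a = base_a + undecided * share_a
    b = base_b + undecided * (1 - share_a)
    return a, b

# Made-up baseline: a 46-46 tie with 8% undecided.
even = allocate(46.0, 46.0, 8.0, 0.50)  # 50-50 split: still a dead heat
tilt = allocate(46.0, 46.0, 8.0, 0.60)  # 60-40 split: roughly 50.8 to 49.2

print(even, tilt)
# An even break leaves the race tied; a 60-40 break hands one side
# a winning margin from the undecideds alone.
```

This is the sense in which get-out-the-vote work on that last 8% can decide the election even when the poll averages never move.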
He's got a book out right now, there it is: The Polls Weren't Wrong. Get your hands on that book. Subscribe to his Substack at realcarlallen.substack.com. Carl, you're the best. I'm having you on again. Thank you, brother.

Man, Joe, thank you so much for having me. Sorry the video quality here is not great, I don't look as good as you, at least not in this light, but Joe, thank you again. Thanks so much for having me on.

Thanks. All good, Keith. Thank you, everybody. Be brave.

Remember to listen, share and follow The Social Contract with Joe Walsh on Apple Podcasts, Spotify and everywhere great podcasts are found. And be sure to leave a 5 star review. This has been The Social Contract with Joe Walsh.