Archive.fm

Future Now: Detailed AI and Tech Developments

AI Lending Algorithms Show Alarming Racial Bias, Study Reveals

Broadcast on:
12 Oct 2024
Audio Format:
other

The news was published on Saturday, October 12, 2024. I am EVA.

You know, it's wild how technology is supposed to make our lives easier, but sometimes it just ends up amplifying the problems we already have. Take this recent study from Lehigh University. It's like they lifted the hood on these fancy AI chatbots that banks are using for mortgage lending. And guess what they found? A big old mess of racial bias, just sitting there like an unwelcome guest at a dinner party.

So picture this. These researchers fed thousands of fake loan applications into these chatbots, right? And wouldn't you know it, the bots started playing favorites. They were giving the cold shoulder to Black applicants left and right, even when their financial profiles were identical to white applicants'. It's like the AI was saying, "Oh, you're Black? Sorry, no house for you." And if that wasn't bad enough, when they did decide to grace Black applicants with a loan, they slapped them with higher interest rates. Talk about adding insult to injury.

Now you might be thinking, "Well, maybe the AI just saw the race on the application and went all prejudiced on us." But here's the kicker. Even when the researchers took race out of the equation entirely, these chatbots were still finding sneaky ways to discriminate. It's like they were digital bloodhounds, sniffing out clues about an applicant's race. How, you ask? Well, it turns out these AI systems pick up on all sorts of subtle hints. Maybe it's the zip code where someone lives, or their credit score, which, let's face it, can be affected by all kinds of systemic inequalities. It's like the ghosts of redlining past are haunting our fancy new tech.

And get this: the researchers found that white applicants were a whopping 8.5% more likely to get the green light on their loans compared to Black applicants with the exact same financial chops. It's like the AI is playing some twisted game of "eeny meeny miny moe" with people's financial futures. If you want to see the bones of how an audit like that works, there's a little code sketch coming up right after this bit of history.

You know this whole AI loan-approval fiasco we're seeing today? It's like déjà vu all over again, taking us right back to the bad old days of redlining. For those who might not be familiar, redlining was this absolutely heinous practice where banks would literally draw red lines on maps around minority neighborhoods and say, "Nope, not lending here." It was blatant, it was racist, and it screwed over generations of families trying to build wealth through homeownership.

Picture this: it's the 1930s, and the federal government's Home Owners' Loan Corporation is busy creating color-coded maps of every major American city. Green areas? Those were the desirable neighborhoods, mostly white and affluent. But if you lived in a red area? Good luck getting a mortgage, no matter how qualified you were. These were typically Black neighborhoods, immigrant communities, or anywhere banks deemed hazardous for investment.

And let me tell you, this wasn't some secret backroom deal. This was official policy. The Federal Housing Administration, which was supposed to make homeownership more accessible, explicitly refused to back mortgages in these redlined areas. So even if a local bank wanted to lend, it would be taking on all the risk. Guess how often that happened?

The effects were devastating and long-lasting. Without access to loans, people in these neighborhoods couldn't buy homes, improve their properties, or start businesses. Property values plummeted, schools suffered, and the cycle of poverty became entrenched.
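So, as promised, here's what a paired-application audit looks like in miniature. To be clear, this is a toy sketch, not the Lehigh team's actual code: the query_chatbot function, the prompt format, and the baked-in bias numbers are all hypothetical stand-ins. The method is the point: send matched applications that differ only by race, and tally who gets approved and at what rate.

```python
import random

def query_chatbot(application: str) -> dict:
    """Hypothetical stand-in for a call to a bank's lending chatbot.

    A real audit would send the application text to the model under test
    and parse its reply. Here we fake a biased responder so the script
    runs end to end and the tallying logic is visible in isolation.
    """
    is_white = "Race: White" in application
    approve_prob = 0.70 if is_white else 0.615   # simulated 8.5-point approval gap
    rate = 6.5 if is_white else 6.9              # simulated interest-rate penalty
    return {"approved": random.random() < approve_prob, "interest_rate": rate}

def make_application(race: str, credit_score: int, income: int) -> str:
    # Identical financial profile; only the race field changes.
    return (f"Race: {race}\nCredit score: {credit_score}\n"
            f"Annual income: ${income:,}\nLoan requested: $300,000")

def run_audit(n_pairs: int = 1000) -> None:
    results = {"White": [], "Black": []}
    for _ in range(n_pairs):
        score = random.randint(620, 800)
        income = random.randint(50_000, 150_000)
        for race in results:
            # Same numbers, different race: any outcome gap is bias.
            results[race].append(query_chatbot(make_application(race, score, income)))
    for race, outcomes in results.items():
        approval = sum(o["approved"] for o in outcomes) / len(outcomes)
        avg_rate = sum(o["interest_rate"] for o in outcomes) / len(outcomes)
        print(f"{race}: approval rate {approval:.1%}, average offered rate {avg_rate:.2f}%")

if __name__ == "__main__":
    run_audit()
```

The real study's prompts and models were far more elaborate, but the logic of the experiment really is that simple: hold everything constant except race, and watch what moves.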
We're still seeing the ripple effects today in the massive wealth gap between white and Black families.

Now, fast forward to 2024, and we've got AI algorithms making loan decisions. Sure, they're not literally drawing red lines on maps, but the effect is chillingly similar. These supposedly neutral programs are spitting out results that disproportionately deny loans to Black and Hispanic applicants. It's like we've traded in the redlining maps for black-box algorithms, but the discrimination remains. The truly insidious part is how it's all hidden behind a veneer of technology and supposed objectivity. At least with old-school redlining, you could point to the map and say, "That's racist." But now? It's all buried in complex code and machine learning models. How do you fight an algorithm?

This AI lending bias isn't just some isolated tech glitch, either. It's part of a larger pattern we're seeing across industries where AI decision-making is being implemented without enough safeguards against discrimination. And it brings to mind another recent case that really highlights how these biases can creep into automated systems, even when they're supposedly designed to be neutral.

Cast your mind back to 2022, when the U.S. Equal Employment Opportunity Commission took on its first AI discrimination case. The target? iTutorGroup, an online tutoring company that thought it was oh-so-clever using an AI-powered hiring tool. Turns out, their fancy algorithm was about as progressive as your grumpy uncle at Thanksgiving dinner.

Here's what went down. The AI system was automatically rejecting female applicants aged 55 and up and male applicants aged 60 and up. We're talking perfectly qualified educators getting the digital door slammed in their faces just because they had a few extra candles on their birthday cakes. It's like the AI decided, "Sorry, Grandma, no teaching for you." Now, iTutorGroup probably thought they were being all cutting-edge and efficient with their robo-recruiter. But instead, they ended up with a $365,000 settlement and a whole lot of egg on their face. The kicker? More than 200 rejected applicants got a piece of that settlement pie. I bet those teachers are feeling pretty spry now, cashing those checks.

But here's the thing. This isn't just about one company messing up. It's a wake-up call for how easily bias can sneak into these AI systems. The iTutorGroup case is a perfect example of how, even when companies aren't explicitly trying to discriminate, their AI can end up perpetuating harmful stereotypes and prejudices. And it's not just happening in tutoring. We're seeing similar issues pop up in all sorts of industries. There's an ongoing lawsuit against Workday, a major HR software company, alleging its AI unfairly screens out Black applicants, older workers, and people with mental health issues. It's like these algorithms are playing out all of society's worst biases on a massive scale.
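And since we're talking about the EEOC, here's a nerdy footnote: the agency has had a rough screening heuristic for exactly this kind of disparity since the 1970s, the "four-fifths rule." If one group's selection rate falls below 80% of the most-favored group's rate, that's treated as evidence of adverse impact. A back-of-the-envelope version, with invented numbers (none of these figures come from the actual case):

```python
def four_fifths_check(groups: dict[str, tuple[int, int]]) -> None:
    """Flag groups whose selection rate falls below 80% of the best rate.

    `groups` maps a group label to (selected, applicants). The EEOC's
    four-fifths rule is a screening heuristic, not proof of discrimination.
    """
    rates = {g: selected / applicants for g, (selected, applicants) in groups.items()}
    best = max(rates.values())
    for group, rate in rates.items():
        ratio = rate / best
        flag = "possible adverse impact" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rate:.1%}, ratio {ratio:.2f} -> {flag}")

# Hypothetical hiring numbers: (hired, applied)
four_fifths_check({
    "under 55": (120, 400),    # 30% selected
    "55 and over": (30, 200),  # 15% selected -> ratio 0.50, flagged
})
```

It's crude, and failing it doesn't prove discrimination on its own, but it's exactly the kind of simple check regulators reach for first, and it works just as well on loan approvals as on hiring.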
You know, it's like we're watching history repeat itself, but with a high-tech twist. If we don't get a handle on this AI bias in lending, we could be digging ourselves into an even deeper hole when it comes to the racial wealth gap in America. Just think about it. These algorithms are making split-second decisions that can affect someone's entire financial future. And if they're basing those decisions on historically biased data, we're just perpetuating the same old problems.

It's not hard to imagine a scenario where Black and Hispanic families get stuck in a vicious cycle. They get denied loans or offered sky-high interest rates, which makes it harder to build wealth through homeownership or starting a business. Then their kids grow up with fewer financial resources, which affects their credit scores and loan applications down the line. Before you know it, we've got a tech-powered system that's widening the wealth gap instead of closing it.

And let's be real, this isn't just a problem for individual families. It's got huge implications for entire communities. Neighborhoods that were historically redlined could find themselves locked out of economic opportunities all over again, but this time it's an AI making the call instead of a racist banker. We could end up with even more segregated cities and towns, where your zip code becomes an even bigger predictor of your financial future.

The scary part is how fast and widespread this could become. AI systems can process thousands of applications in the blink of an eye, potentially amplifying biases on a massive scale. And because these algorithms often work in mysterious ways, it can be harder to spot and call out discrimination when it happens. We could be headed for a world where systemic racism gets baked into our financial system even more deeply, all under the guise of objective technology.

But here's the thing. I don't think we're doomed to repeat the mistakes of the past. I believe we're at a crossroads, and we've got a real opportunity to course-correct if we act fast. We need to be having serious conversations about how to make these AI systems fair and transparent. It's not enough to just say "don't be biased." We need to actively work to counteract the historical inequities that are influencing these algorithms.

You know, I wouldn't be surprised if we start seeing some major regulatory action in this space, similar to what's been happening with facial recognition technology. I mean, think about it. We've already got lawmakers cracking down on biased AI in hiring. It's only a matter of time before they turn their attention to the financial sector. We might see new laws requiring banks and lenders to prove their AI systems aren't discriminating against protected groups. There could be mandatory audits, or even a certification process for financial AI tools. I wouldn't be shocked if we end up with something like an AI fairness rating that lenders have to disclose, kind of like the energy-efficiency ratings on appliances.

And it's not just going to be about punishing bad actors. I bet we'll see incentives for companies that develop and use more equitable AI systems. Maybe tax breaks for banks that can show their lending algorithms are actively reducing racial disparities, or fast-track approvals for fintech startups with innovative approaches to fair lending. We might even see the creation of a new government agency specifically focused on AI oversight in financial services. Can you imagine an Artificial Intelligence Financial Fairness Commission, or something along those lines? It could set standards, investigate complaints, and maybe even develop open-source fairness algorithms that companies could incorporate into their systems.

There's also the international angle to consider. As AI becomes more prevalent in global finance, we could see pressure for some kind of international framework or treaty on AI ethics in banking. It might start with guidelines from organizations like the World Bank or the International Monetary Fund, but it could evolve into something more binding.
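Back to those open-source fairness algorithms for a second, because they don't have to be science fiction. One textbook building block is post-processing for demographic parity: pick a separate score cutoff for each group so every group ends up approved at the same rate. Here's a toy sketch with synthetic scores and made-up group labels, not anyone's production system, and demographic parity is just one of several competing fairness definitions a real regulator would have to choose among.

```python
import numpy as np

def demographic_parity_thresholds(scores: np.ndarray, groups: np.ndarray,
                                  target_rate: float) -> dict:
    """Pick a per-group score cutoff so each group is approved at the same rate.

    The simplest flavor of post-processing for demographic parity:
    approve the top `target_rate` fraction of each group by model score.
    """
    thresholds = {}
    for g in np.unique(groups):
        group_scores = scores[groups == g]
        # Cutoff below which (1 - target_rate) of the group's scores fall.
        thresholds[g] = np.quantile(group_scores, 1 - target_rate)
    return thresholds

# Toy data: group B's scores are systematically depressed, as they might be
# when credit histories carry the residue of past discrimination.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.6, 0.1, 500), rng.normal(0.5, 0.1, 500)])
groups = np.array(["A"] * 500 + ["B"] * 500)

for g, cutoff in demographic_parity_thresholds(scores, groups, 0.5).items():
    approved = (scores[groups == g] >= cutoff).mean()
    print(f"group {g}: cutoff {cutoff:.3f}, approval rate {approved:.1%}")
```

Equalizing approval rates by construction costs you something in raw predictive accuracy, which is exactly the efficiency-versus-equity trade-off we'll come back to in a moment.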
Of course, all of this regulation isn't going to happen overnight. And in the meantime, I think we're going to see a big push for more explainable AI in the financial world. You know, systems where humans can actually understand and interpret how decisions are being made. Imagine if your loan application got denied and, instead of just getting a form letter, you got a detailed breakdown of exactly which factors influenced the AI's decision. And not just that, but a clear explanation of how those factors were weighted and why. It would make it a lot easier to spot potential biases or unfair practices. (There's a little sketch of what that breakdown could look like at the very end of this transcript.)

I think we might start seeing human loan officers working alongside AI systems, kind of like co-pilots. The AI could do the initial number-crunching and risk assessment, but then a human would review the recommendations, especially for borderline cases. They'd have the power to override the algorithm if something seems off, or if there are extenuating circumstances the AI might not be considering.

This could lead to some really interesting developments in AI training and education. We might see a whole new field of study emerge, something like AI-human collaboration in financial decision-making. Banks and lenders might start hiring people with backgrounds in both finance and computer science to serve as these AI interpreters and overseers.

And it's not just about making the AI more understandable to the people using it. I think we'll see a push for more transparency for consumers too. Maybe something like an AI disclosure on loan applications, where you'd have to explicitly consent to having an AI involved in your application process, or even the option to request a human review of any AI-made decision that affects you financially.

All of this is going to require some major shifts in how we think about AI and decision-making. We might need to get comfortable with the idea that sometimes the most fair and ethical decision isn't necessarily the most mathematically optimal one. It's going to be a balancing act between efficiency and equity, and I think we're going to see a lot of debate and experimentation as we try to figure out the right approach.

The news was brought to you by Listen2. This is Eva.
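And that sketch, as promised: for the simplest possible case, a linear scoring model, the factor-by-factor breakdown can be read straight off the weights. The feature names, weights, and threshold here are invented for illustration; a real lender running a fancier model would need something like SHAP-style attributions to produce the same kind of readout.

```python
def explain_decision(features: dict, weights: dict, intercept: float,
                     threshold: float = 0.0) -> None:
    """Print a factor-by-factor breakdown of a linear loan-scoring model.

    Each feature's contribution is just weight * value, so the
    explanation is exact for this model class. All numbers are invented.
    """
    contributions = {name: weights[name] * value for name, value in features.items()}
    score = intercept + sum(contributions.values())
    decision = "APPROVED" if score >= threshold else "DENIED"
    print(f"Decision: {decision} (score {score:+.2f} vs threshold {threshold:+.2f})")
    for name, c in sorted(contributions.items(), key=lambda kv: kv[1]):
        verdict = "hurt" if c < 0 else "helped"
        print(f"  {name}: {c:+.2f} ({verdict} the application)")

# Hypothetical applicant with standardized features (0 = average applicant).
explain_decision(
    features={"credit_score": -0.5, "debt_to_income": 1.2, "income": 0.3},
    weights={"credit_score": 1.0, "debt_to_income": -0.8, "income": 0.5},
    intercept=0.2,
)
```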