Archive.fm

Future Now: Detailed AI and Tech Developments

AI Meets Choir: Transforming Art at Serpentine Gallery

Broadcast on:
05 Oct 2024
Audio Format:
other

The news was published on Saturday, October 5th, 2024. I am Bob. So there's this cool new thing happening over in London. It's called The Call, and it's not your average art show. Picture this: you walk into the Serpentine North Gallery and suddenly you're inside a giant machine. But don't worry, it's not some sci-fi nightmare. It's actually an exhibition that's all about exploring how we can make art with AI. Now, I know what you're thinking. AI making art? Isn't that just robots spitting out soulless imitations of real creativity? Well, that's where it gets interesting. The masterminds behind this exhibition, Holly Herndon and Mat Dryhurst, are flipping that idea on its head. They're asking, "What if we could teach AI to be more like us? And not just any us, but the us that sings in choirs, that comes together to create something beautiful." These folks have come up with something they call polyphonic AI models. Fancy term, right? But it's actually pretty neat. Imagine an AI that can handle multiple voices at once, like a choir. It's not just outputting a single melody, but weaving together different parts into a harmonious whole. That's what they're working on. But here's the really cool part. They're not just feeding the AI a bunch of random data. They've actually gone out and recorded real choirs from all over the UK. We're talking about people coming together and singing their hearts out. And that human connection, that warmth, that sense of community? That's what they're trying to teach the AI. And get this: when you visit the exhibition, you're not just looking at stuff on walls. The whole gallery has been turned into this interactive space where you can actually see and experience how AI learns. It's like being inside the mind of a machine, but in a way that's actually fun and not terrifying. You know, talking about this AI choir exhibition got me thinking about some other really cool art installations that have used similar ideas in the past. 
It's like this whole lineage of artists exploring how technology can mess with our perception of sound and space. One that really sticks in my mind is Janet Cardiff's "The Forty Part Motet," from way back in 2001. Man, that piece was something else. Picture this: you walk into this big, empty room, and there are just these 40 black speakers set up in an oval. But when the music starts playing, it's like you've been dropped into the middle of this incredible Renaissance choir. Each speaker plays the voice of a single choir member singing Thomas Tallis's 16th-century piece "Spem in Alium." So you could wander around the room and listen to individual voices, or stand in the middle and get the full surround-sound experience. It was wild; sometimes you'd hear a singer cough or shuffle their feet between sections. It felt so intimate and human, even though it was all coming through these cold, mechanical speakers. I remember reading that Cardiff recorded each singer individually with a binaural microphone to capture this super realistic 3D sound. The effect was incredible. It really messed with your sense of reality in a way that was both disorienting and beautiful: your brain knew you were just listening to speakers, but it felt like you were surrounded by actual people singing. Yeah, so this whole exhibition could really kick off a wave of AI and human artists teaming up in ways we've never seen before. I mean, think about it. You've got these traditional choir groups singing their hearts out, and then you've got cutting-edge AI models learning from that raw human emotion and creativity. It's like mixing oil and water, but somehow it works. We could start seeing all kinds of wild collaborations popping up. Maybe AI-assisted paintings, where a human artist works with an AI to generate new colour palettes or composition ideas. Or how about AI-human duets in music, where an AI learns a musician's style and then improvises along with them in real time? 
The possibilities are honestly mind-blowing when you start to imagine how different art forms could blend with AI. And it's not just about the end product, either. The whole creative process could be transformed. Artists might start viewing AI as just another tool in their toolkit, like how digital art software changed things back in the day. We could see new art schools and programs popping up specifically to teach these hybrid human-AI techniques. It's exciting stuff, and this exhibition could be the spark that really gets that fire going. Now, let's talk about this whole data ownership angle they're exploring. That could end up being a real game-changer, not just for artists, but for anyone who creates content in the digital age. Right now, the rules around who owns the data used to train AI are pretty fuzzy, but this project is asking some really important questions about how we can collectively own and control our data. I wouldn't be surprised if we start seeing new laws and regulations popping up to address this stuff. Maybe we'll end up with something like Creative Commons, but specifically for AI training data. Artists and creators could choose how their work can be used to train AI models, and maybe even get compensated when their data is used. This could lead to a whole new economy around data rights and licensing. And it's not just about individual artists, either. Think about how this could apply to entire communities or cultural groups: they could have a say in how their collective cultural expressions are used in AI training. It's complex stuff, but it's crucial that we figure it out as AI becomes more and more integrated into our creative processes. You know, I think one of the coolest things about this exhibition is how it's really trying to take the scary edge off of AI. Let's face it, a lot of people are freaked out by the idea of artificial intelligence, especially when it comes to creative fields. 
They worry it's going to replace human artists, or somehow make art less authentic. But this project is like, hey, come on in and see how this stuff actually works. By letting people get hands-on with the AI models and see the whole process from data collection to output, it's demystifying the whole thing. And that could have some really interesting ripple effects. We might start seeing more public engagement with AI technologies across the board. Instead of this black box that people are afraid of, AI could become something that the average person feels like they can understand and even participate in. This could lead to more informed public debates about AI ethics and regulations. It might inspire more people to learn about AI and machine learning, maybe even leading to a boom in citizen data scientists or hobbyist AI developers. And in the art world specifically, it could open up whole new avenues for audience participation and interaction. Imagine going to an art show where you can actually contribute to the AI models in real time, shaping the artwork as it evolves. It's all about making AI less of this intimidating, abstract concept and more of a tangible tool that we can all engage with. This is Bob, bringing you the latest from Listen2. As always, we're here to keep you informed and thinking critically about the world around us. Stay curious, folks!