Archive.fm

EDUCATION - The Creative Process & One Planet Podcast

How is AI Changing Our Perception of Reality, Creativity & Human Connection? w/ HENRY AJDER - AI Advisor

How is artificial intelligence redefining our perception of reality and truth? Can AI be creative? And how is it changing art and innovation? Does AI-generated perfection detach us from reality and genuine human connection?

Henry Ajder is an advisor, speaker, and broadcaster working at the frontier of the generative AI and synthetic media revolution. He advises organizations on the opportunities and challenges these technologies present, including Adobe, Meta, The European Commission, the BBC, The Partnership on AI, and The House of Lords. Previously, Henry led Synthetic Futures, the first initiative dedicated to ethical generative AI and metaverse technologies, bringing together over 50 industry-leading organizations. Henry presented the BBC documentary series The Future Will Be Synthesised.

Duration: 53m
Broadcast on: 29 Jun 2024
Audio Format: mp3

THE CREATIVE PROCESS

Tell us how you came to work as an advisor at the frontier of the generative AI and synthetic media revolution. As we reflect on the resurgence of fascist ideologies, how do you see the role of deepfakes in accelerating these dangerous movements? And what responsibilities should creators and regulators of synthetic media hold in this context?

HENRY AJDER

Having worked in this space for seven years, really since the inception of deepfakes in late 2017, for some time it was possible with just a few hours a day to really be on top of the key technical developments. It's now truly global. AI-generated media have really exploded, particularly in the last 18 months, but they'd been bubbling under the surface for some time in various different use cases. Disinformation and deepfakes in the political sphere really match some of the fears held five, six years ago, fears that at the time were more speculative. The fears around how deepfakes could be used in propaganda efforts, in attempts to destabilize democratic processes, to try and influence elections, have really reached a fever pitch. Up until this year, I've always really said, "Well, look, we've got some fairly narrow examples of deepfakes and AI-generated content being deployed, but it's nowhere near on the scale or the effectiveness required to actually have that kind of massive impact." This year, it's no longer a question of whether deepfakes are going to be used; it's now a question of how effective they are actually going to be. I'm worried. A lot of the discourse around generative AI is very much that you're either an AI zoomer or an AI doomer, right? But I don't think we need to have this mutually exclusive attitude. I think we can look at the different use cases. There are really powerful and quite amazing use cases, but those very same baseline technologies can be weaponized if they're not developed responsibly, with the appropriate safety measures, guardrails, and understanding from the people using and developing them. So it is really about that balancing act for me. And a lot of my research over the years has been focused on mapping the evolution of AI-generated content as a malicious tool.

THE CREATIVE PROCESS

I know you are a lead advisor on the Partnership on AI's Responsible Practices for Synthetic Media. And I see that OpenAI has now joined the Coalition for Content Provenance and Authenticity (C2PA) to address the prevalence of misleading information online through a kind of watermarking on deepfake videos, and so on. Do you think that's enough, or are there still ways for bad actors to bypass it?

AJDER

I advise the Content Authenticity Initiative, an organization advocating for the adoption of content provenance technologies like the C2PA. I think it's worth breaking down some of the solution approaches being put forward technologically because, understandably, it's quite complicated. The first of these is detection, which tries to give a score or an evaluation, using AI, to spot, let's say, the digital fingerprints left behind by AI-generated content, which the human eye or ear may not be able to detect. But deepfake detection is always going to be, to some extent, unreliable. There are tens of different detection tools out there, and they sometimes disagree with each other. A detector will never give you a binary yes or no answer; it will give you a percentage confidence score based on probabilistic reasoning. And we've actually seen, in the hands of everyday people, this doing more harm than good, because they've blindly trusted a detection system that is not robust. The second approach is watermarking within the media itself, on the pixel level or on the waveform level in audio. This is the intentional injection of artifacts into a piece of media which really stand out to a detection tool. The challenge is that these signals have a similar robustness problem: they can be manipulated. Someone can compress a piece of audio or an image in a way that destroys the fidelity of those watermarks. The last is the C2PA, which attaches cryptographically secured metadata to a piece of media, providing something like a nutrition label. The moment an authentic photo is captured, let's say, data is cryptographically secured to that media file and then travels with it throughout its lifetime. And as people continue to manipulate it within that standard, it continues that log, that ledger, of how manipulation has taken place. In my mind, it's the most promising approach because it's the most secure.
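For readers who want the signed-ledger idea made concrete, here is a minimal sketch in Python: a hash of the media plus a running edit log are cryptographically bound to the file, so tampering with either the content or the log invalidates the signature. This is a toy illustration only, with hypothetical names (`make_manifest`, `verify`) and a shared-key HMAC standing in for the certificate-based signatures a real provenance standard uses; it does not implement the actual C2PA specification.

```python
import hashlib
import hmac
import json

# Stand-in signing key for this sketch; the real C2PA standard uses
# X.509 certificate chains and public-key signatures, not a shared HMAC key.
SECRET_KEY = b"signer-private-key"

def sign(manifest: dict) -> dict:
    """Attach a signature over the manifest so any tampering is detectable."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {**manifest,
            "signature": hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()}

def make_manifest(media: bytes, action: str, prior: dict | None = None) -> dict:
    """Bind a hash of the media to a running log of how it has been edited."""
    history = prior["history"] if prior else []
    return sign({
        "media_sha256": hashlib.sha256(media).hexdigest(),
        "history": history + [action],
    })

def verify(media: bytes, manifest: dict) -> bool:
    """Check the signature and that the media still matches its recorded hash."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(manifest.get("signature", ""), expected)
            and claimed["media_sha256"] == hashlib.sha256(media).hexdigest())

photo = b"raw sensor data"                       # the "authentic photo"
m1 = make_manifest(photo, "captured on camera")
cropped = photo + b" [cropped]"                  # a later edit
m2 = make_manifest(cropped, "cropped in editor", prior=m1)

print(verify(cropped, m2))            # True: the ledger is intact
m2["history"][-1] = "no edits made"   # tamper with the log...
print(verify(cropped, m2))            # False: the signature no longer matches
```

The point the toy makes is the one Ajder describes: unlike a pixel-level watermark, which degrades quietly under compression, a signed provenance record either survives intact or visibly breaks.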

THE CREATIVE PROCESS

I'm glad we're going in that direction. It helps the average viewer understand when you say it's like a nutrition label, though a lot of us pass over those because it's just a long list of nutrients and names of chemicals I don't even understand, and I just have to get to the checkout line. I think in some ways the existence of deepfakes reinforces the importance of legacy media, but I'm just wondering what this is all doing to us. You presented the BBC series The Future Will Be Synthesised, but now we're entering a synthetic reality. AI is rapidly altering our collective reality. It's kind of Nietzschean: all gods are dead, be your own god. And some tech companies are offering opportunities to inhabit dreamlike worlds where we can be more than our real selves. For those who haven't had a chance to experience enough of the real world before they're immersed in a synthetic one, how do we protect those most vulnerable in society from these synthetic realities?

AJDER

The series that I did came out in May 2022. When I was producing that documentary with my producer at the BBC, I don't think I was expecting us to have accelerated so quickly into that future. The world has been quite synthetic for some time; I just don't think we've necessarily been aware of it. Whether it's the computational photography in every single smartphone, where even when you think you're taking a photo with no filter there's still quite a lot of shaping of the reality you're presented with at the end, or movie magic in entertainment, or the recommendation algorithms behind your Facebook feed, or Grammarly, these are all AI-powered. But we're now interfacing with AI in a much more intimate and personal way. When you were taking a photo or scrolling through your Twitter feed, AI was working behind the scenes, but you weren't necessarily aware of that. Whereas in the age of generative AI, we are becoming almost synthetic conductors of the reality that we're creating. That could be via a large language model or through one of these companion-based apps. We're starting to see some really powerful tools, like Suno, coming out, which can generate music. We are starting to see AI-generated content seep into areas of our lives, particularly around what we traditionally see as human-to-human communication or human creativity, and I think that does fundamentally change the way we evaluate the value of human creative endeavors, but also the way we think about interacting with each other in the digital world. How is that going to shape expectations of what normal, messy human relationships look like? There's a huge swathe of gray. There's a big question: are we feeling uncomfortable about these use cases because they're new and unfamiliar? Or are we feeling uncomfortable because a deeper ethical intuition is being disturbed?

THE CREATIVE PROCESS

Last year, the OECD reported a significant ten-year decline in reading, math, and science performance among 15-year-olds globally. One-third of the students cited digital distraction as an issue, and there was an overall tripling of ADHD diagnoses between 2010 and 2022. I'm curious to hear your thoughts on the future of education and the insights from the paper you wrote for the All Party Parliamentary Group on Artificial Intelligence.

AJDER

From an educational perspective, I believe AI has the potential to level the playing field and provide broader access to digital resources, akin to having a digital Aristotle that offers dialogue and a way of bouncing ideas around. I've observed some exciting use cases and tools in universities that demonstrate the cost-effective power of these technologies. Beyond text, AI-generated content has the potential to bring history to life and to create innovative forms of assessment and engagement for teachers and students. Despite its reliability issues, AI can provide valuable data points in subjects like history and science and can also change how content is presented. However, there is an ongoing crisis in higher and secondary education related to cheating, as educators struggle to detect AI-generated content in student submissions. While many in the industry are enthusiastic about the opportunities AI presents, detection tools are currently insufficient to confidently identify AI-generated work as cheating. To address this, alternative assessment methods such as viva-style examinations have been suggested, but implementing them would require a significant increase in resources. There are also proposals for randomized defenses of submitted work, similar to a "drug test," to deter students from using AI to complete assignments. The prevalence of AI-generated assignments poses a significant challenge, particularly as educators and students are reluctant to revert to traditional examination methods. While there are creative solutions we can explore, such as having students engage with a chatbot to critique its ideas or expose its limitations, it's evident that the issue requires attention, especially in state education, where resources and AI expertise may be lacking.

THE CREATIVE PROCESS

You speak about the importance of the arts and the humanities, and I'm just wondering how you reconnect to who you were before the overlay of synthetic realities?

AJDER

Getting out in the countryside, getting out in nature, is really important to me. I do have ways to get away from screens, which I think is really important. Having seen some of your previous guests, music is something that's really important to me, and I have really enjoyed some of the artists that you've had on. I think those are key parts of being human. What I'm about to say is closer to almost a religious assertion than one grounded in a scientific version of reality. For me, if I didn't know a piece of music was AI-generated, I could think it was absolutely beautiful. The moment I know that actually it's AI-generated, part of the magic goes for me. And it's the same with writing; it's the same with art in general. I think that's because there is an intangible value, and it's a real value. It really does shape the phenomenology of how you experience that piece of art, which comes from knowing that it is communicated essentially from one consciousness to another. AI obviously can't do that. Fundamentally, we can't have epistemic knowledge of other people having minds, having consciousness. It's kind of Cartesian, right? The only thing I know I can truly know is that I am. We can't fundamentally know that other people exist, but the way I think we do come to have those transcendental experiences between people is through art. I've seen some beautiful AI-generated art, but in my mind it is inherently less valuable to me because there isn't enough consciousness driving that creative process. Keeping that side of things is really important.

THE CREATIVE PROCESS

And finally, as you think about the future of education, the future of work, and our AI co-creators, what would you like young people to know, preserve, and remember?

AJDER

Something I'm concerned about, and by extension would like to preserve, is a real sense of empathy and humility, which comes with understanding that the world is messy, that people are messy, that defects and imperfections exist, that things don't always go the way you want, as much as you wish they could. Imperfection is part of life, and my concern is that AI-generated content, which smooths and perfects a version of reality into precisely what you want, and which makes you feel pressured to represent yourself in this absolutely perfect way, fundamentally gives you no room for error and detaches you from the reality of growth and life and how people work. Empathize with other people. Everyone has their challenges. Things don't always have to be exactly how you want them to be, or how other people want them to be. And that involves having some humility about yourself as a messy creature, as we all are. I hope that's retained, but I do see this move towards the smoothed and shaped reality that AI is enabling, potentially creating more of a disconnect from that imperfect, messy, but also quite beautiful world. This polished but ultimately plastic version of reality is increasingly becoming the default for some people over the fleshy, messy, human side of things.

This interview was conducted by Mia Funk with the participation of collaborating universities and students. The Associate Interview Producer and Associate Text Editor on this episode was Nadia Lam. The Creative Process is produced by Mia Funk. Additional production support by Sophie Garnier.

Mia Funk is an artist, interviewer and founder of The Creative Process & One Planet Podcast (Conversations about Climate Change & Environmental Solutions).
Listen on Apple, Spotify, or wherever you get your podcasts.

- One, two, three, four. - How is artificial intelligence redefining our perception of reality and truth? Can AI be creative? And how is it changing art and innovation? Does AI-generated perfection detach us from reality, life, and genuine human connection? - Henry Eider is an advisor, speaker, and broadcaster working at the frontier of the generative AI and the synthetic media revolution. He advises organizations including Adobe, Meta, the European Commission, BBC, the Partnership on AI and the House of Lords. Previously, Henry led synthetic futures, the first initiative dedicated to ethical, generative AI and metaverse technologies, bringing together over 50 industry leading organizations. Henry presented the BBC documentary series The Future Will Be Synthesized. - Henry Eider, welcome to the creative process and one planet podcast. - Thank you so much, it's a pleasure to be with you. - Henry, tell us how you came to work as an advisor working at the frontier of generative AI and the synthetic media revolution. You know, 'cause I'm a generalist and I find AI is such a fast-moving train that it's kind of hard to keep up with the daily developments. It's advancing so fast. - Having worked in this space for seven years since the inception of deep fates in late 2017, for some time it was possible with just a few hours a day to really be on top of the key. Technical developments, use cases, malicious and positive. Now, particularly since the explosion of generative AI, chat, GPT, Dali, two, all of these kinds of tools emerged in 2022. I don't really think anyone can say that they are on top of everything because there are so many people developing, so many businesses being built, so many use cases around the world, right? It's now truly global. So I don't think anyone can say that they truly know absolutely everything that's going on because it's just expanded so exponentially in the real meaning of the term. - On the day we're recording this, March the May 8th victory in Europe Day, which some refer to as the victory over fascism. North Korea's former propaganda minister, Kim Ki-Nam, just died at the age of 94. Some have likened him to Nazi Germany's propaganda boss, Gobles, widely known for his mantra, repeat a lie often enough and it becomes the truth. So as we reflect on the resurgence of fascist ideologies, how do you see the role of deep fakes in accelerating these dangerous movements? What responsibilities should creators and regulators of synthetic media hold in this context? - Yeah, it's a really good question. I think deep fakes, AI-generated media have really exploded the last 18 months, but they've been bubbling under the surface for some time in various different use cases, right? Whether that's where the origins of the term deep fakes came from, which is non-consensual, pornographic content, almost exclusively targeting women, or whether that's in the kind of new age of fraud where your voice and your face can be cloned and used to fall off ones. But the disinformation and use of deep fakes in the political sphere to deceive, to try and persuade has become a lot more established and really matches some of the fears held five, six years ago. But at the time, we're more speculative than they were instantiated in what was actually happening. 
And I think what we're seeing in 2024 for global elections, the most people on record being eligible to vote over half the world's population, the fears around how deep fakes could be used in propaganda efforts to destabilize democratic processes, influence elections have really reached a fever pitch. And that's with good reason. Every single midterm and presidential election in the US since about 2018, I've been asked, is this gonna be the one where deep fakes are used by state actors to change the result of an election, suppress voters or things like this? And up until this year, I've always really said, well, look, we've got some fairly narrow examples of deep fakes and AI-generated content being deployed, but it's no way near on the scale or the effectiveness required to actually have that kind of massive impact. This year, it's no longer a question of, are deep fakes gonna be used? It's now, are they actually gonna have the impact that people fear they will? And we have started to see deep fakes, particularly in political processes around the world being used in different ways to try and have an impact. That could be in the kind of propaganda sort of line. So that's sort of people using AI-generated imagery or videos to try and big up or endorse candidates. So maybe that's making certain candidates look like superheroes or using, as we saw in Indonesia, synthetic resurrection, bringing back the military dictator there to endorse one of the candidates for the election. But it also can be using it to attack other candidates. For example, in Argentina, we saw both candidates using AI-generated imagery of each other as like a villain from Clockwork Orange to try and say, you know, these people are unstable or crazy or whatever. And then we've also seen much more direct disinformation, attempts to deceive voters around, let's say, whether they should be voting as we saw with a robo call of Biden in New Hampshire, saying, you know, don't vote in primaries, as well as in cases where we're seeing people getting other politicians such as the Labour opposition leading the UK, you know, a secretly recorded audio of him shouting at an assistant framed as if it was real when it was fakes. So we really are starting to see a maturation of AI-generated content in these kinds of attack vectors or threat vectors. Is this actually changing voters minds? Is this actually going to have a concrete result on the outcomes of elections or the perceptions of a government about a politician? The jury is still out on this at the moment. There's a lot of correlation, causation challenges there. But I think it's fairly safe to say that both state actors and malicious domestic actors are starting to become aware of how these tools can be used maliciously. But we still are relatively early days in that process. We don't have a huge catalog going back years and years, nor do we have a post-mortem of, let's say, an election where we can definitively say, well, look, this changed the result for this candidate in this region. But yeah, safe to say that the age of political deep fakes, political AI-generated content is very much upon us. It's now just a question of how impactful these are going to be over, let's say, fake news as in a written fake article or other forms of more crude media but manipulation, such as when we saw a video of Nancy Pelosi, the leader of the Democratic Party, just slowing down the voice audio to make her sound drunk. 
Are deep fakes going to have a massive impact beyond those kind of more basic capabilities? The jury is still out to some extent. - 'Cause I know that you've advised a number of technology companies from Adobe, Meta, but also government special committees revising the European Commission, the House of Lords, you worked for the BBC as a broadcaster, just taking the temperature on those different organizations, what is their take on this? - Yeah, so all of these different organizations that I work with have a stake in this challenge or indeed the opportunity. So all of these different stakeholders, I work with, they have a horse in the race in some sense, and that could be because they are worried about the potential malicious impact of AI-generated content and deep fakes and how they might influence their customers. But also it's worth saying that there are lots of uses of AI-generated content which are creatively interesting or even pro-social, even potentially beneficial to society outside of the commercial lens. And so a lot of these organizations are simultaneously thinking, well, okay, how can we build responsibly these kinds of tools if their tech companies was also thinking, okay, well, does responsible use look like? And how does that then interface with, let's say, how governments are thinking about their elections or how new organizations are facing the challenge of trying to fact check high realistic AI-generated content. It really does depend on the organization, but my position is shared broadly with a lot of these groups, particularly big organizations that have different teams and different kind of groups working on different challenges. We can be simultaneously excited and worried, right? I think a lot of the discourse around generative AI is very much you're either kind of an AI Zoomer or you're an AI doomer. You're even really excited and you want to accelerate development or you're terrified and you're making comparisons of this as a threat like the nuclear threat and should be treated as really dangerous. For me, I don't think we need to have this mutually exclusive attitude. I think we can look at different use cases and say, well, this is really exciting in healthcare and accessibility in art. But those very same baseline technologies can be weaponized and if they're not developed responsibly with the appropriate safety measures, the appropriate guardrails, the appropriate understanding from people using and developing them, we could end up seeing quite a few malicious outcomes. So it is really about that balancing act for me and it can be a real challenge. And there's a lot of people who are understandably concerned and a lot of my research over the years has been focused on mapping the evolution of AI generated content as a malicious tool. But I've also been working with people who are using it in some pretty amazing capacities as well. So as usual, the truth resists simplicity. There's a lot of nuance, but I can understand why for certain groups, let's say, if you're a woman and you've been targeted by the final consensual pornography and you know that hundreds of thousands of other women are being targeted, you may be thinking, well, this is a much bigger threat than the potential benefits of this technology. So we need to do that kind of assessment by the same token. I can see why other people in other sectors are thinking, well, look, the benefits far outweigh the potential risks. But I think the general attitude is we need to get this right. 
We saw what happened with social media where we didn't understand the implications and ramifications until it was too late. We have a chance to do it this time and there is a bit of a tug of war as to how that happens, particularly when we look at market forces and challenges that governments are having around regulation, for example. - Yes, indeed. And I do want to go into the EU, AI, and others. I know you are a lead advisor on the partnership on AI responsible practices for synthetic media and I open AI has joined this coalition for content provenance and authenticity to endorse this kind of watermarking on deep fake videos. Do you think that's enough or there are still ways for bad actors to bypass it? - Yes, I advise the content authenticity initiative which is the organization trying to advocate for the adoption of content provenance technologies like the C2PA. And so seeing open AI join that after Google also recently joined the steering committee was fantastic. These are the kinds of organizations that can move the dial. They can change attitudes amongst smaller companies. They can get this technology in front of the general public to help raise awareness. I think it's worth breaking down the solution approaches being put forward technologically because understandably it's quite complicated to describe how the technology is working. I would break it down into three key areas. So the first of these is detection. So deep fake detection, classifier based analysis which looks to try and give an evaluation using AI to spot, let's say, digital fingerprints left behind by AI generated content which the human AI or EM may not be able to detect. And so these detection tools can be really useful but there's a big caveat there which is they're useful when used by people who understand their limitations and you have the ability to look for more signals than just that detection. You're never gonna have a 100% accurate model. And because of that, you're gonna have this challenge where the landscape is moving very quickly. People are developing new generative techniques. People are trying to break these detection systems. And so there is no one detection system to rule them all, so to speak, right? And that leads to challenges because there are tens of different detection tools out there and they sometimes disagree with each other. And that means that when it comes to having a definitive answer, detection will never really be able to give you that. It will never be able to give you a binary yes or no answer. It will be able to give you a percentage confidence score based on probabilistic reasoning. 80% confident that it's real and so on and so on. So we've actually seen in the hands of everyday people this actually doing more harm than good because they've sort of trusted blindly a detection system which is not robust. Like as we are seeing in the current crisis in Israel and in Gaza, we've seen real images dismissed as fake by detection tools which have then spread on social media and actually led to more disinformation, right? So detection has a role to play but it has to be used with understanding of its limitations and of how it fundamentally works. So that's one approach as detection. Second approach is watermarking. If you think about a piece of media, watermarking is within the media, on the pixel level or on the wave level in audio. And the way to think about watermarking in my mind is almost like good faith red flags. 
So this is the intentional injection of signals of artifacts into a piece of media which stand out to a detection tool. So if a deep fake detection tool is looking for signals which are unintentional in a piece of media, watermarking is providing those detection tools with big, hello, I'm fake labels, right? And good faith, they're designed to be spotted. And this is something that again, many different companies are working on, Google DeepMind released Synth ID, they announced it a few months back. And the idea is that it helps detection tools, scanning media, understand whether it's real or not. And it's kind of a disclosure approach, right? The challenges with this approach is that signals similarly have a robustness challenge i.e. they can be manipulated. Maybe someone can compress a piece of audio or compress an image which removes the fidelity around those watermarks. And there are other techniques that someone might be able to use to try and basically compromise them. If someone's able to remove or corrupt them, then people might get a false sense of confidence if they don't see a watermark in a piece of content that it might be real. C2PA from the Coalition for Content Providence is an open standard which is providing a different kind of disclosure. So if watermarking is in the pixel level, what C2PA does is it attaches data to a piece of media. So it's kind of external to the piece of media itself what you're hearing, what you're seeing. But it's a piece of metadata which is cryptographically secured. So it's like a nutrition label, if you think about it in that sense. So the moment a piece of AI-generated content is created, the moment a an authentic photo is captured on let's say a DSLR camera. Data is cryptographically secured to that media file and then travels with it throughout its lifetime. And it provides people with information about how it was captured, what time it was captured. And as people continue to potentially edit it, manipulate it within that standard, it then continues that log, that ledger of how manipulation has taken place. And because it's cryptographically secured, if it's tampered with, if someone tries to break it, it loses that standard, it loses that kind of mark. And so the idea is that we use this in a way where people were reflexively looking for a content credential. That's what it's called with the C2PA. It's a little kind of CR logo that appears in a corner of an image, which then provides a drop down of how that image is being made or manipulated. And in my mind, it's the most promising approach because it's the most secure technologically. It doesn't have the same robustness issues that let's say watermarking or detection has. The biggest challenge is scale, getting the attitudes in society to change to the point where people reflexively look for it. At the moment, a lot of people will see something that looks real or hear something that sounds real, and they might reflexively trust it. The challenge with provenance is to get people to look for that mark to then understand how a piece of media has been created. So that's an overview of the different technological approaches to combating deceptive or misleading AI-generated content. On top of that, the all of them struggle from a challenge which is much deeper in society, which is let's say that C2PA is adopted by every single major technology company, news company, social media platform, all of the different people in the pipeline. 
There are still gonna be people who say, well, this content provenance standard is developed by the CIA with the big tech companies, and it's all a big conspiracy, and they choose which pieces of media are marked as real and which are fake, same with detection. This detection system is biased against this political party or this kind of content, and it's all a conspiracy. So I think the temptation of seeing technology as a silver bullet is misguided. Technology has a role to play, but there are much deeper challenges on the human level. I think technology will never be able to fully address. - Yes, and I'm glad we're going in that direction, but it helps the average viewer understand what they're seeing. When you say it's like nutritional labels, a lot of us pass over that because it's just like time in the day, and it's a long list and chemicals I don't even understand, and I just have to go to the checkout line. So in some ways, the deep fakes reinforce, for those who don't have time, the importance of legacy media, because we just don't have time to sift through all that. But I'm just wondering what this is all doing to us. You did that series, The Future Will Be Synthesized, and now it seems like today, now it's synthetic. Guys rapidly altering our collective reality, and it's moving so fast. You know, the plethora of AI startups that you discuss. We can create our own worlds, places, scenes. It's kind of a Nietzsche, and all gods are dead. Be your own god, and some tech companies are offering opportunities to inhabit dreamlike worlds where we can be more than our real selves. So how do we protect those most vulnerable in society from these synthetic realities? For those who haven't had a chance to experience enough of the real world before they're immersed into a synthetic one? - Yeah, it's a really good question. I like that you're bringing in Nietzsche as someone who did four years in academic philosophy. Actually, for the first time in a long time, academic philosophers have real input into the development of these technologies, and AI ethics is obviously a growing field. But the series that I did came out in May 2022. Just a month before Dali II, I believe was launch this is OpenAI's texture image tool. And obviously chat GPT then launched later that year. And so when I was producing that documentary with the BBC, the future will be synthesized was not necessarily saying that parts of reality in the present on synthesized. It was more, these are the early stages. If I'm honest, I don't think I was expecting us to have accelerated so quickly into that synthetic future by 2024. And I think this question around how is this changing society, communications, relationships? Have we see ourselves? These are really big questions. It makes me wonder about how this is holding up a looking glass to reality pre AI generated content. The world has been quite synthetic for some time. I just don't think we've necessarily been aware of it, or at least it's not something we've thought about a lot. But whether it's computational photography in every single smartphone, all smartphones will take various different photos. Night photography or portrait modes are all AI-shaped. So even when you think you're taking a photo, no filter, there's still actually quite a lot of shaping of the reality that you're presented with it. Or whether it's movie magic and entertainment. Most films today, there's dialogue replacement, right? 
The lines that you're hearing are not actually necessarily those recorded on the sound stage. Same with the effects and all of these different things. Commercial photography and advertising, things like this. And of course social media, where for some people, the idea of uploading a photo without filtering it is unthinkable, particularly among some of the younger generations. So I think the world is already very synthetic. And that doesn't even begin to touch on recommendation algorithms for Spotify or your Facebook feed, or Grammarly, or all of these different tools, which are AI-powered. But there are big challenges AI brings that weren't there necessarily before. And I think part of that comes down to the fact we're now interfacing with AI in a much more intimate and personal way. It's a much more direct relationship with artificial intelligence than we saw previously. That is, when you were taking a photo on your smartphone, or you were scrolling through your Twitter feed, or whatever it might be, you were x-feed. AI was working behind the scenes, but you weren't necessarily aware of that even, or you weren't directing it. Whereas in the age of generative AI, we really are becoming almost kind of synthetic conductors of the reality that we're creating. So that could be via a large language model, whether that's Chad GPT, Claude, Google Gemini, or one of these companion-based apps like Replica, which are creating kind of AI girlfriends, or boyfriends, or things like this. Similarly, with music, we're starting to see some really powerful tools, like Suno coming out, which can generate entirely AI-generated music, which sounds not necessarily artistically to my taste, but I would be arrogant to say that there isn't something that those tools could generate, which I would enjoy. We can come on to perhaps how I feel knowing if humans created a piece of art changes the way that you experience it. But nonetheless, we are starting to see AI-generated content seep into areas of our lives, particularly around what we traditionally see as human-to-human communication or human creativity, that I think does fundamentally change the way that we evaluate the value of human creative endeavors, but also the way that we think about interacting with each other in the digital world. I mean, even yesterday, Apple announced the release of their new iPad. One of the features I think they showed off was this feature where it corrects your gaze. This capability essentially synthetically changes your eye to make it look like you're looking directly at the person. Some people would say, well, great, okay, I'm paying attention to that person. I want them to feel like I'm paying attention to them as if I was looking them in the eye, but others. This is serious that this is weird. This is shaping a reality to something that maybe seems more polite, but it's not real. And I think for some, including me, maybe that's a first step towards a more avatar-driven version of digital reality, where we interact more with synthetic components of each other than we do potentially with organic components, which are being transmitted via the camera lens itself. So how do we think about a 12-year-old boy or a 12-year-old girl who, their first romantic interactions are not with another person, but they're with a chatbot? How is that going to shape expectations of what normal, messy human relationships look like? 
If we're talking about music or entertainment content, let's say you're having content which is putting you from some center in the content you're watching. How does that change your relationship with the characters? How does that change your relationship with content as of a third party? And similarly with things like synthetic resurrection, this idea, as I mentioned earlier, of bringing back deceased people using AI. I mean, this is something I wrote about in 2018. It's idea of a father creating an avatar or a voice of a deceased mother to read a bedtime story to his children, but then potentially that's something he starts listening to. Is that something that could actually really help with the bereavement process, help people process grief? Or is it something that could actually be really damaging and actually keep people stuck in a grief and bereavement loop and never let them actually process loss? These are really big questions. There aren't many clear answers. And we have lots of cases in AI of really explicitly malicious use cases. And we have a few which I think are really explicitly good, like using synthetic voice to give people the velocity ability to speak, to disease, the ability to retain their voice. The example I always cite is Stephen Hawking. He had this robotic voice, but that wasn't how he sounded. If he were alive now and had lost his ability to speak, he could have had a bespoke voice that sounded like him, allowed him to keep part of his identity, which I think is pretty uncontroversially great. I think it's a really powerful thing. But between these kind of black and white areas, which are quite clear, there's a huge wave of great where we don't necessarily have kind of precedent ethically to kind of be like, well, okay, this is similar to this. And that's how I feel about this. And therefore, this is how I'm gonna feel about this AI use case. And the challenge I think we're facing right now, whether that's in, how do we disclose AI generated content? Should AI generated content take the place of certain artists and so on? Is synthetic resurrection acceptable in certain context? There's a big question of, are we feeling uncomfortable about these use cases because they're new, they're unfamiliar, it's kind of future shock. It's a brave new world kind of side of things. Or are we feeling uncomfortable because a deeper, ethical intuition is being disturbed. In many cases, the juries out, and I think the next few years are gonna be big for that kind of shift in different societies around the world, potentially coming to different conclusions about what they see as acceptable and what is not. And I think for young people who are growing up with this, that will change how they see the world, how they see AI as part of their lives, how they see relationships or art, all of these different categories. So lots of big questions and not many clear answers. - Yes, I think it should be that we're automatically opted out and we have an option to opt in instead of like having to tick all these boxes. - So this is one of the big challenges. It's like this idea that we can opt out of an AI generated world, right? And the different experiences and content that involves. But if I go back to my previous example of saying the world is already quite synthetic, how many people understand how AI is already pre the generative shift shaping their lives? I don't think many. 
And if we had opt outs for every single one of those applications, as you were saying with the content provenance, with the label side of things, how much are people gonna treat them like the cookies, notification you get when going onto a website like, oh yeah, whatever. There's a real information overload side of this. It's pretty inevitable that we're gonna start seeing AI generated content become more accepted or at least certainly more prominent. I don't think that's necessarily a good thing. But I think this idea that we can alert people to the kind of composition of everything they're experiencing in the digital world. And for them to be able to understand that meaningfully and make informed decisions of the back of that, I think it's unlikely. And that's challenging, that's uncomfortable. - Yeah, so it all calls for the need for governance and guardrails, a lot of people are relieved to see the passing of the EU AI Act and other acts, which are not quite as comprehensive around the world. But of course, the challenge lies in its implementation, of course, as you know, of various high risk sectors, whether it's public service and accountability, law enforcement, healthcare, which has its advantages. But we have now the establishment of the concept of the human guarantee, the financial sector, robotics, you know, Thomas vehicles, military, national security, all these areas that are covered by that. Could you, you know, reflect on the vulnerabilities you see in these high risk areas? How could we be using, you know, pre-training measures so we can use these applications more freely and feel safe? - Yeah, so the EU AI Act, in my mind, was an admirable piece of legislation. And I'm glad to have seen it pass. I know that's certainly not the attitude of everyone in the space. But there is always going to be an opportunity cost of legislating in one way over another. And I think the approach that the EU AI Act took, which is to basically categorize different use cases of AI based on their risk level, from high risk to the point where there does not have permissible tools, such as biometrics and stuff like this, or facial kind of profiling for, let's say, policing and stuff like this, all the way through to kind of completely benign use cases. And I think legislation, in some cases, is only as good as your ability to enforce it. And I think we've seen a similar challenge in the UK with the Online Safety Act, which is a much broader set of online harms it's looking to address. But it was a very painful labor period to get that to pass. But now the challenge is enforcement. And I think that's where I'd struggle a little bit more with the EU AI Act, in terms of the amount of resources that will be required from the EU to actually meaningfully implement this legislation, to actually hold people accountable. And also, it's gonna work better on companies and actors, their commercial entities, they need to operate in the EU, or they genuinely want to follow the law and be compliant. But there's a huge amount of bad actors out there, right? They don't care, they don't have any interest in this. And so, our these laws and the legislative procedures actually going to be implemented effectively. And the jury's still out on that, but I think it's gonna be a really big challenge. It's also worth saying that, yeah, we do have this really interesting dynamic at the moment, which is different countries are approaching AI from different perspectives when it comes to legislation. 
The US is kind of being a bit more hands-off. We did the Biden executive order and some of that coming through around government procurement. Obviously, I think the DOD is like the biggest, you know, if it was a company, it's one of the biggest in the world in terms of the amount of contracts it takes. You know, making demands of kind of AI services that they would procure, which I think is positive, but they are still fairly hands-off. Same with the UK, kind of like a bit of a middle ground. We had the white paper come out and there is definitely intention to legislate against harmful uses of AI and also to endorse and work towards more responsible uses. But again, at the moment, it's not particularly concrete. But then we've also seen China, for example, who were some of the earliest to legislate on this with some of the most sweeping legislation, the Deep Synthesis Act that they passed more than a year ago. So different countries are taking different approaches with different degrees of comprehensiveness. My concern is that in a globalized world where digital boundaries are, in many cases, is fairly meaningless between nations, that we're going to see certain nations potentially almost acting as kind of AI tax havens, where you'll see intentionally minimal legislation around, let's say, training data or about safety measures. - I know that you also advised on the paper of the Generative AI as a catalyst for change in education. And last year, the OECD reported a huge tenure decline in reading, math, and science and performance among 15-year-olds globally, a third of whom cited digital distraction as an issue and found an overall tripling of ADHD diagnosis between 2010 and 2022. And so I'm just wondering what your reflections are in the future of education and what emerged from that paper that you wrote for the All-Party Parliamentary Group? - Yeah, so from the education side of things, I think AI has an opportunity to become a great equalizer in many ways and actually open up access to effectively kind of digital Aristotle's, you know, Aristotle Tutor of Alexander the Great, you know, providing that kind of dialogue partner, that kind of way of bouncing ideas. And I think from some of the use cases that I've seen coming out of the universities or from students, there's some really exciting stuff being done. And because, you know, the cost of access to these tools is relatively cheap, I do think that it can be a really powerful resource. And the same goes for other forms of AI-generated content, not just text. I think there's a way to bring potentially history to life or to have new creative ways of getting people as kind of conductors of AI to sort of create new forms of content for assessment or for teachers to create new kinds of ways to engage their students. So both in terms of equalizing access to, you know, and this is the caveat and always necessarily reliable, but quite reliable data points on history or science or things like this. I think it's really useful. And also in the way that you can present new content. Having said that, there is no doubt that there is a crisis going on, particularly in higher education, but also in kind of the secondary education market of teaching of teachers and educators fundamentally being unable to tell with the level of confidence required to make a definitive claim, whether a student has cheated on their assignment or not. I've been speaking to a lot of people in the higher education industry. 
And a lot of them are really excited about these tools and they're really excited about the opportunities. But when it comes down to the individual lecturers, they are seeing essays being turned in, which all sound very much the same. You know, they're very confident to say I generated, but because the detection tools are not reliable enough, they cannot say you have cheated because they can't make that claim with a strong base of evidence. And so there are ways potentially around that. You know, you could maybe start doing more vivor style examinations. So getting people to kind of actually defend their essay and so on. Obviously for certain institutions, that's a huge resource increase for them in terms of a human assessment, which is maybe not sustainable. One example I saw some people talk about, it's almost like a drugs test. It's like, you know, you could be randomly selected to defend your work. And so then maybe that's a strong deterrent. But yeah, I don't think there's any doubt that what we used to be, I need to do an all-nighter to write this essay, which arguably isn't very healthy either. But, you know, is now potentially I'll use chat GPT or perplexity to write this for me or Claude. And I think that is a really big challenge. I'm not entirely sure how we deal with that in a world where most educators don't want to go back to exam hall examinations, where most students don't want that. And I think there are creative ways to potentially get around it in terms of the kinds of assessments you come up with. Maybe it's like, you know, actually I want you to have an engagement with a chat bot where you critique its ideas or you show me its limitations, stuff like this. But there's no doubt that it's a challenge. There's no doubt that a lot of people, particularly in the state education sector, where there aren't so many resources and where teachers are not AI experts, they are really struggling. - Yes, I think there may be ways we can intuit, but of course then when you're dealing on the university level and when there's international students and their language might be different and there are all these different ways that you might think something's vacant, it's really real. But I read the paper that, you know, of course it helps teachers who also experience burnout. But one thing that I've noticed are remarkable, uptake and what I would characterize is kind of mania among students. I mean, they're really smart, but I feel that the way they're communicating, it's too fast. It's like it's reflecting the speed of AI and that's something I observed in the last year. - I do some work with the University of Cambridge, but I'm not kind of daily interacting with students and that's a fascinating kind of reflection. I certainly noticed my attention span seemingly being lesser than it was and in an age of instant gratification from digital content, I could certainly see how AI generated content could feed into that because again, it's less friction, right? To getting the reality you want or getting the experience you want. And that's really interesting. One thing that came through in some conversations I've been having is that students increasingly are feeling uncomfortable starting conversations with each other, having direct interpersonal relationships outside of what is their default form of communication now, which is Snapchat or TikTok or Instagram, you know? And I think there is a way that AI could make that potentially even more difficult way. 
You know, you're having a bad day in three years time. It might be the case that you just turn up with your avatar and that's acceptable to ask, it might seem strange, but for generations that grow up with that as the norm, as the kind of feature that's being actively pushed to them by AI companies, I don't see any reason why social norms could not shift in exactly the same way that my grandparents when they were alive were utterly overwhelmed and bemused by the world that I lived in, right? To me, it feels normal, but to them, it feels utterly alien. And that will happen to me feeling that the generation that come next, which are AI first generations, will almost certainly have similarly alien attitudes toward this stuff to me that I maybe feel uncomfortable about, but for them, that's the norm. We still have no idea how that is going to impact interpersonal relationships, romantic relationships, confidence in public speaking, the ability to succinctly and slowly communicate ideas. So it's one of those things where we have to, in some respects, wait and see, but we can also, in my mind, do some pretty obvious things for the low-hanging fruit of like, this is clearly not going to be good. And I think maybe a little bit more and kind of formal studies, kind of more peer-reviewed research would be nice to see in this space before potentially some tools are launched. The challenge to that, of course, is that the speed of Silicon Valley, the speed of change, the competitive nature of this is that all of these big companies are feverishly launching new products, new services. And a lot of them are doing responsibility-based research and trying to build them as best they can. I'm not saying they're just releasing them completely untamed. But the reality is that it's really hard to know what the long-term or even medium-term impacts of these are going to be if they're not being piloted and sort of tested over a period of time. And quite simply, shareholders and market forces are not going to allow without legislation, in my view, or perhaps some of this much more thorough, comprehensive, safety testing to take place. I know certain people in these companies would say, "Overblown, we didn't do this before. "This is kind of way too cautious." And in some cases, it might be. But in others, I think when it comes to kids in particular, we do need to be more cautious. - And how do you see the future of education involving to prepare students for the uncertainties of the workplace? You know, what does the future of work look like to you as you consider our AI co-creators? And just as AI radically transforms work, I mean, where are we going to find our sense of self-worth and self-esteem? Are you one of the optimists who see, well, redistributed in a new automated society? - I mean, I certainly wouldn't say that I'm blindly optimistic nor would I like to think I'm blindly pessimistic. You know, my work in this space, I refer to as kind of AI or deep fake cartography. So I've been mapping the landscape for a long time. A lot of my early research was a first of its kind to really do this. And as mentioned earlier around speculation, around disinformation and elections and stuff, my attitude has always been, okay, that sounds like a hypothesis that could be the case, but I'm going to base my judgments based on what I'm actually seeing happening in the real world. I really try to avoid speculation becoming conviction. 
I'm not a behavioral psychologist, nor do I spend a lot of my time on the education side of things, but I think there's real potential, as mentioned, for this to really help people get access to information in a way that they previously weren't able to. And for that to be available to anyone, regardless of where you're from, could really open up opportunity, which I think is great. At the same time, I worry about people's ability to recall information, and about people's ability to function without these kinds of tools. The analogy I'd use is that my handwriting is now atrocious, and I can't handwrite for long stretches anymore. Whereas I used to be able to; I did exams that were four hours long, handwritten. I mean, I came out with cramp in my hand for days afterwards, but word processing has fundamentally changed my relationship with how I actually write, and also how I think. I cannot think and, in real time, communicate ideas as effectively with a pen as I can with a keyboard. Now, when word processing was first emerging, some people might have been really worried about this: it's going to stop people from being able to handwrite, and it's going to really drive a dependency on computers. Maybe that's true. But, you know, we've adapted to it, and it hasn't had some kind of big disastrous consequence. I guess if we're looking at the future of work, we need people to be AI-ready for a new reality where these tools are everywhere, where they're commonplace, and where people who use them are likely going to be more efficient, more productive than people who don't. Now, that might lead to a position where people don't remember things very much, in the same way that Google has potentially eroded people's ability to memorize facts, because you've just got instant access in the palm of your hand. And there are certainly going to be some downsides to that. You know, we're living in a time where there is increasing instability in the world in many different regions. And so I wonder if an over-reliance on AI in an increasingly unstable world is maybe going to lead to some situations where we are left without a paddle, so to speak. But fundamentally, I think this is the new reality when it comes to how people access and generate information, and telling people "don't use it at all" is actually bad advice for people seeking work in the future. At the same time, saying "outsource your critical thinking and your creativity and your ability to think entirely to that tool" is equally naive and ill-advised. And that's one thing I think is really valuable about doing humanities and social sciences degrees at this time, despite the fact that they're being attacked and defunded in many different universities: these degrees, if you engage with them in good faith, give you a real sense of how to think critically, to think about broader trends, to understand what a good argument looks like, to have that critical approach to knowledge in the world. That's certainly something I feel I've benefited from in my studies, and I hope that value is emphasized in the age of AI rather than further denigrated.

- Yes, and so you speak about the importance of the arts and the humanities, and I'm just wondering how you, you know, reconnect, and I hope that you do regain a relationship to handwriting.
That's a wonderful way of, you know, really, you kind of--

- Something like a New Year's resolution is to try and do more handwriting, yes.

- Yeah, but you know, one of the other ways, is it being out in nature, reading? You know, one of those ways that you reconnect to who you are, who you were before the overlay of synthetic realities?

- Right, getting out in the countryside and getting out in nature is really important to me, and I do have ways to get away from screens. Having seen some of your previous guests, music as well is something that's really important to me, and, you know, I have really enjoyed some of the artists that you've had on. I think those are key parts of being human, and I do think there is value to that. I think it's something which is closer to a quasi-religious assertion than one that is necessarily grounded in a scientific version of reality. But for me, you know, if I didn't know a piece of music was AI-generated, I could think it was absolutely beautiful. And if someone had said to me, "This is the new song by this artist," I could be like, wow, that's amazing, I love it. The moment I know that actually it's AI-generated, part of the magic goes for me. And it's the same with writing; it's the same with art in general. And I think that's because there is an intangible value, and it's a real value, a real experience. It really does shape the phenomenology of how you experience that music, that piece of art, which comes from knowing that it's essentially from one consciousness to another. AI obviously can't do that, and I'm not going to get into the debate on AI consciousness now, but in my view, at the moment, it can't do that. And that's why I say it's almost a religious kind of experience, because fundamentally, we can't have that epistemic knowledge of other people having minds, having consciousness. You know, it's kind of Cartesian, right? The only thing I can truly know is that I am, right? We can't fundamentally know that other people exist, but the way that I think we do come to have those somewhat transcendental experiences between people, which is why I talk about the religious, mystical element of it, is through art, through these kinds of things. And it doesn't mean, as I said, that I can't enjoy AI-generated content. I've seen some beautiful AI-generated art, but in my mind, it is inherently less valuable to me because of the fact it's AI-generated, because there isn't a consciousness driving that creative process. So I think keeping that side of things is really important, as well as getting away from the digital sphere where you can.

- For me, personally, it matters when you know it's created by a human. Many of the creatives we've spoken to might rely on a certain generative aspect in their work, generative or electronic music, but it's not AI; it really came from them, right?

- Yeah, so I was going to say, I think, especially electronic artists... I know you interviewed Max Cooper, whose work I love. You know, I wouldn't be surprised if these kinds of new tools and capabilities are being experimented with by artists of his kind. And, you know, even looking back to the work of the likes of Jon Hopkins, using elements of programmatic music in certain aspects to build new patterns, or Holly Herndon, who's done some really interesting musical experiments, building a voice clone of herself that can then be used as an instrument.
This stuff is great; I love it, because it's pioneering, it's pushing the boundaries. But I think the key is that the piece of art is being ideated and co-created with AI in a way where the human is still in the driving seat, in my opinion. They're still conducting, again, that experience. But, you know, I quite like just listening to some solo piano and being like, that is an intimately human experience. And I've heard some really interesting choral stuff come out of AI generation, but for me, it's again that kind of magic of a group of people singing. I'm certainly not meaning to downplay the artistic significance of co-created work. I just think, for me, generally speaking, I probably enjoy more the stuff where I feel that the human is not just in the loop, but is the loop.

- Oh, absolutely. And you wouldn't want to eat something that was, well, we talk about the nutritional value. Like synthetic food, even if it's injected with all the vitamin supplements, there's something not living about it.

- I've been known to enjoy a Pepsi Max, which I'm pretty sure is the most synthetic substance known to man. But, yeah, you're right. I think, you know, maybe sometimes you enjoy these things, but other times it's time and place, right? And so, for me, yeah, I feel the AI influence on content, and there are certain zones where I'm happy for it to be part of it, right? You know, if it's stock photography, commercial photography, things like this, I'm not particularly fussed. To me, it doesn't feel like a real art form if it's a man and a woman sitting at a table drinking coffee in the sun in the garden. The way that you even search for stock photography is almost like you're writing a prompt for a diffusion-based image generation model. Whereas other kinds of photography I do want to be very much human-driven. And the same goes for music, the same goes for film and entertainment, and for writing as well. If it's just marketing copy on a product, I'm not that fussed; again, it's nice to know, but I know it's unrealistic to expect that. If someone's writing a book and it's an artistic piece of work, you know, maybe AI can generate some really impressive poetry, but I think, again, I would want the human to be acknowledged, if there is one, so that I can understand where that work has come from.

- My name is Nadia, and I'm a student studying at Northwestern University's specialized media campus in Qatar. As someone who recently completed my first year of university, something I found very surprising was how, in all of my courses, professors would dedicate a large portion of the syllabus to how we should not use AI to plagiarize. This was so different compared to high school, where my teachers never had to explicitly mention AI at all, probably because at the time most of our assignments were in class and timed, and AI was also just not that popular. But reflecting on all of this and how suddenly AI has found its place in education and other aspects of our lives, I think of Henry Ajder's conversation about the problems in AI detection, as well as the irreplaceable value of the arts. I remember for an assignment, we were told to write a quite personal essay about ourselves and then put the same prompt through ChatGPT to compare the essays. And the truth is, I really didn't like what I got from ChatGPT.
I found my writing better, not because it's objectively better (certainly writing is very subjective) or because I'm a very good writer, but because as the writer, I know every sentence I wrote was deliberate and came from the heart. It's kind of similar to how Henry Ajder mentions preferring human-made music as opposed to AI-generated music, or even music that blends AI and human. There's nothing wrong with the last two. It's certainly innovative, and maybe if a person doesn't carefully read the nutrition label, as Henry Ajder puts it, the AI music could pass as human and be an enjoyable experience, but I think knowing it's AI makes you feel the experience differently. And I think that's really because AI lacks the process, and the human experience of that process. As a writer, my favorite part of writing is the process of creating. AI doesn't go through it the same way we do. There's the code and data, which is impressive technology-wise, but in terms of creating art, it just automatically produces a result without going through the thought process. For example, the thoughts of an artist mixing watercolor and trying to figure out the perfect shade for the perfect color. Henry Ajder highlights the many ways our synthetic world could rewrite what we consider human and how we interact with other humans. And that's why I think it's more crucial than ever to consider the arts and cherish them, because at their core, they are a priceless exchange from one consciousness to another. Henry Ajder puts this into words so perfectly: how there's no way to look into other people's minds, to experience their thoughts; in short, there's no certainty other people exist. But the arts are just so personal. They're an almost mystical experience that can link us together and remind us we're not alone. Now back to the interview.

- And speaking of art forms that aren't quite appreciated, I think teaching can be a kind of art form. I believe it was the writer John Steinbeck who said, "I have come to believe that a great teacher is a great artist and that there are as few as there are any other great artists. Teaching might even be the greatest of the arts since the medium is the human mind and spirit." And so I'm just wondering which teachers have been important to you on your path to becoming a kind of philosopher?

- I certainly wouldn't call myself a philosopher. I feel like I'd have to have done the PhD and written the original research to claim that. But I guess I'm someone interested in philosophy. I've had some really good teachers over the years. I've also had some really bad ones, to be honest. But I'll focus on the good. You know, I didn't have the best time at secondary school, and I had some teachers who really gave me the opportunity to understand that learning and thinking didn't have to fit into the box that the curriculum or the state education system prescribed. One teacher, Ms. Atkins, gave me a lot of room to express myself and took an interest in my musical interests at the time. I used to love heavy metal, which, you know, times have changed. But she would really engage with me on the things I was passionate about, and helped me, I think, channel that passion into different avenues than the curriculum expected. And similarly, in my A-level education, 16 to 18 here in the UK before university, Mrs.
Gillan, my philosophy teacher, was really good at opening up this world of content to me, which really changed the way I thought and actually gave me a subject where I was like, oh, wow, OK, this really fits my somewhat rebellious attitude in some areas. You know, it gave me the space to be myself, but also to engage with content and ideas that I hadn't encountered before. So I've been lucky to have some great teachers. And I think you're right that it is an art form. Certain teachers are great for everyone. Other teachers, you just click with, whereas other people wouldn't think twice about them, right? And I think that's also the beauty of that kind of teacher-student relationship: for some people, it really is a meeting of minds. So, yeah, I'm always very grateful for that. And my journey to where I am has been very serendipitous in terms of my academics and then the work that I've done up to this point. But I like to reflect on the teachers that helped me believe in myself a bit more, or opened the doors to ways of thinking which I still carry to this day.

- And just finally, as you think about the future and education, all these things we're talking about, the future of work and our AI co-creators, what would you like young people to know, preserve, and remember?

- Oh, that's a good question. I guess something I'm concerned about, and by extension would like to preserve, is a real sense of empathy and humility, which comes with understanding that the world is messy, that people are messy, that defects and imperfections exist, that things don't always necessarily go the way you want, even as much as you wish they could. Imperfection is part of life. My concern is AI-generated content which smooths and perfects a version of reality into precisely what you want, and makes you feel pressured to represent yourself in this absolutely perfect way. Fundamentally, it gives you no room for error and detaches you from the reality of growth and life and how people work. Yeah, just empathize with other people. Everyone has their challenges. Things don't always have to be exactly how you want them to go, or how other people want them to be. And that involves having some humility about yourself as a messy creature, as we all are, right? I hope that's retained. I'm sure it will be in some senses, but I do see the sort of smoothed and shaped reality that AI is enabling as potentially creating more of a disconnect between that imperfect, messy, but also quite beautiful world and this polished but ultimately plastic version of reality that is increasingly becoming the default for some people over the fleshy, messy human side of things.

- That's so important, yes. Thank you, Henry Ajder, for opening our eyes to new ways of thinking, for shining a light on synthetic media, for reminding us to embrace our messiness, our humanness, and for your important research into AI, its challenges and opportunities. By helping us understand what we value and where we're going, we can consider possible outcomes and what we should do to ensure a positive future. Thank you for adding your voice to The Creative Process.

- Thanks for having me, Mia.

The Creative Process podcast is supported by the Yamashaski Foundation. This interview was conducted by Mia Funk with the participation of collaborating universities and students. The associate interview producer and associate tech editor on this episode was Nadia Lam.
The Creative Process is produced by Mia Funk, with additional production support by Sophie Garnier. "Wintertime" was composed by Nicholas Annadolis and performed by the Affini Trio. We hope you enjoyed listening to this podcast. If you'd like to get involved with our creative community, exhibitions, or podcasts, or to submit your creative works for review, just drop us a line at hemagreativeprocess.info. Thanks for listening.

(gentle music)