
The Next Page

AI, Democracy, and International Relations with Jérôme Duberry


Broadcast on: 11 Oct 2024

Welcome to a thought-provoking episode of The Next Page. Francesco Pisano, Director of the Library & Archives, discusses the intersection of artificial intelligence, democracy, and international relations with Professor Jérôme Duberry from the Geneva Graduate Institute. With AI rapidly evolving and influencing political practices, diplomacy, and global governance, understanding its societal implications is more important than ever.

In this episode, Professor Duberry shares his insights on the dual nature of AI in democracy, highlighting both the hopes and concerns it raises. From micro-targeting in political campaigns to AI's role in shaping global policies, we explore how AI is reshaping the way we access information and engage in democratic processes.

As AI becomes a vital tool in diplomatic practice, we discuss its potential to augment human decision-making and the ethical considerations surrounding its use. Professor Duberry also sheds light on the challenges of governing AI on an international scale, examining the debates around AI ethics and regulation.

Finally, we address the importance of AI literacy, particularly for the younger generation, to ensure informed participation in shaping the future of technology. Tune in to gain a comprehensive understanding of AI's impact on our world and the critical need for inclusive governance.

Books by Jérôme Duberry:

  • Duberry, J. (2022). Artificial Intelligence and Democracy: Risks and Promises of AI-Mediated Citizen-Government Relations. Cheltenham, UK: Edward Elgar Publishing.
  • Duberry, J. (2019). Global Environmental Governance in the Information Age: Civil Society Organizations and Digital Media. Abingdon, UK: Routledge.

Where to listen to this episode

Apple Podcasts: https://podcasts.apple.com/us/podcast/the-next-page/id1469021154
Spotify: https://open.spotify.com/show/10fp8ROoVdve0el88KyFLy
YouTube: https://youtu.be/Voay4XN23UA

Content   

Guest: Dr. Jérôme Duberry, Managing Director of the Tech Hub, Co-Director Ad-Interim, Executive Education, and Senior Researcher at the Albert Hirschman Center on Democracy 

Host: Francesco Pisano, Director, UN Library & Archives

Production and editing: Amy Smith

[MUSIC]

>> Welcome, everyone, to this new episode of The Next Page, the podcast of the UN Library & Archives here in Geneva. Today we're going to discuss artificial intelligence, democracy, and international relations in a conversation with Professor Jérôme Duberry, who teaches at the Graduate Institute; we'll hear about his research activities in just a second. As an introduction, we wanted to have this chat with Professor Duberry because artificial intelligence is coming into play in several areas that are important for international relations. It is also enabling the emergence of new political practices, such as digital campaigning powered by technologies able to micro-target audiences, and the scope and scale of artificial intelligence is growing by the day. In response, the academic community is doing a lot of research to develop a better understanding of this technology, and the hopes and concerns it raises are a matter of discussion in international fora as well, including the United Nations. So today we have invited Professor Jérôme Duberry, who teaches here in Geneva, as I said, is an expert in these matters, and is the managing director of the Tech Hub at the Graduate Institute. Jérôme, welcome to the podcast. It's such a pleasure to have you here in the studio with us, live and in person. I would like to ask you to introduce yourself to our audience and tell us a little bit about yourself and your research.

>> Thank you very much. Hi to all, a pleasure to be here. As you mentioned, I teach and conduct research at the Geneva Graduate Institute, and I'm quite interested in the societal implications of digital technology. So far I've done research and teaching on how civil society uses digital technology: I first looked at environmental civil society organizations, and over the last four to five years I have explored the pitfalls and promises of AI for democracy. More recently, I'm co-leading a project on AI literacy for youth, and I'll say more later about what we've learned through this project. I'm also leading an executive education project on negotiation, policy, and diplomacy, as well as a couple of courses on AI for negotiation and AI for diplomats.

>> So we really found the right expert for this chat, because I hope all of these elements will have space in this episode, so that the audience gets to understand the intricacies of, and linkages between, negotiation, mediation, and the use of technology, and how diplomats can actually use this technology. To begin with, you wrote recently about artificial intelligence and democracy, so I would like to start from that point. Today, power clearly lies in the hands of those who own the data, as many people say, and this group also benefits from the ability to process data in large quantities. So my question to get the ball rolling would be: what hopes and concerns does AI raise for democracy in general? There is a lot out there: it's good for democracy, it's bad for democracy, democracy is dead because the machines are coming; and in other instances we see how certain combinations of AI and humans actually make everybody a little bit smarter. So, what are the hopes and concerns out there?
>> We do have both hopes and concerns, and it's actually quite a challenge to consider both aspects and keep a balanced perspective; we tend to be either techno-optimists or techno-pessimists. One way to look at the hopes and the threats is to identify some touch points where AI impacts democracy, or political systems in general.

The first level is the individual level. Here, clearly, AI plays a key role in the information ecosystem. Think of social media platforms: their algorithm is an AI algorithm. That means that today one gatekeeper, one form of information selection, is done by AI. The newsfeed of your social media platform, say X or Instagram, is an artificial intelligence. So at this first level, the way I access information is enhanced by AI, because AI gives me access to more information than before: I go to a web search engine such as Google, and an AI selects information for me, making it more accessible. But on the other hand, the same tool is used to filter information, and here we're talking about the filter bubble, meaning that all the information we access online has been tailored for us based on our navigation history. In a way this is quite negative for democracy, because we are no longer confronted with other perspectives and other views. So the first level is really this: AI impacts how we access information, at once enhancing and narrowing the information we see, and it affects how we form our opinions.

The second level is the group level. We can clearly see that AI, like any technology, favors some actors over others. In the context of democracy, the political parties and communities that have the means to buy the services of, let's say, Google or Meta, or of large data brokers or political communication companies that use AI, have a clear competitive advantage over others. Another way to look at this disparity between groups is to think about who designs the technology. Today, AI is mainly developed and designed by men, very often white men with similar backgrounds. I exaggerate a little here, but the point is that the people who design, develop, and think up this technology are not representative of the diversity of society. So the technologies they develop, which then shape us and influence how and what we can do, will of course favor the populations that are similar to them, because technology is in a way an expression of their values and worldviews.

The third level is the level of institutions, and we'll come back to this: it is the question of elections and how AI is used in electoral processes.

And the last level, which we sometimes forget, is a more systemic view: AI today is a strategic asset for states, for governments around the world. There is a kind of battle between democracies and more authoritarian regimes, and how AI is designed, developed, and used sits in the middle of this battle, between countries that perceive questions of freedom of expression and freedom of opinion very differently, depending on their perspectives.
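As a side note for readers who like to see the mechanics: the filter-bubble dynamic described at the individual level above can be reduced to a few lines of code. This is a deliberately naive Python sketch; the tags, titles, and ranking rule are all invented for illustration, and real recommender systems use far richer signals and models.

```python
# A toy "filter bubble": rank feed items purely by how much they overlap
# with what the user has already consumed (hypothetical topic tags).
def similarity(item_tags, history_tags):
    # The only ranking signal here is overlap with past navigation history.
    return len(set(item_tags) & set(history_tags))

history = {"party_a", "tax_cuts"}  # inferred from the user's past clicks
feed = [
    ("Op-ed praising Party A's tax plan", {"party_a", "tax_cuts"}),
    ("Critique of Party A's tax plan", {"party_b", "tax_cuts"}),
    ("Report on Party B's climate bill", {"party_b", "climate"}),
]

# Items echoing the user's history rise to the top; dissenting perspectives
# sink to the bottom, which is exactly the democratic concern raised above.
for title, tags in sorted(feed, key=lambda it: similarity(it[1], history), reverse=True):
    print(title)
```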
>> On one of the things you mentioned, maybe it's worth diving a little deeper. I would be interested in hearing more from you about AI-enabled political communication and the various practices around it, around the world. I know you have had the opportunity to do research in various regions. Is there a way to present to our audience AI-enabled political practices from Africa, Europe, North America, so that we can be, maybe not too long, but rather precise about what is emerging?

>> One way to look at it is to say that a lot of these practices are actually similar. Their objective is the same: AI is used to process large amounts of data fast in order to modify people's behavior, so that they vote in a certain way or buy something. The practices developed by the advertising industry to promote products and services for the private sector are also used for political communication. One way to see this is to follow the data flow. First, AI is used to collect data; basically, it automates data collection, so as soon as you're online, AI is collecting data about what you do, what you look at, what you like, what you dislike, et cetera. To do what? That's the second step: to analyze this data in order to identify patterns in your behavior and to profile you, that is, to understand more clearly what your psychological profile is and how you would behave given different types of input. And that leads to the third step, which is to modify your behavior: if I'm a political campaigner and I know your psychological profile, I will present information differently in order to ensure that you vote for my candidate or not.

For this we have some very famous and well-researched cases, such as Cambridge Analytica, the political communication company that became sadly famous in the context of Brexit and the 2016 US election, because they essentially stole a large amount of personal data through Facebook to develop a model that allowed them, from a few data points, to profile individual users and then tailor communication very precisely. So we're talking about micro-targeting: if I'm a political campaigner, I will identify key messages, use AI to tailor those messages at the individual level, and then use AI to send them out to a very large audience. AI is really enhancing the scope and scale, but also the precision, of communication. An example of what this looks like: today AI is used to allocate advertising spots online automatically. It's not a person who decides what advertisement you see when you're on a page of the New York Times, for example; it's an AI that fills that empty spot with the right product or service, depending on the time, on who you are, and on what they know about you.

>> It's pretty scary. Looked at from the other extreme, it could be: wow, this is so helpful, because I only get what I'm interested in, as with the ads while I'm watching YouTube, for example. But it's also scary to think how accurate micro-targeted, tailored political messages could become, to the point of actually steering electoral processes, for example.
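The collect, profile, and tailor steps just described can be sketched in miniature. Everything below is hypothetical: the function names, topic tags, and message variants are invented for illustration, and production micro-targeting systems involve far more data and far more sophisticated models.

```python
from collections import Counter

def collect(events):
    # Step 1: automate data collection by aggregating a user's online
    # behaviour (clicks, likes, views) into raw topic counts.
    return Counter(topic for topic, _action in events)

def profile(counts):
    # Step 2: analyze the data to find a pattern, here reduced to a crude
    # single-interest "psychological profile".
    dominant, _ = counts.most_common(1)[0]
    return {"dominant_interest": dominant}

def tailor(variants, user_profile):
    # Step 3: modify behaviour by presenting the message framed to match
    # the inferred profile.
    return variants.get(user_profile["dominant_interest"], variants["default"])

events = [("economy", "click"), ("economy", "like"), ("security", "view")]
variants = {
    "economy": "Candidate X will lower your taxes.",
    "security": "Candidate X will keep your streets safe.",
    "default": "Vote for Candidate X.",
}
print(tailor(variants, profile(collect(events))))  # prints the economy-framed message
```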
I wonder: these things are pretty much out there, if I understand correctly that they are currently in use. To you as an academic, the question would be: what is the state of research on them? Would you have suggestions for a research agenda? Are we on top of it, or behind the curve, in terms of academic research on these kinds of technological tools and practices?

>> It's a very good question. If I were to give a general answer, I would say that we're far behind, for a couple of reasons. The first is that we need access to these practices, and they are developed mainly by the private sector, and in opacity. The situation has improved, with online platforms providing more and more access to what is published on their platforms, but there is still a lot to do. And there is also, of course, a question of funding: there is a very big difference between what online platforms can fund for their own research and what academia can do.

Maybe I would also add one point we haven't mentioned yet. We have portrayed AI here as something quite negative, but it's important to see that AI can also be very helpful in the context of democracy. For example, it can provide translation into different languages quickly and easily, making information accessible; and beyond language translation, it can translate very complex legal or technical texts into a simplified version, so that we non-experts can understand a topic that might otherwise be difficult. It can also be used by states and governments to reach communities that traditionally do not vote, and to provide information or incentives for them to vote. The same practices used to modify behavior can also be used in a positive way. And maybe one last example of a positive use: AI is already used today by governments to identify issues that have not yet been addressed. By collecting opinions online and trying to identify issues that people mention, it allows them to see that there is a need for a policy, a rule, or support in a domain that has not yet been addressed.

>> Well, thank you for all that; we'll come back to a couple of these things during the episode. But now I would like to move the discussion one notch, towards international relations. Let's talk about artificial intelligence and/in international relations. The nexus there is multifaceted, but it is increasingly significant, like other things we have observed become significant gradually in the history of international relations: civil society, technology, climate, and now artificial intelligence, right? I can think of two aspects I would like to suggest for your comments, and maybe you can think of more, but the two that were obvious to me when I was preparing for this episode were, number one, artificial intelligence as a tool, an instrument, a means for diplomats and for diplomacy in general (not only the people who do diplomacy, but diplomacy as a discipline, as a practice); and the second, artificial intelligence as a subject of international debate, discussion, and even argumentation: governance yes, governance no, global this, global that.
So let's begin, if you wish, with artificial intelligence as a tool for diplomacy. What is the status and potential for diplomats, and for professionals in the field of diplomatic practice generally?

>> We could say that today, very similarly to the national level, AI is used at the international level to process large data sets to advance global issues. Think of climate change: we use AI, for example, to build a digital twin of the planet Earth with its different systems, replicating the Earth in order to better understand how those systems interact and to make better predictions. But AI, like any other technology, has of course also been used in the context of weapons. Here we think of lethal autonomous weapons, or autonomous weapon systems: weapons that can target and attack with more or less autonomy, with a human in the loop or on the loop, but always with human oversight. We see these being developed and deployed, and the negotiation and governance of lethal autonomous weapons and autonomous weapon systems is ongoing but very difficult, because of course it's a question of national security; you can imagine the implications.

AI is also very present today, in times of conflict and peace, through disinformation. This is an important element to mention, because here we're talking about hybrid conflicts, hybrid threats, hybrid warfare. What does this mean? It means that today AI enables a myriad of actors to produce disinformation very easily, which contributes to the fog of war, to the effort of one country to compete with or attack another. There are many, many examples of disinformation being used today in times of peace, conflict, and war. And that blurs what used to be a very clear distinction in diplomacy between being at peace and being at war; today these lines are much blurrier. So again, it's the nature of technology: it has two faces. AI can be used to process large data sets to address global issues, and it already is, with some very good examples, including in Switzerland. And it is also used to produce disinformation; I'm sure you've all seen the deepfakes at election time, and not only then.

>> If we go one level down, towards diplomatic practice (let's imagine the work done in embassies and permanent missions, and by diplomats in official meeting rooms), is it sufficiently proven that AI can be used to write better, unbiased reports or summaries? Even at a very basic level of prompting, is AI a friendly tool for the mass of professionals we call diplomats, in terms of summarizing, capturing the key points, even drafting ideas? I've seen cases of the human prompting the machine, "My view on this is one, two, three; am I missing something?", the AI responding, "Well, there are these other fifteen things," and the human reacting, "Okay, seven are not relevant, two I had already spotted, but this one I never thought about." What is your sense, as a researcher, of this more practical, quite basic level of usage? Is there traction there? Is it being used?

>> It is being used, increasingly. The question of whether AI is biased or not is a tricky one. AI, at least from my perspective, is always biased, because it has been trained on a certain data set, and that data set is very often biased.
We're talking here about machine learning or similar types of artificial intelligence; there are different types of artificial intelligence, different technologies under this umbrella term, and what we often refer to today as AI is machine learning or similar technologies. It's an algorithm that basically learns, that has the capacity to develop its own definition of the outside world. How does it develop its own definition of the outside world, say, of a cat? By absorbing and processing a large amount of data containing many images of cats; in the end it comes up with its own definition of what a cat looks like, and it is able to identify a cat in different types of images. But of course, if you provide only images of big cats, or small cats, or of a certain breed, it will only be able to recognize that type of cat, not all cats. This is to say that any artificial intelligence today has been trained on a certain data set, and what is important to know is where the data comes from and how it has been organized: for example, how we selected the cats, and how we tagged what is a cat and what is not. These examples are very trivial, but when we're talking about international law, about global governance, this becomes extremely important, because a norm can be key in a negotiation. So that's the first element.

The second element is that, in the context of negotiation for example, AI can be very helpful as a way to augment negotiators, not to replace them. If you arrive in a crisis situation, very often there is no time to research the context, because you need to address the crisis right away. Here AI can be very helpful in quickly providing basic information, at least an overview of the situation. That's one example. Another is designing scenarios: if I have option A and option B, what future scenario follows from each? So it can inform decision-makers, policymakers, and negotiators with additional information.

Now, to conclude and come back to your original question: personally, I use generative AI (ChatGPT, for example), a form of artificial intelligence that can generate content; that's why it's called generative AI. But to me it still seems quite limited when the information involved is not fairly common, because, again, it has been trained on data and content coming from the internet. If we're talking about a large and broad consensus, then yes, it will provide information that is generally correct; but it can also make many mistakes. So I think it's important to distinguish between what AI can and cannot do. If we're smart enough, we will develop AI in a way that augments us, and I think AI can do that very well, not in a way that replaces us.
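Professor Duberry's cat example, a model that only "knows" the cats it was trained on, can be reproduced with a toy classifier. Here is a minimal sketch, assuming NumPy and scikit-learn are available; the two numeric "features" and the cluster positions are entirely made up for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data: the "cat" class contains ONLY small,
# short-haired cats (features: body size, fur length), contrasted with dogs.
small_cats = rng.normal(loc=[1.0, 1.0], scale=0.2, size=(100, 2))
dogs = rng.normal(loc=[3.0, 3.0], scale=0.2, size=(100, 2))
X = np.vstack([small_cats, dogs])
y = np.array([1] * 100 + [0] * 100)  # 1 = cat, 0 = dog

model = LogisticRegression(max_iter=1000).fit(X, y)

# A large, long-haired cat sits far from every training cat, so the model's
# learned "definition of a cat" excludes it: dataset bias in action.
big_fluffy_cat = np.array([[2.8, 2.9]])
print(model.predict(big_fluffy_cat))  # [0] -> classified as "not a cat"
```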
>> And this is a good segue into the other aspect of AI in international relations: the current hopes and concerns about the deployment of artificial intelligence globally, and its possible interference with the way international relations have been managed and practiced so far. So, for example, the old debate on the ethical use of artificial intelligence (you mentioned the case of lethal autonomous systems), but also the governance of AI, the governance of the internet, et cetera. What can we say about that, so the audience gets a sense of where we are at the global level in the debate about AI and nation states, and about the governance of AI? This feeds into your dichotomy between AI that augments humans and AI that controls them, and is basically used to suppress individual liberties, for example.

>> There is really this question of AI automation versus AI augmentation. AI automation is basically AI built to replace people, and a lot of the AI being developed is developed to replace human beings, because human beings have to be paid to do something, whereas AI can do it on your behalf, instead of developing AI that actually augments you.

In terms of governance: the governance of artificial intelligence builds on the governance of the internet and on previous efforts, many of which were conducted here in Geneva, actually. We can say that the governance of artificial intelligence remains a very young discipline. Contrary to the governance of the internet, it is emerging in an international context that is much more polarized than was the case for internet governance. On the other hand, we've learned a lot from the regulation, or non-regulation, of social media platforms and their impact. Here there is a very positive point: states, governments, and even online platforms have really understood the importance of regulation.

When it comes to regulation, we have different instruments. First, there are ethical guidelines and standards developed by organizations such as the OECD, and we can also think of UNESCO's ten principles for an ethical approach to AI. So a number of guidelines and ethical standards have been developed. We also have hard law; the EU AI Act is a very good example. And we have global efforts, led in particular by the UN High-Level Advisory Body on AI and the Global Digital Compact, which have conducted consultations and published an interim report a couple of months ago; another report will be published in September, and we will see how this develops. It seems that at the moment the state of the discussion is not to create another UN agency just for AI, because AI will have impacts on many different domains, such as health and the environment, and the specialized UN agencies will probably be best suited to address this technology, because they have the expertise: the WHO in health, for example.

Maybe another element to mention here, since I talked about perspectives, is that there are two main ways to look at AI governance, and they are very much linked to the different stages of AI development. Today, when we talk about AI, we're talking about what we call artificial narrow intelligence. What is artificial narrow intelligence? It's the artificial intelligence we have today: an AI designed for a specific purpose that basically cannot do anything else. It has no autonomy; it can only do the one thing it was built for. If you use an AI tool to translate, for example, that tool can only translate.
It cannot play chess, it cannot drive your car; it can only do that one thing. But there are future developments of AI to think of, which we call artificial general intelligence and artificial super intelligence. These developments are very controversial, in the sense that we don't know when, or whether, they will happen. Some experts and thinkers say they will happen in the next 20 or 30 years; some say ChatGPT is already almost an artificial general intelligence; and others say it will never happen. It's very difficult to know.

And yet, when we talk about governance (and this is the point I wanted to reach), there is one school of thought that says: today AI already has societal, economic, environmental, and political implications, so we need to address, govern, and regulate those implications now and in the near future, focusing basically on artificial narrow intelligence. And there is another school of thought, around the existential-risk, effective-altruism, long-termism type of movements, which says: wait, if we project ourselves into the future, we realize that if one day we have an artificial super intelligence, an AI more intelligent than all human beings on the planet in every form of intelligence, including emotional intelligence, it would present such a dramatic danger, an existential risk for society, that we should focus our governance efforts on it, on the far-future but very high risk of this type of AI. So these are the two schools; there are many more, but these are the two I mention here. My take is that both perspectives are important, but if I had to choose (and that's what I do in my research), I prefer to focus on the implications of today, because, again, we're talking about discrimination and bias, and these are extremely important, urgent, and pressing.

>> Jérôme Duberry, before we conclude, I wanted to touch on something you mentioned before: artificial intelligence literacy and youth. Let's not conclude the episode without talking about this, because it seems only natural that youth will live in a world with more artificial intelligence, more applications, more promises and hopes, and also more challenges and problems. And literacy is interesting in itself: it is emerging, for example, in our environment as a library and archives here in Geneva, where we now see an emerging need from clients for literacy about technology in knowledge applications and systems, even before AI. So let's talk about those two things and how they combine, literacy and youth. Of course, you're a professor, a teacher; you mentioned that context.

>> Yes. When we talk about youth and digital technologies in general, what we've realized is that it depends on the country, and we need to open this parenthesis first: the digital divide still exists today. There are still countries that are more connected than others, regions that are more connected than others, a gender divide (men are more connected than women, for example), a generational divide, et cetera. So this question of the divide is multifaceted.
But speaking generally, we could say that youth tend to be more connected than older generations, they tend to use digital technology more, and they have developed digital skills, in the sense that they have the capacity to use these tools. Yet very often they are missing digital literacy. For example, they rarely have a critical perspective on these technologies; they don't necessarily understand or see the relations of power behind them. And it is the same for AI. When we talk about AI literacy, we're talking about the capacity to have a basic understanding of this technology, but also to understand the relations of power behind it: who the actors are, and what is at stake. And why is it important? We can come back to the question of governance, or of democracy: in order for policymakers, diplomats, citizens, and youth to contribute to the governance and regulation of these technologies, to have a say, they need to be informed. You need to understand how it works, but also what the implications are, including the environmental ones; we haven't talked about them, but AI has huge environmental implications.

On AI literacy, we've conducted a project here in Switzerland where we basically put young people in the role of an AI developer, using what can be referred to as design fiction: we ask them to write a story about the future societal implications of an object enhanced by AI. It's really interesting, because it then lets us discuss these questions of societal implications. The main element is to give them back agency, so that they can co-decide what the role of AI, or of technology at large, should be in their society. That's the most important element: we should not leave this decision to large tech companies alone. Of course, they have to be at the table; they have something to say, and they have the expertise. But at the end of the day, citizens and policymakers must be included.

>> It strikes me that this is a participatory, inclusive approach. Do you think technologies like those underpinning AI applications are more naturally inclined and open to co-design and participation processes, because they can be accessed remotely, by large crowds, et cetera? Is that kind of feature built into these systems today?

>> That's a difficult question to answer in general. I would say that they can be, if they are developed with this objective in mind. It's a really good question, because it allows me to answer with three other questions that are extremely important. The first is: who designs the technology? And who designs the technology will have an impact on who benefits from that technology, and then on who is harmed by it. If the technology is developed with the objective of being co-designed or co-developed, then yes, for sure. But if it's an AI that is proprietary, that is closed, that can only be accessed by its developers or the company, then no.

>> We have to wrap up the episode, and I would like to do so with two questions to you in one go. The first goes back to what we just discussed about future generations, implications, and literacy.
What do you anticipate, as an expert today, about the future of AI for humankind, as far as you can see? And the second: maybe you want to leave our audience with one important thought that you've been cultivating in your experience as a researcher, and as a teacher in contact with youth, on these questions of artificial intelligence.

>> To answer your first question, one element I think is important to mention is the convergence of different emerging technologies. We're talking today about artificial intelligence, which is an emerging technology; we have yet to see its future developments. But at the same time we have biotech and synthetic biology, neuroscience, quantum computing, and a number of other emerging technologies that feed into each other. I was recently at a conference with a researcher on synthetic biology, and she was saying, "Today I cannot separate artificial intelligence from synthetic biology, because the two go hand in hand." So when we talk about the future, I think this convergence is an important element, and it makes the discussion about AI governance even more complex.

As for one thought to leave with the audience: for me the key element, which I have mentioned already, is really this question of agency, agency and participation. This is maybe a bias of mine, because it has motivated me for years, but it's extremely important, because AI is and will be used in such a large array of activities, and very often AI is invisible; we don't see it. So that's an issue as well: how do you make it more visible, and how do you explain AI? There is a whole question around AI explainability. It's not enough to show you the code: I can show you lines and lines of code and call that transparency, but do you understand it? No. So how do you explain how an AI makes a decision, beyond just showing lines of code? This matters especially when AI is used in the context of governance and decision-making, because it is closely linked to the accountability and legitimacy of the final decision. So for me, what is crucial is definitely this question of agency: how do we make sure that citizens are enhanced and augmented by AI while keeping their agency? To do this we need education, we need literacy, and we need more episodes like this one.

>> Fascinating. Professor Jérôme Duberry, thank you so much for taking the time to be on The Next Page with us.

>> Thank you very much. Have a good day.

[MUSIC]