Jonas Christensen 3:03
Gianclaudio Malgieri, welcome to Leaders of Analytics. It is fantastic to have you on the show today.
Gianclaudio Malgieri 3:10
Thank you very much, Jonas. It's my pleasure to be here.
Jonas Christensen 3:13
And we have a really interesting and thought-provoking episode ahead, because we're talking about something that I believe you and I both find very important in the data space, but also underappreciated, and it's really about making AI sustainable. That can mean many things to many people. In this interview, we'll be talking about ethics, privacy, and also the concept of data pollution. That might be something that listeners haven't heard about before, but at the end of this episode, you will know much more about it from one of the world's experts on this topic, which is Gianclaudio. Now, Gianclaudio, before we get to that, we want to learn a little bit about you and your background. So in your own words, could you tell us a bit about yourself, your career background and what you do?
Gianclaudio Malgieri 4:00
Sure. I am a researcher in law and technology. I am an associate professor of law and technology at EDHEC Business School, at the Augmented Law Institute in Lille, in France. And I also have the honour to co-direct the Brussels Privacy Hub. The Brussels Privacy Hub is a research and dissemination platform within the Vrije Universiteit Brussel, the Free University of Brussels. In addition, I conduct research and teach on data protection law, privacy, AI regulation, data sustainability, consumer protection and intellectual property, mostly in the digital sphere. I am also an external researcher and external expert for the European Commission, and I am on the editorial boards of some law and technology journals. Basically, this is me in a nutshell.
Jonas Christensen 4:00
Yeah, so quite a deep and broad remit across academic research and applied regulation as well, through the European Commission. So a really interesting background that we will learn a lot from today. Now I'm interested: how did you get into this space of data and artificial intelligence in the first place? Because you're a legal professional by background. Could you tell us a bit about that?
Gianclaudio Malgieri 5:20
Yeah, sure, exactly. Even though I am a legal expert by background, I've been working with computer scientists for at least the last five years, and I've had the pleasure to publish in computer science journals. Why? My interest in data and data protection came very early in my research and my studies. I thought that the challenge of data, the challenge of technology and of regulating technology, was one of the biggest challenges that even the law faced. I started to do some independent research after my university studies, when the GDPR was under its first discussion, its first proposal. So the discussion in Europe on data protection was just starting. For me, it was an interest that came and grew and grew and grew. And then artificial intelligence became the keyword, because of course the GDPR and data protection are also a way to understand how we could regulate automated decision-making. You know, technologies are everywhere, and decisions are increasingly automated. So for me, the challenge was: Do we have a right to explanation? Do we have a right to understand what's happening? What do we do with black boxes? And with those challenges, I became more and more interested in discussions with computer scientists: how we can open a black box, what AI is, how AI can decide for us. Not just legal, but also philosophical perspectives. And so that's how I started on these topics.
Jonas Christensen 7:03
Yeah, very interesting. So you do have an appreciation for both sides, if I may call it that: the development of the technical components of AI and the computers that sit underneath, and the technical challenge, but also the ethical and philosophical challenge of ''just because we can doesn't mean we should'', and all those things. This really brings us to what we're going to talk about today, which is making AI sustainable. Gianclaudio, the thing I think about when I think about this space is that the early conversations on, if I may call it AI ethics as a broad topic, were really around model accuracy, explainability and interpretability of models, so we can actually understand why we've predicted or produced the output that we have. These are still fundamental factors in AI ethics. But there's now more and more of a focus on the broader social impact of AI, human rights, data privacy, and using AI for good. Because it can also be used for bad, as we will probably talk about later, because we're seeing that in war zones and other places at the moment. So there's been quite an evolution in this space. Could you describe to us the evolution over, say, the last 20 years or so in the space of AI ethics?
Gianclaudio Malgieri 8:17
So as you said, Jonas, computer scientists have already tried to investigate accuracy, explainability, interpretability. But my question is: why? Why should we wonder, why should we ask about explainability, interpretability, accuracy, etc.? Not just because we want better utility from AI, but also because we want AI to respect human values, legal values, fundamental principles and fundamental rights. So every principle, like the ones you mentioned - accuracy, explainability, etc. - has a role, has a purpose for humans. Accuracy is a guarantee against biases and discrimination, for example. Explainability gives a dignitary justification to algorithms, because if I can explain, if I can understand and interpret algorithms, I can contest them and have better decisions. And as you said, in the last few years - I think mostly in the last 12 to 15 years - the attention on the human and social impact of AI has grown a lot. Why? First of all, because of socio-technical changes, of course. Now we use AI to make so many different decisions in many different fields. Before, the use of technology was based on repetitive actions. Now, technologies can really take meaningful and impactful decisions about our lives in many different sectors. Think also of the importance that social media now has, for example: content moderation and online behavioural advertising are all based on AI, and they really influence our lives. In addition, what I would like to mention is that there were several scandals in recent years that made us feel the urgency to look at the social impact of AI. Just to mention some: Cambridge Analytica, for example. The case is very problematic because Facebook data were used to manipulate people's electoral choices - the impact on democracy. We can say similar things for other decisions, like Brexit, and so on. And other important scandals, for example the COMPAS case in the United States, where algorithms were being used to predict the risk of recidivism of accused people, and they were clearly biased against Black people. So I think that all these scandals raised scholars' attention and made this discussion urgent, so that legislators also started to really think about meaningful ways to regulate. We have the GDPR; we have other instruments we will have the opportunity to mention. In the US, the discussion is a bit slower in terms of legal reforms. But yes, I think that mostly in these last 15 or 20 years, there was this increase in discussion.
Jonas Christensen 11:33
If you're a regular listener of this show, you will know that I have previously called these sorts of events "the digital equivalent of oil spills", because they are really haphazard or silly experiments that spill data all over the place and have real consequences for other humans. We need to treat data with the same respect we give to oil platforms digging stuff out of the sea, and all the rest. So I'm interested in really getting to the core of what data protection is. And I'm going to do that not by asking you ''What is data protection?'' but in a slightly different way, by asking you the question, Gianclaudio: why is data protection so important for the future of our society as we know it?
Gianclaudio Malgieri 12:19
Well, yeah, I think this is a great question. Wondering "why" is the first step to understanding how important it is. So basically, data protection, I think, is essential for the future of our society for different reasons. First of all, respect for democracy. A society, a legal system, that can respect the personal data of individuals is a legal system that is generally well equipped to protect democracy, its own democratic system. If surveillance is indiscriminate, if police surveillance or police data processing, or political uses of personal data, have no restrictions, of course the democratic system will suffer. And we have many examples around the world. Think of systems in which data protection is not adequately protected through laws: usually, these systems have no good democratic structures. It's quite evident. So this is the first reason. The second reason is something more subtle, more subliminal. If we protect our consumer experience online, we are protecting our mental freedom. So it's not just electoral issues and democratic issues; it's the bigger picture of our identity as users, consumers, online citizens. Now everything, especially after COVID, went online. Everything became digital. There was a great increase in the use of digital identities. For example, during the pandemic most public administration services were digitalised, if they were not already. This made data protection even more urgent. So basically, in a nutshell: democratic reasons, but also the protection of our identity as consumers, as citizens, as users, and so on. Data protection, in a nutshell, is about rebalancing power imbalances. And as you said, if data is the new oil, the big data collectors are the new capitalists, and we need to protect our identity and our powerlessness, if you allow me this term. Looking at this power imbalance, data protection might be a good tool.
Jonas Christensen 14:53
I absolutely agree with you that these very large corporations collecting huge amounts of personal data on individuals are a real risk to society if not kept in check, and we can only rely to some extent on them being the good guys. That's true of the supranational corporations, but we have the same challenge with nation states. That's a little bit harder to regulate, because nation states typically regulate themselves. But we'll get to that in a minute. Before we delve into those topics, I am interested in a new concept that you've introduced to me. When we talk about the misuse of personal data, we typically view it through a privacy lens: someone's personal data has either been compromised and ended up in the wrong hands, or personal data is used to manipulate or create adverse outcomes for individuals or minority groups. So it's very personal to an individual. But you also talk about the concept of data pollution, which is much broader. Could you explain to us what data pollution is and why we should care about it?
Gianclaudio Malgieri 16:01
Sure, thank you for the question, because I think this is a very key question. Data pollution is actually a term invented not by me, but by Professor Omri Ben-Shahar, in an article published three years ago in the Journal of Legal Analysis. I think that powerful concept can be exploited more in the future. Data pollution is basically the idea that, again, if data is the new oil - well, oil pollutes a lot. Fuel, oil, gas: they pollute, right? And data can also have negative externalities on the digital environment. The reason why I started to analyse data protection from the data sustainability perspective is also personal: I am a law professor in a business school, and we have many courses and Master's programmes there, one of which is on sustainable business. They asked me whether and how I could see myself in that programme, and I said yes: data protection is also a matter of sustainability. So, just to explain why data pollution. We can make a comparison between the physical environment and the digital environment. In the physical environment, of course, we need oil, we need energy, and energy has negative externalities on the environment, producing climate change and so on. But the digital environment can also be polluted. The equivalent energy, which is data - personal data, non-personal data and so on - has externalities too. In particular, if we process personal data in a way that is not respectful of democratic values, individual rights and fundamental principles - which are of course based on legal concepts, but legal concepts on which we can have at least a minimum agreement about what the good values are for the protection of personal data - well, if we do that, there might be externalities on democracy, on transparency, on freedom and individual rights. So data pollution, in my view, means having a digital environment where the processing of personal data, and even non-personal data, is unregulated and creates paradoxes and forms of under-protection for the most vulnerable data subjects. If we process personal data without any restriction, if we sell personal data without any consideration for individuals' autonomy or dignity, this means that the digital environment will be polluted. Polluted in the sense of being less trustworthy, less democratic, even for society in general. So for me, data pollution and data sustainability are a way to look at this from a sustainability discourse. And just to conclude on this, I would like to say that data sustainability is not just about obstacles and burdens. Data sustainability means that companies can profit from processing personal data, and data protection is not just an externality; it's also an advantage. Because if the processing of data is more accurate, more purpose-limited, more necessary - if all data protection principles are respected - companies can grow. They can grow in their reputation, their trust portfolio, their relationship with consumers. So this is what I would like to say: we can have a good, sustainable environment online, which means respecting fundamental rights, respecting public interests like democracy and the well-functioning of public administration, and profit for businesses.
Jonas Christensen 20:20
Very interesting. And I have a lot of images in my head of what data pollution and unsustainable use of data might look like. Could you give us a very practical example of data pollution, to really make it crystal clear for listeners what you mean?
Gianclaudio Malgieri 20:35
We can make many examples, from real stories to possible dystopian scenarios. Just to give a couple: data pollution might be, for example, that you participate in a medical research project as a volunteer, and after two weeks your insurance raises your premium, because they discovered from that research that you have a higher risk of dying next year, and so your life insurance carries a higher financial risk. Data pollution means, for example, that the choices you make online are influenced by other digital footprints that you left behind in your earlier digital experiences. Or, to give an example that is not so well known in the literature: you just lost your dog, or a relative, or whatever, and then you start being exposed to online advertisements that refer to the fact that you are in grief - even for unrelated things. For example, advertisements about shoes, about, I don't know, bags or cars, but you keep seeing images of a father and a son, or a man and a dog, or whatever. So information that you were not aware they knew about you is used against you. In a nutshell, for me an example of data pollution is any form of exploitation of individual vulnerabilities based on your personal data. And that exploitation can be manipulation, discrimination, stigmatisation, stereotyping, and so on. Of course, it's difficult to give specific examples, but I tried to give some.
Jonas Christensen 22:46
Yeah, very good and very helpful, thank you. One of the examples going around in my head is a recent one that's really come to light during the Russia-Ukraine war, because we have an example here of a company called Clearview AI, which is an American facial recognition company. They produce law enforcement software that can recognise faces with a very high degree of accuracy - more than 99% - based on a collection of billions of images from the internet. They've scraped social media and other places to get this information. And you and I are very likely in that database: if you have a Facebook or LinkedIn profile or something like that, you're probably in it. They use these images, associated with the names and identities they've collected in the same exercise, to recognise individuals from pictures or CCTV footage. And this software has been used to catch bad guys from surveillance footage and to identify soldiers in Ukraine. You might say, "Oh, this is actually a good purpose; it's worthwhile and it's helping society." But on the other hand it's also a problem, because no one in this database has actually agreed or consented to being in there. Because they put their information on Facebook with a picture, all of a sudden they're in this database, being used for many other purposes without their consent or even knowledge thereof. Is that a real example of data pollution? And if so, how do we deal with this sort of grey area? How do we determine whether something is ethical or not in this scenario?
Gianclaudio Malgieri 24:26
Yeah, this is absolutely a great example. The tension is between the autonomy of individuals and public interests. In the example you gave: giving consent for something means guaranteeing people's autonomy over their personal data, over their digital life. On the other hand, we have public interests. So should we always base data protection on consent? Because this is one main issue. And every time we don't use consent, are we polluting the digital environment? Are we data polluting? I would say the trade-off, the balancing between autonomy and public interests, doesn't mean that we always need to use consent. We can have public interest legal bases for processing data; we can process data on the basis of laws, without people's consent. The GDPR, and even before it the Data Protection Directive in Europe, were very clear on that. The point is that there should be other safeguards and other protections to make sure that individuals are protected even if they didn't give their consent. For example, there should be a right to object to data processing - a right to block the data processing that is going on - a right to receive explanations, transparency, fairness. Fairness is a very underestimated concept in data protection law. We have it in many different data protection laws around the world, not just the GDPR. What does fairness mean in practice? You mentioned ethics and sustainability. Fairness is a difficult concept, but the final goal is exactly ethical and sustainable data processing. Fairness means making sure that the data controller - which can be the state, the army, or private companies - doesn't abuse its power imbalance. So how can we protect people? How can we deal with this situation? I would focus on the notion of risk. The GDPR, and other regulations about data protection and AI, are risk-based. So first, we start from risk - risk to fundamental rights and freedoms - and we try to anticipate those risks. Once we have them clear in our mind, we look at the necessity and proportionality of our measures, considering the risks for individuals, and then we decide whether certain data processing activities can be done or not. It's like environmental law: before constructing a new building, we have to understand what the risks will be for the impacted population, and then we strike a balance, right? So for the example you gave about Clearview AI: first we should analyse what risks this can have and what benefits. And in order to understand risks, we should understand what the possible harms are. And it's not clear - there isn't a list of harms that AI can produce. So of course, it's a bottom-up approach, a bottom-up exercise that can have a lot of problems, issues, drawbacks, but at least we should try. And data protection legislation is already suggesting and recommending these kinds of practices. So yes, difficult to do in practice, but we have the principles, and we should work on these principles.
Jonas Christensen 27:59
Yeah, and I think we also have to recognise that a lot of these questions are very novel to human beings; they are actually stretching our brains a little bit. We haven't really come across them before, so it is hard to identify the right path through the legal frameworks and, as you mentioned, hard to determine what fairness is, how we measure it and how we think about it. It's something we haven't had to think about at such a big scale before. We might have to deal with fairness every day with a small group of individuals, or in one-to-one situations, but not for millions of people in a context where algorithms are making decisions. These sorts of questions are very new to us as a human race. So I'm interested: what do you think about our current data protection frameworks, like the GDPR that you've mentioned? Are they adequately equipped to deal with the individual as well as the collective interests?
Gianclaudio Malgieri 28:56
Well, I think so. I think the GDPR and the similar data protection legislation of other countries - like, for example, the UK GDPR, or data protection laws in other parts of the world like Brazil, Japan, Israel, Switzerland and so on - are based on this complex balancing exercise between individual interests and public interests. So I think yes, and we see it in many different parts. First of all, as I said before, it's not just consent: you have other legal bases, including public interests. Second, individual rights, like the right to be forgotten. The famous right to be forgotten is not an absolute right; it should be balanced with freedom of expression, journalistic purposes, and so on. Even research: many people said the GDPR could block research, research in the public interest. We saw that this was fake news; it was not true. COVID research was possible even under the GDPR, and thanks to the GDPR, because you can do research even on sensitive data - health data, medical data, etc. - if you just respect some safeguards. You don't need people's consent, but you do need to protect those data through safeguards: through transparency, through risk assessment, and so on. So this is, how to say, an intellectual revolution, an intellectual change. Forget about consent, or notice and consent, and let's start to consider a dynamic and continuous risk assessment approach. I think this is the best way to take into account public interests and individual interests.
Jonas Christensen 30:42
Yeah, that's a really interesting concept. Because I think when a lot of people give consent to the use of their data, they're actually not really sure what they're consenting to. And it's not necessarily transparent, even if those seeking consent are trying to be as transparent as possible. What they're trying to do is typically really, really complex, and the layman might not understand it. So that is a really interesting concept, this sort of continuous risk assessment. How do you see that being used in practice, in the use cases that you come across?
Gianclaudio Malgieri 31:12
Well, the reality is not as good as we might expect. Risk assessment in practice is carried out very superficially, even in European Union countries. So in principle, risk assessment is a powerful tool; in practice, data protection impact assessments and data processing risk assessments are not taken seriously. Not because data controllers don't want to - or not just because they don't want to - but because it's difficult to assess and quantify risks. Risk is a purely economic concept, a management concept, right? While fundamental rights are purely abstract, humanistic concepts. How can you quantify the risk that certain data processing techniques or data processing activities will have an impact on your freedom of speech? How can you translate that into numbers? It's difficult, and there's no guidance at the moment on how to translate these general and abstract concepts into numbers. How can discrimination be quantified? All these issues. So what happens in practice - and I'm sorry not to give better news - is that the risks that are assessed now are just cybersecurity risks, the risks that computer scientists can control well through equations and algorithms. You can write an equation even for privacy-enhancing technologies. You know, there's this whole discussion - not new, it's been 10 or 20 years that computer scientists have been discussing privacy-enhancing technologies. Why do they enhance privacy? Because they enhance anonymisation and they avoid identification. But that's not the whole focus of data protection. Data protection is about respecting fairness, transparency, autonomy, and you can't protect fairness, transparency and autonomy just by avoiding cyber attacks. Cyber attacks are a part of the problem. There are many other problems, even in physiological situations, not just pathological ones. My relationship with my boss, with my employer asking me for some data: there is no attack - it's not a cybersecurity issue - but there is a power imbalance problem. And data protection is there to protect me, and risk assessments should take into account how that power imbalance can be exploited, so that, for example, I don't end up consenting to share sensitive data that I actually wouldn't like to share. So again, risk assessment should look at the bigger perspective, the bigger approach. How could we do this in the future? There are different ways: first, guidelines from data protection authorities; second, being creative - foresight studies, futurist studies, I don't know. There are different ways in which we could do it. Let's start.
Jonas Christensen 34:11
''Let's start''. I like that. One of the things you mentioned was that there are many different data protection regulations governing this space around the world, so different jurisdictions have different regimes. In Europe it's the GDPR; you mentioned Brazil and America and so on. They are all similar in some ways, but the underlying philosophical background of each is potentially slightly different, and therefore they play out differently. The way I read the GDPR, it is very much connected to human rights and the protection of the individual, which is a big part of the European soul across these 50-odd countries. Contrast that in some ways with, for instance, China, where they are introducing a very strict data privacy regulation at the moment. It's very strict on private enterprise, and there are more liberties afforded to government use of private data. The Chinese government has itself declared that data has become a national strategic resource, and these laws basically mandate that information stored on Chinese citizens is kept inside China. That means, for instance, that multinational corporations cannot take data on Chinese citizens out of the country, and so on. So in other words, we have clashes of regulations across jurisdictions, and data is flowing very easily from one place to another. But the regulations are not transferable; they have borders. The digital environment doesn't necessarily have that. How do we coordinate and manage all this regulation across jurisdictions? And is that even possible?
Gianclaudio Malgieri 35:52
Well, this is a million-dollar question, but let's try to swim in this ocean of different regulations. First of all, something I always tell my students is that when you look at existing or proposed laws, you should always wonder about the legal and political traditions, the legal and political reasons, and the political purposes behind a new piece of legislation. And you mentioned something very interesting: China. The Chinese data protection law is very new, proposed just a few months ago. The US has this fragmented protection: you have the California Consumer Privacy Act, then some other American states have data protection laws, and then you have sectoral protection, right? You have children's data, you have health data, etc. How can we deal with all of them? First of all, we should be clear about why there are different protections. It's not just because of different legal histories and legal experiences; it's also because there are different goals. Why did China start to look at data protection more seriously? Because Big Tech was gaining more power than the state, and these forms of private monopoly are very problematic for China's institutional system, for the structures of power in China. So in that sense, if in Europe - for example in Germany, one of the first countries to regulate data protection in Europe - the fear was Big Brother, George Orwell's Big Brother, the state that can control everyone, in China the fear is about the little brothers: all the Big Techs that could gain more and more power. Even though many of those Big Techs cannot operate in China, you have other important companies working there. And in the US, the trigger was, again, to prevent the federal state from exercising excessive surveillance over individuals. But then after September 11, the problem became security, and so data protection changed a lot in order to guarantee the state big powers. And that was also one of the triggers of the GDPR. So I am perhaps confusing things even more, but just to clarify: there are different reasons why there are different protections. How can we solve this fragmentation? The GDPR tried to do that through extraterritorial rules - having an extraterritorial impact. The GDPR says: "If you want to process the data of people in the European Union, even if you're not a European company, you have to respect the GDPR." This is called the Brussels effect, or Brussels impact, right? And I think it was successful, because now India, for example, has proposed GDPR-like legislation on data protection, Brazil copied the GDPR, and in Turkey we have something similar to the GDPR. So if you want the new oil of Western countries, and in particular of European Union countries, you have to respect individuals' data. So there was this, how to say, positive political contagion. Secondly, international agreements. And of course we have to be a bit cynical here: how come the international agreement between the European Union and the United States about data protection was ruled invalid two times? First the Safe Harbour agreement was ruled invalid, and then the Privacy Shield, in the Schrems I and Schrems II judgments, and now we have a new Privacy Shield. Why? Of course, political tensions around the world and the international crisis in Ukraine brought the European Union and the US closer.
And this new, probably even stronger energy link, the energy provision link between the United States and the European Union, helped political leaders find an agreement even on data protection. So, just to say - it's difficult to predict - I think international agreements are one way. There are also international standards, like the Council of Europe's modernised convention on data protection. But it's difficult to navigate, of course, and there are many, many historical events that can influence these things. So I'm sorry, I can't give an optimistic answer.
Jonas Christensen 38:15
It was a very hard question that I pulled out of my hat there for you. I think the interesting bit is that we talk a lot about bias in data, but there's definitely also bias in legislation and in the creation thereof, and it all comes out of a broader political environment. I think your example of the European-US relationship, and how it's basically zig-zagged a bit over the last, say, 12 years or so, is a very good illustration of that.
Hi there, dear listener. I just want to quickly let you know that I have recently published a book with six other authors, called "Demystifying AI for the Enterprise: A Playbook for Digital Transformation". If you'd like to learn more about the book, then head over to www.leadersofanalytics.com/ai. Now back to the show.
Gianclaudio, you talked a bit about this nationalisation of data, almost, because we can see from this conversation that data can be positive, but it can also be a weapon of sorts in the hands of the wrong people or nation states. Will we see more and more nationalisation of data, more and more protection of that data akin to what China is doing, across the world? Or do you think that is more of a Chinese phenomenon?
Gianclaudio Malgieri 42:09
Well, even in Europe we have tried to build European Union-based cloud services, just to avoid giving all our data to the US. And there are other examples. So yes, I think data can be a political weapon - or if not a weapon, at least a form of political power that states can use to influence international policymaking. I just gave you the example a few minutes ago: the new agreement on a possible new Privacy Shield was, I don't know if encouraged by, but at least it happened in the same days in which the United States and the European Union grew closer because of the international crisis with Russia. So yes, I think data can of course be a political topic. It has been in recent years and it will be in the future. I don't think - and this is a bit sad - that people, that voters, consider data protection as important as other values. And this means that if you ask European people, "Do you want to give your data to the US if, in exchange, they protect you in economic or military terms?", well, probably European people would say yes. So I think one key step - if not a solution, at least a step - would be to work on education about data protection: to help not only children but also adults understand that data protection is key for our democracy, for our dignity, for our autonomy. In that situation, data protection could become less vulnerable to international crises.
Jonas Christensen 44:16
Yeah, and I think there's also a part of the population that doesn't even need military protection to hand over their data; they might just want the newest iPhone, and that's enough to entice them to send data to America. Now, we're coming towards the end here, and I have saved one of the most important questions, in my opinion, for last. I ask you because you are an educator in this space: as you've mentioned, you do lots of education across different degrees, and you're part of educating the future leaders of this space. So how do we educate our future business leaders, legislators and legal professionals to deal with these AI sustainability issues? What knowledge and skills should listeners of this show, and others wanting to learn how to deal with this, seek out?
Gianclaudio Malgieri 45:10
Oh, well, great question. First of all, the first piece of knowledge would be learning other theoretical languages. Lawyers should understand computer science discussions, and computer scientists should understand legal discussions. So first, it's a real problem of communication between two fields, right? We have different definitions, different notions, different principles, the same words used ambiguously. We should first start to understand them. A simple example: the notion of sensitive data. Computer scientists have their view, lawyers have another view. Or the notion of privacy: privacy in computer science has a very technical meaning, while in the legal discussion it has a broader meaning. So first, I would say, understanding each other's language, the other expert's language. Second, we should focus on risks. So a goal for education should also be changing the approach. Legislators, leaders, computer scientists - every powerful actor who will have the role of deciding about implementing a new technology, implementing AI in a given context - should consider involving vulnerable individuals in the decision-making. Involve, for example, impacted people, their representatives, vulnerable people, associations, experts on vulnerabilities and so on, in a participatory design process. This will be key. So, just to summarise: interdisciplinary discussion and participatory design. I think these are the two challenges. For the first, we have to study more, we have to talk more, we have to write things together, we have to work together as lawyers and computer scientists - law for computer science and computer science for law. Second, participatory design. We still have to work out how to do that, but it's a mindset: we should understand that every time we decide about a new technology, we are deciding about the impact on a whole community, and we should involve that community. Democratic processes, accountability processes, different methods - but at least let's do that.
Jonas Christensen 47:34
Yeah, and I think this is really what we all have to think about: whenever we're designing something, even if it's a piece of code in the backend of a system, we're actually impacting someone's life. So we do need to really appreciate that and have this cross-functional knowledge. It's very hard, it's very complex and it's very difficult, but it's the world we live in, and that is a real challenge for us. So listeners out there, have a go at it, good luck, and don't forget Gianclaudio's words that you've heard here today. Now, Gianclaudio, we've got two questions left. We're almost at the end; they're short questions. The first one I always ask guests on the show, which is to pay it forward: who would you like to see as the next guest on Leaders of Analytics, and why?
Gianclaudio Malgieri 48:19
Well, a difficult question, because there are many, many wonderful researchers and leaders on AI regulation and so on. But perhaps I would suggest a dear co-author of mine, Professor Margot Kaminski from Colorado Law School, because she has tried to import European Union concepts and protections into the United States debate, but also because she has tried to really elucidate some of the most complex concepts in analytics regulation and AI, like contestation, human involvement, and so on. So I think she would be a great guest on your very interesting podcast.
Jonas Christensen 49:05
Wonderful suggestion, and I will definitely reach out to her after the show. So thank you so much for that. Really appreciate it. The last question is: Where can people find out more about you and get a hold of your content?
Gianclaudio Malgieri 49:18
Yeah, sure. There is my website, https://www.gianclaudiomalgieri.eu. I try to keep a blog there, as well as a list of publications and news about my activities. So yeah, that might be a good way to stay in touch.
Jonas Christensen 49:35
Yeah, so listeners, do go and check out Gianclaudio's website; I will put a link to it in the show notes, so you won't have trouble finding it. Gianclaudio Malgieri, thank you so much for being on Leaders of Analytics today. It's been such a pleasure to listen to your knowledge, and my head is spinning a little bit from all the things that I now need to go and learn. But I think that's a good thing, because we're pushing the envelope here with new knowledge on an underappreciated topic that needs more light of day in the world. So thank you so much for your time today, and I wish you all the best on your continuing journey of research and educating our fellow citizens of the world.
Gianclaudio Malgieri 50:16
Thank you so much, Jonas. I learned a lot from your questions. It was great and good luck with the podcast. It's great.