Jonas Christensen 2:58
Ivana Bartoletti, welcome to Leaders of Analytics. It is fantastic to have you on the show.
Ivana Bartoletti 3:06
Thank you. It's great to be here.
Jonas Christensen 3:08
Yes, it is so good to have you. We have a lot to talk about today. You have a very interesting job or role. You have a very interesting background. You work in a very interesting organisation that is huge. And also you have written an even more interesting book, which is probably going to be what we talk about for the most part in this podcast. But before we get to that, I'm sure the audience would love to hear a little bit about you. So could you tell us a little bit about yourself, your career background and what you do?
Ivana Bartoletti 3:37
With pleasure. So I currently am the Global Chief Privacy Officer for Wipro, a leading company in the area of digital transformation, cloud, artificial intelligence and robotics, helping companies innovate with technology and transform digitally. But before then I was at Deloitte, where I was a director focusing on artificial intelligence and the safeguards and governance around the introduction of AI in automated decision making, as well as blockchain and other technologies. And before then, I was heading up the privacy practice for a consultancy firm in the energy industry, which was really interesting, because there's a lot of stuff around smart metering and the grid and all of that. But my interest in privacy stems from a human rights background. So I actually started about 20 years ago, from a sort of political approach to this. I got very much involved in human rights, in politics and in civil liberties, and that's where everything came from. So I started from that approach, and that approach still remains with me in terms of the way that I see privacy, data protection, artificial intelligence and all of that. So that has been my journey: coming from more of a human rights background, going into information security, and then I took a second degree in law and here I am.
Jonas Christensen 5:02
So how did you end up in this world of technology and AI and data through that journey?
Ivana Bartoletti 5:08
Yeah, it's a really interesting question, because I started in information governance and information security. And when I first moved to the UK - I'm now in Germany, but I went to the UK in 2007 - I already had a background in human rights and civil liberties. But then the focus became much more technical, and I wanted to really combine the technical element with the legal element, because I strongly believe that the two are completely intertwined. So I really wanted to focus on that. But then, with artificial intelligence, we started to realise that alongside the wonderful things that technology was bringing to life, there was a much darker side to it. And when the non-neutrality of data became more mainstream, I think I realised that that was perhaps the field where I wanted to be. Because I think there is nothing more relevant these days than the non-neutrality of data, the non-neutrality of technology, and technologies have become so transformative. Like, for example, artificial intelligence: they have enormous power but at the same time they carry enormous risks. And I think I got to the stage where I was so intrigued by and so passionate about new technology, and I was starting to really realise how much I wanted to put safeguards around these new technologies. And the only way of doing that was not just to learn more and to get involved on the technical side, but also to make everyone aware of the risks and to try and find concrete solutions, at both the technological and political level, to limit the risks that these technologies bring with them.
Jonas Christensen 6:49
The interesting bit in this topic, for me, is the continuous blurring of the lines between being a pure technologist versus being a designer of processes, solutions and products that actually have quite a large impact on people's lives and play with their minds. There's a lot of psychology in it, whether you like it or not, whether you're aware of how you're affecting people's psychology or not. And I think some of the best in the trade, the social media empires out there, do know exactly what they're doing. But we'll get to that later in this podcast, because I think this is a large foundation for the book that you've written and some of the arguments in there. So we'll definitely explore that. But Ivana, before we get to that, I'm interested in hearing a little bit more about what a Chief Privacy Officer does, especially for a company that, according to Google, has over 220,000 employees. So this is a huge organisation. What does a week look like for you?
Ivana Bartoletti 7:47
Yeah, I mean, it's a really interesting question, because it's the first time that I've been in such a role. I mean, I've always been advising companies and working with companies, and then I started in September. So there are so many different elements to this job. The first element is that privacy is not just about the law. It's not just about technology. My first objective, and the first objective, I believe, of a global Chief Privacy Officer, is to break the silo culture that we sometimes still have in organisations, where you have the legal team on one end, then you have the CTO team and tech, and then you have security. Privacy has to be the thread that binds all of these elements together, because whether you use personal or non-personal data, ultimately things do have an impact on individuals. So you have to try and build a culture whereby people understand that privacy is the thread that binds every single element together, every single new product that you create. I work in a technology firm, obviously, so for all the products that we create and put into the market, I have to make sure that privacy is embedded from the onset. I have to make sure that we don't consider privacy as an afterthought. So that is part of my job, and that requires a lot of things. It requires structures, it requires processes. It also requires awareness amongst everyone involved that we have to ensure that privacy is there from the very early stages. That's the reason, for example, why one of the first things I did was to establish a privacy by design forum, bringing together all the different areas to see how we can really use privacy enhancing technologies and privacy engineering to support the work that we do. Part of the job, obviously, is also to ensure that the data of the 220,000 - 250,000, actually - employees that we have is safeguarded and protected, and that our customers' data is safeguarded and protected. And that at a time when we are seeing privacy laws proliferating. You know, privacy laws are everywhere at the moment. In Australia, for example, there's thinking about updating the existing privacy legislation. But then you have Vietnam, you've got China, or India with a new privacy law hopefully coming to life sometime this year or next year. There's a lot of stuff happening all around the world at the moment, which is great. But at the same time, we've seen convergence around some issues, but we've also seen divergence across jurisdictions. And we have seen massive legal, but also geopolitical, stuff happening around data localisation and digital sovereignty and how we share data globally. So these are all issues that are at the top of my agenda and my working day. So I have to say, there's never been a more fascinating time than now to be in privacy law and practice.
Jonas Christensen 10:45
There is a big catch-up to do here, because the underlying technology and use of data is evolving so fast, and you mentioned the phrase ''privacy by design''. It's very novel and new for humans and organisations to actually have to design privacy. It's not been a problem before. And I'm sitting here trying to imagine what it's like to deal with this challenge, which is really cross-national, cross-jurisdictional. But there are no laws that connect across countries. I mean, maybe Europe is the closest, because we've got the European Union legislation. But other than that, it seems like a cat and mouse game that nations will struggle to win against multinational corporations that can and will move data across borders. How do you see that challenge playing out in the near, medium and long term?
Ivana Bartoletti 11:38
I think what you're saying is very true. I mean, there are two elements at the moment. We're seeing this really strange thing happening, if you think about it. We're seeing convergence on the one hand. So for example, if you look at the PIPL in China, or the legislation in India, or the GDPR and the legislation in Brazil, some things like transparency, fairness in automated decision making, or the right that individuals have to control where their data goes, these are enshrined in laws, with all the cultural differences, obviously. So, for example, in China, things will apply differently in the public and private sectors. But the bottom line is that we are seeing a desire coming from people to exert control over their data, and a desire for transparency, particularly given the new affordances that these technologies have. So we are seeing this across the board. And it's interesting, because on the one hand we have data protection and privacy rising, and on the other hand we have data protectionism on the horizon. You have countries wanting, to different degrees, to be much more strict about the way that data is shared, and some of it is understandable. Especially because, if you think about a lot of big tech companies, they have been building their fortunes on a data extractivist model, which has often meant data grabbing, especially from some countries rather than others. So, to some extent, this is understandable: some of the reasons underpinning that drive to localisation come from that. Some of the drive to data localisation comes from geopolitical issues, and we've seen this with Europe. But what is interesting is that within this dichotomy between data protection on the one hand and data protectionism on the other, you always have to wonder: what does the individual get out of it? What is the benefit for us as individuals, in terms of control over our data, but also in terms of our ability as individuals to put a stop to the data extractivist approach that we've had for so long? So I think it's really interesting to see all that is happening. There is legal uncertainty, for example in Europe, around data sharing across the globe. And what we're seeing, for example, is that some big companies - I'm thinking about the GAFAM - may at some point be forced to rethink their organisational model. There is uncertainty in Europe, for example, where we have a loophole around what constitutes a data transfer. So a lot of this stuff is really scary if you think about it, because, of course, we live in a global dynamic. We need to be able to share data across the globe, and we need to do so in a way which safeguards people's rights. Of course, we don't want intrusion from law enforcement: if data goes abroad from the EU, for example, where we have certain guarantees, there have to be the same guarantees elsewhere. But ultimately, there are different degrees of protection across the globe. So I'm hoping, to be fair Jonas, that organisations such as the OECD will be able to find a way forward with this. I'm hoping for a transnational agreement that will be able to bring countries together around the standards that we need to share data globally.
Elizabeth Denham, who used to be the Information Commissioner in the UK, said, ''We need a new Bretton Woods for data''. And although I don't particularly like the terminology, I think it's true that we need to get countries together around: How do we share data globally? What are the standards that we're going to use? And I think there is something to be said about the role of international organisations within this. The OECD, for example, is already driving some work in this area, which has stalled because of some different views between the US and Europe. And Asia could be a real force in this conversation, because there are so many interesting developments happening in Asia, especially around data sharing. So I'm hoping that we can drive this conversation on how we move forward, beyond the law, towards a new agreement on what it means for data to be shared across the globe.
Jonas Christensen 16:08
Yeah, it's such a big and interesting topic that we just don't know all the answers to yet. And we're going to dive a little bit deeper into this now, because I'd like to shift to the book that you've written, which I think was published a year or two ago. The book is called ''An Artificial Revolution: On Power, Politics and AI''. Could you tell us what this book is about and why you wrote it?
Ivana Bartoletti 16:36
Thank you. So I wrote this book because I was realising that the issues around the misuse of data were becoming more and more prominent, especially thanks to the amazing work that some activists and leaders, and especially women of colour - to name just a couple, Safiya Noble and Joy Buolamwini - and also women like Meredith Whittaker, have been doing in this space. So I think over the last few years, we have realised that alongside some really fascinating advances in technology, there are inherent risks with, for example, artificial intelligence and algorithmic decision making. But what struck me is that we had these amazing voices from these fantastic people, especially women, who are at the forefront of all of this. And to an extent, I always say my book is really a testament to what they have done, and I really wanted to name them and to highlight the amazing work that they've been doing. But I also realised that I wanted to bring these issues to the kitchen table. I wanted to write something very simple to digest, for these conversations to move from technologists and the area of activism, which is crucial - and I consider myself to be an ally; it's a word that I like - to the wider realm of politics and political parties and international organisations. But I wanted to bring it to the people. I wanted to explain to people that actually there is nothing strange or difficult about this topic. When we talk about data, when we talk about AI, when we talk about algorithms, we're actually talking about something very simple, because we're talking about people. We're talking about individuals. We're talking about politics. We're talking about geopolitics. And these are topics that everyone needs to talk about. So that's how it came about: the idea of wanting to bring something really simple to people, to understand that yes, there's something great about tech and we all love it, but if we love technology very much, we've also got to deal with the risks, so that we ensure the technology works for everyone. And I was also very struck by the link with the role of algorithms increasingly having an editorial function in our lives, increasingly deciding what we get exposed to and what we see, on top of having an allocative function. The editorial function: the idea that the way we perceive the world, and the way we read about the world when we browse the internet, is mediated by these machines that are set to give us certain newsfeeds based on our backgrounds and our browsing history. And the link between, for example, the rise of populism, which is fuelled by the echo chamber effect of these algorithms, and the connection between all of this and the anti-feminist backlash and the sort of women-bashing arguments. So to an extent I wanted to bring all of this to life in a very simple way. That's why it was written, and I'm glad that particularly a younger generation - I'm seeing, for example, students, both in secondary school and in first or second year of university - find it quite easy to read. And a lot of students have been using it to familiarise themselves with these topics. And that was my aim, you know, the objective.
Jonas Christensen 20:15
Yeah, that's really neat to hear. I think that's great, because that is also a generation that, in a sense, has had their whole life online. Some of us are old enough to remember the offline world before the internet, but there's a generation for whom it's second nature to put everything online. But actually, you need to be very careful with that, which is one of the central points in the book, I suppose. I think a lot about these curated worlds that we get presented with all the time when we have our heads stuck in our phones or Netflix or what have you, which is now a large part of our day. And I often use the example of Facebook, because Facebook is one, maybe two, pieces of software, if you count the phone app versus the desktop website, but within these two pieces of software are three and a half billion individual newsfeeds. And that's not the software doing it. That's the data. That's the private data, the connections and all that stuff. So we're all getting our own curated view. And it really struck me the other week, when someone in my network, who I have seen over the last few years become more and more populist, as you call it, very one-sided in their views and so on, said, have a look at this photo on Facebook. They showed me their Facebook, and I just happened to, as you do because your thumb is now wired to scroll, scroll a couple of posts down, and it was just one after the other of this very curated information that confirmed the one type of view that this individual has carried increasingly for the last few years. So it really struck me in that moment. I wasn't surprised, but it sort of crystallised it for me. What is all this doing to us as individuals and as a collective society, all this curation?
Ivana Bartoletti 22:00
Yeah. I mean, it's incredible, isn't it? As a collective, as a society, you have to think: what happens to democracy? We all see different things. The basis of democracy is that you and I can discuss on common ground, but if that common ground is disappearing, because we all get exposed to different things, we're not going to talk about it. So I find that really scary. And I think we really have to stop and think about what is going on and the risk for democracy. I'm also concerned about the sort of representational harms that happen in society. For example, how much are we actually crystallising stereotypes by doing this? One of the things that I am very concerned about with the use of these systems on Facebook or other social media is that, obviously, an organisation like Facebook, and similar ones, earn money by how many times users click on a particular advert, for the clients, the companies that advertise on Facebook or other platforms. But obviously, if they want people to click on things, they want to maximise their efficiency, and for something to be efficient, it will have to rely on what historically has been efficient. So for example, let's say that you're selling a product to clean the house. Historically, women have been cleaning the house more than men, so you will continue to advertise this kind of product to a female audience, because traditionally, historically, this is what's been happening. Because the company, or similar companies, want to maximise the results for their clients, so they want people to click on that kind of advert. But by doing so, by not breaking with the past, we are algorithmically soldering the past into the present and the future. And this is the sort of trade-off between fairness and efficiency. These machines are programmed to be efficient. They are programmed for people to click, and therefore, for the adverts to be attractive, they basically have to have the past baked in, because they have to perpetuate the past so that people continue to click. But by doing so, we will continue to recreate these sorts of self-fulfilling prophecies. We're never moving on. So this is what scares me the most, the fact that we are just basically reproducing the past. And the most dangerous display of this is in the predictive technologies used, for example, by law enforcement. That is just something else. But on the social media side, what really scares me is the societal side. I mean, Kate Crawford calls it ''representational harm'' and she's right. You know, you basically reproduce the same stereotypes over and over again. And this is, for example, where I would like to see a shift in the way that we globally perceive things like discrimination law. Anti-discrimination law is a very static thing: you discriminate based on the way that we live now. But I would like a global shift to see this in a more progressive way, in a way of saying that anti-discrimination is a journey. So we don't base things just on what the baseline is now, but on where we would like to be as a society.
It's becoming so typical now, with all these things being automated, because you're basically baking everything as it is now - which is not great - into things that really influence the way that we think and the things that we see. These systems wrap so much control around us, and by putting us in particular clusters, and wanting to maximise the results of advertising campaigns or recommendation campaigns, then we've got no way out. We're just stuck in what we are perceived to be, because historically we've been a certain way. And there is no incentive for a company to break away from that. Because what is the incentive? If you are not breaking the law and financially you want to maximise the output, then it very much becomes a political choice.
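A minimal, hypothetical sketch of the feedback loop described above, assuming a toy two-group ad system; the group names, rates and update rule are invented for illustration and are not taken from the conversation or the book:

```python
import random

# Assumed toy history: one audience group has clicked more in the past.
historical_ctr = {"group_a": 0.08, "group_b": 0.02}
impressions = {"group_a": 0, "group_b": 0}
TRUE_INTEREST = 0.05  # underlying interest is identical for both groups

def pick_audience(ctr):
    # Pure efficiency objective: always target the group with the best past CTR.
    return max(ctr, key=ctr.get)

for _ in range(10_000):
    group = pick_audience(historical_ctr)
    impressions[group] += 1
    clicked = random.random() < TRUE_INTEREST
    # Only the targeted group's estimate is ever updated, so the other group's
    # (understated) history never gets a chance to correct itself.
    historical_ctr[group] = 0.99 * historical_ctr[group] + 0.01 * (1.0 if clicked else 0.0)

print(impressions)      # group_a receives essentially all the impressions
print(historical_ctr)   # group_b's estimate stays frozen at its historical value
```

The optimiser never does anything "wrong" by its own metric, yet the historical pattern is perpetuated indefinitely: the self-fulfilling prophecy described in the conversation.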
Jonas Christensen 26:20
Yeah, it really is a mind-boggling topic. There are sort of two things I'm picking out of this as you talk, and it's bringing back some memories of the past for me. One thing you are saying is that we've been programmatically automated to see certain things, so you're perpetuating what's already been. We're seeing more of the same and only the same. And necessarily, to programme something, the algorithms are programmed to optimise some dimension or metric, which means that we are getting the best result on this metric, which is then necessarily either compromising or completely excluding other important metrics. And I'm reflecting on a job that I had some years ago where I did a lot of PR, and that involved getting stories in the newspaper and so on. And this was not so long ago, so we did have a digital newspaper environment. I found a good way to get lots of stories in the paper, because I figured out what made the journalists tick: they're always looking for a story for the next day - that's part of their metrics, they have to fill the papers. But also, when they called me and said this was a great story, that they were excited about it, it was when it got lots of clicks. They rotate the stories on their website, and if my story happened to be click-worthy, it would stay at the top for longer. And that was great. That was the type of content that survived and lived on, and I could supply more of it. But at no point was there a critical voice questioning the content. It was factually correct - I mean, I had my own ethics around it - but there wasn't a deep investigation of what I brought to the table. And the metric here was not ''give information to the people, so that they know what they need to know''. It was ''how many clicks can we get on the website?''. So again, we've created this environment where everything can get measured, and therefore we optimise for it. It's good in one sense, but also dangerous in another sense. So we're talking here about AI. If we contrast AI a bit: what is good about AI and what is bad about AI?
Ivana Bartoletti 28:29
So, I mean, first of all, AI is such a broad topic and term. In terms of what's good and what's bad, I think we've seen a lot of really interesting things happening out there. One of the things that strikes me is that when people think and talk about AI - and this is also a fault that comes, I think, from the media - they very much think about the Terminator, right? AI is the robot they've seen. When in reality, there is much more to it. It's already baked into a lot of things that we do. Everybody uses Google Maps. Chatbots are everywhere. So there is a lot of AI already in our lives, and I think a lot of people would recognise that it is generally a good thing. Actually, a lot of the automation that is going on is perceived by people as being good, and I don't blame them. If you think about the pandemic, and how we've been able to be more connected. How we've been able, for example, to have doctor's appointments online, virtually. How we've relied on self-checking symptoms. There's been a lot of good stuff. And in general, I think people, citizens, do feel the positive element of automation, alongside all the issues around job losses. There's a lot of noise about how robots are going to take over and take our jobs, and again, this is a very polarised debate. And it's an unhelpful one, because it moves the responsibility away from policymaking and onto automation. These are things that require public policy solutions, and politics is only just coming to terms with this. But there's no doubt that some automation, particularly in certain areas, ought to be welcomed. If I think about automation in the energy industry, for example. If I think about automation in areas that are traditionally more dangerous for humans. Now robots do go and do these things, whether it's going up high to fix electricity, or whether it's oil. A lot of this stuff is actually good, and it's part of the progress that we make as humans to improve and better our society. And when it comes to some of the downsides or the pitfalls, for example in relation to job losses, that's where politics must come in. We have to think long term about the concept of labour. What does it mean? What does it mean to labour? What does it mean when you're not taught a particular task by a human, as it used to be in the past, but you're automating even the learning process? What does that all mean in terms of identity, in terms of finances? This is a massive political discussion. But the bad things, I believe, are very much the ones related to the lack of transparency and controls around what we're bringing in. And what I mean by that is that we can't see these problems in isolation. When people say to me, ''What is bad about AI?'', you can't distinguish between the technology and the wider issues that we've got around all this. For example, antitrust and the big power that some companies have, and the fact that we have problems there that, for example, the European Union and the US are trying to really curb. If I think about the US, and Lina Khan and the FTC, and the new trend towards curbing that power with severe antitrust policies, the approach there is very much about making privacy part of that, too. It's difficult.
One of the reasons why we're seeing some really bad things about these technologies is also a lack of controls around them. For example, what happened in the UK two years ago, where the students could not take their exams and an algorithm was used to replace the exams and predict the grades of the students. And what happened is that the algorithm was automatically giving higher grades to students coming from private education than to students coming from state education, regardless of their effort and their actual situation. These are the bad things we're seeing. But these bad things are not because of the technology. Of course, there are affordances within these technologies, but they are because of the lack of controls that we have, the lack of understanding of the controls around how these systems are created, the lack of transparency around it, and the lack of complete understanding of the affordances and the potential that these technologies carry with them. In the case of the exams in the UK, for example, you can't just blame the algorithm. You blame it on the individuals who created those systems. And if you think about the context, it's not surprising at all that that was the outcome. So I think it's very difficult to say what's good and what's bad. It's really about understanding that AI is part of, and forms part of, something much bigger. To an extent, it cannot be seen in isolation, for many reasons, including the way that the market is dominated by a few companies - these issues are completely intertwined. And this is why I think there is serious action being taken by some jurisdictions, whether it's the UK or the US, or the real tsunami of legislation coming next year, to try and govern all these issues together.
Jonas Christensen 34:01
Yeah, we say that data is the new oil. So I might use the analogy of some of these things being akin to a digital oil spill, where you actually need regulation around how these things are governed, so that we don't let them get out of hand through, may I say, sometimes amateurish approaches to how these things are built. That same example from the UK is a famous one and a terrible one, where the algorithm had gone in and dictated the grades based on - yeah, I don't know what variables were going into it. On average, it might have been reasonable, but I'm sure every affected individual felt that it was very unreasonable. So you can see how this is a great example of that situation: if we're not careful and we just play around with AI like it's my first chemistry set or what have you, we can create damage that is large in society.
Ivana Bartoletti 34:50
I agree. But there's also something else, you know. It feels as if things we thought were gone - think about physiognomy or phrenology - are coming back. These things that we thought were completely gone seem to come back with AI. Somebody said something to me, and I just love this sentence: ''It should be common sense that you do not judge people on the basis of what they look like''. But with AI, we've lost that common sense, and these things have come back. So now you have facial systems that look at the way you move your mouth, your behaviour, your face, your movements, to then draw conclusions about how you are as an individual - for example, your trustworthiness based on facial traits. And this is stuff that was rubbished decades ago, centuries ago even, and now it's coming back. And it's actually bringing back racist theories, to an extent, isn't it? It feels as if, because of this hype, we are not thinking, and this is why I was relating all this to the wider issues around antitrust and the dominance of certain companies in the market. These horrible things that we thought were gone are really coming back. And again, we're having a debate at the moment on whether certain things should be banned, or whether they should be regulated and governed as merely high risk. This is a debate that we're having in Europe because of the European AI Act. It's an interesting debate, because it's not just about the European AI Act; it really goes to the heart of what we're talking about. In the past, people would say, ''Well, a knife is a knife''. It's technology, to an extent: you can use it to kill, or you can use it to cut a piece of food. So the issue is the use and not the product in itself - and people try to bring this to AI as well. And I completely disagree with this approach, because technologies such as the phrenology-related ones that we were mentioning have affordances - the ability, for example, to introduce racism by the back door. So there is an argument that we have to be careful not to end up sleepwalking into legitimising things that are not good for society, simply by deeming them high risk and wrapping controls around them. So yeah, we put these controls in place, but then basically we're just legitimising them. And I do think that we have to stop and think: do we want to legitimise these things? Or do we want to say that, for example, things that are phrenology-based should just be banned, rather than, to some extent, legitimised by saying that they can be of some use? And this is not an easy discussion, and it is one I do not have an answer to. Again, because we live in a society where for every fact there is an anti-fact, the opposite fact, which is given the same importance. But to an extent, I don't know the answer to this, because if I think about a hospital setting, I'm often told that the study of the face and of movement can be really useful, particularly for people in hospital who are not able to speak - for example, to help understand what they feel in terms of pain or desires. And I'm like, ''Well, who am I to say that is not true?''. You know, it could be true, and maybe it is.
Therefore, that could be a good use of this kind of technology. But how do we prevent ourselves from sleepwalking into something which is actually very dangerous? This is why we must really be careful when people say, ''AI is just a new piece of technology''. No technology is ever neutral, but particularly when the affordances - the possibilities and capabilities that these technologies have - are so transformative and dangerous, as we've seen with AI, then we really have to stop for a moment. In my book, I make an analogy with nuclear, and I don't make it as an academic exercise. You know, nuclear - some people see it as a good thing if it's done in a good way, and now we're investing in fourth generation nuclear weapons and all of that. But think about how much we have fought over nuclear, how much campaigning there has been, how many protests around it, and how many international negotiations and discussions we've had. I mean, think about the deal with Iran, and how much it has dominated the headlines. So what I'm saying is, we probably need the same amount of discussion. We probably need the same amount of opposition, the same amount of demonstration, the same amount of public alertness. We need people to be alert to the risks. And to an extent, we do not perceive the risks in the same way that we did with the nuclear movement, because of the way that these things are presented and sold to us. The way that, as soon as you raise your hand and say, ''I have a problem with this'', you're perceived as somebody who is anti-technology, anti-future, as if we all want to go back and live in caves. And this is wrong, because I'm the first one to talk about the risks, but it's because I love technology, not the opposite. So we can't really have nuanced discussions around all this, and this is really dangerous. The way for society, I believe, to really invest in these systems in a good way, to avoid becoming even more polarised, to avoid these systems playing a role in breaking apart societies and democracies, and to avoid bringing racism back into these systems through software, is to have public awareness and democratic control - and even, if need be, people saying, ''No, we don't want this''. By saying it, we can have a discussion. But this seems very, very difficult, doesn't it? Every time you express some doubts, you're seen as somebody coming back from centuries or millennia ago, wanting to go back to that situation. And it's not true at all, is it?
Jonas Christensen
Well, you're making me reflect on what's probably been two and a half years of the rise of nationalism around the world - probably a little bit longer. We've had demonstrations in the US with, I'll call them nationalists or whatever you want to call them, storming Capitol Hill. You've had, around the world, anti-vaccination rallies, freedom rallies, anti-lockdown rallies, all this stuff - call it populism - and also increasing nationalism. And you could argue that social media, which is AI, is helping to: 1) fuel the sentiment, but also 2) organise people with relative ease. So why don't we have anti-Facebook, anti-Twitter rallies of people fighting against this taking over our minds yet?
Ivana Bartoletti
That's an interesting one. First of all, because I think a lot of people like them, and you can't blame them, can you? People love sharing photos of their kids, of their mood, you know. People love it, because in a world where we're very mobile - less so now with the pandemic, but in a world where we're mobile - we can share our stories and some nice moments with others. Try to live without all this. Try and live without Amazon, without Google, without Facebook. The people who have made these experiments last probably a few days. It's impossible. This is the reason why the issue cannot be seen in isolation from other issues, including competition, including antitrust, including privacy. These things are all intertwined. By no means do I want to be seen as somebody who wants to govern privacy through competition. They are two different things: one is about the functioning of markets and the other is about human rights. So for me, they are very separate. And yet, there is a greater relationship than ever between the two. So I think what we're seeing at the moment is that people would like to exert this control. They would like to be able to say, ''I want a version of Facebook which is more private, which doesn't rely on these systems''. If you ask individuals, they will tell you. I mean, look at all the surveys that have been conducted: privacy laws are gaining more and more prominence in countries, and this proliferation of privacy laws also happens because there is a demand for them. To the point that some people, erroneously in my view, believe that there is a paradox between what people say they believe and what people actually do. Some people say, ''Well, people say that they care about privacy, but at the same time they put photos of their kids on Facebook, they buy on Amazon'', and I always respond, ''But what are they supposed to do? There's no alternative for them''. What are you supposed to do? Are you supposed not to use Google Maps? I like Google Maps. There's nothing else. What am I supposed to do? And even the privacy-preserving measures that these big companies are bringing in, because of popular demand and regulatory action, are entrenching their power; they're centralising it. Right? So that's why you can't really see these issues as separate from antitrust and competition. And this is why there is the action taken by the FTC in the US, and we've seen similar things happen in Australia. But this is also the reason why the European Union is saying, ''Well, actually, we need a real tsunami here'': there is the Data Governance Act, there is the Digital Services Act, the Digital Markets Act, the AI Act. All of these things have one thing in common, which is ensuring that the benefits of this digital age are a little bit more distributed, and do not just accrue to a few names. So the reason why I'm saying all this is because often I'm told, ''Yeah, but you know, people say this, but they don't care''. But no. You can have a movement against Facebook, where you have a few people saying, ''We should not have it''. Sometimes I read it - yesterday, for example, when some of the findings around Meta were made public, there were people saying that we've got to get rid of it. But you know, I like things to be real and battles to be winnable.
And to me, the real battle to win here is to ensure that we organise our markets in a way in which, if there is something different from Google or from one of these big companies, it has the freedom, the liberty and the space to be available. This is about good markets, it's about a good economy, and to an extent the FTC is moving under that mantra. This is the purpose of antitrust. This is the purpose of good competition: to enable a lot of products to come out, and for the strongest ones and the better ones to remain in the market. And the functioning of this is really, really important. But there is absolutely no doubt that countries like China, the US and the EU are all coming to terms with the issue of the power that big companies have over all of us. And it's not surprising, because for a long, long time we had little regulation around all this. We've let this infrastructure grow a little bit wild over the last decades, and now we're coming to terms with the problems. So it's very difficult, in my view, to see these things as separate. We have to see the bigger picture, which is very much related to the markets and all of that.
Jonas Christensen 46:41
Yeah, interesting, and you're making me reflect on the fact that we - the state or government in a jurisdiction - have often controlled or even broken up monopolies in the past. Those would have been monopolies on goods and services, but now we're experiencing monopolies on information. So there is a similar iteration of that, but in an information and privacy setting, that we're probably going through here. So yeah, it's going to be so interesting to see how that plays out. And you're also making me think about something else, which is that, in your book, you argue that data is capital. And you also say that it is political: the data products that we create, these AI solutions, are political products. Could you elaborate on what all that means?
Ivana Bartoletti 47:27
Yeah. So this comes from one specific discussion that I had. I mean, of course, these topics have been in the public domain, but I remember a discussion that I had a few years back at an international conference. It was based in China, but the person who told me this was not from China. I was having this conversation with a politician, a minister from another country, and the person said to me, ''Well, who's going to need politics moving forward? We've got AI, we've got data. We can just base decisions on data. We'll make the best decisions for our people''. And I was horrified. Because when I was 16, I had the privilege to study abroad - I was living in Italy at the time - and I went to the US, to Syracuse, in New York State. And I had the experience of going to two schools: one downtown and one in the suburbs. The biggest difference that I remember between the two schools - you know, I'm 42, I was 16 at the time, so it was a long time ago - the main difference was the amount of control. And by control I mean how many times I was checked. How many times were my documents checked? How many times was everything about me checked? And the difference was that the first school, where I was checked a lot of times, was mainly black, and the second school was mainly white. So I realised that actually there's nothing neutral about data collection. Nothing neutral at all. And I also realised that when people give out stats about how much violence, how many incidents, how many accidents, how many episodes you have in a particular area, that depends on how much capability you have to go and check all those incidents. So obviously, if, for example, you want to say which area is more dangerous and which is less dangerous - going back to my experience when I was in the US - that has massive implications, right? Because saying which area is more or less dangerous drives massive implications for things like house prices, the future of the people who live there, the investments made in a particular area. And obviously the data collected will reflect how many people the data has been collected from. So in the area where you have more control, more data is going to be collected, with massive consequences on the output. So I realised very young - and that became a huge part of my thinking and my human rights driven approach to privacy - that a data collection exercise is never a neutral exercise. The same happens with, for example, violence against women. If you ask how many women are victims of abuse, you'll get a different result if you ask women themselves or if you ask police officers or the police force. So there is nothing neutral about a data collection exercise. There is nothing neutral about the choice of who is going to end up in a database and who is going to be left out of a database. Somebody is in a position of power to make that decision. So how can you say that data is neutral? How can you say that we can use data to make decisions for the future? That was a mind-boggling experience for me. I was like, ''How is that possible?''.
But actually, when data collection is basically the product of a decision by somebody in a position of power about who ends up in a database, how can you tell me that an unscrutinised, unfettered and unchecked use of this data is going to then drive decisions around policy, around the allocation and assignment of resources? Let's say, for example, technology is used to predict crime in a particular area. Predict crime? You're taking away the most important thing in life, which is to end up somewhere different from where you started. It's just like a self-fulfilling prophecy. Because of the historical data that you've got, you think this area is going to be more risky, this family is going to end up in crime, this kid is going to end up in crime. So you wrap control around them. And that control is only going to make things worse, not better. I mean, how would you feel if you were more controlled? So I just started to think that there was something really, really wrong. Not at the age of 16 or 17 - I didn't really realise all that then, obviously. It was very much in my mind, but I didn't have the knowledge at the time to really elaborate on it. Then it all came back, and couple that with my feminist upbringing, where I realised, again, that there is a form of violence in data collection, which also has gender-related elements. And I just thought, ''Well, we've got to wake up''. We can't really think about data - this incredible thing that we have, and it is important - without that awareness. I am very concerned about missing the opportunities that data gives us. I work for a company where I want to harness the value of data for the benefit of people. So again, I am in no way anti-leveraging the use of data. No way. I mean, think about the pandemic. But I am all in favour of understanding the dynamics underpinning data, to make sure that we are able to put enough controls around it, but also to understand that even if we've got all the controls in place, there are not enough technical controls we can put in place to solve a problem which is actually historical and political. So I always say, it has to be a choice that an organisation makes as well: to say where they want to go. There is a lot you can do technically, massaging the data and doing whatever you can with the data, to avoid the perpetuation of the past. But ultimately, it's too big a problem for technology to solve alone. It's actually a political decision. I think it was Reuben Binns, who used to work for the Information Commissioner in the UK - and he needs to be credited with some of the amazing, really cutting-edge work that's been done by the Information Commissioner on AI - who said that being fair in AI is very much a decision that needs to be made by a company: a conscious, social decision that is made by companies.
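A minimal, hypothetical sketch of the self-fulfilling prophecy described above, assuming a toy two-area model; the area names, rates and allocation rule are invented for illustration and are not from the interview or the book:

```python
import random

# Two areas with identical true incident rates; only the historical record differs,
# because area_a has historically been checked more.
TRUE_INCIDENT_RATE = {"area_a": 0.10, "area_b": 0.10}
recorded_incidents = {"area_a": 20, "area_b": 5}
TOTAL_PATROLS = 100

for year in range(10):
    total = sum(recorded_incidents.values())
    # "Data-driven" allocation: patrols proportional to past recorded incidents.
    patrols = {area: round(TOTAL_PATROLS * count / total)
               for area, count in recorded_incidents.items()}
    for area, n_patrols in patrols.items():
        # An incident only enters the database if a patrol is there to record it.
        observed = sum(random.random() < TRUE_INCIDENT_RATE[area]
                       for _ in range(n_patrols))
        recorded_incidents[area] += observed

print(recorded_incidents)  # area_a's record keeps growing faster despite identical true rates
```

The database ends up saying area_a is more dangerous, not because it is, but because that is where the checking happened: the non-neutral data collection described in the conversation.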
Jonas Christensen 54:08
And I think data and AI actually have an opportunity to be full of empathy for the individual. You can actually find things, patterns - even automate them if you dare - and you could find things out about people that can be used to be very empathetic towards the individual, things that would otherwise get lost in the masses. So I can see that you're on this mission, and I think that's great. Ivana, I have three questions left for you. One you can answer at whatever length you want, and the other two are quick. So, you have taken the initiative to write a book, but you've also taken the initiative to co-found an organisation called the ''Women Leading in AI Network''. Could you explain to us what this network is about and why it's so important to drive this female agenda on the topic of AI?
Ivana Bartoletti 55:00
So this network was started a few years back, in 2018. We were a group of friends coming from different backgrounds: people coming from privacy law, people coming from technology and data science, and then lawyers, business people, campaigners. It was a few days before International Women's Day, and we were sitting there thinking about the journey that we as women had gone through. And we realised that all the things we had been fighting against - and some of which we had achieved, for example not being discriminated against when applying for jobs or when applying for a loan - were actually being reintroduced by AI, in an unscrutinised, unchecked manner. And not being discriminated against on the grounds of the colour of your skin: you know, then you think about facial recognition, and how much it discriminates against women of colour. So we were sitting there and we were like, ''Oh, my goodness. We have been fighting so hard and these things are being brought back''. And that reminds me of the fact that we seem to have lost a lot of common sense with AI, just because it's called AI. So we said, ''You know what? We should call an assembly at the London School of Economics, see who comes, and start something on this''. Also because there were organisations like the AI Now Institute doing amazing work on this. So I wrote an article in The Guardian, and I wasn't expecting the result, I wasn't expecting the outcome. And then we had this event and a lot of women came. So it was really good. And the aim is not only to bring more women and more diversity into the coding rooms. Obviously that is important, right? Bias in AI comes from many different sources, not just the data: from the parameters, from measurement and aggregation. And obviously the scrutiny is stronger when there is a more diverse workforce, because things can be seen that couldn't otherwise be seen. And also because having a more diverse workforce means that the people who are going to be impacted the most, and the most vulnerable, can be represented in the room. That was one of the issues that we wanted to deal with. But the most important one, in my view, was to have more women at the table where decisions around AI are made. So not just coders, not just women in technology, but actually women leaders in parliaments, business and international organisations, to define the norms around AI. So that was the thing. We wrote a paper with principles for trustworthy AI, and we have introduced a lot of ideas - and we're not the only ones. Obviously, the strength of this movement is that you're part of something bigger and you play your part in your constituency. But we were part of something bigger, we had a lot of resonance, and it was good. We worked with the European Commission and with the European Parliament, and we've been growing, and it's a really important network. Because we offer a space once a month where we meet up online. We discuss the big ideas coming through, all the regulatory responses. So it's not just about women in technology, women in AI. It's really about women in policymaking around AI. And we share the experiences that we're having.
How we are coding privacy and fairness into the systems, what policies we're putting in place as leaders in our respective fields. So I would encourage anyone to get involved. I mean, this is an organisation led by women, but when we have events, men and women come along. It is open to anyone. But the leadership is female. That's the whole purpose.
Jonas Christensen 58:56
Yeah, wonderful. That is a really important purpose that, even though I'm not female, I can totally subscribe to. So thank you for that. Now, I promised you two quick questions here at the end, Ivana. The first one is that I always ask my guests to pay it forward on the show. So my question to you is: Who would you like to see as the next guest on Leaders of Analytics, and why?
Ivana Bartoletti 59:21
I actually would like to suggest a guy called Gianclaudio Malgieri, who I think is doing some excellent work on vulnerability in these systems: what it means to be vulnerable in digital systems, what it means to be vulnerable in privacy law. He works very closely with Frank Pasquale, and I think he'd be an extremely interesting voice to bring to a wider audience. I really hope that you can call him.
Jonas Christensen 59:52
Brilliant suggestion and I will definitely follow that one up. Last question: Where can people find out more about you and get a hold of your content?
Ivana Bartoletti 1:00:01
Yeah. So my website, which is easy. It's www.ivanabartoletti.co.uk. And also Twitter. Yeah. So I think that's the best way.
Jonas Christensen 1:00:09
Fantastic. Now, listeners, I really encourage you to check out this book called ''An Artificial Revolution''. I find it very insightful and, as Ivana has already mentioned, very easy to read and interpret as well. It really makes you think about the world we live in right now, and also, as data practitioners, about the role and responsibility that we have to create the future that we want and avoid the one we don't want. Ivana, thank you so much for being on the show today. I really appreciate your time. I know you have to go and put your Superwoman cape on and go and save us all and make sure that we keep our privacy around the world. On behalf of the listeners and the global community in general, thank you so much for doing what you do and making sure that AI turns out the way we actually want it to turn out in the next 10, 15, 20 years. Really appreciate it. All the best. Thank you for joining in today.
Ivana Bartoletti 1:00:35
Thanks to you for having me. Really enjoyed it.