Apr 16, 2019 | 6 min read

Spotlight

Creating an Ethical AI Framework

This series highlights the key insights and lessons from our Digital Leadership series of podcasts. We spotlight the important takeaways from our interviews in an accessible format. The following insights come from Sander Klous, Professor of Big Data Ecosystems at the University of Amsterdam and partner in charge of Big Data and Analytics at KPMG in the Netherlands. Stay tuned for the full podcast interview with Sander Klous; in the meantime, take a look at our full library of podcasts.

What have been the core experiences that led you down your current path, and to your current interest in big data and analytics?

I started writing about big data and advanced analytics a couple of years ago. I wrote a book called ‘We are Big Data’, in which I emphasized the enormous impact of data, and of the analysis of data, on our society. The key message of that book is that we actually want this. In the years that followed you could see all kinds of side effects: privacy was one of the most familiar topics of discussion, along with others like discrimination and reliability. These situations led people to distrust the results of algorithms, and that, for me, was a very interesting notion.

Interestingly enough, people talk about solutions in terms of better algorithms, more transparency about the algorithms used, or better explanations of what an algorithm does, and then people will probably start trusting it. But that’s not how trust works. Trust typically comes from experience; experience builds reputation, and reputation builds brand. So, if that’s how trust works, then you need to start doing something else.

Could you talk about what can go wrong? What are some of the issues that face us when we’re dealing with algorithms?

I actually did some work on that, collecting news headlines about everything that went wrong with data or with analytics, and then I tried to organize them into several topics. The first one is familiar: it’s cyber-security related, where something gets hacked and then information gets out on the street.

Another is also quite familiar to us: it’s data governance related. Think about who can do what with the data, and who can do what with an analysis; that’s a very common topic that we already know from the past. What you see now is that data and analysis are stretching the boundaries of an organization further and further, and it becomes harder and harder to decide who can do what with data.

Another is IT architecture-related, because we are now living in a world with the Internet of Things, where devices touch our daily lives many times during the day. Things like maintenance and system upgrades, things that you typically associate with servers in the basement, now become something that can affect your daily life: if you have a smart thermostat in your room, or a smart lock on your door, a software upgrade has a completely different impact than when it hits a server in the server room of your office.

What shaped your insights around ethics?

This is something I learnt through a conversation with the Head Psychiatrist of the University Medical Centre Utrecht. I asked her, ‘Now that you have a computer that gives you advice on the treatment of a patient, that must make you feel uncomfortable as a doctor, because you don’t completely understand what’s behind the advice coming from the computer?’ She said, ‘Actually, that’s not the biggest problem I have, because currently I also use research from my colleagues, and that’s also something I might not completely understand, because I wasn’t involved in that research; I still use their advice and their answers. So, for me it’s not much different whether advice I don’t completely understand comes from my colleagues or from a computer; I still use it in my treatments. The biggest difference is when the computer gives me completely new, predictive insights. It will tell me, for example, that a patient will become violent within the next 24 hours.’ She gave an example where, in that computer system, there are cases where it predicts with over 80% confidence that a patient will become violent within 24 hours.

Then the question becomes, ‘What do I do with that information?’ You get into this loop of all kinds of ethical situations that you suddenly have to take into account. The interesting thing is, you cannot even avoid it: you have to make ethical decisions here because the technology is available, so even deciding not to use the technology is already an ethical decision. If the technology is not there, there is no ethical choice to make, but as soon as it is, every doctor has to make ethical decisions around this topic. That’s what is very interesting.
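
To make that decision point concrete, here is a minimal, purely illustrative sketch in Python. The model, the 0.8 alert threshold, and the function and patient names are all assumptions for illustration, not details from the interview or from any actual clinical system; the point it demonstrates is that the software only surfaces a probability, while the decision about what to do with it, including ignoring it, remains with the doctor.

```python
# Illustrative sketch only: the model, the 0.8 threshold, and all names
# here are hypothetical, not from the interview or a real clinical system.

def triage(patient_id: str, violence_risk: float, threshold: float = 0.8) -> str:
    """Return a clinician-facing message for a predicted risk score.

    The algorithm only surfaces a probability; whether to intervene,
    monitor, or disregard the flag remains a human, ethical choice.
    """
    if violence_risk >= threshold:
        # High-confidence prediction: flag for the clinician, who still
        # has to decide whether and how to act on it.
        return f"ALERT: review patient {patient_id} (risk={violence_risk:.0%})"
    return f"No alert for patient {patient_id} (risk={violence_risk:.0%})"

if __name__ == "__main__":
    # The over-80%-confidence case described above would land in the alert branch.
    print(triage("patient-A", violence_risk=0.83))
    print(triage("patient-B", violence_risk=0.35))
```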

How do you assess the level of awareness of the risks of algorithms?

In my opinion, algorithms are all around us, and they are so embedded in our daily lives that sometimes you don’t even notice them anymore. There is even a term for it: ‘invisible barbed wire’. Maybe what’s wrong with that phrase is that the barbed wire is not yet completely invisible. Sometimes you realize that you’re impacted by an algorithm in a way you don’t like. I always like the example of a navigation system sending you into a one-way street from the wrong direction; at that moment you’re really frustrated with your navigation system, and you’re feeling annoyed.

What are the implications of this?

For me this means that we are living at a crucial moment in time. We are the generation that needs to think about autonomy in decision-making: what decisions do we want to make ourselves, and what decisions are we fine with handing over to algorithms, or technology? We are the generation that has to define what the boundaries are, and how we are going to make sure that the algorithms that are implemented comply with those boundaries.

How do you think about an ethical framework for AI?

The approach that we chose is, of course, very closely associated with the approach that we have also seen with financial statements. About 100 years ago we had similar situations around financial statements: mistakes in financial statements led to distrust in society about those statements. That’s how the audit function appeared. We needed an independent auditor to look at these financial statements, and a framework to put in place that shows you are in control of how you built the financial statement. That’s something we are developing right now; we have developed it over the years, and we will be developing it further over the next years. So, yes, I think we can learn from what has been done in financial statement audits, and we can try to create similar frameworks and apply them to algorithms as well.

Momenta Partners encompasses leading Strategic Advisory, Talent, and Investment practices. We’re the guiding hand behind leading industrials’ IoT strategies, 200+ IoT leadership placements, and 25+ young IoT disruptors. Schedule a free consultation to learn more about our Connected Industry practice.