[Podcast] Dr. Wolter Pieters on Information Ethics, Part One


In part one of my interview with Dr. Wolter Pieters, assistant professor of cyber risk at Delft University of Technology, we learn about the fundamentals of ethics as they relate to new technology, starting with the trolley problem. A classic thought experiment in ethics, it holds an important lesson for the world of self-driving cars and the course of action a computer on wheels would have to take when faced with potentially life-threatening consequences.

Wolter also walks us through the potential for power imbalances when some stakeholders have far more access to information than others. That led us to ask: is technology morally neutral? And where does one's duty to prevent misuse begin and end?


Wolter Pieters: My name is Wolter Pieters. I have a background in both computer science and philosophy of technology. I'm very much interested in studying cyber security from an angle that goes a bit more towards the social sciences, so, why do people behave in certain ways in the cyber security space, but also more towards philosophy and ethics, so, what would be reasons for doing things differently in order to support certain values.

Privacy, but then again, I think privacy is a bit overrated. This is really about power balance. Everything we do in security will give some people access and exclude other people, and that's a very fundamental thing. It's basically about a power balance that, through security, we embed into technology. And that is what fundamentally interests me in relation to security and ethics.

Cindy Ng: Let's go back first and start with philosophical, ethical, and moral terminology. The trolley problem: it's where you're presented with a dilemma. You're the conductor, and you see the trolley is going down a track where it has the potential to kill five people. But if you pull a lever, you can make the trolley go onto the other track, where it would kill one person. And that really is about: what is the most ethical choice, and what does ethics mean?

Wolter Pieters: Right. So, ethics generally deals with protecting values. And values, basically, refer to things that we believe are worthy of protection. Those can be anything from health to privacy to biodiversity. And then it's said that some values are fundamental, while others are instrumental, in the sense that they only help to support other values but aren't intrinsically worth something in and of themselves.

Ethics aims to come up with rules, guidelines, and principles that help us support those values in what we do. You can do this in different ways. You can try to look only at the consequences of your actions. In that case, clearly, in relation to the trolley problem, it's better to kill one person than to kill five. If you simply do the calculation, you could say, "Well, I pull the switch and thereby reduce the total consequences." But you could also argue that certain rules, like "thou shalt not kill," would be violated if you pull the switch. If you don't act, then five people are killed, but you haven't done anything explicitly, whereas if you pull the switch, you explicitly kill someone. And from that angle, you could argue that you should not pull the switch.

So, this is, very briefly, an outline of the different ways in which you could reason about what actions would be appropriate in order to support certain values, in this case, life and death. Now, this trolley problem is these days often cited in relation to self-driving cars, which would also have to make decisions about courses of action, trying to minimize certain consequences, etc. That's why it has become very prominent in the ethics space.

Cindy Ng: So, you've talked about a power imbalance. Can you elaborate on, and provide an example of, what that means?

Wolter Pieters: What we see in cyberspace is that there are all kinds of actors, stakeholders, that gather lots of information. There are governments interested in doing types of surveillance in order to catch the terrorist amongst the innocent data traffic. There are content providers that give us all kinds of nice services, but at the same time, we pay with our data, and they make profiles out of it and offer targeted advertisements, etc. At some point, some companies may be able to make better predictions than even our governments can. So, what does that mean? In the Netherlands, today actually, there's a referendum regarding new powers for the intelligence agencies to do types of surveillance online, so there's a lot of discussion about that.

So, on the one hand, we all agree that we should try to prevent terrorism. On the other hand, this is also a relatively easy argument for claiming access to data: "Hey, we can't allow these terrorist attacks, so we need all your data." It's very political. And this makes it possible to leverage security as an argument to claim access to all kinds of things.

Cindy Ng: I've been drawn to ethics and the dilemmas of our technology. Because I work at a data security company, you learn about privacy regulations: GDPR, HIPAA, SOX compliance. At their core, they are about ethics and a moral standard of behavior. Can you address the tension between ethics and technology?

The best thing I read lately was a Bloomberg subhead that said that ethics don't scale. Ethics is a core value, but at the same time, technology is what drives economies, and then add the element of a government overseeing it all.

Wolter Pieters: There are a couple of issues here. One that's often cited is that ethics and law seem to lag behind our technological achievements. We always have to wait for new technology to get out of hand before we start thinking about ethics and regulation. You could argue that's the case for internet-of-things developments, where manufacturers have been making their products smart for quite a while now. We suddenly realized that all of these things have security vulnerabilities, and they can become part of botnets of cameras that can then be used to run distributed denial-of-service attacks on websites, etc. Only now are we starting to think about what is needed to make sure that these devices are securable at some level. Can they be updated? Can they be patched? In a way, it already seems to be too late. So, there's the argument that ethics is lagging behind.

On the other hand, there's also the point that ethics and norms are always, in a way, embedded in technologies. Again, in the security space, whatever way you design technology, it will always enable certain kinds of access, and it will disable other kinds of access. There's always this inclusion and exclusion going on with new digital technologies. So, in that sense, ethics is always already present in a technology. And I'm not sure whether it should be said that ethics doesn't scale. Maybe the problem is rather that it scales too well, in the sense that, when we design a piece of technology, we can't really imagine how things are going to work out if the technology is being used by millions of people.

The internet, when it was designed, was never conceived as a tool that would be used by billions. It was a network for research purposes, to exchange data. Same for Facebook: it was never designed as a platform for an audience like this. Which means that the norms initially embedded into those technologies do scale. If, for example, you don't embed security in the internet from the beginning and then you scale it up, it becomes much more difficult to change later on. So, ethics does scale, but maybe not in the way that we want it to.

Cindy Ng: So, you mentioned Facebook. And Facebook is not the only tech company that designs systems to allow data to flow through so many third parties. When people use that data in a nefarious way, the tech company can respond by saying, "It's not a data breach. It's how things were designed to work, and people misused it." Why does that response feel so unsettling? I also like what you said in a paper you wrote, that we're tempted to consider technology as morally neutral.

Wolter Pieters: There's always this idea of technology being a kind of hammer, right? I need a hammer to drive in the nail, so it's just a tool. Now, with information technology, it has been discussed for a while that there will always be some kind of side effect. We've learned that technologies pollute the environment, that technologies cause safety hazards, nuclear incidents, etc. And in all of these cases, when something goes wrong, there are people who designed the technology or operate the technology who could potentially be blamed.

Now, in the security space, we're dealing with intentional behavior of third parties. They can be hackers; they can be people who misuse the technology. And then suddenly it becomes very easy for those designing or operating the technology to point to those third parties as the ones to blame. You know, like, "Yeah, we just provide the platform. They misused it. It's not our fault." But the point is, if you follow that line of reasoning, you wouldn't need to do any kind of security. You could just say, "Well, I made a technology that has some useful functions, and, yes, there are these bad guys that misuse my functionality."

On the one hand, it seems natural to blame the bad guys or the misusers. On the other hand, if you only follow that line of reasoning, then nobody would need to do any kind of security. This means you can't really get away with that argument in general. Then, of course, in specific cases it becomes more of a gray area: where does your duty to prevent misuse stop? And then you get into the question of what is an acceptable level of protection and security.

But also, of course, the business models of these companies involve giving access to some parties, which end users may not be fully aware of. And this has to do with security always being about: who are the bad guys? Who are the threats? Some people have different ideas about who the threats are than others. So, if a company gets a request from the intelligence services, like, "Hey, we need your data because we would like to investigate this suspect," is that acceptable? Or maybe some people see that as a threat as well. The labeling of who the threats are, whether it's the terrorists, the intelligence agencies, or the advertising companies, all matters in terms of what you would consider acceptable or not from a security point of view.

Within that space, it is often not very transparent to people what could or could not be done with their data. European legislation, in particular, is trying to require people's consent in order to process their data in certain ways. Now, in principle, that seems like a good idea. In practice, consent is often given without paying much attention to the exact privacy policies, because people can't be bothered to read all of that. And in a sense, maybe that's the rational decision, because it would take too much time.

So, that also means that, if we try to solve these problems by letting individuals give consent to certain ways of processing their data, this may lead us to a situation where, individually, everybody just clicks away the messages, because for them it's rational: "Hey, I want this service, and I don't have time to be bothered with all this legal stuff." But on a societal level, we are creating a situation where certain stakeholders on the internet get a lot of power because they have a lot of data. This is the space in which decisions are being made.

Cindy Ng: We rely on technology. A lot of people use Facebook. We can't just say goodbye to IoT devices. We can't say goodbye to Facebook. We can't say goodbye to any piece of technology, because, as you've said in one of your papers, technology will profoundly change people's lives and our society. Instead of saying goodbye to these wonderful things we've created, how do we go about living our lives and conducting ourselves with integrity, with good ethics and morals?

Wolter Pieters: Yeah. That's a good question. What currently seems to be happening is that a lot of this responsibility is being allocated to end users. You decide whether you want to join social media platforms or not. You decide what to share there. You decide whether to communicate with end-to-end encryption or not, etc. So, a lot of pressure is being put on individuals to make those kinds of choices.

And the fundamental question is whether that approach makes sense, whether that approach scales, because the more technologies people are using, the more decisions they will have to make about how to use them. Now, of course, there are certain basic principles that you can try to adhere to when doing your stuff online. On the security side: watch out for phishing emails, use strong passwords, etc. On the privacy side: don't share stuff from other people that they haven't agreed to, etc.

But all of that requires quite a bit of effort on the side of the individual. And at the same time, there seems to be pressure to share more and more and more, for example, pictures of children who aren't even able to consent to whether they want their pictures posted or not. So, in a sense, there's a high moral demand on users to act responsibly online, maybe too high. And that's a great question.

If at some point you decide that we're putting too high a demand on those users, then the question is: are there ways to make it easier for people to act responsibly? And then you would end up with certain types of regulation that don't only delegate responsibility back to individuals, like asking for consent, but put really strict rules on what, in principle, is allowed or not.

Now, that's a very difficult debate, because you usually end up with accusations of paternalism: "Hey, you're putting all kinds of restrictions on what can or cannot be done online. Why shouldn't people be able to decide for themselves?" On the other hand, people are being overloaded with decisions to the extent that it becomes impossible for them to make those decisions responsibly. This tension, between leaving all kinds of decisions to the individual versus making some decisions on a collective level, is going to be a very fundamental issue in the future.
