This article is part of the series "[Podcast] Dr. Zinaida Benenson and Phishing Threats". Check out the rest:
I’m always reluctant to make a direct shameless plea to read our IOS content. But you must read the following transcript of my recent interview with Dr. Zinaida Benenson, a German security researcher. Last year she presented at Black Hat the results of a nicely designed experiment to measure the susceptibility of college students to phish mail. Let’s just say the students could use some extra tutoring when it comes to the dangers of the web.
She proved to my satisfaction that our curiosity about what lies ahead at the next link overrides our cautiousness and overall rational thinking. Benenson believes that non-college students would do just as poorly in a similar experiment.
What does this have to do with data security at your organization?
In short: some percentage of employees will always click on the worst, spammiest phish mail imaginable. And if you make the phish content even somewhat convincing, you’ll get even higher yields. Benenson has also shown that it likely doesn’t matter if you give employees extensive security awareness training. Someone will click, and that’s all the hackers need.
The more important point is to have secondary defense in place that can spot an attack in progress — ransomware, data theft, denial of service — and limit the damage.
[Inside Out Security] Zinaida Benenson is a senior researcher at the University of Erlangen-Nuremberg. Her research focuses on the human factors side of privacy and security, and she also explores IoT security, two topics we are also very interested in at the Inside Out Security blog. Zinaida recently completed research into phishing. If you were at last year’s Black Hat Conference, you heard her discuss these results in a session called How To Make People Click On Dangerous Links Despite Their Security Awareness.
So, welcome Zinaida.
[Zinaida Benenson] Okay. So my group is called Human Factors In Security And Privacy. But also, as you said, we are also doing technical research on the internet of things. And mostly when we are talking about human factors, we think about how people make decisions when they are confronted with security or privacy problems, and how can we help them in making those decisions better.
[IOS] What brought you to my attention was the phishing study you presented at Black Hat, I think that was last year. And it was just so disturbing, after reading some of your conclusions and some of the results.
But before we talk about them, can you describe that specific experiment you ran phishing college students using both email and Facebook?
[ZB] So in a nutshell, we sent, to over 1,000 university students, an email or a personal Facebook message from non-existent persons with popular German names. And these messages referred to a party last week and contained a link to supposed pictures from the party.
In reality, this link led to an “access denied” page, but the links were individualized, so we could see who clicked, and how many times they clicked. And later, we sent them a questionnaire asking for the reasons for their clicking or not clicking.
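The individualized links she describes are easy to sketch: each message embeds a unique random token, so the web server’s access log attributes every click, and every repeat click, to one recipient. This is a minimal illustration, not the study’s actual code; the bare-IP URL, function name, and addresses below are invented.

```python
import secrets

# Hypothetical sketch (not the study's actual tooling): each recipient
# gets a unique random token in their link, so a hit in the server log
# identifies exactly who clicked and how many times.
def make_tracking_links(recipients, base_url="http://203.0.113.7/party-pics"):
    """Map each recipient to a personal link carrying a random token."""
    links = {}
    for email in recipients:
        token = secrets.token_urlsafe(8)  # unguessable per-user ID
        links[email] = f"{base_url}?id={token}"
    return links

links = make_tracking_links(["anna@example.edu", "ben@example.edu"])
# Every link is unique, so a request for it points back to one recipient.
```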
[IOS] Right. So basically, they were told that they would be in an experiment but they weren’t told that they would be phished.
[ZB] Yes. So recruiting people for, you know, cyber security experiments is always tricky, because you can’t tell them the real goal of the experiment; otherwise, they would be extra vigilant. But on the other hand, you can’t just send them something without recruiting them. So this is an ethical problem. It’s usually solved by recruiting people for something similar. So in our case, it was a survey about internet habits.
[IOS] And after the experiment, you did tell them what the purpose was?
[ZB] Yes, yes. So this is called a debriefing, and this is also a standard part of the ethical requirements. So we sent them an email where we described the experiment and also some preliminary results, and also described why it could be dangerous to click on a link in an email or a Facebook message.
[IOS] Getting back to the actual phish content, the phish messaging content: in the paper I saw, you showed the actual template you used. And since we all get lots of spam, to my eyes, and I think a lot of people’s eyes, it just looked like really obvious spam. Yet you achieved very respectable click rates, and for Facebook you got a very high rate, almost 40%, of people clicking what looked like junk mail!
[ZB] We had a bare IP address in the link, which should have alerted some people. I think it actually alerted some who didn’t click. But, yes, depending on the formulation of the message, we had 20% to over 50% of email users clicking.
And independently of the formulation of the message, we had around 40% of Facebook users clicking. So in all cases, it’s enough, for example, to get a company infected with malware!
50% Clicked on Emails
[IOS] That is surprising! But then you also learned, by surveying them, the reasons they were clicking. And I was wondering if you could share some of those results?
[ZB] So the reasons. The most important or most frequently stated reason for clicking was curiosity. People were amused that the message was not addressed to them, but they were interested in the pictures.
And the next most frequently stated reason was that the message actually was plausible because people actually went to a party last week, and there were people there that they did not know. And so they decided that it’s quite plausible to receive such a message.
[IOS] However, it was kind of a very generic looking message. So it’s a little hard to believe, to me, that they thought it somehow related to them!
[ZB] We should always consider the target audience. And this was students, and students communicate informally. Quite often, people have friends whose last names they don’t even know. And of course, if I were sending such a phishing email to, say, employees of a company, or to the general population, I wouldn’t formulate it like this. So our targeting actually worked quite well.
[IOS] So it was almost intentional that it looked…it was intentional that it looked informal and something that a college student might send to another one. “Hey, I saw you at a party.” Now, I forget, was the name of the person receiving the email mentioned in the content or not? It just said, “Hey”?
[ZB] We had actually two waves of the experiment. In the first wave, we mentioned people’s names, and we got over 50% of email recipients clicking. And this was very surprising for us, because we actually expected that on Facebook people would click more, just because people share pictures on Facebook, and it’s easier to find a person on Facebook: okay, there is a student, and say her first name is Sabrina, or whatever.
And so we were absolutely surprised to learn that over 50% of email recipients clicked in the first wave of the experiment! And we thought, “Okay, why could this be?” And we decided that maybe it was because we addressed people by their first names. So it was like, “Hey, Anna.”
And so we decided to have the second wave of the experiment where we did not address people by their first names, but just said, “Hey.” And so we got the same, or almost the same, clicking rate on Facebook. But a much lower clicking rate on email.
[IOS] And I think you had an explanation for that, a theory about why the rates were similar [for Facebook]?
[ZB] Yeah. So on Facebook, it seems that it doesn’t matter if people are addressed by name. Because as I said, the names of people on Facebook are very salient. So when you are looking up somebody, you can see their names.
But if somebody knows my email address and knows my name, it might seem more plausible to some people. But this is just a hypothesis; we didn’t actually have any people explaining this in the messages. Also, we got a couple of people on email saying, “Well, it didn’t address me by name, so it looked like spam to me.”
So actually, names in emails seem to be important, even though at our university, email addresses consist of first name, dot, last name, at the university domain.
[IOS] I thought you also suggested that because Facebook is a community, that there’s sort of a higher level of trust in Facebook than in just getting an email. Or am I misreading that?
[ZB] Well, it might be. It might be like this. But we did not check for this. And actually, there is other research; some other people did research on how much people trust Facebook and Facebook members. And yeah, the results differ quite a lot. I think that people use Facebook not because they particularly trust it, but because it’s very convenient and very helpful for them.
Curiosity and Good Moods
[IOS] Okay. And so what do you make of this curiosity as a first reason for clicking?
[ZB] Well, first of all, we were surprised how honestly people answered, saying, “Oh, I was curious about pictures of unknown people and an unknown party.” It’s a negative personality trait, yeah? So it was very good that we had an anonymous questionnaire; maybe it made people, you know, answer more honestly. And I think that curiosity, in this case, was kind of a negative personality trait.
But actually, if you think about it, it’s a very positive personality trait. Because curiosity and interest motivate us to, for example, to study and to get a good job, and to be good in our job. And they are also directly connected to creativity and interaction.
[IOS] But on the other hand, curiosity can have some bad results. I think you also mentioned that even for those who were security aware, it didn’t really make a difference.
[ZB] Well, in the questionnaire, before we revealed the experiment and asked whether they clicked or not, we asked them a couple of questions related to security awareness, like, “Can one be infected by a virus if one clicks on an attachment in an email, or on a link?”
And when we tried to statistically correlate the answers to these questions with people’s reports of whether they clicked or not, we didn’t find any correlation.
So this result is preliminary, yeah. We can’t say with certainty, but it seems like awareness doesn’t help a lot. And again, I have a hypothesis about this, but no proof so far.
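The correlation check she describes amounts to a test of independence between two yes/no variables: answered the awareness questions correctly, and clicked. A minimal sketch of such a test, with invented counts rather than the study’s data, could look like this:

```python
# Hypothetical sketch of the kind of check described: a chi-square test
# of independence on a 2x2 table of awareness (answered the awareness
# questions correctly?) versus behavior (clicked?). The counts below
# are invented for illustration, not the study's data.
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    # Shortcut form of sum((observed - expected)^2 / expected) for a 2x2 table.
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

#            clicked  didn't click
# aware         45         55
# unaware       40         60
stat = chi_square_2x2(45, 55, 40, 60)
# A statistic below the 3.84 cutoff (p = 0.05 at 1 degree of freedom)
# means no detectable association between awareness and clicking.
```

With these made-up counts the statistic comes out well under 3.84, i.e. the kind of null result she reports.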
[IOS] And what is that? What is your theory?
[ZB] My theory is that people can’t be vigilant all the time. And psychological research actually showed that interaction, creativity, and good mood are connected to increased gullibility.
And on the other hand, the same line of research showed that vigilance, and suspicion, and an analytical approach to solving problems is connected to bad mood and increased effort. So if we apply this, it means that being constantly vigilant is connected to being in a bad mood, which we don’t want!
And that is also not good for the atmosphere in a firm, for example. And it comes with increased effort, which means that we just tire. At some point, we have to relax. And if the message arrives at that moment, it’s quite plausible for everybody, and I mean really for everybody, including me, you, and every security expert in the world, to click on something!
[IOS] It also has some sort of implications for hackers, I suppose. If they know that a company just went IPO … or everyone got raises in the group, then you start phishing them and sort of leverage off their good moods!
Be Prepared: Secondary Defenses
[IOS] What would you suggest to an IT Security Group using this research in terms of improving security in the company?
[ZB] Well, I would suggest firstly to, you know, make sure that they understand the users, and humans on the whole, yeah? We security people tend to consider users as, you know, a nuisance: “Okay, they’re always doing the wrong things.”
Actually, we as security experts should protect people! And if the employees in the company were not there, then we wouldn’t have our job, yeah?
So what is important is to let humans be humans, with all their positive but also negative characteristics. And something like curiosity, for example, can be both.
And then, I would say, turn to technical defenses. Because to infect a company, one click is enough, yeah? And one should just assume that it will happen, because of all the things I was saying, even if people are security aware.
The question is, what happens after the click?
And there are not many examples of, you know, companies telling how they mitigated such things. The only one I was able to find was the [inaudible] security incident in 2011. I don’t know if you remember. They were hacked and actually had to exchange all the security tokens.
And at least they published part of what happened. And yeah, that was a very tiny phishing wave that maybe reached around 10 employees, and only one of them clicked. So they got infected, but they say they noticed it quite quickly because of other security measures.
I would say that’s what one should actually expect, and the best outcome one can hope for: that one notices in time.
[IOS] I agree that IT should be aware that this will happen, that the hackers will get in, and that you should have some secondary defenses. But I was also wondering, does it also suggest that perhaps some people should not have access to email?
I mean … does this lead to a test, and if some employees are just, you know, a little too curious, you just think, “You know what, maybe we take the email away from you for a while?”
[ZB] Well, you know, you can. I mean, a company can try this if they can sustain the business consequences of it, yeah? So if people don’t have email, then maybe some business processes will become less efficient, and also employees might become disgruntled, which is also not good.
I would suggest … I think that it’s not going to work! And at least it’s not a good trade-off. It might work, but it’s not a good trade-off, because if you implement a security measure that impairs business processes and makes people dissatisfied, then you have to count in the consequences.
[IOS] I agree with you that the best defense, I think, is really awareness, and then taking other steps. I wanted to ask you one or two more questions.
One of them is about what they call whale phishing, or perhaps spear phishing is another way to say it, which is going after not just any employee, but usually high-level executives.
And at least from some anecdotes I’ve heard, executives are also prone to clicking on spam just like anybody else. But your research also suggests that the more context you provide, the more likely you’ll get these executives to click.
[ZB] Okay, so if you have more context, of course you can make the email more plausible. And if you are targeting a particular person, there are a lot of possibilities for getting information about them, especially if it’s somebody well-known like an executive of a company.
And I think that there are also some personality traits of executives that might make them more likely to click. Because, you know, they didn’t get their positions by being especially cautious, avoiding risks, and saying safety first!
I think that executives may be even more risk-taking than, you know, the average employee, and more sure of themselves, and this might make the problem even more difficult. They also may not like being told by anybody about any kind of their behavior.
IoT and Inferred Preferences
[IOS] I have one more question since it’s so interesting that you also do research on IoT privacy and security. Over in the EU, we know that the new General Data Protection Regulation, which I guess is going to take place in another year, actually has a very broad definition of what sensitive data is. I’m wondering if you can just talk about some of the implications of this?
[ZB] Well, of course. IoT data, everything that is collected in our environment about us, can be used to infer our preferences with quite good precision.
So, for example, we had an experiment where we were able, just from room climate data, from temperature and air humidity, to determine whether a person is standing or sitting. And this kind of data, of course, can be used to target messages even more precisely.
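The climate-data inference she mentions can be illustrated with a crude presence detector: a person in a room nudges temperature and humidity upward, so even simple threshold rules on the deltas from an empty-room baseline leak occupancy. The baselines and thresholds below are invented, not taken from her experiment.

```python
# Hypothetical occupancy inference from room-climate data. A human body
# adds both heat and moisture to a room, so requiring a rise in both
# signals over an empty-room baseline gives a rough presence guess.
# Baselines and thresholds are invented for illustration.
def room_occupied(temp_c, humidity_pct, baseline_temp=20.0, baseline_hum=40.0):
    """Guess whether a room is occupied from two climate readings."""
    temp_rise = temp_c - baseline_temp
    hum_rise = humidity_pct - baseline_hum
    # Require both a heat and a moisture signal to reduce false alarms.
    return temp_rise > 1.0 and hum_rise > 3.0

# Three (temperature °C, relative humidity %) sensor readings.
readings = [(20.1, 40.5), (22.3, 45.2), (21.8, 44.0)]
occupied = [room_occupied(t, h) for t, h in readings]
```

Her point follows directly: once ambient sensors reveal when someone is present, or in what state, that signal can be folded into the timing of a targeted message.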
So for example, if you can infer a person’s mood, and if you accept from the psychological research that people in good moods are more likely to click, you might try to target people when they are in a better mood, yeah? Through the IoT data available to you, or through IoT data available to you from the company that you hacked.
The point is, you know, that targeting already works very well. You just need to know the name of the person, and maybe the company this person is dealing with!
[IOS] Zinaida this was a very fascinating conversation and really has a lot of implications for how IT security goes about their job. So I’d like to thank you for joining us on this podcast!
[ZB] You’re welcome. Thank you for inviting me!