
The Equifax Breach and Protecting Your Online Data


As we all know by now, the Equifax breach exposed the credit reports of over 140 million Americans. What's in these reports? The credit histories of consumers, along with their Social Security numbers. That makes this breach particularly painful.

The breach has also raised the profile of the somewhat mysterious big three national credit reporting agencies, or NCRAs — Experian and TransUnion are the other two. Lenders use NCRAs to help them decide whether to approve credit for auto loans, mortgages, home improvement, and of course new credit cards.

NCRAs Are Supposed to Protect Against Identity Theft

Let’s say the Equifax hackers go into phase two of their business plan, likely selling personally identifiable information (PII) to cyber gangs and others who will directly open up fake accounts. Of course, the stolen Social Security numbers make this particularly easy to attempt.

The bank or other lender extending credit will normally check the identity and creditworthiness of the applicant by contacting the NCRAs, who under red flag rules are supposed to help lenders spot identity theft. Oftentimes (but not always), the cyber thieves will use a different address than the victim’s when applying for credit, and this anomaly should be noticed by the NCRAs, who have the real home address.

The credit report should in theory then be flagged so future lenders will be on alert as well, and the financial company originally asking for the report is also warned of possible identity theft on the credit application.

I am not the first to observe that the irony level of the Equifax breach is in the red-zone – like at 11 or 12. The NCRAs are entrusted with our most personal financial data, and they’re the ones who are supposed to protect consumers against identity theft.

Unfortunately, an NCRA hacking is not a new phenomenon, and the big three have even been the target of class action suits brought by affected consumers under the Fair Credit Reporting Act (FCRA). To no one’s surprise, the lawsuits over the Equifax breach have already begun — the last count puts it at 23.

What Consumers Should Do

While we hope that red flags have been already placed on affected accounts, it’s probably best to take matters into your own hands. The FTC, the agency in charge of enforcing the FCRA, recommends a few action steps.

At a minimum, you should go to the Equifax link and see if your social security number is one that’s been exposed. If so, you can get free credit monitoring for a year — in short, you’ll know if someone tries to request credit in your name.

(Yes, I just did it myself, and discovered my number might have been compromised. I went ahead and subscribed for the credit monitoring.)

If you’re really paranoid, you can go a step further and put a credit freeze on your credit report. This restricts access to the credit report held by the NCRAs and, in theory, should prevent lenders from creating new accounts. Normally there would be a charge, but Equifax arranged to freeze its reports for free after outraged consumers protested.

None of these measures is foolproof, and clever attackers and thieves can get around these protections.

Online Protection With The Troy Hunt Course

Besides social security numbers, the hackers hauled away a lot of PII – names, addresses, and likely bank and credit card companies. As far as I can tell, passwords were not taken by the Equifax hackers.

Obviously, social security numbers are the most monetizable, but the other PII is still useful, particularly in phishing attacks. Readers of this blog know how we feel on the subject: any online information gained by hackers can and will be used against you!

So we should all be on alert for phish mails from what may appear to be our banks and other financial companies, and we should be wary of other scams.
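One easy tell worth internalizing — and simple enough to script — is that the domain a link displays should match the domain it actually points to. Here's a minimal illustrative sketch in Python; the sample URLs are invented and this heuristic is only one of many checks a real phishing filter would make:

```python
from urllib.parse import urlparse

def looks_suspicious(display_text: str, href: str) -> bool:
    """Flag a link whose visible text names one domain but whose
    href actually points somewhere else -- a classic phishing tell."""
    shown = urlparse(display_text if "://" in display_text
                     else "http://" + display_text).hostname
    actual = urlparse(href).hostname
    if not shown or not actual:
        return False  # nothing to compare; let other checks decide
    # Loosely compare the registered domains (last two labels).
    return shown.split(".")[-2:] != actual.split(".")[-2:]

# A link that claims to be your bank but points elsewhere:
print(looks_suspicious("www.mybank.com",
                       "http://mybank.accounts-verify.ru/login"))  # True
```

A legitimate link to the same registered domain would come back clean; the mismatch above is exactly the kind of thing Troy's lesson teaches you to eyeball before clicking.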

That’s where the indispensable security expert Troy Hunt can help us all! His Internet Security Basics video course is a favorite of ours because it breaks down online security into a series of simple lessons that non-technical folks can quickly understand and take action on.

I draw your attention to Lesson Three, “How to know when to trust a website”, which will be incredibly helpful in avoiding the coming wave of online scams.

Let’s not waste a crisis: it’s probably also a good time to review and change online passwords and understand what makes for a good password. Troy’s Lesson Two, “How to Choose a Good Password”, will bring you up to speed on passphrases and password managers.
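Troy covers this in depth, but as a quick back-of-the-envelope illustration of why passphrases beat short "complex" passwords, compare rough brute-force entropy estimates (a sketch, not a security audit):

```python
import math

def entropy_bits(pool_size: int, length: int) -> float:
    """Rough brute-force entropy: `length` symbols drawn uniformly
    at random from a pool of `pool_size` possibilities."""
    return length * math.log2(pool_size)

# An 8-character password from ~95 printable ASCII characters:
print(round(entropy_bits(95, 8), 1))    # ~52.6 bits
# A 5-word passphrase drawn from a 7,776-word Diceware-style list:
print(round(entropy_bits(7776, 5), 1))  # ~64.6 bits
```

The longer, easier-to-remember passphrase wins — assuming the words really are chosen at random, which is why a password manager or dice beat picking words yourself.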

The Equifax breach is as bad as it gets, but let’s not make it worse by letting cyber thieves exploit us again through lame phishing emails.

Learn how to protect yourself online with security pro Troy Hunt’s five-part course.

[Podcast] Phishing Researcher Zinaida Benenson, Transcript


I’m always reluctant to make a direct shameless plea to read our IOS content. But you must read the following transcript of my recent interview with Dr. Zinaida Benenson, a German security researcher. Last year she presented at Black Hat the results of a nicely designed experiment to measure the susceptibility of college students to phish mail. Let’s just say the students could use some extra tutoring when it comes to the dangers of the web.

She proved to my satisfaction that our curiosity about what lies ahead at the next link overrides our cautiousness and overall rational thinking. Benenson believes that non-college students would do just as poorly in a similar experiment.

What does this have to do with data security at your organization?

In short: some percentage of employees will always click on the worst, spammiest phish mail imaginable. And if you make the phish content even somewhat convincing, you’ll get even higher yields. Benenson has also shown that it likely doesn’t matter if you give employees extensive security awareness training. Someone will click, and that’s all the hackers need.

The more important point is to have secondary defenses in place that can spot an attack in progress — ransomware, data theft, denial of service — and limit the damage.

 

[Inside Out Security] Zinaida Benenson is a senior researcher at the University of Erlangen-Nuremberg. Her research focuses on the human factors side of privacy and security, and she also explores IoT security — two topics we are also very interested in at the Inside Out Security blog. Zinaida recently completed research into phishing. If you were at last year’s Black Hat Conference, you heard her discuss these results in a session called How To Make People Click On Dangerous Links Despite Their Security Awareness.

So, welcome Zinaida.

[Zinaida Benenson] Okay. So my group is called Human Factors In Security And Privacy. But also, as you said, we are also doing technical research on the internet of things. And mostly when we are talking about human factors, we think about how people make decisions when they are confronted with security or privacy problems, and how can we help them in making those decisions better.

 

[IOS] What brought you to my attention was the phishing study you presented at Black Hat, I think that was last year. And it was just so disturbing, after reading some of your conclusions and some of the results.

But before we talk about them, can you describe the specific experiment you ran, phishing college students using both email and Facebook?

The Experiment

[ZB] So in a nutshell, we sent, to over 1,000 university students, an email or a personal Facebook message from non-existing persons with popular German names. And these messages referred to a party last week and contained a link to supposed pictures from the party.

In reality, this link led to an “access denied” page, but the links were individual. So we could see who clicked, and how many times they clicked. And later, we sent to them a questionnaire where we asked for reasons of their clicking or not clicking.

 

[IOS] Right. So basically, they were told that they would be in an experiment but they weren’t told that they would be phished.

[ZB] Yes. So recruiting people for, you know, cyber security experiments is always tricky because you can’t tell them the real goal of the experiment — otherwise, they would be extra vigilant. But on the other hand, you can’t just send to them something without recruiting them. So this is an ethical problem. It’s usually solved by recruiting people for something similar. So in our case, it was a survey for… about the internet habits.

 

[IOS]  And after the experiment, you did tell them what the purpose was?

[ZB] Yes, yes. So this is called a debriefing, and this is also part of the ethical requirements. So we sent them an email where we described the experiment and also some preliminary results, and also described why it could be dangerous to click on a link in an email or a Facebook message.

 

[IOS] Getting back to the actual phish content, the phish messaging content, in the paper I saw, you showed the actual template you used. And it looked — I mean, as we all get lots of spam — to my eyes, and I think a lot of people’s eyes, it just looked like really obvious spam. Yet you achieved very respectable click rates, and I think for Facebook you got a very high rate — almost, was it 40%? — of people clicking what looked like junk mail!

[ZB] We had a bare IP address in the link, which should have alerted some people. I think it actually alerted some who didn’t click. But, yes, depending on the formulation of the message, we had 20% to over 50% of email users clicking.

And independently of the formulation of the message, we had around 40% of Facebook users clicking. So in all cases, it’s enough, for example, to get a company infected with malware!

50% Clicked on Emails

[IOS] That is surprising! But then you also learned, by surveying them, the reasons they were clicking. And I was wondering if you can share some of those results?

[ZB] So the reasons. The most important or most frequently stated reason for clicking was curiosity. People were amused that the message was not addressed to them, but they were interested in the pictures.

And the next most frequently stated reason was that the message actually was plausible because people actually went to a party last week, and there were people there that they did not know. And so they decided that it’s quite plausible to receive such a message.

 

[IOS] However, it was kind of a very generic looking message. So it’s a little hard to believe, to me, that they thought it somehow related to them!

[ZB] We should always consider the target audience. And this was students, and students communicate informally. Quite often, people have friends and don’t even know their last names. And of course, if I were sending such a phishing email to, say, employees of a company, or to the general population, I wouldn’t formulate it like this. So our targeting actually worked quite well.

 

[IOS] So it was intentional that it looked informal — something that a college student might send to another one. “Hey, I saw you at a party.” Now, I forget, was the name of the person receiving the email mentioned in the content or not? It just said, “Hey”?

[ZB] We had actually two waves of the experiment. In the first wave, we mentioned people’s names and we got over 50% of email recipients to click. And this was very surprising for us, because we actually expected that on Facebook people would click more, just because people share pictures on Facebook, and it’s easier to find a person on Facebook, or they know, okay, there is a student, say her first name is Sabrina or whatever.

And so we were absolutely surprised to learn that over 50% of email recipients clicked in the first wave of the experiment! And we thought, “Okay, why could this be?” And we decided that maybe it was because we addressed people by their first names. So it was like, “Hey, Anna.”

And so we decided to have the second wave of the experiment where we did not address people by their first names, but just said, “Hey.” And so we got the same, or almost the same, clicking rate on Facebook. But a much lower clicking rate on email.

 

[IOS] And I think you had an explanation for that — a theory about why that may be, why the rates were similar [for Facebook]?

[ZB] Yeah. So on Facebook, it seems that it doesn’t matter if people are addressed by name. Because as I said, the names of people on Facebook are very salient. So when you are looking up somebody, you can see their names.

But if somebody knows my email address and knows my name, it might seem to some people more plausible. But this is just… we actually didn’t have any people explaining this in the messages. Also, we got a couple of people on email saying, “Yeah, well, we didn’t click that. Well, it didn’t address me by name, so it looked like spam to me.”

So actually… names in emails seem to be important, even though at our university, email addresses consist of first name, dot, last name, at the university domain.

 

[IOS] I thought you also suggested that because Facebook is a community, that there’s sort of a higher level of trust in Facebook than in just getting an email. Or am I misreading that?

[ZB] Well, it might be. It might be like this. But we did not check for this. And actually, there is other research. So some other people did research on how much people trust Facebook and Facebook members. And yeah, people differ quite a lot, and I think that people use Facebook not because they particularly trust it, but because it’s very convenient and very helpful for them.

Curiosity and Good Moods

[IOS] Okay. And so what do you make of this curiosity as a first reason for clicking?

[ZB] Well, first of all, we were surprised by how honestly people answered, saying, “Oh, I was curious about pictures of unknown people and an unknown party.” It’s a negative personality trait, yeah? So it was very good that we had an anonymous questionnaire. Maybe it made people, you know, answer more honestly. And I think that curiosity, in this case, was kind of negative — a negative personality trait.

But actually, if you think about it, it’s a very positive personality trait. Because curiosity and interest motivate us to, for example, to study and to get a good job, and to be good in our job. And they are also directly connected to creativity and interaction.

 

[IOS] But on the other hand, curiosity can have some bad results. I think you also mentioned that even for those who were security aware, it didn’t really make a difference.

[ZB] Well, we asked people in the questionnaire — before we revealed the experiment — whether they clicked or not. We also asked them a couple of questions related to security awareness, like, “Can one be infected by a virus if one clicks on an attachment in an email, or on a link?”

And when we tried to statistically correlate the answers to these awareness questions with people’s reports on whether they clicked or not, we didn’t find any correlation.

So this result is preliminary, yeah. We can’t say with certainty, but it seems like awareness doesn’t help a lot. And again, I have a hypothesis about this, but no proof so far.

[IOS] And what is that? What is your theory?

[ZB] My theory is that people can’t be vigilant all the time. And psychological research actually showed that interaction, creativity, and good mood are connected to increased gullibility.

And on the other hand, the same line of research showed that vigilance, and suspicion, and an analytical approach to solving problems is connected to bad mood and increased effort. So if we apply this, it means that being constantly vigilant is connected to being in a bad mood, which we don’t want!

And which is also not good for the atmosphere, for example, in a firm. And with increased effort, which means that we just get tired. At some point, we have to relax. And if the message arrives at this time, it’s quite plausible for everybody — and I mean really for everybody, including me, you, and every security expert in the world — to click on something!

 

[IOS] It also has some sort of implications for hackers, I suppose. If they know that a company just went IPO …  or everyone got raises in the group, then you start phishing them and sort of leverage off their good moods!

Be Prepared: Secondary Defenses

[IOS] What would you suggest to an IT Security Group using this research in terms of improving security in the company?

[ZB] Well, I would suggest firstly, you know, to make sure that they understand the users and humans on the whole, yeah? We security people tend to consider users as, you know, a nuisance — like, “Okay, they’re always doing the wrong things.”

Actually, we as security experts should protect people! And if the employees in the company were not there, then we wouldn’t have our job, yeah?

So what is important is to let humans be humans… with all their positive but also negative characteristics. Something like curiosity, for example, can be both.

And I would say, turn to technical defenses. Because to infect a company, one click is enough, yeah? And one should just assume that it will happen, because of all these things I was saying, even if people are security aware.

The question is, what happens after the click?

And there are not many examples of, you know, companies telling how they mitigate such things. So the only one I was able to find was the RSA security incident in 2011. I don’t know if you remember. They were hacked and had to actually exchange all the security tokens.

And they published at least a part of what happened. And yeah, that was a very tiny phishing wave that maybe reached around 10 employees, and only one of them clicked. So they got infected, but they say they noticed it quite quickly because of other security measures.

I would say that that’s what one should actually expect, and that’s the best outcome one can hope for — if one notices in time.

 

[IOS] I agree that IT should be aware that this will happen — the hackers will get in — and you should have some secondary defenses. But I was also wondering, does it also suggest that perhaps some people should not have access to email?

I mean … does this lead to a test  … .and if some employees are just, you know, a little too curious, you just think, “You know what, maybe we take the email away from you for a while?”

[ZB] Well you know, you can. I mean a company can try this if they can sustain the business consequences of this, yeah? So if people don’t have emails then maybe some business processes will become less efficient and also employees might become disgruntled which is also not good.

I would suggest that… I think that it’s not going to work! And at least it’s not a good trade-off. It might work, but it’s not a good trade-off, because if you implement a security measure that impairs business processes and makes people dissatisfied, then you have to count in the consequences.


 

[IOS] I’m agreeing with you that the best defense I think is awareness really and then taking other steps. I wanted to ask you one or two more questions.

One of them is about what they call whale phishing — or spear phishing is perhaps another way to say it — which is going after not just any employee, but usually high-level executives.

And at least from some anecdotes I’ve heard, executives are also prone to clicking on spam just like anybody else, but your research also suggests that the more context you provide, the more likely you’ll get these executives to click.

[ZB] Okay, so if you have more context, of course you can make the email more plausible. And of course, if you are targeting a particular person, there are a lot of possibilities to get information about them, especially if it’s somebody well-known like an executive of a company.

And I think that there are also some personality traits of executives that might make them more likely to click. Because, you know, they didn’t get their positions by being especially cautious, not taking risks, and saying safety first!

I think that executives may be even more risk-taking than, you know, the average employee, and more sure of themselves, and this might make the problem even more difficult. They also may not like being told by anybody about any kind of their behavior.

IoT and Inferred Preferences

[IOS] I have one more question, since it’s so interesting that you also do research on IoT privacy and security. Over in the EU, we know that the new General Data Protection Regulation, which I guess is going to take effect in another year, actually has a very broad definition of what sensitive data is. I’m wondering if you can talk about some of the implications of this?

[ZB] Well, of course, IoT data — everything that is collected in our environment about us — can be used to infer our preferences with quite good precision.

So… for example, we had an experiment where we were able, just from room climate data — from the temperature and the air humidity — to determine if a person is, you know, standing or sitting. And this kind of data of course can be used to target messages even more precisely.

So for example, if you can infer a person’s mood, and if you buy the psychological research saying that people in good moods are more likely to click, you might try to target people in a better mood, yeah? Through the IoT data available to you, or through IoT data available to you through the company that you hacked.

Yeah… the point is, you know, that targeting already works very well. You just need to know the name of the person and maybe the company this person is dealing with!

 

[IOS] Zinaida, this was a fascinating conversation and really has a lot of implications for how IT security goes about their job. So I’d like to thank you for joining us on this podcast!

[ZB] You’re welcome. Thank you for inviting me!

[Podcast] Dr. Zinaida Benenson and the Human Urge to Click




Dr. Zinaida Benenson is a researcher at the University of Erlangen-Nuremberg, where she heads the “Human Factors in Security and Privacy” group. She and her colleagues conducted a fascinating study into our spam clicking habits. Those of you who attended Black Hat last year may have heard her presentation on How to Make People Click on a Dangerous Link Despite their Security Awareness.

As we’ve already pointed out on the IOS blog, phishing is a topic worthy of serious research. Her own clever study adds valuable new insights. Benenson conducted an experiment in which she phished college students (ethically, but without their direct knowledge) and then asked them why they clicked.

In the first part of our interview with Benenson, we discuss how she collected her results, and why curiosity seems to override security concerns when dealing with phish mail. We learned from Benenson that hackers take advantage of our inherent curiosity. And this curiosity about others can override the analytic security-aware part of our brain when we’re in a good mood!

So feel free to (safely) click on the above podcast to hear the interview.


I Click Therefore I Exist: Disturbing Research On Phishing


Homo sapiens click on links in clunky, non-personalized phish mails. They just do. We’ve seen research suggesting a small percentage are simply wired to click during their online interactions. Until recently, the “why” behind most people’s clicking behaviors remained something of a mystery. We now have more of an answer to this question based on findings from German academics. Warning:  IT security people will not find their conclusions very comforting.

Attention Marketers: High Click-Through Rates!

According to research by Zinaida Benenson and her colleagues, the reasons for clicking on phish bait are based on an overall curiosity factor, and then secondarily, on content that connects in some way to the victim.

The research group used the following email template in the experiment, and sent it to over 1200 students at two different universities:

Hey!

The New Year’s Eve party was awesome! Here are the pictures:

http://<IP address>/photocloud/page.php?h=<participant ID>

But please don’t share them with people who have not been there!

See you next time!

<sender’s first name>

The message, by the way, was blasted out during the first week of January.
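The `h=<participant ID>` parameter is what made each link individual, letting the researchers attribute every click — and every repeat click — to a single recipient. A hypothetical sketch of how such per-participant tracking links might work (the function names, participant names, and IP address are invented, not the researchers'):

```python
import uuid

def make_tracking_links(participants,
                        base="http://203.0.113.5/photocloud/page.php"):
    """Give every participant a unique token so clicks can be attributed."""
    token_to_person = {}
    links = {}
    for person in participants:
        token = uuid.uuid4().hex
        token_to_person[token] = person
        links[person] = f"{base}?h={token}"
    return links, token_to_person

def record_click(token, token_to_person, click_log):
    """Called by the 'access denied' page: log who clicked, and how often."""
    person = token_to_person.get(token)
    if person is not None:
        click_log[person] = click_log.get(person, 0) + 1
    return person

links, lookup = make_tracking_links(["anna", "ben"])
log = {}
# Simulate Anna clicking her link twice:
token = links["anna"].split("h=")[1]
record_click(token, lookup, log)
record_click(token, lookup, log)
print(log)  # {'anna': 2}
```

The server-side log, keyed by token rather than by name, is also what let the researchers keep the click data pseudonymous until the follow-up questionnaire.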

Anybody want to guess the overall click-through rate for this spammy message?

A blazing 25%.

Marketers everywhere are officially jealous of this awesome metric.

Anyway, the German researchers followed up with survey questions to find the motivations behind these click-aholics.

Of those who responded to the survey, 34% said they were curious about the party pictures linked to in the mail, another 27% said the message fits the time of year, and another 16% said they thought they knew the sender based on just the first name.

To paraphrase one of those cat memes, “Humans is EZ to fool!”

The clever German researchers conducted a classic cover-story design in their experiment. They enlisted students to ostensibly participate in a study on Internet habits and offered online shopping vouchers as an incentive. Nothing was mentioned about phish mails being sent to them.

And yes, after the real study on phishing was completed, the student subjects were told the reason for the research, the results, and given a good stern warning about not clicking on silly phish mail links.

Benenson also gave a talk on her research at last year’s Black Hat. It’s well worth your time.

Phishing: The Ugly Truth

At the IOS blog, we’ve also been writing about phishing and have been following the relevant research. In short: we can’t say we’re surprised by the findings of the German team, especially as it relates to clicking on links to pictures.

The German study seems to confirm our own intuitions: people at corporate jobs are bored and are finding cheap thrills by gazing into the private lives of strangers.

Ok, you can’t change human nature, etc.

But there’s another, more disturbing conclusion related to the general context of the message. The study strongly suggests that the more you know and can say about the target in the phish mail, the more likely it is that they will click. And in fact, in an earlier study by Benenson, a 56% click rate was achieved when the phish mail recipient was addressed by name.

Here’s what they had to say about their latest research:

 … fitting the content and the context of the message to the current life situation of a person plays an important role. Many people did not click because they learned to avoid messages from unknown senders, or with an unexpected content  … For some participants, however, the same heuristic (‘does this message fit my current situation?’) led to the clicks, as they thought that the message might be from a person from their New Year’s Eve party, or that they might know the sender.

 

Implications for Data Security

At Varonis, we’ve been preaching the message that you can’t expect perimeter security to be your last line of defense. Phishing, of course, is one of the major reasons why hackers find it so easy to get inside the corporate intranet.

But hackers are getting smarter all the time, collecting more details about their phishing targets to make the lure more attractive. The German research shows that even poorly personalized content is very effective.

So imagine what happens if they gain actual personal preferences and other details from observing victims on social media sites or, perhaps, through a previous hack of another web site you engage with.

Maybe a smart hacker who’s been stalking me might send this fiendish email to my Varonis account:

Hey Andy,

Sorry I didn’t see you at Black Hat this year! I ran into your colleague Cindy Ng, and she said you’d really be interested in research I’m doing on phishing and user behavior analytics. Click on this link and let me know what you think. Hope things are going well at Varonis!

Regards,

Bob Simpson, CEO of Phishing Analytics

Hmmm, you know I could fall for something like this the next time I’m in a vulnerable state.

The takeaway lesson for IT is that they need a secondary security defense, one that monitors hackers when they’re behind the firewall and can detect unusual behaviors by analyzing file system activity.
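As a toy illustration of that idea — and only a toy; real user behavior analytics models far more than raw counts — flagging users whose daily file-access volume spikes far above their own historical baseline might look like this:

```python
from statistics import mean, stdev

def flag_anomalies(history, today, threshold=3.0):
    """Flag users whose file-access count today is more than `threshold`
    standard deviations above their own historical mean."""
    flagged = []
    for user, counts in history.items():
        mu, sigma = mean(counts), stdev(counts)
        score = (today.get(user, 0) - mu) / (sigma or 1.0)
        if score > threshold:
            flagged.append(user)
    return flagged

history = {
    "alice": [40, 55, 48, 52, 45],   # typical daily file touches
    "bob":   [30, 28, 35, 33, 31],
}
today = {"alice": 50, "bob": 900}    # bob suddenly reads 900 files
print(flag_anomalies(history, today))  # ['bob']
```

Bob’s spike is exactly what ransomware encrypting a share, or an intruder exfiltrating documents, looks like from the file system’s point of view — a signal you get even when the phish itself sailed past the perimeter.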

Want to find out more? Click here!

Did you click? Good, that link doesn’t point to a Varonis domain!

Another conclusion of the study is that your organization should also undertake security training, especially for non-tech savvy staff.

We approve as well: it’s a worthwhile investment!

Cyber Espionage: Could Russian and Korean Hackers Have Been Stopped (With UBA)?

Once upon a time, breaking into the Democratic National Committee required non-virtual thieves picking real door locks and going through file cabinets. And stealing the design secrets of a fighter jet was considered a “black bag” job that required a spy who knew how to work a tiny spy camera. Then the stealthy spy could pass the microfilm to a courier by exchanging identical briefcases.

Times have changed.

In the last few days, two stories have shown us, if we still needed more evidence, how modern espionage has evolved into hacking. Cyber spies can conduct first-class intelligence operations without leaving their desks at the IT departments of their Dr. Evil-ish security agencies.

Spies Like Us

Yesterday, The Washington Post said that Russian government hackers had penetrated the DNC’s computer network.

According to security experts who were brought in by the DNC, the cyber spies thoroughly compromised the DNC’s computers and were able to read all email and chat traffic.

Unfortunately, this news is hardly a surprise. In fact, we predicted this would happen.

It’s believed that two separate and perhaps competing Russian hacking groups were involved, with one of them having broken into the DNC network as far back as last summer. No financial information about donors was taken. The hackers were engaging in espionage, gaining access to the DNC’s opposition research on Donald Trump.

And then on the Korean peninsula, South Korean officials said 40,000 documents related to the wing design of the US’s F-15 fighter jet had been taken by their friendly neighbors to the north.

Stealthy Attacks

We have more information about the Russian spies, so let’s look at that incident first.

One of the Russian cyber groups involved in the DNC was identified as Cozy Bear. This is the same group responsible for attacks at the White House. The second group is called Fancy Bear, and they have been known to exploit zero-day vulnerabilities.

Security experts say that both groups have also used phishing attacks in the past. Cozy Bear and Fancy Bear are believed to be connected to Russian intelligence agencies.

At this point, though, we’re not sure exactly how the gangs broke into the DNC network.

However, we do know that once in, they inserted remote access trojans (RATs) and implants that allowed them to remotely log keystrokes, execute commands, and transfer files. The Russian cyber gangs also used Command and Control (C2) techniques, which embed the commands to control the RATs in an HTTP stream.

As far as IT admins were concerned, some users at the DNC were communicating with one or more web sites, when in fact these C2 web sites were run by the cyber gangs and used to orchestrate the attack.
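
Defenders can turn this pattern against the attacker. C2 implants often “beacon” home on a machine-regular schedule, while human-driven web traffic is far more erratic. Here is a minimal sketch of that heuristic (our illustration, not a description of any specific detection product):

```python
from statistics import pstdev

# Illustrative heuristic: an implant phoning its C2 server tends to make
# requests at near-constant intervals; human browsing does not.

def looks_like_beaconing(timestamps, max_jitter=2.0):
    """timestamps: sorted request times (in seconds) to one external host.
    Low variance in the gaps between requests suggests an automated beacon."""
    if len(timestamps) < 4:
        return False  # too few requests to judge
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) <= max_jitter
```

A real system would also weigh request sizes and destinations, but even this crude gap-variance test separates a 60-second beacon from ordinary browsing.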

The Russian cyber spies also hid their actions by using PowerShell commands — malware-less hacking. And they also stole credentials with Mimikatz, which was run as a stealthy PowerShell script, in a Pass-the-Hash/Pass-the-Ticket attack.

Putting on our intelligence analyst’s hat, we can say with good confidence that the North Koreans used similar techniques. For example, a phish mail involving fake Apple IDs was reportedly the initial entry point into Sony during Pyongyang’s massive doxing of that company.

The current attack against Korean Air Lines began in 2014. The North Korean cyber spies likely used the aforementioned stealth techniques to keep their implants and document exfiltration activities below the radar.

Spy Lessons

If you’ve been following along, none of the above should, unfortunately, be new to you. In fact, for anyone who’s been keeping track of hacking incidents over the last few years, these techniques and tools are just familiar parts of the landscape.

We’ve known for a very long time that smart hackers get around perimeter defenses using phishing, SQL injection, or zero-day vulnerabilities. And once in, they have many ways to remain stealthy and avoid triggering virus scanners.

Instead of trying to build a higher wall, a more practical approach is to spot the hackers when they’re inside and then prevent them from accessing and exfiltrating sensitive data.

In both the DNC and Korean Air Lines incidents, the IT teams eventually noticed some anomalies. However, by that point, it was far too late to prevent the surveillance of internal emails and the removal of data.

A far better solution is to automate the anomaly detection, so that when files are accessed at unusual times for a given user, or PowerShell executables are launched by users who rarely or never run these apps, alarms will go off.

We are, of course, talking about User Behavior Analytics (UBA). As these incidents teach us, the protection of sensitive data is too important to be based on hunches or the blind luck of an alert IT person looking at audit trails.

Instead, UBA’s predictive algorithms can compare current access patterns against historical records in order to spot the hackers closer to real time.
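
To make the baseline idea concrete, here is a toy sketch (ours, not how any particular UBA product is implemented): learn the hours at which each user has historically been active, then flag accesses that fall outside that history.

```python
from collections import defaultdict
from datetime import datetime

# Toy baseline: which hours of the day has each user historically worked?

def build_baseline(events):
    """events: iterable of (user, iso_timestamp) pairs from an audit trail."""
    baseline = defaultdict(set)
    for user, ts in events:
        baseline[user].add(datetime.fromisoformat(ts).hour)
    return baseline

def is_anomalous(baseline, user, ts):
    """Flag an access at an hour never before seen for this user."""
    return datetime.fromisoformat(ts).hour not in baseline.get(user, set())
```

Production systems use richer statistical models, of course, but the principle is the same: the alert fires on deviation from learned behavior, not on a static rule.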

Think of UBA as giving your IT group the power to spy on hackers and cyberspies. It’s far more efficient and cheaper than training and outfitting an agent. Sorry, 007!

Got UBA? Learn more about how Varonis can protect your data.

CEO Phishing: Hackers Target High-Value Data


Humans like to click on links. Some of us are better at resisting the urge, some worse. In any case, you’d also expect that people in the higher reaches of an organization — upper-level executives and the C-suite — would be very good at resisting phish bait.

Harpooning the Whale

Alas, even the big phish like to chomp on the right links.

We now have even more evidence that cyber thieves are getting better at fine-tuning their attacks against high-value targets — known as “whale phishing”.

The security firm Digitalis tells us that attackers are using social media to research executive habits (say, an interest in cricket) and then forge an email, embedded with a malware payload, from a business associate (also discovered through social media) mentioning the cricket match.

This is business-class phishing!

The attraction of the corporate whale is that they are likely to have incredibly valuable information on their laptops. Not the commodity PII involved in most data breaches, but intellectual property and other sensitive data – deals in progress, key customers, confidential financial data, or embarrassing emails.

It’s the kind of information that could be sold to competitors or, better yet, doxed unless a ransom is paid.

We’ve long known that phishing attacks that are based on better research are very effective. The more the attacker knows about you, the more likely you are to trust the sender.

Which would you click on: an email sent by a Nigerian finance minister regarding unclaimed funds, or an email from your bank — from your local branch — saying there’s been an adjustment to your balance, and you’ll need to look at the attached PDF? Enough said.

Executive Privacy

Digitalis also found that executives, like the rest of us, are not very good about their privacy settings on Facebook and other social networking sites. They found that less than half of those surveyed restrict who can see their profiles. And only 36% keep up with their social settings.

Should executives simply forgo social media?

I’ve heard experts say that if C-levels and other execs don’t set up their own accounts, the hackers will do the work for them by establishing a forged identity and squatting on their property. This can then lead to very sophisticated phishing.

My advice: as an executive, you should take charge of your social persona. This leads to one of the points of the Digitalis research: executives (and the rest of us as well) should never reveal more than they have to on these social networks.

As in the file system world, always change from the default “everyone” setting, and restrict information to just friends.

And since social networking companies — well at least one — have had a bad habit of tweaking these settings, you should, as Digitalis suggests, periodically revisit your account.

Concierge Security?

Security pros have pointed out that social networks, by design, will always share some information by default, and this typically includes who your friends are.

Even with very restrictive settings, a smart attacker can still use this friend information to make very good guesses about the habits, interests, and preferences of the target account—say, the CFO of the company.

Welcome to our world!

There are no easy answers here when it comes to protecting executives from attacks. It’s essentially the problem organizations face with hackers in general: they will get in!

The more important point is to monitor for unusual system and file events to reduce the risks.

In a past blog post, I said it’s worth devoting IT security resources to monitoring the computer activities of corporate VIPs. With this latest research, I’ll double down on that position.

And if the company is large enough, this could include dedicated staff — perhaps a security concierge service.

In any case, it does make sense to take any alarms and notifications involving the computer accounts of C-levels very seriously. Don’t view them as likely false positives.

It’s worth tracking them all down until they’re resolved.

And Hotels Have WiFi Issues Too!


I would like to say that hotel data security problems end at compromised PoS systems. Unfortunately, the headlines tell another story. Last year, researchers at a security firm discovered a serious vulnerability in a router commonly used by hotels.

The researchers noted that one of the processes running on some models of an InnGate router had no authentication enabled by default — that is, no password was required for access.

In theory, it would be possible for a hacker to probe IP addresses, find the router, log into the device, and then start changing configuration parameters.

Smart hackers, for example, could enter the device, lower the encryption levels, and download credentials and other data. I suppose it would be even easier for a vacationing hacker to simply attack the router from behind the firewall.

IoHT, Internet of Hotel Things

According to the researchers, the specific router model was found in eight of the top ten hotels.

There’s more bad news.

I mentioned in my last post on PoS malware in hotels that your guest room may no longer be your castle. I didn’t realize how true that was until I learned how everything in hotel IT infrastructure is connected.

And that does include the electronic key card system that controls entry into the individual rooms.

This gets strange very quickly.

It is possible for a hacker to reprogram a specific door of a room to allow easy entry. A cyber thief from the virtual world could become a stealthy cat burglar in the real world. Perhaps, she could target rooms where she suspects there’s jewelry, cash, or other valuables.

This router vulnerability is yet another reason to keep your valuables in your hotel room’s security safe!

As far as we know, the security safes’ electronic locks are not on the hotel’s internal network. At least I hope so.

Protect Yourself

It’s not clear whether there have been incidents involving this specific vulnerability in the wild. However, there were those mysterious whale phishing attacks against executives staying in hotels in East Asia.

Somehow the cyber thieves were able to launch a phishing attack over the internal hotel network. The lure was a screen that suddenly appeared asking the user to renew an Adobe license. These attacks were considered the work of the DarkHotel gang.

The big question was how the attackers got inside. It’s possible they exploited this router vulnerability or some other zero-day opening.

Let’s agree: your hotel is not a castle like your home, but more like a semi-public area.

For employees and especially executives who are staying at hotels, here’s my advice:

  • Use your VPN if possible. It’s safer to conduct work over an encrypted connection. Don’t assume the hotel network is secure.
  • Stay away from the executive office centers, where there are computers available for guests. There have been instances in which malware — keyloggers — has been found on these shared computers.
  • Physically lock your laptop if possible, and make sure your hard drive is password protected. With hackers potentially having access to the card key system, you’ll also have to worry about real physical theft as well.

Hey hotel IT departments! Varonis User Behavior Analytics can spot and stop malware at various points in the kill chain. Learn more today!


Social Engineering Remains a Top Cybersecurity Concern


In 2016, the top cyberthreat for IT pros, at least according to ISACA’s Cybersecurity Snapshot, is social engineering.  It has always been a classic exploit amongst the hackerati. But in recent years it has become a preferred entry technique.

Instead of breaking into a network, an attacker merely has to manipulate those who have access to the victim’s data, or even the victim, into giving away credentials – “Is your Requester Code 36472? No, it’s 62883.” This is technically a salami attack: it works by fooling several people, so the attacker has enough slices of information to piece together the credentials needed to access the user’s account.

In previous blog posts, we’ve covered a few ways to help guard against social engineering. But because social engineering can’t be blocked by technology alone, humans remain the weakest link in this security problem.

“People inherently want to be helpful and therefore are easily duped,” said Kevin Mitnick, who was once the country’s most wanted computer criminal. “They assume a level of trust in order to avoid conflict.”

As IT security groups allocate their resources to defend against major security threats, they shouldn’t forget to continuously educate end users on social engineering methods so they don’t become easy targets to exploit.

Let’s review the most common forms of social engineering:

  1. Phishing

One of the easiest ways to become infected with malware – Ransomware anyone? – is through phishing. With a phishing attack, the bait is an email containing personal information hackers have collected through prior reconnaissance. Crafted to look like an official communication from a legitimate source (FedEx, UPS), the phish mail is intended to catch the victims off guard, duping them into clicking on a link that takes them to an illegitimate web site or opening a file attachment containing a malware payload.

Often the hackers will focus on high-value targets, bamboozling executives and other C-levels. The goal in “whale phishing” is usually to extract IP or other very confidential and possibly embarrassing information.

Educate your staff! Don’t click on links or open attachments or emails from people you don’t know or companies you don’t do business with.
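
Simple automated checks can back up the training. One classic phishing tell is a link whose visible text names one domain while the underlying href points somewhere else entirely. As an illustrative sketch (ours, not from the ISACA report):

```python
import re
from urllib.parse import urlparse

# Flag HTML links whose displayed text claims one domain but whose
# href actually points to a different host.

LINK_RE = re.compile(r'<a\s+href="([^"]+)"[^>]*>([^<]+)</a>', re.IGNORECASE)

def suspicious_links(html):
    flagged = []
    for href, text in LINK_RE.findall(html):
        href_host = urlparse(href).hostname or ""
        # If the visible text looks like a domain, compare it to the real host.
        m = re.search(r'([a-z0-9-]+\.[a-z]{2,})', text.lower())
        if m and m.group(1) not in href_host:
            flagged.append((text.strip(), href))
    return flagged
```

Mail gateways perform far more elaborate versions of this comparison, but even this toy check catches the “www.fedex.com” link that really resolves to an attacker’s server.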


  2. Pretexting aka Impersonation

Pretexting is really a more direct form of phishing that relies on old-fashioned person-to-person interaction. Typically, a phone call is involved. Fun fact: Hannibal Lecter knew how to pretext!

While Anthony Hopkins may have impersonated a temporary employee from his jail cell, real-life pretexters can impersonate a fellow employee, IT representative, or vendor. Their goal is to gather confidential or other sensitive information – SSN, bank account, mother’s maiden name, or the size of your savings and investment accounts. Today, attackers are also outsourcing the pretexting work to companies that will make the calls for them. Talk about progress!

Pretexting became such a problem that in 1999 the Gramm-Leach-Bliley Act (GLBA), better known for improving financial data security, flat-out made pretexting illegal.

The statute applies to all organizations that handle financial data, including banks, brokerages, credit unions, income tax preparers, debt collection agencies, real estate firms, and credit reporting agencies. Take that, Hannibal Lecter.

However, GLBA has not stopped a new generation of pretexters from selling the data they’ve collected to data brokers, who may then resell it to private investigators or even insurance companies.

  3. Baiting

Baiting is like phishing, but the attacker entices the victim with an exciting offer. It could be in the form of a free download – music, movie, book – or a USB flash drive labeled, “Confidential Company Roadmap”.

Once the victim’s curiosity or greed leads to a download or use of a device, the victim’s computer gets infected with malware, enabling the attacker to infiltrate the network.

  4. Quid Pro Quo (This for That)

Similar to baiting, a quid pro quo attack also lures the victim, but with a practical benefit – usually a service – such as “Please help me with my computer!” Instead of fixing the problem, the attacker installs malware on the victim’s computer.

  5. Piggybacking (or Tailgating)

Piggybacking happens in the non-virtual world, involving a person tagging along with a legitimate employee who is authorized to enter a restricted area.

Solution? Implement one of the most basic security tips: set your PC to lock after inactivity!


Want to guard against social engineering? Make sure least privilege is built into your authorization processes.


Image source: ISACA’s January 2016 Cybersecurity Snapshot, Global Data

Carbanak Attack Post-Mortem: Same Old Phish

The Kaspersky report about Carbanak malware released last month led to some pretty frightening headlines, usually starting with “Billion dollar heist…”.  Now that we’re over a month into reviewing some of the forensic evidence, it appears that Carbanak is less sophisticated than many first thought. At its heart, this was a spear phishing attack that targeted bank employees.

Once the bank worker opened the email and downloaded the attachment, the malware simply acted as a smart keylogger. The remote hackers scooped up the passwords they needed to access the special internal money transfer apps, and then they entered the banking business themselves. Using their remote Command and Control (C2) servers, they eventually could get the banking software to enable withdrawals from ATMs.

Indicators of Compromise

The report covers most of the major Indicators of Compromise (IOC), which are what security gurus look for to see if a system has been infected.

Here’s what we know.

After an employee clicks on the executable in the email, Carbanak copies itself into windows\system32\com and renames the file as svchost.exe. This is standard operating procedure for hackers: use a valid Windows system executable—‘svchost’—so that the malware will be part of any whitelist IT admins have set up in a Windows User Account Control (UAC) policy — see our post on this topic.
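
That trick also suggests a simple detection. As a hedged sketch (ours, not from the Kaspersky report): the legitimate svchost.exe lives only in the System32 (or SysWOW64) directory, so a process image named svchost.exe anywhere else, including the System32\com subdirectory Carbanak used, deserves an alert.

```python
import ntpath

# The directories where a genuine svchost.exe is allowed to live.
LEGIT_DIRS = {r"c:\windows\system32", r"c:\windows\syswow64"}

def is_masquerading_svchost(image_path):
    """Flag a process image that borrows the svchost.exe name
    but runs from a non-standard directory."""
    directory, name = ntpath.split(image_path.lower())
    return name == "svchost.exe" and directory not in LEGIT_DIRS
```

An endpoint monitor that applies this kind of path check to process-creation events would have flagged Carbanak’s copy immediately, whitelist or not.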

At this point, the malware sneakily communicates with the hackers’ C2 servers, which tell it what to do — send captured keystrokes or screen images, etc. The Kaspersky report has more detailed IOC information — for example, the addresses of the C2 servers — so check that out if you have concerns.

There’s more cleverness in their attack. The hackers were also able to move around the bank network until they found a computer running the special bank application used to transfer funds. Once they found it, game over!

And It Could Have Been Prevented

The first order of business is that any corporate employee, especially in banking or finance, should never, ever run an application from an email.

And by the way, this was not a very sophisticated phish — say, an overnight delivery package email, which someone could understandably be taken in by.

Instead, this was a not very subtle get-rich-quick scheme — see English translation of the original Russian message to the right.

The larger issue is educating employees about phishing and other socially engineered attacks. Companies are just not doing a good job of this — either through specialized training or frequent communications about the hazards of clicking.

But suppose an employee does click on the malware. At this point, if enabled, a Windows User Account Control (UAC) policy can warn employees that the app has been signed by an unknown publisher.  They’ll then have to explicitly click on the warning box—we’ve all experienced this. However, we should take these warnings more seriously.

The Carbanak gang was smart enough to digitally sign the malware so the executable wouldn’t have been completely rejected. In any case, employees should be trained to never accept an app with a mysterious provenance — especially from a publisher they’ve not seen before.

We’ve also written about the issues of having Remote Desktop Services broadly enabled for ordinary users. Most users don’t need to log into another desktop — and it makes it much easier for a hacker to hop around.

One more tip. I don’t know about you, but I can’t create files in windows\system32. My Varonis admin set up a UAC policy to disable that!

More than a few observers have pointed out that the Carbanak attack was overhyped.

I agree.

I’m not saying that some of the attack features weren’t sophisticated, but this was just a basic phishing attack against a very poorly defended victim.

Phishing Attacks Classified: Big Phish vs. Little Phishes


The CMU CERT team I referred to in my last post also has some interesting analysis on the actual mechanics of these phishing attacks. Based on reviewing their incident database, the CERT team was able to categorize phishing attacks into two broader types: single- versus multi-stage.

What’s the difference? Think of single-stage as catching lots of small phish, and multi-stage as landing the big one.

Single-Stage Attacks: Mass Marketing

In a single-stage attack, the hacker is interested in collecting information directly from individual users. They accomplish this through a volume approach: blasting out emails and hoping to get some small percentage of click-throughs. It’s essentially mass marketing applied to phishing. The CMU folks have learned that response rates run roughly between 3% and 11%, so hackers probably know in advance the yields they’ll achieve from their campaigns based on their various lists.

Single-stage phishing is the one we often come across in our inbox—i.e., FedEx shipment waiting, credit card cancelled, etc. Once the bait is taken, the hackers receive personal data directly from the user, who has typically been tricked into entering details into a web form—credit card, social security numbers, passwords, etc.
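
To see why the volume approach pays, plug CERT’s 3%–11% response range into a back-of-the-envelope calculation:

```python
# Back-of-the-envelope arithmetic using CERT's reported 3%-11% response range.

def expected_victims(emails_sent, low=0.03, high=0.11):
    """Return the (low, high) estimate of recipients who will take the bait."""
    return round(emails_sent * low), round(emails_sent * high)
```

Even at the low end, a 10,000-message blast nets roughly 300 click-throughs, which is exactly why attackers can forecast campaign yields in advance.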

Multi-Stage Attacks: The Business Class of Phishing

Multi-stage is the better planned, deadlier attack launched by more sophisticated cyber-thieves. In this case, the hackers are not interested in obtaining just basic personal data from a single user.

According to CERT, their “response and information capture” phase (see the graphic below) now has multiple parts: hackers probe the system to obtain higher privileges with the goal to find more granular data (PII, IP),  possibly learn about system internals for another attack, find additional phishing targets, or even use the data to target more high-value phish—executives.


From Unintentional Insider Threats: A Review of Phishing and Malware Incidents by Economic Sector (CMU CERT)

CERT’s Advice

While the academic community continues to explore why we click on some obviously spammy stuff, the CERT team has some solid advice on mitigation:

  • Organizations need to view compliance not as an obstacle to job productivity, but as an essential part of an employee’s responsibilities.
  • IT needs to deploy more programs to train staff on identifying social engineering schemes.
  • There should be a focus on improved tools for computer and network defense monitoring.

Varonis eBook explains how phishing works: get our free Anatomy of a Phish!

Image credit: Preus Museum