Data Security 2017: We’re All Hacked

Remember more innocent times back in early 2017? Before Petya, WannaCry, leaked NSA vulnerabilities, Equifax, and Uber, the state of data security was anything but rosy, but I suppose there were more than a few of us left — consumers and companies — who could say that security incidents had not had a direct impact on us.

That changed after Equifax’s massive breach affecting 145 million American adults — I was a victim — followed by a series of weaponized ransomware attacks that held corporate data hostage on a global scale.

Is there any major US company that hasn’t been affected by a breach?

Actually, ahem, no.

According to security researcher Mikko Hyppönen, all 500 of the Fortune 500 have been hacked. He didn’t offer evidence, but another cybersecurity research company has some tantalizing clues. A company called DarkOwl scans the dark web for stolen PII and other data, and traces it back to the source. It has strong evidence that all of the Fortune 500 have had data exposed at some point.

We Had Been Warned

Looking over past IOS blog posts, especially from this past year, I see the current massive breach pandemic as completely expected.

Back in 2016, we spoke with Ken Munro, the UK’s leading IoT pen tester. After I got over the shock of learning that WiFi coffee makers and Internet-connected weighing scales actually exist, Munro explained that Security by Design is not really a prime directive for IoT gadget makers.

Or as he put it, “You’re making a big step there, which is assuming that the manufacturer gave any thought to an attack from a hacker at all.”

If you read a post from his company’s blog from October 2015 about hacking into an Internet-connected camera, you’ll see all the major ingredients of a now familiar pattern:

  1. Research a vulnerability or (incredibly careless) backdoor in an IoT gadget, router, or software;
  2. Take advantage of exposed external ports to scan for the suspect hardware or software;
  3. Enter the target system from the Internet and inject malware; and
  4. Hack the system, and then spread the malware in worm-like fashion.
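Step 2 of this pattern — probing for exposed ports — is worth turning around defensively, since the same check attackers automate takes only a few lines. Here’s a minimal sketch in Python (the gateway address is a placeholder; port 445 is the SMB port later abused by WannaCry):

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection raises OSError on refusal or timeout
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: audit your own gateway for SMB exposure (placeholder address).
# if is_port_open("203.0.113.1", 445):
#     print("SMB is reachable from the outside -- close or firewall it!")
```

This is the audit-yourself version of the reconnaissance step; real attackers run it across entire address ranges.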

This attack pattern (with some variation) was used successfully in 2016 by Mirai, and in 2017 by Pinkslipbot and WannaCry.

WannaCry, though, introduced two new features not seen in classic IoT hacks: an unreported exploit – aka EternalBlue – taken from the NSA’s top-secret TAO group and, of course, ransomware as the deadly payload.

Who could have anticipated that NSA code would make its way to the bad guys, who then used it in their evil attacks?

Someone was warning us about that as well!

In January 2014, Cindy and I heard crypto legend Bruce Schneier talk about data security post-Snowden. Schneier warned us that the NSA wouldn’t be able to keep its secrets and that eventually its code would leak or be re-engineered by hackers. And that is exactly what happened with WannaCry.

Here are Schneier’s wise words:

“We know that technology democratizes. Today’s secret NSA program becomes tomorrow’s PhD thesis, becomes the next day’s hacker tool.”

Schneier also noted that many of the NSA’s tricks are based on simply getting around cryptography and perimeter defenses. In short, the NSA hackers were very good at finding ways to exploit our bad habits in choosing weak passwords, not keeping patches up to date, or not changing default settings.

It ain’t advanced cryptography (or even rocket science).

In my recent chat with Wade Baker, the former Verizon DBIR lead, I was reminded of this KISS (keep it simple, stupid) principle, but he had the hard statistical evidence to back it up. Wade told me most attacks are not sophisticated, but take advantage of unforced user errors.

Unfortunately, even in 2017, companies are still learning how to play the game. If you want a prime example of a simple attack, you have only to look at 2017’s massive Equifax breach, which was the result of a well-known bug in the company’s Apache Struts software, which remained unpatched!

Weapons of Malware Destruction

Massive ransomware attacks were the big security story of 2017 — Petya, WannaCry, and NotPetya. By the way, we offered some practical advice on dealing with NotPetya, the Petya variant that was spread through a watering hole — downloaded from the website of a Ukrainian software company.

There are similarities in all of the aforementioned ransomware strains: all exploited EternalBlue and spread using either internal or open external ports. The end result was the same – encrypted files for which companies had to pay ransom in the form of some digital currency.

Ransomware viruses ain’t new either. Old timers may remember the AIDS Trojan, which was DOS-based ransomware spread by sneaker-net.

The big difference, of course, is that this current crop of ransomware can lock up entire file systems — not just individual C drives — and automatically spreads over the Internet or within an organization.

These are truly WMD – weapons of malware destruction. All the ingredients were in place, and it just took enterprising hackers to weaponize the ransomware.

One area of malware that I believe will continue to be a major headache for IT security is file-less PowerShell and FUD attacks. We wrote a few posts on both these topics in 2017.

Sure, there’s nothing new here either — file-less or malware-free hacking has been used by hackers for years. Some of the tools and techniques have been productized for, cough, pen testing purposes, and so it’s now far easier for anyone to get their hands on these gray tools.

The good news is that Microsoft has made it easier to log PowerShell script execution to spot abnormalities.

The whole topic of whitelisting apps has also picked up speed in recent years. We even tried our own experiments in disabling PowerShell using AppLocker’s whitelisting capabilities. Note: it ain’t easy.

Going forward, it looks like Windows 10 Device Guard offers some real promise in preventing rogue malware from running using whitelisting techniques.

The more important point, though, is that security researchers recognize that the hacker will get in, and the goal should be to make it harder for them to run their apps.

Whitelisting is just one aspect of mitigating threats post-exploitation.

Varonis Data Security Platform can help protect data on the inside and notify you when there’s been a breach. Learn more today!

Have I Been 2 Testify Before Congress

Troy Hunt – creator of HaveIBeenPwned and Varonis partner – testified before the US Congress to talk about data breaches and cybersecurity: he gave context and recommendations on the recent spate of massive data breaches, and what Congress can do to help protect both the privacy and digital assets of US citizens.

This testimony couldn’t have come at a better time – just as it came to light that a previously undisclosed Uber data breach had leaked 57 million driver and rider accounts. It underscores that today, data breaches are an ever-present threat that even top tech companies struggle to contain.

You can read Troy’s full prepared statement here.

The hearing (and Troy’s comments) focused on digital identity verification as a means of lessening the impact of a data breach – here’s a quick rundown of some of the highlights:

Data Breaches

  • Are caused by a variety of factors, from misconfiguration to malicious attack.
  • Have become more of an issue as data storage prices have fallen, encouraging a “data hoarder” mentality.
  • Often aren’t even known to have occurred until years after the fact.
  • Are aggressively traded by groups wanting to use the credentials for purposes of identity theft, spam, and spear phishing attacks.

Data Breach Vectors

  • There’s no agreed upon definition of what exactly constitutes a data breach – and “data breach” itself is a catch-all term for a variety of different types of incidents where an organization has lost control of the data they have been entrusted with.
  • The rising ubiquity, low cost and inherent connectivity of cloud-based data storage services have contributed to more data breaches occurring. See How to Better Structure AWS S3 Security
  • A single firewall rule or one relatively minor permissions change can inadvertently expose the entirety of an organization’s data to the Internet.

On Data Breach Timing

  • Several breaches dominated the news at the same time as the hearing – Uber’s massive cover-up of a previously undisclosed leak, and the image-sharing social network Imgur’s discovery of a breach that had occurred back in 2014.
  • There’s an important distinction between the timing of the data breach itself and the public disclosure of that breach.
  • Data Breach disclosures often happen years after the fact – due to a mix of not knowing and deliberate choice.

The growing banality of data breaches and their (relatively) low outward cost to organizations is coming to a head with legislation like the upcoming EU General Data Protection Regulation (GDPR).

While there are no general domestic data privacy regulations (as opposed to sector-specific data protections like HIPAA), there is a patchwork of state-by-state data protection legislation already in effect.

Much of the focus of this legislation is on financial and identity data – a common clause requires that if a certain number of records is exposed, credit reporting agencies must be contacted, users notified by various means, etc.

In Europe, the GDPR goes into effect on May 25th, 2018. The regulation covers EU citizen data held globally (affecting US organizations as well) and imposes significant penalties on companies that violate its data protection provisions.

The GDPR is a huge step towards regulating data protection and making it law that organizations must implement a standard of data security. We even made a course with Troy Hunt, the GDPR Attack Plan, to walk through everything you need to know about the GDPR (use code ‘troy’ to unlock the course).

While the testimony of one lone Australian Infosec practitioner is not going to singlehandedly solve the data breach problems plaguing the world, it represents a solid and serious step towards better understanding the problem and taking action on the part of the US Congress.

[Podcast] Security and Privacy Concerns with Chatbots, Trackers, and more

Leave a review for our podcast & we'll send you a pack of infosec cards.

The end of the year is approaching and security pros are making their predictions for 2018 and beyond. So are we! This week, our security practitioners predicted items that will become obsolete because of IoT devices. Some of their guesses – remote controls, service workers, and personal cars.

Meanwhile, as the business world phases out old technologies, some are embracing new ones. For instance, many organizations today use chatbots. Yes, they’ll help improve customer service. But some worry that when financial institutions embrace chatbots to facilitate payments, cyber criminals will see an opportunity to impersonate users and take over their accounts.

And what about trackers found in apps bundled with DNA testing kits? From a developer’s perspective, all the trackers help improve the usability of an app, but does that mean we’ll be sacrificing security and privacy?

Other articles discussed:

  • Australian government considers allowing firms to buy facial recognition data
  • Replay scripts that track cursor movements

Tool of the Week: Sword

Panelists: Kilian Englert, Kris Keyser, Mike Buckbee

New Survey Reveals GDPR Readiness Gap

With just a few months left to go until the EU General Data Protection Regulation (GDPR) implementation deadline on May 25, 2018, we commissioned an independent survey exploring the readiness and attitudes of security professionals toward the upcoming standard.

The survey, Countdown to GDPR: Challenges and Concerns, which polled security professionals in the UK, Germany, France and U.S., highlights surprising GDPR readiness shortcomings, with more than half (57%) of professionals still concerned about compliance.

Findings include:

  • 56% think the right to erasure/”to be forgotten” poses the greatest challenge in meeting the GDPR, followed by implementing data protection by design.
  • 38% of respondents report that their organizations do not view compliance with GDPR by the deadline as a priority.
  • 74% believe that adhering to the GDPR will give them a competitive advantage over other organizations in their sector.

Interview With Wade Baker: Verizon DBIR, Breach Costs, & Selling Boardrooms on Data Security

Wade Baker is best known for creating and leading the Verizon Data Breach Investigations Report (DBIR). Readers of this blog are familiar with the DBIR as our go-to resource for breach stats and other practical insights into data protection. So we were very excited to listen to Wade speak recently at the O’Reilly Data Security Conference.

In his new role as partner and co-founder of the Cyentia Institute, Wade presented some fascinating research on the disconnect between CISOs and the board of directors. In short: if you can’t relate data security spending back to the business, you won’t get a green-light on your project.

We took the next step and contacted Wade for an IOS interview. It was a great opportunity to tap into his deep background in data breach analysis, and our discussion ranged over the DBIR, breach costs, phishing, and what boards look for in security products. What follows is a transcript based on my phone interview with Wade last month.

Inside Out Security: The Verizon Data Breach Investigations Report (DBIR) has been incredibly useful to me in understanding the real-world threat environment. I know one of the first things that caught my attention was that — and I think this has been the trend for the last five or six years — external threats or hackers far outweigh insiders.

Wade Baker: Yeah.

IOS: But you’ll see headlines that say just the opposite, the numbers flipped around — like ‘70% of attacks are caused by insiders.’ I was wondering if you had any comments on that, and perhaps other data points that should be emphasized more?

WB: The whole reason that we started doing the DBIR in the first place, before it was ever a report, is just simply…I was doing a lot of risk-assessment related consulting. And it always really bothered me that I would be trying to make a case, ‘Hey, pay attention to this,’ and I didn’t have much data to back it up.

But there wasn’t really much out there to help me say, ‘This thing on the list is a higher risk because it’s, you know, much more likely to happen than this other thing right here.’

Interesting Breach Statistics

WB: Anyone who’s done those lists knows there’s a bunch of things on this list. When we started doing that, it was kind of a simple notion of, ‘All right, let me find a place where that data might exist, forensic investigations, and I’ll decompose those cases and just start counting things.’

Attributes of incidents, and insiders versus outsiders is one I had always heard about — like you said. Up until that point, it was ‘80% of all risk’ or ‘80% of all security incidents are insiders.’ And it’s one of those things that I almost consider doctrine at that time in the industry!

When we showed pretty much the exact opposite! This is the one stat that I think has made people the most upset out of my 10 years doing that report!

People would push back and kind of argue with things, but that is the one, like, claws came out on that one, like, ‘I can’t believe you’re saying this.’

There are some nuances there. For instance, when you study data breaches, then it does lean toward outsiders. Every single data set I ever looked at was weighted toward outsiders.

When you study all security incidents — no matter what severity, no matter what the outcome — then things do start leaning back toward insiders. Just when you consider all the mistakes and policy violations and, you know, just all that kind of junk.

Social attacks and phishing have been on the rise in recent years. (Source: Verizon DBIR)

IOS: Right, yes.

WB: I think defining terms is important, and one reason why there’s disagreement. Back to your question about other data points in the report that I love.

The ones that show the proportion of breaches that tie back to relatively simple attacks, which could have been thwarted by relatively cheap defenses or processes or technologies.

I think we tend to have this notion — maybe it’s just an excuse — that every attack is highly sophisticated and every fix is expensive. That’s just not the case!

The longer we believe those kind of things, I think we just sit back and don’t actually do the sometimes relatively simple stuff that needs to be done to address the real threat.

I love that one, and I also love the time to the detection. We threw that in there almost as a whim, just saying, ‘It seems like a good thing to measure about a breach.’

We wanted to see how long it takes, you know, from the time they start trying to link to it, and from the time they get inside to the time they find data, and from the time they find the data to exfiltrating it. Then of course how long it takes to detect it.

I think that was some of the more fascinating findings over the years, just concerning that.

IOS: I’m nodding my head about the time to discovery. Everything we’ve learned over the last couple of years seems to validate that. I think you said in one of your reports that the proper measurement unit is months. I mean, minimally weeks, but months. It seems to be verified by the bigger hacks we’ve heard about.

WB: I love it because many other people started publishing that same thing, and it was always months! So it was neat to watch that measurement vetted out over multiple different independent sources.

Breach Costs

IOS: I’m almost a little hesitant to get into this, but recently you started measuring breach cost based on proprietary insurance data. I’ve been following the controversy.

Could you just talk about it in general and maybe some of your own thoughts on the disparities we’ve been seeing in various research organizations?

WB: Yeah, that was something that for so long, because of where we got our information, it was hard to get at the impact side of a breach. Because you do a forensic investigation, you can collect really good info about how it happened, who did it, and that kind of thing, but it’s not so great six months or a year down the road.

You’re not still inside that company collecting data, so you don’t get to see the fallout unless it becomes very public (and sometimes it does).

We were able to study some costs — like the premier, top-of-the-line breach cost stats you always hear about from Ponemon.

IOS: Yes.

WB: And I’ve always had some issues with that, not to get into throwing shade or anything. The per record cost of a breach is not a linear type equation, but it’s treated like that.

What you get many times is something like an Equifax, with 145 million records. You multiply that by $198 per record, get some outlandish cost, and you see that cost quoted in the headlines. It’s just not how it works!

There’s a decreasing cost per record as you get to larger breaches, which makes sense.
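Wade’s point is easy to see with a toy model. The coefficients below are made up purely for illustration — they are not Ponemon’s or RAND’s actual figures — but any power-law model with an exponent below 1 shows why a flat per-record multiplier overstates large breaches:

```python
def linear_cost(records: int, per_record: float = 198.0) -> float:
    """Naive model: every record costs the same flat amount."""
    return records * per_record

def sublinear_cost(records: int, a: float = 2500.0, b: float = 0.76) -> float:
    """Illustrative power-law model: cost = a * records**b.
    With b < 1, the marginal cost per record shrinks as the breach grows."""
    return a * records ** b

big = 145_000_000  # an Equifax-scale breach
print(f"Linear estimate:    ${linear_cost(big):,.0f}")
print(f"Sublinear estimate: ${sublinear_cost(big):,.0f}")
print(f"Per-record cost at 10k records:  ${sublinear_cost(10_000) / 10_000:,.2f}")
print(f"Per-record cost at 145M records: ${sublinear_cost(big) / big:,.2f}")
```

Under the sublinear model the per-record cost drops by an order of magnitude between a 10,000-record breach and a 145-million-record one, which is the shape Wade describes.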

There are other factors involved. For instance, I saw a study from RAND by Sasha Romanosky recently, where, after throwing in predictors like company revenue and whether or not they’ve had a breach before — repeat offenders, so to speak — and some other factors, she really improves the cost prediction in the model.

I think those are the kinds of things we need to be looking at and trying to incorporate, because I think the number of records, at best, describes about a third — I don’t even know if it gets to half — of the cost of the breach.

Breach costs do not have a linear relationship with data records! (Source: 2015 Verizon DBIR)

IOS: I did look at some of these reports, and I’m a little skeptical about the number of records itself as a metric, because it’s hard to know this, I think.

But if it’s something you do on a per incident basis, then the numbers look a little bit more comparable to Ponemon.

Do you think it’s a problem, looking at it on per record basis?

WB: First of all, an average cost per record — I would like to step away from that as a metric, just across the board. But tying cost to the number of records works better for, say, consumer data or payment card data or things like that, where the costs are highly associated with the number of people affected. You then get into the cost of credit monitoring and notifications. All of those type things are certainly correlated with how many people or consumers are affected.

When you talk about IP or other types of data, there’s just almost no correlation. How do you count a single stolen document as a record? Do you count megabytes? Do you count documents?

Those things have highly varied value depending on all kinds of circumstances. It really falls down there.

What Boards Care About

IOS: I just want to get back to your O’Reilly talk. And one of the things that also resonated with me was the disconnect between the board and the CISOs who have to explain investments. And you talk about that disconnect.

I was looking at your blog and Cyber Balance Sheet reports, and you gave some examples of this — something that the CISO thinks is important, the board is just saying, ‘What?’

So I was wondering if you can mention one or two examples that would give some indication of this gap?

WB: The CISOs have been going to the board probably for several rounds now, maybe years, presenting information, asking for more budgets, and the board is trying to ‘get’ what they need to build a program to do the right things.

Pretty soon, many boards start asking, ‘When are we done? We spent money on security last month. Why are we doing it this quarter too?’

Security as a continual and sometimes increasing investment is different than a lot of other things that they look at. They think of, ‘Okay, we’re going to spend money on this project, get it done, and we’re going to have this value at the end of that.’

We can understand those things, but security is just not like that. I’ve seen this break down a lot with CISOs, who are coming from, ‘We need to do this project.’

You lay on top of all this that the board is not necessarily going to see the fruits of their investment in security! Because if it works, they don’t see anything bad at all.

Another problem CISOs have is: ‘How do I go to them when we haven’t had any bad things happen, and ask for more money?’ It’s just a conversation where you should be prepared to say why that is — connect these things to the business.

By doing these things, we’re enabling these pieces of the business to function properly. It’s a big problem, especially for more traditional boards that are clearly focused on driving revenue and other areas of the business.

IOS: Right. I’m just thinking out loud now … Is the board comparing it to physical security, where I’m assuming you make this initial investment in equipment, cameras, and recording and whatever, and then your costs, going forward, are mostly people or labor costs?

They probably are looking at it and saying, ‘Why am I spending more? Why am I buying more cameras or more modern equipment?’

WB: I think so! I’ve never done physical security, other than as a sideline to information security. Even if there are continuing costs, they live in that physical world. They can understand why, ‘Okay, we had a break-in last month, so we need to, I don’t know, add a guard gate or something like that.’ They get why and how that would help.

Whereas in the logical or cyber security world, they sometimes really don’t understand what you’re proposing, why it would work. If you don’t have their trust, they really start trying to poke holes. Then if you’re not ready to answer the question, things just kind of go downhill from there.

They’re not going to believe that the thing you’re proposing is actually going to fix the problem. That’s a challenge.

IOS: I remember you mentioning during your O’Reilly talk that helpful metaphors can be useful, but it has to be the right metaphor.

WB: Right.

IOS: I mean, getting back to the DBIR. In the last couple of years, there was an uptick in phishing. I think probably this should enter some of these conversations because it’s such an easy way for someone to get inside. For us at Varonis, we’re been focused on ransomware lately, and there’s also DDoS attacks as well.

Will these new attacks shift the board’s attention to something they can really understand — since these attacks actually disrupt operations?

WB: I think it can, because things like ransomware and DDoS are apparent just kind of in and of themselves. If they transpire, then it becomes obvious and there are bad outcomes.

Whereas more cloak-and-dagger stealing of intellectual property or siphoning of a bunch of consumer data is not going to become apparent, or if it is, it’s months down the road, like we talked about earlier.

I think these things are attention-getters within a company, attention-getters from the headlines. I mean, from what I’ve heard over the past year, as this ransomware has been steadily increasing, it has definitely received the board’s attention!

I think it is a good hook to get in there and show them what they’re doing. And ransomware is a good one because it has a corporate aspect and a personal aspect.

You can talk to the board about, ‘Hey, you know, this applies to us as a company, but this is a threat to you in your laptop in your home as well. What about all those pictures that you have? Do you have those things backed up? What if they got on your data at home?’

And then walk through some of the steps and make it real. I think it’s an excellent opportunity for that. It’s not hype, it’s actually occurring and top of the list in many areas!

Contrary to conventional wisdom, corporate boards of directors understand the value of data protection. (Source: Cyber Balance Sheet)

IOS: This brings something else to mind. Yes, you could consider some of these breaches as a cost of doing business, but if you’re allowing an outsider to get access to all your files, I would think, high-level executives would be a little worried that they could find their emails. ‘Well, if they can get in and steal credit cards, then they can also get into my laptop.’

I would think that alone would get them curious!

WB: To be honest, I have found that most of the board members that I talk to, they are aware of security issues and breaches much more than they were five to ten years ago. That’s a good thing!

They might sit on boards of other companies, and with all the reporting we’ve had, the chance that a board member has been with a company that’s experienced a breach, or knows a buddy who has, is pretty good by now. So it’s a real problem in their mind!

But I think the issue, again, is how do you justify to them that the security program is making that less likely? And many of them are terrified of data breaches, to be honest.

Going back to that Cyber Balance Sheet report, I was surprised when we asked board members what is the biggest value that security provides — you know, kind of the inverse of your biggest fear? They all said preventing data breaches. And I would have thought they’d say, ‘Protect the brand,’ or ‘Drive down risk,’ or something like that. But they answered, ‘Prevent data breaches.’

It just shows you what’s at the top of their minds! They’re fearful of that and they don’t want it to happen. They just don’t have a high degree of trust that the security program will actually prevent a breach.

IOS: I have to say, when I first started at Varonis, some of these data breach stories were not making the front page of The New York Times or The Washington Post, and that certainly has changed. You can begin to understand the fear. Getting back to something you said earlier about how simple approaches — or, as we call it, block-and-tackle — can prevent breaches.

Another way to mitigate the risk of these breaches is something that you’ve probably heard of, Privacy by Design, or Security by Design. One of the principles is just simply reduce the data that can cause the risk.

Don’t collect as much, don’t store as much, and delete it when it’s no longer used. Is that a good argument to the board?

WB: I do, and I think there are several approaches. I’ve given this recommendation fairly regularly, to be honest: minimize the data that you’re collecting. Because I think a lot of companies don’t need as much data as they’re collecting! It’s just easy and cheap to collect it these days, so why not?

Helping organizations understand that it is a risk decision — not just a cost decision — is important. And then, of what you collect, how long do you retain it?

Because the longer you retain it and the more you collect, you’re sitting on a mountain of data, and you can become a target of criminals through that fact alone.

For the data that you do have and do need to retain … I’m a big fan of trying to consolidate it and not letting it spread around the environment.

One of the metrics I like to propose is, ‘Okay, here’s the data that’s important to me. We need to protect it.’ Ask people where that lives or how many systems that should be stored on in the environment, and then go look for it.

You can multiply that number by 3 or 5 or sometimes 10, and that’s the real answer! It’s a good metric to strive for: the number of target systems that that information should reside within. Many breaches come from areas where that data should not have been.

Security Risk Metrics

IOS: That leads to the next question about risk metrics. One we use at Varonis is PII data that has Windows permissions open to Everyone. Customers are always surprised during assessments when they see how much of it there is.

This relates to stale data. It could be, you know, PII data that hasn’t been touched in a while. It’s sitting there, as you mentioned. No one’s looking at it, except the hackers who will get in and find it!

Are there other good risk metrics specifically related to data?

WB: Yup, I like those. You mentioned phishing a while ago. I like stats such as the number of employees that will click through, say, if you do a phishing test in the organization. I think that’s always kind of an eye-opening one, because boards and others realize, ‘Oh, okay. That means we’ve got a lot of people clicking, and there’s really no way we can get around that, so that forces us to do something else.’

I’m a fan of measuring things like number of systems compromised in any given time, and then the time that it takes to clean those up and drive those two metrics down, with a very focused effort over time, to minimize them. You mentioned people that have…or data that has Everyone access.

Varonis stats on loosely permissioned folders.

IOS: Yes.

WB: I always like to know, whether it’s a system or an environment or a scope, how many people have admin access! Because we’re highly over-privileged in most environments.

I’ve seen eyes pop, where people say, ‘What? We can’t possibly have that many people that have that level of need to know on…for that kind of thing.’ So, yeah, that’s a few off the top of my head.

IOS: Back to phishing. I interviewed Zinaida Benenson a couple months ago — she presented at Black Hat. She did some interesting research on phishing and click rates. Now, it’s true that she looked at college students, but the rates were astonishing. It was something like 40% clicking on obvious junk links in Facebook messages and about 20% in email spam.

She really feels that someone will click and it’s just almost impossible to prevent that in an organization. Maybe as you get a little older, you won’t click as much, but they will click.

WB: I’ve measured click rates at about 23%, 25%. So 20% to 25% in organizations. And not only in organizations, but organizations that paid to have phishing trials done. So I got that data from, you know, a company that provides us phishing tests.

You would think these would be the organizations that say, ‘Hey, we have a problem, I’m aware. I’m going to the doctor.’ Even among those, one in four are clicking. By the time an attacker sends 10 emails within the organization, there’s like a 99% rate that someone is going to click.
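The back-of-the-envelope math behind Wade’s claim can be sketched in a few lines. Treating each recipient as an independent 25% click risk is an assumption, but under it, ten emails give better than nine chances in ten of at least one click:

```python
# Chance that at least one of n phishing emails gets clicked,
# assuming each recipient clicks independently with probability p.
def at_least_one_click(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# With the 25% per-recipient click rate Wade cites, ten emails:
print(round(at_least_one_click(0.25, 10), 3))  # -> 0.944
```

The exact figure is about 94%; the "99%" above is a looser rounding, but the point stands: a patient attacker is almost guaranteed a click.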

Students will click on obvious spammy links. (Source: Zinaida Benenson’s 2016 Black Hat presentation)

IOS: She had some interesting things to say about curiosity and feeling bold. Some people, when they’re in a good mood, they’ll click more.

I have one more question on my list …  about whether data breaches are a cost of business or are being treated as a cost of business.

WB: That’s a good one.

IOS: I had given an example of shrinkage in retail as a cost of business. Retailers just always assume that, say, there’s a 5% shrinkage. Or is security treated — I hope it will be treated — differently?

WB: As far as I can tell, we do not treat it like that. But I’ll be honest, I think treating it a little bit like that might not be a bad thing! In other words, there have been some studies that look at the losses due to breaches and incidents versus losses like shrinkage and other things that are just very, very common, and therefore we’re not as fearful of them.

Shrinkage takes many, many more…I can’t remember what the…but it was a couple orders of magnitude more, you know, for a typical retailer than data breaches.

We’re much more fearful of breaches, even at the board level. And I think that’s because they’re not as well understood and they’re a little bit newer and we haven’t been dealing with it.

When you’re going to have certain losses like that and they’re fairly well measured, you can draw a distribution around them and say that I’m 95% confident that my losses are going to be within this limit.

Then that gives you something definite to work with, and you can move on. I do wish we could get there with security, where we figure out that, ‘All right, I am prepared to lose this much.’

Yes, we may have a horrifying event that takes us out of that, and I don’t want to have that. We can handle this, and we handle that through these ways. I think that’s an important maturity thing that we need to get to. We just don’t have the data to get there quite yet.
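As an editorial aside, Wade’s "draw a distribution" idea can be sketched with a toy calculation. The loss figures below are invented for illustration, and fitting a normal distribution is a simplification (real breach losses are heavy-tailed), but it shows the shape of the exercise:

```python
from statistics import NormalDist, mean, stdev

# Hypothetical annual loss history (illustrative numbers only)
losses = [120_000, 95_000, 140_000, 110_000, 130_000]

mu, sigma = mean(losses), stdev(losses)

# "I'm 95% confident my losses will be within this limit":
# the 95th percentile of the fitted normal distribution.
limit = NormalDist(mu, sigma).inv_cdf(0.95)
print(f"Plan for annual losses up to about ${limit:,.0f}")
```

With well-measured losses, that single number becomes something a board can budget against, which is exactly the maturity Wade is describing.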

IOS: I hear what you’re saying. But there’s just something about security and privacy that may be a little bit different …

WB: There is. There certainly is! The fact that security has externalities where it’s not just affecting my company like shrinkage. I can absorb those dollars. But my failures may affect other people, my partners, consumers and if you’re in critical infrastructure, society. I mean that makes a huge difference!

IOS: Wade, this has been an incredible discussion on topics that don’t get as much attention as they should.

Thanks for your insights.

WB: Thanks Andy. Enjoyed it!

[Podcast] The Challenges and Promise of Digital Drugs


Leave a review for our podcast & we'll send you a pack of infosec cards.

Recently the Food and Drug Administration approved the first digital pill: medicine embedded with a sensor that can tell health care providers – doctors and individuals the patient approves – whether the patient has taken their medication. The promise is huge. It could ensure better health outcomes for patients and give caretakers more time with the ones they love. What’s more, by learning more about how a drug interacts with the human body, researchers might find ways to treat illnesses once believed impossible to cure. However, some security pros believe that the potential for abuse might overshadow the promise of what could be.

Other articles discussed:

Tool of the week: Quad9

Panelists: Mike Thompson, Kilian Englert, Mike Buckbee

[Podcast] Privacy Attorney Tiffany Li and AI Memory, Part I


This article is part of the series "[Podcast] Privacy Attorney Tiffany Li and AI Memory". Check out the rest:


Tiffany Li is an attorney and Resident Fellow at Yale Law School’s Information Society Project. She frequently writes about the privacy implications of artificial intelligence, virtual reality, and other disruptive technologies. We first learned about Tiffany after reading a paper by her and two colleagues on GDPR and the “right to be forgotten”. It’s an excellent introduction to the legal complexities of erasing memory from a machine intelligence.

In this first part of our discussion, we talk about GDPR’s “right to be forgotten” rule and its origins in a lawsuit brought against Google. Tiffany then explains how deleting personal data is more than just removing it from a folder or directory.

We learn that GDPR regulators haven’t yet addressed how to get AI algorithms to dynamically change their rules when the underlying data is erased. It’s a major hole in this new law’s requirements!

Click on the above link to learn more about what Tiffany has to say about the gap between law and technology.


8 Tips to Surviving the Data Security Apocalypse


These days, working in data security can feel like surviving a zombie apocalypse – mindless hordes of bots and keyloggers are endlessly attempting to find something to consume. Just like in “The Walking Dead,” though, the zombies are only an ancillary threat compared to other humans. The bots and keyloggers are pretty easy to defeat: it’s the human hackers that are the real threat.

How prepared are you to deal with the real threats out there?

Get Global Access Groups Under Control

Are you still using global access groups? That’s the dystopian equivalent of leaving your walls unmanned!  Giving the default “everyone” group access to anything is a hacker’s dream scenario.  They get a free pass to move from share to share, looking for anything and everything, and you’ll never know they were there.

Removing all permissions from the default global access groups is an easy way to improve data security. Varonis DatAdvantage highlights folders with Global Access Groups so that you can see who’s got access to what at-a-glance – and then you can use the Automation Engine to quickly remove those global permissions from your shares.  All you need to do is set the Automation Engine to remove Global Access Groups and it will move users out of those generic groups and into a new group that you can then modify.  The important thing is to stop using Global Access Groups, and keep your walls manned at all times!
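Varonis handles this for Windows ACLs; as a rough, hypothetical analog on POSIX systems, a few lines of Python can flag files whose mode bits grant read or write access to every user on the box. This is a sketch of the idea, not a substitute for auditing Windows share permissions:

```python
import os
import stat

def world_accessible(root: str):
    """Yield file paths under root that any user on the system can
    read or write, a rough POSIX analog of a Windows 'Everyone' ACE."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            mode = os.stat(path).st_mode
            # S_IROTH / S_IWOTH are the "other users" read/write bits
            if mode & (stat.S_IROTH | stat.S_IWOTH):
                yield path
```

Running it over a file share root gives you a starting list of "unmanned walls" to lock down.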

Identify (and Lock Down) Your Sensitive Data

Effective survivors hide their resources and food stores from the prying eyes of outsiders. The most organized groups stash backup caches and keep records of their stores. Do you do the same with your PII and intellectual property data?  Can you, right now, tell me where every social security number or credit card string is stored on your file shares? If you can’t, then who knows what kind of treasures potential thieves will find as they poke around?

Knowing where your sensitive data is stored is vital to surviving the data security apocalypse – our Data Classification Framework quickly and easily identifies PII and intellectual property data in your unstructured files, so you know where your sensitive data is – and where you can lock it down.

Track Your Dangerous Data

Imagine that the guard on the North wall got eaten – and now the map with the weapons caches for the entire region is MIA.  Can another group of survivors find that map and steal your stuff? You might be leaving the same breadcrumbs on your network by leaving behind old files that have valuable information a hacker could use for profit.  

Identifying and deleting or archiving this data is just as important as moving that cache of weapons to the safety of your base camp. DatAdvantage can report on stale data and give you visibility into what might be leaving you vulnerable to hackers. Managing stale data is an excellent strategy to limit exposure, and keeps you one step ahead.

Practice Good Password and Account Policy

Say you use a certain whistle to communicate with your group – and you’ve used that same whistle for the past 8 months. What are the chances that a rival group will ambush you by using that whistle?

It’s the same if you have passwords that never change, or accounts that are no longer active, which should have been removed or deactivated.  Hackers can use those accounts to try to access resources over and over again without setting off any alarms.  

It’s always best to change the “whistle,” or password, on a consistent basis – and have a policy in place to revoke access privileges when people leave the group. Perhaps something less drastic than chopping their head off before they go full zombie.  With DatAdvantage, you can report on these kinds of accounts in your Active Directory so that you can take action and remove this threat without using an axe.
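The underlying stale-account check is simple enough to sketch. The account records below are made up; in a real environment this data would come from Active Directory (for example, the lastLogonTimestamp attribute):

```python
from datetime import datetime, timedelta

# Hypothetical account records; real data would come from AD.
accounts = [
    {"user": "alice", "last_logon": datetime(2017, 11, 20)},
    {"user": "bob",   "last_logon": datetime(2017, 2, 3)},
]

def stale(accounts, now, max_age_days=90):
    """Return users who haven't logged on within max_age_days."""
    cutoff = now - timedelta(days=max_age_days)
    return [a["user"] for a in accounts if a["last_logon"] < cutoff]

print(stale(accounts, now=datetime(2017, 12, 1)))  # -> ['bob']
```

Accounts that surface here are candidates for deactivation before an attacker finds them first.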

Fix Inconsistent Permissions

Once you have redundancies and processes to keep everything running smoothly, what happens when that one guy in your survivor group just can’t follow simple instructions?  What if they’re an important part of the plan, but can never quite complete their part?  You might say that part of the plan is broken, like when you have a share that is set to inherit permissions from the parent – but for some reason isn’t. In data security terms, you have inconsistent permissions, which can cause confusion as to exactly how the permissions on these folders are set.  

Fixing all of these broken links in the fence will help keep the outsiders from getting into your data stores. You can automate the process of repairing inconsistent permissions with the Automation Engine – so that you’re maintaining a least privilege model and only the right people can access that data. Or get through that fence.

Identify Data Owners

If your survival group is going to be a self-sustaining society, you’ll need leaders to support your growth.  You wouldn’t want the horticulturist in charge of weapons, and you probably wouldn’t want the weapons master in charge of your vegetables.  The same holds true for your data and the data owners.

You need to be able to identify the owners of your data so that you know who’s responsible for managing permissions and access to those shares. When there’s one person in the Legal department who can grant access to the legal shares, you’re in a much better situation than if the IT department handles that for every department.  

The first step is to identify data owners – and DatAdvantage provides reports and statistics to help you do just that. You can automate the process with DataPrivilege, and enable those data owners to approve and revoke permissions from their shares and audit permissions on their shares on a regular basis. Now that the data owners are in charge of who gets access to their data, things are starting to make a lot more sense – not to mention run much more smoothly.

Monitor File Activity and User Behavior

As your society of survivors grows into a full-fledged community, you want to make sure that everyone is contributing and utilizing the resources of the community correctly.  So you put in some monitoring systems.  Assign chain of commands and reporting structures and even make some rules.  

And so, you need to do the same thing by monitoring your file and email servers. DatAdvantage gives you visibility on the file and email servers – even user behavior – which is paramount to data security: outsiders can sometimes get in, and once they get in they might look like they belong.  But when they start stealing extra bread or copying gigs of data to an external drive, we need to know.

Set Up Alerts and Defend Your Data

Alerts can warn you about a herd tripping a bell on the perimeter or that Jeff from marketing has started encrypting the file server with ransomware.  The faster and more that you know about potential threats, the better you can respond.  Conversely, the longer the outsiders have to do bad things, the worse it will be for us every time.

You can set those tripwires to automatically respond to specific types of threats with DatAlert, so that your security team can lessen the impact and get straight to the investigation phase. DatAlert establishes behavioral baselines for every user – so that you know when somebody’s acting out of the ordinary, or if their account has been hijacked. With DatAlert, you can monitor your sensitive data for unusual activity and flag suspicious user behavior so that you know when you’re under attack. 
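We won’t speculate about DatAlert’s internals, but the general idea of a behavioral baseline can be sketched with a simple threshold model, a deliberately crude stand-in for real anomaly detection:

```python
from statistics import mean, stdev

def is_anomalous(history, today, z_threshold=3.0):
    """Flag today's event count if it sits more than z_threshold
    standard deviations above the user's historical mean."""
    mu, sigma = mean(history), stdev(history)
    return today > mu + z_threshold * sigma

# A user who normally touches ~50 files a day suddenly touches 5,000
daily_file_events = [48, 52, 50, 47, 55, 49, 51]
print(is_anomalous(daily_file_events, today=5000))  # -> True
```

The point is the baseline: "normal" is defined per user, so the same 5,000-file day that is routine for a backup service account lights up red for Jeff in marketing.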

Want to check your own preparedness level for the data security apocalypse? Get a risk assessment to see how you measure up.  We’ll  check your environment for all of these potential threats and provide a plan of action to get you up to true survivor status.

After Equifax and WannaCry: New Survey on Security Practices and Expectations

You’ve seen the headlines: Breaches are hitting high-profile organizations almost daily. After major events — the WannaCry and NotPetya outbreaks, and most recently the Equifax breach — we wanted to know if professionals responsible for cybersecurity in their organizations are shoring up their security, what approaches they are taking, and if they believe they are prepared for the next big attack.

Today we release the results of a new independent survey: After Equifax and WannaCry: Security Practices and Expectations.

The survey, which polled 500 IT professionals responsible for cybersecurity in the UK, Germany, France and U.S., highlights an alarming disconnect between security expectations and reality: While 45% of IT professionals are bracing for a disruptive cyber attack in the next year, the vast majority (89%) profess confidence in their cybersecurity stance.

Other notable findings include:

  • 25% reported their organization was hit by ransomware in the past two years.
  • 26% reported their organization experienced the loss or theft of company data in the past two years.
  • 8 out of 10 respondents are confident that hackers are not currently on their network.
  • 85% have changed or plan to change their security policies and procedures in the wake of widespread cyberattacks like WannaCry.

Read the full survey:

After Equifax and WannaCry: New Survey on Security Practices and Expectations.

Maximize your ROI: Maintaining a Least Privilege Model


TL;DR: Managing permissions can be expensive. For a 1,000 employee company, the overhead of permissions request tickets can cost up to $180K/year. Automating access control with DataPrivilege can save $105K/year or more and reduce risk. Read on to see the math.

One of the most important requirements of implementing a data security plan in today’s breach-a-day era is to implement and maintain a least privilege model across your enterprise.

The principle of least privilege says that users should only have access to the resources they need to do their work. What does this mean? The marketing team, for example, probably shouldn’t be able to access corporate finance and HR data. You’d be shocked how often they do.

A least privilege model can drastically limit the damage insiders can do but, perhaps more importantly, it prevents hackers from moving laterally across the organization with a single compromised account.

Without least privilege, hackers can likely move from one share to another, grabbing as much private data as they can. On the other hand, if (and when) a least privilege model is implemented, the hacker will be limited to the same resources that the compromised account is able to access.

The downside? Achieving least privilege permissions is no minor feat. You need to analyze access control lists, correlate them to users and groups in Active Directory, and remediate issues like global access, which should be a major red flag. Hackers actively seek out common issues like overly permissive service accounts, broken permissions inheritance, and weak admin passwords.

Once you grab the low-hanging fruit by closing common loopholes, you’ll need to involve business owners to figure out whether current entitlements are legitimately needed and, if not, revoke them.

We’ve helped thousands of companies get to least privilege and, on average, it takes 6 human hours or more per folder to implement a least privilege model manually.

How Much Does it Cost to Manually Maintain a Least Privilege Model?

It’s a major investment to implement least privilege model in money, resources, upkeep, and human capital. Once you’re there, the IT Service Desk traditionally takes on the burden of maintaining that least privilege model.

Based on 2016 industry data, the average service desk call costs the company $15.56. Seems like a reasonable price for a quick service call. Say the end user calls requesting access to a share. IT has to contact the end-user’s manager – or someone else in the approval chain – and then either approve or deny the request. Based on surveys of our customer base, this process, on average, takes about 20 minutes of work spread over the course of a day for the help desk to complete.

Now, how many times do you think they get this call in a month? 50? 100? 1,000? Some of our customers process up to 7,000 permission changes a month – all in the name of data security, and to maintain a least privilege model.

Here’s a quick chart of that scenario: the number of (service desk calls/month) * (cost per call), for the entire year.

Number of cases per month    Cost per case    Cost per month    Cost per year
                      100              $15           $1,500          $18,000
                      500              $15           $7,500          $90,000
                    1,000              $15          $15,000         $180,000
                    2,500              $15          $37,500         $450,000
                    5,000              $15          $75,000         $900,000
                    7,000              $15         $105,000       $1,260,000

You read that right. Without a way to streamline that access request process, it would cost our customer over one million dollars a year just to keep their permissions in a good place.

Fun desk exercise: if you know your service desk cost-per-case and how many AD changes you process each month, you can do this same calculation for yourself. Now ask yourself, what’s it worth to you?
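That desk exercise takes only a few lines. The $15 per case and 20 minutes per case defaults are the rounded assumptions from the chart above, so swap in your own figures:

```python
def service_desk_cost(cases_per_month: int,
                      cost_per_case: float = 15.0,
                      minutes_per_case: int = 20) -> dict:
    """Annual cost and monthly staff hours spent on permissions requests."""
    return {
        "cost_per_month": cases_per_month * cost_per_case,
        "cost_per_year": cases_per_month * cost_per_case * 12,
        "hours_per_month": cases_per_month * minutes_per_case / 60,
    }

# 1,000 cases/month -> $15,000/month, $180,000/year, ~333 staff hours/month
print(service_desk_cost(1_000))
```

Run it with your own case count and cost per case, and the hours figure leads straight into the staffing math below.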

Besides the monetary cost, there’s the human element to consider.

Based on the above chart, if you’re in the 1,000 AD changes per month range, you’re at a baseline cost of $180,000 per year in service desk calls which, at 20 minutes per call, comes to 333 human hours each month just to manage those requests. That’s two full-time hires, each working roughly 40 hours a week, dedicated to fielding permissions requests. Even with a team working non-stop around the clock and on weekends, that’s nearly two weeks of dedicated man-hours on permissions requests.

And that’s just the mid range.

In a larger enterprise, those 7,000 AD updates come out to roughly 2,310 work hours a month. That’s 14 people dedicated full time, every month, just to maintaining least privilege permissions!

A Better Way to Manage Permissions

DataPrivilege takes the burden off of the Service Desk and gives the data owners – the ones that actually *know* who should be accessing that information – the ability to grant and remove access from their own shares.

This makes removing and granting access as simple as responding to an email, and each data owner will only be doing it for their own shares, not the entire domain.

We can all probably agree that putting the IT Service Desk in charge of access to the Corporate Finance folder is a bad idea. However, putting the Controller or the Lead Corporate Accountant in charge of access to that folder is a great idea – and you should pat yourself on the back for coming up with it!

DataPrivilege will also automate your entitlement reviews and create reports for auditing and compliance. We provide APIs to integrate with your IAM or ITSM systems. And of course DataPrivilege will integrate with any other Varonis software you own.

But Wait, How Much is That Going to Cost Me?

Let’s consider an average-sized shop in the 1,000 user and 1,000 AD changes range. As we saw earlier, those 1,000 AD changes per month could cost $180,000 per year and eat up 333 man-hours every month in permissions management. By using DataPrivilege to help manage permissions, you’ll not only free up those resources, but that same shop will save $105,000 a year.

And of course your Service Desk resources are more effective and flexible without the load of permissions changes. Your data owners are in charge of their data – and your auditors have nothing to worry about in regards to access to sensitive data. In one year DataPrivilege pays for itself – and you’ve reduced the ongoing load of permissions management into the future, making your company more secure in the process.

Let’s again look at our 10,000 user enterprise that processes 7,000 AD updates per month. That load would cost the organization $1.26 million per year in Service Desk cost and 2,310 human hours per month. By using DataPrivilege, in that first year you’re saving $960,000 and significantly cutting down the dedicated human hours required to manage those permissions! And that’s just year one.

In year two and beyond, you save over $1,000,000.
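The savings arithmetic reduces to the baseline service desk cost minus what you pay for the software. Varonis doesn’t publish a list price, so the license figure below is a hypothetical placeholder back-calculated from the $180,000 baseline and $105,000 savings example above:

```python
def annual_savings(changes_per_month: int,
                   cost_per_case: float = 15.0,
                   license_cost: float = 75_000) -> float:
    # license_cost is a hypothetical placeholder, implied by the
    # article's $180,000 baseline minus $105,000 savings example;
    # it is not a published price.
    baseline_cost = changes_per_month * cost_per_case * 12
    return baseline_cost - license_cost

print(annual_savings(1_000))  # -> 105000.0
```

Plug in your own change volume and quoted price to reproduce the comparison for your environment.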

What could your Service Desk accomplish without 7,000 AD changes per month on their plate? Could they increase productivity for the rest of the company by responding faster to more urgent cases? Could you reallocate headcount and move resources to other departments?

Are You Pulling My Leg?


Those numbers are legit. But keep in mind, they’re specific to maintaining a least privilege model; to get there, you first have to (and really should) implement least-permissive permissions.

And of course you have to balance all of this outlay against the cost and risk of doing nothing. How much do you think the breach at Equifax is going to end up costing them?

The Wall Street Journal says “billions”.

Not to mention you don’t want to have to testify in front of Congress and explain how you messed up. The Cybersecurity and Infrastructure Protection Subcommittee doesn’t have time for that.

OK, What Next?

There are a few ways to get started with DataPrivilege and Varonis. One of the easiest is to get a free Risk Assessment.

Our engineers will analyze your current data security situation – including global group access and overexposed data – and you’ll get a detailed report with recommendations on where your biggest vulnerabilities are and how to manage them. Or, skip all that and go straight for a demo of DataPrivilege. Your call.

Getting to and maintaining a least privilege model is one of the most important steps in protecting your sensitive data – it significantly reduces the risk of your sensitive data being overexposed, leaked, or stolen – and DataPrivilege will help you get there.

[Podcast] Bring Back Dedicated and Local Security Teams



Last week, I came across a tweet that asked how a normal user is supposed to make an informed decision when a security alert shows up on his screen. Great question!

I found a possible answer to that question at New York Times director of infosecurity, Runa Sandvik’s recent keynote at the O’Reilly Security Conference.

She told the attendees that many moons ago, Yahoo had three types of infosecurity departments: core, dedicated and local.

Core was the primary infosec department. The dedicated group were subject matter experts on security, still part of the infosec department, but they worked with other teams to help them conduct their activities in a secure way. The security pros in the local group were not officially part of the infosec department; rather, they were the security experts embedded on other teams.

Who knew that once upon a time dedicated and local security teams existed?! It would make natural sense for them to be the ones assisting end users with security questions. So why don’t we bring them back? The short answer: it’s not so simple.

Other articles discussed:

Panelists: Kilian Englert, Forrest Temple, Matt Radolec