Category Archives: Data Security

The Security Threats are Coming From Inside the House!

Think of any of the big data breaches: Equifax, Target, NSA, Wikileaks, Yahoo, Sony. They all have one thing in common: each was, in effect, an inside job.

That’s not to say that all the hackers were employees or contractors, but once the hackers get inside the perimeter security, is there any difference? Their activities all look the same to an outside observer.

We write about this phenomenon so often. Once a hacker gets access inside the network, they often have everything they need to find (and attempt to exfiltrate) the good stuff that will make the headlines – the personal data, emails, business documents, credit card numbers, etc.

So the question becomes: can your data security team tell that attackers are inside before it’s too late?

It’s imperative that in addition to your firewalls, routers, and network monitoring software, you monitor what’s inside as well: user behavior, file activity, folder access, and Active Directory changes.

The Perimeter Has Been Breached

Here’s a scenario: a hacker has gained access to a user account and is attempting to download intellectual property stored in OneDrive. The first few attempts to access the data have failed, but they’re persistent, so they poke around until they find an account that has the access they need to read the files.

Even with monitoring at the file operation level, this kind of activity is hard to distinguish from an end user clicking on that folder and trying to get access.

And that’s where we come in. Varonis analyzes all of these attempts to access this OneDrive share in context, with user behavior analytics (UBA). From there, you can leverage Varonis threat models to compare that activity to known behaviors: those of the user who’s trying to access the data, those of their peers, and those of hackers working to exploit and infiltrate company networks.

In this scenario, this account is suddenly accessing classified, sensitive data that they have never touched before. That’s a red flag – and we’ve got threat models built specifically to detect that type of behavior.

This is outside of normal behavior patterns for the account that the hacker is leveraging – and because Varonis has been monitoring all the activity in OneDrive for over a year now, you have all the evidence you need to act immediately.

Without Varonis, you’d likely never even see the attempts to access this OneDrive folder. You would never notice the files being copied from this folder, and you wouldn’t see which folder they accessed next.
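To make the idea concrete, here’s a minimal sketch of the kind of first-time-access check such a threat model performs, assuming you have a file-activity audit trail of (user, folder) events to replay. The event format, folder names, and alerting below are illustrative stand-ins, not Varonis APIs.

```python
from collections import defaultdict

# Baseline of folders each account has touched before.
history = defaultdict(set)
SENSITIVE = {"/onedrive/ip-designs", "/onedrive/legal"}  # hypothetical paths

def check_event(user: str, folder: str) -> None:
    # Alert the first time an account reads a sensitive folder
    # it never touched during the baseline period.
    if folder in SENSITIVE and folder not in history[user]:
        print(f"ALERT: {user} accessed sensitive folder {folder} for the first time")
    history[user].add(folder)

# Replay an audit trail; only the first event fires an alert.
for user, folder in [("svc-backup", "/onedrive/ip-designs"),
                     ("svc-backup", "/onedrive/ip-designs")]:
    check_event(user, folder)
```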

Investigation & Forensics

The first step is to programmatically lock out and log out this account – which you can set up as an automatic response with Varonis. At the same time, emails and alerts can be sent to the infosec team and/or a SIEM system. Once the relevant parties are informed, the investigative work can begin.
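As a rough illustration of what such an automated response might look like under the hood: disable the account over LDAP and push an alert toward a SIEM via syslog. The hostnames, DN, and credentials below are placeholders, and Varonis ships this as a built-in response, so treat this purely as a sketch of the idea.

```python
import logging
import logging.handlers

from ldap3 import Server, Connection, MODIFY_REPLACE

ACCOUNT_DISABLED = 514  # userAccountControl: NORMAL_ACCOUNT | ACCOUNTDISABLE

def lock_out(user_dn: str) -> None:
    # Disable the compromised Active Directory account.
    server = Server("dc01.example.com", use_ssl=True)
    with Connection(server, user="EXAMPLE\\responder",
                    password="<secret>", auto_bind=True) as conn:
        conn.modify(user_dn,
                    {"userAccountControl": [(MODIFY_REPLACE, [ACCOUNT_DISABLED])]})

def alert_siem(message: str) -> None:
    # Forward the alert to the SIEM as a syslog event.
    logger = logging.getLogger("ir-alerts")
    logger.addHandler(logging.handlers.SysLogHandler(address=("siem.example.com", 514)))
    logger.critical(message)

lock_out("CN=Jane Doe,OU=Users,DC=example,DC=com")
alert_siem("Account jdoe disabled after abnormal OneDrive access")
```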

So where to start? Find out what happened, how it happened, what vulnerabilities were exploited, and what you can do to defend against it in the future. With the Varonis DatAdvantage UI, you can pull up the file audit history of the hacked user ID to see where else that account has been before the alert was triggered.

Use the full file audit trail to see what – if any – damage was done and lock down access to the entire system if necessary. And after the initial threat is neutralized, the work to close the security holes can begin.

The Varonis Security Platform is a key component of a layered security system. Layers create redundancy, and redundancy increases security. Hackers will find and exploit any opening they can; it is our responsibility to protect each other’s private data.

By learning and using the correct tools and principles for good data security, we can make it much harder for the bad guys to profit from their hacking, and limit the impact of data security breaches.

Want to see how Varonis will work in your environment to catch these types of insider threats? Click here to set up a demo with one of our security engineers.

 

Krack Attack: What You Need to Know

For the last decade, philosophers have been in agreement that there is another, deeper level within Maslow’s Hierarchy of Human Needs: WiFi Access.

We’re now at the point where even the most mundane devices in your house are likely to be WiFi enabled.

Today we learned that every single one of those devices – every smartphone, wireless access point, and WiFi-enabled laptop – is vulnerable due to a fundamental flaw in WPA2 (Wi-Fi Protected Access 2).

It turns out that the WPA2 protocol can be manipulated into reusing encryption keys in what’s being called the Krack Attack.

The result?

Attackers can view and compromise your encrypted traffic, inject ransomware code, hijack your credentials, and steal sensitive information like credit card numbers, passwords, emails, photos, and more.

Who Is Affected?

Because of how it works, this attack threatens all WiFi networks – and WiFi-enabled devices.

While the flaw is in the WPA2 protocol itself, how that protocol is implemented differs across device and software vendors. Apple’s iOS devices and Windows machines are mostly (as of now) unaffected, since they don’t strictly implement the part of the WPA2 standard that permits key reinstallation.

The largest group affected is Android users, along with other client devices that implement the WPA2 standard strictly.

How the Attack Works

The attack works against WiFi clients and depends upon being within WiFi range of the target device. Attackers can use a special WiFi card to retransmit a previously used session key, which forces a reinstallation of that key on the client device.

By doing so (and depending on exactly how WPA2 is implemented on the client device), the attacker can then send forged data to the client. For example, an attacker could silently manipulate the text and links on a web page.
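To see why key reuse is so catastrophic, here’s a small, self-contained demonstration (using Python’s cryptography package) of the general principle behind Krack: two messages encrypted under the same AES-CTR key and nonce leak the XOR of their plaintexts, with no key recovery required. This illustrates the underlying cryptographic failure, not the WPA2 handshake itself.

```python
import os

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, nonce = os.urandom(16), os.urandom(16)

def encrypt(plaintext: bytes) -> bytes:
    # AES-CTR with a fixed key and nonce: the keystream is identical
    # every time, which is exactly the failure a key reinstallation causes.
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return enc.update(plaintext) + enc.finalize()

pt1, pt2 = b"attack at dawn!!", b"meet me at noon."
ct1, ct2 = encrypt(pt1), encrypt(pt2)  # nonce reused on the second message

xor_cts = bytes(a ^ b for a, b in zip(ct1, ct2))
xor_pts = bytes(a ^ b for a, b in zip(pt1, pt2))
assert xor_cts == xor_pts  # ciphertext XOR reveals plaintext XOR
```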

How Practical Is the Attack?

An interesting twist to this attack is that it depends heavily upon physical proximity: to compromise a client, the attacker needs to be within WiFi range. An attacker also needs a somewhat specialized networking device and the ability to code up the exploit manually, since no attack software has yet been released.

What You Can Do To Protect Yourself Today

The more encryption you run at different layers of the communications stack the better. If you’re in charge of a website, this is just one more in a vast list of reasons you should be forcing SSL/TLS on your site.
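As one example, here’s a minimal sketch of forcing HTTPS in a Flask application; a redirect at the web server or CDN level accomplishes the same thing.

```python
from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def force_https():
    # Send any plain-HTTP request to its HTTPS equivalent.
    if not request.is_secure:
        return redirect(request.url.replace("http://", "https://", 1), code=301)

@app.after_request
def add_hsts(response):
    # HSTS tells returning browsers to refuse plain HTTP entirely.
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    return response
```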

VPNs are also a strong (additional) option: they’re inexpensive, easily configured, and can make Krack much less of an issue. An attacker can view/capture the encrypted data but won’t be able to do anything with it.

What You Can Do In The Coming Weeks

Update your devices – and be mindful of where and on what devices you’re using WiFi.

Every vendor is likely going to release a patch addressing this vulnerability: install the next product update that gets pushed to you – and encourage those around you to install security updates.

Neglected security updates are actually a large and persistent vulnerability: they’re there for a reason – install them! Greater adoption helps everyone. If you need more convincing, check out Lesson 4 of Troy Hunt’s Internet Security Basics.

What You Can Do Long Term

This may spark more (and long-needed) research into the areas of WiFi vulnerabilities.

While you can’t entirely prepare for the unknown, you can set yourself up to respond quickly: establish good procedures for emergency patch management, implement defense in depth by layering multiple different security systems, and keep all of your systems as up to date as possible.

This attack highlights that it’s important not to rely solely on any single layer of defense. For many home networks, this is, unfortunately, their only security layer. Always consider what happens when a layer of defense fails.

[Podcast] The Anatomy of a Cybercriminal Startup

Outlined in the National Cyber Security Centre’s “Cyber crime: understanding the online business model,” the structure of a cybercrime organization is in many ways a lot like a regular tech startup’s. There’s a CEO, a developer, and, if there are enough funds, an IT department.

However, one role outlined in an infographic on page nine of the report was a surprise, and it doesn’t exist in legitimate businesses: the “money mule.” Vulnerable individuals are often lured into these roles with titles such as “payment processing agents” or “money transfer agents.”

But when “money mules” apply for the job, and even after they get it, they’re not aware that they are being used to commit fraud. So if the cybercriminals get caught, “money mules” can also get in trouble with law enforcement. A “money mule” can expect a freeze on their bank account, face possible prosecution, and may be held responsible for repaying the losses. It might even end up on their permanent record.


Tool of the week: SPF Translator

Panelists: Mike Buckbee, Kilian Englert, Mike Thompson

My Big Fat Data Breach Cost Post, Part II

This article is part of the series “My Big Fat Data Breach Cost Post.”

If I had to summarize the first post in this series in one sentence, it’s this: as a single number, the average is not the best way to understand a dataset. Breach cost averages are no exception! And when that dataset is skewed or “heavy tailed”, the average is even less meaningful.

With this background, it’s easier to understand what’s going on with the breach cost controversy as it’s being played out in the business press. For example, this article in Fortune magazine does a good job of explaining the difference between Ponemon’s breach cost per record stolen and Verizon’s statistic.

Regressions Are Better

The author points out that Ponemon does two things that overstate their cost per record average. One, they include indirect costs in their model — potential lost business, brand damage, and other opportunity costs. While I’ll get to this in the next post, Ponemon’s qualitative survey technique is not necessarily bad, but their numbers have to be interpreted differently.

The second point is that Ponemon’s $201 per record average is not a good predictor (no raw average is), and for skewed datasets it’s an especially poor one.

According to our friends at the Identity Theft Resource Center (ITRC), which tracks breach stats, we’ve now reached over 1,000 breach incidents with over 171 million records taken. Yikes!

Based on Ponemon’s calculations, American business has experienced $201 x 171 million or about $34 billion worth of data security damage. That doesn’t make any financial sense.

Verizon’s average of $.58 per record is based on reviewing actual insurance claim data provided by NetDiligence. This average is also deficient because it likely understates the problem — high deductibles and restrictive coverage policies play a role.

Verizon, by the way, has said this number is also way off! They were making a point about averages being unreliable (and taking a little dig at Ponemon).

The Fortune article then discusses Verizon’s log-linear regression, and reminds us that breach costs don’t grow at a linear rate. We agree on that point! The article also excerpts the table from Verizon that shows how different per record costs would apply for various ranges. I showed that same table in the previous post, and further below we’ll try to do something similar with incident costs.

In the last post, we covered the RAND model’s non-linear regression, which incorporates other factors besides record counts. Jay Jacobs also has a very simple model that’s better than a straight linear fit. Verizon, RAND, and Jacobs’ regressions are all far better at predicting costs than a single average number.

I’ll make one last point.

The number of data records involved in a breach can be hard to nail down. The data forensics often can’t accurately say what was taken: was it 10,000 records or 100,000? The difference may amount to whether a single file was touched, and that factor of ten can change $201 per record into $20!

A more sensible approach is to look at the costs per incident. This average, as I wrote about last time, is a little more consistent, and is roughly in the $6 million range based on several different datasets.

The Power of Power Laws

Let’s get back to the core issue of averages. Unfortunately, data security stats are very skewed, and in fact the distributions are likely represented by power laws. The Microsoft paper, Sex, Lies and Cyber-Crime Surveys, makes this case, and also discusses major problems — under-sampling and misreporting — of datasets that are based on power laws: in short, a few data points have a disproportionate effect on the average.

Those who are math phobic and curl up into the fetal position when they see an equation or hear the word “exponent” can skip to the next section without losing too much.

Let’s now look at the table from the RAND study, which I showed last time.

An incident of $750 million indicates that this is a spooky dataset. Boooo!

Note that the median cost for an incident — see the bottom total — is $250,000, while the average cost of $7.84 million is an astonishing 30 times as great! And the maximum value in this dataset is a monster-ish $750 million incident. We ain’t dealing with a garden variety bell-shaped or normal curve.

When the data is guided by power law curves, these leviathans exist, but they wouldn’t show up in data conforming to the friendlier and more familiar bell curve.

I’m now going to fit a power law curve to the above stats, or at least to the average — it’s a close enough fit for my purpose. The larger point is that you can have a fat-tailed dataset with the same average!

A brief word from our sponsor. Have I mentioned lately how great Wolfram Alpha is? I couldn’t have written this post without it. If I only had this app in high school. Back to the show.

The power law has a very simple form: it’s just the variable x, representing in this case the cost of an incident, raised to a negative power of alpha: x^(-α).

Simple. (Please don’t shout into your browser: I know there’s a normalizing constant, but I left it out to make things easier.)

I worked out an alpha of about 2.15 based on the stats in the above table, so the curve is proportional to x^(-2.15). The alpha, by the way, is the key to all the math that you have to do.

However, what I really want to know is the weight or percentage of the total costs for all breach incidents that each segment of the sample contributes. I’m looking for a representative average for each slice of the incident population.

For example, I know that the median or 50% of the sample — that’s about 460 incidents — has incident costs below $1.8 million. Can I calculate the average costs for this group? It’s certainly not $7.84 million!

There’s a little bit more math involved, and if you’re interested, you can learn about the Lorenz curve here. The graph below compares the unequal distribution of total incidents costs (the blue curve) for my dataset versus a truly equal distribution (the 45-degree red line).

The Lorenz curve: beloved by economists and data security wonks. The 1% rule! (Vertical axis represents percent of total incident costs.)

As you ponder this graph — and play with it here — you see that the blue curve doesn’t really change all that much up to around the 80% or .8 mark.

For example, the median at .5 and below — the cheaper half of the sample — represents 9% of the total breach costs. Based on the stats in the above table, the total breach cost for all incidents is about $7.2 billion ($7.84 million x 921). So the first 50% of my sample represents a mere $648 million ($7.2 billion x .09). If you do a little more arithmetic, you find the average is about $1.4 million per incident for this group.
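For the curious, here’s a quick sketch that reproduces these numbers, assuming the fitted curve is a Pareto Type I distribution with density proportional to x^(-2.15), i.e., shape parameter a = 1.15. It lands close to the figures above and the table below (roughly a 9%/17%/74% split; the small gaps come from rounding and the looseness of the fit).

```python
# Pareto Type I facts used below, for shape a:
#   mean   = xm * a / (a - 1)
#   median = xm * 2^(1/a)
#   Lorenz curve: L(F) = 1 - (1 - F)^(1 - 1/a)
a = 1.15
mean = 7.84e6                # average cost per incident from the table
xm = mean * (a - 1) / a      # implied minimum incident cost, ~$1.02M
median = xm * 2 ** (1 / a)   # ~$1.8M, the figure quoted above

def lorenz(F: float) -> float:
    """Share of total costs borne by the cheapest fraction F of incidents."""
    return 1 - (1 - F) ** (1 - 1 / a)

incidents = 921
total = mean * incidents     # ~$7.2 billion
for lo, hi, label in [(0.0, 0.5, "Economy"),
                      (0.5, 0.9, "Economy Plus"),
                      (0.9, 1.0, "Business Class")]:
    share = lorenz(hi) - lorenz(lo)
    avg = share * total / (incidents * (hi - lo))
    print(f"{label:14s} share of costs {share:5.1%}  average ${avg / 1e6:.1f}M/incident")
```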

The takeaway for this section is that most of the sample is not seeing an average incident cost close to $7.8 million! This also implies that at the tail there are monster data incidents pushing up the numbers.

The Amazing IOS Blog Data Incident Cost Table

I want to end this post with a simple table (below) that breaks average breach costs into three groups: let’s call it Economy, Economy Plus, and Business Class. This refers to the first 50% of the data incidents, the next 40%, and the last 10%. It’s similar to what Verizon did in their 2015 DBIR for per record costs.

| | Economy | Economy Plus | Business Class |
|---|---|---|---|
| Data incidents | 460 | 368 | 92 |
| Percent of total cost | 9% | 15% | 74% |
| Total costs | $648 million | $1 billion | $5.33 billion |
| Average costs | $1.4 million/incident | $2.7 million/incident | $58 million/incident |

If you’ve made it this far, you deserve some kind of blog medal. Maybe we’ll give you a few decks of Cards Against IT if you can summarize this whole post in a single, concise paragraph and also explain my Lorenz curve.

In the next, and (I promise) last post in this series, I’ll try to tell a story based on the above table, and then offer further thoughts on the Verizon vs. Ponemon breach cost battle.

Storytelling with just numbers can be dangerous. There are limits to “data-driven” journalism, and that’s where Ponemon’s qualitative approach has some significant advantages!

[Podcast] How Weightless Data Impacts Data Security

By now, we’re all aware that many of the platforms and services we use collect and store information about our data usage. After all, they want to provide us with the most personalized experience.

So when I read that an EU Tinder user requested information about her data and was sent 800 pages, I was intrigued by this comment from Luke Stark, a digital technology sociologist at Dartmouth College: “Apps such as Tinder are taking advantage of a simple emotional phenomenon; we can’t feel data. This is why seeing everything printed strikes you. We are physical creatures. We need materiality.”

He is on to something. We don’t usually consider archiving stale data until we’re out of space. It is often only when we print photos, docs, spreadsheets, and PDFs that we feel the weight and space-consuming nature of the data we own.

Stark’s description of data’s intangible quality led me to wonder how weightless data impacts how we think about data security.

For instance, during a power outage, some IT departments aren’t deemed important enough to be on a generator. Infosec is often seen as a compliance requirement, not as security. And another roadblock security pros often face: when they report a security vulnerability, the news usually isn’t well received.

Podcast panelists: Mike Buckbee, Kilian Englert, Mike Thompson

[Podcast] Penetration Testers Sanjiv Kawa and Tom Porter

While some regard infosec as compliance rather than security, veteran pentesters Sanjiv Kawa and Tom Porter believe otherwise. They have deep expertise with large enterprise networks, exploit development, and defensive analytics, and I was lucky enough to speak with them about the fascinating world of pentesting.

In our podcast interview, we learned what a pentesting engagement entails, how to assign budget to risk, why asset identification matters, and so much more.

Regular speakers at Security BSides, they have a presentation on October 7th in DC, The World is Y0ur$: Geolocation-based Wordlist Generation with Wordsmith.

Learn more by clicking on the interview above.

Transcript

Sanjiv Kawa: My name is Sanjiv Kawa, I’m a penetration tester with PSC. I’ve been with PSC for…well, since June 2015. Prior to that, I had a couple of different hats on. I was a security consultant. I did some development work, and I was also a QA. My IT knowledge and my development knowledge is pretty well rounded, but my real interests are with penetration testing. Large enterprise networks as well as exploit development and automation. So, yeah, that’s me.

Tom Porter: I’m Tom Porter, been doing security for about eight years. My roots are on the blue team. So I got started in the government contracting space, doing mostly defensive analytics, network situational awareness, dissecting packets, writing IDS rules. So now that I do pen testing, I have an idea of what the blue team is looking for. And I’ve used that to help me bypass the IR restrictions and find my way into CDEs.

Cindy Ng: Well, let’s start with some foundational vocabulary words: what are white box, black box, and grey box?

Tom Porter: So when you look at different approaches to carrying out pen testing, you’re gonna have this spectrum where on one side is kind of the white box or what you hear as the crystal box pen testing. That’s where the organization’s divisions are sharing information about their environment with the penetration testers. So they actually might give them the keys to the kingdom so they can log in and analyze machines. They have an idea of what the network layout already looks like before they come on site. They know what might be the architectural weaknesses in their deployment, or in their environment, and then the penetration testers step in there with an upper hand. But it gives them a little more value in that regard, just because they don’t have to spend the time doing the reconnaissance, the intelligence gathering. It’s a way to kind of crunch down a pen test.

On the other side of the spectrum are kind of the black box. And that mimics more of what a real-world attacker might be doing, just because they don’t have necessarily insight into what the architecture, what the systems…you know, kind of operating systems are running, what kind of applications, versions of those, who are privileged users, who’s logging in from where. And there is a hefty portion upon the front side of the engagement to do a lot of reconnaissance, a lot of intelligence gathering, a lot of monitoring. But it’s also a great test of your incident response team to see how they’re adapting and responding to what these black box testers might be doing.

And then kinda in the middle, we have this notion of a grey box test. It’s where some information is shared, but not necessarily everything. And it’s a style that we like. And it’s kind of the assumed-breach style, where we are given an idea of what network ranges reside there. We’re given a generalized idea of the process a privileged user might use to get into a secure environment. We know ahead of time that in the past year, they’ve rolled out a new MFA solution. It’s another way of kinda crunching down the time necessary for an engagement, from several months down to a week or two. In that way, we don’t break the bank with our customers. But we can still work with them to provide insights, or provide value, just because we have a little extra insight into their environment.

Cindy Ng: Even if you get a white box environment and they tell you everything they know, there are still some grey areas, such as things that they might not know. So do you think essentially you’re always working in a grey box zone?

Tom Porter: To a degree, for sure. As much as we’d like to, we don’t necessarily have a full asset list of everything in an environment. Not every organization knows every single application that they’ve installed on their local development machines. So there’s always that kind of grey box notion to it; not every company has an idea of what their inventory list looks like. And that’s where we come in. We can do our kinda empirical assessment to figure out what systems are running, what services are listening. And we can use that to cross-reference with what our client has on their side to figure out where the deltas are and give them a more complete picture of what their inventory list looks like, what their assets look like.

Cindy Ng: What’s the difference between a penetration test and a vulnerability scan? Because on the surface they sound very similar.

Sanjiv Kawa: This is Sanjiv Kawa here. The real difference between a vulnerability assessment and a penetration test is that the vulnerability assessment does sort of identify, rank, and report any vulnerabilities which have been identified in the environment. But it doesn’t go that next step, which is test the exploitation of those vulnerabilities and leverage those exploitations for the assessor’s personal benefit, or the assessor’s goal in that particular penetration test. So a good example would be, you know, a vulnerability was identified with a particular service in this network. Well, the vulnerability scan will say, “You know, this is a high-risk vulnerability because this service is using weak passwords, or this service is unpatched.” And at that point it’s kinda hands-off. It’s up to the patch management team or the incident response folks, or whoever the blue team is in that environment, to assess that vulnerability and put it into sort of a risk bubble as to whether they wanna fix it, or whether it’s an okay risk in that environment.

The penetration tester will exploit that vulnerability and leverage whatever underlying information on that system they can use to then move laterally throughout the environment, vertically throughout the environment, and essentially get to their end goal. You’re absolutely right, there’s definitely a bit of haze surrounding the differences between those two things, but really, the penetration test shows value in exploiting these vulnerabilities and ultimately reaching the end goal, as opposed to assuming these vulnerabilities exist, not testing the exploitability of them, and not really understanding the full depth of what a vulnerability on a particular service can result in.

Cindy Ng: Are there vulnerabilities you see again and again, or things that are new and upcoming that you rarely see? Last week I was talking to a cryptographer and he said, “You don’t always have to go complicated. It’s sometimes back to the basics.” So my second question is: once you’ve come up with a list of vulnerabilities, how do you prioritize them?

Sanjiv Kawa: Yeah, it’s a really good question. And the cryptographer that you were speaking to is absolutely right. There are times where you don’t actually need to complicate the situation. To be fair, in a large number of my penetration tests I’m not actively exploiting vulnerabilities in services; I’m actually looking for misconfigurations in a pre-existing network, or native features in pre-existing operating systems, that get me to my end goal.

And to answer the second part of your question, you know, what’s new and upcoming in the whole vulnerabilities world? Well, we’ve recently seen the Shadow Brokers release the dumps from the Equation Group, which has kind of a close tie to the NSA, but that’s neither here nor there. And, you know, these guys have been hoarding zero-day exploits, which absolutely affect a large spectrum of operating systems and services. The most common, EternalBlue, for example, affects the SMB version 1 protocol for, I think, 2012 all the way down to, you know, Vista/2003. It’s a really interesting space that we live in because there’ll be a lull for a little while where there’s no real service exploitation, at least in a wide sort of area of what you would expect an enterprise network to look like. And so you’re playing the misconfiguration game. And that usually gets you to where you need to be. However, it’s really exciting when you start seeing exploits which you’ve heard about on the wire, but haven’t necessarily been released yet and turned into a real proof of concept.

I guess another thing to touch on here as well is that it’s all sort of environment-specific. It’s really interesting. There’s no real concrete methodology. I mean, you do have the PTES, the pen testing execution standard, where you kind of go from OSINT all the way to Cleanup and have Post Exploitation in between, and vulnerability assessments and exploitation. And that’s kind of a framework that you can follow as a penetration tester. But I think, and what I’m trying to get at here is, it’s really organic when you’re hunting for these vulnerabilities and misconfigurations in a network.

Cindy Ng: Tom, you mentioned CDE earlier. And before I spoke to you guys, I had no idea what CDE meant. It stands for cardholder data environment. Can you explain to our listeners what that means?

Tom Porter: The CDE is the cardholder data environment. It’s this notion that comes from PCI compliance, from the payment card industry. They put together a standard for how folks that host, so merchants and service providers, should secure their environments to be compliant. And it started back, you know, over a decade ago, where Visa, MasterCard, and three other brands had their own testing standards to be compliant with each of their brands. But it was kinda clunky, and just because you’re compliant with one brand doesn’t mean you’re necessarily compliant with the others. And not a lot of merchants or service providers really came on board.

So they got together and they ended up producing version 1.0 of what’s now known as the PCI Data Security Standards. And it’s evolved over the years. It doesn’t necessarily fit every business model, but it tries to incorporate as many as possible. It started out mostly covering web apps, but it’s eventually evolved into hosting, so data centers, hosting solutions, web applications, as much as they can get under the umbrella. So in PCI DSS, they have what’s called the PCI zone. And it’s a fairly strict set of bounds on the systems that store, transmit, or process credit card data or sensitive authentication data. And what we just call that in our parlance is the CDE. And it’s our target for PCI pen testing.

Cindy Ng: Let’s go through a scenario, an engagement that you both have been a part of. I wanted to know, what does regular engagement looks like? Are they all the same? Is there a canned process?

Tom Porter: So we work within what their standard dictates, and as this has evolved over the years, typically, these tests should be done by third parties. They should follow a standard that’s already publicly accessible and vetted. So like Sanjiv mentioned earlier about PTES, or if you’re using something like OSSTMM, something that’s been rigorously discussed and debated and has been vetted as a proper way of going about an engagement like this, instead of just rolling your own, in which case you might not have full coverage.

So what we do is kind of adapt these to each client, because not every client environment is uniform, and not every business model is the same. So we’ll have some clients where we go in with an already pretty good idea of what we’re gonna be doing. We know how we’re gonna proceed from A to B to C, with a little bit of room for creativity mixed in. Some clients are brand-new. Some of the environments or technologies they’re rolling out, we’ve never seen before, and we have to get creative. We work cooperatively with the client to figure out how we’re gonna rigorously test this so it meets the letter of the standard. So we stick to a general methodology, but that doesn’t necessarily mean it’s rigid. We do have some room in there for flexibility to adapt to whatever the client is using.

Cindy Ng: Oftentimes clients are struggling with old technology as it attempts to integrate with new technology, and it takes a few versions to get it right. You can make a recommendation one year and it might take a while for things to smooth out. What is the timeline for fixing and patching?

Sanjiv Kawa: Yeah, that’s a really good question. So after we’ve done a penetration test, and if significant findings have been made which really affect the client’s CDE, then the client has 90 days to remediate those findings. There are typically various different ways that you can do this. In most cases, an entire re-architecture of a client’s enterprise network is not gonna be a completely valid recommendation, right? There’s just not enough time or resources to complete a task like that within 90 days. So at that point, we start looking at, you know, reasonable remediations that we can suggest. And often, clients might wanna look at one bottleneck. So, for example, if I fix this, how does this affect the rest of the vulnerabilities that you guys identified? Does that kind of wrap it into a mitigation bubble in terms of, “Will this particular bottleneck affect the security of the CDE?”

Secondarily, there might be compensating controls that you can put into place to help, you know, harden endpoint systems, or there might be network-based controls you can put into place, like, for example, packet inspection or rate limiting, or just real segmentation. So there are a lot of creative solutions that we can adapt on a per-client basis. There’s really no silver bullet to fix an enterprise network. And ideally, what we strive to achieve is to try and give the client the best remediation possible, which fits a correct timeline and is reasonable to do in their environment. I think we’re different in the sense that a lot of penetration testers will enter an environment, they won’t understand the full complexity and dependencies of an enterprise network, and once a penetration test is done, they kinda wipe their hands and don’t really follow up with any sort of meaningful remediations or suggestions. We interact with everyone, from the system administrators, network administrators, all the way up to the C-suite, to identify meaningful solutions, meaningful recommendations that they can implement within a reasonable timeline.

Cindy Ng: There is a human aspect. You know, people say humans are the weakest link, and social engineering, for instance, is one of the things people often debate about. Some say that users need more training, and then there are other security researchers who think that users aren’t the enemy, and that we haven’t focused enough on user experience design. What is your experience as a pen tester, having worked with so many different departments, where there’s both a human and social aspect as well as technology and security? And when you have multiple layers of complexity, how do you mitigate risk, and what is your approach?

Sanjiv Kawa: Yeah, that’s a really good question. I guess every person has a different opinion. If you look at C-Suite or management, they might say it’s a policy issue. If you look at the system administrators and the network administrators, they might say, “Oh, there’s a technological issue, or indeed it’s a user issue.” Touching on your first point of social engineering, it’s not a requirement in PCI DSS 3.2 yet. Something that PSC is actually working on is to…and something that PSC has worked on is…well, PSC is a co-sponsor of the PCI DSS Special Interest Group for pen testing, and we’ve authored a significant portion of the pen testing guidance supplement for PCI DSS, and we’ve also authored a significant portion of PCI DSS. And something we are trying to work on is getting social engineering to be a requirement.

Now, the hardest part about that is how you can sort of measure whether user education, whether user training, has become effective over time. In addition to that, we also believe that, you know, the outcome of social engineering shouldn’t be pass/fail, right? There should be a program of some level – phishing or vishing is something we believe you should be doing – but it should mainly be for reinforcement and betterment of the end user. Yeah, I guess that’s kind of, like, our sort of stance on social engineering.

 

We currently don’t do it. And from a personal perspective, I think there’s only so many times that you can tell an organization that hires you that phishing or vishing, or some form of social engineering, was the main entry point, right? There’s only so many times you can do that before they become kind of sick of your approach, or your initial vector or foothold into their environment. There are a lot more creative ways – ways that show value, especially with pre-existing technologies or pre-existing things that they had already deployed in their network.

Cindy Ng: I was reading the penetration testing guidance and it was so thorough that I just assumed that that was what you had to follow for pen testing. That’s why I’m like, “Oh, it’s part of the requirement.”

Sanjiv Kawa: As Tom had spoken about a bit earlier, you know, PCI isn’t the perfect program, right? But I truly believe that it’s designed to fit as many business models as there can be. And it’s a good introductory framework, especially for penetration testers, because it’s so clear. I mean, you define your prized assets, your CDE, which should ideally be a segmented part of your network, where there’s absolutely minimal to zero significant connectivity to your corporate network. And the pen tester can use basically anything that’s in the corporate network to try and gain access to the CDE. And if it can be used against you, it’s in scope.

One thing I really like about PSC is that we don’t really enter these scoping arguments, you know. For example, “These are the IPs that you’re limited to.” Or, you know, “You pay $3 to test this IP.” Because with PCI, especially as a compliant standard, it says, “Anything that can be used in your corporate scope to gain access to the CDE can be used against you.” So that way, it can almost bring everything into scope, which is kind of nice for a pen tester. It’s, in my opinion, pen testing in the purest form possible.

Cindy Ng: You both have mentioned CDE multiple times, and that’s what you’re…you spend all your time at. I was wondering do you prioritize that list though, like first you need to protect the crown jewels, then do you look at the technology second, or the processes third, and then people last? So is it, it goes back to whatever is important to the organization?

Tom Porter: It’s not uniform for every organization, just because it depends on their secure remote access scheme. The crux might come down to a misconfigured technology appliance; it might come down to a user who’s behaving in a manner that’s outside of procedure. So it really depends. And that’s why, when we’re on site carrying out an engagement, we kind of cast a wide net, because we wanna catch all of these deficiencies, whether they’re in the process, the systems, or the people. So to give you some examples, we might see that a technology that is supposed to provide adequate segmentation between a secure zone and a less secure zone has a vulnerability in some type of service that we can exploit. It’s rare, but we see it every now and then. That would be something that falls into the technology list.

Sometimes we might see, on the user side, an admin who sits outside of the PCI zone but wants to get in, and has to do it fairly often, so they’ve set up their own little backdoor proxy on the machine so that now it’s on a separate VLAN. So we’ll get on their machine after we’ve compromised the domain and inspect their netstat connections. We’ll see that they’ve set up their own little private proxy that they haven’t told anybody about. So it depends. We cast a wider net just because we’re not entirely sure what we’re gonna find. And the organization doesn’t necessarily always know where the breakdown in the process will be.

Sanjiv Kawa: To add to that, I would also say that a majority of our time is spent in sort of the Post Exploitation phase. Typically, our time is spent identifying that network segmentation gap and trying to jump from the corporate network into the CDE, and assessing all possible routes which will, you know, result in that outcome. The corporate domain is something that… it’s kind of like the Wild West. A lot of clients, in my opinion, don’t have enough emphasis and controls placed around the corporate network, but they’ve really secured the CDE and all the access into it: authentication, multi-factor authentication, and granular network segmentation. Yeah, a lot of our time is spent assessing those sorts of routes into the CDE.

Tom Porter: And to piggyback off that, Sanjiv’s reminded me, I’ve been on a number of engagements now where we have this kind of division in labor where most of the resources have been applied to the CDE to make it as secure as possible, while the non-CDE systems or corporate scope has kind of fallen by the wayside with regards to attention and then time and money.

And then we have…we find where the corporate network is then set up as a trust, like a domain trust, as an example, with the CDE. So if we compromise the corporate network, now we have a pivot point into the CDE or with this domain trust. So we end up going to clients and saying, “Hey, we’re very proud of you how much work you’ve put into securing your CDE, but now we understand that your CDE is vulnerable to the deficiencies of the corporate network because you’ve put this trust in it.”

Cindy Ng: I think you’re thinking of security as a gradient, or as…a lot of C-Level and regular users, in general, think, “Well, you’re either secure or you’re not. And it’s a zero or a one.” How do you explain that to C-Level when you’re talking to them?

Sanjiv Kawa: Sure. Yeah. So I don’t believe that, you know, security is binary. I think, especially in modern networks, there needs to be an adequate balance of convenience and security. But that needs to be applied at the right levels. There are so many examples – more than I can get into here. But whether you’re an internal security administrator, an internal network administrator, or an internal IT manager, we, as third-party assessors, provide you with the necessary tools, vocabulary, and recommendations that you can then take and package for your C-suite.

I think a lot of the focus that I’ve definitely been looking at lately is just better access and authentication controls. It seems like one of the most common entry points into most networks is just weak passwords. And so how do you remediate that? How do you put a policy in place? Well, in truth, most organizations have a policy in place, but it’s a written policy. It’s not necessarily a technical control. There are technical controls that you can put in place, which are kind of opaque to the end user and kind of make them better without them realizing that they’re becoming better.

So a really common example is third-party integrations into Active Directory, right? By default, I believe that the Windows password policy is a mixture of alphanumeric characters and a seven-character minimum length. And, you know, with modern networks, it’s kind of archaic to think about, because it doesn’t do any sort of intelligent identification of whether a user is using season+year as a password, or companyname123 as a password, or something to that effect. So how do you train a user to become better at selecting passwords? Well, in short, you can purchase one of these integration tools, integrate it into Active Directory, and load in what you would consider to be bad passwords. At that point, a user is automatically more secure, because they’re unable to select a weak password. By default, they’ve already selected a better password. It’s really just about identifying what an organization’s weakest points are, what their failing points are, and how you can make those better with a cost-effective, potentially technical control, which can remove that risk from the environment.
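As an illustration of the kind of filter Sanjiv describes, here’s a hypothetical sketch of a banned-password check. Real products hook the domain controller’s password-change path; the word list and rules below are made up for illustration.

```python
import re

BANNED = {"password", "letmein", "qwerty", "acmecorp"}  # include your company name
SEASON_YEAR = re.compile(r"(spring|summer|fall|autumn|winter)\d{2,4}", re.IGNORECASE)

def is_weak(candidate: str) -> bool:
    # Reject short passwords, banned words, and season+year patterns.
    lowered = candidate.lower()
    if len(candidate) < 12:
        return True
    if any(word in lowered for word in BANNED):
        return True
    return bool(SEASON_YEAR.search(candidate))

print(is_weak("Summer2017!"))                   # True: season + year
print(is_weak("AcmeCorp#2024x"))                # True: contains company name
print(is_weak("correct horse battery staple"))  # False
```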

Cindy Ng: You spoke a little bit about being cost-effective. How do you rate and assign risk to the budget? Do they tell you, “Here’s how much money we have, work with it”?

Tom Porter: Not necessarily. We give them recommendations, but our recommendations aren’t necessarily gospel. They know their resource constraints, whether it’s budget, whether it’s people, time, whatever it may be. And they work within those. And we can be flexible with them throughout the remediation process. So what we do ask, as we work with clients through remediation, is that if they come up with ideas for how they wanna go about remediating a finding, they come back and bounce those ideas off of us. Let us pen-test the idea on paper before they invest a significant amount of resources into it. That way we can save each other a bunch of headaches down the road.

Cindy Ng: And when you’re engaged with the C-Level, what do they care about, versus what IT people care about?

Sanjiv Kawa: I think the most common thing with the C-suite is just brand integrity, right? I mean, if they show up on the front page of a newspaper because of a breach, you know, it’s really gonna negatively impact their ability to continue to sell. And a byproduct of that is customer confidence, right? So the C-suite will always care about, in relation to security, brand integrity and customer confidence. Second to that, they care about time to remediate, and a byproduct of that is cost to remediate. But from my experience, those are probably the few big things that the C-suite cares about the most.

Tom Porter: I’ll say part of that too also depends on the type of business they’re in. Dependent upon what your revenue stream is. You know, something like a denial-of-service where it takes you offline for several hours might have a greater impact than some data being exfiltrated from your environment. If you think about maybe a pharmaceutical company, their crown jewels aren’t necessarily their uptime, it’s their…you know, their patents or their IP. So if we can exfiltrate those, then that’s gonna have a much more impact on the business than say taking them offline for a few hours.

Cindy Ng: You mentioned earlier that you’re also working under the threat of a whole bunch of potential zero-day vulnerabilities that you might encounter in the future. And we saw that happen. How do you red-team yourself and improve on the new knowledge that you get?

Tom Porter: There’s a lot of debate right now, dependent upon your preferred internet source. But with regards to what we see from penetration testers, there are a few things that we do for our clients. One, we can send out advisories and bulletins to our clients just to let them know, “Hey, you know, the internet is a dangerous place right now, and these are the things you should be doing to make sure you’re secure.” One thing that we do as we’re going about our engagement, and we do executive briefs for customers as well: when we do these presentations and reports for customers, we line out all of our findings to say, “You know, these are our highs, these are our mediums, these are some best practice observations.” And we tell them, “You know, you should remediate findings one, two, and three in order to be PCI compliant.” But we also use this as an opportunity for organizations to strengthen their security posture. So we might have some findings that fall out of that kill chain or that critical path to compromise, but if we see an opportunity for someone to strengthen their security posture, we’re gonna mention it to them. We won’t require it for remediation for PCI compliance, but we’re more of a mindset of security first, compliance second.

Cindy Ng: What’s the important security concept that you wish people would get?

Sanjiv Kawa: I don’t wanna really preach here, but I just think it’s really about understanding your assets, your inventory. And a good way to do that is to have an internal security team who is proactive and continuously analyzing systems and services that are exposed. Yeah, I think it all comes down to good asset identification, good network controls. And the best companies that I have been to, the ones where I have not been able to compromise the CDE, just have really good segmentation. Identifying your prized assets, putting those into an inaccessible network, or a network that is accessible only to a few, through very complicated access control mechanisms that require multi-factor authentication and are very limited. There are kind of multiple answers to that, but really it comes down to asset identification, network controls, segmentation, and limitations on authentication controls. Those are the key things that I care about the most. Tom might have some different things.

Tom Porter: When I talk to users, and it almost happens in every single compromise or engagement that I’m on, a lot of it comes back to either weak passwords or, more commonly, password reuse. And that’s something that echoes with users not only in the enterprise but also personally, when they’re on their, you know, email or their banking websites. And you see this a lot with attacks out across the internet with credential stuffing. But the idea is not only choosing strong passwords, but also using unique passwords for every site where you might have a login stored. You see all these breaches, from places like, you know, LinkedIn to any handful of other dumps, where passwords are dumped either in clear text or hash form, and when those hashes are cracked, there’s now a whole litany of usernames, email addresses, and passwords that folks can try to breach other accounts with.

And that’s kind of where the idea of credential stuffing comes in. A password breach comes out, a set of credentials is revealed, and then attackers have these tools to try this username/password combination across a wide array of sites. And what happens is users end up getting several of their accounts compromised because they’ve reused the same password. So not only does that echo on a personal level, but we see that in the enterprise too. I’ll crack and get someone’s, you know, Active Directory password, and it just happens to be the same password for their domain admin account, or the same password used to log in to the secure VDI appliance. So it’s not necessarily something that’s gonna show up in a vulnerability scan listed out in a report, but it is something that we see often, that we end up having to remediate almost all the time.

Sanjiv Kawa: Yeah. And there are several ways that you can combat this. If you have an internal security team, one of the things they ought to be doing is monitoring breach dumps and monitoring passwords: simply pulling the hashes for any of the domains that they have and running comparison checks to see if any of these known bad passwords, tied to known users in their environment, exist. Secondarily, just comparing the hashes from two separate domains, or comparing hashes from two separate user accounts, specifically privileged and non-privileged. And it’s kind of just doing password audits, right? Maybe quarterly, or whatever aligns with your password change policy, be it 30 days or 90 days. And that way, you can, as a security team, ensure that…well, ensure to a certain degree that users are selecting passwords which are smart, and more importantly, aren’t reusing passwords from zones of low security in zones of high security.
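Here’s a rough sketch of that audit idea: compare NT hashes (never plaintexts) against known breach-dump hashes, and check whether privileged accounts share a password with non-privileged ones. How you extract the hashes, and the authorization to do so, is out of scope here; the dictionaries below are stand-ins.

```python
# Hashes from public breach dumps (this one is the NT hash of "password").
known_bad = {"8846f7eaee8fb117ad06bdd830b7586c"}

# Stand-in account -> NT hash maps for the corp and admin populations.
corp_hashes = {"jdoe": "8846f7eaee8fb117ad06bdd830b7586c",
               "asmith": "a87f3a337d73085c45f9416be5787d86"}
admin_hashes = {"jdoe-adm": "8846f7eaee8fb117ad06bdd830b7586c"}

# 1. Accounts whose password appears in a breach dump.
for user, h in {**corp_hashes, **admin_hashes}.items():
    if h in known_bad:
        print(f"{user}: password appears in a known breach dump")

# 2. Privileged accounts reusing a non-privileged account's password.
corp_values = set(corp_hashes.values())
for user, h in admin_hashes.items():
    if h in corp_values:
        print(f"{user}: shares a password with a corp account")
```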

Cindy Ng: Have you seen organizations move away from passwords and go into biometrics?

Sanjiv Kawa: Yeah, there’s been talk about it. I recently had a conversation with a client last week about this. But what they’re battling with is user adoption and cost. Cost being having to have certain, you know, devices which can read fingerprints or read faces. What people end up going with instead is a second factor of authentication, either through an RSA one-time passcode, or Duo multi-factor authentication, or Google Authenticator. There are lots of multi-factor authentication options out there. Biometrics is still a single factor of authentication, right? So having a token supplied to you by any of the aforementioned providers is probably one of the most cost-effective ways to add a second factor of authentication, which is more secure in the long run in terms of user account controls.

Cindy Ng: What kind of problems, though, do you see with biometrics? Let’s say cost isn’t a problem. Could you guys play around with that idea for a little bit? What could you see potentially go wrong?

Tom Porter: So what I have seen in the past is some of these biometric-type logins, when they’re actually stored on the machine, so on like a Windows desktop, are just stored in memory as some kind of token, very similar to a password hash. So it’s just…operates on the same function, and you could reuse that token around the network and still adequately impersonate that user.

Not only that, we’ve also seen…you’ve probably read about it online. Technologies are using, you know, facial-type recognition, and people can mimic that by just scrolling through your LinkedIn or Facebook and reconstructing the necessary pattern to log in with your face. And those are some of the things. Unlike a password, these are factors that are potentially publicly available: things like your hands, for thumbprints, or your face on Facebook. It’s just something that we haven’t historically secured in the past.

Cindy Ng: Your experience is so vast and multi-faceted, but is there something that I didn’t ask that you think is really important to share with our listeners?

Sanjiv Kawa: I guess we could probably share about…share some of the other penetration tests that we do. Not all of our pen tests are PCI-oriented. We have done things like pre-assessments in the past. So, for example, an organization is gearing up for another compliance regime, whether it be, you know, SOX or HIPAA, or FFIEC, or something to that effect, and PSC will do pre-assessment penetration tests conforming to the constraints of those compliance regimes.

We also do evaluation-based pen tests. And Tom sort of spoke about this a bit earlier, but let’s say your organization is implementing a new set of core security technology that requires some sort of architectural shuffle, whether it be a new MFA (multi-factor authentication) implementation, or some sort of segmentation, or a new monitoring and alerting system; we can pen-test those, or identify whether the technologies are adequate for your environment, before you deploy. We’ve also done very complex mobile application testing and, well, just regular RESTful or SOAP-based API or other application or web services-style testing that falls outside of the PCI compliance sort of zone. But for the most part, our mindset is still PCI-oriented, right? All you do is substitute the CDE for that client’s goal, and that’s your success criteria. That’s what you’re trying to get to. And you’re using everything that you can ingest around you to get to that goal.

Tom Porter: I’d like to add, when we’ve sat down with the clients and we’re looking at results of a penetration test, and we lay out the findings for what should be remediated and what doesn’t necessarily need remediation, one of the luxuries of PCI that’s kind of a gift and a curse, depending on your perspective, is that we actually get to see remediation through. And it’s a rarity in our industry, just because remediation is so rarely required. So we actually get to sit there and walk through remediation with clients. And when we come back and do our retesting to verify the remediation is in place, we find that not all of the findings were always remediated. Maybe they remediated some of them, or the ones we required, but not necessarily all the ones we had in the report.

And as Sanjiv spoke about earlier, we kind of have this notion of a kill chain: the findings that we link together to achieve a compromise. You might hear it referred to as a critical path to compromise, a kill chain, a cyber kill chain. Something with the word “chain” in it. Yeah, the attack path. But essentially we’re trying to identify these bottlenecks, and what ends up happening is these organizations get to places where there’s kind of a mitigation bubble: they start offering compensating controls, which is more of a Band-Aid solution instead of fixing the real problem. So what we do, especially with these findings, is talk with the network or sysadmins who have a very knowledgeable layout of the tech environment. And when they’re trying to relay the importance of having, you know, patches for this, or a new appliance for that, up to the C-levels, they can use us as a resource. They can tell us where to focus, and then in our report, if we find a deficiency, we’re giving the tech people ammunition to take to the C-level to say, “Hey, we actually need to act on this. We’ve got a verified third party that says we need to beef up our security here. We need to invest resources there.” And it kinda gives them some backing to say, “Hey, you know, we need this.”

Cindy Ng: Do you notice that you’re also looking at IoT devices?

Tom Porter: Absolutely. All the time, actually. When we do our network sweeps, we see all kinds of things out there. They're almost always set with factory-default passwords, they usually have some type of embedded OS that we can pivot through, and they're usually on wide-open networks that aren't using host-based firewalls to protect ingress and egress. So we now have our pivot points to get around.

[Podcast] Varonis Director of Cyber Security Ofer Shezaf, Part II


This article is part of the series "[Podcast] Varonis Director of Cyber Security Ofer Shezaf".



A self-described all-around security guy, Ofer is in charge of security standards for Varonis products. In this second part of the interview, we explore different ways to improve corporate data security, including security by design techniques at the development stage, deploying Windows 10s, and even labeling security products!



3 Tips to Monitor and Secure Exchange Online


Even if you don’t have your sights on the highest office in the country, keeping a tight leash on your emails is now more important than ever.

Email is commonly targeted by hackers as a method of entry into organizations. Whether your email is hosted by a third party or managed internally, it is imperative to monitor and secure those systems.

Microsoft Exchange Online – part of Microsoft's Office 365 cloud offering – is just like Exchange on-prem, but you don't have to deal with the servers. Microsoft provides some tools and reports to assist with securing and monitoring Exchange Online, like encryption and archiving, but they don't cover all the things that keep you up at night, like:

  • What happens when a hacker gains access as an owner to an account?
  • What happens if a hacker elevates permissions and makes themselves owner of the CEO’s email?
  • What happens when hackers have access to make changes to the O365 environment – will you notice?

These questions are exactly what prompted us to develop our layered security approach – Andy does a great job explaining its major principles here. What happens when the bad people get in and have the ability to change and move around the system? At the end of the day, Exchange Online is another system that provides an attack vector for hackers.

Applying these same principles to Exchange Online, we can extrapolate the following to implement monitoring and security for your email in the cloud:

  1. Lock down access: Make sure only the correct people are owners of mailboxes, and limit the ability to change permissions or O365 settings to a small group of administrators.
  2. Manage user access: Archive and delete inactive users immediately. Inactive users are an easy target for hackers as they are usually able to use those accounts without being noticed.
  3. Monitor behavior: Implement a user behavior analytics (UBA) system on top of your email monitoring. Spotting abnormal behavior early (e.g., an account being promoted to owner of the CEO's email folder, or another account forwarding thousands of emails to the same external address) is the key to stopping a hacker in hours or days instead of weeks or months. A minimal sketch of this kind of rule follows this list.
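To make that third tip concrete, here's a minimal sketch of the kind of rule a UBA layer evaluates. The event names, log format, and threshold below are hypothetical, not the actual Office 365 audit schema – real products baseline each account statistically rather than using fixed cutoffs – but it shows the core idea: compare each account's activity against its own history.

```python
from collections import Counter

# Hypothetical audit events: (actor, action, target).
# Real Exchange Online auditing uses its own schema; these names are illustrative.
events = [
    ("jsmith", "ForwardedMessage", "external:drop@attacker.example"),
    ("jsmith", "ForwardedMessage", "external:drop@attacker.example"),
    ("admin7", "GrantedMailboxOwner", "ceo@corp.example"),
]

# Mailboxes each account has historically touched (the behavioral baseline).
baseline = {"jsmith": {"jsmith@corp.example"}, "admin7": {"admin7@corp.example"}}

FORWARD_LIMIT = 1000  # assumed cutoff; a real UBA baseline is per-account

def alerts(events, baseline):
    forwards = Counter()
    for actor, action, target in events:
        if action == "ForwardedMessage" and target.startswith("external:"):
            forwards[actor] += 1
        elif action == "GrantedMailboxOwner" and target not in baseline.get(actor, set()):
            # Ownership granted on a mailbox this account never touches: red flag.
            yield f"ALERT: {actor} took ownership of unfamiliar mailbox {target}"
    for actor, count in forwards.items():
        if count > FORWARD_LIMIT:
            yield f"ALERT: {actor} forwarded {count} messages to one external address"

for alert in alerts(events, baseline):
    print(alert)
```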

Wondering if there’s a good solution to help monitor your Exchange Online? Well, we’ve got you covered there too.

More NSA Goodness: Shadow Brokers Release UNITEDRAKE


Looking for some good data security news after the devastating Equifax breach? You won't find it in this post, although this proposed federal breach notification law could count as a teeny ray of light. Anyway, you may recall the Shadow Brokers, the group that hacked the NSA's servers and published the Windows vulnerability that made WannaCry ransomware so deadly.

Those very same Shadow Brokers have a new product announcement that also appears to be based on NSA spyware first identified in the Snowden documents. Bruce Schneier has more details on its origins.

(Way back in 2014, Cindy and I listened to Schneier speak at a cryptography conference, warning the attendees that NSA techniques would eventually reach ordinary hackers. Once again, Schneier proved depressingly right.)

Known as United Rake or UNITEDRAKE in hacker fontology, it is an advanced remote access trojan, or RAT, along with accompanying "implants" – NSA-speak for remote modules. It makes some of the admittedly simple RATs I investigated in my pen testing series look like the digital version of Stone Age tools.

The UNITEDRAKE Manual

How do we know how UNITEDRAKE works?

The Shadow Brokers kindly published a user’s manual. I highly recommend that IT folks who only know about malware by scanning the headlines of tech-zines peruse the contents of this document.

Forgetting for a moment that Evil Inc. is behind the malware, the 67-page manual appears on the surface to be describing a legit IT tool: there are sections on minimum software requirements, installation, deployment, and usage (lots of screenshots here).

Manage remote implants or modules from the UNITEDRAKE interface.

To my eyes, this is a detailed user's manual that puts much business-class software collateral to shame. It's the productized malware that we often hear about, and now we can all see it for ourselves. UNITEDRAKE will likely be sold on the dark web, and the manual is the teaser to get hackers interested.

I didn't see all the capabilities explained that were implied in the screenshots, but there's enough in the manual to convince the likely buyer that UNITEDRAKE is the real deal and worth the investment.

But It’s Still a Trojan

Once you read through the UNITEDRAKE manual, you see it’s essentially a RAT with a classic modern architecture: the client-side with the implants is on the victim’s computer, and it communicates to the hacker’s server on the other side of the connection.

Port 80 seems to be the communications channel, and that means HTTP is the workhorse protocol here – although raw TCP is mentioned as well.

In the RAT world, the client-side is the victim’s computer.
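That architecture cuts both ways: a client that phones home over port 80 usually checks in on a timer, and defenders can hunt for that. Here's a minimal detection sketch – the proxy log format and jitter threshold are my own assumptions, nothing from the manual – that flags host/destination pairs whose request timing looks machine-regular:

```python
from collections import defaultdict
from statistics import mean, pstdev

# Hypothetical web-proxy log entries: (timestamp_seconds, source_host, destination)
log = [
    (0, "ws-042", "203.0.113.7"), (60, "ws-042", "203.0.113.7"),
    (120, "ws-042", "203.0.113.7"), (181, "ws-042", "203.0.113.7"),
    (5, "ws-101", "varonis.com"), (47, "ws-101", "example.org"),
    (300, "ws-101", "example.org"),
]

def beacon_candidates(entries, max_jitter=0.1, min_requests=4):
    """Flag (host, destination) pairs whose inter-request gaps are nearly
    constant -- the signature of an automated check-in, not a human browsing."""
    by_pair = defaultdict(list)
    for ts, host, dest in entries:
        by_pair[(host, dest)].append(ts)
    for pair, times in by_pair.items():
        if len(times) < min_requests:
            continue
        gaps = [b - a for a, b in zip(times, times[1:])]
        avg = mean(gaps)
        # Low relative jitter => machine-like periodicity.
        if avg > 0 and pstdev(gaps) / avg < max_jitter:
            yield pair, avg

for (host, dest), interval in beacon_candidates(log):
    print(f"{host} -> {dest}: beaconing roughly every {interval:.0f}s")
```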

Scanning a few specialized websites, I learned that NSA implants such as Salvage Rabbit can copy data off a flash drive, Gumfish can take pictures from an embedded camera, and Captivated Audience can – what else – spy on users through a laptop's microphone. You can read more about this spy-craft in this Intercept article.

The NSA guys at least get credit for creative product naming.

The Prognosis

Obviously, the NSA was in a better position to install these implants than typical hackers. And it’s unclear how much of the NSA-ware the Shadow Brokers were able to implement.

In any case, with phishing and other techniques (SQL injection and, say, probing for known but unpatched vulnerabilities), hackers have had a good track record in the last few years of getting past the perimeter undetected.

Schneier also says that Kaspersky has seen some of these implants in the wild.

My takeaway: we should be more than a little afraid of UNITEDRAKE and other proven, productized malware that hackers with some pocket change can easily get their hands on.

We never believed the perimeter was impenetrable! Learn how Varonis can spot attackers once they’re inside.

DatAdvantage for Exchange Online Is Here


We're thrilled to introduce complete monitoring for Exchange Online as part of our 6.4.50 beta, giving Varonis customers the same coverage we provide for on-premises Exchange – but now in the cloud.

I’ll let Jeff give you the idea:

Get Started with a Demo of DatAdvantage for Exchange Online


With DatAdvantage for Exchange Online, you’ll be able to manage access and monitor email events – and with DatAlert, you’ll get alerted when there’s unusual mailbox activity.

We've also added new threat models for Exchange Online, including abnormal service behavior (atypical actions performed on mailboxes owned by other users) and abnormal admin behavior (access to atypical mailboxes).

DatAdvantage for Exchange Online gives you a complete audit trail of exactly who is sending emails (and where they’re going), which users are accessing what email folders, and which users open phishing emails – those kinds of things.  You’ll have transparency and know everything that happens in Exchange Online.

Try it out today and see how DatAdvantage for Exchange Online will help build your email defenses in the cloud, protect against email hijacking and phishing attempts – and keep your data secure.

My Big Fat Data Breach Cost Post, Part I


This article is part of the series "My Big Fat Data Breach Cost Series".

Data breach costs are very expensive. No, wait they’re not. Over 60% of companies go bankrupt after a data breach! But probably not. What about reputational harm to a company? It could be over-hyped but after Equifax, it could also be significant. And aren’t credit card fraud costs for consumers a serious matter? Maybe not! Is this post starting to sound confusing?

When I was tasked with looking into data breach costs, I was already familiar with the great Verizon DBIR vs. Ponemon debate: based on data from 2014, Ponemon derived an average cost per record of $201 while Verizon pegged it at $.58 per record. In my book, that’s an enormous difference. But it can be explained if you dive deeper.

After looking at one too many research papers, presentations and blog posts on the subject of data breach costs, I started to see that once you absorb a few underlying ideas, you can understand what everyone is yakking about.

That’s a roundabout way of saying that this will be a multi-part series.

Averages Can Cause Non-Average Problems

The first issue to take up is the average of a data sample. In fact, this blog's favorite statistician, Kaiser Fung, lectured us on this point a while back. When looking at a data set, a simple average of the numbers works well enough as long as the distribution is not too skewed – that is, as long as it doesn't have a spike or clump at the tail end.

But as Fung points out, when this is not the case, the average leads to inconsistencies, as in the following hypothetical data set of breach record counts over two years:

Company    Records breached (2015)    Records breached (2016)
1          100                        150
2          200                        400
3          150                        300
4          225                        250
5          75                         100
6          1000                       1200
7          1500                       1000
8          8000                       1000
9          300                        400
10         175                        500
Average    1172                       530

For 2015, the average of 1172 is off by several multiples for seven of the ten companies! And if we compare this average to the following year's average of 530, we could incorrectly conclude that breach counts are down.

Why? If we look at those seven companies, we see all their breach counts went, ahem, up.

This usually leads to a discussion of how numbers are distributed in a dataset, and to the observation that the median – the value below which half of the data falls – is a better representation than the average, especially for skewed data sets. Kaiser is very good at explaining this.
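You can check this with the made-up numbers above. A minimal sketch using only Python's standard library shows how far the mean drifts from the typical company, while the median stays honest:

```python
from statistics import mean, median

# Breach record counts from the hypothetical table above.
counts_2015 = [100, 200, 150, 225, 75, 1000, 1500, 8000, 300, 175]
counts_2016 = [150, 400, 300, 250, 100, 1200, 1000, 1000, 400, 500]

for year, counts in (("2015", counts_2015), ("2016", counts_2016)):
    print(year, "mean:", mean(counts), "median:", median(counts))

# 2015 mean: 1172.5   median: 212.5
# 2016 mean: 530      median: 400.0
# One outlier (8000 records) drags the 2015 mean far above what a typical
# company experienced. Year over year the mean *falls* even though the
# median -- and seven of the ten companies -- actually went up.
```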

Those who want a head start on the next post in this series can scan this paper, which has the best title on a data security topic I've come across: Sex, Lies and Cyber-crime Surveys, written by those crazy folks at Microsoft. If you don't want to read it, the point is this: for skewed data, it's important to analyze how each percentile contributes to the overall average.

Guesstimating Data Breach Costs

How does Ponemon determine the cost of a data breach? Generally, this information is not easily available. However, in recent years, these costs have started to show up in the annual reports of public companies.

But for private companies and for public companies that are not breaking breach costs out in their public financial reporting, you have to do more creative number crunching.

Ponemon surveys companies, asking them to rate the costs for common post-breach activities, including auditing & consulting, legal services, and identity protection fees. Ponemon then categorizes costs into whether they’re direct — for example, credit monitoring — or fuzzier indirect or opportunity costs — extra employee time or potential lost business.

It turns out that these indirect costs represent about 40% of the average cost of a breach based on their 2015 survey. These costs mean something, but they’re not really accounting costs. More on that next time.
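As a back-of-envelope illustration of why that categorization matters, strip the indirect share out of Ponemon's blended per-record figure. The arithmetic below uses the roughly 40% indirect share and the $201 number cited earlier; it's the calculation, not Ponemon's methodology:

```python
# Ponemon's blended per-record figure and their ~40% indirect share.
avg_cost_per_record = 201
indirect_share = 0.40

# Keeping only the "hard" accounting costs shrinks the number considerably...
direct_only = avg_cost_per_record * (1 - indirect_share)
print(f"Direct-cost-only estimate: ~${direct_only:.2f} per record")
# ...to about $120.60 -- still orders of magnitude above Verizon's $.58,
# so the direct/indirect split explains only part of the famous gap.
```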

Recently, other researchers have been able to get hold of far better estimates of direct breach costs by examining actual cyber insurance claims. Companies such as Advisen and NetDiligence have this insurance payout data and have been willing to share it.

The cyber insurance market is still immature and the actual payouts after deductibles and other fine print don’t represent the full direct cost of the breach. But this is, for the first time, evidence of direct costs.

Anyway, the friendly people over at RAND – yes, the very same company that worked this out – used these data sets to guesstimate an average breach cost of about $6 million per incident; wonks should review their paper. This tracks very closely with Ponemon's $6.5 million per-incident estimate for roughly the same period.

Per incident cost data based on insurance claims. Note the Max values! (Source: RAND)

Before you start shouting into your browser, I realize I used an average above to estimate a very skewed (and as we’ll see heavy-tailed) set.

In any case, several studies, including the RAND one, have focused on per-incident costs rather than per-record costs. At some point, the Verizon DBIR team also began to de-emphasize the count of records exposed, realizing that it's hard to get reliable numbers from their own forensic data.

In the 2015 DBIR report – the one where they announced their provocative $.58 per record breach cost claim – the researchers relied, for the first time, on a dataset of insurance claim data from NetDiligence.

Let me just say that the DBIR's average cost ratio is heavily influenced by a few companies with humongous breached record counts – likely in the millions – in the denominator, and by smaller total insurance payouts in the numerator. As we saw in my made-up example above, the average in this case is not very revealing.

Why not use multiple averages customized over different breach count ranges? I hope you're beginning to see that it's far better to segment the cost data by record count: you look up the costs appropriate for your case in a table. And Verizon did something close to that in the 2015 DBIR to come up with a table of data that's nearer Ponemon's average for the lower tiers:

Ok, so maybe Verizon’s headline-grabbing $.58 per record breached is not very accurate.

Counting breached records provides some insight into total costs, but there are other factors: the particular industry the company is in, the regulations it's under, credit protection costs for consumers, and company size. For example, take a look at this breach cost calculator based on Ponemon's own data.
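Here's a minimal sketch of what that segmented, table-lookup approach might look like in code. The tier boundaries and per-record rates are invented for illustration – a real table would come from segmenting actual claims data:

```python
import bisect

# (upper bound on records breached, assumed cost per record in dollars)
# Figures are hypothetical, chosen only to show the falling per-record rate.
TIERS = [(1_000, 150.0), (100_000, 25.0), (10_000_000, 1.50), (float("inf"), 0.10)]

def estimated_cost(records: int) -> float:
    """Per-record cost falls as breach size grows, so look up the tier
    instead of applying one flat average to every breach."""
    bounds = [bound for bound, _ in TIERS]
    rate = TIERS[bisect.bisect_left(bounds, records)][1]
    return records * rate

print(f"${estimated_cost(500):,.0f}")        # small breach, high per-record cost
print(f"${estimated_cost(5_000_000):,.0f}")  # mega-breach, pennies per record
```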

Linear Thinking and Its Limits

You can understand why the average breach-cost-per-record number is so popular: it provides a quick, although unreliable, answer for the total cost of a particular breach.

To derive the $201 average cost per record, Ponemon simply added up the costs (both direct and indirect) from their survey and divided by the number of records breached as reported by the companies.

This may be convenient for calculations but as a predictor, it’s not very good. I’m gently walking around the topic of linear regressions, which is one way to draw a “good” straight line through the dataset.

Wonks can check out Jay Jacobs’ great post on this in his Data Driven Security blog. He shows a linear regression beating out the simple Ponemon line with its slope of 201 — by the way, he gained direct access to Ponemon’s survey results. Jacobs’ beta is $103, which you can interpret as the marginal cost of an additional breached record. But even his regression model is not all that accurate.

I want to end this post with this thought: we want the world to look linear, but that’s not the way it ticks.

Why should breach costs go up by a fixed amount for each additional record stolen? And for that matter, why do we assume that 10% of the companies in a data breach survey will contribute 10% of the total costs, the next 10% will add another 10%, and so on?

Sure, for paying out credit monitoring costs for consumers and replacing cards reissued by litigious credit card companies, costs add up on a per-record basis.

On the other hand, I don’t know too many attorneys, security consultants, developers, or pen testers who say to new clients, “We charge $50 a data record to analyze or remediate your breach.”

Jacobs found a better non-linear model – technically log-linear, which is a fancy way of saying the record count variable has an exponent in it. In the graph below – thank you, Wolfram Alpha! – I compared the simple-minded Ponemon line against the more sophisticated model from Jacobs. You can gaze upon the divergence or else click here to explore on your own.

The great divergence: linear vs non-linear data breach cost estimates.
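If you'd like to reproduce the comparison yourself, here's a small sketch. The linear function uses Ponemon's $201 slope; the log-linear coefficients are stand-ins with the right general shape (an exponent below 1, so each additional record costs less than the last), not Jacobs' published fit:

```python
import math

def ponemon_linear(records: int) -> float:
    # A flat $201 for every record, no matter the breach size.
    return 201 * records

def log_linear(records: int, a: float = 7.7, b: float = 0.76) -> float:
    # log(cost) = a + b*log(records)  =>  cost = e^a * records^b.
    # Coefficients here are illustrative, not Jacobs' actual regression output.
    return math.exp(a) * records ** b

for n in (1_000, 100_000, 10_000_000):
    print(f"{n:>12,} records: linear ${ponemon_linear(n):>15,.0f}   "
          f"log-linear ${log_linear(n):>15,.0f}")
# The two models agree at small breach sizes, then diverge dramatically:
# the linear line races toward billions while the log-linear curve flattens.
```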

If you made it this far, congratulations!

In the next post, I hope all this background will pay off as I try to connect these ideas to come up with a more nuanced way to understand data breach costs.
