Finding EU Personal Data With Regular Expressions (Regexes)

If there is one important but under-appreciated point to make about complying with tough data security regulations such as the General Data Protection Regulation (GDPR), it’s the need to find and classify personally identifiable information, or personal data as it’s referred to in the EU. Discovering where personal data is located in file systems, and the permissions used to protect it, should be the first step in any action plan.

You don’t necessarily have to take our word for it: you can look at GDPR to-do lists from law firms and consulting groups that are heavily involved in advising companies on compliance.

We’ve already given you a heads up about Varonis GDPR Patterns, which helps you spot this personal data, and now that I’ve chatted and learned more from Sarah and the Varonis product development team, I’ve more to share.

Nobody Does It Better

GDPR Patterns is, of course, built on our Data Classification Framework or DCF. For those new to Varonis, DCF has an enormous advantage over other classification solutions, since it implements true incremental scanning. After the initial scan of the file system, DCF can quickly identify any changes, and then selectively scan those directories or folders that have been accessed. This makes far more sense than rescanning everything from scratch!

By the way, for those crazy enough to think they can try rolling their own data scanning software, they can refer to my series of posts on a DIY classification system based on PowerShell. Please learn from my craziness and avoid the urge.

With DCF doing the heavy lifting, GDPR Patterns can focus on spotting EU-style personal data within files. According to the GDPR definition, personal data is effectively anything related to an individual that can identify that person. The definition’s very broad and deceptively vague language covers a lot of territory! (For more excruciating details, please refer to this official EU document.)

Obviously, we’re talking about all the usual suspects: names, addresses, phone numbers, credit card, bank and other account numbers. GDPR personal data also encompasses internet-era identifiers such as IP and email addresses, and futuristic biometric identifiers (DNA, retinal scans) as well.

Many EU Identifiers

The EU comprises 28 countries, and that means many identifiers vary by country. This is where the Varonis product team did the hard work of research, spending months analyzing phone numbers, license plate numbers, VAT codes, passports, driver’s licenses, and national identification numbers across the EU.

Does anybody know what the Hungarian personal identification code, known as Születési szám, looks like?

That would be an 11-digit sequence based on date of birth, gender, a unique number to separate those born on the same date, and a checksum.

Or what about a Slovakian passport number?

That’s nine characters: two digits followed by seven letters.

Varonis has worked all this out!

We use regular expressions or regexes to do pattern matching when possible. It’s not as easy to craft these regexes as you might think.
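
To make this concrete, here’s a minimal PowerShell sketch of what regexes for the two formats above might look like. These are simplified illustrations based only on the descriptions in this post — not Varonis’s production expressions, which also validate things a bare regex can’t, such as the Hungarian checksum:

# Simplified illustrations of the two formats described above (not the production regexes)
$hungarianId = '\b\d{11}\b'              # 11 digits: birth date, gender, serial number, checksum
$slovakPassport = '\b\d{2}[A-Z]{7}\b'    # 2 digits followed by 7 letters, per the description above

# Quick test against a made-up value
'12ABCDEFG' -match $slovakPassport       # True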

If you want to match wits against the people who devised the Dutch license plate numbering scheme, you can click here to see a regex analysis of one sample number. And then you can try a few out on your own to see if you’ve got it. Enjoy!

A regular expression representing Dutch license plates. Think you understand it? Try your luck with the link above!
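
To give a flavor of the challenge — and this is an illustrative simplification, not the regex from the linked analysis — here’s a PowerShell check covering just two of the older Dutch sidecode formats; the real pattern has to handle many more:

# Illustrative only: two of the older Dutch sidecode formats (XX-99-99 and 99-XX-99)
$dutchPlate = '^([A-Z]{2}-\d{2}-\d{2}|\d{2}-[A-Z]{2}-\d{2})$'
'GV-57-12' -match $dutchPlate    # True
'1-ABC-23' -match $dutchPlate    # False -- a newer sidecode would need another alternative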

Patterns Are More Than Regexes

The research and effort we put into the regular expressions only forms part of the GDPR Patterns solution. Sure, it’s conceivable that someone could work out regexes for a few countries or do Google searches to find these expressions on the web.

However, we’ve crafted our regexes by looking at real-world data samples, and not automatically accepting what’s provided by government agencies and others. Our GDPR regexes have proven themselves in the field!

With so many different alphanumeric patterns, it shouldn’t be surprising there’d be occasional “collisions” — sequences that could be classified into several types of personal data. For example, EU passport numbers are between 8 and 10 consecutive digits, so they’d also be caught by an EU phone number regex.

That is why we’ve also added validator algorithms to supplement the regexes. Specifically, GDPR Patterns scans for special keywords in proximity to the candidate personal data: if we find a keyword, it helps zero in on the right GDPR pattern.

For example, when GDPR Patterns finds an 11-digit number, it looks for additional keywords to determine whether it represents a national personal ID: “IK” or “ISIKUKOOD” implies Estonia; “Születési szám”, “Személyi szám”, or “Személyi azonosító” would of course mean Hungary, etc.

If we don’t find the extra keywords, then we can’t assume the 11 digits are an identification code, and so it would not be classified as GDPR personal data. In other words, the validation algorithms reduce false positives.

In case you’re asking, we do use negative keywords as well. If GDPR Patterns finds one of these keywords, it means that data caught by the regex can’t be classified under that pattern.
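
Here’s a toy PowerShell sketch of the proximity-keyword idea — the 100-character window and the keyword lists are hypothetical, and none of the real validator logic is shown:

# Toy sketch of proximity-keyword validation (hypothetical window and keywords)
$text = Get-Content .\sample.txt -Raw
$window = 100
foreach ($m in [regex]::Matches($text, '\b\d{11}\b')) {
    $start = [Math]::Max(0, $m.Index - $window)
    $length = [Math]::Min($text.Length - $start, $m.Length + 2 * $window)
    $context = $text.Substring($start, $length)
    if ($context -match 'ISIKUKOOD|Születési szám|Személyi szám') {
        "Likely national ID: $($m.Value)"    # positive keyword found nearby
    }
    elseif ($context -match 'invoice|order number') {
        # negative keyword nearby: don't classify this hit as personal data
    }
}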

More GDPR Patterns Details

The Varonis developers have dived deep into EU identification numbers, driver’s licenses, license plates, and phone numbers, looking at real-world samples to come up with both positive and negative keywords and proximity information.

We’ve integrated GDPR Patterns into our DatAdvantage reports to show which files contain a specific Pattern based on a hit count.

GDPR Patterns is also integrated with DatAlert so that notifications can be delivered when files containing personal data are accessed. We’ll help you meet the GDPR 72-hour breach notification requirement.

Data Transport Engine will also use GDPR Patterns to archive or remove stale or no longer useful EU personal data, another requirement in GDPR.

Have questions?  Contact us for more information.

New Post-Brexit UK Data Law: Long Live the GDPR!

The UK is leaving the EU to avoid the bureaucracy from Brussels, which includes having to comply with the General Data Protection Regulation (GDPR). So far, so good. However, since the EU is so important to its economy, the UK’s local data laws will in effect have to be at a very high level — basically, GDPR-like — or else the EU won’t allow data transfers.

Then there is the GDPR’s new principle of extra-territoriality, or territorial scope — something we’ve yakked a lot about on the blog — which means companies in non-EU countries will still have to deal with the GDPR.

Finally, as a practical matter the GDPR will kick in before the UK formally exits the EU. So the UK will be under the GDPR for at least a year or more no matter what.

Greater legal minds than mine have already commented on all this craziness.

The UK government looked at the situation and decided to bite the bullet — or, more appropriately, eat the cold porridge.

Last week, the UK released a statement of intent that commits the government to scrapping their existing law, the Data Protection Act, and replacing it with a new Data Protection Bill.

Looks Familiar

This document is very clear about what the new UK data law will look like. Or as they say:

Bringing EU law into our domestic law will ensure that we help to prepare the UK for the future after we have left the EU. The EU General Data Protection Regulation (GDPR) and the Data Protection Law Enforcement Directive (DPLED) have been developed to allow people to be sure they are in control of their personal information while continuing to allow businesses to develop innovative digital services without the chilling effect of over-regulation. Implementation will be done in a way that as far as possible preserves the concepts of the Data Protection Act to ensure that the transition for all is as smooth as possible, while complying with the GDPR and DPLED in full.

In effect, the plan is to have a law that mirrors the GDPR, allowing UK companies to continue to do business as usual.

The Bill will include the GDPR’s new privacy rights for individuals: the “right to be forgotten”, data portability, and right to personal data access. And it will contain the GDPR’s obligations for controllers to report breaches, conduct impact assessments involving sensitive data, and designate data protection officers.

What about the GDPR’s considerable fines?

The UK has also gone along with the EU data law’s tiered structure – fines of up to 4% of global turnover (revenue).

Her Majesty’s Government may be leaving the EU, but EU laws for data privacy and security will remain. The GDPR is dead, long live the GDPR!

GDPR Resources

Of course, the new Bill will have its own articles, with different wording and a different numbering scheme than the GDPR. And legal experts will no doubt find other differences — we’ll have to wait for the new law. Having said that, our considerable resources on the EU data law remain relevant.

For UK companies reading this post and looking for a good overview, here are three links that should help:

For a deeper dive into the GDPR, we offer for your edification these two resources:

And feel free to search the IOS blog and explore the GDPR on your own!

A Few Thoughts on Data Security Standards

Did you know that the 462-page NIST 800-53 data security standard has 206 controls with over 400 sub-controls1? By the way, you can gaze upon the convenient XML-formatted version here. PCI DSS is no slouch either, with hundreds of sub-controls in its requirements document. And then there’s the sprawling ISO 27001 data standard.

Let’s not forget about security frameworks, such as COBIT and NIST CSF, which are kind of meta-standards that map into other security controls. For organizations in health or finance that are subject to US federal data security rules, HIPAA and GLBA’s data regulations need to be considered as well. And if you’re involved in the EU market, there’s GDPR; in Canada, it’s PIPEDA; in the Philippines, it’s this, etc., etc.

There’s enough technical and legal complexity out there to keep teams of IT security pros, privacy attorneys, auditors, and diplomats busy till the end of time.

As a security blogger, I’ve also puzzled and pondered over the aforementioned standards and regulations. I’m not the first to notice the obvious: data security standards fall into patterns that make them all very similar.

Security Control Connections

If you’ve mastered and implemented one, then very likely you’re compliant with others as well. In fact, that’s one good reason for having frameworks. For example, with, say, NIST CSF, you can leverage your investment in ISO 27001 or ISA 62443 through their cross-mapped control matrix (below).

Got ISO 27001? Then you’re compliant with NIST CSF!

I think we can all agree that most organizations will find it impossible to implement all the controls in a typical data standard with the same degree of attention — when was the last time you checked the physical access audit logs for your data transmission assets (NIST 800-53, PE-3b)?

So to make it easier for companies and the humans who work there, some of the standards groups have issued further guidelines that break the huge list of controls into more achievable goals.

The PCI group has a prioritized approach to dealing with its DSS — six practical milestones that are broken into a smaller subset of relevant controls. It also has a best practices guide that groups — and this is important — security controls into three broader functional areas: assessment, remediation, and monitoring.

In fact, we wrote a fascinating white paper explaining these best practices, and how you should be feeding back the results of monitoring into the next round of assessments. In short: you’re always in a security process.

NIST CSF, which itself is a pared down version of NIST 800-53, also has a similar breakdown of its controls into broader categories, including identification, protection, and detection. If you look more closely at the CSF identification controls, which mostly involve inventorying your IT data assets and systems, you’ll see that the main goal in this area is to evaluate or assess the security risks of the assets that you’ve collected.

File-Oriented Risk Assessments

In my mind, the trio of assess, protect, and monitor is a good way to organize and view just about any data security standard.

In dealing with these data standards, organizations can also take a practical short-cut through these controls based on what we know about the kinds of threats appearing in our world — and not the one that data standards authors were facing when they wrote the controls!

We’re now in a new era of stealthy attackers who enter systems undetected, often through phishing emails, leveraging previously stolen credentials, or zero-day vulnerabilities. Once inside, they can fly under the monitoring radar with malware-free techniques, find monetizable data, and then remove or exfiltrate it.

Of course it’s important to assess, protect and monitor network infrastructure, but these new attack techniques suggest that the focus should be inside the company.

And we’re back to a favorite IOS blog theme. You should really be making it much harder for hackers to find the valuable data — like credit card or account numbers, corporate IP — in your file systems, and detect and stop the attackers as soon as possible.

Therefore, when looking at the how to apply typical data security controls, think file systems!

For, say, NIST 800-53, that means scanning file systems, looking for sensitive data, examining the ACLs or permissions, and then assessing the risks (CM-8, RA-2, RA-3). For remediation or protection, this would involve reorganizing Active Directory groups and resetting ACLs to be more exclusive (AC-6). For detection, you’ll want to watch for unusual file system accesses that likely indicate hackers borrowing employee credentials (SI-4).
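
As a rough illustration of that first assessment step — scanning for sensitive patterns and flagging risky permissions — here’s a bare-bones PowerShell sketch. The share path and the card-number regex are placeholders, and a product like DatAdvantage does this at scale:

# Bare-bones sketch: find files containing card-like numbers, then flag any with 'Everyone' access
$cardPattern = '\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b'
Get-ChildItem \\fs1\corp -Recurse -Include *.txt,*.csv |
    Where-Object { Select-String -Path $_.FullName -Pattern $cardPattern -Quiet } |
    ForEach-Object {
        $acl = Get-Acl $_.FullName
        if ($acl.Access | Where-Object { $_.IdentityReference -like '*Everyone*' }) {
            $_.FullName    # sensitive data sitting behind global access
        }
    }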

I think the most important point is not to view these data standards as just an enormous list of disconnected controls, but instead to consider them in the context of assess-protect-monitor, and then apply them to your file systems.

I’ll have more to say on a data or file-focused view of data security controls in the coming weeks.

1 How did I know that NIST 800-53 has over 400 sub-controls? I took the XML file and ran these two amazing lines of PowerShell:

# Load the XML version of NIST 800-53 into a PowerShell XML object
[xml]$nist = Get-Content .\800-53-controls.xml
# Enumerate every sub-control statement number and count the lines
$nist.controls.control | ForEach-Object { $_.statement.statement.number } | Measure-Object -Line

GDPR: Troy Hunt Explains it All in Video Course

You’re a high-level IT security person, who’s done the grunt work of keeping your company compliant with PCI DSS, ISO 27001, and a few other security abbreviations, and one day you’re in a meeting with the CEO, CSO, and CIO. When the subject of General Data Protection Regulation or GDPR comes up, all the Cs agree that there are some difficulties, but everything will be worked out.

You are too afraid to ask, “What is the GDPR?”

Too Busy for GDPR

We’ve all been there, of course. Your plate has been full over the last few weeks and months hunting down vulnerabilities, hardening defenses against ransomware and other malware, upgrading your security, along with all the usual work involved in keeping the IT systems humming along.

So it’s understandable that the General Data Protection Regulation may have flown under your radar.

However, there’s no need to panic.

The GDPR shares many similarities with other security standards and regulations, so it’s just a question of learning some basic background, the key requirements of the new EU law, and a few gotchas — preferably explained by an instructor with a knack for connecting with IT people.

Hunt on GDPR

And that’s why we engaged Troy Hunt to develop a 7-part video course on the GDPR. Troy is a web security guru, Australian Microsoft Regional Director, and author whose security writing has appeared in Forbes, Time Magazine, and Mashable. And he’s no stranger to this blog, either!

Let’s get back to you and other busy IT security folks who need to get up to speed quickly. With just an hour of your time, Troy will cover the basic vocabulary and definitions (“controller”, “processor”, “personal data”), the key concept underlying the GDPR (personal data is effectively owned by the consumer), and what you’ll need to do to keep your organization compliant (effectively, minimize and monitor this personal data).

By the way, Troy also explains how US companies, even those without EU offices, can get snagged by the GDPR’s territorial scope rule — Article 3, to be exact. US-based e-commerce companies: you’ve been warned!

While Troy doesn’t expect you to be an attorney, he analyzes and breaks down a few of the more critical requirements and the penalties for not complying, particularly on breach reporting, so that you’ll be able to keep up with some of the legalese when it arises at your next GDPR meeting.

And I think you’ll see by the end of the course that while there may be some new aspects to this EU law, as Troy notes, the GDPR really legislates IT common sense.

What are you waiting for?  Register and get GDPR-aware starting today!

[Transcript] Interview With GDPR Attorney Sue Foster

Over two podcasts, attorney Sue Foster dispensed incredibly valuable GDPR wisdom. If you’ve already listened, you know they’re the kind of insights that would otherwise have required a lengthy Google expedition, followed by chatting with your cousin Vinny the lawyer. We don’t recommend that!

In reviewing the transcript below, I think there are three points worth commenting on. One, the GDPR’s breach reporting rule may appear to give organizations some wiggle room. But in fact that’s not the case! The reference to “rights and freedoms of natural persons” refers to explicit privacy and property rights spelled out in the EU Charter. This ain’t vague language.

However, there is some leeway in reporting within the 72-hour time frame. In short: you have to make a good effort, but you can delay if, say, you’re currently investigating and need more time because otherwise you’d compromise the investigation.

Two, the territorial scope requirements in Article 3 are complicated by what it means to target EU citizens in your marketing. The very tricky part is when you’re a multinational company that has both an EU and a non-EU presence. If you read closely, Foster is suggesting that EU citizens who happen to find their way to, say, your US web site would not be protected by the GDPR.

In other words, if the company’s general marketing doesn’t target EU citizens, then the information collected is not under GDPR protections. But that would not apply to a company’s localized web content for, say, the French or German markets — information submitted through those sites would of course be under the GDPR.

Yes, I will confirm this with Foster. But if this is not the case for multinationals, then it would cause a pretty large mal de tête.

Three, GDPR compliance is based on, as Foster notes, a “show your work” principle, the same as on math tests in high school. It is not like PCI DSS, where you’re going down a checkoff list: Two-factor authentication? Yes. Vulnerability scanning? Yes, etc.

The larger issue is that security technology will change and so what worked well in the past will likely not hold up in the future. With GDPR, you should be able to justify your security plan based on the current state of security technology and document what you’ve done.

Enough said.

Inside Out Security
Sue Foster is a partner with Mintz Levin based out of the London office. She works with clients on European data protection compliance and on commercial matters in the fields of clean tech, high tech, mobile media, and life sciences. She’s a graduate of Stanford Law School. She is also — and we like this here at Varonis — a Certified Information Privacy Professional.

I’m very excited to be talking to an attorney with a CIPP, and with direct experience on a compliance topic we cover on our blog — the General Data Protection Regulation, or GDPR.

Welcome, Susan.

Sue Foster
Hi Andy. Thank you very much for inviting me to join you today. There’s a lot going on in Europe around cybersecurity and data protection these days, so it’s a fantastic set of topics.
IOS
Oh terrific. So what are some of the concerns you’re hearing from your clients on GDPR?
SF
So one of the big concerns is getting to grips with the extra-territorial reach. I work with a number of companies that don’t have any office or other kind of presence in Europe that would qualify them as being established in Europe.

But they are offering goods or services to people in Europe. And for these companies, you know, in the past they’ve had to go through quite a bit of analysis to understand whether the Data Protection Directive applies to them. Under the GDPR, it’s a lot clearer, and there are rules that are easier for people to understand and follow.

So now when I speak to my U.S. clients, if they’re a non-resident company that promotes goods or services in the EU, including free services like a free app, for example, they’ll be subject to the GDPR. That’s very clear.

Also, if a non-resident company is monitoring the behavior of people who are located in the EU, including tracking and profiling people based on their internet or device usage, or making automated decisions about people based on their personal data, the company is subject to the GDPR.

It’s also really important for U.S. companies to understand that there’s a new ePrivacy Regulation in draft form that would cover any provider, regardless of location, of any form of publicly available electronic communication services to EU users.

Under this ePrivacy Regulation, the notion of what these communication services providers are is expanded from the current rules, and it includes things that are called over-the-top applications – so messaging apps and communications features, even when a communication feature is just something that is embedded in a website.

If it’s available to the public and enables communication, even in a very limited sort of forum, it’s going to be covered. That’s another area where U.S. companies are getting to grips with the fact that European rules will apply to them.

So this is a new regulation as well that may apply to companies located outside the EU. All of these things are combining to suddenly force a lot of U.S. companies to get to grips with European law.

IOS
So just to clarify, let’s say a small U.S. social media company that doesn’t market specifically to EU countries, and doesn’t have a website in the language of an EU country — they would or would not fall under the GDPR?
SF
On the basis of their [overall] marketing activity, they wouldn’t. But we would need to understand whether they’re profiling or tracking EU users — perhaps through viral marketing that’s been going on, right? And they are just tracking everybody. And they know that they’re tracking people in the EU. Then they’re going to be caught.

But if they’re not doing that — if they’re not engaging in any kind of tracking, profiling, or monitoring activities, and they’re not affirmatively marketing into the EU — then they’re outside of the scope. Unless, of course, they’re offering some kind of service that falls under one of these other regulations that we were talking about.

IOS
What we’re hearing from our customers is that the 72-hour breach reporting rule is a concern. Our customers are confused, and after looking at some of the fine print, we are as well! So I’m wondering if you could explain breach reporting in terms of thresholds: what needs to happen before a report is made to the DPAs and consumers?
SF
Sure, absolutely. First it’s important to look at the specific definition of personal data breach. It means a breach of security leading to the ‘accidental or unlawful destruction, loss, alteration, unauthorized disclosure of or access to personal data’. So it’s fairly broad.

The requirement to report these incidents has a number of caveats. So you have to report the breach to the Data Protection Authority as soon as possible, and where feasible, no later than 72 hours after becoming aware of the breach.

Then there’s a set of exceptions. And that is unless the personal data breach is unlikely to result in a risk to the rights and freedoms of natural persons. So I can understand why U.S. companies would sort of look at this and say, ‘I don’t really know what that means’. How do I know if a breach is likely to ‘result in a risk to the rights and freedoms of natural persons’?

Because that’s not defined anywhere in this regulation!

It’s important to understand that that little bit of text is EU-speak that really refers to the Charter of Fundamental Rights of the European Union, which is part of EU law.

There is actually a document you can look at to tell you what these rights and freedoms are. But you can think of it basically in common sense terms. Are the person’s privacy rights affected, are their rights and the integrity of their communications affected, or is their property affected?

So you could, for example, say that there’s a breach that isn’t likely to reveal information that I would consider personally compromising in a privacy perspective, but it could lead to fraud, right? So that could affect my property rights. So that would be one of those issues. Basically, most of the time you’re going to have to report the breach.

When you’re going through the process of working out whether you need to report the breach to the DPA, and you’re considering whether or not the breach is likely to result in a risk to the rights and freedoms of natural persons, one of the things that you can look at is whether people are practically protected.

Or whether there’s a minimal risk because of steps you’ve already taken such as encrypting data or pseudonymizing data and you know that the key that would allow re-identification of the subjects hasn’t been compromised.

So these are some of the things that you can think about when determining whether or not you need to report to the Data Protection Authority.

If you decide you have to report, you then need to think about ‘do you need to report the breach to the data subjects’, right?

And the standard there is that it has to be a ‘high risk to the rights and freedoms of natural persons’. So a high risk to someone’s privacy rights or property rights and things of that sort.

And again, you can look at the steps that you’ve taken to either prevent the data from — you know before it even was leaked — prevent it from being potentially vulnerable in a format where people could be damaged. Or you could think also whether you’ve taken steps after the breach that would prevent those kinds of risks from happening.

Now, of course, the problem is the risk of getting it wrong, right?

If you decide that you’re not going to report after you go through this full analysis, and the DPA disagrees with you, now you’re running the risk of a fine of up to 2% of the group’s global turnover — or gross revenue around the world.

And that, I think, is going to lead to a lot of companies being cautious and reporting even when they might have been able to take advantage of some of these exceptions, because they won’t feel comfortable relying on them.

IOS
I see. So just to bring it to more practical terms: we can assume that, let’s say, credit card numbers or some other identification numbers, if they were breached or taken, would have to be reported both to the DPA and the consumer?
SF
Most likely. I mean if it’s…yeah almost certainly. Particularly if the security code on the back of the card has been compromised, and absolutely you’ve got a pretty urgent situation. You also have a responsibility to basically provide a risk assessment to the individuals, and advise them on steps that they can take to protect themselves such as canceling their card immediately.
IOS
One hypothetical that I wanted to ask you about is the Yahoo breach, which technically happened a few years ago. I think it was over two years ago … Let’s say something like that had happened after the GDPR where a company sort of had known that there was something happening that looked like a breach, but they didn’t know the extent of it.

If they had not reported it, and waited until after the 72-hour rule, what would have happened to let’s say a multinational like Yahoo?

SF
Well, Yahoo would need to go through the same analysis, and it’s hard to imagine that a breach on that scale, with the level of access that was provided to Yahoo users’ accounts as a result of those breaches — and of course the fact that people know it’s very common for individuals to reuse passwords across different sites, and so you, you know, have the risk of follow-on problems.

It’s hard to imagine they would be in a situation where they would be off the hook for reporting.

Now the 72-hour rule is not hard and fast.

But the idea is you report as soon as possible. So you can delay for a little while if it’s necessary for say a law enforcement investigation, right? That’s one possibility.

Or if you’re doing your own internal investigation and somehow that would be compromised or taking security measures would be compromised in some way by reporting it to the DPA. But that’ll be pretty rare.

Obviously going along for months and months with not reporting it would be beyond the pale. And I would say a company like Yahoo would potentially be facing a fine of 2% of its worldwide revenue!

IOS
So this is really serious business, especially for multinationals.

This is also a breach reporting related question, and it has to do with ransomware. We’re seeing a lot of ransomware attacks these days. In fact, when we visit customer sites and analyze their systems, we sometimes see these attacks happening in real time. Since a ransomware attack encrypts the file data but most of the time doesn’t actually take the data or the personal data, would that breach have to be reported or not?

SF
This is a really interesting question! I think the by-the-book answer is, technically, if a ransomware attack doesn’t lead to the accidental or unlawful destruction, loss, or alteration or unauthorized disclosure of or access to the personal data, it doesn’t actually fall under the GDPR’s definition of a personal data breach, right?

So, if a company is subject to an attack that prevents it from accessing its data, but the intruder cannot itself access, change, or destroy the data, you could argue it’s not a personal data breach, and therefore not reportable.

But it sure feels like one, doesn’t it?

IOS
Yes, it does!
SF
Yeah. I suspect we’re going to find that the new European Data Protection Board will issue guidance that somehow brings ransomware attacks into the fold of what’s reportable. Don’t know that for sure, but it seems likely to me that they’ll find a way to do that.

Now, there are two important caveats.

Even though, technically, a ransomware attack may not be reportable, companies should remember that a ransomware attack could cause them to be in breach of other requirements of the GDPR, like the obligation to ensure data integrity and accessibility of the data.

Because by definition, you know, the ransomware attack has made the data inaccessible and has totally corrupted its integrity. So, there could be liability there under the GDPR.

And also, the company that’s suffering the ransomware attack should consider whether they’re subject to the new Network and Information Security Directive, which is going to be implemented in national laws by May 9th of 2018. So again, May 2018 being a real critical time period. That directive requires service providers to notify the relevant authority when there’s been a breach that has a substantial impact on the services, even if there was no GDPR personal data breach.

And the Network and Information Security Directive applies to a wide range of companies, including those that provide “essential services”. Sort of the fundamentals that drive the modern economy: energy, transportation, financial services.

But also, it applies to digital service providers, and that would include cloud computing service providers.

You know, there could be quite a few companies that are being held up by ransomware attacks who are in the cloud space, and they’ll need to think about their obligations to report even if there’s maybe not a GDPR reporting requirement.

IOS
Right, interesting. Okay. As a security company, we’ve been preaching Privacy by Design principles, data minimization, and retention limits, and in the GDPR they’re now actually part of the law.

The GDPR is not very specific about what has to be done to meet these Privacy by Design ideas, so do you have an idea what the regulators might say about PbD as they issue more detailed guidelines?

SF
They’ll probably tell us more about the process but not give us a lot of insight as to specific requirements, and that’s partly because the GDPR itself is very much a show-your-work regulation.

You might remember back on old, old math tests, right? When you were told, ‘Look, you might not get the right answer, but show all of your work in that calculus problem and you might get some partial credit.’

And it’s a little bit like that. The GDPR is a lot about process!

So, the push for Privacy by Design is not to say that there are specific requirements other than paying attention to whatever the state of the art is at the time. So, really looking at the available privacy solutions at the time and thinking about what you can do. But a lot of it is about just making sure you’ve got internal processes for analyzing privacy risks and thinking about privacy solutions.

And for that reason, I think we’re just going to get guidance that stresses that, develops that idea.

But any guidance that told people specifically what security technologies they needed to apply would probably be good for, you know, 12 or 18 months, and then something new would come along.

Where we might see some help is, eventually, in terms of ISO standards. Maybe there’ll be an opportunity in the future for something that comes along that’s an international standard, that talks about the process that companies go through to design privacy into services and devices, etc. Maybe then we’ll have a little more certainty about it.

But for now, and I think for the foreseeable future, it’s going to be about showing your work, making sure you’ve engaged, and that you’ve documented your engagement, so that if something does go wrong, at least you can show what you did.

IOS
That’s very interesting, and a good thing to know. One last question: we’ve been following some of the security problems related to Internet of Things devices, which are gadgets on the consumer market that can include internet-connected coffee pots, cameras, and children’s toys.

We’ve learned from talking to testing experts that vendors are not really interested in PbD. It’s ship first, maybe fix security bugs later. Any thoughts on how the GDPR will affect IoT vendors?

SF
It will definitely have an impact. The definition of personal data under the GDPR is very, very broad. So, effectively, anything that I am saying that a device picks up is my personal data, as well as data kind of about me, right?

So, if you think about a device that knows my shopping habits that I can speak to and I can order things, everything that the device hears is effectively my personal data under the European rules.

And Internet of Things vendors do seem to be lagging behind in Privacy by Design. I suspect we’re going to see investigations and fines in this area early on, when the GDPR starts being enforced in May 2018.

Because the stories about the security risks of, say, children’s toys have really caught the attention of the media and the public, and the regulators won’t be far behind.

And now, we have fines for breaches that range from 2% to 4% of a group’s global turnover. It’s an area that is ripe for enforcement activity, and I think it may be a surprise to quite a few companies in this space.

It’s also really important to go back to this important theme that there are other regulations, besides the GDPR itself, to keep track of in Europe. The new ePrivacy Regulation contains some provisions targeted at the Internet of Things, such as the requirement to get consent from consumers for machine-to-machine transfers of communications data, which is going to be very cumbersome.

The [ePrivacy] Regulation says you have to do it, but it doesn’t really say how you’re going to get consent — meaningful consent, which is a very high standard in Europe — to these transfers when there’s no real intelligent interface between the device and the person, the consumer who’s using it. Because there are some things that have maybe some kind of web dashboard, some kind of app that you use to communicate with your device, where you could have privacy settings.

There’s other stuff that’s much more behind the scenes with the Internet of Things, where the user is not having a high level of engagement. So, maybe a smart refrigerator that’s relaying information about energy consumption to, you know, the grid. Even there, there’s potentially information where the user is going to have to give consent to the transfer.

And it’s hard to kind of imagine exactly what that interface is going to look like!

I’ll mention one thing about the ePrivacy Regulation. It’s in draft form. It could change, and that’s important to know. It’s not likely to change all that much, and it’s on a fast-track timeline, because the Commission would like to have it in place and ready to go in May 2018, at the same time as the GDPR.

IOS
 Sue Foster, I’d like to thank you again for your time.
SF
You’re very welcome. Thank you very much for inviting me to join you today.

[Podcast] Mintz Levin’s Sue Foster on the GDPR, Part II

Leave a review for our podcast & we'll send you a pack of infosec cards.


In this second part of our interview with attorney and GDPR pro Sue Foster, we get into a cyber topic that’s been on everyone’s mind lately: ransomware.

A ransomware attack on EU personal data is unquestionably a breach —  “accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access  …”

But would it be reportable under the GDPR, which goes into effect next year?

In other words, would an EU company (or US one as well) have to notify a DPA and affected customers within the 72-hour window after being attacked by, say, WannaCry?

If you go by the language of the law, the answer is a definite …  no!

Foster explains that for it to be reportable, a breach has to cause a risk “to the rights and freedoms of natural persons.”  For what this legalese really means, you’ll just have to listen to the podcast. (Hint: it refers to a fundamental document of the EU.)

Anyway, personal data that’s encrypted by ransomware and not taken off premises is not much of a risk for anybody. There are still more subtleties involving ransomware and other EU data laws that I think are best explained by her, so you’ll just have to listen to Sue’s legal advice directly!

There’s also a very interesting analysis by Foster of the implications of the GDPR for Internet-of-Things gadget makers.

[Podcast] Mintz Levin’s Sue Foster on the GDPR, Part I

Leave a review for our podcast & we'll send you a pack of infosec cards.


Sue Foster is a London-based partner at Mintz Levin. She has a gift for explaining the subtleties in the EU General Data Protection Regulation (GDPR). In this first part of our interview, Foster discusses how the GDPR’s new extraterritoriality rule would place US companies under the law’s data obligations.

In the blog, we’ve written about some of the implications of the GDPR’s Article 3, which covers the law’s territorial scope. In short: if you market online to EU consumers — web copy, say, in the language of some EU country  — then you’ll fall under the GDPR. And this also means you would have to report data exposures under the GDPR’s new 72-hour breach rule.

Foster points out that if a US company happens to attract EU consumers through their overall marketing, they would not fall under the law.

So a cheddar cheese producer from Wisconsin whose web site gets the attention and business of French-based fromage lovers is not required to protect their data at the level of the GDPR.

There’s another snag for US companies: an update to the EU’s ePrivacy Directive, which places restrictions on embedded communication services. Foster explains how companies, not necessarily ISPs, that provide messaging — that means you, WhatsApp, Skype, and Gmail — would fall under this law’s privacy rules.

Sue’s insights on these and other topics will be relevant to both corporate privacy officers and IT security folks.

Data Security Compliance and DatAdvantage, Part III:  Protect and Monitor

This article is part of the series "Data Security Compliance and DatAdvantage". Check out the rest:

At the end of the previous post, we took up the nuts-and-bolts issues of protecting sensitive data in an organization’s file system. One popular approach, the least-privileged access model, is often explicitly mentioned in compliance standards, such as NIST 800-53 or PCI DSS. Varonis DatAdvantage and DataPrivilege provide a convenient way to accomplish this.

Ownership Management

Let’s start with DatAdvantage. We saw last time that DA provides graphical support for helping to identify data ownership.

If you want to get more granular than just seeing who’s been accessing a folder, you can view the actual access statistics of the top users with the Statistics tab (below).

This is a great help in understanding who is really using the folder. The ultimate goal is to find the true users, and remove extraneous groups and users who perhaps needed occasional access but not as part of their job role.

The key point is to first determine the folder’s owner — the one who has the real knowledge and wisdom of what the folder is all about. This may require some legwork on IT’s part in talking to the users, based on the DatAdvantage stats, and working out the real chain of command.

Once you use DatAdvantage to set the folder owners (below), these more informed power users, as we’ll see, can independently manage who gets access and whose access should be removed. The folder owner will also automatically receive DatAdvantage reports, which will help guide them in making future access decisions.

There’s another important point to make before we move on. IT has long been responsible for provisioning access without knowing the business purpose. Varonis DatAdvantage assists IT in finding these owners and then giving them the access-granting powers.

Anyway, once the owner has done the housekeeping of paring and removing unnecessary folder groups, they’ll then want to put into place a process for permission management. Data standards and laws recognize the importance of having security policies and procedures as part of an ongoing program — i.e., not something an owner does once a year.

And Varonis has an important part to play here.

Maintaining Least-Privileged Access

How do ordinary users whose job role now requires them to access a managed folder request permission from the owner?

This is where Varonis DataPrivilege makes an appearance. Regular users will need to bring this interface up (below) to formally request access to a managed folder.

The owner of the folder has a parallel interface from which to receive these requests and then grant or revoke permissions.

As I mentioned above, these security ideas of least-privileged access and permission management are often explicitly part of compliance standards and data security laws. Building on my list from the previous post, here’s a more complete enumeration of controls that Varonis DatAdvantage supports:

  • NIST 800-53: AC-2, AC-3, AC-5, CM-5
  • NIST 800-171: 3.1.4, 3.1.5, 3.4.5
  • PCI DSS 3.x: 7.1, 7.2
  • HIPAA: 45 CFR 164.312 a(1), 164.308a(4)
  • ISO 27001: A.6.1.2, A.9.1.2, A.9.2.3, A11.2.2
  • CIS Critical Security Controls: 14.4
  • New York State DFS Cybersecurity Regulations: 500.07

Stale Sensitive Data

Minimization is an important theme in security standards and laws. These ideas are best represented in the principles of Privacy by Design (PbD), which has good overall advice on this subject: minimize the sensitive data you collect, minimize who gets to see it, and minimize how long you keep it.

Let’s address the last point, which goes under the more familiar name of data retention. One piece of low-hanging fruit for reducing security risk is to delete or archive stale sensitive data embedded in files.

This makes perfect sense, of course. This stale data can be, for example, consumer PII collected in short-term marketing campaigns, but now residing in dusty spreadsheets or rusting management presentations.

Your organization may no longer need it, but it’s just the kind of monetizable data that hackers love to get their hands on.

As we saw in the first post, which focused on Identification, DatAdvantage can find and identify file data that hasn’t been used after a certain threshold date.

Can the stale data report be tweaked to find stale data that is also sensitive?

Affirmative.

You need to add the hit count filter and set the number of sensitive data matches to an appropriate number.

In my test environment, I discovered that the C:\Share\pvcs folder hasn’t been touched in over a year and holds some sensitive data.

The next step is to visit the Data Transport Engine (DTE) available in DatAdvantage (from the Tools menu). It allows you to create a rule that will search for files to archive and, if necessary, delete.

In my case, my rule’s search criteria mirror the filters used in generating the report. The rule does the real heavy lifting of removing the stale, sensitive data.

Since the rule is saved, it can be rerun to enforce the retention limits. Even better, DTE can automatically run the rule on a periodic basis, so you never have to worry about stale sensitive data in your file system.
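
If you wanted to approximate this by hand in PowerShell — hypothetical paths and thresholds, and none of DTE’s safeguards — it might look something like this:

# DIY approximation of a stale-sensitive-data rule (illustrative only)
$threshold = (Get-Date).AddYears(-1)
Get-ChildItem C:\Share -Recurse -File |
    Where-Object { $_.LastAccessTime -lt $threshold -and
                   (Select-String -Path $_.FullName -Pattern '\b\d{3}-\d{2}-\d{4}\b' -Quiet) } |
    Move-Item -Destination D:\Archive -WhatIf    # -WhatIf previews the move before you commit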

Requirements to implement data retention policies can be found in the following security standards and regulations:

  • NIST 800-53: SI-12
  • PCI DSS 3.x: 3.1
  • CIS Critical Security Controls: 14.7
  • New York State DFS Cybersecurity Regulations: 500.13
  • EU General Data Protection Regulation (GDPR): Article 25.2

Detecting and Monitoring

Following the order of the NIST higher-level security control categories from the first post, we now arrive at our final destination in this series, Detect.

No data security strategy is foolproof, so you need a secondary defense based on detection and monitoring controls: effectively you’re watching the system and looking for unusual activities.

Varonis, and specifically DatAlert, has a unique role in detection because its underlying security platform is based on monitoring file system activities.

By now everyone knows (or should know) that phishing and injection attacks allow hackers to get around network defenses as they borrow existing users’ credentials, and fully-undetectable (FUD) malware means they can avoid detection by virus scanners.

So how do you detect the new generation of stealthy attackers?

No attacker can avoid using the file system to load their software, copy files, and crawl a directory hierarchy looking for sensitive data to exfiltrate.  If you can spot their unique file activity patterns, then you can stop them before they remove or exfiltrate the data.

We can’t cover all of DatAlert’s capabilities in this post — probably a good topic for a separate series! — but since it has deep insight into all file system information and events, and histories of user behaviors, it’s in a powerful position to determine what’s out of the normal range for a user account.

We call this user behavior analytics or UBA, and DatAlert comes bundled with a suite of UBA threat models (below).  You’re free to add your own, of course, but the pre-defined models are quite powerful as is. They include detecting crypto intrusions, ransomware activity, unusual user access to sensitive data, unusual access to files containing credentials, and more.

All the alerts that are triggered can be tracked from the DatAlert Dashboard.  IT staff can either intervene and respond manually or even set up scripts to run automatically — for example, automatically disable accounts.
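
A minimal sketch of such a response script — assuming the RSAT ActiveDirectory module and a hypothetical log path; a real deployment would be wired into the alerting system’s own hand-off mechanism:

# disable-account.ps1 -- hypothetical auto-response hook for an alert
param([string]$SamAccountName)    # account name handed in by the alerting system
Import-Module ActiveDirectory
Disable-ADAccount -Identity $SamAccountName
Add-Content C:\Logs\ir-actions.log "$(Get-Date -Format o) disabled $SamAccountName"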

If a specific data security law or regulation requires a breach notification to be sent to an authority, DatAlert can provide some of the information that’s typically required — files that were accessed, types of data, etc.

Let’s close out this post with a final list of detection and response controls in data standards and laws that DatAlert can help support:

  • NIST 800-53: SI-4, AU-13, IR-4
  • PCI DSS 3.x: 10.1, 10.2, 10.6
  • CIS Critical Security Controls: 5.1, 6.4, 8.1
  • HIPAA: 45 CFR 164.400-164.414
  • ISO 27001: A.16.1.1, A.16.1.4
  • New York State DFS Cybersecurity Regulations: 500.02, 500.16, 500.27
  • EU General Data Protection Regulation (GDPR): Article 33, 34
  • Most US states have breach notification rules

Data Security Compliance and DatAdvantage, Part I:  Essential Reports for Risk Assessment

This article is part of the series "Data Security Compliance and DatAdvantage". Check out the rest:

Over the last few years, I’ve written about many different data security standards, data laws, and regulations. So I feel comfortable in saying there are some similarities in the EU’s General Data Protection Regulation, the US’s HIPAA rules, PCI DSS, NIST’s 800 family of controls and others as well.

I’m really standing on the shoulders of giants — in particular, the friendly security standards folks over at the National Institute of Standards and Technology (NIST) — in understanding the inter-connectedness. They’re the go-to people for our government’s own data security standards: for both internal agencies (NIST 800-53) and outside contractors (NIST 800-171). And through its voluntary Critical Infrastructure Security Framework, NIST is also influencing data security ideas in the private sector.

One of their big ideas is to divide security controls, which every standard and regulation has in one form or another, into five functional areas: Identify, Protect, Detect, Respond, and Recover. In short, give me a data standard and you can map their controls into one of these categories.

The NIST big picture view of security controls.

The idea of commonality led me to start this series of posts about how our own products, principally Varonis DatAdvantage, though not targeted at any specific data standard or law, can in fact help meet many of the key controls and legal requirements. The out-of-the-box reporting feature in DatAdvantage is a great place to start to see how all this works.

In this first blog post, we’ll focus on DA reporting functions that roughly cover the identify category. This is a fairly large area in itself, taking in asset identification, governance, and risk assessment.

Assets: Users, Files, and More

For DatAdvantage, users, groups, and folders are the raw building blocks used in all its reporting. However, if you want to view pure file system asset information, you can go to the following three key reports in DatAdvantage.

The 3a report gives IT staff a listing of Active Directory group membership. For starters, you could run the report on the all-encompassing Domain Users group to get a global user list (below). You can also populate the report with any AD property associated with a user (email, managers, department, location, etc.)

For folders, report 3f provides access paths, size, number of subfolders, and the share path.

Beyond a vanilla list of folders, IT security staff usually wants to dig a little deeper into the file structure in order to identify sensitive or critical data. What is critical will vary by organization, but generally they’re looking for personally identifiable information (PII), such as social security numbers, email addresses, and account numbers, as well as intellectual property (proprietary code, important legal documents, sales lists).

With DatAdvantage’s 4g report, Varonis lets security staff zoom into folders containing sensitive PII data, which is often scattered across huge corporate file systems. Behind the scenes, the Varonis classification engine has scanned files using PII filters for different laws and regulations, and rated the files based on the number of hits — for example, number of US social security numbers or Canadian driver’s license numbers.

The 4g report lists these sensitive files from highest to lowest “hit” count. By the way, this is the report our customers often run first and find very eye-opening — especially if they were under the impression that there’s ‘no way millions of credit card numbers could be found in plaintext’.
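
As a crude stand-in for that kind of hit-count ranking, here’s a PowerShell sketch using a US social security number pattern and a placeholder share path — a toy next to a real classification engine:

# Rank files by number of SSN-like matches, highest first (illustrative)
Get-ChildItem \\fs1\corp -Recurse -Include *.txt,*.csv |
    ForEach-Object {
        $hits = (Select-String -Path $_.FullName -Pattern '\b\d{3}-\d{2}-\d{4}\b' -AllMatches).Matches.Count
        [pscustomobject]@{ File = $_.FullName; Hits = $hits }
    } |
    Where-Object Hits -gt 0 |
    Sort-Object Hits -Descending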

Assessing the Risks

We’ve just seen how to view nuts-and-bolts asset information, but the larger point is to use the file asset inventory to help security pros discover where an organization’s particular risks are located.

In other words, it’s the beginning of a formal risk assessment.

Of course, the other major part of assessment is to look (continuously) at the threat environment and then be on the hunt for specific vulnerabilities and exploits. We’ll get to that in a future post.

Now let’s use DatAdvantage for risk assessments, starting with users.

Stale user accounts are an overlooked scenario with lots of potential risk. Essentially, user accounts are often not disabled or removed when an employee leaves the company or a contractor’s temporary assignment is over.

For the proverbial disgruntled employee, it’s not unusual for this former insider to still have access to his account. Or for hackers to gain access to a no-longer-used third-party contractor’s account and then leverage that to hop into their real target.

In DatAdvantage’s 3a report, we can produce a list of stale user accounts based on the last logon time that’s maintained by Active Directory.
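
Outside of DatAdvantage, you could approximate this with the ActiveDirectory PowerShell module — the 180-day threshold here is just an example:

# Find enabled accounts that haven't logged on in 180 days (illustrative threshold)
Import-Module ActiveDirectory
$cutoff = (Get-Date).AddDays(-180)
Get-ADUser -Filter 'Enabled -eq $true' -Properties LastLogonDate |
    Where-Object { $_.LastLogonDate -lt $cutoff } |
    Select-Object Name, SamAccountName, LastLogonDate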

The sensitive data report that we saw earlier is the basis for another risk assessment report. We just have to filter on folders that have “everyone” permissions.

Security pros know from the current threat environment that phishing or SQL injection attacks allow an outsider to get the credentials of an insider. With no special permissions, a hacker would then have automatic access to folders with global permissions.

Therefore, there’s significant risk in having sensitive data in these open folders (assuming there are no other compensating controls).

DatAdvantage’s 12L report nicely shows where these folders are.

Let’s take a breath.

In the next post, we’ll continue our journey through DatAdvantage by finishing up with the risk assessment area and then focusing on the Protect and Defend categories.

For those compliance-oriented IT pros and other legal-istas, here’s a short list of regulations and standards (based on our customers requests) that the above reports help support:

  • NIST 800-53: IA-2,CM-8
  • NIST 800-171: 3.5.1
  • HIPAA:  45 CFR 164.308(a)(1)(ii)(A)
  • GLBA: FTC Safeguards Rule (16 CFR 314.4)
  • PCI DSS 3.x: 12.2
  • ISO 27001: A.7.1.1
  • New York State DFS Cybersecurity Regulations: 500.02
  • EU GDPR: Security of Processing (Article 32) and Impact Assessments (Article 35)
Continue reading the next post in "Data Security Compliance and DatAdvantage"

G’Day, Australia Approves Breach Notification Rule

Last month, Australia finally amended its Privacy Act to now require breach notification. This proposed legislative change has been kicking around the Federal Government for a few years. Our attorney friends at Hogan Lovells have a nice summary of the new rule.

The good news here is that Australia defines a breach broadly enough to include both unauthorized disclosure and access of personal information. Like the GDPR, Australia also considers personal data to be any information about an identified individual or that can be reasonably linked to an individual.

In real-world terms, this means that if hackers steal phone numbers, bank account data, or medical records, or if malware such as ransomware merely accesses this information, then it's considered a breach.

So far, so good.

There’s a ‘But’

However, the new Australian requirement has a harm threshold that also has to be met for the breach to be reportable. This is not in itself unusual: we've seen similar harm thresholds in US states' breach notification laws, and even in the EU's GDPR and NIS Directive.

In the Australian case, the language used is that the breach is "likely to result in serious harm." While not explicitly stated, the surrounding context in the amendment suggests the breach would have to cause serious physical, psychological, emotional, economic, reputational, or financial harm, or another effect that a "reasonable" person would consider serious.

By the way, this is also similar to what’s in the GDPR’s preamble.

The Australian breach notification rule, though, goes further with explicit remediation exceptions that give the covered entities – private sector companies, government agencies, and health care providers – even more wiggle room. If the breached entity can show that it has taken remedial action on the disclosure or access before it results in serious harm, then it doesn't have to report the breach.

I suppose you could come up with scenarios where there's been, say, limited exposure of passwords from a health insurance company's website; the company freezes the relevant user accounts and then instructs affected individuals to contact it about resetting passwords. That might count as successful remediation.

You can see what the Australian regulators were getting at. By the way, I don’t think this rule is as “floppy” as one publication called the notification criteria. But it does give the covered entities something of a second chance.

Anyway, if there’s a harmful breach event, then Australian organizations will have to notify the regulators as soon as possible after discovery. They’ll need to provide them with breach details, including the information accessed, as well as steps affected individuals should take.

The Australian breach notification rule is set to go into effect in a few weeks, and there will be a one-year grace period from that point. Failure to comply can result in investigations, forced remedial actions, and fines or compensation payments.

Cybersecurity Laws Get Serious: EU’s NIS Directive

In the IOS blog, our cyberattack focus has mostly been on hackers stealing PII and other sensitive personal data. The breach notification laws and regulations that we write about require notification only when there’s been acquisition or disclosure of PII by an unauthorized user. In plain speak, the data is stolen.

These data laws, though, fall short in two significant ways.

One, the hackers can potentially take data that’s not covered by the law: non-PII that can include corporate IP, sensitive emails from the CEO, and other valuable proprietary information. Two, the attackers are not interested in taking data but rather in disruption: for example, deploying DoS attacks or destroying important system or other non-PII data.

Under the US's HIPAA, GLBA, and state breach laws, as well as the EU's GDPR, neither of the two cases above (and that takes in a lot of territory) would trigger a notification to the appropriate government authority.

The problem is that data privacy and security laws focus, naturally, on the data rather than the information system as a whole. That doesn't mean, however, that governments aren't addressing this broader category of cybersecurity.

There’s not been nearly enough attention paid to the EU’s Network and Information Security (NIS) Directive, the US’s (for now) voluntary Critical Infrastructure Security Framework, Canada’s cybersecurity initiatives, and other laws in major EU countries.

And that's my motivation in writing this first in a series of posts on cybersecurity rules. These are important rules that organizations should be more aware of. Sometime soon, it won't be good enough, legally speaking, to protect special classes of data. Companies will be required to protect entire IT systems and report to regulatory authorities when there have been actions to disrupt or disable the IT infrastructure.

Protecting the Cyber

The laws and guidelines that have evolved in this area are associated with safeguarding critical infrastructure – telecom, financial, medical, chemical, and transportation. The reasoning is that cybercrime against the IT network of, say, the Hoover Dam or the Federal Reserve should be treated differently than an attack against a dating website.

Not that an attack against any IT system isn’t a serious and potentially costly act. But with critical infrastructure, where there isn’t an obvious financial motivation, we start entering the realm of cyber espionage or cyber disruption initiated by governments.

In other words, bank ATMs suddenly not dispensing cash, the cell phone network dropping calls, or – heaven help us! – Google replying with wrong and deceptive answers may be a sign of a cyberwar or at least a cyber ambush.

A few months back, we wrote about an interview between Charlie Rose and John Carlin, the former Assistant Attorney General in the National Security Division of the Department of Justice. The transcript can be found here, and it’s worth going through it, or at least searching on the “attribution” keyword.

Essentially, Carlin tells us that US law enforcement is getting far better at learning who is behind cyberattacks. The Department of Justice is now publicly naming the attackers and then prosecuting them. By the way, Carlin went after Iranian hackers accused of intrusions into banks and a small dam near New York City. Fortunately, the dam's valves were still manually operated and not connected to the Internet.

Carlin believes there are important advantages in going public with a prosecution against named individuals. He sees it as a way to deter future cyber incidents. As he puts it, "because if you are going to be able to deter, you've got to make sure the world knows we can figure out who did it."

So it would make enormous sense to require companies to report cyberattacks to governmental agencies, who can then put the pieces together and formally take legal and other actions against the perps.

First Stop: EU’s NIS Directive

As with the Data Protection Directive for data privacy, which was adopted in 1995, the EU has again been way ahead of other countries in formalizing cyber reporting legislation. Its Network and Information Systems Directive was initially drafted in 2013 and was approved by the EU last July.

Since it is a directive, individual EU countries will have to transpose NIS into their own national laws. They'll have a two-year transition period to get their houses in order, plus an additional six months to identify the companies providing essential services (see Annex II of the directive).

In Article 14, operators of essential services are required to take “appropriate and proportionate technical and organisational measures to manage the risks posed to the security of network and information systems.”  They are also required to report, without undue delay, significant incidents to a Computer Security Incident Response Team or CSIRT.

There’s separate and similar language in Article 16 covering digital service providers, which is the EU’s way of saying ecommerce, cloud computing, and search services.

CSIRTs are at the center of the NIS Directive. Besides collecting incident data, CSIRTs are also responsible for monitoring and analyzing threat activity at a national level, issuing alerts and warnings, and sharing their information and threat awareness with other CSIRTs.  (In the US, the closest equivalent is the Department of Homeland Security’s NCCIC.)

What is considered an incident in the NIS Directive?

It is any “event having an actual adverse effect on the security of network and information systems.”  Companies designated as providing essential services are given some wiggle room in what they have to report to a CSIRT. For an incident to be significant, and thus reportable, the company has to consider the number of users affected, the duration, and the geographical scope.

Digital service providers must also take into account the effect of a disruption on economic and "societal activities".

Does this mean that a future attack against, say, Facebook in the EU, in which Messenger or status-posting activity is disrupted, would have to be reported?

To this non-attorney blogger, it appears that Facebooking could be considered an important societal activity.

Yeah, there are vagaries in the NIS Directive, and it will require more guidance from the regulators.

In my next post in this series, I'll take a closer look at the cybersecurity rules due north of us, for our Canadian neighbors.