Tag Archives: eu data protection regulation

GDPR: Troy Hunt Explains it All in Video Course

You're a high-level IT security person who's done the grunt work of keeping your company compliant with PCI DSS, ISO 27001, and a few other security abbreviations, and one day you're in a meeting with the CEO, CSO, and CIO. When the subject of the General Data Protection Regulation, or GDPR, comes up, all the Cs agree that there are some difficulties, but everything will be worked out.

You are too afraid to ask, “What is the GDPR?”

Too Busy for GDPR

We’ve all been there, of course. Your plate has been full over the last few weeks and months hunting down vulnerabilities, hardening defenses against ransomware and other malware, upgrading your security, along with all the usual work involved in keeping the IT systems humming along.

So it’s understandable that the General Data Protection Regulation may have flown under your radar.

However, there’s no need to panic.

The GDPR shares many similarities with other security standards and regulations, so it's just a question of learning some basic background, the key requirements of the new EU law, and a few gotchas, preferably explained by an instructor with a knack for connecting with IT people.

Hunt on GDPR

And that's why we engaged Troy Hunt to develop a 7-part video course on the GDPR. Troy is a web security guru, Australian Microsoft Regional Director, and author whose security writing has appeared in Forbes, Time Magazine, and Mashable. And he's no stranger to this blog either!

Let's get back to you and other busy IT security folks who need to get up to speed quickly. With just an hour of your time, Troy will cover the basic vocabulary and definitions ("controller", "processor", "personal data"), the key concept underlying the GDPR (personal data is effectively owned by the consumer), and what you'll need to do to keep your organization compliant (effectively, minimize and monitor this personal data).

By the way, Troy also explains how US companies, even those without EU offices, can get snagged by the GDPR's territorial scope rule, Article 3 to be exact. US-based e-commerce companies: you've been warned!

While Troy doesn't expect you to be an attorney, he analyzes and breaks down a few of the more critical requirements and the penalties for not complying, particularly on breach reporting, so that you'll be able to keep up with the legalese when it arises at your next GDPR meeting.

And I think you’ll see by the end of the course that while there may be some new aspects to this EU law, as Troy notes, the GDPR really legislates IT common sense.

What are you waiting for? Register and get GDPR-aware starting today!

[Transcript] Interview With GDPR Attorney Sue Foster

Over two podcasts, attorney Sue Foster dispensed incredibly valuable GDPR wisdom. If you've already listened, you know these are the kinds of insights that would otherwise have required a lengthy Google expedition, followed by chatting with your cousin Vinny the lawyer. We don't recommend that!

In reviewing the transcript below, I think there are three points worth commenting on. One, the GDPR's breach reporting rule may appear to give organizations some wiggle room. But in fact that's not the case! The reference to "rights and freedoms of natural persons" refers to explicit privacy and property rights spelled out in the EU Charter. This ain't vague language.

However, there is some leeway in reporting within the 72-hour time frame. In short: you have to make a good effort, but you can delay if, say, you’re currently investigating and need more time because otherwise you’d compromise the investigation.

Two, the territorial scope requirements in Article 3 are complicated by what it means to target EU citizens in your marketing. The very tricky part is when you're a multinational company that has both an EU and a non-EU presence. If you read closely, Foster is suggesting that EU citizens who happen to find their way to, say, your US website would not be protected by the GDPR.

In other words, if the company’s general marketing doesn’t target EU citizens, then the information collected is not under GDPR protections. But that would not apply to a company’s localized web content for, say, the French or German markets — information submitted through those sites would of course be under the GDPR.

Yes, I will confirm this with Foster. But if this is not the case for multinationals, then it would cause a pretty large mal de tête.

Three, GDPR compliance is based on, as Foster notes, a "show your work" principle, the same as on math tests in high school. It is not like PCI DSS, where you're going down a checklist: Two-factor authentication? Yes. Vulnerability scanning? Yes, etc.

The larger issue is that security technology will change and so what worked well in the past will likely not hold up in the future. With GDPR, you should be able to justify your security plan based on the current state of security technology and document what you’ve done.

Enough said.

Inside Out Security
Sue Foster is a partner with Mintz Levin, based out of the London office. She works with clients on European data protection compliance and on commercial matters in the fields of clean tech, high tech, mobile media, and life sciences. She's a graduate of Stanford Law School. Foster is also, and we like this here at Varonis, a Certified Information Privacy Professional.

I’m very excited to be talking to an attorney with a CIPP, and with direct experience on a compliance topic we cover on our blog — the General Data Protection Regulation, or GDPR.

Welcome, Susan.

Sue Foster
Hi Andy. Thank you very much for inviting me to join you today. There’s a lot going on in Europe around cybersecurity and data protection these days, so it’s a fantastic set of topics.
IOS
Oh terrific. So what are some of the concerns you’re hearing from your clients on GDPR?
SF
So one of the big concerns is getting to grips with the extra-territorial reach. I work with a number of companies that don’t have any office or other kind of presence in Europe that would qualify them as being established in Europe.

But they are offering goods or services to people in Europe. And for these companies, you know, in the past they've had to go through quite a bit of analysis to understand whether the Data Protection Directive applies to them. Under the GDPR, it's a lot clearer, and there are rules that are easier for people to understand and follow.

So now when I speak to my U.S. clients, if they’re a non-resident company that promotes goods or services in the EU, including free services like a free app, for example, they’ll be subject to the GDPR. That’s very clear.

Also, if a non-resident company is monitoring the behavior of people who are located in the EU, including tracking and profiling people based on their internet or device usage, or making automated decisions about people based on their personal data, the company is subject to the GDPR.

It’s also really important for U.S. companies to understand that there’s a new ePrivacy Regulation in draft form that would cover any provider, regardless of location, of any form of publicly available electronic communication services to EU users.

Under this ePrivacy Regulation, the notion of what these communication services providers are is expanded from the current rules, and it includes things that are called over-the-top applications – so messaging apps and communications features, even when a communication feature is just something that is embedded in a website.

If it’s available to the public and enables communication, even in a very limited sort of forum, it’s going to be covered. That’s another area where U.S. companies are getting to grips with the fact that European rules will apply to them.

So there's this new security regulation as well that may apply to companies located outside the EU. All of these things are combining to suddenly force a lot of U.S. companies to get to grips with European law.

IOS
So just to clarify, let's say a small U.S. social media company that doesn't market specifically to EU countries and doesn't have a website in the language of some EU country, they would or would not fall under the GDPR?
SF
On the basis of their [overall] marketing activity they wouldn't. But we would need to understand whether they're profiling or tracking EU users, perhaps through viral marketing that's been going on, right? And they are just tracking everybody, and they know that they're tracking people in the EU. Then they're going to be caught.

But if they're not doing that, if they're not engaging in any kind of tracking, profiling, or monitoring activities, and they're not affirmatively marketing into the EU, then they're outside of the scope. Unless, of course, they're offering some kind of service that falls under one of these other regulations that we were talking about.

IOS
What we're hearing from our customers is that the 72-hour breach reporting rule is a concern. And our customers are confused, and after looking at some of the fine print, we are as well! So I'm wondering if you could explain the breach reporting in terms of thresholds: what needs to happen before a report is made to the DPAs and consumers?
SF
Sure, absolutely. So first it's important to look at the specific definition of a personal data breach. It means a breach of security leading to the 'accidental or unlawful destruction, loss, alteration, unauthorized disclosure of or access to personal data'. So it's fairly broad.

The requirement to report these incidents has a number of caveats. So you have to report the breach to the Data Protection Authority as soon as possible, and where feasible, no later than 72 hours after becoming aware of the breach.

Then there’s a set of exceptions. And that is unless the personal data breach is unlikely to result in a risk to the rights and freedoms of natural persons. So I can understand why U.S. companies would sort of look at this and say, ‘I don’t really know what that means’. How do I know if a breach is likely to ‘result in a risk to the rights and freedoms of natural persons’?

Because that’s not defined anywhere in this regulation!

It’s important to understand that that little bit of text is EU-speak that really refers to the Charter of Fundamental Rights of the European Union, which is part of EU law.

There is actually a document you can look at to tell you what these rights and freedoms are. But you can think of it basically in common sense terms. Are the person’s privacy rights affected, are their rights and the integrity of their communications affected, or is their property affected?

So you could, for example, say that there’s a breach that isn’t likely to reveal information that I would consider personally compromising in a privacy perspective, but it could lead to fraud, right? So that could affect my property rights. So that would be one of those issues. Basically, most of the time you’re going to have to report the breach.

When you’re going through the process of working out whether you need to report the breach to the DPA, and you’re considering whether or not the breach is likely to result in a risk to the rights and freedoms of natural persons, one of the things that you can look at is whether people are practically protected.

Or whether there’s a minimal risk because of steps you’ve already taken such as encrypting data or pseudonymizing data and you know that the key that would allow re-identification of the subjects hasn’t been compromised.

So these are some of the things that you can think about when determining whether or not you need to report to the Data Protection Authority.

If you decide you have to report, you then need to think about ‘do you need to report the breach to the data subjects’, right?

And the standard there is that it has to be a 'high risk to the rights and freedoms' of natural persons. So a high risk to someone's privacy rights or rights in their property and things of that sort.

And again, you can look at the steps that you’ve taken to either prevent the data from — you know before it even was leaked — prevent it from being potentially vulnerable in a format where people could be damaged. Or you could think also whether you’ve taken steps after the breach that would prevent those kinds of risks from happening.

Now, of course, the problem is the risk of getting it wrong, right?

If you decide that you're not going to report after you go through this full analysis and the DPA disagrees with you, now you're running the risk of a fine of up to 2% of the group's global turnover, or gross revenue around the world.

And I think that's going to lead to a lot of companies being cautious and reporting even when they might have been able to take advantage of some of these exceptions, because they won't feel comfortable relying on them.

IOS
I see. So just to bring it to more practical terms: we can assume that, let's say, credit card numbers or some other identification number, if that was breached or taken, would have to be reported both to the DPA and the consumer?
SF
Most likely. I mean if it’s…yeah almost certainly. Particularly if the security code on the back of the card has been compromised, and absolutely you’ve got a pretty urgent situation. You also have a responsibility to basically provide a risk assessment to the individuals, and advise them on steps that they can take to protect themselves such as canceling their card immediately.
IOS
One hypothetical that I wanted to ask you about is the Yahoo breach, which technically happened a few years ago. I think it was over two years ago … Let’s say something like that had happened after the GDPR where a company sort of had known that there was something happening that looked like a breach, but they didn’t know the extent of it.

If they had not reported it, and waited until after the 72-hour rule, what would have happened to let’s say a multinational like Yahoo?

SF
Well, Yahoo would need to go through the same analysis. Consider a breach on that scale, the level of access that was provided to the Yahoo users' accounts as a result of those breaches, and of course the fact that it's very common for individuals to reuse passwords across different sites, so you have, you know, the risk of follow-on problems.

It’s hard to imagine they would be in a situation where they would be off the hook for reporting.

Now the 72-hour rule is not hard and fast.

But the idea is you report as soon as possible. So you can delay for a little while if it’s necessary for say a law enforcement investigation, right? That’s one possibility.

Or if you’re doing your own internal investigation and somehow that would be compromised or taking security measures would be compromised in some way by reporting it to the DPA. But that’ll be pretty rare.

Obviously going along for months and months with not reporting it would be beyond the pale. And I would say a company like Yahoo would potentially be facing a fine of 2% of its worldwide revenue!

IOS
So this is really serious business, especially for multinationals.

This is also a breach reporting related question, and it has to do with ransomware. We’re seeing a lot of ransomware attacks these days. In fact, when we visit customer sites and analyze their systems, we sometimes see these attacks happening in real time. Since a ransomware attack encrypts the file data but most of the time doesn’t actually take the data or the personal data, would that breach have to be reported or not?

SF
This is a really interesting question! I think the by-the-book answer is, technically, if a ransomware attack doesn’t lead to the accidental or unlawful destruction, loss, or alteration or unauthorized disclosure of or access to the personal data, it doesn’t actually fall under the GDPR’s definition of a personal data breach, right?

So, if a company is subject to an attack that prevents it from accessing its data, but the intruder cannot itself access, change, or destroy the data, you could argue it's not a personal data breach, and therefore not reportable.

But it sure feels like one, doesn’t it?

IOS
Yes, it does!
SF
Yeah. I suspect we’re going to find that the new European Data Protection Board will issue guidance that somehow brings ransomware attacks into the fold of what’s reportable. Don’t know that for sure, but it seems likely to me that they’ll find a way to do that.

Now, there are two important caveats.

Even though, technically, a ransomware attack may not be reportable, companies should remember that a ransomware attack could cause them to be in breach of other requirements of the GDPR, like the obligation to ensure data integrity and accessibility of the data.

Because by definition, you know, the ransomware attack has made the data inaccessible and has totally corrupted its integrity. So, there could be a liability there under the GDPR.

And also, the company that’s suffering the ransomware attack should consider whether they’re subject to the new Network and Information Security Directive, which is going to be implemented in national laws by May 9th of 2018. So again, May 2018 being a real critical time period. That directive requires service providers to notify the relevant authority when there’s been a breach that has a substantial impact on the services, even if there was no GDPR personal data breach.

And the Network and Information Security Directive applies to a wide range of companies, including those that provide “essential services”. Sort of the fundamentals that drive the modern economy: energy, transportation, financial services.

But also, it applies to digital service providers, and that would include cloud computing service providers.

You know, there could be quite a few companies that are being held up by ransomware attacks who are in the cloud space, and they’ll need to think about their obligations to report even if there’s maybe not a GDPR reporting requirement.

IOS
Right, interesting. Okay. As a security company, we've been preaching Privacy by Design principles, data minimization and retention limits, and in the GDPR it's now actually part of the law.

The GDPR is not very specific about what has to be done to meet these Privacy by Design ideas, so do you have an idea what the regulators might say about PbD as they issue more detailed guidelines?

SF
They’ll probably tell us more about the process but not give us a lot of insight as to specific requirements, and that’s partly because the GDPR itself is very much a show-your-work regulation.

You might remember back on old, old math tests, right? When you were told, 'Look, you might not get the right answer, but show all of your work in that calculus problem and you might get some partial credit.'

And it’s a little bit like that. The GDPR is a lot about process!

So, the push for Privacy by Design is not to say that there are specific requirements other than paying attention to whatever the state of the art is at the time. So, really looking at the available privacy solutions at the time and thinking about what you can do. But a lot of it is about just making sure you’ve got internal processes for analyzing privacy risks and thinking about privacy solutions.

And for that reason, I think we’re just going to get guidance that stresses that, develops that idea.

But any guidance that told people specifically what security technologies they needed to apply would probably be good for, you know, 12 or 18 months, and then something new would come along.

Where we might see some help is, eventually, in terms of ISO standards. Maybe there’ll be an opportunity in the future for something that comes along that’s an international standard, that talks about the process that companies go through to design privacy into services and devices, etc. Maybe then we’ll have a little more certainty about it.

But for now, and I think for the foreseeable future, it’s going to be about showing your work, making sure you’ve engaged, and that you’ve documented your engagement, so that if something does go wrong, at least you can show what you did.

IOS
That's very interesting, and a good thing to know. One last question: we've been following some of the security problems related to Internet of Things devices, which are gadgets on the consumer market that can include internet-connected coffee pots, cameras, and children's toys.

We've learned from talking to testing experts that vendors are not really interested in PbD. It's ship first, maybe fix security bugs later. Any thoughts on how the GDPR will affect IoT vendors?

SF
It will definitely have an impact. The definition of personal data under the GDPR is very, very broad. So, effectively, anything that I am saying that a device picks up is my personal data, as well as data kind of about me, right?

So, if you think about a device that knows my shopping habits that I can speak to and I can order things, everything that the device hears is effectively my personal data under the European rules.

And Internet of Things vendors do seem to be lagging behind in Privacy by Design. I suspect we're going to see investigations and fines in this area early on, when the GDPR starts being enforced in May 2018.

Because the stories about the security risks of, say, children’s toys have really caught the attention of the media and the public, and the regulators won’t be far behind.

And now, we have fines for breaches that range from 2% to 4% of a group’s global turnover. It’s an area that is ripe for enforcement activity, and I think it may be a surprise to quite a few companies in this space.

It’s also really important to go back to this important theme that there are other regulations, besides the GDPR itself, to keep track of in Europe. The new ePrivacy Regulation contains some provisions targeted at the internet of things, such as the requirement to get consent from consumers from machine-to-machine transfers of communications data, which is going to be very cumbersome.

The [ePrivacy] Regulation says you have to do it, it doesn’t really say how you’re going to get consent, meaningful consent, that’s a very high standard in Europe, to these transfers when there’s no real intelligent interface between the device and the person, the consumer who’s using it. Because there are some things that have, maybe kind of a web dashboard. There’s some kind of app that you use and you communicate with your device, you could have privacy settings.

There's other stuff that's much more behind the scenes with the Internet of Things, where the user is not having a high level of engagement. So, maybe a smart refrigerator that's relaying information about energy consumption to, you know, the grid. Even there, you know, there's potentially information where the user is going to have to give consent to the transfer.

And it’s hard to kind of imagine exactly what that interface is going to look like!

I'll mention one thing about the ePrivacy Regulation. It's in draft form. It could change, and that's important to know. It's not likely to change all that much, and it's on a fast-track timeline because the Commission would like to have it in place and ready to go in May 2018, the same time as the GDPR.

IOS
 Sue Foster, I’d like to thank you again for your time.
SF
You’re very welcome. Thank you very much for inviting me to join you today.

[Podcast] Mintz Levin's Sue Foster on the GDPR, Part II

In this second part of our interview with attorney and GDPR pro Sue Foster, we get into a cyber topic that’s been on everyone’s mind lately: ransomware.

A ransomware attack on EU personal data is unquestionably a breach —  “accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access  …”

But would it be reportable under the GDPR, which goes into effect next year?

In other words, would an EU company (or US one as well) have to notify a DPA and affected customers within the 72-hour window after being attacked by, say, WannaCry?

If you go by the language of the law, the answer is a definite …  no!

Foster explains that for it to be reportable, a breach has to cause a risk “to the rights and freedoms of natural persons.”  For what this legalese really means, you’ll just have to listen to the podcast. (Hint: it refers to a fundamental document of the EU.)

Anyway, personal data that's encrypted by ransomware and not taken off premises is not much of a risk for anybody. There are still more subtleties involving ransomware and other EU data laws that I think are best explained by her, so you'll just have to listen to Sue's legal advice directly!

There’s also very interesting analysis by Foster on the implications of the GDPR for Internet-of-Things gadget makers.

[Podcast] Mintz Levin's Sue Foster on the GDPR, Part I

Sue Foster is a London-based partner at Mintz Levin. She has a gift for explaining the subtleties in the EU General Data Protection Regulation (GDPR). In this first part of our interview, Foster discusses how the GDPR’s new extraterritoriality rule would place US companies under the law’s data obligations.

In the blog, we’ve written about some of the implications of the GDPR’s Article 3, which covers the law’s territorial scope. In short: if you market online to EU consumers — web copy, say, in the language of some EU country  — then you’ll fall under the GDPR. And this also means you would have to report data exposures under the GDPR’s new 72-hour breach rule.

Foster points out that if a US company happens to attract EU consumers through their overall marketing, they would not fall under the law.

So a cheddar cheese producer from Wisconsin whose website gets the attention and business of French-based fromage lovers is not required to protect their data at the level of the GDPR.

There's another snag for US companies: an update to the EU's ePrivacy Directive, which places restrictions on embedded communication services. Foster explains how companies, not necessarily ISPs, that provide messaging — that means you, WhatsApp, Skype, and Gmail — would fall under this law's privacy rules.

Sue’s insights on these and other topics will be relevant to both corporate privacy officers and IT security folks.

Data Security Compliance and DatAdvantage, Part III: Protect and Monitor

At the end of the previous post, we took up the nuts-and-bolts issues of protecting sensitive data in an organization's file system. One popular approach, the least-privileged access model, is often explicitly mentioned in compliance standards, such as NIST 800-53 or PCI DSS. Varonis DatAdvantage and DataPrivilege provide a convenient way to accomplish this.

Ownership Management

Let’s start with DatAdvantage. We saw last time that DA provides graphical support for helping to identify data ownership.

If you want to get more granular than just seeing who’s been accessing a folder, you can view the actual access statistics of the top users with the Statistics tab (below).

This is a great help in understanding who is really using the folder. The ultimate goal is to find the true users, and remove extraneous groups and users who perhaps needed occasional access but not as part of their job role.

The key point is to first determine the folder's owner — the one who has the real knowledge and wisdom of what the folder is all about. This may require some legwork on IT's part in talking to the users, based on the DatAdvantage stats, and working out the real chain of command.

Once you use DatAdvantage to set the folder owners (below), these more informed power users, as we’ll see, can independently manage who gets access and whose access should be removed. The folder owner will also automatically receive DatAdvantage reports, which will help guide them in making future access decisions.

There's another important point to make before we move on. IT has long been responsible for provisioning access without knowing the business purpose. Varonis DatAdvantage assists IT in finding these owners and then giving them the access-granting powers.

Anyway, once the owner has done the housekeeping of paring and removing unnecessary folder groups, they'll then want to put in place a process for permission management. Data standards and laws recognize the importance of having security policies and procedures as part of an ongoing program – i.e., not something an owner does once a year.

And Varonis has an important part to play here.

Maintaining Least-Privileged Access

How do ordinary users, whose job role now requires them to access a managed folder, request permission from the owner?

This is where Varonis DataPrivilege makes an appearance. Regular users will need to bring this interface up (below) to formally request access to a managed folder.

The owner of the folder has a parallel interface from which to receive these requests and then grant or revoke permissions.

As I mentioned above, these security ideas of least-privileged access and permission management are often explicitly part of compliance standards and data security laws. Building on my list from the previous post, here's a more complete enumeration of controls that Varonis DatAdvantage supports:

  • NIST 800-53: AC-2, AC-3, AC-5, CM-5
  • NIST 800-171: 3.1.4, 3.1.5, 3.4.5
  • PCI DSS 3.x: 7.1, 7.2
  • HIPAA: 45 CFR 164.312 a(1), 164.308a(4)
  • ISO 27001: A.6.1.2, A.9.1.2, A.9.2.3, A.11.2.2
  • CIS Critical Security Controls: 14.4
  • New York State DFS Cybersecurity Regulations: 500.07

Stale Sensitive Data

Minimization is an important theme in security standards and laws. These ideas are best represented in the principles of Privacy by Design (PbD), which has good overall advice on this subject: minimize the sensitive data you collect, minimize who gets to see it, and minimize how long you keep it.

Let’s address the last point, which goes under the more familiar name of data retention. One low-hanging fruit to reducing security risks is to delete or archive sensitive data embedded in files.

This makes good sense, of course. This stale data can be, for example, consumer PII collected in short-term marketing campaigns, but now residing in dusty spreadsheets or rusting management presentations.

Your organization may no longer need it, but it’s just the kind of monetizable data that hackers love to get their hands on.

As we saw in the first post, which focused on Identification, DatAdvantage can find and identify file data that hasn’t been used after a certain threshold date.

Can the stale data report be tweaked to find stale data that is also sensitive?

Affirmative.

You need to add the hit count filter and set the number of sensitive data matches to an appropriate number.

In my test environment, I discovered that the C:\Share\pvcs folder hasn't been touched in over a year and has some sensitive data.

The next step is then to take a visit to the Data Transport Engine (DTE) available in DatAdvantage (from the Tools menu). It allows you to create a rule that will search for files to archive and delete if necessary.

In my case, my rule’s search criteria mirrors the same filters used in generating the report. The rule is doing the real heavy-lifting of removing the stale, sensitive data.

Since the rule is saved, it can be rerun to enforce the retention limits. Even better, DTE can automatically run the rule on a periodic basis, so you never have to worry about stale sensitive data in your file system.
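
If you're curious what such a retention rule boils down to, here's a minimal sketch in Python of the same general idea: walk a share, flag files that are both stale and contain a meaningful number of sensitive-pattern matches, and move them to an archive location. This is purely illustrative and is not how DatAdvantage or DTE is implemented; the paths, one-year threshold, SSN regex, and hit-count cutoff are placeholder assumptions you'd tune for your own environment.

```python
import os
import re
import shutil
import time

# Illustrative stand-in for a DTE-style retention rule -- not the actual product logic.
# Paths, the one-year threshold, and the SSN regex are placeholder assumptions.
SHARE_ROOT = r"C:\Share"
ARCHIVE_ROOT = r"D:\Archive"
STALE_SECONDS = 365 * 24 * 3600          # "untouched for over a year"
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # crude US SSN-like matcher
MIN_HITS = 5                             # only act on files with several matches


def is_stale(path: str) -> bool:
    """True if the file hasn't been modified within the staleness threshold."""
    return (time.time() - os.path.getmtime(path)) > STALE_SECONDS


def hit_count(path: str) -> int:
    """Count SSN-like matches in a file; unreadable files count as zero."""
    try:
        with open(path, "r", errors="ignore") as f:
            return len(SSN_PATTERN.findall(f.read()))
    except OSError:
        return 0


for dirpath, _dirs, files in os.walk(SHARE_ROOT):
    for name in files:
        src = os.path.join(dirpath, name)
        if is_stale(src) and hit_count(src) >= MIN_HITS:
            dest = os.path.join(ARCHIVE_ROOT, os.path.relpath(src, SHARE_ROOT))
            os.makedirs(os.path.dirname(dest), exist_ok=True)
            shutil.move(src, dest)       # archive rather than delete outright
```

Archiving rather than deleting outright gives the data owner a chance to review what's being removed before anything is gone for good.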

Requirements for implementing data retention policies can be found in the following security standards and regulations:

  • NIST 800-53: SI-12
  • PCI DSS 3.x: 3.1
  • CIS Critical Security Controls: 14.7
  • New York State DFS Cybersecurity Regulations: 500.13
  • EU General Data Protection Regulation (GDPR): Article 25.2

Detecting and Monitoring

Following the order of the NIST higher-level security control categories from the first post, we now arrive at our final destination in this series, Detect.

No data security strategy is foolproof, so you need a secondary defense based on detection and monitoring controls: effectively you’re watching the system and looking for unusual activities.

Varonis, and specifically DatAlert, has a unique role in detection because its underlying security platform is based on monitoring file system activities.

By now everyone knows (or should know) that phishing and injection attacks allow hackers to get around network defenses as they borrow existing users’ credentials, and fully-undetectable (FUD) malware means they can avoid detection by virus scanners.

So how do you detect the new generation of stealthy attackers?

No attacker can avoid using the file system to load their software, copy files, and crawl a directory hierarchy looking for sensitive data to exfiltrate.  If you can spot their unique file activity patterns, then you can stop them before they remove or exfiltrate the data.

We can’t cover all of DatAlert’s capabilities in this post — probably a good topic for a separate series! — but since it has deep insight to all file system information and events, and histories of user behaviors, it’s in a powerful position to determine what’s out of the normal range for a user account.

We call this user behavior analytics or UBA, and DatAlert comes bundled with a suite of UBA threat models (below).  You’re free to add your own, of course, but the pre-defined models are quite powerful as is. They include detecting crypto intrusions, ransomware activity, unusual user access to sensitive data, unusual access to files containing credentials, and more.
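
To make the UBA idea a bit more concrete, here's a toy sketch in Python of the simplest possible behavioral baseline: compare today's file-event count for an account against that account's own history and flag large deviations. This is not how DatAlert's threat models work internally; the history list, the three-sigma threshold, and the ten-day minimum are illustrative assumptions.

```python
from statistics import mean, stdev

# Toy behavioral baseline -- not DatAlert's actual threat models.
# 'history' would come from weeks of per-user daily file-event counts.
def is_anomalous(history: list[int], today: int, sigmas: float = 3.0) -> bool:
    """Flag today's file-event count if it deviates far from the user's own baseline."""
    if len(history) < 10:                       # not enough history for a baseline
        return False
    mu, sd = mean(history), stdev(history)
    return today > mu + sigmas * max(sd, 1.0)   # floor sd to avoid zero-variance noise


# Example: an account that normally touches ~200 files a day suddenly touches 12,000
history = [180, 220, 195, 210, 205, 190, 215, 200, 185, 225]
print(is_anomalous(history, today=12_000))      # True: worth an alert
```

Real UBA models add far more signal (time of day, file sensitivity, peer-group comparisons), but the core idea is the same: the user's own history defines what normal looks like.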

All the alerts that are triggered can be tracked from the DatAlert Dashboard.  IT staff can either intervene and respond manually or even set up scripts to run automatically — for example, automatically disable accounts.

If a specific data security law or regulation requires a breach notification to be sent to an authority, DatAlert can provide some of the information that's typically required – files that were accessed, types of data, etc.

Let’s close out this post with a final list of detection and response controls in data standards and laws that DatAlert can help support:

  • NIST 800-53: SI-4, AU-13, IR-4
  • PCI DSS 3.x: 10.1, 10.2, 10.6
  • CIS Critical Security Controls: 5.1, 6.4, 8.1
  • HIPAA: 45 CFR 164.400-164.414
  • ISO 27001: A.16.1.1, A.16.1.4
  • New York State DFS Cybersecurity Regulations: 500.02, 500.16, 500.27
  • EU General Data Protection Regulation (GDPR): Article 33, 34
  • Most US states have breach notification rules

Data Security Compliance and DatAdvantage, Part I: Essential Reports for Risk Assessment

Over the last few years, I’ve written about many different data security standards, data laws, and regulations. So I feel comfortable in saying there are some similarities in the EU’s General Data Protection Regulation, the US’s HIPAA rules, PCI DSS, NIST’s 800 family of controls and others as well.

I’m really standing on the shoulders of giants, in particular the friendly security standards folks over at the National Institute of Standards and Technology (NIST), in understanding the inter-connectedness. They’re the go-to people for our government’s own data security standards: for both internal agencies (NIST 800-53) and outside contractors (NIST 800-171).  And through its voluntary Critical Infrastructure Security Framework, NIST is also influencing data security ideas in the private sector as well.

One of their big ideas is to divide security controls, which every standard and regulation has in one form or another, into five functional areas: Identify, Protect, Detect, Respond, and Recover. In short, give me a data standard and you can map their controls into one of these categories.

The NIST big picture view of security controls.

The idea of commonality led me to start this series of posts about how our own products, principally Varonis DatAdvantage, though not targeted at any specific data standard or law, can in fact help meet many of the key controls and legal requirements. The out-of-the-box reporting feature in DatAdvantage is a great place to start to see how all this works.

In this first blog post, we’ll focus on DA reporting functions that roughly cover the identify category. This is a fairly large area in itself, taking in asset identification, governance, and risk assessment.

Assets: Users, Files, and More

For DatAdvantage, users, groups, and folders are the raw building blocks used in all its reporting. However, if you want to view pure file system asset information, you can go to the following three key reports in DatAdvantage.

The 3a report gives IT staff a listing of Active Directory group membership. For starters, you could run the report on the all-encompassing Domain Users group to get a global user list (below). You can also populate the report with any AD property associated with a user (email, managers, department, location, etc.)

For folders, report 3f provides access paths, size, number of subfolders, and the share path.

Beyond a vanilla list of folders, IT security staff usually wants to dig a little deeper into the file structure in order to identify sensitive or critical data. What is critical will vary by organization, but generally they’re looking for personally identifiable information (PII), such as social security numbers, email addresses, and account numbers, as well as intellectual property (proprietary code, important legal documents, sales lists).

With DatAdvantage’s 4g report, Varonis lets security staff zoom into folders containing sensitive PII data, which is often scattered across huge corporate file systems. Behind the scenes, the Varonis classification engine has scanned files using PII filters for different laws and regulations, and rated the files based on the number of hits — for example, number of US social security numbers or Canadian driver’s license numbers.

The 4g report lists these sensitive files from highest to lowest "hit" count. By the way, this is the report our customers often run first and find very eye-opening — especially if they were under the impression that there's 'no way millions of credit card numbers could be found in plaintext'.
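
Conceptually, a classification pass like this amounts to running a set of PII patterns over every file and ranking the results by hit count. The sketch below shows that idea in Python; it is not the Varonis classification engine, and the regexes are deliberately simplistic placeholders (real classifiers also validate checksums, context, and proximity).

```python
import os
import re

# Toy classification pass -- not the Varonis engine. Patterns are crude placeholders.
PII_PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}


def classify(path: str) -> dict[str, int]:
    """Return per-pattern hit counts for one file; unreadable files return nothing."""
    try:
        with open(path, "r", errors="ignore") as f:
            text = f.read()
    except OSError:
        return {}
    return {name: len(rx.findall(text)) for name, rx in PII_PATTERNS.items()}


def hit_report(root: str):
    """List files with any hits, highest total hit count first (a 4g-style view)."""
    rows = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            hits = classify(path)
            total = sum(hits.values())
            if total:
                rows.append((total, path, hits))
    return sorted(rows, reverse=True)


for total, path, hits in hit_report(r"C:\Share")[:20]:   # top 20 riskiest files
    print(total, path, hits)
```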

Assessing the Risks

We’ve just seen how to view nuts-and-bolts asset information, but the larger point is to use the file asset inventory to help security pros discover where an organization’s particular risks are located.

In other words, it’s the beginning of a formal risk assessment.

Of course, the other major part of assessment is to look (continuously) at the threat environment and then be on the hunt for specific vulnerabilities and exploits. We’ll get to that in a future post.

Now let’s use DatAdvantage for risk assessments, starting with users.

Stale user accounts are an overlooked scenario that has lots of potential risk. Essentially, user accounts are often not disabled or removed when an employee leaves the company or a contractor’s temporary assignment is over.

For the proverbial disgruntled employee, it's not unusual for this former insider to still have access to his account. Or for hackers to gain access to a no-longer-used third-party contractor's account and then leverage that to hop into their real target.

In DatAdvantage's 3a report, we can produce a list of stale user accounts based on the last logon time that's maintained by Active Directory.
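
For readers who want to see what that lookup involves outside of DatAdvantage, here's a rough sketch using Python and the ldap3 library to pull accounts whose lastLogonTimestamp is older than a cutoff. The server name, base DN, service account, and 180-day threshold are placeholder assumptions, and lastLogonTimestamp is only replicated periodically, so treat the results as candidates to review rather than accounts to disable automatically.

```python
from datetime import datetime, timedelta, timezone

from ldap3 import ALL, SUBTREE, Connection, Server

# Rough AD query for stale accounts -- server, base DN, and credentials are placeholders.
CUTOFF_DAYS = 180
cutoff = datetime.now(timezone.utc) - timedelta(days=CUTOFF_DAYS)
# lastLogonTimestamp is a Windows FILETIME: 100-ns intervals since 1601-01-01
filetime_cutoff = int(
    (cutoff - datetime(1601, 1, 1, tzinfo=timezone.utc)).total_seconds() * 10_000_000
)

server = Server("ldaps://dc01.example.com", get_info=ALL)
conn = Connection(server, user="EXAMPLE\\svc_audit", password="********", auto_bind=True)
conn.search(
    search_base="DC=example,DC=com",
    search_filter=(
        "(&(objectCategory=person)(objectClass=user)"
        f"(lastLogonTimestamp<={filetime_cutoff}))"
    ),
    search_scope=SUBTREE,
    attributes=["sAMAccountName", "lastLogonTimestamp"],
)

for entry in conn.entries:    # candidate stale accounts to review (and likely disable)
    print(entry.sAMAccountName, entry.lastLogonTimestamp)
```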

The sensitive data report that we saw earlier is the basis for another risk assessment report. We just have to filter on folders that have “everyone” permissions.

Security pros know from the current threat environment that phishing or SQL injection attacks allow an outsider to get the credentials of an insider. With no special permissions, a hacker would then have automatic access to folders with global permissions.

Therefore, there's a significant risk in having sensitive data in these open folders (assuming there are no other compensating controls).

DatAdvantage’s 12 L report nicely shows where these folders are.

Let’s take a breath.

In the next post, we’ll continue our journey through DatAdvantage by finishing up with the risk assessment area and then focusing on the Protect and Defend categories.

For those compliance-oriented IT pros and other legal-istas, here's a short list of regulations and standards (based on our customers' requests) that the above reports help support:

  • NIST 800-53: IA-2, CM-8
  • NIST 800-171: 3.5.1
  • HIPAA:  45 CFR 164.308(a)(1)(ii)(A)
  • GLBA: FTC Safeguards Rule (16 CFR 314.4)
  • PCI DSS 3.x: 12.2
  • ISO 27001: A.7.1.1
  • New York State DFS Cybersecurity Regulations: 500.02
  • EU GDPR: Security of Processing (Article 32) and Impact Assessments (Article 35)

G'Day, Australia Approves Breach Notification Rule

Last month, Australia finally amended its Privacy Act to now require breach notification. This proposed legislative change has been kicking around the Federal Government for a few years. Our attorney friends at Hogan Lovells have a nice summary of the new rule.

The good news here is that Australia defines a breach broadly enough to include both unauthorized disclosure and access of personal information. Like the GDPR, Australia also considers personal data to be any information about an identified individual or that can be reasonably linked to an individual.

In real-world terms, it means that if hackers get phone numbers, bank account data, or medical records or if malware, like ransomware, merely accesses this information, then it’s considered a breach.

So far, so good.

There’s a ‘But’

However, the new Australian requirement has a harm threshold that also has to be met for the breach to be reportable. This is not in itself unusual, in that we've seen these same harm thresholds in US state breach notification laws, and even the EU's GDPR and the NIS Directive.

In the Australian case, the language used is that the breach is "likely to result in serious harm." While not explicitly stated, the surrounding context in the amendment says that the breach would have to cause serious physical, psychological, emotional, economic, reputational, or financial harm, or some other effect that a "reasonable" person would consider serious.

By the way, this is also similar to what’s in the GDPR’s preamble.

The Australian breach notification rule, though, goes further with explicit remediation exceptions that give the covered entities – private sector companies, government agencies, and health care providers – even more wiggle room. If the breached entity can show that it has taken action on the disclosure or access before it results in serious harm, then it doesn't have to report it.

I suppose you could come up with scenarios where there's been, say, limited exposure of passwords from a health insurance company's website, the company freezes the relevant user accounts, and then instructs affected individuals to contact them about resetting passwords. That might be a successful remediation.

You can see what the Australian regulators were getting at. By the way, I don’t think this rule is as “floppy” as one publication called the notification criteria. But it does give the covered entities something of a second chance.

Anyway, if there’s a harmful breach event, then Australian organizations will have to notify the regulators as soon as possible after discovery. They’ll need to provide them with breach details, including the information accessed, as well as steps affected individuals should take.

The Australian breach notification rule is set to go into effect in a few weeks, and there will be a one-year grace period from that point. Failure to comply can result in investigations, forced remedial actions, and fines or compensations.

Cybersecurity Laws Get Serious: EU's NIS Directive

In the IOS blog, our cyberattack focus has mostly been on hackers stealing PII and other sensitive personal data. The breach notification laws and regulations that we write about require notification only when there’s been acquisition or disclosure of PII by an unauthorized user. In plain speak, the data is stolen.

These data laws, though, fall short in two significant ways.

One, the hackers can potentially take data that’s not covered by the law: non-PII that can include corporate IP, sensitive emails from the CEO, and other valuable proprietary information. Two, the attackers are not interested in taking data but rather in disruption: for example, deploying DoS attacks or destroying important system or other non-PII data.

Under the US’s HIPAA, GLBA, and state breach laws as well as the EU’s GDPR, neither of the two cases above — and that takes in a lot of territory — would trigger a notification to the appropriate government authority.

The problem is that data privacy and security laws focus, naturally, on the data, instead of the information system as a whole. However, it doesn’t mean that governments aren’t addressing this broader category of cybersecurity.

There’s not been nearly enough attention paid to the EU’s Network and Information Security (NIS) Directive, the US’s (for now) voluntary Critical Infrastructure Security Framework, Canada’s cybersecurity initiatives, and other laws in major EU countries.

And that's my motivation in writing this first in a series of posts on cybersecurity rules. These are important rules that organizations should be more aware of. Sometime soon, it won't be good enough, legally speaking, to protect special classes of data. Companies will be required to protect entire IT systems and report to regulatory authorities when there have been actions to disrupt or disable the IT infrastructure.

Protecting the Cyber

The laws and guidelines that have evolved in this area are associated with safeguarding critical infrastructure – telecom, financial, medical, chemical, transportation. The reason is that cybercrime against the IT network of, say, Hoover Dam or the Federal Reserve should be treated differently than an attack against a dating web site.

Not that an attack against any IT system isn’t a serious and potentially costly act. But with critical infrastructure, where there isn’t an obvious financial motivation, we start entering the realm of cyber espionage or cyber disruption initiated by governments.

In other words, bank ATM machines suddenly not dispensing cash, the cell phone network dropping calls, or – heaven help us! — Google replying with wrong and deceptive answers, may be a sign of a cyberwar or at least a cyber ambush.

A few months back, we wrote about an interview between Charlie Rose and John Carlin, the former Assistant Attorney General in the National Security Division of the Department of Justice. The transcript can be found here, and it’s worth going through it, or at least searching on the “attribution” keyword.

Essentially, Carlin tells us that US law enforcement is getting far better at learning who are behind cyberattacks. The Department of Justice is now publicly naming the attackers, and then prosecuting them. By the way, Carlin went after Iranian hackers accused of intrusions into banks and a small dam near New York City. Fortunately, the dam’s valves were still manually operated and not connected to the Internet.

Carlin believes there are important advantages in going public with a prosecution against named individuals. Carlin sees it as a way to deter future cyber incidents. As he puts it, “because if you are going to be able to deter, you’ve got to make sure the world knows we can figure out who did it.”

So it would make enormous sense to require companies to report cyberattacks to governmental agencies, who can then put the pieces together and formally take legal and other actions against the perps.

First Stop: EU’s NIS Directive.

As with the Data Protection Directive for data privacy, which was adopted in 1995, the EU has again been way ahead of other countries in formalizing cyber reporting legislation. Its Network and Information Systems Directive was initially drafted in 2013 and was approved by the EU last July.

Since it is a directive, individual EU countries will have to transpose NIS into their own national laws. EU countries will have a two-year transition period to get their houses in order, and an additional six months to identify companies providing essential services (see Annex II).

In Article 14, operators of essential services are required to take “appropriate and proportionate technical and organisational measures to manage the risks posed to the security of network and information systems.”  They are also required to report, without undue delay, significant incidents to a Computer Security Incident Response Team or CSIRT.

There’s separate and similar language in Article 16 covering digital service providers, which is the EU’s way of saying ecommerce, cloud computing, and search services.

CSIRTs are at the center of the NIS Directive. Besides collecting incident data, CSIRTs are also responsible for monitoring and analyzing threat activity at a national level, issuing alerts and warnings, and sharing their information and threat awareness with other CSIRTs.  (In the US, the closest equivalent is the Department of Homeland Security’s NCCIC.)

What is considered an incident in the NIS Directive?

It is any “event having an actual adverse effect on the security of network and information systems.”  Companies designated as providing essential services are given some wiggle room in what they have to report to a CSIRT. For an incident to be significant, and thus reportable, the company has to consider the number of users affected, the duration, and the geographical scope.

Essential digital service operators must also take into account the effect of their disruption on economic and “societal activities”.

Does this mean that a future attack against, say, Facebook in the EU, in which Messenger or status posting activity is disrupted would have to be reported?

To this non-attorney blogger, it appears that Facebooking could be considered an important societal activity.

Yeah, there are vagaries in the NIS Directive, and it will require more guidance from the regulators.

In my next post in this series, I’ll take a closer look at cybersecurity rules due north of us for our Canadian neighbor.

Update: New York State Finalizes Cyber Rules for Financial Sector

When last we left New York State’s innovative cybercrime regulations, they were in a 45-day public commenting period. Let’s get caught up. The comments are now in. The rules were tweaked based on stakeholders’ feedback, and the regulations will begin a grace period starting March 1, 2017.

To save you the time, I did the heavy lifting and looked into the changes made by the regulators at the New York State Department of Financial Services (NYSDFS).

There are a few interesting ones to talk about. But before we get into them, let’s consider how important New York State — really New York City — is as a financial center.

Made in New York: Money!

To get a sense of what's encompassed in the NYSDFS's portfolio, I took a quick dip into their annual report.

For the insurance sector, they supervise almost 900 insurers with assets of $1.4 trillion and receive premiums of $361 billion. Under wholesale domestic and foreign banks — remember New York has a global reach — they monitor 144 institutions with assets of $2.2 trillion. And I won’t even get into community and regional banks, mortgage brokers, and pension funds.

In a way, the NYSDFS has the regulatory power usually associated with a small country's government. And therefore the rules that New York makes regarding data security have an outsized influence.

One Rule Remains the Same

Back to the rules. First, let’s look at one key part that was not changed.

NYSDFS received objections from the commenters on their definition of cyber events. This is at the center of the New York law—detecting, responding, and recovering from these events—so it’s important to take a closer look at its meaning.

Under the rules, a cybersecurity event is “any act or attempt, successful or unsuccessful, to gain unauthorized access to, disrupt or misuse an Information System or information …”

Some of the commenters didn’t like the inclusion of “attempt” and “unsuccessful”. But the New York regulators held firm and kept the definition as is.

A cybersecurity event is a broader term than a data breach. For a data breach, there usually has to be data access and exposure or exfiltration. In New York State, though, access alone or an IT disruption, even when merely attempted (or executed but not successful), is considered an event.

As we’ve pointed out in our ransomware and the law cheat sheet, very few states in the US would classify a ransomware attack as a breach under their breach laws.

But in New York State, if ransomware (or a remote access trojan or other malware) was loaded on the victim’s server and perhaps abandoned or stopped by IT in mid-hack, it would indeed be a cybersecurity event.

Notification Worthy

This leads naturally to another rule, notification of a cybersecurity event to the New York State regulators, where the language was tightened.

The 72-hour time frame for reporting remains, but the clock starts ticking after a determination by the financial company that an event has occurred.

The financial companies were also given more wiggle room in the types of events that require notification: essentially the malware would need to “have a reasonable likelihood of materially harming any material part of the normal operation…”

That’s a mouthful.

In short: financial companies will notify the regulators at NYSDFS when the malware could seriously affect an operation that’s important to the company.

For example, malware that infects the digital console on the bank’s espresso machine is not notification worthy. But a key logger that lands in a bank’s foreign exchange area and is scooping up user passwords is very worthy.

The NYSDFS's updated notification rule language, by the way, puts it more in line with other data security laws, including the EU's General Data Protection Regulation (GDPR).

So would you have to notify the New York State regulator when malware infects a server but hasn’t necessarily completed its evil mission?

Getting back to the language of “attempt” and “unsuccessful” found in the definition of cybersecurity events, it would appear that you would but only if the malware lands on a server that’s important to the company’s operations — either because of the data it contains or its function.

State of Grace

The original regulation also said you had to appoint a Chief Information Security Officer (CISO) who'd be responsible for seeing that this cybersecurity regulation is carried out. Another important task of the CISO is to annually report to the board on the state of the company's cybersecurity program.

With pushback from industry, this language was changed so that you can designate an existing employee as a CISO — likely a CIO or other C-level.

One final point to make is that the grace period for compliance has been changed. For most of the rules, it’s still 180 days.

But for certain requirements – multifactor authentication and penetration testing — the grace period has been extended to 12 months, and for a few others – audit trails, data retention, and the CISO report to the board — it’s been pushed out to 18 months.

For more details on the changes, check this legal note from our attorney friends at Hogan Lovells.

EU GDPR Spotlight: Do You Have to Hire a DPO?

I suspect right about now that EU (and US) companies affected by the General Data Protection Regulation (GDPR) are starting to look more closely at their compliance project schedules. With enforcement set to begin in May 2018, the GDPR-era will shortly be upon us.

One of the many questions that have not been fully answered by this new law (and are still being worked out by the regulators) is under what circumstances a company needs to hire a data protection officer (DPO).

There are three scenarios mentioned in the GDPR (see article 37) where a DPO is mandatory: the core activities involve the processing of personal data by a public authority; the core activities involve "regular and systematic monitoring of data subjects on a large scale"; or the core activities require large-scale processing of special categories of data—for example, biometric, genetic, and health data.

Companies falling into the second category, which I think covers the largest share, are probably pondering what is meant by “regular and systematic monitoring” and “large-scale”.

Even as a non-lawyer, I noticed these provisions were a bit foggy.

A few months ago, I asked GDPR legal specialist Bret Cohen at Hogan Lovells what the heck was meant.

Cohen’s answer was that, well, we’ll have to wait for more guidance from the regulators.

And Thus Spoke the Article 29 Working Party

No, the Article 29 Working Party (WP29) is not the name of a new Netflix series, but it will, under the GDPR, become a kind of super data protection authority (DPA), providing advice and ensuring consistency among all the national DPAs.

Anyway, last month the WP29 published guidance addressing the confusing criteria for DPOs.

And after reading it, I suppose, I’m still a little confused.

For those of us who were following the GDPR and watching how this legal sausage was made, the DPO was one of the more contentious provisions.

There were differences of opinion on whether a DPO should be mandatory or optional and on the threshold requirements for having one in the first place. Some argued the threshold should be a company's number of employees (250); others, the number of records of personal data processed (500).

The parties — EU Commission, Parliament, and Council — finally settled on DPOs being mandatory but they removed specific numbers. And so we’re left with this vague language.

The new guidance provides some clarification.

According to the WP29, “regular and systematic” means, in human-speak, a pre-arranged plan that’s carried out repeatedly over time.

So far, so good.

What does “large scale” mean?

For me, this is the more interesting question. The WP29 said the following factors need to be taken into consideration:

  • The number of data subjects concerned – either as a specific number or as a proportion of the relevant population
  • The volume of data and/or the range of different data items being processed
  • The duration, or permanence, of the data processing activity
  • The geographical extent of the processing activity

We’re All Monitoring Web Behavior

You can kind of see what the lawmakers were grappling with in the list of factors. But it's still a little muddy.

Obviously, an insurance company, bank, or retailer that collects personal data from millions of customers would require a DPO.

However, a small web start-up with a few employees can also be engaged in large-scale monitoring.

How?

Suppose their free web app is being accessed by tens or hundreds of thousands of visitors per month. The startup's site may be collecting little or no personal data beyond tracking browser activity with cookies or other means. I use plenty of freebie sites this way — especially news sites — and the advertising I see reflects their knowledge of me.

But according to the guidance and other language in the GDPR, tracking web behavior counts as the kind of "monitoring" referred to in the DPO provisions.

I could be mistaken but it seems to me that any company with a website that receives a reasonable amount of traffic would be required to have a DPO.  And this would include lots of B2Bs that don’t necessarily have a large customer base compared to a consumer company. For validation of this view, check out this legal post.

It’s a confusing point that I’m hoping to get resolved by our attorney friends.

In the meantime, more explanation of this somewhat wonkish but important topic can be found here, courtesy of the brilliant people over at the IAPP.

What We Learned From Talking to Data Security Experts

What We Learned From Talking to Data Security Experts

Since we’ve been working on the blog, Cindy and I have chatted with security professionals across many different areas — pen testers, attorneys, CDOs, privacy advocates, computer scientists, and even a guru. With 2016 coming to an end and the state of security looking more unsettled than ever, we decided it was a good time to take stock of the collective wisdom we’ve absorbed from these pros.

The Theory of Everything

A good place to begin our wisdom journey is the Internet of Things (IoT). It's where the public directly experiences all the issues related to privacy and security.

We had a chance to talk to IoT pen tester Ken Munro earlier this year, and his comments on everything from wireless coffee pots and doorbells to cameras really resonated with us:

“You’re making a big step there, which is assuming that the manufacturer gave any thought to an attack from a hacker at all. I think that’s one of the biggest issues right now is there are a lot of manufacturers here and they’re rushing new product to market …”

IoT consumer devices are not, cough, based on Privacy by Design (PbD) principles.

And over the last few months, consumers learned the hard way that these gadgets were susceptible to simple attacks that exploited backdoors, default passwords, and even non-existent authentication.

Hackers got additional help from public-facing router ports left open during device installation, without any warning to the poor user, and from unsigned firmware that left devices open to complete takeover.

As a result, IoT is where everything wrong with data security seems to show up. However, there are easy-to-implement lessons that we can all put into practice.

Password Power!

Is security always about passwords? No, of course not, but poor passwords or password defaults that were never reset seem to show up as a root cause in many breaches.

The security experts we've spoken to often bring up the sorry state of passwords without any prompting from us. One of them, Per Thorsheim, who is in fact a password expert himself, reminded us that one answer to our bad password habits is two-factor authentication (TFA):

“From a security perspective, adding this two-factor authentication is everything. It increases security in such a way that in some cases even if I told you my password for my Facebook account, as an example, well because I have two-factor authentication, you won’t be able to log in. As soon as you type in my user name and password, I will be receiving a code by SMS from Facebook on my phone, which you don’t have access to. This is really good.”

We agree with Thorsheim that humans are generally not good at this password thing, and so TFA and biometric authentication will certainly be a part of our password future.
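
To make the one-time-code idea a bit more concrete, here's a minimal sketch (in Python, standard library only) of the time-based one-time password (TOTP) algorithm from RFC 6238 that authenticator apps use. It's illustrative only: the shared secret below is a made-up example, and SMS codes like the Facebook one Thorsheim describes are generated server-side and texted to you rather than computed on your device.

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        """Compute a time-based one-time password (RFC 6238, HMAC-SHA1)."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // interval           # 30-second time step
        msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    # Hypothetical shared secret -- both the service and your device hold a copy.
    print(totp("JBSWY3DPEHPK3PXP"))

Whoever is verifying computes the same six digits on their side; if they match within the current time window, the second factor checks out, even if someone else already knows your password.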

In the meantime, for those of us who still cling to plain-old passwords, Professor Justin Cappos told us a while back that there's a simple way to come up with better passwords:

“If you’re trying to generate passwords as a human, there are tricks you can do where you pick four dictionary words at random and then create a story where the words interrelate. It’s called the “correct horse battery staple” method! “

Correct-horse-battery-staple is just a way of using a story as a memory trick or mnemonic. It's an old technique, but one that helps create hard-to-crack passwords.
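
As a rough illustration of the technique, here's a minimal sketch in Python that picks four dictionary words at random. The /usr/share/dict/words path is an assumption (it ships with most Linux and macOS systems); any large word list will do.

    import secrets

    # Assumed word list location; swap in any large word list you have handy.
    WORDLIST = "/usr/share/dict/words"

    def passphrase(num_words: int = 4) -> str:
        """Pick random dictionary words, correct-horse-battery-staple style."""
        with open(WORDLIST) as f:
            # Keep lowercase, purely alphabetic words so the phrase stays simple.
            words = [w.strip() for w in f if w.strip().isalpha() and w.strip().islower()]
        return "-".join(secrets.choice(words) for _ in range(num_words))

    print(passphrase())  # e.g. something like "orbit-canvas-merit-lagoon"

The key is that the words are chosen at random rather than by you; the story that links them comes afterward, as the mnemonic.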

One takeaway from these experts: change your home router admin passwords now (and use correct-horse-battery-staple). Corporate IT admins should also take a good, hard look at their own passwords and avoid aiding and abetting hackers.

Cultivate (Privacy and Security) Awareness

Enabling TFA on your online accounts and generating better passwords goes a very long way to improving your security profile.

But we also learned that you need to step back and cultivate a certain level of privacy awareness in your online transactions.

We learned from attorney and privacy guru Alexandra Ross about the benefits of data minimization, both for the companies that collect data and for the consumers who reveal it:

“One key thing is to stop, take a moment, and be mindful of what’s going on. What data am I being asked to submit when I sign up for a social media service?  And question why it’s being asked.

It’s worth the effort to try to read the privacy policies, or read consumer reviews of the app or online service.”

And

“If you’re speaking to the marketing team at a technology company—yeah, the default often is let’s collect everything. In other words, let’s have this very expansive user profile so that every base is covered and we have all these great data points.

But if you explain, or ask questions … then you can drill down to learn what’s really necessary for the data collection.”

In a similar vein, data scientist Kaiser Fung pointed out that often there isn’t much of a reason behind some of the data collection in the first place:

“It’s not just the volume of data, but the fact that the data today is collected without any design or plan in mind. Often times, people collecting the data are really divorced from any kind of business problem.”

Listen up IT and marketing people: think about what you’re doing before you submit your next contact form!

Ross and other PbD advocates preach the doctrine of data minimization: the less data you have, the lower your security risk is when there’s an attack.

As our privacy guru, Ross reminded us that there's still a lot of data about us spread out in corporate data systems. Scott “Hacked Again” Schober, another security pro we chatted with, makes the same point based on his personal experiences:

“I was at an event speaking … and was asked if I’d be willing to see how easy it is to perform identity theft and compromise information on myself. I was a little reluctant but I said ok, everything else is out there already, and I know how easy it is to get somebody’s information. So I was the guinea pig. It was Kevin Mitnick, the world’s most famous hacker, who performed the theft. Within 30 seconds and at the cost of $1, he pulled up my social security number.”

There’s nothing inherently wrong with companies storing personal information about us. The larger point is to be savvy about what you’re being asked to provide and take into account that corporate data breaches are a fact of life.

Credit cards can be replaced and passwords changed, but details about our personal preferences (food, movies, reading habits) and our social security numbers are forever, and they're a great source of raw material for hackers to use in social engineering attacks.

Data is Valuable

We’ve talked to attorneys and data scientists, but we had the chance to talk to both in the form of Bennett Borden. His bio is quite interesting: in addition to being a litigator at Drinker Biddle, he’s also a data scientist. Borden has written law journal articles about the application of machine learning and document analysis to e-discovery and other legal transactions.

Borden explained how as employees we all leave a digital trail in the form of emails and documents, which can be quite revealing. He pointed out that this information can be useful when lawyers are trying to work out a fair value for a company that’s being purchased.

He was called in to do a data analysis for a client and was able to show that internal discussions indicated the asking price for the company was too high:

“We got millions of dollars back on that purchase price, and we’ve been able to do that over and over again now because we are able to get at these answers much more quickly in electronic data.”

So information is valuable in a strictly business sense. At Varonis, this is not news to us, but it's still powerful to hear it from someone who is immersed in corporate content as part of his job.

To summarize, as consumers and as corporate citizens, we should all be more careful about how we treat this valuable substance: don't give it away easily, and protect it if it's in your possession.

More Than Just a Good Idea

Privacy by Design came up in a few of our discussions with experts, and one of its principles, privacy as a default setting, is a hard one for companies to accept, even though PbD says that privacy is not a zero-sum game — you can have tough privacy controls and profits.

In any case, for companies that do business in the EU, PbD is not just a good idea but in fact it will be the law in 2018. The concept is explicitly spelled out in the General Data Protection Regulation’s (GDPR) article 25, “Data protection by design and by default”.

We've been writing about the GDPR and its many implications for the last two years. But one somewhat overlooked consequence is that the GDPR will apply to companies outside of the EU.

We spoke with data security compliance expert Sheila FitzPatrick, who really emphasized this point:

“The other part of GDPR that is quite different–and it’s one of the first times that this idea will be put in place– is that it doesn’t just apply to companies that have operations within the EU. Any company regardless of where they are located and regardless of whether or not they have a presence in the EU, if they have access to the personal data of any EU citizen, they will have to comply with the regulations under the GDPR. That’s a significant change.”

This legal idea is sometimes referred to as extraterritoriality. And US e-commerce and web service companies in particular will find themselves under the GDPR when EU citizens interact with them. IT best practices that experts like to talk about as things you should do are becoming legal requirements for them.  It’s not just a good idea!

 

Our final advice for 2016: read the writing on the wall and position yourself to align with PbD ideas on data minimization, consumer consent, and data protection.