
G’Day, Australia Approves Breach Notification Rule


Last month, Australia finally amended its Privacy Act to require breach notification. This legislative change had been kicking around the Federal Government for a few years. Our attorney friends at Hogan Lovells have a nice summary of the new rule.

The good news here is that Australia defines a breach broadly enough to include both unauthorized disclosure and unauthorized access of personal information. Like the GDPR, Australia also considers personal data to be any information about an identified individual, or information that can reasonably be linked to an individual.

In real-world terms, it means that if hackers get phone numbers, bank account data, or medical records or if malware, like ransomware, merely accesses this information, then it’s considered a breach.

So far, so good.

There’s a ‘But’

However, the new Australian requirement has a harm threshold that must also be met for a breach to be reportable. This is not in itself unusual: we’ve seen the same harm thresholds in US states’ breach notification laws, and even in the EU’s GDPR and the NIS Directive.

In the Australian case, the language used is that the breach is “likely to result in serious harm.” While not explicitly stated, the surrounding context in the amendment suggests the breach would have to cause serious physical, psychological, emotional, economic, reputational, or financial harm, or some other effect that a “reasonable” person would agree is serious.

By the way, this is also similar to what’s in the GDPR’s preamble.

The Australian breach notification rule, though, goes further with explicit remediation exceptions that give the covered entities – private sector companies, government agencies, and health care providers – even more wiggle room. If the breached entity can show that it took action on the disclosure or access before it resulted in serious harm, then it doesn’t have to report the incident.

I suppose you could come up with scenarios where there’s been, say, limited exposure of passwords from a health insurance company’s website, the company freezes the relevant user accounts, and then instructs affected individuals to contact them about resetting passwords. That might count as a successful remediation.

You can see what the Australian regulators were getting at. By the way, I don’t think this rule is as “floppy” as one publication called the notification criteria. But it does give the covered entities something of a second chance.

Anyway, if there’s a harmful breach event, then Australian organizations will have to notify the regulators as soon as possible after discovery. They’ll need to provide them with breach details, including the information accessed, as well as steps affected individuals should take.

The Australian breach notification rule is set to go into effect in a few weeks, and there will be a one-year grace period from that point. Failure to comply can result in investigations, forced remedial actions, and fines or compensation.

Cybersecurity Laws Get Serious: EU’s NIS Directive


In the IOS blog, our cyberattack focus has mostly been on hackers stealing PII and other sensitive personal data. The breach notification laws and regulations that we write about require notification only when there’s been acquisition or disclosure of PII by an unauthorized user. In plain speak, the data is stolen.

These data laws, though, fall short in two significant ways.

One, the hackers can potentially take data that’s not covered by the law: non-PII that can include corporate IP, sensitive emails from the CEO, and other valuable proprietary information. Two, the attackers may not be interested in taking data at all, but rather in disruption: for example, deploying DoS attacks or destroying important system data or other non-PII.

Under the US’s HIPAA, GLBA, and state breach laws as well as the EU’s GDPR, neither of the two cases above — and that takes in a lot of territory — would trigger a notification to the appropriate government authority.

The problem is that data privacy and security laws focus, naturally, on the data instead of the information system as a whole. That doesn’t mean, however, that governments aren’t addressing this broader category of cybersecurity.

There’s not been nearly enough attention paid to the EU’s Network and Information Systems (NIS) Directive, the US’s (for now) voluntary Critical Infrastructure Security Framework, Canada’s cybersecurity initiatives, and other laws in major EU countries.

And that’s my motivation in writing this first in a series of posts on cybersecurity rules. These are important rules that organizations should be more aware of. Sometime soon, it won’t be good enough, legally speaking, to protect special classes of data. Companies will be required to protect entire IT systems and report to regulatory authorities when there have been actions to disrupt or disable the IT infrastructure.

Protecting the Cyber

The laws and guidelines that have evolved in this area are associated with safeguarding critical infrastructure – telecom, financial, medical, chemical, transportation. The reason is that cybercrime against the IT network of, say, Hoover Dam or the Federal Reserve should be treated differently than an attack against a dating web site.

Not that an attack against any IT system isn’t a serious and potentially costly act. But with critical infrastructure, where there isn’t an obvious financial motivation, we start entering the realm of cyber espionage or cyber disruption initiated by governments.

In other words, bank ATMs suddenly not dispensing cash, the cell phone network dropping calls, or – heaven help us! — Google replying with wrong and deceptive answers may be a sign of a cyberwar, or at least a cyber ambush.

A few months back, we wrote about an interview between Charlie Rose and John Carlin, the former Assistant Attorney General in the National Security Division of the Department of Justice. The transcript can be found here, and it’s worth going through it, or at least searching on the “attribution” keyword.

Essentially, Carlin tells us that US law enforcement is getting far better at learning who is behind cyberattacks. The Department of Justice is now publicly naming the attackers, and then prosecuting them. By the way, Carlin went after Iranian hackers accused of intrusions into banks and a small dam near New York City. Fortunately, the dam’s valves were still manually operated and not connected to the Internet.

Carlin believes there are important advantages in going public with a prosecution against named individuals. Carlin sees it as a way to deter future cyber incidents. As he puts it, “because if you are going to be able to deter, you’ve got to make sure the world knows we can figure out who did it.”

So it would make enormous sense to require companies to report cyberattacks to governmental agencies, who can then put the pieces together and formally take legal and other actions against the perps.

First Stop: EU’s NIS Directive

As with the Data Protection Directive for data privacy, which was adopted in 1995, the EU has again been way ahead of other countries in formalizing cyber reporting legislation. Its Network and Information Systems Directive was initially drafted in 2013 and was approved by the EU last July.

Since it is a directive, individual EU countries will have to transpose NIS into their own national laws. They will have a two-year transition period to get their houses in order, plus an additional six months to identify the companies providing essential services (see Annex II).

In Article 14, operators of essential services are required to take “appropriate and proportionate technical and organisational measures to manage the risks posed to the security of network and information systems.”  They are also required to report, without undue delay, significant incidents to a Computer Security Incident Response Team or CSIRT.

There’s separate and similar language in Article 16 covering digital service providers, which is the EU’s way of saying ecommerce, cloud computing, and search services.

CSIRTs are at the center of the NIS Directive. Besides collecting incident data, CSIRTs are also responsible for monitoring and analyzing threat activity at a national level, issuing alerts and warnings, and sharing their information and threat awareness with other CSIRTs.  (In the US, the closest equivalent is the Department of Homeland Security’s NCCIC.)

What is considered an incident in the NIS Directive?

It is any “event having an actual adverse effect on the security of network and information systems.”  Companies designated as providing essential services are given some wiggle room in what they have to report to a CSIRT. For an incident to be significant, and thus reportable, the company has to consider the number of users affected, the duration, and the geographical scope.

Essential digital service operators must also take into account the effect of their disruption on economic and “societal activities”.

Does this mean that a future attack against, say, Facebook in the EU, in which Messenger or status posting activity is disrupted, would have to be reported?

To this non-attorney blogger, it appears that Facebooking could be considered an important societal activity.

Yeah, there are vagaries in the NIS Directive, and it will require more guidance from the regulators.

In my next post in this series, I’ll take a closer look at cybersecurity rules due north of us for our Canadian neighbor.

Update: New York State Finalizes Cyber Rules for Financial Sector


When last we left New York State’s innovative cybercrime regulations, they were in a 45-day public commenting period. Let’s get caught up. The comments are now in. The rules were tweaked based on stakeholders’ feedback, and the regulations will begin a grace period starting March 1, 2017.

To save you the time, I did the heavy lifting and looked into the changes made by the regulators at the New York State Department of Financial Services (NYSDFS).

There are a few interesting ones to talk about. But before we get into them, let’s consider how important New York State — really New York City — is as a financial center.

Made in New York: Money!

To get a sense of what’s encompassed in the NYSDFS’s portfolio, I took a quick dip into their annual report.

For the insurance sector, they supervise almost 900 insurers with assets of $1.4 trillion that receive premiums of $361 billion. Under wholesale domestic and foreign banks — remember New York has a global reach — they monitor 144 institutions with assets of $2.2 trillion. And I won’t even get into community and regional banks, mortgage brokers, and pension funds.

In a way, the NYSDFS has the regulatory power usually associated with a small country’s government. And therefore the rules that New York makes regarding data security have an outsized influence.

One Rule Remains the Same

Back to the rules. First, let’s look at one key part that was not changed.

NYSDFS received objections from the commenters on their definition of cyber events. This is at the center of the New York law—detecting, responding, and recovering from these events—so it’s important to take a closer look at its meaning.

Under the rules, a cybersecurity event is “any act or attempt, successful or unsuccessful, to gain unauthorized access to, disrupt or misuse an Information System or information …”

Some of the commenters didn’t like the inclusion of “attempt” and “unsuccessful”. But the New York regulators held firm and kept the definition as is.

A cybersecurity event is a broader category than a data breach. For a data breach, there usually has to be data access and exposure or exfiltration. In New York State, though, access alone or an IT disruption, even an attempted or unsuccessful one, is considered an event.

As we’ve pointed out in our ransomware and the law cheat sheet, very few states in the US would classify a ransomware attack as a breach under their breach laws.

But in New York State, if ransomware (or a remote access trojan or other malware) was loaded on the victim’s server and perhaps abandoned or stopped by IT in mid-hack, it would indeed be a cybersecurity event.

Notification Worthy

This leads naturally to another rule, notification of a cybersecurity event to the New York State regulators, where the language was tightened.

The 72-hour time frame for reporting remains, but the clock starts ticking after a determination by the financial company that an event has occurred.

The financial companies were also given more wiggle room in the types of events that require notification: essentially the malware would need to “have a reasonable likelihood of materially harming any material part of the normal operation…”

That’s a mouthful.

In short: financial companies will notify the regulators at NYSDFS when the malware could seriously affect an operation that’s important to the company.

For example, malware that infects the digital console on the bank’s espresso machine is not notification worthy. But a key logger that lands in a bank’s foreign exchange area and is scooping up user passwords is very worthy.

The NYSDFS’s updated notification rule language, by the way, puts it more in line with other data security laws, including the EU’s General Data Protection Regulation (GDPR).

So would you have to notify the New York State regulator when malware infects a server but hasn’t necessarily completed its evil mission?

Getting back to the language of “attempt” and “unsuccessful” found in the definition of cybersecurity events, it would appear that you would but only if the malware lands on a server that’s important to the company’s operations — either because of the data it contains or its function.

State of Grace

The original regulation also said you had to appoint a Chief Information Security Officer (CISO) who’d be responsible for seeing this cybersecurity regulation is carried out. Another important task of the CISO is to annually report to the board on the state of the company’s cybersecurity program.

With pushback from industry, this language was changed so that you can designate an existing employee as a CISO — likely a CIO or other C-level.

One final point to make is that the grace period for compliance has been changed. For most of the rules, it’s still 180 days.

But for certain requirements – multifactor authentication and penetration testing — the grace period has been extended to 12 months, and for a few others – audit trails, data retention, and the CISO report to the board — it’s been pushed out to 18 months.

For more details on the changes, check this legal note from our attorney friends at Hogan Lovells.

EU GDPR Spotlight: Do You Have to Hire a DPO?


I suspect right about now that EU (and US) companies affected by the General Data Protection Regulation (GDPR) are starting to look more closely at their compliance project schedules. With enforcement set to begin in May 2018, the GDPR-era will shortly be upon us.

One of the many questions that have not been fully answered by this new law (and are still being worked out by the regulators) is under what circumstances a company needs to hire a data protection officer (DPO).

There are three scenarios mentioned in the GDPR (see article 37) where a DPO is mandatory: the core activities involve the processing of personal data by a public authority; the core activities involve “regular and systematic monitoring of data subjects on a large scale”; or the core activities require large-scale processing of special data—for example, biometric, genetic, geo-location, and more.

Companies falling into the second category, which I think covers the largest share, are probably pondering what is meant by “regular and systematic monitoring” and “large-scale”.

Even as a non-legal person, I noticed these provisions were a bit foggy.

A few months ago, I asked GDPR legal specialist Bret Cohen at Hogan Lovells what the heck was meant.

Cohen’s answer was that, well, we’ll have to wait for more guidance from the regulators.

And Thus Spoke the Article 29 Working Party

No, the Article 29 Working Party (WP29) is not the name of a new Netflix series. Under the GDPR, it will become a kind of super data protection authority (DPA), providing advice and ensuring consistency among all the national DPAs.

Anyway, last month the WP29 published guidance addressing the confusing criteria for DPOs.

And after reading it, I suppose, I’m still a little confused.

For those of us who were following the GDPR and watching how this legal sausage was made, the DPO was one of the more contentious provisions.

There were differences of opinion on whether a DPO should be mandatory or optional and on the threshold requirements for having one in the first place. Some argued it should be based on the number of employees of a company (250); others, on the number of records of personal data processed (500).

The parties — EU Commission, Parliament, and Council — finally settled on DPOs being mandatory but they removed specific numbers. And so we’re left with this vague language.

The new guidance provides some clarification.

According to the WP29, “regular and systematic” means, in human-speak, a pre-arranged plan that’s carried out repeatedly over time.

So far, so good.

What does “large scale” mean?

For me, this is the more interesting question. The WP29 said the following factors need to be taken into consideration:

  • The number of data subjects concerned – either as a specific number or as a proportion of the relevant population
  • The volume of data and/or the range of different data items being processed
  • The duration, or permanence, of the data processing activity
  • The geographical extent of the processing activity

We’re All Monitoring Web Behavior

You can kind of see what the law makers were grappling with in the list of factors. But it’s still a little muddy.

Obviously, an insurance company, bank, or retailer that collects personal data from millions of customers would require a DPO.

However, a small web start-up with a few employees can also be engaged in large-scale monitoring.

How?

Suppose their free web app is being accessed by tens or hundreds of thousands of visitors per month. The startup’s site may not be collecting any personal data, or only minimal personal data, beyond tracking browser activity with cookies or by other means. I use plenty of freebie sites this way — especially news sites — and the advertising I see reflects their knowledge of me.

But according to the guidance and other language in the GDPR, monitoring of web behavior would be a type of “monitoring” that’s mentioned in the DPO provisions.
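To make that monitoring concrete: assigning each browser a persistent identifier cookie is all it takes. Here’s a hypothetical stdlib-only Python sketch (the `visitor_id` cookie name is my own illustration, not any particular vendor’s tracker):

```python
from http import cookies
import uuid

def track_visitor(request_cookie_header):
    """Return (visitor_id, set_cookie_value).

    If the browser already carries our visitor_id cookie, reuse it and
    send no Set-Cookie header (set_cookie_value is None). Otherwise mint
    a fresh random ID and ask the browser to keep it for a year --
    that persistence is what makes the tracking "regular and systematic".
    """
    jar = cookies.SimpleCookie(request_cookie_header or "")
    if "visitor_id" in jar:
        return jar["visitor_id"].value, None

    vid = uuid.uuid4().hex
    out = cookies.SimpleCookie()
    out["visitor_id"] = vid
    out["visitor_id"]["max-age"] = 60 * 60 * 24 * 365  # one year
    # output(header="") yields just the cookie string for a Set-Cookie header
    return vid, out.output(header="").strip()
```

A first visit mints an ID and returns a `Set-Cookie` value; every later request from that browser maps back to the same ID, so page views accumulate into a profile even if the site never asks for a name or email.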

I could be mistaken but it seems to me that any company with a website that receives a reasonable amount of traffic would be required to have a DPO.  And this would include lots of B2Bs that don’t necessarily have a large customer base compared to a consumer company. For validation of this view, check out this legal post.

It’s a confusing point that I’m hoping to get resolved by our attorney friends.

In the meantime, more explanation on this somewhat wonkish, but important topic, can be found here by the brilliant people over at the IAPP.

What We Learned From Talking to Data Security Experts


Since we’ve been working on the blog, Cindy and I have chatted with security professionals across many different areas — pen testers, attorneys, CDOs, privacy advocates, computer scientists, and even a guru. With 2016 coming to an end and the state of security looking more unsettled than ever, we decided it was a good time to take stock of the collective wisdom we’ve absorbed from these pros.

The Theory of Everything

A good place to begin our wisdom journey is the Internet of Things (IoT). It’s where the public directly experiences all the issues related to privacy and security.

We had a chance to talk to IoT pen tester Ken Munro earlier this year, and his comments on everything from wireless coffee pots and doorbells to cameras really resonated with us:

“You’re making a big step there, which is assuming that the manufacturer gave any thought to an attack from a hacker at all. I think that’s one of the biggest issues right now is there are a lot of manufacturers here and they’re rushing new product to market …”

IoT consumer devices are not, cough, based on Privacy by Design (PbD) principles.

And over the last few months, consumers learned the hard way that these gadgets were susceptible to simple attacks that exploited backdoors, default passwords, and even non-existent authentication.

Additional help to the hackers was provided by public-facing router ports left open during device installation, without any warning to the poor user, and unsigned firmware that left their devices open to complete takeover.

As a result, IoT is where everything wrong with data security seems to show up. However, there are easy-to-implement lessons that we can all put into practice.

Password Power!

Is security always about passwords? No, of course not, but poor passwords or password defaults that were never reset seem to show up as a root cause in many breaches.

The security experts we’ve spoken to have often brought up, without prompting from us, the sorry state of passwords. One of them, Per Thorsheim, who is in fact a password expert himself, reminded us that one answer to our bad password habits is two-factor authentication (TFA):

“From a security perspective, adding this two-factor authentication is everything. It increases security in such a way that in some cases even if I told you my password for my Facebook account, as an example, well because I have two-factor authentication, you won’t be able to log in. As soon as you type in my user name and password, I will be receiving a code by SMS from Facebook on my phone, which you don’t have access to. This is really good.”

We agree with Thorsheim that humans are generally not good at this password thing, and so TFA and biometric authentication will certainly be a part of our password future.
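For the curious, the one-time codes behind most authenticator apps (and some SMS systems) come from the standard HOTP/TOTP algorithms of RFC 4226 and RFC 6238: an HMAC of a shared secret over a counter or the current 30-second window. A minimal stdlib-only Python sketch shows how little is involved:

```python
import hashlib
import hmac
import struct
import time

def hotp(key, counter, digits=6):
    """RFC 4226: HMAC-SHA1 over a 64-bit counter, dynamically truncated."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # low nibble of last byte picks the slice
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key, digits=6, time_step=30, now=None):
    """RFC 6238: HOTP where the counter is the current 30-second window."""
    t = int(time.time()) if now is None else now
    return hotp(key, t // time_step, digits)

# RFC 6238 test vector: at t=59s, the 8-digit SHA-1 code is 94287082
assert totp(b"12345678901234567890", digits=8, now=59) == "94287082"
```

Because the shared secret and the clock are the only inputs, the server can independently compute the same digits to verify the code you type in; an attacker with just your password still can’t produce them.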

In the meantime, for those of us who still cling to just plain-old passwords, Professor Justin Cappos told us a while back that there’s a simple way to come up with better passwords:

“If you’re trying to generate passwords as a human, there are tricks you can do where you pick four dictionary words at random and then create a story where the words interrelate. It’s called the “correct horse battery staple” method! “

Correct-horse-battery-staple is just a way of using a story as a memory trick or mnemonic. It’s an old technique, but one that helps create crack-resistant passwords.
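Picking the four words can even be automated. A minimal sketch (the word list below is a toy placeholder; real generators draw from lists of several thousand words, such as the EFF Diceware lists, so that four random words give roughly 50 bits of entropy):

```python
import secrets

# Toy word list for illustration only; a real list has thousands of entries.
WORDS = ["correct", "horse", "battery", "staple", "orbit", "maple",
         "pencil", "canyon", "velvet", "thunder", "ribbon", "glacier"]

def passphrase(n_words=4, words=WORDS):
    # secrets.choice draws from a cryptographically secure RNG,
    # unlike random.choice, which is predictable.
    return "-".join(secrets.choice(words) for _ in range(n_words))

print(passphrase())
```

Then you invent the story connecting the words, rather than the other way around; the randomness comes from the draw, and the memorability from you.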

One takeaway from these experts: change your home router admin passwords now (and use horse-battery-staple). Corporate IT admins should also take a good, hard look at their own passwords and avoid aiding and abetting hackers.

Cultivate (Privacy and Security) Awareness

Enabling TFA on your online accounts and generating better passwords goes a very long way to improving your security profile.

But we also learned that you need to step back and cultivate a certain level of privacy awareness in your online transactions.

We learned from attorney and privacy guru Alexandra Ross about the benefits of data minimization, both for the companies that collect data and the consumers who reveal it:

“One key thing is to stop, take a moment, and be mindful of what’s going on. What data am I being asked to submit when I sign up for a social media service?  And question why it’s being asked.

It’s worth the effort to try to read the privacy policies, or read consumer reviews of the app or online service.”

And

“If you’re speaking to the marketing team at a technology company—yeah, the default often is let’s collect everything. In other words, let’s have this very expansive user profile so that every base is covered and we have all these great data points.

But if you explain, or ask questions … then you can drill down to learn what’s really necessary for the data collection.”

In a similar vein, data scientist Kaiser Fung pointed out that often there isn’t much of a reason behind some of the data collection in the first place:

“It’s not just the volume of data, but the fact that the data today is often collected without any design or plan in mind. Often times, people collecting the data are really divorced from any kind of business problem.”

Listen up IT and marketing people: think about what you’re doing before you submit your next contact form!

Ross and other PbD advocates preach the doctrine of data minimization: the less data you have, the lower your security risk is when there’s an attack.

As our privacy guru, Ross reminded us that there’s still a lot of data about us spread out in corporate data systems. Scott “Hacked Again” Schober, another security pro we chatted with, makes the same point based on his personal experiences:

“I was at an event speaking … and was asked if I’d be willing to see how easy it is to perform identity theft and compromise information on myself. I was a little reluctant but I said ok, everything else is out there already, and I know how easy it is to get somebody’s information. So I was the guinea pig. It was Kevin Mitnick, the world’s most famous hacker, who performed the theft. Within 30 seconds and at the cost of $1, he pulled up my social security number.”

There’s nothing inherently wrong with companies storing personal information about us. The larger point is to be savvy about what you’re being asked to provide and take into account that corporate data breaches are a fact of life.

Credit cards can be replaced and passwords changed, but details about our personal preferences (food, movies, reading habits) and our social security numbers are forever, and they’re a great source of raw material for hackers to use in social engineering attacks.

Data is Valuable

We’ve talked to attorneys and data scientists, but we had the chance to talk to both in the form of Bennett Borden. His bio is quite interesting: in addition to being a litigator at Drinker Biddle, he’s also a data scientist. Borden has written law journal articles about the application of machine learning and document analysis to e-discovery and other legal transactions.

Borden explained how as employees we all leave a digital trail in the form of emails and documents, which can be quite revealing. He pointed out that this information can be useful when lawyers are trying to work out a fair value for a company that’s being purchased.

He was called in to do a data analysis for a client and was able to show that internal discussions indicated the asking price for the company was too high:

“We got millions of dollars back on that purchase price, and we’ve been able to do that over and over again now because we are able to get at these answers much more quickly in electronic data.”

So information is valuable in a strictly business sense. At Varonis, this is not news to us, but it’s still powerful to hear it from someone who is immersed in corporate content as part of his job.

To summarize, as consumers and as corporate citizens, we should all treat this valuable substance more carefully: don’t give it away easily, and protect it when it’s in your possession.

More Than Just a Good Idea

Privacy by Design came up in a few of our discussions with experts, and one of its principles, privacy as a default setting, is a hard one for companies to accept, even though PbD says that privacy is not a zero-sum game: you can have tough privacy controls and profits.

In any case, for companies that do business in the EU, PbD is not just a good idea but in fact it will be the law in 2018. The concept is explicitly spelled out in the General Data Protection Regulation’s (GDPR) article 25, “Data protection by design and by default”.

We’ve been writing about the GDPR and its many implications for the last two years. But one somewhat overlooked consequence is that the GDPR will apply to companies outside of the EU.

We spoke with data security compliance expert Sheila FitzPatrick, who really emphasized this point:

“The other part of GDPR that is quite different–and it’s one of the first times that this idea will be put in place– is that it doesn’t just apply to companies that have operations within the EU. Any company regardless of where they are located and regardless of whether or not they have a presence in the EU, if they have access to the personal data of any EU citizen, they will have to comply with the regulations under the GDPR. That’s a significant change.”

This legal idea is sometimes referred to as extraterritoriality. And US e-commerce and web service companies in particular will find themselves under the GDPR when EU citizens interact with them. IT best practices that experts like to talk about as things you should do are becoming legal requirements for them.  It’s not just a good idea!

 

Our final advice for 2016: read the writing on the wall and get yourself in position to align with PbD ideas on data minimization, consumer consent, and data protection.

Ransomware: Legal Cheat Sheet for Breach Notification


You respond to a ransomware attack in many of the same ways you would to any other cyber attack. In short: have plans in place to analyze the malware, contain the damage, restore operations if need be, and notify any regulatory or enforcement authorities.

And your legal, IT, and communications teams should be working together in all your response efforts. Legal, meet IT; IT, meet legal.

So far so good. But ransomware is a different animal.

Unlike in just about any other cyber attack, the hackers announce what they’re doing: it’s called a ransom note.

The discovery process therefore happens far more quickly, not months later as is often the case.

And the hackers’ goal is to leave the data on site, encrypted of course, so there’s no immediate concern of credit card or account theft.

I suppose those are some minor pluses to ransomware. However, this raises a big legal question.

Since the data is just accessed, but not exposed to outsiders, does this mean that the victim won’t have to notify authorities and consumers as required by the few US data laws and regulations that have breach notification language?

We thought it was an interesting question as well.

And that’s why we wrote a white paper on this important (if somewhat obscure) legal topic.

The paper provides essential background on the data security laws that many US companies will have to deal with: the Health Insurance Portability and Accountability Act (HIPAA), the Gramm-Leach-Bliley Act (GLBA), state laws, and the EU’s own data laws.

It’s a great read for those directly involved in a breach response. But to spare the casual IT person from all the legalese,  we’ve mercifully put together this cheat sheet.

Breach Notification Rules for Ransomware

The real issue to investigate is whether unauthorized access alone triggers a notification to customers. In effect, that is what ransomware is doing – accessing your PII without your permission.

We present for your ransomware breach response edification the following:

  1. Healthcare – HIPAA’s Breach Notification Rule requires covered entities (hospitals, insurers) to notify customers and the Department of Health and Human Services (HHS) when there’s been unauthorized access to protected health information (PHI). This is the strictest federal consumer data law when it comes to a ransomware breach response. There are, though, some exceptions, so read the paper to learn what they are!
  2. Consumer banks and loan companies – Under GLBA, the Federal Trade Commission (FTC) enforces data protection rules for consumer banking and finance through the Safeguards Rule. According to the FTC, ransomware (or any other malware attack) on your favorite bank or lender would not require a notification. They recommend that these financial companies alert customers, but it’s not an explicit obligation.
  3. Brokers, dealers, investment advisors – The Securities and Exchange Commission (SEC) has regulatory authority for these types of investment firms. Under GLBA, the SEC came up with its own rule, called Regulation S-P, which does call for a breach response program. But there’s no explicit breach notification requirement in the program. In other words, it’s something you should do, but you don’t have to.
  4. Investment banks, national banks, private bankers – With these remaining investment companies, the Federal Reserve and various Treasury Department agencies jointly came up with their own rules. In this case, these companies have “an affirmative duty” to protect against unauthorized use or access, and notification is part of that duty. In the fine print it says, though, that there has to be a determination of “misuse” of data. Whether ransomware’s encryption is misuse of the data is unclear. In any case, the rules spell out what the notification must contain — a description of the incident and the data that was accessed.
  5. US state laws – Currently, there are 48 states that have consumer breach notification laws. However, only two states, New Jersey and Connecticut, require a breach notification on access alone, thereby covering a ransomware attack. But there’s additional fine print that may allow companies to avoid reporting the breach to affected consumers in their state.
  6. EU data laws – Under the Data Protection Directive (DPD), there isn’t a breach notification requirement. Some countries such as Germany, though, have added it in their national data laws. (And ISPs and telecoms under the EU’s e-Privacy Directive already have their own breach reporting rule.) But the new EU General Data Protection Regulation, which will go into effect in 2018, does have a 72-hour rule requiring notification to local data protection authorities (DPAs) and consumers when “personal data” is accessed. However, a harm-based threshold is applied – the breach would have to “result in a risk to the rights and freedoms” of consumers. Notification for a ransomware attack would be very dependent on specific circumstances, and we’ll likely have to wait for more clarification from the regulators.
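The cheat sheet above can be condensed into a quick lookup table. Here's a Python sketch of it; the regime labels and the three-valued answers are our own shorthand for the summary above, not categories from any statute:

```python
# Does unauthorized access alone (e.g., ransomware encrypting data in place)
# trigger a consumer breach notification? True/False per the cheat sheet;
# None where the answer hinges on the facts ("misuse", risk of harm).
NOTIFY_ON_ACCESS_ALONE = {
    "HIPAA (healthcare)": True,                # strictest; some exceptions apply
    "GLBA / FTC Safeguards Rule": False,       # notification recommended, not required
    "SEC Regulation S-P": False,               # response program, no explicit notice
    "Federal Reserve / Treasury rules": None,  # depends on a finding of "misuse"
    "NJ and CT state laws": True,              # the two access-alone states
    "Other US state laws": False,              # typically harm-based thresholds
    "EU GDPR (May 2018)": None,                # risk-based; awaiting regulator guidance
}

def must_plan_to_notify(regime: str) -> bool:
    # Conservative reading: treat "it depends" (None) as "plan to notify".
    return NOTIFY_ON_ACCESS_ALONE[regime] is not False
```

The conservative default in `must_plan_to_notify` mirrors our advice later in the post: when the rule is ambiguous, prepare to report.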

That’s the cheat sheet. However, the white paper provides a lot more context, and also goes into a few of the subtleties, particularly involving HIPAA.

Our view?

Always report a ransomware breach to the appropriate agencies and law-enforcement authorities.

For IT people who want to impress their peers in the legal department, and for legal eagles who need some quick background on ransomware, this white paper covers it all. Download it today!

[Podcast] More Sheila FitzPatrick: Data Privacy and EU Law


In the next part of our discussion, data privacy attorney Sheila FitzPatrick gets into the weeds and talks to us about her work in setting up Binding Corporate Rules (BCRs) for multinational companies. These are actually the toughest rules of the road for data privacy and security.

What are BCRs?

They allow companies to internally transfer EU personal data to any of their locations in the world.  The BCR agreement has to get approval from a lead national data protection authority (DPA) in the EU. FitzPatrick calls them a gold standard in data compliance — they’re comprehensive rules with a clear complaint process for data subjects.

Another wonky area of EU compliance law she has worked on is agreements for external transfers of data between companies and third-party data processors. Note: it gets even trickier when dealing with cloud providers.

This is a fascinating discussion from a working data privacy lawyer.

And it’s great background for IT managers who need to keep up with the lawyerly jargon while working with privacy and legal officers in their company!



New York State Proposes Real-World Cybersecurity Regulations for Banks


The EU General Data Protection Regulation (GDPR) has raised the bar for what we expect from a national data security and privacy law. The US doesn’t really have anything close (outside of HIPAA for medical PII). So it’s interesting to see some movement at the state level. Let’s now give a shout out to New York regulators for their proposed cybersecurity rules for financial companies.

Go Empire State!

Like the GDPR, the New York  experiment has rules covering basic principles of data security, risk assessments and documentation of security policies, breach notifications, and designating someone to be responsible for the program.

Unlike the GDPR, it’s a far more specific set of requirements that read a little like PCI DSS and other low-level data security standards.

For example, there’s a section saying that covered entities – in this case insurers, banks, and other financial companies — have to address, at a minimum, 13 different areas, including data governance and classification, access controls and identity management, customer data privacy, incident response, risk assessment, and monitoring.

There are further requirements for multi-factor authentication, limits on data retention, audit trails of access to data, and restricting data access to relevant users. There’s even a rule covering annual penetration testing and quarterly vulnerability assessments. Nice work New York!

And like the GDPR, if there’s an exposure, the company has to notify regulators within 72 hours!

Chief Information Security Officer and Other Professionals

Adding more teeth to this, the regulators also say that covered financial companies will have to designate a chief information security officer (CISO) who’ll be responsible for compliance.

The CISO is called on twice yearly to produce a report that includes an assessment of the overall security of the company, a description of any cyber risks, and a summary of any material cybersecurity events.

The report, by the way, will have to be filed with the state’s Superintendent of Financial Services.

In addition, the board of directors will annually review the company’s program and provide a certification to regulators. This is another way of saying the board can be held liable if there are later proved to be misrepresentations or omissions.

There are further provisions that call for the “company to employ cybersecurity personnel sufficient to manage” the cyber risk, require cyber personnel “to attend regular cybersecurity update and training sessions” and “take steps to stay abreast of changing cybersecurity threats.”

How about that!

It will be a legal requirement in New York for IT security staff to improve their cyber skills and stay current.

It’s a good example of how the security catastrophes of the last few years are slowly forcing IT staff to act more like professionals — attorneys, accountants — with serious legal obligations. That’s a good thing.

Still Evolving

The proposed regulations are making their way through the approval process – you can read the rules here. It’s currently in a 45-day public commenting period.

If the proposal is not updated after the comments are received, the regulations will go into effect on January 1, 2017 with a 180-day grace period. The first cyber certification will be due by January 15, 2018, so companies have a year after the regulations are approved to get their acts together.

Overall, this is a step in the right direction, though I have a few quibbles of my own. One that I’ll mention is that the New York rules have a very broad definition of what has to be protected — essentially anything that’s not public. I’d be a little happier if they included standard identifiers — in other words, PII — as well.

Hey New York banks! Get a head start on the pending cyber rules with a free risk assessment.

If the GDPR Were in Effect, Yahoo Would Have to Write a Large Check


Meanwhile, back in the EU, two data protection authorities have announced they’ll be looking into Yahoo’s breach-apocalypse. Calling the scale of the attack “staggering”, the UK’s Information Commissioner’s Office (ICO) has signaled it will be conducting an investigation. By the way, the ICO rarely comments this way on an ongoing security event.

In Ireland, where Yahoo has its European HQ, the Data Protection Commissioner is asking questions as well.

And here in the US, the FBI is getting involved because the Yahoo attack may involve a state actor, possibly Russia.

Under the Current Laws

One of the (many) stunning things about this incident is that Yahoo knew about the breach earlier this summer when it learned that its users’ data was for sale on the darknet.

And the stolen data appears to have come from a hack that occurred way back in 2014.

It’s an obvious breach notification violation.

Or not!

In the US, the only federal notification law with any teeth is for medical PII held by “covered entities”— insurance companies, hospitals, and providers. In other words, HIPAA.

So there’s little that can be done at the US federal level against the extremely long Yahoo delay in reporting the breach.

In the states, there are notification laws — currently 47 states have them — that would kick in, but the thresholds are typically based on harm caused to consumers, which may be difficult to prove in this incident.

The notable exception, as always, is California, where Yahoo has its corporate offices in Sunnyvale. California is one of the few states that requires notification on the mere discovery of unauthorized access.

So Yahoo can expect a visit from the California attorney general in its future.

One might think that the EU would be tougher on breaches than the US.

But under the current EU Data Protection Directive (DPD), there’s no breach notification requirement. That was one of the motivations for the new General Data Protection Regulation that will go into effect in 2018.

If You Can’t Keep Your Head About You During A Breach

Yahoo may not be completely out of the legal woods in the EU: the DPD does require appropriate security measures to be taken — see Article 17. So in theory an enforcement action could be launched based on Yahoo’s lax data protection.

But as a US company with its principal collection servers outside the EU, Yahoo may fall beyond the DPD’s reach. This is a wonky issue; if you want to learn whether the current DPD rules cover non-EU businesses, read this post on the legal analysis.

And this all leads to why the EU rewrote the current data laws for the GDPR, which covers breach notification and “extra-territoriality” — that is controllers outside the EU — as well as putting in place eye-popping fines.

Yeah, you should be reading our EU data regulations white paper to get the big picture.

If the GDPR were currently the law — the GDPR will go in effect in May 2018 — and the company hadn’t reported the exposure of 500 million user records to a DPA within 72 hours, then it would face massive fines.

How massive?

Doing the GDPR Breach Math

The fine for violating GDPR’s Article 33 requirement to notify a DPA could reach as high as 2% of global revenue.

Yahoo’s revenue numbers have been landing north of $4.5 billion in recent years.

In my make-believe scenario, Yahoo could be paying $90 million or more to the EU.

And yes I’m aware that Verizon with over $130 billion in revenue is in the process of buying Yahoo.

Supposing the acquisition had already gone through, then Verizon would be on the hook for 2% of roughly $134 billion, or about $2.7 billion.
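To make the back-of-the-envelope math explicit, here's the 2% tier worked out in Python. The revenue figures are the rough approximations from the text, not audited numbers:

```python
def gdpr_fine_ceiling(global_revenue: float, rate: float = 0.02) -> float:
    """Upper bound of a fine at the 2%-of-global-revenue tier."""
    return global_revenue * rate

# Yahoo alone: ~$4.5 billion in annual revenue
yahoo_max = gdpr_fine_ceiling(4.5e9)     # ~ $90 million

# Post-acquisition Verizon: ~$134 billion
verizon_max = gdpr_fine_ceiling(134e9)   # ~ $2.68 billion
```

Note the scale jump: once the acquirer's worldwide revenue becomes the base, the ceiling grows by more than an order of magnitude.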

There’s a lesson here. Large US and other multinational companies with a significant worldwide web presence should be planning now for the possibility of an epic Yahoo-style breach in their post-2018 future.

Interview with Attorneys Bret Cohen and Sian Rudgard, Hogan Lovells’ GDPR Experts


We are very thankful that Bret Cohen and Sian Rudgard took some time out of their busy schedules at the international law firm of Hogan Lovells to answer this humble blogger’s questions on the EU General Data Protection Regulation (GDPR). Thanks Bret and Sian!

Bret writes regularly on GDPR for HL’s Chronicle of Data Protection blog, one of our favorite resources. Sian had worked at the ICO, the UK’s data protection authority, and helped draft the binding corporate rules (BCR) governing internal transfers of personal data within companies for the EU’s Article 29 Working Party.

So we’re in good hands with these two. Both are now part of HL’s Privacy and Cybersecurity practice. The interview (via email) is heavily based on questions our own customers have been asking us. We’re very excited to share with you their legal expertise of the GDPR.

Inside Out Security:  When exactly is a data controller required to conduct a data protection impact assessment (“DPIA”)?  Must the controller always undertake a DPIA for new uses of certain types of data (e.g., biometrics, facial images)?

Hogan Lovells: The DPIA requirement is linked to processing “likely to result in a high risk for the rights and freedoms of natural persons,” taking into account “the nature, scope, context and purposes of the processing.”  This is a fact-specific standard, and one therefore that is likely to be interpreted differently by different data protection authorities (“DPAs”), although it is generally understood to refer to significant detrimental consequences for individuals.

The GDPR requires a DPIA in three specific circumstances:

  • Where the processing involves a “systematic and extensive” evaluation of an individual in order to make an automated decision about the individual (e.g., profiling) that has a legal effect on that individual (e.g., denial of benefits).
  • The processing “on a large scale” of sensitive categories of personal data, specifically (a) personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, (b) genetic data, biometric data for the purpose of uniquely identifying an individual, data concerning health, or data concerning an individual’s sex life or sexual orientation, and (c) personal data pertaining to criminal convictions and offenses.
  • A systematic monitoring of a publicly accessible area on a large scale (e.g., through CCTV).

The DPAs have promised to provide guidance on this aspect of the Regulation before the end of 2016. Individual DPAs also have authority to publish guidance on the kinds of processing operations that do and do not require a DPIA, and these guidance documents might differ from country to country.

IOS: For what types of data and processing would data controllers be required to engage in a “prior consultation” with a DPA?

HL: Data controllers are required to consult with a DPA prior to engaging in data processing where a DPIA indicates that the processing “would result in a high risk” in the absence of measures taken by the controller to mitigate the risk. “High risk” is not defined, but it is likely to carry a similar meaning to the threshold DPIA requirement described above: that is, a significant detrimental consequence for individuals. The requirement to engage in a prior consultation will also likely be influenced by DPA guidance on the issue, and we can expect further guidance on this point before the end of this year.

 

IOS: What does a DPIA under the GDPR look like?

HL: A DPIA is not too complicated. A suggested approach is to review what the proposed data processing activities involve against the relevant requirements of the Regulation (transparency, legal basis for processing, data retention, data security, etc.). This can be done with a standard DPIA template: filling in these fields allows a final assessment of risk by an assessor who, based on the analysis provided, will recommend the steps to take to address any risks identified.
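The template approach described here could be as simple as a structured record. Below is a minimal Python sketch; the field names are our own hypothetical mapping to the Regulation's requirements, not an official form:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DPIATemplate:
    # Hypothetical fields keyed to the GDPR requirements mentioned above
    processing_description: str
    transparency_measures: str
    legal_basis: str
    retention_policy: str
    security_measures: str
    risks_identified: List[str] = field(default_factory=list)
    mitigations: List[str] = field(default_factory=list)

    def residual_high_risk(self) -> bool:
        # If any identified risk lacks a recommended mitigation, the assessor
        # flags it; under the GDPR that residual high risk is what would
        # trigger prior consultation with a DPA.
        return len(self.mitigations) < len(self.risks_identified)
```

A completed template doubles as the "Accountability" paper trail: it documents the analysis an organization could show a DPA if asked.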

 

IOS: A lot of Varonis customers already follow data security standards (principally ISO 27001). Articles 42 and 43 of the GDPR seem to suggest outside, accredited certification and other bodies will have the power to issue a GDPR certification. Does this mean that existing accreditations will be sufficient to comply with the GDPR standard?

HL: Under Articles 42 and 43, DPAs can approve certifications issued by certain certification bodies as creating the presumption of compliance with various parts of the GDPR.  However, the relevant DPA or the EU-wide group of DPAs, the European Data Protection Board (currently known as the Article 29 Working Party), would have to approve a particular certification before it can be deemed to be sufficient to comply with the GDPR standard.

The Article 29 Working Party has indicated that it intends to provide further guidance on this topic before the end of 2016.

 

IOS: The GDPR requires notification of data breaches within 72 hours of discovery, including “the measures taken or proposed to be taken by the controller to address the personal data breach, including, where appropriate, measures to mitigate its potential adverse effects.”  What types of “measures to mitigate” the breach’s “potential adverse effects” will be required?

HL: The “adverse effects” mentioned here reference potential adverse effects to the individual. There are no hard and fast rules as of yet, but this standard encourages companies to provide mitigating measures as may be appropriate in a given situation to help protect individuals. This can include information about the offering of credit monitoring services, information about the offering of identity theft insurance, or steps taken to confirm the deletion of the personal data by unauthorized third parties. It is also possible that some DPAs may request information on technical and other remedial security measures taken by the company. The specific requirements are likely to be borne out through additional regulatory guidance and practice.

 

IOS: When a data subject requests erasure of personal data, does that mean that data must be deleted everywhere the personal data is located (including in emails, memos, spreadsheets, etc.)?

HL: The right to erasure—also known as the “right to be forgotten”— was one of the more controversial aspects of the GDPR when it was first published, not least because the practical limits on a controller’s obligation to delete personal data were unclear.  The right to erasure is not unlimited and an organization is only required to erase the data when one of the grounds specified in the Regulation apply. These include that the personal data is no longer needed for its original purpose; the individual withdraws consent, or objects to the processing; the data must be erased in order to comply with a legal obligation to which the controller is subject; the data has been collected in relation to the offering of information society services to children; or the processing is unlawful.

There are exemptions to the erasure right, including where an individual objects to the processing, but the organization can establish an overriding legitimate ground to continue the processing; or where the individual withdraws consent to the processing, and the organization has another basis on which to rely to continue the processing. Other exemptions include where processing is necessary for exercising a right of freedom of information or expression, for compliance with a legal obligation, for reasons of public interest in relation to health care, or for exercising or defending legal claims.

In theory, this means that an organization should take reasonable steps to delete personal data subject to a valid erasure request wherever it resides, although we recognize that there may be practical limitations on the ability of an organization to delete certain information. The DPAs do have the ability under the GDPR to introduce further exemptions to this provision but we do not know yet what these will look like.

Organizations do have room to put forward arguments that they have overriding legitimate grounds to continue processing personal data in certain circumstances. Where consent has been withdrawn in many cases it is also likely that there will be another basis on which organizations can continue to process at least some of the data (e.g., legitimate business interests). Organizations should document the steps they take to comply (or choose not to comply) with erasure requests, to justify the reasonableness of those steps if pressed by a DPA.

Where a data controller has made personal data public (e.g., by publishing it on a website) and receives a valid erasure request for that personal data, the GDPR requires the controller to, “taking account of available technology and the cost of implementation,” take “reasonable steps” to inform other third-party controllers who have access to the personal data of the erasure request.

This is an area on which we can expect further guidance from the DPAs, although it is not in the list of first wave guidance that we are expecting from the Article 29 Working Party this year.

 

IOS: An organization must appoint a data protection officer (“DPO”) if, among other things, “the core activities” of the organization require “regular and systematic monitoring of data subjects on a large scale.”  Many Varonis customers are in the B2B space, where they do not directly market to consumers. Their customer lists are perhaps in the tens of thousands of recipients up to the lower six-figure range. First, does the GDPR apply to personal data collected from individuals in a B2B context? And second, when does data processing become sufficiently “large scale” to require the appointment of a DPO?

HL: Yes, the GDPR applies to personal data collected from individuals in a B2B context (e.g., business contacts).  The GDPR’s DPO requirement, however, is not invoked through the maintenance of customer databases.  The DPO requirement is triggered when the core activities of an organization involve regular and systematic monitoring of data subjects on a large scale, or the core activities consist of large scale processing of special categories of data (which includes data relating to health, sex life or sexual orientation, racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, or biometric or genetic data).

“Monitoring” requires an ongoing tracking of the behaviors, personal characteristics, or movements of individuals, such that the controller can ascertain additional details about those individuals that it would not have known through the discrete collection of information.

Therefore, from what we understand of Varonis’ customers’ activities, it is unlikely that a DPO will be required, although this is another area on which we can expect to see guidance from the DPAs, particularly in the European Member States where having a DPO is an existing requirement (such as Germany).

Whether or not a company is required to appoint a DPO, if the company will be subject to the GDPR, it will still need to be able to comply with the “Accountability” record-keeping requirements of the Regulation and demonstrate how it meets the required standards. This will involve designating a responsible person or team to put in place and maintain appropriate policies and procedures, including data privacy training programs.

EU GDPR Spotlight: 72-Hour Breach Notification Rule


One of the biggest and most controversial changes in the EU General Data Protection Regulation (GDPR) is the requirement for companies to report breaches of consumer personal data. Fortunately, we recently had the chance to talk with an expert on GDPR compliance to find out some of the subtler details.

“Likely to Affect”

The first key thing to keep in mind is that there are two different thresholds to apply in a GDPR breach: one for notifying customers, and the other for alerting the Data Protection Authority (DPA).

If the personal data that has been exposed is “likely to affect” a consumer, then they will need to be notified. This is a fairly low bar to reach.

According to the compliance attorney we spoke to, any personal data identifiers – say, email addresses, online account IDs, and possibly IP addresses — could easily pass the likely-to-affect test.

In addition, if the breached personal data contains more monetizable information, such as bank account numbers or other financial identifiers, then you can say the breach is “likely to harm” the individual. In this situation, both the consumer and the DPA will have to be notified.

Breach Response: Not Just IT

The notification sent to the DPA in the likely-to-harm case also must include detailed information about the incident.

Besides describing the nature of the breach, the notification has to mention the types of data, the number of individuals, and the number of records exposed.

The company (or data controller in EU-speak) then needs to describe any likely consequences of the breach as well as any mitigation efforts to be taken.

The notification to the DPA must be made within 72 hours or “a reasoned justification” given in cases where that window can’t be met.
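Put together, the DPA notification described above amounts to a structured report plus a clock. Here's a minimal Python sketch; every field value and timestamp below is hypothetical, for illustration only:

```python
from datetime import datetime, timedelta

discovered = datetime(2018, 6, 1, 9, 0)          # hypothetical discovery time
dpa_deadline = discovered + timedelta(hours=72)  # GDPR's 72-hour window

# The contents the GDPR expects in the DPA notification (illustrative values)
dpa_notification = {
    "nature_of_breach": "ransomware encrypted customer file shares",
    "data_categories": ["email addresses", "online account IDs"],
    "individuals_affected": 12000,
    "records_exposed": 45000,
    "likely_consequences": "phishing risk against exposed addresses",
    "mitigation_measures": "credentials reset; systems restored from backup",
}

def needs_reasoned_justification(sent_at: datetime) -> bool:
    # Past the 72-hour window, the notification must also include
    # "a reasoned justification" for the delay.
    return sent_at > dpa_deadline
```

The point of the deadline check is that the clock starts at discovery, which is exactly why IT's role in quickly scoping the incident matters so much.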

Of course, you’ll need IT to provide the basic information about what types of data and number of records that were involved.

But according to the attorney, a breach or incident response team must be made up of more than IT!

Minimally, the response group should include a chief privacy officer, a legal representative if the CPO is not an attorney, risk management personnel, PR, and finance.

The point is that while IT is crucial for understanding the scope of the attack, a breach impacts so much more than just IT —regulatory, financial, and legal areas—that other experts have to be involved.

 
