Category Archives: Compliance & Regulation

Cybercrime Laws Get Serious: Canada’s PIPEDA and CCIRC

In this series on governmental responses to cybercrime, we’re taking a look at how countries are using their laws to deal with broad attacks against IT infrastructure that go beyond data theft. Ransomware and DDoS are prime examples of threats that don’t fit neatly into the narrower definition of a breach found in PII-focused data security laws. That’s where special cybercrime rules come into play.

In the first post, we discussed how the EU’s Network and Information Security (NIS) Directive tries to close the gaps left open by the EU Data Protection Directive (DPD) and the impending General Data Protection Regulation (GDPR).

Let’s now head north to Canada.

Like the EU, Canada has a broad consumer data-oriented security law, which is known as the Personal Information Protection and Electronic Documents Act (PIPEDA).  For nitpickers, there are also overriding data laws at the provincial level — Alberta and British Columbia’s PIPA — that effectively mirror PIPEDA.

The good news about PIPEDA is that it has a strong breach notification rule: unauthorized data access has to be reported to the Canadian regulators, so ransomware attacks would fall under it. For reporting a breach to consumers, though, PIPEDA uses a “risk of harm” threshold. Harm can be of a financial nature as well as anything having a significant effect on the reputation of the individual.

Anyway, PIPEDA is like the Canadian version of the current EU DPD but with a fairly practical breach reporting requirement.

Is there anything like the EU’s NIS?

Not at this point.

But in 2015, the Canadian government started funding several initiatives to help the private sector protect against cyber threats. One of the key programs that came out of this was the Canadian Cyber Incident Response Centre (CCIRC), which is similar to the EU’s CSIRTs.

CCIRC provides technical advice and support, monitors the threat environment, and posts cybersecurity bulletins (see their RSS feed), as well as providing a forum, the Community Portal, through which companies can share information.

For now, Canada is following a US-style approach: help and support private industry in dealing with cyberattacks against important IT infrastructure, but keep reporting and other compliance matters voluntary.

However, the public discussion continues, and with attacks like this, new approaches may be needed.

Cybersecurity Laws Get Serious: EU’s NIS Directive

In the IOS blog, our cyberattack focus has mostly been on hackers stealing PII and other sensitive personal data. The breach notification laws and regulations that we write about require notification only when there’s been acquisition or disclosure of PII by an unauthorized user. In plain speak, the data is stolen.

These data laws, though, fall short in two significant ways.

One, the hackers can potentially take data that’s not covered by the law: non-PII that can include corporate IP, sensitive emails from the CEO, and other valuable proprietary information. Two, the attackers are not interested in taking data but rather in disruption: for example, deploying DoS attacks or destroying important system or other non-PII data.

Under the US’s HIPAA, GLBA, and state breach laws as well as the EU’s GDPR, neither of the two cases above — and that takes in a lot of territory — would trigger a notification to the appropriate government authority.

The problem is that data privacy and security laws focus, naturally, on the data rather than on the information system as a whole. But that doesn’t mean governments aren’t addressing this broader category of cybersecurity.

There’s not been nearly enough attention paid to the EU’s Network and Information Systems (NIS) Directive, the US’s (for now) voluntary Critical Infrastructure Security Framework, Canada’s cybersecurity initiatives, and other laws in major EU countries.

And that’s my motivation in writing this first in a series of posts on cybersecurity rules. These are important rules that organizations should be more aware of. Sometime soon, it won’t be good enough, legally speaking, to protect special classes of data. Companies will be required to protect entire IT systems and report to regulatory authorities when there have been actions to disrupt or disable the IT infrastructure.

Protecting the Cyber

The laws and guidelines that have evolved in this area are associated with safeguarding critical infrastructure – telecom, financial, medical, chemical, transportation. The reason is that cybercrime against the IT network of, say, Hoover Dam or the Federal Reserve should be treated differently than an attack against a dating web site.

Not that an attack against any IT system isn’t a serious and potentially costly act. But with critical infrastructure, where there isn’t an obvious financial motivation, we start entering the realm of cyber espionage or cyber disruption initiated by governments.

In other words, bank ATMs suddenly not dispensing cash, the cell phone network dropping calls, or — heaven help us! — Google replying with wrong and deceptive answers may be a sign of a cyberwar, or at least a cyber ambush.

A few months back, we wrote about an interview between Charlie Rose and John Carlin, the former Assistant Attorney General in the National Security Division of the Department of Justice. The transcript can be found here, and it’s worth going through it, or at least searching on the “attribution” keyword.

Essentially, Carlin tells us that US law enforcement is getting far better at learning who is behind cyberattacks. The Department of Justice is now publicly naming the attackers, and then prosecuting them. By the way, Carlin went after Iranian hackers accused of intrusions into banks and a small dam near New York City. Fortunately, the dam’s valves were still manually operated and not connected to the Internet.

Carlin believes there are important advantages in going public with a prosecution against named individuals. Carlin sees it as a way to deter future cyber incidents. As he puts it, “because if you are going to be able to deter, you’ve got to make sure the world knows we can figure out who did it.”

So it would make enormous sense to require companies to report cyberattacks to governmental agencies, who can then put the pieces together and formally take legal and other actions against the perps.

First Stop: EU’s NIS Directive.

As with the Data Protection Directive for data privacy, which was adopted in 1995, the EU has again been way ahead of other countries in formalizing cyber reporting legislation. Its Network and Information Systems Directive was initially drafted in 2013 and was approved by the EU last July.

Since it is a directive, individual EU countries will have to transpose NIS into their own national laws. EU countries will have a two-year transition period to get their houses in order, and an additional six months to identify operators providing essential services (see Annex II).

In Article 14, operators of essential services are required to take “appropriate and proportionate technical and organisational measures to manage the risks posed to the security of network and information systems.”  They are also required to report, without undue delay, significant incidents to a Computer Security Incident Response Team or CSIRT.

There’s separate and similar language in Article 16 covering digital service providers, which is the EU’s way of saying ecommerce, cloud computing, and search services.

CSIRTs are at the center of the NIS Directive. Besides collecting incident data, CSIRTs are also responsible for monitoring and analyzing threat activity at a national level, issuing alerts and warnings, and sharing their information and threat awareness with other CSIRTs.  (In the US, the closest equivalent is the Department of Homeland Security’s NCCIC.)

What is considered an incident in the NIS Directive?

It is any “event having an actual adverse effect on the security of network and information systems.”  Companies designated as providing essential services are given some wiggle room in what they have to report to a CSIRT. For an incident to be significant, and thus reportable, the company has to consider the number of users affected, the duration, and the geographical scope.

Digital service providers must also take into account the effect of their disruption on economic and “societal activities”.

Does this mean that a future attack against, say, Facebook in the EU, in which Messenger or status posting activity is disrupted, would have to be reported?

To this non-attorney blogger, it appears that Facebooking could be considered an important societal activity.

Yeah, there are vagaries in the NIS Directive, and it will require more guidance from the regulators.
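
In the meantime, here’s a toy Python sketch to make the significance test a little more concrete. The factors — users affected, duration, geographic scope, and (for digital service providers) societal impact — come from the directive, but every numeric threshold below is invented purely for illustration, since the NIS Directive itself sets no numbers:

```python
from dataclasses import dataclass

@dataclass
class Incident:
    users_affected: int
    duration_hours: float
    member_states_affected: int        # geographical scope
    disrupts_societal_activity: bool   # extra factor for digital service providers

def is_significant(incident: Incident) -> bool:
    """Toy reportability test loosely modeled on the NIS criteria.
    The directive names the factors but not the thresholds -- these
    numbers are invented for illustration only."""
    return (
        incident.users_affected > 50_000
        or incident.duration_hours > 24
        or incident.member_states_affected > 1
        or incident.disrupts_societal_activity
    )

# A brief outage touching one country and a few users: likely not reportable.
print(is_significant(Incident(200, 2.0, 1, False)))      # False
# A multi-country disruption: report it to the CSIRT.
print(is_significant(Incident(80_000, 1.0, 3, False)))   # True
```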

In my next post in this series, I’ll take a closer look at the cybersecurity rules of our neighbor due north: Canada.

Interview With Medical Privacy Author Adam Tanner [TRANSCRIPT]

Adam Tanner, author of Our Bodies, Our Data, has shed light on the dark market in medical data. In my interview with Adam, I learned that our medical records, principally drug transactions, are sold to medical data brokers who then resell this information to drug companies. How can this be legal under HIPAA without patient consent?

Adam explains that if the data is anonymized then it no longer falls under HIPAA’s rules. However, the prescribing doctor’s name is still left on the record that is sold to brokers.

As readers of this blog know, bits of information related to location, like the doctor’s name, don’t truly anonymize a record and can act as quasi-identifiers when associated with other data.
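
To see why, here’s a minimal sketch of the linkage attack in Python — every record below is invented, but the join logic is the whole re-identification game:

```python
# An "anonymized" prescription record still carries quasi-identifiers
# (zip prefix, birth year, prescriber) that can be joined against
# outside data to narrow down who the patient is.
anonymized_record = {"zip3": "100", "birth_year": 1965, "doctor": "Dr. Jones"}

# A hypothetical side table an attacker assembles from public or leaked sources.
directory = [
    {"name": "A. Smith", "zip3": "100", "birth_year": 1965, "doctor": "Dr. Jones"},
    {"name": "B. Lee",   "zip3": "100", "birth_year": 1972, "doctor": "Dr. Jones"},
    {"name": "C. Park",  "zip3": "941", "birth_year": 1965, "doctor": "Dr. Wu"},
]

candidates = [
    person for person in directory
    if person["zip3"] == anonymized_record["zip3"]
    and person["birth_year"] == anonymized_record["birth_year"]
    and person["doctor"] == anonymized_record["doctor"]
]
print(candidates)  # one match left -- the record is effectively re-identified
```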

My paranoia was certainly in the red zone during this interview, and we explored what would happen if hackers or others could connect the dots. Some of the possibilities were a little unsettling.

Adam believes that by writing this book, he can raise awareness about this hidden medical data market. He also believes that consumers should be given a choice — since it’s really their data  — about whether to release the “anonymized” HIPAA records to third-parties.

 

Inside Out Security: Today, I’d like to welcome Adam Tanner. Adam is a writer-in-residence at Harvard University’s Institute for Quantitative Social Science. He’s written extensively on data privacy. He’s the author of What Stays In Vegas: The World of Personal Data and the End of Privacy As We Know It. His articles on data privacy have appeared in Scientific American, Forbes, Fortune, and Slate. And he has a new book out, titled “Our Bodies, Our Data,” which focuses on the hidden market in medical data. Welcome, Adam.

Adam Tanner: Well, I’m glad to be with you.

IOS: We’ve also been writing about medical data privacy for our Inside Out Security blog. And we’re familiar with how, for example, hospital discharge records can be legally sold to the private sector.

But in your new book, and this is a bit of a shock to me, you describe how pharmacies and others sell prescription drug records to data brokers. Can you tell us more about the story you’ve uncovered?

AT: Basically, throughout your journey as a patient into the healthcare system, information about you is sold. It has nothing to do with your direct treatment. It has to do with commercial businesses wanting to gain insight about you and your doctor, largely, for sales and marketing.

So, take the first step. You go to your doctor’s office. The door is shut. You tell your doctor your intimate medical problems. The information that is entered into the doctor’s electronic health system may be sold, commercially, as may the prescription that you pick up at the pharmacy or the blood tests that you take or the urine tests at the testing lab. The insurance company that pays for all of this or subsidizes part of this, may also sell the information.

That information about you is anonymized.  That means that your information contains your medical condition, your date of birth, your doctor’s name, your gender, all or part of your postal zip code, but it doesn’t have your name on it.

All of that trade is allowed, under U.S. rules.

IOS: You mean under HIPAA?

AT: That’s right. Now this may be surprising to many people who would ask this question, “How can this be legal under current rules?” Well, HIPAA says that if you take out the name and anonymize according to certain standards, it’s no longer your data. You will no longer have any say over what happens to it. You don’t have to consent to the trade of it. Outsiders can do whatever they want with that.

I think a lot of people would be surprised to learn that. Very few patients know about it. Even doctors and pharmacists and others who are in the system don’t know that there’s this multi-billion-dollar trade.

IOS: Right … we’ve written about the de-identification process, which it seems like it’s the right thing to do, in a way, because you’re removing all the identifiers, and that includes zip code information, other geo information. It seems that for research purposes that would be okay. Do you agree with that, or not?

AT: So, these commercial companies, and some of the names may be well-known to us, companies such as IBM Watson Health, GE, LexisNexis, and the largest of them all may not be well-known to the general public, which is Quintiles and IMS. These companies have dossiers on hundreds of millions of patients worldwide. That means that they have medical information about you that extends over time, different procedures you’ve had done, different visits, different tests and so on, put together in a file that goes back for years.

Now, when you have that much information, even if it only has your date of birth, your doctor’s name, your zip code, but not your name, not your Social Security number, not things like that, it’s increasingly possible to identify people from that. Let me give you an example.

I’m talking to you now from Fairbanks, Alaska, where I’m teaching for a year at the university here. I lived, before that, in Boston, Massachusetts, and before that, in Belgrade, Serbia. I may be the only man of my age who meets that specific profile!

So, if you knew those three pieces of information about me and had medical information from those years, I might be identifiable, even in a haystack of millions of different other people.

IOS: Yeah … We have written about that as well in the blog. We call these quasi-identifiers. They’re not the traditional kind of identifiers, but they’re other bits of information, as you pointed out, that can be used to sort of re-identify. Usually it’s a small subset, but not always. And it would seem this information should also be protected in some way. So, do you think that the laws are keeping up with this?

AT: HIPAA was written 20 years ago, and the HIPAA rules say that you can freely trade our patient information if it is anonymized to a certain standard. Now, the technology has gone forward, dramatically, since then.

So, the ability to store things very cheaply and the ability to scroll through them is much more sophisticated today than it was when those rules came into effect. For that reason, I think it’s a worthwhile time to have a discussion now. Is this the best system? Is this what we want to do?

Interestingly, the system of the free trade in our patient information has evolved because commercial companies have decided this is what they’d want to do. There has not been an open public discussion of what is best for society, what is best for patients, what is best for science, and so on. This is just a system that evolved.

I’m saying, in writing this book, “Our Bodies, Our Data,” that it is maybe worthwhile that we re-examine where we’re at right now and say, “Do we want to have better privacy protection? Do we want to have a different system of contributing to science than we do now?”

IOS: I guess what also surprised me was that you say that pharmacies, for example, can sell the drug records, as long as it’s anonymized. You would think that the drug companies would be against that. It’s sort of leaking out their information to their competitors, in some way. In other words, information goes to the data brokers and then gets resold to the drug companies.

AT: Well, but you have to understand that everybody in what I call this big-data health bazaar is making money off of it. So, a large pharmacy chain, such as CVS or Walgreen’s, they may make tens of millions of dollars in selling copies of these prescriptions to data miners.

Drug companies are particularly interested in buying this information because this information is doctor-identified. It says that Dr. Jones in Pittsburgh prescribes drug A almost all the time, rather than drug B. So, the company that makes drug B may send a sales rep to the doctor and say, “Doctor, here’s some free samples. Let’s go out to lunch. Let me tell you about how great drug B is.”

So, this is because there exists these doctor profiles on individual doctors across the country, that are used for sales and marketing, for very sophisticated kind of targeting.

IOS: So, in an indirect way, the drug companies can learn about the other drug companies’ sales patterns, and then say, “Oh, let me go in there and see if I can take that business away.” Is that sort of the way it’s working?

AT: In essence, yes. The origins of this trade date back to the 1950s. In its first form, these data companies, such as IMS Health, what they did was just telling companies what drugs sold in what market. Company A has 87% of the market. Their rival has 13% of the market. When medical information began to become digitized in the 1960s and ’70s and evermore since then, there was a new opportunity to trade this data.

So, all of a sudden, insurance companies and middle-men connecting up these companies, and electronic health records providers and others, had a product that they could sell easily, without a lot of work, and data miners were eager to buy this and produce new products for mostly the pharmaceutical companies, but there are other buyers as well.

IOS:  I wanted to get back to another point you mentioned, in that even with anonymized data records of medical records, with all the other information that’s out there, you can re-identify or at least limit, perhaps, the pool of people who that data would apply to.

What’s even more frightening now is that hackers have been stealing health records like crazy over the last couple of years. So, there’s a whole dark market of hacked medical data that, I guess, if they got into this IMS database, they would have the keys to the kingdom, in a way.

Am I being too paranoid here?

AT: Well, no, you correctly point out that there has been a sharp upswing in hacking into medical records. That can happen into a small, individual practice, or it could happen into a large insurance company.

And in fact, the largest hacking attack of medical records in the last couple of years has been into Anthem Health, which is the Blue Cross Blue Shield company. Almost 80 million records were hacked in that.

So even people that did… I was hacked in that, even though I was not, at the time, a customer of them or had never been a customer of them, but they… One company that I dealt with outsourced to someone else, who outsourced to them. So, all of a sudden, this information can be in circulation.

There’s a government website people can look at, and you’ll see, every day or two, there are new hackings. Sometimes it involves a few thousand names and an obscure local clinic. Sometimes it’ll be a major company, such as a lab test company, and millions of names could be impacted.

So, this is something definitely to be concerned about. Yes, you could take these hacked records and match them with anonymized records to try to figure out who people are, but I should point out that there is no recorded instance of hackers getting into these anonymized dossiers by the big data miners.

IOS: Right. We hope so!

AT: I say recorded or acknowledged instance.

IOS: Right. Right. But there’s now been sort of an awareness of cyber gangs and cyber terrorism and then the use of, let’s say, records for blackmail purposes.

I don’t want to get too paranoid here, but it seems like there’s just a potential for just a lot of bad possibilities. Almost frightening possibilities with all this potential data out there.

AT: Well, we have heard recently about rumors of an alleged dossier involving Donald Trump and Russia.

IOS: Exactly.

AT: And information that… If you think about what kind of information could be most damaging or harmful to someone, it could be financial information. It could be sexual information, or it could be health information.

IOS: Yeah, or someone using… or has a prescription to a certain drug of some sort. I’m not suggesting anything, but that… All that information together could have sort of lots of implications, just, you know, political implications, let’s say.

AT: I mean if you know that someone takes a drug that’s commonly used for a mental health problem, that could be information used against someone. It could be used to deny them life insurance. It could be used to deny them a promotion or a job offer. It could be used by rivals in different ways to humiliate people. So, this medical information is quite powerful.

One person who has experienced this and spoken publicly about it is the actor, Charlie Sheen. He tested positive for HIV. Others somehow learned of it and blackmailed him. He said he paid millions of dollars to keep that information from going public, before he decided finally that he would stop paying it, and he’d have to tell the world about his medical condition.

IOS: Actually I was not aware of the payments he was making. That’s just astonishing. So, is there any hope here? Do you see some remedies, through maybe regulations or enforcement of existing laws? Or perhaps we need new laws?

AT: As I mentioned, the current rules, HIPAA, allows for the free trade of your data if it’s anonymized. Now, I think, given the growth of sophistication in computing, that we should change what the rule is and to define our medical data as any medical information about us, whether or not it’s anonymized.

So, if a doctor is writing in the electronic health record, you should have a say as to whether or not that information is going to be used elsewhere.

A little side point I should mention. There are a lot of good scientists and researchers who want data to see if they can gain insights into disease and new medications. I think people should have the choice whether or not they want to contribute to those efforts.

So, you know, there’s a lot of good efforts. There’s a government effort under way now to gather a million DNA samples from people to make available to science. So, if people want to participate in that, and they think that’s good work, they should definitely be encouraged to do so, but I think they should have the say and decide for themselves.

And so far, we don’t really have that system. So, by redefining what patient data is, to say, “Medical information about a patient, whether or not it’s anonymized,” I think that would give us the power to do that.

IOS: So effectively, you’re saying the patient owns the data, is the owner, and then would have to give consent for the data to be used. Is that, about right?

AT: I think so. But on the other hand, as I mentioned, I’ve written this book to encourage this discussion. The problem we have right now is that the trade is so opaque.

Companies are extremely reluctant to talk about this commercial trade. So, they do occasionally say that, “Oh, this is great for science and for medicine, and all of these great things will happen.” Well, if that is so fantastic, let’s have this discussion where everyone will say, “All right. Here’s how we use the data. Here’s how we share it. Here’s how we sell it.”

Then let people in on it and decide whether they really want that system or not. But it’s hard to have that intelligent policy discussion, what’s best for the whole country, if industry has decided for itself how to proceed without involving others.

IOS: Well, I’m so glad you’ve written this book. This will, I’m hoping, promote the discussion that you’re talking about. Well, this has been great. I want to thank you for the interview. So, by the way, where can our listeners reach out to you on social media? Do you have a handle on Twitter? Or Facebook?

AT: Well, I’m @datacurtain  and I have a webpage, which is http://adamtanner.news/

IOS: Wonderful. Thank you very much, Adam.

Update: New York State Finalizes Cyber Rules for Financial Sector

When last we left New York State’s innovative cybersecurity regulations, they were in a 45-day public commenting period. Let’s get caught up. The comments are now in. The rules were tweaked based on stakeholders’ feedback, and the regulations take effect March 1, 2017, which starts the clock on their grace periods.

To save you the time, I did the heavy lifting and looked into the changes made by the regulators at the New York State Department of Financial Services (NYSDFS).

There are a few interesting ones to talk about. But before we get into them, let’s consider how important New York State — really New York City — is as a financial center.

Made in New York: Money!

To get a sense of what’s encompassed in the NYSDFS’s portfolio, I took a quick dip into their annual report.

For the insurance sector, they supervise almost 900 insurers holding assets of $1.4 trillion and taking in premiums of $361 billion. For wholesale domestic and foreign banks — remember, New York has a global reach — they monitor 144 institutions with assets of $2.2 trillion. And I won’t even get into community and regional banks, mortgage brokers, and pension funds.

In a way, the NYSDFS has the regulatory power usually associated with a small country’s government. And therefore the rules that New York makes regarding data security have an outsized influence.

One Rule Remains the Same

Back to the rules. First, let’s look at one key part that was not changed.

NYSDFS received objections from the commenters on their definition of cyber events. This is at the center of the New York law—detecting, responding, and recovering from these events—so it’s important to take a closer look at its meaning.

Under the rules, a cybersecurity event is “any act or attempt, successful or unsuccessful, to gain unauthorized access to, disrupt or misuse an Information System or information …”

Some of the commenters didn’t like the inclusion of “attempt” and “unsuccessful”. But the New York regulators held firm and kept the definition as is.

A cybersecurity event is a broader notion than a data breach. For a data breach, there usually has to be data access and exposure or exfiltration. In New York State, though, access alone or an IT disruption — even one merely attempted (or executed but not successful) — counts as an event.

As we’ve pointed out in our ransomware and the law cheat sheet, very few states in the US would classify a ransomware attack as a breach under their breach laws.

But in New York State, if ransomware (or a remote access trojan or other malware) was loaded on the victim’s server and perhaps abandoned or stopped by IT in mid-hack, it would indeed be a cybersecurity event.

Notification Worthy

This leads naturally to another rule, notification of a cybersecurity event to the New York State regulators, where the language was tightened.

The 72-hour time frame for reporting remains, but the clock starts ticking after a determination by the financial company that an event has occurred.

The financial companies were also given more wiggle room in the types of events that require notification: essentially the malware would need to “have a reasonable likelihood of materially harming any material part of the normal operation…”

That’s a mouthful.

In short: financial companies will notify the regulators at NYSDFS when the malware could seriously affect an operation that’s important to the company.

For example, malware that infects the digital console on the bank’s espresso machine is not notification worthy. But a key logger that lands in a bank’s foreign exchange area and is scooping up user passwords is very worthy.

The NYSDFS’s updated notification rule language, by the way, puts it more in line with other data security laws, including the EU’s General Data Protection Regulation (GDPR).

So would you have to notify the New York State regulator when malware infects a server but hasn’t necessarily completed its evil mission?

Getting back to the language of “attempt” and “unsuccessful” found in the definition of cybersecurity events, it would appear that you would — but only if the malware lands on a server that’s important to the company’s operations, either because of the data it contains or its function.
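
Putting the timing and materiality rules together, the notification logic amounts to a few lines. In this back-of-the-envelope sketch, the 72-hour figure comes from the regulation; the timestamp and the materiality determination are hypothetical:

```python
from datetime import datetime, timedelta

# The 72-hour clock starts at the company's *determination* that a
# reportable cybersecurity event occurred -- not when the malware landed.
determined_at = datetime(2017, 3, 6, 9, 30)   # hypothetical timestamp
materially_harmful = True                     # key logger in FX: yes; espresso machine: no

if materially_harmful:
    deadline = determined_at + timedelta(hours=72)
    print(f"Notify NYSDFS by {deadline}")     # Notify NYSDFS by 2017-03-09 09:30:00
```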

State of Grace

The original regulation also said you had to appoint a Chief Information Security Officer (CISO) who’d be responsible for seeing that this cybersecurity regulation is carried out. Another important task of the CISO is to report annually to the board on the state of the company’s cybersecurity program.

With pushback from industry, this language was changed so that you can designate an existing employee as a CISO — likely a CIO or other C-level.

One final point to make is that the grace period for compliance has been changed. For most of the rules, it’s still 180 days.

But for certain requirements – multifactor authentication and penetration testing — the grace period has been extended to 12 months, and for a few others – audit trails, data retention, and the CISO report to the board — it’s been pushed out to 18 months.

For more details on the changes, check this legal note from our attorney friends at Hogan Lovells.

[Podcast] Adam Tanner on the Dark Market in Medical Data, Part II

More Adam Tanner! In this second part of my interview with the author of Our Bodies, Our Data, we start exploring the implications of having massive amounts of online medical  data. There’s much to worry about.

With hackers already good at stealing health insurance records, is it only a matter of time before they get into the databases of the drug prescription data brokers?

My data privacy paranoia about all this came out in full force during the interview. Thankfully, Adam was able to calm me down, but there’s still potential for frightening possibilities, including political blackmail.

Is the answer more regulations for drug data? Listen to the rest of the interview below to find out, and follow Adam on Twitter, @datacurtain, to keep up to date.

[Podcast] Adam Tanner on the Dark Market in Medical Data, Part I

In our writing about HIPAA and medical data, we’ve also covered a few of the gray areas of medical privacy, including  wearables, Facebook, and hospital discharge records. I thought both Cindy and I knew all the loopholes. And then I talked to writer Adam Tanner about his new book Our Bodies, Our Data: How Companies Make Billions Selling Our Medical Records.

In the first part of my interview with Tanner, I learned how pharmacies sell our prescription drug transactions to medical data brokers, who then resell it to pharmaceutical companies and others. This is a billion dollar market that remains unknown to the public.

How can this be legal under HIPAA, and why doesn’t it require patient consent?

It turns out that after the data record is anonymized — but with the doctor’s name still attached — it’s no longer yours! Listen in as we learn more from Tanner in this first podcast.

EU GDPR Spotlight: Do You Have to Hire a DPO?

I suspect right about now that EU (and US) companies affected by the General Data Protection Regulation (GDPR) are starting to look more closely at their compliance project schedules. With enforcement set to begin in May 2018, the GDPR era will shortly be upon us.

One of the many questions that has not been fully answered by this new law (and is still being worked out by the regulators) is under what circumstances a company needs to hire a data protection officer (DPO).

There are three scenarios mentioned in the GDPR (see article 37) where a DPO is mandatory: the core activities involve the processing of personal data by a public authority; the core activities involve “regular and systematic monitoring of data subjects on a large scale”; or the core activities require large-scale processing of special categories of data — for example, biometric, genetic, or health data.

Companies falling into the second category, which I think covers the largest share, are probably pondering what is meant by “regular and systematic monitoring” and “large-scale”.

Even as a non-lawyer, I noticed these provisions were a bit foggy.

A few months ago, I asked GDPR legal specialist Bret Cohen at Hogan Lovells what the heck was meant.

Cohen’s answer was that, well, we’ll have to wait for more guidance from the regulators.

And Thus Spoke the Article 29 Working Party

No, the Article 29 Working Party (WP29) is not the name of a new Netflix series. Under the GDPR, it will become a kind of super data protection authority (DPA), providing advice and ensuring consistency among all the national DPAs.

Anyway, last month the WP29 published guidance addressing the confusing criteria for DPOs.

And after reading it, I suppose, I’m still a little confused.

For those of us who were following the GDPR and watching how this legal sausage was made, the DPO was one of the more contentious provisions.

There were differences of opinion on whether a DPO should be mandatory or optional and on the threshold requirements for having one in the first place. Some argued that the threshold should be a company’s number of employees (250); others, the number of records of personal data processed (500).

The parties — EU Commission, Parliament, and Council — finally settled on DPOs being mandatory, but they removed the specific numbers. And so we’re left with this vague language.

The new guidance provides some clarification.

According to the WP29, “regular and systematic” means, in human-speak, a pre-arranged plan that’s carried out repeatedly over time.

So far, so good.

What does “large scale” mean?

For me, this is the more interesting question. The WP29 said the following factors need to be taken into consideration:

  • The number of data subjects concerned – either as a specific number or as a proportion of the relevant population
  • The volume of data and/or the range of different data items being processed
  • The duration, or permanence, of the data processing activity
  • The geographical extent of the processing activity
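
None of these factors comes with a number attached, but just to make the shape of the test concrete, here’s a toy sketch — every threshold below is invented, not taken from the WP29 guidance:

```python
def looks_large_scale(data_subjects: int, population: int, data_items: int,
                      duration_months: int, countries: int) -> bool:
    """Toy combination of the WP29 factors; all thresholds are invented."""
    return (
        data_subjects > 100_000                 # number of data subjects...
        or data_subjects / population > 0.05    # ...or proportion of the population
        or data_items > 50                      # range of data items processed
        or duration_months >= 12                # permanence of the processing
        or countries > 1                        # geographical extent
    )

# A small web app tracking 200,000 monthly visitors across ten EU countries:
print(looks_large_scale(200_000, 300_000_000, 5, 24, 10))  # True
```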

We’re All Monitoring Web Behavior

You can kind of see what the lawmakers were grappling with in the list of factors. But it’s still a little muddy.

Obviously, an insurance company, bank, or retailer that collects personal data from millions of customers would require a DPO.

However, a small web start-up with a few employees can also be engaged in large-scale monitoring.

How?

Suppose their free web app is being accessed by tens or hundreds of thousands of visitors per month. The startup’s site may be collecting no personal data, or only minimal personal data, beyond tracking browser activity with cookies or by other means. I use plenty of freebie sites this way — especially news sites — and the advertising I see reflects their knowledge of me.

But according to the guidance and other language in the GDPR, monitoring of web behavior would be a type of “monitoring” that’s mentioned in the DPO provisions.

I could be mistaken but it seems to me that any company with a website that receives a reasonable amount of traffic would be required to have a DPO.  And this would include lots of B2Bs that don’t necessarily have a large customer base compared to a consumer company. For validation of this view, check out this legal post.

It’s a confusing point that I’m hoping to get resolved by our attorney friends.

In the meantime, more explanation of this somewhat wonkish but important topic can be found here, from the brilliant people over at the IAPP.

What We Learned From Talking to Data Security Experts

Since we’ve been working on the blog, Cindy and I have chatted with security professionals across many different areas — pen testers, attorneys, CDOs, privacy advocates, computer scientists, and even a guru. With 2016 coming to an end and the state of security looking more unsettled than ever, we decided it was a good time to take stock of the collective wisdom we’ve absorbed from these pros.

The Theory of Everything

A good place to begin our wisdom journey is the Internet of Things (IoT). It’s where the public directly experiences all the issues related to privacy and security.

We had a chance to talk to IoT pen tester Ken Munro earlier this year, and his comments on everything from wireless coffee pots and doorbells to cameras really resonated with us:

“You’re making a big step there, which is assuming that the manufacturer gave any thought to an attack from a hacker at all. I think that’s one of the biggest issues right now is there are a lot of manufacturers here and they’re rushing new product to market …”

IoT consumer devices are not, cough, based on Privacy by Design (PbD) principles.

And over the last few months, consumers learned the hard way that these gadgets were susceptible to simple attacks that exploited backdoors, default passwords, and even non-existent authentication.

Additional help to the hackers was provided by public-facing router ports left open during device installation, without any warning to the poor user, and unsigned firmware that left their devices open to complete takeover.

As a result, IoT is where everything wrong with data security seems to show up. However, there are easy-to-implement lessons that we can all put into practice.

Password Power!

Is security always about passwords? No, of course not, but poor passwords or password defaults that were never reset seem to show up as a root cause in many breaches.

The security experts we’ve spoken to have often brought up the sorry state of passwords without any prompting from us. One of them, Per Thorsheim, who is in fact a password expert himself, reminded us that one answer to our bad password habits is two-factor authentication (TFA):

“From a security perspective, adding this two-factor authentication is everything. It increases security in such a way that in some cases even if I told you my password for my Facebook account, as an example, well because I have two-factor authentication, you won’t be able to log in. As soon as you type in my user name and password, I will be receiving a code by SMS from Facebook on my phone, which you don’t have access to. This is really good.”

We agree with Thorsheim that humans are generally not good at this password thing, and so TFA and biometric authentication will certainly be a part of our password future.

In the meantime, for those of us who still cling to just plain-old passwords, Professor Justin Cappos told us a while back that there’s a simple way to come up with better ones:

“If you’re trying to generate passwords as a human, there are tricks you can do where you pick four dictionary words at random and then create a story where the words interrelate. It’s called the “correct horse battery staple” method! “

Correct-horse-battery-staple is just a way of using a story as a memory trick, or mnemonic. It’s an old technique, but one that helps create hard-to-crack passwords.
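
For the curious, here’s a minimal sketch of the method using Python’s secrets module. The twelve-word list is just a stand-in — a real implementation would draw from a large dictionary file or a diceware list:

```python
import secrets

# Stand-in word list; in practice use something much larger, such as
# /usr/share/dict/words or the EFF diceware list.
WORDS = ["correct", "horse", "battery", "staple", "window", "cloud",
         "basket", "river", "stone", "candle", "rocket", "garden"]

def passphrase(n_words: int = 4) -> str:
    """Pick n words at random; the user then invents a story linking them."""
    return "-".join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())  # e.g., "river-candle-horse-basket"
```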

One takeaway from these experts: change your home router admin passwords now (and use horse-battery-staple). Corporate IT admins should also take a good, hard look at their own passwords and avoid aiding and abetting hackers.

Cultivate (Privacy and Security) Awareness

Enabling TFA on your online accounts and generating better passwords goes a very long way to improving your security profile.

But we also learned that you need to step back and cultivate a certain level of privacy awareness in your online transactions.

We learned from attorney and privacy guru Alexandra Ross about the benefits of data minimization, both for the companies that collect data and for the consumers who reveal it:

“One key thing is to stop, take a moment, and be mindful of what’s going on. What data am I being asked to submit when I sign up for a social media service?  And question why it’s being asked.

It’s worth the effort to try to read the privacy policies, or read consumer reviews of the app or online service.”

And

“If you’re speaking to the marketing team at a technology company—yeah, the default often is let’s collect everything. In other words, let’s have this very expansive user profile so that every base is covered and we have all these great data points.

But if you explain, or ask questions … then you can drill down to learn what’s really necessary for the data collection.”

In a similar vein, data scientist Kaiser Fung pointed out that often there isn’t much of a reason behind some of the data collection in the first place:

“It’s not just the volume of data, but the fact that the data today is collected without any design or plan in mind. Often times, people collecting the data are really divorced from any kind of business problem.”

Listen up IT and marketing people: think about what you’re doing before you submit your next contact form!

Ross and other PbD advocates preach the doctrine of data minimization: the less data you have, the lower your security risk is when there’s an attack.

As our privacy guru, Ross reminded us that there’s still a lot of data about us spread out in corporate data systems. Scott “Hacked Again” Schober, another security pro we chatted with, makes the same point based on his personal experiences:

“I was at an event speaking … and was asked if I’d be willing to see how easy it is to perform identity theft and compromise information on myself. I was a little reluctant but I said ok, everything else is out there already, and I know how easy it is to get somebody’s information. So I was the guinea pig. It was Kevin Mitnick, the world’s most famous hacker, who performed the theft. Within 30 seconds and at the cost of $1, he pulled up my social security number.”

There’s nothing inherently wrong with companies storing personal information about us. The larger point is to be savvy about what you’re being asked to provide and take into account that corporate data breaches are a fact of life.

Credit cards can be replaced and passwords changed, but details about our personal preferences (food, movies, reading habits) and our social security numbers are forever — and a great source of raw material for hackers to use in social engineering attacks.

Data is Valuable

We’ve talked to attorneys and data scientists, but we had the chance to talk to both in the form of Bennett Borden. His bio is quite interesting: in addition to being a litigator at Drinker Biddle, he’s also a data scientist. Borden has written law journal articles about the application of machine learning and document analysis to e-discovery and other legal transactions.

Borden explained how as employees we all leave a digital trail in the form of emails and documents, which can be quite revealing. He pointed out that this information can be useful when lawyers are trying to work out a fair value for a company that’s being purchased.

He was called in to do a data analysis for a client and was able to show that internal discussions indicated the asking price for the company was too high:

“We got millions of dollars back on that purchase price, and we’ve been able to do that over and over again now because we are able to get at these answers much more quickly in electronic data.”

So information is valuable in a strictly business sense. At Varonis, this is not news to us, but it’s still powerful to hear it from someone who is immersed in corporate content as part of his job.

To summarize: as consumers and as corporate citizens, we should all be more careful about how we treat this valuable substance. Don’t give it away easily, and protect it when it’s in your possession.

More Than Just a Good Idea

Privacy by Design came up in a few of our discussions with experts, and one of its principles, privacy as a default setting, is a hard one for companies to accept — even though PbD insists that privacy is not a zero-sum game: you can have tough privacy controls and profits.

In any case, for companies that do business in the EU, PbD is not just a good idea but in fact it will be the law in 2018. The concept is explicitly spelled out in the General Data Protection Regulation’s (GDPR) article 25, “Data protection by design and by default”.

We’ve been writing about the GDPR and its many implications for the last two years. But one somewhat overlooked consequence is that the GDPR will apply to companies outside of the EU.

We spoke with data security compliance expert Sheila FitzPatrick, who really emphasized this point:

“The other part of GDPR that is quite different–and it’s one of the first times that this idea will be put in place– is that it doesn’t just apply to companies that have operations within the EU. Any company regardless of where they are located and regardless of whether or not they have a presence in the EU, if they have access to the personal data of any EU citizen, they will have to comply with the regulations under the GDPR. That’s a significant change.”

This legal idea is sometimes referred to as extraterritoriality. And US e-commerce and web service companies in particular will find themselves under the GDPR when EU citizens interact with them. IT best practices that experts like to talk about as things you should do are becoming legal requirements for them.  It’s not just a good idea!

 

Our final advice for 2016: read the writing on the wall and get in position to align with PbD ideas on data minimization, consumer consent, and data protection.

HIPAA and Cloud Provider Refresher

As far as regulators are concerned, the cloud is a relatively recent development. Even so, they’ve done a pretty good job of dealing with this ‘new’ computing model. Take HIPAA.

We wrote that if a cloud service processes or stores protected health information (PHI), it’s considered, in HIPAA-ese, a business associate or BA. As you may recall, the Final Omnibus Rule from 2013 says that BAs fall under HIPAA’s rules.

A covered entity — health provider or insurer — also must have a contract in place that says the cloud service provider or CSP will comply with key HIPAA safeguards — technical, physical, and administrative. The Department of Health and Human Services (HHS), the agency in charge of enforcing HIPAA, has conveniently provided a sample contract.

The relationship between a covered entity and CSP can be a confusing topic for security and compliance pros. So the HHS folks kindly put together this wonderful FAQ on the topic.

You should read it!

And please note that CSPs are under a breach notification requirement, though the exact details of reporting back to the covered entity would have to be worked out in the contract.

One key point to keep in mind is that the reason behind having a BA contract is to make sure that the CSP knows they’re being asked to process PHI.

And if a somewhat careless or unscrupulous hospital doesn’t make the CSP sign such a contract, the CSP is still on the hook!

HIPAA rules say the BA can’t plead ignorance of the law (except in very special cases). In this situation, the hospital would get fined for the lapse of not having a contract in place, and the CSP would still be held responsible for PHI security.

The higher goal is preventing a covered entity from outsourcing compliance responsibility to an indifferent third-party, and avoiding an ensuing legal finger-pointing exercise when there’s a security violation.

CSPs have done a good job of keeping up with changing data security regulations, and they’re very aware of the HIPAA rules. For example, Amazon knows about the BA contracts, as do Google and many other cloud players.

Trying to learn a new language can be difficult! Become fluent in HIPAA with our free five-part email HIPAA class.

The Federal Trade Commission Likes the NIST Cybersecurity Framework (and You Should Too)

Remember the Cybersecurity Framework that was put together by the folks over at the National Institute of Standards and Technology (NIST)?  Sure you do! It came about because the US government wanted to give the private sector, specifically the critical infrastructure players in transportation and energy, a proven set of data security guidelines.

The Framework is based heavily on NIST’s own 800-53, a sprawling 400-page set of privacy and security controls used within the federal government.

To make NIST 800-53 more digestible for the private sector, NIST reorganized and condensed the most important controls and concepts.

Instead of the original’s 18 broad control categories with zillions of subcontrols, the Cybersecurity Framework — check out the document — is broken up into just five functional categories — Identify, Protect, Detect, Respond, and Recover — with a manageable number of controls under each grouping.

Students and fans of NIST 800-53 will recognize some of the same two-letter abbreviations being used in the Cybersecurity Framework (see below).

NIST Cybersecurity: simplified functional view of security controls.

By the way, this is a framework. And that means you use the Framework for Improving Critical Infrastructure Cybersecurity – the official name — to map into your favorite data security standard.

Currently, the Framework supports mappings into (not surprisingly) NIST 800-53, but also the other usual suspects, including COBIT 5, SANS CSC, ISO 27001, and ISA 62443.

Keep in mind that the Cybersecurity Framework is an entirely voluntary set of guidelines—none of the infrastructure companies are required to implement it.

The FTC’s Announcement

Since this is such a great set of data security guidelines for critical infrastructure, could the Cybersecurity Framework also serve the same purpose for everyone else—from big box retailers to e-commerce companies?

The FTC thinks so! At the end of August, the FTC announced on its blog that it has given the Cybersecurity Framework its vote of approval.

Let me explain what this means. As a regulatory agency, the FTC is responsible for enforcing powerful regulations, including Gramm-Leach-Bliley, COPPA, and FCRA, as well as its core statutory function of policing “unfair or deceptive acts or practices.”

When dealing with data security or privacy related implications of the laws, the FTC needs a benchmark for reasonable security measures. Or as they put it, “the FTC’s cases focus on whether the company has undertaken a reasonable process to secure data.”

If a company follows the Cybersecurity Framework, is this considered implementing a reasonable process?

The answer is in the affirmative according to the FTC. Or in FTC bureaucratic-speak, the enforcement actions they’ve taken against companies for data security failings “align well with the Framework’s Core functions.”

Therefore if you identify risks (Identify), put in place security safeguards (Protect), continually monitor for threats (Detect), implement a breach response program (Respond), and have a way to restore functions after an incident (Recover), you’ll likely not hear from the FTC regulators.

By the way, check out their Start with Security, a common-sense guide to data security, which contains some very Varonis-y ideas.

We approve!

If the GDPR Were in Effect, Yahoo Would Have to Write a Large Check

Meanwhile, back in the EU, two data protection authorities have announced they’ll be looking into Yahoo’s breach-apocalypse. Calling the scale of the attack “staggering”, the UK’s Information Commissioner’s Office (ICO) has signaled it will be conducting an investigation. By the way, the ICO rarely comments this way on an ongoing security event.

In Ireland, where Yahoo has its European HQ, the Data Protection Commissioner is asking questions as well.

And here in the US, the FBI is getting involved because the Yahoo attack may involve a state actor, possibly Russia.

Under the Current Laws

One of the (many) stunning things about this incident is that Yahoo knew about the breach earlier this summer when it learned that its users’ data was for sale on the darknet.

And the stolen data appears to have come from a hack that occurred way back in 2014.

It’s an obvious breach notification violation.

Or not!

In the US, the only federal notification law with any teeth is for medical PII held by “covered entities”— insurance companies, hospitals, and providers. In other words, HIPAA.

So there’s little that can be done at the US federal level against the extremely long Yahoo delay in reporting the breach.

In the states, there are notification laws — currently 47 states have them — that would kick in, but their thresholds are typically based on harm caused to consumers, which may be difficult to prove in this incident.

The notable exception, as always, is California, where Yahoo has its corporate offices in Sunnyvale. California is one of the few states that requires notification upon the mere discovery of unauthorized access.

So Yahoo can expect a visit from the California attorney general in its future.

One might think that the EU would be tougher on breaches than the US.

But under the current EU Data Protection Directive (DPD), there’s no breach notification requirement. That was one of the motivations for the new General Data Protection Regulation that will go into effect in 2018.

If You Can’t Keep Your Head About You During A Breach

Yahoo may not be completely out of the legal woods in the EU: the DPD does require appropriate security measures to be taken — see article 17. So in theory an enforcement action could be launched based on Yahoo’s lax data protection.

But as a US company with its principal collection servers outside the EU, Yahoo may fall beyond the DPD’s reach. This is a wonky issue; if you want to learn whether the current DPD rules cover non-EU businesses, read this post on the legal analysis.

And this all leads to why the EU rewrote the current data laws for the GDPR, which covers breach notification and “extra-territoriality” — that is controllers outside the EU — as well as putting in place eye-popping fines.

Yeah, you should be reading our EU data regulations white paper to get the big picture.

If the GDPR were currently the law — the GDPR will go in effect in May 2018 — and the company hadn’t reported the exposure of 500 million user records to a DPA within 72 hours, then it would face massive fines.

How massive?

Doing the GDPR Breach Math

A violation of GDPR’s article 33 requirement to notify a DPA could draw a fine as high as 2% of global revenue.

Yahoo’s revenue numbers have been landing north of $4.5 billion in recent years.

In my make-believe scenario, Yahoo could be paying $90 million or more to the EU.

And yes, I’m aware that Verizon, with over $130 billion in revenue, is in the process of buying Yahoo.

Supposing the acquisition had already gone through, then Verizon would be on the hook for 2% of $134 billion, or about $2.7 billion.
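
Here’s that back-of-the-envelope math as a few lines of Python, using the revenue figures cited above and the 2% cap from the GDPR tier just mentioned:

```python
# GDPR fines for article 33 notification failures can reach 2% of
# global annual revenue. Revenue figures are the rough ones from this post.
def max_fine(annual_revenue_usd: float, rate: float = 0.02) -> float:
    return annual_revenue_usd * rate

print(f"${max_fine(4.5e9):,.0f}")   # Yahoo alone: $90,000,000
print(f"${max_fine(134e9):,.0f}")   # post-acquisition Verizon: $2,680,000,000
```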

There’s a lesson here. Large US and other multinational companies with a significant worldwide web presence should be planning now for the possibility of an epic Yahoo-style breach in their post-2018 future.