Category Archives: Compliance & Regulation

[Podcast] Adam Tanner on the Dark Market in Medical Data, Part I

This article is part of the series "[Podcast] Adam Tanner on the Dark Market in Medical Data".



In our writing about HIPAA and medical data, we’ve also covered a few of the gray areas of medical privacy, including wearables, Facebook, and hospital discharge records. I thought both Cindy and I knew all the loopholes. And then I talked to writer Adam Tanner about his new book Our Bodies, Our Data: How Companies Make Billions Selling Our Medical Records.

In the first part of my interview with Tanner, I learned how pharmacies sell our prescription drug transactions to medical data brokers, who then resell them to pharmaceutical companies and others. It’s a billion-dollar market that remains largely unknown to the public.

How can this be legal under HIPAA, and why doesn’t it require patient consent?

It turns out that once the data record is anonymized (though with the doctor’s name still attached), it’s no longer yours! Listen in as we learn more from Tanner in this first podcast.
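To make the mechanics concrete, here’s a minimal sketch of the kind of de-identification Tanner describes: patient identifiers are hashed away while the prescriber stays attached. The field names and salting scheme are illustrative assumptions, not any actual broker’s pipeline.

```python
import hashlib
import os

SALT = os.urandom(16)  # a broker-side secret; keeps hashes consistent within a dataset

def deidentify(record: dict) -> dict:
    """Replace direct patient identifiers with a salted hash; keep everything else."""
    patient_key = f"{record['patient_name']}|{record['patient_dob']}".encode()
    return {
        "patient_id": hashlib.sha256(SALT + patient_key).hexdigest()[:16],
        "doctor_name": record["doctor_name"],   # the prescriber stays attached
        "drug": record["drug"],
        "fill_date": record["fill_date"],
    }

record = {
    "patient_name": "Jane Doe",
    "patient_dob": "1980-01-01",
    "doctor_name": "Dr. Smith",
    "drug": "atorvastatin 20mg",
    "fill_date": "2016-11-02",
}
print(deidentify(record))  # no name or date of birth, but still linkable across fills
```

The rub is that records scrubbed this way can still be linked longitudinally and matched back to prescribers, which is exactly what makes them valuable to buyers.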


EU GDPR Spotlight: Do You Have to Hire a DPO?

I suspect that right about now, EU (and US) companies affected by the General Data Protection Regulation (GDPR) are starting to look more closely at their compliance project schedules. With enforcement set to begin in May 2018, the GDPR era will shortly be upon us.

One of the many questions that has not been fully answered by this new law (and is still being worked out by the regulators) is under what circumstances a company needs to hire a data protection officer (DPO).

There are three scenarios mentioned in the GDPR (see article 37) where a DPO is mandatory: the core activities involve the processing of personal data by a public authority; the core activities involve “regular and systematic monitoring of data subjects on a large scale”; or the core activities require large-scale processing of special data—for example, biometric, genetic, geo-location, and more.

Companies falling into the second category, which I think covers the largest share, are probably pondering what is meant by “regular and systematic monitoring” and “large-scale”.

Even as a non-lawyer, I noticed these provisions were a bit foggy.

A few months ago, I asked GDPR legal specialist Bret Cohen at Hogan Lovells what the heck was meant.

Cohen’s answer was that, well, we’ll have to wait for more guidance from the regulators.

And Thus Spoke the Article 29 Working Party

No, the Article 29 Working Party (WP29) is not the name of a new Netflix series. Under the GDPR, it will become a kind of super data protection authority (DPA), providing advice and ensuring consistency among all the national DPAs.

Anyway, last month the WP29 published guidance addressing the confusing criteria for DPOs.

And after reading it, I suppose, I’m still a little confused.

For those of us who were following the GDPR and watching how this legal sausage was made, the DPO was one of the more contentious provisions.

There were differences of opinion on whether a DPO should be mandatory or optional, and on the threshold requirements for having one in the first place. Some argued the threshold should be a company’s number of employees (250); others, the number of records of personal data processed (500).

The parties — EU Commission, Parliament, and Council — finally settled on DPOs being mandatory but they removed specific numbers. And so we’re left with this vague language.

The new guidance provides some clarification.

According to the WP29, “regular and systematic” means, in human-speak, a pre-arranged plan that’s carried out repeatedly over time.

So far, so good.

What does “large scale” mean?

For me, this is the more interesting question. The WP29 said the following factors need to be taken into consideration (a rough sketch of how they might combine follows the list):

  • The number of data subjects concerned – either as a specific number or as a proportion of the relevant population
  • The volume of data and/or the range of different data items being processed
  • The duration, or permanence, of the data processing activity
  • The geographical extent of the processing activity
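
To see how these factors might combine in practice, here’s a hypothetical decision helper. Every threshold below is a placeholder of my own invention; the WP29 guidance deliberately avoids hard numbers.

```python
def dpo_required(core_activity_monitoring: bool,
                 subjects: int,
                 population_share: float,
                 data_items: int,
                 duration_months: int,
                 countries: int) -> bool:
    """Rough, hypothetical reading of the WP29 'large scale' factors.

    The GDPR and WP29 guidance give no hard thresholds; every number
    below is a placeholder for illustration only.
    """
    if not core_activity_monitoring:
        return False
    large_scale_signals = [
        subjects >= 100_000 or population_share >= 0.10,  # number/proportion of subjects
        data_items >= 20,          # volume and range of data items processed
        duration_months >= 12,     # durable or permanent processing
        countries >= 2,            # geographical extent
    ]
    # Treat "large scale" as met when at least two factors point that way.
    return sum(large_scale_signals) >= 2

# A small web startup tracking 200,000 monthly visitors with cookies:
print(dpo_required(True, subjects=200_000, population_share=0.01,
                   data_items=5, duration_months=24, countries=10))  # True
```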

We’re All Monitoring Web Behavior

You can kind of see what the lawmakers were grappling with in this list of factors. But it’s still a little muddy.

Obviously, an insurance company, bank, or retailer that collects personal data from millions of customers would require a DPO.

However, a small web startup with a few employees can also be engaged in large-scale monitoring.

How?

Suppose their free web app is being accessed by tens or hundreds of thousands of visitors per month. The startup’s site may collect little or no personal data beyond tracking browser activity with cookies or by other means. I use plenty of freebie sites this way — especially news sites — and the advertising I see reflects their knowledge of me.

But according to the guidance and other language in the GDPR, tracking web behavior counts as the kind of “monitoring” mentioned in the DPO provisions.

I could be mistaken but it seems to me that any company with a website that receives a reasonable amount of traffic would be required to have a DPO.  And this would include lots of B2Bs that don’t necessarily have a large customer base compared to a consumer company. For validation of this view, check out this legal post.

It’s a confusing point that I’m hoping to get resolved by our attorney friends.

In the meantime, more explanation of this somewhat wonkish but important topic can be found in this post by the brilliant people over at the IAPP.

What We Learned From Talking to Data Security Experts

Since we’ve been working on the blog, Cindy and I have chatted with security professionals across many different areas — pen testers, attorneys, CDOs, privacy advocates, computer scientists, and even a guru. With 2016 coming to an end and the state of security looking more unsettled than ever, we decided it was a good time to take stock of the collective wisdom we’ve absorbed from these pros.

The Theory of Everything

A good place to begin our wisdom journey is the Internet of Things (IoT). It’s where the public directly experiences all the issues related to privacy and security.

We had a chance to talk to IoT pen tester Ken Munro earlier this year, and his comments on everything from wireless coffee pots and doorbells to cameras really resonated with us:

“You’re making a big step there, which is assuming that the manufacturer gave any thought to an attack from a hacker at all. I think that’s one of the biggest issues right now is there are a lot of manufacturers here and they’re rushing new product to market …”

IoT consumer devices are not, cough, based on Privacy by Design (PbD) principles.

And over the last few months, consumers learned the hard way that these gadgets were susceptible to simple attacks that exploited backdoors, default passwords, and even non-existent authentication.

Additional help to the hackers was provided by public-facing router ports left open during device installation, without any warning to the poor user, and unsigned firmware that left their devices open to complete takeover.

As a result, IoT is where everything wrong with data security seems to show up. However, there are easy-to-implement lessons that we can all put into practice.

Password Power!

Is security always about passwords? No, of course not, but poor passwords or password defaults that were never reset seem to show up as a root cause in many breaches.

The security experts we’ve spoken to have often brought up, without prompting from us, the sorry state of passwords. One of them, Per Thorsheim, who is in fact a password expert himself, reminded us that one answer to our bad password habits is two-factor authentication (TFA):

“From a security perspective, adding this two-factor authentication is everything. It increases security in such a way that in some cases even if I told you my password for my Facebook account, as an example, well because I have two-factor authentication, you won’t be able to log in. As soon as you type in my user name and password, I will be receiving a code by SMS from Facebook on my phone, which you don’t have access to. This is really good.”

We agree with Thorsheim that humans are generally not good at this password thing, and so TFA and biometric authentication will certainly be a part of our password future.
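To see why that code on your phone helps so much, here’s a minimal sketch of how a time-based one-time password (TOTP, RFC 6238) is derived: the kind of rolling code an authenticator app produces. The base32 secret is a made-up example.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current time-based one-time password (RFC 6238)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # e.g. '492039', changes every 30 seconds
```

Even an attacker who knows your password can’t compute the next code without the shared secret sitting on your phone.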

In the meantime, for those of us who still cling to plain-old passwords, Professor Justin Cappos told us a while back that there’s a simple way to generate better passwords:

“If you’re trying to generate passwords as a human, there are tricks you can do where you pick four dictionary words at random and then create a story where the words interrelate. It’s called the “correct horse battery staple” method!”

Correct-horse-battery-staple is just a way of using a story as a memory trick or mnemonic. It’s an old technique, but one that helps create hard-to-crack passwords.
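If you want to try it, here’s a minimal sketch that draws four words at random from a dictionary file; you supply the story that ties them together. The word-list path is a common Unix location and an assumption, so substitute any large word list you have.

```python
import secrets

# /usr/share/dict/words is a common Unix word list; substitute your own.
with open("/usr/share/dict/words") as f:
    words = [w.strip().lower() for w in f
             if w.strip().isalpha() and len(w.strip()) > 3]

# secrets (not random) gives cryptographically strong choices.
passphrase = "-".join(secrets.choice(words) for _ in range(4))
print(passphrase)  # e.g. 'correct-horse-battery-staple'
```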

One takeaway from these experts: change your home router admin passwords now (and use horse-battery-staple). Corporate IT admins should also take a good, hard look at their own passwords and avoid aiding and abetting hackers.

Cultivate (Privacy and Security) Awareness

Enabling TFA on your online accounts and generating better passwords goes a very long way to improving your security profile.

But we also learned that you need to step back and cultivate a certain level of privacy awareness in your online transactions.

We learned from attorney and privacy guru Alexandra Ross about the benefits of data minimization, both for the companies that collect data and the consumers who reveal it:

“One key thing is to stop, take a moment, and be mindful of what’s going on. What data am I being asked to submit when I sign up for a social media service?  And question why it’s being asked.

It’s worth the effort to try to read the privacy policies, or read consumer reviews of the app or online service.”

And

“If you’re speaking to the marketing team at a technology company—yeah, the default often is let’s collect everything. In other words, let’s have this very expansive user profile so that every base is covered and we have all these great data points.

But if you explain, or ask questions … then you can drill down to learn what’s really necessary for the data collection.”

In a similar vein, data scientist Kaiser Fung pointed out that often there isn’t much of a reason behind some of the data collection in the first place:

“It’s not just the volume of data, but the fact that the data today is collected without any design or plan in mind. Oftentimes, people collecting the data are really divorced from any kind of business problem.”

Listen up, IT and marketing people: think about what you’re doing before you put out your next contact form!

Ross and other PbD advocates preach the doctrine of data minimization: the less data you have, the lower your security risk is when there’s an attack.

As our privacy guru, Ross reminded us that there’s still a lot of data about us spread out in corporate data systems. Scott “Hacked Again” Schober, another security pro we chatted with, makes the same point based on his personal experiences:

“I was at an event speaking … and was asked if I’d be willing to see how easy it is to perform identity theft and compromise information on myself. I was a little reluctant but I said ok, everything else is out there already, and I know how easy it is to get somebody’s information. So I was the guinea pig. It was Kevin Mitnick, the world’s most famous hacker, who performed the theft. Within 30 seconds and at the cost of $1, he pulled up my social security number.”

There’s nothing inherently wrong with companies storing personal information about us. The larger point is to be savvy about what you’re being asked to provide and take into account that corporate data breaches are a fact of life.

Credit cards can be replaced and passwords changed, but details about our personal preferences (food, movies, reading habits) and our social security numbers are forever, and they’re a great source of raw material for hackers to use in social engineering attacks.

Data is Valuable

We’ve talked to attorneys and data scientists, but we had the chance to talk to both in the form of Bennett Borden. His bio is quite interesting: in addition to being a litigator at Drinker Biddle, he’s also a data scientist. Borden has written law journal articles about the application of machine learning and document analysis to e-discovery and other legal transactions.

Borden explained how as employees we all leave a digital trail in the form of emails and documents, which can be quite revealing. He pointed out that this information can be useful when lawyers are trying to work out a fair value for a company that’s being purchased.

He was called in to do a data analysis for a client and was able to show that internal discussions indicated the asking price for the company was too high:

“We got millions of dollars back on that purchase price, and we’ve been able to do that over and over again now because we are able to get at these answers much more quickly in electronic data.”

So information is valuable in a strictly business sense. At Varonis, this is not news to us, but it’s still powerful to hear it from someone who is immersed in corporate content as part of his job.

To summarize: as consumers and as corporate citizens, we should all be more careful with this valuable substance. Don’t give it away easily, and protect it if it’s in your possession.

More Than Just a Good Idea

Privacy by Design came up in a few of our discussions with experts, and one of its principles, privacy as a default setting, is a hard one for companies to accept. PbD, though, insists that privacy is not a zero-sum game: you can have tough privacy controls and profits.

In any case, for companies that do business in the EU, PbD is not just a good idea but in fact it will be the law in 2018. The concept is explicitly spelled out in the General Data Protection Regulation’s (GDPR) article 25, “Data protection by design and by default”.

We’ve been writing about the GDPR for the last two years and of its many implications. But one somewhat overlooked consequence is that the GDPR will apply to companies outside of the EU.

We spoke with data security compliance expert Sheila FitzPatrick, who really emphasized this point:

“The other part of GDPR that is quite different–and it’s one of the first times that this idea will be put in place– is that it doesn’t just apply to companies that have operations within the EU. Any company regardless of where they are located and regardless of whether or not they have a presence in the EU, if they have access to the personal data of any EU citizen, they will have to comply with the regulations under the GDPR. That’s a significant change.”

This legal idea is sometimes referred to as extraterritoriality. And US e-commerce and web service companies in particular will find themselves under the GDPR when EU citizens interact with them. IT best practices that experts like to talk about as things you should do are becoming legal requirements for them. It’s not just a good idea!


Our final advice for 2016: read the writing on the wall and position yourself to align with PbD ideas on data minimization, consumer consent, and data protection.

HIPAA and Cloud Provider Refresher

As far as regulators are concerned, the cloud is a relatively recent development. Still, they’ve done a pretty good job of dealing with this ‘new’ computing model. Take HIPAA.

We wrote that if a cloud service processes or stores protected health information (PHI), it’s considered, in HIPAA-ese, a business associate or BA. As you may recall, the Final Omnibus Rule from 2013 says that BAs fall under HIPAA’s rules.

A covered entity — health provider or insurer — also must have a contract in place saying that the cloud service provider or CSP will comply with key HIPAA safeguards — technical, physical, and administrative. The Department of Health and Human Services (HHS), the agency in charge of enforcing HIPAA, has conveniently provided a sample contract.

The relationship between a covered entity and CSP can be a confusing topic for security and compliance pros. So the HHS folks kindly put together this wonderful FAQ on the topic.

You should read it!

And please note that CSPs are under a breach notification requirement, though the exact details of reporting back to the covered entity would have to be worked out in the contract.

One key point to keep in mind is that the reason behind having a BA contract is to make sure that the CSP knows they’re being asked to process PHI.

And if a somewhat careless or unscrupulous hospital doesn’t make the CSP sign such a contract, it still doesn’t matter!

HIPAA rules say the BA can’t plead ignorance of the law (except in very special cases). In this situation, the hospital would get fined for the lapse of not having a contract in place, and the CSP would still be held responsible for PHI security.

The higher goal is preventing a covered entity from outsourcing compliance responsibility to an indifferent third-party, and avoiding an ensuing legal finger-pointing exercise when there’s a security violation.

CSPs have done a good job of keeping up with changing data security regulations, and they’re very aware of the HIPAA rules. For example, Amazon knows about the BA contracts, as do Google and many other cloud players.

Trying to learn a new language can be difficult! Become fluent in HIPAA with our free five-part email HIPAA class.

The Federal Trade Commission Likes the NIST Cybersecurity Framework (and You Should Too)

Remember the Cybersecurity Framework that was put together by the folks over at the National Institute of Standards and Technology (NIST)?  Sure you do! It came about because the US government wanted to give the private sector, specifically the critical infrastructure players in transportation and energy, a proven set of data security guidelines.

The Framework is based heavily on NIST’s own 800-53, a sprawling 400-page set of privacy and security controls used within the federal government.

To make NIST 800-53 more digestible for the private sector, NIST reorganized and condensed the most important controls and concepts.

Instead of the original’s 18 broad control categories with zillions of subcontrols, the Cybersecurity Framework — check out the document — is broken up into just five functional categories — Identify, Protect, Detect, Respond, and Recover — with a manageable number of controls under these groupings.

Students and fans of NIST 800-53 will recognize some of the same two-letter abbreviations being used in the Cybersecurity Framework (see below).


NIST Cybersecurity: simplified functional view of security controls.

By the way, this is a framework. And that means you use the Framework for Improving Critical Infrastructure Cybersecurity – the official name — to map into your favorite data security standard.

Currently, the Framework supports mappings into (not surprisingly) NIST 800-53, but also the other usual suspects, including COBIT 5, SANS CSC, ISO 27001, and ISA 62443.

Keep in mind that the Cybersecurity Framework is an entirely voluntary set of guidelines—none of the infrastructure companies are required to implement it.

The FTC’s Announcement

Since this is such a great set of data security guidelines for critical infrastructure, could the Cybersecurity Framework also serve the same purpose for everyone else—from big box retailers to e-commerce companies?

The FTC thinks so! At the end of August, the FTC announced on its blog that it has given the Cybersecurity Framework its vote of approval.

Let me explain what this means. As a regulatory agency, the FTC is responsible for enforcing powerful regulations, including Gramm-Leach-Bliley, COPPA, and FCRA, as well as its core statutory function of policing “unfair or deceptive acts or practices.”

When dealing with data security or privacy related implications of the laws, the FTC needs a benchmark for reasonable security measures. Or as they put it, “the FTC’s cases focus on whether the company has undertaken a reasonable process to secure data.”

If a company follows the Cybersecurity Framework, is this considered implementing a reasonable process?

The answer is in the affirmative according to the FTC. Or in FTC bureaucratic-speak, the enforcement actions they’ve taken against companies for data security failings “align well with the Framework’s Core functions.”

Therefore if you identify risks (Identify), put in place security safeguards (Protect), continually monitor for threats (Detect), implement a breach response program (Respond), and have a way to restore functions after an incident (Recover), you’ll likely not hear from the FTC regulators.

By the way, check out their Start with Security, a common-sense guide to data security, which contains some very Varonis-y ideas.

We approve!

If the GDPR Were in Effect, Yahoo Would Have to Write a Large Check

Meanwhile, back in the EU, two data protection authorities have announced they’ll be looking into Yahoo’s breach-apocalypse. Calling the scale of the attack “staggering”, the UK’s Information Commissioner’s Office (ICO) has signaled it will be conducting an investigation. By the way, the ICO rarely comments this way on an ongoing security event.

In Ireland, where Yahoo has its European HQ, the Data Protection Commissioner is asking questions as well.

And here in the US, the FBI is getting involved because the Yahoo attack may involve a state actor, possibly Russia.

Under the Current Laws

One of the (many) stunning things about this incident is that Yahoo knew about the breach earlier this summer when it learned that its users’ data was for sale on the darknet.

And the stolen data appears to have come from a hack that occurred way back in 2014.

It’s an obvious breach notification violation.

Or not!

In the US, the only federal notification law with any teeth is for medical PII held by “covered entities”— insurance companies, hospitals, and providers. In other words, HIPAA.

So there’s little that can be done at the US federal level against the extremely long Yahoo delay in reporting the breach.

In the states, there are notification laws — currently 47 states have them — that would kick in but the thresholds are typically based on harm caused to consumers, which may be difficult to prove in this incident.

The notable exception, as always, is California, where Yahoo has its corporate offices in Sunnyvale. California is one of the few states that requires notification upon the mere discovery of unauthorized access.

So Yahoo can expect a visit from the California attorney general in its future.

One might think that the EU would be tougher on breaches than the US.

But under the current EU Data Protection Directive (DPD), there’s no breach notification requirement. That was one of the motivations for the new General Data Protection Regulation that will go into effect in 2018.

If You Can’t Keep Your Head About You During A Breach

Yahoo may not be completely out of the legal woods in the EU: the DPD does require appropriate security measures to be taken — see article 16. So in theory an enforcement action could be launched based on Yahoo’s lax data protection.

But as a US company with its principal collection servers outside the EU, Yahoo may fall beyond the DPD’s reach. This is a wonky issue; if you want to learn whether the current DPD rules cover non-EU businesses, read this post on the legal analysis.

And this all leads to why the EU rewrote the current data laws as the GDPR, which covers breach notification and “extra-territoriality” — that is, controllers outside the EU — as well as putting in place eye-popping fines.

Yeah, you should be reading our EU data regulations white paper to get the big picture.

If the GDPR were currently the law — the GDPR will go into effect in May 2018 — and the company hadn’t reported the exposure of 500 million user records to a DPA within 72 hours, then it would face massive fines.

How massive?

Doing the GDPR Breach Math

A violation of GDPR’s article 33 requirement to notify a DPA could bring a fine as high as 2% of global revenue.

Yahoo’s revenue numbers have been landing north of $4.5 billion in recent years.

In my make-believe scenario, Yahoo could be paying $90 million or more to the EU.

And yes I’m aware that Verizon with over $130 billion in revenue is in the process of buying Yahoo.

Supposing the acquisition had already gone through, then Verizon would be on the hook for 2% of $134 billion, or about $2.7 billion.
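Here’s the back-of-the-envelope math as a quick sketch. The revenue figures are the rounded numbers from this post, and the 2% rate is the cap for notification failures (GDPR fines are “up to” these amounts, not automatic).

```python
def max_notification_fine(global_revenue: float, rate: float = 0.02) -> float:
    """Upper bound on a GDPR fine for a notification failure:
    2% of worldwide annual turnover."""
    return global_revenue * rate

print(f"Yahoo:   ${max_notification_fine(4.5e9):,.0f}")   # $90,000,000
print(f"Verizon: ${max_notification_fine(134e9):,.0f}")   # $2,680,000,000
```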

There’s a lesson here. Large US and other multinational companies with a significant worldwide web presence should be planning now for the possibility of an epic Yahoo-style breach in their post-2018 future.

Interview with Attorneys Bret Cohen and Sian Rudgard, Hogan Lovells’ GDPR Experts

We are very thankful that Bret Cohen and Sian Rudgard took some time out of their busy schedules at the international law firm of Hogan Lovells to answer this humble blogger’s questions on the EU General Data Protection Regulation (GDPR). Thanks Bret and Sian!

Bret writes regularly on GDPR for HL’s Chronicle of Data Protection blog, one of our favorite resources. Sian had worked at the ICO, the UK’s data protection authority, and helped draft the binding corporate rules (BCR) governing internal transfers of personal data within companies for the EU’s Article 29 Working Party.

So we’re in good hands with these two. Both are now part of HL’s Privacy and Cybersecurity practice. The interview (via email) is heavily based on questions our own customers have been asking us. We’re very excited to share with you their legal expertise of the GDPR.

Inside Out Security:  When exactly is a data controller required to conduct a data protection impact assessment (“DPIA”)?  Must the controller always undertake a DPIA for new uses of certain types of data (e.g., biometrics, facial images)?

Hogan Lovells: The DPIA requirement is linked to processing “likely to result in a high risk for the rights and freedoms of natural persons,” taking into account “the nature, scope, context and purposes of the processing.”  This is a fact-specific standard, and one therefore that is likely to be interpreted differently by different data protection authorities (“DPAs”), although it is generally understood to refer to significant detrimental consequences for individuals.

The GDPR requires a DPIA in three specific circumstances:

  • Where the processing involves a “systematic and extensive” evaluation of an individual in order to make an automated decision about the individual (e.g., profiling) that has a legal effect on that individual (e.g., denial of benefits).
  • The processing “on a large scale” of sensitive categories of personal data, specifically (a) personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, (b) genetic data, biometric data for the purpose of uniquely identifying an individual, data concerning health, or data concerning an individual’s sex life or sexual orientation, and (c) personal data pertaining to criminal convictions and offenses.
  • A systematic monitoring of a publicly accessible area on a large scale (e.g., through CCTV).

The DPAs have promised to provide guidance on this aspect of the Regulation before the end of 2016. Individual DPAs also have authority to publish guidance on the kinds of processing operations that require a DPIA and those that do not, and these individual guidance documents might differ from country to country.

IOS: For what types of data and processing would data controllers be required to engage in a “prior consultation” with a DPA?

HL: Data controllers are required to consult with a DPA prior to engaging in data processing where a DPIA indicates that the processing “would result in a high risk” in the absence of measures taken by the controller to mitigate the risk. “High risk” is not defined, but it is likely to carry a similar meaning to the threshold DPIA requirement described above: that is, a significant detrimental consequence for individuals. The requirement to engage in a prior consultation will also likely be influenced by DPA guidance on the issue, and we can expect further guidance on this point before the end of this year.


IOS: What does a DPIA under the GDPR look like?

HL: A DPIA is not too complicated. A suggested approach is to review what the proposed data processing activities involve as against the relevant requirements of the Regulation (transparency, legal basis for processing, data retention, data security, etc.). This can be done by creating a standard DPIA template with these fields to fill in, which then allows for a final assessment of risk by an assessor who, based on the analysis provided, will recommend the steps to take to address any risks identified.


IOS: A lot of Varonis customers already follow data security standards (principally ISO 27001). Articles 42 and 43 of the GDPR seem to suggest outside, accredited certification and other bodies will have the power to issue a GDPR certification. Does this mean that existing accreditations will be sufficient to comply with the GDPR standard?

HL: Under Articles 42 and 43, DPAs can approve certifications issued by certain certification bodies as creating the presumption of compliance with various parts of the GDPR. However, the relevant DPA, or the EU-wide group of DPAs, the European Data Protection Board (currently known as the Article 29 Working Party), would have to approve a particular certification before it can be deemed sufficient to comply with the GDPR standard.

The Article 29 Working Party has indicated that it intends to provide further guidance on this topic before the end of 2016.


IOS: The GDPR requires notification of data breaches within 72 hours of discovery, including “the measures taken or proposed to be taken by the controller to address the personal data breach, including, where appropriate, measures to mitigate its potential adverse effects.”  What types of “measures to mitigate” the breach’s “potential adverse effects” will be required?

HL: The “adverse effects” mentioned here reference potential adverse effects to the individual. There are no hard and fast rules as yet, but this standard encourages companies to provide mitigating measures as may be appropriate in a given situation to help protect individuals. These can include information about the offering of credit monitoring services, information about the offering of identity theft insurance, or steps taken to confirm the deletion of the personal data by unauthorized third parties. It is also possible that some DPAs may request information on technical and other remedial security measures taken by the company. The specific requirements are likely to be borne out through additional regulatory guidance and practice.


IOS: When a data subject requests erasure of personal data, does that mean that data must be deleted everywhere the personal data is located (including in emails, memos, spreadsheets, etc.)?

HL: The right to erasure—also known as the “right to be forgotten”— was one of the more controversial aspects of the GDPR when it was first published, not least because the practical limits on a controller’s obligation to delete personal data were unclear.  The right to erasure is not unlimited and an organization is only required to erase the data when one of the grounds specified in the Regulation apply. These include that the personal data is no longer needed for its original purpose; the individual withdraws consent, or objects to the processing; the data must be erased in order to comply with a legal obligation to which the controller is subject; the data has been collected in relation to the offering of information society services to children; or the processing is unlawful.

There are exemptions to the erasure right, including where an individual objects to the processing, but the organization can establish an overriding legitimate ground to continue the processing; or where the individual withdraws consent to the processing, and the organization has another basis on which to rely to continue the processing. Other exemptions include where processing is necessary for exercising a right of freedom of information or expression, for compliance with a legal obligation, for reasons of public interest in relation to health care, or for exercising or defending legal claims.

In theory, this means that an organization should take reasonable steps to delete personal data subject to a valid erasure request wherever it resides, although we recognize that there may be practical limitations on the ability of an organization to delete certain information. The DPAs do have the ability under the GDPR to introduce further exemptions to this provision but we do not know yet what these will look like.

Organizations do have room to put forward arguments that they have overriding legitimate grounds to continue processing personal data in certain circumstances. Where consent has been withdrawn in many cases it is also likely that there will be another basis on which organizations can continue to process at least some of the data (e.g., legitimate business interests). Organizations should document the steps they take to comply (or choose not to comply) with erasure requests, to justify the reasonableness of those steps if pressed by a DPA.

Where a data controller has made personal data public (e.g., by publishing it on a website) and receives a valid erasure request for that personal data, the GDPR requires the controller to, “taking account of available technology and the cost of implementation,” take “reasonable steps” to inform other third-party controllers who have access to the personal data of the erasure request.

This is an area on which we can expect further guidance from the DPAs, although it is not in the list of first wave guidance that we are expecting from the Article 29 Working Party this year.


IOS: An organization must appoint a data protection officer (“DPO”) if, among other things, “the core activities” of the organization require “regular and systematic monitoring of data subjects on a large scale.”  Many Varonis customers are in the B2B space, where they do not directly market to consumers. Their customer lists are perhaps in the tens of thousands of recipients up to the lower six-figure range. First, does the GDPR apply to personal data collected from individuals in a B2B context? And second, when does data processing become sufficiently “large scale” to require the appointment of a DPO?

HL: Yes, the GDPR applies to personal data collected from individuals in a B2B context (e.g., business contacts).  The GDPR’s DPO requirement, however, is not invoked through the maintenance of customer databases.  The DPO requirement is triggered when the core activities of an organization involve regular and systematic monitoring of data subjects on a large scale, or the core activities consist of large scale processing of special categories of data (which includes data relating to health, sex life or sexual orientation, racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, or biometric or genetic data).

“Monitoring” requires an ongoing tracking of the behaviors, personal characteristics, or movements of individuals, such that the controller can ascertain additional details about those individuals that it would not have known through the discrete collection of information.

Therefore, from what we understand of Varonis’ customers’ activities, it is unlikely that a DPO will be required, although this is another area on which we can expect to see guidance from the DPAs, particularly in the European Member States where having a DPO is an existing requirement (such as Germany).

Whether or not a company is required to appoint a DPO, if the company will be subject to the GDPR, it will still need to be able to comply with the “Accountability” record-keeping requirements of the Regulation and demonstrate how it meets the required standards. This will involve designating a responsible person or team to put in place and maintain appropriate policies and procedures, including data privacy training programs.

[Podcast] Attorney and Data Scientist Bennett Borden, Part I: Data Analysis Techniques

This article is part of the series "[Podcast] Attorney and Data Scientist Bennett Borden".



Once we heard Bennett Borden, a partner at the Washington law firm of Drinker Biddle, speak at the CDO Summit about data science, privacy, and metadata, we knew we had to reengage him to continue the conversation.

His bio is quite interesting: in addition to being a litigator, he’s also a data scientist. He’s a sought-after speaker on legal tech issues. Bennett has written law journal articles about the application of machine learning and document analysis to e-discovery and other legal transactions.

In this first part in a series of podcasts, Bennett discusses the discovery process and how data analysis techniques came to be used by the legal world. His unique insights on the value of the file system as a knowledge asset as well as his perspective as an attorney made for a really interesting discussion.


HHS to Investigate Smaller HIPAA Privacy Breaches

As a reader of this blog, you know all about Health and Human Services’ (HHS) wall of shame. That’s where breaches involving protected health information (PHI) affecting 500 or more records are posted for the world to see. It’s actually a requirement of HIPAA – technically, the HITECH Act. But now there’s been a slight change in breach policy.

The Office of Civil Rights (OCR), which is part of HHS, investigates all large HIPAA breaches. But this month they announced they will increase efforts to look into smaller breaches that come to their attention.

Regional offices will be given discretion to prioritize which smaller breaches to look into. Some of the factors that they’ll take into account are “breaches that involve unwanted intrusions to IT systems (for example, by hacking)” and “instances where numerous breach reports from a particular covered entity or business associate raise similar issues.”

The investigations will likely take the form of offsite “desk audits”.

Attorneys in data compliance will tell you that to pass these audits you’ll need to have your HIPAA paperwork in order — documented security and privacy policies, recent risk assessments, and breach reporting procedures are at the top of the list.

This is just another indication of how HHS/OCR is stepping up its auditing and HIPAA enforcement.

Covered entities: you’ve been warned!


Data Privacy US-Style: Our National Privacy Research Strategy

While the EU has been speeding ahead with its own digital privacy laws, the US has been taking its own steps. Did you know there’s a National Privacy Research Strategy (NPRS) white paper that lays out plans for federally funded research projects into data privacy?

Sure, the Federal Trade Commission has taken up the data privacy mantle in the US, bringing actions against social media, hotels, and data brokers. But there’s still more to do.

So you can think of the NPRS as a blueprint for the research and development phase of the US’s own privacy initiatives. By the way, the US government spent about $80 million on privacy research efforts in 2014.

What’s the Plan?

I scanned through this 30+ page report looking for some blog-worthy points to share. I found a few.

First, the NPRS has interesting ideas on how to define privacy.

The authors of the paper, representing major federal agencies including the FTC, don’t have a firm definition but instead they view privacy as being characterized by four areas: subjects, data, actions, and context.

Essentially, consumers release data into a larger community (based on explicit or implicit rules about how the data will be used), on which certain actions are then taken – processing, analysis, sharing, and retention. The idea parallels our own Varonis approach of using metadata to provide context for user actions on file data. We approve of the NPRS approach!


The larger point is that privacy has a context, and that context shapes our privacy expectations and what we consider a privacy harm.

NPRS is partially focused on understanding our expectations in different contexts and ways to incentivize us to make better choices.

Second, the plan takes up the sexier matter of privacy engineering. In other words, research into building privacy widgets that software engineers can assemble together to meet certain objectives.

I for one am waiting for a Privacy by Design (PbD) toolkit. We’ll see.

The third major leg of this initiative targets the transparency of data collection, sharing, and retention. As it stands now, you click “yes”, affirming you’ve read the multi-page legalese in online privacy agreements. And then you’re surprised when, at some point, you’re being spammed by alternative medical therapy companies.

The good news is that some are experimenting with “just in time” disclosures that provide bite-size nuggets of information at various points in the transaction — allowing you, potentially, to opt out.

More research needs to be undertaken, and NPRS calls for developing automated tools to watch personal data information flows and report back to consumers.

And this leads to another priority: ensuring that personal information flows meet agreed-upon privacy objectives.

Some of the fine print for this goal will sound familiar to Varonis-istas. NPRS suggests adding tags to personal data — essentially metadata — and processing the data so that consumer privacy preferences are then honored.

Of course, this would require privacy standardization and software technology that could quickly read the tags to see if the processing meets legal and regulatory standards. This is an important area of research in the NPRS.
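As a thought experiment, here’s a tiny sketch of what tag-based enforcement could look like. The tag vocabulary and preference model are invented for illustration; the NPRS only calls for research into standards of this kind.

```python
from dataclasses import dataclass, field

@dataclass
class TaggedRecord:
    value: str
    allowed_purposes: set = field(default_factory=set)  # the consumer's consent tags

def may_process(record: TaggedRecord, purpose: str) -> bool:
    """Honor the consumer's preference tags before any processing step."""
    return purpose in record.allowed_purposes

email = TaggedRecord("jane@example.com", {"billing", "support"})
print(may_process(email, "billing"))    # True
print(may_process(email, "marketing"))  # False: blocked by the tags
```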

In the Meantime, the FTC Needs You!

You’re now fired up after reading this post and wondering whether you—white hat researcher, academic, or industry pro—can have your voice on privacy heard by the US government.

You can!

The FTC is now calling for privacy papers and presentations for its second annual PrivacyCon, to be held in Washington in January 2017. You can check out the papers from last year’s conference here.

If you do submit and plan to speak, let us know! We’d love to follow up with you.

What is the Minimum Acceptable Risk Standards for Exchanges (MARS-E)?

Under the Affordable Care Act (ACA) of 2010, there are now online marketplaces to buy health insurance. These are essentially websites that allow consumers to shop around for an insurance policy by comparing plans from different private providers.

Result: US consumers can purchase health insurance using the same technology that allows them to buy books, gadgets, and artisanal coffees on the web.

I think we can agree that the health data collected on these websites deserves some extra protections.

The Origin of MARS-E

To address security issues of the exchanges, the ACA required the Department of Health and Human Services (HHS) to come up with data security standards.

Specifically, the Centers for Medicare & Medicaid Services (CMS), a part of HHS, was made responsible for providing guidance and oversight for the exchanges, including defining technical standards.

CMS then established the Minimum Acceptable Risk Standards for Exchanges (MARS-E), which defines a series of security controls. MARS-E is now in its second version, which was released in 2015.

Those familiar with NIST 800-53 — a security standard underlying other federal data laws such as FISMA — will immediately recognize the two-letter abbreviations used by MARS-E, which borrows 17 control families from NIST 800-53. For the record, they are:

Access Control (AC), Awareness and Training (AT), Audit and Accountability (AU), Security Assessment and Authorization (CA), Configuration Management (CM), Contingency Planning (CP), Identification and Authentication (IA), Incident Response (IR), Maintenance (MA), Media Protection (MP), Physical and Environmental Protection (PE), Planning (PL), Personnel Security (PS), Risk Assessment (RA), System and Services Acquisition (SA), Systems and Communication Protection (SC), and Systems and Information Integrity (SI).

The complete catalog of controls can be found here.
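If you’re sketching an internal checklist against MARS-E, the 17 families are easy to encode. This is just a convenience mapping of the names listed above, not an official artifact.

```python
# The 17 NIST 800-53 control families borrowed by MARS-E (from the list above).
MARS_E_FAMILIES = {
    "AC": "Access Control",
    "AT": "Awareness and Training",
    "AU": "Audit and Accountability",
    "CA": "Security Assessment and Authorization",
    "CM": "Configuration Management",
    "CP": "Contingency Planning",
    "IA": "Identification and Authentication",
    "IR": "Incident Response",
    "MA": "Maintenance",
    "MP": "Media Protection",
    "PE": "Physical and Environmental Protection",
    "PL": "Planning",
    "PS": "Personnel Security",
    "RA": "Risk Assessment",
    "SA": "System and Services Acquisition",
    "SC": "Systems and Communication Protection",
    "SI": "Systems and Information Integrity",
}

# e.g., seed an audit checklist with one entry per family:
checklist = {code: {"family": name, "reviewed": False}
             for code, name in MARS_E_FAMILIES.items()}
print(len(checklist))  # 17
```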

The controls provide only guidance — they are not meant to force specific security technologies on the exchanges!

HIPAA Confusion

You may ask whether HIPAA rules on privacy and security for protected health information (PHI) also apply to the health exchanges.

Great question!

Health exchanges are not covered entities under HIPAA.  So HIPAA’s Privacy and Security rules wouldn’t seem to apply.

But … are they Business Associates (BAs) of the covered entity?

As you may recall, under the new rules published back in 2013 (the “HIPAA Omnibus Final Rule”), third-party contractors and their subcontractors who handle or process PHI fall under HIPAA.

The short answer is that the exchanges can be BAs if they perform more than minimal data functions and have a deeper relationship with the insurer.

It’s really the same question that comes up with health wearables. HIPAA doesn’t apply to these gadgets, unless the gadget provider has a direct relationship with the insurer or health plan – for example, through a corporate wellness plan.

To get a little more insight into this confusing issue of health exchanges and HIPAA, read this article.

In the meantime, you can peruse the table below showing the mapping of relevant MARS-E controls to Varonis products.


MARS-E Control Family / Requirement / Varonis Solution

AC — Access Control

AC-2 Account Management

Requirement:

a. Identifying account types (i.e., individual, group, system, application, guest/anonymous, and temporary);

b. Establishing conditions for group membership;

c. Identifying authorized users of the information system and specifying access privileges.

Varonis Solution: By combining user and group information taken directly from Active Directory, LDAP, NIS, or other directory services with a complete picture of the file system, Varonis DatAdvantage gives organizations a complete picture of their permissions structures. Both logical and physical permissions are displayed and organized, highlighting and optionally aggregating NTFS and share permissions. Flag, tag, and annotate your files and folders to track, analyze, and report on users, groups, and data. Varonis DatAdvantage also shows you every user and group that can access data, as well as every folder that can be accessed by any user or group.

AC-6 Least Privilege

Requirement:

a. Employs the concept of least privilege, allowing only authorized accesses for users (and processes acting on behalf of users) that are necessary to accomplish assigned tasks in accordance with Exchange missions and business functions.

Varonis Solution: Varonis DataPrivilege helps organizations not only define the policies that govern who can access, and who can grant access to, unstructured data; it also enforces the workflow and the desired action to be taken (i.e., allow, deny, allow for a certain time period). This has a two-fold effect on the consistent and broad communication of the access policy: 1) it unites all of the responsible parties, including data owners, auditors, data users, and IT, around the same set of information, and 2) it allows organizations to continually monitor the access framework in order to make changes and optimize both for compliance and for continuous enforcement of warranted access.

AU — Audit and Accountability

AU-2 Auditable Events

Requirement:

a. … that the information system must be capable of auditing the list of auditable events specified in the Implementation Standards.

Implementation Standards — generate audit records for the following events:

h. File creation;

i. File deletion;

j. File modification;

m. Use of administrator privileges.

Varonis Solution: Varonis DatAdvantage helps organizations examine and audit the use of ordinary and privileged access accounts to detect and prevent abuse. With a continual audit record of all file, email, SharePoint, and Directory Services activity, DatAdvantage provides visibility into users’ actions. The log can be viewed interactively or via email reports. DatAdvantage can also identify when users have administrative rights they do not use or need, and provides a way to safely remove excess privileges without impacting the business.

Through Varonis DataPrivilege, membership in administrative and other groups can be tightly controlled, audited, and reviewed.

Varonis DatAlert can be configured to send real-time alerts on a number of actions, including the granting of administrative rights to a user or group. This allows the organization to detect, in real time, when privileged access has been granted erroneously and act before abuse occurs. Real-time alerts can also be triggered when administrative users access, modify, or delete business data.

AU-6 Audit Review, Analysis, and Reporting

Requirement:

a. Reviews and analyzes information system audit records regularly for indications of inappropriate or unusual activity, and reports findings to designated organizational officials.

Implementation Standards:

5. Use automated utilities to review audit records at least once every seven (7) days for unusual, unexpected, or suspicious behavior.

Varonis Solution: Varonis DatAlert provides real-time alerting based on file activity, Active Directory changes, permissions changes, and other events. Alert criteria and output are easily configurable so that the right people and systems can be notified about the right things, at the right times, in the right ways. DatAlert improves your ability to detect possible security breaches and misconfigurations. DatAlert can be configured to alert on changes made outside a particular time window.

Varonis DatAdvantage monitors every touch of every file on the file system, then normalizes, processes, and stores the events in a normalized database so that they are quickly sortable and searchable. Detailed information for every file event is provided; all data can be reported on and provided to data owners. Data collection does not require native object success auditing on Windows.

IR — Incident Response

IR-6.1 Incident Reporting

Requirement: The organization employs automated mechanisms to assist in the reporting of security incidents.

Varonis Solution: Varonis DatAlert provides real-time alerting based on file activity, Active Directory changes, permissions changes, and other events. Alert criteria and output are easily configurable so that the right people and systems can be notified about the right things, at the right times, in the right ways. DatAlert improves your ability to detect possible security breaches and misconfigurations. DatAlert can be configured to alert on changes made outside a particular time window.
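As a rough illustration of the AU-6 idea (automated review of audit records for unusual behavior), here’s a toy spike detector that flags users whose daily file-event count jumps far above their own median. It’s a generic pattern sketch, not how DatAlert actually works internally, and the 10x multiplier is an arbitrary placeholder.

```python
from collections import defaultdict
from statistics import median

def flag_unusual_activity(events, multiplier: float = 10.0):
    """events: iterable of (user, day, count) audit-log rollups.
    Flags any day where a user's event count exceeds `multiplier` times
    their median daily count -- a crude per-user spike detector."""
    per_user = defaultdict(list)
    for user, day, count in events:
        per_user[user].append((day, count))

    alerts = []
    for user, rows in per_user.items():
        baseline = median(c for _, c in rows)
        for day, count in rows:
            if baseline and count > multiplier * baseline:
                alerts.append((user, day, count))
    return alerts

# Four quiet days, then a burst of file touches worth investigating:
log = [("alice", d, c) for d, c in enumerate([40, 35, 42, 38, 900])]
print(flag_unusual_activity(log))  # [('alice', 4, 900)]
```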