Category Archives: Compliance & Regulation

HIPAA and Cloud Provider Refresher

As far as regulators are concerned, the cloud is a relatively recent development. However, they've done a pretty good job of dealing with this 'new' computing model. Take HIPAA.

We've written before that if a cloud service processes or stores protected health information (PHI), it's considered, in HIPAA-ese, a business associate or BA. As you may recall, the 2013 Final Omnibus Rule says that BAs fall under HIPAA's rules.

A covered entity (a health provider or insurer) must also have a contract in place saying that the cloud service provider, or CSP, will comply with key HIPAA safeguards: technical, physical, and administrative. The Department of Health and Human Services (HHS), the agency in charge of enforcing HIPAA, has conveniently provided a sample contract.

The relationship between a covered entity and CSP can be a confusing topic for security and compliance pros. So the HHS folks kindly put together this wonderful FAQ on the topic.

You should read it!

And please note that CSPs are under a breach notification requirement, though the exact details of reporting back to the covered entity would have to be worked out in the contract.

One key point to keep in mind is that the reason behind having a BA contract is to make sure that the CSP knows they’re being asked to process PHI.

And if a somewhat careless or unscrupulous hospital doesn't make the CSP sign such a contract, the CSP is still not off the hook!

HIPAA rules say the BA can't plead ignorance of the law (except in very special cases). In this situation, the hospital would get fined for the lapse of not offering a contract, and the CSP would still be held responsible for PHI security.

The higher goal is preventing a covered entity from outsourcing compliance responsibility to an indifferent third-party, and avoiding an ensuing legal finger-pointing exercise when there’s a security violation.

CSPs have done a good job of keeping up with changing data security regulations, and they're very aware of the HIPAA rules. For example, Amazon knows about the BA contracts, as do Google and many other cloud players.

Trying to learn a new language can be difficult! Become fluent in HIPAA with our free five-part email HIPAA class.

The Federal Trade Commission Likes the NIST Cybersecurity Framework (and You Should Too)

Remember the Cybersecurity Framework that was put together by the folks over at the National Institute of Standards and Technology (NIST)? Sure you do! It came about because the US government wanted to give the private sector, specifically the critical infrastructure players in transportation and energy, a proven set of data security guidelines.

The Framework is based heavily on NIST’s own 800-53, a sprawling 400-page set of privacy and security controls used within the federal government.

To make NIST 800-53 more digestible for the private sector, NIST reorganized and condensed the most important controls and concepts.

Instead of the original's 18 broad control categories with zillions of subcontrols, the Cybersecurity Framework (check out the document) is broken up into just five functional categories: Identify, Protect, Detect, Respond, and Recover, with a manageable number of controls under each grouping.

Students and fans of NIST 800-53 will recognize some of the same two-letter abbreviations being used in the Cybersecurity Framework (see below).


NIST Cybersecurity: simplified functional view of security controls.

By the way, this is a framework. And that means you use the Framework for Improving Critical Infrastructure Cybersecurity (the official name) to map into your favorite data security standard.

Currently, the Framework supports mappings into (not surprisingly) NIST 800-53, but also the other usual suspects, including COBIT 5, SANS CSC, ISO 27001, and ISA 62443.

Keep in mind that the Cybersecurity Framework is an entirely voluntary set of guidelines—none of the infrastructure companies are required to implement it.

The FTC’s Announcement

Since this is such a great set of data security guidelines for critical infrastructure, could the Cybersecurity Framework also serve the same purpose for everyone else—from big box retailers to e-commerce companies?

The FTC thinks so! At the end of August, the FTC announced on its blog that it has given the Cybersecurity Framework its vote of approval.

Let me explain what this means. As a regulatory agency, the FTC is responsible for enforcing powerful regulations, including Gramm-Leach-Bliley, COPPA, and FCRA, as well as its core statutory function of policing “unfair or deceptive acts or practices.”

When dealing with the data security or privacy-related implications of these laws, the FTC needs a benchmark for reasonable security measures. Or as they put it, “the FTC’s cases focus on whether the company has undertaken a reasonable process to secure data.”

If a company follows the Cybersecurity Framework, is this considered implementing a reasonable process?

The answer is in the affirmative according to the FTC. Or in FTC bureaucratic-speak, the enforcement actions they’ve taken against companies for data security failings “align well with the Framework’s Core functions.”

Therefore if you identify risks (Identify), put in place security safeguards (Protect), continually monitor for threats (Detect), implement a breach response program (Respond), and have a way to restore functions after an incident (Recover), you’ll likely not hear from the FTC regulators.

By the way, check out their Start with Security, a common-sense guide to data security, which contains some very Varonis-y ideas.

We approve!

If the GDPR Were in Effect, Yahoo Would Have to Write a Large Check

Meanwhile, back in the EU, two data protection authorities have announced they’ll be looking into Yahoo’s breach-apocalypse. Calling the scale of the attack “staggering”, the UK’s Information Commissioner’s Office (ICO) has signaled it will conduct an investigation. By the way, the ICO rarely comments this way on an ongoing security event.

In Ireland, where Yahoo has its European HQ, the Data Protection Commissioner is asking questions as well.

And here in the US, the FBI is getting involved because the Yahoo attack may involve a state actor, possibly Russia.

Under the Current Laws

One of the (many) stunning things about this incident is that Yahoo knew about the breach earlier this summer when it learned that its users’ data was for sale on the darknet.

And the stolen data appears to have come from a hack that occurred way back in 2014.

It’s an obvious breach notification violation.

Or not!

In the US, the only federal notification law with any teeth is for medical PII held by “covered entities”— insurance companies, hospitals, and providers. In other words, HIPAA.

So there’s little that can be done at the US federal level against the extremely long Yahoo delay in reporting the breach.

At the state level, there are notification laws (currently 47 states have them) that would kick in, but the thresholds are typically based on harm caused to consumers, which may be difficult to prove in this incident.

The notable exception, as always, is California, where Yahoo has its corporate offices in Sunnyvale. California is one of the few states that requires notification upon the mere discovery of unauthorized access.

So Yahoo can expect a visit from the California attorney general in its future.

One might think that the EU would be tougher on breaches than the US.

But under the current EU Data Protection Directive (DPD), there’s no breach notification requirement. That was one of the motivations for the new General Data Protection Regulation that will go into effect in 2018.

If You Can’t Keep Your Head About You During A Breach

Yahoo may not be completely out of the legal woods in the EU: the DPD does require appropriate security measures to be taken (see Article 17, “Security of processing”). So in theory an enforcement action could be launched based on Yahoo’s lax data protection.

But as a US company with its principal collection servers outside the EU, Yahoo may fall beyond the DPD’s reach. This is a wonky issue, and if you want to learn whether the current DPD rules cover non-EU businesses, read this post on the legal analysis.

And this all leads to why the EU rewrote the current data laws for the GDPR, which covers breach notification and “extra-territoriality” (that is, controllers outside the EU), as well as putting in place eye-popping fines.

Yeah, you should be reading our EU data regulations white paper to get the big picture.

If the GDPR were currently the law (it goes into effect in May 2018) and the company hadn’t reported the exposure of 500 million user records to a DPA within 72 hours, then it would face massive fines.

How massive?

Doing the GDPR Breach Math

A violation of the GDPR’s article 33 requirement to notify a DPA could reach as high as 2% of global revenue.

Yahoo’s revenue numbers have been landing north of $4.5 billion in recent years.

In my make-believe scenario, Yahoo could be paying $90 million or more to the EU.

And yes, I’m aware that Verizon, with over $130 billion in revenue, is in the process of buying Yahoo.

Supposing the acquisition had already gone through, then Verizon would be on the hook for 2% of $134 billion, or about $2.68 billion.
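The back-of-the-envelope math above can be sketched in a few lines (a toy calculation, using the revenue figures quoted in this post, not any official GDPR fine formula):

```python
def gdpr_max_fine(annual_revenue, rate=0.02):
    """Upper bound on a fine for violating the GDPR's article 33
    notification requirement: a percentage (2%) of global annual revenue."""
    return annual_revenue * rate

# Yahoo alone: roughly $4.5 billion in recent annual revenue
yahoo_fine = gdpr_max_fine(4.5e9)      # about $90 million

# Combined Verizon + Yahoo: roughly $134 billion
combined_fine = gdpr_max_fine(134e9)   # about $2.68 billion
```

The point of the exercise is just how quickly the 2% tier scales with revenue: a merger that multiplies the top line by 30 multiplies the worst-case fine by 30 as well.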

There’s a lesson here. Large US and other multinational companies with a significant worldwide web presence should be planning now for the possibility of an epic Yahoo-style breach in their post-2018 future.

Interview with Attorneys Bret Cohen and Sian Rudgard, Hogan Lovells’ GDPR Experts

We are very thankful that Bret Cohen and Sian Rudgard took some time out of their busy schedules at the international law firm of Hogan Lovells to answer this humble blogger’s questions on the EU General Data Protection Regulation (GDPR). Thanks Bret and Sian!

Bret writes regularly on GDPR for HL’s Chronicle of Data Protection blog, one of our favorite resources. Sian had worked at the ICO, the UK’s data protection authority, and helped draft the binding corporate rules (BCR) governing internal transfers of personal data within companies for the EU’s Article 29 Working Party.

So we’re in good hands with these two. Both are now part of HL’s Privacy and Cybersecurity practice. The interview (via email) is heavily based on questions our own customers have been asking us. We’re very excited to share with you their legal expertise of the GDPR.

Inside Out Security:  When exactly is a data controller required to conduct a data protection impact assessment (“DPIA”)?  Must the controller always undertake a DPIA for new uses of certain types of data (e.g., biometrics, facial images)?

Hogan Lovells: The DPIA requirement is linked to processing “likely to result in a high risk for the rights and freedoms of natural persons,” taking into account “the nature, scope, context and purposes of the processing.”  This is a fact-specific standard, and one therefore that is likely to be interpreted differently by different data protection authorities (“DPAs”), although it is generally understood to refer to significant detrimental consequences for individuals.

The GDPR requires a DPIA in three specific circumstances:

  • Where the processing involves a “systematic and extensive” evaluation of an individual in order to make an automated decision about the individual (e.g., profiling) that has a legal effect on that individual (e.g., denial of benefits).
  • The processing “on a large scale” of sensitive categories of personal data, specifically (a) personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, (b) genetic data, biometric data for the purpose of uniquely identifying an individual, data concerning health, or data concerning an individual’s sex life or sexual orientation, and (c) personal data pertaining to criminal convictions and offenses.
  • A systematic monitoring of a publicly accessible area on a large scale (e.g., through CCTV).

The DPAs have promised to provide guidance on this aspect of the Regulation before the end of 2016. Individual DPAs also have the authority to publish guidance on the kinds of processing operations that require a DPIA and those that do not, and these individual guidance documents might differ from country to country.

IOS: For what types of data and processing would data controllers be required to engage in a “prior consultation” with a DPA?

HL: Data controllers are required to consult with a DPA prior to engaging in data processing where a DPIA indicates that the processing “would result in a high risk” in the absence of measures taken by the controller to mitigate the risk. “High risk” is not defined, but it is likely to carry a similar meaning to the threshold DPIA requirement described above: that is, a significant detrimental consequence for individuals. The requirement to engage in a prior consultation will also likely be influenced by DPA guidance on the issue, and we can expect further guidance on this point before the end of this year.


IOS: What does a DPIA under the GDPR look like?

HL: A DPIA is not too complicated. A suggested approach is to review what the proposed data processing activities involve as against the relevant requirements of the Regulation (transparency, legal basis for processing, data retention, data security, etc.). This can be done by creating a standard DPIA template with these fields, which then allows for a final assessment of risk by an assessor who, based on the analysis provided, will recommend the steps to take to address any risks identified.


IOS: A lot of Varonis customers already follow data security standards (principally ISO 27001). Articles 42 and 43 of the GDPR seem to suggest outside, accredited certification and other bodies will have the power to issue a GDPR certification. Does this mean that existing accreditations will be sufficient to comply with the GDPR standard?

HL: Under Articles 42 and 43, DPAs can approve certifications issued by certain certification bodies as creating the presumption of compliance with various parts of the GDPR.  However, the relevant DPA or the EU-wide group of DPAs, the European Data Protection Board (currently known as the Article 29 Working Party), would have to approve a particular certification before it can be deemed to be sufficient to comply with the GDPR standard.

The Article 29 Working Party has indicated that it intends to provide further guidance on this topic before the end of 2016.


IOS: The GDPR requires notification of data breaches within 72 hours of discovery, including “the measures taken or proposed to be taken by the controller to address the personal data breach, including, where appropriate, measures to mitigate its potential adverse effects.”  What types of “measures to mitigate” the breach’s “potential adverse effects” will be required?

HL: The “adverse effects” mentioned here reference potential adverse effects to the individual. There are no hard and fast rules as of yet, but this standard encourages companies to provide mitigating measures as may be appropriate in a given situation to help protect individuals. This can include information about the offering of credit monitoring services, information about the offering of identity theft insurance, or steps taken to confirm the deletion of the personal data by unauthorized third parties. It is also possible that some DPAs may request information on technical and other remedial security measures taken by the company. The specific requirements are likely to be borne out through additional regulatory guidance and practice.


IOS: When a data subject requests erasure of personal data, does that mean that data must be deleted everywhere the personal data is located (including in emails, memos, spreadsheets, etc.)?

HL: The right to erasure—also known as the “right to be forgotten”— was one of the more controversial aspects of the GDPR when it was first published, not least because the practical limits on a controller’s obligation to delete personal data were unclear.  The right to erasure is not unlimited and an organization is only required to erase the data when one of the grounds specified in the Regulation apply. These include that the personal data is no longer needed for its original purpose; the individual withdraws consent, or objects to the processing; the data must be erased in order to comply with a legal obligation to which the controller is subject; the data has been collected in relation to the offering of information society services to children; or the processing is unlawful.

There are exemptions to the erasure right, including where an individual objects to the processing, but the organization can establish an overriding legitimate ground to continue the processing; or where the individual withdraws consent to the processing, and the organization has another basis on which to rely to continue the processing. Other exemptions include where processing is necessary for exercising a right of freedom of information or expression, for compliance with a legal obligation, for reasons of public interest in relation to health care, or for exercising or defending legal claims.

In theory, this means that an organization should take reasonable steps to delete personal data subject to a valid erasure request wherever it resides, although we recognize that there may be practical limitations on the ability of an organization to delete certain information. The DPAs do have the ability under the GDPR to introduce further exemptions to this provision but we do not know yet what these will look like.

Organizations do have room to put forward arguments that they have overriding legitimate grounds to continue processing personal data in certain circumstances. Where consent has been withdrawn in many cases it is also likely that there will be another basis on which organizations can continue to process at least some of the data (e.g., legitimate business interests). Organizations should document the steps they take to comply (or choose not to comply) with erasure requests, to justify the reasonableness of those steps if pressed by a DPA.

Where a data controller has made personal data public (e.g., by publishing it on a website) and receives a valid erasure request for that personal data, the GDPR requires the controller to, “taking account of available technology and the cost of implementation,” take “reasonable steps” to inform other third-party controllers who have access to the personal data of the erasure request.

This is an area on which we can expect further guidance from the DPAs, although it is not in the list of first wave guidance that we are expecting from the Article 29 Working Party this year.


IOS: An organization must appoint a data protection officer (“DPO”) if, among other things, “the core activities” of the organization require “regular and systematic monitoring of data subjects on a large scale.”  Many Varonis customers are in the B2B space, where they do not directly market to consumers. Their customer lists are perhaps in the tens of thousands of recipients up to the lower six-figure range. First, does the GDPR apply to personal data collected from individuals in a B2B context? And second, when does data processing become sufficiently “large scale” to require the appointment of a DPO?

HL: Yes, the GDPR applies to personal data collected from individuals in a B2B context (e.g., business contacts).  The GDPR’s DPO requirement, however, is not invoked through the maintenance of customer databases.  The DPO requirement is triggered when the core activities of an organization involve regular and systematic monitoring of data subjects on a large scale, or the core activities consist of large scale processing of special categories of data (which includes data relating to health, sex life or sexual orientation, racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, or biometric or genetic data).

“Monitoring” requires an ongoing tracking of the behaviors, personal characteristics, or movements of individuals, such that the controller can ascertain additional details about those individuals that it would not have known through the discrete collection of information.

Therefore, from what we understand of Varonis’ customers’ activities, it is unlikely that a DPO will be required, although this is another area on which we can expect to see guidance from the DPAs, particularly in the European Member States where having a DPO is an existing requirement (such as Germany).

Whether or not a company is required to appoint a DPO, if the company will be subject to the GDPR, it will still need to be able to comply with the “Accountability” record-keeping requirements of the Regulation and demonstrate how it meets the required standards. This will involve designating a responsible person or team to put in place and maintain appropriate policies and procedures, including data privacy training programs.

[Podcast] Attorney and Data Scientist Bennett Borden: Data Analysis Techniques (Part 1)

Once we heard Bennett Borden, a partner at the Washington law firm of DrinkerBiddle, speak at the CDO Summit about data science, privacy, and metadata, we knew we had to reengage him to continue the conversation.

His bio is quite interesting: in addition to being a litigator, he’s also a data scientist. He’s a sought-after speaker on legal tech issues. Bennett has written law journal articles about the application of machine learning and document analysis to ediscovery and other legal transactions.

In this first part in a series of podcasts, Bennett discusses the discovery process and how data analysis techniques came to be used by the legal world. His unique insights on the value of the file system as a knowledge asset as well as his perspective as an attorney made for a really interesting discussion.


HHS to Investigate Smaller HIPAA Privacy Breaches

As a reader of this blog, you know all about Health and Human Services’ (HHS) wall of shame. That’s where breaches involving protected health information (PHI) affecting 500 or more records are posted for the world to see. It’s actually a requirement of HIPAA (technically, the HITECH Act). But now there’s been a slight change in breach policy.

The Office for Civil Rights (OCR), which is part of HHS, investigates all large HIPAA breaches. But this month it announced it will increase efforts to look into smaller breaches that come to its attention.

Regional offices will be given discretion to prioritize which smaller breaches to look into. Some of the factors that they’ll take into account are “breaches that involve unwanted intrusions to IT systems (for example, by hacking)” and “instances where numerous breach reports from a particular covered entity or business associate raise similar issues.”

The investigations will likely take the form of offsite “desk audits”.

Attorneys in data compliance will tell you that to pass these audits you’ll need to have your HIPAA paperwork in order: documented security and privacy policies, recent risk assessments, and breach reporting procedures top the list.

This is just another indication of how HHS/OCR is stepping up its auditing and HIPAA enforcement.

Covered entities: you’ve been warned!


Data Privacy US-Style: Our National Privacy Research Strategy

While the EU has been speeding ahead with its own digital privacy laws, the US has been taking its own steps. Did you know there’s a National Privacy Research Strategy (NPRS) white paper that lays out plans for federally funded research projects into data privacy?

Sure, the Federal Trade Commission has taken up the data privacy mantle in the US, bringing actions against social media, hotels, and data brokers. But there’s still more to do.

So you can think of the NPRS as a blueprint for the research and development phase of the US’s own privacy initiatives. By the way, the US government spent about $80 million on privacy research efforts in 2014.

What’s the Plan?

I scanned through this 30+ page report looking for some blog-worthy points to share. I found a few.

First, the NPRS has interesting ideas on how to define privacy.

The authors of the paper, representing major federal agencies including the FTC, don’t have a firm definition. Instead, they view privacy as being characterized by four areas: subjects, data, actions, and context.

Essentially, consumers release data into a larger community (based on explicit or implicit rules about how the data will be used), and certain actions are then taken on that data: processing, analysis, sharing, and retention. The idea (see diagram) parallels our own Varonis approach of using metadata to provide context for user actions on file data. We approve of the NPRS approach!


The larger point is that privacy has a context, and that context shapes our privacy expectations and what we consider a privacy harm.

NPRS is partially focused on understanding our expectations in different contexts and ways to incentivize us to make better choices.

Second, the plan takes up the sexier matter of privacy engineering. In other words, research into building privacy widgets that software engineers can assemble together to meet certain objectives.

I for one am waiting for a Privacy by Design (PbD) toolkit. We’ll see.

The third major leg of this initiative targets the transparency of data collection, sharing, and retention. As it stands now, you click “yes” affirming you’ve read the multi-page legalese in online privacy agreements, and then you’re surprised when you’re spammed at some point by alternative medical therapy companies.

The good news is that some are experimenting with “just in time” disclosures that provide bite-size nuggets of information at various points in the transaction — allowing you, potentially, to opt out.

More research needs to be undertaken, and NPRS calls for developing automated tools to watch personal data information flows and report back to consumers.

And this leads to another priority: ensuring that personal information flows meet agreed-upon privacy objectives.

Some of the fine print for this goal will sound familiar to Varonis-istas. NPRS suggests adding tags to personal data — essentially metadata — and processing the data so that consumer privacy preferences are then honored.

Of course, this would require privacy standardization and software technology that could quickly read the tags to see if the processing meets legal and regulatory standards. This is an important area of research in the NPRS.
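To make the tagging idea a little more concrete, here's a minimal sketch of what tag-aware processing could look like. Everything below (the field names, the action vocabulary) is hypothetical; the NPRS doesn't prescribe any particular format:

```python
# Hypothetical sketch: personal data records carry metadata "tags"
# describing the consumer's consented uses, and a processor checks
# those tags before acting on the data.

ALLOWED_ACTIONS = {"processing", "analysis", "sharing", "retention"}

def may_process(record_tags, action):
    """Return True if the consumer's tags permit the requested action."""
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"unknown action: {action}")
    # Default to an empty consent set: no tag means no permission.
    return action in record_tags.get("consented_actions", set())

record = {"consented_actions": {"processing", "retention"}}
may_process(record, "processing")  # permitted
may_process(record, "sharing")     # not permitted
```

The interesting research problems are exactly the ones the NPRS flags: standardizing the tag vocabulary across organizations, and making the check fast enough to run on every data flow.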

In the Meantime, the FTC Needs You!

You’re now fired up after reading this post and wondering whether you (white hat researcher, academic, or industry pro) can have your voice on privacy heard by the US government.

You can!

The FTC is now calling for personal privacy papers and presentations for its second annual PrivacyCon, to be held in Washington in January 2017. You can check out the papers from last year’s conference here.

If you do submit and plan to speak, let us know! We’d love to follow up with you.

What is the Minimum Acceptable Risk Standards for Exchanges (MARS-E)?

Under the Affordable Care Act (ACA) of 2010, there are now online marketplaces to buy health insurance. These are essentially websites that allow consumers to shop around for an insurance policy by comparing plans from different private providers.

Result: US consumers can purchase health insurance using the same technology that allows them to buy books, gadgets, and artisanal coffees on the web.

I think we can agree that health data that’s collected on these web sites deserves some extra protections.

The Origin of MARS-E

To address security issues of the exchanges, the ACA required the Department of Health and Human Services (HHS) to come up with data security standards.

Specifically, the Centers for Medicare & Medicaid Services (CMS), a part of HHS, was made responsible for providing guidance and oversight for the exchanges, including defining technical standards.

CMS then established the Minimum Acceptable Risk Standards for Exchanges (MARS-E), which defines a series of security controls. MARS-E is now in its second version, which was released in 2015.

Those familiar with NIST 800-53 (a security standard underlying other federal data laws such as FISMA) will immediately recognize the two-letter abbreviations used by MARS-E. It borrows 17 control families from NIST 800-53, which for the record are:

Access Control (AC), Awareness and Training (AT), Audit and Accountability (AU), Security Assessment and Authorization (CA), Configuration Management (CM), Contingency Planning (CP), Identification and Authentication (IA), Incident Response (IR), Maintenance (MA), Media Protection (MP), Physical and Environmental Protection (PE), Planning (PL), Personnel Security (PS), Risk Assessment (RA), System and Services Acquisition (SA), Systems and Communication Protection (SC), Systems and Information Integrity (SI).
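For quick reference, here are the same 17 families expressed as a lookup table. This is just a convenience sketch; the abbreviations and family names come from NIST 800-53 itself:

```python
# The 17 NIST 800-53 control families borrowed by MARS-E,
# keyed by their two-letter abbreviations.
MARS_E_CONTROL_FAMILIES = {
    "AC": "Access Control",
    "AT": "Awareness and Training",
    "AU": "Audit and Accountability",
    "CA": "Security Assessment and Authorization",
    "CM": "Configuration Management",
    "CP": "Contingency Planning",
    "IA": "Identification and Authentication",
    "IR": "Incident Response",
    "MA": "Maintenance",
    "MP": "Media Protection",
    "PE": "Physical and Environmental Protection",
    "PL": "Planning",
    "PS": "Personnel Security",
    "RA": "Risk Assessment",
    "SA": "System and Services Acquisition",
    "SC": "Systems and Communication Protection",
    "SI": "Systems and Information Integrity",
}

# e.g., resolve the abbreviation used in an audit finding
family = MARS_E_CONTROL_FAMILIES["AU"]  # "Audit and Accountability"
```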

The complete catalog of controls can be found here.

The controls provide only guidance — they are not meant to force specific security technologies on the exchanges!

HIPAA Confusion

You may ask: do HIPAA's rules on privacy and security for protected health information (PHI) also apply to the health exchanges?

Great question!

Health exchanges are not covered entities under HIPAA.  So HIPAA’s Privacy and Security rules wouldn’t seem to apply.

But  … are they Business Associates (BAs) of the covered entity?

As you may recall, under the new rules published back in 2013 (the “HIPAA Omnibus Final Rule”), third-party contractors and their subcontractors who handle or process PHI fall under HIPAA.

The short answer is that the exchanges can be BAs if they perform more than minimal data functions and have a deeper relationship with the insurer.

It’s really the same question that comes up with health wearables. HIPAA doesn’t apply to these gadgets, unless the gadget provider has a direct relationship with the insurer or health plan – for example, through a corporate wellness plan.

To get a little more insight into this confusing issue of health exchanges and HIPAA, read this article.

In the meantime, you can peruse the table below showing the mapping of relevant MARS-E controls to Varonis products.



MARS-E Control Family | Requirement | Varonis Solution
AC Access Control

AC-2 Account Management

a. Identifying account types (i.e., individual, group, system, application, guest/anonymous, and temporary);

b. Establishing conditions for group membership;

c. Identifying authorized users of the information system and specifying access privileges;

By combining user and group information taken directly from Active Directory, LDAP, NIS, or other directory services with a complete picture of the file system, Varonis DatAdvantage gives organizations a complete picture of their permissions structures. Both logical and physical permissions are displayed and organized, highlighting and optionally aggregating NTFS and share permissions. Flag, tag, and annotate your files and folders to track, analyze, and report on users, groups, and data. Varonis DatAdvantage also shows you every user and group that can access data, as well as every folder that can be accessed by any user or group.
AC-6 Least Privilege


a. Employs the concept of least privilege, allowing only authorized accesses for users (and processes acting on behalf of users) that are necessary to accomplish assigned tasks in accordance with Exchange missions and business functions

Varonis DataPrivilege helps organizations not only define the policies that govern who can access, and who can grant access to unstructured data, but it also enforces the workflow and the desired action to be taken (i.e. allow, deny, allow for a certain time period). This has a two-fold effect on the consistent and broad communication of the access policy: 1) it unites all of the parties responsible including data owners, auditors, data users and IT around the same set of information and 2) it allows organizations to continually monitor the access framework in order to make changes and optimize both for compliance and for continuous enforcement of warranted access.
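The allow / deny / allow-for-a-time-period decision described above can be illustrated with a tiny policy lookup. This is a minimal sketch of the general idea; the roles, paths, and policy table below are invented for illustration and are not the DataPrivilege API.

```python
from datetime import datetime, timedelta

# Hypothetical least-privilege policy: each (role, path) pair maps to a
# decision, optionally with a time window for temporary grants.
POLICY = {
    ("analyst", "/finance/reports"): ("allow_temporary", timedelta(days=7)),
    ("analyst", "/hr/salaries"): ("deny", None),
    ("hr_manager", "/hr/salaries"): ("allow", None),
}

def decide(role, path, now=None):
    """Return (decision, expiry); expiry is set only for time-limited grants."""
    action, window = POLICY.get((role, path), ("deny", None))  # default deny
    if action == "allow_temporary":
        now = now or datetime.now()
        return ("allow", now + window)  # access expires at this time
    return (action, None)

decision, expires = decide("analyst", "/finance/reports")
```

The default-deny fallback is the essence of least privilege: access that isn’t explicitly warranted simply isn’t granted.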
AU Audit and Accountability

AU-2 Auditable Events

(a) … that the information system must be capable of auditing the list of auditable events specified in the Implementation Standards;

Implementation Standards

Generate audit records for the following events …

h. File creation,

i. File deletion,

j. File modification,

m. Use of administrator privileges

Varonis DatAdvantage helps organizations examine and audit the use of ordinary and privileged access accounts to detect and prevent abuse. With a continual audit record of all file, email, SharePoint, and Directory Services activity, DatAdvantage provides visibility into users’ actions. The log can be viewed interactively or via email reports. DatAdvantage can also identify when users have administrative rights they do not use or need and provides a way to safely remove excess privileges without impacting the business.

Through Varonis DataPrivilege, membership in administrative and other groups can be tightly controlled, audited and reviewed.

Varonis DatAlert can be configured to send real-time alerts on a number of actions including the granting of administrative rights to a user or group. This allows the organization to detect, in real-time, when privileged access has been granted erroneously and act before abuse occurs. Real-time alerts can also be triggered when administrative users access, modify, or delete business data.

AU-6 Audit Review, Analysis, and Reporting

a) Reviews and analyzes information system audit records regularly for indications of inappropriate or unusual activity, and reports findings to designated organizational officials …

Implementation standards

5. Use automated utilities to review audit records at least once every seven (7) days for unusual, unexpected, or suspicious behavior.


Varonis DatAlert provides real-time alerting based on file activity, Active Directory changes, permissions changes, and other events. Alert criteria and output are easily configurable so that the right people and systems can be notified about the right things, at the right times, in the right ways. DatAlert improves your ability to detect possible security breaches and misconfigurations. DatAlert can be configured to alert on changes made outside a particular time window.

Varonis DatAdvantage monitors every touch of every file on the file system, then normalizes, processes, and stores the events in a database where they are quickly sortable and searchable. Detailed information is provided for every file event; all data can be reported on and provided to data owners. Data collection does not require native object success auditing on Windows.
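The AU-6 implementation standard above (automated review of audit records at least every seven days for unusual behavior) can be made concrete with a minimal sketch. The record format and the business-hours heuristic below are assumptions for illustration only, not part of any product.

```python
from datetime import datetime

# Hypothetical audit records: (timestamp, user, action)
records = [
    (datetime(2016, 7, 18, 14, 5), "alice", "file_read"),
    (datetime(2016, 7, 19, 2, 30), "bob", "file_delete"),   # off-hours delete
    (datetime(2016, 7, 19, 9, 0), "alice", "file_modify"),
]

def flag_unusual(records, start_hour=7, end_hour=19):
    """Flag events outside an assumed business-hours window (07:00-19:00)."""
    return [r for r in records if not (start_hour <= r[0].hour < end_hour)]

suspicious = flag_unusual(records)
```

A real review would layer on many more signals (volume spikes, privilege use, unusual access paths), but the shape is the same: run the filter on a schedule and report the hits to the designated officials.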

IR Incident Response

IR-6.1 Incident Reporting

The organization employs automated mechanisms to assist in the reporting of security incidents


Varonis DatAlert provides real-time alerting based on file activity, Active Directory changes, permissions changes, and other events. Alert criteria and output are easily configurable so that the right people and systems can be notified about the right things, at the right times, in the right ways. DatAlert improves your ability to detect possible security breaches and misconfigurations. DatAlert can be configured to alert on changes made outside a particular time window.


Hospitals (and Other Covered Entities) Will Be Randomly Selected for HIPAA Audits in 2016

With July coming to an end and the year more than half over, it’s a good time to look at where we stand breach-wise. Your intuition may be telling you that 2016 has been a bad year, with hacking attacks reported daily. Your intuition is right.

The Identity Theft Resource Center is my go-to resource for current breach stats. As of July 19, the ITRC has tallied an astonishing 538 breach incidents, setting us up for the worst year ever.[1]

The mid-year mark is also a good point for IT security groups to collectively take a deep breath and re-evaluate their IT spending. And if you’re in a healthcare organization, there are additional considerations due to a new wave of HIPAA audits (more on that below).

Health Care Data Security Crisis

To get a big picture view of breach statistics, you should gaze upon ITRC’s table of stats from 2005–2015.

The ITRC breaks out their numbers based on five broad industry sectors: business, educational, government, health, and banking.  Since 2012, medical has led the other groups in all but one year.

So far in 2016, medical is putting in another solid performance: it’s in 2nd place with 185 incidents.[2]


2016 breach incidents by industry sector (ITRC)

What’s Going on in Health Care IT Security?

Adam Tanner, a fellow at Harvard University’s Institute for Quantitative Social Science, has suggested that hackers see health care organizations as an easy target.

For a recent article in Scientific American, Tanner interviewed security experts who pointed out that the health care industry doesn’t have the “security maturity” of other industries and underinvests compared to banking.

Point taken.

Though you can also understand hospitals that look at their limited financial resources and decide to invest in new treatment centers and lifesaving equipment rather than re-engineering their IT.

This may have to change.

Randomly Selected Covered Entities and Business Associates Will Receive Audits

Earlier this month, HHS launched Phase 2 of its HIPAA audit program. Over 160 randomly selected covered entities received a notice of a “desk audit”. These chosen healthcare organizations will be required to answer questions—see this.

The second part of this program will involve audits of the business associates of these covered entities. Business associates, as you may remember, do additional processing of protected health information (PHI) for hospitals, and are now also covered by HIPAA’s rules. We’re talking about you, cloud-based service providers!

And then in the final round of audits, 50 hospitals and business associates can expect an onsite visit from HHS starting in 2017.


[1] Let’s also add the usual caveats.

The ITRC numbers are based on breaches reported in major publications. The breaches are not verified by ITRC or other security experts. And finally, these are US-based incidents.

In other words, we’ll likely see different numbers from other sources, particularly Verizon’s DBIR, as they come up with their final 2016 results.  I suspect, though, that Verizon will also report record-breaking breach incident numbers.

[2] For those who want to see a more granular report of healthcare breach data, they should check out the Health and Human Services’ (HHS) “wall of shame”—healthcare providers and other covered entities who’ve reported HIPAA medical information exposures. By the way, the HIPAA breaches include hacking incidents as well as unauthorized data access, theft, and loss.

Understanding Canada: Ontario’s New Medical Breach Notification Provision (and Other Canadian Data Privacy Facts)

Remember Canada’s profusion of data privacy laws?

The Personal Information Protection and Electronic Documents Act (PIPEDA) is the law that covers all commercial organizations across Canada.

Canadian federal government agencies, though, are under a different law known as the Privacy Act.

But then there are overriding laws at the provincial level.

If a Canadian province adopts substantially similar data privacy legislation to PIPEDA, then a local organization would instead fall under the provincial law.

To date, Alberta and British Columbia have adopted their own laws, each known as the Personal Information Protection Act (PIPA). Alors, Québec has its own data privacy law.

Adding to the plenitude of provincial privacy laws, Ontario, New Brunswick, and Newfoundland have adopted similar privacy legislation with regard to health records.

Ontario’s PHIPA

So that brings us to Ontario’s Personal Health Information Protection Act (PHIPA).  Recently, PHIPA was amended to include a breach notification provision.

If personal health information is “stolen or lost or if it is used or disclosed without authority”, a healthcare organization in Ontario will have to notify the consumer “at the first reasonable opportunity”, as well as the provincial government.

Alberta, by the way, has had a breach notification requirement for all personal data since 2010.

What About Breach Notification at the Federal Level?

In June 2015, the Digital Privacy Act amended PIPEDA to include breach notification. Organizations must notify affected individuals and the Privacy Commissioner of Canada when there is a breach that creates a “real risk of significant harm” to an individual.

Notice that the federal law has a risk threshold for exposed personal information, whereas the new Ontario law for health records doesn’t. Alberta’s breach notification requirement, by the way, has a similar risk threshold to PIPEDA.

Confused by all this? Get a good Canadian privacy lawyer!

Don’t be confused by how to detect and stop breaches! Learn more about Varonis DatAlert.









Is Browsing Facebook While in the Hospital a HIPAA Violation?

A recently filed federal class-action suit claims that several healthcare providers are violating HIPAA’s rules on protected health information (PHI). If the suit succeeds, privacy advocates say it has the potential to disrupt the way the ad targeting industry deals with the healthcare sector.

To really understand what’s going on, you’ll need some background on HIPAA.

HIPAA Privacy and Authorization

According to HIPAA’s Privacy Rule, covered entities (healthcare providers, insurers, and clearinghouses) must obtain explicit patient authorization (as in ‘check box to approve PHI transfer to third party’ in an online form) before PHI can be used outside of a few very specific areas (payment, treatment, and healthcare operations).

Using PHI for marketing purposes definitely requires the covered entity to get authorization.

Hospitals, Patients, and Facebook

Suppose you’re a hospital patient waiting (and waiting) to see your doctor, and browsing the hospital website on your laptop looking for answers to a medical question. And let’s assume the hospital website also has a Facebook plugin that supports “like”.

As an active Facebook user, you are also keeping friends informed of your medical adventure.

Unbeknownst to you, URLs are being sent back to Facebook based on your hospital website browsing. The Facebook cookies on your laptop add identifier information that lets Facebook target ads to its subscribers.

So as you’re lying in bed looking at friends’ Facebook status updates while dealing with amazing amounts of pain, you might be served an ad about, say, morphine drips, based on your browsing of the pain management section of the hospital website.

Of course, this is a huge part of the way Facebook makes its money. And this is what the suit is alleging took place with the hospitals and healthcare organizations that were named: webpages with Facebook plugins were sending browsing histories back to the FB mothership.
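Mechanically, the leak described above is ordinary HTTP: the page you’re browsing travels to the plugin’s host in the Referer header, and a long-lived cookie supplies the identifier. Here’s a minimal sketch; the URL path and cookie value are made up, but the mechanism is standard browser behavior.

```python
# The kind of request a third-party "like" plugin generates when a
# hospital page loads. Values below are invented for illustration.
plugin_request = {
    "method": "GET",
    "url": "https://www.facebook.com/plugins/like.php",
    "headers": {
        # The page being browsed leaks via the Referer header:
        "Referer": "https://hospital.example/pain-management/morphine-drips",
        # A long-lived cookie ties the request to a specific person:
        "Cookie": "c_user=1234567890",
    },
}

def leaked_identifiers(req):
    """Return the (page URL, identifier) pair a tracker can correlate."""
    return req["headers"]["Referer"], req["headers"]["Cookie"]

page, who = leaked_identifiers(plugin_request)
```

Put those two values together on the tracker’s side and you have a named individual linked to a pain-management page, which is exactly what the suit complains about.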

So What’s the Problem?

Another crucial fact: PHI covers more than a name, address, and other obvious identifiers.

While the healthcare organizations in the suit are not sending classic identifiers, they are potentially providing URLs, IP addresses, and sub-state-level geographic data back to FB.

According to HIPAA, these would qualify as PHI, based on the Department of Health and Human Services’ 18-element safe harbor list. And therefore patient authorization would be required, which the websites did not request from users.

We’ve written previously about the broad definition of identifiable data used by HIPAA. In this case, these providers seem to have been caught in PHI’s very wide net.
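To make the “wide net” concrete, here’s a toy scan against two of the safe-harbor identifier categories mentioned above (URLs and IP addresses). The patterns are deliberately simplified and the record is invented; the real list has 18 categories, including names, dates, and geographic subdivisions smaller than a state.

```python
import re

# Simplified patterns for two of HHS's 18 safe-harbor identifier categories.
PATTERNS = {
    "URL": re.compile(r"https?://\S+"),
    "IP address": re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"),
}

def safe_harbor_hits(text):
    """Return which identifier categories appear in the text."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

# An invented log line of the kind a website might share with a third party:
record = "Visit from 203.0.113.7 to https://hospital.example/oncology"
hits = safe_harbor_hits(record)
```

Under the safe harbor approach, data is only de-identified once all 18 categories are removed, which is why a “harmless” analytics record can still count as PHI.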

In short: PHI is being sent from these websites to Facebook without patient permission. A big HIPAA violation.

Legal Questions

As a non-lawyer, I find this suit raises an issue or two.

If you’re not a patient of a healthcare provider but use the site anyway, are you covered by HIPAA?

One argument I read is that if a hospital is a covered entity in the context of a patient-provider relationship, it’s a covered entity in all contexts, including the more typical user-website relationship.

So it doesn’t matter that you’re not a patient when browsing a hospital website: HIPAA would still apply!

The suit essentially says a hospital website can’t take online user information and send it to an ad network without violating HIPAA. If this claim is proven right, it will have enormous implications for the use of health and possibly non-health data by ad networks.

Facebook is clearly not a covered entity, so what did they do wrong?

The class-action suit says that Facebook violated state laws on health information, and — get this! — the federal Wiretap Act.

There’s a California law, for example, that requires explicit consent for health information to be sent to third parties. And if we use the broad PHI definition of identifiers, then Facebook could have violated that state’s law.

And the Wiretap law may kick in when you collect information over the Intertoobz without authorization. To me, though, this last one seems a bit of a — ahem — legal stretch.

This lawsuit is being closely watched by privacy pros. We’ll keep you posted if we hear anything new.

Confused by HIPAA? Then take our five-part email HIPAA class and soar like a legal eagle (or at least be able to answer a few legally related HIPAA questions).