
HIPAA Security Rule Explained


The HIPAA Journal estimates that a large data breach (more than 50,000 records) can cost the organization around $6 million – and that’s before the Office for Civil Rights (OCR) drops its own hammer. Over the last few years, we’ve seen more reports of breaches, an increase in HIPAA investigations, and higher fines across the board – all stemming from violations of the HIPAA Security Rule.


What is The HIPAA Security Rule?

The HIPAA Security Rule sets the minimum standards required for Covered Entities (CEs) to manage electronic PHI (ePHI). To be considered HIPAA compliant, CEs need to address three categories of safeguards: administrative, physical, and technical.

How Does the HIPAA Security Rule Protect Your Data?


Administrative Safeguards 

HIPAA rules require CEs to adhere to certain processes to ensure and verify their compliance with the HIPAA Security Rule:

  • Security Management Process: CEs must establish policies and procedures to prevent, detect, contain, and correct security violations. Part of this process is to follow the procedures in the Risk Management Framework to assess overall risk in your current processes, or whenever you implement new policies.
  • Assigned Security Responsibility: One designated security official must be responsible for the development and implementation of the HIPAA Security Rule.
  • Workforce Security: CEs must identify which employees require access to ePHI and provide controls over that access. To achieve this, implement a least privilege model and automatically enforce and manage permissions (see the sketch after this list).
  • Information Access Management: Restrict access to ePHI via permissions after you have identified who should have access in the step above.
  • Security Awareness and Training: In order to enforce these rules and security policies, organizations need to train their users on what the rules are and how to abide by them.
  • Security Incident Procedures: This standard provides guidance on how to create a policy to address data breaches: it’s good practice regardless – report breaches and security violations, and set up alerts and security analytics so that you can prevent breaches in the first place.
  • Contingency Plan: This is the “what happens next” standard. Create and follow a data backup plan, disaster recovery plan, and have an emergency mode operation plan in place, just in case things go sideways and you get breached. There’s also guidance in this standard for testing and revising these plans, as well as managing critical applications that store, maintain or transmit ePHI.
  • Evaluation: Establish a process to review and maintain the policies and procedures to stay up to date and current with the HIPAA Security Rule.
  • Business Associate Contracts and Other Arrangements: While it’s fine to use other businesses to implement your overall HIPAA security strategy, as with any third-party contractor, you must get assurances from them that they understand HIPAA and won’t leak your ePHI.
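For Windows shops, a quick way to put the Workforce Security idea into practice is to regularly compare an ePHI access group’s membership against the list of people you’ve actually approved. Here’s a minimal PowerShell sketch – the ‘ePHI-Access’ group name and the approved-user file are hypothetical stand-ins for your own:

# Review who actually sits in the ePHI access group.
# 'ePHI-Access' and the approved-user file are hypothetical - substitute your own.
Import-Module ActiveDirectory

$approved = Get-Content .\approved-ephi-users.txt   # one SamAccountName per line

Get-ADGroupMember -Identity 'ePHI-Access' -Recursive |
    Where-Object { $_.SamAccountName -notin $approved } |
    Select-Object Name, SamAccountName               # candidates to review or remove

Anyone the query returns is a candidate for an access review, not an automatic removal.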

Physical Safeguards 

This section of the HIPAA Security Rule sets standards for physical security: the “lock your doors” and “batten down the hatches” kind of guidance – along with what to do in case of natural disasters, naturally.

  • Facility Access Controls: Limit and audit physical access to the computers that store and process ePHI. Pro tip – put a lock on the server room door.
  • Workstation Use: Manage and secure computers (desktops, laptops, and tablets) that are used to access ePHI. Every computer with access to a CE’s ePHI must adhere to this policy, including systems that are offsite (and offline).
  • Workstation Security: Implement physical safeguards for all computers that access ePHI: restrict physical access to those machines, and install remote-wipe safeguards for laptops that grow legs.
  • Device and Media Controls: Once computers are covered, you still need safeguards on all the rest: devices and media like USB drives, tape backups, or removable storage. Establish a policy to inventory, allow the use of, and reuse or dispose of these devices as needed.

Technical Safeguards

Technical safeguards are the technology and procedures that CEs use to protect ePHI. The HIPAA Security Rule does not define what technology to use, but it demands that CEs adhere to the standard and adequately protect ePHI from data breaches.

  • Access Control: Authenticate users as necessary to access ePHI, establish and maintain a least privilege model, and have appropriate procedures in place to audit access control lists (ACLs) on a regular schedule (see the sketch after this list).
  • Audit Controls: Audit your ePHI to record and analyze activity in case of a data breach. CEs need to be able to show the OCR exactly how a data breach occurred, with a complete audit trail and reporting.
  • Integrity: To be HIPAA compliant, CEs need to be able to prove that the ePHI they manage is protected from threats both inside and out, intentional or not. Whether the new intern deletes a record accidentally, or a nefarious hacker deletes it intentionally, you should be able to recover and restore that record.
  • Person or Entity Authentication: CEs must provide assurances that the person accessing ePHI is, in fact, who they say they are. These assurances can be a password, two-factor authentication, or retinal scan – whatever works as long as you have something implemented.
  • Transmission Security: When sending data to other business partners, you need to be able to prove that only authorized individuals accessed the ePHI. You can use encrypted email with a private key, HTTPS file transfer, or a VPN – as long as only the people authorized to use the ePHI can get to it, HIPAA doesn’t care how you set it up.
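Auditing ACLs on a regular schedule doesn’t require anything exotic. A minimal PowerShell sketch that walks a hypothetical \\fileserver\ePHI share and flags folders granting access to overly broad groups:

# Flag folders whose ACLs grant access to overly broad groups.
Get-ChildItem -Path '\\fileserver\ePHI' -Directory -Recurse |
    ForEach-Object {
        $acl = Get-Acl -Path $_.FullName
        $broad = $acl.Access | Where-Object { $_.IdentityReference -match 'Everyone|Domain Users' }
        if ($broad) {
            [PSCustomObject]@{
                Folder = $_.FullName
                Grants = ($broad.IdentityReference -join ', ')
            }
        }
    }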

Ensuring Compliance: HIPAA Security Rule

HIPAA doesn’t spell out what specific software to install or how to implement the requirements in the HIPAA Security Rule.
Varonis provides a free 30-day risk assessment to help you get started: we’ll outline problem areas, potential violations, and a plan for fixing them. We have a proven track record with thousands of customers, many of whom deal with ePHI and HIPAA regulations on a daily basis.

Check out our US Data Protection Compliance and Guidance – or get in touch to discuss how we can help you reach HIPAA compliance and improve your current compliance strategies.

 

HIPAA Privacy Rule Explained


It’s an unfortunate (but inevitable) fact of life: Laptops get stolen, and the consequences can be devastating. If those laptops have electronic protected health information (ePHI) on them, they fall under HIPAA regulations and the theft must be reported.

Even if the thief never looks at the data, the company can’t prove it. Everyone should take precautions to protect themselves, not just from the fallout of lost data, but from the fines that can accrue: install remote wipe capabilities, encrypt your drives, and don’t store ePHI on your local drive.
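On the encrypt-your-drives front, Windows shops can at least switch on full-disk encryption with the stock BitLocker cmdlets. A minimal sketch (run as administrator; assumes a TPM or the matching group policy is in place):

# Encrypt the system drive and store a recovery password.
Enable-BitLocker -MountPoint 'C:' -EncryptionMethod XtsAes256 -RecoveryPasswordProtector

# Check progress afterwards.
Get-BitLockerVolume -MountPoint 'C:' | Select-Object VolumeStatus, EncryptionPercentage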

Hopefully, the next time a laptop grows legs, you will be better prepared to mitigate the damage.

What is The HIPAA Privacy Rule?


The HIPAA Privacy Rule explains how to use, manage, and protect personal health information (PHI or ePHI). The Privacy Rule was written to protect patient data, and its rules apply to covered entities: the people and organizations that transmit, store, manage, and access personal health information.

What Information Does the Privacy Rule Protect?

The HIPAA Privacy Rule defines PHI as “individually identifiable health information” stored or transmitted by a covered entity or its business associates, in any form or media (electronic, paper, or oral).

The law further defines “individually identifiable health information” as an individual’s past, present, and future health conditions, the details of the health care provided to an individual, and the payments or arrangement of payments made by an individual.
In the simplest terms: any and all data having to do with all doctor visits, ever, including (but not limited to):

  • Names
  • Birth, death or treatment dates, and any other dates relating to a patient’s illness or care
  • Contact information: telephone numbers, addresses, and more
  • Social Security numbers
  • Medical records numbers
  • Photographs
  • Finger and voice prints
  • Any other unique identifying number or account number

To Whom Does the HIPAA Privacy Rule Apply?

The HIPAA Privacy Rule protects individual PHI by governing the practices of the covered entities.

Covered entities are the people and organizations that hold and process PHI data for their customers – the ones required to report HIPAA violations and responsible for paying the fines imposed by the Office for Civil Rights if and when a HIPAA violation occurs.

These organizations are considered Covered Entities under HIPAA:

Health Care Providers

  • Doctors
  • Clinics
  • Psychologists
  • Dentists
  • Chiropractors
  • Nursing homes
  • Pharmacies

Health Plans

  • Health insurance companies
  • HMOs
  • Company health plans
  • Government-provided health care plans

Health Care Clearinghouses

  • These entities process healthcare data from another entity into a standard form.

What Happens if a HIPAA Data Breach Occurs?

According to the HIPAA Breach Notification Rule, a covered entity must notify each affected individual within 60 days of discovering a data breach.

If the breach affects over 500 individuals, the covered entity must also report the breach to the Department of Health and Human Services within 60 days, which in turn opens an investigation with the Office for Civil Rights. On top of that, if the breach falls within that over-500 club, the covered entity is required by HIPAA rules to issue a press release to media outlets local to the affected individuals.

Not only is a PHI data breach potentially bad for the bottom line, but it’s also government-mandated bad press.

HIPAA compliance isn’t just the law, it’s good business practice. Protecting an individual’s personal data and preventing data breaches affects both the bottom line (no fines) and company image (no bad press).

The Varonis Data Security Platform provides the foundation for a HIPAA compliant data security strategy – sign up for a free email course on HIPAA compliance, or get started with a demo to see the state of your HIPAA security.

HIPAA Compliance: Guide and Checklist


There are currently 14,930,463 individual records in the United States with an open HIPAA data breach investigation. That’s nearly 15 million humans who have had their Protected Health Information (PHI) exposed by hacking, IT incident, theft, loss, or unauthorized access/disclosure.


That’s just the unresolved case list. If we add the numbers from the resolved breach notifications, we end up with 162,599,642 records – over half of the current US population.

And that’s why we need HIPAA in the first place.

What is HIPAA Compliance?

The US Congress passed the Health Insurance Portability and Accountability Act (HIPAA) in 1996 to set standards for how US citizens’ PHI records are stored, secured, and used. Nowadays – along with the Health Information Technology for Economic and Clinical Health Act (HITECH) – this legislation governs how anyone with access to your PHI needs to manage and protect that data.

HIPAA doesn’t explicitly define PHI beyond information that can “reasonably” be linked to an individual – it could include anything from your birth date to your Social Security number to your medical ID.

What is the HIPAA Enforcement Rule?

The HIPAA Enforcement Rule explains how companies need to handle HIPAA violations – and the process isn’t just a slap on the wrist.

Individuals or companies report HIPAA violations to the Office for Civil Rights (OCR), and the OCR is responsible for investigating and reviewing those violations. If the OCR finds the violators negligent, the violators must fix what caused the breach in the first place and deal with the affected individuals’ data to the satisfaction of the OCR. If the OCR does not find their response satisfactory, or if it finds the data breach egregious, the OCR will fine the violators based on the number of records involved.

In 2018 alone there have already been two different settlements costing the violators $3.5 million and $100,000, the latter of which came after the business had already shut down due to HIPAA violations. You can read all about these settlements and more – it’s public record!

What is The HIPAA Privacy Rule?

The HIPAA Privacy Rule is the nuts and bolts of the legislation: it explains how and when healthcare professionals, lawyers, or anyone who accesses your PHI can or can not use that data.

For example: If I want to allow my PHI to be available to my girlfriend, the law requires a signed HIPAA PHI Release form in order for the doctor’s office to share my information with her. Those are the kinds of scenarios covered in the Privacy Rule.

What is The HIPAA Security Rule?

The HIPAA Security Rule sets the standards for how Covered Entities (the humans governed by HIPAA) must protect PHI data. These standards include things like ‘lock the door to the server room’ and ‘only allow access to read PHI data to the people who need to see it.’

That makes it paramount to protect personal information that qualifies as PHI – whether online, on paper, or spoken aloud.

What is The HIPAA Breach Notification Rule?

The HIPAA Breach Notification Rule says you have 60 days to notify an individual of improper access to their PHI. It’s important to remember that even if ePHI is encrypted by a ransomware attack, it’s considered a breach – and therefore falls under the HIPAA breach notification rule.

If there are more than 500 PHI records impacted, you must notify the Department of Health and Human Services (which in turn gets the OCR involved) – and you’re required to issue a press release about the breach.

If you are in the unfortunate (but not uncommon) situation of reporting a HIPAA violation, here is the information you must initially provide to the OCR:

  • What PHI was exposed, and how was it made available? Which personal identifiers were visible during the breach?
  • Who was the unauthorized person who saw or had access to the data?
  • Did anyone actually view or acquire the ePHI?
  • What have you done to fix the issue or mitigate the damage?

There is good news: if you don’t break that 500-record threshold in a single event, you can report all of your smaller violations to HHS in a single batch once per year, per the Breach Notification Rule.

HIPAA Standard Transactions

A HIPAA Standard Transaction is an exchange of PHI data between two entities. For example, your doctor sends your prescription to the pharmacy, which in turn requests coverage verification from the insurance company.
HIPAA governs all of these PHI transactions, including:

  • Claims and encounter information
  • Payment and remittance advice
  • Claims status
  • Eligibility status
  • Enrollment and disenrollment
  • Referrals and authorizations
  • Coordination of benefits
  • Premium payment

How to Become HIPAA Compliant

Becoming HIPAA compliant isn’t all that different from any of your other basic 21st-century data security plans. In fact, setting up a solid data security plan will help maintain HIPAA compliance.

Here is a HIPAA Compliance Checklist to get you started:


  1. Map your data and discover where your HIPAA-protected files live on your network (including cloud storage) – a minimal discovery sketch follows this list.
  2. Determine who has access to HIPAA data, who should have access to HIPAA data, and implement a least privilege model.
  3. Monitor all file access to your data.
  4. Set up alerts to notify you if someone accesses HIPAA data, or if someone creates new HIPAA data in a non-compliant repository. Use data security analytics to differentiate between normal behaviors and potential HIPAA violations.
  5. Protect the perimeter with firewalls, endpoint security, locks on server rooms, two-factor authentication, strong passwords, and session timeouts.
  6. Monitor activity on the perimeter and add threat models to your data security analytics.
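For step 1, you can get a rough first pass even before bringing in tooling: recursively scan your shares for PHI-looking patterns. A deliberately simplistic PowerShell sketch – the share path is hypothetical, and a single SSN regex is no substitute for a real classification engine:

# Rough first-pass discovery: which files look like they contain SSNs?
$ssnPattern = '\b\d{3}-\d{2}-\d{4}\b'

Get-ChildItem -Path '\\fileserver\shares' -Recurse -Include *.txt,*.csv,*.log -File |
    Where-Object { Select-String -Path $_.FullName -Pattern $ssnPattern -Quiet } |
    Select-Object -ExpandProperty FullName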

HIPAA compliance isn’t just the law – it will protect your customers’ data and ensure that your business prospers in the age of digital medical records.

Varonis has been working with our customers on HIPAA compliance since before the HITECH Act in 2009. The Varonis Data Security Platform provides the foundation for a HIPAA compliant data security strategy.

Get started with a free email course on HIPAA compliance or sign up for a demo to talk directly with our data security experts.

A Few Thoughts on Data Security Standards


Did you know that the 462-page NIST 800-53 data security standard has 206 controls with over 400 sub-controls¹? By the way, you can gaze upon the convenient XML-formatted version here. PCI DSS is no slouch either, with hundreds of sub-controls in its requirements document. And then there’s the sprawling ISO 27001 data standard.

Let’s not forget about security frameworks, such as COBIT and NIST CSF, which are kind of meta-standards that map into other security controls. For organizations in health care or finance that are subject to US federal data security rules, HIPAA and GLBA’s data regulations need to be considered as well. And if you’re involved in the EU market, there’s GDPR; in Canada, it’s PIPEDA; in the Philippines, it’s the Data Privacy Act, etc., etc.

There’s enough technical and legal complexity out there to keep teams of IT security pros, privacy attorneys, auditors, and diplomats busy till the end of time.

As a security blogger, I’ve also puzzled and pondered over the aforementioned standards and regulations. I’m not the first to notice the obvious: data security standards fall into patterns that make them all very similar.

Security Control Connections

If you’ve mastered and implemented one, then very likely you’re compliant with others as well. In fact, that’s one good reason for having frameworks. For example, with, say, NIST CSF, you can leverage your investment in ISO 27001 or ISA 62443 through their cross-mapped control matrix (below).

Got ISO 27001? Then you’re compliant with NIST CSF!

I think we can all agree that most organizations will find it impossible to implement all the controls in a typical data standard with the same degree of attention – when was the last time you checked the physical access audit logs for your data transmission assets (NIST 800-53, PE-3b)?

So, to make it easier for companies and the humans who work there, some of the standards groups have issued further guidelines that break the huge list of controls into more achievable goals.

The PCI group has a prioritized approach to dealing with its DSS: six practical milestones, each broken into a smaller subset of relevant controls. It also has a best practices guide that groups – and this is important – security controls into three broader functional areas: assessment, remediation, and monitoring.

In fact, we wrote a fascinating white paper explaining these best practices, and how you should be feeding back the results of monitoring into the next round of assessments. In short: you’re always in a security process.

NIST CSF, which itself is a pared-down version of NIST 800-53, also has a similar breakdown of its controls into broader categories, including identification, protection, and detection. If you look more closely at the CSF identification controls, which mostly involve inventorying your IT data assets and systems, you’ll see that the main goal in this area is to evaluate or assess the security risks of the assets you’ve collected.

File-Oriented Risk Assessments

In my mind, the trio of assess, protect, and monitor is a good way to organize and view just about any data security standard.

In dealing with these data standards, organizations can also take a practical shortcut through these controls based on what we know about the kinds of threats appearing in our world today – not the ones the data standards’ authors were facing when they wrote the controls!

We’re now in a new era of stealthy attackers who enter systems undetected, often through phishing emails, leveraging previously stolen credentials, or zero-day vulnerabilities. Once inside, they can fly under the monitoring radar with malware-free techniques, find monetizable data, and then remove or exfiltrate it.

Of course it’s important to assess, protect and monitor network infrastructure, but these new attack techniques suggest that the focus should be inside the company.

And we’re back to a favorite IOS blog theme: make it much harder for hackers to find the valuable data – credit card or account numbers, corporate IP – in your file systems, and detect and stop the attackers as soon as possible.

Therefore, when looking at how to apply typical data security controls, think file systems!

For, say, NIST 800-53, that means scanning file systems, looking for sensitive data, examining the ACLs or permissions, and then assessing the risks (CM-8, RA-2, RA-3). For remediation or protection, this would involve reorganizing Active Directory groups and resetting ACLs to be more exclusive (AC-6). For detection, you’ll want to watch for unusual file system accesses that likely indicate hackers borrowing employee credentials (SI-4).

I think the most important point is not to view these data standards as just an enormous list of disconnected controls, but instead to consider them in the context of assess-protect-monitor, and then apply them to your file systems.

I’ll have more to say on a data or file-focused view of data security controls in the coming weeks.

¹ How did I know that NIST 800-53 has over 400 sub-controls? I took the XML file and ran these two amazing lines of PowerShell:

# Load the NIST 800-53 control catalog as an XML document
[xml]$books = Get-Content 800-53-controls.xml
# Emit each control's sub-statement numbers and count the lines
$books.controls.control | ForEach-Object { $_.statement.statement.number } | Measure-Object -Line

 

Data Security Compliance and DatAdvantage, Part III:  Protect and Monitor



At the end of the previous post, we took up the nuts-and-bolts issues of protecting sensitive data in an organization’s file system. One popular approach, the least-privileged access model, is often explicitly mentioned in compliance standards, such as NIST 800-53 or PCI DSS. Varonis DatAdvantage and DataPrivilege provide a convenient way to accomplish this.

Ownership Management

Let’s start with DatAdvantage. We saw last time that DA provides graphical support for helping to identify data ownership.

If you want to get more granular than just seeing who’s been accessing a folder, you can view the actual access statistics of the top users with the Statistics tab (below).

This is a great help in understanding who is really using the folder. The ultimate goal is to find the true users and remove extraneous groups and users, who perhaps needed occasional access but not as part of their job role.

The key point is to first determine the folder’s owner — the one who has the real knowledge and wisdom of what the folder is all about. This may require some legwork on IT’s part in talking to the users, based on the DatAdvantage stats, and working out the real chain of command.

Once you use DatAdvantage to set the folder owners (below), these more informed power users, as we’ll see, can independently manage who gets access and whose access should be removed. The folder owner will also automatically receive DatAdvantage reports, which will help guide them in making future access decisions.

There’s another important point to make before we move on. IT has long been responsible for provisioning access without knowing the business purpose. Varonis DatAdvantage assists IT in finding these owners and then giving them the access-granting powers.

Anyway, once the owner has done the housekeeping of paring down and removing unnecessary folder groups, they’ll want to put into place a process for permission management. Data standards and laws recognize the importance of having security policies and procedures as part of an ongoing program – i.e., not something an owner does once a year.

And Varonis has an important part to play here.

Maintaining Least-Privileged Access

How do ordinary users, whose job role now requires them to access a managed folder, request permission from the owner?

This is where Varonis DataPrivilege makes an appearance. Regular users will need to bring this interface up (below) to formally request access to a managed folder.

The owner of the folder has a parallel interface from which to receive these requests and then grant or revoke permissions.

As I mentioned above, these security ideas for least-privilege-access and permission management are often explicitly part of compliance standards and data security laws. Building on my list from the previous post, here’s a more complete enumeration of controls that Varonis DatAdvantage supports:

  • NIST 800-53: AC-2, AC-3, AC-5, CM-5
  • NIST 800-171: 3.1.4, 3.1.5, 3.4.5
  • PCI DSS 3.x: 7.1,7.2
  • HIPAA: 45 CFR 164.312 a(1), 164.308a(4)
  • ISO 27001: A.6.1.2, A.9.1.2, A.9.2.3, A.11.2.2
  • CIS Critical Security Controls: 14.4
  • New York State DFS Cybersecurity Regulations: 500.07

Stale Sensitive Data

Minimization is an important theme in security standards and laws. These ideas are best represented in the principles of Privacy by Design (PbD), which has good overall advice on this subject: minimize the sensitive data you collect, minimize who gets to see it, and minimize how long you keep it.

Let’s address the last point, which goes under the more familiar name of data retention. One low-hanging fruit for reducing security risk is to delete or archive stale sensitive data embedded in files.

This makes good sense, of course. This stale data can be, for example, consumer PII collected in short-term marketing campaigns, but now residing in dusty spreadsheets or rusting management presentations.

Your organization may no longer need it, but it’s just the kind of monetizable data that hackers love to get their hands on.

As we saw in the first post, which focused on Identification, DatAdvantage can find and identify file data that hasn’t been used after a certain threshold date.

Can the stale data report be tweaked to find stale data that is also sensitive?

Affirmative.

You just need to add the hit count filter and set the number of sensitive data matches to an appropriate threshold.

In my test environment, I discovered that the C:\Share\pvcs folder hasn’t been touched in over a year and has some sensitive data.

The next step is to visit the Data Transport Engine (DTE), available in DatAdvantage from the Tools menu. It allows you to create a rule that will search for files to archive and, if necessary, delete.

In my case, my rule’s search criteria mirrors the same filters used in generating the report. The rule is doing the real heavy-lifting of removing the stale, sensitive data.

Since the rule is saved, it can be rerun to enforce the retention limits. Even better, DTE can automatically run the rule on a periodic basis so that you never have to worry about stale sensitive data in your file system.
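If you wanted to prototype the same retention logic outside of DTE, it boils down to: find files past the age threshold that also match a sensitive-data pattern, then archive them. A sketch with assumed paths and a one-year cutoff (note that last-access timestamps are often not tracked on Windows volumes):

$cutoff  = (Get-Date).AddYears(-1)
$pattern = '\b\d{3}-\d{2}-\d{4}\b'   # simplistic SSN pattern

Get-ChildItem -Path 'C:\Share' -Recurse -File |
    Where-Object { $_.LastAccessTime -lt $cutoff } |
    Where-Object { Select-String -Path $_.FullName -Pattern $pattern -Quiet } |
    Move-Item -Destination 'D:\Archive' -WhatIf   # drop -WhatIf to actually move files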

Data retention requirements can be found in the following security standards and regulations:

  • NIST 800-53: SI-12
  • PCI DSS 3.x: 3.1
  • CIS Critical Security Controls: 14.7
  • New York State DFS Cybersecurity Regulations: 500.13
  • EU General Data Protection Regulation (GDPR): Article 25.2

Detecting and Monitoring

Following the order of the NIST higher-level security control categories from the first post, we now arrive at our final destination in this series, Detect.

No data security strategy is foolproof, so you need a secondary defense based on detection and monitoring controls: effectively you’re watching the system and looking for unusual activities.

Varonis, and specifically DatAlert, has a unique role in detection because its underlying security platform is based on monitoring file system activities.

By now everyone knows (or should know) that phishing and injection attacks allow hackers to get around network defenses as they borrow existing users’ credentials, and fully-undetectable (FUD) malware means they can avoid detection by virus scanners.

So how do you detect the new generation of stealthy attackers?

No attacker can avoid using the file system to load their software, copy files, and crawl a directory hierarchy looking for sensitive data to exfiltrate. If you can spot their unique file activity patterns, then you can stop them before they remove or exfiltrate the data.
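To make that concrete: even without a dedicated platform, Windows object-access auditing gives you raw file events to hunt through. A minimal sketch that counts file-access events (event ID 4663) per account over the last day – the property index and any alerting threshold are assumptions you’d tune for your environment:

# Requires object-access auditing (SACLs) enabled on the folders you care about.
$events = Get-WinEvent -FilterHashtable @{
    LogName   = 'Security'
    Id        = 4663                      # 'an attempt was made to access an object'
    StartTime = (Get-Date).AddHours(-24)
}

$events |
    Group-Object { $_.Properties[1].Value } |   # property 1 is typically the account name
    Sort-Object Count -Descending |
    Select-Object Count, Name -First 10         # unusually chatty accounts float to the top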

We can’t cover all of DatAlert’s capabilities in this post — probably a good topic for a separate series! — but since it has deep insight into all file system information and events, plus histories of user behaviors, it’s in a powerful position to determine what’s out of the normal range for a user account.

We call this user behavior analytics or UBA, and DatAlert comes bundled with a suite of UBA threat models (below).  You’re free to add your own, of course, but the pre-defined models are quite powerful as is. They include detecting crypto intrusions, ransomware activity, unusual user access to sensitive data, unusual access to files containing credentials, and more.

All the alerts that are triggered can be tracked from the DatAlert Dashboard. IT staff can either intervene and respond manually or set up scripts to run automatically – for example, to automatically disable accounts.

If a specific data security law or regulation requires a breach notification to be sent to an authority, DatAlert can provide some of the information that’s typically required – files that were accessed, types of data, etc.

Let’s close out this post with a final list of detection and response controls in data standards and laws that DatAlert can help support:

  • NIST 800-53: SI-4, AU-13, IR-4
  • PCI DSS 3.x: 10.1, 10.2, 10.6
  • CIS Critical Security Controls: 5.1, 6.4, 8.1
  • HIPAA: 45 CFR 164.400-164.414
  • ISO 27001: A.16.1.1, A.16.1.4
  • New York State DFS Cybersecurity Regulations: 500.02, 500.16, 500.27
  • EU General Data Protection Regulation (GDPR): Article 33, 34
  • Most US states have breach notification rules


Data Security Compliance and DatAdvantage, Part II:  More on Risk Assessment


I can’t really overstate the importance of risk assessments in data security standards. It’s really at the core of everything you subsequently do in a security program. In this post we’ll finish discussing how DatAdvantage helps support many of the risk assessment controls that are in just about every security law, regulation, or industry security standard.

Last time, we saw that risk assessments were part of NIST’s Identify category. In short: you’re identifying the risks and vulnerabilities in your IT system. Of course, at Varonis we’re specifically focused on sensitive plain-text data scattered around an organization’s file system.

Identify Sensitive Files in Your File System

As we all know from major breaches over the last few years, poorly protected folders are where the action is: hackers have been focusing their efforts there.

The DatAdvantage 4a report I mentioned in the last post is used for finding sensitive data in folders with global permissions. Varonis uses various built-in filters, or rules, to decide what’s considered sensitive.

I counted about 40 or so such rules, covering credit card numbers, Social Security numbers, and various personal identifiers that are required to be protected by HIPAA and other laws. And with our new GDPR Patterns, there are now filters – over 250! – covering phone numbers, licenses, and national IDs for EU countries.

Identify Risky and Unnecessary Users Accessing Folders

We now have a folder that is a potential source of data security risk. What else do we want to identify?

Users that have accessed this folder are a good starting point.

There are a few ways to do this with DatAdvantage, but let’s just work with the raw access audit log of every file event on a server, which is available in the 2a report. By adding a directory path filter, I was able to narrow down the results to the folder I was interested in.
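To illustrate the idea outside the product: if you exported raw access events to a CSV, the same narrowing-down is a short pipeline. The column names (‘User’, ‘Path’) and the folder path here are hypothetical:

Import-Csv .\file-access-events.csv |
    Where-Object { $_.Path -like '\\fileserver\legal\Corporate*' } |
    Group-Object User |
    Sort-Object Count -Descending |
    Select-Object Name, Count   # who is actually touching this folder, and how often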

So now we at least know who’s really using this specific folder (and its sub-folders). Oftentimes this is a far smaller pool of users than has been enabled through the group permissions on the folder. In any case, this should be the basis of a risk assessment discussion: craft more tightly focused groups for this folder and set an owner who can then manage the content.

In the Review Area of DatAdvantage, there’s more graphical support for finding users accessing folders, the percentage of the Active Directory group actually using the folder, and recommendations for groups that should be accessing the folder. We’ll explore this section of DatAdvantage further below.

For now, let’s just stick to the DatAdvantage reports since there’s so much risk assessment power bundled into them.

Another similar discussion can be based on using the 12l report to analyze folders that contain sensitive data but have global access – i.e., include the Everyone group.

There are two ways to think about this very obvious risk. You can remove the Everyone access on the folder. This can and likely will cause headaches for users. DatAdvantage conveniently has a sandbox feature that allows you to test this.

On the other hand, there may be good reasons the folder has global access, and perhaps there are other controls in place that would (in theory) help reduce the risk of unauthorized access. This is a risk discussion you’d need to have.

Another way to handle this is to see who’s copying files into the folder — maybe it’s just a small group of users — and then establish policies and educate these users about dealing with sensitive data.

You could then go back to the 1a report, set up filters to search for only file creation events in these folders, and collect the user names (below).

Who’s copying files into my folder?

After emailing this group of users with follow-up advice and information on copying, say, spreadsheets with credit card numbers, you can run the 12l report the next month to see if any new sensitive data has made its way into the folder.

The larger point is that the DatAdvantage reports help identify the risks and the relevant users involved so that you can come up with appropriate security policies — for example, least-privileged access, or perhaps looser controls but with better monitoring or stricter policies on granting access in the first place. As we’ll see later on in this series, Varonis DatAlert and DataPrivilege can help enforce these policies.

In the previous post, I listed the relevant controls that DA addresses for the core identification part of risk assessment. Here’s a list of risk assessment and policy-making controls in various laws and standards where DatAdvantage can help:

  • NIST 800-53: RA-2, RA-3, RA-6
  • NIST 800-171: 3.11.1
  • HIPAA:  164.308(a)(1)(i), 164.308(a)(1)(ii)
  • Gramm-Leach-Bliley: 314.4(b),(c)
  • PCI DSS 3.x: 12.1,12.2
  • ISO 27001: A.12.6.1, A.18.2.3
  • CIS Critical Security Controls: 4.1, 4.2
  • New York State DFS Cybersecurity Regulations: 500.03, 500.06

Thou Shalt Protect Data

A full risk assessment program would also include identifying external threats – new malware, new hacking techniques. With this new real-world threat intelligence, you and your IT colleagues should go back, re-adjust the risk levels you assigned initially, and then re-strategize.

It’s an endless game of cyber cat-and-mouse, and a topic for another post.

Let’s move to the next broad functional category, Protect. One of the critical controls in this area is limiting access to only authorized users. This is easier said than done, but we’ve already laid the groundwork above.

The guiding principles are typically least-privileged access and role-based access controls. In short: give appropriate users just the access they need to do their jobs or carry out their roles.

Since we’re now at a point where we are about to take a real action, we’ll need to shift from the DatAdvantage Reports section to the Review area of DatAdvantage.

The Review Area tells me who’s been accessing the legal\Corporate folder, which turns out to be a far smaller set than has been given permission through their group access rights.

To implement least-privilege access, you’ll want to create a new AD group for just those who really, truly need access to the legal\Corporate folder. And then, of course, remove the existing groups that have been given access to the folder.

In the Review Area, you can select and move the small set of users who really need folder access into their own group.
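For the curious, the equivalent raw Active Directory plumbing looks roughly like this sketch – the group name, members, and share path are all hypothetical, and you give up DatAdvantage’s sandboxed testing by doing it by hand:

Import-Module ActiveDirectory

# Create a tightly scoped group and add only the true users of the folder.
New-ADGroup -Name 'Legal-Corporate-RW' -GroupScope Global -GroupCategory Security
Add-ADGroupMember -Identity 'Legal-Corporate-RW' -Members 'asmith', 'bjones'

# Grant the new group modify rights on the folder (then remove the old, broader groups).
icacls '\\fileserver\legal\Corporate' /grant 'CORP\Legal-Corporate-RW:(OI)(CI)M'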

Yeah, this assumes you’ve done some additional legwork during the risk assessment phase – spoken to the users who accessed the legal\Corporate folder, identified the true data owners, and understood what they’re using this folder for.

DatAdvantage can provide a lot of support in narrowing down who to talk to. So by the time you’re ready to use the Review Area to make the actual changes, you already should have a good handle on what you’re doing.

One other key control, which we’ll discuss in more detail next time, is managing file permissions for the folders.

Essentially, that’s where you find and assign data owners, and then ensure that there’s a process going forward to allow the owner to decide who gets access. We’ll show how Varonis has a key role to play here through both DatAdvantage and DataPrivilege.

I’ll leave you with this list of least permission and management controls that Varonis supports:

  • NIST 800-53: AC-2, AC-3, AC-6
  • NIST 800-171: 3.1.4, 3.1.5
  • PCI DSS 3.x: 7.1
  • HIPAA: 164.312 a(1)
  • ISO 27001: A.6.1.2, A.9.1.2, A.9.2.3
  • CIS Critical Security Controls: 14.4
  • New York State DFS Cybersecurity Regulations: 500.07


Data Security Compliance and DatAdvantage, Part I:  Essential Reports for Risk Assessment


Over the last few years, I’ve written about many different data security standards, data laws, and regulations. So I feel comfortable in saying there are some similarities in the EU’s General Data Protection Regulation, the US’s HIPAA rules, PCI DSS, NIST’s 800 family of controls and others as well.

I’m really standing on the shoulders of giants, in particular the friendly security standards folks over at the National Institute of Standards and Technology (NIST), in understanding the inter-connectedness. They’re the go-to people for our government’s own data security standards: for both internal agencies (NIST 800-53) and outside contractors (NIST 800-171). And through its voluntary Critical Infrastructure Security Framework, NIST is also influencing data security ideas in the private sector.

One of their big ideas is to divide security controls, which every standard and regulation has in one form or another, into five functional areas: Identify, Protect, Detect, Respond, and Recover. In short, give me a data standard and you can map their controls into one of these categories.

The NIST big picture view of security controls.

The idea of commonality led me to start this series of posts about how our own products, principally Varonis DatAdvantage, though not targeted at any specific data standard or law, can in fact help meet many of the key controls and legal requirements. The out-of-the-box reporting feature in DatAdvantage is a great place to start to see how all this works.

In this first blog post, we’ll focus on DA reporting functions that roughly cover the Identify category. This is a fairly large area in itself, taking in asset identification, governance, and risk assessment.

Assets: Users, Files, and More

For DatAdvantage, users, groups, and folders are the raw building blocks used in all its reporting. However, if you want to view pure file system asset information, you can go to the following three key reports in DatAdvantage.

The 3a report gives IT staff a listing of Active Directory group membership. For starters, you could run the report on the all-encompassing Domain Users group to get a global user list (below). You can also populate the report with any AD property associated with a user (email, manager, department, location, etc.).

For folders, report 4f provides access paths, size, number of subfolders, and the share path.

Beyond a vanilla list of folders, IT security staff usually wants to dig a little deeper into the file structure in order to identify sensitive or critical data. What is critical will vary by organization, but generally they’re looking for personally identifiable information (PII), such as social security numbers, email addresses, and account numbers, as well as intellectual property (proprietary code, important legal documents, sales lists).

With DatAdvantage’s 4g report, Varonis lets security staff zoom into folders containing sensitive PII data, which is often scattered across huge corporate file systems. Behind the scenes, the Varonis classification engine has scanned files using PII filters for different laws and regulations, and rated the files based on the number of hits — for example, number of US social security numbers or Canadian driver’s license numbers.

The 4g report lists these sensitive files from highest to lowest “hit” count. By the way, this is the report our customers often run first and find very eye-opening – especially if they were under the impression that there’s ‘no way millions of credit card numbers could be found in plaintext’.
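For intuition, here’s a toy version of that hit-count ranking, with one simplistic SSN pattern standing in for the classification engine’s hundreds of validated rules:

$pattern = '\b\d{3}-\d{2}-\d{4}\b'

Get-ChildItem -Path '\\fileserver\shares' -Recurse -File |
    ForEach-Object {
        $hits = (Select-String -Path $_.FullName -Pattern $pattern -AllMatches).Matches.Count
        if ($hits) { [PSCustomObject]@{ File = $_.FullName; Hits = $hits } }
    } |
    Sort-Object Hits -Descending   # highest hit counts first, like the 4g report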

Assessing the Risks

We’ve just seen how to view nuts-and-bolts asset information, but the larger point is to use the file asset inventory to help security pros discover where an organization’s particular risks are located.

In other words, it’s the beginning of a formal risk assessment.

Of course, the other major part of assessment is to look (continuously) at the threat environment and then be on the hunt for specific vulnerabilities and exploits. We’ll get to that in a future post.

Now let’s use DatAdvantage for risk assessments, starting with users.

Stale user accounts are an overlooked scenario that has lots of potential risk. Essentially, user accounts are often not disabled or removed when an employee leaves the company or a contractor’s temporary assignment is over.

For the proverbial disgruntled employee, it’s not unusual for this former insider to still have access to his account. Or for hackers to gain access to a no-longer-used third-party contractor’s account and then leverage that to hop into their real target.

In DatAdvantage’s 3a report, we can produce a list of stale user accounts based on the last logon time maintained by Active Directory.
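The same check is easy to approximate straight from Active Directory. A sketch with an assumed 90-day threshold (LastLogonDate is replicated lazily, so treat the output as a starting point, not a verdict):

Import-Module ActiveDirectory

$cutoff = (Get-Date).AddDays(-90)
Get-ADUser -Filter { Enabled -eq $true -and LastLogonDate -lt $cutoff } -Properties LastLogonDate |
    Select-Object Name, SamAccountName, LastLogonDate |
    Sort-Object LastLogonDate   # stalest accounts first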

The sensitive data report that we saw earlier is the basis for another risk assessment report. We just have to filter on folders that have “everyone” permissions.

Security pros know from the current threat environment that phishing or SQL injection attacks allow an outsider to get the credentials of an insider. With no special permissions, a hacker would then have automatic access to folders with global permissions.

Therefore, there’s significant risk in having sensitive data in these open folders (assuming there are no other compensating controls).

DatAdvantage’s 4a report nicely shows where these files are.

Let’s take a breath.

In the next post, we’ll continue our journey through DatAdvantage by finishing up with the risk assessment area and then focusing on the Protect and Detect categories.

For those compliance-oriented IT pros and other legal-istas, here’s a short list of regulations and standards (based on our customers’ requests) that the above reports help support:

  • NIST 800-53: IA-2, CM-8
  • NIST 800-171: 3.5.1
  • HIPAA:  45 CFR 164.308(a)(1)(ii)(A)
  • GLBA: FTC Safeguards Rule (16 CFR 314.4)
  • PCI DSS 3.x: 12.2
  • ISO 27001: A.7.1.1
  • New York State DFS Cybersecurity Regulations: 500.02
  • EU GDPR: Security of Processing (Article 32) and Impact Assessments (Article 35)

[Podcast] Adam Tanner on the Dark Market in Medical Data, Transcript



Adam Tanner, author of Our Bodies, Our Data, has shed light on the dark market in medical data. In my interview with Adam, I learned that our medical records, principally drug transactions, are sold to medical data brokers who then resell this information to drug companies. How can this be legal under HIPAA without patient consent?

Adam explains that if the data is anonymized then it no longer falls under HIPAA’s rules. However, the prescribing doctor’s name is still left on the record that is sold to brokers.

As readers of this blog know, bits of information related to location, like the doctor’s name, don’t truly anonymize a record and can act as quasi-identifiers when associated with other data.

My paranoia was certainly in the red zone during this interview, and we explored what would happen if hackers or others could connect the dots. Some of the possibilities were a little unsettling.

Adam believes that by writing this book, he can raise awareness about this hidden medical data market. He also believes that consumers should be given a choice — since it’s really their data  — about whether to release the “anonymized” HIPAA records to third-parties.

 

Inside Out Security: Today, I’d like to welcome Adam Tanner. Adam is a writer-in-residence at Harvard University’s Institute for Quantitative Social Science. He’s written extensively on data privacy. He’s the author of What Stays In Vegas: The World of Personal Data and the End of Privacy As We Know It. His articles on data privacy have appeared in Scientific American, Forbes, Fortune, and Slate. And he has a new book out, titled “Our Bodies, Our Data,” which focuses on the hidden market in medical data. Welcome, Adam.

Adam Tanner: Well, I’m glad to be with you.

IOS: We’ve also been writing about medical data privacy for our Inside Out Security blog. And we’re familiar with how, for example, hospital discharge records can be legally sold to the private sector.

But in your new book, and this is a bit of a shock to me, you describe how pharmacies and others sell prescription drug records to data brokers. Can you tell us more about the story you’ve uncovered?

AT: Basically, throughout your journey as a patient into the healthcare system, information about you is sold. It has nothing to do with your direct treatment. It has to do with commercial businesses wanting to gain insight about you and your doctor, largely, for sales and marketing.

So, take the first step. You go to your doctor’s office. The door is shut. You tell your doctor your intimate medical problems. The information that is entered into the doctor’s electronic health system may be sold, commercially, as may the prescription that you pick up at the pharmacy or the blood tests that you take or the urine tests at the testing lab. The insurance company that pays for all of this or subsidizes part of this, may also sell the information.

That information about you is anonymized.  That means that your information contains your medical condition, your date of birth, your doctor’s name, your gender, all or part of your postal zip code, but it doesn’t have your name on it.

All of that trade is allowed, under U.S. rules.

IOS: You mean under HIPAA?

AT: That’s right. Now this may be surprising to many people who would ask this question, “How can this be legal under current rules?” Well, HIPAA says that if you take out the name and anonymize according to certain standards, it’s no longer your data. You will no longer have any say over what happens to it. You don’t have to consent to the trade of it. Outsiders can do whatever they want with that.

I think a lot of people would be surprised to learn that. Very few patients know about it. Even doctors and pharmacists and others who are in the system don’t know that there’s this multi-billion-dollar trade.

IOS: Right … we’ve written about the de-identification process, which it seems like it’s the right thing to do, in a way, because you’re removing all the identifiers, and that includes zip code information, other geo information. It seems that for research purposes that would be okay. Do you agree with that, or not?

AT: So, these commercial companies, and some of the names may be well-known to us, companies such as IBM Watson Health, GE, LexisNexis, and the largest of them all may not be well-known to the general public, which is Quintiles and IMS. These companies have dossiers on hundreds of millions of patients worldwide. That means that they have medical information about you that extends over time, different procedures you’ve had done, different visits, different tests and so on, put together in a file that goes back for years.

Now, when you have that much information, even if it only has your date of birth, your doctor’s name, your zip code, but not your name, not your Social Security number, not things like that, it’s increasingly possible to identify people from that. Let me give you an example.

I’m talking to you now from Fairbanks, Alaska, where I’m teaching for a year at the university here. I lived, before that, in Boston, Massachusetts, and before that, in Belgrade, Serbia. I may be the only man of my age who meets that specific profile!

So, if you knew those three pieces of information about me and had medical information from those years, I might be identifiable, even in a haystack of millions of different other people.

IOS: Yeah …We have written about that as well in the blog. We call these quasi-identifiers. They’re not the traditional kind of identifiers, but they’re other bits of information, as you pointed out, that can be used to sort of re-identify. Usually it’s a small subset, but not always. And that this information would seem also should be protected as well in some way. So, do you think that the laws are keeping up with this?

AT: HIPAA was written 20 years ago, and the HIPAA rules say that you can freely trade our patient information if it is anonymized to a certain standard. Now, the technology has gone forward, dramatically, since then.

So, the ability to store things very cheaply and the ability to scroll through them is much more sophisticated today than it was when those rules came into effect. For that reason, I think it’s a worthwhile time to have a discussion now. Is this the best system? Is this what we want to do?

Interestingly, the system of the free trade in our patient information has evolved because commercial companies have decided this is what they’d want to do. There has not been an open public discussion of what is best for society, what is best for patients, what is best for science, and so on. This is just a system that evolved.

I’m saying, in writing this book, “Our Bodies, Our Data,” that it is maybe worthwhile that we re-examine where we’re at right now and say, “Do we want to have better privacy protection? Do we want to have a different system of contributing to science than we do now?”

IOS: I guess what also surprised me was that you say that pharmacies, for example, can sell the drug records, as long as it’s anonymized. You would think that the drug companies would be against that. It’s sort of leaking out their information to their competitors, in some way. In other words, information goes to the data brokers and then gets resold to the drug companies.

AT: Well, but you have to understand that everybody in what I call this big-data health bazaar is making money off of it. So, a large pharmacy chain, such as CVS or Walgreen’s, they may make tens of millions of dollars in selling copies of these prescriptions to data miners.

Drug companies are particularly interested in buying this information because this information is doctor-identified. It says that Dr. Jones in Pittsburgh prescribes drug A almost all the time, rather than drug B. So, the company that makes drug B may send a sales rep to the doctor and say, “Doctor, here’s some free samples. Let’s go out to lunch. Let me tell you about how great drug B is.”

So, this is because there exists these doctor profiles on individual doctors across the country, that are used for sales and marketing, for very sophisticated kind of targeting.

IOS: So, in an indirect way, the drug companies can learn about the other drug companies’ sales patterns, and then say, “Oh, let me go in there and see if I can take that business away.” Is that sort of the way it’s working?

AT: In essence, yes. The origins of this trade date back to the 1950s. In its first form, these data companies, such as IMS Health, what they did was just telling companies what drugs sold in what market. Company A has 87% of the market. Their rival has 13% of the market. When medical information began to become digitized in the 1960s and ’70s and evermore since then, there was a new opportunity to trade this data.

So, all of a sudden, insurance companies and middle-men connecting up these companies, and electronic health records providers and others, had a product that they could sell easily, without a lot of work, and data miners were eager to buy this and produce new products for mostly the pharmaceutical companies, but there are other buyers as well.

IOS:  I wanted to get back to another point you mentioned, in that even with anonymized data records of medical records, with all the other information that’s out there, you can re-identify or at least limit, perhaps, the pool of people who that data would apply to.

What’s even more frightening now is that hackers have been stealing health records like crazy over the last couple of years. So, there’s a whole dark market of hacked medical data that, I guess, if they got into this IMS database, they would have the keys to the kingdom, in a way.

Am I being too paranoid here?

AT: Well, no, you correctly point out that there has been a sharp upswing in hacking into medical records. That can happen into a small, individual practice, or it could happen into a large insurance company.

And in fact, the largest hacking attack of medical records in the last couple of years has been into Anthem Health, which is the Blue Cross Blue Shield company. Almost 80 million records were hacked in that.

So even people that did… I was hacked in that, even though I was not, at the time, a customer of them or had never been a customer of them, but they… One company that I dealt with outsourced to someone else, who outsourced to them. So, all of a sudden, this information can be in circulation.

There’s a government website people can look at, and you’ll see, every day or two, there are new hackings. Sometimes it involves a few thousand names and an obscure local clinic. Sometimes it’ll be a major company, such as a lab test company, and millions of names could be impacted.

So, this is something definitely to be concerned about. Yes, you could take these hacked records and match them with anonymized records to try to figure out who people are, but I should point out that there is no recorded instance of hackers getting into these anonymized dossiers by the big data miners.

IOS: Right. We hope so!

AT: I say recorded or acknowledged instance.

IOS: Right. Right. But there’s now been sort of an awareness of cyber gangs and cyber terrorism and then the use of, let’s say, records for blackmail purposes.

I don’t want to get too paranoid here, but it seems like there’s just a potential for just a lot of bad possibilities. Almost frightening possibilities with all this potential data out there.

AT: Well, we have heard recently about rumors of an alleged dossier involving Donald Trump and Russia.

IOS: Exactly.

AT: And if you think about what kind of information could be most damaging or harmful to someone, it could be financial information, it could be sexual information, or it could be health information.

IOS: Yeah, or that someone has a prescription for a certain drug. I’m not suggesting anything, but all that information together could have lots of implications, political implications, let’s say.

AT: I mean, if you know that someone takes a drug that’s commonly used for a mental health problem, that information could be used against them. It could be used to deny them life insurance. It could be used to deny them a promotion or a job offer. It could be used by rivals in different ways to humiliate them. So, this medical information is quite powerful.

One person who has experienced this and spoken publicly about it is the actor Charlie Sheen. He tested positive for HIV. Others somehow learned of it and blackmailed him. He said he paid millions of dollars to keep that information from going public, before finally deciding to stop paying and tell the world about his medical condition.

IOS: Actually I was not aware of the payments he was making. That’s just astonishing. So, is there any hope here? Do you see some remedies, through maybe regulations or enforcement of existing laws? Or perhaps we need new laws?

AT: As I mentioned, the current rule, HIPAA, allows for the free trade of your data if it’s anonymized. Now, given the growing sophistication of computing, I think we should change the rule and define our medical data as any medical information about us, whether or not it’s anonymized.

So, if a doctor is writing in the electronic health record, you should have a say as to whether or not that information is going to be used elsewhere.

A little side point I should mention. There are a lot of good scientists and researchers who want data to see if they can gain insights into disease and new medications. I think people should have the choice whether or not they want to contribute to those efforts.

So, you know, there are a lot of good efforts. There’s a government effort under way now to gather a million DNA samples from people to make available to science. If people want to participate in that, and they think that’s good work, they should definitely be encouraged to do so, but they should have the say and decide for themselves.

And so far, we don’t really have that system. So, by redefining what patient data is, to say, “Medical information about a patient, whether or not it’s anonymized,” I think that would give us the power to do that.

IOS: So effectively, you’re saying the patient owns the data and would have to give consent for it to be used. Is that about right?

AT: I think so. But on the other hand, as I mentioned, I’ve written this book to encourage this discussion. The problem we have right now is that the trade is so opaque.

Companies are extremely reluctant to talk about this commercial trade. They do occasionally say, “Oh, this is great for science and for medicine, and all of these great things will happen.” Well, if it is so fantastic, let’s have this discussion in the open, where everyone will say, “All right. Here’s how we use the data. Here’s how we share it. Here’s how we sell it.”

Then let people in on it, and let them decide whether they really want that system or not. But it’s hard to have an intelligent policy discussion about what’s best for the whole country if industry has decided for itself how to proceed without involving others.

IOS: Well, I’m so glad you’ve written this book. I’m hoping it will promote the discussion you’re talking about. This has been great, and I want to thank you for the interview. By the way, where can our listeners reach out to you on social media? Do you have a handle on Twitter or Facebook?

AT: Well, I’m @datacurtain  and I have a webpage, which is http://adamtanner.news/

IOS: Wonderful. Thank you very much, Adam.

[Podcast] Adam Tanner on the Dark Market in Medical Data, Part II



More Adam Tanner! In this second part of my interview with the author of Our Bodies, Our Data, we start exploring the implications of having massive amounts of online medical data. There’s much to worry about.

With hackers already good at stealing health insurance records, is it only a matter of time before they get into the databases of the drug prescription data brokers?

My data privacy paranoia about all this came out in full force during the interview. Thankfully, Adam was able to calm me down, but there’s still potential for frightening possibilities, including political blackmail.

Is the answer more regulations for drug data? Listen to the rest of the interview below to find out, and follow Adam on Twitter, @datacurtain, to keep up to date.


[Podcast] Adam Tanner on the Dark Market in Medical Data, Part I



In our writing about HIPAA and medical data, we’ve also covered a few of the gray areas of medical privacy, including wearables, Facebook, and hospital discharge records. I thought both Cindy and I knew all the loopholes. And then I talked to writer Adam Tanner about his new book Our Bodies, Our Data: How Companies Make Billions Selling Our Medical Records.

In the first part of my interview with Tanner, I learned how pharmacies sell our prescription drug transactions to medical data brokers, who then resell the data to pharmaceutical companies and others. This is a billion-dollar market that remains unknown to the public.

How can this be legal under HIPAA, and why doesn’t it require patient consent?

It turns out that once the data record is anonymized, even with the doctor’s name still attached, it’s no longer yours! Listen in as we learn more from Tanner in this first podcast.


Ransomware: Legal Cheat Sheet for Breach Notification

You respond to a ransomware attack in many of the same ways you would to any other cyber attack. In short: have plans in place to analyze the malware, contain the damage, restore operations if need be, and notify any regulatory or enforcement authorities.

And your legal, IT, and communications teams should be working together in all your response efforts. Legal, meet IT; IT, meet legal.

So far so good. But ransomware is a different animal.

Unlike in just about any other cyber attack, the hackers announce what they’re doing: it’s called a ransom note.

The discovery process therefore happens far more quickly, not, as is so often the case with other attacks, months later.

And the hackers’ goal is to leave the data on site, encrypted of course, so there’s no immediate concern of credit card or account theft.

I suppose those are some minor pluses to ransomware. However, this raises a big legal question.

Since the data is just accessed, but not exposed to outsiders, does this mean that the victim won’t have to notify authorities and consumers as required by the few US data laws and regulations that have breach notification language?

We thought it was an interesting question as well.

And that’s why we wrote a white paper on this important (if somewhat obscure) legal topic.

The paper provides essential background on the data security laws that many US companies will have to deal with: the Health Insurance Portability and Accountability Act (HIPAA), the Gramm-Leach-Bliley Act (GLBA), state laws, and the EU’s own data laws.

It’s a great read for those directly involved in a breach response. But to spare the casual IT person all the legalese, we’ve mercifully put together this cheat sheet.

Breach Notification Rules for Ransomware

The real issue to investigate is whether unauthorized access alone triggers a notification to customers. In effect, that is what ransomware is doing – accessing your PII without your permission.

We present the following for your ransomware breach response edification (with a quick code-style summary after the list):

  1. Healthcare – HIPAA’s Breach Notification Rule requires covered entities (hospitals, insurers) to notify customers and the Department of Health and Human Services (HHS) when there’s been unauthorized access to protected health information (PHI). This is the strictest federal consumer data law when it comes to a ransomware breach response. There are, though, some exceptions, so read the paper to learn what they are!
  2. Consumer banks and loan companies – Under GLBA, the Federal Trade Commission (FTC) enforces data protection rules for consumer banking and finance through the Safeguards Rule. According to the FTC, ransomware (or any other malware attack) on your favorite bank or lender would not require a notification. They recommend that these financial companies alert customers, but it’s not an explicit obligation.
  3. Brokers, dealers, investment advisors – The Securities and Exchange Commission (SEC) has regulatory authority over these types of investment firms. Under GLBA, the SEC came up with its own rule, Regulation S-P, which does call for a breach response program. But there’s no explicit breach notification requirement in the program. In other words, it’s something you should do, but you don’t have to.
  4. Investment banks, national banks, private bankers – With these remaining investment companies, the Federal Reserve and various Treasury Department agencies jointly came up with their own rules. In this case, these companies have “an affirmative duty” to protect against unauthorized use or access, and notification is part of that duty. In the fine print it says, though, that there has to be a determination of “misuse” of data. Whether ransomware’s encryption is misuse of the data is unclear. In any case, the rules spell out what the notification must contain — a description of the incident and the data that was accessed.
  5. US state laws – Currently, 48 states have consumer breach notification laws. However, only two states, New Jersey and Connecticut, require a breach notification on access alone, thereby covering a ransomware attack. But there’s additional fine print that may allow companies to avoid reporting the breach to affected consumers in their state.
  6. EU data laws – Under the Data Protection Directive (DPD), there isn’t a breach notification requirement. Some countries such as Germany, though, have added it in their national data laws. (And ISPs and telecoms under the EU’s e-Privacy Directive already have their own breach reporting rule.) But the new EU General Data Protection Regulation, which will go into effect in 2018, does have a 72-hour rule requiring notification to local data protection authorities (DPAs) and consumers when “personal data” is accessed. However, a harm-based threshold is applied – the breach would have to “result in a risk to the rights and freedoms” of consumers. Notification for a ransomware attack would be very dependent on specific circumstances, and we’ll likely have to wait for more clarification from the regulators.
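
To keep the cheat sheet close at hand, here’s a toy Python summary of the list above. It’s our own paraphrase, not legal advice, and the sector labels and guidance strings are made up purely for quick reference:

```python
# Toy lookup summarizing the cheat sheet above -- a paraphrase for quick
# reference, not legal advice. Sector keys are our own invented labels.

NOTIFICATION_ON_ACCESS = {
    "healthcare (HIPAA)": "Yes: notify customers and HHS (some exceptions apply)",
    "consumer banks / lenders (GLBA, FTC Safeguards Rule)": "No explicit obligation; notification recommended",
    "brokers / advisors (SEC Regulation S-P)": "Response program required; no explicit notification rule",
    "investment / national banks (Fed & Treasury rules)": "Notification duty, but hinges on a finding of 'misuse'",
    "US states": "Only NJ and CT trigger on access alone (48 states have laws)",
    "EU (DPD now, GDPR from 2018)": "GDPR: 72-hour notice if the breach risks individuals' rights",
}

def notification_guidance(sector: str) -> str:
    """Return the cheat-sheet guidance for a sector, defaulting to caution."""
    return NOTIFICATION_ON_ACCESS.get(
        sector, "Unknown sector -- when in doubt, report the breach"
    )

print(notification_guidance("healthcare (HIPAA)"))
```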

That’s the cheat sheet. However, the white paper provides a lot more context, and also goes into a few of the subtleties, particularly involving HIPAA.

Our view?

Always report a ransomware breach to the appropriate agencies and law-enforcement authorities.

For IT people who want to impress their peers in the legal department, and for legal eagles who need some quick background on ransomware, this white paper covers it all. Download it today!