All posts by Jeff Petters

What is Spear Phishing?


According to the 2018 Verizon Data Breach Investigations Report, phishing and pretexting are the two favorite tactics employed in social engineering attacks: together they account for 98% of social incidents and 93% of breaches. And last year, the IRS noted a 400% surge in spear phishing against CEOs.

What is Spear Phishing?

Spear phishing is a targeted attack in which an attacker creates a fake narrative or impersonates a trusted person in order to steal credentials or information that they can then use to infiltrate your networks. It’s often an email sent to a targeted individual or group that appears to come from a trusted or known source.

Spear Phishing vs. Phishing

Spear phishing is a subset of phishing attacks. The end goals are the same: steal information to infiltrate your network and either steal data or plant malware. The tactics the two employ, however, are different.

Phishing attacks cast a wide net: phishers are throwing hunks of bread into a lake, and they don’t care what kind of fish they catch – as long as you take the bait, they can get into the network. They’re not personalized attacks: they’re typically distributed to a wide group of people at a time, using something that looks vaguely legitimate in hopes that enough people will click on their link so that they can get more information or install malware.

Spear phishing, on the other hand, targets a specific individual or group. Attackers lure their victims with details that make the message seem to come from a trusted or familiar source, weaving in as much personal information as possible to make the approach look legitimate.

spear phishing definition

Spear Phishing Examples

The Russian cyber espionage group Fancy Bear allegedly ran one of the more famous spear phishing campaigns, using spear phishing techniques to infiltrate the Democratic National Committee (DNC) and steal emails. They first obtained an updated contact list and then targeted high-level party officials, which led them to campaign chairman John Podesta’s Gmail account. They stole 50,000 emails in one day, and the rest is recent history.

Fancy Bear also allegedly used spear phishing to infiltrate the Bundestag, the German federal parliament, as well as Emmanuel Macron’s campaign in the French election.

Spear phishing is one of the more reliable social engineering methods employed by black hats – which is what makes defending against spear phishing both important and challenging.

Tips for Avoiding a Spear Phishing Attack

  • Be skeptical: If you want to avoid being scammed, you have to ask questions – both of the potential scammer and of yourself. As a general rule, don’t immediately comply with the first request you get. Ask questions: “Why do you need that?” “What are you going to do with this data?” “No, I won’t buy you a Walmart gift card.”
  • Be aware of your online presence: Spear phishers depend on a certain amount of familiarity with their target. The more information you share with the public, the more ammunition a spear phisher has to convince you to give them something.
  • Inspect the link: Visually inspect the links in your emails by hovering over them. Scammers are pretty good at masking URLs or making them look similar enough to the real thing to trick our human brains into thinking they’re OK. If the offer behind a link sounds too good to be true, the domain probably isn’t legitimate.
  • Don’t click the link: Instead of clicking a link in the email, open your browser and navigate to the destination manually. Avoiding links sent in a spear phisher’s email keeps you from being steered to a malicious website. Make a habit of going directly to the websites you trust instead of clicking links, use https as much as possible, and use bookmarks to keep track of your known good web destinations.
  • Be smart with your passwords: We all know a modern computer can easily crack a short password. You should be using passphrases that are at least 16 alphanumeric characters long: write them down, or use a password manager service. Change passwords regularly, and practice basic internet security to keep your data safe.
  • Keep your software updated: Security researchers and malware distributors are in an arms race, and we are caught in the middle. Security researchers do their best to update antivirus and security software to match the most recent known attacks and patch vulnerabilities. Malware distributors are doing their best to find the next hack, application, or vulnerability they can use to steal your data. As consumers, it’s important to stay up to date: patch vulnerabilities, and keep security settings and software current.
  • Implement a company-wide data security strategy: If 1 out of every 100 spear phishing attempts is successful, it’s more than likely that some of your data will be compromised. One compromised user can lead to lateral movement, privilege escalation, data exfiltration, and more. Implement a layered security strategy to protect against spear phishing at the enterprise level – and never underestimate the value of educating employees with security awareness training.
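The passphrase advice above can be sketched in a few lines. This is a minimal illustration using Python’s `secrets` module; the word list here is a tiny stand-in, where a real generator would draw from a large list such as the EFF diceware words:

```python
import secrets

# Tiny stand-in word list; a real passphrase generator would use a list of
# several thousand words so each word adds meaningful entropy.
WORDS = ["correct", "horse", "battery", "staple", "orbit", "granite", "velvet", "copper"]

def make_passphrase(n_words=4, sep="-"):
    # secrets.choice uses a cryptographically secure RNG, unlike random.choice
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

print(make_passphrase())  # e.g. "staple-orbit-correct-velvet"
```

Each word drawn from a 7,776-word diceware list adds roughly 12.9 bits of entropy, so even a four-word phrase comfortably beats a short “complex” password.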

tips for avoiding a spear phishing attack

There are many ways to enhance your data security strategy to defend your users from phishing and spear phishing attacks. You can configure strict SPF rules to check and validate who is sending the emails. Implement a Data Security Platform to protect and monitor your data, and leverage security analytics to alert your team of suspicious behavior.
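As a sketch of what an SPF check does behind the scenes, the snippet below evaluates a hypothetical SPF TXT record against a sender’s IP using only Python’s standard library. The record and addresses are invented for illustration, and real receivers also handle `include:`, `a`, `mx`, and softfail mechanisms:

```python
import ipaddress

# Hypothetical SPF TXT record published in DNS for a sending domain.
record = "v=spf1 ip4:192.0.2.0/24 ip4:198.51.100.17 -all"

def spf_allows(record, sender_ip):
    """Return True if sender_ip matches an ip4 mechanism in the record."""
    ip = ipaddress.ip_address(sender_ip)
    for term in record.split()[1:]:
        if term.startswith("ip4:"):
            if ip in ipaddress.ip_network(term[4:], strict=False):
                return True
    return False  # fell through to "-all": treat as a hard fail

print(spf_allows(record, "192.0.2.55"))   # True
print(spf_allows(record, "203.0.113.9"))  # False
```

The trailing `-all` is what makes the policy strict: any sender not explicitly listed fails the check outright instead of being merely “suspicious.”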

Want to learn more? Find out how Varonis can help prevent and defend against spear phishing attacks – and protect your data from being compromised or stolen.


Data Migration: Guide to Success


More than 50% of data migration projects will exceed budget and/or harm the business due to flawed strategy and execution, according to Gartner. Nevertheless – data migrations are a fact of IT life.

They can be complicated and time-consuming – and regardless of whether you are going cross-platform, cross-domain, or adding new systems, at the end of the day you’ve got to get it right.

data migration statistic

What is Data Migration?

Data migration is the process of transitioning any kind of data from one system to another. You could be getting a new storage appliance to replace an old system. You could be migrating to a cloud-based storage solution. You could be upgrading an application database and need new hardware. Any of those situations require moving data from one system to a different system.

There are several factors to consider while planning and executing a data migration:

  • Data Integrity
  • Business Impact
  • Cost
  • User Experience and Impact
  • Potential Downtime
  • Data Assessment
  • Data Quality

Types of Data Migration

  • Storage migration: Storage migrations focus on moving data from one storage device to a new or different device – on premises or in the cloud. These are the most straightforward types of data migration on the surface, but that doesn’t mean you can just copy and paste a 5TB folder to a new drive. You need to plan and execute the migration to ensure success. Keep in mind that when migrating sensitive and critical data, it’s especially important to understand what data is moving where, and who can (or should) have access to it.
  • Database migration: Database migrations are required when you need to upgrade the database engine or move the database installation or the database files to a new device. There are more steps to a database migration than a storage migration, and you need to plan a database outage to perform the migration. You need to back up the databases, detach them from the engine, migrate the files and/or update the database engine, and then restore the files to the new database in the new location.
  • Application migration: Application migrations can be some combination of the two options above. Applications can have databases, and they can have installation folders and data folders that need to be migrated. Application migrations may require additional steps per the application vendor.
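To make the integrity concern concrete, here is a minimal, hypothetical sketch of a per-file migration step in Python: copy the file, then compare source and destination SHA-256 hashes before trusting the move. Real migration tools layer retries, permission mapping, and logging on top of this:

```python
import hashlib
import os
import shutil
import tempfile

def sha256(path):
    """Hash a file in 1 MB chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def migrate_file(src, dst):
    """Copy src to dst and verify the destination matches the source."""
    before = sha256(src)
    shutil.copy2(src, dst)  # copy2 preserves timestamps and mode bits
    if sha256(dst) != before:
        raise IOError("integrity check failed for " + dst)
    return before

# Demo against throwaway temp files
src_dir, dst_dir = tempfile.mkdtemp(), tempfile.mkdtemp()
src = os.path.join(src_dir, "data.bin")
with open(src, "wb") as f:
    f.write(os.urandom(4096))
digest = migrate_file(src, os.path.join(dst_dir, "data.bin"))
print("migrated OK:", digest[:12])
```

Hashing both sides is cheap insurance: a silent truncation or bit flip during the copy surfaces immediately instead of months later.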

Tips to Make Your Data Migration Seamless

It’s important to protect your data during a data migration: moving sensitive and critical data can be a delicate task – it’s important to make sure your data migration is seamless.

  • Create and follow a data migration plan: Determine what data needs to be moved, how data should be moved, where it’s going, and who should access it. Set up a data migration plan that outlines each step, considering who will be affected, what the downtime will be, potential technical or compatibility issues, how to maintain data integrity, and how to protect that data during the migration.
  • Fully understand the data you’re migrating: Take a good hard look at what you are migrating. Is there stale data that you can send to the great bitbucket in the sky? Is there regulated data that requires security controls and specific access management? What data should go where? Who should be able to access what?
  • Extract, transform and deduplicate data before moving: It’s a good idea to do a full data clean-up before migration. Once the data is migrated, it’s probably going to be in that state until the next migration. Make sure you’re migrating the right data while preserving data integrity.
  • Implement data migration policies: Establish policies to make sure data is going to the right place and ensure that data is protected once migrated. You can automate these policies to make the destination data even more secure than the source – and even set up rules to re-permission the data during the migration.
  • Test and validate migrated data: Make sure everything’s where it should be, create automated retention policies, clean up stale data, and double-check permissions. Back up your old system so that you’ll be able to find any missing files offline, if necessary.
  • Audit and document the entire process: Your compliance team will appreciate it.
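The “deduplicate before moving” step above can be illustrated with a content-hash pass. This toy Python example stands in for real files with in-memory records; hashing file contents rather than names is what catches renamed copies:

```python
import hashlib

# Toy records standing in for files; in practice you'd hash file contents from disk.
records = [
    {"name": "report.docx",      "body": b"Q3 numbers"},
    {"name": "report_copy.docx", "body": b"Q3 numbers"},   # same content, different name
    {"name": "notes.txt",        "body": b"meeting notes"},
]

def dedupe(items):
    """Keep the first record seen for each unique content hash."""
    seen, unique = set(), []
    for r in items:
        digest = hashlib.sha256(r["body"]).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(r)
    return unique

print([r["name"] for r in dedupe(records)])  # ['report.docx', 'notes.txt']
```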

How to Avoid Data Migration Errors

how to avoid data migration errors

  1. Don’t migrate bad habits: Clean up broken inheritance before you migrate the data. Migrating files and folders with broken inheritance will create a worse mess than you already have.
  2. Automate and simplify: So many bad things can happen when you try to use a workstation session to move large amounts of data. The session could disconnect, the computer could go offline, or the computer could blue screen. With the Data Transport Engine, you can automatically move data from one storage server to another, saving money and reducing overhead, all while maintaining or updating file access permissions.
  3. Cover all your bases: Do you have a backup plan to the contingency plan? You probably should. If you were moving $1 million in cash how careful would you be? Now think of your data in dollars. How much is that data worth? It could be much more than $1 million.

Data Migration vs. Data Integration

Data migrations and data integrations are completely different tasks. Make sure you’re using the right tool for the job.

Data integration takes two or more data repositories and merges them into one big repository. You see this task in Big Data projects, where you want large data stores available for many types of analytics tasks. The data might not all be the same kind of data, but it all lives in the same repository.

Data migrations, by contrast, simply move data from one data store to another.

How to Automate and Simplify Data Migrations

Data migrations can be huge (and difficult) projects – and when migrating sensitive and critical data, it’s more important than ever to make sure you’ve got a plan in place to migrate your data as securely as possible. You can take the guesswork out of it, while lowering risk, by leveraging the Data Transport Engine to move large amounts of data from one storage system to another, cross-platform, or to SharePoint. You can even map permissions from one system to the other, even if you are moving from NTFS to NFS.

Want to see how it works? Get in touch to see how Varonis helps automate and simplify data migrations.

Insider Threats: A CISO’s Guide


According to the recent Verizon DBIR, insiders were complicit in 28% of data breaches in 2017. Broken down by vertical, insiders were responsible for 54% of data breaches in Healthcare and 34% in Public Administration. Hacking (48%) and malware (30%) were the top two tactics used to steal data, while human error (17%) and privilege misuse (12%) made the cut as well.

insider threat statistic

What does it all mean? Insiders have capabilities and privileges that can be abused by either themselves or bad actors to steal important data – making a CISO’s job to identify and build a defense against all of those attack vectors even more complicated.

What is an Insider Threat?

An insider threat is a security incident that originates within the targeted organization. This doesn’t mean that the actor must be a current employee or officer in the organization. They could be a consultant, former employee, business partner or board member.

Anyone who has insider knowledge and/or access to the organization’s confidential data, IT, or network resources should be considered a potential insider threat.

Types of Insider Threats

So who are the possible actors in an insider threat?

First, we have the Turncloak: This is an insider who is maliciously stealing data. In most cases, it’s an employee or contractor – someone who is supposed to be on the network and has legitimate credentials, but is abusing their access for fun or profit. We’ve seen all sorts of motives that drive this type of behavior: some as sinister as selling secrets to foreign governments, others as simple as taking a few documents over to a competitor upon resignation.

Next, we have the Pawn: This is just a normal employee – a do-gooder who makes a mistake that is exploited by a bad guy: whether it’s a lost laptop or mistakenly emailing a sensitive document to the wrong person.

Finally, we have the Imposter: Whereas the Turncloak is a legitimate insider gone rogue, the Imposter is really an outsider who has acquired an insider’s credentials. They’re on your network posing as a legitimate employee. Their goal is to find the biggest treasure trove of information their “host” has access to and exfiltrate it without being noticed.

Common Behavioral Indicators of an Insider Threat

How do you identify an insider threat? There are common behaviors – digital and in person – that suggest an insider threat. These indicators are important for CISOs, security officers, and their teams to monitor, track, and analyze in order to identify potential insider threats.

behavioral indicators of an insider threat

Digital Warning Signs 

  • Downloading or accessing substantial amounts of data
  • Accessing sensitive data not associated with their job function
  • Accessing data that is outside of their behavioral profile
  • Multiple requests for access to resources not associated with their job function
  • Using unauthorized storage devices (e.g., USB drives or floppy disks)
  • Network crawling and searches for sensitive data
  • Data hoarding, copying files from sensitive folders
  • Emailing sensitive data outside the organization

Human Warning Signs 

  • Attempts to bypass security
  • Frequently in the office during off hours
  • Displays disgruntled behavior toward co-workers
  • Violation of corporate policies
  • Discussions of resigning or new opportunities

While the human behavioral warnings can be an indication of potential issues, having digital forensics and analytics is one of the most powerful ways to protect against insider threats. User Behavior Analytics (UBA) and security analytics help detect potential insider threats, analyzing and alerting when a user behaves suspiciously or outside of their typical behavior.
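As a toy illustration of the idea behind UBA, the sketch below flags a user whose daily download count is far above their own historical baseline. The numbers and the three-standard-deviations threshold are invented for this example; real UBA products model many signals at once:

```python
import statistics

# Hypothetical files-downloaded-per-day history for one user.
baseline = [12, 9, 15, 11, 10, 14, 13]

def is_anomalous(history, value, k=3):
    """Flag values more than k standard deviations above the historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return value > mean + k * stdev

print(is_anomalous(baseline, 13))   # False: within normal range
print(is_anomalous(baseline, 220))  # True: worth an alert
```

The key point is that the threshold is relative to each user’s own behavior, so an analyst downloading 200 files a day trips no alarm while an HR clerk doing the same does.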

Fighting Insider Threats

A data breach of 10 million records costs an organization around $3 million – and as the old adage says, “an ounce of prevention is worth a pound of cure”.

Because insiders are already inside, you can’t rely on traditional perimeter security measures to protect your company. Furthermore, since it’s an insider, who is primarily responsible for dealing with the situation? Is it IT or HR? Is it a legal issue? Or is it all three, plus the CISO’s team? Creating and socializing a policy to act on potential insider threats needs to come from the top of the organization.

The key to accounting for and remediating insider threats is to have the right approach – and the right solutions in place to detect and protect against them.

Steps for an Insider Threat Defense Plan:

  1. Monitor files, emails, and activity on your core data sources
  2. Identify and discover where your sensitive files live
  3. Determine who has access to that data and who should have access to that data
  4. Implement and maintain a least privilege model through your infrastructure
    1. Eliminate global access groups
    2. Put data owners in charge of managing permissions for their data and expire temporary access quickly
  5. Apply security analytics to alert on abnormal behaviors including:
    1. Attempts to access sensitive data that isn’t part of normal job function
    2. Attempts to gain access permissions to sensitive data outside of normal processes
    3. Increased file activity in sensitive folders
    4. Attempts to change system logs or delete large volumes of data
    5. Large amounts of data emailed out of the company, outside of normal job function
  6. Socialize and train your employees to adopt a data security mindset

It’s equally important to have a response plan in place in order to respond to a potential data breach:

  1. Identify threat and take action
    1. Disable and/or log out the user when suspicious activity or behavior is detected
    2. Determine what users and files have been affected
  2. Verify accuracy (and severity) of the threat and alert appropriate teams (Legal, HR, IT, CISO)
  3. Remediate
    1. Restore deleted data if necessary
    2. Remove any additional access rights used by the insider
    3. Scan and remove any malware used during the attack
    4. Re-enable any circumvented security measures
  4. Investigate and perform forensics on the security incident
  5. Alert Compliance and Regulatory Agencies as needed

The secret to defending against insider threats is to monitor your data, gather information, and trigger alerts on abnormal behavior.

The Varonis Data Security Platform identifies who has access to your data, classifies your sensitive data, alerts your teams to potential threats, and helps maintain a least privilege model. With the proper resources, CISOs and CIOs can gain visibility into their highest-risk users and gather the intelligence needed to avoid insider threats.

NIST 800-53: Definition and Tips for Compliance


NIST sets the security standards for agencies and contractors – and given the evolving threat landscape, NIST is influencing data security in the private sector as well. It’s structured as a set of security guidelines, designed to prevent the major security issues that make headlines nearly every day.

NIST SP 800-53 Defined

The National Institute of Standards and Technology – NIST for short – is a non-regulatory agency of the U.S. Commerce Department, tasked with researching and establishing standards across all federal agencies. NIST SP 800-53 defines the standards and guidelines for federal agencies to architect and manage their information security systems. It was established to provide guidance for the protection of agencies’ and citizens’ private data.

nist 800 53 definition

Federal agencies must follow these standards, and the private sector should follow the same guidelines.

NIST SP 800-53 breaks the guidelines up into 3 minimum security control baselines spread across 18 different control families.

Minimum Security Controls:

  • High-Impact Baseline
  • Moderate-Impact Baseline
  • Low-Impact Baseline

Control Families:

  • Access Control
  • Awareness and Training
  • Audit and Accountability
  • Security Assessment and Authorization
  • Configuration Management
  • Contingency Planning
  • Identification and Authentication
  • Incident Response
  • Maintenance
  • Media Protection
  • Physical and Environmental Protection
  • Planning
  • Personnel Security
  • Risk Assessment
  • System and Services Acquisition
  • System and Communications Protection
  • System and Information Integrity
  • Program Management

What’s the Purpose of NIST SP 800-53?

NIST SP 800-53 sets basic standards for information security policies for federal agencies – it was created to heighten the security (and security policy) of information systems used in the federal government.

The overall idea is that federal organizations first determine the security category of their information system based on FIPS Publication 199, Standards for Security Categorization of Federal Information and Information Systems — rating the potential impact of a breach on each of the three security objectives (confidentiality, integrity, and availability) as low, moderate, or high.

NIST SP 800-53 then helps explain which standards apply to each goal – and provides guidance on how to implement them. NIST SP 800-53 does not define any required security applications or software packages, instead leaving those decisions up to the individual agency.

NIST has iterated on the standards since their original draft to keep up with the changing world of information security, and SP 800-53 is now in its 4th revision, dated January 22, 2015. The 5th revision is currently open for comments – stay tuned for updates.

Benefits of NIST SP 800-53

NIST SP 800-53 is an excellent roadmap for covering all the basics of a good data security plan. If you establish policies, procedures, and applications that cover all 18 of the control families, you will be in excellent shape.

Once you have the baseline achieved, you can further improve and secure your system by adding additional software, more stringent requirements, and enhanced monitoring.

Data security, like NIST SP 800-53, is evolving rapidly. A data security team needs to constantly look for more ways to reduce the risk of a data breach and to protect their data from insider threats and malware. The Varonis Data Security Platform maps to many of the basic requirements for NIST, and reduces your overall risk profile throughout the implementation process and into the future.

NIST 800-53 Compliance Best Practices

nist 800 53 compliance best practices

Implement these basic data security principles to work towards NIST 800-53 compliance:

  • Discover and Classify Sensitive Data
    Locate and secure all sensitive data
    Classify data based on business policy
  • Map Data and Permissions
    Identify users, groups, folder and file permissions
    Determine who has access to what data
  • Manage Access Control
    Identify and deactivate stale users
    Manage user and group memberships
    Remove Global Access Groups
    Implement a least privilege model
  • Monitor Data, File Activity, and User Behavior
    Audit and report on file and event activity
    Monitor for insider threats, malware, misconfigurations and security breaches
    Detect security vulnerabilities and remediate
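The “map data and permissions” and “determine who has access” steps above boil down to resolving group-based ACLs into effective per-user access. A toy Python sketch, with groups, users, and files invented for illustration:

```python
# Invented example data: group memberships and per-resource ACLs.
groups = {
    "Finance":  {"alice", "bob"},
    "Everyone": {"alice", "bob", "carol"},
}
acl = {
    "Q3_forecast.xlsx": {"Finance"},
    "lunch_menu.pdf":   {"Everyone"},
}

def who_can_access(resource):
    """Union the members of every group granted on the resource."""
    users = set()
    for group in acl.get(resource, ()):
        users |= groups.get(group, set())
    return users

print(sorted(who_can_access("Q3_forecast.xlsx")))  # ['alice', 'bob']
```

Once access is resolved this way, spotting a sensitive file granted to an “Everyone”-style group is a simple lookup – which is exactly the global access group problem the best practices call out.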

Compliance with NIST 800-53 is a perfect starting point for any data security strategy. The new GDPR regulations coming in May 2018 shine a spotlight on data security compliance guidelines in Europe, and changes are already coming to state legislation in the US that will layer additional requirements on top of NIST 800-53. As new legislation rolls out, achieving and maintaining compliance with the current baseline will make it much easier to meet updated requirements.

NIST sets the security standards for internal agencies – building blocks for common-sense security standards. Want to learn more? See how Varonis maps to NIST 800-53 and can help meet NIST standards.

5 FSMO Roles in Active Directory


Active Directory (AD) has been the de facto standard for enterprise domain authentication services ever since it first appeared with Windows 2000 Server (released to manufacturing in late 1999). There have been several enhancements and updates since then to make it the stable and secure authentication system in use today.

In its infancy, AD had some rather glaring flaws. If you had multiple Domain Controllers (DCs) in your domain, they would fight over which DC got to make changes – sometimes your changes would stick, and sometimes they wouldn’t. To keep the DCs from fighting all the time, Microsoft implemented “last writer wins” – which can be a good thing, or it can mean the last mistake breaks all the permissions.

Then Microsoft took a left turn at Albuquerque and introduced a “Single Master Model” for AD: one DC that could make changes to the domain, while the rest simply fulfilled authentication requests. The catch: when that single master DC goes down, no changes can be made to the domain until it’s back up.

To resolve that fundamental flaw, Microsoft separated the responsibilities of a DC into multiple roles. Admins distribute these roles across several DCs, and if one of those DCs goes out to lunch, another will take over any missing roles! This means domain services have intelligent clustering with built-in redundancy and resilience.

Microsoft calls this paradigm Flexible Single Master Operation (FSMO).

FSMO Roles: What are They?

Microsoft split the responsibilities of a DC into 5 separate roles that together make up a full AD system.

fsmo roles

The 5 FSMO roles are:

  • Schema Master – one per forest
  • Domain Naming Master – one per forest
  • Relative ID (RID) Master – one per domain
  • Primary Domain Controller (PDC) Emulator – one per domain
  • Infrastructure Master – one per domain

FSMO Roles: What do They do?

Schema Master: The Schema Master role manages the read-write copy of your Active Directory schema. The AD Schema defines all the attributes – things like employee ID, phone number, email address, and login name – that you can apply to an object in your AD database.

Domain Naming Master: The Domain Naming Master makes sure that you don’t create a second domain in the same forest with the same name as another. It is the master of your domain names. Creating new domains isn’t something that happens often, so of all the roles, this one is most likely to live on the same DC with another role.

RID Master: The Relative ID Master hands each DC a block of relative IDs it can use to build Security Identifiers (SIDs) for newly created objects. Each object in AD has a SID, and the last few digits of the SID are the relative portion. To keep multiple objects from having the same SID, the RID Master ensures each DC assigns from a unique block.
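To make the SID structure concrete, the snippet below splits an illustrative SID string into its domain identifier and trailing RID. The SID value is a made-up example in the standard `S-1-5-21-…` format:

```python
# Illustrative SID: "S-1-5-21-<domain sub-authorities>-<RID>"
sid = "S-1-5-21-3623811015-3361044348-30300820-1013"

def split_sid(sid):
    """Separate the domain identifier from the relative identifier (RID)."""
    parts = sid.split("-")
    domain_identifier = "-".join(parts[:-1])
    rid = int(parts[-1])
    return domain_identifier, rid

domain, rid = split_sid(sid)
print(domain)  # S-1-5-21-3623811015-3361044348-30300820
print(rid)     # 1013
```

Because every DC mints RIDs only from the block the RID Master handed it, two DCs can create objects simultaneously without ever producing the same SID.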

PDC Emulator: The DC with the Primary Domain Controller Emulator role is the authoritative DC in the domain. The PDC Emulator responds to authentication requests, changes passwords, and manages Group Policy Objects. And the PDC Emulator tells everyone else what time it is! It’s good to be the PDC.

Infrastructure Master: The Infrastructure Master role translates Globally Unique Identifiers (GUIDs), SIDs, and Distinguished Names (DNs) between domains. If you have multiple domains in your forest, the Infrastructure Master is the Babel fish that lives between them. If the Infrastructure Master doesn’t do its job correctly, you will see SIDs in place of resolved names in your Access Control Lists (ACLs).

FSMO gives you confidence that your domain will be able to perform the primary function of authenticating users and permissions without interruption (with standard caveats, like the network staying up).

It’s important to monitor AD in order to prevent brute force attacks or privilege elevation attempts – two common attack vectors for data theft. Want to see how to do it? We can show you. Get a demo to see how Varonis protects AD from both insider and external threats.


HIPAA Security Rule Explained


The HIPAA Journal estimates that a large data breach (more than 50,000 records) can cost an organization around $6 million – and that’s before the Office for Civil Rights (OCR) drops its own hammer. Over the last few years, we’ve seen more reports of breaches, an increase in HIPAA investigations, and higher fines across the board – all stemming from violations of the HIPAA Security Rule.

hipaa security rule statistic

What is The HIPAA Security Rule?

The HIPAA Security Rule sets the minimum standards required for Covered Entities (CEs) to manage electronic PHI (ePHI). To be considered HIPAA compliant, CEs need to address 3 key safeguard areas: administrative, physical, and technical.

How Does the HIPAA Security Rule Protect Your Data?

hipaa security rule safeguards

Administrative Safeguards 

HIPAA rules require CEs to adhere to certain processes to ensure and verify their compliance with the HIPAA Security Rule:

  • Security Management Process: CEs must establish policies and procedures to prevent, detect, contain and correct security violations. Part of this process is to follow the procedures in the Risk Management Framework to assess overall risk in your current processes or when you implement new policies.
  • Assigned Security Responsibility: One designated security official must be responsible for the development and implementation of the HIPAA Security Rule.
  • Workforce Security: CEs must identify which employees require access to ePHI and make efforts to provide control over that access. To achieve this, implement a least privilege model and automatically enforce and manage permissions.
  • Information Access Management: Restrict access to ePHI via permissions after you have identified who should have access in the step above.
  • Security Awareness and Training: In order to enforce these rules and security policies, organizations need to train their users on what the rules are and how to abide by them.
  • Security Incident Procedures: This standard provides guidance on how to create a policy to address data breaches: it’s good practice regardless – report breaches and security violations, and set up alerts and security analytics so that you can prevent breaches in the first place.
  • Contingency Plan: This is the “what happens next” standard. Create and follow a data backup plan, disaster recovery plan, and have an emergency mode operation plan in place, just in case things go sideways and you get breached. There’s also guidance in this standard for testing and revising these plans, as well as managing critical applications that store, maintain or transmit ePHI.
  • Evaluation: Establish a process to review and maintain the policies and procedures to stay up to date and current with the HIPAA Security Rule.
  • Business Associate Contracts and Other Arrangements: While it’s fine to use other businesses to implement your overall HIPAA security strategy, as with any 3rd-party contractor, you must get assurances from them that they understand HIPAA and won’t leak your ePHI.

Physical Safeguards 

This section of the HIPAA Security Rule sets standards for physical security: the “lock your doors” and “batten down the hatches” kind of guidance – along with what to do in case of natural disasters, naturally.

  • Facility Access Controls: Limit and audit physical access to the computers that store and process ePHI. Pro tip – put a lock on the server room door.
  • Workstation Use: Manage and secure computers (desktops, laptops, and tablets) that are used to access ePHI. Every computer with access to a CE’s ePHI must adhere to this policy, including systems that are offsite (and offline).
  • Workstation Security: Implement physical safeguards for all computers that access ePHI: restrict access to computers that access ePHI, install remote wipe safeguards on laptops that grow legs.
  • Device and Media Controls: Once computers are covered, you still need safeguards on all the rest: devices and media like USB drives, tape backups, or removable storage. Establish a policy to inventory, allow the use of, and reuse or dispose of these devices as needed.

Technical Safeguards

Technical safeguards are the technology and the procedures that CEs use to protect ePHI. The HIPAA Security Rule does not define what technology to use, but demands that CEs adhere to the standard and adequately protect ePHI from data breaches.

  • Access Control: Authenticate users as necessary to access ePHI, establish and maintain a least privilege model, and have appropriate procedures in place to audit access control lists (ACL) on a regular schedule.
  • Audit Controls: Audit your ePHI to record and analyze activity in case of a data breach. CEs need to be able to show the OCR exactly how a data breach occurred, with a complete audit trail and reporting.
  • Integrity: To be HIPAA compliant, CEs need to be able to prove that the ePHI they manage is protected from threats both inside and out, intentional or not. Whether the new intern deletes a record accidentally or a nefarious hacker deletes it intentionally, you should be able to recover and restore that record.
  • Person or Entity Authentication: CEs must provide assurances that the person accessing ePHI is, in fact, who they say they are. These assurances can be a password, two-factor authentication, or retinal scan – whatever works as long as you have something implemented.
  • Transmission Security: When sending data to other business partners, you need to be able to prove that only authorized individuals accessed the ePHI. You can use encrypted email with a private key, HTTPS file transfer, or a VPN – as long as only authorized individuals can access the ePHI, HIPAA doesn't care how you set it up.
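The access control and audit standards above lend themselves to simple automation. Here's a minimal Python sketch of a least privilege audit – the folder names, user lists, and ACL format are hypothetical stand-ins for what you'd actually pull from your directory service:

```python
# Current ACLs: ePHI folder -> users granted access (hypothetical data).
acls = {
    "/ephi/records": {"dr_smith", "nurse_jones", "intern_lee"},
    "/ephi/billing": {"billing_admin", "intern_lee"},
}

# Users who *should* have access under the least privilege model.
approved = {
    "/ephi/records": {"dr_smith", "nurse_jones"},
    "/ephi/billing": {"billing_admin"},
}

def audit_acls(acls, approved):
    """Return excess grants that violate least privilege, per folder."""
    findings = {}
    for folder, users in acls.items():
        excess = users - approved.get(folder, set())
        if excess:
            findings[folder] = sorted(excess)
    return findings

print(audit_acls(acls, approved))
# intern_lee shows up in both folders and should be reviewed
```

Running a check like this on a regular schedule – and logging the findings – is exactly the kind of repeatable procedure the Access Control and Audit Controls standards ask for.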

Ensuring Compliance: HIPAA Security Rule

HIPAA doesn’t spell out what specific software to install or how to implement the requirements in the HIPAA Security Rule.
Varonis provides a free 30-day risk assessment to help you get started: we'll outline problem areas, potential violations, and a plan to fix them. We have a proven track record with thousands of customers, many of whom deal with ePHI and HIPAA regulations on a daily basis.

Check out our US Data Protection Compliance and Guidance – or get in touch to discuss how we can help you reach HIPAA compliance and improve your current compliance strategies.


HIPAA Privacy Rule Explained

hipaa privacy rule hero

It’s an unfortunate (but inevitable) fact of life: Laptops get stolen, and the consequences can be devastating. If those laptops have electronic protected health information (ePHI) on them, they fall under HIPAA regulations and the theft must be reported.

Even if the thief doesn’t look at the data, the company can’t prove it: everyone should take precautions to protect themselves against not just fallout from lost data, but from the potential fines that can accrue: install remote wipe capabilities, encrypt your drives, and don’t store ePHI on your local drive.

Hopefully, the next time a laptop grows legs, you will be better prepared to mitigate the damage.

What is The HIPAA Privacy Rule?

hipaa privacy rule explained

The HIPAA Privacy Rule explains how to use, manage, and protect personal health information (PHI or ePHI). Congress wrote the HIPAA Privacy Rule to protect patient data, and those rules apply to covered entities: the people and organizations that transmit, store, manage, and access personal health information.

What Information Does the Privacy Rule Protect?

The HIPAA Privacy Rule defines PHI as "individually identifiable health information" stored or transmitted by a covered entity or its business associates, in any form or media (electronic, paper, or oral).

The law further defines "individually identifiable health information" as an individual's past, present, or future health conditions, the details of the health care provided to an individual, and the payments or arrangement of payments made by an individual.
In the simplest terms: any and all data having to do with all doctor visits, ever, including (but not limited to):

  • Names
  • Birth, death or treatment dates, and any other dates relating to a patient’s illness or care
  • Contact information: telephone numbers, addresses, and more
  • Social Security numbers
  • Medical records numbers
  • Photographs
  • Finger and voice prints
  • Any other unique identifying number or account number
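To make the list concrete, here's a minimal Python sketch of scanning text for a few of these identifiers. The regex patterns are simplified illustrations – real classification engines use far richer rules and context:

```python
import re

# Illustrative patterns for a handful of common PHI identifiers.
# Real-world classifiers handle many more formats and validate context.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
}

def find_phi(text):
    """Return a dict of identifier type -> matches found in text."""
    hits = {}
    for label, pattern in PHI_PATTERNS.items():
        found = pattern.findall(text)
        if found:
            hits[label] = found
    return hits

sample = "Patient SSN 123-45-6789, callback 555-867-5309, MRN: 00123456"
print(find_phi(sample))
```

A scan like this over file shares is the starting point for knowing where PHI lives – which is the first thing any compliance effort needs.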

To Whom Does the HIPAA Privacy Rule Apply?

The HIPAA Privacy Rule protects individual PHI by governing the practices of the covered entities.

Covered entities are the people and organizations that hold and process PHI data for their customers – the ones required to report HIPAA violations and responsible for paying fines imposed by the Office for Civil Rights if and when a HIPAA violation occurs.

These organizations are considered Covered Entities under HIPAA:

Health Care Providers

  • Doctors
  • Clinics
  • Psychologists
  • Dentists
  • Chiropractors
  • Nursing homes
  • Pharmacies

Health Plans

  • Health insurance companies
  • HMO’s
  • Company health plans
  • Government provided health care plans

Health Care Clearinghouse

  • These entities process healthcare data from another entity into a standard form.

What Happens if a HIPAA Data Breach Occurs?

According to the HIPAA breach notification rules, a covered entity must report a data breach to each affected individual within 60 days of discovery.

If the breach affects over 500 individuals, the covered entity must also report the breach to the Department of Health and Human Services within 60 days, which in turn opens an investigation with the Office for Civil Rights. On top of that, if the breach falls within that over-500 club, the covered entity is required by HIPAA rules to issue a press release to media outlets local to the affected individuals.

Not only is a PHI data breach potentially bad for the bottom line, but it's also government-mandated bad press.

HIPAA compliance isn’t just the law, it’s good business practice. Protecting an individual’s personal data and preventing data breaches affects both the bottom line (no fines) and company image (no bad press).

The Varonis Data Security Platform provides the foundation for a HIPAA compliant data security strategy – sign up for a free email course on HIPAA compliance, or get started with a demo to see the state of your HIPAA security.

HIPAA Compliance: Guide and Checklist

running track

There are currently 14,930,463 individual records in the United States with an open HIPAA data breach investigation. That's nearly 15 million humans who have had their Protected Health Information (PHI) exposed by hacking, IT incident, theft, loss, or unauthorized access/disclosure.

hipaa compliant visualization

That’s just the unresolved case list. If we add the numbers from the resolved breach notifications, we end up with 162,599,642 records – over half of the current US population.

And that’s why we need HIPAA in the first place.

What is HIPAA Compliance?

The US Congress passed the Health Insurance Portability and Accountability Act (HIPAA) in 1996 to set standards for how US citizens’ PHI records are stored, secured, and used. Nowadays – along with the Health Information Technology for Economic and Clinical Health Act (HITECH) – this legislation governs how anyone with access to your PHI needs to manage and protect that data.

HIPAA doesn’t explicitly define PHI other than information that can “reasonably” be linked to an individual – it could include anything from your birth date to social security number to medical ID or more.

What is the HIPAA Enforcement Rule?

The HIPAA Enforcement Rule explains how companies need to handle HIPAA violations – and the process isn’t just a slap on the wrist.

Individuals or companies report HIPAA violations to the Office for Civil Rights (OCR), which is responsible for investigating and reviewing those violations. If the OCR finds the violators negligent, they must fix what caused the breach in the first place and handle the affected individuals' data to the satisfaction of the OCR. If the OCR does not find that response satisfactory, or finds the data breach egregious, it will fine the violators based on the number of records involved.

In 2018 alone there have already been two different settlements costing the violators $3.5 million and $100,000, the latter of which came after the business had already shut down due to HIPAA violations. You can read all about these settlements and more – it’s public record!

What is The HIPAA Privacy Rule?

The HIPAA Privacy Rule is the nuts and bolts of the legislation: it explains how and when healthcare professionals, lawyers, or anyone else who accesses your PHI can or cannot use that data.

For example: if I want my PHI to be available to my girlfriend, the law requires a signed HIPAA PHI release form before the doctor's office can share my information with her. Those are the kinds of scenarios covered in the Privacy Rule.

What is The HIPAA Security Rule?

The HIPAA Security Rule sets the standards for how Covered Entities (the people and organizations governed by HIPAA) must protect PHI data. These standards include things like 'lock the door to the server room' and 'only allow read access to PHI data to people who need to see it.'

That makes it paramount to protect personal information that qualifies as PHI – whether online, on paper, or spoken aloud.

What is The HIPAA Breach Notification Rule?

The HIPAA Breach Notification Rule says you have 60 days to notify an individual of improper access to their PHI. It’s important to remember that even if ePHI is encrypted by a ransomware attack, it’s considered a breach – and therefore falls under the HIPAA breach notification rule.

If there are more than 500 PHI records impacted, you must notify the Department of Health and Human Services (which in turn gets the OCR involved) – and you’re required to issue a press release about the breach.

If you are in the unfortunate (but not uncommon) situation of reporting a HIPAA violation, here is the information you must initially provide OCR:

  • What PHI was exposed, and how was it made available? What personal identifiers were involved in the breach?
  • Who was the unauthorized person who saw or had access to the data?
  • Did anyone actually view or acquire the ePHI?
  • What have you done to fix the issue or mitigate the damage?

There is good news: if you don't break that 500-record limit in a single event, you can report all of your smaller violations to HHS in a single batch once per year, per the Breach Notification Rule.
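The notification rules reduce to a small decision tree. Here's a hedged Python sketch of that logic – the function name and return format are illustrative, and the thresholds follow the rule as described above:

```python
# Sketch of HIPAA Breach Notification Rule duties by breach size.
# The duty strings are illustrative summaries, not legal language.
def notification_duties(records_affected):
    """Return the notification steps triggered by a breach."""
    duties = ["notify each affected individual within 60 days"]
    if records_affected > 500:
        duties.append("notify HHS within 60 days")
        duties.append("issue a press release to local media")
    else:
        duties.append("include in the annual batch report to HHS")
    return duties

print(notification_duties(1200))  # big breach: HHS + press release
print(notification_duties(42))    # small breach: annual batch report
```

Encoding the rule this way in an incident-response runbook keeps the 60-day clock and the 500-record threshold from being an afterthought during a crisis.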

HIPAA Standard Transactions

A HIPAA Standard Transaction is an exchange of PHI data between two entities. For example, your doctor sends your prescription to the pharmacy, which in turn requests coverage verification from the insurance company.
HIPAA governs all of these PHI transactions, including:

  • Claims and encounter information
  • Payment and remittance advice
  • Claims status
  • Eligibility status
  • Enrollment and disenrollment
  • Referrals and authorizations
  • Coordination of benefits
  • Premium payment

How to Become HIPAA Compliant

Becoming HIPAA compliant isn’t all that different from any of your other basic 21st-century data security plans. In fact, setting up a solid data security plan will help maintain HIPAA compliance.

Here is a HIPAA Compliance Checklist to get you started:

hipaa compliance checklist with icons

  1. Map your data and discover where your HIPAA-protected files live on your network (including cloud storage).
  2. Determine who has access to HIPAA data, who should have access to HIPAA data, and implement a least privilege model.
  3. Monitor all file access to your data.
  4. Set up alerts to notify you if someone accesses HIPAA data, or if someone creates new HIPAA data in a non-compliant repository. Use data security analytics to differentiate between normal behaviors and potential HIPAA violations.
  5. Protect the perimeter with firewalls, endpoint security, locks on server rooms, two-factor authentication, strong passwords, and session timeouts.
  6. Monitor activity on the perimeter and add threat models to your data security analytics.
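Step 4 of the checklist – alerting when HIPAA data turns up in a non-compliant repository – can be sketched in a few lines of Python. The approved locations and path layout here are hypothetical examples:

```python
# Hypothetical list of repositories approved to hold PHI.
APPROVED_REPOS = {"/secure/ephi", "/secure/billing"}

def check_file(path, contains_phi):
    """Return an alert string if a PHI file lives outside approved repos."""
    if not contains_phi:
        return None
    # Treat the first two path components as the repository root.
    repo = "/".join(path.split("/")[:3])
    if repo not in APPROVED_REPOS:
        return f"ALERT: PHI found in non-compliant location: {path}"
    return None

print(check_file("/secure/ephi/patient1.txt", contains_phi=True))
print(check_file("/public/share/notes.txt", contains_phi=True))
```

In practice the `contains_phi` flag would come from a classification scan (like the identifier matching shown earlier), and the alert would feed your incident workflow instead of stdout.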

HIPAA compliance isn’t just the law – it will protect your customer’s data and ensure that your business prospers in the age of digital medical records.

Varonis has been working with our customers on HIPAA compliance since before the HITECH Act in 2009. The Varonis Data Security Platform provides the foundation for a HIPAA compliant data security strategy.

Get started with a free email course on HIPAA compliance or sign up for a demo to talk directly with our data security experts.

Risk Management Framework (RMF): An Overview

risk framework management

The Risk Management Framework (RMF) is a set of criteria that dictate how United States government IT systems must be architected, secured, and monitored. Originally developed by the Department of Defense (DoD), the RMF was adopted by the rest of the US federal information systems in 2010.

Today, the RMF is maintained by the National Institute of Standards and Technology (NIST), and provides a solid foundation for any data security strategy.

What is the Risk Management Framework (RMF)?

The elegantly titled "NIST SP 800-37 Rev. 1" defines the RMF as a 6-step process to architect and engineer a data security process for new IT systems, and lays out the best practices and procedures each federal agency must follow when enabling a new system. In addition to the primary document SP 800-37, the RMF uses the supplemental documents SP 800-30, SP 800-53, SP 800-53A, and SP 800-137.

Risk Management Framework (RMF) Steps

We’ve visualized the RMF 6-step process below. Browse through the graphic and take a look at the steps in further detail beneath.

risk management framework steps

Step 1: Categorize Information System 

The Information System Owner assigns a security role to the new IT system based on mission and business objectives. The security role must be consistent with the organization’s risk management strategy.

Step 2: Select Security Controls 

The security controls for the project are selected and approved by leadership from the common controls, and supplemented by hybrid or system-specific controls. Security controls are the hardware, software, and technical processes required to fulfill the minimum assurance requirements as stated in the risk assessment. Additionally, the agency must develop plans for continuous monitoring of the new system during this step.

Step 3: Implement Security Controls 

Simply put: put step 2 into action. By the end of this step, the agency should have documented and proven that it has achieved the minimum assurance requirements and demonstrated the correct use of information system and security engineering methodologies.

Step 4: Assess Security Controls 

An independent assessor reviews and approves the security controls as implemented in step 3. If necessary, the agency must address and remediate any weaknesses or deficiencies the assessor finds, then update the security plan accordingly.

Step 5: Authorize Information System

The agency must present an authorization package for risk assessment and risk determination. The authorizing official then submits the authorization decision to all necessary parties.

Step 6: Monitor Security Controls

The agency continues to monitor the current security controls and update security controls based on changes to the system or the environment. The agency regularly reports on the security status of the system and remediates any weaknesses as necessary.
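Since each RMF step builds on the one before it, the process can be modeled as a strict sequence. Here's an illustrative Python sketch of a tracker that enforces that ordering – the class and short step names are assumptions for demonstration, not part of NIST SP 800-37:

```python
# Short labels for the six RMF steps, in order.
RMF_STEPS = [
    "Categorize", "Select", "Implement", "Assess", "Authorize", "Monitor",
]

class RmfTracker:
    """Track one IT system's progress through the RMF, in sequence."""
    def __init__(self, system):
        self.system = system
        self.completed = []

    def complete(self, step):
        """Mark a step done; steps must be completed in order."""
        expected = RMF_STEPS[len(self.completed)]
        if step != expected:
            raise ValueError(f"expected step '{expected}', got '{step}'")
        self.completed.append(step)

tracker = RmfTracker("payroll-system")
tracker.complete("Categorize")
tracker.complete("Select")
print(tracker.completed)
```

The point of the sketch is the invariant: you can't authorize a system you haven't assessed, and you can't assess controls you haven't implemented.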

How Can Varonis Help You Be Compliant?

NIST regulation and the RMF (in fact, many of the data security standards and compliance regulations) have three areas in common:

  • Identify your sensitive and at risk data and systems (including users, permissions, folders, etc.);
  • Protect that data, manage access, and minimize the risk surface;
  • Monitor and detect what’s happening on that data, who’s accessing it, and identify when there is suspicious behavior or unusual file activity.

The Varonis Data Security Platform enables federal agencies to manage (and automate) many of these practices and regulations required in the RMF.

DatAdvantage and the Data Classification Engine identify sensitive data on core data stores and map user, group, and folder permissions, so you can see where your sensitive data is and who can access it. Knowing who has access to your data is a key component of the risk assessment phase defined in NIST SP 800-53.

Data security analytics helps meet the NIST SP 800-53 requirement to constantly monitor your data: Varonis analyzes monitored data against dozens of threat models that warn you of ransomware, malware, misconfigurations, insider attacks, and more.

NIST SP 800-137 establishes guidelines to protect your data and requires that the agency meet a least privilege model. DatAdvantage, Automation Engine, and DataPrivilege streamline permissions and access management, and provide a way to more easily get to least privilege and automate permissions cleanup.

While the Risk Management Framework is complex on the surface, ultimately it’s a no-nonsense and logical approach to good data security practices at its core – see how Varonis can help you meet the NIST SP 800-37 RMF guidelines today.

Advanced Persistent Threat (APT) Explained

advanced persistent threat hero

In March of 2018, a report detailing “Slingshot” malware revealed that the malware hid on routers and computers for approximately 6 years before being discovered. Slingshot is a perfect example of malware designed for Advanced Persistent Threat (APT) attacks.

What is an Advanced Persistent Threat (APT)?

Advanced Persistent Threats (APTs) are long-term operations designed to infiltrate and/or exfiltrate as much valuable data as possible without being discovered. It’s not yet possible to estimate exactly how much data actors were able to access with Slingshot, but Kaspersky’s data says that Slingshot affected approximately 100 individuals across Africa and the Middle East, with most of the targets in Yemen and Kenya. As we saw with the Stuxnet APT, Slingshot appears to have originated from a nation-state. As an APT, it doesn’t get much better than 6 years undetected.

Advanced Persistent Threat (APT) Lifecycle

advanced persistent threat lifecycle

The lifecycle of an APT is much longer and more complex than other kinds of attacks.

Stuxnet, for example, led a strategic attack on a high-value target: the programmers wrote code to attack a specific control board, by a specific manufacturer, that Iran used to enrich uranium. And they wrote it in such a way that it would be hard to find, so that it had as much time as possible to do as much damage as possible.

  1. Define target: Determine who you’re targeting, what you hope to accomplish – and why.
  2. Find and organize accomplices: Select team members, identify required skills, and pursue insider access.
  3. Build or acquire tools: Find currently available tools, or create new applications to get the right tools for the job.
  4. Research target: Discover who has access you need, what hardware and software the target uses, and how to best engineer the attack.
  5. Test for detection: Deploy a small reconnaissance version of your software, test communications and alarms, identify any weak spots.
  6. Deployment: The dance begins. Deploy the full suite and begin infiltration.
  7. Initial intrusion: Once you’re inside the network, figure out where to go and find your target.
  8. Outbound connection initiated: Target acquired, requesting evac. Create a tunnel to begin sending data from the target.
  9. Expand access and obtain credentials: Create a “ghost network” under your control inside the target network, leveraging your access to gain more movement.
  10. Strengthen foothold: Exploit other vulnerabilities to establish more zombies or extend your access to other valuable locations.
  11. Exfiltrate data: Once you find what you were looking for, get it back to base.
  12. Cover tracks and remain undetected: The entire operation hinges upon your ability to stay hidden on the network. Keep rolling high on your stealth checks and make sure to clean up after yourself.

Toolbox: Advanced Persistent Threat

APT operations, with many steps and people involved, require a massive amount of coordination. There are a few tried and true tactics that reappear across different APT operations:

  • Social engineering: The oldest and most successful of all infiltration methods is plain old social engineering. It’s much easier to convince somebody to provide you the access you need than it is to steal or engineer it on your own. The majority of APT attacks have a social engineering component, either at the beginning during the target research phase or towards the end to cover your tracks.
  • Spear phishing: Spear phishing is a targeted attempt to steal credentials from a specific individual. The individual is typically scouted during target research and identified as a possible asset for infiltration. Like shotgun phishing attacks, spear phishing attempts use malware, keyloggers, or email to get the individual to give away their credentials.
  • Rootkits: Because rootkits live close to the root of a computer system, they are difficult to detect. Rootkits are good at hiding themselves while granting access to the infected system. Once a rootkit is installed, the operators can access the target company through it, and they can continue to infiltrate other systems once they are on the network, making it much more difficult for security teams to contain the threat.
  • Exploits: Easy targets for APTs are zero-day bugs and other known security exploits. An unpatched security flaw allowed the APT operation at Equifax to go on undetected for several months.
  • Other tools: While the above are the most common, there is a seemingly endless number of potential tools: infected downloads, DNS tunneling, rogue Wi-Fi, and more. And who knows what the next generation of hackers will develop, or what is already out there undiscovered?

Who is Behind Advanced Persistent Threats (APT)?

Operators who lead APT attacks tend to be motivated and committed. They have a goal in mind and are organized, capable, and intent on carrying out that goal. Some of these operations live under a larger organization, like a nation-state or corporation.
These groups are engaged in espionage with the sole purpose of gathering intelligence or undermining their targets' capabilities.

Some examples of well-known APT groups include:

  • APT28 (or Fancy Bear)
  • Deep Panda
  • Equation
  • OilRig

Corporations will engage in industrial espionage with APTs, and hacktivists will use APTs to steal incriminating information about their targets. Some lower-level APTs are designed simply to steal money.

These are by far the most prevalent, but their actors are not as sophisticated or capable as the actors sponsored by nation-states.

Typical motives for APTs are espionage, gaining a financial or competitive advantage over a rival, or simple theft and exploitation.

What are Common Targets for Advanced Persistent Threats (APT)?

In general, APTs target higher-value targets like other nation-states or rival corporations. However, any individual can ultimately be a target of an APT.

Two telling characteristics of an APT attack are an extended duration and consistent attempts at concealment.

Any (and all) sensitive data is a target for an APT, as is cash or cash equivalents like bank account data or bitcoin wallet keys. Potential targets include:

  • Intellectual property (e.g., inventions, trade secrets, patents, designs, processes)
  • Classified data
  • Personally identifiable information (PII)
  • Infrastructure data (i.e., reconnaissance data)
  • Access credentials
  • Sensitive or incriminating communications (i.e., Sony)

How to Manage Advanced Persistent Threats (APT)?

advanced persistent threats how to manage

Protecting yourself from APTs requires a layered security approach:

  • Monitor everything: Gather everything you can about your data. Where does your data live? Who has access to that data? Who makes changes to the firewall? Who makes changes to credentials? Who is accessing sensitive data? Who is accessing your network, and where are they coming from? You should know everything that happens within your network and to your data. The files themselves are the targets. If you know what is happening to your files, you can react to and prevent APTs from damaging your organization.
  • Apply data security analytics: Compare file and user activity to baseline behaviors – so you know what’s normal and what’s suspicious. Track and analyze potential security vulnerabilities and suspicious activity so that you can stop a threat before it’s too late. Create an action plan to manage threats as you get the alerts. Different threats will require a different response plan: your teams need to know how to proceed and investigate each threat and security incident.
  • Protect the perimeter: Limit and control access to the firewall and the physical space. Any access point is a potential point of entry in an APT attack. Unpatched servers, open Wi-Fi routers, unlocked server room doors, and insecure firewalls all offer opportunities for infiltration. While you can't ignore the perimeter, if we had to do data security all over again, we would monitor the data first.
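The baseline comparison at the heart of data security analytics can be sketched simply. In this hypothetical Python example, the activity counts and the 3x threshold are illustrative assumptions – real systems learn baselines statistically rather than using a fixed multiplier:

```python
# Hypothetical per-user baselines (average files touched per day)
# versus today's observed activity.
baseline = {"alice": 40, "bob": 55, "svc_backup": 300}
today = {"alice": 38, "bob": 410, "svc_backup": 310}

def flag_anomalies(baseline, today, factor=3):
    """Return users whose activity today exceeds factor x their baseline."""
    return sorted(
        user for user, count in today.items()
        if count > factor * baseline.get(user, 0)
    )

print(flag_anomalies(baseline, today))  # bob touched ~7x his usual volume
```

Note that the service account's high absolute count doesn't trip the alert – what matters for catching an APT is deviation from that account's own normal, not raw volume.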

APT attacks can be difficult (or impossible) to detect without monitoring. The attackers are actively working against you to remain undetected but still able to operate. Once they’re inside the perimeter, they may look like any other remote user – making it difficult to detect when they’re stealing data or damaging your systems.

The Varonis Data Security Platform provides the monitoring and analytics capabilities you need to detect and thwart APTs against your organization – even once they’re inside.

How to Protect GDPR Data with Varonis

How to Protect GDPR Data with Varonis

In the overall data security paradigm, GDPR data isn’t necessarily more important than other sensitive data, but demands specific monitoring, policy, and processing – with significant fines to encourage compliance. Once you discover and identify GDPR data, you need to be able to secure and protect that data.

GDPR Article 25, “Data Protection by Design and Default,” sets the rules for securing GDPR data. Varonis helps automate and implement a process to get to and maintain a least privilege model to help meet this part of the GDPR. Once you limit access to data, you can proactively protect GDPR data by analyzing file activity and user behavior, automating how to process that data, and actively monitoring your GDPR data.

Apply Security Analytics to GDPR Data

Varonis applies data security analytics to file activity and user behavior, and DatAlert can apply specific threat models to monitor and alert on suspicious activity on GDPR data. Below is a sample of some of our GDPR threat models:

Threat Model: Access to an unusual number of idle GDPR files

How it works: DatAlert triggers this alert when a user accesses a statistically significant number of GDPR files that they have not accessed previously (i.e., did not create or modify).
What it means: This user account is looking for something containing GDPR data that they don’t normally access. This attack could be an infiltration attempt, a compromised account, or evidence of breached security.
Where it works: Dell Fluid, EMC, Hitachi NAS, HP NAS, NetApp, OneDrive, SharePoint, SharePoint Online, Unix, Unix SMB, Windows, Nasuni, HPE 3PAR File Persona

Threat Model: Unusual number of GDPR files deleted or modified

How it works: DatAlert identifies when a user account is deleting or modifying an unusual amount of files that contain GDPR data, compared to that user’s typical behavior.
What it means: When users delete or change many files, it could be an attempt to cover their tracks, steal data, or modify information. It often indicates that an attacker is attempting to damage or destroy critical data as part of a denial-of-service attack. It's possible that this user is simply doing clean-up, but it's more likely an attempt to steal (or destroy) data.
Where it works: Dell Fluid, EMC, Hitachi NAS, HP NAS, NetApp, OneDrive, SharePoint, SharePoint Online, Unix, Unix SMB, Windows, Nasuni, HPE 3PAR File Persona

Threat Model: Unusual number of GDPR files with denied access

How it works: DatAlert detects an increase in the number of GDPR files a user has failed to access.
What it means: When a user racks up that many denies in a set amount of time, they are looking for – or trying to access – something they likely shouldn't be touching. Most likely the user isn't supposed to see this kind of data, or someone is trying to use the account to access GDPR data in order to exfiltrate it.
Where it works: EMC, Windows, Hitachi NAS
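Conceptually, this threat model is a windowed count of denied events per user. Here's a simplified Python sketch – the event format and fixed threshold are hypothetical, and DatAlert's actual detection is baseline-driven rather than a hard cutoff:

```python
from collections import Counter

# Hypothetical (user, outcome) access events within one time window.
events = [
    ("alice", "denied"), ("bob", "allowed"), ("alice", "denied"),
    ("alice", "denied"), ("alice", "denied"), ("bob", "denied"),
]

def denied_spikes(events, threshold=3):
    """Return users whose denied-access count meets the threshold."""
    denies = Counter(user for user, outcome in events if outcome == "denied")
    return sorted(u for u, n in denies.items() if n >= threshold)

print(denied_spikes(events))  # alice racked up 4 denies in the window
```

The interesting signal isn't any single deny – those happen all the time – but a cluster of them from one account in a short window.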

DatAlert highlights suspicious activity and unusual behavior on GDPR data, and helps you streamline investigations and pursue forensics on potential threats. DatAlert will also give you the all-important heads up you need to be able to report a data breach discovery within the GDPR mandated 72 hours.

It’s best practice to develop an alert response plan that makes sense with your organization’s security practices and policies so that you have an actionable plan to investigate unusual behavior and suspicious activity.

Automatically Quarantine GDPR Data

In order to stay compliant on a day-to-day basis, you need to be constantly detecting new unsecured GDPR data and protecting that data as quickly as possible.

As users create new files there is a possibility that GDPR data will be left unsecured. Because the Data Classification Engine continuously discovers new GDPR data in your shares, it can pass that information to the Data Transport Engine. The Data Transport Engine can move those newly discovered files containing GDPR data to a quarantine folder during its next scheduled run. Once the GDPR data is quarantined and secured, you can investigate the file and determine who should have access, where it should be stored, and any additional conditions to help comply with GDPR.
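Conceptually, the quarantine step is a move of newly classified files into a locked-down folder pending review. Here's a simplified Python sketch of that flow – the function, paths, and demo directory are hypothetical; the Data Transport Engine handles this at scale with scheduling and permission handling:

```python
import shutil
import tempfile
from pathlib import Path

def quarantine_new_matches(matches, quarantine_dir):
    """Move each newly classified file into the quarantine folder."""
    qdir = Path(quarantine_dir)
    qdir.mkdir(parents=True, exist_ok=True)
    moved = []
    for path in matches:
        dest = qdir / Path(path).name
        shutil.move(str(path), str(dest))
        moved.append(dest)
    return moved

# Demo with a throwaway directory standing in for a file share.
share = Path(tempfile.mkdtemp())
new_match = share / "customer_export.csv"
new_match.write_text("name,email")  # pretend the classifier flagged this file

moved = quarantine_new_matches([new_match], share / "quarantine")
print(moved[0].name, "quarantined:", not new_match.exists())
```

After the move, a reviewer decides who should have access, where the file actually belongs, and what additional GDPR conditions apply.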

Monitor your GDPR Data

It’s vital to maintain a holistic perspective of your GDPR security status. Varonis provides several reports that allow you to keep track of your GDPR data, which can be delivered to your inbox or a shared folder.

Report 12.I.02, Open Access on Sensitive Data, will show you all the GDPR classification matches you have on the network that were discovered within your specified time slice. If you use Data Transport Engine to quarantine new matches, you’ll be able to use this report as a starting point for which files you want to investigate. If you aren’t using Data Transport Engine, you will have to ensure these files are locked down as quickly as possible.

GDPR regulations represent a shift in the way governments are broadly approaching data privacy and data security requirements – and it’s rooted in data security best practices.

Are you ready to see how your current GDPR situation looks? Get a free 30-day GDPR Readiness Assessment and see how Varonis can help protect your GDPR data.