Lessons from OPM: Turning Security Inside Out With User Behavior Analytics

The US Office of Personnel Management (OPM) suffered a massive breach in early June. The agency publicly announced that the personally identifiable information (PII) of over 4 million current and former government workers had likely been scooped up by hackers.

Who done it?

At this point, there’s some interesting speculation about the source of the OPM breach, but nothing definitive yet.

If this were a physical heist of similar magnitude involving, say, expensive jewelry or art, and the thieves got away without leaving much of a trail, the typical conclusion would be “inside job.”

We even know from studies by Ponemon and Verizon that most organizations are not very good at detecting data theft, whether by an outsider who becomes an insider or an insider who goes rogue, and often take months to discover that data has been stolen.

In attacks over the last year where we do have more information — Target, Home Depot, Anthem — we have a better understanding of why there’s such a lag.

While IT staff may feel safe behind firewalls, virus detection, and other perimeter defenses, and may even want larger investments in them, the on-the-ground reality is that hackers enter through a virtual side door without being spotted.

This side door is typically a phishing email: a message that appears to come from a legitimate site and contains a link or a file attachment. Once the link or attachment is clicked, undetectable malware is downloaded through the employee’s browser to the desktop. With the malware acting on commands from a remote site, hackers can effectively access internal resources as if they were that employee.

Back to the physical world. Consider the scary situation wherein the thief has an employee badge and a keycard to the safe deposit boxes. That’s the closest parallel to OPM and other recent breaches.

The real question that IT security should be asking: how do you tell the difference between an employee and an outsider? One answer is to look at behavioral patterns — detailed file and email access patterns of a user account.

The fancy name for this technique is UBA, or user behavior analytics. At some point, the file accesses — directory searches, files read, and files copied — of these outsiders pretending to be insiders will diverge sharply from the normal behavior of that actual employee. UBA will spot this divergence.

But before we talk about UBA, let’s quickly see why conventional techniques are not enough to block or even spot the attackers in the OPM case.

How it Was Done

Security research firm ThreatConnect has put together some of the pieces of the OPM puzzle. According to their experts, the hackers registered a domain name that was similar to OPM’s—along the lines of opm-learning.org.

It’s tantalizingly close to opm.gov, the official URL: close enough to work as bait in a phishing email. The site was also designed to include government logos and other OPM branding to make it even more convincing.
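Spotting this kind of lookalike domain is something defenders can automate. Below is a minimal, deliberately crude sketch in Python, assuming a short list of trusted domains and using the standard library’s SequenceMatcher; the domain list, threshold, and substring heuristic are illustrative assumptions, not anything from the OPM investigation.

```python
# Deliberately crude sketch: flag domains that look confusingly similar to a
# trusted one. Domain list, threshold, and heuristics are illustrative only.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = ["opm.gov"]

def similarity(a: str, b: str) -> float:
    """Return a 0..1 ratio of how similar two strings are."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def looks_like_spoof(domain: str, threshold: float = 0.6) -> bool:
    """True for a domain suspiciously similar to, but not equal to, a trusted one."""
    host = domain.lower()
    if host.startswith("www."):
        host = host[len("www."):]
    for trusted in TRUSTED_DOMAINS:
        if host == trusted:
            return False                          # exact match: the real site
        brand = trusted.split(".")[0]             # "opm"
        if brand in host or similarity(host, trusted) >= threshold:
            return True                           # e.g. "opm-learning.org"
    return False

print(looks_like_spoof("opm-learning.org"))       # True
print(looks_like_spoof("opm.gov"))                # False
```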

The theory is that the attackers phished an OPM employee by sending an email with a link to the fake OPM site. The unlucky government worker clicked, and the malware, a series of DLLs, was downloaded onto a laptop or desktop.

Why wasn’t the malware detected?

This is where Google’s VirusTotal, purchased in 2012, comes into play. The service lets you check suspect files against known virus signatures. Obviously, it’s a great tool for IT departments.

But it has since been discovered that hackers also use VirusTotal to craft undetectable malware: they would run their code through the service and tweak it until no flags were raised. The result: stealthy malware.
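To make that concrete, here is a minimal sketch of the legitimate use of the service, assuming VirusTotal’s v3 file-report endpoint and the third-party requests library; the endpoint, header, and response fields should be checked against the current documentation, and the file name in the usage comment is made up.

```python
# A minimal sketch of looking up a file hash against VirusTotal.
# Assumes the v3 REST API and the `requests` package; check the current
# docs for the exact endpoint and response fields.
import hashlib
import requests

API_KEY = "YOUR_VIRUSTOTAL_API_KEY"   # placeholder

def sha256_of(path: str) -> str:
    """Hash the file locally so only the digest is sent to the service."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def virustotal_verdict(path: str) -> dict:
    """Return per-engine detection counts for a file, if VirusTotal knows it."""
    digest = sha256_of(path)
    resp = requests.get(
        f"https://www.virustotal.com/api/v3/files/{digest}",
        headers={"x-apikey": API_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["data"]["attributes"]["last_analysis_stats"]

# Usage (hypothetical file name):
# print(virustotal_verdict("suspicious_attachment.dll"))
# A result like {'malicious': 0, 'suspicious': 0, 'undetected': 60, ...}
# is exactly the "no flags raised" outcome the attackers were after.
```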

And there’s evidence from ThreatConnect that the OPM gang did exactly this.

If you’re keeping score, the OPM hackers tricked an employee into downloading malware that would stay below the radar. OPM’s intrusion detection system, known as Einstein, was completely beside the point: the hackers entered OPM through a ‘legitimate’ email followed by a garden-variety employee download.

The next step for the hackers was to find the valuable data and then copy it to their servers without being detected.

Security researchers connected the dots between OPM and similar attacks and concluded that a command-and-control, or C2, style attack was likely used in the breach.

In plain English, the hackers were able to send commands to the malware to search for records on the OPM servers and then copy the data back to the hackers’ own servers. Both the commands sent and the data received were encoded as an ordinary HTTP stream.

To any poor IT admin monitoring OPM during the breach, the hackers’ activities would have appeared as an employee browsing the web!
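The article doesn’t say how that traffic might have been told apart from real browsing, but one common heuristic, offered here purely as an illustration, is that malware calling home tends to do so on a fixed timer. A rough sketch, assuming proxy-log records of (timestamp in seconds, internal host, external domain):

```python
# Hypothetical sketch: flag "beacon-like" traffic in web proxy logs.
# Record format and thresholds are assumptions; real logs will differ.
from collections import defaultdict
from statistics import mean, pstdev

def find_beacons(records, min_requests=20, max_jitter=5.0):
    """Return (host, domain, mean_gap) triples with unusually regular timing."""
    by_pair = defaultdict(list)
    for ts, host, domain in records:
        by_pair[(host, domain)].append(ts)

    suspects = []
    for (host, domain), times in by_pair.items():
        if len(times) < min_requests:
            continue
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        # A human browsing produces irregular gaps; an implant phoning home
        # every N seconds produces gaps with very little spread.
        if pstdev(gaps) <= max_jitter and mean(gaps) > 0:
            suspects.append((host, domain, mean(gaps)))
    return suspects

# An implant calling home every ~60 seconds would show up with a mean gap
# near 60 and near-zero spread, even though each individual request looks
# like normal web browsing.
```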

Bad Behavior

I think you’ll agree that the odds were heavily stacked against OPM being able to spot the attack with their conventional defenses.

That’s where UBA could have helped. But the biggest conceptual block to adopting this approach is accepting that hackers will get in and, like insiders, are probably already there. It’s not necessarily an easy transition for IT staff who believe in heavily fortified defenses.

UBA works by collecting long-term statistical data on employee file access patterns. The internal analytics engine effectively builds its own signature, or profile, of each user, associating a likelihood with each directory visited or file accessed.

When outside hackers then come in and take on the identity of an employee, as they did at OPM, the UBA software is watching.

For example, the software may report that visiting the sales directory at 10 a.m. on a Tuesday fits within the employee profile. But entering into the customer directory, where sensitive PII is kept, at 4 a.m. on a Sunday is not a standard pattern.

And that’s just enough of a variation to trigger an alarm to IT. Admins can then turn on more granular logging to see what’s being done: if the activities don’t check out, they can nab the hacker before the data is exported.
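To make the idea a bit more concrete, here is a minimal sketch of that kind of profiling, assuming access events of the form (user, directory, timestamp); the bucketing scheme and the likelihood threshold are my own simplifications, and commercial UBA engines use far richer statistical models.

```python
# A minimal sketch of the profiling idea behind UBA: build a per-user
# baseline of (directory, hour-of-week) access frequencies, then flag
# events the baseline says are very unlikely.
from collections import Counter, defaultdict
from datetime import datetime

class AccessProfiler:
    def __init__(self, min_likelihood=0.01):
        self.counts = defaultdict(Counter)   # user -> Counter of (directory, hour_of_week)
        self.totals = Counter()              # user -> total observed events
        self.min_likelihood = min_likelihood

    @staticmethod
    def _bucket(directory: str, ts: datetime):
        # Hour-of-week captures both "4 a.m." and "Sunday" in one feature.
        return (directory, ts.weekday() * 24 + ts.hour)

    def observe(self, user: str, directory: str, ts: datetime):
        """Add a historical event to the user's baseline."""
        self.counts[user][self._bucket(directory, ts)] += 1
        self.totals[user] += 1

    def is_anomalous(self, user: str, directory: str, ts: datetime) -> bool:
        """True if this access is rare (or unseen) for this user's baseline."""
        total = self.totals[user]
        if total == 0:
            return True                      # no history at all: worth a look
        seen = self.counts[user][self._bucket(directory, ts)]
        return (seen / total) < self.min_likelihood
```

Feed weeks of audit-log history through observe(), then run live events through is_anomalous(): the salesperson reading the sales share on a Tuesday morning stays quiet, while the same account crawling the customer PII directory at 4 a.m. on a Sunday trips the alarm.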

UBA is a Great Idea

But let me add that there’s no single silver bullet. You’ll need to implement a different kind of program, one that looks at data security from the inside out and puts the focus on the data first. Sure, perimeter defenses are important, but as I said earlier, hackers will get through, and potentially rogue insiders are already there.

If you want a quick six-point inside-out program, here’s what I would do:

  1. Inventory your file system assets with an eye towards classifying files into easily monetizable (credit card numbers, health information, customer identifiers, etc.) and sensitive (corporate IP).
  2. Determine who has access to these files and who should have access.
  3. Consider taking rarely used files offline and archiving them (a quick way to find candidates is sketched after this list).
  4. Restrict file permissions to only those employees who need access as part of their jobs—“role-based permissions.”
  5. Put in place controls for new users requesting access and for deleting those who no longer require it.
  6. Finally, since hackers can and will get through your defense (and employees can steal data as well), you’ll need a UBA system to monitor access patterns of everyone—even those who have the required permissions to your data—to spot data thieves in the act.
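As promised in step 3, here is a rough sketch of finding archiving candidates, assuming a filesystem walk from Python and that last-access or last-modified times are a good enough signal; the share path and the cutoff in the usage comment are placeholders.

```python
# Rough sketch for step 3: list files nobody has touched in a long time,
# making them candidates for archiving. On filesystems with atime updates
# disabled, the last-modified time is the effective fallback.
import os
import time

def stale_files(root: str, days: int = 365):
    """Yield (path, days_idle) for files untouched for more than `days` days."""
    cutoff = time.time() - days * 86400
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue                      # unreadable or vanished: skip
            last_used = max(st.st_atime, st.st_mtime)
            if last_used < cutoff:
                yield path, int((time.time() - last_used) // 86400)

# Usage (placeholder share path):
# for path, idle in stale_files(r"\\fileserver\shares\finance", days=730):
#     print(f"{path} has been idle for {idle} days")
```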
