All posts by Rob Sobers

What is a Data Security Platform?

A Data Security Platform (DSP) is a category of security products that replaces traditionally disparate security tools.

DSPs combine data protection capabilities such as sensitive data discovery, data access governance, user behavior analytics, advanced threat detection, activity monitoring, and compliance reporting, and integrate with adjacent security technologies.

They also provide a single management interface to allow security teams to centrally orchestrate their data security controls and uniformly enforce policies across a variety of data repositories, on-premises and in the cloud.

Data Security Platform (DSP)

Adapted from a figure used in the July 2016 Forrester report, The Future Of Data Security And Privacy: Growth And Competitive Differentiation.

The Rise of the Data Security Platform

A rapidly evolving threat landscape, rampant data breaches, and increasingly rigorous compliance requirements have made managing and protecting data more difficult than ever. Exponential data growth across multiple silos has created a compound effect that has made the disparate tool approach untenable. Siloed tools often result in inconsistently applied data security policies.

Many organizations are finding that simply increasing IT security spend doesn’t necessarily correlate to better overall data security. How much you spend isn’t as important as what you spend it on and how you use what you buy.

“Expense in depth” hasn’t been working. As a result, CISOs are aiming to consolidate and focus their IT spend on platforms over products to improve their enterprise-wide security posture, simplify manageability, streamline processes, and control costs.

According to Gartner, “By 2020, data-centric audit and protection products will replace disparate siloed data security tools in 40% of large enterprises, up from less than 5% today.”

(Source: Gartner Market Guide for Data-Centric Audit and Protection, March 21, 2017).

What are the benefits of a Data Security Platform?

Consolidation has clear benefits that hold across all facets of technology, not just information security:

  • Easier to manage and maintain
  • Easier to coordinate strategy
  • Easier to train new employees
  • Fewer components to patch and upgrade
  • Fewer vendors to deal with
  • Fewer incompatibilities
  • Lower costs from retiring multiple point solutions

In information security, context is king. And context is enhanced drastically when products are integrated as part of a unified platform.

As a result, the benefits of a Data Security Platform are pronounced:

  • By combining previously disparate functions, DSPs have more context about data sensitivity, access controls, and user behavior, and can therefore paint a more complete picture of a security incident and the risk of potential breaches.
  • The total cost of ownership (TCO) is lower for a DSP than for multiple, hard-to-integrate point solutions.
  • In general, platform technologies have the flexibility and scalable architecture to accommodate new data stores and add new functionality when required, making the investment more durable.
  • Maintaining compatibility between multiple data security products can be a massive challenge for security teams. Consolidating onto a DSP cuts costs in two ways:
    • OpEx drops because security teams deal with fewer vendors and maintain, tune, and upgrade fewer products.
    • CapEx drops as redundant point solutions are retired.
  • CISOs want to be able to apply their data security strategy consistently across data silos and easily measure results.

Why context is essential to threat detection

What happens when your tools lack context?

Let’s take a standalone data loss prevention (DLP) product as an example.

Upon implementing DLP, it is not uncommon to have tens of thousands of “alerts” about sensitive files. Where do you begin? How do you prioritize? Which incident in the colossal stack represents a significant risk that warrants your immediate, undivided attention?

The challenge doesn’t stop here. Pick an incident/alert at random – the sensitive files involved may have been auto-encrypted and auto-quarantined, but what comes next? Who has the knowledge and authority to decide the appropriate access controls? Who are we now preventing from doing their jobs? How and why were the files placed here in the first place?

DLP solutions by themselves provide very little context about data usage, permissions, and ownership, making it difficult for IT to proceed with sustainable remediation. IT is not qualified to make decisions about accessibility and acceptable use on its own; even if it were, it is not realistic to make these kinds of decisions for each and every file.

You can see a pattern forming here – with disparate products we often end up with excellent questions, but we urgently need answers that only a DSP can provide.

Which previously standalone technologies does a Data Security Platform include?

  • Data Classification & Discovery
    • Where is my sensitive data?
    • What kind of sensitive, regulated data do we have? (e.g., PCI, PII, GDPR)
    • How should I prioritize my remediation and breach detection efforts? Which data is out of scope? (A toy discovery sketch follows this list.)
  • Permissions Management
    • Where is my sensitive data overexposed?
    • Who has access to sensitive information they don’t need?
    • How are permissions applied? Are they standardized? Consistent?
  • User Behavior Analytics
    • Who is accessing data in abnormal ways?
    • What is normal behavior for a given role or account?
    • Which accounts typically run automated processes? Which ones access critical data, like executive files and emails?
  • Advanced Threat Detection & Response
    • Which data is under attack or potentially being compromised by an insider threat?
    • Which user accounts have been compromised?
    • Which data was actually taken, if any?
    • Who is trying to exfiltrate data?
  • Auditing & Reporting (response could be better here)
    • Which data was accessed? By whom? When?
    • Which files and emails were accessed or deleted by a particular user?
    • Which files were compromised in a breach, by which accounts, and exactly when were they accessed?
    • Which user made this change to a file system, access controls or group policy, and when?
  • Data Access Governance
    • How do we implement and maintain a least privilege model?
    • Who owns the data? Who should be making the access control decisions for each critical dataset?
    • How do I manage joiners, movers, and leavers so only the right people maintain access?
  • Data Retention & Archiving
    • How do we get rid of toxic data that we no longer need?
    • How do we ensure personal data rights (right to erasure & to be forgotten)?
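
To make the classification and discovery item concrete, here’s a toy PowerShell sketch of pattern-based discovery. The share path and the crude card-number regex are illustrative assumptions; a real classification engine adds validation, proximity analysis, and far more patterns:

# Toy example: flag text files on a share whose contents contain
# strings shaped like payment card numbers. Illustration only.
$ccPattern = '\b(?:\d[ -]?){13,16}\b'   # crude PAN-shaped pattern
Get-ChildItem '\\fileserver\finance' -Recurse -Include *.txt, *.csv -File -ErrorAction SilentlyContinue |
    Select-String -Pattern $ccPattern -List |
    Select-Object -ExpandProperty Path

Even this naive scan illustrates the point of the first two questions: you can’t prioritize or scope what you haven’t found.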

Analyst Research

A number of analyst firms have taken note of the Data Security Platform market and have released research reports and market guides to help CISOs and other security decision-makers.

Forrester’s “Expense in Depth” Research

In January 2017, Forrester Consulting released a study, commissioned by Varonis, entitled The Data Security Money Pit: Expense in Depth Hinders Maturity. It shows that a candy-store approach to data security may actually hinder data protection, and explores how a unified data security platform could give security professionals the protection capabilities they desire, including security analytics, classification, and access control, while reducing costs and technical challenges.

The study finds that a fragmented approach to data security exacerbates many vulnerabilities and challenges; 96% of respondents believe a unified approach would benefit them by preventing and responding more quickly to attempted attacks, limiting exposure, and reducing complexity and cost. The study goes on to highlight specific areas where enterprise data security falls short:

  • 62% of respondents don’t know where their most sensitive unstructured data resides
  • 66% don’t classify this data properly
  • 59% don’t enforce a least privilege model for access to this data
  • 63% don’t audit use of this data and alert on abuses
  • 93% suffer persistent technical challenges with their current data security approach

Point products may mitigate specific threats, but when used tactically, they undermine more comprehensive data security efforts.

According to the study, “It’s time to put a stop to expense in depth and wrestling with cobbling together core capabilities via disparate solutions.”

Almost 90% of respondents desire a unified data security platform. Key criteria for such a platform, as selected by survey respondents, include:

  • Data classification, analytics and reporting (68% of respondents)
  • Meeting regulatory compliance (76% of respondents)
  • Aggregating key management capabilities (70% of respondents)
  • Improving response to anomalous activity (68% of respondents)


Gartner’s DCAP Market Guide

Gartner released the 2017 edition of their Market Guide for Data-Centric Audit and Protection. The guide’s summary concisely describes the need for a platform approach to data security:

Gartner on Data-Centric Audit and Protection

Gartner recommends that organizations “implement a DCAP strategy, and ‘shortlist’ products that orchestrate data security controls consistently across all silos that store the sensitive data.” Further, the report advises, “A vendor’s ability to integrate these capabilities across multiple silos will vary between products and also in comparison with vendors in each market subsegment. Below is a summary of some key features to investigate:”

  • Data classification and discovery
  • Data security policy management
  • Monitoring user privileges and data access activity
  • Auditing and reporting
  • Behavior analysis, alerting and blocking
  • Data protection

Demo the Varonis Data Security Platform

The Varonis Data Security Platform (DSP) protects enterprise data against insider threats, data breaches, and cyberattacks by analyzing content, accessibility of data, and the behavior of the people and machines that access data to alert on misbehavior, enforce a least privilege model, and automate data management functions. Learn more about the Varonis Data Security Platform →

What customers are saying about the Varonis Data Security Platform

City of San Diego on the Varonis Data Security Platform

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Detecting Malware Payloads in Office Document Metadata

Office Documents with Malicious Metadata

Ever consider document properties like “Company,” “Title,” and “Comments” a vehicle for a malicious payload? Check out this nifty PowerShell payload stashed in the company metadata.

Here’s the full VirusTotal entry. The target opens the Office document and, with macros enabled, the payload stored within the document’s own metadata executes and does its work. No extra files are written to disk, and no network requests are made.
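
If you want to poke at a document yourself, here’s a minimal PowerShell sketch that cracks open an Open XML file (a .docx or .docm is just a zip container) and checks the built-in properties for command-like strings. The file path and pattern list are illustrative assumptions, and this is not how DatAlert works internally:

# Scan a document's built-in properties (Company, Title, Comments, etc.)
# for command-like strings. Illustrative sketch only.
Add-Type -AssemblyName System.IO.Compression.FileSystem
$path = 'C:\samples\suspicious.docm'   # hypothetical file
$zip = [System.IO.Compression.ZipFile]::OpenRead($path)
try {
    # Core properties (Title, Comments) live in core.xml; Company lives in app.xml
    foreach ($name in 'docProps/core.xml', 'docProps/app.xml') {
        $entry = $zip.GetEntry($name)
        if ($entry) {
            $reader = New-Object System.IO.StreamReader($entry.Open())
            $xml = $reader.ReadToEnd()
            $reader.Close()
            if ($xml -match 'powershell|\.exe|\.bat') {
                Write-Warning "$path contains command-like strings in $name"
            }
        }
    }
}
finally {
    $zip.Dispose()
}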

The question about whether DatAlert can detect stuff like this came up in the Twitter thread, so I decided to write up a quick how-to.

Finding Malicious Metadata with Varonis

What you’ll need: DatAdvantage, Data Classification Framework, DatAlert

Step 1: Add Extended File Properties to be scanned by Data Classification Framework.

  • Open up the Varonis Management Console
  • Click on Configuration → Extended File properties
  • Add a new property for whichever field you’d like to scan (e.g., “Company”)

Varonis Management Console

(Note: prior to version 6.3, extended properties are created in DatAdvantage under Tools → DCF and DW → Configuration → Advanced)

Step 2: Define a malicious metadata classification rule

  • In the main menu of DatAdvantage select Tools → DCF and DW → Configuration
  • Create a new rule
  • Create a new filter
  • Select File properties → Company (or whichever property you’re scanning)
  • Select “like” to search for a substring
  • Add the malicious value you’d like to look for (e.g., .exe or .bat)

Varonis DCF New Classification Rule

Step 3: Create an alert in DatAlert to notify you whenever a file with malicious metadata is discovered

  • In the main menu of DatAdvantage select Tools → DatAlert
  • Click the green “+” button to create a new rule
  • Click on the “Where (Affected Object)” sub menu on the left
  • Add a new filter → Classification Results
  • Select your rule name (e.g., “Malicious Metadata”)
  • Select “Files with hits” and “Hit count (on selected rules)” greater than 0

DatAlert Rule for Malicious Document Metadata

You can fill out the rest of the details of your alert rule–like which systems to scan, how you want to get your alerts, etc.

As an extra precaution, you could also create a Data Transport Engine rule based on the same classification result that will automatically quarantine files that are found to have malicious metadata.

That’s it! You can update your “Malicious Metadata” rule over time as you see reports from malware researchers of new and stealthier ways to encode malicious bits within document metadata.

If you’re an existing Varonis customer, you can set up office hours with your assigned engineer to review your classification rules and alerts. Not yet a Varonis customer? What are you waiting for? Get a demo of our data security platform today.

Are Wikileaks and ransomware the precursors to mass extortion?

Despite Julian Assange’s promise not to let Wikileaks’ “radical transparency” hurt innocent people, an investigation found that the whistleblowing site has published hundreds of sensitive records belonging to ordinary citizens, including medical files of rape victims and sick children.

The idea of having all your secrets exposed, as an individual or a business, can be terrifying. Whether you agree with Wikileaks or not, the world will be a very different place when nothing is safe. Imagine all your emails, health records, texts, and finances open for the world to see. Unfortunately, we may be closer to this than we think.

If ransomware has taught us one thing it’s that an overwhelming amount of important business and personal data isn’t sufficiently protected. Researcher Kevin Beaumont says he’s seeing around 4,000 new ransomware infections per hour. If it’s so easy for an intruder to encrypt data, what’s stopping cybercriminals from publishing it on the open web?

There are still a few hurdles for extortionware, but none of them are insurmountable:

1. Attackers would have to exfiltrate the data in order to expose it

Ransomware encrypts data in place without actually stealing it. Extortionware has to bypass traditional network monitoring tools that are built to detect unusual amounts of data leaving the network quickly. Of course, files could be siphoned off slowly, disguised as benign web or DNS traffic.

2. There is no central “wall of shame” repository like Wikileaks

If attackers teamed up to build a searchable public repository for extorted data, it’d make the threat of exposure feel more real and create a greater sense of urgency. Wikileaks is very persistent about reminding the public that the DNC and Sony emails are out in the open, and they make it simple for journalists and others to search the breached data and make noise about it.

3. Maybe ransomware pays better

Some suggest that the economics of ransomware are better than those of extortionware, which is why we haven’t seen the latter take off. On the other hand, how do you recover when copies of your files and emails are made public? Can the DNC truly recover? Payment might be the only option, and one big score could be worth hundreds of ransomware payments.

So what’s preventing ransomware authors from trying to do both? Unfortunately, not much. They could first encrypt the data, then try to exfiltrate it. If they get caught during exfiltration, it’s not a big deal. Just pop up the ransom notification and claim the BTC.

Ransomware has proven that organizations are definitely behind the curve when it comes to catching abnormal behavior inside their perimeters, particularly on file systems. I think the biggest lesson to take away from Wikileaks, ransomware, and extortionware is that we’re on the cusp of a world where unprotected files and emails will regularly hurt businesses, destroy privacy, and even jeopardize lives (I’m talking about hospitals that have suffered from cyberattacks like ransomware).

If it’s trivially easy for noisy cybercriminals that advertise their presence with ransom notes to penetrate and encrypt thousands of files at will, the only reasonable conclusion is that more subtle threats are secretly succeeding in a huge way.  We just haven’t realized it yet…except for the U.S. Office of Personnel Management. And Sony Pictures. And Mossack Fonseca. And the DNC. And…

The Enemy Within: A Free Security Training Course by Troy Hunt

According to the Verizon DBIR, it takes a very long time to discover a threat on your network.

That’s mind-boggling given that the most devastating breaches often start with an insider—either an employee or an attacker that gets inside using an insider’s credentials. Target, OPM, Panama Papers, Wikileaks. The list goes on and on.

The truth is that many organizations are behind the curve when it comes to understanding and defending against insider threats.

So when we were tossing around topic ideas with Troy, it quickly became clear what our next video course should focus on.

I’m happy to announce the third course in our free, CPE-eligible security training series—The Enemy Within: Understanding Insider Threats.



Get all the videos now

What’s inside?

The course is broken into 8 video modules totaling over an hour’s worth of entertaining material, covering where insider threats originate, how they exfiltrate data, and how to stop them.

More free content

While you’re at it, grab the previous two courses in the series.

About Troy

Troy is a Microsoft Regional Director and Most Valuable Professional (MVP), and a top-rated international speaker on online security, regularly delivering the number one rated talk at events across the globe. He’s also the author of 26 online Pluralsight courses, which frequently feature at the top of the charts. Troy’s site, HaveIBeenPwned.com, is one of the world’s most popular data breach verification sites.

Yahoo Breach: Pros react to massive breach impacting hundreds of millions o...

Yahoo has confirmed a data breach affecting at least 500 million users in the latest mega breach to make headlines. Here’s what some infosec pros had to say about it.


We’ll update this post as more details of the story unfold. Stay tuned.

Why the OPM Breach Report is a call-to-action for CSOs to embrace data-cent...

The Committee on Oversight and Government Reform released a fascinating 231-page report detailing the how and why behind the epic breach at the United States Office of Personnel Management (OPM).

Richard Spires, the former CIO of the IRS and DHS, remarked on OPM’s failure to take a data-centric approach to information security:

“[I]f I had walked in there [OPM] as the CIO—and, you know, again, I’m speculating a bit, but—and I saw the kinds of lack of protections on very sensitive data, the first thing we would have been working on is how do we protect that data? OK? Not even talking about necessarily the systems. How is it we get better protections and then control access to that data better?

What data was taken?

A picture of the damage inflicted by the OPM breach is painted through a series of powerful quotes, like this one from James Comey, Director of the FBI:

“My SF-86 lists every place I’ve ever lived since I was 18, every foreign travel I’ve ever taken, all of my family, their addresses. So it’s not just my identity that’s affected. I’ve got siblings. I’ve got five kids. All of that is in there.”

It’s hard to refute the argument that this is the most devastating breach of all time given the scale and sensitivity of the data that was stolen:

  • 4.2 million personnel files of former and current government employees
  • 21.5 million security clearance background investigation files
  • 5.6 million fingerprints

The background investigation files include things like mental health history, alcohol abuse, gambling issues, and other deeply personal information.

How OPM happened

The landmark event that everyone thinks of when they hear “OPM breach” is the theft of 21.5 million background investigation files from the Personnel Investigations Processing System (PIPS) – a legacy mainframe that stores the organization’s crown jewels. This breach was disclosed in 2015.

However, a file share breach disclosed back in 2014 appears to have played an instrumental role in the eventual PIPS breach. In fact, investigations showed that hackers had been inside OPM’s network since July 2012 and were discovered only after advanced monitoring was enabled in March 2014.

Regrettably, we’ll never know the extent of documents exfiltrated prior to March 2014.

On March 20, 2014, the Department of Homeland Security’s Computer Emergency Response Team (US-CERT) informed OPM’s own response team that a hacker had exfiltrated OPM data from the network.

To “better understand” the threat posed by the hacker (referred to as Hacker X1), OPM monitored the adversary’s movements for two months until they discovered a second hacker (Hacker X2) who gained initial access using a contractor’s stolen credentials.

Brendan Saulsbury, a contractor on OPM’s IT Security Operations team, said:

“So we would sort of observe the attacker every day or, you know, every couple of days get on the network and perform various commands. And so we could sort of see what they were looking for. They might take some documentation, come back, and then access, you know, somebody else’s file share that might be a little bit closer or have more access into the system.”

Hikit and SMB

Hacker X2 dropped Hikit malware to establish a backdoor, escalate privileges, and perform keylogging. Hikit was found on numerous systems and was beaconing back to a C2 server. OPM sniffed the hacker’s traffic to determine what was being exfiltrated.

Activity logs showed that the hackers would log on between 10 p.m. and 10 a.m. ET using a compromised Windows domain administrator account and search for PII on file shares using SMB commands.
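
As an aside, this particular pattern—privileged logons in the middle of the night—is visible even in native Windows event logs. Here’s a minimal PowerShell sketch, assuming Security auditing is enabled and the shell is elevated:

# List logon events (ID 4624) from the past week that occurred
# between 10 p.m. and 10 a.m., along with the account used.
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4624; StartTime = (Get-Date).AddDays(-7) } |
    Where-Object { $_.TimeCreated.Hour -ge 22 -or $_.TimeCreated.Hour -lt 10 } |
    Select-Object TimeCreated, @{ n = 'Account'; e = { $_.Properties[5].Value } }

The hard part, as OPM’s story shows, isn’t collecting the events; it’s making sure someone is actually watching and correlating them.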

OPM watched a hacker exfiltrate documents from a file share which contained information that described the PIPS system and how it is architected.

Appendix D of US-CERT’s June 2014 incident report describes the stolen file-share data.

OPM’s Director of IT Security Operations, Jeff Wagner, testified:

“In 2014, the adversary was utilizing a Visual Basic script to scan all of our unstructured data. So the data comes in two forms. It’s either structured, i.e., a database, or unstructured, like file shares or the home drive of your computer, things of that nature. All the data that is listed here, all came out of personal file shares that were stored in the domain storage network.”

The value of the data known to be exfiltrated was initially dismissed as being fairly inconsequential, but the US-CERT investigation report makes it clear that the hackers were doing reconnaissance on OPM’s file-sharing infrastructure in order to get closer to PIPS:

“The attackers primarily focused on utilizing SMB [Server Message Block] commands to map network file shares of OPM users who had administrator access or were knowledgeable of OPM’s PIPS system. The attacker would create a shopping list of the available documents contained on the network file shares. After reviewing the shopping list of available documents, the attacker would return to copy, compress and exfiltrate the documents of interest from a compromised OPM system to a C2 server.”

When asked if the documents exfiltrated from the file shares would yield an advantage in future attacks, Wagner replied:

“It gives them more familiarity with how the systems are architected. Potentially some of these documents may contain accounts, account names, or machine names, or IP addresses that are relevant to these critical systems.”

Not so trivial after all.

After conceding that the hackers were getting “too close” to PIPS, security ops decided to “boot” the hacker in an operation called the “Big Bang.”

They successfully booted Hacker X1 in late May 2014, but Hacker X2 maintained a foothold, traversing the cyber kill chain en route to the famous PIPS breach:

“Beginning in July through August 2014, the Hacker X2 exfiltrated the security clearance background investigation files. Then in December 2014, personnel records were exfiltrated, and in early 2015, fingerprint data was exfiltrated.”

A stunning lack of visibility

US-CERT identified numerous gaps in OPM’s centralized logging strategy:

“Gaps in OPM’s audit logging capability likely limited OPM’s ability to answer important forensic and threat assessment questions related to the incident discovered in 2014. This limited capability also undermined OPM’s ability to timely detect the data breaches that were eventually announced in June and July 2015.”

The big takeaway from US-CERT’s gap analysis is that traditional security strategies have a severe vulnerability when it comes to insider threats. By Jeff Wagner’s own admission, OPM had focused heavily on perimeter security, but lacked the technology necessary to detect and stop attackers who were already inside.

The report outlines OPM’s history of inadequate security controls and failed audits:

  • 2005 – the Inspector General (IG) gives OPM a bad security grade, says they’re vulnerable to hackers
  • FY 2013-2015 – OPM’s IT spending is at the bottom of all federal agencies
  • 2014 – the IG says “material weaknesses” have become “significant deficiencies”
  • 2015 – despite a mandate, only one percent of OPM employee and contractor accounts were required to use multi-factor authentication
  • 2015 (post-breach) – IG still sees an “overall lack of compliance that seems to permeate the agency’s IT security program.”

Why all CISOs need to pay attention to what happened at OPM

OPM isn’t exceptional. Many of the breaches that grab headlines are eerily similar.

First, they start with someone who is already an insider, like Edward Snowden, or with an attacker who hijacks an insider’s credentials, as was the case with Target and OPM. The explosion of ransomware has proven just how easy it is to get inside, and every vector seems to be working at scale – phishing, hijacked websites, cloud file-sharing.

Second, what do they take? Files and emails — unstructured data. In the Wikileaks and Snowden incidents, an insider took confidential cables or emails. What was taken in the Sony Pictures breach? Emails, video files, files containing passwords. All unstructured data. Ransomware also shows how vulnerable this data is – a single infected user account can encrypt thousands of files without being noticed, many of which that user probably shouldn’t have had access to in the first place.

There are of course other kinds of data we need to worry about, but unstructured data is what most organizations have the most of and know the least about. And so much of it contains sensitive information like that taken in OPM: social security or credit card numbers, health records, or detailed roadmaps describing how to infiltrate a massive database of PII.

Employees and contractors have access to all this data just by showing up to work—usually to much more than they need to do their jobs. Outsiders only need to steal an employee’s or contractor’s credentials through phishing or some other means, and then they have access to it, too.

It’s just too easy for data to be stolen, and we have to make it harder.

SIEM by itself is not enough

 “Currently, OPM utilizes Arcsight as their SIEM [security information and event management] solution of choice, but there are numerous gaps in auditable events being forwarded to Arcsight for analysis, correlation, and retention.”

Many organizations don’t forward file access events to their SIEM because native auditing is performance intensive, the raw audit logs are too noisy and voluminous, and SIEM vendors often charge by data volume. In order to protect file-share data from insider threats and outside attackers that find their way inside, security technologies like SIEM and UBA must have credible telemetry from the file shares, including access activity and content awareness.
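
To get a feel for the noise problem firsthand, sample the raw object-access events on any audited file server. A sketch, assuming object-access auditing (event ID 4663) is already enabled and the shell is elevated:

# Pull a small sample of file-access events from the last hour.
# On a busy share this log grows at a staggering rate, which is why
# raw file telemetry rarely gets forwarded to a SIEM wholesale.
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4663; StartTime = (Get-Date).AddHours(-1) } -MaxEvents 25 |
    Select-Object TimeCreated, @{ n = 'Account'; e = { $_.Properties[1].Value } }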

A data-centric approach

As Richard Spires points out, we need a new approach that focuses more on the data itself than the infrastructure that allows us to access that information. It’s one thing to lose a server; it’s another to lose millions of files containing employees’ deepest personal secrets.

Organizations need to get a grip on where their information assets are, who is using them, and who is responsible for them. There are just too many unknowns right now. They need to put all that data lying around in the right place, restrict access to it, and monitor and analyze who is using it.

One thing organizations have started to realize is that they can jump light years ahead of where they are today very quickly just by installing technology to watch and analyze how employees use data. Smart AI and machine learning can be used to look for patterns of abuse and help you spot breaches before they happen. Think of it like the fraud detection that your credit card company uses – it’s very effective in stopping thieves from stealing money. The same analytics can help prevent insiders and outside attackers from stealing data.

There is no security silver bullet. But if you’re not watching what is going on with your unstructured data, which is growing exponentially, you have an intolerably dangerous blind spot – it’s almost impossible to detect an attack and very difficult to assess the scope of the damage, making recovery arduous and expensive. Organizations have overlooked this for a long time because the notion of organizing, categorizing and sorting out this metadata has been daunting. But that doesn’t need to be the case anymore.

I’ll close with the bold statement that the report opens with – one that is directed to federal CIOs, but that all CIOs and CSOs should take to heart:

“Federal CIOs matter. In fact, your work has never been more important, and the margin for error has never been smaller.

As we continue to confront the ongoing challenges of modernizing antiquated systems, CIOs must remain constantly vigilant to protect the information of hundreds of millions of Americans in an environment where a single vulnerability is all a sophisticated actor needs to steal information, identities, and profoundly damage our national security.”

Well said.

Caveats & Notes

The report, which is titled “The OPM Data Breach: How the Government Jeopardized Our National Security for More than a Generation,” was authored by Republican Congressional staffers. You can read the Democratic response to the report here.

Regardless of partisan politics, the report contains important information about attack vectors, timelines, files stolen, and exfiltration methods. That’s what we’ll stick to here.

If you want to read the whole report, OPM released it as a rasterized image (so you can’t CTRL-F to search). Luckily, Dan Nguyen made an OCR’d PDF and plain-text versions for us.

If you’re feeling ultra-ambitious, OPM itself released a doc shortly after the breach explaining how they plan to improve their security posture. You’ll find that here.

Protecting Bridget Jones’s Baby

In the wake of the Sony Pictures breach, studios are getting much smarter when it comes to data protection. A shining example is Miramax, a global film and television studio best known for its award-winning and original content such as 2016’s Bridget Jones’s Baby with Universal Pictures and Studio Canal.

Read the full case study ⟶

Miramax was looking for a solution that could monitor for insider threats and user behavior, and help classify its unstructured data for content discovery, remediation, and protection. That’s where Varonis DatAdvantage, DatAnswers, and the Data Classification Framework came into play.

Denise Evans, VP of Information Technology at Miramax, said: “Prior to implementing a least privilege model with Varonis, 40% of our files were overexposed when they didn’t need to be. This kind of exposure isn’t a problem until a security breach occurs. Should there be a breach, we’re now able to quickly identify and target problem areas in a manner we weren’t previously able to do.” With the help of Varonis, Miramax was able to put in place a least privilege model, so that users only had access to the files they needed to do their jobs.

What’s also really compelling about this story is that Miramax is using our secure search product DatAnswers to enhance productivity. Miramax can now support eDiscovery requests and get very accurate search results that save the company time and money.

Click to read the full case study: https://www.varonis.com/success-stories/miramax

The Best Ransomware Defense: Limiting File Access

If ransomware lands on your machine, but can’t find your files, are you really infected? This isn’t a philosophical thought experiment, I promise. Let me explain.

Keeping data off your endpoints

A common paradigm in IT for many years has been to keep user data on network drives–departmental shares, home folders, etc. Not only do network drives make sharing files possible, but they minimize the amount of data stored on endpoints.

If nothing of importance is kept on local hard drives, a single machine can be lost or destroyed and it has minimal impact on business continuity. IT provisions new hardware, maps the user’s network drives, and we’re good to go.
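
That re-provisioning step is trivial to script. A sketch, with a hypothetical home-folder path:

# Re-map the user's home drive on freshly provisioned hardware.
New-PSDrive -Name 'H' -PSProvider FileSystem -Root '\\fileserver\home\jsmith' -Persist -Scope Global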

Unfortunately, storing files on network drives won’t necessarily keep them safe from ransomware because the OS treats mapped network drives just like local folders. Some strains of ransomware such as Locky will even encrypt files on un-mapped network drives.

Maybe you use Dropbox instead of file shares to collaborate. Same problem. In fact, most file sync tools actively push data to all of your endpoints, leaving them susceptible to ransomware attacks. Your file sync tool might even exacerbate the problem by syncing encrypted files from an infected device to all your other devices. Doh.

Bottom line: if ransomware can see your files, it can encrypt them.

Enter Stubs

DatAnywhere, our file sync and share product that uses SMB shares as a backend, has a feature called stubs. Instead of keeping your files locally, you just have a pointer, or “stub.”

When you double-click on a stub file, DatAnywhere goes out and gets it on-demand. When we built this feature we anticipated quite a few benefits:

  • Save local disk space
  • Files are transferred to endpoints encrypted via SSL (instead of SMB)
  • Users can still locally cache their favorite files for offline use

One benefit we didn’t anticipate until it saved a customer’s bacon is that stubs aren’t accessible by ransomware because they don’t actually exist.

Our customers are finding that by replacing mapped drives with DatAnywhere workspaces, they get another layer of malware protection without having to change anything about their file servers and NAS.

If I open up Explorer, I can see my stub files and they look real.

I have all of the metadata like size, type, and date modified, but the data doesn’t actually exist.

Third-party programs and scripts, including ransomware, won’t see any files in the directory. In fact, PowerShell and cmd.exe don’t know about our stubs either.

As a result, a malicious program can’t access your files to encrypt them, nor can it initiate the download of a file from the server. It has no file handle to work with.

Of course any file that the user has elected to cache locally would be vulnerable, but this is usually a tiny percent of what they actually have access to–a vast improvement over the network drive approach which can expose terabytes of data to ransomware. What’s more, if a file does get encrypted, DatAnywhere has built-in version control, so rolling back to an unencrypted version is easy.

Is this 100% fool-proof? Nothing ever is. Technically, someone could write a crypto-variant that emulates Windows Explorer, but this isn’t very probable given that there’s so much low-hanging fruit out there.

In the words of the late great Muhammad Ali:

Float like a butterfly, sting like a bee, his crypto can’t hit what his code can’t see.

Want to try DatAnywhere? It’s completely free for up to 5 users. Download it and try it now.

21 Free Tools Every SysAdmin Should Know

Knowing the right tool for the right job can save you hours of extra work and tedium. We’ve compiled a list of some of the best general-purpose sysadmin tools for troubleshooting, testing, communicating, and fixing the systems that you need to keep running.

Wireshark

http://www.wireshark.org/

Wireshark is the world’s foremost network protocol analyzer. It lets you see what’s happening on your network at a microscopic level. It is the de facto standard across many industries and educational institutions.

Wireshark is cross-platform and works on OS X, Windows, and Unix.

FileZilla

https://filezilla-project.org/

FileZilla is a GPL-licensed FTP client and server. Its ability to connect to SSH-secured hosts makes it a great choice if you need to give access to clients more comfortable with a GUI than a CLI.

Fiddler

http://www.telerik.com/fiddler

Fiddler is a proxy server that runs locally to let developers debug web applications. If you have multiple applications or processes that can modify the values in a form, it’s great to be able to see the actual output as transmitted.

If you’re working with a remote API, you can also compose and replay requests as needed.

Sysinternals Suite

http://technet.microsoft.com/en-us/sysinternals/bb545021.aspx

The Sysinternals Suite is a collection of general sysadmin tools for file and disk, networking, process management, security and collecting system information on Windows hosts.

One of the most popular and immediately beneficial utilities is Autoruns.exe, which identifies programs that start automatically.

Mosh

http://mosh.mit.edu/

SSH users will be familiar with the frailty of their remote sessions: a single Wi-Fi hiccup and they go down. Mosh is a secure replacement protocol that allows for the resumption of sessions as well as generally improved performance. Mosh is available for almost every platform, including a Chrome plugin for even more portability.

Autossh

http://www.harding.motd.ca/autossh/

Designed more for SSH tunnels than interactive sessions, Autossh will restart dropped SSH sessions and tunnels.

If you’d like a perpetual session, use it with “screen”.

Clonezilla

http://clonezilla.org/

If you’re administering or provisioning a larger number of computers, it is very beneficial to create a master image and then push it to all of the target machines, which is exactly what the open source Clonezilla does.

The multicast feature of Clonezilla SE lets you update machines in massive parallel batches.

Clusto

https://github.com/clusto/clusto

A Python-based server cluster management tool, Clusto lets you maintain an abstracted interface for interacting with your infrastructure.

Clusto stores data in any database that you can interact with via SQLAlchemy, easing management since you can get started with whatever you already have in place.

Ansible

http://www.ansible.com

Ansible bills itself as the simplest way to automate IT provisioning tasks.

Ansible Playbooks are the programmatic method of bundling instructions to be run, which you can then replay on any number of servers you’re connecting to over SSH.

Chef

http://www.opscode.com/chef/

Chef helps automate your server infrastructure via Chef clients installed on each node in your network. Periodically, the clients poll the central Chef server and check their local state against the instructed configuration; if discrepancies are found, the client runs the commands needed to bring the node back into compliance.

Chef’s constant compliance checking is very helpful in quickly recovering if manual changes are made by sysadmins.

Puppet

http://puppetlabs.com/

Puppet allows for declarative configuration of servers via its Ruby DSL. If you already know Ruby, it’s easy to dig into Puppet and manage any number of servers.

If you want to get an easy taste for Puppet, they offer a preconfigured VM that you can play around with.

Dnsmasq

http://www.thekelleys.org.uk/dnsmasq/doc.html

Dnsmasq is a much lighter-weight DNS resolver for local networks than BIND or other “heavyweight” servers. It is ideally suited for low-resource environments like routers and firewalls.

It caches requests locally but falls back to an upstream DNS provider.

Bugzilla

http://www.bugzilla.org/

Primarily used to report and process software bugs, Bugzilla has expanded to allow for quality assurance management and the submission and review of patches.

Bugzilla integrates with many source control systems, letting you set up two-way communication so that you can close bugs with commits, etc.

Sysdig

http://www.sysdig.org/

Sysdig is an open source, system-level instrumentation tool. It lets you capture, filter, and save the system activity of a Linux machine at any given point.

Sysdig makes some common tasks, like tracking every file opened in a directory in real time, trivially easy.

Lua scripts can be used to modify and extend the core Sysdig functionality.

TreeSize

http://www.jam-software.com/treesize_free/

TreeSize is an NTFS disk space viewer that helps visualize space usage in a Windows Explorer-like view.

TreeSize works off of the Master File Table of the target machine, letting you read results faster and without the need for content read permissions.
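
For contrast, here’s the naive approach TreeSize’s MFT trick avoids: recursively summing file sizes in PowerShell, which is slow on large volumes and needs list permissions on every folder it touches (the path is illustrative):

# Roll up per-folder sizes the slow way, one directory walk at a time.
Get-ChildItem 'D:\Shares' -Directory |
    ForEach-Object {
        $bytes = (Get-ChildItem $_.FullName -Recurse -File -ErrorAction SilentlyContinue |
                  Measure-Object -Property Length -Sum).Sum
        [pscustomobject]@{ Folder = $_.Name; SizeGB = [math]::Round($bytes / 1GB, 2) }
    } |
    Sort-Object SizeGB -Descending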

7-Zip

http://www.7-zip.org/

An open source Windows compression utility. 7-Zip works extremely fast on even very large zip files and can produce self-extracting archives in the 7z format.

Notepad++

http://notepad-plus-plus.org/download/

An excellent open source Windows text editor with regular expression support, syntax highlighting, and a tabbed interface.

If you’re moving between machines, check out Notepad++ Portable, which you can run from a share or a USB drive.

KeePass

http://keepass.info/

An open source password manager, KeePass lets you generate strong random passwords per site or application. Stored securely, KeePass lets you maintain strong passwords without having to remember hundreds of 20-plus-character passwords or, even worse, writing them down.

If you need to share your password file with others, or access it from multiple locations, keep it on a DatAnywhere share.

Netcat 

http://netcat.sourceforge.net/

Often described as the “Swiss Army knife” of network utilities, netcat is tremendously useful for anything dealing with sending or receiving data over network ports.

Example: if you need a one-shot webserver on port 8080

{ echo -ne "HTTP/1.0 200 OK\r\nContent-Length: $(wc -c <some.file)\r\n\r\n"; cat some.file; } | nc -l 8080

Process Explorer

http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx

Process Explorer lets you track, identify, start, and stop processes on a Windows machine. It’s also a great way to track down memory leaks and find rogue processes.

ADModify.NET

https://blogs.technet.microsoft.com/exchange/2004/08/04/admodify-net-is-here/

ADModify.NET is a tool primarily utilized by Exchange and Active Directory administrators to facilitate bulk user attribute modifications.
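
The same kind of bulk change can also be scripted. A PowerShell sketch assuming the RSAT ActiveDirectory module, with a hypothetical OU and attribute value:

# Bulk-update one attribute for every user in an OU.
Import-Module ActiveDirectory
Get-ADUser -SearchBase 'OU=Sales,DC=corp,DC=example,DC=com' -Filter * |
    Set-ADUser -Office 'New York HQ'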

Email security in the wake of #DNCLeaks

Back in December, our #1 prediction for 2016 was that the U.S. Presidential campaign would be impacted by a cyber attack. And here we are.

Watching the fallout from #DNCLeaks, it’s evident just how devastating email breaches can be. For many organizations, email is the most sensitive asset they have, yet monitoring for anomalous access to mail servers isn’t a given. It should be.

If you’re a Varonis customer with DatAlert Analytics and DatAdvantage for Exchange, here are some of the threat models that are working to protect you:

  • Suspicious mailbox activity: detects behavior indicative of email exfiltration or obfuscation (i.e., covering their tracks).
  • Abnormal service behavior: detects instances of an attacker exploiting service account privileges to gain email access.
  • Abnormal admin behavior: detects abusive admins exploiting their privileges to exfiltrate or compromise email data.

These threat models are adaptive and based on the behavioral profiles of your users. Of course, you can always create custom real-time alerts based on the vast array of metadata we collect—e.g., access to a mailbox by someone other than the owner.

Need help enabling these threat models? Book an office hours session with your systems engineer.

Interested in learning more? Sign up for a 15-minute demo.

The Difference Between Active Directory and LDAP

Active Directory (AD) is a directory service made by Microsoft. It provides all sorts of functionality like authentication, group and user management, policy administration and more. LDAP is a way of speaking to Active Directory.

LDAP, which stands for Lightweight Directory Access Protocol, is a means for querying items in any directory service that supports it.

The relationship between AD and LDAP is much like the relationship between Apache and HTTP:

  • HTTP is a web protocol.
  • Apache is a web server that uses the HTTP protocol.
  • LDAP is a directory services protocol.
  • Active Directory is a directory server that uses the LDAP protocol.

Active Directory is just one example of a directory service that supports LDAP. There are other flavors, too: Red Hat Directory Service, OpenLDAP, Apache Directory Server, and more.

LDAP is not a product

Occasionally you’ll hear someone say, “We don’t have Active Directory, but we have LDAP.” What they probably mean is that they have another product, such as OpenLDAP, which is an LDAP server.

It’s kind of like someone saying “We have HTTP” when they really meant “We have an Apache web server.”

What’s an LDAP query?

I mentioned earlier that the LDAP protocol is a way to speak to your directory service. That dialog happens via LDAP queries.

An LDAP query is a command that asks a directory service for some information. For instance, if you’d like to check whether a particular user is a member of a specific group, you’d submit a query that looks like this:

(&(objectClass=user)(sAMAccountName=yourUserName)
(memberof=CN=YourGroup,OU=Users,DC=YourDomain,DC=com))

Beautiful syntax, huh? Not quite as simple as typing a web address into your browser. Feels like LISP.

Luckily, in most cases, you won’t need to write LDAP queries. To maintain your sanity, you’ll perform all your directory services tasks through a point-and-click management interface like Varonis DatAdvantage, or perhaps using a command-line shell like PowerShell that abstracts away the details of the raw LDAP protocol.
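
For example, here’s a sketch of the same group-membership question in PowerShell. The first form still takes a raw LDAP filter via the built-in [adsisearcher] accelerator; the second assumes the RSAT ActiveDirectory module and hides LDAP entirely (user and group names are placeholders):

# Raw LDAP filter through the built-in DirectorySearcher accelerator:
$filter = '(&(objectClass=user)(sAMAccountName=yourUserName))'
([adsisearcher]$filter).FindOne().Properties['memberof']

# Same question with no LDAP syntax in sight:
Get-ADPrincipalGroupMembership -Identity 'yourUserName' | Select-Object Name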

Other Resources: