Archive for: October, 2012

New Zealand’s Leaky Servers Highlight the Need for Information Govern...

[Image: MSD kiosk network locations]

How a Permissions Report Could Have Plugged the Hole in New Zealand’s Leaky Servers

Earlier this week, Keith Ng blogged about a massive security hole in the New Zealand Ministry of Social Development’s (MSD) network.  He was able to walk up to a public kiosk in the Work and Income office and—without cracking a password or planting a Trojan—immediately gain access to thousands upon thousands of sensitive files.

How sensitive, you ask?  Among other things, Ng could browse, read, and modify:

  • Invoices and other financial data
  • Call system logs
  • Files linking children to medical prescriptions
  • Identities of children in special needs programs


How did this happen?

Well, there are two possibilities:

1. The kiosks were logged in with an administrative account (e.g., Domain Admin) with full access to all data on the network

2. The kiosks were logged in with a “normal” account, but the file shares were incorrectly permissioned, allowing global access

I find it very hard to believe that the kiosks were logged in as administrators, but we can’t rule it out.  The latter cause, broken or excessive permissions, is a very common problem that we help organizations address literally every week at Varonis.

What could have been done to prevent it?

Unplugging the kiosks is only step 1.  The kiosks aren’t the issue.  There are much bigger information governance problems at the heart of this data leak.

Here are some tips that will help address the root cause, not just the catalyst:

1. Locate exposed, sensitive data

  • Use a data classification framework to scan your file servers and determine where your most sensitive content lives, and where it is exposed to too many people

Once you’ve located the sensitive stuff, make sure only the right people have access, and then monitor activity on that sensitive data to make sure that authorized users aren’t abusing their access.

If I’m a CSO, I want a solution that tells me at any given time exactly where all my sensitive data is, where it is over-exposed, and who is accessing it.  If someone creates a file with a social security number or patient ID and plops it onto a public share that a kiosk can see, I want my team to be alerted automatically.
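As an illustration, here is a minimal sketch of what such a classification scan might look like. The regex patterns and the “PID-” identifier format are hypothetical; a production classification framework would use validated rules, proximity keywords, and checksum tests rather than bare regular expressions:

```python
import re

# Patterns for a few sensitive-data types. These are illustrative only;
# real classification rules are far more robust.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "patient_id": re.compile(r"\bPID-\d{6}\b"),  # hypothetical in-house format
}

def classify(text):
    """Return the set of sensitive-data types found in a piece of text."""
    return {name for name, pattern in PATTERNS.items() if pattern.search(text)}

def scan_share(files):
    """files: mapping of path -> file contents.
    Returns the paths that hold sensitive content (and what was found),
    i.e., the places that need an access review."""
    return {path: hits for path, contents in files.items()
            if (hits := classify(contents))}
```

Point a scan like this at a share a kiosk can reach, and any hit is a candidate for the kind of alert described above.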

2. Identify and remove global access groups from ACLs

  • Figure out where “Everyone” or “Authenticated Users” appears on ACLs and remove them

This can be tough because (a) it’s not trivial to crawl every ACL on every file server or NAS device looking for “Everyone,” and (b) you have to remove global access without cutting off the people who genuinely need the data.
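To make the first problem concrete, here is a sketch of the global-access check, assuming the ACLs have already been exported into a simple path-to-trustees mapping (for example, from `icacls /save` output or a permissions reporting tool):

```python
# Groups that effectively grant access to anyone on the network.
GLOBAL_GROUPS = {"Everyone", "Authenticated Users"}

def find_global_access(acls):
    """acls: mapping of folder path -> list of trustee names on its ACL.
    Returns the folders where a global group appears, and which ones."""
    return {path: sorted(set(trustees) & GLOBAL_GROUPS)
            for path, trustees in acls.items()
            if set(trustees) & GLOBAL_GROUPS}
```

The hard part, as noted above, is not this lookup but the remediation: before pulling “Everyone,” you need to know who actually uses the data so you can grant them access explicitly.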

3. Watch your super users

  • Set up alerts for whenever someone is granted super user/administrator privileges
  • Periodically review the list of people who have privileged access
  • Review your audit trail to see what super users are doing with their elevated rights

Even if the kiosks were mistakenly set up to run under a super user account, if MSD had been reviewing access activity, they likely would have noticed an inordinate amount of super user activity from the public kiosks’ IP addresses.
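A rough sketch of that kind of review, assuming audit events have been collected into simple records (the field names, group names, and kiosk subnet prefix are all hypothetical):

```python
ADMIN_GROUPS = {"Domain Admins", "Enterprise Admins"}
KIOSK_SUBNET = "10.20."  # hypothetical address prefix for the public kiosks

def review_audit_trail(events, admin_accounts):
    """events: list of dicts with 'type', 'account', 'source_ip', and
    (for privilege grants) 'group'.  Flags the two red flags discussed
    above: new super user grants, and super user activity coming from
    kiosk addresses."""
    alerts = []
    for e in events:
        if e["type"] == "group-add" and e.get("group") in ADMIN_GROUPS:
            alerts.append(f"ALERT: {e['account']} granted membership in {e['group']}")
        elif e["account"] in admin_accounts and e["source_ip"].startswith(KIOSK_SUBNET):
            alerts.append(f"ALERT: super user {e['account']} active from kiosk {e['source_ip']}")
    return alerts
```

Run against a real audit trail, a report like this would have surfaced the kiosk traffic long before a visitor did.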

4. Assign and involve data owners

  • Access to children’s medical records, for instance, should be granted and reviewed not by IT, but by the business unit that is responsible for managing patients (e.g., a medical director).

By transferring this responsibility to the people who are most equipped to make access control decisions (i.e. data owners), not only do you end up with better decisions, but you also relieve some of the burden on IT.

How hard can it be?

Many of the comments on Ng’s posts were along the lines of “Rookie mistake!” or “Security 101!” I assure you, information governance is much harder than people think, especially in an age where data is somewhat of a contagion, being created and replicated at such a staggering pace.

To these commenters, I’d like to propose a simple question: without an automated solution, how would MSD’s IT department know which folders were mistakenly open to everyone?

It takes one frustrated person 30 seconds to add “Everyone” to an ACL, but it could take years to find and correct that access control failure.  Worse yet, once found, how do you know whether the over-exposed data was stolen by someone who isn’t as harmless as Keith Ng?

That’s the question New Zealand’s government is facing right now.

What is the state of your data protection?

If you’d like a free data security assessment courtesy of Varonis, please let us know.

The New Privacy Environment: European Union Leads the Way on Personal Data ...

We all understand the risks in accidentally revealing a social security number. But are there other pieces of less identifying or even anonymous information that taken together act like a social security number? The European Union is breaking new ground on consumer privacy as it begins to reform its own regulations. The EU’s broader ideas on personal identity have even made their way across the pond into proposed new US regulations.

The history of the European Union’s consumer privacy and data security regulations begins with its 1995 Data Protection Directive, or Directive 95/46/EC for security wonks. EU directives provide guidance to member nations’ legislatures, which are then free to craft their own specific laws. The DPD has been influential in shaping the vocabulary and, less charitably, the jargon of the consumer privacy discussion on both sides of the Atlantic.

In the US, the starting point for discussion on data security is Sarbanes-Oxley, which became law in 2002. In comparing the two, it’s fair to say the DPD was more focused on securing consumer information, but also more inclusive: unlike SOX, it covers both public and private companies. To this day, there is no single comprehensive consumer privacy law in the US.

The EU’s original directive is significant because it defined personal data as “information relating to an identified or identifiable natural person”. For example, by EU rules, street address, name, and phone number are personal data; height, eye color, and model of car you drive are not. This notion of personal data as a type of key is part of the definition used in privacy laws outside the EU–including the US. In North America, though, we’ve come up with our own term for personal data, calling it instead “personally identifiable information” or PII.

By the way, the EU regulators intentionally created a less explicit definition of personal data so that it would encompass new technologies. In 2012, data related to an identifiable person could now be an email address, IP address, and for some EU nations, even a photo image.

To bring the story up to date, security experts began to realize that along with personal data there was other data—call it quasi-personal—that if released could also be used to relate back to an individual. The data magic to accomplish identification typically requires matching a collection of anonymous data points (birth dates or years, zip codes, ethnicity, and perhaps the car model driven) against publicly available databases.

For example, there are well documented cases involving anonymized hospital discharge records subsequently used to re-identify the original patients!

With Facebook now up to 1 billion active users, it’s fair to say that the Web is overflowing with personal data at all levels of detail. Essentially, social networks have provided hackers—the new ominous player on the scene—with a huge public repository to match against (cf. Matt Honan).

To get a better understanding of how it’s possible to re-identify an individual, let’s review a variation on the aforementioned case. While the technique is not guaranteed to uniquely identify a person (this depends on the available related information), it can often narrow the field to a short list of highly likely subjects.

Suppose, for argument’s sake, a European mortgage company analyzes a health report from a large public hospital. The records show that five individuals were being treated for a rare disease. Their ages were also published. Assuming the patients live near the hospital, the mortgage lender then simply filters its database on zip code and birth year. Working with a smaller set of records, it then scans social media sites or other online forums, filtering on the retrieved names and other data, all the while looking, for say, “get well” messages. If it finds a few matches, and with the additional new data points from the social site … I think you see where this is leading.
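The join at the heart of this scenario can be sketched in a few lines. The field names are hypothetical, but the mechanics are the quasi-identifier matching just described: “anonymized” records joined to a public dataset on birth year and zip code:

```python
def reidentify(anonymous_records, public_records):
    """Match 'anonymized' records against a public dataset on
    quasi-identifiers (birth year and zip code).
    Returns, for each anonymous record index, the candidate names."""
    candidates = {}
    for i, anon in enumerate(anonymous_records):
        matches = [p["name"] for p in public_records
                   if p["birth_year"] == anon["birth_year"]
                   and p["zip"] == anon["zip"]]
        candidates[i] = matches
    return candidates
```

When the quasi-identifiers are rare enough in the public dataset, the candidate list collapses to one name, and the “anonymous” diagnosis attached to that record is anonymous no longer.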

The good news is that the EU countries have long recognized that their laws have not kept pace. And the EU governing body is currently in the process of reforming the 1995 directive, taking into account the new realities of public data on the Web and the blurring of personal and anonymous data. To get a sense of the EU’s new thinking on personal data, refer to this work-in-progress paper.

And there are also rumblings of change in the US along the same lines as the EU reforms.

I’ll be writing more about US laws and what this will all mean for your company’s data protection policies in future posts.

Image credit: Rama