Archive for: December, 2012

Using Varonis: Involving Data Owners (Part I)

(This is one entry in a series of posts about the Varonis Operational Plan – a clear path to data governance.  You can find the whole series here.)

Almost every organization is now data driven. With all the talk about data growth and big data analytics over the past couple of years, people have started to ask: “How do we maximize the value of our data? How can we make sure we’re deriving real business benefit?”

The keys to maximizing the value of your data are to gather the right intelligence about it, and then give the right people the ability to take action using the intelligence you've gathered.

Now that we know who our Data Owners are, it’s time to start getting them involved. Remember that it’s the owners—not IT—that have adequate context to make decisions about who should and shouldn’t have access to their assets.

The next step in operationalizing Varonis is to provide owners with intelligence about their data assets. DatAdvantage can deliver data-driven reports that shed light on what is happening with their data: who can access it, what they're doing with it, which data is stale, etc. Because each owner automatically receives a report covering only the data they own, this greatly simplifies and streamlines the reporting process.

An Example

Say you’ve spent a few weeks identifying and confirming business owners for all of the top-level folders on a large NAS (or two, or three…). Depending on the size of the company, this might be a few dozen or a few thousand people. One of the most common next steps is to provide permissions reports on all of these data sets to the relevant owners. So the HR owner gets a report on all of the users who have access to the HR folder, for instance. It’s the same with Finance, Marketing, R&D, etc. In the past, you would have to create and deliver a separate report for each owner, which depending on the complexity of your reporting process might be an onerous undertaking all by itself. DatAdvantage gives you a far better alternative.

In DatAdvantage, to accomplish the same thing, you'd only need to create a single report, and all owners would get permissions reports once a quarter (or however often you like). Create the report, include the proper filters and formatting, and then set up a data-driven subscription to be delivered on the first day of the first month of each quarter. That's it; you're done.

Every quarter, every data owner is going to get that report in their inbox, and the report will contain information about only the data that they own—they won’t see anything that doesn’t belong to them. As you add and change owners over time, the subscription will continue to work without intervention. If my job role changes and suddenly I’m the owner of additional folders, my permissions report will show those as well. If I’m no longer an owner, my report won’t contain information about what I no longer own.
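
Conceptually, a data-driven subscription works like the sketch below: one report definition, with each owner receiving only their own rows. The function and field names here are illustrative assumptions, not DatAdvantage's actual API.

```python
from collections import defaultdict

def deliver_owner_reports(report_rows, send_email):
    """Split one report definition into per-owner deliveries.
    Each row is (owner_email, folder, details); only an owner's own
    rows end up in the message they receive."""
    per_owner = defaultdict(list)
    for owner, folder, details in report_rows:
        per_owner[owner].append((folder, details))
    for owner, rows in per_owner.items():
        body = "\n".join(f"{folder}: {details}" for folder, details in rows)
        send_email(owner, subject="Quarterly permissions report", body=body)
```

Because ownership is resolved at delivery time, adding or changing owners requires no change to the report definition itself.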

Permissions reporting is a great use case for data-driven reports, but it's not the only one. Reports that show actual access can be useful, too.  What if every data owner could see exactly who on their team was accessing data most? What about those people who weren't accessing any? Or people from outside their team bumbling around?  Who creates content? Showing owners which data is stale or which folders are growing fastest can help them understand how they're using resources. Providing owners intelligence about where their sensitive data is, where it's exposed, and who has been accessing it leads to informed decisions about how they can reduce risk.

Once you’ve started putting intelligence into the hands of your owners, the next step is to give them the power to take action without bugging IT. We’ll cover that next.

Data Brokers: Too Much Information

I keep coming back to the FTC's report on new consumer privacy guidelines issued early in the year. Not only do the guidelines give a sense of the agency's view on online data protection, they also suggest what new legislation may eventually look like.

I bring up the FTC report yet again, because earlier this month, as an end-of-year surprise, it issued an order to several major US information brokers to learn more about their business practices.


In the FTC's words, information or data brokers are "companies that collect personal information about consumers from a variety of public and non-public sources and resell the information to other companies."  Sent to nine data brokers, the FTC order requested specific information on the sources of their data, how the data is maintained, and consumers' ability to access and correct inaccurate information.

It’s no secret that the FTC has its own ideas about how these brokers should be doing their job. In their guidelines, the FTC calls for a voluntary privacy framework that would support several “substantive” principles, which include data security, reasonable collection limits, sound retention practices, and data accuracy.

While these principles apply to all companies that handle consumer data, the FTC sees something special about data brokers. The key point is that consumers don’t have a direct relationship with these companies, and the broker is in the business of selling this data to others.

So what’s at issue here?

Data brokers are good at connecting online public records to quasi-private information trawled from multiple online sources, including website interactions, cookies, and mobile activity, with the goal of creating detailed profiles.

From voter rolls, campaign contribution lists, "anonymous" hospital data, housing sales, mortgage files, and now, apparently, registered gun ownership records, publicly available data alone provides a good starting point for creating a rough sketch. By the way, many of these public records started life as paper documents held in a town hall and were only subsequently digitized. More on this implicit loss of privacy later.

Depending on the data and the computing resources, it's then possible, without too much difficulty, to combine this sketch with other de-identified information and link it, with high likelihood, back to an individual or group, thereby filling in finer details of the consumer portrait.
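
The linkage itself is mechanically simple. Here is a toy sketch, joining a named public dataset to "anonymized" records on quasi-identifiers; all data and field names are invented for illustration.

```python
def reidentify(public_records, deidentified_records):
    """Link 'anonymous' rows back to named individuals by joining on
    quasi-identifiers (ZIP code, birth date, sex)."""
    index = {}
    for rec in public_records:  # e.g. a voter roll, which includes names
        key = (rec["zip"], rec["dob"], rec["sex"])
        index.setdefault(key, []).append(rec["name"])
    matches = []
    for rec in deidentified_records:  # e.g. 'anonymized' hospital data
        key = (rec["zip"], rec["dob"], rec["sex"])
        names = index.get(key, [])
        if len(names) == 1:  # the quasi-identifiers pick out one person
            matches.append((names[0], rec))
    return matches
```

When the quasi-identifier combination is rare, as it often is, the join resolves to a single named individual.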

For example, at least one of the data brokers to which the FTC sent its request had done just that: tying personal data it had collected from Facebook to identifiable data stored in its databases. The broker has since changed its Facebook data gathering policy.

Ideally, the FTC would like to give consumers the right to access the data mined by the brokers, correct it when it’s invalid, and opt-out if necessary. For those following my posts, this approach should appear familiar—it’s very much in the spirit of the EU’s Data Protection Directive.

If we accept the fact that we’ll all have an online profile that is continually extended as more information is made public, then the FTC’s privacy policies are reasonable.

On the other hand, if we want to put the genie partially back in the bottle, we may have to rethink the easy availability of public and governmental records, or at least give more choice to consumers about opting in.

Public records created before the Internet era required a visit to a physical location to view, and it would seem the intention was never to make the data widely and instantly accessible. From what I've read about the gun-ownership map controversy in particular, the public-data privacy question has actually united people on both sides of the gun-law debate, with many agreeing that perhaps we shouldn't be too hasty to webify public records.


A Sneak Peek at DatAnywhere: Enterprise-Class File Synchronization Has Arrived

Last week we showcased DatAnywhere to a hundred attendees on our monthly webinar.  We're delighted by the interest so far, and proud that we are able to help enterprises share and synchronize data in a secure way without relying on third-party servers—in the cloud or otherwise.

  • Want your in-house file servers to have all the functionality of Dropbox? You got it.
  • Need to edit a spreadsheet from your iOS or Android phone while on a plane?  Go for it.
  • Have Windows file servers but users want to use Macs?  No problem.
  • Want to access files on your corporate NAS via a web browser? Check.
  • Anxious to dump drive mappings and sFTP sites?  Now you can.
  • Want to keep all your internal processes (e.g. backups, retention, access control) the same?  Yup.

There are so many more things that DatAnywhere can do to drive up productivity while reducing risk. Check out our short video or request a free trial today.

More Security Wisdom From 2012’s Lesser Known Hacks

In my last post, I wrote about Verizon's impressive annual Data Breach Investigations Report. The DBIR has enough eye-opening data analysis and stats to educate even the savviest IT security guru. While we're waiting for the 2013 report to be released, a good source for real-time breach activity is the Identity Theft Resource Center. Their weekly report and statistical summary of verified data exposures offer another way to gain insight into current IT security practices and the attacker's hack-craft.

In the last few months, the Global Payments and Zappos breaches, along with the rest of the top 5, have received the lion's share of press attention. Using ITRC's lists as a guide, I decided to look beyond the more heavily publicized attacks to see what I could learn.

My first stop was the case involving lost data cartridges belonging to one state’s child services department. The backed-up data contained the names of hundreds of thousands of adults and children. Officials explained that the cartridges were misplaced by their vendors, IBM and Iron Mountain.

On a similar theme, I also learned of another incident in which a US bank lost backup tapes containing—you guessed it—customer names, addresses, social security numbers, and credit card numbers.

What’s going on with backups? Anyway, these two examples serve as reminders that controls and procedures to guard against internal staff errors—with backups apparently requiring special attention—shouldn’t be neglected as you battle against external threats.

Then there was the transfer of almost one million Medicaid claim records hacked from another state's servers. Medicaid data typically contains social security numbers, patient names, and addresses, along with physician names and tax identifiers. The state's health department was ultimately alerted to this medical information theft.

The attackers involved in the Medicaid job were thought to be part of an Eastern European criminal gang. In this exploit, they were able to, in the words of an official, “circumvent the server’s multi-layered security system.”

From what I could piece together, it seems that a new server was put on-line without privileged passwords being reset. No surprises here. This is a valuable object lesson for IT: damaging but preventable data break-ins are often due more to sys admin oversights than to clever hackers or sophisticated malware.

As the Verizon DBIR notes, and I will re-emphasize, it's a good idea to monitor unusual file activity from privileged users in order to catch this very common password mishap.

Finally, there was the October hack of one small college's servers, wherein the personal data of hundreds of thousands of students and employees was snatched up. In this caper, the exposed information included student names, social security numbers, and birthdays, in addition to employee direct-deposit routing and bank account numbers.

Where did the hackers find this data treasure trove? The college president announced that hackers gained access to a folder containing several files of student records. The attack appears to have occurred earlier in the year, and by the time the breach was finally discovered, 50 employees, including the president and faculty members, had reported incidents of identity theft.

One way this particular data removal could have been prevented, or at least minimized, is if IT had procedures in place to hunt down files containing text-based personal data identifiers.  If your own IT group has better things to do than search for social security numbers among thousands or tens of thousands of files, this Varonis blogger has a suggestion.

Varonis' IDU Classification Framework has powerful automated capabilities to perform regular expression searches based on configurable patterns and then notify IT admins when, say, a social security number or bank routing number appears in loosely protected files.
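
As a rough illustration of this kind of pattern-based classification (not the IDU Classification Framework itself), a regular-expression sweep over a folder tree might look like this; the patterns shown are simplistic, and a real engine would validate matches rather than trust a bare regex.

```python
import os
import re

# Illustrative patterns only: real classification rules need validation
# (e.g. checksum tests for routing numbers) to cut false positives.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "routing_number": re.compile(r"\b\d{9}\b"),
}

def scan_tree(root):
    """Walk a folder tree and report (path, label) for every file whose
    text matches one of the configured patterns."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as fh:
                    text = fh.read()
            except OSError:
                continue  # unreadable file; skip rather than abort the scan
            for label, pattern in PATTERNS.items():
                if pattern.search(text):
                    hits.append((path, label))
    return hits
```

The output of a sweep like this is exactly the kind of list an admin would want routed into an alert or report.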

Back to ITRC. After looking at several more exploits in the ITRC summaries, it was becoming clearer to me that the hackers’ modus operandi was similar to what we’ve seen in cases involving better known victims.

The bigger data hauls make the news, but the lessons are still the same.

Image credit: Fry1989

DatAnywhere is now in the App Store

DatAnywhere is the easiest way to set up secure, private file sharing and syncing that runs on your organization's existing servers.  There's no need to move or re-permission data – IT simply installs the DatAnywhere server software on your network and you're set!

Today, we’re happy to announce that DatAnywhere for iPhone and iPad is available in the Apple App Store.  Your team is just a few taps away from accessing the business documents sitting on your company’s internal storage right from their iPhone.

If you're not part of our beta, sign up today for free.

DatAnywhere for iOS

The Biggest Hacks of 2012

With 2012 coming to a close, I decided to take a look back at some of the year’s more significant hacks. Two of the largest heists involved thefts of millions of records of personal data. In March, Global Payments, a credit card processor, revealed a breach in which at least 1.5 million credit card numbers were exported. And the year began when hackers targeted Zappos, the online shoe retailer, and relieved this e-tailer of over 24 million rows of email addresses and other data.

Based on these gigantic incidents, I thought this was the year of the Big Hack and a unique turning point. For perspective, I reviewed two years' worth of Verizon's indispensable Data Breach Investigations Reports. The DBIR is based on data collected from the US Secret Service and the Dutch National High Tech Crime Unit. For 2011, Verizon reported 855 incidents and over 174 million records compromised, the second-highest data loss recorded since Verizon began this study in 2004.

I'm not sure whether 2012 hacking levels will surpass 2011's, and neither year comes close to the 360 million records compromised in 2008. However, other trends seem to have remained relatively constant.

In recent years, the top three industry sectors breached have been hospitality (read: restaurants), retail, and financial services. No surprises here.

Another common theme in the report is that poor authorization monitoring and procedures often broaden the damage done by attackers. Verizon suggests that companies should constantly be on the lookout for new files, especially growing archive and log files, with unusual attribute settings. These often indicate an attack in progress.
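
A crude stand-in for that advice, assuming a filesystem you can walk, might flag large, recently written archive files; thresholds and extensions here are invented for illustration.

```python
import os
import time

ARCHIVE_EXTS = {".zip", ".rar", ".7z", ".gz", ".tar"}

def suspicious_new_files(root, window_seconds=3600, min_bytes=50_000_000):
    """Flag archive files written within the last `window_seconds` that
    exceed `min_bytes` -- a simple proxy for the 'growing archive file'
    indicator of data being staged for exfiltration."""
    now = time.time()
    flagged = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if os.path.splitext(name)[1].lower() not in ARCHIVE_EXTS:
                continue
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            if now - st.st_mtime < window_seconds and st.st_size >= min_bytes:
                flagged.append(path)
    return flagged
```

Run on a schedule, even a check this simple would surface the staging archives attackers build before exporting data.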

The DBIR also tells us that straightforward hacking—using default passwords, stolen login credentials, or backdoor attacks—is still a very effective way to extract protected data.

One revealing stat is that most of the records hacked in the last few years have not involved credit card numbers. The winner in the most-hacked-data category instead goes to plain old PII—name, address, and social security number.

So how do Global Payments and Zappos match up with the overall trends? Depressingly, these two incidents fit like a glove. Financial or retail? Check. External attack? Yes.  Straightforward hack? It seems so, and no malware was involved that we know of.

For both Global Payments and Zappos, the actual exploits used are still a little fuzzy. According to Gartner Research's Avivah Litan, the Global Payments attacker may have been able to get through the company's knowledge-based authentication layer by answering questions correctly. This is still just speculation. Here's what we do know: Global Payments was PCI-DSS compliant. Visa and Mastercard have since revoked their certification.

Zappos, which is also PCI-DSS compliant, kept their credit card numbers encrypted and separated from other personal information. Hackers were not able to access the “PANs”—PCI lingo for the card numbers. Zappos has kept their certification.

The most eye-opening part of Verizon’s DBIR can be found in their conclusions. Not to put too fine a point on this, but companies are simply not making the attackers work very hard. It’s not that they are so clever; it’s that IT has been a bit lax.

Here’s some of their all-too-familiar advice:

  • change default credentials
  • review user accounts on a regular basis
  • restrict and monitor privileged users

On that last point, I’ll quote the actual text from the DBIR:

“Don’t give users more privileges than they need (this is a biggie) and use separation of duties. Make sure they have direction (they know policies and expectations) and supervision (to make sure they adhere to them). Privileged use should be logged and generate messages to management.”

Speaking as a Varonis blogger, I couldn’t have said it better.

Let’s hope some of this advice takes hold, and 2013 will be a more forgettable year in hacking annals.

Top 3 SharePoint Security Challenges

The rapid adoption of SharePoint has outpaced the ability of organizations to control its growth and enforce consistent policies for security and access control. The ease with which SharePoint sites can be created means that SharePoint use is decentralized and often outside the purview of IT departments, security personnel and even dedicated SharePoint administrators.

So what are the top 3 SharePoint security challenges?

1 – Organic and chaotic deployment of SharePoint sites

Pervasive departmental use of SharePoint means that all types of data make their way into SharePoint repositories. This data can range widely in sensitivity and importance, and may easily include human resources or product information. So the problem for organizations becomes not only identifying sensitive data but locating all SharePoint sites, existing and emerging.

2 – Ad hoc, complex permissions administration

The levels and types of permissions available with SharePoint are more complex than their NTFS counterparts, and the additional granularity and inheritance complexity creates more access levels and a high probability for erroneous or overly permissive access.

While access control decisions may be (rightly) left to the data owners through SharePoint’s permissions workflow, the complexity of its implementation often leads to inconsistency in ACL configuration and group assignment. Without strict auditing and oversight, permissions may be set in conflict with enterprise-level access policies, and may not include key business intelligence about why the access should be limited (e.g., content might be regulated or copyright protected).

3 – Limited, resource-intensive auditing

Key to maintaining good access control over data is continuous monitoring of how data is being used. This is another challenge in a SharePoint environment. Microsoft SharePoint audit detail is geared toward helping site administrators manage content, not toward refining access policy. Consequently, there is no easy way for SharePoint administrators to establish which users took what action on data.

The native auditing capabilities are also limited in terms of scalability across sites. “Normalizing” the data, i.e., creating a unified and accurate view of data use and access across sites and locations, is challenging and time-intensive. Exacerbating the problem is that files on SharePoint often make their way to other platforms like file shares and email – without a unified audit trail of activity, understanding how and by whom data is accessed in the collaborative environment can be a significant challenge.
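
To illustrate what "normalizing" means here, the sketch below maps two hypothetical audit formats into a single schema. All field names and source labels are invented; actual SharePoint and file-server audit records differ.

```python
def normalize_event(raw, source):
    """Map platform-specific audit records into one common schema so
    activity on SharePoint, file shares, and email can be reviewed
    together as a single trail."""
    if source == "sharepoint":
        return {"user": raw["UserLogin"],
                "action": raw["Operation"].lower(),
                "object": raw["ItemUrl"],
                "when": raw["OccurredUtc"]}
    if source == "cifs":
        return {"user": raw["account"],
                "action": raw["op"],
                "object": raw["path"],
                "when": raw["timestamp"]}
    raise ValueError(f"unknown audit source: {source}")
```

Once every event lands in the same shape, questions like "who touched this file, on any platform?" become a single query instead of a per-system investigation.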

Download our FREE guide to learn how to make sense of SharePoint permissions & lock down and monitor your sensitive data.

Cloud-based Big Data Also Requires Big Backup

Starting last January, more than a few blog posts have been devoted to the saga of Megaupload, the file sharing or "file locker" service whose web domain was seized by the US government. While the actual circumstances of this case are extreme—criminal asset seizure laws applied to copyright infringement—the effective result, a cloud-based service suddenly going dark and leaving subscribers stranded, is not that uncommon. With the rush to Big Data in the Cloud, companies should also be thinking about Big Backup.

Over the last few years I've been an active user of many SaaS startups, and I've also depended on web hosting services for my own personal projects. During this time, a few of my app providers have shut their doors. When this final step was taken, though, it was usually performed in a graceful (and business-savvy) manner. Subscribers are generally notified weeks or even months before the cables are yanked from the server racks. The user community then has time to get the word out to its members to download files, pictures, code, docs, music, and any other digital content for safekeeping.

While government property seizures are unfortunate and (we hope) a rare event, it’s more likely that customers lose access to data as a result of billing disputes or financial problems, including bankruptcy, on the part of the provider.

On the latter point, in this recent period of intense social media startup activity, investors expect exponential growth, or else they'll turn off the cash flow spigot in an instant—while holding their customers' non-portable data and apps. See, for example, Color, Oink, Gowalla, or Hashable.

So what’s the recourse for cloud customers in these situations?

I decided to take a look at the terms of use of my own web hosting provider and another better-known player in this space. When it comes to liabilities for data loss, the actual legalese—usually somewhere after the Copyright Infringement clause—from your cloud service may read something like this:

It is solely subscriber’s duty and responsibility to backup subscriber’s files and data on our Cloud Service, and under no circumstance will our Cloud Service be liable to anyone for damages of any kind under any legal theory for loss of Subscriber files and/or data on any of our servers


Or you may even see this:

Our Cloud Service reserves the right to freeze or terminate your access to Cloud Servers, or take any other measures deemed to be appropriate (as determined by our Cloud Service in its sole and absolute discretion), at any time and without prior notice, to enforce this Agreement or to ensure the stability of its network.

There are two key points that should be considered when your organization’s data resides on someone else’s disk drives.  First, while many of these cloud services will back their file systems up on an occasional basis, no company should blindly outsource their backup responsibilities. A cloud service is a magical black box in many ways, but sensible backup rules still apply.  Of course, some of this can be worked out in Service Level Agreements.

Second, backing up data using the same cloud service provider opens your company up to the kind of data denial risk this post opened with—the service shutting its doors. If you use another cloud service for the backups, you're merely kicking the risk down the Internet highway.

Along with the risk of data denial, there are complexities involving overall control of data and security protections. In fact, there are enough fine-print details to be worked out that a few law firms have stepped up to provide due diligence advice and negotiating strategies.

If all this is making you nervous about keeping your data with a cloud provider, we certainly understand. Varonis has solutions to help you gain the benefits of the cloud while avoiding the risks of depending on others.

While I won't try to argue against any company using a cloud service, companies should take reasonable precautions to protect their interests and minimize data risks.

Or else they may end up like this business, stuck in the Megaupload legal web and arguing in Federal court for the right to get the sole copy of their data back.

Using Varonis: Who Owns What?

(This is one entry in a series of posts about the Varonis Operational Plan – a clear path to data governance.  You can find the whole series here.)

All organizational data needs an owner. It's that simple, right? I think most of us would be hard pressed to argue against that as a principle—the data itself is an organizational asset, so of course it's not the Help Desk or AD Admin folks who own it, it's the users or business units that should own it. Of course, that's great in theory, but with 1, 5, 10, or even 20 years' worth of shared, unstructured data, figuring out who owns data is far from simple, to say nothing of involving those owners in any meaningful way.

Before we get into using Varonis to locate owners, I want to talk about why finding a single data owner can be such a problem. IT probably knows who owns the Finance folder.  It's the CFO or a delegated steward. Same with HR, Marketing or Legal—these tend to be clearly delineated departmental shares, and it's not hard to figure out whom to go to if we need an informed decision. (Regularly involving those owners in data governance is a different problem, and one I will cover in future posts.)  Identifying owners for these folders is relatively straightforward.

But what happens if you need to find the owner of a folder that has a less obvious name? What if the folder’s name is a project ID, or an acronym of some kind? In my experience, a majority of unstructured data resides in folders that aren’t obviously owned by anyone.

When that happens, IT tends to try a few different things:

  • Check the ACL and see which groups have access. If it’s a single group with an obvious owner, that’s a likely candidate. If the ACL contains many different groups or a global access group like Domain Users, though, this tactic tends to fail.
  • Check the Windows owner under Special Permissions. This metadata can be helpful, but can also be a red herring since it’s often just set to the local Administrator of the server. Even if there’s actually a human user there (who likely created the folder), that value may be outdated or inaccurate.
Special Permissions Dialog

  • Check the owner of files within the folder. Same problems as above.
File Properties Dialog
  • Enable operating system auditing to identify the most active user. Anyone out there excited about turning on file-level auditing in Windows? I have yet to talk to anyone who answers yes, given the performance hit on the server, the storage required, and the expertise needed to parse the logs effectively.
  • Turn off access and see who complains. Not an optimal strategy when it comes to critical data.
  • Email the world and hope for a response. In general, people don’t want to take ownership of something without good reason, since it may mean more work. How confident are you that the proper owners (who may be at a management or director level) are going to know exactly which data sets their teams are using regularly? If they’re not sure, are they going to jump to take responsibility?

So finding owners is hard, let alone finding owners at scale. If you’ve got thousands of unique ACLs and you want owners for all of them (or at least the ones that make sense) you’re going to have to go through some version of this process for each one. It’s no wonder we haven’t done a good job of this over time. Thankfully, there’s a better way.

Step 4: Identify Data Owners

The key difference between attempting to solve this problem manually and attacking it intelligently with Varonis is the DatAdvantage audit trail. A normalized, continuous, non-intrusive audit record of all data access is a key piece of DatAdvantage, and it allows us to actually identify data owners at scale without having to hunt and peck. Once you start gathering usage data and rolling it up into high level stats you can start to see the likely owners of any data set, not just the obvious ones.

DatAdvantage gives you two straightforward ways to get this information: First, we can quickly take a look at a high-level view of a single folder within the Statistics pane of the DatAdvantage GUI. This will show us the most active users of a particular folder. We like to say that at most, you’re one phone call away, since if the most active user isn’t the data owner, they almost certainly know who is.
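
The idea of ranking activity to surface a likely owner can be sketched in a few lines. The event format and exclusion list below are invented for illustration, not DatAdvantage's internals.

```python
from collections import Counter

def likely_owner(audit_events, folder, exclude=()):
    """Return the most active user of a folder, skipping service and IT
    accounts -- a likely owner candidate, or at worst someone one phone
    call away from the real owner."""
    counts = Counter(
        event["user"] for event in audit_events
        if event["folder"] == folder and event["user"] not in exclude
    )
    ranked = counts.most_common()
    return ranked[0][0] if ranked else None
```

The exclusion list matters: without it, backup and admin accounts would dominate the ranking on almost every folder.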

You can operationalize this process even further by creating a statistics report, which can be run on an entire tree or even a server. A single report can show the top users of every unique ACL, and it’s possible to set up advanced filters to make this even more useful—showing only users outside of IT or in a specific OU, for example. You can even add additional properties from AD to the report, showing each user’s department or line manager, if available. None of this is possible without constantly gathering access activity and providing an interface to combine it with other available metadata.

Identifying owners is useful, but actually involving them is where IT can really start to make headway when it comes to ongoing governance. We’ll tackle that next.

Oops, we lost a few terabytes! NBD!

Earlier this week, the Swiss intelligence agency (NDB) warned its US and UK counterparts that it might have lost terabytes of top secret data due to insider theft by a disgruntled IT admin.  Reminds me of this xkcd:

 Chain of Command

We emphasize insider threats and the importance of zero trust all the time at Varonis.  Yes, it’s extremely important to secure the perimeter walls and use data loss prevention to protect endpoints.  But perimeter defense is far more straightforward, if nothing else, than defending against those who appear to be on your team – Kingslayers.

Inside jobs happen over and over again because they’re so hard to stop. According to a Forrester survey in 2010 [1], 43% of data breaches were caused by “trusted” insiders.  Just a few months ago, I wrote about the Zynga employee who, upon leaving the company, felt compelled to take 763 documents—including business plans and other IP—along with him.

So what do we do about it?  The answer is actually in Varonis' mission statement: we ensure that only the right users have access to the right data at all times from any device, that all use is monitored, and that abuse is flagged.

Where do you stand in the battle against insider threats?

Are you alerted when statistical deviations in file system and email activity occur?

We jokingly call this our early resignation detection system since, sometimes, when someone is about to resign, they copy everything they’ve ever worked on.  But the alerting system in DatAdvantage was primarily designed to detect suspicious and potentially harmful behavior.
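
A minimal sketch of that kind of deviation alerting, assuming per-user daily event counts are available (the threshold and data layout are assumptions, not how DatAdvantage actually works):

```python
from statistics import mean, stdev

def flag_deviations(daily_counts, today, threshold=3.0):
    """Flag users whose activity today exceeds mean + threshold * stdev
    of their own history -- the 'copying everything before resigning'
    pattern shows up as a huge spike against a user's own baseline."""
    alerts = []
    for user, history in daily_counts.items():
        if len(history) < 2:
            continue  # not enough history to establish a baseline
        mu, sigma = mean(history), stdev(history)
        if today.get(user, 0) > mu + threshold * max(sigma, 1.0):
            alerts.append(user)
    return alerts
```

Comparing each user to their own baseline, rather than a global one, is what keeps a naturally busy user from generating constant false alarms.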

Are you alerted any time someone is granted admin-level access?

One of the top use cases for DatAdvantage for Directory Services is to always know exactly when someone is given super user rights, who granted it, when, and why.  And perhaps even more importantly, we can see what they’re doing with that access.

Do you know when IT administrators can, and do, access business data?

There's likely no good reason for an IT admin to be rifling through customer records, changing the contents of business data, or deleting files without justification.  If you can say for certain that this isn't even possible, you'll be able to prevent a situation like the NDB's.  Incidentally, one of the core reasons businesses cite for not wanting to move corporate data to the cloud is that they lack visibility into what the cloud provider's IT admins are doing with their sensitive business data at any point in time.

If you’d like a free data protection assessment to find out if your environment is at risk, sign up here.

[1] Source: Forrester, Forrsights Security Survey, Q3 2010


Using Varonis: Which Data Needs Owners?

(This is one entry in a series of posts about the Varonis Operational Plan – a clear path to data governance.  You can find the whole series here.)

Which Data Needs Owners?

In a single terabyte of data there are typically around 50,000 folders or containers, about 5% of which have unique permissions. If IT were to set a goal of assigning an owner for every unique ACL, they’d need to locate owners for 2,500 folders. That’s quite daunting. And most organizations aren’t dealing with a single terabyte of data; in fact, many enterprise installations we encounter are dealing with multiple petabytes of unstructured data. Clearly we need a more surgical approach to assign owners.

Varonis tackled this problem with a longtime customer who needed to identify and assign owners for more than 200 terabytes of CIFS data on their fleet of NetApp filers. There were about 40,000 users in the company, approximately 3,000 of whom (as it turned out) needed to be designated as owners of some data.

When we started taking a close look at specific folders, we discovered that many of them (especially at the top of the hierarchy) simply didn't need an owner; the only users who could read or write data, according to the ACL, were either service accounts or administrative/IT accounts.

What we needed was a methodology for locating the folders where business users had access and a way to identify the likely owner for just those folders. So that’s what we built.

The logic went like this:

  • Identify the topmost unique ACL in a tree where business users have access.
  • If that ACL’s permissions allow write access to users outside of IT, it’s considered a “demarcation point.”
  • For what’s left, identify higher-level demarcation points where non-IT users can only read data.
  • For each demarcation point, identify the most active users.
  • Correlate active users with other metadata, such as department name, payroll code, managed by, etc.
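
A simplified sketch of the first steps of that logic, assuming folder paths mapped to ACL entries, plus two helper predicates that are invented for illustration:

```python
def find_demarcation_points(folders, is_business, can_write):
    """Return (path, kind) for the topmost folders on each branch where
    business (non-IT) users appear on the ACL. `folders` maps a path to
    a list of (user, permission) ACL entries; sorting paths puts parents
    before their children, so ancestors win."""
    points = []
    for path in sorted(folders):
        acl = folders[path]
        if not any(is_business(user) for user, _perm in acl):
            continue  # only service/IT accounts here; no owner needed
        if any(path.startswith(p + "/") for p, _kind in points):
            continue  # an ancestor is already a demarcation point
        kind = "write" if any(is_business(user) and can_write(perm)
                              for user, perm in acl) else "read-only"
        points.append((path, kind))
    return points
```

Each returned path would then be matched with its most active users and other metadata to nominate an owner.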

The end result of this process is that each demarcation point has a likely ownership candidate. For this particular customer, the next step was to go through a survey process to confirm ownership of each demarcation point with the likely owners (as determined by Varonis’ reports). Any data without a confirmed owner was locked down to remove non-IT access and underwent a separate disposition process.

Other customers have since added content classification and other risk factors in order to better prioritize the data ownership assignment process. With a good classification scheme in place, IT is able to start assigning owners to the most critical data first.

The key takeaway from this process is that we can use DatAdvantage to quickly identify the folders that need owners as well as their likely owners, so IT doesn't need to make decisions about 2,500 folders per terabyte of data.

While this report was originally a customization for one customer, we've now baked it right into DatAdvantage as report 12M – Recommended Base Folders.

Now that we know who our owners are, the next step is to start getting them involved. My next few posts will cover exactly how we do this using both DatAdvantage and DataPrivilege.

Stay tuned!

Image credit: gorbould