All posts by Manuel Roldan-Vega

4 Secrets for Archiving Stale Data Efficiently

The mandate to every IT department these days seems to be: “do more with less.” The basic economic concept of scarcity is hitting home for many IT teams, not only in terms of headcount, but storage capacity as well. Teams are being asked to fit a constantly growing stockpile of data into an often-fixed storage infrastructure.

So what can we do given the constraints? The same thing we do when we see our own PC’s hard drive filling up – identify stale, unneeded data and archive or delete it to free up space and dodge the cost of adding new storage.

Stale Data: A Common Problem

A few weeks ago, I had the opportunity to attend VMworld Barcelona and talk to several storage admins. The great majority were concerned about finding an efficient way to identify and archive stale data. Unfortunately, most of the conversations ended with: “Yes, we have lots of stale data, but we don’t have a good way to deal with it.”

The problem is that most of the existing off-the-shelf solutions try to determine what is eligible for archiving based on a file’s “lastmodifieddate” attribute; but this method doesn’t yield accurate results and isn’t very efficient, either.

Why is this?

Automated processes like search indexers, backup tools, and anti-virus programs are known to update this attribute, but we’re only concerned with human user activity.  The only way to know whether humans are modifying data is to track what they’re doing—i.e. gather an audit trail of access activity.  What’s more, if you’re reliant on checking “lastmodifieddate” then, well, you have to actually check it.  This means looking at every single file every time you do a crawl.
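To make the cost concrete, here is a minimal sketch of the naive approach described above: crawl the tree and stat every file on every pass. The threshold and function name are illustrative, not from any particular product.

```python
import os
import time

STALE_AFTER_DAYS = 90  # illustrative cutoff

def naive_stale_scan(root):
    """Naive approach: stat every single file on every crawl.

    Cost grows linearly with file count, and results are noisy because
    indexers, backup tools, and AV can bump the modified timestamp."""
    cutoff = time.time() - STALE_AFTER_DAYS * 86400
    stale = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) < cutoff:
                    stale.append(path)
            except OSError:
                pass  # file vanished mid-crawl
    return stale
```

Even when it finds candidates, this scan cannot tell a human edit from an automated touch, which is exactly the accuracy problem described above.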

With unstructured data growing about 50% year over year and with about 70% of the data becoming stale within 90 days of its creation, the accurate identification of stale data not only represents a huge challenge, but also a massive opportunity to reduce costs.

4 Secrets for Archiving Stale Data Efficiently

For organizations to deal with stale data effectively and comply with defensible disposition requirements, there are 4 secrets to efficiently identifying and cleaning up stale data:

1. The Right Metadata

In order to accurately and efficiently identify stale data, we need to have the right metadata – metadata that reflects the reality of our data, and that can answer questions accurately. It is not only important to know which data hasn’t been used in the last 3 months, but also to know who touched it last, who has access to it, and whether it contains sensitive data. Correlating multiple metadata streams provides the appropriate context so storage admins can make smart, metadata-driven decisions about stale data.

2. An Audit Trail of Human User Activity

We need to understand the behavior of our users, how they access data, what data they access frequently, and what data is never touched. Rather than continually checking the “lastmodifieddate” attribute of every single data container or file, an audit trail gives you a list of known changes by human users.  This audit trail is crucial for quick and accurate scans for stale data, but also proves vital for forensics, behavioral analysis, and help desk use cases (“Hey! Who deleted my file?”).
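The audit-trail approach can be sketched in a few lines. The event schema here is hypothetical (path, user, timestamp, service-account flag); the point is that staleness is derived from known human activity rather than by re-statting every file.

```python
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=90)  # illustrative cutoff

def stale_from_audit(events, now):
    """Derive staleness from an audit trail of access events.

    Each event is a hypothetical (path, user, timestamp, is_service)
    tuple. Service-account noise (indexers, backup agents, AV) is
    filtered out, so only human activity counts."""
    last_human_touch = {}
    for path, user, ts, is_service in events:
        if is_service:
            continue
        if path not in last_human_touch or ts > last_human_touch[path]:
            last_human_touch[path] = ts
    return {p for p, ts in last_human_touch.items() if now - ts > STALE_AFTER}
```

Note that the scan only walks the event log, not the file system, so its cost tracks activity volume rather than total file count.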

3. Granular Data Selection

Not all data is created equal. HR data might have different archiving criteria than Finance data or Legal data. Each distinct data set might require a different set of rules, so it’s important to have as many axes to pivot on as possible. For example, you might need to select data based on its last access date as well as the sensitivity of the content (e.g., PII, PCI, HIPAA) or the profile of the users who use the data most often (C-level vs. help desk).

The capability to be granular when slicing and dicing data, determining with confidence which data will be affected (and how) by a specific operation, will make storage pros’ lives much easier.
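As a sketch of multi-axis selection, the snippet below combines three hypothetical metadata axes (department, age since last access, content sensitivity) into per-department archiving rules. All field and rule names are invented for illustration.

```python
# Hypothetical per-department archiving rules; names are illustrative.
RULES = {
    "HR":      {"max_age_days": 365, "archive_sensitive": False},
    "Finance": {"max_age_days": 180, "archive_sensitive": False},
    "default": {"max_age_days": 90,  "archive_sensitive": True},
}

def select_for_archive(records, rules=RULES):
    """Each record carries several metadata axes; the rule matching its
    department decides whether it is eligible for archiving."""
    selected = []
    for rec in records:
        rule = rules.get(rec["department"], rules["default"])
        if rec["days_since_access"] < rule["max_age_days"]:
            continue  # too recently used for this department's policy
        if rec["sensitive"] and not rule["archive_sensitive"]:
            continue  # sensitive data may need a separate disposition path
        selected.append(rec["path"])
    return selected
```

Adding another axis (say, the profile of the most frequent users) is just another predicate in the loop, which is what makes metadata-driven selection flexible.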

4. Automation

Lastly, there needs to be a way to automate data selection, archival, and deletion. Stale data identification cannot consume more IT resources; otherwise the storage savings diminish. As we mentioned at the start, IT is always trying to do more with less, and intelligent automation is the key. The ability to automatically identify and archive or delete stale data based on metadata makes this a sustainable, efficient task that can save time and money.
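The archival step itself can be sketched as follows: move the selected files under an archive root while preserving their relative layout. This is a generic illustration, not any product’s mechanism.

```python
import os
import shutil

def archive_stale(paths, archive_root):
    """Move stale files under an archive root, preserving their layout.

    Archiving first (rather than deleting) keeps the operation
    reversible if a user turns out to still need a file."""
    moved = []
    for path in paths:
        dest = os.path.join(archive_root, path.lstrip(os.sep))
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        shutil.move(path, dest)
        moved.append(dest)
    return moved
```

Run on a schedule against the output of the selection step, a job like this turns stale-data cleanup into a recurring, hands-off task.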

Interested in how you can save time and money by using automation to turbocharge your stale data identification?  Request a demo of the Varonis Data Transport Engine.

Photo credit: austinevan

Case Study: City of Buffalo

Located in western New York, Buffalo is the second most populous city in the state of New York (after New York City) with a population of 261,310, according to the 2010 census. Its municipal government provides network resources for the 8,000+ employees, encompassing various departments, from its emergency services including the Police and Fire Departments, to its Municipal Housing Authority.

In the words of the City of Buffalo’s systems administrator for network security and communication, the challenge it faced was not dissimilar to that of any other organization: “We have too many windmills and not enough Don Quixotes. As far as our network security went, we were hard on the outside and soft on the inside, and this needed to change.”

The IT team refers to the City of Buffalo’s Active Directory as “a bit of a basement” and says that, since using DatAdvantage, they’ve managed to turn it into a very effective, efficient, and intuitive warehouse. He adds, “The thing with Varonis is it gives you that stand-off capability but then allows you to almost instantly come in and work to a fine gradient of detail that, without the product, would take hours and hours.”

Find out how Varonis® DatAdvantage® is helping the City of Buffalo clean up their permissions and audit access activity efficiently.

Click here to see the complete case study.

 

Case Study: NBC Holdings

NBC Holdings (Pty) Ltd (NBC) is the first black-owned and managed employee benefits company in South Africa. Today NBC is a leading force in the South African employee benefits arena, providing a comprehensive range of employee benefits products and services to 120 registered pension and provident funds, representing the retirement fund savings of more than 350,000 members.

As a financial institution, NBC Holdings needs to closely monitor access to data. When data was moved or deleted it was difficult and time-consuming for the IT department to figure out who moved it, and where. In addition, there were some instances in which it was necessary to provide a record of email messages that were read, sent, or deleted and the IT department required an efficient way to produce this information. (Native Windows and Exchange auditing tools could not provide the granularity NBC required and on their own provided no actionable intelligence or activity analysis).

Further, they wanted to relieve the IT helpdesk of some manual access provisioning tasks, as these were very time-consuming, and the helpdesk often lacked context about the data to make accurate decisions about who should have access.  Even identifying who had access to a particular data-set had been inefficient and resource-intensive. NBC is now able to identify data owners and involve them in the authorization processes through automation.

Find out how Varonis® DatAdvantage® for Windows, Varonis® DatAdvantage® for Exchange and DataPrivilege® helped NBC with their auditing, permissions and data ownership challenges.

Click here to read the complete case study.

 

New Case Study: Western Precooling

Western Precooling was founded in 1942. For nearly 70 years it has been the partner of choice for growers and shippers to get fresh, healthy produce from the field to their customers.

Western Precooling wanted to eliminate possible security concerns due to folders open to global access groups like “Everyone” and “Domain Users.”  These folders would be accessible to the entire organization, and since some of them might contain sensitive information, it was imperative to restrict access only to users who needed it. In addition, Western Precooling wanted to have a more detailed record of access activity.

Brian Paine, Director of IT, began looking for a solution that could clean up excessive permissions and provide granular auditing capabilities. He considered bringing in a team of consultants, but was concerned that this approach wouldn’t allow him to maintain a secure environment after the clean-up process, and a team could not provide the auditing he needed. One of Brian’s concerns was the impact the clean-up process might have on business activity; he needed a solution that would allow him to clean up permissions without affecting the daily operations of the company.

Since Western Precooling is preparing to move several applications and services to the cloud, it was necessary to have permissions in order prior to the migration; it would become a much more difficult problem to fix later on. It was also important to identify stale data so it could be archived instead of migrated. Finally, Brian needed a solution that could support their newly acquired NetApp NAS device.

Varonis DatAdvantage was the long term solution that Brian was looking for.  Varonis gives his team the ability to clean up permissions, audit access activity, identify stale data, and provide support for NetApp. Download the case study to read the complete story.

 

Fixing Access Control without Inciting a Riot

In a previous post, Fixing the Open Shares Problem, we talked about some of the challenges we face when trying to remediate access to open shares. One of the main problems is minimizing the impact these clean-up activities can have on the day-to-day activities of business users.

Think about it: if a global access group is removed from an ACL, the odds are very high that someone who has been using that data will now be restricted. We find ourselves in a catch-22: remediate global access and risk weeks of business disruption as users respond to the problems caused by the “fix.”

IT: “I’m sorry that you’re unable to access your data. We’re working on fixing it now. I assure you, the only reason this happened is because we were trying to make things better.”

Business user: “I totally understand.  Thank you!  You should get a raise!”

(We all know this is not how the conversation goes).

There’s a better way.

Varonis DatAdvantage provides the ability to simulate permission changes and see the probable outcome before you commit those changes to production. How? DatAdvantage correlates every audit event with the permissions on an ACL and then analyzes the impact of each simulated ACL and/or group change. Through this sandbox, IT can identify the users who would have been affected by that change had it already been made: those users who would have called up the help desk screaming that they couldn’t access data they needed.

Once you’ve verified that those users really need access, you can continue to configure the ACL and group members within DatAdvantage to provide them access, and keep simulating until you’re confident that your permission changes will not disturb people’s work. If you have the credentials to make changes, DatAdvantage lets you commit all permission and group changes right through the interface (across all platforms), either immediately or scheduled to hit a change management window later.

These simulation capabilities eliminate the risks of manually cleaning up open shares, since IT is able to fix the problem without ever impacting legitimate use.  Most IT departments have seen the results of trying to solve this problem manually: lots of broken ACLs and annoyed users. It’s a lot of fun to show them a better way.
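The core of the impact analysis described above can be sketched generically: given an ACL, group memberships, and the set of users who actually accessed the data (from the audit trail), removing a group flags the active users who would lose their last access path. The data structures here are illustrative, not DatAdvantage internals.

```python
def simulate_acl_change(acl_groups, group_members, active_users, removed_group):
    """Sketch of simulating the removal of a group from an ACL.

    active_users comes from the audit trail: users who actually touched
    the data. Anyone active who is not covered by a remaining group
    would be cut off by the change."""
    remaining = [g for g in acl_groups if g != removed_group]
    still_allowed = set()
    for g in remaining:
        still_allowed |= group_members.get(g, set())
    return {u for u in active_users if u not in still_allowed}
```

If the returned set is empty, the change is safe to commit; if not, those are the users to provision through other groups before pulling global access.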

You can request a free 1:1 demo of the Varonis suite here or watch our next live demo on the web.

Simulating Cleanup in a Sandbox

 

Exchange Journaling and Diagnostics: How to

Journaling and Diagnostics Logging are services to monitor and audit activity on Microsoft Exchange servers. They provide basic auditing functionality for email activity (e.g. who sent which message to whom) and, if collected and analyzed, may help organizations answer basic questions about email, as well as comply with policies and regulations. (Note: Varonis DatAdvantage for Exchange does not require journaling or diagnostics to monitor Exchange activity.)

Journaling records email communication traffic and processes messages on the Hub Transport servers. The information collected by the journaling agent can be viewed through journaling reports, which include the original message with all the attachments.

Diagnostics writes additional activities to the event log (visible in Windows Event Viewer), such as “message sent as” and “message sent on behalf of” actions. Diagnostics can be configured through the Manage Diagnostics Logging Properties window in the Exchange Management Console.

Journaling and Diagnostics Logging collect a significant number of events and generate a large amount of raw log data, so it is critical to plan which mailboxes and messages will be monitored and to allocate additional storage before enabling them.

Here are the steps to enable Journaling and Diagnostics in your Exchange Server.

Setting up Journaling in Exchange

There are two types of Journaling: standard and premium. Standard provides journaling of all the messages sent and received from mailboxes on a specified mailbox database, while premium provides the ability to journal individual recipients by using journaling rules.

Here are the high-level steps to set up journaling on your Exchange server:

  1. First, create a journaling mailbox. This mailbox will be configured to collect all the journaling reports, and should ideally be set up with no storage limits to avoid missing any. The process to create the mailbox is:
    1. Select a different OU than the default
    2. Assign a display name
    3. Assign a user logon name (which the user will use to log in to this mailbox)
    4. Set up a password; take into account that journaling mailboxes may contain sensitive information, as a copy of the message is stored with the report.
  2. To enable standard Journaling it is necessary to modify the properties of the mailbox database. Under the Organization Configuration/Mailbox/Database Management/Maintenance tab, you will need to specify the journaling mailbox where you want the journaling reports sent.
  3. Premium Journaling requires an Exchange Enterprise client access license (CAL). To set up premium journaling, it is necessary to create journal rules, which are used to set up journaling for specific recipients. Using the EMC (Exchange Management Console), journal rules can be created under the Hub Transport section of the Organization Configuration, on the Journal Rules tab. The fields to configure a journal rule are the following:
    1. Name
    2. Send reports to email
    3. Scope
      • Global – all messages through the Hub transport
      • Internal – messages sent and received by users in the organization
      • External – messages sent to or from recipients outside the organization
    4. Journal messages for recipient – journal messages sent to or from a specific recipient
    5. Enable rule – checkbox

Make sure the status on the completion page is “Completed” to verify that the rule was created successfully.

Setting up Diagnostics in Exchange

Diagnostics logging is configured separately for each service on each server. The steps to configure diagnostics logging are:

  1. In the Exchange Management Console (EMC), click on Server Configuration.
  2. Right-click on an Exchange server to enable Diagnostics Logging on it.
  3. Click on Manage Diagnostics Logging Properties.
  4. On the Manage Diagnostics Logging window, select the services you want to enable diagnostics for.
  5. Choose the level of diagnostics you would like on that service.
    • Lowest – log only critical events
    • Low – log only events with logging level 1 or lower
    • Medium – log events with logging level 3 or lower
    • High – log events with logging level 5 or lower
    • Expert – log events with logging level 7 or lower
  6. Click on Configure. The system will provide a confirmation screen.

In a future post, we will go over the Mailbox Audit Logging in MS Exchange 2010.

New Case Study: Greenhill & Co.

Greenhill & Co., Inc. is a leading independent investment bank. The company was established in 1996 by Robert F. Greenhill, the former President of Morgan Stanley and former Chairman and Chief Executive Officer of Smith Barney.

As the CIO of Greenhill & Co., Inc., John Shaffer sought a data governance solution that could provide visibility into employee access rights, and identify potential issues. Additionally, the company needed a more efficient way to determine when content was moved or deleted, how it was being used, and by whom.

Greenhill & Co. found that trying to manually manage and protect information for a company with global reach was often time-consuming, ineffective, and error-prone. The CIO and his team needed automated analysis of the organization’s permissions structure to more efficiently determine which files and folders required owners and who those owners were likely to be. Identifying likely data owners required the ability to analyze actual access activity.

Further, Greenhill required a system to manage access and permissions to sensitive data.

“We liked DatAdvantage because it told us right away the access rights that certain folders had, which people had access to those folders, where the content was moving to, and if that access should be tightened.”

Click here to read the whole case study.

Data Governance Made Easier: Version 5.7 is now GA

Version 5.7 of the Varonis® Data Governance Suite® has been officially released. This version includes enhancements for Varonis DatAdvantage® and Varonis DataPrivilege®, as well as a brand new product, DatAdvantage for Directory Services®. Almost all new features and enhancements came straight from our customers, so we would like to say: thank you!

Some of the new features and enhancements to Varonis DatAdvantage® version 5.7 include:

  • Reports template wizard: customize the content and look of your reports
  • Flags, tags, and notes: create your own metadata on folders, files, groups, and users
  • Easy change reporting for data owners: automatically receive reports on changes to your folders & groups
  • Demarcation Report: report on folders that need owners based on their permissions and place in the hierarchy, and who those owners are likely to be
  • Support for HP IBRIX X9000 NAS Systems

DatAdvantage for Directory Services® provides new capabilities to audit and monitor Active Directory:

  • View domain and domain objects in DatAdvantage GUI
  • Analyze Organizational Units and other AD objects
  • Augment auditing of changes to AD objects

These new functionalities are viewable in the DatAdvantage® GUI, providing a complete picture of your environment from a single interface.

In version 5.7 of Varonis DataPrivilege®, new features include:

  • Create folders from the DataPrivilege interface for easy collaboration
  • Dynamically assign first authorizer for permission and group membership requests (“Authorizer 0”)
  • “Locations” for groups, adding hierarchical organization to large group structures
  • New reports for data owners

Request a demo or request a 30-day free trial of Varonis® Data Governance Suite version 5.7. Customers may contact support@varonis.com for assistance with upgrading.

 

Case Study: Matanuska Telephone Association

Matanuska Telephone Association (MTA) is a co-operative telecommunications service provider that offers its members local telephone services, high-speed Internet access, wireless phone service, digital television and managed business services.

Like many organizations, there were occasions when MTA’s employees would inadvertently move, rename, or delete files. Finn Rye, MTA’s Information Security Officer, and his team would try to locate or recover the information. The hours spent manually tracking down data were significant, which meant that Rye’s team was often unable to attend to other, more pressing matters.

Further, for internal compliance requirements, MTA’s Performance Integrity office mandates that Rye’s team be able to verify who has access to which data and what files those individuals actually access.

MTA recently deployed Varonis® DatAdvantage® for Windows. DatAdvantage provides a complete, searchable, and sortable audit trail, which includes “delete” events on files and folders. The audit trail gave Rye’s team the ability to find deleted or moved files and to determine how it happened.

“Without DatAdvantage®, we simply weren’t able to do the investigation or incident responses we can now,” Rye said.

Rye’s team has configured automatic alerts and reports to obtain the visibility and control they needed, fulfilling their compliance requirements. Now they can identify sensitive files and folders, and determine who should and should not have access to them.

“It was virtually impossible before Varonis®,” he said. “We just didn’t have the logging capacity or a way to search in an efficient manner.”

Varonis® DatAdvantage® for Windows gave MTA the ability to analyze and audit access, visibility into their permissions structure, and actionable intelligence on how to remediate excessive permissions; this is why MTA chose Varonis. To read the complete case study, click here.

Finn Rye is MTA’s information security officer – his department oversees the company’s information security initiatives for MTA’s 400+ full-time employees.

Thoughts on the 2011 Data Breach Investigations Report

While reading through the 2011 Data Breach Investigations Report, there were two things that caught my attention:

The first one is that approximately 83% of the data breach attacks are considered “opportunistic.” According to the report, “the victim was identified because they exhibited a weakness or vulnerability that the attacker could exploit.” In other words, the attack took place because the attacker noticed a weakness—if that weakness had not existed or had not been noticed, the attack would not have been conceived, or the attacker would have moved on to an easier target.

The second is that the ones taking advantage of these weaknesses are often an organization’s own employees. The report mentions that “it is regular employees and end-users – not highly trusted ones – who are behind the majority of data compromises. This is a good time to remember that users need not be super users to make off with sensitive and/or valuable data.” Contrary to what most of us might think, we don’t always face specialized criminals attacking our organizations. Regular users are responsible for many of the attacks; employees who are tempted after discovering that they have access to valuable information.

Putting these two things together, it makes sense that a primary area of risk is where employees have access to valuable data, and where access is too permissive. Many organizations are already looking for sensitive, valuable data (e.g. with data classification technologies). More recently, organizations are starting to look for better context awareness, linking content with permissions, activity, and ownership information to identify significant exposures, and accelerate data protection and remediation efforts.

In our next post, we’ll discuss how you can use metadata framework technology to identify users that might be looking for weaknesses in your environment.