All posts by Manuel Roldan-Vega

How DatAdvantage Helps With Virus Recovery

During my conversations with our customers, it is always great to hear how they are leveraging Varonis to support their data governance initiatives. It is even better when we hear about scenarios that reach outside their original use-cases, like recovering from a virus. Today we are sharing a story from a customer who was recently the victim of a variation of the CryptoLocker virus, and was able to use Varonis DatAdvantage to minimize the recovery time.

One of the key features of Varonis DatAdvantage is a complete audit trail of access activity. DatAdvantage collects every access event (e.g. open, read, write, modify, delete) on monitored file and email servers without requiring native auditing, and presents them in a searchable and sortable interface. In this particular case, the audit trail was the feature that helped our customer reduce the virus recovery time.

The Issue

Here’s how our customer described the situation:

“My Windows server admin notified me that there were several users complaining that their files were corrupted, and asked me if I could look into it. Using the Varonis DatAdvantage audit trail, I could identify all the users that had accessed the corrupted files. While investigating several files, I was able to identify a common user between them.  Within DatAdvantage I ran a query on that specific user and realized that there were over 400,000 access events that had been generated from that user’s account. It was at that point that we knew it was a virus.”

They looked at the websites visited by this user, and with another tool, they were able to identify a second user account that had also accessed the same website.  The second user’s machine was also infected with the same virus.

“Once we had identified the second user, we went back to DatAdvantage to identify the files they had accessed. There were over 200,000 access events generated from this user’s account.”

Recovery

Once the virus was identified and removed, they had to recover the corrupted data from backups. Since they were able to use DatAdvantage to identify the files that were accessed (and corrupted) by these specific users, they were able to pinpoint and restore those specific files, rather than having to restore the entire server from a snapshot.

The fact that they were able to quickly identify which files had been corrupted helped them reduce the impact of the virus on the environment and the downtime for the users. In addition, it allowed them to maximize their time and resources by only having to restore the data that was affected.
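
DatAdvantage surfaces all of this through its searchable interface, but it may help to see the shape of the triage in code. Here is a minimal Python sketch, assuming the audit trail could be exported as (user, action, path) records; the record format, threshold, and function names are illustrative assumptions, not the product’s API:

```python
from collections import Counter

def suspect_accounts(events, threshold=100_000):
    """Flag accounts whose event volume is far beyond normal interactive
    use (the customer saw 400,000+ events from a single account)."""
    counts = Counter(user for user, _action, _path in events)
    return {user for user, n in counts.items() if n >= threshold}

def files_to_restore(events, suspects):
    """List the distinct files the suspect accounts changed -- the
    candidate list for a targeted restore instead of a full snapshot."""
    return sorted({path for user, action, path in events
                   if user in suspects
                   and action in ("write", "modify", "delete")})
```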

Next Day Checkup

DatAdvantage also provides daily reports on anomalous behavior. The next morning, our customer reviewed this report and was able to confirm that there were no other user accounts generating excessive amounts of access events.

To see how Varonis is helping other organizations, visit our Customer Success Stories page.

Also, our brand new product, DatAlert, can help you identify potentially malicious activity in real time. Visit our DatAlert product page to learn more.

Image credit (cc): http://www.flickr.com/photos/hj_barraza/415134620

5 Steps to Get Data Owners Started

During a recent conversation a customer asked if we had a Getting Started Guide for Data Owners. After using Varonis to identify and assign owners, one of the new data owners asked, “What am I supposed to do now? What do data owners do?”  In order to help him—and anyone else in this situation—I created 5 high-level steps business users can follow to get started as a data owner.

Step 1: Take inventory of your data and confirm ownership

One of the first things data owners should do is review the data for which they are responsible; IT should provide them a report listing all the folders, SharePoint sites, etc. that they own. Owners should carefully review this report and confirm with IT that they are, in fact, the correct owners of this data. It is also important that they understand which, if any, of these folders contain sensitive data, which folders are open to other groups in the organization, and which teams they expect to collaborate with.

Once they have reviewed their data assets, they will be able to start governing and protecting their data effectively. In addition, they should determine if other users will need to be involved in the authorization process (delegated “authorizers” for specific folders), and coordinate with them on how access requests will be processed.

Step 2: Review permissions/users with access

Once they’ve confirmed ownership and they understand the types of data contained in these folders, the next step is to perform an initial Entitlement Review.  This can be done either manually, with IT-provided lists of people to review, or with automated solutions like Varonis DatAdvantage and DataPrivilege.

During an initial entitlement review, data owners will review which users have access to which data and make decisions about which users should be removed or added. Solutions that provide automated entitlement reviews, like DataPrivilege, automate this task end to end, providing actionable information to data owners (e.g., recommendations based on access activity and cluster analysis) and effecting changes to the appropriate ACLs and groups without IT intervention.

It is important that this step be carefully performed, whether manual or automated, as this will be the first step in cleaning up excess access and ensuring that only the right people have access to data.
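
DataPrivilege generates these recommendations automatically; purely to illustrate the underlying idea, here is a minimal Python sketch of one common recommendation, flagging users who hold access but haven’t used it recently (the data shapes, names, and 90-day cutoff are assumptions for illustration):

```python
from datetime import datetime, timedelta

def revocation_candidates(acl_users, folder_events, days=90, now=None):
    """Recommend removing users who have access to a folder but have
    not touched it in `days` days.
    folder_events: iterable of (user, datetime) access records."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=days)
    last_access = {}
    for user, ts in folder_events:
        if user not in last_access or ts > last_access[user]:
            last_access[user] = ts
    return [u for u in acl_users
            if u not in last_access or last_access[u] < cutoff]
```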

Step 3: Ensure all requests are processed for the appropriate reasons

Once owners have performed their initial review, they should now be in “maintenance mode,” and ongoing data ownership activities shouldn’t take much time; they’ll mostly need to approve or decline access requests as they come up, either with an automated solution (like DataPrivilege) or through a manual process. As a best practice, every access request should ask the requestor to enter a reason for requesting access, either selected from a menu of legitimate reasons or manually entered.

Data owners should consider access requests carefully, especially when the data they’re managing is sensitive:

  • What data are they requesting access to?
  • If I grant access, is there anything in that folder that they should treat as confidential?
  • Should access be granted permanently, or temporarily?
  • If access should be granted temporarily, how will we remember to revoke it? (Manual process, or with automation like DataPrivilege; see the sketch below)
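
On that last point, here is a minimal Python sketch of how temporary grants might be tracked and expired automatically; the record format and field names are assumptions for illustration, not DataPrivilege’s API:

```python
from datetime import datetime

def expired_grants(grants, now=None):
    """Return temporary grants that are past their expiry and should
    be revoked. Each grant is a dict:
    {"user": str, "folder": str, "expires": datetime or None},
    where None means permanent access."""
    now = now or datetime.now()
    return [g for g in grants
            if g["expires"] is not None and g["expires"] <= now]

# A nightly job could feed this list to whatever process removes the
# corresponding ACL entries.
```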

Step 4: Do periodic entitlement reviews

On a regular basis—once a quarter, every 6 months, etc.—IT should require owners to complete an attestation, or entitlement review. This will ensure data owners review any changes or new recommendations made since their last review and ensure that organizational changes have not granted unwarranted access. Owners should have the option to specify where access should be restricted or stay the same, and a record of their decisions should be kept. Entitlement reviews help organizations efficiently maintain a least privilege model.

Step 5: Review access statistics on your data

If available, data owners should have the ability to access a dashboard which includes permissions and access activity relevant to their data, as with DataPrivilege’s Self-Service Portal. Data owners can make better decisions if they are able to see who is accessing their data, which folders are most accessed, least accessed, or stale, and who is accessing folders that hold sensitive data.
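
Where no dashboard exists, even a crude aggregation of audit records answers the most common questions. A minimal Python sketch, assuming access events are available as (user, folder) records (an illustrative format, not a specific product export):

```python
from collections import Counter

def folder_activity(events, all_folders):
    """Summarize per-folder activity: the most-accessed folders and
    the folders no one has touched at all (candidates for review)."""
    counts = Counter(folder for _user, folder in events)
    untouched = [f for f in all_folders if f not in counts]
    return {"most_accessed": counts.most_common(10),
            "untouched": untouched}
```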

Conclusion

While there are a lot more details on data ownership, we hope this list provides a starting point for Data Owners on how to govern their data effectively. For more information you can visit our collection of blogs on data ownership or download our whitepapers from our resource center.

Image credit: Electron

4 Secrets for Archiving Stale Data Efficiently

The mandate to every IT department these days seems to be: “do more with less.”  The basic economic concept of scarcity is hitting home for many IT teams, not only in terms of headcount, but storage capacity as well.  Teams are being asked to fit a constantly growing stockpile of data into an often-fixed storage infrastructure.

So what can we do given the constraints? The same thing we do when we see our own PC’s hard drive filling up – identify stale, unneeded data and archive or delete it to free up space and dodge the cost of adding new storage.

Stale Data: A Common Problem

A few weeks ago, I had the opportunity to attend VMworld Barcelona and talk to several storage admins. The great majority were concerned about finding an efficient way to identify and archive stale data.  Unfortunately, most of the conversations ended with: “Yes, we have lots of stale data, but we don’t have a good way to deal with it.”

The problem is that most of the existing off-the-shelf solutions try to determine what is eligible for archiving based on a file’s “last modified date” attribute; but this method doesn’t yield accurate results and isn’t very efficient, either.

Why is this?

Automated processes like search indexers, backup tools, and anti-virus programs are known to update this attribute, but we’re only concerned with human user activity.  The only way to know whether humans are modifying data is to track what they’re doing—i.e. gather an audit trail of access activity.  What’s more, if you’re reliant on checking the “last modified date” then, well, you have to actually check it.  This means looking at every single file every time you do a crawl.
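
To make the cost concrete, here is roughly what the naive crawl looks like in Python; this is a sketch, but note that every file must be stat-ed on every pass, and nothing distinguishes a human edit from a backup agent touching the file:

```python
import os

def stale_by_mtime(root, cutoff_epoch):
    """Naive staleness check: walk the whole tree and compare each
    file's last-modified time to a cutoff (seconds since the epoch)."""
    stale = []
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_mtime < cutoff_epoch:
                    stale.append(path)
            except OSError:
                pass  # file vanished or is inaccessible mid-crawl
    return stale
```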

With unstructured data growing about 50% year over year and with about 70% of the data becoming stale within 90 days of its creation, the accurate identification of stale data not only represents a huge challenge, but also a massive opportunity to reduce costs.

4 Secrets for Archiving Stale Data Efficiently

For organizations seeking an effective way to deal with stale data and comply with defensible disposition requirements, here are 4 secrets to identifying and cleaning up stale data efficiently:

1. The Right Metadata

In order to accurately and efficiently identify stale data, we need to have the right metadata – metadata that reflects the reality of our data, and that can answer questions accurately. It is not only important to know which data hasn’t been used in the last 3 months, but also to know who touched it last, who has access to it, and whether it contains sensitive data. Correlating multiple metadata streams provides the appropriate context so storage admins can make smart, metadata-driven decisions about stale data.

2. An Audit Trail of Human User Activity

We need to understand the behavior of our users, how they access data, what data they access frequently, and what data is never touched. Rather than continually checking the “last modified date” attribute of every single data container or file, an audit trail gives you a list of known changes by human users.  This audit trail is crucial for quick and accurate scans for stale data, but also proves vital for forensics, behavioral analysis, and help desk use cases (“Hey! Who deleted my file?”).
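
Contrast that with the audit-trail approach: assuming human-only access events could be exported as (path, timestamp) records (an illustrative format, not a specific product export), staleness detection becomes a single pass over known events, with no file-system crawl at all:

```python
def stale_by_audit_trail(all_files, human_events, cutoff):
    """A file is stale if no human event touched it after the cutoff.
    human_events: iterable of (path, datetime) records.
    cutoff: datetime before which activity counts as stale."""
    last_touch = {}
    for path, ts in human_events:
        if path not in last_touch or ts > last_touch[path]:
            last_touch[path] = ts
    return [f for f in all_files
            if f not in last_touch or last_touch[f] < cutoff]
```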

3. Granular Data Selection

Not all data is created equal.  HR data might have different archiving criteria than Finance data or Legal data.  Each distinct data set might require a different set of rules, so it’s important to have as many axes to pivot on as possible.  For example, you might need to select data based on its last access date as well as the sensitivity of the content (e.g., PII, PCI, HIPAA) or the profile of the users who use the data most often (C-level vs. help desk).

The capability to be granular when slicing and dicing data to determine, with confidence, which data will be affected (and how) by a specific operation will make storage pros’ lives much easier.
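
In code, granular selection amounts to combining several metadata predicates into one decision. A minimal Python sketch, where the field names (idle_days, sensitive, dept) are hypothetical stand-ins for whatever metadata your environment actually exposes:

```python
def select_for_archive(files, min_idle_days=90, allow_sensitive=False,
                       excluded_depts=("HR", "Legal")):
    """Pick archive candidates by combining staleness, content
    sensitivity, and the owning department into a single rule.
    Each file is a dict with idle_days, sensitive (bool), and dept."""
    return [f for f in files
            if f["idle_days"] >= min_idle_days
            and (allow_sensitive or not f["sensitive"])
            and f["dept"] not in excluded_depts]
```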

4. Automation

Lastly, there needs to be a way to automate data selection, archival, and deletion. Stale data identification cannot consume more IT resources; otherwise the storage savings diminish.  As we mentioned at the start, IT is always trying to do more with less, and intelligent automation is the key. The ability to automatically identify and archive or delete stale data based on metadata makes this a sustainable, efficient task that can save time and money.

Interested in how you can save time and money by using automation to turbocharge your stale data identification?  Request a demo of the Varonis Data Transport Engine.

Photo credit: austinevan

5 Step Guide to Reducing the #1 Data Security Risk

Last week I had the opportunity to attend an event on 3rd party data security and risk. Throughout the event, I talked with folks from many different industries and in many different roles.  I spoke with auditors, general IT managers, storage administrators, CIOs, and of course, security professionals.

What is the Top Priority for Reducing Risk?

Everyone shared one common concern:

How can we reduce risk and protect our clients’ data?

One executive was asked, “Which area would you consider your number one priority for reducing risk?”  His decisive answer was that, of all the areas of risk his massive enterprise faces, priority number one is unstructured data security.

This shocked me a bit at first, but when you think about it, it makes perfect sense.  According to Gartner, unstructured data accounts for more than 80% of all organizational data, and it’s growing approximately 50% every year.

Even data that is normally stored in databases or apps is regularly being dumped into spreadsheets for analysis, PowerPoint slides for presentations, PDFs for reading, and email for sharing between teams.

When you think about it this way, it becomes very easy to see why unstructured data is the highest risk area for many IT departments.

Compliance and Regulations

In addition to the intrinsic motivation for securing unstructured data, external regulations such as SOX, HIPAA, and PCI are forcing organizations to put processes in place to ensure the protection of 3rd party data.  Unfortunately, most organizations don’t have an efficient and affordable way to put these controls in place and prove that they’re being enforced.

An auditor I spoke with mentioned how difficult and time-consuming it is to perform attestations, and how, for most companies, entitlement reviews are manual and painful processes that don’t really accomplish the end goal of protecting data.

Where Do We Begin?  A 5 Step Guide

If you are trying to start a risk management project in your organization, here are some actionable ideas on what to focus on:

1. Identify your most valuable assets

All 3rd party data is valuable.  Our clients trust us to manage and protect all of it.  But it is critical to pick a starting point.  To do this, talk with data owners and key stakeholders to find out which types of data are the most sensitive or most valuable.

2. Locate your most valuable assets

You can’t protect sensitive data if you don’t know where it resides.  Is it in the CEO’s mailbox?  Is it propagated across all your Windows file servers and NAS devices?  In order to do this at scale, you’ll need a data classification framework that can scan files on your network for sensitive content indicators.
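
At its core, such a framework matches content against indicator patterns. The toy Python sketch below shows the idea only; production classification frameworks use far more robust patterns plus validation (e.g., Luhn checks for card numbers, proximity rules, dictionaries):

```python
import re

# Toy sensitive-content indicators, for illustration only.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b"),
}

def classify(text):
    """Return the sensitive-content indicators found in a document."""
    return [label for label, rx in PATTERNS.items() if rx.search(text)]
```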

3. Identify where sensitive data is overexposed

You probably found a ton of high value data in step #2.  Now you have to figure out who can access that data and prioritize data sets that are wide-open to everyone.

Many of us, when we move to a new home, tend to change the locks. Why? Because we don’t know who has had a key in the past: previous owners, realtors, builders?  This represents a big risk for us and our families.

The same principle applies with 3rd party data.  We need to identify who can access it, and what type of access they have. Then we can identify which data is overexposed, and where permissions need to be tightened and owners assigned.
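
Conceptually, the overexposure check is a set intersection between each folder’s ACL and the well-known global groups. A minimal Python sketch, assuming permissions have been collected into simple records (the shapes are assumptions for illustration):

```python
GLOBAL_GROUPS = {"Everyone", "Domain Users", "Authenticated Users"}

def overexposed(folders):
    """Flag folders whose ACL grants access to a global group.
    Each folder: {"path": str, "acl": set of group/user names}."""
    return [f["path"] for f in folders if f["acl"] & GLOBAL_GROUPS]
```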

4. Monitor Data Access

As my good friend @rsobers says: Context is king. Part of reducing risk is monitoring who is actually accessing the data and what they are doing with it. If we’re constantly monitoring access, we can identify patterns in user behavior and alert when suspicious activity occurs. And if we store the audit data intelligently, we can use it for forensics, help desk, and stale data identification.
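
One simple form of that alerting is a per-user baseline: compare today’s activity to the user’s own history. A sketch only; real products use much richer behavioral models than a mean-plus-deviations threshold:

```python
from statistics import mean, stdev

def is_suspicious(todays_count, daily_history, k=5):
    """Alert when a user's event count for the day blows past their
    own baseline: mean + k standard deviations over prior days."""
    if len(daily_history) < 2:
        return False  # not enough history to form a baseline
    return todays_count > mean(daily_history) + k * stdev(daily_history)
```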

5. Use Automation

Are you ready to implement steps 1-4?  Do you have an army of IT staff with nothing planned for the next 50 years?  Luckily, that won’t be needed.  You can use automation to identify the most critical data, understand who can access it, and monitor what they’re actually doing with it.

By leveraging automation to provide your security intelligence dashboard, you can spot problems and then use automation (again) to simulate changes and automatically execute the remediation.

There you have it!  Go forth and protect your customers’ data!  Oh, and by the way, there’s a 6th step that doesn’t require IT involvement at all. Ask us about it.

Are you curious to see how your company measures up?  Get a free data protection assessment.  We’ll scan your infrastructure for holes and help you plug them with automated data protection and management software from Varonis.

Photo credit: http://www.flickr.com/photos/fayjo/

Case Study: City of Buffalo

Located in western New York, Buffalo is the second most populous city in the state of New York (after New York City), with a population of 261,310 according to the 2010 census. Its municipal government provides network resources for its 8,000+ employees across various departments, from emergency services, including the Police and Fire Departments, to its Municipal Housing Authority.

In the words of the City of Buffalo’s systems administrator for network security and communication, the challenge it faced was not dissimilar to that of any other organization: “We have too many windmills and not enough Don Quixotes. As far as our network security went, we were hard on the outside and soft on the inside, and this needed to change.”

The IT team refers to the City of Buffalo’s Active Directory as “a bit of a basement” that, since using DatAdvantage, they’ve managed to turn into a very effective, efficient and intuitive warehouse. He adds, “The thing with Varonis is it gives you that stand-off capability but then allows you to almost instantly come in and work to a fine gradient of detail that, without the product, would take hours and hours.”

Find out how Varonis® DatAdvantage® is helping the City of Buffalo clean up their permissions and audit access activity efficiently.

Click here to see the complete case study.


Case Study: NBC Holdings

NBC Holdings (Pty) Ltd (NBC) is the first black-owned and managed employee benefits company in South Africa. Today NBC is a leading force in the South African employee benefits arena, providing a comprehensive range of employee benefits products and services to 120 registered pension and provident funds, representing the retirement fund savings of more than 350,000 members.

As a financial institution, NBC Holdings needs to closely monitor access to data. When data was moved or deleted it was difficult and time-consuming for the IT department to figure out who moved it, and where. In addition, there were some instances in which it was necessary to provide a record of email messages that were read, sent, or deleted and the IT department required an efficient way to produce this information. (Native Windows and Exchange auditing tools could not provide the granularity NBC required and on their own provided no actionable intelligence or activity analysis).

Further, they wanted to relieve the IT helpdesk of some manual access provisioning tasks, as these were very time-consuming, and the helpdesk often lacked context about the data to make accurate decisions about who should have access.  Even identifying who had access to a particular data set had been inefficient and resource-intensive. NBC is now able to identify data owners and involve them in the authorization process through automation.

Find out how Varonis® DatAdvantage® for Windows, Varonis® DatAdvantage® for Exchange and DataPrivilege® helped NBC with their auditing, permissions and data ownership challenges.

Click here to read the complete case study.


New Case Study: Western Precooling

Western Precooling was founded in 1942. For nearly 70 years it has been the partner of choice for growers and shippers to get fresh, healthy produce from the field to their customers.

Western Precooling wanted to eliminate possible security concerns due to folders open to global access groups like “Everyone” and “Domain Users.”  These folders would be accessible to the entire organization, and since some of them might contain sensitive information, it was imperative to restrict access only to users who needed it. In addition, Western Precooling wanted to have a more detailed record of access activity.

Brian Paine, Director of IT, began looking for a solution that could clean up excessive permissions and provide granular auditing capabilities. He considered bringing in a team of consultants, but was concerned that this approach wouldn’t allow him to maintain a secure environment after the clean-up process, and a team could not provide the auditing he needed. One of Brian’s concerns was the impact the clean-up process might have on business activity; he needed a solution that would allow him to clean up permissions without affecting the daily operations of the company.

Since Western Precooling is preparing to move several applications and services to the cloud, it was necessary to have permissions in order prior to the migration; it would become a much more difficult problem to fix later on. It was also important to identify stale data so it could be archived instead of migrated. Finally, Brian needed a solution that could support their newly acquired NetApp NAS device.

Varonis DatAdvantage was the long term solution that Brian was looking for.  Varonis gives his team the ability to clean up permissions, audit access activity, identify stale data, and provide support for NetApp. Download the case study to read the complete story.


Varonis Demonstration of Metadata Framework Automation

Last week during our monthly webinar, David Gibson, VP of Marketing, and Matt Gilbo, Director of West Coast Sales, gave a demonstration of how Varonis can help organizations effectively protect their unstructured and semi-structured data (documents, images, spreadsheets, presentations, etc.) stored on file servers, NAS devices, SharePoint and Exchange.

During the webinar, David and Matt explained the importance of having a scalable big data analytics platform that can collect, aggregate, and analyze massive amounts of metadata from various sources to provide organizations with actionable intelligence to effectively manage sensitive business data, automate complex IT tasks, and answer fundamental data protection questions like:

  • Who can access the data?
  • Who is accessing the data?
  • Where is the sensitive data over-exposed (and how can I fix it)?
  • Who shouldn’t have access to data?
  • Who does this data belong to?
  • Which data is stale?

If you missed last week’s live demonstration and would like to see Varonis in action, click here to request a demo.

Varonis hosts monthly webinars on data governance, big data, IT security, and related topics. Check our News & Events page and Twitter feed (@varonis) for upcoming events.

Fixing Access Control without Inciting a Riot

In a previous post, Fixing the Open Shares Problem, we talked about some of the challenges we face when trying to remediate access to open shares. One of the main problems is minimizing the impact these clean-up activities can have on the day-to-day activities of business users.

Think about it: if a global access group is removed from an ACL, the odds are very high that someone who has been using that data will now be restricted.  We find ourselves in a catch-22 between remediating global access and enduring weeks of business disruption as IT tries to respond to the problems caused by the “fix.”

IT: “I’m sorry that you’re unable to access your data.  We’re working on fixing it now.  I assure you, the only reason this happened is because we were trying to make things better.”

Business user: “I totally understand.  Thank you!  You should get a raise!”

(We all know this is not how the conversation goes).

There’s a better way.

Varonis DatAdvantage provides the ability to simulate permission changes and see the probable outcome before you commit those changes to production. How? DatAdvantage correlates every audit event with the permissions on an ACL and then analyzes the impact of each simulated ACL and/or group change. Through this sandbox, IT can identify the users who would have been affected by that change had it already been made—those users who would have called up the help desk screaming that they couldn’t access data they needed.

Once you’ve verified that those users really need access, you can continue to configure the ACL and group members within DatAdvantage to provide them access, and keep simulating until you’re confident that your permission changes will not disturb people’s work. If you have the credentials to make changes, DatAdvantage lets you commit all permission and group changes right through the interface (across all platforms), either immediately or scheduled for a change management window later.
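
Under the hood, this kind of simulation boils down to recomputing effective access without the removed group, then intersecting the result with observed activity. A conceptual Python sketch, not DatAdvantage’s implementation (the data shapes are assumptions for illustration):

```python
def impacted_users(folder_acl, group_members, removed_group, recent_users):
    """Who actually used this folder (per the audit trail) but would
    lose access if `removed_group` were taken off the ACL?
    group_members: dict mapping group name -> set of user names."""
    remaining = set()
    for group in folder_acl:
        if group != removed_group:
            remaining |= group_members.get(group, set())
    return [u for u in recent_users if u not in remaining]
```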

These simulation capabilities eliminate the risks of manually cleaning up open shares, since IT is able to fix the problem without ever impacting legitimate use.  Most IT departments have seen the results of trying to solve this problem manually: lots of broken ACLs and annoyed users. It’s a lot of fun to show them a better way.

You can request a free 1:1 demo of the Varonis suite here or watch our next live demo on the web.

Simulating Cleanup in a Sandbox


Exchange Journaling and Diagnostics: How To

Journaling and Diagnostics Logging are services that monitor and audit activity on Microsoft Exchange servers. They provide basic auditing functionality for email activity (e.g. who sent which message to whom) and, if collected and analyzed, may help organizations answer basic questions about email, as well as comply with policies and regulations. (Note: Varonis DatAdvantage for Exchange does not require journaling or diagnostics to monitor Exchange activity.)

Journaling records email communication traffic and processes messages on the Hub Transport servers. The information collected by the journaling agent can be viewed through journaling reports, which include the original message with all the attachments.

Diagnostics writes additional activities to the event log (visible in Windows Event Viewer), such as “message sent as” and “message sent on behalf of” actions. Diagnostics can be configured through the Manage Diagnostics Logging Properties window in the Exchange Management Console.

Journaling and Diagnostics Logging collect a significant number of events and generate a large amount of raw log data, so it is critical to plan which mailboxes and messages will be monitored, and to allocate additional storage, before enabling either service.

Here are the steps to enable Journaling and Diagnostics in your Exchange Server.

Setting up Journaling in Exchange

There are two types of Journaling: standard and premium. Standard provides journaling of all the messages sent and received from mailboxes on a specified mailbox database, while premium provides the ability to journal individual recipients by using journaling rules.
Here are the high-level steps to setup journaling on your Exchange server:

  1. First, create a journaling mailbox. This mailbox will be configured to collect all the journaling reports, and should ideally be set up with no storage limits so that no reports are missed. The process to create the mailbox is:
    1. Select a different OU than the default
    2. Assign a display name
    3. Assign a user logon name (which the user will use to log in to this mailbox)
    4. Set up a password. Take into account that journaling mailboxes may contain sensitive information, as a copy of each message is stored with the report.
  2. To enable standard journaling, modify the properties of the mailbox database: under Organization Configuration/Mailbox/Database Management, on the Maintenance tab, specify the journaling mailbox where you want the journaling reports sent.
  3. Premium journaling requires an Exchange Enterprise Client Access License (CAL). To set up premium journaling, create journal rules, which configure journaling for specific recipients. Using the EMC (Exchange Management Console), journal rules are created under the Hub Transport section of the Organization Configuration, on the Journal Rules tab. The fields to configure a journal rule are the following:
    1. Name
    2. Send reports to email
    3. Scope
      • Global – all messages through the Hub transport
      • Internal – messages sent and received by users in the organization
      • External – messages sent to or from recipients outside the organization
    4. Journal messages for recipient – journal messages sent to or from a specific recipient
    5. Enable rule – checkbox

Make sure the status on the completion page is “Completed” to verify that the rule was created successfully.
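
Journal rules themselves are created in the EMC (or the Exchange Management Shell), not in code; the Python sketch below only illustrates the scope logic described above, i.e. how a rule’s scope and recipient filter determine whether a given message gets journaled (the message and rule shapes, and the domain check, are assumptions for illustration):

```python
def should_journal(message, rule, org_domain="example.com"):
    """Apply a journal rule's scope to one message.
    message: {"sender": str, "recipients": [str]}
    rule: {"scope": "global"|"internal"|"external",
           "recipient": str or None}"""
    def internal(addr):
        return addr.endswith("@" + org_domain)

    everyone = [message["sender"]] + message["recipients"]
    if rule.get("recipient") and rule["recipient"] not in everyone:
        return False  # rule targets a recipient not on this message
    all_internal = internal(message["sender"]) and all(
        internal(r) for r in message["recipients"])
    if rule["scope"] == "global":
        return True   # all messages through the Hub Transport
    if rule["scope"] == "internal":
        return all_internal
    if rule["scope"] == "external":
        return not all_internal
    return False
```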

Setting up Diagnostics in Exchange

Diagnostics logging is configured separately for each service on each server. The steps to configure diagnostics logging are:

  1. In the Exchange Management Console (EMC), click on Server Configuration.
  2. Right-click on an Exchange server to enable Diagnostics Logging on it.
  3. Click on Manage Diagnostics Logging Properties.
  4. On the Manage Diagnostics Logging window, select the services you want to enable diagnostics for.
  5. Choose the level of diagnostics you would like on that service.
    • Lowest – log only critical events
    • Low – log only events with logging level 1 or lower
    • Medium – log events with logging level 3 or lower
    • High – log events with logging level 5 or lower
    • Expert – log events with logging level 7 or lower
  6. Click Configure. The system will provide a confirmation screen.

In a future post, we will go over the Mailbox Audit Logging in MS Exchange 2010.

New Case Study: Greenhill & Co.

Greenhill & Co., Inc. is a leading independent investment bank. The company was established in 1996 by Robert F. Greenhill, the former President of Morgan Stanley and former Chairman and Chief Executive Officer of Smith Barney.

As the CIO of Greenhill & Co., Inc., John Shaffer sought a data governance solution that could provide visibility into employee access rights, and identify potential issues. Additionally, the company needed a more efficient way to determine when content was moved or deleted, how it was being used, and by whom.

Greenhill & Co found that trying to manually manage and protect information for a company with global reach was often time-consuming, ineffective and error-prone. The CIO and his team needed automated analysis of the organization’s permissions structure to more efficiently determine which files and folders required owners and who those owners were likely to be, which in turn required the ability to analyze actual access activity.

Further, Greenhill required a system to manage access and permissions to sensitive data.

“We liked DatAdvantage because it told us right away the access rights that certain folders had, which people had access to those folders, where the content was moving to, and if that access should be tightened.”

Click here to read the whole case study.