Archive for: February, 2012

Big Data Management On Your NAS Made Easy

Got data? Got a lot of it? Most companies with NAS devices are struggling to manage permissions, understand usage patterns, find data owners, and identify and lock down sensitive information. If any of that sounds familiar, we’ve got the webinar for you. As part of our new partnership with HP, Varonis is co-presenting a webinar on how we can help you master big data.

We enable customers to get control of the information stored within HP IBRIX X9000 storage systems and file shares, helping you realize:

  • Visibility into your permissions (set in Active Directory, LDAP, SharePoint, and Exchange)
  • A detailed audit trail of every file and e-mail touch on your servers
  • Recommendations on where access can be reduced without affecting user activity
  • Identification of data owners so they can be directly involved in the management and protection of their data
  • Sensitive content analysis so you can assess risk to your most critical data, allowing you to focus on high-priority areas for remediation

Read the press release announcing our partnership here.

Sign up to attend the webinar here.

Thoughts on the 2011 Data Breach Investigations Report

While reading through the 2011 Data Breach Investigations Report, two things caught my attention:

The first is that approximately 83% of data breach attacks are considered “opportunistic.” According to the report, “the victim was identified because they exhibited a weakness or vulnerability that the attacker could exploit.” In other words, the attack took place because the attacker noticed a weakness—if that weakness had not existed or had not been noticed, the attack would not have been conceived, or the attacker would have moved on to an easier target.

The second is that those taking advantage of these weaknesses are often an organization’s own employees. The report mentions that “it is regular employees and end-users – not highly trusted ones – who are behind the majority of data compromises. This is a good time to remember that users need not be super users to make off with sensitive and/or valuable data.” Contrary to what most of us might think, in many situations it isn’t specialized criminals attacking our organizations. Regular users are responsible for many of the attacks: employees who are tempted after discovering that they have access to valuable information.

Putting these two things together, it makes sense that a primary area of risk is where employees have access to valuable data, and where access is too permissive. Many organizations are already looking for sensitive, valuable data (e.g. with data classification technologies). More recently, organizations are starting to look for better context awareness, linking content with permissions, activity, and ownership information to identify significant exposures, and accelerate data protection and remediation efforts.
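To make that combination concrete, here is a minimal, hypothetical sketch in Python of how classification results might be joined with permissions data to surface exposures. The folder paths, group names, and hit counts below are invented for illustration; in a real environment they would come from a classification scan and an ACL crawl rather than a hard-coded list.

    # Flag folders where sensitive content coexists with overly broad access.
    # All data below is fabricated for the example.
    BROAD_GROUPS = {"Everyone", "Domain Users", "Authenticated Users"}

    folders = [
        {"path": r"\\fs01\finance\payroll", "sensitive_hits": 412,
         "groups": {"Payroll-Admins", "Domain Users"}},
        {"path": r"\\fs01\marketing\assets", "sensitive_hits": 0,
         "groups": {"Everyone"}},
        {"path": r"\\fs01\legal\contracts", "sensitive_hits": 87,
         "groups": {"Legal-Team"}},
    ]

    def exposed(folder):
        """Sensitive data plus a broad access group equals an exposure worth reviewing."""
        return folder["sensitive_hits"] > 0 and folder["groups"] & BROAD_GROUPS

    for f in sorted((f for f in folders if exposed(f)),
                    key=lambda f: f["sensitive_hits"], reverse=True):
        print(f["path"], f["sensitive_hits"], sorted(f["groups"] & BROAD_GROUPS))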

In our next post, we’ll discuss how you can use metadata framework technology to identify users that might be looking for weaknesses in your environment.

File system audit data taking up too much space? Read on…

I had the privilege of speaking about eliminating data security threats at Data Connectors in Houston a couple of weeks ago, and several people asked me how much space “all that audit log data” would take up, and how long you could realistically keep it while still being able to report on it. One person who asked explained that he had a product to collect audit data on a single busy file server, but it could only hold a month or so of data before it consumed a full terabyte of space, and (worse) became almost unusable when generating reports.

If you’ve ever enabled native auditing (like “audit object access” success events in Windows or BSM in Solaris) and taken a look at the logs, you’ve certainly noticed, among other things, the astounding number of events they generate. I just enabled native auditing on my workstation while writing this to get some numbers. I then opened one (existing) file, edited one line, saved it, and closed it. That alone generated 130 distinct events (46 events with ID 4656, 46 with ID 4658, and 38 with ID 4663). With numbers like this, it’s no wonder that collecting and storing raw audit logs can take up so much space, and be so slow to parse through.

This is one of the areas where metadata framework technology really shines in unstructured data protection. Not only can a metadata framework replace the inefficient native operating system auditing functionality on many platforms, it can also normalize the audit information and store it within intelligent data structures. Normalization eliminates redundant information, and the resulting data structures are much faster to process once the expensive-to-parse parts of the audit trail (like the path and the SID) are converted into integers.

With normalization and intelligent data structures, not only can audit information be stored more efficiently, it is also quicker to search and easier to analyze.
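As a rough, hypothetical illustration of the normalization idea, the Python sketch below maps repetitive strings (paths and SIDs) to small integer IDs and collapses the burst of raw events from a single file operation into one compact record. The event shapes and field names are made up; they do not represent any particular product’s format.

    from collections import defaultdict

    class IdMap:
        """Assigns a stable integer ID to each distinct string it sees."""
        def __init__(self):
            self.ids = {}
        def get(self, value):
            return self.ids.setdefault(value, len(self.ids))

    path_ids, sid_ids = IdMap(), IdMap()

    # Raw events: several records per user action, with long strings repeated in each.
    raw_events = [
        {"sid": "S-1-5-21-111-222-1001", "path": r"C:\docs\plan.txt", "op": "open"},
        {"sid": "S-1-5-21-111-222-1001", "path": r"C:\docs\plan.txt", "op": "write"},
        {"sid": "S-1-5-21-111-222-1001", "path": r"C:\docs\plan.txt", "op": "write"},
        {"sid": "S-1-5-21-111-222-1001", "path": r"C:\docs\plan.txt", "op": "close"},
    ]

    # Normalized form: one (user_id, path_id, operation) key with a count,
    # instead of repeated rows full of strings.
    normalized = defaultdict(int)
    for e in raw_events:
        normalized[(sid_ids.get(e["sid"]), path_ids.get(e["path"]), e["op"])] += 1

    for (user_id, path_id, op), count in normalized.items():
        print(user_id, path_id, op, count)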

Using Varonis: Start With Classification

(This is one entry in a series of posts about the Varonis Operational Plan – a clear path to data governance. You can find the whole series here.)

We spend a lot of time talking here and elsewhere about the many and varied problems IT faces when it comes to access control. What I’ve found, though, is that some of our customers end up with either DatAdvantage or DataPrivilege (or both) to address a specific need. For instance, one customer I met recently bought DatAdvantage for Windows because they discovered that a particularly sensitive file had been exposed, and they wanted to make sure they both cleaned up the exposure and tracked access to the file in the future. It’s not uncommon for folks to come to us with one use case in mind, and after quickly addressing the initial need, they want to know what they should do next.

What we’re all really looking to do is understand where access is broken, fix it, and then maintain correct controls in the future, including auditing use and flagging abuse. Making sure that the right people have access to the right data means being continuously vigilant in identifying and fixing these problems. Usually it also means identifying data owners and shifting the burden from IT to the business–the data belongs to them, after all.

Without a roadmap or methodology, it can be hard to know where to begin, which brings us back to the customer I mentioned earlier. They wanted to know how to get from a chaotic file sharing environment–where they don’t even know what’s broken, let alone how to fix it–to controlled collaboration that’s continuously secure. The answer isn’t just in the technology you use; it’s in embracing a methodology and a culture that treats data as a business asset rather than a technology asset. What I’d like to do over a series of posts is lay out the basics of our approach, as well as talk about how we’ve seen it work with some recent customers.

Step 1: Figure Out What’s Valuable

Not everything on unstructured shares has the same value. The first challenge is figuring out exactly what’s important, and what’s not. The problem is that an estimated 80% of a company’s data is unstructured, and a lot of it is accessible to too many people, so it can be difficult to prioritize where you should focus your time and energy.

The last few years have seen a huge investment in DLP products, including classification of data “at rest” (data sitting on servers, basically). Classification can involve a lot of things, but at its core what we’re doing is taking a close look at as much of the data as possible to figure out what’s important to the organization and what’s not. For example, a health care provider probably wants to locate all the patient records, so they may scan for Social Security or patient ID numbers. Another example might be a bank looking for credit card or account numbers. Content inspection alone isn’t always enough–sometimes you also want to limit the search to patterns in files created or accessed by specific people. Either way, the first step in our methodology is going to be identifying those sensitive data patterns: what do we think is important? Until we decide what’s important, it can be hard to know where to begin fixing things. It also gets to the real heart of the problem: if this stuff is valuable, we need to protect it.
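For a feel of what pattern-based classification looks like at its simplest, here is a hypothetical Python sketch that scans text files under a directory for strings shaped like US Social Security numbers or 16-digit card numbers. Real classification engines do far more (validation, proximity and keyword rules, many file formats); the patterns and the ./shared path are placeholders.

    import re
    from pathlib import Path

    PATTERNS = {
        "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "card_like": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    }

    def classify(root):
        """Return {file: {pattern: hit count}} for files containing any sensitive pattern."""
        hits = {}
        for path in Path(root).rglob("*.txt"):
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue
            counts = {name: len(rx.findall(text)) for name, rx in PATTERNS.items()}
            if any(counts.values()):
                hits[str(path)] = counts
        return hits

    if __name__ == "__main__":
        for path, counts in classify("./shared").items():
            print(path, counts)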

In future posts I’ll continue to lay out the Varonis methodology. Stay tuned.

Image credit: mamsy

Forensic Investigation of Trade Secret Theft (Part 2)

In a recent blog post, we discussed a hypothetical situation where the General Counsel of “Alpha Chemicals” approached you and requested a whole bunch of information about “Allen Carey,” including documents he accessed and email messages he read related to the company’s blockbuster product, “Transparent Aluminum,” and a list of the permissions that “Allen” had to various IT resources. Well, in parallel with his request for this information, the General Counsel also questioned the HR department and discovered that although “Allen Carey” had performed malicious activities, according to the HR department, “Allen Carey” didn’t exist!

While not directly relevant to IT security (but directly relevant to this scenario), in 1973 the most popular show on television was M*A*S*H. In one episode the lead character, Hawkeye Pierce, created a fictitious character, “Captain Tuttle.” During the episode, Captain Tuttle’s persona morphed from imagination to legend within the hospital, as “Captain Tuttle” was responsible for a number of very heroic actions, yet no one ever saw him. The episode ends with “Captain Tuttle” dying in a tragic accident, the only proof of his existence being the dog tags found near the accident site. That was the extent of the forensics performed in this very funny comedy.

While our hypothetical situation may seem like it was created for a Hollywood comedy, what would you do if it was determined that a fictitious person named “Allen Carey” performed malicious activities that resulted in the loss of your company’s trade secrets? What type of information would you require to perform an investigation? Minimally, you would need the ability to answer the following questions:

  1. Who created Allen Carey’s user account, and when?
  2. Was Allen Carey’s user account added to or removed from any group or Access Control List, and by whom?
  3. Can you provide a record of any email accounts where Allen Carey might have had send-as or send-on-behalf-of privileges, when he got those permissions, and who granted them?
  4. Which, if any, other user accounts accessed files from the workstation that Allen Carey used?

In order to provide the General Counsel with the answers to the above questions, you would need to be auditing administrative access to Active Directory and Exchange. You would also need to correlate access activities from a specific workstation to the user accounts that used that workstation. Most importantly, you would require a product that provides historical reporting with the ability to correlate all of the relevant variables. And you would need to provide this information quickly. Of course, the General Counsel also requires the information he requested previously, as he still needs to know about the documents that Allen Carey accessed, the email messages that he read, and the permissions that he had to various IT resources.
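To give a sense of the kind of correlation those questions require, here is a toy Python sketch that answers the first two against an already-parsed audit trail. The records, field names, account names, and paths are entirely fabricated; a real investigation would draw on directory-service and Exchange audit logs rather than a hard-coded list.

    # Fabricated audit trail for illustration only.
    audit_trail = [
        {"when": "2012-01-03T09:14", "actor": "jsmith-admin",
         "action": "account_created", "target": "acarey"},
        {"when": "2012-01-03T09:20", "actor": "jsmith-admin",
         "action": "group_member_added", "target": "acarey", "group": "R&D-Share-RW"},
        {"when": "2012-01-15T17:02", "actor": "acarey",
         "action": "file_open", "target": r"\\fs01\rnd\transparent_aluminum.docx"},
    ]

    def who_created(trail, account):
        """Question 1: who created the account, and when?"""
        return [(e["actor"], e["when"]) for e in trail
                if e["action"] == "account_created" and e["target"] == account]

    def group_changes(trail, account):
        """Question 2: which groups was the account added to, and by whom?"""
        return [(e["group"], e["actor"], e["when"]) for e in trail
                if e["action"] == "group_member_added" and e["target"] == account]

    print(who_created(audit_trail, "acarey"))
    print(group_changes(audit_trail, "acarey"))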

In the next blog post, we will dissect the forensics process in detail.

Accelerate Data Protection with Context Awareness

I’ve been reading a lot about DLP technology lately.  Almost every article, discussion, and whitepaper I’ve stumbled upon focuses on content awareness — scan my files, find sensitive data, and ensure that it doesn’t escape.

This is a great place to start–locating critically sensitive files is a terrific first step–but a massive list with hundreds of thousands of “alerts” across petabytes of data can be extremely daunting if you don’t have any actionable intelligence accompanying it.

After a scan, there are usually far more questions than answers:

  • Who is using this data?
  • Who owns the data?
  • Which data is most at risk?
  • Once we’ve remediated exposures, how can we keep things under control?

This is why context is King.  And the key to context is metadata.
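As a hedged sketch of what context can add on top of raw classification hits, the Python fragment below scores each flagged share by combining content sensitivity with access breadth, actual activity, and ownership, so the worst combinations rise to the top. The weighting and the sample shares are arbitrary illustrations, not a prescribed model.

    # Fabricated share metadata for illustration only.
    shares = [
        {"name": "HR-Records",  "sensitive_files": 3200, "users_with_access": 4500,
         "active_users_90d": 12, "owner_known": False},
        {"name": "Engineering", "sensitive_files": 150,  "users_with_access": 60,
         "active_users_90d": 55, "owner_known": True},
        {"name": "Finance",     "sensitive_files": 900,  "users_with_access": 700,
         "active_users_90d": 35, "owner_known": True},
    ]

    def risk_score(share):
        # More sensitive content and more access than actual use both raise risk;
        # not knowing the owner raises it further (no one to confirm remediation).
        excess_access = max(share["users_with_access"] - share["active_users_90d"], 1)
        score = share["sensitive_files"] * excess_access
        return score * 2 if not share["owner_known"] else score

    for share in sorted(shares, key=risk_score, reverse=True):
        print(share["name"], risk_score(share))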

Forrester states, “To manage and protect information effectively, particularly from insiders and business partners, information risk and security professionals must integrate identity and access management with data life-cycle management. Forrester refers to this as protecting information consistently with identity context (PICWIC).”

Download our new whitepaper, Accelerate Data Protection with Context Awareness, where we talk much more about how identity context awareness can close the loop on DLP.

Photo: http://farm4.staticflickr.com/3109/2659027985_065d5c9ff0_m.jpg

Data Protection: It’s Just the Right Thing To Do

A couple of weeks ago, 37signals, makers of the popular project management app Basecamp, wrote a blog post about the 100 millionth file upload.  In the post, the author made an ostensibly innocent comment about the filename: cat.jpg.

Funny, right?  37signals co-founder David Heinemeier Hansson was not laughing.

So, what’s the big deal?  As David explains in his apologetic follow-up post, if a 37signals employee can not only view, but also comment publicly on, cat.jpg, what prevents them from doing the same with truly sensitive data, like Downsizing-Plans-2012.pdf?

Even though this incident was extremely minor on the spectrum of data security issues (just ask Sony), it’s clear from David’s apology that 37signals feels a responsibility to safeguard customer data and uphold a high standard of trust and openness. 37signals should be commended for acknowledging and addressing this responsibility.

In an ideal world, every company would honor its obligation to protect its customers’ private information. Protecting customer data is certainly the right thing to do from a moral perspective; it is also the right thing to do—in fact, the urgent, critical thing to do—from a business perspective. Who will put their money in a bank that doesn’t lock its vault when there is a responsible bank down the street?

Unfortunately, despite the moral and financial incentives, many companies neglect to take the most basic measures to secure personal data, and some don’t disclose when their systems have been compromised. As the number of individuals negatively impacted each year by breaches continues to skyrocket, legislators and regulatory agencies are putting pressure on organizations to get control of their data or risk substantial fines and lawsuits.

When there is a trend that threatens the welfare of consumers, public companies, and their shareholders, regulations always follow. Not surprisingly, 46 US states have enacted laws that mandate security breach notification and adherence to strict standards (such as PCI-DSS), which prescribe rules about passwords, data encryption, access controls, event monitoring, security policies, and more. The EU has even broader legislation pending. In an attempt to mitigate risk to investors, the SEC has released a “Disclosure Guidance” document advising firms to disclose in their regulatory filings attacks and breaches that have occurred, as well as potential security risks they face going forward.

With the average cost of a data breach reaching $7.2 million in 2010, it has become exceedingly risky for businesses not to make protecting customer data a top priority. In a follow-up post we’ll talk about where the bulk of customer data and other critical information typically resides.

How to Share Your Mailbox or Calendar in Outlook: It’s Easy

It’s so easy, in fact, that just about everyone figures it out the first time. Most end users have realized by now that right-clicking on something in Windows does magical things, and it’s not a long leap from there to selecting properties and permissions when you want to grant access to something. (If you really need help, there’s a nice how-to here.)

What is difficult, however, is reporting on exactly what has been shared in the Microsoft Exchange environment, and who is making use of that access. Who is reading your email? Who is looking at your calendar? As we’ve spoken to Exchange administrators over the past year, we’ve discovered that two things are usually true about mailbox and calendar sharing:

  • Once mailboxes and calendars are shared, they usually stay shared—indefinitely
  • The higher in the organizational hierarchy you are, the more likely it is that your mailbox or calendar is shared, and shared with more people

Exchange administrators are obviously concerned about this, as are security and compliance folks. It’s not surprising that we were asked to create two specific reports:

  • Which people’s mailboxes are shared
  • Which people’s mailboxes are being accessed, and by whom (other than the owner)

DatAdvantage for Exchange Sample Report

It’s also not surprising that we created them, and they are now available with DatAdvantage for Exchange. Take a look at all the Exchange features in our upcoming webinar on February 22nd.
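For readers curious what those two reports boil down to, here is a back-of-the-envelope Python illustration built over a made-up export of mailbox folder permissions and access events. The field names, addresses, and values are invented, and Exchange itself is not queried here; DatAdvantage for Exchange builds these reports from real permissions and audit data.

    # Fabricated permission grants and access events for illustration only.
    permissions = [
        {"mailbox": "ceo@example.com",   "granted_to": "assistant@example.com", "folder": "Calendar"},
        {"mailbox": "ceo@example.com",   "granted_to": "cfo@example.com",       "folder": "Inbox"},
        {"mailbox": "sales@example.com", "granted_to": "sales@example.com",     "folder": "Inbox"},
    ]
    accesses = [
        {"mailbox": "ceo@example.com", "accessed_by": "assistant@example.com", "folder": "Calendar"},
        {"mailbox": "ceo@example.com", "accessed_by": "cfo@example.com",       "folder": "Inbox"},
    ]

    # Report 1: which mailboxes are shared with someone other than their owner?
    shared = {p["mailbox"] for p in permissions if p["granted_to"] != p["mailbox"]}
    print("Shared mailboxes:", sorted(shared))

    # Report 2: which mailboxes are being accessed by non-owners, and by whom?
    for a in accesses:
        if a["accessed_by"] != a["mailbox"]:
            print(a["mailbox"], a["folder"], "accessed by", a["accessed_by"])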