Archive for: November, 2011

Our Top Predictions for 2012

It’s that time of year again: reruns of It’s a Wonderful Life (or The Lord of the Rings), comfy chairs in front of a blazing fire, libations and cheer, and a moment to consider what’s around the corner for us next year. This time we’re skipping the long shopping list of predictions in favor of a few pithy ones that you might actually remember. (We’re sure you haven’t forgotten our list of forty things that 2011 would bring, and we’re not going to remind you because, of course, they all happened.)

Secure Collaboration will go viral

Data will continue to grow at 50% year over year and digital collaboration will continue to be the core of every business process. 2012 will be the year data owners get involved – they will take back access control decisions from IT, and demand automation to analyze data, make better decisions, and eliminate costly, ineffective manual processes.

Organizations will realize that continuing on the current path will have devastating results for their businesses. Infrastructure must be refreshed periodically (convergence is the trend: small, distributed servers are consolidated into fewer, larger, centralized ones), and organizations will soon realize that without intelligence, moving data quickly and safely is difficult or impossible: the wrong data will be moved and the wrong people will have access. Every decision about managing and protecting data will be difficult without adequate context and automation, and without the right people, the data owners, involved in making those decisions.

We regularly work with organizations that have anywhere from 10 to 100,000-plus shares for which they need to identify data owners. In 2012, organizations are going to find it even harder to identify owners, and to make any data management and protection decisions, without harnessing the power of metadata. If they don’t know who this “stuff” belongs to, it’s just going to grow indefinitely and remain perpetually at risk. It’s dawning on them that they can’t afford that.

Big Data analytics will expand its focus to the biggest data of all—unstructured information sitting on file servers, NAS devices, and in email systems

Effective data governance requires harnessing the power of metadata through intelligent automation. It is not surprising that industry experts are now saying that the same kind of automation is necessary for more than good governance.

In order to harness the power of “Big Data,” you’ll need to analyze these massive data sets and look for patterns: how and when the data is used, who uses it, in what sequence, and what it contains. That’s what it takes to effectively run a data-driven organization. And it’s widely known that the majority of big data in the enterprise is unstructured rather than structured.

Organizations will start keeping track of their assets through automation and we will see some IT departments taking drastic measures, such as shutting down “at risk” servers or access to e-mail if the proper audit trails are not in place

In a recent high-profile case, one organization used our software to catch an infiltrator who was operating as a contractor inside their firewall. This had enormous implications for the IT security of that organization: if they had not found the suspected hacker when they did, who knows what damage could have been done. This individual is now out of their system and into the justice system.

2012 may be the year servers get shut down and email withdrawn if there is no audit trail. If you couldn’t audit your bank account, you’d want to freeze it until you could; it’s now the same with data. We will see some IT departments take drastic measures, such as shutting down unaudited, “at risk” servers or access to e-mail if the proper audit trails are not in place, and organizations will start keeping better track of their digital assets through automation.

Internal threats will still be a major worry for corporations in 2012 despite the demise of WikiLeaks

In many of the security breaches in 2011, employees or contractors were able to delete or download thousands of files without raising concerns. Often no one was able to determine what sensitive data they had access to and secure it before information could be stolen, no one could review an audit trail of what they actually did access after the fact, and certainly no alarms went off while the breach was in progress, even when access activity was unusual.

Much of the data accessed and leaked in recent breaches was composed of unstructured or semi-structured data – documents, spreadsheets, images, presentations, video and more – that resided on file shares accessible throughout organizations.

Download the full Varonis Top Predictions for Data Governance in 2012 White Paper here

Authorized Access – Understanding how US laws affect your authorization p...

In 1986, the United States Congress passed the Computer Fraud and Abuse Act (CFAA). While the intent of the law was originally to protect government computers and information from hackers, it has been applied to commercial interests as well. Specifically, the Computer Fraud and Abuse Act subjects to punishment anyone who “knowingly and with intent to defraud, accesses a protected computer without authorization, or exceeds authorized access, and by means of such conduct furthers the intended fraud and obtains anything of value.” While it is not our position to advise clients on this topic, it is important to understand how the US courts interpret the phrases “authorized access” and “exceeds authorized access.”

Through litigation, the US legal system has attempted to interpret the CFAA and determine the legal definition of “authorized access” and “exceeds authorized access.” Before getting into the value of Varonis features, it is essential to review the prevailing case law and judicial opinions on this topic. While there have been a number of cases addressing this issue, two cases and an opinion by a US District Court stand out, each of which provides a basis for current legal decisions that address authorization issues. Not surprisingly, most available case law involves data “theft” by individuals who, at some level, had permission to access the information in question. For example:

  • USA v. Nosal – In this case Nosal (a former employee of Korn/Ferry) obtained proprietary information from his former co-workers, which he used to start a competing business. The former co-workers had authorization to access the information via the access permissions provided to them by Korn/Ferry, but the courts considered whether they had “exceeded authorized access” because they had signed a Non-Disclosure Agreement as well as an Acceptable Use Policy.
  • LVRC Holdings LLC v. Brekka – Brekka (an employee of LVRC) emailed business documents to his and his wife’s personal email accounts. Brekka had permission to access the business documents, and LVRC did not have an acceptable use policy, so Brekka did not violate any access restrictions and was ultimately found to have maintained “authorized access.”
  • A US District Court within the Seventh Circuit has stated that “an employee accesses a computer without authorization the moment the employee uses a computer or information on a computer in a manner adverse to the employer’s interest.” This opinion held that access permissions were only one factor in determining authorized access. In this case, the access permissions available to the employee were considered, as well as whether the employee used those permissions and data in a manner that was detrimental to his employer’s interests. In other words, regardless of the permissions available to an employee, a “disloyal” employee may be culpable for accessing information available to them with ill intent. Other courts have offered differing opinions on this specific issue, creating additional confusion.

As you can see, what constitutes authorized access is still subject to interpretation in the courts. Acceptable Use Policies and Non-Disclosure Agreements are important, but they are only useful after an incident has taken place. Written policies and expectations of loyalty don’t safeguard important data, and they don’t prevent disloyal employees from using data to their advantage. Ultimately, IT administrators must enforce rightful access via best practices: data owner involvement in authorization processes, in conjunction with an audit trail to validate acceptable use. In other words, access should be granted purposefully and reviewed periodically.

Varonis products provide the following features which will help to address the legal issues identified above:

  • Complete visibility into the permissions that each individual has across Windows, Unix, Linux, SharePoint and Exchange environments
  • A full audit trail which demonstrates whether an employee has accessed data that an employer would consider important or inappropriate
  • The ability to ensure rightful access, involving data owners in the decision making process
  • The ability to determine the sensitivity of data, as defined by data owners
  • A provisioning system complete with an audit trail which can report on why a person was granted access to a resource, when, and by whom
  • Automated entitlement reviews to ensure that permissions are always appropriate

Moral of the story: Make every effort to ensure and validate rightful access so that you can peacefully co-exist with the vagaries of the law. Varonis products can ensure ongoing authorized access and provide information to support a claim that a person exceeded their authorized access.

Reduce Risk for Your Most Critical Assets: Data and People

Register for our TechTalk on December 1, 2:00 pm (EST) with Varonis partner, SPHERE Technology Solutions

Every company knows it has risk, whether from external or internal forces, but few know where that risk comes from, how to measure it, and, more importantly, how to reduce it effectively. It boils down to two words: data and people. Luckily, data and people are the two risks you can actually start to control, and the way you control them is through governance.

But where do you start? Unless your organization is made up of five people with only a handful of file shares, figuring out how to scope this can feel like an incredibly challenging problem. Since we’re trying to reduce risk, focusing on your riskiest areas fits the bill. Ask yourself these questions:

  • Where do I have open access?
  • Where is my sensitive data?
  • Where do my sensitive users have access?

With the right tools and processes, you can answer these questions and immediately start reducing risk through a targeted approach. Design and implement a remediation strategy to reduce risk across the rest of your landscape, and then automate ongoing governance processes to maintain a secure model.
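To make the targeted approach concrete, here is a minimal sketch of scoring folders by those three questions so the riskiest ones get remediated first. The field names, weights, and sample data are illustrative assumptions, not output from any particular product.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FolderRisk:
    path: str
    open_access: bool                 # reachable by Everyone / Domain Users / etc.
    has_sensitive_data: bool          # classification hit (PII, customer data, IP, ...)
    sensitive_users: List[str] = field(default_factory=list)  # high-risk accounts with access

    def score(self) -> int:
        """Toy risk score: open access to sensitive data floats to the top."""
        return (
            (4 if self.open_access and self.has_sensitive_data else 0)
            + (2 if self.open_access else 0)
            + (1 if self.has_sensitive_data else 0)
            + len(self.sensitive_users)
        )

# Sample inventory; in practice this would be produced by your scanning tools.
folders = [
    FolderRisk(r"\\filer01\finance\payroll", open_access=True, has_sensitive_data=True),
    FolderRisk(r"\\filer01\marketing\events", open_access=True, has_sensitive_data=False),
    FolderRisk(r"\\filer02\engineering\designs", open_access=False,
               has_sensitive_data=True, sensitive_users=["contractor.jsmith"]),
]

# Remediate the riskiest folders first.
for f in sorted(folders, key=FolderRisk.score, reverse=True):
    print(f.score(), f.path)
```

However you weight the score, the point is the same: open access plus sensitive data is where the first remediation pass belongs.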

What’s important is that you start, and that you keep going. Nobody wants to make the news because of a data breach, and in addition to reputational risks, new guidance from the SEC means that public companies now have to make investors aware of how intellectual property is valued and secured.

“The SEC guidance will fundamentally alter this equation by raising questions that historically have not been asked at many U.S. companies. Businesses will now have to consider, among other things, what constitutes a material cyber security breach and how to disclose such events to investors; how the value of intellectual property is measured; whether appropriate defenses are in place around that property; and whether risks are being appropriately mitigated, through defensive technologies or appropriate insurance coverage.” – Washington Post, “A new line of defense in cyber security, with help from the SEC” by Jay Rockefeller and Michael Chertoff.

Varonis and SPHERE will be hosting a Varonis TechTalk on Thursday, December 1st to go over ways you can successfully design and implement a governance strategy at your company. Register today: http://go.varonis.com/l/7352/2011-11-22/1EODO

Improving Authorization with Metadata

Now that we’ve covered why authorization tends to be broken, let’s talk about some solutions. To recap:

  • Authorization is the process whereby we figure out what someone can and can’t access in a system
  • Authorization controls are typically too permissive thanks to global access groups, groups not properly aligned with data, and excessive group membership
  • Traditional tools are ill-equipped to fix these problems, so they tend to spread and get worse rather than get better

So what’s an IT manager to do? We need machines to start doing this work. We need automated systems that can gather and process information about our data so that we can both find and fix authorization problems. At a minimum, we need a few different types of metadata:

First, we need a map of all the users and groups in the system. Users and groups are the foundation for access control, so we need to start there. It might sound obvious, but any system designed to help fix authorization has to start with the user and group directory. The problem has traditionally been that we both start and stop here, figuring that if we fix the groups we fix the problem. We all know that’s not true now, though. Right?
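As a rough illustration of why the directory alone isn’t enough, here’s a sketch of flattening nested group memberships into the users each group actually contains. The group data is invented for the example; in a real environment it would come from Active Directory, LDAP, or local accounts.

```python
from typing import Dict, Optional, Set

# group -> direct members (users or other groups); invented sample data
groups: Dict[str, Set[str]] = {
    "All-Staff":           {"Finance", "Engineering", "dave"},
    "Finance":             {"alice", "bob", "Finance-Contractors"},
    "Finance-Contractors": {"carol"},
    "Engineering":         {"erin", "All-Staff"},   # accidental circular nesting
}

def effective_users(group: str, seen: Optional[Set[str]] = None) -> Set[str]:
    """Recursively expand a group into the set of users it really contains."""
    seen = seen if seen is not None else set()
    if group in seen:              # guard against circular nesting
        return set()
    seen.add(group)
    users: Set[str] = set()
    for member in groups.get(group, set()):
        if member in groups:       # the member is itself a group, so recurse
            users |= effective_users(member, seen)
        else:
            users.add(member)
    return users

print(sorted(effective_users("All-Staff")))
# ['alice', 'bob', 'carol', 'dave', 'erin']
```

Nested (and circular) group membership is exactly why a quick glance at the directory tells you much less than you think about who can actually get to the data.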

Next, we need the actual permissions. We can’t know where an ACL is overly permissive, or even broken, if we don’t go out and scan all the ACLs. Gathering a complete map of permissions will let us start to answer some of the more difficult questions, like “Who’s got access to this folder?” and “What folders does this group have access to?”
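Once the ACLs have been collected, those two questions are really the same data indexed in two directions. Here is a minimal sketch, assuming the ACLs have already been exported into a simple folder-to-trustee mapping (with nested groups flattened first, as in the previous sketch); the paths and trustees are made up for illustration.

```python
from collections import defaultdict
from typing import Dict, Set

# folder -> trustees (users or groups) with some level of access; sample data
acls: Dict[str, Set[str]] = {
    r"\\filer01\finance":         {"Finance", "Domain Admins"},
    r"\\filer01\finance\payroll": {"Finance", "Everyone"},      # uh oh
    r"\\filer01\hr":              {"HR", "Domain Admins"},
}

# Invert the map: trustee -> folders that trustee can reach
reachable: Dict[str, Set[str]] = defaultdict(set)
for folder, trustees in acls.items():
    for trustee in trustees:
        reachable[trustee].add(folder)

def who_has_access(folder: str) -> Set[str]:
    """Who's got access to this folder?"""
    return acls.get(folder, set())

def folders_for(trustee: str) -> Set[str]:
    """What folders does this group (or user) have access to?"""
    return reachable.get(trustee, set())

print(sorted(who_has_access(r"\\filer01\finance\payroll")))  # ['Everyone', 'Finance']
print(sorted(folders_for("Domain Admins")))                  # the finance and hr folders
```

The indexing is trivial; the hard part, and the reason automation matters, is gathering complete, current ACL data across millions of folders in the first place.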

Just having permissions isn’t enough if we actually want to fix things. We also need a record of all data access—an audit trail. I went over this in a little more detail in a previous post, but the main point is that you can’t figure out where access is broken unless you’re recording and analyzing how it’s actually being used.
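Here’s the kind of question an audit trail makes answerable, sketched in a few lines: which users hold permissions on a folder but haven’t touched it during the observation window? Both data sets below are invented for the example; in practice the events would come from whatever is recording access activity.

```python
from datetime import datetime

# Permissions: folder -> users with access (groups already flattened); sample data
permissions = {
    r"\\filer01\finance\payroll": {"alice", "bob", "carol"},
}

# Audit trail: (user, folder, timestamp) events captured over the last 90 days
events = [
    ("alice", r"\\filer01\finance\payroll", datetime(2011, 11, 2, 9, 15)),
    ("alice", r"\\filer01\finance\payroll", datetime(2011, 11, 14, 16, 40)),
    ("bob",   r"\\filer01\finance\payroll", datetime(2011, 11, 20, 11, 5)),
]

# Which (user, folder) pairs actually saw activity?
active = {(user, folder) for user, folder, _ in events}

for folder, users in permissions.items():
    unused = sorted(u for u in users if (u, folder) not in active)
    print(f"{folder}: access with no recorded activity -> {unused}")
# carol has permission but never touched the folder: a candidate for review or removal
```

Comparing what people can access with what they actually access is the simplest way to find access that can safely be taken away.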

Finally, knowing which data is the most important (sensitive, critical, whatever you want to classify it as) will also help prioritize the work. Broken access controls on PII data may be a bigger problem than on someone’s vacation photos, for instance.

Varonis refers to the metadata above as “metadata streams.” There’s a ton of this metadata to be gathered if you’re collecting all of the access activity and correlating it with file system permissions. Next time we’ll talk about how to collect it all, and what it’s going to take to use it.

Hampton Products

“We have a level of confidence now that we didn’t have before Varonis. DatAdvantage helps us simplify security and know with certainty that files and information at risk for overly permissive access are locked down.”

That quote’s from Brian Millsap, CIO and Vice President at Hampton Products, who announced today that they’ve successfully implemented Varonis DatAdvantage to clean up and maintain access to sensitive data within the organization. Hampton is a leading manufacturer of padlocks and other portable security hardware and, like a lot of us these days, was faced with a tough problem: lots of sensitive information on unstructured data stores with broken access controls. With an IT staff of nine people, they also didn’t have a lot of free resources to throw at the problem.

Being able to do more with less when you’re fixing permissions is key—it’s too much to try and do it all manually. Even if you have a good way of finding all the important data that needs protecting—no small feat—figuring out who actually needs access to each and every folder is a monumental task. Automation is key, and it’s one of the main reasons Hampton Products chose Varonis.

 

Data Authorization Processes – A need to relive the past

In 1941, the accounting governance body, the American Institute of Certified Public Accountants (AICPA), overhauled its Rules of Professional Conduct. Rule 16 stated, “A member or an associate shall not violate the confidential relationship between himself and his client.” This provision was developed to guide Accountants (Data Stewards) and to reassure their customers (Data Owners) of the confidentiality of business and personal information. Ironically enough, prior to this time the AICPA, which has been in existence since the 1800s, felt that such a provision was unnecessary, on the grounds that “The man with a loose tongue, the man who cannot keep a secret, should never attempt to practice public accounting.”* Prior to this change, the AICPA believed that an Accountant would never risk his professional career by revealing confidential information to a third party.

From the late 1800s through the mid-1980s, the manner in which Accountants stored financial information, and the processes they used to manage it, supported information confidentiality: usually a simple wooden filing cabinet and keys. Authentication and authorization protections were simple: if you didn’t have the key to the office, you couldn’t get to the filing cabinet; if you didn’t have the key to the filing cabinet, you couldn’t open it or access the information within. The key to the office was only given to select employees and associates authorized by the Accountant (once again, the Data Steward) to have access to the information. If client information was revealed to a competitor, it was fairly easy to determine who leaked it. In this regard, the premise of least privilege existed as early as the 1900s, and both the data owner and the data steward had control of and visibility into the data authorization process.

Unfortunately, the paradigm of data protection has changed, and not in a positive way. Financial information is no longer controlled by a process with a clearly identified data owner and data steward. Most companies have not identified data owners, most don’t have appropriate controls over their data, and most cannot exercise the same level of data owner involvement in access control decisions that existed in early and mid-20th-century accounting. And if a data owner has not been identified, risk is extremely difficult to quantify, appropriate controls are difficult to implement and enforce, and customers will eventually lose faith in a supplier’s ability to protect personal and business data.

Electronic recordkeeping, in conjunction with digital collaboration, has overloaded manual authentication and authorization processes, even for those data sets that do have owners. Automation is now necessary to achieve the level of data protection that Accountants relied on in the 1900s, where users are authenticated, and owners are identified and participate in the authorization process, armed with the intelligence to make good decisions. No one can dispute the many benefits of electronic recordkeeping. However, as we approach the end of the fiscal year, while many companies are doing tax planning and budgeting for 2011 and 2012, we should all be conscious of the steps our IT suppliers are taking to protect our business and personal data. Hopefully they’re using more than a bigger file cabinet.

*Carey, John L. Professional Ethics of Public Accounting. New York: American Institute of Accountants, 1946.

Big Data

Big data is in the news quite a bit these days as organizations become excited about the possible benefits of analyzing website traffic, database logs, and many other kinds of “Big Data.”  Some Big Data examples that are of particular interest are the spreadsheets, images, emails, audio files, video files, blueprints, and presentations that reside on file servers, NAS devices, SharePoint, and in Email. How will this enormous data set be harnessed to make better decisions, and what’s needed in order to do so?

Big Data Analysis and Structured Data

So far, Big Data analytics has mostly centered on information stores where there is ample metadata to analyze, like websites with extensive logs of activity, and structured data repositories (databases), where transactions are straightforward to track and analyze. In situations where metadata is available, the challenge truly is about volume and technique: how to process lots of information quickly enough and analyze it effectively to test assumptions, answer questions quickly, detect changes, and understand patterns.

Examples of unstructured data include spreadsheets, presentations, images, audio files, video files, blueprints, and designs. This data most often resides in unstructured repositories, like file shares, which usually don’t have much existing metadata to analyze: there is typically no record of activity, no strict connection to the creators and owners of the data, and no catalog or index of what all the data contains. Ironically, this is where the most (and biggest) data actually lives: many studies show that more than 80% of organizational data is stored in unstructured repositories.

For an interesting glimpse into the future of Big Data Analytics and metadata, read the rest of Mastering Big Data, by Yaki Faitelson.

 

Open Shares

In my post last week, Share Permissions, I promised I’d write a follow-up post on “open shares.” Open shares, in a nutshell, are folders that are accessible to all (or pretty much all) of the people on the network. In the Windows world, these are folders that are shared over the network via CIFS and accessible to what are called “global access groups,” like Everyone, Domain Users, and Authenticated Users.

In order for a folder to be accessible to a global access group, its NTFS permissions must be set to be accessible by the group, and the folder must be shared or reside within the hierarchy of a share whose permissions are also accessible to the global access group.  For example, for a folder to be accessible, or open, to the Everyone group, the Everyone group must be on its access control list (ACL) with some level of access, and the folder and/or one of its parents must be shared so that Everyone has some level of share permissions. (See Share Permissions for an explanation of how sharing permissions work).

There are many possible combinations that can provide such open access: Everyone may be on the NTFS permissions while Authenticated Users or Domain Users have share access, Authenticated Users may be a member of another group that has either NTFS or share access, and so on. No matter what the combination, the end result is that just about everyone in the organization has access to the data that resides in the folder, and the vast majority of the time that’s bad. To put it simply:

Open Shares = Bad

Unfortunately, organizations usually have lots of open shares on their servers and NAS devices, and often quite a few contain sensitive data. With the native tools provided with Windows, these shares are very difficult to find and even harder to fix. Once they are remediated, it’s also difficult to make sure these folders stay locked down and that new, insecure folders aren’t created.
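To illustrate the check described above (a global access group on the folder’s NTFS ACL and on the permissions of the share it lives under), here’s a toy sketch in Python. The ACLs and share permissions are hard-coded sample data; actually gathering them from a real environment, and accounting for nested group membership and inheritance, is exactly the hard part that native tools don’t make easy.

```python
GLOBAL_GROUPS = {"Everyone", "Domain Users", "Authenticated Users"}

# Sample exported data; in reality this comes from scanning NTFS ACLs and share
# permissions (for example, parsed output of icacls and net share).
ntfs_acls = {
    r"\\filer01\dept$\finance":         {"Finance"},
    r"\\filer01\dept$\finance\payroll": {"Finance", "Everyone"},
    r"\\filer01\scratch$\temp":         {"Authenticated Users"},
}
share_permissions = {
    r"\\filer01\dept$":    {"Domain Users"},
    r"\\filer01\scratch$": {"Everyone"},
}

def containing_share(folder: str):
    """Find the share this folder lives under (longest matching prefix)."""
    matches = [s for s in share_permissions if folder.lower().startswith(s.lower())]
    return max(matches, key=len) if matches else None

def is_open(folder: str) -> bool:
    """Open share: a global group on the NTFS ACL *and* on the share's permissions."""
    share = containing_share(folder)
    if share is None:
        return False
    return bool(ntfs_acls.get(folder, set()) & GLOBAL_GROUPS) and \
           bool(share_permissions[share] & GLOBAL_GROUPS)

for folder in ntfs_acls:
    print("OPEN  " if is_open(folder) else "closed", folder)
```

Even this toy version shows why the problem compounds: the payroll folder above is open not because anyone shared it directly, but because an Everyone ACL sits under a share that Domain Users can reach.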

The good news is that metadata framework technology now exists to identify and remediate open shares, prioritize which ones to remediate first based on exposure, content and activity, and make sure that no one who has a legitimate need for access gets cut off. Once open shares are eliminated, a metadata framework can automatically detect a relapse as well as any newly created open shares.

Windows Auditing

Before we really dig into how we’re going to fix authorization problems, we need to tackle that last level of data protection: access auditing and analysis. With access control this basically means: are we recording what people are doing, and are we reviewing those logs to make sure what they’re doing is appropriate? Authorization implies a level of trust, but we still need to be able to verify, right?

Auditing is important because unless we’re looking closely at how people are accessing data, we can’t measure whether our access controls are optimized. We can’t see whether someone has too much access unless we analyze how they’re using the access they have. Without an audit trail, we also can’t observe someone abusing access (maybe they’re going out and grabbing everything they have access to so they can send it off to WikiLeaks, for example). Managing access control without an audit trail, as Yaki likes to say, is like running airplanes without air traffic controllers or streets without traffic lights. It sounds simple, right? In order to measure the effectiveness of a control, you need to watch what’s passing through it. The problem is that we haven’t had a good way of observing access activity when it comes to unstructured and semi-structured data.

Auditing access to structured systems isn’t terribly hard, since auditing and transaction logging tend to be built into databases. We haven’t been so lucky when it comes to unstructured and semi-structured data, though. It’s been a problem for so long that it seems a lot of us have just given up on knowing what’s going on. The problem, though, is that so much of our data is outside databases: 80% by some measures. If we don’t believe there’s anything of real value there, why is there so much of it? If we do believe it’s valuable, why aren’t we doing more to protect it?

If you’re running a Windows file server, I can almost guarantee you’re not using native auditing. It’s simply too resource-intensive and doesn’t offer enough functionality. The number I tend to hear from teams running Windows file servers is that enabling auditing (audit object access: success) means taking about a 30% performance hit on the server, to say nothing of the disk space required to store all those text logs. Even if you’ve got the logs, using them for anything productive is pretty much impossible. The same holds true for Solaris, Linux, and AIX. With Exchange we’ve got journaling and diagnostics, but even those are limited: you can’t see when people mark things as unread, for example, no matter how high you turn up your journaling level.
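And even if you do switch on native auditing and export the Security log, turning it into something readable takes extra work. Below is a rough sketch of boiling an exported log down to who touched which objects. It assumes the events have been dumped to a CSV with EventID, AccountName, and ObjectName columns (placeholder names, not a fixed Windows export format), and it filters on event ID 4663, the object-access event on Vista/2008-era systems.

```python
import csv
from collections import Counter, defaultdict

OBJECT_ACCESS_EVENT = "4663"   # "An attempt was made to access an object" (Vista/2008-era ID)

by_user = Counter()
by_user_object = defaultdict(Counter)

# security_log.csv: assumed export with EventID, AccountName, and ObjectName columns
with open("security_log.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        if row["EventID"] != OBJECT_ACCESS_EVENT:
            continue
        user, obj = row["AccountName"], row["ObjectName"]
        by_user[user] += 1
        by_user_object[user][obj] += 1

# Users generating the most object-access events; a sudden spike is worth a look.
for user, count in by_user.most_common(10):
    top_obj, top_count = by_user_object[user].most_common(1)[0]
    print(f"{user}: {count} accesses (most-touched object: {top_obj}, {top_count} times)")
```

A summary like this is the bare minimum, and it still leaves you hand-rolling collection, storage, and analysis for every server; that overhead is precisely why most shops give up on native auditing.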

What this all means is that no matter what controls we put in place, it’s likely that we’re not doing enough to measure the effectiveness of those controls. We need to do more (or anything at all, really), to make sure we’re properly auditing access. Luckily, auditing and metadata framework technologies like Varonis are now available to not only audit access activity, but also analyze it automatically.