Category Archives: IT Pros

Working With Windows Local Administrator Accounts, Part III

One point to keep in mind in this series is that we’re trying to limit the powers that are inherent in Administrator accounts. In short: use the Force sparingly. In the last post, we showed it’s possible to remove the local Administrator account and manage it centrally with GPOs. Let’s go over a few things I glossed over last time, and discuss additional ways to secure these accounts.

Restricted Groups: Handle with Care

In my Acme environment, the Restricted Groups GPO is used to push out a domain-level group to the local Administrators group in each of the OUs: one policy for Masa and Pimiento, another for Taco. It’s a neat trick, and for larger domains, it saves IT from having to script the change or spend time performing it manually.

To refresh memories, here’s how my GPO for Restricted Groups looked:

Replaces local Administrators groups with Acme-IT-1.

By using the “Member of this group” section, I’m forcing Group Policy to replace, not add to, each local Administrators group in my OU with Acme-IT-1. The problem is that you may overwrite existing group members, and you don’t know which services or apps depend on certain local accounts being there.

You’ll likely want to evaluate this idea on a small sample first. It may involve extra work: local scripts to re-add those accounts, or possibly new domain-level accounts that can be added to the group above.
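For example, here’s a minimal sketch of the kind of re-add script you might push out. It assumes PowerShell 5.1+ (for the LocalAccounts cmdlets), and the service account name is purely hypothetical:

# Re-add a local member that the Restricted Groups "Replace" behavior wiped out.
# ACME\svc-legacy is a hypothetical account; substitute whatever your apps depend on.
$required = 'ACME\svc-legacy'
$members = Get-LocalGroupMember -Group 'Administrators' | Select-Object -ExpandProperty Name
if ($members -notcontains $required) {
    Add-LocalGroupMember -Group 'Administrators' -Member $required
}

Keep in mind that Restricted Groups reapplies on every policy refresh, so a script like this is a stop-gap while you sort out the membership list.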

Or, if you prefer, you can use Group Policy Preferences (GPP). It has an update option for adding a new group (or user) to the local Administrators group (below). We know not to use GPP to reset local Administrator account passwords, right?

With GPP, you can add Acme-IT-2 to the local Administrators groups.

Even More Secure

There is, sigh, a problem with using Restricted Groups and centrally managed domain-level Administrator accounts. Since, by default, all domain users can log on to any machine in the domain, these domain-level local Administrator accounts can be exploited through Pass-the-Hash (PtH) techniques — get the NTLM hash, pass it to psexec — to log on to any other machine in the network.

This was the conundrum we were grappling with in the first place! Recall: local Administrator accounts are usually given simple — easily guessed or cracked — passwords, which can then be leveraged to log on to other machines. We wanted to avoid having an Administrator-level local account that can potentially be used globally.

As I mentioned in the second post, this security hole can be addressed by creating a GPO — under User Rights Assignment — to restrict network access altogether. That may not be practical in all cases for Administrator accounts.

Another possibility is to limit the machines these domain-level Administrator accounts can log on to. And again we make a lifeline call to User Rights Assignment, this time enlisting the “Allow log on locally” property and adding the Acme-IT-1 Administrators group (below). We would do the same for the other OU in the Acme domain, but adding the Acme-IT-2 group.

This GPO prevents the accounts from logging on to machines outside the specified OU. So even if a clever hacker gets into the Acme company, he could pass the hash with an Administrator account, but only within that OU.

It’s a reasonable solution. And I realize that many companies likely already use this GPO property for ordinary user accounts, for just the reasons I noted above.

Additional Thoughts

In writing this brief series, I quickly came to the conclusion that zillions of IT folks already know in their bones: you’re always trying to balance security against convenience. You won’t have a perfect solution, and you’ve probably erred on the side of convenience (to avoid getting shouted at by the user community).

Of course, you live with what you have. But then you should compensate for potential security holes by stepping up your monitoring game! You know where this is going.

One final caveat goes back to my amazing pen testing series, where I showed how delegated Administrator groups can be leveraged to let a hacker hop more freely around a domain — this has to do with accounts being in more than one Active Directory group. Take another look at it!

[Podcast] Roxy Dee, Threat Intelligence Engineer

Some of you might be familiar with Roxy Dee’s infosec book giveaways. Others might have met her recently at Defcon as she shared with infosec n00bs practical career advice. But aside from all the free books and advice, she also has an inspiring personal and professional story to share.

In our interview, I learned that she had a budding interest in security early on, but lacked the funds to pursue her passion. How did she work around her financial constraints? Free videos and notes from Professor Messer! What’s more, she thrived in her first post providing tech support for Verizon Fios. With grit, discipline and volunteering at BSides, she eventually landed an entry-level position as a network security analyst.

Now she works as a threat intelligence engineer, and in her spare time she writes how-tos and shares sage advice on her Medium account, @theroxyd.

Transcript

Cindy Ng: For individuals who have had a nonlinear career path in security, Threat Intelligence Engineer Roxy Dee knows exactly what that entails. She begins by describing what it was like to learn about a new industry with limited funding, and how she studied security fundamentals in order to get her foot in the door. In our interview, she reveals three things you need to know about vulnerability management, why fraud detection is a lot like network traffic detection, and how to navigate your career with limited resources.

We currently have a huge security shortage, and people are making analogies as to the kind of people we should hire. For instance, if you’re able to pick up music, you might be able to pick up technology. And I’ve found that in security it’s extremely important to be detail oriented, because the adage is the bad guys only need to be right once and security people need to be right all the time. And I had read on your Medium account about the way you got into security, for practical reasons. And so let’s start there, because it might help encourage others to start learning about security on their own. Tell us what aspect of security you found interesting and the circumstances that led you in this direction.

Roxy Dee: Just to comment on what you’ve said: that’s a really good reason to make sure you have a diverse team, because everybody has their own special strengths, and having a diverse team means that you’ll be able to fight the bad guys a lot better, because there will always be someone that has that strength where it’s needed. The bad guys can develop their own team the way they want, and so it’s important to have a diverse team, because every bad guy you meet is going to be different. That’s a very good point in itself.

Cindy Ng: Can you clarify “diverse?” You mean everybody on your team is going to have their own specialty that they’re really passionate about? By knowing what they’re passionate about, you know how to leverage their skill set? Is that what you mean by diversity?

Roxy Dee: Yeah. That’s part of it. I mean, just making sure that you don’t have all the same kind of person. For example, I’ll tell my story like you asked in the original question. As a single mom, I have a different experience than someone that has had fewer difficulties in that area, so I might think of things differently, or be resourceful in different ways. Or take report writing: I can write well, but I haven’t had the practice of writing reports. Somebody that went to college might have that, because they were kind of forced to do it. So you benefit by having people from different backgrounds that have had different struggles.

And I got into security because I was already into phone phreaking, which is a way of hacking the phone system. And so for me, when I went to my first 2600 Meeting and they were talking about computer security and information security, it was a new topic and I was kind of surprised. I was like, “I thought 2600 was just about phone hacking.” But I realized that at the time…It was 2011, and phone hacking had become less of a thing and computer security became more of something. I got the inspiration to go that route, because I realized that it’s very similar. But as a single mom, I didn’t have the time or the money to go to college and study for it. So I used a lot of self-learning techniques, I went to a lot of conferences, I surrounded myself with people that were interested in the topic, and through that I was able to learn what I needed to do to start my career.

Cindy Ng: People have trouble learning the vocabulary because it’s like learning a new language. How did you…even though you were into phone hacking and the transition into computer security, it has its own distinct language, how did you make the connections and how long did it take you? What experiences did you surround yourself with to cultivate a security mindset?

Roxy Dee: I’ve been on computers since I was a little kid, like four or five years old. So it may not be as difficult for me as for other people, because I kind of grew up on computers. Having that background helped. But when it came to information security, there were a lot of times where I had no idea what people were saying. Like I did not know what “reverse engineering” meant, or I didn’t know what “Trojan” meant. And now, it’s like, “Oh, I obviously know what those things are.” But I had no idea what people were talking about. So going to conferences and watching DEF CON talks, and listening to people, helped. By the time I had gone to DEF CON about three times, I think it was my third time, I thought, “Wow. I actually know what people are saying now.” It’s just a gradual process, because I didn’t have that formal education.

There were a few conferences that I volunteered at. Mostly at BSides. And BSides are usually free anyway. When you volunteer, you become more visible in the community, and so people will come to you or people will trust you with things. And that was a big part of my career, was networking with people and becoming visible in the community. That way, if I wanted to apply for a job, if I already knew someone there or if I knew someone that knew someone, it was a lot easier to get my resume pushed to the hiring manager than if I just apply.

Cindy Ng: How were you able to land your first security job?

Roxy Dee: As far as my first infosec job goes, I was working in tech support and I was doing very well at it. I was at the top of the metrics; I was always in like the top 10 agents.

Cindy Ng: What were some of the things that you were doing?

Roxy Dee: It was tech support for Verizon Fios. There was a lot of, “Restart your router,” “Restart your set-top box,” things like that. But I was able to learn how to explain things to people in ways that they could understand. So it really helped me understand tech speak, I guess, understand how to speak technically without losing the person, like a non-technical person.

Cindy Ng: And then how did you transition into your next role?

Roxy Dee: It all had to do with networking, and at this point, I had volunteered for a few BSides. So actually, someone that I knew at the time told me about a position that was an entry-level network security analyst, and all I needed to do was get my Security+ certification within the first six months of working there. And so it was an opportunity for me because they accepted entry-level. And when they gave me the assessment that they give people they interview, I aced it because I had already studied networking through a website called “Professor Messer.” And that website actually helped me with Security+ as well; his entire site is just YouTube videos. So once I got there, I took my Security+ and I ended up, actually, on the night shift. So I was able to study in quiet during my shift every day at work. I just made it a routine: “I have to spend this amount of time studying on” whatever topic I wanted to move forward with. And I knew what to study because I was going to conferences and taking notes from the talks, writing down things I didn’t understand or words I didn’t know, and then later researching that topic so I could understand more. And then I would watch the talk again with that understanding if it was recorded, or I would go back to my notes with that understanding. The fact that I was working overnight and was not interrupted really helped. And that was a very entry-level position. From there, I went to a secure cloud hosting company with a focus on security, and the great thing about that was that it was a startup. They didn’t have a huge staff, and they had a ton of things they had to do and a bunch of unrealistic deadlines. So they would constantly be throwing me into situations I was not prepared for.

Cindy Ng: Can you give us an example?

Roxy Dee: Yeah. That was really like the best training for me, just being able to do it. So when they started a Vulnerability Management Program, I had no experience in vulnerability management before this, and they wanted me to be one of the two people on the team. So I had a manager, and then I was the only other person. Through this position, I learned what good techniques are, and I was also inspired to do more research on it. And if I hadn’t been given that position, I wouldn’t have been inspired to look it up.

Cindy Ng: What does Vulnerability Management entail, three things that you should know?

Roxy Dee: Yeah. So Vulnerability Management has a lot to do with making sure that all the systems are up to date on patching. That’s one of them. The second thing I would say that’s very important is inventory management, because there were some systems that nobody was using and vulnerabilities existed there, but there was actually no one to fix them. And so if you don’t take proper inventory of your systems and you don’t do discovery scans to discover what’s out there, you could have something sitting there that an attacker, once they get in, could use or might have access to. And then another thing that’s really important in Vulnerability Management is actually managing the data, because you’ll get a lot of data, but if you don’t use it properly it’s pretty much useless. You need a system to track your compliance requirements: “When did I discover this and when is it due? What are the vulnerabilities and what are the systems? What do the systems look like?” So there’s a lot of data you’re going to get, and you have to manage it, or you will be completely unable to use it.

Cindy Ng: And then you moved on into something else?

Roxy Dee: Oh, yes. Actually, it being a startup kind of wore on me, to be honest. So I got a phone call from a recruiter, actually, while I was at work.

This was another situation where I had no idea how to do what I was tasked with, and the task was…So from my previous positions, I had learned how to monitor and detect, and how to set up alerts, useful alerts that can serve, you know, whatever purpose was needed. So I already had this background. So they said, “We have this application. We want you to log into it, and do whatever you need to do to detect fraud.” Like it was very loosely defined what my role was, “Detect bad things happening on the website.” So I find out that this application actually had been stood up four years prior and they kind of used it for a little while, but then they abandoned it.

And so my job was to bring it back to life and fix some of the issues that they didn’t have time for, or they didn’t actually know how to fix or didn’t want to spend time fixing them. That was extremely beneficial. I had been given a task, so I was motivated to learn this application and how to use it, and I didn’t know anything about fraud. So I spent a lot of time with the Fraud Operations team, and through that, through that experience of being given a task and having to do it, and not knowing anything about it, I learned a lot about fraud.

Cindy Ng: I’d love to hear from your experience what you’ve learned about fraud that most people might not know.

Roxy Dee: What I didn’t consider was that, actually, fraud detection is very much like network traffic detection. You look for a type of activity or a type of behavior and you set up detection for it, and then you make sure that you don’t have too many false positives. And it’s very similar to what network security analysts do. And when I hear security people say, “Oh, I don’t even know where to start with fraud,” well, just think about from a network security perspective if you’re a network security analyst, how you would go about detecting and alerting. And the other aspect of it is the fraudulent activity is almost always an anomaly. It’s almost always something that is not normal. If you’re just looking around for things that are off or not normal, you’re going to find the fraud.

Cindy Ng: But how can you tell what’s normal and what’s not normal?

Roxy Dee: Well, first, it’s good to look at all sorts of sessions and all sorts of activity and get a baseline of, you know, “This is normal activity.” But you can also talk to the Fraud team, or whatever team handles the issue (it’s not specific to fraud; if you’re detecting something else, talk to the people that handle it). And ask them, “What would make your alerts better? What is something that has not been found before, or something that you were alerted to, but too late?” Ask a bunch of questions, and through asking you’ll find what you need to detect.

Like for example, there was one situation where we had a rule that if a certain amount was sent in a certain way, like a wire, that it would alert. But what we didn’t consider was, “What if there’s smaller amounts that add up to a large amount?” And understanding…So we found out that, “Oh, this amount was sent out, but it was sent out in small pieces over a certain amount of time.” So through talking to the Fraud Operations team, if we didn’t discuss it with them, we never would have known that that was something that was an issue. So then we came up with a way to detect those types of fraudulent wire transfers as well.

Cindy Ng: How interesting. Okay. You were talking about your latest role at another bank.

Roxy Dee: I finished my contract and then I went to my current role, which focuses on a lot more than just online activity. I have more to work with now. With each new position, I just kind of layered more experience on top of what I already knew. And I know it’s better to work for a company for a long time and I kind of wish these past six years, I had been with just one company.

Each time that I changed positions, I got more responsibility, pay increase, and I’m hoping I don’t have to change positions as much. But it kind of gave me like a new environment to work with and kind of forced me to learn new things. So I would say, in the beginning of your career, don’t settle. If you get somewhere and you don’t like what you’re being paid, and you don’t think your career is advancing, don’t be afraid to move to a different position, because it’s a lot harder to ask for a raise than to just go somewhere else that’s going to pay you more.

So I’m noticing a lot of the companies that I’m working for will expect the employees to stay there without giving them any sort of incentive to stay. And so when a new company comes along, they say, you know, “Wow. She’s working on this and that, and she’s making x amount. And we can take all that knowledge that she learned over there, and we can basically buy it for $10,000 more than what she’s making currently.” So companies are interested in grabbing people from other companies that have already had the experience, because it’s kind of a savings in training costs. So, you know, I try to look every six months or so, just to make sure there’s not a better deal out there, because they do exist. And I don’t know how that is in other fields, though. I know in information security, we have that. That’s just the nature of the field right now.

Cindy Ng: I think I got a good overview of your career trajectory. I’m wondering if there’s anything else that you’d want to share with our listeners?

Roxy Dee: Yeah. I guess, I pretty much have spent…So the first two or three years, I spent really working on myself, and making sure that I had all the knowledge and resources I needed to get that first job. The person that I was five or six years ago is different than who I am now. And what I mean is, my situation has changed a bit, to where I have more income and I have more capabilities than I did five years ago. One of the things that’s been important to me is giving back and making sure that, you know, just because I went through struggles five years ago…You know, I understand we all have to go through our struggles. But if I can make something a little bit easier for someone that was in my situation or maybe in a different situation but still needs help, that’s my way of giving back.

And spending $20 to buy someone a book is a lot less of a hit on me financially than it would have been five years ago. Five years ago, I couldn’t afford to drop even $20 on a book to learn. I had to do everything online, and everything had to be free. I just want to encourage people: if you see an opportunity to help someone, take it. For example, if you see someone that wants to speak at a conference and they just don’t have the resources to do so, you might think, “Well, this $100-a-night hotel room is less of a financial hit to me than to that person. And that could mean the difference between them having a career-building opportunity or not.” Just seek out ways to help people. One of the things I’ve been doing is the free book giveaway, where I actually have people sending me Amazon gift cards, and there is actually one person that’s done it consistently in large amounts. And what I do with that is, every two weeks, I have a tweet that I send out, and if you reply to it with the book that you want, then you can win that book, up until I run out of Amazon dollars.

Cindy Ng: Is this person an anonymous patron or benefactor? This person just sends you an Amazon gift card…with a few bucks and you share it with everyone? That’s so great.

Roxy Dee: And other people have sent me, you know, $20 to $50 in Amazon credits, and it’s just a really good thing. It kind of happened accidentally, and the story of it is on my Medium account.

Cindy Ng: What were the last three books that you gave away?

Roxy Dee: Oh, the last three? Well…

Cindy Ng: Or the last one, if you…

Roxy Dee: …the most popular one right now, this is just based on the last one that I did, is the Defensive Security Handbook. That was the most popular one. But I also get a lot of requests for Practical Packet Analysis by Chris Sanders and Practical Malware Analysis. And so this one, actually, this is a very recent book that came out called the Defensive Security Handbook. That’s by Amanda Berlin and Lee Brotherston. And that’s about…it says, “Best practices for securing infrastructure.” So it’s a blue team-themed book. That’s actually sold over 1,000 copies already and it just came out recently. It came out about a month ago. Yeah. So I think that’s going to be a very popular book for my giveaways.

Cindy Ng: How are you growing yourself these days?

Roxy Dee: Well, I wanted to spend more time writing guides. I just want to write things that can help beginners. I have set up my Medium account, and I posted something on setting up a honeypot network, which is a very…it sounds very complicated, but I broke it down step by step. So my goal in this was to make one article where you could set it up. Because a lot of the issues I was having was, yeah, I might find a guide on how to do something, but it didn’t include every single step. Like they assumed that you knew certain things before you started on that guide. So I want to write things that are easy for people to follow without having to go look up other sources. Or if they do have to look up another source, I have it listed right there. I want to make things that are not assuming that there’s already prior knowledge.

Cindy Ng: Thank you so much for sharing with me, with our listeners.

Roxy Dee: Thank you for letting me tell my story, and I hope that it’s helpful to people. I hope that people get some sort of inspiration, because I had a lot of struggles and, you know, there’s plenty of times I could have quit. And I just want to let people know that there are other ways of doing things and you don’t have to do something a certain way. You can do it the way that works for you.

Brute Force: Anatomy of an Attack

The media coverage of NotPetya has overshadowed what might have been a more significant attack: a brute force attack on the UK Parliament. While for many it was simply fertile ground for Twitter Brexit jokes, an attack like this, targeting a significant government body, is a reminder that brute force remains a common threat to be addressed.

It also raises important questions as to how such an attack could have happened in the first place, and it suggests that we need to look deeper into this important, but often misunderstood, type of attack.

Brute force defined

At the end of the day, there are two methods for infiltrating an organization: exploiting human error, or guesswork. Exploiting human error has many attack variants: phishing (a user mistake), taking advantage of a flawed configuration (an administrator mistake), or abusing a zero-day vulnerability (a developer mistake). Guesswork, on the other hand, is encompassed in one type of attack: brute force.

Most commonly, a brute force attack is used to guess credentials, though it may be used to guess other things such as URLs.

A classic brute force attack is an attempt to guess passwords at the attacker’s home base, once the attacker has gotten hold of encrypted passwords. It enables the attacker to use powerful computers to test a large number of passwords without risking detection. On the other hand, such an attack can’t be the first step of an intrusion, since the attacker must already have a copy of the victim’s encrypted passwords.

The online, real-time variant of the attack tries to pound a login function of a system or an application to guess credentials. Since it avoids the need to get encrypted passwords in the first place, attackers can use this technique when attempting to penetrate a system on which they have no prior foothold.

Do online brute force attacks happen?

Trying to guess both a user name and a password is ultimately very hard. Since most systems don’t report whether it was the username or the password that was wrong on a failed login, the first shortcut an attacker takes is to attack known users. The attacker can find usernames using open source intelligence: in many organizations, for instance, usernames have a predictable structure based on the employee’s name, and a simple LinkedIn search will reveal a large number of them.

That said, this type of classic online brute force attack (at least against well-established systems) is more myth than reality. The reason is simple: most modern systems and applications have a built-in countermeasure, the lockout. If a user fails to log in more than a few times, the account is locked and requires administrator intervention to unlock. Today, it’s the lockouts that create headaches for IT departments rather than brute force itself, making lockout monitoring more essential than brute force detection.

The exception to this is custom-built applications. While the traditional Windows login may not be exploitable, a new web application developed specifically for an upcoming holiday marketing campaign may very well be.

In comes credential stuffing

While classic online brute force attacks seem to be diminishing, credential stuffing is making its way to center stage. Credential stuffing is an attack in which attackers use credentials (username/password pairs) stolen from public internet sites to break into a target system. The number of successful attacks against public websites is increasing, and the attackers publish the credential databases or sell them on underground exchanges. The assumption, which too often holds true, is that people reuse the same username and password across sites.

Credential stuffing bypasses lockout protections since each username is used only once.  By using known username/password pairs, credential stuffing also increases the likelihood of success with a lower number of tries.

Since lockout is not effective as a countermeasure, organizations incorporate two-factor authentication mechanisms to try to avoid credential stuffing – and the use of stolen credentials in general. Two-factor authentication requires a user to have something else besides a password to authenticate: for example, a particular cell phone on which the user can receive a text message. Since two-factor authentication is cumbersome, a successful authentication usually approves any “similar” access. “Similar” may imply the use of the same device or the same geographical location. Most of us have experienced public websites requiring two-factor authentication when we accessed them from a new device, a public computer, or when traveling.

While two-factor authentication is a robust solution, it has significant downsides: it alters the user experience and requires an interactive login. Nothing is more frustrating than being asked for two-factor authentication on your phone just when landing in a foreign airport after a long flight. As a result, it is often left as an option for the user. And so, this calls for a detection system, often utilizing analytics, to recognize brute force attacks in action.

Detecting brute force attacks

The most commonly prescribed detection method for brute force seems to address the classic but impractical variant of the attack: detecting multiple failed login attempts for a single user over a short span of time. Many beginner exercises in creating SIEM (Security Information and Event Management) correlation rules focus on detecting brute force attacks by identifying exactly this scenario. While elegant and straightforward as an exercise, it addresses a practically non-existent attack vector, and we need much more to detect real-world brute force attacks.

The factor that will hold true for any authentication brute force attack is a large number of failed authentication attempts. However, since the user cannot be the key for detection, a detection system has to focus on another key to connect the thread of events making up the attack.

One method is to track failed authentications from a single source, often the source IP address. However, with public IP addresses becoming scarcer and more expensive, more and more users end up sharing the same source IP.

To overcome this, a detection mechanism can learn the normal rate of connections or failures from a source IP to determine what an abnormal rate would be, thus taking into account the fact that multiple users might be connecting from the same source IP. The detector might also use a device fingerprint (a combination of properties in the authentication event that are typical of a device) to identify a particular source among those using the same IP address. This cannot be a primary factor, however, and can only help verify detection, as most of the fingerprinting properties are under the control of the attacker and can be forged.
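As a toy illustration of per-source counting (the static-threshold version, not the learned baseline), here’s a hedged PowerShell sketch; the one-hour window and the threshold of 20 are arbitrary assumptions:

# Count failed logons (event ID 4625) per source IP over the last hour and flag noisy sources.
$threshold = 20   # assumed; a real detector would learn a per-source baseline
Get-WinEvent -FilterHashtable @{ LogName='Security'; Id=4625; StartTime=(Get-Date).AddHours(-1) } |
    ForEach-Object {
        ([xml]$_.ToXml()).Event.EventData.Data |
            Where-Object { $_.Name -eq 'IpAddress' } |
            ForEach-Object { $_.'#text' }
    } |
    Group-Object | Where-Object { $_.Count -gt $threshold } |
    Sort-Object Count -Descending | Format-Table Name, Count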

Distributed attacks – for example by utilizing a botnet or by rerouting attempts through a network of privacy proxies such as TOR – further complicate the challenge since source monitoring becomes irrelevant. Device fingerprinting may work to an extent, especially if the attack is still carried out by a single source but through multiple routes. An additional approach is to use threat intelligence, which would help to identify access from known botnet nodes or privacy proxies.

The Practicalities of Detection

So far, we’ve assumed that the events used for analysis are neat and tidy: any failed login event is clearly labeled as a “login,” the result is clearly identified as success or failure, and the username is always in the same field and format.

In reality, processing an event stream to make it ready for brute force detection analysis is an additional challenge to consider.

Let’s take Windows, the most ubiquitous source of them all, as an example. The Windows successful login event (event ID 4624) and failed login event (event ID 4625) are logged locally on each computer. This makes it more challenging (though not impossible) to collect them. It also means that the attacker, who may own the computer, can prevent them from ever being received. The domain controller registers an authentication event, which can be used as a proxy for a login event. This one comes in Kerberos (event ID 4768) and NTLM (event ID 4776) variants for Windows 2008 and above, and yet another set of event IDs for previous Windows versions.
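To give a flavor of the collection work, here’s a hedged PowerShell sketch that pulls the domain controller variants; note that for 4768 and 4776 the verdict lives in the event body, not in the event ID itself:

# Run on a domain controller: fetch recent Kerberos (4768) and NTLM (4776) authentication events.
# Success vs. failure must be read out of each event's Result Code / Error Code field (0x0 = success).
Get-WinEvent -FilterHashtable @{ LogName='Security'; Id=4768,4776 } -MaxEvents 200 |
    Select-Object TimeCreated, Id, Message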

Once we know which events to track, we still need to know how to identify success and failure correctly. Local login success and failure are separate events, while for the domain controller authentication events, success and failure are flagged within the event.
The following Splunk search (from the GoSplunk search repository), which I’ve used to identify failed vs. successful Windows logins, demonstrates the level of knowledge needed to extract such information from events – and it doesn’t even support the domain controller authentication alternative.

source="WinEventLog:security" (Logon_Type=2 OR Logon_Type=7 OR Logon_Type=10)
  (EventCode=528 OR EventCode=540 OR EventCode=4624 OR EventCode=4625 OR EventCode=529
   OR EventCode=530 OR EventCode=531 OR EventCode=532 OR EventCode=533 OR EventCode=534
   OR EventCode=535 OR EventCode=536 OR EventCode=537 OR EventCode=539)
| eval status=case(
    EventCode=528,  "Successful Logon",
    EventCode=540,  "Successful Logon",
    EventCode=4624, "Successful Logon",
    EventCode=4625, "Failed Logon",
    EventCode=529,  "Failed Logon",
    EventCode=530,  "Failed Logon",
    EventCode=531,  "Failed Logon",
    EventCode=532,  "Failed Logon",
    EventCode=533,  "Failed Logon",
    EventCode=534,  "Failed Logon",
    EventCode=535,  "Failed Logon",
    EventCode=536,  "Failed Logon",
    EventCode=537,  "Failed Logon",
    EventCode=539,  "Failed Logon")
| stats count by status
| sort - count

Detection through the Cyber Kill-Chain

The brute force example makes it clear that attack detection is not trivial. None of the detection methods described above is bulletproof, and attackers are continually enhancing their detection avoidance methods.

Detection requires expertise in both attack techniques and the behavior of the monitored systems. It also requires ongoing updates and enhancements. It’s therefore critical that any system used for detecting brute force attacks include out-of-the-box detection algorithms, and updates to those algorithms. It’s not enough to simply provide the means to collect events and write the rules or algorithms, leaving the user with the task of implementing the actual detection logic. It also suggests that it’s worthwhile to understand how a system detects brute force, rather than just relying on a general promise of brute force detection. There’s just so much more to it than a textbook example will provide.

It is also yet another great example of why a layered detection system, one that will capture attackers past the infiltration point, is critical, and why the cyber kill chain is a good place to start.

Exploring Windows File Activity Monitoring with the Windows Event Log

One might hope that Microsoft would provide straightforward and coherent file activity events in the Windows event log. A file event log is important for all the usual reasons: compliance, forensics, monitoring privileged users, and detecting ransomware and other malware attacks while they’re happening. And a log of file activities seems so simple and easy, right? All that’s needed is a timestamp, user name, file name, operation (create, read, modify, rename, delete, etc.), and a result (success or failure).

But this is Microsoft. And they never, ever do anything that’s nice and easy.

The Case of “Delete”

Let’s start with delete, the simplest scenario. Google “Windows file delete event” and the first answer, as usual, will be from Randy Franklin Smith’s Windows Event Encyclopedia: “4660: An object was deleted”.

To a Windows event newbie, it would appear that our research effort is complete. Unfortunately, this delete event is missing one critical piece of information: the file name. Why does that matter? If you’re detecting ransomware, you’d want to know which files are being encrypted. Likewise, when being alerted on privileged user access to sensitive files, you’d expect to know exactly which files were accessed.

The hard part is determining the file name associated with an event. Even something as simple as a file name isn’t easy to identify, because Windows event information is spread out across multiple log entries. For example, you’d have to correlate the delete event 4660 with the “access object” event 4663. In practice, you’d create a search for matching 4660 and 4663 events, and then combine information from both events to derive a more user-friendly log entry.
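Here’s a hedged sketch of that correlation in PowerShell. It assumes the standard Security-log EventData field names (HandleId, ObjectName, SubjectUserName) and is a starting point rather than a hardened implementation:

# Index 4663 events by Handle ID, then resolve the file name when the matching 4660 (delete) arrives.
$handleToFile = @{}
Get-WinEvent -FilterHashtable @{ LogName='Security'; Id=4660,4663 } -MaxEvents 1000 |
    Sort-Object TimeCreated |
    ForEach-Object {
        $data = @{}
        ([xml]$_.ToXml()).Event.EventData.Data | ForEach-Object { $data[$_.Name] = $_.'#text' }
        if ($_.Id -eq 4663) {
            $handleToFile[$data['HandleId']] = $data['ObjectName']
        } elseif ($handleToFile.ContainsKey($data['HandleId'])) {
            '[{0}] {1} deleted {2}' -f $_.TimeCreated, $data['SubjectUserName'], $handleToFile[$data['HandleId']]
        }
    }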

Let’s continue discussing the correlations needed to implement file activity monitoring using the Windows event log. The actual implementation depends on the tools used to collect the event: a database query, a program, or a dedicated correlation engine – a program tailored for such activity.

The Windows File Activity Audit Flow

Windows does not log file activity in a way that you’d expect. Instead, it logs granular file operations that require further processing. So let’s deep dive into how Windows logs file operations.

The diagram below outlines how Windows logs each file operation using multiple micro-operation events.

The delete operation is a unique case in that there is a fourth event, 4660, mentioned above. The sequence is identified by the “Handle ID” event property, which is unique to the sequence (at least until a reboot).

The event that provides the most information is 4663, which identifies that an attempt was made to access an object. The name is misleading, though, because Windows only issues the event when the operation is complete. In reality, there might be multiple 4663 events for a single handle, logging the smaller operations that make up the overall action. For example, a rename involves a read, a delete, and a write operation. Also, a single 4663 event might include multiple values in its “Accesses” property, which lists the access rights exercised to perform the operation. Those “Accesses” serve as our guideline for the operations themselves. More on that later.

The following list provides more information about each event:

  • 4656 (“A handle to an object was requested”): logs the start of every file activity, but does not guarantee that it succeeded. Provides the name of the file.
  • 4663 (“An attempt was made to access an object”): logs the specific micro-operations performed as part of the activity. Tells you what exactly was done.
  • 4660 (“An object was deleted”): logs a delete operation. It is the only way to verify that an activity is actually a delete.
  • 4658 (“The handle to an object was closed”): logs the end of a file activity. Tells you how much time it took.

One step you would not want to skip is setting Windows to log those events, which is not the default. You’ll find a good tutorial on how to do that here.

You get the idea: you’re dealing with lots of different low-level events related to higher-level file actions. Let’s take up the problem of correlating them to produce user-friendly information.

Interpreting “Accesses”

To identify the actual action, we need to decode the exercised permissions as reported in the “Accesses” event property. Unfortunately, this is not a one-to-one mapping. Each file action is made up of many smaller operations that Windows performs and those smaller operations are the ones logged.

The more important “Accesses” property values are:

  • “WriteData” implies that a file was created or modified, unless a “Delete” access was recorded for the same handle.
  • “ReadData” will be logged as part of practically any action. It implies “Access” if no “Delete” or “WriteData” was encountered for the same handle and the same file name around the same time.
  • “Delete” may signify many things: a delete, a rename (same folder), a move (to a different folder), or a recycle, which is essentially a move to the recycle bin. Event 4660 differentiates the cases: a delete or recycle issues a 4660 event with the same handle, while a rename or move does not. The sketch after this list turns these rules into code.
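A hedged sketch of those rules: the inputs are the “Accesses” values collected from all 4663 events for one handle, plus whether a 4660 arrived for it (the function name is my own):

function Resolve-FileAction {
    param([string[]]$Accesses, [bool]$Saw4660)
    if ($Accesses -match 'DELETE') {
        # 4660 present: true delete (or recycle); absent: rename (or move)
        if ($Saw4660) { 'Delete (or Recycle)' } else { 'Rename (or Move)' }
    }
    elseif ($Accesses -match 'WriteData') { 'Create or Modify' }
    elseif ($Accesses -match 'ReadData')  { 'Access (Read)' }
    else { 'Attribute or Other' }
}

# Example: a rename exercises ReadData, WriteData and DELETE, but never issues a 4660
Resolve-FileAction -Accesses 'ReadData (or ListDirectory)','WriteData (or AddFile)','DELETE' -Saw4660:$false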

Complex?

Consider this only as a starting point. The analysis above is somewhat simplified, and real-world implementation will require more research. Some areas for further research are:

  • Differentiating delete from recycle and move from rename.
  • Analyzing attributes access (with or without other access operations).
  • Handling event 4659, which is similar to 4660 but is logged on a request to delete a locked file on the next reboot, rather than deleting it immediately.
  • Researching reports that events come out of order and the “request handle event” (4656) may not be the first in the sequence.

You may want to review this PowerShell script, which reads Windows events and generates a meaningful file activity report from them, to get a somewhat less simplified analysis. A word of warning: it is not for the faint-hearted!

Windows Event Log Limitations for File Access Monitoring

While the Windows file activity events seem comprehensive, there are things that cannot be determined using only the event log. A few examples are:

  • Create vs. modify: the only way to know if this is a new file or a modified file is to have knowledge of the prior state, i.e. whether the file existed before.
  • Missing information on failures: in the case of an operation that was rejected due to insufficient permissions, the only event issued is 4656. Without the full sequence, most of the processing described in this article is not possible.
  • Cut & paste: while one would assume “cut and paste” would behave like a move operation, in practice it seems to behave like a delete operation followed by a create operation, with no relation whatsoever between the two.

Scalability Considerations

Collecting Windows file activity means handling a massive event flow, and the Microsoft event structure, which generates many events for a single file action, does not help. Such collection will require more network bandwidth to transfer events and more storage to keep them. Furthermore, the sophisticated logic required may need a powerful processing unit and a lot of memory.

To reduce the overhead, you may want to:

  • Carefully select which files you monitor based on the scenario you plan to implement. For example, you may want to track only system files or shares that include sensitive data.
  • Limit collection of unneeded events at the source. If your collection infrastructure uses Microsoft Event Forwarding, you can build sophisticated filters based on event IDs and event properties. In our case, filter only events 4656, 4660, 4663 and optionally 4658, and only for the “Accesses” values needed (see the sketch after this list).
  • Limit event storage and event sizes as raw Windows events are sizable.
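As a concrete illustration of the second bullet, here is a hedged sketch of such an event-ID filter expressed as an XPath query; you can test it locally with Get-WinEvent before building it into a forwarding subscription:

# Select only the file-activity events discussed above from the Security log.
$xpath = '*[System[EventID=4656 or EventID=4658 or EventID=4660 or EventID=4663]]'
Get-WinEvent -LogName Security -FilterXPath $xpath -MaxEvents 50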

An Alternative Approach

Real-world experience shows that not many organizations succeed in utilizing the Windows event log for file activity monitoring. An alternative approach for implementing this important security and compliance measure is to use a lightweight agent on each monitored Windows system, with a focus on file servers. Such an agent, like the Varonis agent that works alongside DatAdvantage, records file activity with minimal server and network overhead, enabling better threat detection and forensics.

Disabling PowerShell and Other Malware Nuisances, Part III

This article is part of the series "Disabling PowerShell and Other Malware Nuisances". Check out the rest:

One of the advantages of AppLocker over Software Restriction Policies is that it can selectively enable PowerShell for Active Directory groups. I showed how this can be done in the previous post. The goal is to limit as much as possible the ability of hackers to launch PowerShell malware, but still give legitimate users access.

It’s a balancing act of course. And as I suggested, you can accomplish the same thing by using a combination of Software Restriction Policies (SRP) and ACLs, but AppLocker does this more efficiently in one swoop.

Let’s Get Real About Whitelisting

As a practical matter, whitelisting is just plain hard to do, and I’m guessing most IT security staff won’t go down this route. However, AppLocker does provide an ‘audit mode’ that makes whitelisting slightly less painful than SRP.

AppLocker can be configured to log events that show up directly in the Windows Event Viewer. For whatever reason, I couldn’t get this to work in my AWS environment. But this would be a little less of a headache than setting up a registry entry and dealing with a raw log file, which is the SRP approach.

In any case, I think most of you will try what I did. I took the default rules provided by AppLocker to enable the standard Windows system and program folders, added an exception for PowerShell, and then created a special rule to allow only members of a select AD group (Acme-VIPs in my case) to access PowerShell.

AppLocker: Accept the default path rules, and then selectively enable PowerShell.

Effectively, I whitelisted all-the-usual Windows suspects, and then partially blacklisted PowerShell.

PowerShell for Lara, who’s in the Acme-VIPs group, but no PowerShell for Bob!
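If you’d rather script the rule than click through the wizard, the AppLocker PowerShell cmdlets can generate it. Here’s a minimal, hedged sketch assuming the ACME domain and the Acme-VIPs group from this series; it only creates the allow rule (the PowerShell exception in the default rules still has to be in place), and you should review the generated XML before merging:

# Build an AppLocker policy fragment that allows powershell.exe for ACME\Acme-VIPs,
# then merge it into the effective local policy.
Get-ChildItem 'C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe' |
    Get-AppLockerFileInformation |
    New-AppLockerPolicy -RuleType Path -User 'ACME\Acme-VIPs' -RuleNamePrefix PSForVIPs -Xml |
    Out-File '.\ps-applocker.xml'
Set-AppLockerPolicy -XmlPolicy '.\ps-applocker.xml' -Merge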

And Acme Was Hacked

No, the hacking of my Acme domain on AWS is not going to make any headlines. But I thought as a side note it’s worth mentioning.

I confess: I was a little lax with my Amazon firewall port setting, and some malware slipped in.

After some investigation, I discovered a suspicious executable in the \Windows\Prefetch directory. It was run as a service that looked legit, and it opened a zillion UDP ports.

It took me an afternoon or two to figure all this out. My tip-offs were my server becoming somewhat sluggish, and then an Amazon email politely suggesting that my EC2 instance may have been turned into a bot used for a DDoS attack.

This does relate to SRP and AppLocker!

Sure, had I activated these protection services earlier, Windows would have prevented the malware from launching, since it was living in a non-standard location.

Lesson learned.

And I hang my head in shame if I caused some DDoS disturbance for someone, somewhere.

Final Thoughts

Both SRP and AppLocker also have rules that take into account file hashes and digital certificates. Either will provide an additional level of assurance that executables really are what they claim to be, and not the work of evil hackers.

AppLocker is more granular than SRP when it comes to certificates, and it allows you to filter on a specific app from a publisher and a version number as well. You can learn more about this here.

Bottom line: whitelisting is not an achievable goal for the average IT mortal. For the matter at hand, disabling PowerShell, my approach of using the default paths provided by either SRP or AppLocker, and then selectively allowing PowerShell for certain groups (easier with AppLocker), is far more realistic.

Disabling PowerShell and Other Malware Nuisances, Part II

This article is part of the series "Disabling PowerShell and Other Malware Nuisances". Check out the rest:

Whitelisting apps is nobody’s idea of fun. You need to start with a blank slate, and then carefully add back apps you know to be essential and non-threatening. That’s the idea behind what we started to do with Software Restriction Policies (SRP) last time.

As you’ll recall, we ‘cleared the board’ through the default disabling of app execution in the Property Rules. In the Additional Rules section, I then started adding Path rules for apps I thought were essential.

The only apps you’ll ever need!

Obviously, this can get a little tedious, so Microsoft helpfully provides two default rules: one to enable execution of apps in the Program folder and the other to enable executables in the Windows system directory.

But this is cheating, and you’d then be forced to blacklist the apps you don’t really need.

Anyway, when a user runs an unapproved app or a hacker tries to load some malware that’s not in the whitelist, SRP will prevent it.  Here’s what happened when I tried to launch PowerShell, which wasn’t in my whitelist, from the old-style cmd shell, which was in the list:

Damn you Software Restriction Policies!

100% Pure Security

To be ideologically pure, you wouldn’t use the default Windows SRP rules. Instead, you’d start from scratch with bupkes and do the grunt work of finding out which apps are being used and which are truly needed.

To help you get over this hurdle, Microsoft suggests in a TechNet article that you turn on a logging feature that writes an entry whenever SRP evaluates an app. You’ll need to create the following registry entry and set a log file location:

"HKLM\SOFTWARE\Policies\Microsoft\Windows\Safer\CodeIdentifiers"

String Value: LogFileName, <path to a log file>
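If you’d rather set this with PowerShell than edit the registry by hand, here’s a hedged one-liner sketch (the log file path is an arbitrary placeholder):

# Create the SRP logging value; C:\srp.log is a placeholder path.
New-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\Safer\CodeIdentifiers' `
    -Name LogFileName -Value 'C:\srp.log' -PropertyType String -Force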



Here’s a part of this log file from my AWS test environment.

Log file produced by SRP.

So you have to review the log, question users, and talk it over with your fellow IT admins. Let’s say you work out a list of approved apps (excluding PowerShell), which you believe would make sense for a large part of your user community. You can then leverage the Group Policy Management console to publish the rules to the domain.

In theory, I should have been able to drag-and-drop the rules I created in the Group Policy Editor for the machine I was working on into the Management console. I wasn’t able to pull that off in my AWS environment.

I instead had to recreate the rules directly in the Group Policy Management Editor (below), and then let it do the work of distributing them across the domain — in my case, the Acme domain.

Magic!

You can read more about how to do this here.

A Look at AppLocker

Let’s get back to the issue of PowerShell. We can’t live without it, yet hackers have used it as a tool for stealthy post-exploitation.

If I enable it in my whitelist, along with some of the built-in PowerShell protections I mentioned in the last post, there are still so many ways to get around these security precautions that it’s not worth the trouble.

It would be nice if SRP allowed you to do the whitelisting selectively, based on Active Directory user or group membership. In other words: effectively turn off PowerShell, except if you’re, say, an IT admin who’s a member of a ‘Special PowerShell’ AD group.

That ain’t happening in SRP since it doesn’t support this level of granularity!

Starting in Windows 7 (and Windows Server 2008 R2), Microsoft deprecated SRP and introduced the (arguably) more powerful AppLocker. It’s very similar to what it replaces, but it does provide this user- and group-level filtering.

We’ll talk more about AppLocker and some of its benefits in the final post in this series. In any case, you can find this policy next to SRP in the Group Policy Editor under Application Control Policies.

For my Acme environment, I set up a rule that enables PowerShell only for users in the Acme-VIPs group, Acme’s small group of power IT employees. You can see how I started setting this up as I followed the AppLocker wizard dialog:

PowerShell is an important and useful tool, so you’ll need to weigh the risks of selectively enabling it through AppLocker and, dare I say it, perform a risk assessment.

Of course, you should have secondary controls, such as, ahem, User Behavior Analytics, that allow you to protect against PowerShell misuse should the credentials of the PowerShell-enabled group be compromised by hackers or insiders.

We’ll take up other AppLocker capabilities and final thoughts on whitelisting in the next post.

Continue reading the next post in "Disabling PowerShell and Other Malware Nuisances"

Disabling PowerShell and Other Malware Nuisances, Part I

This article is part of the series "Disabling PowerShell and Other Malware Nuisances". Check out the rest:

Back in more innocent times, circa 2015, we began to hear about hackers going malware-free and “living off the land.” They used whatever garden-variety IT tools were lying around on the target site. It’s the ideal way to do post-exploitation without tripping any alarms.

This approach has taken off and gone mainstream, primarily because of off-the-shelf post-exploitation environments like PowerShell Empire.

I’ve already written about how PowerShell, when supplemented with PowerView, becomes a potent purveyor of information for hackers. (In fact, all this PowerShell pen-testing wisdom is available in an incredible ebook that you should read as soon as possible.)

Any tool can be used for good or bad, so I’m not implying PowerShell was put on this earth to make the life of hackers easier.

But just as you wouldn’t leave a 28” heavy-duty cable cutter next to a padlock, you probably don’t want to let hackers get their hands on PowerShell, or at least you want to make it much more difficult for them.

This brings up a large topic in the cybersecurity world: restricting application access, which is known more commonly as whitelisting or blacklisting. The overall idea is for the operating system to strictly control what apps can be launched by users.

For example, as a member of homo blogus, I generally need some basic tools and apps (along with a warm place to sleep at night), and can live without PowerShell, netcat, psexec, and some of the other cross-over IT tools I’ve discussed in previous posts. The same applies to most employees in an organization, and so a smart IT person should be able to come up with a list of apps that are safe to use.

In the Windows world, you can enforce rules on application execution using Software Restriction Policies and more recently AppLocker.

However, before we get into these more advanced ideas, let’s try two really simple solutions and then see what’s wrong with them.

ACLs and Other Simplicities

We often think of Windows ACLs as being used to control access to readable content. But they can also be applied to executables — that is, .exe, .vbs, .ps1, and the rest.

I went back into Amazon Web Services where the Windows domain for the mythical and now legendary Acme company resides and then did some ACL restriction work.

The PowerShell .exe, as any sys admin can tell you, lives in C:\Windows\System32\WindowsPowerShell\v1.0. I navigated to the folder, clicked on properties, and effectively limited execution of PowerShell to a few essential groups: Domain Admins and Acme-SnowFlakes, which is the group of Acme employee power users.

I logged back into the server as Bob, my go-to Acme employee, and tried to bring up PowerShell. You can see the results below.

In practice, you could probably come up with a script (why not use PowerShell?) to automate this ACL setting for all the laptops and servers in a small- to mid-size site.
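Here’s a hedged sketch of what such a script might look like. It assumes the ACME domain and the group names from this post; note that replacing the ACL on a file under System32 may first require taking ownership from TrustedInstaller:

# Restrict execution of powershell.exe to a few approved groups by rewriting its ACL.
$psExe = 'C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe'
$acl = Get-Acl $psExe
$acl.SetAccessRuleProtection($true, $false)   # break inheritance and drop inherited entries
foreach ($grp in 'ACME\Domain Admins', 'ACME\Acme-SnowFlakes', 'NT AUTHORITY\SYSTEM') {
    $rule = New-Object System.Security.AccessControl.FileSystemAccessRule($grp, 'ReadAndExecute', 'Allow')
    $acl.AddAccessRule($rule)
}
Set-Acl -Path $psExe -AclObject $acl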

It’s not a bad solution.

If you don’t like the idea of setting ACLs on executable files, PowerShell offers its own execution restriction controls. As a user with admin privileges, you can use (what else?) a PowerShell cmdlet called Set-ExecutionPolicy.

It’s not nearly as blunt a force as the ACLs, but you can restrict PowerShell to work only in interactive mode, with the Restricted parameter, so that it won’t execute scripts. PowerShell would still be available in a limited way, but it wouldn’t be capable of running scripts containing hacker PS malware.

However, this would also prevent your IT staff from running their own PowerShell scripts. To allow approved scripts while blocking evil hacker scripts, you use the RemoteSigned parameter in Set-ExecutionPolicy. With it, PowerShell will launch downloaded scripts only if they’re signed by a trusted publisher; the stricter AllSigned policy makes the IT staff sign every script of their own using an approved credential.
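For reference, either policy is a one-liner from an elevated session:

# Interactive commands only, no scripts at all:
Set-ExecutionPolicy -ExecutionPolicy Restricted -Scope LocalMachine

# Or: local scripts run freely, downloaded scripts must be signed:
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope LocalMachine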

I won’t go into the details of how to do this, mostly because it’s so easy to get around these controls. Someone even has a listicle blog post describing 15 PowerShell security workarounds.

The easiest one is using the Bypass parameter in PowerShell itself. Duh! (below).
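In case you doubt me, the workaround really is this short (the script name is made up, of course):

# Launch a fresh PowerShell process that ignores the configured policy
powershell.exe -ExecutionPolicy Bypass -File .\definitely-not-malware.ps1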

Seems like a security hole, no?

So PowerShell has some basic security flaws. It’s somewhat understandable since it is, after all, just a shell program.

But even the ACL restriction approach has a fundamental problem.

If hackers loosen up the “live off the land” philosophy, they can simply download — say, using a remote access trojan (RAT) — their own copy of PowerShell .exe and run it directly, bypassing the permission restrictions on the resident copy.

Software Restriction Policies

These basic security holes (and many others) are always an issue with a consumer-grade operating system. This has led OS researchers to come up with secure operating systems that have direct power to control what can be run.

In the Windows world, these powers are known as Software Restriction Policies (SRP) — for a good overview, see this — that are managed through the Group Policy Editor.

With SRP you can control which apps can be run, based on file extension, path names, and whether the app has been digitally signed.

The most effective, though most painful, approach is to disallow everything and then add back the applications you really, really need. This is known as whitelisting.

We’ll go into more details in the next post.

Anyway, you’ll need to launch the policy editor, gpedit, and navigate to Local Computer Policy > Computer Configuration > Windows Settings > Security Settings > Software Restriction Policies > Security Levels. If you click on “Disallowed”, you can then make this the default security policy — to not run any executables!

The whitelist: disallow as default, and then add app policies in “Additional Rules”.

This is more like a scorched earth policy. In practice, you’ll need to enter “Additional Rules” to add back the approved apps (with their path names). If you leave out PowerShell, then you’ve effectively disabled this tool on the site.
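By the way, if you want to confirm what a machine ended up with, SRP settings are written to the registry, so a quick sanity check is possible. In my reading of the policy hive, a DefaultLevel of 0 means Disallowed and 0x40000 means Unrestricted.

# Read back the SRP default security level (errors out if SRP was never configured)
Get-ItemProperty "HKLM:\SOFTWARE\Policies\Microsoft\Windows\Safer\CodeIdentifiers" |
    Select-Object DefaultLevel, TransparentEnabled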

Unfortunately, you can’t fine-tune the SRP rules based on AD groups or users. Drat!

And that brings us to Microsoft’s latest and greatest security enforcer, known as AppLocker, which does provide some nuance to application access. We’ll take that up next time as well.

Continue reading the next post in "Disabling PowerShell and Other Malware Nuisances"

Practical PowerShell for IT Security, Part III: Classification on a Budget

This article is part of the series "Practical PowerShell for IT Security". Check out the rest:

Last time, with a few lines of PowerShell code, I launched an entirely new software category, File Access Analytics (FAA). My 15 minutes of fame is almost over, but I was able to make the point that PowerShell has practical file event monitoring capabilities. In this post, I’ll finish some old business with my FAA tool and then take up PowerShell-style data classification.

Event-Driven Analytics

To refresh memories, I used the Register-WmiEvent cmdlet in my FAA script to watch for file access events in a folder. I also created a mythical baseline of event rates to compare against. (For wonky types, there’s a whole area of measuring these kinds of things — hits to web sites, calls coming into a call center, traffic at espresso bars — that was started by this fellow.)

When file access counts reach above normal limits, I trigger a software-created event that gets picked up by another part of the code and pops up the FAA “dashboard”.

This triggering is performed by the New-Event cmdlet, which allows you to send an event, along with other information, to a receiver. To read the event, there’s the Wait-Event cmdlet. The receiving part can even be in another script, as long as both event cmdlets use the same SourceIdentifier — Bursts, in my case.

These are all Operating Systems 101 ideas: effectively, PowerShell provides a simple message-passing system. Pretty neat considering we are using what is, after all, a bleepin’ command language.
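If you strip away my FAA logic, the message-passing pattern boils down to a few lines. Here’s a toy sketch; the only requirement is that both sides agree on the SourceIdentifier.

# Sender: raise an event named Bursts with a payload
New-Event -SourceIdentifier Bursts -MessageData "We're in Trouble" | Out-Null

# Receiver: block until the event arrives, read it, then clear it
$evt = Wait-Event -SourceIdentifier Bursts
$evt.MessageData    # -> We're in Trouble
Remove-Event -SourceIdentifier Bursts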

Anyway, the full code is presented below for your amusement.

$cur = Get-Date
$Global:Count = 0
# Baseline access rates per day, split into three 8-hour buckets
$Global:baseline = @{"Monday" = @(3,8,5); "Tuesday" = @(4,10,7); "Wednesday" = @(4,4,4); "Thursday" = @(7,12,4); "Friday" = @(5,4,6); "Saturday" = @(2,1,1); "Sunday" = @(2,4,2)}
$Global:cnts =    @(0,0,0)
$Global:burst =   $false
$Global:evarray = New-Object System.Collections.ArrayList

$action = {
    $Global:Count++
    $d = (Get-Date).DayOfWeek
    $i = [math]::floor((Get-Date).Hour/8)   # which 8-hour bucket are we in?

    $Global:cnts[$i]++

    # event auditing!

    $rawtime  = $EventArgs.NewEvent.TargetInstance.LastAccessed.Substring(0,12)
    $filename = $EventArgs.NewEvent.TargetInstance.Name
    $etime    = [datetime]::ParseExact($rawtime,"yyyyMMddHHmm",$null)

    $msg = "$($etime): Access of file $($filename)"
    $msg | Out-File C:\Users\bob\Documents\events.log -Append

    $Global:evarray.Add(@($filename,$etime)) | Out-Null   # Add() returns an index; discard it
    if (!$Global:burst) {
        $Global:start = $etime
        $Global:burst = $true
    }
    else {
        if ($Global:start.AddMinutes(15) -gt $etime) {
            $Global:Count++
            # File behavior analytics: flag bursts well above the baseline
            $sfactor = 2*[math]::sqrt($Global:baseline["$($d)"][$i])

            if ($Global:Count -gt $Global:baseline["$($d)"][$i] + 2*$sfactor) {

                "$($etime): Burst of $($Global:Count) accesses" | Out-File C:\Users\bob\Documents\events.log -Append
                $Global:Count = 0
                $Global:burst = $false
                New-Event -SourceIdentifier Bursts -MessageData "We're in Trouble" -EventArguments $Global:evarray
                $Global:evarray = [System.Collections.ArrayList] @()
            }
        }
        else { $Global:burst = $false; $Global:Count = 0; $Global:evarray = [System.Collections.ArrayList] @() }
    }
}

Register-WmiEvent -Query "SELECT * FROM __InstanceModificationEvent WITHIN 5 WHERE TargetInstance ISA 'CIM_DataFile' and TargetInstance.Path = '\\Users\\bob\\' and targetInstance.Drive = 'C:' and (targetInstance.Extension = 'txt' or targetInstance.Extension = 'doc' or targetInstance.Extension = 'rtf') and targetInstance.LastAccessed > '$($cur)' " -SourceIdentifier "Accessor" -Action $action


# Dashboard
While ($true) {
    $evt = Wait-Event -SourceIdentifier Bursts   # wait on Burst event ($args is an automatic variable, so don't assign to it)
    Remove-Event -SourceIdentifier Bursts        # remove event from the queue

    $outarray = @()
    foreach ($result in $evt.SourceArgs) {
        $obj = New-Object System.Object
        $obj | Add-Member -Type NoteProperty -Name File -Value $result[0]
        $obj | Add-Member -Type NoteProperty -Name Time -Value $result[1]
        $outarray += $obj
    }

    $outarray | Out-GridView -Title "FAA Dashboard: Burst Data"
}

Please don’t pound your laptop as you look through it.

I’m aware that I continue to pop up separate grid views, and there are better ways to handle the graphics. With PowerShell, you do have access to the full .Net framework, so you could create and access objects —listboxes, charts, etc. — and then update as needed. I’ll leave that for now as a homework assignment.

Classification is Very Important in Data Security

Let’s put my file event monitoring on the back burner, as we take up the topic of PowerShell and data classification.

At Varonis, we preach the gospel of “knowing your data” for good reason. In order to work out a useful data security program, one of the first steps is to learn where your critical or sensitive data is located — credit card numbers, consumer addresses, sensitive legal documents, proprietary code.

The goal, of course, is to protect the company’s digital treasure, but you first have to identify it. By the way, this is not just a good idea, but many data security laws and regulations (for example, HIPAA)  as well as industry data standards (PCI DSS) require asset identification as part of doing real-world risk assessment.

PowerShell has great potential for data classification applications. Can PS access and read files directly? Check. Can it perform pattern matching on text? Check. Can it do this efficiently on a somewhat large scale? Check.

No, the PowerShell classification script I eventually came up with will not replace the Varonis Data Classification Framework. But for the scenario I had in mind – an IT admin who needs to watch over an especially sensitive folder – my PowerShell effort gets more than a passing grade, say a B+!

WQL and CIM_DataFile

Let’s now return to WQL, which I referenced in the first post on event monitoring.

Just as I used this query language to look at file events in a directory, I can tweak the script to retrieve all the files in a specific directory. As before, I use the CIM_DataFile class, but this time my query is directed at the folder itself, not the events associated with it.

Get-WmiObject -Query "SELECT * From CIM_DataFile where Path = '\\Users\\bob\\' and Drive = 'C:' and (Extension = 'txt' or Extension = 'doc' or Extension = 'rtf')"

Terrific! This line of code outputs an array of file objects, with each object’s Name property holding the full path.
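If you capture the results in a variable, the paths are right there for the taking. This is the $list variable that turns up again in the Runspaces snippet further down.

$list = Get-WmiObject -Query "SELECT * From CIM_DataFile where Path = '\\Users\\bob\\' and Drive = 'C:' and (Extension = 'txt' or Extension = 'doc' or Extension = 'rtf')"
$list | Select-Object -ExpandProperty Name   # full path of each file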

To read the contents of each file into a variable, PowerShell conveniently provides the Get-Content cmdlet. Thank you Microsoft.

I need one more ingredient for my script, which is pattern matching. Not surprisingly, PowerShell has a regular expression engine. For my purposes it’s a little bit of overkill, but it certainly saved me time.
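As a taste, pattern matching is a one-liner:

# Either the -match operator or a [regex] object does the job
"Project Snowflake is confidential" -match '([Cc]onfidential)|([sS]nowflake)'   # True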

Security pros have often told me that companies should explicitly mark documents or presentations containing proprietary or sensitive information with an appropriate footer — say, Secret or Confidential. It’s a good practice, and of course it helps in the data classification process.

In my script, I created a PowerShell hashtable of possible marker texts with an associated regular expression to match it. For documents that aren’t explicitly marked this way, I also added special project names — in my case, snowflake — that would also get scanned. And for kicks, I added a regular expression for social security numbers.

The code block I used to do the reading and pattern matching is listed below. The file name to read and scan is passed in as a parameter.

$Action = {
    Param (
        [string] $Name
    )

    # Marker texts and their regexes; the last one is a social security
    # number pattern (3-2-4 digits)
    $classify = @{"Top Secret" = [regex]'[tT]op [sS]ecret'; "Sensitive" = [regex]'([Cc]onfidential)|([sS]nowflake)'; "Numbers" = [regex]'[0-9]{3}-[0-9]{2}-[0-9]{4}'}

    # Get-Content returns an array of lines; Matches() coerces it into one string
    $data = Get-Content $Name

    $cnts = @()

    foreach ($key in $classify.Keys) {
        $m = $classify[$key].matches($data)
        if ($m.Count -gt 0) {
            $cnts += @($key, $m.Count)
        }
    }

    $cnts
}

Magnificent Multi-Threading

I could have just simplified my project by taking the above code and adding some glue, and then running the results through the Out-GridView cmdlet.

But this being the Varonis IOS blog, we never, ever do anything nice and easy.

There is a point I’m trying to make. Even for a single folder in a corporate file system, there can be hundreds, perhaps even a few thousand files.

Do you really want to wait around while the script is serially reading each file?

Of course not!

Large-scale file I/O applications, like what we’re doing with classification, are well-suited for multi-threading: you can launch lots of file activity in parallel and thereby significantly reduce the delay in seeing results.

PowerShell does have a usable (if clunky) background processing system known as Jobs. But it also boasts an impressive and sleek multi-threading capability known as Runspaces.
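For comparison, the Jobs route looks like this. It’s workable, but each job is a separate process, so the overhead adds up when you’re scanning hundreds of small files.

# Background job: simple API, heavyweight process-per-job model
$job = Start-Job -ScriptBlock { Get-ChildItem C:\Users\bob -Filter *.txt }
Receive-Job -Job $job -Wait -AutoRemoveJob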

After playing with it, and borrowing code from a few Runspaces pioneers, I am impressed.

Runspaces handles all the messy mechanics of synchronization and concurrency. It’s not something you can grok quickly, and even Microsoft’s amazing Scripting Guys are still working out their understanding of this multi-threading system.

In any case, I went boldly ahead and used Runspaces to do my file reads in parallel. Below is a bit of the code to launch the threads: for each file in the directory I create a thread that runs the above script block, which returns matching patterns in an array.

# A pool that caps us at 5 concurrently running threads
$RunspacePool = [RunspaceFactory]::CreateRunspacePool(1, 5)
$RunspacePool.Open()

$Tasks = @()

foreach ($item in $list) {
    # One PowerShell instance per file, all running the $Action script block
    $Task = [powershell]::Create().AddScript($Action).AddArgument($item.Name)
    $Task.RunspacePool = $RunspacePool
    $status = $Task.BeginInvoke()   # returns immediately; the work happens in the background
    $Tasks += ,@($status, $Task, $item.Name)   # the leading comma keeps each triple intact
}
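And for completeness, here’s a sketch of the collection side, assuming the (status, task, name) triples stored in $Tasks above:

# Harvest each thread's results; EndInvoke blocks until that thread finishes
foreach ($t in $Tasks) {
    $status, $task, $name = $t
    $found = $task.EndInvoke($status)
    if ($found) { "{0}: {1}" -f $name, ($found -join ', ') }
    $task.Dispose()
}
$RunspacePool.Close()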

Let’s take a deep breath—we’ve covered a lot.

In the next post, I’ll present the full script, and discuss some of the (painful) details.  In the meantime, after seeding some files with marker text, I produced the following output with Out-GridView:

Content classification on the cheap!

Another idea to think about is how to connect the two scripts: the file activity monitoring one and the classification script partially presented in this post.

After all, the classification script should communicate what’s worth monitoring to the file activity script, and the activity script could in theory tell the classification script when a new file is created so that it could classify it—incremental scanning in other words.

Sounds like I’m suggesting, dare I say it, a PowerShell-based security monitoring platform. We’ll start working out how this can be done the next time as well.

Continue reading the next post in "Practical PowerShell for IT Security"

Practical PowerShell for IT Security, Part I: File Event Monitoring

This article is part of the series "Practical PowerShell for IT Security". Check out the rest:

Back when I was writing the ultimate penetration testing series to help humankind deal with hackers, I came across some interesting PowerShell cmdlets and techniques. I made the remarkable discovery that PowerShell is a security tool in its own right. Sounds to me like it’s the right time to start another series of PowerShell posts.

We’ll take the view in these posts that while PowerShell won’t replace purpose-built security platforms — Varonis can breathe easier now — it will help IT staff monitor for threats and perform other security functions. And also give IT folks an appreciation of the miracles that are accomplished by real security platforms, like our own Metadata Framework. PowerShell can do interesting security work on a small scale, but it is in no way equipped to take on an entire infrastructure.

It’s a Big Event

To begin, let’s explore using PowerShell as a system monitoring tool to watch files, processes, and users.

Before you start cursing into your browsers, I’m well aware that any operating system command language can be used to monitor system-level happenings. A junior IT admin can quickly put together, say, a Linux shell script to poll a directory to see if a file has been updated or retrieve a list of running processes to learn if a non-standard process has popped up.

I ain’t talking about that.

PowerShell instead gives you direct event-driven monitoring based on the operating system’s access to low-level changes. It’s the equivalent of getting a push notification on a news web page alerting you to a breaking story rather than having to manually refresh the page.

In this scenario, you’re not in an endless PowerShell loop, burning up CPU cycles, but instead the script is only notified or activated when the event — a file is modified or a new user logs in — actually occurs. It’s a far more efficient way to do security monitoring than by brute-force polling.

Further down below, I’ll explain how this is accomplished.

But first, anyone who’s ever taken, as I have, a basic “Operating Systems for Poets” course knows that there’s a demarcation between user-level and system-level processes.

The operating system, whether Linux or Windows, does the low-level handling of device actions – anything from disk reads, to packets being received — and hides this from garden variety apps that we run from our desktop.

So if you launch your favorite word processing app and view the first page of a document, the whole operation appears as a smooth, synchronous activity. But in reality there are all kinds of time-sensitive events — disk seeks, disk blocks being read, characters sent to the screen, etc. — that are happening under the hood and deliberately hidden from us. Thank you, Bill Gates!

In the old days, only hard-core system engineers knew about this low-level event processing. But as we’ll soon see, PowerShell scripters can now share in the joy as well.

An OS Instrumentation Language

This brings us to Windows Management Instrumentation (WMI), which is a Microsoft effort to provide a consistent view of operating system objects.

WMI is itself part of a broader industry effort, known as Web-Based Enterprise Management (WBEM), to standardize the information pulled out of routers, switches, and storage arrays, as well as operating systems.

So what does WMI actually look and feel like?

For our purposes, it’s really a query language, like SQL, but instead of accessing rows of vanilla database columns, it presents complex OS information organized as a WMI class hierarchy. Not too surprisingly, the query language is known as, wait for it, WQL.

Windows generously provides a utility, wbemtest, that lets you play with WQL. In the graphic below, you can see the results of my querying the Win32_Process object, which holds information on the current processes running.

WQL on training wheels with wbemtest.

Effectively, it’s the programmatic equivalent of running the Windows Task Manager. Impressive, no? If you want to know more about WQL, download Ravi Chaganti’s wondrous ebook on the subject.

PowerShell and the Register-WmiEvent Cmdlet

But there’s more! You can take off the training wheels provided by wbemtest, and try these queries directly in PowerShell.

PowerShell’s Get-WmiObject is the appropriate cmdlet for this task, and it lets you feed in the WQL query directly as a parameter.

The graphic below shows the first few results from running select Name, ProcessId, CommandLine from Win32_Process on my AWS test environment.

gwmi is the PowerShell alias for Get-WmiObject.

The output is a bit wonky since it’s showing some hidden properties having to do with underlying class bookkeeping. The cmdlet also spews out a huge list that speeds by on my console.

For a better Win32_Process experience, I piped the output from the query into Out-GridView, a neat PS cmdlet that formats the data as a beautiful GUI-based table.
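Roughly, the full pipeline looks like this; the Select-Object in the middle also strips away those hidden bookkeeping properties.

Get-WmiObject -Query "select Name, ProcessId, CommandLine from Win32_Process" |
    Select-Object Name, ProcessId, CommandLine |
    Out-GridView -Title "Win32_Process"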

Not too shabby for a line of PowerShell code. But WMI does more than allow you to query these OS objects.

As I mentioned earlier, it gives you access to relevant events on the objects themselves. In WMI, these events are broadly broken into three types: creation, modification, and deletion.

Prior to PowerShell 2.0, you had to access these events in a clunky way: creating lots of different objects and then being forced to synchronously ‘hang’, which wasn’t true asynchronous event handling. If you want to know more, read this MS Technet post for the ugly details.

Now in PS 2.0 with the Register-WmiEvent cmdlet, we have a far prettier way to react to all kinds of events. In geek-speak, I can register a callback that fires when the event occurs.

Let’s go back to my mythical (and now famous) Acme Company, whose IT infrastructure is set up on my AWS environment.

Let’s say Bob, the sys admin, notices every so often that he’s running low on file space on the Salsa server. He suspects that Ted Bloatly, Acme’s CEO, is downloading huge files, likely audio files, into one of Bob’s directories and then moving them into Ted’s own server on Taco.

Bob wants to set a trap: when a large file is created in his home directory, he’ll be notified on his console.

To accomplish this, he’ll need to work with the CIM_DataFile class.  Instead of accessing processes, as we did above, Bob uses this class to connect with the underlying file metadata.

CIM_DataFile object can be accessed directly in PowerShell.
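For instance, you can pull a single file’s metadata straight out of CIM_DataFile (the file name here is hypothetical, and note that WQL wants doubled backslashes):

Get-WmiObject -Query "SELECT * FROM CIM_DataFile WHERE Name = 'C:\\Users\\bob\\Documents\\notes.txt'" |
    Select-Object Name, FileSize, LastAccessed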

Playing the part of Bob, I created the following Register-WmiEvent script, which will notify the console when a very large file is created in the home directory.

Register-WmiEvent -Query "SELECT * FROM __InstanceModificationEvent WITHIN 5 WHERE TargetInstance ISA 'CIM_DataFile' and TargetInstance.FileSize > 2000000 and TargetInstance.Path = '\\Users\\bob\\' and targetInstance.Drive = 'C:'" -SourceIdentifier "Accessor3" -Action { Write-Host "Large file" $EventArgs.NewEvent.TargetInstance.Name "was created" }

Running this script directly from the Salsa console launches the Register-WmiEvent command in the background, assigning it a job number, and then only interacts with the console when the event is triggered.

In the next post, I’ll go into more details about what I’ve done here. Effectively, I’m using WQL to query the CIM_DataFile object — particularly anything in the \Users\bob directory that’s over 2 million bytes — and setting up a notification when a new file is created that fits these criteria; that’s where InstanceModificationEvent comes into play.

Anyway, in my Bob role I launched the script from the PS command line, and then, putting on my Ted Bloatly hat, I copied a large mp4 into Bob’s directory. You can see the results below.

We now know that Bloatly is a fan of Melody Gardot. Who would have thunk it?

You begin to see some of the exciting possibilities with PowerShell as a tool for detecting threat patterns and perhaps doing a little behavioral analytics.

We’ll be exploring these ideas in the next post.

Continue reading the next post in "Practical PowerShell for IT Security"

Binge Read Our Pen Testing Active Directory Series

With winter storm Niko now on its extended road trip, it’s not too late, at least here on the East Coast, to make a few snow day plans. Sure, you can spend part of Thursday catching up on Black Mirror while scarfing down this slow cooker pork BBQ pizza. However, I have a healthier suggestion.

Why not binge on our amazing Pen Testing Active Directory Environments blog posts?

You’ve read parts of it, or — spoiler alert — perhaps heard about the exciting conclusion involving a depth-first-search of the derivative admin graph. But now’s your chance to totally immerse yourself and come back to work better informed about the Active Directory dark knowledge that hackers have known about for years.

And may we recommend eating these healthy soy-roasted kale chips while clicking below?

Continue reading the next post in "Pen Testing Active Directory Environments"

Five Ways for a CDO to Drive Growth, Improve Efficiencies, and Manage Risk

We’ve already written about the growing role of the chief data officer (CDO) and their challenging task of leveraging data science to drive profits. But the job of a CDO is not just about moving the profit meter.

It’s less widely known that they’re also tasked with meeting three other business objectives: driving overall growth, improving efficiencies, and managing risk.

Why? All business activities and processes benefit from these three objectives.

Luckily, we can turn to Morgan Stanley’s CDO, Jeffrey McMillan, for some guidance. I heard him speak at a recent CDO Summit in New York City, where he dispensed sage advice for both practicing and aspiring CDOs.

McMillan suggested these five analytics and data strategy processes:

1. Make sure your data science is aligned with your business strategy.

Yes, good data scientists are hard to find. But McMillan says that rather than spending your energies finding a good data scientist, make sure that your scientist can also think like a sales or business person.

He says, “I would much rather have a very mediocre data scientist, who really understood the business than the reverse. Because the reverse doesn’t help me at all. It’s not about the algorithm, it’s about understanding of the business.”

Once you have your data scientist in place, McMillan is adamant about ensuring this resource is honored and respected.

He explains, “If no one is actually going to do anything with what you recommend doing, they don’t get more resources. There are a lot of things we learn about the world that are interesting that don’t actually change our behaviors. And we need to focus on things that changes our behaviors.”

2. Empower the end users to consume data visualizations

According to McMillan, it’s more important to get a little bit of data in the hands of many, than a lot of data in the hands of few. Why? It’s vital to bring data to the decision makers.

“They don’t always want your algorithm,” McMillan says. “They do want information about the business.”

Moreover, his plan for Morgan Stanley to make data accessible to everyone is this: “Our vision is: in the next five years, I want every single employee in our firm to have access to a data visualization tool. And I want 15-20% of the employees to be able to create their own content using their own data visualization tool.”

3. Create the next-best action framework

McMillan has a process that makes decision making vastly better. He calls it the “next-best action framework.” This system learns, evolves, and adjusts in real time.

He describes the process in the following way:

“Every single thing that a human can do at the office gets ingested into a system. It gets modelled against their own expectations, their historical behaviors, their customers’ behaviors, market conditions, and, if you can believe it, 400 other factors.

Then, it gets optimized, based on specific needs of the customer and the employee. Out comes a few ideas, which are scored.

We score whether or not we should call a customer about a bounced check versus an opportunity to call them about an opportunity about a golf outing. Then, we watch what the customer does.”

According to McMillan, Morgan Stanley has found success in its next-best action approach, delivering real-time investment advice, at scale, to 16,000 advisers.

4. Leverage digital intelligence

When it comes to artificial intelligence, the real value is in the intelligence. In some ways, McMillan prefers the term digital intelligence.

“We’re digitizing human understanding in a way that creates scale,” notes McMillan. “In the end, the winners aren’t going to be the technology providers. They’re going to be organizations that have the knowledge. If you have knowledge and information that’s differentiated, you will do well in the space over time because someone will have to teach the machine how to start. It just doesn’t learn by itself.”

When you can, remember to keep it simple. McMillan reminds us, “No one cares how hard it is for you to do it.”

5. Take a holistic approach to data management

Finally, McMillan warns that your efforts will fail or significantly under-deliver if you don’t take a holistic approach to managing one of your firm’s most valuable resources: your data. To prioritize, he says to focus on the most critical attributes that drive your key business objectives.