Category Archives: IT Pros

Disabling PowerShell and Other Malware Nuisances, Part III


This article is part of the series "Disabling PowerShell and Other Malware Nuisances".

One of the advantages of AppLocker over Software Restriction Policies is that it can selectively enable PowerShell for Active Directory groups. I showed how this can be done in the previous post. The goal is to limit as much as possible the ability of hackers to launch PowerShell malware, but still give legitimate users access.

It’s a balancing act, of course. And as I suggested, you can accomplish the same thing with a combination of Software Restriction Policies (SRP) and ACLs, but AppLocker does it more efficiently, in one fell swoop.

Let’s Get Real About Whitelisting

As a practical matter, whitelisting is just plain hard to do, and I’m guessing most IT security staff won’t go down this route. However, AppLocker does provide an ‘audit mode’ that makes whitelisting slightly less painful than SRP.

AppLocker can be configured to log events that show up directly in the Windows Event Viewer. For whatever reason, I couldn’t get this to work in my AWS environment. But it would still be less of a headache than setting up a Registry entry and dealing with a raw log file — the SRP approach.

In any case, I think most of you will try what I did. I took the default rules provided by AppLocker to enable the standard Windows system and program folders, added an exception for PowerShell, and then created a special rule to allow only members of a select AD group — Acme-VIPs in my case — to access PowerShell.

AppLocker: Accept the default path rules, and then selectively enable PowerShell.

Effectively, I whitelisted all-the-usual Windows suspects, and then partially blacklisted PowerShell.

PowerShell for Lara, who’s in the Acme-VIPs group, but no PowerShell for Bob!

And Acme Was Hacked

No, the hacking of my Acme domain on AWS is not going to make any headlines. But I thought it was worth mentioning as a side note.

I confess: I was a little lax with my Amazon firewall port setting, and some malware slipped in.

After some investigation, I discovered a suspicious executable in the \Windows\Prefetch directory. It ran as a service that looked legit, and it opened a zillion UDP ports.

It took me an afternoon or two to figure all this out. My tip-offs were my server becoming somewhat sluggish, followed by an Amazon email politely suggesting that my EC2 instance may have been turned into a bot used for a DDoS attack.

This does relate to SRP and AppLocker!

Sure, had I activated these protection services earlier, Windows would have prevented the malware — which was living in a non-standard location — from launching.

Lesson learned.

And I hang my head in shame if I caused a DDoS disturbance for someone, somewhere.

Final Thoughts

Both SRP and AppLocker also have rules that take into account file hashes and digital certificates. Either will provide an additional level of assurance that the executables really are what they claim to be, and not the work of evil hackers.

AppLocker is more granular than SRP when it comes to certificates, and it allows you to filter on a specific app from a publisher and a version number as well. You can learn more about this here.

Bottom line: whitelisting is not an achievable goal for the average IT mortal. For the matter at hand, disabling PowerShell, my approach of using default paths provided by either SRP or AppLocker, and then selectively allowing PowerShell for certain groups — easier with AppLocker — would be far more realistic.

Disabling PowerShell and Other Malware Nuisances, Part II


This article is part of the series "Disabling PowerShell and Other Malware Nuisances".

Whitelisting apps is nobody’s idea of fun. You need to start with a blank slate, and then carefully add back apps you know to be essential and non-threatening. That’s the idea behind what we started to do with Software Restriction Policies (SRP) last time.

As you’ll recall, we ‘cleared the board’ through the default disabling of app execution in the Property Rules. In the Additional Rules section, I then started adding Path rules for apps I thought were essential.

The only apps you’ll ever need!

Obviously, this can get a little tedious, so Microsoft helpfully provides two default rules: one to enable execution of apps in the Program folder and the other to enable executables in the Windows system directory.

But this is cheating, and then you’d be forced to blacklist the apps you don’t really need.

Anyway, when a user runs an unapproved app or a hacker tries to load some malware that’s not in the whitelist, SRP will prevent it.  Here’s what happened when I tried to launch PowerShell, which wasn’t in my whitelist, from the old-style cmd shell, which was in the list:

Damn you Software Restriction Policies!

100% Pure Security

To be ideologically pure, you wouldn’t use the default Windows SRP rules. Instead, you need to start from scratch with bupkes and do the grunt work of finding out what apps are being used and that are truly needed.

To help you get over this hurdle, Microsoft suggests in a Technet article that you turn on a logging feature that writes out an entry whenever SRP evaluates an app.  You’ll need to enable the following registry entry and set a log file location:

"HKLM\SOFTWARE\Policies\Microsoft\Windows\Safer\CodeIdentifiers"

String Value: LogFileName, <path to a log file>
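If you’d rather script the change than click through regedit, a minimal sketch looks like this (the log file location is my own arbitrary choice — point it anywhere writable):

# Minimal sketch: enable SRP rule-evaluation logging from an elevated PowerShell.
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\Safer\CodeIdentifiers'
New-ItemProperty -Path $key -Name LogFileName -PropertyType String -Value 'C:\Logs\srp-eval.log' -Force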



Here’s a part of this log file from my AWS test environment.

Log file produced by SRP.

So you have to review the log, question users, and talk it over with your fellow IT admins. Let’s say you work out a list of approved apps (excluding PowerShell), which you believe would make sense for a large part of your user community. You can then leverage the Group Policy Management console to publish the rules to the domain.

In theory, I should be able to drag-and-drop the rules I created in the Group Policy Editor for the machine I was working into the Management console. I wasn’t able to pull that off in my AWS environment.

I instead had to recreate the rules directly in the Group Policy Management Policy Editor (below), and then let it do the work of distributing them across the domain — in my case, the Acme domain.

Magic!

You can read more about how to do this here.

A Look at AppLocker

Let’s get back to the issue of PowerShell. We can’t live without it, yet hackers have used it as a tool for stealthy post-exploitation.

If I enable it in my whitelist, along with some of the built-in PowerShell protections I mentioned in the last post, there are still so many ways to get around these security precautions that it’s not worth the trouble.

It would be nice if SRP allowed you to do the whitelisting selectively, based on Active Directory user or group membership. In other words, effectively turn off PowerShell except if you’re, say, an IT admin who’s a member of a ‘Special PowerShell’ AD group.

That ain’t happening in SRP since it doesn’t support this level of granularity!

Starting in Windows 7 (and Windows Server 2008 R2), Microsoft deprecated SRP and introduced the (arguably) more powerful AppLocker. It’s very similar to what it replaces, but it does provide this user- and group-level filtering.

We’ll talk more about AppLocker and some of its benefits in the final post in this series. In any case, you can find this policy next to SRP in the Group Policy Editor under Application Control Policies.

For my Acme environment, I set up a rule that enables PowerShell only for users in the Acme-VIPs group, Acme’s small group of power IT employees. You can see how I started setting this up as I follow the AppLocker wizard dialog:

PowerShell is an important and useful tool, so you’ll need to weigh the risks of selectively enabling it through AppLocker — dare I say it, perform a risk assessment.

Of course, you should have secondary controls, such as, ahem, User Behavior Analytics, that allow you to protect against PowerShell misuse should the credentials of the PowerShell-enabled group be compromised by hackers or insiders.

We’ll take up other AppLocker capabilities and final thoughts on whitelisting in the next post.


Disabling PowerShell and Other Malware Nuisances, Part I


This article is part of the series "Disabling PowerShell and Other Malware Nuisances".

Back in more innocent times, circa 2015, we began to hear about hackers going malware-free and “living off the land.” They used whatever garden-variety IT tools were lying around on the target site. It’s the ideal way to do post-exploitation without tripping any alarms.

This approach has taken off and gone mainstream, primarily because of off-the-shelf post-exploitation environments like PowerShell Empire.

I’ve already written about how PowerShell, when supplemented with PowerView, becomes a potent purveyor of information for hackers. (In fact, all this PowerShell pen-testing wisdom is available in an incredible ebook that you should read as soon as possible.)

Any tool can be used for good or bad, so I’m not implying PowerShell was put on this earth to make the life of hackers easier.

But just as you wouldn’t leave a 28” heavy-duty cable cutter next to a padlock, you probably don’t want to let hackers get their hands on PowerShell — or at the very least, you want to make it much more difficult for them.

This brings up a large topic in the cybersecurity world: restricting application access, which is known more commonly as whitelisting or blacklisting. The overall idea is for the operating system to strictly control what apps can be launched by users.

For example, as a member of homo blogus, I generally need some basic tools and apps (along with a warm place to sleep at night), and can live without PowerShell, netcat, psexec, and some other cross-over IT tools I’ve discussed in previous posts. The same applies to most employees in an organization, so a smart IT person should be able to come up with a list of apps that are safe to use.

In the Windows world, you can enforce rules on application execution using Software Restriction Policies and more recently AppLocker.

However, before we get into these more advanced ideas, let’s try two really simple solutions and then see what’s wrong with them.

ACLs and Other Simplicities

We often think of Windows ACLs as being used to control access to readable content. But they can also be applied to executables — that is, .exe, .vbs, .ps1, and the rest.

I went back into Amazon Web Services where the Windows domain for the mythical and now legendary Acme company resides and then did some ACL restriction work.

The PowerShell .exe, as any sys admin can tell you, lives in C:\Windows\System32\WindowsPowerShell\v1.0. I navigated to the folder, clicked on properties, and effectively limited execution of PowerShell to a few essential groups: Domain Admins and Acme-SnowFlakes, which is the group of Acme employee power users.

I logged back into the server as Bob, my go-to Acme employee, and tried to bring up PowerShell. You can see the results below.

In practice, you could probably come up with a script — why not use PowerShell? — to automate this ACL setting for all the laptops and servers in a small- to mid-size site.
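As a hedged sketch of what such a script might do — the domain and group names are from my Acme setup, and note that you’d first have to take ownership of the file, which belongs to TrustedInstaller by default:

# Hedged sketch: limit powershell.exe to two groups (run elevated, after taking ownership).
$exe = 'C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe'
$acl = Get-Acl $exe
$acl.SetAccessRuleProtection($true, $false)   # drop inherited entries
foreach ($group in 'ACME\Domain Admins', 'ACME\Acme-SnowFlakes') {
    $rule = New-Object System.Security.AccessControl.FileSystemAccessRule($group, 'ReadAndExecute', 'Allow')
    $acl.AddAccessRule($rule)
}
Set-Acl -Path $exe -AclObject $acl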

It’s not a bad solution.

If you don’t like the idea of setting ACLs on executable files, PowerShell offers its own execution restriction controls. As a user with admin privileges, you can use — what else? — a PowerShell cmdlet called Set-ExecutionPolicy.

It’s not nearly as blunt an instrument as the ACLs, but you can restrict PowerShell to work only in interactive mode — with the Restricted parameter — so that it won’t execute scripts containing the hackers’ malware. PowerShell would still be available in a limited way, just not for running scripts.

However, this would also prevent PowerShell scripts from being run by your IT staff. To allow IT-approved scripts but disable evil hacker scripts, you use the RemoteSigned parameter in Set-ExecutionPolicy. PowerShell will then run locally created scripts, while anything downloaded has to be signed — the IT staff, of course, would need to sign their approved scripts with a trusted credential.
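For reference, both settings are one-liners from an elevated prompt:

Set-ExecutionPolicy -ExecutionPolicy Restricted -Scope LocalMachine     # interactive use only
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope LocalMachine   # local scripts run; downloaded scripts must be signed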

I won’t go into the details of how to do this, mostly because it’s so easy to get around these controls. Someone even has a listicle blog post describing 15 PowerShell security workarounds.

The easiest one is using the Bypass parameter in PowerShell itself. Duh! (below).
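In other words, something like this (the script name is hypothetical):

powershell.exe -ExecutionPolicy Bypass -File .\evil-malware.ps1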

Seems like a security hole, no?

So PowerShell has some basic security flaws. It’s somewhat understandable since it is, after all, just a shell program.

But even the ACL restriction approach has a fundamental problem.

If hackers loosen up on the “living off the land” philosophy, they can simply download — say, using a remote access trojan (RAT) — their own copy of the PowerShell executable, and then run it directly, avoiding the permission restrictions on the resident copy.

Software Restriction Policies

These basic security holes (and many others) are always an issue with consumer-grade operating systems. This has led OS researchers to come up with secure operating systems that have direct power to control what can be run.

In the Windows world, these powers are known as Software Restriction Policies (SRP) — for a good overview, see this — that are managed through the Group Policy Editor.

With SRP you can control which apps can be run, based on file extension, path names, and whether the app has been digitally signed.

The most effective, though most painful, approach is to disallow everything and then add back the applications you really, really need. This is known as whitelisting.

We’ll go into more details in the next post.

Anyway, you’ll need to launch the policy editor, gpedit, and navigate to Local Computer Policy > Windows Settings > Security Settings > Software Restriction Policies > Security Levels. If you click on “Disallowed”, you can then make this the default security policy — to not run any executables!

The whitelist: disallow as default, and then add app policies in “Additional Rules”.

This is more like a scorched earth policy. In practice, you’ll need to enter “Additional Rules” to add back the approved apps (with their path names). If you leave out PowerShell, then you’ve effectively disabled this tool on the site.

Unfortunately, you can’t fine-tune the SRP rules based on AD groups or users. Drat!

And that will bring us to Microsoft’s latest and greatest security enforcer, known as AppLocker, which does provide some nuance to application access. We’ll take that up next time as well.


Practical PowerShell for IT Security, Part III: Classification on a Budget


Last time, with a few lines of PowerShell code, I launched an entire new software category, File Access Analytics (FAA). My 15 minutes of fame are almost over, but I was able to make the point that PowerShell has practical file event monitoring capabilities. In this post, I’ll finish some old business with my FAA tool and then take up PowerShell-style data classification.

Event-Driven Analytics

To refresh memories, I used the Register-WmiEvent cmdlet in my FAA script to watch for file access events in a folder. I also created a mythical baseline of event rates to compare against. (For wonky types, there’s a whole area of measuring these kinds of things — hits to web sites, calls coming into a call center, traffic at espresso bars — that was started by this fellow.)

When file access counts reach above normal limits, I trigger a software-created event that gets picked up by another part of the code and pops up the FAA “dashboard”.

This triggering is performed by the New-Event cmdlet, which allows you to send an event, along with other information, to a receiver. To read the event, there’s the Wait-Event cmdlet. The receiving part can even be in another script, as long as both event cmdlets use the same SourceIdentifier — Bursts, in my case.

These are all operating systems 101 ideas: effectively, PowerShell provides a simple message-passing system. Pretty neat, considering we’re using what is, after all, a bleepin’ command language.
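Stripped to its essentials, the pattern is just a matched pair of cmdlets sharing a SourceIdentifier — a minimal sketch, separate from the full script below:

New-Event -SourceIdentifier Bursts -MessageData "something happened"   # sender
Wait-Event -SourceIdentifier Bursts                                    # receiver blocks here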

Anyway, the full code is presented below for your amusement.

$cur = Get-Date
$Global:Count=0
$Global:baseline = @{"Monday" = @(3,8,5); "Tuesday" = @(4,10,7);"Wednesday" = @(4,4,4);"Thursday" = @(7,12,4); "Friday" = @(5,4,6); "Saturday"=@(2,1,1); "Sunday"= @(2,4,2)}
$Global:cnts =     @(0,0,0)
$Global:burst =    $false
$Global:evarray =  New-Object System.Collections.ArrayList

$action = { 
    $Global:Count++  
    $d=(Get-Date).DayofWeek
    $i= [math]::floor((Get-Date).Hour/8) 

   $Global:cnts[$i]++ 

   #event auditing!
    
   $rawtime =  $EventArgs.NewEvent.TargetInstance.LastAccessed.Substring(0,12)
   $filename = $EventArgs.NewEvent.TargetInstance.Name
   $etime= [datetime]::ParseExact($rawtime,"yyyyMMddHHmm",$null)
  
   $msg="$($etime)): Access of file $($filename)"
   $msg|Out-File C:\Users\bob\Documents\events.log -Append
  
   
   $Global:evarray.Add(@($filename,$etime))
   if(!$Global:burst) {
      $Global:start=$etime
      $Global:burst=$true            
   }
   else { 
     if($Global:start.AddMinutes(15) -gt $etime ) { 
        $Global:Count++
        #File behavior analytics
        $sfactor=2*[math]::sqrt( $Global:baseline["$($d)"][$i])
       
        if ($Global:Count -gt $Global:baseline["$($d)"][$i] + 2*$sfactor) {
         
         
          "$($etime): Burst of $($Global:Count) accesses"| Out-File C:\Users\bob\Documents\events.log -Append 
          $Global:Count=0
          $Global:burst =$false
          New-Event -SourceIdentifier Bursts -MessageData "We're in Trouble" -EventArguments $Global:evarray
          $Global:evarray= [System.Collections.ArrayList] @();
        }
     }
     else { $Global:burst =$false; $Global:Count=0; $Global:evarray= [System.Collections.ArrayList]  @();}
   }     
} 
 
Register-WmiEvent -Query "SELECT * FROM __InstanceModificationEvent WITHIN 5 WHERE TargetInstance ISA 'CIM_DataFile' and TargetInstance.Path = '\\Users\\bob\\' and targetInstance.Drive = 'C:' and (targetInstance.Extension = 'txt' or targetInstance.Extension = 'doc' or targetInstance.Extension = 'rtf') and targetInstance.LastAccessed > '$($cur)' " -sourceIdentifier "Accessor" -Action $action   


#Dashboard
While ($true) {
    $args=Wait-Event -SourceIdentifier Bursts # wait on Burst event
    Remove-Event -SourceIdentifier Bursts #remove event
  
    $outarray=@() 
    foreach ($result in $args.SourceArgs) {
      $obj = New-Object System.Object
      $obj | Add-Member -type NoteProperty -Name File -Value $result[0]
      $obj | Add-Member -type NoteProperty -Name Time -Value $result[1]
      $outarray += $obj  
    }


     $outarray|Out-GridView -Title "FAA Dashboard: Burst Data"
 }

Please don’t pound your laptop as you look through it.

I’m aware that I continue to pop up separate grid views, and there are better ways to handle the graphics. With PowerShell, you do have access to the full .Net framework, so you could create and access objects —listboxes, charts, etc. — and then update as needed. I’ll leave that for now as a homework assignment.

Classification is Very Important in Data Security

Let’s put my file event monitoring on the back burner, as we take up the topic of PowerShell and data classification.

At Varonis, we preach the gospel of “knowing your data” for good reason. In order to work out a useful data security program, one of the first steps is to learn where your critical or sensitive data is located — credit card numbers, consumer addresses, sensitive legal documents, proprietary code.

The goal, of course, is to protect the company’s digital treasure, but you first have to identify it. By the way, this is not just a good idea, but many data security laws and regulations (for example, HIPAA)  as well as industry data standards (PCI DSS) require asset identification as part of doing real-world risk assessment.

PowerShell should have great potential for use in data classification applications. Can PS access and read files directly? Check. Can it perform pattern matching on text? Check. Can it do this efficiently on a somewhat large scale? Check.

No, the PowerShell classification script I eventually came up with will not replace the Varonis Data Classification Framework. But for the scenario I had in mind — an IT admin who needs to watch over an especially sensitive folder — my PowerShell effort gets more than a passing grade, say a B+!

WQL and CIM_DataFile

Let’s now return to WQL, which I referenced in the first post on event monitoring.

Just as I used this query language to look at file events in a directory, I can tweak the script to retrieve all the files in a specific directory. As before I use the CIM_DataFile class, but this time my query is directed at the folder itself, not the events associated with it.

Get-WmiObject -Query "SELECT * From CIM_DataFile where Path = '\\Users\\bob\\' and Drive = 'C:' and (Extension = 'txt' or Extension = 'doc' or Extension = 'rtf')"

Terrific!  This line of code will output an array of file path names.

To read the contents of each file into a variable, PowerShell conveniently provides the Get-Content cmdlet. Thank you Microsoft.

I need one more ingredient for my script, which is pattern matching. Not surprisingly, PowerShell has a regular expression engine. For my purposes it’s a little bit of overkill, but it certainly saved me time.

In talking to security pros, they’ve often told me that companies should explicitly mark documents or presentations containing proprietary or sensitive information with an appropriate footer — say, Secret or Confidential. It’s a good practice, and of course it helps in the data classification process.

In my script, I created a PowerShell hashtable of possible marker texts with an associated regular expression to match it. For documents that aren’t explicitly marked this way, I also added special project names — in my case, snowflake — that would also get scanned. And for kicks, I added a regular expression for social security numbers.

The code block I used to do the reading and pattern matching is listed below. The file name to read and scan is passed in as a parameter.

$Action = {
    Param (
        [string] $Name
    )

    # marker patterns to scan for; the SSN pattern is the standard 3-2-4 digit format
    $classify = @{"Top Secret" = [regex]'[tT]op [sS]ecret'; "Sensitive" = [regex]'([Cc]onfidential)|([sS]nowflake)'; "Numbers" = [regex]'[0-9]{3}-[0-9]{2}-[0-9]{4}'}

    # read the file, then count the matches for each marker
    $data = Get-Content $Name
    $cnts = @()

    foreach ($key in $classify.Keys) {
        $m = $classify[$key].matches($data)
        if ($m.Count -gt 0) {
            $cnts += @($key, $m.Count)
        }
    }

    $cnts
}
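To try the block on a single file, invoke it with the call operator (the path is hypothetical):

& $Action -Name 'C:\Users\bob\Documents\plans.txt'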

Magnificent Multi-Threading

I could have just simplified my project by taking the above code and adding some glue, and then running the results through the Out-GridView cmdlet.

But this being the Varonis IOS blog, we never, ever do anything nice and easy.

There is a point I’m trying to make. Even for a single folder in a corporate file system, there can be hundreds, perhaps even a few thousand files.

Do you really want to wait around while the script is serially reading each file?

Of course not!

Large-scale file I/O applications, like what we’re doing with classification, are very well-suited for multi-threading — you can launch lots of file activity in parallel and thereby significantly reduce the delay in seeing results.

PowerShell does have a usable (if clunky) background processing system known as Jobs. But it also boasts an impressive and sleek multi-threading capability known as Runspaces.

After playing with it, and borrowing code from a few Runspaces’ pioneers, I am impressed.

Runspaces handles all the messy mechanics of synchronization and concurrency. It’s not something you can grok quickly, and even Microsoft’s amazing Scripting Guys are still working out their understanding of this multi-threading system.

In any case, I went boldly ahead and used Runspaces to do my file reads in parallel. Below is a bit of the code to launch the threads: for each file in the directory I create a thread that runs the above script block, which returns matching patterns in an array.

$RunspacePool = [RunspaceFactory]::CreateRunspacePool(1, 5)   # at most 5 concurrent threads
$RunspacePool.Open()

$Tasks = @()

foreach ($item in $list) {
    # one PowerShell pipeline per file, each running the $Action script block
    $Task = [powershell]::Create().AddScript($Action).AddArgument($item.Name)
    $Task.RunspacePool = $RunspacePool
    $status = $Task.BeginInvoke()              # launch asynchronously
    $Tasks += ,@($status, $Task, $item.Name)   # the comma keeps each triple together
}
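The post stops at launching the threads. Purely as a hedged sketch of my own — not the full script promised for the next post — here’s one way to collect what comes back:

# Hedged sketch (my own glue): block on each task, print its pattern counts, clean up.
foreach ($t in $Tasks) {
    $status, $ps, $name = $t           # unpack the (IAsyncResult, PowerShell, file name) triple
    $result = $ps.EndInvoke($status)   # waits for this thread to finish
    if ($result) { "$name : $($result -join ', ')" }
    $ps.Dispose()
}
$RunspacePool.Close()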

Let’s take a deep breath—we’ve covered a lot.

In the next post, I’ll present the full script, and discuss some of the (painful) details.  In the meantime, after seeding some files with marker text, I produced the following output with Out-GridView:

Content classification on the cheap!

In the meantime, another idea to think about is how to connect the two scripts: the file activity monitoring one and the classification script partially presented in this post.

After all, the classification script should communicate what’s worth monitoring to the file activity script, and the activity script could in theory tell the classification script when a new file is created so that it could classify it—incremental scanning in other words.

Sounds like I’m suggesting, dare I say it, a PowerShell-based security monitoring platform. We’ll start working out how this can be done the next time as well.

Practical PowerShell for IT Security, Part I: File Event Monitoring


Back when I was writing the ultimate penetration testing series to help humankind deal with hackers, I came across some interesting PowerShell cmdlets and techniques. I made the remarkable discovery that PowerShell is a security tool in its own right. Sounds to me like it’s the right time to start another series of PowerShell posts.

We’ll take the view in these posts that while PowerShell won’t replace purpose-built security platforms — Varonis can breathe easier now — it will help IT staff monitor for threats and perform other security functions. And also give IT folks an appreciation of the miracles that are accomplished by real security platforms, like our own Metadata Framework. PowerShell can do interesting security work on a small scale, but it is in no way equipped to take on an entire infrastructure.

It’s a Big Event

To begin, let’s explore using PowerShell as a system monitoring tool to watch files, processes, and users.

Before you start cursing into your browsers, I’m well aware that any operating system command language can be used to monitor system-level happenings. A junior IT admin can quickly put together, say, a Linux shell script to poll a directory to see if a file has been updated or retrieve a list of running processes to learn if a non-standard process has popped up.

I ain’t talking about that.

PowerShell instead gives you direct event-driven monitoring based on the operating system’s access to low-level changes. It’s the equivalent of getting a push notification on a news web page alerting you to a breaking story rather than having to manually refresh the page.

In this scenario, you’re not in an endless PowerShell loop, burning up CPU cycles, but instead the script is only notified or activated when the event — a file is modified or a new user logs in — actually occurs. It’s a far more efficient way to do security monitoring than by brute-force polling.

Further down below, I’ll explain how this is accomplished.

But first, anyone who’s ever taken, as I have, a basic “Operating Systems for Poets” course knows that there’s a demarcation between user-level and system-level processes.

The operating system, whether Linux or Windows, does the low-level handling of device actions – anything from disk reads, to packets being received — and hides this from garden variety apps that we run from our desktop.

So if you launch your favorite word processing app and view the first page of a document, the whole operation appears as a smooth, synchronous activity. But in reality there are all kinds of time-sensitive actions — disk seeks, disk blocks being read, characters sent to the screen, etc. — happening under the hood and deliberately hidden from us. Thank you, Bill Gates!

In the old days, only hard-core system engineers knew about this low-level event processing. But as we’ll soon see, PowerShell scripters can now share in the joy as well.

An OS Instrumentation Language

This brings us to Windows Management Instrumentation (WMI), which is a Microsoft effort to provide a consistent view of operating system objects.

WMI is itself part of a broader industry effort, known as Web-based Enterprise Management (WBEM), to standardize the information pulled out of routers, switches, and storage arrays, as well as operating systems.

So what does WMI actually look and feel like?

For our purposes, it’s really a query language, like SQL, but instead of accessing rows of vanilla database columns, it presents complex OS information organized as a WMI class hierarchy. Not too surprisingly, the query language is known as, wait for it, WQL.

Windows generously provides a utility, wbemtest, that lets you play with WQL. In the graphic below, you can see the results of my querying the Win32_Process object, which holds information on the current processes running.

WQL on training wheels with wbemtest.

Effectively, it’s the programmatic equivalent of running the Windows task monitor. Impressive, no? If you want to know more about WQL, download Ravi Chaganti’s wonderous ebook on the subject.

PowerShell and the Register-WmiEvent Cmdlet

But there’s more! You can take off the training wheels provided by wbemtest, and try these queries directly in PowerShell.

PowerShell’s Get-WmiObject is the appropriate cmdlet for this task, and it lets you feed in the WQL query directly as a parameter.

The graphic below shows the first few results from running select Name, ProcessId, CommandLine from Win32_Process on my AWS test environment.

gwmi is the PowerShell alias for Get-WmiObject.

The output is a bit wonky since it’s showing some hidden properties having to do with underlying class bookkeeping. The cmdlet also spews out a huge list that speeds by on my console.

For a better Win32_Process experience, I piped the output from the query into Out-GridView, a neat PS cmdlet that formats the data as a beautiful GUI-based table.
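For reference, the whole thing amounts to something like this one-liner (the Select-Object in the middle trims away those hidden system properties):

Get-WmiObject -Query "select Name, ProcessId, CommandLine from Win32_Process" | Select-Object Name, ProcessId, CommandLine | Out-GridView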

Not too shabby for a line of PowerShell code. But WMI does more than allow you to query these OS objects.

As I mentioned earlier, it gives you access to relevant events on the objects themselves. In WMI, these events are broadly broken into three types: creation, modification, and deletion.

Prior to PowerShell 2.0, you had to access these events in a clunky way: creating lots of different objects and then synchronously ‘hanging’ — so it wasn’t true asynchronous event handling. If you want to know more, read this MS Technet post for the ugly details.

Now in PS 2.0 with the Register-WmiEvent cmdlet, we have a far prettier way to react to all kinds of events. In geek-speak, I can register a callback that fires when the event occurs.

Let’s go back to my mythical (and now famous) Acme Company, whose IT infrastructure is set up on my AWS environment.

Let’s say Bob, the sys admin, notices every so often that he’s running low on file space on the Salsa server. He suspects that Ted Bloatly, Acme’s CEO, is downloading huge files, likely audio files, into one of Bob’s directories and then moving them into Ted’s own server on Taco.

Bob wants to set a trap: when a large file is created in his home directory, he’ll be notified on his console.

To accomplish this, he’ll need to work with the CIM_DataFile class.  Instead of accessing processes, as we did above, Bob uses this class to connect with the underlying file metadata.

CIM_DataFile object can be accessed directly in PowerShell.

Playing the part of Bob, I created the following Register-WmiEvent script, which will notify the console when a very large file is created in the home directory.

Register-WmiEvent -Query "SELECT * FROM __InstanceModificationEvent WITHIN 5 WHERE TargetInstance isa 'CIM_DataFile' and TargetInstance.FileSize > 2000000 and TargetInstance.Path = '\\Users\\bob\\' and targetInstance.Drive = 'C:'" -sourceIdentifier "Accessor3" -Action { Write-Host "Large file" $EventArgs.NewEvent.TargetInstance.Name "was created" }

 

Running this script directly from the Salsa console launches the Register-WmiEvent command in the background, assigning it a job number, and then only interacts with the console when the event is triggered.

In the next post, I’ll go into more details about what I’ve done here. Effectively, I’m using WQL to query the CIM_DataFile object — particularly anything in the \Users\bob directory that’s over 2 million bytes — and setting up a notification when a new file is created that fits these criteria — that’s where InstanceModificationEvent comes into play.

Anyway, in my Bob role I launched the script from the PS command line, and then, putting on my Ted Bloatly hat, I copied a large mp4 into Bob’s directory. You can see the results below.

We now know that Bloatly is a fan of Melody Gardot. Who would have thunk it?

You begin to see some of the exciting possibilities with PowerShell as a tool to detect threat patterns and perhaps do a little behavior analytics.

We’ll be exploring these ideas in the next post.

Binge Read Our Pen Testing Active Directory Series


With winter storm Niko now on its extended road trip, it’s not too late, at least here on the East Coast, to make a few snow day plans. Sure, you can spend part of Thursday catching up on Black Mirror while scarfing down this slow cooker pork BBQ pizza. However, I have a healthier suggestion.

Why not binge on our amazing Pen Testing Active Directory Environments blog posts?

You’ve read parts of it, or — spoiler alert — perhaps heard about the exciting conclusion involving a depth-first-search of the derivative admin graph. But now’s your chance to totally immerse yourself and come back to work better informed about the Active Directory dark knowledge that hackers have known about for years.

And may we recommend eating these healthy soy-roasted kale chips while clicking below?


Five Ways for a CDO to Drive Growth, Improve Efficiencies, and Manage Risk


We’ve already written about the growing role of the chief data officer (CDO) and their challenging task of leveraging data science to drive profits. But the job of a CDO is not just about moving the profit meter.

It’s less-widely known that they’re also tasked with meeting three other business objectives: finding ways to drive overall growth, improve efficiencies and manage risk.

Why? All business activities and processes benefit from these three objectives.

Luckily, we can turn to Morgan Stanley’s CDO, Jeffrey McMillan, for some guidance. I heard him speak at a recent CDO Summit in New York City, where he dispensed sage advice for both practicing and aspiring CDOs.

McMillan suggested these five analytics and data strategy processes:

1. Make sure your data science is aligned with your business strategy.

Yes, good data scientists are hard to find. But McMillan says that rather than spending your energies finding a good data scientist, make sure that your scientist can also think like a sales or business person.

He says, “I would much rather have a very mediocre data scientist, who really understood the business than the reverse. Because the reverse doesn’t help me at all. It’s not about the algorithm, it’s about understanding of the business.”

Once you have your data scientist in place, McMillan is adamant about ensuring this resource is honored and respected.

He explains, “If no one is actually going to do anything with what you recommend doing, they don’t get more resources. There are a lot of things we learn about the world that are interesting that don’t actually change our behaviors. And we need to focus on things that changes our behaviors.”

2. Empower the end users to consume data visualizations

According to McMillan, it’s more important to get a little bit of data in the hands of many, than a lot of data in the hands of few. Why? It’s vital to bring data to the decision makers.

“They don’t always want your algorithm,” McMillan says. “They do want information about the business.”

Moreover, his plan for Morgan Stanley to make data accessible to everyone is this: “Our vision is: in the next five years, I want every single employee in our firm to have access to a data visualization tool. And I want 15-20% of the employees to be able to create their own content using their own data visualization tool.”

3. Create the next-best action framework

McMillan has a process that makes decision making vastly better. He calls it the “next-best action framework.” This system learns, evolves, and adjusts in real time.

He describes the process in the following way:

“Every single thing that a human can do at the office gets ingested into a system. It gets modelled against their own expectations, their historical behaviors, their customer’s behaviors, market conditions, and, if you can believe it, 400 other factors.

Then, it gets optimized, based on specific needs of the customer and the employee. Out comes a few ideas, which are scored.

We score whether or not we should call a customer about a bounced check versus an opportunity to call them about an opportunity about a golf outing. Then, we watch what the customer does.”

According to McMillan, Morgan Stanley has found success in their next-best action approach, delivering real time investment advice, in scale, to 16,000 advisers.

4. Leverage digital intelligence

When it comes to artificial intelligence, the real value is in the intelligence. In some ways, McMillan prefers the term digital intelligence.

“We’re digitizing human understanding in a way that creates scale,” notes McMillan. “In the end, the winners aren’t going to be the technology providers. They’re going to be organizations that have the knowledge. If you have knowledge and information that’s differentiated, you will do well in the space over time because someone will have to teach the machine how to start. It just doesn’t learn by itself.”

When you can, remember to keep it simple. McMillan reminds us, “No one cares how hard it is for you to do it.”

5. Take a holistic approach to data management

Finally, McMillan warns that your efforts will fail or significantly under-deliver if you don’t take a holistic approach to managing one of your firm’s most valuable resources – your data. To prioritize, he says to focus on the most critical attributes that drive your key business objectives.

 

Pen Testing Active Directory Environments, Part VI: The Final Case


If you’ve come this far in the series, I think you’ll agree that security pros have to move beyond checking off lists. The mind of the hacker is all about making connections, planning several steps ahead, and then jumping around the victim’s network in creative ways.

Lateral movement through derivative admins is a good example of this approach. In this concluding post, I’ll finish up a few loose ends from last time and then talk about Active Directory, metadata, security, and what it all means.

Back to the Graph

Derivative admin is one of those very creative ways to view the IT landscape. As pen testers, we look for AD domain groups that have been assigned to the local administrator group, and then discover the domain users those groups share.

The PowerView cmdlet Get-NetLocalGroup provides the raw information, which we then turn into a graph. I started talking about how this can be done. It’s a useful exercise, so let’s complete what we started.

Back in the Acme IT environment, I showed how it was possible to jump from the Salsa server to Avocado and then land in Enchilada. Never thought I’d get hungry writing a sentence about network topology!

With a little thought, you can envision a graph that lets you travel between server nodes and user nodes, where the edges are bi-directional. And in fact, the right approach is to use an undirected graph that can contain cycles, making it the inverse of the directed acyclic graph (DAG) I discussed earlier in the series.

In plain English, this means that if I put a user node on a server’s adjacency list, I then need to add that server to the user’s adjacency list. Here it is sketched out, using arrows to represent adjacency, for the Salsa-to-Enchilada scenario:

salsa -> cal
cal -> salsa, avocado
avocado -> cal, meg
meg -> avocado, enchilada
enchilada -> meg

Since I already explained the amazing pipeline based on using Get-NetLocalGroup with the -Recurse option, it’s fairly straightforward to write out the PowerShell (below). My undirected graph is contained in $Gda.

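The build script itself lived in a screenshot that didn’t survive this archive, so here’s a hedged reconstruction of the idea — the server names are from my Acme setup, and the AccountName/IsGroup field names are from the PowerView of that era, so adjust for your version:

# Hedged reconstruction: undirected adjacency lists, one per node, kept in $Gda.
$Gda = @{}
foreach ($srv in 'salsa', 'avocado', 'enchilada') {
    # drill down to the leaf users in the server's local Administrators group
    $admins = Get-NetLocalGroup -ComputerName $srv -Recurse | Where-Object { -not $_.IsGroup }
    foreach ($a in $admins) {
        $user = ($a.AccountName -split '[\\/]')[-1]   # strip the domain/machine prefix
        if (-not $Gda[$srv])  { $Gda[$srv]  = @() }
        if (-not $Gda[$user]) { $Gda[$user] = @() }
        $Gda[$srv]  += $user   # server -> user edge
        $Gda[$user] += $srv    # user -> server edge (undirected)
    }
}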

Unlike the DAG I already worked out where I can only go from root to leaf and not circle back to a node, there can be cycles in this graph. So I need to keep track of whether I previously visited a node. To account for this, I created a separate PS array called $visited. When I traverse this graph to find a path, I’ll use $visited to mark nodes I’ve processed.

I ran my script giving it parameters “salsa”, “enchilada”, and “avocado”, and it displays $Gda containing my newly created adjacency lists.


Lost in the Graph

The last piece now is to develop a script to traverse the undirected graph and produce the “breadcrumb” trail.

Similar to the breadth-first-search (BFS) I wrote about to learn whether a user belongs to a domain group, depth-first-search (DFS) is a graph navigation algorithm with one helpful advantage.

DFS is actually the more intuitive node traversal technique. It’s really closer to the way many people deal with finding a destination when they’re lost. As an experienced international traveler, I’ve often used something close to DFS when my local maps proved less than informative.

Let’s say you get to where you think the destination is, realize you’re lost, and then backtrack to the last known point where map and reality are somewhat similar. You then try another path from that point. And then backtrack if you still can’t find that hidden gelato café.

If you’ve exhausted all the paths from that point, you backtrack yet again and try new paths further up the map. You’ll eventually come across the destination spot, or at least get a good tour of the older parts of town.

That’s essentially DFS! The appropriate data structure is a stack that keeps track of where you’ve been. If all the paths from the current top of the stack don’t lead anywhere, you pop the stack and work with the previous node in the path – the backtrack step.

To avoid getting into a loop because of the cycles in the undirected graph, you just mark every node you visit and avoid those that you’ve already visited.

Finally, the nodes already on the stack form your breadcrumb trail — the path to get from your source to your destination.

All these ideas are captured in the script below, and you see the results of my running it to find a path between Salsa and Enchilada.

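That script was also a screenshot; a hedged sketch of the same idea — an iterative DFS with an explicit stack over the $Gda adjacency lists — looks like this:

# Hedged sketch of the path finder: each stack entry is the breadcrumb trail so far.
function Find-Path($graph, $src, $dst) {
    $visited = @{}
    $stack = New-Object System.Collections.Stack
    $stack.Push(@($src))
    while ($stack.Count -gt 0) {
        $path = $stack.Pop()
        $node = $path[-1]
        if ($node -eq $dst) { return $path -join ' -> ' }
        if ($visited[$node]) { continue }   # cycle guard for the undirected graph
        $visited[$node] = $true
        foreach ($n in $graph[$node]) { $stack.Push($path + $n) }
    }
    'no path found'
}
Find-Path $Gda 'salsa' 'enchilada'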

Stacks: it’s what you need when you’re lost.


From Salsa to Enchilada by way of Cal and Meg!

Is this a complete and practical solution?

The answer is no and no. To really finish this, you’ll also need to scan the domain for users who are currently logged into the servers. If these ostensibly local admin users whose credentials you want to steal are not online, their hashes are likely not available for passing. You would therefore have to account for this in working out possible paths.

As you might imagine, in most corporate networks, with potentially hundreds of computers and users, the graph can get gnarly very quickly. More importantly, just because you found a path doesn’t necessarily mean it’s the shortest path. In other words, my code above may chug along and find a completely impractical path that involves hopping between, say, twenty or thirty computers. It’s possible but not practical.

Fortunately, Andy Robbins worked out far prettier PowerShell code that addresses the above weaknesses in my scripts. Robbins uses PowerView’s Get-NetSession to scan for online users. And he cleverly employs a beloved computer science 101 algorithm, Dijkstra’s Shortest Path, to find the optimal path between two nodes.

Pen Testers as Metadata Detectives and Final Thoughts

Once I stepped back from all this PowerShell and algorithms (and had a few appropriate beverages), the larger picture came into focus.

Thinking like hackers, pen testers know that to crack a network they’ve landed in, they need to work indirectly, because there’s rarely the equivalent of a neon sign pointing to the treasure.

And that’s where metadata helps.

Every piece of information I leveraged in this series of posts is essentially metadata: file ACLs, Active Directory groups and users, system and session information, and other AD information scooped up by PowerView.

The pen tester, unlike the perimeter-based security pro, is incredibly clever at using this metadata to find and exploit security gaps. They’re masters at thinking in terms of connections, moving around the network with the goal of collecting more metadata, and then with a little luck, they can get the goodies.

I was resisting, but you can think of pen testers as digital consulting detectives — Sherlock Holmes, the Benedict Cumberbatch variant that is, but with we hope better social skills.

Here are some final thoughts.

While pen testers offer valuable services, the examples in this series could be accomplished offline by regular folks — IT security admins and analysts.

In other words, the IT group could scoop up the AD information, do the algorithms, and then discover if there are possible paths for both the derivative admin case and the file ACL case from earlier in the series.

The goal for them is to juggle Active Directory users and groups into a configuration that greatly reduces the risk of hackers gaining user credentials.

And ultimately prevent valuable content from being taken, like your corporate IP or millions of your customers’ credit card numbers.

 

Connecting Your Data Strategy to Analytics: Eight Questions to Ask


Big data has ushered in a new executive role over the past few years. The chief data officer or CDO now joins the C-level club, tasked with leveraging data science to drive the bottom line. According to a recent executive survey, 54% of firms surveyed now report having appointed a CDO.

Taking on the role is one thing; learning how to be successful is another.

“A CDO’s job starts like this: a CEO, CFO or maybe a CMO says, ‘We want our company to be more data driven, and we want to start capitalizing on these new technologies. Go figure out what that means for us.’ And that’s quite often the beginning point for a chief data officer,” CDO Richard Wendell explained to me during an interview.

One way CDOs are approaching this challenge is by connecting the organization’s data strategy with their analytics tools, and then acting on the resulting new idea or information.

Jeffrey McMillan of Morgan Stanley is one such CDO. I heard him speak at a recent CDO Summit in New York City. While his focus is primarily on financial services, McMillan had invaluable, hard-earned wisdom that all CDOs can learn from.

McMillan’s first step to enlightenment was when he realized several years ago that technology alone was an illusion. He noted, “You’re just spending money on your clusters, you put your project plan together, you’re all excited you got your Hadoop infrastructure cluster in place, your ETL tools, and you have your data governance in place. And you know what, nothing really changes.”

The Magic Remedy? Ask These Eight Questions

#1 – What is your business strategy?

To McMillan, a standalone data strategy doesn’t exist.

He emphasizes that there is only a business strategy, and data and analytics are just tools: “A data strategy isn’t going to generate a single incremental dollar for your business, it’s an enabler. No different than your web strategy or your workflow strategy. It’s just another component to your solution.”

#2 – Have you defined and communicated key objectives throughout your organization?

McMillan advises that if you can’t answer the first two questions, stop and don’t spend more money: “You’re going to be wasting a lot of time, money and resources solving for a problem when you don’t even know what the problem is.”

#3 – What is the role of data and analytics in driving your strategy?

Sometimes your data analytics is transformative and sometimes, it’s marginal. And if it’s not actually moving the needle, ask, ‘What are your goals and objectives?’; ‘What are you trying to solve here?’; ‘And how does your data and analytics strategy work to do that?’

#4 – Are people really buying in?

McMillan gets it. Getting people to buy in to your idea is going to be difficult.

Your CEO and your COO or the head of your business might say, ‘Yeah, I hear ya, the world is changing but we’ve got limited resources, we’re not sure how to engage.’

And in some cases, they’re threatened by changes.

#5 – What new roles need to be created and how do the existing roles fit into this?

There are going to be new roles that will be generated from the work you do. How does the Chief Data/Analytics Officer fit into the organization?

#6 – What’s already going on within your organization that you can leverage?

There’s going to be extremely valuable work that’ll happen, which will unfortunately never get the exposure it deserves. It’s the age-old complaint from IT: “I only get noticed when something goes wrong!”

McMillan posits, “How do you enable that guy that’s working on the weekend, doing some interesting stuff with Python code that’s generating some interesting value? How do you get him or her exposed to an opportunity?”

#7 – Do you have the right data and is it of sufficient quality?

McMillan warns, “Honestly, you’re going to fail if you don’t have data quality.”

#8 – How do you define success?

The best part about analytics is that you can actually define and measure success.

And finally, you should know that McMillan spends 90% of his time focused on questions one and two, because he thinks that “if you don’t have one or two right, everything else is pretty much useless.”

In a future post, we’ll also review how he drives growth, improves efficiency and manages risk.

Pen Testing Active Directory Environments, Part V: Admins and Graphs


If you’ve survived my last blog post, you know that Active Directory group structures can be used as powerful weapons by hackers. Our job as pen testers is to borrow these same techniques — in the form of PowerView — that hackers have known about for years, and then show management where the vulnerabilities live in their systems.

I know I had loads of fun building my AD graph structures. It was even more fun running my breadth-first-search (BFS) script on the graph to quickly tell me who the users are that would allow access to a file that I couldn’t enter with my current credentials.

Remember?

The “Top Secret” directory on the Acme Salsa server was off limits with “Bob” credentials but available to anyone in the “Acme-Legal” group. The PowerShell script I wrote helped me navigate the graph and find the underlying users in Acme-Legal.

Closing My Graphs

If you think about this, instead of having to always search the same groups to find the leaf nodes, why not just build a table that has this information pre-loaded?

I’m talking about what’s known in the trade as the transitive closure of a graph. It sounds nerdier than it really needs to be: I’m just finding everything reachable, directly and indirectly, from any of the AD nodes in my graph structure.

I turned to brute force to solve the closure problem. I simply modified my PowerShell scripts from last time to do a BFS from each node or entry in my lists and then collect everything I visited. My closed graph is now contained in $GroupTC (see below).

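The script was a screenshot; as a hedged sketch, the brute-force closure amounts to running a BFS from every node, assuming the raw adjacency lists from last time live in a hashtable (here called $Group):

# Hedged sketch: transitive closure by BFS from each node.
$GroupTC = @{}
foreach ($node in $Group.Keys) {
    $reach = @()
    $queue = New-Object System.Collections.Queue
    $queue.Enqueue($node)
    while ($queue.Count -gt 0) {
        $cur = $queue.Dequeue()
        foreach ($n in $Group[$cur]) {
            if ($reach -notcontains $n) { $reach += $n; $queue.Enqueue($n) }
        }
    }
    $GroupTC[$node] = $reach   # everything reachable from $node, directly or indirectly
}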

 

Before you scream into your browsers: there are better ways to do this, especially for directed graphs, and I know about the node sorting approach. The point here is to transcend your linear modes of thinking and view the AD environment in terms of connections.

Graph perfectionists can check this out.

Here’s a partial dump of my raw graph structure from last time:


And the same information, just for “Acme-VIPs”, that’s been processed with my closure algorithm:


Notice how the Acme-VIPs list has all the underlying users! If I had spent a little more time, I’d have eliminated every group in the search path from the list and kept just the leaf nodes — in other words, the true list of users who can access a directory carrying the Acme-VIPs access control permission.

Still, what I’ve created is quite valuable. You can imagine hackers using these very same ideas. Perhaps they log in quickly to run PowerView scripts to grab the raw AD group information and then leave the closure processing for large AD environments to an offline step.

There is an Easier Way to Do Closure

We can all agree that knowledge is valuable just for knowledge’s sake. And even if I tell you there’s a simpler way to do closure than I just showed, you’ll still have benefited from the deep wisdom gained from knowing about breadth first searches.

There is a simpler way to do closure.

As it turns out, PowerView cmdlets with a little extra PowerShell sauce can work out the users belonging to a top-level AD group in one long pipeline.

Remember the Get-NetGroupMember cmdlet that spews out all the direct underlying AD members? It also has a –Recurse option that performs the deep search that I accomplished with the breadth-first-search algorithm above.

To remove the AD groups in the search path that my algorithm didn’t, I can filter on the IsGroup field, which conveniently has a self-explanatory name. And since users can be in multiple groups (for example, Cal), I want a unique list. To rid the list of duplicates, I used PowerShell’s Sort-Object cmdlet with its -Unique option.

Now for the great reveal: my one line of PS code that lists the true users who are underlying a given AD Group, in this case Acme-VIPs:

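The line itself lived in a screenshot; here’s a hedged reconstruction based on the description above (cmdlet and field names from the PowerView of that era):

Get-NetGroupMember -GroupName 'Acme-VIPs' -Recurse | Where-Object { -not $_.IsGroup } | Sort-Object -Property MemberName -Unique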

This is an amazing line of PowerShell for pen testers (and hackers as well), allowing them to quickly see which users are worth going after.

Thank you Will Schroeder for this PowerView miracle!

Commercial Break

It’s a good time to step back, take a deep breath, and look at the big picture. If you — the IT security or admin team — don’t do the work of minimizing who has access to a directory, the hackers will effectively do it for you. I’ve just shown that with PowerView, they have the tools to make this happen.

Of course, you bring in pen testers to discover these permission gaps and other security holes before the hackers.

Or there is another possibility.

Our blog’s generous sponsor, Varonis Systems, has been making beautifully crafted data access and governance solutions since Yaki Faitelson and Ohad Korkus set up shop in 2004. Their DatAdvantage solution has been helping IT admins and security pros find the underlying users who have access to files and directories.

Varonis: For over ten years, they’ve been saving IT from writing complicated breadth-first-search scripts!

Taking the Derivative of the Admin

Back to our show.

Two blog posts ago, I began to show how PowerView can help pen testers hop around the network. I didn’t go into much detail.

Now for the details.

A few highly evolved AD pen testers, including Justin Warner, Andy Robbins  and Will Schroeder worked out the concept of “derivative admin”, which is a more efficient way to move laterally.

Their exploit hinges on two facts of life in AD environments. One, many companies have grown complex AD group structures, and they often lose track of who’s in which group.

Second, they configure domain-level groups to be local administrators of user workstations or servers. This is a smart way to centralize local administration of Windows machines without requiring the local administrator to be a domain-level admin.

For example, I set up special AD groups Acme-Server1, Acme-Server2, and Acme-Server3 that are divided up among the Acme IT admin team members — Cal, Meg, Rodger, Lara, and Camille.

In my simple Acme network, I assigned these AD groups to Salsa (Acme-Server1), Avocado (Acme-Server3), and Enchilada (Acme-Server2) and placed them under the local Administrators group (using lusrmgr.msc).
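
As a side note: if you’d rather script that last step than click through lusrmgr.msc, a one-liner along these lines, run on each server, should do the same job (assuming ACME is the domain’s NetBIOS name; that detail is my assumption, not part of the original setup):

net localgroup Administrators ACME\Acme-Server1 /add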

In the real world, IT can deploy many AD groups to segment the Windows machines in large corporate environments — it’s a good way to limit the risk if an admin credential is stolen.

In my Acme environment, Cal, who’s a member of Acme-Server1, uses his ordinary domain user account to log into Salsa and then gains admin privileges to do power-user-level work.

By using this approach, though, corporate IT may have created a trap for themselves.

How?

There’s a PowerView command called Get-NetLocalGroup that discovers these local admins on a machine-by-machine basis.

[Image: get-netlocalgroup]
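
In script form, the same query looks roughly like this. The field names are an assumption: Server, AccountName, and IsGroup match the PowerView build I used, but may differ in yours:

# Enumerate the local Administrators group on each Acme machine
'Salsa', 'Avocado', 'Enchilada' | ForEach-Object {
    Get-NetLocalGroup -ComputerName $_
} | Select-Object Server, AccountName, IsGroup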

Got that?

Get-NetLocalGroup effectively tells you that specific groups and users are tied to specific machines, and these users are power users!

So as a smart hacker or pen tester, you can try something like the following as a lateral move strategy. Use Get-NetLocalGroup to find the groups that have local admin access on the current machine. Then do the same for other servers in the neighborhood to find those machines that share the same groups.

You can dump the hashes of users in the local admin group of the machine you’ve landed on and then freely jump to any machine that Get-NetLocalGroup tells you has the same domain groups!

So once I dump and pass the hash of Cal, I can hop to any machine that uses Acme-Server1 as a local admin group.

By the way, how do you figure out definitively all the admin users that belong to Acme-Server1?

Answer: use the one-line script that I came up with above that does the drill-down and apply it to the results of Get-NetLocalGroup.
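
Here’s a sketch of what that combination might look like, under the same field-name assumptions as before (in my PowerView build, AccountName comes back as DOMAIN\name, so I strip the domain prefix):

# Find every real user behind the domain groups serving as local admins on Salsa
Get-NetLocalGroup -ComputerName Salsa |
    Where-Object { $_.IsGroup -and $_.IsDomain } |
    ForEach-Object {
        $group = ($_.AccountName -split '\\')[-1]   # drop the DOMAIN\ prefix
        Get-NetGroupMember -GroupName $group -Recurse
    } |
    Where-Object { -not $_.IsGroup } |
    Sort-Object MemberName -Unique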

And, finally, where does derived or derivative admin come into play?

If you’re really clever, you might make the safe assumption that IT occasionally puts the same user in more than one admin group.

As a pen tester, this means you may not be restricted to only the machines that the users in the local admin domain group of your current server have access to!

To make this point, I’ve placed Cal in Acme-Server1 and Acme-Server3, and Meg in Acme-Server3 and Acme-Server2.

[Image: acme-graph]

Lateral movement by exploiting hidden connections in the Acme network.

If you’re following along at home, that means I can use Cal to hop from Salsa to Avocado. On Avocado, I use Meg’s credentials to then jump from Avocado to Enchilada.

On the surface it appears that my teeny three-machine network was segmented with three AD groups, but in fact there were hidden connections — Cal and Meg — that broke through these surface divisions.

So Cal in Acme-Server1 can get to an Acme-Server3 machine, and is ultimately considered a derivative admin of Enchilada!

Neat, right?

If you’re thinking in terms of connections, rather than lists, you’ll start seeing this as a graph search problem that is very similar in nature to what I presented in the last post.

This time, though, you’ll have to add the server names into the graph along with the users. In our make-believe scenario, I’ll have adjacency lists that tell me that Salsa is connected to Cal; Avocado is connected to Cal, Meg, Lara, and Rodger; and Enchilada is connected to Meg and Camille.

I’ve given you enough clues to work out the PowerView and PowerShell code for the derivative admin graph code, which I’ll show next time.
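
In the meantime, here’s a minimal toy sketch of the graph idea (not the full PowerView version) with the Acme adjacency lists hard-coded. Two machines are linked whenever they share a local admin user:

# Machines are nodes; an edge exists where two machines share a local admin
$admins = @{
    'Salsa'     = @('Cal')
    'Avocado'   = @('Cal', 'Meg', 'Lara', 'Rodger')
    'Enchilada' = @('Meg', 'Camille')
}

function Find-HopPath($from, $to) {
    # Breadth-first search over machines, just like the group closure earlier
    $queue = New-Object System.Collections.Queue
    $queue.Enqueue(@($from))
    $visited = @($from)
    while ($queue.Count -gt 0) {
        $path = $queue.Dequeue()
        $node = $path[-1]
        if ($node -eq $to) { return $path -join ' -> ' }
        foreach ($next in $admins.Keys) {
            # Shared admins are the hidden connections between machines
            $shared = $admins[$node] | Where-Object { $admins[$next] -contains $_ }
            if ($next -ne $node -and $visited -notcontains $next -and $shared) {
                $visited += $next
                $queue.Enqueue($path + $next)
            }
        }
    }
}

Find-HopPath 'Salsa' 'Enchilada'    # Salsa -> Avocado -> Enchilada

The real version would build the $admins table from Get-NetLocalGroup output rather than hard-coding it.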

As you might imagine, there can be lots of paths through this graph from one machine to another. There is a cool idea, though, that helps make this problem easier.

In the meantime, if you want to cheat a little to see how the pros worked this out, check out Andy Robbins’ code.

Continue reading the next post in "Pen Testing Active Directory Environments"

How to set up an SPF record to prevent spam and spear phishing

Some things go together like peanut butter and jelly: delicious, delightful and a good alternative to my dad’s “Thai-Italian Fusion” dinner experiments as a kid.

When other things are combined, it can be terrifying: like SPF records and spear phishing.

While the nuances of something as seemingly mundane as SPF DNS records can seem like a dry, boring topic, you may be able to get the executives in your organization to pay attention, as they are the most likely targets of spear-phishing attacks.

SPF records not only keep your C-suite safe, they do so much more. Like what, you say? Here’s just the tip of the iceberg of the magnificent benefits of SPF records:

  • Prevent breaches
  • Are cheap (free!) to set up
  • Prevent the bad PR that comes from your domain being used for spam
  • Provide overall benefits to organizational identification

With this in mind, let’s dig into some more of the how and why of these incredibly useful DNS records.

What is an SPF record?

The Sender Policy Framework (SPF) is an anti-spam system built on top of the existing DNS and email infrastructure.

Spammers were impersonating domains to make offers look like they were coming from Amazon or other reputable places, but when you clicked through they’d steal your credit card and run up a bill at the local Chuck E Cheese (which is where I presume mob members go to eat).

What does an SPF record do?

An SPF record defines which IP addresses are allowed to send email on behalf of a particular domain. This is trickier than it sounds, as many companies use multiple email service providers for different purposes.

Common uses:

  • Transactional emails from applications
  • Internal notifications
  • Internal email
  • External email
  • PR/Marketing emails
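
To preview where this is going, a domain juggling several of these services might publish a single record along the following lines (the pieces are explained in the sections below; the IP range and the include domain are made-up placeholders):

v=spf1 ip4:203.0.113.0/24 mx include:_spf.examplesender.com ~all

In plain English: our corporate IP block and our own mail servers may send, our third-party sender is vouched for by its own published record, and anything else is treated as suspect.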

Further complicating the situation is that while a company might have a name like SafeEmailSender, there is nothing stopping them from having an email sending domain like wookie-fighter.com.

What does an SPF record prevent?

Having strict SPF rules allows you to control who can send email on behalf of your domain. A good way to think of this is in reverse: who would gain by sending email on behalf of your domain?

What is phishing?

Phishing is where a con artist sends out mass emails that appear to be from a legitimate source. Most often impersonated are banks, credit card companies and money-handling corporations (like PayPal).

From the point of view of the phisher, they would like to appear as much as possible like the company they are pretending to be. A key aspect of this is making their email appear to come from the genuine source, and definitively not from my clueless neighbor’s malware-riddled Windows XP box.

In recent years, data breaches have served as a prime resource for phishers, who can craft more convincing emails now that they have more details about their targets.

What is spear phishing?

Spear phishing is similar in intent to standard phishing: trick people into thinking a fake email message is legitimate. What differs is the audience.

With spear phishing it’s an audience of one.

A canonical example of this is the February 2016 spear phishing attack on a Snapchat payroll employee:

[Image: snapchat]

http://money.cnn.com/2016/02/29/technology/snapchat-phishing-scam/

What’s the difference between an SPF record and an SPF rule?

All DNS entries are “records”; most typically, a domain has A and CNAME records for its website and some MX records to direct where email traffic should go.

An SPF record is what holds the rule. The mere presence of an SPF record doesn’t protect anything. It’s like a padlock left unclasped: it could protect something, but whether it actually does is another matter.

What type of DNS record is an SPF record?

If you thought that the people who invented DNS were smart, you are correct. What is somewhat surprising, though, is that they were also wise. Wise enough to know that while their DNS system could scale (with a few bumps along the way) from a dozen computers to the millions online today, there would be new, unexpected uses for DNS, and that there should be an option to handle them. Thus the TXT record.

TXT (text) records are used for all sorts of interesting DNS purposes, like proving that you own a domain for SSL issuance, up to and including ASCII art self-portraits:

https://isc.sans.edu/forums/diary/Odd+DNS+TXT+Record+Anybody+Seen+This+Before/20283/

[Image: ascii-selfie]

So, it’s no surprise that when new functionality was needed for the Sender Policy Framework, the tool of choice was DNS TXT records.

While this historical context is somewhat interesting (come on, a guy put a selfie in a DNS record; that deserves some praise), on a more practical note it will also save you from fruitlessly looking for an “SPF DNS Record Type” in the dropdown of your preferred DNS service. You’d choose TXT and enter in the rules.

What are the components of an SPF record?

There are two primary components of an SPF record:

Mechanisms: What is being matched.

Qualifiers: What action should be taken if the mechanism is matched.

What is an SPF Mechanism?

An SPF mechanism is just a group of IP addresses. The nuances of exactly how that group is defined differ a bit between the mechanism types, but at the heart of it the question is always the same: does the IP address sending the email belong to one of these groups?

An SPF mechanism doesn’t have an opinion on anything. An IP address matching a mechanism doesn’t automatically mean it’s good or bad, just that it matched and that the qualifier attached to it can now be applied.

What are the SPF Mechanism Types?

The mechanism types are:

DIRECT IP/IP MECHANISMS
Does the client IP match an address in this range?

ip4 and ip6

DNS RECORD MECHANISMS
Does the client IP match an IP address that one of these other record types for the domain resolves to?

a, mx, and ptr

DOMAIN MECHANISMS
Does the client IP address match one of the SPF rules at this OTHER domain? You typically see these when using external email-sending services like marketing automation suites and transactional email systems.

include and exists

CATCH-ALL MECHANISM
Well, the client IP address didn’t match any of the other rules; this mechanism matches everything that’s left.

all

What are the SPF Qualifier Types?

There are four SPF qualifier types that act upon the SPF Mechanisms.

+ If the client IP matches the mechanism (IP matching group) that follows, it is allowed to send email for this domain.

Example: v=spf1 +a

This example means “If the IP address that any DNS A record for this domain resolves to matches the client IP address, then it is allowed to send email for this domain.”

- If the client IP matches the mechanism that follows, it is NOT allowed to send email

~ If the client IP matches the mechanism that follows, it is allowed to send email, but it is marked as potentially suspicious. The SoftFail qualifier is often used when first implementing SPF rules, as you’re less likely to accidentally mark all legitimate email emanating from your domain as spam.

In production, the final qualifier+mechanism pair is typically ~all, which lets the earlier rules match positively while soft-failing everything else.

? Neutral – pass but don’t positively or negatively identify.

Other than “+”, which definitively marks an email as properly coming from your domain, the other qualifiers can be thought of as “hints” that an inbound email server can use in its spam calculations:

+ This is our email
? Maybe our email?
~ Pretty sure not our email
- Really not our email
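
Putting qualifiers and mechanisms together, here’s a walkthrough of a hypothetical record (placeholder values again):

v=spf1 ip4:203.0.113.10 include:_spf.examplesender.com ~all

Mail arriving from 203.0.113.10 matches the ip4 mechanism and passes; when no qualifier is written, “+” is the default. Mail from an IP authorized by the included domain’s own SPF record also passes. Everything else falls through to ~all and is soft-failed.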

What’s the best practice method of adding a new SPF record into your DNS Records?

A key aspect of DNS is properly manipulating Time To Live (TTL) settings. Please check out our Definitive Guide to DNS TTL Settings for the optimum method of adding and modifying DNS records.

What order should SPF mechanisms be listed?

SPF records are evaluated left to right within the record. Matching a mechanism group immediately invokes the qualifier action and no further rules are matched.

In general, you should list your direct IP mechanisms first, then your domain mechanisms, then your includes, and finally your all mechanism. This ordering roughly tracks how long each rule takes to evaluate: direct IP comparisons are cheap, while includes trigger additional DNS lookups.
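
Following that advice, an ordered record might look like this (placeholder values once more):

v=spf1 ip4:203.0.113.0/24 a mx include:_spf.examplesender.com ~all

The cheap ip4 check comes first, the a and mx lookups next, the include (an extra DNS fetch of another domain’s record) after that, and the catch-all last.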

What evaluates SPF?

It’s important to keep in mind that the receiving email servers, wherever you are sending email, are ultimately what read your SPF record. So if you send an email to abigail@example.com, it will be the example.com mail server that reads the SPF record for your sending domain, compares the sending IP address to its rules, and makes a determination about whether or not the email should be delivered to its intended recipient.

Why use SPF and not another email security standard?

Spam and impersonation have been problems on the Internet since it was invented, so why SPF and not one of the many different standards that have come before?

In contrast to previous security solutions, SPF is reasonably fast to evaluate and isn’t dependent upon the actual content of the email being received. An email with a 15MB video attached can be evaluated as quickly as a one-sentence status update, since only the headers of the email are examined. Many previous standards relied upon cryptographically signing the bodies of emails, making them unwieldy at best, and a potential vector for denial-of-service attacks at worst.

How do I look up the SPF records for my domain?

On OSX and Linux systems you can use the dig command to list the TXT records for your domain, among which your SPF record (if any) will appear.

dig -t txt example.com +short

On Windows you can use the nslookup utility:

nslookup -q=TXT example.com

I recommend looking up the SPF entry for microsoft.com, as you can very easily pick out the different SPF domains they include, as well as their permission for hotmail.com to send email on their behalf.
