Category Archives: IT Pros

Practical PowerShell for IT Security, Part III: Classification on a Budget


Last time, with a few lines of PowerShell code, I launched an entire new software category, File Access Analytics (FAA). My 15 minutes of fame are almost over, but I was able to make the point that PowerShell has practical uses in file event monitoring. In this post, I’ll finish some old business with my FAA tool and then take up PowerShell-style data classification.

Event-Driven Analytics

To refresh memories, I used the Register-WmiEvent cmdlet in my FAA script to watch for file access events in a folder. I also created a mythical baseline of event rates to compare against. (For wonky types, there’s a whole area of measuring these kinds of things — hits to web sites, calls coming into a call center, traffic at espresso bars — that was started by this fellow.)

When file access counts rise above normal limits, I trigger a software-created event that gets picked up by another part of the code and pops up the FAA “dashboard”.
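The burst test itself is language-agnostic, so here is the same threshold check sketched in Python (my own illustration, not part of the original script; the baseline mean of 8 is a made-up number standing in for one of the script's mythical day/time slots). The idea: for a Poisson-style arrival process, the standard deviation is roughly the square root of the mean, and the script flags counts more than four of those above the baseline.

```python
import math

def is_burst(count, baseline_mean):
    """Flag a burst when the access count exceeds the baseline mean
    by more than four standard deviations (sigma ~ sqrt(mean) for a
    Poisson-style arrival process). Mirrors the script's
    sfactor = 2*sqrt(baseline); count > baseline + 2*sfactor test."""
    sfactor = 2 * math.sqrt(baseline_mean)
    return count > baseline_mean + 2 * sfactor

# With a baseline mean of 8 accesses, the threshold is ~19.3
print(is_burst(10, 8))   # False: within normal variation
print(is_burst(25, 8))   # True: well above the threshold
```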

This triggering is performed by the New-Event cmdlet, which allows you to send an event, along with other information, to a receiver. To read the event, there’s the Wait-Event cmdlet. The receiving part can even be in another script, as long as both event cmdlets use the same SourceIdentifier — Bursts, in my case.

These are all operating systems 101 ideas: effectively, PowerShell provides a simple message passing system. Pretty neat considering we are using what is, after all, a bleepin’ command language.

Anyway, the full code is presented below for your amusement.

$cur = Get-Date
$Global:baseline = @{"Monday" = @(3,8,5); "Tuesday" = @(4,10,7); "Wednesday" = @(4,4,4); "Thursday" = @(7,12,4); "Friday" = @(5,4,6); "Saturday" = @(2,1,1); "Sunday" = @(2,4,2)}
$Global:Count =    0
$Global:burst =    $false
$Global:evarray =  New-Object System.Collections.ArrayList

$action = {
    $i = [math]::floor((Get-Date).Hour/8)   # index of the current 8-hour slot
    $d = (Get-Date).DayOfWeek               # key into the baseline table

    #event auditing!
    $rawtime  = $EventArgs.NewEvent.TargetInstance.LastAccessed.Substring(0,12)
    $filename = $EventArgs.NewEvent.TargetInstance.Name
    $etime    = [datetime]::ParseExact($rawtime,"yyyyMMddHHmm",$null)
    $msg = "$($etime): Access of file $($filename)"
    $msg | Out-File C:\Users\bob\Documents\events.log -Append

    $Global:Count++
    [void]$Global:evarray.Add(@($filename,$etime))

    if (!$Global:burst) {
        # start a new 15-minute observation window
        $Global:start = $etime
        $Global:burst = $true
    }
    else {
        if ($Global:start.AddMinutes(15) -gt $etime) {
            #File behavior analytics
            $sfactor = 2*[math]::sqrt( $Global:baseline["$($d)"][$i] )
            if ($Global:Count -gt $Global:baseline["$($d)"][$i] + 2*$sfactor) {
                "$($etime): Burst of $($Global:Count) accesses" | Out-File C:\Users\bob\Documents\events.log -Append
                $Global:burst = $false
                $Global:Count = 0
                New-Event -SourceIdentifier Bursts -MessageData "We're in Trouble" -EventArguments $Global:evarray
                $Global:evarray = [System.Collections.ArrayList] @()
            }
        }
        else { $Global:burst = $false; $Global:Count = 0; $Global:evarray = [System.Collections.ArrayList] @() }
    }
}

Register-WmiEvent -Query "SELECT * FROM __InstanceModificationEvent WITHIN 5 WHERE TargetInstance ISA 'CIM_DataFile' and TargetInstance.Path = '\\Users\\bob\\' and targetInstance.Drive = 'C:' and (targetInstance.Extension = 'txt' or targetInstance.Extension = 'doc' or targetInstance.Extension = 'rtf') and targetInstance.LastAccessed > '$($cur)'" -SourceIdentifier "Accessor" -Action $action

While ($true) {
    $args = Wait-Event -SourceIdentifier Bursts   # wait on Bursts event
    Remove-Event -SourceIdentifier Bursts         # remove it from the queue
    $outarray = @()
    foreach ($result in $args.SourceArgs) {
        $obj = New-Object System.Object
        $obj | Add-Member -type NoteProperty -Name File -Value $result[0]
        $obj | Add-Member -type NoteProperty -Name Time -Value $result[1]
        $outarray += $obj
    }
    $outarray | Out-GridView -Title "FAA Dashboard: Burst Data"
}

Please don’t pound your laptop as you look through it.

I’m aware that I continue to pop up separate grid views, and there are better ways to handle the graphics. With PowerShell, you do have access to the full .Net framework, so you could create and access objects — listboxes, charts, etc. — and then update them as needed. I’ll leave that for now as a homework assignment.

Classification is Very Important in Data Security

Let’s put my file event monitoring on the back burner, as we take up the topic of PowerShell and data classification.

At Varonis, we preach the gospel of “knowing your data” for good reason. In order to work out a useful data security program, one of the first steps is to learn where your critical or sensitive data is located — credit card numbers, consumer addresses, sensitive legal documents, proprietary code.

The goal, of course, is to protect the company’s digital treasure, but you first have to identify it. By the way, this is not just a good idea: many data security laws and regulations (for example, HIPAA), as well as industry data standards (PCI DSS), require asset identification as part of doing real-world risk assessment.

PowerShell should have great potential for use in data classification applications. Can PS access and read files directly? Check. Can it perform pattern matching on text? Check. Can it do this efficiently on a somewhat large scale? Check.

No, the PowerShell classification script I eventually came up with will not replace the Varonis Data Classification Framework. But for the scenario I had in mind – an IT admin who needs to watch over an especially sensitive folder – my PowerShell effort gets more than a passing grade, say a B+!

WQL and CIM_DataFile

Let’s now return to WQL, which I referenced in the first post on event monitoring.

Just as I used this query language to look at file events in a directory, I can tweak the script to retrieve all the files in a specific directory. As before I use the CIM_DataFile class, but this time my query is directed at the folder itself, not the events associated with it.

Get-WmiObject -Query "SELECT * From CIM_DataFile where Path = '\\Users\\bob\\' and Drive = 'C:' and (Extension = 'txt' or Extension = 'doc' or Extension = 'rtf')"

Terrific! This line of code will output an array of file objects, whose Name property holds the full path name of each file.

To read the contents of each file into a variable, PowerShell conveniently provides the Get-Content cmdlet. Thank you Microsoft.

I need one more ingredient for my script, which is pattern matching. Not surprisingly, PowerShell has a regular expression engine. For my purposes it’s a little bit of overkill, but it certainly saved me time.

In talking to security pros, they’ve often told me that companies should explicitly mark documents or presentations containing proprietary or sensitive information with an appropriate footer — say, Secret or Confidential. It’s a good practice, and of course it helps in the data classification process.

In my script, I created a PowerShell hashtable of possible marker texts with an associated regular expression to match it. For documents that aren’t explicitly marked this way, I also added special project names — in my case, snowflake — that would also get scanned. And for kicks, I added a regular expression for social security numbers.

The code block I used to do the reading and pattern matching is listed below. The file name to read and scan is passed in as a parameter.

$Action = {

    Param (
        [string] $Name
    )

    $classify = @{"Top Secret"=[regex]'[tT]op [sS]ecret'; "Sensitive"=[regex]'([Cc]onfidential)|([sS]nowflake)'; "Numbers"=[regex]'[0-9]{3}-[0-9]{2}-[0-9]{4}' }   # SSN-style 3-2-4 digit pattern

    $data = Get-Content $Name

    $cnts = @()

    foreach ($key in $classify.Keys) {
        $m = $classify[$key].matches($data)   # run each marker pattern over the file contents
        if ($m.Count -gt 0) {
            $cnts += @($key,$m.Count)
        }
    }
    $cnts   # return the matched categories and their counts
}


Magnificent Multi-Threading

I could have just simplified my project by taking the above code and adding some glue, and then running the results through the Out-GridView cmdlet.

But this being the Varonis IOS blog, we never, ever do anything nice and easy.

There is a point I’m trying to make. Even for a single folder in a corporate file system, there can be hundreds, perhaps even a few thousand files.

Do you really want to wait around while the script is serially reading each file?

Of course not!

Large-scale file I/O applications, like what we’re doing with classification, are well suited to multi-threading — you can launch lots of file activity in parallel and thereby significantly reduce the delay in seeing results.
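For comparison only (the post itself uses PowerShell Runspaces, not Python), here is the same fan-out pattern with a generic thread pool: hand each file to a worker and collect the results as they finish. The classify function here is a hypothetical stub, not the real scanner.

```python
from concurrent.futures import ThreadPoolExecutor

def classify(path):
    # hypothetical stand-in for the per-file scan; the real version
    # reads the file and runs the marker regexes over its contents
    return (path, path.endswith(".txt"))

paths = ["a.txt", "b.doc", "c.rtf", "d.txt"]
with ThreadPoolExecutor(max_workers=5) as pool:   # 5 workers, like the Runspace pool
    results = list(pool.map(classify, paths))      # map preserves input order

print(results)
```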

PowerShell does have a usable (if clunky) background processing system known as Jobs. But it also boasts an impressive and sleek multi-threading capability known as Runspaces.

After playing with it, and borrowing code from a few Runspaces’ pioneers, I am impressed.

Runspaces handles all the messy mechanics of synchronization and concurrency. It’s not something you can grok quickly, and even Microsoft’s amazing Scripting Guys are still working out their understanding of this multi-threading system.

In any case, I went boldly ahead and used Runspaces to do my file reads in parallel. Below is a bit of the code to launch the threads: for each file in the directory I create a thread that runs the above script block, which returns matching patterns in an array.

$RunspacePool = [RunspaceFactory]::CreateRunspacePool(1, 5)   # between 1 and 5 concurrent threads
$RunspacePool.Open()

$Tasks = @()

foreach ($item in $list) {

   $Task = [powershell]::Create().AddScript($Action).AddArgument($item.Name)

   $Task.RunspacePool = $RunspacePool

   $status = $Task.BeginInvoke()   # launch the thread asynchronously

   $Tasks += @($status,$Task,$item.Name)
}

Let’s take a deep breath—we’ve covered a lot.

In the next post, I’ll present the full script, and discuss some of the (painful) details.  In the meantime, after seeding some files with marker text, I produced the following output with Out-GridView:

Content classification on the cheap!

Meanwhile, another idea to think about is how to connect the two scripts: the file activity monitoring one and the classification script partially presented in this post.

After all, the classification script should communicate what’s worth monitoring to the file activity script, and the activity script could in theory tell the classification script when a new file is created so that it could classify it—incremental scanning in other words.

Sounds like I’m suggesting, dare I say it, a PowerShell-based security monitoring platform. We’ll start working out how this can be done the next time as well.

Practical PowerShell for IT Security, Part I: File Event Monitoring


Back when I was writing the ultimate penetration testing series to help humankind deal with hackers, I came across some interesting PowerShell cmdlets and techniques. I made the remarkable discovery that PowerShell is a security tool in its own right. Sounds to me like it’s the right time to start another series of PowerShell posts.

We’ll take the view in these posts that while PowerShell won’t replace purpose-built security platforms — Varonis can breathe easier now — it will help IT staff monitor for threats and perform other security functions. And also give IT folks an appreciation of the miracles that are accomplished by real security platforms, like our own Metadata Framework. PowerShell can do interesting security work on a small scale, but it is in no way equipped to take on an entire infrastructure.

It’s a Big Event

To begin, let’s explore using PowerShell as a system monitoring tool to watch files, processes, and users.

Before you start cursing into your browsers, I’m well aware that any operating system command language can be used to monitor system-level happenings. A junior IT admin can quickly put together, say, a Linux shell script to poll a directory to see if a file has been updated or retrieve a list of running processes to learn if a non-standard process has popped up.

I ain’t talking about that.

PowerShell instead gives you direct event-driven monitoring based on the operating system’s access to low-level changes. It’s the equivalent of getting a push notification on a news web page alerting you to a breaking story rather than having to manually refresh the page.

In this scenario, you’re not in an endless PowerShell loop, burning up CPU cycles, but instead the script is only notified or activated when the event — a file is modified or a new user logs in — actually occurs. It’s a far more efficient way to do security monitoring than by brute-force polling.

Further down below, I’ll explain how this is accomplished.

But first, anyone who’s ever taken, as I have, a basic “Operating Systems for Poets” course knows that there’s a demarcation between user-level and system-level processes.

The operating system, whether Linux or Windows, does the low-level handling of device actions – anything from disk reads, to packets being received — and hides this from garden variety apps that we run from our desktop.

So if you launch your favorite word processing app and view the first page of a document, the whole operation appears as a smooth, synchronous activity. But in reality there are all kinds of time-sensitive events — disk seeks, disk blocks being read, characters sent to the screen, etc. — that are happening under the hood and deliberately hidden from us. Thank you Bill Gates!

In the old days, only hard-core system engineers knew about this low-level event processing. But as we’ll soon see, PowerShell scripters can now share in the joy as well.

An OS Instrumentation Language

This brings us to Windows Management Instrumentation (WMI), which is a Microsoft effort to provide a consistent view of operating system objects.

WMI is itself part of a broader industry effort, known as Web-Based Enterprise Management (WBEM), to standardize the information pulled out of routers, switches, and storage arrays, as well as operating systems.

So what does WMI actually look and feel like?

For our purposes, it’s really a query language, like SQL, but instead of accessing rows of vanilla database columns, it presents complex OS information organized as a WMI class hierarchy. Not too surprisingly, the query language is known as, wait for it, WQL.

Windows generously provides a utility, wbemtest, that lets you play with WQL. In the graphic below, you can see the results of my querying the Win32_Process object, which holds information on the current processes running.

WQL on training wheels with wbemtest.

Effectively, it’s the programmatic equivalent of running the Windows Task Manager. Impressive, no? If you want to know more about WQL, download Ravi Chaganti’s wondrous ebook on the subject.

PowerShell and the Register-WmiEvent Cmdlet

But there’s more! You can take off the training wheels provided by wbemtest, and try these queries directly in PowerShell.

PowerShell’s Get-WmiObject is the appropriate cmdlet for this task, and it lets you feed in the WQL query directly as a parameter.

The graphic below shows the first few results from running select Name, ProcessId, CommandLine from Win32_Process on my AWS test environment.

gwmi is the PowerShell alias for Get-WmiObject.

The output is a bit wonky since it’s showing some hidden properties having to do with underlying class bookkeeping. The cmdlet also spews out a huge list that speeds by on my console.

For a better Win32_Process experience, I piped the output from the query into Out-GridView, a neat PS cmdlet that formats the data as a beautiful GUI-based table.

Not too shabby for a line of PowerShell code. But WMI does more than allow you to query these OS objects.

As I mentioned earlier, it gives you access to relevant events on the objects themselves. In WMI, these events are broadly broken into three types: creation, modification, and deletion.

Prior to PowerShell 2.0, you had to access these events in a clunky way: creating lots of different objects and then being forced to synchronously ‘hang’, so it wasn’t true asynchronous event-handling. If you want to know more, read this MS Technet post for the ugly details.

Now in PS 2.0 with the Register-WmiEvent cmdlet, we have a far prettier way to react to all kinds of events. In geek-speak, I can register a callback that fires when the event occurs.

Let’s go back to my mythical (and now famous) Acme Company, whose IT infrastructure is set up on my AWS environment.

Let’s say Bob, the sys admin, notices every so often that he’s running low on file space on the Salsa server. He suspects that Ted Bloatly, Acme’s CEO, is downloading huge files, likely audio files, into one of Bob’s directories and then moving them into Ted’s own server on Taco.

Bob wants to set a trap: when a large file is created in his home directory, he’ll be notified on his console.

To accomplish this, he’ll need to work with the CIM_DataFile class.  Instead of accessing processes, as we did above, Bob uses this class to connect with the underlying file metadata.

CIM_DataFile object can be accessed directly in PowerShell.

Playing the part of Bob, I created the following Register-WmiEvent script, which will notify the console when a very large file is created in the home directory.

Register-WmiEvent -Query "SELECT * FROM __InstanceModificationEvent WITHIN 5 WHERE TargetInstance isa 'CIM_DataFile' and TargetInstance.FileSize > 2000000 and TargetInstance.Path = '\\Users\\bob\\' and targetInstance.Drive = 'C:' " -SourceIdentifier "Accessor3" -Action { Write-Host "Large file" $EventArgs.NewEvent.TargetInstance.Name "was created" }


Running this script from the Salsa console launches the Register-WmiEvent command in the background, assigns it a job number, and then interacts with the console only when the event is triggered.

In the next post, I’ll go into more detail about what I’ve done here. Effectively, I’m using WQL to query the CIM_DataFile object — particularly anything in the \Users\bob directory that’s over 2 million bytes — and setting up a notification when a new file is created that fits these criteria — that’s where __InstanceModificationEvent comes into play.

Anyway, in my Bob role I launched the script from the PS command line, and then, putting on my Ted Bloatly hat, I copied a large mp4 into Bob’s directory. You can see the results below.

We now know that Bloatly is a fan of Melody Gardot. Who would have thunk it?

You begin to see some of the exciting possibilities for PowerShell as a tool to detect threat patterns and perhaps do a little behavior analytics.

We’ll be exploring these ideas in the next post.

Binge Read Our Pen Testing Active Directory Series


With winter storm Niko now on its extended road trip, it’s not too late, at least here on the East Coast, to make a few snow day plans. Sure, you can spend part of Thursday catching up on Black Mirror while scarfing down this slow cooker pork BBQ pizza. However, I have a healthier suggestion.

Why not binge on our amazing Pen Testing Active Directory Environments blog posts?

You’ve read parts of it, or — spoiler alert — perhaps heard about the exciting conclusion involving a depth-first-search of the derivative admin graph. But now’s your chance to totally immerse yourself and come back to work better informed about the Active Directory dark knowledge that hackers have known about for years.

And may we recommend eating these healthy soy-roasted kale chips while clicking below?

Episode 1: Crackmapexec and PowerView

Episode 2: Getting Stuff Done With PowerView

Episode 3: Chasing Power Users

Episode 4: Graph Fun

Episode 5: Admins and Graphs

Episode 6: The Final Case

Five Ways for a CDO to Drive Growth, Improve Efficiencies, and Manage Risk


We’ve already written about the growing role of the chief data officer (CDO) and their challenging task of leveraging data science to drive profits. But the job of a CDO is not just about moving the profit meter.

It’s less-widely known that they’re also tasked with meeting three other business objectives: finding ways to drive overall growth, improve efficiencies and manage risk.

Why? All business activities and processes benefit from these three objectives.

Luckily, we can turn to Morgan Stanley’s CDO, Jeffrey McMillan, for some guidance. I heard him speak at a recent CDO Summit in New York City, where he dispensed sage advice for both practicing and aspiring CDOs.

McMillan suggested these five analytics and data strategy processes:

1. Make sure your data science is aligned with your business strategy.

Yes, good data scientists are hard to find. But McMillan says that rather than spending your energies finding a good data scientist, make sure that your scientist can also think like a sales or business person.

He says, “I would much rather have a very mediocre data scientist, who really understood the business than the reverse. Because the reverse doesn’t help me at all. It’s not about the algorithm, it’s about understanding of the business.”

Once you have your data scientist in place, McMillan is adamant about ensuring this resource is honored and respected.

He explains, “If no one is actually going to do anything with what you recommend doing, they don’t get more resources. There are a lot of things we learn about the world that are interesting that don’t actually change our behaviors. And we need to focus on things that changes our behaviors.”

2. Empower the end users to consume data visualizations

According to McMillan, it’s more important to get a little bit of data into the hands of many than a lot of data into the hands of a few. Why? It’s vital to bring data to the decision makers.

“They don’t always want your algorithm,” McMillan says. “They do want information about the business.”

Moreover, his plan for Morgan Stanley to make data accessible to everyone is this: “Our vision is: in the next five years, I want every single employee in our firm to have access to a data visualization tool. And I want 15-20% of the employees to be able to create their own content using their own data visualization tool.”

3. Create the next-best action framework

McMillan has a process that makes decision making vastly better. He calls it the “next-best action framework.” This system learns, evolves, and adjusts in real time.

He describes the process in the following way:

“Every single thing that a human can do at the office gets ingested into a system. It gets modelled against their own expectation, their historical behaviors, their customer’s behaviors, market conditions, and if you can believe, 400 other factors.

Then, it gets optimized, based on specific needs of the customer and the employee. Out comes a few ideas, which are scored.

We score whether or not we should call a customer about a bounced check versus an opportunity to call them about an opportunity about a golf outing. Then, we watch what the customer does.”

According to McMillan, Morgan Stanley has found success with its next-best action approach, delivering real-time investment advice, at scale, to 16,000 advisers.

4. Leverage digital intelligence

When it comes to artificial intelligence, the real value is in the intelligence. In some ways, McMillan prefers the term digital intelligence.

“We’re digitizing human understanding in a way that creates scale,” notes McMillan. “In the end, the winners aren’t going to be the technology providers. They’re going to be organizations that have the knowledge. If you have knowledge and information that’s differentiated, you will do well in the space over time because someone will have to teach the machine how to start. It just doesn’t learn by itself.”

When you can, remember to keep it simple. McMillan reminds us, “No one cares how hard it is for you to do it.”

5. Take a holistic approach to data management

Finally, McMillan warns that your efforts will fail or significantly under-deliver if you don’t take a holistic approach to managing one of your firm’s most valuable resources – your data. To prioritize, he says, focus on the most critical attributes that drive your key business objectives.


Pen Testing Active Directory Environments, Part VI: The Final Case


If you’ve come this far in the series, I think you’ll agree that security pros have to move beyond checking off lists. The mind of the hacker is all about making connections, planning several steps ahead, and then jumping around the victim’s network in creative ways.

Lateral movement through derivative admins is a good example of this approach. In this concluding post, I’ll finish up a few loose ends from last time and then talk about Active Directory, metadata, security, and what it all means.

Back to the Graph

Derivative admin is one of those very creative ways to view the IT landscape. As pen testers, we look for AD domain groups that have been assigned to local administrator groups, and then discover the domain users those groups share.

The PowerView cmdlet Get-NetLocalGroup provides the raw information, which we then turn into a graph. I started talking about how this can be done. It’s a useful exercise, so let’s complete what we started.

Back in the Acme IT environment, I showed how it was possible to jump from the Salsa server to Avocado and then land in Enchilada. Never thought I’d get hungry writing a sentence about network topology!

With a little thought, you can envision a graph that lets you travel between server nodes and user nodes, where the edges are bi-directional. And in fact, the right approach is to use an undirected graph that can contain cycles, making it the inverse of the directed acyclic graph (DAG) that I discussed earlier in the series.

In plain English, this means that if I put a user node on a server’s adjacency list, I then need to put that server on the user’s adjacency list. Here it is sketched out, using arrows to represent adjacency, for the Salsa-to-Enchilada scenario:

salsa -> cal
cal -> salsa, avocado
avocado -> cal, meg
meg -> avocado, enchilada
enchilada -> meg
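To make the bookkeeping concrete, here is a small Python sketch of the same idea (an illustration only; the actual script builds its adjacency lists in PowerShell inside $Gda): every undirected edge gets recorded on both endpoints’ lists.

```python
def add_edge(graph, a, b):
    """Undirected edge: each node appears on the other's adjacency list."""
    graph.setdefault(a, []).append(b)
    graph.setdefault(b, []).append(a)

# server-to-user admin relationships from the Acme scenario
graph = {}
for server, user in [("salsa", "cal"), ("avocado", "cal"),
                     ("avocado", "meg"), ("enchilada", "meg")]:
    add_edge(graph, server, user)

print(sorted(graph["cal"]))   # ['avocado', 'salsa']
```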

Since I already explained the amazing pipeline based on using Get-NetLocalGroup with the -Recursive option, it’s fairly straightforward to write out the PowerShell (below). My undirected graph is contained in $Gda.


Unlike the DAG I already worked out, where I can only go from root to leaf and never circle back to a node, this graph can contain cycles. So I need to keep track of whether I previously visited a node. To account for this, I created a separate PS array called $visited. When I traverse this graph to find a path, I’ll use $visited to mark nodes I’ve processed.

I ran my script giving it parameters “salsa”, “enchilada”, and “avocado”, and it displays $Gda containing my newly created adjacency lists.


Lost in the Graph

The last piece now is to develop a script to traverse the undirected graph and produce the “breadcrumb” trail.

Similar to the breadth-first-search (BFS) I wrote about to learn whether a user belongs to a domain group, depth-first-search (DFS) is a graph navigation algorithm with one helpful advantage.

DFS is actually the more intuitive node traversal technique. It’s really closer to the way many people deal with finding a destination when they’re lost. As an experienced international traveler, I’ve often used something close to DFS when my local maps proved less than informative.

Let’s say you get to where you think the destination is, realize you’re lost, and then backtrack to the last known point where map and reality are somewhat similar. You then try another path from that point. And then backtrack if you still can’t find that hidden gelato café.

If you’ve exhausted all the paths from that point, you backtrack yet again and try new paths further up the map. You’ll eventually come across the destination spot, or at least get a good tour of the older parts of town.

That’s essentially DFS! The appropriate data structure is a stack that keeps track of where you’ve been. If all the paths from the current top of the stack don’t lead anywhere, you pop the stack and work with the previous node in the path – the backtrack step.

To avoid getting into a loop because of the cycles in the undirected graph, you just mark every node you visit and avoid those that you’ve already visited.

Finally, whatever nodes are already on the stack are your breadcrumb trail — the path to get from your source to your destination.
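Here is that stack-based traversal sketched in Python (again, an illustration of the algorithm rather than the PowerShell script itself), run against adjacency lists for the Acme scenario:

```python
def dfs_path(graph, src, dst):
    """Iterative depth-first search. The stack contents at the moment
    we reach dst are exactly the breadcrumb trail from src."""
    stack, visited = [src], {src}
    while stack:
        node = stack[-1]
        if node == dst:
            return stack[:]          # breadcrumb trail: src ... dst
        # try an unvisited neighbor; an empty list means this path is exhausted
        unvisited = [n for n in graph.get(node, []) if n not in visited]
        if not unvisited:
            stack.pop()              # dead end: backtrack
        else:
            visited.add(unvisited[0])
            stack.append(unvisited[0])
    return None                      # no path exists

acme = {"salsa": ["cal"], "cal": ["salsa", "avocado"],
        "avocado": ["cal", "meg"], "meg": ["avocado", "enchilada"],
        "enchilada": ["meg"]}
print(dfs_path(acme, "salsa", "enchilada"))
# ['salsa', 'cal', 'avocado', 'meg', 'enchilada']
```

Popping the stack is exactly the backtrack step from the lost-traveler analogy, and the visited set is what keeps the cycles in the undirected graph from trapping us in a loop.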

All these ideas are captured in the script below, and you see the results of my running it to find a path between Salsa and Enchilada.


Stacks: it’s what you need when you’re lost.


From Salsa to Enchilada by way of Cal and Meg!

Is this a complete and practical solution?

The answer is no and no. To really finish this, you’d also need to scan the domain for users who are currently logged into the servers. If the ostensibly local admin users whose credentials you want to steal are not online, their hashes are likely not available for passing. You would therefore have to account for this in working out possible paths.

As you might imagine, in most corporate networks, with potentially hundreds of computers and users, the graph can get gnarly very quickly. More importantly, just because you found a path doesn’t necessarily mean it’s the shortest path. In other words, my code above may chug along and find a completely impractical path that involves hopping between, say, twenty or thirty computers. Possible, but not practical.

Fortunately, Andy Robbins worked out far prettier PowerShell code that addresses the above weaknesses in my scripts. Robbins uses PowerView’s Get-NetSession to scan for online users. And he cleverly employs a beloved computer science 101 algorithm, Dijkstra’s Shortest Path, to find the optimal path between two nodes.

Pen Testers as Metadata Detectives and Final Thoughts

Once I stepped back from all this PowerShell and algorithms (and had a few appropriate beverages), the larger picture came into focus.

Thinking like hackers, pen testers know that to crack a network they’ve landed in, they need to work indirectly, because there’s rarely the equivalent of a neon sign pointing to the treasure.

And that’s where metadata helps.

Every piece of information I leveraged in this series of posts is essentially metadata: file ACLs, Active Directory groups and users, system and session information, and other AD information scooped up by PowerView.

The pen tester, unlike the perimeter-based security pro, is incredibly clever at using this metadata to find and exploit security gaps. They’re masters at thinking in terms of connections, moving around the network with the goal of collecting more metadata, and then with a little luck, they can get the goodies.

I was resisting, but you can think of pen testers as digital consulting detectives — Sherlock Holmes, the Benedict Cumberbatch variant, that is, but with, we hope, better social skills.

Here are some final thoughts.

While pen testers offer valuable services, the examples in this series could be accomplished offline by regular folks — IT security admins and analysts.

In other words, the IT group could scoop up the AD information, do the algorithms, and then discover if there are possible paths for both the derivative admin case and the file ACL case from earlier in the series.

The goal for them is to juggle Active Directory users and groups into a configuration that greatly reduces the risk of hackers gaining user credentials.

And ultimately prevent valuable content from being taken, like your corporate IP or millions of your customers’ credit card numbers.


Connecting Your Data Strategy to Analytics: Eight Questions to Ask


Big data has ushered in a new executive role over the past few years. The chief data officer or CDO now joins the C-level club, tasked with leveraging data science to drive the bottom line. According to a recent executive survey, 54% of firms surveyed now report having appointed a CDO.

Taking on the role is one thing; learning how to be successful is another.

“A CDO’s job starts like this: a CEO, CFO or maybe a CMO says, ‘We want our company to be more data driven, and we want to start capitalizing on these new technologies. Go figure out what that means for us.’ And that’s quite often the beginning point for a chief data officer,” CDO Richard Wendell explained to me during an interview.

One way CDOs are approaching this challenge is by connecting the organization’s data strategy with their analytics tools, and then acting on the resulting new idea or information.

Jeffrey McMillan of Morgan Stanley is one such CDO. I heard him speak at a recent CDO Summit in New York City. While his focus is primarily on financial services, McMillan had invaluable, hard-earned wisdom that all CDOs can learn from.

McMillan’s first step to enlightenment came several years ago, when he realized that technology alone was an illusion. He noted, “You’re just spending money on your clusters, you put your project plan together, you’re all excited you got your Hadoop infrastructure cluster in place, your ETL tools, and you have your data governance in place. And you know what, nothing really changes.”

The Magic Remedy? Ask These Eight Questions

#1 – What is your business strategy?

To McMillan, a data strategy doesn’t exist.

He emphasizes that there is only a business strategy, and data and analytics are just tools: “A data strategy isn’t going to generate a single incremental dollar for your business, it’s an enabler. No different than your web strategy or your workflow strategy. It’s just another component to your solution.”

#2 – Have you defined and communicated key objectives throughout your organization?

McMillan advises that if you can’t answer the first two questions, stop and don’t spend more money: “You’re going to be wasting a lot of time, money and resources solving for a problem and you don’t even know what the problem even is.”

#3 – What is the role of data and analytics in driving your strategy?

Sometimes your data analytics is transformative and sometimes, it’s marginal. And if it’s not actually moving the needle, ask, ‘What are your goals and objectives?’; ‘What are you trying to solve here?’; ‘And how does your data and analytics strategy work to do that?’

#4 – Are people really buying in?

McMillan gets it. Getting people to buy in to your idea is going to be difficult.

Your CEO and your COO or the head of your business might say, ‘Yeah, I hear ya, the world is changing but we’ve got limited resources, we’re not sure how to engage.’

And in some cases, they’re threatened by changes.

#5 – What new roles need to be created and how do the existing roles fit into this?

There are going to be new roles that will be generated from the work you do. How does the Chief Data/Analytics Officer fit into the organization?

#6 – What’s already going on within your organization that you can leverage?

There’s going to be extremely valuable work that’ll happen, which will unfortunately never get the exposure it deserves. It’s the age-old complaint from IT: “I only get noticed when something goes wrong!”

McMillan posits, “How do you enable that guy that’s working on the weekend, that’s doing some interesting stuff with python code that’s generating some interesting value? How do you get him or her exposed to an opportunity?”

#7 – Do you have the right data and is it of sufficient quality?

McMillan warns, “Honestly, you’re going to fail if you don’t have data quality.”

#8 – How do you define success?

The best part about analytics is that you can actually define and measure success.

And finally, you should know that McMillan spends 90% of his time focused on questions one and two, because he thinks that “if you don’t have one or two right, everything else is pretty much useless.”

In a future post, we’ll also review how he drives growth, improves efficiency and manages risk.

Pen Testing Active Directory Environments, Part V: Admins and Graphs


If you’ve survived my last blog post, you know that Active Directory group structures can be used as powerful weapons by hackers. Our job as pen testers is to borrow these same techniques — in the form of PowerView — that hackers have known about for years, and then show management where the vulnerabilities live in their systems.

I know I had loads of fun building my AD graph structures. It was even more fun running my breadth-first-search (BFS) script on the graph to quickly tell me which users would let me access a file that I couldn’t open with my current credentials.


The “Top Secret” directory on the Acme Salsa server was off limits with “Bob” credentials but available to anyone in the “Acme-Legal” group. The PowerShell script I wrote helped me navigate the graph and find the underlying users in Acme-Legal.

Closing My Graphs

If you think about it, instead of always having to search the same groups to find the leaf nodes, why not just build a table that has this information pre-loaded?

I’m talking about what’s known in the trade as the transitive closure of a graph. It sounds nerdier than it really needs to be: I’m just finding everything reachable, directly and indirectly, from any of the AD nodes in my graph structure.

I turned to brute-force to solve the closure problem. I simply modified my PowerShell scripts from last time to do a BFS from each node or entry in my lists and then collect everything I’ve visited. My closed graph is now contained in $GroupTC (see below).
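
The original scripts are PowerShell, but the brute-force idea translates cleanly. Here’s a minimal Python sketch of the same approach, run against an invented group table (these names are stand-ins, not the real $GroupTC contents): do a BFS from every node and record everything visited.

```python
from collections import deque

def transitive_closure(adj):
    """Brute-force closure of a directed graph: BFS from every node,
    collecting everything reachable directly or indirectly."""
    closure = {}
    for start in adj:
        visited = set()
        queue = deque(adj.get(start, []))
        while queue:
            node = queue.popleft()
            if node in visited:
                continue
            visited.add(node)
            queue.extend(adj.get(node, []))  # follow the node's own edges
        closure[start] = visited
    return closure

# Invented group graph: each group lists its direct members.
groups = {"Acme-VIPs":  ["Acme-Legal", "Ted"],
          "Acme-Legal": ["Lara", "Cal"],
          "Ted": [], "Lara": [], "Cal": []}
print(transitive_closure(groups)["Acme-VIPs"])
```

This is the quadratic brute-force approach described above; for big directed graphs you’d reach for a topological-sort-based method instead.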



Before you scream into your browsers: yes, there are better ways to do this, especially for directed graphs, and I know about the node-sorting approach. The point here is to transcend your linear modes of thinking and view the AD environment in terms of connections.

Graph perfectionists can check this out.

Here’s a partial dump of my raw graph structure from last time:


And the same information, just for “Acme-VIPs”, that’s been processed with my closure algorithm:


Notice how the Acme-VIPs list has all the underlying users! If I had spent a little more time, I’d have eliminated every group in the search path from the list and kept just the leaf nodes — in other words, the true list of users who can access a directory with Acme-VIPs access control permission.

Still, what I’ve created is quite valuable. You can imagine hackers using these very same ideas. Perhaps they log in quickly to run PowerView scripts to grab the raw AD group information and then leave the closure processing for large AD environments to an offline step.

There is an Easier Way to Do Closure

We can all agree that knowledge is valuable just for knowledge’s sake. And even if I tell you there’s a simpler way to do closure than I just showed, you’ll still have benefited from the deep wisdom gained from knowing about breadth first searches.

There is a simpler way to do closure.

As it turns out, PowerView cmdlets with a little extra PowerShell sauce can work out the users belonging to a top-level AD group in one long pipeline.

Remember the Get-NetGroupMember cmdlet that spews out all the direct underlying AD members? It also has a -Recurse option that performs the deep search that I accomplished with the breadth-first-search algorithm above.

To remove the AD groups in the search path (something my algorithm didn’t do), I can filter on the IsGroup field, which conveniently has a self-explanatory name. And since users can be in multiple groups (for example, Cal), I want a unique list. To rid the list of duplicates, I used PowerShell’s Sort-Object -Unique.

Now for the great reveal: my one line of PS code that lists the true users who are underlying a given AD Group, in this case Acme-VIPs:


This is an amazing line of PowerShell for pen testers (and hackers as well), letting them quickly see which users are worth going after.
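
The pipeline’s logic (expand a group recursively, keep only the leaf users, dedupe) is easy to sketch. Here’s a hypothetical Python rendering over an invented group table; it isn’t the real PowerView output, just the same three steps:

```python
def true_users(groups, name):
    """Stand-in for: recurse through group members, filter out
    anything that is itself a group, and return a unique list.
    Anything appearing as a key in `groups` is a nested group;
    everything else is a leaf user."""
    seen, out = set(), []
    visited, stack = set(), [name]
    while stack:
        g = stack.pop()
        if g in visited:
            continue
        visited.add(g)
        for m in groups.get(g, []):
            if m in groups:              # nested group: expand it
                stack.append(m)
            elif m not in seen:          # leaf user: keep it once
                seen.add(m)
                out.append(m)
    return sorted(out)

acme = {"Acme-VIPs":  ["Acme-Execs", "Acme-Legal"],
        "Acme-Execs": ["Ted", "Cal"],
        "Acme-Legal": ["Cal", "Lara"]}   # Cal sits in two groups
print(true_users(acme, "Acme-VIPs"))     # ['Cal', 'Lara', 'Ted']
```

Note how Cal shows up once even though he’s in two nested groups: that’s the dedupe step doing its job.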

Thank you Will Schroeder for this PowerView miracle!

Commercial Break

It’s a good time to step back, take a deep breath, and look at the big picture. If you—IT security or admin team—don’t do the work of minimizing who has access to a directory, the hackers will effectively do it for you. I’ve just shown that with PowerView, they have the tools to make this happen.

Of course, you bring in pen testers to discover these permission gaps and other security holes before the hackers.

Or there is another possibility.

Our blog’s generous sponsor, Varonis Systems, has been making beautifully crafted data access and governance solutions since Yaki Faitelson and Ohad Korkus set up shop in 2004. Their DatAdvantage solution has been helping IT admins and security pros find the underlying users who have access to files and directories.

Varonis: For over ten years, they’ve been saving IT from writing complicated breadth-first-search scripts!

Taking the Derivative of the Admin

Back to our show.

Two blog posts ago, I began to show how PowerView can help pen testers hop around the network. I didn’t go into much detail.

Now for the details.

A few highly evolved AD pen testers, including Justin Warner, Andy Robbins, and Will Schroeder, worked out the concept of the “derivative admin”, which is a more efficient way to move laterally.

Their exploit hinges on two facts of life in AD environments. First, many companies have grown complex AD group structures, and they often lose track of who’s in which group.

Second, they configure domain-level groups to be local administrators of user workstations or servers. This is a smart way to centralize local administration of Windows machines without requiring the local administrator to be a domain-level admin.

For example, I set up special AD groups Acme-Server1, Acme-Server2, and Acme-Server3 that are divided up among the Acme IT admin team members — Cal, Meg, Rodger, Lara, and Camille.

In my simple Acme network, I assigned these AD groups to Salsa (Acme-Server1), Avocado (Acme-Server3), and Enchilada (Acme-Server2) and placed them under the local Administrators group (using lusrmgr.msc).

In large real-world networks, IT can deploy many AD groups to segment the Windows machines in large corporate environments — it’s a good way to limit the risks if an admin credential has been taken.

In my Acme environment, Cal, who’s a member of Acme-Server1, uses his ordinary domain user account to log into Salsa and then gains admin privileges to do power-user-level work.

By using this approach, though, corporate IT may have created a trap for themselves.


There’s a PowerView command called Get-NetLocalGroup that discovers these local admins on a machine-by-machine basis.


Got that?

Get-NetLocalGroup effectively tells you that specific groups and users are tied to specific machines, and these users are power users!

So as a smart hacker or pen tester, you can try something like the following as a lateral move strategy. Use Get-NetLocalGroup to find the groups that have local admin access on the current machine. Then do the same for other servers in the neighborhood to find those machines that share the same groups.

You can dump the hashes of users in the local admin group of the machine you’ve landed on and then freely jump to any machine that Get-NetLocalGroup tells you has the same domain groups!

So once I dump and pass Cal’s hash, I can hop to any machine that uses Acme-Server1 as its local admin group.

By the way, how do you figure out definitively all the admin users that belong to Acme-Server1?

Answer: use the one-line script that I came up with above that does the drill-down and apply it to the results of Get-NetLocalGroup.

And, finally, where does derived or derivative admin come into play?

If you’re really clever, you might make the safe assumption that IT occasionally puts the same user in more than one admin group.

As a pen tester, this means you may not be restricted to only the machines that the users in the local admin domain group of your current server have access to!

To make this point, I’ve placed Cal in Acme-Server1 and Acme-Server3, and Meg in Acme-Server3 and Acme-Server2.



Lateral movement by exploiting hidden connections in the Acme network.

If you’re following along at home, that means I can use Cal to hop from Salsa to Avocado. On Avocado, I use Meg’s credentials to then jump from Avocado to Enchilada.

On the surface it appears that my teeny three-machine network was segmented with three AD groups, but in fact there were hidden connections — Cal and Meg — that broke through these surface divisions.

So Cal in Acme-Server1 can get to an Acme-Server3 machine, and is ultimately considered a derivative admin of Enchilada!

Neat, right?

If you’re thinking in terms of connections, rather than lists, you’ll start seeing this as a graph search problem that is very similar in nature to what I presented in the last post.

This time, though, you’ll have to add the server names into the graph along with the users. In our make-believe scenario, I’ll have adjacency lists that tell me that Salsa is connected to Cal; Avocado is connected to Cal, Meg, Lara, and Rodger; and Enchilada is connected to Meg and Camille.
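
Those adjacency lists are exactly what a graph search needs. Here’s a small Python sketch (machine and user names are from the make-believe Acme scenario, and the BFS is mine, not the code I’ll show next time) that finds a hop path between two servers through shared local admins:

```python
from collections import deque

def admin_path(edges, start, target):
    """BFS over a bipartite machine/admin graph. `edges` maps each
    machine to the users who hold local admin on it; a user whose hash
    you can grab on one machine lets you hop to every other machine
    listing that same user."""
    adj = {}                             # build undirected adjacency
    for machine, admins in edges.items():
        for user in admins:
            adj.setdefault(machine, set()).add(user)
            adj.setdefault(user, set()).add(machine)
    parent = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == target:               # walk parents back to start
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in adj.get(node, ()):
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None                          # no derivative-admin route

acme = {"Salsa":     ["Cal"],
        "Avocado":   ["Cal", "Meg", "Lara", "Rodger"],
        "Enchilada": ["Meg", "Camille"]}
print(admin_path(acme, "Salsa", "Enchilada"))
# ['Salsa', 'Cal', 'Avocado', 'Meg', 'Enchilada']
```

Cal bridges Salsa to Avocado, and Meg bridges Avocado to Enchilada: exactly the hidden-connection hops described above.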

I’ve given you enough clues to work out the PowerView and PowerShell code for the derivative admin graph, which I’ll show next time.

As you might imagine, there can be lots of paths through this graph from one machine to another. There is a cool idea, though, that helps make this problem easier.

In the meantime, if you want to cheat a little to see how the pros worked this out, check out Andy Robbins’ code.

How to setup a SPF record to prevent spam and spear phishing


Some things go together like peanut butter and jelly: delicious, delightful and a good alternative to my dad’s “Thai-Italian Fusion” dinner experiments as a kid.

When other things are combined, the result can be terrifying: like SPF records and spear phishing.

While the nuances of something as seemingly mundane as SPF DNS records can be a dry, boring topic for executives in your organization, you may be able to get them to pay attention, since they are the most likely targets of spear-phishing attacks.

SPF records don’t just keep your C-suite safe; they do so much more. Like what, you say? Here’s just the tip of the iceberg on the magnificent benefits of SPF records:

  • Prevent breaches
  • Are cheap (free!) to set up
  • Prevent the bad PR of your domain being used for spam
  • Improve overall organizational identification

With this in mind, let’s dig into more of the how and why of these incredibly useful DNS records.

What is a SPF record?

The Sender Policy Framework (SPF) is an anti-spam system built on top of the existing DNS and Email Internet Infrastructure.

Spammers were impersonating domains to make offers look like they were coming from Amazon or other reputable places, but when you would click through they’d steal your credit card and run up a bill at the local Chuck E Cheese (which is where I presume mob members go to eat).

What does a SPF record do?

An SPF record defines which IP addresses are allowed to send email on behalf of a particular domain. This is trickier than it sounds, as many companies use multiple different email service providers for different purposes.

Common different uses:

  • Transactional emails from applications
  • Internal notifications
  • Internal email
  • External email
  • PR/Marketing emails

Further complicating the situation is that while a company might have a name like SafeEmailSender, there is nothing stopping them from having an email sending domain like

What does a SPF record prevent?

Having strict SPF rules allows you to control who can send email on behalf of your domain. A good way to think of this is in reverse: who would gain by sending email on behalf of your domain?

What is phishing?

Phishing is where a con artist sends mass emails out that appear as if they are from a legitimate source. Most often impersonated are banks, credit card companies and money handling corporations (like Paypal).

From the point of view of the phisher, they would like to appear as much as possible like the company they are pretending to be. A key aspect of this is making their email appear to be from the genuine source, and to definitively not appear to be coming from my clueless neighbor’s malware-riddled Windows XP box.

In recent years, data breaches have served as a prime resource for phishers as they are able to create a more convincing email as they have more details about targets.

What is spear phishing?

Spear phishing is similar in intent to standard phishing: trick people into thinking a fake email message is legitimate. What differs is the audience.

With spear phishing it’s an audience of one.

A canonical example of this is the February 2016 spear phishing attack on a Snapchat payroll employee:


What’s the difference between a SPF record and an SPF rule?

All DNS entries are “records”; most typically, a domain has A and CNAME records for its website and some MX records to direct where email traffic should go.

A SPF record is what holds the rule. The mere presence of a SPF record doesn’t protect anything. It’s like a padlock left unclasped: it could protect something, but whether it actually does is another matter.

What type of DNS record is a SPF record?

If you thought that the people who invented DNS were smart, you are correct. What is somewhat surprising is that they were also wise: wise enough to know that, while their DNS system was able to scale (with a few bumps along the way) from a dozen computers to the millions online today, there would be new, unexpected uses for DNS, and that there should be an option to handle them. Thus the TXT record.

TXT (text) records are used for all sorts of interesting DNS purposes, like proving that you own a domain for SSL issuing purposes, up to and including ASCII art self portraits:


So, it’s no surprise that when new functionality was needed for the Sender Policy Framework, the tool of choice was DNS TXT records.

While this historical context is somewhat interesting (come on, a guy put a selfie in a DNS record; that deserves some praise), on a more practical note it will also save you from fruitlessly looking for an “SPF DNS Record Type” in the dropdown of your preferred DNS service. You’d choose TXT and enter in the rules.

What are the components of a SPF record?

There are two primary components of an SPF record:

Mechanisms: What is being matched.

Qualifiers: What action should be taken if the mechanism is matched.

What is a SPF Mechanism?

A SPF mechanism is just a group of IP addresses. The nuances of exactly how that group is defined differ a bit between the mechanism types, but at the heart of it the question is always the same: Does the IP address sending email belong to one of these groups?

A SPF mechanism doesn’t have an opinion on anything. An IP address matching a mechanism doesn’t automatically mean it’s good or bad, just that it matched, and that the qualifier governing how to treat it can now be applied.

What are the SPF Mechanism Types?

The mechanism types are:

Does the client IP match an address in this range?

ip4 and ip6

Does the client IP match an address that one of these other DNS record types for the domain resolves to?

a, mx, and ptr

Does the client IP match one of the SPF rules at this OTHER domain? (You typically see these when using external email-sending services like marketing automation suites and transactional email systems.)

include and exists

Did the client IP fail to match every other mechanism? This catch-all matches everything.

all

What are the SPF Qualifier Types?

There are four SPF qualifier types that act upon the SPF Mechanisms.

+ If the client IP matches the mechanism (IP matching group) that follows, it is allowed to send email for this domain.

Example: v=spf1 +a

This example means “If the IP address that any DNS a record for this domain resolves to matches the client IP address, then it is allowed to send email for this domain.”

- If the client IP matches the mechanism that follows, it is NOT allowed to send email.

~ If the client IP matches the mechanism that follows, the email is allowed through but marked as potentially suspicious. The SoftFail qualifier is often used when first implementing SPF rules, as you’re less likely to accidentally mark all legitimate email emanating from your domain as spam.

In production, the final qualifier+mechanism pair is typically ~all, which lets the earlier rules positively match your legitimate senders while everything else soft-fails.

? Neutral – pass but don’t positively or negatively identify.

Other than “+” which definitively marks an email as properly coming from your domain, the other qualifiers can be thought of as “hints” that an inbound email server can use in their spam calculations:

+ This is our email
? Maybe our email?
~ Pretty sure not our email
- Really not our email

What’s the best practice method of adding a new SPF record into your DNS Records?

A key aspect of DNS is properly manipulating Time To Live (TTL) settings. Please check out our Definitive Guide to DNS TTL Settings for the optimal method of adding and modifying DNS records.

What order should SPF mechanisms be listed?

SPF records are evaluated left to right within the record. Matching a mechanism immediately invokes its qualifier’s action, and no further rules are evaluated.

In general, you should put your IP address designations first, then your domain designations, then your includes, and finally your all mechanism. This ordering roughly aligns with the time it takes to evaluate each rule.
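
To make the left-to-right evaluation concrete, here’s a toy SPF checker in Python. It handles only the ip4 and all mechanisms and is nowhere near a full RFC 7208 implementation (no a, mx, include, exists, ptr, macros, or lookup limits), but it shows how the first matching mechanism’s qualifier decides the result:

```python
import ipaddress

QUALIFIERS = {"+": "pass", "-": "fail", "~": "softfail", "?": "neutral"}

def check_spf(record, client_ip):
    """Toy left-to-right SPF evaluation: walk the mechanisms in order
    and return the qualifier result of the first one that matches."""
    ip = ipaddress.ip_address(client_ip)
    for term in record.split()[1:]:       # skip the v=spf1 tag
        qual = term[0] if term[0] in QUALIFIERS else "+"  # default is +
        mech = term.lstrip("+-~?")
        if mech == "all":                 # catch-all: always matches
            return QUALIFIERS[qual]
        if mech.startswith("ip4:"):
            if ip in ipaddress.ip_network(mech[4:], strict=False):
                return QUALIFIERS[qual]
    return "neutral"                      # no mechanism matched

record = "v=spf1 ip4:192.0.2.0/24 ~all"
print(check_spf(record, "192.0.2.55"))    # pass (in the allowed range)
print(check_spf(record, "198.51.100.7"))  # softfail (caught by ~all)
```

Notice the ordering consequence: once 192.0.2.55 matches the ip4 mechanism, the trailing ~all is never even looked at.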

What evaluates SPF?

It’s important to keep in mind that the receiving email servers, wherever you are sending email, are ultimately what read your SPF record. So if you send an email to, it will be the mail server that reads the SPF record for, compares the sending IP address to the rules, and makes a determination about whether the email should be delivered to its intended recipient.

Why use SPF and not another email security standard?

Spam and impersonation have been problems on the Internet since it was invented, so why SPF and not one of the many different standards that have come before?

In contrast to previous security solutions, SPF is reasonably fast to execute and isn’t dependent upon the actual content of the email being received. An email with a 15MB video attached can be evaluated as quickly as a one-sentence status update, since only the headers of the email are examined. Many previous standards relied on cryptographically signing the bodies of emails, making them unwieldy at best, and a potential vector for denial-of-service attacks at worst.

How do I lookup the SPF records for my Domain?

On OS X and Linux systems you can use the dig command to list the TXT records for your domain, among which your SPF rule (if any) will appear.

dig -t txt +short

On Windows you can use the NSLookup Utility

Nslookup.exe -q=TXT

I recommend looking up the SPF entry for as you can very easily pick out their different SPF domains included as well as their permission for to send email on their behalf.


Pen Testing Active Directory Environments, Part III:  Chasing Power Users


For those joining late, I’m currently pen testing the mythical Acme company, now made famous by a previous pen testing engagement (and immortalized in this free ebook). This time around I’m using two very powerful tools, PowerView and crackmapexec, in my post-exploitation journey into Acme’s IT.

Before we get into more of the details of hunting down privileged users, I wanted to take up one point regarding Active Directory mitigations that I touched on last time.

Protecting the VIPs

As we saw, PowerView cmdlets give pen testers and hackers incredibly valuable information about the user population. It does this by pulling attributes out of Active Directory, some of which can then be used to launch a phishing-whaling attack.

So you’re wondering: can we put restrictions on who gets to see the data? Or on what data is made available in the first place?

Yes and yes.

For the purposes of this post, I’m proposing a quick fix. We’ll simply prevent some key AD attributes from being displayed in PowerView’s Get-NetUser cmdlet.

We really don’t want to make it easy for hackers to access phone numbers, mail addresses, and other personal information of the C-suite.

These folks may not have customer accounts and credit card numbers in their files, but they surely have access to key corporate IP – contracts, plans, pending deals, etc.

The answer can be found in the Active Directory Users and Computers interface.

Our first priority should be to secure Ted Bloatly, Chief Honcho (CEO) of Acme.

If we click on his Security tab, we can view a list of broad AD attribute permissions — personal, phone and email — that we can allow or deny access to.

For Mr. Bloatly, I’ll simply deny access to his contact information (see below) for anyone in the Acme domain.


I really don’t want hackers and even employees to get this kind of sensitive data. If you want to know anything about Mr. Bloatly, you’ll have to find out the old-fashioned way, by contacting his loyal personal assistant, Smithers.

Sure we can be more granular about who gets to see this information. Clicking on “Advanced” lets you enable certain groups to view Bloatly’s contact information: for example, I could allow access for just the Acme-VIPs group, the C-levels of the company.

In any case, if we go back to the Salsa server that we landed on and run Get-NetUser, we’ll see that his postal address and the personal info about his bowling habits no longer show up.


We’ll delve into other ways to restrict access to AD attributes later on in this series.

The Credential Hunt

Building on the scenario from last time, I’m back on Salsa with Lele’s credential. Lele, like her friend Bob, is in the Acme-Serfs group.

Let’s rerun Get-NetComputer.


You’re probably thinking, as I did when I set this up, that Enchilada is where the important people hang out. “Big Enchiladas”, right?

Let’s see if Lele’s credentials will allow me access to it. One quick way to do this is to use crackmapexec and point it at the server you’re trying to access; it will let you know whether you can log in (below).


My pen testing senses are tingling. I’m denied access to Enchilada, but allowed access to Taco and Salsa.

It’s like the equivalent of a sign that says “Private Property: Keep Out!”. You know there has to be something valuable on the Enchilada server.

We’re now at the point where you have to find the users who’ll get you what you want – access to Enchilada.

Like last time, we can run Get-NetGroupMember on Acme-VIPs. I’ve found two power users now: Ted Bloatly and Lara Crasus. (FYI: I added VIP Lara since the previous post.)

What you can hope for is that one of these VIPs will at some point log on to the Salsa machine. Then we can grab the hashes and use them with crackmapexec to get into Enchilada.

By the way, this brings up an important point about risk assessments regarding user accounts: you have to be very careful about assigning user account access rights.

One common technique is to assign multiple accounts to the same user, with each account having its own privileges. This avoids the problem of an over-privileged account logging into a less-privileged account’s machine, thereby leaving it open to credential theft and pass-the-hash.

So let’s say Acme hasn’t learned this lesson, and Ted Bloatly occasionally uses his one AD account to log into the Salsa server used by the plebeians.

We can set an alarm.

Enter something like Invoke-UserHunter -GroupName Acme-VIPs on the command line, then check the output and repeat. Obviously, we can do a better job of fine-tuning and automating. I’ll leave that as a homework assignment.
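
As one hypothetical sketch of that homework, here’s a Python watcher. Everything in it is invented for illustration: the hunt function is a stub standing in for re-running Invoke-UserHunter and parsing its output, and the user/machine names are made up.

```python
import time

def watch_for_vips(hunt, group="Acme-VIPs", interval=60, max_polls=None):
    """Poll a hunting function and alert as soon as a member of the
    target group shows up logged on somewhere. `hunt` returns a list
    of (machine, user) pairs for the group; here it's a stub for
    parsing Invoke-UserHunter output."""
    polls = 0
    while max_polls is None or polls < max_polls:
        for machine, user in hunt(group):
            return f"ALERT: {user} ({group}) is logged on to {machine}"
        polls += 1
        time.sleep(interval)   # wait before hunting again
    return None

# Invented sample result, as if parsed from Invoke-UserHunter:
fake_hunt = lambda group: [("SALSA", "ted.bloatly")]
print(watch_for_vips(fake_hunt, interval=0, max_polls=3))
# ALERT: ted.bloatly (Acme-VIPs) is logged on to SALSA
```

The real version would shell out to PowerShell on a schedule; the check-and-repeat loop is the whole idea.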

Once we find an Acme-VIPs member, we dump the hash using crackmapexec’s --lsa option and then pass the hash using its -H option to log into the Enchilada server.

PowerShell Empire and Reverse Shells

One aspect of hopping around a domain that’s worth talking about is the topic of getting shell connections. So far I’ve been cheating a little bit in showing screen output from the actual server.

In real life, hackers and pen testers are using reverse-shells — remember those? — to see what’s going on from a remote terminal.

In my last pen testing series, getting a reverse shell from a PowerShell environment was a bit rocky. In fact, I didn’t really have a good way of doing it.

And then I discovered PowerShell Empire.

It describes itself as having the ability “to run PowerShell agents without needing powershell.exe, rapidly deployable post-exploitation modules ranging from key loggers to Mimikatz… all wrapped up in a usability-focused framework”.

Amen, and it lives up to its billing. This is powerful stuff and I attained beautiful remote PowerShell access to the Acme environment.

If you want to play around with Empire for yourself, you can download it from GitHub here. With a little bit of struggle (and two aspirins later), I installed it on an Ubuntu Linux server in my AWS environment.

In terms of its remote PowerShell powers, it allows you to create a Listener, which lives at one end of the connection. And then you grab some shell code to run on the victim’s machine. Ultimately, it launches an Agent, which is what you interact with in Empire.



PowerShell Empire: multiple agents each with its own shell connection.  Shellcode runs on the target computer. Awesome power.

Effectively, we’re implementing the PowerShell version of the reverse shell that I previously accomplished with ncat.

You can have many agents running at a time and interact concurrently with each PowerShell session on the target machines.


PowerShell connection back to Salsa!

This is very powerful, and I’m only scratching the surface.

Let’s take a breath.

In my next post, we’ll go into more detail for this Empire-based reverse PowerShell technique, and demonstrate how you can use it to hop around the Acme domain using crackmapexec to inject the shellcode for the next hop.

Yes, we’ll get back into exploiting the information in Active Directory groups, and in particular use the relationships in it to guide which users to chase down. It’s referred to as derived or derivative admins.

I’ll leave you with this interesting observation made by (I believe) John Lambert: pen testers think in graphs, IT people think in terms of lists.

Meditate on that thought till next time.

New Mirai Attacks, But It’s Still About Passwords


Last week, Mirai-like wormware made the news again with attacks on ISPs in the UK. Specifically, customers of TalkTalk and the Post Office reported Internet outages. As with the last Mirai incident involving consumer cameras, this one also took advantage of an exposed router port.

And by an amazing coincidence, some of the overall points about these ISP incidents were covered in two recent posts of ours: injection exploits are still a plague, and consumers should learn how to change their router passwords.

It’s Mirai, But It’s Not

This recent Mirai infestation started last month in Germany with perhaps up to 900,000 Deutsche Telekom customers experiencing connectivity problems with their routers.

And then it spread to the UK. But on closer analysis, security pros began to notice differences this time around.

The new variant of the Mirai malware — called Annie — probes on port 7547, not on port 23 (telnet). As every network and telecom wonk knows, that’s the port the ISPs can use to manage their routers through the obscure TR-064 protocol.

To summarize the research and analysis I’ve looked at, the attackers were able to use the protocol directly to snatch the router’s WiFi password along with the wireless network name or SSID.

To make matters worse, the attackers found a bad implementation of another TR-064 command that let them slip in or inject their own shell commands.

The shell commands do the heavy lifting: they download and execute binaries from the attackers’ C2 servers, which then start the process all over again to spread the Annie worm.

The Badcyber blog has a nice write up of all this.

And the Goal Is …

By the way, all the above access did not require any authentication — no user name, no password.

Has anyone at the ISPs or the router manufacturers even heard about Privacy by Design?

I’m guessing not.

In any case, it seems the outages experienced by customers were a result of the extra traffic on the ISP’s network as more and more routers saw incoming requests on their ports.

From what we currently know, the Annie wormware leaves the routing function alone.

In other words, the DDoS aspects may have been an unintended consequence of Annie. There’s also speculation that several different cybergangs were involved, with some using another Mirai-like variant.

It was a cyber free for all.

The ultimate purpose, though, is a little unclear — other than showing that it’s possible to exploit vulnerable routers on an enormous scale.

TalkTalk has responded by fixing the TR-064 bug with new firmware that disables access on the open port. It also resets the WiFi password to the factory default setting — the one on the back of the box.

As I mentioned after the Mirai attack on cameras, it’s a good idea to examine your firewall port settings: if you can’t justify remote administration or other special features, simply close all the public-facing ports.
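A quick way to see what your router answers on, sketched in PowerShell (the 192.168.1.1 address is a placeholder for your router; note this tests from the LAN side, while Annie probed from the WAN):

```powershell
# Probe the telnet (23) and TR-064 (7547) ports on the router.
# Replace 192.168.1.1 with your router's actual address.
foreach ($port in 23, 7547) {
    $result = Test-NetConnection -ComputerName "192.168.1.1" -Port $port -WarningAction SilentlyContinue
    Write-Output ("Port {0} open: {1}" -f $port, $result.TcpTestSucceeded)
}
```

An external scan from outside your own network is the more conclusive test, since it sees the same WAN-facing surface the worm does.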

If only average customers (long painful sigh) were better at WiFi administration, this whole attack would have been greatly diminished.

… Passwords

Ken Munro, Pen Test Partners’ brilliant founder — I’m a fan — noticed a flaw in TalkTalk’s initial response. Since most customers never bother to change their WiFi passwords from the factory default, the passwords scooped up by the hackers will still be current.

Uh oh.

Attackers could use the stolen SSIDs — see our interview with Ken — to geo-locate the routers and then engage in wardriving.

With WiFi passwords and SSID names in hand, you can go into the hacking business.

So it’s possible that the WiFi passwords were the real point of this attack, and cybergangs will be reselling their massive password list on the darkweb.

Putting on my black hat, I would charge premium prices for passwords associated with execs, VIPs, and other whales.

Do This Right Now

What’s the take-away?

If you’re a TalkTalk customer, you should change your password and set all your devices to use that new password.

For the rest of us, it’s probably not a bad idea to also change WiFi passwords every so often, and please use horse-battery-staple techniques.
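For the horse-battery-staple approach, here’s a minimal PowerShell sketch (the eight-word list below is a stand-in; use a real dictionary of a few thousand words):

```powershell
# Tiny illustrative word list -- swap in a real dictionary file.
$words = "correct","horse","battery","staple","salsa","taco","router","bowler"
# Join four randomly chosen words into a passphrase.
$passphrase = (Get-Random -InputObject $words -Count 4) -join "-"
Write-Output $passphrase
```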

For enterprise IT folks who think none of this has any value to them ‘cause it’s consumer-related, remember that injection attacks and default-itis are problems for you as well.

Pen Testing Active Directory Environments, Part II: Getting Stuff Done With PowerView

In my last post, I began discussing how valuable pen testing and risk assessment work can be done just by gathering information from Active Directory. I also introduced PowerView, a relatively new tool for helping pen testers and “red teamers” explore offensive Active Directory techniques.

To get more background on how hackers have been using and abusing Active Directory over the years, I recommend taking a look at some of the slides and talks by Will Schroeder, who is the creator of PowerView.

What Schroeder has done with PowerView is give those of us on the security side a completely self-contained PowerShell environment for seeing AD environments the way hackers do.

100% Raw PowerView

Last time I was crowing about crackmapexec, the Swiss-army knife pen testing tool, which among its many blades has a PowerView parameter. I also showed how you can input PowerView cmdlets directly.

However, the really interesting things you can do with PowerView involve chaining cmdlets together in a PowerShell pipeline. And—long sigh—I couldn’t figure out how to get crackmapexec to pipeline.

But this leads to a wondrous opportunity: download the PV library from GitHub and directly work with the cmdlets.

And that’s what I did.

I uploaded PowerView’s Recon directory and placed it under Documents\WindowsPowerShell\Modules on one of the servers in my mythical Acme company environment. You then have to enter an Import-Module Recon cmdlet in PowerShell to load PowerView — see the instructions on the GitHub page.
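In other words, the setup amounts to something like this (the paths assume the per-user module directory; the final Get-Command is just a sanity check):

```powershell
# Copy PowerView's Recon folder into a per-user PowerShell module path.
Copy-Item -Recurse .\Recon "$HOME\Documents\WindowsPowerShell\Modules\Recon"
# Load the module; you may need Set-ExecutionPolicy RemoteSigned first.
Import-Module Recon
# Sanity check: PowerView cmdlets should now resolve.
Get-Command Get-NetComputer, Invoke-UserHunter
```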

And then we’re off to the races.

Classy Active Directory

I already showed how it was possible to discover the machines on the Acme network, as well as who was currently logged in locally using a few crackmapexec parameters.

Let’s do the same thing with PowerView cmdlets.

For servers in the domain, the work is done by Get-NetComputer.
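A sketch of the call (the -FullData switch, per PowerView’s documentation, returns whole AD objects so you can pick off attributes):

```powershell
# List the names of computer objects in the current domain.
Get-NetComputer

# Pull full AD objects and select a couple of attributes.
Get-NetComputer -FullData | Select-Object name, operatingsystem
```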


Notice how this is a little more useful than the Nessus-like output of crackmapexec — we get the more nutritious AD server names and the domain.

To find all the user sessions on my current machine, I’ll use the very powerful cmdlet Invoke-UserHunter. More power than I really need for this: it actually tells me all users currently logged in on all machines across the domain.

But this allows me then to introduce a PowerShell pipeline. Unlike in a Linux command shell, the output of a PowerShell cmdlet is an object, not a string, and that brings in all the machinery of the object-oriented model—attributes, classes, inheritance, etc. We’ll explore more of this idea below.

I present for your amusement the following pipeline below. It uses the Where-Object cmdlet, aliased by the PowerShell ? symbol, and filters for only those user objects whose ComputerName AD attribute is equal to “Salsa”, which is my current server.


Note: the $_. is the way PowerShell lets you refer to a single object in a stream or collection of objects.
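Reconstructed as a sketch, the pipeline looks like this (Salsa is my server from the walkthrough):

```powershell
# All sessions across the domain, filtered down to my current server.
Invoke-UserHunter | Where-Object { $_.ComputerName -match "Salsa" }

# The same thing using the ? alias for Where-Object.
Invoke-UserHunter | ? { $_.ComputerName -match "Salsa" }
```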

To see who’s on the Taco server, I did this instead:


Interesting! I found an Administrator.

One of the goals of pen testing is hunting down admins and other users with higher privileges. Invoke-UserHunter is the go-to cmdlet. Let’s store this thought away for the next post.

Another good source of useful information is the AD groups in the Acme environment. You can learn about organizational structure just from looking at group names.

I used PowerView’s Get-NetGroup to query Active Directory for all the groups in the Acme domain. As the output sped by, I noticed, besides all the default groups, a few group names that had Acme as a prefix. These were probably set up and customized by the Acme system admin — which would be me in this case.

One group that caught my attention was the Acme-VIPs group.

It might be interesting to see this group’s user membership, and PV’s Get-NetGroupMember does this for me.
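Roughly like so (the wildcard on Get-NetGroup is how I’d narrow the fast-scrolling group list down to the Acme-prefixed ones):

```powershell
# Narrow the group listing to the custom Acme-* groups.
Get-NetGroup "Acme*"

# Then enumerate the members of the interesting one.
Get-NetGroupMember -GroupName "Acme-VIPs"
```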


I now have a person of interest: Ted Bloatly, an obviously important guy at Acme.

Active Directory Treasures

At this point, I’ve not done anything disruptive or invasive. I’m just gathering information – under the hood, though, PowerView is making low-level AD queries.

Suppose I want to find out more details about this Ted Bloatly person.

AD administrators are of course familiar with the Users and Computers interface through which they manage the directory (see below).



It’s also a treasure trove of information for hackers.

Can I access this using PowerView?

Through another PV cmdlet, Get-NetUser, I can indeed see all these fields, which include phone numbers, home address, emails, job title, and notes.

Putting on my red team hat, I could then leverage this personal data in a clever phishing or pretext attack — craft a forged email or perhaps make a phone call.

I then ran Get-NetUser with an account name parameter directly, and you can see some of the attributes and their values displayed below.


However, as a cool pen tester I was only interested in a few of these attributes, so I came up with another script.

I’ll now use the PowerShell cmdlet ForEach-Object, which has an alias of %.

The idea is to filter my user objects using the aforementioned Where-Object to match only on ted, and then use the ForEach-Object cmdlet to reference individual objects—in this case only ted—and print their attributes using PowerShell’s Write-Output.
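As a sketch (samaccountname, title, and telephonenumber are standard AD/LDAP attribute names, though the exact ones you want may differ):

```powershell
# Filter the user objects down to ted, then print a few chosen attributes.
Get-NetUser | Where-Object { $_.samaccountname -match "ted" } |
    ForEach-Object { Write-Output $_.name, $_.title, $_.telephonenumber }

# Same pipeline with the ? and % aliases.
Get-NetUser | ? { $_.samaccountname -match "ted" } | % { Write-Output $_.name }
```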

By the way, Get-NetUser displays a lot of the object’s AD attributes, but not all of them. Let’s say I couldn’t  find the attribute name for Ted’s email address.

So here’s where having a knowledge of Active Directory classes comes into play. The object I’m interested in is a member of the organizationalPerson class. If you look at the Microsoft AD documentation, you’ll find that this class has an email field, known by its LDAP name as “mail”.

With this last piece of the puzzle, I’m now able to get all of Ted’s contact information as well as some personal notes about him contained in the AD info attribute.
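Putting the pieces together, something like:

```powershell
# organizationalPerson surfaces email as the LDAP "mail" attribute;
# free-form notes land in the "info" attribute.
Get-NetUser -UserName ted | ForEach-Object { Write-Output $_.mail, $_.info }
```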


So I found Acme’s CEO and even know he’s a bowler. It doesn’t get much better than that for launching a social engineering attack.

As a hacker, I could now call it a day, and use this private information to later phish Ted directly, ultimately landing on the laptop of an Acme executive.

One can imagine hackers doing this on an enormous scale as they scoop up personal data on different key groups within companies: executives, attorneys, financial groups, production managers, etc.

I forgot to mention one thing: I was able to run these cmdlets using just ordinary user access rights.

Scary thought!

In my next post, we’ll try to access Ted’s laptop more directly, and explore techniques for navigating around the Acme network.