Category Archives: IT Pros

Practical PowerShell for IT Security, Part I: File Event Monitoring

Back when I was writing the ultimate penetration testing series to help humankind deal with hackers, I came across some interesting PowerShell cmdlets and techniques. I made the remarkable discovery that PowerShell is a security tool in its own right. Sounds to me like it’s the right time to start another series of PowerShell posts.

We’ll take the view in these posts that while PowerShell won’t replace purpose-built security platforms — Varonis can breathe easier now — it will help IT staff monitor for threats and perform other security functions. And also give IT folks an appreciation of the miracles that are accomplished by real security platforms, like our own Metadata Framework. PowerShell can do interesting security work on a small scale, but it is in no way equipped to take on an entire infrastructure.

It’s a Big Event

To begin, let’s explore using PowerShell as a system monitoring tool to watch files, processes, and users.

Before you start cursing into your browsers, I’m well aware that any operating system command language can be used to monitor system-level happenings. A junior IT admin can quickly put together, say, a Linux shell script to poll a directory to see if a file has been updated or retrieve a list of running processes to learn if a non-standard process has popped up.

I ain’t talking about that.

PowerShell instead gives you direct event-driven monitoring based on the operating system’s access to low-level changes. It’s the equivalent of getting a push notification on a news web page alerting you to a breaking story rather than having to manually refresh the page.

In this scenario, you’re not in an endless PowerShell loop, burning up CPU cycles, but instead the script is only notified or activated when the event — a file is modified or a new user logs in — actually occurs. It’s a far more efficient way to do security monitoring than by brute-force polling.

Further down below, I’ll explain how this is accomplished.

But first, anyone who’s ever taken, as I have, a basic “Operating Systems for Poets” course knows that there’s a demarcation between user-level and system-level processes.

The operating system, whether Linux or Windows, does the low-level handling of device actions – anything from disk reads, to packets being received — and hides this from garden variety apps that we run from our desktop.

So if you launch your favorite word processing app and view the first page of a document, the whole operation appears as a smooth, synchronous activity. But in reality there are all kinds of time-sensitive events — disk seeks, disk blocks being read, characters sent to the screen, etc. — happening under the hood and deliberately hidden from us. Thank you, Bill Gates!

In the old days, only hard-core system engineers knew about this low-level event processing. But as we’ll soon see, PowerShell scripters can now share in the joy as well.

An OS Instrumentation Language

This brings us to Windows Management Instrumentation (WMI), which is a Microsoft effort to provide a consistent view of operating system objects.

Hardly new, WMI is itself part of a broader industry effort dating back to the 1990s, known as Web-based Enterprise Management (WBEM), to standardize the information pulled out of routers, switches, and storage arrays, as well as operating systems.

So what does WMI actually look and feel like?

For our purposes, it’s really a query language, like SQL, but instead of accessing rows of vanilla database columns, it presents complex OS information organized as a WMI class hierarchy. Not too surprisingly, the query language is known as, wait for it, WQL.

Windows generously provides a utility, wbemtest, that lets you play with WQL. In the graphic below, you can see the results of my querying the Win32_Process object, which holds information on the current processes running.

WQL on training wheels with wbemtest.

Effectively, it’s the programmatic equivalent of running the Windows task monitor. Impressive, no? If you want to know more about WQL, download Ravi Chaganti’s wondrous ebook on the subject.

PowerShell and the Register-WmiEvent Cmdlet

But there’s more! You can take off the training wheels provided by wbemtest, and try these queries directly in PowerShell.

PowerShell’s Get-WmiObject is the appropriate cmdlet for this task, and it lets you feed in the WQL query directly as a parameter.

The graphic below shows the first few results from running select Name, ProcessId, CommandLine from Win32_Process on my AWS test environment.

gwmi is the PowerShell alias for Get-WmiObject.

The output is a bit wonky since it’s showing some hidden properties having to do with underlying class bookkeeping. The cmdlet also spews out a huge list that speeds by on my console.

For a better Win32_Process experience, I piped the output from the query into Out-GridView, a neat PS cmdlet that formats the data as a beautiful GUI-based table.

Not too shabby for a line of PowerShell code. But WMI does more than allow you to query these OS objects.

As I mentioned earlier, it gives you access to relevant events on the objects themselves. In WMI, these events are broadly broken into three types: creation, modification, and deletion.

Prior to PowerShell 2.0, you had to access these events in a clunky way: creating lots of different objects and then synchronously ‘hanging’, so it wasn’t true asynchronous event handling. If you want to know more, read this MS Technet post for the ugly details.

Now in PS 2.0 with the Register-WmiEvent cmdlet, we have a far prettier way to react to all kinds of events. In geek-speak, I can register a callback that fires when the event occurs.

Let’s go back to my mythical (and now famous) Acme Company, whose IT infrastructure is set up on my AWS environment.

Let’s say Bob, the sys admin, notices every so often that he’s running low on file space on the Salsa server. He suspects that Ted Bloatly, Acme’s CEO, is downloading huge files, likely audio files, into one of Bob’s directories and then moving them into Ted’s own server on Taco.

Bob wants to set a trap: when a large file is created in his home directory, he’ll be notified on his console.

To accomplish this, he’ll need to work with the CIM_DataFile class. Instead of accessing processes, as we did above, Bob uses this class to connect with the underlying file metadata.

CIM_DataFile object can be accessed directly in PowerShell.

Playing the part of Bob, I created the following Register-WmiEvent script, which will notify the console when a very large file is created in the home directory.

Register-WmiEvent -Query "SELECT * FROM __InstanceModificationEvent WITHIN 5 WHERE TargetInstance ISA 'CIM_DataFile' AND TargetInstance.FileSize > 2000000 AND TargetInstance.Path = '\\Users\\bob\\' AND TargetInstance.Drive = 'C:'" -SourceIdentifier "Accessor3" -Action { Write-Host "Large file" $EventArgs.NewEvent.TargetInstance.Name "was created" }

Running this script directly from the Salsa console launches the Register-WmiEvent command in the background, assigning it a job number, and then it only interacts with the console when the event is triggered.

In the next post, I’ll go into more details about what I’ve done here. Effectively, I’m using WQL to query the CIM_DataFile object — specifically anything in the \Users\bob directory that’s over 2 million bytes — and setting up a notification when a new file is created that fits these criteria; that’s where __InstanceModificationEvent comes into play.

Anyway, in my Bob role I launched the script from the PS command line, and then, putting on my Ted Bloatly hat, I copied a large mp4 into Bob’s directory. You can see the results below.

We now know that Bloatly is a fan of Melody Gardot. Who would have thunk it?

You begin to see some of the exciting possibilities for PowerShell as a tool to detect threat patterns and perhaps even do a little behavioral analytics.

We’ll be exploring these ideas in the next post.

Binge Read Our Pen Testing Active Directory Series

With winter storm Niko now on its extended road trip, it’s not too late, at least here on the East Coast, to make a few snow day plans. Sure, you can spend part of Thursday catching up on Black Mirror while scarfing down this slow cooker pork BBQ pizza. However, I have a healthier suggestion.

Why not binge on our amazing Pen Testing Active Directory Environments blog posts?

You’ve read parts of it, or — spoiler alert — perhaps heard about the exciting conclusion involving a depth-first-search of the derivative admin graph. But now’s your chance to totally immerse yourself and come back to work better informed about the Active Directory dark knowledge that hackers have known about for years.

And may we recommend eating these healthy soy-roasted kale chips while clicking below?

Episode 1: Crackmapexec and PowerView

Episode 2: Getting Stuff Done With PowerView

Episode 3: Chasing Power Users

Episode 4: Graph Fun

Episode 5: Admins and Graphs

Episode 6: The Final Case

Five Ways for a CDO to Drive Growth, Improve Efficiencies, and Manage Risk

We’ve already written about the growing role of the chief data officer (CDO) and their challenging task of leveraging data science to drive profits. But the job of a CDO is not just about moving the profit meter.

It’s less widely known that they’re also tasked with meeting three other business objectives: driving overall growth, improving efficiencies, and managing risk.

Why? All business activities and processes benefit from these three objectives.

Luckily, we can turn to Morgan Stanley’s CDO, Jeffrey McMillan, for some guidance. I heard him speak at a recent CDO Summit in New York City, where he dispensed sage advice for both practicing and aspiring CDOs.

McMillan suggested these five analytics and data strategy processes:

1. Make sure your data science is aligned with your business strategy.

Yes, good data scientists are hard to find. But McMillan says that rather than spending your energies finding a good data scientist, make sure that your scientist can also think like a sales or business person.

He says, “I would much rather have a very mediocre data scientist, who really understood the business than the reverse. Because the reverse doesn’t help me at all. It’s not about the algorithm, it’s about understanding of the business.”

Once you have your data scientist in place, McMillan is adamant about ensuring this resource is honored and respected.

He explains, “If no one is actually going to do anything with what you recommend doing, they don’t get more resources. There are a lot of things we learn about the world that are interesting that don’t actually change our behaviors. And we need to focus on things that change our behaviors.”

2. Empower the end users to consume data visualizations

According to McMillan, it’s more important to get a little bit of data in the hands of many, than a lot of data in the hands of few. Why? It’s vital to bring data to the decision makers.

“They don’t always want your algorithm,” McMillan says. “They do want information about the business.”

Moreover, his plan for Morgan Stanley to make data accessible to everyone is this: “Our vision is: in the next five years, I want every single employee in our firm to have access to a data visualization tool. And I want 15-20% of the employees to be able to create their own content using their own data visualization tool.”

3. Create the next-best action framework

McMillan has a process that makes decision making vastly better. He calls it the “next-best action framework.” This system learns, evolves, and adjusts in real time.

He describes the process in the following way:

“Every single thing that a human can do at the office gets ingested into a system. It gets modelled against their own expectations, their historical behaviors, their customer’s behaviors, market conditions, and, if you can believe it, 400 other factors.

Then, it gets optimized, based on specific needs of the customer and the employee. Out comes a few ideas, which are scored.

We score whether we should call a customer about a bounced check versus an opportunity to call them about a golf outing. Then, we watch what the customer does.”

According to McMillan, Morgan Stanley has found success with its next-best action approach, delivering real-time investment advice, at scale, to 16,000 advisers.

4. Leverage digital intelligence

When it comes to artificial intelligence, the real value is in the intelligence. In some ways, McMillan prefers the term digital intelligence.

“We’re digitizing human understanding in a way that creates scale,” notes McMillan. “In the end, the winners aren’t going to be the technology providers. They’re going to be organizations that have the knowledge. If you have knowledge and information that’s differentiated, you will do well in the space over time because someone will have to teach the machine how to start. It just doesn’t learn by itself.”

When you can, remember to keep it simple. McMillan reminds us, “No one cares how hard it is for you to do it.”

5. Take a holistic approach to data management

Finally, McMillan warns that your efforts will fail or significantly under-deliver if you don’t take a holistic approach to managing one of your firm’s most valuable resources: your data. To prioritize, he says, focus on the most critical attributes that drive your key business objectives.


Pen Testing Active Directory Environments, Part VI: The Final Case

If you’ve come this far in the series, I think you’ll agree that security pros have to move beyond checking off lists. The mind of the hacker is all about making connections, planning several steps ahead, and then jumping around the victim’s network in creative ways.

Lateral movement through derivative admins is a good example of this approach. In this concluding post, I’ll finish up a few loose ends from last time and then talk about Active Directory, metadata, security, and what it all means.

Back to the Graph

Derivative admin is one of those very creative ways to view the IT landscape. As pen testers we’re looking for AD domain groups that have been assigned to the local administrator group and then discover those domain users that are shared.

The PowerView cmdlet Get-NetLocalGroup provides the raw information, which we then turn into a graph. I started talking about how this can be done. It’s a useful exercise, so let’s complete what we started.

Back in the Acme IT environment, I showed how it was possible to jump from the Salsa server to Avocado and then land in Enchilada. Never thought I’d get hungry writing a sentence about network topology!

With a little thought, you can envision a graph that lets you travel between server nodes and user nodes, where edges are bi-directional. And in fact, the right approach is to use an undirected graph that can contain cycles, making it the inverse of the directed acyclic graph (DAG) that I discussed earlier in the series.

In plain English, this means that if I put a user node on a server’s adjacency list, I also need to put that server on the user’s adjacency list. Here it is sketched out, using arrows to represent adjacency, for the Salsa-to-Enchilada scenario:

salsa -> cal
cal -> salsa, avocado
avocado -> cal, meg
meg -> avocado, enchilada
enchilada -> meg
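To make the bookkeeping concrete, here’s a minimal Python sketch of the same idea (the post’s actual scripts are PowerShell; the node names are the Acme examples above). Adding an edge records it in both directions, which is exactly what makes the graph undirected:

```python
from collections import defaultdict

# Undirected graph as adjacency lists (dict of lists). Adding an
# edge records it in BOTH directions, mirroring the
# salsa -> cal / cal -> salsa bookkeeping described above.
def add_edge(graph, a, b):
    graph[a].append(b)
    graph[b].append(a)

graph = defaultdict(list)
add_edge(graph, "salsa", "cal")
add_edge(graph, "cal", "avocado")
add_edge(graph, "avocado", "meg")
add_edge(graph, "meg", "enchilada")

print(graph["cal"])      # servers adjacent to the user cal: ['salsa', 'avocado']
print(graph["avocado"])  # users adjacent to the server avocado: ['cal', 'meg']
```

Query a user node and you get servers; query a server node and you get users — the lists stay symmetric by construction.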

Since I already explained the amazing pipeline based on using Get-NetLocalGroup with the –Recurse option, it’s fairly straightforward to write out the PowerShell (below). My undirected graph is contained in $Gda.


Unlike the DAG I already worked out, where I can only go from root to leaf and never circle back to a node, this graph can contain cycles. So I need to keep track of whether I previously visited a node. To account for this, I created a separate PS array called $visited. When I traverse this graph to find a path, I’ll use $visited to mark nodes I’ve processed.

I ran my script giving it parameters “salsa”, “enchilada”, and “avocado”, and it displays $Gda containing my newly created adjacency lists.


Lost in the Graph

The last piece now is to develop a script to traverse the undirected graph and produce the “breadcrumb” trail.

Similar to the breadth-first-search (BFS) I wrote about to learn whether a user belongs to a domain group, depth-first-search (DFS) is a graph navigation algorithm with one helpful advantage.

DFS is actually the more intuitive node traversal technique. It’s really closer to the way many people deal with finding a destination when they’re lost. As an experienced international traveler, I’ve often used something close to DFS when my local maps proved less than informative.

Let’s say you get to where you think the destination is, realize you’re lost, and then backtrack to the last known point where map and reality are somewhat similar. You then try another path from that point. And then backtrack if you still can’t find that hidden gelato café.

If you’ve exhausted all the paths from that point, you backtrack yet again and try new paths further up the map. You’ll eventually come across the destination spot, or at least get a good tour of the older parts of town.

That’s essentially DFS! The appropriate data structure is a stack that keeps track of where you’ve been. If all the paths from the current top of the stack don’t lead anywhere, you pop the stack and work with the previous node in the path – the backtrack step.

To avoid getting into a loop because of the cycles in the undirected graph, you just mark every node you visit and avoid those that you’ve already visited.

Finally, whatever nodes are already on the stack is your breadcrumb trail — the path to get from your source to destination.
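The stack-plus-visited idea can be sketched in a few lines of Python (a language-neutral illustration; the post’s working script is PowerShell, and the graph is the Acme example):

```python
# Iterative depth-first search: the stack is the breadcrumb trail,
# popping it is the backtrack step, and the visited set keeps the
# cycles in the undirected graph from looping us forever.
def dfs_path(graph, start, goal):
    visited = {start}
    stack = [start]
    while stack:
        node = stack[-1]
        if node == goal:
            return stack[:]            # breadcrumbs: start -> goal
        unvisited = (n for n in graph.get(node, []) if n not in visited)
        next_node = next(unvisited, None)
        if next_node is None:
            stack.pop()                # dead end: backtrack
        else:
            visited.add(next_node)
            stack.append(next_node)
    return None                        # no path exists

acme = {
    "salsa": ["cal"], "cal": ["salsa", "avocado"],
    "avocado": ["cal", "meg"], "meg": ["avocado", "enchilada"],
    "enchilada": ["meg"],
}
print(dfs_path(acme, "salsa", "enchilada"))
# -> ['salsa', 'cal', 'avocado', 'meg', 'enchilada']
```

Whatever is sitting on the stack at the moment the goal turns up is the breadcrumb trail itself.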

All these ideas are captured in the script below, and you see the results of my running it to find a path between Salsa and Enchilada.


Stacks: it’s what you need when you’re lost.


From Salsa to Enchilada by way of Cal and Meg!

Is this a complete and practical solution?

The answer is no and no. To really finish this, you’ll also need to scan the domain for users who are currently logged into the servers. If these ostensibly local admin users, whose credentials you want to steal, are not online, their hashes are likely not available for passing. You would therefore have to account for this in working out possible paths.

As you might imagine, in most corporate networks, with potentially hundreds of computers and users, the graph can get gnarly very quickly. More importantly, just because you found a path doesn’t necessarily mean it’s the shortest path. In other words, my code above may chug along and find a completely impractical path that involves hopping between, say, twenty or thirty computers. It’s possible but not practical.

Fortunately, Andy Robbins worked out far prettier PowerShell code that addresses the above weaknesses in my scripts. Robbins uses PowerView’s Get-NetSession to scan for online users. And he cleverly employs a beloved computer science 101 algorithm, Dijkstra’s Shortest Path, to find the optimal path between two nodes.
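To sketch the shortest-path idea, here’s a plain unit-weight Dijkstra in Python. To be clear, this is not Robbins’ code — his PowerShell also folds in the Get-NetSession results — just an illustration of the algorithm on the Acme graph:

```python
import heapq

# Unit-weight Dijkstra over the Acme adjacency lists. With every
# hop costing 1, the result is the path with the fewest hops.
acme = {
    "salsa": ["cal"], "cal": ["salsa", "avocado"],
    "avocado": ["cal", "meg"], "meg": ["avocado", "enchilada"],
    "enchilada": ["meg"],
}

def shortest_path(graph, start, goal):
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue                   # stale heap entry
        for neighbor in graph.get(node, []):
            nd = d + 1                 # every hop costs 1
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(heap, (nd, neighbor))
    if goal not in dist:
        return None                    # unreachable
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

print(shortest_path(acme, "salsa", "enchilada"))
```

The same structure generalizes if you ever wanted to weight hops unevenly rather than counting each as one.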

Pen Testers as Metadata Detectives and Final Thoughts

Once I stepped back from all this PowerShell and algorithms (and had a few appropriate beverages), the larger picture came into focus.

Thinking like hackers, pen testers know that to crack a network they’ve landed in, they need to work indirectly, because there’s rarely the equivalent of a neon sign pointing to the treasure.

And that’s where metadata helps.

Every piece of information I leveraged in this series of posts is essentially metadata: file ACLs, Active Directory groups and users, system and session information, and other AD information scooped up by PowerView.

The pen tester, unlike the perimeter-based security pro, is incredibly clever at using this metadata to find and exploit security gaps. They’re masters at thinking in terms of connections, moving around the network with the goal of collecting more metadata, and then with a little luck, they can get the goodies.

I was resisting, but you can think of pen testers as digital consulting detectives — Sherlock Holmes, the Benedict Cumberbatch variant, that is, but with, we hope, better social skills.

Here are some final thoughts.

While pen testers offer valuable services, the examples in this series could be accomplished offline by regular folks — IT security admins and analysts.

In other words, the IT group could scoop up the AD information, do the algorithms, and then discover if there are possible paths for both the derivative admin case and the file ACL case from earlier in the series.

The goal for them is to juggle Active Directory users and groups into a configuration that greatly reduces the risk of hackers gaining user credentials.

And ultimately prevent valuable content from being taken, like your corporate IP or millions of your customers’ credit card numbers.


Connecting Your Data Strategy to Analytics: Eight Questions to Ask

Big data has ushered in a new executive role over the past few years. The chief data officer or CDO now joins the C-level club, tasked with leveraging data science to drive the bottom line. According to a recent executive survey, 54% of firms surveyed now report having appointed a CDO.

Taking on the role is one thing; figuring out how to be successful is another.

“A CDO’s job starts like this: a CEO, CFO or maybe a CMO says, ‘We want our company to be more data driven, and we want to start capitalizing on these new technologies. Go figure out what that means for us.’ And that’s quite often the beginning point for a chief data officer,” CDO Richard Wendell explained to me during an interview.

One way CDOs are approaching this challenge is by connecting the organization’s data strategy with their analytics tools, and then acting on the resulting new idea or information.

Jeffrey McMillan of Morgan Stanley is one such CDO. I heard him speak at a recent CDO Summit in New York City. While his focus is primarily on financial services, McMillan had invaluable, hard-earned wisdom that all CDOs can learn from.

McMillan’s first step to enlightenment was when he realized several years ago that technology alone was an illusion. He noted, “You’re just spending money on your clusters, you put your project plan together, you’re all excited you got your Hadoop infrastructure cluster in place, your ETL tools, and you have your data governance in place. And you know what, nothing really changes.”

The Magic Remedy? Ask These Eight Questions

#1 What is your business strategy?

To McMillan, a data strategy doesn’t exist.

He emphasizes that there is only a business strategy, and data and analytics are just tools: “A data strategy isn’t going to generate a single incremental dollar for your business, it’s an enabler. No different than your web strategy or your workflow strategy. It’s just another component to your solution.”

#2 – Have you defined and communicated key objectives throughout your organization?

McMillan advises that if you can’t answer the first two questions, stop and don’t spend more money: “You’re going to be wasting a lot of time, money and resources solving for a problem when you don’t even know what the problem is.”

#3 – What is the role of data and analytics in driving your strategy?

Sometimes your data analytics is transformative, and sometimes it’s marginal. And if it’s not actually moving the needle, ask: ‘What are your goals and objectives?’; ‘What are you trying to solve here?’; ‘And how does your data and analytics strategy work to do that?’

#4 – Are people really buying in?

McMillan gets it. Getting people to buy in to your idea is going to be difficult.

Your CEO and your COO or the head of your business might say, ‘Yeah, I hear ya, the world is changing but we’ve got limited resources, we’re not sure how to engage.’

And in some cases, they’re threatened by changes.

#5 – What new roles need to be created and how do the existing roles fit into this?

There are going to be new roles that will be generated from the work you do. How does the Chief Data/Analytics Officer fit into the organization?

#6 – What’s already going on within your organization that you can leverage?

There’s going to be extremely valuable work that happens but unfortunately never gets the exposure it deserves. It’s the age-old complaint from IT: “I only get noticed when something goes wrong!”

McMillan posits, “How do you enable that guy who’s working on the weekend, doing some interesting stuff with Python code that’s generating some interesting value? How do you get him or her exposed to an opportunity?”

#7 – Do you have the right data and is it of sufficient quality?

McMillan warns, “Honestly, you’re going to fail if you don’t have data quality.”

#8 – How do you define success?

The best part about analytics is that you can actually define and measure success.

And finally, you should know that McMillan spends 90% of his time focused on questions one and two, because he thinks that “if you don’t have one or two right, everything else is pretty much useless.”

In a future post, we’ll also review how he drives growth, improves efficiency and manages risk.

Pen Testing Active Directory Environments, Part V: Admins and Graphs

If you’ve survived my last blog post, you know that Active Directory group structures can be used as powerful weapons by hackers. Our job as pen testers is to borrow these same techniques — in the form of PowerView — that hackers have known about for years, and then show management where the vulnerabilities live in their systems.

I know I had loads of fun building my AD graph structures. It was even more fun running my breadth-first-search (BFS) script on the graph to quickly tell me who the users are that would allow access to a file that I couldn’t enter with my current credentials.


The “Top Secret” directory on the Acme Salsa server was off limits with “Bob” credentials but available to anyone in the “Acme-Legal” group. The PowerShell script I wrote helped me navigate the graph and find the underlying users in Acme-Legal.

Closing My Graphs

If you think about it, instead of always having to search the same groups to find the leaf nodes, why not just build a table that has this information pre-loaded?

I’m talking about what’s known in the trade as the transitive closure of a graph. It sounds nerdier than it really needs to be: I’m just finding everything reachable, directly and indirectly, from any of the AD nodes in my graph structure.

I turned to brute force to solve the closure problem. I simply modified my PowerShell scripts from last time to do a BFS from each node or entry in my lists and then collect everything I visited. My closed graph is now contained in $GroupTC (see below).
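Here’s the brute-force closure idea as a Python sketch, not the PowerShell in the post; the group and user names below are made-up stand-ins for the Acme AD structure:

```python
from collections import deque

# Brute-force transitive closure: run a BFS from every node and
# record everything reachable -- the $GroupTC idea from the post.
def transitive_closure(graph):
    closure = {}
    for start in graph:
        seen, queue = set(), deque([start])
        while queue:
            node = queue.popleft()
            for child in graph.get(node, []):
                if child not in seen:
                    seen.add(child)
                    queue.append(child)
        closure[start] = seen
    return closure

groups = {
    "Acme-VIPs": ["Acme-Legal", "ted"],  # a group nested inside a group
    "Acme-Legal": ["bob", "cal"],
}
print(transitive_closure(groups)["Acme-VIPs"])
```

Note the closure still carries intermediate groups such as Acme-Legal; stripping those out to leave only leaf users is the extra filtering step the post gets to shortly.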



Before you scream into your browsers, there are better ways to do this, especially for directed graphs, and I know about the node-sorting approach. The point here is to transcend your linear modes of thinking and view the AD environment in terms of connections.

Graph perfectionists can check this out.

Here’s a partial dump of my raw graph structure from last time:


And the same information, just for “Acme-VIPs”, that’s been processed with my closure algorithm:


Notice how the Acme-VIPs list has all the underlying users! If I had spent a little more time, I’d eliminate every group in the search path from the list and keep just the leaf nodes — in other words, the true list of users who can access a directory with Acme-VIPs access control permission.

Still, what I’ve created is quite valuable. You can imagine hackers using these very same ideas. Perhaps they log in quickly to run PowerView scripts to grab the raw AD group information and then leave the closure processing for large AD environments to an offline step.

There is an Easier Way to Do Closure

We can all agree that knowledge is valuable just for knowledge’s sake. And even if I tell you there’s a simpler way to do closure than I just showed, you’ll still have benefited from the deep wisdom gained from knowing about breadth first searches.

There is a simpler way to do closure.

As it turns out, PowerView cmdlets with a little extra PowerShell sauce can work out the users belonging to a top-level AD group in one long pipeline.

Remember the Get-NetGroupMember cmdlet that spews out all the direct underlying AD members? It also has a –Recurse option that performs the deep search that I accomplished with the breadth-first-search algorithm above.

To remove the AD groups in the search path that my algorithm didn’t, I can filter on the IsGroup field, which conveniently has a self-explanatory name. And since users can be in multiple groups (for example, Cal), I want a unique list. To rid the list of duplicates, I used PowerShell’s Sort-Object -Unique.

Now for the great reveal: my one line of PS code that lists the true users who are underlying a given AD Group, in this case Acme-VIPs:


This is an amazing line of PowerShell for pen testers (and hackers as well), allowing them to quickly see which users are worth going after.

Thank you Will Schroeder for this PowerView miracle!

Commercial Break

It’s a good time to step back, take a deep breath, and look at the big picture. If you — IT security or admin team — don’t do the work of minimizing who has access to a directory, the hackers will effectively do it for you. I’ve just shown that with PowerView, they have the tools to make this happen.

Of course, you bring in pen testers to discover these permission gaps and other security holes before the hackers.

Or there is another possibility.

Our blog’s generous sponsor, Varonis Systems, has been making beautifully crafted data access and governance solutions since Yaki Faitelson and Ohad Korkus set up shop in 2004. Their DatAdvantage solution has been helping IT admins and security pros find the underlying users who have access to files and directories.

Varonis: For over ten years, they’ve been saving IT from writing complicated breadth-first-search scripts!

Taking the Derivative of the Admin

Back to our show.

Two blog posts ago, I began to show how PowerView can help pen testers hop around the network. I didn’t go into much detail.

Now for the details.

A few highly evolved AD pen testers, including Justin Warner, Andy Robbins, and Will Schroeder, worked out the concept of “derivative admin”, which is a more efficient way to move laterally.

Their exploit hinges on two facts of life in AD environments. First, many companies have grown complex AD group structures, and they often lose track of who’s in which group.

Second, they configure domain-level groups to be local administrators of user workstations or servers. This is a smart way to centralize local administration of Windows machines without requiring the local administrator to be a domain-level admin.

For example, I set up special AD groups Acme-Server1, Acme-Server2, and Acme-Server3 that are divided up among the Acme IT admin team members — Cal, Meg, Rodger, Lara, and Camille.

In my simple Acme network, I assigned these AD groups to Salsa (Acme-Server1), Avocado (Acme-Server3), and Enchilada (Acme-Server2) and placed them under the local Administrators group (using lusrmgr.msc).

In large real-world networks, IT can deploy many AD groups to segment the Windows machines in large corporate environments — it’s a good way to limit the risks if an admin credential has been taken.

In my Acme environment, Cal, who’s a member of Acme-Server1, uses his ordinary domain user account to log into Salsa and then gain admin privileges to do power-user-level work.

By using this approach, though, corporate IT may have created a trap for themselves.


There’s a PowerView command called Get-NetLocalGroup that discovers these local admins on a machine-by-machine basis.


Got that?

Get-NetLocalGroup effectively tells you that specific groups and users are tied to specific machines, and these users are power users!

So as a smart hacker or pen tester, you can try something like the following as a lateral move strategy. Use Get-NetLocalGroup to find the groups that have local admin access on the current machine. Then do the same for other servers in the neighborhood to find those machines that share the same groups.

You can dump the hashes of users in the local admin group of the machine you’ve landed on and then freely jump to any machine that Get-NetLocalGroup tells you has the same domain groups!

So once I dump and pass the hash of Cal, I can hop to any machine that uses Acme-Server1 as local admin group.

By the way, how do you figure out definitively all the admin users that belong to Acme-Server1?

Answer: use the one-line script that I came up with above that does the drill-down and apply it to the results of Get-NetLocalGroup.
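As an aside, that drill-down is essentially a breadth-first expansion of nested groups. Here’s a rough sketch of the idea in Python rather than PowerShell (the group data below is invented purely for illustration; in real life, PowerView’s cmdlets do the actual AD querying):

```python
from collections import deque

# Hypothetical AD data: group name -> members (users or nested groups).
groups = {
    "Acme-Server1": ["Cal", "Acme-Helpdesk"],
    "Acme-Helpdesk": ["Meg", "Rodger"],
}

def expand_members(group):
    """Breadth-first drill-down: resolve nested groups to the users inside."""
    users, queue, seen = set(), deque([group]), set()
    while queue:
        name = queue.popleft()
        if name in seen:
            continue                  # guard against circular nesting
        seen.add(name)
        for member in groups.get(name, []):
            if member in groups:
                queue.append(member)  # a nested group: keep drilling
            else:
                users.add(member)     # a user: record it
    return users

print(sorted(expand_members("Acme-Server1")))  # ['Cal', 'Meg', 'Rodger']
```

Apply the same expansion to each group name that Get-NetLocalGroup returns and you have the complete roster of underlying admin users for a machine.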

And, finally, where does derived or derivative admin come into play?

If you’re really clever, you might make the safe assumption that IT occasionally puts the same user in more than one admin group.

As a pen tester, this means you may not be restricted to only the machines that the users in the local admin domain group of your current server have access to!

To make this point, I’ve placed Cal in Acme-Server1 and Acme-Server2, and Meg in Acme-Server2 and Acme-Server3.



Lateral movement by exploiting hidden connections in the Acme network.

If you’re following along at home, that means I can use Cal to hop from Salsa to Avocado. On Avocado, I use Meg’s credentials to then jump from Avocado to Enchilada.

On the surface it appears that my teeny three-machine network was segmented with three AD groups, but in fact there were hidden connections — Cal and Meg — that broke through these surface divisions.

So Cal in Acme-Server1 can get to an Acme-Server3 machine, and is ultimately considered a derivative admin of Enchilada!

Neat, right?

If you’re thinking in terms of connections, rather than lists, you’ll start seeing this as a graph search problem that is very similar in nature to what I presented in the last post.

This time, though, you’ll have to add the server names into the graph along with the users. In our make-believe scenario, I’ll have adjacency lists that tell me that Salsa is connected to Cal; Avocado is connected to Cal, Meg, Lara, and Rodger; and Enchilada is connected to Meg and Camille.

I’ve given you enough clues to work out the PowerView and PowerShell code for the derivative admin graph code, which I’ll show next time.

As you might imagine, there can be lots of paths through this graph from one machine to another. There is a cool idea, though, that helps make this problem easier.
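To make the graph framing concrete, here’s a small sketch in Python rather than PowerView (the adjacency data is hard-coded from my make-believe Acme network; the real thing would build it from Get-NetLocalGroup output):

```python
from collections import deque

# Machines mapped to the users with local admin rights on them.
# This is the made-up Acme data, not real tool output.
admins = {
    "Salsa": {"Cal"},
    "Avocado": {"Cal", "Meg", "Lara", "Rodger"},
    "Enchilada": {"Meg", "Camille"},
}

def hop_path(start, target):
    """BFS over machines: you can hop A -> B if some admin of A is also an admin of B."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        here = path[-1]
        if here == target:
            return path
        for machine, users in admins.items():
            if machine not in seen and admins[here] & users:  # shared admin user
                seen.add(machine)
                queue.append(path + [machine])
    return None  # no chain of shared admins reaches the target

print(hop_path("Salsa", "Enchilada"))  # ['Salsa', 'Avocado', 'Enchilada']
```

A plain breadth-first search like this also happens to return the shortest hop chain first, which is exactly what an attacker wants.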

In the meantime, if you want to cheat a little to see how the pros worked this out, check out Andy Robbins’ code.

How to set up an SPF record to prevent spam and spear phishing


Some things go together like peanut butter and jelly: delicious, delightful and a good alternative to my dad’s “Thai-Italian Fusion” dinner experiments as a kid.

When other things are combined it can be terrifying: like SPF records and spear-phishing.

While the nuances of something as seemingly mundane as SPF DNS records can seem like a dry, boring topic for executives in your organization, you may be able to get them to pay attention, as they are the most likely targets of spear-phishing attacks.

SPF records don’t just keep your C-suite safe; they do so much more. Like what, you say? Here’s just the tip of the iceberg on the benefits of SPF records:

  • Prevent breaches
  • Are cheap (free!) to set up
  • Prevent the bad PR that comes from your domain being used to send spam
  • Help establish your organization’s identity as a legitimate email sender

With this in mind, let’s dig into more of the how and why of these incredibly useful DNS records.

What is an SPF record?

The Sender Policy Framework (SPF) is an anti-spam system built on top of the existing DNS and Email Internet Infrastructure.

Spammers were impersonating domains to make offers look like they were coming from Amazon or other reputable places, but when you would click through they’d steal your credit card and run up a bill at the local Chuck E Cheese (which is where I presume mob members go to eat).

What does an SPF record do?

An SPF record defines which IP addresses are allowed to send email on behalf of a particular domain. This is trickier than it sounds, as many companies have multiple Email Service Providers for different purposes.

Common different uses:

  • Transactional emails from applications
  • Internal notifications
  • Internal email
  • External email
  • PR/Marketing emails

Further complicating the situation: while a company might have a name like SafeEmailSender, there is nothing stopping them from using a completely different email-sending domain.

What does an SPF record prevent?

Having strict SPF rules allows you to control who can send email on behalf of your domain. A good way to think of this is the reverse: who would gain by sending email on behalf of your domain?

What is phishing?

Phishing is where a con artist sends mass emails that appear to be from a legitimate source. Most often impersonated are banks, credit card companies, and money-handling corporations (like PayPal).

From the phisher’s point of view, they would like to appear as much as possible like the company they are pretending to be. A key aspect of this is making their email appear to come from the genuine source, and definitively not from my clueless neighbor’s malware-riddled Windows XP box.

In recent years, data breaches have served as a prime resource for phishers as they are able to create a more convincing email as they have more details about targets.

What is spear phishing?

Spear phishing is similar in intent to standard phishing attempts: trick people into thinking a fake email message is legitimate. What differs is the audience.

With spear phishing it’s an audience of one.

A canonical example of this is the February 2016 spear phishing attack on a Snapchat payroll employee:


What’s the difference between an SPF record and an SPF rule?

All DNS entries are “records”; most typically, a domain has A and CNAME records for its website and some MX records to direct where email traffic should go.

An SPF record is what holds the rule. The mere presence of an SPF record doesn’t protect anything. It’s like a padlock left unclasped: it could protect something, but whether it actually does is another matter.

What type of DNS record is an SPF record?

If you thought that the people who invented DNS were smart, you are correct. What is somewhat surprising is that they were also wise. Wise enough to know that while their DNS system could (with a few bumps along the way) scale from a dozen computers to the millions online today, there would be new, unexpected uses for DNS, and there should be an option to handle them. Thus the TXT record.

TXT (text) records are used for all sorts of interesting DNS purposes, like proving that you own a domain for SSL issuing purposes, up to and including ASCII art self portraits:


So, it’s no surprise that when new functionality was needed for the Sender Policy Framework, the tool of choice was DNS TXT records.

While this historical context is somewhat interesting (come on, a guy put a selfie in a DNS record; that deserves some praise), on a more practical note it will also save you from fruitlessly looking for an “SPF DNS Record Type” in the dropdown of your preferred DNS service. You’d choose TXT and enter the rules.

What are the components of an SPF record?

There are two primary components of an SPF record:

Mechanisms: What is being matched.

Qualifiers: What action should be taken if the mechanism is matched.

What is an SPF Mechanism?

An SPF mechanism is just a group of IP addresses. The nuances of exactly how that group is defined differ a bit between the mechanism types, but at heart the question is always the same: does the IP address sending email belong to one of these groups?

An SPF mechanism doesn’t have an opinion on anything. An IP address matching a mechanism doesn’t automatically mean it’s good or bad, just that it matched and that the qualifier attached to the mechanism can now be evaluated.

What are the SPF Mechanism Types?

The mechanism types are:

ip4 and ip6: Does the client IP match an address in this range?

a, mx, and ptr: Does the client IP match an IP address that one of these other DNS records for the domain resolves to?

include and exists: Does the client IP match one of the SPF rules at this OTHER domain? You typically see these when using external email-sending services like marketing automation suites and transactional email systems.

all: The catch-all, for when the client IP address didn’t match any of the other mechanisms.


What are the SPF Qualifier Types?

There are four SPF qualifier types that act upon the SPF Mechanisms.

+ If the client IP matches the mechanism (IP matching group) that follows, it is allowed to send email for this domain.

Example: v=spf1 +a

This example means “If the IP address that any DNS a record for this domain resolves to matches the client IP address, then it is allowed to send email for this domain.”

- If the client IP matches the mechanism that follows, it is NOT allowed to send email for this domain.

~ If the client IP matches the mechanism that follows, it is allowed to send email, but is marked as potentially suspicious (SoftFail). The SoftFail qualifier is often used when first implementing SPF rules, as you’re less likely to accidentally mark all legitimate email emanating from your domain as spam.

In production, the final qualifier+mechanism pair is typically ~all, which allows the earlier rules to positively match while soft-failing everything else.

? Neutral – pass but don’t positively or negatively identify.

Other than “+”, which definitively marks an email as properly coming from your domain, the other qualifiers can be thought of as “hints” that an inbound email server can use in its spam calculations:

+ This is our email
? Maybe our email?
~ Pretty sure not our email
- Really not our email
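Putting mechanisms and qualifiers together, a complete record might look like this (a hypothetical example; the include domain is made up for illustration):

```
v=spf1 ip4:192.0.2.0/24 a include:spf.mailprovider.example ~all
```

Read left to right: mail from the listed IP range, from the hosts the domain’s A records resolve to, or from the third-party provider’s ranges passes; anything else gets a SoftFail.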

What’s the best practice method of adding a new SPF record into your DNS Records?

A key aspect of DNS is properly manipulating Time To Live (TTL) settings. Please check out our Definitive Guide to DNS TTL Settings for the optimum method of adding and modifying DNS records.

What order should SPF mechanisms be listed?

SPF records are evaluated left to right within the record. Matching a mechanism group immediately invokes the qualifier action and no further rules are matched.

In general, you should list your IP address mechanisms first, then your domain-based mechanisms, then your includes, and finally your all mechanism. This ordering roughly aligns with how quickly each rule can be evaluated.
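As a rough illustration of this left-to-right, first-match behavior, here’s a toy evaluator in Python. It’s a simplified sketch handling only the ip4 and all mechanisms, not a full RFC 7208 implementation:

```python
from ipaddress import ip_address, ip_network

def evaluate_spf(record: str, client_ip: str) -> str:
    """Return the qualifier result for the first mechanism that matches."""
    results = {"+": "pass", "-": "fail", "~": "softfail", "?": "neutral"}
    for term in record.split()[1:]:           # skip the leading "v=spf1"
        qualifier = term[0] if term[0] in results else "+"  # default is "+"
        mechanism = term.lstrip("+-~?")
        if mechanism.startswith("ip4:"):
            if ip_address(client_ip) in ip_network(mechanism[4:]):
                return results[qualifier]     # first match wins; stop here
        elif mechanism == "all":              # catch-all: always matches
            return results[qualifier]
    return "neutral"                          # no mechanism matched at all

print(evaluate_spf("v=spf1 ip4:192.0.2.0/24 ~all", "192.0.2.55"))    # pass
print(evaluate_spf("v=spf1 ip4:192.0.2.0/24 ~all", "198.51.100.7"))  # softfail
```

Notice that once the ip4 range matches, the ~all at the end is never even looked at — exactly the first-match semantics described above.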

What evaluates SPF?

It’s important to keep in mind that the receiving email server, wherever you are sending email, is ultimately what reads your SPF record. So if you send an email to someone, it is their mail server that reads the SPF record for your domain, compares the sending IP address to the rules, and makes a determination about whether or not the email should be delivered to its intended recipient.

Why use SPF and not another email security standard?

Spam and impersonation have been problems on the Internet since it was invented, so why SPF and not one of the many different standards that have come before?

In contrast to previous security solutions, SPF is reasonably fast to execute and isn’t dependent upon the actual content of the email being received. An email with a 15MB video attached to it can be evaluated as quickly as a one-sentence status update, since only the headers of the email are examined. Many previous standards relied on cryptographically signing the bodies of emails, making them unwieldy at best and a potential vector for denial-of-service attacks at worst.

How do I lookup the SPF records for my Domain?

On OSX and Linux systems you can use the dig command to list the TXT records for your domain, among which your SPF record (if any) will appear.

dig -t txt +short example.com

On Windows you can use the nslookup utility:

nslookup -q=TXT example.com

I recommend looking up the SPF entry for a large email provider’s domain, as you can very easily pick out the different SPF domains they include as well as the third-party services permitted to send email on their behalf.


Pen Testing Active Directory Environments, Part III:  Chasing Power Users


For those joining late, I’m currently pen testing the mythical Acme company, now made famous by a previous pen testing engagement (and immortalized in this free ebook). This time around I’m using two very powerful tools, PowerView and crackmapexec, in my post-exploitation journey into Acme’s IT.

Before we get into more of the details of hunting down privileged users, I wanted to take up one point regarding Active Directory mitigations that I touched on last time.

Protecting the VIPs

As we saw, PowerView cmdlets give pen testers and hackers incredibly valuable information about the user population. They do this by pulling attributes out of Active Directory, some of which can then be used to launch a phishing or whaling attack.

So you’re wondering whether we can put restrictions on who gets to see the data? Or on what data is made available in the first place?

Yes and yes.

For the purposes of this post, I’m proposing a quick fix. We’ll simply prevent some key AD attributes from being displayed in PowerView’s Get-NetUser cmdlet.

We really don’t want to make it easy for hackers to access phone numbers, mail addresses, and other personal information of the C-suite.

These folks may not have customer accounts and credit card numbers in their files, but they surely have access to key corporate IP – contracts, plans, pending deals, etc.

The answer can be found in the Active Directory Users and Computers interface.

Our first priority should be to secure Ted Bloatly, Chief Honcho (CEO) of Acme.

If we click on his Security tab, we can view a list of broad AD attribute permissions — personal, phone and email — that we can allow or deny access to.

For Mr. Bloatly, I’ll simply deny access to his contact information (see below) for anyone in the Acme domain.


I really don’t want hackers and even employees to get this kind of sensitive data.  If you want to know anything about Mr. Bloatly, you’ll have to find out the old-fashioned way, by contacting his loyal personal assistant, Smithers.

Sure we can be more granular about who gets to see this information. Clicking on “Advanced” lets you enable certain groups to view Bloatly’s contact information: for example, I could allow access for just the Acme-VIPs group, the C-levels of the company.

In any case, if we go back to the Salsa server that we landed on, and run Get-NetUser, we’ll see that his postal address and the personal info about his bowling habits no longer show up.


We’ll delve into other ways to restrict access to AD attributes later on in this series.

The Credential Hunt

Building on the scenario from last time, I’m back on Salsa with Lele’s credential. Lele, like her friend Bob, is in the Acme-Serfs group.

Let’s rerun Get-NetComputer.


You’re probably thinking, as I did when I set this up, that Enchilada is where the important people hang out. “Big Enchiladas”, right?

Let’s see if Lele’s credentials will allow me access to it. One quick way to check is to use crackmapexec and point it at the server you’re trying to access—it will let you know whether you can log in (below).


My pen testing senses are tingling. I’m denied access to Enchilada, but allowed access to Taco and Salsa.

It’s like the equivalent of a sign that says “Private Property: Keep Out!”. You know there has to be something valuable on the Enchilada server.

We’re now at the point where you have to find the users who’ll get you what you want – access to Enchilada.

Like last time, we can run Get-NetGroupMember on Acme-VIPs. I’ve found two power users now: Ted Bloatly and Lara Crasus. (fyi: I added VIP Lara since the previous post.)

What you can hope for is that one of these VIPs will let you log on to the Salsa machine. Then we can grab the hashes, and use them with crackmapexec to get into Enchilada.

By the way, this brings up an important point about risk assessments regarding user accounts: you have to be very careful about assigning user account access rights.

One common technique is to assign multiple accounts to the same user, with each account having its own privilege level. This avoids the problem of an over-privileged account logging into a less-privileged user’s machine, thereby leaving it open to credential theft and pass-the-hash.

So let’s say Acme hasn’t learned this lesson, and Ted Bloatly occasionally uses his one AD account to log into the Salsa server used by the plebians.

We can set an alarm.

Enter something like Invoke-UserHunter -GroupName Acme-VIPs on the command line, then check the output and repeat. Obviously, we can do a better job of fine-tuning and automating. I’ll leave that as a homework assignment.

Once we find an Acme-VIPs member, we dump the hash using the --lsa option for crackmapexec and then pass the hash using the -H option to log into the Enchilada server.

PowerShell Empire and Reverse Shells

One aspect of hopping around a domain that’s worth talking about is the topic of getting shell connections. So far I’ve been cheating a little bit in showing screen output from the actual server.

In real life, hackers and pen testers are using reverse-shells — remember those? — to see what’s going on from a remote terminal.

In my last pen testing series, getting a reverse shell from a PowerShell environment was a bit rocky. In fact, I didn’t really have a good way of doing this.

And then I discovered PowerShell Empire.

It describes itself as having the ability “to run PowerShell agents without needing powershell.exe, rapidly deployable post-exploitation modules ranging from key loggers to Mimikatz… all wrapped up in a usability-focused framework”

Amen, and it lives up to its billing. This is powerful stuff and I attained beautiful remote PowerShell access to the Acme environment.

If you want to play around with Empire for yourself, you can download it from GitHub here. With a little bit of struggle (and two aspirins later), I installed it on an Ubuntu Linux server in my AWS environment.

In terms of its remote PowerShell powers, it allows you to create a Listener, which lives at one end of the connection. And then you grab some shell code to run on the victim’s machine. Ultimately, it launches an Agent, which is what you interact with in Empire.



PowerShell Empire: multiple agents each with its own shell connection.  Shellcode runs on the target computer. Awesome power.

Effectively, we’re implementing the PowerShell version of the reverse shell that I previously accomplished with ncat.

You can have many agents running at a time and interact concurrently with each PowerShell session on the target machines.


PowerShell connection back to Salsa!

This is very powerful, and I’m only scratching the surface.

Let’s take a breath.

In my next post, we’ll go into more detail for this Empire-based reverse PowerShell technique, and demonstrate how you can use it to hop around the Acme domain using crackmapexec to inject the shellcode for the next hop.

Yes, we’ll get back into exploiting the information in Active Directory groups, and in particular use the relationships in it to guide which users to chase down. This is referred to as derived or derivative admin.

I’ll leave you with this interesting observation made by (I believe) Will Graebner: pen testers think in graphs, IT people think in terms of lists.

Meditate on that thought till next time.

New Mirai Attacks, But It’s Still About Passwords


Last week, Mirai-like wormware made the news again with attacks on ISPs in the UK. Specifically, customers of TalkTalk and PostOffice reported Internet outages. As with the last Mirai incident involving consumer cameras, this one also took advantage of an exposed router port.

And by an amazing coincidence, some of the overall points about these ISP incidents were covered in two recent posts of ours: injection exploits are still a plague, and consumers should learn how to change their router passwords.

It’s Mirai, But It’s Not

This recent Mirai infestation started last month in Germany with perhaps up to 900,000 Deutsche Telekom customers experiencing connectivity problems with their routers.

And then it spread to the UK. But on closer analysis, security pros began to notice differences this time around.

The new variant of the Mirai malware — called Annie — probes on port 7547, not on port 23 (telnet). As every network and telecom wonk knows, that’s the port the ISPs can use to manage their routers through the obscure TR-064 protocol.

To summarize the research and analysis I’ve looked at, the attackers were able to use the protocol directly to snatch the router’s WiFi password along with the wireless network name or SSID.

To make matters worse, the attackers found a bad implementation of another TR-064 command that let them slip in or inject their own shell commands.

The shell commands do the heavy lifting by downloading and executing binaries from the attackers’ C2 servers, which then start the process all over again to spread the Annie worm.

The Badcyber blog has a nice write up of all this.

And the Goal Is …

By the way, all the above access did not require any authentication — no user name, no password.

Has anyone at the ISPs or the router manufacturers even heard about Privacy by Design?

I’m guessing not.

In any case, it seems the outages experienced by customers were a result of the extra traffic on the ISP’s network as more and more routers saw incoming requests on their ports.

From what we currently know, the Annie wormware leaves the routing function alone.

In other words, the DDoS aspects may have been an unintended consequence of Annie. There’s also speculation that several different cybergangs were involved, with some using another Mirai-like variant.

It was a cyber free for all.

The ultimate purpose, though, is a little unclear — other than showing that it’s possible to exploit vulnerable routers on an enormous scale.

TalkTalk has responded by fixing the TR-064 bug with new firmware that disables access on the open port. It also resets the WiFi password to the factory default setting — the one on the back of the box.

As I mentioned in the Mirai attack on cameras, it’s a good idea to examine your firewall port settings: if you can’t justify remote administration or other special features, simply remove all the public-facing ports.

If only average customers (long painful sigh) were better at WiFi administration, this whole attack would have been greatly diminished.

… Passwords

Ken Munro, Pen Test Partners’ brilliant founder — I’m a fan — noticed a flaw in TalkTalk’s initial response. Since most customers never bother to change their WiFi passwords from the factory default, the passwords scooped up by the hackers will still be current.

Uh oh.

Attackers could use the stolen SSIDs — see our interview with Ken — to geo-locate the routers and then engage in wardriving.


With WiFi passwords, SSID names, and router locations, you can go into the hacking business.

So it’s possible that the WiFi passwords were the real point of this attack, and cybergangs will be reselling their massive password list on the darkweb.

Putting on my black hat, I would charge premium prices for passwords associated with execs, VIPs, and other whales.

Do This Right Now

What’s the take-away?

If you’re a TalkTalk customer, you should change your password and set all your devices to use that new password.

For the rest of us, it’s probably not a bad idea to also change WiFi passwords every so often, and please use horse-battery-staple techniques.

For enterprise IT folks who think none of this has any value to them ‘cause it’s consumer-related, remember that injection attacks and default-itis are problems for you as well.

Pen Testing Active Directory Environments, Part II: Getting Stuff Done With PowerView


In my last post, I began discussing how valuable pen testing and risk assessment work can be done just by gathering information from Active Directory. I also introduced PowerView, which is a relatively new tool for helping pen testers and “red teamers” explore offensive Active Directory techniques.

To get more background on how hackers have been using and abusing Active Directory over the years, I recommend taking a look at some of the slides and talks by Will Schroeder, who is the creator of PowerView.

What Schroeder has done with PowerView is give those of us on the security side a completely self-contained PowerShell environment for seeing AD environments the way hackers do.

100% Raw PowerView

Last time I was crowing about crackmapexec, the Swiss-army knife pen testing tool, which among its many blades has a PowerView parameter. I also showed how you can input PowerView cmdlets directly.

However, the really interesting things you can do with PowerView involve chaining cmdlets together in a PowerShell pipeline. And—long sigh—I couldn’t figure out how to get crackmapexec to pipeline.

But this leads to a wondrous opportunity: download the PV library from GitHub and directly work with the cmdlets.

And that’s what I did.

I uploaded PowerView’s Recon directory and placed it under Documents\WindowsPowerShell\Modules on one of the servers in my mythical Acme company environment. You then have to enter an Import-Module Recon cmdlet in PowerShell to load PowerView—see the instructions on the GitHub page.

And then we’re off to the races.

Classy Active Directory

I already showed how it was possible to discover the machines on the Acme network, as well as who was currently logged in locally using a few crackmapexec parameters.

Let’s do the same thing with PowerView cmdlets.

For servers in the domain, the work is done by Get-NetComputer.


Notice how this is a little more useful than the Nessus-like output of crackmapexec—we get the more nutritious AD server names and domain.

To find all the user sessions on my current machine, I’ll use the very powerful cmdlet Invoke-UserHunter. More power than I really need for this: it actually tells me all users currently logged in on all machines across the domain.

But this allows me then to introduce a PowerShell pipeline. Unlike in a Linux command shell, the output of a PowerShell cmdlet is an object, not a string, and that brings in all the machinery of the object-oriented model—attributes, classes, inheritance, etc. We’ll explore more of this idea below.

I present for your amusement the following pipeline. It uses the Where-Object cmdlet, aliased by the PowerShell “?” symbol, and filters out only those user objects where the ComputerName AD attribute is equal to “Salsa”, which is my current server.


Note: the $_. is the way PowerShell lets you refer to a single object in a stream or collection of objects.

To see who’s on the Taco server, I did this instead:


Interesting! I found an Administrator.

One of the goals of pen testing is hunting down admins and other users with higher privileges. Invoke-UserHunter is the go-to cmdlet. Let’s store this thought away for the next post.

Another good source of useful information is the AD groups in the Acme environment. You can learn about organizational structure from looking at group names.

I used PowerView’s Get-NetGroup to query Active Directory for all the groups in the Acme domain. As the output sped by, I noticed, besides all the default groups, a few group names with Acme as a prefix. These were probably set up and customized by the Acme system admin, which would be me in this case.

One group that caught my attention was the Acme-VIPs group.

It might be interesting to see this group’s user membership, and PV’s Get-NetGroupMember does this for me.


I now have a person of interest: Ted Bloatly, an obviously important guy at Acme.

Active Directory Treasures

At this point, I’ve not done anything disruptive or invasive. I’m just gathering information – though under the hood, PowerView is making low-level AD queries.

Suppose I want to find out more details about this Ted Bloatly person.

AD administrators are of course familiar with the Users and Computer interface through which they manage the directory (see below).



It’s also a treasure trove of information for hackers.

Can I access this using PowerView?

Through another PV cmdlet, Get-NetUser, I can indeed see all these fields, which include phone numbers, home address, emails, job title, and notes.

Putting on my red team hat, I could then leverage this personal data in a clever phishing or pretext attack — craft a forged email or perhaps make a phone call.

I then ran Get-NetUser with an account name parameter directly, and you can see some of the attributes and their values displayed below.


However, as a cool pen tester I was only interested in a few of these attributes, so I came up with another script.

I’ll now use the PowerShell cmdlet ForEach-Object, which has an alias of %.

The idea is to filter my user objects using the aforementioned Where-Object to match only on ted, and then use the ForEach-Object cmdlet to reference the individual objects—in this case only ted—and their attributes. I’ll print the attributes using PowerShell’s Write-Output.

By the way, Get-NetUser displays a lot of the object’s AD attributes, but not all of them. Let’s say I couldn’t find the attribute name for Ted’s email address.

So here’s where having a knowledge of Active Directory classes comes into play. The object I’m interested in is a member of the organizationalPerson class. If you look at the Microsoft AD documentation, you’ll find that this class has an email field, known by its LDAP name as “mail”.

With this last piece of the puzzle, I’m now able to get all of Ted’s contact information as well as some personal notes about him contained in the AD info attribute.


So I found Acme’s CEO and even know he’s a bowler. It doesn’t get much better than that for launching a social engineered attack.

As a hacker, I could now call it a day, and use this private information to later phish Ted directly, ultimately landing on the laptop of an Acme executive.

One can imagine hackers doing this on an enormous scale as they scoop up personal data on different key groups within companies: executives, attorneys, financial groups, production managers, etc.

I forgot to mention one thing: I was able to run these cmdlets using just ordinary user access rights.

Scary thought!

In my next post, we’ll try to access Ted’s laptop more directly, and explore techniques for navigating around the Acme network.

Definitive Guide to DNS TTL Settings


DNS is a foundational piece of technology. Nearly every higher-level network request, all internet traffic, web searches, email, etc. rely on the ability to resolve DNS lookups (translate names like www.example.com to IP addresses or other domains).

We wanted to write about Time To Live (TTL) as most Sysadmins don’t interact with DNS configurations on a daily basis and much of the information that’s out there is based upon half-remembered war stories handed down from the generations of sysadmins who came before us.

We asked on Twitter and there were some sysadmins who weren’t even exactly sure what TTL stood for (though thankfully most of them did).

To help with this situation, we are going to cover:

  1. DNS and TTL Basics
  2. DNS TTL Troubleshooting
  3. DNS Best Practices for Change Management
  4. DNS Tools
  5. Next Steps


What is a DNS Record?

Domain Name Server (DNS) records specify two important things:

  • Where requests for an entry should be pointed (resolved) to.
  • How long the record can be cached before it needs to be requested again – this is ominously called the Time To Live (TTL) of the record.

Why is DNS cached?

Most organizations set up DNS records and then don’t change them for years. Since they’re often requested but infrequently updated, caching DNS records is very effective in improving network performance at the cost of increasing the complexity of reasoning about and troubleshooting DNS issues.

What’s a TTL?

A Time to Live represents how long *each step* of the DNS resolution chain will cache a record – and it’s tracked in seconds (hang on, that bit will be important later.)

This doesn’t really capture the nuance of the situation though: even though it’s definitely “Time to Live”, it might make more sense if you think of it as “Time To Lookup” or “How long to keep this DNS record in cache”.

What are typical TTL times for DNS records?

Time to Live values are always represented in seconds. Most DNS configuration services provide a preset list of values to set your records to.

300 seconds = 5 minutes = “Very Short”
3600 seconds = 1 hour = “Short”
86400 seconds = 24 hours = “Long”
604800 seconds = 7 days = “Insanity”
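These labels are informal, but the mapping can be sketched as a quick helper. The thresholds are just the illustrative values from the table above, not any standard:

```python
# Map a TTL in seconds to the informal labels from the table above.
# Thresholds are illustrative, not an industry standard.
def ttl_label(seconds: int) -> str:
    if seconds <= 300:
        return "Very Short"
    if seconds <= 3600:
        return "Short"
    if seconds <= 86400:
        return "Long"
    return "Insanity"

print(ttl_label(300))     # Very Short
print(ttl_label(604800))  # Insanity
```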

How do DNS Lookups Work?

When you type a URL into your browser, a whole series of lookups are created:

The following questions are asked at every step in this process (and there are often more steps than listed here)

  1. Do we have this record cached?
  2. If it is cached, is the TTL still valid?

If the answer to either of these questions is “No” then the request is moved to the next step up the chain.
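Those two questions can be sketched as a toy resolver cache in Python. The domain and address below are illustrative, and real resolvers are far more involved than this:

```python
import time

# Toy resolver cache: name -> (record, absolute expiry time).
cache = {}

def store(name, record, ttl, now=None):
    now = time.time() if now is None else now
    cache[name] = (record, now + ttl)

def lookup(name, now=None):
    now = time.time() if now is None else now
    entry = cache.get(name)
    if entry is None:          # question 1: not cached
        return None
    record, expires = entry
    if now >= expires:         # question 2: TTL has expired
        del cache[name]
        return None            # move the request up the chain
    return record

# Illustrative name/address; 300s TTL from the table above.
store("example.com", "93.184.216.34", ttl=300, now=0)
print(lookup("example.com", now=100))  # within TTL: cached answer
print(lookup("example.com", now=400))  # expired: None, ask upstream
```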

Why DNS is about Network Connections not Devices

Troubleshooting DNS is difficult not just because the TTL and caching system introduce complexities, but because many modern devices connect via different networks and chains of DNS Servers.

Consider an ordinary laptop computer. Mine is more or less glued to my desk, and although it hasn’t moved in weeks, I’ve connected to:

  • My normal home wi-fi / cable network
  • My cell phone when cable didn’t work
  • Both of the above, but connected over VPN

Every time that a network is switched, a new DNS chain is brought into effect. If you happen to be in the midst of making changes, the servers and cache locations in the DNS chain may or may not have the correct information.

This often happens on corporate networks where the Active Directory domain is the same as the company’s website. An external DNS server (at the ISP level) has a DNS record that points ‘’ to the correct web server IP address/CNAME, but on the internal DNS server used by Active Directory the records haven’t been mirrored.

So you’ll have people in a panic. “The webserver is down!” “The sky is falling!” “Where are my pants?”…and when you begin troubleshooting, you’ll find that what actually happened is that they left their VPN connected.


How long will it take my DNS to update?

To calculate the maximum (worst-case) amount of time between updating a DNS record and being confident that every client references the new value, multiply the number of steps in the chain (not counting the authoritative server) by the TTL value.

So if your TTL is 3600 seconds (1 hour) and there are 5 steps, it shouldn’t take more than 18,000 seconds (5 hours) for changes to fully propagate.
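That worst-case arithmetic can be expressed as a one-line helper:

```python
# Worst case: each of N caching steps may have refreshed the record
# just before your change, so the delays can stack up to N * TTL.
def worst_case_propagation(ttl_seconds: int, steps: int) -> int:
    return ttl_seconds * steps

print(worst_case_propagation(3600, 5))  # 18000 seconds = 5 hours
```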

But wait!

How much does a DNS lookup cost?

When you ask how much a DNS lookup “costs” you’re not usually worrying about money. You’re worrying about time. Depending on the usual menagerie of internet network gremlins, a DNS request typically takes between 100 and 200 milliseconds to complete.

While this is a very small amount of time, consider a webpage. Every image, CSS file, and JavaScript asset referenced on a page needs to have its DNS resolved. Without caching you’d have greatly increased load times.

Naive DNS Cost Calculations

This is “naive” because it’s unlikely that every asset on your website will be served from a different domain, and browsers have lots of nice caching tricks and strategies to make things faster than this simplified model of how they work.

With Caching
(30 image files * 50ms to download each) + (100 ms one time lookup of the DNS which is then cached) = 1600ms

Without Caching
(30 image files * 50ms to download each) + (30 * 100ms DNS lookups) = 4500ms
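Both scenarios can be reproduced with a quick calculation (the asset counts and timings are the illustrative figures used in this simplified model):

```python
# Naive page-load model: downloads plus either one cached DNS lookup
# or one lookup per asset. Figures are illustrative.
def page_load_ms(assets: int, download_ms: int, dns_ms: int, cached: bool) -> int:
    dns_cost = dns_ms if cached else assets * dns_ms
    return assets * download_ms + dns_cost

print(page_load_ms(30, 50, 100, cached=True))   # 1600
print(page_load_ms(30, 50, 100, cached=False))  # 4500
```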

Why isn’t my DNS updating?

There can also be additional factors that extend the propagation time beyond the base calculation. Some examples include:

  • Web browsers internally caching DNS entries for periods of time not controlled by TTL, in an attempt to seem “faster”. For example, modern Internet Explorer versions cache DNS for 30 minutes by default (prior to IE 4 it was 24 hours) and will ignore TTLs below that.
  • Mobile ISPs may seek to reduce overall traffic by increasing TTL times, lowering how often requests are made.
  • Complex internal networks with more DNS servers than you would anticipate will naturally take longer to update.

The above is the reason why most services state something like: “It can take days for your DNS changes to fully propagate, plan accordingly.”

Is there any way to force a client to update their DNS record remotely?

This question is typically asked in the context of: “I updated my DNS records and now a client can’t reach some site, how do I force the update to take place?”

Unfortunately the answer is “no”. There is no DNS configuration command that you can enter to force early updates from downstream clients.

There are commands you can run that will clear out DNS entries from a local cache, but typically these don’t work as effectively as you’d hope, since you still have upstream (ISP DNS caching) and downstream (Browser DNS caching) occurring.

Your best bet is to change the TTL for your records ahead of time.


What’s better: a short or a long TTL?

Developers have long waged holy wars over whether code indentations should be tabs or spaces. I’ve found that network admins feel similarly about TTL length.

Typically what informs this opinion is whatever network configuration attack/debacle they were previously involved with.

A DDoS attack that disrupts root/ISP-level DNS servers for 12 hours will have less of an impact on sites with a very long TTL. The long TTL lets clients keep working – even when the DNS server is offline or overwhelmed.

But, if you’re in the midst of switching web or email hosts and you happen to typo a record, the last thing you want to do is to have that change irrevocably stick around for the next 12 hours. And so you have people who advocate TTLs of a minute.

My strong personal preference is to have short (less than 1 hour/3600 seconds) DNS TTLs.

How do I know when a client will request the updated DNS record?

It’s very difficult to estimate when all clients will be updated.

See, Time To Live is *not* a freshness date. Do not consider DNS TTLs a “Best By: ” date on a stale loaf of bread – it’s not a singular time when a record goes from good to bad and needs to be replaced.


DNS is much more like an org chart, where as you make changes, the changes slowly propagate out to the entire network as time passes and clients “lower” in the chart have their caches expire and request the record from the DNS server higher in the list.

What’s the Best Practice for changing a DNS record?

For something relatively simple like modifying a single record to a domain, it might feel like overkill to have a “plan” or “strategy” – but given the very public severity of screwing up DNS some caution is warranted. It’s like the old saying: “A packet of prevention is worth a pound of cure.”

There’s a simple way to limit your mistakes: never update both a DNS record and a TTL for that record at the same time. Ideally you’ll have a process of:

  1. Drop TTL on the DNS record to something very low a couple of days before you actually need to make the switch. Ex: 300 seconds
  2. Change the actual record on your cutover date.
  3. Several days after you’ve made the switch, up the TTL to something higher.
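The three-step process can be sketched as a simple schedule generator. The `cutover_plan` helper, dates, and TTL values below are all hypothetical and only illustrate the ordering, not a real tool:

```python
from datetime import date, timedelta

# Hypothetical helper: lay out the "lower TTL, change record, raise TTL"
# steps around a cutover date. Lead/lag days and TTLs are illustrative.
def cutover_plan(cutover, low_ttl=300, normal_ttl=3600, lead_days=2, lag_days=3):
    return [
        (cutover - timedelta(days=lead_days), f"lower TTL to {low_ttl}s"),
        (cutover, "change the record (TTL stays low)"),
        (cutover + timedelta(days=lag_days), f"raise TTL back to {normal_ttl}s"),
    ]

for day, action in cutover_plan(date(2024, 6, 10)):
    print(day, "-", action)
```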

What’s the Best Practice for adding a new DNS record?

Adding new records is simpler than modifying existing ones.

  1. Add the record with a low TTL.
  2. After you’re sure everything works, raise the TTL.

What’s the most common TTL setting?

There’s so much controversy around what your TTL settings *should* be that we thought we’d try to generate some hard data. The Moz Top 500 sites is a nice cross section of websites and they’ve already done the hard work of putting them all into a CSV file.

I wrote a quick script to iterate through the list and look up the current TTL of the primary record for each domain. Like any data project, the results depend on how you ask the question: the sample isn’t especially broad, the script pulls the current (possibly cached) results, etc. With that disclaimer out of the way, there are still some great insights to pull from the results.

TTL Analysis of the Top 500 Moz domains

View/Modify the script or download and run it yourself at:

See the list and download the CSV at:

Lowest TTL: 1
Highest TTL: 129,540
Domains Resolved: 485
Average TTL: 6468
Median TTL: 300

The lowest values are coming from domains that are doing very rapid DNS changes for load balancing purposes. The highest are from domains that haven’t been updated in a long time (I’m looking at you).

From the standpoint of needing to defend the decision to have a short (sub 1 hour, 3600 seconds) TTL, you can point to the median value of 300s (5 minutes) and state confidently that you have empirical evidence of what the setting should be.


How do I check the TTL of a DNS record on Windows?

The nslookup utility is the easiest way to check DNS records on a Windows machine.

Example: C:\>nslookup -type=cname -debug


The TTL is listed at the bottom of the output. “Non-authoritative answer” indicates that this is the TTL as seen from the client (that we have 2 mins and 11 secs until the local client checks the next level up in DNS).

How do I check the TTL of a DNS record on Unix/Linux/Mac?

For Unix (and derived) systems, the dig command is used for DNS troubleshooting.

Example: dig


The TTL is shown outlined in red.

How do I check a DNS record from the Web?

You might not always be at your computer when you need to check a DNS record. A handy (and free) version of dig is available online from Google at:


The TTL is shown outlined in red.

How do I test for DNS TTL propagation?

If you’re trying to find out if a specific DNS server has been updated with your new DNS settings, all DNS tools (dig, nslookup, etc) will let you specify what DNS server you’d like to run your queries against instead of the default local setup.

To get a broader picture of your changes, I recommend which will check many of the top (ISP level) DNS servers which will let you know if something has gone really wrong.



Setting DNS TTLs can be tricky, but if you set them to short (less than an hour) values, you’ll preserve your sanity and better prepare your network to handle changes.

If you liked this article, we’d recommend you also check out our Web Security Fundamentals course, which will help you better secure that site or application you just set up DNS for. It’s free and well worth your time.

Click on this handsome Australian man’s face to get started.