Tag Archives: penetration testing

Disabling PowerShell and Other Malware Nuisances, Part I

Back in more innocent times, circa 2015, we began to hear about hackers going malware-free and “living off the land.” They used whatever garden-variety IT tools were lying around on the target site. It’s the ideal way to do post-exploitation without tripping any alarms.

This approach has taken off and gone mainstream, primarily because of off-the-shelf post-exploitation environments like PowerShell Empire.

I’ve already written about how PowerShell, when supplemented with PowerView, becomes a potent purveyor of information for hackers. (In fact, all this PowerShell pen-testing wisdom is available in an incredible ebook that you should read as soon as possible.)

Any tool can be used for good or bad, so I’m not implying PowerShell was put on this earth to make the life of hackers easier.

But just as you wouldn’t leave a 28” heavy-duty cable cutter next to a padlock, you probably don’t want to let hackers get their hands on PowerShell — or at least you want to make it much more difficult for them.

This brings up a large topic in the cybersecurity world: restricting application access, which is known more commonly as whitelisting or blacklisting. The overall idea is for the operating system to strictly control what apps can be launched by users.

For example, as a member of homo blogus, I generally need some basic tools and apps (along with a warm place to sleep at night), and can live without PowerShell, netcat, psexec, and the other cross-over IT tools I’ve discussed in previous posts. The same applies to most employees in an organization, and so a smart IT person should be able to come up with a list of apps that are safe to use.

In the Windows world, you can enforce rules on application execution using Software Restriction Policies and, more recently, AppLocker.

However, before we get into these more advanced ideas, let’s try two really simple solutions and then see what’s wrong with them.

ACLs and Other Simplicities

We often think of Windows ACLs as being used to control access to readable content. But they can also be applied to executables — that is, .exe, .vbs, .ps1, and the rest.

I went back into Amazon Web Services where the Windows domain for the mythical and now legendary Acme company resides and then did some ACL restriction work.

The PowerShell .exe, as any sys admin can tell you, lives in C:\Windows\System32\WindowsPowerShell\v1.0. I navigated to the folder, clicked on properties, and effectively limited execution of PowerShell to a few essential groups: Domain Admins and Acme-SnowFlakes, which is the group of Acme employee power users.

I logged back into the server as Bob, my go-to Acme employee, and tried to bring up PowerShell. You can see the results below.

In practice, you could probably come up with a script — why not use PowerShell? — to automate this ACL setting for all the laptops and servers in a small- to mid-size site.
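
Here’s a minimal sketch of what such a script could look like. The ACME\ group names are stand-ins from my Acme setup, and in practice you may first need to take ownership of the file before Windows lets you rewrite its ACL:

$ps = "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe"
$acl = Get-Acl $ps

# Cut inheritance from the parent folder, then drop the existing access rules
$acl.SetAccessRuleProtection($true, $false)
foreach ($ace in @($acl.Access)) { $acl.RemoveAccessRule($ace) | Out-Null }

# Grant ReadAndExecute to the approved groups only (group names are examples)
foreach ($group in "ACME\Domain Admins", "ACME\Acme-SnowFlakes") {
    $rule = New-Object System.Security.AccessControl.FileSystemAccessRule($group, "ReadAndExecute", "Allow")
    $acl.AddAccessRule($rule)
}

Set-Acl -Path $ps -AclObject $acl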

It’s not a bad solution.

If you don’t like the idea of setting ACLs in executableefiles, PowerShell offers its own execution restriction controls. As a user with admin privileges, you can use, what else but a PowerShell cmdlet called Set-ExecutionPolicy.

It’s not nearly as blunt a force as the ACLs, but you can restrict PowerShell to work only in interactive mode –  with the Restricted parameter — so that it won’t execute scripts that contain the hackers’ malware. PowerShell would still be available in a limited way, but it wouldn’t be capable of running the scripts containing hacker PS malware.

However, this would also prevent your IT staff from running PowerShell scripts. To allow IT-approved scripts but disable evil hacker scripts, you use the RemoteSigned parameter in Set-ExecutionPolicy. With that policy, PowerShell will only launch downloaded scripts if they’ve been signed. The IT staff, of course, would need to sign any scripts they distribute using an approved credential.
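
For reference, both settings are one-liners from an elevated PowerShell session:

Set-ExecutionPolicy -ExecutionPolicy Restricted      # interactive use only, no scripts
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned    # downloaded scripts must be signed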

I won’t go into the details how to do this, mostly because it’s so easy to get around these controls. Someone even has a listicle blog post in which 15 PowerShell  security workarounds are described.

The easiest one is using the Bypass parameter in PowerShell itself. Duh! (below).
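
For instance, reconstructing the screenshot that originally appeared here: the policy can be sidestepped for a single script or session without any admin rights (the script name below is made up).

powershell.exe -ExecutionPolicy Bypass -File .\evil.ps1
Set-ExecutionPolicy -ExecutionPolicy Bypass -Scope Process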

Seems like a security hole, no?

So PowerShell has some basic security flaws. It’s somewhat understandable since it is, after all, just a shell program.

But even the ACL restriction approach has a fundamental problem.

If hackers loosen up the “living off the land” philosophy, they can simply download — say, using a remote access trojan (RAT) — their own copy of the PowerShell .exe and run it directly, bypassing the permission restrictions on the resident PowerShell.

Software Restriction Policies

These basic security holes (and many others) are always an issue with consumer-grade operating systems. This has led OS researchers to come up with secure operating systems that have direct power to control what can be run.

In the Windows world, these powers are known as Software Restriction Policies (SRP) — for a good overview, see this — which are managed through the Group Policy Editor.

With SRP you can control which apps can be run, based on file extension, path names, and whether the app has been digitally signed.

The most effective, though most painful, approach is to disallow everything and then add back the applications you really, really need. This is known as whitelisting.

We’ll go into more details in the next post.

Anyway, you’ll need to launch the policy editor, gpedit, and navigate to Local Computer Policy>Windows Settings>Security Settings>Software Restriction Polices>Security Levels. If you click on “Disallowed”, you can then make this the default security policy — to not run any executables!

The whitelist: disallow as default, and then add app policies in “Additional Rules”.

This is more like a scorched earth policy. In practice, you’ll need to enter “Additional Rules” to add back the approved apps (with their path names). If you leave out PowerShell, then you’ve effectively disabled this tool on the site.

Unfortunately, you can’t fine-tune the SRP rules based on AD groups or users. Drat!

And that we’ll bring us to Microsoft’s latest and greatest security enforcer, known as AppLocker, which does provide some nuance to application access. We’ll take that up next time as well.


Practical PowerShell for IT Security, Part III: Classification on a Budget

Last time, with a few lines of PowerShell code, I launched an entire new software category, File Access Analytics (FAA). My 15 minutes of fame are almost over, but I was able to make the point that PowerShell has practical file event monitoring aspects. In this post, I’ll finish some old business with my FAA tool and then take up PowerShell-style data classification.

Event-Driven Analytics

To refresh memories, I used the Register-WmiEvent cmdlet in my FAA script to watch for file access events in a folder. I also created a mythical baseline of event rates to compare against. (For wonky types, there’s a whole area of measuring these kinds of things — hits to web sites, calls coming into a call center, traffic at espresso bars — that was started by this fellow.)

When file access counts reach above normal limits, I trigger a software-created event that gets picked up by another part of the code and pops up the FAA “dashboard”.

This triggering is performed by the New-Event cmdlet, which allows you to send an event, along with other information, to a receiver. To read the event, there’s the Wait-Event cmdlet. The receiving part can even be in another script, as long as both event cmdlets use the same SourceIdentifier — Bursts, in my case.

These are all operating systems 101 ideas: effectively, PowerShell provides a simple message passing system. Pretty neat considering we are using what is, after all, a bleepin’ command language.
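
Stripped to its essentials, the pattern is a minimal sender and receiver pair like this sketch (the full script follows below):

# Sender: raise a custom event named Bursts
New-Event -SourceIdentifier Bursts -MessageData "We're in Trouble" | Out-Null

# Receiver: block until a Bursts event arrives, then consume it
$evt = Wait-Event -SourceIdentifier Bursts
$evt.MessageData
Remove-Event -SourceIdentifier Bursts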

Anyway, the full code is presented below for your amusement.

$cur = Get-Date
$Global:Count=0
$Global:baseline = @{"Monday" = @(3,8,5); "Tuesday" = @(4,10,7);"Wednesday" = @(4,4,4);"Thursday" = @(7,12,4); "Friday" = @(5,4,6); "Saturday"=@(2,1,1); "Sunday"= @(2,4,2)}
$Global:cnts =     @(0,0,0)
$Global:burst =    $false
$Global:evarray =  New-Object System.Collections.ArrayList

$action = { 
    $Global:Count++  
    $d=(Get-Date).DayofWeek
    $i= [math]::floor((Get-Date).Hour/8) 

   $Global:cnts[$i]++ 

   #event auditing!
    
   $rawtime =  $EventArgs.NewEvent.TargetInstance.LastAccessed.Substring(0,12)
   $filename = $EventArgs.NewEvent.TargetInstance.Name
   $etime= [datetime]::ParseExact($rawtime,"yyyyMMddHHmm",$null)
  
   $msg="$($etime): Access of file $($filename)"
   $msg|Out-File C:\Users\bob\Documents\events.log -Append
  
   
   $Global:evarray.Add(@($filename,$etime))
   if(!$Global:burst) {
      $Global:start=$etime
      $Global:burst=$true            
   }
   else { 
     if($Global:start.AddMinutes(15) -gt $etime ) { 
        $Global:Count++
        #File behavior analytics
        $sfactor=2*[math]::sqrt( $Global:baseline["$($d)"][$i])
       
        if ($Global:Count -gt $Global:baseline["$($d)"][$i] + 2*$sfactor) {
         
         
          "$($etime): Burst of $($Global:Count) accesses"| Out-File C:\Users\bob\Documents\events.log -Append 
          $Global:Count=0
          $Global:burst =$false
          New-Event -SourceIdentifier Bursts -MessageData "We're in Trouble" -EventArguments $Global:evarray
          $Global:evarray= [System.Collections.ArrayList] @();
        }
     }
     else { $Global:burst =$false; $Global:Count=0; $Global:evarray= [System.Collections.ArrayList]  @();}
   }     
} 
 
Register-WmiEvent -Query "SELECT * FROM __InstanceModificationEvent WITHIN 5 WHERE TargetInstance ISA 'CIM_DataFile' and TargetInstance.Path = '\\Users\\bob\\' and targetInstance.Drive = 'C:' and (targetInstance.Extension = 'txt' or targetInstance.Extension = 'doc' or targetInstance.Extension = 'rtf') and targetInstance.LastAccessed > '$($cur)' " -sourceIdentifier "Accessor" -Action $action   


#Dashboard
While ($true) {
    $args=Wait-Event -SourceIdentifier Bursts # wait on Burst event
    Remove-Event -SourceIdentifier Bursts #remove event
  
    $outarray=@() 
    foreach ($result in $args.SourceArgs) {
      $obj = New-Object System.Object
      $obj | Add-Member -type NoteProperty -Name File -Value $result[0]
      $obj | Add-Member -type NoteProperty -Name Time -Value $result[1]
      $outarray += $obj  
    }


     $outarray|Out-GridView -Title "FAA Dashboard: Burst Data"
 }

Please don’t pound your laptop as you look through it.

I’m aware that I continue to pop up separate grid views, and there are better ways to handle the graphics. With PowerShell, you do have access to the full .Net framework, so you could create and access objects —listboxes, charts, etc. — and then update as needed. I’ll leave that for now as a homework assignment.

Classification is Very Important in Data Security

Let’s put my file event monitoring on the back burner, as we take up the topic of PowerShell and data classification.

At Varonis, we preach the gospel of “knowing your data” for good reason. In order to work out a useful data security program, one of the first steps is to learn where your critical or sensitive data is located — credit card numbers, consumer addresses, sensitive legal documents, proprietary code.

The goal, of course, is to protect the company’s digital treasure, but you first have to identify it. By the way, this is not just a good idea, but many data security laws and regulations (for example, HIPAA)  as well as industry data standards (PCI DSS) require asset identification as part of doing real-world risk assessment.

PowerShell should have great potential for use in data classification applications. Can PS access and read files directly? Check. Can it perform pattern matching on text? Check. Can it do this efficiently on a somewhat large scale? Check.

No, the PowerShell classification script I eventually came up with will not replace the Varonis Data Classification Framework. But for the scenario I had in mind — an IT admin who needs to watch over an especially sensitive folder — my PowerShell effort gets more than a passing grade, say a B+!

WQL and CIM_DataFile

Let’s now return to WQL, which I referenced in the first post on event monitoring.

Just as I used this query language to look at file events in a directory, I can tweak the script to retrieve all the files in a specific directory. As before, I use the CIM_DataFile class, but this time my query is directed at the folder itself, not the events associated with it.

$list = Get-WmiObject -Query "SELECT * From CIM_DataFile where Path = '\\Users\\bob\\' and Drive = 'C:' and (Extension = 'txt' or Extension = 'doc' or Extension = 'rtf')"

Terrific! This line of code loads $list with an array of file objects, with each object’s Name property holding the file’s full path.

To read the contents of each file into a variable, PowerShell conveniently provides the Get-Content cmdlet. Thank you Microsoft.

I need one more ingredient for my script, which is pattern matching. Not surprisingly, PowerShell has a regular expression engine. For my purposes it’s a little bit of overkill, but it certainly saved me time.

In talking to security pros, they’ve often told me that companies should explicitly mark documents or presentations containing proprietary or sensitive information with an appropriate footer — say, Secret or Confidential. It’s a good practice, and of course it helps in the data classification process.

In my script, I created a PowerShell hashtable of possible marker texts with an associated regular expression to match it. For documents that aren’t explicitly marked this way, I also added special project names — in my case, snowflake — that would also get scanned. And for kicks, I added a regular expression for social security numbers.

The code block I used to do the reading and pattern matching is listed below. The file name to read and scan is passed in as a parameter.

$Action = {
    Param (
        [string] $Name
    )

    $classify = @{"Top Secret" = [regex]'[tT]op [sS]ecret'; "Sensitive" = [regex]'([Cc]onfidential)|([sS]nowflake)'; "Numbers" = [regex]'[0-9]{3}-[0-9]{2}-[0-9]{4}' }

    $data = Get-Content $Name
    $cnts = @()

    foreach ($key in $classify.Keys) {
        $m = $classify[$key].matches($data)
        if ($m.Count -gt 0) {
            $cnts += @($key, $m.Count)
        }
    }

    $cnts
}

Magnificent Multi-Threading

I could have just simplified my project by taking the above code and adding some glue, and then running the results through the Out-GridView cmdlet.

But this being the Varonis IOS blog, we never, ever do anything nice and easy.

There is a point I’m trying to make. Even for a single folder in a corporate file system, there can be hundreds, perhaps even a few thousand files.

Do you really want to wait around while the script is serially reading each file?

Of course not!

Large-scale file I/O applications, like what we’re doing with classification, are well-suited to multi-threading — you can launch lots of file activity in parallel and thereby significantly reduce the delay in seeing results.

PowerShell does have a usable (if clunky) background processing system known as Jobs. But it also boasts an impressive and sleek multi-threading capability known as Runspaces.

After playing with it, and borrowing code from a few Runspaces pioneers, I am impressed.

Runspaces handles all the messy mechanics of synchronization and concurrency. It’s not something you can grok quickly, and even Microsoft’s amazing Scripting Guys are still working out their understanding of this multi-threading system.

In any case, I went boldly ahead and used Runspaces to do my file reads in parallel. Below is a bit of the code to launch the threads: for each file in the directory I create a thread that runs the above script block, which returns matching patterns in an array.

$RunspacePool = [RunspaceFactory]::CreateRunspacePool(1, 5)
$RunspacePool.Open()

$Tasks = @()

foreach ($item in $list) {
    $Task = [powershell]::Create().AddScript($Action).AddArgument($item.Name)
    $Task.RunspacePool = $RunspacePool
    $status = $Task.BeginInvoke()
    $Tasks += @($status, $Task, $item.Name)
}
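
The launching code above never collects any results, of course. Since $Tasks is a flattened array, a hypothetical harvesting loop (the real one arrives with the full script in the next post) can walk it in strides of three and block on each handle:

for ($i = 0; $i -lt $Tasks.Count; $i += 3) {
    $status = $Tasks[$i]        # IAsyncResult handle from BeginInvoke
    $task   = $Tasks[$i + 1]    # the PowerShell instance
    $name   = $Tasks[$i + 2]    # the file name we passed in
    $found  = $task.EndInvoke($status)   # blocks until this thread finishes
    Write-Output "$name : $found"
    $task.Dispose()
}
$RunspacePool.Close()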

Let’s take a deep breath—we’ve covered a lot.

In the next post, I’ll present the full script, and discuss some of the (painful) details.  In the meantime, after seeding some files with marker text, I produced the following output with Out-GridView:

Content classification on the cheap!

In the meantime, another idea to think about is how to connect the two scripts: the file activity monitoring one and the classification script partially presented in this post.

After all, the classification script should communicate what’s worth monitoring to the file activity script, and the activity script could in theory tell the classification script when a new file is created so that it could classify it—incremental scanning in other words.

Sounds like I’m suggesting, dare I say it, a PowerShell-based security monitoring platform. We’ll start working out how this can be done the next time as well.

Practical PowerShell for IT Security, Part II: File Access Analytics (FAA)

In working on this series, I almost feel that with PowerShell we have technology that somehow time-traveled back from the future. Remember on Star Trek — the original, of course — when the Enterprise’s CTO, Mr. Spock, was looking into his visor while scanning parsecs of space? The truth is Spock was gazing at the output of a Starfleet-approved PowerShell script.

Tricorders? Also powered by PowerShell.

Yes, I’m a fan of PowerShell, and boldly going where no blogger has gone before. For someone who’s been raised on bare-bones Linux shell languages, PowerShell looks like super-advanced technology. Part of PowerShell’s high-tech prowess is its ability, as I mentioned in the previous post, to monitor low-level OS events, like file updates.

A Closer Look at Register-WmiEvent

Let’s return to the amazing one-line of file monitoring PS code I introduced last time.

Register-WmiEvent -Query "SELECT * FROM __InstanceModificationEvent WITHIN 5 WHERE TargetInstance ISA 'CIM_DataFile' and TargetInstance.Path = '\\Users\\bob\\' and targetInstance.Drive = 'C:' and (targetInstance.Extension = 'doc' or targetInstance.Extension = 'txt') and targetInstance.LastAccessed > '$($cur)' " -sourceIdentifier "Accessor" -Action $action


As you might have guessed, the logic on what to monitor is buried in the WQL contained in Register-WmiEvent’s query parameter.

You’ll recall that WQL allows scripters to retrieve information about Windows system events in general and, specifically in our case, file events – files created, updated, or deleted.  With this query, I’m effectively pulling out of Windows darker depths file modification events that are organized as a CIM_DataFile class.

WQL allows me to set the drive and folder I’m interested in searching — that would be the Drive and Path properties that I reference above.

Though I’m not allowed to use a wild card search — it’s a feature, not a bug — I can instead search for specific file extensions. My goal in developing the script for this post is to help IT security spot excessive activity on readable files.  So I set up a logical condition to search for files with “doc” or “txt” extensions. Makes sense, right?

Now for the somewhat subtle part.

I’d like to collect file events generated by anyone accessing a file, including those who just read a Microsoft Word documents without making changes.

Can that be done?

When we review a file list in Windows Explorer, we’re all familiar with the “Date Modified” field. But did you know there’s also a “Date Accessed” field? Every time you read a file in Windows, this field is, in theory, updated with the current time stamp. You can discover this for yourself—see below—by clicking on the column heads and enabling the access field.

However, in practice, Windows machines aren’t typically configured to update this internal field when a file is just accessed—i.e., read, but not modified. Microsoft says it will slow down performance. But let’s throw caution to the wind.

To configure Windows to always update the file access time, you use the under-appreciated fsutil utility (you’ll need admin access) with the following parameters: 

fsutil behavior set disablelastaccess 0

With file access events configured in my test environment, Windows will now also record read-only events.

My final search criteria in the above WQL should make sense:

targetInstance.LastAccessed > '$($cur)'

It says that I’m only interested in file events in which file access has occurred after the Register-WmiEvent is launched. The $cur variable, by the way is assigned the current time pulled from the Get-Date cmdlet.

File Access Analytics (FAA)

We’ve gotten through the WQL, so let’s continue with the remaining parameters in Register-WmiEvent.

SourceIdentifier allows you to name an event. Naming things — people, tabby cats, and terriers — is always a good practice since you can call them when you need ‘em.

And it holds just as true for events! There are a few cmdlets that require this identifier. For starters: Unregister-Event for removing a given event subscription, Get-Event for letting you review all the events that are queued, Remove-Event for erasing current events in the queue, and finally Wait-Event for doing an explicit synchronous wait. We’ll be using some of these cmdlets in the completed code.
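
As a quick illustration, here’s how those cmdlets line up against the subscription and event names used in this series (Accessor and Bursts):

Get-Event                                      # review whatever is sitting in the queue
Wait-Event -SourceIdentifier Bursts            # block until a Bursts event shows up
Remove-Event -SourceIdentifier Bursts          # clear Bursts events from the queue
Unregister-Event -SourceIdentifier Accessor    # tear down the WMI subscription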

I now have the core of my script worked out.

That leaves the Action parameter. Since Register-WmiEvent responds asynchronously to events, it needs some code to handle the response to the triggering event, and that’s where the action, so to speak, is: in a block of PowerShell code that’s passed in.

This leads to what I really want to accomplish with my script, and so I’m forced to reveal my grand scheme to take over the User Behavior Analytics world with a few lines of PowerShell code.

Here’s the plan: This PS script will monitor file access event rates, compare it to a baseline, and decide whether the event rates fall into an abnormal range, which could indicate possible hacking. If this threshold is reached, I’ll display an amazing dashboard showing the recent activity.

In other words, I’ll have a threat monitor alert system that will spot unusual activity against text files in a specific directory.

Will PowerShell Put Security Solutions Out of Business?

No, Varonis doesn’t have anything to worry about, for a few reasons.

One, event monitoring is not really something Windows does efficiently. Microsoft in fact warns that turning on last access file updates through fsutil adds system overhead. In addition, Register-WmiEvent makes the internal event flywheels spin faster: I came across some comments saying the cmdlet may cause the system to slow down.

Two, I’ve noticed that this isn’t real-time or near real-time monitoring: there’s a lag in receiving file events, running up to 30 minutes or longer. At least, that was my experience running the scripts on my AWS virtual machine. Maybe you’ll do better on your dedicated machine, but I don’t think Microsoft is making any kind of promises here.

Three, try as I might, I was unable to connect a file modification event to the user of the app that was causing the event. In other words, I know a file event has occurred, but alas it doesn’t seem to be possible with Register-WmiEvent to know who caused it.

So I’m left with a script that can monitor file access but without assigning attribution. Hmmm …  let’s create a new security monitoring category, called File Access Analytics (FAA), which captures what I’m doing. Are you listening Gartner?

The larger point, of course, is that User Behavior Analytics (UBA) is a far better way to spot threats because user-specific activity contains the interesting information. My far less granular FAA, while useful, can’t reliably pinpoint the bad behaviors since it aggregates events over many users.

However, for small companies with a few accounts logged on, FAA may be just enough. I can see an admin using the scripts when she suspects a user is spending too much time poking around a directory with sensitive data. And there are some honeypot possibilities with this code as well.

And even if my script doesn’t quite do the job, the even larger point is that understanding the complexities of dealing with Windows events using PowerShell (or whatever language you use) will make you, ahem, appreciate enterprise-class solutions.

We’re now ready to gaze upon the Powershell scriptblock of my Register-WmiEvent:

$action = { 
    $Global:Count++  
    $d=(Get-Date).DayofWeek
    $i= [math]::floor((Get-Date).Hour/8) 

   $Global:cnts[$i]++ 

   #event auditing!
    
   $rawtime =  $EventArgs.NewEvent.TargetInstance.LastAccessed.Substring(0,12)
   $filename = $EventArgs.NewEvent.TargetInstance.Name
   $etime= [datetime]::ParseExact($rawtime,"yyyyMMddHHmm",$null)
  
   $msg="$($etime): Access of file $($filename)"
   $msg|Out-File C:\Users\bob\Documents\events.log -Append

   
   $Global:evarray.Add(@($filename,$etime))
   if(!$Global:burst) {
      $Global:start=$etime
      $Global:burst=$true            
   }
   else { 
     if($Global:start.AddMinutes(15) -gt $etime ) { 
        $Global:Count++
        #File behavior analytics
        $sfactor=2*[math]::sqrt( $Global:baseline["$($d)"][$i])
        write-host "sfactor: $($sfactor)"
        if ($Global:Count -gt $Global:baseline["$($d)"][$i] + 2*$sfactor) {
         
          
          "$($etime): Burst of $($Global:Count) accesses"| Out-File C:\Users\bob\Documents\events.log -Append 
          $Global:Count=0
          $Global:burst =$false
          New-Event -SourceIdentifier Bursts -MessageData "We're in Trouble" -EventArguments $Global:evarray
          $Global:evarray= [System.Collections.ArrayList] @();
        }
     }
     else { $Global:burst =$false; $Global:Count=0; $Global:evarray= [System.Collections.ArrayList]  @();}
   }     
} 


Yes, I do audit logging by using the Out-File cmdlet to write a time-stamped entry for each access. And I detect bursty file access hits over 15-minute intervals, comparing the event counts against a baseline that’s held in the $Global:baseline array.

I got a little fancy here, and set up mythical average event counts in the baseline for each day of the week, dividing the day into three eight-hour periods. For example, with a baseline of 4 accesses, $sfactor works out to 4, so a burst of more than 12 accesses within the 15-minute window trips the alarm. When the burst activity in a given period falls at the far end of the “tail” of the bell curve, we can assume we’ve spotted a threat.

The FAA Dashboard

With the bursty event data held in $Global:evarray (files accessed with timestamps), I decided that it would be a great idea to display it as a spiffy dashboard. But rather than holding up the code in the scriptblock, I “queued” up this data on its own event, which can be handled by a separate app.

Whaaat?

Let me try to explain. This is where the New-Event cmdlet comes into play at the end of the scriptblock above. It simply allows me to asynchronously ping another app or script, thereby not tying down the code in the scriptblock so it can then handle the next file access event.

I’ll present the full code for my FAA PowerShell script in the next post.  For now, I’ll just say that I set up a Wait-Event cmdlet whose sole purpose is to pick up these burst events and then funnel the output into a beautiful table, courtesy of Out-GridView.

Here’s the end result that will pop on an admin’s console:


Impressive in its own way considering the whole FAA “platform” was accomplished in about 60 lines of PS code.

We’ve covered a lot of ground, so let’s call it a day.

We’ll talk more about the full FAA script the next time, and then we’ll start looking into the awesome hidden content classification possibilities of PowerShell.


Varonis eBook: Pen Testing Active Directory Environments

You may have been following our series of posts on pen testing Active Directory environments and learned about the awesome powers of PowerView. No doubt you were wowed by our cliffhanger ending — spoiler alert — where we applied graph theory to find the derivative admin!

Or maybe you tuned in late, saw this post, and binge read the whole thing during snow storm Nemo.

In any case, we know from the many emails we received that you demanded a better ‘long-form’ content experience. After all, who’d want to read about finding hackable vulnerabilities in Active Directory while being forced to click six times to access the entire series?

We listened!

Thanks to the miracle of PDF technology, we’ve compressed the entire series into an easy-to-read, comfy ebook format. Best of all, you can scroll through the entire contents without having to touch messy hyperlinks.

Download the Varonis Pen Testing Active Directory Environments ebook, and enjoy click-free reading today!

Practical PowerShell for IT Security, Part I: File Event Monitoring

Back when I was writing the ultimate penetration testing series to help humankind deal with hackers, I came across some interesting PowerShell cmdlets and techniques. I made the remarkable discovery that PowerShell is a security tool in its own right. Sounds to me like it’s the right time to start another series of PowerShell posts.

We’ll take the view in these posts that while PowerShell won’t replace purpose-built security platforms — Varonis can breathe easier now — it will help IT staff monitor for threats and perform other security functions. And also give IT folks an appreciation of the miracles that are accomplished by real security platforms, like our own Metadata Framework. PowerShell can do interesting security work on a small scale, but it is in no way equipped to take on an entire infrastructure.

It’s a Big Event

To begin, let’s explore using PowerShell as a system monitoring tool to watch files, processes, and users.

Before you start cursing into your browsers, I’m well aware that any operating system command language can be used to monitor system-level happenings. A junior IT admin can quickly put together, say, a Linux shell script to poll a directory to see if a file has been updated or retrieve a list of running processes to learn if a non-standard process has popped up.

I ain’t talking about that.

PowerShell instead gives you direct event-driven monitoring based on the operating system’s access to low-level changes. It’s the equivalent of getting a push notification on a news web page alerting you to a breaking story rather than having to manually refresh the page.

In this scenario, you’re not in an endless PowerShell loop, burning up CPU cycles, but instead the script is only notified or activated when the event — a file is modified or a new user logs in — actually occurs. It’s a far more efficient way to do security monitoring than by brute-force polling.

Further down below, I’ll explain how this is accomplished.

But first, anyone who’s ever taken, as I have, a basic “Operating Systems for Poets” course knows that there’s a demarcation between user-level and system-level processes.

The operating system, whether Linux or Windows, does the low-level handling of device actions — anything from disk reads to packets being received — and hides this from the garden-variety apps that we run from our desktop.

So if you launch your favorite word processing app and view the first page of a document, the whole operation appears as a smooth, synchronous activity. But in reality there are all kinds of time-sensitive events — disk seeks, disk blocks being read, characters sent to the screen, etc. — happening under the hood and deliberately hidden from us. Thank you Bill Gates!

In the old days, only hard-core system engineers knew about this low-level event processing. But as we’ll soon see, PowerShell scripters can now share in the joy as well.

An OS Instrumentation Language

This brings us to Windows Management Instrumentation (WMI), which is a Microsoft effort to provide a consistent view of operating system objects.

WMI is itself part of a broader industry effort, known as Web-Based Enterprise Management (WBEM), to standardize the information pulled out of routers, switches, and storage arrays, as well as operating systems.

So what does WMI actually look and feel like?

For our purposes, it’s really a query language, like SQL, but instead of accessing rows of vanilla database columns, it presents complex OS information organized as a WMI class hierarchy. Not too surprisingly, the query language is known as, wait for it, WQL.

Windows generously provides a utility, wbemtest, that lets you play with WQL. In the graphic below, you can see the results of my querying the Win32_Process object, which holds information on the current processes running.

WQL on training wheels with wbemtest.

Effectively, it’s the programmatic equivalent of running the Windows task monitor. Impressive, no? If you want to know more about WQL, download Ravi Chaganti’s wonderous ebook on the subject.

PowerShell and the Register-WmiEvent Cmdlet

But there’s more! You can take off the training wheels provided by wbemtest, and try these queries directly in PowerShell.

Powershell’s Get-WMIObject is the appropriate cmdlet for this task, and it lets you feed in the WQL query directly as a parameter.

The graphic below shows the first few results from running select Name, ProcessId, CommandLine from Win32_Process on my AWS test environment.

gwmi is the PowerShell alias for Get-WmiObject.

The output is a bit wonky since it’s showing some hidden properties having to do with underlying class bookkeeping. The cmdlet also spews out a huge list that speeds by on my console.

For a better Win32_Process experience, I piped the output from the query into Out-GridView, a neat PS cmdlet that formats the data as a beautiful GUI-based table.
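
Since the screenshot doesn’t reproduce here, my reconstruction of that one-liner looks something like this:

Get-WmiObject -Query "SELECT Name, ProcessId, CommandLine FROM Win32_Process" |
    Select-Object Name, ProcessId, CommandLine |
    Out-GridView -Title "Win32_Process"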

Not too shabby for a line of PowerShell code. But WMI does more than allow you to query these OS objects.

As I mentioned earlier, it gives you access to relevant events on the objects themselves. In WMI, these events are broadly broken into three types: creation, modification, and deletion.

Prior to PowerShell 2.0, you had to access these events in a clunky way: creating lots of different objects, and then you were forced to synchronously ‘hang’, so it wasn’t true asynchronous event-handling. If you want to know more, read this MS Technet post for the ugly details.

Now in PS 2.0 with the Register-WmiEvent cmdlet, we have a far prettier way to react to all kinds of events. In geek-speak, I can register a callback that fires when the event occurs.

Let’s go back to my mythical (and now famous) Acme Company, whose IT infrastructure is set up on my AWS environment.

Let’s say Bob, the sys admin, notices every so often that he’s running low on file space on the Salsa server. He suspects that Ted Bloatly, Acme’s CEO, is downloading huge files, likely audio files, into one of Bob’s directories and then moving them into Ted’s own server on Taco.

Bob wants to set a trap: when a large file is created in his home directory, he’ll be notified on his console.

To accomplish this, he’ll need to work with the CIM_DataFile class.  Instead of accessing processes, as we did above, Bob uses this class to connect with the underlying file metadata.

The CIM_DataFile object can be accessed directly in PowerShell.

Playing the part of Bob, I created the following Register-WmiEvent script, which will notify the console when a very large file is created in the home directory.

Register-WmiEvent -Query "SELECT * FROM __InstanceModificationEvent WITHIN 5 WHERE TargetInstance isa 'CIM_DataFile' and TargetInstance.FileSize > 2000000 and TargetInstance.Path = '\\Users\\bob\\' and targetInstance.Drive = 'C:' " -sourceIdentifier "Accessor3" -Action { Write-Host "Large file" $EventArgs.NewEvent.TargetInstance.Name "was created" }


Running this script directly from the Salsa console launches the Register-WmiEvent command in the background, assigning it a job number, and then only interacts with the console when the event is triggered.

In the next post, I’ll go into more details about what I’ve done here. Effectively, I’m using WQL to query the CIM_DataFile object — particularly anything in the \Users\bob directory that’s over 2 million bytes — and setting up a notification when a new file is created that fits these criteria. That’s where __InstanceModificationEvent comes into play.

Anyway, in my Bob role I launched the script from the PS command line, and then, putting on my Ted Bloatly hat, I copied a large mp4 into Bob’s directory. You can see the results below.

We now know that Bloatly is a fan of Melody Gardot. Who would have thunk it?

You begin to see some of the exciting possibilities with PowerShell as a tool to detect threat patterns and perhaps for doing a little behavior analytics.

We’ll be exploring these ideas in the next post.

Binge Read Our Pen Testing Active Directory Series

With winter storm Niko now on its extended road trip, it’s not too late, at least here on the East Coast, to make a few snow day plans. Sure, you can spend part of Thursday catching up on Black Mirror while scarfing down this slow cooker pork BBQ pizza. However, I have a healthier suggestion.

Why not binge on our amazing Pen Testing Active Directory Environments blog posts?

You’ve read parts of it, or — spoiler alert — perhaps heard about the exciting conclusion involving a depth-first-search of the derivative admin graph. But now’s your chance to totally immerse yourself and come back to work better informed about the Active Directory dark knowledge that hackers have known about for years.

And may we recommend eating these healthy soy-roasted kale chips while clicking below?

Episode 1: Crackmapexec and PowerView

Episode 2: Getting Stuff Done With PowerView

Episode 3: Chasing Power Users

Episode 4: Graph Fun

Episode 5: Admins and Graphs

Episode 6: The Final Case

Update: New York State Finalizes Cyber Rules for Financial Sector

When last we left New York State’s innovative cybercrime regulations, they were in a 45-day public commenting period. Let’s get caught up. The comments are now in. The rules were tweaked based on stakeholders’ feedback, and the regulations will begin a grace period starting March 1, 2017.

To save you the time, I did the heavy lifting and looked into the changes made by the regulators at the New York State Department of Financial Services (NYSDFS).

There are a few interesting ones to talk about. But before we get into them, let’s consider how important New York State — really New York City — is as a financial center.

Made in New York: Money!

To get a sense of what’s encompassed in the NYSDFS’s portfolio, I took a quick dip into their annual report.

For the insurance sector, they supervise almost 900 insurers with assets of $1.4 trillion that receive premiums of $361 billion. Under wholesale domestic and foreign banks — remember, New York has a global reach — they monitor 144 institutions with assets of $2.2 trillion. And I won’t even get into community and regional banks, mortgage brokers, and pension funds.

In a way, the NYSDFS has the regulatory power usually associated with a small country’s government. And therefore the rules that New York makes regarding data security have an outsized influence.

One Rule Remains the Same

Back to the rules. First, let’s look at one key part that was not changed.

NYSDFS received objections from the commenters on their definition of cyber events. This is at the center of the New York law—detecting, responding, and recovering from these events—so it’s important to take a closer look at its meaning.

Under the rules, a cybersecurity event is “any act or attempt, successful or unsuccessful, to gain unauthorized access to, disrupt or misuse an Information System or information …”

Some of the commenters didn’t like the inclusion of “attempt” and “unsuccessful”. But the New York regulators held firm and kept the definition as is.

A cybersecurity event is a broader notion than a data breach. For a data breach, there usually has to be data access and exposure or exfiltration. In New York State, though, access alone or an IT disruption, even a merely attempted or unsuccessful one, is considered an event.

As we’ve pointed out in our ransomware and the law cheat sheet, very few states in the US would classify a ransomware attack as a breach under their breach laws.

But in New York State, if ransomware (or a remote access trojan or other malware) was loaded on the victim’s server and perhaps abandoned or stopped by IT in mid-hack, it would indeed be a cybersecurity event.

Notification Worthy

This leads naturally to another rule, notification of a cybersecurity event to the New York State regulators, where the language was tightened.

The 72-hour time frame for reporting remains, but the clock starts ticking after a determination by the financial company that an event has occurred.

The financial companies were also given more wiggle room in the types of events that require notification: essentially the malware would need to “have a reasonable likelihood of materially harming any material part of the normal operation…”

That’s a mouthful.

In short: financial companies will notify the regulators at NYSDFS when the malware could seriously affect an operation that’s important to the company.

For example, malware that infects the digital console on the bank’s espresso machine is not notification worthy. But a key logger that lands in a bank’s foreign exchange area and is scooping up user passwords is very worthy.

The NYDFS’s updated notification rule language, by the way, puts it more in line with other data security laws, including the EU’s General Data Protection Regulation (GDPR).

So would you have to notify the New York State regulator when malware infects a server but hasn’t necessarily completed its evil mission?

Getting back to the language of “attempt” and “unsuccessful” found in the definition of cybersecurity events, it would appear that you would — but only if the malware lands on a server that’s important to the company’s operations, either because of the data it contains or its function.

State of Grace

The original regulation also said you had to appoint a Chief Information Security Officer (CISO) who’d be responsible for seeing that this cybersecurity regulation is carried out. Another important task of the CISO is to annually report to the board on the state of the company’s cybersecurity program.

With pushback from industry, this language was changed so that you can designate an existing employee as a CISO — likely a CIO or other C-level.

One final point to make is that the grace period for compliance has been changed. For most of the rules, it’s still 180 days.

But for certain requirements – multifactor authentication and penetration testing — the grace period has been extended to 12 months, and for a few others – audit trails, data retention, and the CISO report to the board — it’s been pushed out to 18 months.

For more details on the changes, check this legal note from our attorney friends at Hogan Lovells.

Pen Testing Active Directory Environments, Part I: Introduction to crackmapexec (and PowerView)

I was talking to a pen testing company recently at a data security conference to learn more about “day in the life” aspects of their trade. Their president told me that one of their initial obstacles in getting an engagement is fear from IT that the pen testers will bring down the system. As it turns out, IT has really nothing to be concerned about.

Some of the most interesting pen testing can be done by simply gathering information. I’ve covered some of these ideas in my “pen testing explained” series, where I showed that the more you know about your environment — IP addresses, computer names, users and especially admin accounts, as well as where sensitive content is — the better position you’re in as a hacker to get the goodies and do real damage to the victim.

As pen testers, we need to find these informational clues before the hackers (or insiders) do, and show the company the risks involved when attackers “live off the land.”

In this new series we’ll be focusing on how Active Directory can be used as an offensive tool. And we’ll learn more about PowerView, which is part of PowerShell Empire, a post-exploitation environment. PowerView essentially gives you easy access to AD information.

If you want to skip ahead, you can read more about PowerView here or watch this interesting presentation given by Will Schroeder, the founding father behind PowerView.

Hello crackmapexec

But before we get to that, let’s first look at another pen testing utility called crackmapexec.

Can crackmapexec really be described as a Swiss Army knife?

This term gets overused in the software world, but crackmapexec comes pretty close!

It’s similar to psexec with a bit of nessus thrown in, and it provides access to PowerView commands.

This multi-function Python-based software can also be had as a convenient self-contained binary, which you can download from here.

For my own testing of crackmapexec, I used the aforementioned binary.

As in my first exploration of pen testing, I set up a simple Windows domain using my amazin’ Amazon Web Services account.

This time I was a little better in my IT admin duties and had my domain controller and the rest of the network for my mythical Acme company up and running after only one espresso.

For the purposes of this post, let’s assume I’ve landed on one of the boxes in the network — perhaps through a phishing attack or by just guessing bad credentials.

Once in, the first bit of exploration you can perform as a pen tester is to get the lay of the land. Just like I did with nessus in my initial round of pen testing, I can also use crackmapexec to scan a subnet to see what else is out there.

By the way, you would need to initially have a logon name and password (or hash) to use this tool, and I’ll discuss in another post ways that pen testers can obtain these stepping stone credentials.
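
To give you a flavor of the tool, here are hedged examples of the commands behind the screenshots that follow. The subnet, host, and credentials are made up, and the exact syntax varies a bit across crackmapexec versions:

crackmapexec 10.0.0.0/24 -u bob -p 'S3cret!'                   # sweep the subnet with bob's credentials
crackmapexec 10.0.0.12 -u bob -p 'S3cret!' --lusers            # list logged-on users on one box
crackmapexec 10.0.0.12 -u bob -p 'S3cret!' -x 'dir C:\Users'   # run a remote command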

Eureka. I found another box on the Acme network where my “bob” credentials will let me in.


Another neat feature of crackmapexec is that it can display currently logged-on users with the --lusers parameter.


That’s interesting: there’s someone named SuperUser on my box. Part of the game of pen testing is to think like the hacker, and hackers love to spot accounts with potential domain level admin privileges.

Wouldn’t it be great to get the hashes of these users? In my initial pen testing series, I used a separate memory dumping utility and mimkitz, but with crackmapexec these basic hash dumping functions are built in.


Yum, hashes!

Anyway, we now know about another machine on the network from the scan. For kicks, let’s try running a command remotely on this other box with the -x parameter. In this particular scenario, I’m looking for interesting content.

After a bit of poking around, I found a potentially valuable file in a Documents folder.


Hmmm, it seems like an executive left a sensitive memo in a public area. I’m sure that never happens in the real world.

Of course, this is a made up scenario, but it’s similar in spirit to what pen testers would do during an engagement — probing for weaknesses and finding potential risks. Crackmapexec just makes this a whole lot easier.

A Taste of PowerView

PowerView is your portal into Active Directory domain data — really metadata — on users, groups, privileges, and more. The key point here is to really understand your environment from a risk perspective.

We’ll look into PowerView commands in more detail in the next post.

One thing to keep in mind is that hackers are very interested in finding users on the network with enhanced privileges.

Earlier we saw that SuperUser would be a good candidate for being one of those special users belonging to a Windows domain admin group.

Can we find out for sure?

We now use our first PowerView cmdlet, called Get-NetGroupMember, which displays, as you might have guessed, group membership.


As I suspected, SuperUser has domain admin privileges. And, yikes, I already have his hash, which I can then borrow to take over the account.

Game.Set.Match.

Obviously, you don’t want users with domain admin privileges remotely logging into employee workstations, and that would be part of the conclusions for this engagement.

But even for users who are not directly logged into the workstation you’ve landed on, having access to their Active Directory data opens other attack possibilities. This can include learning home addresses, personal email addresses, or cell phone numbers stored in AD, which can then be exploited, for example, in a social engineering attack.

We’re just scratching the surface in this initial post, and we’ll get more deeply into PowerView in the next one.

By the way, since PowerView is based on PowerShell, you can link commands into a pipeline, parsing the data and displaying the fields you’re really interested in.
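
For example, here’s a hedged sketch using the Get-NetGroupMember naming from the PowerView release current at the time, trimming the output down to just the fields we care about:

Get-NetGroupMember -GroupName "Domain Admins" |
    Select-Object MemberName, MemberDomain |
    Sort-Object MemberName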

Homework assignment before the next post: it’s a good idea to review our PowerShell post on cmdlets!

The Mirai Botnet Attack and Revenge of the Internet of Things

Once upon a time in early 2016, we were talking with pen tester Ken Munro about the security of IoT gadgetry — everything from wireless doorbells to coffee makers and other household appliances. I remember his answer when I asked about basic security in these devices. His reply: “You’re making a big step there, which is assuming that the manufacturer gave any thought to an attack from a hacker at all.”

Privacy by Design is not part of the vocabulary of the makers of these IoT gadgets.

If you take a look at Ken Munro’s blog at Pen Test Partners, the security testing company he founded, it’s filled with examples of vendors’ PbD negligence. In fact, he has a few hacking scenarios worked out for IP cameras, just the kind of IoT devices that were implicated in last week’s Mirai attack.

Once you understand some of his techniques, you come away with what I’m calling Munro’s golden rule when dealing with consumer-grade IoT: change the bleepin’ defaults if you want to prevent hackers from owning your IoT thing.

When hackers get control of, say, an Internet-connected coffee maker — because of an easy-to-guess default admin password that wasn’t changed — they often also find the WiFi’s PSK stored on the device in plain text as well. It gets pretty serious very quickly from there.

And Then Along Came Mirai

These types of attacks, though, require the hacker to be within range of the WiFi signal, then deauthenticate the IoT gadget with, say, aireplay-ng, and force it to reconnect to the hacker’s own access point. So you have more to worry about from neighborhood kids or the occasional wardriving attacker.

I came away from our interview with Munro thinking the baseline security of consumer IoT is, well, not very high, but also relatively benign, involving security weaknesses known to only a few security pros.

The Mirai attack last week changed all that.  The Internet of Insecure Things became a topic for coverage in even the non-technical media.  I was reading a good description in, of all places, Forbes of how cameras like the ones Munro tested were taken over by bots in the Mirai-based DDoS assault against DNS provider Dyn.

There’s still much we don’t know about the Dyn incident, but it’s clear that WiFi cameras were involved — perhaps up to 30,000.

So how were the hackers able to scale up the camera attack that I referenced above to take over thousands of WiFi cameras worldwide?

Certainly they weren’t sending scores of self-driving cars equipped with Yagi antennas to roam the roads.

At least, not yet.

Doomed by Default

Instead they realized that consumers, like their counterparts in IT organizations, are bad at changing default settings. I mean if security admins and other super users can’t update default passwords to other than ‘123456’, then how can ordinary humans do any better?

What I thought would be the hardest part for hackers to pull off — discovering the address of the cameras — became relatively easy for them because of a default setting in WiFi routers that non-tech consumers never bother to reset.

Does anyone know what UPnP or Universal Plug and Play is?

I thought so.

UPnP is a protocol that lets networked devices automatically open a port on the router so you can communicate with your gadget remotely.

The UPnP idea makes some sense in the context of WiFi cameras:  you might want to remotely view what’s going on in your kitchen or the kid’s room over a browser, or perhaps do some remote admin work using telnet. In that case, you would need a public port that’s mapped by your router to the device. UPnP does the mapping for you.

UPnP is convenient, but can lead to a less secure environment if you’re not careful about strengthening admin passwords to the device.

The hackers behind the recent Mirai attack exploited the consumer’s default-itis. Specifically, they took a brute force approach, scanning tens or even hundreds of thousands of routers worldwide searching for exposed telnet ports, which were likely added by UPnP.

They then gained shell access by trying a short series of typical default passwords —“12345”, “admin”, etc. —  until they succeeded in logging in. If not, they moved on to the next router.

Once on the gadget, which was in many cases a specific brand of wireless camera, they loaded the Mirai software, transforming the device into a bot spewing out UDP packets targeted at Dyn.

Do The Right Thing Right Now

Like many consumers, I was unaware of the UPnP capabilities of my router. After checking my own home router, I discovered that UPnP was (long excruciating sigh) enabled. I’m fairly sure I never touched this setting, so this must have been the default from the vendor — Linksys in my case.

As a public service, I worked out the following list to help you avoid becoming an unwilling participant in the next IoT-based DDoS attack:

1. If you’re a consumer or small business person, I would disable the UPnP feature now! To inspire and help you, below is a snapshot of my Linksys router’s admin screen with the UPnP switch.


Don’t be like me. Disable the UPnP.

2. Take the time now to also update the admin password of your router and the WiFi PSK. And make these high-entropy – 8 or more characters of the battery-horse-staple variety. This would involve resetting the PSK on any other computers or devices already connected to the WiFi network.

3. With UPnP disabled, going forward when you add new devices, you would have to expose the relevant ports manually. That is, if you decide that you absolutely need remote access.

To do this, open up the port forwarding panel on the router and explicitly add a mapping entry. Refer to the manual for your device — camera, coffee maker, refrigerator, etc. — for details. It’s more work with this approach, but now you’ll be forced to really think about what you’re doing when adding IoT gadgetry.


Get to know your router’s port forwarding table. You may need to manually add entries. And remember to set strong passwords for exposed devices!


4. For any IoT devices already on the network, change their admin passwords, making them longer and more complex! Sure, you’ve disabled outside access to these gadgets by disabling UPnP. But you still have to worry about the local attacks I first wrote about in the beginning of this post.

5. Finally, download and run nmap on your WiFi network. It’s the port scanning tool I wrote about a few months back. You’ll get a report showing what ports are publicly exposed on your WiFi network. I’d then study the results to see if it matches your understanding of the apps or devices that have access to the great wide world.

If you’re like me, it’s just one or (more likely) zero ports.  If not, then you’ll have to do more investigation.


One more point to note: nmap requires the public IP address of your router, not its internal private IP address — e.g., 192.168.x.1. To find out what the address is for your network, enter “what is my IP address” into the Google search bar on your browser.


Google knows everything, including your true public IP address.

Enter the IP address that Google returns into nmap for its deep-dive scan.
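
A basic deep-dive scan then looks like this (the address below is a documentation placeholder, not a real router):

nmap -Pn 203.0.113.17    # -Pn skips the ping check, since many routers drop ICMP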

Designed to Fail

There’s enough blame in the Mirai incident to spread around to everyone involved: vendors, the public, and the government as well.

First, vendors need to up their game before releasing IoT devices. In Munro’s blog, he presents so many basic authentication and password problems with the IoT devices he’s tested as to almost make it seem as if they were ‘designed to be hacked.’

It’s that bad.

One step vendors can take is to encrypt and sign the downloadable firmware that’s available on their sites, thereby making it harder for hackers to work out their exploits by scanning the binaries for certain strings.  Or at least store WiFi passwords in encrypted form on the device.

Sheeple! We as consumers need to cultivate a security mindset as well. Don’t assume that the IoT gadgets you install take care of everything. Ask questions! If you’re not being directed to update an admin password during installation, something is wrong. If that happens, put the gadget back in its box.

Finally, when we buy electronic devices, we know they’ve passed some basic tests and received approvals — Underwriters Laboratories, FCC, etc.

What about a privacy and security certification?

Some of the cameras that were hacked in the Mirai should never have been released on the market. Currently, there’s no data security equivalent of the FCC’s electronic compliance standard.

Perhaps there should be.  Or at least the IoT industry should set up their own security standards and enforce them.

In the near future, we can expect more incidents in which the default settings of IoT devices will continue to be exploited.  Want to get a better handle on the IoT environment?  Listen to the first part of our interview with Ken Munro.

[Podcast] IoT Pen Tester Ken Munro: Security Holes (Part 1)

If you want to understand the ways of a pen tester, Ken Munro is a good person to listen to. An info security veteran for over 15 years and founder of UK-based Pen Test Partners, his work in hacking into consumer devices — particularly coffee makers — has earned lots of respect from vendors. He’s also been featured on the BBC News.

You quickly learn from Ken that pen testers, besides having amazing technical skills, are at heart excellent researchers.

They thoroughly read the device documentation and examine firmware and coding like a good QA tester. You begin to wonder why tech companies, particularly the ones making IoT gadgets, don’t run their devices past him first!

There is a reason.

According to Ken, when you’re a small company under pressure to get product out, especially IoT things, you end up sacrificing security. It’s just the current economics of startups. This approach may not have been a problem in the past, but in the age of hacker ecosystems and public tools such as wigle.net, you’re asking for trouble.

The audio suffered a little from the delay in our UK-NYC connection, and let’s just say my Skype conferencing skills need work.

Anyway, we join Ken as he discusses how he found major security holes in wireless doorbells and coffee makers that allowed him to get the PSK (pre-shared keys) of the WiFi network that’s connected to them.



New Varonis eBook Helps IT Kickstart Their Pen Testing Program

Hackers are able to break into systems and move around easily without being detected. How is this possible with so much invested in firewalls, malware scanners, and other intrusion detection software? Answer: oftentimes, no one has really taken the IT system out for a security test drive.

Real-world security involves simulating an attack and then finding and correcting weaknesses and vulnerabilities. Reading the specs of your vendor’s security software and kicking the IT tires won’t cut it anymore in an era of socially engineered attacks and stealthy malware.

This is where penetration testing, also known as ethical hacking, comes into play. Pen testing involves taking on the role of the attacker and using their techniques to hack in and then find and remove relevant data.

In our latest ebook, Getting Started With Penetration Testing: Seeing IT Security From The Hacker’s Point Of View, you’ll get an overview of the art of ethical hacking.

The book eases you into this complex subject by taking you through a few testing scenarios. Along the way, you’ll learn about remote access trojans (RATs), reverse shells, password cracking, and pass the hash.

Required by more and more security standards, such as PCI DSS and NIST, pen testing should be an important part of your own security assessment work.

Our new ebook will provide the background and understanding you need to launch your own formal pen testing program. Download it today!