All posts by Andy Green

[Podcast] Mintz Levin’s Sue Foster on the GDPR, Part II

In this second part of our interview with attorney and GDPR pro Sue Foster, we get into a cyber topic that’s been on everyone’s mind lately: ransomware.

A ransomware attack on EU personal data is unquestionably a breach —  “accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access  …”

But would it be reportable under the GDPR, which goes into effect next year?

In other words, would an EU company (or US one as well) have to notify a DPA and affected customers within the 72-hour window after being attacked by, say, WannaCry?

If you go by the language of the law, the answer is a definite …  no!

Foster explains that for it to be reportable, a breach has to cause a risk “to the rights and freedoms of natural persons.”  For what this legalese really means, you’ll just have to listen to the podcast. (Hint: it refers to a fundamental document of the EU.)

Anyway, personal data that’s encrypted by ransomware and not taken off premises is not much of a risk for anybody. There are more subtleties involving ransomware and other EU data laws that are best explained by her, so you’ll just have to listen to Sue’s legal advice directly!

There’s also a very interesting analysis by Foster of the implications of the GDPR for Internet-of-Things gadget makers.

Come for the ransomware, but stay for the IoT:

Planet Ransomware

If you were expecting a quiet Friday in terms of cyberattacks, this ain’t it. There are reports of a massive ransomware attack affecting computers on a global scale: in the UK, Spain, Russia, Ukraine, Japan, and Taiwan.

The ransomware variant that’s doing the damage is called WCry, also known as WannaCry or WanaCrypt0r. It has so far claimed some high-profile targets, including NHS hospitals in the UK, and telecom and banking companies in Spain.

Keep calm and carry on, of course.

In the blog, we’ve been writing about ransomware over the last two years, and we have great educational resources to help you prevent or reduce the damage of an attack.

Here’s a quick overview of our content.

What is it?

Our ransomware guide: https://blog.varonis.com/the-complete-ransomware-guide/ 

Learning more

The Troy Hunt course: https://blog.varonis.com/introduction-to-ransomware-course/

How it spreads

Yes, it can have worm-like features: https://blog.varonis.com/next-gen-ransomware-ransomworm-gets-deadlier/

Can I make my own (for research purposes)?

Yes, but only under adult supervision:

https://blog.varonis.com/malware-coding-lessons-for-it-people-part-ii-more-fun-with-fud-ransomware/

https://blog.varonis.com/malware-coding-lessons-people-part-learning-write-custom-fud-fully-undetected-malware/

Reducing the risk

Limiting file access really, really helps: https://blog.varonis.com/the-best-ransomware-defense-dont-have-files/

Legal and Regulatory Implications

For US companies, this is what you need to know: https://blog.varonis.com/ransomware-the-legal-cheat-sheet-for-breach-notification/

Should you pay?

It depends:

https://blog.varonis.com/should-the-website-that-infected-a-pc-with-ransomware-pay/

https://blog.varonis.com/hospital-paid-ransom-didnt-get-all-files-back/

Is a decryption solution available?

Check here: https://www.varonis.com/ransomware-identifier/

The ultimate answer to ransomware

User Behavior Analytics (UBA): https://blog.varonis.com/why-uba-will-catch-the-zero-day-ransomware-attacks-that-endpoint-protection-cant/

And here’s proof:  https://www.varonis.com/ransomware-solutions

[Podcast] Mintz Levin’s Sue Foster on the GDPR, Part I

Sue Foster is a London-based partner at Mintz Levin. She has a gift for explaining the subtleties in the EU General Data Protection Regulation (GDPR). In this first part of our interview, Foster discusses how the GDPR’s new extraterritoriality rule would place US companies under the law’s data obligations.

In the blog, we’ve written about some of the implications of the GDPR’s Article 3, which covers the law’s territorial scope. In short: if you market online to EU consumers — web copy, say, in the language of some EU country  — then you’ll fall under the GDPR. And this also means you would have to report data exposures under the GDPR’s new 72-hour breach rule.

Foster points out that if a US company merely happens to attract EU consumers through its overall marketing, it would not fall under the law.

So a cheddar cheese producer from Wisconsin whose web site gets the attention and business of French-based fromage lovers is not required to protect their data at the level of the GDPR.

There’s another snag for US companies: an update to the EU’s ePrivacy Directive, which places restrictions on embedded communication services. Foster explains how companies, not necessarily ISPs, that provide messaging — that means you, WhatsApp, Skype, and Gmail — would fall under this law’s privacy rules.

Sue’s insights on these and other topics will be relevant to both corporate privacy officers and IT security folks.

Listen and learn:

Practical PowerShell for IT Security, Part IV:  Security Scripting Platform (SSP)

In the previous post in this series, I suggested that it may be possible to unify my separate scripts — one for event handling, the other for classification — into a single system. Dare I say it, a security platform based on pure PowerShell code?

After I worked out a few details, mostly having to do with migraine-inducing PowerShell events, I was able to declare victory and register my patent for SSP, the Security Scripting Platform ©.

United States of PowerShell

While I’m having an unforgettable PowerShell adventure, I realize that a few of you may not be able to recall my recent scripting handiwork.

Let’s review together.

In the first post, I introduced the amazing one line of PowerShell that watched for file events and triggered a PS-style script block — that is, a piece of scripting code that runs in its own memory space.

Register-WmiEvent -Query "SELECT * FROM __InstanceModificationEvent WITHIN 5 WHERE TargetInstance ISA 'CIM_DataFile' and TargetInstance.Path = '\\Users\\bob\\' and targetInstance.Drive = 'C:' and (targetInstance.Extension = 'doc' or targetInstance.Extension = 'txt') and targetInstance.LastAccessed > '$($cur)' " -sourceIdentifier "Accessor" -Action $action

With a little more scripting sauce, I then cooked up what I called a File Access Analytics (FAA) app. Effectively, it tallies up access events and displays some basic stats, and also detects bursts of access activity, which could indicate hacking behavior.

It’s a simplified version of User Behavior Analytics (UBA) technology that we’re keen on here at Varonis.

So far, so good.

Then in the third post, I showed how relatively easy it is with PowerShell to scan and classify files in a folder. Since this is a disk-intensive activity, it makes incredible sense to use PowerShell’s multi-tasking capability, known as Runspaces, to speed up the classification work.

In real-world file event handling and data classification systems, such as Varonis’s Data Classification Framework, a more optimized approach to categorizing file content is to feed file events into the classification engine.

Why?

Because then you don’t have to reclassify file content from scratch: you only look at the files that have changed. My classifier script would therefore benefit greatly by knowing something about file modification events.

That’s the approach I took with SSP.

Varonis’s own agents, which catch Linux or Windows file events, are finely tuned low-level code. For this kind of work, you want the code to be lean, mean, and completely focused on collecting events and to quickly pass this info to other apps that can do higher-level processing.

So I took my original event handling script, streamlined it and removed all the code to display statistics. I then reworked the classifier to scan for specific files that have been modified.

Basically, it’s a classic combination of a back-end engine coupled with a front-end interface.

The question is how to connect the two scripts: how do I tell the classifier that there’s been a file event?

Messages and Events

After spending a few long afternoons scanning the dev forums, I eventually stumbled upon PowerShell’s Register-EngineEvent.

What is this PowerShell cmdlet?

In my mind, it’s a way to pass messages using a named event that you can share between scripts. It works a little bit differently from traditional system messaging and queues since the received message asynchronously triggers a PowerShell script block. This’ll become clearer below.

In any case, Register-EngineEvent has two faces. With the -Forward parameter, it acts as a publisher. And without the -Forward parameter, it takes on the role of a receiver.

Got that?

I used the event name Delta — technically the SourceIdentifier — to coordinate between my event handler script, which pushes out event messages, and my classifier script, which receives these messages.

In the first of the two script snippets below, I show how I register the public Delta event name with Register-EngineEvent -Forward, and then wait for internal file access events. When one comes in, I send the internal file event message — in PowerShell-speak, it’s forwarded — to the corresponding Register-EngineEvent in the classifier script in the second snippet.

Register-EngineEvent -SourceIdentifier Delta -Forward
While ($true) {
   $args=Wait-Event -SourceIdentifier Access  # wait on internal file event
    Remove-Event -SourceIdentifier Access
    if ($args.MessageData -eq "Access") {  
       #do some plain access processing 
       New-Event -SourceIdentifier Delta -EventArguments $args.SourceArgs -MessageData $args.MessageData  #send event to classifier via forwarding
     }
    elseif ($args.MessageData -eq "Burst") {
       #do some burst processing
       New-Event -SourceIdentifier Delta -EventArguments $args.SourceArgs  -MessageData $args.MessageData #send event to classifier via forwarding
     }
}

On the receiving side, I leave out the -Forward parameter and instead pass in a PowerShell script block, which asynchronously handles the event. You can see this below.

Register-EngineEvent -SourceIdentifier Delta -Action {
    
      Remove-Event -SourceIdentifier Delta
      if($event.MessageData -eq "Access") {
        $filename = $args[0] #got file!
         Lock-Object $deltafile.SyncRoot{ $deltafile[$filename]=1} #lock&load            
      }
      elseif ($event.Messagedata -eq "Burst") {
        #do something     
      }

}

Confused? And have I mentioned recently that file event handling is not easy, and that my toy scripts won’t stand up to business-level processing?

This gets messy because the New-Event and Wait-Event cmdlets for internal event messaging are different from the external event messaging provided by Register-EngineEvent.
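
If the internal-versus-forwarded distinction still feels abstract, here it is boiled down to a toy pair (the Ping event name and the message text are invented, but the shape mirrors how my two scripts talk to each other):

# receiver side (main session): no -Forward, just react to anything named Ping
Register-EngineEvent -SourceIdentifier Ping -Action {
    Remove-Event -SourceIdentifier Ping
    Write-Host "received: $($event.MessageData)"
} | Out-Null

# publisher side: a background job whose Ping events get forwarded up to the main session
Start-Job -ScriptBlock {
    Register-EngineEvent -SourceIdentifier Ping -Forward
    New-Event -SourceIdentifier Ping -MessageData "file changed" | Out-Null
    Start-Sleep -Seconds 5   # give the forwarding machinery a moment before the job exits
} | Out-Null

The only coordination between the two sides is the shared SourceIdentifier, which is exactly the job Delta does for my event handler and classifier.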

More Messiness

The full classification script is presented below. I’ll talk a little more about it in the next and final post in this series. In the meantime, gaze upon it in all its event-handling and multi-tasking glory.

Import-Module -Name .\pslock.psm1 -Verbose
function updatecnts {
Param ( 
        [parameter(position=1)]  
        $match, 
        [parameter(position=2)]
        $obj
        )

for($j=0; $j -lt $match.Count;$j=$j+2) {    
        switch -wildcard ($match[$j]) {
          'Top*'  { $obj| Add-Member -Force -type NoteProperty -Name Secret   -Value $match[$j+1] }
          'Sens*' { $obj|  Add-Member -Force -type NoteProperty -Name Sensitive -Value $match[$j+1] }
          'Numb*' { $obj|  Add-Member -Force -type NoteProperty -Name Numbers  -Value $match[$j+1] }           
         }
         
      }

  return $obj
}
  
$scan = {
$name=$args[0]
function scan {
   Param (
      [parameter(position=1)]
      [string] $Name
   )
      $classify =@{"Top Secret"=[regex]'[tT]op [sS]ecret'; "Sensitive"=[regex]'([Cc]onfidential)|([sS]nowflake)'; "Numbers"=[regex]'[0-9]{3}-[0-9]{2}-[0-9]{3}' }
     
      $data = Get-Content $Name
      
      $cnts= @()
      
      if($data.Length -eq 0) { return $cnts} 
      
      foreach ($key in $classify.Keys) {
       
        $m=$classify[$key].matches($data)           
           
        if($m.Count -gt 0) {
           $cnts+= @($key,$m.Count)  
        }
      }   
 $cnts   
}
scan $name
}

 
$outarray = @() #where I keep classification stats
$deltafile = [hashtable]::Synchronized(@{})  #hold file events for master loop 

$list=Get-WmiObject -Query "SELECT * From CIM_DataFile where Path = '\\Users\\bob\\' and Drive = 'C:' and (Extension = 'txt' or Extension = 'doc' or Extension = 'rtf')"  


#long list --let's multithread

#runspace
$RunspacePool = [RunspaceFactory]::CreateRunspacePool(1,5)
$RunspacePool.Open()
$Tasks = @()


foreach ($item in $list) {
  
  $Task = [powershell]::Create().AddScript($scan).AddArgument($item.Name)
  $Task.RunspacePool = $RunspacePool
  
  $status= $Task.BeginInvoke()
  $Tasks += @($status,$Task,$item.Name)
}


Register-EngineEvent -SourceIdentifier Delta -Action {
    
      Remove-Event -SourceIdentifier Delta
      if($event.MessageData -eq "Access") {
        $filename = $args[0] #got file
         Lock-Object $deltafile.SyncRoot{ $deltafile[$filename]=1} #lock& load
      }
      elseif ($event.Messagedata -eq "Burst") {
        #do something
      }
}

while ($Tasks.isCompleted -contains $false){
   Start-Sleep -Milliseconds 250   #don't spin the CPU while the runspace tasks finish
}

#check results of tasks
for ($i=0; $i -lt $Tasks.Count; $i=$i+3){
   $match=$Tasks[$i+1].EndInvoke($Tasks[$i])
   
  
   if ($match.Count -gt 0) {  # update classification array
      $obj = New-Object System.Object
      $obj | Add-Member -type NoteProperty -Name File   -Value $Tasks[$i+2]
      #defaults
      $obj| Add-Member -type NoteProperty -Name Secret -Value 0
      $obj| Add-Member -type NoteProperty -Name Sensitive -Value 0
      $obj| Add-Member -type NoteProperty -Name Numbers -Value 0

      $obj=updatecnts $match $obj
      $outarray += $obj
   } 
   $Tasks[$i+1].Dispose()
   
}

$outarray | Out-GridView -Title "Content Classification" #display

#run event handler as a separate job
Start-Job -Name EventHandler -ScriptBlock({C:\Users\bob\Documents\evhandler.ps1})  #run event handler in background


while ($true) { #the master executive loop
   
   
      Start-Sleep -seconds 10
      Lock-Object $deltafile.SyncRoot { #lock and iterate through synchronized list
        foreach ($key in $deltafile.Keys) {  
    
        $filename=$key
       
        if($deltafile[$key] -eq 0) { continue} #nothing new

        $deltafile[$key]=0
        $match = & $scan $filename  #run scriptblock
                                        #incremental part
       
        $found=$false
        $class=$false
        if($match.Count -gt 0) 
            {$class =$true} #found sensitive data
        if($outarray.File -contains $filename) 
                {$found = $true} #already in the array  
        if (!$found -and !$class){continue}
 
        #let's add/update
        if (!$found) {

            $obj = New-Object System.Object
            $obj | Add-Member -type NoteProperty -Name File   -Value $filename   #the file that just changed
            #defaults
            $obj| Add-Member -type NoteProperty -Name Secret -Value 0
            $obj| Add-Member -type NoteProperty -Name Sensitive -Value 0
            $obj| Add-Member -type NoteProperty -Name Numbers -Value 0

            $obj=updatecnts $match $obj
            $outarray += $obj   #add the newly classified file to the results
        }
        else {
            $outarray|? {$_.File -eq $filename} | % { updatecnts $match $_} 
        }
        $outarray | Out-GridView -Title "Content Classification ( $(get-date -format 'M/d/yy HH:mm') )"
        
       } #foreach

    } #lock
}#while

Write-Host "Done!"

In short, the classifier does an initial sweep of the files in a folder, stores the classification results in $outarray, and then when a file modification event shows up, it updates and displays $outarray with new classification data. In other words, incremental scanning.

There’s a small side issue: the event handler can add entries to the shared $deltafile hashtable at any time, while in another part of the classification script I’m iterating through that same variable to see what’s changed.

It’s a classic race condition. And the way I chose to handle it is to use PowerShell’s synchronized variables.
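
As a preview, the pattern looks like this. Lock-Object comes from the pslock.psm1 module imported at the top of my script, not from PowerShell itself, and the file name here is made up:

# one hashtable, created synchronized and shared by the event handler and the master loop
$deltafile = [hashtable]::Synchronized(@{})

# writer side (the Delta event handler): take the lock, flag the file as dirty
Lock-Object $deltafile.SyncRoot { $deltafile['C:\Users\bob\Documents\plan.txt'] = 1 }

# reader side (the master loop): take the same lock before walking the keys
Lock-Object $deltafile.SyncRoot {
    foreach ($key in $deltafile.Keys) {
        if ($deltafile[$key] -eq 1) { "changed: $key" }
    }
}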

I’ll talk more about this mystifying PowerShell feature in the next post, and conclude with some words of advice on rolling-your-own solutions.

ITRC: 2017 Data Breaches on Record Pace

The Identity Theft Resource Center (ITRC) is this blog’s go-to source for current breach statistics. As of April 18, the ITRC breach count has reached 456 incidents. That puts us ahead of last year’s sizzling pace of 356 for the same period.

If you do the math, then at this rate the number of breaches will reach 1500 by the end of 2017. And that’s way ahead of 2016’s record-setting count of 1093 breaches.

What’s going on?

Some of the increase is because ITRC has been widening its data collection net by contacting more state attorneys general and sending out more FOIA requests.

(Source: ITRC)

But there’s just been a lot more hacking activity.

For example, the IRS noted back in February that there had been a 400% surge in spear phishing against CEOs.

In other words, it’s more of the same — actually, a lot more of the same — basic techniques that commenters, like one blog associated with a leading data security company, have been pointing out for years.

For those skeptics who don’t believe it’s possible to inflict a serious data breach using techniques and approaches that a smart 15-year-old could master, I offer three such incidents, all based on breach letters and other sources that ITRC obtained:

  • Guessing simple passwords
  • Pretexting
  • Phishing

We’ve been pointing out how basic block-and-tackle techniques such as enforcing stronger password policies and two-factor authentication, implementing least-privileged access, and conducting basic security training (especially on phishing) can go a long way towards preventing breaches and reducing risks of data exfiltration.

After reading through the ITRC cases, it’s clear that we Americans need to really up our game to … make American data security great again.

MADSGA!

Data Security Compliance and DatAdvantage, Part III:  Protect and Monitor

At the end of the previous post, we took up the nuts-and-bolts issues of protecting sensitive data in an organization’s file system. One popular approach, the least-privileged access model, is often explicitly mentioned in compliance standards, such as NIST 800-53 or PCI DSS. Varonis DatAdvantage and DataPrivilege provide a convenient way to accomplish this.

Ownership Management

Let’s start with DatAdvantage. We saw last time that DA provides graphical support for helping to identify data ownership.

If you want to get more granular than just seeing who’s been accessing a folder, you can view the actual access statistics of the top users with the Statistics tab (below).

This is a great help in understanding who is really using the folder. The ultimate goal is to find the true users, and remove extraneous groups and users who perhaps needed occasional access at one point but not as part of their job role.

The key point is to first determine the folder’s owner — the one who has the real knowledge and wisdom of what the folder is all about. This may require some legwork on IT’s part in talking to the users, based on the DatAdvantage stats, and working out the real chain of command.

Once you use DatAdvantage to set the folder owners (below), these more informed power users, as we’ll see, can independently manage who gets access and whose access should be removed. The folder owner will also automatically receive DatAdvantage reports, which will help guide them in making future access decisions.

There’s another important point to make before we move on. IT has long been responsible for provisioning access without knowing the business purpose. Varonis DatAdvantage assists IT in finding these owners and then giving them the access-granting powers.

Anyway, once the owner has done the housekeeping of paring down and removing unnecessary folder groups, they’ll then want to put into place a process for permission management. Data standards and laws recognize the importance of having security policies and procedures as part of an on-going program – i.e., not something an owner does once a year.

And Varonis has an important part to play here.

Maintaining Least-Privileged Access

How do ordinary users whose job role now requires them to access a managed folder request permission from the owner?

This is where Varonis DataPrivilege makes an appearance. Regular users will need to bring this interface up (below) to formally request access to a managed folder.

The owner of the folder has a parallel interface from which to receive these requests and then grant or revoke permissions.

As I mentioned above, these security ideas for least-privileged access and permission management are often explicitly part of compliance standards and data security laws. Building on my list from the previous post, here’s a more complete enumeration of controls that Varonis DatAdvantage supports:

  • NIST 800-53: AC-2, AC-3, AC-5, CM-5
  • NIST 800-171: 3.1.4, 3.1.5, 3.4.5
  • PCI DSS 3.x: 7.1,7.2
  • HIPAA: 45 CFR 164.312 a(1), 164.308a(4)
  • ISO 27001: A.6.1.2, A.9.1.2, A.9.2.3, A.11.2.2
  • CIS Critical Security Controls: 14.4
  • New York State DFS Cybersecurity Regulations: 500.07

Stale Sensitive Data

Minimization is an important theme in security standards and laws. These ideas are best represented in the principles of Privacy by Design (PbD), which has good overall advice on this subject: minimize the sensitive data you collect, minimize who gets to see it, and minimize how long you keep it.

Let’s address the last point, which goes under the more familiar name of data retention. One low-hanging fruit to reducing security risks is to delete or archive sensitive data embedded in files.

This makes incredible sense, of course. This stale data can be, for example, consumer PII collected in short-term marketing campaigns but now residing in dusty spreadsheets or rusting management presentations.

Your organization may no longer need it, but it’s just the kind of monetizable data that hackers love to get their hands on.

As we saw in the first post, which focused on Identification, DatAdvantage can find and identify file data that hasn’t been used since a certain threshold date.

Can the stale data report be tweaked to find stale data that is also sensitive?

Affirmative.

You need to add the hit count filter and set the number of sensitive data matches to an appropriate number.

In my test environment, I discovered that the C:\Share\pvcs folder hasn’t been touched in over a year and has some sensitive data.

The next step is then to take a visit to the Data Transport Engine (DTE) available in DatAdvantage (from the Tools menu). It allows you to create a rule that will search for files to archive and delete if necessary.

In my case, my rule’s search criteria mirrors the same filters used in generating the report. The rule is doing the real heavy-lifting of removing the stale, sensitive data.

Since the rule is saved, it can be rerun to enforce the retention limits. Even better, DTE can automatically run the rule on a periodic basis, so you never have to worry about stale sensitive data in your file system.
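
If you just want a quick, product-free sense of where stale files are piling up before building a DTE rule, a rough PowerShell sweep can give you a first cut. This is purely illustrative: the path is from my test setup, it checks only last-write times, and it knows nothing about whether the content is sensitive.

# files under the share that haven't been written to in over a year
Get-ChildItem -Path 'C:\Share' -Recurse -File |
    Where-Object { $_.LastWriteTime -lt (Get-Date).AddYears(-1) } |
    Select-Object FullName, LastWriteTime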

Requirements for implementing data retention policies can be found in the following security standards and regulations:

  • NIST 800-53: SI-12
  • PCI DSS 3.x: 3.1
  • CIS Critical Security Controls: 14.7
  • New York State DFS Cybersecurity Regulations: 500.13
  • EU General Data Protection Regulation (GDPR): Article 25.2

Detecting and Monitoring

Following the order of the NIST higher-level security control categories from the first post, we now arrive at our final destination in this series, Detect.

No data security strategy is foolproof, so you need a secondary defense based on detection and monitoring controls: effectively you’re watching the system and looking for unusual activities.

Varonis, and specifically DatAlert, has a unique role in detection because its underlying security platform is based on monitoring file system activity.

By now everyone knows (or should know) that phishing and injection attacks allow hackers to get around network defenses as they borrow existing users’ credentials, and fully-undetectable (FUD) malware means they can avoid detection by virus scanners.

So how do you detect the new generation of stealthy attackers?

No attacker can avoid using the file system to load their software, copy files, and crawl a directory hierarchy looking for sensitive data to exfiltrate.  If you can spot their unique file activity patterns, then you can stop them before they remove or exfiltrate the data.

We can’t cover all of DatAlert’s capabilities in this post — probably a good topic for a separate series! — but since it has deep insight into all file system information and events, and histories of user behavior, it’s in a powerful position to determine what’s out of the normal range for a user account.

We call this user behavior analytics or UBA, and DatAlert comes bundled with a suite of UBA threat models (below).  You’re free to add your own, of course, but the pre-defined models are quite powerful as is. They include detecting crypto intrusions, ransomware activity, unusual user access to sensitive data, unusual access to files containing credentials, and more.

All the alerts that are triggered can be tracked from the DatAlert Dashboard.  IT staff can either intervene and respond manually or even set up scripts to run automatically — for example, automatically disable accounts.

If a specific data security law or regulation requires a breach notification to be sent to an authority, DatAlert can provide some of the information that’s typically required – files that were accessed, types of data, etc.

Let’s close out this post with a final list of detection and response controls in data standards and laws that DatAlert can help support:

  • NIST 800-53: SI-4, AU-13, IR-4
  • PCI DSS 3.x: 10.1, 10.2, 10.6
  • CIS Critical Security Controls: 5.1, 6.4, 8.1
  • HIPAA: 45 CFR 164.400-164.414
  • ISO 27001: A.16.1.1, A.16.1.4
  • New York State DFS Cybersecurity Regulations: 500.02, 500.16, 500.27
  • EU General Data Protection Regulation (GDPR): Article 33, 34
  • Most US states have breach notification rules

Data Security Compliance and DatAdvantage, Part II:  More on Risk Assessment

I can’t really overstate the importance of risk assessments in data security standards. It’s really at the core of everything you subsequently do in a security program. In this post we’ll finish discussing how DatAdvantage helps support many of the risk assessment controls that are in just about every security law, regulation, or industry security standard.

Last time, we saw that risk assessments were part of NIST’s Identify category. In short: you’re identifying the risks and vulnerabilities in your IT system. Of course, at Varonis we’re specifically focused on sensitive plain-text data scattered around an organization’s file system.

Identify Sensitive Files in Your File System

As we all know from major breaches over the last few years, poorly protected folders are where the action is for hackers: they’ve been focusing their efforts there as well.

The DatAdvantage 2b report is the go-to report for finding sensitive data across all folders, not just the ones with global permissions that are listed in the 12L report. Varonis uses various built-in filters or rules to decide what’s considered sensitive.

I counted about 40 or so such rules, covering credit card, social security, and various personal identifiers that are required to be protected by HIPAA and other laws.

In the test system on which I ran the 2b report, the \share\legal\Corporate folder was snagged by the aforementioned filters.

Identify Risky and Unnecessary Users Accessing Folders

We now have a folder that is a potential source of data security risk. What else do we want to identify?

The users that have accessed this folder are a good starting point.

There are a few ways to do this with DatAdvantage, but let’s just work with the raw access audit log of every file event on a server, which is available in the 2a report. By adding a directory path filter, I was able to narrow down the results to the folder I was interested in.

So now we at least know who’s really using this specific folder (and sub-folders). Oftentimes this is a far smaller pool of users than has been enabled through the group permissions on the folders. In any case, this should be the basis of a risk assessment discussion to craft more tightly focused groups for this folder and to set an owner who can then manage the content.

In the Review Area of DatAdvantage, there’s more graphical support for finding users accessing folders, the percentage of the Active Directory group who are actually using the folder, as well as recommendations for groups that should be accessing the folder. We’ll explore this section of DatAdvantage further below.

For now, let’s just stick to the DatAdvantage reports since there’s so much risk assessment power bundled into them.

Another similar discussion can be based on using the 12L report to analyze folders that contain sensitive data but have global access – i.e., include the Everyone group.

There are two ways to think about this very obvious risk. You can remove the Everyone access on the folder. This can and likely will cause headaches for users. DatAdvantage conveniently has a sandbox feature that allows you to test this.

On the other hand, there may be good reasons the folder has global access, and perhaps there are other controls in place that would (in theory) help reduce the risk of unauthorized access. This is a risk discussion you’d need to have.

Another way to handle this is to see who’s copying files into the folder — maybe it’s just a small group of users — and then establish policies and educate these users about dealing with sensitive data.

You could then go back to the 1A report, and set up filters to search for only file creation events in these folders, and collect the user names (below).

Who’s copying files into my folder?

After emailing this group of users with followup advice and information on copying, say, spreadsheets with credit card numbers, you can run the 12L report the next month to see if any new sensitive data has made its way into the folder.

The larger point is that the DatAdvantage reports help identify the risks and the relevant users involved so that you can come up with appropriate security policies — for example, least-privileged access, or perhaps looser controls but with better monitoring or stricter policies on granting access in the first place. As we’ll see later on in this series, Varonis DatAlert and DataPrivilege can help enforce these policies.

In the previous post, I listed the relevant controls that DA addresses for the core identification part of risk assessment. Here’s a list of risk assessment and policy making controls in various laws and standards where DatAdvantage can help:

  • NIST 800-53: RA-2, RA-3, RA-6
  • NIST 800-171: 3.11.1
  • HIPAA:  164.308(a)(1)(i), 164.308(a)(1)(ii)
  • Gramm-Leach-Bliley: 314.4(b),(c)
  • PCI DSS 3.x: 12.1,12.2
  • ISO 27001: A.12.6.1, A.18.2.3
  • CIS Critical Security Controls: 4.1, 4.2
  • New York State DFS Cybersecurity Regulations: 500.03, 500.06

Thou Shalt Protect Data

A full risk assessment program would also include identifying external threats—new malware, new hacking techniques. With this new real-world threat intelligence, you and your IT colleagues should go back, re-adjust the risk levels you assigned initially, and then re-strategize.

It’s an endless game of cyber cat-and-mouse, and a topic for another post.

Let’s move to the next broad functional category, Protect. One of the critical controls in this area is limiting access to only authorized users. This is easier said than done, but we’ve already laid the groundwork above.

The guiding principles are typically least-privileged access and role-based access controls. In short: give appropriate users just the access they need to do their jobs or carry out their roles.
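
If you want a quick, raw look at who currently holds rights on a folder while you’re thinking this through, plain PowerShell can dump the NTFS ACL (the path below is from my test environment; the Review Area described next layers actual usage on top of this picture):

# list the identities and rights on the folder's ACL
(Get-Acl 'C:\Share\legal\Corporate').Access |
    Select-Object IdentityReference, FileSystemRights, AccessControlType |
    Format-Table -AutoSize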

Since we’re now at a point where we are about to take a real action, we’ll need to shift from the DatAdvantage Reports section to the Review area of DatAdvantage.

The Review Area tells me who’s been accessing the legal\Corporate folder, which turns out to be a far smaller set than has been given permission through their group access rights.

To implement least-privilege access, you’ll want to create a new AD group for just those who really, truly need access to the legal\Corporate folder. And then, of course, remove the existing groups that have been given access to the folder.

In the Review Area, you can select and move the small set of users who really need folder access into their own group.

Yeah, this assumes you’ve done some additional legwork during the risk assessment phase — spoken to the users who accessed the legal\Corporate folder, identified the true data owners, and understood what they’re using this folder for.

DatAdvantage can provide a lot of support in narrowing down who to talk to. So by the time you’re ready to use the Review Area to make the actual changes, you already should have a good handle on what you’re doing.

One other key control, which we’ll discuss in more detail next time, is managing file permissions for folders.

Essentially, that’s where you find and assign data owners, and then ensure that there’s a process going forward to allow the owner to decide who gets access. We’ll show how Varonis has a key role to play here through both DatAdvantage and DataPrivilege.

I’ll leave you with this list of least permission and management controls that Varonis supports:

  • NIST 800-53: AC-2, AC-3, AC-6
  • NIST 800-171: 3.1.4, 3.1.5
  • PCI DSS 3.x: 7.1
  • HIPAA: 164.312 a(1)
  • ISO 27001: A.6.1.2, A.9.1.2, A.9.2.3
  • CIS Critical Security Controls: 14.4
  • New York State DFS Cybersecurity Regulations: 500.07

Practical PowerShell for IT Security, Part III: Classification on a Budget

Last time, with a few lines of PowerShell code, I launched an entire new software category, File Access Analytics (FAA). My 15 minutes of fame is almost over, but I was able to make the point that PowerShell has practical file event monitoring aspects. In this post, I’ll finish some old business with my FAA tool and then take up PowerShell-style data classification.

Event-Driven Analytics

To refresh memories, I used the Register-WmiEvent cmdlet in my FAA script to watch for file access events in a folder. I also created a mythical baseline of event rates to compare against. (For wonky types, there’s a whole area of measuring these kinds of things — hits to web sites, calls coming into a call center, traffic at espresso bars — that was started by this fellow.)

When file access counts reach above normal limits, I trigger a software-created event that gets picked up by another part of the code and pops up the FAA “dashboard”.

This triggering is performed by the New-Event cmdlet, which allows you to send an event, along with other information, to a receiver. To read the event, there’s the Wait-Event cmdlet. The receiving part can even be in another script as long as both event cmdlets use the same SourceIdentifier — Bursts, in my case.

These are all operating systems 101 ideas: effectively, PowerShell provides a simple message passing system. Pretty neat considering we are using what is, after all, a bleepin’ command language.
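
Stripped of all the FAA logic, the mechanics are just this (the Demo name and the payload are invented):

# raise a named event with a payload; it sits in the session's event queue
New-Event -SourceIdentifier Demo -MessageData "threshold crossed" | Out-Null

# elsewhere in the same session: block until a Demo event is available, then consume it
$evt = Wait-Event -SourceIdentifier Demo
$evt.MessageData                      # -> "threshold crossed"
Remove-Event -SourceIdentifier Demo   # Wait-Event doesn't dequeue; Remove-Event does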

Anyway, the full code is presented below for your amusement.

$cur = Get-Date
$Global:Count=0
$Global:baseline = @{"Monday" = @(3,8,5); "Tuesday" = @(4,10,7);"Wednesday" = @(4,4,4);"Thursday" = @(7,12,4); "Friday" = @(5,4,6); "Saturday"=@(2,1,1); "Sunday"= @(2,4,2)}
$Global:cnts =     @(0,0,0)
$Global:burst =    $false
$Global:evarray =  New-Object System.Collections.ArrayList

$action = { 
    $Global:Count++  
    $d=(Get-Date).DayofWeek
    $i= [math]::floor((Get-Date).Hour/8) 

   $Global:cnts[$i]++ 

   #event auditing!
    
   $rawtime =  $EventArgs.NewEvent.TargetInstance.LastAccessed.Substring(0,12)
   $filename = $EventArgs.NewEvent.TargetInstance.Name
   $etime= [datetime]::ParseExact($rawtime,"yyyyMMddHHmm",$null)
  
   $msg="$($etime)): Access of file $($filename)"
   $msg|Out-File C:\Users\bob\Documents\events.log -Append
  
   
   $Global:evarray.Add(@($filename,$etime))
   if(!$Global:burst) {
      $Global:start=$etime
      $Global:burst=$true            
   }
   else { 
     if($Global:start.AddMinutes(15) -gt $etime ) { 
        $Global:Count++
        #File behavior analytics
        $sfactor=2*[math]::sqrt( $Global:baseline["$($d)"][$i])
       
        if ($Global:Count -gt $Global:baseline["$($d)"][$i] + 2*$sfactor) {
         
         
          "$($etime): Burst of $($Global:Count) accesses"| Out-File C:\Users\bob\Documents\events.log -Append 
          $Global:Count=0
          $Global:burst =$false
          New-Event -SourceIdentifier Bursts -MessageData "We're in Trouble" -EventArguments $Global:evarray
          $Global:evarray= [System.Collections.ArrayList] @();
        }
     }
     else { $Global:burst =$false; $Global:Count=0; $Global:evarray= [System.Collections.ArrayList]  @();}
   }     
} 
 
Register-WmiEvent -Query "SELECT * FROM __InstanceModificationEvent WITHIN 5 WHERE TargetInstance ISA 'CIM_DataFile' and TargetInstance.Path = '\\Users\\bob\\' and targetInstance.Drive = 'C:' and (targetInstance.Extension = 'txt' or targetInstance.Extension = 'doc' or targetInstance.Extension = 'rtf') and targetInstance.LastAccessed > '$($cur)' " -sourceIdentifier "Accessor" -Action $action   


#Dashboard
While ($true) {
    $args=Wait-Event -SourceIdentifier Bursts # wait on Burst event
    Remove-Event -SourceIdentifier Bursts #remove event
  
    $outarray=@() 
    foreach ($result in $args.SourceArgs) {
      $obj = New-Object System.Object
      $obj | Add-Member -type NoteProperty -Name File -Value $result[0]
      $obj | Add-Member -type NoteProperty -Name Time -Value $result[1]
      $outarray += $obj  
    }


     $outarray|Out-GridView -Title "FAA Dashboard: Burst Data"
 }

Please don’t pound your laptop as you look through it.

I’m aware that I continue to pop up separate grid views, and there are better ways to handle the graphics. With PowerShell, you do have access to the full .Net framework, so you could create and access objects — listboxes, charts, etc. — and then update as needed. I’ll leave that for now as a homework assignment.
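
If you want to try that homework, a bare-bones sketch looks something like this: a form holding a single listbox, all stock .Net, with a made-up entry standing in for real burst data.

Add-Type -AssemblyName System.Windows.Forms

$form = New-Object System.Windows.Forms.Form
$form.Text = 'FAA Dashboard'

$box = New-Object System.Windows.Forms.ListBox
$box.Dock = 'Fill'
$box.Items.Add("$(Get-Date): burst of file accesses") | Out-Null   # you'd feed real burst data here

$form.Controls.Add($box)
$form.ShowDialog() | Out-Null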

Classification is Very Important in Data Security

Let’s put my file event monitoring on the back burner, as we take up the topic of PowerShell and data classification.

At Varonis, we preach the gospel of “knowing your data” for good reason. In order to work out a useful data security program, one of the first steps is to learn where your critical or sensitive data is located — credit card numbers, consumer addresses, sensitive legal documents, proprietary code.

The goal, of course, is to protect the company’s digital treasure, but you first have to identify it. By the way, this is not just a good idea, but many data security laws and regulations (for example, HIPAA)  as well as industry data standards (PCI DSS) require asset identification as part of doing real-world risk assessment.

PowerShell should have great potential for use in data classification applications. Can PS access and read files directly? Check. Can it perform pattern matching on text? Check. Can it do this efficiently on a somewhat large scale? Check.

No, the PowerShell classification script I eventually came up with will not replace the Varonis Data Classification Framework. But for the scenario I had in mind – an IT admin who needs to watch over an especially sensitive folder – my PowerShell effort gets more than a passing grade, say a B+!

WQL and CIM_DataFile

Let’s now return to WQL, which I referenced in the first post on event monitoring.

Just as I used this query language to look at file events in a directory, I can tweak the script to retrieve all the files in a specific directory. As before I use the CIM_DataFile class, but this time my query is directed at the folder itself, not the events associated with it.

Get-WmiObject -Query "SELECT * From CIM_DataFile where Path = '\\Users\\bob\\' and Drive = 'C:' and (Extension = 'txt' or Extension = 'doc' or Extension = 'rtf')"

Terrific!  This line of code will output an array of file path names.

To read the contents of each file into a variable, PowerShell conveniently provides the Get-Content cmdlet. Thank you Microsoft.

I need one more ingredient for my script, which is pattern matching. Not surprisingly, PowerShell has a regular expression engine. For my purposes it’s a little bit of overkill, but it certainly saved me time.
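
Before wiring a pattern into a script, it’s worth sanity-checking it from the console. Select-String is the quick-and-dirty way to do that (the folder and the patterns here are just examples):

# scan every .txt under the folder for a couple of marker words, case-insensitively
Select-String -Path 'C:\Users\bob\Documents\*.txt' -Pattern 'confidential', 'snowflake' |
    Select-Object Path, LineNumber, Line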

In talking to security pros, I’ve often been told that companies should explicitly mark documents or presentations containing proprietary or sensitive information with an appropriate footer — say, Secret or Confidential. It’s a good practice, and of course it helps in the data classification process.

In my script, I created a PowerShell hashtable of possible marker texts with an associated regular expression to match it. For documents that aren’t explicitly marked this way, I also added special project names — in my case, snowflake — that would also get scanned. And for kicks, I added a regular expression for social security numbers.

The code block I used to do the reading and pattern matching is listed below. The file name to read and scan is passed in as a parameter.

$Action = {
Param (
   [string] $Name
)

$classify =@{"Top Secret"=[regex]'[tT]op [sS]ecret'; "Sensitive"=[regex]'([Cc]onfidential)|([sS]nowflake)'; "Numbers"=[regex]'[0-9]{3}-[0-9]{2}-[0-9]{3}' }

$data = Get-Content $Name

$cnts= @()

foreach ($key in $classify.Keys) {
  $m=$classify[$key].matches($data)
  if($m.Count -gt 0) {
    $cnts+= @($key,$m.Count)
  }
}

$cnts
}

Magnificent Multi-Threading

I could have just simplified my project by taking the above code and adding some glue, and then running the results through the Out-GridView cmdlet.

But this being the Varonis IOS blog, we never, ever do anything nice and easy.

There is a point I’m trying to make. Even for a single folder in a corporate file system, there can be hundreds, perhaps even a few thousand files.

Do you really want to wait around while the script is serially reading each file?

Of course not!

Large-scale file I/O applications, like what we’re doing with classification, are very well suited for multi-threading—you can launch lots of file activity in parallel and thereby significantly reduce the delay in seeing results.

PowerShell does have a usable (if clunky) background processing system known as Jobs. But it also boasts an impressive and sleek multi-threading capability known as Runspaces.
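
For comparison, the Jobs route would look roughly like the sketch below ($Action is the scan script block defined above, and $list is the array of files returned by the WQL query). Every Start-Job spins up a whole child process, which is why it feels heavy for hundreds of small file scans; Runspaces keep all the threads inside the current process.

# one background job per file: workable, but a process per file adds up fast
$jobs = foreach ($item in $list) {
    Start-Job -ScriptBlock $Action -ArgumentList $item.Name
}
$results = $jobs | Receive-Job -Wait -AutoRemoveJob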

After playing with it, and borrowing code from a few Runspaces’ pioneers, I am impressed.

Runspaces handles all the messy mechanics of synchronization and concurrency. It’s not something you can grok quickly, and even Microsoft’s amazing Scripting Guys are still working out their understanding of this multi-threading system.

In any case, I went boldly ahead and used Runspaces to do my file reads in parallel. Below is a bit of the code to launch the threads: for each file in the directory I create a thread that runs the above script block, which returns matching patterns in an array.

$RunspacePool = [RunspaceFactory]::CreateRunspacePool(1, 5)
$RunspacePool.Open()
$Tasks = @()

foreach ($item in $list) {
   $Task = [powershell]::Create().AddScript($Action).AddArgument($item.Name)
   $Task.RunspacePool = $RunspacePool
   $status= $Task.BeginInvoke()
   $Tasks += @($status,$Task,$item.Name)
}

Let’s take a deep breath—we’ve covered a lot.

In the next post, I’ll present the full script, and discuss some of the (painful) details.  In the meantime, after seeding some files with marker text, I produced the following output with Out-GridView:

Content classification on the cheap!

In the meantime, another idea to think about is how to connect the two scripts: the file activity monitoring one and the classification script partially presented in this post.

After all, the classification script should communicate what’s worth monitoring to the file activity script, and the activity script could in theory tell the classification script when a new file is created so that it could classify it—incremental scanning in other words.

Sounds like I’m suggesting, dare I say it, a PowerShell-based security monitoring platform. We’ll start working out how this can be done the next time as well.

Data Security Compliance and DatAdvantage, Part I:  Essential Reports for Risk Assessment

Over the last few years, I’ve written about many different data security standards, data laws, and regulations. So I feel comfortable in saying there are some similarities in the EU’s General Data Protection Regulation, the US’s HIPAA rules, PCI DSS, NIST’s 800 family of controls and others as well.

I’m really standing on the shoulders of giants, in particular the friendly security standards folks over at the National Institute of Standards and Technology (NIST), in understanding the inter-connectedness. They’re the go-to people for our government’s own data security standards: for both internal agencies (NIST 800-53) and outside contractors (NIST 800-171). And through its voluntary Critical Infrastructure Security Framework, NIST is also influencing data security ideas in the private sector.

One of their big ideas is to divide security controls, which every standard and regulation has in one form or another, into five functional areas: Identify, Protect, Detect, Respond, and Recover. In short, give me a data standard and you can map its controls into one of these categories.

The NIST big picture view of security controls.

The idea of commonality led me to start this series of posts about how our own products, principally Varonis DatAdvantage, though not targeted at any specific data standard or law, in fact can help meet many of the key controls and legal requirements. In fact, the out-of-the-box reporting feature in DatAdvantage is a great place to start to see how all this works.

In this first blog post, we’ll focus on DA reporting functions that roughly cover the identify category. This is a fairly large area in itself, taking in asset identification, governance, and risk assessment.

Assets: Users, Files, and More

For DatAdvantage, users, groups, and folders are the raw building blocks used in all its reporting. However, if you want to view pure file system asset information, you can go to the following three key reports in DatAdvantage.

The 3a report gives IT staff a listing of Active Directory group membership. For starters, you could run the report on the all-encompassing Domain Users group to get a global user list (below). You can also populate the report with any AD property associated with a user (email, managers, department, location, etc.)

For folders, report 3f provides access paths, size, number of subfolders, and the share path.

Beyond a vanilla list of folders, IT security staff usually wants to dig a little deeper into the file structure in order to identify sensitive or critical data. What is critical will vary by organization, but generally they’re looking for personally identifiable information (PII), such as social security numbers, email addresses, and account numbers, as well as intellectual property (proprietary code, important legal documents, sales lists).

With DatAdvantage’s 4g report, Varonis lets security staff zoom into folders containing sensitive PII data, which is often scattered across huge corporate file systems. Behind the scenes, the Varonis classification engine has scanned files using PII filters for different laws and regulations, and rated the files based on the number of hits — for example, number of US social security numbers or Canadian driver’s license numbers.

The 4g report lists these sensitive files from highest to lowest “hit” count. By the way, this is the report our customers often run first and find very eye-opening — especially if they were under the impression that there’s ‘no way millions of credit card numbers could be found in plaintext’.

Assessing the Risks

We’ve just seen how to view nuts-and-bolts asset information, but the larger point is to use the file asset inventory to help security pros discover where an organization’s particular risks are located.

In other words, it’s the beginning of a formal risk assessment.

Of course, the other major part of assessment is to look (continuously) at the threat environment and then be on the hunt for specific vulnerabilities and exploits. We’ll get to that in a future post.

Now let’s use DatAdvantage for risk assessments, starting with users.

Stale user accounts are an overlooked scenario that has lots of potential risk. Essentially, user accounts are often not disabled or removed when an employee leaves the company or a contractor’s temporary assignment is over.

For the proverbial disgruntled employee, it’s not unusual for this former insider to still have access to his account. Or for hackers to gain access to a no-longer-used third-party contractor’s account and then leverage that to hop into their real target.

In DatAdvantage’s 3a report, we can produce a list of stale user accounts based on the last logon time that’s maintained by Active Directory.
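
For a quick sanity check outside of DatAdvantage, the RSAT ActiveDirectory module can produce a rough version of the same list (the 90-day window here is an arbitrary choice for illustration):

# user accounts with no logon in the last 90 days
Search-ADAccount -AccountInactive -TimeSpan ([timespan]::FromDays(90)) -UsersOnly |
    Select-Object Name, SamAccountName, LastLogonDate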

The sensitive data report that we saw earlier is the basis for another risk assessment report. We just have to filter on folders that have “everyone” permissions.

Security pros know from the current threat environment that phishing or SQL injection attacks allow an outsider to get the credentials of an insider. With no special permissions, a hacker would then have automatic access to folders with global permissions.

Therefore there’s a significant risk in having sensitive data in these open folders (assuming there are no other compensating controls).

DatAdvantage’s 12L report nicely shows where these folders are.

Let’s take a breath.

In the next post, we’ll continue our journey through DatAdvantage by finishing up with the risk assessment area and then focusing on the Protect and Defend categories.

For those compliance-oriented IT pros and other legal-istas, here’s a short list of regulations and standards (based on our customers’ requests) that the above reports help support:

  • NIST 800-53: IA-2,CM-8
  • NIST 800-171: 3.5.1
  • HIPAA:  45 CFR 164.308(a)(1)(ii)(A)
  • GLBA: FTC Safeguards Rule (16 CFR 314.4)
  • PCI DSS 3.x: 12.2
  • ISO 27001: A.7.1.1
  • New York State DFS Cybersecurity Regulations: 500.02
  • EU GDPR: Security of Processing (Article 32) and Impact Assessments (Article 35)

Practical Powershell For IT Security, Part II: File Access Analytics (FAA)

In working on this series, I almost feel that with PowerShell we have technology that somehow time-traveled back from the future. Remember on Star Trek – the original of course — when the Enterprise’s CTO, Mr. Spock, was looking into his visor while scanning parsecs of space? The truth is Spock was gazing at the output of a Starfleet-approved PowerShell script.

Tricorders? Also powered by PowerShell.

Yes, I’m a fan of PowerShell, and boldly going where no blogger has gone before. For someone who’s been raised on bare-bones Linux shell languages, PowerShell looks like super-advanced technology. Part of PowerShell’s high-tech prowess is its ability, as I mentioned in the previous post, to monitor low-level OS events, like file updates.

A Closer Look at Register-WmiEvent

Let’s return to the amazing one-line of file monitoring PS code I introduced last time.

Register-WmiEvent -Query "SELECT * FROM __InstanceModificationEvent WITHIN 5 WHERE TargetInstance ISA 'CIM_DataFile' and TargetInstance.Path = '\\Users\\bob\\' and targetInstance.Drive = 'C:' and (targetInstance.Extension = 'doc' or targetInstance.Extension = 'txt') and targetInstance.LastAccessed > '$($cur)' " -sourceIdentifier "Accessor" -Action $action

As you might have guessed, the logic on what to monitor is buried in the WQL contained in Register-WmiEvent’s query parameter.

You’ll recall that WQL allows scripters to retrieve information about Windows system events in general and, specifically in our case, file events – files created, updated, or deleted. With this query, I’m effectively pulling out of Windows’ darker depths file modification events that are organized as a CIM_DataFile class.

WQL allows me to set the drive and folder I’m interested in searching — that would be the Drive and Path properties that I reference above.

Though I’m not allowed to use a wild card search — it’s a feature, not a bug — I can instead search for specific file extensions. My goal in developing the script for this post is to help IT security spot excessive activity on readable files.  So I set up a logical condition to search for files with “doc” or “txt” extensions. Makes sense, right?

Now for the somewhat subtle part.

I’d like to collect file events generated by anyone accessing a file, including those who just read a Microsoft Word document without making changes.

Can that be done?

When we review a file list in Windows Explorer, we’re all familiar with the “Date Modified” field. But did you know there’s also a “Date Accessed” field? Every time you read a file in Windows, this field is, in theory, updated with the current time stamp. You can discover this for yourself—see below—by clicking on the column heads and enabling the access field.

However, in practice, Windows machines aren’t typically configured to update this internal field when a file is just accessed—i.e., read, but not modified. Microsoft says it will slow down performance. But let’s throw caution to the wind.

To configure Windows to always update the file access time, you use the under-appreciated fsutil utility (you’ll need admin access) with the following parameters: 

fsutil behavior set disablelastaccess 0

With file access events now configured in my test environment, I’ve now enabled Windows to also record read-only events.
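
A quick way to confirm the setting took is to open a document and then look at its access stamp from PowerShell (the path is from my test setup; keep in mind NTFS may flush the update with some lag):

# most recently accessed files first; LastAccessTime should now move on reads, not just writes
Get-ChildItem 'C:\Users\bob\Documents' -File |
    Select-Object Name, LastAccessTime, LastWriteTime |
    Sort-Object LastAccessTime -Descending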

My final search criteria in the above WQL should make sense:

targetInstance.LastAccessed > '$($cur)'

It says that I’m only interested in file events in which file access has occurred after the Register-WmiEvent is launched. The $cur variable, by the way, is assigned the current time pulled from the Get-Date cmdlet.

File Access Analytics (FAA)

We’ve gotten through the WQL, so let’s continue with the remaining parameters in Register-WmiEvent.

SourceIdentifier allows you to name an event. Naming things – people, tabby cats, and terriers – is always a good practice since you can call them when you need ’em.

And it holds just as true for events! There are a few cmdlets that require this identifier. For starters, Unregister-Event for removing a given event subscription, Get-Event for letting you review all the events that are queued, Remove-Event for erasing current events in the queue, and finally Wait-Event for doing an explicit synchronous wait. We’ll be using some of these cmdlets in the completed code.
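
Using the Accessor name from the one-liner above, the housekeeping side of that looks like this:

# peek at any Accessor events sitting in the queue
Get-Event -SourceIdentifier Accessor | Select-Object TimeGenerated, SourceIdentifier

# clear out the queued events, then tear down the subscription itself
Remove-Event -SourceIdentifier Accessor -ErrorAction SilentlyContinue
Unregister-Event -SourceIdentifier Accessor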

I now have the core of my script worked out.

That leaves the Action parameter. Since Register-WmiEvent responds asynchronously to events, it needs some code to handle the response to the triggering event, and that’s where the action, so to speak, is: in a block of PowerShell code that’s passed in.

This leads to what I really want to accomplish with my script, and so I’m forced to reveal my grand scheme to take over the User Behavior Analytics world with a few lines of PowerShell code.

Here’s the plan: This PS script will monitor file access event rates, compare it to a baseline, and decide whether the event rates fall into an abnormal range, which could indicate possible hacking. If this threshold is reached, I’ll display an amazing dashboard showing the recent activity.

In other words, I’ll have a threat monitor alert system that will spot unusual activity against text files in a specific directory.

Will PowerShell Put Security Solutions Out of Business?

No, Varonis doesn’t have anything to worry about, for a few reasons.

One, event monitoring is not really something Windows does efficiently. Microsoft in fact warns that turning on last access file updates through fsutil adds system overhead. In addition, Register-WmiEvent makes the internal event flywheels spin faster: I came across some comments saying the cmdlet may cause the system to slow down.

Two, I’ve noticed that this isn’t real-time or near real-time monitoring: there’s a lag in receiving file events, sometimes 30 minutes or longer. At least, that was my experience running the scripts on my AWS virtual machine. Maybe you’ll do better on your dedicated machine, but I don’t think Microsoft is making any kind of promises here.

Three, try as I might, I was unable to connect a file modification event to the user of the app that was causing the event. In other words, I know a file event has occurred, but alas it doesn’t seem to be possible with Register-WmiEvent to know who caused it.

So I’m left with a script that can monitor file access but without assigning attribution. Hmmm … let’s create a new security monitoring category called File Access Analytics (FAA), which captures what I’m doing. Are you listening, Gartner?

The larger point, of course, is that User Behavior Analytics (UBA) is a far better way to spot threats because user-specific activity contains the interesting information. My far less granular FAA, while useful, can’t reliably pinpoint the bad behaviors since it aggregates events over many users.

However, for small companies with only a few accounts logged on, FAA may be just enough. I can see an admin using the scripts when she suspects a user is spending too much time poking around a directory with sensitive data. And there are some honeypot possibilities with this code as well.

And even if my script doesn’t quite do the job, the even larger point is that understanding the complexities of dealing with Windows events using PowerShell (or whatever other language you use) will make you, ahem, appreciate enterprise-class solutions.

We’re now ready to gaze upon the PowerShell scriptblock of my Register-WmiEvent:

$action = {
    # This block runs asynchronously each time Register-WmiEvent delivers a file event
    $Global:Count++
    $d = (Get-Date).DayOfWeek
    $i = [math]::floor((Get-Date).Hour/8)    # which eight-hour slot of the day: 0, 1, or 2

    $Global:cnts[$i]++

    # Event auditing: pull the access time and file name out of the WMI event data
    $rawtime  = $EventArgs.NewEvent.TargetInstance.LastAccessed.Substring(0,12)
    $filename = $EventArgs.NewEvent.TargetInstance.Name
    $etime    = [datetime]::ParseExact($rawtime,"yyyyMMddHHmm",$null)

    $msg = "$($etime): Access of file $($filename)"
    $msg | Out-File C:\Users\bob\Documents\events.log -Append

    $Global:evarray.Add(@($filename,$etime))
    if (!$Global:burst) {
        # First event of a possible burst: remember when it started
        $Global:start = $etime
        $Global:burst = $true
    }
    else {
        if ($Global:start.AddMinutes(15) -gt $etime) {
            $Global:Count++
            # File behavior analytics: flag counts well beyond the baseline for this day/slot
            $sfactor = 2*[math]::sqrt($Global:baseline["$($d)"][$i])
            write-host "sfactor: $($sfactor)"
            if ($Global:Count -gt $Global:baseline["$($d)"][$i] + 2*$sfactor) {
                "$($etime): Burst of $($Global:Count) accesses" | Out-File C:\Users\bob\Documents\events.log -Append
                $Global:Count = 0
                $Global:burst = $false
                # Hand the accumulated burst data off to a listener on its own event
                New-Event -SourceIdentifier Bursts -MessageData "We're in Trouble" -EventArguments $Global:evarray
                $Global:evarray = [System.Collections.ArrayList] @()
            }
        }
        else { $Global:burst = $false; $Global:Count = 0; $Global:evarray = [System.Collections.ArrayList] @() }
    }
}

 

Yes, I do audit logging by using the Out-File cmdlet to write a time-stamped entry for each access. And I detect bursty file access hits over 15-minute intervals, comparing the event counts against a baseline that’s held in the $Global:baseline array.

I got a little fancy here and set up mythical average event counts in the baseline for each day of the week, dividing the day into three eight-hour periods. When the burst activity in a given period falls at the far end of the “tail” of the bell curve, we can assume we’ve spotted a threat.
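Structurally, you can picture the baseline as a hashtable keyed by day of the week, with each entry holding three average counts, one per eight-hour slot. Something along these lines, with invented numbers (the exact data structure in my script may differ slightly):

$Global:baseline = @{
   "Sunday"    = @(2, 5, 4)
   "Monday"    = @(3, 20, 10)
   "Tuesday"   = @(3, 22, 11)
   "Wednesday" = @(3, 21, 10)
   "Thursday"  = @(3, 22, 12)
   "Friday"    = @(3, 18, 9)
   "Saturday"  = @(2, 6, 5)
}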

The FAA Dashboard

With the bursty event data held in $Global:evarray (files accessed with timestamps), I decided it would be a great idea to display it as a spiffy dashboard. But rather than holding up the code in the scriptblock, I “queued” up this data on its own event, which can be handled by a separate app.

Whaaat?

Let me try to explain. This is where the New-Event cmdlet comes into play at the end of the scriptblock above. It simply allows me to asynchronously ping another app or script, thereby not tying down the code in the scriptblock so it can then handle the next file access event.

I’ll present the full code for my FAA PowerShell script in the next post.  For now, I’ll just say that I set up a Wait-Event cmdlet whose sole purpose is to pick up these burst events and then funnel the output into a beautiful table, courtesy of Out-GridView.
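As a preview, and with the caveat that this is my rough reconstruction rather than the full script, the consumer side can be as simple as a loop that blocks on Wait-Event, reshapes the burst data, and hands it to Out-GridView:

while ($true) {
   $burst = Wait-Event -SourceIdentifier Bursts           # blocks until New-Event fires in the scriptblock
   $burst.SourceArgs |
      ForEach-Object { [pscustomobject]@{ File = $_[0]; Accessed = $_[1] } } |
      Out-GridView -Title "FAA: recent burst activity"
   Remove-Event -EventIdentifier $burst.EventIdentifier   # clear it so the next burst raises a fresh alert
}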

Here’s the end result that will pop on an admin’s console:

 

Impressive in its own way considering the whole FAA “platform” was accomplished in about 60 lines of PS code.

We’ve covered a lot of ground, so let’s call it a day.

We’ll talk more about the full FAA script the next time, and then we’ll start looking into the awesome hidden content classification possibilities of PowerShell.

 

Cybercrime Laws Get Serious: Canada’s PIPEDA and CCIRC

Cybercrime Laws Get Serious: Canada’s PIPEDA and CCIRC

In this series on governmental responses to cybercrime, we’re taking a look at how countries through their laws are dealing with broad attacks against IT infrastructure beyond just data theft. Ransomware and DDoS are prime examples of threats that don’t necessarily fit into the narrower definition of breaches found in PII-focused data security laws. That’s where special cybercrime rules come into play.

In the first post, we discussed how the EU’s Network and Information Security (NIS) Directive tries to close the gaps left open by the EU Data Protection Directive (DPD) and the impending General Data Protection Regulation (GDPR).

Let’s now head north to Canada.

Like the EU, Canada has a broad consumer data-oriented security law, which is known as the Personal Information Protection and Electronic Documents Act (PIPEDA).  For nitpickers, there are also overriding data laws at the provincial level — Alberta and British Columbia’s PIPA — that effectively mirror PIPEDA.

The good news about PIPEDA is that it has a strong breach notification rule wherein unauthorized data access has to be reported to the Canadian regulators. So ransomware attacks would fall under this rule. But for reporting a breach to consumers, PIPEDA uses a “risk of harm” threshold. Harm can be of a financial nature as well as anything having a significant effect on the reputation of the individual.

Anyway, PIPEDA is like the Canadian version of the current EU DPD but with a fairly practical breach reporting requirement.

Is there anything like the EU’s NIS?

Not at this point.

But in 2015, the Canadian government started funding several initiatives to help the private sector protect against cyber threats. One of the key programs that came out of this was the Canadian Cyber Incident Response Centre (CCIRC), which is similar to the EU’s CSIRTs.

CCIRC provides technical advice and support, monitors the threat environment, posts cybersecurity bulletins (see their RSS feed), and offers a forum, the Community Portal, through which companies can share information.

For now, Canada is following a US-style approach: help and support private industry in dealing with cyberattacks against important IT infrastructure, but keep reporting and other compliance matters a voluntary arrangement.

However, the public discussion continues, and with attacks like this, new approaches may be needed.