All posts by Andy Green

Data Security Compliance and DatAdvantage, Part III:  Protect and Monitor

At the end of the previous post, we took up the nuts-and-bolts issues of protecting sensitive data in an organization’s file system. One popular approach, the least-privileged access model, is often explicitly mentioned in compliance standards, such as NIST 800-53 or PCI DSS. Varonis DatAdvantage and DataPrivilege provide a convenient way to accomplish this.

Ownership Management

Let’s start with DatAdvantage. We saw last time that DA provides graphical support for helping to identify data ownership.

If you want to get more granular than just seeing who’s been accessing a folder, you can view the actual access statistics of the top users with the Statistics tab (below).

This is a great help in understanding who is really using the folder. The ultimate goal is to find the true users and remove extraneous groups and users, those who perhaps needed occasional access at some point but not as part of their job role.

The key point is to first determine the folder’s owner — the one who has the real knowledge and wisdom of what the folder is all about. This may require some legwork on IT’s part in talking to the users, based on the DatAdvantage stats, and working out the real chain of command.

Once you use DatAdvantage to set the folder owners (below), these more informed power users, as we’ll see, can independently manage who gets access and whose access should be removed. The folder owner will also automatically receive DatAdvantage reports, which will help guide them in making future access decisions.

There’s another important point to make before we move on. IT has long been responsible for provisioning access, without knowing the business purpose. Varonis DatAdvantage assists IT in finding these owners and then giving them the access-granting powers.

Anyway, once the owner has done the housekeeping of paring back and removing unnecessary folder groups, they’ll then want to put into place a process for permission management. Data standards and laws recognize the importance of having security policies and procedures as part of an ongoing program – i.e., not something an owner does once a year.

And Varonis has an important part to play here.

Maintaining Least-Privileged Access

How do ordinary users whose job role now requires them to access a managed folder request permission from the owner?

This is where Varonis DataPrivilege makes an appearance. Regular users will need to bring this interface up (below) to formally request access to a managed folder.

The owner of the folder has a parallel interface from which to receive these requests and then grant or revoke permissions.

As I mentioned above, these security ideas of least-privileged access and permission management are often explicitly part of compliance standards and data security laws. Building on my list from the previous post, here’s a more complete enumeration of controls that Varonis DatAdvantage supports:

  • NIST 800-53: AC-2, AC-3, AC-5, CM-5
  • NIST 800-171: 3.1.4, 3.1.5, 3.4.5
  • PCI DSS 3.x: 7.1, 7.2
  • HIPAA: 45 CFR 164.312 a(1), 164.308a(4)
  • ISO 27001: A.6.1.2, A.9.1.2, A.9.2.3, A.11.2.2
  • CIS Critical Security Controls: 14.4
  • New York State DFS Cybersecurity Regulations: 500.07

Stale Sensitive Data

Minimization is an important theme in security standards and laws. These ideas are best represented in the principles of Privacy by Design (PbD), which has good overall advice on this subject: minimize the sensitive data you collect, minimize who gets to see it, and minimize how long you keep it.

Let’s address the last point, which goes under the more familiar name of data retention. One low-hanging fruit for reducing security risk is to delete or archive stale files containing sensitive data.

This makes perfect sense, of course. This stale data can be, for example, consumer PII collected in short-term marketing campaigns, but now residing in dusty spreadsheets or rusting management presentations.

Your organization may no longer need it, but it’s just the kind of monetizable data that hackers love to get their hands on.

As we saw in the first post, which focused on Identification, DatAdvantage can find and identify file data that hasn’t been used after a certain threshold date.

Can the stale data report be tweaked to find stale data that is also sensitive?

Affirmative.

You need to add the hit count filter and set the number of sensitive data matches to an appropriate number.

In my test environment, I discovered that the C:\Share\pvcs folder hasn’t been touched in over a year and contains some sensitive data.

The next step is then to take a visit to the Data Transport Engine (DTE) available in DatAdvantage (from the Tools menu). It allows you to create a rule that will search for files to archive and delete if necessary.

In my case, my rule’s search criteria mirrors the same filters used in generating the report. The rule is doing the real heavy-lifting of removing the stale, sensitive data.

Since the rule is saved, it can be rerun to enforce the retention limits. Even better, DTE can automatically run the rule on a periodic basis so you never have to worry about stale sensitive data in your file system.

Requirements for implementing data retention policies can be found in the following security standards and regulations:

  • NIST 800-53: SI-12
  • PCI DSS 3.x: 3.1
  • CIS Critical Security Controls: 14.7
  • New York State DFS Cybersecurity Regulations: 500.13
  • EU General Data Protection Regulation (GDPR): Article 25.2

Detecting and Monitoring

Following the order of the NIST higher-level security control categories from the first post, we now arrive at our final destination in this series, Detect.

No data security strategy is foolproof, so you need a secondary defense based on detection and monitoring controls: effectively you’re watching the system and looking for unusual activities.

Varonis, and specifically DatAlert, has a unique role in detection because its underlying security platform is based on monitoring file system activities.

By now everyone knows (or should know) that phishing and injection attacks allow hackers to get around network defenses as they borrow existing users’ credentials, and fully-undetectable (FUD) malware means they can avoid detection by virus scanners.

So how do you detect the new generation of stealthy attackers?

No attacker can avoid using the file system to load their software, copy files, and crawl a directory hierarchy looking for sensitive data to exfiltrate.  If you can spot their unique file activity patterns, then you can stop them before they remove or exfiltrate the data.

We can’t cover all of DatAlert’s capabilities in this post — probably a good topic for a separate series! — but since it has deep insight into all file system information and events, and histories of user behaviors, it’s in a powerful position to determine what’s out of the normal range for a user account.

We call this user behavior analytics or UBA, and DatAlert comes bundled with a suite of UBA threat models (below).  You’re free to add your own, of course, but the pre-defined models are quite powerful as is. They include detecting crypto intrusions, ransomware activity, unusual user access to sensitive data, unusual access to files containing credentials, and more.

All the alerts that are triggered can be tracked from the DatAlert Dashboard.  IT staff can either intervene and respond manually or even set up scripts to run automatically — for example, automatically disable accounts.

If a specific data security law or regulation requires a breach notification to be sent to an authority, DatAlert can provide some of the information that’s typically required – files that were accessed, types of data, etc.

Let’s close out this post with a final list of detection and response controls in data standards and laws that DatAlert can help support:

  • NIST 800-53: SI-4, AU-13, IR-4
  • PCI DSS 3.x: 10.1, 10.2, 10.6
  • CIS Critical Security Controls: 5.1, 6.4, 8.1
  • HIPAA: 45 CFR 164.400-164.414
  • ISO 27001: A.16.1.1, A.16.1.4
  • New York State DFS Cybersecurity Regulations: 500.02, 500.16, 500.27
  • EU General Data Protection Regulation (GDPR): Article 33, 34
  • Most US states have breach notification rules

Data Security Compliance and DatAdvantage, Part II:  More on Risk Assessment

I can’t really overstate the importance of risk assessments in data security standards. It’s really at the core of everything you subsequently do in a security program. In this post we’ll finish discussing how DatAdvantage helps support many of the risk assessment controls that are in just about every security law, regulation, or industry security standard.

Last time, we saw that risk assessments were part of NIST’s Identify category. In short: you’re identifying the risks and vulnerabilities in your IT system. Of course, at Varonis we’re specifically focused on sensitive plain-text data scattered around an organization’s file system.

Identify Sensitive Files in Your File System

As we all know from major breaches over the last few years, poorly protected folders are where the action is for hackers: that’s where they’ve been focusing their efforts.

The DatAdvantage 2b report is the go-to report for finding sensitive data across all folders, not just ones with global permissions that are listed in 12l. Varonis uses various built-in filters or rules to decide what’s considered sensitive.

I counted about 40 or so such rules, covering credit card, social security, and various personal identifiers that are required to be protected by HIPAA and other laws.

In the test system on which I ran the 2b report, the \share\legal\Corporate folder was snagged by the aforementioned filters.

Identify Risky and Unnecessary Users Accessing Folders

We now have a folder that is a potential source of data security risk. What else do we want to identify?

Users who have accessed this folder are a good starting point.

There are a few ways to do this with DatAdvantage, but let’s just work with the raw access audit log of every file event on a server, which is available in the 2a report. By adding a directory path filter, I was able to narrow down the results to the folder I was interested in.

So now we at least know who’s really using this specific folder (and sub-folders).  Oftentimes this is a far smaller pool of users than has been enabled through the group permissions on the folders. In any case, this should be the basis of a risk assessment discussion to craft more tightly focused groups for this folder and to set an owner who can then manage the content.

In the Review Area of DatAdvantage, there’s more graphical support for finding users accessing folders, the percentage of the Active Directory group who are actually using the folder, as well as recommendations for groups that should be accessing the folder. We’ll explore this section of DatAdvantage further below.

For now, let’s just stick to the DatAdvantage reports since there’s so much risk assessment power bundled into them.

Another, similar discussion can be based on using the 12l report to analyze folders that contain sensitive data but have global access – i.e., include the Everyone group.

There are two ways to think about this very obvious risk. You can remove the Everyone access on the folder. This can and likely will cause headaches for users. DatAdvantage conveniently has a sandbox feature that allows you to test this.

On the other hand, there may be good reasons the folder has global access, and perhaps there are other controls in place that would (in theory) help reduce the risk of unauthorized access. This is a risk discussion you’d need to have.

Another way to handle this is to see who’s copying files into the folder — maybe it’s just a small group of users — and then establish policies and educate these users about dealing with sensitive data.

You could then go back to the 1A report, and set up filters to search for only file creation events in these folders, and collect the user names (below).

Who’s copying files into my folder?

After emailing this group of users with followup advice and information on copying, say, spreadsheets with credit card numbers, you can run the 12l reports the next month to see if any new sensitive data has made its way into the folder.

The larger point is that the DatAdvantage reports help identify the risks and the relevant users involved so that you can come up with appropriate security policies — for example, least-privileged access, or perhaps looser controls but with better monitoring or stricter policies on granting access in the first place. As we’ll see later on in this series, Varonis DatAlert and DataPrivilege can help enforce these policies.

In the previous post, I listed the relevant controls that DA addresses for the core identification part of risk assessment. Here’s a list of risk assessment and policy making controls in various laws and standards where DatAdvantage can help:

  • NIST 800-53: RA-2, RA-3, RA-6
  • NIST 800-171: 3.11.1
  • HIPAA:  164.308(a)(1)(i), 164.308(a)(1)(ii)
  • Gramm-Leach-Bliley: 314.4(b),(c)
  • PCI DSS 3.x: 12.1, 12.2
  • ISO 27001: A.12.6.1, A.18.2.3
  • CIS Critical Security Controls: 4.1, 4.2
  • New York State DFS Cybersecurity Regulations: 500.03, 500.06

Thou Shalt Protect Data

A full risk assessment program would also include identifying external threats—new malware, new hacking techniques. With this new real-world threat intelligence, you and your IT colleagues should go back, re-adjust the risk levels you assigned initially, and then re-strategize.

It’s an endless game of cyber cat-and-mouse, and a topic for another post.

Let’s move to the next broad functional category, Protect. One of the critical controls in this area is limiting access to only authorized users. This is easier said than done, but we’ve already laid the groundwork above.

The guiding principles are typically least-privileged access and role-based access controls. In short: give appropriate users just the access they need to do their jobs or carry out their roles.

Since we’re now at a point where we are about to take a real action, we’ll need to shift from the DatAdvantage Reports section to the Review area of DatAdvantage.

The Review Area tells me who’s been accessing the legal\Corporate folder, which turns out to be a far smaller set than has been given permission through their group access rights.

To implement least-privilege access, you’ll want to create a new AD group for just those who really, truly need access to the legal\Corporate folder. And then, of course, remove the existing groups that have been given access to the folder.

In the Review Area, you can select and move the small set of users who really need folder access into their own group.

Yeah, this assumes you’ve done some additional legwork during the risk assessment phase — spoken to the users who accessed the legal\Corporate folder, identified the true data owners, and understood what they’re using this folder for.

DatAdvantage can provide a lot of support in narrowing down who to talk to. So by the time you’re ready to use the Review Area to make the actual changes, you already should have a good handle on what you’re doing.

One other key control, which we’ll discuss in more detail next time, is managing file permissions for these folders.

Essentially, that’s where you find and assign data owners, and then ensure that there’s a process going forward to allow the owner to decide who gets access. We’ll show how Varonis has a key role to play here through both DatAdvantage and DataPrivilege.

I’ll leave you with this list of least-privilege and permission management controls that Varonis supports:

  • NIST 800-53: AC-2, AC-3, AC-6
  • NIST 800-171: 3.1.4, 3.1.5
  • PCI DSS 3.x: 7.1
  • HIPAA: 164.312 a(1)
  • ISO 27001: A.6.1.2, A.9.1.2, A.9.2.3
  • CIS Critical Security Controls: 14.4
  • New York State DFS Cybersecurity Regulations: 500.07

Practical PowerShell for IT Security, Part III: Classification on a Budget

Last time, with a few lines of PowerShell code, I launched an entire new software category, File Access Analytics (FAA). My 15 minutes of fame are almost over, but I was able to make the point that PowerShell has practical file event monitoring capabilities. In this post, I’ll finish some old business with my FAA tool and then take up PowerShell-style data classification.

Event-Driven Analytics

To refresh memories, I used the Register-WmiEvent cmdlet in my FAA script to watch for file access events in a folder. I also created a mythical baseline of event rates to compare against. (For wonky types, there’s a whole area of measuring these kinds of things — hits to web sites, calls coming into a call center, traffic at espresso bars — that was started by this fellow.)

When file access counts reach above normal limits, I trigger a software-created event that gets picked up by another part of the code and pops up the FAA “dashboard”.

This triggering is performed by the New-Event cmdlet, which allows you to send an event, along with other information, to a receiver. To read the event, there’s the Wait-Event cmdlet. The receiving part can even be in another script, as long as both event cmdlets use the same SourceIdentifier — Bursts, in my case.

These are all operating systems 101 ideas: effectively, PowerShell provides a simple message passing system. Pretty neat considering we are using what is, after all, a bleepin’ command language.
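
To make the hand-off concrete, here’s a minimal sketch (separate from the FAA script) of the two halves of that message passing. The Bursts identifier is the same one used in the full script below; the payload string is just a placeholder.

# Sender: queue an engine event tagged "Bursts" with an arbitrary payload
New-Event -SourceIdentifier Bursts -MessageData "demo payload" | Out-Null

# Receiver: block until an event with that SourceIdentifier shows up, then consume it
$evt = Wait-Event -SourceIdentifier Bursts
Write-Host "Got $($evt.SourceIdentifier): $($evt.MessageData)"
Remove-Event -SourceIdentifier Bursts   # Wait-Event doesn't dequeue, so clean up explicitly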

Anyway, the full code is presented below for your amusement.

$cur = Get-Date
$Global:Count=0
$Global:baseline = @{"Monday" = @(3,8,5); "Tuesday" = @(4,10,7);"Wednesday" = @(4,4,4);"Thursday" = @(7,12,4); "Friday" = @(5,4,6); "Saturday"=@(2,1,1); "Sunday"= @(2,4,2)}
$Global:cnts =     @(0,0,0)
$Global:burst =    $false
$Global:evarray =  New-Object System.Collections.ArrayList

$action = { 
    $Global:Count++  
    $d=(Get-Date).DayofWeek
    $i= [math]::floor((Get-Date).Hour/8) 

   $Global:cnts[$i]++ 

   #event auditing!
    
   $rawtime =  $EventArgs.NewEvent.TargetInstance.LastAccessed.Substring(0,12)
   $filename = $EventArgs.NewEvent.TargetInstance.Name
   $etime= [datetime]::ParseExact($rawtime,"yyyyMMddHHmm",$null)
  
   $msg="$($etime)): Access of file $($filename)"
   $msg|Out-File C:\Users\bob\Documents\events.log -Append
  
   
   $Global:evarray.Add(@($filename,$etime))
   if(!$Global:burst) {
      $Global:start=$etime
      $Global:burst=$true            
   }
   else { 
     if($Global:start.AddMinutes(15) -gt $etime ) { 
        $Global:Count++
        #File behavior analytics
        $sfactor=2*[math]::sqrt( $Global:baseline["$($d)"][$i])
       
        if ($Global:Count -gt $Global:baseline["$($d)"][$i] + 2*$sfactor) {
         
         
          "$($etime): Burst of $($Global:Count) accesses"| Out-File C:\Users\bob\Documents\events.log -Append 
          $Global:Count=0
          $Global:burst =$false
          New-Event -SourceIdentifier Bursts -MessageData "We're in Trouble" -EventArguments $Global:evarray
          $Global:evarray= [System.Collections.ArrayList] @();
        }
     }
     else { $Global:burst =$false; $Global:Count=0; $Global:evarray= [System.Collections.ArrayList]  @();}
   }     
} 
 
Register-WmiEvent -Query "SELECT * FROM __InstanceModificationEvent WITHIN 5 WHERE TargetInstance ISA 'CIM_DataFile' and TargetInstance.Path = '\\Users\\bob\\' and targetInstance.Drive = 'C:' and (targetInstance.Extension = 'txt' or targetInstance.Extension = 'doc' or targetInstance.Extension = 'rtf') and targetInstance.LastAccessed > '$($cur)' " -sourceIdentifier "Accessor" -Action $action   


#Dashboard
While ($true) {
    $args=Wait-Event -SourceIdentifier Bursts # wait on Burst event
    Remove-Event -SourceIdentifier Bursts #remove event
  
    $outarray=@() 
    foreach ($result in $args.SourceArgs) {
      $obj = New-Object System.Object
      $obj | Add-Member -type NoteProperty -Name File -Value $result[0]
      $obj | Add-Member -type NoteProperty -Name Time -Value $result[1]
      $outarray += $obj  
    }


     $outarray|Out-GridView -Title "FAA Dashboard: Burst Data"
 }

Please don’t pound your laptop as you look through it.

I’m aware that I continue to pop up separate grid views, and there are better ways to handle the graphics. With PowerShell, you do have access to the full .Net framework, so you could create and access objects —listboxes, charts, etc. — and then update as needed. I’ll leave that for now as a homework assignment.

Classification is Very Important in Data Security

Let’s put my file event monitoring on the back burner, as we take up the topic of PowerShell and data classification.

At Varonis, we preach the gospel of “knowing your data” for good reason. In order to work out a useful data security program, one of the first steps is to learn where your critical or sensitive data is located — credit card numbers, consumer addresses, sensitive legal documents, proprietary code.

The goal, of course, is to protect the company’s digital treasure, but you first have to identify it. By the way, this is not just a good idea, but many data security laws and regulations (for example, HIPAA)  as well as industry data standards (PCI DSS) require asset identification as part of doing real-world risk assessment.

PowerShell should have great potential for use in data classification applications. Can PS access and read files directly? Check. Can it perform pattern matching on text? Check. Can it do this efficiently on a somewhat large scale? Check.

No, the PowerShell classification script I eventually came up with will not replace the Varonis Data Classification Framework. But for the scenario I had in mind – an IT admin who needs to watch over an especially sensitive folder – my PowerShell effort gets more than a passing grade, say a B+!

WQL and CIM_DataFile

Let’s now return to WQL, which I referenced in the first post on event monitoring.

Just as I used this query language to look at file events in a directory, I can tweak the script to retrieve all the files in a specific directory. As before I use the CIM_DataFile class, but this time my query is directed at the folder itself, not the events associated with it.

Get-WmiObject -Query "SELECT * From CIM_DataFile where Path = '\\Users\\bob\\' and Drive = 'C:' and (Extension = 'txt' or Extension = 'doc' or Extension = 'rtf')"

Terrific!  This line of code will output an array of file path names.

To read the contents of each file into a variable, PowerShell conveniently provides the Get-Content cmdlet. Thank you Microsoft.

I need one more ingredient for my script, which is pattern matching. Not surprisingly, PowerShell has a regular expression engine. For my purposes it’s a little bit of overkill, but it certainly saved me time.

In talking to security pros, they’ve often told me that companies should explicitly mark documents or presentations containing proprietary or sensitive information with an appropriate footer — say, Secret or Confidential. It’s a good practice, and of course it helps in the data classification process.

In my script, I created a PowerShell hashtable of possible marker texts with an associated regular expression to match it. For documents that aren’t explicitly marked this way, I also added special project names — in my case, snowflake — that would also get scanned. And for kicks, I added a regular expression for social security numbers.

The code block I used to do the reading and pattern matching is listed below. The file name to read and scan is passed in as a parameter.

$Action = {

Param (

[string] $Name

)

$classify =@{"Top Secret"=[regex]'[tT]op [sS]ecret'; "Sensitive"=[regex]'([Cc]onfidential)|([sS]nowflake)'; "Numbers"=[regex]'[0-9]{3}-[0-9]{2}-[0-9]{4}' }


$data = Get-Content $Name

$cnts= @()

foreach ($key in $classify.Keys) {

  $m=$classify[$key].matches($data)

  if($m.Count -gt 0) {

    $cnts+= @($key,$m.Count)
  }
}

$cnts
}
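
To try the block on its own before wiring it into anything bigger, you can invoke the scriptblock directly; the file path below is just a placeholder from my hypothetical test setup.

# Run the classification block against a single file (hypothetical path)
$hits = & $Action -Name 'C:\Users\bob\Documents\test.txt'

# $hits comes back as a flat array of alternating label/count pairs, e.g. "Sensitive", 2
$hits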

Magnificent Multi-Threading

I could have just simplified my project by taking the above code and adding some glue, and then running the results through the Out-GridView cmdlet.
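
For reference, that nice-and-easy serial version would be a short sketch like the one below, assuming the $Action scriptblock above is already defined; it reuses the CIM_DataFile query to build the file list and then classifies one file at a time.

# Serial version: classify each file in turn and show the results in a grid
$list = Get-WmiObject -Query "SELECT * From CIM_DataFile where Path = '\\Users\\bob\\' and Drive = 'C:' and (Extension = 'txt' or Extension = 'doc' or Extension = 'rtf')"

$results = foreach ($file in $list) {
    $hits = & $Action -Name $file.Name          # CIM_DataFile.Name is the full path
    if ($hits) {
        [PSCustomObject]@{ File = $file.Name; Matches = ($hits -join ', ') }
    }
}

$results | Out-GridView -Title "Classification (serial version)"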

But this being the Varonis IOS blog, we never, ever do anything nice and easy.

There is a point I’m trying to make. Even for a single folder in a corporate file system, there can be hundreds, perhaps even a few thousand files.

Do you really want to wait around while the script is serially reading each file?

Of course not!

Large-scale file I/O applications, like what we’re doing with classification, are well-suited for multi-threading—you can launch lots of file activity in parallel and thereby significantly reduce the delay in seeing results.

PowerShell does have a usable (if clunky) background processing system known as Jobs. But it also boasts an impressive and sleek multi-threading capability known as Runspaces.

After playing with it, and borrowing code from a few Runspaces’ pioneers, I am impressed.

Runspaces handles all the messy mechanics of synchronization and concurrency. It’s not something you can grok quickly, and even Microsoft’s amazing Scripting Guys are still working out their understanding of this multi-threading system.

In any case, I went boldly ahead and used Runspaces to do my file reads in parallel. Below is a bit of the code to launch the threads: for each file in the directory I create a thread that runs the above script block, which returns matching patterns in an array.

$RunspacePool = [RunspaceFactory]::CreateRunspacePool(1, 5)

$RunspacePool.Open()

$Tasks = @()


foreach ($item in $list) {

   $Task = [powershell]::Create().AddScript($Action).AddArgument($item.Name)

   $Task.RunspacePool = $RunspacePool

   $status= $Task.BeginInvoke()

   $Tasks += @($status,$Task,$item.Name)
}
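
The $list variable in the loop is assumed to hold the CIM_DataFile objects returned by the WQL query earlier in the post. The other half of the job, waiting for the threads and collecting their output, isn’t shown above; a sketch of that harvesting step might look like this:

# Harvest the results: wait for each task to finish, then collect its output.
# $Tasks is a flat array of (IAsyncResult, PowerShell instance, file name) triplets.
$report = @()
for ($i = 0; $i -lt $Tasks.Count; $i += 3) {
    $status = $Tasks[$i]
    $ps     = $Tasks[$i+1]
    $name   = $Tasks[$i+2]

    $hits = $ps.EndInvoke($status)     # blocks until this thread completes
    if ($hits.Count -gt 0) {
        $report += [PSCustomObject]@{ File = $name; Matches = ($hits -join ', ') }
    }
    $ps.Dispose()
}

$RunspacePool.Close()
$report | Out-GridView -Title "Content classification on the cheap"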

Let’s take a deep breath—we’ve covered a lot.

In the next post, I’ll present the full script, and discuss some of the (painful) details.  In the meantime, after seeding some files with marker text, I produced the following output with Out-GridView:

Content classification on the cheap!

In the meantime, another idea to think about is how to connect the two scripts: the file activity monitoring one and the classification script partially presented in this post.

After all, the classification script should communicate what’s worth monitoring to the file activity script, and the activity script could in theory tell the classification script when a new file is created so that it could classify it—incremental scanning in other words.

Sounds like I’m suggesting, dare I say it, a PowerShell-based security monitoring platform. We’ll start working out how this can be done the next time as well.

Data Security Compliance and DatAdvantage, Part I:  Essential Reports for Risk Assessment

Over the last few years, I’ve written about many different data security standards, data laws, and regulations. So I feel comfortable in saying there are some similarities among the EU’s General Data Protection Regulation, the US’s HIPAA rules, PCI DSS, NIST’s 800 family of controls, and others.

I’m really standing on the shoulders of giants, in particular the friendly security standards folks over at the National Institute of Standards and Technology (NIST), in understanding the inter-connectedness. They’re the go-to people for our government’s own data security standards: for both internal agencies (NIST 800-53) and outside contractors (NIST 800-171).  And through its voluntary Critical Infrastructure Cybersecurity Framework, NIST is also influencing data security ideas in the private sector.

One of their big ideas is to divide security controls, which every standard and regulation has in one form or another, into five functional areas: Identify, Protect, Detect, Respond, and Recover. In short, give me a data standard and you can map its controls into one of these categories.

The NIST big picture view of security controls.

The idea of commonality led me to start this series of posts about how our own products, principally Varonis DatAdvantage, though not targeted at any specific data standard or law, can in fact help meet many of the key controls and legal requirements. The out-of-the-box reporting feature in DatAdvantage is a great place to start to see how all this works.

In this first blog post, we’ll focus on DA reporting functions that roughly cover the identify category. This is a fairly large area in itself, taking in asset identification, governance, and risk assessment.

Assets: Users, Files, and More

For DatAdvantage, users, groups, and folders are the raw building blocks used in all its reporting. However, if you want to view pure file system asset information, you can go to the following three key reports in DatAdvantage.

The 3a report gives IT staff a listing of Active Directory group membership. For starters, you could run the report on the all-encompassing Domain Users group to get a global user list (below). You can also populate the report with any AD property associated with a user (email, managers, department, location, etc.)

For folders, report 3f provides access paths, size, number of subfolders, and the share path.

Beyond a vanilla list of folders, IT security staff usually wants to dig a little deeper into the file structure in order to identify sensitive or critical data. What is critical will vary by organization, but generally they’re looking for personally identifiable information (PII), such as social security numbers, email addresses, and account numbers, as well as intellectual property (proprietary code, important legal documents, sales lists).

With DatAdvantage’s 4g report, Varonis lets security staff zoom into folders containing sensitive PII data, which is often scattered across huge corporate file systems. Behind the scenes, the Varonis classification engine has scanned files using PII filters for different laws and regulations, and rated the files based on the number of hits — for example, number of US social security numbers or Canadian driver’s license numbers.

The 4g report lists these sensitive files from highest to lowest “hit” count. By the way, this is the report our customers often run first and find  very eye-opening —especially if they were under the impression that there’s ‘no way millions of credit card numbers could be found in plaintext’.

Assessing the Risks

We’ve just seen how to view nuts-and-bolts asset information, but the larger point is to use the file asset inventory to help security pros discover where an organization’s particular risks are located.

In other words, it’s the beginning of a formal risk assessment.

Of course, the other major part of assessment is to look (continuously) at the threat environment and then be on the hunt for specific vulnerabilities and exploits. We’ll get to that in a future post.

Now let’s use DatAdvantage for risk assessments, starting with users.

Stale user accounts are an overlooked scenario that has lots of potential risk. Essentially, user accounts are often not disabled or removed when an employee leaves the company or a contractor’s temporary assignment is over.

For the proverbial disgruntled employee, it’s not unusual for this former insider to still have access to his account.  Or for hackers to gain access to a no-longer-used third-party contractor’s account and then leverage that to hop into their real target.

In DatAdvantage’s 3a report, we can produce a list of stale user accounts based on the last logon time that’s maintained by Active Directory.

The sensitive data report that we saw earlier is the basis for another risk assessment report. We just have to filter on folders that have “everyone” permissions.

Security pros know from the current threat environment that phishing or SQL injection attacks allow an outsider to get the credentials of an insider. With no special permissions, a hacker would then have automatic access to folders with global permissions.

Therefore there’s a significant risk in having sensitive data in these open folders (assuming there are no other compensating controls).

DatAdvantage’s 12l report nicely shows where these folders are.

Let’s take a breath.

In the next post, we’ll continue our journey through DatAdvantage by finishing up with the risk assessment area and then focusing on the Protect and Detect categories.

For those compliance-oriented IT pros and other legal-istas, here’s a short list of regulations and standards (based on our customers’ requests) that the above reports help support:

  • NIST 800-53: IA-2
  • NIST 800-171: 3.5.1
  • HIPAA:  45 CFR 164.308(a)(1)(ii)(A)
  • GLBA: FTC Safeguards Rule (16 CFR 314.4)
  • PCI DSS 3.x: 12.2
  • ISO 27001: A.7.1.1
  • New York State DFS Cybersecurity Regulations: 500.02
  • EU GDPR: Security of Processing (Article 32) and Impact Assessments (Article 35)

Practical Powershell For IT Security, Part II: File Access Analytics (FAA)

In working on this series, I almost feel that with PowerShell we have technology that somehow time-traveled back from the future. Remember on Star Trek – the original of course — when the Enterprise’s CTO, Mr. Spock, was looking into his visor while scanning parsecs of space? The truth is Spock was gazing at the output of a Starfleet-approved PowerShell script.

Tricorders? Also powered by PowerShell.

Yes, I’m a fan of PowerShell, and boldly going where no blogger has gone before. For someone who’s been raised on bare-bones Linux shell languages, PowerShell looks like super-advanced technology. Part of PowerShell’s high-tech prowess is its ability, as I mentioned in the previous post, to monitor low-level OS events, like file updates.

A Closer Look at Register-WmiEvent

Let’s return to the amazing one line of file monitoring PS code I introduced last time.

Register-WmiEvent -Query "SELECT * FROM __InstanceModificationEvent WITHIN 5 WHERE TargetInstance ISA 'CIM_DataFile' and TargetInstance.Path = '\\Users\\bob\\' and targetInstance.Drive = 'C:' and (targetInstance.Extension = 'doc' or targetInstance.Extension = 'txt') and targetInstance.LastAccessed > '$($cur)' " -sourceIdentifier "Accessor" -Action $action

 

As you might have guessed, the logic on what to monitor is buried in the WQL contained in Register-WmiEvent’s query parameter.

You’ll recall that WQL allows scripters to retrieve information about Windows system events in general and, specifically in our case, file events – files created, updated, or deleted.  With this query, I’m effectively pulling out of Windows darker depths file modification events that are organized as a CIM_DataFile class.

WQL allows me to set the drive and folder I’m interested in searching — that would be the Drive and Path properties that I reference above.

Though I’m not allowed to use a wild card search — it’s a feature, not a bug — I can instead search for specific file extensions. My goal in developing the script for this post is to help IT security spot excessive activity on readable files.  So I set up a logical condition to search for files with “doc” or “txt” extensions. Makes sense, right?

Now for the somewhat subtle part.

I’d like to collect file events generated by anyone accessing a file, including those who just read a Microsoft Word document without making changes.

Can that be done?

When we review a file list in Windows Explorer, we’re all familiar with the “Date Modified” field. But did you know there’s also a “Date Accessed” field? Every time you read a file in Windows, this field is, in theory, updated with the current time stamp. You can discover this for yourself—see below—by clicking on the column heads and enabling the access field.

However, in practice, Windows machines aren’t typically configured to update this internal field when a file is just accessed—i.e., read, but not modified. Microsoft says it will slow down performance. But let’s throw caution to the wind.

To configure Windows to always update the file access time, you use the under-appreciated fsutil utility (you’ll need admin access) with the following parameters: 

fsutil behavior set disablelastaccess 0
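
If you want to confirm the change took, you can read the setting back with the query form of the same command (the output format varies a bit by Windows version):

fsutil behavior query disablelastaccess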

With file access updates now configured in my test environment, Windows will also record read-only events.

My final search criteria in the above WQL should make sense:

targetInstance.LastAccessed > '$($cur)'

It says that I’m only interested in file events in which file access has occurred after the Register-WmiEvent is launched. The $cur variable, by the way, is assigned the current time pulled from the Get-Date cmdlet.

File Access Analytics (FAA)

We’ve gotten through the WQL, so let’s continue with the remaining parameters in Register-WmiEvent.

SourceIdentifier allows you to name an event. Naming things – people, tabby cats, and terriers – is always a good practice since you can call them when you need ‘em.

And it holds just as true for events! There are a few cmdlets that work with this identifier: Unregister-Event for removing a given event subscription, Get-Event for letting you review all the events that are queued, Remove-Event for erasing current events in the queue, and finally Wait-Event for doing an explicit synchronous wait. We’ll be using some of these cmdlets in the completed code.
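
As a quick illustration of that housekeeping, here’s how you’d inspect and then tear down the subscription created above, using the same "Accessor" identifier (a sketch; run it when you’re done monitoring):

# See what's registered and what's still sitting in the event queue
Get-EventSubscriber
Get-Event

# Remove the file-watcher subscription and flush any of its queued events
Unregister-Event -SourceIdentifier "Accessor"
Remove-Event -SourceIdentifier "Accessor" -ErrorAction SilentlyContinue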

I now have the core of my script worked out.

That leaves the Action parameter. Since Register-WmiEvent responds asynchronously to events, it needs some code to handle the response to the triggering event, and that’s where the action, so to speak, is: in a block of PowerShell code that’s passed in.

This leads to what I really want to accomplish with my script, and so I’m forced to reveal my grand scheme to take over the User Behavior Analytics world with a few lines of PowerShell code.

Here’s the plan: This PS script will monitor file access event rates, compare them to a baseline, and decide whether the event rates fall into an abnormal range, which could indicate possible hacking. If this threshold is reached, I’ll display an amazing dashboard showing the recent activity.

In other words, I’ll have a threat monitor alert system that will spot unusual activity against text files in a specific directory.

Will Powershell Put Security Solutions Out of Business?

No, Varonis doesn’t have anything to worry about, for a few reasons.

One, event monitoring is not really something Windows does efficiently. Microsoft in fact warns that turning on last access file updates through fsutil adds system overhead. In addition, Register-WmiEvent makes the internal event flywheels spin faster: I came across some comments saying the cmdlet may cause the system to slow down.

Two, I’ve noticed that this isn’t real-time or near real-time monitoring: there’s a lag in receiving file events, running up to 30 minutes or longer. At least, that was my experience running the scripts on my AWS virtual machine. Maybe you’ll do better on your dedicated machine, but I don’t think Microsoft is making any kind of promises here.

Three, try as I might, I was unable to connect a file modification event to the user of the app that was causing the event. In other words, I know a file event has occurred, but alas it doesn’t seem to be possible with Register-WmiEvent to know who caused it.

So I’m left with a script that can monitor file access but without assigning attribution. Hmmm …  let’s create a new security monitoring category, called File Access Analytics (FAA), which captures what I’m doing. Are you listening Gartner?

The larger point, of course, is that User Behavior Analytics (UBA) is a far better way to spot threats because user-specific activity contains the interesting information. My far less granular FAA, while useful, can’t reliably pinpoint the bad behaviors since it aggregates events over many users.

However, for small companies with only a few accounts logged on, FAA may be just enough. I can see an admin using the scripts when she suspects a user is spending too much time poking around a directory with sensitive data. And there are some honeypot possibilities with this code as well.

And even if my script doesn’t quite do the job, the even larger point is that understanding the complexities of dealing with Windows events using PowerShell (or whatever language you use) will make you, ahem, appreciate enterprise-class solutions.

We’re now ready to gaze upon the action block of my Register-WmiEvent:

$action = { 
    $Global:Count++  
    $d=(Get-Date).DayofWeek
    $i= [math]::floor((Get-Date).Hour/8) 

   $Global:cnts[$i]++ 

   #event auditing!
    
   $rawtime =  $EventArgs.NewEvent.TargetInstance.LastAccessed.Substring(0,12)
   $filename = $EventArgs.NewEvent.TargetInstance.Name
   $etime= [datetime]::ParseExact($rawtime,"yyyyMMddHHmm",$null)
  
   $msg="$($etime)): Access of file $($filename)"
   $msg|Out-File C:\Users\bob\Documents\events.log -Append

   
   $Global:evarray.Add(@($filename,$etime))
   if(!$Global:burst) {
      $Global:start=$etime
      $Global:burst=$true            
   }
   else { 
     if($Global:start.AddMinutes(15) -gt $etime ) { 
        $Global:Count++
        #File behavior analytics
        $sfactor=2*[math]::sqrt( $Global:baseline["$($d)"][$i])
        write-host "sfactor: $($sfactor))"
        if ($Global:Count -gt $Global:baseline["$($d)"][$i] + 2*$sfactor) {
         
          
          "$($etime): Burst of $($Global:Count) accesses"| Out-File C:\Users\bob\Documents\events.log -Append 
          $Global:Count=0
          $Global:burst =$false
          New-Event -SourceIdentifier Bursts -MessageData "We're in Trouble" -EventArguments $Global:evarray
          $Global:evarray= [System.Collections.ArrayList] @();
        }
     }
     else { $Global:burst =$false; $Global:Count=0; $Global:evarray= [System.Collections.ArrayList]  @();}
   }     
} 

 

Yes, I do audit logging by using the Out-File cmdlet to write a time-stamped entry for each access. And I detect bursty file access hits over 15-minute intervals, comparing the event counts against a baseline that’s held in the $Global:baseline array.

I got a little fancy here, and set up mythical average event counts in baseline for each day of the week, dividing the day into three eight hour periods. When the burst activity in a given period falls at the far end of the “tail” of the bell curve, we can assume we’ve spotted a threat.
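
To make the threshold concrete, here’s the arithmetic for one slot of my made-up baseline (Thursday, first eight-hour period, where the expected count is 7); it mirrors the math in the action block above.

# Example threshold for one time slot: baseline count of 7 accesses
$avg     = $Global:baseline["Thursday"][0]     # 7
$sfactor = 2 * [math]::Sqrt($avg)              # about 5.3
$limit   = $avg + 2 * $sfactor                 # about 17.6, so 18+ hits in the window raises the Bursts event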

The FAA Dashboard

With the bursty event data held in $Global:evarray (files accessed with timestamps), I decided that it would be a great idea to display it as a spiffy dashboard. But rather than holding up the code in the action block, I “queued” up this data on its own event, which can be handled by a separate app.

Whaaat?

Let me try to explain. This is where the New-Event cmdlet comes into play at the end of the action block above. It simply allows me to asynchronously ping another app or script, thereby not tying down the action code block so it can then handle the next file access event.

I’ll present the full code for my FAA PowerShell script in the next post.  For now, I’ll just say that I set up a Wait-Event cmdlet whose sole purpose is to pick up these burst events and then funnel the output into a beautiful table, courtesy of Out-GridView.

Here’s the end result that will pop on an admin’s console:

 

Impressive in its own way considering the whole FAA “platform” was accomplished in about 60 lines of PS code.

We’ve covered a lot of ground, so let’s call it a day.

We’ll talk more about the full FAA script the next time, and then we’ll start looking into the awesome hidden content classification possibilities of PowerShell.

 

Cybercrime Laws Get Serious: Canada’s PIPEDA and CCIRC

In this series on governmental responses to cybercrime, we’re taking a look at how countries through their laws are dealing with broad attacks against IT infrastructure beyond just data theft. Ransomware and DDoS are prime examples of threats that don’t necessarily fit into the narrower definition of breaches found in PII-focused data security laws. That’s where special cybercrime rules come into play.

In the first post, we discussed how the EU’s Network and Information Security (NIS) Directive tries to close the gaps left open by the EU Data Protection Directive (DPD) and the impending General Data Protection Regulation (GDPR).

Let’s now head north to Canada.

Like the EU, Canada has a broad consumer data-oriented security law, which is known as the Personal Information Protection and Electronic Documents Act (PIPEDA).  For nitpickers, there are also overriding data laws at the provincial level — Alberta and British Columbia’s PIPA — that effectively mirror PIPEDA.

The good news about PIPEDA is that it has a strong breach notification rule wherein unauthorized data access has to be reported to the Canadian regulators.  So ransomware attacks would fall under this rule. But for reporting a breach to consumers, PIPEDA uses a “risk of harm” threshold. Harm can be of a financial nature as well as anything having a significant effect on the reputation of the individual.

Anyway, PIPEDA is like the Canadian version of the current EU DPD but with a fairly practical breach reporting requirement.

Is there anything like the EU’s NIS?

Not at this point.

But in 2015, the Canadian government started funding several initiatives to help the private sector protect against cyber threats. One of the key programs that came out of this was the Canadian Cyber Incident Response Centre (CCIRC), which is similar to the EU’s CSIRTs.

CCIRC provides technical advice and support, monitors the threat environment and posts cybersecurity bulletins (see their RSS feed), and provides a forum, the Community Portal, through which companies can share information.

For now, Canada is following a US-style approach: help and support private industry in dealing with cyberattacks against important IT infrastructure, but make reporting and other compliance matters a voluntary arrangement.

However, the public discussion continues, and with attacks like this, new approaches may be needed.

Varonis eBook: Pen Testing Active Directory Environments

You may have been following our series of posts on pen testing Active Directory environments and learned about the awesome powers of PowerView. No doubt you were wowed by our cliffhanger ending — spoiler alert — where we applied graph theory to find the derivative admin!

Or maybe you tuned in late, saw this post, and binge read the whole thing during snow storm Nemo.

In any case, we know from the many emails we received that you demanded a better ‘long-form’ content experience. After all, who’d want to read about finding hackable vulnerabilities in Active Directory while being forced to click six times to access the entire series?

We listened!

Thanks to the miracle of PDF technology, we’ve compressed the entire series into an easy-to-read, comfy ebook format. Best of all, you can scroll through the entire contents without having to touch messy hyperlinks.

Download the Varonis Pen Testing Active Directory Environments ebook, and enjoy click-free reading today!

Practical PowerShell for IT Security, Part I: File Event Monitoring

Back when I was writing the ultimate penetration testing series to help humankind deal with hackers, I came across some interesting PowerShell cmdlets and techniques. I made the remarkable discovery that PowerShell is a security tool in its own right. Sounds to me like it’s the right time to start another series of PowerShell posts.

We’ll take the view in these posts that while PowerShell won’t replace purpose-built security platforms — Varonis can breathe easier now — it will help IT staff monitor for threats and perform other security functions. And also give IT folks an appreciation of the miracles that are accomplished by real security platforms, like our own Metadata Framework. PowerShell can do interesting security work on a small scale, but it is in no way equipped to take on an entire infrastructure.

It’s a Big Event

To begin, let’s explore using PowerShell as a system monitoring tool to watch files, processes, and users.

Before you start cursing into your browsers, I’m well aware that any operating system command language can be used to monitor system-level happenings. A junior IT admin can quickly put together, say, a Linux shell script to poll a directory to see if a file has been updated or retrieve a list of running processes to learn if a non-standard process has popped up.

I ain’t talking about that.

PowerShell instead gives you direct event-driven monitoring based on the operating system’s access to low-level changes. It’s the equivalent of getting a push notification on a news web page alerting you to a breaking story rather than having to manually refresh the page.

In this scenario, you’re not in an endless PowerShell loop, burning up CPU cycles, but instead the script is only notified or activated when the event — a file is modified or a new user logs in — actually occurs. It’s a far more efficient way to do security monitoring than by brute-force polling.
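
Just to show what we’re not doing, a brute-force polling loop would look something like the sketch below (the path is from my hypothetical test setup). It wakes up on a timer whether or not anything has changed; the event-driven approach only gets called when something actually happens.

# Brute-force polling: sweep a folder every 10 seconds and report recent changes
$last = Get-Date
while ($true) {
    Get-ChildItem C:\Users\bob\Documents -File |
        Where-Object { $_.LastWriteTime -gt $last } |
        ForEach-Object { Write-Host "Changed: $($_.FullName)" }
    $last = Get-Date
    Start-Sleep -Seconds 10
}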

Further down below, I’ll explain how this is accomplished.

But first, anyone who’s ever taken, as I have, a basic “Operating Systems for Poets” course knows that there’s a demarcation between user-level and system-level processes.

The operating system, whether Linux or Windows, does the low-level handling of device actions – anything from disk reads, to packets being received — and hides this from garden variety apps that we run from our desktop.

So if you launch your favorite word processing app and view the first page of a document, the whole operation appears as a smooth, synchronous activity. But in reality there are all kinds of time-sensitive events — disk seeks, disk blocks being read, characters sent to the screen, etc. — that are happening under the hood and deliberately hidden from us.  Thank you Bill Gates!

In the old days, only hard-core system engineers knew about this low-level event processing. But as we’ll soon see, PowerShell scripters can now share in the joy as well.

An OS Instrumentation Language

This brings us to Windows Management Instrumentation (WMI), which is a Microsoft effort to provide a consistent view of operating system objects.

WMI is itself part of a broader industry effort, known as Web-Based Enterprise Management (WBEM), to standardize the information pulled out of routers, switches, storage arrays, as well as operating systems.

So what does WMI actually look and feel like?

For our purposes, it’s really a query language, like SQL, but instead of accessing rows of vanilla database columns, it presents complex OS information organized as a WMI_class hierarchy. Not too surprisingly, the query language is known as, wait for it, WQL.

Windows generously provides a utility, wbemtest, that lets you play with WQL. In the graphic below, you can see the results of my querying the Win32_Process object, which holds information on the current processes running.

WQL on training wheels with wbemtest.

Effectively, it’s the programmatic equivalent of running the Windows Task Manager. Impressive, no? If you want to know more about WQL, download Ravi Chaganti’s wondrous ebook on the subject.

PowerShell and the Register-WmiEvent Cmdlet

But there’s more! You can take off the training wheels provided by wbemtest, and try these queries directly in PowerShell.

PowerShell’s Get-WmiObject is the appropriate cmdlet for this task, and it lets you feed in the WQL query directly as a parameter.

The graphic below shows the first few results from running select Name, ProcessId, CommandLine from Win32_Process on my AWS test environment.

gwmi is the PowerShell alias for Get-WmiObject.

The output is a bit wonky since it’s showing some hidden properties having to do with underlying class bookkeeping. The cmdlet also spews out a huge list that speeds by on my console.

For a better Win32_Process experience, I piped the output from the query into Out-GridView, a neat PS cmdlet that formats the data as a beautiful GUI-based table.
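
That pipeline is a one-liner, something along these lines (gwmi being the alias for Get-WmiObject):

# Query running processes via WQL and view them in a sortable grid
gwmi -Query "select Name, ProcessId, CommandLine from Win32_Process" | Out-GridView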

Not too shabby for a line of PowerShell code. But WMI does more than allow you to query these OS objects.

As I mentioned earlier, it gives you access to relevant events on the objects themselves. In WMI, these events are broadly broken into three types: creation, modification, and deletion.

Prior to PowerShell 2.0, you had to access these events in a clunky way: creating lots of different objects, and then you were forced to synchronously ‘hang’, so it wasn’t true asynchronous event-handling. If you want to know more, read this MS Technet post for the ugly details.

Now in PS 2.0 with the Register-WmiEvent cmdlet, we have a far prettier way to react to all kinds of events. In geek-speak, I can register a callback that fires when the event occurs.

Let’s go back to my mythical (and now famous) Acme Company, whose IT infrastructure is set up on my AWS environment.

Let’s say Bob, the sys admin, notices every so often that he’s running low on file space on the Salsa server. He suspects that Ted Bloatly, Acme’s CEO, is downloading huge files, likely audio files, into one of Bob’s directories and then moving them into Ted’s own server on Taco.

Bob wants to set a trap: when a large file is created in his home directory, he’ll be notified on his console.

To accomplish this, he’ll need to work with the CIM_DataFile class.  Instead of accessing processes, as we did above, Bob uses this class to connect with the underlying file metadata.

CIM_DataFile object can be accessed directly in PowerShell.
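
For instance, a quick way to peek at one file’s metadata through this class looks like the line below; the path is a hypothetical one from my test setup, and note that WQL wants the backslashes escaped.

# Pull the metadata for a single file directly from the CIM_DataFile class
gwmi -Query "SELECT Name, FileSize, LastAccessed FROM CIM_DataFile WHERE Name = 'C:\\Users\\bob\\Documents\\notes.txt'"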

Playing the part of Bob, I created the following Register-WmiEvent script, which will notify the console when a very large file is created in the home directory.

Register-WmiEvent -Query "SELECT * FROM __InstanceModificationEvent WITHIN 5 WHERE TargetInstance isa 'CIM_DataFile' and TargetInstance.FileSize > 2000000 and TargetInstance.Path = '\\Users\\bob\\' and targetInstance.Drive = 'C:' " -sourceIdentifier "Accessor3" -Action { Write-Host "Large file" $EventArgs.NewEvent.TargetInstance.Name "was created" }

 

Running this script directly from the Salsa console launches the Register-WmiEvent command in the background, assigning it a job number, and then only interacts with the console when the event is triggered.

In the next post, I’ll go into more details about what I’ve done here. Effectively, I’m using WQL to query the CIM_DataFile object — particularly anything in the \Users\bob directory that’s over 2 million bytes — and setting up a notification when a new file is created that fits these criteria — that’s where InstanceModificationEvent comes into play.

Anyway, in my Bob role I launched the script from the PS command line, and then, putting on my Ted Bloatly hat, copied a large mp4 into Bob’s directory. You can see the results below.

We now know that Bloatly is a fan of Melody Gardot. Who would have thunk it?

You begin to see some of the exciting possibilities with PowerShell as a tool to detect threat patterns and perhaps for doing a little behavior analytics.

We’ll be exploring these ideas in the next post.

G’Day, Australia Approves Breach Notification Rule

Last month, Australia finally amended its Privacy Act to now require breach notification. This proposed legislative change has been kicking around the Federal Government for a few years. Our attorney friends at Hogan Lovells have a nice summary of the new rule.

The good news here is that Australia defines a breach broadly enough to include both unauthorized disclosure and access of personal information. Like the GDPR, Australia also considers personal data to be any information about an identified individual or that can be reasonably linked to an individual.

In real-world terms, it means that if hackers get phone numbers, bank account data, or medical records or if malware, like ransomware, merely accesses this information, then it’s considered a breach.

So far, so good.

There’s a ‘But’

However, the new Australian requirement has a harm threshold that also has to be met for the breach to be reportable. This is not in itself unusual in that we’ve seen these same harm thresholds in US states’ breach notification laws, and even the EU’s GDPR and the NIS Directive.

In the Australian case, the language used is that the breach is "likely to result in serious harm." While not explicitly stated, the surrounding context in the amendment suggests the breach would have to cause serious physical, psychological, emotional, economic, reputational, or financial harm, or some other effect that a "reasonable" person would agree is serious.

By the way, this is also similar to what’s in the GDPR’s preamble.

The Australian breach notification rule, though, goes further, with explicit remediation exceptions that give the covered entities – private sector companies, government agencies, and health care providers – even more wiggle room. If the breached entity can show that it has taken remedial action on the disclosure or access before it results in serious harm, then it doesn't have to report the incident.

I suppose you could come up with scenarios where there's been, say, limited exposure of passwords from a health insurance company's website, the company freezes the relevant user accounts and then instructs affected individuals to contact it about resetting their passwords. That might count as a successful remediation.

You can see what the Australian regulators were getting at. By the way, I don’t think this rule is as “floppy” as one publication called the notification criteria. But it does give the covered entities something of a second chance.

Anyway, if there’s a harmful breach event, then Australian organizations will have to notify the regulators as soon as possible after discovery. They’ll need to provide them with breach details, including the information accessed, as well as steps affected individuals should take.

The Australian breach notification rule is set to go into effect in a few weeks, and there will be a one-year grace period from that point. Failure to comply can result in investigations, forced remedial actions, and fines or compensations.

Verizon Data Breach Digest 2017

Verizon Data Breach Digest 2017

While we're anxiously waiting for the next edition of the Data Breach Investigations Report (DBIR), Verizon released its annual Data Breach Digest (DBD) earlier this month. What's the DBD? It condenses the various breach patterns discussed in the DBIR. In this year's report, Verizon reduced the 12 patterns to a mere four generalized scenarios: the Human Element, Conduit Devices, Configuration Exploitation, and Malicious Software.

Of course, when you start abstracting and clustering information, you end up creating fuzzy caricatures. So they call what they’ve come up with “scenari-catures”.

You can’t accuse the Verizon research team of not having a wacky sense of humor.

If you play along with them, you can use the DBD to get a snapshot view of various breach scenarios. In fact, they've created attack-defend cards (below), each with its own threat persona.

Trade them, collect them! (Verizon Data Breach Digest 2017)

Because all this complicated data has been gamified, C-levels and other executives who are used to dealing in broad abstractions will find these cards very comforting. Think of it as baseball cards for the security minded.

In looking through the DBD, I’ll grudgingly admit that it has real practical value, especially for those who believe that the breach they just experienced is unique.

The DBD shows that they are not alone in being phished into accidentally making a wire transfer to the Cayman Islands, or in discovering that their systems were hacked because IT never caught up with patches even after assuring everyone that everything was secure. Verizon calls that last scenario "the Fiddling Nero".

I told you they had a sense of humor. Have as much fun looking through it as I did!

Cybersecurity Laws Get Serious: EU’s NIS Directive

Cybersecurity Laws Get Serious: EU’s NIS Directive

In the IOS blog, our cyberattack focus has mostly been on hackers stealing PII and other sensitive personal data. The breach notification laws and regulations that we write about require notification only when there’s been acquisition or disclosure of PII by an unauthorized user. In plain speak, the data is stolen.

These data laws, though, fall short in two significant ways.

One, the hackers can potentially take data that’s not covered by the law: non-PII that can include corporate IP, sensitive emails from the CEO, and other valuable proprietary information. Two, the attackers are not interested in taking data but rather in disruption: for example, deploying DoS attacks or destroying important system or other non-PII data.

Under the US’s HIPAA, GLBA, and state breach laws as well as the EU’s GDPR, neither of the two cases above — and that takes in a lot of territory — would trigger a notification to the appropriate government authority.

The problem is that data privacy and security laws focus, naturally, on the data, instead of the information system as a whole. However, it doesn’t mean that governments aren’t addressing this broader category of cybersecurity.

There’s not been nearly enough attention paid to the EU’s Network and Information Systems (NIS) Directive, the US’s (for now) voluntary Critical Infrastructure Security Framework, Canada’s cybersecurity initiatives, and other laws in major EU countries.

And that's my motivation in writing this first in a series of posts on cybersecurity rules. These are important rules that organizations should be more aware of. Sometime soon, it won't be good enough, legally speaking, to protect special classes of data. Companies will be required to protect entire IT systems and report to regulatory authorities when there have been actions to disrupt or disable the IT infrastructure.

Protecting the Cyber

The laws and guidelines that have evolved in this area are associated with safeguarding critical infrastructure – telecom, financial, medical, chemical, transportation. The reason is that cybercrime against the IT network of, say, Hoover Dam or the Federal Reserve should be treated differently than an attack against a dating web site.

Not that an attack against any IT system isn’t a serious and potentially costly act. But with critical infrastructure, where there isn’t an obvious financial motivation, we start entering the realm of cyber espionage or cyber disruption initiated by governments.

In other words, bank ATMs suddenly not dispensing cash, the cell phone network dropping calls, or – heaven help us! – Google replying with wrong and deceptive answers may be a sign of a cyberwar, or at least a cyber ambush.

A few months back, we wrote about an interview between Charlie Rose and John Carlin, the former Assistant Attorney General in the National Security Division of the Department of Justice. The transcript can be found here, and it’s worth going through it, or at least searching on the “attribution” keyword.

Essentially, Carlin tells us that US law enforcement is getting far better at learning who is behind cyberattacks. The Department of Justice is now publicly naming the attackers, and then prosecuting them. By the way, Carlin went after Iranian hackers accused of intrusions into banks and a small dam near New York City. Fortunately, the dam's valves were still manually operated and not connected to the Internet.

Carlin believes there are important advantages in going public with a prosecution against named individuals. Carlin sees it as a way to deter future cyber incidents. As he puts it, “because if you are going to be able to deter, you’ve got to make sure the world knows we can figure out who did it.”

So it would make enormous sense to require companies to report cyberattacks to governmental agencies, who can then put the pieces together and formally take legal and other actions against the perps.

First Stop: EU's NIS Directive

As with the Data Protection Directive for data privacy, which was adopted in 1995, the EU has again been way ahead of other countries in formalizing cyber reporting legislation. Its Network and Information Systems Directive was initially drafted in 2013 and was approved by the EU last July.

Since it is a directive, individual EU countries will have to transpose NIS into their own national laws. They'll have a two-year transition period to get their houses in order, and an additional six months to identify the companies providing essential services (see Appendix II).

In Article 14, operators of essential services are required to take “appropriate and proportionate technical and organisational measures to manage the risks posed to the security of network and information systems.”  They are also required to report, without undue delay, significant incidents to a Computer Security Incident Response Team or CSIRT.

There’s separate and similar language in Article 16 covering digital service providers, which is the EU’s way of saying ecommerce, cloud computing, and search services.

CSIRTs are at the center of the NIS Directive. Besides collecting incident data, CSIRTs are also responsible for monitoring and analyzing threat activity at a national level, issuing alerts and warnings, and sharing their information and threat awareness with other CSIRTs.  (In the US, the closest equivalent is the Department of Homeland Security’s NCCIC.)

What is considered an incident in the NIS Directive?

It is any “event having an actual adverse effect on the security of network and information systems.”  Companies designated as providing essential services are given some wiggle room in what they have to report to a CSIRT. For an incident to be significant, and thus reportable, the company has to consider the number of users affected, the duration, and the geographical scope.

Essential digital service operators must also take into account the effect of their disruption on economic and “societal activities”.

Does this mean that a future attack against, say, Facebook in the EU, in which Messenger or status posting activity is disrupted would have to be reported?

To this non-attorney blogger, it appears that Facebooking could be considered an important societal activity.

Yeah, there are vagaries in the NIS Directive, and it will require more guidance from the regulators.

In my next post in this series, I'll take a closer look at the cybersecurity rules due north of us, courtesy of our Canadian neighbors.