All posts by Andy Green

Practical PowerShell for IT Security, Part II: File Access Analytics (FAA)

In working on this series, I almost feel that with PowerShell we have technology that somehow time-traveled back from the future. Remember on Star Trek — the original, of course — when the Enterprise’s science officer, Mr. Spock, peered into his viewer while scanning parsecs of space? The truth is Spock was gazing at the output of a Starfleet-approved PowerShell script.

Tricorders? Also powered by PowerShell.

Yes, I’m a fan of PowerShell, and boldly going where no blogger has gone before. For someone who’s been raised on bare-bones Linux shell languages, PowerShell looks like super-advanced technology. Part of PowerShell’s high-tech prowess is its ability, as I mentioned in the previous post, to monitor low-level OS events, like file updates.

A Closer Look at Register-WmiEvent

Let’s return to the amazing one-liner of file monitoring PS code I introduced last time.

Register-WmiEvent -Query "SELECT * FROM __InstanceModificationEvent WITHIN 5 WHERE TargetInstance ISA 'CIM_DataFile' and TargetInstance.Path = '\\Users\\bob\\' and targetInstance.Drive = 'C:' and (targetInstance.Extension = 'doc' or targetInstance.Extension = 'txt') and targetInstance.LastAccessed > '$($cur)' " -sourceIdentifier "Accessor" -Action $action

As you might have guessed, the logic on what to monitor is buried in the WQL contained in Register-WmiEvent’s query parameter.

You’ll recall that WQL allows scripters to retrieve information about Windows system events in general and, specifically in our case, file events – files created, updated, or deleted. With this query, I’m effectively pulling file modification events, represented by the CIM_DataFile class, out of Windows’ darker depths.

WQL allows me to set the drive and folder I’m interested in searching — that would be the Drive and Path properties that I reference above.

Though I’m not allowed to use a wildcard search — it’s a feature, not a bug — I can instead search for specific file extensions. My goal in developing the script for this post is to help IT security spot excessive activity on readable files. So I set up a logical condition to search for files with “doc” or “txt” extensions. Makes sense, right?

Now for the somewhat subtle part.

I’d like to collect file events generated by anyone accessing a file, including those who just read a Microsoft Word document without making changes.

Can that be done?

When we review a file list in Windows Explorer, we’re all familiar with the “Date Modified” field. But did you know there’s also a “Date Accessed” field? Every time you read a file in Windows, this field is, in theory, updated with the current time stamp. You can discover this for yourself — see below — by clicking on the column headers and enabling the access field.

However, in practice, Windows machines aren’t typically configured to update this internal field when a file is just accessed—i.e., read, but not modified. Microsoft says it will slow down performance. But let’s throw caution to the wind.

To configure Windows to always update the file access time, you use the under-appreciated fsutil utility (you’ll need admin access) with the following parameters: 

fsutil behavior set disablelastaccess 0
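
By the way, you can check the current setting first with the matching query command:

fsutil behavior query disablelastaccess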

With last access updates enabled in my test environment, Windows now records read-only events as well.

My final search criteria in the above WQL should make sense:

targetInstance.LastAccessed > '$($cur)'

It says that I’m only interested in file events in which file access has occurred after the Register-WmiEvent is launched. The $cur variable, by the way, is assigned the current time pulled from the Get-Date cmdlet.
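
For reference, $cur is set just before the event is registered — something like this:

$cur = Get-Date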

File Access Analytics (FAA)

We’ve gotten through the WQL, so let’s continue with the remaining parameters in Register-WmiEvent.

SourceIdentifier allows you to name an event. Naming things – people, tabby cats, and terriers – is always a good practice since you can call them when you need ’em.

And it holds just as true for events! There are a few cmdlets that work with this identifier. For starters, Unregister-Event for removing a given event subscription, Get-Event for letting you review all the events that are queued, Remove-Event for erasing current events in the queue, and finally Wait-Event for doing an explicit synchronous wait. We’ll be using some of these cmdlets in the completed code.
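
For instance, with the “Accessor” identifier from the one-liner above, a cleanup session might look like this:

Get-Event -SourceIdentifier "Accessor"            # review queued events
Remove-Event -SourceIdentifier "Accessor"         # clear them from the queue
Unregister-Event -SourceIdentifier "Accessor"     # remove the subscription itself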

I now have the core of my script worked out.

That leaves the Action parameter. Since Register-WmiEvent responds asynchronously to events, it needs some code to handle the response to the triggering event, and that’s where the action, so to speak, is: in a block of PowerShell code that’s passed in.

This leads to what I really want to accomplish with my script, and so I’m forced to reveal my grand scheme to take over the User Behavior Analytics world with a few lines of PowerShell code.

Here’s the plan: this PS script will monitor file access event rates, compare them to a baseline, and decide whether the rates fall into an abnormal range, which could indicate possible hacking. If that threshold is reached, I’ll display an amazing dashboard showing the recent activity.

In other words, I’ll have a threat monitor alert system that will spot unusual activity against text files in a specific directory.

Will PowerShell Put Security Solutions Out of Business?

No, Varonis doesn’t have anything to worry about, for a few reasons.

One, event monitoring is not really something Windows does efficiently. Microsoft in fact warns that turning on last access file updates through fsutil adds system overhead. In addition, Register-WmiEvent makes the internal event flywheels spin faster: I came across some comments saying the cmdlet may cause the system to slow down.

Two, I’ve noticed that this isn’t real-time or near real-time monitoring: there’s a lag in receiving file events, running up to 30 minutes or longer. At least, that was my experience running the scripts on my AWS virtual machine. Maybe you’ll do better on your dedicated machine, but I don’t think Microsoft is making any kind of promises here.

Three, try as I might, I was unable to connect a file modification event to the user of the app that was causing the event. In other words, I know a file event has occurred, but alas it doesn’t seem to be possible with Register-WmiEvent to know who caused it.

So I’m left with a script that can monitor file access but without assigning attribution. Hmmm … let’s create a new security monitoring category, called File Access Analytics (FAA), which captures what I’m doing. Are you listening, Gartner?

The larger point, of course, is that User Behavior Analytics (UBA) is a far better way to spot threats because user-specific activity contains the interesting information. My far less granular FAA, while useful, can’t reliably pinpoint the bad behaviors since it aggregates events over many users.

However, for small companies with a few accounts logged on, FAA may be just enough. I can see an admin using the scripts when she suspects a user is spending too much time poking around a directory with sensitive data. And there are some honeypot possibilities with this code as well.

And even if my script doesn’t quite do the job, the even larger point is that understanding the complexities of dealing with Windows events using PowerShell (or whatever language you use) will make you, ahem, appreciate enterprise-class solutions.

We’re now ready to gaze upon the action block of my Register-WmiEvent:

Yes, I do audit logging by using the Out-File cmdlet to write a time-stamped entry for each access. And I detect bursty file access hits over 15-minute intervals, comparing the event counts against a baseline that’s held in the $Global:baseline array.

I got a little fancy here and set up mythical average event counts in the baseline for each day of the week, dividing the day into three eight-hour periods. When the burst activity in a given period falls at the far end of the “tail” of the bell curve, we can assume we’ve spotted a threat.
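
To give you the flavor, here’s a stripped-down sketch of that action block. The $Global:baseline and $Global:evarray names come straight from my script, but the log path, the extra globals, and the threshold math below are simplified stand-ins for the real thing:

$action = {
    $file = $EventArgs.NewEvent.TargetInstance

    # Audit logging: write a time-stamped entry for each access
    "$(Get-Date) : $($file.Name)" | Out-File C:\Users\bob\AccessLog.txt -Append

    # Remember the file name and timestamp for the dashboard
    # (assumes $Global:evarray is an ArrayList, and $Global:evcount and
    #  $Global:clock were initialized before registering the event)
    [void]$Global:evarray.Add(@($file.Name, (Get-Date)))
    $Global:evcount++

    # Every 15 minutes, compare the burst count against the baseline
    if ((Get-Date) -gt $Global:clock.AddMinutes(15)) {
        $day = [int](Get-Date).DayOfWeek                # 0 = Sunday
        $period = [math]::Floor((Get-Date).Hour / 8)    # three eight-hour periods
        $mean = $Global:baseline[$day][$period]

        # Crude tail-of-the-bell-curve test: far above average means trouble
        if ($Global:evcount -gt 2 * $mean) {
            New-Event -SourceIdentifier "Burst" -EventArguments $Global:evarray
        }
        $Global:clock = Get-Date
        $Global:evcount = 0
    }
}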

The FAA Dashboard

With the bursty event data held in $Global:evarray (files accessed with timestamps), I decided that it would be a great idea to display it as a spiffy dashboard. But rather than holding up the code in the action block, I “queued” up this data on its own event, which can be handled by a separate app.

Whaaat?

Let me try to explain. This is where the New-Event cmdlet comes into play at the end of the action block above. It simply allows me to asynchronously ping another app or script, thereby not tying down the action code block so it can then handle the next file access event.

I’ll present the full code for my FAA PowerShell script in the next post.  For now, I’ll just say that I set up a Wait-Event cmdlet whose sole purpose is to pick up these burst events and then funnel the output into a beautiful table, courtesy of Out-GridView.
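
The plumbing, roughly, looks like this — the “Burst” identifier matches the New-Event call in the action block sketch above:

while ($true) {
    $burst = Wait-Event -SourceIdentifier "Burst"    # block until a burst event arrives
    $burst.SourceArgs | Out-GridView -Title "FAA Dashboard"
    Remove-Event -SourceIdentifier "Burst"           # clear it so the next wait starts fresh
}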

Here’s the end result that will pop on an admin’s console:


Impressive in its own way considering the whole FAA “platform” was accomplished in about 60 lines of PS code.

We’ve covered a lot of ground, so let’s call it a day.

We’ll talk more about the full FAA script the next time, and then we’ll start looking into the awesome hidden content classification possibilities of PowerShell.


Cybercrime Laws Get Serious: Canada’s PIPEDA and CCIRC

In this series on governmental responses to cybercrime, we’re taking a look at how countries through their laws are dealing with broad attacks against IT infrastructure beyond just data theft. Ransomware and DDoS are prime examples of threats that don’t necessarily fit into the narrower definition of breaches found in PII-focused data security laws. That’s where special cybercrime rules come into play.

In the first post, we discussed how the EU’s Network and Information Security (NIS) Directive tries to close the gaps left open by the EU Data Protection Directive (DPD) and the impending General Data Protection Regulation (GDPR).

Let’s now head north to Canada.

Like the EU, Canada has a broad consumer data-oriented security law, which is known as the Personal Information Protection and Electronic Documents Act (PIPEDA).  For nitpickers, there are also overriding data laws at the provincial level — Alberta and British Columbia’s PIPA — that effectively mirror PIPEDA.

The good news about PIPEDA is that it has a strong breach notification rule wherein unauthorized data access has to be reported to the Canadian regulators. So ransomware attacks would fall under this rule. But for reporting a breach to consumers, PIPEDA uses a “risk of harm” threshold. Harm can be of a financial nature as well as anything having a significant effect on the reputation of the individual.

Anyway, PIPEDA is like the Canadian version of the current EU DPD but with a fairly practical breach reporting requirement.

Is there anything like the EU’s NIS?

Not at this point.

But in 2015, the Canadian government started funding several initiatives to help the private sector protect against cyber threats. One of the key programs that came out of this was the Canadian Cyber Incident Response Centre (CCIRC), which is similar to the EU’s CSIRTs.

CCIRC provides technical advice and support, monitors the threat environment and posts cybersecurity bulletins (see their RSS feed), and provides a forum, the Community Portal, through which companies can share information.

For now, Canada is following a US-style approach: help and support private industry in dealing with cyberattacks against important IT infrastructure, but keep reporting and other compliance matters a voluntary arrangement.

However, the public discussion continues, and with attacks like this, new approaches may be needed.

Varonis eBook: Pen Testing Active Directory Environments

You may have been following our series of posts on pen testing Active Directory environments and learned about the awesome powers of PowerView. No doubt you were wowed by our cliffhanger ending — spoiler alert — where we applied graph theory to find the derivative admin!

Or maybe you tuned in late, saw this post, and binge read the whole thing during snow storm Nemo.

In any case, we know from the many emails we received that you demanded a better ‘long-form’ content experience. After all, who’d want to read about finding hackable vulnerabilities in Active Directory while being forced to click six times to access the entire series?

We listened!

Thanks to the miracle of PDF technology, we’ve compressed the entire series into an easy-to-read, comfy ebook format. Best of all, you can scroll through the entire contents without having to touch messy hyperlinks.

Download the Varonis Pen Testing Active Directory Environments ebook, and enjoy click-free reading today!

Practical PowerShell for IT Security, Part I: File Event Monitoring

Back when I was writing the ultimate penetration testing series to help humankind deal with hackers, I came across some interesting PowerShell cmdlets and techniques. I made the remarkable discovery that PowerShell is a security tool in its own right. Sounds to me like it’s the right time to start another series of PowerShell posts.

We’ll take the view in these posts that while PowerShell won’t replace purpose-built security platforms — Varonis can breathe easier now — it will help IT staff monitor for threats and perform other security functions. And also give IT folks an appreciation of the miracles that are accomplished by real security platforms, like our own Metadata Framework. PowerShell can do interesting security work on a small scale, but it is in no way equipped to take on an entire infrastructure.

It’s a Big Event

To begin, let’s explore using PowerShell as a system monitoring tool to watch files, processes, and users.

Before you start cursing into your browsers, I’m well aware that any operating system command language can be used to monitor system-level happenings. A junior IT admin can quickly put together, say, a Linux shell script to poll a directory to see if a file has been updated or retrieve a list of running processes to learn if a non-standard process has popped up.

I ain’t talking about that.

PowerShell instead gives you direct event-driven monitoring based on the operating system’s access to low-level changes. It’s the equivalent of getting a push notification on a news web page alerting you to a breaking story rather than having to manually refresh the page.

In this scenario, you’re not in an endless PowerShell loop, burning up CPU cycles, but instead the script is only notified or activated when the event — a file is modified or a new user logs in — actually occurs. It’s a far more efficient way to do security monitoring than by brute-force polling.

Further down below, I’ll explain how this is accomplished.

But first, anyone who’s ever taken, as I have, a basic “Operating Systems for Poets” course knows that there’s a demarcation between user-level and system-level processes.

The operating system, whether Linux or Windows, does the low-level handling of device actions – anything from disk reads, to packets being received — and hides this from garden variety apps that we run from our desktop.

So if you launch your favorite word processing app and view the first page of a document, the whole operation appears as a smooth, synchronous activity. But in reality there are all kinds of time-sensitive events — disk seeks, disk blocks being read, characters sent to the screen, etc. — happening under the hood and deliberately hidden from us. Thank you, Bill Gates!

In the old days, only hard-core system engineers knew about this low-level event processing. But as we’ll soon see, PowerShell scripters can now share in the joy as well.

An OS Instrumentation Language

This brings us to Windows Management Instrumentation (WMI), which is a Microsoft effort to provide a consistent view of operating system objects.

WMI is itself part of a broader industry effort, known as Web-Based Enterprise Management (WBEM), to standardize the information pulled out of routers, switches, and storage arrays, as well as operating systems.

So what does WMI actually look and feel like?

For our purposes, it’s really a query language, like SQL, but instead of accessing rows of vanilla database columns, it presents complex OS information organized as a WMI class hierarchy. Not too surprisingly, the query language is known as, wait for it, WQL.

Windows generously provides a utility, wbemtest, that lets you play with WQL. In the graphic below, you can see the results of my querying the Win32_Process object, which holds information on the current processes running.

WQL on training wheels with wbemtest.

Effectively, it’s the programmatic equivalent of running the Windows task monitor. Impressive, no? If you want to know more about WQL, download Ravi Chaganti’s wondrous ebook on the subject.

PowerShell and the Register-WmiEvent Cmdlet

But there’s more! You can take off the training wheels provided by wbemtest, and try these queries directly in PowerShell.

PowerShell’s Get-WmiObject is the appropriate cmdlet for this task, and it lets you feed in the WQL query directly as a parameter.

The graphic below shows the first few results from running select Name, ProcessId, CommandLine from Win32_Process on my AWS test environment.
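
If you want to try it yourself, it’s a one-liner:

gwmi -Query "select Name, ProcessId, CommandLine from Win32_Process"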

gwmi is the PowerShell alias for Get-WmiObject.

The output is a bit wonky since it’s showing some hidden properties having to do with underlying class bookkeeping. The cmdlet also spews out a huge list that speeds by on my console.

For a better Win32_Process experience, I piped the output from the query into Out-GridView, a neat PS cmdlet that formats the data as a beautiful GUI-based table.
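
Something like this does the trick — the Select-Object is my addition here, trimming the grid down to the three fields we care about:

gwmi -Query "select Name, ProcessId, CommandLine from Win32_Process" |
    Select-Object Name, ProcessId, CommandLine | Out-GridView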

Not too shabby for a line of PowerShell code. But WMI does more than allow you to query these OS objects.

As I mentioned earlier, it gives you access to relevant events on the objects themselves. In WMI, these events are broadly broken into three types: creation, modification, and deletion.
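
In WQL terms, these surface as the __InstanceCreationEvent, __InstanceModificationEvent, and __InstanceDeletionEvent classes — you’ll see __InstanceModificationEvent put to work in a moment.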

Prior to PowerShell 2.0, you had to access these events in a clunky way: creating lots of different objects and then being forced to synchronously ‘hang’, so it wasn’t true asynchronous event handling. If you want to know more, read this MS TechNet post for the ugly details.

Now in PS 2.0 with the Register-WmiEvent cmdlet, we have a far prettier way to react to all kinds of events. In geek-speak, I can register a callback that fires when the event occurs.

Let’s go back to my mythical (and now famous) Acme Company, whose IT infrastructure is set up on my AWS environment.

Let’s say Bob, the sys admin, notices every so often that he’s running low on file space on the Salsa server. He suspects that Ted Bloatly, Acme’s CEO, is downloading huge files, likely audio files, into one of Bob’s directories and then moving them into Ted’s own server on Taco.

Bob wants to set a trap: when a large file is created in his home directory, he’ll be notified on his console.

To accomplish this, he’ll need to work with the CIM_DataFile class.  Instead of accessing processes, as we did above, Bob uses this class to connect with the underlying file metadata.

CIM_DataFile object can be accessed directly in PowerShell.
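
For example, a query along these lines pulls metadata for files in Bob’s directory (the property list is just for illustration):

gwmi -Query "select Name, FileSize, LastModified from CIM_DataFile where Drive = 'C:' and Path = '\\Users\\bob\\'"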

Playing the part of Bob, I created the following Register-WmiEvent script, which will notify the console when a very large file is created in the home directory.

Register-WmiEvent -Query "SELECT * FROM __InstanceModificationEvent WITHIN 5 WHERE TargetInstance isa 'CIM_DataFile' and TargetInstance.FileSize > 2000000 and TargetInstance.Path = '\\Users\\bob\\' and targetInstance.Drive = 'C:' " -sourceIdentifier "Accessor3" -Action { Write-Host "Large file" $EventArgs.NewEvent.TargetInstance.Name "was created" }

Running this script directly from the Salsa console launches the Register-WmiEvent command in the background, assigning it a job number; it then interacts with the console only when the event is triggered.

In the next post, I’ll go into more details about what I’ve done here. Effectively, I’m using WQL to query the CIM_DataFile object — particularly anything in the \Users\bob directory that’s over 2 million bytes — and setting up a notification when a new file that fits these criteria is created — that’s where __InstanceModificationEvent comes into play.

Anyway, in my Bob role I launched the script from the PS command line, and then, putting on my Ted Bloatly hat, I copied a large mp4 into Bob’s directory. You can see the results below.

We now know that Bloatly is a fan of Melody Gardot. Who would have thunk it?

You begin to see some of the exciting possibilities with PowerShell as a tool to detect threat patterns and perhaps for doing a little behavior analytics.

We’ll be exploring these ideas in the next post.

G’Day, Australia Approves Breach Notification Rule

Last month, Australia finally amended its Privacy Act to require breach notification. This legislative change had been kicking around the Federal Government for a few years. Our attorney friends at Hogan Lovells have a nice summary of the new rule.

The good news here is that Australia defines a breach broadly enough to include both unauthorized disclosure and access of personal information. Like the GDPR, Australia also considers personal data to be any information about an identified individual or that can be reasonably linked to an individual.

In real-world terms, it means that if hackers get phone numbers, bank account data, or medical records or if malware, like ransomware, merely accesses this information, then it’s considered a breach.

So far, so good.

There’s a ‘But’

However, the new Australian requirement has a harm threshold that also has to be met for the breach to be reportable. This is not in itself unusual: we’ve seen these same harm thresholds in US states’ breach notification laws, and even in the EU’s GDPR and the NIS Directive.

In the Australian case, the language used is that the breach is “likely to result in serious harm.” While not explicitly stated, the surrounding context in the amendment says the breach would have to cause serious physical, psychological, emotional, economic, reputational, or financial harm, or some other effect that a “reasonable” person would consider serious.

By the way, this is also similar to what’s in the GDPR’s preamble.

The Australian breach notification rule, though, goes further with explicit remediation exceptions that give the covered entities – private sector companies, government agencies, and health care providers – even more wiggle room. If the breached entity can show that it took action on the disclosure or access before it resulted in serious harm, then it doesn’t have to report the breach.

I suppose you could come up with scenarios where there’s been, say, limited exposure of passwords from a health insurance company’s website: the company freezes the relevant user accounts and then instructs affected individuals to contact it about resetting passwords. That might count as a successful remediation.

You can see what the Australian regulators were getting at. By the way, I don’t think this rule is as “floppy” as one publication called the notification criteria. But it does give the covered entities something of a second chance.

Anyway, if there’s a harmful breach event, then Australian organizations will have to notify the regulators as soon as possible after discovery. They’ll need to provide them with breach details, including the information accessed, as well as steps affected individuals should take.

The Australian breach notification rule is set to go into effect in a few weeks, and there will be a one-year grace period from that point. Failure to comply can result in investigations, forced remedial actions, and fines or compensations.

Verizon Data Breach Digest 2017

While we’re anxiously waiting for the next edition of the Data Breach Investigations Report (DBIR), Verizon released its annual Data Breach Digest (DBD) earlier this month. What’s the DBD? It condenses the various breach patterns discussed in the DBIR. In this year’s report, Verizon reduced the 12 patterns to a mere four generalized scenarios: the Human Element, Conduit Devices, Configuration Exploitation, and Malicious Software.

Of course, when you start abstracting and clustering information, you end up creating fuzzy caricatures. So they call what they’ve come up with “scenari-catures”.

You can’t accuse the Verizon research team of not having a wacky sense of humor.

If you play along with them, you can use the DBD to get a snapshot view of various breach scenarios. In fact, they’ve created attack-defend cards (below), each with its own threat persona.

Trade them, collect them! (Verizon Data Breach Digest 2017)

By gamifying all this complicated data, C-levels and other executives who are used to dealing in broad abstractions will find these cards very comforting. Think of it as baseball cards for the security minded.

In looking through the DBD, I’ll grudgingly admit that it has real practical value, especially for those who believe that the breach they just experienced is unique.

The DBD shows that they are not alone in being phished into accidentally making a wire transfer to the Cayman Islands, or in discovering that their systems were hacked because IT never caught up with patches even after assuring everyone that everything was secure. Verizon calls that last scenario “the Fiddling Nero”.

I told you they had a sense of humor.  Have as much fun looking through it as I did!

Cybersecurity Laws Get Serious: EU’s NIS Directive

In the IOS blog, our cyberattack focus has mostly been on hackers stealing PII and other sensitive personal data. The breach notification laws and regulations that we write about require notification only when there’s been acquisition or disclosure of PII by an unauthorized user. In plain speak, the data is stolen.

These data laws, though, fall short in two significant ways.

One, the hackers can potentially take data that’s not covered by the law: non-PII that can include corporate IP, sensitive emails from the CEO, and other valuable proprietary information. Two, the attackers are not interested in taking data but rather in disruption: for example, deploying DoS attacks or destroying important system or other non-PII data.

Under the US’s HIPAA, GLBA, and state breach laws as well as the EU’s GDPR, neither of the two cases above — and that takes in a lot of territory — would trigger a notification to the appropriate government authority.

The problem is that data privacy and security laws focus, naturally, on the data, instead of the information system as a whole. However, it doesn’t mean that governments aren’t addressing this broader category of cybersecurity.

There’s not been nearly enough attention paid to the EU’s Network and Information Systems (NIS) Directive, the US’s (for now) voluntary Critical Infrastructure Security Framework, Canada’s cybersecurity initiatives, and other laws in major EU countries.

And that’s my motivation in writing this first in a series of posts on cybersecurity rules. These are important rules that organizations should be more aware of. Sometime soon, it won’t be good enough, legally speaking, to protect only special classes of data. Companies will be required to protect entire IT systems and report to regulatory authorities when there have been actions to disrupt or disable the IT infrastructure.

Protecting the Cyber

The laws and guidelines that have evolved in this area are associated with safeguarding critical infrastructure – telecom, financial, medical, chemical, transportation. The reason is that cybercrime against the IT network of, say, Hoover Dam or the Federal Reserve should be treated differently than an attack against a dating web site.

Not that an attack against any IT system isn’t a serious and potentially costly act. But with critical infrastructure, where there isn’t an obvious financial motivation, we start entering the realm of cyber espionage or cyber disruption initiated by governments.

In other words, bank ATMs suddenly not dispensing cash, the cell phone network dropping calls, or – heaven help us! – Google replying with wrong and deceptive answers may be a sign of a cyberwar, or at least a cyber ambush.

A few months back, we wrote about an interview between Charlie Rose and John Carlin, the former Assistant Attorney General in the National Security Division of the Department of Justice. The transcript can be found here, and it’s worth going through it, or at least searching on the “attribution” keyword.

Essentially, Carlin tells us that US law enforcement is getting far better at learning who is behind cyberattacks. The Department of Justice is now publicly naming the attackers, and then prosecuting them. By the way, Carlin went after Iranian hackers accused of intrusions into banks and a small dam near New York City. Fortunately, the dam’s valves were still manually operated and not connected to the Internet.

Carlin believes there are important advantages in going public with a prosecution against named individuals. Carlin sees it as a way to deter future cyber incidents. As he puts it, “because if you are going to be able to deter, you’ve got to make sure the world knows we can figure out who did it.”

So it would make enormous sense to require companies to report cyberattacks to governmental agencies, who can then put the pieces together and formally take legal and other actions against the perps.

First Stop: EU’s NIS Directive

As with the Data Protection Directive for data privacy, which was adopted in 1995, the EU has again been way ahead of other countries in formalizing cyber reporting legislation. Its Network and Information Systems Directive was initially drafted in 2013 and was approved by the EU last July.

Since it is a directive, individual EU countries will have to transpose NIS into their own national laws. EU countries will have a two-year transition period to get their houses in order, and an additional six months to identify companies providing essential services (see Annex II).

In Article 14, operators of essential services are required to take “appropriate and proportionate technical and organisational measures to manage the risks posed to the security of network and information systems.”  They are also required to report, without undue delay, significant incidents to a Computer Security Incident Response Team or CSIRT.

There’s separate and similar language in Article 16 covering digital service providers, which is the EU’s way of saying ecommerce, cloud computing, and search services.

CSIRTs are at the center of the NIS Directive. Besides collecting incident data, CSIRTs are also responsible for monitoring and analyzing threat activity at a national level, issuing alerts and warnings, and sharing their information and threat awareness with other CSIRTs.  (In the US, the closest equivalent is the Department of Homeland Security’s NCCIC.)

What is considered an incident in the NIS Directive?

It is any “event having an actual adverse effect on the security of network and information systems.”  Companies designated as providing essential services are given some wiggle room in what they have to report to a CSIRT. For an incident to be significant, and thus reportable, the company has to consider the number of users affected, the duration, and the geographical scope.

Essential digital service operators must also take into account the effect of their disruption on economic and “societal activities”.

Does this mean that a future attack against, say, Facebook in the EU, in which Messenger or status posting activity is disrupted, would have to be reported?

To this non-attorney blogger, it appears that Facebooking could be considered an important societal activity.

Yeah, there are vagaries in the NIS Directive, and it will require more guidance from the regulators.

In my next post in this series, I’ll take a closer look at cybersecurity rules due north of us for our Canadian neighbor.

Interview With Medical Privacy Author Adam Tanner [TRANSCRIPT]

Adam Tanner, author of Our Bodies, Our Data, has shed light on the dark market in medical data. In my interview with Adam, I learned that our medical records, principally drug transactions, are sold to medical data brokers who then resell this information to drug companies. How can this be legal under HIPAA without patient consent?

Adam explains that if the data is anonymized then it no longer falls under HIPAA’s rules. However, the prescribing doctor’s name is still left on the record that is sold to brokers.

As readers of this blog know, bits of information related to location, like the doctor’s name, don’t truly anonymize a record and can act as quasi-identifiers when associated with other data.

My paranoia was certainly in the red zone during this interview, and we explored what would happen if hackers or others could connect the dots. Some of the possibilities were a little unsettling.

Adam believes that by writing this book, he can raise awareness about this hidden medical data market. He also believes that consumers should be given a choice — since it’s really their data — about whether to release the “anonymized” HIPAA records to third parties.


Inside Out Security: Today, I’d like to welcome Adam Tanner. Adam is a writer-in-residence at Harvard University’s Institute for Quantitative Social Science. He’s written extensively on data privacy. He’s the author of What Stays In Vegas: The World of Personal Data and the End of Privacy As We Know It. His articles on data privacy have appeared in Scientific American, Forbes, Fortune, and Slate. And he has a new book out, titled “Our Bodies, Our Data,” which focuses on the hidden market in medical data. Welcome, Adam.

Adam Tanner: Well, I’m glad to be with you.

IOS: We’ve also been writing about medical data privacy for our Inside Out Security blog. And we’re familiar with how, for example, hospital discharge records can be legally sold to the private sector.

But in your new book, and this is a bit of a shock to me, you describe how pharmacies and others sell prescription drug records to data brokers. Can you tell us more about the story you’ve uncovered?

AT: Basically, throughout your journey as a patient into the healthcare system, information about you is sold. It has nothing to do with your direct treatment. It has to do with commercial businesses wanting to gain insight about you and your doctor, largely, for sales and marketing.

So, take the first step. You go to your doctor’s office. The door is shut. You tell your doctor your intimate medical problems. The information that is entered into the doctor’s electronic health system may be sold, commercially, as may the prescription that you pick up at the pharmacy or the blood tests that you take or the urine tests at the testing lab. The insurance company that pays for all of this or subsidizes part of this, may also sell the information.

That information about you is anonymized.  That means that your information contains your medical condition, your date of birth, your doctor’s name, your gender, all or part of your postal zip code, but it doesn’t have your name on it.

All of that trade is allowed, under U.S. rules.

IOS: You mean under HIPAA?

AT: That’s right. Now this may be surprising to many people who would ask this question, “How can this be legal under current rules?” Well, HIPAA says that if you take out the name and anonymize according to certain standards, it’s no longer your data. You will no longer have any say over what happens to it. You don’t have to consent to the trade of it. Outsiders can do whatever they want with that.

I think a lot of people would be surprised to learn that. Very few patients know about it. Even doctors and pharmacists and others who are in the system don’t know that there’s this multi-billion-dollar trade.

IOS: Right … we’ve written about the de-identification process, which seems like the right thing to do, in a way, because you’re removing all the identifiers — zip code information, other geo information. It seems that for research purposes that would be okay. Do you agree with that, or not?

AT: So, these commercial companies, and some of the names may be well-known to us, companies such as IBM Watson Health, GE, LexisNexis, and the largest of them all may not be well-known to the general public, which is Quintiles and IMS. These companies have dossiers on hundreds of millions of patients worldwide. That means that they have medical information about you that extends over time, different procedures you’ve had done, different visits, different tests and so on, put together in a file that goes back for years.

Now, when you have that much information, even if it only has your date of birth, your doctor’s name, your zip code, but not your name, not your Social Security number, not things like that, it’s increasingly possible to identify people from that. Let me give you an example.

I’m talking to you now from Fairbanks, Alaska, where I’m teaching for a year at the university here. I lived, before that, in Boston, Massachusetts, and before that, in Belgrade, Serbia. I may be the only man of my age who meets that specific profile!

So, if you knew those three pieces of information about me and had medical information from those years, I might be identifiable, even in a haystack of millions of different other people.

IOS: Yeah … we have written about that as well in the blog. We call these quasi-identifiers. They’re not the traditional kind of identifiers, but they’re other bits of information, as you pointed out, that can be used to re-identify someone. Usually it narrows things to a small subset, but not always. And it would seem this information should be protected as well in some way. So, do you think the laws are keeping up with this?

AT: HIPAA was written 20 years ago, and the HIPAA rules say that you can freely trade our patient information if it is anonymized to a certain standard. Now, the technology has gone forward, dramatically, since then.

So, the ability to store things very cheaply and the ability to scroll through them is much more sophisticated today than it was when those rules came into effect. For that reason, I think it’s a worthwhile time to have a discussion now. Is this the best system? Is this what we want to do?

Interestingly, the system of the free trade in our patient information has evolved because commercial companies have decided this is what they’d want to do. There has not been an open public discussion of what is best for society, what is best for patients, what is best for science, and so on. This is just a system that evolved.

I’m saying, in writing this book, “Our Bodies, Our Data,” that it is maybe worthwhile that we re-examine where we’re at right now and say, “Do we want to have better privacy protection? Do we want to have a different system of contributing to science than we do now?”

IOS: I guess what also surprised me was that you say that pharmacies, for example, can sell the drug records, as long as it’s anonymized. You would think that the drug companies would be against that. It’s sort of leaking out their information to their competitors, in some way. In other words, information goes to the data brokers and then gets resold to the drug companies.

AT: Well, but you have to understand that everybody in what I call this big-data health bazaar is making money off of it. So, a large pharmacy chain, such as CVS or Walgreen’s, they may make tens of millions of dollars in selling copies of these prescriptions to data miners.

Drug companies are particularly interested in buying this information because this information is doctor-identified. It says that Dr. Jones in Pittsburgh prescribes drug A almost all the time, rather than drug B. So, the company that makes drug B may send a sales rep to the doctor and say, “Doctor, here’s some free samples. Let’s go out to lunch. Let me tell you about how great drug B is.”

So, this is because there exists these doctor profiles on individual doctors across the country, that are used for sales and marketing, for very sophisticated kind of targeting.

IOS: So, in an indirect way, the drug companies can learn about the other drug companies’ sales patterns, and then say, “Oh, let me go in there and see if I can take that business away.” Is that sort of the way it’s working?

AT: In essence, yes. The origins of this trade date back to the 1950s. In its first form, these data companies, such as IMS Health, what they did was just telling companies what drugs sold in what market. Company A has 87% of the market. Their rival has 13% of the market. When medical information began to become digitized in the 1960s and ’70s and evermore since then, there was a new opportunity to trade this data.

So, all of a sudden, insurance companies and middle-men connecting up these companies, and electronic health records providers and others, had a product that they could sell easily, without a lot of work, and data miners were eager to buy this and produce new products for mostly the pharmaceutical companies, but there are other buyers as well.

IOS: I wanted to get back to another point you mentioned: even with anonymized medical records, given all the other information that’s out there, you can re-identify people, or at least narrow down the pool of people the data could apply to.

What’s even more frightening now is that hackers have been stealing health records like crazy over the last couple of years. So, there’s a whole dark market of hacked medical data that, I guess, if they got into this IMS database, they would have the keys to the kingdom, in a way.

Am I being too paranoid here?

AT: Well, no, you correctly point out that there has been a sharp upswing in hacking into medical records. That can happen into a small, individual practice, or it could happen into a large insurance company.

And in fact, the largest hacking attack of medical records in the last couple of years has been into Anthem Health, which is the Blue Cross Blue Shield company. Almost 80 million records were hacked in that.

So even people that did… I was hacked in that, even though I was not, at the time, a customer of them or had never been a customer of them, but they… One company that I dealt with outsourced to someone else, who outsourced to them. So, all of a sudden, this information can be in circulation.

There’s a government website people can look at, and you’ll see, every day or two, there are new hackings. Sometimes it involves a few thousand names and an obscure local clinic. Sometimes it’ll be a major company, such as a lab test company, and millions of names could be impacted.

So, this is something definitely to be concerned about. Yes, you could take these hacked records and match them with anonymized records to try to figure out who people are, but I should point out that there is no recorded instance of hackers getting into these anonymized dossiers by the big data miners.

IOS: Right. We hope so!

AT: I say recorded or acknowledged instance.

IOS: Right. Right. But there’s now been sort of an awareness of cyber gangs and cyber terrorism and then the use of, let’s say, records for blackmail purposes.

I don’t want to get too paranoid here, but it seems like there’s just a potential for just a lot of bad possibilities. Almost frightening possibilities with all this potential data out there.

AT: Well, we have heard recently about rumors of an alleged dossier involving Donald Trump and Russia.

IOS: Exactly.

AT: And information that… If you think about what kind of information could be most damaging or harmful to someone, it could be financial information. It could be sexual information, or it could be health information.

IOS: Yeah, or someone using… or has a prescription to a certain drug of some sort. I’m not suggesting anything, but that… All that information together could have sort of lots of implications, just, you know, political implications, let’s say.

AT: I mean if you know that someone takes a drug that’s commonly used for a mental health problem, that could be information used against someone. It could be used to deny them life insurance. It could be used to deny them a promotion or a job offer. It could be used by rivals in different ways to humiliate people. So, this medical information is quite powerful.

One person who has experienced this and spoken publicly about it is the actor, Charlie Sheen. He tested positive for HIV. Others somehow learned of it and blackmailed him. He said he paid millions of dollars to keep that information from going public, before he decided finally that he would stop paying it, and he’d have to tell the world about his medical condition.

IOS: Actually I was not aware of the payments he was making. That’s just astonishing. So, is there any hope here? Do you see some remedies, through maybe regulations or enforcement of existing laws? Or perhaps we need new laws?

AT: As I mentioned, the current rules, HIPAA, allows for the free trade of your data if it’s anonymized. Now, I think, given the growth of sophistication in computing, that we should change what the rule is and to define our medical data as any medical information about us, whether or not it’s anonymized.

So, if a doctor is writing in the electronic health record, you should have a say as to whether or not that information is going to be used elsewhere.

A little side point I should mention. There are a lot of good scientists and researchers who want data to see if they can gain insights into disease and new medications. I think people should have the choice whether or not they want to contribute to those efforts.

So, you know, there’s a lot of good efforts. There’s a government effort under way now to gather a million DNA samples from people to make available to science. So, if people want to participate in that, and they think that’s good work, they should definitely be encouraged to do so, but I think they should have the say and decide for themselves.

And so far, we don’t really have that system. So, by redefining what patient data is, to say, “Medical information about a patient, whether or not it’s anonymized,” I think that would give us the power to do that.

IOS: So effectively, you’re saying the patient owns the data and would have to give consent for it to be used. Is that about right?

AT: I think so. But on the other hand, as I mentioned, I’ve written this book to encourage this discussion. The problem we have right now is that the trade is so opaque.

Companies are extremely reluctant to talk about this commercial trade. So, they do occasionally say that, “Oh, this is great for science and for medicine, and all of these great things will happen.” Well, if that is so fantastic, let’s have this discussion where everyone will say, “All right. Here’s how we use the data. Here’s how we share it. Here’s how we sell it.”

Then let people in on it and decide whether they really want that system or not. But it’s hard to have that intelligent policy discussion, what’s best for the whole country, if industry has decided for itself how to proceed without involving others.

IOS: Well, I’m so glad you’ve written this book. This will, I’m hoping, promote the discussion that you’re talking about. Well, this has been great. I want to thank you for the interview. So, by the way, where can our listeners reach out to you on social media? Do you have a handle on Twitter? Or Facebook?

AT: Well, I’m @datacurtain  and I have a webpage, which is http://adamtanner.news/

IOS: Wonderful. Thank you very much, Adam.

Binge Read Our Pen Testing Active Directory Series

With winter storm Niko now on its extended road trip, it’s not too late, at least here on the East Coast, to make a few snow day plans. Sure, you can spend part of Thursday catching up on Black Mirror while scarfing down this slow cooker pork BBQ pizza. However, I have a healthier suggestion.

Why not binge on our amazing Pen Testing Active Directory Environments blog posts?

You’ve read parts of it, or — spoiler alert — perhaps heard about the exciting conclusion involving a depth-first search of the derivative admin graph. But now’s your chance to totally immerse yourself and come back to work better informed about the Active Directory dark knowledge that hackers have known about for years.

And may we recommend eating these healthy soy-roasted kale chips while clicking below?

Episode 1: Crackmapexec and PowerView

Episode 2: Getting Stuff Done With PowerView

Episode 3: Chasing Power Users

Episode 4: Graph Fun

Episode 5: Admins and Graphs

Episode 6: The Final Case

Update: New York State Finalizes Cyber Rules for Financial Sector

When last we left New York State’s innovative cybercrime regulations, they were in a 45-day public commenting period. Let’s get caught up. The comments are now in. The rules were tweaked based on stakeholders’ feedback, and the regulations will begin a grace period starting March 1, 2017.

To save you the time, I did the heavy lifting and looked into the changes made by the regulators at the New York State Department of Financial Services (NYSDFS).

There are a few interesting ones to talk about. But before we get into them, let’s consider how important New York State — really New York City — is as a financial center.

Made in New York: Money!

To get a sense of what’s encompassed in the NYSDFS’s portfolio, I took a quick dip into their annual report.

For the insurance sector, they supervise almost 900 insurers with assets of $1.4 trillion and receive premiums of $361 billion. Under wholesale domestic and foreign banks — remember New York has a global reach — they monitor 144 institutions with assets of $2.2 trillion. And I won’t even get into community and regional banks, mortgage brokers, and pension funds.

In a way, the NYSDFS has the regulatory power usually associated with a small country’s government. And therefore the rules that New York makes regarding data security have an outsized influence.

One Rule Remains the Same

Back to the rules. First, let’s look at one key part that was not changed.

NYSDFS received objections from the commenters on their definition of cyber events. This is at the center of the New York law—detecting, responding, and recovering from these events—so it’s important to take a closer look at its meaning.

Under the rules, a cybersecurity event is “any act or attempt, successful or unsuccessful, to gain unauthorized access to, disrupt or misuse an Information System or information …”

Some of the commenters didn’t like the inclusion of “attempt” and “unsuccessful”. But the New York regulators held firm and kept the definition as is.

A cybersecurity event is a broader term than a data breach. For a data breach, there usually has to be data access and exposure or exfiltration. In New York State, though, access alone or an IT disruption, even when merely attempted (or executed but not successful), is considered an event.

As we’ve pointed out in our ransomware and the law cheat sheet, very few states in the US would classify a ransomware attack as a breach under their breach laws.

But in New York State, if ransomware (or a remote access trojan or other malware) was loaded on the victim’s server and perhaps abandoned or stopped by IT in mid-hack, it would indeed be a cybersecurity event.

Notification Worthy

This leads naturally to another rule, notification of a cybersecurity event to the New York State regulators, where the language was tightened.

The 72-hour time frame for reporting remains, but the clock starts ticking after a determination by the financial company that an event has occurred.

The financial companies were also given more wiggle room in the types of events that require notification: essentially the malware would need to “have a reasonable likelihood of materially harming any material part of the normal operation…”

That’s a mouthful.

In short: financial companies will notify the regulators at NYSDFS when the malware could seriously affect an operation that’s important to the company.

For example, malware that infects the digital console on the bank’s espresso machine is not notification worthy. But a key logger that lands in a bank’s foreign exchange area and is scooping up user passwords is very worthy.

The NYSDFS’s updated notification rule language, by the way, puts it more in line with other data security laws, including the EU’s General Data Protection Regulation (GDPR).

So would you have to notify the New York State regulator when malware infects a server but hasn’t necessarily completed its evil mission?

Getting back to the language of “attempt” and “unsuccessful” found in the definition of cybersecurity events, it would appear that you would but only if the malware lands on a server that’s important to the company’s operations — either because of the data it contains or its function.

State of Grace

The original regulation also said you had to appoint a Chief Information Security Officer (CISO) who’d be responsible for seeing that this cybersecurity regulation is carried out. Another important task of the CISO is to report annually to the board on the state of the company’s cybersecurity program.

With pushback from industry, this language was changed so that you can designate an existing employee as a CISO — likely a CIO or other C-level.

One final point to make is that the grace period for compliance has been changed. For most of the rules, it’s still 180 days.

But for certain requirements – multifactor authentication and penetration testing — the grace period has been extended to 12 months, and for a few others – audit trails, data retention, and the CISO report to the board — it’s been pushed out to 18 months.

For more details on the changes, check this legal note from our attorney friends at Hogan Lovells.

[Podcast] Adam Tanner on the Dark Market in Medical Data, Part II

More Adam Tanner! In this second part of my interview with the author of Our Bodies, Our Data, we start exploring the implications of having massive amounts of online medical data. There’s much to worry about.

With hackers already good at stealing health insurance records, is it only a matter of time before they get into the databases of the drug prescription data brokers?

My data privacy paranoia about all this came out in full force during the interview. Thankfully, Adam was able to calm me down, but there’s still potential for frightening possibilities, including political blackmail.

Is the answer more regulations for drug data? Listen to the rest of the interview below to find out, and follow Adam on Twitter, @datacurtain, to keep up to date.