All posts by Andy Green

Please Disable UPnP on Your Router. Now!

Remember the first large-scale Mirai attack late last year? That was the one directed at IP cameras, and it took advantage of router configuration settings that many consumers never bother changing. The main culprit, though, was Universal Plug and Play, or UPnP, which is enabled as a default setting on zillions of routers worldwide.

Through its port-forwarding feature, UPnP is a convenient way to allow gadgets, such as the aforementioned cameras (or WiFi-connected coffee pots), to be reachable on the other side of the firewall through a public port. UPnP automatically creates this public port when these gadgets are installed.
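Under the hood, the gadget's request is just an unauthenticated SOAP call to the router's WANIPConnection service. Here's a minimal Python sketch of what an AddPortMapping request body looks like; the port numbers and internal address are invented for illustration:

```python
# Sketch of the SOAP body a UPnP client sends to open a public port via
# the IGD WANIPConnection service's AddPortMapping action. The ports and
# internal IP below are illustrative placeholders, not real settings.

def build_add_port_mapping(external_port, internal_port, internal_ip, proto="TCP"):
    """Return the SOAP envelope for a UPnP AddPortMapping request."""
    return f"""<?xml version="1.0"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
            s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:AddPortMapping xmlns:u="urn:schemas-upnp-org:service:WANIPConnection:1">
      <NewRemoteHost></NewRemoteHost>
      <NewExternalPort>{external_port}</NewExternalPort>
      <NewProtocol>{proto}</NewProtocol>
      <NewInternalPort>{internal_port}</NewInternalPort>
      <NewInternalClient>{internal_ip}</NewInternalClient>
      <NewEnabled>1</NewEnabled>
      <NewPortMappingDescription>camera</NewPortMappingDescription>
      <NewLeaseDuration>0</NewLeaseDuration>
    </u:AddPortMapping>
  </s:Body>
</s:Envelope>"""

# Map public port 8080 to port 80 on a (made-up) camera at 192.168.1.50
body = build_add_port_mapping(8080, 80, "192.168.1.50")
```

Note what's missing: any password or confirmation step. Any process on the LAN can send this to the router.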

Command and Control Meets UPnP

However, this convenience factor provides an opening for hackers. In the case of Mirai, it allowed them to scan for these ports, and then hack into the device at the other end.

Hackers have now found an even more diabolical use of UPnP with the banking trojan Pinkslipbot, also known as QakBot or QBot.

Around since 2000, QakBot infects computers, installs a key logger, and then sends banking credentials to remote Command and Control (C2) servers.

Remember C2?

When we wrote our first series on pen testing, we described how remote access trojans (RATs) residing on the victims’ computers are sent commands remotely from the hackers’ servers over an HTTP or HTTPS connection.

This is a stealthy approach in post-exploitation because it makes it very difficult for IT security to spot any abnormalities. After all, to an admin or technician watching the network it would just appear that the user is web browsing — even though the RAT is receiving embedded commands to log keystrokes or search for PII, and exfiltrating passwords, credit card numbers, etc. to the C2s.

The right defense against this is to block the domains of known C2 hideouts. Of course, it becomes a cat-and-mouse game with the hackers as they find new dark spots on the Web to set up their servers as old ones are filtered out by corporate security teams.

And that’s where Pinkslipbot has added a significant innovation. It has introduced, for lack of a better term, middle-malware, which infects computers, but not to steal user credentials. Instead, the middle-malware installs a proxy C2 server that relays HTTPS traffic to the real C2 servers.

Middle-malware: C2 servers can be anywhere!

The Pinkslipbot infrastructure therefore doesn’t have a fixed domain for its C2 servers. In effect, the entire Web is its playing field! That makes it almost impossible to maintain a list of known domains or addresses to filter out.

What does UPnP have to do with Pinkslipbot?

When Pinkslipbot takes over a consumer laptop, it checks to see if UPnP is enabled. If it is, the Pinkslipbot middle-malware issues a UPnP request to the router to open up a public port. This allows Pinkslipbot to then act as a relay between the computers infected with the RATs and the hackers’ C2 servers (see the diagram).

It’s fiendish, and I begrudgingly give these guys a (black) hat tip.

One way for all of us to make these kinds of attacks more difficult to pull off is to simply disable the UPnP or port-forwarding feature on our home routers. You probably don’t need it!
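If you want to check whether your router currently answers UPnP requests, you can probe it from a machine on your LAN with the same SSDP discovery message that UPnP clients (and Mirai-style scanners) broadcast. A rough Python sketch, intended only for your own network:

```python
import socket

# SSDP M-SEARCH probe: the standard discovery message used to find
# Internet Gateway Devices with UPnP enabled. If your router replies
# to this on your LAN, UPnP is on and worth disabling.

MSEARCH = (
    "M-SEARCH * HTTP/1.1\r\n"
    "HOST: 239.255.255.250:1900\r\n"
    'MAN: "ssdp:discover"\r\n'
    "MX: 2\r\n"
    "ST: urn:schemas-upnp-org:device:InternetGatewayDevice:1\r\n"
    "\r\n"
).encode()

def router_answers_ssdp(timeout=2.0):
    """Return True if any device on the LAN responds to SSDP discovery."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(MSEARCH, ("239.255.255.250", 1900))
        sock.recvfrom(1024)   # any reply at all means UPnP is listening
        return True
    except (socket.timeout, OSError):
        return False
    finally:
        sock.close()
```

If `router_answers_ssdp()` comes back True at home, head for the router's admin page and flip UPnP off.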

By the way, you can see this done here for my own home Linksys router. And while you’re carrying out the reconfiguration, take the time to come up with a better admin password.

Do this now!

Security Stealth Wars: IT Is Not Winning (With Perimeter Defenses)

Phishing, FUD malware, malware-free hacking with PowerShell, and now hidden C2 servers. The hackers are gaining the upper hand in post-exploitation: their activities are almost impossible to block or spot with traditional perimeter security techniques and malware scanning.

What to do?

The first part is really psychological: you have to be willing to accept that the attackers will get in. I realize that it means admitting defeat, which can be painful for IT and tech people. But now you’re liberated from having to defend an approach that no longer makes sense!

Once you’ve passed over this mental barrier, the next part follows: you need a secondary defense for detecting hacking that’s not reliant on malware signatures or network monitoring.

I think you know where this is going. Defensive software that’s based on — wait for it — User Behavior Analytics (UBA) can spot the one part of the attack that can’t be hidden: searching for PII in the file system, accessing critical folders and files, and copying the content.
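To make the UBA idea concrete, here's a deliberately simplified Python sketch: baseline a user's normal daily file-access volume from their own history, then flag a day that blows far past it. Real UBA products model many more signals; the numbers below are invented.

```python
from statistics import mean, stdev

# Toy behavioral baseline: flag a day whose file-access count is far
# outside this user's own history. All figures are made up.

def is_anomalous(history, today, sigmas=3.0):
    """Flag `today` if it exceeds the user's mean by `sigmas` std devs."""
    mu, sd = mean(history), stdev(history)
    return today > mu + sigmas * max(sd, 1.0)  # floor sd to dampen noise

baseline = [40, 55, 38, 60, 47, 52, 45]   # a week of normal activity
assert not is_anomalous(baseline, 70)      # busy day, but plausible
assert is_anomalous(baseline, 5000)        # mass PII harvesting stands out
```

The point isn't the statistics, which real products do far better, but where the measurement happens: at the file system, the one place the attacker has to show up.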

In effect, you grant the hackers a small part of the cyber battlefield, only to defeat them later on.

I Click Therefore I Exist: Disturbing Research On Phishing

Homo sapiens click on links in clunky, non-personalized phish mails. They just do. We’ve seen research suggesting a small percentage are simply wired to click during their online interactions. Until recently, the “why” behind most people’s clicking behaviors remained something of a mystery. We now have more of an answer to this question based on findings from German academics. Warning:  IT security people will not find their conclusions very comforting.

Attention Marketers: High Click-Through Rates!

According to research by Zinaida Benenson and her colleagues, the reasons for clicking on phish bait are based on an overall curiosity factor, and then secondarily, on content that connects in some way to the victim.

The research group used the following email template in the experiment, and sent it to over 1200 students at two different universities:


The New Year’s Eve party was awesome! Here are the pictures:

http://<IP address>/photocloud/page.php?h=<participant ID>

But please don’t share them with people who have not been there!

See you next time!

<sender’s first name>

The message, by the way, was blasted out during the first week of January.
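As a side note, the per-participant ID in the link is what let the researchers measure clicks precisely: each recipient gets a unique token, and the web server simply logs which tokens come back. A toy Python sketch of those mechanics (the names and host are made up):

```python
import uuid

# Toy model of per-recipient tracking links: a unique token per person,
# so every click identifies exactly who took the bait. The participant
# names and the host IP are invented for illustration.

participants = ["alice", "bob", "carol", "dave"]
links = {
    p: f"http://203.0.113.7/photocloud/page.php?h={uuid.uuid4().hex}"
    for p in participants
}
# Reverse index: token -> person
id_to_person = {url.split("h=")[1]: p for p, url in links.items()}

# The web server just logs the h= parameter of each hit.
clicked_ids = [links["alice"].split("h=")[1]]   # suppose only Alice clicked
clickers = {id_to_person[i] for i in clicked_ids}
ctr = len(clickers) / len(participants)         # 1 of 4 recipients
```

Scale the same bookkeeping up to 1,200 students and you get the study's click-through figure.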

Anybody want to guess the overall click-through rate for this spammy message?

A blazing 25%.

Marketers everywhere are officially jealous of this awesome metric.

Anyway, the German researchers followed up with survey questions to find the motivations behind these click-aholics.

Of those who responded to the survey, 34% said they were curious about the party pictures linked to in the mail, another 27% said the message fit the time of year, and another 16% thought they knew the sender based on just the first name.

To paraphrase one of those cat memes, “Humans is EZ to fool!”

The clever German researchers conducted a classic cover-story design in their experiment. They enlisted students to ostensibly participate in a study on Internet habits and offered online shopping vouchers as an incentive. Nothing was mentioned about phish mails being sent to them.

And yes, after the real study on phishing was completed, the student subjects were told the reason for the research, the results, and given a good stern warning about not clicking on silly phish mail links.

Benenson also gave a talk on her research at last year’s Black Hat. It’s well worth your time.

Phishing: The Ugly Truth

At the IOS blog, we’ve also been writing about phishing and have been following the relevant research. In short: we can’t say we’re surprised by the findings of the German team, especially as it relates to clicking on links to pictures.

The German study seems to confirm our own intuitions: people at corporate jobs are bored and find cheap thrills by gazing into the private lives of strangers.

Ok, you can’t change human nature, etc.

But there’s another, more disturbing conclusion related to the general context of the message. The study strongly suggests that the more you know and can say about the target in the phish mail, the more likely it is that they will click. In fact, in an earlier study by Benenson, a 56% click rate was achieved when the phish mail recipient was addressed by name.

Here’s what they had to say about their latest research:

 … fitting the content and the context of the message to the current life situation of a person plays an important role. Many people did not click because they learned to avoid messages from unknown senders, or with an unexpected content  … For some participants, however, the same heuristic (‘does this message fit my current situation?’) led to the clicks, as they thought that the message might be from a person from their New Year’s Eve party, or that they might know the sender.


Implications for Data Security

At Varonis, we’ve been preaching the message that you can’t expect perimeter security to be your last line of defense. Phishing, of course, is one of the major reasons why hackers find it so easy to get inside the corporate intranet.

But hackers are getting smarter all the time, collecting more details about their phishing targets to make the lure more attractive. The German research shows that even poorly personalized content is very effective.

So imagine what happens if they glean actual personal preferences and other details from observing victims on social media sites or, perhaps, through a previous hack of another web site you engage with.

Maybe a smart hacker who’s been stalking me might send this fiendish email to my Varonis account:

Hey Andy,

Sorry I didn’t see you at Black Hat this year! I ran into your colleague Cindy Ng, and she said you’d really be interested in research I’m doing on phishing and user behavior analytics. Click on this link and let me know what you think. Hope things are going well at Varonis!


Bob Simpson, CEO of Phishing Analytics

Hmmm, you know I could fall for something like this the next time I’m in a vulnerable state.

The takeaway lesson for IT is that they need a secondary security defense, one that monitors hackers when they’re behind the firewall and can detect unusual behaviors by analyzing file system activity.

Want to find out more? Click here!

Did you click? Good, that link doesn’t point to a Varonis domain!

Another conclusion of the study is that your organization should also undertake security training, especially for non-tech-savvy staff.

We approve as well: it’s a worthwhile investment!

GDPR: Troy Hunt Explains it All in Video Course


You’re a high-level IT security person, who’s done the grunt work of keeping your company compliant with PCI DSS, ISO 27001, and a few other security abbreviations, and one day you’re in a meeting with the CEO, CSO, and CIO. When the subject of General Data Protection Regulation or GDPR comes up, all the Cs agree that there are some difficulties, but everything will be worked out.

You are too afraid to ask, “What is the GDPR?”

Too Busy for GDPR

We’ve all been there, of course. Your plate has been full over the last few weeks and months: hunting down vulnerabilities, hardening defenses against ransomware and other malware, and upgrading your security, along with all the usual work involved in keeping the IT systems humming along.

So it’s understandable that the General Data Protection Regulation may have flown under your radar.

However, there’s no need to panic.

The GDPR shares many similarities with other security standards and regulations, so it’s just a question of learning some basic background, the key requirements of the new EU law, and a few gotchas, preferably explained by an instructor with a knack for connecting with IT people.

Hunt on GDPR

And that’s why we engaged Troy Hunt to develop a 7-part video course on the GDPR. Troy is a web security guru, Australian Microsoft Regional Director, and author whose security writing has appeared in Forbes, Time Magazine, and Mashable. And he’s no stranger to this blog, either!

Let’s get back to you and other busy IT security folks who need to get up to speed quickly. With just an hour of your time, Troy will cover the basic vocabulary and definitions (“controller”, “processor”, “personal data”), the key concept underlying the GDPR (personal data is effectively owned by the consumer), and what you’ll need to do to keep your organization compliant (effectively, minimize and monitor this personal data).

By the way, Troy also explains how US companies, even those without EU offices, can get snagged by GDPR’s territorial scope rule— Article 3 to be exact. US-based e-commerce companies: you’ve been warned!

While Troy doesn’t expect you to be an attorney, he analyzes and breaks down a few of the more critical requirements and the penalties for not complying, particularly on breach reporting, so that you’ll be able to keep up with some of the legalese when it arises at your next GDPR meeting.

And I think you’ll see by the end of the course that while there may be some new aspects to this EU law, as Troy notes, the GDPR really legislates IT common sense.

What are you waiting for?  Register and get GDPR-aware starting today!

[Infographic] From Bad Report Cards to Insider Data Theft


We’ve all read the news recently about employees and contractors selling internal customer data records or stealing corporate intellectual property. But insiders breaking bad have been with us as long as we’ve had computers and disgruntled humans who understand IT systems.

You may not know it, but academic researchers have also been studying the psychological insides of insiders.

Carnegie Mellon’s Computer Emergency Response Team (CERT) has an entire group devoted to insider threats. Based on looking at real cases, these academics have come up with, to our minds, a very convincing model of what drives insiders.

In short, it’s their belief that the root causes lie beyond just a raise or promotion denied, but rather in earlier traumas, likely starting in childhood.

For instance, it is thought that children who, during a famous psych experiment, immediately ate the marshmallow (instead of waiting for two marshmallows) had issues with parental and other authority figures that would later show up through impulsive behaviors. Or perhaps for a certain kind of child, not getting into the genius program for advanced 4-year-olds can have devastating consequences later!

We’ve turned the complex CERT multi-stage insider model into this more accessible infographic. Check out the original CERT paper (or read our incredibly informative series) to learn more.


Disabling PowerShell and Other Malware Nuisances, Part III


This article is part of the series "Disabling PowerShell and Other Malware Nuisances".

One of the advantages of AppLocker over Software Restriction Policies is that it can selectively enable PowerShell for Active Directory groups. I showed how this can be done in the previous post. The goal is to limit as much as possible the ability of hackers to launch PowerShell malware, but still give legitimate users access.

It’s a balancing act of course. And as I suggested, you can accomplish the same thing by using a combination of Software Restriction Policies (SRP) and ACLs, but AppLocker does this more efficiently in one swoop.

Let’s Get Real About Whitelisting

As a practical matter, whitelisting is just plain hard to do, and I’m guessing most IT security staff won’t go down this route. However, AppLocker does provide an ‘audit mode’ that makes whitelisting slightly less painful than SRP.

AppLocker can be configured to log events that show up directly in the Windows Event Viewer. For whatever reason, I couldn’t get this to work in my AWS environment. But this would be a little less of a headache than setting up a Registry entry and dealing with a raw file — the SRP approach.

In any case, I think most of you will try what I did. I took the default rules provided by AppLocker to enable the standard Windows system and program folders, added an exception for PowerShell, and then created a special rule to allow only members of a select AD group — Acme-VIPs in my case — to access PowerShell.

AppLocker: Accept the default path rules, and then selectively enable PowerShell.

Effectively, I whitelisted all-the-usual Windows suspects, and then partially blacklisted PowerShell.

PowerShell for Lara, who’s in the Acme-VIPs group, but no PowerShell for Bob!

And Acme Was Hacked

No, the hacking of my Acme domain on AWS is not going to make any headlines. But I thought as a side note it’s worth mentioning.

I confess: I was a little lax with my Amazon firewall port setting, and some malware slipped in.

After some investigation, I discovered a suspicious executable in the \Windows\Prefetch directory. It was running as a legit-looking service, and it opened a zillion UDP ports.

It took me an afternoon or two to figure all this out. My tip-offs were my server becoming somewhat sluggish, and then an Amazon email politely suggesting that my EC2 instance may have been turned into a bot used for a DDoS attack.

This does relate to SRP and AppLocker!

Sure, had I activated these protection services earlier, Windows would have prevented the malware from launching, since it was living in a non-standard location.

Lesson learned.

And I hang my head in shame if I caused some DDoS disturbance for someone, somewhere.

Final Thoughts

Both SRP and AppLocker also have rules that take into account file hashes and digital certificates. Either will provide an additional level of assurance that the executables really are what they claim to be, and not the work of evil hackers.

AppLocker is more granular than SRP when it comes to certificates, and it allows you to filter on a specific app from a publisher and a version number as well. You can learn more about this here.

Bottom line: whitelisting is not an achievable goal for the average IT mortal. For the matter at hand, disabling PowerShell, my approach of using default paths provided by either SRP or AppLocker, and then selectively allowing PowerShell for certain groups — easier with AppLocker — would be far more realistic.

Disabling PowerShell and Other Malware Nuisances, Part II


This article is part of the series "Disabling PowerShell and Other Malware Nuisances".

Whitelisting apps is nobody’s idea of fun. You need to start with a blank slate, and then carefully add back the apps you know to be essential and non-threatening. That’s the idea behind what we started to do with Software Restriction Policies (SRP) last time.

As you’ll recall, we ‘cleared the board’ through the default disabling of app execution in the Property Rules. In the Additional Rules section, I then started adding Path rules for apps I thought were essential.

The only apps you’ll ever need!

Obviously, this can get a little tedious, so Microsoft helpfully provides two default rules: one to enable execution of apps in the Program folder and the other to enable executables in the Windows system directory.

But this is cheating, and then you’d be forced to blacklist apps you don’t really need.

Anyway, when a user runs an unapproved app or a hacker tries to load some malware that’s not in the whitelist, SRP will prevent it.  Here’s what happened when I tried to launch PowerShell, which wasn’t in my whitelist, from the old-style cmd shell, which was in the list:

Damn you Software Restriction Policies!

100% Pure Security

To be ideologically pure, you wouldn’t use the default Windows SRP rules. Instead, you need to start from scratch with bupkes and do the grunt work of finding out what apps are being used and which are truly needed.

To help you get over this hurdle, Microsoft suggests in a Technet article that you turn on a logging feature that writes out an entry whenever SRP evaluates an app.  You’ll need to enable the following registry entry and set a log file location:


String Value: LogFileName, <path to a log file>

Here’s a part of this log file from my AWS test environment.

Log file produced by SRP.
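Once you have that log, a little scripting turns it into a first cut at a whitelist. Here's a Python sketch that tallies the executables SRP evaluated; the log line format below is illustrative, so adjust the regex to match what your actual LogFileName file contains:

```python
import re
from collections import Counter

# Mine an SRP log for the binaries users actually run, to seed a
# whitelist. These sample lines paraphrase the log's shape -- adapt
# the regex to your own file's format.

SAMPLE_LOG = """\
explorer.exe (PID = 1412) identified C:\\Windows\\system32\\notepad.exe as Unrestricted using path rule
cmd.exe (PID = 3268) identified C:\\Users\\bob\\Downloads\\game.exe as Disallowed using default rule
explorer.exe (PID = 1412) identified C:\\Windows\\system32\\notepad.exe as Unrestricted using path rule
"""

exe = re.compile(r"identified (\S+\.exe)")
counts = Counter(m.group(1) for m in exe.finditer(SAMPLE_LOG))

# The most frequently seen binaries are your first whitelist candidates.
for path, n in counts.most_common():
    print(n, path)
```

From there, the log review, the user interviews, and the arguing with fellow admins proceed as described below.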

So you have to review the log, question users, and talk it over with your fellow IT admins. Let’s say you work out a list of approved apps (excluding PowerShell), which you believe would make sense for a large part of your user community. You can then leverage the Group Policy Management console to publish the rules to the domain.

In theory, I should have been able to drag-and-drop the rules I created in the Group Policy Editor for the machine I was working on into the Management console. I wasn’t able to pull that off in my AWS environment.

I instead had to recreate the rules directly in the Group Policy Management Editor (below), and then let it do the work of distributing them across the domain — in my case, the Acme domain.


You can read more about how to do this here.

A Look at AppLocker

Let’s get back to the issue of PowerShell. We can’t live without it, yet hackers have used it as a tool for stealthy post-exploitation.

If I enable it in my whitelist, along with some of the built-in PowerShell protections I mentioned in the last post, there are still so many ways to get around these security precautions that it’s not worth the trouble.

It would be nice if SRP allowed you to do the whitelisting selectively based on Active Directory user or group membership. In other words, effectively turn off PowerShell except if you’re, say, an IT admin who’s a member of the ‘Special PowerShell’ AD group.

That ain’t happening in SRP since it doesn’t support this level of granularity!

Starting in Windows 7 (and Windows Server 2008 R2), Microsoft deprecated SRP and introduced the (arguably) more powerful AppLocker. It’s very similar to what it replaces, but it does provide this user-group level filtering.

We’ll talk more about AppLocker and some of its benefits in the final post in this series. In any case, you can find this policy next to SRP in the Group Policy Editor under Application Control Policies.

For my Acme environment, I set up a rule that enables PowerShell only for users in the Acme-VIPs group, Acme’s small group of power IT employees. You can see how I started setting this up as I follow the AppLocker wizard dialog:

PowerShell is an important and useful tool, so you’ll need to weigh the risks in selectively enabling it through AppLocker— dare I say it, perform a risk assessment.

Of course, you should have secondary controls, such as, ahem, User Behavior Analytics, that allow you to protect against PowerShell misuse should the credentials of the PowerShell-enabled group be compromised by hackers or insiders.

We’ll take up other AppLocker capabilities and final thoughts on whitelisting in the next post.

[Transcript] Interview With GDPR Attorney Sue Foster


Over two podcasts, attorney Sue Foster dispensed incredibly valuable GDPR wisdom. If you’ve already listened, you know they’re the kind of insights that would have otherwise required a lengthy Google expedition, followed by chatting with your cousin Vinny the lawyer. We don’t recommend that!

In reviewing the transcript below, I think there are three points that are worth commenting on. One, the GDPR’s breach reporting rule may appear to give organizations some wiggle room. But in fact that’s not the case! The reference to “rights and freedoms of natural persons” refers to explicit privacy and property rights spelled out in the EU Charter. This ain’t vague language.

However, there is some leeway in reporting within the 72-hour time frame. In short: you have to make a good effort, but you can delay if, say, you’re currently investigating and need more time because otherwise you’d compromise the investigation.

Two, the territorial scope requirements in Article 3 are complicated by what it means to target EU citizens in your marketing. The very tricky part is when you’re a multinational company that has both an EU and a non-EU presence. If you read closely, Foster is suggesting that EU citizens who happen to find their way to, say, your US web site would not be protected by the GDPR.

In other words, if the company’s general marketing doesn’t target EU citizens, then the information collected is not under GDPR protections. But that would not apply to a company’s localized web content for, say, the French or German markets — information submitted through those sites would of course be under the GDPR.

Yes, I will confirm this with Foster. But if this is not the case for multinationals, then it would cause a pretty large mal de tête.

Three, GDPR compliance is based on, as Foster notes, a “show your work” principle, the same as on math tests in high school. It is not like PCI DSS, where you’re going down a checklist: Two-Factor Authentication? Yes. Vulnerability Scanning? Yes. And so on.

The larger issue is that security technology will change and so what worked well in the past will likely not hold up in the future. With GDPR, you should be able to justify your security plan based on the current state of security technology and document what you’ve done.

Enough said.

Inside Out Security
Sue Foster is a partner with Mintz Levin, based out of the London office. She works with clients on European data protection compliance and on commercial matters in the fields of clean tech, high tech, mobile media, and life sciences. She’s a graduate of Stanford Law School. Foster is also, and we like this here at Varonis, a Certified Information Privacy Professional.

I’m very excited to be talking to an attorney with a CIPP, and with direct experience on a compliance topic we cover on our blog — the General Data Protection Regulation, or GDPR.

Welcome, Susan.

Sue Foster
Hi Andy. Thank you very much for inviting me to join you today. There’s a lot going on in Europe around cybersecurity and data protection these days, so it’s a fantastic set of topics.
Oh terrific. So what are some of the concerns you’re hearing from your clients on GDPR?
So one of the big concerns is getting to grips with the extra-territorial reach. I work with a number of companies that don’t have any office or other kind of presence in Europe that would qualify them as being established in Europe.

But they are offering goods or services to people in Europe. And for these companies, you know, in the past they’ve had to go through quite a bit of analysis to understand whether the Data Protection Directive applies to them. Under the GDPR, it’s a lot clearer, and the rules are easier for people to understand and follow.

So now when I speak to my U.S. clients, if they’re a non-resident company that promotes goods or services in the EU, including free services like a free app, for example, they’ll be subject to the GDPR. That’s very clear.

Also, if a non-resident company is monitoring the behavior of people who are located in the EU, including tracking and profiling people based on their internet or device usage, or making automated decisions about people based on their personal data, the company is subject to the GDPR.

It’s also really important for U.S. companies to understand that there’s a new ePrivacy Regulation in draft form that would cover any provider, regardless of location, of any form of publicly available electronic communication services to EU users.

Under this ePrivacy Regulation, the notion of what these communication services providers are is expanded from the current rules, and it includes things that are called over-the-top applications – so messaging apps and communications features, even when a communication feature is just something that is embedded in a website.

If it’s available to the public and enables communication, even in a very limited sort of forum, it’s going to be covered. That’s another area where U.S. companies are getting to grips with the fact that European rules will apply to them.

So there’s this new security regulation as well that may apply to companies located outside the EU. All of these things are combining to suddenly force a lot of U.S. companies to get to grips with European law.

So just to clarify, let’s say a small U.S. social media company that doesn’t market specifically to EU countries and doesn’t have a website in the language of some EU country, they would or would not fall under the GDPR?
On the basis of their [overall] marketing activity they wouldn’t. But we would need to understand if they’re profiling or they’re tracking EU users or through viral marketing that’s been going on, right? And they are just tracking everybody. And they know that they’re tracking people in the EU. Then they’re going to be caught.

But if they’re not doing that, if they’re not engaging in any kind of tracking, profiling, or monitoring activities, and they’re not affirmatively marketing into the EU, then they’re outside of the scope. Unless, of course, they’re offering some kind of service that falls under one of these other regulations that we were talking about.

What we’re hearing from our customers is that the 72-hour breach reporting rule is a concern. Our customers are confused, and after looking at some of the fine print, we are as well! So I’m wondering if you could explain breach reporting in terms of thresholds: what needs to happen before a report is made to the DPAs and consumers?
Sure, absolutely. So first it’s important to look at the specific definition of a personal data breach. It means a breach of security leading to the ‘accidental or unlawful destruction, loss, alteration, unauthorized disclosure of, or access to personal data’. So it’s fairly broad.

The requirement to report these incidents has a number of caveats. So you have to report the breach to the Data Protection Authority as soon as possible, and where feasible, no later than 72 hours after becoming aware of the breach.

Then there’s a set of exceptions. And that is unless the personal data breach is unlikely to result in a risk to the rights and freedoms of natural persons. So I can understand why U.S. companies would sort of look at this and say, ‘I don’t really know what that means’. How do I know if a breach is likely to ‘result in a risk to the rights and freedoms of natural persons’?

Because that’s not defined anywhere in this regulation!

It’s important to understand that that little bit of text is EU-speak that really refers to the Charter of Fundamental Rights of the European Union, which is part of EU law.

There is actually a document you can look at to tell you what these rights and freedoms are. But you can think of it basically in common sense terms. Are the person’s privacy rights affected, are their rights and the integrity of their communications affected, or is their property affected?

So you could, for example, say that there’s a breach that isn’t likely to reveal information that I would consider personally compromising in a privacy perspective, but it could lead to fraud, right? So that could affect my property rights. So that would be one of those issues. Basically, most of the time you’re going to have to report the breach.

When you’re going through the process of working out whether you need to report the breach to the DPA, and you’re considering whether or not the breach is likely to result in a risk to the rights and freedoms of natural persons, one of the things that you can look at is whether people are practically protected.

Or whether there’s a minimal risk because of steps you’ve already taken such as encrypting data or pseudonymizing data and you know that the key that would allow re-identification of the subjects hasn’t been compromised.

So these are some of the things that you can think about when determining whether or not you need to report to the Data Protection Authority.

If you decide you have to report, you then need to think about ‘do you need to report the breach to the data subjects’, right?

And the standard there is that it has to be a “high risk to the rights and freedoms of natural persons”. So a high risk to someone’s privacy rights or rights in their property and things of that sort.

And again, you can look at the steps that you’ve taken to either prevent the data from — you know before it even was leaked — prevent it from being potentially vulnerable in a format where people could be damaged. Or you could think also whether you’ve taken steps after the breach that would prevent those kinds of risks from happening.

Now, of course, the problem is the risk of getting it wrong, right?

If you decide that you’re not going to report after you go through this full analysis and the DPA disagrees with you, now you’re running the risk of a fine of up to 2% of the group’s global turnover … or gross revenue around the world.

And that, I think, is going to lead to a lot of companies being cautious and reporting even when they might have been able to take advantage of some of these exceptions, because they won’t feel comfortable with that.

I see. So just to bring it to more practical terms. We can assume that, let’s say, credit card numbers or some other identification number, if that was breached or taken, would have to be reported both to the DPA and the consumer?
Most likely. I mean if it’s…yeah almost certainly. Particularly if the security code on the back of the card has been compromised, and absolutely you’ve got a pretty urgent situation. You also have a responsibility to basically provide a risk assessment to the individuals, and advise them on steps that they can take to protect themselves such as canceling their card immediately.
One hypothetical that I wanted to ask you about is the Yahoo breach, which technically happened a few years ago. I think it was over two years ago … Let’s say something like that had happened after the GDPR where a company sort of had known that there was something happening that looked like a breach, but they didn’t know the extent of it.

If they had not reported it, and waited until after the 72-hour rule, what would have happened to let’s say a multinational like Yahoo?

Well, Yahoo would need to go through the same analysis, and you have to consider the scale of that breach, the level of access that was provided to the Yahoo users’ accounts as a result of those breaches, and of course the fact that people know it’s very common for individuals to reuse passwords across different sites, so you have the risk of, you know, sort of follow-on problems.

It’s hard to imagine they would be in a situation where they would be off the hook for reporting.

Now the 72-hour rule is not hard and fast.

But the idea is you report as soon as possible. So you can delay for a little while if it’s necessary for say a law enforcement investigation, right? That’s one possibility.

Or if you’re doing your own internal investigation and somehow that would be compromised or taking security measures would be compromised in some way by reporting it to the DPA. But that’ll be pretty rare.

Obviously going along for months and months with not reporting it would be beyond the pale. And I would say a company like Yahoo would potentially be facing a fine of 2% of its worldwide revenue!

So this is really serious business, especially for multinationals.

This is also a breach reporting related question, and it has to do with ransomware. We’re seeing a lot of ransomware attacks these days. In fact, when we visit customer sites and analyze their systems, we sometimes see these attacks happening in real time. Since a ransomware attack encrypts the file data but most of the time doesn’t actually take the data or the personal data, would that breach have to be reported or not?

This is a really interesting question! I think the by-the-book answer is, technically, if a ransomware attack doesn’t lead to the accidental or unlawful destruction, loss, or alteration or unauthorized disclosure of or access to the personal data, it doesn’t actually fall under the GDPR’s definition of a personal data breach, right?

So, if a company is subject to an attack that prevents it from accessing its data, but the intruder cannot itself access, change or destroy the data, you could argue it’s not a personal data breach, and therefore not reportable.

But it sure feels like one, doesn’t it?

Yes, it does!
Yeah. I suspect we’re going to find that the new European Data Protection Board will issue guidance that somehow brings ransomware attacks into the fold of what’s reportable. Don’t know that for sure, but it seems likely to me that they’ll find a way to do that.

Now, there are two important caveats.

Even though, technically, a ransomware attack may not be reportable, companies should remember that a ransomware attack could cause them to be in breach of other requirements of the GDPR, like the obligation to ensure data integrity and accessibility of the data.

Because by definition, you know, the ransomware attack has made the data inaccessible and has totally corrupted its integrity. So, there could be liability there under the GDPR.

And also, the company that’s suffering the ransomware attack should consider whether they’re subject to the new Network and Information Security Directive, which is going to be implemented in national laws by May 9th of 2018. So again, May 2018 being a real critical time period. That directive requires service providers to notify the relevant authority when there’s been a breach that has a substantial impact on the services, even if there was no GDPR personal data breach.

And the Network and Information Security Directive applies to a wide range of companies, including those that provide “essential services”. Sort of the fundamentals that drive the modern economy: energy, transportation, financial services.

But also, it applies to digital service providers, and that would include cloud computing service providers.

You know, there could be quite a few companies in the cloud space being held up by ransomware attacks, and they’ll need to think about their obligations to report even if there’s maybe not a GDPR reporting requirement.

Right, interesting. Okay. As a security company, we’ve been preaching Privacy by Design principles, data minimization and retention limits, and in the GDPR it’s now actually part of the law.

The GDPR is not very specific about what has to be done to meet these Privacy by Design ideas, so do you have an idea what the regulators might say about PbD as they issue more detailed guidelines?

They’ll probably tell us more about the process but not give us a lot of insight as to specific requirements, and that’s partly because the GDPR itself is very much a show-your-work regulation.

You might remember back on old, old math tests, right? When you were told, ‘Look, you might not get the right answer, but show all of your work in that calculus problem and you might get some partial credit.’

And it’s a little bit like that. The GDPR is a lot about process!

So, the push for Privacy by Design is not to say that there are specific requirements other than paying attention to whatever the state of the art is at the time. So, really looking at the available privacy solutions at the time and thinking about what you can do. But a lot of it is about just making sure you’ve got internal processes for analyzing privacy risks and thinking about privacy solutions.

And for that reason, I think we’re just going to get guidance that stresses that, develops that idea.

But any guidance that told people specifically what security technologies they needed to apply would probably be good for, you know, 12 or 18 months, and then something new would come along.

Where we might see some help is, eventually, in terms of ISO standards. Maybe there’ll be an opportunity in the future for something that comes along that’s an international standard, that talks about the process that companies go through to design privacy into services and devices, etc. Maybe then we’ll have a little more certainty about it.

But for now, and I think for the foreseeable future, it’s going to be about showing your work, making sure you’ve engaged, and that you’ve documented your engagement, so that if something does go wrong, at least you can show what you did.

That’s very interesting, and a good thing to know. One last question: we’ve been following some of the security problems related to Internet of Things devices, which are gadgets on the consumer market that can include internet-connected coffee pots, cameras, and children’s toys.

We’ve learned from talking to testing experts that vendors are not really interested in PbD. It’s ship first, maybe fix security bugs later. Any thoughts on how the GDPR will affect IoT vendors?

It will definitely have an impact. The definition of personal data under the GDPR is very, very broad. So, effectively, anything that I am saying that a device picks up is my personal data, as well as data kind of about me, right?

So, if you think about a device that knows my shopping habits that I can speak to and I can order things, everything that the device hears is effectively my personal data under the European rules.

And Internet of Things vendors do seem to be lagging behind in Privacy by Design. I suspect we’re going to see investigations and fines in this area early on, when the GDPR starts being enforced in May 2018.

Because the stories about the security risks of, say, children’s toys have really caught the attention of the media and the public, and the regulators won’t be far behind.

And now, we have fines for breaches that range from 2% to 4% of a group’s global turnover. It’s an area that is ripe for enforcement activity, and I think it may be a surprise to quite a few companies in this space.

It’s also really important to go back to this important theme that there are other regulations, besides the GDPR itself, to keep track of in Europe. The new ePrivacy Regulation contains some provisions targeted at the internet of things, such as the requirement to get consent from consumers for machine-to-machine transfers of communications data, which is going to be very cumbersome.

The [ePrivacy] Regulation says you have to do it, but it doesn’t really say how you’re going to get consent, meaningful consent, which is a very high standard in Europe, to these transfers when there’s no real intelligent interface between the device and the person, the consumer who’s using it. Some things have maybe a kind of web dashboard, or some kind of app that you use to communicate with your device, where you could have privacy settings.

There’s other stuff that’s much more behind the scenes with the Internet of Things, where the user doesn’t have a high level of engagement. So, maybe a smart refrigerator that’s relaying information about energy consumption to, you know, the grid. Even there, there’s potentially information where the user is going to have to give consent to the transfer.

And it’s hard to kind of imagine exactly what that interface is going to look like!

I’ll mention one thing about the ePrivacy Regulation: it’s in draft form. It could change, and that’s important to know. It’s not likely to change all that much, though, and it’s on a fast-track timeline because the Commission would like to have it in place and ready to go in May 2018, the same time as the GDPR.

Sue Foster, I’d like to thank you again for your time.
You’re very welcome. Thank you very much for inviting me to join you today.

Disabling PowerShell and Other Malware Nuisances, Part I


This article is part of the series "Disabling PowerShell and Other Malware Nuisances". Check out the rest:

Back in more innocent times, circa 2015, we began to hear about hackers going malware-free and “living off the land.” They used whatever garden-variety IT tools were lying around on the target site. It’s the ideal way to do post-exploitation without tripping any alarms.

This approach has taken off and gone mainstream, primarily because of off-the-shelf post-exploitation environments like PowerShell Empire.

I’ve already written about how PowerShell, when supplemented with PowerView, becomes a potent purveyor of information for hackers. (In fact, all this PowerShell pen-testing wisdom is available in an incredible ebook that you should read as soon as possible.)

Any tool can be used for good or bad, so I’m not implying PowerShell was put on this earth to make the life of hackers easier.

But just as you wouldn’t leave a 28” heavy-duty cable cutter next to a padlock, you probably don’t want to let hackers get their hands on PowerShell, or at least you want to make it much more difficult for them.

This brings up a large topic in the cybersecurity world: restricting application access, which is known more commonly as whitelisting or blacklisting. The overall idea is for the operating system to strictly control what apps can be launched by users.

For example, as a member of homo blogus, I generally need some basic tools and apps (along with a warm place to sleep at night), and can live without PowerShell, netcat, psexec, and some other cross-over IT tools I’ve discussed in previous posts. The same applies to most employees in an organization, and so a smart IT person should be able to come up with a list of apps that are safe to use.

In the Windows world, you can enforce rules on application execution using Software Restriction Policies and more recently AppLocker.

However, before we get into these more advanced ideas, let’s try two really simple solutions and then see what’s wrong with them.

ACLs and Other Simplicities

We often think of Windows ACLs as being used to control access to readable content. But they can also be applied to executables — that is, .exe, .vbs, .ps1, and the rest.

I went back into Amazon Web Services where the Windows domain for the mythical and now legendary Acme company resides and then did some ACL restriction work.

The PowerShell executable, powershell.exe, as any sys admin can tell you, lives in C:\Windows\System32\WindowsPowerShell\v1.0. I navigated to the folder, clicked on Properties, and effectively limited execution of PowerShell to a few essential groups: Domain Admins and Acme-SnowFlakes, the group of Acme employee power users.

I logged back into the server as Bob, my go-to Acme employee, and tried to bring up PowerShell. You can see the results below.

In practice, you could probably come up with a script — why not use PowerShell? — to automate this ACL setting for all the laptops and servers in a small- to mid-size site.
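Such a script might look something like the following sketch. To be clear, this is a hypothetical example: the ACME domain and Acme-SnowFlakes group names come from my made-up Acme environment, so you’d substitute your own groups (and you’d want to test it on a single machine first):

```powershell
# Hypothetical sketch: lock down powershell.exe so only approved groups can run it.
# Run from an elevated session. The ACME\* names are made up for this example.
$ps  = "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe"
$acl = Get-Acl $ps

# Stop inheriting ACEs from the parent folder, and discard the inherited entries
$acl.SetAccessRuleProtection($true, $false)

# Add back read-and-execute access only for the groups that really need PowerShell
foreach ($group in "BUILTIN\Administrators", "ACME\Domain Admins", "ACME\Acme-SnowFlakes") {
    $rule = New-Object System.Security.AccessControl.FileSystemAccessRule(
        $group, "ReadAndExecute", "Allow")
    $acl.AddAccessRule($rule)
}

Set-Acl -Path $ps -AclObject $acl
```

One wrinkle worth remembering: on 64-bit Windows there’s also a 32-bit copy of powershell.exe under C:\Windows\SysWOW64, which would need the same treatment.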

It’s not a bad solution.

If you don’t like the idea of setting ACLs on executable files, PowerShell offers its own execution restriction controls. As a user with admin privileges, you can use, what else, a PowerShell cmdlet called Set-ExecutionPolicy.

It’s not nearly as blunt a force as the ACLs, but you can restrict PowerShell to work only in interactive mode, with the Restricted parameter, so that it won’t execute any scripts at all. PowerShell would still be available in a limited way, but it wouldn’t be capable of running the scripts containing the hackers’ PS malware.

However, this would also prevent PowerShell scripts from being run by your IT staff. To allow IT-approved scripts but disable evil hacker scripts, you use the AllSigned parameter in Set-ExecutionPolicy (RemoteSigned is a looser variant that only requires signatures on downloaded scripts). Now PowerShell will only launch signed scripts. The IT staff, of course, would need to create their own scripts and then sign them using an approved credential.
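For reference, here’s a quick sketch of what those settings look like in practice, run from an elevated PowerShell session:

```powershell
# Interactive commands only: no scripts run at all
Set-ExecutionPolicy -ExecutionPolicy Restricted

# Only signed scripts run (RemoteSigned relaxes this to require
# signatures on downloaded scripts only)
Set-ExecutionPolicy -ExecutionPolicy AllSigned

# Verify what's in effect at each scope
Get-ExecutionPolicy -List
```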

I won’t go into the details of how to do this, mostly because it’s so easy to get around these controls. Someone even has a listicle blog post describing 15 PowerShell security workarounds.

The easiest one is using the Bypass parameter in PowerShell itself. Duh! (below).
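The workaround really is as simple as it sounds. Something along these lines launches a script regardless of the policy you set above (evil.ps1 is just a stand-in name for whatever script a hacker drops on the box):

```powershell
# The execution policy is not a security boundary: any user can
# override it for a single invocation, no admin rights required.
powershell.exe -ExecutionPolicy Bypass -File .\evil.ps1

# Or feed the script contents in on standard input,
# without touching the policy at all
Get-Content .\evil.ps1 | powershell.exe -NoProfile -
```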

Seems like a security hole, no?

So PowerShell has some basic security flaws. It’s somewhat understandable since it is, after all, just a shell program.

But even the ACL restriction approach has a fundamental problem.

If hackers loosen up the “live off the land” philosophy, they can simply download their own copy of powershell.exe, say, by using a remote access trojan (RAT), and then run it directly, avoiding the permission restrictions on the resident PowerShell.

Software Restriction Policies

These basic security holes (and many others) are always an issue with consumer-grade operating systems. This has led OS researchers to come up with more secure operating systems that have direct power to control what can be run.

In the Windows world, these powers are known as Software Restriction Policies (SRP), which are managed through the Group Policy Editor (for a good overview, see this).

With SRP you can control which apps can be run, based on file extension, path names, and whether the app has been digitally signed.

The most effective, though most painful, approach is to disallow everything and then add back the applications that you really, really need. This is known as whitelisting.

We’ll go into more details in the next post.

Anyway, you’ll need to launch the policy editor, gpedit, and navigate to Local Computer Policy > Windows Settings > Security Settings > Software Restriction Policies > Security Levels. If you click on “Disallowed”, you can then make this the default security policy: to not run any executables!

The whitelist: disallow as default, and then add app policies in “Additional Rules”.

This is more like a scorched-earth policy. In practice, you’ll need to enter “Additional Rules” to add back the approved apps (with their path names). If you leave out PowerShell, then you’ve effectively disabled the tool on the site.

Unfortunately, you can’t fine-tune the SRP rules based on AD groups or users. Drat!

And that brings us to Microsoft’s latest and greatest security enforcer, known as AppLocker, which does provide some nuance for application access. We’ll take that up next time as well.


[Podcast] Mintz Levin’s Sue Foster on the GDPR, Part II


Leave a review for our podcast & we'll put you in the running for a pack of cards.

In this second part of our interview with attorney and GDPR pro Sue Foster, we get into a cyber topic that’s been on everyone’s mind lately: ransomware.

A ransomware attack on EU personal data is unquestionably a breach —  “accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access  …”

But would it be reportable under the GDPR, which goes into effect next year?

In other words, would an EU company (or US one as well) have to notify a DPA and affected customers within the 72-hour window after being attacked by, say, WannaCry?

If you go by the language of the law, the answer is a definite …  no!

Foster explains that for it to be reportable, a breach has to cause a risk “to the rights and freedoms of natural persons.”  For what this legalese really means, you’ll just have to listen to the podcast. (Hint: it refers to a fundamental document of the EU.)

Anyway, personal data that’s encrypted by ransomware and not taken off premises is not much of a risk for anybody. There are still more subtleties involving ransomware and other EU data laws that are best explained by her, so you’ll just have to listen to Sue’s legal advice directly!

There’s also a very interesting analysis by Foster of the implications of the GDPR for Internet-of-Things gadget makers.

Planet Ransomware


If you were expecting a quiet Friday in terms of cyberattacks, this ain’t it. There are reports of a massive ransomware attack affecting computers on a global scale: in the UK, Spain, Russia, Ukraine, Japan, and Taiwan.

The ransomware variant that’s doing the damage is called WCry, also known as WannaCry or WanaCrypt0r. It has so far claimed some high-profile targets, including NHS hospitals in the UK, and telecom and banking companies in Spain.

Be calm and carry on, of course.

In the blog, we’ve been writing about ransomware over the last two years, and we have great educational resources to help you prevent or reduce the damage of an attack.

Here’s a quick overview of our content.

What is it?

Our ransomware guide: 

Learning more

The Troy Hunt course:

How it spreads

Yes, it can have worm-like features:

Can I make my own (for research purposes)?

Yes, but only under adult supervision:

Reducing the risk

Limiting file access really, really helps:

Legal and Regulatory Implications

For US companies, this is what you need to know:

Should you pay?

It depends:

Is a decryption solution available?

Check here:

The ultimate answer to ransomware

User Behavior Analytics (UBA):

And here’s proof:




[Podcast] Mintz Levin’s Sue Foster on the GDPR, Part I



Sue Foster is a London-based partner at Mintz Levin. She has a gift for explaining the subtleties in the EU General Data Protection Regulation (GDPR). In this first part of our interview, Foster discusses how the GDPR’s new extraterritoriality rule would place US companies under the law’s data obligations.

In the blog, we’ve written about some of the implications of the GDPR’s Article 3, which covers the law’s territorial scope. In short: if you market online to EU consumers — web copy, say, in the language of some EU country  — then you’ll fall under the GDPR. And this also means you would have to report data exposures under the GDPR’s new 72-hour breach rule.

Foster points out that if a US company merely happens to attract EU consumers through its overall marketing, without specifically targeting them, it would not fall under the law.

So a cheddar cheese producer from Wisconsin whose web site gets the attention and business of France-based fromage lovers is not required to protect their data at the level of the GDPR.

There’s another snag for US companies: an update to the EU’s ePrivacy Directive, which places restrictions on embedded communication services. Foster explains how companies, not necessarily ISPs, that provide messaging (that means you, WhatsApp, Skype, and Gmail) would fall under this law’s privacy rules.

Sue’s insights on these and other topics will be relevant to both corporate privacy officers and IT security folks.