Category Archives: Data Security

Please Disable UPnP on Your Router. Now!

Remember the first large-scale Mirai attack late last year? That was the one directed at IP cameras, and it took advantage of router configuration settings that many consumers never bother changing. The main culprit, though, was Universal Plug and Play (UPnP), which is enabled as a default setting on zillions of routers worldwide.

Essentially automated port forwarding, UPnP is a convenient way to allow gadgets, such as the aforementioned cameras (or WiFi-connected coffee pots), to be accessible on the other side of the firewall through a public port. UPnP automatically creates this public port when these gadgets are installed.

Command and Control Meets UPnP

However, this convenience factor provides an opening for hackers. In the case of Mirai, it allowed them to scan for these ports, and then hack into the device at the other end.
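To make that scanning step concrete, here's a minimal sketch (standard-library Python only) of the SSDP discovery probe that UPnP devices answer. The protocol details are from the UPnP spec; the discovery helper is illustrative and will simply return an empty list if no devices respond.

```python
import socket

# SSDP (Simple Service Discovery Protocol) multicast address and port
# used for UPnP discovery; anything on the LAN can send this probe.
SSDP_ADDR, SSDP_PORT = "239.255.255.250", 1900

def build_msearch(search_target="upnp:rootdevice", mx=2):
    """Build an SSDP M-SEARCH datagram asking UPnP devices to identify
    themselves; devices reply with the URL of their service description."""
    return ("M-SEARCH * HTTP/1.1\r\n"
            f"HOST: {SSDP_ADDR}:{SSDP_PORT}\r\n"
            'MAN: "ssdp:discover"\r\n'
            f"MX: {mx}\r\n"
            f"ST: {search_target}\r\n"
            "\r\n").encode("ascii")

def discover(timeout=2.0):
    """Broadcast the probe and collect any replies (empty list if no
    UPnP devices answer within the timeout)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(build_msearch(), (SSDP_ADDR, SSDP_PORT))
    replies = []
    try:
        while True:
            data, addr = sock.recvfrom(65507)
            replies.append((addr, data.decode("ascii", "replace")))
    except socket.timeout:
        pass
    finally:
        sock.close()
    return replies
```

Run `discover()` on a home network with UPnP enabled and you'll typically see your router announce itself — which is exactly why leaving the feature on is risky.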

Hackers have now found an even more diabolical use of UPnP with the banking trojan Pinkslipbot, also known as QakBot or QBot.

Around since 2000, QakBot infects computers, installs a key logger, and then sends banking credentials to remote Command and Control (C2) servers.

Remember C2?

When we wrote our first series on pen testing, we described how remote access trojans (RATs) residing on the victims’ computers are sent commands remotely from the hackers’ servers over an HTTP or HTTPS connection.

This is a stealthy approach in post-exploitation because it makes it very difficult for IT security to spot any abnormalities. After all, to an admin or technician watching the network it would just appear that the user is web browsing — even though the RAT is receiving embedded commands to log keystrokes or search for PII, and exfiltrating passwords, credit card numbers, etc. to the C2s.

The right defense against this is to block the domains of known C2 hideouts. Of course, it becomes a cat-and-mouse game with the hackers as they find new dark spots on the Web to set up their servers as old ones are filtered out by corporate security teams.
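The blocklist check itself is simple enough to sketch — the hard part is keeping the list current. Here's a minimal illustration in Python; the domain names are made up:

```python
from urllib.parse import urlparse

# Hypothetical blocklist of known C2 domains. In practice this would be
# fed by a threat-intelligence service and updated continuously.
C2_BLOCKLIST = {"evil-c2.example", "malware-drop.example"}

def is_blocked(url, blocklist=C2_BLOCKLIST):
    """Return True if the URL's host is a blocklisted domain or a
    subdomain of one (e.g. relay.evil-c2.example)."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in blocklist)
```

The subdomain check matters: attackers routinely rotate hostnames under a domain they control, so matching only exact hosts would miss most beacons.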

And that’s where Pinkslipbot has added a significant innovation. It has introduced, for lack of a better term, middle-malware, which infects computers, but not to take user credentials! Instead, the middle-malware installs a proxy C2 server that relays HTTPS to the real C2 servers.

Middle-malware: C2 servers can be anywhere!

The Pinkslipbot infrastructure therefore doesn’t have a fixed domain for its C2 servers. In effect, the entire Web is their playing field! This means it’s almost impossible to maintain a list of known domains or addresses to filter out.

What does UPnP have to do with Pinkslipbot?

When Pinkslipbot takes over a consumer laptop, it checks to see if UPnP is enabled. If it is, the Pinkslipbot middle-malware issues a UPnP request to the router to open up a public port. This allows Pinkslipbot to then act as a relay between those computers infected with the RATs and the hackers’ C2 servers (see the diagram).
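For a sense of how small a step this is for malware, here's a sketch of the SOAP body defined by the UPnP WANIPConnection service's AddPortMapping action — the standard call for opening a public port on a router. The addresses and ports below are illustrative; a real caller would POST this to the control URL discovered via SSDP.

```python
def build_add_port_mapping(external_port, internal_ip, internal_port,
                           protocol="TCP", description="relay"):
    """Build the SOAP body of a UPnP AddPortMapping request -- the call
    Pinkslipbot-style middle-malware would send to the router's
    WANIPConnection control URL to open a public port."""
    service = "urn:schemas-upnp-org:service:WANIPConnection:1"
    return f"""<?xml version="1.0"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
            s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:AddPortMapping xmlns:u="{service}">
      <NewRemoteHost></NewRemoteHost>
      <NewExternalPort>{external_port}</NewExternalPort>
      <NewProtocol>{protocol}</NewProtocol>
      <NewInternalPort>{internal_port}</NewInternalPort>
      <NewInternalClient>{internal_ip}</NewInternalClient>
      <NewEnabled>1</NewEnabled>
      <NewPortMappingDescription>{description}</NewPortMappingDescription>
      <NewLeaseDuration>0</NewLeaseDuration>
    </u:AddPortMapping>
  </s:Body>
</s:Envelope>"""
```

No credentials required: by design, any device on the LAN can make this request, which is the whole problem.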

It’s fiendish, and I begrudgingly give these guys a (black) hat tip.

One way for all of us to make these kinds of attacks more difficult to pull off is to simply disable the UPnP or port-forwarding feature on our home routers. You probably don’t need it!

By the way, you can see this done here for my own home Linksys router. And while you’re carrying out the reconfiguration, take the time to come up with a better admin password.

Do this now!

Security Stealth Wars: IT Is Not Winning (With Perimeter Defenses)

Phishing, FUD malware, malware-free hacking with PowerShell, and now hidden C2 servers. The hackers are gaining the upper hand in post-exploitation: their activities are almost impossible to block or spot with traditional perimeter security techniques and malware scanning.

What to do?

The first part is really psychological: you have to be willing to accept that the attackers will get in. I realize that it means admitting defeat, which can be painful for IT and tech people. But now you’re liberated from having to defend an approach that no longer makes sense!

Once you’ve passed over this mental barrier, the next part follows: you need a secondary defense for detecting hacking that’s not reliant on malware signatures or network monitoring.

I think you know where this is going. Defensive software that’s based on – wait for it — User Behavior Analytics (UBA) can spot the one part of the attack that can’t be hidden: searching for PII in the file system, accessing critical folders and files, and copying the content.
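As a toy illustration of the idea (not how any particular UBA product works), here's a sketch that flags users whose daily file-access count jumps far above their own historical baseline:

```python
from statistics import mean, stdev

def flag_anomalies(history, today, threshold=3.0):
    """Given each user's historical daily file-access counts and today's
    count, flag users whose activity is more than `threshold` standard
    deviations above their own baseline -- a crude stand-in for the
    per-user behavioral profiling UBA products do."""
    flagged = []
    for user, counts in history.items():
        if len(counts) < 2:
            continue  # not enough history to build a baseline
        mu, sigma = mean(counts), stdev(counts)
        # floor sigma at 1.0 so a perfectly flat history isn't hair-trigger
        if today.get(user, 0) > mu + threshold * max(sigma, 1.0):
            flagged.append(user)
    return flagged
```

The point is that the baseline is the user's own behavior, so a RAT suddenly crawling thousands of files stands out even when every individual access looks legitimate.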

In effect, you grant the hackers a small part of the cyber battlefield, only to defeat them later on.

[Podcast] Troy Hunt and Lessons from a Billion Breached Data Records

Leave a review for our podcast & we'll put you in the running for a pack of cards.


Troy Hunt is a web security guru, Microsoft Regional Director, and author whose security work has appeared in Forbes, Time Magazine and Mashable. He’s also the creator of “Have I been pwned?”, the free online service for breach monitoring and notifications.

In this podcast, we discuss the challenges of the industry, learn about his perspective on privacy and revisit his talk from RSA, Lessons from a Billion Breached Data Records as well as a more recent talk, The Responsibility of Disclosure: Playing Nice and Staying Out of Prison.

After the podcast, you might want to check out the free 7-part video course we developed with Troy on the new European General Data Protection Regulation that will be law on May 25, 2018. It will change the landscape of regulated data protection law and the way that companies collect personal data. Pro tip: GDPR will also impact companies outside the EU.

I Click Therefore I Exist: Disturbing Research On Phishing

Homo sapiens click on links in clunky, non-personalized phish mails. They just do. We’ve seen research suggesting a small percentage are simply wired to click during their online interactions. Until recently, the “why” behind most people’s clicking behaviors remained something of a mystery. We now have more of an answer to this question based on findings from German academics. Warning:  IT security people will not find their conclusions very comforting.

Attention Marketers: High Click-Through Rates!

According to research by Zinaida Benenson and her colleagues, the reasons for clicking on phish bait are based on an overall curiosity factor, and then secondarily, on content that connects in some way to the victim.

The research group used the following email template in the experiment, and sent it to over 1200 students at two different universities:

Hey!

The New Year’s Eve party was awesome! Here are the pictures:

http://<IP address>/photocloud/page.php?h=<participant ID>

But please don’t share them with people who have not been there!

See you next time!

<sender’s first name>

The message, by the way, was blasted out during the first week of January.
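The mechanics behind measuring clicks are worth seeing: each recipient gets a link carrying a unique ID, and the click-through rate falls straight out of the web server's logs. A sketch with made-up values (the URL shape mirrors the study's template):

```python
def make_tracking_link(base, participant_id):
    """Build a per-recipient link like the one in the study's template.
    The unique ID in the query string is what lets researchers (or
    phishers) tie each click back to a specific recipient."""
    return f"{base}/photocloud/page.php?h={participant_id}"

def click_rate(clicked_ids, all_ids):
    """Click-through rate: the fraction of recipients whose ID showed
    up in the web server's access logs."""
    return len(set(clicked_ids) & set(all_ids)) / len(all_ids)
```

Duplicate clicks from the same recipient count once, which is why the intersection is over sets.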

Anybody want to guess the overall click-through rate for this spammy message?

A blazing 25%.

Marketers everywhere are officially jealous of this awesome metric.

Anyway, the German researchers followed up with survey questions to find the motivations behind these click-aholics.

Of those who responded to the survey, 34% said they were curious about the party pictures linked to in the mail, another 27% said the message fits the time of year, and another 16% said they thought they knew the sender based on just the first name.

To paraphrase one of those cat memes, “Humans is EZ to fool!”

The clever German researchers conducted a classic cover-story design in their experiment. They enlisted students to ostensibly participate in a study on Internet habits and offered online shopping vouchers as an incentive. Nothing was mentioned about phish mails being sent to them.

And yes, after the real study on phishing was completed, the student subjects were told the reason for the research, the results, and given a good stern warning about not clicking on silly phish mail links.

Benenson also gave a talk on her research at last year’s Black Hat. It’s well-worth your time.

Phishing: The Ugly Truth

At the IOS blog, we’ve also been writing about phishing and have been following the relevant research. In short: we can’t say we’re surprised by the findings of the German team, especially as it relates to clicking on links to pictures.

The German study seems to confirm our own intuitions: people at corporate jobs are bored and are finding cheap thrills by gazing into the private lives of strangers.

Ok, you can’t change human nature, etc.

But there’s another, more disturbing conclusion related to the general context of the message. The study strongly suggests that the more you know and can say about the target in the phish mail, the more likely it is that they will click. In fact, in an earlier study by Benenson, a 56% click rate was achieved when the phish mail recipient was addressed by name.

Here’s what they had to say about their latest research:

 … fitting the content and the context of the message to the current life situation of a person plays an important role. Many people did not click because they learned to avoid messages from unknown senders, or with an unexpected content  … For some participants, however, the same heuristic (‘does this message fit my current situation?’) led to the clicks, as they thought that the message might be from a person from their New Year’s Eve party, or that they might know the sender.

Implications for Data Security

At Varonis, we’ve been preaching the message that you can’t expect perimeter security to be your last line of defense. Phishing, of course, is one of the major reasons why hackers find it so easy to get inside the corporate intranet.

But hackers are getting smarter all the time, collecting more details about their phishing targets to make the lure more attractive. The German research shows that even poorly personalized content is very effective.

So imagine what happens if they gain actual personal preferences and other details by observing victims on social media sites or, perhaps, through a previous hack of another web site the victim engages with.

Maybe a smart hacker who’s been stalking me might send this fiendish email to my Varonis account:

Hey Andy,

Sorry I didn’t see you at Black Hat this year! I ran into your colleague Cindy Ng, and she said you’d really be interested in research I’m doing on phishing and user behavior analytics. Click on this link and let me know what you think. Hope things are going well at Varonis!

Regards,

Bob Simpson, CEO of Phishing Analytics

Hmmm, you know I could fall for something like this the next time I’m in a vulnerable state.

The takeaway lesson for IT is that they need a secondary security defense, one that monitors hackers when they’re behind the firewall and can detect unusual behaviors by analyzing file system activity.

Want to find out more? Click here!

Did you click? Good, that link doesn’t point to a Varonis domain!
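That gag depends on a check any cautious reader can apply before clicking: does the link's host really belong to the domain you expect? Here's a sketch (note that suffix tricks like varonis.com.evil.example must fail):

```python
from urllib.parse import urlparse

def points_to_domain(url, expected="varonis.com"):
    """True only if the link's host is the expected domain or one of
    its subdomains. Substring matching would be fooled by hosts like
    'varonis.com.evil.example', so we compare exact labels instead."""
    host = (urlparse(url).hostname or "").lower()
    return host == expected or host.endswith("." + expected)
```

The same check is what mail-filtering rules and safe-link rewriters perform on your behalf; doing it by eye is exactly what phishers bet you won't.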

Another conclusion of the study is that your organization should also undertake security training, especially for non-tech savvy staff.

We approve as well: it’s a worthwhile investment!

[Podcast] John P. Carlin, Part 4: Emerging Threats


In this concluding post of John Carlin’s Lessons from the DOJ, we cover a few emerging threats: cyber as an entry point, hacking for hire and cybersecurity in the IoT era.

One of the most notable anecdotes is John’s description of how easy it was to find hacking-for-hire shops on the dark web. Reviews of the most usable usernames and passwords and the most destructive botnets are widely available to shoppers. Also, expect things to get worse before they get better. With the volume of IoT devices now being developed without security by design, we’ll need to find a way to mitigate the risks.

Transcript

Cindy Ng: You may have been following our series on John Carlin’s work during his tenure as Assistant Attorney General for the U.S. Justice Department. He described cyber as an entry point as one of our threats, using our latest election process as an example. But now, John has a few more emerging threats to bring to your attention: hacking for hire and cybersecurity in the IoT era. One of John’s striking descriptions is how easy it is to find hacking-for-hire shops on the dark web. Reviews of the most usable usernames and passwords and the most destructive botnets are widely available to shoppers. Expect things to get worse before they get better. With the volume of IoT devices created without security by design, we’ll need to find a way to mitigate the risk.

John Carlin: Let me move to emerging threats. We’ve talked about cyber as an entry point, a way that an attack can start. Even when the cyber event isn’t really the critical event in the end: our electoral system and confidence in it wasn’t damaged because there was an actual attack on the voting infrastructure, but because there was an attack where they stole some information that was relatively easy to steal and then combined it with a whole campaign of essentially weaponizing information, and that caused the harm. The other trend we’re seeing is the hacking for hire. I really worry about this one. I think over the next five years, what we’re seeing is, the dark web now, it’s so easy to use, well, I don’t recommend this necessarily, but when you go on it, you see sophisticated sales bazaars that look as customer-friendly as Amazon.

And when I say that I mean it literally looks like Amazon. I went on one site and it’s complete with customer reviews, like, “I gave him four stars, he’s always been very reliable, and 15% of the stolen user names and passwords that he gives me work, which is a very high rate.” Another one will be like, “This crook’s botnet has always been really good at doing denial-of-service attacks, five stars!” So that’s the way it looks right now on the dark web, and that’s because they’re making just so much, so much money they can invest in an infrastructure and it starts to look as corporate as our private companies.

What I worry about is that those tools are for rent. To use the botnet example, you know, one of the cases that we did was the Iranian Revolutionary Guard Corps attack on the financial sector. They hit 46 different financial institutions with a distributed denial-of-service attack, taking advantage of a huge botnet of hundreds and hundreds of thousands of compromised computers. They knocked financial institutions, who have a lot of resources, offline, affected hundreds of thousands of customers, and cost tens of millions of dollars.

Right now, on the dark web, you can rent the use of an already made botnet. So the criminal group creates the botnet, they’re not the ones who necessarily use it. Right now they tend to rent it to other criminal groups who will do things like GameOver Zeus, a case that we did, you know, they’ll use it for profit, they’ll use it for things like injecting malware that will lead to ransomware or injecting malware for a version of extortion, essentially, where they were turning on people’s video cameras and taking naked pictures, and then charging money, or all the other criminal purposes you can put a botnet to.

But it doesn’t take much imagination to see how a nation state or a terrorist group could just rent what the criminal groups are doing to cause an attack on your companies. In terms of emerging threats, you’re certainly tracking the Internet of Things era. I mean, you think about how far behind we are given where the threat is, just because we moved very, very quickly from moving everything we value from analog to digital space, connecting it to the internet over a 25-year period roughly. We’re now on the verge of an even more transformative evolution, where we put not just information, but all the devices that we need for everything, from the pacemakers in our hearts, the original versions that were rolled out, actually this is still an issue, for good medical reasons they wanted to be able to track in real-time information coming out of people’s hearts, but they rolled it out un-encrypted, because they just don’t think about it when it comes to the Internet of Things.

They were testing whether it worked, which it did, but they weren’t testing whether it would work where they had security by design, if a bad guy, a crook, a terrorist, or a spy wanted to exploit them. Drones in the sky, they were rolled out, same problem: the original commercial drones were rolled out not encrypted. So, again, just as a 12-year-old could kill someone by taking advantage of the early pacemakers, they could with drones as well. And then the automobiles on our roads, forget the self-driving vehicle already, estimates are 70% of the cars on the road by 2020 are essentially gonna be computers on wheels.

One of the big cases we dealt with was the proof of concept hack where someone got in through the entertainment system through the steering and braking system, then led to 1.4 million car recall of Jeep Cherokees. So that’s the smart device used to cause new types of harm, from car accidents, to drones in the sky, to killing people on pacemakers. But we also just have the sheer volume, it’s exponentially increasing and we saw the denial-of-service attack that we’ve all been warning about for a period of time take place this October, knocked down essentially internet connectivity for a short period of time. Because there were just so many devices, from video cameras, etc., that are default being rolled out and can be abused. So, hopefully there will be regulatory public policy focus to try to fix that.

In the interim though, my bottom line is, things are gonna get worse before they get better on the threat side, which is why we need to focus on the risk side. We won’t spend too much time on what government’s been doing. We’ve talked about some of it a little bit already, but this is…the idea is, we need to, one, bring deterrence to bear, make the bad guys feel pain. Because as long as they’re getting away completely cost-free, offense is gonna continue to vastly outstrip defense. Number two, we gotta figure out a way to share information better with the private sector.

And I think you’re hopefully seeing some of that now, where government agencies, FBI, Justice, Secret Service are incentivized to try to figure out ways to increase information sharing for information that, for many, many years now, has been kept only on the classified side of the house. And that’s a whole new approach for government, and it’s just in its early steps. But, we’ve been moving too slowly given where the threat is, we need to do more, faster. You know, just a couple weeks ago you heard the Director of the FBI say, “Okay, they came after us in 2016 in the Presidential election, but I’m telling you they’re gonna do it again in 2020,” and the head of the National Security Agency agreed. That’s in just one sphere, so I think we’re definitely in a trend now where we need to move faster in government.

What’s law enforcement doing? They’re increasing the cooperation. They’re doing this new approach on attribution. When I was there, we issued towards the end a new presidential policy directive that tried to clarify who’s in charge of threat, assets, intel support to make it easier. That said, if any of you guys actually looked at the attachment on that, it had something like 15 different phone numbers that you’re supposed to call in the event of an incident. And so, right now, what you need to do is think ahead on your crisis and risk mitigation plan, and know by name and by face who you’d call in law enforcement by having an incident response plan that you test when the worst happens.

And there’s reasons…I’m not saying in every case do it, but there are reasons to do it, and it can increase the intelligence you get back. It’s a hedge against risk: if what you thought was a low-level act, like a criminal act, the Ferizi example, turns out to be a terrorist act, at least you notified somebody. You also want to pick a door, and this requires sometimes getting assistance, you want to pick the right door in government, one that ideally minimizes the regulatory risk to your company, depending on what space you’re in, so that the information you provide them, as a victim, isn’t used against you to say that you didn’t meet some standard of care.

Even if…with the shift of administration, I know generally there’s a talk about trying to decrease regulations under this administration, but when it comes to cyber, everyone’s so concerned about where the risk is, that for a period of time I think we’re gonna continue to see a spike, that’ll hopefully level off at some point as each of the regulators tries to figure out a way they can move into this space. So, what can you do? One, most importantly, treat this as an inevitability. You know there’s no wall high enough, deep enough to keep the dedicated adversary out, and that means changing the mindset.

So, where…just like many other areas, this is a risk management, incident response area. Yes, you should focus front end on trying to minimize their ability to get in, but you also need to assume that they can, and then plan what’s gonna happen when they’re in my perimeter. That means knowing what you’ve got, knowing where it is, doing things like assuming they can get into my system. If I have crown jewels, I shouldn’t put them in a folder that’s called “Crown Jewels,” maybe put something else in there that will cause the bad guy to steal the wrong information. There may be a loss of efficiency, which is why it’s a risk mitigation exercise. I mean, you need to bring the business side in to figure out, how can I, assuming they get in, make it hardest for them to damage what I need the most to get back to business. Sony, despite all the public attention, their share price was up that spring, and that’s because they knew exactly who and how to call someone in the government. They actually had a good internal corporate process in place in terms of who was responsible for handling the crisis and crisis communication.

Second, assuming again that there are sophisticated adversaries that get more sophisticated, they can get in if they want to, you need to have a system that’s constantly monitoring internally, what’s going on from a risk standpoint, because the faster you can catch what’s going on inside your system, the faster you can have plan to either kick them out, remediate it, or if you know the data is already lost, start having a plan to figure out how you can respond to it, whether it’s anything from intellectual property, to salacious emails inside your system. And that way, you quickly identify and correct anomalies, reduce the loss of information.

Implement access controls, can’t hit this hard enough. This is true in government as well, by the way, along with the private sector. The default was that it’s just easier to give everybody access. And I think when it came to very highly regulated types of information, if, you know, you had source code or key intellectual property, people knew to try to limit that. But for all that other type of sensitive peripheral information, pricing discussions, etc., in my experience, a majority of companies don’t implement internal controls as to who has access and who doesn’t, and part of the reason for that is because it’s too complicated for the business side, so they don’t pay attention to doing it, even though you can limit access to sensitive information.

Then you can focus your resources, for those who have access, on how they can use it, and really focus on training them and target your training efforts to those who have the access to the highest risk information. Multi-factor authentication, of course, is becoming standard. What else can you do? Segmenting your network. Many of the worst incidents we have are because of the networks were essentially flat and we watch bad guys cruise around the network. Supply chain risk, large majority, Target, Home Depot, etc., a different version of the supply chain but the same idea. Once you get your better practices in place, the risk can sometimes be down the supply chain or with a 3rd party vendor, but it’s your brand that suffers in the event of a breach.

Train employees. We talked about how access controls can help you target that training. And then have an incident response plan and exercise it. Sometimes you’ll go in and there will be an incident response plan, but it’s hundreds of pages, and in an actual incident, nobody’s going to look at it. So it needs to be simple enough that people can use it, accessible both on the IT, technical side of the house, and the business side of the house, and then exercised, because you start spotting issues that really are more corporate governance issues inside the company as you try to do tabletop exercises. And we’ve talked a lot about building relationships with law enforcement, and the idea is: know by name and by face, pre-crisis, who it is that you trust in law enforcement, and have that conversation with them. This is easier to do if you’re a Fortune 500 company that can get their attention. If you’re smaller, you may have to do it in groups or through an association, but have a sense of who it is that you’d call, and then you need to understand who in your organization will make that call.

[Podcast] Tracking Dots, Movement and People


Long before websites, apps and IoT devices, one primary way of learning and sharing information was the printed document. They’re not extinct yet. In fact, we’ve given them an upgrade, such that nearly all modern color printers include some form of tracking information that associates documents with the printer’s serial number. This type of metadata is called tracking dots. We learned about them when prosecutors alleged that 25-year-old federal contractor Reality Leah Winner printed a top-secret NSA document detailing the ongoing investigation into last November’s Russian election hacking and mailed it to The Intercept. Rest assured the Inside Out Security Show panelists all had a response to this form of printed metadata.

Another type of metadata that will be discussed in the Supreme Court is whether the government needs a warrant to access a person’s cell phone location history. “Because cell phone location records can reveal countless private details of our lives, police should only be able to access them by getting a warrant based on probable cause,” said Nathan Freed Wessler, a staff attorney with the ACLU Speech, Privacy, and Technology Project.

Other articles discussed:

  • Malware installed on a router can take control over a device’s LEDs and use them to transmit data
  • Twitter’s Studio product had a vulnerability that allowed tweeting from any account
  • Malware hiding command-and-control instructions in comments on Britney Spears’ Instagram account

Inside Out Security Show panelists: Mike Buckbee, Kilian Englert, Forrest Temple

GDPR: Troy Hunt Explains it All in Video Course

You’re a high-level IT security person, who’s done the grunt work of keeping your company compliant with PCI DSS, ISO 27001, and a few other security abbreviations, and one day you’re in a meeting with the CEO, CSO, and CIO. When the subject of General Data Protection Regulation or GDPR comes up, all the Cs agree that there are some difficulties, but everything will be worked out.

You are too afraid to ask, “What is the GDPR?”

Too Busy for GDPR

We’ve all been there, of course. Your plate has been full over the last few weeks and months hunting down vulnerabilities, hardening defenses against ransomware and other malware, upgrading your security, along with all the usual work involved in keeping the IT systems humming along.

So it’s understandable that the General Data Protection Regulation may have flown under your radar.

However, there’s no need to panic.

The GDPR shares many similarities with other security standards and regulations, so it’s just a question of learning some basic background, the key requirements of the new EU law, and a few gotchas, preferably explained by an instructor with a knack for connecting with IT people.

Hunt on GDPR

And that’s why we engaged with Troy Hunt to develop a 7-part video course on the GDPR. Troy is a web security guru, Australian Microsoft Regional Director, and author whose security writing has appeared in Forbes, Time Magazine, and Mashable. And he’s no stranger to this blog as well!

Let’s get back to you and other busy IT security folks who need to get up to speed quickly. With just an hour of your time, Troy will cover the basic vocabulary and definitions (“controller”, “processor”, “personal data”), the key concept underlying GDPR (personal data is effectively owned by the consumer), and what you’ll need to do to keep your organization compliant (effectively, minimize and monitor this personal data).
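As a toy illustration of the minimize-and-monitor idea (an illustration only, not legal advice on GDPR compliance), here's a sketch that strips personal-data fields from a record and replaces the identifier with a one-way pseudonym. The field names are made up:

```python
import hashlib

# Illustrative personal-data fields -- not GDPR's full Article 4 definition.
PERSONAL_FIELDS = {"name", "email", "ip_address"}

def minimize(record):
    """Drop personal-data fields and key the record by a one-way
    pseudonym, so analytics can proceed without retaining the raw
    identifiers."""
    pseudonym = hashlib.sha256(record["email"].encode()).hexdigest()[:12]
    kept = {k: v for k, v in record.items() if k not in PERSONAL_FIELDS}
    return {"id": pseudonym, **kept}
```

Note that a plain hash of an email is only pseudonymization, not anonymization: the same input always maps to the same ID, which is useful for linkage but still counts as personal data under the regulation.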

By the way, Troy also explains how US companies, even those without EU offices, can get snagged by GDPR’s territorial scope rule (Article 3, to be exact). US-based e-commerce companies: you’ve been warned!

While Troy doesn’t expect you to be an attorney, he analyzes and breaks down a few of the more critical requirements and the penalties for not complying, particularly on breach reporting, so that you’ll be able to keep up with some of the legalese when it arises at your next GDPR meeting.

And I think you’ll see by the end of the course that while there may be some new aspects to this EU law, as Troy notes, the GDPR really legislates IT common sense.

What are you waiting for?  Register and get GDPR-aware starting today!

[Infographic] From Bad Report Cards to Insider Data Theft

We’ve all read the news recently about employees and contractors selling internal customer data records or stealing corporate intellectual property. But insiders breaking bad have been with us as long as we’ve had computers and disgruntled humans who understand IT systems.

You may not know it, but academic researchers have also been studying the psychological insides of insiders.

Carnegie Mellon’s Computer Emergency Response Team (CERT) has an entire group devoted to insider threats. Based on looking at real cases, these academics have come up with, to our minds, a very convincing model of what drives insiders.

In short, it’s their belief that the root causes lie beyond just a raise or promotion denied, but rather in earlier traumas, likely starting in childhood.

For instance, it is thought that children who, during a famous psych experiment, immediately ate the marshmallow (instead of waiting for two marshmallows) had issues with parental and other authority figures that would later show up through impulsive behaviors. Or perhaps for a certain kind of child, not getting into the genius program for advanced 4-year-olds can have devastating consequences later!

We’ve turned the complex CERT multi-stage insider model into this more accessible infographic. Check out the original CERT paper (or read our incredibly informative series) to learn more.

[Podcast] John P. Carlin, Part 3: Ransomware & Insider Threat


We continue with our series with John Carlin, former Assistant Attorney General for the U.S. Department of Justice’s National Security Division. This week, we tackle ransomware and insider threat.

According to John, ransomware continues to grow, with no signs of slowing down. Not to mention, it is a vastly underreported problem. He also addressed the confusion on whether or not one should engage law enforcement or pay the ransom. And even though recently the focus has been on ransomware as an outside threat, let’s not forget insider threat because an insider can potentially do even more damage.

Transcript

Cindy Ng: We continue our series with John Carlin, former Assistant Attorney General for the U.S. Justice Department. This week we tackle ransomware and insider threats. According to John, ransomware is a vastly under-reported problem. He also addressed the confusion on whether or not one should engage law enforcement or pay the ransom. And even though, lately, we’ve been focused on ransomware as an outside threat, one area that doesn’t get as much focus is insider threat. And that’s worrisome because an insider can potentially do even more damage.

John Carlin: Ransomware, it was skyrocketing when I was in government. In the vast, vast, as I said earlier, majority of the cases, we were hearing about them with the caveat that they were asking us not to make it public, and so it is also vastly under-reported. I don’t think there’s anywhere near, right now, the reporting. I think Verizon attempted to do a good job. There’ve been other reports that have attempted to get a firm number on how big the problem is. I think the most recent example that’s catching people’s attention is Netflix.

Another area where I think too few companies right now are thinking through how they’d engage law enforcement. And I don’t think there’s an easy answer. I mean, there’s a lot of confusion out there as to whether you should or shouldn’t pay. And there was such confusion over FBI folks, when I was there, giving guidance saying, “Always pay.” The FBI issued guidance, and we have a link to it here, that officially says they do not encourage paying a ransom. That doesn’t mean, though, that if you go into law enforcement that they’re gonna order you not to pay. Just like they have for years in kidnapping, I think they may give you advice. They can also give back valuable information. Number one, if it’s a group they’ve been monitoring, they can tell you, and as they’ve tried to move more towards the customer service model, they can tell you whether they’ve seen that group attack other actors before, and if they have, whether if you pay they’re likely to go away or not. Because some groups just take your money and continue. Some groups, the group who’s asking for your money isn’t the same group that hacked you, and they can help you on that as well. Secondly, just as risk-reduction, as the example I gave earlier of Ferizi shows, or the Syrian Electronic Army, you can end up, number one, violating certain laws when it comes to the Treasury, so-called OFAC, and material support for terrorism laws by paying a terrorist or other group that’s designated as a bad actor. But more importantly, I think for many of you, then, that potential criminal regulatory loss is the brand. You do not want a situation where it becomes clear later that you paid off a terrorist. And so, by telling law enforcement what you’re doing, you can hedge against that risk.

The other thing you need to do has nothing to do with law enforcement, but is resilience and trying to figure out, “Okay, what are my critical systems, and what’s the critical data that could embarrass us? Is it locked down? What would be the risk?” The most recent public example Netflix has shown, you know, some companies decide season 5 of “Orange is the New Black,” it’s not worth paying off the bad guy.

We’ve been focusing a lot on outside actors coming inside, and something I think has gotten too little attention, or sometimes gets too little attention, is the insider threat. That’s another trend. As we focus on how, when it comes to outsider threats, the approach needs to change, and instead of focusing so much on perimeter defense, we really need to focus on understanding what’s inside a company, what the assets are, what we can do to complicate the life of a bad guy when they get inside your company. Risk mitigation, in other words. A lot of the same expenditures that you would make, or same processes that you put in place to help mitigate that risk, are also excellent at mitigating the risk from insider threat. And that’s where you can get an economy of scale on your implementation.

When I took over National Security Division, my first, I think, week, was the Boston Marathon attack. But then, shortly after that was a fellow named Snowden deciding to disclose, in bulk, information that was devastating to certain government agencies across the board. And one of my last acts was indicting another insider and contractor at the National Security Agency who’d similarly taken large amounts of information in October of last year. So, if I can share one lesson, having lived through it on the government end of the spectrum, that sometimes our best agencies, who are very good at erecting barriers and causing complications for those who try to get them from outside the wall, didn’t have the same type of protections in place inside the perimeter area, in those that were trusted. And that’s something we just see so often in the private sector, as well. In terms of the amount of damage they can do, the insider may actually be the most significant threat that you face. This is the kind of version of the blended threat, the accidental or negligent threat that happens from a human error, and then that’s the gap that, no matter how good you are on the IT, the actor exploits. In order to protect against that, you really need to figure out systems internally for flagging anomalous behavior, knowing where your data is, knowing what’s valued inside your system, and then putting access controls in place.

From a recent study that Varonis did, and this is completely consistent with my experience both in government, in terms of government systems in government, in terms of providing assistance to the private sector and now giving advice to the private sector, is that it did not surprise me, this fact, although it’s disturbing, that nearly half of the respondents indicated that at least 1,000 sensitive files are open to every employee, and that one fifth had 12,000 or more sensitive files exposed to every employee. I can’t tell you how many of these I’ve responded to in crisis mode, where all the lawyers, etc. are trying to figure out how to mitigate risk, who do they need to notify because their files may have been stolen, whether it’s business customers or their consumer-type customers. And then, they realize too late, at this point, that they didn’t have any access controls in place. This ability to put in an access control is vital, both when you have an insider and also, it shouldn’t matter how the person gained access to your system, whether they were outside-in or it’s an insider. It’s the same risk. And so, what I’ve found is that…and this was a given example of this that we learned through the OPM hack. But what often happens is the IT side knows how to secure the information or put in access controls, but there’s not an easy way to plug in your business side of the house. So, nearly three-fourths of employees say they know they have access to data they don’t need to see. More than half said it’s frequent or very frequent. And then, on the other side of the house, on the IT, they know that three-quarters of the issues that they’re seeing is insider negligence. So, you combine over-access with the fact that people make mistakes, and you get a witches’ brew in terms of trying to mitigate risk. 
So, what you should be looking for there is, “How can I make it as easy as possible to get the business side involved?” They can determine who gets access or who doesn’t get access. And the problem right now, I think, with a lot of products out there, is that it’s too complicated, and so the business side ignores it and then you have to try to guess at who should or shouldn’t have access. All they see then is, “Oh, it’s easier just to give everybody access than it is to try to think through and implement the product. I don’t know who to call or how to do it.”

OPM, major breach inside the government where, according to public reporting, China, but the government has not officially said one way or the other so I’m just relying on public reporting, it breached inside our systems, our government systems. And one of the problems was they were able to move laterally, in a way, and we didn’t have a product in place where we could see easily what the data was. And then, it turned out afterwards, as well, there was too much access when it came to the personally identifiable information. I have hundreds of thousands of government employees who ultimately had to get notice because you just couldn’t tell what had or hadn’t been breached.

When we went to fix OPM, this is another corporate governance lesson, three times the President tried to get the Cabinet to meet so that the business side would help own this risk and decide what data people should have access to, recognizing when you’re doing risk mitigation, there may be a loss of efficiency but you should try to make a conscious decision over what’s connected to the internet, and if it’s connected to the internet, who has access to it and what level of protection, recognizing, you know, as you slim access there can be a loss of efficiency. In order to do that, the person who’s in charge is not the Chief Information Officer, it is the Cabinet secretary. It is the Attorney General or the Secretary of State. The President tried three times to convene his Cabinet. Twice, I know for Justice, we were guilty because they sent me and our Chief Information Officer, the Cabinet members didn’t show up because they figured, “This is too complicated. It’s technical. I’m gonna send the cyber IT people.” The third time, the Chief of Staff to the President had to send a harsh email that said, “I don’t care who you bring with you, but the President is requiring you to show up to the meeting because you own the business here, and you’re the only person who can decide who has access, who doesn’t and where they should focus their efforts.” So, for all the advice we were giving private companies at the time, we were good at giving advice from government. We weren’t as good, necessarily, at following it. That’s simply something we recommend people do.


Disabling PowerShell and Other Malware Nuisances, Part III


This article is part of the series "Disabling PowerShell and Other Malware Nuisances".

One of the advantages of AppLocker over Software Restriction Policies is that it can selectively enable PowerShell for Active Directory groups. I showed how this can be done in the previous post. The goal is to limit as much as possible the ability of hackers to launch PowerShell malware, but still give legitimate users access.

It’s a balancing act of course. And as I suggested, you can accomplish the same thing by using a combination of Software Restriction Policies (SRP) and ACLs, but AppLocker does this more efficiently in one swoop.

Let’s Get Real About Whitelisting

As a practical matter, whitelisting is just plain hard to do, and I’m guessing most IT security staff won’t go down this route. However, AppLocker does provide an ‘audit mode’ that makes whitelisting slightly less painful than SRP.

AppLocker can be configured to log events that show up directly in the Windows Event Viewer. For whatever reason, I couldn’t get this to work in my AWS environment. But this would be a little less of a headache than setting up a Registry entry and dealing with a raw file — the SRP approach.

In any case, I think most of you will try what I did. I took the default rules provided by AppLocker to enable the standard Windows system and program folders, added an exception for PowerShell, and then created a special rule to allow only members of a select AD group — Acme-VIPs in my case — to access PowerShell.

AppLocker: Accept the default path rules, and then selectively enable PowerShell.

Effectively, I whitelisted all-the-usual Windows suspects, and then partially blacklisted PowerShell.

PowerShell for Lara, who’s in the Acme-VIPs group, but no PowerShell for Bob!
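The decision logic of that rule set can be sketched in a few lines. To be clear, this is only an illustration of the policy described above, not how AppLocker itself is implemented; the paths and the Acme-VIPs group name simply mirror the example in this post.

```python
# Illustrative sketch of the AppLocker policy above (not AppLocker itself):
# default path rules for standard Windows folders, an exception for
# PowerShell, and a group-based allow rule for Acme-VIPs.
WHITELISTED_DIRS = ["C:\\Windows", "C:\\Program Files"]
POWERSHELL = "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe"
POWERSHELL_GROUP = "Acme-VIPs"

def can_run(exe_path: str, user_groups: set) -> bool:
    """Return True if the executable would be allowed to launch."""
    if exe_path.lower() == POWERSHELL.lower():
        # The exception: only the select AD group may run PowerShell.
        return POWERSHELL_GROUP in user_groups
    # Default path rules: allow anything under the standard Windows folders.
    return any(exe_path.lower().startswith(d.lower()) for d in WHITELISTED_DIRS)

# Lara is in Acme-VIPs; Bob is not.
print(can_run(POWERSHELL, {"Acme-VIPs"}))                      # True
print(can_run(POWERSHELL, {"Domain Users"}))                   # False
print(can_run("C:\\Temp\\malware.exe", {"Domain Users"}))      # False
```

In other words: whitelist the usual suspects by path, then carve out PowerShell as a special case keyed to group membership.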

And Acme Was Hacked

No, the hacking of my Acme domain on AWS is not going to make any headlines. But I thought as a side note it’s worth mentioning.

I confess: I was a little lax with my Amazon firewall port setting, and some malware slipped in.

After some investigation, I discovered a suspicious executable in the \Windows\Prefetch directory. It was run as a service that looked legit, and it opened a zillion UDP ports.

It took me an afternoon or two to figure all this out. My tip-offs were my server becoming somewhat sluggish, and then an Amazon email politely suggesting that my EC2 instance may have been turned into a bot used for a DDoS attack.

This does relate to SRP and AppLocker!

Sure, had I activated these protection services earlier, Windows would have been prevented from launching the malware, which was living in a non-standard location.

Lesson learned.

And I hold my head in shame if I caused some DDoS disturbance for someone, somewhere.

Final Thoughts

Both SRP and AppLocker also have rules that take into account file hashes and digital certificates. Either will provide an additional level of assurance that the executables are really what they claim to be, and not the work of evil hackers.

AppLocker is more granular than SRP when it comes to certificates, and it allows you to filter on a specific app from a publisher and a version number as well. You can learn more about this here.

Bottom line: whitelisting is not an achievable goal for the average IT mortal. For the matter at hand, disabling PowerShell, my approach of using default paths provided by either SRP or AppLocker, and then selectively allowing PowerShell for certain groups — easier with AppLocker — would be far more realistic.

[Podcast] Security Pros and Users: We’re All in This Together



The latest release of SANS’ Security Awareness Report attributed communication as one of the primary reasons why awareness programs thrive or fail. Yes, communication is significant, but what does communication mean?

“The goal of communication is to facilitate understanding,” said Inside Out Security Show (IOSS) panelist Mike Thompson.

Another panelist, Forrest Temple, expanded on that idea: “The skill of communication is the clarity through which that process happens. Being able to tell a regular user about the purpose behind the policy is the important part.”

However, IOSS panelist Kilian Englert pushed back on the report’s findings that insinuated users or security pros are to blame when a program fails. Yes, clear communication is vital, he said, but added, “We’re all in this together.”

Others echoed this sentiment as well when we discussed a recent report that 83% of Security Pros Waste Time Fixing Co-Workers’ Non-Security Problems.


US State Data Breach Law Definitions


In Part 1: A Guide to Per State Data Breach Response, we discussed the importance of understanding what classes of data you have in your control.

We stress this point as it’s easy to get lost in the different numerical conditions around per state data breach disclosure. What’s often not considered is that due to differences in how a state defines Personally Identifiable Information (PII), what may be considered a data breach in North Dakota might not be a data breach in Florida.

Typically “Personal information” does not include publicly available information that is lawfully made available to the general public from federal, state, or local government records.

Also, it’s important to remember that these data points are combinatorial. For example, emailing a spreadsheet of Social Security Numbers without the associated first and last names likely wouldn’t be sufficient to trigger data breach disclosure in most states.

All of this results in the need to understand exactly what information was lost in a breach.

Common PII Definitions

Almost all states consider a mixture of:

  • First Name or Last Name
  • Social Security Number
  • State ID (Driver License, Passport) – Given that these are per state laws, they are often keenly interested in disclosure of Driver’s License numbers, Passport information, etc.
  • Financial Account Information (account code, passcode, password). Typically this is summarized as “any ability to access a financial account” and encompasses anything that might be used for access to bank, credit card, retirement, investment or savings accounts. The definition is broad enough to include things like cryptocurrencies as they’re clearly financial in nature.
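The combinatorial point above can be made concrete with a short sketch. The field names here are illustrative, not statutory, and real statutes add per-state wrinkles; the logic simply captures the common "name plus a sensitive element" trigger.

```python
# Hedged sketch of the common breach-disclosure trigger: a record is
# generally reportable only when a name appears alongside at least one
# sensitive element. Field names are illustrative, not statutory.
SENSITIVE = {"ssn", "state_id", "financial_account"}

def triggers_disclosure(record_fields: set) -> bool:
    """Name + at least one sensitive element = likely reportable."""
    has_name = "first_name" in record_fields or "last_name" in record_fields
    return has_name and bool(SENSITIVE & record_fields)

print(triggers_disclosure({"ssn"}))                              # False
print(triggers_disclosure({"first_name", "last_name", "ssn"}))   # True
```

This is why the bare spreadsheet of Social Security Numbers in the earlier example likely falls short of a trigger, while the same spreadsheet with names attached likely does not.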

With that as the baseline we can then start to consider some more of the outlying criteria. While this may or may not affect you today, the conventional wisdom is that sometime in the not too distant future a comprehensive Federal Data Breach disclosure law is going to be passed. Most likely it will be a roll-up of the different state disclosure laws.

Given this, it’s good to consider this a roadmap of the data that you need to preferentially protect, manage, secure and dispose of to protect your organization from breaking the law.

Account Information

In large part, mass data breaches are dangerous because consumers often reuse credentials between accounts. It’s not uncommon for someone to use the same email address and password between say their social network, their bank and their preferred shopping site.

This means that a breach in any one of those systems actually compromises them all.

With that in mind, I was pleasantly surprised to find that a handful of states require notification if any kind of username/password is leaked from a service (as it’s quite likely that those passwords would also unlock more sensitive financial or medical accounts).

These statutes are not widely known and potentially affect thousands of tiny one-off SaaS services, forums, blogs, companies and other websites.

It’s a big change from the mindset of “We don’t have any valuable information, so it’s not a big deal if we’re hacked.”

Someone who runs a moderately popular WordPress blog with comments enabled is likely not thinking “I need to check Georgia Data Breach Notification laws” when their site gets hacked.

Biometric Information

Biometrics are increasingly popular as a means of adding additional factors to authentication or as a user friendly way of securing access. Given this, unsurprisingly, unauthorized access to biometric data is considered to be a leak of personally identifiable information.

The statutes cover fingerprints, retina/iris scans, or any other “unique physical representation” (so presumably facial recognition, palm scans, gait analysis, etc. would all fall under this category).

The statutes themselves don’t get into the fine detail of what constitutes biometric storage. They don’t differentiate storing a high definition image of a thumbprint from a system that takes sample points from a thumbprint and stores a hash of the value. Unauthorized disclosure of either would be considered a data breach.
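The distinction the statutes skip over is easy to picture in code. This sketch uses a purely hypothetical sample-point format to show the "hash of sample points" approach; under the statutes, leaking this digest would count as a breach just like leaking the raw thumbprint image.

```python
# Sketch of the "sample points + hash" storage approach the statutes
# don't distinguish from raw image storage. The (x, y, angle)
# sample-point format is purely illustrative.
import hashlib

def template_digest(minutiae_points) -> str:
    """Hash a list of (x, y, angle) sample points into a fixed-size digest."""
    # Canonicalize ordering so the same points always produce the same digest.
    canonical = ",".join(f"{x}:{y}:{a}" for x, y, a in sorted(minutiae_points))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Either the raw image bytes or this 64-hex-character digest would be
# treated as biometric data under the statutes.
```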

DNA

Currently, only Wisconsin considers a disclosure of your personal genetic makeup to be “Personally Identifying Information”.

Electronic Signature

Somewhat maddeningly, the definitions for what constitutes an electronic signature are quite vague. But it would be fairly safe to assume that they include PKI keys as a signatory mechanism.

To me this is interesting as there are lots of cases where a web host might have thousands of vulnerable sites in standalone VPS silos. You could imagine some PHP bug that allowed their contents to be listed, which would then trigger the disclosure rules.

Medical Information

Generally defined as: “any electronic or physical information about treatment, diagnosis or history”, which extends far beyond a formal medical record as one might have in a hospital.

Consider something like a consent form for a trampoline park (not pregnant or has a history of heart issues) or a checkbox in a form that indicates that someone has a peanut allergy.

Date of Birth

Date of Birth is often used as a security question, and its inclusion as a PII indicator seems forward-thinking.

Employer ID

An identification number assigned to the individual by the individual’s employer in combination with any required security code, access code, or password.

Mother’s Maiden Name

Long used as the answer to security questions, disclosure could potentially be used for account recovery attacks.

Health Insurance Information

This is distinct from any actual medical information; it covers purely administrative items, like who is providing coverage and the identification number for the account.

Tax Information

I’m honestly a bit surprised that tax information isn’t more often considered to be a reportable data breach event as it’s so often used as a means of identification.

Conclusion

We hope this underscores the importance of classifying the data on your network to better prepare for a potential data breach.