[Podcast] John P. Carlin, Part 3: Ransomware & Insider Threat




We continue with our series with John Carlin, former Assistant Attorney General for the U.S. Department of Justice’s National Security Division. This week, we tackle ransomware and insider threat.

According to John, ransomware continues to grow, with no signs of slowing down. Not to mention, it is a vastly underreported problem. He also addressed the confusion on whether or not one should engage law enforcement or pay the ransom. And even though recently the focus has been on ransomware as an outside threat, let’s not forget insider threat because an insider can potentially do even more damage.

Transcript

Cindy Ng: We continue our series with John Carlin, former Assistant Attorney General for the U.S. Justice Department. This week we tackle ransomware and insider threats. According to John, ransomware is a vastly under-reported problem. He also addressed the confusion on whether or not one should engage law enforcement or pay the ransom. And even though, lately, we’ve been focused on ransomware as an outside threat, one area that doesn’t get as much focus is insider threat. And that’s worrisome because an insider can potentially do even more damage.

John Carlin: Ransomware was skyrocketing when I was in government. In the vast, vast majority of the cases, as I said earlier, we were hearing about them with the caveat that the victims were asking us not to make it public, and so it is also vastly under-reported. I don’t think the reporting right now comes anywhere near the actual scale of the problem. Verizon has attempted to do a good job, and there have been other reports that have attempted to put a firm number on how big the problem is. I think the most recent example that’s catching people’s attention is Netflix.

Another area where I think too few companies right now are thinking things through is how they’d engage law enforcement. And I don’t think there’s an easy answer. I mean, there’s a lot of confusion out there as to whether you should or shouldn’t pay. And there was such confusion over FBI folks, when I was there, giving guidance saying, “Always pay.” The FBI issued guidance, and we have a link to it here, that officially says they do not encourage paying a ransom. That doesn’t mean, though, that if you go to law enforcement they’re gonna order you not to pay. Just like they have for years in kidnapping cases, I think they may give you advice. They can also give back valuable information. Number one, if it’s a group they’ve been monitoring, they can tell you, as they’ve tried to move more toward a customer-service model, whether they’ve seen that group attack other victims before, and if they have, whether paying makes them likely to go away or not. Because some groups just take your money and continue. Sometimes the group that’s asking for your money isn’t the same group that hacked you, and they can help you on that as well. Secondly, just as risk reduction, as the example I gave earlier of Ferizi shows, or the Syrian Electronic Army, you can end up violating certain laws, the Treasury’s so-called OFAC sanctions and the material-support-for-terrorism laws, by paying a terrorist or another group that’s designated as a bad actor. But more important than that potential criminal or regulatory exposure, I think for many of you, is the brand. You do not want a situation where it becomes clear later that you paid off a terrorist. And so, by telling law enforcement what you’re doing, you can hedge against that risk.

The other thing you need to do has nothing to do with law enforcement, but with resilience: trying to figure out, “Okay, what are my critical systems, and what’s the critical data that could embarrass us? Is it locked down? What would be the risk?” As the most recent public example, Netflix, has shown, some companies decide that season 5 of “Orange Is the New Black” isn’t worth paying off the bad guy.

We’ve been focusing a lot on outside actors coming inside, and something that I think gets too little attention is the insider threat. That’s another trend. When it comes to outsider threats, the approach needs to change: instead of focusing so much on perimeter defense, we really need to focus on understanding what’s inside a company, what the assets are, and what we can do to complicate the life of a bad guy when they get inside your company. Risk mitigation, in other words. A lot of the same expenditures you would make, or the same processes you would put in place, to help mitigate that risk are also excellent at mitigating the risk from insider threat. And that’s where you can get an economy of scale on your implementation.

When I took over the National Security Division, my first week, I think, was the Boston Marathon attack. But shortly after that, a fellow named Snowden decided to disclose, in bulk, information that was devastating to certain government agencies across the board. And one of my last acts, in October of last year, was indicting another insider, a contractor at the National Security Agency, who had similarly taken large amounts of information. So, if I can share one lesson, having lived through it on the government end of the spectrum: sometimes our best agencies, which are very good at erecting barriers and causing complications for those who try to get at them from outside the wall, didn’t have the same type of protections in place inside the perimeter, for those who were trusted. And that’s something we see so often in the private sector as well. In terms of the amount of damage they can do, the insider may actually be the most significant threat you face. There’s also the blended version of this threat: the accidental or negligent error a human makes, and then that’s the gap that, no matter how good you are on the IT side, the actor exploits. In order to protect against that, you really need to figure out systems internally for flagging anomalous behavior, knowing where your data is, knowing what’s valued inside your system, and then putting access controls in place.
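The "flagging anomalous behavior" John describes can be sketched very simply. The snippet below is a hypothetical illustration, not any particular product's method: it flags a user whose latest daily file-access count sits several standard deviations above that same user's historical baseline. The function name, threshold, and data shape are all assumptions made for the example.

```python
from statistics import mean, stdev

def flag_anomalous_users(daily_access_counts, threshold=3.0):
    """Flag users whose most recent daily file-access count sits far
    above their own historical baseline, using a simple z-score.

    daily_access_counts maps a username to a list of daily counts,
    oldest first; the last entry is the day being evaluated.
    """
    flagged = []
    for user, counts in daily_access_counts.items():
        history, latest = counts[:-1], counts[-1]
        if len(history) < 2:
            continue  # not enough baseline to compare against
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            continue  # flat baseline; a z-score is undefined here
        if (latest - mu) / sigma > threshold:
            flagged.append(user)
    return flagged
```

A real deployment would baseline many more signals (folders touched, time of day, volume copied), but the principle is the same: compare each insider against their own normal, not against a global rule.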

From a recent study that Varonis did, and this is completely consistent with my experience in government, both with government systems and in providing assistance to the private sector, and now in advising the private sector: it did not surprise me, although it’s disturbing, that nearly half of the respondents indicated that at least 1,000 sensitive files are open to every employee, and that one-fifth had 12,000 or more sensitive files exposed to every employee. I can’t tell you how many of these I’ve responded to in crisis mode, where all the lawyers, etc., are trying to figure out how to mitigate risk and who they need to notify because their files may have been stolen, whether it’s business customers or consumer-type customers. And then they realize, too late at that point, that they didn’t have any access controls in place. This ability to put in access controls is vital when you have an insider, and it shouldn’t matter how the person gained access to your system, whether from the outside in or as an insider. It’s the same risk. The OPM hack gave us an example of this. What often happens is that the IT side knows how to secure the information or put in access controls, but there’s not an easy way to plug in the business side of the house. So, nearly three-fourths of employees say they know they have access to data they don’t need to see. More than half said it’s frequent or very frequent. And on the other side of the house, IT knows that three-quarters of the issues they’re seeing are insider negligence. So, you combine over-access with the fact that people make mistakes, and you get a witches’ brew in terms of trying to mitigate risk.
So, what you should be looking for there is, “How can I make it as easy as possible to get the business side involved?” They’re the ones who can determine who does or doesn’t get access. The problem right now, I think, with a lot of products out there, is that they’re too complicated, so the business side ignores them and then you have to guess at who should or shouldn’t have access. All the business side sees is, “Oh, it’s easier just to give everybody access than to try to think through and implement the product. I don’t know who to call or how to do it.”
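The over-exposure statistic above, sensitive files open to every employee, is the kind of thing that can be audited mechanically. As a minimal, hypothetical sketch on a POSIX filesystem, where "open to everyone" is approximated by the world-readable permission bit (real access-governance tools inspect ACLs and group membership, which this does not):

```python
import os
import stat

def find_world_readable(root):
    """Walk a directory tree and list files readable by every local
    user (the S_IROTH mode bit is set) -- a rough stand-in for the
    'open to every employee' over-exposure described above."""
    exposed = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.stat(path).st_mode
            except OSError:
                continue  # file vanished or is unreadable; skip it
            if mode & stat.S_IROTH:
                exposed.append(path)
    return exposed
```

Running something like this against a share of sensitive documents gives the business side a concrete list to review, which is exactly the "make it easy to get the business side involved" step John is arguing for.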

OPM was a major breach inside the government where, according to public reporting, China (the government has not officially said one way or the other, so I’m just relying on public reporting) got inside our government systems. One of the problems was that the attackers were able to move laterally, and we didn’t have a product in place where we could easily see what the data was. It also turned out afterwards that there was too much access when it came to personally identifiable information. Hundreds of thousands of government employees ultimately had to get notice because you just couldn’t tell what had or hadn’t been breached.

When we went to fix OPM, and this is another corporate governance lesson, the President tried three times to get the Cabinet to meet so that the business side would help own this risk and decide what data people should have access to. When you’re doing risk mitigation, there may be a loss of efficiency, but you should try to make a conscious decision about what’s connected to the internet, and if it’s connected to the internet, who has access to it and what level of protection it gets, recognizing that as you slim down access there can be a loss of efficiency. In order to do that, the person in charge is not the Chief Information Officer; it’s the Cabinet secretary, the Attorney General or the Secretary of State. The President tried three times to convene his Cabinet. Twice, and I know at Justice we were guilty of this because they sent me and our Chief Information Officer, the Cabinet members didn’t show up, because they figured, “This is too complicated. It’s technical. I’m gonna send the cyber IT people.” The third time, the Chief of Staff to the President had to send a harsh email that said, “I don’t care who you bring with you, but the President is requiring you to show up to the meeting, because you own the business here, and you’re the only person who can decide who has access, who doesn’t, and where they should focus their efforts.” So, for all the advice we gave private companies at the time, we were good at giving advice from government; we weren’t as good, necessarily, at following it. That’s simply something we recommend people do.
