Category Archives: Data Security

[Podcast] When Security is a Status Symbol

As sleep and busyness gain prominence as status symbols, I wondered when, or if, good security would ever achieve the same cachet. Investing in promising security technology is a good start. We’ve also seen an upsurge in biometrics as a form of authentication. And let’s not forget our high school cybersecurity champs!

However, as we celebrate new technologies, we sometimes remain at a loss over vulnerabilities in existing ones, such as an attacker’s ability to guess a user’s PIN using the phone’s sensors. I’m also alarmed by how easily you can order an attack!

Tool of the week: CaptureBox


Subscribe Now

- Leave a review for our podcast & we'll put you in the running for a pack of cards

- Follow the Inside Out Security Show panel on Twitter @infosec_podcast

- Add us to your favorite podcasting app:

[Podcast] Christina Morillo, Enterprise Information Security Expert

If you want to be an infosec guru, there are no shortcuts to the top. And enterprise information security expert, Christina Morillo knows exactly what that means.

When she worked at the help desk, she explained technical jargon to non-technical users. As a system administrator, Christina organized and managed AD, met compliance regulations, and completed entitlement reviews. Also, as a security architect, she developed a comprehensive enterprise information security program. And if you need someone to successfully manage an organization’s risk, Christina can do that as well.

In our interview, Christina Morillo revealed the technical certificates that helped jumpstart her infosec career, described work highlights, and shared her efforts in bringing a more accurate representation of women of color in tech through stock images.




Transcript

Cindy Ng: Christina Morillo has been in the security space since long before automation and actual data became the industry’s “it” words. She has been helping organizations advance their infosec and insider threat programs through her deep technical expertise in centralizing disparate systems, strengthening and automating tasks, and translating complex issues between business and IT stakeholders. In our interview, Christina highlights hallmarks of her career, turning points in the industry, and how she worked her way to the top.

Cindy Ng: So, you’ve been in the security space for almost 20 years, and you’ve seen the field transform from something that people didn’t really know much about into something that appears on the front page almost regularly. And I wanted to go back in time and have you tell us how you got started in the security business.

Christina Morillo: So, I actually got started in the technology industry about 18 years ago, and out of that, in security, I’ve been like 11 to 12 years. But I pretty much got started from the ground up while I was attending university. I actually got a job doing technical support for, at the time, Compaq computers. So I’m aging myself right there. But back when Compaq computers were really popular, I worked for a call center, and we did 24-hour technical support. And that’s where I kind of learned all of my troubleshooting skills: being able to walk someone through restarting their computer, installing an update, installing a patch, and being able to articulate technical jargon in a nontechnical format. Then from there, I moved on to doing more desktop support. I wanted to get away from the call center environment and be in, like, an enterprise environment where I was the support person, so I could get that user interaction. So that’s where my journey started. It feels like yesterday, but it’s been a long time.

Cindy Ng: It goes by quickly, and how did you get started at Swiss Re?

Christina Morillo: When I came back home from university (I’m originally from New York City), I was looking for work. And I really wanted to get into doing IT within the financial services industry, because I knew that would be a good strategic move for my professional career. I bumped into this recruiter, and he told me about a position at Swiss Re within their capital management investment division. And so I gave it a go even though I didn’t have the experience. You know, I took a shot. And they really liked the fact that I had prior experience with Active Directory and networking. I was very much hands-on and had just taken some Microsoft certifications, so I was really into it, and I was able to answer the questions really efficiently. They liked me, so they gave me the shot. That’s what started me in the world of information security, and identity and access management, and access control. I learned all my, I’ll call it “manual foundation,” my manual fundamentals, at Swiss Re.

Cindy Ng: Would you say that your deep understanding of AD was an important part of your career?

Christina Morillo: Oh, absolutely. Absolutely.

Cindy Ng: And what do most sysadmins get wrong when it comes to their understanding of AD?

Christina Morillo: There is a lot to do with the whole permissioning and file structure. A lot of times people don’t really go into the differences between share permissions and NTFS permissions. And it can get really complex really fast. Especially when you’re learning in school, you create your environment, right? So it’s very clean. But when you start at a company, you’re looking at years of buildup. So you go into these environments where it’s nowhere near what you learned at school. So you’re just like, oh my goodness. And it becomes really overwhelming very quickly. I think it’s, like, not having that deep understanding and deep knowledge, and just kind of taking short routes. Because we’re very busy during the day, and there’s a lot to do, right? Especially for sysadmins. They have a lot on their plates. So I think a lot of times it’s like, okay, use your own backlist. Just throw them in whatever group, we’ll fix it later. And later never comes. I don’t fault them, but I just think that we need to be a little bit more diligent with understanding structures and fundamentals.
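The share-versus-NTFS interplay Christina describes trips people up because, over the network, the effective permission is the more restrictive of the two layers. Here is a minimal sketch of that rule; this is an illustrative model, not a query against Windows itself.

```python
# Illustrative model of Windows access over a network share: the effective
# level is the MORE RESTRICTIVE of the share permission and the NTFS permission.
LEVELS = {"none": 0, "read": 1, "modify": 2, "full": 3}

def effective_access(share_perm: str, ntfs_perm: str) -> str:
    """Return the effective permission when a folder is reached via a share."""
    rank = min(LEVELS[share_perm], LEVELS[ntfs_perm])
    return next(name for name, level in LEVELS.items() if level == rank)

# A common surprise: the share grants Everyone full control, but NTFS only
# grants read -- users still end up with read over the network.
print(effective_access("full", "read"))    # read
print(effective_access("read", "modify"))  # read
```

Locally, only NTFS applies; the share layer matters only for network access, which is one reason audits of the two layers drift apart over time.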

Cindy Ng: How did you spend time figuring out how to restructure a certain group, if that was an important part of your job or your team’s?

Christina Morillo: Yeah. Of course, absolutely. I always want to because it makes my life easier. But, you know, you’re not always able to. And that’s because, like I said, it’s so complex, and there’s so many layers that peeling these layers back will cause chaos. So sometimes you have to prioritize. And just from like a business perspective you have to prioritize. You know, is this something that we can do gradually or look at setting up as a project and completing it in phases, or is it high-priority, right?

And so, the first thing I do is talk to whoever owns the group, or let’s say whatever specific department, like finance. Who approved access to this group? I like to determine that and then work my way backwards. So, okay, if this is the owner of the group, then I like to ask, “Who should get access to this group? What kind of access do they need? Do they need read-only access, or do they need modify access?” And then go from there. And who should be the initial members of the group? A lot of times it’s a matter of having to recreate the group. So create a fresh group, add the individual users, read-write or modify, or read-only, and then migrate them into the group, and then delete the old group. That part can take time because you don’t know what you’re touching.

A lot of times people like to permission groups at different levels where they don’t belong. The worst thing that can happen is you cause an outage, and you never want that. So you kind of investigate, using tools like DatAdvantage to help with the investigation and better understand what you’re doing before you do it. So it’s a process. I mean, I wouldn’t say it’s easy. That’s why, a lot of times, it’s put on the back burner. But, you know, I feel like it’s something that has to be done.
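The recreate-and-migrate sequence Christina walks through (create a fresh group, add only vetted members, repoint the folder, then retire the legacy group) can be sketched with a toy in-memory directory. The `Directory` class and every name in it are invented for illustration; real AD work would go through LDAP or PowerShell, not this shape.

```python
# Toy in-memory directory, invented for illustration only.
class Directory:
    def __init__(self):
        self.groups = {}   # group name -> set of member users
        self.acls = {}     # folder path -> {group name: access level}

    def create_group(self, name):
        self.groups[name] = set()
        return name

    def add_member(self, group, user):
        self.groups[group].add(user)

    def grant(self, folder, group, access):
        self.acls.setdefault(folder, {})[group] = access

    def revoke(self, folder, group):
        self.acls.get(folder, {}).pop(group, None)

    def delete_group(self, group):
        del self.groups[group]

def restructure_group(d, old_group, folder, approved_members, access):
    """Replace a messy legacy group on a folder with a clean, owner-approved one."""
    new_group = d.create_group(f"{folder}-{access}")
    for user in approved_members:        # only users the data owner signed off on
        d.add_member(new_group, user)
    d.grant(folder, new_group, access)   # add the new group first...
    d.revoke(folder, old_group)          # ...then remove the old one, so access never gaps
    d.delete_group(old_group)            # retire the legacy group last
    return new_group
```

The ordering is the point: granting the new group before revoking the old one avoids the outage window she warns about.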

Cindy Ng: And your next role, which was at Alliance Bernstein?

Christina Morillo: So at Alliance Bernstein, that was a short-term contract. I was part of their incident response and security team. 50% of the time I was handling tickets: approving FTP access, approving firewall access, checking antivirus scans, making sure our AV was up to date, and doing all that stuff.

And then the other 50% was working on identity management and, like, onboarding applications into the system and testing. And then training the team that would handle day to day support. So it’s like a level two, level three. And then defining the processes. You know, onboarding the applications, defining the processes, writing the documentation, and then handing over to the support team to take over from there. So it was a lot of conversation with stakeholders, application owners, and I really appreciated being able to be a part of those processes.

That’s why I started seeing more of the automation. I mean, at Swiss Re, we were very much manual for the first couple of years. Which was fantastic because, you know, although it was a pain, it was fantastic because I got to understand how to do things if the system was down. It gave me that understanding of like ‘Oh, I know how to generate a manual report.’ So when it came time to automate, I was like, ‘Oh. Okay, this is nothing. I understand the workflow,’ right? I can create a workflow quickly, or I can… I understand what we need, right? And it also helps when people are just like, “That’s gonna take four days.” I’m like, “Absolutely not. That’s going to take you 45 minutes.” So it was a great experience.

Cindy Ng: Would you ever buffer in time if systems went down? I’m thinking about something like ransomware.

Christina Morillo: Thankfully, that never happened while I was at these companies. That never happened, and it never hit my team. I think I’ve always been more on the preventative rather than the reactive side. A lot of times you did have to react to different situations or work in tandem with other teams, but I’m really into, like, prevention. Like, how can we minimize risk? How can we prevent this from happening? Kind of thinking out of the box that way. You can’t be an entirely optimistic person. Like, you have to be like, well, this can happen if we leave that open. Right? And it’s not even meant to sound negative, but you have to have that approach because you have to understand adversaries and hackers. How do they think? What would I want to do? Right? Like, if I see a door unlocked. It’s almost like you’re on the edge, and you have to think that way, and you have to look at problems a little bit differently, because people in the business don’t think that way; they just want to do their work.

Cindy Ng: Did you develop that skill naturally, was it innate, or did you realize, ‘Oh my God, I need to start thinking a certain way’? The business isn’t gonna care about it; that’s why you’re responsible for it.

Christina Morillo: I think I’ve always had that skill set, but I think that I developed it more throughout my career. Like, added strength in that skill throughout my career. Because when you’re starting, especially with network administration and sysadmin stuff, you have to be the problem solver. So you have to be on the lookout for problems. Because that’s, like, your job, right? So there’s a problem, you fix it. There’s a problem, you fix it. So, a lot of times, just to make your job a little bit easier, you have to almost have to anticipate a problem. You have to say, ‘Oh, if that window’s open and if it rains, the water’s gonna get in. So let’s close the window before it rains!’ It sounds intuitive, but a lot of times people just don’t think that far ahead.

I think it’s just a matter of the longer I remain in the industry, the more I see things changing. And then you just have to evolve. So you always have to think about being one or two steps ahead, when you can. And I think that skill set comes with time. You just have to prepare. And also, like, the more you know… Like, I’m very big on education and training and learning even if it’s not specific to my job. I feel like it helps broaden my perspective. And it helps me with whatever work I’m doing. I’m always taking either, like, a Javascript class or some class, or just like a fun in web development class. I’ve been looking for a Python class. Like, I did a technical cert, like boot camp. Like, I’m preparing for a cert. But it’s a lot. But I also take ad-hoc stuff. Like I’ll take a calligraphy class, just to kind of balance it out. You know I’ll go to different talks at the 92nd Street Y. Whether it’s technology related or just, like, futurism related, or just innovation related. Or something completely different.

Cindy Ng: I’ve read your harrowing story about taking a class at General Assembly while having kids and a husband. Oh my God, you are so amazing. It’s so inspiring.

Christina Morillo: Definitely hard. But, you know, you gotta do what you gotta do. And it’s a problem because when you become a parent, it doesn’t mean that you lose your ambition. It just kind of goes on a temporary hold. But then you when you remember, you’re like ‘Oh, wait a minute. No. I have to get back to it.’

Cindy Ng: So let’s talk about Fitch Ratings. That role is really interesting.

Christina Morillo: Yeah, yeah. Thus far, it’s been one of my favorites. Because, at Fitch, I was actually able to deploy an identity and access management platform. So, from nothing, I got to create something completely new and deploy it globally, right? What that means is that I changed the HR onboarding and offboarding processes. So, like, how new hires are added to the system. How people that are terminated are removed from the system. How employees request access to different applications. How managers approve. How authorizers approve the entire workflow. So that was amazing.

Basically, when I started, they wanted to go from a pretty decentralized to a centralized model, so they had purchased this out-of-the-box application. They had a lot of transitions, so they needed someone to come in and own the application and say, like, “Okay, let me implement it.” It was just on a development server, not fully configured. So my job was to come in and look at the use cases, look at what they needed, at least initially. What needed to happen? How did they need to use this application? Then I needed to understand the business processes. How do they perform this work today? Like, does the help desk do it? Does a developer give access to a specific application that they manage? What do they do it for? What happens now?

So I took time to understand all of the processes. Right? Like, I spoke to everyone. I spoke to HR. I spoke to finance. I spoke to legal. I spoke to compliance. I spoke to the help desk. I spoke to network administration. I spoke to application developers. To compile all of that information in order to better create the use-cases and the workflows, and to kind of flesh them out. Then what I did is I started building and automating these processes in that tool, on that platform.

My boss gave me… He said, “Oh, I’ll give you like a year.” And I was like, “Okay. Fine.” But once I got into the thick of things, I got really aggressive, and I was really hard on the vendor. Because I was a team of one. You know, I had support from our internal app team, and the network administration team, and the sysadmins. But I completely owned the process, and owned the application, and owned building it out. So I rode the vendor like crazy just to get this done, to understand it, and to look at it from top to bottom and bottom to top. And we were able to deploy it in five months.

You know, I got them from sending emails and creating help desk tickets to a fully automated system for onboarding, offboarding, and requesting entitlements. But more importantly, I was able to get people on board. Because that’s one of the other big things that you don’t really discuss. A lot of times we got a lot of pushback. What we do is extremely important, especially in security, but sometimes we’re not the ones that are the most liked. People are afraid, right? So it’s also about developing new relationships with your constituents, with the users, right? And helping them understand that you’re not trying to make their lives miserable, you’re just getting them on board. I think that also takes skill. It takes finesse. It takes being able to speak to people, relate to people. And also, it takes being able to listen at scale. Right? So you have to listen to understand.

You know, I think if a lot of us did more listening and less talking, we would definitely understand where people are coming from and be able to kind of come up with solutions. I mean, you’re not always gonna make people happy. Maybe some of the time. Not all of the time. But at least you’ve communicated, and they can respect you for that. Right? So I was able to get pretty much the entire company on board. And to welcome this tool that they had heard about for so long. And they weren’t hesitant. To the point where I couldn’t get them to leave me alone about it.

Cindy Ng: You were able to help them realize that they could still do their work, but do it securely.

Christina Morillo: And better.

Cindy Ng: When you say scared and concerned, what were they worried about?

Christina Morillo: When you say the word “automation,” the main worry is that people are gonna lose their jobs. When someone says, “Oh, I heard that the tool will allow you to onboard a user,” people won’t need to call the help desk anymore for that, or won’t need help with that. Then you’re taking away a piece or a portion of their work, and that may affect their productivity. And if it affects their productivity, it will affect the money that the team or the department gets. If that happens, then, obviously, we don’t need ten help desk people. We only need five. Right?

So, pretty much, it’s like fear of losing their jobs or fear that they’re becoming obsolete. So that’s usually the biggest one. And also when there’s, like, a new person coming in asking you how do you do your work, what is the process, that’s kind of scary. “Why do you want to know? Are you taking over? Are you trying to take away my work?” You’re always going to get push back. I think that’s part of the job, especially when you’re in security. You’re just always going to. And, you know, people fear what they don’t understand. So that’s part of it too.

Cindy Ng: Let’s talk about Morgan Stanley now. So at this point, you’re at a more strategic level, where you’re really helping entire teams manage risk?

Christina Morillo: Yeah. So while I was at Fitch and, you know, while I loved it, it became more of a sysadmin type of role. So I decided to begin looking for my next opportunity, and Morgan Stanley came up that summer. And I looked at it as a great opportunity for me to operate at a more strategic level and become a middleman, right? Almost like a business analyst, where I’m understanding what the business needs and then liaising on the technology side. So I thought it would be a good opportunity to hone that skill set on the business side and look at the value proposition. But also, because of my technical background, I’d be able to communicate with and get things done on the tech side.

So that was amazing. I mean, I learned a lot about how the business and IT engage. What’s important, and how to present certain, I guess, calls for action. Like, if you need something done, like, oh, you implement a new DLP solution. Are you solving a problem for the business or are you solving a problem for technology? Understanding the goal. Understanding your approach. And looking at things two ways. Looking at how to resolve a problem tactically. How can we resolve this issue today? And then what is the strategic or long-term solution? So a lot of business-speak, a lot of how to present.

I think I would almost equate it to… My time at Morgan Stanley… And I’m no longer at Morgan Stanley, actually. But my time at Morgan Stanley I equated to getting a mini-MBA because it really prepared me and allowed me to think differently. I think, you know, when you’re in technology you tend to stay in your tech cocoon. And that’s all you want to do and talk about. But understanding how others think about it, even how project managers engage with a business. The business is just thinking about risk, and how to minimize risk, and how they can do their jobs and make money. Because, at the end of the day, that’s what the goal is, right? Yeah, it allowed me to understand that. Whereas normally, on the tech side, I never really had to deal with that or face it. So I didn’t think about it. But at Morgan, you have to think about it, and you have to create solutions around it.

Cindy Ng: Also, IT’s often seen as, like, a call center rather than a money generator.

Christina Morillo: I’ve always had an issue with that. Even though IT, like, we’re seen as a call center, without us… And I’m biased, obviously… But I feel like without us, you wouldn’t be able to function. At the end of the day, are we generating money? I think so. But then it goes into that whole chicken-or-the-egg thing. But that’s my argument, and I guess I’m biased. I’ve always been in IT, right?

Cindy Ng: What’s most important to the business? Is it always about the bottom line? For IT people, it’s always about security and minimizing risk.

Christina Morillo: It is about the bottom line. But there are many avenues to get there more efficiently, or just a little bit smarter. It’s like working smarter. And I think one of the ways is by listening at scale. Just like if you’re starting a company and providing a service, you need to understand who your target market is, right? You need to understand what they want and why they want it. And that’s how you know what service you can provide or how you can tailor your offering to them. Why? Because then they will buy it from you, or they will seek services from you. And what does that mean? That means you get to collect that money.

And sometimes you need, like, a neutral group. You know? Like a working group. I realized they have a lot of working groups. So a lot of discussion. Sometimes that can be good and bad, but I see it as more of a positive thing. And the reason why is because you need to be able to hear from both sides, right? Both sides need to be able to express themselves, and everyone needs to be on the same page or get to that same page somehow. You need to understand what I need as a business user. I need to be able to book a trade, or I need to be able to do this, and I need to do it in this amount of time. Now how can you help me? And then the IT person, or the security person, whoever, needs to be able to say, “Okay. Well, this is what I can do, and this is what I cannot do right now. But maybe this is what I can do in the future.”

Again, it goes back to that we are problem solvers. So we’re all about solutions and how to keep the business afloat and keep the business running and operating. That’s our job. We’re not there to say we have to do it this way. That’s not what we’re there for. So I think it’s also understanding what role everyone plays, and understanding that we all have to kind of like work together to get to that common goal.

Let’s say we have a working group about implementing Varonis DataPrivilege globally, right? So then you have stakeholders from every department, or every department that it would touch. If that means the security team is going to be involved, we have a representative from the security team. If that means the project manager who’s managing the project is gonna be involved, we have someone from that team. So you pretty much have a representative from each team that it will affect. Including the business, at times, so that they’re aware of what’s going on. And then you have status updates on what’s going on. What do we need? Where are the blockers? And people get to speak, and people get to brainstorm, and you get to bring up problems, and what you need from the other team, and what they need from you. It just helps with getting projects moving and getting things going more quickly and efficiently, without anyone feeling like they weren’t represented in the decision-making process. It speaks to that as well.

Cindy Ng: Before our initial conversation, I had no idea that you used DatAdvantage.

Christina Morillo: My last employer, they used DatAdvantage, and were also implementing portions of DataPrivilege. The company before that, Fitch, we used DatAdvantage heavily. So, like, reporting. You know, it’s been a couple of years, so I don’t know if they still use the tool. But I know when I was there, I actually used it for reporting purposes, to help me generate reports, and to do, like, investigations and other rule-based stuff.

Cindy Ng: Was it helpful for, like, SOX compliance?

Christina Morillo: Yeah. Yeah, whether it was internal or external audits, we always got the call. Like, “Can you tell me who had access to this group on such-and-such date?” or, “Can you tell me when this access was removed?” or, “Can you tell me this?” Just weird ad-hoc requests. Which make sense, right? But at the time, you’re like, ‘Why do you need this?’ Being able to kinda quickly generate the report was, like, super helpful.

Cindy Ng: And finally, I love what you do with the Women of Color in Tech chat.

Christina Morillo: Yeah, yeah. A friend of mine, Stephanie Morillo… no relation, just the same last name… but we both work in tech. And in 2015, we decided to co-found a grassroots initiative to help women of color, non-binary folks, and other under-represented people in technology have a voice and a community. We started off as Twitter chats. So we would have weekly or bi-weekly Twitter chats. Just conversations, conversations with the community.

And then we started getting contacted by different organizations. So they wanted to sponsor some of our community members to attend conferences, and just different discussions and meetups and events. So we started to do that. We also did, like, a monthly job newsletter, where companies, like Twitter and Google, they contacted us. Then we worked with them. We kind of posted different positions they were recruiting for and shared it directly with our community.

And then, the thing we’re most known for is the Women of Color in Tech stock photos, which is basically a collection of open-source stock photos featuring women and non-binary folks of color who work in technology. The goal was to give those photos out for free, to open-source them, so that people could have better imagery, right? Because we felt that representation mattered. The way that came about was when I was building the landing page for the initiative, I realized that I couldn’t find any photos of women who look like me who work in technology. And it made me really upset. Right? And I feel like that anger activated something within me, and maybe it came out as a rant. Like, I was just, like, “Okay, Getty, don’t you have photos of women in tech who look like me?” Why was every photo, whether of a white or Asian woman or whoever, a woman with a computer or an iPad looking like she’s just playing around with it? Those are the pictures that I was seeing. This is not what I do. This is not what I’ve done. So I just felt like I wasn’t represented. And if I wasn’t represented, countless other folks weren’t as well.

I spoke to a photographer friend of mine who also works in tech; photography started as his side passion. So he agreed, and we just kind of started out. I mean, we went with the flow. It turned out amazing. And we released the photos. We open-sourced them, and we got a lot of interest, a lot of feedback, a lot of features, a lot of reporting on it. And we decided to go for another two rounds. You know, a lot of companies we talked to were like, “We want to be a part of this. This is amazing. How can we support you?” So a lot of great organizations. If you look at the site, you’ll see the organizations that sponsored the last two photo shoots.

We released a collection of over 500 photos. And we’ve seen them everywhere, from Forbes to the Wall Street Journal. They’re just, like, all over the web. Some of our tech models have gotten jobs because the photos started conversations. Like, “Wait, weren’t you in the Women of Color in Tech photos?” “Yeah, that’s me!” Some people have gotten stopped, like, “Wait a minute, you’re in this photo.” Or they get tagged. They’ve been used at conferences. Some organizations are now using them on their landing pages. They’re all over the place. And that was the goal.

It really, you know, makes us happy, seeing the photos all over the place and the fact that people recognize that those are our photos. It was just amazing. We actually open-sourced our process as well. We released an article about how we got sponsors and what we did, in hopes that other people and organizations would be inspired to replicate the stock photos. But we also get inquiries like, you know, “Are you gonna have another one? Can you guys do another one?” So it’s up in the air. I’m debating it. Maybe.

Data Security Compliance and DatAdvantage, Part II: More on Risk Assessment

I can’t really overstate the importance of risk assessments in data security standards. It’s really at the core of everything you subsequently do in a security program. In this post we’ll finish discussing how DatAdvantage helps support many of the risk assessment controls that are in just about every security law, regulation, or industry security standard.

Last time, we saw that risk assessments were part of NIST’s Identify category. In short: you’re identifying the risks and vulnerabilities in your IT system. Of course, at Varonis we’re specifically focused on sensitive plain-text data scattered around an organization’s file system.

Identify Sensitive Files in Your File System

As we all know from major breaches over the last few years, poorly protected folders are where the action is for hackers: they’ve been focusing their efforts there.

The DatAdvantage 2b report is the go-to report for finding sensitive data across all folders, not just ones with global permissions that are listed in 12l. Varonis uses various built-in filters or rules to decide what’s considered sensitive.

I counted about 40 or so such rules, covering credit card, social security, and various personal identifiers that are required to be protected by HIPAA and other laws.
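Varonis doesn’t publish its exact rules, but the underlying technique is pattern-matching file contents against known identifier formats. Below is a simplified sketch with two made-up rules; real classifiers pair patterns like these with validation (such as Luhn checks) and nearby keywords to cut false positives.

```python
import re

# Two deliberately simplified classification rules -- not the product's real filters.
RULES = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # e.g. 078-05-1120
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),  # 16 digits, optional separators
}

def classify(text: str) -> set:
    """Return the names of every rule that matches the given file contents."""
    return {name for name, pattern in RULES.items() if pattern.search(text)}

print(sorted(classify("Card: 4111 1111 1111 1111, SSN: 078-05-1120")))
# ['credit_card', 'ssn']
```

A scanner built this way walks the file system, runs `classify` over each readable file, and flags any folder where a rule fires.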

In the test system on which I ran the 2b report, the \share\legal\Corporate folder was snagged by the aforementioned filters.

Identify Risky and Unnecessary Users Accessing Folders

We now have a folder that is a potential source of data security risk. What else do we want to identify?

Users that have accessed this folder is a good starting point.

There are a few ways to do this with DatAdvantage, but let’s just work with the raw access audit log of every file event on a server, which is available in the 2a report. By adding a directory path filter, I was able to narrow down the results to the folder I was interested in.

So now we at least know who’s really using this specific folder (and its sub-folders). Oftentimes this is a far smaller pool of users than has been enabled through the group permissions on the folders. In any case, this should be the basis of a risk assessment discussion to craft more tightly focused groups for this folder and set an owner who can then manage the content.

In the Review Area of DatAdvantage, there’s more graphical support for finding users accessing folders, the percentage of the Active Directory group who are actually using the folder, as well as recommendations for groups that should be accessing the folder. We’ll explore this section of DatAdvantage further below.

For now, let’s just stick to the DatAdvantage reports since there’s so much risk assessment power bundled into them.

Another similar discussion can be based on using the 12l report to analyze folders that contain sensitive data but have global access – i.e., include the Everyone group.

There are two ways to think about this very obvious risk. You can remove the Everyone access on the folder. This can and likely will cause headaches for users. DatAdvantage conveniently has a sandbox feature that allows you to test this.

On the other hand, there may be good reasons the folder has global access, and perhaps there are other controls in place that would (in theory) help reduce the risk of unauthorized access. This is a risk discussion you’d need to have.

Another way to handle this is to see who’s copying files into the folder — maybe it’s just a small group of users — and then establish policies and educate these users about dealing with sensitive data.

You could then go back to the 1A report, and set up filters to search for only file creation events in these folders, and collect the user names (below).

Who’s copying files into my folder?

After emailing this group of users with followup advice and information on copying, say, spreadsheets with credit card numbers, you can run the 12l reports the next month to see if any new sensitive data has made its way into the folder.

The larger point is that the DatAdvantage reports help identify the risks and the relevant users involved so that you can come up with appropriate security policies — for example, least-privileged access, or perhaps looser controls but with better monitoring or stricter policies on granting access in the first place. As we’ll see later on in this series, Varonis DatAlert and DataPrivilege can help enforce these policies.

In the previous post, I listed the relevant controls that DA addresses for the core identification part of risk assessment. Here’s a list of risk assessment and policy making controls in various laws and standards where DatAdvantage can help:

  • NIST 800-53: RA-2, RA-3, RA-6
  • NIST 800-171: 3.11.1
  • HIPAA:  164.308(a)(1)(i), 164.308(a)(1)(ii)
  • Gramm-Leach-Bliley: 314.4(b),(c)
  • PCI DSS 3.x: 12.1, 12.2
  • ISO 27001: A.12.6.1, A.18.2.3
  • CIS Critical Security Controls: 4.1, 4.2
  • New York State DFS Cybersecurity Regulations: 500.03, 500.06

Thou Shalt Protect Data

A full risk assessment program would also include identifying external threats—new malware, new hacking techniques. With this new real-world threat intelligence, you and your IT colleagues should go back, re-adjust the risk levels you assigned initially, and then re-strategize.

It’s an endless game of cyber cat-and-mouse, and a topic for another post.

Let’s move to the next broad functional category, Protect. One of the critical controls in this area is limiting access to only authorized users. This is easier said than done, but we’ve already laid the groundwork above.

The guiding principles are typically least-privileged access and role-based access controls. In short: give appropriate users just the access they need to do their jobs or carry out their roles.

Since we’re now at a point where we are about to take a real action, we’ll need to shift from the DatAdvantage Reports section to the Review area of DatAdvantage.

The Review Area tells me who’s been accessing the legal\Corporate folder, which turns out to be a far smaller set than has been given permission through their group access rights.

To implement least-privilege access, you’ll want to create a new AD group for just those who really, truly need access to the legal\Corporate folder. And then, of course, remove the existing groups that have been given access to the folder.

In the Review Area, you can select and move the small set of users who really need folder access into their own group.

Yeah, this assumes you’ve done some additional legwork during the risk assessment phase — spoken to the users who accessed the legal\Corporate folder, identified the true data owners, and understood what they’re using this folder for.

DatAdvantage can provide a lot of support in narrowing down who to talk to. So by the time you’re ready to use the Review Area to make the actual changes, you already should have a good handle on what you’re doing.

One other key control, which we’ll discuss in more detail next time, is managing file permissions for folders.

Essentially, that’s where you find and assign data owners, and then ensure there’s a process going forward that allows the owner to decide who gets access. We’ll show how Varonis plays a key role here through both DatAdvantage and DataPrivilege.

I’ll leave you with this list of least permission and management controls that Varonis supports:

  • NIST 800-53: AC-2, AC-3, AC-6
  • NIST 800-171: 3.1.4, 3.1.5
  • PCI DSS 3.x: 7.1
  • HIPAA: 164.312 a(1)
  • ISO 27001: A.6.1.2, A.9.1.2, A.9.2.3
  • CIS Critical Security Controls: 14.4
  • New York State DFS Cybersecurity Regulations: 500.07

[Podcast] Evolving Bank Security Threats



It was only last week that we applauded banks for introducing cardless ATMs in an effort to curb financial fraud. But with the latest bank heists, it may help to turn up both the offense and the defense. Why? Hackers were able to drill a hole, connect a wire, and cover it up with a sticker, and the ATM would automatically and obediently dispense thousands. Another group of enterprising hackers changed a bank’s DNS records, taking over its website and mobile sites and redirecting customers to phishing sites.

But let’s be honest and realistic. Bank security is no easy feat. Banks are complicated systems with a large attack surface to defend, whereas attackers only need to find one vulnerability, apply some technical expertise, and decide when and how the attack happens. Moreover, they don’t have to worry about bureaucracy, meeting compliance requirements, or following laws. The bottom line is that attackers have more flexibility and are more agile.

In addition to evolving bank security threats, we also covered the following:

Tool of the week: ngrok, secure introspected tunnels to localhost


Subscribe Now

- Leave a review for our podcast & we'll put you in the running for a pack of cards

- Follow the Inside Out Security Show panel on Twitter @infosec_podcast

- Add us to your favorite podcasting app:

What is a Data Security Platform?


A Data Security Platform (DSP) is a category of security products that replaces traditionally disparate security tools.

DSPs combine data protection capabilities such as sensitive data discovery, data access governance, user behavior analytics, advanced threat detection, activity monitoring, and compliance reporting, and integrate with adjacent security technologies.

They also provide a single management interface to allow security teams to centrally orchestrate their data security controls and uniformly enforce policies across a variety of data repositories, on-premises and in the cloud.

Data Security Platform (DSP)

Adapted from a figure used in the July 2016 Forrester report, The Future Of Data Security And Privacy: Growth And Competitive Differentiation.

The Rise of the Data Security Platform

A rapidly evolving threat landscape, rampant data breaches, and increasingly rigorous compliance requirements have made managing and protecting data more difficult than ever. Exponential data growth across multiple silos has created a compound effect that has made the disparate tool approach untenable. Siloed tools often result in inconsistently applied data security policies.

Many organizations are finding that simply increasing IT security spend doesn’t necessarily correlate to better overall data security. How much you spend isn’t as important as what you spend it on and how you use what you buy.

“Expense in depth” hasn’t been working. As a result, CISOs are aiming to consolidate and focus their IT spend on platforms over products to improve their enterprise-wide security posture, simplify manageability, streamline processes, and control costs.

According to Gartner, “By 2020, data-centric audit and protection products will replace disparate siloed data security tools in 40% of large enterprises, up from less than 5% today.”

(Source: Gartner Market Guide for Data-Centric Audit and Protection, March 21, 2017).

What are the benefits of a Data Security Platform?

There are clear benefits to consolidation that hold true across all facets of technology, not just information security:

  • Easier to manage and maintain
  • Easier to coordinate strategy
  • Easier to train new employees
  • Fewer components to patch and upgrade
  • Fewer vendors to deal with
  • Fewer incompatibilities
  • Lower costs from retiring multiple point solutions

In information security, context is king. And context is enhanced drastically when products are integrated as part of a unified platform.

As a result, the benefits of a Data Security Platform are pronounced:

  • By combining previously disparate functions, DSPs have more context about data sensitivity, access controls, and user behavior, and can therefore paint a more complete picture of a security incident and the risk of potential breaches.
  • The total cost of ownership (TCO) is lower for a DSP than for multiple, hard-to-integrate point solutions.
  • In general, platform technologies have the flexibility and scalable architecture to accommodate new data stores and add new functionality when required, making the investment more durable.
  • Maintaining compatibility between multiple data security products can be a massive challenge for security teams.
    • DSPs often result in an OpEx reduction because the security teams are dealing with fewer vendors and maintaining, tuning, and upgrading fewer products.
    • CapEx reduction from retiring point solutions
  • CISOs want to be able to apply their data security strategy consistently across data silos and easily measure results.

Why context is essential to threat detection

What happens when your tools lack context?

Let’s take a standalone data loss prevention (DLP) product as an example.

Upon implementing DLP it is not uncommon to have tens of thousands of “alerts” about sensitive files. Where do you begin? How do you prioritize? Which incident in the colossal stack represents a significant risk that warrants your immediate, undivided attention?

The challenge doesn’t stop here. Pick an incident/alert at random – the sensitive files involved may have been auto-encrypted and auto-quarantined, but what comes next? Who has the knowledge and authority to decide the appropriate access controls? Who are we now preventing from doing their jobs? How and why were the files placed here in the first place?

DLP solutions by themselves provide very little context about data usage, permissions, and ownership, making it difficult for IT to proceed with sustainable remediation. IT is not qualified to make decisions about accessibility and acceptable use on its own; even if it were, it is not realistic to make these kinds of decisions for each and every file.

You can see a pattern forming here – with disparate products we often end up with excellent questions, but we urgently need answers that only a DSP can provide.

Which previously standalone technologies does a Data Security Platform include?

  • Data Classification & Discovery
    • Where is my sensitive data?
    • What kind of sensitive, regulated data do we have? (e.g., PCI, PII, GDPR)
    • How should I prioritize my remediation and breach detection efforts? Which data is out of scope?
  • Permissions Management
    • Where is my sensitive data overexposed?
    • Who has access to sensitive information they don’t need?
    • How are permissions applied? Are they standardized? Consistent?
  • User Behavior Analytics
    • Who is accessing data in abnormal ways?
    • What is normal behavior for a given role or account?
    • Which accounts typically run automated processes? Which access critical data? Executive files and emails?
  • Advanced Threat Detection & Response
    • Which data is under attack or potentially being compromised by an insider threat?
    • Which user accounts have been compromised?
    • Which data was actually taken, if any?
    • Who is trying to exfiltrate data?
  • Auditing & Reporting
    • Which data was accessed? By whom? When?
    • Which files and emails were accessed or deleted by a particular user?
    • Which files were compromised in a breach, by which accounts, and exactly when were they accessed?
    • Which user made this change to a file system, access controls or group policy, and when?
  • Data Access Governance
    • How do we implement and maintain a least privilege model?
    • Who owns the data? Who should be making the access control decisions for each critical dataset?
    • How do I manage joiners, movers, and leavers so only the right people maintain access?
  • Data Retention & Archiving
    • How do we get rid of toxic data that we no longer need?
    • How do we ensure personal data rights (right to erasure & to be forgotten)?

Analyst Research

A number of analyst firms have taken note of the Data Security Platform market and have released research reports and market guides to help CISOs and other security decision-makers.

Forrester’s “Expense in Depth” Research

In January 2017, Forrester Consulting released a study commissioned by Varonis, The Data Security Money Pit: Expense in Depth Hinders Maturity, which shows that a candy-store approach to data security may actually hinder data protection. It also explores how a unified data security platform could give security professionals the protection capabilities they desire, including security analytics, classification, and access control, while reducing costs and technical challenges.

The study finds that a fragmented approach to data security exacerbates many vulnerabilities and challenges, and 96% of respondents believe a unified approach would benefit them, including preventing and responding more quickly to attempted attacks, limiting exposure, and reducing complexity and cost. The study goes on to highlight specific areas where enterprise data security falls short:

  • 62% of respondents don’t know where their most sensitive unstructured data resides
  • 66% don’t classify this data properly
  • 59% don’t enforce a least privilege model for access to this data
  • 63% don’t audit use of this data and alert on abuses
  • 93% suffer persistent technical challenges with their current data security approach

Point products may mitigate specific threats, but when used tactically, they undermine more comprehensive data security efforts.

According to the study, “It’s time to put a stop to expense in depth and wrestling with cobbling together core capabilities via disparate solutions.”

Almost 90% of respondents desire a unified data security platform. Key criteria to include in such a platform as selected by the survey respondents include:

  • Data classification, analytics and reporting (68% of respondents)
  • Meeting regulatory compliance (76% of respondents)
  • Aggregating key management capabilities (70% of respondents)
  • Improving response to anomalous activity (68% of respondents)

Forrester concludes:

Forrester on Data Security Platforms

Gartner’s DCAP Market Guide

Gartner released the 2017 edition of their Market Guide for Data-Centric Audit and Protection. The guide’s summary concisely describes the need for a platform approach to data security:

Gartner on Data-Centric Audit and Protection

Gartner recommends that organizations “implement a DCAP strategy, and ‘shortlist’ products that orchestrate data security controls consistently across all silos that store the sensitive data.” Further, the report advises, “A vendor’s ability to integrate these capabilities across multiple silos will vary between products and also in comparison with vendors in each market subsegment. Below is a summary of some key features to investigate:”

  • Data classification and discovery
  • Data security policy management
  • Monitoring user privileges and data access activity
  • Auditing and reporting
  • Behavior analysis, alerting and blocking
  • Data protection

Demo the Varonis Data Security Platform

The Varonis Data Security Platform (DSP) protects enterprise data against insider threats, data breaches and cyberattacks by analyzing content, accessibility of data and the behavior of the people and machines that access data to alert on misbehavior, enforce a least privilege model and automate data management functions. Learn more about the Varonis Data Security Platform →

What customers are saying about the Varonis Data Security Platform

City of San Diego on the Varonis Data Security Platform

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Varonis Data Security Platform Listed in Gartner 2017 Market Guide for Data-Centric Audit and Protection

In 2005, our founders had a vision to build a solution focused on protecting the data organizations have the most of and yet know the least about – files and emails.  Executing on this vision, Varonis has built an innovative Data Security Platform (DSP) to protect enterprise data against insider threats, data breaches and cyberattacks.

To this end, we are pleased to be listed as a representative vendor in Gartner’s 2017 Market Guide for Data-Centric Audit and Protection (DCAP) for the capabilities found within our DSP.

According to Gartner, “By 2020, data-centric audit and protection products will replace disparate siloed data security tools in 40% of large enterprises, up from less than 5% today.”

“Traditional data security approaches are limited because the manner in which products address policy is siloed, and thus the organizational data security policies themselves are siloed,” Gartner said in the guide. “The challenge facing organizations today is that data is pervasive and does not stay in a single silo on-premises, but is compounded by the use of cloud SaaS or IaaS. There is a critical need to establish organization wide data security policies and controls based upon Data Security Governance (DSG).”

Gartner recommends that organizations “implement a DCAP strategy, and ‘shortlist’ products that orchestrate data security controls consistently across all silos that store the sensitive data.” Further, the report advises, “A vendor’s ability to integrate these capabilities across multiple silos will vary between products and also in comparison with vendors in each market subsegment. Below is a summary of some key features to investigate:”

  • Data classification and discovery
  • Data security policy management
  • Monitoring user privileges and data access activity
  • Auditing and reporting
  • Behavior analysis, alerting and blocking
  • Data protection

The Varonis DSP protects enterprise data by analyzing content, accessibility of data and the behavior of the people and machines that access data to alert on misbehavior, enforce a least privilege model and automate data management functions.

Explore the use cases and benefits of a DSP today.

Source: Gartner Market Guide for Data-Centric Audit and Protection, March 21, 2017

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

 

[Podcast] Americans’ Cyber Hygiene



Recently, the Pew Research Center released a report highlighting what Americans know about cybersecurity. The intent of the survey and quiz was to understand how closely Americans are following best practices recommended by cybersecurity experts.

One question on the quiz reminded us that we’re entitled to one free copy of our credit report every 12 months from each of the three nationwide credit reporting companies. The reason behind this offering is that there is so much financial fraud.

And in an effort to curb banking scams, Wells Fargo introduced cardless ATMs: customers log into their app to request an eight-digit code to enter along with their PIN to retrieve cash.

Outside the US, the £1 coin gets a new look and line of defense. It uses an Integrated Secure Identification System, which gets authenticated at high speed. Plus, it’s harder to counterfeit, and that’s exactly what we want!

Other themes and ideas we covered that weren’t part of the quiz:

Did the Inside Out Security panel – Mike Thompson, Kilian Englert, and Mike Buckbee – pass Pew’s cybersecurity quiz? Listen to find out!


Subscribe Now

- Leave a review for our podcast & we'll put you in the running for a pack of cards

- Follow the Inside Out Security Show panel on Twitter @infosec_podcast

- Add us to your favorite podcasting app:


Practical PowerShell for IT Security, Part III: Classification on a Budget


Last time, with a few lines of PowerShell code, I launched an entire new software category, File Access Analytics (FAA). My 15 minutes of fame are almost over, but I was able to make the point that PowerShell has practical file event monitoring uses. In this post, I’ll finish some old business with my FAA tool and then take up PowerShell-style data classification.

Event-Driven Analytics

To refresh memories, I used the Register-WmiEvent cmdlet in my FAA script to watch for file access events in a folder. I also created a mythical baseline of event rates to compare against. (For wonky types, there’s a whole area of measuring these kinds of things — hits to web sites, calls coming into a call center, traffic at espresso bars — that was started by this fellow.)

When file access counts reach above normal limits, I trigger a software-created event that gets picked up by another part of the code and pops up the FAA “dashboard”.

This triggering is performed by the New-Event cmdlet, which allows you to send an event, along with other information, to a receiver. To read the event, there’s the Wait-Event cmdlet. The receiving part can even be in another script, as long as both event cmdlets use the same SourceIdentifier — Bursts, in my case.

These are all operating systems 101 ideas: effectively, PowerShell provides a simple message passing system. Pretty neat considering we are using what is, after all, a bleepin’ command language.
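As a minimal, standalone sketch of this message-passing pattern (separate from the FAA script below, and using a hypothetical SourceIdentifier of Demo), a sender and receiver only need to agree on that identifier:

```powershell
# Sender: publish an event with a payload (New-Event outputs a PSEventArgs, so suppress it)
New-Event -SourceIdentifier Demo -MessageData "ping" | Out-Null

# Receiver: block until an event with that SourceIdentifier is in the session's event queue
$evt = Wait-Event -SourceIdentifier Demo
$evt.MessageData   # contains "ping"

# Clean up the consumed event
Remove-Event -SourceIdentifier Demo
```

Because events queue within the session, the receiver picks up the message even if it starts waiting after the sender has fired.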

Anyway, the full code is presented below for your amusement.

$cur = Get-Date
$Global:Count=0
$Global:baseline = @{"Monday" = @(3,8,5); "Tuesday" = @(4,10,7);"Wednesday" = @(4,4,4);"Thursday" = @(7,12,4); "Friday" = @(5,4,6); "Saturday"=@(2,1,1); "Sunday"= @(2,4,2)}
$Global:cnts =     @(0,0,0)
$Global:burst =    $false
$Global:evarray =  New-Object System.Collections.ArrayList

$action = { 
    $Global:Count++  
    $d=(Get-Date).DayofWeek
    $i= [math]::floor((Get-Date).Hour/8) 

   $Global:cnts[$i]++ 

   #event auditing!
    
   $rawtime =  $EventArgs.NewEvent.TargetInstance.LastAccessed.Substring(0,12)
   $filename = $EventArgs.NewEvent.TargetInstance.Name
   $etime= [datetime]::ParseExact($rawtime,"yyyyMMddHHmm",$null)
  
   $msg="$($etime): Access of file $($filename)"
   $msg|Out-File C:\Users\bob\Documents\events.log -Append
  
   
   $Global:evarray.Add(@($filename,$etime))
   if(!$Global:burst) {
      $Global:start=$etime
      $Global:burst=$true            
   }
   else { 
     if($Global:start.AddMinutes(15) -gt $etime ) { 
        #File behavior analytics
        $sfactor=2*[math]::sqrt( $Global:baseline["$($d)"][$i])
       
        if ($Global:Count -gt $Global:baseline["$($d)"][$i] + 2*$sfactor) {
         
         
          "$($etime): Burst of $($Global:Count) accesses"| Out-File C:\Users\bob\Documents\events.log -Append 
          $Global:Count=0
          $Global:burst =$false
          New-Event -SourceIdentifier Bursts -MessageData "We're in Trouble" -EventArguments $Global:evarray
          $Global:evarray= [System.Collections.ArrayList] @();
        }
     }
     else { $Global:burst =$false; $Global:Count=0; $Global:evarray= [System.Collections.ArrayList]  @();}
   }     
} 
 
Register-WmiEvent -Query "SELECT * FROM __InstanceModificationEvent WITHIN 5 WHERE TargetInstance ISA 'CIM_DataFile' and TargetInstance.Path = '\\Users\\bob\\' and targetInstance.Drive = 'C:' and (targetInstance.Extension = 'txt' or targetInstance.Extension = 'doc' or targetInstance.Extension = 'rtf') and targetInstance.LastAccessed > '$($cur)' " -sourceIdentifier "Accessor" -Action $action   


#Dashboard
While ($true) {
    $args=Wait-Event -SourceIdentifier Bursts # wait on Burst event
    Remove-Event -SourceIdentifier Bursts #remove event
  
    $outarray=@() 
    foreach ($result in $args.SourceArgs) {
      $obj = New-Object System.Object
      $obj | Add-Member -type NoteProperty -Name File -Value $result[0]
      $obj | Add-Member -type NoteProperty -Name Time -Value $result[1]
      $outarray += $obj  
    }


     $outarray|Out-GridView -Title "FAA Dashboard: Burst Data"
 }

Please don’t pound your laptop as you look through it.

I’m aware that I continue to pop up separate grid views, and there are better ways to handle the graphics. With PowerShell, you do have access to the full .Net framework, so you could create and access objects —listboxes, charts, etc. — and then update as needed. I’ll leave that for now as a homework assignment.

Classification is Very Important in Data Security

Let’s put my file event monitoring on the back burner, as we take up the topic of PowerShell and data classification.

At Varonis, we preach the gospel of “knowing your data” for good reason. In order to work out a useful data security program, one of the first steps is to learn where your critical or sensitive data is located — credit card numbers, consumer addresses, sensitive legal documents, proprietary code.

The goal, of course, is to protect the company’s digital treasure, but you first have to identify it. By the way, this is not just a good idea, but many data security laws and regulations (for example, HIPAA)  as well as industry data standards (PCI DSS) require asset identification as part of doing real-world risk assessment.

PowerShell should have great potential for use in data classification applications. Can PS access and read files directly? Check. Can it perform pattern matching on text? Check. Can it do this efficiently on a somewhat large scale? Check.

No, the PowerShell classification script I eventually came up with will not replace the Varonis Data Classification Framework. But for the scenario I had in mind – an IT admin who needs to watch over an especially sensitive folder – my PowerShell effort gets more than a passing grade, say a B+!

WQL and CIM_DataFile

Let’s now return to WQL, which I referenced in the first post on event monitoring.

Just as I used this query language to look at file events in a directory, I can tweak the script to retrieve all the files in a specific directory. As before I use the CIM_DataFile class, but this time my query is directed at the folder itself, not the events associated with it.

Get-WmiObject -Query "SELECT * From CIM_DataFile where Path = '\\Users\\bob\\' and Drive = 'C:' and (Extension = 'txt' or Extension = 'doc' or Extension = 'rtf')"

Terrific! This line of code outputs an array of CIM_DataFile objects; each object’s Name property holds the full file path.
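Putting those two pieces together (a sketch, assuming the same folder and extensions as the query above), you can feed each returned Name into Get-Content:

```powershell
# Query the folder's files via WMI (same filters as above)
$files = Get-WmiObject -Query "SELECT * FROM CIM_DataFile WHERE Path = '\\Users\\bob\\' AND Drive = 'C:' AND (Extension = 'txt' OR Extension = 'doc' OR Extension = 'rtf')"

# Each result's Name property is the full path; read each file's text in one string
foreach ($f in $files) {
    $text = Get-Content -Path $f.Name -Raw
    # ...pattern matching against $text goes here...
}
```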

To read the contents of each file into a variable, PowerShell conveniently provides the Get-Content cmdlet. Thank you Microsoft.

I need one more ingredient for my script, which is pattern matching. Not surprisingly, PowerShell has a regular expression engine. For my purposes it’s a little bit of overkill, but it certainly saved me time.

In talking to security pros, they’ve often told me that companies should explicitly mark documents or presentations containing proprietary or sensitive information with an appropriate footer — say, Secret or Confidential. It’s a good practice, and of course it helps in the data classification process.

In my script, I created a PowerShell hashtable of possible marker texts with an associated regular expression to match it. For documents that aren’t explicitly marked this way, I also added special project names — in my case, snowflake — that would also get scanned. And for kicks, I added a regular expression for social security numbers.

The code block I used to do the reading and pattern matching is listed below. The file name to read and scan is passed in as a parameter.

$Action = {

Param (

[string] $Name

)

$classify =@{"Top Secret"=[regex]'[tT]op [sS]ecret'; "Sensitive"=[regex]'([Cc]onfidential)|([sS]nowflake)'; "Numbers"=[regex]'[0-9]{3}-[0-9]{2}-[0-9]{4}' }


$data = Get-Content $Name

$cnts= @()

foreach ($key in $classify.Keys) {

  $m=$classify[$key].matches($data)

  if($m.Count -gt 0) {

    $cnts+= @($key,$m.Count)
  }
}

$cnts
}
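To try the block on a single file (the path here is a hypothetical one of mine, not from the post), invoke the script block with the call operator:

```powershell
# Run the classification script block against one file
$hits = & $Action -Name 'C:\Users\bob\Documents\contract.txt'

# $hits is a flat array of marker-name/count pairs, e.g. "Sensitive", 2
$hits
```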

Magnificent Multi-Threading

I could have just simplified my project by taking the above code and adding some glue, and then running the results through the Out-GridView cmdlet.

But this being the Varonis IOS blog, we never, ever do anything nice and easy.

There is a point I’m trying to make. Even for a single folder in a corporate file system, there can be hundreds, perhaps even a few thousand files.

Do you really want to wait around while the script is serially reading each file?

Of course not!

Large-scale file I/O applications, like our classification work, are very well-suited to multi-threading: you can launch lots of file activity in parallel and thereby significantly reduce the delay in seeing results.

PowerShell does have a usable (if clunky) background processing system known as Jobs. But it also boasts an impressive and sleek multi-threading capability known as Runspaces.

After playing with it, and borrowing code from a few Runspaces’ pioneers, I am impressed.

Runspaces handles all the messy mechanics of synchronization and concurrency. It’s not something you can grok quickly, and even Microsoft’s amazing Scripting Guys are still working out their understanding of this multi-threading system.

In any case, I went boldly ahead and used Runspaces to do my file reads in parallel. Below is a bit of the code to launch the threads: for each file in the directory I create a thread that runs the above script block, which returns matching patterns in an array.

$RunspacePool = [RunspaceFactory]::CreateRunspacePool(1, 5)   # at most 5 concurrent threads

$RunspacePool.Open()

$Tasks = @()

foreach ($item in $list) {

   # Each file gets its own PowerShell instance running the $Action script block
   $Task = [powershell]::Create().AddScript($Action).AddArgument($item.Name)

   $Task.RunspacePool = $RunspacePool

   $status = $Task.BeginInvoke()   # returns an IAsyncResult handle

   # Note: += flattens, so $Tasks grows in flat groups of three (handle, instance, name)
   $Tasks += @($status, $Task, $item.Name)
}
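The loop above only launches the threads; the full harvesting code comes in the next post. As a rough sketch of that side, and assuming $Tasks was built up in flat groups of three as above, you'd walk the array and call EndInvoke() on each handle:

```powershell
# Sketch: collect results once each thread finishes.
# $Tasks grows in flat groups of three: handle, PowerShell instance, file name.
for ($i = 0; $i -lt $Tasks.Count; $i += 3) {
    $handle = $Tasks[$i]
    $ps     = $Tasks[$i + 1]
    $name   = $Tasks[$i + 2]

    $hits = $ps.EndInvoke($handle)   # blocks until this thread completes
    if ($hits.Count -gt 0) {
        [pscustomobject]@{ File = $name; Hits = ($hits -join ', ') }
    }
    $ps.Dispose()
}
$RunspacePool.Close()
```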

Let’s take a deep breath—we’ve covered a lot.

In the next post, I'll present the full script and discuss some of the (painful) details. For now, after seeding some files with marker text, I produced the following output with Out-GridView:

Content classification on the cheap!

In the meantime, another idea to think about is how to connect the two scripts: the file activity monitoring script and the classification script partially presented in this post.

After all, the classification script should tell the file activity script what's worth monitoring, and the activity script could in theory tell the classification script when a new file is created so that it can classify it: incremental scanning, in other words.

Sounds like I’m suggesting, dare I say it, a PowerShell-based security monitoring platform. We’ll start working out how this can be done the next time as well.

Ransomware: What happens when the first layer of defense fails?


76% of respondents see ransomware as a major business threat today, according to a recent Information Security Media Group (ISMG) survey, “2017 Ransomware Defense Survey: The Empire Strikes Back,” aimed at understanding the true impact of ransomware on organizations.

While this news isn't worthy of breaking into the latest episode of Madam Secretary, what follows in the Varonis-sponsored survey is an alarming disconnect between the perception and the reality of how these attacks happen and how to defend against them.

Key findings among the results:

    • 83% of respondents are confident their endpoint security can detect ransomware before it spreads to workstations and infects critical files on file shares.
    • But only 21% say their anti-malware solution is completely effective at protecting their organization from ransomware.
    • 44% of respondents state that users are the single biggest weakness in the security chain related to the surge in ransomware.
    • Only 37% of respondents who suffered an attack went on to improve internal user access controls to reduce the footprint of future attacks, and 36% sought to improve detective and recovery capabilities.

People are placing their faith in endpoints to stop ransomware, but we see this threat bypassing that layer. Organizations should ask themselves: “What happens if this layer fails?”

They need to consider other layers of defense to counter this threat, starting with the assets most valuable to the organization and its productivity. Ransomware targets the data on file shares, where there is 10 to 1,000 times more data than on a laptop or workstation. It makes good defensive sense to place a micro-perimeter around this data: restrict access to reduce an attack's footprint, and monitor for ransomware-like behaviors so you can immediately stop the threats that sneak past your outer defenses.

A lot of organizations like to think they don't have insider threats, but often it's the loud intrusion of ransomware that alerts an organization to over-exposed, unmonitored permissions and data. When a user with excessive permissions across the network is infected and the ransomware spreads to every file that user can access, organizations cannot ignore the crippling effects of hijacked data.

They should be thanking the ransomware criminals for shining a big, bright spotlight on the holes in their defenses. If ransomware can temporarily halt productivity due to overexposed permissions, imagine what a malicious insider or an external actor with co-opted credentials can do to your organization, and how long they can go undetected.

Organizations should monitor how the data they depend on is used, especially files and emails, which are frequent targets of breaches. They should then perform regular attestations of access rights to keep overexposed sensitive information from being hijacked in the first place, and deploy user behavior analytics against data activity that look for signs of ransomware.
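As a back-of-the-envelope illustration of that attestation idea (the share path and group names below are placeholders, not part of the survey), a few lines of PowerShell can flag folders whose ACLs grant access to overly broad groups:

```powershell
# Sketch: flag folders whose ACLs grant access to overly broad groups.
$broadGroups = 'Everyone', 'BUILTIN\Users', 'NT AUTHORITY\Authenticated Users'

Get-ChildItem 'C:\Share' -Directory -Recurse | ForEach-Object {
    $acl  = Get-Acl $_.FullName
    $hits = $acl.Access | Where-Object { $broadGroups -contains $_.IdentityReference.Value }
    if ($hits) {
        # Overexposed folder: report it for the attestation review
        [pscustomobject]@{ Folder = $_.FullName; ExposedTo = ($hits.IdentityReference -join ', ') }
    }
}
```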

Read the full survey here.

Then see how our customers are using Varonis to detect ransomware when anti-malware tools fail.

[Podcast] What CISOs are Making, Reading and Sharing



Besides talking to my fav security experts on the podcast, I've also been curious about what CISOs have been up to lately. After all, they have the difficult job of keeping an organization's network and data safe and secure. Plus, they tend to be a few steps ahead in their thinking and planning.

After a few clicks on Twitter, I found a CISO at a predictive analytics SaaS platform who published a security manifesto. His goal was to build security awareness into every job, every role, and to give people a reason to choose the more secure path.

Another CSO, at a team communication and collaboration tool company, stressed the importance of transparency. This means communicating with customers as much as possible: what he's working on, and how their bug bounty and features work.

As for what CISOs are reading and sharing, here are a few links to keep you on your toes and us talkin’:


Subscribe Now

- Leave a review for our podcast & we'll put you in the running for a pack of cards

- Follow the Inside Out Security Show panel on Twitter @infosec_podcast

- Add us to your favorite podcasting app:

Detecting Malware Payloads in Office Document Metadata

Office Documents with Malicious Metadata

Ever consider document properties like “Company,” “Title,” and “Comments” a vehicle for a malicious payload? Check out this nifty PowerShell payload in the company metadata:

Here’s the full VirusTotal entry. The target opens the Office document and, with macros enabled, the payload stored within the document’s own metadata executes and does its work. No extra files written to disk or network requests made.
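If you want to poke at this yourself outside of any tooling, remember that a .docx is just a zip archive, and the Company field lives in docProps/app.xml. Here's a hedged sketch (the folder path and the suspicious-string list are my own assumptions) that flags documents with command-like metadata:

```powershell
# Sketch: scan .docx files for suspicious strings in the Company metadata field.
Add-Type -AssemblyName System.IO.Compression.FileSystem

Get-ChildItem 'C:\Share\docs' -Filter *.docx | ForEach-Object {
    $zip = [System.IO.Compression.ZipFile]::OpenRead($_.FullName)
    try {
        # Extended properties (Company, Manager, etc.) live in docProps/app.xml
        $entry = $zip.GetEntry('docProps/app.xml')
        if ($entry) {
            $reader  = New-Object System.IO.StreamReader($entry.Open())
            $xml     = [xml]$reader.ReadToEnd()
            $company = $xml.Properties.Company
            # Flag anything that looks like an embedded command
            if ($company -match '(powershell|cmd|\.exe|\.bat)') {
                [pscustomobject]@{ File = $_.Name; Company = $company }
            }
        }
    } finally { $zip.Dispose() }
}
```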

The question of whether DatAlert can detect stuff like this came up in the Twitter thread, so I decided to write up a quick how-to.

Finding Malicious Metadata with Varonis

What you’ll need: DatAdvantage, Data Classification Framework, DatAlert

Step 1: Add Extended File Properties to be scanned by Data Classification Framework.

  • Open up the Varonis Management Console
  • Click on Configuration → Extended File properties
  • Add a new property for whichever field you’d like to scan (e.g., “Company”)

Varonis Management Console

(Note: prior to version 6.3, extended properties are created in DatAdvantage under Tools → DCF and DW → Configuration → Advanced)

Step 2: Define a malicious metadata classification rule

  • In the main menu of DatAdvantage select Tools → DCF and DW → Configuration
  • Create a new rule
  • Create a new filter
  • Select File properties → Company (or whichever property you’re scanning)
  • Select “like” to search for a substring
  • Add the malicious value you’d like to look for (e.g., .exe or .bat)

Varonis DCF New Classification Rule

Step 3: Create an alert in DatAlert to notify you whenever a file with malicious metadata is discovered

  • In the main menu of DatAdvantage select Tools → DatAlert
  • Click the green “+” button to create a new rule
  • Click on the “Where (Affected Object)” sub menu on the left
  • Add a new filter → Classification Results
  • Select your rule name (e.g., “Malicious Metadata”)
  • Select “Files with hits” and “Hit count (on selected rules)” greater than 0

DatAlert Rule for Malicious Document Metadata

You can fill out the rest of the details of your alert rule, like which systems to scan, how you want to get your alerts, etc.

As an extra precaution, you could also create a Data Transport Engine rule based on the same classification result that will automatically quarantine files that are found to have malicious metadata.

That’s it! You can update your “Malicious Metadata” over time as you see reports from malware researchers of new and stealthier ways to encode malicious bits within document metadata.

If you’re an existing Varonis customer, you can set up office hours with your assigned engineer to review your classification rules and alerts. Not yet a Varonis customer? What are you waiting for? Get a demo of our data security platform today.