All posts by Cindy Ng

The Difference between Windows Server Active Directory and Azure AD

Once upon a time, IT pros believed that the risks of a data breach and compromised credentials were high enough to delay putting data in the cloud. After all, no organization wants to be a trending headline, announcing yet another data breach to the world. But over time, with improved security, wider adoption, and greater confidence, tech anxiety subsides, and running cloud-based applications such as Microsoft's subscription-based service Office 365 feels like a natural next step.

Once organizations start using Office 365, how do they manage AD? With Windows Server AD or Azure AD? How are on-premises AD and Azure AD similar, and how are they different?

In this post, I will discuss the similarities, differences, and a few things in between.

What We Know For Sure: Windows Server Active Directory

Let’s start with what we know about Active Directory Domain Services.

First released with Windows 2000 Server, Active Directory is essentially a database that helps organize your company's users, computers, and more. It provides authentication and authorization to applications, file services, printers, and other on-premises resources. It uses protocols such as Kerberos and NTLM for authentication, and LDAP to query and modify items in the AD database.
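The LDAP piece is easy to picture in code. Here's a minimal Python sketch (the account names are made up, and no directory server is actually contacted) of building the kind of search filter an admin tool would send to AD to look up a user:

```python
def build_user_filter(sam_account_name: str) -> str:
    """Build an LDAP search filter matching a user by sAMAccountName."""
    # Escape characters that have special meaning in LDAP filters (RFC 4515).
    escaped = (
        sam_account_name.replace("\\", r"\5c")
        .replace("*", r"\2a")
        .replace("(", r"\28")
        .replace(")", r"\29")
    )
    # Restrict to user objects and match the login name.
    return f"(&(objectClass=user)(sAMAccountName={escaped}))"

print(build_user_filter("jdoe"))
# (&(objectClass=user)(sAMAccountName=jdoe))
```

In a real environment, a library such as ldap3 or the .NET DirectorySearcher would send a filter like this to a domain controller over port 389/636.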

There’s also that wonderful Group Policy feature to streamline user and computer settings throughout a network.

With so many security groups, user and admin accounts, and passwords stored in Active Directory, and with identity and access rights managed there as well, securing AD is key to safeguarding an organization's assets.

Now, with emails, files, CRM systems, and even applications stored in the cloud, can we be as confident they're as safe as when they were on the company's own servers?

A Whole New World: AD Service in the Cloud?

As new startups and organizations build their companies, they most likely won't have any on-premises data, and the huge shocker is that they also won't be creating forests and domains in AD. I'll get more into this later.

But organizations with existing infrastructure have already made a significant on-premises investment and will have to envision a new way of operationalizing their business.

Why? Azure AD will likely be a key part of Microsoft's future. So if you're already using any of Microsoft's online services, such as Office 365, SharePoint Online, or Exchange Online, you'll have to figure out how to navigate your way around it. And it already looks like organizations are rapidly adopting cloud-based apps, running them nearly 50% of the time.

What’s different in Azure Active Directory?

First, you should know that Windows Server Active Directory wasn’t designed to manage web-based services.

Azure Active Directory, on the other hand, was designed to support web-based services that use REST (REpresentational State Transfer) API interfaces, such as Office 365 and Salesforce.com. Unlike plain Active Directory, it uses completely different protocols that work with these services (goodbye, Kerberos and NTLM), protocols such as SAML and OAuth 2.0.
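To make the contrast concrete, here's a hedged Python sketch of the OAuth 2.0 client-credentials request a daemon app would send to Azure AD's token endpoint. The tenant, client ID, and secret are placeholders, and nothing is actually sent over the network here:

```python
from urllib.parse import urlencode

def build_token_request(tenant: str, client_id: str, client_secret: str, scope: str):
    """Return the token endpoint URL and form-encoded body for a
    client-credentials grant against Azure AD (v2.0 endpoint)."""
    url = f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token"
    body = urlencode({
        "grant_type": "client_credentials",  # app authenticates as itself
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,
    })
    return url, body

# Placeholder tenant and credentials, purely for illustration.
url, body = build_token_request(
    "contoso.example", "app-id", "app-secret",
    "https://graph.microsoft.com/.default",
)
```

POSTing that body to the URL (with any HTTP client) would return a JSON response containing an access token, the REST-era replacement for a Kerberos ticket.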

As I pointed out earlier, with Azure AD you won't be creating forests and domains. Instead, you'll be a tenant, which represents an entire organization. In fact, once you sign up for Office 365, SharePoint Online, or Exchange Online, you'll automatically be an Azure AD tenant, where you can manage all the users in the company as well as their passwords, permissions, user data, etc.

Besides seamlessly connecting to any Microsoft Online Services, Azure AD can connect to hundreds of SaaS applications using single sign-on (SSO). This lets employees access the organization's data without repeatedly being asked to log in. The access token is stored locally on the employee's device, and you can limit access by setting token expiration dates.

For a list of free, basic, and premium features, check out this comparison chart.

Introducing Azure AD Connect

Organizations ready to integrate their on-premises directory with Azure AD should try Azure AD Connect. For a great tutorial on integration, read this how-to article.

And in an upcoming post, I’ll curate a list of top Azure AD tutorials to help you transition into a brand new interface and terminology.

With the move to Azure, we bid farewell to Kerberos, forests, and domains. And flights of Microsoft angels sing thee to thy rest!

[Podcast] The Anatomy of a Cybercriminal Startup

Leave a review for our podcast & we'll send you a pack of infosec cards.


Outlined in the National Cyber Security Centre's "Cyber crime: understanding the online business model," the structure of a cybercrime organization is in many ways a lot like that of a regular tech startup. There's a CEO, a developer, and, if there are enough funds, an IT department.

However, one role, outlined in an infographic on page nine of the report, was a surprise and does not exist in legitimate businesses: the "money mule." Vulnerable individuals are often lured into these roles with titles such as "payment processing agent" or "money transfer agent."

But when "money mules" apply for the job, and even after they get it, they're not aware that they are being used to commit fraud. So if the cybercriminals get caught, the "money mules" might also get in trouble with law enforcement. A "money mule" can expect a freeze on his or her bank account, possible prosecution, and potential responsibility for repaying the losses. It might even end up on a permanent record.

Other articles and threads discussed:

Tool of the week: SPF Translator

Panelists: Mike Buckbee, Kilian Englert, Mike Thompson

[Podcast] How Weightless Data Impacts Data Security

By now, we're all aware that many of the platforms and services we use collect and store information about our data usage. After all, they want to provide us with the most personalized experience.

So when I read that an EU Tinder user requested information about her data and was sent 800 pages, I was very intrigued by the comment from Luke Stark, a digital technology sociologist at Dartmouth College: "Apps such as Tinder are taking advantage of a simple emotional phenomenon; we can't feel data. This is why seeing everything printed strikes you. We are physical creatures. We need materiality."

He is on to something. We don't usually consider archiving stale data until we're out of space. It is often only by printing photos, docs, spreadsheets, and PDFs that we feel the weight and space-consuming nature of the data we own.

Stark’s description of data’s intangible quality led me to wonder how weightless data impacts how we think about data security.

For instance, when there's a power outage, some IT departments aren't deemed important enough to be on a generator. Infosec is often seen as a compliance requirement, not as security. And security pros face another roadblock when they report a security vulnerability: it's not usually well received.

Podcast panelists: Mike Buckbee, Kilian Englert, Mike Thompson

[Podcast] Penetration Testers Sanjiv Kawa and Tom Porter

While some regard infosec as compliance rather than security, veteran pentesters Sanjiv Kawa and Tom Porter believe otherwise. They have deep expertise with large enterprise networks, exploit development, and defensive analytics, and I was lucky enough to speak with them about the fascinating world of pentesting.

In our podcast interview, we learned what a pentesting engagement entails, how to assign budget to risk, the importance of asset identification, and so much more.

Regular speakers at Security BSides, they have a presentation on October 7th in DC: The World is Y0ur$: Geolocation-based Wordlist Generation with Wordsmith.

Learn more by clicking on the interview above.

Transcript

Sanjiv Kawa: My name is Sanjiv Kawa, I’m a penetration tester with PSC. I’ve been with PSC for…well, since June 2015. Prior to that, I had a couple of different hats on. I was a security consultant. I did some development work, and I was also a QA. My IT knowledge and my development knowledge is pretty well rounded, but my real interests are with penetration testing. Large enterprise networks as well as exploit development and automation. So, yeah, that’s me.

Tom Porter: I'm Tom Porter, been doing security for about eight years. My roots are on the blue team. So I got started in the government contracting space, doing mostly defensive analytics, network situational awareness, dissecting packets, writing IDS rules. So now that I do pen testing, I have an idea of what the blue team is looking for. And I've used that to help me bypass the IR restrictions and find my way into CDEs.

Cindy Ng: Well, let's start with some foundational vocabulary words, like what white box, black box, and grey box are.

Tom Porter: So when you look at different approaches to carrying out pen testing, you're gonna have this spectrum where on one side is kind of the white box, or what you hear called crystal box, pen testing. That's where the organization is sharing information about their environment with the penetration testers. So they actually might give them the keys to the kingdom so they can log in and analyze machines. They have an idea of what the network layout looks like before they come on site. They know what might be the architectural weaknesses in their deployment, or in their environment, and the penetration testers step in with an upper hand. And it gives them a little more value in that regard, just because they don't have to spend the time doing the reconnaissance, the intelligence gathering. So it's a way to kind of crunch down a pen test.

On the other side of the spectrum is kind of the black box. And that mimics more of what a real-world attacker might be doing, just because they don't necessarily have insight into the architecture, what kind of operating systems are running, what kind of applications and versions of those, who the privileged users are, who's logging in from where. And there is a hefty portion up on the front side of the engagement to do a lot of reconnaissance, a lot of intelligence gathering, a lot of monitoring. But it's also a great test of your incident response team, to see how they're adapting and responding to what these black box testers might be doing.

And then kinda in the middle, we have this notion of a grey box test. It's where some information is shared, but not necessarily everything. And it's a style that we like. It's kind of the assumed-breach style, where we are given an idea of what network ranges reside there. We're given an idea of, maybe in generalized terms, the process a privileged user might use to get into a secure environment. We know ahead of time that in the past year they've rolled out a new MFA solution. It's another way of kinda crunching down the time necessary for an engagement, from several months down to a week or two. In that way, we don't break the bank with our customers. But we can still work with them to provide value, just because we have a little extra insight into their environment.

Cindy Ng: Even if you get a white box environment and they tell you everything that they know, there are still some grey areas, such as things that they might not know. So do you think essentially you're always working in a grey box zone?

Tom Porter: To a degree, for sure. As much as we'd like to, we don't necessarily have a full asset list of everything in an environment. Not every organization knows every single application installed on its local development machines. So there's always that kind of grey box notion to it, just because not every company has an idea of what its inventory list looks like. And that's where we come in. We can do our kinda empirical assessment to figure out what systems are running, what services are listening. And we can cross-reference that with what our client has on their side to figure out where the deltas are and give them a more complete picture of what their inventory list looks like, what their assets look like.
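The delta comparison Tom describes can be sketched in a few lines of Python; the hostnames below are invented purely for illustration:

```python
def inventory_deltas(discovered, client_inventory):
    """Cross-reference scanned hosts against the client's asset list and
    return the two deltas: hosts the client didn't know about, and hosts
    on the client's list that the scan never found."""
    discovered, client_inventory = set(discovered), set(client_inventory)
    return {
        "unknown_to_client": sorted(discovered - client_inventory),
        "not_seen_in_scan": sorted(client_inventory - discovered),
    }

deltas = inventory_deltas(
    ["web01", "db01", "dev-box-7"],   # hosts found by scanning
    ["web01", "db01", "mail01"],      # client's asset list
)
print(deltas["unknown_to_client"])    # ['dev-box-7']
print(deltas["not_seen_in_scan"])     # ['mail01']
```

In practice the "discovered" side would come from tooling like an Nmap sweep rather than a hand-typed list.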

Cindy Ng: What's the difference between a penetration test and a vulnerability scan? Because on the surface they sound very similar.

Sanjiv Kawa: This is Sanjiv Kawa here. The real difference between a vulnerability assessment and a penetration test is that the vulnerability assessment does identify, rank, and report any vulnerabilities found in the environment. But it doesn't go that next step, which is to test the exploitation of those vulnerabilities and leverage those exploitations for the assessor's benefit, or the assessor's goal in that particular penetration test. So a good example would be, you know, a vulnerability was identified with a particular service in this network. Well, the vulnerability scan will say, "You know, this is a high-risk vulnerability because this service is using weak passwords, or this service is unpatched." And at that point it's kinda hands-off. It's up to the patch management team or the incident response folks, or whoever the blue team is in that environment, to assess that vulnerability and put it into sort of a risk bubble as to whether they wanna fix it, or whether it's an okay risk in that environment.

The penetration tester will exploit that vulnerability and leverage whatever underlying information on that system they can use to then move laterally throughout the environment, vertically throughout the environment, and essentially get to their end goal. You're absolutely right, there's definitely a bit of haze surrounding the difference between those two things. But really, the penetration test shows value in exploiting these vulnerabilities and ultimately reaching the end goal, as opposed to assuming these vulnerabilities exist, not testing their exploitability, and not really understanding the full depth of what a vulnerability in a particular service can result in.

Cindy Ng: Are there a handful of vulnerabilities you see again and again, or something new and upcoming that you rarely see? I was reminded…last week I was talking to a cryptographer and he said, "You don't always have to go complicated. Sometimes it's back to the basics." And my second question is: once you've come up with a list of vulnerabilities, how do you prioritize them?

Sanjiv Kawa: Yeah, it's a really good question. And the cryptographer you were speaking to is absolutely right. There are times when you don't actually need to complicate the situation. To be fair, on a large number of my penetration tests, I'm not actively exploiting vulnerabilities in services. I'm actually looking for misconfigurations in a pre-existing network, or native features in pre-existing operating systems, that get me to my end goal.

And to answer the second part of your question, you know, what's new and upcoming in the whole vulnerabilities world? Well, we've recently seen the Shadow Brokers release the dumps from the Equation Group, who have kind of a close tie to the NSA, but that's neither here nor there. And, you know, these guys have been hoarding zero-day exploits, which absolutely affect a large spectrum of operating systems and services. The most common, EternalBlue, for example, affects the SMB version 1 protocol from, I think, 2012 all the way down to, you know, Vista/2003. It's a really interesting space that we live in, because there'll be a lull for a little while where there's no real service exploitation, at least in a wide sort of area of what you would expect an enterprise network to look like. And so you're playing the misconfiguration game. And that usually gets you to where you need to be. However, it's really exciting when you start seeing exploits which you've heard about on the wire, but haven't necessarily been released yet and turned into a real proof of concept.

I guess another thing to touch on here as well is that it's all sort of environment-specific. It's really interesting. There's no real concrete methodology. I mean, you do have the PTES, the pen testing execution standard, where you kind of go from OSINT all the way to cleanup, with post-exploitation, vulnerability assessment, and exploitation in between. And that's kind of a framework that you can follow as a penetration tester. But what I'm trying to get at here is, it's really organic when you're hunting for these vulnerabilities and misconfigurations in a network.

Cindy Ng: Tom, you mentioned CDE earlier. And before I spoke to you guys, I had no idea what CDE meant. It stands for cardholder data environment. Can you explain to our listeners what that means?

Tom Porter: CDE is cardholder data environment. It's a notion that comes from PCI compliance, from the payment card industry. They put together a standard for how the folks that host, meaning merchants and service providers, should secure their environments to be compliant. And it started back, you know, over a decade ago, when Visa, MasterCard, and three other brands each had their own testing standards for being compliant with that brand. But it was kinda clunky, and just because you're compliant with one brand doesn't mean you're necessarily compliant with the others. And not a lot of merchants or service providers really came on board.

So they got together, and they ended up producing version 1.0 of what's now known as the PCI Data Security Standard. And it's evolved over the years. It doesn't necessarily fit every business model, but it tries to incorporate as many as possible. It started out mostly covering web apps, but it's eventually evolved into hosting, so data centers, hosting solutions, web applications, as much as they can get under the umbrella. So in PCI DSS, they have what's called the PCI zone. And it's a fairly strict boundary around the systems that store, transmit, or process credit card data or sensitive authentication data. What we call that in our parlance is the CDE. And it's our target for PCI pen testing.

Cindy Ng: Let's go through a scenario, an engagement that you both have been a part of. I wanted to know, what does a regular engagement look like? Are they all the same? Is there a canned process?

Tom Porter: So we work within what the standard dictates, and as this has evolved over the years, typically these tests should be done by third parties. They should follow a standard that's already publicly accessible and vetted. So, like Sanjiv mentioned earlier about PTES, or if you're using something like OSSTMM, something that's been rigorously discussed and debated and has been vetted as a proper way of going about an engagement like this, instead of just rolling your own, in which case you might not have full coverage.

So what we do is kind of adapt these to each client, because not every client environment is uniform, and not every business model is the same. So we'll have some clients where we go in with an already pretty good idea of what we're gonna be doing. We know how we're gonna proceed from A to B to C, with a little bit of room for creativity mixed in. Some clients are brand-new. Some of the environments or technologies they're rolling out, we've never seen before, and we have to get creative. We work cooperatively with the client to figure out how we're gonna rigorously test this so it meets the letter of the standard. So we stick to a general methodology, but that doesn't necessarily mean it's rigid. We do have some room in there for flexibility, to adapt to whatever the client is using.

Cindy Ng: Oftentimes clients are struggling with old technology as it attempts to integrate with new technology, and it takes a few versions to get it right. You can make a recommendation one year and it might take a while for things to smooth out. What is the timeline for fixing and patching?

Sanjiv Kawa: Yeah, that's a really good question. So after we've done a penetration test, if significant findings have been made which really affect the client's CDE, then the client has 90 days to remediate those findings. There are typically various ways you can do this. In most cases, an entire re-architecture of a client's enterprise network is not gonna be a valid recommendation, right? There's just not enough time or resources to complete a task like that within 90 days. So at that point, we start looking at, you know, reasonable remediations that we can suggest. And often, clients might wanna look at one bottleneck. So, for example: if I fix this, how does it affect the rest of the vulnerabilities that you guys identified? Does it kind of wrap them into a mitigation bubble, in terms of, "Will this particular bottleneck affect the security of the CDE?"

Secondarily, there might be compensating controls that you can put into place to help, you know, harden endpoint systems, or there might be network-based controls you can put into place, like, for example, packet inspection or rate limiting, or just real segmentation. So there are a lot of creative solutions that we can adapt on a per-client basis. There's really no silver bullet to fix an enterprise network. And ideally, what we strive to achieve is to give the client the best remediation possible, one which fits a correct timeline and is reasonable to do in their environment. I think we're different in the sense that a lot of penetration testers will enter an environment, they won't understand the full complexity and dependencies of an enterprise network, and once the penetration test is done, they kinda wipe their hands and don't really follow up with any sort of meaningful remediations or suggestions. We interact with everyone, from the system administrators and network administrators all the way up to the C-suite, to identify meaningful solutions, meaningful recommendations that they can implement within a reasonable timeline.

Cindy Ng: There is a human aspect. You know, people say humans are the weakest link, and social engineering, for instance, is one of the many requirements in PCI compliance. And it's one of the things that people often debate about. You know, some say that users need more training, and then there are also other security researchers who think that users aren't the enemy, that we haven't focused enough on user experience design. What is your experience as a pen tester, having worked with so many different departments, where there's both a human and social aspect as well as technology and security? And when you have multiple layers of complexity, how do you mitigate risk, and what is your approach?

Sanjiv Kawa: Yeah, that's a really good question. I guess every person has a different opinion. If you look at the C-suite or management, they might say it's a policy issue. If you look at the system administrators and the network administrators, they might say, "Oh, it's a technological issue," or indeed, "It's a user issue." Touching on your first point, social engineering is not a requirement in PCI DSS 3.2 yet. Something that PSC has worked on…well, PSC is a co-sponsor of the PCI DSS Special Interest Group for pen testing, and we've authored a significant portion of the pen testing guidance supplement for PCI DSS, as well as a significant portion of PCI DSS itself. And something we're trying to work on is getting social engineering to be a requirement.

Now, the hardest part about that is how you can measure whether user education, user training, has become effective over time. In addition to that, we also believe that, you know, the outcome of social engineering shouldn't be pass/fail, right? There should be a program of some kind, whether that's phishing or vishing, which is something we believe you should be doing. But it should mainly be for reinforcement and betterment of the end user. Yeah, I guess that's kind of our stance on social engineering.

 

We currently don't do it. And from a personal perspective, I think there are only so many times you can tell an organization that hires you that phishing or vishing, or some form of social engineering, was the main entry point, right? There are only so many times you can do that before they become kind of sick of your approach, of your initial vector or foothold into their environment. There are a lot more creative ways, a lot more ways that show value, especially with pre-existing technologies or pre-existing things that they have already deployed in their network.

Cindy Ng: I was reading the penetration testing guidance, and it was so thorough that I just assumed that was what you had to follow for pen testing. That's why I'm like, "Oh, it's part of the requirement."

Sanjiv Kawa: As Tom spoke about a bit earlier, you know, PCI isn't a perfect program, right? But I truly believe that it's designed to fit as many business models as there can be. And it's a good introductory framework, especially for penetration testers, because it's so clear. I mean, you define your prized assets, your CDE, which should ideally be a segmented part of your network where there's absolutely minimal to zero significant connectivity to your corporate network. And the pen tester can use basically anything that's in the corporate network to try and gain access to the CDE. And if it can be used against you, it's in scope.

One thing I really like about PSC is that we don't really enter these scoping arguments, you know. For example, "These are the IPs that you're limited to." Or, you know, "You pay $3 to test this IP." Because with PCI, especially as a compliance standard, it says, "Anything that can be used in your corporate scope to gain access to the CDE can be used against you." So that way, it can almost bring everything into scope, which is kind of nice for a pen tester. It's, in my opinion, pen testing in the purest form possible.

Cindy Ng: You both have mentioned the CDE multiple times, and that's where you spend all your time. I was wondering, do you prioritize that list, though? Like, first you need to protect the crown jewels, then do you look at the technology second, the processes third, and the people last? Or does it go back to whatever is important to the organization?

Tom Porter: It's not uniform for every organization, just because it depends on their secure remote access scheme. The crux might come down to a misconfigured technology appliance; it might come down to a user who's behaving in a manner that's outside of procedure. So it really depends. And that's why, when we're on site carrying out an engagement, we kind of cast a wide net, because we wanna catch all of these deficiencies, whether they're in the process, the systems, or the people. So to give you some examples, we might see that a technology that is supposed to provide adequate segmentation between a secure zone and a less secure zone has a vulnerability in some type of service that we can exploit. It's rare, but we see it every now and then. That would be something that falls into the technology list.

Sometimes we might see, on the user side, an admin who sits outside of the PCI zone but wants to get in, and has to do it fairly often, so they've set up their own little backdoor proxy on a machine that sits on a separate VLAN. So we'll get on their machine after we've compromised the domain and inspect their netstat connections. We'll see that they've set up their own little private proxy that they haven't told anybody about. So it depends. We cast a wider net just because we're not entirely sure what we're gonna find. And the organization doesn't necessarily always know where the breakdown in the process will be.

Sanjiv Kawa: To add to that, I would also say that a majority of our time is spent in sort of like the post-exploitation phase. Typically, our time is spent identifying that network segmentation gap and trying to jump from the corporate network into the CDE, assessing all possible routes which will, you know, result in that outcome. The corporate domain is kind of like the Wild West. A lot of clients, in my opinion, don't place enough emphasis and controls around the corporate network, but they've really secured the CDE and all the access into it, in terms of authentication, multi-factor authentication, and granular network segmentation. Yeah, a lot of our time is spent assessing those sorts of routes into the CDE.

Tom Porter: And to piggyback off that, Sanjiv's reminded me, I've been on a number of engagements now where we have this kind of division of labor where most of the resources have been applied to the CDE to make it as secure as possible, while the non-CDE systems, the corporate scope, have kind of fallen by the wayside with regard to attention, time, and money.

And then we find that the corporate network is set up with a trust, like a domain trust, as an example, with the CDE. So if we compromise the corporate network, now we have a pivot point into the CDE through this domain trust. So we end up going to clients and saying, "Hey, we're very proud of how much work you've put into securing your CDE, but now you have to understand that your CDE is vulnerable to the deficiencies of the corporate network, because you've put this trust in it."

Cindy Ng: I think you're describing security as a gradient, whereas a lot of C-level executives and regular users, in general, think, "Well, you're either secure or you're not. It's a zero or a one." How do you explain that to the C-level when you're talking to them?

Sanjiv Kawa: Sure. Yeah. So I don't believe that, you know, security is binary. I think, especially in modern networks, there needs to be an adequate balance of convenience and security, but that balance needs to be applied at the right levels. There are so many examples I might not even be able to get into all of them. But as a third-party assessor, what we do is provide the internal system administrators, network administrators, and IT managers with the necessary tools, vocabulary, and recommendations that they can then take and package for their C-level suite.

I think a lot of the focus that I've been looking at lately is just better access and authentication controls. It seems like one of the most common entry points into most networks is just weak passwords. And so how do you remediate that? How do you put a policy in place? Well, in truth, most organizations have a policy in place, but it's a written policy. It's not necessarily a technical control. There are technical controls that you can put in place which are kind of invisible to the end user, and which make them better without them realizing they're becoming better.

So a really common example is third-party integrations into Active Directory, right? By default, I believe the Windows password policy is a mixture of alphanumeric characters with a minimum length of seven. And with, you know, modern networks, it's kind of archaic to think about, because it doesn't have any sort of intelligent identification of whether a user is using a season plus the year as a password, or companyname123, or something to that effect. So how do you train a user to become better at selecting passwords? Well, in short, you can purchase one of these integration tools, integrate it into Active Directory, and load in what you would consider to be bad passwords. And at that point, a user is automatically more secure, because they're unable to select a weak password. By default, they've already selected a better password. It's really just about identifying what an organization's weakest points are, what their failure points are, and how you can make those better with a cost-effective, potentially technical control which can remove that risk from the environment.
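A toy Python version of the banned-password filtering Sanjiv describes might look like the sketch below. The stem list is illustrative, not a real product's wordlist, and a real AD password filter would run as a DLL on the domain controller rather than as a script:

```python
import re

# Illustrative banned stems: seasons, a placeholder company name, and
# classic weak words. A real deployment would load thousands of these.
BANNED_STEMS = {"winter", "spring", "summer", "autumn", "fall",
                "contoso", "password"}

def is_banned(candidate: str) -> bool:
    """True if the password is just a banned stem plus trailing digits
    or common symbols, e.g. 'Winter2017' or 'contoso123!'."""
    stem = re.sub(r"[\d!@#$%^&*]+$", "", candidate.lower())
    return stem in BANNED_STEMS

print(is_banned("Winter2017"))    # True: season + year
print(is_banned("contoso123!"))   # True: company name + digits
print(is_banned("Tr0ub4dor&3"))   # False: survives the stem check
```

The point of the design is exactly what Sanjiv says: the check runs at password-set time, so users are nudged into stronger choices without any training at all.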

Cindy Ng: You spoke a little bit about being cost-effective. How do you rate and assign risk against the budget? Do they tell you, “Here’s how much money we have; work with it”?

Tom Porter: Not necessarily. We give them recommendations, but our recommendations aren’t necessarily gospel. They know their resource constraints, whether it’s budget, people, time, whatever it may be, and they work within those. And we can be flexible with them throughout the remediation process. What we do ask, as we work with clients through remediation, is that if they come up with ideas for how they wanna go about remediating a finding, they come back and bounce those ideas off of us. Let us pen-test the idea on paper before they invest a significant amount of resources into it. Then we can save each other a bunch of headache down the road.

Cindy Ng: And when you’re engaged with the C-Level, what do they care about, versus what IT people care about?

Sanjiv Kawa: I think the most common thing with the C-level suite is just brand integrity, right? I mean, if they show up on the front page of a newspaper because of a breach, it’s really gonna negatively impact their ability to continue to sell. And a byproduct of that is customer confidence. So, in relation to security, the C-suite will always care about brand integrity and customer confidence. Second to that, they care about time to remediate, and a byproduct of that is cost to remediate. But from my experience, those are probably the big things that the C-suite cares about the most.

Tom Porter: I’ll say part of that also depends on the type of business they’re in and what their revenue stream is. Something like a denial-of-service that takes you offline for several hours might have a greater impact than some data being exfiltrated from your environment. But if you think about a pharmaceutical company, their crown jewels aren’t necessarily their uptime; it’s their patents, their IP. So if we can exfiltrate those, that’s gonna have a much bigger impact on the business than, say, taking them offline for a few hours.

Cindy Ng: You mentioned earlier that you’re also working against a whole bunch of potential zero-day vulnerabilities that you might encounter in the future. And we’ve seen that happen. How do you red-team yourselves and build on the new knowledge that you get?

Tom Porter: There’s a lot of debate right now, dependent upon your preferred internet source. But with regard to what we see as penetration testers, there are a few things we do for our clients. One, we can send out advisories and bulletins just to let them know, “Hey, the internet is a dangerous place right now, and these are the things you should be doing to make sure you’re secure.” Also, as we go about our engagements, we line out our findings in the report, and we do executive briefs for customers as well. In those presentations and reports, we lay out all of our findings: “These are your highs, these are your mediums, these are some best-practice observations.” And we tell them, “You should remediate findings one, two, and three in order to be PCI compliant.” But we also use this as an opportunity for organizations to strengthen their security posture. So we might have some findings that fall outside of that kill chain, or that critical path to compromise, but if we see an opportunity for someone to strengthen their security posture, we’re gonna mention it to them. We won’t require it for remediation for PCI compliance, but we’re more of a mindset of security first, compliance second.

Cindy Ng: What’s the important security concept that you wish people would get?

Sanjiv Kawa: I don’t wanna preach here, but I just think really understanding your assets and your inventory. A good way to do that is to have an internal security team that is proactive and continuously analyzing the systems and services that are exposed. I think it all comes down to good asset identification and good network controls. The best companies I’ve assessed, the ones where I haven’t been able to gain access to the CDE, just have really good segmentation: identifying your prized assets and putting those into an inaccessible network, or a network that is accessible only to a few, and only through access control mechanisms that require multi-factor authentication. There are multiple answers to that question, but really it comes down to asset identification, network controls, segmentation, and limitations on authentication controls. Those are the key things I care about the most. Tom might have some different things.

Tom Porter: When I talk to users, and it happens in almost every single compromise or engagement that I’m on, a lot of it comes back to either weak passwords or, more commonly, password reuse. And that’s something that echoes with users not only in the enterprise but also personally, when they’re on their email or their banking websites. You see this a lot with attacks across the internet with credential stuffing. The idea is not only choosing strong passwords, but also using a unique password for every site where you have a login stored. You see all these breaches, from places like LinkedIn to any handful of other dumps, where passwords are leaked either in clear text or hashed form, and when those hashes are cracked, there’s now a whole litany of usernames, email addresses, and passwords that folks can try to breach other accounts with.

And that’s where the idea of credential stuffing comes in. A password breach comes out, a set of credentials is revealed, and then attackers have tools to try those username/password combinations across a wide array of sites. What happens is users end up getting several of their accounts compromised because they’ve reused the same password. So not only does that echo on a personal level, but we see it in the enterprise too. I’ll crack someone’s Active Directory password, and it just happens to be the same password for their domain admin account, or the same password used to log in to the secure VDI appliance. So it’s not necessarily something that’s gonna show up in a vulnerability scan listed out in a report, but it is something that we see often, and that we end up having to remediate almost all the time.

Sanjiv Kawa: Yeah. And there are several ways you can combat this. If you have an internal security team, one of the things they ought to be doing is monitoring breach dumps, pulling the hashes for the domains they own, and running comparison checks to see if any of those known bad passwords tie to known users in their environment. Secondarily, comparing the hashes from two separate domains, or from two separate user accounts, specifically privileged and non-privileged. It’s just doing password audits, right? Maybe quarterly, or whatever aligns with your password change policy, be it 30 days or 90 days. That way, you can, as a security team, ensure, well, ensure to a certain degree, that users are selecting passwords which are smart and, more importantly, aren’t reusing passwords from zones of low security in zones of high security.
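The audit Sanjiv describes boils down to set comparisons over extracted hashes. Here is a minimal Python sketch, assuming the NT hashes have already been pulled from the domain; the account names are made up for illustration (the first hash is the well-known NT hash of “password”).

```python
# Known-bad hashes, e.g. precomputed from a breach-dump wordlist.
known_bad = {"8846f7eaee8fb117ad06bdd830b7586c"}  # NT hash of "password"

# Hypothetical extracted username -> NT hash mapping.
domain_users = {
    "alice":     "8846f7eaee8fb117ad06bdd830b7586c",
    "bob":       "2b576acbe6bcfda7294d6bd18041b8fe",
    "admin-bob": "2b576acbe6bcfda7294d6bd18041b8fe",  # same hash as bob: reuse!
}
privileged = {"admin-bob"}

# Check 1: users whose hash matches a known bad password.
weak = [u for u, h in domain_users.items() if h in known_bad]

# Check 2: hashes shared by multiple accounts, where one is privileged,
# i.e. password reuse from a low-security zone into a high-security one.
by_hash = {}
for user, h in domain_users.items():
    by_hash.setdefault(h, set()).add(user)

reuse = [sorted(users) for users in by_hash.values()
         if len(users) > 1 and users & privileged]

print(weak)   # ['alice']
print(reuse)  # [['admin-bob', 'bob']]
```

Because the comparison is hash-to-hash, the audit never needs to crack or even see any plaintext password.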

Cindy Ng: Have you seen organizations move away from passwords and go into biometrics?

Sanjiv Kawa: Yeah, there’s been talk about it. I had a conversation with a client about this just last week. But what they’re battling with is user adoption and cost, cost meaning having to have devices that can read fingerprints or read faces. What people end up going with instead is a second factor of authentication, either through an RSA one-time password, or Duo multi-factor authentication, or Google Authenticator. There are lots of multi-factor options out there. Biometrics is still a single factor of authentication, right? So having a token supplied to you by any of the aforementioned providers is probably one of the most cost-effective approaches, and a second factor of authentication is more secure in the long run in terms of user account controls.

Cindy Ng: What kind of problems, though, do you see with biometrics? Let’s say cost isn’t a problem. Could you guys play around with that idea for a little bit? What could you see potentially go wrong?

Tom Porter: What I have seen in the past is that some of these biometric-type logins, when they’re stored on the machine, like on a Windows desktop, are just stored in memory as some kind of token, very similar to a password hash. It operates on the same principle, so you could reuse that token around the network and still adequately impersonate that user.

Not only that, we’ve also seen, and you’ve probably read about it online, that with technologies using facial recognition, people can mimic your face by just scrolling through your LinkedIn or Facebook and reconstructing the pattern necessary to log in as you. Unlike a password, these are factors that are potentially publicly available: your thumbprints, your face on Facebook. It’s just something that we haven’t historically had to secure in the past.

Cindy Ng: Your experience is so vast and multi-faceted, but is there something that I didn’t ask that you think is really important to share with our listeners?

Sanjiv Kawa: I guess we could share some of the other penetration tests that we do. Not all of our pen tests are PCI-oriented. We have done things like pre-assessments in the past. So, for example, an organization is gearing up for another compliance regime, whether it be SOX or HIPAA, or FFIEC, or something to that effect, and PSC will do pre-assessment penetration tests conforming to the constraints of those compliance regimes.

We also do evaluation-based pen tests. Tom spoke about this a bit earlier, but let’s say your organization is implementing a new piece of core security technology that requires some sort of architectural shuffle, whether it be a new multi-factor authentication implementation, some sort of segmentation, or a new monitoring and alerting system; we can pen-test those and identify whether the technologies are adequate for your environment before you deploy. We’ve also done very complex mobile application testing, as well as regular RESTful or SOAP-based API and other web-services-style testing that falls outside the PCI compliance zone. But for the most part, our mindset is still PCI-oriented, right? You just substitute the CDE for that client’s goal, and that’s your success criteria. That’s what you’re trying to get to. And you’re using everything you can ingest around you to get to that goal.

Tom Porter: I’d like to add: when we sit down with clients and look at the results of a penetration test, we lay out which findings need remediation and which don’t necessarily need it. One of the luxuries of PCI, kind of a gift and a curse depending on your perspective, is that we actually get to see remediation through. That’s a rarity in our industry, just because remediation is so rarely required. So we actually get to sit there and walk through remediation with clients. And when we come back and do our retesting to verify the remediation is in place, we find that not all of the findings were always remediated. Maybe they remediated some of them, or the ones we required, but not necessarily all the ones in the report.

And as Sanjiv spoke about earlier, we have this notion of a kill chain: the findings that we link together to achieve a compromise. You might hear it referred to as a critical path to compromise, or a kill chain, or a cyber kill chain. Something with the word “chain” in it. Yeah, the attack path. Essentially we’re trying to identify these bottlenecks, and what ends up happening is organizations get into a kind of mitigation bubble: they start offering compensating controls, which is more of a Band-Aid solution instead of fixing the real problem. So with these findings especially, we’re talking with network admins or sysadmins who know the layout of the tech environment very well. When they’re trying to relay the importance of having patches for this, or a new appliance for that, up to the C-levels, they can use us as a resource. They can tell us where to focus, and then in our report, if we find a deficiency, we’re giving the tech people ammunition to take to the C-level: “Hey, we actually need to act on this. We’ve got a verified third party that says we need to beef up our security here, we need to invest resources there.” It gives them some backing to say, “Hey, we need this.”

Cindy Ng: Do you notice that you’re also looking at IoT devices?

Tom Porter: Absolutely. All the time, actually. When we do our network sweeps, we see all kinds of things out there. They’re almost always set with factory-default passwords. They usually have some type of embedded OS that we can pivot through, and they’re usually on wide-open networks that aren’t using host-based firewalls to protect ingress and egress. So we now have our pivot points to get around.

[Podcast] Dr. Tyrone Grandison on Data, Privacy and Security



Dr. Tyrone Grandison has done it all. He is an author, professor, mentor, board member, and a former White House Presidential Innovation Fellow. He has held various positions in the C-Suite, including his most recent role as Chief Information Officer at the Institute for Health Metrics and Evaluation, an independent health research center that provides metrics on the world’s most important health problems.

In our interview, Tyrone shares what it’s like to lead a team of forty highly skilled technologists who provide tools, infrastructure, and technology to enable researchers to develop statistical models, visualizations, and reports. He also describes his adventures in wrangling petabytes of data, the promise and peril of our data economy, and what board members need to know about cybersecurity.

Transcript

Tyrone Grandison:  My name is Tyrone Grandison. I am the Chief Information Officer at the Institute for Health Metrics and Evaluation, IHME, at the University of Washington in Seattle. IHME is a global nonprofit in the public health and population health space, where we’re focused on how we get people to live a long life, and to live that long life at the highest health capacity possible.

Cindy Ng: Oftentimes, the bottom line drives businesses forward, whereas your institute is driven by helping policymakers and donors determine how to help people live longer and healthier lives. What is your involvement in ensuring that that vision is sustained and carried through?

Tyrone Grandison:  Perfect. So I lead the technology team here, which is a team of 40 really skilled data scientists, software engineers, system administrators, and project and program managers. What we do is provide the base, the infrastructure. We provide tools and technologies that enable researchers to, one, ingest data. We get data from every single country across the world, everything from surveys to censuses to death records, no matter how small or poor or politically closed a country is. And we basically house this information. We help the researchers develop very sophisticated statistical models and tools that make sense of the data. And then we actually put it out there to a network of over 2,400 collaborators.

And they help us produce what we call the Global Burden of Disease, which shows, for different countries of the world, the predominant things that are actually shortening lives in particular age groups, for particular genders, and across demographic segments. So now people can, if they want to, do an apples-to-apples comparison between countries, across ages, and over time. If you wanted to see the damage done by tobacco smoking in Greece and compare that to the healthy years lost due to traffic injuries in Guatemala, you can actually do that. If you wanted to compare both of those things with the impact of HIV in Ghana, that’s now possible. So our entire thing is: how do we provide the technology base and the skills to, one, host the data, support the building of the models, and support the visualization of it, so people can actually make these comparisons.

Cindy Ng: You’re responsible for a lot, so let’s try to break it down a bit. When you receive a bunch of data sets from various sources, take me through your plan for them. Last time we spoke, we talked about obesity. Maybe that’s a good one that everyone can relate to?

Tyrone Grandison:  Sure. So, say we get an obesity data set from the health entities within a particular country. It goes through a process where a team of data analysts looks at the data and extracts the relevant portions of it. We then put it into our ingestion pipeline, where we vet it: what can it apply to? Does it apply to specific diseases? Obviously, it’s going to apply to a specific country. Does it apply to a particular age group and gender? From that point on, we include it in models. We have a modeling pipeline that does everything from estimating the number of years lost to obesity in that particular country to, as I mentioned before, checking whether that particular statistic from that survey is relevant or not.

From there, we basically use it to figure out, okay, what is the overall picture across the world for obesity? And then we visualize it and make it accessible, and provide people with the ability to tell stories with it, with the hope that at some point, a policymaker or somebody within the public health institute of a particular country is gonna see it and actually use it in their decision-making about how to improve obesity in their country.

Cindy Ng: And when you talk about relevance and modeling, people in the industry say there is a lot of unconscious bias. How do you reconcile that? And how do you work with factors that people think are controversial? For instance, people have said that using body mass index isn’t accurate.

Tyrone Grandison:  That’s where we actually depend a lot on the network of collaborators we spoke about. Not only do we have a team that has been doing epidemiology and advancing population health metrics for over two decades, we also depend upon experts within each particular country. Once we produce the first estimates based upon the initial models, they look at those estimates and say, “Nope, this does not make sense. We need to adjust your model to factor in that unconscious bias,” or to remove something the model says we’re seeing but that the model may be wrong about or need tweaking on. It all boils down to having people vet what the models are doing.

So, it’s more along the lines of: how do you create systems that are really good at human computation? Marrying the things that machines are good at with a step that forces a human to verify and improve the final estimate you want to produce.

Cindy Ng: Is there a pattern you’ve seen where, time and time again, the model doesn’t account for X, Y, and Z, and then a human gets involved, figures out what’s needed, and provides the context? Is there a particular concept or idea you’ve seen?

Tyrone Grandison:  There is, to the point where we’ve included it in our initial processing. There’s this concept, the idea of a shock: an event that models cannot predict and that may have wide-ranging impact on what you’re trying to produce. So, for example, you could consider the earthquake in Haiti a shock. You could consider the HIV epidemic a shock. Every single country in any given year may have a few shocks, depending upon the geolocation you’re looking at. And again, the shocks are different, and we are really grateful to the collaborative network for providing insight and telling us, “Oop, this shock is actually missing from your model for this particular location, for this particular population segment.”

Cindy Ng: It sounds like there’s a lot of relationship building, too, with these organizations because sometimes people aren’t so forthcoming with what you need to know.

Tyrone Grandison:  So, I mean, the relationship building around the work we’ve been doing here has been going on for 20 years. Imagine 20 years of work just producing this Global Burden of Disease, and probably another decade or two before that just building the connections across the world, because our director has been in this space for quite a while now. He’s worked everywhere from the WHO to MIT doing this work. So those connections, and the connections from the executive team, have been invaluable in making sure that people actually speak candidly and honestly about what’s going on. Because we are the impartial arbiters of the best data on what’s happening in population health.

Cindy Ng: And it certainly helps when it’s not driven by the bottom line; the most important thing is to improve everyone’s health outcomes. What are the challenges of working with disparate data sets?

Tyrone Grandison:  So, the challenges are the same everywhere, right? They all relate to: okay, are we talking about the same things? Are we talking the same language? Do we have the same semantics? Basic challenge. Two: does the data have what we need to actually answer the question? Not all data is relevant; not all data is created equal. So just figuring out what is actually going to give us insight into the question of how many years you lose to a particular disease. And the third thing, which is pretty common to every field that is trying to push into the open data arena: do we have the right facets in each data set to actually integrate them? Does it make sense to integrate them at all? So the challenges are not different from what the broader industry is facing.

Cindy Ng: You’ve developed relationships for over 20 years. Back then, we weren’t able to process so many, I’m guessing billions and trillions of, data points. Have you seen the transition happen? How has that transition been difficult, and how has it made your lives so much better?

Tyrone Grandison:  Yeah. So, the Global Burden of Disease actually started on a cycle where, when we considered we had enough data to make those estimates, we would produce the next Global Burden of Disease. And we just moved, starting this year, to an annual cycle. That’s the biggest change, and it’s because of the wealth of data that exists out there. Because of the advances in technology, we can now increase the production of this data asset, so to speak. Whereas before, it was a lot of anecdotal evidence and a lot of negotiation to get the data that we actually need. Now, there are far more open data sets. So, lots more that’s actually available.

And a willingness, due to past demonstrations of the power of open data, for governments and people to actually provide and produce data, because they know it can actually be used. It’s the technology hand-in-hand with the cultural change that’s happened. Those have been the biggest changes.

Cindy Ng: What have you learned about wrangling petabytes of data?

Tyrone Grandison:  A lot. In a nutshell, it’s very difficult. If I were to give advice to people, I would start with: what’s the problem you’re trying to solve? What’s the mission you’re trying to achieve? Figure out what you need in your data sets to help answer that question or mission. And finally, as much as possible, stick with a standardize-and-simplify methodology. Leverage a standard infrastructure and a standard architecture across what you’re doing, and make it dead simple, because if it’s not standard or simple, then getting to scale is really difficult. And scale means processing tens or hundreds of petabytes worth of data.

Cindy Ng: There are a lot of health trackers, too, that are trying to gather all sorts of data in hopes that they might use it later. Is that a recommended best-practice approach for figuring out your solution or your problem? Because, you know, what if you didn’t think of something and then a new idea popped into your head? There’s a lot of controversy with that. What is your insight…

Tyrone Grandison:  The controversy is, in my view, very real. One, what is the level of data you’re collecting? At IHME, we’re lucky to be looking at population-level data. If you’re looking at or collecting individual records, then we have a can of worms in terms of data ownership, data privacy, and data security. And, especially in America, what you’re referring to is the whole argument around secondary use of health data. Just as with HIPAA, the Health Insurance Portability and Accountability Act, you’re supposed to have a person’s data for a specific purpose and only that purpose. The issue you just brought up is that a lot of companies actually view data that is created or generated about a particular individual as being their own property, their own intellectual property. Which you may or may not agree with.

In the current model, the current infrastructure, there’s nothing that says the person this data is about should actually have a say in it. And I can just say, personally, I believe that if the data is about you, and that data’s created by you, then technically you should own it, and the company should be a good steward of the data. Being a good steward simply means that you’re going to use the data for the purpose that you told the owner you were going to use it for, and that you will destroy the data after you finish using it. If you come up with a secondary use for it, then you should ask the person again: do they want to participate in it?

So, the issue I have with it is basically the disenfranchisement of the data owner; the neglect of consent, or of even asking, when data is used for a secondary function or purpose; and the fact that there are things inherent in that scenario that are still unresolved and just assumed to be true, which people need to look at.

Cindy Ng: When you say when the project is over, how do you know when the project is over? Because I can, for instance, write a paper and keep editing and editing and it will never feel completed and done.

Tyrone Grandison:  Sure. So, put it this way. Say I tell the people involved in a particular study, or who gave me their data, that I want to use this data to test a hypothesis, and the hypothesis is that drinking a lot of alcohol causes liver damage. Okay, obvious. I publish my findings, they get revised, and at the very end, the paper is either published in a journal somewhere or not. If that’s the case, and I publish it and then find out that, hey, I can use the same data to figure out the effects of alcohol consumption on some other thing, that is a secondary purpose that I did not have an agreement with you on, and so I should ask for your consent on that.

So, the question is not just when the task is done, but when I have actually accomplished the purpose that I negotiated and asked to use your data for.

Cindy Ng: So, it sounds like that’s really the best practice when you’re gathering or using someone’s personal data: that’s the initial contract, and if there is a secondary use, they should also know about it. Because you don’t want to end up in a situation like Henrietta Lacks, where they’re using your cells and you don’t even know it, right?

Tyrone Grandison:  Yup. Henrietta Lacks is actually a good example; it highlights the current practices of the industry. And again, luckily, population health does not have this issue, because we have aggregated data on different people. But in the general healthcare scenario, where you do have individual health records, what companies are doing, and what they did in the Henrietta Lacks case, is they may have specified in some legal document, “Hey, we’re gonna use your information for X, and X is the purpose.” And they make X either so broad, so general, that it encompasses every possible thing you can imagine, or they basically say, “We’re going to do a really specific purpose, and anything else that we find.” And that is now the common practice within the field, right?

And to me, the heart of that seems very deceptive. Because you’re saying to somebody, “We have no idea what we’re going to do with your data, we want access to it, and, oh, we assume that you’re not going to own it. We assume that any profits or anything we get from it are going to be ours.” Do you see how the model itself just seems perverse? It’s tilted toward: how do we get something from somebody for free and turn it into an asset for my business, where I have carte blanche to do what I want with it. And I think that discussion has not happened seriously in the healthcare industry.

Cindy Ng: I’m surprised that businesses haven’t approached your institution for assistance with this matter. It just sounds like it would make total sense, because I’m assuming all of your data has the names and PHI stripped.

Tyrone Grandison:  We don’t even get to that level at this point.

Cindy Ng: Oh, you don’t even…

Tyrone Grandison:  It’s information at a generalized level. There are multiple techniques you can use to, let’s say, protect people’s privacy. One would be suppression: I suppress the things that I consider PII. Another is generalization: I’m going to look at information not at the most granular level, but at the level above it. Don’t look at you and all your peers; go a level above that and say, “Okay, let’s look at everyone that lives in a particular zip code or a particular state or country.” That way, you have the protection of hiding in a crowd; you can’t really identify one particular person in the data set itself. So at IHME we don’t have the PHI/PII issue, because we work on generalized data sets.
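Suppression and generalization as Tyrone describes them can be sketched in a few lines of Python. The field names and cutoffs below are illustrative assumptions for the example, not IHME’s actual pipeline:

```python
# Sketch of the two de-identification techniques described above:
# suppression drops direct identifiers outright; generalization coarsens
# quasi-identifiers so each record "hides in a crowd".
def deidentify(record: dict) -> dict:
    out = dict(record)
    out.pop("name", None)   # suppression: remove PII fields entirely
    out.pop("ssn", None)
    out["zip"] = record["zip"][:3] + "**"           # generalize ZIP to a region
    out["age"] = f"{(record['age'] // 10) * 10}s"   # generalize age to a decade
    return out

rec = {"name": "Jane Doe", "ssn": "123-45-6789",
       "zip": "98105", "age": 34, "condition": "diabetes"}
print(deidentify(rec))
# {'zip': '981**', 'age': '30s', 'condition': 'diabetes'}
```

After this transformation, the record describes everyone in the 981** region in their 30s with that condition rather than a single person, which is the crowd-hiding property Tyrone mentions.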

Cindy Ng: You’ve held many different roles. You’ve been a CDO, a CIO, a CEO. Which role do you enjoy doing most?

Tyrone Grandison:  So, any role that actually allows me to do two things. Like, one, create and drive the direction or strategy of an organization. And, two, enables me to help with the execution of that strategy to actually produce things that will positively impact people. The roles that I have been fond of so far would be CEO and CIO because at those levels, you basically also get to set what the organizational culture is, which is very valuable in my mind.

Cindy Ng: And since you’ve also been a board member, what do you think the board needs to know when it comes to privacy and cyber security?

Tyrone Grandison:  First of all, I think it should be an agenda item that you deal with upfront, and not after a breach or an incident. It should be something that you bake into your plans and into the product life cycle from the very beginning. You should be proactive in how you actually view it. The main thing I’ve noticed over time is that people do not pay attention to privacy, cyber security and cyber crime until, and this is a horrible analogy, there’s a dead body at the scene. And then you start having reputational damage and financial damage because of it.

When, you know, thinking about the processes, technology, people and tools that would help you fix this from the very get-go would have actually saved you a lot of time. And there’s the whole thought of both of these things, privacy and security, being cost centers: you don’t see a profit from them, you don’t see revenue being generated from them. And you only actually see the benefit, the cost savings, so to speak, after everyone else has been breached or damaged by an episode and you’re not. Right. So be a little bit more proactive upfront, rather than reactive after the fact.

Cindy Ng: It’s been said that IT makes technology seem more complicated than it really is, and that boards are unable to follow what IT is presenting, so they’re confused, and there isn’t a series of steps they can follow. Or maybe IT asks for a budget for one thing one year and then wants more money the next year. And as you said, it costs money. But do you also think that there’s a value proposition that’s not carried across in a presentation? How can the point be driven home, then?

Tyrone Grandison:  So, I mean, the biggest thing you just identified is the language barrier. The translation problem. I don’t fundamentally believe that anyone, tech or otherwise, is purposely trying to sound complex, or purposely trying to confuse people. It’s just a matter of, you know, you have skilled people in a field or domain, whatever the domain is. If you went tomorrow and started talking to an oncologist or a water engineer, and they just went off and used a bunch of jargon from their particular fields, they’re not trying to be overly complex. They’re not trying to keep you from understanding what they’re doing. But they’ve been studying this for decades, and they’re just so steeped in it that that’s their vocabulary.

So the number one issue is just that: understanding your audience. Right. If you know that your audience is not tech, or is from a different field or a different era in tech, or is the board, then understanding the audience, knowing what their language is, and then translating your lingo into things that they can understand, I think, would go a long, long way in actually helping people understand the importance of privacy and cyber security.

Cindy Ng: We often like to make the analogy that we should treat data like money. But do you think that data can potentially be more valuable than money? When attacks aren’t financially driven, but are out to destroy data instead, we react in a really different way. I wanted to hear your thoughts on the analogy of data versus money.

Tyrone Grandison:  Interesting. So, money is just a convenient currency. Right. To enable a trade. And money has been associated with giving value to certain objects that we consider important. So I’m viewing data as something that needs to have a value assigned to it. Right. And money is going to be that medium, whether the money is actual physical money or it’s Bitcoin. So I don’t see the two things being in conflict, or the two things having a comparison between values. I just think that data is valuable. A chair is valuable. A phone is valuable. Money is just that medium that allows us to have one standard unit to compare the value of all those things.

Is data going to be more valuable than the current physical IT assets that a company has? Over time, I think yes. Because the data that you’re using, that you’re hopefully going to be using, is going to be driving more insights, hopefully more revenue, and more creative uses of the current resources. So the data itself will influence how much of the other resources you will actually acquire, or how much of the other resources you need to place in particular spots or instances, or allocate across the world. So I see data as a good driving force for making these value-driven decisions. And I think its importance relative to the physical IT assets is going to increase over time. You can see that happening already. But to say data is more valuable than cash? I’m not too sure that’s the right question.

Cindy Ng: We’ve talked about the value of data, but what about the data retention and migration? It’s sort of dull, but yet so important.

Tyrone Grandison:  Well, multiple perspectives here. Data retention and migration is important for multiple reasons. Right. And the importance normally lies in risk: in minimizing the risk or the harm that can potentially be done to the owner of the data, or the subjects that are referenced in the data sets. Right. That’s the importance. That’s why you have whole countries and states actually saying that they have a data retention policy or plan. And that means that after a certain time, either the stuff has to be gone, completely deleted, or be stored somewhere that is secure and not easily accessible.

And the whole premise of it is, you assume for a particular period of time that companies are going to need to use that data to accomplish the purpose that they specified initially. But after that point, the risk or the potential harm of keeping it becomes so high that you need to do something to reduce that risk. And that thing normally is destruction, or migration somewhere else.
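The retention idea above is mechanically simple: once a record outlives a fixed holding period, it should be destroyed or moved to restricted storage. A minimal sketch, where the seven-year window and the archive step are illustrative assumptions, not any particular jurisdiction's rule:

```python
# Split records into those still within the retention window and those
# past it; expired records go to secure archive or deletion.
from datetime import date, timedelta

RETENTION = timedelta(days=365 * 7)  # illustrative seven-year policy

def apply_retention(records, today):
    keep, expired = [], []
    for r in records:
        (expired if today - r["collected"] > RETENTION else keep).append(r)
    return keep, expired  # expired -> secure archive or deletion

records = [
    {"id": 1, "collected": date(2005, 1, 1)},   # well past the window
    {"id": 2, "collected": date(2016, 1, 1)},   # still within it
]
keep, expired = apply_retention(records, today=date(2017, 6, 1))
print([r["id"] for r in keep], [r["id"] for r in expired])  # [2] [1]
```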

Cindy Ng: What about integrating that data set with another, probably a secondary use, but integrating it with other institutions? I hear that people want a one-health solution in terms of patient data, so that all organizations can access it. It’s definitely a risk, even though it’s great for analytics and technology use. Is that something you think is a good idea that we should even entertain? Or are we creating a monster, where a single database that integrates everything and all the data is a bad solution?

Tyrone Grandison:  I agree with everything you just said. It’s both. For certain purposes and scenarios, you know, it’s good, because you get to see new things and you get a different picture, a better picture, a more holistic picture once you integrate data sets. That being said, once you integrate data sets, you also increase the risk profile of the resulting data sets, and you lower the privacy of the people that are referenced in them. Right. The more data sets you integrate…

So there’s this paper that a colleague of mine, Star Ying, and I wrote last year or the year before last, that basically says there’s no privacy in big data. Simply because with big data you assume the three Vs: velocity, volume and variety. As you add more and more data sets together to get a larger big data set, as we call it, what happens is that the set of things that can be uniquely combined to identify a subject in that larger big data set becomes larger and larger.

So, let me see what a quick example would be. Say you have access to toll data: the data of people that are driving on your local highway or your state highway, and you have the logs of when a particular car went through a certain point. The time, the license plates, the owner. All that stuff. That’s one data set by itself. You have a police data set that has a list of crimes that happened in particular locations. And you pick something else: a bunch of records from the DMV that tell you when somebody came in to have some operation done. All by themselves, very innocuous. All by themselves, if you anonymized them, or applied techniques to them to protect the privacy of the individuals, perfectly safe. Right. Not perfectly, but relatively.

If you start combining the different data sets, even just randomly: you combine the toll data with the police data, and you find out that there’s a particular car that was at the scene of a crime where somebody was murdered, and that car was at a toll booth that was nearby, like, one minute afterward. Now you have something interesting. You have an interesting insight. So that’s a good case.
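Mechanically, the toll-and-police example is just a join on place and time. A toy sketch, with every record invented for illustration:

```python
# Each dataset is harmless alone; joining them on location and a small
# time window yields an identifying insight, as in the example above.
from datetime import datetime, timedelta

tolls = [
    {"plate": "ABC123", "booth": "I-5 North", "time": datetime(2017, 6, 1, 23, 41)},
    {"plate": "XYZ999", "booth": "I-5 North", "time": datetime(2017, 6, 1, 11, 5)},
]
crimes = [
    {"case": 4471, "location": "I-5 North", "time": datetime(2017, 6, 1, 23, 40)},
]

def link(tolls, crimes, window=timedelta(minutes=5)):
    """Return (plate, case) pairs where a car passed a booth at a crime
    scene's location within `window` of the incident."""
    hits = []
    for c in crimes:
        for t in tolls:
            if t["booth"] == c["location"] and abs(t["time"] - c["time"]) <= window:
                hits.append((t["plate"], c["case"]))
    return hits

print(link(tolls, crimes))  # [('ABC123', 4471)]
```

Neither input contains the pair (plate, case); only the join does, which is the privacy loss Tyrone is describing.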

We want to actually have this integration be possible, because you get insights that you couldn’t get from just having that one data set itself. But then you start looking at other cases where, you know, somebody wants to be protected. You have, and this is just within one domain, a data set of all the hospital visits across four different hospitals for a particular person. If you start merging them, you can actually use the pattern of visits to uniquely identify somebody. And if you start merging that with, again, the transportation records, that may be something that gives you insight into what somebody’s sick with.

You can identify them, first of all, which they don’t want, because they went to one hospital. And that could be used to do something negative against them, like deny them insurance, or whatever the use case is. So you see, in multiple different cases: one, the privacy of the individuals involved is actually decreased; and two, it can be used for positive or negative purposes, for or against the individual data subject or data owner.

Cindy Ng: People have spoken about these worries. How should we intelligently synthesize this information? Because it’s interesting, it’s worrisome, but it can also be very beneficial. And we tend to sensationalize everything.

Tyrone Grandison:  Yup. That’s a good question. So, I mean, I would say to look at the major decisions in your life that you plan to be making for the next couple of years. And then look at the tools, software, things that you have online right now that a potential employer, or a potential person that you’re looking to do something with or get a service from, may actually look at to evaluate whether you get the service or not. Whether it be getting a job or getting a new car. Whatever that thing is that, you know, you want to actually get done.

And, you know, look at the current questions that the person on the other side will be asking. Would the answers be interpreted negatively for you? A quick example would just be: okay, you’re a Facebook user. Look at all the things that you do on there and all the apps that you have, and then look at who has access to all that. In those particular instances, is that going to be a positive for that interaction or a negative? I mean, I think that’s just being responsible in the digital age, right?

Cindy Ng: Right. What is a project that you’re most proud of?

Tyrone Grandison:  I’m proud of a lot of things. I’m proud of the work that we do here at IHME. I think it’s groundbreaking work that’s gonna help a lot of people. The data that we produce has actually been used to drive pollution legislation. The numbers come out, and different ministries see them. The Ministry in China saw it and said, “Oh, we have an issue here, and we need to figure out how we improve our longevity in terms of carbon emissions.”


We’ve had the same thing in Africa, where there was somebody from the Ministry. I think it was, sorry, was it Gambia or Ghana? I’ll find out for you afterwards. And they saw the numbers for deaths due to in-house combustion, and started a program that gave a few hundred, well, a few thousand pots to different households, and within a few years we saw that number go down. So, literally saving lives.

I’m proud of the White House Presidential Innovation Fellows, that group of people that I worked with two and a half years ago, and the work that they did. So, one of the fellows in my group worked with the Department of the Interior to increase the number of kids going to national parks. And, you know, they did it by actually going out and talking to kids and figuring out what the correct incentive scheme would be to have kids come to the park during their summer breaks. That program is called Every Kid in a Park, and it’s been hugely successful at getting people, kids and parents, connected back to nature. Right. I’m also proud of the work the data service team did at the Department of Commerce. That did help a lot of people.

We routinely created data products with the user, the average American citizen, in mind. And one of the things that I’m really proud of is that we helped democratize and open up U.S. Census Bureau data, which, you know, is very powerful. It’s freely open to everybody, and it’s been used by a lot of businesses that make a lot of money from selling the data itself. Right. So we exposed that data through something called the CitySDK, and, you know, that led to everything from people building apps to help food trucks find out where demand was, to people building websites to help people with accessibility challenges figure out how to get around particular cities, to people helping supermarkets figure out how to get fresh foods to communities that didn’t have access to them. That was awesome to see.

The other thing was exposing the income inequality data, and showing people that the narrative they’re hearing about gender and race inequality among different professions is actually far worse than what’s mentioned out there in public. So, I mean, I’m proud of all of it, because it was all fun work, all impactful work, all work that hopefully helped people.

[Podcast] When Hackers Behave Like Ghosts

Leave a review for our podcast & we'll send you a pack of infosec cards.


We’re a month away from Halloween, but when a police detective aptly described a hotel hacker as a ghost, I thought it was a really clever analogy! It’s hard to recreate and retrace an attacker’s steps when there are no fingerprints or evidence of forced entry.

Let’s start with your boarding pass. Before you toss it, make sure you shred it, especially the barcode. It can reveal your frequent flyer number, your name, and other PII. Someone holding it can even submit the passenger’s information on the airline’s website and learn about any future flights. Anyone with access to your printed boarding pass could do harm, and you would never know who the perpetrator was.

Next, let’s assume you arrive at your destination and the hotel is using a key system with a known vulnerability. In the past, when hackers revealed a vulnerability, companies stepped up to fix it. But now, when systems need a fix and a software patch won’t do, how do we scale the fix to millions of hotel keys when the problem is in the hardware?

Other articles discussed:

Tool of the week: Gost: Build a local copy of Security Tracker. 

Panelists: Kilian Englert, Forrest Temple, Mike Buckbee

[Podcast] Security Doesn’t Take a Vacation




Do you keep holiday photos off social media when you’re on vacation? Security pros advise that it’s one way to reduce your security risk. Yes, the idea of an attacker mapping out a route to steal items from your home sounds ambitious. However, we’ve seen actual examples of both phishing attacks and theft occur.

Alternatively, the panelists point out that this perspective depends on how vulnerable you might be. An attacker who needs an entry point and believes you’re a worthy target is vastly different from the general noise of regular social media sharers.

Other articles discussed:

Tool of the week: A Tunnel which turns UDP Traffic into Encrypted FakeTCP/UDP/ICMP Traffic 

Panelists: Mike Thompson, Forrest Temple, Mike Buckbee

[Podcast] The Security of Visually Impaired Self-Driving Cars



How long does it take you to tell the difference between fried chicken and a poodle? What about a blueberry muffin and a Chihuahua? When presented with these photos, it takes a closer look to spot the differences.

It turns out that self-driving car cameras have the same problem. Recently security researchers were able to confuse self-driving car cameras by adhering small stickers to a standard stop sign. What did the cameras see instead? A 45 mph speed limit sign.

The dangers are self-evident. However, the good news is that there are enough built-in sensors and cameras to act as a failsafe. But followers of our podcast know that other technologies with other known vulnerabilities might not be as lucky.

Other articles discussed:

Tool of the week: Macie, Automatically Discover, Classify, and Secure Content at Scale

Panelists: Jeff Peters, Kris Keyser, Mike Buckbee

[Podcast] Deleting a File Is More than Placing It into the Trash



When we delete a file, our computer’s user interface makes the file disappear as if it were just a simple drag and drop. In reality, the file is still on your hard drive.

In this episode of the Inside Out Security Show, our panelists elaborate on the complexities of deleting a file, the lengths IT pros go through to obliterate a file, and surprising places your files might reside.

Kris Keyser explains, “When you’re deleting a file, you’re not necessarily deleting a file. You’re deleting the reference to that file.”

Other Articles Discussed:

Instead of “Tool of the Week”, we learned about a coveted certification from a Blackhat attendee: Offensive Security Certified Professional. It is a 24-hour lab test to demonstrate your understanding of identifying vulnerabilities, pen testing, etc.

Panelists: Kris Keyser, Jeff Peters, Forrest Temple

[Podcast] Are Cyber War Rooms Necessary?



While some management teams are afraid of a pentest or risk assessment, other organizations, particularly financial institutions, are well aware of their security risks. They are addressing these risks by simulating fake cyberattacks. By putting IT staff, managers, board members and executives who would be responsible for responding to a real breach or attack through these simulations, they are learning how to respond to the press, regulators and law enforcement, as well as other scenarios they might not otherwise expect.

However, other security experts would argue that cyber war rooms are financially prohibitive for most organizations with a limited budget. What’s more, organizations should keep in mind that not all attacks are complicated. If organizations curbed phishing attacks or achieved a least privilege model, they would already significantly reduce their risk.

Other Articles Discussed:

  • Dark web marketplaces AlphaBay and Hansa shut down
  • Every voting machine gets hacked at DEF CON
  • Real life Minority Report
  • German judge rules that keylogging employees is illegal

Tool of the week: Reply All Podcast: Long Distance

Panelists: Mike Buckbee, Kris Keyser, Kilian Englert


[Podcast] Roxy Dee, Threat Intelligence Engineer



Some of you might be familiar with Roxy Dee’s infosec book giveaways. Others might have met her recently at Defcon as she shared with infosec n00bs practical career advice. But aside from all the free books and advice, she also has an inspiring personal and professional story to share.

In our interview, I learned that she had a budding interest in security but lacked the funds to pursue her passion. How did she work around her financial constraints? Free videos and notes from Professor Messer! What’s more, she thrived in her first post, providing tech support for Verizon Fios. With grit, discipline and volunteering at BSides, she eventually landed an entry-level position as a network security analyst.

Now she works as a threat intelligence engineer, and in her spare time she writes how-tos and shares sage advice on her Medium account, @theroxyd.

Transcript

Cindy Ng: For individuals who have had a nonlinear career path in security, Threat Intelligence Engineer Roxy Dee knows exactly what that entails. She begins by describing what it was like to learn about a new industry with limited funding, and how she studied security fundamentals in order to get her foot in the door. In our interview, she reveals three things you need to know about vulnerability management, why fraud detection is a lot like network traffic detection, and how to navigate your career with limited resources.

We currently have a huge security shortage, and people are making analogies about the kind of people we should hire. For instance, if you’re able to pick up music, you might be able to pick up technology. And I’ve found that in security it’s extremely important to be detail-oriented, because the adage is that the bad guys only need to be right once, while security people need to be right all the time. I read on your Medium account how you got into security, for practical reasons. So let’s start there, because it might help encourage others to start learning about security on their own. Tell us what aspect of security you found interesting and the circumstances that led you in this direction.

Roxy Dee: Just to comment on what you’ve said: actually, that’s a really good reason to make sure you have a diverse team, because everybody has their own special strengths, and having a diverse team means that you’ll be able to fight the bad guys a lot better, because there will always be someone that has that strength where it’s needed. The bad guys can develop their own team the way they want, so it’s important to have a diverse team, because every bad guy you meet is going to be different. That’s a very important point in itself.

Cindy Ng: Can you clarify “diverse?” You mean everybody on your team is going to have their own specialty that they’re really passionate about? By knowing what they’re passionate about, you know how to leverage their skill set? Is that what you mean by diversity?

Roxy Dee: Yeah, that’s part of it. I mean, it’s making sure that you don’t have the same kind of person everywhere. For example, I’ll tell my story like you asked in the original question. As a single mom, I have a different experience than someone that has had fewer difficulties in that area, so I might think of things differently, or be resourceful in different ways. Or, I’m not really that great at writing reports. I can write well, but I haven’t had the practice of writing reports. Somebody that went to college might have that, because they were kind of forced to do it. So it helps to have people from different backgrounds that have had different struggles.

And I got into security because I was already into phone phreaking, which is a way of hacking the phone system. So for me, when I went to my first 2600 meeting and they were talking about computer security and information security, it was a new topic and I was kind of surprised. I was like, “I thought 2600 was just about phone hacking.” But I realized that at the time, it was 2011, phone hacking had become less of a thing and computer security had become more of a thing. I got the inspiration to go that route, because I realized that it’s very similar. But as a single mom, I didn’t have the time or the money to go to college and study for it. So I used a lot of self-learning techniques, I went to a lot of conferences, I surrounded myself with people that were interested in the topic, and through that I was able to learn what I needed to do to start my career.

Cindy Ng: People have trouble learning the vocabulary because it’s like learning a new language. Even though you were into phone hacking, computer security has its own distinct language. How did you make the connections, and how long did it take you? What experiences did you surround yourself with to cultivate a security mindset?

Roxy Dee: I’ve been on computers since I was a little kid, like four or five years old. So it may not have been as difficult for me as for other people, because I kind of grew up on computers. Having that background helped. But when it came to information security, there were a lot of times where I had no idea what people were saying. Like, I did not know what “reverse engineering” meant, or I didn’t know what “Trojan” meant. And now it’s like, “Oh, I obviously know what those things are.” But I had no idea what people were talking about. So I went to conferences, watched DEF CON talks, and listened to people. By the third time I went to DEF CON, I thought, “Wow. I actually know what people are saying now.” It’s just a gradual process, because I didn’t have that formal education.

There were a few conferences that I volunteered at, mostly BSides. And BSides are usually free anyway. When you volunteer, you become more visible in the community, and so people will come to you, or people will trust you with things. And that was a big part of my career: networking with people and becoming visible in the community. That way, if I wanted to apply for a job, if I already knew someone there, or if I knew someone that knew someone, it was a lot easier to get my resume pushed to the hiring manager than if I just applied.

Cindy Ng: How were you able to land your first security job?

Roxy Dee: As far as my first infosec job: I was working in tech support and I was doing very well at it. I was at the top of the metrics; I was always in the top 10 agents.

Cindy Ng: What were some of the things that you were doing?

Roxy Dee: It was tech support for Verizon Fios. There was a lot of “Restart your router,” “Restart your set-top box,” things like that. But I was able to learn how to explain things to people in ways that they could understand. So it really helped me learn how to speak technically without losing a non-technical person.

Cindy Ng: And then how did you transition into your next role?

Roxy Dee: It all had to do with networking, and at this point I had volunteered for a few BSides. Someone that I knew at the time told me about a position that was an entry-level network security analyst, and all I needed to do was get my Security+ certification within the first six months of working there. So it was an opportunity for me, because they accepted entry-level. And when they gave me the assessment that they give people they interview, I aced it, because I had already studied networking through a website called Professor Messer. That website actually helped me with Security+ as well; his entire website is just YouTube videos.

Once I got there, I took my Security+ and I ended up, actually, on the night shift. So I was able to study in quiet during my shift every day at work. I made it a routine: “I have to spend this amount of time studying on” whatever topic I wanted to move forward with. I knew what to study because I was going to conferences and taking notes from the talks, writing down things I didn’t understand or words I didn’t know, and then later researching that topic so I could understand more. Then I would watch the talk again with that understanding, if it was recorded, or I would go back to my notes with that understanding. The fact that I was working overnight and was not interrupted really helped.

From there, and that was a very entry-level position, I went to a secure cloud hosting company with a focus on security. The great thing about that was that it was a startup. They didn’t have a huge staff, and they had a ton of things they had to do and a bunch of unrealistic deadlines. So they would constantly be throwing me into situations I was not prepared for.

Cindy Ng: Can you give us an example?

Roxy Dee: Yeah. That was really the best training for me: just being able to do it. So when they started a vulnerability management program, I had no experience in vulnerability management before this, and they wanted me to be one of the two people on the team. I had a manager, and then I was the only other person. Through this position I learned what good techniques are, and I was also inspired to do more research on it. If I hadn’t been given that position, I wouldn’t have been inspired to look it up.

Cindy Ng: What does vulnerability management entail? What are three things that you should know?

Roxy Dee: Yeah. So vulnerability management has a lot to do with making sure that all the systems are up to date on patching. That’s one of them. The second thing I would say is very important is inventory management, because there were some systems that nobody was using, and vulnerabilities existed there, but there was actually no one to fix them. If you don’t take proper inventory of your systems and you don’t do discovery scans to discover what’s out there, you could have something sitting there that an attacker, once they get in, could use or might have access to. And the third thing that’s really important in vulnerability management is actually managing the data, because you’ll get a lot of data, but if you don’t use it properly it’s pretty much useless. You need a system to track your compliance requirements: when did I discover this, and when is it due to be remediated? What are the vulnerabilities and what are the systems? What do the systems look like? There’s a lot of data you’re going to get, and you have to manage it, or you will be completely unable to use it.
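Roxy's third point, managing the data, boils down to a small tracking system: every finding needs a discovery date, a remediation deadline, and a way to query what's overdue. A minimal sketch, where the severity-based SLA windows, CVE choices and field names are illustrative assumptions rather than any standard:

```python
# Track findings with discovery dates and severity-based remediation
# deadlines, then query which unfixed findings are past due.
from datetime import date, timedelta

SLA = {"critical": 7, "high": 30, "medium": 90}  # days to remediate (assumed)

def due_date(finding):
    return finding["discovered"] + timedelta(days=SLA[finding["severity"]])

def overdue(findings, today):
    return [f["cve"] for f in findings
            if not f["fixed"] and today > due_date(f)]

findings = [
    {"cve": "CVE-2017-0144", "severity": "critical",
     "discovered": date(2017, 5, 1), "fixed": False},
    {"cve": "CVE-2017-5638", "severity": "medium",
     "discovered": date(2017, 5, 1), "fixed": False},
]
print(overdue(findings, today=date(2017, 5, 20)))  # ['CVE-2017-0144']
```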

Cindy Ng: And then you moved on into something else?

Roxy Dee: Oh, yes. Actually, it being a startup kind of wore on me, to be honest. So I got a phone call from a recruiter, actually, while I was at work.

This was another situation where I had no idea how to do what I was tasked with. From my previous positions, I had learned how to monitor and detect, and how to set up useful alerts that could serve whatever purpose was needed. So I already had this background. They said, “We have this application. We want you to log into it and do whatever you need to do to detect fraud.” My role was very loosely defined: “Detect bad things happening on the website.” And I found out that this application had actually been stood up four years prior, and they had used it for a little while, but then they abandoned it.

And so my job was to bring it back to life and fix some of the issues that they didn’t have time for, or they didn’t actually know how to fix or didn’t want to spend time fixing them. That was extremely beneficial. I had been given a task, so I was motivated to learn this application and how to use it, and I didn’t know anything about fraud. So I spent a lot of time with the Fraud Operations team, and through that, through that experience of being given a task and having to do it, and not knowing anything about it, I learned a lot about fraud.

Cindy Ng: I’d love to hear from your experience what you’ve learned about fraud that most people might not know.

Roxy Dee: What I didn’t consider was that, actually, fraud detection is very much like network traffic detection. You look for a type of activity or a type of behavior and you set up detection for it, and then you make sure that you don’t have too many false positives. And it’s very similar to what network security analysts do. And when I hear security people say, “Oh, I don’t even know where to start with fraud,” well, just think about from a network security perspective if you’re a network security analyst, how you would go about detecting and alerting. And the other aspect of it is the fraudulent activity is almost always an anomaly. It’s almost always something that is not normal. If you’re just looking around for things that are off or not normal, you’re going to find the fraud.

Cindy Ng: But how can you tell what’s normal and what’s not normal?

Roxy Dee: Well, first, it’s good to look at all sorts of sessions and all sorts of activity and get like a baseline of, you know, “This is normal activity.” But you can also talk to the Fraud team or, you know, or whatever team handles…It’s not specific to fraud, but, you know, if you’re detecting something else, talk to the people that handle it. And ask them, “What would make your alerts better? What is something that has not been found before, or something that you were alerted to, but it was too late?” And ask just a bunch of questions, and then you’ll find through asking that what you need to detect.
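Getting a baseline and flagging departures from it, as Roxy describes, can be sketched very simply. The numbers and the three-sigma cutoff below are purely illustrative:

```python
import statistics

# Toy baseline: e.g. logins per hour observed on normal days.
# The values and the 3-sigma cutoff are hypothetical.
baseline = [12, 15, 11, 14, 13, 12, 16, 14]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(value: float, cutoff: float = 3.0) -> bool:
    """Flag values far outside the baseline's normal range."""
    return abs(value - mean) > cutoff * stdev

# A value near the baseline is normal; a large spike is flagged.
```

Real detection pipelines baseline per user or per account rather than globally, but the idea is the same: characterize normal, then alert on what isn’t.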

Like for example, there was one situation where we had a rule that if a certain amount was sent in a certain way, like a wire, that it would alert. But what we didn’t consider was, “What if there’s smaller amounts that add up to a large amount?” And understanding…So we found out that, “Oh, this amount was sent out, but it was sent out in small pieces over a certain amount of time.” So through talking to the Fraud Operations team, if we didn’t discuss it with them, we never would have known that that was something that was an issue. So then we came up with a way to detect those types of fraudulent wire transfers as well.
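The pattern she describes, smaller amounts that add up to a large amount, maps naturally onto a sliding-window aggregation rule alongside the single-transfer rule. A minimal sketch with made-up thresholds and window size:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical thresholds: a rule that only checks single wires over
# $10,000 misses several smaller transfers that add up, so we also
# sum per account over a sliding time window.
SINGLE_WIRE_LIMIT = 10_000
AGGREGATE_LIMIT = 10_000
WINDOW = timedelta(days=3)

def flag_wires(wires):
    """wires: iterable of (account, timestamp, amount). Returns accounts to alert on."""
    alerts = set()
    recent = defaultdict(list)  # account -> [(timestamp, amount), ...]
    for account, ts, amount in sorted(wires, key=lambda w: w[1]):
        if amount >= SINGLE_WIRE_LIMIT:
            alerts.add(account)  # the original single-wire rule
        recent[account].append((ts, amount))
        # Drop transfers that have aged out of the window.
        recent[account] = [(t, a) for t, a in recent[account] if ts - t <= WINDOW]
        if sum(a for _, a in recent[account]) >= AGGREGATE_LIMIT:
            alerts.add(account)  # smaller amounts adding up inside the window
    return alerts

wires = [
    ("acct-1", datetime(2017, 6, 1, 9), 4_000),
    ("acct-1", datetime(2017, 6, 2, 9), 3_500),
    ("acct-1", datetime(2017, 6, 3, 9), 3_000),  # totals 10,500 within 3 days
    ("acct-2", datetime(2017, 6, 1, 9), 2_000),  # stays under both limits
]
```

Here only acct-1 gets flagged: none of its wires trips the single-transfer rule, but their sum inside the window does.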

Cindy Ng: How interesting. Okay. You were talking about your latest role at another bank.

Roxy Dee: I finished my contract and then I went to my current role, which focuses on a lot more than just online activity. I have more to work with now. With each new position, I just kind of layered more experience on top of what I already knew. And I know it’s better to work for a company for a long time and I kind of wish these past six years, I had been with just one company.

Each time that I changed positions, I got more responsibility and a pay increase, and I’m hoping I don’t have to change positions as much. But it kind of gave me like a new environment to work with and kind of forced me to learn new things. So I would say, in the beginning of your career, don’t settle. If you get somewhere and you don’t like what you’re being paid, and you don’t think your career is advancing, don’t be afraid to move to a different position, because it’s a lot harder to ask for a raise than to just go somewhere else that’s going to pay you more.

So I’m noticing a lot of the companies that I’ve worked for will expect the employees to stay there without giving them any sort of incentive to stay. And so when a new company comes along, they say, you know, “Wow. She’s working on this and that, and she’s making x amount. And we can take all that knowledge that she learned over there, and we can basically buy it for $10,000 more than what she’s making currently.” So companies are interested in grabbing people from other companies that have already had the experience, because it’s kind of a savings in training costs. So, you know, I try to look every six months or so, just to make sure there’s not a better deal out there, because they do exist. And I don’t know how that is in other fields, though. I know in information security, we have that. That’s just the nature of the field right now.

Cindy Ng: I think I got a good overview of your career trajectory. I’m wondering if there’s anything else that you’d want to share with our listeners?

Roxy Dee: Yeah. I guess, I pretty much have spent…So the first two or three years, I spent really working on myself, and making sure that I had all the knowledge and resources I needed to get that first job. The person that I was five or six years ago is different than who I am now. And what I mean is, my situation has changed a bit, to where I have more income and I have more capabilities than I did five years ago. One of the things that’s been important to me is giving back and making sure that, you know, just because I went through struggles five years ago…You know, I understand we all have to go through our struggles. But if I can make something a little bit easier for someone that was in my situation or maybe in a different situation but still needs help, that’s my way of giving back.

And spending $20 to buy someone a book is a lot less of a hit on me financially than it would have been five years ago. Five years ago, I couldn’t afford to drop even $20 on a book to learn. I had to do everything online, and everything had to be free. I just want to encourage people, if you see an opportunity to help someone, take it. For example, if you see someone that wants to speak at a conference and they just don’t have the resources to do so, you might think, “Well, this $100-a-night hotel room is less of a financial hit to me than it would be to that person. And that could mean the difference between them having a career-building opportunity or not having that.” Just seek out ways to help people. One of the things I’ve been doing is the free book giveaway, where I actually have people sending me Amazon gift cards, and there is actually one person that’s done it consistently in large amounts. And what I do with that is, like every two weeks, I have a tweet that I send out, and if you reply to it with the book that you want, then you can win that book, up until I run out of Amazon dollars.

Cindy Ng: Is this person an anonymous patron or benefactor? This person just sends you an Amazon gift card…with a few bucks and you share it with everyone? That’s so great.

Roxy Dee: And other people have sent me, you know, $20 to $50 in Amazon credits, and it’s just a really good…It kind of happened accidentally, and there’s the story of it on my Medium account.

Cindy Ng: What were the last three books that you gave away?

Roxy Dee: Oh, the last three? Well…

Cindy Ng: Or the last one, if you…

Roxy Dee: …the most popular one right now, this is just based on the last one that I did, is the Defensive Security Handbook. That was the most popular one. But I also get a lot of requests for Practical Packet Analysis by Chris Sanders and Practical Malware Analysis. And so this one, actually, this is a very recent book that came out called the Defensive Security Handbook. That’s by Amanda Berlin and Lee Brotherston. And that’s about…it says, “Best practices for securing infrastructure.” So it’s a blue team-themed book. That’s actually sold over 1,000 copies already and it just came out recently. It came out about a month ago. Yeah. So I think that’s going to be a very popular book for my giveaways.

Cindy Ng: How are you growing yourself these days?

Roxy Dee: Well, I wanted to spend more time writing guides. I just want to write things that can help beginners. I have set up my Medium account, and I posted something on setting up a honeypot network, which sounds very complicated, but I broke it down step by step. So my goal was to make one article where you could set it up, because a lot of the issues I was having were that, yeah, I might find a guide on how to do something, but it didn’t include every single step. Like they assumed that you knew certain things before you started on that guide. So I want to write things that are easy for people to follow without having to go look up other sources. Or if they do have to look up another source, I have it listed right there. I want to make things that are not assuming that there’s already prior knowledge.

Cindy Ng: Thank you so much for sharing with me, with our listeners.

Roxy Dee: Thank you for letting me tell my story, and I hope that it’s helpful to people. I hope that people get some sort of inspiration, because I had a lot of struggles and, you know, there’s plenty of times I could have quit. And I just want to let people know that there are other ways of doing things and you don’t have to do something a certain way. You can do it the way that works for you.