[Podcast] Penetration Testers Sanjiv Kawa and Tom Porter

While some regard Infosec as compliance rather than security, veteran pentesters Sanjiv Kawa and Tom Porter believe otherwise. They have deep expertise working with large enterprise networks, exploit development, and defensive analytics, and I was lucky enough to speak with them about the fascinating world of pentesting.

In our podcast interview, we learned what a pentesting engagement entails, how to assign budget to risk, the importance of asset identification, and so much more.

Regular speakers at Security BSides, they have a presentation on October 7th in DC, The World is Y0ur$: Geolocation-based Wordlist Generation with Wordsmith.



Sanjiv Kawa: My name is Sanjiv Kawa, I’m a penetration tester with PSC. I’ve been with PSC for…well, since June 2015. Prior to that, I had a couple of different hats on. I was a security consultant. I did some development work, and I was also a QA. My IT knowledge and my development knowledge is pretty well rounded, but my real interests are with penetration testing. Large enterprise networks as well as exploit development and automation. So, yeah, that’s me.

Tom Porter: I’m Tom Porter, been doing security for about eight years. My roots are on the blue team. So I got started in the government contracting space, doing mostly defensive analytics, network situational awareness, dissecting packets, writing IDS rules. So now that I do pen testing, I have an idea of what the blue team is looking for. And I’ve used that to help me bypass the IR restrictions and find my way into CDEs.

Cindy Ng: Well, let’s start with some foundational vocabulary words, like what white box, black box, and grey box are.

Tom Porter: So when you look at different approaches to carrying out pen testing, you’re gonna have this spectrum where on one side is kind of the white box, or what you hear as the crystal box, pen testing. That’s where the organization’s divisions are sharing information about their environment with the penetration testers. So they actually might give them the keys to the kingdom so they can log in and analyze machines. They have an idea of what the network layout already looks like before they come on site. They know what might be the architectural weaknesses in their deployment, or in their environment, and then the penetration testers step in there with an upper hand. But it gives them a little more value in that regard, just because they don’t have to spend the time doing the reconnaissance, the intelligence gathering. So it’s a way to kind of crunch down a pen test.

On the other side of the spectrum are kind of the black box. And that mimics more of what a real-world attacker might be doing, just because they don’t have necessarily insight into what the architecture, what the systems…you know, kind of operating systems are running, what kind of applications, versions of those, who are privileged users, who’s logging in from where. And there is a hefty portion upon the front side of the engagement to do a lot of reconnaissance, a lot of intelligence gathering, a lot of monitoring. But it’s also a great test of your incident response team to see how they’re adapting and responding to what these black box testers might be doing.

And then kinda in the middle, we have this notion of a grey box test. It’s where some information is shared, but not necessarily everything. And it’s a style that we like. And it’s kind of the assumed breach style, where we are given an idea of what network ranges reside there. We’re given an idea of maybe generalized what the process is or a privileged user might use to get into a secure environment. We know ahead of time that in the past year, they’ve rolled out a new MFA solution. It’s another way of kinda crunching down the time necessary for an engagement, from several months down to a week or two. In that way, we don’t break the bank with our customers. But we can still work with them to provide insights just because we have…or provide value just because we have a little more extra insight into their environment.

Cindy Ng: Even if you get a white box environment and they tell you everything that they know, there are some still grey areas such as things that they might not know, so do you think essentially you’re always working in a grey box zone?

Tom Porter: To a degree, for sure. As much as we’d like to, we don’t necessarily have a full asset list of everything in an environment. Not every organization knows every single application that’s been installed on their local development machines. There’s always that kind of grey box notion to it, just to generalize that not every company has an idea of what their inventory list looks like. And that’s where we come in. We can do our kind of empirical assessment to figure out what systems are running, what services are listening. And we can use that to cross-reference with what our client has on their side to figure out where the deltas are and give them a more complete picture of what their inventory list looks like, what their assets look like.

Cindy Ng: What’s the difference between a penetration test and a vulnerability scan? Because on the surface they sound potentially very similar.

Sanjiv Kawa: This is Sanjiv Kawa here. The real difference between a vulnerability assessment and a penetration test is that the vulnerability assessment does sort of identify, rank, and report any vulnerabilities which have been identified in the environment. But it doesn’t go that next step, which is to test the exploitation of those vulnerabilities and leverage those exploitations for the assessor’s personal benefit, or the assessor’s goal in that particular penetration test. So a good example would be, you know, a vulnerability was identified with a particular service in this network. Well, the vulnerability scan will say, “You know, this is a high-risk vulnerability because this service is using weak passwords, or this service is unpatched.” And at that point it’s kinda hands-off. It’s up to the patch management team or the incident response folks, or whoever the blue team is in that environment, to assess that vulnerability and put it into sort of a risk bubble as to whether they wanna fix it, or whether it’s an okay risk in that environment.

The penetration tester will exploit that vulnerability and leverage whatever underlying information on that system they can use to then move laterally throughout the environment, vertically throughout the environment, and essentially get to their end goal. You’re absolutely right, there’s definitely a bit of haze surrounding what the differences between those two things are. But really, the penetration test shows value in exploiting these vulnerabilities and ultimately reaching the end goal, as opposed to assuming these vulnerabilities exist, not testing the exploitability of them, and not really understanding the full depth of what this vulnerability on this particular service can result in.

Cindy Ng: Are there a certain number of vulnerabilities you see again and again, or something you would see as new and upcoming that you rarely see? I was reminded, last week I was talking to a cryptographer and he said, “You don’t always have to go complicated. Sometimes it’s back to the basics.” So then my second question is: you might have come up with a list of a bunch of vulnerabilities, so how do you prioritize them?

Sanjiv Kawa: Yeah, it’s a really good question. And the cryptographer that you were speaking to is absolutely right. There’s times where you don’t actually need to complicate the situation. To be fair, a large amount of my penetration tests, I’m not actively exploiting vulnerabilities in terms of services, I’m actually looking for misconfigurations in a pre-existing network or native features in pre-existing operating systems that get me to my end goal.

And to answer the second part of your question, you know, what’s new and upcoming in the whole vulnerabilities world? Well, we’ve recently seen the Shadow Brokers release the dumps from the Equation Group, who have kind of a close tie to the NSA, but that’s neither here nor there. And, you know, these guys have been hoarding zero-day exploits, which absolutely affect a large spectrum of operating systems and services. The most common, EternalBlue, for example, affects the SMB version 1 protocol from, I think, 2012 all the way down to, you know, Vista/2003. It’s a really interesting space that we live in, because there’ll be a lull for a little while where there’s no real service exploitation, at least in a wide sort of area of what you would expect an enterprise network to look like. And so you’re playing the misconfiguration game. And that usually gets you to where you need to be. However, it’s really exciting when you start seeing exploits which you’ve heard about on the wire, but haven’t necessarily been released yet and turned into a real proof of concept.

I guess another thing I kind of touched on here as well is that it’s all sort of environment specific. It’s really interesting. There’s no real concrete methodology. I mean, you do have the PTES, the pen testing execution standard, where you kind of go from OSINT all the way to Cleanup, and have Post Exploitation in between, and vulnerability assessments and exploitation. And that’s kind of a framework that you can follow as a penetration tester. But what I’m trying to get at here is, it’s really organic when you’re hunting for these vulnerabilities and misconfigurations in a network.

Cindy Ng: Tom, you mentioned CDE earlier. And before I spoke to you guys, I had no idea what CDE meant. It stands for cardholder data environment. Can you explain to our listeners what that means?

Tom Porter: CDE is the cardholder data environment. It’s this notion that comes from PCI compliance, from the payment card industry. They put together a standard for how folks that host, so merchants and service providers, should secure their environments to be compliant. And it started back, you know, over a decade ago, where Visa, MasterCard, and three other brands had their own testing standards to be compliant with each of their brands. But it was kinda clunky, and just because you’re compliant with one brand doesn’t mean you’re necessarily compliant with the others. And not a lot of merchants or service providers really came on board.

So they got together and they ended up producing version 1.0 of what’s now known as the PCI Data Security Standards. And it’s evolved over the years. It doesn’t necessarily fit every business model, but it tries to incorporate as many as possible. It started out mostly covering web apps, but it’s eventually evolved into hosting, so data centers, hosting solutions, web applications, as much as they can get under the umbrella. So in PCI DSS, they have what’s called the PCI zone. And it’s a fairly strict bounds on the systems that store, process, or transmit credit card data or sensitive authentication data. And what we just call that in our parlance is the CDE. And it’s our target for PCI pen testing.

Cindy Ng: Let’s go through a scenario, an engagement that you both have been a part of. I wanted to know, what does a regular engagement look like? Are they all the same? Is there a canned process?

Tom Porter: So we work within what their standard dictates, and as it’s evolved over the years, typically these tests should be done by third parties. They should follow a standard that’s already publicly accessible and vetted. So like Sanjiv mentioned earlier about PTES, or if you’re using something like OSSTMM, something that’s been rigorously discussed and debated and has been vetted as a proper way of going about an engagement like this, instead of just rolling your own, in which case you might not have full coverage.

So what we do is kind of adapt these to each client, because not every client environment is uniform, and not every business model is the same. So we’ll have some clients where we go in with an already pretty good idea of what we’re gonna be doing. We know how we’re gonna proceed from A to B to C, with a little bit of room for creativity mixed in. Some clients are brand-new. Some of the environments or technologies they’re rolling out, we’ve never seen before, and we have to get creative. We work cooperatively with the client to figure out how we’re gonna rigorously test this so it meets the letter of the standard. So we stick to a general methodology, but that doesn’t necessarily mean it’s rigid. We do have some room in there for flexibility to adapt to whatever the client is using.

Cindy Ng: Oftentimes clients are struggling with old technology as it attempts to integrate with new technology, and it takes a few versions to get it right. You can make a recommendation one year and it might take a while for things to smooth out. What is the timeline for fixing and patching?

Sanjiv Kawa: Yeah, that’s a really good question. So after we’ve done a penetration test, and if significant findings have been made which really affect the client’s CDE, then the client has 90 days to remediate those findings. There are typically various different ways that you can do this. In most cases, an entire re-architecture of a client’s enterprise network is not gonna be a completely valid recommendation, right? There’s just not enough time or resources to complete a task like that within 90 days. So at that point, we start looking at, you know, reasonable remediations that we can suggest. And often, clients might wanna look at one bottleneck. So, for example, if I fix this, how does this affect the rest of the vulnerabilities that you guys identified? Does that kind of wrap it into a mitigation bubble in terms of, “Will this particular bottleneck affect the security of the CDE?”

Secondarily, there might be compensating controls that you can put into place to help, you know, harden endpoint systems, or there might be network-based controls you can put into place, like, for example, packet inspection, or rate limiting, or just real segmentation. So there are a lot of creative solutions that we can kind of adapt on a per-client basis. There’s really no silver bullet to fix an enterprise network. And ideally, what we strive to achieve is to give the client the best remediation possible, one which fits a correct timeline and is reasonable to do in their environment. I think we’re different in the sense that a lot of penetration testers will enter an environment, they won’t understand the full complexity and dependencies of an enterprise network, and once the penetration test is done, they kinda wipe their hands and don’t really follow up with any sort of meaningful remediations or suggestions. We interact with everyone, from the system administrators and network administrators all the way up to the C-Suite, to identify meaningful solutions, meaningful recommendations that they can implement within a reasonable timeline.

Cindy Ng: There is a human aspect. You know, people say humans are the weakest link, and social engineering, for instance, is one of the many requirements in PCI compliance. And it’s one of the things that people often debate about. You know, some say that users need more training, and then there are also other security researchers who think that users aren’t the enemy, that we haven’t focused enough on user experience design. What is your experience as a pen tester, having worked with so many different departments, where there’s both a human and social aspect as well as technology and security? And when you have multiple layers of complexity, how do you mitigate risk, and what is your approach?

Sanjiv Kawa: Yeah, that’s a really good question. I guess every person has a different opinion. If you look at the C-Suite or management, they might say it’s a policy issue. If you look at the system administrators and the network administrators, they might say, “Oh, it’s a technological issue,” or indeed, “It’s a user issue.” Touching on your first point, social engineering is not a requirement in PCI DSS 3.2 yet. PSC is a co-sponsor of the PCI DSS Special Interest Group for pen testing, and we’ve authored a significant portion of the pen testing guidance supplement for PCI DSS, and we’ve also authored a significant portion of PCI DSS itself. And something we are trying to work on is getting social engineering to be a requirement.

Now, the hardest part about that is how you can measure whether user education, whether user training, has become effective over time. In addition to that, we also believe that, you know, the outcome of social engineering shouldn’t be pass/fail, right? There should be a program of some level, which is something we believe you should be doing, whether it’s phishing or vishing. But it should mainly be for reinforcement and betterment of the end user. Yeah, I guess that’s kind of our stance on social engineering.


We currently don’t do it. And from a personal perspective, I think there are only so many times that you can tell an organization that hires you that phishing or vishing, or some form of social engineering, was the main entry point, right? There are only so many times you can do that before they become kind of sick of your approach, or your initial vector or foothold into their environment. There are a lot more creative ways, a lot more ways that show value, especially with pre-existing technologies or pre-existing things that they had already deployed in their network.

Cindy Ng: I was reading the penetration testing guidance and it was so thorough that I just assumed that that was what you had to follow for pen testing. That’s why I’m like, “Oh, it’s part of the requirement.”

Sanjiv Kawa: As Tom had spoken about a bit earlier, you know, PCI isn’t the perfect program, right? But I truly believe that it’s designed to fit as many business models as there can be. And it’s a good introductory framework, especially for penetration testers, because it’s so clear. I mean, you define your prized assets, your CDE, which should ideally be a segmented part of your network where there’s absolutely minimal to zero significant connectivity to your corporate network. And the pen tester can use basically anything that’s in the corporate network to try and gain access to the CDE. And if it can be used against you, it’s in scope.

One thing I really like about PSC is that we don’t really enter these scoping arguments, you know. For example, “These are the IPs that you’re limited to.” Or, you know, “You pay $3 to test this IP.” Because with PCI, especially as a compliant standard, it says, “Anything that can be used in your corporate scope to gain access to the CDE can be used against you.” So that way, it can almost bring everything into scope, which is kind of nice for a pen tester. It’s, in my opinion, pen testing in the purest form possible.

Cindy Ng: You both have mentioned CDE multiple times, and that’s where you spend all your time. I was wondering, do you prioritize that list, though? Like, first you need to protect the crown jewels, then do you look at the technology second, the processes third, and then people last? Or does it go back to whatever is important to the organization?

Tom Porter: It’s not uniform for every organization, just because it depends on their secure remote access scheme. The crux might come down to a misconfigured technology appliance, or it might come down to a user who’s behaving in a manner that’s outside of procedure. So it really depends. And that’s why when we’re on site carrying out an engagement, we kind of cast a wide net, because we wanna catch all of these deficiencies, whether it’s in the process, the systems, or the people. So to give you some examples, we might see that a technology that is supposed to provide adequate segmentation between a secure zone and a less secure zone has a vulnerability in some type of service that we can exploit. It’s rare, but we see it every now and then. That would be something that falls into the technology list.

Sometimes we might see, on the user side, an admin who sits outside of the PCI zone but wants to get in, and has to do it fairly often, so they’ve set up their own little backdoor proxy on a machine that’s on a separate VLAN. So we’ll get on their machine after we’ve compromised the domain and inspect their netstat connections. We’ll see that they’ve set up their own little private proxy that they haven’t told anybody about. So it depends. We cast a wider net just because we’re not entirely sure what we’re gonna find. And the organization doesn’t necessarily always know where the breakdown in the process will be.

Sanjiv Kawa: To add to that, I would also say that a majority of our time is spent in sort of the Post Exploitation phase. Typically, our time is spent identifying that network segmentation gap and trying to jump from the corporate network into the CDE, and assessing all possible routes which will, you know, result in that outcome. The corporate domain is kind of like the Wild West. A lot of clients, in my opinion, don’t have enough emphasis and controls placed around the corporate network, but they’ve really secured the CDE and all the access into it: the authentication, their multi-factor authentication, and granular network segmentation. Yeah, a lot of our time is spent assessing those sorts of routes into the CDE.

Tom Porter: And to piggyback off that, Sanjiv’s reminded me, I’ve been on a number of engagements now where we have this kind of division of labor where most of the resources have been applied to the CDE to make it as secure as possible, while the non-CDE systems, or corporate scope, have kind of fallen by the wayside with regards to attention, and then time and money.

And then we find that the corporate network is set up with a trust, like a domain trust, as an example, with the CDE. So if we compromise the corporate network, now we have a pivot point into the CDE via this domain trust. So we end up going to clients and saying, “Hey, we’re very proud of how much work you’ve put into securing your CDE, but now your CDE is vulnerable to the deficiencies of the corporate network because you’ve put this trust in it.”

Cindy Ng: I think you’re thinking of security as a gradient, whereas a lot of C-Level folks and regular users, in general, think, “Well, you’re either secure or you’re not. It’s a zero or a one.” How do you explain that to the C-Level when you’re talking to them?

Sanjiv Kawa: Sure. Yeah. So I don’t believe that security is binary. I think, especially in modern networks, there needs to be an adequate balance of convenience and security, but that needs to be applied at the right levels. There are so many examples that I might not even be able to get into some of them. But whether you’re an internal assessment administrator, an internal network administrator, or an internal IT manager, we, as third-party assessors, provide you with the necessary tools, vocabulary, and recommendations that you can then sort of take and package for your C-Level suite.

I think a lot of the focus that I’ve definitely been looking at lately is just better access and authentication controls. It seems like one of the most common entry points into most networks is just weak passwords. And so how do you remediate that? How do you put a policy in place? Well, in truth, most organizations have a policy in place, but it’s a written policy. It’s not necessarily a technical control. There are technical controls that you can put in place which are kind of opaque to the end user and kind of make them better without realizing that they’re becoming better.

So a really common example is third-party integrations into Active Directory, right? By default, I believe the Windows password policy is a mixture of alphanumeric characters and a minimum length of seven characters. And, you know, in modern networks, that’s kind of archaic to think about, because it doesn’t have any sort of intelligent identification of whether a user is using a season plus the year as a password, or the company name plus 123 as a password, or something to that effect. So how do you train a user to become better at selecting passwords? Well, in short, you can purchase one of these integration tools, integrate it into Active Directory, and load in bad passwords, or what you would consider to be bad passwords. And at that point, a user is automatically more secure because they’re unable to select a weak password. By default, they’ve already selected a better password. It’s really just about identifying what an organization’s weakest points are, what their failing points are, and how you can make those better with a cost-effective, potentially technical control which can remove that risk from the environment.
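As a rough sketch of the kind of check those Active Directory integration products apply (in practice they hook password changes via a filter on the domain controllers), here's what rejecting "season plus year" or leetspeak variants of banned words might look like. The banned-word list here is a hypothetical example, not any product's actual list:

```python
import re

# Hypothetical banned base words; a real deployment would load a large
# breach-derived wordlist plus company-specific terms.
BANNED_BASE_WORDS = {"password", "summer", "winter", "spring", "autumn", "acmecorp"}

# Common leetspeak substitutions to undo before checking.
LEET_MAP = {"@": "a", "0": "o", "1": "i", "3": "e", "$": "s"}

def is_banned(password: str) -> bool:
    """Reject passwords built from a banned base word plus the digits
    and symbols users bolt on to satisfy policy, e.g. 'Summer2017!'."""
    normalized = password.lower()
    # Strip trailing digits/punctuation first ("summer2017!" -> "summer").
    normalized = re.sub(r"[\d!@#$%^&*._-]+$", "", normalized)
    # Then undo simple character substitutions ("p@ssw0rd" -> "password").
    for leet, plain in LEET_MAP.items():
        normalized = normalized.replace(leet, plain)
    return normalized in BANNED_BASE_WORDS

print(is_banned("Summer2017!"))   # flagged as weak
print(is_banned("P@ssw0rd123"))   # flagged as weak
```

The point of the normalization step is exactly what Sanjiv describes: a plain length-and-complexity policy happily accepts "Summer2017!", while a base-word check catches it.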

Cindy Ng: You spoke a little bit about being cost-effective. How do you rate and assign risk to the budget? Do they tell you, “Here’s how much money we have, work with it”?

Tom Porter: Not necessarily. We give them recommendations, but our recommendations aren’t necessarily gospel. They know their resource constraints, whether it’s budget, whether it’s people, time, whatever it may be. And they work within those. And we can be flexible with them throughout the remediation process. So what we do ask, as we work with clients through remediation, is that if they come up with ideas for how they wanna go about remediating a finding, they come back and bounce those ideas off of us. Let us pen-test the idea on paper before they invest a significant amount of resources into it. And then we can save each other a bunch of headache down the road.

Cindy Ng: And when you’re engaged with the C-Level, what do they care about, versus what IT people care about?

Sanjiv Kawa: I think the most common thing with the C-Suite is just brand integrity, right? I mean, if they show up on the front page of a newspaper because of a breach, you know, it’s really gonna negatively impact their ability to continue to sell. And a byproduct of that is customer confidence, right? So the C-Suite will always care about, in relation to security, brand integrity and customer confidence. Second to that, they care about time to remediate, and a byproduct of that is cost to remediate. But from my experience, those are probably the big few things that the C-Suite cares about the most.

Tom Porter: I’ll say part of that also depends on the type of business they’re in, dependent upon what your revenue stream is. You know, something like a denial-of-service that takes you offline for several hours might have a greater impact than some data being exfiltrated from your environment. If you think about, say, a pharmaceutical company, their crown jewels aren’t necessarily their uptime, it’s their patents or their IP. So if we can exfiltrate those, then that’s gonna have a much bigger impact on the business than, say, taking them offline for a few hours.

Cindy Ng: You mentioned earlier that you guys are also working against a whole bunch of potential zero-day vulnerabilities that you might encounter in the future. And we saw that happen. How do you red-team yourselves and improve on the new knowledge that you get?

Tom Porter: There’s a lot of debate right now, dependent upon your preferred internet source. But with regards to how we see it as penetration testers, there are a few things that we do for our clients. One, we can send out advisories and bulletins to our clients just to let them know, “Hey, you know, the internet is a dangerous place right now, and these are the things you should be doing to make sure you’re secure.” Another thing that we do, as we’re going about our engagement, is lay out all of our findings in our report, and we do executive briefs for customers as well. When we do these presentations and reports, we lay out all of our findings to say, “These are our highs, these are our mediums, these are some best practice observations.” And we tell them, “You know, you should remediate findings one, two, and three in order to be PCI compliant.” But we also use this as an opportunity for organizations to strengthen their security posture. So we might have some findings that fall outside of that kill chain or that critical path to compromise, but if we see an opportunity for someone to strengthen their security posture, we’re gonna mention it to them. We won’t require it for remediation for PCI compliance, but we’re more of a mindset of security first, compliance second.

Cindy Ng: What’s the important security concept that you wish people would get?

Sanjiv Kawa: I don’t wanna really preach here, but I just think it comes down to really understanding your assets, your inventory. And a good way to do that is to have an internal security team who is proactive and continuously analyzing the systems and services that are exposed. Yeah, I think it all comes down to good asset identification, good network controls. And the best companies that I have been to, the ones where I have not been able to compromise the CDE, just have really good segmentation. Identifying your prized assets, putting those into an inaccessible network, or a network that is accessible only to a few, and only through very complicated access control mechanisms that require multi-factor authentication, very limited. There are kind of multiple answers to that, but really it comes down to asset identification, network controls, segmentation, and limitations on authentication controls. Those are the key things that I care about the most. Tom might have some different things.

Tom Porter: When I talk to users, and it happens in almost every single compromise or engagement that I’m on, a lot of it comes back to either weak passwords or, more commonly, password reuse. And that’s something that echoes with users not only in the enterprise but also personally, when they’re on their, you know, email or their banking websites. And you see this a lot with attacks out across the internet with credential stuffing. But the idea is not only choosing strong passwords, but also using unique passwords for every site where you might have a login stored. You see all these breaches, from places like, you know, LinkedIn to any handful of other dumps, where passwords are dumped either in clear text or hash form, and when those hashes are cracked, there’s now a whole litany of usernames, email addresses, and passwords that folks can try to breach other accounts with.

And that’s kind of where the idea of credential stuffing comes in. A password breach comes out, a set of credentials are revealed, and then attackers have these tools to try this username/password combination across a wide array of sites. And what happens is users end up getting several of their accounts compromised because they’ve reused the same password. So not only does that echo on a personal level, but we see it in the enterprise too. I’ll crack and get someone’s, you know, Active Directory password, and it just happens to be the same password for their domain admin account, or it happens to be the same password used to log in to the secure VDI appliance. So it’s not necessarily something that’s gonna show up in a vulnerability scan listed out in a report, but it is something that we see often, that we end up having to remediate almost all the time.

Sanjiv Kawa: Yeah. And there are several ways that you can combat this. If you have an internal security team, one of the things they should be doing is monitoring breach dumps and monitoring passwords, and simply pulling the hashes for any of the domains that they have and running comparison checks to see if any of these known bad passwords tied to known users in their environment exist. Secondly, just comparing the hashes from two separate domains, or comparing hashes from two separate user accounts, specifically privileged and non-privileged. And it’s kind of just doing password audits, right? Maybe quarterly, or whatever aligns with your password change policy, be it 30 days or 90 days, whatever it is. And that way, you can, as a security team, ensure…well, ensure to a certain degree that users are selecting passwords which are smart, and more importantly, aren’t reusing passwords from zones of low security in zones of high security.
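The audit Sanjiv describes works because identical passwords produce identical stored hashes, so reuse can be detected without ever learning the plaintext. A minimal sketch, assuming the auditor has dumped account-to-hash mappings (real audits would compare NT hashes extracted from the domain; SHA-256 stands in purely for illustration, and all account names are hypothetical):

```python
import hashlib
from collections import defaultdict

def audit_reuse(accounts: dict) -> list:
    """Group accounts that share the same stored password hash.

    `accounts` maps account name -> password hash. Any group of two or
    more accounts with the same digest is reusing a password.
    """
    by_hash = defaultdict(set)
    for account, digest in accounts.items():
        by_hash[digest].add(account)
    return [accts for accts in by_hash.values() if len(accts) > 1]

def h(p: str) -> str:
    # SHA-256 stands in for the NT hash purely for illustration.
    return hashlib.sha256(p.encode()).hexdigest()

accounts = {
    "jdoe":       h("Winter2017!"),
    "jdoe_admin": h("Winter2017!"),   # privileged account reuses the password
    "asmith":     h("unique-passphrase"),
}
print(audit_reuse(accounts))  # one group: jdoe and jdoe_admin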

Cindy Ng: Have you seen organizations move away from passwords and go into biometrics?

Sanjiv Kawa: Yeah, there’s been talk about it. I recently had a conversation with a client last week about this. But what they’re battling with is user adoption and cost. Cost being having to have certain, you know, devices which can read fingerprints or read faces. What people end up just going with is a second factor of authentication, either through an RSA one-time passcode, or Duo multi-factor authentication, or Google Authenticator. There are lots of multi-factor authentication options out there, because, you know, biometrics is still a single factor of authentication, right? So having a token supplied to you by any of the aforementioned providers is probably one of the most cost-effective options, and a secondary factor of authentication is more secure in the long run in terms of user account controls.
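The one-time codes Sanjiv mentions are typically TOTP (RFC 6238), which is just HOTP (RFC 4226) with the counter derived from the clock. A self-contained sketch using only the standard library (authenticator apps usually exchange the key as a base32 string; a raw byte key is used here to keep the example short):

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the big-endian counter,
    dynamically truncated to a short decimal code."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, for_time=None, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP with the counter taken from the current
    30-second time window."""
    t = time.time() if for_time is None else for_time
    return hotp(key, int(t // step))

# RFC 6238 test secret; at t=59 the counter is 1, which yields "287082".
print(totp(b"12345678901234567890", for_time=59))
```

Because the code changes every 30 seconds and is derived from a shared secret, a phished or reused password alone is no longer enough to log in.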

Cindy Ng: What kind of problems, though, do you see with biometrics? Let’s say cost isn’t a problem. Could you guys play around with that idea for a little bit? What could you see potentially going wrong?

Tom Porter: So what I have seen in the past is that some of these biometric-type logins, when they’re actually stored on the machine, so on, like, a Windows desktop, are just stored in memory as some kind of token, very similar to a password hash. So it just operates the same way, and you could reuse that token around the network and still adequately impersonate that user.

Not only that, we’ve also seen, and you’ve probably read about it online, technologies using, you know, facial-type recognition, and how people can mimic that by just scrolling through your LinkedIn or Facebook and reconstructing the necessary pattern to log in with your face. And those are some of the things. Unlike a password, these are factors that are potentially publicly available, things from your hands for thumbprints to your face on Facebook. It’s just something that we haven’t historically secured in the past.

Cindy Ng: Your experience is so vast and multi-faceted, but is there something that I didn’t ask that you think is really important to share with our listeners?

Sanjiv Kawa: I guess we could probably share some of the other penetration tests that we do. Not all of our pen tests are PCI-oriented. We have done things like pre-assessments in the past. So, for example, an organization is gearing up for another compliance regime, whether it be, you know, SOX or HIPAA, or FFIEC, or something to that effect, and PSC will do pre-assessment penetration tests conforming to the constraints of those compliance regimes.

We also do evaluation-based pen tests. So let’s say…and Tom sort of spoke about this a bit earlier, but let’s say your organization is implementing a new set of core security technology that requires some sort of architectural shuffle, whether it be a new MFA, or multi-factor authentication, implementation, or some sort of segmentation, or a new monitoring and alerting system. We can pen-test those and identify whether the technologies are adequate for your environment before you deploy. We’ve also done very complex mobile application and, well, just regular RESTful or SOAP-based API testing, or other application and web services-style testing that falls outside of the PCI compliance sort of zone. But for the most part, our mindset is still PCI-oriented, right? All you do is substitute the CDE for that client’s goal, and that’s your success criteria. That’s what you’re trying to get to. And you’re using everything that you can ingest around you to get to that goal.

Tom Porter: I’d like to add, when we’ve sat down with clients and we’re looking at the results of a penetration test, we lay out the findings, what remediates them and what doesn’t necessarily remediate them. One of the luxuries of PCI that’s kind of a gift and a curse, depending on your perspective, is that we actually get to see remediation through. And that’s a rarity in our industry, just because remediation is so rarely required. So we actually get to sit there and walk through remediation with clients. And when we come back and do our retesting to verify the remediation’s in place, we find that not all of the findings were always remediated. Maybe they remediated some of them, or the ones we required, but not necessarily all the ones we had in the report.

And as Sanjiv spoke about earlier, we kind of have this notion of a kill chain, the findings that we link together to achieve a compromise. You might hear it referred to as a critical path to compromise, a kill chain, a cyber kill chain, something with the word “chain” in it. Yeah, the attack path. But essentially we’re trying to identify these bottlenecks, and what ends up happening is these organizations get to places where there’s kind of a mitigation bubble. They start offering compensating controls, which is just more of a Band-Aid solution instead of fixing the real problem. So what we do, especially with these findings, is talk with the network or sysadmins who have a very knowledgeable layout of the tech environment. When they’re trying to relay the importance of having, you know, patches for this, or a new appliance for that, up to the C-levels, they can use us as a resource. They can tell us where to focus, and then in our report, if we find a deficiency, we’re giving the tech people ammunition to take to the C-level to say, “Hey, we actually need to act on this. We’ve got a verified third party that says we need to beef up our security here, we need to invest resources there.” And it kinda gives them some backing to say, “Hey, you know, we need this.”

Cindy Ng: Do you notice that you’re also looking at IoT devices?

Tom Porter: Absolutely. All the time, actually. Because when we do our network sweeps, we see all kinds of things out there. And they’re almost always set with factory default passwords. They usually have some type of embedded OS that we can pivot through, and they’re usually on wide-open networks that aren’t using host-based firewalls to protect ingress and egress. So we now have our pivot points to get around.
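A defensive takeaway from the IoT sweeps Tom describes is auditing your own inventory for factory defaults before an attacker does. A minimal sketch, assuming device records come from something like a configuration-management database; every vendor, hostname, and credential below is hypothetical:

```python
# Illustrative (vendor, username, password) defaults -- not a real list.
KNOWN_DEFAULTS = {
    ("acme-cam", "admin", "admin"),
    ("acme-cam", "admin", "1234"),
    ("printco",  "root",  "public"),
}

def flag_default_creds(devices: list) -> list:
    """Return hostnames still configured with a known factory default.

    Each device is a dict with host/vendor/username/password fields,
    e.g. exported from an asset inventory.
    """
    return [
        d["host"] for d in devices
        if (d["vendor"], d["username"], d["password"]) in KNOWN_DEFAULTS
    ]

inventory = [
    {"host": "cam-lobby", "vendor": "acme-cam",
     "username": "admin", "password": "admin"},
    {"host": "prn-2f", "vendor": "printco",
     "username": "root", "password": "S3cret!"},
]
print(flag_default_creds(inventory))  # ['cam-lobby']
```

Tying a check like this to the asset-identification practice discussed earlier turns default credentials from a pen-test finding into something caught internally.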
