Tag Archives: podcast

[Podcast] Bring Back Dedicated and Local Security Teams

Last week, I came across a tweet that asked how a normal user is supposed to make an informed decision when a security alert shows up on their screen. Great question!

I found a possible answer to that question in a recent keynote by Runa Sandvik, director of information security at The New York Times, at the O’Reilly Security Conference.

She told the attendees that many moons ago, Yahoo had three types of infosecurity departments: core, dedicated and local.

Core was the primary infosec department. The dedicated group were subject matter experts on security who still sat in the infosec department but worked with other teams to help them conduct their activities in a secure way. The security pros in the local group were not officially part of the infosec department; they were the security experts embedded on other teams.

Who knew that dedicated and local security teams once existed?! It would make natural sense for them to be the ones assisting end users with security questions, so why don’t we bring them back? The short answer: it’s not so simple.

Other articles discussed:

Panelists: Kilian Englert, Forrest Temple, Matt Radolec

[Podcast] Rita Gurevich, CEO of SPHERE Technology Solutions

Long before cybersecurity and data breaches became mainstream, Rita Gurevich, founder and CEO of SPHERE Technology Solutions, built a thriving business on the premise of helping organizations secure their most sensitive data from within, instead of just securing the perimeter from outside attackers.

And because of her multi-faceted experiences interacting with the C-Suite, technology vendors, and others in the business community, we thought listening to her singular perspective would be well worth our time.

What stood out in our podcast interview? While others worry about limited security budgets, Gurevich envisions more hands on deck in the field of information security. The reason: more and varied threats, an oversaturated vendor marketplace, and a cybersecurity workforce shortage.

“What I see happening is that there’s going to be subject matter CISOs across the company, where there will be many people with that title who become experts in very specific domains.”

And although cybersecurity concerns are no longer as industry specific as they once were, Gurevich does recognize that certain industries are more at risk than others.

She approaches every industry, with its varying degrees of risk and threats, compliance requirements, and disparate systems, in a strategic way – by giving organizations visibility into their data and systems: what they have, what they need to protect, and how they need to protect it.

Transcript

Cindy Ng: Long before data breaches became mainstream, Rita Gurevich, CEO of SPHERE Technology Solutions, built a thriving business on the premise of helping organizations secure their most sensitive data from within. And because of her multifaceted experiences interacting with the C-Suite, technology vendors, and others in the business community, we thought listening to her singular perspective would be well worth our time.

Rita, you founded SPHERE in the wake of the 2008 financial crisis, when you were just 25 years old. Can you tell us about the process behind how you started your business and what kind of services you provide?

Rita Gurevich: Absolutely, I started the company, essentially, on the collapse of Lehman Brothers. And after the bankruptcy, there were many different firms that bought different areas of Lehman. And I was put on a team to help figure out how to split apart all the different data and assets they owned.

So if you can imagine, up until that point, Lehman was super centralized. It was operating as one company, with lots of shared services.

And overnight, we essentially had to figure out who gets what.

So Barclays Capital bought a part of the business. Nomura bought a part of the business. Neuberger Berman bought a part of the business. All these different financial services firms bought different business units from Lehman Brothers.

And what we had to do was essentially a crash course in deep data analytics. We had to learn how to get a really quick understanding of who uses what and map that to different business entities, to figure out where it needed to go.

So that required a lot of tools, a lot of metrics. We built all these algorithms. And we had to do it almost overnight.

And soon after that slightly traumatic time in the history of our country, I had a bit of an ‘aha’ moment and decided to do some independent consulting.

I quickly built a business, and now we focus on cyber security. We have a niche around data governance, identity and access management, as well as privileged access management. And a lot of the experience that I gained at Lehman was very relevant for what I do now, because you essentially had to figure out: how do I capture the information that’s necessary from my environment to create metrics and analytics that are relevant to making sure my information is secure, understanding who owns what, and even potentially preparing myself for some M&A activities?

Cindy Ng: And so, can you describe your work at Lehman Brothers and how you made the connection that it was important to start your business?

Rita Gurevich: Sure. So, during that time, during the bankruptcy, it was really all about data analytics. It was really about looking at all the different data, all the different assets that Lehman owned and figuring out, “Okay, who gets what?” So, if Barclays bought investment banking, how do you know what data belongs to investment banking? If Neuberger Berman bought the investment management business, how do you figure out what data belongs to investment management? So, it was all around going really deep into the data, and using the right tools to capture all the metadata, all the activity, so you can gain an understanding of who’s using it, who owns it, and where does it need to go.
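
The “who’s using it, who owns it” analysis Gurevich describes can be sketched as a small access-analytics exercise. Here is a minimal, hypothetical illustration in Python; the event data and business-unit mapping are made up for the example, not Lehman’s actual tooling:

    from collections import Counter, defaultdict

    # Hypothetical file-access audit events: (user, file share) pairs.
    access_events = [
        ("alice", "fs01/ib-models"),
        ("bob", "fs01/ib-models"),
        ("carol", "fs02/im-research"),
        ("carol", "fs02/im-research"),
        ("alice", "fs02/im-research"),
    ]

    # Hypothetical mapping of users to the business entities that acquired them.
    user_to_entity = {
        "alice": "investment-banking",
        "bob": "investment-banking",
        "carol": "investment-management",
    }

    # Tally which business entity generates the most activity on each share;
    # the top entity is a candidate owner when deciding where data should go.
    share_activity = defaultdict(Counter)
    for user, share in access_events:
        share_activity[share][user_to_entity[user]] += 1

    for share, counts in share_activity.items():
        owner, hits = counts.most_common(1)[0]
        print(f"{share}: likely {owner} ({hits} of {sum(counts.values())} accesses)")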

So, at that time, not a lot of companies were doing that, and there wasn’t really a lot of need to do that at the time. But around 2008-2009, there was just so much movement within financial services. And there was so much happening in terms of companies going bankrupt, being acquired by other companies, all these different businesses kind of spinning up, and changing, and moving hands that this concept became a lot more relevant. So, when I started the company, it really was around selling myself and my experience that I learned, which was very unique at the time. But over the course of not a very long amount of time, probably two years or so, the focus definitely shifted.

So, initially I was talking to infrastructure people, I was talking to operations people, and I was talking about data analytics. And while it was definitely a nice-to-have, and people cared about it, budgets were really tight. We were still knee-deep in one of the worst recessions in our country’s history. So where were the budgets, where were people focusing, where were, you know, the executives and the board members allocating resources? And that was information security. So around 2009-2010, I think the concept of data breaches became a lot more relevant. It became more, kind of, a commonly used word. Companies were starting to actually hire chief information security officers. They were starting to look at data analytics from a security perspective. They wanted to get a better handle on preventing data from getting into the wrong hands, and that’s when I shifted the focus from data analytics to data security. And I think that was monumental for me, because really that’s the premise of what my company does today around the data governance program that we implement.

So I think that my experience at Lehman was definitely a blessing in disguise, but I think that probably anybody that was focusing on data analytics, even tangentially, started to think about data security as well.

Cindy Ng: You were 25 when you first started your business. A lot of your college cohorts were still on their first, second, or third job. Was that relevant, or did you just look at the opportunity and run with it?

Rita Gurevich: I think that my age was probably one of my biggest challenges when it came to starting my business, definitely in the earlier years. And you can only imagine, you know, a 25-year-old walking into a managing director’s office and essentially telling them that you can do a better job than their team can. That’s a really difficult thing to say, and you gotta prove it. So, once you actually start working for them, you better do a good job, which luckily I did and my team did. But as I compare myself to my other college cohorts, I actually think it helped that I went to Stevens Institute of Technology in Hoboken, New Jersey. My business is in Jersey City. My customers are international, but quite a few of them have headquarters in this kind of tri-state area. A lot of my college peers went on to work at all these different companies that could be potential customers of SPHERE. So, I think it actually created an opportunity for me, because it opened the door to have the right conversations with people in technology to explain, you know, what I’m working on and what I’m doing.

And, you know, part of having a successful business is not just a good idea, but it’s having people that you can actually sell to, having a relevant problem that’s gonna help people in their professional careers and their professional lives. So I think that my relationships from school, and being not so far off from graduating college, helped more than hurt. But also, from the Lehman bankruptcy, like I mentioned earlier, it was a time where there was a lot of movement, and a lot of people went to all sorts of different firms on the street. And it was different than how it used to be in the past, where people stayed at the same company for a really long time. That movement essentially created an overnight network for me, where I was able to kind of leverage people that I knew and had worked with for a handful of years across all sorts of different companies within the demographic that I was targeting. So, yeah, I think that my age was definitely sometimes a challenge, but I actually found ways to have it be a benefit as well.

Cindy Ng: But in terms of age, it’s almost irrelevant as long as you have a value proposition and people are interested.

Rita Gurevich: That’s a really, really good point. So, there’s kind of two aspects to it, right? If you have something interesting to say, that’s great, but the way you communicate that message is almost more important, and there has to be a confidence in the way that you present the problem that you’re solving and your solution. That’s going to set you apart from others who are knocking on the same people’s doors, maybe for different areas, but are competing for the attention of the people that you’re trying to get in front of. So, I call that, you know, learned confidence. I can’t honestly say that at 25 I felt like I knew everything. I knew I didn’t, but you have to be able to present yourself in a way where the person on the other side of the table knows that, even if you don’t know the answer, you will figure it out. And the other part of that is perseverance. You have to make sure that you continuously have your goals in mind and push forward.

You know, I mentioned that my company focuses on security, and while that’s still relevant today, even in 2008, 2009, 2010 it was very relevant. You can imagine that the people who are in charge of security at these companies have lots of vendors, and lots of partners, and lots of even internal people knocking on their door, vying for their time. So you have to just make sure that your message comes across strong and that, again, there’s a confidence in your approach, and that you will deliver when push comes to shove.

Cindy Ng: And when you talk about your learned confidence, when a meeting didn’t go as planned, or a presentation didn’t go as planned, what was your self-talk like?

Rita Gurevich: That’s a great question. So I’ve learned that you have to listen more than you speak. You’re going to learn a lot through osmosis, just by being in a room where the conversation is happening. You’re just going to learn and get better. Sometimes, it’s just echoing a common opinion or a common sentiment that the person on the other side of the table has, and reaffirming that you’ve also experienced the same problem that they’re sharing, or you’ve seen it somewhere else, or you’ve solved that problem with a peer of theirs. So I think that learned confidence isn’t necessarily about having memorized a specific compliance requirement or a specific way of doing some task. It’s more about approaching things logically. And if you don’t know, it’s okay not to know. Just make sure your follow-up and follow-through are there. No one expects experts. Data security and cybersecurity as a whole is a very new area. Everyone is learning as we go. But can you think of solutions in a creative way, and are you solving the problems that people are having? Sometimes, it’s not reinventing the wheel. Sometimes it’s solving an existing problem in a smarter, more scalable, and more efficient way. I’ve learned that by failing sometimes. You don’t have to come up with an idea that no one thought of. You just have to come up with a more practical way of doing things sometimes.

And the other bit of advice, and something that I really believe in, is becoming kind of a master of some things. So, instead of the “jack-of-all-trades,” focusing in on something and becoming really good at it, and, you know, that’s what I did. So I call SPHERE a cybersecurity company, but we’re actually pretty niche. We focus on internal threats, and we specifically focus on putting controls on your data, your systems, and your assets. So, it’s a very kind of narrow piece of the pie when you look at cybersecurity as a whole, but that allows my team, and allows me, to train new personnel really, really effectively, because you can hone in on very specific topics. You can give real-world examples of very specific things, and people can really start to grasp, you know, the complicated challenges that we’re solving, but also think of them in a more simplistic, logical way.

You know, all these technology challenges around data breaches and, you know, hackers and all that, it feels very complicated. It really does, but when you break it apart and remove the technical jargon, the problems and the reasons these things are happening are not overly technically challenging. A lot of them are profit driven, they’re people driven. They’re not necessarily about, you know, the right configuration of a tool within, you know, a specific domain. It’s a much more kind of systemic issue. So, I think when you start to gain an understanding of this space, you start to figure that out pretty quickly.

Cindy Ng: On top of starting your business at a really young age, there aren’t a whole lot of women in the industry, and we talk a lot about women in tech. But, you know, I wonder how men can join the conversation, because they coexist with us on this planet, and I wanted to hear your perspective on how we can enlist men as allies in our industry.

Rita Gurevich: I definitely get asked a lot about this topic, because you’re right, there’s not a lot of women in tech, and to be honest there’s not a lot of women CEOs either, so you kind of merge women, tech, CEO. I guess I’m a little bit of an anomaly, but I’m hoping that’s not for very long. I think honestly we need to stop caring that the person that’s joining the conversation is a woman; we’ll know that there’s going to be equality when we’re not forcing that distinction. And I think more and more women are getting involved in technology early on. And technology is part of nearly every child’s life right now, independent of gender, and I think that naturally, maybe over the next 10 to 20 years, it’s gonna cause dramatic shifts in the ratios in the tech workplace.

And I really think that tech is going to be an early adopter of inclusiveness of women, and inclusiveness across the board. Technology is very interesting because it’s analytical thinking, it’s problem solving, it’s researching, definitely mixed in sometimes with creativity and out-of-the-box thinking. Maybe I’m partial, but I think these are natural traits of women, and in the end, if you work for a big company, managers want successful teams, and their managers want successful orgs, and women will rise through the ranks as there are just going to be more of them in the running.

Unfortunately, other industries have not been as lucky. And I bring up two specific women whenever I talk about this topic.

One I met at a panel I was on, “Women in Engineering.” She’s a civil engineer at a big company, and she works a lot with construction companies. Once she’s on a job site, they assume that she’s a secretary, and even when she explains herself, they just don’t listen to her and won’t take direction from her. And she’s expressed how difficult it is for her to advance, and these are challenges that have nothing to do with brains, with smarts, with experience. It’s really a people problem, and I don’t envy that. You know, I struggle with even thinking about how you adjust that mentality.

Another example is a woman that I met as part of the EY Entrepreneur Of The Year program, where I was recognized as well. She owns a liquor company, and half of her job is in a warehouse, and the employees are chain-smoking, they’re, you know, a bunch of old men, no offense to old men, but they kind of act like they’ve never seen a woman with any level of authority before. And it’s sad. You know, I’m very fortunate that I work in an industry where technology is definitely going to be on the forefront of diversity and inclusiveness, but you look at some of these other industries, and you hope that they’ll follow suit, hopefully sooner rather than later, as more women in general are joining the workforce and taking on careers that aren’t traditionally careers that women participate in.

Cindy Ng: So, let’s go back to the technology. You work with many different sectors: retail, energy, hospitals, financial. Can you speak to the different industries and what their concerns are regarding security?

Rita Gurevich: I think this is the first time ever that concerns are not as industry specific as they used to be. And I think that’s also due to just the times that we live in. I mean, everybody now cares about cyber security; people are starting to understand how this affects them personally, how it affects them professionally. You know, a year ago, nobody in my family understood what I did for a living, and now even my grandmother gets it. You know, anytime there’s a breach in the news or on the front page of the paper, she’ll call me, and she’ll say, “Too bad they didn’t have SPHERE.” It’s pretty cute, but I think that just shows that the concept of data breaches and cyber security is part of everybody’s lives. The expectation is that everybody’s going to be involved, and anybody is up for grabs to be affected. And I think the Equifax breach is just a prime example. I mean, it was on every news channel, and we all know that half the country was affected by it. You think about how many people had to, you know, freeze their credit or react to that event. It’s becoming just common sense that every company, every industry needs to focus on this.

So, sometimes I think that the challenges experienced within individual industries are scarier than others. We all know about financial firms; they’ve been the targets and on the front page of papers for a long time. But if we look at hospitals, for example, that can be really scary. So, I’ll give you another anecdote. I love these examples, I use a lot of them, but the one that comes to mind was a panel at an event that we sponsored, and we had a group of CISOs at the front of the room. One of them was a woman, and she was the CISO of a big hospital network, and she explained ransomware and how it affects hospitals differently than, you know, a bank or somewhere else. And she explained, “Imagine you’re a patient about to go into surgery, and the hospital has an attack, and your patient files are now locked down, and you have to now pay ransom in order to get them back, and before you can go back into surgery, the doctors need these records.” And this sounds like a very sci-fi example, and you’re like, “That doesn’t really happen,” but it really happens, and that’s how it happens. It’s not even that our wallets are being impacted; it’s our health, it’s our lives, it’s how we receive healthcare that’s affected by cyber crime. It is so close to home for every single person in the world that I think the industry is just going to massively change. And I think we’re gonna start to see that almost immediately, because it’s just such commonplace knowledge. It’s industry wide, it’s not industry specific, and, again, it’s not just our wallets that are affected, it’s our health.

Cindy Ng: A lot of the problem, previously and maybe even now, is that IT pros are having trouble connecting with the C-Suite. And I’m wondering, after the breach, after the ransom, are CEOs and individuals in the C-Suite getting more involved in cyber security? What are your recommendations when you’re speaking with the C-Suite versus the IT pros, because you’re kind of a conduit between the two channels?

Rita Gurevich: I think the C-Suite, primarily the CISO, has a very different job now than they used to. Honestly, I don’t envy CISOs right now. You have a bad breach, and your whole background is going to be on the front page of the paper. It’s not just that your company will get fined. Your background, your history, where you worked, what your college major was is going to be out there for everyone to dissect and criticize, okay? That is not a position that most people are comfortable with. So I think CISOs now more than ever recognize that the job that they chose and the career that they chose has to be proactive. They have to be on the front lines. They have to think about things in smarter ways. So, I think that we’re going to see a shift where CISOs are going to be the best of the best of the best. I think that a lot of companies took for granted the need for highly skilled leaders within information security, and they’re starting to see what happens to companies once a major attack occurs, and I think that is going to change.

Now, the other challenge, I think, with companies is that many of them placed one person at the helm, and they started to build out these teams, and honestly, it’s not enough. There are way too many threats. There are way too many options. There are honestly way too many vendors that are potentially offering options for one person to be making those decisions. So, what I see happening is that there’s going to be subject matter CISOs across the company, where there will be many people with that title that become experts in very specific domains. So, I think that information security, in terms of employee count, is eventually going to exceed general IT, because making sure that internal people aren’t doing things that they shouldn’t be doing, and doing everything in your power to prevent anybody on the outside from getting in, is becoming more of a priority than uptime and availability of systems.

Cindy Ng: It’s been said that information security is really just compliance, not security. Is that all thrown out the window now that people have realized how serious information security is?

Rita Gurevich: That’s a great question. I’m gonna give you another story. I was on the phone with the CISO of one of the largest manufacturing companies, and we were talking about his agenda for the year. He recently started at that company and was told that his mandate was compliance, and maybe this is because the company struggled with compliance in the past, but he immediately said, “If my mandate is compliance, I don’t want the job. That is not what I should be focusing on.” And the challenge with focusing solely on compliance, as he put it, is that it actually leaves you more exposed. Compliance is about a checklist, and often that checklist is very subjective, and the people who are verifying whether you’ve completed that checklist vary in their levels of expertise. I mean, we have customers ranging from the 1,000-person shop all the way to the 100,000-person shop, and we as outsiders can see that the caliber of the people coming in from the regulatory bodies to check on them is vastly different. Just because you’ve checked the box, it doesn’t mean that you have good security. And it’s good security that’s going to minimize your risk. You have to think about security first. If good security drives compliance, and not the other way around, you’re still going to achieve the goal of good compliance, but you’re also going to put the right preventative controls in place to minimize a data breach or some other cybercrime.

Cindy Ng: Let’s talk more about your company, SPHERE. What is the mission of your company?

Rita Gurevich: The mission of SPHERE is to help companies take control of their data, their systems, and their assets. What that means is giving them the visibility that they need: understanding what they have, what they need to protect, and how they need to protect it. Along with that comes a SWAT team approach, helping them remediate issues that they have, and putting tooling in place to allow them to manage their environments effectively, in house. A lot of companies have no idea where to start in terms of looking at data governance. They have no idea what needs to be remediated or fixed, or how IAM workflows work. Or they have no idea what threats privileged accounts are posing for their organizations, because they don’t have threat-level visibility. And once we get them the visibility, a lot of times they need a one-time SWAT team approach to clean up the environment, and that’s something that we also do. And we also partner with different vendors, and obviously Varonis is one of the most strategic partners we’ve partnered with. We offer tooling to help people manage their environment on their own, with their own resources, long-term. We also have our own solution called SPHEREboard, which integrates with Varonis, along with a handful of other best-of-breed technologies, to provide a single pane of glass into your data, your systems, and your assets.

Cindy Ng: So, you don’t curate a list of vendors for your different clientele to meet their needs? It’s more like, “Here’s what we know all companies need. Here’s what we can provide for you.” Because sometimes your clients don’t know that certain technologies might exist, you’re essentially giving them one panel of “here’s everything you need to know.”

Rita Gurevich: Yeah, that’s exactly right, and we’re by no means a VAR where we have a portfolio of, you know, 100 different products, and then we switch them out as we need to. We really invest in the relationships that we’ve built with our partner network and with the companies that we’ve integrated our solution with, and that’s important because you need to have consistency. And if you want a solution to be sticky, it has to be relevant, it has to answer the right questions, and there has to be a history of that company doing things the right way. There’s going to be a lot of disruption within this industry, and there’s going to be a lot of companies coming into the space, offering really cool widgets and gadgets and all that good stuff, that probably aren’t going to be around in a year or two. That’s just the nature of entrepreneurship and innovation, but there are going to be plenty of those that come around and stick around. The relationships that we’ve formed and the partners that we’ve worked with are ones that we’ve been working with now for a really long time, way before anyone even thought something like Equifax could happen. So, we’ve been solving this problem since way before it was cool, and we’re gonna continue to offer that, and be more innovative, and continue to solve problems for our customers.

Cindy Ng: Have you ever found, after speaking with, say, 10 vendors, that you realize, “Oh, we’re missing X, Y, and Z products, and I’m gonna go find a vendor to see if there’s anyone I can work with”?

Rita Gurevich: Yeah, at times, but I think it happens a little bit more naturally than that. I think that it’s first about the problem statement, so I’ll give you an example. The last area that we added to our portfolio more officially is privileged access management, and, you know, our focus was, of course, on the traditional challenges with password vaulting and such. But really, from a SPHERE perspective, we were noticing challenges in deploying those solutions: understanding what privileged accounts exist in my environment, whether it’s in my Unix environment, on my Windows servers, my databases, etc., and who owns those accounts, and who do I need to educate on a new way of working? So, it’s not necessarily about the products that will, you know, do password vaulting, or record sessions, or whatever the tools may do; it’s more about the people and the process, and all the work that needs to be done ahead of that. So, I think our expertise comes with that. Now, there’s no doubt in my mind that CyberArk is the leader in that space, and we decided to partner with CyberArk because of that. But, that being said, our solution for privileged access management is not just to recommend a tool; it’s to create a process, to create an end-to-end solution that includes a one-time remediation effort. That maybe includes process change, maybe includes training, maybe includes, you know, health checks, and then, of course, there’s also the software element of this. Most companies cannot manage this manually. You need the right tooling, so there’s definitely tooling recommendations. So, I think looking at the problem end-to-end, the products and the vendors who we decide to work with for specific initiatives naturally fall into place.
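
The discovery step Gurevich describes, figuring out what privileged accounts exist before any vaulting tool is rolled out, can be sketched for the Unix slice of the problem. A minimal illustration (group names vary by distro, and real discovery would also cover Windows, databases, and service accounts):

    import grp
    import pwd

    # Common administrative groups; adjust for the distro in question.
    ADMIN_GROUPS = {"wheel", "sudo", "admin"}

    # Accounts with UID 0 are root-equivalent by definition.
    root_equivalent = [u.pw_name for u in pwd.getpwall() if u.pw_uid == 0]

    # Members of administrative groups can typically escalate to root.
    group_admins = set()
    for g in grp.getgrall():
        if g.gr_name in ADMIN_GROUPS:
            group_admins.update(g.gr_mem)

    print("UID-0 accounts:", sorted(root_equivalent))
    print("Admin-group members:", sorted(group_admins))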

Cindy Ng: What are upcoming plans for Sphere?

Rita Gurevich: Definitely growth. I get bored easily, so growth strategy is always at the forefront of my mind. What we’re focusing on is a couple of different areas. The first is geographical expansion. We opened up our London office this year, that’s going really well, and we’re essentially replicating the message here over there. There are all sorts of requirements out there in terms of GDPR and just overall data security that companies there need just as much as companies here do. The second is our products. SPHEREboard is our baby. We came out with the product about two years ago, and it’s a culmination of years of experience in the field from a services perspective. So we’re building more connectors, having more tools feed into it, and pumping out all sorts of really cool analytics for our customers to leverage. So, those are the two areas we’re focusing on, and you’re gonna see a lot about SPHERE in the next year.

Cindy Ng: Sounds great. Thanks Rita.

[Podcast] The Moral Obligation of Machines and Humans

Critical systems once operated by humans are now becoming more dependent on code and developers. There are many benefits to machines and automation, such as increased productivity, quality, and predictability.

But when websites crash, 911 systems go down, or radiation-therapy machines kill patients because of a software error, it’s vital that we rethink our relationship with code, as well as the moral obligation of machines and humans.

Should developers who create software that impacts humans be required to take “do no harm” ethics training? Should we begin measuring developers by the functionality they create, as well as by the security and moral frameworks they’re able to provide?

Other articles discussed:

Tool of the week: Assemblyline. Files go in, and a handful of small helper applications automatically comb through each one in search of malicious clues.
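
Assemblyline has its own deployment and client API; purely to illustrate the “files go in, small helpers comb through each one” pattern, here is a toy pipeline in Python with made-up analyzer functions (this is not Assemblyline’s actual interface):

    import hashlib
    from pathlib import Path

    def check_extension(path: Path) -> list[str]:
        # Flag double extensions like "invoice.pdf.exe".
        return ["suspicious double extension"] if len(path.suffixes) > 1 else []

    def check_known_bad_hash(path: Path) -> list[str]:
        # A real service would query a malware hash feed; this set is a stand-in.
        known_bad = {"d41d8cd98f00b204e9800998ecf8427e"}
        digest = hashlib.md5(path.read_bytes()).hexdigest()
        return ["known-bad hash"] if digest in known_bad else []

    # Each small helper contributes its own findings; results are aggregated.
    ANALYZERS = [check_extension, check_known_bad_hash]

    def scan(path: Path) -> list[str]:
        return [finding for analyzer in ANALYZERS for finding in analyzer(path)]

    # Example: scan(Path("sample.pdf.exe")) would report the double extension.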

Panelists: Kilian Englert, Kris Keyser, Mike Buckbee

[Podcast] The Anatomy of a Cybercriminal Startup

As outlined in the National Cyber Security Centre’s “Cyber crime: understanding the online business model,” the structure of a cybercrime organization is in many ways a lot like that of a regular tech startup. There’s a CEO, a developer, and, if there are enough funds, an IT department.

However, one role outlined in an infographic on page nine of the report was a surprise, and it does not exist in legitimate businesses. This role is known as a “money mule.” Vulnerable individuals are often lured into these roles with job titles such as “payment processing agent” or “money transfer agent.”

But when “money mules” apply for the job, and even after they get it, they’re not aware that they are being used to commit fraud. So if the cybercriminals get caught, “money mules” might also get in trouble with law enforcement. A “money mule” can expect a freeze on their bank account, possible prosecution, and liability for repaying the losses. It might even end up on their permanent record.

Other articles and threads discussed:

Tool of the week: SPF Translator
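
For context on what an SPF translator parses: SPF policies are published as DNS TXT records, e.g. “v=spf1 include:_spf.google.com ~all”. As a hedged sketch, here is one way to fetch a domain’s SPF record with the third-party dnspython library (the domain is just an example):

    import dns.resolver  # third-party: pip install dnspython

    def get_spf(domain: str):
        # SPF policies live in TXT records that begin with "v=spf1".
        for record in dns.resolver.resolve(domain, "TXT"):
            text = b"".join(record.strings).decode()
            if text.startswith("v=spf1"):
                return text
        return None

    print(get_spf("google.com"))  # e.g. "v=spf1 include:_spf.google.com ~all"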

Panelists: Mike Buckbee, Kilian Englert, Mike Thompson

[Podcast] How Weightless Data Impacts Data Security

By now, we’re all aware that many of the platforms and services we use collect and store information about our data usage. After all, they want to provide us with the most personalized experience.

So when I read that an EU Tinder user requested information about her data and was sent 800 pages, I was very intrigued by the comment from Luke Stark, a digital technology sociologist at Dartmouth University: “Apps such as Tinder are taking advantage of a simple emotional phenomenon; we can’t feel data. This is why seeing everything printed strikes you. We are physical creatures. We need materiality.”

He is on to something. We don’t usually consider archiving stale data until we’re out of space. It is often only by printing photos, docs, spreadsheets, and PDFs that we feel the weight and space-consuming nature of the data we own.

Stark’s description of data’s intangible quality led me to wonder how weightless data impacts how we think about data security.

For instance, when there’s a power outage, some IT departments aren’t deemed important enough to be on a generator. Infosec is often seen as a compliance requirement, not as security. And another roadblock security pros often face: when they report a security vulnerability, it’s not usually well received.

Podcast panelists: Mike Buckbee, Kilian Englert, Mike Thompson

[Podcast] Penetration Testers Sanjiv Kawa and Tom Porter

While some regard infosec as compliance rather than security, veteran pentesters Sanjiv Kawa and Tom Porter believe otherwise. They have deep expertise in large enterprise networks, exploit development, and defensive analytics, and I was lucky enough to speak with them about the fascinating world of pentesting.

In our podcast interview, we learned what a pentesting engagement entails, how to assign budget to risk, the importance of asset identification, and much more.

Regular speakers at Security BSides, they are presenting “The World is Y0ur$: Geolocation-based Wordlist Generation with Wordsmith” on October 7th in DC.

Transcript

Sanjiv Kawa: My name is Sanjiv Kawa. I’m a penetration tester with PSC. I’ve been with PSC for…well, since June 2015. Prior to that, I had a couple of different hats on: I was a security consultant, I did some development work, and I was also a QA. My IT knowledge and my development knowledge are pretty well rounded, but my real interests are penetration testing of large enterprise networks, as well as exploit development and automation. So, yeah, that’s me.

Tom Porter: I’m Tom Porter, been doing security for about eight years. My roots are on the blue team. I got started in the government contracting space, doing mostly defensive analytics, network situational awareness, dissecting packets, writing IDS rules. So now that I do pen testing, I have an idea of what the blue team is looking for. And I’ve used that to help me bypass IR restrictions and find my way into CDEs.

Cindy Ng: Well, let’s start with some foundational vocabulary, like what white box, black box, and grey box testing are.

Tom Porter: So when you look at different approaches to carrying out pen testing, you’re gonna have this spectrum where on one side is kind of the white box, or what you hear called crystal box, pen testing. That’s where the organization’s divisions are sharing information about their environment with the penetration testers. So they actually might give them the keys to the kingdom so they can log in and analyze machines. They have an idea of what the network layout already looks like before they come on site. They know what might be the architectural weaknesses in their deployment, or in their environment, and then the penetration testers step in there with an upper hand. It gives them a little more value in that regard, just because they don’t have to spend the time doing the reconnaissance, the intelligence gathering. It’s a way to kind of crush down a pen test.

On the other side of the spectrum is kind of the black box. And that mimics more of what a real-world attacker might be doing, just because they don’t necessarily have insight into the architecture, the systems, you know, what kind of operating systems are running, what kind of applications, versions of those, who the privileged users are, who’s logging in from where. And there is a hefty portion on the front side of the engagement to do a lot of reconnaissance, a lot of intelligence gathering, a lot of monitoring. But it’s also a great test of your incident response team, to see how they’re adapting and responding to what these black box testers might be doing.

And then kinda in the middle, we have this notion of a grey box test. It’s where some information is shared, but not necessarily everything. And it’s a style that we like. It’s kind of the assumed-breach style, where we are given an idea of what network ranges reside there. We’re given a generalized idea of the process a privileged user might use to get into a secure environment. We know ahead of time that in the past year they’ve rolled out a new MFA solution. It’s another way of kinda crunching down the time necessary for an engagement, from several months down to a week or two. In that way, we don’t break the bank with our customers, but we can still work with them to provide value, just because we have a little extra insight into their environment.

Cindy Ng: Even if you get a white box environment and they tell you everything that they know, there are still some grey areas, such as things that they might not know. So do you think essentially you’re always working in a grey box zone?

Tom Porter: To a degree, for sure. As much as we’d like to, we don’t necessarily have a full asset list of everything in an environment. Not every organization knows every single application that’s been installed on their local development machines. There’s always that kind of grey box notion to it; not every company has an idea of what their inventory list looks like. And that’s where we come in. We can do our kinda empirical assessment to figure out what systems are running, what services are listening. And we can use that to cross-reference with what our client has on their side, to figure out where the deltas are and give them a more complete picture of what their inventory list looks like, what their assets look like.
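
The delta-finding Porter describes reduces to set arithmetic once you have both lists. A minimal sketch, with hypothetical inputs standing in for scan output and the client’s inventory export:

    # Hosts discovered empirically (e.g., from a network scan)...
    discovered = {"10.0.1.5", "10.0.1.9", "10.0.2.40"}

    # ...versus hosts in the client's official inventory.
    inventory = {"10.0.1.5", "10.0.2.40", "10.0.3.12"}

    print("On the network but not in inventory:", discovered - inventory)
    print("In inventory but never observed:", inventory - discovered)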

Cindy Ng: What’s the difference between a penetration test and a vulnerability scan? Because on the surface they sound very similar.

Sanjiv Kawa: This is Sanjiv Kawa here. The real difference between a vulnerability assessment and a penetration test is that the vulnerability assessment does identify, rank, and report any vulnerabilities which have been identified in the environment. But it doesn’t go that next step, which is to test the exploitation of those vulnerabilities and leverage those exploitations for the assessor’s benefit, or the assessor’s goal in that particular penetration test. So a good example would be, you know, a vulnerability was identified with a particular service in this network. Well, the vulnerability scan will say, “You know, this is a high-risk vulnerability because this service is using weak passwords, or this service is unpatched.” And at that point it’s kinda hands-off. It’s up to the patch management team or the incident response folks, or whoever the blue team is in that environment, to assess that vulnerability and put it into sort of a risk bubble as to whether they wanna fix it, or whether it’s an okay risk in that environment.

The penetration tester will exploit that vulnerability and leverage whatever underlying information is on that system to then move laterally throughout the environment, vertically throughout the environment, and essentially get to their end goal. You’re absolutely right, there’s definitely a bit of haze surrounding the differences between those two things, but really, the penetration test shows value in exploiting these vulnerabilities and ultimately reaching the end goal, as opposed to assuming these vulnerabilities exist, not testing their exploitability, and not really understanding the full depth of what a vulnerability on a particular service can result in.

Cindy Ng: Are there vulnerabilities you see again and again, or something new and upcoming that you rarely see? Last week I was talking to a cryptographer, and he said, “You don’t always have to go complicated. Sometimes it’s back to the basics.” And my second question: once you’ve come up with a list of vulnerabilities, how do you prioritize them?

Sanjiv Kawa: Yeah, it’s a really good question. And the cryptographer that you were speaking to is absolutely right. There are times where you don’t actually need to complicate the situation. To be fair, in a large number of my penetration tests, I’m not actively exploiting vulnerabilities in services; I’m actually looking for misconfigurations in a pre-existing network, or native features in pre-existing operating systems, that get me to my end goal.

And to answer the second part of your question, you know, what’s new and upcoming in the whole vulnerabilities world? Well, we’ve recently seen the Shadow Brokers release the dumps from the Equation Group, who have kind of a close tie to the NSA, but that’s neither here nor there. And, you know, these guys have been hoarding zero-day exploits which absolutely affect a large spectrum of operating systems and services. The most common, EternalBlue, for example, affects the SMB version 1 protocol from, I think, 2012 all the way down to, you know, Vista/2003. It’s a really interesting space that we live in, because there’ll be a lull for a little while where there’s no real service exploitation, at least in a wide sort of area of what you would expect an enterprise network to look like, and so you’re playing the misconfiguration game. And that usually gets you to where you need to be. However, it’s really exciting when you start seeing exploits which you’ve heard about on the wire, but haven’t necessarily been released yet, turned into a real proof of concept.

I guess another thing to touch on here as well is that it’s all sort of environment specific. It’s really interesting. There’s no real concrete methodology. I mean, you do have the PTES, the pen testing execution standard, where you kind of go from OSINT all the way to cleanup, and have post exploitation in between, and vulnerability assessments and exploitation. And that’s kind of a framework that you can follow as a penetration tester. But what I’m trying to get at here is that it’s really organic when you’re hunting for these vulnerabilities and misconfigurations in a network.

Cindy Ng: Tom, you mentioned CDE earlier. And before I spoke to you guys, I had no idea what CDE meant. It stands for cardholder data environment. Can you explain to our listeners what that means?

Tom Porter: CDE is the cardholder data environment. It’s a notion that comes from PCI compliance, from the payment card industry. They put together a standard for how folks that host, so merchants and service providers, should secure their environments to be compliant. And it started back, you know, over a decade ago, when Visa, MasterCard, and three other brands each had their own testing standards to be compliant with their brand. But it was kinda clunky, and just because you’re compliant with one brand doesn’t mean you’re necessarily compliant with the others. And not a lot of merchants or service providers really came on board.

So they got together and ended up producing version 1.0 of what’s now known as the PCI Data Security Standard. And it’s evolved over the years. It doesn’t necessarily fit every business model, but it tries to incorporate as many as possible. It started out mostly covering web apps, but it’s eventually evolved to cover hosting: data centers, hosting solutions, web applications, as much as they can get under the umbrella. So in PCI DSS, they have what’s called the PCI zone, and it’s a fairly strict boundary around the systems that store, transmit, or process credit card data or sensitive authentication data. What we call that in our parlance is the CDE, and it’s our target for PCI pen testing.

Cindy Ng: Let’s go through a scenario, an engagement that you both have been a part of. I wanted to know, what does a regular engagement look like? Are they all the same? Is there a canned process?

Tom Porter: So we work within what their standard dictates, and as this has evolved over the years, typically these engagements should be done by third parties, and they should follow a standard that’s already publicly accessible and vetted. So, like Sanjiv mentioned earlier about PTES, or if you’re using something like OSSTMM: something that’s been rigorously discussed and debated and has been vetted as a proper way of going about an engagement like this, instead of just rolling your own, in which case you might not have full coverage.

So what we do is kind of adapt these to each client, because not every client environment is uniform, and not every business model is the same. So we’ll have some clients where we go in with an already pretty good idea of what we’re gonna be doing. We know how we’re gonna proceed from A to B to C, with a little bit of room for creativity mixed in. Some clients are brand-new. Some of the environments or technologies they’re rolling out we’ve never seen before, and we have to get creative. We work cooperatively with the client to figure out how we’re gonna rigorously test this so it meets the letter of the standard. So we stick to a general methodology, but that doesn’t necessarily mean it’s rigid. We do have some room in there for flexibility, to adapt to whatever the client is using.

Cindy Ng: Oftentimes clients are struggling with old technology as it attempts to integrate with new technology, and it takes a few versions to get it right. You can make a recommendation one year and it might take a while for things to smooth out. What is the timeline for fixing and patching?

Sanjiv Kawa: Yeah, that’s a really good question. So after we’ve done a penetration test, and if significant findings have been made which really affect the client’s CDE, then the client has 90 days to remediate those findings. There are typically various different ways that you can do this. In most cases, an entire re-architecture of a client’s enterprise network is not gonna be a completely valid recommendation, right? There’s just not enough time or resources to complete a task like that within 90 days. So at that point, we start looking at, you know, reasonable remediations that we can suggest. And often, clients might wanna look at one bottleneck. So, for example: if I fix this, how does this affect the rest of the vulnerabilities that you guys identified? Does that kind of wrap it into a mitigation bubble, in terms of, “Will this particular bottleneck affect the security of the CDE?”

Secondarily, there might be compensating controls that you can put into place to help, you know, harden endpoint systems, or there might be network-based controls you can put into place, like, for example, packet inspection or rate limiting, or just real segmentation. So there’s a lot of creative solutions that we can kind of adapt on a per-client basis. There’s really no silver bullet to fix an enterprise network. And ideally, what we strive to achieve is to give the client the best remediation possible, one which fits a correct timeline and is reasonable to do in their environment. I think we’re different in the sense that a lot of penetration testers will enter an environment, they won’t understand the full complexity and dependencies of an enterprise network, and once the penetration test is done, they kinda wipe their hands and don’t really follow up with any sort of meaningful remediations or suggestions. We interact with everyone, from the system administrators, network administrators, all the way up to the C-Suite, to identify meaningful solutions, meaningful recommendations that they can implement within a reasonable timeline.

Cindy Ng: There is a human aspect. You know, people say humans are the weakest link, and social engineering, for instance, is one of the many requirements in PCI compliance. And it’s one of the things that people often debate about. You know, some say that users need more training, and then there are also other security researchers who think that users aren’t the enemy, that we haven’t focused enough on user experience design. What is your experience as a pen tester, having worked with so many different departments, where there’s both a human and social aspect as well as technology and security? And when you have multiple layers of complexity, how do you mitigate risk, and what is your approach?

Sanjiv Kawa: Yeah, that’s a really good question. I guess every person has a different opinion. If you look at the C-Suite or management, they might say it’s a policy issue. If you look at the system administrators and the network administrators, they might say, “Oh, it’s a technological issue,” or, “Indeed, it’s a user issue.” Touching on your first point about social engineering: it’s not a requirement in PCI DSS 3.2 yet. PSC is a co-sponsor of the PCI DSS Special Interest Group for pen testing, and we’ve authored a significant portion of the pen testing guidance supplement for PCI DSS, and we’ve also authored a significant portion of PCI DSS itself. And something we are trying to work on is getting social engineering to be a requirement.

Now, the hardest part about that is how you can measure whether user education, user training, has become effective over time. In addition to that, we also believe that, you know, the outcome of social engineering shouldn’t be pass/fail, right? There should be a program at some level, which is something that we believe you should be doing, whether it’s phishing or vishing, but it should mainly be for reinforcement and betterment of the end user. Yeah, I guess that’s kind of, like, our sort of stance on social engineering.

We currently don’t do it. And from a personal perspective, I think there’s only so many times that you can tell an organization that hires you that phishing or vishing, or some form of social engineering, was the main entry point, right? There’s only so many times you can do that before they become kind of sick of your approach, or your initial vector or foothold into their environment. There are a lot more creative ways, a lot more ways that show value, especially with pre-existing technologies or pre-existing things that they had already deployed in their network.

Cindy Ng: I was reading the penetration testing guidance and it was so thorough that I just assumed that that was what you had to follow for pen testing. That’s why I’m like, “Oh, it’s part of the requirement.”

Sanjiv Kawa: As Tom had spoken about a bit earlier, you know, PCI isn’t the perfect program, right? But I truly believe that it’s designed to fit as many business models as there can be. And it’s a good introductory framework, especially for penetration testers, because it’s so clear. I mean, you define your prized assets, your CDE, which should ideally be a segmented part of your network where there’s absolutely minimal to zero significant connectivity to your corporate network. And the pen tester can use basically anything that’s in the corporate network to try and gain access to the CDE. And if it can be used against you, it’s in scope.

One thing I really like about PSC is that we don’t really enter these scoping arguments, you know. For example, “These are the IPs that you’re limited to.” Or, you know, “You pay $3 to test this IP.” Because with PCI, especially as a compliance standard, it says, “Anything that can be used in your corporate scope to gain access to the CDE can be used against you.” So that way, it can almost bring everything into scope, which is kind of nice for a pen tester. It’s, in my opinion, pen testing in the purest form possible.

Cindy Ng: You both have mentioned CDE multiple times, and that’s where you spend all your time. I was wondering, do you prioritize that list, though? Like, first you need to protect the crown jewels, then do you look at the technology second, the processes third, and then people last? Or does it go back to whatever is important to the organization?

Tom Porter: It’s not uniform for every organization, just because it depends on their secure remote access scheme. The crux might come down to a misconfigured technology appliance; it might come down to a user who’s behaving in a manner that’s outside of procedure. So it really depends. And that’s why, when we’re on site carrying out an engagement, we kind of cast a wide net, because we wanna catch all of these deficiencies, whether it’s in the process, the systems, or the people. So to give you some examples: we might see that a technology that is supposed to provide adequate segmentation between a secure zone and a less secure zone has a vulnerability in some type of service that we can exploit. It’s rare, but we see it every now and then. That would be something that falls into the technology list.

Sometimes we might see, on the user side, an admin who sits outside of the PCI zone but wants to get in, and has to do it fairly often, so they’ve set up their own little backdoor proxy on a machine that’s on a separate VLAN. So we’ll get on their machine after we’ve compromised the domain and inspect their netstat connections. We’ll see that they’ve set up their own little private proxy that they haven’t told anybody about. So it depends. We cast a wider net just because we’re not entirely sure what we’re gonna find, and the organization doesn’t necessarily always know where the breakdown in the process will be.
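
Spotting a DIY proxy like the one Porter describes often starts with exactly that kind of socket inspection. Here is a small sketch using the third-party psutil library to flag listening TCP ports that aren’t in an expected baseline (the baseline and port numbers are hypothetical):

    import psutil  # third-party: pip install psutil

    EXPECTED_PORTS = {22, 443}  # hypothetical approved listeners for this host

    # Enumerate listening TCP sockets, the programmatic equivalent of `netstat -lnt`.
    for conn in psutil.net_connections(kind="tcp"):
        if conn.status == psutil.CONN_LISTEN and conn.laddr.port not in EXPECTED_PORTS:
            proc = psutil.Process(conn.pid).name() if conn.pid else "unknown"
            print(f"Unexpected listener on port {conn.laddr.port} ({proc})")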

Sanjiv Kawa: To add to that, I would also say that a majority of our time is spent in the post-exploitation phase. Typically, our time is spent identifying that network segmentation gap and trying to jump from the corporate network into the CDE, assessing all possible routes that would result in that outcome. The corporate domain is kind of like the Wild West. A lot of clients, in my opinion, don’t place enough emphasis and controls around the corporate network, but they’ve really secured the CDE and all the access into it: authentication, multi-factor authentication, and granular network segmentation. So yeah, a lot of our time is spent assessing those routes into the CDE.

Tom Porter: And to piggyback off that, Sanjiv’s reminded me, I’ve been on a number of engagements now where there’s this kind of division of labor where most of the resources have been applied to the CDE to make it as secure as possible, while the non-CDE systems, the corporate scope, have fallen by the wayside in terms of attention, time, and money.

And then we find that the corporate network has been set up with a trust, like a domain trust, with the CDE. So if we compromise the corporate network, we now have a pivot point into the CDE via this domain trust. So we end up going to clients and saying, “Hey, we’re very proud of how much work you’ve put into securing your CDE, but understand that your CDE is now vulnerable to the deficiencies of the corporate network, because you’ve put this trust in it.”

Cindy Ng: It sounds like you think of security as a gradient, whereas a lot of C-level executives and regular users think, “Well, you’re either secure or you’re not. It’s a zero or a one.” How do you explain that to the C-level when you’re talking to them?

Sanjiv Kawa: Sure. Yeah. I don’t believe that security is binary. I think, especially in modern networks, there needs to be an adequate balance of convenience and security, but that balance needs to be applied at the right levels. There are more examples than I can get into here, but the point is this: as a third-party assessor, we provide the internal network administrator, internal assessor, or IT manager with the necessary tools, vocabulary, and recommendations that they can then package for their C-suite.

A lot of my focus lately has been on better access and authentication controls. It seems like one of the most common entry points into most networks is simply weak passwords. So how do you remediate that? How do you put a policy in place? In truth, most organizations have a policy in place, but it’s a written policy, not a technical control. There are technical controls you can put in place that are invisible to the end user and make them better without their realizing they’re becoming better.

So a really common example is third-party integrations into Active Directory, right? By default, I believe the Windows password policy requires a seven-character minimum with a mixture of alphanumeric characters. For modern networks, that’s kind of archaic, because it doesn’t do any intelligent identification of whether a user is using a season plus the year as a password, or the company name followed by 123, or something to that effect. So how do you train a user to become better at selecting passwords? In short, you can purchase one of these integration tools, integrate it into Active Directory, and load in what you consider to be bad passwords. At that point, a user is automatically more secure, because they’re unable to select a weak password; by default, they’ve already selected a better one. It’s really just identifying what an organization’s weakest points are, what their failing points are, and how you can fix them with a cost-effective, potentially technical control that removes the risk to the environment.
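To make the blocklist idea concrete, here is a minimal sketch of the kind of check such a tool performs. In a real deployment this logic lives inside a password-filter integration registered with Active Directory, not a script; the banned list, the placeholder company name "acme", and the length floor below are all invented for illustration.

```python
import re

# Hypothetical blocklist, e.g. loaded from breach dumps or a curated file.
BANNED = {"password", "welcome1", "summer2018", "acme123"}

SEASONS = ("spring", "summer", "fall", "autumn", "winter")

def is_weak(candidate: str) -> bool:
    """Return True if the candidate password should be rejected."""
    lowered = candidate.lower()
    if lowered in BANNED:
        return True
    # Reject "season + year" patterns such as Summer2018.
    if lowered.startswith(SEASONS) and re.search(r"(19|20)\d\d$", lowered):
        return True
    # Reject anything built on the (placeholder) company name, e.g. Acme123!
    if "acme" in lowered:
        return True
    # Fall back to a simple length floor; 12 is an arbitrary example.
    return len(candidate) < 12

print(is_weak("Summer2018"))                    # True: season + year
print(is_weak("correct horse battery staple"))  # False: long passphrase
```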

Cindy Ng: You spoke a little bit about being cost-effective. How do you rate and assign risk against the budget? Do they tell you, “Here’s how much money we have, work with it”?

Tom Porter: Not necessarily. We give them recommendations, but our recommendations aren’t necessarily gospel. They know their resource constraints, whether it’s budget, people, or time, and they work within those. And we can be flexible with them throughout the remediation process. What we do ask, as we work with clients through remediation, is that if they come up with ideas for how they want to remediate a finding, they come back and bounce those ideas off of us. Let us pen-test the idea on paper before they invest a significant amount of resources into it. That way we can save each other a bunch of headaches down the road.

Cindy Ng: And when you’re engaged with the C-Level, what do they care about, versus what IT people care about?

Sanjiv Kawa: I think the most common thing with the C-suite is just brand integrity, right? I mean, if they show up on the front page of a newspaper because of a breach, it’s really going to negatively impact their ability to continue to sell. And a byproduct of that is customer confidence. So in relation to security, the C-suite will always care about brand integrity and customer confidence. Second to that, they care about time to remediate, and a byproduct of that is cost to remediate. From my experience, those are the big things the C-suite cares about most.

Tom Porter: I’ll say part of that also depends on the type of business they’re in and what their revenue stream is. Something like a denial-of-service attack that takes you offline for several hours might have a greater impact than some data being exfiltrated from your environment. But if you think about, say, a pharmaceutical company, their crown jewels aren’t their uptime; it’s their patents, their IP. If we can exfiltrate those, that’s going to have a much bigger impact on the business than taking them offline for a few hours.

Cindy Ng: You mentioned earlier that you’re also working against a whole bunch of potential zero-day vulnerabilities that you might encounter in the future. And we’ve seen that happen. How do you red-team yourselves and improve on the new knowledge that you get?

Tom Porter: There’s a lot of debate right now, depending on your preferred internet source. But from where we sit as penetration testers, there are a few things we do for our clients. One, we send out advisories and bulletins just to let them know, “Hey, the internet is a dangerous place right now, and these are the things you should be doing to make sure you’re secure.” Another thing we do, as we go about an engagement, is lay everything out in our report, and we do executive briefs for customers as well. In those presentations and reports, we lay out all of our findings: these are our highs, these are our mediums, these are some best-practice observations. And we tell them, “You should remediate findings one, two, and three in order to be PCI compliant.” But we also use this as an opportunity for organizations to strengthen their security posture. We might have some findings that fall outside that kill chain, that critical path to compromise, but if we see an opportunity for someone to strengthen their security posture, we’re going to mention it. We won’t require it for PCI compliance remediation, but we’re more of a mindset of security first, compliance second.

Cindy Ng: What’s the most important security concept you wish people would get?

Sanjiv Kawa: I don’t want to preach here, but I just think it’s really understanding your assets, your inventory. And a good way to do that is to have an internal security team that is proactive and continuously analyzing the systems and services that are exposed. It all comes down to good asset identification and good network controls. The best companies I’ve been to, the ones where I have not been able to gain access to the CDE, just have really good segmentation: identifying your prized assets and putting them in a network that is inaccessible, or accessible only to a few through strict access control mechanisms that require multi-factor authentication. There are multiple answers to that question, but really it comes down to asset identification, network controls, segmentation, and tight authentication controls. Those are the key things I care about most. Tom might have some different things.

Tom Porter: When I talk to users, and this comes up in almost every single compromise or engagement I’m on, a lot of it comes back to either weak passwords or, more commonly, password reuse. And that’s something that matters to users not only in the enterprise but also personally, when they’re on their email or their banking websites. You see this a lot in attacks across the internet with credential stuffing. The idea is not only choosing strong passwords, but also using a unique password for every site where you have a login. You see all these breaches, from LinkedIn to any handful of other dumps, where passwords are leaked either in cleartext or in hash form, and when those hashes are cracked, there’s a whole litany of usernames, email addresses, and passwords that folks can try to breach other accounts with.

And that’s where the idea of credential stuffing comes in. A password breach comes out, a set of credentials is revealed, and then attackers have tools to try those username/password combinations across a wide array of sites. What happens is users end up getting several of their accounts compromised because they’ve reused the same password. Not only does that play out on a personal level, but we see it in the enterprise too. I’ll crack someone’s Active Directory password, and it just happens to be the same password for their domain admin account, or the same password used to log in to the secure VDI appliance. It’s not necessarily something that’s going to show up in a vulnerability scan listed out in a report, but it is something we see often and end up having to remediate almost all the time.

Sanjiv Kawa: Yeah. And there are several ways you can combat this. If you have an internal security team, one of the things they ought to be doing is monitoring breach dumps, pulling the hashes for the domains they own, and running comparison checks to see whether any known bad passwords tie to known users in their environment. Secondly, they can compare the hashes from two separate domains, or from two separate user accounts, specifically privileged and non-privileged ones. It’s really just doing password audits, maybe quarterly, or on whatever schedule aligns with your password change policy, be it 30 days or 90 days. That way, the security team can ensure, well, ensure to a certain degree, that users are selecting smart passwords and, more importantly, aren’t reusing passwords from zones of low security in zones of high security.
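To make the audit step concrete, here is a minimal sketch of the comparison Sanjiv describes, assuming the NT hashes have already been extracted with your normal, authorized audit tooling. Because NT hashes are unsalted, two accounts with the same hash share the same password. The data layout and account names are invented for illustration.

```python
from collections import defaultdict

def find_reuse(domains):
    """domains maps domain name -> {account: nt_hash}.

    Groups accounts by identical hash; any group with more than one member
    is password reuse (NT hashes are unsalted), worth investigating across
    privileged/non-privileged accounts or low/high security zones.
    """
    by_hash = defaultdict(list)
    for domain, accounts in domains.items():
        for account, nt_hash in accounts.items():
            by_hash[nt_hash].append(f"{domain}\\{account}")
    return {h: who for h, who in by_hash.items() if len(who) > 1}

def in_breach_dumps(domains, breached_hashes):
    """Flag accounts whose hash appears in a known breach-dump hash set."""
    return [f"{d}\\{a}" for d, accounts in domains.items()
            for a, h in accounts.items() if h in breached_hashes]

# Invented example: the same hash shows up for a user and a domain admin.
corp = {"CORP": {"alice": "8846f7eaee8fb117ad06bdd830b7586c",
                 "admin_alice": "8846f7eaee8fb117ad06bdd830b7586c"}}
print(find_reuse(corp))
```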

Cindy Ng: Have you seen organizations move away from passwords and go into biometrics?

Sanjiv Kawa: Yeah, there’s been talk about it. I had a conversation with a client about this just last week. What they’re battling with is user adoption and cost, cost being the devices that can read fingerprints or read faces. What people end up going with instead is a second factor of authentication, whether through an RSA one-time passcode, Duo multi-factor authentication, or Google Authenticator. There are lots of multi-factor options out there. And keep in mind, biometrics is still a single factor of authentication, right? So having a token supplied to you by any of the aforementioned providers is probably one of the most cost-effective approaches, and a second factor of authentication is more secure in the long run in terms of user account controls.
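Those authenticator apps generally implement time-based one-time passwords (TOTP, RFC 6238). Here is a minimal sketch of the algorithm, to give a sense of why it works: both sides hold a shared secret and derive a short-lived code from the current time, so no secret crosses the wire at login. The base32 secret below is a common documentation example, not a real credential.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # matches what an authenticator app shows
```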

Cindy Ng: What kind of problems, though, do you see with biometrics? Let’s say cost isn’t a problem. Could you play around with that idea for a bit? What could you see potentially going wrong?

Tom Porter: What I have seen in the past is that some of these biometric-type logins, when they’re stored on the machine, say on a Windows desktop, are just stored in memory as some kind of token, very similar to a password hash. It operates on the same principle, so you could reuse that token around the network and still adequately impersonate that user.

Not only that, and you’ve probably read about this online, with technologies using facial recognition, people can mimic you by scrolling through your LinkedIn or Facebook and reconstructing the pattern necessary to log in with your face. Unlike a password, these are factors that are potentially publicly available, everything from the thumbprints on your hands to your face on Facebook. They’re just things we haven’t historically had to secure.

Cindy Ng: Your experience is so vast and multi-faceted, but is there something that I didn’t ask that you think is really important to share with our listeners?

Sanjiv Kawa: I guess we could share some of the other penetration tests that we do. Not all of our pen tests are PCI-oriented. We have done things like pre-assessments in the past. For example, an organization is gearing up for another compliance regime, whether it be SOX, HIPAA, FFIEC, or something to that effect, and PSC will do pre-assessment penetration tests conforming to the constraints of those compliance regimes.

We also do evaluation-based pen tests. Tom spoke about this a bit earlier, but let’s say your organization is implementing a new piece of core security technology that requires some sort of architectural shuffle, whether it’s a new multi-factor authentication implementation, new segmentation, or a new monitoring and alerting system. We can pen-test those and identify whether the technologies are adequate for your environment before you deploy. We’ve also done very complex mobile application testing, as well as regular RESTful or SOAP-based API and other web services-style testing that falls outside the PCI compliance zone. But for the most part, our mindset is still PCI-oriented, right? You just substitute the client’s goal for the CDE, and that’s your success criteria. That’s what you’re trying to get to, and you’re using everything you can ingest around you to get to that goal.

Tom Porter: I’d like to add, when we sit down with clients and look at the results of a penetration test, we lay out which findings have to be remediated and which don’t necessarily have to be. One of the luxuries of PCI, and it’s a gift and a curse depending on your perspective, is that we actually get to see remediation through. That’s a rarity in our industry, just because remediation is so rarely required. So we actually sit there and walk through remediation with clients. And when we come back and do our retesting to verify the remediation is in place, we sometimes find that not all of the findings were remediated. Maybe they remediated some of them, or the ones we required, but not necessarily all the ones in the report.

And as Sanjiv spoke about earlier, we have this notion of a kill chain, the findings that we link together to achieve a compromise. You might hear it referred to as a critical path to compromise, a kill chain, a cyber kill chain, something with the word “chain” in it. The attack path. Essentially, we’re trying to identify these bottlenecks. What ends up happening is organizations get to a place where there’s a kind of mitigation bubble: they start offering compensating controls, which is more of a Band-Aid solution than a fix for the real problem. So with these findings, especially when we’re talking with network admins or sysadmins who know the layout of the tech environment very well and are trying to relay the importance of patches for this, or a new appliance for that, up to the C-levels, they can use us as a resource. They can tell us where to focus, and then in our report, if we find a deficiency, we’re giving the tech people ammunition to take to the C-level: we actually need to act on this, and we’ve got a verified third party that says we need to beef up our security here and invest resources there. It gives them some backing to say, “Hey, we need this.”

Cindy Ng: Do you find that you’re also looking at IoT devices?

Tom Porter: Absolutely. All the time, actually. When we do our network sweeps, we see all kinds of things out there, and they’re almost always set with factory-default passwords. They usually have some type of embedded OS that we can pivot through, and they’re usually on wide-open networks without host-based firewalls protecting ingress and egress. So now we have our pivot points to move around.
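An inventory sweep is the defensive counterpart of what Tom describes. As a minimal sketch, assuming you are auditing a network you are authorized to scan, the fragment below grabs Telnet banners across a subnet; devices that answer on port 23 with a login prompt are good candidates for a factory-default check. The address range is an example only.

```python
import socket

def grab_banner(host, port=23, timeout=1.0):
    """Return the first bytes a service sends, or None if nothing answers."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            return s.recv(256).decode(errors="replace")
    except OSError:
        return None

# Example RFC 1918 range; sweep only networks you own or are engaged to test.
for i in range(1, 255):
    host = f"192.168.1.{i}"
    banner = grab_banner(host)
    if banner:
        print(host, repr(banner[:60]))  # review for default-login prompts
```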

[Podcast] Dr. Tyrone Grandison on Data, Privacy and Security

[Podcast] Dr. Tyrone Grandison on Data, Privacy and Security

Leave a review for our podcast & we'll send you a pack of infosec cards.


Dr. Tyrone Grandison has done it all. He is an author, professor, mentor, board member, and a former White House Presidential Innovation Fellow. He has held various positions in the C-Suite, including his most recent role as Chief Information Officer at the Institute for Health Metrics and Evaluation, an independent health research center that provides metrics on the world’s most important health problems.

In our interview, Tyrone shares what it’s like to lead a team of forty highly skilled technologists who provide the tools, infrastructure, and technology that enable researchers to develop statistical models, visualizations, and reports. He also describes his adventures in wrangling petabytes of data, the promise and peril of our data economy, and what board members need to know about cybersecurity.

Transcript

Tyrone Grandison:  My name is Tyrone Grandison. I am the Chief Information Officer at the Institute for Health Metrics and Evaluation, IHME, at the University of Washington in Seattle. IHME is a global nonprofit in the public health and population health space, focused on how we get people to live a long life, and to live that long life at the highest health capacity possible.

Cindy Ng: Oftentimes the bottom line drives businesses forward, whereas your institute is driven by helping policymakers and donors determine how to help people live longer and healthier lives. What is your involvement in ensuring that that vision is sustained and carried through?

Tyrone Grandison:  Perfect. So I lead the technology team here, which is a team of 40 really skilled data scientists, software engineers, system administrators, and project and program managers. What we do is provide the base, the infrastructure. We provide the tools and technologies that enable researchers to, one, ingest data. We get data from every single country across the world, everything from surveys to censuses to death records, no matter how small or poor or politically closed a country is. And we basically house this information. We help the researchers develop very sophisticated statistical models and tools on top of the data that make sense of it. And then we actually put it out there to a network of over 2,400 collaborators.

And they help us produce what we call the Global Burden of Disease, which shows, for different countries of the world, the predominant things shortening lives in particular age groups, for particular genders, across demographics. So now people can, if they want to, do an apples-to-apples comparison between countries, across ages, and over time. If you wanted to see the damage done by tobacco smoking in Greece and compare that to the healthy years lost due to traffic injuries in Guatemala, you could actually do that. If you wanted to compare both of those things with the impact of HIV in Ghana, that’s now possible. So our entire job is to provide the technology base and the skills to, one, host the data; two, support the building of the models; and three, support the visualization, so people can actually make these comparisons.

Cindy Ng: You’re responsible for a lot, so let’s try to break it down a bit. When you receive a bunch of data sets from various sources, take me through what your plan is for them. Last time we spoke, we talked about obesity. Maybe that’s a good example, one that everyone can relate to?

Tyrone Grandison:  Sure. So, say we get an obesity data set from the health entities within a particular country. It goes through a process where a team of data analysts looks at the data and extracts the relevant portions of it. We then put it into our ingestion pipeline, where we vet it: what can it apply to? Does it apply to specific diseases? Obviously, it’s going to apply to a specific country. Does it apply to a particular age group and gender? From that point on, we include it in models. We have a modeling pipeline that does everything from estimating the number of years lost to obesity in that particular country to, as I mentioned before, checking whether that particular statistic from the survey is relevant or not.

From there, we use it to figure out, okay, what is the overall picture across the world for obesity? Then we visualize it, make it accessible, and give people the ability to tell stories with it, with the hope that at some point a policymaker, or somebody within a country’s public health institute, is going to see it and use it in their decision making about how to address obesity in their country.

Cindy Ng: When you talk about relevance and modeling, people in the industry say there is a lot of unconscious bias. How do you reconcile that? And how do you work with factors that people think are controversial? For instance, people have said that using body mass index isn’t accurate.

Tyrone Grandison:  That’s where we depend a lot on the network of collaborators we spoke about. Not only do we have a team that has been doing epidemiology and advancing population health metrics for over two decades, we also depend on experts within each particular country. Once we produce the first estimates based on the initial models, they look at those estimates and say, “Nope, this does not make sense. You need to adjust your model to add a factor in,” catching that same unconscious bias, or to remove something the model claims to be seeing but may be wrong about. It all boils down to having people vet what the models are doing.

So it’s more along the lines of how you create systems that are really good at human computation: marrying the things that machines are good at with a step that forces a human to verify and improve the final estimate you want to produce.

Cindy Ng: Is there a pattern you’ve seen where, time and time again, the model doesn’t account for X, Y, and Z, and then a human gets involved, figures out what’s needed, and provides the context? Is there a particular concept or idea that you’ve seen?

Tyrone Grandison:  There is, to the point where we’ve included it in our initial processing. There’s this concept, the idea of a shock, where a shock is an event that models cannot predict and that may have wide-ranging impact on what you’re trying to produce. For example, you could consider the earthquake in Haiti a shock. You could consider the HIV epidemic a shock. Every single country in any given year may have a few shocks, depending on the geolocation you’re looking at. And again, the shocks are different, and we’re really grateful to the collaborator network for providing insight and telling us, “This shock is actually missing from your model for this particular location, for this particular population segment.”
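Here is a toy illustration of why shocks need explicit handling, with entirely synthetic numbers (this is not IHME's model): a plain linear trend cannot absorb a one-year spike, while adding a shock indicator variable lets the fit recover it.

```python
import numpy as np

# Synthetic mortality series with a one-year "shock" (e.g., a disaster) in 2010.
years = np.arange(2000, 2015)
deaths = 50 + 0.8 * (years - 2000) + np.where(years == 2010, 40.0, 0.0)

X_trend = np.column_stack([np.ones_like(years, dtype=float), years - 2000])
X_shock = np.column_stack([X_trend, (years == 2010).astype(float)])

for name, X in [("trend only", X_trend), ("trend + shock dummy", X_shock)]:
    beta, *_ = np.linalg.lstsq(X, deaths, rcond=None)
    max_err = np.abs(X @ beta - deaths).max()
    print(f"{name}: worst-year error = {max_err:.1f}")
# The shock dummy drives the worst-year error to ~0; the plain trend cannot.
```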

Cindy Ng: It sounds like there’s a lot of relationship building, too, with these organizations because sometimes people aren’t so forthcoming with what you need to know.

Tyrone Grandison:  Well, the relationship building spans the work we’ve been doing here, which has been going on for 20 years. Imagine 20 years of work just producing this Global Burden of Disease, and probably another decade or two before that just building the connections across the world, because our director has been in this space for quite a while now. He’s worked everywhere from the WHO to MIT doing this work. So his connections, and the connections of the executive team, have been invaluable in making sure that people speak candidly and honestly about what’s going on. Because we are the impartial arbiters of the best data on what’s happening in population health.

Cindy Ng: And it certainly helps when it’s not driven by the bottom line; the most important thing is to improve everyone’s health outcomes. What are the challenges of working with disparate data sets?

Tyrone Grandison:  The challenges are the same everywhere, right? They all relate to, okay, are we talking about the same things? Are we talking the same language? Do we have the same semantics? That’s the basic challenge. Two is, does the data have what we need to actually answer the question? Not all data is relevant, and not all data is created equal. So we have to figure out what is actually going to give us insight into the question of how many years you lose to a particular disease. And the third thing, which is pretty common to every field pushing into the open data arena: do we have the right facets in each data set to actually integrate them? Does it make sense to integrate them at all? So the challenges are not different from what the broader industry is facing.

Cindy Ng: You’ve been developing these relationships for over 20 years. Back then, we weren’t able to process so many data sets, and now I’m guessing it’s billions and trillions of data points. Have you seen the transition happen? How has that transition been difficult, and how has it made your lives so much better?

Tyrone Grandison:  Yeah. The Global Burden of Disease actually started on a cycle where, whenever we considered that we had enough data to make those estimates, we would produce the next Global Burden of Disease. We just moved, starting this year, to an annual cycle. That’s the biggest change, and it’s because of the wealth of data that exists out there. Because of the advances in technology, we can now increase the production of this data asset, so to speak. Whereas before there was a lot of anecdotal evidence and a lot of negotiation to get the data we actually needed, now there are far more open data sets, so a lot more is actually available.

There’s also a willingness, thanks to past demonstrations of the power of open data, for governments and people to provide and produce data, because they know it can actually be used. So it’s the technology hand in hand with the cultural change that’s happened. Those have been the biggest changes.

Cindy Ng: What have you learned about wrangling petabytes of data?

Tyrone Grandison:  A lot. In a nutshell, it’s very difficult. If I were to give advice to people, I would start with: what’s the problem you’re trying to solve? What’s the mission you’re trying to achieve? Then figure out what you need in your data sets to help you answer that question or mission. And finally, as much as possible, stick with a standardize-and-simplify methodology. Leverage a standard infrastructure and a standard architecture across what you’re doing, and make it dead simple, because if it’s not standard or simple, getting to scale is really difficult. And by scale I mean processing tens or hundreds of petabytes worth of data.

Cindy Ng: There are a lot of health trackers, too, that are trying to gather all sorts of data in hopes that they might use it later. Is that a recommended approach for figuring out your solution or your problem? Because what if you didn’t think of something and then a new idea popped into your head? There’s a lot of controversy around that. What is your insight?

Tyrone Grandison:  The controversy is, in my view, very real. One, what is the level of data that you’re collecting? At IHME, we’re lucky to be looking at population-level data. If you’re collecting individual records, then you have a can of worms in terms of data ownership, data privacy, and data security. Especially in America, what you’re referring to is the whole argument around secondary use of health data. Just as with HIPAA, the Health Insurance Portability and Accountability Act, you’re supposed to have data on a person for a specific purpose and only that purpose. The issue you just brought up is that a lot of companies view data that is created or generated about a particular individual as their own property, their own intellectual property, which you may or may not agree with.

In the current model, the current infrastructure, there’s nothing that says the person this data is about should actually have a say. Personally, I believe that if the data is about you and created by you, then technically you should own it, and the company should be a good steward of the data. Being a good steward simply means using the data for the purpose you told the owner you were going to use it for, and destroying the data after you finish using it. If you come up with a secondary use for it, then you should ask the person again: do they want to participate in it?

So the issue I have with it is the disenfranchisement of the data owner, the neglect of consent, or even of asking before the data is used for a secondary function or purpose, and the fact that there are inherent assumptions in that scenario that are still unresolved and just taken to be true.

Cindy Ng: When you say the project is over, how do you know when it’s over? Because I can, for instance, write a paper and keep editing and editing, and it will never feel completed and done.

Tyrone Grandison:  Sure. Put it this way. Say I tell the people involved in a particular study, the ones who gave me their data, that I want to use this data to test a hypothesis, and the hypothesis is that drinking a lot of alcohol causes liver damage. Okay, obvious. I publish my findings; they get revised. At the very end, there has to be a point where the paper is either published in a journal somewhere or not. If that’s the case, and I publish it and then find out that, hey, I can use the same data to figure out the effects of alcohol consumption on some other thing, that is a secondary purpose I did not have an agreement with you on, and so I should ask for your consent on that.

So the question is not just when the task is done, but when I have actually accomplished the purpose that I negotiated and asked to use your data for.

Cindy Ng: So it sounds like that’s really the best practice when you’re gathering or using someone’s personal data: that’s the initial contract, and if there is a secondary use, they should know about it too. Because you don’t want to end up in a situation like Henrietta Lacks, where they’re using your cells and you don’t even know it, right?

Tyrone Grandison:  Yup. Henrietta Lacks is actually a good example. It highlights the current practices of the industry. And again, luckily, public health does not have this issue, because we have aggregated data on different people. But in the general healthcare scenario, where you do have individual health records, what companies are doing, and what they did in the Henrietta Lacks case, is specify in some legal document, “Hey, we’re going to use your information for X, and X is the purpose.” And they either make X so broad and general that it encompasses every possible thing you can imagine, or they say, “We’re going to do a really specific purpose, and anything else that we find.” And that is now common practice within the field, right?

And to me, the heart of that seems very deceptive, because you’re saying to somebody, “We have no idea what we’re going to do with your data, we want access to it, and, oh, we assume that you’re not going to own it. We assume that any profits or anything we get from it will be ours.” The model itself just seems perverse. It’s tilted toward how do we get something from somebody for free and turn it into an asset for my business, where I have carte blanche to do what I want with it. And I think that discussion has not been had seriously by the healthcare industry.

Cindy Ng: I’m surprised that businesses haven’t approached your institution for assistance with this matter. It just sounds like it would make total sense, because I’m assuming all of your data has the names and PHI stripped.

Tyrone Grandison:  We don’t even get to that level at this point.

Cindy Ng: Oh, you don’t even…

Tyrone Grandison:  It’s information at a generalized level. There are multiple techniques you can use to protect people’s privacy. One would be suppression: you suppress the things that you consider PII. Another is generalization, where you look at information that is not at the most granular level but a level above it. So you don’t look at you and all your peers; you go a level above that and say, “Okay, let’s look at everyone that lives in a particular zip code, or a particular state or country.” That way, you have protection by hiding in a crowd: you can’t really identify one particular person in the data set. So at IHME, we don’t have the PHI/PII issue, because we work on generalized data sets.
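As a toy sketch of what generalization looks like in practice (the records and field choices here are invented), each quasi-identifier is coarsened so a record hides in a crowd, which is the intuition behind k-anonymity:

```python
# Coarsen quasi-identifiers so each record "hides in a crowd."
records = [
    {"zip": "98105", "age": 34, "condition": "flu"},
    {"zip": "98107", "age": 36, "condition": "asthma"},
    {"zip": "98109", "age": 52, "condition": "flu"},
]

def generalize(rec):
    return {
        "zip": rec["zip"][:3] + "**",        # 98105 -> 981**
        "age": f"{rec['age'] // 10 * 10}s",  # 34    -> 30s
        "condition": rec["condition"],       # keep the analytic payload
    }

for rec in records:
    print(generalize(rec))
```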

Cindy Ng: You’ve held many different roles. You’ve been a CDO, a CIO, a CEO. Which role do you enjoy doing most?

Tyrone Grandison:  Any role that allows me to do two things: one, create and drive the direction or strategy of an organization, and two, help with the execution of that strategy to produce things that positively impact people. The roles I have been fondest of so far would be CEO and CIO, because at those levels you also get to set the organizational culture, which is very valuable in my mind.

Cindy Ng: And since you’ve also been a board member, what do you think boards need to know when it comes to privacy and cybersecurity?

Tyrone Grandison:  First of all, I think it should be an agenda item that you deal with up front, not after a breach or an incident. It should be something you bake into your plans and into the product life cycle from the very beginning. You should be proactive in how you view it. The main thing I’ve noticed over time is that people do not pay attention to privacy, cybersecurity, and cybercrime until, and this is a horrible analogy, there’s a dead body in the sea. What happened? And then you start having reputational damage and financial damage because of it.

Thinking about the processes, technology, people, and tools that would help you fix this from the very get-go would have saved you a lot of time. And there’s the whole notion of both of these things, privacy and security, being cost centers: you don’t see a profit from them, you don’t see revenue generated by them, and you only see the benefit, the cost savings, so to speak, after everyone else has been breached or damaged by an episode and you haven’t. So be a little more proactive up front, rather than reactive after the fact.

Cindy Ng: But it’s been said that IT makes technology seem more complicated than it really is, and that boards are unable to follow what IT is presenting, so they’re confused, and there isn’t a series of steps they can follow. Or maybe IT asks for a budget for one thing one year and then wants more money the next year. And as you said, it costs money. Do you think there’s a value proposition that isn’t carried across in the presentation? How can the point be driven home?

Tyrone Grandison:  The biggest thing you just identified is the language barrier, the translation problem. I don’t fundamentally believe that anyone, tech or otherwise, is purposely trying to sound complex or purposely trying to confuse people. It’s just a matter of having skilled people in a field or domain. If you went tomorrow and started talking to an oncologist or a water engineer, and they went off and used a bunch of jargon from their particular field, they wouldn’t be trying to be overly complex or to keep you from understanding what they do. They’ve been studying this for decades, and they’re so steeped in it that that’s their vocabulary.

So the number one issue is understanding your audience. If you know your audience is not technical, or is from a different field or a different era in tech, or is the board, then understanding that audience, knowing what their language is, and translating your lingo into terms they can understand would go a long, long way toward helping people grasp the importance of privacy and cybersecurity.

Cindy Ng: We often like to make the analogy that we should treat data like money. But do you think data can potentially be more valuable than money? When attacks aren’t financially driven and attackers are out to destroy data instead, we react in a really different way. I wanted to hear your thoughts on the analogy of data versus money.

Tyrone Grandison:  Interesting. So, money is just a convenient currency to enable a trade, and money has been associated with assigning value to certain objects that we consider important. I view data as something that needs to have a value assigned to it, and money is going to be that medium, whether the money is physical money or Bitcoin. So I don’t see the two things being in conflict, or needing a comparison between their values. Data is valuable. A chair is valuable. A phone is valuable. Money is just the medium that gives us one standard unit to compare the value of all those things.

Is data going to be more valuable than the current physical IT assets a company has? Over time, I think yes, because the data you’re using, or hopefully going to be using, is going to drive more insights, hopefully more revenue, and more creative uses of your current resources. The data itself will influence how much of the other resources you acquire, and where you place or allocate those resources across the world. So I see data as a good driving force for making value-driven decisions, and I think its importance relative to physical IT assets is going to increase over time. You can see that happening already. But to say data is more valuable than cash? I’m not sure that’s the right question.

Cindy Ng: We’ve talked about the value of data, but what about data retention and migration? It’s sort of dull, yet so important.

Tyrone Grandison:  Well, there are multiple perspectives here. Data retention and migration are important for multiple reasons, and the importance normally lies in risk: in minimizing the risk or the harm that could be done to the owner of the data, or to the subjects referenced in the data sets. That’s why you have whole countries and states saying they have a data retention policy or plan, meaning that after a certain time, either the data has to be completely deleted, or it has to be stored somewhere secure and not readily accessible.

The whole premise is that you assume, for a particular period of time, that companies will need that data to accomplish the purpose they specified initially, but after that point, the risk or potential harm becomes so high that you need to do something to reduce it. And that something is normally destruction, or migration somewhere else.
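As a minimal sketch of what a retention rule looks like in code (the seven-year window and record layout are invented for illustration), the core of it is just partitioning records by age and routing the expired ones to secure deletion or cold archive:

```python
import datetime as dt

RETENTION = dt.timedelta(days=365 * 7)  # hypothetical seven-year policy

def apply_retention(records, now=None):
    """Split records into (keep, expire) by the age of their 'created' stamp."""
    now = now or dt.datetime.now(dt.timezone.utc)
    keep, expire = [], []
    for rec in records:
        (expire if now - rec["created"] > RETENTION else keep).append(rec)
    return keep, expire  # route "expire" to secure deletion or cold archive

records = [{"id": 1, "created": dt.datetime(2009, 5, 1, tzinfo=dt.timezone.utc)},
           {"id": 2, "created": dt.datetime(2017, 5, 1, tzinfo=dt.timezone.utc)}]
keep, expire = apply_retention(records)
print([r["id"] for r in keep], [r["id"] for r in expire])
```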

Cindy Ng: What about integrating one data set with another, probably a secondary use, but integrating it across institutions? I hear people want a single health solution for patient data, so that all organizations can access it. It’s definitely a risk, but is that something you think is a good idea that we should even entertain? Or would we be creating a monster, where a single database that integrates all the data is a bad solution, even if it’s great for analytics and technology use?

Tyrone Grandison:  I agree with everything you just said. It’s both. For certain purposes and scenarios, it’s good, because you get to see new things, and you get a different picture, a better, more holistic picture, once you integrate data sets. That being said, once you integrate data sets, you also increase the risk profile of the resulting data set, and you lower the privacy of the people referenced in it. The more data sets you integrate…

There’s this paper that a colleague of mine, Star Ying, and I wrote last year or the year before, which basically says there’s no privacy in big data. Simply because with big data you assume the three Vs: velocity, volume, and variety. As you add more and more data sets to build a larger big data set, as we call it, the set of attributes that can be uniquely combined to identify a subject in that larger data set grows larger and larger.

Let me think of a quick example. Say you have access to toll data: the data of people driving on your local or state highway, with logs of when a particular car went through a certain point, the time, the license plates, the owner, all that. That’s one data set by itself. You also have a police data set with a list of crimes that happened in particular locations. And pick something else: you have a bunch of records from the DMV that tell you when somebody came in for some transaction. All by themselves, these are very innocuous. All by themselves, if you anonymize them or apply techniques to protect the privacy of the individuals, they’re, well, not perfectly safe, but relatively safe.

Now start combining the different data sets. You combine the toll data with the police data, and you find that a particular car was at the scene of a crime where somebody was murdered, and that car was at a nearby toll booth one minute afterward. Now you have something interesting. You have an interesting insight. So that’s a good case.
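A toy version of that linkage, with invented records, shows how little code it takes to join two individually harmless data sets into something identifying:

```python
# Two individually "innocuous" data sets; all records are invented.
tolls = [
    {"plate": "ABC123", "booth": "I-90 exit 4", "time": "2017-06-01T22:41"},
    {"plate": "XYZ987", "booth": "I-90 exit 4", "time": "2017-06-01T23:10"},
]
crimes = [
    {"case": 4411, "location": "near I-90 exit 4", "time": "2017-06-01T22:40"},
]

# A crude join on place plus the same clock hour is enough to produce leads,
# and to erode the privacy of everyone in either data set.
for c in crimes:
    for t in tolls:
        if t["booth"] in c["location"] and t["time"][:13] == c["time"][:13]:
            print(f"case {c['case']}: plate {t['plate']} nearby at {t['time']}")
```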

We want this integration to be possible, because you get insights you couldn’t get from one data set by itself. But look at other cases, where somebody wants to be protected. Even within one domain, say you have the hospital visits for a particular person across four different hospitals. If you start merging those records, you can use the pattern of visits to uniquely identify somebody. And if you merge that with, again, the transportation records, that may give you insight into what somebody’s sick with. That may be used…

You can identify them, first of all, which they don’t want. And that could be used to do something negative against them, like denying them insurance, or whatever the use case is. So you see, in multiple different cases: one, the privacy of the individuals involved is decreased, and two, the data can be used for positive or negative purposes, for or against the individual data subject or data owner.

Cindy Ng: People have spoken about these worries. How should we intelligently synthesize this information? It’s interesting, it’s worrisome, but it can also be very beneficial, and we tend to sensationalize everything.

Tyrone Grandison:  Yup. That’s a good question. I would say, look at the major decisions in your life that you plan to make over the next couple of years. Then look at the tools, software, and things you have online right now that a potential employer, or any person you might want to get a service from, may look at to evaluate whether you get that service or not, whether it’s getting a job or getting a new car, whatever it is you want to get done.

Then ask: the questions the person on the other side will be asking, the things they’ll be looking at, would those be interpreted negatively for you? A quick example: say you’re a Facebook user. Look at all the things you do on there and all the apps you’ve connected, and then look at who has access to all that. In those particular instances, is that going to be a positive or a negative for that interaction? I think that’s just being responsible in the digital age, right?

Cindy Ng: Right. What is a project that you’re most proud of?

Tyrone Grandison:  I’m proud of a lot of things. I’m proud of the work we do here at IHME. I think it’s groundbreaking work that’s going to help a lot of people. The data we produce has actually been used to shape pollution legislation. The numbers come out, and different ministries see them. The ministry in China saw them and said, “Oh, we have an issue here, and we need to figure out how to improve our longevity with respect to carbon emissions.”


We’ve had the same thing in Africa, where somebody from the ministry, I think it was in Gambia or Ghana, I’ll find out for you afterwards, saw the numbers on deaths due to in-house combustion and started a program that gave a few thousand pots to different households, and within a few years that number went down. So, literally saving lives.

I’m proud of the White House Presidential Innovation Fellows, the group of people I worked with two and a half years ago, and the work they did. One of the fellows in my group worked with the Department of the Interior to increase the number of kids going to national parks, and they did it by actually going out, talking to kids, and figuring out what the correct incentive scheme would be to get kids to come to the parks during their summer breaks. That program is called Every Kid in a Park, and it’s been hugely successful at getting kids and parents connected back into nature. I’m also proud of the work the Commerce Data Service team did at the Department of Commerce, which helped a lot of people.

We routinely created data products with the user, the average American citizen, in mind. One of the things I’m really proud of is that we helped democratize and open up U.S. Census Bureau data, which is very powerful. It’s freely open to everybody, and it’s been used by a lot of businesses that make a lot of money from selling the data itself. We exposed that data through something called the CitySDK, and that led to everything from people building apps to help food trucks find out where demand was, to people building websites to help people with accessibility challenges figure out how to get around particular cities, to people helping supermarkets figure out how to get fresh foods to communities that didn’t have access to them. That was awesome to see.

The other thing was exposing the income inequality data, showing people that the narrative they’re hearing about gender and race inequality across different professions is actually far worse than what’s mentioned in public. So I’m proud of all of it, because it was all fun work, all impactful work, all work that hopefully helped people.

[Podcast] When Hackers Behave Like Ghosts

[Podcast] When Hackers Behave Like Ghosts

Leave a review for our podcast & we'll send you a pack of infosec cards.


We’re a month away from Halloween, but when a police detective aptly described a hotel hacker as a ghost, I thought it was a really clever analogy! It’s hard to recreate and retrace an attacker’s steps when there are no fingerprints or evidence of forced entry.

Let’s start with your boarding pass. Before you toss it, make sure you shred it, especially the barcode. It can reveal your frequent flyer number, your name, and other PII. Someone could even submit the passenger’s information on the airline’s website and learn about any future flights. Anyone with access to your printed boarding pass could do harm, and you would never know who the perpetrator was.
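To see why the barcode matters, here is a rough sketch of pulling fields out of an IATA bar-coded boarding pass (BCBP) payload. The offsets below follow the published layout of the mandatory fields, though a production parser should handle the variable sections properly; the sample string is invented.

```python
# Invented BCBP-style payload; real ones come out of any barcode scanner app.
raw = "M1DOE/JOHN            EABC123 JFKSFOAA 0123 158Y001A0001 100"

fields = {
    "passenger_name": raw[2:22].strip(),
    "booking_ref":    raw[23:30].strip(),  # the PNR printed on your pass
    "from_airport":   raw[30:33],
    "to_airport":     raw[33:36],
    "carrier":        raw[36:39].strip(),
    "flight_number":  raw[39:44].strip(),
}
print(fields)
# Name + PNR is often all an airline site needs to manage the whole booking.
```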

Next, let’s assume you arrive at your destination and the hotel is using a hotel key with a vulnerability. In the past, when hackers reveal a vulnerability, companies step up to fix it. But now, when systems need a fix and a software patch won’t do, how do we scale the fix for millions of hotel keys when it comes to hardware?

Other articles discussed:

Tool of the week: Gost: Build a local copy of Security Tracker. 

Panelists: Kilian Englert, Forrest Temple, Mike Buckbee

[Podcast] Security Doesn’t Take a Vacation

[Podcast] Security Doesn’t Take a Vacation

Leave a review for our podcast & we'll send you a pack of infosec cards.



Do you keep holiday photos off social media when you’re on vacation? Security pros advise that it’s one way to reduce your security risk. Yes, the idea of an attacker mapping out a route to steal items from your home sounds ambitious. However, we’ve seen actual examples of both phishing attacks and theft occur.

Alternatively, the panelists point out that this perspective depends on how vulnerable you might be. An attacker who needs an entry point and believes you’re a worthy target is vastly different from the general noise of regular social media sharers.

Other articles discussed:

Tool of the week: A Tunnel which turns UDP Traffic into Encrypted FakeTCP/UDP/ICMP Traffic 

Panelists: Mike Thompson, Forrest Temple, Mike Buckbee

[Podcast] The Security of Visually Impaired Self-Driving Cars

[Podcast] The Security of Visually Impaired Self-Driving Cars

Leave a review for our podcast & we'll send you a pack of infosec cards.


How long does it take you to tell the difference between fried chicken and a poodle? What about a blueberry muffin and a Chihuahua? When presented with these photos, it takes a closer look to tell them apart.

It turns out that self-driving car cameras have the same problem. Recently, security researchers were able to confuse self-driving car cameras by adhering small stickers to a standard stop sign. What did the cameras see instead? A 45 mph speed limit sign.

The dangers are self-evident. However, the good news is that there are enough built-in sensors and cameras to act as a failsafe. But followers of our podcast know that other technologies with other known vulnerabilities might not be as lucky.

Other articles discussed:

Tool of the week: Macie, Automatically Discover, Classify, and Secure Content at Scale

Panelists: Jeff Peters, Kris Keyser, Mike Buckbee

[Podcast] Deleting a File Is More than Placing It into the Trash

[Podcast] Deleting a File Is More than Placing It into the Trash

Leave a review for our podcast & we'll send you a pack of infosec cards.


When we delete a file, our computer’s user interface makes the file disappear, as if it were a simple drag and drop. The reality is that the file is still on your hard drive.

In this episode of the Inside Out Security Show, our panelists elaborate on the complexities of deleting a file, the lengths IT pros go through to obliterate a file, and surprising places your files might reside.

Kris Keyser explains, “When you’re deleting a file, you’re not necessarily deleting a file. You’re deleting the reference to that file.”
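A quick sketch makes the point. On POSIX systems, deleting a file only removes a directory entry (a reference); the bytes survive until the last reference, including open handles, is gone. The file path here is a throwaway example.

```python
import os

with open("secret.txt", "w") as f:
    f.write("still here after 'deletion'")

f = open("secret.txt")   # hold an open handle, i.e. a second reference
os.remove("secret.txt")  # removes the directory entry, not the bytes
print(f.read())          # the content is still readable via the handle
f.close()                # only now can the filesystem reclaim the blocks
```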

Other articles discussed:

Instead of a “Tool of the Week,” we learned about a coveted certification from a Black Hat attendee: the Offensive Security Certified Professional (OSCP). It is a 24-hour lab exam in which you demonstrate your ability to identify vulnerabilities, conduct pen tests, and more.

Panelists: Kris Keyser, Jeff Peters, Forrest Temple