Category Archives: Privacy

The Equifax Breach and Protecting Your Online Data

The Equifax Breach and Protecting Your Online Data

As we all know by now, the Equifax breach exposed the credit reports of over 140 million Americans. What’s in these reports? They include consumers’ credit histories along with their social security numbers. That makes this breach particularly painful.

The breach has also raised the profile of the somewhat mysterious big three national credit reporting agencies, or NCRAs — Experian and TransUnion are the other two. Lenders use NCRAs to help them decide whether to approve credit for auto loans, mortgages, home improvement loans, and of course new credit cards.

NCRAs Are Supposed to Protect Against Identity Theft

Let’s say the Equifax hackers go into phase two of their business plan, likely selling personally identifiable information (PII) to cyber gangs and others who will directly open up fake accounts. Of course, the stolen social security numbers make this particularly easy to attempt.

The bank or other lender extending credit will normally check the identity and creditworthiness of the applicant by contacting the NCRAs, who under red flag rules are supposed to help lenders spot identity theft. Oftentimes (but not always), the cyber thieves will use a different address than the victim’s when applying for credit, and this anomaly should be noticed by the NCRAs, who have the real home address.

The credit report should in theory then be flagged so future lenders will be on alert as well, and the financial company originally asking for the report is also warned of possible identity theft on the credit application.

I am not the first to observe that the irony level of the Equifax breach is in the red-zone – like at 11 or 12. The NCRAs are entrusted with our most personal financial data, and they’re the ones who are supposed to protect consumers against identity theft.

Unfortunately, an NCRA hacking is not a new phenomenon, and the big three have even been the target of class action suits brought by affected consumers under the Fair Credit Reporting Act (FCRA). To no one’s surprise, the lawsuits over the Equifax breach have already begun – the last count puts it at 23.

What Consumers Should Do

While we hope that red flags have already been placed on affected accounts, it’s probably best to take matters into your own hands. The FTC, the agency in charge of enforcing the FCRA, recommends a few action steps.

At a minimum, you should go to the Equifax link and see if your social security number is one that’s been exposed. If so, you can get free credit monitoring for a year — in short, you’ll know if someone tries to request credit in your name.

(Yes, I just did it myself, and discovered my number might have been compromised. I went ahead and subscribed for the credit monitoring.)

If you’re really paranoid, you can go a step further and put a credit freeze on your credit report. This restricts access to the credit reports held by the NCRAs and, in theory, should prevent lenders from creating new accounts. Normally there would be a charge, but Equifax graciously arranged to freeze the reports for free after outraged consumers protested.

None of these measures are foolproof, and clever attackers and thieves can get around these protections.

Online Protection With The Troy Hunt Course

Besides social security numbers, the hackers hauled away a lot of PII – names, addresses, and likely bank and credit card companies. As far as I can tell, passwords were not taken by the Equifax hackers.

Obviously, social security numbers are the most monetizable, but the other PII is still useful, particularly in phishing attacks. Readers of this blog know how we feel on the subject: any online information gained by hackers can and will be used against you!

So we should all be on alert for phish mails from what may appear to be our banks and other financial companies, and we should be wary of other scams.

That’s where the indispensable security expert Troy Hunt can help us all! His Internet Security Basics video course is a favorite of ours because it breaks down online security into a series of simple lessons that non-technical folks can quickly understand and take action on.

I draw your attention to Lesson Three, “How to know when to trust a website”, which will be incredibly helpful in avoiding the coming wave of online scams.

Let’s not waste a crisis: it’s probably also a good time to review and change online passwords and understand what makes for a good password. Troy’s Lesson Two, “How to Choose a Good Password,” will bring you up to speed on passphrases and password managers.

The Equifax breach is as bad as it gets, but let’s not make it worse by letting cyber thieves exploit us again through lame phishing emails.

Learn how to protect yourself online with security pro Troy Hunt’s five-part course.

[Podcast] Dr. Tyrone Grandison on Data, Privacy and Security

[Podcast] Dr. Tyrone Grandison on Data, Privacy and Security



Dr. Tyrone Grandison has done it all. He is an author, professor, mentor, board member, and a former White House Presidential Innovation Fellow. He has held various positions in the C-Suite, including his most recent role as Chief Information Officer at the Institute for Health Metrics and Evaluation, an independent health research center that provides metrics on the world’s most important health problems.

In our interview, Tyrone shares what it’s like to lead a team of forty highly skilled technologists who provide the tools, infrastructure, and technology that enable researchers to develop statistical models, visualizations, and reports. He also describes his adventures wrangling petabytes of data, the promise and peril of our data economy, and what board members need to know about cybersecurity.

Transcript

Tyrone Grandison:  My name is Tyrone Grandison. I am the Chief Information Officer at the Institute for Health Metrics and Evaluation, IHME, at the University of Washington in Seattle. And IHME is a global nonprofit in the public health and population health space, where we’re focused on how do we get people to have a long life and have that long life at the highest health capacity possible.

Cindy Ng: Oftentimes, the bottom line drives businesses forward, whereas your institute is driven by helping policy makers and donors determine how to help people live longer and healthier lives. What is your involvement in ensuring that that vision is sustained and carried through?

Tyrone Grandison:  Perfect. So I lead the technology team here, which is a team of 40 really skilled data scientists, software engineers, system administrators, and project and program managers. And what we do is provide the base, the infrastructure. We provide tools and technologies that enable researchers to, one, ingest data. So we get data from every single country across the world. Everything from surveys to censuses to death records. No matter how small or poor or politically closed a country is. And we basically house this information. We help the researchers develop statistical models. Like, very sophisticated statistical models and tools on them that make sense of the data. And then we actually put it out there to a network of over 2,400 collaborators.

And they help us produce what we call the Global Burden of Disease, which, you know, shows what in different countries of the world is the predominant thing that is actually shortening lives in particular age groups, for particular genders and all demographic information. So, now people can, if they wanted to, do an apples-to-apples comparison between countries, across ages and over time. So, if you wanted to see the damage done by tobacco smoking in Greece and compare that to the healthy years lost due to traffic injuries in Guatemala, you can actually do that. If you wanted to compare both of those things with the impact of HIV in Ghana, then that’s now possible. So our entire thing is, how do we actually provide the technology base and the skills to, one, host the data, support the building of the models and support the visualization of it. So people can actually make these comparisons.

Cindy Ng: You’re responsible for a lot, so let’s try to break it down a bit. When you receive a bunch of data sets from various sources, take me through what your plan is for them. Last time we spoke, we talked about obesity. Maybe that’s a good one, one that everyone can relate to?

Tyrone Grandison:  Sure. So, say we get an obesity data set from the health entities within a particular country. It goes through a process where we have a team of data analysts look at the data and extract the relevant portions of it. We then put it into our ingestion pipeline, where we vet it. Vet it in terms of what it can apply to. Does it apply to specific diseases? Obviously, it’s going to apply to a specific country. Does it apply to a particular age group and gender? From that point on, we then include it in models. And we have our modeling pipeline that does everything from estimating the number of years lost from obesity in that particular country. Also, as I mentioned before, it actually sees if that particular statistic that we got from that survey is relevant or not.

From there, we basically use it to figure out, okay, well, what is the overall picture across the world for obesity? And then, we visualize it and make it accessible. And provide people with the ability to tell stories with it, with the hope that at some point, a policymaker or somebody within the public health institute of a particular country is gonna see it and actually use it in their decision making in terms of how to actually improve obesity in their particular country.

Cindy Ng: And when you talk about relevance and modeling, people in the industry say that there is a lot of unconscious bias. How do you reconcile that? And how do you work with factors that people think are controversial? For instance, people have said that using a body mass index isn’t accurate.

Tyrone Grandison:  That’s where we actually depend a lot on the network of collaborators that we spoke about. Not only do we have a team that has been doing epidemiology and advancing population health metrics for, you know, over two decades. We also depend upon experts within each particular country, once we actually produce, like, you know, the first estimates based upon the initial models, to look at these estimates and say, “Nope. This does not make sense. We need to actually adjust your model to factor in that same unconscious bias.” Or, to kind of remove something the model says we’re seeing but that the model may need to be tweaked on or is wrong about. It all boils down to having people vet what the models are doing.

So, it’s more along the lines of how do you create systems that are really good at human computation. Marrying the things that machines are good at and then putting in a step there that forces a human to verify and kind of improve the final estimate that you actually want to produce.

Cindy Ng: Is there a pattern that you’ve seen where, time and time again, the model doesn’t account for X, Y and Z? And then the human gets involved and figures out what’s needed and provides the context? Is there a particular concept or idea that you’ve seen?

Tyrone Grandison:  There is. And there is to the point where we have basically included it in our initial processing. So, there is this concept, right, the idea of a shock. Where a shock is an event that models cannot predict and that may have wide-ranging impact on what you’re trying to produce. So, for example, you could consider the earthquake in Haiti as a shock. You could consider the HIV epidemic as a shock. Every single country in any one given year may have a few shocks depending upon what the geolocation is that you’re looking at. And again, the shocks are different, and we are really grateful to the collaborative network for providing insight and telling us, “Oh, this shock is actually missing from your model for this particular location, for this particular population segment.”

Cindy Ng: It sounds like there’s a lot of relationship building, too, with these organizations because sometimes people aren’t so forthcoming with what you need to know.

Tyrone Grandison:  So, I mean, it’s relationship building; the work that we’ve been doing here has been going on for 20 years. So, imagine 20 years of work just producing this Global Burden of Disease. And then, probably another decade or two before that just building the connections across the world. Because our Director has been in this space for quite a while now. He’s worked everywhere from the WHO to MIT doing this work. So, the connections there and the connections from the executive team have been invaluable in making sure that people actually speak candidly and honestly about what’s going on. Because we are the impartial arbiters of the best data on what’s happening in population health.

Cindy Ng: And it certainly helps when it’s not driven by the bottom line. The most important thing is to improve everyone’s health outcomes. What are the challenges of working with disparate data sets?

Tyrone Grandison:  So, the challenges are the same everywhere, right? The challenges all relate to, okay, well, are we talking about the same things? Right. Are we talking the same language? Do we have the same semantics? Basic challenge. Two is, well, does the data have what we need to actually answer the question? Not all data is relevant. Not all data is created equal. So, just figuring out what is actually going to give us insight into, you know, the question as to how many years you lose to a particular disease. And the third thing, which is pretty common to, you know, every field that is trying to push into the open data arena: do we have the right facets in each data set to actually integrate them? Does it make sense to integrate them at all? So, the challenges are not different from what the broader industry is facing.

Cindy Ng: You’ve developed relationships for over 20 years. Back then, we weren’t able to assess so many different data sets, I’m guessing billions and trillions of them. Have you seen the transition happen? How has that transition been difficult? And how has it made your lives so much better?

Tyrone Grandison:  Yeah. So, the Global Burden of Disease actually started on a cycle where, you know, when we considered that we had enough data to actually make those estimates, we would produce the next Global Burden of Disease. Right, and we just moved, starting this year, to an annual cycle. So, that’s the biggest change. The biggest change is because of the wealth of data that exists out there. Because of the advances in technology, now we can actually increase the production of this data asset, so to speak. Whereas before, it was a lot of anecdotal evidence. It was a lot of negotiation to get the data that we actually need. Now, there are far more open data sets. So, lots more that’s actually available.

A willingness, due to past demonstrations of the power of open data, for governments and people to actually provide and produce it, because they know that they can actually use it. It’s really the technology hand-in-hand with the cultural change that’s happened. Those have been the biggest changes.

Cindy Ng: What have you learned about wrangling petabytes of data?

Tyrone Grandison:  A lot. In a nutshell, it’s very difficult, and if I was to give advice to people, I would start with: what’s the problem you’re trying to solve? What’s the mission you’re trying to achieve? And figure out what are the things that you need in your data sets that would help you answer that question or mission. And finally, as much as possible, stick with a standardize-and-simplify kind of methodology. Leverage a standard infrastructure and a standard architecture across what you are doing. And make it dead simple, because if it’s not standard or simple, then getting to scale is really difficult. And scale meaning processing tens, hundreds of petabytes worth of data.

Cindy Ng: There are a lot of health trackers, too, that are trying to gather all sorts of data in hopes that they might use it later. Is that a recommended best practice approach for figuring out your solution or the problem? Because, you know, what if you didn’t think of something and then a new idea popped into your head? And then there’s a lot of controversy with that. What is your insight…

Tyrone Grandison:  The controversy is, in my view, actually very real. One, what is the level of data that you are collecting, right? So, at IHME, like, we’re lucky to be actually looking at population-level data. If you’re looking at or collecting individual records, then we have a can of worms in terms of data ownership, data privacy, data security. Right. And, especially in America, what you’re referring to is the whole argument around secondary use of health data. The concern or issue is just like with HIPAA, the Health Insurance Portability and Accountability Act. You’re supposed to just have data for one person for a specific purpose and only that purpose. The issue or concern, like you just brought up, is, one, a lot of companies actually view data that is created or generated on a particular individual as being their own property. Their own intellectual property. Which you may or may not agree with.

At this point, there’s nothing in the current model, the current infrastructure, that says the person who this data is about should actually have a say in it. Right. And I can just say, like, personally, I believe that if the data is about you, if that data’s created by you, then technically you should own it. And the company should be good stewards of the data. Right. Being a good steward simply means that you’re going to use the data for the purpose that you told the owner that you’re going to use it for. And that you will destroy the data after you finish using it. If you come up with a secondary use for it, then you should ask the person again, do they want to actually participate in it?

So, the issue that I have with it is basically the disenfranchisement of the data owner. The neglect of consent, or of even asking for consent, for the data to be used in a secondary function or for a secondary purpose. And the fact that there are inherent things in that scenario, with that question, that are still unresolved and are just assumed to be true, that people just need to look at.

Cindy Ng: When you say when the project is over, how do you know when the project is over? Because I can, for instance, write a paper and keep editing and editing and it will never feel completed and done.

Tyrone Grandison:  Sure. So, it’s… I mean, put it this way. If I say to the people that are involved in a particular study, or that gave me their data, that I want to use this data to test a hypothesis, and the hypothesis is that drinking a lot of alcohol will cause liver damage. Okay, obvious. And I, you know, publish my findings on it. It gets revised. You know, at the very end, there has to be a point where the paper either gets published in a journal or somewhere, or not. Right. I’m assuming. If that’s the case and, you know, I publish it and I find out that, hey, I can actually use the same data to figure out the effects of alcohol consumption on some other thing. That is a secondary purpose that I did not have an agreement with you on, and so I should actually ask for your consent on that. Right.

So, the question is not just when is the task done, but when have I actually accomplished the purpose that I negotiated and asked you to use your data for.

Cindy Ng: So, it sounds like that’s really the best practice when you’re gathering or using someone’s personal data. That’s the initial contract. If there is a secondary use, they should also know about it. Because you don’t want to end up in a situation like Henrietta Lacks, where they’re using your cells and you don’t even know it, right?

Tyrone Grandison:  Yup. But Henrietta Lacks actually is, like, a good example. It highlights the current practices of the industry. Right. And again, luckily public health does not have this issue because we have aggregated data on different people. But in the general healthcare scenario, where you do have individual health records, what companies are doing, and what they did in the Henrietta Lacks case, was they may have actually specified in some legal document that, “Hey, we’re gonna use your information for X, and X is the purpose.” And they make either X so broad, so general, that it encompasses, like, every possible thing that you can imagine. Or, they basically say, “We’re going to do a really specific purpose and anything else that we find.” And that is now the common practice within the field. Right?

And to me, the heart of that seems very deceptive. Right. Because you’re saying to somebody that, you know, we have no idea what we’re going to do with your data, we want access to do it, and, oh, we assume that you’re not going to own it. We assume that any profits or anything that we get from it is going to be ours. Do you see how the model itself just seems perverse? It’s tilted or veered towards how do we actually get something from somebody for free and turn it into an asset for my business. Where I have carte blanche to do what I want with it. And I think that discussion has not happened seriously in the healthcare industry.

Cindy Ng: I’m surprised that businesses haven’t approached your institution for assistance with this matter. It sounds like it would make total sense, because I’m assuming that all of your data might have all the names and PHI stripped.

Tyrone Grandison:  We don’t even get to that level at this point.

Cindy Ng: Oh, you don’t even…

Tyrone Grandison:  It’s information at a generalized level. So there are multiple techniques that you can actually use to, let’s say, protect privacy for people. One would be just suppression. Okay, so I suppress the things that I call or consider PII. The other is, like, generalization. Right. So, it’s basically, I’m going to look at or get information that is not at the most granular level, but at the level above it. So instead of looking at you and all your peers, you just go a level above that and say, “Okay. Well, let’s look at everyone that lives in a particular zip code or a particular state or country.” So, that way, you have protection from hiding in a crowd. So, you can’t really identify one particular person in a data set itself. So, at IHME we don’t have the PHI/PII issue because we work on generalized data sets.
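To make the two techniques Tyrone mentions concrete, here is a minimal sketch in Python. It is purely illustrative (the field names, the zip-code truncation, and the ten-year age bands are assumptions for this example, not IHME’s actual pipeline): direct identifiers are suppressed, and the remaining quasi-identifiers are generalized to a coarser level so the person can hide in a crowd.

```python
# Hypothetical patient record; field names and cut-offs are illustrative only.
def anonymize(record):
    # Suppression: drop direct identifiers outright.
    generalized = {k: v for k, v in record.items()
                   if k not in {"name", "ssn", "email"}}

    # Generalization: replace granular values with coarser categories.
    generalized["zip"] = generalized["zip"][:3] + "**"          # 98105 -> "981**"
    generalized["age"] = f"{(generalized['age'] // 10) * 10}s"  # 37    -> "30s"
    return generalized

patient = {"name": "Jane Doe", "ssn": "123-45-6789", "email": "jane@example.com",
           "zip": "98105", "age": 37, "diagnosis": "type 2 diabetes"}

print(anonymize(patient))
# {'zip': '981**', 'age': '30s', 'diagnosis': 'type 2 diabetes'}
```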

Cindy Ng: You’ve held many different roles. You’ve been a CDO, a CIO, a CEO. Which role do you enjoy doing most?

Tyrone Grandison:  So, any role that actually allows me to do two things. Like, one, create and drive the direction or strategy of an organization. And, two, enables me to help with the execution of that strategy to actually produce things that will positively impact people. The roles that I have been fond of so far would be CEO and CIO because at those levels, you basically also get to set what the organizational culture is, which is very valuable in my mind.

Cindy Ng: And since you’ve also been a board member, what do you think the board needs to know when it comes to privacy and cyber security?

Tyrone Grandison:  First of all, I think it should be an agenda item that you deal with upfront and not after a breach or an incident. It should be something that you bake into your plans and into the product life cycle from the very beginning. You should be proactive in how you actually view it. The main thing I’ve noticed over time is that people do not pay attention to privacy, cyber security, cyber crime until, you know, after there is, and this is a horrible analogy, a dead body in the sea. What happened? And then you start having reputational damage and financial damage because of it.

Whereas, you know, thinking about the process, technology, people and tools that would actually help you fix this from the very get-go would have saved you a lot of time. And, you know, the whole perception, not perception, but the whole thought is that both of these things, privacy and security, are cost centers; you don’t see a profit from them. You don’t see revenue being generated from them. And you only actually see the benefit, the cost savings, so to speak, after everyone else has actually been breached or damaged from an episode and you’re not. Right. Yeah. So be a little bit more proactive upfront rather than reactive and, you know, after the fact.

Cindy Ng: But do you also think, as it’s been said, that IT makes technology seem more complicated than it really is? And the board is unable to follow what IT is presenting, so they’re confused, and there’s not a series of steps they can follow? Or maybe IT asked for a budget for one thing one year and then wants more money the next year. And as you said, it costs money. But do you also think that there’s a value proposition that’s not carried across in a presentation? How can the point be driven home then?

Tyrone Grandison:  So, I mean, the biggest thing you just identified is the language barrier. The translation problem. So, I don’t fundamentally believe that anyone, tech or otherwise, is purposely trying to sound complex. Or purposely trying to confuse people. It’s just a matter of, you know, you have skilled people in a field or domain. Whatever the domain is. So, if you went tomorrow and started talking to an oncologist or a water engineer, and they just went off and used a bunch of jargon from their particular fields. They’re not trying to be overly complex. They’re not trying to keep you from understanding what they’re doing. But they’ve been studying this for decades. And they’re just, like, so steeped in it that that’s their vocabulary.

So, the number one issue is just that: understanding your audience. Right. If you know that your audience is not tech, or is from a different field or a different era in tech, or is the board, then understanding the audience, knowing what their language is, and translating your lingo into things that they can understand would, I think, go a long, long way in actually helping people understand the importance of privacy and cyber security.

Cindy Ng: And we often like to make the analogy that we should treat data like money. But do you think that data can potentially be more valuable than money, when the attackers aren’t financially driven but are out to destroy data instead? We react in a really different way. I wanted to hear your thoughts on the analogy of data versus money.

Tyrone Grandison:  Interesting. So, money is just a convenient currency. Right. To enable a trade. And money has been associated with giving value to certain objects that we consider important. So, I’m viewing data as something that needs to have a value assigned to it. Right. And money is going to be that medium. Right. Whether the money is actual physical money or it’s Bitcoin. So, I don’t see the two things being in conflict. Or the two things having a comparison of value between them. I just think that data is valuable. A chair is valuable. A phone is valuable. Money is just, like, that medium that allows us to have one standard unit to compare the value between all those things.

Is data going to be more valuable than the current physical IT assets that a company has? Over time, I think, yes. Because the data that you’re using, that you’re hopefully going to be using, is going to be driving more, one, insights. More, hopefully, revenue. More creative uses of the current resources. So, the data itself is going to influence how much of the other resources you will actually acquire, or how much of the other resources you need to place in particular spots or instances or allocate across the world. So, I see data as a good driving force for making these value-driven decisions. So, I think its importance versus the physical IT assets is going to increase over time. You can see that happening already. To say data is more valuable than cash, I’m not too sure that’s the right question.

Cindy Ng: We’ve talked about the value of data, but what about the data retention and migration? It’s sort of dull, but yet so important.

Tyrone Grandison:  Well, multiple perspectives here. Data retention and migration is important for multiple reasons. Right. And the importance normally lies in risk. In minimizing the risk or the harm that can potentially be done to the owner of the data, or the subjects that are referenced in the data sets. Right. That’s where the importance lies. That’s why you have whole countries, states actually saying that they have a data retention policy or plan. And that means that after a certain time, either the stuff has to be gone, completely deleted, or be stored somewhere that is secure and not readily accessible.

And the whole premise of it is just, you assume that for a particular period of time, companies are going to need to use that data to actually accomplish the purpose that they specified initially, but after that point, the risk or the potential harm becomes so high that you need to do something to reduce that risk. And that thing normally is destruction or migration somewhere else.

Cindy Ng: What about integrating that data set with another, probably a secondary use, but integrating it with other institutes? I hear that people want a one-health solution in terms of patient data, so that all organizations can access it. It’s definitely a risk. But is that something that you think is a good idea, that we should even entertain? Or are we going to create a monster, where a single database that integrates everything and all the data is a bad solution, even though it’s great for analytics and technology and use?

Tyrone Grandison:  I agree with everything you just said. It’s both. So, for certain purposes and scenarios, you know, it’s good. Because you get to see new things and you get a different picture, a better picture, a more holistic picture once you integrate data sets. That being said, once you integrate data sets, you also increase the risk profile of the resulting data sets. And you lower the privacy of the people that are referenced in the data sets. Right. The more data sets you integrate…

So there’s this paper that a colleague of mine, Star Ying, and I wrote, like, last year or the year before last, that basically says there’s no privacy in big data. Simply because, like, with big data you assume the three Vs. So, velocity, volume and variety. As you actually add more and more data sets in to get, like, a larger big data set, as we call it, what you have happening is that the set of things that can be uniquely combined to identify a subject in that larger big data set becomes larger and larger.

So, I mean, a quick, let me see what the quick example would be. So, if you have access to toll data, you have access to the data of, you know, people that are going on, you know, your local highway or your state highway. And you have the logs of when a particular car went through a certain point. The time, the license plates, the owner. All that stuff. So, that’s one data set by itself. You have a police data set that has a list of crimes that happened in particular locations. And you pick something else. You have a bunch of records from the DMV that tell you when somebody actually came in to have some operation done. All by themselves, very innocuous. All by themselves, if you anonymize them, or apply techniques to them to protect the privacy of the individuals, perfectly safe. Okay, not perfectly, but relatively.

If you start combining the different data sets, just randomly. You combine the toll data with the police data. And you find out that there’s a particular car that was at the scene of a crime where somebody was murdered. And that car was at a toll booth that was nearby, like, one minute afterward. Now you have something interesting. You have an interesting insight. So that’s a good case.

We want to actually have this integration be possible. Because you get insights that you couldn’t get from just having that one data set by itself. If you start looking at other cases where, you know, somebody wants to actually be protected: you have, and this is just within one data set, a data set of all the hospital visits across four different hospitals for a particular person. What you can do if you start merging them is actually use the pattern of visits to uniquely identify somebody. And if you start merging that with, again, the transportation records, that may be something that gives you insight as to what somebody’s sick with. That may be used…

You can identify them, first of all, which they may not want, because they went to one hospital. And that could be used to do something negative against them. Like deny them insurance or whatever the use case is. But you see, in multiple different cases, one, the privacy of the individuals involved is actually decreased. And, two, it can be used for, you know, positive or negative purposes. For or against the individual data subject or data owner.
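Tyrone’s toll-booth scenario is essentially a linkage attack: two data sets that are harmless on their own become identifying once they are joined on shared quasi-identifiers such as place and time. Below is a rough, hypothetical sketch of the idea; the records and the two-minute matching window are made up for illustration.

```python
from datetime import datetime, timedelta

# Two hypothetical, individually "innocuous" data sets.
toll_log = [
    {"plate": "ABC123", "booth": "I-5 North", "time": datetime(2017, 9, 1, 22, 4)},
    {"plate": "XYZ789", "booth": "I-5 North", "time": datetime(2017, 9, 1, 23, 40)},
]
police_log = [
    {"incident": "homicide", "location": "I-5 North", "time": datetime(2017, 9, 1, 22, 3)},
]

# Linkage: join the two sets on shared quasi-identifiers (place and a narrow time window).
window = timedelta(minutes=2)
for crime in police_log:
    plates = [t["plate"] for t in toll_log
              if t["booth"] == crime["location"]
              and abs(t["time"] - crime["time"]) <= window]
    print(crime["incident"], "->", plates)   # homicide -> ['ABC123']
```

Each log on its own reveals little; it is the join that narrows the field to a single plate, which is why integrating data sets raises the risk profile even when each set was anonymized individually.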

Cindy Ng: People have spoken about these worries. How should we intelligently synthesize this information? Because it’s interesting, it’s worrisome, but it can also be very beneficial. And we tend to sensationalize everything.

Tyrone Grandison:  Yup. That’s a good question. So, I mean, I would say to look at the major decisions in your life that you plan to be making over the next couple of years. And then look at the tools, software, things that you have online right now that a potential employer may actually look at. Or not just an employer, but a potential person that you’re looking to do something with, to get a service from, who may actually look at it to evaluate whether you get the service or not. Whether it be getting a job or getting a new car. Whatever it is. Whatever that thing is that, you know, you want to actually get done.

And, you know, see if the current things, the current questions that the person on the other side will be asking and looking at, would be interpreted negatively for you. A quick example would just be, okay, you’re a Facebook user; look at all the things that you do on there and all the kinda good apps that you have. And then look at who has access to all that. And in those particular instances, is that going to be a positive for that interaction or a negative? I mean, I think that’s just being responsible in the digital age, right?

Cindy Ng: Right. What is a project that you’re most proud of?

Tyrone Grandison:  I’m proud of a lot of things. I’m proud of the work that we do here at IHME. I think it’s groundbreaking work that’s gonna help a lot of people. The data that we produce has actually been used to drive pollution legislation. The numbers come out. Different ministers see them. The Ministry in China saw it and said, “Oh, we have an issue here. And we need to actually figure out how do we actually improve our longevity in terms of carbon emissions.”

 

We’ve had the same thing in Africa, where there was somebody from the Ministry. I think it was, sorry, was it Gambia or Ghana? I’ll find out for you afterwards. And they saw the numbers for, like, deaths due to in-house combustion. And started a program that gave a few hundred, well, a few thousand pots to different households, and within, like, a few years, they saw that number go down. So, literally saving lives.

I’m proud of the White House Presidential Innovation Fellows, that group of people that I worked with two and a half years ago, and the work that they did. So, one of the fellows in my group worked with the Department of the Interior to increase the number of kids that were going to national parks. And, you know, they did it by actually going out and talking to kids and figuring out, like, what the correct incentive scheme would be. To actually have kids come to the park when they had their summer breaks. And that program is called, like, Every Kid in a Park. And it’s hugely successful at getting people, kids and parents, like, connected back into nature and life. Right. I’m proud of the work the data service team did at the Department of Commerce. And that did help a lot of people.

We routinely created data products with the user, the average American citizen, in mind. And, like, one of the things that I’m really proud of is that we helped democratize and open up U.S. Census Bureau data. Which, you know, is very powerful. It’s actually freely open to everybody, and it’s been used by a lot of businesses that make a lot of money from selling the data itself. Right. So we looked at and exposed that data through something called the CitySDK, and, you know, that led to everything from people building apps to help food trucks find out where demand was, to people building websites to help accessibility-challenged people figure out how to get around particular cities, to people helping supermarkets figure out how to get fresh foods to communities that didn’t have access to them. That was awesome to actually see.

The other thing was exposing the income inequality data and, like, showing people that the narrative they are hearing about gender and race inequality amongst different professions is actually far worse than what’s mentioned out there in public. So, I mean, I’m proud of all of it because it was all fun work. All impactful work. All work that hopefully helped people.

[Podcast] Phishing Researcher Zinaida Benenson, Transcript


I’m always reluctant to make a direct shameless plea to read our IOS content. But you must read the following transcript of my recent interview with Dr. Zinaida Benenson, a German security researcher. Last year she presented at Black Hat the results of a nicely designed experiment to measure the susceptibility of college students to phish mail. Let’s just say the students could use some extra tutoring when it comes to the dangers of the web.

She proved to my satisfaction that our curiosity about what lies ahead at the next link overrides our cautiousness and overall rational thinking. Benenson believes that non-college students would do just as poorly in a similar experiment.

What does this have to do with data security at your organization?

In short: some percentage of employees will always click on the worst, spammiest phish mail imaginable. And if you make the phish content even somewhat convincing, you’ll get even higher yields. Benenson has also shown that it likely doesn’t matter whether you give employees extensive security awareness training. Someone will click, and that’s all the hackers need.

The more important point is to have secondary defenses in place that can spot an attack in progress — ransomware, data theft, denial of service — and limit the damage.

 

[Inside Out Security] Zinaida Benenson is a senior researcher at the University of Erlangen-Nuremberg. Her research focuses on human factors in privacy and security, and she also explores IoT security, two topics we are also very interested in at the Inside Out Security blog. Zinaida recently completed research into phishing. If you were at last year’s Black Hat Conference, you heard her discuss these results in a session called How To Make People Click On Dangerous Links Despite Their Security Awareness.

So, welcome Zinaida.

[Zinaida Benenson] Okay. So my group is called Human Factors In Security And Privacy. But also, as you said, we are doing technical research on the internet of things. And mostly when we are talking about human factors, we think about how people make decisions when they are confronted with security or privacy problems, and how we can help them make those decisions better.

 

[IOS] What brought you to my attention was the phishing study you presented at Black Hat, I think that was last year. And it was just so disturbing, after reading some of your conclusions and some of the results.

But before we talk about them, can you describe the specific experiment you ran, phishing college students using both email and Facebook?

The Experiment

[ZB] So in a nutshell, we sent, to over 1,000 university students, an email or a personal Facebook message from non-existing persons with popular German names. And these messages referred to a party last week and contained a link to supposed pictures from the party.

In reality, this link led to an “access denied” page, but the links were individualized. So we could see who clicked, and how many times they clicked. And later, we sent them a questionnaire where we asked for the reasons for their clicking or not clicking.

 

[IOS] Right. So basically, they were told that they would be in an experiment but they weren’t told that they would be phished.

[ZB] Yes. So recruiting people for, you know, cyber security experiments is always tricky because you can’t tell them the real goal of the experiment — otherwise, they would be extra vigilant. But on the other hand, you can’t just send them something without recruiting them. So this is an ethical problem. It’s usually solved by recruiting people for something similar. So in our case, it was a survey about their internet habits.

 

[IOS]  And after the experiment, you did tell them what the purpose was?

[ZB] Yes, yes. So this is called a debriefing, and this is also a special part of the ethical requirements. So we sent them an email where we described the experiment and also some preliminary results, and also described why it could be dangerous to click on a link in an email or a Facebook message.

 

[IOS] Getting back to the actual phish content, the phish messaging content: in the paper I saw, you showed the actual template you used. And it looked — I mean, as we all get lots of spam – to my eyes and I think a lot of people’s eyes, it just looked like really obvious spam. Yet, you achieved very respectable click rates, and I think for Facebook, you got a very high rate – almost, was it 40%? – of people clicking what looked like junk mail!

[ZB] We had a bare IP address in the link, which should have alerted some people. I think it actually alerted some who didn’t click. But, yes, depending on the formulation of the message, we had 20% to over 50% of email users clicking.

And independently of the formulation of the message, we had around 40% of Facebook users clicking. So in all cases, it’s enough, for example, to get a company infected with malware!

50% Clicked on Emails

[IOS] That is surprising! But then you also learned by surveying them, the reasons they were clicking. And I was wondering if you can share some of those, some of the results you found?

[ZB] So the reasons. The most important or most frequently stated reason for clicking was curiosity. People were amused that the message was not addressed to them, but they were interested in the pictures.

And the next most frequently stated reason was that the message actually was plausible because people actually went to a party last week, and there were people there that they did not know. And so they decided that it’s quite plausible to receive such a message.

 

[IOS] However, it was kind of a very generic looking message. So it’s a little hard to believe, to me, that they thought it somehow related to them!

[ZB] We should always consider the target audience. And this was students, and students communicate informally. Quite often, people have friends and don’t even know their last names. And of course, I wouldn’t send… if I was sending such a phishing email to, say, employees of a company, or to the general population, I wouldn’t formulate it like this. So our targeting actually worked quite well.

 

[IOS] So it was almost intentional that it looked…it was intentional that it looked informal, like something that a college student might send to another one. “Hey, I saw you at a party.” Now, I forget, was the name of the person receiving the email mentioned in the content or not? Or did it just say, “Hey”?

[ZB] We had actually two waves of the experiment. In the first wave, we mentioned people’s names, and we got over 50% of email recipients to click. And this was very surprising for us, because we actually expected that on Facebook, people would click more, just because people share pictures on Facebook, and it’s easier to find a person on Facebook, or they know, okay, it is a student and, say, her first name is Sabrina or whatever.

And so we were absolutely surprised to learn that over 50% of email recipients clicked in the first wave of the experiment! And we thought, “Okay, why could this be?” And we decided that maybe it was because we addressed people by their first names. So it was like, “Hey, Anna.”

And so we decided to have the second wave of the experiment where we did not address people by their first names, but just said, “Hey.” And so we got the same, or almost the same, clicking rate on Facebook. But a much lower clicking rate on email.

 

[IOS] And I think you had an explanation for that, or a theory about why that may be, why the rates were similar [for Facebook]?

[ZB] Yeah. So on Facebook, it seems that it doesn’t matter if people are addressed by name. Because as I said, the names of people on Facebook are very salient. So when you are looking up somebody, you can see their names.

But if somebody knows my email address and knows my name, it might seem to some people … more plausible. But this is just… we actually didn’t have any people explaining this in the messages. Also, we got a couple of email recipients saying, “Yeah, well, we didn’t click that. Well, it didn’t address me by name, so it looked like spam to me.”

So actually … names in emails seem to be important, even if at our university, email addresses consist of first name, dot, last name, at the university domain.

 

[IOS] I thought you also suggested that because Facebook is a community, that there’s sort of a higher level of trust in Facebook than in just getting an email. Or am I misreading that?

[ZB] Well, it might be. It might be like this. But we did not check for this. And actually, there is other research. So some other people did some research on how much people trust Facebook and Facebook members. And yeah, people differ quite a lot, and I think that people use Facebook not because they particularly trust it, but because it’s very convenient and very helpful for them.

Curiosity and Good Moods

[IOS] Okay. And so what do you make of this curiosity as a first reason for clicking?

[ZB] Well, first of all, we were surprised how honestly people answered. Saying, “Oh, I was curious about pictures of unknown people and an unknown party” reflects a negative personality trait, yeah? So it was very good that we had an anonymous questionnaire. Maybe it made people, you know, answer more honestly. And I think that curiosity, in this case, was kind of negative, a negative personality trait.

But actually, if you think about it, it’s a very positive personality trait. Because curiosity and interest motivate us, for example, to study and to get a good job, and to be good at our job. And they are also directly connected to creativity and interaction.

 

[IOS] But on the other hand, curiosity can have some bad results. I think you also mentioned that even for those who were security aware, it didn’t really make a difference.

[ZB] Well, in the questionnaire, before we revealed the experiment and asked them whether they clicked or not, we asked them a couple of questions related to security awareness, like, “Can one be infected by a virus if one clicks on an attachment in an email, or on a link?”

And when we tried to statistically correlate the answers to these questions with people’s reports on whether they clicked or not, we didn’t find any correlation.

So this result is preliminary, yeah. We can’t say with certainty, but it seems like awareness doesn’t help a lot. And again, I have a hypothesis about this, but no proof so far.

[IOS] And what is that? What is your theory?

[ZB] My theory is that people can’t be vigilant all the time. And psychological research actually showed that interaction, creativity, and good mood are connected to increased gullibility.

And on the other hand, the same line of research showed that vigilance, suspicion, and an analytical approach to solving problems are connected to bad mood and increased effort. So if we apply this, it means that being constantly vigilant is connected to being in a bad mood, which we don’t want!

And which is also not good for the atmosphere, for example, in a firm. And with increased effort, which means that we just tire. And at some point, we have to relax. And if the message arrives at this time, it’s quite plausible for everybody, and I mean really for everybody, including me, you, and every security expert in the world, to click on something!

 

[IOS] It also has some implications for hackers, I suppose. If they know that a company just went IPO … or everyone in the group got raises, then you start phishing them and sort of leverage off their good moods!

Be Prepared: Secondary Defenses

[IOS] What would you suggest to an IT Security Group using this research in terms of improving security in the company?

[ZB] Well, I would suggest firstly to, you know, make sure that they understand the users and humans on the whole, yeah? We security people tend to consider users as, you know, a nuisance, like, ‘Okay, they’re always doing the wrong things.’

Actually, we as security experts should protect people! And if the employees in the company were not there, then we wouldn’t have our job, yeah?

So what is important is to let humans be humans … with all their positive but also negative characteristics. And something like curiosity, for example, can be both.

And then turn to technical defenses, I would say. Because to infect a company, one click is enough, yeah? And one should just assume that it will happen, because of all these things I was saying, even if people are security aware.

The question is, what happens after the click?

And there are not many examples of, you know, companies telling how they mitigated such things. So the only one I was able to find was the [inaudible] security incident in 2011. I don’t know if you remember. They were hacked and had to change, actually to exchange, all the security tokens.

And they, at least they published a part of what happened. And yeah, that was a very tiny phishing wave that maybe reached around 10 employees, and only one of them clicked. So they got infected, but they noticed it, they say, quite quickly because of other security measures.

I would say that that’s what one should actually expect, and that’s the best outcome one can hope for. Yes, if one notices in time.

 

[IOS] I agree that IT should be aware that this will happen, that the hackers will get in, and that you should have some secondary defenses. But I was also wondering, does it also suggest that perhaps some people should not have access to email?

I mean … does this lead to a test … and if some employees are just, you know, a little too curious, you just think, “You know what, maybe we take the email away from you for a while?”

[ZB] Well, you know, you can. I mean, a company can try this if they can sustain the business consequences of it, yeah? So if people don’t have email, then maybe some business processes will become less efficient, and also employees might become disgruntled, which is also not good.

I would suggest that … I think that it’s not going to work! And at least it’s not a good trade-off. It might work, but it’s not a good trade-off because, you know, if you implement a security measure that impairs business processes, it makes people dissatisfied!

Then you have to count in the consequences.

 

[IOS] I agree with you that the best defense, I think, is awareness really, and then taking other steps. I wanted to ask you one or two more questions.

One of them is about what they call whale phishing, or spear phishing is perhaps another way to say it, which is going after not just any employee, but usually high-level executives.

And at least from some anecdotes I’ve heard, executives are also prone to clicking on spam just like anybody else, but your research also suggests that the more context you provide, the more likely you are to get these executives to click.

[ZB] Okay, so if you have more context, of course you can make the email more plausible, and of course if you are targeting a particular person, there are a lot of possibilities to get information about them, especially if it’s somebody well-known like an executive of a company.

And I think that there are also some personality traits of executives that might make them more likely to click. Because, you know, they didn’t get their positions by being especially cautious, not taking risks, and saying safety first!

I think that executives may be even more risk-taking than, you know, the average employee, and more sure of themselves, and this might make the problem even more difficult. They also may not like being told by anybody about any aspect of their behavior.

IoT and Inferred Preferences

[IOS] I have one more question, since it’s so interesting that you also do research on IoT privacy and security. Over in the EU, we know that the new General Data Protection Regulation, which I guess is going to take effect in another year, actually has a very broad definition of what sensitive data is. I’m wondering if you can just talk about some of the implications of this?

[ZB] Well, of course, with IoT data, everything that is collected in our environment about us can be used to infer our preferences with quite good precision.

So… for example, we had an experiment where we were able, just from room climate data, so from temperature and air humidity, to determine if a person is, you know, standing or sitting. And this kind of data, of course, can be used to target messages even more precisely.

So, for example, if you can infer a person’s mood, and if you buy from the psychological research that people in good moods are more likely to click, you might try to target people in a better mood, yeah? Through the IoT data available to you, or through IoT data available to you through the company that you hacked.

Yeah … the point is, you know, that targeting already works very well. You just need to know the name of the person and maybe the company this person is dealing with!

 

[IOS] Zinaida, this was a very fascinating conversation, and it really has a lot of implications for how IT security goes about their job. So I’d like to thank you for joining us on this podcast!

[ZB] You’re welcome. Thank you for inviting me!

[Podcast] Dr. Zinaida Benenson and Secondary Defenses




Dr. Zinaida Benenson is a researcher at the University of Erlangen-Nuremberg, where she heads the “Human Factors in Security and Privacy” group. She and her colleagues conducted a fascinating study into our spam clicking habits. Those of you who attended Black Hat last year may have heard her presentation on How to Make People Click on a Dangerous Link Despite their Security Awareness.

In the second part of our interview, Benenson tells us that phishing is almost inevitable in organizations — and that includes executives being phished! The more important security goal for IT groups is to have secondary defense and incident response programs in place. Benenson also warns us about the potential uses of IoT data by hackers.

Click on the above podcast to hear more about Zinaida’s research and security insights.


[Podcast] Dr. Zinaida Benenson and the Human Urge to Click




Dr. Zinaida Benenson is a researcher at the University of Erlangen-Nuremberg, where she heads the “Human Factors in Security and Privacy” group. She and her colleagues conducted a fascinating study into our spam clicking habits. Those of you who attended Black Hat last year may have heard her presentation on How to Make People Click on a Dangerous Link Despite their Security Awareness.

As we've already pointed out on the IOS blog, phishing is a topic worthy of serious research. Her own clever study adds valuable new insights. Benenson conducted an experiment in which she phished college students (ethically, but without their direct knowledge) and then asked them why they clicked.

In the first part of our interview with Benenson, we discuss how she collected her results, and why curiosity seems to override security concerns when dealing with phish mail. We learned from Benenson that hackers take advantage of our inherent curiosity. And this curiosity about others can override the analytic security-aware part of our brain when we’re in a good mood!

So feel free to (safely) click on the above podcast to hear the interview.


[Podcast] Adam Tanner on the Dark Market in Medical Data, Transcript



Adam Tanner, author of Our Bodies, Our Data, has shed light on the dark market in medical data. In my interview with Adam, I learned that our medical records, principally drug transactions, are sold to medical data brokers who then resell this information to drug companies. How can this be legal under HIPAA without patient consent?

Adam explains that if the data is anonymized then it no longer falls under HIPAA’s rules. However, the prescribing doctor’s name is still left on the record that is sold to brokers.

As readers of this blog know, bits of information related to location, like the doctor’s name, don’t truly anonymize a record and can act as quasi-identifiers when associated with other data.

My paranoia was certainly in the red zone during this interview, and we explored what would happen if hackers or others could connect the dots. Some of the possibilities were a little unsettling.

Adam believes that by writing this book, he can raise awareness about this hidden medical data market. He also believes that consumers should be given a choice — since it's really their data — about whether to release the "anonymized" HIPAA records to third parties.

 

Inside Out Security: Today, I’d like to welcome Adam Tanner. Adam is a writer-in-residence at Harvard University’s Institute for Quantitative Social Science. He’s written extensively on data privacy. He’s the author of What Stays In Vegas: The World of Personal Data and the End of Privacy As We Know It. His articles on data privacy have appeared in Scientific American, Forbes, Fortune, and Slate. And he has a new book out, titled “Our Bodies, Our Data,” which focuses on the hidden market in medical data. Welcome, Adam.

Adam Tanner: Well, I’m glad to be with you.

IOS: We’ve also been writing about medical data privacy for our Inside Out Security blog. And we’re familiar with how, for example, hospital discharge records can be legally sold to the private sector.

But in your new book, and this is a bit of a shock to me, you describe how pharmacies and others sell prescription drug records to data brokers. Can you tell us more about the story you’ve uncovered?

AT: Basically, throughout your journey as a patient into the healthcare system, information about you is sold. It has nothing to do with your direct treatment. It has to do with commercial businesses wanting to gain insight about you and your doctor, largely, for sales and marketing.

So, take the first step. You go to your doctor’s office. The door is shut. You tell your doctor your intimate medical problems. The information that is entered into the doctor’s electronic health system may be sold, commercially, as may the prescription that you pick up at the pharmacy or the blood tests that you take or the urine tests at the testing lab. The insurance company that pays for all of this or subsidizes part of this, may also sell the information.

That information about you is anonymized.  That means that your information contains your medical condition, your date of birth, your doctor’s name, your gender, all or part of your postal zip code, but it doesn’t have your name on it.

All of that trade is allowed, under U.S. rules.

IOS: You mean under HIPAA?

AT: That’s right. Now this may be surprising to many people who would ask this question, “How can this be legal under current rules?” Well, HIPAA says that if you take out the name and anonymize according to certain standards, it’s no longer your data. You will no longer have any say over what happens to it. You don’t have to consent to the trade of it. Outsiders can do whatever they want with that.

I think a lot of people would be surprised to learn that. Very few patients know about it. Even doctors and pharmacists and others who are in the system don’t know that there’s this multi-billion-dollar trade.

IOS: Right … we've written about the de-identification process, which seems like the right thing to do, in a way, because you're removing all the identifiers, and that includes zip code information and other geo information. It seems that for research purposes that would be okay. Do you agree with that, or not?

AT: So, these commercial companies, and some of the names may be well-known to us, companies such as IBM Watson Health, GE, LexisNexis, and the largest of them all may not be well-known to the general public, which is Quintiles and IMS. These companies have dossiers on hundreds of millions of patients worldwide. That means that they have medical information about you that extends over time, different procedures you’ve had done, different visits, different tests and so on, put together in a file that goes back for years.

Now, when you have that much information, even if it only has your date of birth, your doctor’s name, your zip code, but not your name, not your Social Security number, not things like that, it’s increasingly possible to identify people from that. Let me give you an example.

I’m talking to you now from Fairbanks, Alaska, where I’m teaching for a year at the university here. I lived, before that, in Boston, Massachusetts, and before that, in Belgrade, Serbia. I may be the only man of my age who meets that specific profile!

So, if you knew those three pieces of information about me and had medical information from those years, I might be identifiable, even in a haystack of millions of different other people.
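Tanner's example shows how little it takes to single someone out. Here's a small, hedged sketch in Python of the same idea: the records, field names, and values below are invented, and it simply shows how a few quasi-identifiers can whittle a nameless dataset down to one candidate.

```python
# Hypothetical sketch of re-identification via quasi-identifiers.
# The "anonymized" records have no names, yet a few known facts about a
# person can still single their record out. All data here is invented.

anonymized_records = [
    {"birth_year": 1964, "cities": ["Fairbanks", "Boston", "Belgrade"], "doctor": "Dr. A"},
    {"birth_year": 1964, "cities": ["Boston", "Chicago"],               "doctor": "Dr. B"},
    {"birth_year": 1980, "cities": ["Fairbanks", "Anchorage"],          "doctor": "Dr. C"},
]

def matches(record, known_birth_year, known_cities):
    """A record is a candidate if the birth year matches and every city we
    know the target lived in appears in the record's residence history."""
    return (record["birth_year"] == known_birth_year
            and all(city in record["cities"] for city in known_cities))

candidates = [
    r for r in anonymized_records
    if matches(r, known_birth_year=1964,
               known_cities=["Fairbanks", "Boston", "Belgrade"])
]

print(len(candidates))  # -> 1: the "anonymous" record is effectively identified
```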

IOS: Yeah … We have written about that as well on the blog. We call these quasi-identifiers. They're not the traditional kind of identifiers, but they're other bits of information, as you pointed out, that can be used to re-identify someone. Usually it's a small subset, but not always. And it would seem this information should be protected in some way as well. So, do you think that the laws are keeping up with this?

AT: HIPAA was written 20 years ago, and the HIPAA rules say that you can freely trade our patient information if it is anonymized to a certain standard. Now, the technology has gone forward, dramatically, since then.

So, the ability to store things very cheaply and the ability to scroll through them are much more sophisticated today than they were when those rules came into effect. For that reason, I think it's a worthwhile time to have a discussion now. Is this the best system? Is this what we want to do?

Interestingly, the system of the free trade in our patient information has evolved because commercial companies have decided this is what they’d want to do. There has not been an open public discussion of what is best for society, what is best for patients, what is best for science, and so on. This is just a system that evolved.

I’m saying, in writing this book, “Our Bodies, Our Data,” that it is maybe worthwhile that we re-examine where we’re at right now and say, “Do we want to have better privacy protection? Do we want to have a different system of contributing to science than we do now?”

IOS: I guess what also surprised me was that you say that pharmacies, for example, can sell the drug records, as long as it’s anonymized. You would think that the drug companies would be against that. It’s sort of leaking out their information to their competitors, in some way. In other words, information goes to the data brokers and then gets resold to the drug companies.

AT: Well, but you have to understand that everybody in what I call this big-data health bazaar is making money off of it. So, a large pharmacy chain, such as CVS or Walgreen’s, they may make tens of millions of dollars in selling copies of these prescriptions to data miners.

Drug companies are particularly interested in buying this information because this information is doctor-identified. It says that Dr. Jones in Pittsburgh prescribes drug A almost all the time, rather than drug B. So, the company that makes drug B may send a sales rep to the doctor and say, “Doctor, here’s some free samples. Let’s go out to lunch. Let me tell you about how great drug B is.”

So, this is because there exist these profiles on individual doctors across the country that are used for sales and marketing, for a very sophisticated kind of targeting.

IOS: So, in an indirect way, the drug companies can learn about the other drug companies’ sales patterns, and then say, “Oh, let me go in there and see if I can take that business away.” Is that sort of the way it’s working?

AT: In essence, yes. The origins of this trade date back to the 1950s. In its first form, these data companies, such as IMS Health, just told companies what drugs sold in what market. Company A has 87% of the market. Their rival has 13% of the market. When medical information began to become digitized in the 1960s and '70s, and ever more since then, there was a new opportunity to trade this data.

So, all of a sudden, insurance companies and middle-men connecting up these companies, and electronic health records providers and others, had a product that they could sell easily, without a lot of work, and data miners were eager to buy this and produce new products for mostly the pharmaceutical companies, but there are other buyers as well.

IOS: I wanted to get back to another point you mentioned: even with anonymized medical records, with all the other information that's out there, you can re-identify people, or at least narrow the pool of people that data could apply to.

What’s even more frightening now is that hackers have been stealing health records like crazy over the last couple of years. So, there’s a whole dark market of hacked medical data that, I guess, if they got into this IMS database, they would have the keys to the kingdom, in a way.

Am I being too paranoid here?

AT: Well, no, you correctly point out that there has been a sharp upswing in hacking into medical records. That can happen into a small, individual practice, or it could happen into a large insurance company.

And in fact, the largest hacking attack of medical records in the last couple of years has been into Anthem Health, which is the Blue Cross Blue Shield company. Almost 80 million records were hacked in that.

So even people that didn't… I was hacked in that, even though I was not, at the time, a customer of theirs and had never been a customer of theirs. One company that I dealt with outsourced to someone else, who outsourced to them. So, all of a sudden, this information can be in circulation.

There’s a government website people can look at, and you’ll see, every day or two, there are new hackings. Sometimes it involves a few thousand names and an obscure local clinic. Sometimes it’ll be a major company, such as a lab test company, and millions of names could be impacted.

So, this is something definitely to be concerned about. Yes, you could take these hacked records and match them with anonymized records to try to figure out who people are, but I should point out that there is no recorded instance of hackers getting into these anonymized dossiers by the big data miners.

IOS: Right. We hope so!

AT: I say recorded or acknowledged instance.

IOS: Right. Right. But there’s now been sort of an awareness of cyber gangs and cyber terrorism and then the use of, let’s say, records for blackmail purposes.

I don’t want to get too paranoid here, but it seems like there’s just a potential for just a lot of bad possibilities. Almost frightening possibilities with all this potential data out there.

AT: Well, we have heard recently about rumors of an alleged dossier involving Donald Trump and Russia.

IOS: Exactly.

AT: And information that… If you think about what kind of information could be most damaging or harmful to someone, it could be financial information. It could be sexual information, or it could be health information.

IOS: Yeah, or someone using… or has a prescription to a certain drug of some sort. I’m not suggesting anything, but that… All that information together could have sort of lots of implications, just, you know, political implications, let’s say.

AT: I mean if you know that someone takes a drug that’s commonly used for a mental health problem, that could be information used against someone. It could be used to deny them life insurance. It could be used to deny them a promotion or a job offer. It could be used by rivals in different ways to humiliate people. So, this medical information is quite powerful.

One person who has experienced this and spoken publicly about it is the actor, Charlie Sheen. He tested positive for HIV. Others somehow learned of it and blackmailed him. He said he paid millions of dollars to keep that information from going public, before he decided finally that he would stop paying it, and he’d have to tell the world about his medical condition.

IOS: Actually I was not aware of the payments he was making. That’s just astonishing. So, is there any hope here? Do you see some remedies, through maybe regulations or enforcement of existing laws? Or perhaps we need new laws?

AT: As I mentioned, the current rules, HIPAA, allow for the free trade of your data if it's anonymized. Now, I think, given the growing sophistication of computing, we should change the rule and define our medical data as any medical information about us, whether or not it's anonymized.

So, if a doctor is writing in the electronic health record, you should have a say as to whether or not that information is going to be used elsewhere.

A little side point I should mention. There are a lot of good scientists and researchers who want data to see if they can gain insights into disease and new medications. I think people should have the choice whether or not they want to contribute to those efforts.

So, you know, there’s a lot of good efforts. There’s a government effort under way now to gather a million DNA samples from people to make available to science. So, if people want to participate in that, and they think that’s good work, they should definitely be encouraged to do so, but I think they should have the say and decide for themselves.

And so far, we don’t really have that system. So, by redefining what patient data is, to say, “Medical information about a patient, whether or not it’s anonymized,” I think that would give us the power to do that.

IOS: So effectively, you're saying the patient owns the data and would have to give consent for it to be used. Is that about right?

AT: I think so. But on the other hand, as I mentioned, I’ve written this book to encourage this discussion. The problem we have right now is that the trade is so opaque.

Companies are extremely reluctant to talk about this commercial trade. So, they do occasionally say that, “Oh, this is great for science and for medicine, and all of these great things will happen.” Well, if that is so fantastic, let’s have this discussion where everyone will say, “All right. Here’s how we use the data. Here’s how we share it. Here’s how we sell it.”

Then let people in on it and decide whether they really want that system or not. But it’s hard to have that intelligent policy discussion, what’s best for the whole country, if industry has decided for itself how to proceed without involving others.

IOS: Well, I'm so glad you've written this book. This, I'm hoping, will promote the discussion that you're talking about. This has been great. I want to thank you for the interview. By the way, where can our listeners reach you on social media? Do you have a handle on Twitter or Facebook?

AT: Well, I’m @datacurtain  and I have a webpage, which is http://adamtanner.news/

IOS: Wonderful. Thank you very much, Adam.

[Podcast] Adam Tanner on the Dark Market in Medical Data, Part II





More Adam Tanner! In this second part of my interview with the author of Our Bodies, Our Data, we start exploring the implications of having massive amounts of online medical  data. There’s much to worry about.

With hackers already good at stealing health insurance records, is it only a matter of time before they get into the databases of the drug prescription data brokers?

My data privacy paranoia about all this came out in full force during the interview. Thankfully, Adam was able to calm me down, but there’s still potential for frightening possibilities, including political blackmail.

Is the answer more regulations for drug data? Listen to the rest of the interview below to find out, and follow Adam on Twitter, @datacurtain, to keep up to date.


[Podcast] Adam Tanner on the Dark Market in Medical Data, Part I





In our writing about HIPAA and medical data, we’ve also covered a few of the gray areas of medical privacy, including  wearables, Facebook, and hospital discharge records. I thought both Cindy and I knew all the loopholes. And then I talked to writer Adam Tanner about his new book Our Bodies, Our Data: How Companies Make Billions Selling Our Medical Records.

In the first part of my interview with Tanner, I learned how pharmacies sell our prescription drug transactions to medical data brokers, who then resell it to pharmaceutical companies and others. This is a billion dollar market that remains unknown to the public.

How can this be legal under HIPAA, and why doesn’t it require patient consent?

It turns out after the data record is anonymized, but with the doctor’s name still attached, it’s no longer yours!  Listen in as we learn more from Tanner in this first podcast.


[Podcast] More Dr. Ann Cavoukian: GDPR and Access Control




We continue our discussion with Dr. Ann Cavoukian. She is currently Executive Director of Ryerson University’s Privacy and Big Data Institute and is best known for her leadership in the development of Privacy by Design (PbD).

In this segment, Cavoukian tells us that once you’ve involved your customers in the decision making process, “You won’t believe the buy-in you will get under those conditions because then you’ve established trust and that you’re serious about their privacy.”

We also made time to cover GDPR as well as three things organizations can do to demonstrate that they are serious about privacy.


Transcript

Cindy Ng: Dr. Cavoukian, besides data minimization, de-identification, and user access control, what are some other concrete steps that businesses can take to benefit from protecting privacy?

Dr. Cavoukian: I think one of the things businesses don’t do very well is involve their customers in the decisions that they make, and I’ll give you an example. Years ago I read something called “Permission Based Marketing” by Seth Godin, and he’s amazing. And I read it, and I thought, “Oh this guy must have a privacy background,” because it was all about enlisting the support of your customers, gaining their permission and getting them to, as Godin said, “Put their hand up and say ‘count me in.'” So I called him, he was based in California at the time, and I said, “Oh Mr. Godin, you must have a privacy background?” And he said something like, “No, lady, I’m a marketer through and through, but I can see the writing on the wall. We’ve gotta engage customers, get them involved, get them to wanna participate in the things we’re doing.”

So, I always tell businesses that are serious about privacy, “First of all, don’t be quiet about it. Shout it from the rooftops, the lengths you’re going to, to protect your customer’s privacy. How much you respect it, how user-centric your programs are, and you’re focused on their needs in delivering.” And, then, once they understand this is the background you’re bringing, and you have great respect for privacy, in that context you say, “We would like you to consider giving us permission to allow it for these additional secondary uses. Here’s how we think it might benefit you, but we won’t do it without your positive consent.” You wouldn’t believe the buy-in you will get under those conditions because then you have established a trusted business relationship. They can see that you’re serious about privacy, and then they say, “Well by all means, if this will help me, in some way, use my information for this additional purpose.” You’ve gotta engage the customers in an active dialog.

Cindy Ng: So ask, and you might receive.

Dr. Cavoukian: Definitely, and you will most likely receive.

Cindy Ng: In sales processes they’re implementing that as well, “Is it okay if I continue to call you, or when can I call you next?” So they’re constantly feeling they’re engaged and part of the process, and it’s so effective.

Dr. Cavoukian: And I love that. Myself, as a customer… I belong to this air miles program, and I love it, because they don’t do anything without my positive consent. And, yet, I benefit because they send me targeted ads and things I’m interested in. And I’m happy to do that, and then I get more points and then it just continues to be a win-win.

Cindy Ng: Did you write anything about user access controls? What are your thoughts on that?

Dr. Cavoukian: We wrote about it in the context of that you’ve gotta have restricted access to those who have… I was gonna say, “Right to know.” Meaning there are some business purpose for which they’re accessing the data. And that can be…when I say, “business purpose,” I mean that broadly, in a hospital. People who are taking care of a patient, in whatever context, it can be in the lab. They go there for testing. Then they go for an MRI, and then they go… So there could be a number of different arms that have legitimate access to the data, because they’ve gotta process it in a variety of different ways. That’s all legitimate, but those people who aren’t taking care of the patient, in some broad manner, should have absolutely complete restricted access to the data. Because that’s when the snooping and the rogue employee…

Cindy Ng: Curiosity.

Dr. Cavoukian: …enter the picture, the curiosity takes you away, and it completely distorts the entire process in terms of the legitimacy of those people who should have access to it, especially in a hospital context, or patient context. You wanna enable easy access for those who have a right to know because they're treating patients. And then the walls should go up for those who are not treating them in any manner. It'd be difficult to do, but it is eminently doable, and you have to do it because that's what patients expect. Patients have no idea that someone might be just, out of curiosity, looking at their file. You've had a breast removed, you had… I mean horrible things happen.

Cindy Ng: Tell us about GDPR and its implications for Privacy by Design.

Dr. Cavoukian: For the first time ever, the EU now has the General Data Protection Regulation. It has the words, the actual words, "Privacy by Design" and "Privacy as the default" in the statute.

Cindy Ng: That’s great.

Dr. Cavoukian: It's a first, it's really huge, but what that means is it will strengthen those laws far beyond the U.S. laws. We talked about privacy as the default. It's the model of positive consent. It's not just looking for the opt-out box. It's gonna really raise the bar, and that might present some problems in dealing with laws in the States.

Cindy Ng: Then there's also the right to be forgotten, and we live in such a globalized world, with people doing business both in the States and in Europe, that it's been complicated.

Dr. Cavoukian: It does get very complicated. What I tell people everywhere that I go to speak is that if you follow the principles of Privacy by Design, which in itself raises the bar dramatically above most legislation, you will virtually be assured of complying with your regulations, whatever jurisdiction you're in, because you're following the highest level of protection. So that's another attractive feature of Privacy by Design: it offers such a high level of protection that you're virtually assured of regulatory compliance, whatever jurisdiction you're in.

And in the U.S., I should say, that the FTC, the Federal Trade Commission, a number of years ago, under Jon Leibowitz, when he was Chair, they made Privacy by Design the first of three best practices that the FTC recommended. And since he’s left, and Chairwoman Edith Ramirez is the Chair, she has also followed Privacy by Design and Security by Design, which are absolutely, interchangeably critical, and they are holding this high bar. So, I urge companies always to follow this to the extent that they can, because it will elevate their standing, both with the regulatory bodies, like the FTC, and with commissioners, and jurisdictions, and the EU, and Australia, and South America, South Africa. There’s something called GPN, the Global Privacy Network, and a lot of the people who participate in these follow these discussions.

Cindy Ng: What are three things that organizations can do in terms of protecting their consumers’ privacy?

Dr. Cavoukian: So, when I go to a company, I speak to the board of directors, their CEO, and their senior executive. And I give them this messaging about, “You’ve gotta be inclusive. You have to have a holistic approach to protecting privacy in your company, and it’s gotta be top down.” If you give the messaging to your frontline folks that you care deeply about your customer’s privacy, you want them to take it seriously, that message will emanate. And, then what happens from there, the more specific messaging is, what you say to people, is you wanna make sure that customers understand their privacy is highly respected by this company. “We go to great lengths to protect your privacy.” You wanna communicate that to them, and then you have to follow up on it. Meaning, “We use your information for the purpose intended that we tell you we’re gonna use it for. We collect it for that purpose. We use it for that purpose.” And then, “Privacy is the default setting. We won’t use it for anything else without your positive consent after that, for secondary uses.”

So that’s the first thing I would do. Second thing I would do is I would have at least quarterly meetings with staff. You need to reinforce this message. It’s gotta be spread across the entire organization. It can’t just be the chief privacy officer who’s communicating this to a few people. You gotta get everyone to buy into this, because you… I was gonna say the lowest. I don’t mean low in terms of category, but the frontline clerk might be low on the totem pole, but they may have the greatest power to breach privacy. So they have to understand, just like the highest senior manager has to understand, how important privacy is and why and how you can protect it. So have these quarterly meetings with your staff. Drive the message home, and it can be as simple as them understanding that this is… You’re gonna get what I call, “privacy payoff.” By protecting your customer’s privacy, it’s gonna yield big returns for your company. It will increase customer confidence and enhance customer trust, and that will increase our bottom line.

And the third thing, I know this is gonna sound a little pompous, but I would invite, and only because this happened to me, I've been invited in to speak to a company, like, once a year. And you invite everybody, from top to bottom. You open it up and… People need to have these ideas reinforced. It has to be made real. "Is this really a problem?" So, you bring in a speaker. I'm using myself as an example because I've done it, but it can be anybody who can speak to what happens when you don't protect your customer's privacy. It really helps for people inside a company, especially those doing a good job, to understand what can happen when you don't do it right and what the consequences are to both the company and to employees. They're huge. You can lose your jobs. The company could go under. You could be facing class action lawsuits.

And I find that it’s not all a bad news story. I give the bad news, what’s happening out there and what can happen, and then I applaud the behavior of the companies. And what they get is this dual message of, “Oh my God, this is real. This has real consequences when we fail to protect customer’s privacy, but look at the gains we have, look at the payoff in doing so.” And it makes them feel really good about themselves and the job that they’re doing, and it underscores the importance of protecting customer’s privacy.

[Podcast] Dr. Ann Cavoukian on Privacy By Design




I recently had the chance to speak with former Ontario Information and Privacy Commissioner Dr. Ann Cavoukian about big data and privacy. Dr. Cavoukian is currently Executive Director of Ryerson University’s Privacy and Big Data Institute and is best known for her leadership in the development of Privacy by Design (PbD).

What’s more, she came up with PbD language that made its way into the GDPR, which will go into effect in 2018. First developed in the 1990s, PbD addresses the growing privacy concerns brought upon by big data and IoT devices.

Many worry about PbD’s interference with innovation and businesses, but that’s not the case.

When working with government agencies and organizations, Dr. Cavoukian’s singular approach is that big data and privacy can operate together seamlessly. At the core, her message is this: you can simultaneously collect data and protect customer privacy.

Transcript

Cindy Ng
With Privacy by Design principles codified in the new General Data Protection Regulation, which will go into effect in 2018, it might help to understand the intent and origins of it. And that’s why I called former Ontario Information and Privacy Commissioner, Dr. Ann Cavoukian. She is currently Executive Director of Ryerson University’s Privacy and Big Data Institute and is best known for her leadership in the development of Privacy by Design. When working with government agencies and organizations, Dr. Cavoukian’s singular approach is that big data and privacy can operate together seamlessly. At the core, her message is this, you can simultaneously collect data and protect customer privacy.

Thank you, Dr. Cavoukian for joining us today. I was wondering, as Information and Privacy Commissioner of Ontario, what did you see what was effective when convincing organizations and government agencies to treat people’s private data carefully?

Dr. Cavoukian
The approach I took…I always think that the carrot is better than the stick, and I did have order-making power as Commissioner. So I had the authority to order government organizations, for example, who were in breach of the Privacy Act to do something, to change what they were doing and tell them what to do. But the problem…whenever you have to order someone to do something, they will do it because they are required to by law, but they're not gonna be happy about it, and it is unlikely to change their behavior after that particular change that you've ordered. So, I always led with the carrot in terms of meeting with them, trying to explain why it was in both their best interest, in citizens' best interest, in customers' best interest, when I'm talking to businesses. Why it's very, very important to make it…I always talk about positive sum, not zero sum, make it a win-win proposition. It's gotta be a win for both the organization who's doing the data collection and the data use and the customers or citizens that they're serving. It's gotta be a win for both parties, and when you can present it that way, it gives you a seat at the table every time.

And let me explain what I mean by that. Many years ago I was asked to join the board of the European Biometrics Forum, and I was honored, of course, but I was surprised because in Europe they have more privacy commissioners than anywhere else in the world. Hundreds of them, they're brilliant. They're wonderful, and I said, "Why are you coming to me as opposed to one of your own?" And they said, "It's simple." They said, "You don't say 'no' to biometrics. You say 'yes' to biometrics, and 'Here are the privacy protective measures that I insist you put on them.'" They said, "We may not like how much you want us to do, but we can try to accommodate that. But what we can't accommodate is if someone says, 'We don't like your industry.'" You know, basically to say "no" to the entire industry is untenable.

So, when you go in with an "and" instead of a "versus," it's not me versus your interests. It's my interests in privacy and your interests in the business or the government, whatever you're doing. So, zero sum paradigms are one interest versus another. You can only have security at the expense of privacy, for example. In my world, that doesn't cut it.
Cindy Ng
Dr. Cavoukian, can you tell us a little bit more about Privacy by Design?
Dr. Cavoukian
I really crystallized Privacy by Design really after 9/11, because at 9/11 it became crystal clear that everybody was talking about the vital need for public safety and security, of course. But it was always construed as at the expense of privacy, so if you have to give up your privacy, so be it. Public safety’s more important. Well, of course public safety is extremely important, and we did a position piece at that point for our national newspaper, “The Globe and Mail,” and the position I took was public safety is paramount with privacy embedded into the process. You have to have both. There’s no point in just having public safety without privacy. Privacy forms the basis of our freedoms. You wanna live in free democratic society, you have to be able to have moments of reserve and reflection and intimacy and solitude. You have to be able to do that.
Cindy Ng
Data minimization is important, but what do you think about companies that do collect everything with hopes that they might use it in the future?
Dr. Cavoukian
See, what they're asking for, they're asking for trouble, because I can bet you dollars to doughnuts that's gonna come back to bite you. Because, especially with data that you're not clear about what you're gonna do with, you've got data sitting there. What data in identifiable form does is attract hackers. It attracts rogue employees on the inside who will make inappropriate use of the data, sell the data, do something with the data. It just…you're asking for trouble, because keeping data in identifiable form, once the uses have been addressed, just begs trouble. I always tell people, if you wanna keep the data, keep the data, but de-identify it. Strip the personal identifiers, make sure you have the data aggregated, de-identified, encrypted, something that protects it from this kind of rogue activity.

And you've been reading lately all about the hackers who are in, I think they were in the IRS for God's sakes, and they're getting in everywhere here in my country. They're getting into so many databases, and it's not only appalling in terms of the data loss, it's embarrassing for the government departments who are supposed to be protecting this data. And it fuels even additional distrust on the part of the public. So I would say to companies, "Do yourself a huge favor. If you don't need the data, don't keep it in identifiable form. You can keep it in aggregate form. You can encrypt it. You can do lots of things. Do not keep it in identifiable form where it can be accessed in an unauthorized manner, especially if it's sensitive data."

Oh my god, health data…Rogue employees, we have a rash of it here, and it's just curiosity, it's ridiculous. The damage is huge for patients, and I can tell you, I've been a patient in hospitals many times. The thought that anyone else is accessing my data…it's so personal and so sensitive. So when I speak this way to boards of directors and senior executives, they get it. They don't want the trouble, and I haven't even talked about costs. Once these data breaches happen these days, it's not just lawsuits, it's class action lawsuits that are initiated. It's huge, and then the damage to your reputation, the damage to your brand, can be irreparable.
Cindy Ng
Right. Yeah, I remember Meg Whitman said something about how it takes years and years to build your brand and reputation and only seconds to ruin it.
Dr. Cavoukian
Yeah, yes. That is so true. There’s a great book called “The Reputation Economy” by Michael Fertik. He’s the CEO of reputation.com. It’s fabulous. You’d love it. It’s all about exactly how long it takes to build your reputation, how dear it is and how you should cherish it and go to great lengths to protect it.
Cindy Ng
Can you speak about data ownership?
Dr. Cavoukian
You may have custody and control over a lot of data, your customer’s data, but you don’t own that data. And with that custody and control comes an enormous duty of care. You gotta protect that data, restrict your use of the data to what you’ve identified to the customer, and then if you wanna use it for additional purposes, then you’ve gotta go back to the customer and get their consent for secondary uses of the data. Now, that rarely happens, I know that. In Privacy by Design, one of the principles talks about privacy as the default setting. The reason you want privacy to be the default setting…what that means is if a company has privacy as the default setting, it means that they can say to their customers, “We can give you privacy assurance from the get-go. We’re collecting your information for this purpose,” so they identify the purpose of the data collection. “We’re only gonna use it for that purpose, and unless you give us specific consent to use it for additional purposes, the default is we won’t be able to use it for anything else.” It’s a model of positive consent, it gives privacy assurance, and it gives enormous, enormous trust and consumer confidence in terms of companies that do this. I would say to companies, “Do this, because it’ll give you a competitive advantage over the other guys.”

As you know, because you sent it to me, the Pew Research Center, their latest study on Americans’ attitudes, you can see how high the numbers are, in the 90 percents. People have had it. They want control. This is not a single study. There have been multiple surveys that have come out in the last few months like this. Ninety percent of the public, they don’t trust the government or businesses or anyone. They feel they don’t have control. They want privacy. They don’t have it, so you have, ever since, actually, Edward Snowden, you have the highest level of distrust on the part of the public and the lowest levels of consumer confidence. So, how do we change that? So, when I talk to businesses, I say, “You change that by telling your customers you are giving them privacy. They don’t even have to ask for it. You are embedding it as the default setting which means it comes part and parcel of the system.” They’re getting it. I do what I call my neighbors test. I explain these terms to my neighbors who are very bright people, but they’re not in the privacy field. So, when I was explaining this to my neighbor across the street, Pat, she said, “You mean, if privacy’s the default, I get privacy for free? I don’t have to figure out how to ask for it?” And I said, “Yes.” She said, “That’s what I want. Sign me up!”

See, people want to be given privacy assurance without having to go to the lengths they have to go to now to find the privacy policy, search through the terms of service, find the checkout box. I mean, it’s so full of legalese. It’s impossible for people to do this. They wanna be given privacy assurance as the default. That’s your biggest bet if you’re a private-sector company. You will gain such a competitive advantage. You will build the trust of your customers, and you will have enormous loyalty, and you will attract new opportunity.

Cindy Ng
What are your Privacy by Design recommendations for wearables and IoT innovators and developers?
Dr. Cavoukian
The internet of things, wearable devices, and new app developers and startups…they are clueless about privacy, and I'm not trying to be disrespectful. They're working hard, say an app developer, they're working hard to build their app. They're focused on the app. That's all they're thinking about, how to deliver what the app's supposed to deliver on. And then you say, "What about privacy?" And they say, "Oh, don't worry about it. We've got it taken care of. You know, the third-party security vendor's gonna do it. We got that covered." They don't have it covered, and what they don't realize is they don't know they don't have it covered. "Give it to the security guys and they're gonna take care of it," and that's the problem.

When I speak to app developers…I was at Tim O'Reilly's Web 2.0 last year or the year before, and there were 800 people in the room, and I was talking about Privacy by Design. I said, "Look, do yourself a favor. Build in privacy. Right now you're just starting your app development, build it in right now at the front end, and then you're gonna be golden. This is the time to do it, and it's easy if you do it up front." I had dozens of people come up to me afterwards because they didn't even know they were supposed to. It had never appeared on their radar. It's not resistance to it. They hadn't thought of it. So our biggest job is educating, especially the young people, the app developers, the brilliant minds. In my experience, it's not that they resist the messaging, it's that they haven't been exposed to the messaging.

Oh, I should just tell you, we started a Privacy by Design certification. We've partnered with Deloitte, and I'll send you the link. We, Ryerson University, where I am housed, are offering this certification for Privacy by Design, and my assessment arm, my audit arm, my partner, is Deloitte. We're partnering together, and we've had a real, just a deluge of interest.
Cindy Ng
So, do you think that’s also why people are also hiring Chief Privacy Officers?
Dr. Cavoukian
Yes.
Cindy Ng
What are some qualities that are required in a Chief Privacy Officer? Is it just a law background?
Dr. Cavoukian
No, in fact, I’m gonna say the opposite, and this is gonna sound like heresy to most people. I love lawyers. Some of my best friends are lawyers. Don’t just restrict your hiring of Chief Privacy Officers to lawyers. The problem with hiring a lawyer is they’re understandably going to bring a legal regulatory compliance approach to it, which, of course, you want that covered. I’m not saying…You have to be in compliance with whatever legislation is in your jurisdiction. But if that’s all you do, it’s not enough. I want you to go farther. When I ask you to do Privacy by Design, it’s all about raising the bar. Doing technical measures such as embedding privacy into the design that you’re offering into the data architecture, embedding privacy as a default setting. That’s not a legalistic term. It’s a policy term. It’s computer science. It’s a… You need a much broader skill set than law alone. So, for example, I’m not a lawyer, and I managed to be Commissioner for three terms. And I certainly valued my legal department, but I didn’t rely on it exclusively. I always went farther, and if you’re a lawyer, the tendency is just to stick to the law. I want you to do more than that. You have to have an understanding of computer science, technology, encryption, how can you… De-identification protocols are critical, combined with the risk of re-identification framework. When you look at the big data world, the internet of things, they’re going to do amazing things with data. Let’s make sure it’s strongly de-identified and resist re-identification attacks.
Cindy Ng
There have been reports that people can re-identify individuals from supposedly anonymized data.
Dr. Cavoukian
That’s right, but if you examine those reports carefully, Cindy, a lot of them are based on studies where the initial de-identification was very weak. They didn’t use strong de-identification protocols. So, like anything, if you start with bad encryption, you’re gonna have easy decryption. So, it’s all about doing it properly at the outset using proper standards. There’s now four standards of de-identification that have all come out that are risk-based, and they’re excellent.
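Cavoukian doesn't name a specific protocol here, but one common, risk-based way to gauge de-identification strength is k-anonymity: every combination of quasi-identifier values should be shared by at least k records. Below is a minimal sketch with invented records; real protocols go further and also weigh measures such as l-diversity and overall re-identification risk.

```python
# Minimal k-anonymity check (illustrative only, with invented data).
# Higher k means any quasi-identifier combination hides in a larger crowd.

from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the size of the smallest group of records that share the same
    quasi-identifier values. A result of 1 means someone is unique."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

records = [
    {"birth_year": 1964, "zip3": "021", "condition": "asthma"},
    {"birth_year": 1964, "zip3": "021", "condition": "diabetes"},
    {"birth_year": 1980, "zip3": "997", "condition": "flu"},
]

print(k_anonymity(records, ["birth_year", "zip3"]))  # -> 1, i.e. weakly de-identified
```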
Cindy Ng
Are you a fan of possibly replacing privacy policies with something simpler, like a nutrition label?
Dr. Cavoukian
It's a very clever idea. They have tried to do that in the past. It's hard to do, and I think your simplest way of doing the nutrition kind of label would be if you did embed privacy as the default setting. Because then you could have a nutrition label that said, "Privacy built in." You know how, I think, Intel had something years ago where you had "security built in" or something. You could say, "Privacy embedded in the system."

[Podcast] Data Privacy Attorney Sheila FitzPatrick on GDPR




We had a unique opportunity in talking with data privacy attorney Sheila FitzPatrick. She lives and breathes data security and is a recognized expert on EU and other international data protection laws. FitzPatrick has direct experience in representing companies in front of EU data protection authorities (DPAs). She also sits on various governmental data privacy advisory boards.

During this first part of the interview with her, we focused on the new General Data Protection Regulation (GDPR), which she says is the biggest overhaul in EU security and privacy rules in twenty years.

One important point FitzPatrick makes is that the GDPR is not only more restrictive than the existing Data Protection Directive — think breach notification and impact assessment rules — but also has far broader coverage.

Cloud computing companies no matter where they are located will be under the GDPR if they are asked to process personal data of EU citizens by their corporate customers. The same goes for companies (or controllers in GDPR-speak) outside the EU who directly collect personal data — think of any US-based e-commerce or social networking company on the web.

Keep all this in mind as you listen to our in-depth discussion with this data privacy and security law professional.

Transcript

Cindy Ng
Sheila FitzPatrick has over 20 years of experience running her own firm as a data protection attorney. She also serves as outside counsel for NetApp as their chief privacy officer, where she provides expertise in global data protection compliance, cyber security regulations, and legal issues associated with cloud computing and big data. In this series, Sheila will be sharing her expertise on GDPR, PCI compliance, and the data security landscape.
Andy Green
Yeah, Sheila. I'm very impressed by your bio and the fact that you've actually dealt with some of these EU data protection authorities that we've been writing about. I know the GDPR will go into effect in 2018, and I'm just wondering what the biggest change is for companies, I guess they're calling them data controllers, in dealing with DPAs under the law. Is there something that comes to mind first?
Sheila FitzPatrick
And thank you for the compliment, by the way. I live and breathe data privacy. This is the stuff I love. GDPR… I mean, it is certainly the biggest overhaul in 20 years when it comes to the introduction of new data privacy regulations. It's much more restrictive than what we've seen in the past. And most companies are struggling because they thought what was previously in place was strict.

There are a couple of things that stick out when it comes to GDPR. When you look at the roles of the data controller versus the data processor, in the past many of the data processors, especially when you talk about third-party outsourcing companies and in particular cloud providers, have pushed sole liability for data compliance down to their customers. Basically, saying: you decide what you're going to put in our environment, you have responsibility for the privacy and security aspects, and we basically accept minimal responsibility. Usually, it's around physical security.

The GDPR now is going to put very comprehensive and very well-defined regulations and obligations in place for data processors as well, saying that they can no longer flow responsibility for privacy compliance down to their customers. Oftentimes, cloud providers will say, "We will comply with the laws in countries where we have our processing centers." And that's not sufficient under the new laws. Because if they have a data processing center, say, in the UK, but they're processing the data of a German citizen or a Canadian citizen or someone from Asia Pacific, Australia, New Zealand, they're now going to have to comply with the laws in those countries as well. They can't just push it down to their customers.

The other part of GDPR that is quite different and it’s one of the first times it’s really going to be put into place is that it doesn’t just apply to companies that have operations within the EU. It is basically any company regardless of where they’re located and regardless of whether or not they have a presence in the EU, if they have access to the personal data of any EU citizen they will have to comply with the regulations under the GDPR. And that’s a significant change. And then the third one being the sanction. And the sanction can be 20,000,000 euro or 4% of your global annual revenue, whichever is higher. That’s a substantial change as well.

Andy Green
Right. So those are some big, big changes. You're referring to, I think, what they call 'territorial scope'? They don't necessarily have to have an office or an establishment in the EU as long as they are collecting data? I mean, we're really referring to social media and web commerce, or e-commerce.
Sheila FitzPatrick
Absolutely, but it's going to apply to any company. So even if, for instance, you say, "Well, we're just a US domestic company," if you have employees in your environment who hold EU citizenship, you will have to protect their data in accordance with GDPR. You can't say, "Well, they're working in the US, therefore US law applies." That's not going to be the case if they know that the individual holds citizenship in the EU.
Andy Green
We’re talking about employees, or…?
Sheila FitzPatrick
Could be employees, absolutely. Employees…
Andy Green
Anybody?
Sheila FitzPatrick
Anybody.
Andy Green
Isn’t that interesting? I mean one question about this expanded territorial scope, is how are they going to enforce this against US companies? Or not just US, but any company that is doing business but doesn’t necessarily have an office or an establishment?
Sheila FitzPatrick
Well it can be… see, what happens under GDPR is any individual can file a complaint with the courts in basically any jurisdiction. They can file it at the EU level. They can file it within the countries where they hold their citizenship. They can file it now with US courts, although the US courts… and part of that is tied to the new Privacy Shield, which is a joke. I mean, I think that will be invalidated fairly quickly. With the whole Redress Act, it does allow EU citizens to file complaints with the US courts to protect their personal data in accordance with EU laws.
Andy Green
So, just to follow through, if I came from the UK into the US and was doing transactions, credit card transactions, my data would be protected under EU law?
Sheila FitzPatrick
Well, if the company knows you’re an EU citizen. They’re not going to necessarily know. So, in some cases if they don’t know, they’re not going to held accountable. But if they absolutely do know then they will have to protect that data in accordance with UK or EU law. Well, not the UK… if Brexit goes through, the EU law won’t matter. The UK data protection act will take precedence.
Andy Green
Wow. You know, it's just really fascinating how data protection and privacy are now so important. Right, with the new GDPR? For everybody, not just EU companies.
Sheila FitzPatrick
Yeah, and it's always been important; it's just that the US has a totally different attitude. I mean, the US has the least restrictive privacy laws in the world. So for individuals who have really never worked or lived outside of the US, the mindset is very much the US mindset, which is that the business takes precedence. Whereas everywhere else in the world, the fundamental right to privacy takes precedence over everything.
Andy Green
We're getting a lot of questions from our customers about the new Breach Notification rule…
Sheila FitzPatrick
Ask me.
Andy Green
…in the GDPR. I was wondering if you could talk about… What are some of the most important things you would do when you discover a breach? I mean, if you could prioritize it in any way. How would you advise a customer about how to have a breach response program in a GDPR context?
Sheila FitzPatrick
Yeah. Well, first and foremost, you do need to have in place, before a breach even occurs, an incident response team that's not made up of just IT, because normally organizations have an IT focus. You need to have a response team that includes IT and your chief privacy officer. And normally a CPO would sit in legal; if he doesn't sit in legal, you want a legal representative in there as well. You need someone from PR or communications who can actually be the public-facing voice for the company. You also need to have someone from Finance and Risk Management who sits on there.

So the first thing to do is to make sure you have that group in place and that it goes into action immediately. Secondly, you need to determine what data has potentially been breached, even if it turns out it hasn't been. Because under GDPR, it's not… previously, the trigger was that there's definitely been a breach that can harm an individual. The definition now is whether it's likely to affect an individual. That's totally different from whether the individual could be harmed. So you need to determine: okay, what data has been breached, and does it impact an individual?

If company-related information was breached, there's a different process you go through. But if individual employee or customer data has been breached, the question is: is it likely to affect the individual? And that's pretty much anything. That's a very broad definition. If someone gets hold of their email address, yes, that could affect them. Someone who is not authorized to email them could email them.

So you have to launch into that investigation right away and then classify the data that has been exposed: for any intrusion into the data, how is that data classified?

Is it personal data?

Is it personal sensitive data?

And then rank it. Is it likely to affect an individual?

Is it likely to impact an individual? Is it likely to harm an individual?

So there could be three levels.

Based on that, what kind of notification is required? If it's likely to affect or impact an individual, you have to let them know. If it's likely to harm an individual, you absolutely have to let them know and notify the data protection authorities as well.
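
To make the triage logic above a little more concrete, here is a minimal Python sketch of the decision flow Sheila describes. It's purely illustrative: the class names and the three severity levels (my shorthand for "likely to affect," "likely to impact," and "likely to harm") are assumptions of mine, not terms defined in the GDPR itself.

# Hypothetical sketch of the breach triage logic described above.
# The names and thresholds are illustrative, not taken from the GDPR text.
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    AFFECT = 1   # likely to affect the individual
    IMPACT = 2   # likely to impact the individual
    HARM = 3     # likely to harm the individual


@dataclass
class BreachedRecord:
    data_class: str      # e.g. "personal" or "sensitive"
    severity: Severity


def notification_plan(record: BreachedRecord) -> list[str]:
    """Return who should be notified for a single breached record."""
    notify = []
    if record.severity in (Severity.AFFECT, Severity.IMPACT):
        notify.append("individual")  # let the data subject know
    if record.severity is Severity.HARM:
        notify.append("individual")
        notify.append("data protection authority")  # the DPA must also be told
    return notify


# Example: a customer's email address is exposed -- the broad "likely to affect" test
print(notification_plan(BreachedRecord("personal", Severity.AFFECT)))
# ['individual']
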

Andy Green
And the DPA, right? So, if I’m a consumer, the threshold is… in other words, if the company’s holding my data, I’m not an employee, the threshold is likely to harm or likely to affect?
Sheila FitzPatrick
Likely to affect.
Andy Green
Affect. Okay. That’s a little more generous in terms of…
Sheila FitzPatrick
Right. Right. And that has changed, so it puts more accountability on a company, because you know a lot of companies have probably had breaches and never reported them. Because they go, "Oh well, there was no Social Security number, national identification number, or financial data. It was just their name and their address and their home phone or cell phone number." And the definition previously has been, well, it can't really harm them, so we don't need to let them know.

And then all of a sudden people’s names show up on these mailing lists. And they’re starting to get this unsolicited marketing. And they can’t determine whether or not… how did they get that? Was it based on a breach or is it based on trolling the Internet and gathering information and a broker selling that information? That’s the other thing. Brokers are going to be impacted by the new GDPR, because in order to sell their lists they have to have explicit consent of the individual to include their name on a list that they’re going to sell to companies.

Andy Green
Alright. Okay. So, it’s quite consumer friendly compared to what we have in the US.
Sheila FitzPatrick
Yes.
Andy Green
Are there new rules about what they call sensitive data? And if you're going to process certain classes of sensitive data, you need approval from the… I think at some point you might need approval from the DPA? You know what I'm referring to? I think it's the…
Sheila FitzPatrick
Yes. Absolutely. I mean, that's always been in place in most of the member states. So if you look at the member states that have the more restrictive data privacy laws, like Germany, France, Italy, Spain, and the Netherlands, they've always had the requirement that you register the data with the data protection authorities. And collecting sensitive data and transferring it outside of the country of origin did require approval.

The difference now is that for any personal data you collect on an individual, whether it's an employee, a customer, or a supplier, you have to obtain unambiguous, freely given, explicit consent. Now, this is any kind of data, and that includes sensitive data. The one difference with the new law is that there are just a few categories which are truly defined as sensitive data. That's not what we typically think of as sensitive data. We think of, say, birth date, maybe gender. That information is certainly considered personal data under EU law and everywhere else in the world, so it has to be treated with a high degree of privacy. But the categories of political or religious affiliation, medical history, criminal convictions, social issues, and trade union membership are a subset that's considered highly sensitive information in Europe. To collect and transfer that information is now going to require explicit approval not only from the individual but from the DPA, separate from the registrations you have done.
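
As a rough illustration of the distinction between ordinary personal data and those special, highly sensitive categories, here's a short Python sketch. The category list and the approval logic are simplified from Sheila's description; the function and names are hypothetical and not a substitute for the regulation's actual definitions or for legal advice.

# Illustrative only: a rough pre-collection check based on the interview's
# description of the highly sensitive categories under the GDPR.
SPECIAL_CATEGORIES = {
    "political affiliation",
    "religious affiliation",
    "medical history",
    "criminal convictions",
    "trade union membership",
}


def required_approvals(data_category: str) -> list[str]:
    """What consent/approval is needed before collecting and transferring this data?"""
    approvals = ["explicit, freely given consent from the individual"]
    if data_category in SPECIAL_CATEGORIES:
        # Highly sensitive data also needs the data protection authority's approval,
        # separate from any registrations already filed.
        approvals.append("approval from the data protection authority (DPA)")
    return approvals


print(required_approvals("birth date"))
print(required_approvals("trade union membership"))
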

Andy Green
So, I think what I’m referring to is what they call the Impact Assessment.
Sheila FitzPatrick
Privacy Impact Assessments have to be conducted now anytime… and we’ve always… Anytime I’ve worked with any company, I’ve implemented Privacy Impact Assessments. They’re now required under the new GDPR for any collection of any personal data.
Andy Green
But sensitive data… I think they talked about DNA data or bio-related data.
Sheila FitzPatrick
Oh no. What happened under GDPR is that they expanded the definition of personal data. So that's not the sensitive data; that's expanding the definition of personal data to include biometric information, genetic information, and location data. That data was never included under the definition of personal data, because the belief was, well, you can't really tie that back to an individual. They have found out, since the original laws were put in place, that yes, you can indeed tie that back to an individual. So that is now included in the definition.
Andy Green
So it's sort of catching up a little bit with the technology?
Sheila FitzPatrick
Yeah. Exactly. But part of what GDPR did was it went from being a law around the processing of personal data to a law that really moves you into the digital age. It covers anything about tracking or monitoring, or tying different aspects or elements of data together to be able to identify a person. So it's really trying to catch up with new technology.
Andy Green
I have one more question on the GDPR subject. There's some mention in the law about outside bodies that can certify…?
Sheila FitzPatrick
Well, they're talking about having private certifications and privacy codes. Right now, those are not in place. The highest standard you have right now for privacy is what's called Binding Corporate Rules. There are fewer than a hundred companies worldwide that have their Binding Corporate Rules in place. And actually, I've written them for a number of companies; NetApp, for example, has Binding Corporate Rules in place. That is the gold standard. If you have BCRs, you are 90% compliant with GDPR. But the additional certifications they're talking about aren't in place yet.
Andy Green
So, it may be possible to get a certification from some outside body and that would somehow help prove your… I mean, so if an incident happens and the DPA looks into it, having that compliance should help a little bit in terms of any kind of enforcement action?
Sheila FitzPatrick
Yes, it certainly will, once they come up with what those are, unless you have Binding Corporate Rules. But right now… I mean, if you're thinking of something like TRUSTe, no, there is no such certification for this. TRUSTe is a US certification for privacy, but it's not a certification for GDPR.
Andy Green
Alright. Well, thank you so much. It's great to talk to an expert and get some more perspective on this.