Category Archives: Privacy

Interview With Medical Privacy Author Adam Tanner [TRANSCRIPT]

Adam Tanner, author of Our Bodies, Our Data, has shed light on the dark market in medical data. In my interview with Adam, I learned that our medical records, principally drug transactions, are sold to medical data brokers who then resell this information to drug companies. How can this be legal under HIPAA without patient consent?

Adam explains that if the data is anonymized then it no longer falls under HIPAA’s rules. However, the prescribing doctor’s name is still left on the record that is sold to brokers.

As readers of this blog know, bits of information related to location, like the doctor’s name, don’t truly anonymize a record and can act as quasi-identifiers when associated with other data.
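To make that concrete, here is a rough sketch in Python of how such a linkage attack works. Everything in it (the records, the names, the quasi-identifier fields) is invented for illustration, but the mechanics mirror what Adam describes: join an “anonymized” prescription file against outside data that carries names, and a unique match de-anonymizes the record.

# Hypothetical sketch of a linkage attack on "anonymized" prescription records.
# All records, names, and values below are invented for illustration.

anonymized_rx = [
    {"dob": "1965-04-02", "zip3": "021", "doctor": "Dr. Jones", "drug": "Drug A"},
    {"dob": "1971-09-17", "zip3": "021", "doctor": "Dr. Jones", "drug": "Drug B"},
]

# Outside data (voter rolls, social media, marketing lists) that pairs names
# with the same quasi-identifiers the "anonymized" file still carries.
outside_data = [
    {"name": "Alice Example", "dob": "1965-04-02", "zip3": "021", "doctor": "Dr. Jones"},
    {"name": "Bob Sample",    "dob": "1971-09-17", "zip3": "021", "doctor": "Dr. Jones"},
]

def matches(rx_record, reference, keys=("dob", "zip3", "doctor")):
    """Return the names whose quasi-identifiers match this 'anonymous' record."""
    return [p["name"] for p in reference
            if all(p[k] == rx_record[k] for k in keys)]

for rx in anonymized_rx:
    candidates = matches(rx, outside_data)
    if len(candidates) == 1:
        # A unique match means the record is no longer effectively anonymous.
        print(f"{candidates[0]} appears to have been prescribed {rx['drug']}")

The more fields a dossier carries, and the longer it stretches over time, the more often that match is unique.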

My paranoia was certainly in the red zone during this interview, and we explored what would happen if hackers or others could connect the dots. Some of the possibilities were a little unsettling.

Adam believes that by writing this book, he can raise awareness about this hidden medical data market. He also believes that consumers should be given a choice — since it’s really their data — about whether to release the “anonymized” HIPAA records to third parties.

 

Inside Out Security: Today, I’d like to welcome Adam Tanner. Adam is a writer-in-residence at Harvard University’s Institute for Quantitative Social Science. He’s written extensively on data privacy. He’s the author of What Stays In Vegas: The World of Personal Data and the End of Privacy As We Know It. His articles on data privacy have appeared in Scientific American, Forbes, Fortune, and Slate. And he has a new book out, titled “Our Bodies, Our Data,” which focuses on the hidden market in medical data. Welcome, Adam.

Adam Tanner: Well, I’m glad to be with you.

IOS: We’ve also been writing about medical data privacy for our Inside Out Security blog. And we’re familiar with how, for example, hospital discharge records can be legally sold to the private sector.

But in your new book, and this is a bit of a shock to me, you describe how pharmacies and others sell prescription drug records to data brokers. Can you tell us more about the story you’ve uncovered?

AT: Basically, throughout your journey as a patient into the healthcare system, information about you is sold. It has nothing to do with your direct treatment. It has to do with commercial businesses wanting to gain insight about you and your doctor, largely, for sales and marketing.

So, take the first step. You go to your doctor’s office. The door is shut. You tell your doctor your intimate medical problems. The information that is entered into the doctor’s electronic health system may be sold, commercially, as may the prescription that you pick up at the pharmacy or the blood tests that you take or the urine tests at the testing lab. The insurance company that pays for all of this or subsidizes part of this, may also sell the information.

That information about you is anonymized.  That means that your information contains your medical condition, your date of birth, your doctor’s name, your gender, all or part of your postal zip code, but it doesn’t have your name on it.

All of that trade is allowed, under U.S. rules.

IOS: You mean under HIPAA?

AT: That’s right. Now this may be surprising to many people who would ask this question, “How can this be legal under current rules?” Well, HIPAA says that if you take out the name and anonymize according to certain standards, it’s no longer your data. You will no longer have any say over what happens to it. You don’t have to consent to the trade of it. Outsiders can do whatever they want with that.

I think a lot of people would be surprised to learn that. Very few patients know about it. Even doctors and pharmacists and others who are in the system don’t know that there’s this multi-billion-dollar trade.

IOS: Right … we’ve written about the de-identification process, which seems like the right thing to do, in a way, because you’re removing all the identifiers, and that includes zip code information and other geo information. It seems that for research purposes that would be okay. Do you agree with that, or not?

AT: So, these commercial companies, and some of the names may be well-known to us, companies such as IBM Watson Health, GE, LexisNexis, and the largest of them all may not be well-known to the general public, which is Quintiles and IMS. These companies have dossiers on hundreds of millions of patients worldwide. That means that they have medical information about you that extends over time, different procedures you’ve had done, different visits, different tests and so on, put together in a file that goes back for years.

Now, when you have that much information, even if it only has your date of birth, your doctor’s name, your zip code, but not your name, not your Social Security number, not things like that, it’s increasingly possible to identify people from that. Let me give you an example.

I’m talking to you now from Fairbanks, Alaska, where I’m teaching for a year at the university here. I lived, before that, in Boston, Massachusetts, and before that, in Belgrade, Serbia. I may be the only man of my age who meets that specific profile!

So, if you knew those three pieces of information about me and had medical information from those years, I might be identifiable, even in a haystack of millions of different other people.

IOS: Yeah … We have written about that as well in the blog. We call these quasi-identifiers. They’re not the traditional kind of identifiers, but they’re other bits of information, as you pointed out, that can be used to re-identify someone. Usually it’s a small subset, but not always. And it would seem this information should be protected in some way as well. So, do you think the laws are keeping up with this?

AT: HIPAA was written 20 years ago, and the HIPAA rules say that you can freely trade our patient information if it is anonymized to a certain standard. Now, the technology has gone forward, dramatically, since then.

So, the ability to store things very cheaply and the ability to scroll through them is much more sophisticated today than it was when those rules came into effect. For that reason, I think it’s a worthwhile time to have a discussion now. Is this the best system? Is this what we want to do?

Interestingly, the system of the free trade in our patient information has evolved because commercial companies have decided this is what they’d want to do. There has not been an open public discussion of what is best for society, what is best for patients, what is best for science, and so on. This is just a system that evolved.

I’m saying, in writing this book, “Our Bodies, Our Data,” that it is maybe worthwhile that we re-examine where we’re at right now and say, “Do we want to have better privacy protection? Do we want to have a different system of contributing to science than we do now?”

IOS: I guess what also surprised me was that you say that pharmacies, for example, can sell the drug records, as long as it’s anonymized. You would think that the drug companies would be against that. It’s sort of leaking out their information to their competitors, in some way. In other words, information goes to the data brokers and then gets resold to the drug companies.

AT: Well, but you have to understand that everybody in what I call this big-data health bazaar is making money off of it. So, a large pharmacy chain, such as CVS or Walgreens, may make tens of millions of dollars selling copies of these prescriptions to data miners.

Drug companies are particularly interested in buying this information because this information is doctor-identified. It says that Dr. Jones in Pittsburgh prescribes drug A almost all the time, rather than drug B. So, the company that makes drug B may send a sales rep to the doctor and say, “Doctor, here’s some free samples. Let’s go out to lunch. Let me tell you about how great drug B is.”

So, this is because there exist these doctor profiles on individual doctors across the country that are used for sales and marketing, for a very sophisticated kind of targeting.

IOS: So, in an indirect way, the drug companies can learn about the other drug companies’ sales patterns, and then say, “Oh, let me go in there and see if I can take that business away.” Is that sort of the way it’s working?

AT: In essence, yes. The origins of this trade date back to the 1950s. In its first form, these data companies, such as IMS Health, what they did was just telling companies what drugs sold in what market. Company A has 87% of the market. Their rival has 13% of the market. When medical information began to become digitized in the 1960s and ’70s and evermore since then, there was a new opportunity to trade this data.

So, all of a sudden, insurance companies and middle-men connecting up these companies, and electronic health records providers and others, had a product that they could sell easily, without a lot of work, and data miners were eager to buy this and produce new products for mostly the pharmaceutical companies, but there are other buyers as well.

IOS: I wanted to get back to another point you mentioned: even with anonymized medical records, with all the other information that’s out there, you can re-identify or at least narrow down the pool of people that data would apply to.

What’s even more frightening now is that hackers have been stealing health records like crazy over the last couple of years. So, there’s a whole dark market of hacked medical data that, I guess, if they got into this IMS database, they would have the keys to the kingdom, in a way.

Am I being too paranoid here?

AT: Well, no, you correctly point out that there has been a sharp upswing in hacking into medical records. That can happen into a small, individual practice, or it could happen into a large insurance company.

And in fact, the largest hacking attack of medical records in the last couple of years has been into Anthem Health, which is the Blue Cross Blue Shield company. Almost 80 million records were hacked in that.

So even people that did… I was hacked in that, even though I was not, at the time, a customer of them or had never been a customer of them, but they… One company that I dealt with outsourced to someone else, who outsourced to them. So, all of a sudden, this information can be in circulation.

There’s a government website people can look at, and you’ll see, every day or two, there are new hackings. Sometimes it involves a few thousand names and an obscure local clinic. Sometimes it’ll be a major company, such as a lab test company, and millions of names could be impacted.

So, this is something definitely to be concerned about. Yes, you could take these hacked records and match them with anonymized records to try to figure out who people are, but I should point out that there is no recorded instance of hackers getting into these anonymized dossiers by the big data miners.

IOS: Right. We hope so!

AT: I say recorded or acknowledged instance.

IOS: Right. Right. But there’s now been sort of an awareness of cyber gangs and cyber terrorism and then the use of, let’s say, records for blackmail purposes.

I don’t want to get too paranoid here, but it seems like there’s just a potential for just a lot of bad possibilities. Almost frightening possibilities with all this potential data out there.

AT: Well, we have heard recently about rumors of an alleged dossier involving Donald Trump and Russia.

IOS: Exactly.

AT: And information that… If you think about what kind of information could be most damaging or harmful to someone, it could be financial information. It could be sexual information, or it could be health information.

IOS: Yeah, or someone using… or has a prescription to a certain drug of some sort. I’m not suggesting anything, but that… All that information together could have sort of lots of implications, just, you know, political implications, let’s say.

AT: I mean if you know that someone takes a drug that’s commonly used for a mental health problem, that could be information used against someone. It could be used to deny them life insurance. It could be used to deny them a promotion or a job offer. It could be used by rivals in different ways to humiliate people. So, this medical information is quite powerful.

One person who has experienced this and spoken publicly about it is the actor, Charlie Sheen. He tested positive for HIV. Others somehow learned of it and blackmailed him. He said he paid millions of dollars to keep that information from going public, before he decided finally that he would stop paying it, and he’d have to tell the world about his medical condition.

IOS: Actually I was not aware of the payments he was making. That’s just astonishing. So, is there any hope here? Do you see some remedies, through maybe regulations or enforcement of existing laws? Or perhaps we need new laws?

AT: As I mentioned, the current rules under HIPAA allow for the free trade of your data if it’s anonymized. Now, I think, given the growing sophistication of computing, we should change the rule and define our medical data as any medical information about us, whether or not it’s anonymized.

So, if a doctor is writing in the electronic health record, you should have a say as to whether or not that information is going to be used elsewhere.

A little side point I should mention. There are a lot of good scientists and researchers who want data to see if they can gain insights into disease and new medications. I think people should have the choice whether or not they want to contribute to those efforts.

So, you know, there’s a lot of good efforts. There’s a government effort under way now to gather a million DNA samples from people to make available to science. So, if people want to participate in that, and they think that’s good work, they should definitely be encouraged to do so, but I think they should have the say and decide for themselves.

And so far, we don’t really have that system. So, by redefining what patient data is, to say, “Medical information about a patient, whether or not it’s anonymized,” I think that would give us the power to do that.

IOS: So effectively, you’re saying the patient owns the data and would have to give consent for the data to be used. Is that about right?

AT: I think so. But on the other hand, as I mentioned, I’ve written this book to encourage this discussion. The problem we have right now is that the trade is so opaque.

Companies are extremely reluctant to talk about this commercial trade. So, they do occasionally say that, “Oh, this is great for science and for medicine, and all of these great things will happen.” Well, if that is so fantastic, let’s have this discussion where everyone will say, “All right. Here’s how we use the data. Here’s how we share it. Here’s how we sell it.”

Then let people in on it and decide whether they really want that system or not. But it’s hard to have that intelligent policy discussion, what’s best for the whole country, if industry has decided for itself how to proceed without involving others.

IOS: Well, I’m so glad you’ve written this book. This, I’m hoping, will promote the discussion that you’re talking about. Well, this has been great. I want to thank you for the interview. So, by the way, where can our listeners reach out to you on social media? Do you have a handle on Twitter? Or Facebook?

AT: Well, I’m @datacurtain and I have a webpage, which is http://adamtanner.news/

IOS: Wonderful. Thank you very much, Adam.

[Podcast] Adam Tanner on the Dark Market in Medical Data, Part II

More Adam Tanner! In this second part of my interview with the author of Our Bodies, Our Data, we start exploring the implications of having massive amounts of online medical data. There’s much to worry about.

With hackers already good at stealing health insurance records, is it only a matter of time before they get into the databases of the drug prescription data brokers?

My data privacy paranoia about all this came out in full force during the interview. Thankfully, Adam was able to calm me down, but there’s still potential for frightening possibilities, including political blackmail.

Is the answer more regulations for drug data? Listen to the rest of the interview below to find out, and follow Adam on Twitter, @datacurtain, to keep up to date.

[Podcast] Adam Tanner on the Dark Market in Medical Data, Part I

In our writing about HIPAA and medical data, we’ve also covered a few of the gray areas of medical privacy, including  wearables, Facebook, and hospital discharge records. I thought both Cindy and I knew all the loopholes. And then I talked to writer Adam Tanner about his new book Our Bodies, Our Data: How Companies Make Billions Selling Our Medical Records.

In the first part of my interview with Tanner, I learned how pharmacies sell our prescription drug transactions to medical data brokers, who then resell it to pharmaceutical companies and others. This is a billion dollar market that remains unknown to the public.

How can this be legal under HIPAA, and why doesn’t it require patient consent?

It turns out that once a data record is anonymized, even with the doctor’s name still attached, it’s no longer yours! Listen in as we learn more from Tanner in this first podcast.

[Podcast] More Dr. Ann Cavoukian: GDPR and Access Control

We continue our discussion with Dr. Ann Cavoukian. She is currently Executive Director of Ryerson University’s Privacy and Big Data Institute and is best known for her leadership in the development of Privacy by Design (PbD).

In this segment, Cavoukian tells us that once you’ve involved your customers in the decision making process, “You won’t believe the buy-in you will get under those conditions because then you’ve established trust and that you’re serious about their privacy.”

We also made time to cover GDPR as well as three things organizations can do to demonstrate that they are serious about privacy.

Here are some highlights of our interview:

On User Access Control

You’ve got to have restricted access, limited to those who have a right to know, meaning there is a business purpose for which they’re accessing the data.

When I say business purpose, I mean that broadly. But it could be in a hospital, people who are taking care of a patient, in whatever context. It could be in the lab, they go there for testing. So there could be a number of different arms that have a legitimate access to the data.

Those who aren’t taking care of the patient in some broad manner should have restricted access to the data. That’s where the snooping comes in: the rogue employee, the curiosity. It distorts the legitimate reasons for which people should have access to information.

Especially in a hospital context, you want to enable access to people who have a right to know because they are treating you. And then the walls should go up for those who are not treating you in any manner. It’s difficult to do, but you have to do it. Because that’s what patients expect. Patients have no idea that out of curiosity, someone who shouldn’t have access is looking at their file.
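To ground that “right to know” principle, here is a minimal sketch in Python of a purpose-based access check. The roles, patient IDs, and purposes are hypothetical, not any real hospital system; the point is simply that access is granted only to staff with a documented care relationship and a treatment-related reason.

# Hypothetical sketch of purpose-based access control for patient records.
# Care teams, user names, and purposes are invented for illustration.

care_teams = {
    "patient-123": {"dr_lee", "nurse_patel", "lab_tech_wong"},
}

ALLOWED_PURPOSES = {"treatment", "lab_testing"}

def can_access(user, patient_id, purpose):
    """Grant access only to care-team members acting for a legitimate purpose."""
    on_care_team = user in care_teams.get(patient_id, set())
    return on_care_team and purpose in ALLOWED_PURPOSES

print(can_access("dr_lee", "patient-123", "treatment"))         # True
print(can_access("billing_admin", "patient-123", "curiosity"))  # False: the walls go up

In practice you would also want to log every access attempt, since audit trails are what catch the curious employee after the fact.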

Three Things Organizations Can Do

1. When I go to an organization to speak about privacy, I speak to the CEO, the Board of Directors, and the senior executives, and I give them the message that they need to be inclusive: you have to have a holistic approach to protecting privacy, and it’s got to be top down. If you get that messaging to your front-line folks, that you care deeply about your customers’ privacy, that message will emanate.

Also, let your customers know that their privacy is highly respected by the company and that you go to great lengths to protect it. You want to communicate that to them, and then you have to follow up on it. Meaning: we use your information for the purpose intended, we tell you what we use it for, we collect it for that purpose, and privacy is the default setting, so we won’t use it for anything else without your positive consent for secondary uses.

2. I would have at least quarterly meetings with staff. You need to reinforce this message. It needs to be spread across the entire organization.

It can’t be just the Chief Privacy Officer communicating this to a few people. You have to have everyone buy into this. The front-line clerk might be low on the totem pole, but they might have the greatest power to breach privacy, so they need to understand, just like the most senior manager, how important privacy is, why it matters, and how they can protect it.

Meet with your staff and drive the message home, and you’re going to get what I call a privacy payoff. Protecting your customers’ privacy will yield big returns for your company: it will increase customer confidence, enhance customer trust, and improve your organization’s bottom line.

3. I would invite a speaker and have everyone from your company attend so that these ideas are reinforced. Bring in a speaker who can speak to what happens when you don’t protect your customers’ privacy.

Last year was called the year of the data breach, and breaches were rampant, so it really helps to tell the people in your company and help them understand what could happen when you don’t do it right, and what the consequences are to the company and to the employee: you could lose your job, the company could go under, you could be facing a class-action lawsuit.

It’s not all a bad-news story. I’ll give the bad news, and then I applaud the behavior of the company, so what they get is a dual message: there are real consequences when we fail to protect our customers’ privacy, but look at the gains and the payoff. It makes them feel really good about the job they’re doing, and it underscores the importance of protecting privacy.


[Podcast] Dr. Ann Cavoukian on Privacy By Design


I recently had the chance to speak with former Ontario Information and Privacy Commissioner Dr. Ann Cavoukian about big data and privacy. Dr. Cavoukian is currently Executive Director of Ryerson University’s Privacy and Big Data Institute and is best known for her leadership in the development of Privacy by Design (PbD).

What’s more, she came up with PbD language that made its way into the GDPR, which will go into effect in 2018. First developed in the 1990s, PbD addresses the growing privacy concerns brought on by big data and IoT devices.

Many worry that PbD interferes with innovation and business, but Cavoukian argues that’s not the case.

When working with government agencies and organizations, Dr. Cavoukian’s singular approach is that big data and privacy can operate together seamlessly. At the core, her message is this: you can simultaneously collect data and protect customer privacy.

To gain insight into her process, here are some highlights of our interview:

On Privacy by Design

I really crystalized Privacy by Design after 9/11 because everyone was talking about the vital need for public safety and security, but it was always construed at the expense of privacy. Privacy forms the basis of our freedom. You want to live in a democratic society? You have to have moments of reserve, reflection, intimacy and solitude.

The position I took was public safety is paramount, but with privacy embedded into the process – privacy as a default setting. What that means is that if a company has privacy as the default setting, they can say to their customers, we can give you privacy assurance from the get-go. We’re collecting your information for this purpose, so they identify the purpose of data collection. We’re only going to use it for that purpose and unless you give us specific consent to use it for additional purposes, the default is that we won’t be able to use it for anything else.
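That “default setting” idea translates directly into how consent can be modeled in software: collection is tied to a stated purpose, and secondary uses stay off unless the customer explicitly opts in. Here is a minimal sketch in Python, with invented field names and purposes:

# Hypothetical sketch of "privacy as the default setting": data may be used only
# for the purpose identified at collection unless the customer opts in to more.

from dataclasses import dataclass, field

@dataclass
class Consent:
    primary_purpose: str                               # purpose identified at collection
    secondary_uses: set = field(default_factory=set)   # empty by default: opt-in only

    def allows(self, use: str) -> bool:
        return use == self.primary_purpose or use in self.secondary_uses

consent = Consent(primary_purpose="order fulfillment")
print(consent.allows("order fulfillment"))    # True: the stated purpose
print(consent.allows("marketing analytics"))  # False: no positive consent given

consent.secondary_uses.add("marketing analytics")  # explicit opt-in by the customer
print(consent.allows("marketing analytics"))       # True only after consent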

Privacy Advice for Innovators

Recently, I spoke at a conference and told the developers to build privacy into their apps at the beginning stages of development, and they’re going to be golden. I had dozens of people come up to me afterwards telling me that they didn’t even know they were supposed to, because it never appeared on their radar. It’s not that they’re resistant to it; they just hadn’t thought of it. And our biggest job is educating the app developers, the brilliant minds. My experience isn’t that they’re resisting the messaging; they haven’t been exposed to the messaging.

I’d like to mention, in partnership with Deloitte and Ryerson University, we just started offering Privacy by Design certification.



[Podcast] Data Privacy Attorney Sheila FitzPatrick on GDPR

We had a unique opportunity in talking with data privacy attorney Sheila FitzPatrick. She lives and breathes data security and is a recognized expert on EU and other international data protection laws. FitzPatrick has direct experience in representing companies in front of EU data protection authorities (DPAs). She also sits on various governmental data privacy advisory boards.

During this first part of the interview with her, we focused on the new General Data Protection Regulation (GDPR), which she says is the biggest overhaul in EU security and privacy rules in twenty years.

One important point FitzPatrick makes is that the GDPR is not only more restrictive than the existing Data Protection Directive — adding breach notification and impact assessment rules — but also has far broader coverage.

Cloud computing companies, no matter where they are located, will fall under the GDPR if they are asked to process personal data of EU citizens by their corporate customers. The same goes for companies (or controllers, in GDPR-speak) outside the EU who directly collect personal data — think of any US-based e-commerce or social networking company on the web.

Keep all this in mind as you listen to our in-depth discussion with this data privacy and security law professional.



Are Wikileaks and ransomware the precursors to mass extortion?

Despite Julian Assange’s promise not to let Wikileaks’ “radical transparency” hurt innocent people, an investigation found that the whistleblowing site has published hundreds of sensitive records belonging to ordinary citizens, including medical files of rape victims and sick children.

The idea of having all your secrets exposed, as an individual or a business, can be terrifying. Whether you agree with Wikileaks or not, the world will be a very different place when nothing is safe. Imagine all your emails, health records, texts, and finances open for the world to see. Unfortunately, we may be closer to this than we think.

If ransomware has taught us one thing, it’s that an overwhelming amount of important business and personal data isn’t sufficiently protected. Researcher Kevin Beaumont says he’s seeing around 4,000 new ransomware infections per hour. If it’s so easy for an intruder to encrypt data, what’s stopping cybercriminals from publishing it on the open web?

There are still a few hurdles for extortionware, but none of them are insurmountable:

1. Attackers would have to exfiltrate the data in order to expose it

Ransomware encrypts data in place without actually stealing it. Extortionware has to bypass traditional network monitoring tools that are built to detect unusual amounts of data leaving the network quickly (there’s a rough sketch of this kind of volumetric check after this list). Of course, files could also be siphoned off slowly, disguised as benign web or DNS traffic.

2. There is no central “wall of shame” repository like Wikileaks

If attackers teamed up to build a searchable public repository for extorted data, it’d make the threat of exposure feel more real and create a greater sense of urgency. Wikileaks is very persistent about reminding the public that the DNC and Sony emails are out in the open, and they make it simple for journalists and others to search the breached data and make noise about it.

3. Maybe ransomware pays better

Some suggest that the economics of ransomware are better than extortionware, which is why we haven’t seen it take off. On the other hand, how do you recover when copies of your files and emails are made public? Can the DNC truly recover? Payment might be the only option, and one big score could be worth hundreds of ransomware payments.  
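Back to the first hurdle for a moment: the “unusual amounts of data leaving the network” check is typically a volumetric baseline. Here is a rough sketch in Python of that idea (the hosts, baselines, and thresholds are invented, and this is not any particular monitoring product). It also shows why the slow, drip-fed exfiltration mentioned above can stay under the radar.

# Hypothetical sketch of volumetric egress monitoring: flag any host whose
# outbound traffic in a window far exceeds its historical baseline.
# Hosts, baselines, and the multiplier are invented for illustration.

baseline_mb_per_hour = {"workstation-17": 40, "file-server-02": 250}

def flag_anomalies(observed_mb_per_hour, multiplier=5, default_baseline=50):
    """Return hosts sending far more data than their usual hourly volume."""
    return [host for host, mb in observed_mb_per_hour.items()
            if mb > baseline_mb_per_hour.get(host, default_baseline) * multiplier]

# A smash-and-grab exfiltration stands out immediately...
print(flag_anomalies({"workstation-17": 9000}))  # ['workstation-17']

# ...but the same data drip-fed over weeks never crosses the threshold,
# which is exactly the gap noted above.
print(flag_anomalies({"workstation-17": 45}))    # []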

So what’s preventing ransomware authors from trying to do both? Unfortunately, not much. They could first encrypt the data and then try to exfiltrate it. If they get caught during exfiltration, it’s not a big deal. Just pop up the ransom notification and claim the BTC.

Ransomware has proven that organizations are definitely behind the curve when it comes to catching abnormal behavior inside their perimeters, particularly on file systems. I think the biggest lesson to take away from Wikileaks, ransomware, and extortionware is that we’re on the cusp of a world where unprotected files and emails will regularly hurt businesses, destroy privacy, and even jeopardize lives (I’m talking about hospitals that have suffered from cyberattacks like ransomware).

If it’s trivially easy for noisy cybercriminals that advertise their presence with ransom notes to penetrate and encrypt thousands of files at will, the only reasonable conclusion is that more subtle threats are secretly succeeding in a huge way.  We just haven’t realized it yet…except for the U.S. Office of Personnel Management. And Sony Pictures. And Mossack Fonseca. And the DNC. And…

[Podcast] Attorney and Data Scientist Bennett Borden: Data Analysis Techniques (Part 1)

Once we heard Bennett Borden, a partner at the Washington law firm of DrinkerBiddle, speak at the CDO Summit about data science, privacy, and metadata, we knew we had to reengage him to continue the conversation.

His bio is quite interesting: in addition to being a litigator, he’s also a data scientist. He’s a sought-after speaker on legal tech issues. Bennett has written law journal articles about the application of machine learning and document analysis to e-discovery and other legal transactions.

In this first part in a series of podcasts, Bennett discusses the discovery process and how data analysis techniques came to be used by the legal world. His unique insights on the value of the file system as a knowledge asset as well as his perspective as an attorney made for a really interesting discussion.



Let’s Get More Serious About AR and Privacy

Augmented Reality (AR) is the technology of the moment. While some of us have already experienced the thrill of catching a Dragonite in Pokemon Go, AR is not just all fun and games. In fact, depending on how an AR gadget is used, it can have significant privacy implications.

Privacy in Public

Augmented reality enhances real images with digital special effects — it’s reality assisted by coding.  These gadgets generally let you record a scene, and then they give you the option of sharing on social media.

In the public space, you don’t have an expectation of privacy. As an amateur photographer myself, I was always told to be polite and ask permission of a stranger before taking a picture. If you’re curious, there’s a professional code of ethics that spells this out.

But doctors, bankers, lawyers, and some others are under real legal obligations when it comes to taking pictures of people and personal information.

Privacy at the Doctor’s

Suppose a doctor armed with an AR device (or a video recorder) films a waiting room filled with people. The doctor may not necessarily need consent in this case, but some states and hospital associations may have their own laws and guidelines in this area.

If the doctor photographs a patient’s face for clinical purposes, usually the general HIPAA consent form would be sufficient.

But if the doctor were to use the video of the waiting room or the clinical pictures for marketing purposes, HIPAA requires additional authorization.

In general, hospital employees and visitors (except when recording family members) need consent when photographing or video-ing people in a hospital setting.

Mark my words: at some point a HIPAA case will be brought against hospital workers fooling around with Pokemon Go as they wander the medical corridors hunting for Vaporeons.

By the way, photos or videos showing faces are considered protected health information (PHI).

If they were then stored, they would have to be protected in the same way as HIPAA text identifiers. And an unauthorized exposure of this type of PHI would be considered a breach.

Outside the Hospital Setting

These AR gadgets can also be a privacy problem in business and legal settings. If an outsider or unauthorized person with AR glasses were recording confidential data, trade secrets, or PII on someone’s desk or on their screen, then that would be considered a security leak.

And relevant laws such as Gramm-Leach-Bliley and Sarbanes-Oxley would kick in.

A judge recently banned Pokemon Go in the courtroom, but this seems to be more a case of legal etiquette.  Another judge was somewhat upset — and tweeted about it — that a defense counsel was using AR glasses, but apparently nothing illegal was done.

It’s a little premature to become too worried about the privacy and security issues of AR gadgetry with so many more pressing security problems.

However, it’s not a bad idea for your company to come up with initial guidelines and policies on AR device usage by employees and visitors.

Top Minds in PCI Compliance

With countless data breaches hitting the front page, many are turning to the Payment Card Industry Data Security Standard (PCI DSS), an invaluable list of controls that guides, influences, and promotes security.

However, there are merchants who argue that these controls provide too much security while security professionals think they provide too little.

So what do the experts think about PCI DSS? Here are five worth listening to:

1. Laura Johnson


As Director of Communications for PCI Security Standards Council (SSC), Laura Johnson is responsible for creating communication strategies that inform, educate, and help PCI SSC global stakeholders to participate in PCI programs and initiatives.

If you want to learn about PCI fundamentals, check out her blog. There, you’ll also find the latest and greatest on PCI DSS 3.2.

2. Anton Chuvakin / @anton_chuvakin


Not only is Anton Chuvakin an Infosec expert, but he’s also super knowledgeable about PCI DSS compliance, offering the best dos and don’ts to keep everyone’s payment cards safe. Currently Dr. Anton Chuvakin is a Research Vice President of Gartner’s  Technical Professionals (GTP) Security and Risk Management Strategies team.

According to Mr. Chuvakin, many make the mistake of adhering to PCI DSS-specific tasks only right before a compliance assessment. In reality, you need to adhere to the standard at all times, as security doesn’t start and end with PCI compliance.

By the way, get his book on PCI Compliance! You won’t regret it!

3. Nancy Rodriguez


Nancy Rodriguez is currently Enterprise PCI Governance Leader at Wells Fargo and responsible for coordinating and conducting PCI risk assessments.

Her contributions to the industry are wide and varied, and they began over 25 years ago. She has been a trusted advisor at Citi for all global PCI programs, a member of the PCI SSC Board of Advisors, and a PCI Compliance Director at Philips.

See what others have to say about Rodriguez here.

4. Troy Leach / @TroyLeach


Troy Leach is the Chief Technology Officer for the PCI Security Standards Council (SSC). He partners with industry leaders to develop standards and strategies that ensure payment card data and infrastructure are secure.

If you want to hear more from Mr. Leach and Mr. Chuvakin on what they have to say about the balance between PCI DSS compliance and security, check out this insightful interview. Also Mr. Leach regularly tweets out links to stories on bank hackers, robberies, and ATM thieves – it’ll feel like you’re watching an episode of Law and Order!

5. John Kindervag / @kindervag


Mr. Kindervag is a leading expert on wireless security, network security, security information management, and PCI data security. Currently he is Forrester’s Vice President and Principal Analyst serving security and risk professionals.

In this TechTarget article, Mr. Kindervag dispels the five biggest misunderstandings about PCI DSS.


Six Authentication Experts You Should Follow

Our recent ebook shows what’s wrong with current password-based authentication technology.

But luckily, there are a few leading experts that are shaping the future of the post-password world. Here are six people you should follow:


1. Lorrie Cranor @lorrietweet

Lorrie Cranor is a password researcher and is currently Chief Technologist at the US Federal Trade Commission. She is primarily responsible for advising the Commission on developing technology and policy matters.

Cranor has authored over 150 research papers on online privacy, usable security, and other topics. She has played a key role in building the usable privacy and security research community, having co-edited the seminal book Security and Usability and founded the Symposium On Usable Privacy and Security.

Prior to joining the FTC, Cranor was a Professor of Computer Science and of Engineering and Public Policy at Carnegie Mellon University, where she directed the CyLab Usable Privacy and Security Laboratory (CUPS) and co-directed the MSIT-Privacy Engineering master’s program.

Check out Cranor’s tips on how often you should change your password. Also, an oldie but goodie: Cranor’s dress made of commonly used passwords.


2. Johannes Ullrich @johullrich

Considered to be one of the 50 most powerful people in Networking by Network World, Johannes Ullrich, Ph.D. is currently Dean of Research for the SANS Technology Institute.

A proponent of biometric authentication, Mr. Ullrich believes the field is finally gaining traction. He explained in a recent Wired article, “This field is very important because passwords definitely don’t work.” However, he also recognizes barriers to widespread adoption of biometrics.

For instance, while Mr. Ullrich’s latest analysis of the iPhone’s fingerprint sensor was mostly positive, he revealed one big vulnerability: attackers could in theory lift a fingerprint smudge off a stolen iPhone’s glass and then fool the sensor’s imperfect scanner.

Yikes! Better get out my microfiber cleaning cloth.


3. Michelle Mazurek (website)

Michelle Mazurek is one of the researchers who brought us the news that a passphrase can be just as good as a password with symbols and/or caps.

She is currently an Assistant Professor of Computer Science at the University of Maryland. Her expertise is in computer security, with an emphasis on human factors.

Her interest resides in understanding security and privacy behaviors and preferences by collecting real data from real users, and then building systems to support those behaviors and preferences.

Check out more of her work on passwords, here.


4. David Birch @dgwbirch

David Birch is a recognized thought leader in two things that still count even in the disruptive digital age: money and identity. In his last book, “Identity is the New Money,” he presents a unified theory of where these two essential aspects of modern life are heading.

His thinking on identity is based strongly on the work of Doctor Who. Yes, the hero of the long-running BBC sci-fi show. Fans know that the Doctor has psychic paper that always provides just the right information for alien bureaucrats.

Birch envisions something similar: a universal credential that would provide just the information that an online service, retailer, or government agency would require to process a transaction. Need to prove that you’re 18 years old, have membership in an organization, or have access rights to digital content? In Birch’s view, the technology is now available — primarily through biometrics, cryptography, and wireless — to accomplish all this without passwords and without accessing a central database!


5. Mark Burnett @m8urnett

While some might think passwords are on the outs, realistically, we’ll probably continue to use them for years to come. Therefore, we’ll need the expertise of Perfect Passwords author Mark Burnett to help keep our data safe.

This veteran IT security expert regularly blogs on his own personal website and writes articles for sites such as Windows IT Pro and The Register. Also active on social media, he regularly offers ideas on how to improve passwords and authentication.

Check out this fascinating post on how Burnett experimented with his entire family to see if it was really possible to kill the password.


6. Karl Martin @KarlTheMartian

With a Ph.D. in Electrical and Computer Engineering, Karl Martin, CEO and founder of Nymi, created a wristband that analyzes your heartbeat to seamlessly authenticate you to your computer, smartphone, car, and much more. Skeptics who are concerned about their data and privacy shouldn’t be worried, according to Mr. Martin. He contends that all the data is encrypted at the hardware level and that the wristband was created with Privacy by Design.

In this Wired interview, Martin says that it’s impossible for anyone to trace the signal emitted by the wristband back to the user unless people opt in to allow that access – the default setting is opt-out.

In future versions, if Mr. Martin can get our computers, phones, and cars to talk to us with a voice like Scarlett Johansson’s, our lives will be complete.