Category Archives: Privacy

Six Authentication Experts You Should Follow


Our recent ebook shows what’s wrong with current password-based authentication technology.

But luckily, there are a few leading experts who are shaping the future of the post-password world. Here are six people you should follow:


1. Lorrie Cranor @lorrietweet

Lorrie Cranor is a password researcher and is currently Chief Technologist at the US Federal Trade Commission. She is primarily responsible for advising the Commission on developing technology and policy matters.

Cranor has authored over 150 research papers on online privacy, usable security, and other topics. She has played a key role in building the usable privacy and security research community, having co-edited the seminal book Security and Usability and founded the Symposium On Usable Privacy and Security.

Prior to joining the FTC, Cranor was a Professor of Computer Science and of Engineering and Public Policy at Carnegie Mellon University, where she directed the CyLab Usable Privacy and Security Laboratory (CUPS) and co-directed the MSIT-Privacy Engineering master's program.

Check out Cranor’s tips on how often you should change your password. Also an oldie but goodie: Cranor’s dress made of commonly used passwords.


2. Johannes Ullrich @johullrich

Considered to be one of the 50 most powerful people in Networking by Network World, Johannes Ullrich, Ph.D. is currently Dean of Research for the SANS Technology Institute.

A proponent of biometric authentication, Dr. Ullrich believes it’s a field that is finally gaining traction. He explained in a recent Wired article, “This field is very important because passwords definitely don’t work.” However, he also recognizes barriers to widespread adoption of biometrics.

For instance, while Dr. Ullrich’s latest analysis of the iPhone’s fingerprint sensor was mostly positive, he revealed one big vulnerability: attackers could, in theory, lift a fingerprint smudge off a stolen iPhone’s glass and then fool the sensor’s imperfect scanner.

Yikes! Better get out my microfiber cleaning cloth.


3. Michelle Mazurek (website)

One of the researchers who brought us the news that a passphrase is just as good as a password with symbols and/or caps is Michelle Mazurek.

She is currently an Assistant Professor of Computer Science at the University of Maryland. Her expertise is in computer security, with an emphasis on human factors.

She is interested in understanding security and privacy behaviors and preferences by collecting real data from real users, and then building systems to support those behaviors and preferences.
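The passphrase-vs-password comparison ultimately comes down to entropy. Here is a rough sketch in Python, under the idealized assumption that words and characters are chosen uniformly at random (which human-chosen passwords never are, and that gap is precisely what this line of research highlights):

```python
import math

def passphrase_entropy_bits(num_words: int, dictionary_size: int) -> float:
    """Entropy of a passphrase of words drawn uniformly from a dictionary."""
    return num_words * math.log2(dictionary_size)

def charset_entropy_bits(length: int, charset_size: int) -> float:
    """Entropy of a password of characters drawn uniformly from a charset."""
    return length * math.log2(charset_size)

# Four random words from a 2048-word list vs. eight truly random
# characters from the 95 printable ASCII characters.
print(passphrase_entropy_bits(4, 2048))        # 44.0
print(round(charset_entropy_bits(8, 95), 1))   # 52.6
```

On paper the fully random character password edges ahead, but humans never pick characters uniformly at random, so real symbol-and-caps passwords land far below that ceiling while the passphrase stays both strong and memorable.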

Check out more of her work on passwords here.


4. David Birch @dgwbirch

David Birch is a recognized thought leader in two things that still count even in the disruptive digital age: money and identity. In his latest book, “Identity is the New Money”, he presents a unified theory of where these two essential aspects of modern life are heading.

His thinking on identity draws heavily on Doctor Who. Yes, the hero of the long-running BBC sci-fi show. Fans know that the Doctor carries psychic paper that always provides just the right information for alien bureaucrats.

Birch envisions something similar: a universal credential that would provide just the information that an online service, retailer, or government agency requires to process a transaction. Need to prove that you’re 18 years old, have membership in an organization, or hold access rights to digital content? In Birch’s view, the technology is now available—primarily biometrics, cryptography, and wireless—to accomplish all this without accessing a central database using passwords!


5. Mark Burnett @m8urnett

While some might think passwords are on the outs, realistically, we’ll probably continue to use them for years to come. Therefore, we’ll need the expertise of Perfect Passwords author Mark Burnett to help keep our data safe.

This veteran IT security expert regularly blogs on his own personal website and writes articles for sites such as Windows IT Pro and The Register. Also active on social media, he regularly offers ideas on how to improve passwords and authentication.

Check out this fascinating post on how Burnett experimented with his entire family to see if it was really possible to kill the password.


6. Karl Martin @KarlTheMartian

With a Ph.D. in Electrical and Computer Engineering, Karl Martin, CEO and founder of Nymi, created a wristband that analyzes your heartbeat to seamlessly authenticate you to your computer, smartphone, car, and much more. Skeptics concerned about their data and privacy shouldn’t worry, according to Mr. Martin. He contends that all the data is encrypted at the hardware level and that the wristband was created with Privacy by Design.

In this Wired interview, Martin says that it’s impossible for anyone to trace the signal emitted by the wristband back to the user unless the user opts in to allow that access – the default setting is opt-out.

In future versions, if Mr. Martin can get our computers, phones, and cars to talk to us with a voice like Scarlett Johansson’s, our lives will be complete.

 

Summer Reminder: Cloud Storage Ain’t All That Private


I’ve written before about the lack of privacy protections for consumers storing content in the cloud. In looking back over my notes, I’d forgotten just how few cloud privacy rights we have in the real world. Using the typical terms of service (ToS) from some major providers as a benchmark, your rights to content uploaded to the cloud can be summarized by this common expression (often used in relationships by one party): “what’s yours is mine”.

I’ve become obsessed recently with applying some of the security and privacy ideas we talk about in this blog in my daily life. Like you, I use a few well-known cloud file storage services to store documents, pictures, and audio files — mostly of a quasi-public nature but occasionally more personal content as well.

After doing some additional research for this post, I’m now seeing the cloud a little more ominously. And I’ll start taking real actions in 2016. You should too.

In many cases, you lose all your privacy rights by clicking on a typical cloud storage ToS. Effectively, the provider can do whatever it wants with your data, including sending it to outside parties.

It’s Not a Safety Deposit Box

As a refresher course, the Stored Communications Act (SCA) is the relevant legislation covering digital content held by a company. It was written in the late bronze age of computing—circa the mid-1980s. The intention was to give then-new email and other computing technologies the same privacy protections as legacy mail.

We don’t expect our letter carriers to casually open and read our mail or the postal service to send us targeted advertising flyers based on whom we’ve written to.

Lawmakers at the time thought they could help spur electronic communications by elevating email and, to a lesser extent, online storage to the same legal status (particularly in terms of the Fourth Amendment) as the postal service and phone systems.

The SCA introduced the legal concept of electronic communications services (ECS) to cover email and messaging, and remote computing services (RCS) for online storage and data processing that are offered to the general public.

RCS is the one that’s most relevant to today’s cloud technology.

Why?

Any service in which the digital content or communications are stored — and that includes web-based email services — is better classified under the Act as an RCS.

Remote Computing Services and Privacy

While the authors of the SCA may have thought they were turning cloud storage into the virtual equivalent of a sealed letter, the reality of ad-based business models has made cloud storage far less private.

The key problem is the Terms of Service agreements we robotically click on. Many major providers (no names, please) say in explicit terms that they can access the user’s uploaded contents for advertising purposes, or else they include language saying the contents can be accessed at some point for some unspecified purpose.

From the SCA’s viewpoint, these ToS agreements mean that the cloud provider is not an RCS. The legalese in the Act states an RCS can access your data but only for reasons directly related to storage — say, copying to other sites in a cluster or archiving or some other IT function.

The great exception in the SCA for remote computing services can be found in § 2702(b).

As soon as the provider is allowed to take the data and use it for activities not related to storage (say, targeted advertising or other vaguely described reasons mentioned in the ToS), it’s no longer in the RCS business as far as the SCA is concerned.

You then lose all your SCA privacy protections, since the statute protects your privacy only when the contents are held by a valid RCS. Most importantly, these protections include the requirement that the provider gain authorization from a subscriber before divulging contents.

The core issue is that once you allow the cloud storage provider to peek into the data for other than pure IT reasons, you no longer have an “expectation of privacy”.

If you want to learn more about the SCA and privacy in the cloud era, read this surprisingly interesting legal paper that traces the history and legal reasoning behind the law.

Once the cloud provider falls outside the SCA, it doesn’t need the subscriber’s permission to do anything else it wants with the data.

Send some personal data mined from your documents to a data broker? No problem.

You also lose, not insignificantly, your Fourth Amendment rights: the cloud provider can simply send your data to the government when faced with an easy-to-obtain administrative subpoena — and it doesn’t need to inform you!

And It Gets Worse

These days you don’t have to be a cloud provider to be able to implement storage and email services. Any company with an IT department can generally pull this off.

As a result, we as consumers see these services being offered by retail, travel, hospitality, and just about any large company that wants to “engage” with its customers.

But as far as the SCA is concerned, these companies are neither an RCS nor an ECS. Since their primary business function is outside of communications, they’re not covered by the SCA at all.

So the next time you’re in your favorite chain espresso bar and hooked into their WiFi, be aware that when using any special storage or messaging features provided by their website, your content is not protected.

Also not covered by the SCA: university or school email systems. Since their email services are not offered to the general public, they’re not considered an RCS or ECS.

Work email systems fall outside the SCA as well. Though there are some interesting cases where an employee used a company-provided cell phone and was in fact protected by the SCA.

Privacy Options

Many cloud storage ToS agreements will say that they won’t sell your data — both stored contents and PII — to third parties.

That’s a good start.

But then you have to look very carefully at how they can access the data: the less they say and the simpler the language in these agreements, the better.

My advice: anything that’s written too vaguely will likely put the provider outside of the SCA’s coverage, and therefore your privacy will be compromised.

So what’s a privacy-minded consumer to do?

One option is to use one of the many services that encrypt the contents you upload into cloud storage, thereby protecting them from internal data scanners. This idea makes lots of sense, although it’s an extra step.

This is the one that I’ll implement this year!

Or, if you do upload contents in plain text to a cloud storage service, be very selective about what you put there.

And for employees of companies who are casually using cloud storage services to upload business documents?

Cease and desist!

Need a cloud storage alternative for your employees? Keep your privacy rights by cloud-enabling files with DatAnywhere.

Data Privacy US-Style: Our National Privacy Research Strategy


While the EU has been speeding ahead with its own digital privacy laws, the US has been taking its own steps. Did you know there’s a National Privacy Research Strategy (NPRS) white paper that lays out plans for federally funded research projects into data privacy?

Sure, the Federal Trade Commission has taken up the data privacy mantle in the US, bringing actions against social media, hotels, and data brokers. But there’s still more to do.

So you can think of the NPRS as a blueprint for the research and development phase of the US’s own privacy initiatives. By the way, the US government spent about $80 million on privacy research efforts in 2014.

What’s the Plan?

I scanned through this 30+ page report looking for some blog-worthy points to share. I found a few.

First, the NPRS has interesting ideas on how to define privacy.

The authors of the paper, representing major federal agencies including the FTC, don’t offer a firm definition; instead, they view privacy as being characterized by four areas: subjects, data, actions, and context.

Essentially, consumers release data into a larger community (based on explicit or implicit rules about how the data will be used), on which certain actions are then taken – processing, analysis, sharing, and retention. The idea (see diagram) parallels our own Varonis approach of using metadata to provide context to user actions on file data. We approve of the NPRS approach!


The larger point is that privacy has a context, which shapes our privacy expectations and what we consider a privacy harm.

NPRS is partially focused on understanding our expectations in different contexts and ways to incentivize us to make better choices.

Second, the plan takes up the sexier matter of privacy engineering — in other words, research into building privacy widgets that software engineers can assemble to meet certain objectives.

I for one am waiting for a Privacy by Design (PbD) toolkit. We’ll see.

The third major leg in this initiative targets the transparency of data collection, sharing, and retention. As it stands now, you click “yes”, affirming you’ve read the multi-page legalese in online privacy agreements. And then you’re surprised when you’re spammed at some point by alternative medical therapy companies.

The good news is that some are experimenting with “just in time” disclosures that provide bite-size nuggets of information at various points in the transaction — allowing you, potentially, to opt out.

More research needs to be undertaken, and NPRS calls for developing automated tools to watch personal data information flows and report back to consumers.

And this leads to another priority: ensuring that personal information flows meet agreed-upon privacy objectives.

Some of the fine print for this goal will sound familiar to Varonis-istas. NPRS suggests adding tags to personal data — essentially metadata — and processing the data so that consumer privacy preferences are then honored.

Of course, this would require privacy standardization and software technology that could quickly read the tags to see if the processing meets legal and regulatory standards. This is an important area of research in the NPRS.
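To make the tagging idea concrete, here is a hypothetical sketch. All field names, tags, and purposes below are invented for illustration and do not come from the NPRS paper: each personal-data field carries metadata tags, and a processing request is honored only if the consumer's recorded preferences allow that purpose for every tag on the field.

```python
# Each stored field carries metadata tags describing the kind of data it is.
TAGGED_RECORD = {
    "email":    {"value": "pat@example.com", "tags": {"contact"}},
    "location": {"value": "41.2,-73.9",      "tags": {"location", "sensitive"}},
}

# Consumer preferences: which processing purposes each tag permits.
PREFERENCES = {
    "contact":   {"service_email"},   # OK for service messages only
    "location":  set(),               # never process location
    "sensitive": set(),
}

def may_process(field: str, purpose: str) -> bool:
    """Allow a request only if every tag on the field permits the purpose."""
    tags = TAGGED_RECORD[field]["tags"]
    return all(purpose in PREFERENCES.get(tag, set()) for tag in tags)

print(may_process("email", "service_email"))   # True
print(may_process("email", "advertising"))     # False
print(may_process("location", "advertising"))  # False
```

The design point is the one the NPRS raises: a check like this only works at scale if the tag vocabulary is standardized and the processing pipeline can read and honor the tags quickly.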

In the Meantime, the FTC Needs You!

You’re now fired up after reading this post and wondering whether you — white-hat researcher, academic, or industry pro — can have your voice on privacy heard by the US government.

You can!

The FTC is now calling for privacy papers and presentations for its second annual PrivacyCon, to be held in Washington in January 2017. You can check out the papers from last year’s conference here.

If you do submit and plan to speak, let us know! We’d love to follow up with you.


Understanding Canada: Ontario’s New Medical Breach Notification Provision (and Other Canadian Data Privacy Facts)

Remember Canada’s profusion of data privacy laws?

The Personal Information Protection and Electronic Documents Act (PIPEDA) is the law that covers all commercial organizations across Canada.

Canadian federal government agencies, though, are under a different law known as the Privacy Act.

But then there are overriding laws at the provincial level.

If a Canadian province adopts substantially similar data privacy legislation to PIPEDA, then a local organization would instead fall under the provincial law.

To date, Alberta and British Columbia have adopted their own laws, each known as the Personal Information Protection Act (PIPA). And Québec, of course, has its own data privacy law.

Adding to the plenitude of provincial privacy laws, Ontario, New Brunswick, and Newfoundland have adopted similar privacy legislation with regard to health records.

Ontario’s PHIPA

So that brings us to Ontario’s Personal Health Information Protection Act (PHIPA).  Recently, PHIPA was amended to include a breach notification provision.

If personal health information is “stolen or lost or if it is used or disclosed without authority”, a healthcare organization in Ontario will have to notify the consumer “at the first reasonable opportunity”, as well as the provincial government.

Alberta, by the way, has had a breach notification requirement for all personal data since 2010.

What About Breach Notification at the Federal Level?

In June 2015, the Digital Privacy Act amended PIPEDA to include breach notification. Organizations must notify affected individuals and the Privacy Commissioner of Canada when there is a breach that creates a “real risk of significant harm” to an individual.

Notice that the federal law has a risk threshold for exposed personal information, whereas the new Ontario law for health records doesn’t. Alberta’s breach notification requirement, by the way, has a similar risk threshold to PIPEDA’s.

Confused by all this? Get a good Canadian privacy lawyer!

Don’t be confused by how to detect and stop breaches! Learn more about Varonis DatAlert.


Password Security Tips for Very Busy People


If you needed another reminder that you shouldn’t use the same password on multiple online sites, yesterday’s news about the hacking of Mark Zuckerberg’s Twitter and Pinterest accounts is your teachable moment. Mr. Z. was apparently as guilty as the rest of us in password laxness.

From what we know, the hackers worked from a list of cracked accounts that came from a 2012 breach at LinkedIn. While an initial round of over six million passwords has been available for some time, it’s now believed that the number of cracked passwords might be as high as 167 million. Based on the messages left by the hackers on his Twitter timeline, Mark’s password may have been on that new list.

Sure, “if it happens to the best of us, it happens to the rest of us” is one takeaway. However, we can all do a better job of managing our passwords.

Am I Already a Victim?

Last week, I received an email from Linkedin saying my account was part of the new batch of cracked passwords. I changed my password, and I had already been pretty good (not perfect) about using several different passwords for the online accounts that I care about. But I now needed to revisit some of them as well.

1. Go to this site now and enter the email addresses that you most commonly use in setting up accounts. You’ll discover whether your password is known to hackers.

There’s a service that will tell you whether you have an account on a site that’s been hacked and the passwords doxed. It’s called have I been pwned?

Besides informing me that Linkedin was one of my breached accounts, the ‘have I been pwned?’ service also alerted me that another account of mine had been compromised.

Yikes! Fortunately, it’s one of my web accounts where I’ve frequently changed the password. So no problem.

You may not be so lucky. If this service tells you you’ve been pwned, you’ll have to immediately go to the affected website — along with any other accounts that share that password — and change them all.
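As an aside, the service later added a password-checking API built on k-anonymity: you hash the password locally with SHA-1, send only the first five hex characters of the digest to the service, and scan the returned SUFFIX:COUNT lines yourself, so the full password never leaves your machine. Here is a sketch of the client-side check (the response below is a simulated sample for illustration, not live API output):

```python
import hashlib

def pwned_count(password: str, range_response: str) -> int:
    """k-anonymity check: hash locally, then look for the hash suffix
    in the SUFFIX:COUNT lines returned for the five-character prefix."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    suffix = digest[5:]
    for line in range_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0  # not found in any known breach

# Simulated response for prefix "5BAA6" (SHA-1 of "password" starts 5BAA6...).
sample = ("1E4C9B93F3F0682250B6CF8331B7EE68FD8:3730471\n"
          "FFFFF000000000000000000000000000000:1")
print(pwned_count("password", sample))  # 3730471
```

The point of the scheme: the service never learns which password you checked, only that you asked about one of the many hashes sharing that five-character prefix.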

But hold on a sec before you do that.

Turn on Two-Factor Authentication

You shouldn’t waste a crisis. It’s now a good time to turn on two-factor authentication (TFA) if it’s provided.

Linkedin does offer this feature. It works with your cell phone by sending an SMS text with a PIN. Or if you don’t use SMS, the service will call you instead.

So the next time you logon to Linkedin, you’ll be asked for your password (the first factor) and for the PIN (the second factor), which is sent to your cell phone.

2. Before reading any more of this post, go to your LinkedIn profile and turn on TFA – you’ll find the setting under Privacy & Settings > Privacy > Security.

The next time there’s a data exposure, you won’t have to worry (as much) about your account being hacked. The hackers will fail the second factor test.

Besides LinkedIn, Google, Twitter, Dropbox, Facebook, and PayPal have this feature as well. A Lifehacker article from 2013 lists additional websites with TFA.

Google and others — notably Twitter, Linkedin, and Facebook — also offer their TFA as a service. This allows sites that haven’t implemented strong authentication to hook into, say, Google Authenticator, for instant TFA. Going forward, for those sites that support these TFA services, you can in theory have secure centralized authentication.
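The codes these authenticator apps generate come from published standards, not anything proprietary: HOTP (RFC 4226) computes an HMAC-SHA1 over a counter and dynamically truncates it to six digits, and TOTP (RFC 6238) simply derives that counter from the current 30-second time step. A minimal sketch:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP with the counter derived from the current time."""
    return hotp(secret, int(time.time()) // step)

# RFC 4226 test vector: secret "12345678901234567890", counter 0
print(hotp(b"12345678901234567890", 0))  # 755224
```

Because both your phone and the server can compute the same code from a shared secret and the clock, no code ever needs to travel over the network ahead of time — which is what makes breached password lists useless against a TFA-protected account.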

3. It’s a good time now to consider using the authenticator feature of Google, Twitter, or Linkedin for all your accounts. As a first step, I would turn on TFA for your Google and Twitter accounts as well. It will also make these services more secure. Do it now!

Correct Horse Battery Staple and Its Variants

The best way to stop the whole chain of events that forces you to change passwords on multiple accounts is to come up with an uncrackable password in the first place.

The correct-horse-battery-staple method is one way to generate high-entropy passwords. You pick four random words out of the dictionary and use them as your password. This classic xkcd comic explains it nicely.


Source: xkcd
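In code, the xkcd method is just a few lines. A minimal sketch: the word list below is a toy stand-in, and a real implementation would draw from a dictionary of a few thousand words (a 2048-word list gives about 11 bits of entropy per word, so roughly 44 bits for four words).

```python
import math
import secrets

# Toy word list for illustration; use a real dictionary in practice.
WORDS = ["correct", "horse", "battery", "staple", "tiger", "savannah",
         "ambled", "cloud", "anchor", "puzzle", "marble", "lantern"]

def xkcd_passphrase(num_words: int = 4) -> str:
    """Pick random words with a cryptographically secure RNG."""
    return "-".join(secrets.choice(WORDS) for _ in range(num_words))

print(xkcd_passphrase())  # e.g. "tiger-anchor-cloud-staple"
# Entropy with this toy 12-word list: only ~14.3 bits, hence the
# need for a much larger dictionary.
print(round(4 * math.log2(len(WORDS)), 1))
```

Note the use of `secrets` rather than `random`: the words must be chosen by a cryptographically secure generator, or the entropy math above doesn't hold.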

To remember the password, you devise a little story using the random words, thereby connecting the words together in your neurons. For example: “I showed the horse a battery with a staple on it, and the horse said correct.”

Memory tricks where you connect stories to the actual words or ideas you want to recall are known as mnemonics.

I wrote about a variant of this technique where you make up a very simple one sentence story and use the first letter of each word as your password.

For example, here’s my one sentence story: “Bob the tiger ambled across the savannah at 12 o’clock last Tuesday”.

Unforgettable, right?

And the password that comes from this is: Bttaatsa12olT.
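The first-letter extraction is mechanical enough to express in a few lines of Python. A sketch — the keep-numbers-whole rule is inferred from the example above:

```python
def story_password(sentence: str) -> str:
    """Take the first letter of each word, keeping numbers whole and
    preserving the original capitalization."""
    parts = []
    for word in sentence.split():
        parts.append(word if word.isdigit() else word[0])
    return "".join(parts)

story = "Bob the tiger ambled across the savannah at 12 o'clock last Tuesday"
print(story_password(story))  # Bttaatsa12olT
```

Of course, the whole point of the mnemonic is that you do this extraction in your head at login time; the code just shows the rule is simple and deterministic.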

4. Now it’s your turn. Make up a memorable one sentence story that is long enough, at least 10 to 15 words, and try to use some punctuation and numbers. Take the first letters of those words and write it down once — just to see it. Throw away the paper. This is your new password. If you’ve been pwned—see #1—then use this as your new password. For anchor sites like Google or Twitter or Linkedin, change your password there as well, since these can in the future become your main authenticator.

Multiple Site Paranoia

More recently I’ve been using my own long one sentence mnemonic as my high-entropy password—I’m very confident it’s uncrackable.

Unfortunately, I didn’t use this technique with the Linkedin account I set up years ago, and hence I am one of the victims.

Can you use this same high-entropy password on multiple sites that are also guarded by TFA?

I’m not that paranoid, so I would. But experts will tell you that even TFA has man-in-the-middle vulnerabilities, and maybe somehow attackers could launch a brute-force dictionary attack against your hashed password …

If you really want to avoid having to change multiple accounts if you’ve been hacked, then you may want to customize the one sentence story.

Here’s what I came up with. Balancing complexity with convenience, I now make a small part of my one sentence story variable —pick a subject, verb, or object to be the variable part.

And then use some letter in the website name — not the first, but say the second letter — as the starting letter of another word to replace the subject, verb, or object.

If I want to reuse my “Bob the tiger” password, I could make the verb variable and use the second letter of the website name as the first letter of my new verb.

For Snapchat, my story might become: “Bob the tiger navigates across the savannah at 12 o’clock last Tuesday”.

For Twitter, it could read: “Bob the tiger walked across the savannah at 12 o’clock last Tuesday”.

You get the idea. You have a different password for each site.
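Sketched in code, with an invented verb table you would memorize yourself (the entries below just mirror the article's examples; any word list you can recall works):

```python
# Hypothetical verb table: one memorized verb per starting letter.
VERBS = {"n": "navigates", "w": "walked", "a": "ambled", "m": "marched"}

BASE = "Bob the tiger {verb} across the savannah at 12 o'clock last Tuesday"

def site_story(site: str) -> str:
    """Swap in a verb starting with the site name's second letter."""
    key = site.lower()[1]
    return BASE.format(verb=VERBS[key])

print(site_story("Snapchat"))  # "...navigates across the savannah..."
print(site_story("Twitter"))   # "...walked across the savannah..."
```

Feed the resulting story through the first-letter extraction described earlier and you get a distinct, high-entropy password per site from one memorized template.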

Hackers are used to quickly trying cracked passwords on sites other than the original; when they fail to get in, they’ll very likely move on to another victim.

Identity Theft Complaint? Tell the FTC!


Hackers steal information about you, and unfortunately it’s often months later that the company realizes there’s been a breach. But in the meantime, identity thieves use your PII to open new credit card accounts, file false tax returns, or commit medical insurance fraud, as well as make fraudulent charges on existing credit card accounts.

Like everyone else, we try to keep up with all the data security breaches. But there are some eye-popping stats about identity theft worth sharing.

This is a part of the breach response that doesn’t get nearly as much attention, but causes the most pain for consumers.

We’re All Paying a Price (but Some More Than Others)

According to a report from the US Department of Justice, in 2014 over 17 million people were victims of identity theft.

Interesting fact: Only 1 in 10 victims knew the identity of the attacker.


Source: Victims of Identity Theft, 2014 (Department of Justice)

This speaks, of course, to the fact that the identity thieves likely bought the PII from the attackers, who often act as black market retailers of identity.

While almost 50% of the losses were under $100, there was a small percentage that reported over $5000 in financial damage (see chart).

Fans of the 80-20 rule probably know already that a good chunk of the total financial cost of identity theft is concentrated in a very narrow band. They would be right!

Of the $11.3 million in total financial loss in 2014 for identity theft, almost $2.5 million could be found in those reporting losses of $1000 and over. Ouch.

Complaints to the FTC

If you’re a consumer, what should you do after you’ve discovered you’re a victim?

For many it’s just a question of contacting their bank or credit card company. The call center agent may even put a flag on the credit report with one of the national credit reporting agencies (CRAs), so the identity thieves can’t open new accounts.

But it can get more complicated when a false tax return, health insurance claim, or new credit accounts keep getting opened.

The Federal Trade Commission (FTC) is a powerful friend for consumers to turn to for help. In 2015, the agency received over 490,000 identity theft complaints — a 50% jump from the previous year — through its website.

Last week, the FTC launched a new resource, IdentityTheft.gov, to improve the process of reporting identity theft claims and to provide quicker aid for those who can’t resolve the problem directly.

Although not completely automated, the site allows the FTC to notify law enforcement and the CRAs, thereby bringing these key players into the picture earlier.

I gave the site a quick trial, and it’s quite thorough and easy to use.

Companies are still lagging in their ability to detect breaches. The 2015 Verizon DBIR tells us that 50% of breaches took one or more months to discover. It turns out that companies are very dependent on third parties — law enforcement, CRAs, etc. — to tell them there’s been an incident.

So the new FTC web page may help in narrowing the breach discovery delay as well.

Reduce your time to breach discovery. Learn more about Varonis DatAlert!

The IP Theft Puzzle, Part IV: Ambitious Insiders


In this final post of the series, I’d like to look at another type of insider. I’ve already written about the Entitled Independents. These guys fit our common perception of insiders: a disgruntled employee who doesn’t receive, say, an expected bonus and then erases millions of your business’s CRM records.

These insiders are solo acts. However, that’s not always the case with IP theft.

Take Me to Your Leader

The CMU CERT team discovered another insider variant in analyzing theft incidents in their database. Referred to as an Ambitious Leader, this insider is interested in taking the IP, along with a few employees, to another company — either one he will start by himself or a competing firm where he’ll lead a team.

The Leader will typically recruit others to help him gather the IP and then reward his helpers with jobs at the new company. These underlings are not disgruntled employees but rather have been swayed by a charismatic leader who promises them fame, free cafeteria food, and a cube with a view.

In pop culture, the disgruntled employee has been represented by Office Space-like characters. But for the Ambitious Leader, we’re now looking at white-collar professionals — attorneys, agents, financial traders, and, yes, high-powered tech types.

Does anyone remember early tech history and the Traitorous Eight? They were a group of young scientists working for the infamous (and rage-prone) William Shockley, hoping to commercialize the newly invented transistor. With all the IP embedded in their neurons, they fled from him to found the legendary Fairchild Semiconductor. And the rest is history.

This pattern of Silicon Valley superstar employees leaving to start new companies is still playing out to this day.

Easier to Spot

With higher-level professional employees, you especially need to have non-disclosure agreements in place. You don’t need to be told that, right?

As we pointed out in previous posts, these IP agreements, along with employee communications about data security and employee monitoring, can act as a deterrent. In theory, when potential insider thieves see that the company takes its IP seriously, they’ll back down.

But with Entitled Independents, close to half took no precautions to hide their activities. Since they felt the IP was theirs, these mavericky insiders simply grabbed it. Of course, their spontaneous theft activities were harder to detect in advance.

While they didn’t have a lot of data points, the CMU CERT researchers noticed that the IP agreements did have an effect on the Ambitious Leaders: it made them more likely to apply deceptions!

Their deceptions then led to more observable indicators. The Leader, for example, might plan the attack by scoping out the relevant folders, and then moving or copying files in bits and pieces during off-hours. It’s reasonable to assume they would rather not get caught early on, before they have a head start on their venture and, perhaps, gain the resources to fight any legal challenges to their IP.

CMU CERT also noticed that when the IP was segregated among different access groups, the Leader was forced into recruiting additional members. Makes sense: the Leader can’t do it all and so needs help from new gang members who have the appropriate access rights.

Conclusions

These Ambitious Leaders are showing all the signs of the CEOs they are in the process of becoming: planning, personnel recruitment for their project, and complex execution. Their activities are far easier to detect when appropriate monitoring tools are in place. In several cases, CMU CERT noticed there can be “large downloads of information outside the patterns of normal behavior” by employees directed by a Leader who has sometimes already left the company.

Where does this leave us?

I’ve been keeping score on these different insiders, and here’s my list of the types of employees you’re most likely to catch:

  • Of the Entitled Independents: those who didn’t create the IP directly and therefore will likely exhibit precursors—entering directories they normally don’t visit or showing other rehearsal behaviors.
  • Of the Ambitious Leaders: those who need to recruit several employees who have access to the IP. Precursors could include unusual bursts of email and file copies between the potential recruits and their pack leader.
  • Any insider who exhibits both technical and behavioral precursors. If they keep their eyes and ears open, IT security, with help from HR, can connect the dots between problems at work—abusive behavior, unexplained absences—and system activities.
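The "large downloads of information outside the patterns of normal behavior" that CMU CERT describes can be caught with even very simple baselining. Here's a toy sketch of the idea—not Varonis's actual detection logic, just an illustrative z-score test over hypothetical per-user daily file-access counts:

```python
from statistics import mean, stdev

def flag_anomalies(history, today, threshold=3.0):
    """Flag users whose file-access volume today is far outside
    their historical baseline (a crude z-score test).

    history: {user: [daily file counts...]}  -- hypothetical audit data
    today:   {user: today's file count}
    """
    flagged = []
    for user, counts in history.items():
        if len(counts) < 2:
            continue  # not enough history to estimate a baseline
        mu, sigma = mean(counts), stdev(counts)
        current = today.get(user, 0)
        # Flag only large positive deviations (sudden bulk access)
        if sigma > 0 and (current - mu) / sigma > threshold:
            flagged.append(user)
    return flagged
```

A user who normally touches a handful of files and suddenly copies hundreds would stand out immediately; off-hours timestamps and never-before-visited directories are the natural next signals to layer on.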

No, you won’t be able to completely prevent insider IP theft—they are your employees and they know where the goodies are. But what you can do is reduce the risks.

In my original insider threat series — reread it now! — I concluded with a few tips to help reduce the risks. It’s worth repeating the key ones: enforce least-privilege access and separation of duties, strong passwords, and more focus on security preceding employee exits and terminations.

Finally, companies should inventory their IP resources — code, contracts, customer lists — on their file systems and make sure granular logging and auditing is in place.

In a worst case scenario, the logs can be used forensically later to prove theft had happened. But with the right software, it’s still possible to spot insider activities close to when they occur.

Don’t make it too easy for that ambitious executive! Find out who’s looking at your IP with Varonis DatAlert.

Five Things You Need to Know About the Proposed EU General Data Protection Regulation

European regulators are serious about data protection reform. They’re inches away from finalizing the General Data Protection Regulation (GDPR), which is a rewrite of the existing rules of the road for data protection and privacy spelled out in their legacy Data Protection Directive (DPD). A new EU data world is coming!

We’ve been writing about the GDPR’s long, epic journey over the last two years. But with the EU Council—kind of the EU’s executive branch—approving its own version, the stage is set for a final round of discussions with the EU Parliament to split the differences. The GDPR will likely be approved by the end of 2015 (or early 2016) and go into effect in 2017. Organizations, including U.S. multinationals that handle EU personal information, will soon be required to comply with tougher rules and prove they’re actively protecting personal data.

Based on the latest proposal from the Council, we now have a good idea of what the final GDPR will look like. So your homework assignment is to start thinking about the five items below.

  1. Start Implementing Privacy by Design Principles

Developed by former Ontario Information and Privacy Commissioner Ann Cavoukian, Privacy by Design (PbD) has had a large influence on security experts, policy makers, and regulators. Cavoukian believes big data and privacy can live together. At the core, her message is that you can take a few basic steps to achieve the PbD vision: minimize data collected (especially PII) from consumers, not retain personal data beyond its original purpose, and give consumers access and ownership of their data.

The EU likes PbD as well. It’s referenced heavily in Article 23, and in many other places in the new regulation. It’s not too much of a stretch to say that if you implement PbD, you’ve mastered the GDPR.

Need to get up to speed quickly? Use this cheat sheet to understand PbD principles and guide you through key data security decisions.
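Data minimization, the first of those PbD steps, is easy to express in code. Here's a minimal sketch (the field names and allow-list are hypothetical, purely for illustration) of keeping only the attributes a declared purpose actually needs:

```python
# Privacy by Design in miniature: declare the minimal field set your
# purpose requires, and drop everything else (especially PII) at intake.
ALLOWED_FIELDS = {"user_id", "country"}  # hypothetical allow-list

def minimize(record):
    """Return a copy of the record containing only allowed fields."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
```

Filtering at the point of collection, rather than storing everything and scrubbing later, is exactly the "minimize data collected" principle Cavoukian describes.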

  2. Right to be Forgotten

The controversial “right to be forgotten” will soon be the law of the EU land. For most companies, this is really a right of consumers to erase their data. Discussed in Article 17 of the proposed GDPR, it states that “the (…) controller shall have the obligation to erase personal data without undue delay, especially in relation to personal data which are collected when the data subject was a child, and the data subject shall have the right to obtain from the controller the erasure of personal data concerning him or her without undue delay”.

I think that clearly spells out the right to erasure.

What if the data controller gives the personal data to third parties, say a cloud-based service for storage or processing? The long arm of the EU regulations still applies: as data processors, these cloud services will also have to erase the personal data when asked to by the controller.

Translation: the consumer or data subject can request to erase the data held by companies at any time. In the EU, the data belongs to the people!
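The controller-to-processor chain of erasure can be sketched in a few lines. This is a hypothetical model, not a real API—the `Controller` and `Processor` classes below just illustrate the Article 17 obligation flowing downstream:

```python
class Processor:
    """Hypothetical downstream service (e.g. cloud storage) that
    acts as a data processor on the controller's behalf."""
    def __init__(self):
        self.records = {}

    def store(self, subject_id, data):
        self.records[subject_id] = data

    def erase(self, subject_id):
        self.records.pop(subject_id, None)


class Controller:
    """Hypothetical data controller that must honor erasure
    requests and relay them to every processor it uses."""
    def __init__(self, processors):
        self.records = {}
        self.processors = processors

    def store(self, subject_id, data):
        self.records[subject_id] = data
        for p in self.processors:
            p.store(subject_id, data)

    def erase(self, subject_id):
        # Article 17: erase locally, then instruct every processor
        # holding a copy to do the same "without undue delay".
        self.records.pop(subject_id, None)
        for p in self.processors:
            p.erase(subject_id)
```

The key design point is that a single subject request fans out to every copy; a controller that can't enumerate its processors can't comply.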

  3. U.S. Multinationals Need to Safeguard Data

It’s worth reiterating Andy’s previous blog post where he urges large U.S. multinationals that collect data from EU citizens to implement data security policies as if those servers were in the EU.

Known as “extraterritoriality”, this principle is addressed at the beginning of the proposed GDPR. For legally-minded types, here’s the actual language in all its bureaucratic beauty:

Cross-border flows of personal data…are necessary for the expansion of international trade and international cooperation….when personal data are transferred from the Union to controllers, processors or other recipients in third countries or to international organisations, the level of protection of individuals guaranteed in the Union by this Regulation should not be undermined.

There are some issues and complexities about how this will be enforced. But with the U.S. saying its data storage laws apply to data held in Irish servers, it seems only natural that the EU can make a similar type of claim about its citizens’ data held in the U.S.!

  4. How Much Will You Be Fined?

For serious violations (such as processing sensitive data without an individual’s consent or any other legal basis), regulators can impose penalties. There are differences between the EU Council’s version and the Parliament’s. The EU Council allows fines up to €1 million or 2% of the global annual turnover—i.e., revenue—of a company. The Parliament’s fines are far steeper, at up to €100 million or 5% of global annual turnover. These two bodies will have to work this out in the coming months.

The important point, regardless of the final rule, is that the GDPR penalties will amount to serious money for US multi-nationals.
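To see how serious, run the arithmetic. A quick sketch of the worst-case exposure under both draft penalty schemes (a toy calculation, assuming the "whichever is greater" reading of the drafts):

```python
def max_fine(turnover_eur, proposal="council"):
    """Worst-case fine under the two draft GDPR penalty schemes:
    Council:    up to EUR 1M  or 2% of global annual turnover;
    Parliament: up to EUR 100M or 5% of global annual turnover."""
    if proposal == "council":
        return max(1_000_000, 0.02 * turnover_eur)
    return max(100_000_000, 0.05 * turnover_eur)
```

For a multinational with €10 billion in annual turnover, that's €200 million under the Council's draft and €500 million under the Parliament's. Serious money either way.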

  5. Consider Hiring a Data Protection Officer

Important projects — yes, the proposed EU GDPR is a huge project — need owners. In the proposed EU GDPR, the Data Protection Officer (DPO) is supposed to be responsible for creating access controls, reducing risk, ensuring compliance, responding to requests, reporting breaches within 72 hours, and even creating a good data security policy.

Will you need to designate a DPO in your company? At this point, there are differences again in the proposals from the EU Council versus the one from the Parliament. The Council would like to make this a discretionary position, allowing each member state to decide whether it should be a mandatory requirement or not.

Our view: informally give someone in your company the powers of a DPO. It just makes good sense to have a manager or high-level executive as a focal point for EU rules.

How Varonis Helps with PCI DSS 3.1

The Payment Card Industry Data Security Standard (PCI-DSS) 3.1 is a set of requirements that govern how organizations manage credit card and other cardholder data. Many security professionals argue that DSS compliance is not only about passing an annual audit, but also about having programs in place for continual assessment, remediation, and monitoring.

To learn how Varonis solutions can help organizations meet PCI compliance and how we provide security to protect your organization inside and out, read our “How Varonis Helps with PCI DSS 3.1” compliance brief.

SSL and TLS 1.0 No Longer Acceptable for PCI Compliance

In April 2015, the PCI Council released version 3.1 of its Data Security Standard (DSS). While most of the changes in this minor release are clarifications, there is at least one significant update involving secure communication protocols. The Council has decided that SSL and TLS 1.0 can no longer be used after June 30, 2016.

The fine print about these two protocols can be found under DSS Requirement 2: “Do not use vendor-supplied defaults for system passwords and other security parameters”.

I guess the ancient Netscape-developed SSL (Secure Sockets Layer) and its successor TLS (Transport Layer Security) are considered other security parameters. (We’ve got an article dedicated to the difference between SSL & TLS, if you’re curious.)

RIP SSL

In any case, the Council is responding to the well-known POODLE exploit against SSL as well as NIST’s recent conclusions: as of April 2014, NIST proclaimed that SSL is not approved for use in protecting Federal information.

Unfortunately, you’ll need a brief history lesson to understand the role of TLS.

Developed in the 1990s by the IETF folks, TLS version 1.0 was based heavily on SSL and designed to solve compatibility issues—a single, non-proprietary security solution. Then a series of cryptographic improvements were made for TLS 1.1 and the current 1.2.

One key point is that TLS implementations support a downgrade negotiation process whereby the client and server can agree on the weaker SSL protocol even if they opened the exchange at the latest and greatest TLS 1.2.

Because of this downgrade mechanism, it was possible in theory to leverage the SSL-targeted POODLE attack to indirectly take a bite out of TLS by forcing servers to use the obsolete SSL.

Then in December 2014, security researchers discovered that a POODLE-type attack could be launched directly at TLS without negotiating a downgrade.

Overall, the subject gets complicated very quickly. Depending on whom you read, security pros blame either the browser companies, for choosing compatibility over security in their continuing support of SSL, or everyone, for implementing the TLS standard incorrectly.

There’s a good discussion of some of these issues in this Stack Exchange Q&A.

What Can Be Done?

The PCI Council says you must completely remove support for SSL 3.0 and TLS 1.0. In short: servers and clients should disable SSL and then preferably transition everything to TLS 1.2.

However, TLS 1.1 can be acceptable if configured properly. The Council points to a NIST publication that tells you how to do this configuration.
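In practice, disabling the deprecated protocols is a one-time configuration change in whatever TLS stack you use. As one illustration (not PCI guidance, just a sketch using Python's standard `ssl` module), a client context can explicitly refuse SSL and early TLS:

```python
import ssl

def make_pci_context():
    """Build a client-side TLS context that refuses the protocols
    deprecated by PCI DSS 3.1: all SSL versions plus TLS 1.0/1.1."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)  # secure defaults
    ctx.options |= ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3    # no SSL at all
    ctx.options |= ssl.OP_NO_TLSv1 | ssl.OP_NO_TLSv1_1  # TLS 1.2+ only
    return ctx
```

With this context, a handshake that can only negotiate SSL 3.0 or TLS 1.0 simply fails, so a POODLE-style downgrade has nothing to downgrade to. Server software (Apache, nginx, IIS) exposes equivalent protocol allow-lists in its configuration.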

Women in Data Security, Compliance, and Privacy You Should Follow on Twitter

There are many articles lately lamenting the lack of women in technology. I’m happy we’re having this discussion and that groups are working towards fixing the problem, but I’d like to shift the focus to the women that are in technology.

The good news is that female technologists in data security, compliance, and privacy do exist and are doing great things! For instance, it’s hard to imagine the current public discussion about privacy without the very accomplished Professor Sweeney or FTC Commissioner Brill.

Andy and I thought you might be curious to meet some more of these awesome technologists. We think you’ll be impressed with their multi-faceted resumes, creativity, power, and influence!

Here are our top picks:

  1. Sabrina Ross / @sabrinaross

To look after its rapidly expanding customer data, Apple recently appointed Sabrina Ross to its new privacy counsel position. Sabrina is an attorney who has practiced privacy law at several leading firms. She holds a CIPP/US, the first certification to be offered in information privacy law.

  2. Michelle Dennedy / @mdennedy

Michelle Dennedy is VP and Chief Privacy Officer at Intel Security. She has a long history in the tech world, serving as the VP of Security and Privacy Solutions at Oracle, and before that, the Chief Data Governance Officer at Sun Microsystems. She coauthored “The Privacy Engineer’s Manifesto: Getting from Policy to Code to QA to Value.”

  3. Saira Nayak / @SairaNayak

Tune, the mobile analytics company, recently picked Saira Nayak as its Chief Privacy Officer. Saira is an attorney specializing in privacy law. She was also the Vice Chair of the ABA’s Privacy and Information Security Committee. Saira holds a CIPP/US certification.

  4. Joanne Furtsch / @privacygeek

Joanne Furtsch is Director of Product Policy at Truste, a major digital privacy management company. When she started at Truste fifteen years ago, privacy was not thought of as a large issue for consumers. Of course, that’s all changed. She’s now focused on both policy and product work, and has a special interest in children’s data privacy.

  5. Alexandra Ross / @sharemindfully

Alexandra Ross looks at privacy a little bit differently than others. She is Senior Counsel at Paragon Legal where she works with many Bay Area tech companies. But as the founder of the Privacy Guru website, she also writes about the relationship between mindfulness and good data security practices.

  6. Ann Cavoukian / @AnnCavoukian

Ann Cavoukian is currently Executive Director of the Privacy and Big Data Institute. This former Ontario Information and Privacy Commissioner has been powerful and influential in convincing organizations and government agencies to treat people’s private data carefully. By the way, she also came up with Privacy by Design.

  7. Gabriela Zanfir / @gabrielazanfir

Gabriela Zanfir, PhD is a data protection professional and Legal Officer at the European Data Protection Supervisor. She tweets the most up-to-date info on both US and EU data privacy laws.

  8. Sarah Soliman / @BiometricsNerd

Sarah Soliman is a Lecturer and Visiting Scholar at West Virginia University. She is currently teaching Computer Engineering 496/696, a seminar series on privacy and technology. Her tweets are often about emerging technology trends.

  9. Rachel Haot / @rachelhaot

Rachel Haot, New York State’s Chief Digital Officer and Deputy Secretary for Technology, was named CDO of the Year for 2014 by the CDO Club. One of her primary goals as CDO is to improve how the government and the public engage online.

  10. Megan Smith / @smithmegan

Megan Smith is The White House’s Chief Technology Officer. This MIT-trained leader has decades of technical and entrepreneurial experience in Silicon Valley and will do a fine job with President Obama’s IT policy and initiatives!