Category Archives: Privacy

Are Wikileaks and ransomware the precursors to mass extortion?

Despite Julian Assange’s promise not to let Wikileaks’ “radical transparency” hurt innocent people, an investigation found that the whistleblowing site has published hundreds of sensitive records belonging to ordinary citizens, including medical files of rape victims and sick children.

The idea of having all your secrets exposed, as an individual or a business, can be terrifying. Whether you agree with Wikileaks or not, the world will be a very different place when nothing is safe. Imagine all your emails, health records, texts, and finances open for the world to see. Unfortunately, we may be closer to this than we think.

If ransomware has taught us one thing it’s that an overwhelming amount of important business and personal data isn’t sufficiently protected. Researcher Kevin Beaumont says he’s seeing around 4,000 new ransomware infections per hour. If it’s so easy for an intruder to encrypt data, what’s stopping cybercriminals from publishing it on the open web?

There are still a few hurdles for extortionware, but none of them are insurmountable:

1. Attackers would have to exfiltrate the data in order to expose it

Ransomware encrypts data in place without actually stealing it. Extortionware, by contrast, has to bypass traditional network monitoring tools built to detect unusual amounts of data leaving the network quickly. Of course, files could be siphoned off slowly, disguised as benign web or DNS traffic.

2. There is no central “wall of shame” repository like Wikileaks

If attackers teamed up to build a searchable public repository for extorted data, it’d make the threat of exposure feel more real and create a greater sense of urgency. Wikileaks is very persistent about reminding the public that the DNC and Sony emails are out in the open, and they make it simple for journalists and others to search the breached data and make noise about it.

3. Maybe ransomware pays better

Some suggest that the economics of ransomware are better than extortionware, which is why we haven’t seen it take off. On the other hand, how do you recover when copies of your files and emails are made public? Can the DNC truly recover? Payment might be the only option, and one big score could be worth hundreds of ransomware payments.  

So what's preventing ransomware authors from trying to do both? Unfortunately, not much. They could first encrypt the data, then try to exfiltrate it. If they get caught during exfiltration, it's not a big deal: they just pop up the ransom notification and claim the BTC.
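For a sense of what that looks like on the defender's side, here is a rough Python sketch (my own illustration, not any particular vendor's tool) of the kind of naive egress-volume check that extortionware would need to evade; the flow records and baselines are invented:

from collections import defaultdict

# Hypothetical flow records aggregated over a five-minute window: (host, bytes_out).
flows = [("10.0.0.5", 1_200_000), ("10.0.0.7", 9_800_000_000), ("10.0.0.9", 300_000)]

# "Normal" outbound volume per host for the same window, e.g. learned from
# previous weeks of traffic (these numbers are made up).
baseline = defaultdict(lambda: 50_000_000)

def flag_bulk_egress(flows, baseline, multiplier=10):
    """Flag hosts sending far more data out than their historical baseline."""
    totals = defaultdict(int)
    for host, bytes_out in flows:
        totals[host] += bytes_out
    return [host for host, total in totals.items() if total > multiplier * baseline[host]]

print(flag_bulk_egress(flows, baseline))  # ['10.0.0.7']

A slow exfiltration disguised as web or DNS traffic would sail right under a threshold like this, which is exactly the attacker's point.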

Ransomware has proven that organizations are definitely behind the curve when it comes to catching abnormal behavior inside their perimeters, particularly on file systems. I think the biggest lesson to take away from Wikileaks, ransomware, and extortionware is that we’re on the cusp of a world where unprotected files and emails will regularly hurt businesses, destroy privacy, and even jeopardize lives (I’m talking about hospitals that have suffered from cyberattacks like ransomware).

If it’s trivially easy for noisy cybercriminals that advertise their presence with ransom notes to penetrate and encrypt thousands of files at will, the only reasonable conclusion is that more subtle threats are secretly succeeding in a huge way.  We just haven’t realized it yet…except for the U.S. Office of Personnel Management. And Sony Pictures. And Mossack Fonseca. And the DNC. And…

[Podcast] Attorney and Data Scientist Bennett Borden, Part I: Data Analysis Techniques


After we heard Bennett Borden, a partner at the Washington law firm Drinker Biddle, speak at the CDO Summit about data science, privacy, and metadata, we knew we had to invite him back to continue the conversation.

His bio is quite interesting: in addition to being a litigator, he's also a data scientist and a sought-after speaker on legal tech issues. Bennett has written law journal articles on applying machine learning and document analysis to e-discovery and other legal matters.

In this first podcast of the series, Bennett discusses the discovery process and how data analysis techniques came to be used in the legal world. His unique insights on the value of the file system as a knowledge asset, along with his perspective as an attorney, made for a really interesting discussion.


Let’s Get More Serious About AR and Privacy

Augmented Reality (AR) is the technology of the moment. While some of us have already experienced the thrill of catching a Dragonite in Pokemon Go, AR is not all fun and games. In fact, depending on how an AR gadget is used, it can have significant privacy implications.

Privacy in Public

Augmented reality enhances real images with digital special effects — it's reality assisted by coding. These gadgets generally let you record a scene and then give you the option of sharing it on social media.

In the public space, you don’t have an expectation of privacy. As an amateur photographer myself, I was always told to be polite and ask permission of a stranger before taking a picture. If you’re curious, there’s a professional code of ethics that spells this out.

But doctors, bankers, lawyers, and some others are under real legal obligations when it comes to taking pictures of people and personal information.

Privacy at the Doctor’s

Suppose a doctor armed with an AR device (or a video recorder) films a waiting room full of patients. The doctor may not necessarily need consent in this case, but some states and hospital associations have their own laws and guidelines in this area.

If the doctor photographs a patient’s face for clinical purposes, usually the general HIPAA consent form would be sufficient.

But if the doctor were to use the video of the waiting room or clinical pictures for marketing purposes, HIPAA requires additional authorization.

In general, hospital employees and visitors (except when recording family members) need consent when photographing or videoing people in a hospital setting.

Mark my words: at some point a HIPAA case will be brought against hospital workers fooling around with Pokemon Go as they wander the medical corridors hunting for Vaporeons.

By the way, photos or videos showing faces are considered protected health information (PHI).

If they were then stored, they would have to be protected in the same way as HIPAA text identifiers. And an unauthorized exposure of this type of PHI would be considered a breach.

Outside the Hospital Setting

These AR gadgets can also be a privacy problem in business and legal settings. If an outsider or unauthorized person with AR glasses were recording confidential data, trade secrets, or PII on someone’s desk or on their screen, then that would be considered a security leak.

And relevant laws such as Gramm-Leach-Bliley and Sarbanes-Oxley would kick in.

A judge recently banned Pokemon Go in the courtroom, but this seems to be more a matter of courtroom etiquette. Another judge was somewhat upset — and tweeted about it — that a defense counsel was using AR glasses, but apparently nothing illegal was done.

It’s a little premature to become too worried about the privacy and security issues of AR gadgetry with so many more pressing security problems.

However, it’s not a bad idea for your company to come up with initial guidelines and policies on AR device usage by employees and visitors.

Top Minds in PCI Compliance

With countless data breaches hitting the front page, many are turning to the Payment Card Industry Data Security Standard (PCI DSS), an invaluable list of controls that guides, influences, and promotes security.

However, some merchants argue that these controls go too far, while security professionals think they don't go far enough.

So what do the experts think about PCI DSS? Here are five worth listening to:

1. Laura Johnson


As Director of Communications for PCI Security Standards Council (SSC), Laura Johnson is responsible for creating communication strategies that inform, educate, and help PCI SSC global stakeholders to participate in PCI programs and initiatives.

If you want to learn about PCI fundamentals, check out her blog. There, you’ll also find the latest and greatest on PCI DSS 3.2.

2. Anton Chuvakin / @anton_chuvakin


Not only is Anton Chuvakin an infosec expert, but he's also super knowledgeable about PCI DSS compliance, offering the best dos and don'ts to keep everyone's payment cards safe. Dr. Chuvakin is currently a Research Vice President on Gartner's Technical Professionals (GTP) Security and Risk Management Strategies team.

According to Mr. Chuvakin, many make the mistake of tackling PCI DSS tasks only right before a compliance assessment. In reality, you need to adhere to the standard at all times, since security doesn't start and end with PCI compliance.

By the way, get his book on PCI Compliance! You won’t regret it!

3. Nancy Rodriguez


Nancy Rodriguez is currently Enterprise PCI Governance Leader at Wells Fargo and responsible for coordinating and conducting PCI risk assessments.

Her contributions to the industry are wide and varied and started over 25 years ago. She has been a trusted advisor at Citi for all global PCI programs, a former member of the PCI SSC Board of Advisors, and a PCI Compliance Director at Philips.

See what others have to say about Rodriguez, here.

4. Troy Leach / @TroyLeach


Troy Leach is the Chief Technology Officer for the PCI Security Standards Council (SSC). He partners with industry leaders to develop standards and strategies that ensure payment card data and infrastructure are secure.

If you want to hear more from Mr. Leach and Mr. Chuvakin on what they have to say about the balance between PCI DSS compliance and security, check out this insightful interview. Also Mr. Leach regularly tweets out links to stories on bank hackers, robberies, and ATM thieves – it’ll feel like you’re watching an episode of Law and Order!

5. John Kindervag / @kindervag


Mr. Kindervag is a leading expert on wireless security, network security, security information management, and PCI data security. Currently he is Forrester’s Vice President and Principal Analyst serving security and risk professionals.

In this TechTarget article, Mr. Kindervag dispels the five biggest misunderstandings about PCI DSS.

Want to learn more?

Six Authentication Experts You Should Follow

Our recent ebook shows what’s wrong with current password-based authentication technology.

But luckily, there are a few leading experts that are shaping the future of the post-password world. Here are six people you should follow:


1. Lorrie Cranor @lorrietweet

Lorrie Cranor is a password researcher and is currently Chief Technologist at the US Federal Trade Commission. She is primarily responsible for advising the Commission on developing technology and policy matters.

Cranor has authored over 150 research papers on online privacy, usable security, and other topics. She has played a key role in building the usable privacy and security research community, having co-edited the seminal book Security and Usability and founded the Symposium On Usable Privacy and Security.

Cranor joined the FTC from Carnegie Mellon University, where she is a Professor of Computer Science and of Engineering and Public Policy, director of the CyLab Usable Privacy and Security Laboratory (CUPS), and co-director of the MSIT-Privacy Engineering master's program.

Check out Cranor's tips on how often you should change your password. Also, an oldie but a goodie: Cranor's dress made of commonly used passwords.


2. Johannes Ullrich @johullrich

Considered one of the 50 most powerful people in networking by Network World, Johannes Ullrich, Ph.D., is currently Dean of Research for the SANS Technology Institute.

A proponent of biometric authentication, Mr. Ullrich believes the field is finally gaining traction. He explained in a recent Wired article, "This field is very important because passwords definitely don't work." However, he also recognizes the barriers to widespread adoption of biometrics.

For instance, while Mr. Ullrich’s latest analysis of the iPhone’s fingerprint sensor was mostly positive, he revealed one big vulnerability: attackers could in theory lift a fingerprint smudge off a stolen iPhone’s glass and then fool the sensor’s imperfect scanner.

Yikes! Better get out my microfiber cleaning cloth.


3. Michelle Mazurek (website)

Michelle Mazurek is one of the researchers who brought us the news that a passphrase can be just as strong as a password with symbols and/or capital letters.

She is currently an Assistant Professor of Computer Science at the University of Maryland. Her expertise is in computer security, with an emphasis on human factors.

Her research focuses on understanding security and privacy behaviors and preferences by collecting real data from real users, and then building systems that support those behaviors and preferences.

Check out more of her work on passwords, here.


4. David Birch @dgwbirch

David Birch is a recognized thought leader in two things that still count even in the disruptive digital age: money and identity. In his latest book, Identity Is the New Money, he presents a unified theory of where these two essential aspects of modern life are heading.

His thinking on identity draws heavily on Doctor Who. Yes, the hero of the long-running BBC sci-fi show. Fans know that the Doctor carries psychic paper that always provides just the right information for alien bureaucrats.

Birch envisions something similar: a universal credential that would provide just the information that an online service, retailer, or government agency would require to process a transaction. Need to prove that you're 18 years old, have membership in an organization, or hold access rights to digital content? In Birch's view, the technology is now available—primarily biometrics, cryptography, and wireless—to accomplish all this without accessing a central database using passwords!


5. Mark Burnett @m8urnett

While some might think passwords are on the outs, realistically, we’ll probably continue to use them for years to come. Therefore, we’ll need the expertise of Perfect Passwords author Mark Burnett to help keep our data safe.

This veteran IT security expert regularly blogs on his own personal website and writes articles for sites such as Windows IT Pro and The Register. Also active on social media, he regularly offers ideas on how to improve passwords and authentication.

Check out this fascinating post on how Burnett experimented with his entire family to see if it was really possible to kill the password.


6. Karl Martin @KarlTheMartian

Karl Martin, who holds a Ph.D. in Electrical and Computer Engineering, is CEO and Founder of Nymi, maker of a wristband that analyzes your heartbeat to seamlessly authenticate you to your computer, smartphone, car, and much more. Skeptics concerned about their data and privacy shouldn't worry, according to Mr. Martin: he contends that all the data is encrypted at the hardware level and that the wristband was built with Privacy by Design.

In this Wired interview, Martin says that it's impossible for anyone to trace the signal emitted from the wristband back to the user unless that user opts in to allow such access – the default setting is opt-out.

In future versions, if Mr. Martin can get our computers, phones, and cars to talk to us with a voice like Scarlett Johansson's, our lives will be complete.

Summer Reminder: Cloud Storage Ain’t All That Private

I've written before about the lack of privacy protections for consumers storing content in the cloud. Looking back over my notes, I'd forgotten just how few cloud privacy rights we have in the real world. Using the typical terms of service (ToS) from some major providers as a benchmark, your rights to uploaded content can be summarized by this common expression (often used by one party in a relationship): "what's yours is mine."

I've become obsessed recently with applying some of the security and privacy ideas we talk about in this blog to my daily life. Like you, I use a few well-known cloud file storage services to store documents, pictures, and audio files — mostly of a quasi-public nature, but occasionally more personal content as well.

After doing some additional research for this post, I'm now seeing the cloud a little more ominously. And I'll start taking real action in 2016. You should too.

In many cases, you lose all your privacy rights by clicking on a typical cloud storage ToS. Effectively, the provider can do whatever it wants with your data, including sending it to outside parties.

It’s Not a Safety Deposit Box

As a refresher course, the Stored Communications Act (SCA) is the relevant legislation covering digital content held by a company. It was written in the late bronze age of computing—circa mid-1980s. The intention was to give the then new email and other computing technologies the same privacy protections as legacy mail.

We don’t expect our letter carriers to casually open and read our mail or the postal service to send us targeted advertising flyers based on whom we’ve written to.

Lawmakers at the time thought they could help spur electronic communications by elevating email and, to a lesser extent, online storage to the same legal status (particularly in terms of the Fourth Amendment) as the postal service and phone systems.

The SCA introduced the legal concept of electronic communications services (ECS) to cover email and messaging, and remote computing services (RCS) for online storage and data processing that are offered to the general public.

RCS is the one that’s most relevant to today’s cloud technology.

Why?

Any service in which digital content or communications are stored — and that includes web-based email services — is better classified under the Act as an RCS.

Remote Computing Services and Privacy

While the authors of the SCA may have thought they were turning cloud storage into the virtual equivalent of a sealed letter, the reality of ad-based business models has made cloud storage far less private.

The key problem is the Terms of Service agreements we robotically click on. Many major providers — no names, please — say in explicit terms that they can access the user's uploaded contents for advertising purposes, or else they include language saying the contents can be accessed at some point for some unspecified purpose.

From the SCA’s viewpoint, these ToS agreements mean that the cloud provider is not an RCS. The legalese in the Act states an RCS can access your data but only for reasons directly related to storage — say, copying to other sites in a cluster or archiving or some other IT function.

The great exception in the SCA for remote computing services can be found in § 2702(b).

As soon as the provider is allowed to take the data and use it for activities not related to storage — say, targeted advertising or other vaguely described reasons mentioned in the ToS — it's no longer in the RCS business as far as the SCA is concerned.

You then lose all your SCA privacy protections, since the statute protects your privacy only when the contents are held by a valid RCS. Most importantly, those protections include the requirement that the provider obtain the subscriber's authorization before divulging contents.

The core issue is that once you allow the cloud storage provider to peek into the data for other than pure IT reasons, you no longer have an “expectation of privacy”.

If you want to learn more about the SCA and privacy in the cloud era, read this surprisingly interesting legal paper that traces the history and legal reasoning behind the law.

Once the cloud provider falls outside the SCA, it doesn’t need the subscriber’s permission to do anything else it wants with the data.

Send some personal data mined from your documents to a data broker? No problem.

You also lose, not insignificantly, your Fourth Amendment rights: the cloud provider can simply send your data to the government when faced with an easy-to-obtain administrative subpoena — and it doesn't need to inform you!

And It Gets Worse

These days you don’t have to be a cloud provider to be able to implement storage and email services. Any company with an IT department can generally pull this off.

As a result, we as consumers see these services being offered by retail, travel, hospitality, and just about any large company that wants to “engage” with its customers.

But as far as the SCA is concerned, these companies are neither an RCS nor an ECS. Since their primary business function is outside of communications, they're not covered by the SCA at all.

So the next time you’re in your favorite chain espresso bar and hooked into their WiFi, be aware that when using any special storage or messaging features provided by their website, your content is not protected.

Also not covered by the SCA: university or school email systems. Since their email services are not offered to the general public, they're not considered an RCS or ECS.

Work email systems fall outside the SCA as well, though there are some interesting cases where an employee used a company-provided cell phone and was in fact protected by the SCA.

Privacy Options

Many cloud storage ToS agreements will say that the provider won't sell your data — both stored contents and PII — to third parties.

That’s a good start.

But then you have to look very carefully at how they can access the data: the less they say and the simpler the language in these agreements, the better.

My advice: anything that's written too vaguely will likely put the provider outside the SCA's coverage, and therefore your privacy will be compromised.

So what’s a privacy-minded consumer to do?

One option is to use one of the many services that encrypt the contents you upload to cloud storage, thereby protecting them from internal data scanners. This idea makes lots of sense, although it's an extra step.

This is the one that I’ll implement this year!
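If you're curious what that extra step looks like, here is a minimal sketch using Python's cryptography package; the filename is just an example, and a real setup needs careful key backup, since losing the key means losing the files:

from cryptography.fernet import Fernet  # pip install cryptography

# Generate a key once and store it somewhere safe, outside the cloud account.
key = Fernet.generate_key()
f = Fernet(key)

# Encrypt locally before uploading; the provider's scanners only ever see random bytes.
with open("tax_return_2015.pdf", "rb") as infile:
    ciphertext = f.encrypt(infile.read())

with open("tax_return_2015.pdf.enc", "wb") as outfile:
    outfile.write(ciphertext)

# Later, after pulling the blob back down:
# plaintext = Fernet(key).decrypt(ciphertext)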

Or, if you do upload contents in plain text to a cloud storage service, be very selective about what you put there.

And for employees of companies who are casually using cloud storage services to upload business documents?

Cease and desist!

Need a cloud storage alternative for your employees? Keep your privacy rights by cloud-enabling files with DatAnywhere.

Data Privacy US-Style: Our National Privacy Research Strategy

While the EU has been speeding ahead with its own digital privacy laws, the US has been taking its own steps. Did you know there’s a National Privacy Research Strategy (NPRS) white paper that lays out plans for federally funded research projects into data privacy?

Sure, the Federal Trade Commission has taken up the data privacy mantle in the US, bringing actions against social media, hotels, and data brokers. But there’s still more to do.

So you can think of the NPRS as a blueprint for the research and development phase of the US's own privacy initiatives. By the way, the US government spent about $80 million on privacy research efforts in 2014.

What’s the Plan?

I scanned through this 30-plus-page report looking for some blog-worthy points to share. I found a few.

First, the NPRS has interesting ideas on how to define privacy.

The authors of the paper, representing major federal agencies including the FTC, don't offer a firm definition; instead, they view privacy as characterized by four areas: subjects, data, actions, and context.

Essentially, consumers release data into a larger community (based on explicit or implicit rules about how the data will be used), on which certain actions are then taken – processing, analysis, sharing, and retention. The idea (see the diagram in the report) parallels our own Varonis approach of using metadata to provide context for user actions on file data. We approve of the NPRS approach!

The larger point is that privacy has a context, one that shapes our expectations and what we consider a privacy harm.

NPRS is partially focused on understanding our expectations in different contexts and ways to incentivize us to make better choices.

Second, the plan takes up the sexier matter of privacy engineering. In other words, research into building privacy widgets that software engineers can assemble to meet certain objectives.

I for one am waiting for a Privacy by Design (PbD) toolkit. We’ll see.

The third major leg of this initiative targets the transparency of data collection, sharing, and retention. As it stands now, you click "yes" affirming you've read the multi-page legalese in online privacy agreements, and then you're surprised when, at some point, you're spammed by alternative medical therapy companies.

The good news is that some are experimenting with “just in time” disclosures that provide bite-size nuggets of information at various points in the transaction — allowing you, potentially, to opt out.

More research needs to be undertaken, and NPRS calls for developing automated tools to watch personal data information flows and report back to consumers.

And this leads to another priority: ensuring that personal information flows meet agreed-upon privacy objectives.

Some of the fine print for this goal will sound familiar to Varonis-istas. NPRS suggests adding tags to personal data — essentially metadata — and processing the data so that consumer privacy preferences are then honored.

Of course, this would require privacy standardization and software technology that could quickly read the tags to see if the processing meets legal and regulatory standards. This is an important area of research in the NPRS.
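To make the idea concrete, here is a toy sketch (my own illustration, not anything taken from the NPRS report) of tagging a piece of personal data with the purposes the consumer agreed to and refusing any processing that isn't covered:

# A personal-data record tagged with the consumer's allowed purposes.
record = {
    "value": "jane.doe@example.com",
    "tags": {"type": "email", "allowed_purposes": {"billing", "support"}},
}

def may_process(record, purpose):
    """Allow processing only if the requested purpose matches the consumer's preferences."""
    return purpose in record["tags"]["allowed_purposes"]

print(may_process(record, "billing"))      # True
print(may_process(record, "advertising"))  # False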

In the Meantime, the FTC Needs You!

You're now fired up after reading this post and wondering whether you — white-hat researcher, academic, or industry pro — can have your voice on privacy heard by the US government.

You can!

The FTC is now calling for privacy papers and presentations for its second annual PrivacyCon, to be held in Washington in January 2017. You can check out the papers from last year's conference here.

If you do submit and plan to speak, let us know! We’d love to follow up with you.

Understanding Canada: Ontario’s New Medical Breach Notification Provision (and Other Canadian Data Privacy Facts)

Remember Canada’s profusion of data privacy laws?

The Personal Information Protection and Electronic Documents Act (PIPEDA) is the law that covers all commercial organizations across Canada.

Canadian federal government agencies, though, are under a different law known as the Privacy Act.

But then there are overriding laws at the provincial level.

If a Canadian province adopts substantially similar data privacy legislation to PIPEDA, then a local organization would instead fall under the provincial law.

To date, Alberta and British Columbia have adopted their own laws, each known as the Personal Information Protection Act (PIPA). Alors, Québec has its own data privacy law.

Adding to the plenitude of provincial privacy laws, Ontario, New Brunswick, and Newfoundland have adopted similar privacy legislation with regard to health records.

Ontario’s PHIPA

So that brings us to Ontario’s Personal Health Information Protection Act (PHIPA).  Recently, PHIPA was amended to include a breach notification provision.

If personal health information is “stolen or lost or if it is used or disclosed without authority”, a healthcare organization in Ontario will have to notify the consumer “at the first reasonable opportunity”, as well as the provincial government.

Alberta, by the way, has had a breach notification requirement for all personal data since 2010.

What About Breach Notification at the Federal Level?

In June 2015, the Digital Privacy Act amended PIPEDA to include breach notification. Organizations must notify affected individuals and the Privacy Commissioner of Canada when there is a breach that creates a “real risk of significant harm” to an individual.

Notice that the federal law has a risk threshold for exposed personal information, whereas the new Ontario law for health records doesn't. Alberta's breach notification requirement, by the way, has a risk threshold similar to PIPEDA's.

Confused by all this? Get a good Canadian privacy lawyer!

Don't be confused by how to detect and stop breaches! Learn more about Varonis DatAlert.

Password Security Tips for Very Busy People

If you needed another reminder that you shouldn't use the same password on multiple online sites, yesterday's news about the hacking of Mark Zuckerberg's Twitter and Pinterest accounts is your teachable moment. Mr. Z. was apparently as guilty of password laxness as the rest of us.

From what we know, the hackers worked from a list of cracked accounts that came from a 2012 breach at Linkedin. While an initial round of over six million passwords has been available for some time, it's now believed that the number of cracked passwords might be as high as 167 million. Based on the messages left by the hackers on his Twitter timeline, Mark's password may have been on that new list.

Sure, "if it happens to the best of us, it happens to the rest of us" is one takeaway. However, we can all do a better job of managing our passwords.

Am I Already a Victim?

Last week, I received an email from Linkedin saying my account was part of the new batch of cracked passwords. I changed my password, and I had already been pretty good (not perfect) about using several different passwords for the online accounts that I care about. But I now needed to revisit some of them as well.

1. Go to this site now and enter the email addresses that you most commonly use in setting up accounts. You’ll discover whether your password is known to hackers.

There's a service that will tell you if you have an account on a site that's been hacked and the passwords leaked. It's called have I been pwned?
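As an aside, the same site also lets you check passwords programmatically. The sketch below assumes its public Pwned Passwords range API and the Python requests package; thanks to the API's k-anonymity design, only the first five characters of the password's SHA-1 hash ever leave your machine (verify the endpoint and response format before relying on this):

import hashlib
import requests  # pip install requests

def pwned_count(password):
    """Return how many times a password shows up in known breach dumps."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get("https://api.pwnedpasswords.com/range/" + prefix, timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

print(pwned_count("password123"))  # a depressingly large number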

Besides informing me that Linkedin was one of my breached accounts, the ‘have I been pwned?’ service also alerted me that another account of mine had been compromised.

Yikes! Fortunately, it's one of my web accounts where I frequently change the password. So no problem.

You may not be so lucky. If this service tells you that you've been pwned, you'll have to immediately go to the affected web site, along with any other accounts that share that password, and change the passwords.

But hold on a sec before you do that.

Turn on Two-Factor Authentication

You shouldn't waste a crisis. Now is a good time to turn on two-factor authentication (TFA) if it's offered.

Linkedin does offer this feature. It works with your cell phone by sending an SMS text with a PIN. Or if you don’t use SMS, the service will call you instead.

So the next time you log on to Linkedin, you'll be asked for your password (the first factor) and for the PIN (the second factor), which is sent to your cell phone.

2. Before reading any more of this post, go to your Linkedin profile and turn on TFA – you'll find the setting under Privacy & Settings > Privacy > Security.

The next time there’s a data exposure, you won’t have to worry (as much) about your account being hacked. The hackers will fail the second factor test.

Besides Linkedin, Google, Twitter, Dropbox, Facebook, and Paypal have this feature as well. A Lifehacker article from 2013 lists additional web sites with TFA.

Google and others — notably Twitter, Linkedin, and Facebook — also offer their TFA as a service. This allows sites that haven’t implemented strong authentication to hook into, say, Google Authenticator, for instant TFA. Going forward, for those sites that support these TFA services, you can in theory have secure centralized authentication.
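For the curious, here is roughly how authenticator-app codes work under the hood: a sketch of TOTP (the scheme behind Google Authenticator), assuming the pyotp package. This is not Linkedin's SMS flow, just the shared-secret variant:

import pyotp  # pip install pyotp

# At enrollment, the site shares a secret with your phone (usually via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()  # the six-digit code your authenticator app would display
print("Current code:", code)
print("Server accepts it?", totp.verify(code))  # True, within the 30-second window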

3. It’s a good time now to consider using the authenticator feature of Google, Twitter, or Linkedin for all your accounts. As a first step, I would turn on TFA for your Google and Twitter accounts as well. It will also make these services more secure. Do it now!

Correct Battery Horse Staple and its Variants

The best way to stop the whole chain of events that forces you to change passwords on multiple accounts is to come up with an uncrackable password in the first place.

The correct-horse-battery-staple method is one way to generate high-entropy passwords. You pick four random words out of the dictionary and use them as your password. This classic xkcd comic explains it nicely.

(Comic: xkcd's "Password Strength")
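If you want a computer to do the random picking (humans are poor at choosing truly random words), here is a minimal sketch using Python's secrets module; the tiny word list is only illustrative, since real entropy depends on drawing from a large dictionary:

import secrets

# Toy word list for illustration; in practice load a big dictionary,
# e.g. /usr/share/dict/words or the EFF long word list.
WORDS = ["correct", "horse", "battery", "staple", "tiger", "savannah",
         "lantern", "pebble", "orbit", "window", "cactus", "maple"]

def passphrase(n_words=4):
    """Pick n words with a cryptographically secure random generator."""
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())  # e.g. "lantern orbit horse pebble"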

To remember the password, you devise a little story using the random words, thereby connecting the words together in your neurons. For example: "I showed the horse a battery with a staple on it, and the horse said 'correct.'"

Memory tricks where you connect stories to the actual words or ideas you want to recall are known as mnemonics.

I wrote about a variant of this technique where you make up a very simple one-sentence story and use the first letter of each word as your password.

For example, here's my one-sentence story: "Bob the tiger ambled across the savannah at 12 o'clock last Tuesday".

Unforgettable, right?

And the password that comes from this is: Bttaatsa12olT.
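Here is the same trick as a few lines of Python, just to show the mechanics (numbers stay whole; every other word contributes its first letter):

def sentence_to_password(sentence):
    """Build a password from the first letter of each word, keeping numbers intact."""
    return "".join(word if word.isdigit() else word[0] for word in sentence.split())

story = "Bob the tiger ambled across the savannah at 12 o'clock last Tuesday"
print(sentence_to_password(story))  # Bttaatsa12olT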

4. Now it's your turn. Make up a memorable one-sentence story that is long enough, at least 10 to 15 words, and try to use some punctuation and numbers. Take the first letters of those words and write the result down once — just to see it. Throw away the paper. This is your new password. If you've been pwned — see #1 — then use this as your new password. For anchor sites like Google, Twitter, or Linkedin, change your password there as well, since these can become your main authenticator in the future.

Multiple Site Paranoia

More recently, I've been using my own long one-sentence mnemonic as my high-entropy password — I'm very confident it's uncrackable.

Unfortunately, I didn’t use this technique with the Linkedin account I set up years ago, and hence I am one of the victims.

Can you use this same high-entropy password on multiple sites that are also guarded by TFA?

I'm not that paranoid, so I would. But experts will tell you that even TFA has man-in-the-middle vulnerabilities, and maybe somehow attackers could launch a brute-force dictionary attack against your hashed password…

If you really want to avoid having to change multiple accounts after you've been hacked, then you may want to customize the one-sentence story per site.

Here's what I came up with. Balancing complexity with convenience, I now make a small part of my one-sentence story variable — pick a subject, verb, or object to be the variable part.

And then use some letter in the website name (not the first, but say the second letter) as the starting letter of a new word to replace that subject, verb, or object.

If I want to reuse my “Bob the tiger” password, I could make the verb variable and use the second letter of the website name as the first letter of my new verb.

For Snapchat, my story might become: "Bob the tiger navigates across the savannah at 12 o'clock last Tuesday".

For Twitter, it could read: "Bob the tiger walked across the savannah at 12 o'clock last Tuesday".

You get the idea. You have a different password for each site.
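As a toy illustration of the per-site trick (the verb table is made up; the point is the technique, not the script):

SITE_VERBS = {"n": "navigates", "w": "walked", "m": "marched", "s": "strolled"}

def site_password(site):
    """Swap the verb based on the site's second letter, then take first letters."""
    verb = SITE_VERBS.get(site[1].lower(), "ambled")
    story = "Bob the tiger " + verb + " across the savannah at 12 o'clock last Tuesday"
    return "".join(w if w.isdigit() else w[0] for w in story.split())

print(site_password("snapchat"))  # 'n' -> navigates -> Bttnatsa12olT
print(site_password("twitter"))   # 'w' -> walked    -> Bttwatsa12olT

Needless to say, don't actually store your story in a script; this is only to show the derivation.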

Hackers are used to quickly trying cracked passwords on sites besides the original; when they fail to get in, they'll very likely move on to another victim.

Identity Theft Complaint? Tell the FTC!

Hackers steal information about you, and unfortunately it’s often months later that the company realizes there’s been a breach. But in the meantime, identity thieves use your PII to open new credit card accounts, file false tax returns, or commit medical insurance fraud, as well as make fraudulent charges on existing credit card accounts.

Like everyone else, we try to keep up with all the data security breaches. But there are some eye-popping stats about identity theft worth sharing.

This is a part of the breach response that doesn’t get nearly as much attention, but causes the most pain for consumers.

We’re All Paying a Price (but Some More Than Others)

According to a report from the US Department of Justice, over 17 million people were victims of identity theft in 2014.

Interesting fact: Only 1 in 10 victims knew the identity of the attacker.

(Chart source: Victims of Identity Theft, 2014, US Department of Justice)

This speaks, of course, to the fact that the identity thieves likely bought the PII from the attackers, who often act as black market retailers of identity.

While almost 50% of the losses were under $100, there was a small percentage that reported over $5000 in financial damage (see chart).

Fans of the 80-20 rule probably know already that a good chunk of the total financial cost of identity theft is concentrated in a very narrow band. They would be right!

Of the $11.3 million in total financial loss in 2014 for identity theft, almost $2.5 million could be found in those reporting losses of $1000 and over. Ouch.

Complaints to the FTC

If you’re a consumer, what should you do after you’ve discovered you’re a victim?

For many it’s just a question of contacting their bank or credit card company. The call center agent may even put a flag on the credit report with one of the national credit reporting agencies (CRAs), so the identity thieves can’t open new accounts.

But it can get more complicated when a false tax return, health insurance claim, or new credit accounts keep getting opened.

The Federal Trade Commission (FTC) is a powerful friend for consumers to turn to for help. In 2015, the agency received over 490,000 identity theft complaints through its web site — a 50% jump from the previous year.

Last week, the FTC launched a new resource, IdentityTheft.gov, to improve the process of reporting identity theft claims and to provide quicker aid for those who can't resolve the problem directly.

Although not completely automated, the site allows the FTC to notify law enforcement and the CRAs, thereby bringing these key players into the picture earlier.

I gave the site a quick trial, and it’s quite thorough and easy to use.

Companies are still lagging in their abilities to detect breaches. The 2015 Verizon DBIR tells us that 50% of breaches took one or more months to discover. It turns out that companies are very dependent on third parties — law enforcement, CRAs, etc. — to tell them there’s been an incident.

So the new FTC web page may help in narrowing the breach discovery delay as well.

Reduce your time to breach discovery. Learn more about Varonis DatAlert!

The IP Theft Puzzle, Part IV: Ambitious Insiders

In this last post of the series, I'd like to look at another type of insider. I've already written about the Entitled Independents. These guys fit our common perception of insiders: a disgruntled employee who doesn't receive, say, an expected bonus and then erases millions of your business's CRM records.

These insiders are solo acts. However, that’s not always the case with IP theft.

Take Me to Your Leader

The CMU CERT team discovered another insider variant in analyzing theft incidents in their database. Referred to as an Ambitious Leader, this insider is interested in taking the IP, along with a few employees, to another company — either one he will start by himself or a competing firm where he’ll lead a team.

The Leader will typically recruit others to help him gather the IP and then reward his helpers with jobs at the new company. These underlings are not disgruntled employees but rather have been swayed by a charismatic leader who promises them fame, free cafeteria food, and a cube with a view.

In pop culture, the disgruntled employee has been represented by Office Space-like characters. But for the Ambitious Leader, we’re now looking at white-collar professionals — attorneys, agents, financial traders, and, yes, high-powered tech types.

Does anyone remember early tech history and the Traitorous Eight? They were a group of young scientists and engineers working for the infamous (and rage-prone) William Shockley, hoping to commercialize the newly invented transistor. With all the IP embedded in their neurons, they fled from him to found the legendary Fairchild Semiconductor. And the rest is history.

This pattern of Silicon Valley superstar employees leaving to start new companies is still playing out to this day.

Easier to Spot

With higher-level professional employees, you especially need to have non-disclosure agreements in place. You don’t need to be told that, right?

As we pointed out in previous posts, these IP agreements, along with employee communications about data security and employee monitoring, can act as a deterrent. In theory, when potential insider thieves see that the company takes its IP seriously, they'll back down.

But with Entitled Independents, close to half took no precautions to hide their activities. Since they felt the IP was theirs, these mavericky insiders simply grabbed it. Of course, because there was so little planning, their spontaneous thefts were harder to spot in advance.

While they didn't have a lot of data points, the CMU CERT researchers noticed that the IP agreements did have an effect on the Ambitious Leaders: they made them more likely to employ deception!

Their deceptions then led to more observable indicators. The Leader, for example, might plan the attack by scoping out the relevant folders, and then moving or copying files in bits and pieces during off-hours.  It’s reasonable to assume they would rather not get caught early on before they have a head start on their venture and then, perhaps, gain the resources to fight any legal challenges to their IP.

CMU CERT also noticed that when the IP was segregated among different access groups, the Leader was forced into recruiting additional members. Makes sense: the Leader can't do it all and so needs help from new gang members who have the appropriate access rights.

Conclusions

These Ambitious Leaders are showing all the signs of the CEOs they are in the process of becoming: planning, personnel recruitment for their project, and complex execution. Their activities are far easier to detect when appropriate monitoring tools are in place. In several cases, CMU CERT noticed there can be “large downloads of information outside the patterns of normal behavior” by employees directed by a Leader who has sometimes already left the company.
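Here is a rough sketch of what catching that kind of pattern in file-access audit logs could look like. It's my own toy illustration with invented events and baselines, not Varonis's actual algorithm:

from collections import defaultdict
from datetime import datetime

# Hypothetical audit events: (user, timestamp, bytes_read).
events = [
    ("alice", datetime(2016, 9, 14, 14, 5), 2_000_000),
    ("bob",   datetime(2016, 9, 14, 2, 17), 900_000_000),
    ("bob",   datetime(2016, 9, 14, 3, 2),  750_000_000),
]

# Per-user daily baseline, e.g. a trailing 30-day average (numbers invented).
baseline = {"alice": 50_000_000, "bob": 40_000_000}

def suspicious_users(events, baseline, multiplier=5, night_hours=range(0, 6)):
    """Flag users reading far more than usual, or doing heavy reads during off-hours."""
    totals, night_totals = defaultdict(int), defaultdict(int)
    for user, ts, size in events:
        totals[user] += size
        if ts.hour in night_hours:
            night_totals[user] += size
    return [user for user, total in totals.items()
            if total > multiplier * baseline.get(user, 10_000_000)
            or night_totals[user] > baseline.get(user, 10_000_000)]

print(suspicious_users(events, baseline))  # ['bob']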

Where does this leave us?

I've been keeping score on these different insiders, and here's my list of the types of employees you're most likely to catch:

  • Of the Entitled Independents: those who didn’t create the IP directly and therefore will likely exhibit precursors—entering directories they normally don’t visit or exhibiting other rehearsal behaviors.
  • Of the Ambitious Leaders: those who need to recruit several employees who have access to the IP. Precursors could include unusual bursts of email and file copies between the potential employees and their pack leader.
  • Any insider who exhibits both technical and behavioral precursors. If they keep their eyes and ears open, IT security, with help from HR, can connect the dots between problems at work—abusive behaviors, unexplained absences—and system activities.

No, you won’t be able to completely prevent insider IP theft—they are your employees and they know where the goodies are. But what you can do is reduce the risks.

In my original insider threat series — reread it now! — I concluded with a few tips to help reduce the risks. It's worth repeating the key ones: enforce least-privilege access and separation of duties, require strong passwords, and focus more on security ahead of employee exits and terminations.

Finally, companies should inventory their IP resources — code, contracts, customer lists — on their file systems and make sure granular logging and auditing is in place.

In a worst case scenario, the logs can be used forensically later to prove theft had happened. But with the right software, it’s still possible to spot insider activities close to when they occur.

Don't make it too easy for that ambitious executive! Find out with Varonis DatAlert who's looking at your IP.