All posts by Michael Buckbee

[Podcast] When Hackers Behave Like Ghosts

Leave a review for our podcast & we'll send you a pack of infosec cards.

We’re a month away from Halloween, but when a police detective aptly described a hotel hacker as a ghost, I thought it was a really clever analogy! It’s hard to recreate and retrace an attacker’s steps when there are no fingerprints or evidence of forced entry.

Let’s start with your boarding pass. Before you toss it, make sure you shred it, especially the barcode, which can reveal your frequent flyer number, your name, and other PII. With that information, anyone can look up the passenger on the airline’s website and learn about future flights. Anyone with access to your printed boarding pass could do harm, and you would never know who the perpetrator was.

Next, let’s assume you arrive at your destination and the hotel is using a key system with a known vulnerability. In the past, when hackers revealed a vulnerability, companies stepped up to fix it. But when a system needs a fix and a software patch won’t do, how do we scale the fix to millions of hotel keys when the flaw is in hardware?

Other articles discussed:

Tool of the week: Gost: Build a local copy of Security Tracker. 

Panelists: Kilian Englert, Forrest Temple, Mike Buckbee

What You Can Learn About How to Secure an API from the FCC

Every day thousands of phishing emails are sent to unsuspecting people who are tricked into handing over their credentials for online services or directly bilked out of their money. Phishers go to great lengths to lean on the credibility of the organizations they’re impersonating. So what could be better than the ability to post a document onto an actual official website?

Recently, it came to light that, through the FCC’s public commenting system, malicious individuals could upload their own PDFs and image files to the FCC’s site. These files are indistinguishable from official communications that the agency might post – and the potential to abuse that vulnerability is incredibly high.

Just think of the implications of this capability: Phishers could use hosted PDFs to levy fake fines on people. Stock market manipulators could post fake FCC sanctions against public companies. Not surprisingly, trolls and mischief makers are already generating fake FCC memos mocking the agency.

Service Oriented Security Issues

It’s since been fixed, but when reported, the FCC’s commenting application was utilizing an internal API the agency had built. For large or complex applications, this style of building a single user-facing application out of smaller component apps is becoming more common. This type of Service Oriented Architecture (SOA) lets you build powerful applications more simply (you can imagine many applications the FCC might build that would require the ability to attach files), but it greatly increases the potential attack surface.

What’s an API Key Really?

To utilize their own FCC File Upload App, they issued themselves an API key to the application. While the term ‘key’ brings up associations with public key cryptography and the public and private keys used for SSH, an API key is really better thought of as just a password.

Typically they’re long, randomly generated strings of characters, like ‘20yhy3093-3asdfewer34093049-2394xcv02’, but there’s no reason an API key couldn’t be something like ‘this-is-my-super-secret-password-api-key’.

In many ways, it’s just as insecure as being able to log into an application using only a password (with no username).
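Since an API key functions as nothing more than a shared password, both generating and presenting one are trivial. A minimal Python sketch (the header name, key length, and usage are illustrative assumptions, not the FCC’s actual scheme):

```python
import secrets

# An API key is functionally just a long, randomly generated password.
# 32 bytes of entropy yields a ~43-character URL-safe string.
api_key = secrets.token_urlsafe(32)

# A typical client simply attaches the key to every request as a header.
# The header name "X-API-Key" is a common convention, not a standard.
headers = {"X-API-Key": api_key}

# Anyone who obtains this one value can impersonate the client completely:
# there is no username, no challenge-response, no signature.
```

Note that nothing about the key itself identifies who is using it, which is exactly why leaking one is equivalent to leaking a master password.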

API Exposure

The FCC’s original commenting application pre-assigned a URL for uploading from its file handling application. In the process, it also exposed the API key that had been assigned to the larger commenting app.

Hackers and mischief makers were then able to reuse that same API key (essentially impersonating the legitimate FCC app) to make calls that placed extremely questionable files on the FCC’s servers, among other actions.

API Issuance

Separate from the above leak of its own API key, it was also found that the FCC was issuing official API keys to anyone who asked. While the intent and possibilities here were great (developers could explore interesting aspects of FCC data or build their own clients to interact with FCC servers), the agency didn’t take into consideration all the different ways the system could be abused.

What Are the Lessons?

The internet has identified a huge vulnerability in the FCC’s site – leaving them open to a whole mess of abuse.

Properly authorizing API access is notoriously tricky and there are a few key areas that are paramount to maintaining a secure state:

Keep your API keys private. Use a secrets management store, or at the very least keep them out of source control and load them from environment variables.
Monitor API keys and manage who has access to them in the first place (pro-tip: most people shouldn’t have access)
Separate the publicly modifiable aspects of your service from official communications.
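As a concrete example of the first point, a service can refuse to start unless its key arrives from the environment rather than from source code. A hedged sketch (the variable name is an arbitrary choice, not a standard):

```python
import os

def load_api_key(var_name: str = "UPLOAD_API_KEY") -> str:
    """Read an API key from the environment instead of source code.

    The default variable name here is an illustrative choice.
    """
    key = os.environ.get(var_name)
    if not key:
        # Failing fast is better than silently running without credentials
        # or falling back to a hard-coded value.
        raise RuntimeError(f"{var_name} is not set; refusing to start.")
    return key
```

The key then lives in the deployment configuration, where access can be restricted, rather than in every developer’s checkout.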

Beyond API Keys

API key issuance as a means of authentication and authorization is in many ways on the way out. It’s too brittle. There isn’t an efficient way to indicate which modules of a larger system an API key should have access to (scoping), and often there isn’t a great way to revoke privileges once granted.

The general solution to these challenges is to use OAuth. We’d recommend checking out our Introduction to OAuth post to start learning more.

Wrapping Up

The intersection between an organization’s own services and the public is a highly valuable area that needs to be secured. A century ago it was forged telegrams and memos sent on stolen letterhead. A decade ago it was common for email servers (or even just HTML forms connected to services that send email) to be coerced into sending messages from a phisher that look as if they originate from the legitimate organization.

Today it’s API abuse. Just the latest method that horrible people on the Internet are using to defraud people. As you build and secure systems, focus on protecting your data, your users and your organization’s credibility.

Dedham Savings Uses Varonis to Build Trust with Their Customers and Stop Surprises

Case Study

Trust is important in all relationships, but even more so for a community bank like Dedham Savings. Charged with protecting the sensitive financial data of their customers, Dedham turned to Varonis.

During a Varonis risk assessment, they discovered a large amount of stale customer data on their network: data that they could immediately take off the network, reducing the potential severity of a data breach and instantly improving their security posture.

With default network tools, it’s hard for any IT team to deeply understand exactly what is happening on their network – and they don’t have insight into the behavior of users: what files they’re using, what they have access to but really shouldn’t, and how their access profile compares to others in their department.

As Jim Hanlon, Senior Vice President & CTO of Dedham Savings Bank, says: “The more customer data that you have on your network, the more risk there is. You really do need [something] like Varonis managing these vast amounts of data and to protect your customer data” – see how Dedham Savings uses Varonis in this quick one minute video.


How to Better Structure AWS S3 Security

If the new IT intern suggests that you install a publicly accessible web server on your core file server – you might suggest that they be fired.

If they give up on that, but instead decide to dump the reports issuing from your highly sensitive data warehouse jobs to your webserver – they’d definitely be fired.

But things aren’t always so clear in the brave new world of the cloud – where a service like Amazon’s Simple Storage Service (S3), which performs multiple, often overlapping roles in an application stack, is always one click away from exposing your sensitive files online.

Cloud storage services are now more than merely “a place to keep a file” – they often serve as both inputs and outputs to more elaborate chains of processes. The end result of all of this is the recent spate of high profile data breaches that have stemmed from S3 buckets.

An S3 Bucket Primer

S3 is one of the core services within AWS. Conceptually, it’s similar to an infinitely large file server at a remote site, or an FTP server that you’re connecting to from across the Internet.

However, S3 differs in a few fundamental ways that are important to understand: failing to do so will trip you up and may result in insecure configurations.

S3 is organized around the concepts of Buckets and Objects, instead of servers with files.

Buckets are the top-level organizational resource within S3 and are always assigned a DNS-addressable name (e.g., bucketname.s3.amazonaws.com).

This might trick you into thinking of a bucket like a server, where you might create multiple hierarchies within a shared folder for each group that needs access within your organization.

Here’s the thing:

  • There’s no cost difference between creating 1 bucket and a dozen.
  • By default you’re limited to 100 buckets, but getting more is as simple as making a support request.
  • There’s no performance difference between accessing 100 files in one bucket or 1 file in each of 100 different buckets.

With these facts in mind, we need to steal a concept from computer science class: the Single Responsibility Principle.

Within a network, a file server is a general resource typically used by lots of different departments for all kinds of work.

S3 allows you to devote a bucket to each individual application, group or even an individual user doing work. For security (and your sanity as a sysadmin) you want the usage of that bucket to be as narrowly aligned as possible and devoted to a single task.

A significant number of the unintentional data exposure incidents on S3 appear to have been caused by public-facing S3 buckets (for websites) that were also (likely accidentally) used to store sensitive information.

Sidebar: A warning sign is often found in bucket naming. Generic, general names like ‘mycompany’ or ‘data-store’ are asking for trouble. Ideally, you should establish a naming convention like companyname-environment-applicationname, where environment is production, staging, or development.
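That convention is easy to enforce with a small helper. A sketch, assuming the hypothetical company and application names shown:

```python
def bucket_name(company: str, environment: str, application: str) -> str:
    """Build an S3 bucket name following the company-environment-application
    convention. S3 bucket names must be globally unique and lowercase.
    """
    allowed = {"production", "staging", "development"}
    if environment not in allowed:
        raise ValueError(f"environment must be one of {sorted(allowed)}")
    return f"{company}-{environment}-{application}".lower()

# A generic name like "data-store" says nothing about scope;
# something like "acme-production-invoicing" makes misuse easy to spot.
```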

Bucket Policies

Policies are the top level permission structures for buckets. They define:

  • Who can access a bucket (what users/principals)
  • How they can access it (http only, using MFA)
  • Where they can access it from (a Virtual Private Cloud, specific IP)

Policies are defined in blocks of JSON that you can either write by hand or create with AWS’s Policy Generator.

Benefit #1 of organizing your buckets into narrowly defined roles: your bucket policies will be an order of magnitude simpler, since you won’t have to puzzle out conflicting policy statements or read through up to 20 KB of JSON to reason out the implications of a change.

Example Bucket Policy

  {
    "Version": "2012-10-17",
    "Id": "S3PolicyId1",
    "Statement": [
      {
        "Sid": "IPAllow",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": "arn:aws:s3:::examplebucket/*",
        "Condition": {
          "IpAddress": {"aws:SourceIp": ""},
          "NotIpAddress": {"aws:SourceIp": ""}
        }
      }
    ]
  }

Narrow buckets mean simpler policies, which in turn mean less likelihood of accidentally over permissioning users – and unintentionally creating a data breach.

Think of Bucket Policies as how the data should be treated.

IAM Policies in S3

Identity and Access Management (IAM) policies, on the other hand, are all about what rights a user or group has to a resource in AWS (not just S3).

You can apply both IAM and bucket policies simultaneously: for each access attempt, an explicit deny in either policy wins; otherwise, access is granted if either policy allows it.
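As a toy model of that evaluation rule (real AWS evaluation involves more layers, such as ACLs and organization-level controls, so treat this as a deliberate simplification):

```python
def is_allowed(iam_effects: set, bucket_effects: set) -> bool:
    """Simplified model of same-account S3 access evaluation.

    Each argument is the set of matching statement effects
    ("Allow" / "Deny") from that policy for a given request.
    """
    effects = iam_effects | bucket_effects
    if "Deny" in effects:
        # An explicit Deny in either policy always wins.
        return False
    # Otherwise either policy may grant access; no match means denied.
    return "Allow" in effects
```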

Further Reading: IAM Policies and Bucket Policies and ACLs! Oh, My!

VPC Endpoints in S3

A very powerful but often underutilized tool for securing AWS services is dividing applications into logically separated groups inside a Virtual Private Cloud (VPC).

On a grander scale than simply designating a bucket for a particular purpose, a VPC is a logically separated set of Amazon Web Services (including S3) that can be cordoned off for greater security.

Most of the large data breaches that have surfaced involving S3 have NOT been website related. Organizations are using a variety of AWS tools like Redshift and QuickSight to analyze massive amounts of (potentially) sensitive data: analysis, reports, and raw data that should not be placed on a public network.

The tool of choice to separate this is AWS’s Virtual Private Cloud. With a VPC you can define a set of services that are unable to connect to the general Internet and are only accessible via a VPN (IPsec) connection into the VPC.

Think of a VPN connected VPC as a separate section of your internal network – and resources like S3 within the VPC aren’t publicly addressable:

  • A bot scanning for open buckets won’t be able to see them.
  • Your new data scientist can’t accidentally leave a bucket publicly accessible because they were trying to download a report.
  • Day-to-day users of the services don’t have to figure out whether their actions will cause chaos and destruction.
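One way to enforce this at the bucket level is a policy that denies any request not arriving through a specific VPC endpoint, using the documented `aws:sourceVpce` condition key. A sketch that builds such a policy as a Python dictionary (the endpoint ID and bucket name are placeholders):

```python
import json

VPC_ENDPOINT_ID = "vpce-0example"  # hypothetical endpoint ID

# Deny all S3 actions on the bucket unless the request arrives
# through the designated VPC endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAllExceptVPCEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": "arn:aws:s3:::examplebucket/*",
        "Condition": {
            "StringNotEquals": {"aws:sourceVpce": VPC_ENDPOINT_ID}
        },
    }],
}

# Serialized form, ready to hand to a put-bucket-policy call.
policy_json = json.dumps(policy, indent=2)
```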

Enable S3 Logging

By default, S3 doesn’t maintain access logs for objects (files) in a bucket. On a per bucket basis you can enable access logs to write to another S3 bucket.
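With boto3, the logging configuration is a small structure passed to `put_bucket_logging`. A sketch with a hypothetical target bucket and prefix (the actual call is shown commented out, since it requires AWS credentials):

```python
# Target bucket and prefix are illustrative; the target bucket must
# already exist and permit log delivery.
logging_config = {
    "LoggingEnabled": {
        "TargetBucket": "acme-production-access-logs",  # hypothetical
        "TargetPrefix": "s3/examplebucket/",
    }
}

# With boto3 installed and credentials configured:
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_logging(
#     Bucket="examplebucket",
#     BucketLoggingStatus=logging_config,
# )
```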

Reviewing access logs periodically can give you great insight into whether your data is being accessed from an unknown location or, in the case of a data breach, how and when exfiltration occurred.

S3 stores raw logs to the logging bucket, where you can parse them with a number of different open source tools.

More recently, AWS launched Athena, a new service that lets you run SQL queries directly against structured data sources like JSON, CSV, and log files stored in S3.
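For example, once you’ve created an Athena table over your access logs, a query like the following can surface denied requests worth investigating. The table and column names follow the shape of AWS’s documented access-log schema, but treat them as assumptions in this sketch:

```python
# Athena query (as a Python string) over S3 server access logs.
# "s3_access_logs" and the column names are assumed to match the
# table you created over the logging bucket.
FIND_DENIED_REQUESTS = """
SELECT remoteip,
       requester,
       operation,
       key,
       COUNT(*) AS requests
FROM s3_access_logs
WHERE httpstatus = '403'
GROUP BY remoteip, requester, operation, key
ORDER BY requests DESC
LIMIT 20
"""
```

A spike of 403s from one IP is a cheap early signal that someone is probing your buckets.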

In Conclusion

AWS S3 is a powerful and extremely useful service that increases the capabilities of IT and application groups. Properly administered, it can be a safe and powerful tool for data storage and as the base of more complex applications.

Steps to keep your data secure on AWS S3:

  1. Review which of your S3 buckets are open to the public internet
  2. Split S3 Buckets to 1 per application or module
  3. Separate concerns with VPC S3 Endpoints
  4. Log everything
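Step 1 can be partially automated. A hedged sketch using the boto3 S3 client interface (written against an injected client so it can be exercised without AWS credentials; it checks only public access block settings, so ACLs and bucket policies still deserve their own review):

```python
def find_public_buckets(s3_client):
    """Flag buckets that lack a fully enabled public access block.

    Expects an object with the boto3 S3 client interface
    (list_buckets, get_public_access_block).
    """
    flagged = []
    for bucket in s3_client.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            config = s3_client.get_public_access_block(Bucket=name)
            block = config["PublicAccessBlockConfiguration"]
            fully_blocked = all(block.values())
        except Exception:
            # boto3 raises ClientError when no configuration exists;
            # treat that as not blocked.
            fully_blocked = False
        if not fully_blocked:
            flagged.append(name)
    return flagged
```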

[Podcast] Blackhat Briefings That Will Add to Your Tool Belt

We’re counting down to Blackhat USA, one of the world’s leading information security conferences, where we’ll learn about the latest research, development and trends.

We’ll also be at booth #965 handing out fabulous fidget spinners and showcasing all of our solutions that will help you protect your data from insider threats and cyberattacks.

In this podcast episode, we discussed not only sessions you should attend, but also questions to ask that will help you reduce risk. We even covered why it isn’t wise to rely only on research methods like honeypots to save you from insider threats or other attacks.

Tool of the Week: Virtual Private Cloud (VPC)

Panelists: Kris Keyser, Kilian Englert, Mike Buckbee

[Podcast] Cyber Threats Are Evolving and So Must Two-Factor

Finally, after years of advocacy, many popular web services have adopted two-factor authentication (2FA) as a default security measure. Unfortunately, as you might suspect, attackers have figured out workarounds. For instance, attackers can intercept your PIN in a password-reset man-in-the-middle attack.

So what should we do now? As the industry moves beyond 2FA, the good news is that three-factor authentication is not on the shortlist as a replacement. Google’s identity systems manager, Mark Risher said, “One of the truths we’ve found is that people won’t accept more security than they think they need.”

There has been talk of using biometrics as a promising form of authentication. In the meantime, know that using 2FA is more secure than using just a password.

Other Articles Discussed:

Panelists: Rob Sobers, Mike Buckbee, Kilian Englert

[Podcast] Budgets and Ethics

Right now, many companies are planning 2018’s budget. As always, it is a challenge to secure enough funds to help with IT’s growing responsibilities. Whether you’re a nonprofit, small startup or a large enterprise, you’ll be asked to stretch every dollar. In this week’s podcast, we discussed the challenges a young sysadmin volunteer might face when tasked with setting up the IT infrastructure for a nonprofit.

And for a budget interlude, I asked the panelists about the growing suggestion for engineers to take philosophy classes to help with ethics related questions.

Other Articles Discussed:

Tool of the week: honeyλ, a simple, serverless application designed to create and monitor URL {honey}tokens, on top of AWS Lambda and Amazon API Gateway

Panelists: Kilian Englert, Mike Thompson, Mike Buckbee

[Podcast] Is Data Worth More Than Money?

When it comes to infosecurity, we often equate data with money. And rightfully so. After all, data is valuable, not to mention the human hours devoted to safeguarding an organization’s data.

However, when a well-orchestrated attack is designed to destroy an organization’s data rather than to extract financial gain, we wondered whether data is really worth more than money.

Sure, you can quantify the cost of tools, equipment, and hours spent protecting data, but what about intellectual and emotional labor? How do we assign proper value to the creative essence and spirit that makes our data valuable?

Other Articles Discussed:

Panelists: Mike Buckbee, Kilian Englert, Mike Thompson

Data Security Software: Platforms Over Tools

As recent security incidents like NotPetya, Wannacry and the near daily data breach reports have shown, data security isn’t getting easier. And it’s not because IT groups aren’t putting in the work.

IT and Infosec Is Just Fundamentally Getting More Complex.

New internal and external services are being added constantly, and each service requires management. These days you need everything from data classification to auditing to risk management to archiving in order to stay compliant and secure. These applications and services are challenging enough to run smoothly, and often are accompanied by a dump truck full of wildly complex regulatory or legislative requirements like GDPR.

So how do you do more with less? How do you start to deal with these issues at a fundamental level instead of playing Whack-A-Mole with the security of each new product? You need a place to stand.

Which Is Why We’ve Created the Varonis Data Security Platform.

The Varonis Data Security platform solves these challenges with a unified, integrated solution:

  • Monitor and analyze file activity and user behavior
  • Discover overexposed sensitive data
  • Detect unusual behavior on your network (no matter the user or application)
  • Easily model and clean up permissions on your network
  • Detect unneeded access and unused data that’s making you vulnerable

All of which lets you do things like:

  • Investigate suspicious users
  • Automatically react to anything behaving like ransomware
  • Alert on permissions changes or unusual access to sensitive data
  • Automate reporting and auditing

See how all of this comes together in a quick three-minute overview of the Varonis Data Security Platform:


[Podcast] In the Dark about Our Data

It’s been reported that 85% of businesses are in the dark about their data, meaning they are unsure what types of data they have, where it resides, who has access to it, who owns it, or how to derive business value from it. Why is this a problem? First, the EU’s consumer data regulation, the GDPR, is just a year away, and if you’re in the dark about your organization’s data, meeting the regulation will be a challenge. Even for organizations outside the EU, if you process EU citizens’ personal data, the GDPR’s rules will apply to you.

Second, when you encounter attacks such as ransomware, it’s a bit of a mess to clean up. You’ll have to figure out which users were infected, if anything else got encrypted, when the attack started, and how to prevent it from happening in the future.

However, worse than a ransomware attack are attacks that don’t notify you, like insider threats. These threats don’t present you with a ransomware-style pop-up window telling you you’ve been hacked.

It’s probably better to be the company that got scared into implementing some internal controls, rather than the one that didn’t bother and then went out of business because all its customer data and trade secrets ended up in the public domain.

In short, it just makes good business and security sense to know where your data resides.

Other articles discussed:

Tool of the week: DNSTwist

Panelists: Mike Thompson, Kilian Englert, Mike Buckbee

[Podcast] What Does the GDPR Mean for Countries Outside the EU?

The short answer is: if your organization stores, processes, or shares EU citizens’ personal data, the EU General Data Protection Regulation (GDPR) rules will apply to you.

In a recent survey, 94% of large American companies said they possess EU customer data that will fall under the regulation, while only 60% of respondents have plans in place to respond to the impact the GDPR will have on how they handle customer data.

Yes, GDPR isn’t light reading, but in this podcast we’ve found a way to simplify the GDPR’s key requirements so that you’ll get a high level sense of what you’ll need to do to become compliant.

We also discuss the promise and challenges of what GDPR can bring – changes to how consumers relate to data as well as how IT will manage consumer data.

After the podcast, you might want to check out the free 7-part video course we developed with Troy Hunt on the new European General Data Protection Regulation. It covers: What are the requirements? Who will be affected? How does it help protect personal data?