Category Archives: Compliance & Regulation

The Right to Be Forgotten and AI

One (of the many) confusing aspects of the EU General Data Protection Regulation (GDPR) is its “right to be forgotten”. It’s related to the right to erasure but takes in far more ground. The right to have your personal data deleted means that data held by the data controller must be removed on the consumer’s request. The right to be forgotten refers more specifically to personal data the controller has made public on the Intertoobz.

Simple, right?

It ain’t ever that easy.

I came across a paper on this subject that takes a deeper look at the legal and technical issues around erasure and “forgetting”. We learn from the authors that deleting means something different when it comes to big data and artificial intelligence versus data held in a file system.

This paper contains great background on the recent history of the right to be forgotten, which is well worth your time.

Brief Summary of a Summary

Way back in 2010, a Mr. Costeja González brought a complaint against Google and a Spanish newspaper to Spain’s national Data Protection Authority (DPA). He noticed that when he entered his name into Google, the search results displayed a link to a newspaper article about a property sale made by Mr. González to resolve his personal debts.

The Spanish DPA dismissed the complaint against the newspaper — it had a legal obligation to publish the property sale. However, the DPA allowed the complaint against Google to stand.

Google’s argument was that since it didn’t have a true presence in Spain – no physical servers in Spain held the data – and the data was processed outside the EU, it wasn’t subject to the EU Data Protection Directive (DPD).

Ultimately, in its 2014 right to be forgotten ruling, the EU’s highest judicial body, the Court of Justice, said that search engine companies are controllers; that the DPD applies to companies that market their services in the EU (regardless of physical presence); and that consumers have the right to ask search engine companies to remove links that reference their personal information.

With the GDPR taking effect in May 2018 and replacing the DPD, the right to be forgotten is now enshrined in Article 17, and the extraterritorial scope of the decision can be found in Article 3.

However, what’s interesting about this case is that the original information about Mr. González was never deleted — it can still be found if you search the online version of the newspaper.

So the “forgetting” part means, in practical terms, that a key or link to the personal information has been erased, but not the data itself.

Hold this thought.

Artificial Intelligence Is Like a Mini-Google

The second half of this paper starts with a very good computer science 101 look at what happens when data is deleted in software. For non-technical people, this part will be eye-opening.

Technical types know that when an app is done with a data object and its memory is erased or “freed”, the data does not in fact magically disappear. Instead, the memory chunk is put on a “linked list” that will eventually be processed and then returned to the pool of available memory to be reused.

When you delete data, it’s actually put on a “take out the garbage” list.

This procedure is known as garbage collection, and it allows performance-sensitive software to delay the CPU-intensive data disposal to a later point when the app is not as busy.

Machine learning uses large data sets to train the software and derive decision-making rules. The software is continually allocating and deleting data, often personal data, which at any given moment might be sitting on a garbage collection queue waiting to be disposed of.
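
To make that concrete, here is a toy Python sketch (not how any particular machine learning framework manages memory) in which data the app has “deleted” survives in memory until the garbage collector actually runs:

import gc
import weakref

class Record:
    """Toy stand-in for a chunk of personal data held by an app."""
    def __init__(self, name):
        self.name = name
        self.owner = self  # a reference cycle keeps the object alive after "deletion"

rec = Record("Mario Costeja González")
probe = weakref.ref(rec)      # lets us check whether the object still exists

del rec                       # the app "deletes" the personal data...
print(probe() is not None)    # True: it is still in memory, awaiting collection

gc.collect()                  # the deferred "take out the garbage" step
print(probe() is None)        # True: only now is the memory actually reclaimed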

What does it mean then to implement right to be forgotten in an AI or big data app?

The authors of the paper make the point that eliminating a single data point is not likely to affect the AI software’s rules. Fair enough. But certainly if tens or hundreds of thousands of people exercise their right to erasure under the GDPR, then you’d expect some of these rules to shift.

They also note that data can be disguised through certain anonymity techniques or pseudonymization as a way to avoid storing identifiable data, thereby getting around the right to be forgotten. Some of these anonymity techniques involve adding “noise”, which may affect the accuracy of the rules.
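
As a rough illustration only (not a recommendation for any particular anonymization scheme), here is what pseudonymization and noise injection can look like in a few lines of Python; the salt and the noise scale are made-up parameters:

import hashlib
import os
import random

# Pseudonymization: replace the identifier with a keyed (salted) hash.
# Destroy the salt later and the pseudonyms can no longer be linked back
# to real people -- one way to avoid holding directly identifiable data.
salt = os.urandom(16)

def pseudonymize(name: str) -> str:
    return hashlib.sha256(salt + name.encode("utf-8")).hexdigest()

# Noise injection: perturb a value before it is stored or used for training,
# trading a little accuracy in the learned rules for privacy.
def add_noise(value: float, scale: float = 1.0) -> float:
    return value + random.gauss(0, scale)

print(pseudonymize("Mario Costeja González"))
print(add_noise(52000.0, scale=500.0))   # e.g. an income figure, slightly blurred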

This leads to an approach to implementing the right to be forgotten for AI that we alluded to above: perhaps one way to forget is to make it impossible to access the original data!

A garbage collection process does this by putting the memory in a separate queue that makes it unavailable to the rest of the software—the software’s “handle” to the memory no longer grants access.  Google does the same thing by removing the website URL from its internal index.

In both cases, the data is still there but effectively unavailable.

The Memory Key

The underlying idea behind AI forgetting is that you remove or delete the key that allows access to the data.
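
One common way to put that idea into practice is crypto-shredding: encrypt each person’s data under its own key, and “forget” by destroying the key. Here is a minimal sketch using the Python cryptography package; the key vault and subject IDs are invented for the example:

from cryptography.fernet import Fernet   # pip install cryptography

key_vault = {}    # per-subject encryption keys -- the "memory keys"
data_store = {}   # ciphertext can sit in databases, backups, training sets...

def store(subject_id: str, personal_data: bytes) -> None:
    key_vault[subject_id] = Fernet.generate_key()
    data_store[subject_id] = Fernet(key_vault[subject_id]).encrypt(personal_data)

def forget(subject_id: str) -> None:
    # The ciphertext stays wherever it is, but without the key it is
    # effectively unavailable -- the handle no longer grants access.
    del key_vault[subject_id]

store("gonzalez", b"property sale made to resolve personal debts")
forget("gonzalez")
# data_store["gonzalez"] still exists, but nothing can decrypt it anymore.

The ciphertext can keep sitting in backups, logs, and training sets; without the key, it is effectively just noise.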

This paper ends by suggesting that we’ll need to explore more practical (and economic) ways to handle right to be forgotten for big data apps.

Losing the key is one idea. There are additional methods that can be used: for example, breaking the personal data up into smaller sets (or siloing them) so that it is impossible or extremely difficult to re-identify anyone from each separate set.

Sure, removing personal data from a file system is not necessarily easy, but it’s certainly solvable with the right products!

Agreed: AI forgetting involves additional complexity and solutions to the problem will differ from file deletion. It’s possible we’ll see some new erasure-like technologies in the AI area as well.

In the meantime, we’ll likely receive more guidance from EU regulators on what it means to forget for big data applications. We’ll keep you posted!

New York State Cyber Regulations Get Real

We wrote about NY’s innovative cyber regulations earlier this year. For those who don’t remember, the NY State Department of Financial Services (NYSDFS) launched GDPR-like cyber security regulations for its massive financial industry, including requirements for 72-hour breach reporting, limited data retention, and designation of a chief information security officer.

As legal experts have noted, New York leads the rest of the states in its tough data security rules for banks, insurance, and investment companies. And after Equifax, it has proposed extending these rules to credit reporting agencies that operate in the state.

Transition Period Has Ended

The NYS rules are very process-oriented and similar to the GDPR in requiring documented security policies, response planning, and assessments – basically you have to be able to “show your work”.

However, unlike the GDPR, there are also specific technical requirements that have to be complied with: for example, pen testing, multi-factor authentication, and limiting access privileges.

Anyway, the cyber regulations went into effect on March 1, 2017, but most of the rules have a 180-day grace period. That period ended in late August.

There are exceptions.

The NYSDFS extended some of the more technical requirements by up to one year – to March 1, 2018 – for example, performing pen testing and vulnerability assessments and conducting periodic risk assessments. Implementing audit trails and application-level security gets up to 18 months.

So NY financial companies have a little extra time for the nittier rules.

However, the end of the grace period does mean that the 72-hour breach reporting rule is now in effect!

Varonis Can Help

I’d like to add that the NYSDFS rules on breach reporting cover a far broader type of cyber event than any other state. Typically, state breach rules have language that requires notification for the exposure of certain types of PII data — see our totally awesome graphics to instantly visualize this.

While these NY rules protect similar types of PII as other states – social security and credit card numbers as well as online identifiers – financial companies in New York will also have to report on cyber events, defined as follows:

Cybersecurity Event means any act or attempt, successful or unsuccessful, to gain unauthorized access to, disrupt or misuse an Information System or information stored on such Information System.

Note the language: any attempt to gain access to, or to disrupt or misuse, a system. This encompasses not only standard data exposures where personal data is stolen, but also denial-of-service (DoS), ransomware, and any kind of post-exploitation where system tools are leveraged and misused.

Based on my reading and a close look at the state’s FAQ, financial companies will have to notify NY regulators within 72 hours of data exposures involving PII and of cybersecurity events “that have a reasonable likelihood of materially harming” normal operations – see Section 500.17.

With data attacks now becoming the new normal, this tough notification rule — first in the US! — will likely require IT departments to put in significant technical effort to meet this tight timeline.

Varonis can help NY financial companies.

Ask to see a demo of our DatAlert product and get right with NYSDFS!

 

The Equifax Breach and Protecting Your Online Data

As we all know by now, the Equifax breach exposed the credit reports of over 140 million Americans. What’s in these reports? They include the credit histories of consumers along with their social security numbers. That makes this breach particularly painful.

The breach has also raised the profile of the somewhat mysterious big three national credit reporting agencies or NCRAs — Experian and TransUnion are the other two. Lenders use NCRAs to help them decide whether to approve credit for auto loans, mortgages, home improvement, and of course new credit cards.

NCRAs Are Supposed to Protect Against Identity Theft

Let’s say the Equifax hackers go into phase two of their business plan, likely selling personally identifiable information (PII) to cyber gangs and others who will directly open up fake accounts. Of course, the stolen social security numbers make this particularly easy to attempt.

The bank or other lender extending credit will normally check the identity and creditworthiness of the person by contacting the NCRAs, who under red flag rules are supposed to help lenders spot identity theft. Oftentimes (but not always), the cyber thieves will use a different address than the victim’s when applying for credit, and this anomaly should be noticed by the NCRAs, who have the real home address.

The credit report should then, in theory, be flagged so future lenders will be on alert as well, and the financial company originally asking for the report is also warned of possible identity theft on the credit application.

I am not the first to observe that the irony level of the Equifax breach is in the red-zone – like at 11 or 12. The NCRAs are entrusted with our most personal financial data, and they’re the ones who are supposed to protect consumers against identity theft.

Unfortunately, an NCRA hacking is not a new phenomenon, and the big three have even been the target of class action suits brought by affected consumers under the Fair Credit Reporting Act (FCRA). To no one’s surprise, the lawsuits have already begun for the Equifax breach – the last count puts it at 23.

What Consumers Should Do

While we hope that red flags have been already placed on affected accounts, it’s probably best to take matters into your own hands. The FTC, the agency in charge of enforcing the FCRA, recommends a few action steps.

At a minimum, you should go to the Equifax link and see if your social security number is one that’s been exposed. If so, you can get free credit monitoring for a year — in short, you’ll know if someone tries to request credit in your name.

(Yes, I just did it myself, and discovered my number might have been compromised. I went ahead and subscribed for the credit monitoring.)

If you’re really paranoid, you can go a step further and put a credit freeze on your credit report. This restricts access to the credit report held by the NCRAs and, in theory, should prevent lenders from creating new accounts. Normally there would be a charge, but Equifax graciously arranged to freeze the reports for free after outraged consumers protested.

None of these measures are foolproof, and clever attackers and thieves can get around these protections.

Online Protection With The Troy Hunt Course

Besides social security numbers, the hackers hauled away a lot of PII – names, addresses, and likely bank and credit card companies. As far as I can tell, passwords were not taken by the Equifax hackers.

Obviously, social security numbers are the most monetizable, but the other PII is still useful, particularly in phishing attacks. Readers of this blog know how we feel on the subject: any online information gained by hackers can and will be used against you!

So we should all be on alert for phish mails from what may appear to be our banks and other financial companies, and we should be wary of other scams.

That’s where the indispensable security expert Troy Hunt can help us all! His Internet Security Basics video course is a favorite of ours because it breaks down online security into a series of simple lessons that non-technical folks can quickly understand and take action on.

I draw your attention to Lesson Three, “How to know when to trust a website”, which will be incredibly helpful in avoiding the coming wave of online scams.

Let’s not waste a crisis: it’s probably also a good time to review and change online passwords and understand what makes for good passwords. Troy’s Lesson Two, “How to Choose a Good Password”, will bring you up to speed on passphrases and password managers.

The Equifax breach is as bad as it gets, but let’s not make it worse by letting cyber thieves exploit us again through lame phishing emails.

Learn how to protect yourself online with security pro Troy Hunt’s five-part course.

New Post-Brexit UK Data Law: Long Live the GDPR!

The UK is leaving the EU to avoid the bureaucracy from Brussels, which includes having to comply with the General Data Protection Regulation (GDPR). So far, so good. However, since the EU is so important to its economy, the UK’s local data laws will in effect have to be held to a very high standard — basically, GDPR-like — or else the EU won’t allow data transfers.

Then there is the GDPR’s new principle of extra-territoriality, or territorial scope — something we’ve yakked a lot about in the blog — which means non-EU countries will still have to deal with the GDPR.

Finally, as a practical matter the GDPR will kick in before the UK formally exits the EU. So the UK will be under the GDPR for at least a year or more no matter what.

Greater legal minds than mine have already commented on all this craziness.

The UK government looked at the situation, and decided to bite the bullet, or more appropriately, eat the cold porridge.

Last week, the UK released a statement of intent that commits the government to scrapping their existing law, the Data Protection Act, and replacing it with a new Data Protection Bill.

Looks Familiar

This document is very clear about what the new UK data law will look like. Or as they say:

Bringing EU law into our domestic law will ensure that we help to prepare the UK for the future after we have left the EU. The EU General Data Protection Regulation (GDPR) and the Data Protection Law Enforcement Directive (DPLED) have been developed to allow people to be sure they are in control of their personal information while continuing to allow businesses to develop innovative digital services without the chilling effect of over-regulation. Implementation will be done in a way that as far as possible preserves the concepts of the Data Protection Act to ensure that the transition for all is as smooth as possible, while complying with the GDPR and DPLED in full.

In effect, the plan is to have a law that will mirror the GDPR, allowing UK companies to continue to do business as usual.

The Bill will include the GDPR’s new privacy rights for individuals: the “right to be forgotten”, data portability, and right to personal data access. And it will contain the GDPR’s obligations for controllers to report breaches, conduct impact assessments involving sensitive data, and designate data protection officers.

What about the GDPR’s considerable fines?

The UK has also gone along with the EU data law’s tiered structure – fines of up to 4% of global turnover (revenue).

Her Majesty’s Government may be leaving the EU, but EU laws for data privacy and security will remain. The GDPR is dead, long live the GDPR!

GDPR Resources

Of course, the new Bill will have its own articles, with a different wording and numbering scheme than the GDPR. And legal experts will no doubt find other differences — we’ll have to wait for the new law. Having said that, our considerable resources on the EU data law remain relevant.

For UK companies reading this post and looking for a good overview, here are three links that should help:

 

For a deeper dive into the GDPR, we offer for your edification these two resources:

And feel free to search the IOS blog and explore the GDPR on your own!

Introducing Our New DataPrivilege API and a Preview of Our Upcoming GDPR Patterns

GDPR Patterns Preview

We’re less than a year out from the EU General Data Protection Regulation (GDPR) becoming law, and we’re hearing that our customers are facing more pressure than ever to get their data security policies ready for the regulation. To help enterprises quickly meet the GDPR, we’re introducing GDPR Patterns, with over 150 patterns of specific personal data that falls in the realm of the GDPR, starting with patterns for 19 countries currently in the EU (including the UK).

Using the Data Classification Framework as a foundation, GDPR Patterns will enable organizations to discover regulated personal data: from national identification numbers to IBANs to blood type to credit card information. This means that you’ll be able to generate reports on GDPR-applicable data, including permissions, open access, and stale data. These patterns and classifications will help enterprises meet the GDPR head on, building out security policies to monitor and alert on GDPR-affected data.

Try it today and discover how GDPR Patterns will help prepare you for 2018 and keep your data secure.

IAM & ITSM Integration with DataPrivilege

We’ve been talking a lot lately about unified strategies for data security and management, and the challenge of juggling multiple solutions to meet enterprise security needs.

DataPrivilege puts owners in charge of file shares, SharePoint sites, AD security and distribution groups by automating authorization requests, entitlement reviews and more. DataPrivilege now includes a new API so customers can take advantage of its capabilities by integrating with other technologies in the security ecosystem, like IAM (Identity and Access Management) and ITSM (IT Service Management) Solutions.

Our new DataPrivilege API provides more flexibility for IT and business users so they can unify and customize their user experience and workflows. With the API, you’ll be able to synchronize managed data with your IAM/ITSM solution and return instructions to DataPrivilege to execute and report on requests and access control changes.  You’ll be able to use the integration to externally control DataPrivilege entitlement reviews, self-service access workflows, ownership assignment, and more.

Ask for a demo and see how it works with your current setup.

 

A Few Thoughts on Data Security Standards

Did you know that the 462-page NIST 800-53 data security standard has 206 controls with over 400 sub-controls1? By the way, you can gaze upon the convenient XML-formatted version here. PCI DSS is no slouch either, with hundreds of sub-controls in its requirements document. And then there’s the sprawling ISO 27001 data standard.

Let’s not forget about security frameworks, such as COBIT and NIST CSF, which are kind of meta-standards that map into other security controls. For organizations in health or finance that are subject to US federal data security rules, HIPAA and GLBA’s data regulations need to be considered as well. And if you’re involved in the EU market, there’s GDPR; in Canada, it’s PIPEDA; in the Philippines, it’s this, etc., etc.

There’s enough technical and legal complexity out there to keep teams of IT security pros, privacy attorneys, auditors, and diplomats busy till the end of time.

As a security blogger, I’ve also puzzled and pondered over the aforementioned standards and regulations. I’m not the first to notice the obvious: data security standards fall into patterns that make them all very similar.

Security Control Connections

If you’ve mastered and implemented one, then very likely you’re compliant with others as well. In fact, that’s one good reason for having frameworks. For example, with, say, NIST CSF, you can leverage your investment in ISO 27001 or ISA 62443 through their cross-mapped control matrix (below).

Got ISO 27001? Then you’re compliant with NIST CSF!

I think we can all agree that most organizations will find it impossible to implement all the controls in a typical data standard with the same degree of attention — when was the last time you checked the physical access audit logs for your data transmission assets (NIST 800-53, PE-3b)?

So to make it easier for companies and the humans who work there, some of the standards groups have issued further guidelines that break the huge list of controls into more achievable goals.

The PCI group has a prioritized approach to dealing with their DSS—they have six practical milestones that are broken into a smaller subset of relevant controls. They also have a best practices guide that groups — and this is important — security controls into three broader functional areas: assessment, remediation, and monitoring.

In fact, we wrote a fascinating white paper explaining these best practices, and how you should be feeding back the results of monitoring into the next round of assessments. In short: you’re always in a security process.

NIST CSF, which itself is a pared down version of NIST 800-53, also has a similar breakdown of its controls into broader categories, including identification, protection, and detection. If you look more closely at the CSF identification controls, which mostly involve inventorying your IT data assets and systems, you’ll see that the main goal in this area is to evaluate or assess the security risks of the assets that you’ve collected.

File-Oriented Risk Assessments

In my mind, the trio of assess, protect, and monitor is a good way to organize and view just about any data security standard.

In dealing with these data standards, organizations can also take a practical shortcut through the controls based on what we know about the kinds of threats appearing in our world — and not the ones the standards’ authors were facing when they wrote the controls!

We’re now in a new era of stealthy attackers who enter systems undetected, often through phish mails, by leveraging previously stolen credentials, or via zero-day vulnerabilities. Once inside, they can fly under the monitoring radar with malware-free techniques, find monetizable data, and then remove or exfiltrate it.

Of course it’s important to assess, protect and monitor network infrastructure, but these new attack techniques suggest that the focus should be inside the company.

And we’re back to a favorite IOS blog theme. You should really be making it much harder for hackers to find the valuable data — like credit card or account numbers, corporate IP — in your file systems, and detect and stop the attackers as soon as possible.

Therefore, when looking at how to apply typical data security controls, think file systems!

For, say, NIST 800-53, that means scanning file systems, looking for sensitive data, examining the ACLs or permissions, and then assessing the risks (CM-8, RA-2, RA-3). For remediation or protection, this would involve reorganizing Active Directory groups and resetting ACLs to be more exclusive (AC-6). For detection, you’ll want to watch for unusual file system accesses that likely indicate hackers borrowing employee credentials (SI-4).
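
To make the file-system angle less abstract, here is a deliberately crude Python sketch of what “scanning for sensitive data” means in practice; it is nothing like a real classification engine, and the share path and card-number pattern are just placeholders:

import os
import re

# Crude pattern for 16-digit, card-like numbers. Real classifiers use many more
# rules (plus checksum validation); this exists only to illustrate the idea.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){15}\d\b")

def scan_share(root: str):
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as f:
                    hits = CARD_PATTERN.findall(f.read())
            except OSError:
                continue
            if hits:
                # A real assessment would also record who has access (the ACLs)
                # and when the file was last used, then rank the risk.
                yield path, len(hits)

for path, count in scan_share(r"\\fileserver\share"):
    print(f"{count:4d} possible card numbers in {path}")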

I think the most important point is not to view these data standards as just an enormous list of disconnected controls, but instead to consider them in the context of assess-protect-monitor, and then apply them to your file systems.

I’ll have more to say on a data or file-focused view of data security controls in the coming weeks.

1 How did I know that NIST 800-53 has over 400 sub-controls? I took the XML file and ran these two amazing lines of PowerShell:

# Load the NIST 800-53 control catalog as an XML document
[xml]$controls = Get-Content 800-53-controls.xml
# Pull each control's sub-control (statement) numbers and count how many come back
$controls.controls.control | ForEach-Object { $_.statement.statement.number } | Measure-Object -Line

 

[Podcast] What Does the GDPR Mean for Countries Outside the EU?

The short answer is: if your organization stores, processes, or shares EU citizens’ personal data, the EU General Data Protection Regulation (GDPR) rules will apply to you.

In a recent survey, 94% of large American companies say they possess EU customer data that will fall under the regulations, yet only 60% of respondents have plans in place to respond to the impact the GDPR will have on how they handle customer data.

Yes, the GDPR isn’t light reading, but in this podcast we’ve found a way to simplify its key requirements so that you’ll get a high-level sense of what you’ll need to do to become compliant.

We also discuss the promise and challenges of what GDPR can bring – changes to how consumers relate to data as well as how IT will manage consumer data.

After the podcast, you might want to check out the free 7-part video course we developed with Troy Hunt on the new European General Data Protection Regulation. It will tell you: What are the requirements? Who will be affected? How does this help protect personal data?

[Podcast] Mintz Levin’s Sue Foster on the GDPR, Part II

In this second part of our interview with attorney and GDPR pro Sue Foster, we get into a cyber topic that’s been on everyone’s mind lately: ransomware.

A ransomware attack on EU personal data is unquestionably a breach —  “accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access  …”

But would it be reportable under the GDPR, which goes into effect next year?

In other words, would an EU company (or US one as well) have to notify a DPA and affected customers within the 72-hour window after being attacked by, say, WannaCry?

If you go by the language of the law, the answer is a definite …  no!

Foster explains that for it to be reportable, a breach has to cause a risk “to the rights and freedoms of natural persons.”  For what this legalese really means, you’ll just have to listen to the podcast. (Hint: it refers to a fundamental document of the EU.)

Anyway, personal data that’s encrypted by ransomware and not taken off premises is not much of a risk for anybody. There are still more subtleties involving ransomware and other EU data laws that are best explained by her, so you’ll just have to listen to Sue’s legal advice directly!

There’s also a very interesting analysis by Foster of the implications of the GDPR for Internet-of-Things gadget makers.

[Podcast] Mintz Levin’s Sue Foster on the GDPR, Part I

Sue Foster is a London-based partner at Mintz Levin. She has a gift for explaining the subtleties in the EU General Data Protection Regulation (GDPR). In this first part of our interview, Foster discusses how the GDPR’s new extraterritoriality rule would place US companies under the law’s data obligations.

In the blog, we’ve written about some of the implications of the GDPR’s Article 3, which covers the law’s territorial scope. In short: if you market online to EU consumers — web copy, say, in the language of some EU country  — then you’ll fall under the GDPR. And this also means you would have to report data exposures under the GDPR’s new 72-hour breach rule.

Foster points out that if a US company merely happens to attract EU consumers through its overall marketing, it would not fall under the law.

So a cheddar cheese producer from Wisconsin whose web site gets the attention and business of French-based fromage lovers is not required to protect their data at the level of the GDPR.

There’s another snag for US companies: an update to the EU’s ePrivacy Directive, which places restrictions on embedded communication services. Foster explains how companies, not necessarily ISPs, that provide messaging — that means you, WhatsApp, Skype, and Gmail — would fall under this law’s privacy rules.

Sue’s insights on these and other topics will be relevant to both corporate privacy officers and IT security folks.

Data Security Compliance and DatAdvantage, Part III: Protect and Monitor

This article is part of the series “Data Security Compliance and DatAdvantage”.

At the end of the previous post, we took up the nuts-and-bolts issues of protecting sensitive data in an organization’s file system. One popular approach, the least-privileged access model, is often explicitly mentioned in compliance standards, such as NIST 800-53 or PCI DSS. Varonis DatAdvantage and DataPrivilege provide a convenient way to accomplish this.

Ownership Management

Let’s start with DatAdvantage. We saw last time that DA provides graphical support for helping to identify data ownership.

If you want to get more granular than just seeing who’s been accessing a folder, you can view the actual access statistics of the top users with the Statistics tab (below).

This is a great help in understanding who is really using the folder. The ultimate goal is to find the true users, and remove extraneous groups and users who perhaps needed occasional access but not as part of their job role.

The key point is to first determine the folder’s owner — the one who has the real knowledge and wisdom of what the folder is all about. This may require some legwork on IT’s part in talking to the users, based on the DatAdvantage stats, and working out the real chain of command.

Once you use DatAdvantage to set the folder owners (below), these more informed power users, as we’ll see, can independently manage who gets access and whose access should be removed. The folder owner will also automatically receive DatAdvantage reports, which will help guide them in making future access decisions.

There’s another important point to make before we move on. IT has long been responsible for provisioning access without knowing the business purpose. Varonis DatAdvantage assists IT in finding these owners and then giving them the access-granting powers.

Anyway, once the owner has done the housekeeping of paring and removing unnecessary folder groups, they’ll then want to put in place a process for permission management. Data standards and laws recognize the importance of having security policies and procedures as part of an ongoing program – i.e., not something an owner does once a year.

And Varonis has an important part to play here.

Maintaining Least-Privileged Access

How do ordinary users whose job role now requires them to access a managed folder request permission from the owner?

This is where Varonis DataPrivilege makes an appearance. Regular users will need to bring this interface up (below) to formally request access to a managed folder.

The owner of the folder has a parallel interface from which to receive these requests and then grant or revoke permissions.

As I mentioned above, these security ideas of least-privileged access and permission management are often explicitly part of compliance standards and data security laws. Building on my list from the previous post, here’s a more complete enumeration of controls that Varonis DatAdvantage supports:

  • NIST 800-53: AC-2, AC-3, AC-5, CM-5
  • NIST 800-171: 3.1.4, 3.1.5, 3.4.5
  • PCI DSS 3.x: 7.1,7.2
  • HIPAA: 45 CFR 164.312 a(1), 164.308a(4)
  • ISO 27001: A.6.1.2, A.9.1.2, A.9.2.3, A.11.2.2
  • CIS Critical Security Controls: 14.4
  • New York State DFS Cybersecurity Regulations: 500.07

Stale Sensitive Data

Minimization is an important theme in security standards and laws. These ideas are best represented in the principles of Privacy by Design (PbD), which has good overall advice on this subject: minimize the sensitive data you collect, minimize who gets to see it, and minimize how long you keep it.

Let’s address the last point, which goes under the more familiar name of data retention. One piece of low-hanging fruit for reducing security risk is to delete or archive stale sensitive data embedded in files.

This makes incredible sense, of course. This stale data can be, for example, consumer PII collected in short-term marketing campaigns, but now residing in dusty spreadsheets or rusting management presentations.

Your organization may no longer need it, but it’s just the kind of monetizable data that hackers love to get their hands on.

As we saw in the first post, which focused on Identification, DatAdvantage can find and identify file data that hasn’t been used after a certain threshold date.

Can the stale data report be tweaked to find stale data that is also sensitive?

Affirmative.

You need to add the hit count filter and set the number of sensitive data matches to an appropriate number.

In my test environment, I discovered that the C:\Share\pvcs folder hasn’t been touched in over a year and has some sensitive data.

The next step is then to take a visit to the Data Transport Engine (DTE) available in DatAdvantage (from the Tools menu). It allows you to create a rule that will search for files to archive and delete if necessary.

In my case, my rule’s search criteria mirrors the same filters used in generating the report. The rule is doing the real heavy-lifting of removing the stale, sensitive data.

Since the rule is saved, it can be rerun to enforce the retention limits. Even better, DTE can automatically run the rule on a periodic basis so that you never have to worry about stale sensitive data in your file system.
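
Purely as an illustration of the retention idea (not of how the Data Transport Engine works internally), a minimal sweep for files untouched beyond a threshold might look like this in Python; pair it with a sensitive-data filter and run it on a schedule:

import os
import time

def stale_files(root: str, max_age_days: int = 365):
    """Yield files whose last modification falls outside the retention window."""
    cutoff = time.time() - max_age_days * 24 * 3600
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:
                yield path

for path in stale_files(r"C:\Share\pvcs"):
    print("candidate for archive or deletion:", path)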

Data retention requirements can be found in the following security standards and regulations:

  • NIST 800-53: SI-12
  • PCI DSS 3.x: 3.1
  • CIS Critical Security Controls: 14.7
  • New York State DFS Cybersecurity Regulations: 500.13
  • EU General Data Protection Regulation (GDPR): Article 25.2

Detecting and Monitoring

Following the order of the NIST higher-level security control categories from the first post, we now arrive at our final destination in this series, Detect.

No data security strategy is foolproof, so you need a secondary defense based on detection and monitoring controls: effectively you’re watching the system and looking for unusual activities.

Varonis, and specifically DatAlert, has a unique role in detection because its underlying security platform is based on monitoring file system activities.

By now everyone knows (or should know) that phishing and injection attacks allow hackers to get around network defenses as they borrow existing users’ credentials, and fully-undetectable (FUD) malware means they can avoid detection by virus scanners.

So how do you detect the new generation of stealthy attackers?

No attacker can avoid using the file system to load their software, copy files, and crawl a directory hierarchy looking for sensitive data to exfiltrate.  If you can spot their unique file activity patterns, then you can stop them before they remove or exfiltrate the data.

We can’t cover all of DatAlert’s capabilities in this post — probably a good topic for a separate series! — but since it has deep insight into all file system information and events, and histories of user behaviors, it’s in a powerful position to determine what’s out of the normal range for a user account.

We call this user behavior analytics or UBA, and DatAlert comes bundled with a suite of UBA threat models (below).  You’re free to add your own, of course, but the pre-defined models are quite powerful as is. They include detecting crypto intrusions, ransomware activity, unusual user access to sensitive data, unusual access to files containing credentials, and more.
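
DatAlert’s threat models are far more sophisticated than this, but as a highly simplified sketch of the baseline idea behind UBA, here are a few lines of Python that flag a day of file activity far outside a user’s own history (the numbers are invented):

from statistics import mean, stdev

def unusual_activity(history: list[int], today: int, threshold: float = 3.0) -> bool:
    """Flag a day whose file-access count sits more than `threshold`
    standard deviations above this user's own historical baseline."""
    if len(history) < 2:
        return False
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and (today - mu) / sigma > threshold

# An account that normally touches ~40 files a day suddenly crawls 5,000:
print(unusual_activity([38, 42, 35, 51, 44, 40, 37], 5000))   # True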

All the alerts that are triggered can be tracked from the DatAlert Dashboard. IT staff can either intervene and respond manually or even set up scripts to run automatically — for example, to automatically disable accounts.

If a specific data security law or regulations requires a breach notification to be sent to an authority, DatAlert can provide some of the information that’s typically required – files that were accessed, types of data, etc.

Let’s close out this post with a final list of detection and response controls in data standards and laws that DatAlert can help support:

  • NIST 800-53: SI-4, AU-13, IR-4
  • PCI DSS 3.x: 10.1, 10.2, 10.6
  • CIS Critical Security Controls: 5.1, 6.4, 8.1
  • HIPAA: 45 CFR 164.400-164.414
  • ISO 27001: A.16.1.1, A.16.1.4
  • New York State DFS Cybersecurity Regulations: 500.02, 500.16, 500.27
  • EU General Data Protection Regulation (GDPR): Article 33, 34
  • Most US states have breach notification rules

Data Security Compliance and DatAdvantage, Part II: More on Risk Assessment

This article is part of the series “Data Security Compliance and DatAdvantage”.

I can’t really overstate the importance of risk assessments in data security standards. They’re really at the core of everything you subsequently do in a security program. In this post we’ll finish discussing how DatAdvantage helps support many of the risk assessment controls that are in just about every security law, regulation, or industry security standard.

Last time, we saw that risk assessments were part of NIST’s Identify category. In short: you’re identifying the risks and vulnerabilities in your IT system. Of course, at Varonis we’re specifically focused on sensitive plain-text data scattered around an organization’s file system.

Identify Sensitive Files in Your File System

As we all know from major breaches over the last few years, poorly protected folders are where the action is for hackers: they’ve been focusing their efforts there as well.

The DatAdvantage 2b report is the go-to report for finding sensitive data across all folders, not just ones with global permissions that are listed in 12l. Varonis uses various built-in filters or rules to decide what’s considered sensitive.

I counted about 40 or so such rules, covering credit card, social security, and various personal identifiers that are required to be protected by HIPAA and other laws.

In the test system on which I ran the 2b report, the \share\legal\Corporate folder was snagged by the aforementioned filters.

Identify Risky and Unnecessary Users Accessing Folders

We now have a folder that is a potential source of data security risk. What else do we want to identify?

Users who have accessed this folder are a good starting point.

There are a few ways to do this with DatAdvantage, but let’s just work with the raw access audit log of every file event on a server, which is available in the 2a report. By adding a directory path filter, I was able to narrow down the results to the folder I was interested in.

So now we at least know who’s really using this specific folder (and sub-folders). Oftentimes this is a far smaller pool of users than has been enabled through the group permissions on the folders. In any case, this should be the basis of a risk assessment discussion to craft more tightly focused groups for this folder and to set an owner who can then manage the content.

In the Review Area of DatAdvantage, there’s more graphical support for finding users accessing folders, the percentage of the Active Directory group who are actually using the folder, as well as recommendations for groups that should be accessing the folder. We’ll explore this section of DatAdvantage further below.

For now, let’s just stick to the DatAdvantage reports since there’s so much risk assessment power bundled into them.

Another, similar discussion can be based on using the 12l report to analyze folders that contain sensitive data but have global access – i.e., include the Everyone group.

There are two ways to think about this very obvious risk. You can remove the Everyone access on the folder. This can and likely will cause headaches for users. DatAdvantage conveniently has a sandbox feature that allows you to test this.

On the other hand, there may be good reasons the folder has global access, and perhaps there are other controls in place that would (in theory) help reduce the risk of unauthorized access. This is a risk discussion you’d need to have.

Another way to handle this is to see who’s copying files into the folder — maybe it’s just a small group of users — and then establish policies and educate these users about dealing with sensitive data.

You could then go back to the 1A report, and set up filters to search for only file creation events in these folders, and collect the user names (below).

Who’s copying files into my folder?
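
DatAdvantage answers this through its report filters; purely as an illustration of the underlying question, here is what it looks like in Python against a hypothetical CSV export of file events with user, event, and path columns:

import csv
from collections import Counter

def file_creators(audit_csv: str, folder: str) -> Counter:
    """Count file-creation events per user under a folder (hypothetical log layout)."""
    creators = Counter()
    with open(audit_csv, newline="") as f:
        for row in csv.DictReader(f):
            if row["event"] == "create" and row["path"].lower().startswith(folder.lower()):
                creators[row["user"]] += 1
    return creators

for user, count in file_creators("file_events.csv", r"\share\legal\Corporate").most_common():
    print(f"{user}: {count} new files")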

After emailing this group of users with followup advice and information on copying, say, spreadsheets with credit card numbers, you can run the 12l reports the next month to see if any new sensitive data has made its way into the folder.

The larger point is that the DatAdvantage reports help identify the risks and the relevant users involved so that you can come up with appropriate security policies — for example, least-privileged access, or perhaps looser controls but with better monitoring or stricter policies on granting access in the first place. As we’ll see later on in this series, Varonis DatAlert and DataPrivilege can help enforce these policies.

In the previous post, I listed the relevant controls that DA addresses for the core identification part of risk assessment. Here’s a list of risk assessment and policy making controls in various laws and standards where DatAdvantage can help:

  • NIST 800-53: RA-2, RA-3, RA-6
  • NIST 800-171: 3.11.1
  • HIPAA:  164.308(a)(1)(i), 164.308(a)(1)(ii)
  • Gramm-Leach-Bliley: 314.4(b),(c)
  • PCI DSS 3.x: 12.1,12.2
  • ISO 27001: A.12.6.1, A.18.2.3
  • CIS Critical Security Controls: 4.1, 4.2
  • New York State DFS Cybersecurity Regulations: 500.03, 500.06

Thou Shalt Protect Data

A full risk assessment program would also include identifying external threats—new malware, new hacking techniques. With this new real-world threat intelligence, you and your IT colleagues should go back, re-adjust the risk levels you initially assigned, and then re-strategize.

It’s an endless game of cyber cat-and-mouse, and a topic for another post.

Let’s move to the next broad functional category, Protect. One of the critical controls in this area is limiting access to only authorized users. This is easier said than done, but we’ve already laid the groundwork above.

The guiding principles are typically least-privileged access and role-based access controls. In short: give appropriate users just the access they need to do their jobs or carry out their roles.

Since we’re now at a point where we are about to take a real action, we’ll need to shift from the DatAdvantage Reports section to the Review area of DatAdvantage.

The Review Area tells me who’s been accessing the legal\Corporate folder, which turns out to be a far smaller set than has been given permission through their group access rights.

To implement least-privilege access, you’ll want to create a new AD group for just those who really, truly need access to the legal\Corporate folder. And then, of course, remove the existing groups that have been given access to the folder.

In the Review Area, you can select and move the small set of users who really need folder access into their own group.

Yeah, this assumes you’ve done some additional legwork during the risk assessment phase — spoken to the users who accessed the legal\Corporate folder, identified the true data owners, and understood what they’re using this folder for.

DatAdvantage can provide a lot of support in narrowing down who to talk to. So by the time you’re ready to use the Review Area to make the actual changes, you already should have a good handle on what you’re doing.

One other key control, which we’ll discuss in more detail next time, is managing file permissions for the folders.

Essentially, that’s where you find and assign data owners, and then ensure that there’s a process going forward to allow the owner to decide who gets access. We’ll show how Varonis has a key role to play here through both DatAdvantage and DataPrivilege.

I’ll leave you with this list of least permission and management controls that Varonis supports:

  • NIST 800-53: AC-2, AC-3, AC-6
  • NIST 800-171: 3.1.4, 3.1.5
  • PCI DSS 3.x: 7.1
  • HIPAA: 164.312 a(1)
  • ISO 27001: A.6.1.2, A.9.1.2, A.9.2.3
  • CIS Critical Security Controls: 14.4
  • New York State DFS Cybersecurity Regulations: 500.07