Tag Archives: eu data protection regulation

[Podcast] Privacy Attorney Tiffany Li and AI Memory, Part II


This article is part of the series "[Podcast] Privacy Attorney Tiffany Li and AI Memory". Check out the rest:

Leave a review for our podcast & we'll send you a pack of infosec cards.


Tiffany C. Li is an attorney and Resident Fellow at Yale Law School’s Information Society Project. She frequently writes and speaks on the privacy implications of artificial intelligence, virtual reality, and other technologies. Our discussion is based on her recent paper on the difficulties of getting AI to forget.

In this second part, we continue our discussion of GDPR and privacy, and examine ways to bridge the gap between tech and law. We then explore some cutting edge areas of intellectual property. Can AI algorithms own their creative efforts? Listen and learn.

[Podcast] Privacy Attorney Tiffany Li and AI Memory, Part I


This article is part of the series "[Podcast] Privacy Attorney Tiffany Li and AI Memory". Check out the rest:



Tiffany Li is an attorney and Resident Fellow at Yale Law School’s Information Society Project. She frequently writes about the privacy implications of artificial intelligence, virtual reality, and other disruptive technologies. We first learned about Tiffany after reading a paper by her and two colleagues on GDPR and the “right to be forgotten”. It’s an excellent introduction to the legal complexities of erasing memory from a machine intelligence.

In this first part of our discussion, we talk about GDPR’s “right to be forgotten” rule and its origins in a lawsuit brought against Google. Tiffany then explains how deleting personal data is more than just removing it from a folder or directory.

We learn that GDPR regulators haven’t yet addressed how to get AI algorithms to dynamically change their rules when the underlying data is erased. It’s a major hole in this new law’s requirements!



IT Guide to the EU GDPR Breach Notification Rule



The General Data Protection Regulation (GDPR) is set to go into effect in a few months — May 25, 2018, to be exact. While the document is a great read for experienced data security attorneys, it would be nifty if we in the IT world got some practical advice on some of its murkier sections — say, the breach notification rule as spelled out in articles 33 and 34.

The GDPR’s 72-hour breach notification requirement is not in the current EU Directive, the law of the land since the mid-1990s. For many companies, meeting this tight reporting window will involve their IT departments stepping up their game.

With help from a few legal experts — thanks Sue Foster and Brett Cohen — I’ve also been pondering the language in the GDPR’s notification rule. The key question that’s not entirely answered by GDPR legalese is the threshold for reporting in real-world scenarios.

For example, is a ransomware attack reportable to regulators? What about email addresses or online handles that are exposed by hackers?

Read on for the answers.

Personal Data Breach versus Reportable Breach

We finally have some solid guidance from the regulators. Last month, the EU regulators released some answers for the perplexed in a 30-page document of guidelines on breach notification – with bonus tables and flowcharts!

To refresh fading memories, the GDPR says that a personal data breach is a breach of security leading “to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data transmitted, stored or otherwise processed.”

This is fairly standard language found in any data privacy law — first define a data breach or other cybersecurity event. This is what you’re supposed to be protecting against — preventing these incidents!

There are also additional criteria for deciding when regulators and consumers have to be notified.

In short: not every data security breach requires an external notification!

This is not unusual in data security laws that have breach report requirements. HIPAA at the federal level for medical data and New York State’s innovative cyber rules for finance make these distinctions as well. It’s a way to prevent regulators from being swamped with breach reports.

In the case of the GDPR, breaches can only involve personal data, which is EU-speak for personally identifiable information or PII. If your company is under the GDPR and it experiences an exposure of top-secret diagrams involving a new invention, then it would not be considered a personal data breach and therefore not reportable. You can say the same for stolen proprietary software or other confidential documents.

Notifying the Regulators

Under the GDPR, when does a company or data controller have to report a personal data breach to the local supervising authority – what we used to call the local data protection authority or DPA in the old Directive?

This is spelled out in article 33, but it’s a little confusing if you don’t know the full context. In essence, a data controller reports a personal data breach — exposure, destruction, or loss of access — if the breach poses a risk to EU citizens’ “rights and freedoms”.

These rights and freedoms refer to more explicit property and privacy rights spelled out in the EU Charter of Fundamental Rights — kind of the EU Constitution.

I’ve read through the guidance, and just about everything you would intuitively consider a breach — exposure of sensitive personal data, theft of a device containing personal data, unauthorized access to personal data — would be reportable to regulators.

And it would have to be reported within 72 hours! It’s a little more nuanced, and you have some wiggle room, but I’ll get to that at the end of this post.

The only exception here: if the personal data is encrypted with state-of-the-art algorithms, and the key itself is not compromised, then the controller does not have to report the breach.

And a security breach that involves personal data, as defined by the EU GDPR, but that doesn’t reach the threshold of “risks to rights and freedoms”?

There’s still some paperwork you have to do!

Under the GDPR, every personal data breach must be recorded internally: “The controller shall document any personal data breaches, comprising the facts relating to the personal data breach”— see Article 33(5).

So the lost or stolen laptop that had encrypted personal data or perhaps an unauthorized access made by an employee — she saw some customer account numbers by accident because of a file permission glitch — doesn’t pose risks to rights and freedoms but it would still have to be documented.

There’s a good Venn diagram hidden in this post, but for now gaze upon the flowchart below.

Not as beautiful as a Venn diagram, but this flowchart on GDPR breach reporting will get you the answers. (Source: Article 29 Working Party)
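The decision flow in the guidance can be captured in a short sketch. This is a simplified model for illustration only, assuming a three-level risk assessment; the function and field names are mine, not anything from the regulation, and a real determination needs legal review:

```python
# A minimal sketch of the breach-notification decision flow, assuming a
# simplified risk model ("none", "risk", or "high"). Illustrative only.

def gdpr_notification_duties(involves_personal_data: bool, risk: str) -> dict:
    """Return which notification duties a security incident triggers."""
    duties = {"document_internally": False,
              "notify_supervising_authority": False,
              "notify_individuals": False}
    if not involves_personal_data:
        return duties                       # not a personal data breach at all
    duties["document_internally"] = True    # Article 33(5): always document
    if risk in ("risk", "high"):
        duties["notify_supervising_authority"] = True   # Article 33(1)
    if risk == "high":
        duties["notify_individuals"] = True             # Article 34(1)
    return duties

# Example: an encrypted laptop is stolen but the key is safe -- the
# controller assesses no risk, so only internal documentation is required.
print(gdpr_notification_duties(True, "none"))
```

Note that the "document internally" duty never goes away once personal data is involved, which matches the Article 33(5) record-keeping requirement discussed below.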

Let’s look at one more GDPR reporting threshold scenario involving availability or alteration of personal data.

Say EU personal data becomes unavailable due to a DDoS attack on part of a network, or perhaps it’s deleted by malware but there is a backup. In both cases you have a loss, albeit a temporary one, and it’s still a personal data breach by the GDPR’s definition.

Is this reportable to the supervising authority?

It depends.

If users can’t gain access to, say, their financial records for more than a brief period, maybe a day or two, then this would impact their rights and freedoms. The incident would have to be reported to the supervising authority.

Based on the notes in the guidance, there’s some room for interpreting what this brief period would be. You’ll still need, though, to document the incident and the decision making involved.

Breach Notification and Ransomware

Based on my chats with GDPR experts, I learned there was uncertainty even among the legal eagles whether a ransomware attack is reportable.

With the new guidance, we now have a clearer answer: the regulators actually take up ransomware scenarios in their analysis.

As we all know, ransomware encrypts corporate data, and you have to pay the extortionists (typically in Bitcoin) to decrypt and release the data back to its plaintext form.

In the GDPR view, as I suggested above, ransomware attacks on personal data are considered a data loss. When does it cross the threshold and become a reportable data breach?

According to the examples they give, it would be reportable under two situations: 1) There is a backup of the personal data but the outage caused by the ransomware attack impacts users; or 2) There is no backup of the personal data.

In theory, a very short-lived ransomware attack in which the target recovers quickly is not reportable. In the real world, where analysis and recovery take significant time, most ransomware attacks would effectively be reportable.

Individual Reporting

The next level of reporting is a personal data breach that poses a “high risk to the rights and freedoms” of individuals. These breaches have to be reported to the affected individuals themselves.

In terms of Venn diagrams and subsets, we can make the statement that every personal data breach that is individually reported also has to be reported to the supervising authority. (And yes, all Greeks are men).

When does a personal data breach reach the level of high risk?

Our intuition is helpful here, and the guidelines list as examples personal data breaches that involve medical or financial data (credit card or bank account numbers).

But there are other examples outside the health and banking context. If the personal data breach involves the names and addresses of retail customers who have requested delivery while on vacation, then that would be a high risk, and would require the individuals to be contacted.

A breach of contact information alone (name, address, email address, etc.) may not necessarily require notification. But it would require the supervising authority and individuals to be informed if a large number of individuals are affected! According to the guidelines, size does matter. So a Yahoo-level exposure of email addresses would lead to notifications.

The guidelines make a point that if this contact information includes other sensitive data — psychological, ethnic, etc. — then it would be reportable regardless of the number of individuals affected.

Note: a small breach of emails without other confidential information is not reportable. (Source: Article 29 Working Party)

Or if the contact information, say email addresses, is hacked from a children’s website, and therefore the group is particularly vulnerable, then this would constitute a high risk and require notification of the individuals involved.

Breach Notification in Phases

While the 72-hour GDPR breach notification rule was somewhat controversial, it’s actually more flexible once you read the fine print.

The first key point is that the clock starts ticking after the controller becomes aware of the personal data breach.

For example, suppose an organization detects a network intrusion from an attacker. That 72-hour window does not start at this point.

Then there’s an investigation to see if personal data was breached. The clock still doesn’t start. Only when the IT security team discovers with reasonable certainty that there has been a personal data breach does the clock start!

When notifying the supervising authority, the data controller can do this in phases.

It is perfectly acceptable to notify the supervising authority initially when there has been discovery (or the likelihood) of a personal data breach and to tell them that more investigation is required to obtain details — see Article 33(4). This process can take more than 72 hours and is allowed under the GDPR.

And if it turns out to be a false alarm, the controller can ask the supervising authority to cancel the notification.

For personal data breaches in which it is discovered there is a high risk to individuals, the notification to affected “data subjects” must be made without “undue delay” — see Article 34(1). The objective is to inform consumers about how they’ve been affected and what steps they need to take to protect themselves.

Notification Details

This leads to the final topic in this epic post: what do you tell the supervising authority and individuals?

For the supervising authority, here’s the actual language in Article 33:

  • Describe the nature of the personal data breach including where possible, the categories and approximate number of data subjects concerned and the categories and approximate number of personal data records concerned;
  • Communicate the name and contact details of the data protection officer or other contact point where more information can be obtained;
  • Describe the likely consequences of the personal data breach;
  • Describe the measures taken or proposed to be taken by the controller to address the personal data breach, including, where appropriate, measures to mitigate its possible adverse effects.

Note the requirement to provide details on the data categories and approximate number of records involved.

The supervising authority can, by the way, request additional information. The above list is the minimum that the controller has to provide.

When notifying individuals (see Article 34), the controller also has to offer the following:

  • a description of the nature of the breach;
  • the name and contact details of the data protection officer or other contact point;
  • a description of the likely consequences of the breach; and
  • a description of the measures taken or proposed to be taken by the controller to address the breach, including, where appropriate, measures to mitigate its possible adverse effects.

The GDPR prefers that the controller contact affected individuals directly – rather than through a media broadcast.  This can include email, SMS text, and snail mail.

For indirect mass communication, prominent banners on web sites, blog posts, or press releases will do fine.

The GDPR breach notification guidelines released last month run about 30 pages. As an IT person, you will not be able to fully appreciate all the subtleties.

You will need an attorney — your corporate counsel, CPO, CLO, etc. — to understand what’s going on with this GDPR breach guidance and other related rules.

That leads nicely to this last thought: incident response to a breach requires combined efforts of IT, legal, communications, operations, and PR, usually at the C-level.

IT can’t do it alone.

The first step is to have an incident response plan.

A great resource for data security and privacy compliance is the International Association of Privacy Professionals (IAPP) website: https://iapp.org/ .

The IAPP also has an incident response toolkit put together by our attorney friends at Hogan Lovells. Check it out here.

GDPR By Any Other Name: The UK’s New Data Protection Bill


Last month, the UK published the final version of a law to replace its current data security and privacy rules. For those who haven’t been following the Brexit drama now playing in London, the Data Protection Bill or DPB will allow UK businesses to continue to do business with the EU after its “divorce” from the EU.

The UK will have data rules that are effectively the same as the EU General Data Protection Regulation (GDPR), but it will be cleverly disguised as the DPB.  Jilted lovers, separations, false identities … sounds like a real-life Shakespearean comedy (or Mrs. Doubtfire).

For businesses that have to accommodate the changes, it’s anything but.

In the Short Term

As it currently stands, the UK is under the EU’s Data Protection Directive (DPD) through its 1998 Data Protection Act or DPA, which in EU-speak “transposes” or copies the DPD into a national law. Come May 2018, the UK will fall under the GDPR, which has as a goal to harmonize all the separate national data security laws, like the UK’s DPA, into a single set of rules, and to put in place a more consistent enforcement structure.

Between May 2018 and whenever the UK government officially enacts the DPB, the GDPR will also be the data security and privacy law for the UK. The DPB is expected to become law before Brexit, which is scheduled to occur in March 2019.

Since the GDPR will soon be the data security and privacy law in the UK, replacing the DPA, organizations have been gearing up to meet the new rules – especially, the right to erasure, 72-hour breach notification to authorities, and improved record keeping of processing activities. The DPB should, in theory, provide a relatively easy transition for UK businesses.

A Few Differences

As many commenters have pointed out (and to which I can personally attest), the DPB is not a simple piece of legislation — though you’d think it would be otherwise. The Bill starts with the premise that the GDPR rules apply to the UK, so it doesn’t even copy the actual text.

So what takes up the rest of this 200-page bill?

A good part is devoted to the exemptions, restrictions, and clarifications that are allowed by the GDPR, which the UK DPB takes full advantage of in the fine print.

The core of the bill is found in Part 2, wherein these various tweaks — for personal data related to health, scientific research, criminal investigations, employee safety, and public interest — are laid out. The actual details — lawyers take note — are buried at the end of the DPB in a long section of “schedules”.

For example, GDPR articles related to the right to erasure, data rectification, and objection to processing don’t apply to investigations into, say, financial mismanagement or public servants misusing their office. In effect, the targets of an investigation lose control of their data.

The DPB is also complex because it contains a complete parallel set of GDPR-like security and privacy rules for law enforcement and national security services. The DPB actually transposes another EU directive, known as the EU Data Protection Law Enforcement Directive. There is also a long list of exceptions packed into even more schedules and tables at the end of the document.

While the goal of Brexit may have been to get out from under EU regulations, the Data Protection Bill essentially keeps the rules in place, and gives us a lot of abbreviations to keep track of.

Business Beware: ICO’s New Audit Powers

That doesn’t mean, however, that there aren’t any surprises in the new UK law.

The DPB grants regulators at the UK’s Information Commissioner’s Office (ICO) new investigative powers through “assessment notices”. These notices allow the ICO staff to enter the organization, examine documents and equipment, and observe processing of personal data. Effectively, UK regulators will have the ability to audit an organization’s data security compliance.

Under the existing DPA, the ICO can order these non-voluntary assessments only against government agencies, such as the NHS. The DPB expands mandatory data security auditing to the private sector.

If the ICO decides the organization is not meeting DPB compliance, these audits can lead to enforcement notices that point out the security shortcomings along with a schedule of when they should be corrected.

The actual teeth in the ICO’s enforcement is its power to issue fines of up to 4% of an organization’s worldwide revenue. It’s the same level of monetary penalties as in the original GDPR.

In short: the DPB is the GDPR, and smells as sweet.

For UK companies (and UK-based multinationals) that already have security controls and procedures in place — based on recognized standards like ISO 27001 — the DPB’s rules should not be a difficult threshold to meet.

However, for companies that have neglected basic data governance practices, particularly for the enormous amounts of data found in corporate file systems, the DPB will come as a bit of a shock.

CSOs, CIOs, and CPOs in these organizations will have to ask this question: do we want to conduct our own assessments and improve data security or let the ICO do it for us?

I think the answer is pretty obvious!

The Right to Be Forgotten and AI


One (of the many) confusing aspects of the EU General Data Protection Regulation (GDPR) is its “right to be forgotten”. It’s related to the right to erasure but takes in far more ground. The right to have your personal data deleted means that data held by the data controller must be removed on request by the consumer. The right to be forgotten refers more specifically to personal data the controller has made public on the Intertoobz.

Simple, right?

It ain’t ever that easy.

I came across a paper on this subject that takes a deeper look at the legal and technical issues around erasure and “forgetting”. We learn from the authors that deleting means something different when it comes to big data and artificial intelligence versus data held in a file system.

This paper contains great background on the recent history of the right to be forgotten, which is well worth your time.

Brief Summary of a Summary

Way back in 2010, a Mr. Costeja González brought a complaint against Google and a Spanish newspaper to Spain’s national Data Protection Authority (DPA). He noticed that when he entered his name into Google, the search results displayed a link to a newspaper article about a property sale made by Mr. González to resolve his personal debts.

The Spanish DPA dismissed the complaint against the newspaper — it had a legal obligation to publish the property sale. However, the DPA allowed the one against Google to stand.

Google’s argument was that since it didn’t have a true presence in Spain – no physical servers in Spain held the data – and the data was processed outside the EU, it wasn’t under the EU Data Protection Directive (DPD).

Ultimately, the EU’s highest judicial body, the Court of Justice, in their right to be forgotten ruling in 2014 said that: search engine companies are controllers; the DPD applies to companies that market their services in the EU  (regardless of physical presence); and consumers have a right to request search engine companies to remove links that reference their personal information.

With the GDPR becoming EU law in May 2018 and replacing the DPD, the right to be forgotten is now enshrined in article 17 and the extraterritorial scope of the decision can be found in Article 3.

However, what’s interesting about this case is that the original information about Mr. Gonzalez was never deleted — it still can be found if you search the online version of the newspaper.

So the “forgetting” part means, in practical terms, that a key or link to the personal information has been erased, but not the data itself.

Hold this thought.

Artificial Intelligence Is Like a Mini-Google

The second half of this paper starts with a very good computer science 101 look at what happens when data is deleted in software. For non-technical people, this part will be eye opening.

Technical types know that when you’re done with a data object in an app and its memory is erased or “freed”, the data does not in fact magically disappear. Instead, the memory chunk is put on a “linked list” that will eventually be processed and then made part of available software memory to be re-used again.

When you delete data, it’s actually put on a “take out the garbage” list.

This procedure is known as garbage collection, and it allows performance-sensitive software to delay the CPU-intensive data disposal to a later point when the app is not as busy.
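To make this concrete, here’s a toy allocator in Python that mimics the idea. It is purely illustrative (real collectors, such as CPython’s, are far more involved), and the class and the sample data are invented for the example:

```python
# Toy illustration of deferred deletion: "freeing" data just queues it on a
# free list; the bytes survive, readable, until a later collection pass.

class Allocator:
    def __init__(self):
        self.free_list = []            # chunks awaiting garbage collection

    def free(self, chunk):
        # "Deleting" only queues the chunk; its contents are untouched.
        self.free_list.append(chunk)

    def collect(self):
        # The deferred, CPU-intensive step: scrub chunks and recycle them.
        recycled = [bytearray(len(c)) for c in self.free_list]
        self.free_list.clear()
        return recycled

alloc = Allocator()
data = bytearray(b"alice@example.com")     # hypothetical personal data
alloc.free(data)
# The "deleted" personal data is still sitting on the garbage list:
assert bytes(alloc.free_list[0]) == b"alice@example.com"
zeroed = alloc.collect()                   # only now is it actually scrubbed
```

The point for GDPR purposes: between `free()` and `collect()`, the personal data exists in memory even though the application considers it deleted.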

Machine learning uses large data sets to train the software and derive decision making rules. The software is continually allocating and deleting data, often personal data, which at any given moment might be on a garbage collection queue waiting to be disposed.

What does it mean then to implement right to be forgotten in an AI or big data app?

The authors of the paper make the point that eliminating a single data point is not likely to affect the AI software’s rules. Fair enough. But certainly if tens or hundreds of thousands use their right to erasure under the GDPR, then you’d expect some of these rules to shift.

They also note that data can be disguised through certain anonymity techniques or pseudonymization as a way to avoid storing identifiable data, thereby getting around the right to be forgotten. Some of these anonymity techniques involve adding “noise” which may affect the accuracy of the rules.
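Pseudonymization itself can be sketched in a few lines. This is one simple technique (salted hashing) among many, shown only as an illustration; the identifier is hypothetical, and destroying the salt is what makes the mapping back to the person practically unrecoverable:

```python
# Sketch of pseudonymization via salted hashing: the stored token cannot be
# reversed to the identifier without the salt, so destroying the salt is one
# way to "forget". Illustrative only; real schemes need careful key handling.

import hashlib
import os

salt = os.urandom(16)   # the secret that links tokens back to people

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a salted SHA-256 token."""
    return hashlib.sha256(salt + identifier.encode()).hexdigest()

token = pseudonymize("alice@example.com")   # hypothetical personal data
assert token != "alice@example.com" and len(token) == 64
```

A training pipeline could store only such tokens; whether that truly escapes the GDPR’s definition of personal data depends on whether re-identification remains possible.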

This leads to an approach to implementing right to be forgotten for AI that we alluded to above: perhaps one way to forget is to make it impossible to access the original data!

A garbage collection process does this by putting the memory in a separate queue that makes it unavailable to the rest of the software—the software’s “handle” to the memory no longer grants access.  Google does the same thing by removing the website URL from its internal index.

In both cases, the data is still there but effectively unavailable.
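The “lose the key” idea can also be sketched directly. This is a hypothetical in-memory store, not any particular product’s design; the names are invented for the example:

```python
# Sketch of "forgetting by losing the key": the record is never physically
# erased, but removing the index entry makes it unreachable -- analogous to
# a search engine dropping a URL from its index.

store = {}   # append-only data store; records are never deleted here
index = {}   # the only access path from a person to their record

def remember(subject: str, record: str) -> None:
    handle = len(store)
    store[handle] = record
    index[subject] = handle

def forget(subject: str) -> None:
    index.pop(subject, None)   # the data survives; the access path does not

remember("gonzalez", "1998 property sale notice")   # hypothetical record
forget("gonzalez")
assert "gonzalez" not in index                       # no longer findable...
assert "1998 property sale notice" in store.values() # ...but never erased
```

Whether this counts as erasure under Article 17 is exactly the legal question the paper raises; technically, the data is merely unreachable.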

The Memory Key

The underlying idea behind AI forgetting is that you remove or delete the key that allows access to the data.

This paper ends by suggesting that we’ll need to explore more practical (and economic) ways to handle right to be forgotten for big data apps.

Losing the key is one idea. There are additional methods that can be used: for example, to break up the personal data into smaller sets (or silo them) so that it is impossible or extremely difficult to re-identify each separate set.

Sure, removing personal data from a file system is not necessarily easy, but it’s certainly solvable with the right products!

Agreed: AI forgetting involves additional complexity and solutions to the problem will differ from file deletion. It’s possible we’ll see some new erasure-like technologies in the AI area as well.

In the meantime, we’ll likely receive more guidance from EU regulators on what it means to forget for big data applications. We’ll keep you posted!

New York State Cyber Regulations Get Real


We wrote about NY’s innovative cyber regulations earlier this year. For those who don’t remember, the NY State Department of Financial Services (NYSDFS) launched GDPR-like cyber security regulations for its massive financial industry, including requirements for 72-hour breach reporting, limited data retention, and designation of a chief information security officer.

As legal experts have noted, New York leads the rest of the states in its tough data security rules for banks, insurance, and investment companies. And after Equifax, it has proposed extending these rules to credit reporting agencies that operate in the state.

Transition Period Has Ended

The NYS rules are very process-oriented and similar to the GDPR in requiring documented security policies, response planning, and assessments – basically you have to be able to “show your work”.

However, unlike the GDPR, there are also specific technical requirements that have to be complied with: for example, pen testing, multi-factor authentication, and limiting access privileges.

Anyway, the cyber regulations went into effect on March 1, 2017, but most of the rules have a 180-day grace period. That period ended in late August.

There are exceptions.

Some of the more technical requirements were extended up to one year (to March 1, 2018): for example, performing pen testing and vulnerability assessments and conducting periodic risk assessments. And up to 18 months for implementing audit trails and application-level security.

So NY financial companies have a little extra time for the nittier rules.

However, that extra time doesn’t apply to everything: the 72-hour breach reporting rule is already in effect!

Varonis Can Help

I’d like to add that the NYSDFS rules on breach reporting cover a far broader type of cyber event than any other state. Typically, state breach rules have language that requires notification for the exposure of certain types of PII data — see our totally awesome graphics to instantly visualize this.

While these NY rules protect similar types of PII as other states – social security and credit card numbers as well as online identifiers – financial companies in New York will also have to report on cybersecurity events, defined as follows:

Cybersecurity Event means any act or attempt, successful or unsuccessful, to gain unauthorized access to, disrupt or misuse an Information System or information stored on such Information System.

Note the language: any attempt to gain access or to disrupt or misuse a system. This encompasses not only standard data exposures where personal data is stolen, but also denial-of-service (DoS), ransomware, and any kind of post-exploitation where system tools are leveraged and misused.

Based on my reading and looking closely at the state’s FAQ, financial companies will have to notify NY regulators within 72 hours of data exposures involving PII and cybersecurity events “that have a reasonable likelihood of materially harming” normal operations – see Section 500.17.

With data attacks now becoming the new normal, this tough notification rule — first in the US! — will likely require IT departments to put in significant technical effort to meet this tight timeline.

Varonis can help NY financial companies.

Ask to see a demo of our DatAlert product and get right with NYSDFS!


Finding EU Personal Data With Regular Expressions (Regexes)


If there is one very important but under-appreciated point to make about complying with tough data security regulations such as the General Data Protection Regulation (GDPR), it’s the importance of finding and classifying the personally identifiable information, or personal data as it’s referred to in the EU. Discovering where personal data is located in file systems and the permissions used to protect it should be the first step in any action plan.

You don’t necessarily have to take our word for it; you can look at GDPR to-do lists from law firms and consulting groups that are heavily involved with advising companies on compliance.

We’ve already given you a heads up about Varonis GDPR Patterns, which helps you spot this personal data, and now that I’ve chatted and learned more from Sarah and the Varonis product development team, I’ve more to share.

Nobody Does It Better

GDPR Patterns is, of course, built on our Data Classification Framework or DCF. For those new to Varonis, DCF has an enormous advantage over other classification solutions, since it implements true incremental scanning. After the initial scan of the file system, DCF can quickly identify any changes, and then selectively scan those directories or folders that have been accessed. This makes far more sense than starting scanning from scratch!

By the way, for those crazy enough to think they can try rolling their own data scanning software, they can refer to my series of posts on a DIY classification system based on PowerShell. Please learn from my craziness and avoid the urge.

With DCF doing the heavy lifting, GDPR Patterns can focus on spotting EU-style personal data within files. According to the GDPR definition, personal data is effectively anything related to an individual that can identify that person. The definition’s very broad and deceptively vague language covers a lot of territory! (For more excruciating details, please refer to this official EU document.)

Obviously, we’re talking about all the usual suspects: names, addresses, phone numbers, credit card, bank and other account numbers. GDPR personal data also encompasses internet-era identifiers such as IP and email addresses, and futuristic biometric identifiers (DNA, retinal scans) as well.

Many EU Identifiers

The EU comprises 28 countries, and that means many identifiers vary by country. This is where the Varonis product team did the hard work of research, spending months analyzing phone numbers, license plate numbers, VAT codes, passports, driver’s licenses, and national identification numbers across the EU.

Does anybody know what the Hungarian personal identification code, known as Születési szám, looks like?

That would be an 11-digit sequence based on date of birth, gender, a unique number to separate those born on the same date, and a checksum.

Or what about a Slovakian passport number?

That’s nine characters: two digits followed by seven letters.

Varonis has worked all this out!

We use regular expressions, or regexes, to do pattern matching when possible. Crafting these regexes is not as easy as you might think.
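As a taste of what’s involved, the two formats just described could be roughed out in Python like this. These are simplified illustrations of mine, not the shipping Varonis regexes, and a real Hungarian ID check would also validate the checksum, which a regex alone can’t express:

```python
import re

# Simplified sketches of the formats described above (illustrative only)
HUNGARIAN_ID = re.compile(r"^\d{11}$")            # 11-digit personal code
SLOVAK_PASSPORT = re.compile(r"^\d{2}[A-Z]{7}$")  # 2 digits, then 7 letters

def looks_like_hungarian_id(s):
    return bool(HUNGARIAN_ID.match(s))

def looks_like_slovak_passport(s):
    return bool(SLOVAK_PASSPORT.match(s))
```

Note that an 11-digit rule like this will also match plenty of numbers that aren’t IDs at all, which is exactly the collision problem discussed below.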

If you want to match wits against the people who devised the Dutch license plate numbering scheme, you can click here to see a regex analysis of one sample number. And then you can try a few out on your own to see if you’ve got it. Enjoy!

A regular expression representing Dutch license plates. Think you understand it? Try your luck with the link above!

Patterns Are More Than Regexes

The research and effort we put into the regular expressions only forms part of the GDPR Patterns solution. Sure, it’s conceivable that someone could work out regexes for a few countries or do Google searches to find these expressions on the web.

However, we’ve crafted our regexes by looking at real-world data samples, and not automatically accepting what’s provided by government agencies and others. Our GDPR regexes have proven themselves in the field!

With so many different alphanumeric patterns, it shouldn’t be surprising that there’d be occasional “collisions” — sequences that could be classified into several types of personal data. For example, EU passport numbers range from 8 to 10 consecutive digits, so they’d also be caught by an EU phone number regex.

That is why we’ve also added validator algorithms to supplement the regexes. Specifically, GDPR Patterns scans for special keywords in proximity to the EU personal data: if we find a keyword, it helps zero in on the right GDPR pattern.

For example, when GDPR Patterns finds an 11-digit number, it looks for additional keywords to determine whether it represents a national personal ID: “IK” or “ISIKUKOOD” implies Estonia; “Születési szám”, “Személyi szám”, or “Személyi azonosító” would of course mean Hungary, etc.

If we don’t find the extra keywords, then we can’t assume the 11 digits are an identification code, and so it would not be classified as GDPR personal data. In other words, the validation algorithms reduce false positives.

In case you’re asking, we use negative keywords as well. If GDPR Patterns finds one of these keywords, the personal data caught by the regex can’t be classified under that pattern.
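Putting the last few ideas together, a keyword validator can be sketched like this. The keyword lists and proximity window here are invented for illustration; the real GDPR Patterns lists are far more complete:

```python
import re

# Toy keyword lists, invented for illustration
POSITIVE = {"isikukood": "Estonia",
            "születési szám": "Hungary",
            "személyi szám": "Hungary"}
NEGATIVE = {"invoice", "order number"}

def classify_11_digits(text, window=40):
    """Classify 11-digit hits as national IDs only when a supporting
    keyword appears nearby; negative keywords veto the match."""
    results = []
    for m in re.finditer(r"\b\d{11}\b", text):
        context = text[max(0, m.start() - window):m.end() + window].lower()
        if any(neg in context for neg in NEGATIVE):
            continue  # negative keyword: not personal data under this pattern
        for keyword, country in POSITIVE.items():
            if keyword in context:
                results.append((m.group(), country))
                break
        # no keyword at all: leave unclassified to reduce false positives
    return results
```

The design choice is the same one described above: when no supporting keyword turns up, the safe move is to not classify the hit at all rather than flood the report with false positives.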

More GDPR Patterns Details

The Varonis developers have dived deep into EU identification numbers, driver’s licenses, license plates, and phone numbers, looking at real-world samples to come up with both positive and negative keywords and proximity information.

We’ve integrated GDPR Patterns into our DatAdvantage reports to show which files contain a specific Pattern based on a hit count.

GDPR Patterns is also integrated with DatAlert so that notifications can be delivered when files containing personal data are accessed. We’ll help you meet the GDPR 72-hour breach notification requirement.

Data Transport Engine will also use GDPR Patterns to archive or remove stale or no longer useful EU personal data, another requirement in GDPR.

Have questions?  Contact us for more information.

New Post-Brexit UK Data Law: Long Live the GDPR!

New Post-Brexit UK Data Law: Long Live the GDPR!

The UK is leaving the EU to avoid the bureaucracy from Brussels, which includes having to comply with the General Data Protection Regulation (GDPR). So far, so good. However, since the EU is so important to its economy, the UK’s local data laws will in effect have to provide a very high level of protection — basically, GDPR-like — or else the EU won’t allow data transfers.

Then there is the GDPR’s new principle of extra-territoriality, or territorial scope — something we’ve yakked a lot about in the blog — which means non-EU countries will still have to deal with the GDPR.

Finally, as a practical matter the GDPR will kick in before the UK formally exits the EU. So the UK will be under the GDPR for at least a year or more no matter what.

Greater legal minds than mine have already commented on all this craziness.

The UK government looked at the situation and decided to bite the bullet or, more appropriately, eat the cold porridge.

Last week, the UK released a statement of intent that commits the government to scrapping their existing law, the Data Protection Act, and replacing it with a new Data Protection Bill.

Looks Familiar

This document is very clear about what the new UK data law will look like. Or as they say:

Bringing EU law into our domestic law will ensure that we help to prepare the UK for the future after we have left the EU. The EU General Data Protection Regulation (GDPR) and the Data Protection Law Enforcement Directive (DPLED) have been developed to allow people to be sure they are in control of their personal information while continuing to allow businesses to develop innovative digital services without the chilling effect of over-regulation. Implementation will be done in a way that as far as possible preserves the concepts of the Data Protection Act to ensure that the transition for all is as smooth as possible, while complying with the GDPR and DPLED in full.

In effect, the plan is to have a law that will mirror the GDPR, allowing UK companies to continue to do business as usual.

The Bill will include the GDPR’s new privacy rights for individuals: the “right to be forgotten”, data portability, and right to personal data access. And it will contain the GDPR’s obligations for controllers to report breaches, conduct impact assessments involving sensitive data, and designate data protection officers.

What about the GDPR’s considerable fines?

The UK has also gone along with the EU data law’s tiered structure – fines of up to 4% of global turnover (revenue).

Her Majesty’s Government may have left the EU, but EU laws for data privacy and security will remain. The GDPR is dead, long live the GDPR!

GDPR Resources

Of course, the new Bill will have its own articles, with a different wording and numbering scheme than the GDPR. And legal experts will no doubt find other differences — we’ll have to wait for the new law. Having said that, our considerable resources on the EU data law remain relevant.

For UK companies reading this post and looking for a good overview, here are three links that should help:

For a deeper dive into the GDPR, we offer for your edification these two resources:

And feel free to search the IOS blog and explore the GDPR on your own!

A Few Thoughts on Data Security Standards

A Few Thoughts on Data Security Standards

Did you know that the 462-page NIST 800-53 data security standard has 206 controls with over 400 sub-controls1? By the way, you can gaze upon the convenient XML-formatted version here. PCI DSS is no slouch either, with hundreds of sub-controls in its requirements document. And then there’s the sprawling ISO 27001 data standard.

Let’s not forget about security frameworks, such as COBIT and NIST CSF, which are kind of meta-standards that map into other security controls. For organizations in health or finance that are subject to US federal data security rules, HIPAA and GLBA’s data regulations need to be considered as well. And if you’re involved in the EU market, there’s GDPR; in Canada, it’s PIPEDA; in the Philippines, it’s this, etc., etc.

There’s enough technical and legal complexity out there to keep teams of IT security pros, privacy attorneys, auditors, and diplomats busy till the end of time.

As a security blogger, I’ve also puzzled and pondered over the aforementioned standards and regulations. I’m not the first to notice the obvious: data security standards fall into patterns that make them all very similar.

Security Control Connections

If you’ve mastered and implemented one, then very likely you’re compliant with others as well. In fact, that’s one good reason for having frameworks. For example, with, say, NIST CSF, you can leverage your investment in ISO 27001 or ISA 62443 through their cross-mapped control matrix (below).

Got ISO 27001? Then you’re compliant with NIST CSF!

I think we can all agree that most organizations will find it impossible to implement all the controls in a typical data standard with the same degree of attention — when was the last time you checked the physical access audit logs for your data transmission assets (NIST 800-53, PE-3b)?

So to make it easier for companies and the humans that work there, some of the standards groups have issued further guidelines that break the huge list of controls into more achievable goals.

The PCI group has a prioritized approach to dealing with their DSS — six practical milestones that are broken into a smaller subset of relevant controls. They also have a best practices guide that groups security controls — and this is important — into three broader functional areas: assessment, remediation, and monitoring.

In fact, we wrote a fascinating white paper explaining these best practices, and how you should be feeding back the results of monitoring into the next round of assessments. In short: you’re always in a security process.

NIST CSF, which itself is a pared down version of NIST 800-53, also has a similar breakdown of its controls into broader categories, including identification, protection, and detection. If you look more closely at the CSF identification controls, which mostly involve inventorying your IT data assets and systems, you’ll see that the main goal in this area is to evaluate or assess the security risks of the assets that you’ve collected.

File-Oriented Risk Assessments

In my mind, the trio of assess, protect, and monitor is a good way to organize and view just about any data security standard.

In dealing with these data standards, organizations can also take a practical short-cut through these controls based on what we know about the kinds of threats appearing in our world — and not the one that data standards authors were facing when they wrote the controls!

We’re now in a new era of stealthy attackers who enter systems undetected, often through phishing emails, leveraging previously stolen credentials, or zero-day vulnerabilities. Once inside, they can fly under the monitoring radar with malware-free techniques, find monetizable data, and then remove or exfiltrate it.

Of course it’s important to assess, protect and monitor network infrastructure, but these new attack techniques suggest that the focus should be inside the company.

And we’re back to a favorite IOS blog theme. You should really be making it much harder for hackers to find the valuable data — like credit card or account numbers, corporate IP — in your file systems, and detect and stop the attackers as soon as possible.

Therefore, when looking at how to apply typical data security controls, think file systems!

For, say, NIST 800-53, that means scanning file systems, looking for sensitive data, examining the ACLs or permissions, and then assessing the risks (CM-8, RA-2, RA-3). For remediation or protection, this would involve reorganizing Active Directory groups and resetting ACLs to be more exclusive (AC-6). For detection, you’ll want to watch for unusual file system accesses that likely indicate hackers borrowing employee credentials (SI-4).

I think the most important point is not to view these data standards as just an enormous list of disconnected controls, but instead to consider them in the context of assess-protect-monitor, and then apply them to your file systems.

I’ll have more to say on a data or file-focused view of data security controls in the coming weeks.

1 How did I know that NIST 800-53 has over 400 sub-controls? I took the XML file and ran these amazing two lines of PowerShell:

# Load the NIST 800-53 controls XML and parse it into an XML object
[xml]$controls = Get-Content 800-53-controls.xml
# Pull out each control's statement numbers, then count the lines
$controls.controls.control | ForEach-Object { $_.statement.statement.number } | Measure-Object -Line

GDPR: Troy Hunt Explains it All in Video Course

GDPR: Troy Hunt Explains it All in Video Course

You’re a high-level IT security person who’s done the grunt work of keeping your company compliant with PCI DSS, ISO 27001, and a few other security abbreviations, and one day you’re in a meeting with the CEO, CSO, and CIO. When the subject of the General Data Protection Regulation or GDPR comes up, all the Cs agree that there are some difficulties, but everything will be worked out.

You are too afraid to ask, “What is the GDPR?”

Too Busy for GDPR

We’ve all been there, of course. Your plate has been full over the last few weeks and months hunting down vulnerabilities, hardening defenses against ransomware and other malware, upgrading your security, along with all the usual work involved in keeping the IT systems humming along.

So it’s understandable that the General Data Protection Regulation may have flown under your radar.

However, there’s no need to panic.

The GDPR shares many similarities with other security standards and regulations, so it’s just a question of learning some basic background, the key requirements of the new EU law, and a few gotchas, preferably explained by an instructor with a knack for connecting with IT people.

Hunt on GDPR

And that’s why we engaged Troy Hunt to develop a 7-part video course on the GDPR. Troy is a web security guru, Australian Microsoft Regional Director, and author whose security writing has appeared in Forbes, Time Magazine, and Mashable. And he’s no stranger to this blog, either!

Let’s get back to you and other busy IT security folks who need to get up to speed quickly. With just an hour of your time, Troy will cover the basic vocabulary and definitions (“controller”, “processor”, “personal data”), the key concept underlying the GDPR (personal data is effectively owned by the consumer), and what you’ll need to do to keep your organization compliant (effectively, minimize and monitor this personal data).

By the way, Troy also explains how US companies, even those without EU offices, can get snagged by the GDPR’s territorial scope rule — Article 3, to be exact. US-based e-commerce companies: you’ve been warned!

While Troy doesn’t expect you to be an attorney, he analyzes and breaks down a few of the more critical requirements and the penalties for not complying, particularly on breach reporting, so that you’ll be able to keep up with some of the legalese when it arises at your next GDPR meeting.

And I think you’ll see by the end of the course that while there may be some new aspects to this EU law, as Troy notes, the GDPR really legislates IT common sense.

What are you waiting for? Register and get GDPR-aware starting today!

[Transcript] Interview With GDPR Attorney Sue Foster

[Transcript] Interview With GDPR Attorney Sue Foster

Over two podcasts, attorney Sue Foster dispensed incredibly valuable GDPR wisdom. If you’ve already listened, you know they’re the kind of insights that would otherwise have required a lengthy Google expedition, followed by chatting with your cousin Vinny the lawyer. We don’t recommend that!

In reviewing the transcript below, I think there are three points worth commenting on. One, the GDPR’s breach reporting rule may appear to give organizations some wiggle room. But in fact that’s not the case! The reference to “rights and freedoms of natural persons” refers to explicit privacy and property rights spelled out in the EU Charter. This ain’t vague language.

However, there is some leeway in reporting within the 72-hour time frame. In short: you have to make a good effort, but you can delay if, say, you’re currently investigating and need more time because otherwise you’d compromise the investigation.

Two, the territorial scope requirements in Article 3 are complicated by what it means to target EU citizens in your marketing. The very tricky part is when you’re a multinational company that has both an EU and a non-EU presence. If you read closely, Foster is suggesting that EU citizens who happen to find their way to, say, your US web site would not be protected by the GDPR.

In other words, if the company’s general marketing doesn’t target EU citizens, then the information collected is not under GDPR protections. But that would not apply to a company’s localized web content for, say, the French or German markets — information submitted through those sites would of course be under the GDPR.

Yes, I will confirm this with Foster. But if this is not the case for multinationals, then it would cause a pretty large mal de tête.

Three, GDPR compliance is based on, as Foster notes, a “show your work” principle, the same as on math tests in high school. It is not like PCI DSS, where you’re going down a checkoff list: Two-Factor Authentication? Yes. Vulnerability Scanning? Yes. Etc.

The larger issue is that security technology will change and so what worked well in the past will likely not hold up in the future. With GDPR, you should be able to justify your security plan based on the current state of security technology and document what you’ve done.

Enough said.

Inside Out Security
Sue Foster is a partner with Mintz Levin, based out of the London office. She works with clients on European data protection compliance and on commercial matters in the fields of clean tech, high tech, mobile media, and life sciences. She’s a graduate of Stanford Law School. She is also, and we like this here at Varonis, a Certified Information Privacy Professional.

I’m very excited to be talking to an attorney with a CIPP, and with direct experience on a compliance topic we cover on our blog — the General Data Protection Regulation, or GDPR.

Welcome, Susan.

Sue Foster
Hi Andy. Thank you very much for inviting me to join you today. There’s a lot going on in Europe around cybersecurity and data protection these days, so it’s a fantastic set of topics.
IOS
Oh terrific. So what are some of the concerns you’re hearing from your clients on GDPR?
SF
So one of the big concerns is getting to grips with the extra-territorial reach. I work with a number of companies that don’t have any office or other kind of presence in Europe that would qualify them as being established in Europe.

But they are offering goods or services to people in Europe. And for these companies, you know, in the past they’ve had to go through quite a bit of analysis to understand whether the Data Protection Directive applies to them. Under the GDPR, it’s a lot clearer, and there are rules that are easier for people to understand and follow.

So now when I speak to my U.S. clients, if they’re a non-resident company that promotes goods or services in the EU, including free services like a free app, for example, they’ll be subject to the GDPR. That’s very clear.

Also, if a non-resident company is monitoring the behavior of people who are located in the EU, including tracking and profiling people based on their internet or device usage, or making automated decisions about people based on their personal data, the company is subject to the GDPR.

It’s also really important for U.S. companies to understand that there’s a new ePrivacy Regulation in draft form that would cover any provider, regardless of location, of any form of publicly available electronic communication services to EU users.

Under this ePrivacy Regulation, the notion of what these communication services providers are is expanded from the current rules, and it includes things that are called over-the-top applications – so messaging apps and communications features, even when a communication feature is just something that is embedded in a website.

If it’s available to the public and enables communication, even in a very limited sort of forum, it’s going to be covered. That’s another area where U.S. companies are getting to grips with the fact that European rules will apply to them.

So there’s this new security regulation as well that may apply to companies located outside the EU. All of these things are combining to suddenly force a lot of U.S. companies to get to grips with European law.

IOS
So just to clarify, let’s say a small U.S. social media company that doesn’t market specifically to EU countries and doesn’t have a website in the language of some EU country: would they or would they not fall under the GDPR?
SF
On the basis of their [overall] marketing activity they wouldn’t. But we would need to understand if they’re profiling or they’re tracking EU users or through viral marketing that’s been going on, right? And they are just tracking everybody. And they know that they’re tracking people in the EU. Then they’re going to be caught.

But if they’re not doing that, if they’re not engaging in any kind of tracking, profiling, or monitoring activities, and they’re not affirmatively marketing into the EU, then they’re outside of the scope. Unless, of course, they’re offering some kind of service that falls under one of these other regulations that we were talking about.

IOS
What we’re hearing from our customers is that the 72-hour breach reporting rule is a concern. Our customers are confused, and after looking at some of the fine print, we are as well! So I’m wondering if you could explain the breach reporting in terms of thresholds: what needs to happen before a report is made to the DPAs and consumers?
SF
Sure, absolutely. First, it’s important to look at the specific definition of a personal data breach. It means a breach of security leading to the ‘accidental or unlawful destruction, loss, alteration, unauthorized disclosure of, or access to personal data’. So it’s fairly broad.

The requirement to report these incidents has a number of caveats. So you have to report the breach to the Data Protection Authority as soon as possible, and where feasible, no later than 72 hours after becoming aware of the breach.

Then there’s a set of exceptions. And that is unless the personal data breach is unlikely to result in a risk to the rights and freedoms of natural persons. So I can understand why U.S. companies would sort of look at this and say, ‘I don’t really know what that means’. How do I know if a breach is likely to ‘result in a risk to the rights and freedoms of natural persons’?

Because that’s not defined anywhere in this regulation!

It’s important to understand that that little bit of text is EU-speak that really refers to the Charter of Fundamental Rights of the European Union, which is part of EU law.

There is actually a document you can look at to tell you what these rights and freedoms are. But you can think of it basically in common sense terms. Are the person’s privacy rights affected, are their rights and the integrity of their communications affected, or is their property affected?

So you could, for example, say that there’s a breach that isn’t likely to reveal information that I would consider personally compromising in a privacy perspective, but it could lead to fraud, right? So that could affect my property rights. So that would be one of those issues. Basically, most of the time you’re going to have to report the breach.

When you’re going through the process of working out whether you need to report the breach to the DPA, and you’re considering whether or not the breach is likely to result in a risk to the rights and freedoms of natural persons, one of the things that you can look at is whether people are practically protected.

Or whether there’s a minimal risk because of steps you’ve already taken such as encrypting data or pseudonymizing data and you know that the key that would allow re-identification of the subjects hasn’t been compromised.

So these are some of the things that you can think about when determining whether or not you need to report to the Data Protection Authority.

If you decide you have to report, you then need to think about ‘do you need to report the breach to the data subjects’, right?

And the standard there is that it has to be a “high risk to the rights and freedoms of natural persons”. So a high risk to someone’s privacy rights or rights in their property and things of that sort.

And again, you can look at the steps that you’ve taken to either prevent the data from — you know before it even was leaked — prevent it from being potentially vulnerable in a format where people could be damaged. Or you could think also whether you’ve taken steps after the breach that would prevent those kinds of risks from happening.

Now, of course, the problem is the risk of getting it wrong, right?

If you decide that you’re not going to report after you go through this full analysis, and the DPA disagrees with you, now you’re running the risk of a fine of up to 2% of the group’s global turnover, or gross revenue around the world.

And that, I think, is going to lead to a lot of companies being cautious and reporting even when they might have been able to take advantage of some of these exceptions, because they won’t feel comfortable with that.

IOS
I see. So just to bring it to more practical terms: we can assume that, let’s say, credit card numbers or some other identification numbers, if they were breached or taken, would have to be reported both to the DPA and the consumer?
SF
Most likely. I mean if it’s…yeah almost certainly. Particularly if the security code on the back of the card has been compromised, and absolutely you’ve got a pretty urgent situation. You also have a responsibility to basically provide a risk assessment to the individuals, and advise them on steps that they can take to protect themselves such as canceling their card immediately.
IOS
One hypothetical that I wanted to ask you about is the Yahoo breach, which technically happened a few years ago. I think it was over two years ago … Let’s say something like that had happened after the GDPR where a company sort of had known that there was something happening that looked like a breach, but they didn’t know the extent of it.

If they had not reported it, and waited until after the 72-hour rule, what would have happened to let’s say a multinational like Yahoo?

SF
Well, Yahoo would need to go through the same analysis. Think about a breach on that scale, with the level of access to the Yahoo users’ accounts that resulted from those breaches, and of course the fact that it’s very common for individuals to reuse passwords across different sites, so you have, you know, follow-on risks.

It’s hard to imagine they would be in a situation where they would be off the hook for reporting.

Now the 72-hour rule is not hard and fast.

But the idea is you report as soon as possible. So you can delay for a little while if it’s necessary for say a law enforcement investigation, right? That’s one possibility.

Or if you’re doing your own internal investigation and somehow that would be compromised or taking security measures would be compromised in some way by reporting it to the DPA. But that’ll be pretty rare.

Obviously going along for months and months with not reporting it would be beyond the pale. And I would say a company like Yahoo would potentially be facing a fine of 2% of its worldwide revenue!

IOS
So this is really serious business, especially for multinationals.

This is also a breach-reporting-related question, and it has to do with ransomware. We’re seeing a lot of ransomware attacks these days. In fact, when we visit customer sites and analyze their systems, we sometimes see these attacks happening in real time. Since a ransomware attack encrypts the file data but most of the time doesn’t actually take the personal data, would that breach have to be reported or not?

SF
This is a really interesting question! I think the by-the-book answer is, technically, if a ransomware attack doesn’t lead to the accidental or unlawful destruction, loss, or alteration or unauthorized disclosure of or access to the personal data, it doesn’t actually fall under the GDPR’s definition of a personal data breach, right?

So, if a company is subject to an attack that prevents it from accessing its data, but the intruder cannot itself access, change, or destroy the data, you could argue it’s not a personal data breach, and therefore not reportable.

But it sure feels like one, doesn’t it?

IOS
Yes, it does!
SF
Yeah. I suspect we’re going to find that the new European Data Protection Board will issue guidance that somehow brings ransomware attacks into the fold of what’s reportable. Don’t know that for sure, but it seems likely to me that they’ll find a way to do that.

Now, there are two important caveats.

Even though, technically, a ransomware attack may not be reportable, companies should remember that a ransomware attack could cause them to be in breach of other requirements of the GDPR, like the obligation to ensure data integrity and accessibility of the data.

Because by definition, you know, the ransomware attack has made the data inaccessible and has totally corrupted its integrity. So, there could be a liability there under the GDPR.

And also, the company that’s suffering the ransomware attack should consider whether they’re subject to the new Network and Information Security Directive, which is going to be implemented in national laws by May 9th of 2018. So again, May 2018 being a real critical time period. That directive requires service providers to notify the relevant authority when there’s been a breach that has a substantial impact on the services, even if there was no GDPR personal data breach.

And the Network and Information Security Directive applies to a wide range of companies, including those that provide “essential services”. Sort of the fundamentals that drive the modern economy: energy, transportation, financial services.

But also, it applies to digital service providers, and that would include cloud computing service providers.

You know, there could be quite a few companies that are being held up by ransomware attacks who are in the cloud space, and they’ll need to think about their obligations to report even if there’s maybe not a GDPR reporting requirement.

IOS
Right, interesting. Okay. As a security company, we’ve been preaching Privacy by Design principles, data minimization, and retention limits, and in the GDPR they’re now actually part of the law.

The GDPR is not very specific about what has to be done to meet these Privacy by Design ideas, so do you have an idea what the regulators might say about PbD as they issue more detailed guidelines?

SF
They’ll probably tell us more about the process but not give us a lot of insight as to specific requirements, and that’s partly because the GDPR itself is very much a show-your-work regulation.

You might remember back on old, old math tests, right? When you were told, ‘Look, you might not get the right answer, but show all of your work on that calculus problem and you might get some partial credit.’

And it’s a little bit like that. The GDPR is a lot about process!

So, the push for Privacy by Design is not to say that there are specific requirements other than paying attention to whatever the state of the art is at the time. So, really looking at the available privacy solutions at the time and thinking about what you can do. But a lot of it is about just making sure you’ve got internal processes for analyzing privacy risks and thinking about privacy solutions.

And for that reason, I think we’re just going to get guidance that stresses that, develops that idea.

But any guidance that told people specifically what security technologies they needed to apply would probably be good for, you know, 12 or 18 months, and then something new would come along.

Where we might see some help is, eventually, in terms of ISO standards. Maybe there’ll be an opportunity in the future for something that comes along that’s an international standard, that talks about the process that companies go through to design privacy into services and devices, etc. Maybe then we’ll have a little more certainty about it.

But for now, and I think for the foreseeable future, it’s going to be about showing your work, making sure you’ve engaged, and that you’ve documented your engagement, so that if something does go wrong, at least you can show what you did.

IOS
That’s very interesting, and a good thing to know. One last question: we’ve been following some of the security problems related to Internet of Things devices, which are gadgets on the consumer market that can include internet-connected coffee pots, cameras, and children’s toys.

We’ve learned from talking to testing experts that vendors are not really interested in PbD. It’s ship first, maybe fix security bugs later. Any thoughts on how the GDPR will affect IoT vendors?

SF
It will definitely have an impact. The definition of personal data under the GDPR is very, very broad. So, effectively, anything I say that a device picks up is my personal data, as well as data about me, right?

So, if you think about a device that knows my shopping habits that I can speak to and I can order things, everything that the device hears is effectively my personal data under the European rules.

And Internet of Things vendors do seem to be lagging behind in Privacy by Design. I suspect we’re going to see investigations and fines in this area early on, when the GDPR starts being enforced in May 2018.

Because the stories about the security risks of, say, children’s toys have really caught the attention of the media and the public, and the regulators won’t be far behind.

And now we have fines for breaches that can reach 2% or even 4% of a group’s global turnover, depending on the violation. It’s an area that is ripe for enforcement activity, and I think it may be a surprise to quite a few companies in this space.

It’s also really important to go back to this important theme that there are other regulations, besides the GDPR itself, to keep track of in Europe. The new ePrivacy Regulation contains some provisions targeted at the Internet of Things, such as the requirement to get consent from consumers for machine-to-machine transfers of communications data, which is going to be very cumbersome.

The [ePrivacy] Regulation says you have to do it, but it doesn’t really say how you’re going to get consent, meaningful consent, which is a very high standard in Europe, for these transfers when there’s no real intelligent interface between the device and the consumer who’s using it. Some devices have maybe a web dashboard, or some kind of app that you use to communicate with your device, where you could have privacy settings.

There’s other stuff that’s much more behind the scenes with the Internet of Things, where the user doesn’t have a high level of engagement. So, say, a smart refrigerator that’s relaying information about energy consumption to the grid. Even there, there’s potentially information where the user is going to have to give consent to the transfer.

And it’s hard to kind of imagine exactly what that interface is going to look like!

I’ll mention one thing about the ePrivacy Regulation: it’s in draft form. It could change, and that’s important to know. But it’s not likely to change all that much, and it’s on a fast-track timeline, because the Commission would like to have it in place and ready to go in May 2018, at the same time as the GDPR.

IOS
Sue Foster, I’d like to thank you again for your time.
SF
You’re very welcome. Thank you very much for inviting me to join you today.