April 20, 2018 at 07:23AM
Facebook users in Europe are reporting that the company has started giving them the option to turn on its controversial facial recognition technology.
Jimmy Nsubuga, a journalist at Metro, is among several European Facebook users who have reported receiving notifications asking if they want to turn on facial recognition technology.
Facebook has previously said an opt-in option would be pushed out to all European users, and also globally, as part of changes to its T&Cs and consent flow.
In Europe the company is hoping to convince users to voluntarily allow it to deploy the privacy-hostile tech, which it switched off in the bloc in 2012 under regulatory pressure, after it had begun using facial recognition to offer features such as automatically tagging users in photo uploads.
But under impending changes to its T&Cs, ostensibly to comply with the EU's incoming GDPR data protection standard, the company has crafted a manipulative consent flow that tries to sell people on handing over their data, including filling in its own facial recognition blanks by convincing Europeans to agree to it collecting and using their biometric data after all.
Users who choose not to switch on facial recognition still have to click through a ‘continue’ screen before they get to the off switch. On this screen Facebook attempts to convince them to turn it on — using manipulative examples of how the tech can “protect” them.
As journalist Jennifer Baker, another Facebook user who has already received the notifications, points out, what the company is doing here is deeply disingenuous, because it is using fear to try to manipulate people's choices.
Sure #Facebook, I'll take a milisecond to consider whether you want me to enable #facialrecognition for my own protection or your #data #tracking business model. #Disingenuous pricks! pic.twitter.com/s7nngaHVSq
— Jennifer Baker (@BrusselsGeek) April 20, 2018
Under the EU’s incoming data protection framework Facebook cannot automatically opt users into facial recognition — it has to convince people to switch the tech on themselves. So it is emphasizing that users can choose whether or not to enable the technology.
But data protection experts we spoke to earlier this week do not believe Facebook’s approach to consent will be legal under GDPR.
Essentially, this is big data-powered manipulation of human decision-making — until the ‘right’ answer (for Facebook’s business) is ‘selected’ by the user. In other words, not freely given, informed consent at all.
Legal challenges are certain at this point.
A Facebook spokeswoman confirmed to TechCrunch that any European users who are being asked about the tech now, ahead of the May 25 GDPR deadline, are part of its rollout of platform changes intended to comply with the incoming standard.
“The flow is not a test, it is part of a rollout we are doing across the EU,” she said. “We are asking people for opt-in consent for three things — third party data for ads, facial recognition and the permission to process their sensitive data.”
She also confirmed that Facebook did run a test of “a very similar version of this flow to a small percentage of users in the EU back in March”, adding: “The flow + wording was broadly the same. At all times it was opt-in.”
The problem is, given Facebook controls the entire consent flow, and can rely on big data insights gleaned from its own platform (of 2BN+ users), this is not even remotely a fair fight. Manipulated acceptance is not consent.
But legal challenges take time. And in the meanwhile Facebook users are being socially engineered, with selective examples and friction, into agreeing with things that align with the company’s data-harvesting business interests — handing over sensitive personal data without understanding the full implications of doing so.
It’s not clear exactly how many Facebook users were part of the earlier flow test. It’s likely the company used minor variations in wording (its spokeswoman said the test wording was only “broadly” the same) to determine, via an A/B testing process, which consent screens were most successful at convincing people to accept the highly privacy-hostile technology.
Last month — when Facebook said it would be rolling out “a limited test of some of the additional choices we’ll ask people to make as part of GDPR” — it also said it would start “by asking only a small percentage of people so that we can be sure everything is working properly”.
Interestingly, it did not put a number on how many people were involved in that test. And Facebook’s spokeswoman did not provide an answer when we asked.
The company was likely hoping the test would not attract too much attention — given how much GDPR news is flowing through its PR channels, and how much attention the topic is generally sucking up.
But depending on how successful those tests prove to be at convincing Europeans to let it have and use their facial biometric data, millions of additional Facebook users could soon be providing the company with fresh streams of sensitive data — and having their fundamental rights trampled on, yet again, thanks to a very manipulative consent flow.
This article was updated with a series of corrections after Facebook confirmed the notifications are in fact the rollout of its new consent flow, not part of the earlier test. The company has also told us categorically that no users were auto-enrolled in facial recognition tech in Europe, even during the test, and the article has been amended accordingly.