The bottom line here: Facebook did curtail access to profile information after 2014 (see the Cambridge Analytica scandal). But — and this is a very important qualifier — those restrictions applied only to developers using the public API. The breaking news from the New York Times is that “special partnerships” with other large tech companies were not subject to the same controls.
What does this mean?
Functionally, I think, we find ourselves in a situation similar to Cambridge Analytica — perhaps with better-intentioned actors on the other end. I’ll claim the real concern here is not that Facebook let this data go (it was bound to happen). Instead, it’s that the data is out there and we’re not really sure how to track its travel after it leaves FB’s metaphorical gates. We don’t know what exactly happened to the data once it arrived on the systems of Bing, Yahoo, Netflix, Spotify — and the 144 other companies that had partnerships.
Even if we put complete faith in all 150 companies and believe they never misused the data, we’ve seen that these savvy companies are not immune to security incidents or breaches. (On top of everything else, this is a pretty charitable interpretation of the problem; the Times cites data sharing agreements with “automakers and media organizations”, not paragons of good cybersecurity.)
The core of this issue? We have no idea how the data — perhaps used to train machine learning models or generate features that make their way to advertisers — will trickle out in various transformed and aggregate forms. The relative difficulty of finding out where valuable pieces of data have gone—likes, messages, public and private communication—makes it hard to quantify the potential harms of this policy.
To be frank, I wouldn’t be surprised if the majority of the data didn’t go anywhere: I’m sure most of the private messages that Spotify could theoretically access were never accessed, and that much of the data they held for legitimate reasons has stayed on their servers. However, even if one subset of private data from an obscure automaker’s infotainment systems gets hacked or misused, harm has been done. This is the same fear we had about Cambridge Analytica: sure, they may have used it for unsavory political ends, but we don’t know how they post-processed, combined, sold, or transformed our data afterwards. Did it get aggregated and sold on to advertisers? Perhaps to peddlers of foreign influence? Or, more charitably, is it languishing unused on a server?
There’s also the small matter of the Federal Trade Commission’s 2011 consent decree, which barred Facebook from sharing user data without “explicit permission.” Facebook’s current argument is that partners were extensions of the Facebook platform, and that sharing with them therefore wasn’t subject to the terms of the decree. In a recent blog post on the subject of partnerships, they state:
Our integration partners had to get authorization from people. You would have had to sign in with your Facebook account to use the integration offered by Apple, Amazon or another integration partner.
On the subject of access to Facebook messages, they also state:
[P]eople had to explicitly sign in to Facebook first to use a partner’s messaging feature.
They also discuss their Instant Personalization feature which allowed select partners to reference part of your Facebook social graph in their own products, calling it an ability for you to “see public information [your] friends shared” embedded in other services.
Across these contexts, I believe Facebook’s argument that the consent decree does not apply is weak. While it’s true that users take a concrete action when they link a service to Facebook by logging in with their credentials, I’m not convinced that this substitutes for affirmative consent. Users’ perceptions of their data flows do not always align with the realities of the underlying technical systems, and it’s not clear that users were made aware that their data flowed through third parties—even if the data was never stored. A court could plausibly find this a violation of the consent decree.
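The gap between a login action and genuine scope-level consent can be made concrete with a toy model. The sketch below is purely illustrative (all names are hypothetical; this is not Facebook’s actual access-control logic): it contrasts an ordinary developer, gated by the permissions a user explicitly granted at login, with a whitelisted partner whose reads bypass those grants.

```python
# Hypothetical sketch: scope-checked API access vs. a partner whitelist
# that bypasses per-user consent. Names and structures are invented.

USER_GRANTED_SCOPES = {"alice": {"public_profile", "email"}}  # approved at login
PARTNER_WHITELIST = {"acme_streaming"}  # partners exempt from scope checks

def can_read(caller: str, user: str, field_scope: str) -> bool:
    """Return True if `caller` may read a field guarded by `field_scope`."""
    if caller in PARTNER_WHITELIST:
        # Partner integrations: access regardless of what the user granted.
        return True
    # Ordinary developers: limited to scopes the user explicitly approved.
    return field_scope in USER_GRANTED_SCOPES.get(user, set())

# An ordinary app cannot read messages Alice never shared with it...
assert can_read("ordinary_app", "alice", "read_mailbox") is False
# ...but a whitelisted partner can, though Alice granted no such scope.
assert can_read("acme_streaming", "alice", "read_mailbox") is True
```

Signing in establishes the link, but under the whitelist path the user’s granted scopes never constrain what the partner can read, and that is the heart of the consent objection.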
Trust and Consent
I also believe this is an indication that we need to be mindful when we discuss consent and data. Consent to the handling of data is very different from consent to its collection, and platforms—and users—have been conflating the two for a while. An unrelated story from BuzzFeed on the sharing of Facebook advertising identifiers touches on this:
A Facebook representative clarified to BuzzFeed News that while it enables users to opt out of targeted ads from third parties, the controls apply to the usage of the data and not its collection.
Is this the right model to be using for consent? The European Union’s GDPR approximates this split by differentiating between data controllers and processors, but we have no such definition here in the U.S. outside of certain specific sectors. I think there’s a lot of thinking and work to be done here, especially on the subject of these fine-grained consent questions.
Finally, there’s the matter of trust. This is Facebook’s fourth or fifth major breach of trust in less than two years. I’m sure FB will always have a lot of staying power thanks to platform effects and its insane critical mass, but I have to wonder where the tipping point is. Will there be calls for regulation? Personally, I’m in favor of forcing Facebook to bust open its walled garden and interconnect with outside systems on kinder terms. At the same time, though, others are calling for it to be broken up antitrust-style — and I think they may have a point too.