Facebook and the end of the world

FB under fire.

EPISODE 2: March 20, 2018.

When the world goes up in flames, the handful of people left in the burning ruins of civilization will shrug, look at their feet, and—from inside a deep black hole of unending ennui—mumble pathetically how ironic and silly it is that the thing that ultimately took us all down was Facebook.

Fucking Facebook.

How sad, how tragic, to stumble towards the end of history with the feeling that the maniac with his finger on the button is there because we all signed up for Mark Zuckerberg’s social network, and he in turn did everything he could to make us into targets. How depressing to know that looking at those adorable baby photos and trying to avoid family members and taking dumb personality quizzes were all just a way of selling out our futures.

Fucking Facebook.

As I’m writing this, Facebook is putting up strange defences—often led by senior figures who have little clout outside the industry (the kind who seem to be on their way out) while Zuckerberg and Sheryl Sandberg are basically AWOL. It’s a strange moment to witness.

Hypocrisies are racking up faster than anyone can keep up with, and denial and whataboutism are strewn everywhere. There’s Facebook claiming it has huge influence in order to win advertisers, while simultaneously saying it’s not really a big deal when it’s put under regulatory scrutiny. There’s Cambridge Analytica doing something similar by saying they’ve done nothing untoward while promising on camera that they can push fake news at a micro level and get Ukrainian sex workers to blackmail political opponents. Oh, and there’s the red herring they’re trying out that an unauthorized use of data obtained through deceptive practices is not a “breach”… so stop looking so hard! It’s a feature, not a bug!

That *is* true. Facebook built this machine on purpose. And yet, really, if the ascendancy of Trump and Brexit shows us anything, it’s that hypocrisy in the public sphere means less and less each day in the face of power and division.

This is not an unexpected turn of events. There are dozens of prominent campaigners who have been warning about Facebook’s fundamental structures and imperatives for a long time. The warping of the web, first into nodes and then into closed platforms, has created a set of incentives that have combined with an ideology of disinterest. The danger signs have been there, wriggling around uncomfortably in plain sight, the whole time.

I remember writing a piece that went bonkers back in 2010, based on remarks Zuckerberg made. On a public stage, he said privacy was “no longer a social norm”—that we were sharing things we’d never shared before, and that our concept of privacy was essentially dead. Facebook pushed back on my story, of course. Zuck didn’t mean it like that! He was just observing a shift in people’s behavior where they are more and more willing to put information about themselves on the internet!

Except it’s one thing to observe a shift—but quite another to be a willing enabler of, and profiteer from, that shift. This was not so long after the Beacon debacle, and, like many missteps Facebook has made over the years, Beacon came back soon afterwards in the form of another product, Facebook Connect. The transgression, for them, wasn’t the underlying idea, but the failure to understand how it would be received. And the underlying idea is what these Masters of the Universe believe at their core—your information, public and private, is a commodity which you can be encouraged to trade in… and if that trade is to your ultimate detriment, or to society’s, then that’s your problem.

This whole debacle is absolutely about Facebook and Cambridge Analytica; but it’s also about much more than Facebook and Cambridge Analytica. The same attitude prevails across so many companies (god knows what we’ll discover when Palantir operatives start turning against the company), all of which are slowly invading, changing our baseline expectations, providing us with conveniences that require us to trade in things we don’t even know we have. And there aren’t many of us—users, technologists, the media—who get away with clean hands from this whole mess. We’re all culpable, to some extent, either for being willing to trade in our own personal data for trinkets, or by choosing to retool our publishing platforms and do the same to our users’ information. We have all made this possible. The difference is that it’s almost impossible for most of us to fix.

So I’d love to see some change, and some introspection. A culture of first, do no harm. A recognition that there are huge dangers if you just do what’s possible, or build a macho “fail fast” culture that promotes endangerment. It’s about building teams that know they’ll make mistakes but also recognize the difference between great business opportunities and gigantic, universe-sized fuck ups. Because we’ve all made mistakes in our lives, nobody’s saying otherwise. But we have to see that there are different kinds of mistakes. Some we can’t take back; some hurt other people; some we should simply regret and move on from. But there are mistakes that hurt everybody, and that were made with foreknowledge. Those are acts of violence, and those we should be angry about.


Here’s a short reading list of Facebook stories that can help contextualize this, beyond the triple punch of Observer, New York Times, and Channel 4 pieces that came out in the last few days. Over the last few years we’ve had so many great pieces, including Zeynep Tufekci in the NYT, John Lanchester in the LRB, Danah Boyd on her blog, Max Read in New York Magazine, and Alexis Madrigal in The Atlantic.

Until next time.