Techno-fundamentalism can’t save you, Mark Zuckerberg
It was like a verbal tic. Last week, in two days of congressional testimony, Facebook CEO Mark Zuckerberg conjured up a magic-sounding phrase whenever he was cornered on a difficult issue. The problem was content moderation, and the phrase was “artificial intelligence.” When Facebook debuted, in 2004, Zuckerberg explained, it was just him and a friend in his dorm room at Harvard. “We didn’t have artificial intelligence technology that could look at the content people were sharing,” he told the Senate Judiciary and Commerce Committees. “So we had to enforce our content policies reactively.” Over the next fourteen years, the platform grew to 2.2 billion monthly active users; they speak over a hundred languages, each with its own subtle variations on hate speech, sexual content, harassment, threats of violence and suicide, and terrorist recruitment. Facebook’s staggering size and influence, Zuckerberg admitted, along with a series of high-profile scandals, had made it clear that “we need to take a more proactive role and a broader view of our responsibility.” He pledged to hire several thousand more human content reviewers around the world, but he seemed to see AI as the ultimate panacea. In all, he used the phrase more than thirty times.
Tarleton Gillespie, in his forthcoming book “Custodians of the Internet,” explains what is at the root of Zuckerberg’s problem:
Should the values of a CEO trump those of an engineer or an end user? If, as Zuckerberg told Congress, some kind of “community standards” apply, what constitutes a “community”? For Facebook in Iraq, should it follow Kurdish norms, Sunni norms, or Shia norms? In Illinois, should it follow rural standards or urban ones? Imagine trying to answer these questions on a platform as large as Facebook. Imagine trying to hire, train, and retain worthy judges in places like Myanmar, where the Buddhist majority is waging a brutal campaign of expulsion and oppression against the Rohingya, a Muslim minority group. Imagine finding moderators for all eleven official languages of South Africa.
Hiring more humans, if there are even enough of them to hire, won’t solve these problems, and it probably won’t be good for the humans themselves, either. Sarah Roberts, an information scientist at the University of California, Los Angeles, has interviewed content moderators throughout Silicon Valley and beyond, and she reports that many are traumatized by the experience and work for low wages with no benefits. But Zuckerberg’s AI solution, which he sees becoming a reality “over a period of five to ten years,” is equally untenable. It’s like Mark Twain’s Connecticut Yankee, Hank Morgan, fooling the people of Camelot with his technocratic “magic.” More crucially, it is also an expression of techno-fundamentalism: the unshakable belief that one can and must invent the next technology to solve the problem caused by the last technology. Techno-fundamentalism is what got us into this mess. It’s not what will get us out.
The main selling point of automated content moderation is that it promises to circumvent the two obstacles that thwart humans: scale and subjectivity. For a machine that learns from historical experience (“this is an example of what we want flagged for review; this is not”), scale is an advantage. The more data it consumes, the more accurate its judgments supposedly become. Even errors, once identified as errors, can refine the process. Computers love rules, too, which is why artificial intelligence has had its greatest successes in highly structured settings, such as chess matches and Go tournaments. Combine rules with lots of historical data, and a computer can even win at “Jeopardy!,” as IBM’s Watson did in 2011. At first, the rules must be developed by human programmers, but there is some hope that the machines will refine, revise, or even rewrite the rules over time, taking into account diversity, localism, and changing values.
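The learn-from-labeled-history idea can be sketched in miniature. The toy Python script below (an invented illustration, not anything Facebook actually runs) counts how often each word appears in posts that human reviewers flagged versus posts they cleared, then scores new posts against those counts; every post and word in it is made up.

```python
# Toy sketch of moderation-by-historical-example (assumed, illustrative data).
# A real system would use far richer features and models; this only shows the
# principle: past human judgments become the machine's notion of "flaggable."
from collections import Counter

def train(labeled_posts):
    """labeled_posts: list of (text, flagged) pairs judged by human reviewers."""
    flagged, cleared = Counter(), Counter()
    for text, is_flagged in labeled_posts:
        (flagged if is_flagged else cleared).update(text.lower().split())
    return flagged, cleared

def score(text, flagged, cleared):
    """Positive score: the post resembles previously flagged posts."""
    return sum(flagged[w] - cleared[w] for w in text.lower().split())

history = [
    ("go back where you came from", True),
    ("you people are vermin", True),
    ("what a lovely day in the park", False),
    ("my people are from this lovely town", False),
]
flagged, cleared = train(history)
print(score("you vermin go back", flagged, cleared) > 0)        # True: like flagged history
print(score("lovely day with my people", flagged, cleared) > 0)  # False: like cleared history
```

The essay's caveat applies even here: the model can only echo the judgments, and the blind spots, already present in its training history.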
This is where the promise of artificial intelligence falls apart. At its core is an assumption that historical patterns can reliably predict future norms. But the past, even the very recent past, is full of words and ideas that many of us now find repugnant. No system is nimble enough to respond to the rapidly changing varieties of cultural expression in a single language, let alone a hundred. Slang is fleeting but powerful; irony is hard enough for people to read. If we rely on AI to write our rules of conduct, we risk privileging those rules over our own creativity. Worse, we would be handing the policing of our speech over to the people who set the system in motion in the first place, with all of their biases and blind spots baked into the code. Questions about which kinds of expression are harmful to ourselves or to others are hard. We shouldn’t pretend that they will get easier.
So what is the purpose of Zuckerberg’s AI incantation? On the cynical view, it offers a convenient way to defer public scrutiny: Facebook is a work in progress, and building the right tools will take patience. (Once those tools are in place, of course, the company can blame errors on faulty algorithms or bad data.) But Zuckerberg is no cynic; he’s a techno-fundamentalist, which is an equally unhealthy habit of mind. It gives the impression that technology exists outside, beyond, even above messy human decisions and relationships, when in truth no such gap exists. Society is technological. Technology is social. Tools, as Marshall McLuhan told us more than fifty years ago, are extensions of ourselves. They amplify and distort our strengths and our flaws. That’s why we have to design them carefully from the start.
The problem with Facebook is Facebook. It moved too fast. It broke too many things. It has grown too big to be governed by any group of humans or suite of computers. As he charts the way forward, Zuckerberg has few effective tools at his disposal. He should be honest about their limitations, if not for the good of his business then at least for ours.