Before I tell you what happened at exactly 2:28 p.m. on Wednesday, January 6, 2021, at the White House—and how it elicited a very specific reaction, some 2,400 miles away, in Menlo Park, California—you need to remember the mayhem of that day, the exuberance of the mob as it gave itself over to violence, and how several things seemed to happen all at once.
At 2:10 p.m., a live microphone captured a Senate aide’s panicked warning that “protesters are in the building,” and both houses of Congress began evacuating.
At 2:13 p.m., Vice President Mike Pence was hurried off the Senate floor and out of the chamber.
At 2:15 p.m., thunderous chants were heard: “Hang Mike Pence! Hang Mike Pence!”
At the White House, President Donald Trump was watching the insurrection live on television. The spectacle excited him. Which brings us to 2:28 p.m., the moment when Trump shared with his 35 million Facebook followers a message he had just tweeted: “Mike Pence didn’t have the courage to do what should have been done to protect our Country and our Constitution … USA demands the truth!”
Even for Americans inured to the president’s thumbed outbursts, Trump’s attack on his own vice president—at a moment when Pence was being hunted by the mob Trump had sent to the Capitol—was something else entirely. Horrified Facebook employees scrambled to enact “break the glass” measures, steps they could take to quell the further use of their platform for inciting violence. That evening, Mark Zuckerberg, Facebook’s founder and CEO, posted a message on Facebook’s internal chat platform, known as Workplace, under the heading “Employee FYI.”
“This is a dark moment in our nation’s history,” Zuckerberg wrote, “and I know many of you are frightened and concerned about what’s happening in Washington, DC. I’m personally saddened by this mob violence.”
Facebook staffers weren’t sad, though. They were angry, and they were very specifically angry at Facebook. Their message was clear: This is our fault.
Chief Technology Officer Mike Schroepfer asked employees to “hang in there” as the company figured out its response. “We have been ‘hanging in there’ for years,” one person replied. “We must demand more action from our leaders. At this point, faith alone is not sufficient.”
“All due respect, but haven’t we had enough time to figure out how to manage discourse without enabling violence?” another staffer responded. “We’ve been fueling this fire for a long time and we shouldn’t be surprised it’s now out of control.”
“I’m tired of platitudes; I want action items,” another staffer wrote. “We’re not a neutral entity.”
“One of the darkest days in the history of democracy and self-governance,” yet another staffer wrote. “History will not judge us kindly.”
Facebook employees have long understood that their company undermines democratic norms and restraints in America and across the globe. Facebook’s hypocrisies, and its hunger for power and market domination, are not secret. Nor is the company’s conflation of free speech and algorithmic amplification. But the events of January 6 proved for many people—including many in Facebook’s workforce—to be a breaking point.
The Atlantic reviewed thousands of pages of documents from Facebook, including internal conversations and research conducted by the company, from 2017 to 2021. Frances Haugen, the whistleblower and former Facebook engineer who testified before Congress earlier this month, filed a series of disclosures about Facebook to the Securities and Exchange Commission and to Congress before her testimony. Redacted versions of those documents were obtained by a consortium of more than a dozen news organizations, including The Atlantic. The names of Facebook employees are mostly blacked out.
The documents are astonishing for two reasons: first, because of their sheer volume, and second, because they leave little room for doubt about Facebook’s crucial role in advancing the cause of authoritarianism in America and around the world. Authoritarianism predates the rise of Facebook, of course. But Facebook makes it much easier for authoritarians to win.
Again and again, the Facebook Papers show staffers sounding alarms about the dangers posed by the platform—how Facebook amplifies extremism and misinformation, how it incites violence, how it encourages radicalization and political polarization. Again and again, staffers reckon with the ways in which Facebook’s decisions stoke these harms, and they plead with leadership to do more.
And again and again, staffers say, Facebook’s leaders ignore them.
By nightfall on January 6, 2021, the siege had been broken, though not without fatalities. Washington’s mayor had issued a citywide curfew, and the National Guard was patrolling the streets. Facebook announced that it would lock Trump’s account, effectively preventing him from posting on the platform for 24 hours.
“Do you genuinely think 24 hours is a meaningful ban?” one Facebook staffer wrote on an internal message board. The staffer then turned, just as others had, to the years of failures and inaction that had preceded that day. “How are we expected to ignore when leadership overrides research based policy decisions to better serve people like the groups inciting violence today. Rank and file workers have done their part to identify changes to improve our platform but have been actively held back. Can you offer any reason we can expect this to change in the future.”
It was a question without a question mark. The employee seemed to know that there wouldn’t be a satisfying answer.
Facebook later extended the ban at least until the end of Trump’s presidential term, and then, when Facebook’s Oversight Board ruled against imposing an indefinite ban, it extended the temporary ban until at least January 7, 2023. But to many Facebook employees, the crackdown on Trump for inciting violence was comically overdue: too little, too late. For months, Trump had incited the insurrection in plain sight, on Facebook.
Facebook has dismissed the concerns of its employees in manifold ways. One of its cleverer tactics is to argue that staffers who have raised the alarm about the damage done by their employer are simply enjoying Facebook’s “very open culture,” in which people are encouraged to share their opinions, a spokesperson told me. This stance allows Facebook to claim transparency while ignoring the substance of the complaints, and the implication of the complaints: that many of Facebook’s employees believe their company operates without a moral compass.
“Employees have been crying out for months to start treating high-level political figures the same way we treat each other on the platform,” one employee wrote in the January 6 chat. “That’s all we’re asking for … Today, a coup was attempted against the United States. I hope the circumstances aren’t even more dire next time we speak.”
Rewind two months to November 4, 2020, the day after the presidential election. The outcome of the election was still unknown when a 30-year-old political activist created a Facebook group called “Stop the Steal.”
“Democrats are scheming to disenfranchise and nullify Republican votes,” the group’s manifesto read. “It’s up to us, the American people, to fight and to put a stop to it.” Within hours, “Stop the Steal” was growing at a mind-scrambling rate. At one point it was acquiring 100 new members every 10 seconds. It soon became one of the fastest-growing groups in Facebook history.
As “Stop the Steal” metastasized, Facebook employees traded messages on the company’s internal chat platform, expressing anxiety about their role in spreading election misinformation. “Not only do we not do something about combustible election misinformation in comments,” one wrote on November 5, “we amplify and give them broader distribution. Why?”
By then, less than 24 hours after the group’s creation, “Stop the Steal” had grown to 333,000 members, and the group’s administrator couldn’t keep up with the pace of commenting. Facebook employees were worried that “Stop the Steal” members were inciting violence, and the group came to the attention of executives. Facebook, to its credit, promptly shut down the group. But we now know that “Stop the Steal” had already reached too many people, too quickly, to be contained. The movement jumped from one platform to another. And even when the group was removed by Facebook, the platform remained a key hub for people to coordinate the attack on the U.S. Capitol.
After the best-known “Stop the Steal” Facebook group was dismantled, copycat groups sprang up. All the while, the movement was encouraged by President Trump, who posted to Facebook and Twitter, sometimes a dozen times a day, his complaint always the same—he won, and Joe Biden lost. His demand was always the same as well: It was time for his supporters to fight for him and for their country.
Never before in the history of the Justice Department has an investigation been so tangled up with social media. Facebook is omnipresent in the related court documents, woven throughout the stories of how people came to be involved in the riot in the first place, and reappearing in accounts of chaos and bloodshed. More than 600 people have been charged with crimes in connection to January 6. Court documents also detail how Facebook provided investigators with identifying information about its users, as well as metadata that investigators used to confirm alleged perpetrators’ whereabouts that day. Taken in aggregate, these court documents from January 6 are themselves a kind of facebook, one filled with selfies posted on Facebook apps over the course of the insurrection.
On a bright, chilly Wednesday weeks after the insurrection, when FBI agents finally rolled up to Russell Dean Alford’s Paint & Body Shop in Hokes Bluff, Alabama, they said Alford’s reaction was this: “I wondered when y’all were going to show up. Guess you’ve seen the videos on my Facebook page.” Alford pleaded not guilty to four federal charges, including knowingly entering a restricted building and disorderly conduct.
Not only were the perpetrators live-streaming their crimes as they committed them, but federal court records show that those who have been indicted spent many weeks stoking violence on Facebook with posts such as “NO EXCUSES! NO RETREAT! NO SURRENDER! TAKE THE STREETS! TAKE BACK OUR COUNTRY! 1/6/2021=7/4/1776” and “Grow a pair of balls and take back your government!”
When you stitch together the stories that spanned the period between Joe Biden’s election and his inauguration, it’s easy to see Facebook as instrumental to the attack on January 6. (A spokesperson told me that the notion that Facebook played an instrumental role in the insurrection is “absurd.”) Consider, for example, the case of Daniel Paul Gray. According to an FBI agent’s affidavit, Gray posted several times on Facebook in December about his plans for January 6, commenting on one post, “On the 6th a f[*]cking sh[*]t ton of us are going to Washington to shut the entire city down. It’s gonna be insane I literally can’t wait.” In a private message, he bragged that he’d just joined a militia and also sent a message saying, “are you gonna be in DC on the 6th like trump asked us to be?” Gray was later indicted on nine federal charges, including obstruction of an official proceeding, engaging in acts of physical violence, violent entry, assault, and obstruction of law enforcement. He has pleaded not guilty to all of them.
Then there’s the case of Cody Page Carter Connell, who allegedly encouraged his Facebook friends to join him in D.C. on January 6. Connell ended up charged with eight federal crimes, and he pleaded not guilty to all of them. After the insurrection, according to an FBI affidavit, he boasted on Facebook about what he’d done.
“We pushed the cops against the wall, they dropped all their gear and left,” he wrote in one message.
“Yall boys something serious, lol,” someone replied. “It lookin like a civil war yet?”
Connell’s response: “It’s gonna come to it.”
All over America, people used Facebook to organize convoys to D.C., and to fill the buses they rented for their trips. Facebook users shared and reshared messages like this one, which appeared before dawn on Christmas Eve in a Facebook group for the Lebanon Maine Truth Seekers:
This election was stolen and we are being slow walked towards Chinese ownership by an establishment that is treasonous and all too willing to gaslight the public into believing the theft was somehow the will of the people. Would there be an interest locally in organizing a caravan to Washington DC for the Electoral College vote count on Jan 6th, 2021? I am arranging the time off and will be a driver if anyone wishes to hitch a ride, or a lead for a caravan of vehicles. If a call went out for able bodies, would there be an answer? Merry Christmas.
The post was signed by Kyle Fitzsimons, who was later indicted on charges including attacking police officers on January 6. Fitzsimons has pleaded not guilty to all eight federal charges against him.
You may be thinking: It’s 2021; of course people used Facebook to plan the insurrection. It’s what they use to plan all aspects of their lives. But what emerges from a close reading of Facebook documents, and from observing how quickly the company connects large groups of people, is that Facebook isn’t a passive tool but a catalyst. Had the organizers tried to plan the rally using earlier technologies, such as telephones, they would have had to identify and reach out individually to each prospective participant, then persuade them to travel to Washington. Facebook made people’s efforts at coordination highly visible on a global scale. The platform not only helped them recruit participants but offered people a sense of strength in numbers. Facebook proved to be the perfect hype machine for the coup-inclined.
Among those charged with answering Trump’s call for revolution were 17 people from Florida, Ohio, North Carolina, Georgia, Alabama, Texas, and Virginia who allegedly coordinated on Facebook and other social platforms to join forces with the far-right militia known as the Oath Keepers. One of these people, 52-year-old Kelly Meggs from rural Florida, allegedly participated with his wife in weapons training to prepare for January 6.
“Trump said It’s gonna be wild!!!!!!!” Meggs wrote in a Facebook message on December 22, according to an indictment. “It’s gonna be wild!!!!!!! He wants us to make it WILD that’s what he’s saying. He called us all to the Capitol and wants us to make it wild!!! Sir Yes Sir!!! Gentlemen we are heading to DC pack your shit!!” Meggs and his Facebook friends arrived in Washington with paramilitary gear and battle-ready supplies—including radio equipment, camouflage combat uniforms, helmets, eye protection, and tactical vests with plates. They’re charged with conspiracy against the United States. Meggs has pleaded not guilty to all charges. His wife, Connie Meggs, has a trial date set for January 2022.
Ronald Mele, a 51-year-old California man, also used Facebook to share his plans for the insurrection, writing in a December Facebook post that he was taking a road trip to Washington “to support our President on the 6th and days to follow just in case,” according to his federal indictment. Prosecutors say he and five other men mostly used the chat app Telegram to make their plans—debating which firearms, shotgun shells, and other weapons to bring with them and referring to themselves as soldiers in the “DC Brigade”—and three of them posted to Instagram and Facebook about their plans as well. On January 2, four members of the group met at Mele’s house in Temecula, about an hour north of San Diego. Before they loaded into an SUV and set out across the country, someone suggested that they take a group photo. The men posed together, making hand gestures associated with the Three Percenters, a far-right militia movement that’s classified as a terrorist organization in Canada. (Mele has pleaded not guilty to all four charges against him.)
On January 6, federal prosecutors say, members of the DC Brigade were among the rioters who broke through the final police line, giving the mob access to the West Terrace of the Capitol. At 2:30 p.m., just after President Trump egged on the rioters on Facebook, Mele and company were on the West Terrace celebrating, taking selfies, and shouting at fellow rioters to go ahead and enter the Capitol. One of the men in the group, Alan Hostetter, a 56-year-old from San Clemente, posted a selfie to his Instagram account, with a crowd of rioters in the background. Hostetter, who has pleaded not guilty to all charges, tapped out a caption to go with the photo: “This was the ‘shot heard ’round the world!’ … the 2021 version of 1776. That war lasted 8 years. We are just getting warmed up.”
In November 2019, Facebook staffers noticed they had a serious problem. Facebook offers a collection of one-tap emoji reactions. Today, they include “like,” “love,” “care,” “haha,” “wow,” “sad,” and “angry.” Company researchers had found that posts dominated by “angry” reactions were substantially more likely to violate community standards, including prohibitions on various types of misinformation, according to internal documents.
But Facebook was slow to act. In July 2020, researchers presented the findings of a series of experiments. At the time, Facebook was already weighting the reactions other than “like” more heavily in its algorithm—meaning posts that got an “angry” reaction were more likely to show up in users’ News Feeds than posts that simply got a “like.” Anger-inducing content didn’t spread just because people were more likely to share things that made them angry; the algorithm gave anger-inducing content an edge. Facebook’s Integrity workers—employees tasked with tackling problems such as misinformation and espionage on the platform—concluded that they had good reason to believe targeting posts that induced anger would help stop the spread of harmful content.
By dialing anger’s weight back to zero in the algorithm, the researchers found, they could keep posts to which people reacted angrily from being viewed by as many users. That, in turn, translated to a significant (up to 5 percent) reduction in the hate speech, civic misinformation, bullying, and violent posts—all of which are correlated with offline violence—to which users were exposed. Facebook rolled out the change in early September 2020, documents show; a Facebook spokesperson confirmed that the change has remained in effect. It was a real victory for employees of the Integrity team.
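To make the mechanics concrete, here is a minimal sketch of how weighted-reaction ranking could work and what zeroing out anger changes. The weights, function, and variable names are my own illustrative assumptions, not Facebook’s actual code; only the idea of dialing “angry” to zero comes from the documents.

```python
# Hypothetical sketch of weighted-reaction ranking; weights are invented.
REACTION_WEIGHTS = {
    "like": 1.0,
    "love": 1.5,
    "care": 1.5,
    "haha": 1.5,
    "wow": 1.5,
    "sad": 1.5,
    "angry": 0.0,  # previously weighted above "like"; dialed back to zero
}

def engagement_score(reaction_counts: dict) -> float:
    """Score a post as a weighted sum of its reaction counts."""
    return sum(REACTION_WEIGHTS.get(reaction, 0.0) * count
               for reaction, count in reaction_counts.items())

# With "angry" weighted at zero, a post that draws mostly angry reactions no
# longer outranks one that simply collects likes.
print(engagement_score({"like": 100, "angry": 500}))  # 100.0
print(engagement_score({"like": 100, "love": 40}))    # 160.0
```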
But it doesn’t normally work out that way. In April 2020, according to Frances Haugen’s filings with the SEC, Facebook employees had recommended tweaking the algorithm so that the News Feed would deprioritize the surfacing of content for people based on their Facebook friends’ behavior. The idea was that a person’s News Feed should be shaped more by people and groups that a person had chosen to follow. Up until that point, if your Facebook friend saw a conspiracy theory and reacted to it, Facebook’s algorithm might show it to you, too. The algorithm treated any engagement in your network as a signal that something was worth sharing. But now Facebook workers wanted to build circuit breakers to slow this form of sharing.
Experiments showed that this change would impede the distribution of hateful, polarizing, and violence-inciting content in people’s News Feeds. But Zuckerberg “rejected this intervention that could have reduced the risk of violence in the 2020 election,” Haugen’s SEC filing says. An internal message characterizing Zuckerberg’s reasoning says he wanted to avoid new features that would get in the way of “meaningful social interactions.” But according to Facebook’s definition, its employees say, engagement is considered “meaningful” even when it entails bullying, hate speech, and reshares of harmful content.
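A rough sketch of the “circuit breaker” idea, as I understand it from the filings: stop treating a friend’s reaction alone as a reason to put a post in someone’s feed, and favor sources the viewer has chosen to follow. Everything here (the class, field names, and flag) is an invented illustration, not Facebook’s implementation.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    followed_source: bool    # viewer follows the author, page, or group
    friend_engagements: int  # reactions or reshares by the viewer's friends

def eligible_for_feed(post: Candidate, circuit_breaker_on: bool) -> bool:
    """Decide whether a post may be surfaced to this viewer at all."""
    if post.followed_source:
        return True
    # Without the breaker, any engagement in the viewer's network counts as
    # a distribution signal; with it, friend engagement alone is not enough.
    return post.friend_engagements > 0 and not circuit_breaker_on
```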
This episode, like Facebook’s response to the incitement that proliferated between the election and January 6, reflects a fundamental problem with the platform. Facebook’s megascale allows the company to influence the speech and thought patterns of billions of people. What the world is seeing now, through the window provided by reams of internal documents, is that Facebook catalogs and studies the harm it inflicts on people. And then it keeps harming people anyway.
“I am worried that Mark’s continuing pattern of answering a different question than the question that was asked is a symptom of some larger problem,” wrote one Facebook employee in an internal post in June 2020, referring to Zuckerberg. “I sincerely hope that I am wrong, and I’m still hopeful for progress. But I also fully understand my colleagues who have given up on this company, and I can’t blame them for leaving. Facebook is not neutral, and working here isn’t either.”
“I just wish we could hear the truth directly,” another added. “Anything feels like we (the employees) are being intentionally deceived.”
I’ve been covering Facebook for a decade now, and the challenges it must navigate are novel and singularly complex. One of the most important, and heartening, revelations of the Facebook Papers is that many Facebook workers are trying conscientiously to solve these problems. One of the disheartening features of these documents is that these same employees have little or no faith in Facebook leadership. It is quite a thing to see, the sheer number of Facebook employees—people who presumably understand their company as well as or better than outside observers—who believe their employer to be morally bankrupt.
I spoke with several former Facebook employees who described the company’s metrics-driven culture as extreme, even by Silicon Valley standards. (I agreed not to name them, because they feared retaliation and ostracism from Facebook for talking about the company’s inner workings.) Facebook workers are under tremendous pressure to quantitatively demonstrate their individual contributions to the company’s growth goals, they told me. New products and features aren’t approved unless the staffers pitching them can show how they will drive engagement. As a result, Facebook has stoked an algorithm arms race within its ranks, pitting core product-and-engineering teams, such as the News Feed team, against their colleagues on Integrity teams, who are tasked with mitigating harm on the platform. These teams set goals that are often in direct conflict.
One of Facebook’s Integrity staffers wrote at length about this dynamic in a goodbye note to colleagues in August 2020, describing how risks to Facebook users “fester” because of the “asymmetrical” burden placed on employees to “demonstrate legitimacy and user value” before launching any harm-mitigation tactics—a burden not shared by those developing new features or algorithm changes with growth and engagement in mind. The note said:
We were willing to act only after things had spiraled into a dire state … Personally, during the time that we hesitated, I’ve seen folks from my hometown go further and further down the rabbithole of QAnon and Covid anti-mask/anti-vax conspiracy on FB. It has been painful to observe.
Current and former Facebook employees describe the same fundamentally broken culture—one in which effective tactics for making Facebook safer are rolled back by leadership or never approved in the first place. (A Facebook spokesperson rejected the notion that it deprioritizes the well-being of its users.) That broken culture has produced a broken platform: an algorithmic ecosystem in which users are pushed toward ever more extreme content, and where Facebook knowingly exposes its users to conspiracy theories, disinformation, and incitement to violence.
One example is a program that amounts to a whitelist for VIPs on Facebook, allowing some of the users most likely to spread misinformation to break Facebook’s rules without facing consequences. Under the program, internal documents show, millions of high-profile users—including politicians—are left alone by Facebook even when they incite violence. Some employees have flagged for their superiors how dangerous this is, explaining in one internal document that Facebook had solid evidence showing that when “a piece of content is shared by a co-partisan politician, it tends to be perceived as more trustworthy, interesting, and helpful than if it’s shared by an ordinary citizen.” In other words, whitelisting influential users with massive followings on Facebook isn’t just a secret and uneven application of Facebook’s rules; it amounts to “protecting content that is especially likely to deceive, and hence to harm, people on our platforms.”
Facebook workers tried and failed to end the program. Only when its existence was reported in September by The Wall Street Journal did Facebook’s Oversight Board ask leadership for more information about the practice. Last week, the board publicly rebuked Facebook for not being “fully forthcoming” about the program. (Although Oversight Board members are selected by Facebook and paid by Facebook, the company characterizes their work as independent.)
The Facebook Papers show that workers agonized over trade-offs between what they saw as doing the right thing for the world and doing the right thing for their employer. “I am so torn,” one employee wrote in December 2020 in response to a colleague’s comments on how to fight Trump’s hate speech and incitements to violence. “Following these recommendations could hasten our own demise in a variety of ways, which might interfere [with] all the other good we do in the world. How do you weigh these impacts?” Messages show workers wanting Facebook to make honorable choices, and worrying that leadership is incapable of doing so. At the same time, many clearly believe that Facebook is still a net force for good, and they also worry about hurting the platform’s growth.
These worries have been exacerbated lately by fears about a decline in new posts on Facebook, two former employees who left the company in recent years told me. People are posting new material less frequently to Facebook, and its users are on average older than those of other social platforms. The explosive popularity of platforms such as TikTok, especially among younger people, has rattled Facebook leadership. All of this makes the platform rely more heavily on manipulating what its users see in order to reach its goals. This explains why Facebook is so dependent on the infrastructure of groups, and on making reshares highly visible, to keep people hooked.
But this approach poses a major problem for the overall quality of the site, and former Facebook employees repeatedly told me that groups pose one of the biggest threats of all to Facebook users. In a particularly fascinating document, Facebook workers outline the downsides of “community,” a buzzword Zuckerberg often deploys as a way to justify the platform’s existence. Zuckerberg has defined Facebook’s mission as making “social infrastructure to give people the power to build a global community that works for all of us,” but in internal research documents his employees point out that communities aren’t always good for society:
When part of a community, individuals typically act in a prosocial manner. They conform, they forge alliances, they cooperate, they organize, they display loyalty, they expect obedience, they share information, they influence others, and so on. Being in a group changes their behavior, their abilities, and, importantly, their capability to harm themselves or others … Thus, when people come together and form communities around harmful topics or identities, the potential for harm can be greater.
The infrastructure choices that Facebook is making to keep its platform relevant are driving down the quality of the site, and exposing its users to more dangers. Those dangers are also unevenly distributed, because of the manner in which certain subpopulations are algorithmically ushered toward like-minded groups. And the subpopulations of Facebook users who are most exposed to dangerous content are also most likely to be in groups where it won’t get reported.
Many Facebook employees believe that their company is hurting people. Many have believed this for years. And even they can’t stop it. “We can’t pretend we don’t see information consumption patterns, and how deeply problematic they are for the longevity of democratic discourse,” a user-experience researcher wrote in an internal comment thread in 2019, in response to a now-infamous memo from Andrew “Boz” Bosworth, a longtime Facebook executive. “There is no neutral position at this stage, it would be powerfully immoral to commit to amorality.”
In the months since January 6, Mark Zuckerberg has made a point of highlighting Facebook’s willingness to help federal investigators with their work. “I believe that the former president should be responsible for his words, and the people who broke the law should be responsible for their actions,” Zuckerberg said in congressional testimony last spring. “So that leaves the question of the broader information ecosystem. Now, I can’t speak for everyone else—the TV channels, radio stations, news outlets, websites, and other apps. But I can tell you what we did. Before January 6, we worked with law enforcement to identify and address threats. During and after the attack, we provided extensive support in identifying the insurrectionists, and removed posts supporting violence. We didn’t catch everything, but we made our services inhospitable to those who might do harm.”
Zuckerberg’s positioning of Facebook’s role in the insurrection is odd. He lumps his company in with traditional media organizations—something he’s ordinarily loath to do, lest the platform be expected to take more responsibility for the quality of the content that appears on it—and suggests that Facebook did more, and did better, than journalism outlets in its response to January 6. What he fails to say is that journalism outlets would never be in the position to help investigators this way, because insurrectionists don’t typically use newspapers and magazines to recruit people for coups.
In hindsight, it is easy to say that Facebook should have made itself far more hostile to insurrectionists before they carried out their attack. But people post passionately about lawful protests all the time. How is Facebook to know which protests will spill into violence and which won’t? The answer is that its own staffers have obsessively studied this question, and they’re confident that they’ve already found ways to make Facebook safer.
Facebook wants people to believe that the public must choose between Facebook as it is, on the one hand, and free speech, on the other. This is a false choice. Facebook has a sophisticated understanding of measures it could take to make its platform safer without resorting to broad or ideologically driven censorship tactics.
Facebook knows that no two people see the same version of the platform, and that certain subpopulations experience far more dangerous versions than others do. Facebook knows that people who are isolated—recently widowed or divorced, say, or geographically distant from loved ones—are disproportionately at risk of being exposed to harmful content on the platform. It knows that repeat offenders are disproportionately responsible for spreading misinformation. And it knows that 3 percent of Facebook users in the United States are super-consumers of conspiracy theories, accounting for 37 percent of known consumption of misinformation on the platform.
The most viral content on Facebook is basically untouchable—some is so viral that even turning down the distribution knob by 90 percent wouldn’t make a dent in its ability to ricochet around the internet. (A Facebook spokesperson told me that although the platform sometimes reduces how often people see content that has been shared by a chain of two or more people, it is reluctant to apply that solution more broadly: “While we have other systems that demote content that might violate our specific policies, like hate speech or nudity, this intervention reduces all content with equal strength. Because it is so blunt, and reduces positive and completely benign speech alongside potentially inflammatory or violent rhetoric, we use it sparingly.”)
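The demotion the spokesperson describes is blunt but simple to picture. A hypothetical sketch, with an invented demotion factor:

```python
def demote_deep_reshares(score: float, reshare_depth: int,
                         demotion_factor: float = 0.5) -> float:
    """Cut distribution for content reshared through a chain of two or more
    people, regardless of what the content says (content-agnostic demotion)."""
    return score * demotion_factor if reshare_depth >= 2 else score
```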
Facebook knows that there are harmful activities taking place on the platform that don’t break any rules, including much of the coordination leading up to January 6. And it knows that its interventions touch only a minuscule fraction of Facebook content anyway. Facebook knows that it is sometimes used to facilitate large-scale societal violence. And it knows that it has acted too slowly to prevent such violence in the past.
Facebook could ban reshares. It could consistently enforce its policies regardless of a user’s political power. It could choose to optimize its platform for safety and quality rather than for growth. It could tweak its algorithm to prevent widespread distribution of harmful content. Facebook could create a transparent dashboard so that all of its users can see what’s going viral in real time. It could make public its rules for how frequently groups can post and how quickly they can grow. It could also automatically throttle groups when they’re growing too fast, and cap the rate of virality for content that’s spreading too quickly.
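Two of those interventions, throttling fast-growing groups and capping virality, could be as simple as rate limits. A hypothetical sketch, with invented thresholds:

```python
def should_throttle_group(joins_past_hour: int,
                          max_joins_per_hour: int = 1_000) -> bool:
    """Pause invites and recommendations when a group grows too fast."""
    return joins_past_hour > max_joins_per_hour

def remaining_share_budget(shares_past_hour: int,
                           max_shares_per_hour: int = 5_000) -> int:
    """Cap how much further a fast-spreading post can travel this hour."""
    return max(0, max_shares_per_hour - shares_past_hour)
```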
Facebook could shift the burden of proof toward people and communities to demonstrate that they’re good actors—and treat reach as a privilege, not a right. Facebook could say that its platform is not for everyone. It could sound an alarm for those who wander into the most dangerous corners of Facebook, and those who encounter disproportionately high levels of harmful content. It could hold its employees accountable for preventing users from finding these too-harmful versions of the platform, thereby preventing those versions from existing.
It could do all of these things. But it doesn’t.
Facebook certainly isn’t the only harmful entity on the social web. Extremism thrives on other social platforms as well, and plenty of them are fueled by algorithms that are equally opaque. Lately, people have been debating just how nefarious Facebook really is. One argument goes something like this: Facebook’s algorithms aren’t magic, its ad targeting isn’t even that good, and most people aren’t that stupid.
All of this may be true, but that shouldn’t be reassuring. An algorithm may just be a big dumb means to an end, a clunky way of maneuvering a massive, dynamic network toward a desired outcome. But Facebook’s enormous size gives it tremendous, unstable power. Facebook takes whole populations of people, pushes them toward radicalism, and then steers the radicalized toward one another. For those who found themselves in the “Stop the Steal” corners of Facebook in November and December of last year, the enthusiasm, the sense of solidarity, must have been overwhelming and thrilling. Facebook had taken warped reality and distributed it at scale.
I’ve sometimes compared Facebook to a Doomsday Machine in that it is technologically simple and unbelievably dangerous—a black box of sensors designed to suck in environmental cues and deliver mutually assured destruction. When the most powerful company in the world possesses an instrument for manipulating billions of people—an instrument that only it can control, and that its own employees say is badly broken and dangerous—we should take notice.
The lesson for individuals is this: You must be vigilant about the informational streams you swim in, deliberate about how you spend your precious attention, unforgiving of those who weaponize your emotions and cognition for their own profit, and deeply untrusting of any scenario in which you’re surrounded by a mob of people who agree with everything you’re saying.
And the lesson for Facebook is that the public is beginning to recognize that it deserves much greater insight into how the platform’s machinery is designed and deployed. Indeed, that’s the only way to avoid further catastrophe. Without seeing how Facebook works at a finer resolution, in real time, we won’t be able to understand how to make the social web compatible with democracy.