In an apparent bid to avoid the ire of President-elect Trump, Meta has announced it will end its fact-checking program and rely on a Community Notes model instead.
On the eve of Trump’s second term as President, Meta is making major changes to its moderation policies. In a blog post announcing the changes, Chief Global Affairs Officer Joel Kaplan framed them in the context of returning the platform to its free speech roots.
In his 2019 speech at Georgetown University, Mark Zuckerberg argued that free expression has been the driving force behind progress in American society and around the world and that inhibiting speech, however well-intentioned the reasons for doing so, often reinforces existing institutions and power structures instead of empowering people. He said: “Some people believe giving more people a voice is driving division rather than bringing us together. More people across the spectrum believe that achieving the political outcomes they think matter is more important than every person having a voice. I think that’s dangerous.”
In recent years we’ve developed increasingly complex systems to manage content across our platforms, partly in response to societal and political pressure to moderate content. This approach has gone too far. As well-intentioned as many of these efforts have been, they have expanded over time to the point where we are making too many mistakes, frustrating our users and too often getting in the way of the free expression we set out to enable. Too much harmless content gets censored, too many people find themselves wrongly locked up in “Facebook jail,” and we are often too slow to respond when they do.
We want to fix that and return to that fundamental commitment to free expression. Today, we’re making some changes to stay true to that ideal.
Kaplan goes on to say that the company’s fact-checking efforts didn’t turn out as it had hoped, with bias creeping in.
When we launched our independent fact checking program in 2016, we were very clear that we didn’t want to be the arbiters of truth. We made what we thought was the best and most reasonable choice at the time, which was to hand that responsibility over to independent fact checking organizations. The intention of the program was to have these independent experts give people more information about the things they see online, particularly viral hoaxes, so they were able to judge for themselves what they saw and read.
That’s not the way things played out, especially in the United States. Experts, like everyone else, have their own biases and perspectives. This showed up in the choices some made about what to fact check and how. Over time we ended up with too much content being fact checked that people would understand to be legitimate political speech and debate. Our system then attached real consequences in the form of intrusive labels and reduced distribution. A program intended to inform too often became a tool to censor.
We are now changing this approach. We will end the current third party fact checking program in the United States and instead begin moving to a Community Notes program. We’ve seen this approach work on X – where they empower their community to decide when posts are potentially misleading and need more context, and people across a diverse range of perspectives decide what sort of context is helpful for other users to see. We think this could be a better way of achieving our original intention of providing people with information about what they’re seeing – and one that’s less prone to bias.
Kaplan says the company will roll out Community Notes in the coming months and “stop demoting fact checked content.” Instead, the company will show “a much less obtrusive label indicating that there is additional information for those who want to see it.”
Return to Free Speech Roots
Meta is also loosening its existing restrictions on certain types of content, focusing its moderation efforts on illegal and high-severity violations instead.
For example, in December 2024, we removed millions of pieces of content every day. While these actions account for less than 1% of content produced every day, we think one to two out of every 10 of these actions may have been mistakes (i.e., the content may not have actually violated our policies). This does not account for actions we take to tackle large-scale adversarial spam attacks. We plan to expand our transparency reporting to share numbers on our mistakes on a regular basis so that people can track our progress. As part of that we’ll also include more details on the mistakes we make when enforcing our spam policies.
We want to undo the mission creep that has made our rules too restrictive and too prone to over-enforcement. We’re getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate. It’s not right that things can be said on TV or the floor of Congress, but not on our platforms. These policy changes may take a few weeks to be fully implemented.
We’re also going to change how we enforce our policies to reduce the kind of mistakes that account for the vast majority of the censorship on our platforms. Up until now, we have been using automated systems to scan for all policy violations, but this has resulted in too many mistakes and too much content being censored that shouldn’t have been. So, we’re going to continue to focus these systems on tackling illegal and high-severity violations, like terrorism, child sexual exploitation, drugs, fraud and scams.
Changes to Political Moderation
Meta is also making significant changes to how it handles political content, which the company began limiting in 2021.
Since 2021, we’ve made changes to reduce the amount of civic content people see – posts about elections, politics or social issues – based on the feedback our users gave us that they wanted to see less of this content. But this was a pretty blunt approach. We are going to start phasing this back into Facebook, Instagram and Threads with a more personalized approach so that people who want to see more political content in their feeds can.
We’re continually testing how we deliver personalized experiences and have recently conducted testing around civic content. As a result, we’re going to start treating civic content from people and Pages you follow on Facebook more like any other content in your feed, and we will start ranking and showing you that content based on explicit signals (for example, liking a piece of content) and implicit signals (like viewing posts) that help us predict what’s meaningful to people. We are also going to recommend more political content based on these personalized signals and are expanding the options people have to control how much of this content they see.
Meta’s changes are a significant about-face for the company. While Kaplan frames them in the context of free speech, it is hard to dismiss the likelihood that they are being made in preparation for Trump’s second term.
Meta and other social media platforms drew strong criticism and threats from Trump and conservatives during the first Trump administration. Ever the salesman, Trump has a well-established reputation for making hard-to-prove claims, as well as ones that are factually incorrect. As a result, he has been a vocal opponent of platforms’ fact-checking, with conservatives at large accusing Meta and other companies of censorship.
Only time will tell if Meta’s changes are enough to placate Trump and keep the company out of the administration’s crosshairs.