
Meta is ditching the use of independent fact-checkers on Facebook and Instagram, replacing them with X-style "community feedback", in which judgments about the accuracy of posts are left to users.
In a video posted alongside a blog post by the company on Tuesday, CEO Mark Zuckerberg said third-party moderators were "too politically biased" and that "it's time to get back to our roots around free expression."
Joel Kaplan, who is replacing Sir Nick Clegg as Meta's head of global affairs, wrote that the company's reliance on independent moderators was "well-intentioned" but had too often led to the censoring of users.
However, activists against online hate speech reacted with dismay – and suggested that the motivation behind the change was really about getting on the right side of Donald Trump.
“Zuckerberg’s announcement is a blatant attempt to cozy up to the incoming Trump administration — with damaging implications,” said Ava Lee of Global Witness, a campaign group that describes itself as seeking to hold Big Tech accountable.
She added, “The claim to avoid ‘censorship’ is a political move to avoid taking responsibility for the hate and misinformation encouraged and facilitated by the platforms.”
Emulating X
Meta’s current fact-checking program, introduced in 2016, refers posts that appear to be false or misleading to independent organizations to evaluate their credibility.
Posts flagged as inaccurate may have labels attached to them that give viewers more information, and are moved lower in users’ feeds.
It will now be replaced, "in the US first", by community feedback.
Meta says it has “no immediate plans” to get rid of third-party fact-checkers in the UK or EU.
The new community feedback system is copied from X, which introduced it after Elon Musk bought and renamed the platform.
It involves people with different viewpoints agreeing on feedback that adds context or clarification to controversial posts.
"That's cool," Musk said of Meta's adoption of a similar mechanism.
However, the UK’s Molly Rose Foundation described the announcement as a “major concern about online safety”.
Its chairman, Ian Russell, said: "We are urgently seeking clarity on the scope of these measures, including whether this will apply to suicide, self-harm and depressive content."
"These moves could have serious consequences for many children and young people."
Meta told the BBC that it would treat content breaking its rules on suicide and self-harm as a "high-risk" violation, which would therefore remain subject to automated moderation systems.
Fact-checking organisation Full Fact – which takes part in Facebook's fact-checking programme in Europe – said it "refutes allegations of bias" made against its industry.
The organisation's chief executive, Chris Morris, described the change as "a disappointing and backwards step that risks a chilling effect around the world".
“Facebook prison”
Along with content moderators, fact-checkers sometimes describe themselves as the Internet’s emergency services.
But Meta bosses concluded they were interfering too much.
“Too much harmless content is being censored, too many people find themselves wrongly locked in ‘Facebook jail,’ and we are often too slow to respond when they do,” Kaplan wrote on Tuesday.
But Meta appears to acknowledge there are some risks, with Zuckerberg saying in his video that the changes will mean a “trade-off.”
“This means we’ll catch fewer bad things, but we’ll also cut down on the number of innocent people’s posts and accounts we accidentally delete,” he added.
This approach also runs counter to recent regulation in both the UK and Europe, where big tech companies are being forced to take more responsibility for the content they carry or face severe penalties.
So perhaps it is not surprising that Meta’s move away from this line of oversight is limited to the United States, at least for now.
“Radical swing”
Meta's blog post said it would also "roll back the mission creep" of its rules and policies – highlighting the removal of restrictions on topics including "immigration, gender and gender identity" – saying these had become the subject of frequent political debate and discussion.
"It's not right that things can be said on TV or the floor of Congress, but not on our platforms," the post said.
These changes come as technology companies and their executives prepare for the inauguration of President-elect Donald Trump on January 20.
Trump has previously been an outspoken critic of Meta and its approach to content moderation, calling Facebook “the enemy of the people” in March 2024.
But relations between the two men have improved since then – Mr Zuckerberg dined at Trump's Florida estate, Mar-a-Lago, in November, and Meta donated $1m to Trump's inauguration fund.
“The recent election also appears to be a cultural turning point toward prioritizing freedom of expression once again,” Zuckerberg said in the video on Tuesday.
Mr Kaplan’s replacement of Sir Nick Clegg – the former Liberal Democrat deputy prime minister – as head of the company’s global affairs has also been interpreted as a signal of the company’s shift in approach to moderation and its changing political priorities.
The changes reflect a trend that "has seemed inevitable over the last few years, especially since Musk's acquisition of X", said Kate Klonick, an associate professor of law at St. John's University School of Law.
“Private management of expression on these platforms is increasingly becoming a political point,” she told BBC News.
She added that while companies had previously faced pressure to build trust and safety mechanisms to deal with issues such as harassment, hate speech and misinformation, a “radical swing in the opposite direction” was now taking place.