
Meta to Shut Down U.S. Fact-Checking Program, Shifts to Community Notes Model


 

By Agboola Aluko | GLiDE NEWS | April 6, 2025



Meta, the tech conglomerate behind Facebook, Instagram, and Threads, is ending its third-party fact-checking program in the United States as of Monday, marking a major shift in how the company approaches content moderation and misinformation on its platforms.

Joel Kaplan, Meta’s new head of global policy, confirmed the decision on Friday via a post on X. “By Monday afternoon, our fact-checking program in the U.S. will be officially over. That means no new fact checks and no fact checkers,” Kaplan wrote. “In place of fact checks, the first Community Notes will start appearing gradually across Facebook, Threads & Instagram, with no penalties attached.”

The company frames this change as part of a broader commitment to promoting free speech and reducing accusations of political censorship. However, critics argue the move could open the door to a wave of unchecked misinformation online.

Meta’s Community Notes system, which draws comparisons to the crowdsourced model currently used by X under Elon Musk, is designed to allow users to collaboratively flag misleading or disputed content. Users who wish to participate as contributors must be at least 18 years old, have accounts older than six months, and maintain a “good standing” status.

Meta has confirmed, however, that the Community Notes model will not apply to paid advertising. Observers warn that this exemption could allow misinformation to spread through sponsored content, so long as advertisers are willing to pay.

In tandem with ending its fact-checking program, Meta has also dismantled its Diversity, Equity, and Inclusion (DEI) initiatives and relaxed some of its hate speech enforcement policies. The changes suggest a broad recalibration of the company’s approach to content governance—one that places greater emphasis on user autonomy, but may come at the cost of increased platform toxicity.

Analysts and digital rights advocates are now closely watching how this new model will impact the online information landscape, especially in a year marked by global elections and rising concerns over AI-generated disinformation.
