‘In an age of misinformation, facts suffer too’: How the end of fact-checking puts democracy at risk
Mark Zuckerberg announced that fact-checking will soon be a thing of the past on Facebook and Instagram. It means everyone must learn how to find information away from social media, experts said.
Facebook (now known as Meta) used to have an internal corporate motto: “Move fast and break things.”
“The idea,” founder Mark Zuckerberg explained in a 2012 letter to investors, “is that if you never break anything, you’re probably not moving fast enough.”
The company dropped the slogan in 2014. But it seems that Zuckerberg – who kicked off the year by announcing an end to fact-checking on Facebook, Threads, and Instagram – still doesn’t care what he breaks.
The sites will do away with the moderation that identified and discouraged misinformation, the boss of social tech giant Meta said this month, in favour of a community notes-style system like the one used on Elon Musk’s X, formerly Twitter.
This, he said, is in the pursuit of free speech and a rejection of censorship.
The censorship in question targets conspiracy theories and hoaxes. The equivalent approach on X prompted a mass exodus of users last year – including The Guardian, which no longer posts on the site – in response to the rise of far-right narratives allowed to run rampant. Meta, which owns Facebook, Instagram and Threads, will also adopt new guidelines which, among other things, explicitly allow users to call LGBTQ+ people “mentally ill”.
Zuckerberg’s announcement came just days before Donald Trump’s second term as president began. And as US tech billionaires move into position with an eye on the new conservative administration, the rollback of progressivism is already leaking into UK democracy.
“I think it’s a mistake,” said Jeffrey Howard, professor of political philosophy and public policy, and director of the Digital Speech Lab, at University College London (UCL). “Zuckerberg referred to the fact-checking program as a form of censorship, but I think that’s inaccurate.
“When Facebook and Instagram fact check misinformation, all they’re doing is putting a label next to it, indicating that the post is disputed, and giving links so that users can find more information. They’re not removing the content. So I don’t see at all how that counts as censorship.”
Donald Trump and other political conservatives have complained about some of the ways in which Meta has engaged in fact-checking over the years. But the response, if tech billionaires’ platforms are to give one, should be to improve the systems already in place – “not to essentially give up on it,” Howard said. “Drawing the lines of permissible online speech is really difficult, and Zuckerberg isn’t crazy to suggest that maybe Meta has drawn overly sweeping lines in some areas. I do think making it a matter of letting users decide is a way of not taking responsibility.”
There is a possibility, Howard posited, that Meta’s announcement is more a piece of political posturing than something likely to result in broad-scale changes in how the platforms operate. “You could totally imagine Mark Zuckerberg releasing this as a PR technique to assuage Republicans in Washington and to assuage right-wing conservatives in the UK, but actually continue doing a lot of the moderation they’re already doing. That would be my prediction – that this is a PR effort to deflect attention off of themselves.
“I think Meta will continue to do quite a lot of work. It’s just doing a bit of rebranding and reputation management to avoid the ire of conservatives, part of a general tendency we’re seeing in Silicon Valley to cosy up to the Trump administration out of fear that he will adopt an antagonistic relationship toward their businesses,” Howard said.
If he wants, Trump can torpedo market confidence in a business. In March, the now-president took to Truth Social to label Facebook an “enemy of the people”, an accusation he later repeated on the American news channel CNBC. Over the ensuing four days, Meta’s market valuation plummeted by $60bn. After Trump won the November election, Meta’s stock fell again.
The markets are clear: anger Trump, and you spook investors. And Zuckerberg’s company can’t afford to do that.
One could ask when the most powerful figures in tech last gave way to the left in similar fashion, if ever – and how that asymmetry shapes democracy in the digital sphere. Rolling back moderation leaves a vacuum filled by “extreme discussion” from right across the political spectrum, Howard explained, and can be kindling for conspiracy theories. And the evidence suggests the spread of misinformation online comes hand in hand with an uptick in vitriol between users.
In the UK, there are some safeguards – the Online Safety Act, for example, puts obligations on platforms to police content that’s illegal or harmful to children. It’s expected that Ofcom will have the power to start enforcing this within the year, which gives the UK government some scope to insist tech companies like Meta and X moderate effectively.
But most misinformation isn’t illegal. “All sorts of conspiracy theories and hoaxes and rumours are totally legal, even though they can cause harm,” Howard said. Meta’s fact-checking programme, when it identifies a post as misinformation, uses its algorithm to demote it – essentially making sure it’s seen by as few people as possible. Getting rid of this system, no longer officially designating content as misinformation, means hoaxes and conspiracy theories could “run rampant”.
If the law can’t police misinformation, we rely on the owners of social media platforms – who, arguably, have eyes on maintaining their links to Trumpian conservative power in the White House – to build fair fact-checking into their systems.
It was Elon Musk, of course, who reinstated Trump’s Twitter/X account after the former president was banned following the 6 January insurrection. “Vox Populi, Vox Dei,” Musk posted at the time, after polling users on the matter – “the voice of the people is the voice of God”.
Trump has rewarded Musk’s loyalty – the self-titled ‘first buddy’ is set to lead the new so-called Department of Government Efficiency, and to hold unprecedented sway in the incoming administration. And though X itself is in financial difficulty – losing some 80% of its value since Musk acquired it – Musk himself is doing just fine. According to Forbes, the businessman is officially richer than he has ever been, with a net worth of $416bn. It’s partly attributable to the Trump effect: after the election, Tesla stock surged 40%, with investors anticipating a positive regulatory environment for the company.
The figures are clear. Appeasing Trump pays financial dividends. But if platforms don’t take into account matters beyond their own self-interest, Howard said, content that could do little harm in small doses is given the floor on a huge scale – and risks shaping not just the political conversation but people’s daily lives. “Climate change, for example. Misinformation there is no big deal if there’s just some of it kicking around the platform, but when it’s amplified, that becomes a problem. Or content that glorifies unhealthy body images – maybe not especially problematic in small amounts, but it can take over young women’s feeds without effective moderation. Even once we figure out what speech and content should not be allowed, we need to think about questions of amplification and de-amplification, and what principles platforms should use.”
Platforms increasingly rely on artificial intelligence (AI) to moderate, Howard pointed out, and machines don’t register nuance the way people do, or recognise hate speech when it’s coded. That opens questions of how AI itself should be policed – whether machines should be held to the same standards as people, and what those standards should be.
Thanks to the internet’s globalisation of culture – as well as the so-called special relationship between the US and the UK – both political and tech moves across the pond have a knock-on effect here. With Musk reportedly set on ousting Keir Starmer from his position as prime minister, and Nigel Farage counting the X owner as a close ally, the temperature of UK democracy is driven up by what’s happening elsewhere.
“We have to trust that ultimately facts will override lies, but unfortunately in an age of misinformation, facts suffer too,” said Myfanwy Nixon, communications manager for mySociety, a UK social enterprise providing pro-democracy digital tools like TheyWorkForYou, which helps people learn and verify what MPs have said or how they have voted. “A lower level of trust in online information affects both truth and untruths.”
It’s vital that people don’t lose the skill of finding valuable information outside of social media platforms, said Alex Parsons, the organisation’s democracy lead and senior researcher.
“Social media platforms want to keep you on their site. Whether you see something and it makes you happy, or see something and it makes you angry – it doesn’t matter to the machine.
“They punish or forbid links off the site to keep you there, but that also stops you learning more.”
If the revolution won’t be televised, will it be posted online? Could it even start there? The relationship between digital conversation and democracy is a two-way street, UCL’s Howard said, and more often the two create a feedback loop of already familiar inequalities.
“The online space reflects the real world, but it also shapes it,” he said. “Certain tendencies that are occurring in the real world will show up online, but then the online world will reinforce and exacerbate those particular tendencies.
“Political polarisation wasn’t created by the Internet. It definitely pre-existed it. But our online discourse and the way it is algorithmically managed has the risk of reinforcing and making worse those dynamics.”
Whether in the US or the UK, we are still “deeply, deeply confused” about what the proper rules should be for governing online speech, Howard said.
“I don’t think we’re anywhere near the end of this controversy. I think we’re, somewhat astonishingly, still at the beginning of it.”
Doing away with fact-checking means those looking for answers can be exposed to hate
Hate crime is on the rise in the UK, with police last year announcing the highest figures since records began. Online hate speech, in particular, is soaring – half of 12 to 15-year-olds reported seeing hateful content online in 2020, Ofcom said.
It creates a catch-22 for marginalised people who use the internet to find others who share their experience, or those who want to organise. The internet was instrumental in social movements like #MeToo and Black Lives Matter – and it offers a level of anonymity that should, in theory, protect people from harm. But the latest moves from Meta show that LGBTQ+ people and other minorities are at increasing risk online.
“Social media, for many young people, can be a difficult place,” said Laura Mckay, chief executive of queer youth charity Just Like Us. “But when LGBT+ young people lack inclusive education at school or feel unable to have conversations about LGBT+ identities at home, the internet can be a place which they turn to for information and in search of community. Unfortunately, in looking for answers, they can often be exposed to hate.
“The discourse around LGBT+ lives on social media, as well as in traditional media and politics, is becoming increasingly divisive and hostile. We know from our research that LGBT+ school pupils are twice as likely as their non-LGBT+ peers to have been bullied in the past year, but a quarter have specifically experienced cyberbullying. We also know that 78% of primary school pupils have heard homophobic language, with many citing social media as the source.
“It’s absolutely vital that we protect LGBT+ young people from hate and abuse online by putting in place necessary moderation and safeguards, not only to protect their mental health and wellbeing, but also to mitigate the very real risk of online hate leading to real-world violence.”