Dealing with the demons of disinformation
In a world of growing epistemic insecurity, members of the media need to step up in defense of the truth. But technology may also have a role in saving the day.

Skepticism is a virtue in the maelstrom of claims and counter-claims that is our modern world. Everyone is competing to sell their own narratives about what is going on around the globe, to fit their agendas.
A new phrase is beginning to be used alongside "national security" and "cyber security." It is "epistemic security." Epistemology is the field of philosophical study pertaining to knowledge and truth. In the information battlefields of the world, it is truth that is being fought over—and there are many, many casualties.
The big platforms have recently come under greater scrutiny in the context of international geopolitics. The BBC has just completed a fact-checking project in which it discovered 800 fake accounts on TikTok, almost certainly Russian. The Chinese-owned TikTok has itself deleted 12,000 fake accounts since the beginning of the war in Ukraine. YouTube continues to have issues with Russian trolls gaming the system (this happened to me, and it has also recently led to a significant pro-Ukrainian channel, Warthog Defense, being deleted).
And today, the "European Union has formally announced it suspects X, previously known as Twitter, of breaching its rules in areas including countering illegal content and disinformation." The difference here is that Elon Musk himself is part of the suspicions: he is thought to have breached his obligations on transparency. This came on the back of an EU report that explicitly identified X as the worst of the social media platforms at combating disinformation:
"My message for [X] is: you have to comply with the hard law. We'll be watching what you're doing," the EU's Values and Transparency Commissioner Vera Jourova warned.
The disinformation study which prompted Ms Jourova's comments covered Spain, Poland and Slovakia, countries at risk of being targeted by disinformation due to elections or proximity to the war in Ukraine.
The platform with the largest "ratio of discoverability" of disinformation - meaning the proportion of sensitive content made up of disinformation - was Twitter. YouTube had the lowest, the study suggested.
— "Disinformation most active on X, formerly known as Twitter, EU says", BBC
This follows hot on the heels of the US Securities and Exchange Commission (SEC) urging a federal judge on Thursday to force Musk to testify in its investigation into his takeover of what was then Twitter.
Since taking over the platform, Musk has courted immense controversy. He fired half of the company's employees, including almost all of its trust and safety team. Indeed, pertaining to the war in Ukraine, Musk has mocked Ukraine to the delight of Russian trolls, parroting Kremlin propaganda and amplifying its narratives, while hosting live Twitter/X Spaces with a who's who of Russian disinformation and conspiracy theorists (Jackson Hinkle, Scott Ritter, Alex Jones, and even Andrew Tate). Twitter code leaked over the last year showed that Musk's Twitter was down-ranking tweets about Ukraine and Ukraine news, treating them as disinformation. Another example: Twitter's verification system helped pro-Kremlin voices spread misleading claims after the Ohio train disaster.
Rather than implement the EU's voluntary code of conduct, which he opted out of, Musk has created the Community Notes system, where users can correct disinformation on the platform themselves. There was added concern recently when Elon Musk, on a number of different occasions, deleted Community Notes that had been appended to his own disinformational tweets. And a week ago, when one of his tweets in support of a Russian shill (Gonzalo Lira) was fact-checked by Community Notes users, Musk claimed the system was being "gamed by state actors."
In short, there are now battles being fought all day long inside our heads over the landscapes of truth and narrative, using the weapons of technology and social media.
Sometimes it feels like a thankless and unending task to combat the lies and disinformation. For too long, mainstream media outlets have allowed nonsense to gain a foothold on our television screens and on our radio waves. The idea of a false balance has long been discussed, where TV stations have previously assumed they must give equal airtime to competing accounts of reality, even if one of those accounts is only adhered to by fringe wingnuts. Back in 2018, the BBC accepted it gets coverage of climate change “wrong too often” and told staff: “You do not need a ‘denier’ to balance the debate.”
But rather than remaining passive acceptors of disinformation without the gall to challenge open falsehoods, it might just be that mainstream media outlets are starting to bite back. Over the last few months, a number of news anchors and reporters have stood up to politicians attempting to spread their lies. The latest example is CNN's Kaitlan Collins in her interview with Ron Johnson.
After he tried a bit of bothsidesism concerning overturning elections with "Democratic electors have tried to do that repeatedly, Democrats have done the same things.... It's happened in different states-"
"Which ones," she interjected.
"I didn't come prepared to give you the exact states but it has happened repeatedly. It has happened repeatedly. Just go check the books."
"Which books?"
Ouch. But this is good. It is as if some interviewers are taking a page out of Mehdi Hasan's interview playbook. When Johnson claimed he would provide evidence at a later point, Collins did a follow-up segment debunking the claims he subsequently sent in. This is how to do good, skeptical, evidence-based journalism, and it is a start in fostering epistemic security. It's great to see Collins return to the story rather than let those claims go unchecked.
It can be very difficult when the interviewee is a snake oil salesman who lets facts slip off his Teflon coating, talking over any attempt to hold him to account:
But to hear the audience applaud his live lies and conspiracies indicates that the job of getting facts to land and gain a foothold is exceptionally tough.
To return to Mehdi Hasan, and indeed Vivek Ramaswamy, here the two are engaged in a wrangle over facts:
Hasan shows why he has the reputation he now does. It is well-earned. On point, Hasan sets out the challenge of answering disinformation in this Meidas Touch analysis video:
There is also (or perhaps necessarily) a technological challenge here. TV stations and media outlets need to be able to fact-check in real time. This can just about be done with a production team in the ear of the interviewer, sitting at PCs and frantically Googling, but there is also a place for AI and other technology.
During the 2020 US presidential debates, one company monitored the debates live using humans and AI. With AI assistance, a fact-check that would otherwise take two to three hours can be cut by around 80%. The pipeline first turned the debate into text (difficult, given people talking over each other), and the machine then checked and verified about a third of the claims; 8 of the 27 claims had to be fully manually checked, but the role played by AI was important.
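The human/machine split described above can be sketched in miniature: claims whose wording closely matches an already-verified entry are resolved automatically, while the rest are queued for human fact-checkers. The claims, verdicts, and similarity threshold below are all hypothetical illustrations, not the actual system the company used.

```python
# Minimal sketch of AI-assisted fact-check triage, assuming a small
# database of previously checked claims. Close matches are resolved
# automatically; everything else is routed to human fact-checkers.
from difflib import SequenceMatcher

# Hypothetical database of claims that have already been fact-checked.
CHECKED_CLAIMS = {
    "unemployment is at a fifty year low": "Mostly true",
    "crime has doubled in the last year": "False",
}

def triage(claims, threshold=0.8):
    """Split incoming claims into auto-resolved and needs-human piles."""
    auto, manual = [], []
    for claim in claims:
        key = claim.lower().strip()
        # Find the most similar previously checked claim.
        best = max(CHECKED_CLAIMS,
                   key=lambda k: SequenceMatcher(None, key, k).ratio())
        score = SequenceMatcher(None, key, best).ratio()
        if score >= threshold:
            auto.append((claim, CHECKED_CLAIMS[best]))  # machine-verified
        else:
            manual.append(claim)  # sent to the human queue
    return auto, manual

auto, manual = triage([
    "Unemployment is at a fifty year low",
    "The moon landing was staged",
])
print(f"{len(auto)} auto-resolved, {len(manual)} for humans")
```

In a real system the string similarity would be replaced by an embedding or NLI model, but the triage structure, machine handles the known, humans handle the novel, is the same.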
This is a continually evolving discipline, especially given the growth in online content and international media spreading 24/7 across the globe. Wired documents the work that fact-checkers are doing with AI:
The proliferation of online misinformation and propaganda has meant an uphill battle for fact-checkers worldwide, who have to sift through and verify vast quantities of information during complex or fast-moving situations, such as the Russian invasion of Ukraine, the Covid-19 pandemic, or election campaigns. That task has become even harder with the advent of chatbots using large language models, such as OpenAI’s ChatGPT, which can produce natural-sounding text at the click of a button, essentially automating the production of misinformation.
Faced with this asymmetry, fact-checking organizations are having to build their own AI-driven tools to help automate and accelerate their work. It’s far from a complete solution, but fact-checkers hope these new tools will at least keep the gap between them and their adversaries from widening too fast, at a moment when social media companies are scaling back their own moderation operations.
Organizations like Full Fact AI are at the forefront of fighting the good fight, and it will be interesting to see how the technology develops over time to hold bad-faith actors to account.
However, it's an uphill battle. Tim Gordon is the cofounder of Best Practice AI (an artificial intelligence strategy and governance advisory firm), as well as being a trustee of a UK fact-checking charity. “The race between fact-checkers and those they are checking on is an unequal one. Fact-checkers are often tiny organizations compared to those producing disinformation,” Gordon says. “And the scale of what generative AI can produce, and the pace at which it can do so, means that this race is only going to get harder.”
The challenge is in scaling up. According to Wired, "there are nearly 400 fact-checking initiatives in over 100 countries, with two-thirds of those within traditional news organizations," but growth has slowed. This comes at a time when X has cut back its teams overseeing misinformation and hate speech, and Meta (the parent company of Facebook) reportedly restructured its content moderation team amid thousands of layoffs in November of last year. All of this comes against a backdrop of huge layoffs across the tech industry.
Of course, it is not just words that can be used to misinform: deepfake imagery, produced by both humans and AI, is now part of the challenge.
The search for truth and accuracy is an increasingly difficult one, but where news anchors take the step to do the right thing, we should recognize and applaud it. More of that is needed. Much more.
The Pandora's box of lies and mistruths has been ripped open in recent decades, and putting those impudent wisps and demons of disinformation back in is one heck of a tall order. For all our sakes, we must try, or soon our society will be built on a foundation of falsification. And soon enough, it will come tumbling down.