A Wartime Case for Information Integrity
We are living through bewildering times. Once again, the fog of war is driving the spread of hate and lies online — resulting in dangerous errors with real-time, real-world consequences. The case for information integrity has rarely been more compelling, or more urgent.
We’ve been here before. Just as in the early days of Russia’s invasion of Ukraine, demand for information is sky-high. Minute by minute, we’re glued to social media, checking for updates on the violence in Gaza and Israel. Horrified and anxious, we can’t look away.
The United Nations is intensely focused on the dire humanitarian situation and the plight of all civilians in need. Secretary-General António Guterres is urging unimpeded and sustained humanitarian access to Gaza and the immediate and unconditional release of Israeli hostages.
And we’re raising the alarm about another problem, and a big one. Hate speech, mis- and disinformation related to the conflict, already rampant, are flooding social media feeds, warping perceptions, and risking further violence. In this context especially, hate lands on fertile ground.
The problem is largely structural. Digital platforms are a double-edged sword in times like these. On one hand, they are invaluable news-gathering tools, bringing us agonizingly close to events in real time and helping brave reporters and citizen journalists bear witness to the human cost of war.
But the same platforms can mislead just as powerfully, as all journalists and fact-checkers know. Social media has long been a useful tool for anyone wanting to spread false, hateful, or incendiary messages. Disinformation actors are masters at exploiting most platforms’ business models, which rely on attention-grabbing algorithms that boost provocative content to drive engagement.
That means that when news breaks, verifying video, images and audio circulating online becomes a central task of all good newsrooms. In the fog of war, with emotions high and deception in the air, that task gets harder than ever.
Dangerous rumors thrive in this fevered atmosphere. Unverified claims swirl across encrypted messaging services before reaching wider audiences on larger platforms. By then, the content has been forwarded so many times it’s hard to verify its origin and accuracy.
New tools are muddying the waters even further. Alongside the “usual” fakes (images lifted from video games, old or unrelated images posted out of context), we’re seeing bots posing as high-profile politicians, journalists, or news outlets, spewing out sophisticated fabrications.
AI technology is changing the landscape rapidly, and radically. Even seasoned journalists and fact-checkers are now struggling to verify information in real time. Dangerous errors are being made, further eroding already fragile trust in traditional news media.
So what can be done?
The UN has long been urging platforms to ramp up efforts to enforce their own guardrails against the spread of harmful content. We are encouraged that the European Union is demanding platforms comply with the Digital Services Act and that some have responded by outlining the measures they’re taking to do so. Yet it’s clear current efforts are not nearly enough.
Our alarm bells are constantly ringing. Anyone on social media can see that things are bad. Many are saying — anecdotally — that they seem worse than ever. But the truth is that no one knows just how bad, since the platforms do not share sufficient data. The time has come for that to change.
Researchers need access to hard data to quantitatively measure the true spread of hate speech, mis- and disinformation, and assess how well current efforts to counter online harms are working — or not. Sober solutions require sober analysis.
We can no longer allow a handful of companies not only to control the content users see, but also to determine how and when we learn how they operate. At the moment, they work in secrecy, acting as judge and jury over their own practices, all the while reaping huge profits.
Things have to change. But until they do, I want to make two urgent appeals to all social media users. First, be patient. We’ve all grown used to round-the-clock updates. But an information environment this polluted simply can’t support that. It’s time to adjust our expectations.
Second, stay wary. Many of us are fearful, outraged, and grieving, and it can be tempting to react in the heat of the moment. But fakes often spread precisely because they trigger these emotions, making us more likely to share a post without checking whether it’s true. Studies have shown that lies travel many times faster than facts.
We’ve been here before. So let’s turn that into a strength. Recent crises, from Ukraine to the COVID-19 pandemic, have taught us some hard lessons, including methods to stop the spread of harmful content. We must remember them now, in another dark hour for our world.
At a time of fear, outrage, and grief, it is important to consult multiple, independent, trusted sources. But it’s equally important to remember that not all news outlets are willing or able to present a nuanced and accurate view, and that many are struggling to do so. Each of us can do our part for the integrity of our information environment: we can take care before we share.
Many thanks to Josie Le Blond for her collaboration on this piece.