Online lies are poisoning our societies
Mis- and disinformation spread online are harming our world. We see the most obvious examples every day in the headlines. But the United Nations is also seeing this globally. We are most concerned about the impacts in three areas: COVID, climate, and conflict.
Social media has connected people in amazing ways. It has given hope to people struggling with rare diseases and a voice to those who were previously unheard, as in the Black Lives Matter and MeToo movements. It helps the UN connect directly with people all over the world, get their feedback, and make them feel heard. But in its current form I’m not convinced it’s doing us any good. Elon Musk likes to refer to social media as the global town square. But is this a town square we really want to be in? I’d argue it’s actually quite dystopian.
We go to the town square because we don’t want to miss out. This is where you can join conversations, discover the latest news and trends. But things aren’t what they seem. We may think entry is free, but our data is being monetized. And the noise is deafening.
Everyone is shouting — those with good intentions, and those with bad. But the sound system is rigged — against good actors and in favor of bad actors who are very good at spreading lies for nefarious ends. They profit from fear, they sow discord, they confuse and obscure the facts.
The platforms actively reward this behavior, with algorithms hard-wired to amplify provocative material. The result is a polluted information environment that augments voices that intentionally mislead us. It’s putting our health, peace, and the future of the planet at stake.
Take COVID, for example. When the pandemic hit, an infodemic of harmful mis- and disinformation flooded social media. We saw wild conspiracy theories, from claims the pandemic was a hoax to dangerous fake cures. Soon, the new vaccines became the target.
Research showed antivaxxer disinformation was coming from snake oil salesmen and entrepreneurs, quack doctors, alternative practitioners, and publishers. They peddled fake cures, books, courses, and DVDs. Many of them were based in the US.
But the global power of social media meant conspiracy theories dreamed up in Florida were being seen on the other side of the world. Many people refused to get vaccinated after believing lies they saw online. It was deeply frustrating. Countless lives were at stake.
We came up with a response, a way to fight back. Together with communications agency Purpose, we developed the Verified initiative to drown out and debunk lies by getting reliable health guidance onto social media feeds. Our sharable content reached millions of people.
But then the COVID conspiracy theorists switched tack. Sensing a new opportunity to profit from growing alarm at the crises facing the planet, they expanded their narratives to encompass not just the pandemic, but also climate change.
Disinformation was now undermining not only public health, but climate action too. The latest UN IPCC report was the first to state this unequivocally, blaming “deliberate undermining of science financed by vested interests” for “delaying climate action and preparedness.”
Disinformation is also causing immediate harm in conflict situations. Our clearest example of this was in Myanmar. In 2018, the UN found that disinformation spread on Facebook played a significant role in the extreme violence against the Rohingya minority.
Facebook, meanwhile, was looking the other way. At the time, the platform employed just five Burmese speakers to handle a community of 18 million users. Mark Zuckerberg later called what happened in Myanmar a “terrible tragedy” and said his platform “needed to do more.”
Facebook says it has since ramped up moderation, but the problem is baked into the design of social media. Five years later, the UN is still monitoring a surge in online lies and hate and their destabilizing effects on peacekeeping operations around the world.
We see it in the DRC, where disinformation has triggered attacks against UN staff. Online rumors and threats were followed by real world violence. UN bases were set on fire, offices looted. Three peacekeepers and numerous civilians were killed when a protest turned violent.
These aren’t isolated incidents. In a recent internal survey, 44 percent of UN peacekeepers said mis- and disinformation was having a critical impact on their work. A similar number said it was severely impacting their safety.
Innocent lives are at stake, but we aren’t sitting idly by. Whether it’s on COVID, climate, or conflict, good actors are actively pushing back. Yet we often struggle to cut through the noise, partly because our content is down-ranked.
The platforms won’t tell us how widespread this practice is. But we know for sure it happens on Facebook, which promotes posts from family and friends at the expense of civic institutions. Reliable sources are often buried, and false content amplified. In a crisis, that can prove deadly.
There’s a fierce debate about how to mitigate the harm caused by mis- and disinformation spread online. Many argue we shouldn’t suppress harmful content because doing so would threaten freedom of expression. We don’t need less speech, they say, but more.
Yet social media business models and algorithms mean the game is rigged against us. No matter how much good information we post, it doesn’t reach users who need it most. That hampers our efforts to fight climate change, build cohesion, and protect people from pandemics.
Unsurprisingly, there is no silver bullet. But we are working to develop new approaches. My team and I are working on a Code of Conduct for Integrity in Public Information. We are consulting experts, civil society groups, states, regulatory bodies, and the tech giants.
Our aim is to hold the social media industry responsible for the dangerous side effects of its business models. We’re seeking changes to platforms so that they no longer host harmful content. Where appropriate, that could mean government regulation.
This is already happening in places. The EU’s Digital Services Act is one example, as is similar legislation proposed in the US. These approaches are relatively new — only time will tell whether they reduce the harm we’re seeing.
Ultimately nothing will meaningfully change unless the platforms agree to help build a more humane internet. A start would be to dramatically increase moderation in vulnerable countries. The business model must change if the town square they manage is to become civil.
Social media has the potential to be what it claims to be — a space of connection, community and exchange. But that can’t happen until platforms dismantle the tools being used to harm society. If they don’t, they must be held accountable.