We’re all living through a whirlwind of terrible news. These past weeks, so many of us have been glued to our phones, checking social media day and night for the latest updates on the Russian invasion of Ukraine. Our horror and anxiety keep us checking in. We cannot look away.
Social media platforms have proven invaluable tools, bringing us unbearably close to events as they unfold in real time. Thanks to the courage of reporters and citizen journalists, the world sees and feels the horror of daily life for Ukrainians under siege. We feel their pain and we want to help.
But we must stay wary online, especially when emotions are high. We are in an information war. Accurate information can come from eyewitnesses, but we should be cautious when sharing. Pausing to verify sources and check information must become second nature for us all.
After all, useful as they are, social media platforms weren’t designed to keep us informed but to keep us engaged. To that end, they study our behavior and show us content likely to make us react. One surefire way is to stoke our outrage, promoting anger, polarization, and even bloodshed.
Even before Ukraine, the design of the platforms made them useful tools for bad actors seeking to spread false, hateful, or incendiary messages to tens of millions of people. Amplified by algorithms designed to grab attention, the most provocative posts often rise to the top of our feeds.
Not those posts that are true, or those that serve the public interest. The most provocative. It’s easy to be provocative if you don’t need to stick to the truth. Lies are often more engaging, so the algorithms promote them. That’s why studies show disinformation travels faster on social media.
These design flaws have allowed a “splinter in reality,” Nobel Peace Prize-winning journalist Maria Ressa recently told CNN’s Reliable Sources. “The world’s largest delivery platform of news has actually prioritized the spread of lies laced with anger and hate over facts,” she added.
This is problematic enough in peacetime, eroding our sense of a shared reality and confining us to our echo chambers. But when societies are descending into violence, it can be catastrophic. In 2018, the UN found that abuse of Facebook had played a “significant” role in horrific violence in Myanmar.
And this is no isolated case. Across the world, the UN is monitoring a surge in the spread of disinformation, racism, and antisemitism. We see it in Ethiopia, and in former flash points in the Balkans where there’s been a spike in genocide denial and the glorification of war criminals.
I’m not suggesting platforms shoulder all the blame. Let’s be clear: they’re the medium, not the messenger. Yet it’s high time we talked openly about the damage done when these flaws are exploited. This conversation was already long overdue; Ukraine has made it even more urgent.
Yet these past weeks have seen a shift. Tech companies have stopped hiding behind their role as platforms and begun to acknowledge that content published on them can meaningfully influence real world events. We are seeing unprecedented responses, some more considered than others.
On the plus side, moderation is ramping up. Lies are being identified and removed more systematically. Other responses have been kneejerk. Facebook’s decision to lift its blanket ban on hate speech and incitement to violence in certain circumstances was “one hell of a can of worms,” as media sociologist Jeremy Littau put it.
“Facebook has rules, until it doesn’t,” Littau went on. “It’s just a platform and doesn’t take sides, until it does.” It’s a pertinent observation and one that could apply to other social media platforms too. Above all, the Ukraine response belies the claim the platforms are neutral. It shows that they have the power to make big changes — if they want to.
All this makes Ukraine a watershed moment. So, let’s use this impetus to fuel a conversation about how we fix some of social media’s worst design flaws. A redesign might sound far-fetched, but let’s imagine it for a moment. It might be easier than you think.
We often think of algorithms as automatic, beyond our control. Yet human engineers regularly tweak and update them — to give users a better experience, the platforms say. So, what if our priorities shifted? What if algorithms promoted content that made the online space more humane?
There’s no rule to say social media can’t serve humanity’s best interests. Why can’t platforms underpin peace, dignity, and the rule of law? We just might build a welcoming place of free exchange. A place for reliable information in a crisis, where privacy and human rights are upheld.
This is our vision at the United Nations. My team and I are working on global commitments to address the flaws inflicting real harm on societies. Consulting widely, we will develop a Code of Conduct for integrity in public information that aims to restore and refocus our digital commons to serve the global public good.
We are just beginning this process. But I see it as a great chance to rebuild trust in our shared online spaces. It’s an opportunity to promote facts and science and to reduce the harm caused by lies and hate. The events of the past weeks are final proof, if any were needed: It’s time for a change.