Meta move empowers purveyors of disinformation and hate
Meta’s recent announcement that it will roll back trust and safety measures will cause untold harm to our world.
Hateful content is likely to surge, prompting spikes in on- and offline harassment and violence. Far from allowing “more speech,” this will in effect silence speech and participation, shrink civic spaces and narrow access to information. Social media users will find it harder to access accurate, fact-based information.
Removing trained, nonpartisan and ethical fact-checkers and leaving content moderation to users makes the process haphazard, often slow and unreliable.
Nobel Peace Prize laureate Maria Ressa said Meta’s decision meant “extremely dangerous times ahead” for journalism, democracy and social media users. She added that it could lead to “a world without facts.”
The claim that efforts to create safer online spaces amount to censorship is as inaccurate as it is dangerous. Unregulated information spaces allow people to be silenced and targeted, often the most vulnerable and marginalized.
This undermines freedom of expression, which thrives when diverse voices are heard, not when hate or disinformation are amplified.
We are extremely concerned about the impacts of disinformation and hate campaigns spreading in already volatile situations. We’ve seen the potential for horrific impacts in conflicts such as in Myanmar and Ethiopia among others. Mark Zuckerberg called what happened in Myanmar a “terrible tragedy” and said his platform “needed to do more.”
Where has that sentiment gone now? Instead, we’re seeing a new, combative rhetoric, taking sweeping aim at efforts around the world to strengthen the integrity of our information ecosystem.
But digital trust and safety are not optional nice-to-haves. They are fundamental to upholding human rights, and to trust in science and institutions.
Writing on LinkedIn, UN Human Rights Chief Volker Türk said:
Social media shapes society and has immense potential to enhance lives and connect us. It also has demonstrated ability to fuel conflict, incite hatred and threaten safety. When at its best, social media is a place where people with divergent views can exchange, if not always agree.
When we call efforts to create safe online spaces ‘censorship’, we ignore the fact that unregulated space means some people are silenced — in particular those whose voices are often marginalised. At the same time, allowing hatred online limits free expression and may result in real world harms.
Freedom of expression thrives when diverse voices can be heard without enabling harm or disinformation.
We do not yet have the full picture of what these changes mean globally but we will not be standing idly by. We will continue to speak out about the impacts on vulnerable and marginalized communities around the world that were already under-served by Meta’s trust and safety resources.
We will continue to advocate for a more humane internet. One that allows people everywhere to navigate online spaces safely, express themselves freely without fear of attack, and access a range of views and information sources.
We will respond to the expected increase in hate and disinformation narratives directed at the UN and its priority issues, and continue to harness coalitions, build capacity and carry out mitigation measures.
We will continue to advocate for independent media acting ethically in the public interest, recognizing that a free and pluralistic media landscape is vital to the health of our information ecosystem.
And we will continue to advocate for healthier advertising practices, raising awareness of brand safety risks for advertisers whose ads are increasingly likely to appear alongside hate speech and disinformation.
One thing is clear. The international community is demanding better from big tech. As the UN Secretary-General has repeatedly said, tech companies must take responsibility for the damage caused by their products and take measures to uphold human rights. There can be no exceptions.