How we can make our information ecosystem more humane

Melissa Fleming
4 min read · Jul 17, 2024

--

For the past three years, my team and I have been working on a set of recommendations to guide action for a healthier information ecosystem. Now, I’m proud to say, they are out in the world.

Recently, I heard a story that broke my heart. At a conference, I met a British campaigner named Ian Russell, who lost his daughter Molly to suicide in 2017. She was just 14.

Ian spoke with such warmth about Molly. She was kind, he said, sensitive and caring, with many friends. She spent her last evening watching TV with her family. Nothing seemed untoward when Molly went up to bed. But her mother found her unresponsive the next morning.

Since that day, Ian has been on a mission to understand this tragedy — and to stop it happening again. Quickly, he discovered that Molly’s online world had been radically different from her family life. Social media had been bombarding her with content promoting suicide and self-harm.

Listening to Ian, my thoughts went to the young people I know and love. Many have told me that time online has harmed their mental health — as platform algorithms designed to boost user engagement push them content that feeds on and inflames their deepest vulnerabilities.

Yet seven years on from Molly’s death, Ian says that little has changed. A raft of similar stories suggests not enough is being done to keep young people safe online. There are mounting calls for action.

These tragic stories highlight just how troubled our information ecosystem has become. In fact, they are a symptom of a wider malaise.

The United Nations has long warned that the spread of hate and lies is causing grave harm to our world — fueling conflict, threatening democracy and human rights, and undermining public health and climate action.

These threats to information integrity aren’t new, but they are unprecedented in their current scale and sophistication — and now supercharged by the rapid rise of readily available AI technologies.

UN Secretary-General António Guterres has been crystal clear: We cannot go on like this.

Back in 2021, he asked my team and me to spearhead a new initiative.

We were to present a vision of a more humane information ecosystem — one that no longer incentivizes harmful misinformation, disinformation and hate speech, and that guarantees human rights for all.

Now I’m proud to share the results of that initiative — and to introduce a powerful new advocacy tool.

Launched on June 24, the United Nations Global Principles for Information Integrity are our blueprint for healthier information spaces — a blueprint firmly rooted in human rights.

Developed in consultation with UN member states, youth leaders, academia, civil society, tech companies and the media, they offer guidance to strengthen information integrity.

The recommendations include a call for governments, tech companies, AI developers and advertisers to take special measures to protect and empower children, with governments providing resources for parents, guardians and educators.

It is my great hope that the Global Principles act as a beacon for change.

And I hope they offer support and encouragement to all those I have met in recent years who are struggling in this information environment.

For the civil society groups and researchers under attack as they seek tangible and lasting change.

For the social media employees and whistleblowers pushing their employers to act.

For the public interest media and fact-checkers putting out reliable and accurate information and disarming disinformation and hate.

And for all those — like Molly’s father, Ian — seeking to protect vulnerable young people from harm.

To all of you, the United Nations has heard your call for guidance and support. The Global Principles are for you.

Our key proposals include:

  • Governments, tech companies, advertisers, media and other stakeholders should refrain from using, supporting or amplifying disinformation and hate speech for any purpose.
  • Governments should provide timely access to information, guarantee a free, viable, independent, and plural media landscape and ensure strong protections for journalists, researchers and civil society.
  • Tech companies should ensure safety and privacy by design in all products, alongside consistent application of policies and resources across countries and languages, with particular attention to the needs of those groups often targeted online. They should elevate crisis response and take measures to support information integrity around elections.
  • All stakeholders involved in the development of AI technologies should take urgent, immediate, inclusive and transparent measures to ensure that all AI applications are designed, deployed and used safely, securely, responsibly and ethically, and uphold human rights.
  • Tech companies should scope business models that do not rely on programmatic advertising and do not prioritize engagement above human rights, privacy and safety, allowing users greater choice and control over their online experience and personal data.
  • Advertisers should demand transparency in digital advertising processes from the tech sector to help ensure that ad budgets do not inadvertently fund disinformation or hate or undermine human rights.
  • Tech companies and AI developers should ensure meaningful transparency and allow researchers and academics access to data while respecting user privacy, commission publicly available independent audits and co-develop industry accountability frameworks.
  • Governments, tech companies, AI developers and advertisers should take special measures to protect and empower children, with governments providing resources for parents, guardians and educators.

--

Melissa Fleming

Chief Communicator #UnitedNations promoting a peaceful, sustainable, just & humane world. Author: A Hope More Powerful Than the Sea. Podcast: Awake at Night.