How AI is boosting disinformation

Melissa Fleming
Feb 7, 2024


Hate and lies are already polluting our digital information ecosystems, but generative AI tools could be about to make things dramatically worse.

UN Photo/Elma Okic

“AI” was named word of the year for 2023 — and for good reason. New tools, if developed and used responsibly, can change the world for the better.

We’ve already seen glimpses of how AI-powered tools are improving access to all kinds of information, as well as healthcare, education, legal and public services for people around the world.

But we must be cautious. This rapidly evolving technology presents grave risks as well as opportunities. In January, the World Economic Forum identified AI-powered misinformation and disinformation as the world’s biggest short-term threat.

Misinformation, disinformation and hate speech are already polluting our information ecosystems — polarizing societies, eroding trust, and ultimately threatening human progress.

The use of AI to help spread this content is nothing new. Disinformation actors have long been deploying AI-powered bots on social media and training AI-powered algorithms to promote hate-filled and misleading content.

Yet high investment and maintenance costs have always limited the scale of disinformation operations — until now.

Cheap, off-the-shelf generative AI tools have lowered the cost and manpower barriers to creating and spreading disinformation. Hateful and misleading content can now be churned out with little human intervention, cheaply and at scale.

What’s more, this content is even harder to detect. AI-generated material leaves few fingerprints, making it difficult for journalists, fact-checkers, law enforcement or ordinary users to tell it apart from the real deal.

We are already seeing impacts in many areas, from peace and security to human rights. Targeted disinformation — already a potent weapon in any war — is now cheaper to make and spread. Disinformation campaigns have been cited as one of the biggest challenges facing UN peacekeeping missions: 75% of surveyed UN peacekeepers reported that misinformation or disinformation had impacted their safety and security. The ability to flood online environments in conflict zones with AI-generated disinformation can lead to further violence and destabilization.

The same goes for content that is racist, anti-Semitic, Islamophobic, otherwise xenophobic, or even sexually abusive. One shocking recent study by Stanford researchers found more than 1,000 exploitative illegal images of children in a prominent open-source database used to train some AI image-generating tools.

Equally disturbing is the rise of AI-generated non-consensual pornographic images. In many cases, these are being spread in a bid to silence female voices — politicians, journalists, activists, and even disinformation researchers.

But they are also cropping up in more everyday settings. Last year, in Spain and in the United States, AI-generated fake nude images of teen girls were found circulating online and through messaging apps. They had been made by the girls’ teen classmates, simply by loading photos into an AI app.

The potential dangers don’t end there. Many researchers are warning of the threat AI-generated disinformation poses to democracies — a massive issue during this bumper election year, when more than 2 billion people around the world are eligible to vote.

In fact, AI-powered voter manipulation is already here. AI tools are being used to spread plausible-looking deepfakes and other disinformation, in some cases via mock news sites or fake broadcasters complete with AI-generated news anchors. Almost anyone can create a news outlet that looks like a real channel. This deepfake video technology is being deployed on social media feeds to deceive people with propaganda disguised as news.

Many attempts to sway voters are part of wider efforts to sow confusion and undermine public trust in everything from the media to public institutions to the electoral process itself. Even science — including the scientific consensus around climate change — is under attack.

We can’t afford to go on like this. Many dedicated bodies around the world, including the UN, have long been exploring ways to tackle online harms while robustly upholding human rights. Yet AI tools are evolving so fast that they threaten to overtake that work.

That’s why we must act fast. Governments, civil society, and individual users are demanding urgent action from the developers of AI tools to make their work safer and more transparent.

We need effective guardrails, we need humane solutions, and we need generative AI tools that embrace safety and privacy by design.

The UN is seeking action on several fronts. In October, UN Secretary-General António Guterres established a multidisciplinary, representative AI Advisory Body, which recently presented recommendations for strengthening global AI governance, while UNESCO has issued important guidelines on the ethics of AI and on mitigating potential online harms.

In addition, my team and I are developing a code of conduct for information integrity to help boost societal resilience against disinformation and hate, while robustly upholding human rights.

There are some hopeful signs. It’s encouraging that some AI developers have agreed to watermark and fingerprint AI-generated photos and videos. The technology is not foolproof, but it’s a start. New iterations of watermarking technology should also carefully consider the implications for users’ basic rights.

At the same time, we’ve seen how AI-powered tools themselves are essential allies in the fight against information harms, with many tech companies relying heavily on AI to detect and address harmful content on their platforms.

Generative AI could be a powerful force in our work for information integrity if harnessed for good. But this has to happen now. The stakes are far too high. Once the damage is done, it will be too late.


Written by Melissa Fleming

Chief Communicator #UnitedNations promoting a peaceful, sustainable, just & humane world. Author: A Hope More Powerful than the Sea. Podcast: Awake at Night.