The Coming Flood of Disinformation (Foreign Affairs)

Bonisiwe Shabane

Nearly eight years after Russian operatives attempted to interfere in the 2016 U.S. presidential election, U.S. democracy has become even less safe, the country’s information environment more polluted, and the freedom of speech of U.S. citizens more at risk. Disinformation—the deliberate spread of false or misleading information—was never the sole domain of foreign actors, but its use by domestic politicians and grifters has ballooned in recent years. And yet the country has been unable to rein it in because the very subject has become a partisan, politicized issue.

Lawmakers have not been able to agree to common-sense reforms that would, for instance, require more transparency about the actions of social media companies or about the identity of online advertisers. In the process, they have enabled an environment of hearsay, in which many people, particularly conservatives, have used false or misleading information to raise the specter of a vast government censorship regime. That chimera of censorship chills legitimate academic inquiry into disinformation, undermines public-private cooperation in investigating and addressing the problem, and halts crucial government responses. The result is an information ecosystem that is riper for manipulation than ever. I have had a unique view of this slow-motion failure. Between 2012 and 2016, I worked on democracy-support programs in Europe and Eurasia when Russia was auditioning the disinformation tactics it would later employ in the United States.

I heard regularly from my colleagues in Georgia, Poland, and the Baltic states about Russia’s attempts to influence their political systems and derail their efforts to integrate with the West; Russian agents would launch... Officials in Washington and Brussels almost uniformly ignored these operations. I then watched from Kyiv in 2017 as the United States grappled with revelations of Russian interference in the U.S. presidential election: the Kremlin had sought to influence U.S. voters by spreading propaganda and lies on social media and by hacking U.S. political campaigns.

By contrast, Ukrainians were not surprised to see Russia brazenly trying to manipulate the democratic process in the United States. After all, the Kremlin had used the same online assets, including the Internet Research Agency—the infamous St. Petersburg–based online propaganda company—to call into question the legitimacy of Ukraine’s 2013 Euromaidan protests ahead of Russia’s illegal annexation of Crimea in 2014. From 2018 onward, I briefed U.S. and foreign officials and testified multiple times before Congress at the invitation of both Democrats and Republicans, always reminding lawmakers that disinformation should concern both parties. Along with many of my colleagues in academia and tech, I called for more nonpartisan action from legislators and transparency from social media platforms, championed investment in information literacy programs and public media, and...

But then the same salacious, captivating disinformation narratives came for me. In 2022, I was appointed the executive director of the Disinformation Governance Board, a new body in the Department of Homeland Security that would help coordinate anti-disinformation efforts within the agency. At no point did the board or I have the mission or ability to suppress or censor speech—the board’s charter made that explicitly clear. But soon after its unveiling, partisan political operatives pounced and subjected the board and me to a baseless and ruthless assault, claiming that I sought to clamp down on conservative speech. They misrepresented the board’s purpose, maligned me and my work, and spurred a torrent of death threats targeting me and my family. Instead of backing the board and me, the U.S. government caved. It paused the activities of the board. I resigned, and the board was disbanded a few months later. The United States had failed to stand up to the very disinformation it had sought to fight. And its broader, ongoing struggle to grapple with disinformation bodes ill not just for the country but also for democracies around the world.

The Center for National Security and Foreign Affairs’ (NSFA) 2024–2025 Foreign Affairs Forums series kicked off last week with Elis Vllasi, senior research associate and lecturer at NSFA, speaking on The Dangers...

An expert in analyzing complex grey-zone challenges arising from influence operations, information warfare, social media weaponization, and emerging and disruptive technologies, Vllasi provided a framework for why disinformation... Vllasi began by recounting the first recorded instance of state-sponsored disinformation, which occurred in 1274 BC during the Battle of Qadesh between Muwattalli II of Hatti and Ramses II of Egypt. Two Hittite soldiers deliberately allowed themselves to be captured by Ramses’ forces and falsely reported that the Hittite army was farther north than Qadesh. Ramses II, eager to seize Qadesh, fell into the trap and nearly lost the battle, saved only by the arrival of reinforcements. This event, recorded on the walls of five Egyptian temples, illustrates the long history of disinformation in warfare.

What We Know About Disinformation And Propaganda

There are many definitions of disinformation, but the common denominator is that it involves giving people false information to get them to do something they wouldn’t otherwise do. It is used to create confusion and information paralysis. But disinformation is just one tactic.

Attempts by Russia, China, and other U.S. adversaries to spread dangerous false narratives need to be countered before they take root. Dana S. LaFon is the 2023–24 National Intelligence Fellow at CFR.

Disinformation campaigns can be a powerful tool to shape beliefs on matters of great geopolitical importance. Bad actors can deploy them against rivals to sow costly discord, create political uncertainty, and deepen divides within a community. Monitoring and “pre-bunking” even the most obscure claims is important because, if left unaddressed, their damage can be hard to undo, and in some cases, those false narratives can presage a real-life attack. There are three steps to building an effective disinformation campaign: 1) craft an influential false narrative around an egregious lie; 2) amplify the false narrative across various channels using influence principles; and 3) obfuscate... A prime example is the Russian government’s false narrative that the United States has been developing bioweapons in Ukraine for years.

Importantly, this narrative was among the earliest indicators that Russia intended to invade Ukraine. A 2022 Microsoft report [PDF] found that Russian disinformation operatives “pre-positioned” the false claim in November 2021, when it was featured on a YouTube channel operated by an American based in Moscow. When Russia invaded Ukraine three months later, Kremlin-operated news sites such as RT and Sputnik News referred to the pre-positioned report as an authoritative account that justified Russia’s invasion. This narrative has been debunked repeatedly, including by NewsGuard, a U.S.-based media watchdog whose analysts are specially trained to identify the spreading of false information. This disinformation campaign is similar to one the Soviet Union employed in the 1980s, which falsely claimed that the United States had developed HIV/AIDS as a bioweapon.

In an era defined by rapid technological advancement and instantaneous global communication, information has become not just a resource but also a battleground.

Foreign state-sponsored disinformation campaigns—especially those orchestrated by Russia—have emerged as potent, low-cost tools for geopolitical influence and social disruption. These efforts exploit the global reach of digital platforms, erode democratic norms, and blur the boundaries between information warfare and traditional statecraft. This essay argues that disinformation has evolved into a strategic weapon, transforming the digital information environment into a contested domain akin to land, sea, air, space, and cyberspace. Understanding the structure, intent, and consequences of these campaigns is therefore vital for developing effective countermeasures. Disinformation, in contrast to misinformation (false content shared without intent to deceive), is the deliberate dissemination of false information to deceive or manipulate an audience. Russia’s deployment of disinformation has been especially systematic and adaptive, integrating propaganda techniques rooted in Soviet-era doctrine with modern digital tools.

According to Rid (2020), this approach constitutes “active measures”—covert and semi-covert operations designed to influence political outcomes, sow discord, and undermine adversaries from within. The State Department’s 2019 report, Weapons of Mass Distraction, highlights the architecture of these operations: they rely on a combination of state-controlled media (e.g., RT and Sputnik), proxy outlets, covert social media accounts, and sympathetic influencers to amplify false narratives. These strategies reflect a broader doctrine of “information confrontation” in Russian military thinking, in which control of information is as critical to achieving strategic goals as conventional weaponry. The concept of disinformation as a “biohazard” underscores its infectious nature. Like a virus, disinformation spreads invisibly, mutates rapidly, and exploits the vulnerabilities of its host societies.

As the NDU Press article notes, Russia’s campaigns are tailored to the sociopolitical fault lines of target countries—race, immigration, economic inequality, and vaccine hesitancy, among others. This precision targeting is facilitated by data analytics and AI-driven algorithms that allow for hyper-personalized influence operations. The 2016 U.S. presidential election marked a watershed moment: Russian actors, notably the Internet Research Agency (IRA), orchestrated extensive efforts to polarize voters through false personas, divisive memes, and coordinated inauthentic behavior. These tactics were not merely aimed at supporting one candidate but at degrading trust in democratic institutions and electoral integrity itself.

Falsehoods, fabrications, fake news – disinformation is nothing new. For centuries, people have taken deliberate action to mislead the public. In medieval Europe, Jewish communities were persecuted because people believed conspiracy theories suggesting that Jews spread the Black Death by poisoning wells. In 1937, Joseph Stalin doctored newspaper photographs to remove those who no longer aligned with him, altering the historical record to fit the political ambitions of the present. The advent of social media helped democratise access to information – giving (almost) anyone, (almost) anywhere, the ability to create and disseminate ideas, opinions, and make-up tutorials to millions of people all over the... Bad actors, or just misinformed ones, can now share whatever they want with whomever they want at an unprecedented scale.

Thanks to generative AI tools, it’s now even cheaper and easier to create misleading audio or visual content at scale. This new, more polluted information environment has real-world impact. For our institutions (however imperfect they may be), a disordered information ecosystem results in everything from lower voter turnout and impeded emergency responses during natural disasters to mistrust in evidence-based health advice. Like any viral TikTok moment, trends in misinformation and disinformation will also evolve. New technologies create new opportunities for scale and impact; new platforms give access to new audiences. In the same way BBC Research & Development's Advisory team explored trends shaping the future of social media, we now look to the future of disinformation.

We want to know how misinformation and disinformation are changing – and what technologies drive that change. Most importantly, we want to understand public service media’s role in enabling a healthier information ecosystem beyond our journalistic output. R&D has already been developing new tools and standards for dealing with trust online. As a founding member of the Coalition for Content Provenance and Authenticity (C2PA), we recently trialled content credentials with BBC Verify. We’ve also built deepfake detection tools to help journalists assess whether a video or a photo has been altered by AI. But it’s important to understand where things are going, not just where they are today.
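The content credentials mentioned above rest on the C2PA standard, which embeds a signed provenance manifest in the media file itself. As a rough illustration of what that embedding looks like at the file level, here is a minimal Python sketch (not BBC R&D's tooling, and not a real validator) that heuristically checks whether a JPEG carries an embedded C2PA manifest by scanning its APP11 marker segments for JUMBF box data. The file name example.jpg and the function has_c2pa_manifest are illustrative assumptions; actually verifying a credential (signatures, hash assertions, certificate chains) requires a full C2PA SDK.

```python
# Minimal sketch: heuristic check for an embedded C2PA (Content Credentials)
# manifest in a JPEG file. C2PA data is carried in APP11 (0xFFEB) marker
# segments as JUMBF boxes, so we walk the marker segments that precede the
# image data and look for the "jumb" box type or the "c2pa" manifest label.
# This only detects the *presence* of provenance metadata; validating it
# requires a complete C2PA implementation.
import struct

def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()

    if not data.startswith(b"\xff\xd8"):      # SOI marker: not a JPEG
        return False

    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                   # lost sync with the marker stream
            break
        marker = data[i + 1]
        if marker in (0xD9, 0xDA):            # EOI or SOS: no more metadata segments
            break
        if 0xD0 <= marker <= 0xD7 or marker == 0x01:
            i += 2                            # stand-alone markers carry no payload
            continue
        (seg_len,) = struct.unpack(">H", data[i + 2:i + 4])
        payload = data[i + 4:i + 2 + seg_len]
        if marker == 0xEB and (b"jumb" in payload or b"c2pa" in payload):
            return True                       # APP11 segment carrying JUMBF/C2PA data
        i += 2 + seg_len

    return False

if __name__ == "__main__":
    print(has_c2pa_manifest("example.jpg"))   # hypothetical input file
```

A positive result here only means provenance metadata is present; a newsroom workflow would still need to extract and validate the full manifest, since metadata can be stripped or re-embedded without tripping a simple presence check.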

Based on some preliminary expert interviews, a new picture of that future is beginning to emerge.
