Since 2016, hostile foreign states have been using weaponized information to attack the United States. There have been two prominent targets of such attacks: First, disinformation has been used to exploit domestic U.S. race relations, and second, it has been used to allege U.S. culpability for the COVID-19 pandemic. Race was the principal target of Russian disinformation during the 2016 U.S. presidential election, which aimed to assist Donald J. Trump and undermine his opponent, Hillary Clinton.1 Race again became a target of Russian disinformation when protests against police violence erupted across the United States in May 2020. Meanwhile, China has borrowed Russia’s disinformation playbook to disseminate conspiracies alleging the U.S. government’s responsibility for the novel coronavirus.2 Russian and Chinese information warfare attacks on the United States about race and COVID-19 may appear new, but in fact they have a long history. At the start of the 21st century, the U.S. government seemingly forgot that history: the threat that Kremlin disinformation previously posed to U.S. national security, and, crucially, Washington’s own past experience with countering it.
Exploiting U.S. race relations and disseminating disinformation about pandemics were major methods of Soviet disinformation during the later stages of the Cold War. The Kremlin’s aim was to discredit the U.S. government, delegitimize its institutions, disorient and polarize American society, and cast doubt on the true account of events in order to undermine the country’s effective functioning as a democracy. Although the technology that Russia is deploying today to spread disinformation is new, its strategy is the same as that of its predecessor, the Soviet Union. Equally important, the methods that Washington previously devised for debunking Soviet disinformation about race relations and pandemics are still applicable today. Using a growing field of public policy research — applied history — this paper argues that the most effective way to counter hostile foreign state disinformation is through a whole-of-society effort in which U.S. intelligence agencies work with other government bodies as well as the private sector to expose audiences to accurate information and encourage digital literacy.
Despite the disinformation threats that the United States faces, it lacks a coherent, coordinated strategy to counter them.3 Different government agencies are confronting disinformation, as are the social media technology giants, but the United States currently lacks a comprehensive strategy involving the spectrum of public and private sector stakeholders.4 Drawing on a wealth of previously classified intelligence records from multiple international archives and exclusive interviews with intelligence practitioners, this paper offers recommendations for creating such a strategy. Establishing a coherent strategy for countering disinformation is important because, far from being isolated, recent efforts to influence particular events, like the 2016 and 2020 U.S. elections, indicate that disinformation will be a major, persistent theme of 21st-century international security. Foreign states will increasingly weaponize digital information to attack democracies in order to advance their geopolitical grand strategies.
Russia’s invasion of Ukraine in February 2022 undercut its past efforts to use disinformation to divide NATO: The invasion created more resolve in the alliance than arguably anything else in NATO’s history. It would be a mistake, however, to think that Russian disinformation has been relegated to the past. In fact, there are good reasons to believe that new opportunities for Russian disinformation against the West will open as the war in Ukraine grinds on in the coming winter. For example, we are likely to see Russian disinformation campaigns aimed at exploiting the social dislocation caused by continued oil price spikes, food supply disruptions from Ukraine, and the resulting increases in inflation in Western countries. Given these circumstances, it is all the more important to understand Russian disinformation. Studying its history is the best way to do so.
Before Russia’s interference in the 2016 U.S. election, the history of Soviet active measures was a niche academic subject within intelligence history. Its study was undertaken by former practitioners and historians like Christopher Andrew.5 More recently, Thomas Rid has built on Andrew’s work.6 Since 2016, the U.S. government and public policy scholarship have busily set about “re-learning” Moscow’s long history of using active measures.7 Scholars are also, more generally, turning to the Cold War as an analog for the resurgence of great powers in the 21st century.8 However, still missing from recent official investigations and scholarship are lessons that can be gleaned from the U.S. government’s own public policy efforts to counter Soviet disinformation in the later stages of the Cold War.9 While the diagnosis of the disinformation problem has advanced, scholars and practitioners have yet to learn from past treatments. By studying and applying the history of those treatments, this paper prescribes cures for disinformation. When it comes to disinformation and debunking it, what is old is new again.
This paper makes three principal arguments. First, applied history is a valuable field of public policy research, as demonstrated by the history of intelligence, disinformation, and international security. Second, the history of Soviet disinformation targeting U.S. domestic racial protests and alleging that Washington used bioweapons to cause pandemics shows how and why hostile foreign states use disinformation to attack liberal democracies. Contrary to past and present claims about foreign malign “hidden hands” in U.S. domestic affairs, the Soviet Union’s disinformation strategy, and its impact, were in fact limited: It targeted existing divisions within American society and did nothing more than amplify them. Third, the U.S. government devised policies for countering Soviet disinformation about race and pandemics that are still applicable, even in today’s digital information landscape, where cyber interconnectivity and the prevalence of social media mean that citizens and policymakers drink from a daily firehose of information. Although the digital revolution has offered unprecedented capabilities through which states can disseminate disinformation, the history of what came before is still relevant and applicable. In fact, it is impossible to understand contemporary foreign state disinformation strategies without appreciating their past. This will become an increasingly important subject as societies become more interconnected this century. The digitized 21st century will witness “infodemic” events, producing so much information that it will be difficult, if not impossible, for audiences to distinguish facts from state-sponsored lies.
This paper proceeds by assessing Soviet disinformation strategy and using two case studies that illustrate it: the KGB’s targeting of U.S. race relations at the 1984 Los Angeles Olympics, and KGB allegations that the U.S. government created the AIDS virus. These two case studies were chosen because they show the separate aims of Soviet disinformation: to influence a specific outcome and to create a broad atmosphere of distrust. They were also chosen because they reveal the impact of disinformation and, crucially, how the U.S. government debunked that disinformation, providing lessons for doing so today.
The paper uses an applied history methodology: the express effort to use the past to inform current decisions and policymaking. The use of history for public policy has a long tradition, but is currently experiencing a renaissance.10 This article follows a methodological and analytic approach set out by the late Harvard professors Ernest May and Richard Neustadt in their seminal book, Thinking in Time, in which they demonstrate that, to provide policymakers with valuable insights from history, it is necessary to analyze similarities and differences between past and present situations.11 While the history of (counter)disinformation can be applied, some preliminary words of caution are necessary about doing so. History never repeats itself, so lessons cannot be lifted from one period and simply transplanted to another. History is no better at predicting the future than any other field of public-policy scholarship. It is also difficult to take deep historical expertise and turn it into easily consumed products for policymakers. As one scholar-turned-foreign policy adviser has put it, “History does not lend itself easily to the PowerPoints or executive summaries on which our policymakers increasingly rely.”12 However, not studying history is equally mistaken: It would mean living like an amnesiac, failing to learn from past experiences and repeating old mistakes.
Studying history in depth — not through cursory PowerPoint presentations — offers decision-makers insight into how their predecessors tackled analogous situations in the past and illuminates policy decisions today. Scholars are rightly concerned about anachronism and are wary of making “presentist” historical arguments. But May and Neustadt’s methodology, to embrace both similarities and differences with the past, helps to alleviate those concerns. Some scholars prefer “historical sensibility” or “historicism” to the term “applied history,” but the end purpose is the same: to inform contemporary policy decisions by identifying precedents and finding analogs with the past, while avoiding doing violence to history by applying bad historical lessons.13
Disinformation Tactics and Strategy
While disinformation currently attracts significant attention in the news, it frequently lacks a clear definition. It can most usefully be understood as false, misleading, or distracting information that is deliberately spread. To be effective, it must be unattributable to a government, which usually necessitates the involvement of a clandestine intelligence service. It is distinct from propaganda, the purpose of which is to persuade, and also from misinformation, which is false or misleading information that a government officially and openly produces.14 When a government secretly plants a false story in a newspaper and disguises the authorship, that is disinformation. When it publicly provides misleading “alternative facts,” that is misinformation.
Disinformation is an ancient part of warfare. However, during the 20th century, it was institutionalized in unprecedented ways. In Soviet Russia, the Bolsheviks adopted and expanded disinformation tactics that their tsarist predecessors had used against them. Disinformation was thereafter part of the Bolsheviks’ political warfare against their ideological enemies, foreign and domestic. The Soviet secret police, the Cheka, later known as the KGB, had a disinformation unit in its foreign intelligence branch from its early days.15 During World War II, Soviet Russia and the other major belligerent powers used disinformation in their war efforts. In the war’s early stages, British intelligence forged documents to deceive President Franklin D. Roosevelt’s administration into bringing America into the war on Britain’s side. Once in the war, the British and U.S. governments used disinformation as part of their successful strategic deception of the Axis powers, deploying a “bodyguard of lies,” as Prime Minister Winston Churchill put it, before the Allied invasion of Europe on D-Day in June 1944.16 Britain’s wartime foreign intelligence and sabotage services, MI6 and SOE, had forgery departments, as did America’s wartime intelligence service, the OSS.17
In the post-war years, after the wartime Grand Alliance had disintegrated, its members — the United Kingdom, the United States, and the Soviet Union — used disinformation as part of their respective ideologically driven Cold War grand strategies. One of the first acts undertaken by the CIA after its establishment in 1947 was to meddle in Italy’s democratic elections the following year, forging documents to discredit socialist candidates as communists and bribing moderate politicians.18 It did so, in cooperation with British intelligence, to counter Soviet clandestine subversion that was similarly targeting that election. Moscow also rigged elections elsewhere in post-war Eastern Europe. Thereafter, secretly planting stories became a common British and U.S. covert tactic during the Cold War.
As part of the British government’s controversial efforts to reduce Jewish immigration to the Mandate of Palestine, MI6 sabotaged boats carrying Jewish refugees there — including Holocaust survivors — and then forged documents claiming that a bogus pro-Soviet Arab group had undertaken the sabotage. The aim of MI6’s disinformation and sabotage operation, aptly named EMBARRASS, was to trick Soviet authorities into reducing Jewish immigration to Palestine from behind the Iron Curtain in order to prevent a Jewish-Arab civil war in Palestine.19
Meanwhile, as part of the CIA and MI6’s efforts to instigate a coup in Iran in 1953, the CIA planted articles and editorial cartoons in Iranian newspapers to discredit Prime Minister Mohammad Mosaddegh and bolster their preferred leader, Shah Mohammad Reza Pahlavi.20 The CIA used similar disinformation tactics in Latin America, both to topple the government of Jacobo Árbenz in Guatemala in 1954 and later to discredit Chile’s socialist president, Salvador Allende, in 1973. When Washington feared that Indonesian president Sukarno had pro-communist beliefs, a CIA team produced a sex film featuring a Sukarno lookalike in order to embarrass and blackmail him — an effort that failed. (Sukarno’s well-known libido also made him the target of an unsuccessful KGB plot to influence him by using a “honeytrap.”)21 During the Soviet invasion of Afghanistan in 1979, Soviet intelligence disseminated disinformation that Afghans “welcomed” Moscow’s “fraternal assistance.”22 Meanwhile, to discredit the invasion, the CIA planted stories of dubious validity in Afghan newspapers, carrying the Soviet military seal to make them appear official and announcing “invasion day celebrations” at Soviet embassies across the Middle East. One senior CIA officer concerned with Soviet affairs has recalled that the agency regularly disrupted Soviet tactical intelligence operations against the United States by planting false or misleading stories to discredit them.23
Although Cold War-era Kremlin disinformation bore similarities to Western disinformation efforts, it was different in scope, scale, and nature. British and U.S. intelligence used disinformation tactically to support covert actions, like those noted above. For the Kremlin, disinformation had its own strategic goal: to destabilize the society of its “Main Adversary,” the United States, and disrupt relations between it and other Western nations.24 It did so to defend its strategic interests against perceived U.S. and NATO aggression.25 Western intelligence services correspondingly tried to use secret, non-avowed propaganda to amplify divisions in Soviet and Eastern Bloc societies. However, because those societies were police states, lacking a free press and freedom of expression, Western intelligence services were unable to wage information warfare behind the Iron Curtain the way Soviet intelligence did in the West. There was thus a fundamental asymmetry between Soviet and Western disinformation during the Cold War. Additionally, Soviet foreign disinformation against Western countries was an extension of its domestic “political warfare” — Eastern Bloc populations were the principal targets of Soviet active measures.26 Western governments did not similarly use disinformation against their own populations in the Cold War. There is no evidence that U.S. intelligence ever orchestrated a public health disinformation campaign as reckless and dangerous as the KGB’s campaign regarding AIDS, discussed below. It would therefore be erroneous to draw an equivalence between Eastern and Western disinformation efforts during the Cold War.
Soviet intelligence used relative freedoms in Western societies — including freedom of speech, press, and association, and freedom to protest — against them, with the strategic aim of polluting public opinion and undermining democratic decision-making.27 Generally, the Soviet disinformation strategy against Western countries was not to create a false story — a lie — but instead to amplify existing grievances.28 A former senior Soviet Bloc disinformation officer and defector to the United States, Ladislav Bittman, explained that his strategy was never to create a “big lie” out of nothing, because it was unlikely to be believable. Instead, he collected intelligence about existing fractures in targeted Western societies, and then used disinformation to amplify them.29 In that respect, the best Soviet disinformation involved self-deception in Western societies, “playing upon the audience’s political and cultural biases, sending messages it wants to hear,” as one CIA analysis put it.30 Bittman recalled that he never tried to influence people with extreme political views, those whose beliefs were unshakable — either the “true believers” or the “atheists.” Instead, he targeted “the agnostic middle, whose beliefs could be swayed, and driven to extremes.” Other Soviet Bloc disinformation officers were less orthodox about grounding disinformation in facts. Hungarian defector Laszlo Szabo, who testified in the U.S. Congress about his work for the Hungarian intelligence service, the AVH, recalled his instructions when sent to its London residency in 1965:
He [the head of AVH disinformation] told me they preferred that an item for disinformation should have some real basis, be based on facts, but if I can produce a good idea that does not have any fact send it in any way. Truth is not important if the idea is good. Just send it in. They will make it look truthful, then get it published in some little paper somewhere. After that we Hungarians will hand it out, get it republished everywhere. Who can prove it is not true?31
Soviet active measures remain highly sensitive secrets in Putin’s Russia. The Russian Federation’s six-volume official history of foreign intelligence mentions them briefly — and misleadingly.32 However, secrets stolen from KGB archives and accounts from defectors reveal that Soviet disinformation reached unprecedented levels under Yuri Andropov, the longest-serving KGB chairman (1967 to 1982) and then Soviet leader (1982 to 1984). Andropov’s belief in the value of disinformation for Soviet statecraft stemmed from his experiences as Soviet ambassador to Hungary in 1956, where he observed its effective use in crushing the anti-Soviet uprising there.33 Under Andropov, KGB foreign intelligence — the First Chief Directorate — established its own department for carrying out political warfare called Service A (“A” for active measures).34 A 1972 top-secret KGB dictionary defined “disinformation data,” produced by Service A, as “especially prepared data, used for the creation, in the mind of the enemy, of incorrect or imaginary pictures of reality, on the basis of which the enemy would make decisions beneficial” to the Soviet Union.35 KGB officers stationed in residencies (rezidentura) overseas were best placed to suggest subjects for disinformation, which KGB Moscow headquarters (“the Center”), the Soviet Communist Party’s International Department, and the Central Committee’s Propaganda Department then approved.36
According to Bittman, the KGB was a “forgery factory,” producing bogus documents for planting in obscure publications in the hope that they would be picked up by witting and unwitting agents in Western media — “useful idiots,” in the KGB lexicon.37 The Kremlin used a constellation of front groups in the West, like the World Peace Council, to disseminate its disinformation.38 As an indication of the importance that Moscow attached to disinformation and other Soviet active measures, consider that, in the 1980s, it directed KGB political intelligence officers stationed overseas to spend about a quarter of their time on such activities. Between 1975 and 1985, Service A expanded from about 50 to 80 officers at the First Chief Directorate headquarters at Yasenevo, near Moscow, but this was only a tiny proportion of all Soviet officials devoted to active measures.39 In April 1982, Andropov decreed that it was the duty of all Soviet foreign intelligence officers, whatever their “line,” or department, to participate in active measures like disinformation. As a result, the entire KGB First Chief Directorate — approximately 15,000 officers in the 1980s — was engaged in active measures. In addition, other branches of the Soviet government, like the Soviet Communist Party’s International Department, the Central Committee’s Propaganda Department, and Soviet state media outlets like Novosti and Tass, also undertook active measures.40 In 1980, a conservative U.S. estimate put the cost of Soviet active measures at $3 billion.41 By contrast, the U.S. government’s body for countering Soviet disinformation, the Active Measures Working Group, had approximately 20 officials, who were drawn from various other departments. The State Department’s own Office of Active Measures Analysis and Response had just four to five full-time professional staff.42
Disinformation was one of the Soviet Union’s many active measures, which ranged from overt propaganda to covert acts of physical violence, up to and including assassinations. All of these measures were designed to influence world affairs in Moscow’s favor.43 For the Kremlin, active measures like disinformation were not a sideshow but were integral to Soviet foreign policy. They were the clandestine corollary to overt Soviet diplomacy.44 Sitting in the middle of this spectrum of Soviet political warfare were influence operations, which were separate from, but linked to, disinformation.45 An illustration of the relationship between disinformation and Soviet agents of influence was the KGB’s recruitment, in 1958, of a New York law student, Richard Flink. The KGB blackmailed him and then offered to finance his campaign as a Republican candidate for the New York state assembly in the hope of influencing U.S. politics and using him to disseminate Soviet disinformation. Flink, however, had reported the KGB’s recruitment attempt to the FBI at the outset.46
Soviet Disinformation Exploiting Domestic U.S. Race Relations
From the 1950s onwards, exploiting racial injustice in America became a stock in trade for Soviet disinformation. According to Bittman, Moscow instructed Eastern Bloc disinformation departments to make race in the United States a “top priority.” Amid race protests and the civil rights movement, they had no shortage of opportunities to inflame and amplify U.S. racial tensions. In August 1967, the Center approved an operation by the deputy head of Service A, Yuri Modin, to discredit the U.S. government on “the Negro issue.” It authorized Modin to organize KGB residencies in the United States to fabricate and distribute leaflets denouncing the U.S. government’s brutal suppression of “the Negro rights movement,” and to forge documents claiming that white supremacist groups were planning to assassinate leading members of the civil rights movement, like Martin Luther King Jr.47 The KGB’s campaign to inflame U.S. race relations was described by a senior KGB officer stationed in the United States, Oleg Kalugin, who worked undercover as a Radio Moscow correspondent in New York and Washington in the 1960s and early 1970s: “Our active measures campaign did not discriminate on the basis of race, creed, or color: we went after everybody.” Kalugin befriended an editor of a black activist newspaper in New York and went on several trips with him to Harlem. “I knew our propaganda was exaggerating the extent of racism in America,” Kalugin later recalled, “yet I also saw first-hand the blatant discrimination against blacks.”48
Soviet active measures involving U.S. race relations during the civil rights era took two forms: commissioning hate crimes and forging race-based hate letters. Both had the same strategic aim: to divide American society. Following violent riots in the summer of 1965 in a predominantly black district of Los Angeles, Andropov personally approved KGB active measures to exploit tensions in black communities in other major U.S. cities. Taking inspiration from a similar KGB operation in West Germany,49 its New York residency paid American agents to paint swastikas on synagogues in New York and desecrate Jewish cemeteries and then make anonymous phone calls to the police claiming the acts were the work of black activist groups.50 Soviet intelligence archives reveal that, on at least one occasion, the KGB ordered the use of an explosive to exacerbate racial tensions in New York. In July 1971, the Center instructed its New York residency to conduct an operation codenamed PANDORA: It was to plant a delayed-action explosive “in the Negro section of New York,” preferably targeting “one of the Negro colleges.” The Center’s instructions were that, after the explosion, the residency was to make anonymous calls to two or three black organizations claiming that the bomb had been planted by the hardline Jewish Defense League. Available evidence does not reveal PANDORA’s outcome. It is impossible to determine which, if any, of the attacks on black organizations that were blamed on the Jewish Defense League were, in fact, the KGB’s work.51
KGB forgeries designed to exacerbate U.S. racial tensions were also common. According to Kalugin, one of the KGB’s active measures in America “involved a nasty letter writing campaign against African diplomats at the United Nations.” The operation, conceived by the Center and approved by the Soviet Communist Party’s Central Committee itself, was carried out in November 1960. Residency officers like Kalugin typed hundreds of such anonymous letters and sent them to African missions at the United Nations. The letters, purportedly from white supremacists and “normal” Americans, contained virulent racist diatribes. The African diplomats publicized some of the letters as evidence that racism was rampant in the United States.52 Kalugin, who covered the United Nations for Radio Moscow, dutifully broadcast about this letter-writing campaign to discredit the United States, which he, but not his listeners, knew was his own handiwork. As Kalugin later recalled, he had no qualms about “stirring up as much trouble as possible for the US government” because it was “all part of the job.” As he put it: “I lost no sleep over such dirty tricks, figuring they were just another weapon in the cold war.”53
Case Study I: The KGB Targets the 1984 Olympic Games in Los Angeles
A revealing example of Service A’s efforts to exacerbate racial tensions in the United States in order to influence a specific outcome occurred at the 1984 Los Angeles Olympic Games. In the first week of July, the KGB’s Washington residency mailed letters purportedly from the Ku Klux Klan to the Olympic committees of 20 African and Asian countries. The letters stated that the games were “for whites only” and threatened that athletes from these countries would be lynched or shot if they attended the games. Moscow’s disinformation strategy was to disrupt the Olympics, bolster support for its boycott of the games, in retaliation for President Jimmy Carter’s boycott of the Moscow games four years earlier, and discredit the Reagan administration by claiming it was unable to guarantee the safety of athletes.54
The strategy was unsuccessful: None of the Olympic athletes who received the Soviet death threats withdrew from the games. As the games ended in August 1984, U.S. Attorney General William French Smith publicly denounced the letters as a major Soviet disinformation effort.55 Moscow predictably expressed indignation at Washington’s anti-Soviet “slanders.”
The U.S. intelligence community detected the true origin of the letters through forensic analysis and intelligence collection. Eleven days before the games began, the CIA’s Office of Soviet Analysis produced an assessment for senior officials in the State Department, Department of Defense, and National Security Council that explained why it believed the letters were Soviet forgeries. First, the letters, postmarked within a 30-minute drive of Washington, contained grammar and syntax errors suggesting they were written by Slavic-language speakers. They contained mistakes like placing a hyphen between the first two words of “Ku Klux Klan,” a construction not used by the Klan itself, and one that notably disappeared in subsequent Soviet media reporting about the letters. Second, Soviet media picked up the story suspiciously quickly. As the CIA assessment noted, “The typical Soviet active measures modus operandi is to disseminate disinformation abroad and subsequently replay it in the Soviet press. According to Soviet calculations, this replaying enhances the credibility of both the original disinformation and any later versions.” Third, the FBI is known to have kept the Klan under tight surveillance, through the use of human agents and technical collection, like telephone taps. According to the CIA, the FBI knew that the Klan had never targeted Asians nor addressed the issue of Third World countries participating in the Olympics.56
The U.S. intelligence community was also able to detect the KGB origins of the letters by using sources inside Soviet intelligence. The FBI and CIA are known to have run three sources inside Soviet intelligence at the time: Sergei Motorin, Valery Martynov, and Boris Yuzhin.57 Although a lack of publicly available documentation makes it impossible to prove conclusively, one of these agents likely provided U.S. intelligence with information about the letters. Martynov is the most likely candidate. He is known to have worked in the KGB’s Washington residency at the time. Codenamed GENTILE by the CIA and PIMENTA by the FBI, he met his agency and bureau handlers twice a month, usually in safe houses in the Virginia suburbs, where they paid him between $2,000 and $4,000 per month. The FBI and CIA helped Martynov’s KGB career progress by feeding disinformation of their own to the KGB to make him appear successful. According to the KGB’s Washington resident at the time, Victor Cherkashin, “Martynov gave the FBI a running commentary on the goings on in the rezidentura, including operations and targets, instructions from the centre and rumours from Yasenevo [KGB foreign intelligence headquarters, near Moscow].”58 A year after the Olympics, all three U.S. agents were betrayed by a high-level Soviet agent inside the CIA, Aldrich Ames. Motorin and Martynov were recalled to Moscow and executed, while Yuzhin was imprisoned and later emigrated to the United States.59 U.S. authorities shared background notes on the forgeries with partners, like the United Kingdom, and with international media outlets.60
Soviet Disinformation: U.S. Bioweapons
Another strain of Soviet disinformation, beginning in the 1950s, was to discredit the U.S. government by alleging that it was developing biological weapons capable of causing pandemics. After the Korean War, the KGB disseminated disinformation that the U.S. military had unleashed germ warfare in China and North Korea. At that time, the U.S. government had a biological weapons research program based at Fort Detrick, MD — the same facility currently at the forefront of pandemic biosecurity. The Truman administration considered deploying nuclear weapons during the Korean War, but subsequent Soviet claims that it used biological weapons in Korea were Kremlin disinformation, as Soviet records opened after the Cold War revealed. The U.S. government’s offensive biological weapons program was closed down by President Richard Nixon, who outlawed U.S. offensive biological research and signed the Biological Weapons Convention in 1972, which banned the development, production, and stockpiling of biological weapons of mass destruction.61 In reality, Soviet disinformation about U.S. bioweapons proved to be mere projection: In violation of the 1972 convention, the Soviet Union — not the United States — was illegally operating the world’s largest secret bioweapon program.62 Nevertheless, Fort Detrick offered a perfect opportunity for the Soviet Union to use disinformation to distract and discredit the U.S. government by suggesting that America’s bioweapon program secretly continued after Nixon closed it, a claim that, more recently, the North Korean regime has repeated.63
Case Study II: KGB Operation INFEKTION
In May 1983, Soviet intelligence again disseminated disinformation that Washington was using chemical warfare in Southeast Asia, what it called “yellow rain.”64 Another Soviet disinformation trope was that the U.S. military was developing an “ethnic bomb” capable of killing only non-white populations.65 However, Soviet disinformation about American bioweapons reached an entirely new level when a novel pathogen spread across the world in the 1980s: AIDS. In operation INFEKTION, also known as DENVER, the KGB spread disinformation that AIDS had been manufactured by the U.S. military at Fort Detrick — the same conspiracy theory that appeared during the COVID-19 pandemic.66 The KGB’s strategic aim in doing so was to discredit the U.S. government domestically and internationally, cause Americans to lose faith in public health, and destroy Washington’s ability to claim the moral high ground when it came to Soviet human rights abuses.
Andropov’s rise to power in the Kremlin coincided with the AIDS outbreak, a novel disease first detected in the United States in 1981 that then grew into a pandemic. It offered the KGB an unprecedented opportunity for making allegations about America’s secret use of bioweapons. Service A laid the groundwork in July 1983, when it manufactured a front-page article in an obscure newspaper in India called the Patriot, titled, “AIDS May Invade India: Mystery Disease Caused by U.S. Experiments.” The story cited a letter from an anonymous but “well-known American scientist and anthropologist” that stated that AIDS was a bioweapon created by the Department of Defense. “Now that these menacing experiments seem to have gone out of control,” the story read, “plans are being hatched to hastily transfer them from the U.S. to other countries, primarily developing nations where governments are pliable to Washington’s pressure and persuasion.” In reality, Service A had written the letter, and the Patriot was itself a Soviet front.67
Eighteen months later, as the AIDS pandemic continued to intensify, the mainstream Soviet press picked up the Patriot’s story, reporting it as fact — and carefully omitting that it arose from a letter to a Soviet mouthpiece. Within the KGB, the recycling and repackaging of disinformation was known as creating an “echo effect.” As a U.S. official and expert on Soviet disinformation later stated, it was like a “pin ball game”: “A fake story ran in country A and then was picked up as a legitimate story in country B and C.”68 Service A organized pseudoscientific support for the story by recruiting an East German, Russian-born retired biophysicist, Professor Jacob Segal, who produced a 52-page booklet claiming that the U.S. military had created AIDS by artificially synthesizing two natural viruses, VISNA and HTLV-1, which then “escaped” from Fort Detrick:
It is very easy using genetic technologies to unite two parts of completely independent viruses … but who would be interested in doing this? The military, of course … In 1977 a special top security lab … was set up … at the Pentagon’s central biological laboratory. One year after that … the first cases of AIDS occurred in the US, in New York City. How it occurred precisely at this moment and how the virus managed to get out of the secret, hush-hush laboratory is quite easy to understand. Everyone knows that prisoners are used for military experiments in the U.S. They are promised their freedom if they come out of the experiment alive.69
The KGB’s disinformation campaign alleging that the U.S. military had created AIDS through human experimentation fed on distrust among African Americans toward public health officials. In the “Tuskegee Study of Untreated Syphilis in the Negro Male,” exposed in 1972, the U.S. Public Health Service had experimented on 600 black men, who had been recruited on the promise of receiving free medical care, observing the progression of untreated syphilis while withholding effective treatment from them.70 The KGB’s AIDS disinformation efforts exploited these concerns, highlighting the prevalence of AIDS in Africa. In mid-1986, the “Segal Report” took off, becoming what may legitimately be called fake news in many Third World countries, particularly in Africa, where similar “proof” of the virus’ American origins was published in letters to newspapers that were likely Service A products disseminated by Soviet front groups.71 In October 1986, the conservative-leaning British tabloid Sunday Express made the KGB narrative its main front-page story. After that, it was picked up by an international news wire. By 1987, the story had received coverage in major media outlets in 80 countries and in 30 languages.72
U.S. Efforts to Counter Soviet Disinformation
The U.S. government’s efforts to combat the Soviet disinformation campaigns that targeted the 1984 Los Angeles Olympics and the AIDS pandemic reveal three principles for countering disinformation: first, the necessity of having a strategy; second, the value of intelligence collection; and third, the benefit of cooperation between government and non-government bodies and international partners.
Strategy
The U.S. government’s experience debunking Soviet disinformation shows the value of having a coherent, interagency strategy led by a single body that takes ownership of the counter-effort. Washington only developed such a strategy toward the end of the Cold War — and that strategy remains relevant to countering disinformation today.
In the early days of the Cold War, Washington’s response to Soviet disinformation was ad hoc. In 1961, the CIA’s Richard Helms, later the agency’s director, testified publicly before Congress to expose 32 Soviet counterfeits that the agency had identified over the preceding four years.73 The U.S. government’s public response to Soviet disinformation slowed in the 1970s amid U.S.-Soviet détente and public criticism of the U.S. intelligence community following the Church Committee’s exposure of its abuses in 1975. However, Soviet intelligence archives reveal that, under Andropov, KGB active measures against the “Main Adversary” continued unabated during the period of détente.74 The defection of a KGB active measures officer, Stanislav Levchenko, to the United States in 1979 alerted U.S. authorities to these activities. Levchenko noted that the Soviet embassy in Tokyo, from which he had defected, housed approximately 50 KGB officers, five of whom were devoted full time to active measures and ran a network of 25 agents in Japan.75
The Soviet invasion of Afghanistan in 1979 and President Ronald Reagan’s election in the United States, with his public anti-Sovietism, brought a reinvigoration of KGB disinformation targeting America. After Soviet efforts to discredit Carter and interfere in the 1980 presidential election, the U.S. government created a public policy strategy for countering Soviet disinformation for the first time. In 1981, Reagan’s administration established a dedicated interagency body to neutralize Soviet disinformation: the Active Measures Working Group. Part of the Reagan administration’s broader effort to counter Soviet propaganda — PROJECT TRUTH76 — the working group’s mission was to serve as the U.S. government’s disinformation response unit: to identify and provide timely responses to Soviet disinformation by publicly exposing it to domestic and foreign audiences.77
The working group was led at the time by the State Department (later it would be led by the U.S. Information Agency) and was composed of CIA, FBI, Department of Defense, and National Security Council officers.78 To disseminate its counter-messages, it worked closely with the Voice of America. Its methodology for debunking Soviet disinformation was threefold: Report, Analyze (or Attribute), and Publish, also known as “RAP.” As Lawrence Eagleburger, the State Department’s undersecretary of state for political affairs and a key supporter of the Active Measures Working Group, put it: “[A]ctive measures need to be countered by public exposure. They are infections that thrive only in darkness, and sunlight is the best antiseptic.”79 The group’s first head, Dennis Kux, a career U.S. foreign service officer, established its working methodology: Its efforts had to be based on facts, not broad ideological arguments. The group maintained high evidentiary standards in order to prove its case before the court of public opinion beyond reasonable doubt, similar to a criminal trial. As Kux recalled, “The fact that we made a credible presentation — not an ideological show — lent a certain amount of professionalism to the whole effort.” He added, “[P]eople don’t like to be duped. Not only were we telling them that they were being duped but we told them how.”80 One of the group’s later heads, Kathleen Bailey, recalled that they frequently knew that a false story was the Kremlin’s work but could not prove it sufficiently, and so refrained from trying to do so.81
The group’s strategy was successful at neutralizing Service A’s racist death threats at the Los Angeles Olympics. According to an FBI special agent who sat on the group, James Milburn, “[W]e really came together on the KKK forgeries in May of 1984. The group got a lot of good exposure in the national news stories. It was one of the few times what I did there was in the news.”82
The group was also successful in changing Soviet policy about AIDS disinformation. After reporting and analyzing the KGB’s AIDS conspiracy theory, in the spring of 1987 the working group published a report about it and publicly attributed it to the Kremlin. In October 1987, U.S. Secretary of State George Shultz had what he would later call a “sour and aggressive” meeting with Soviet leader Mikhail Gorbachev, in which Gorbachev produced the group’s report and angrily said it went against the spirit of glasnost. Shultz replied that when the Soviet Union stopped lying, Washington would stop exposing its lies. Two months later, during a meeting between Gorbachev and Reagan in Washington, Gorbachev pulled aside the head of the U.S. Information Agency (and member of the working group), Charles Wick, and said, “No more lying. No more disinformation; I don’t want politicians and bureaucrats creating all of these tensions anymore, disinformation and all that. It’s going to be a new day.”83 The Reagan administration accompanied its face-to-face confrontations about Soviet disinformation with a threat of sanctions, informing the Kremlin that U.S.-Soviet AIDS research, which Soviet scientists needed to address AIDS in their country, would be closed down unless the disinformation stopped. As one of the group’s senior officials, Herbert Romerstein, stated, “The effect was as if a faucet was turned off. Suddenly the stories practically disappeared.”84 Soon after, the Kremlin officially disowned the AIDS story.85 After the Soviet Union’s collapse, Yevgeny Primakov, the head of Russia’s new foreign intelligence service, the SVR, admitted that the AIDS story had been a Service A fabrication.86 Recently opened Eastern Bloc intelligence records have confirmed this.87
Intelligence Collection
The second principle from later Cold War U.S. efforts against Soviet disinformation is the value of intelligence collection. As Kux put it, “[Y]ou can’t do anything about disinformation without good information.”88 As noted above, the U.S. intelligence community detected the KGB death threats prior to the Olympics through forensic analysis, and also, it seems, from human sources inside Soviet intelligence. Published literature has not hitherto revealed how the U.S. intelligence community detected the KGB’s AIDS disinformation efforts. However, according to this author’s exclusive interviews, it did so through the open-source collection of Soviet broadcast and print media. The CIA’s Foreign Broadcast Information Service, which collected Soviet media, monitored the Patriot in India and then observed the mainstream Soviet press picking up its AIDS story and amplifying it, indicating that the Soviet disinformation strategy was at work.89 The work of the Foreign Broadcast Information Service, and its trans-Atlantic partner, the BBC Monitoring Service, during the Cold War reveals the value of using open-source intelligence in detecting foreign state threats, including disinformation.90
International Cooperation
A third principle that emerged from U.S. efforts to counter Soviet disinformation about the Los Angeles Olympics and the AIDS pandemic is the value of international cooperation. Because Soviet disinformation was transnational, spreading across Western countries and the developing world, the response had to be international. British Foreign Office records — only declassified in 2018 — reveal that the Active Measures Working Group worked closely with international partners to detect and expose Soviet disinformation. The group liaised with Britain’s Foreign Office, as well as with other NATO countries, to collect intelligence about recent developments in Soviet disinformation efforts, like the campaigns about the Olympics and the AIDS virus. The group held meetings with senior Foreign Office officials in which it shared its discoveries about both disinformation campaigns. The British officials, in turn, agreed to take steps to counter both conspiracy theories by alerting journalists and academics to them.91 Beginning in the spring of 1983, the group launched “truth squads” — road shows to visit and brief NATO countries — and held an annual meeting at its headquarters to share intelligence about disinformation.92
Applying History From the Cold War to the Digital Age
The conclusion of the Cold War did not mark the end of the KGB’s disinformation strategy. The post-Soviet Russian state continued to use the intelligence agency’s former playbook, updating and adapting KGB disinformation for the digital age. Russia’s foreign intelligence service, the SVR, has, from its establishment in 1991, proudly embraced its KGB past — and continues to do so today, regarding itself as the heir to its traditions. Continuity between Soviet and Russian intelligence centers on Putin himself, a former KGB officer. He graduated in 1985 from the KGB’s training school, the Andropov Institute, where recruits were trained in disinformation and other active measures.93 Before rising to power in Russia at the end of the 1990s, Putin led the country’s domestic security service, the FSB, and as president he has created a special place in Russian security and politics for the siloviki, “men of force”: people with backgrounds in the intelligence services and the military. The continuity between Soviet and Russian disinformation was apparent in the revealing, but overlooked, case of Sergei Tretyakov, a former KGB officer who, in the mid-1990s, became SVR head of station in New York. After defecting to the United States in 1999, Tretyakov disclosed that his work for the SVR had involved visiting the New York Public Library to use public computer terminals to disseminate disinformation about Russia to U.S. media outlets and politicians.94 His efforts directly foreshadowed Russia’s online disinformation efforts less than two decades later, during the 2016 U.S. election.
Russia’s foreign intelligence services, the SVR and GRU, have used disinformation to target the same issues within the United States as their Soviet predecessors: exploiting U.S. race relations and alleging that disease outbreaks are the result of U.S. bioweapons. The U.S. Senate’s bipartisan investigation into Russian interference in the 2016 election revealed that no other subject was targeted more by Russian intelligence disinformation than U.S. race relations, with Russian online troll farms, like the Internet Research Agency, creating bogus Facebook and Twitter accounts purportedly from Black Lives Matter to inflame race-related tensions, discredit Clinton’s candidacy, and promote Trump’s.95 Russia’s active measures campaign in 2016 involved cyber espionage, hacking into the emails of the Democratic National Committee and then leaking them in an influence operation to discredit Clinton. The tradecraft that Russian intelligence used, masquerading as genuine American people and political groups, is known as “astroturfing.” Race was a key theme. Echoing the KGB disinformation described above, one Internet Research Agency troll described his work in 2015: “When there were black people rioting in the U.S. we had to write that U.S. policy on the black community had failed, Obama’s administration couldn’t cope with the problem, the situation is getting tenser. The negroes are rising up.”96 Russian disinformation similarly targeted U.S. race relations during the race and police violence protests that erupted in the United States in 2020.97
Russian intelligence redeployed the KGB’s old conspiracy theory about the U.S. government’s culpability for disease outbreaks during the Ebola outbreak in West Africa in 2014 and then again during the COVID-19 pandemic.98 During the Cold War, the most successful targets of Soviet active measures were Third World countries, where KGB disinformation exploited existing anti-American sentiments.99 Echoing how Soviet intelligence targeted African countries with AIDS disinformation, Russian outlets have promoted bogus remedies for the novel coronavirus in Africa, with the intention of discrediting the United States. In KGB tradition, Kremlin online actors have disseminated disinformation that Washington is seeking to test COVID-19 vaccines on prisoners, immigrants, and black Americans.100 Amplifying domestic American anti-vaccine movements, they have also disseminated online conspiracy theories that wearing facemasks is harmful to health and ineffective for combating COVID-19.101
Continuity between Soviet and Russian disinformation raises two applied history questions for analysis: First, has today’s digital information ecosystem changed the threat that disinformation poses to Western democracies compared to during the Cold War, or is it a case of plus ça change, plus c’est la même chose? Second, are the U.S. policies that were used to counter Cold War disinformation applicable today, in an age when digital bytes have replaced analog broadsheets?
Has the Digital Revolution Changed the Threat of Disinformation?
Before assessing whether the digital revolution has changed the threat of disinformation, it is first necessary to understand the threat that Soviet disinformation posed to U.S. national security during the Cold War. Contrary to claims made by KGB officers, often intended to exaggerate their successes to superiors in Moscow, Soviet disinformation never actually posed a strategic threat to the United States during the Cold War. This is confirmed by intelligence archives from both sides of the conflict. As Soviet intelligence officers later explained, KGB active measures never managed to be more than a tactical nuisance for the U.S. government.102 The CIA came to similar conclusions in 1985, when two of its senior officers, Robert Gates, future director of central intelligence and secretary of defense, and John McLaughlin, later acting director of central intelligence, testified before Congress about Soviet active measures. Responding to questions from then-Sen. Joe Biden, Gates stated that, in general, Soviet active measures were an annoyance that sapped U.S. government resources but did not pose a Cold War strategic threat. However, in certain circumstances, Gates said, they could provide the Soviet Union with a competitive edge: “[I]n a close election or legislative battle, they could spell all the difference.”103
After the Cold War, however, the threat was perceived to have diminished. This led the U.S. government, and its closest intelligence partner, the United Kingdom, to lose decades of institutional knowledge about Soviet disinformation. To achieve budget cuts — the “peace dividend” — McLaughlin slashed staff in the CIA’s Office of Slavic and Eurasian Analysis by 42 percent in three years.104 For experts on Soviet Russia, the post-Cold War “end of history” was the end of many careers.105 The British intelligence community underwent similar staffing reductions. At the height of the Cold War, 70 percent of Britain’s signals intelligence service, GCHQ, was focused on the Soviet Union and the Eastern Bloc. By 2000, its effort on hostile state activity, including Russia, had decreased to 14 percent, and by 2006 it had reached just 4 percent. Its resources were overwhelmingly focused on counterterrorism.106 The experience of the Soviet Bloc defector Bittman illustrates the historical amnesia that befell U.S. public policy regarding Soviet active measures. The American university where he taught a course on disinformation, Boston University, became decreasingly interested in the topic, viewing it as a relic of the Cold War past. Bittman retired in 1996 and the course was not continued.107 By contrast, for Putin, who came to power in Russia at the end of the 1990s, Cold War strategies remained alive. Consequently, while Russia deployed and updated Cold War tradecraft, the U.S. government dangerously failed to apply its own history. Like a person who has lost his or her memory, the U.S. government has had to rediscover its past experiences with disinformation. This historical amnesia has been made worse by political leaders on both sides of the Atlantic who have willfully neglected to task their intelligence services with investigating Russian interference in elections because they feared that doing so would delegitimize their electoral victories.108
The digital era has transformed the potential for hostile states to use disinformation to “spell all the difference,” as Gates warned active measures could do. Leveraging digital tools, Russia’s intelligence services have spread disinformation more effectively than their Soviet predecessors. Today’s interconnected digital world makes it quicker, cheaper, and easier than ever before to use disinformation as a strategic weapon to deceive, confuse, and undermine democracies. During the Cold War, it was a slow, laborious, and complex process for Soviet intelligence to do so, usually involving forged documents, like the Olympic Games death threat letters and the AIDS disinformation campaign. While the KGB previously planted stories and used physical front groups and agents to propagate disinformation, today all that states like Russia need are social media accounts and online operatives (i.e., “trolls”).
Cyber interconnectivity has changed not only how states disseminate disinformation but also how it is consumed, as individuals increasingly look online for information from news sites, search engines, and social media. Social media companies have disrupted mainstream news media outlets to such an extent that about half of adults in the United States now often receive news from the former rather than from the latter.109 Traditionally, news outlets played a major role in publicly countering disinformation by fact-checking reporting before publication as part of their standard editorial process. Research shows that a consequence of inhabiting an online, algorithmically driven information environment is an increase in tribalism and confirmation bias — listening to and recycling information that conforms to one’s existing views and enhances one’s sense of belonging.110 According to Bittman, who devoted his life to studying disinformation after he defected to America, the advent of social media was a godsend for anyone in his former occupation — a “professional manipulator of public opinions,” as he described himself.111 To put things in perspective, a false story disseminated online is likely to reach 1,500 people six times more quickly than a factual story.112
To answer the first applied history question posed above, then: Evidence suggests that digital interconnectivity has increased the strategic threat of state disinformation seeking to cast doubt on truth, facts, and science. There have always been those who are willing to believe in conspiracy theories about events like disease outbreaks. A lack of data makes it impossible to determine whether contemporary open democratic societies believe conspiracy theories more readily than in the past. However, it is clear that the digital revolution was a watershed moment in the dissemination of disinformation. There is compelling evidence that this has caused a public erosion of truth, facts, and science.113 The national security threat that contemporary digital disinformation poses is measured in its corrosive effects on democracies. Social media has provided megaphones to people who reject fundamental aspects of post-Enlightenment thinking: facts, rational thought, truth, falsifiable theories, and science itself. The more that American society is pulled into a post-fact, post-truth virtual reality, the more the Kremlin considers itself to be strategically winning.114
It is also impossible to determine whether social media has increased the ability of disinformation to influence specific outcomes, succeeding where the KGB failed with its 1984 Olympic Games disinformation efforts. Scholars are divided with regard to the impact that Russian disinformation had on the 2016 U.S. presidential election.115 Facebook believes that 126 million Americans may have viewed Russian-backed disinformation on its site before that election (Trump won the Electoral College on the strength of approximately 80,000 votes across key swing states).116 However, without data about the impact of disinformation on voters, which would need to be obtained from polling, it is impossible to determine whether disinformation decided the outcome of the election. There is no evidence, at least none that is publicly available, that such polling was undertaken — although Russia’s intelligence services may have done so, to judge the effectiveness of their 2016 influence campaign.
The U.S. government came to similar conclusions when trying to measure the effects of Soviet disinformation during the Cold War. One of the Active Measures Working Group’s key supporters, Eagleburger, noted that because Moscow did not use active measures independently, but rather in coordination with overt parts of Soviet foreign policy, it was impossible to conduct controlled experiments separating them from those other variables. The fact that the Kremlin’s active measures targeted existing fault lines in a given society made it even more difficult to isolate their effects. However, as Eagleburger noted, the Kremlin evidently believed that its disinformation was worth the effort, given the resources it devoted to it.117 The same can be said today.
Do Cold War Countermeasures Apply in the Digital Era?
The second applied history question is whether U.S. policies devised to counter Soviet disinformation are still applicable in the current digital information landscape. Using May and Neustadt’s suggested methodology, an applied history approach that studies similarities and differences between past and present, the answer to that question is “yes” — up to a point. Understanding the history of successful past U.S. policies to counter disinformation is necessary, but not sufficient, for democracies to counter disinformation today. Truth and facts exist and still matter in open states today. The methodology that the U.S. working group devised during the Cold War is still applicable to online environments: reporting (detecting) disinformation, attributing it, and publishing (exposing) it. The British government, for example, created a toolkit to counter online disinformation in 2015 called RESIST, later launching RESIST 2 to counter COVID-19 disinformation.118 In essence, RESIST follows the same principles as the Active Measures Working Group: instead of Report, Analyze, Publish, it advocates “Recognize,” “Early Warning,” “Situational Insight,” “Impact Analysis,” “Strategic Communication,” and “Track Outcomes.”
One of the key lessons that still applies from the U.S. government’s Cold War efforts to counter disinformation through bodies like the Active Measures Working Group is the importance of having a unified strategy. In practice, that means having someone, or something, take ownership of the effort, as discussed above. At present, Western democracies lack a strategy to counter disinformation. Their public policy responses are fragmented, at best. A 2020 report by the U.K. Parliament’s Intelligence and Security Committee, for example, shows that countering Russian disinformation is such a “hot potato” for its government that no one department or agency wants responsibility for it.119
This raises a deeper question: Should governments or technology giants like Facebook (Meta), Google, and Twitter, whose platforms are used to deliver disinformation, be responsible for countering it? Any strategy will obviously need to include these companies. After all, they have unique, proprietary technical capabilities that allow them to detect disinformation that is being spread on their platforms. Since Russia’s interference in the 2016 U.S. election, each company has improved its efforts to detect and expose disinformation, but such efforts are still insufficient, amounting to a game of whack-a-mole.120 Efforts by individual social media companies to counter disinformation, whether by de-platforming accounts, removing content, or attaching provenance “health warnings” to false information, work in some circumstances, but not in others. Evidence suggests that health warnings can, counter-intuitively, make false information more attractive and increase its dissemination online.121 A 2022 study by Britain’s Royal Society, for example, argued against removing content as a solution to scientific misinformation because democracies benefit from open and honest discussion of scientific claims.122 De-platforming risks the content simply moving elsewhere online, with potentially new, smaller, and even less trustworthy platforms emerging. A fundamental problem with relying on social media companies to police themselves is that their business models are driven by advertising views, which incentivizes (dis)information dissemination, i.e., “click-bait.” The U.S. legal framework in which these companies currently operate does not necessarily match up with the U.S. government’s public interest in reducing the dissemination and consumption of disinformation.123 The financial incentives of click-bait stories place a premium on outrage (irrespective of accuracy) over balanced, more accurate, but less sensational stories. Aldous Huxley was correct when he noted, “An unexciting truth may be eclipsed by a thrilling falsehood.”124 Social media companies are necessary, but not sufficient, for safeguarding the online public sphere.
Nevertheless, someone needs to take ownership of countering disinformation. If it is not going to be the government, then it is difficult to see who else would take the helm.
The U.S. government should consider establishing a dedicated coordinating body, following the Active Measures Working Group’s example. Such a body could be modeled after the multi-agency center established after 9/11, the National Counterterrorism Center. A corresponding body would be responsible for implementing America’s counter-disinformation strategy. Because of the unique collection and analytical capabilities of U.S. intelligence agencies concerning foreign state disinformation — from espionage, technical collection, and open-source intelligence — that body would work with the intelligence community (falling under the Office of the Director of National Intelligence). It would also work with other U.S. government departments, like the State Department, and with the executive and legislative branches. Crucially, the new center would also act as a bridge to technology firms and news media. Unlike the Active Measures Working Group, it would need to be resourced with full-time, dedicated staff. It would be essential for such a new body to be public facing: Its primary customer should be public audiences, not secret corners of the U.S. government.
Another key lesson from the U.S. government’s history of countering disinformation in the Cold War that still applies today is that, to be effective, government and non-government efforts must be combined in order to increase the public’s digital literacy. As Eagleburger put it regarding the Active Measures Working Group, while counterintelligence was important for detecting disinformation, its antidote was a broad, persistent approach: “Sudden enthusiasms to expose their dirty tricks followed by troughs of apathy are not the answer. A reasoned and effective response must be persistent and continuing, and this is best achieved by a growing public understanding and emerging consensus of the significance of these activities.”125 As the working group recognized, “public understanding” about Soviet disinformation meant creating an informed citizenry alert to hostile foreign states manipulating facts and casting doubt on them.
The same principles apply today. In the contemporary context, however, creating an informed citizenry means promoting digital literacy — teaching the public to detect online disinformation by separating facts from fiction.126 The best way to promote digital literacy in democracies is through a new public-private partnership that combines government and private sector initiatives. This will require a patient, long-term — even generational — effort. Quick fixes, like removing online content, are insufficient.127 The Royal Society’s report on online scientific misinformation is unambiguous in its recommendation: “The UK should invest in lifelong, nationwide, information literacy initiatives.”128
Today, exposing disinformation requires an appreciation of online behavioral science. Public psychology research shows, for example, that publishing factual information is more effective for countering disinformation than highlighting false information.129 The United Nations has adopted this approach in launching “Verified,” an initiative to “counter the spread of COVID-19 misinformation by sharing fact-based advice.”130 Recent scholarship has demonstrated that purveyors of disinformation use narratives to gain traction among audiences. That suggests that establishing truth-based counter-narratives may be a way of fighting back against online disinformation.131 Likewise, research suggests that “pre-bunking” — preemptively refuting a story — offers a useful way of building resistance to fake news.
Here we get to the heart of the matter in applying the history of Cold War counter-disinformation policies today. As in the past, disinformation relies on the public’s willingness to believe objectively falsifiable information. This willingness is higher in times of social upheaval, war, and pandemics. Conspiracy theories offer people seductive ways to “explain” apparently unexplainable events and find meaning in the otherwise meaningless. This is as true today as it was in the past. Similarities and continuities between past and present conspiracy theorists should not, however, blind us to the major differences that exist today concerning technology, polarization within the U.S. government and society, and the nature of the public sphere since the Cold War’s end.
Technological advances have magnified the speed at which disinformation spreads. This means that lies can now spread across the world before truth can get its boots on, to borrow a phrase attributed to Mark Twain. In the past, it was possible to formulate a gradual response to Soviet disinformation involving documentary fraud. Today, the algorithmic amplification of information and disinformation requires rapid, near real-time responses. This means that U.S. policies devised in the pre-digital, print-press era cannot simply be grafted onto the online world. The domestic political environment in the United States has also recently undergone seismic changes, polarizing toward the left and right extremes, a shift that social media has itself facilitated. Polarization has occurred at breakneck speed over the past five years, first under Trump’s presidency and now under Biden’s. The U.S. Congress is as polarized as American society, divided into political tribes that increasingly view each other not as political opponents with whom they can respectfully disagree, but as “enemies.”
In addition, the advent of the digital revolution has changed the nature of the public sphere. At the time of our case studies above, the guardians of the critical public sphere who determined what counted as respectable news were government press departments and people like newspaper editors and television anchors. Social media has caused that relationship between government and journalists to splinter, which has in turn fueled political polarization and created a crisis of authority in journalism. One way to rehabilitate the embattled public sphere is through digital literacy. This reinforces the importance of public-private cooperation in countering disinformation. Countries like Finland and Norway have, for example, embedded digital literacy in school curriculums, which is showing promising results.132 Whether this would be applicable in the United States, which has lower levels of public trust in government, remains to be seen.133 Things do not bode well, however. In May 2022, Biden’s adviser on disinformation, Nina Jankowicz, resigned from the Department of Homeland Security after just two months, having become the sustained target of critics on the right who labeled her work an Orwellian “Department of Truth.”134
Conclusion
The KGB’s AIDS disinformation effort reveals the lasting damage that disinformation about a pandemic can cause. Despite efforts by U.S. medical experts, like Anthony Fauci, to debunk the Soviet AIDS conspiracy theory, it continued to infect public thinking long after Moscow disowned it.135 In 2005, polling showed that 50 percent of African Americans still believed that AIDS was a man-made virus.136 Denial of the causal link between HIV and AIDS, which KGB disinformation contributed to, has had horrific consequences in African countries. In South Africa, President Thabo Mbeki denied a connection between HIV and AIDS.137 By the time he left office in 2008, there had been almost 330,000 preventable HIV deaths there.138 Meanwhile, HIV-AIDS denialism may have contributed to Russia’s own terrible AIDS epidemic.139 If this example from the pre-digital era, when dissemination of disinformation was relatively controllable, is any indication, then online disinformation about the novel coronavirus is likely to cause even broader damage. Early research shows that hundreds of people have died as a result of false COVID-19 “cures” promoted on social media.140 At the time of writing, polling indicates that one quarter of U.S. adults see at least some truth in the theory that “powerful people” intentionally planned the COVID-19 outbreak.141
Today’s digital revolution constitutes the greatest shift in the history of the transmission of ideas since the development of the printing press in the 15th century, which sparked generations of social dislocation in Europe and the New World, contributing to civil wars in both places. It is too early to understand the evolving impact of social media on our societies. Thanks to the internet, never before has so much information been available to so many people. Ninety percent of the world’s data was created in the two years before 2018.142 That unprecedented and exponential data explosion has transformed the dissemination of information as well as its consumption. This makes it tempting to assume that the history of disinformation from earlier, pre-digital periods is not applicable today. However, as I have demonstrated above, such an assumption is mistaken. In fact, using the applied history methodology proposed by May and Neustadt, we can see that there is much to glean from the history of Cold War disinformation. At the same time, there are major differences in the technological, social, and political landscape in the United States today compared to the (relatively recent) Cold War past. This should caution us against applying history lock, stock, and barrel to a contemporary context.
Soviet disinformation campaigns that targeted domestic racial injustice and disease outbreaks are direct precedents for disinformation efforts by Russia, and other hostile states, that target Western democracies today. Russia weaponizes disinformation for the same strategic aims as its Soviet predecessor: to discredit the U.S. government and disrupt its effective democratic functioning. By targeting divisive “wedge” issues like race and a government’s purported creation of a pandemic, a hostile foreign state can sow public distrust in government institutions and damage the democratic process.143 Studying the history of Soviet disinformation is important for comprehending the contemporary aims of Russian disinformation and the national security threat that it poses. It is also important for understanding the disinformation efforts of the Chinese government, which, whether consciously or not, has deployed the KGB’s old conspiracy theory that the U.S. government is responsible for a pandemic — in this case, COVID-19.144 The strategic aim of Russian and Chinese disinformation about COVID-19 is to discredit democracy and highlight the comparative “security” that an authoritarian society can provide. The more doubt these countries can cast on events — by questioning official COVID-19 mortality rates, proposing that a cabal of powerful satanic cannibals and pedophiles operates in the United States, or suggesting that 5G cellular networks caused the novel coronavirus outbreak — the weaker they believe Western societies will be.145
U.S. efforts to debunk the KGB’s Olympic death threat letters and its AIDS bioweapon conspiracy provide important lessons for countering present-day Russian disinformation about U.S. racial injustice and COVID-19. As in the Cold War, disinformation today essentially depends on human nature, in particular psychological tendencies to believe stories that confirm existing prejudices. This has not changed over time, despite the immense changes in the technological landscape, as Rid’s study, Active Measures, shows.146 Thus, the policies devised for detecting and countering past disinformation are still relevant. Foreign intelligence collection and assessment remain essential: Forensic analysis of forgeries and well-placed human sources can detect and attribute foreign state disinformation. Meanwhile, the experience of the U.S. government’s Active Measures Working Group shows the value of having a dedicated body to coordinate government efforts to debunk disinformation. Its strategy of reporting, assessing, and publishing was effective in the above two case studies of Soviet disinformation and remains applicable today. However, the true lesson of U.S. efforts to counter later Cold War Soviet disinformation is that doing so requires broader efforts that go beyond the work of a single government department. Because disinformation targets existing grievances that, by definition, usually constitute broad trends in a society, countering it has to be correspondingly broad.
The digital age, with its global interconnections, is an era in which digital information warfare — with attacks in the form of bytes rather than bombs — has become a pervasive national security concern. The U.K. Parliament’s Intelligence and Security Committee has predicted that disinformation will be the new normal for open democratic governments and societies in the 21st century.147 The advent of immersive social media technologies, like virtual reality and “deep fakes,” will increase opportunities for what we may call disruptive synthetic mind viruses. If the use of disinformation is indeed becoming the “new normal,” democracies need to understand what is required to counter it successfully. It is unrealistic to suppose that open societies, with freedom of speech and political association, can — or should want to — eradicate disinformation. It is inherent in democratic societies that people are free to express themselves, even if that means allowing the dissemination of objectively false information. Instead of eliminating disinformation, a successful counter-strategy for democracies should be based on inoculating against its effects. We need to learn to live with it, minimize its impact, and prevent it from metastasizing, like someone living with an inoperable disease. It is also worth appreciating that online platforms are not perfect delivery tools for disinformation. Once a shot is fired over social media, it is difficult, if not impossible, to control its spread, achieve its intended aims, and avoid unintended consequences, including retaliation by targeted states. Cyberspace is not a perfect weapon.148
In an account published in 1988 about his time in the KGB’s Service A, Levchenko provided a primer on how Western democracies should defend against Kremlin disinformation. He advised them to be informed about Soviet disinformation, understand its nature, and be vigilant against its corrosive effects in the news they consume. The most effective countermeasure, Levchenko argued, was for Western citizens to read international newspapers widely: “Train yourself to read the front pages of your newspapers and read them every day. Read news magazines.” By reading widely, Western citizens can establish facts for themselves. Levchenko continued:
Many Americans say that they are dissatisfied with the various media. They say they don’t trust the newspapers. ‘Newspapers are too liberal’, some say, while others say that ‘reporters manage the news’. Whether these charges are true is irrelevant. The press in democratic countries is there to serve the readership. That is why my best advice for the future of the free world is to read, inform yourselves, make your interpretations, and draw your own conclusions.149
Today, Levchenko’s advice translates into promoting digital literacy, which will require a broad, generational effort involving all of society: having schools teach students to distinguish facts from fake news, for example. Separating facts from fiction also applies to the subject of disinformation itself. As Eagleburger said of Soviet disinformation: “Active measures are not magic, nor does the world dance to a covert Moscow tune. Moscow does not dominate the political process of the western democracies.”150 The same is true today. As in the Cold War, the grievances that Russian disinformation targets in U.S. society are American-made, not foreign.
Calder Walton is assistant director of the Applied History Project at the Kennedy School of Government, Harvard University, where he is also research director at its Intelligence Project. His new book, Spies: The Epic Intelligence War Between East and West, will be published in 2023.
Acknowledgements: I would like to thank Sean Power and Nidal Morrison, research assistants at the Belfer Center’s Intelligence Project, for their help with this article.
Image: www.kremlin.ru (CC BY 4.0)