Recent events have confirmed that the cyber realm can be used to disrupt democracies as surely as it can destabilize dictatorships. Weaponization of information and its malicious dissemination through social media push citizens into polarized echo chambers and pull at the social fabric of a country. Present technologies, enhanced by current and upcoming Artificial Intelligence (AI) capabilities, could greatly exacerbate disinformation and other cyber threats to democracy.

Robert Kagan, in his recent Washington Post essay “The strongmen strike back,” insightfully states:

What we used to regard as the inevitable progress toward democracy, driven by economics and science, is being turned on its head. In non-liberal societies, economics and science are leading toward the perfection of dictatorship.

Kagan refers to state surveillance and social media control and manipulation as technological mechanisms of social control. Here’s how it works and why AI could become the biggest threat to democracy.



We live in an age of instant communication. We can stream nearby events to the world in the time it takes to point our phones at them or tap in our impressions of them, and we receive information as it happens in the same way. Unfortunately, this access to instant information also strips away the mechanisms that at least attempt to ensure that the information we receive is accurate and – to the extent that is humanly possible – unbiased.

The rise of digital communication has allowed individuals with malicious intent to weaponize information. Some cyber criminals seek to use our own personally identifiable information against us in attempts to defraud us, or to defraud others under the cover of our names. Others steal information from secure servers to subject victims to public humiliation or blackmail. Elaborate networks disseminate information that is heavily biased or completely false with the intent of manipulating public opinion to better suit the geopolitical goals of the groups or nations behind them.

Yet this weaponization of information could potentially get much, much worse. Present technologies, enhanced by current and upcoming AI capabilities, could greatly exacerbate the weaponization of information, enabling more potent tools for malicious information acquisition, information falsification and information dissemination – disinformation.

Disinformation Today Without AI


Old-style propaganda networks that grew out of the Cold War have advanced significantly with the help of digital technologies. Russian-controlled print and broadcast media have been joined by an explosion of self-proclaimed grassroots internet news outlets covertly controlled by Russian officials. Troll armies have supported Russian interests by muddying understanding of separatist actions in Ukraine and Georgia and of human rights violations by the Russian-aligned Syrian government. Troll armies also work to inflame divisions within Western nations whose geopolitical goals conflict with Russian goals. The work of cyber criminals and troll armies to influence electorates in the U.S., French and German elections and in the UK Brexit referendum has been well documented.

It would be a mistake, though, to focus solely on Russia in the matter of disinformation or “fake news”. Although it is the leading player in that arena, it is by no means alone in it.

Freedom House has identified no fewer than 30 countries as engaging in state-sponsored disinformation focused either on their own citizens or on the citizens of countries that they consider to be their adversaries. Whether these nations – and other non-nation groups – are following the disinformation blueprint Russia has laid out or developing their own, one thing is clear: information is increasingly being weaponized.


Weaponization of information acquisition


Attackers can have one of two motivations: acquiring information for direct financial gain, or acquiring it to further geopolitical or other non-monetary goals. The former is more familiar to us: an attacker breaches computer networks to acquire sensitive information. They could target customer identity data that can be used to conduct fraudulent transactions or commit identity fraud. Alternatively, the attack could involve industrial espionage – proprietary information that could give the attacker’s company a competitive advantage or be used to blackmail the competition. The monetary advantage in these attacks is obvious.

The latter motivation – seeking information that can be used to further non-monetary goals – is not as obvious outside the realm of traditional espionage, but it is becoming increasingly prevalent. One needs to look only as far as the recent reports of meddling in the elections of other nations and widespread leaks of classified documents to see this motivation at work. The acquisition and release of confidential information that has been strategically selected, often taken out of context and structured to undermine an adversary, has dominated the news in examples such as WikiLeaks and the hacking of Democratic Party officials during the U.S. election.

The fact that these released documents were presumed to be accurate because of their sources made them devastating weapons against U.S. government and Democratic Party targets. As author Anurag Shourie said:

A half-truth is even more dangerous than a lie. A lie, you can detect at some stage, but half a truth is sure to mislead you for long.

Targeting the weakest link

Both motivations – direct financial advantage and use in disinformation campaigns – depend largely on the same strategy: deceiving someone into performing an action to their disadvantage. In the cyber world, we refer to such techniques as social engineering. Phishing, just one social engineering technique, can be linked to 91 percent of cyberattacks that resulted in a data breach, according to a Cofense report.

That’s because social engineering targets the weakest link in cybersecurity defenses: humans. Whereas digital systems operate on predictable, unchanging programming that can be securely encrypted, humans bring too many variables – in their thoughts, emotions and actions – to ever allow for airtight security. It takes only one individual whose gullibility, doubt or fear can be exploited for an attacker to compromise even the most robust cybersecurity protections.

Highly targeted spear-phishing

Even more successful than phishing is its highly targeted form, spear-phishing, used when attackers seek specific targets. Spear-phishing has been greatly aided by the wealth of personal information available on social media, but until now it has required a tremendous amount of skilled labor and time. Attackers must first identify high-value targets and then perform extensive research on those targets’ social and professional networks before they can craft messages with enough detail to “ring true” for the recipient.

Such attacks are used when specific high-value information is desired. They have been used to obtain proprietary information from the top executives of a competitor, to gather material for blackmail, and to acquire documents for public release to discredit a political opponent.

Malicious information acquisition aided by AI


AI is already making it easier for attackers to gather and collate detailed information about targeted individuals from their social and professional networks. This greatly reduces the skill and time attackers need to mount more sophisticated attacks.

AI can differentiate between, and even learn to emulate, individuals’ writing styles. It can learn to assess the value of other potential targets in the communication networks of individuals whose devices have been compromised. With long-term monitoring of compromised accounts and machine learning applied to what those accounts reveal, such highly targeted spear-phishing could become increasingly common among attackers seeking access to specific information.
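To give a sense of how little machinery the writing-style capability requires, here is a minimal, hypothetical sketch of authorship attribution using character n-grams and scikit-learn. The messages and authors are invented, and a real system would need far more data; the same technique can also be turned around defensively, to flag messages that do not match a purported sender’s style.

```python
# Minimal sketch: distinguishing authors by writing style (stylometry).
# The messages and author names below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: (message, author) pairs harvested from public posts.
messages = [
    ("Per my last note, pls see attached figures before EOD.", "alice"),
    ("Thanks!! will circle back tomorrow :)", "bob"),
    ("Kindly review the attached figures at your earliest convenience.", "alice"),
    ("lol no worries, sending it over now", "bob"),
]
texts, authors = zip(*messages)

# Character n-grams capture punctuation, casing and spelling habits,
# which are strong stylistic signals even on short texts.
model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, authors)

# Attribute a new message to the most stylistically similar known author.
print(model.predict(["pls see attached, will circle back EOD"])[0])
```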

Today’s sophisticated attacks could grow even more sophisticated. Whereas Russian hackers targeted Clinton and Podesta in the 2016 U.S. presidential election, future attacks could be far more expansive in their targeting. AI could give attackers the opportunity to target far more functionaries and even the families of top officials, exploiting a multitude of holes that could yield a wealth of damaging information. Most worryingly, AI could keep learning and adapting, making each subsequent attack more effective.

Cybersecurity role in addressing weaponization of information acquisition

While the discussion here is focused on the threats to democracy, this is yet another scenario where corporate cyber protection interests align with society’s. We, cybersecurity professionals, have to start proactively addressing these threats for both reasons – protecting information under our control, and supporting the defense of our society. A few approaches to consider:

  • Phishing awareness and testing have to evolve. We cannot be satisfied with simply reducing our organization’s click-through rates on basic phishing emails. We have to start building the level of critical thinking and awareness that can resist well-crafted spear-phishing communications. For example, teach users that if they didn’t expect to be sent an attachment or a link, they should verify with the purported sender whether they actually sent that email.
  • In addition to phishing awareness, we should consider providing elicitation awareness training that can help our users detect elicitation and social engineering attacks through any channel.
  • Expand the scope of monitoring in existing Data Loss Prevention tools – from the usual cases of monitoring for exfiltration of credit card or personally identifiable information to monitoring for any data exfiltration that is uncommon, even if it’s highly unstructured and appears low risk.
  • Some AI-aided malware, like the proof-of-concept DeepLocker [PDF], focuses on using AI capabilities to remain stealthy until the right target is identified. Other malware uses AI techniques to learn how to emulate the user of the infected machine. The problem with AI is that it is difficult to understand how it derives its decisions and which actions they trigger; the program logic is no longer visible by analyzing the code. This only underlines the need to expand our defensive analytical techniques. We could detect AI-based procedures and libraries and question whether the detected executable is supposed to have embedded AI. We have to focus more on behavioral analysis approaches to detect anomalies – a minimal sketch of that idea follows this list. Traditional file integrity monitoring and whitelisting can be used to address these threats as well.
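As a rough illustration of the behavioral approach in the last two bullets, here is a minimal sketch of flagging uncommon data exfiltration by comparing a user’s outbound volume to their own baseline. The log source, field names and threshold are assumptions for illustration; a real DLP or UEBA deployment would use richer features and tuned baselines.

```python
# Minimal sketch of behavioral anomaly detection for uncommon data exfiltration.
# The history, user names and z-score threshold are illustrative assumptions.
import statistics

# Hypothetical daily outbound-bytes history per user (e.g., from proxy logs).
history = {
    "alice": [120_000, 98_000, 150_000, 110_000, 130_000],
    "bob":   [300_000, 280_000, 310_000, 295_000, 305_000],
}

def is_anomalous(user: str, todays_bytes: int, z_threshold: float = 3.0) -> bool:
    """Flag a user's outbound volume if it deviates far from their own baseline."""
    baseline = history.get(user, [])
    if len(baseline) < 5:
        return False  # not enough history to judge
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline) or 1.0
    return (todays_bytes - mean) / stdev > z_threshold

# Example: an unusual spike for alice is flagged for analyst review.
print(is_anomalous("alice", 2_500_000))  # True
print(is_anomalous("bob", 315_000))      # False
```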

Weaponization of information creation


Information is being weaponized not only when it is acquired and used against those who possess it, but also when it is created for dispersal. This is where the concept of disinformation takes shape.

Russia is considered the most advanced player in this area. It has long treated human cognition as one aspect of its concept of “information space” and as a concern for information security. Whereas the West focused only on digital information in its cyber efforts, Russia extended the definition of cyber weapons to information weapons: “…Information weapons also include means that implement technologies of zombification and psycholinguistic programming.” Russia has long engaged in shaping the news its citizens receive to protect against societal-level social engineering. I wrote more about that here: Human Zombification as an Information Security Threat – Differences in Information Security Concepts Between Euro-Atlantic and China-Russia.

Once Russia accepted the idea that it could achieve its geopolitical objectives through a sophisticated program of subversion, destabilization and disinformation rather than military operations, it was only a matter of time before it expanded and enhanced its techniques to spread that influence beyond its own borders. This was significantly aided by the adoption of technological platforms that gave Russia greater access to the citizens of other countries. In doing so, it has created a blueprint that other nations and unaligned groups are starting to follow – one that will likely spread further as AI advancements reduce the barriers to replicating it.

Russia’s disinformation blueprint


In the blueprint, disseminators spread the message they want believed through a number of different outlets.

  1. First are the messages released by the government itself – official government claims.
  2. Second are the messages released through state-run news agencies. These concur with official releases but give the impression of being impartial interpretations of events by virtue of their connection with a news agency.
  3. Third is their widespread network of officially unaffiliated but state-sponsored news agencies. These are usually specialized news organizations located in geopolitical hot spots. They claim to offer up-close information on events there. Although officially unaffiliated, they usually are traceable back to high-ranking Russian government officials. These outlets are spread around the world, wherever it suits Russian interests to provide an alternative to what Western news agencies report.
  4. More spectacularly, Russia figured out and weaponized the fact that people spread messages they want to believe are true, even if they are highly implausible. Through a number of different outlets, conspiratorially minded disinformation is distributed to an assemblage of individuals and organizations that serve as a fourth outlet. These are extremists on both ends of the political spectrum who can be counted on to disseminate any report – no matter how wild or implausible – that puts the targets of their hostility in a bad light. Although not necessarily directly connected to Russian state officials, they are easily used to muddy the waters with spectacular, eye-grabbing rumors designed to inflame the indignation of their followers.
  5. The final outlet is the army of social media trolls and bots, whose purpose is to give the impression of broad acceptance of disinformation reports, to keep discussion of those reports active online and to direct those with whom they interact to additional disinformation on the same subject.

Large troll armies promoting Russian objectives in other nations’ events have been positively identified as playing roles in 2016 U.S. election discussions on social media, in the UK Brexit referendum and in the affairs of France, Germany, Ukraine, Georgia and the Baltic states. Their proximity to Ukraine enabled them to effectively control the narrative that Western nations received of the conflict until Russia had achieved its goals of annexing Crimea and occupying eastern Ukraine. Then, once Western news organizations got feet on the ground in Ukraine, Russia managed to negate those organizations’ efforts to see for themselves what was happening by spreading rumors of shocking atrocities by Ukrainian troops – rumors that succeeded in luring those organizations away from the actual action.

Competing “truths”


Russia also has practiced the creation of multiple contradictory reports from multiple sources to overwhelm the reporting of events that it finds embarrassing. This is where the final three sources mentioned above prove useful. They can easily offer multiple narratives that make an issue appear too complex to arrive at a definitive conclusion.

Such was the case with the downing of Malaysia Airlines Flight 17 in 2014 over Ukraine. A social media report from Russian sources that proudly claimed the downing of a Ukrainian warplane was quickly deleted once the disappearance of Flight 17 became known. What followed was an onslaught of reports from Russian-related sources that ranged from simple shifting of the blame to Ukrainian anti-aircraft guns to bizarre conspiracy theories. By the time the investigation into the crash was complete, the official finding that the plane had been shot down by a Russian-made missile was hopelessly buried in the muddle of competing – and often spectacularly eye-grabbing – theories that had been promoted. The strategy was to so confuse public opinion that the truth would be seen as nothing more than one of the many theories – no more true than any of the others. And it worked.


This use of multiple conflicting theories to muddle controversial issues has become common in disinformation strategies, particularly on issues that are highly divisive. By presenting conflicting accounts designed to ring true to the biases of groups that are already polarized, disseminators of disinformation can easily increase the polarization between the groups and keep them focused on blaming each other, thus ensuring that they don’t work together to resolve their disagreements. This effectively distracts all groups from getting in the way of the disseminator’s attainment of its political or social goals.

This is the strategy behind the troll armies and automated Facebook and Twitter accounts that have been traced to Russian efforts to influence the U.S. election and the UK Brexit referendum. By feeding distinct demographic groups an avalanche of sensationalized reports about institutions or groups within their society that are “different from them,” they have successfully inflamed the passions of different ends of the political spectrum against each other and intensified doubts toward the institutions on which those groups jointly depend.

It is important to recognize that this disinformation and planned confusion is directed not only toward one political ideology or demographic, but toward any group that is like-minded enough to pit against others – blacks vs. whites, liberals vs. conservatives, Christians vs. Muslims vs. Jews, etc. The goal is not to support one group over another, but to keep all groups preoccupied with each other so that they overlook the disseminator’s efforts to achieve its goals. Goals that could include weakening the society of their perceived enemy.

As mentioned earlier, the practice of disinformation is by no means limited to Russian interests. Many other state actors have adopted whatever parts of this blueprint can advance their interests. Such techniques are even being used by independent individuals who find the creation and dissemination of sensationalized information and unconfirmed conspiracy theories to be an avenue to online fame and wealth. Their strategy is the same as that of state actors – play to the existing biases of a demographic group to intensify them. The goal for these individuals may be financial rather than geopolitical, but the strategy is the same.

Malicious information creation aided by AI


Basic AI tools have already been used to create news items or to post automated comments on articles. So far, though, these have been rather formulaic and often easily identified as fakes. With advancements in machine learning, the quality of writing that AI can achieve should improve to the point where it becomes much harder to identify automated content.

This would make it less expensive to generate content for mass disinformation campaigns. Instead of managing huge networks of controlled news outlets and paying large troll armies to create and disseminate information, much of the content creation could be done by bots. With the scalability of AI, the only limit to the amount of content such bots could generate is the amount of computing power dedicated to the campaign.
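As a rough illustration of why the marginal cost of such content approaches zero, here is a minimal sketch that generates news-style text with a small open-source model via the Hugging Face transformers library. The model choice and prompt are illustrative assumptions; real campaigns would use far more capable models and human post-editing, which is exactly what makes volume the cheap part of the operation.

```python
# Minimal sketch: generating formulaic "news-style" text with an open-source model.
# Model choice and prompt are illustrative assumptions; output quality will vary.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Local officials confirmed today that"
drafts = generator(prompt, max_length=60, num_return_sequences=3, do_sample=True)

# Each draft could be lightly edited (or not) and pushed out through bot accounts.
for d in drafts:
    print(d["generated_text"], "\n---")
```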

Those who wish to follow Russia’s blueprint in disinformation but lack the resources such a major player can put into it could thus engage in disinformation campaigns on a smaller scale. That means we could see such campaigns operating not only on global or national stages, but also on local ones. Much smaller players could avail themselves of the ability to support their causes or vilify their perceived adversaries through disinformation campaigns.


AI also makes it easier for disseminators to distance themselves from the content they create. Many of the accounts that were used to dispense information to citizens of countries whose national elections or referendums Russian actors were seeking to influence were traced back to a troll factory in St. Petersburg. With automated content creation tools, oversight of campaigns could require a much less centralized setup. Automated tools could be physically scattered across many locations and overseen remotely with far less difficulty than a widespread group of human operators would demand. This would make it much harder for investigators to connect all these sources back to their roots.

Faked evidence

One other element of the weaponization of information creation is important at this point: the use of faked documents, images and videos to support disinformation campaigns. This has become a common tool among disinformation purveyors. Documents are often forged. Images are digitally altered to associate individuals with extremist groups of which they are not part. One example of this falsification is, again, the downing of Malaysia Airlines Flight 17.

As Russian news sources sought to blame the Ukrainian military for the tragedy, a supposed satellite image of the moment of attack showed a Ukrainian warplane as the attacker. That forgery, however, was quickly debunked because it showed the warplane flying at an altitude far above what it was capable of reaching.

Advancements in AI systems’ ability to accurately recognize the content of images – including facial recognition – have enabled them to master the intricacies of the human face well enough to synthesize facial images that are virtually impossible to distinguish from actual photos. The same is true for speech recognition and language comprehension.

AI systems are increasingly able to synthesize human voices and form sentences that not only have the vocal characteristics of the purported speaker but also follow that person’s speech patterns. The potential misuse of these kinds of technologies to create highly convincing fakes of targeted individuals is obvious.

Software already exists – and in some cases can be downloaded as mobile apps – that can create such highly convincing fakes. One such app can put the head of one person on the body of another to make it appear that the targeted person was somewhere they never were or was doing something they never did. Such tools can make it much easier for large, well-funded actors to produce falsified images, video or audio that are nearly perfect, to “prove” the misdeeds of someone they wish to discredit.

The potential increase in disinformation purveyors because of AI


The fact that such tools are increasingly available and effective opens the door for even smaller actors who lack extensive resources to use them for disinformation efforts. Fakes will become harder to recognize, and if their use occurs on a widespread scale, it will overwhelm reputable news outlets’ ability to identify and debunk those items that are fakes.

No longer would such fakes be confined to targets in national or international spotlights. Local officials, prominent business leaders, activists, particular retailers or competitors, and others in the public eye could become targets of highly convincing social media campaigns mounted by smaller groups with an axe to grind – aided by AI-enhanced information acquisition strategies, AI-falsified evidence and AI-powered social media bots to amplify the message.

It is not beyond the realm of possibility that tech-savvy individuals could apply those strategies to lone-wolf attacks on institutions or to cyberbullying of individuals. Under such scenarios, it would become increasingly difficult – if not impossible – to determine what information we receive is accurate and what isn’t – as well as what to believe about anyone or anything.

Worldwide news outlets are not blind to the use of disinformation strategies and faked “proofs”. It takes far less time, though, to create and disseminate a dozen fake news reports than it does to thoroughly fact-check any one of them. And when faced with thousands of reports spreading rapidly across the social-digital realm, fact-checkers can easily be overwhelmed by the flood.

This is exacerbated when you consider that the target of false news reports is often the credibility of the traditional news outlets themselves. Such attacks on the credibility of traditional news outlets raise doubts in the eyes of many individuals about whether they should trust anything those outlets report. This makes it even harder for traditional news outlets to effectively debunk false reports, and easier for individuals to accept any report that echoes their existing biases – no matter where it originates.

Applications to spear-phishing

AI-enhanced abilities in information creation could even increase the effectiveness of spear-phishing and other social engineering efforts. With AI-aided systems able to analyze and emulate targets’ writing styles, attacks could advance well beyond the generic type of spear-phishing messages now typical.

By analyzing past email exchanges with the target’s contacts, such systems would not have to rely on generic, standalone messages to entice contacts to take the requested action. Instead, they could tailor messages to appear to be part of an existing exchange, making the request far more likely to be acted on.

With the improvements in information acquisition offered by AI, confidential, damaging and accurate information about individuals will also become increasingly accessible to those with malicious intent. As the ability to target specific individuals grows more effective, it will be far more feasible for blackmailers or extortionists to work their way through their targets’ personal and professional networks to uncover damaging information they can use against them. Combining this with an increased ability to produce fakes opens the door for attackers to plant fake “evidence” on targets’ devices for future investigators to find.

Cybersecurity role in addressing weaponization of information creation

The threats above could also be repurposed for financial gain targeting our organizations. Fake information targeted at professionals in our organizations could drive pump-and-dump scams, influence M&A activity, sabotage our business decision making to benefit the competition, and more. We, cybersecurity practitioners, should consider playing a role in helping our users filter the information. We cannot and should not become content arbiters; however, there are certain steps we could consider taking:

  • Through our existing content filtering proxies, we could display warnings to users when they access fringe, hyper-biased news sites. Those sites could be identified through a number of means, ranging from ratings in peer-reviewed journalistic research to audience measures such as Alexa, ComScore, Hitwise, Nielsen, etc.
  • Similarly, we could provide a certain level of warning based on the age of the domain, the ad networks it serves, and similar attributes that can be assessed automatically (a minimal sketch of the domain-age check follows this list).
  • We should include in the rest of our cyber awareness efforts information about fake content and fake social network profiles, with examples, and highlight the need to verify any decision-making information through multiple reliable sources.
  • Develop, engage or buy services or tools that can be used to analyze and detect deepfakes, and make them available to our users.
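To make the domain-age bullet concrete, here is a minimal, hypothetical sketch using the python-whois package. Registrars return creation dates in inconsistent formats and the 180-day threshold is an arbitrary assumption, so treat this as illustrative policy logic rather than production proxy code.

```python
# Minimal sketch: warn on very young domains, one signal among many for fringe sites.
# Assumes the python-whois package (pip install python-whois); field formats vary by registrar.
from datetime import datetime
from typing import Optional

import whois

def domain_age_days(domain: str) -> Optional[int]:
    """Return the domain's age in days, or None if the creation date is unavailable."""
    record = whois.whois(domain)
    created = record.creation_date
    if isinstance(created, list):  # some registrars return multiple creation dates
        created = created[0]
    if created is None:
        return None
    return (datetime.now() - created).days

def should_warn(domain: str, min_age_days: int = 180) -> bool:
    """Flag domains younger than ~6 months (hypothetical threshold) for a warning banner."""
    age = domain_age_days(domain)
    return age is not None and age < min_age_days

print(should_warn("example.com"))  # long-established domain, so no warning
```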

Weaponization of information distribution


Clearly, information acquisition and information creation in disinformation campaigns are greatly enhanced by AI. It naturally follows that AI also has the potential to greatly enhance the distribution side of disinformation campaigns. We’ve already considered the potential for widely distributed bots to replace the massive troll armies that have been identified.

With the omnipresent nature of social media in our society, AI gives small groups or even tech-savvy lone wolves the ability to disseminate disinformation without third-party filtering, fact-checking or editorial judgement of its accuracy or relevance, according to Hunt Allcott and Matthew Gentzkow. They point out:

An individual user with no track record or reputation could in some cases reach as many readers as Fox News, CNN or the New York Times.

The use of bots in disinformation networks is growing to the point where it becomes hard to distinguish bots from humans in online interactions. The helpful individual on social media who seems to have an inexhaustible supply of shocking news articles on the subject you are discussing may not be human at all, but merely a well-trained bot.

That phenomenon of bots competing to influence and polarize humans on social media is likely to multiply as advancements in AI tools and the availability of open-source AI code make these capabilities available not only to the major players already using them, but also to a much larger number of players that want to sway the opinions of others on an ever-widening range of subjects.

As polarization on issues increases and traditional media’s credibility erodes, the massive infrastructures of news agencies controlled by major disinformation disseminators will become less essential to give fake news items credibility. As those with hardened biases trust only those sources that echo their biases, it will become easier for tech-savvy small players, or even lone wolves, to capture an audience for their views.

Cybersecurity role in addressing weaponization of information distribution

As with the other concerns, we have to focus on building awareness of these tactics among our users. Just understanding that these tactics are now a reality could help.

  • Through education we have to start changing the established perception that the more an item is shared, the more likely it’s true. In reality, increasingly the opposite is true. Items with high shock value that often go viral are more likely to be fake.
  • Build awareness of how to evaluate the online support and acceptance of a report in question. If most of the source social media posts link to accounts that are new, that appear to be on the fringe, or that seem to have early access to far too many shocking reports, it is likely that the impression of broad acceptance is manufactured (a simple heuristic sketch follows this list).
  • Eventually the industry will build publicly available tools that use AI to analyze the reliability of news items and their disseminators and promoters. We should continue watching the industry and evaluate any such solutions that could help our users.
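To make the “manufactured acceptance” bullet concrete, here is a minimal, hypothetical sketch of such heuristics – account age, posting rate and how often an account is first to share shocking items. The fields and thresholds are assumptions, not a vetted detection model.

```python
# Minimal sketch: scoring how "manufactured" the support for a report looks.
# Field names and thresholds are illustrative assumptions, not a vetted model.
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int          # how old the account is
    posts_per_day: float   # average posting rate
    scoop_ratio: float     # fraction of posts that are "first to share" shocking items

def suspicion_score(acct: Account) -> float:
    """Crude 0..3 score; higher means the account looks more like part of an amplification network."""
    score = 0.0
    if acct.age_days < 90:
        score += 1.0
    if acct.posts_per_day > 50:
        score += 1.0
    if acct.scoop_ratio > 0.5:
        score += 1.0
    return score

def support_looks_manufactured(sharers: list, threshold: float = 2.0) -> bool:
    """If most accounts amplifying a report look suspicious, treat the 'broad acceptance' as manufactured."""
    suspicious = sum(1 for a in sharers if suspicion_score(a) >= threshold)
    return suspicious > len(sharers) / 2

sharers = [Account(30, 120, 0.8), Account(15, 200, 0.9), Account(2000, 3, 0.1)]
print(support_looks_manufactured(sharers))  # True: two of three amplifiers look automated
```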

AI Disinformation Takeaways

With disinformation likely to increase, it will become harder to discern truth from disinformation. Bot-driven disinformation will be able to spread faster than human fact-checkers can handle. Even messages we perceive as coming from a trusted source may be nothing but a sophisticated reproduction of that person’s style created by a piece of malware. Under such scenarios, discerning what is real could be incredibly challenging.

In the past, we saw sophisticated spamming tools become readily available for use by anyone with a basic level of tech-savviness. That has greatly expanded the ranks of spammers. As advanced AI open source code becomes more available and easier to use, we likely will see the same migration of today’s disinformation tools into the hands of a broader spectrum of attackers. This would increase polarization of society and prevent groups from working on the problems that underlie that polarization.

Those in the political center have little reason to use AI-enhanced disinformation tools to push their agendas. So the use of these tools will fall to those on the extremes of society and to those who hold passionate grudges against individuals, organizations or institutions. Nobody and nothing will be safe unless we focus on defending against these threats with AI-enabled cybersecurity tools that better identify and neutralize disinformation efforts toward information acquisition, creation and dissemination. The alternative to such defensive initiatives would be chaos.
