
What do cybersecurity and artificial intelligence (AI) have to do with each other? More than you may think. 

Due to advances in both hardware and software, cybersecurity and AI have become, and will continue to be, increasingly interconnected. This convergence raises significant security concerns. How should policymakers think about the threats and capabilities that emerge from the growing overlap between cybersecurity and AI?

The intersection between AI and cybersecurity falls into three broad buckets, or categories of concern:

  1. Cybersecurity concerns with AI systems, i.e., hacking AI; 
  2. Cyber operations carried out in support of AI development and deployment; and 
  3. The impact of AI on cybersecurity dynamics, both offensive and defensive.

1. Cybersecurity Concerns with AI Systems 

Cyber operations, whether espionage or attack, instruct computer systems or machines to operate in ways their designers did not intend. For example, in 2007, the Idaho National Laboratory carried out a demonstration called Aurora in which 30 lines of code damaged an electric generator beyond repair. 

Bad actors can use the same tactics against AI systems: find weaknesses in models and exploit them, or, as former Wilson Center Global Fellow Ben Buchanan highlights in a recent Georgetown Center for Security and Emerging Technology (CSET) report, compromise “the data upon which they depend.” Two methods of data poisoning could include modifying “input data while in transit or while stored on servers” or changing “what data the system sees during training and therefore change how it behaves.” 
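
To make the second method concrete, here is a minimal, hypothetical sketch of training-time poisoning via label flipping, using a generic scikit-learn classifier. The dataset, model, and 25 percent poisoning rate are all illustrative assumptions, not details from the CSET report.

```python
# Minimal sketch of training-time data poisoning (label flipping).
# The dataset, model, and poisoning rate are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf_clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# An attacker who can change "what data the system sees during training"
# flips the labels of a fraction of the training examples.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.25 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]  # flip 25% of the labels

clf_poisoned = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("clean accuracy:   ", clf_clean.score(X_test, y_test))
print("poisoned accuracy:", clf_poisoned.score(X_test, y_test))
```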

Examples of weaknesses in AI systems are extensive, such as a vacuum cleaner that ejects collected dust back onto a space it just cleaned so it can collect even more dust, or a racing boat in a digital game that loops in place to collect points instead of pursuing the main objective of winning the race. While these examples may seem trivial, the same techniques can be used to far more serious effect. Remember, AI systems have been deployed in support of a wide range of functions, including aircraft collision avoidance, healthcare, loan and credit scoring, and facial recognition. 
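
The boat example is an instance of what researchers call specification gaming: the system maximizes the reward it is given rather than the outcome its designers wanted. A toy sketch, with entirely invented numbers, shows how a reward that pays for point pickups rather than for finishing makes looping in place the "optimal" strategy.

```python
# Toy illustration of specification gaming: the reward pays for point
# pickups, not for winning, so circling a pickup loop beats racing to
# the finish. All names and numbers are invented for this sketch.

def episode_reward(strategy: str, steps: int = 100) -> int:
    """Score an episode under a (misspecified) reward function."""
    if strategy == "finish_race":
        return 50           # one-time bonus for crossing the finish line
    if strategy == "loop_for_points":
        return steps * 3    # respawning pickups pay 3 points every step
    raise ValueError(f"unknown strategy: {strategy}")

# A reward-maximizing agent "prefers" the loop, even though it never wins.
print("finish the race:", episode_reward("finish_race"))      # -> 50
print("loop in place:  ", episode_reward("loop_for_points"))  # -> 300
```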

2. Cyber Operations in Support of AI Development and Deployment

In addition to concerns stemming from hacking AI systems themselves, data gathered from cyber operations can potentially be funneled into a country’s domestic AI efforts. This outcome is particularly concerning when the national AI program in question is that of a geostrategic rival and peer competitor. 

Why does data matter? AI is underpinned by three mutually reinforcing pillars: data, algorithms, and computational power. Data plays a vital role in the training, validation, and testing of AI systems. As a result, and as Kiersten Todt (former executive director of the Obama administration's bipartisan commission on cybersecurity) has warned, “diversity of data, quality of data aggregation, [and] accumulation of data” will be critical to the success of China’s AI ambitions. Notably, while China has access to an abundance of domestic data, algorithms tailored for use cases within China and based on local data may well have limited broader global utility. 
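
To make those three roles concrete, here is a minimal, hypothetical sketch of how a dataset is partitioned: training data fits the model, validation data selects its settings, and test data estimates real-world performance. The dataset, model, and split ratios are invented for illustration.

```python
# Sketch of the three roles data plays in building an AI system:
# training fits the model, validation tunes it, testing estimates
# real-world performance. Dataset and model are illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=20, random_state=1)

# 60% train / 20% validation / 20% test
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=1)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=1)

# Validation data picks the best hyperparameter; test data is touched last.
best = max(
    (LogisticRegression(C=c, max_iter=1000).fit(X_train, y_train) for c in (0.01, 0.1, 1.0)),
    key=lambda m: m.score(X_val, y_val),
)
print("held-out test accuracy:", best.score(X_test, y_test))
```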

This is where cyber operations enter the conversation. Cyber espionage operations are uniquely suited to gathering large amounts of data across a diversity of targets. Take, for example, the recent Microsoft Exchange Server hack, the latest in a long list of Chinese-sponsored hacks. It keenly demonstrated the scale and scope of access afforded by cyber espionage operations, with somewhere between 30,000 and 60,000 organizations compromised. The operation is notable for another reason as well. Earlier this year, NPR first reported that part of the motivation behind the hack may have been to collect data to power the growth and capabilities of China’s AI systems. 

3. The Impact of AI on Cybersecurity Dynamics

The intersection between cybersecurity and AI does not end with the implications of offensive cyber operations for AI. AI also has significant implications for cybersecurity, both as an offensive and a defensive endeavor. 

Using AI to improve cybersecurity is a double-edged sword: it aids both the defenders of systems and the malicious actors seeking to compromise them. Hackers want to find and exploit vulnerabilities in software to compromise victims’ systems, and defenders want to find the same vulnerabilities first to protect their users. AI can assist in both lines of effort. 

Defense

Cyber defense is only getting harder, and much of the current defensive landscape is reactive in nature. Threat hunting, a cybersecurity best practice, is time-intensive, leverages signatures or indicators of compromise, and relies heavily on human intervention. At the same time, IT systems are more geographically dispersed than ever, thanks to the cost savings of cloud computing, and the tactics, techniques, and procedures (TTPs) used by attackers constantly evolve, becoming harder to detect in part because of that very evolution. 

Given this landscape, AI can enable better cyber defenses. How? While the cybersecurity industry has used automation for some time, newer machine learning systems, a subset of AI, can learn and improve on the job. Some systems help identify hackers inside networks earlier based on suspicious behaviors, reducing “dwell” time on a network (which could be especially helpful in tackling the scale of today’s ransomware problem). Other AI models can be tailored to identify potential vulnerabilities and patch them at a larger scale than humans ever could. Companies such as Darktrace and Palo Alto Networks promote such capabilities and suggest they will be critical for achieving “zero trust” across an Internet built to be open and accessible. 
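
As a rough illustration of behavior-based detection, the following minimal sketch trains an unsupervised anomaly detector on a baseline of simulated normal login behavior and flags events that deviate from it. The features, numbers, and thresholds are all assumptions made for this example, not details of any vendor's product.

```python
# Minimal sketch of ML-assisted threat hunting: an unsupervised anomaly
# detector flags logins that deviate from a learned baseline of normal
# behavior. Features and parameters are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline: [login hour, MB transferred, failed attempts] for normal users.
normal = np.column_stack([
    rng.normal(13, 2, 5000),    # logins cluster around business hours
    rng.normal(50, 15, 5000),   # modest data transfer
    rng.poisson(0.2, 5000),     # failed logins are rare
])

detector = IsolationForest(contamination=0.01, random_state=42).fit(normal)

# New events: one routine login, and one 3 a.m. session moving 900 MB
# after 7 failed attempts -- the kind of behavior worth flagging early.
events = np.array([[14, 55, 0], [3, 900, 7]])
print(detector.predict(events))  # 1 = looks normal, -1 = anomalous
```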

Offense

AI can also enable better offensive cyber operations, including (as previously discussed) hacking AI itself. Just as AI can help network defenders identify and patch vulnerabilities at a faster rate and larger scale, adversaries can deploy the same models to identify the most valuable “zero days,” or previously unknown software vulnerabilities. But the potential benefits to the attacker extend far beyond identifying vulnerabilities to be exploited. Greater automation can occur across an operation’s lifecycle, including target selection, vulnerability discovery and exploitation, command and control (C2), lateral movement and privilege escalation, and action on objective. Notably, this array of opportunities is of particular utility to sophisticated hackers faced with complex, highly capable targets. Moreover, the growing sophistication of criminal groups, which often employ ransomware, might mean that these techniques are leveraged by non-state actors sooner rather than later.
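
To make the vulnerability discovery step concrete, here is a minimal sketch of automated bug-finding via random mutation fuzzing, a technique long used by attackers and defenders alike. The parse_header function is a deliberately flawed toy target invented for this example; nothing here models a real product or operation.

```python
# Minimal sketch of automated vulnerability discovery via random
# mutation fuzzing. parse_header() is a deliberately buggy toy target.
import random

def parse_header(data: bytes) -> int:
    """Toy parser with a planted flaw: it divides by the length byte."""
    if len(data) < 5 or data[:4] != b"HDR1":
        raise ValueError("bad magic")  # graceful rejection, not a bug
    length = data[4]
    payload = data[5:5 + length]
    return sum(payload) // length      # crashes when the length byte is 0

seed = bytearray(b"HDR1\x08payload!")
random.seed(0)
for i in range(100_000):
    mutated = bytearray(seed)
    pos = random.randrange(len(mutated))
    mutated[pos] = random.randrange(256)  # flip one random byte
    try:
        parse_header(bytes(mutated))
    except ValueError:
        pass                              # expected rejection is fine
    except Exception as exc:              # anything else is a finding
        print(f"crash after {i} tries: {exc!r} on input {bytes(mutated)!r}")
        break
```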

Importantly, cybersecurity is a two-way street. Just as defenders increasingly leverage AI for their own purposes, hackers will test and improve their malware to make it more resistant to the AI-based security tools they may encounter. As the IEEE points out, “hackers learn from existing AI tools to develop more advanced attacks.” Given this interplay, the jury is still out on whether AI will ultimately provide the greater advantage to the hackers or the defenders of digital systems. 
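
As a rough sketch of that cat-and-mouse dynamic, the hypothetical snippet below nudges a flagged sample's features until a simple linear detector misclassifies it. The model and data are invented stand-ins, far simpler than any production security tool.

```python
# Sketch of evasion: nudge a flagged sample's features until a trained
# detector misclassifies it. The model and features are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=10, random_state=3)
clf = LogisticRegression(max_iter=1000).fit(X, y)  # class 1 = "malicious"

x = X[y == 1][0].copy()              # a sample the detector should flag
step = -0.1 * np.sign(clf.coef_[0])  # push against the decision weights
for _ in range(100):
    if clf.predict([x])[0] == 0:     # detector now calls it benign
        break
    x += step

print("evaded after perturbation:", clf.predict([x])[0] == 0)
```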
