The Growing Cyber–Artificial Intelligence Nexus

News Analysis 

While the debate about artificial intelligence (AI) and augmented reality rages, virtual terrorists, those who operate primarily on the dark web, are getting smarter and devising new ways to benefit from both, creating methods to operate autonomously in this brave new world.

Malware is being designed with adaptive, success-based learning to improve the accuracy and efficacy of cyberattacks. The coming generation of malware will be situation-aware, meaning that it will understand the environment it is in and make calculated decisions about what to do next, behaving like a human attacker: performing reconnaissance, identifying targets, choosing methods of attack, and intelligently evading detection.

This next generation of malware uses code that is a precursor to AI, replacing traditional “if not this, then that” code logic with more complex decision-making trees. Autonomous malware operates much like branch-prediction technology, which is designed to guess which branch of a decision tree a transaction will take before it is executed. A branch predictor keeps track of whether or not a branch is taken; when it next encounters a conditional action it has seen before, it makes a prediction, and over time the software becomes more efficient.
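To make the analogy concrete, here is a minimal sketch of a classic two-bit saturating-counter branch predictor, the textbook mechanism the paragraph above alludes to. The class name and the branch pattern are invented for illustration; this is a toy model, not anyone’s actual malware or hardware.

```python
class TwoBitPredictor:
    """Textbook two-bit saturating-counter branch predictor.

    States 0-1 predict "not taken"; states 2-3 predict "taken".
    Each observed outcome nudges the counter toward that outcome,
    so the predictor gradually locks onto repeating behavior.
    """

    def __init__(self):
        self.state = 2  # start in "weakly taken"

    def predict(self) -> bool:
        return self.state >= 2

    def update(self, taken: bool) -> None:
        self.state = min(3, self.state + 1) if taken else max(0, self.state - 1)


predictor = TwoBitPredictor()
pattern = [True, True, True, False] * 4  # a loop branch: taken 3x, then exits
correct = 0
for outcome in pattern:
    if predictor.predict() == outcome:
        correct += 1
    predictor.update(outcome)  # learn from what actually happened
print(f"accuracy: {correct}/{len(pattern)}")  # well above chance once warmed up
```

Autonomous malware applies the same idea to its own decision tree: it remembers which choices succeeded in a given environment and favors them the next time.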

Autonomous malware is guided by the collection and analysis of “offensive intelligence,” such as types of devices deployed in a network to segment traffic flow, applications being used, transaction details, or the time of day transactions occur. The longer a threat can persist inside a host, the more adept it becomes at operating independently, blending into its environment, selecting tools based on the platform it is targeting and, eventually, taking counter-measures based on the security tools in place.

Cross-platform autonomous malware designed to operate on and between a variety of mobile devices is also being developed. Such cross-platform “transformers” include a variety of exploit and payload tools that can operate across different environments. This evolving variant of autonomous malware includes a learning component that gathers offensive intelligence about where it has been deployed (including the platform on which it has been loaded), then selects, assembles, and executes an attack against its target.

Transformer malware is being used to target cross-platform applications with the goal of infecting and spreading across multiple platforms, thereby expanding the threat surface and making detection and resolution more difficult. Once a vulnerable target has been identified, these tools can also cause code failure and then exploit that vulnerability to inject code, collect data, and persist undetected.

Autonomous malware can have a devastating effect on our connected devices and, as a result, our ability to perform daily tasks we usually take for granted. Fighting against it will require highly integrated and intelligent security technologies that can see across platforms, correlate threat intelligence, and automatically synchronize a coordinated response.

A new cyber era has begun, with AI and machines ready to fight battles, and sophisticated cyber attackers and criminal groups seizing any opportunity to take advantage of systemic vulnerabilities. The battlefield is corporate and government networks, and the prize is control of the organization, whether or not the organization knows it. The stakes are extremely high.

The target of this behind-the-scenes battle is not just stolen information or the ability to embarrass a rival, but the capacity to alter IT systems and install kill switches that can be activated at will. These attackers are sophisticated; they use previously unknown code, and they silently breach boundary defenses without being seen or heard.

Conventional approaches to cybersecurity rely on being able to understand the nature of a threat in advance, but that approach is fundamentally flawed, since threats are constantly evolving, laws and policies are outdated, and the threat from insiders is growing.

In the current cyber era, threats easily bypass legacy defense tools. A new black hat machine intelligence need only enter an organization’s IT systems once. From that point of entry, it listens, learns how to behave, blends in, and appears as authentic as the original devices, servers, and users. These automated attackers can hide their malicious actions among ordinary daily system tasks, at times with devastating results.

Today’s attacks can be so swift and severe that it is impossible for humans to react quickly enough to stop them. Based on advances in self-learning, it is, however, possible for machines to rapidly uncover emerging threats and deploy appropriate, real-time responses against the most serious cyberthreats. Firewalls, endpoint security methods, and other tools are routinely deployed in some organizations to enforce specific policies and provide protection against certain threats. These tools form an important part of an organization’s cyber defense strategy, but they are quickly becoming obsolete in the new age of AI and machine learning-driven cyber threats.
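As a minimal sketch of what such self-learning detection looks like in practice, the following uses scikit-learn’s IsolationForest to learn a baseline of “normal” traffic and flag outliers. The feature set (bytes transferred, hour of day, ports contacted) and all numbers are invented for illustration; a real deployment would draw on far richer telemetry.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Stand-in for a baseline of "normal" traffic: modest transfers during
# business hours, each touching only a few ports.
normal = np.column_stack([
    rng.normal(5_000, 1_500, 500),   # bytes transferred
    rng.normal(13, 3, 500),          # hour of day
    rng.poisson(3, 500),             # distinct ports contacted
])

# Learn what "normal" looks like; no attack examples are ever provided.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A 2 a.m. bulk transfer sweeping dozens of ports scores as anomalous (-1).
suspect = np.array([[900_000, 2, 40]])
print(model.predict(suspect))  # [-1] -> flag for investigation
```

The design point is the one this paragraph makes: the model is never told in advance what an attack looks like; it learns what normal looks like and reacts to deviations from it.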

Absent the development of adequate defenses, actors with malicious intent should be expected to expand existing threats, introduce new threats, or alter the typical character of threats. The diffusion of efficient AI systems can increase the number of actors who can afford to carry out particular attacks. Future attacks using AI technology should be expected to be more effective, finely targeted, difficult to attribute, and more likely to exploit vulnerabilities in AI systems. Increased use of AI should also be expected to expand the range of actors who are capable of carrying out attacks, the rate at which these actors can carry attacks out, and the set of plausible targets.

AI and cybersecurity are expected to evolve in tandem in the coming years, but it is clear that a proactive effort is needed to stay ahead of motivated and capable attackers. Educated consumers can identify telltale signs of certain attacks (such as poorly crafted phishing attempts) and practice better cybersecurity hygiene (such as using diverse and complex passwords and two-factor authentication), yet most end users of IT systems will remain vulnerable to even simple attacks (such as the exploitation of unpatched or otherwise poorly secured systems). This is concerning in light of the AI-cybersecurity nexus, especially if high-precision attacks can be scaled up to impact large numbers of victims.
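Since two-factor authentication is named above as basic hygiene, here is a minimal sketch of the mechanism behind most authenticator apps: an RFC 6238 time-based one-time password (TOTP), implemented with Python’s standard library only. The base32 secret is a common documentation example, not a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval       # 30-second time step
    msg = struct.pack(">Q", counter)             # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


print(totp("JBSWY3DPEHPK3PXP"))  # matches a standard authenticator app
```

Because each code is derived from a shared secret plus the current time, a phished password alone is not enough; the attacker would also need the short-lived code.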

AI will remain intimately linked with digital security, physical security, and political security, creating an even more challenging dynamic that will require constant security management. In the cyber domain, AI can be used to augment attacks on and defenses of infrastructure and other critical aspects of society, implying that its future negative impacts are probably not being adequately contemplated.

Preparing for the potential malicious uses of AI is already an urgent task. As AI systems extend further into domains previously believed to be uniquely human (such as social interaction), more sophisticated attacks drawing on the social domain will occur. Given the many vulnerabilities cyber attackers can identify, and the many platforms and methods from which they may choose to attack, such attacks are very difficult to defend against and may result in an explosion of network penetrations, personal data theft, and an epidemic of intelligent computer viruses. While AI is a looming threat in the cyber arena, one of the best ways to defend against automated hacking is also via AI, through automation of our cyber-defense systems.

AI-based defense is not a panacea, however. More work needs to be done in understanding and achieving the right balance of transparency in AI, while developing improved technical measures for verifying the robustness of systems and ensuring that policy frameworks that were developed in a less AI-infused world adapt to the new world we are living in.

There is no alternative to changing the manner in which we are accustomed to thinking about the nexus between AI and cybersecurity, as well as the resources devoted to staying a step ahead and creating new methods of combating bad actors in cyberspace.

Daniel Wagner is CEO of Country Risk Solutions. Keith Furst is managing director of Data Derivatives. They are the co-authors of the forthcoming book “AI Supremacy,” which will be published in September.