Getting smarter and faster – Artificial intelligence and cybersecurity

Artificial intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. Picture: https://www.sciencealert.com

Artificial intelligences (AIs) are getting smarter, fast. That’s creating tricky questions that we can’t answer right now.

AIs don’t have human-level abilities yet, and they might never have them. But there are questions of responsibility, rights, and moral status that we still need to consider.

Today, AI covers a smart but limited set of software tools. But in the future, as artificial intelligence becomes more and more complex and ubiquitous, we could be forced to rethink the rights and wrongs of how we treat AIs – and even how they treat us.

For now, AIs are narrow in nature, performing tasks like image recognition, fraud detection, and customer service.

But, as AIs develop, they will become increasingly autonomous. At some point, they’re likely to do wrong.

Who’s really at fault when AIs make mistakes is a question that’s set to trouble businesses and excite lawyers as they struggle to work out who could, and should, be held responsible for any resulting harm.

Today, in most cases of problems caused by AIs, it’s obvious where fault lies. If you buy an AI and run it out of the box, and it does something terrible, it’s probably the manufacturer’s fault. If you build an AI and train it to do something terrible, it’s probably yours. But it won’t always be so clear-cut.

The complications begin when these systems acquire memories, and develop agency – where they start to do things that a manufacturer or a user never planned or wanted them to do.

We may just have to accept that we won’t always understand why AIs do what they do and live with that uncertainty – after all, we do the same for other humans.

Over time, AIs may become so sophisticated that they will be considered legally and morally responsible for their own actions, whether we understand them or not. In law, it’s already possible for non-human entities to be held legally at fault for wrongdoing through what’s called corporate personhood: where businesses have legal rights and responsibilities in the same way people do. Potentially the same could one day apply to AIs.

That means that, if we can one day find AIs guilty of a crime, we might also have to decide whether they should be punished when they don’t understand the rights and wrongs of their actions, which is often a threshold for criminal liability in humans.

When it comes to AI, cyberspace, and national security, there are more questions than answers. But these questions are important, as they touch on key issues related to how countries use increasingly powerful technologies while, at the same time, keeping their citizens safe. As a good example, few national security topics are as technical as nuclear security. How might the linkages between AI and cyberspace impact the security of nuclear systems?

A new generation of AI-augmented offensive cyber capabilities will likely exacerbate the military escalation risks associated with emerging technology, especially inadvertent and accidental escalation.

Examples include the increasing vulnerability of nuclear command, control, and communication (NC3) systems to cyber attacks, and the challenges posed by remote sensing technology, autonomous vehicles, conventional precision munitions, and hypersonic weapons to hitherto concealed and hardened nuclear assets. Taken together, these trends might further erode the survivability of a nation state’s nuclear forces.

AI, and the state-of-the-art capabilities it empowers, is a natural manifestation — not the cause or origin — of an established trend in emerging technology. The increasing speed of war, the shortening of the decision-making timeframe, and the co-mingling of nuclear, cyber and conventional capabilities are leading nation states to adopt destabilising launch postures.

AI will make existing cyberwarfare capabilities more powerful. Rapid advances in AI and increasing degrees of military autonomy could amplify the speed, power, and scale of future attacks in cyberspace.

Specifically, there are three ways in which AI and cybersecurity converge in a military context.

First, advances in autonomy and machine learning mean that a much broader range of physical systems is now vulnerable to cyber attacks, including hacking, spoofing, and data poisoning. In 2015, security researchers remotely brought a Jeep to a standstill on a busy highway, and in later demonstrations showed they could interfere with its steering and acceleration. Furthermore, machine learning-generated deepfakes (i.e., manipulated audio or video) have added a new, and potentially more sinister, twist to the risk of miscalculation, misperception, and inadvertent escalation that originates in cyberspace but has a very real impact in the physical world. The scale of this problem ranges from smartphones and household electronic appliances to industrial equipment, roadways, and pacemakers; these applications are associated with the ubiquitous connectivity phenomenon known as the Internet of Things (IoT).
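
To make “data poisoning” concrete, here is a minimal, purely illustrative Python sketch (scikit-learn on synthetic data; every number in it is a made-up assumption, not something from this article). It flips a fraction of a toy classifier’s training labels and shows the resulting drop in accuracy.

# Illustrative sketch only: label-flipping "data poisoning" against a toy
# classifier. All data is synthetic and all numbers are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(labels, fraction, rng):
    """Flip a given fraction of training labels, a crude poisoning attack."""
    poisoned = labels.copy()
    idx = rng.choice(len(labels), size=int(fraction * len(labels)), replace=False)
    poisoned[idx] = 1 - poisoned[idx]
    return poisoned

rng = np.random.default_rng(0)
for fraction in (0.0, 0.1, 0.3):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, poison_labels(y_train, fraction, rng))
    print(f"poisoned {fraction:.0%} of labels -> "
          f"test accuracy {model.score(X_test, y_test):.2f}")

The point is not the particular model but the pattern: the more of the training data an attacker silently corrupts, the less the deployed system can be trusted.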

Second, cyber attacks that target AI systems can offer attackers access to machine learning algorithms, and potentially to vast amounts of data from facial recognition and intelligence collection and analysis systems. Such access could be used, for example, to cue precision munitions strikes and support intelligence, surveillance, and reconnaissance missions.

Third, AI systems used in conjunction with existing cyber offense tools might become powerful force multipliers, thus enabling sophisticated cyber attacks to be executed on a larger scale (both geographically and across networks), at faster speeds, simultaneously across multiple military domains, and with greater anonymity than before.

During the early stages of a cyber operation, it is generally unclear whether an adversary intends to collect intelligence or prepare for an offensive attack. The blurring of cyber offense-defense will likely compound an adversary’s fear of a preemptive strike and increase first-mover incentives. In extremis, strategic ambiguity caused by this issue may trigger use-them-or-lose-them situations.

Open-source intelligence suggests, for example, that Chinese analysts view the vulnerability of China’s NC3 to cyber infiltrations — even if an attacker’s objective was limited to cyber espionage — as a highly escalatory national security threat. By contrast, Russian analysts tend to view Russia’s nuclear command, control, communications, and intelligence (C3I) network as more isolated, and thus, relatively insulated from cyber attacks.

Even a modicum of uncertainty about the effectiveness of AI-augmented cyber capabilities during a crisis or conflict would reduce both sides’ risk tolerance, increasing the incentive to strike preemptively.

It is now thought possible that a cyber attack (i.e., spoofing, hacking, manipulation, and digital jamming) could infiltrate a nuclear weapons system, threaten the integrity of its communications, and ultimately (and possibly unbeknown to its target) gain control of both its nuclear and non-nuclear command and control systems.

Somewhat paradoxically, AI applications designed to enhance cyber security for nuclear forces could simultaneously make cyber-dependent nuclear weapon systems (e.g., communications, data processing, or early-warning sensors) more vulnerable to cyber attacks.

Ironically, new technologies designed to improve the flow of information, such as 5G networks, machine learning, big-data analytics, and quantum computing, can also undermine the clear and reliable communication that is critical for effective deterrence.

Advances in AI could also exacerbate this cybersecurity challenge by enabling improvements in cyber offense. By automating advanced persistent threat operations, in essence the patient hunt for weaknesses in a target network, machine learning might dramatically reduce the extensive manpower and high levels of technical skill such operations require, especially against hardened nuclear targets.
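
As a deliberately harmless illustration of the kind of repetitive reconnaissance step that automation takes out of human hands, the short Python sketch below checks which common TCP ports answer on a host. The host and port list are hypothetical assumptions, and probes like this should only ever be run against systems you are authorised to test.

# Illustrative sketch only: automating one repetitive reconnaissance step.
# The target host is hypothetical (localhost here); run such probes only
# against systems you are authorised to test.
import socket

HOST = "127.0.0.1"            # hypothetical target
COMMON_PORTS = [22, 80, 443, 8080]

def port_is_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in COMMON_PORTS:
    state = "open" if port_is_open(HOST, port) else "closed/filtered"
    print(f"{HOST}:{port} {state}")

A human can check four ports by hand; software can check millions, continuously, which is precisely the force-multiplier effect described above.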

During a crisis, the inability of a nation state to determine an attacker’s intent may lead an actor to conclude that an attack (threatened or actual) was intended to undermine its nuclear deterrent. For example, an AI-enabled, third-party-generated deepfake, coupled with data-poisoning cyber attacks, could spark an escalatory crisis between two (or more) nuclear states.

The explainability (or “black box”) problem associated with AI applications may further compound these dynamics. Insufficient understanding of how and why AI algorithms reach a particular judgment or decision would complicate the task of determining whether datasets had been deliberately compromised to manufacture false outcomes, such as attacking incorrect targets (even allies) or misdirecting forces during combat.
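
One way analysts might probe such a black box, sketched here purely for illustration, is permutation importance: shuffle each input feature in turn and measure how much the model’s accuracy drops. A sharp, unexplained shift in these scores between model versions could be one clue, though not proof, that training data has been tampered with. The sketch uses scikit-learn on synthetic data; nothing in it comes from the article itself.

# Illustrative sketch only: permutation importance as one way to peek
# inside a "black box" model. Synthetic data; all numbers hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

model = RandomForestClassifier(random_state=1).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=1)

for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:+.3f}")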

Rapid advances in military-use AI and autonomy could amplify the speed, power, and scale of future attacks in cyberspace via several interconnected mechanisms — the ubiquitous connectivity between physical and digital information ecosystems; the creation of vast treasure troves of data and intelligence harvested via machine learning; the formation of powerful force multipliers for increasingly sophisticated, anonymous, and possibly multi-domain cyber attacks.

Despite all these high-tech AI scenarios, as someone once wisely commented: “The exact moment technology got out of control: When social media and phones met…” As always, be blessed and stay safe in both digital and physical worlds this weekend as we cheer on our National 7s teams – Go Fiji Go!
