Gaming the system – Artificial Intelligence (AI) Hackers

The malware families used by hackers are largely well known to the IT security community and thus the means of mitigating their impact are understood. The key is having effective people in place who know how to use those tools to protect the IT/OT environment. Picture: https://www.foodengineeringmag.com

THE formal definition of a hack is something that a system allows, but that is unintended and unanticipated by the system's designers.

Note the details – hacking is not cheating per se. It’s following the rules, but subverting their intent. It’s seeking an unintended result, an exploitation – basically it’s “gaming the system”. Hacks are clever, but not quite the same as innovations.

Systems are optimised for specific outcomes. Hacking is the pursuit of another outcome, often at the expense of the original optimisation.

Systems limit what we can do and invariably, some of us want to do something else. So we hack the system. Not everyone, of course, is a hacker. But most of us are.

Hacking is normally thought of as something you can do to computers.

But hacks can be perpetrated on any system of rules – the tax code, for example.

The tax code isn’t software. It doesn’t run on a computer. But you can still think of it as a series of algorithms that takes an input and produces an output – your tax bill.
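To make that analogy concrete, here is a minimal sketch of a tax code as an algorithm – a function mapping an input (income) to an output (the tax bill). The brackets and rates below are invented purely for illustration; they do not correspond to any real tax code.

```python
def tax_bill(income: float) -> float:
    """A toy 'tax code as algorithm': income in, tax bill out.
    Brackets and rates are made up for illustration only."""
    brackets = [
        (30_000, 0.00),          # first 30,000 untaxed
        (50_000, 0.18),          # 30,000-50,000 at 18%
        (270_000, 0.20),         # 50,000-270,000 at 20%
        (float("inf"), 0.33),    # everything above at 33%
    ]
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        if income > lower:
            tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

print(tax_bill(60_000))  # roughly 5600.0 with these made-up brackets
```

Like any algorithm, this one can be probed for edge cases: anyone who can restructure their "input" (say, splitting income across thresholds) without breaking the written rules is, in the formal sense above, hacking it.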

Hacking is as old as humanity. We are creative problem solvers – it’s how we survive and advance.

We exploit loopholes, manipulate systems, and strive for more influence, power, and wealth.

To date, hacking has exclusively been a human activity. Not for long – now consider a world where AIs are hackers too!

Artificial intelligence (AI) is an information technology. It consists of software.

It runs on computers. And it is already deeply embedded into our social fabric, both in ways we understand and in ways we don’t. It will hack our society to a degree and effect unlike anything that’s come before. I mean this in two very different ways.

One, AI systems will be used to hack us. And two, AI systems will themselves become hackers: finding vulnerabilities in all sorts of social, economic, and political systems, and then exploiting them at an unprecedented speed, scale, and scope.

It’s not just a difference in degree; it’s a difference in kind. We risk a future of AI systems hacking other AI systems, with humans being little more than collateral damage!

Maybe this sounds far-fetched, but it requires no futuristic science fiction technology! I'm not suggesting an AI "singularity" – where the AI-learning feedback loop becomes so fast that it outstrips human understanding! I'm not assuming intelligent androids.

I’m not assuming evil intent. Most of these hacks don’t even require major research breakthroughs in AI. They’re already happening.

What's worrying is that as AI gets more sophisticated, we often won't even know it's happening or when we've been surpassed.

AIs don't solve problems like humans do. They look at more types of solutions than we do.

They’ll go down complex paths that we haven’t considered. This can be an issue because of something called the explainability problem.

Modern AI systems are essentially black boxes. Data goes in one end, and an answer comes out the other.

It can be impossible to understand how the system reached its conclusion, even if you’re a programmer looking at the code!

In 2015, a research group fed an AI system called Deep Patient health and medical data from some 700,000 people, and tested whether it could predict diseases.

It could – and it outperformed doctors, especially for psychiatric diagnoses – but Deep Patient provided no explanation for the basis of a diagnosis, and the researchers had no idea how it came to its conclusions.

AIs can engage in something called reward hacking. Because AIs don't solve problems in the same way people do, they will invariably stumble on solutions we humans might never have anticipated – and some of those solutions will subvert the intent of the system.

That’s because AIs don’t think in terms of the implications, context, norms, and values we humans share and take for granted e.g. the gift of life or good health.

This reward hacking involves achieving a goal but in a way the AI’s designers never anticipated.

Take a soccer simulation where an AI figured out that if it kicked the ball out of bounds, the goalie would have to throw the ball in and leave the goal undefended.

If there are problems, inconsistencies, or loopholes in the rules, and if those properties lead to an acceptable solution as defined by the rules, then AIs will find these hacks – blindingly fast.
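The soccer example can be sketched in a few lines. The model below is an assumed, drastically simplified version of that simulation: the reward function only counts expected goals, and (in this toy world) kicking the ball out of bounds pulls the goalie away. A plain brute-force search over action sequences then "discovers" the loophole, because nothing in the rules forbids it.

```python
import itertools

def expected_goals(actions) -> float:
    """Toy reward function: it counts goals and nothing else.
    In this simplified model, kicking out of bounds lures the
    goalie away; a shot at an undefended goal scores far more often."""
    goalie_in_goal = True
    goals = 0.0
    for action in actions:
        if action == "kick_out":
            goalie_in_goal = False       # goalie leaves to throw the ball in
        elif action == "shoot":
            goals += 0.1 if goalie_in_goal else 0.9
            goalie_in_goal = True        # goalie returns after the shot
    return goals

# Exhaustive search over all 4-move plans: the reward maximiser
# prefers the unintended kick_out/shoot pattern over honest play.
best = max(itertools.product(["shoot", "kick_out"], repeat=4),
           key=expected_goals)
print(best)  # ('kick_out', 'shoot', 'kick_out', 'shoot')
```

The point is not the soccer arithmetic but the shape of the failure: the designers wanted "play soccer well", wrote down "maximise goals", and the optimiser faithfully maximised exactly what was written.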

Where this gets interesting is with systems that are already well specified and almost entirely digital.

Think about systems of governance like the tax code: a series of algorithms, with inputs and outputs. Think about financial systems, which are more or less algorithmically tractable.

We can imagine loading up an AI with all of the world’s laws and regulations, plus the world’s financial information in real time, plus anything else we think might be relevant; and then giving it the goal of “maximum profit.”

My guess is that this scenario isn’t that far off, and that the result will be all sorts of novel hacks!

But advances in AI are discontinuous and counterintuitive.

Things that seem easy turn out to be hard, and things that seem hard turn out to be easy. We don’t know until the breakthrough occurs and by then it may be too late!

When AIs start hacking, everything will change. They won’t be constrained in the same ways, or have the same limits, as humans. They’ll change hacking’s speed, scale, and scope, at rates and magnitudes we can’t imagine.

The increasing scope of AI systems also makes hacks more dangerous.

AIs are already making important decisions about our lives, decisions we used to believe were the exclusive responsibility of humans: who gets parole, who receives bank loans, who gets into university, or who gets a job.

As AI systems prove their capabilities, society will allocate ever more important decisions to them – humans are inherently lazy.

Hacks of these systems will then become exponentially more damaging.

While we have societal systems that deal with hacks, those were developed when hackers were humans, and reflect human speed.

An AI that discovers unanticipated but legal hacks of financial systems could crash our markets faster than we could recover.

Logically I would deduce that while hacks can be used by attackers to exploit systems, they can also be used by defenders to patch and secure systems.

So in the long run, AI hackers will hopefully favour the defence, because our software, tax code, financial systems, and so on can be patched before they're deployed – or can they?

Therefore our solution has to be resilience.

We need to build resilient governing structures that can quickly and effectively respond to the hacks.

It won’t do any good if it takes years to update the tax code, or if a legislative hack becomes so entrenched that it can’t be patched for political reasons.

This is a hard problem of modern governance. It also isn’t a substantially different problem than building governing structures that can operate at the speed and complexity of the information age.

While it’s easy to let technology lead us into the future, we’re much better off if we as a society decide now what technology’s role in our future should be.

This is all something we need to figure out now, before these AIs are released online and start hacking our world!

As renowned theoretical physicist and cosmologist Stephen Hawking told the BBC: "The development of full artificial intelligence could spell the end of the human race… It would take off on its own, and re-design itself at an ever increasing rate.

Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

As always wishing you and your families a blessed weekend, stay safe and well and wear your mask publicly in both physical and digital worlds!

  • Ilaitia B. Tuisawau is a private cybersecurity consultant. The views expressed in this article are his and not necessarily shared by this newspaper. Mr Tuisawau can be contacted on ilaitia@cyberbati.com