Ransomware and AI surveillance

Ransomware attacks have hit ‘stratospheric’ levels and now account for almost 70 per cent of all attacks that use any form of malware, or malicious software: any software intentionally designed to steal data, or to harm or exploit an electronic device. This is a 30 per cent jump from the same period in 2020.

The most common targets of ransomware in the second quarter of 2021 were governmental, medical and industrial companies along with scientific and educational institutions, according to a recent report from leading global cybersecurity provider Positive Technologies.

The overall percentage of attacks against government agencies climbed to 20 per cent in the second quarter, up from 12 per cent in the first quarter. Third-party ransomware operators (using ransomware-as-a-service) were involved in almost 75 per cent of all of these malware-related attacks.

In the second quarter, the industrial sector was involved in 80 per cent of overall malware attacks. Citing one specific incident, Positive Technologies said it found a new type of remote administration tool (RAT), dubbed B-JDUN, being used to target an energy company. The report does not specify whether nuclear power plants were involved, but such cyber attacks on critical infrastructure form a worrying trend and are now being classified by governments as matters of national security.

The volume of ransomware attacks had already been surging in April this year. Then, in early May, cyberattacks targeted the US Colonial Pipeline and the police department of the District of Columbia. Such attacks revealed the boldness and audacity of today’s ransomware gangs, but they also triggered unwanted publicity, catching the attention of law enforcement agencies and eventually the US government, and prompting US federal agencies and international law enforcement to crack down on ransomware.

Cybercriminals have since started to change their methods, relying less on partners or clients (in ransomware-as-a-service) to carry out attacks and supervising their operations more closely. Some have also vowed to leave certain industries alone, such as those involved in critical operations or infrastructure.

As a result of the bad publicity and law enforcement efforts, disputes have flared up on Dark Web forums questioning the nature of ransomware. Several forums have since banned posts related to ransomware partner programs or services. Some forum users have even said that ransomware gangs should stop what they’re doing and find a different way to make money.

Does this mean that ransomware operators will turn over a new leaf and see the error of their ways? Hardly, according to Positive Technologies. In fact, I think it is extremely unlikely that the successful hacker groups responsible for high-profile attacks will quit such a profitable business; more likely they will wait for things to blow over before developing a new approach. They may even close down temporarily and resume operations under a new alias, using the downtime to hone their attack strategies and come up with more innovative malware.

With ransomware likely to remain a major threat, here are a few tips on how organisations can protect themselves:

  • Install security updates promptly. Apply vendor patches as soon as they are released;
  • Fully investigate any major attack. Conduct thorough investigations of all major incidents to discover the points of compromise and uncover any vulnerability exploited by the attackers. Furthermore, ensure the hackers did not leave behind any backdoors through which they could return (a simple file-integrity check, as sketched after this list, can help here);
  • Boost perimeter security. Strengthen security at the corporate perimeter with modern security tools, such as web application firewalls for protecting web resources. To prevent malware infections, use sandboxes that analyse file behaviour in a virtual environment to detect malicious activity; and
  • Finally, have regular cyber vulnerability and risk assessments conducted by an independent cybersecurity consultant to monitor the cybersecurity status of your information systems.
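
On the investigation point above, one simple aid when hunting for backdoors is a file-integrity baseline: record cryptographic hashes of the files in sensitive directories, then re-scan later and flag anything added, removed or modified. The following is a minimal sketch in Python using only the standard library; the watched directory and baseline file name are hypothetical examples, and this is an illustration of the idea rather than a substitute for proper file-integrity monitoring or EDR tooling.

```python
# file_integrity.py - minimal file-integrity baseline/check (illustrative sketch only).
# Assumptions: the watched directory ("/var/www") and "baseline.json" are hypothetical.
import hashlib
import json
import os
import sys

def hash_file(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(directory: str) -> dict:
    """Walk a directory tree and map each file path to its hash."""
    hashes = {}
    for root, _dirs, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            hashes[path] = hash_file(path)
    return hashes

def main() -> None:
    directory = sys.argv[1] if len(sys.argv) > 1 else "/var/www"  # hypothetical target
    baseline_file = "baseline.json"

    current = scan(directory)
    if not os.path.exists(baseline_file):
        # First run: record a trusted baseline.
        with open(baseline_file, "w") as f:
            json.dump(current, f, indent=2)
        print(f"Baseline of {len(current)} files written to {baseline_file}")
        return

    with open(baseline_file) as f:
        baseline = json.load(f)

    added = sorted(set(current) - set(baseline))
    removed = sorted(set(baseline) - set(current))
    changed = sorted(p for p in current if p in baseline and current[p] != baseline[p])

    for label, paths in (("ADDED", added), ("REMOVED", removed), ("CHANGED", changed)):
        for p in paths:
            print(f"{label}: {p}")

if __name__ == "__main__":
    main()
```

Even a crude baseline like this makes unexpected web shells, scripts or altered binaries much easier to spot during an incident review.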

Artificial Intelligence (AI) has made video surveillance automated and, frankly, terrifying. As summarised in an ACLU report and essay from a couple of years ago, AI can now flag people based on their clothing or behaviour, identify people’s emotions, and find people who are acting ‘unusual.’

Surveillance cameras used to be passive devices. Perhaps they simply recorded, and nobody looked at the footage unless they needed to, for example after a crime was committed. Or a bored guard watched a dozen different screens, scanning for something interesting. In either case, the video was only stored for a few days because storage was expensive.

In developed countries this is no longer the case. Advancements in video analytics, fuelled by AI techniques like machine learning, now enable computers to watch and understand surveillance video. Identification technologies make it easier to automatically work out who is in the footage. And the cameras themselves have become cheaper, more ubiquitous and much better; cameras mounted on drones can effectively watch an entire city or whole zones of a country. Computers can watch surveillance video without human issues like distraction, fatigue or even needing to be paid. The result is a level of surveillance that was impossible just a few years ago.

Let’s take the technologies one at a time. First: video analytics. Computers are getting better at recognising what’s going on in a video. Detecting when a person or vehicle enters a forbidden area is easy. They can count people or cars. They can detect when luggage is left unattended, or when previously unattended luggage is picked up and removed. They can detect when someone is loitering in an area, is lying down, or is running. Increasingly, they can detect particular actions by people.
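
As a rough illustration of how ‘forbidden area’ detection works, the hedged sketch below uses the open-source OpenCV library: a background subtractor flags moving objects in each frame, and any sufficiently large object whose bounding box overlaps a predefined restricted zone triggers an alert. The video file name, zone coordinates and size threshold are assumptions for illustration only; real systems layer trained person and vehicle detectors and object tracking on top of this basic idea.

```python
# restricted_zone.py - toy example of flagging motion inside a restricted area (sketch only).
# Assumptions: "entrance.mp4", the ZONE rectangle and MIN_AREA are hypothetical values;
# written against the OpenCV 4.x Python API.
import cv2

ZONE = (100, 150, 400, 450)   # x1, y1, x2, y2 of the restricted area in pixels
MIN_AREA = 1500               # ignore small blobs (noise, shadows)

def overlaps_zone(x, y, w, h, zone=ZONE) -> bool:
    """True if the bounding box (x, y, w, h) intersects the restricted rectangle."""
    zx1, zy1, zx2, zy2 = zone
    return not (x + w < zx1 or x > zx2 or y + h < zy1 or y > zy2)

def main() -> None:
    cap = cv2.VideoCapture("entrance.mp4")  # hypothetical camera feed
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

    while True:
        ok, frame = cap.read()
        if not ok:
            break

        # Foreground mask: pixels that differ from the learned background model.
        mask = subtractor.apply(frame)
        # MOG2 marks shadows as grey (127); thresholding keeps only solid foreground.
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)

        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) < MIN_AREA:
                continue
            x, y, w, h = cv2.boundingRect(c)
            if overlaps_zone(x, y, w, h):
                frame_no = int(cap.get(cv2.CAP_PROP_POS_FRAMES))
                print(f"ALERT: movement inside restricted zone at frame {frame_no}")

    cap.release()

if __name__ == "__main__":
    main()
```

Counting people or cars, detecting abandoned luggage and spotting loitering all build on the same pipeline, with classifiers and time-based rules added on top.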

More than identifying actions, video analytics allow computers to understand what’s going on in a video: they can flag people based on their clothing or behaviour, identify people’s emotions through body language and behaviour, and find people who are acting ‘unusual’ based on everyone else around them.

Computers can also identify people. AIs are getting better at identifying individuals in those videos. Facial recognition technology is improving all the time, made easier by the enormous stockpile of tagged photographs we voluntarily give to Facebook, Instagram and other social media sites, and by the photos governments collect in the process of issuing, for example, Fiji Voter registration cards, Fiji TIN/FNPF dual cards, driver’s licences and even passports. The technology already exists to automatically identify everyone a camera ‘sees’ in real time. Even without video identification, we can be identified by the unique information continuously broadcast by the smartphones we carry with us everywhere (with GPS location switched on), or by our laptops or Bluetooth-connected devices. Law enforcement agencies have been tracking mobile phones for years, and this practice can now be combined with video analytics.
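
To make the identification step concrete, here is a hypothetical sketch using the open-source Python face_recognition library (built on dlib): it turns a known, tagged photo into a numerical face encoding and compares it against every face found in a camera still. The image file names are assumptions for illustration; in a real deployment the comparison runs against a gallery of millions of tagged images, not a single reference photo.

```python
# who_is_this.py - hypothetical sketch of one-to-one face matching (not a deployed system).
# Assumes the open-source `face_recognition` library and two illustrative image files.
import face_recognition

# A tagged reference photo, e.g. from social media or an ID-card database
# (assumes the photo contains exactly one face).
known_image = face_recognition.load_image_file("tagged_profile_photo.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# A still frame captured from a surveillance camera.
frame = face_recognition.load_image_file("camera_still.jpg")
locations = face_recognition.face_locations(frame)
encodings = face_recognition.face_encodings(frame, locations)

for (top, right, bottom, left), encoding in zip(locations, encodings):
    # Compare the camera face against the known face; lower tolerance = stricter match.
    match = face_recognition.compare_faces([known_encoding], encoding, tolerance=0.6)[0]
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    status = "MATCH" if match else "no match"
    print(f"Face at ({left}, {top}): {status} (distance {distance:.2f})")
```

The same compact encodings can be indexed and searched at scale, which is what makes real-time identification of everyone a camera ‘sees’ feasible.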

Once a monitoring system identifies people, their data can be combined with other data, either collected or purchased: mobile phone records, GPS surveillance history, purchasing data, and so on. Social media companies like Facebook have spent years learning about our personalities and beliefs from what we post, comment on and ‘like.’ This is ‘data inference,’ and when combined with video it offers a powerful window into people’s behaviours and motivations.

Camera resolution is also improving. Gigapixel cameras are so good that they can capture individual faces and identify licence plates in photos taken miles away. ‘Wide-area surveillance’ cameras can be mounted on vehicles, airplanes and drones, and can operate continuously. Cameras can be hidden in street lights and other everyday objects. In space, satellite cameras have also dramatically improved.

Data storage has become incredibly cheap, and cloud storage makes it all so easy. Video data can easily be saved for years, allowing computers to conduct all of this surveillance backwards in time.

In democratic countries, such surveillance is marketed as crime prevention/deterrence, or counterterrorism. In totalitarian countries like China, it is blatantly used to suppress political activists and for general public control. In all instances, it is being implemented by authorities, and by corporations in the public spaces they control, without much public debate.

This is bad, because ubiquitous surveillance will drastically change our relationship to society in general. Most importantly, the inability to live anonymously will have an enormous chilling effect on speech and behaviour, which in turn will moderate society’s ability to experiment and change.

We often believe that technological change is inevitable, and that there’s nothing we can do to stop it, or even to steer it. That’s simply not true. We’re led to believe this because we don’t often see the technology, understand it, or have a say in how or when it is deployed. The problem is that the underlying technologies (cameras, resolution, AI and machine learning) are complex and specialised.

But as the rate of technological change increases, so do the unanticipated effects on our lives. Just as we have been surprised by the threats to democracy caused by surveillance capitalism, AI-enabled video surveillance will have similarly surprising effects.

As US lawyer, author and investigative journalist Glenn Greenwald succinctly puts it: ‘The way things are supposed to work is that we’re supposed to know virtually everything about what they (the government) do: that’s why they’re called public servants.

They’re supposed to know virtually nothing about what we do: that’s why we’re called private individuals.’ As always, God bless you all and stay safe and masked in both digital and physical worlds this weekend.

  • ILAITIA B. TUISAWAU is a private cybersecurity consultant. The views expressed in this article are his and not necessarily shared by this newspaper. Mr Tuisawau can be contacted on ilaitia@cyberbati.com