How Hackers Are Wielding Artificial Intelligence

AI has proven itself to be a value-adding technology across the global economy.

As businesses found themselves scrambling to adapt to current events over the last few years, some of them, Frito-Lay among them, found ways to cram half a decade's worth of digital transformation into a much shorter time frame. Harris Poll research conducted with Appen found that AI budgets increased by 55% during the global pandemic.

Like any tool, artificial intelligence has no innate moral value. AI’s usefulness or potential for harm comes down to how the system “learns” and what humans ultimately do with it.

Some attempts to leverage AI – such as “predicting” crime before it happens – show that models trained on biased data tend to replicate human shortcomings. So far, training AI using data from the U.S. justice system has resulted in tragically prejudiced AI reasoning.

In other examples, humans choose more deliberate ways to leverage AI's destructive potential. Hackers are once again showing their innovative tendencies by using artificial intelligence to improve their attacks' reach, effectiveness and profitability. And as cyberwarfare becomes increasingly common around the globe, the applications of AI in hacking will surely develop even further.

AI Is an Opportunity and a Risk

Artificial intelligence provides a world of possibilities for businesses wishing to improve forecasting, business optimization and customer retention strategies. It’s also a windfall for those intent on compromising others’ digital sovereignty.

Here are a few ways artificial intelligence may be susceptible to discreet tampering and more overt efforts to turn it toward aggressive actions.

1. Compromising Machine Logic

The chief advantage of AI for consumers and commercial enterprises is that it carries out predictable and repeatable acts of logic without human interference. This is also its greatest weakness.

Like any other digital construct, AI may be susceptible to penetration by outside forces. Hackers who access and compromise the machine logic powering an AI system could cause it to carry out unpredictable or harmful actions. For example, an AI tasked with industrial condition monitoring might deliver false readings or let maintenance pings go undelivered.

Since the entire point of investing in AI is to eliminate human intervention and second-guessing, the harm to infrastructure or product quality caused by an attack of this nature may not be noticed until catastrophic failure occurs.
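
To make this failure mode concrete, here is a minimal Python sketch. All names, readings and thresholds are invented for illustration: the point is that a single tampered constant in a condition-monitoring check can suppress every maintenance alert while the system appears to run normally.

```python
from statistics import mean, stdev

# Honest deployments alert when a reading sits more than 3 standard deviations
# from the baseline. A compromised build might quietly raise this to 300.0,
# after which every reading looks "normal" and no maintenance ping is sent.
VIBRATION_LIMIT = 3.0

def readings_to_flag(baseline: list[float], new_readings: list[float]) -> list[float]:
    """Return the readings that should trigger a maintenance alert."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [r for r in new_readings if abs(r - mu) / sigma > VIBRATION_LIMIT]

baseline = [1.00, 1.10, 0.90, 1.05, 0.95, 1.02]  # healthy vibration readings
incoming = [1.00, 1.10, 9.80]                    # 9.80 is a bearing about to fail

print(readings_to_flag(baseline, incoming))
# Honest limit (3.0): [9.8] -- a technician gets paged.
# Tampered limit (300.0): [] -- the failure goes unnoticed.
```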

2. Utilizing Reverse Engineering Algorithms

Another potential avenue for harm – especially where intellectual property (IP) and consumer or commercial data are concerned – is the notion of reverse engineering. Hackers may even steal the artificial intelligence code itself. With time enough to study how it works, they could eventually uncover the datasets used to train the AI in the first place.

This could lead to several outcomes, the first of which is AI poisoning: corrupting the recovered training data so the retrained model learns the wrong behavior. Other examples involve hackers leveraging the training data itself to glean compromising information about markets, competitors, governments, vendors or general consumers.
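
Here is a hedged sketch of the poisoning scenario, assuming the attacker has already recovered the training set: flipping a fraction of labels before the model is retrained degrades its judgment in a region the attacker chooses. The data, model choice (scikit-learn's LogisticRegression) and 40% flip rate are illustrative assumptions, not a description of any real attack.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean two-class training data: class 0 clusters near (0, 0), class 1 near (3, 3).
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
clean = LogisticRegression().fit(X, y)

# Poisoning step: flip the labels on 40 of the 100 class-1 examples.
y_poisoned = y.copy()
flipped = rng.choice(np.where(y == 1)[0], size=40, replace=False)
y_poisoned[flipped] = 0
poisoned = LogisticRegression().fit(X, y_poisoned)

probe = np.array([[3.0, 3.0]])  # a point that should clearly be class 1
print("clean    P(class 1):", clean.predict_proba(probe)[0, 1])
print("poisoned P(class 1):", poisoned.predict_proba(probe)[0, 1])
# The poisoned model is markedly less confident about (3, 3) -- or wrong outright.
```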

3. Learning About Intended Targets

Surveilling targets is likely one of the more unsettling implications of AI falling into hackers' hands. AI's ability to draw conclusions about a person's abilities, knowledge areas and temperament, and the likelihood of their falling victim to targeting, fraud or abuse, is particularly worrying for some cybersecurity experts.

Artificial intelligence can ingest some of the unlikeliest data points and reach surprisingly detailed conclusions about people, teams and groups. An "engaged" or "distracted" individual might type quickly, fidget with the mouse or hop between browser tabs. A user who is "confused" or "hesitant" may pause before clicking on page elements or revisit multiple sites.

In the right hands, cues like these help HR departments boost employee engagement or help marketing teams polish their websites and sales funnels.

For hackers, signals like these can add up to a surprisingly nuanced psychological profile of an intended target. Based on hints invisible to humans, cybercriminals might be able to tell which people are vulnerable to phishing, smishing, ransomware, financial fraud and other types of harm. Such profiles might also help bad actors learn how best to convince their targets that a fraud attempt comes from a legitimate source.
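
As a toy illustration of how little data such profiling needs, here is a sketch that trains a decision tree on invented behavioral telemetry. Every feature, threshold and label below is hypothetical; real profiling systems would train on far richer data.

```python
from sklearn.tree import DecisionTreeClassifier

# Each row: [keystrokes_per_minute, tab_switches_per_minute, pause_before_click_s]
X = [
    [80, 6, 0.4],  # fast typing, frequent tab switching
    [75, 5, 0.5],
    [20, 1, 3.0],  # slow input, long pauses before clicking
    [25, 2, 2.5],
]
y = ["engaged", "engaged", "hesitant", "hesitant"]

model = DecisionTreeClassifier().fit(X, y)

# A profile like "hesitant" might mark someone as a softer phishing target.
print(model.predict([[22, 1, 2.8]]))  # -> ['hesitant']
```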

4. Probing Network Vulnerabilities

Cybersecurity professionals published data on 20,175 known security vulnerabilities in 2021. That was an increase over 2020, when there were 17,049 such vulnerabilities.

The world grows more digitally interconnected – some would say interdependent – by the hour. It now hosts a dizzying number of small-scale and industrial networks, with billions of connected devices online and more on the way. Everything's online, from condition-monitoring sensors to enterprise planning software.

Artificial intelligence shows promise in helping cybersecurity teams probe for network, software and hardware vulnerabilities faster than humans could alone. The speed and scale at which Earth's digital infrastructure is growing make it almost impossible to manually search trillions of lines of code for security exploits to patch. And the work has to happen while these systems remain online, because downtime is so costly.

If AI is a cybersecurity tool here, it's also a double-edged sword. Hackers can use the same mechanisms as "white hat" security teams to probe networks, software and firmware for vulnerabilities more efficiently than human IT specialists can.
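
A minimal sketch of that dual-use idea, under stated assumptions: a model trained on the outcomes of past assessments ranks new scan findings by likely exploitability. A defender uses the ranking to decide what to patch first; an attacker, what to probe first. All features, hosts and training rows below are fabricated placeholders.

```python
from sklearn.ensemble import RandomForestClassifier

# Each row: [service_age_years, cvss_score, publicly_exposed (0/1)]
# Label: 1 if the finding proved exploitable in past assessments, else 0.
X_train = [
    [5.0, 9.8, 1],
    [4.0, 8.6, 1],
    [0.5, 3.1, 0],
    [1.0, 4.0, 0],
    [3.0, 7.5, 1],
    [0.2, 2.0, 0],
]
y_train = [1, 1, 0, 0, 1, 0]

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# New scan findings, ranked by predicted probability of exploitability.
findings = {"host-a:22": [6.0, 9.1, 1], "host-b:8443": [0.3, 2.4, 0]}
ranked = sorted(findings, key=lambda h: model.predict_proba([findings[h]])[0, 1],
                reverse=True)
print(ranked)  # most urgent patch -- or most promising target -- first
```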

A Digital Arms Race

There are too many AI applications in cybercrime to name them all, but here are a few more:

  • Hackers could conceal AI code within an otherwise benign application, set to execute malicious behavior when it detects a predetermined trigger or threshold (a defanged sketch of this pattern appears after the list).
  • Malicious AI models may be used to compromise credentials or IT management features by monitoring biometric inputs, such as fingerprints and voice patterns.
  • Even if an attempted cyberattack ultimately fails, hackers equipped with AI may be able to use machine learning to determine what went wrong and what they could do differently next time.
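
The first bullet describes a pattern security researchers have publicly demonstrated in proofs of concept: logic that lies dormant until an embedded model's output crosses a threshold. Here is a deliberately defanged sketch of that trigger structure; everything is a benign stand-in, with the "model" reduced to a stub and the "payload" to a print statement.

```python
# Benign stand-ins throughout: no real model, no real payload.
TRIGGER_THRESHOLD = 0.95  # e.g., confidence that the intended target is present

def model_confidence(observation: dict) -> float:
    """Stand-in for an embedded model scoring its environment."""
    return 0.97 if observation.get("environment") == "intended_target" else 0.10

def run(observation: dict) -> None:
    if model_confidence(observation) >= TRIGGER_THRESHOLD:
        print("payload would execute here")  # placeholder, nothing harmful
    else:
        print("remains dormant; behaves like a normal application")

run({"environment": "anywhere_else"})    # dormant
run({"environment": "intended_target"})  # trigger condition met
```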

It seemed to take just one well-placed story about hacking a Jeep while it was being driven to slow autonomous vehicle development to a crawl. One high-profile hack where AI acts as the linchpin could cause a similar erosion in public opinion. Some polling already shows the average American is highly dubious about AI's benefits.

Omnipresent computing comes with cybersecurity risks – and both white hat and black hat hackers know it. AI can help keep our online lives secure, but it’s also the epicenter of a new digital arms race.
