February 2019, Vol. 246, No. 2


Malicious Use of Artificial Intelligence in Cybersecurity

By Harold Kilpatrick, Cybersecurity Consultant

When you see a letter whose signatories include the universities of Yale, Stanford, Oxford and Cambridge, you ought to pay attention. These heavyweights of the academic world do not lend their names to anything without careful consideration. 

Researchers from these universities, working alongside representatives of the cybersecurity industry and civic organizations, recently published a paper called “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.” It examined a variety of potentially malicious artificial intelligence (AI) applications.

The core issue at the heart of this research is that AI has no inherent ethical bias. The same technology that could, in principle, identify and treat an injured person could equally be trained to identify and harm members of specific groups.

What is AI?

Surely, by now everyone knows what AI is, right? Sure, but it still helps if we’re all on the same page. In brief – AI, in this case, refers to the use of computing power to perform calculations and undertake the kind of analytical functions that are normally reserved for humans.

Over the last few years, oil and gas prices have dropped dramatically. In response, the pipeline industry has had to adapt and optimize operations to prevail in these difficult times. Enter artificial intelligence.

There’s no doubt AI is revolutionizing business practices around the globe, and the oil industry is not missing out. Chevron is already using AI technology to locate new wells, and the results are striking: the company experienced a 30% rise in production.

AI has more potential applications than you might imagine, and the oil industry can harness the benefits of such technology to establish predictive maintenance, ensure smooth logistical operations and lower drilling expenses.

Nevertheless, AI, like any other technology, can be abused. Just as it can optimize operations, it can also do the exact opposite. Ensuring proper security measures while implementing AI is the next challenge the pipeline industry must face.

The most common use of machine learning (ML) is in anti-malware software. In the industry, such software is known as “endpoint protection.” You are the endpoint: an average computer user who browses the internet, downloads files, etc. Most people have an anti-virus application running, and there’s general awareness of how important it is.

But, have you ever wondered how your anti-virus software works? Its inner workings are twofold. First, there are databases. These are vast lists of known viruses. If an anti-virus application detects any of the listed viruses on your system, it will alert you and quarantine them. However, these databases can easily be defeated. Just altering a filename and some of the code could be enough to make the file appear different. 
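To make that concrete, here is a minimal sketch of signature-based detection in Python (the hash below is a made-up placeholder, not a real signature). Because a signature is simply a hash of the file’s bytes, changing even a single byte produces a completely different hash, which is why renamed or lightly modified malware slips past:

```python
import hashlib

# Toy signature database: SHA-256 hashes of known malware samples.
KNOWN_MALWARE_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # placeholder
}

def is_known_malware(path: str) -> bool:
    """Flag a file if the hash of its contents matches a known signature."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in KNOWN_MALWARE_HASHES
```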

These databases are supplemented by machine learning algorithms, which analyze known viruses and look for patterns in their code.
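As a rough illustration of the idea, and not any vendor’s actual engine, the sketch below trains a classifier on hypothetical structural features of files; it assumes scikit-learn is installed, and the feature values are invented:

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features extracted from each file:
# [entropy, number of imported APIs, network-related calls, is_packed]
X_train = [
    [7.9, 12, 5, 1],  # known malware
    [7.5,  8, 3, 1],  # known malware
    [4.2, 40, 0, 0],  # known benign
    [5.1, 35, 1, 0],  # known benign
]
y_train = [1, 1, 0, 0]  # 1 = malicious, 0 = benign

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# A renamed, lightly altered sample keeps similar structural features,
# so it can still be flagged even though its hash is brand new.
print(model.predict([[7.7, 10, 4, 1]]))  # expected: [1]
```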

No security system is perfect. Unfortunately, machine learning algorithms are quite dumb. As people, when we’re presented with data that doesn’t fit in with our expectations of the world, we can question it and discard it. A machine learning algorithm will take data, analyze it, and draw conclusions, regardless of whether it makes sense.

You can implement rules for the algorithm, so it will look for specific characteristics in the file and, if they are identified, treat it differently. However, you need to be able to account for and design a check for every potential error.
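In practice, such rules might look like hand-written checks layered in front of the model; the field names and thresholds below are invented for illustration:

```python
def apply_rules(features: dict) -> str | None:
    """Hand-written checks applied before the ML classifier's verdict."""
    if features["entropy"] > 7.8 and features["is_packed"]:
        return "quarantine"  # packed, high-entropy files are suspicious
    if features["signed_by_trusted_vendor"]:
        return "allow"       # an explicit allow-rule overrides the model
    return None              # no rule fired: fall through to the classifier
```

Every failure mode needs its own rule, and anything the rule writer did not anticipate falls through.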

Malicious actors have already begun using automatic and algorithmic content generation to confuse bots in many areas. Deliberately injecting bad data into the stream is known as “poisoning the well,” and it is one of the more dangerous and complex AI vulnerabilities to counter.
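A minimal sketch of the effect, assuming NumPy and scikit-learn and using synthetic data: the attacker injects samples that sit deep in the malicious region but carry benign labels, dragging the model’s decision boundary out of position.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean training data: two classes separated by the line x + y = 0.
X = rng.normal(0, 1, (200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clean_model = LogisticRegression().fit(X, y)

# "Poisoning the well": inject points deep in the malicious region
# (around (1.5, 1.5)) that are deliberately mislabeled as benign.
X_poison = rng.normal([1.5, 1.5], 0.3, (80, 2))
y_poison = np.zeros(80, dtype=int)
poisoned_model = LogisticRegression().fit(
    np.vstack([X, X_poison]), np.concatenate([y, y_poison])
)

probe = [[1.5, 1.5]]  # clearly on the malicious side of the true boundary
print(clean_model.predict(probe))     # expected: [1]
print(poisoned_model.predict(probe))  # expected: [0], waved through
```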

In the future, we can expect hackers and criminals to use machine learning themselves to make malicious software more difficult to detect.

Anonymity

Identity theft is an old crime that has found a new home in the digital age. Since the implementation of the General Data Protection Regulation (GDPR) in the EU and a Facebook data scandal that rocked Washington, businesses and governments have been stressing the importance of anonymized data. Consumers themselves are becoming less forgiving of privacy breaches. Nevertheless, an advanced AI might be able to defeat our current methods of anonymizing data.

One potential use of ML is as a means of de-anonymizing data. Using a combination of scraping (gathering as much data as possible from the internet about a given person or subject) and the kind of analytical power only available to a computer, malicious actors can establish the identity that fragments of data relate to. 

From the perspective of a human, this sounds like a mammoth task, one far beyond reasonable and practical limits. However, a sufficiently trained algorithm would make it a relatively resource-light process. With the right setup, it would be trivially easy to do.
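A toy sketch of that linkage, with entirely invented data: none of the fragments below identifies anyone on its own, but naively joining records on shared field values assembles a full profile.

```python
# Fragments scraped from different, unrelated sources (all invented).
forum_post  = {"username": "pipeline_pro", "city": "Houston", "employer": "AcmePipeCo"}
data_breach = {"email": "j.smith@example.com", "username": "pipeline_pro"}
voter_roll  = {"name": "John Smith", "city": "Houston", "employer": "AcmePipeCo"}

def link_records(records: list[dict]) -> dict:
    """Naive record linkage: merge any records that share a field value."""
    profile: dict = {}
    for rec in records:
        if not profile or set(profile.values()) & set(rec.values()):
            profile.update(rec)
    return profile

print(link_records([forum_post, data_breach, voter_roll]))
# One profile now ties username, email, name, city and employer together.
```

Real de-anonymization uses fuzzier matching and far more sources, but the join is the same in spirit.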

The paper ultimately made four recommendations designed to allow better forecasting, prevention and mitigation of AI-related threats. The key will be closer collaboration between researchers and policymakers. 

What can you do individually? Well, unfortunately, short of deploying your own counter-algorithms, you cannot stop a malicious AI-driven cyber-attack. You can somewhat mitigate the danger, though. One way to do so is to get a virtual private network (VPN).

A VPN will not protect assets on its own, but it will help prevent de-anonymization. When you connect to the internet through a VPN, you first send an encrypted request to the VPN server. It is then the server that interacts with the internet and sends an encrypted result back. 
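A highly simplified sketch of that round trip, assuming a pre-shared key and the Python cryptography package (a real VPN negotiates keys per session and tunnels all traffic at the network layer):

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in reality, negotiated per session
client, vpn_server = Fernet(key), Fernet(key)

# Client side: the request is encrypted before it leaves the machine.
tunneled = client.encrypt(b"GET https://example.com/")

# VPN server: decrypt, fetch on the client's behalf, encrypt the reply.
request = vpn_server.decrypt(tunneled)
reply = vpn_server.encrypt(b"<html>...</html>")  # placeholder fetch result

# Anyone watching the wire between client and server sees only ciphertext.
print(client.decrypt(reply))
```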

Researchers hope one day to be able to differentiate between content that has been generated using ML and content produced by humans. P&GJ
