Privatise Blog

AI in cyber security: A silver bullet or a double-edged sword?

Is AI really the answer cyber security is looking for?

Will AI be the silver bullet cyber security experts have been waiting for in the fight against ever-increasing numbers of complex cyber attacks?

It is a question asked many times, and one for which there is still no definitive answer.

But within the industry, there is an overwhelming sense that AI will benefit both cyber security and data privacy.

For instance, more than three quarters of cyber security professionals believe that AI and machine learning are beneficial to their roles, according to research from Exabeam.

And more than a quarter (27%) of business executives are planning to invest in cyber security which employs AI or machine learning in some way in the coming years, according to research by PwC.

There is no doubt that AI as a business tool has many benefits in automating simple processes, and there is an argument to be made that this could translate to more complicated tasks like cyber security.

In theory it makes complete sense.

The AI powered cyber security system is deployed and gradually learns the nuances of the attacks it faces. Because the system can “learn” as it goes, it eventually spots patterns and can stave off attacks on its own.
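The "learning" idea above can be illustrated with a deliberately simplified sketch. This is a toy baseline-deviation detector, not any vendor's actual product, and the class, method names, and traffic numbers are all invented for illustration:

```python
# Toy illustration of a detector that "learns" normal behaviour:
# it records typical request rates, then flags large deviations.
from statistics import mean, stdev

class AnomalyDetector:
    def __init__(self, threshold=3.0):
        self.threshold = threshold  # how many standard deviations count as abnormal
        self.baseline = []

    def observe(self, requests_per_min):
        # "Learning" phase: record what normal traffic looks like.
        self.baseline.append(requests_per_min)

    def is_suspicious(self, requests_per_min):
        mu, sigma = mean(self.baseline), stdev(self.baseline)
        # Flag anything far above the learned normal range.
        return requests_per_min > mu + self.threshold * sigma

detector = AnomalyDetector()
for rate in [100, 110, 95, 105, 98, 102]:
    detector.observe(rate)

print(detector.is_suspicious(104))  # → False: looks like normal traffic
print(detector.is_suspicious(500))  # → True: sudden spike is flagged
```

Real systems use far more sophisticated models, but the principle is the same: the system is only as good as the "normal" it has learned.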

But there remains a fundamental flaw in the theory that simply switching on an AI cyber security system and letting it fight off threats in the background is enough to protect against attacks. In fact, that assumption could place businesses in more danger than they were in before.

One of the biggest weaknesses of AI and machine learning in cyber security is an over-reliance on their abilities and a misunderstanding of their limitations.

This opens up a two-fold threat to businesses: it leaves the company with a potentially weakened defence, and it gives bosses and IT managers a false sense of security that they are being protected.

Much of this misunderstanding comes from the idea that you can simply install an AI tool and it will just learn as it goes.

But not only does this assume that the initial system has been set up properly, it has also been demonstrated many times that these systems can easily be tricked and manipulated if not monitored correctly.

Google’s AI research division, for example, managed to reprogram an AI platform’s neural network, manipulating it into performing tasks it was never meant to perform.

Within the cyber security sphere this raises some serious doubts about the capability of AI systems.

An AI-driven security system could just as easily be tricked into letting attackers in – and then be used to cover up the attack.

The next phase in the cyber arms race could involve hackers using social engineering to figure out what AI cyber security systems their targets are using, then working out ways to ‘trick’ those supposedly ‘intelligent’ systems into marking suspicious activity as legitimate – and even vice versa.

Then there is the risk that cyber criminals themselves are already making use of AI to overcome complex defences.

Just as an AI system could be deployed to recognise patterns in attacks, the same technology can also be used to recognise patterns and modes of defence to help attackers get around these walls.

The technology could also be used to make “traditional” cyber attacks, like phishing emails, much more convincing by using AI to trawl through a user’s history to learn about them.

Attackers could then use this information to craft more personalised emails to trick users into handing over personal information.

Cyber criminals are also using AI techniques to improve the evasiveness of malware, using the technology to identify hardware configurations and check if a human is using a particular machine at a given time.

An AI system called DeepLocker – developed by researchers at IBM Research – was trained to execute its malware payload only when it reached a specific target, while using layers of concealment to stop security tools from identifying the software as a threat.

Far from being a silver bullet of defence, artificial intelligence resembles more of a double-edged sword for businesses looking for a better form of cyber security.

It is important to understand that while on the one hand AI can help to improve defences, businesses that are not careful can just as easily find themselves falling on the sharp edge of their own systems.

To find out more about the kind of threats facing small and medium sized businesses today, and what IT bosses think about the state of the cyber security market, download our “Assessing the struggle of UK SMBs against cyber criminals” report by clicking the button below:

{{cta(‘5878fad6-d3fb-460e-a34a-703d0b12e17a’)}}
