There’s no doubting that artificial intelligence (AI) is on the rise. It’s already supporting businesses with their marketing strategies, being used in driverless cars and recommending movies for us to watch.
It’s also expected to grow even further in the coming years. According to Gartner, AI technologies will be found in almost all new software, products and services by 2020. And according to Constellation Research, the AI market will surpass $100 billion by 2025. Businesses will be making the most of AI to complement their data analysis, automate their processes and anticipate future trends.
But the question remains: can businesses adopt AI and remain protected from cyber threats?
The Internet of Things (IoT) means everyday objects are generating more traffic, collecting more data and opening up more entry points for attack than ever before. The unfortunate result, compounded by ever more integrated networks, is that today’s cybercriminals have a plethora of entry points for bringing down an organization.
The truth is that as businesses grow smarter with AI, so do their attackers. Already, malware can infiltrate a system, collect and transmit data, and remain undetected for days. But with AI, an attack has the ability to adapt and learn how to improve its effectiveness with every moment it goes unnoticed.
It’s worth noting that AI refers to the broad concept of machines being able to mimic human cognitive functions: detecting patterns, spotting anomalies, classifying data and grouping information. Machine learning, meanwhile, is an application of AI – given enough data, machines can learn to solve problems by themselves.
In an ideal world, AI and machine learning would be able to spot and shut down an attack before humans need to do anything. After all, they can detect anomalous behavior and deter security intrusions around the clock.
However, this isn’t always the case. Machine learning requires feedback when determining what is ‘good’ or ‘bad’. But malicious attacks often seem unthreatening at the outset and slip past AI’s algorithms. What’s more, AI and machine learning may flag deviations in patterns that are not actually attacks, wasting security resources on false positives. And because machine learning depends on data to learn, attackers who tamper with that data can turn AI against itself.
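Both failure modes come down to the same limitation: a statistical model only measures deviation from a learned baseline, not intent. A minimal sketch (with hypothetical traffic numbers and thresholds, not any particular product’s algorithm) shows how a benign spike can trigger an alert while a subtle attack stays under the radar:

```python
from statistics import mean, stdev

def anomaly_scores(baseline, observations):
    """Z-score of each observation against the learned baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [(x - mu) / sigma for x in observations]

# Baseline: typical requests-per-minute learned from "good" traffic.
baseline = [100, 105, 98, 102, 99, 101, 103, 97]

# New traffic: a marketing campaign (benign) and a slow data
# exfiltration (malicious, deliberately kept close to normal).
observations = {"campaign_spike": 180, "slow_exfiltration": 106}

for label, value in observations.items():
    score = anomaly_scores(baseline, [value])[0]
    verdict = "ALERT" if abs(score) > 3 else "ok"
    print(f"{label}: z={score:.1f} -> {verdict}")
```

The benign spike scores far above the threshold (a false positive that consumes analyst time), while the attack’s score stays below it (a false negative) – exactly the trade-off that keeps humans in the loop.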
Given its flaws, AI should not be considered an adequate replacement for human surveillance – at least not in the immediate future. Every technology has limits. Even with AI, human knowledge will remain vital to understanding how to react to a threat and the depth of the issue at hand.
A hybrid approach, where routine processes are automated while the rest remain the responsibility of humans, is the most logical option. AI can share some of the burden of surveillance, taking mundane chores out of human hands.
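In practice, the hybrid approach often takes the shape of confidence-based triage: automation resolves the clear-cut cases at either extreme, and everything ambiguous is escalated to an analyst. A minimal sketch (the thresholds and alert fields are illustrative assumptions, not a reference to any specific product):

```python
def triage(alert):
    """Route an alert by model confidence; thresholds are hypothetical."""
    if alert["confidence"] >= 0.95:
        return "auto-block"           # clear-cut threat: automate the response
    if alert["confidence"] <= 0.10:
        return "auto-dismiss"         # clearly benign noise: automate the cleanup
    return "escalate-to-analyst"      # ambiguous middle ground: a human decides

alerts = [
    {"id": 1, "confidence": 0.99},    # e.g. known malware signature
    {"id": 2, "confidence": 0.55},    # e.g. unusual login pattern
    {"id": 3, "confidence": 0.02},    # e.g. routine backup traffic
]
for a in alerts:
    print(a["id"], triage(a))
```

The design choice is that only the mundane, high-certainty work is taken off human hands; the judgment calls – the cases where context and intent matter – stay with people.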
The AI-paved path ahead is an efficient one. But CIOs need to ask the right questions to ensure they don’t get swept up in the AI hype. Any security solution claiming absolute protection should be treated with caution. While the potential is there for security to become more proactive than reactive, a dual approach is definitely needed.
Human expertise along with AI technology can achieve better results than either one alone.
For more information on how businesses can adopt AI technology, download our whitepaper.