Jan 19, 2022

AI in Security: A Spear and Shield Inevitability

Late last year, we gave a preview of the six biggest topics we think will be at the forefront of cybersecurity in 2022. Earlier, we talked in depth about what we see happening in the threat landscape this year; today we’ll dive deeper into the topic of AI in security.

It sometimes seems that the technology industry runs on hype – Gartner, after all, coined the term ‘hype cycle’ to reflect the lifespan of a technology from initial development through to maturity. In the cybersecurity world, artificial intelligence is getting more hype right now than any other technology. Just about every new product in this space claims to have at least some elements of AI, machine learning or statistical modeling, all closely related concepts.

The Promise of AI in Cybersecurity

Hillstone has been a pioneer in the use of AI and ML in security products, and to us the hype is warranted. AI techniques show great promise in the areas of threat detection, analysis, hunting and response. AI’s ability to apply advanced analysis and logic-based techniques can relieve a great deal of the burden on security admins and allow them to take reasoned, effective actions in response to attacks and threats.

User and Entity Behavior Analytics (UEBA), for example, can help detect malicious insiders as well as hostile external attackers infiltrating the network and its assets. Network traffic analysis is another area where AI can shine; the volume of network traffic is typically massive, and conducting a thorough, ongoing analysis by human effort alone would be difficult, if not impossible.
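To make this concrete, here is a minimal sketch of the behavior-based anomaly detection that UEBA and network traffic analysis rely on, using scikit-learn’s IsolationForest. The per-user features and all of the numbers are hypothetical, chosen only to illustrate the idea; they are not drawn from any Hillstone product.

```python
# Minimal sketch of behavior-based anomaly detection (UEBA-style).
# Feature names and values are illustrative assumptions, not real telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-user features: logins per hour, MB uploaded,
# distinct hosts contacted, off-hours activity ratio.
rng = np.random.default_rng(0)
baseline = rng.normal(loc=[5, 200, 3, 0.1], scale=[1, 50, 1, 0.05], size=(1000, 4))

# Fit a model of "normal" behavior from the baseline data.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(baseline)

# A user suddenly uploading far more data to many new hosts at odd hours.
suspicious = np.array([[40, 5000, 60, 0.9]])
print(model.predict(suspicious))            # -1 flags an anomaly for analyst review
print(model.decision_function(suspicious))  # lower score = more anomalous
```

The point is simply that a model fitted on normal activity can surface the rare sessions that deviate from it, which an analyst can then investigate.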

Advanced AI and ML techniques like big data analysis can help detect malware and advanced threats with a high degree of accuracy, including mutations and variants. And finally, AI and ML can improve security automation by codifying many routine and repetitive tasks into playbooks (or workflows), allowing SOC staff to focus on threat resolution and other mission-critical efforts.
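As an illustration of what codifying routine tasks into a playbook can look like, the sketch below chains a few hypothetical response steps (enrichment, containment, notification) into a workflow that runs automatically when an alert fires. The step names and functions are placeholders, not a specific SOAR or Hillstone API.

```python
# Toy playbook: a list of repeatable response steps run in order on each alert.
# All step implementations are hypothetical placeholders.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Alert:
    src_ip: str
    severity: str

def enrich_with_threat_intel(alert: Alert) -> Alert:
    # Placeholder: look up the source IP against threat intelligence feeds.
    print(f"enriching {alert.src_ip}")
    return alert

def quarantine_host(alert: Alert) -> Alert:
    # Placeholder: push a block rule to the firewall for high-severity alerts.
    if alert.severity == "high":
        print(f"quarantining {alert.src_ip}")
    return alert

def notify_soc(alert: Alert) -> Alert:
    # Placeholder: open a ticket so an analyst reviews the automated action.
    print(f"ticket opened for {alert.src_ip}")
    return alert

PLAYBOOK: List[Callable[[Alert], Alert]] = [
    enrich_with_threat_intel,
    quarantine_host,
    notify_soc,
]

def run_playbook(alert: Alert) -> None:
    for step in PLAYBOOK:
        alert = step(alert)

run_playbook(Alert(src_ip="203.0.113.7", severity="high"))
```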

…But Some AI Claims Fall Short

Despite the promise that AI and ML show for cybersecurity, some of the claims need to be taken with a grain of salt. Claims of AGI, or artificial general intelligence, are one area of concern: though a number of organizations are working toward it, in reality we’re not even close at this point. You may see claims of AGI, or of standalone, hands-off AI solutions that require zero human input. We suggest proceeding with caution on anything that purports to completely replace the wondrous capabilities of the human mind.

Another area of concern is AI-augmented security technologies that can alert on potential threats, but without the forensic or causal supporting content to provide context for security admins. This leaves the already overburdened security team with even more metadata to comb through in an attempt to discover whether a threat is real – or not.

AI in the Hands of Hackers

As we mentioned in our initial blog, attackers have long since figured out that if they buy a given AI-based detection technology, they can train their malware to avoid discovery by that particular device. Similarly, they’ve learned how to inject malicious code via phishing and other attacks that will then ‘poison’ the data used by AI-based systems to detect anomalous behaviors. In essence, poisoned data can teach AI engines that odd or aberrant behaviors are normal.
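A toy example helps show why poisoned data is so effective. In the sketch below, a simple statistical baseline (flag anything more than three standard deviations from the mean) catches an exfiltration-sized upload when it learns from clean data, but misses the same upload once attacker-injected records have shifted the learned notion of normal. The feature and all numbers are entirely synthetic.

```python
# Toy illustration of training-data poisoning: if an attacker can slip their
# own activity into the data used to learn "normal", the baseline drifts and
# the same behavior stops being flagged. All values are synthetic.
import numpy as np

rng = np.random.default_rng(1)
clean = rng.normal(loc=200.0, scale=20.0, size=1000)    # e.g. MB uploaded per day
poison = rng.normal(loc=2000.0, scale=100.0, size=200)  # attacker-injected records

def is_anomalous(value, baseline, z_threshold=3.0):
    # Flag values more than z_threshold standard deviations from the mean.
    mean, std = baseline.mean(), baseline.std()
    return abs(value - mean) / std > z_threshold

probe = 2000.0  # exfiltration-sized upload the attacker wants to hide

print(is_anomalous(probe, clean))                            # True: flagged
print(is_anomalous(probe, np.concatenate([clean, poison])))  # False: baseline poisoned
```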

Malicious actors are also using AI to create backdoors and to gauge, for example, which vulnerability within a network offers the best vector for attack.

Does AI Match up to the Hype?

In a word: absolutely, but with qualifications. If you look back at the use cases we gave earlier in this post, you’ll note that each of them focuses on behavior-based detection. Regardless of its evasive tactics, malware almost always exhibits certain behaviors, often subtle ones, that can be discerned by crunching the massive amounts of data generated in a typical network.

And that’s precisely what AI and ML are great at – doing things like digesting, correlating and analyzing enormous amounts of data to pick out small nuances, or indicators of threat, that can then be presented to the security team for further investigation. AI can help streamline processes and perform repetitive tasks, intelligently reduce false positives and lessen the workload of overburdened IT and security staff.

We’ve been leveraging AI and ML-based techniques since the early 2010s, and our suite of products integrates these techniques to enhance correlation analysis, behavior detection, and intelligent detection and prevention. Contact your local Hillstone representative or authorized reseller today to start a conversation!