After years of hype around AI and machine learning, skepticism and a focus on practical applications of the technology are now taking center stage. In the security industry, this was abundantly clear at the recent RSA Conference, where 45,000 people and a thousand vendors descended on San Francisco to discuss industry challenges and debate the best solutions. Despite the many voices contending for attention at the show, there was little to no dispute that the cybersecurity skills gap continues to be one of the industry's biggest challenges. Here's what is next for AI.

An (ISC)² report released during the conference says there are 2.93 million cybersecurity positions open and unfilled around the world. ISACA agreed, finding in another study that nearly 70 percent of organizations report their cybersecurity teams are understaffed.

AI and automation solutions have been put forward by many as the remedy to this cyber skills ailment. Such a solution would be exactly the kind of practical use case we are looking for from AI. However, it's clear that, despite the wealth of options on the market today, something in this answer isn't working.

A report from Accenture confirms that security headaches continue to grow and become more expensive. The average cost of cybercrime rose by more than $1 million last year to reach $13 million per firm.

Where are we going wrong?

An Illustration of the Practical AI Problem

As a longtime business leader and technologist, I see examples every day of where AI is being applied well, and where there are gaps in our workflows that should be ripe targets for automation.

In security, one such example of AI not being applied where it should be is the challenge of detecting traffic to malicious domains.

It may come as a surprise, but most analysts today still uncover suspicious domains visually, by combing through a long list of domains for anything unusual that sticks out. They improve with time and deeper familiarity, but it's still a very manual process to uncover malicious and suspicious domains. In the past, this was not an issue. Older attacker techniques often used random domain generators or bizarre TLDs that made them relatively easy to spot.
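To illustrate why those older techniques were easy to spot: randomly generated domain labels tend to have unusually high character entropy compared with human-chosen names. The sketch below is a minimal, illustrative check along those lines; the length and entropy thresholds are assumptions, not tuned values from any product.

```python
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    """Bits per character of the label's character distribution."""
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_generated(domain: str, threshold: float = 3.5) -> bool:
    """Flag DGA-style names: long, high-entropy second-level labels.

    The length cutoff (8) and entropy threshold (3.5) are illustrative.
    """
    label = domain.split(".")[0]
    return len(label) >= 8 and shannon_entropy(label) >= threshold

print(looks_generated("google.com"))             # prints False
print(looks_generated("xk8qzj3vw9ytplhd.info"))  # prints True
```

A check like this catches the old random-generator style of domain, which is exactly why attackers moved on to the harder-to-spot techniques described next.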

The proliferation of URL shorteners and alternate TLDs has made the task of spotting these newer attacks exponentially more challenging, if not impossible, today.

Globalization means that we can’t just look at country code extensions like .cn and know they’re bad as we could in the good old days. We’ve even seen more clever techniques, such as phishing attacks that go through a legitimate site like Google Translate to hide the real domain of their sites, further compounding this challenge.

Long story short, smarter attackers are continually finding new ways to disguise and mask their malicious domains, making them even more challenging to spot. This growing difficulty of identifying malicious sites and spyware creates an ever greater dependence on the manual work of security teams, who are being leaned on especially hard.

Identification of both malicious domains and perfectly legitimate domains with services that can be exploited is a perfect example of the kind of problem a machine should be used to solve.

There’s little reason why a high-volume, repetitive task like this should still be left to humans. As an industry, we’re making progress with heuristics, which generally do a better job than machine learning at exposing malicious domains today. But there is still room to improve as AI leverages more contextual information, such as the entities communicating and the uniqueness of the communication.
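As a rough illustration of the heuristic approach described above, several simple per-domain signals can be combined into a single suspicion score. Everything here is a toy sketch: the feature weights, thresholds, and TLD list are hypothetical assumptions for demonstration, not a production model or any vendor's actual method.

```python
# Illustrative only: the weights, cutoffs, and TLD set below are
# assumptions made for this sketch, not authoritative values.
SUSPECT_TLDS = {"zip", "top", "xyz"}  # hypothetical example list

def suspicion_score(domain: str) -> float:
    """Combine simple lexical signals into a rough 0-to-1 score."""
    labels = domain.lower().split(".")
    tld = labels[-1]
    sld = labels[-2] if len(labels) > 1 else labels[0]
    score = 0.0
    if tld in SUSPECT_TLDS:
        score += 0.4                        # uncommon/abused TLD
    if len(sld) > 15:
        score += 0.2                        # unusually long label
    if sum(ch.isdigit() for ch in sld) / max(len(sld), 1) > 0.3:
        score += 0.2                        # digit-heavy label
    if sld.count("-") >= 2:
        score += 0.2                        # many hyphens, common in phishing
    return score

print(suspicion_score("example.com"))                   # prints 0.0
print(suspicion_score("secure-login-update123456.xyz")) # high score
```

Real systems would add the contextual signals mentioned above (who is communicating, how unique the traffic is), but even this toy version shows why the task belongs to a machine rather than an analyst's eyes.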

AI: What is It Good For?

We collectively expect that certain use cases, like the domain problem illustrated here, will be resolved somewhat organically as technologies improve over time. The problem is that practical applications of AI like this, i.e., tools built to address specific and identifiable use cases, are few and far between.

Plenty of solutions say they deploy AI and machine learning to address more significant industry issues like “analyst fatigue” and “the cybersecurity skills gap” (or insert your industry’s favorite trend topics).

Trying to be the best at everything ultimately just makes us specialists at nothing. It’s a trap that’s far too easy to fall into for businesses deploying on-trend technologies. What AI is really good for today includes:

  • High-volume, repetitive tasks.
  • Complex calculations and correlations that involve many sources and considerations.
  • Analysis that shouldn’t be done manually due to other factors such as privacy or security.

For startups and vendors, these guidelines can help steer technology development and deployments moving forward. End users and prospective investors, meanwhile, should evaluate AI solutions with a critical eye toward the actual customer problems and use cases they solve. Using these lenses, we can begin trimming out the hype and continue making progress in the direction of practical AI.

Rahul Kashyap

President & CEO

Rahul Kashyap is president and CEO of advanced network traffic analysis company Awake Security and has a proven track record of establishing and building disruptive technologies. Prior to Awake, Rahul held key executive positions at Cylance, Bromium, and McAfee.