Triage Attacks More Efficiently With AI for Cybersecurity
Think of cybersecurity like your personal health: basic cyber hygiene foils most cyber attacks. With a shortage of cyber experts, just as in medicine, finding faster and better ways to train practitioners on real-world scenarios is key. At the same time, artificial intelligence (AI) for cybersecurity can improve a team’s response by triaging threats on its own.
The medical field resembles cybersecurity in other ways, too. Medicine’s process of studying and diagnosing the patient is often well-structured, but siloed. Digital defense experts know the playbook of attacks well, just as doctors know the symptoms and signs of most diseases.
What’s different is the rate of fire. In medicine, under most conditions doctors have time to triage, and the number of patients does not overwhelm them. In cybersecurity, data constantly barrages analysts. Effective triage sets up a team for improved defenses.
This is why we are researching new ways of using AI for cybersecurity and deep learning tools, so developers can use both to build effective models for threat triage. Right now, there is a big gap in the AI defense landscape when it comes to true behavior-based threat analysis.
A handful of agent-based AI threat analysis platforms do exist. However, they may be limited to the major operating system platforms. This fails to cover hosts running less common and older, but still crucial, platforms. For example, they may not be able to work with the Unix family (HP-UX, AIX and Solaris) or consumer devices that have network access but are not yet considered inside-the-perimeter devices. Yet AI can only handle threat triage well if it analyzes behavior across all relevant telemetry, regardless of host.
During threat disposition, an analyst or automated system needs to quickly assign an alert to one of three statuses:

- Benign: behavior that is likely safe and requires no further investigation.
- Suspicious: behavior that may or may not be dangerous and requires further study to tell whether it’s safe.
- Malicious: behavior that indicates an attack, requiring action right away.

Over time, these exercises may lead to policy changes, such as adjustments to security controls and stances.
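A minimal sketch of this three-way disposition, assuming a model emits a threat score between 0 and 1. The status names and cutoff values below are illustrative placeholders, not settings from any specific product:

```python
from enum import Enum

class Disposition(Enum):
    BENIGN = "benign"          # likely safe; no further investigation
    SUSPICIOUS = "suspicious"  # unclear; escalate for further study
    MALICIOUS = "malicious"    # attack underway; act right away

def triage(threat_score: float) -> Disposition:
    """Map a model's threat score (0.0 to 1.0) to one of three statuses.
    The 0.2 and 0.8 cutoffs are hypothetical, untuned thresholds."""
    if threat_score < 0.2:
        return Disposition.BENIGN
    if threat_score < 0.8:
        return Disposition.SUSPICIOUS
    return Disposition.MALICIOUS
```

In practice the thresholds would be tuned against labeled historical alerts, since moving either cutoff trades analyst workload against missed attacks.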
One major hurdle for AI and cybersecurity in threat triage is the volume and variety of training data. Deep learning systems need high volumes of data to generate good results. In the case of cyber triage, humans must guide deep learning systems in order to generate smart decisions. That’s because so many of these decisions are still judgment calls by nature. Context and history drive a lot of the decisions made in threat triage, and humans need to train the AI to convey how those decisions are made.
Cyber attack simulation systems can help create more training data, enabling AI for cybersecurity to work effectively.
Such a system enables faster training without needing actual live alerts. By creating a higher volume of human-labeled alerts, the AI can acquire data at 10 to 20 times the rate possible using organic data alone. Equally important, cyber attacks tend to come in waves. Right now, ransomware attacks dominate; in the past, there were more database breaches or supply chain compromise attempts. Live data does not tell the whole story, so simulating a wide range of real-world attack types helps create balanced training coverage across attack types that live data underrepresents.
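The balance point above can be illustrated with a toy generator. The attack categories, field names and value ranges here are hypothetical; a real simulator would emit full telemetry rather than two numeric features:

```python
import random

# Hypothetical attack categories, including a benign class for contrast.
ATTACK_TYPES = ["ransomware", "database_breach", "supply_chain", "benign"]

def simulate_alerts(n: int, seed: int = 0) -> list[dict]:
    """Generate n synthetic, pre-labeled alerts with balanced coverage
    across attack types, rather than mirroring whichever attack wave
    happens to dominate live traffic right now."""
    rng = random.Random(seed)  # seeded for reproducible training sets
    alerts = []
    for i in range(n):
        label = ATTACK_TYPES[i % len(ATTACK_TYPES)]  # round-robin for balance
        alerts.append({
            "id": i,
            "label": label,  # the human-style ground-truth annotation
            "bytes_out": rng.randint(1_000, 1_000_000),
            "process_count": rng.randint(1, 50),
        })
    return alerts
```

Round-robin sampling is the simplest balancing choice; a production simulator would instead weight classes by how scarce they are in organic data.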
In addition, AI for cybersecurity models are able to simulate both single and composite attack types. To respond to a single-machine attack, you need to look at telemetry, endpoint detection and response data and status on a single machine or a group of similar machines that attackers are hitting in the same way. A composite attack, on the other hand, is one in which the attacker targets a cloud host, a device or hardware host and/or a network agent. The attackers may exploit one, two or all three of these attack paths. Or they may breach one of the hosts and traverse the network, connecting hosts together before reaching back out to an external command and control server.
To train the AI model, you need to simulate as many attack path options as possible and do so quickly. Deep learning can study all of the inbound attack path data fed by the human analysts and begin to recognize attack patterns.
A logical end result of AI for cybersecurity would be to move beyond automated triage to automated remediation and response. This would trigger only when confidence that an attack is underway is high. For example, the threat disposition engine could trigger an action if it detects the signature of a known attack type.
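That confidence gate can be sketched as a simple policy function. The threshold value, field name and action labels are assumptions for illustration only:

```python
def auto_respond(alert: dict, confidence: float, threshold: float = 0.95) -> str:
    """Trigger automated remediation only above a high confidence bar
    AND a known attack signature; everything else goes to a human.
    The 0.95 threshold and action names are illustrative placeholders."""
    if confidence >= threshold and alert.get("known_signature"):
        return "isolate_host"      # high confidence plus known signature
    return "escalate_to_analyst"   # avoid acting on possible false positives
```

Requiring both conditions keeps the default path human-in-the-loop, which matches the caution about false positives below.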
It’s key to avoid false positives. Acting on one could cause operational issues by abruptly shutting down production systems, stalling service delivery and degrading customer experience. For this reason, moving to automated attack response requires rock-solid confidence in the AI model, paired with rapid escalation to human analysts. Once you can trust AI for cybersecurity to be accurate, you can change the game by reducing incident response times. This also requires deep integration with security orchestration, automation and response (SOAR) and security information and event management (SIEM) systems to ensure a closed-loop response.
This appears to be the future of threat triage, and AI for cybersecurity can make a meaningful difference in improving broad security posture.