One difficulty that all cybersecurity teams struggle with is uncertainty. As a cyberattack unfolds, how do you know which mitigations to work on first, which systems are at greatest risk of inflicting business losses, and how to fix root causes while responders are still grappling with the incident?
“Just as military commanders contend with the fog of war, cybersecurity professionals face their own version of a cloudy environment while defending organizations. The information moves quickly and it’s not always evident what the issue is and what the appropriate response should be.”
—Derek Fisher
Fisher said a mature security program's strategy for breaking through that fog currently rests on four pillars:
- Situational awareness of the environment provided by the telemetry from tools such as SIEM, IDS/IPS, endpoint protection platforms, and scanners
- Threat intelligence that helps the security team understand attackers' tactics, techniques, and procedures, as well as their targeting behavior, so that the team can plan vulnerability management, prioritizing the flaws most likely to be exploited soon
- Incident response planning (and testing) to provide decision clarity during moments of chaos
- Anomaly detection via machine learning to identify and respond to threats more quickly and accurately
While most mature programs have all of these components deeply embedded in their frameworks, many organizations still struggle to cut through the fog and make on-the-ground decisions that improve security outcomes. What they need is better application of data science.
Here are key challenges arising from the cybersecurity fog of war — and four ways to better apply data science.
SecOps’ data science challenges
Data streams into security operations centers from a multitude of security monitoring tools spread across the network and application stack. As Fisher noted in his Substack post, security teams need analytical force multipliers to deal with it all. The uncertainty that dogs a fast-moving incident can't be cleared up by staring at raw telemetry alone; teams need insights derived from that data to inform their decisions.
Two of Fisher's pillars, threat intelligence and ML-based anomaly detection, help enrich and contextualize the telemetry data, turning visibility into prioritized action.
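To make that pairing concrete, here is a minimal sketch of the kind of ML-based anomaly detection Fisher's fourth pillar points to, using scikit-learn's Isolation Forest to flag outliers in login telemetry. The features, values, and thresholds are illustrative assumptions, not a reference implementation of any particular product.

```python
# Minimal sketch: flagging anomalous login telemetry with an Isolation Forest.
# Feature names, values, and thresholds are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-login features: [hour_of_day, failed_attempts, bytes_transferred_mb]
baseline = np.array([
    [9, 0, 12.0],
    [10, 1, 8.5],
    [14, 0, 20.1],
    [11, 0, 15.3],
    [16, 2, 9.8],
])

# Train on "normal" history, then score new events; a prediction of -1 means anomalous.
model = IsolationForest(contamination=0.1, random_state=42).fit(baseline)

new_events = np.array([
    [10, 1, 14.0],   # looks like baseline behavior
    [3, 25, 950.0],  # off-hours login, many failures, large transfer
])
for event, label in zip(new_events, model.predict(new_events)):
    status = "ANOMALY" if label == -1 else "normal"
    print(f"{event} -> {status}")
```

In practice, the value comes from training on well-normalized telemetry and routing the anomaly flags into the same queue analysts already triage, so the model's output becomes prioritized action rather than more noise.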
The problem is that there's no single set of telemetry data sources that will work for every organization, no consensus on data formatting, no singular way to analyze security data, and no perfect source of threat intel that can consistently transform everything into meaningful insights. Also, the bad guys regularly game the system to hide themselves in all of the security noise.
Fisher's four pillars are a great start at gaining clarity through the fog — but there’s also a lot of data science and data management work that needs to happen at the practitioner and vendor levels to get the pillars' tools right, said Balázs Greksza, threat response lead at Ontinue.
Greksza said selecting appropriate, high-quality data sources is key to helping leaders come to the right conclusions in the heat of the moment. “Security is a ‘right information and intelligence at the right time’ problem," he said.
Here are four additional pillars that can help your team use data science to cut through the fog during security incidents.
1. More effective data management
SecOps teams need to pay close attention to data selection, data normalization, and data quality, not only to optimize anomaly detection but also to improve their automated decision making. Poor data is always going to be a garbage-in, garbage-out problem, no matter what whiz-bang security tools are on hand to automate and orchestrate security decisions in the heat of the moment.
Experienced SOC responders increasingly see problems with the kitchen-sink approach to adding new data sources. Instead, security teams need to take care to drive data selection with very clear objectives and requirements, Greksza said, and then layer that with solid data normalization practices.
“Data integrations should serve a purpose and have a perceived value beforehand to help prioritize the meaningful ones. Following through data normalization and quality, integration and contextualization are also impacting both the effectiveness and overall capability of security operations teams.”
—Balázs Greksza
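To make that normalization work tangible, here is a minimal sketch that maps events from two hypothetical tools, an EDR agent and a firewall, into one common schema before analysis. The field names and mappings are assumptions for illustration, not an established standard.

```python
# Minimal sketch: normalizing events from two hypothetical tools into a common schema.
# Field names and mappings are assumptions for illustration only.
from datetime import datetime, timezone

COMMON_FIELDS = ("timestamp", "source_ip", "user", "action")

def normalize_edr_event(raw: dict) -> dict:
    """Map a hypothetical EDR event into the common schema."""
    return {
        "timestamp": raw["event_time"],          # already ISO 8601
        "source_ip": raw["host_ip"],
        "user": raw["account_name"].lower(),
        "action": raw["activity"],
    }

def normalize_firewall_event(raw: dict) -> dict:
    """Map a hypothetical firewall event into the common schema."""
    return {
        "timestamp": datetime.fromtimestamp(raw["epoch"], tz=timezone.utc).isoformat(),
        "source_ip": raw["src"],
        "user": raw.get("user", "unknown").lower(),
        "action": raw["verdict"],
    }

events = [
    normalize_edr_event({"event_time": "2024-05-01T08:12:03Z", "host_ip": "10.0.0.5",
                         "account_name": "JSmith", "activity": "process_start"}),
    normalize_firewall_event({"epoch": 1714550000, "src": "203.0.113.7", "verdict": "deny"}),
]
for e in events:
    assert set(e) == set(COMMON_FIELDS)   # every record carries the same fields
    print(e)
```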
While SecOps teams depend on quality information, they all face data quality problems, and that's something that vendors need to address, said Shane Shook, venture partner at Forgepoint Capital. Automation for ingesting data for analysis is widely available, but automation for data quality is still scarce, he said.
"Simply excluding 'unfit' records in analytics pipelines leads to false negatives that can critically impede sequential analyses such as the MITRE ATT&CK framework. While focus on derivative analytics continues to advance with ML and generative AI, more fundamental focus on data completeness and quality is important.”
—Shane Shook
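One way to act on Shook's concern without silently excluding "unfit" records is to flag and quarantine incomplete events so downstream, sequential analyses know where the gaps are. The sketch below assumes hypothetical field names and is meant only as an illustration of the idea.

```python
# Minimal sketch: flag incomplete records instead of silently excluding them,
# so sequential analyses can account for gaps. Field names are illustrative.
REQUIRED = {"timestamp", "source_ip", "action"}

def triage_record(record: dict) -> tuple[dict, list[str]]:
    """Return the record plus a list of quality issues found."""
    issues = [f"missing:{field}" for field in REQUIRED if not record.get(field)]
    return record, issues

records = [
    {"timestamp": "2024-05-01T08:12:03Z", "source_ip": "10.0.0.5", "action": "login"},
    {"timestamp": "2024-05-01T08:15:44Z", "source_ip": "", "action": "file_write"},
]

clean, quarantined = [], []
for rec, issues in map(triage_record, records):
    (quarantined if issues else clean).append({**rec, "quality_issues": issues})

print(f"{len(clean)} clean, {len(quarantined)} quarantined for review")
```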
2. Better threat synthesis
Threat synthesis is an area ripe for automation, Shook said. ML is great for simple anomaly detection, he said, but security teams need to find ways to better embed AI into their threat modeling to arrive at more complete threat scenarios that include business context and a synthesis of threat intelligence information.
A recurring problem with threat intelligence management and related SecOps management is the lack of context describing the threat, Shook explained.
“The context of a threat detected by security operations is critical to assess the risk it represents to targeted functions of the organization. AI offers opportunities for modeling varied risk context and threat scenarios at scales and magnitudes that are currently impossible for security operations teams, who rely on manual analytics and interactive functional/operational team exercises.”
—Shane Shook
Improved synthesis could drive improvements in security playbooks for incident response, he said.
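As a rough illustration of that kind of synthesis, the sketch below blends a detection's threat-intel severity with hypothetical business context (asset criticality) to produce a single priority score. The asset tiers, weights, and technique IDs are assumptions for illustration, not a vetted risk model.

```python
# Minimal sketch: blending threat-intel severity with business context to rank findings.
# Asset tiers, weights, and scores are illustrative assumptions, not a vetted risk model.
from dataclasses import dataclass

ASSET_CRITICALITY = {"payment-db": 1.0, "hr-portal": 0.6, "dev-sandbox": 0.2}  # hypothetical tiers

@dataclass
class Finding:
    asset: str
    technique: str         # e.g., an ATT&CK technique ID
    intel_severity: float  # 0.0 - 1.0, from threat-intel enrichment

    def priority(self) -> float:
        """Weight intel severity by how critical the targeted asset is to the business."""
        return round(self.intel_severity * ASSET_CRITICALITY.get(self.asset, 0.3), 2)

findings = [
    Finding("dev-sandbox", "T1059", 0.9),
    Finding("payment-db", "T1110", 0.7),
]
for f in sorted(findings, key=Finding.priority, reverse=True):
    print(f.asset, f.technique, f.priority())
```

In this toy example, a moderate-severity finding against a critical asset outranks a high-severity finding against a sandbox, which is the sort of business-context ordering a response playbook can act on.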
3. Enhanced detection engineering
One of the issues with vendor-driven anomaly detection is that out-of-the-box tuning doesn’t specifically account for the unique threat landscape of each organization that’s using it, said Or Saya, cybersecurity architect for CardinalOps. “It is important to acknowledge that while AI detection rules in security products are valuable, they may not cover all scenarios,” Saya said.
He recommends that SecOps teams implement a detection engineering strategy for creating custom detection rules tailored to their environment, industry, and specific risks.
“These custom rules can enhance the precision of threat detection and response by addressing context-specific threats that may not be covered by generic AI rules."
—Or Saya
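As a minimal illustration of the kind of environment-specific rule Saya describes, the sketch below flags service-account logins coming from outside an organization's expected network ranges. The account names and address ranges are hypothetical.

```python
# Minimal sketch of a custom detection rule: service-account logins from outside
# the organization's expected ranges. Account names and networks are hypothetical.
import ipaddress

SERVICE_ACCOUNTS = {"svc-backup", "svc-etl"}                  # hypothetical accounts
EXPECTED_NETWORKS = [ipaddress.ip_network("10.20.0.0/16")]    # hypothetical ranges

def rule_service_account_offnet(event: dict) -> bool:
    """Alert when a known service account authenticates from an unexpected network."""
    if event.get("user") not in SERVICE_ACCOUNTS:
        return False
    src = ipaddress.ip_address(event["source_ip"])
    return not any(src in net for net in EXPECTED_NETWORKS)

events = [
    {"user": "svc-backup", "source_ip": "10.20.4.12"},   # expected network: no alert
    {"user": "svc-etl", "source_ip": "198.51.100.23"},   # off-network: alert
]
for e in events:
    if rule_service_account_offnet(e):
        print("ALERT:", e)
```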
4. Layering data science skills into SecOps teams
Finally, SecOps teams need to think long term about their data science skills mix. Many security departments depend solely on their vendors to do the heavy lifting of data science, but achieving true excellence in data management and analytics is going to require that internal staffers have at least basic levels of data science competence.
Greksza recommends hiring lifelong learners who are willing to bone up on data science principles, and suggests supporting that learning with job rotation and cross-training.
“Build cross-functional skills in your teams. Data scientists with security exposure and security experts with an interest in data visualization and statistics excel at delivering value together.”
—Balázs Greksza
A solid foundation is key
Fisher noted in his think piece that security teams will always be balancing the need to act swiftly against the risk of misjudging a situation, because iron-clad information about what is truly occurring during an incident is rarely available. With core tooling fundamentals in place, along with the data science pillars above, your organization can have greater confidence in the insights its security data provides and cut through the fog of cybersecurity war.
Keep learning
- Get up to speed on securing AI/ML with our white paper: AI Is the Supply Chain. Plus: See RL's research on nullifAI and join our Webinar to learn how RL discovered the novel threat.
- Upgrade your software security posture with RL's new essential guide, Software Supply Chain Security for Dummies.
- Learn how commercial software risk is under-addressed: Download the white paper — and see our related Webinar for more insights.
- Explore RL's Spectra suite: Spectra Assure for software supply chain security, Spectra Detect for scalable file analysis, Spectra Analyze for malware analysis and threat hunting, and Spectra Intelligence for reputation data and intelligence.