Speaking at the RSAC 365 Virtual Summit, Tomasz Bania, cyber defense manager at Dolby, explored how organizations can transition from manually performing the security basics to implementing full end-to-end security automation.
Bania explained that the amount of work security teams have to handle is increasing rapidly, without the tooling or staffing to keep up.
Furthermore, the volume of alerts received by security teams is rising, “without a matching growth in the skilled technical resources that are available to us,” Bania continued.
Security automation, he argued, offers “an opportunity to automate the monotonous and bring things that are much more interesting to them [security professionals] so that they are more engaged and feel more valued within the organization.”
When it comes to measuring an organization’s automation capabilities, Bania suggested a five-level framework:
- Manual processing
- Limited orchestration and no automation
- Significant orchestration and some automation
- Full orchestration and significant automation
- End-to-End SOAR implementation
The fifth level is the goal when it comes to achieving full-scale automated security, Bania said, allowing organizations to leverage automation throughout the entire security process, from identification to automated handling and reporting.
To achieve such a holistically automated security posture, Bania advised organizations to follow an incremental roadmap, starting with actions to complete in the first 30 days.
“Over the next 30 days, validate your existing manual IR [incident response] processes,” he said. “If you’re holding this as tribal knowledge, you might want to start documenting what all those processes are.”
Once that is achieved (likely around the 90-day mark), the next step is to “develop your single or heuristic scoring algorithm,” tailoring it to what matters most in your organization, Bania said.
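Bania did not walk through an implementation, but a heuristic scoring algorithm of the kind he describes can start as little more than a weighted sum of alert attributes. The Python sketch below is purely illustrative: the field names, weights and 0–100 scale are assumptions that would need tuning to whatever matters most in a given organization.

```python
# Illustrative heuristic alert scoring: a weighted sum of normalized attributes.
# Field names and weights are hypothetical, not Bania's actual model.
WEIGHTS = {
    "asset_criticality": 0.4,   # how important the affected system is
    "threat_confidence": 0.35,  # confidence reported by the detection source
    "user_privilege": 0.25,     # privilege level of the account involved
}

def score_alert(alert: dict) -> float:
    """Return a 0-100 priority score from attributes normalized to the 0-1 range."""
    weighted = sum(WEIGHTS[key] * float(alert.get(key, 0.0)) for key in WEIGHTS)
    return round(100 * weighted, 1)

# Example: a high-value asset, a moderately confident detection, an admin account.
print(score_alert({"asset_criticality": 0.9,
                   "threat_confidence": 0.7,
                   "user_privilege": 1.0}))  # 85.5
```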
Next, between 90 and 180 days, “validate your scoring efficacy with manual analysis” and “move forward to developing your first machine learning model.
“Once you’ve developed your first machine learning model, one of the very important things you’re going to want to do [at the 180+ day stage] is conduct a back test of that model compared to your pre-automation datasets if you have them available.”
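Bania did not detail how such a back test would be run, but conceptually it means replaying historical, analyst-handled alerts through the new model and comparing its verdicts with the decisions analysts actually made. The sketch below assumes a simple data layout (a score threshold and an “escalate”/“close” analyst verdict on each alert) and is only a rough illustration of that comparison.

```python
from typing import Callable, Iterable

def back_test(model: Callable[[dict], float],
              historical_alerts: Iterable[dict],
              threshold: float = 50.0) -> dict:
    """Replay pre-automation alerts through the model and compare with analyst verdicts.

    Each alert dict is assumed to carry an "analyst_verdict" of "escalate" or "close".
    """
    tp = fp = tn = fn = 0
    for alert in historical_alerts:
        predicted_escalate = model(alert) >= threshold
        actually_escalated = alert["analyst_verdict"] == "escalate"
        if predicted_escalate and actually_escalated:
            tp += 1   # model and analyst both escalated
        elif predicted_escalate:
            fp += 1   # model escalated an alert the analyst closed
        elif actually_escalated:
            fn += 1   # model missed an alert the analyst escalated
        else:
            tn += 1   # both agreed the alert could be closed
    total = tp + fp + tn + fn
    return {
        "agreement_rate": (tp + tn) / total if total else 0.0,
        "missed_escalations": fn,
        "false_escalations": fp,
    }
```

Run against the earlier scoring sketch, `back_test(score_alert, historical_alerts)` would show how often the automated scoring agrees with past analyst decisions before it is trusted in production.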
To conclude, Bania said: “The earlier you can start documenting alerts, events and metadata for future analysis, the better chance you have of developing this machine learning model quickly and effectively.”
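Bania did not prescribe a format, but capturing alerts, events and their metadata in a consistent, machine-readable way is the prerequisite for the modelling steps above. One hypothetical approach, sketched below, is to append each alert and any analyst verdict as a JSON line so the records can serve as labelled training data later.

```python
import json
from datetime import datetime, timezone
from typing import Optional

def record_alert(path: str, source: str, category: str, severity: str,
                 raw_event: dict, analyst_verdict: Optional[str] = None) -> None:
    """Append one alert, its metadata and any analyst decision as a JSON line."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "source": source,                    # e.g. EDR, IDS, cloud audit log
        "category": category,                # e.g. "phishing", "malware"
        "severity": severity,
        "raw_event": raw_event,              # original payload for later feature extraction
        "analyst_verdict": analyst_verdict,  # becomes the training label later
    }
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")
```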