Information Technology Security

What Your SOC Manager Can’t Tell You


By Bryan Hendricks, Director – Enterprise Security Architect, Oportun

“The action variety of Exploit vulnerability is up to 7% of breaches this year…” I noticed this detail while reading page 31 of the recently published 2022 Verizon Data Breach Investigations Report. It didn’t compute. I read it again. “…is up to 7% of breaches this year, doubling from last year.” I didn’t fall out of my chair or anything, but I did stand up and walk around my desk a couple of times to clear my head. How could this be? We invest heavily in a vulnerability management program to prevent 7% of breaches…? Everyone I know invests heavily in vulnerability management. I scrolled back a few pages. Page 26 reported that 14% of system intrusion incidents “involved Desktop sharing software as one of the main vectors, followed by Email at 9%.” I felt better. Those numbers felt right to me, but how could I have been so far off on exploited vulnerabilities?

A week or two has passed. I’m confident vulnerability management is a key program that requires deliberate investment. I’m also reconciled to the fact that my instincts constantly need to be tuned to match the data. This is why I review things like the Verizon report every year. This is also why metrics are so important to an information security program. Well-designed metrics help me focus on the issues that matter.

One metric I believe is critical is also a metric I rarely hear mentioned. It is likely something your SOC Manager can’t tell you. Specifically, your SOC Manager probably can’t tell you how well their core detection capabilities are protecting the enterprise. This is likely a painful topic, but we are going to dive right in.

SOCs are often evaluated based on how well they detect and respond to the things they are looking for. Leaders rarely have enough information to assess whether SOCs are looking for the right things. Sadly, SOC personnel often aren’t even sure that they are monitoring the right things. Multiple issues contribute to this problem, including staffing and training. For the purpose of this article, I will naively pretend that staffing and other issues have been solved. I will then emphasize that the detection logic driving monitoring in your SOC was probably implemented by people who have since left the organization, and that the reasoning behind that logic was never documented. This is not a hypothetical problem. For some organizations, this is one step in a cycle that is slowly repeated.

Conventional wisdom suggests the solution to this problem is to document monitoring use cases. Friends would be amused if an architect like me didn’t invoke a framework. Behold: The MaGMa Use Case Framework was created by a handful of Dutch financial institutions to help clarify the risks and responses before implementing detection logic. Two things about the MaGMa Framework are particularly attractive to me. First, the framework incorporates a business context layer that allows non-SOC leaders to participate in the discussion. If the detection logic we implemented for a specific use case is failing, then we can describe the impact in business terms. Second, the MaGMa Framework dovetails nicely with the MITRE ATT&CK Framework, which can provide a substantial foundation for detection use cases.
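To make this concrete, a use case in this style can be captured as structured data. The sketch below is illustrative, not MaGMa's official schema; the field names are my own, showing how a single record can tie a business driver to a MITRE ATT&CK technique and to the detection logic that implements it:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    # Business layer: why this use case exists, in terms leaders understand
    business_driver: str
    # Threat layer: the MITRE ATT&CK technique it covers, where one applies
    attack_technique: str
    # Implementation layer: the detection logic and who owns it
    detection_rule: str
    owner: str
    documented_rationale: str = ""

    def is_orphaned(self) -> bool:
        """A use case with no recorded rationale is the failure mode
        described above: logic whose reasoning left with its author."""
        return not self.documented_rationale

uc = UseCase(
    business_driver="Protect customer data in the loan-origination platform",
    attack_technique="T1190 Exploit Public-Facing Application",
    detection_rule="alert on exploit signatures against the app servers",
    owner="detection-engineering",
)
print(uc.is_orphaned())  # True until someone records the why
```

A library of such records is what lets a SOC Manager answer "what are we looking for, and why?" instead of pointing at an undocumented pile of rules.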

Please also note that monitoring use cases should not be limited to the tactics and techniques described in industry frameworks. Some of the most important monitoring use cases for your organization will be triggered by policy exceptions. For example: When a product team obtains a written exception allowing them not to patch an app (because patching would break an important process), that exception should be documented in a monitoring use case, and detection logic should be deployed to monitor whether the unpatched app is being exploited. This reasoning applies broadly to scenarios where an organization chooses to accept risk. Please note that many monitoring use cases surface outside of the purview of the SOC. To the extent these use cases are not currently feeding back into your SOC detection logic, your SOC Manager truly is blind to the risks and cannot tell you how well the current detection logic is protecting the enterprise.
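That feedback loop can be sketched in a few lines. All names below are hypothetical; the point is that a risk-acceptance record should mechanically generate a corresponding monitoring requirement, so the exception cannot silently drop out of SOC coverage:

```python
from datetime import date

# A written policy exception, as it might appear in a GRC tracker (illustrative)
exception = {
    "id": "EXC-2022-014",
    "asset": "legacy-billing-app",
    "risk_accepted": "CVE left unpatched; patching breaks invoicing",
    "expires": date(2023, 1, 31),
}

def monitoring_requirement(exc: dict) -> dict:
    """Derive a SOC monitoring use case from an accepted risk,
    so the exception is watched instead of forgotten."""
    return {
        "use_case": f"Exploit attempts against {exc['asset']}",
        "source_exception": exc["id"],
        "detection": f"alert on exploit activity targeting {exc['asset']}",
        "review_by": exc["expires"],  # revisit when the exception lapses
    }

req = monitoring_requirement(exception)
print(req["source_exception"])  # EXC-2022-014
```

Because each derived use case carries a pointer back to its source exception, the rationale survives staff turnover and the rule can be retired when the exception expires.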

Although absolutely critical to defending an organization, monitoring and detection are rarely managed like Vulnerability Management or Risk Management. Rather, Monitoring and Detection are often treated like a dark art known only to technical wizards. Lacking deliberate management, entropy will erode your detection logic. The erosion will be virtually invisible. The SOC will continue to detect and respond to stuff. The mean-time-to-detect (MTTD) and mean-time-to-respond (MTTR) metrics might even continue to trend in positive directions while the foundation rots.

Although I don’t believe it is wise or even possible to separate a Monitoring and Detection program from the SOC, I am convinced that outside resources (resources that can’t be pulled into incident handling) must be allocated to the program. In the same way that a fighter pilot has a team of people working to keep a plane combat-ready, the SOC needs a team to curate the library of use cases and document the business context, subject matter experts to point out the biggest problems, and engineers to test and tune the detection logic. The team doesn’t necessarily need to be large. SOC Managers, CISOs, and other security leaders should have open conversations about how to staff and structure a Monitoring and Detection program. If your SOC Manager can’t tell you how well your current detection logic can defend your company, then you should discuss the issues. It is time to manage Monitoring and Detection like a program. It is time for leaders to lead.