By Christopher Frenz, Information Security Officer/AVP of IT Security, Mount Sinai South Nassau
Almost everyone in the security industry will, at some point in their career, be faced with questions like “How secure are we?” or “How risky is that?” While these questions seem simple, few security programs assess themselves in ways that allow them to be answered in an empirically meaningful way. Consider the ways in which security is commonly evaluated. We routinely see vanity metrics like “my spam filter blocks 100,000 emails a day,” which sound impressive because of the large number but provide no context as to whether that is good or bad and no actionable insight into how to improve. Moreover, even common KPIs and KRIs that are not pure vanity metrics often lack the granularity needed to drive meaningful improvements to security processes and architectures. For example, tracking Mean Time to Detection (MTTD) may tell you that you are doing a poor job of detecting incidents in a timely manner, but it lacks the granularity needed to figure out how to get better at detecting them.
These problems are compounded by the fact that much of security is assessed for compliance with standards that award points for the existence of a control rather than its efficacy. Having a firewall is one thing; ensuring that the firewall has proper egress filtering and other rules in place to effectively prevent data exfiltration and command-and-control traffic is another matter entirely. It is quite possible to achieve compliance with many standards while remaining insecure. The issue worsens when one considers that controls fail all the time and that the mere existence of a control does not mean it should be assumed effective. Yet this assumption is all too common, and it leads many organizations to buy into an illusion of security fostered by the existence of controls and by vanity metrics that provide a feel-good sense of working controls that may not actually be working in all the ways needed. Organizations that fall into this trap will inevitably find the illusion shattered one day, when an attacker points out all the control failures that their approach to measuring security failed to identify.
To address these issues, we, as an industry, need to move to a more evidence-based approach and begin to build methods of measuring security that are centered on control efficacy. Security needs to mature into a discipline that is approached more as a science than an art form, and the Evidence-Based Security Framework provides an ideal means of achieving that (https://www.oreilly.com/library/view/evidence-based-security/9781098148942/). The framework functions as follows:
- Map Threats to TTPs – Identify the biggest threats to your organization and the tactics, techniques, and procedures (TTPs) associated with each threat.
- Devise Metrics – Devise metrics that can be used to quantify threat impact, control efficacy, and the efficacy of your incident response process.
- Simulate – Simulate the TTPs and collect your metrics to gain real-world insight into what the impact of the threat may be and the efficacy of controls and response processes.
- Analyze – Analyze the data to identify any deficiencies and changes that could be made to further improve security.
- Remediate – Remediate the deficiencies and implement the identified improvements.
- Repeat – Repeat the testing to show a quantitative improvement and to identify additional areas that may need improvement.
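The loop above lends itself to simple tooling. The sketch below is a minimal, hypothetical illustration of the first few steps: a threat is mapped to its TTPs, simulation outcomes are recorded, and the detection gaps that feed the analyze/remediate steps are surfaced. The threat name, TTP labels, and results are all illustrative assumptions, not data from the framework itself.

```python
# Hypothetical sketch of the evidence-based loop: map a threat to TTPs,
# record simulated outcomes, and surface detection gaps to remediate.
# All threat names, TTP labels, and results below are illustrative.

THREAT_TTPS = {
    "ransomware": [
        "T1566 phishing",
        "T1059 command and scripting interpreter",
        "T1486 data encrypted for impact",
    ],
}

# Outcome of simulating each TTP against current controls (assumed data,
# e.g., gathered from a breach-and-attack-simulation exercise).
simulation_results = {
    "T1566 phishing": "detected",
    "T1059 command and scripting interpreter": "missed",
    "T1486 data encrypted for impact": "blocked",
}

def detection_gaps(threat: str) -> list[str]:
    """Return the TTPs for a threat that were neither detected nor blocked."""
    return [
        ttp for ttp in THREAT_TTPS[threat]
        if simulation_results.get(ttp) == "missed"
    ]

# The gaps become the remediation backlog for the next iteration.
print(detection_gaps("ransomware"))
```

After remediation, rerunning the same simulation against the same TTP list is what makes the improvement measurable rather than anecdotal.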
This approach to measuring security is much more granular and, as a result, more actionable than many of the metrics commonly used today. For example, let’s reconsider the KPI of MTTD in the context of this framework. MTTD can tell us we are doing a bad job, but the evidence-based approach can tell us why we are doing a bad job and help us identify actionable remediations. If there were concerns about detecting a given threat in a timely manner, the evidence-based framework in its simplest form would have us simulate the TTPs associated with the threat and evaluate which TTPs we were able to detect and/or block. We could then easily see that we failed to detect 8 of the 10 TTPs commonly associated with the threat and perform some detection or control engineering to detect and stop the TTPs we failed to catch. The test could then be repeated to show that we are now failing to detect only 2 TTPs, allowing us to empirically show that security was improved. Once the scores are considered acceptable, testing can be repeated periodically for continuous validation of efficacy or, better yet, expanded (e.g., with additional TTPs or TTP variants, common obfuscations, etc.) to identify other ways in which controls may fail to provide the needed efficacy.
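The before-and-after comparison in this example reduces to a simple coverage metric. The sketch below uses the hypothetical numbers from the scenario above (2 of 10 TTPs detected initially, 8 of 10 after detection engineering); the function name and structure are my own illustration, not part of the framework.

```python
# Detection-coverage metric before and after remediation, using the
# hypothetical numbers from the MTTD example: 2 of 10 TTPs detected
# initially, 8 of 10 after detection/control engineering.

def detection_coverage(detected: int, total: int) -> float:
    """Fraction of simulated TTPs that were detected or blocked."""
    if total <= 0:
        raise ValueError("no TTPs were simulated")
    return detected / total

before = detection_coverage(2, 10)  # 8 of 10 TTPs missed
after = detection_coverage(8, 10)   # only 2 TTPs still missed

print(f"coverage improved from {before:.0%} to {after:.0%}")
```

Expressing the result as a percentage of simulated TTPs is what makes the improvement empirically demonstrable, which a lagging indicator like MTTD alone cannot do.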
It’s critical we test for efficacy and measure security in ways that expose what is not working, rather than focusing only on metrics that paint a rosy picture. It’s through identifying the ways that controls can fail that we gain the opportunity to improve the effectiveness of our security processes and architectures. We need to embrace the fact that controls are imperfect and that no security product provides protections that can’t be bypassed. We need to identify the ways our controls can be bypassed and work to eliminate those holes (e.g., blocking BCDEdit so safe mode can’t be used to bypass EDR) so threat actors cannot readily circumvent our protections. This is especially true given that threat actors spend a lot of time and resources figuring out how to bypass security tooling; in many ways, they understand the limitations of security tools and controls better than we as defenders do. Threat actors are, in essence, using evidence-based approaches to identify ways to bypass our controls and defeat us. It’s time we use the same approaches to build robust metrics programs centered on control efficacy, identify and eliminate these same weaknesses, and take back the upper hand.