See for yourself – run a scan on your code right now

It is impossible to manage security posture without considering two key factors in any potential vulnerability or security flaw: reachability and risk. The two factors are related. Reachability defines the degree to which a detected security vulnerability, such as a CVE, can actually be attacked and exploited to gain privileged access and to directly or indirectly reach critical systems or data. Risk is a business measurement that assesses the potential for a vulnerability to actually damage an enterprise or organization. In general, without reachability, there is less risk.
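
To make that relationship concrete, it helps to think of reachability as a multiplier on severity and business criticality: an unreachable critical CVE can end up ranked below a reachable medium one. The sketch below is purely illustrative; the function, weights, and scale are assumptions for this article, not any vendor's or standard's actual formula.

```python
# Illustrative only: a toy scoring model, not any vendor's actual formula.
# Assumes a vulnerability is described by a CVSS-style base severity (0-10),
# a reachability factor (0.0 = no known path from attacker-controlled input,
# 1.0 = directly reachable), and a business-criticality weight for the asset.

def priority_score(base_severity: float, reachability: float, criticality: float) -> float:
    """Combine severity, reachability, and business criticality into one score."""
    return base_severity * reachability * criticality

# A critical CVE (9.8) in a dependency whose vulnerable function is never
# called scores far lower than a medium CVE (6.5) sitting on a reachable,
# business-critical request path.
unreachable_critical = priority_score(9.8, reachability=0.05, criticality=1.0)  # ~0.5
reachable_medium = priority_score(6.5, reachability=0.9, criticality=1.5)       # ~8.8

print(unreachable_critical, reachable_medium)
```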

How to Think About Reachability

Reachability and risk have shifted dramatically in recent years as we have moved from a hard-perimeter “fortress” approach to security to distributed systems, APIs, cloud computing, SaaS, and far more potential points for attack ingress and exfiltration egress. To mitigate this new risk landscape, we have moved to fortify all of the newly exposed internal elements and critical systems. We have built a new perimeter around identity, with authentication and Zero Trust. We have built perimeters around connected devices, around our cloud infrastructure, around our virtualized environments, and around the configuration of all these systems. At every point where there is potential for risk, you need to think about fortifying these new mini-perimeters.

A key aspect of risk and reachability that we have spent less effort on is the human element. We do not yet consider people a perimeter to be defended, even though people comprise the most reachable vulnerability. As people, we create flawed software. We misconfigure our infrastructure, our security tools, and our authentication systems. A key part of the new mindset of reachability must be better accounting for and mitigating the risks of people. This means simplifying their decision trees, either through better systems or better design, and helping them become a better gray-matter perimeter.

Due to the atomization of our IT and application environment, and the resulting miniaturization of our security fortresses, we have created a far more granular surface. In some cases, such as through segmentation, we have enhanced security and reduced risk. In other cases, we have added complexity. Without a doubt, we now have many more points of contact and a broader attack surface exposed to the world. A critical consideration in assessing reachability is how far each of those points can be exploited, and to what breadth and depth: horizontal traversal and immediate compromise of critical systems, or subversion of the supply chain and other pivotal sub-systems for subsequent attacks. The miniature-fortress mentality is a useful framework for thinking through the potential attack surface and putting the right controls in the right places to address reachability.

Risk Scoring and Reachability

So what is the interplay between risk scoring and reachability? We cannot say no to new technologies that introduce more complexity, and potentially more risk, if our end users are demanding them. We need to better manage how risk scores are created and applied to help people perform better in this environment of greater complexity.

Today, the responsibility for understanding the complete security picture and applying the proper context to decisions and judgments has grown more widespread. Everyone who is deploying, using, building, or designing applications and technology environments must constantly think about the security implications of whatever project or code they are working on. The lens for those implications is not only the application, system, or infrastructure level, but also the general well-being of the business. This means a broader understanding of risk scoring driven by reachability and criticality, and better accounting for the implications of reachability and attackability.

In cyber, there is no shortage of risk measurements. Injecting reachability into the scoring process is a way of framing vulnerabilities to prioritize which of these issues are likely to be an actual risk. It is possible to have something that poses a major business risk but has very low reachability, which may lead a security team to treat that vulnerability differently.

Broadly speaking, however, it is insufficient to make decisions solely based on programmatically generated risk scores. Any risk score is a reflection of the system generating the score, and how that system was tuned, either by the vendor or by your team. It is important to always remember that any risk score involves some subjective judgments. This is necessary but injects opportunities for drift and bias that may skew scores in ways that leave you less secure.

To be clear, risk scores serve a crucial purpose in application security. Risk scores help you reduce the noise and find signal amidst the flood of CVEs, vulnerabilities, and alerts. In fact, there is so much noise around security today that any effective team relies on multiple layers of noise reduction in its decision chain, prioritization, and policy efforts. Where risk scores fail is in their inability to map accurately to business risk or to accurately reflect the reachability of any CVE or other reported issue. This is why security teams must build their own internal and mental models of risk, modeling threats against the most critical systems and creating context that can be used to verify the decisions of the risk scoring systems. Knowing how data flows through applications and infrastructure is crucial if a human agent is to quickly judge the reachability and attackability of any given vulnerability, or to recognize when automated risk scoring engines may not accurately reflect business risks.
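
As one way to picture those "multiple layers of noise reduction," the sketch below filters raw findings through successive reachability and data-flow checks before ranking what survives. The data model and predicates are hypothetical; in a real pipeline the reachability evidence would come from call graphs and data-flow analysis rather than hand-set flags.

```python
# Hedged sketch of layered noise reduction over raw findings. The Finding
# fields and the filter predicates are assumptions made for illustration.
from dataclasses import dataclass

@dataclass
class Finding:
    finding_id: str
    severity: float                 # CVSS-style base score
    vulnerable_call_reached: bool   # does analysis show a path to the vulnerable code?
    handles_untrusted_input: bool   # does attacker-controlled data flow into it?
    asset_criticality: float        # business weight assigned by the team

def triage(findings: list[Finding]) -> list[Finding]:
    """Apply successive filters, then order what survives by adjusted score."""
    layer1 = [f for f in findings if f.vulnerable_call_reached]   # static reachability
    layer2 = [f for f in layer1 if f.handles_untrusted_input]     # data-flow context
    return sorted(layer2, key=lambda f: f.severity * f.asset_criticality, reverse=True)

findings = [
    Finding("EXAMPLE-CVE-1", 9.8, vulnerable_call_reached=False, handles_untrusted_input=False, asset_criticality=1.0),
    Finding("EXAMPLE-CVE-2", 6.5, vulnerable_call_reached=True, handles_untrusted_input=True, asset_criticality=1.5),
]
print([f.finding_id for f in triage(findings)])  # only the reachable finding survives
```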

In effect, there are two different processes at work: calculating risk and contemplating risk. Calculating risk is programmatic and automated, human-guided initially but driven by models applied against the mass of vulnerabilities surfaced by security testing tools like software composition analysis, static application security testing, fuzzing, and linting. Contemplating risk is a more holistic exercise that leverages the human capacity for intuition and innate knowledge of systems, driven by experience and hard-won insights.

Contemplation is what will help security teams identify undetected biases, guide threat modeling to better consider drift, and highlight areas where human breakdowns might have augmented risk in ways that a risk scoring system could never capture. Above all, contemplation is about context. The more context a human actor has, the better they can understand the big picture. The better their understanding of the big picture, the better the decisions they make. This understanding must also flow through and inform the models and the risk scoring engines, so that the value of contemplation is captured in the calculation.

How to Use Risk Scoring Tools to Minimize Reachability and Attackability

A security team must constantly measure the efficiency and efficacy of all security tooling. The general rule is that security controls lose efficacy over time as vendors lose focus or fail to account properly for emerging classes of threats. When you have a security tool that generates risk scores in tune with your reachability guidelines and proves accurate, you should elevate that system to a higher level of trust and give more weight to its findings. That said, even tools that generate risk scores you believe are accurate and that perform well must be constantly evaluated and cross-checked.
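
One way to make "elevate that system to a higher level of trust" operational is to track how often each tool's findings survive human triage and weight its scores accordingly. The sketch below is a simple illustration under stated assumptions; the tool names and trust values are invented, and a real implementation would base the weights on measured precision over time.

```python
# Hypothetical sketch: weight scores by the trust each tool has earned.
# Tool names and starting trust values are made up for illustration.

tool_trust = {
    "reachability_aware_scanner": 0.9,  # historically accurate findings
    "generic_cve_feed": 0.4,            # noisy, frequently flags unreachable issues
}

def weighted_score(tool: str, raw_score: float) -> float:
    """Scale a tool's raw risk score by its earned trust; unknown tools default low."""
    return raw_score * tool_trust.get(tool, 0.3)

def record_outcome(tool: str, was_accurate: bool, step: float = 0.05) -> None:
    """Nudge trust up or down as triage confirms or refutes the tool's findings."""
    current = tool_trust.get(tool, 0.3)
    updated = current + step if was_accurate else current - step
    tool_trust[tool] = min(1.0, max(0.0, updated))
```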

Unfortunately, it only takes a small amount of drift to open an organization up to potentially catastrophic attacks. A critical part of this calculation is ensuring that humans stay on top of how the risk scores are created and continue to influence the scores to fight drift. Drift is natural, but it is not inevitable. Your code and environment are constantly changing. This may result in changes in reachability, which can materially impact your overall risk posture and should alter the way you approach different systems and risks. An automated risk scoring system may not understand that a new type of supply chain attack might result in subsequent related attacks on similar components in other operating systems, to name one example. Human intelligence and contemplation, really, are the secret ingredient that makes risk scoring tools far more valuable and reliable.
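
One lightweight way to keep humans on top of drift, offered purely as an illustration, is to measure how often analysts end up overriding the engine's priorities and treat a rising override rate as a signal that the scoring model no longer matches the environment. The record format and threshold below are assumptions, not a prescribed process.

```python
# Hedged sketch: flag possible scoring drift by comparing the engine's
# priorities with the decisions analysts actually made over review periods.

def disagreement_rate(reviews: list[dict]) -> float:
    """Fraction of triaged findings where analysts overrode the engine's priority.

    Each review record is assumed to look like:
    {"engine_priority": "high", "analyst_priority": "low"}
    """
    if not reviews:
        return 0.0
    overridden = sum(1 for r in reviews if r["engine_priority"] != r["analyst_priority"])
    return overridden / len(reviews)

def check_for_drift(this_quarter: list[dict], last_quarter: list[dict], threshold: float = 0.10) -> bool:
    """Flag drift when the override rate rises materially between review periods."""
    return disagreement_rate(this_quarter) - disagreement_rate(last_quarter) > threshold
```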

Takeaways: Atomized Systems, Reachability and Risk and Humans

The environment and application landscape will continue to atomize further as we move into new infrastructure paradigms like edge computing and as all systems on the network gain capacity. For this landscape, security teams and the developers they work with on shifting left should:

  • Recognize that we have shifted from a global perimeter to an atomized constellation of mini-perimeters and shift their mindset to consider security and building fortresses at the micro level as a key way to reduce reachability and risk
  • Assess the systems they build and protect on the basis of reachability and risk to properly prioritize. Risk scoring tools are a useful element in reducing noise but cannot supplant human intelligence to “gut-check” risk decisions
  • All along the way, humans have to continue to guide and inform these risk scoring systems and ensure that risk scores do not become blind proxies for criticality and importance. Allowing automated risk scoring to dominate the decision process can make security drift more dangerous and allow human error to go unrecognized, leaving applications and infrastructure both reachable and at risk.

About ShiftLeft

ShiftLeft empowers developers and AppSec teams to dramatically reduce risk by quickly finding and fixing the vulnerabilities most likely to be reachable in their applications and ignoring reported vulnerabilities that pose little risk. Industry-leading accuracy allows developers to focus on security fixes that matter and improve code velocity while enabling AppSec engineers to shift security left.

A unified code security platform, ShiftLeft CORE scans for attack context across custom code, APIs, OSS, containers, internal microservices, and first-party business logic by combining the results of the company’s static code analysis and Intelligent Software Composition Analysis (SCA). Using its unique graph database that combines code attributes and analyzes actual attack paths based on real application architecture, ShiftLeft then provides detailed guidance on risk remediation within existing development workflows and tooling. Teams that use ShiftLeft ship more secure code, faster. Backed by SYN Ventures, Bain Capital Ventures, Blackstone, Mayfield, Thomvest Ventures, and SineWave Ventures, ShiftLeft is based in Santa Clara, California. For information, visit: www.shiftleft.io.
