We’ve been fans of testing the security of your infrastructure and applications since we can remember doing research. It always occurred to us that attackers are testing your environment at all times, so if you aren’t also doing some self-assessment inevitably you’ll be surprised by a successful attack. And we, like most security folks, are not fans of surprises.
Security testing/assessment has gone through a number of iterations. It started with pretty simple vulnerability scanning. You could scan devices and understand their security posture: which patches were installed, and what remained vulnerable on each device. Vulnerability scanning is a function most organizations still perform, driven mostly by compliance requirements.
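To make that concrete, here is a minimal sketch (in Python) of the core loop of a vulnerability scanner: connect to a port, grab the service banner, and match it against a list of known-vulnerable versions. The version-to-issue mappings and target host are illustrative placeholders, not a real vulnerability feed; real scanners add authenticated checks, full CVE databases, and much more.

```python
# Minimal sketch of a banner-grabbing vulnerability scanner.
# KNOWN_VULNS entries are illustrative examples, not a real feed.
import socket

KNOWN_VULNS = {
    "OpenSSH_7.2": "CVE-2016-6210 (user enumeration)",  # example mapping
    "vsFTPd 2.3.4": "backdoored release",               # example mapping
}

def grab_banner(host: str, port: int, timeout: float = 3.0) -> str:
    """Connect to host:port and return whatever banner the service sends."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        try:
            return sock.recv(1024).decode(errors="replace").strip()
        except socket.timeout:
            return ""  # service (e.g. HTTP) doesn't volunteer a banner

def scan(host: str, ports: list[int]) -> None:
    for port in ports:
        try:
            banner = grab_banner(host, port)
        except OSError:
            continue  # port closed or filtered
        hits = [issue for ver, issue in KNOWN_VULNS.items() if ver in banner]
        status = "; ".join(hits) if hits else "no known issues in toy list"
        print(f"{host}:{port} -> {banner!r} [{status}]")

if __name__ == "__main__":
    scan("scanme.nmap.org", [21, 22, 80])  # a host that permits scanning
```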
As useful as it was to understand which devices or applications in your environment were vulnerable, a simple scan provides limited information. A vuln scanner doesn’t understand that a vulnerable device may not actually be exploitable, due to other controls in place. So penetration testing emerged as a discipline that goes beyond the vuln scanner, with humans trying to actually steal data.
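That vulnerable-versus-exploitable distinction is worth a toy example. In the sketch below (all hosts, firewall rules, and the CVE identifier are hypothetical), a scanner would flag the finding on the database server either way, but an attacker on the internet can’t actually reach it; only someone with an internal foothold can.

```python
# Toy model of the vulnerable-vs-exploitable distinction. A scanner reports
# the first condition; a pen tester checks the second. All hosts, firewall
# rules, and the CVE below are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Finding:
    host: str
    port: int
    cve: str

# Compensating controls: firewall allows as (source_zone, dest_host, port).
FIREWALL_ALLOWS = {
    ("internet", "web-01", 443),
    ("internal", "db-01", 5432),
}

def is_exploitable(finding: Finding, attacker_zone: str) -> bool:
    """Exploitable means vulnerable AND reachable from the attacker's position."""
    return (attacker_zone, finding.host, finding.port) in FIREWALL_ALLOWS

finding = Finding(host="db-01", port=5432, cve="CVE-2024-0000")  # placeholder
print(is_exploitable(finding, "internet"))  # False: vulnerable, not exploitable
print(is_exploitable(finding, "internal"))  # True: an internal foothold can reach it
```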
Pen tests were helpful because they gave you a sense of what was really at risk. On the other hand, a pen test is resource-intensive and expensive, especially if you use an external testing firm. To fill that gap, automated pen testing tools emerged that use actual exploits in a semi-automated fashion to mimic the actions of an attacker.
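As a rough illustration of how such tooling works, the sketch below drives Metasploit’s msfconsole from Python via a generated resource script. It assumes Metasploit is installed and that the target is a lab system you are authorized to test; the module shown is a classic public one, and commercial attack-simulation products are far more sophisticated (and safer) than this.

```python
# Rough sketch of semi-automated exploitation: generate a Metasploit
# resource script and shell out to msfconsole to run it. Assumes Metasploit
# is installed and you are authorized to test the target; RHOSTS and the
# module name are examples.
import subprocess
import tempfile

def run_exploit(target: str, module: str) -> str:
    rc = "\n".join([
        f"use {module}",
        f"set RHOSTS {target}",
        "run",
        "exit",
    ])
    with tempfile.NamedTemporaryFile("w", suffix=".rc", delete=False) as f:
        f.write(rc)
        path = f.name
    result = subprocess.run(
        ["msfconsole", "-q", "-r", path],
        capture_output=True, text=True, timeout=600,
    )
    return result.stdout

if __name__ == "__main__":
    # A classic, well-known module; point it only at lab systems you own.
    print(run_exploit("10.0.0.5", "exploit/unix/ftp/vsftpd_234_backdoor"))
```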
Regardless of whether you use a carbon-based (human) or silicon-based (computer) pen test approach, the findings represent a point-in-time view of your environment. If you blink, your environment has changed, and your pen test findings may no longer be valid.
With the easy availability of pen testing tools (Metasploit is open source software), defending against a pen testing tool emerged as the low bar of security. Our friend Josh Corman coined the term “HD Moore’s Law,” after HD Moore, the creator of Metasploit. Basically, if you can’t stop a simplistic attacker using Metasploit (or another pen test tool), you aren’t very good at security.
The Low Bar Isn’t High Enough
As we lead enterprises through developing a security program, we typically start with an adversary analysis. It’s pretty important to understand what kinds of attackers will target your organization and what they’ll be looking for. If you view your main threat as a 400-pound hacker in their parents’ basement, then defending against what’s in an open source pen test tool is probably sufficient.
Yet do any of you really think an unsophisticated attacker wielding a pen test tool is all you have to worry about? Of course not. Here is what you need to know about these adversaries: they don’t play by your rules. Period. They will attack when you don’t expect it. They will take advantage of new attacks and exploits to evade detection. They will use tactics that make their attacks look like the work of a different adversary, planting a false flag.
Basically, the adversary will do whatever it takes to achieve their mission. They can usually afford to be patient, and will wait for you to screw something up. So the low bar of security represented by a pen test tool is not good enough to really assess your environment.
Dynamic IT
The increasing sophistication of adversaries is not the only challenge you face in assessing your environment and understanding your risk. Technology infrastructure is undergoing possibly the most significant set of changes we’ve ever seen, which dramatically complicates your ability to assess your environment.
First, you have no idea where your data actually resides. Between SaaS applications, cloud storage services, and integrated business partner networks, the boundaries of traditional technology infrastructure have been extended, and you can’t assume your information sits on a network you control. And if you don’t control the network, it’s potentially harder for you to test it.
The next mega change is mobility. Between an increasingly disconnected workforce and an explosion of smart devices accessing critical information, you can no longer assume your employees will access applications and data from your networks. Realizing that authorized users of data can be anywhere in the world at any time impacts your assessment strategies as well.
Finally, the push to public cloud-based infrastructure means it’s not clear where your compute and storage will be either. Many of the enterprises we work with are building cloud-native technology stacks using dozens of services across cloud providers. So you can’t really assume what attackers will target either.
To recap: you no longer know where your data is, where it will be accessed from, or where the compute will happen. Yet you are chartered to protect information in this dynamic IT environment, which means you’ll want to assess the security of your environment as often as practical. Can you start to see the challenge of security assessment today, and how much more complicated it will become tomorrow?
We Need Dynamic Security Assessment
As we discussed above, a pen test represents a point-in-time view of your environment, and becomes obsolete almost as soon as the test is done, because something has changed. The only way to keep pace with the dynamic IT environment we describe is dynamic security assessment. The rest of this blog series will lay out what we mean by that term, and how to implement it within your environment.
As a little prelude to what you’ll learn, a dynamic security assessment tool includes:
- a highly sophisticated simulation engine, which can imitate the typical attack patterns of advanced adversaries without putting your production infrastructure in danger.
- an understanding of the network topology to model possible lateral movement and isolate the targeted information/assets.
- a security research team that leverages both proprietary and third-party research to ensure the latest and greatest attacks are modeled into the tool, eliminating surprise.
- an effective security analytics function to figure out not just what is exploitable, but how different workarounds and fixes would impact the security of the infrastructure. (A toy sketch combining a few of these ideas appears below.)
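To pull several of these ideas together, here is a toy sketch of what a simulation/analytics core might look like, with an entirely hypothetical topology: model the network as a graph, check whether an attacker foothold can reach a targeted asset via lateral movement, and score candidate fixes by whether they cut off every path.

```python
# Toy simulation/analytics core: model the network as a graph, find
# lateral-movement paths from a foothold to a target asset, and score
# candidate fixes by whether they make the target unreachable.
# Topology, foothold, and fixes are all hypothetical.
from collections import deque

# host -> hosts reachable from it (edges an attacker could traverse)
TOPOLOGY = {
    "web-01":  ["app-01"],
    "app-01":  ["db-01", "file-01"],
    "file-01": ["db-01"],
    "db-01":   [],
}

def reachable(topology: dict, start: str, target: str) -> bool:
    """Breadth-first search: can an attacker at `start` reach `target`?"""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for nxt in topology.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

def evaluate_fix(topology: dict, cut: tuple, foothold: str, target: str) -> bool:
    """Return True if removing the directed edge `cut` protects `target`."""
    src, dst = cut
    patched = {h: [n for n in nbrs if (h, n) != (src, dst)]
               for h, nbrs in topology.items()}
    return not reachable(patched, foothold, target)

if __name__ == "__main__":
    print(reachable(TOPOLOGY, "web-01", "db-01"))  # True: a path exists
    for fix in [("app-01", "db-01"), ("web-01", "app-01")]:
        print(fix, "blocks target:",
              evaluate_fix(TOPOLOGY, fix, "web-01", "db-01"))
```

Even this toy makes the analytics point: cutting the app-to-database link looks sensible but leaves a path through the file server, while segmenting the web tier from the app tier actually protects the target.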
We’d like to thank SafeBreach as the initial potential licensee of this content. If you remember, we do our research using our Totally Transparent Research methodology, which requires foresight on the part of our licensees, allowing us to post our papers in the Research Library without paywalls, registration, or any other blockage to you reading (and presumably enjoying) them.
So with that, we’ll get going on describing the concept of Dynamic Security Assessment in the next post.
- Mike Rothman