Monday, January 15, 2018

Container Security 2018: Securing Container Contents

Posted under: Research and Analysis

Testing the code and supplementary components that will execute within a container, and verifying that both conform to security and operational practices, is core to any container security effort. One of the major advances over the last year or so is the introduction of security features for the software supply chain from container engine providers such as Docker, Rocket, OpenShift, and so on. We are also seeing a number of third-party vendors help validate container content, both before and after deployment. Each of these solutions focuses on slightly different threats to container construction; for example, Docker provides tools to certify that a container has gone through your process without alteration, using digital signatures and container repositories. Third-party tools focus on security benefits beyond what the engine providers offer, such as examining libraries for known flaws.

So while things like process controls, digital signing services to verify chain of custody, and creation of a bill of materials based on known trusted libraries are all important, you’re going to need more than what is packaged with the base container management platforms. You will want to examine the third-party tools which help harden container inputs, analyze resource usage, perform static code analysis, analyze the composition of libraries, and check against known malware signatures. In a nutshell, you’ll need to look for more than what comes with the base platform you choose.

Container Validation and Security Testing

  • Runtime User Credentials: We could go into great detail here about user IDs, namespace views, and resource allocation, but instead let’s focus on the most important thing: don’t run container processes as root, as that provides attackers access to the underlying kernel and a path to attack other containers or the Docker engine itself. We recommend using specific user ID mappings with restricted permissions for each class of container. We understand that roles and permissions change over time, which requires some work to keep kernel views up to date, but this provides a failsafe to limit access to OS resources and the virtualization features underlying the container engine. A minimal non-root check is sketched after this list.
  • Security Unit Tests: Unit tests are a great way to run focused test cases against specific modules of code — typically created as your development teams find security and other bugs — without needing to build the entire product every time. They cover things such as XSS and SQLi tests of known attacks against test systems. Additionally, the body of tests grows over time, providing a regression testbed to ensure that vulnerabilities do not creep back in. During our research we were surprised to learn that many teams run security unit tests from Jenkins. Even though most are moving to microservices, fully supported by containers, they find it easier to run these tests earlier in the cycle. We recommend unit tests somewhere in the build process to help validate that the code in containers is secure; a brief example of this style of test appears after this list.
  • Code Analysis: A number of third-party products perform automated binary and white box testing, rejecting the build if critical issues are discovered. We are also seeing several new tools that plug into common Integrated Development Environments (IDEs), checking code for security issues prior to check-in. We recommend you implement some form of code scanning to verify that the code you build into containers is secure. Many newer tools offer full RESTful API integration within the software delivery pipeline; these tests usually take a bit longer to run but still fit within a CI/CD deployment framework. A sketch of gating a pipeline stage on scan results follows this list.
  • Composition Analysis: A useful security technique is to check libraries and supporting code against the CVE (Common Vulnerabilities and Exposures) database to determine whether you are using vulnerable code. Docker and a number of third parties – including some open source distributions – provide tools for checking common libraries against the CVE database, and they can be integrated into your build pipeline. Developers are not typically security experts, and new vulnerabilities are discovered in common tools weekly, so an independent checker to validate the components of your container stack is both simple and essential. A simple version-check sketch appears after this list.
  • Hardening: Over and above making sure what you use is free of known vulnerabilities, there are other tricks for securing containers before deployment. Hardening in this context is similar to OS hardening (which we discuss in the following section): removing libraries and unneeded packages to reduce the attack surface. There are several ways to check for unused contents of a container, and then work with the development team to remove items which are unused or unnecessary. Another approach to hardening is to check for hard-coded passwords, keys, or other sensitive items in the container — these breadcrumbs make things easy for developers, but much easier for attackers. Some firms use manual scans for this, while others leverage tools to automate scanning; a basic secret-scanning sketch follows this list.
  • Container Signing and Chain of Custody: How do you know where a container came from? Did it go through your build process? The problem here is what is called image-to-container drift, where unwanted additions are made to the image. You want to ensure that the entire process was followed, and that somewhere along the way some well-intentioned developer did not subvert the process with untested code. You accomplish this by creating a cryptographic digest of all image contents as a unique ID, and then tracking it through the lifecycle, ensuring that no unapproved images are run in the environment. Digests and digital fingerprints help you detect code changes and identify where a container came from. Some of the container management platforms provide tools to digitally fingerprint code at each phase of the development process, along with tools to validate the signature chain. But these capabilities are seldom used, and platforms such as Docker only produce signatures optionally. While the code should be checked prior to being placed into a registry or container library, the work of signing images and code modules happens during the build. You will need to create specific keys for each phase of the build, sign code snippets on test completion but before code is sent on to the next step in the process, and — most importantly — keep these keys secured so attackers cannot create their own trusted code signatures. This gives you some assurance that the vetting process proceeded as intended; a digest-and-sign sketch appears after this list.
  • Bill Of Materials: What’s in the container? What code is running in your production environment? How long ago did you build this container image? These are common questions when something goes awry. In the case of a container compromise, a very practical question is: how many containers are currently running this software bundle? One recommendation — especially for teams which don’t perform much code validation during the build process — is to leverage scanning tools to check pre-built containers for common vulnerabilities, malware, root account usage, bad libraries, and so on. If you keep containers around for weeks or months, it is entirely possible that a new vulnerability has since been discovered, so the container is now suspect. Second, we recommend using the Bill of Materials capabilities available in some scanning tools to catalog container contents. This helps you identify other potentially vulnerable containers and scope remediation efforts; a minimal BOM generator is sketched after this list.
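
For the Runtime User Credentials item above, the following is a minimal sketch of a build-gate check that an image does not default to root. It assumes the Docker SDK for Python (the docker package) and a reachable Docker daemon; the image tag is a placeholder.

```python
# Sketch: fail a build if an image's default user is root (or unset, which
# defaults to root). Assumes the Docker SDK for Python ("docker" package)
# and a local Docker daemon; "myorg/app:latest" is a placeholder tag.
import sys
import docker

def runs_as_root(image_tag: str) -> bool:
    client = docker.from_env()
    config = client.images.get(image_tag).attrs.get("Config", {})
    user = (config.get("User") or "").strip()
    # An empty USER, "root", or UID 0 all mean the container starts as root.
    return user in ("", "root", "0") or user.startswith("0:")

if __name__ == "__main__":
    tag = sys.argv[1] if len(sys.argv) > 1 else "myorg/app:latest"
    if runs_as_root(tag):
        print(f"FAIL: {tag} runs as root; set a non-root USER in the Dockerfile")
        sys.exit(1)
    print(f"OK: {tag} runs as a non-root user")
```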
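
For Security Unit Tests, here is a brief sketch of the kind of focused pytest cases described above. The sanitize_comment and build_user_query functions are stand-ins for your application's real code, defined inline only so the example runs on its own.

```python
# Sketch: focused security unit tests written for pytest. The two functions
# below are stand-ins for real application code; in practice the tests
# would import those functions instead of defining them inline.
import html
import pytest

def sanitize_comment(text: str) -> str:
    # Stand-in filter: HTML-escape user input so markup cannot execute.
    return html.escape(text, quote=True)

def build_user_query(username: str):
    # Stand-in query builder: parameterized SQL plus bound values.
    return "SELECT * FROM users WHERE name = ?", (username,)

XSS_PAYLOADS = ["<script>alert(1)</script>", '<img src=x onerror="alert(1)">']
SQLI_PAYLOADS = ["' OR '1'='1", "1; DROP TABLE users;--"]

@pytest.mark.parametrize("payload", XSS_PAYLOADS)
def test_comment_filter_neutralizes_markup(payload):
    cleaned = sanitize_comment(payload)
    # Escaped output must contain no raw angle brackets for markup to run.
    assert "<" not in cleaned and ">" not in cleaned

@pytest.mark.parametrize("payload", SQLI_PAYLOADS)
def test_user_query_is_parameterized(payload):
    query, params = build_user_query(payload)
    assert payload not in query   # attacker input never lands in the SQL text
    assert payload in params      # it travels as a bound parameter instead
```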
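
For Code Analysis, this is a rough sketch of gating a pipeline stage on a scanner's results over a REST API. The scanner URL, endpoints, and response fields are hypothetical; adapt them to whatever analysis product your pipeline actually uses.

```python
# Sketch: gate a CI/CD stage on a static-analysis scan via a REST API.
# The scanner URL, endpoints, and response fields are hypothetical.
import sys
import time
import requests

SCANNER = "https://scanner.example.internal/api/v1"   # hypothetical service

def run_scan(project: str, commit: str) -> dict:
    resp = requests.post(f"{SCANNER}/scans", json={"project": project, "commit": commit})
    resp.raise_for_status()
    scan_id = resp.json()["id"]
    while True:                                        # poll until the scan completes
        status = requests.get(f"{SCANNER}/scans/{scan_id}").json()
        if status["state"] in ("finished", "failed"):
            return status
        time.sleep(10)

if __name__ == "__main__":
    commit = sys.argv[1] if len(sys.argv) > 1 else "HEAD"
    result = run_scan(project="myorg/app", commit=commit)
    critical = result.get("findings", {}).get("critical", 0)
    if result["state"] == "failed" or critical > 0:
        print(f"Rejecting build: {critical} critical findings")
        sys.exit(1)
    print("Static analysis passed")
```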
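
For Composition Analysis, here is a simplified sketch of checking pinned dependency versions against an advisory list. The requirements.txt path and the inline advisory table are placeholders; a real pipeline would pull advisories from the CVE/NVD feed or a dedicated scanner.

```python
# Sketch: compare pinned dependencies against known-vulnerable versions.
# The ADVISORIES table and "requirements.txt" path are placeholders; real
# pipelines would query the CVE/NVD feed or a scanning product instead.
ADVISORIES = {
    # package: {vulnerable_version: advisory_id}  (illustrative entries)
    "requests": {"2.5.0": "CVE-2015-2296"},
    "pyyaml":   {"5.3":   "CVE-2020-14343"},
}

def parse_requirements(path="requirements.txt"):
    deps = {}
    with open(path) as fh:
        for line in fh:
            line = line.split("#")[0].strip()   # drop comments
            if "==" in line:
                name, version = line.split("==", 1)
                deps[name.strip().lower()] = version.strip()
    return deps

def find_vulnerable(deps):
    hits = []
    for name, version in deps.items():
        advisory = ADVISORIES.get(name, {}).get(version)
        if advisory:
            hits.append((name, version, advisory))
    return hits

if __name__ == "__main__":
    for name, version, advisory in find_vulnerable(parse_requirements()):
        print(f"{name}=={version} is affected by {advisory}")
```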
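
For Hardening, a basic sketch of scanning a build context (or an exported image filesystem) for hard-coded secrets. The patterns and scanned path are illustrative; purpose-built secret scanners cover far more cases.

```python
# Sketch: walk a build context (or an exported image filesystem) and flag
# likely hard-coded secrets. The patterns and scanned path are illustrative.
import os
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                             # AWS access key ID
    re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"), # private key material
    re.compile(r"(password|passwd|secret)\s*[:=]\s*['\"][^'\"]{6,}", re.I),
]

def scan_tree(root="."):
    findings = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as fh:
                    text = fh.read()
            except OSError:
                continue
            for pattern in SECRET_PATTERNS:
                if pattern.search(text):
                    findings.append((path, pattern.pattern))
    return findings

if __name__ == "__main__":
    for path, pattern in scan_tree():
        print(f"possible secret in {path} (matched {pattern})")
```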
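
For Container Signing and Chain of Custody, a sketch of fingerprinting an exported image and signing the digest with a per-phase key so the next stage can verify it. Real pipelines would typically use the platform's signing service (for example Docker Content Trust / Notary) with asymmetric keys; the HMAC key and file name here are placeholders.

```python
# Sketch: fingerprint an exported image tarball and sign the digest with a
# per-phase key so the next build stage can verify chain of custody. The
# key and "app-image.tar" are placeholders; production systems would use
# asymmetric keys managed by the platform's signing service.
import hashlib
import hmac

def image_digest(tarball_path: str) -> str:
    h = hashlib.sha256()
    with open(tarball_path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def sign(digest: str, phase_key: bytes) -> str:
    return hmac.new(phase_key, digest.encode(), hashlib.sha256).hexdigest()

def verify(digest: str, signature: str, phase_key: bytes) -> bool:
    return hmac.compare_digest(sign(digest, phase_key), signature)

if __name__ == "__main__":
    BUILD_PHASE_KEY = b"replace-with-a-key-from-your-secrets-manager"
    digest = image_digest("app-image.tar")        # e.g. the output of `docker save`
    signature = sign(digest, BUILD_PHASE_KEY)
    print("digest:", digest)
    print("signature verifies:", verify(digest, signature, BUILD_PHASE_KEY))
```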
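
For the Bill Of Materials item, a minimal sketch that produces a JSON package inventory from a Debian-based image's dpkg database. The exported filesystem path and image tag are placeholders; other base images need their own package-manager parser.

```python
# Sketch: build a simple JSON bill of materials for a Debian/Ubuntu-based
# image by reading the dpkg status file from an exported filesystem. The
# "rootfs" path and image tag are placeholders (e.g. `docker export`
# untarred to ./rootfs); other bases need their own package-manager parser.
import json

def dpkg_packages(status_path="rootfs/var/lib/dpkg/status"):
    packages, current = [], {}
    with open(status_path, errors="ignore") as fh:
        for line in fh:
            line = line.rstrip("\n")
            if not line:                                  # blank line ends a record
                if current.get("Package"):
                    packages.append(current)
                current = {}
            elif ":" in line and not line.startswith(" "):
                key, value = line.split(":", 1)
                current[key] = value.strip()
    if current.get("Package"):
        packages.append(current)
    return [{"name": p["Package"], "version": p.get("Version", "")} for p in packages]

if __name__ == "__main__":
    bom = {"image": "myorg/app:latest", "packages": dpkg_packages()}
    print(json.dumps(bom, indent=2))
```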

In the next section we will talk about how to protect containers when they are in production.

- Adrian Lane