Continuous Integration in the Age of Containers - Part 2

February 08, 2018 By Curtis Yanko

4 minute read time

In Part 1 we explored the impact of containers on CI/CD and looked at how shifting application security testing to the left helped us avoid passing defects downstream to our consumers. That view was really the first half of the story, CI, and one that closely resembled the work others and I were doing before containers came along. In this post, I hope to show how containers impact delivery teams and create new opportunities for DevSecOps to be successful.

If I once again look back to the world of 2012, we were already shifting things left in our CI process, so in many respects containers don't change that. If, however, we look at my team's primary responsibility of deploying the application to test and production environments, this is where we'll see the more profound impact of containers. Most of the apps I've supported in large organizations have had about 8-10 environments, each with a specific testing function associated with it. There would be environments for things like functional testing, system integration, performance, user acceptance, staging, and production fix, to name a few. The one thing they all had in common was that no two were ever exactly the same! Not only would we use different hardware with different amounts of capacity, but we would routinely 'discover' different configurations and patching levels as we went. Configuration drift, as it is known, was, and still is, a primary contributor to security and operational risk.

[Image: legacy.png]

Because the environments were statically provisioned, operations and middleware teams had a really hard time keeping configurations consistent (think folder structures, users, and permissions). Once provisioned, environments would need to be kept up to date via patching. Sadly, it was often the case that 'test' environments were a low priority and neglected. Production and production-like environments, while a priority, presented a different challenge, with business owners often deferring patch windows to avoid disrupting development time. The bottom line was what I said before: no two of my environments were the same, because changes were coming from different directions via disparate processes.

Containers change all of this and offer us what Scott McCarty of Red Hat would call a 'converged software supply chain.' All of the changes can flow into the CI process and through the pipeline, where they can be tested.

[Image: Containers.png]

The idea is to start by making a base image that meets your configuration standards and to verify it with something like OpenSCAP or maybe Chef InSpec: whatever lets you reliably and repeatably verify that the image is correctly configured and properly patched. I've been asking myself how often new images should be created and, for now, have settled on weekly. My thinking is simply that in a world of 'Patch Tuesdays,' one week feels about right; I'd be curious to hear what others are seeing here.
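To make that concrete, here's a minimal sketch of what a weekly base-image job could look like, assuming a Docker build followed by an OpenSCAP gate. The registry name, SCAP profile, and data-stream path below are illustrative placeholders, not something from my actual setup:

```python
# build_base.py - hypothetical weekly base-image build with a compliance gate.
# Image name, SCAP profile, and data-stream path are placeholders.
import subprocess
import sys
from datetime import date

IMAGE = f"registry.example.com/base/os:{date.today():%Y.%m.%d}"

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def main():
    # Build the base image from a Dockerfile that encodes our config standards.
    run(["docker", "build", "-t", IMAGE, "base-image/"])

    # Verify configuration and patch level; a failed scan stops the pipeline.
    try:
        run(["oscap-docker", "image", IMAGE, "xccdf", "eval",
             "--profile", "xccdf_org.ssgproject.content_profile_standard",
             "/usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml"])
    except subprocess.CalledProcessError:
        sys.exit("Base image failed the compliance scan; not publishing.")

    # Only verified images are published for downstream teams to consume.
    run(["docker", "push", IMAGE])

if __name__ == "__main__":
    main()
```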

[Image: DevSecOps Containers.png]

The middleware team can then consume the base image and rebuild each week, or as new patches for the runtime itself are released. Since we're probably creating a user and making changes to the file system, we'll want to run the image through our tools to verify the configuration and contents again prior to tagging and pushing it out to a private Docker registry. This creates a steady stream of changes flowing into the development and delivery process, where applications can be added.
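As a rough illustration of that hand-off, the middleware rebuild could be triggered whenever the verified base image changes. Everything below (registry, image names, build-context directory) is invented for the sketch:

```python
# rebuild_middleware.py - hypothetical trigger script: rebuild the runtime
# layer whenever the verified base image changes. Registry and image names,
# and the build-context directory, are invented for this sketch.
import subprocess

BASE = "registry.example.com/base/os:latest"
MIDDLEWARE = "registry.example.com/middleware/runtime:latest"

def image_id(ref):
    """Return the local image ID for a reference, or None if not present."""
    out = subprocess.run(
        ["docker", "images", "--no-trunc", "--quiet", ref],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return out or None

def main():
    before = image_id(BASE)
    subprocess.run(["docker", "pull", BASE], check=True)
    if image_id(BASE) == before:
        print("Base image unchanged; nothing to rebuild.")
        return

    # The base changed: rebuild the middleware layer on top of it. The result
    # still goes back through configuration verification before it is pushed.
    subprocess.run(["docker", "build", "--pull", "-t", MIDDLEWARE,
                    "middleware-image/"], check=True)
    subprocess.run(["docker", "push", MIDDLEWARE], check=True)

if __name__ == "__main__":
    main()
```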

Application teams, who are likely doing tens of builds a day, will layer in their files and configuration changes and use the CI pipeline as I showed in Part 1 to comprehensively test the application. From a security perspective, we're talking about static code analysis, a Nexus Lifecycle scan for the bill of materials, one more pass through our configuration validation, and of course dynamic security testing. Once we have an image that has been verified to be configured properly and free of known vulnerabilities, we're ready to deploy to validate business functionality. Deployment is greatly simplified because the entire system is encapsulated and portable: no more unintended config drift, and we no longer have to manage lots of environments. In the world of containers, there is just Production and 'Not Production', and every opportunity to verify any and all changes throughout the whole CI/CD journey.
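Pulling those gates together, the application-layer pipeline might sequence them like this. The gate commands and the image tag are stand-ins for whatever static analysis, Nexus Lifecycle, configuration, and DAST tooling you actually run:

```python
# app_pipeline.py - hypothetical gate ordering for the application layer.
# The gate commands are stand-ins; substitute your real scanners and tests.
import subprocess
import sys

IMAGE = "registry.example.com/apps/example-app:candidate"  # placeholder tag

GATES = [
    ("static code analysis",     ["run-static-analysis", "src/"]),
    ("Lifecycle/BOM scan",       ["run-lifecycle-scan", IMAGE]),
    ("config verification",      ["run-config-check", IMAGE]),
    ("dynamic security testing", ["run-dast", IMAGE]),
]

def main():
    # Build once; the same immutable image moves through every gate.
    subprocess.run(["docker", "build", "-t", IMAGE, "."], check=True)
    for name, cmd in GATES:
        print(f"== gate: {name}")
        if subprocess.run(cmd).returncode != 0:
            sys.exit(f"Gate '{name}' failed; image not promoted.")
    # All gates passed: promote the verified image unchanged.
    subprocess.run(["docker", "push", IMAGE], check=True)

if __name__ == "__main__":
    main()
```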

The benefits of this unified supply chain help developers and operations folks collaborate early and often, while significantly enhancing the ability of security teams to ensure their concerns are part of the process as well. If you've seen any of these benefits, please share your story with us in the comments.


Written by Curtis Yanko

Curtis Yanko is a Sr. Principal Architect at Sonatype and a DevOps coach/evangelist. Prior to coming to Sonatype, Curtis started the DevOps Center of Enablement at a Fortune 100 insurance company and chaired an Open Source Governance Committee. When he isn't working with customers and partners on how to build security and governance into modern CI/CD pipelines, he can be found raising service dogs or out playing ultimate frisbee during his lunch hour. Curtis is currently working on building strategic technical partnerships to help solve for the rugged DevOps toolchain.