The Power of Data in DevSecOps

January 28, 2018 By Derek Weeks

3 minute read time

“In God we trust. All others must bring data.” – W. Edwards Deming

 

Data in Depth

Just like Deming, we believe there is nothing more important than good data. It’s why we’ve invested in building out not only the best data research team in the industry, but also the largest. It’s why our accuracy in identifying vulnerabilities is 99% (and getting better).

Let me explain. Successful DevOps practices depend on automation. Automation supports speed, but many people fail to see the whole picture: in a DevOps pipeline, speed is critical in both directions, forward and backward.

Perspective on Feedback Loops

Most people imagine things moving forward in a DevOps pipeline: source code, binaries, build artifacts, containers, tests, and deployments. When an element is checked in or approved, it moves forward to the next step in the pipeline. If all of the steps succeed, the new code lands in production. But no organization deploys new code perfectly every time. When a build, integration, or test fails, a feedback loop is triggered back to a previous step, or perhaps all the way back to a developer.


Feedback loops in a DevOps pipeline are critical. They not only allow corrections to be made; they also mark the starting point for mean time to repair (MTTR). MTTR begins when the feedback loop is triggered and ends once the corrected code returns to that trigger point.
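To make the metric concrete, here is a minimal sketch of how MTTR could be computed from feedback-loop events. The timestamps, field names, and `mean_time_to_repair` helper are hypothetical; real tooling would pull these timestamps from CI logs.

```python
from datetime import datetime, timedelta

# Hypothetical feedback-loop records: each notes when the loop was
# triggered (e.g. a failed build) and when the corrected code returned
# to that same trigger point in the pipeline.
feedback_loops = [
    {"triggered": datetime(2018, 1, 25, 9, 0),   "returned": datetime(2018, 1, 25, 9, 15)},
    {"triggered": datetime(2018, 1, 25, 11, 30), "returned": datetime(2018, 1, 25, 11, 52)},
    {"triggered": datetime(2018, 1, 26, 14, 5),  "returned": datetime(2018, 1, 26, 14, 18)},
]

def mean_time_to_repair(loops):
    """MTTR: the average of (return time - trigger time) across loops."""
    repairs = [loop["returned"] - loop["triggered"] for loop in loops]
    return sum(repairs, timedelta()) / len(repairs)

print(mean_time_to_repair(feedback_loops))  # 0:16:40 for the sample data
```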

Automating the Entire Feedback Loop

At Sonatype, our Nexus platform enables DevSecOps teams to automate triggers and feedback loops when analyzing open source, third-party, and proprietary components. To achieve this at DevOps scale, speed is critical, but precision is of the utmost importance.

Let’s say a development team analyzes a build running on their Jenkins CI platform. Our Nexus Lifecycle integration helps the team identify three vulnerable components being used in that build. Once identified, feedback is automatically sent back to developers. The feedback loop has been initiated.

Next, Nexus Lifecycle automates delivery of information about the vulnerable components and any safer alternatives that might be available for use. This feedback enables the developer to select three new components quickly and resubmit the code for the next build cycle. Let’s say this process, the complete loop, takes 15 minutes.
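To sketch what that automated loop looks like in code, here is a simplified stand-in. The vulnerability table, placeholder issue IDs, and `evaluate_build` function are hypothetical illustrations, not the actual Nexus Lifecycle API, which draws on far deeper research data.

```python
# Hypothetical vulnerability data keyed by (component, version).
# The issue IDs below are placeholders, not real CVE identifiers.
KNOWN_VULNERABILITIES = {
    ("commons-collections", "3.2.1"): "EXAMPLE-ISSUE-001",
    ("struts2-core", "2.3.15"): "EXAMPLE-ISSUE-002",
    ("httpclient", "4.3.1"): "EXAMPLE-ISSUE-003",
}

# Hypothetical remediation hints: a known-safe version per component.
SAFER_VERSIONS = {
    "commons-collections": "3.2.2",
    "struts2-core": "2.5.30",
    "httpclient": "4.5.13",
}

def evaluate_build(components):
    """Flag vulnerable components and attach a safer alternative, if any."""
    findings = []
    for name, version in components:
        issue = KNOWN_VULNERABILITIES.get((name, version))
        if issue:
            findings.append({
                "component": name,
                "version": version,
                "issue": issue,
                "suggested_version": SAFER_VERSIONS.get(name),
            })
    return findings

# Components pulled from a build manifest (hypothetical).
build_components = [
    ("commons-collections", "3.2.1"),
    ("struts2-core", "2.3.15"),
    ("httpclient", "4.3.1"),
    ("guava", "27.1-jre"),
]

for finding in evaluate_build(build_components):
    # In a real pipeline this feedback would flow back to the developer
    # automatically: a failed build status, a ticket, or an IDE alert.
    print(finding)
```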

The Impact of Shallow Data Depths

Alternatively, if you inject poor data into the pipeline, MTTR extends dramatically. For example, in a case where we might uncover three vulnerable components, a competitor without our data depth might find 80. Unfortunately, 77 of those 80 findings are false positives. While the same feedback loop is triggered in both scenarios, remediation time in the latter scenario has to account for evaluating 77 more components and their supposed safer alternatives. What could have been a 15-minute remediation window might now stretch to hours or days.
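To put rough numbers behind that claim: assuming, purely for illustration, that each flagged component costs about five minutes to evaluate, the two scenarios diverge quickly.

```python
# Back-of-the-envelope comparison of the two scenarios above. The
# five-minutes-per-component triage figure is an assumption for
# illustration, not a measured value.
MINUTES_PER_COMPONENT = 5

precise_findings = 3        # three findings, all real
noisy_findings = 80         # three real findings plus 77 false positives

precise_loop = precise_findings * MINUTES_PER_COMPONENT  # 15 minutes
noisy_loop = noisy_findings * MINUTES_PER_COMPONENT      # 400 minutes

print(f"Precise data: {precise_loop} minutes")
print(f"Noisy data:   {noisy_loop} minutes (~{noisy_loop / 60:.1f} hours)")
```

Even before adding rebuild time, the noisy scenario consumes most of a working day, and that is for a single build.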

The story of poor data doesn’t end at longer remediation times, extended feedback loops, and delayed releases. The real killer is that longer remediation times suck away precious innovation time from developers. Rework equals less innovation.

Meet Our Team

At Sonatype, our data services team ensures the precision of the data we deliver to our customers. Don’t just take my word for it: meet the team in this awesome new two-minute video.

 


Tags: OSS governance, devsecops, jenkins, Software composition analysis, open source vulnerability

Written by Derek Weeks

Derek serves as vice president and DevOps advocate at Sonatype and is the co-founder of All Day DevOps, an online community of 65,000 IT professionals.