Wicked Good Development Episode 11: Vulnerability drills - The intention, habit, and impact

July 01, 2022 By Kadi Grigg

27 minute read time

Wicked Good Development is dedicated to the future of open source. This space is to learn about the latest in the developer community and talk shop with open source software innovators and experts in the industry.

We now live in a world where it's not a matter of if you will get attacked, but when. So what can you do to protect yourself when that happens? In this episode, Kadi and Omar sit down with Developer Relations team members at Sonatype to discuss the value of engineering teams doing vulnerability drills. Learn why these drills shouldn't be viewed as incident response, but as a way of getting your operations and engineering in lockstep.

Listen to the episode



Wicked Good Development is available wherever you find your podcasts. Visit our page on Spotify's anchor.fm

Show notes

Hosts

  • Kadi Grigg
  • Omar Torres

Panelists

  • Sal Kimmich, Developer Advocate, Sonatype (Twitter: @Kimmich_Compute)
  • Steve Poole, Developer Advocate, Sonatype (Twitter: @spoole167)
  • Ilkka Turunen, Field CTO, Sonatype / (Twitter: @IlkkaT)


Transcript


Kadi 0:09

Hi, my name is Kadi Grigg, and welcome to today's episode of Wicked Good Development. This is a space to learn about the latest in the developer community and talk shop with OSS innovators and experts in the industry.

Omar 0:19

Hola, my name is Omar, and I'll be your co-host. We are dedicated to the future of open source, and today we will be talking about vulnerability drills.

Kadi 0:27

Today we have a great team from Sonatype, including our field CTO, the one and only Ilkka, and developer advocates extraordinaire Sal Kimmich and Steve Poole. Welcome, and thanks for being here today.

Ilkka 0:38

Well, glad to be back. I'm so glad that you've got the hang of my name now.

Kadi

I have. You're just Ilkka. It's kind of, you know, that one-word name now. So that's it.

Ilkka 0:48

Boom. There you go.

Kadi 0:50

So, Sal and Steve, can you just introduce yourselves and tell us a little bit about what you're bringing to the conversation today?

Steve 1:07

Steve Poole. I'm a developer advocate at Sonatype. My passion is Java and Java security, and just teaching developers how to create safer software, better software. You know, how do we teach developers to just do things better?

Omar 1:25

I love it. Sal, how about you?

Sal 1:27

Yeah, so I have been working in open source development, mostly in machine learning, for a little over ten years. My journey started at the National Institutes of Health, moved on to the Missile Defense Agency, and then the US Air Force. So really, I have been working in software in a situation where it really matters that we get it right. So I moved on to work in the open source supply chain because I saw that that's really where our problems are headed. So I'm really excited to talk about it today.

Kadi 1:57

Thanks for being here. So let's dive in.

Today, I want to talk about something I saw in a slide from one of Brian Fox's presentations, one that he's actually given a few times.

So if I were to tell you that you had a vulnerability right now, would you be able to tell me whether you're using this exact component and which applications it's in? Are you able to track its remediation across your portfolio? How long until you can ship or deploy an update?

In speaking with people in the industry, this seems to be becoming more front of mind for some enterprises. You know, all three of you talk with different organizations, large and small. So I'm curious: would you say that most enterprises nowadays can answer these questions?

Ilkka 2:46

Well, no. I mean, I guess that's literally why we're talking about all of this stuff.

Because, you know, one of the things that's been top of mind for me quite a lot recently has been: did we really learn the lessons that Log4J kind of granted all of us? Right, you know, it's months now since it happened, and there was a lot of fire drilling. And that's kind of what brought it back to the top of my mind: did we actually get any better at it?

You know, it felt like we did, but did we actually? And if something's going to happen tomorrow, let's just assume something's going to come out of the woodwork tomorrow, well, I don't think that people have stopped and asked themselves, "What would I do differently if that happened tomorrow?" And I think that's at the root of this discussion: well, what is it? How would you improve it?

Steve

Yeah. And it's not even if, Ilkka. It's when. It's just a matter of time.

Kadi 3:44

So are there things you can put in place, though, some type of forward-thinking protection? You know, are there certain parts of doing one of those Log4J fire drills that people should be really tuned in on and paying the most attention to? Are there certain steps in that process that require that level of granularity when looking at a post-mortem after one of these types of episodes?

Sal

Yeah, well, I mean, this is really where SBOMs are going to make a difference. A software bill of materials means you get exactly what you expect when you buy a carton of milk, right? I want to know what the ingredients are. I want to know where it came from. I want to know when it was packaged. You get the exact same thing. And you get a timestamp, which is so important.

I think people are now understanding that they may have to put together this sort of ingredients list of how they build out their software. But more importantly, the reason we're still seeing these vulnerabilities seeping in is not that people are still actively downloading vulnerable components. It's that, historically, the vulnerability is probably sitting in some deep layer of their code base that they don't know how to access. So it's really by getting that historical hygiene, being able to step back through my records and see exactly when and how things were downloaded and composed, that we might be able to start tackling this. And without that, you may never be able to do a good fire drill. You're just going to be working at the surface level.
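To make the ingredients-list idea concrete: below is a minimal sketch of what one "ingredient" looks like in a CycloneDX-style SBOM, assembled by hand in Python. The log4j-core component shown is just an illustrative example; in practice, an SBOM tool in your build pipeline would generate this for you.

```python
import json
from datetime import datetime, timezone

# A minimal CycloneDX-style SBOM: the "ingredients list" plus the
# timestamp Sal mentions. Real SBOMs are produced by build tooling;
# this hand-rolled example only illustrates the shape of the data.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "metadata": {
        # When the "carton" was packaged.
        "timestamp": datetime.now(timezone.utc).isoformat(),
    },
    "components": [
        {
            "type": "library",
            "group": "org.apache.logging.log4j",
            "name": "log4j-core",
            "version": "2.14.1",
            # purl: a standard identifier for "where it came from".
            "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1",
        }
    ],
}

print(json.dumps(sbom, indent=2))
```

With a record like this stored per build, "which applications contain log4j-core 2.14.1?" becomes a query over your SBOMs rather than an archaeology project.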

But I also think, and I'm curious about this, whether we're chasing a bit of a red herring here. Do I want my developers to be off doing drills when they could be developing stronger infrastructure and stronger daily practices that might be more preventative and restorative?

Steve 5:39

So I'm not sure I completely agree with what you said. Actually, no, I do agree with what you said.

My feeling is that the companies that are at the point where they can implement an SBOM are much better positioned already. And there are many companies out there, not just our customers, that just don't know what they've got. So even the idea of saying, you know, you need more insight, they're not even at step one.

Log4J demonstrated that. I've had conversations with developers, and it's been quite clear that many developers have no clue what they've got in their software. They think it's somebody else's problem. And what Log4J showed, because of the emergency, is that they got pulled into having to fix the problem and didn't even know where to start.

So one thing we need to do is help developers understand the technologies their companies use to find these things, and work out whether those are actually useful.

And the second thing is to get to the developers, because, obviously, they're making the choices about what software to use, and get them thinking about, well, what tools do you use to find out what software you're running? Because I think a lot of developers just ignore that, as if it's not their problem.

Ilkka

Well, you know, you're both right, actually, because I don't think fire drilling is just about open source components. That is definitely one kind of fire drill that's been going on.

Part of the problem is, with these security drills, you never really know what it's going to be about, right? Because it can be anything. It can be that your own code failed and you got hacked. So where did it come from? Where did it happen? It could be that your infrastructure configuration was wrong and they hacked through that. And then something else happened.

You know, one of the key things that I think is left unsaid in most organizations is that every incident management process has fairly simple stages, right? Identify where the breach occurred, contain the breach, and then issue mitigating actions until you call it a day. And that's that, right? It's as simple as that.

But the real problem is, if you don't even have the steps to do that, the only thing you can really do is poke around in the dark and try different things. So really, you know, to me, when I think about this, Log4J is obviously near and dear for us, because that's usually how it happens.

Most of these attacks are actually multimodal. They come in through some window, say Log4J is the entry point, and use that as a landing point to arrive in the organization. And actually, there are automated hacks here; they'll scan for other things. And if they find something, it's called pivoting: they get in through the door, and then they pivot into another attack method, find another vulnerability, like a poorly configured database, or, you know, the latest Linux vulnerability that came up last week, the bad shell one, yes.

So they'll find another vulnerability, increase their privileges, find other things, and look for other important things. But that's not necessarily how it always works, either. It could also be that they go through the door, through that initial vulnerability, and there's no indication that anything's been breached, because they know how to hide their tracks. They'll spend some time doing reconnaissance, right? You know, look at your networks, see your databases, move around, backdoor a bunch of your machines, and use that as a way of doing the secondary hack.

So when that sort of stuff comes out, that fire drill is a lot more complex, because you just have to assume that everything's tainted, and you have to nuke it, re-roll it, and come back from that. And when we're talking about vulnerability drills, one of the reasons these things go wrong is, I honestly think many engineers don't even understand the difference between "we've been hacked, and now we have to deal with that" versus "there is a new vulnerability, and we have to mitigate a vector for a potential attack."

And I think that's part of the real issue: organizations typically aren't very good at communicating this stuff, because it's usually the domain of two people in the entire company, and they can only do so much. So that's one of those challenges you see: people are like, "Well, why should I spend my time hunting for infrastructure bugs when I'm a core developer? I don't even care about that." I think there's a lot of that sort of thing.

It's a very natural human friction to overcome. And I think that's at the root of it, when I think about it. So I agree with both of you, Sal and Steve, in that sense. It's a more complex issue than it seems at face value.

Steve 10:38

Yeah, I agree. It is complicated. You know, we're talking about fire drills here, and honestly, I'd say developers do have to do fire drills. They don't have to go off and work out what version of vulnerable Log4J they've got. But I think if the development community started to pay attention, and I think that would be a big thing, to exactly what it is they've got, what they're consuming, and started the process of working out how they would determine that they're using a particular version of something...

I think that would stand them in good stead. Because then they can start to have those conversations with the IT folks who are providing the tooling, you know, the protection tooling, to make sure it actually is finding the things we expect.
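As a concrete version of the exercise Steve describes, here is a hedged sketch, assuming a Maven project with `mvn` on the PATH, that lists the resolved dependency tree and flags any log4j-core below 2.17.1 (the release that closed out the last of the Log4Shell-era CVEs). The parsing is deliberately naive and would need hardening for real use.

```python
import re
import subprocess

# Ask Maven for every resolved dependency, direct and transitive.
# Assumes `mvn` is on the PATH and we're inside a Maven project.
result = subprocess.run(
    ["mvn", "dependency:list"],
    capture_output=True, text=True, check=True,
)

# Dependency lines look like:
# [INFO]    org.apache.logging.log4j:log4j-core:jar:2.14.1:compile
pattern = re.compile(r"org\.apache\.logging\.log4j:log4j-core:\w+:([\d.]+)")

found = False
for match in pattern.finditer(result.stdout):
    found = True
    version = tuple(int(p) for p in match.group(1).split("."))
    # Treat anything below 2.17.1 as needing attention.
    status = "needs attention" if version < (2, 17, 1) else "looks OK"
    print(f"log4j-core {match.group(1)}: {status}")

if not found:
    print("no log4j-core on the resolved dependency path")
```

The point of the drill isn't this particular script; it's knowing, before the emergency, which command answers the question "are we running version X of component Y?"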

You know, the industry has spent a long time producing security software. And in some ways, it's just been seen as insurance, the "I have a cybersecurity tick-box" thing. And, of course, we're way past that now. It isn't about showing somebody that you have a piece of software that may or may not find something. The fact is, now you have to have real defenses and real prevention and detection, because the world has moved on.

So you need to start thinking a lot more, not about if you get attacked, but simply about when. And if you don't have that mindset, you just assume, "Oh, it'll happen to somebody else." Well, guess what? It's gonna happen to you.

Sal 12:06

Yeah, I think this is the point I want to dive into. Maybe five, even ten years ago, we were looking at a much more traditional model where cybersecurity was a little more siloed. You weren't necessarily considering it inside of your engineering team. And engineering teams, when doing these fire drills, would see them as a nuisance. And they would, and do, genuinely produce operational fatigue.

In the old model, if you were doing fire drills well, you were doing them not so you could necessarily do incident response at an engineering level. They were really meant to make sure that you streamlined the operational response: streamline the way you're going to communicate it externally, and make sure all of that is in place. Because you don't know how long it's going to take you from the point of a true incident, and you want to ensure you have as little friction as possible.

What I'm trying to say, in wanting to move from incident response to something more preventative, is exactly that: the game has changed. It's no longer just about streamlining at an operational level. The engineering really does have to get it right the first time. We have to be able to mitigate in as close to real time as possible. And it is true: if you look at the statistics, you should expect to get hit two or three times a year minimum, and that's definitely going to increase.

If you don't have an operational level and an engineering level that are in lockstep with each other, you're going to have not just fatigue from the drill; you're going to fail under real conditions in a way that can be a business deficit.

So, I am curious to see how we change and move that mindset, in real time, for developers who are really going to have to change how they think about these drills.

Kadi 13:50

You know, I think that's a good point you bring up, Sal, because all three of you have actually alluded to this: development will often see running these drills as kind of a nuisance. And we need to move away from that mindset. So that's a little bit of a cultural change, too.

It also makes me wonder, when we talk about large versus small organizations, what the implications are. Because I feel like a lot of small enterprises won't think about these types of things until they're actually attacked, right? It's kind of like that teenage mindset where you're like, I can get away with everything, and I'm invincible.

You know, this conversation also makes me think: the smaller you are, the more you should be thinking about it now, because if you're building up a good practice from the beginning, later on down the line it's going to be a little bit easier.

But all that being said, my original question is: how do we get development's buy-in to participate in a process like this, where they should be caring about getting it right from the beginning?

Ilkka 14:47

So you know, that's a really insightful question you're posing there, because that is, first of all, the million-dollar question: how do we get anyone's buy-in?

It's like in a physical office building, if you remember those; they used to be a thing. You knew when a fire drill happened on a Tuesday at 10 am, right? And you knew it was a fire drill because, again, it always happens around the same time. You just kind of begrudgingly grab your stuff, lug yourself out, and, you know, get to the assembly point.

It is like that; the analogy is exactly the same when we're talking about, hey, let's simulate an attack, right? There's actually a discipline for that inside security teams called red teaming. It literally means we have a team inside who offensively attacks our own code. And, you know, as things happen, we treat them as if they were serious threats and mitigate them.

I mean, not every organization does that. But it's important, right? It's kind of like, well, what's the best way of avoiding things? You know, HackerOne has made a billion-dollar business out of that particular mode of offensive security all on their own. So when I'm thinking about it from this sort of view, my mind springs to my history in engineering process and engineering management. And, you know, how do you get value streams from left to right, bottlenecks, and all the usual lean stuff.

And when I think about it from this point of view, I think about it in terms of lost productivity rather than anything else. The engineering team takes a little bit of a hit, a deduction from their effective time to produce code and deliver software to their customers faster. But the preventative side will save more time down the line, and more reputational and actual hacking damage, right? That has to be the balance that gets struck. And I honestly don't think in many organizations that that balance is always entirely thought out. I think some government organizations have a better sense of this.

And it's precisely because the incentive is, we've got to get stuff out the door, right? So any sort of investment in not doing that is going to be a distraction. But you know, that's one of those things we actually see in the software supply chain sphere. The analogy in medicine, actually, is washing your hands before you go into surgery.

I might have told this story like a billion times to some people, but I think it's a very good analogy. You know, surgery has existed for thousands of years. And it was a painful and often very deadly affair. Either you died on the operating table, or you died after the surgery was complete, because something happened, right? Because you got sepsis or blood poisoning, or something else very gruesome. It wasn't generally safe; it was a method of last resort.

Then one day, someone figured out that if you washed your hands before cutting someone open, the survival rate, I think, literally tripled, like immediately, from 25% to 75%. It was a drastic change.

And they realized that there must be something, you know, some sort of microorganisms on your hands, getting into the body and causing all of these ill effects. But figuring that out wasn't the funny part.

The funny part was when they went to the wider medical community and said, "Hey, everyone, listen, if you just wash your hands for 20 seconds before you cut someone open, you're gonna save three times as many lives."

Now imagine you're a surgeon. You go, "Well, what's that got to do with cutting people open? What's that got to do with anything? If anything, it's gonna make my hands more slippery if I don't dry them properly; that's not going to help me out."

Well, the moral there is that they finally clocked that 20 seconds of extra effort in front of a multi-hour operation had this sort of effect. That's how I tend to view this sort of stuff.

It's not about wasted productivity as much as it is about finding what those hand-washing activities are. They have to be small enough from an everyday perspective that they can be practically executed. They have to be fairly replicable. They have to be done often, in repetition; the best thing you can do is make it into a habit that you just do every day.

Listeners from around the world might have noticed that mask-wearing has gone down, especially in London, over recent days. And we were actually talking earlier today about how, out of habit, I'll still put the mask on, just because it's kind of a habit. And Sal, I believe you were mentioning that you saw someone actually take a phone call by first putting their mask on.

Yeah, exactly. When you get a habit so ingrained, to that sort of level, that's when it's effective. But if it's a distraction, if it's poorly thought out, poorly executed, you know, a biweekly, monthly, once-a-quarter, once-every-half-year type of thing, you're not going to gain the lessons you intend to gain. That's kind of where, when I think about it from a pure value-flow perspective, these things need to go. And I think that's often where it falls down, because the people designing those exercises have no understanding of, or sympathy for, those who have to execute them.

Steve 20:26

Yeah.

So you touched on productivity, but ultimately we're talking about how we incent the developer community to pay more attention to things, faster. Productivity is their key measure. It is about delivering value. And we're already saying that, because of the scale of the attacks and things like zero-days becoming almost negative days, you can't wait around any longer. You have to be able to apply fixes as quickly as possible. And that requires you to have a supply chain delivery process that's very fast and slick. So you've got to do that anyway, either for security reasons or simply because it allows you to deliver value for your business.

So it's a win-win if you do this. That's how we encourage... I'm encouraging developers to see that as the answer.

Sal 21:25

Yeah, and there's this organizational muscle memory, but it's for a different sport. The kind of cybersecurity attacks we're seeing now are a completely new sport.

This is not just impacting big organizations. This impacts big and small organizations, and they all need to be aware of it. It is not the case that if I have a small consultancy, for example, and I allow a cybersecurity risk to remain in my code, I will not get caught up in this. I will get caught up. And this is not just in terms of having the vulnerability; in real business terms, this will have a bottom-line impact. It can produce downtime, I may be legally required to disclose that vulnerability, I can lose clientele, or possibly fall short of my service-level agreements, right?

Those are my legal obligations to my customers. I would argue you are much more at risk these days as a smaller enterprise than as a big one, because you are just as likely to get hit by one of these blanket attacks, but you may not have the developer resources to handle it with real-time accuracy the way a larger enterprise does. So it is equally if not more important for a smaller enterprise to be building up this muscle memory, making sure they're doing this preventative care, and having real awareness.

Steve 22:42

Yeah, that's a really good point.

I think that's one of the hardest things to get across to developers. They may work for a small company, but it can be vitally important in the attack chain, and they need to understand that. As you said, if you're a small company... I remember talking to some developers at a small company, and I can't exactly say what they did. But we had a conversation: "We're not at risk." "Well, who are your customers?" And they start listing all the big car manufacturers. "Well, then you're a vector for getting into those systems, because they trust you. If you get compromised, you're their next stepping stone." And you see the lights come on, because suddenly people appreciate what their position in the world is. And that's part of what we have to fix over the coming months: getting people to realize that they are part of this supply chain, whether they know it or not.

And I think the other thing I want to get across to people, and this has sort of crystallized this year, is that we talk about bad actors and state actors and stuff like that, and we have this sense that there is some geographical distance. And the simple answer is, the only thing between you and the bad guys is software. That's it. There is no distance; it's just software. And here we are talking about software vulnerabilities. So there's something to be taken away there.

Ilkka 24:13

Yeah, you know, completely as an aside, I don't think we've answered it yet; that's why I said it's kind of the million-dollar question.

If we knew how to answer this, we'd all be millionaires, I think. But, you know, the interesting thing that made the penny drop for me about this, many, many years ago when I was hands-on-keyboard doing this sort of stuff, was really this:

I heard someone say, "Did you know the bad guys also do Scrum? They also have sprints, and they also have deliverables. They're probably better at it than you are, right? They're probably much more productive."

Steve 24:50

They are definitely more motivated.

Ilkka 24:55

Yeah, exactly. And they've got kind of a direct return on investment.

And so, if you think about it in that sense, it feels like an abstract threat, but actually, that's a very, very true statement. So we've touched on the fact that the first breach that happens to you might not be the last one, right? That's how they get in through the door; then they'll find other things, and other things, until they get to something good and do something. That's a very typical way of working, and not just for nation-state hackers. Often, when you hear about these highly sophisticated campaigns, that's really what it means: somebody got in through the door, found something else, and did something clever with it.

So there's that sort of thing: understanding and preparing yourself for the threat. Because I still think the fundamental cognitive dissonance is that you tend to think of the hack as somebody with their hood on who goes, "I'm in," and then, five seconds later, there's a bar that fills up and says, "You've got 5,000 Bitcoin. Thank you."

And that's totally not how it works anymore; it never has. And if that's your perception of the kinds of threats you're under, then clearly you're not going to be able to prepare yourself for them. But if you go, all right, imagine there's an actual offensive team on the other end. They do software like you do, except they're better at delivering it and better at executing it.

And what can you do? You probably do, you know, a defense of attrition, right? You close off as many holes as you can, making it very complicated and hard to advance inside your infrastructure. And when we think about fire drills, I feel like that might be... I mean, I could be completely talking out of my head here, but it feels like that would probably be a much more productive exercise to run. And I think, from a developer's perspective, the good news is that the hand-washing thing is often not just the right thing to do from a security perspective. Getting into the habit of maintaining dependencies and things like that, just spending five minutes every day on my dependencies, right? When was the last update? Are we tracking near the latest? You know, those sorts of things. It also averts technical debt, right?

Which is generally considered a good thing for the quality of your software. It's generally considered good for maintenance. And that's one of those investments that, you know, I was always very bad at making as an engineering team, right? Just maintaining the platform, maintaining the code base, reducing technical debt.

So many of these vulnerability fire drills don't just have to be about the vulnerability per se. They might actually be about establishing a new, better engineering process. Because it turns out, if you do that and you get really good at it, you get good at reacting to abnormal situations as well.
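As one possible shape for that five-minutes-a-day habit, here is a minimal sketch for a Python project that compares a few installed packages against the latest release on PyPI. The package list is hypothetical; point it at your own direct dependencies, and note that tools like `pip list --outdated` or automated dependency bots do this more robustly.

```python
import json
import urllib.request
from importlib.metadata import version

# Hypothetical direct dependencies to review each morning.
PACKAGES = ["requests", "urllib3", "jinja2"]

for pkg in PACKAGES:
    installed = version(pkg)
    # PyPI's JSON API reports the latest published release.
    with urllib.request.urlopen(f"https://pypi.org/pypi/{pkg}/json") as resp:
        latest = json.load(resp)["info"]["version"]
    marker = "ok" if installed == latest else "UPDATE AVAILABLE"
    print(f"{pkg}: installed {installed}, latest {latest} [{marker}]")
```

The value is less in the script than in the repetition: a check small enough to run daily is the hand-washing habit Ilkka describes.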

Omar 27:48

Could we dive into that a little more? What would be a successful metric to say, okay, we're practicing these fire drills and they're working, and how can we measure that our teams are more effective because of them?

Ilkka 28:02

Well, you know, there was actually a really interesting metric that came out in last year's report, the 2021 State of the Software Supply Chain. It's called time to update.

So the backstory is, there are quality metrics for a bunch of open source libraries. There's a framework from the OpenSSF that talks about how well a piece of open source is maintained, and it has a security dimension and a quality dimension; the OpenSSF Security Scorecard also covers this. But what we found, to save you a long story, is that the simplest possible metric that actually gives a positive indication of security stance, as in, you're making improvements in the state of your security by not having vulnerabilities in your open source, was just tracking a metric called time to update, or TTU.

Basically, it means: when any of your dependencies has a new update, let's say you have an application with 100 dependencies, 20 direct and the rest transitive, how quick are you to apply that update? When you look at it across a population, the median time is three to six months; that's how quickly the average engineering team does it. But the smaller you can get that time, the more strongly it indicates that the security state of your software is good.

We found that by looking at a population of open source: we looked at 10,000 open source projects and calculated all of their TTU values over a period of time. And we found that the smaller the TTU number, the better they were; their security vulnerability exposure was much lower. Their ability to react to security situations and publish new releases was also heightened, because it turns out a lot of publishing a new version is just about maintaining your dependencies and your tech stack.

So if I were to invent a metric and a North Star, I'd probably look at something like that, because it's simple to calculate. It's a pretty simple number: if it goes bigger, something's going wrong; if it goes smaller, we're doing something right, so do more of it.
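The episode doesn't spell out a formula, so treat this as a hedged sketch of one way to compute a time-to-update number for your own team: for each dependency update you shipped, measure the days between the upstream release and your adoption of it, then take the median. The dates below are made up for illustration.

```python
from datetime import date
from statistics import median

# (upstream release date, date we adopted the new version) -- sample data.
update_events = [
    (date(2022, 1, 10), date(2022, 1, 14)),   # patched within days
    (date(2022, 2, 3),  date(2022, 4, 20)),   # a couple of months behind
    (date(2021, 11, 5), date(2022, 5, 1)),    # half a year behind
]

ttu_days = [(adopted - released).days for released, adopted in update_events]
print(f"median TTU: {median(ttu_days)} days")  # smaller is better
```

Tracked over time, a shrinking median is the "doing something right, do more of it" signal Ilkka describes; a growing one is the early warning.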

Steve 30:26

Yeah. And these are sort of second-order things, but if you look at quality metrics, code complexity, those sorts of things, technical debt: the less of that you have, the more it suggests you're putting effort into better engineering. If you're more focused on producing quality code, you're more likely to have better build systems. And you're definitely going to fix some of the really silly coding problems we see that let people in. So it's not necessarily a direct measure of whether you can be hacked or not, but it demonstrates that good engineering pays.

Sal 31:02

Yeah, something I have been considering lately is the fact that everyone is attempting to adopt CI/CD, continuous integration and continuous delivery. And inside of this, increasingly, we need to think about continuous monitoring, or continuous security, as well. This is something we absolutely have to be aware of. And it's essential, when we consider that we are all participants in an open source supply chain, that you should not just be considering your own time to update your dependencies. You need to consider it for the dependencies you're taking in, right, for the components you're taking in: are they able to update in a timely manner? Is my upstream ensuring that security is primary for the sources I pull from? If not, you're only going to be as secure as the weakest link you are consistently pulling in. And it's that global awareness, that none of us are working in our own silos. In this case, that means, you know, if I hold the belief that 90% of my application is built on open source, another way to think about that is that 90% of my entire engineering team is made up of strangers I don't know and will never meet, right? What can I do to ensure they are living up to the standard I need for my business to function well? You really have to put metrics in place. You cannot have human-based trust in a world where everything's moving so fast; you really have to have a quantitative metric you can pull from, and call that your standard.

Steve 32:52

Yeah, I think that is a really good point. People forget just how much software they depend on. So, all those aspects, and, like Sal said, if you're going to choose a dependency, put more effort into it than just "it's got a feature I want." Look at how good they are at doing engineering; if they're worse than you are, then you've got a problem.

Ilkka 33:15

I think a lot of this is easier said than done, right? Because, in the end, how long do we spare a thought for any of these things? Usually we're talking a matter of seconds; if you find a minute of cognitive time going into thinking through all of that, you know, that's a miracle, and more power to you if you're able to do it. Still, it is also a measure of real engineering maturity, right? And hence, the engineering part of my brain goes back to: clearly, everything is signaling to me that there's sort of an engineering system here, right? There's a trick that you're missing. It's like when somebody figured out that, hey, you can put the cars on a conveyor belt and they can move from one station to another, rather than hauling all the parts to the car; all of a sudden, it's a huge leap in productivity. It's kind of the same thing. We're spending all this energy assembling this stuff, and there are so many aspects of it where we spend the smallest amount of time even considering what that stuff is. Clearly, then, we could get better. And I mean, there's research showing that the companies that are pretty good at this are like 20% more productive than their peers, which is not an insignificant improvement at all.

Steve 34:34

Yeah, I mean, we're not asking every single developer out there to become a super developer. But we are definitely saying that it's time engineering teams start focusing on these problems. Maybe they start growing individuals with the talent and the skill sets to provide these pipelines and start focusing on these things. And then, I think, once people start to appreciate the connection between code quality and security, we can expect to see the needle begin to move on those things. We're definitely not asking 25 million developers to become security experts. But we are definitely asking 25 million developers to start thinking about what needs to happen.

Omar 35:25

Only because of time, I'm gonna say we can wrap it up. I'm curious if there are any final comments or things to say to the developer world, to anybody who might be listening, about vulnerability drills, or what we could be doing. Just any final words.

Sal 35:42

Well, they should still do them. But we need to get a little bit smarter about how and why we're doing them and be really intentional about the intended outcome. This should not just be an exercise in muscle memory for its own sake. It should really be about getting ready to play a new sport.

Steve 35:59

And beyond this particular thing about fire drills, we're going to do what we can to help educate the development communities around all these issues and help people be more effective. We'll do what we can to provide the education and the guidance, not just us, obviously, but also get the communities to start paying attention to this, because we're all in the same position. It's not just a few people.

Ilkka 36:27

Yeah, I think, you know, the advice is: less is more, in terms of time spent. But, as with many things, this is about how you form a habit more than anything else. It's understanding the risks and then forming positive habits that help mitigate them. Because, hey, the fastest way to avoid having a security vulnerability is not to have it at all, right? So the better you can get at that, the better you can build that muscle, and it turns out you'll probably get better at delivering software in general by doing it anyway. So I think there's a lot of that sort of work that can be done. We love to write about it; go to the Dev Zone and find out more.

I think the only other thing to say, on the general state of the world, is Slava Ukraini.

All 37:12

Yeah.

Kadi 37:15

Well, thank you, everyone, for being on today. This was really great.

Kadi 37:19

Thanks for listening to another episode of Wicked Good Development, brought to you by Sonatype. This show was co-produced by Kadi Grigg and Omar Torres and made possible in partnership with our collaborators. Let us know what you think, and leave us a review on Apple Podcasts or Spotify. If you have any questions or comments, please feel free to message us. And if you found this valuable content, share this episode with your friends.

Till next time.

Tags: Software Supply Chain, Community, podcast, DevZone, Wicked Good Development

Written by Kadi Grigg

Kadi has been passionate about the DevOps/DevSecOps community since her days of working with COBOL development and mainframe solutions. At Sonatype, she collaborates with developers and security researchers and hosts Wicked Good Development, a podcast about the future of open source. When she's not working with the developer community, she loves running, traveling, and playing with her dog Milo.