Securing software supply chains and dependency confusion — An industry perspective

March 08, 2021 By Derek Weeks

29 minute read time

Following a growing trend in software supply chain attacks that use "dependency confusion" or "namespace confusion" techniques, I sat down for a discussion on software supply chain security with a few experts on the topic.

  • Dr. David Wheeler, Director of Open Source Software Supply Chain Security at the Linux Foundation
  • Dr. Trey Herr, Director of Cyber Statecraft Initiative at the Atlantic Council
  • Brian Fox, CTO and Co-founder of Sonatype

As the attack vector continues to gain steam in the early months of 2021, we chatted about what's happening, why this vector has taken off, and how organizations can protect themselves.

You can watch the full discussion on YouTube and/or read the transcript below.

 

Derek Weeks: Hey, everyone, I am Derek Weeks, VP at Sonatype, and I am joined here by an esteemed colleague and two friends from the community. We're going to talk about some software supply chain attacks that have happened recently.

First off, I'm joined by my colleague, co-founder and CTO of Sonatype, Brian Fox. We have Dr. David Wheeler, from the Linux Foundation, where he is the director of open source supply chain security. We also have Dr. Trey Herr from the Atlantic Council, who is the Director of Cyber Statecraft Initiative.

All of us spend a lot of time within our own organizations and within the community talking about securing software supply chains. That's really the basis of the discussion that I wanted to gather you three here for today. To start off, I want to focus on some of the more recent news on software supply chain attacks, going to Brian to talk about a new or novel supply chain attack using "namespace confusion" that's happened within the npm repository and some others. Brian, introduce us to what happened, what the news was, and why people should pay attention to this particular new form of attack.

Brian Fox: With a lot of these things, they're only novel in that the rest of the world is paying attention to them. I think that's probably a pattern we can agree on for most of these. So what happened in this instance is that npm doesn't have a strong default namespacing scheme.

So what do I mean by that? In Java and Maven land, for example, we use the reverse DNS of a company as the base package for the classes, and Maven adopted that as well for component coordinates. In npm there wasn't one, and when npm was first introduced, there was no possibility to have that. So you basically just have a very flat naming structure.

Later, because they were running out of good names, I guess, they added a concept called scopes, which was really a namespace, so you could have @company/my-project, right, as opposed to just my-project. The problem is it wasn't required. It wasn't enforced. And by the time it was introduced, I think a lot of patterns had been set in place.

What this latest exploit takes advantage of is this: if you can figure out the name of a package being used internally at a company (not an open source project), say "my-project", you go to the public npm repository and register and publish something called "my-project" there. You put it in there with a high enough version number. When npm tries to resolve the package, it's normally looking at multiple different sources, both internal and public. In this case, it's going to see: oh, I need the latest version of "my-project", and the one that happens to be in the public repository has a higher version, so that's the one I'm going to get.
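
A minimal sketch of that failure mode, with hypothetical names (neither "internal-app" nor "my-project" is a package from the incident): suppose an internal application declares a dependency that exists only in the company's private registry.

```json
{
  "name": "internal-app",
  "dependencies": {
    "my-project": "^1.0.0"
  }
}
```

If the private registry hosts my-project@1.2.0 and an attacker publishes my-project@1.99.0 to the public npm registry, a resolver that consults both registries and takes the highest version matching the range will fetch the attacker's package, and its install scripts will run on the machine doing the install.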

So the researcher figured this out. For some of us, it wasn't new, but I guess I certainly didn't think to go collect a bunch of bug bounties when I wrote about it a couple of years ago, and I was talking about it back then. The Java modules group, when they were trying to figure out how to do naming for Java modules in Java 9, was thinking about doing away with some of the namespacing. I used this as an exact example of why not to do that, because what we have in Java, and what we have in Maven, is actually one of the better situations compared to the other ecosystems. Let's not replicate that behavior, right? So that's the gist of what happened there. Unless a company moves to using scopes, or has other mechanisms in place, it can be very hard to defend against.

Derek Weeks: There were some 30-odd companies that were prone to this kind of attack. Alex Birsan, who did the research, exposed them, whether it was Apple, Zillow, Tesla, Amazon, or whoever, by creating npm packages that mimicked the names of packages those companies were using internally. When the build systems went to find the latest version, they checked external repos first instead of their own, which was the vulnerability.

A lot of copycats have since shown up. In the days following Alex's post about it, and our post about it, maybe two days later, something like 275 copycats appeared. It's either: this is a way to stage an attack on a company that I know or want to target, or: oh my gosh, this Alex person made $130,000 in bug bounties, and the guy lives in Romania, that's got to be a fortune for him. I should go out and do the same thing, because hey, what a way to make $130,000 in six months.

David, you're focused on open source, software supply chain security. When this kind of news came along, what was your thought on reading about this?

Dr. David Wheeler: We've seen attacks like this before. In fact, if you want to go much further back, if you squint a little bit: decades ago, when people were typing commands into their terminals to make a program run, we had a very similar problem with what's called paths. If you typed in the name of a program, which program runs? Is it the program you intended, or some other program that maybe an attacker slipped in?
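
That PATH problem has the same shape as today's namespace confusion; a minimal sketch, with a hypothetical directory:

```sh
# '.' ahead of the system directories means a program in the current
# directory wins the name lookup.
export PATH=".:$PATH"
cd /tmp/untrusted   # hypothetical directory an attacker can write to
ls                  # runs ./ls if the attacker planted one, not /bin/ls
```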

We've also seen this kind of namespace confusion before. This is just another instance of it. We talked earlier about how what we need to do, continuously, is watch what the attackers do, and then learn from them. We need to build our systems, where possible, so that they're secure by default.

In this case, obviously, that has not happened. So, in the shorter term, there are some quick steps; Microsoft actually posted some quick countermeasures you can take. But in the longer term, we need to build into our package managers ways that make them secure by default. If you have multiple repos, and for many organizations that's a reality, then make sure that for any given name, there's exactly one repo that it's going to map to. No more, no less.

There are many ways to do that. I don't care so much about the details; we just need to make it so that when you use a package manager, and you need to, it automatically works the way you expected. No surprises. I know that some folks are already working on that within the various package managers.
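
For npm specifically, one way to get that one-name-one-source property (a sketch with hypothetical names) is to route a scope to a single registry in .npmrc:

```ini
; .npmrc: everything under @mycompany resolves only from the internal
; registry, so a public package can never shadow an internal name.
@mycompany:registry=https://npm.internal.example.com/
```

Guidance published after these attacks also suggests registering your organization's scope on the public registry so nobody else can claim it.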

Something we basically need to do is say: wait a minute, we've got a problem, we need to fix it. Unfortunately, what that's going to mean for some folks is that those package managers are going to need to be updated. The other heavy lift is that people are going to have to use those mechanisms and make changes, because the earlier defaults weren't secure.

Derek Weeks: Brian and I were talking about this internally: the capability already existed in lots of people's technology to configure it in a way where this didn't happen. You didn't need to be at risk. Now that that part is clear, have you gone in and done that configuration within your build pipeline? If not, you need to! If you don't know how to do that, then you need to learn how. You need to actually take action on this.

Before we went live, David and I were joking about lessons learned. He said something like, "you know, we shouldn't call these lessons learned because we keep repeating the same old mistakes. Maybe we should just refer to them as lessons learned by a few and not practiced by many."

But before we get carried away on that, Dr. Herr, when it comes to software supply chain attacks, and you heard about this one, what was your reaction to it within the Atlantic Council and the communities that you serve?

Dr. Trey Herr: I feel like I echo Brian and David to some extent: you know, everything old is made new again. What's amazing is, we put together this project, Breaking Trust, starting back in September 2019, as a way to catalogue attacks and significant vulnerability disclosures on the software supply chain. We put out our first report in July of 2020, capturing 115 attacks and disclosures over 10 years, which was more than I'd ever run into anecdotally. But I think, especially in talking with you all and profiling open source, it is a lot fewer than are going on at any given time.

What's interesting, though, is that there's this trend where it's the same combination of a handful of techniques, played over and over again, against libraries, against projects, and against repositories that just don't have the kind of basic mechanisms in place to stop those attacks from being effective, or from being just so damn cheap.

When I saw this, it was just, you know, "here we go again!" At a different scale, maybe a little bit harder to detect.

In this case, I think what was interesting was going after an intriguing seam in the open source ecosystem, which is this private repo of otherwise open and common code. So that's something where we see folks on this call, and others, advocating to really get people to understand just how much open source exposure they have.

You know:

"Oh, no, no, we're proprietary, you know, we're proprietary code users."

"No, no, you're using a lot of open source, it's just one step down the dependency chain from the package that you have."

I think it's been interesting to see folks reevaluate just how much perceived security they're getting from that isolation of a private repository, which I think hopefully is a good thing, right? There's nothing magic about isolation, or security through obscurity.

So if this is driving a little bit of an improvement in awareness and basic behavior, and just maintaining the health of those libraries, and some of that code, I think it's a positive thing. But yeah, you know, the song just remains the same with us over and over again.

Derek Weeks: I don't know if anyone's published any research or comments on this, but look at what we're talking about: open source packages, in this particular case, proprietary packages, and which repo you're getting those from. If you think about containers, do we have the same problem with containers? Can I put anything up on Docker Hub, call it apple.apple.docker, and allow anyone to download it?

Brian Fox: Docker has a namespace in it that is similar to GitHub: it's the project under the organization that published it. So yes, you could. I don't know if they validate if I show up and try to register a company name; I'm not sure how the validation works there. But there is at least a mechanism in place to make it more obvious that you're getting a project from its own company.

Derek Weeks: I think, if I remember correctly, there are some kind of certified repos within Docker Hub. If you go to Docker Hub and search, you would see: here's the official one. But if you have a build tool just going out and calling for a name, I don't know what mechanisms exist within that. Certainly something to review in terms of how builds are done, or what kind of infrastructure or code is pulled in.
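
One habit often suggested for that review (an illustrative sketch; the digest is a placeholder, not a real hash) is to fully qualify and pin image references rather than pulling bare names:

```sh
# Bare name: resolution depends on defaults and on who owns the name.
docker pull node

# Fully qualified and pinned: one exact, content-addressed image.
# <digest> is a placeholder to fill in from a trusted source.
docker pull docker.io/library/node:14-alpine@sha256:<digest>
```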

Dr. David Wheeler: Regarding Docker: I think Docker has, like almost any repo, the problem of what's called typosquatting. All of you are very familiar with this, where you make a name that's similar to the package the user intended. It's not that hard to double-check a name, but too many people don't. So even if there's a namespace registered, if you can fool somebody that Apple with three p's is the Apple they were expecting, it's probably going to result in a bad day.

Derek Weeks: Yeah. Hey, you know, it feels kind of ridiculous. Like, who types in Apple with three p's? Would that really happen?

Dr. David Wheeler: Yes.

Derek Weeks: But then today I was writing to one of my colleagues, Alyssa, and I needed to type her email address into something, and I misspelled it. How many things do I misspell each day, just off by one letter in a name? That same kind of behavior in coding could end up bringing in something masquerading as something else.

Brian Fox: When you factor in that many people are just searching for stuff, and that, as we've seen in some of the malicious attacks, they set up bots to drive the downloads through the roof: if you search for a thing, the malicious package is probably going to be the number one hit. It's got a lot of downloads, it looks right, I'm gonna grab those coordinates and drop it in. So it may not even be a typo per se; they're manipulating the way you discover what the thing is. Maybe you don't remember the name exactly, or whatever, you just search, it looks about right, off I go.

Derek Weeks: Right, the thing that's got 3 million downloads is popular in the open source community, and I pick components because they're popular.

Brian Fox: So that's probably the one to find, right?

Dr. Trey Herr: These are not robust authentication mechanisms; these are search mechanisms. It's metadata and a mechanism to find source code. So I think it's interesting, at least it was for us, in reading reports from Sonatype, and the work that folks at the Linux Foundation and others are doing, just to try to get to a basic baseline behavior where you're not pulling packages, you're not pulling code into your product or into a production system, whose source you haven't validated in some other way.

I think one of the areas where we're really excited to see some growth, especially with the repositories, is setting some very rudimentary, very easy-to-hit bars in terms of governing that code: how it's identified, and what sorts of behaviors are allowed in terms of what's being submitted and what's being pulled. If only because these are one of a handful of good points of concentration in this ecosystem. I don't think we want to impose new barriers or new artificial points of concentration just as a way to solve this broader governance problem.

Brian Fox: It's funny that you say that; it's near and dear to my heart, because we've been taking heat for 15 years on Maven Central for exactly that. We do require that you prove you own the domain that the coordinates map to, or the GitHub project; we have validation. For years and years, people have done drive-bys on Twitter saying, "Can you just make it easy like npm? Why do I need to wait half an hour for your bots to validate? I don't want to do this, I just want to publish my stuff." It feels a little bit like people are starting to understand why we had that in place this whole time. Hopefully, we'll see all the other communities recognize that it is actually an important thing that we need to do.
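
The convention he's describing, with hypothetical coordinates: the groupId is a reverse-DNS namespace, and publishing to Central requires proving control of that domain (or the matching GitHub organization).

```xml
<!-- Hypothetical coordinates: com.example maps to a domain the
     publisher has proven they control. -->
<dependency>
  <groupId>com.example</groupId>
  <artifactId>my-project</artifactId>
  <version>1.2.3</version>
</dependency>
```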

Sometimes you have to accept a little bit of a trade-off from the super-easy-to-publish approach, because it protects the community from those kinds of attacks. It's not going to protect against everything, but if you have that validation mechanism, it allows you to catch things like typosquats, or somebody pretending to be Apache when they're not Apache. These kinds of basic things at least make it harder for the bad guys, instead of it being super easy for anybody to do it, right?

Dr. David Wheeler: Yep.

Derek Weeks: One of the other subjects I wanted to get to for our audience: you all are out speaking to various communities, and you publish reports and blogs on the topic of securing software supply chains. Is there one point over the years, or even recent months, that you keep coming back to? We've talked about focusing on the simple, right? There are complex ways to solve this, but there are also simple ways. Is there a simple suggestion that you always come back to with the audiences you're talking to, or maybe a research point or a stat from your research? Like, look, if you're going to leave with anything, this is what I want you to leave with...

Dr. Herr, is there something from the report that you published at the Atlantic Council on securing software supply chains that always comes up in conversation, or that you always want to bring up?

Dr. Trey Herr: I'd love to have a silver bullet. We've been asked by a number of entities to just get to the one thing we can do: "this is all great, but what's the one?" There's probably not one thing.

So I think we tend to lead with one of two ideas.

Conceptually, the thing that I've found resonates most is that how you trust has to be at least as important as who you trust. One of the observations we've made, especially in talking to policymakers, is that there's a tremendous amount of focus on provenance. Where are these products coming from? Who's building them? Which cybercriminals may or may not have had their hands on this? And very little rigor or attention to the process of how that trust is established, and how it's maintained as it's transferred between entities.

So I think part of what we're hoping to see in the discussion over the next 18 months that SolarWinds has kicked off, unfortunately for them and for the victims of the incident, but hopefully positively for the rest of us, is a much more explicit treatment of the mechanisms that we use to establish trust. Not just sort of hoping that anything can be signed and running away, you know.

The npm attacks are a really good example of something where you wouldn't necessarily have imposed a rigorous authentication barrier, right, or an integrity barrier, trying to establish whether or not that code was still trustworthy, even within a build process, going back in and revalidating between cycles and commits. I think that's something we're seeing that is really important.

Probably the more actionable piece of this, and this is a longer conversation that we're having with NIST: a lot of the discussion about software security in the last 15 years has been characterized by an explicit focus on development, which was a good problem to take on and a good discussion. The secure SDLC was a necessary outgrowth of a lot of these efforts, and we've seen a lot of standards work in that area.

But part of the shortfall that we found in that research was that there's very little work that's been done on secure deployment, relative to secure development. So those two are really co-equal, and need to be part of a mutually reinforcing system of standards.

Where we're hoping to push folks right now is just talking about what's out there and cobbling it together into a NIST overlay, right: one place where any vendor, any developer can go to recognize, okay, this is what the industry, this is what the government believes are best practices in terms of secure deployment of code and management through a lifecycle, as opposed to just secure development. I'm hopeful we'll see some growth there. I think that's one thing that would raise all boats in a not huge but meaningful way.

Derek Weeks: I didn't ask you beforehand if you're prepared to talk about it, but President Biden released an executive order last week about securing government supply chains: physical supply chains, digital supply chains, and software supply chains. Any major kind of pickup on that?

You had to know that was expected after SolarWinds, that there would be some executive order coming regarding that. But how does that change life in the government entities that you work with?

Dr. Trey Herr: I think realistically, the SUNBURST executive order is still to come. That was a supply chain executive order thinking much more broadly about physical goods and commodities, pharmaceuticals, and microelectronics. So it's good to see some studies being done. I think they'll probably come back with information that we've known for a long time. But if that's the way to kick that process off, for DoD in particular, and I think for Commerce, that's a good thing. But I think there is more still to come on software.

Derek Weeks: Right. David, let's shift to you. What is the one piece of advice or direction you would give the audience?

Dr. David Wheeler: I'm gonna repeat that there is no one magic bullet. But there are some things, mostly already mentioned before. Learn from history: it's incredible how many times we see attacks that are the same old attacks we just discussed. Typosquatting is not new.

As far as open source goes, there's a study, an academic paper called Backstabber's Knife Collection, which found that the majority of malicious open source packages come in through a kind of typosquatting. It's not that they're subverting well-established programs; it's a name that's similar to the one that you intended. So, you know, learn: what are our attackers attacking? How can I mitigate that? In that case, for example, simple things like double-checking the name. Yes, I'm aware that attackers sometimes try to inflate the downloads. But that doesn't change the date that the package was added. If a package has a million downloads and it just appeared last week, that's actually more suspicious.
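
Registries expose that metadata directly; a quick illustrative check, with a hypothetical package name:

```sh
# "my-project" is a hypothetical name; substitute the package in question.
npm view my-project time.created   # when was it first published?
npm view my-project maintainers    # who publishes it?
npm view my-project dist.tarball   # where does the tarball actually live?
```

A million downloads next to a creation date of last week is exactly the suspicious combination he describes.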

Another thing, which I've already mentioned: secure by default. And really, that goes for any time you write software; your users will use the software as it comes.

If you have to add extra text to explain how to use it securely, then you built it wrong. You should have to add extra text to explain how to use it insecurely. All right. I think many people have harped on this before, but: learn how to develop secure software. That is still pretty unusual. Most software developers don't get any education, any training on how to develop secure software. That includes the ones who go through school in a computer-related degree program, and about half of developers in the US don't do that. So I'll be happy to mention that the Linux Foundation has a free course on the fundamentals of developing secure software on edX. Go take it, or go take something else, but learn how to do it.

I think those will get you started: learn from what the attackers are doing, secure by default, learn how to develop secure software. You mentioned the operational side with DevSecOps; increasingly those worlds are getting connected anyway. So yes, we do need to learn how to deploy securely.

But increasingly, that's by working together with everybody else. It's not really an isolated world for most people anymore.

Derek Weeks: It's an interesting point that you bring up. I think we've repeated this, you know, again and again: the same kinds of attacks are just happening over and over. I know from reading things like the Verizon data breach reports that email is still the number one way that people get in.

In terms of software vulnerabilities, Struts is still in the top 10, year after year after year. It's like, okay, we know those are vulnerable versions. Why are you still using vulnerable versions, or surprised that you're being breached through vulnerable versions?

This wasn't new when Equifax came around. There were numerous Struts-related breaches long before that, and they continue to be in the top 10 each year. I think the point that you bring up about training for software developers, and what's available and what's encouraged, is important. Organizations need to go out and encourage developers to get the training, or to provide the training. But it feels fairly daunting.

If we think about the numbers and you think there are 35 million software developers around the world -- how do you train 35 million people at scale? Right? What kinds of things do you introduce that make that experience valuable for everyone and give everyone a training event?

Dr. David Wheeler: If 35 million people showed up on edX and took that fundamental security, I think edX would be happy and the world would be happy.

To be fair, I'm the one who primarily wrote the course. So of course, I'm going to like my own course. But in the broader scheme, I don't care if you take that course, or something else.

The problem here is we've got a lot of developers who are writing software that's under attack, and they're totally unprepared for the situation. They know only the very, very basics.

We need to move to a point where we're unsurprised; we should expect that our software is vulnerable. Of course it is: no one's told developers how to do it differently. Now, that doesn't solve everything. We still need to make things secure by default, that sort of stuff. But having a little knowledge is a big plus. We need to get to that point.

Derek Weeks: Brian, I'm gonna shift to you. What is the thing that you are continuously bringing up with every audience and community that you're talking to? Is it that there's no silver bullet? What are the things that you feel like you've probably repeated 100 times and will need to repeat 100 more times?

Brian Fox: Where to start? The one I've been on lately, as I'm sure you've heard me talk about, is trying to educate mostly application security professionals to the fact that the battle has shifted upstream a bit, towards what's happening within development, right?

So take the namespace confusion problem. We're now living in a world where the mere act of the developer downloading the package, bringing it in as a dependency, causes it to get executed, or at least parts of it get executed, which literally means the attack happened right there. The developer downloaded the dependency, maybe before they even wrote a line of code that included it.
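
That execute-on-download behavior comes from npm's install-time lifecycle scripts; a minimal sketch of what a malicious manifest can look like, with hypothetical names:

```json
{
  "name": "my-project",
  "version": "99.9.9",
  "scripts": {
    "preinstall": "node collect-and-send.js"
  }
}
```

Merely running npm install on a project that resolves to this package executes collect-and-send.js (a hypothetical payload) on the developer's or CI machine. npm's --ignore-scripts option disables lifecycle scripts and is one commonly cited mitigation.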

So historically, what has really been happening is that application security is set up to be defensive, to make sure that they don't ship insecure stuff to their customers, right?

So that's where you see it, especially in less agile organizations: "before we ship, it must go through a static scan or meet some other kind of criteria."

They're defending against shipping it to their users, when today's battlefield is inside your development, right? We were talking about this before in the context of Deming, right? All of Deming's principles, as they apply to DevSecOps, make perfect sense; you need to do those, those are table stakes. But Deming was not guarding against somebody trying to sabotage the factory, right?

So you can do all those practices and still get your development factory sabotaged. You need to do more; you're missing the battle, right? That's the conversation that I keep trying to have, because so much of the practice is about fighting last year's war, last decade's war, and is completely missing what's actually happening right now.

In some ways, you have to be much more proactive, much more on the front end of what developers are downloading and consuming, to prevent that from even happening, because by the time they've done it, it's too late. We found two more, the first ones that we uncovered actually stealing things, with this namespace wave. Wave one was all of Alex Birsan's stuff; within 48 hours, there were hundreds more. Then in a week it exploded by 7,000%.

The copycats went after bug bounties, but I speculated there would be a third wave; we're not sure, we might already be in it. At the time we weren't sure whether people were actually doing this to steal real things. Yesterday, we put our finger on two of them: components that were stealing your bash history and the /etc/shadow password file.

So depending on how you use your system, there can be a lot of sensitive information just in your bash history, if you were typing passwords on command lines, for example. The mere act of consuming those packages meant those two things were potentially sent off to wherever the bad guy was.

That's a totally different war than what I think traditional AppSec is set up to defend against. That's the thing I'm constantly pushing on and trying to raise awareness of.

Derek Weeks: I hadn't thought about it in that way, but I like how you position that. AppSec, application security, was really about looking at this thing before it goes into production, or when it's in production, making sure that no one's attacking it. I've verified that it is safe to deploy or operate out there, and I can protect it when it's operating out there.

But application security shifting to include "securing application development"? Part of the role and responsibility is on the development team: how are you building? What are you building with? Etc. And if you think about the SolarWinds attack, right, there's a part that is, as software developers, what were we doing to try to prevent that type of attack? But also, what are we doing as application security to protect how applications are being developed, where there are multiple layers of things that could be done in development or by security?

That layered, defense-in-depth kind of approach to security, when it comes to software supply chains and development: that's a great perspective if you're an AppSec leader, you know, thinking about this. Am I doing application security? Or am I securing application development? Or am I doing both? What is that role within your organization?

Dr. David Wheeler: I have to say, I'm not so sure it's a good idea nowadays to split those. Back in the 1980s, you'd reuse the operating system, you'd reuse a database, and maybe a library.

But over the years, development has been shifting increasingly to reusing software. Historically, the problem was we didn't know how to develop software that was reusable.

Hooray, we solved that problem. Now we have a new problem: we can reuse a lot of software, and as the old wag says, the cause of problems is previous solutions. The vast majority of most applications is reused software. Okay, the vast, vast majority. I've even done some analysis to show that we should be unsurprised that the attackers are going to where the biggest vulnerabilities are.

That doesn't mean we can't reuse software. It just means that because it's now part of the attack surface for development of software, we need to defend that as part of the way that we develop software.

I'm sorry, I didn't mean to break in, Trey.

Dr. Trey Herr: No, I think you hit the nail on the head. When we think about the trend in software development at a systems level, right, the move towards microservices and containerization, I think there's a very similar trend towards decomposition: away from monolithic programs and single-developer tools into this highly complex set of relationships. Really, it's almost a small family of nested features and code bases, developed by different entities over time.

I agree with David completely, in that I think part of the challenge, as we talked about before, is that the way we assess and manage risk in the software supply chain just hasn't kept up with the way we use software and the way we build software. I think that's an area where a concerted effort to shift norms forward, into rigorous validation and integrity checking at each of those seams, as those seams multiply throughout a code base and throughout a system, is really critical.

That's not an easy technical solution, right? I mean, in some cases we're talking about fairly low-level message passing services, things that are wired together in some very unconventional ways: not called as traditional dependencies, or packaged in a way that can be validated in a single way... this binary has been vetted by this testing suite.

But I don't think that stops us, and it really shouldn't, if only because this seems to be where we're going for a long time. I think, again, pulling back a little bit, it is really good: the democratization of development, where if you have the ability to write one feature, and you can pull all the rest of the functionality you need for a piece of software together, that's fantastic.

You don't have to be able to write everything from the ISA all the way up. So I think we have to embrace that as a realistic trend. But that does mean changing in some ways, just like we did with networks and the enterprise model in the 2000s: changing the security boundary away from "well, it's this program, it's been validated," all the way down to feature by feature, module by module. Which is hard, no question.

Derek Weeks: But I like what you said before, and I didn't comment on it at the time: it boils down to who do you trust, and how do you trust?

I think that's a great perspective to carry as you think about how we're securing the software supply chain. How do we think about this? You did bring up provenance earlier; I've talked to various people along the way about the provenance of components, whether those are packages, or containers, or any kind of reusable code or build tools that I'm bringing into my organization; supply chains that are not on premises but in the cloud; or development pipelines in the cloud that were once on premises. There are probably a lot more unanswered questions in that provenance area.

I don't know how you avoid exposing yourself to the problem, just with how software is developed on a community basis the way it is today. When I look at Java ecosystems, and I report on this each year in the State of the Software Supply Chain Report: looking at 15,000 Java development organizations, they're relying on 3,400 open source projects on average, right? You can't go and assess every single one of those and its provenance: well, what is that project? Who's working on it? How long have they been working on it? Are all those trusted people? Then multiply that across languages and other parts of the infrastructure. It feels like an impossible task. So, Trey, if you're asking me who do you trust and how do you trust, how do I answer that?

Dr. Trey Herr: You know what, I think that's a good problem for us to have, right? We've gotten to something like the growth of the internet out of NSFNET in the '80s and '90s, right? I can't hold the phone book anymore. I can't tell you who's on the other end of the line in a lot of cases, or what breed of dog they are.

I think for where we are, in some ways, what we're saying is we're missing the effort, the real community-led effort, to come in and say: "a regular open standard, a reasonably easy, low-overhead set of protocols to come in and validate, through whatever mechanism, whatever standard we think is appropriate, any code you're receiving from outside your hands, because it's a lot more than you think."

You know, there are jury-rigged solutions right now. Certain companies can sell something that you can sort of wrap part of your build process in; that's good. But that's not sustainable or scalable, or really accessible to a lot of small open source developers.

So I think this is one of the few places, when we talk about a governance challenge, where there is actually an opportunity for a technical solution. I say one of the very few, not as a magic solution, but as a way to start to build a substrate where I can talk about what information I'm expecting from you, that I can pull from your resource, not wait for you to push to me with a new update or a new version.

What information does that have? Then have the debate, is that enough? Is the mechanism of how I'm establishing that trust efficient? Do I want to modify it to my own ends?

If I'm the intelligence community, do I want to know more different things than if I'm Walmart?

But I think we're not there yet. We're sort of missing that base layer almost.

Derek Weeks: There are a lot of conversations around that. I know the Linux Foundation, and NTIA on software bills of materials (SBOMs), and so forth are part of that: you can pull out some information, as you say, and then begin to question, is that enough?

I want to go to our final wrap-up topic, because I think we could go on for hours on this conversation. Let's give people some place to go to learn something. If I want to take the next step, give me a place to go: a report that you produced, a blog that you wrote, a resource or training that you know is available.

What would you suggest and lead people toward? I'll start with Brian, and then go to David and Trey.

Brian Fox: If you're interested in some of the attacks that we were talking about, the namespace stuff, our blog at Sonatype has a whole bunch of posts going back a long time, as well as more information on some of the history. I think there's a lot of bleeding-edge information that we've been pushing out there that should be of interest.

Derek Weeks: All right, David, where would you send people?

Dr. David Wheeler: Multiple places, actually; it depends on what you're interested in. I work at the Linux Foundation, which kicked off the Open Source Security Foundation (OpenSSF). They're actively working right now on some of these things. For example, they're coming up with tools to monitor the various repos, and other tools to take that monitoring data, analyze it, and ask questions like: wait a minute, does this look malicious? Does this look like typosquatting? That sort of thing. They're also working on many other things, including something called the CII Best Practices badge. I don't know how many of you are familiar with that; you can find it at bestpractices.coreinfrastructure.org.

So if you're an open source project, you can get a badge that basically represents a list of best practices, assuming that you want to be secure: things like having version control, telling people how to report vulnerabilities, and running tools to look at your code.

Another thing I just mentioned was courses; edX will give you some courses.

Fourth and final: we talked about SolarWinds. In the short term, we need to harden up our build environments, but that alone is not going to work against nation-state adversaries. I'm sorry, that's just not going to be adequate to counter that. The only countermeasure that I know of for a SolarWinds-style attack is something called reproducible builds. There is a website, reproducible-builds.org.

That's a harder row to hoe, no doubt about that. But right now, that's the only game in town when you're trying to counter those kinds of attacks. We could talk about that later.
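
The core idea, reduced to one illustrative check (paths and names are hypothetical): two independent builds of the same source should produce bit-identical artifacts, so a compromised build system cannot silently inject code without creating a detectable mismatch.

```sh
# Build the same source tree in two independent environments, then compare:
sha256sum build-env-a/my-app.tar.gz build-env-b/my-app.tar.gz
# Matching hashes: the artifact is exactly what the source dictates.
# Differing hashes: one build chain altered the output; investigate.
```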

So anyway, you got a four-fer.

Derek Weeks: Awesome. So Trey, you mentioned NIST standards and guidance before, but where would you send people, in addition to the NIST research and publications that have been put out there?

Dr. Trey Herr: I think that the NIST research and publications in particular have been some really good sources of information.

I would definitely go to the Linux Foundation. There are a lot of resources scattered throughout that page, and really interesting programming work being done there.

The other thing I would look at, honestly, is some really good work being done by the folks at In-Q-Tel Labs, so Bentz Tozer and John Speed Myers and, of course, the inimitable Dan Geer, around secure code reuse: trying to frame not only the operational aspects of this, but almost the system design and systems theory side of secure code reuse, in ways that are really going to be useful both to practitioners and industry, and also to the policy community trying to wrap their heads around some of these problems.

The fixes that are coming, the ways that folks are trying to address shortfalls in federal cyber risk management, and in some cases in industry, are good, and there's a lot of good energy that's been kicked up by the SUNBURST crisis. But some of these are much longer rows to hoe, and I think they need the sort of sustained energy that we've seen from folks like the group pushing on SBOM, NTIA and Allan Friedman's shop. So I think there's some really good framing in the In-Q-Tel world, thinking about these problems, that helps map out that longer frame.

Derek Weeks: First of all, thank you all for this conversation. It's always great to talk to each of you about what's happening, most importantly in the short term: what do we need to be aware of, and what kind of actions can we take?

I think you've each brought up different initiatives that really have a long-term focus. If you're out there listening to this and saying "I want to get involved," there are enough places to get involved. We don't necessarily need new places, just more investment, more people, and more brains on the initiatives where people are thinking long term: how can we work to better improve application security, securing application development, secure coding practices, etc.?

There's no immediate path or silver bullet that we can apply here. As we said, this will be an ongoing story, and we just need more brains in the industry focused on contributing toward solutions and thought leadership in this space. So, Dr. Wheeler, Dr. Herr, Brian Fox: thank you so much for participating today.

We'll wrap it up there.

Tags: featured, News and Views, Industry commentary, dependency confusion

Written by Derek Weeks

Derek serves as vice president and DevOps advocate at Sonatype and is the co-founder of All Day DevOps -- an online community of 65,000 IT professionals.