A skeleton key is capable of opening any lock regardless of make or type. Do you know anyone who has one? I do. Lots of them. At the HP Protect conference last week in Washington DC, the theme was “think like a bad guy”. The organizers introduced us to known hackers, their approaches to infiltrating organizations, and the trends in their behaviors. They also introduced us to the people who hunted down those hackers and successfully captured them.
This week, I will be attending AppSec USA in Denver with the rest of our Sonatype crew. While it will be my first time attending, I am really excited to be leading a panel discussion this Thursday. If you will be there, please come by the session or the Sonatype booth (G10) and say hello. So what’s the panel discussion about?
We are not the first industry to face this challenge. But many are convinced our problem is much smaller than it really is, or that it does not exist at all. They simply ignore it, or choose to do nothing about it. Meanwhile, the problem is multiplying like rabbits. The challenge lies within our software: within the quality of its supply chain, within our collective ability to maintain its health, and within our ability to establish easy (yes, I said easy) paths to banning rampant yet avoidable risks.
Customers using CLM want to surface known security vulnerabilities and license risk in the same place developers or executives already go to assess the overall quality of their application. To support this growing interest from our customers, we are introducing our next important milestone: Sonatype CLM’s integration with SonarQube.
“It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us…”, penned Charles Dickens in 1859’s A Tale of Two Cities.
I was going to start off by listing a series of what I think are easy questions that I reckon everyone in technology should be able to answer, even if they have never been involved with writing software. I gave this some serious thought and decided (perhaps a little arbitrarily) that, actually, I’m really only interested in one single question for now, and that is: ‘should software be tested?’
In part 1 and part 2 of the ‘[ ________ ] is the Best Policy’ series, we looked at how open source policies can quite often lead to the wrong type of behavior in an organization. As we saw, 41% of development professionals stated they are generally looking for the path of least resistance when it comes to compliance with policies — many of whom will put a non-trivial amount of effort into working around such policies.
At the Black Hat 2014 Conference in Las Vegas, Mark Miller, Community Advocate for Nexus and Executive Producer of the OWASP 24/7 Podcast Series, presented the third installment of the OWASP security news quiz, “Wait, Wait! Don’t Pwn Me!”. Play along and see how many news stories you can identify for the month of August […]
In Part 1 of ‘[ ________ ] is the Best Policy’, we looked at some of the common aspects of an open source policy and discussed how our recent survey discovered that 41% of people think that policies are not enforced. Now in Part 2, we will look at how effective policies are when it comes to security concerns.
Open source has been around for donkey’s years, but until recently the persuasive argument of “many eyeballs” was the guiding policy when using open source. Then came the industry shock wave we all know as Heartbleed, and now many of us are re-evaluating the cost of free software.