I was going to start by listing a series of what I think are easy questions that everyone in technology should be able to answer, even if they have never been involved in writing software. I gave this some serious thought and decided (perhaps a little arbitrarily) that, actually, I’m really only interested in one single question for now, and that is: should software be tested?
Ask any software developer this question and, even if they’ve never tested software themselves, they will still answer yes. It’s a baked-in software principle and everyone knows it.
It would be fantastic to find someone who would take the other side of this argument. There must be a clever developer somewhere who has given this some thought and, through a likely questionable but possibly logical journey, determined that, in reality, untested code is better. I don’t care how good that guy’s logic is; I definitely don’t want him anywhere near my car’s critical software systems or producing anything for my local hospital.
Perhaps with continuous deployment, and people trying to reduce the time it takes from typing code to delivering it, there could be a case for chucking anything out the door and worrying about fixing it later. But when I think about it, if nobody cares about the software they are using or its reliability, I question the value of that software. (Let’s ignore almost all apps written for mobile devices right now.)
So please, if you want to take on the challenge and let the world know why software shouldn’t be tested, I welcome a convincing argument (as, I think, would a lot of developers).
Until such time, we must all live by the mantra that ‘all software should be tested’. I’m not even going into how much or what type; just that there is some agreement that some kind of testing, no matter how limited, is better than none at all (though crappy, inaccurate, and erroneous testing is most likely worse than none at all).
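To be clear about how low that bar really is: even the most limited testing can be a handful of lines. Here’s a minimal sketch (the function, names, and values are invented for illustration, not from any real project):

```python
import unittest

def apply_discount(price, percent):
    """Return price reduced by percent, rejecting nonsensical inputs."""
    if price < 0 or not 0 <= percent <= 100:
        raise ValueError("price must be >= 0 and percent must be 0..100")
    return round(price * (1 - percent / 100), 2)

class DiscountTest(unittest.TestCase):
    # One happy-path check and one bad-input check: already far better
    # than shipping the function completely untested.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_rejects_negative_price(self):
        with self.assertRaises(ValueError):
            apply_discount(-1, 10)

# Run with: python -m unittest <this_file>
```

Two tiny checks like this won’t catch everything, but they are exactly the ‘some kind of testing, no matter how limited’ that beats none at all.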
Get to the point, I hear you say. Right, so what happens with software you don’t write and/or build yourself? (Even downloaded source code that you build yourself should, I believe, follow the same process you use for your own code.) Should it not live by the same rule of thumb described above? What happens when it’s not a high priority to test third-party software?
I’m excluding software you pay for, because in many circumstances you’re likely paying the supplier or creator to take on the responsibility when things go wrong, or to provide support when you run into issues.
But what about open source software? Why does it seem there isn’t the same level of investment required to test and vet the software used in some of your most critical applications? I hear time and time again, ‘we don’t see this as a big problem right now’.
Well, let me tell you, it’s a problem, and now is the time to address it. I’m lucky enough to work for the custodians of the largest repository of open source software in the world (which I think is a totally cool accolade). We see the same vulnerable bits of software being downloaded day in, day out, regardless of the awareness raised about the fixes available. Not in small quantities either: millions of components riddled with nasties are actively built into the world’s production software solutions each and every month.
Just because you can’t hear it or see it doesn’t mean you are not actively deploying it as part of your most prized and critical systems. When asked the question ‘should software be tested’, the answer is always ‘yes’. Open source software is software too, so treat it the same way or better (given you often haven’t a clue who wrote it).
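Treating open source with the same care starts with simply knowing which versions you depend on and checking them against known problems. Here’s a minimal sketch of that idea (the package names and the known-bad list are invented for illustration; a real project would pull from a maintained vulnerability database rather than a hand-written set):

```python
# Hypothetical known-vulnerable releases; in practice this data would come
# from an audited vulnerability feed, not a hard-coded set.
KNOWN_BAD = {
    ("examplelib", "1.2.0"),
    ("parserkit", "0.9.1"),
}

def parse_pins(lines):
    """Parse simple 'name==version' pins, skipping comments and blanks."""
    pins = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, version = line.split("==", 1)
        pins.append((name.lower(), version))
    return pins

def audit(lines):
    """Return the pinned dependencies that appear on the known-bad list."""
    return [pin for pin in parse_pins(lines) if pin in KNOWN_BAD]
```

Running `audit(["examplelib==1.2.0", "goodlib==2.0.0"])` would flag only the first entry. Crude as it is, a check like this run in CI is a first step towards giving third-party components the same scrutiny your own code gets.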