The Golden Repo is NOT the Answer, the Golden Policy is

October 1, 2013 By Manfred Moser

Like many organizations, you have turned to Nexus as a repository for your components. Since that is going so well, you may be thinking of adding controls that turn Nexus into a Golden Repository. It’s natural to try to manage components by restricting usage to only those components approved by your security, licensing and architecture teams. Unfortunately, what sounds like a good idea actually has a ton of unintended consequences, most of them negative.

We know because we have already been down this path. We built much of the Nexus governance capabilities based on these golden repository requirements. We had tons of conversations with customers and we built and delivered what we thought they wanted. Then, reality struck – customers started to use Nexus to control access to approved components. They wanted to ensure that developers used only the best components so that the applications they constructed could be trusted and free from licensing issues.

So what went wrong? This is not an indictment of using Nexus as a repository manager. Nexus is a key instrument in managing and storing components – and we all know that components represent the critical building blocks for our applications. So while Nexus Pro features such as smart proxy, LDAP, build promotion and staging make it possible to implement a highly performant, scalable repository that helps provision components and drive the release management cycle, something was missing in terms of using Nexus as the Golden Repository.

It turns out that the problem was not a functional limitation with Nexus – Nexus procurement support does a fine job of controlling what goes into the repository. The issue is a process problem that led to unexpected consequences. Here are just a few of those limitations:

  • Developers bypassed the repository because it got in the way of their development efforts.

  • Approvals couldn’t keep pace with the volume and release cadence of components.

  • Component versions became stale, and organizations lacked the ability to identify new vulnerabilities and to purge or replace those components from the repository.

  • A golden repository couldn’t meet the needs of different departments or application profiles – not all applications share the same risk profile.

  • The golden repository did nothing to help manage newly discovered vulnerabilities in your production apps.

We knew that the repository concept was solid, we just had to improve the way that we implemented component governance. We learned that:

  • Guidance and governance were needed throughout the entire software lifecycle.

  • Automated policies were needed to replace approval-laden approaches.

  • Continuous monitoring of applications was necessary to identify new vulnerabilities.

  • Flexible, organizationally aware policies were needed to manage the needs of different departments and applications.

  • Component governance has to be extended to support the production environment because application usage and component vulnerabilities are not static.

In short, you need a Golden Policy Approach, not a Golden Repository Approach.

Since many of our customers were turning to us for help, we felt that the topic warranted a proactive effort to educate others using Nexus as a repository for component governance. As part of this educational effort, we have scheduled a webinar to help you determine the best way to implement component management. And those that register for the webinar will also receive a design brief on how to augment their repository with a policy-based approach that manages the entire software lifecycle. Register now!


  • Gary Fry

    I think a hybrid solution is much more appropriate here … while it’s true that the perceived nirvana for Production environments is to contain only approved third-party libraries and tested, “golden” in-house artifacts, we must not forget that developers will need to try stuff out locally. Developers are smart. Of course they will bypass repositories if they are able to – it’s in their blood to find work-arounds.

    Why not simply understand this psychology and be pragmatic about it? Provide two repositories – the first is a Developer repository which allows access to unapproved third-party libraries for R&D, spikes, innovation time, whatever. This repository contains in-house builds which haven’t fully completed their testing cycles yet. The builds are ring-fenced from other teams in order to avoid pulling in other not-yet-fully-tested artifacts, for example common libraries… unless the developer specifically wants to.
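
    For example, a Maven build could switch between the two with profiles in settings.xml – the repository IDs and URLs below are purely illustrative, not real Nexus defaults:

    ```xml
    <!-- settings.xml sketch: two repository groups, selected by profile.
         IDs and URLs are illustrative only. -->
    <settings>
      <profiles>
        <profile>
          <id>dev-sandbox</id>
          <repositories>
            <repository>
              <id>developer-repo</id>
              <!-- open group: proxies Central plus unapproved third-party libraries -->
              <url>https://nexus.example.com/content/groups/developer</url>
            </repository>
          </repositories>
        </profile>
        <profile>
          <id>release</id>
          <repositories>
            <repository>
              <id>golden-repo</id>
              <!-- locked-down group: only approved, promoted artifacts -->
              <url>https://nexus.example.com/content/groups/golden</url>
            </repository>
          </repositories>
        </profile>
      </profiles>
    </settings>
    ```

    Developers build with -P dev-sandbox; CI and release builds use -P release, so promotion into the golden group is the only way an artifact reaches a releasable build.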

    When it’s time for a satisfactorily tested and signed-off application/library to go into pre-production (next-live), then it’s time to promote those libraries to the golden repository.

    Policies are good, but programmers are better at circumventing them. Why not embrace their creativity and trust them to do their job properly? Red-tape gets in the way. It frustrates and beats programmers down… Policies (and complexity) hamper time-to-market and seriously affect business agility. Who would tell a medical doctor not to use his better judgement because of an edict?

    I think the above will capture most (if not all) of the cases you mentioned above about being flexible on a per-department basis, according to whatever SDLC is in-place at any organisation.

    • Brian Fox

      Hi Gary, thanks for the response.

      The process you describe, with multiple repositories – one open, one locked down – is in fact what most people attempt to do. Unfortunately, this doesn’t really solve many problems, and it introduces yet more work.

      For example, there is no good way to easily move between the open and closed repository setup and know what is approved and what’s not approved without going through a series of “let’s change the repo url and see what breaks” exercises. This delays reconciliation until very late in the process.

      It’s also not very easy for developers to see what is likely to be approved, what could be approved automatically and what will automatically be denied with this approach.

      Our policy solutions are intended to cut through the red tape and elevate the programmers by empowering them with information, not to beat them down.

      We believe that defining the policy in an automated, deterministic (not subjective) way, and then letting everyone across the process see the results, is the best way to handle this. It means that as a developer, I can see immediately in my IDE what the policy status is for any component, and for any specific version of that component… even ones I’m not yet using. It means I can choose one that is approved / will be approved right from the beginning. It also means I know up front if a component I desperately want to use won’t be approved based on the current policy, so I can start working on getting that approved before I invest too much time into the component.

      We also believe that policy actions need to be flexible. For example, I may define a policy that says “no components with un-investigated security issues may be used.” That’s a pretty sane policy. What’s insane is trying to enforce that policy at the front door which means developers have to go outside the system to do the investigation. Our process lets you define the policy but then be flexible about where in the lifecycle you really enforce it. So for example, you may only throw some warnings during development, but you put the brakes on the process if it’s release time and there’s still an issue.
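
      As a purely hypothetical sketch – this is not the actual CLM policy format, and the element names here are invented for illustration – that staged enforcement might look like:

      ```xml
      <!-- Hypothetical policy sketch: one rule, different actions per lifecycle stage. -->
      <policy name="no-uninvestigated-security-issues">
        <condition>component has security issues that have not been investigated</condition>
        <actions>
          <!-- warn early, so developers can investigate without leaving the system -->
          <action stage="develop">warn</action>
          <action stage="build">warn</action>
          <!-- put the brakes on only at release time -->
          <action stage="release">fail</action>
        </actions>
      </policy>
      ```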

      Put simply, with a policy you can be much more nuanced in your approach to component approval than you can with a Golden Repository. A golden repo is more or less binary: the component is approved or it’s not; you are building against the golden repo or you are not. Policies and actions provide the nuance needed to deal with the realities of today’s agile development. We’ve built CLM with this in mind.

      • Gary Fry

        Hi Brian, thanks for your detailed response :-)

        The idea of a good utility is to save time in performing an otherwise labour-intensive operation. A microwave heats food quickly. A kettle boils water quickly. What’s not fundamentally being addressed is the dogma that there’s “no good way to easily move between the open and closed repository setup”.

        You’re absolutely right that continuing as-is will cause shenanigans, guess work and time wastage. That’s why this statement requires serious (yet constructive) scrutiny.

        It’s not difficult for a dev to copy any open-source material into his code, rename package names, remove unnecessary code, rename some variables and class names, and the policy holders would be none the wiser as the code has been “written” in-house.

        How would your system protect from that case? I’m sure you’ll tell me code-reviews would also need to be part of the process…

        We’re not far apart from a simpler solution. I’m simply questioning why a generic solution which fits an approval process is genuinely required. A simple and effective solution is all that is needed.

        • David Grierson

          I think that this then has to fall back to the policy solution. If a developer is copying, renaming and rebuilding disallowed artifacts then they aren’t just circumventing an organisation’s policy, but potentially also infringing another developer’s copyright and creating licensing issues.

          Another point about simply blocking access to “dangerous” artifacts is: how do you then support released products containing those artifacts?

          At this point you _have_ to allow them; but that should be within a controlled policy.

          • Gary Fry

            Hi David – you’ve summarised my point spot-on.

            Sure, control it – that’s absolutely necessary! And a process wrapped around it will be the glue where deployment and test automation falls short.

            The point is, a bit of trust and support (and peer review, of course) goes a long way. What’s not been mentioned here is that mature organisations which are well on their way to nailing Continuous Delivery will, without question, need two repositories.

            If you’re not on that path yet, then the smart money is on reading about Continuous Delivery – all will become apparent.

            Saying that, Brian’s comment “lets change the repo url and see what breaks”… is pretty close to the life of a developer – except it’s not a game – it’s more “oh bugger, I need to get that jar from a repo that isn’t locked down, this build needs it, how did this pass CI?” or “awww not again, I need the in-house repo for this part of the application I’m working on”!

            However, I have no doubt the jury is out about 3rd party dependencies being managed.