Today, as we kicked off the 2019 Nexus User Conference, one of the first sessions tackled tough questions about the container journey and how containers are really just one piece of a much larger puzzle.
ABN AMRO, which we’ll use as the backdrop to showcase this lesson, is one of the largest banks in the Netherlands. With that size comes a lot of operations, employees, and dev teams: specifically, more than 450 agile software development teams across the Netherlands and India, and more than 5,000 people in IT.
The leads at ABN AMRO are responsible for helping all of these folks with delivery. And they’re doing it at an organization moving from a waterfall world to using agile and DevOps.
The Container Journey
The first important piece of the puzzle to note is continuous integration. Historically, they ran Jenkins on-prem in VMs for this. But recently, they’ve moved to containers: in 2018, they adopted Docker in AWS. (No small feat in the enterprise!)
As they’ve done this, they’ve learned how to “smash business bottlenecks.” Clearly, the move away from on-prem has helped this. But in general, they’ve moved from centralized control to team autonomy, with each group managing its own containers via cloud-native pipelines for the enterprise.
But as all of this happened, they began to understand the essential need for container security. It’s not enough just to have and use containers; you have to use tools like static analysis and build checks to make sure that they’re actually secure.
Managing the Components of a Container
Obviously, a huge component of what goes into a container is the source code of the eventual deliverable. So they’re using static code analysis to scan for potential security issues and to establish code quality gates. Not only are they automating these checks, but they’re also automating the detection and review of potential false positives, a huge pain point with static code analysis.
And this doesn’t just apply to their own source code. ABN AMRO uses a lot of open source technologies, and, as we all know, you own the code you use as much as the code you write. So managing libraries they use and looking at their versioning and security is enormously important. Thus, library management is another important piece of the total puzzle.
Now, once you have code and dependency analysis in place, the next step is packaging everything into actual containers. It’s here that container security enters the mix. This includes concerns like:
- Syntax checks of things like metadata
- Scanning container images for common vulnerabilities and exposures (CVEs)
- Managing secrets (storage of secure credentials)
- Prevention of bad practices
- Compliance, which is, of course, huge in the banking world
- Runtime protection
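The talk doesn’t prescribe specific tools for these checks, but as one illustrative sketch, open-source scanners can cover the first two bullets: hadolint lints a Dockerfile for syntax problems and bad practices, and Trivy scans a built image for known CVEs. The image tag `myapp:candidate` is just a placeholder, not anything from ABN AMRO’s actual pipeline.

```shell
#!/usr/bin/env sh
# Illustrative CI step (assumed tooling, not necessarily ABN AMRO's stack).
set -e

# Lint the Dockerfile: syntax checks and known bad practices.
hadolint Dockerfile

# Build a candidate image ("myapp:candidate" is a placeholder tag).
docker build -t myapp:candidate .

# Fail the build if the image contains high or critical CVEs.
trivy image --exit-code 1 --severity HIGH,CRITICAL myapp:candidate
```

Wiring steps like these into the pipeline is what makes the checks a gate rather than a suggestion: an insecure image never gets pushed in the first place.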
The Container Platform They’re Building
First of all, why build your own container platform? In their case, it’s because the teams are using a lot of different technologies and tools to handle deployments. They’re dealing with a problem of unification.
So they set up a container expert team to set standards, establish security requirements, manage compliance and, of course, pick the technologies. This team analyzed all of the available tools and decided that their best approach would be to use available building blocks to build their own platform.
A quick inspection of the layers of the resulting platform reveals a lot of names with which you are no doubt familiar, including AWS, Azure, Docker, Kubernetes, CloudBees, Twistlock, Splunk, and, of course, Nexus.
Collectively, they believe that all of these individual tools serve their needs best when brought together, helping them with compliance, security, and agile operation.
Challenges They Face
But beyond selecting technologies, they face a series of challenges as well.
First up, there’s the governance of basic Docker images. Teams need guidance and support on how to create and deploy images, so the container expert team provides base images that teams can extend and modify, lest the experts themselves become a bottleneck.
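As a sketch of what extending an approved base image might look like (the registry path, image names, and user here are hypothetical, not ABN AMRO’s actual ones):

```dockerfile
# Hypothetical team Dockerfile building on an expert-team base image.
# "registry.example.com/base/openjdk11" is an assumed internal path.
FROM registry.example.com/base/openjdk11:1.4.2

# The team adds only its application layer; OS hardening, certificates,
# and compliance settings are inherited from the approved base image.
COPY target/app.jar /opt/app/app.jar
USER app
ENTRYPOINT ["java", "-jar", "/opt/app/app.jar"]
```

The design point is the split of responsibilities: the expert team owns everything below `FROM`, and product teams own everything above it.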
They also have the challenge of implementing a Docker registry at scale. This is another thematic version of the same underlying issue: managing good practice at scale without impeding individual team creativity.
Another way that they meet this challenge is via the use of signed and approved images, which requires their team to manage this signing and approving.
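The presentation doesn’t detail their exact signing setup, but Docker Content Trust is one standard mechanism for signed images: with it enabled, pushes are signed and pulls of unsigned images fail. The registry path below is a placeholder.

```shell
# Enable Docker Content Trust for this shell session.
export DOCKER_CONTENT_TRUST=1

# Pushes are now signed with the publisher's key...
docker push registry.example.com/team/app:1.0.0

# ...and pulls verify the signature, refusing unsigned images.
docker pull registry.example.com/team/app:1.0.0
```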
And, in that same vein, there’s also the oversight and review of security issues. They combat this with a “shift left”: raising security issues in an automated fashion as developers introduce them.
The role of metadata is another challenge. It has to be versioned, and it has its own lifecycle to consider. So they have to make sure that teams are supplying access to the source code for the image.
Collaboration among so many teams is, of course, a challenge, and that surfaces another core issue, which is the drift of containers from their original images.
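One simple, standard way to spot this kind of drift is `docker diff`, which lists the filesystem changes a running container has made relative to the image it was started from. The container name here is a placeholder.

```shell
# List filesystem changes in the running container "web-1" relative
# to its original image. Output lines are prefixed with A (added),
# C (changed), or D (deleted).
docker diff web-1
```

A non-empty diff on a container that’s supposed to be immutable is a signal that it has drifted from its approved image and should be rebuilt rather than patched in place.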
Conclusions and Takeaways
For any DevSecOps transformation, it’s really important to start with the people aspect. Any organizational change like this creates mental churn, changing people’s way of working. So it’s critically important to help people through the change and give them the tools they need to feel comfortable as they go, all while maintaining control over what’s happening.
Processes are really important. They need to change to adapt to a new way of working. And to go this fast, automation is non-negotiable. Automation is the foundation of this process change.
Only once you’ve addressed people and process can you adopt the technology. Just throwing a new tool at a team won’t work. But of course, once you’ve established the people and processes, the technology that enables both to succeed is a critically important decision.
So, as you can see, “let’s start using containers” isn’t a solution. It’s just a piece of a much larger puzzle.
For more details on how containers fit into ABN AMRO’s puzzle, watch the full presentation from Wiebe de Roos and Dominik de Smit below.