While DevOps is forging boldly into the future, security is still trailing behind in many organizations. So, it’s important that we understand how to apply notions of (traditionally static) security to environments that are built to foster continuous development. I, for one, would like to raise the torch to the fledgling category of DevSecOps and learn how it is successfully implemented by industry leaders. In the first of a series of interviews with DevSecOps community leaders, I chat with DJ Schleen, DevSecOps Advocate at Sonatype.
Helen: I think that the market is light on shared DevSecOps reference architectures to help the community learn and grow. Do you agree and what can we do about it?
DJ: There are a lot of missing pieces out there, and I think it’s because nobody really knows where to go with it. If you do a search for DevSecOps reference architectures, you’re going to see that infinity logo with a bunch of locks around it, which doesn’t really tell you much. I’ve created this one, but the community does need to share. I think it’s because people don’t really know which community they’re a part of; are they part of Secure DevOps, SecDevOps, OpsSecDev? I think there’s confusion. So you might see some security reference architectures, but I don’t know if they’re really taking into consideration flow across the whole technology value stream.
Helen: DevSecOps or SecDevOps?
DJ: Shannon Lietz came up with DevSecOps and the domain, and then the community started moving over from Rugged. I think DevSecOps has stuck because it called out security specifically. I talked to John Willis about this back in late 2016, early 2017, and he said: “Well, you’ve got a lot of cojones to be up here at a DevOps conference and call yourself a DevSecOps evangelist.”
And I replied: “Yeah, but if you have Sec in front, nobody’s going to follow security. And if you have Sec at the end, we’re an afterthought like we’ve always been. And if you have it right in the middle, we’re this kumbaya between both, right?”
It suggests that it should just be part of everything. So now he’s a DevSecOps evangelist, which is awesome. Really, though, I would probably rather just call it programming. That’s what we did 20 years ago.
Helen: What are the biggest challenges facing CISOs today?
DJ: Software supply chain problems that they may not know about yet. There’s been a whole bunch of breaches, even over the past couple of weeks, based on open source components getting into the supply chain. And some are just dependencies of dependencies of dependencies that you’re never going to see at the top. That leads to a conversation about the software bill of materials. You have to know what ingredients are in your can of beans.
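The “dependencies of dependencies” problem DJ describes can be sketched in a few lines. The dependency graph below is invented for illustration; real SBOM tooling would parse lockfiles or package manifests instead:

```python
# A minimal sketch of why transitive dependencies matter for an SBOM.
# The graph and package names are hypothetical.

def flatten_dependencies(graph, root):
    """Walk a dependency graph and return every component the root
    ultimately pulls in, including dependencies of dependencies."""
    seen = set()
    stack = [root]
    while stack:
        pkg = stack.pop()
        for dep in graph.get(pkg, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

# Only "web-framework" is declared at the top, but the full bill of
# materials is three components deep.
deps = {
    "my-app": ["web-framework"],
    "web-framework": ["template-engine"],
    "template-engine": ["yaml-parser"],   # the "ingredient" nobody sees
}

print(sorted(flatten_dependencies(deps, "my-app")))
# → ['template-engine', 'web-framework', 'yaml-parser']
```

A scanner that only looks at the declared dependency would miss the vulnerable parser two levels down, which is exactly the point about knowing what’s in the can of beans.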
Helen: What does safety culture look like, is it important and how do we get it?
DJ: It’s not putting a pool table in the lunchroom and hoping for the best. It’s actually sitting down with developers or DevOps tribes and saying the same things, having the same conversations, knowing the same terminology, and having commonality. It’s like what we’re doing with DevSecOps engineering. How we did this was pretty difficult, actually, because a lot of the security organizations, the traditional ones or threat-hunters, aren’t even involved in technology. As Eliza May Austin said at All Day DevOps:
“I don’t know what DevOps is. All I know is that I’m never going to program, even though somebody says security needs to get programming.”
That rang a bell in my head. What is the message we’re trying to say to people? Is it that everyone has to code? Because that’s not going to necessarily be the case. So are we alienating security organizations? And is that why DevSecOps is a bad word in some places; because they just don’t have the technical resources? At a previous employer, we spent most of 2017 in the technical software security group learning Kubernetes, learning Docker, learning all these orchestration tools and CI/CD platforms. And not just the tools, but we actually got in and started getting our hands dirty and looking at the technique of how things would flow from idea to production.
What that let us do is to have really relevant conversations with people. Their jaws would drop when they heard that we were a security organization; there was that instant technical respect and credibility. My advice then, to any technical security folks, is to learn the same thing that your developers learn, but also learn ethical hacking tools and all these kinds of techniques to actually know how to put security in there and then have the conversations.
Helen: Where do developers go to learn about ethical hacking and ethical hacking tools?
DJ: A lot of the folks that I worked with just went and got Kali and then started hacking around with it, and they’re like, “Whoa, I can use these automated tools.” But that turns you into a script kiddie. So I always suggest that people go for an ethical hacker or penetration tester certification. Those are good introductions for the braver at heart who like to do shell scripting and that kind of thing. Doing advanced certified ethical hacking training gives you the ability to craft your own exploits.
But a lot of it is experience. I used to use things like Burp Suite, and then I created my own in Python because I hated using the UI, creating my own proxies. Experience is, I think, the number one thing if people want to get started. For developers, that means bi-directional communication with their security organization; ask them how to do it. That’s where people can learn, and that’s where technical security can really plug in and say: “Well, here’s how this injection vulnerability works.” It opens people’s eyes, then you’re using different tools and that starts getting people more involved.
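That “here’s how this injection vulnerability works” conversation can be demonstrated in a few lines. This is a self-contained, hypothetical example using Python’s built-in sqlite3; the table, user names, and payload are all made up:

```python
# Demonstrating SQL injection: string-built queries vs. parameterized ones.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

def find_user_unsafe(name):
    # String concatenation: attacker-controlled input becomes SQL.
    query = "SELECT name FROM users WHERE name = '" + name + "'"
    return [row[0] for row in conn.execute(query)]

def find_user_safe(name):
    # Parameterized query: input is treated as data, never as SQL.
    return [row[0] for row in
            conn.execute("SELECT name FROM users WHERE name = ?", (name,))]

payload = "nobody' OR '1'='1"
print(find_user_unsafe(payload))  # → ['alice', 'bob']  (every row leaks)
print(find_user_safe(payload))    # → []                (no such user)
```

Walking a developer through why the first function leaks every row is far more convincing than pointing at a line item on a scan report.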
Helen: How autonomic can DevSecOps systems become in the future?
DJ: I’ve always talked about making security invisible, so it just happens. We recently announced a capability where a developer just checks in code with a vulnerable dependency. We do the scan on check-in with a GitHub Action, and then it comes back; if there’s an upgrade path to a binary-compatible version, it’ll create a pull request for you. So, you just have to merge it in, and it’s just done. So that’s cool.
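The decision logic behind that workflow can be sketched roughly as follows. This is not Sonatype’s implementation; the advisory format is invented, and “binary compatible” is approximated here as “same major version” purely for illustration:

```python
# Sketch: scan pinned dependencies against advisories and propose
# upgrades that stay within the same major version, which a bot could
# then turn into a pull request. All data below is hypothetical.

def propose_upgrade(pinned, advisories):
    """Return {package: fixed_version} for every vulnerable pin that
    has a fix within the same major version."""
    proposals = {}
    for pkg, version in pinned.items():
        adv = advisories.get(pkg)
        if adv and version in adv["vulnerable"]:
            fixed = adv["fixed"]
            if fixed.split(".")[0] == version.split(".")[0]:
                proposals[pkg] = fixed   # compatible upgrade path exists
    return proposals

pinned = {"log-lib": "2.14.1", "left-pad": "1.0.0"}
advisories = {"log-lib": {"vulnerable": ["2.14.1"], "fixed": "2.17.1"}}

print(propose_upgrade(pinned, advisories))  # → {'log-lib': '2.17.1'}
```

The developer-facing result is just a pull request to merge; the policy decision of what counts as a safe upgrade happens entirely in the pipeline.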
But when you talk about other kinds of security automation, self-serving systems, or self-healing systems, I like using the latest solutions. If there’s an intrusion event, start using moving-target defense and compensating controls and then call back into the build server and say, “Rebuild it and patch it.” A lot of people hate the idea of version pinning because they’re thinking, “Well, it could be introducing a new vulnerability.” But I think, “Well, wait a second, or maybe wait a few minutes, and maybe the one that you have out there is going to be vulnerable.” You never know, right? So you’re only as good as your last scan, no matter what you’re doing.
Helen: So do we have the ability to be scanning all the time? Real time?
DJ: Hell, no. Well, yes. And that goes into the IAST conversation. I don’t want to talk about vendors specifically, but looking at the open-source software problem, those kinds of component scanning tools cover up to 97% of the code in your application; so you’re 97% covered at that point, if it’s a web application. That was some of what came out of the State of the Software Supply Chain Report.
So yes, I think you can scan all the time. I think you have to continuously scan at a couple of places. Scanning is an interesting term, because there’s on-machine scanning, there’s on-check-in scanning, and then there’s the dead code that you have in your repository that you might not have used for a while that you’re going to pull in. But it doesn’t get scanned, because maybe it gets packaged after the build or something like that.
What I do is create scheduled scanning that looks at anything in a repository and scans it on a regular basis from the build. I take that information, and if there are vulnerabilities, I pull it back in. Sometimes, the versions that are pinned in libraries are vulnerable, and we don’t even know because we haven’t updated them for a while. Maybe it’s some sort of library you’ve used internally for logging for years and it works pretty well, and all of a sudden you’re adding that into your system and there’s a dependency that’s really a vulnerability that you’re not scanning for.
Helen: So continuous scanning could be a thing?
DJ: I can’t see why not. I think that the tools need to mature, though. The risk of the environment being misconfigured is probably higher than the risk of a problem in the code you’ve written.
Helen: Why may we still need manually-performed penetration testing in the future?
Helen: What’s the best model for an organization that wants to do threat hunting? Train their own? Have a partner? Bug bounties?
DJ: Yes. All of the above. From a compliance perspective, I don’t like having our own organization do threat hunting. I think it’s a waste of time. I wouldn’t have ethical hackers on staff doing anything from a compliance perspective; I’d outsource that. Bug bounties are good for having the community hit your stuff and crowdsourcing it. Training your own is a good idea, but bandwidth is tight; there’s a severe lack of technical security people and ethical hackers out there, and finding someone you can train who wants to do security is difficult, because they think it’s the traditional culture of “no” and that it’s not going to be fun from a technically challenging perspective.
Helen: Why don’t developers want to learn?
DJ: As I said, it’s not perceived to be technically challenging or fun. In a previous role, when we called it ‘learn’ nobody would come, but if we called it ‘attack’ people would get excited. We had this thing called ‘Attack and Educate’ where we actually showed people how to get a shell on their system or something like that and exploit a vulnerability. They liked that. That wasn’t just people saying “no” to a programming language, or telling the developers that there’s a SQL injection and not being able to explain it to them.
One of the examples that really irked me was that security people would say, “Here are the OWASP Top 10 and we can’t have any vulnerabilities like this.” And a developer would look at it and say, “Well, this can’t be exploited.” And the security organization could never dispute that because they never knew how to exploit it themselves but they would keep pointing to it on the scan. I hope that developers will look at that and say, “I can correct some of that antiquated thinking and actually get a little bit more technical expertise and understanding involved.”
Helen: There’s a new tools category out there: IAST. The vendors in this space claim it’ll replace SAST and DAST. What are your observations?
DJ: Dynamic Application Security Testing (DAST) is like watching paint dry, and unless it’s configured properly it’s just throwing the book at things: like taking a dart and throwing it from 50 yards away and hoping you hit the bullseye. The only thing I’ve ever found from DAST is the same thing I could have found from Static Application Security Testing (SAST) or any other scanning tools. It should be further left than that. But from a compliance perspective, you need it. And SAST, it’s well known that it’s sort of useless and very slow unless you’re using it on small microservices. We had it running on 4.8 million lines of code. After 72 hours it was still running.
Interactive Application Security Testing (IAST) sounds great, but it’s going to be embedded later in the process, further to the right, where the QA is happening. It’s going to throw an agent in there. And from my experience looking at a product coming out of the right-hand side of the toolchain, it’s going to have all these test harnesses in it, and it’s also an agent. So now you’re introducing another third-party component that also might have vulnerabilities that you’ve never scanned.
So why would you put in a testing tool that detects security information, that’s an agent, that could possibly be left in there, and that runs way to the right in the process, when the code you’re scanning or testing has already been vetted by your software supply chain tools, and maybe earlier in the process by another tool like container security analysis?
Putting that into the software just seems redundant. You’re spending all this money on the IAST tool when you could be investing in more effective tooling. Normally they’ll bundle IAST and RASP, so the agent stays in there into production. And then you have the RASP side of the equation running when it’s out in production. Well, now you’re assuming that all your threat intelligence is going to be based on your own software and not on the best practices of the industry.
Helen: How are coding practices changing in large enterprises and how will this affect tooling?
DJ: Like a snail! This relates back to Conway’s Law, where people on a central transformation team decide on languages and practices and push them, along with standardized tool sets, down the stack. Developers really need the liberty to select their own. Developers are going to start wanting to use Python and Go and some of these terser languages, because that’s the industry trend. But some large organizations are still saying, “You still have to use this even though it might not be effective.” And maybe the teams can’t get resources who know these technologies.
Coding practices are changing slowly because people aren’t used to change. It’s like tugging an iceberg down to the equator. It’s going to be really, really slow, and as it melts, there’s still going to be more underneath. It can be political. And then tooling just becomes a, “Hey, we’ve always used this tool and we’re an X, Y, Z company,” whether that’s Java or Microsoft or Amazon or what-have-you. “Infrastructure as code” is a changing practice, but it scares operations that developers are doing it. And then all of a sudden you have FrankenDevOps, where everything is great up to Ops, and then there’s this big wall that you have to throw code and process over.
Helen: What’s the best way to enforce policy?
DJ: Policy as code. Define control standards in the GRC tool, automating exception creation and remediation plans if possible. You might have a component that’s not being maintained any more, so have an exception automatically generated when something is being used that’s blacklisted. You can gate it, you can do lots of different things, but automate that so that developers don’t have to go into a tool they don’t know or like to find out what the compliance issue is. And then map your policies to the control standards. And then you map them into the vulnerability information that’s getting generated by the tools.
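A minimal sketch of what “policy as code” with automated exception generation might look like. The policy rules, component names, and exception record format here are invented for illustration and don’t correspond to any particular GRC tool’s schema:

```python
# Sketch: gate components against a codified policy and auto-generate
# an exception record, so developers never have to open the GRC tool.
# Thresholds and the blacklist are hypothetical.

BLACKLIST = {"old-crypto-lib"}   # components no longer maintained
POLICY_MAX_SEVERITY = 7.0        # CVSS threshold from the control standard

def evaluate(component, severity):
    """Return a gate decision plus an auto-generated exception record
    when the component violates policy."""
    if component in BLACKLIST:
        return {"allowed": False,
                "exception": {"component": component,
                              "reason": "blacklisted / unmaintained",
                              "remediation": "replace component"}}
    if severity > POLICY_MAX_SEVERITY:
        return {"allowed": False,
                "exception": {"component": component,
                              "reason": f"severity {severity} exceeds {POLICY_MAX_SEVERITY}",
                              "remediation": "upgrade or patch"}}
    return {"allowed": True, "exception": None}

print(evaluate("old-crypto-lib", 2.0)["allowed"])  # → False
print(evaluate("json-parser", 9.8)["allowed"])     # → False
print(evaluate("json-parser", 3.1)["allowed"])     # → True
```

The exception record is what gets mapped back to the control standard and the vulnerability data the scanners generate, closing the loop DJ describes.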
Helen: Is there a cybersecurity skills shortage and, if so, what’s the best way for an organization to address this constraint?
DJ: I pick people right out of college and train them on technical security, not policies. I always look for attitude and cultural fit first, and the ability to adapt quickly to whatever languages or techniques the work demands. I don’t really care about tools; they’re going to use new ones no matter what; things change so fast. You need people who fit really well into your team, who have a go-getter attitude and adapt quickly. As Buckminster Fuller said, “A fool with a tool still remains a fool.” Just because you have a hammer doesn’t mean you can use it on a screw.