Five Challenges Facing AppSec Teams at Large Enterprise and How to Solve Them

Jim Manico and Scott Kuffer’s RSA Conference 2022 Talk – Five Challenges Facing AppSec Teams at Large Enterprise and How to Solve Them.

RSAC 2022 Talk Transcription

Jim Manico:
My name is Jim Manico. I've been an author, an investor in security companies, and an educator for many years. I've had a couple of exits. This is not about me and my exits; it's about why I invested in Nucleus and what I saw in them. I was one of the principals at WhiteHat Security, which sold to NTT. I was part of Brakeman Pro, which sold to Synopsys. I was an original investor in Signal Sciences, which has been sold to Fastly. I was an investor in SecureCircle, acquired by CrowdStrike, and I recently sold Bit Discovery to Tenable.

Jim Manico:
Now, it's not about that. It's about what I see in these companies that makes me want to invest in them. So I want to talk about that in just a few minutes, but let's talk about the problem first.

Jim Manico:
Security bugs in software happen. They happen at a very big scale. Most security problems you're looking at are in software. You have a firewall problem? Well, that's because of the software in that firewall, and similar. So software security, application security, is eating the whole world right now. You have security errors in the custom code built by your developers, one of the biggest problems. You have security errors in the third-party libraries that you use every day. You have security configuration errors in the frameworks and cloud services that you use today, and this is not an easy problem to solve by any stretch of the imagination. Your engineers are working hard. The cloud services do the best they can, and the third-party library developers don't want security bugs, but they happen, and they happen a lot. That's leading to security fragility across the entire software ecosystem that runs the world today.

Jim Manico:
Now let’s say that your decision is you want to fix these security bugs. I think that’s what we want. We want to identify and fix security bugs in the software that we care about. Every study I have ever looked at when it comes to fixing bugs will show you that the longer you wait to fix a security bug, from idea, to development, to production, the cost to fix that bug increases exponentially. Exponentially. That’s not one study, that’s any study ever done on the cost of fixing bugs.

Jim Manico:
So, here we have Security Boulevard, some recent statistics. One-third of all applications have a serious to critical bug. Think about all the software that you use and depend upon every day. Not even in business, just in your personal life. How much data are you giving the apps that you use every day, and one-third of them have some kind of critical vulnerability that affects your security and privacy? Next slide.

Jim Manico:
At the beginning of 2021, 86% of technical respondents told us that their security teams and their developers have no meaningful communication. And we see that 56% of the biggest incidents in the last five years were caused by web applications and software services that your company depends upon. And this is not an exaggeration. This is the reality of what application security problems are putting on our plate every day. Next slide.

Jim Manico:
Now what’s the impact from this? Do you like to lose money, sir? Do you like to lose money?

Speaker 2:
Not at all.

Jim Manico:
No. No one does. No one likes to lose money. Privacy violation. There are a growing number of laws like GDPR, like CCPA, where privacy is now a legal right that global citizens have. And you can't get privacy without security. Reputation damage. It takes a lifetime to build a good reputation and one incident to lose it. Not to mention other laws like HIPAA, the PCI credit card regulations, and more of these show up every day. So it's not just a good thing to do. There are real-world business drivers that force us to care about fixing bugs in software. So we'll go through all of this here. And the thing is, the conventional wisdom we're being told today is to do security testing continuously through every phase of the life cycle. The moment I start coding, I'm going to start running a series of scanners to look for bugs in my code all throughout the development life cycle. Next slide.

Jim Manico:
So it's something we want to do continuously, but the problem is, if you have a mature program, there are way too many scanning tools. There are actually three or four categories of scanning tools we care about. The main one is static analysis. Companies like Checkmarx, r2c, Fortify, and many others provide you with scanners that look at the code for security bugs. The most mature customers I have run this scanner a hundred times a day, looking for bugs in code during development. Next slide.

Jim Manico:
The other class of scanning tool is dynamic analysis. These are companies like Burp and Netsparker, and a thousand other companies. Many of these scanners are right in this room with us, trying to sell you their new product, because there are so many scanners. These kinds of scanners look at a live running application. They simulate pen tester activity, looking for real-world bugs.

Jim Manico:
The third class of scanner is called software composition analysis. They look at the third… Are you taking a picture? Wait, go ahead, quick. Oh selfie! Come on up, selfie.

Speaker 3:
I take a good picture. Don’t worry.

Jim Manico:
You got it? I want to give them a proper dramatic picture.

Jim Manico:
The third class of tools is software composition analysis. This is going to look at the many third-party libraries that you're using. It will let you know if there's a bug in that particular library. But there's too much scanning data. Think about it. When you're running these scanners every day, some of our customers give us 500 gigabytes of scanning data every day, because they're running scans for very big companies. What do you do with that data? What do you do with it? Do you actually fix the problem? Do you put it in a spreadsheet? What you do with that scanning data is critical. So I'm going to tell you. One more slide and I'm going to hand it off to Scott, the founder.
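At its core, what a software composition analysis tool does can be sketched in a few lines: compare the third-party libraries you depend on against a list of known advisories. The advisory data and package names below are invented for illustration; real tools use databases like the NVD and handle version ranges, not exact matches:

```python
# Minimal sketch of the SCA idea. The advisories and package names
# here are hypothetical; real tools match version ranges against
# curated vulnerability databases.
KNOWN_ADVISORIES = {
    ("examplelib", "1.2.0"): "CVE-2021-0001: remote code execution",
    ("otherlib", "3.1.4"): "CVE-2021-0002: path traversal",
}

def scan_dependencies(deps: dict) -> list:
    """Return advisories matching a {package: version} manifest."""
    return [
        f"{name}=={version} -> {KNOWN_ADVISORIES[(name, version)]}"
        for name, version in deps.items()
        if (name, version) in KNOWN_ADVISORIES
    ]

findings = scan_dependencies({"examplelib": "1.2.0", "safelib": "2.0.0"})
# One finding: the vulnerable examplelib version; safelib is clean.
```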

Jim Manico:
There are way too many scanners. The vulnerabilities aren't grouped and categorized properly. There's no concept of ownership in a spreadsheet. How do you assign a vulnerability to a department or a developer so they fix it? And there are more scanners being built every day, many in this exact room. So that's the problem. We have scanning data and we have a need to fix bugs, but if I'm running a hundred apps with thousands of vulnerabilities, I'm managing that in a spreadsheet or in a less effective piece of software from some of our competitors, really complicated software packages to use. We feel like we have a better solution, and 5% of the Fortune 500 agrees with us.
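The grouping-and-ownership problem described above can be sketched simply: deduplicate findings that multiple scanners report for the same asset and vulnerability, then route each one to the team that owns the asset. The team names, assets, and findings here are all hypothetical:

```python
from collections import defaultdict

# Hypothetical asset-to-owner mapping; in practice this would come
# from a CMDB or similar source of record.
ASSET_OWNERS = {"billing-api": "payments-team", "web-portal": "frontend-team"}

def assign_findings(findings):
    """Group raw scanner findings by owning team, deduplicating the
    same (asset, vulnerability) pair reported by multiple scanners."""
    by_team = defaultdict(set)
    for f in findings:
        owner = ASSET_OWNERS.get(f["asset"], "unassigned")
        by_team[owner].add((f["asset"], f["vuln_id"]))
    return dict(by_team)

raw = [
    {"asset": "billing-api", "vuln_id": "CVE-2021-44228", "scanner": "sast"},
    {"asset": "billing-api", "vuln_id": "CVE-2021-44228", "scanner": "sca"},
    {"asset": "web-portal", "vuln_id": "CVE-2020-1234", "scanner": "dast"},
]
queues = assign_findings(raw)  # two teams, one deduplicated finding each
```

This is exactly what a spreadsheet cannot do at scale: two scanners reporting the same issue collapse to one work item with a clear owner.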

Jim Manico:
One last note. Let me tell you how I got involved. I get asked to be an investor and advisor every day. I'm looking for three criteria before I write a check. Number one, I want to make sure there's a gap in the market. When it comes to this problem, vulnerability management, I am very confident that there's a gap in the market for an effective piece of software. Our competitors started about three or four years ago and really built their products based on customer demand without a product vision: clunky, difficult-to-use software packages. Our belief was that we wanted to hear our customers' problems and build a unified solution for them. We know that they know the problem, and we feel like we've done a good job at making the solution happen.

Jim Manico:
The other note is, I’m looking for mentally healthy people. There’s a lot of crazy people in security, myself included. But when I met Scott and I met Steve, I was very impressed at their conflict resolution abilities, their grounded nature, their sober nature, and their dedication to working with their team and getting the job done. This is a very rare set of characteristics in founders, in my opinion.

Jim Manico:
And last, I'm looking for engineering competence. I looked at their code base, their database, and how they designed their database, and I immediately signed a check, because they have engineering sophistication at great scale.

Jim Manico:
That's my story. I'm very happy that you're here. I want to pass this off to Scott Kuffer, who's the chief operating officer at Nucleus. Thank you for listening so far.

Speaker 5:
Yeah, Scott!

Scott Kuffer:
Sweet. Well, thanks, Jim. I had no idea that I was going to be complimented so much, so if my face is red, that is why. Okay. Yeah. I don't project, right? This guy did opera for eight years, so he can project. I'm not that cool. But Jim talked kind of broadly about the issues that we're seeing as an industry. What I want to start focusing on is really what you're here to see, which is the specifics. Oh, there we go. The specifics of what enterprises, especially at large scale, are seeing in their environments. Right? And so, at a high level, what Jim was getting at was this concept of vulnerability management sprawl. There's vulnerability data coming from all these different places. Is it your SAST, your DAST, your IAST, your SCA, your cloud configuration, your network security vulnerabilities, your container scans, all of that?

Scott Kuffer:
And so what ends up happening is that it turns into a process problem, more so than anything else. We don't have a data generation problem. We have a data process problem. So what ends up happening is you have teams that are responsible for fixing vulnerabilities across a large enterprise, and you don't actually know who's responsible for what, or why they're responsible for it. You can't make decisions around what you actually want to do with your vulnerability data. So what ends up happening is that it's just way too slow, and in modern enterprises, we're doing development at the speed of light. As Jim mentioned, we're doing hundreds, thousands of scans a day, and we just can't keep up, from a security posture perspective, with the speed of development. And so in a modern enterprise, you require an ability to scale your process and your people just as much as you require the tools to generate the data.

Scott Kuffer:
The other piece of this is just incomplete visibility into the big picture. As a global information security team, I have no idea what vulnerabilities I have, where they exist, or why they even exist. And I don't know who's accountable to fix them. So what we're looking at is: what do we actually need to do in order to get vulnerabilities fixed as quickly as possible, but get the right vulnerabilities fixed? Because we have millions and millions of vulnerabilities spread out across our entire organization. We have no idea: is this an Oracle database? Is this a Windows operating system? Is this a patch that needs to be applied? Who's responsible for that patch, for that team, in that business unit? And so visibility and accountability is a big problem that we're starting to see, and traditional homegrown vulnerability management just doesn't scale to the modern enterprise.

Scott Kuffer:
So that leads us to what we're trying to build here at Nucleus, which is a mind shift around what we're trying to do with the vulnerability process. So when we look at vulnerability management, you think of Tenable, Rapid7, Qualys scans, right? What we're trying to do is to say, "Vulnerability management is a higher level process that we care about in order to actually mitigate risk and to execute on our business objectives." So it goes beyond just vulnerability data and then saying, "Hey, we fixed X number of vulnerabilities." The greatest example of this is, "Hey, we have 50 million vulnerabilities. We've fixed 80,000 vulnerabilities. Our risk score is 75." What does that mean? It doesn't, it means nothing. It means literally nothing. So the idea is that we need to actually automate the process of being able to scale our programs.

Scott Kuffer:
So what we do is turn it on its head and say, "All right, let's aggregate data that already exists, up to 500 gigs a day, whatever." We enrich that data with additional context that you need to make decisions. So as an analyst, my job is to make decisions and to triage. There's a whole triage process. Some organizations have entire teams whose job it is to just triage vulnerabilities. Is this a problem? Who does this belong to? So we need to enrich the data, not just with the threat information so that we know what to fix, but also with business context information so that teams know who's responsible for fixing it, they know what they have to do with it, and all of those types of sub-processes that go along with it. And then finally, you have to analyze the data, you need to actually remediate, and then you need to monitor.
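The enrichment step described above can be sketched as a function that takes a raw finding and adds threat context (is this actively exploited?) and business context (who owns the asset, is it exposed?) before triage. The triage rule and all of the data here are simplified assumptions, not Nucleus's actual logic:

```python
# Sketch of vulnerability enrichment. The data and the one-line
# triage rule are illustrative assumptions only.
def enrich(finding, threat_intel, asset_db):
    """Add threat and business context to a raw scanner finding."""
    enriched = dict(finding)
    # Threat context: is this vulnerability known to be exploited?
    enriched["exploited_in_wild"] = finding["vuln_id"] in threat_intel
    # Business context: ownership and exposure, from an asset inventory.
    asset = asset_db.get(finding["asset"], {})
    enriched["owner"] = asset.get("owner", "unassigned")
    enriched["internet_facing"] = asset.get("internet_facing", False)
    # A simple triage rule: actively exploited + internet-facing = urgent.
    enriched["priority"] = (
        "urgent"
        if enriched["exploited_in_wild"] and enriched["internet_facing"]
        else "normal"
    )
    return enriched

result = enrich(
    {"asset": "billing-api", "vuln_id": "CVE-2021-44228"},
    threat_intel={"CVE-2021-44228"},
    asset_db={"billing-api": {"owner": "payments-team", "internet_facing": True}},
)
```

The point of the sketch: the raw finding alone cannot be triaged; the same CVE is urgent on an internet-facing asset and routine on an isolated one.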

Scott Kuffer:
So there’s all of these challenges, about 150 different challenges and sub-processes within the concept of vulnerability management that you need to be able to manage at scale. Okay. Go to the next slide please.

Scott Kuffer:
So, the way that we approach this problem is what we call centralized vulnerability management, unified vulnerability management, pick a marketing term. We don't care, call it whatever you want. But the idea is that we take data from all of the different sources of information in your organization. So this is anything from network scanners to SAST, DAST, IAST, SCA, et cetera. We normalize it into a single place across your entire technology stack. We enrich it with threat intelligence; actually, we have a partnership with Mandiant, so we pull the data from Mandiant. So you get Mandiant data and attributes inside of Nucleus as part of your platform, and you can use that in your decision-making process.
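Normalization is the part that makes everything downstream possible: findings from different tools arrive in different shapes and get mapped onto one shared schema. The field names below are hypothetical stand-ins for what each class of tool might emit, not any vendor's real output format:

```python
# Sketch of scanner-output normalization. Input field names are
# hypothetical; real scanners each have their own export formats.
def normalize(source: str, raw: dict) -> dict:
    """Map a tool-specific finding onto one shared schema."""
    if source == "sast":
        return {"asset": raw["repo"], "vuln_id": raw["rule_id"],
                "severity": raw["level"], "source": "sast"}
    if source == "network":
        return {"asset": raw["hostname"], "vuln_id": raw["cve"],
                "severity": raw["risk"], "source": "network"}
    raise ValueError(f"unknown source: {source}")

# Two very different raw findings end up with identical keys,
# so they can be stored, queried, and routed the same way.
a = normalize("sast", {"repo": "billing-api", "rule_id": "SQLI-1", "level": "high"})
b = normalize("network", {"hostname": "db01", "cve": "CVE-2020-1", "risk": "medium"})
```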

Scott Kuffer:
We enrich it with asset information, from things like the CMDB, things like AWS cloud environments, so that you can keep up to date with those ephemeral assets as they get spun up and spun down, as we deploy Kubernetes clusters, et cetera. And then we take that normalized data set and say, "Great, now do whatever you want with it. Export it to another system, build automation within Nucleus to be able to take actions, organize the data." Just organizing the data is a real problem. Like, how am I, as a global team, supposed to hold my teams accountable for vulnerability management?

Scott Kuffer:
So the [inaudible 00:15:02] asks me, "Hey, I'm going to jump on the Log4j train." So, do we have Log4j in our environment? Let's say I'm a giant software vendor like Oracle, right? I've got 15 business units and a global information security team. The CISO says, "Hey, do we have Log4j?" I have no idea, right? "Let me go talk to NetSuite, let me go talk to Oracle Cloud." And then they ask them, "Hey, do we have Log4j?" And then they're like, "I don't know." So then they go and talk to their AWS team or whoever, they go talk to Snyk, they go talk to all of their different tools. And it can take six months just to find out if you're vulnerable. Six months is way too long. It's way too slow. And so just the power of being able to say, "Hey, all of these assets belong to these teams. Here are all of our vulnerabilities across our entire ecosystem. We have the answers." It's about what types of questions we're trying to answer, and what types of enablement we're trying to provide to our teams.
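With assets and their components already normalized into one inventory, the "do we have Log4j?" question becomes a single lookup instead of a months-long email chain. A minimal sketch, with an invented inventory:

```python
# Sketch of the "do we have Log4j?" lookup over a unified inventory.
# Assets, components, and team names are invented for illustration.
INVENTORY = {
    "billing-api": {"components": ["log4j-core-2.14.1", "spring-5.3.0"],
                    "owner": "payments-team"},
    "web-portal": {"components": ["react-17.0.2"],
                   "owner": "frontend-team"},
}

def find_component(fragment: str):
    """Return (asset, owner) pairs whose components match the fragment."""
    return [
        (asset, info["owner"])
        for asset, info in INVENTORY.items()
        if any(fragment in c for c in info["components"])
    ]

hits = find_component("log4j")  # [("billing-api", "payments-team")]
```

The answer comes back with ownership attached, which is the accountability piece: you know not just that you're exposed, but which team has to act.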

Scott Kuffer:
It allows for so many different use cases: top-down management, the ability to democratize so that each team follows the Spotify methodology in a tribe-based way, embedded enterprise security architects, those kinds of things. And these are the types of ways that we're seeing enterprises try to manage these challenges.

Scott Kuffer:
So, specifically, what we're trying to do is change the way people think about vulnerability management and assets. It's less about the data itself and more about how we manage the process around the data, so that we can actually start scaling what we're trying to do. As Jim mentioned, we work with a lot of the Fortune 500 and Global 1000, and we're tending to see three main use cases. The first is DevSecOps. Everybody talks about DevSecOps, right? Everybody loves that term. But what we mean by it is that we're actually able to embed security into our development processes. This is huge in the government: trying to say, "Hey, we want to provision infrastructure in real time, but there are all these compliance frameworks and issues that we need to deal with. How do we actually make decisions at scale in an automated way?"