Supply Chain Analysis - Creating a Risk Assessment Baseline

On Demand
In this episode, Tim talks about creating a policy baseline and tuning it to suit an enterprise’s use cases, risk appetite, and areas of concern. 

 

EPISODE TRANSCRIPT

GUY NADIVI: Hello, everyone, and welcome to episode six of our series called Software Package Deconstruction, How to Assess Risks to your Software Supply Chain. Each episode, we deconstruct a software package and do a deep dive into its risk profile to better understand its unique attack vector. And, we hope that in some small way, we can enable your security team to stay one step ahead of malicious actors.

Next slide, please. We'll start with a little housekeeping. If you have any questions or comments, please submit them through the Q&A feature in the Zoom webinar. We are recording this episode for those who cannot attend live, and you'll be able to find it on our website's media pages shortly after the conclusion of today's webinar.

After the webinar, we'll also be sending out a follow-up email to all registrants with any relevant links that we think you might find useful, as well as a link to the aforementioned recording. And finally, if you like subtitles, click the live transcript button to enable Zoom's closed captioning feature.

Next slide, please. All right. Who is ReversingLabs, you may be asking? Well, protecting organizations from increasingly sophisticated cyber threats is what we do. At ReversingLabs, we leverage the world's largest threat intelligence repository to protect software development and power advanced security solutions.

Our threat intelligence repository is seven times bigger than VirusTotal's, and it keeps the most advanced cybersecurity organizations and Fortune 500 enterprises informed and in front of malicious actors. Our software supply chain security and threat intelligence solutions have become essential tools for advancing enterprise cybersecurity maturity globally.

ReversingLabs operations are spread throughout the U.S. and Europe, with offices in Zagreb, Croatia, and Cambridge, Massachusetts, and our customers are in just about every corner of the globe. We have just under 300 employees, and we're looking to hire more. So, if you or someone you know is in the market for a new opportunity, please check out our careers page to view our current openings.

Next slide. The man who will be deconstructing the software packages for you is Tim Stahl, a seasoned security professional who boasts almost 20 years of experience in information security, with additional experience in information technology related to both the engineering and administration of enterprise networks.

Tim has secured and defended networks across .mil, .gov, and .com enterprise domains. And his expertise includes the tracking of both APT and criminal groups, threat intelligence, threat hunting, data analysis, OSINT research, DevOps, and SIEM engineering. 

Next slide. The subject of today's software package deconstruction is creating a risk assessment baseline. We're going to talk about creating a policy baseline and tuning it to suit an enterprise's use cases, risk appetite, and areas of concern. This is critical because policy configuration is not a one-size-fits-all undertaking; different teams, projects, or risk profiles may require their own custom policy set.

So, today's deconstruction episode will hopefully clear out some of the fog surrounding software supply chain security: what it is, why it matters, and how it can be operationalized. And the tool we'll be using to discuss creating a risk assessment baseline is the one you see on your screen, the Software Supply Chain Security platform from ReversingLabs.

The SSCS platform decomposes the final deployment software image, describes its underlying software behaviors to determine deployment risks, and, when it finds a red flag, provides prioritized remediation guidance. SSCS is powered by the world's largest private file reputation repository, with over 10 billion files and about four and a half million new files added daily. It not only helps prevent threats from reaching production, but can also find exposed secrets before release. As a bonus, it can also generate a complete Software Bill of Materials. And with that, let's turn it over to Tim to begin his deconstruction.

TIM STAHL: Welcome, everyone. Thanks for joining us. If you have questions throughout this, feel free to throw them in the chat and we'll grab them at the end. So, we're gonna talk about baselines and why it's such an important thing. So, you know, SSCS and some of these tools for software supply chain security are new to everybody.

This is a greenfield opportunity. These tools produce a lot of data that you've never had visibility into before. We generate a lot of different alerts, and you want to tune those things to your baseline, to your risk appetite, and to what matters to you. No two industries are alike.

There are a lot of different reasons why you would want to see certain things and not see other things; it depends on your requirements. And a lot of it depends on the resources you have. You know, being a hundred percent secure, or having a hundred percent accurate risk baseline for these things, isn't realistic.

You want to do the best you can with the resources you have and make qualified decisions. So, that's what we're going to talk about here. Nothing is plug and play. No security device just works right out of the box. You have to do this work ahead of time and then tune as you go, right?

These are living things. This will be something that you are constantly updating, researching, and tuning. The more you tune over time and the more you learn, the better the outputs are, the better the reporting is, and the better the signal-to-noise ratio. So, there are a lot of reasons you want to do this.

Tuning a good baseline allows you to do comparisons between different software of the same type. If you're making a purchasing decision, you want to compare the candidates with the same policy set, so you have an apples-to-apples comparison. You know, M&A is another good reason for this. You can run a bunch of different products from a company through the platform, and that gives you a sense of what their quality is like. What does their pipeline look like?

What are they looking for? There's enough information in SSCS, in the behaviors, the networking data, and all these other things we reveal to you, to assess not only the product itself, but the vendor. You can make some assumptions about how well the vendor is doing, what they're checking, and how secure the stuff they're releasing is. Are they improving over time? Are you seeing a reduction release after release, where there are fewer CVEs, fewer bad behaviors, less traffic, whatever? Or are they handing off a ton of tech debt to you and leaving you holding the bag? All of these things play into this. And this is why, right?

Modern software is way more complex. It's not the olden days where you had an edge and there was an inside and an outside. Today, it's all edge, right? Your attack surface is a fractal: the more you dig, the more niches you see and the more things you need to be aware of. And there's a lot of cross-platform software.

Zoom is a good example of that. You can log in from your computer and see the screen. You can log in from your phone at the same time and do audio. There's cloud backend stuff. Maybe the data lives in the cloud, but you have some elements on-prem. And then building the software itself is extremely difficult.

So, you may see large packages written by multiple teams. There will be a cloud team, maybe an API team, different sets of teams. They all develop their sections of the code in silos, so they generally won't talk to each other. And then there's somebody managing the whole thing, but you can see how these things get built, and you can see some of the results of that.

And when you look at the reports for these things, you'll see things like multiple versions of the same third-party packages. I recently looked at a package that contained eight copies of OpenSSL, spanning seven different versions. Some were one or two releases behind, and a couple were almost a year old, and you can see that the different teams use different things.

So, you'll see this, and the more of that there is, the more complex the package is, and that's more attack surface for you to manage. So, this is how you can evaluate the vendor as well.
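To make the bundled-OpenSSL example concrete, here is a minimal sketch, not the SSCS implementation, of how duplicate component versions could be flagged once a package's contents have been flattened into (name, version, path) tuples; the component list below is invented for illustration.

```python
from collections import defaultdict

# Hypothetical flattened component list, e.g. extracted from a package
# analysis report or an SBOM. Tuples are (component name, version, path).
components = [
    ("openssl", "1.1.1k", "lib/teamA/libssl.so"),
    ("openssl", "3.0.2",  "lib/teamB/libssl.so"),
    ("openssl", "1.1.1u", "vendor/agent/libssl.so"),
    ("zlib",    "1.2.13", "lib/teamA/libz.so"),
]

def find_duplicate_components(components):
    """Group bundled components by name and report any that ship
    in more than one version inside the same package."""
    versions = defaultdict(set)
    for name, version, _path in components:
        versions[name].add(version)
    return {name: sorted(v) for name, v in versions.items() if len(v) > 1}

if __name__ == "__main__":
    for name, versions in find_duplicate_components(components).items():
        print(f"{name}: {len(versions)} distinct versions bundled -> {versions}")
```

Each duplicated component is extra attack surface and extra patching work, which is exactly the vendor-quality signal Tim describes.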

The use case also matters a lot. Producers and consumers care about different things. They have different goals and different priorities. We list some of them here. Producers really care about new features, building out new things, and following up on requests that customers have, or keeping up with the Joneses; if other, similar products do certain things, you have to follow along with that. Quality is another big factor.

But that's usually bug- and customer-related: if customers are asking for things, those are going to get prioritized, and reliability is a big thing for them. There's also protecting their own code. So, there are a lot of policies in here that are for software producers: debug information, code leakage, secret leakage. A lot of that stuff is applicable to them, but as a consumer, you wouldn't really care about it, right? I don't care as a consumer if your debug information is in there; that's your problem. It makes it a little bit easier to reverse engineer your code, if somebody wants to do that, but it doesn't impact me as a consumer. So, those are things I would want to turn off, just to reduce that noise.

As a consumer, you really care about your network. Does the product do what it says? Does it give you all the things that you want? Does it have value? And then, does it impact your risk? Does it bring along a lot of baggage that you have to be more concerned about? In some cases, those are opposing viewpoints, right?

Protecting your customers and their data is always something that you hear from producers. But with SSCS, you can actually see that. You can see it in the results, you can see it in the behaviors and the reporting, and you can tell whether or not it really matters to them. As you do release after release and run diffs, you can see if new things are being added.

So, does the count of vulnerabilities in that package rise over time? With each new feature, does it bring more baggage? Or are they tuning those things out over time? So, that gives you a sense of what you can expect long term from a vendor or from a particular product.

In SSCS we have around 200 different policies, and you can see on the right-hand side how policy configuration works. It's just a pop-up screen that gives you the definition of the policy; this one, for example, is related to cookies. You choose whether or not you want to enable it and whether you want it to pass or fail. So, you can tune your baseline so that when something critical happens, it shows up as a failure in the report.

Sometimes you want to see these things but don't want them to count as a failure, so you monitor but don't deny. This takes a little bit of work as you go. Every time you do an analysis, you'll see things that maybe don't matter to you, and you tune those out, or maybe you just have them as a pass.

Just for awareness' sake. And then there's also audit trail information here. You can pick a reason why you're making the change, or you can add a note at the bottom explaining why you're doing it, so it's recorded, and you can see those in the policy settings. You can see who changed it, when they changed it, and the reasoning they gave. So, it's a good audit trail for seeing what's going on and monitoring this over time.
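As an illustration of the toggles Tim describes, the sketch below models a policy entry with an enable flag, a pass/fail action, and an audit trail. The class layout and the policy identifier are hypothetical, not the platform's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PolicyAuditEntry:
    """One audit-trail record: who changed a policy, when, and why."""
    changed_by: str
    reason: str
    note: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

@dataclass
class Policy:
    """Hypothetical model of a single policy and its tuning state."""
    policy_id: str
    description: str
    enabled: bool = True      # evaluate this policy at all?
    action: str = "fail"      # "fail" = count as a failure, "pass" = monitor only
    audit_trail: list = field(default_factory=list)

    def tune(self, enabled, action, changed_by, reason, note=""):
        # Record every change so you can later see who tuned what and why.
        self.enabled = enabled
        self.action = action
        self.audit_trail.append(PolicyAuditEntry(changed_by, reason, note))

# Example: a consumer downgrades a cookie-related policy to monitor-only.
# "SQ30105" and the description are made-up identifiers for illustration.
cookie_policy = Policy("SQ30105", "Detects hard-coded session cookies")
cookie_policy.tune(enabled=True, action="pass",
                   changed_by="analyst@example.com",
                   reason="Accepted risk",
                   note="Informational only for TPRM reviews")
```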

We have six categories in the platform, and the policies reflect those same six categories. You see them listed here, and we'll touch on some of them. Digital signatures will be the focus of next week's episode, where we continue this same discussion but focus on those signatures.

 There's a little bit of depth to that. There's a little bit of research involved. So, we'll talk about different pieces of that and why you should care about which elements. So, that should be pretty interesting as well.

So, in the platform, there are some features that make this easier. Like I said, there are all those different policy settings, and you can go in and tune them individually. In a lot of cases, you do want to do that. But you don't have time to do everything. You can't view all the things. You can't take everything into account.

You only have limited time and resources to do this kind of stuff. And you may be evaluating multiple packages over time, say 10 different pieces of software that you want to review, and you're doing diff reports every time there's an update, that kind of thing. So, this has to be repeatable, and this is part of what helps with that.

So, you want to do some of the policy settings, but we also have this thing called maturity levels. This is an important feature because it's guided maturity. So, you start out with level one and it has the most critical elements: Malware, any tampering, source code leakage, that kind of stuff. Again, if you're a consumer, maybe the source code piece doesn't matter to you.

So maybe you turn those off too. Then, once you have level one covered, as a producer your goal is to increase maturity. You fix a bunch of things, then you click level two, and that opens up more elements. As a consumer, you probably want to start at level two, right? It gives you a certain baseline set of the most important things.

And you can tune out the elements that don't matter to you. The idea is to just continue to move up levels; each level is a new stage of maturity. So, if the software you're using meets level two requirements, then you can shift up to level three. A nice element inside this, too, is that everywhere we show you that a package passes level two, it will also tell you whether it passes level three and level four. So, before you make that change, you can see all that information and make an educated decision when it's time to move up a level. This helps guide you through the maturity levels. You want to get to at least level three, and you want to aspire to level four, but again, some of this is out of your control when it depends on the producer.

But that gives you an idea of where to go and what matters more as you move up.
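Here is a rough sketch of the guided-maturity idea, assuming cumulative levels. The category names and level contents are made up for illustration and are not the platform's actual level definitions.

```python
# Hypothetical mapping of maturity levels to the checks they add.
# Levels are cumulative: level N includes everything from levels 1..N.
LEVEL_POLICIES = {
    1: {"malware", "tampering", "source_code_leakage"},
    2: {"exposed_secrets", "critical_cves"},
    3: {"high_cves", "missing_mitigations"},
    4: {"medium_cves", "code_quality"},
}

def policies_for_level(level):
    """Collect every check required at or below the given level."""
    required = set()
    for lvl in range(1, level + 1):
        required |= LEVEL_POLICIES[lvl]
    return required

def passes_level(failed_checks, level):
    """A package passes a level if none of that level's checks failed."""
    return not (policies_for_level(level) & set(failed_checks))

# Example: a report with one failed check, tested against every level,
# so you can see how far up you could move before flipping the switch.
failed = {"medium_cves"}
for lvl in (1, 2, 3, 4):
    print(f"Level {lvl}: {'pass' if passes_level(failed, lvl) else 'fail'}")
```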

So, when you build this baseline, like we just mentioned, there's some producer-specific stuff in here. By default, the policy set is oriented toward software producers. So, if you're a consumer and you want to do third-party risk management, TPRM kind of stuff, you're evaluating products, anything like that, you're going to want to tune a lot of this stuff out. Debug information, as we said, is more important to producers than to consumers.

You probably don't care about that, and if it's not going to figure into your risk assessment and your evaluation of the software, then there's no need to even look at it, right? It just becomes noise. So, this is how you want to slowly tune out that noise and zero in on the things that do matter to you.

So, this becomes a repeatable process. You can do rapid triage, see what changed, and then adjust your assessment accordingly. Version control information is sort of like debug information, right? It's something that a producer wouldn't want everyone to be able to see.

Maybe you don't care about it as a consumer, though. Some of the secrets are like that as well. Some of them will impact you, right? If there are credentials for a web service that people can then access, that should be a concern for you as a consumer, but there are other secrets built into the software

that we have policies for which wouldn't matter to you as much. They're not as critical to you, and they wouldn't figure into your risk assessment beyond your assessment of the vendor itself and what they're letting slide through. If they're letting these things go through their process, that means they're not doing these checks either.

And now you can question the validity of their process, what their pipeline looks like, and the quality of what they're delivering to you. Time and resources are always going to be a factor. That's always going to be the limit: how much you have to do, and how many resources and people you have to do it with.

And you make the best of what you have. It's a balancing act. So, there are some of these things that, right off the bat as a consumer, I would recommend turning off: low CVEs, and even mediums as well, right? They're not really going to figure into your assessment. You're not going to decide not to purchase something because of a certain low CVE; there's nothing that critical down there. To me, it's just noise. You're never going to get to it. You're never going to research those things. So, just remove that from the alerts and from the reporting. That gives you more time to work on the things that do matter.
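As a sketch of that kind of noise reduction, using a made-up alert structure rather than the SSCS report schema, filtering out low and medium severity CVE alerts is essentially a one-line policy decision.

```python
# Hypothetical alert records; the real report schema will differ.
alerts = [
    {"id": "CVE-2023-0001", "severity": "critical", "component": "openssl"},
    {"id": "CVE-2022-1234", "severity": "medium",   "component": "zlib"},
    {"id": "CVE-2021-9999", "severity": "low",      "component": "libxml2"},
]

# Consumer-oriented baseline: only keep what can actually sway the assessment.
KEEP_SEVERITIES = {"critical", "high"}

actionable = [a for a in alerts if a["severity"] in KEEP_SEVERITIES]
print(f"{len(actionable)} of {len(alerts)} alerts kept for review")
```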

The same goes for anything that's low priority for consumers, like mitigations: whether the code has checks for certain issues, or whether it has ASLR, which is memory address randomization, so attackers can't always focus on the same piece of memory to find the same thing every time on every computer. That was introduced back in Windows Vista.

And we still see software that doesn't have it. So, some of those mitigations are important, but a lot of them are lower level, and the policy tells you the priority right there: this is a low-priority thing. If it's a low priority for someone producing the software, then it's of very little value to you as a consumer.
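For a concrete feel of what a mitigation check involves, here is a small standalone sketch, not how the platform does it internally, that inspects a Windows binary's PE header for the ASLR and DEP flags using the open-source pefile library.

```python
# Requires: pip install pefile
import sys
import pefile

# PE optional-header DllCharacteristics flags (from the PE/COFF spec).
DYNAMIC_BASE    = 0x0040   # ASLR: image can be relocated at load time
NX_COMPAT       = 0x0100   # DEP: data pages are non-executable
HIGH_ENTROPY_VA = 0x0020   # 64-bit ASLR with a larger address space

def check_mitigations(path):
    """Return which basic exploit mitigations the binary was built with."""
    pe = pefile.PE(path, fast_load=True)
    flags = pe.OPTIONAL_HEADER.DllCharacteristics
    return {
        "aslr": bool(flags & DYNAMIC_BASE),
        "dep": bool(flags & NX_COMPAT),
        "high_entropy_aslr": bool(flags & HIGH_ENTROPY_VA),
    }

if __name__ == "__main__":
    # Usage: python check_mitigations.py some_binary.exe
    for name, enabled in check_mitigations(sys.argv[1]).items():
        print(f"{name}: {'enabled' if enabled else 'MISSING'}")
```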

Low-priority items like that aren't going to sway your risk assessment either way, so that's more noise you can reduce. The same goes for anything that's really high volume, which varies from package to package. The nice thing about SSCS is you can have up to a thousand different policy sets, so you can tune a policy for specific things and keep a larger, generic policy for when you're just starting out with a new solution or a new piece of software you're evaluating.

Start with the main policy set, everything wide open, and view everything to get your initial assessment; then you can start tuning it, or move the package to a policy set that's already configured. That's how you mature your settings and how you review this stuff over time. If you look at something in a report, read through it, and think, I don't care about this, turn it off immediately, because if you don't care about it now, you're not going to care about it next week or with the next update. This is also organic. You don't want to worry about everything right out of the gate, saying, I've got to have this, and spend a ton of time and effort trying to get it tuned before you start using it.
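Since the platform supports many named policy sets, one practical pattern is a wide-open default for first looks plus tuned sets per use case. The sketch below is hypothetical; the set names and disabled-policy lists are invented purely to illustrate the routing idea.

```python
# Hypothetical policy-set registry: names and contents are illustrative only.
POLICY_SETS = {
    "default-wide-open": {"disabled_policies": set()},
    "tprm-consumer": {"disabled_policies": {
        "debug_info", "low_cves", "medium_cves", "version_control_info"}},
    "internal-producer": {"disabled_policies": {"low_cves"}},
}

def choose_policy_set(package, first_look=False):
    """Start new packages wide open; route known ones to a tuned set."""
    if first_look:
        return "default-wide-open"
    return "tprm-consumer" if package.get("third_party") else "internal-producer"

print(choose_policy_set({"name": "vendor-app", "third_party": True}))
print(choose_policy_set({"name": "new-vendor-app"}, first_look=True))
```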

Do the big-swing stuff, right? Turn off the low CVEs, some of the other low-priority things, anything that's really, really noisy and doesn't impact what you think about the software. If it doesn't impact your data, your customers, your uptime, or anything you're trying to do that matters to you, then you just start turning that stuff off, and you do it release after release. Turn a few things off, run the next release, tune a few more things out. You'll get there over time, and it's better to take this in small bites as you go. Again, it's all about tuning out the noise. If it doesn't aid you in any way, it doesn't tell you anything useful about the application, and it doesn't support your assessment, then there's no value in it. Why even look at it? So, we talked about the baselines. This slide shows you a different set of baselines. The default policy set is on the left. These are three different samples run through different policy sets. I have a TPRM set that I use for these webinars and for other things that does a lot of what I was just describing.

It just tunes out a little of the low-level stuff. So, the defaults are on the left, the new policy results are on the right, and you can see how the deltas change, the huge drop in alerts. Now, the difference between these two policy sets is about a dozen policies turned off. That's it. And you can see the highs stay the same across each one.

Those are things you wouldn't want to tune out, at least not without a lot of thought and research, over time, but the lows and the mediums that don't matter, that's where you can make these big cuts. All these things that don't matter and just generate noise to you, reduce them over time.

But a dozen policies gets you at least a 50% reduction in most packages. Size matters, too. The bottom package is a lot smaller; the top one is a couple of gigs. Looking at this initially and seeing 3,000 alerts seems daunting, right? Where do I start? It's so much stuff.

Well, that's why you start doing this tuning: because a lot of this stuff won't matter to you. It depends on the industry you're in, your compliance regulations, all the things we talked about earlier; all of that goes in, along with your knowledge of your network, your needs, and what you want to defend. Do that first handful of adjustments, turn off the lows and so on. Then, from there, you apply your knowledge and your risk profile to this thing, and you can continually tune it. Just a handful of changes makes a huge difference. So, hopefully this is helpful. I think the biggest thing we see when we talk to people and show them the platform is the wide-eyed, deer-in-the-headlights reaction when they see all those alerts. It's okay, because a lot of those don't matter to you.

So, you can move forward with this slowly and make big cuts. Reduce the noise, and then concentrate your time and effort on the highs and the things that really matter. Over time, as you research something, look into it, and realize, "Oh, this doesn't matter to me," you just start shutting things off. You can always go back and turn them back on later, too. So, you don't have to worry about getting it right the first time; you just slowly evolve this as you go. So, do we have any questions? Oh, we did get a question.

GUY NADIVI: Yeah, and let me remind everybody that if you have any questions or comments, please submit them through the Q&A feature in the Zoom webinar, and Tim will be happy to answer them for you.

And I'll go ahead and read the question that's come in: Do you have an SSCS solution for keeping track of firmware/software for modern vehicles? Modern vehicles have anywhere from 80 to 150 ECUs, miniature computers.

TIM STAHL: Yeah. I mean, in the end, it's just software. Firmware is a little bit different from normal software, you know.

It depends how deep you can go, because it is somewhat different from regular software, but any software can be decompiled. If we can read it, we can give you the analysis report on it. So, it really doesn't matter where it comes from or what its purpose is. Software is software; we can decompile it, break it down to atomic elements, run this analysis, and give you the same reporting.

So, it should work. And, like you said, that's a really good point to bring up: anything that's critical to you, anything that matters to you, you can run this analysis on. That's what this tool is for.

GUY NADIVI: Tim, I've got a question for you. You talked a bit about how baseline risk assessments are not a one-size-fits-all type of thing.

In fact, you showed that there are around 200 individual risk policies available, spread across six different categories. So, would it make sense then for an organization, as part of determining what their baseline should be, to first quantify different risk elements internally and come up with a type of risk score that makes sense for them? Maybe that would help them better determine what level of risk would be acceptable to them?

TIM STAHL: Absolutely. That's part of this process; it's almost a requirement, right? You have to look at this through the lens of your operation, your enterprise, your industry sector, all the things that matter to you and that you are required to do, you know, if there are certain policies you have to have, or legal requirements for certain things. All of that comes into play in the policy sets that you're looking at. So, by all means, it does take some planning; look through this stuff, but I would do that alongside making these adjustments, right? Initially, again, turn off the low CVE stuff. You're never going to get to it, you're never going to have time to research those things, and they generally don't matter.

You have other things to worry about. But, yes, part of this is looking at it through your lens and saying, okay, these are the things that matter to me. There may be some things that are off by default that you want to turn on because they're important to you. So, do the planning, do the work.

But like I said, this is gonna be new data to you, right? This is stuff that's never been available; you've never had this depth of view into closed source software before. It does take a little bit of time and effort. That's why I say, sure, do the planning; that's important, that's part of this.

But don't let that get in the way of getting started. Throw the packages in and get started, and then do your research as you go. You know, you have the big-picture view: these are the three or four things we care about. Okay, now you have to figure out which policies map to that and whether or not they show you information that matters to you.

So, between the individual policies, using levels as a guide, and your input based on your enterprise, your leadership's recommendations, or whatever it is you're doing, all those things work together to create that baseline. Then you have a repeatable effort, and as you go package to package or release to release, you get a better feel for what's going on.

GUY NADIVI: Some interesting questions coming in: Are the scan results from source code analysis and/or binary analysis, and how much of those may be false positives relative to the code scanned?

TIM STAHL: "False positive" is a relative term. This isn't like normal network detection. We're telling you what this software can do.

It's in there. We can see it in the code. We'll tell you it exhibits these behaviors, and then it's up to you to evaluate whether or not that matters to you. A lot of it depends on what the software is and what it's meant to do. The behaviors should align with what the software is supposed to be doing. If it's accounting software, you should expect it to do certain things. If it's medical device software, or automotive software like the earlier question, it may do things that normal software wouldn't. It may look for attached devices because it needs to read a device plugged in via USB, or whatever. So, you evaluate those collection behaviors in context.

Sometimes you'll see software that collects way too much information. And you may have the question, why? Does it need to enumerate all the users? Does it need to enumerate network drives? There's all these things you can say: "wait a minute, I don't like that. It doesn't look right to me."

Right. In a previous episode, we saw some crypto wallets that can do screen captures. That seems sketchy to me, right? Which applications on your network have access to the webcam and microphone? That's another trigger. There are a lot of these things you can look at to evaluate the software itself.

And it's not so much a question of false positives as: do you care? Does it affect your assessment?

GUY NADIVI: Next question. Is SSCS essentially a tool designed to handle SBOM or Software Bill of Materials? 

TIM STAHL: No, we don't import SBOMs. We need the actual package itself to do the evaluation. It will generate an SBOM. And, I think we've discussed in the past that not all SBOMs are created equal.

It depends on what the tool is doing. A lot of the tools that exist for this kind of work sit earlier in the pipeline, so they're shifted left, to use a term we hear a lot. If you're earlier in the pipeline, you're looking at different pieces. SCA is a good example: it looks for known vulnerabilities.

That's great. You want that in your pipelines to make sure those things don't make it into the final release, but it also gives you a narrow view of things. If you're not analyzing the final package, you're missing a lot of this stuff. A lot of the tools sit at different points of the pipeline looking for different things, or they do dynamic analysis afterwards, but that's only testing. The dynamic analysis at the end only exercises things that are accessible via the UI or the API, and that still leaves a lot of gaps.

The final package is the choke point for all the issues going into your software. So, that's where you want to scan, that's the baseline you want, and that's the SBOM you want. And then on top of that, we provide intelligence. We enrich that SBOM. So, we tell you if there's a CVE in there, whether the patch is mandated,

whether there's an exploit available for it, and whether malware is actively using it. Those are all things that help you evaluate and prioritize. So, we don't ingest an SBOM and then give you an output of it; it's something that we produce for you. Hopefully that answers your question.
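To illustrate what enriching an SBOM entry with threat intelligence can look like, here is a hedged sketch with invented field names rather than the platform's actual output format; the CVE shown (Log4Shell) is a real example of a flaw with a mandated patch, public exploits, and active malware use.

```python
# Hypothetical, minimal SBOM enrichment: field names are illustrative,
# not the platform's actual schema.
sbom_component = {"name": "log4j-core", "version": "2.14.1"}

# Example threat-intelligence lookup result for a CVE in that component.
intel = {
    "cve": "CVE-2021-44228",
    "patch_mandated": True,          # e.g. appears on a must-patch list
    "exploit_available": True,       # public exploit code exists
    "actively_used_by_malware": True,
}

enriched = {**sbom_component, "vulnerabilities": [intel]}

# A simple prioritization rule: anything exploited in the wild goes first.
priority = ("urgent" if intel["exploit_available"]
            and intel["actively_used_by_malware"] else "review")
print(enriched["name"], enriched["version"], "->", priority)
```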

GUY NADIVI: Okay, here's one more interesting question. I've come across information that's indicated/hinted at not necessarily excluding medium and low CVEs because sometimes those can or could be a hopping point to get to or traverse to some other vulnerability. Does SSCS factor that in/influence that somehow? CVEs that could possibly be linked.

TIM STAHL: That's an absolutely true statement. So, I think what you have to do is balance: what do you have time to do? If you have the time to trace through that stuff and say, okay, here's the attack chain, it usually goes from here to here, to here, then you can do that.

So, if you want to do that and you have the resources to, then by all means keep that stuff on. In general, though, that's beyond most teams. This is something new. Existing teams will pick up this tool, and it may be a risk and compliance team. It may be your SOC. It may be something else.

 You don't really need to hire a whole new team to do this, but it does take time and effort. So, it just depends on how deep you want to go, right? The further you go and the more things you look at, the better your risk assessment. Do you have time to trace through all that stuff?

If you do, then that's awesome. Good for you. So, yeah, again, the information is presented to you; it's about tuning that baseline. You decide what you want to see and what you have time to evaluate. So, maybe you just turn off the lows and keep the mediums, at least for a while, until you get a feel for that package, what changes and what doesn't.

And then maybe after a while, you can start tuning those things out, but it's totally up to you. And that's the point, right? You tune this however you want, for your needs and for how deep you want to go.

GUY NADIVI: Okay, let's advance to the next slide then. Okay, for those of you interested in analyzing your own software supply chain's security profile, we've got a 14-day free trial available of the same software that Tim just used to create a risk assessment policy baseline.

I'm going to put the link to it right in the chat. So, it's available at that link, and there's no credit card required or software to install. Just create an account, and you'll get 14 days of free access to a full-blown version of our Software Supply Chain Security platform. You can use it to find malware and code tampering embedded in your software components,

or generate an SBOM listing all the open source and third-party packages and software you're using, or leverage the platform to contextualize and prioritize alerts and threats in your software supply chain. And of course, you can create your own risk assessment policy baseline. Again, there's no charge, so please take advantage of this risk-free trial.

Next slide, please. All right, be sure to tune in to our next episode of Software Package Deconstruction, when we'll continue with Deconstructing Supply Chain Analysis Part Two, Fine-Tuning Your Baseline: Code Signing Certificates. In our next session, we're going to focus on code signing certificates: the role they play in software, how they function, how they can be used to detect tampering, and, perhaps most importantly, the many ways they can be abused. We'll cover topics like certificate authorities, certificate validation and revocation, and how weak certificates can provide a false sense of security.

It's definitely an episode you won't want to miss, and it airs next week on Thursday, August 3rd. By the way, if you registered for today's part one episode, you are automatically registered for next week's part two episode. We look forward to you joining us then. Next slide. And that concludes this episode of Software Package Deconstruction.

Thank you all again for attending and please be on the lookout for announcements about future episodes from ReversingLabs. Have a great rest of the day, everyone, as well as a great rest of the week. And remember to broaden your view of software risk by securing your entire software supply chain.


About Presenter: Tim Stahl

Tim Stahl is a seasoned security professional with almost 20 years of experience, with additional experience in both the engineering and administration of enterprise networks. He has secured and defended networks across .mil, .gov and .com enterprise domains, and his expertise includes the tracking of both APT and criminal groups, threat intelligence, threat hunting, data analysis, OSINT research, DevOps, and SIEM engineering.
