
Why Using SCA to Build Your SBOMs is a Risky Proposition
Software bills of materials generated by software composition analysis tools miss half of the components in final, compiled software packages, a new research report reveals.

Experts say scan-and-fix will remain for some time. But application security tools are evolving to provide prioritization and automation.

JPMorganChase's Pat Opet has raised a red flag. Learn why — and how SaaSBOMs can help your organization get a handle on risk.

Software supply chain security issues are on the rise — and a fragmented tools market may leave companies open to compromise.

DaC can bolster the speed, accuracy, and scalability of your threat detection. Here are five essential steps to getting started.

RL researchers detected a new malicious campaign that exploits the Pickle file format on the Python Package Index.

RL's SAFE report delivers insights into the APIs and services in your software, further enhancing transparency beyond a typical SBOM.

Virtual-machine ubiquity requires rethinking traditional AppSec controls — and modernizing your approach. Here are essential considerations.

New NIST guidance identifies ML challenges. Here’s why ReversingLabs Spectra Assure should be an essential part of your solution.

RL researchers detected a sophisticated, malicious package believed to be an ongoing campaign that may be linked to a hacktivist gang.

Here's why your organization should consider using SaaSBOMs, key challenges — and how to put CycloneDX's xBOM standard into action.

Model Context Protocol makes agentic AI development easier by connecting data sources — but the risks are very real. Here's what you need to know.

ReversingLabs’ YARA detection rule for Conti ransomware, along with the tools and information we provide, can help you spot this threat at work in your environment.

A new Python package revives the name of a malicious module to steal source code and secrets from blockchain developers’ machines.

Malicious instructions buried in LLM sources such as documents can poison ML models. Here's how it works — and how to protect your AI systems.