When it comes to malware's persistent threat to your business, the conversation typically centers on failed regulatory compliance, data loss, or fears about business continuity. But internal application quality and the software supply chain are often left out of that conversation, likely because the risk remains somewhat ambiguous. It's time to talk about application quality and release: how malware can compromise those systems and processes just as easily as any others, and how you can make improvements.
Today's Malware Risks
The malware risk to supply chain assets is real and intense; attacks have been seen in the wild at scale. Your developer, build-and-test, and production environments all have characteristics that allow 1) new malware to enter, 2) existing detection to fail, and 3) the software development lifecycle to be compromised. For instance, your build-and-test environment can be exposed to insider contamination, built with malicious third-party software components (code, certificates), and lack post-build package and certificate verification, all of which are possible vectors of attack.
Your production environment presents more risks.
For example, Operation ShadowHammer, a supply chain attack that leveraged ASUS Live Update software, remained undetected because the malicious updates were signed with legitimate certificates. Vendor updates and package redistribution can present further avenues for malware to infiltrate and remain undetected.
Other real-world malware risks come from your use of open-source components in software development. Sometimes attackers mount concerted campaigns against your supply chain to substitute something malicious for a tool you believe you need. Sometimes it's as simple as typosquatting: one mistyped package name, and you end up with a bitcoin miner instead of the SQL driver you thought you were getting.
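As a rough illustration of how a typosquat can be flagged before it ever reaches a build, the sketch below uses only the Python standard library to compare requested dependency names against an allowlist of packages you expect to pull in. The package names and the similarity cutoff are illustrative assumptions, not a recommendation of specific values or a particular tool's behavior.

```python
import difflib

# Hypothetical allowlist of packages your builds are expected to pull in.
KNOWN_GOOD = {"requests", "sqlalchemy", "psycopg2", "numpy"}

def flag_typosquats(requested, known_good=KNOWN_GOOD, cutoff=0.85):
    """Return requested package names that look like near-misses of known-good names."""
    suspicious = []
    for name in requested:
        if name in known_good:
            continue  # exact match, nothing to flag
        close = difflib.get_close_matches(name, known_good, n=1, cutoff=cutoff)
        if close:
            suspicious.append((name, close[0]))
    return suspicious

if __name__ == "__main__":
    # "sqlalchemmy" is one character off from the driver the developer intended.
    print(flag_typosquats(["requests", "sqlalchemmy"]))
    # [('sqlalchemmy', 'sqlalchemy')]
```

A check like this doesn't prove a package is malicious, but it gives a reviewer a reason to pause before a near-miss name slips into a dependency manifest.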
Room for risk leaves room for consequence. So how do you build the right kinds of controls into your software development life cycle (SDLC), the kind that ensure your applications aren't acting as vectors of attack?
Today's Mitigation
The traditional tools for handling these malware risks in continuous integration (CI) offer limited threat detection. Most enterprises run vulnerability scans and antivirus software, but new vulnerabilities and fresh malware often haven't yet made it into CVE and blacklist databases, so those scans simply miss them.
Tomorrow's Mitigation
How can you keep a streamlined process and still focus on, and benefit from, detection? Two detection practices are particularly useful:
1. Scan all build files and all dependencies
2. Run static analysis to examine files and certificates
First, when you decompose and scan all build files and dependencies, analyzing and identifying every component, you see into the deepest pieces of the application: embedded objects can be deobfuscated and threats detected more readily. Consider the coin miner injection discussed earlier. Imagine a third-party supplier bumps its version and you have no stringent review process in place to double-check the installer. You could miss that coin miner and expose all of your customers to risk.
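What might "decompose and scan everything" look like in practice? The sketch below is a minimal Python illustration: it unpacks a build artifact, hashes every file inside it, and fails the CI stage if anything matches a known-bad hash. The artifact name, the known_bad.txt hash list, and the blocklist format are assumptions made for illustration, not any particular product's workflow.

```python
import hashlib
import sys
import tarfile
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a single extracted file."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def scan_artifact(artifact: str, blocklist: set[str]) -> list[str]:
    """Decompose a build artifact (a tarball here) and flag any known-bad file hashes."""
    findings = []
    with tempfile.TemporaryDirectory() as tmp:
        with tarfile.open(artifact) as tar:
            tar.extractall(tmp)  # only unpack artifacts your own pipeline produced
        for path in Path(tmp).rglob("*"):
            if path.is_file() and sha256_of(path) in blocklist:
                findings.append(str(path.relative_to(tmp)))
    return findings

if __name__ == "__main__":
    # known_bad.txt: one SHA-256 per line, e.g. exported from a file reputation feed.
    blocklist = set(Path("known_bad.txt").read_text().split())
    hits = scan_artifact("release-bundle.tar.gz", blocklist)
    if hits:
        print("Blocked files found:", hits)
        sys.exit(1)  # fail the CI stage instead of shipping
```

A real scanner would go further, unpacking nested archives and deobfuscating embedded objects, but the control point is the same: the build stops before a tainted component ships.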
Second, if you're producing executables for installation, the certificate chain on the final result must be valid. Static analysis can provide metadata about the final artifact and its signature, letting you apply detection policy to the golden image you'll ship, as well as to the golden images you're bringing in the door. Whenever something enters your supply chain, it should be run through static analysis as well. Equip your detection process with automated scanning workflows so you are creating the right controls across the SDLC.
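One way to wire that signature check into a pipeline is sketched below. It assumes the open-source osslsigncode tool is installed on the build agent, and dist/installer.exe and trusted_roots.pem are placeholders for your actual signed output and trusted CA bundle; the point is simply to gate the release stage on the verification result.

```python
import subprocess
import sys

def verify_signature(installer: str, ca_bundle: str) -> bool:
    """Verify the Authenticode signature on a built installer before release.

    Calls the osslsigncode CLI; the installer path and CA bundle location are
    placeholders for whatever your build actually produces and trusts.
    """
    result = subprocess.run(
        ["osslsigncode", "verify", "-CAfile", ca_bundle, "-in", installer],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    return result.returncode == 0

if __name__ == "__main__":
    if not verify_signature("dist/installer.exe", "trusted_roots.pem"):
        print("Signature or certificate chain failed verification; blocking release.")
        sys.exit(1)
```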
Accelerated Dev, High Quality
How does instituting automated static analysis help accelerate application development and improve quality? Think about your competition. When you produce and distribute (or receive and distribute) complicated packages, the ability to scan them quickly becomes critical. Today, everyone is looking at sandbox options for analyzing their software, and those sandboxes can take an exorbitant amount of time to run. If you can scan large Docker images within seconds, you're going to outperform a competitor whose scans take much longer and are less comprehensive. So how can detection help you accelerate development, release, and quality? Once the mechanisms, processes, and policies are in place, you can deliver application quality through automated detection capabilities based on static analysis and file reputation.
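To make the Docker example concrete, here is a sketch, not a production scanner: it assumes the docker CLI is available on the host, that `docker save` produces an archive whose layer blobs are themselves tarballs, and that hashes are checked against the same kind of known-bad list used in the earlier sketch.

```python
import hashlib
import subprocess
import tarfile
import tempfile
from pathlib import Path

def scan_docker_image(image: str, blocklist: set[str]) -> list[str]:
    """Export a Docker image and hash every file in every layer against a blocklist."""
    findings = []
    with tempfile.TemporaryDirectory() as tmp:
        image_tar = Path(tmp) / "image.tar"
        # `docker save` writes the image (manifest, config, and layer blobs) to one archive.
        subprocess.run(["docker", "save", "-o", str(image_tar), image], check=True)
        with tarfile.open(image_tar) as image_archive:
            for member in image_archive.getmembers():
                if not member.isfile():
                    continue
                blob = image_archive.extractfile(member)
                try:
                    layer = tarfile.open(fileobj=blob)  # layer blobs are nested tarballs
                except tarfile.ReadError:
                    continue  # manifest and config JSON entries are not layers
                for entry in layer.getmembers():
                    if not entry.isfile():
                        continue
                    data = layer.extractfile(entry).read()
                    if hashlib.sha256(data).hexdigest() in blocklist:
                        findings.append(f"{member.name}:{entry.name}")
    return findings

if __name__ == "__main__":
    # known_bad.txt: one SHA-256 per line (placeholder for a real reputation feed).
    bad_hashes = set(Path("known_bad.txt").read_text().split())
    print(scan_docker_image("myorg/app:latest", bad_hashes))
```

Because hashing files is cheap compared with detonating an image in a sandbox, this kind of sweep can run on every build rather than only on release candidates.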
Ready to learn more about static analysis and how you can use it to catch threats hidden in files and objects?
Read our blog on Automated Static Analysis vs. Dynamic Analysis - Better Together?