Trusting code just isn’t as simple as it once was. It used to be that your operating system loaded a binary, verified the signature, said, “yup, this is signed by a known and trusted authority, so I should trust it,” and went on its way. But Stuxnet changed all that. Many malware binaries today carry valid signatures, or signatures that appear valid, and many use multiple signatures to appear more legitimate and avoid detection. Certainly, these binaries should not be trusted. We need a new way of validating the source of signed code, not just the signatures themselves.
Determining the trustworthiness of code, not just whether it has a valid signature, is becoming a critical technical requirement for many organizations as signed malware that operating systems consider trustworthy becomes more common. Simply verifying a signature and its timestamp is no longer sufficient to determine the trustworthiness of code. The Certificate Authority Security Council recently outlined a new standard, the Minimum Requirements for Code Signing, that defines the procedures Certificate Authorities (CAs) must follow to prevent the misuse of code signing. While these requirements will make it easier to verify software authenticity, I don’t think they go far enough.
Instances of Code Signing as a Service have emerged on the dark web to help cyber criminals produce code that passes the signature checks of operating systems that require a valid signature before loading it. Cyber criminals are stealing code signing certificates and re-selling them on the dark web for as little as $1,000 each. Organized groups with more compute power can forge weak code signing certificates to look as though they were issued by a trusted authority. Whether criminals sign their own malicious code with a stolen or forged certificate or use a signing service, most systems today will trust any code whose signature cryptographically checks out against a certificate rooted in one of the many authorities in the system’s trust store.
Another indication that this problem is only getting worse is the rise of Continuous Integration and Continuous Deployment (CI/CD) paradigms. The days of building and deploying production systems from a trustworthy base once a year or once a quarter are quickly coming to an end for many organizations. With the rise of DevOps, code and systems are built and deployed weekly, daily, even hourly, and signing is a mandatory and critical step in that process.
Thus, validating the trust of signatures has become a major challenge for organizations throughout the world. A signature and timestamp alone clearly no longer establish trust, but unfortunately, that’s the only check performed on signed code today. In the current paradigm, two different pieces of code signed with the same certificate carry the same level of trust, and cyber criminals exploit exactly this weakness when they use a stolen or forged certificate.
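To make the weakness concrete, here is a minimal sketch of today's check in Python, using the third-party `cryptography` package. A locally generated RSA key stands in for a CA-issued code signing certificate; the point is that the verifier only asks whether the signature checks out, never what was signed.

```python
# Minimal sketch of the current trust model: a signature is accepted as long
# as it cryptographically verifies against the signer's key/certificate.
# The generated key below is a stand-in for a certificate chained to the
# system trust store; nothing about the code itself is evaluated.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

def sign_code(private_key, code: bytes) -> bytes:
    return private_key.sign(code, padding.PKCS1v15(), hashes.SHA256())

def signature_checks_out(public_key, code: bytes, signature: bytes) -> bool:
    """The only test most systems apply today: does the signature verify?"""
    try:
        public_key.verify(signature, code, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
binary = b"\x7fELF...legitimate build"
malware = b"\x7fELF...malicious payload"

# The legitimate build verifies, as expected.
print(signature_checks_out(key.public_key(), binary, sign_code(key, binary)))
# A stolen key signs malware just as convincingly -- same certificate,
# same level of trust.
print(signature_checks_out(key.public_key(), malware, sign_code(key, malware)))
```

Both calls print `True`: the verifier cannot distinguish the legitimate build from the malware, because the certificate, not the code, is what is being trusted.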
Google is bolstering trust in TLS certificates through the Certificate Transparency paradigm, which requires publicly trusted Certificate Authorities to publish the certificates they issue, so that when evaluating the trustworthiness of a website, browsers can check that the certificate being presented was actually issued by the CA. Google is also pushing to embed the list of certificate authorities a company does business with into the certificate itself, providing yet another point to validate when determining trust. Currently, there’s no good way to verify that the code you’re executing was authorized to be signed by the certificate that validates it, and that is the level of granularity needed to start fixing the code signing paradigm.
Ideally, we would have something like the Certificate Transparency paradigm for signed code: a whitelist of code paired with the specific certificates authorized to sign it. This would give organizations an additional pairing to validate the trustworthiness of a given code signature. That one extra piece of data is critical if we truly want to solve the problem of trustworthy code.
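The pairing idea can be sketched in a few lines. This is a hypothetical illustration, not an existing system: the log structure, the entry names, and the use of SHA-256 fingerprints are all assumptions standing in for a real transparency-style published whitelist.

```python
# Hypothetical sketch of the proposed extra check: beyond a valid signature,
# the (code hash, certificate fingerprint) pair must appear in a published
# allowlist -- analogous to a Certificate Transparency log for signed code.
# All names and entries here are illustrative assumptions.
import hashlib

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Pairings an organization (or a public log) would record at signing time.
signed_code_log = {
    (fingerprint(b"release-1.4.2 build"), fingerprint(b"acme-signing-cert")),
}

def is_trustworthy(code: bytes, cert: bytes, signature_valid: bool) -> bool:
    """Trust requires BOTH a valid signature and a logged code/cert pairing."""
    pair = (fingerprint(code), fingerprint(cert))
    return signature_valid and pair in signed_code_log

# Legitimate release signed with the legitimate certificate: trusted.
print(is_trustworthy(b"release-1.4.2 build", b"acme-signing-cert", True))
# Malware signed with the same (stolen) certificate: the signature is valid,
# but the pairing was never published, so the code is rejected.
print(is_trustworthy(b"evil payload", b"acme-signing-cert", True))
```

The first call prints `True` and the second `False`: with the pairing check, a stolen certificate alone is no longer enough to make malicious code trusted.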
The bottom line is that the lack of control over code signing certificates and processes, coupled with a rapidly increasing volume of signed code from CI/CD pipelines, is making it dangerous to blindly trust signed code.
Would you like to have an extra source to validate the signed code that your organization trusts?