We are seeing attackers shift their targets more and more from hardened systems like HSMs and secured environments to weaker points. In the case of SolarWinds, they went after the software supply chain. They hacked into the build server, where they were able to attack the source code while it was compiling and inject malware. The malware made it into the official downloads, and customers downloaded it along with the update. In that way, the attackers reached all SolarWinds customers, and everybody inherited that vulnerability.
This attack raises many compelling security questions. Why is an attacker able to write something that's not signed and verified into a build system? Why is a build system sitting exposed? Regardless, a lot of customers have been impacted by these types of attacks. You can imagine the implications of this for IoT devices! Imagine you're driving a car that runs on software, or flying in a plane that runs on software, and that software was attacked or vulnerable. Or a power plant, or an electrical grid, or a medical device. You can see that a supply chain left unprotected is a big deal, and very dangerous.
What we've seen is that organizations are still overlooking the basics, and that's the big problem in the supply chain. For example, in DevOps CI/CD pipelines we still see key sprawl: keys simply handed out to developers, no auditing, no idea where the keys are, no tracking, weak authentication and authorization, no approval system, not even dual controls. These gaps leave organizations extremely vulnerable. DevOps teams have inconsistent security, they live in silos, and most of the time it ends up being costly.
Thankfully there are ways to prevent this from happening! Venafi has teamed with Device Authority, a member of the Machine Identity Management Development Fund, to help organizations secure updates and securely sign code. James Penney, CTO at Device Authority, spoke to me recently about best practices for protecting the IoT supply chain.
Slam: First, I think we need to discuss why we even need to update software in the first place.
James: That is a great place to start. According to Verizon's Data Breach Investigations Report, 99.9% of attacks use known vulnerabilities. Attackers aren't necessarily trying to find new zero days that they can use against their target devices. They are simply using known information, which is readily available on the internet, to gain that unauthorized access. So, with that in mind, it's important to understand that your device either has, or probably will have, a vulnerability that can be exploited even if it's not your fault. And it's pretty inevitable as software progresses and different versions fall out of support that this will happen. So having the ability to remotely update those affected software components is a critical security function that must be present in anything that goes out the door.
On that note, unpatchable code will definitely come back to bite you. There was a case recently of a smart door lock manufacturer. Security researchers found a pretty serious vulnerability in the way that a smartphone would connect to it and authenticate to unlock the door. What ultimately made it worse is that they did not create any processes to be able to update the code on the door lock itself, which means that customers are now in the position of having to purchase a new set of locks or risk having their homes broken into. It's not a great situation to be in!
Slam: How do we start to resolve these challenges with code signing?
James: We need to address vulnerabilities, so we’ve got the ability to deploy updates over the air, which is great. But what if the update process is unprotected? This is where unsigned code really presents a problem, because this is a situation where you potentially have an update function available on the device, but the update that is being pushed to the device is not validated or checked that it's from a known, good source. This opens up a lovely little spot for someone to maliciously deploy their own code without any restrictions. It's pretty much a given that you don't want that software running on your device!
Having an update mechanism is great, but if you don't adequately protect it, then you're really just creating a fairly significant vulnerability in itself, that can be easily exploited. If an attacker can get their code on a device, you lose integrity of any of the data that it produces. We have to cryptographically sign that code to protect the integrity!
Let's say we've done that. We've now decided to sign our code, which is great. We can patch vulnerabilities through updates and ensure that our device only trusts new code that's signed with our corporate signing key. At least we can prevent some of those low-hanging fruit attacks on our devices. Our device receives the update and can check that it was signed with the right key before installing and executing it. As long as I sign my code, my device is good. But what if my corporation is no longer in control of that signing key? As in the SolarWinds case, if someone that I don't trust has access to my signing key, how is the device ever going to be able to tell the difference between a good update and a bad one? So, control of the code signing keys is a vital foundation for any secure update process.
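The verify-before-install flow James describes can be sketched in a few lines. This is a hedged illustration only: real code signing uses asymmetric keys, so the device holds just a public key and the private key stays in an HSM or a service like CodeSign Protect. A keyed HMAC stands in here so the sketch needs nothing beyond the Python standard library, and the key and package names are made up for the example.

```python
import hashlib
import hmac

# Illustrative only: never hard-code a signing key. In production this
# secret would be an asymmetric private key held in an HSM, and the
# device would carry only the public half.
SIGNING_KEY = b"corporate-signing-key"

def sign_update(package: bytes) -> bytes:
    """Build-time step: produce a signature over the update package."""
    return hmac.new(SIGNING_KEY, package, hashlib.sha256).digest()

def verify_update(package: bytes, signature: bytes) -> bool:
    """Device-side step: install only if the signature checks out."""
    expected = hmac.new(SIGNING_KEY, package, hashlib.sha256).digest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, signature)

package = b"firmware-v2.4.1 contents..."
sig = sign_update(package)

assert verify_update(package, sig)                  # genuine update installs
assert not verify_update(package + b"evil", sig)    # tampered update rejected
```

The point of the shape, rather than the specific primitives, is that the device refuses to execute anything whose signature does not verify against the key it already trusts.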
Slam: What are some of the best practices to consider when implementing an update process?
James: If it isn't obvious by now, you need to consider how code signing keys are protected and who or what has access to use them. Venafi's CodeSign Protect is a great example of a solution you can deploy to implement the right controls and policies around your keys, ultimately protecting your devices from running untrusted code. The specifics of those policies will typically be whatever your internal security team is most comfortable with. But it's wise to consider things like quorum-controlled access. What that means is that for each code signing operation or request, certain members of the organization have to come together and approve the signature. Coupled with the right internal communications policy, this can be really effective in combating key abuse.
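To make quorum-controlled access concrete, here is a minimal sketch of the M-of-N pattern: a signing request is only released once enough designated approvers have signed off. The approver names and threshold are hypothetical, and a real product would enforce this server-side with authenticated identities, not in client code.

```python
from dataclasses import dataclass, field

APPROVERS = {"alice", "bob", "carol"}  # illustrative names (N = 3)
QUORUM = 2                             # M-of-N threshold (M = 2)

@dataclass
class SigningRequest:
    artifact: str
    approvals: set = field(default_factory=set)

    def approve(self, user: str) -> None:
        # Only designated approvers may weigh in.
        if user not in APPROVERS:
            raise PermissionError(f"{user} is not a designated approver")
        self.approvals.add(user)

    def may_sign(self) -> bool:
        # The signing key is usable only once the quorum is reached.
        return len(self.approvals) >= QUORUM

req = SigningRequest("firmware-v2.4.1.pkg")
req.approve("alice")
assert not req.may_sign()   # one approval is not enough
req.approve("bob")
assert req.may_sign()       # quorum of 2 reached; signing can proceed
```

The design choice here is that no single insider, and no single compromised account, can authorize a signature on their own.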
Here are some good examples of the roles to consider for securing the code signing process:
- Key owners, who define the policies around a signing key and decide who may use it
- Key users, such as build engineers, who request signatures as part of their workflow
- Key approvers, who must sign off on signing requests before the key can be used
Having a log of every time a signature is requested and what the outcome was helps to track if a code signing key is ever being misused in your organization. Additionally, tracking the delivery of the update to the device, and ultimately when that update is applied or maybe not applied will also help you understand what state your devices are in.
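An audit trail like the one described can be as simple as an append-only log of structured events. The field names and event outcomes below are illustrative, not any specific product's log schema; the idea is just that every signature request, success or failure, leaves a record you can alert on.

```python
import json
import time

audit_log = []  # append-only in spirit; real systems use tamper-evident storage

def record_signing_event(requester: str, artifact: str, outcome: str) -> None:
    """Record who asked to sign what, and what happened."""
    audit_log.append({
        "timestamp": time.time(),
        "requester": requester,
        "artifact": artifact,
        "outcome": outcome,   # e.g. "signed", "denied"
    })

record_signing_event("build-server-01", "firmware-v2.4.1.pkg", "signed")
record_signing_event("unknown-host", "firmware-v2.4.1.pkg", "denied")

# A denied request from an unexpected requester is a signal worth alerting on.
suspicious = [e for e in audit_log if e["outcome"] == "denied"]
print(json.dumps(suspicious, indent=2))
```

Pairing this signing-side log with device-side reporting of when each update was actually applied gives you the full picture of what state your fleet is in.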
Carefully recording and alerting on update failures will help with the remediation process, and depending on the use case, that remediation plan might involve restrictive actions being taken on devices. For example, you should consider what will happen if an update fails for whatever reason, or if the device flat out refuses to even try installing it in the first place. IoT devices running out-of-date software, potentially with known vulnerabilities, present a risk to other devices and services in the ecosystem. Consider a medical device use case. It really would be imperative to know when a device has failed to apply an update, as this can potentially impact patient safety. You might not want to immediately ban the medical device, but being able to automatically and temporarily quarantine it until the software has been updated will prevent further exposure.
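The quarantine-until-patched policy can be expressed as a tiny decision function. This is a hedged sketch under assumed names: the expected version, the state labels, and the inputs are all illustrative, and a real platform would derive them from device attestation rather than self-reported values.

```python
EXPECTED_VERSION = "2.4.1"  # hypothetical current firmware release

def next_state(reported_version: str, update_failed: bool) -> str:
    """Decide a device's network posture after an update attempt.

    "quarantined" means the device is temporarily restricted to
    update traffic only, not banned outright.
    """
    if update_failed or reported_version != EXPECTED_VERSION:
        return "quarantined"
    return "trusted"

assert next_state("2.4.1", update_failed=False) == "trusted"
assert next_state("2.3.0", update_failed=True) == "quarantined"
# Even a "successful" attempt that leaves the wrong version is suspect.
assert next_state("2.3.0", update_failed=False) == "quarantined"
```

The key design point is that quarantine is automatic and reversible: the device regains trust as soon as it demonstrably runs the expected software, with no manual ban-and-reinstate cycle.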
Lastly, how can you further lock down your update process even before the package is downloaded and the signature checked? Limit where your device can actually download updates from. For example, if you know that you'll be using Microsoft Azure storage to host your software update packages, then it's probably a good idea not to let a device initiate downloading update packages from "not-a-virus.com". I'm not saying that's a suitable standalone solution by any means; checking the integrity of the download via the signature should always be done. But at least this way you'll stop that kind of problem before it even gets onto the device. And as a worst case, you'll save yourself some bandwidth costs for pulling dodgy updates.
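A download-source allowlist like the one James suggests is a one-function check. The host name below is a made-up Azure-blob-style example, and as he stresses, this gate complements signature verification after download rather than replacing it.

```python
from urllib.parse import urlparse

# Hypothetical allowlist: only our own update storage endpoint.
ALLOWED_HOSTS = {"updates.example.blob.core.windows.net"}

def allowed_update_source(url: str) -> bool:
    """Gate the download before any bytes are fetched or verified."""
    parsed = urlparse(url)
    # Require HTTPS and an exact host match; reject everything else.
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_HOSTS

assert allowed_update_source(
    "https://updates.example.blob.core.windows.net/fw/v2.4.1.pkg")
assert not allowed_update_source("https://not-a-virus.com/fw.pkg")
assert not allowed_update_source(
    "http://updates.example.blob.core.windows.net/fw.pkg")  # no plain HTTP
```

Rejecting the URL up front stops dodgy packages before any bandwidth is spent, while the signature check remains the final arbiter of whether a downloaded package runs.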
Slam: Describe how Device Authority's KeyScaler integrates with Venafi's CodeSign Protect solution.
James: KeyScaler is an IoT platform that essentially helps you manage the security lifecycle of your IoT devices. It achieves this by delivering trust and automation at scale, with the end goal of ensuring that your devices have the right security assets to do the job from day one with minimal or no human interaction. We believe that the goal for deploying any IoT or headless device should be that you simply power it on and walk away. This should be the case not only for the initial setup, but also for the lifetime of the device. Device Authority has partnered with Venafi and integrated KeyScaler with CodeSign Protect to deliver an end-to-end code signing and update management solution that allows you to keep all of your IoT devices up to date, while ensuring that the keys used for the code signing process are appropriately protected from theft or misuse. There are a few moving parts to this solution, but essentially Venafi's CodeSign Protect provides the actual signing and key usage authorization of the update packages, and KeyScaler delivers the update to the devices and ensures that it is applied in a timely fashion.
You can learn all about the Device Authority KeyScaler and Venafi CodeSign Protect integration on the Venafi Marketplace.
This blog features solutions from the ever-growing Venafi Ecosystem, where industry leaders are building and collaborating to protect more machine identities across organizations like yours. Learn more about how the Venafi Technology Network is evolving above and beyond just technical integrations.