Your company is dealing with lots of risks right now. We all are. It’s a scary time for everyone.
I want to spend the next few paragraphs talking about a risk that many CISOs and CIOs may not be thinking about but should be—especially as cybercriminals step up their activity while our focus is diverted elsewhere. Hint: the hidden villain is insecure code signing.
Before I get into that, I wanted to share a (true) story that I recently read about.
On June 27, 2017, the global shipping conglomerate A.P. Møller-Maersk was hit by NotPetya malware. It brought the company's entire operations to a standstill. Phones didn't work. Gates to terminals stopped operating. 4,000 servers were down. 45,000 personal computers were down. It was mayhem. For nearly two weeks, the company was not operational. Because Maersk is responsible for roughly one fifth of the world's shipping capacity, one fifth of the world's commercial goods were not being shipped to where they needed to go. Data on all of the company's internal computers was effectively wiped out.
You might be thinking that this is just another case of bad malware. And it is. However, the way the infection happened is what is troubling. Here's the story: Nearly 1,400 miles away in Ukraine, a small family-owned software company was being victimized by Russian state-sponsored hackers. The presumed intent of the hackers was to disrupt Ukrainian life as part of their cyberwar against the country.
This small Ukrainian technology company produced financial software that companies use when doing business in Ukraine. After breaching the company's security perimeter, the hackers apparently located its code signing keys, because they were able to successfully insert malware into a regular update of the company's product.
When the company later pushed the update out, it was signed with a legitimate code signing key, so its customers installed it. Including A.P. Møller-Maersk. And FedEx's European subsidiary TNT Express. And Merck. And others. They all got infected.
How many outside software packages does your business rely on for its daily operations? 100? 1,000? Do you trust all the software that is installed? Do you have a process for vetting it before users install it? Do you have the means to whitelist software, so that only approved packages can be installed on your computing resources?
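One way to start answering that last question is a digest-based allowlist: compute a cryptographic hash of each installer and only permit those whose hashes have been vetted. The sketch below is a minimal illustration of the idea in Python; the `APPROVED_DIGESTS` set and the file names are hypothetical, and a production allowlisting system would also verify publisher signatures, not just hashes.

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist: SHA-256 digests of installers that have
# been vetted and approved. (This digest is for the bytes b"hello",
# standing in for a real installer.)
APPROVED_DIGESTS = {
    "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824",
}

def sha256_of(path: Path) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def is_approved(path: Path) -> bool:
    """True only if the file's digest appears on the allowlist."""
    return sha256_of(path) in APPROVED_DIGESTS
```

Even a sketch like this makes the trade-off visible: an allowlist blocks tampered binaries, but only if someone keeps the approved-digest list current for every update—which is exactly why signature verification scales better than hash pinning.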
We need to consider both sides of the same software supply chain coin. Maersk received the software. The Ukrainian company produced it. Security failures obviously occurred in both places. Code signing, a security measure that has protected the software supply chain effectively for decades, failed here. Why?
Even though we don’t know the details of what actually happened, we do know a few things. Cybercriminals have been thwarted by the protective measures of code signing so much that they now target the theft or misuse of the credential itself. Here’s a quote from a very good SANS article, “The Scary and Terrible Code Signing Problem That You Don’t Know You Have.”
"The reason for this is that in the hands of a hacker, these keys can be used to add malware to the software that YOUR COMPANY produces, or can be used to distribute malware disguised as legitimate software from your company."
In addition, on the other side of the coin, the software that your company uses, including the software that your employees install, could have had malware inserted along the way. Sure, we have anti-malware scanners and antivirus software…but the bad guys are still finding a way through. What can be done to stop it?
Code signing has proven effective at deterring this in the past. The problem is that companies don't treat their private code signing keys as what they are: master keys to the business.
Why don't they? First, a typical Global 5000 company likely has thousands, or even tens of thousands, of software developers spread around the globe. They are developing software that gets delivered to customers, scripts that automate IT operations, and software that runs internal infrastructure. To use code signing, all of these teams need access to private code signing keys. And when that happens, the keys often end up stored on developers' laptops, build servers, or web update servers, where they become vulnerable to theft or misuse.
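The usual remedy is to keep signing keys inside a central service (ideally an HSM) and have build pipelines submit artifact digests for signing, so the key itself never touches a laptop or build server. The toy sketch below illustrates that separation in Python. It is an assumption-laden illustration, not any vendor's implementation: real code signing uses asymmetric keys (RSA/ECDSA), whereas this sketch substitutes stdlib HMAC so it stays self-contained. The `SigningService` class and its methods are invented for this example.

```python
import hashlib
import hmac
import secrets

class SigningService:
    """Toy central signing service. The key is generated inside the
    service and never exposed to callers, which is the property that
    matters: developers get signatures, never the key. HMAC stands in
    for real asymmetric signing to keep the sketch dependency-free."""

    def __init__(self) -> None:
        self._key = secrets.token_bytes(32)  # never leaves this object

    def sign(self, artifact_digest: bytes) -> bytes:
        """Return a signature over an artifact's digest."""
        return hmac.new(self._key, artifact_digest, hashlib.sha256).digest()

    def verify(self, artifact_digest: bytes, signature: bytes) -> bool:
        """Check a signature in constant time."""
        expected = self.sign(artifact_digest)
        return hmac.compare_digest(expected, signature)
```

The design point is the boundary, not the algorithm: because `sign()` takes only a digest, a compromised build machine can request signatures but can never exfiltrate the key the way the NotPetya and Asus attackers did.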
Just ask the folks at the Taiwanese computer manufacturer Asus. Someone left code signing keys on their web update server. Hackers broke into that server, found the keys, added malware to legitimate Asus driver updates, and those updates were pushed out to customers. Over one million Asus computers were infected.
Venafi has been managing and protecting machine identities such as TLS certificates and SSH keys for over a decade. However, we recognize that protecting code signing credentials is an entirely different beast for most businesses. To truly secure code signing private keys, you have to secure the process by which they get used without inconveniencing those who need to use them: developers. That last part is key. Developers' existing toolchains need to be supported, as do their current processes, such as DevOps.
In this new environment, traditional InfoSec processes often do not support developers very well. Many organizations need to sign thousands of pieces of software a day. This is software that is built using different toolsets, environments, and processes. InfoSec needs to support all of them but without getting in the way.
Venafi Next-Gen Code Signing was architected to secure the code signing process while providing an easy-to-use, convenient, and fast code signing environment for your developers.
Dive deeper into code signing with Eddie Glenn, and find out why giving code signing certificates to your PKI team is just a start.