Well, it's been five years since the release of CitizenFour (and more than a year beyond that since Snowden first gave his documents to journalists, of course). And in all that time, one pressing question in the security community has been this: faced with the best efforts of the most advanced digital intelligence service in the world, how did one young guy manage to steal so much?
Now at last, with the publication of Snowden's own book "Permanent Record [amazon.com]", we have his own answer to that question—and it’s not one anyone in the community had anticipated.
In the absence of any response from the NSA themselves (and it rapidly became obvious that even the NSA had no idea what he had taken), speculation abounded, and most of it centered on the standard "playbook" for a privileged insider. This makes sense, of course; Snowden was both architect and administrator of specific NSA systems (aka a "SysAdmin") and as such had unlimited authority within those systems. Nevertheless, the scope and number of the documents disclosed suggested Snowden had far, far wider access than would be consistent with that authority. In particular, he had disclosed documents not only from his own site, but from other sites, from allied federal agencies, and even from the agencies of allied countries.
In the standard playbook, that could only mean one thing—stolen credentials. Using his access to the systems he DID control, he was presumed to have gained access to credentials of users on those systems who also had access to others, allowing him to "leapfrog" across the NSA network to sites he would not normally visit, and systems he could not normally touch. By using those stolen goods, he could appear (to those other systems) to be those legitimate users, and thus trigger no alarms. How he then extracted and escaped with the data would seem to be another question, but one again answered by his privileged position.
NOTE: Systems administrators often have to apply patches, repair or restore system files, and perform other tasks that can't be done over a network, so they usually have rights to use "removable media" that a normal user would not.
The community embraced this theory because it fit the facts as they were known, and hammered home the lessons they were struggling to impart locally—that control of credentials is critical (and unsecured credentials sitting on a server might as well be in plain sight); that logs must be monitored for uncharacteristic access, removable media locked out, and so forth. It was both an object lesson and an exercise in humility. If even the great NSA could suffer a breach through poor practice, it could happen to ANYONE.
Yet some things didn't fit. The NSA are known to be obsessive log-watchers and to take behavioural analysis to a fine art. Further, the NSA place little trust in stock operating systems, and so created (way back at the start of the new century) a variation on standard Linux called "Security-Enhanced Linux". SELinux reduces the role of the systems administrator from the unlimited "root" superuser of standard Linux/Unix to a much more nuanced set of permissions that do not allow disabling logging (or modifying the logs), altering certain key system files and configuration settings, and so forth. Thus any attempt to access credentials stored by his users should have been logged, and on audit of those logs an explanation would need to be forthcoming, or he would be in a great deal of trouble. In such an environment, the only way a Snowden could happen would be if the NSA had fallen asleep at the wheel (and that IS possible; time spent spying on your own people is time you aren't spending spying on "the enemy"). At the very least, after the fact, they would know in triplicate exactly which files he had accessed, where he had copied them to, and where that copy was supposed to be now.
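The key property of such audit logging—that even a superuser can append but cannot silently rewrite history—can be illustrated with a hash chain, where each entry commits to the digest of the one before it. This is a minimal sketch of the general technique, not the NSA's or SELinux's actual implementation:

```python
import hashlib
import json

EMPTY = "0" * 64  # digest placeholder for the start of the chain

def append_entry(log, event):
    """Append an audit event, chaining it to the previous entry's digest."""
    prev = log[-1]["digest"] if log else EMPTY
    digest = hashlib.sha256(
        json.dumps({"event": event, "prev": prev}, sort_keys=True).encode()
    ).hexdigest()
    log.append({"event": event, "prev": prev, "digest": digest})
    return log

def verify_chain(log):
    """Recompute every digest; editing any earlier entry breaks the chain."""
    prev = EMPTY
    for rec in log:
        expected = hashlib.sha256(
            json.dumps({"event": rec["event"], "prev": prev}, sort_keys=True).encode()
        ).hexdigest()
        if rec["digest"] != expected or rec["prev"] != prev:
            return False
        prev = rec["digest"]
    return True

log = []
append_entry(log, "admin read /etc/shadow")
append_entry(log, "admin copied file to removable media")
assert verify_chain(log)
log[0]["event"] = "nothing to see here"  # tampering with an old entry...
assert not verify_chain(log)             # ...is detected on the next audit
```

An auditor who holds only the latest digest can detect any retroactive edit, which is why "just delete the log entry" is not an option for the administrator being audited.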
But if that didn't happen, what did?
In his book, Snowden advances a completely different sequence of events, and key to this narrative is the role he was paid to play within the NSA network.
During an earlier tour of duty with the agency, Snowden had been tasked to do exactly what he would later do for his own purposes—scan multiple NSA sites for sensitive files, copy those files off to a remote server, then examine those files. In the process, he would not only determine the security classification of sensitive files (and hence, who could access these backup copies) but locate and "merge" duplicate copies of the same document, in a process backup engineers call "single instancing" or "deduplication". There is little merit in having three hundred backup copies of the same document just because three hundred agents all had their own personal copy. This project was called EPICSHELTER, and its goal was to provide resilience in the face of catastrophic loss: if (say) an NSA satellite base was destroyed, backup copies of its documents would still exist in the system and could be allocated to other agents. This system was not DIRECTLY a factor, but a later, related system would be.
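Single instancing of this kind is conceptually simple: hash each file's contents and store one copy per unique hash, while keeping an index so every original path still resolves. A minimal sketch of the general technique (EPICSHELTER's real pipeline is not public, and the paths and contents here are invented):

```python
import hashlib

def deduplicate(files):
    """Collapse identical file contents to a single stored instance.

    `files` maps path -> raw bytes. Returns (store, index), where
    `store` holds one copy of each unique content keyed by SHA-256
    digest, and `index` maps every original path to its digest.
    """
    store = {}   # digest -> content, stored exactly once
    index = {}   # path -> digest, so each agent's path still resolves
    for path, content in files.items():
        digest = hashlib.sha256(content).hexdigest()
        store.setdefault(digest, content)
        index[path] = digest
    return store, index

files = {
    "/site-a/alice/report.doc":    b"shared report body",
    "/site-b/bob/report-copy.doc": b"shared report body",  # duplicate content
    "/site-b/bob/notes.txt":       b"unrelated notes",
}
store, index = deduplicate(files)
# Three paths, but only two unique documents are actually stored.
assert len(index) == 3 and len(store) == 2
```

The same index is also what lets the backup system answer "who held copies of this document, and at what classification" after a site is lost.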
Snowden's next role would be as the sole employee of the "Office of Information Sharing", where he describes himself as a "SharePoint Administrator" tasked with maintaining a document management system for the Hawaii site. The site's activity was summarized in an overview page called a "readboard", which is basically a newsfeed of classified status updates—allowing those on the site to see the bigger picture, and spot whether some other employee's work was overlapping with their own.
Snowden was reading not only the readboard of his own site, but those of sites he had previously been stationed at—this is routine for any geek with a content management system. He soon automated this, pulling the readboards of multiple sites into his SharePoint system and displaying them as a "meta-readboard". As this grew, Snowden sought (and obtained) official approval to expand the system, not only pulling in more and more data, but sharing it with other employees at the NSA, each of whom could see only entries at their own clearance level or lower. As this new "Heartbeat" system grew, it expanded to pull data not only from other NSA sites, but from the CIA, the FBI, and even the Joint Worldwide Intelligence Communications System, with the active co-operation of the document administrators of those sites.
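The clearance filtering described here amounts to one rule: a reader sees an entry only if its classification is at or below the reader's own level. A toy sketch of that rule, assuming a simple ordered set of levels (the level names, sites, and dates are illustrative, not the NSA's actual markings):

```python
# Ordered lowest to highest; list position doubles as the numeric rank.
LEVELS = ["UNCLASSIFIED", "CONFIDENTIAL", "SECRET", "TOP SECRET"]
RANK = {name: i for i, name in enumerate(LEVELS)}

def meta_readboard(entries, reader_clearance):
    """Merge entries pulled from many sites, newest first, showing only
    items at or below the reader's clearance level."""
    visible = [e for e in entries if RANK[e["level"]] <= RANK[reader_clearance]]
    return sorted(visible, key=lambda e: e["date"], reverse=True)

entries = [
    {"site": "Hawaii", "date": "2012-04-02", "level": "TOP SECRET"},
    {"site": "Tokyo",  "date": "2012-04-01", "level": "SECRET"},
    {"site": "Geneva", "date": "2012-03-30", "level": "CONFIDENTIAL"},
]
board = meta_readboard(entries, "SECRET")
assert [e["site"] for e in board] == ["Tokyo", "Geneva"]  # TOP SECRET filtered out
```

The crucial point for the narrative is who sits on the other side of this function: whoever writes and operates the aggregator sees the unfiltered `entries` feed, whatever their badge says.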
Here at last is the missing link: Snowden needed no additional credentials (which is just as well; in his book, Snowden details that the credentials are PKI certificates and keys stored in physical tokens, which double as identity badges. These tokens literally cannot be stolen without destroying the badges in the process). But he DID need to get the files tracelessly from "Heartbeat" to the media he took to Hong Kong (which, of course, he also covers in his book).
So looking at this, we see a much more nuanced story, but one with fewer leaps of faith. In creating and expanding this system, Snowden (with full approval from his management) placed himself in an unprecedented position: able to manipulate the code controlling access to vast swathes of intelligence information. Once he determined to abuse that trust, the outcome seems inevitable; in fact, it is a natural consequence of the risk key insiders pose. Even at the NSA, there are going to be individuals who can tracelessly go rogue, and they may not be directors or even managers, but some geek in a remote office you have never heard of... until everyone has heard of them.
If you are interested in history, here's how Venafi estimated the events may have played out, five years ago.
To date, little real information exists publicly to explain how Edward Snowden stole secrets from one of the world's most advanced and sophisticated intelligence organizations. Reports on "How Snowden Did It" detail little more than the obvious: he breached the National Security Agency (NSA). As experts in securing and protecting the trust established by keys and certificates, Venafi understands how Snowden accomplished this breach. To develop this understanding, Venafi security analysts methodically analyzed public information for over three months, connected the pieces of the puzzle based on our knowledge of attacks and vulnerabilities in the Global 2000, and requested peer review and feedback from industry experts.
Ironically, the blueprint and methods for Snowden's attack were well known to the NSA. The NSA had to look no further than the US’s own Stuxnet attack to understand their vulnerability. Clearly, Edward Snowden understood this. Here we describe how Venafi solved this puzzle and explain why Snowden’s actions affect not only the NSA but your organization.
If we’re wrong, we invite the NSA and Edward Snowden to correct us. NSA Director General Keith Alexander wants to promote information sharing, and now is the perfect opportunity. General Alexander stated “At the end of the day it's about people and trust” and we agree. The attacks on trust that breached the NSA are vulnerabilities in every business and government. Sharing how the breach was researched and executed is important to help every business protect its valuable intellectual property, customer data, and reputation. We believe both the NSA and Edward Snowden would agree that helping businesses and governments improve their security is a worthy cause.
Here is what we know about Snowden’s work environment and the tools he had at his disposal:
Once we understood Snowden’s tools and network environment, we reviewed the information that had been reported about Snowden and identified the critical elements that would help us piece together the full story of how Snowden attacked the trust established by cryptographic keys and digital certificates to breach the NSA:
General Alexander summed up Snowden's ability to attack well: "Snowden betrayed the trust and confidence we had in him. This was an individual with top secret clearance whose duty it was to administer these networks. He betrayed that confidence and stole some of our secrets." Unfortunately, it seems that, like so many organizations Venafi has worked with, the NSA had no awareness of, and no ability to respond to, these attacks on keys and certificates.
Using military Kill Chain analysis, which Lockheed Martin and others have made popular in IT security, we can reveal how Snowden executed his theft of data from the NSA:
You might think that only advanced cyber teams in the NSA have the knowledge and skill to fabricate self-signed certificates or use unauthorized SSH keys to exfiltrate data. However, all of these attacks have been reported publicly in the wild. Cyber-criminals have used them to launch successful attacks and will continue to use them. In fact, Snowden was in many ways just following the methods and means the NSA had already used successfully.
In one of the first and most powerful demonstrations of what attacking the trust established by keys and certificates can accomplish, the NSA is reported to have helped carry out the Stuxnet attacks on Iranian nuclear facilities. Using digital certificates stolen from unknowing Taiwanese companies, the architects of Stuxnet, identified by Snowden to include the NSA, were able to launch the Stuxnet attacks with trusted status. These and other attacks provided Snowden a blueprint for attack: compromise the trust established by keys and digital certificates.
More directly relevant to the NSA breach, attackers once used SSH-key-stealing Trojans to gain unauthorized access to SSH keys, giving them unfettered access to the FreeBSD source code for more than a month. As a system administrator, Snowden didn't need Trojans to steal SSH keys or create his own.
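The defensive lesson from the FreeBSD incident is that private keys sitting unencrypted on disk can be read by any process running as their owner. A rough triage sketch for traditional PEM-encoded keys, assuming the common convention that encrypted PEM keys carry a `Proc-Type: 4,ENCRYPTED` header (modern OpenSSH-format keys would need real parsing and are out of scope here):

```python
def classify_pem_key(pem_text):
    """Very rough triage of a PEM key file's passphrase protection.

    Returns 'encrypted', 'unencrypted', or 'not a private key'.
    Heuristic only: traditional PEM marks encrypted keys with a
    'Proc-Type: 4,ENCRYPTED' header, and PKCS#8 uses an
    'ENCRYPTED PRIVATE KEY' block label.
    """
    if "PRIVATE KEY" not in pem_text:
        return "not a private key"
    if "Proc-Type: 4,ENCRYPTED" in pem_text or "ENCRYPTED PRIVATE KEY" in pem_text:
        return "encrypted"
    return "unencrypted"

risky = ("-----BEGIN RSA PRIVATE KEY-----\n"
         "...base64 key material...\n"
         "-----END RSA PRIVATE KEY-----\n")
safer = ("-----BEGIN RSA PRIVATE KEY-----\n"
         "Proc-Type: 4,ENCRYPTED\n"
         "DEK-Info: AES-128-CBC,0123456789ABCDEF\n"
         "...base64 key material...\n"
         "-----END RSA PRIVATE KEY-----\n")
assert classify_pem_key(risky) == "unencrypted"
assert classify_pem_key(safer) == "encrypted"
```

Sweeping home directories with a check like this is a cheap first pass; a passphrase would not have stopped an administrator minting fresh keys, but it does blunt the Trojan-theft route described above.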
Mandiant reported that the APT1 attackers generated self-signed certificates to enable their command and control servers to receive cloaked, encrypted stolen data. These certificates went completely undetected as being rogue—purporting to be from IBM or Google or for use with “webserver” or “alpha server.” Freely available tools such as OpenSSL would allow Snowden to create self-signed certificates on demand.
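A certificate is self-signed when its issuer equals its subject, so spotting APT1-style rogue certificates starts with exactly that comparison plus an allowlist of issuers your organization actually trusts. An illustrative sketch over simplified certificate metadata (the issuer names are hypothetical; a real deployment would parse X.509 and verify the signature chain rather than string-match names):

```python
# Hypothetical allowlist of issuers this organization trusts.
TRUSTED_ISSUERS = {"ExampleTrust CA", "Corp Internal CA"}

def triage_cert(cert):
    """Flag certificates whose trust story doesn't hold up.

    `cert` is a dict with 'subject' and 'issuer' names.
    """
    if cert["issuer"] == cert["subject"]:
        return "self-signed"        # the certificate vouches for itself
    if cert["issuer"] not in TRUSTED_ISSUERS:
        return "untrusted issuer"   # claims a big name, but no trusted CA backs it
    return "ok"

certs = [
    {"subject": "webserver",        "issuer": "webserver"},        # APT1-style C2 cert
    {"subject": "mail.example.com", "issuer": "IBM"},              # name-dropping a brand
    {"subject": "www.example.com",  "issuer": "ExampleTrust CA"},
]
assert [triage_cert(c) for c in certs] == ["self-signed", "untrusted issuer", "ok"]
```

The point of the Mandiant finding is that nobody was running even this trivial check on outbound TLS: a certificate merely *claiming* to be from IBM or Google sailed through because no inventory of expected issuers existed.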
The gap that’s allowed cyber-criminals to breach these and other organizations is why Forrester Consulting described the situation in simple, blunt terms: “Basically, the enterprise is a sitting duck.”
All these facts clearly applied to the NSA before Snowden breached the agency’s security and stole data. The NSA had no awareness of the keys and certificates in use, no ability to detect anomalies, and no ability to respond to an attack. Because of these deficiencies, General Alexander believes strongly that the NSA must use automated machine intelligence to improve its ability to detect and respond to threats:
“What we’re in the process of doing—not fast enough—is reducing our system administrators by about 90 percent. What we’ve done is put people in the loop of transferring data, securing networks and doing things that machines are probably better at doing.”
The NSA is already setting out on the path Gartner expects most organizations to reach by 2020 or sooner: reallocating spending to focus on detection and remediation of security issues using fast, automated security systems. This trend will create a tectonic shift in IT security, putting almost two-thirds of IT security’s budget into detection and remediation, up from less than 10 percent today.
But the game won’t be over if these detection and remediation efforts don’t include securing and protecting the keys and certificates that provide the foundation of trust in the modern world. Therefore, the NSA would be well advised to take Forrester Consulting’s advice:
“Advanced threat detection provides an important layer of protection but is not a substitute for securing keys and certificates that can provide an attacker trusted status that evades detection.” Of course, Edward Snowden knew this. Unfortunately, the NSA did not.