Security, whether physical or technical, is a fundamental human need and driver. Anthropologically speaking, it was a cave, a club, a spear, or a bow that gave assurance; then firearms; now a firewall. As our lives become ever more digital and services gain in ease of access and reach, so too grow the sophistication and creativity of people trying to breach defences for their own gain, and with them the need for effective measures to counter threats to our most precious assets.
The mechanisms typically identified as relevant for end-user examination are physical security, authentication, and encryption. We’re going to look at how user behaviours influence the implementation and limitations of this amalgam, which we’ll refer to as ‘user security’.
PINs have largely replaced signatures because they’re simple and less problematic. If they were even marginally more difficult - say five, six, or seven digits - the friction on uptake would likely have been insurmountable. There’s always a trade-off between security and ease - nothing new there. So, during this period of precipitous task-uptake by technologies like IoT, blockchain, self-driving cars, and our app-centric lives in general, how do we ensure maximal end-user security while keeping a handle on potential friction?
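To put that trade-off in numbers, here’s a quick back-of-the-envelope sketch (plain arithmetic, not tied to any particular banking system): each extra digit multiplies an attacker’s brute-force search space tenfold, but also adds a keystroke - and a memory burden - for every user, every time.

```python
def pin_space(n_digits: int) -> int:
    """Number of possible PINs of the given length (digits 0-9)."""
    return 10 ** n_digits

# A 4-digit PIN has 10,000 combinations; each added digit multiplies
# the attacker's work by 10, at the cost of a little more user friction.
for n in (4, 5, 6):
    print(f"{n}-digit PIN: {pin_space(n):,} combinations")
```

Lockout policies (e.g. three attempts before the card is swallowed) are what make even the modest 10,000-combination space workable in practice.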
The simple technical response would be something like Two-Factor Authentication (2FA), which keeps resistance low while substantially improving security posture. Paul Grassi, senior standards and technology adviser at NIST (the National Institute of Standards and Technology), the major body for this type of guidance across the US Federal Government, acknowledged of Multi-Factor Authentication (MFA) that “…any MFA is better than no MFA.”
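As a concrete illustration of one common second factor, here is a minimal sketch of TOTP - the time-based one-time passwords behind most authenticator apps, per RFC 6238 - using only the Python standard library. This is a simplified teaching sketch, not a production implementation (real deployments also handle clock drift and secret provisioning):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp=None, step=30, digits=6) -> str:
    """Time-based one-time password (RFC 6238, HMAC-SHA1)."""
    if timestamp is None:
        timestamp = time.time()
    counter = int(timestamp // step)           # 30-second time window
    msg = struct.pack(">Q", counter)           # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation (RFC 4226)
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59
print(totp(b"12345678901234567890", timestamp=59))  # → 287082
```

The point for the friction discussion: the user only ever sees “type the six digits from your phone”, while the cryptographic work stays invisible.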
A more considered answer would acknowledge that the vast majority of the population - given the choice - would opt out of 2FA in favour of a quick password. So why don’t organisations force 2FA or similar measures on their end-users? Because competitors won’t, and people will migrate to them if they feel any ‘difficulty’ with their incumbent provider. Multi-factor authentication also places a significant burden on IT departments: people forget passwords constantly, lose their phones, accidentally delete the app, or… you get the idea.
Biometrics - perhaps the most literal expression of ‘humanising security’ - seem to be the prevailing direction as a basis for MFA in conjunction with other measures. Cross-referenced biometric authentication mechanisms, behavioural analysis, geo-fencing, challenge-response, and myriad other options are all about minimising the likelihood of false verification. Even if we get the fundamentals right and largely expunge technical attacks, however, there are still social-engineering attacks - the human vector.
If we start to address the machine identity part of the equation - you’re on this blog, after all - and to evolve IAM, how do we ensure adherence? With the continuing move towards transparency and low friction through identity automation and emerging IAM, many of the security fundamentals are in hand. The next (somewhat precipitous) step is making sure that users understand the implications of skirting around such measures in an applied, pragmatic context.
Humanising the Messaging to Reduce Human Factors
So how do we improve the user factor? Most corporate security policies take the carrot-and-stick approach instead of teaching about outcomes and consequences. As with any teaching, demonstrating the value and consequences of activities is far more instructive, and far more likely to see adherence, than a long list of do’s and don’ts. Aesop’s fables (the tortoise and the hare, the lion and the mouse, the boy who cried wolf, and numerous others) are still the archetypal tools, after more than 2,500 years, for teaching fundamental moral behaviour to children. If that isn’t testimony to the effectiveness of storytelling relative to mere threats or enticements…
Recall any post or industry article on a breach that you’ve read in the past month. Chances are it criticised users for poor passwords, or some other fundamental error that we, as security advocates, scoff at. And therein lies a huge part of the problem with the security industry: talking down to our audience. Users are afraid of looking stupid, or of being admonished if they do mess up, which makes them less likely to notify security officers or to ask questions about, say, certificate installs on their own endpoints. So how do we change this systemic positioning?
Let’s use Barry as an example. Barry doesn’t know IT or understand security. He does, however, hold a PhD in Quantitative Economics, work at a finance company, and sit on several boards. He’s not an idiot - just ignorant of what we consider ‘the basics’, because he’s never had to really think about them.
There is always going to be an element of human ignorance. Just as a clever sabretooth could always take out the oblivious human collecting firewood, so Barry, left to his own devices, will always make his login passwords some permutation of his football team, and won’t bother with digital keys or certificates to secure his mobile phone’s or laptop’s access to the corporate network. He doesn’t understand the need for changing his password, either. ‘Why would anyone want to see my TPS Report? It’s just about transaction timetables and really quite boring.’ Well, Barry, those timetables matter to anyone trying to intercept validation information on its way to the banks. The SWIFT compromises of 2016/17 were a prime example: institutions still leaning on archaic methods in place of current machine identity options, leaving gaps a crowbar hack fits into nicely.
Tell Barry a story that really engages: a story in which hackers bring down the entire global financial system through a snowball effect, all starting from the access they had to some small, boring, unimportant files… like his TPS reports. (Then, presumably, the hero foils the plot using an imagined 3D GUI.) Granted, this level of abstraction is a tad much, but it illustrates the point: if we can show the Barrys of the world that security matters through storytelling - even with Hollywood levels of exaggeration and inaccuracy - rather than simply ‘telling’ them, they will start to take it more seriously and be genuinely more receptive to modifying their behaviours and practices. Let them know you’re there to help and show, not lecture. Inject machine identity intelligence and automation to provide near-transparent layers of security, and Barry et al. won’t resist adoption. Make it easy to introduce new practices like effective IAM and improved machine identity management, and we have a recipe for much better hygiene.
With the growing profile of blockchain, we’re seeing its enormous potential for security posture: “…the unification of all the transactional activities that constitute a supply chain into a single dataspace so that the transactional fog in which adversaries presently hide can be minimized” (Michael Hsieh and Samantha Ravich, 2017). We’ve yet to see any enduring implementations - it’s all still quite new - but there are absolutely some promising takes on it as a tool in the validation arsenal for both humans and machines. Most importantly, the blockchain zeitgeist gives us greater licence to start a conversation and educate the user population on better practices, and to a much more attentive audience.
Consider one of the most divisive topics of recent years: the Edward Snowden revelations. The care factor of most Americans was initially quite low. Their personal details being recorded, and the government having access to their most private data, caused barely a ripple in mass opinion, particularly in Middle America. Then John Oliver took that messaging somewhere far more… personal… distorting the lens on the topic just a touch, and the practical implications totally changed the tune of that very same demographic. Given this was a TV show, we should treat it as anecdote rather than evidence, but the point was well made. MAKE THE MESSAGE HUMAN!
We can patch and update and create new policies day in, day out, but if the Barrys of the world aren’t educated about the pragmatic implications and thereby moved to action, they’ll always be the weakest link, and your shiny new certificates - though hugely important - are for naught. If we can raise the standards of user practice and expectation through engaged communication, we can increase willingness to take up security measures and normalise adherence. With near-transparent technical improvements like machine identity, blockchain, and biometric verification hybrids, we can improve universal security posture. Every organisation needs to start by putting down the stick, finding ways to keep a lid on friction, and telling stories.