DevSecOps is the principle that all technology teams have accountability for cybersecurity in an organization—ownership does not sit solely at the door of the security professionals and teams. The idea that cybersecurity is everyone’s job has come about partly because cybersecurity skills are constrained—both in the market as a whole and within individual organizations. A recent report from (ISC)2 claims there is a global cybersecurity staffing shortage of three million people, and that the gap is widening. This is certainly my own experience with the organizations I work with in Europe and the Middle East.
This constraint manifests itself through:
The constraint is exacerbated by the traditional organizational design that has security as a separate team—sometimes two separate teams (one physical security and one InfoSec)—often with reporting lines into the CEO rather than the CIO. Whilst I recognize that there are many physical security factors to consider (who’s allowed into a building, what checks people have to pass to gain access to a building, and so on), cyber threats are varied, growing and, nobody would deny, absolutely real.
One of the first questions I ask an organization when I start working with them is about their organizational design—having separate security teams creates tension (“security says no”) and introduces handoffs and delays into the process. See my earlier blog post, ‘How Can You Be More Successful in a Compliance Conversation with DevSecOps’, for more on this.
DevSecOps seeks to address these challenges, and I find a useful way to break down how it does this is to use the well-established DevOps ‘elevator acronym’ CALMS. In the first of this five-part blog series, I’ll address the ‘C’ in CALMS:
A DevOps culture is primarily defined, in my opinion, by high trust. When we have high levels of trust, we have low levels of friction; work can flow at high speed, so it costs less to deliver value. It’s hard to talk about culture because it feels like such a nebulous thing—and harder still for those of us in technology, since we are more practised in talking about bits and bytes than emotions and feelings.
I find it helps to think about behavior instead, as the observable expression of culture. Not just because behavior gives us narrative (and storytelling is a very human way of learning), but also because we can focus on how to change behavior, and that gives us a practical approach when we want to make changes.
In order to nurture trust we need to create a place of psychological safety—a place where people are able to be truthful and experimental without fear of consequences, particularly when they experience failure. Many of the leaders I work with, particularly in larger corporations, struggle with this tolerance of failure. When their job is to run national or international critical infrastructure, such as a banking network, it’s entirely understandable that these leaders are coming from a place where they see failure only as catastrophic failure. But there are many shades of failure—from a defect in a development branch picked up during continuous integration to a system that is down for hours for all global users.
What is also entirely understandable is what happens in our brain when we hear the word ‘failure’. Our society teaches us that failure is bad, and we are punished for it emotionally by feelings of shame and embarrassment, and potentially physically too—by being shunned or worse. This is why we often inherently fear failure.
Fear causes subcortical activity in the amygdala, which in turn activates the working memory network in the frontal lobe (where conscious attention happens); the resulting anxiety is a distraction that makes it harder to learn.
When we are in a situation, as we are with security, where knowledge, skills and expertise are a constraint and part of the solution is to improve learning, we do not want a blame culture—where people are fearful—as this will inhibit learning.
Additionally, most enterprise organizations I work with have fragile systems (which means they spend a great deal of their time firefighting) and do not have control over the flow of work (which means they have too much work in progress and not enough time to make improvements or perform effective, actioned retrospectives). All this pressure, combined with a siloed organizational design, often results in people defining ‘done’ as “I did my job.”
When we talk about the behaviors we want to change in order to evolve to a ‘DevSecOps culture’, what we’re talking about is people who:
So: we know what we’ve got, and we know what we’re aiming for. Here are my top tips on practical things you can do to influence the change you want to see:
You’ll note in the table above that I’ve mentioned automation as a behavioral change enabler several times. In the next post in this series I’ll be tackling the ‘A’ in CALMS in the context of DevSecOps: Automation.