Every evolution in technology starts with the people behind it. So understanding the context, roles, and objectives that gave rise to DevOps — and now DevSecOps — isn’t just an exploration of timing and technology; it’s central to how we define organizational success.
In this post, I’ll recap a conversation I had with Gareth Rushgrove, Director of Product Management at Snyk, a cybersecurity platform that helps developers identify vulnerabilities in their applications. Gareth and I started our careers in tech before the migration to cloud, and we had a front-row seat to the age of digital transformation. Our discussion touched on the early days of DevOps and the evolving application lifecycle, the emergence of self-service for developers, and the shifting role of security.
The evolution: every company becomes a technology business
Let’s go back to the year 2008, in the early days of the DevOps movement. At the time, virtualization and cloud computing were primitive compared to where they are today, and we witnessed an exciting shift: businesses of all sizes started to recognize that they needed to treat technology as a differentiator that could provide a competitive edge.
That was the beginning of us, as operators and developers, using virtual machines from service providers; then along came Amazon with the best APIs we’d seen, offering virtual machines on demand. The biggest thing was the APIs themselves and the fact that we could make an API request to provision virtual machine resources. We could consume Infrastructure as a Service (IaaS) through S3, EC2, and RDS, requesting whatever services we wanted over an API. The interactive, highly dynamic culture of Web 2.0 and API accessibility took off.
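For a sense of how simple that request is, here’s a quick sketch using boto3, the AWS SDK for Python (the AMI ID is a placeholder; back then we were making the equivalent raw HTTP calls):

```python
# A minimal sketch of API-driven provisioning: one request, one VM.
# The AMI ID is a placeholder -- substitute a value from your own account.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A single API request provisions a virtual machine -- no ticket, no waiting.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```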
These days, some organizations have progressed from virtualized deployments and VMs to containers and Kubernetes (while far more of them run virtualized workloads on public clouds), and the roles of dev and ops have started to blur and overlap.
Shifting POVs: new meanings for development and operations
Traditionally, the idea of iterating quickly fell to developers. And yet, when Gareth and I talked about being agile as a company, we couldn’t help but veer our discussion toward the infrastructure side. Success depends on how teams operationalize their deployments and manage all of the infrastructure-related elements that impact production environments.
As the application lifecycle evolves, how does the meaning of infrastructure change? The simplest answer, as Gareth said, is that “infrastructure is whatever supports the application.” In more traditional roles, the infrastructure team might manage physical network switches, load balancers, and a bunch of cable. In other roles, they might manage a database as a service, an application platform, or a database client. Moving forward, roles continue to shift as Kubernetes, containerization, and serverless become more ubiquitous.
We used to say developers are the people who write code. But now the people who manage infrastructure are managing it with code, and security folks are delivering security as code. Whether they reach for programming languages or scripting languages, the choice is driven by business needs, all in service of getting the job done efficiently.
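To make “managing infrastructure with code” concrete, here’s a minimal sketch using Pulumi’s Python SDK, one of several infrastructure-as-code tools (the AMI ID is a placeholder):

```python
# Infrastructure as code: the desired state of an S3 bucket and an EC2
# instance expressed as ordinary Python, versioned like any other code.
import pulumi
import pulumi_aws as aws

# Declare a bucket; Pulumi reconciles real infrastructure to match.
logs = aws.s3.Bucket("app-logs")

web = aws.ec2.Instance(
    "web",
    instance_type="t3.micro",
    ami="ami-0123456789abcdef0",  # placeholder AMI ID
)

pulumi.export("bucket_name", logs.id)
```

Running `pulumi up` brings real cloud resources in line with the program, and the program itself can be reviewed, tested, and version-controlled like any other code.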
The rise of self-service for developers
Increasingly, we’re seeing self-empowered development teams that are involved in (or even responsible for) deploying, monitoring, and securing their applications.
As a developer in 2020, you can deploy an application on Kubernetes using containers. You can monitor your app through instrumentation; you’ve got continuous integration running with unit tests, integration tests, acceptance tests, etc. You might have some monitoring checks in place for service availability, and you might use a few SaaS providers to ping the app from different locations. You want your app to be secure, so maybe you’ll use a platform like Snyk to make sure the code is secure before it’s deployed to production.
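As a rough illustration of that self-service deployment step, here’s a sketch using the official Kubernetes Python client (the image name and namespace are placeholders):

```python
# A sketch of self-service deployment with the official Kubernetes
# Python client. The image name and namespace are placeholders.
from kubernetes import client, config

config.load_kube_config()  # reads credentials from ~/.kube/config

container = client.V1Container(
    name="demo-app",
    image="registry.example.com/demo-app:1.0.0",  # placeholder image
    ports=[client.V1ContainerPort(container_port=8080)],
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="demo-app"),
    spec=client.V1DeploymentSpec(
        replicas=2,  # two pods behind the same label selector
        selector=client.V1LabelSelector(match_labels={"app": "demo-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo-app"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

# One API call creates the deployment -- no ticket queue, no handoff.
client.AppsV1Api().create_namespaced_deployment(
    namespace="default", body=deployment
)
```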
Each element of your app can be self-service, which is massively empowering and fosters a deep sense of responsibility. When developers have a high level of care, commitment, and responsibility to their applications, they’ll work across the aisle to understand how features are being used in production, to deliver more secure applications, and to support good monitoring practices.
Integrating security in DevOps
This all sounds like good news, and it is — but I’m left to wonder, why does it seem like security has taken longer to join the conversation? Maybe it’s the natural order of things because, as Gareth noted, the initial focus of the DevOps movement was two-fold: 1) speeding up the deployment process and 2) improving collaboration between dev and ops.
Solving for those pain points was (is!) a tall order, requiring pretty dramatic cultural shifts within traditional organizations. As Gareth said, “We used to have this view that developers weren’t great at security and that they needed to be protected from themselves.” With DevOps comes greater transparency, collaboration, and trust. Positive outcomes in one area of the business make it easier to invite changes in others; now security is very much involved in the DevOps conversation.
As DevSecOps becomes mainstream, the role of security will continue to evolve. Indeed, its role has to evolve as the threat landscape becomes larger and more sophisticated. The old way of doing things — such as tacking QA and security on at the end, detecting problems late in the process, slowing everything down (and, no surprise, frustrating everyone) — just doesn’t cut it.
By embedding security earlier in development (and eliminating manual, triage-related tasks through automation), security specialists — in partnership with developers and operators — will be able to focus on supporting the organization as it grows.
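For example, a CI pipeline might gate builds on an automated dependency scan. Here’s a hedged sketch that wraps the Snyk CLI (it assumes the CLI is installed and authenticated in the build environment):

```python
# A sketch of an automated security gate in CI: run a dependency scan
# and stop the pipeline on high-severity findings. Assumes the Snyk CLI
# is installed and authenticated in the build environment.
import subprocess
import sys

result = subprocess.run(
    ["snyk", "test", "--severity-threshold=high"],
    capture_output=True,
    text=True,
)
print(result.stdout)

if result.returncode != 0:
    # snyk test exits nonzero when vulnerabilities are found (or the
    # scan itself fails), so the build never ships unreviewed.
    sys.exit("Security gate failed: high-severity vulnerabilities found.")
```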
What’s next?
With the mainstream adoption of DevSecOps, with self-service (e.g., infrastructure as code) and with operators and security folks supporting developers in the development and deployment of applications, we’re lined up for the next evolution. The question is, what does that look like?
From my perspective, it could go in one of two directions. We could continue to move more of our development into the cloud and let cloud providers do everything for us: serverless. You develop your application, and they offer all the APIs you need for data storage, instrumentation, service monitoring, and visualization. In this possible reality, you don’t have to worry about machines at all. Your cloud provider owns everything, which also means you’re locked in.
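For a flavor of that direction, here’s a minimal AWS Lambda handler in Python; the function itself is the entire deployment unit:

```python
# A minimal AWS Lambda handler: in the serverless direction, this
# function is the whole deployment. No machines to provision or patch.
import json

def handler(event, context):
    # The provider invokes this on demand (e.g., behind an API gateway)
    # and takes care of the runtime, scaling, and availability.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```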
On the other side of the coin, I could see a resurgence of on-premises infrastructure and managing your own resources. From this perspective, it’s interesting to see companies like Oxide Computer Company, which promises to make it easier to own and run your own bare metal infrastructure. In that future, we’ll have access to highly performant, easy-to-manage bare metal infrastructure. Then you have companies like Packet (from their site: “Built for enterprises, loved by developers”), who are taking a fully automated approach to bare metal. Their customers are moving away from traditional cloud providers to run their own bare metal (read: multiple datacenters) at the edge. Packet took what cloud providers were doing with fully managed, boxed solutions and applied it to a cloud of bare metal you can rent, giving you allocated resources in a datacenter. Essentially, it’s a non-virtualized cloud.
With advancements like edge cloud, companies like Packet are offering a bare metal presence in multiple geographically separated places, like a CDN for your compute. Services like Packet offer what’s effectively a single-tenant cloud — with an API and instant access to resources. Hello security, and hello consistent performance.
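To show how cloud-like that experience is, here’s a sketch of requesting a bare metal server through Packet’s REST API (the endpoint, plan, and facility values reflect their public docs at the time; treat the specifics as assumptions):

```python
# A hedged sketch of provisioning a bare metal server via Packet's REST
# API. The endpoint, plan, and facility values reflect Packet's public
# docs at the time; treat the specifics as assumptions.
import os
import requests

resp = requests.post(
    "https://api.packet.net/projects/YOUR_PROJECT_ID/devices",
    headers={"X-Auth-Token": os.environ["PACKET_API_TOKEN"]},
    json={
        "hostname": "edge-node-01",
        "plan": "c1.small.x86",         # a bare metal server class
        "facility": "ewr1",             # a datacenter near your users
        "operating_system": "ubuntu_18_04",
    },
)
resp.raise_for_status()
print(resp.json()["id"])  # the ID of the machine being provisioned
```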
The question remains, which direction will we go? Someone still has to drive these changes. Either way, I’m excited to see what’s next!
Stay up to date on all things monitoring, DevOps, and security — subscribe to our newsletter below!