As with any enterprise technology, there are benefits and challenges to creating IT environments in the public cloud. The benefits include cost savings and the ability to scale up and down easily, to name just two. The challenges cut both ways: the global reach of the cloud is what enables that scale, but it also exposes you to regional privacy regulations, such as GDPR, and to related visibility challenges (you wouldn’t want to accidentally collect private information from the UK and then display it on a dashboard in Texas). Your monitoring tool should surface business-critical information in a way that doesn’t break any laws. More on that later.
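To make that cross-border scenario concrete, here’s a minimal Python sketch of region-aware filtering in a telemetry pipeline. All names here are illustrative assumptions, not any particular product’s API:

```python
# Hypothetical sketch of region-aware filtering in a telemetry pipeline.
# All names are illustrative, not any particular product's API.
from dataclasses import dataclass, field

EU_REGIONS = {"eu-west-1", "eu-west-2", "eu-central-1"}  # assumption: AWS-style region names

@dataclass
class Event:
    region: str                 # region where the event was collected
    payload: dict = field(default_factory=dict)
    contains_pii: bool = False  # set by an upstream classifier (assumed to exist)

def visible_events(events, viewer_region):
    """Yield only the events that are safe to display to a viewer in viewer_region."""
    for event in events:
        leaves_eu = event.region in EU_REGIONS and viewer_region not in EU_REGIONS
        if event.contains_pii and leaves_eu:
            continue  # drop (or redact) personal data rather than ship it abroad
        yield event

events = [
    Event(region="eu-west-2", payload={"user": "alice"}, contains_pii=True),
    Event(region="us-east-1", payload={"latency_ms": 42}),
]
# A dashboard in Texas only sees the events cleared for export.
print(list(visible_events(events, viewer_region="us-east-1")))
```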
The last decade of evolution in cloud technology culminated in containers, which in many ways exacerbated the challenges noted above (for starters, there are more moving pieces to monitor). And while container adoption has begun to plateau, the same can’t be said for technologies at the orchestration layer (namely Kubernetes, for which native adoption is up 43%). The orchestration layer is where you should focus your investment: once you’ve moved up a level from packaging, it doesn’t matter how a workload is packaged as long as you can interact with the orchestration layer in a consistent manner.

Another reason to focus on the orchestration layer is that containers present their own security challenges, and while a number of projects try to make containers more secure, I predict we’ll see a shift to lightweight virtual machines (VMs), which combine the security isolation of VMs with the efficiency and portability of containers. They’re light and portable enough to provide what Docker does today, but with the added benefit of proper security isolation. Although Amazon’s Firecracker was a bit slow to gain traction, we’ll see increased adoption in 2020; as Firecracker and other lightweight VMs become more established, you’ll be able to transition to a more secure infrastructure without having to rewrite how you deploy code.
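To make the consistent-interface point concrete: with the official Kubernetes Python client, the same API call enumerates workloads whether the runtime underneath is Docker, containerd, or, as runtimes like Kata Containers mature, a Firecracker-backed lightweight VM. A minimal sketch, assuming you have a kubeconfig on hand:

```python
# Sketch: the orchestration layer abstracts the packaging beneath it.
# Requires the official client: pip install kubernetes
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside a pod
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    # A monitoring tool only needs this API; it never has to care
    # which container runtime is executing the workload.
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```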
That said, you should treat any public cloud infrastructure, whether it runs on hypervisors or in containers, as a hostile public network. Trust no one: carefully evaluate every tool and technology you partner with.
The tools that touch events in your infrastructure, whether for logging, monitoring, or security auditing, have to be robust, secure, and able to operate in these extremely hostile environments. They have to offer access control to limit what information is available to whom, and to limit the actions people can take with them. You don’t want your visibility tool to become an attack vector. By treating every public cloud infrastructure as hostile (assuming there’s always going to be a bad actor) we can design and build solutions that operate safely.
This isn’t to say you should completely separate performance and compliance data. Instead, bake security into your visibility tool, with a robust role-based access control (RBAC) model in place, so you get deep visibility while staying compliant. Too often we silo security data away from performance data, which overlooks both the criticality of security information and our responsibility to maintain visibility into the entire infrastructure as one holistic picture. Surface security and compliance data alongside everything else you care about, such as application and performance data, for a well-rounded approach to monitoring and observability.
Implementing a strong RBAC policy and enforcement model from the outset bakes security into your visibility strategy. From there, you can achieve scale by working with a solution that’s able to operate in the hostile public networks described above. By layering an access control model on top, you make it safer to take that one visibility tool and apply it more broadly: not only in terms of infrastructure coverage but also in terms of how far it can travel through the organization, allowing many teams (like security and compliance) to consume it safely.
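As a rough illustration (deliberately simplified, and not any specific product’s model), a deny-by-default RBAC layer can be as small as a mapping from roles to the data categories and actions they’re explicitly granted:

```python
# Illustrative deny-by-default RBAC check for a visibility tool.
# Roles, categories, and actions are assumptions for the sketch.
ROLE_PERMISSIONS = {
    "sre":        {("performance", "read"), ("performance", "query")},
    "security":   {("security", "read"), ("compliance", "read")},
    "compliance": {("compliance", "read")},
    "admin":      {("performance", "read"), ("security", "read"),
                   ("compliance", "read"), ("security", "configure")},
}

def authorize(role: str, category: str, action: str) -> bool:
    """Deny by default: only explicitly granted (category, action) pairs pass."""
    return (category, action) in ROLE_PERMISSIONS.get(role, set())

assert authorize("security", "compliance", "read")
assert not authorize("sre", "security", "read")  # SREs can't read security data
```

The important design choice is the default: anything not explicitly granted is refused, which is what keeps one broadly deployed visibility tool from becoming an attack vector.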
Another requirement for operating safely in the public cloud is a robust, developer-friendly secrets management solution, like HashiCorp Vault, that integrates naturally and effectively with the rest of your toolset, including monitoring. By its very nature, monitoring public cloud infrastructure means your monitoring tool needs access to some level of credentials. One of the best ways to help organizations maintain a secure infrastructure on a public cloud while maintaining visibility is to integrate with a first-class secrets management solution.
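For example, with hvac, the Python client for HashiCorp Vault, a monitoring agent can fetch the cloud credentials it needs at startup instead of baking them into configuration. The secret path here is an assumption for illustration:

```python
# Sketch: a monitoring agent pulling its cloud credentials from Vault.
# Requires the Vault client: pip install hvac
import os
import hvac

client = hvac.Client(
    url=os.environ["VAULT_ADDR"],     # e.g. https://vault.example.com:8200
    token=os.environ["VAULT_TOKEN"],  # a static token, for the sake of the sketch
)

# Read from the KV v2 secrets engine (mounted at "secret" by default);
# the path "monitoring/cloud-readonly" is a hypothetical example.
secret = client.secrets.kv.v2.read_secret_version(path="monitoring/cloud-readonly")
credentials = secret["data"]["data"]  # KV v2 nests the payload under data.data
```

In production you’d prefer a machine auth method (AppRole, cloud IAM) and short-lived credentials over a static token, but the integration point is the same.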
If monitoring is all about identifying issues, then security and compliance data fall under the same umbrella. To gain a complete picture of what’s going on in your infrastructure, it’s critical to eliminate any siloing of performance and security data, creating a holistic view of your data coupled with a robust RBAC model so you’re not breaking any laws. It’s much easier to manage your cloud infrastructure when all your data is in one place and you can access it safely. This holistic, compliant view leads to better management, less risk, and more secure operations: an all-around net positive for your business.