The New Frontiers of Cybersecurity is a three-part thought-leadership series investigating the big-picture problems within the cybersecurity industry. In the first post, we explored how malicious actors have enhanced their ability to execute and profit from attacks. In the second post, we discussed how the massive increase in endpoints and systems online has dramatically expanded the attack surface. In this third and final installment, we examine a different but equally critical dimension: alongside this growth in attack surface comes a significant increase in complexity that is plaguing security teams.
The combinatorial mathematics of this increase in endpoints means not only a greater number of systems to manage but also far more complex network architectures and webs of connections underlying IT infrastructure. The rise of cloud computing adds a further layer of complexity for teams trying to keep applications and data secure. For example, an organization like Twitter, composed of thousands of microservices, has a vastly more complex endpoint infrastructure than an enterprise guarding a handful of servers or even a few cloud instances.
Complexity does not grow linearly with each new node: among n nodes there are n(n-1)/2 potential connections, so every added node multiplies the web of relationships to secure. Then there is the element of time. It is hard enough to guard and proactively protect an IT infrastructure that is growing quickly but steadily. It is another issue entirely to protect an infrastructure with a growing number of endpoints or systems attached to IP addresses that exist only for short periods and then morph into something else.
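A few lines of arithmetic make the scaling problem concrete. This illustrative sketch simply counts the potential pairwise links among n endpoints; the endpoint counts chosen are arbitrary examples:

```python
def pairwise_connections(n: int) -> int:
    """Number of distinct links possible among n endpoints: n choose 2."""
    return n * (n - 1) // 2

# The connection count grows quadratically, far outpacing the endpoint count.
for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} endpoints -> {pairwise_connections(n):>12,} possible connections")
```

Ten endpoints yield 45 possible connections; ten thousand endpoints yield nearly fifty million, and every one is a relationship a security team may need to reason about.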
This combinatorial complexity is the new reality of Kubernetes and containers, serverless computing, and IPv6 (the newer IP addressing scheme that enables billions more endpoints and systems to have their own unique IP addresses). In Kubernetes environments, new endpoints with their own IP addresses may spin up and shut down every hour or even minute by minute. And unlike the billions of connected devices, which are constrained in compute and other resources, containers and serverless functions are general purpose and can be adapted for almost any type of payload or attack.
So we are now in a world where anyone can provision hundreds or even thousands of general-purpose servers or lightweight compute instances with the push of a button. This means much more complexity to protect, but also that attackers can mount significantly more complex attacks. Remember, the nature of cloud computing is that it is open to everyone. This includes the Kubernetes engines offered by cloud providers, as well as more abstracted systems for scaling and managing large fleets of containers, such as Amazon’s Fargate platform.
The New Risk of Container and Kubernetes Attacks
We already see signs of this new complexity. A scan by security researchers in mid-2022 pulled in over 900,000 exposed Kubernetes management endpoints. To be clear, these endpoints were not necessarily vulnerable or unprotected. But in security, exposing endpoints provides attackers with information they can use to craft more targeted attacks. Likewise, public compute clouds have had unpatched security flaws that can allow rogue users to break out of a container and potentially access the management plane of the public cloud. This can then allow them to attack other tenants, violating the core proposition of secure multi-tenancy.
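To see why "exposed but not necessarily vulnerable" still matters, consider how a scanner might interpret responses to an unauthenticated probe of a Kubernetes API server. This is a hedged sketch: the status-code heuristics below are typical of kube-apiserver behavior, but a real scanner must also handle timeouts, TLS errors, and lookalike services.

```python
def classify_k8s_probe(status_code: int, body: str) -> str:
    """Classify the response to an unauthenticated GET /version probe."""
    if status_code == 200 and "gitVersion" in body:
        # Anonymous access is enabled: version metadata leaks outright.
        return "exposed: anonymous access to version metadata"
    if status_code in (401, 403):
        # Not openly readable, but the response still confirms that a
        # Kubernetes API server is listening -- useful reconnaissance.
        return "exposed: authentication required, fingerprint confirmed"
    return "inconclusive: likely not a Kubernetes API server"
```

Even the "authentication required" case hands an attacker a confirmed target for credential attacks or future CVEs, which is exactly the reconnaissance value the mass scans above demonstrated.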
In the legacy world of tightly controlled network perimeters and less secure internal networks, there was little need to harden endpoints not designed to be exposed to the world. In the data center era, a firewall on the edge of the data center guarded against unauthorized probes and kept everything private. Even a misconfigured internal server was not accessible to the world; a firewall engineer had to explicitly change firewall rules to open that server to access from the Internet. Today, the opposite is true, with “open-to-the-Internet” being the default state and the burden falling on developers, DevOps teams, and security teams to set up firewalls, API gateways, and other protections to guard against probes and attacks.
Kubernetes can (and often does) expose endpoints as default behavior, providing a handy map to attackers. We are already seeing attackers exploit the complexity of containers and Kubernetes as a new attack vector, driven in part by the elimination or restriction of favorite older vectors such as Office macros.
Core Cloud Complexity Leads to Mistakes
The big push behind the cloud and Kubernetes is to allow developer, DevOps, and IT teams to be more agile, flexible, and resilient. However, this is a paradigm shift with many implications that can be hard for IT and security teams to address. In the cloud, the default is public; in the legacy data center world, the default was private, and IT or security had to explicitly grant access. The default premise of the cloud, going back to Jeff Bezos’s policies at AWS, is to make services, APIs, storage, computing, and networking accessible to anyone with a credit card. In the cloud, therefore, a service defaults to being exposed to the world. In the traditional data center and legacy networking world, a service had to be deliberately configured for exposure.
This paradigm shift injects a new layer of complexity into security and can lead to configuration mistakes, even for cloud-native companies. A developer may build a test application and load code onto it that communicates with other services in the cloud or even opens an API to the public Internet. The developer may not realize that the server hosting the test application shares a namespace and security groups with key production assets. That test server might also be left open by mistake for days, becoming a pivot or jump point for a malicious actor. There is another shift to consider: in the past, storage was physically attached to networks and segregated from public access, so to reach the data you had to go through the server attached to that storage. Cloud computing broke that paradigm and allowed the easy storage of data in object stores and other online storage buckets. In the cloud, developers and even security teams often place data in public cloud storage buckets without properly configuring access controls on them.
While physical data centers are somewhat obscured and blocked from public access or even scans, cloud service providers operate from well-known blocks of public IP addresses, right down to individual services. For example, the IP blocks used by Amazon’s S3 storage service are documented and publicly shared on the Internet. Because malicious actors know these addresses, running continuous probes of those blocks in search of vulnerabilities is far cheaper and less resource intensive. Attackers also know the default configurations of Kubernetes clusters and their connecting APIs, the default security settings of most server images in public cloud catalogs, and which ports are protected or opened by default in commonly deployed public cloud Web Application Firewalls. The upshot of all this? The shift to the cloud is making infrastructure harder to operate and secure, while at the same time making attack targets easier to identify.
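The published-address-space point can be illustrated with the kind of lookup either an attacker or a defender can perform against a provider's published range list (AWS, for instance, publishes its ranges as a machine-readable ip-ranges.json file). The prefixes below are documentation placeholders (RFC 5737 ranges), not the real published list:

```python
import ipaddress

# Illustrative placeholders standing in for a provider's published CIDR
# blocks -- in practice these would be loaded from e.g. ip-ranges.json.
SAMPLE_SERVICE_PREFIXES = ["192.0.2.0/24", "198.51.100.0/25"]

def in_published_ranges(ip: str, prefixes=SAMPLE_SERVICE_PREFIXES) -> bool:
    """Check whether an address falls inside the published service ranges."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(p) for p in prefixes)
```

An attacker sweeping only the published blocks probes a tiny, high-value fraction of the IPv4 space; a defender can run the same membership test to confirm which of their own assets sit in ranges the whole world knows to scan.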
Defense-in-Depth Drives Complexity
The days of simply firewalling the data center to guard infrastructure are long gone. Many organizations still maintain a global firewall in front of their infrastructure, but these firewalls are necessarily porous due to the growing number of APIs and services that must connect to the outside world. In the cloud, the initial approach was to create security groups: critical processes, services, and instances were placed inside them, and access controls were applied on a per-group basis, tied to identity providers and authentication systems. Security groups are still necessary, but they are insufficient to handle the complexity of cloud infrastructure.
The answer is defense-in-depth. Security teams put more defensive technologies in place, protecting data assets and applications in multiple ways. APIs are guarded by API gateways. Kubernetes clusters are guarded by specialized Web Application Firewalls and ingress controllers. SecDevOps teams mandate smaller, more lightweight firewalls in front of every public service or API. Application security teams require that SAST and SCA scans be run on every code iteration. Cloud providers add technology to ensure that cloud services, such as storage buckets, are properly secured. Endpoint detection and response (EDR) is mandatory for all devices interacting with enterprise and cloud assets. Security is also pushed into the content delivery network (CDN), extending web application firewalls and denial-of-service (DoS) protection further upstream from core application servers to intercept attacks earlier. All of these layered systems require proper configuration and management: a never-ending task.
The Big Problem with Complexity
Complexity increases the probability of mistakes, and it gives malicious actors more places to hide and attack. High degrees of complexity are precisely what hackers use and abuse to get their way. An enterprise may have multiple directories that maintain user permissions, and an admin may forget to update one of them. There may be five valid authentication methods, and the weakest of them is the one malicious actors will invariably choose to exploit. While 90% of development use cases and user requirements are satisfied by the standard catalog of infrastructure and devices, the remaining 10% of non-standard use cases will be the last to be updated and will likely present the best opportunities for exploits. Complexity creeps up on CISOs one exception at a time, one additional service, software package, or SaaS tool at a time.
So what can security teams do? According to recommendations from leading security agencies like the Cybersecurity and Infrastructure Security Agency (CISA), organizations must invest in automated, continuous security validation to keep up. Rather than relying on an annual third-party penetration test, organizations must continually evaluate their security control stack. This means performing adversary simulations to verify that defensive controls correctly detect, log, and stop attacks. Continuous testing also helps organizations identify temporary resources that were brought up and not protected correctly. Security teams should also make sure they do not limit themselves to external attack surface validation: any network can become an entry, exit, or pivot point for malicious actors.
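The validation loop described above reduces to a simple contract: replay known attack behaviors, then verify each one triggered a detection. This toy harness illustrates the idea; the scenario names and the detection log are hypothetical stand-ins for what a real breach-and-attack-simulation platform would execute and collect.

```python
# Hypothetical attack scenarios a simulation run would replay.
SCENARIOS = ["lateral-movement-smb", "dns-exfiltration", "container-escape"]

def validate_controls(detection_log: set) -> dict:
    """Map each simulated scenario to whether the controls detected it."""
    return {scenario: (scenario in detection_log) for scenario in SCENARIOS}

def gaps(results: dict) -> list:
    """Scenarios the defensive stack missed -- the items to remediate."""
    return [s for s, detected in results.items() if not detected]

# Example run: the stack only logged the DNS exfiltration attempt.
results = validate_controls({"dns-exfiltration"})
print(gaps(results))  # the two undetected scenarios need remediation
```

Run continuously, the undetected list becomes a living backlog: each new endpoint, ephemeral container, or configuration change is re-tested instead of waiting for the next annual assessment to surface the gap.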
Connect with a SafeBreach cybersecurity expert or request a demo of our advanced platform today to see what continuous security validation—powered by breach and attack simulation (BAS)—can do for you.