
Tracing Kubernetes Adoption, From Inception to the Cloud

Among the factors contributing to developers’ growing reliance on Kubernetes are its hybrid cloud capabilities, team-friendly portability, and cost-consciousness for cloud deployments. 

Kubernetes has come a long way since 2014 when Google first introduced it as an open source answer to Borg, Google’s internal container orchestration solution.  

Since then, the container orchestration and deployment system has seen many technical upgrades. After Kubernetes v1.0 shipped in 2015, the project added OpenAPI support in December 2016, which enabled API providers to define their operations and opened the path for developers to automate their tooling. By 2018, Kubernetes had become so mainstream that Google dedicated an entire podcast to it. 

Fast-forward to the present day: Kubernetes’ popularity has skyrocketed in tandem with the adoption of cloud-native services. According to the recent Splunk State of Kubernetes report, there’s been a “300% increase in container production usage in the past five years,” with large organizations being the predominant driving force behind Kubernetes’ mainstream adoption. 

According to the Cloud Native Computing Foundation’s (CNCF) annual surveys, the share of organizations using or evaluating Kubernetes ballooned from 78% in 2019 to 96% in 2022, making it the go-to platform for building platforms.

That’s not to say that Kubernetes was immediately accepted by the development community. For a while, container orchestration required a technology choice among Kubernetes, Docker, and Mesosphere. Eventually, Docker and Kubernetes made friends with one another, and developers are now comfortable using both. Mesosphere, however, lost their attention. 

Kubernetes has stayed open source, and one mark of its prevalence is the volume of contributions to its codebase: contributing companies have made over 2.8 million contributions to Kubernetes. 

Kubernetes, as we know it today, comes with many automations already baked in, making it easier for development teams to get started. It’s easy to tick off the advantages on your fingers: Everything is managed through code and is easily portable if running in the cloud. Kubernetes’ flexibility makes scaling applications a simple process. Containers are lean by design; they store only the must-have resources an application needs to run. That makes applications significantly faster and lighter, and Kubernetes all the more appealing to developers. 
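The “everything is managed through code” point is concrete: you declare the desired state in a manifest and Kubernetes converges toward it. A minimal Deployment manifest might look like the following (the `web` name, image, and resource figures are illustrative, not from any particular deployment):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # illustrative workload name
spec:
  replicas: 3               # desired state; Kubernetes keeps 3 pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25       # a lean container image
          resources:
            requests:             # only the must-have resources
              cpu: 100m
              memory: 64Mi
```

Because the whole definition lives in a file, the same manifest can be applied unchanged to an on-prem cluster or to any cloud provider’s managed Kubernetes, which is where the portability claim comes from.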

Kubernetes: a solid foothold in the cloud  

With each passing year, cloud computing has continued to gain traction while on-prem Kubernetes deployments have taken a 3% hit year over year, according to VMware Tanzu’s The State of Kubernetes 2022 report. Most Kubernetes deployments are in multicloud or hybrid clouds. “When we asked people about growth plans in the coming year,” the report states, “almost half (48%) expect the number of Kubernetes clusters they operate to grow by more than 50%; an additional 28% expect the number of clusters to increase notably (20% to 50%).” 

Kubernetes’ growth is thanks in part to its continued investment in software development and infrastructure efficiency. Development teams have come to rely on the flexibility that orchestrating from on-prem, single-cloud, hybrid cloud, or multicloud provides. These container-based hybrid cloud and multicloud environments allow teams to handle massive workloads with minimal refactoring or replatforming. 

This developer efficiency is particularly true when building microservice-based applications. Since they’re composed of discrete units, teams can more readily assign resources to the right tasks while working in one platform. That’s a considerably tougher ask when working with a monolithic architecture. It’s a useful way of keeping too many proverbial cooks out of the kitchen. 

Kubernetes’ bread and butter is its automation. By automating highly repeatable commands, teams can trim unnecessary headcount and sidestep time-consuming IT tinkering. When surveyed, 45% of respondents in VMware’s report indicated they rely on Kubernetes because of its product capabilities and roadmap.

Kubernetes growth

The cloud infrastructure market grew 24% year over year in Q3 2022, according to Synergy Research. With more large enterprises adding containerized cloud deployments to their stack and looking to move their data freely from cloud to cloud without lock-in, the ability to work in a hybrid cloud environment is becoming mission-critical for most large organizations. That ongoing trend is reflected in the 41% of respondents who choose Kubernetes for this very reason, per VMware’s report. 

Most major cloud service platforms have a Kubernetes distribution tailored to their respective services, points out a 2022 Evans Data Cloud Development Survey. Notably, 37% of developers use dedicated Kubernetes-based cloud services, such as AWS’s Amazon EKS or Microsoft’s Azure Kubernetes Service. “These implementations of Kubernetes, with varying levels of service, use these companies’ internal cloud back-ends for container orchestration,” the report explains. In the Evans Data report, 32% of developers use a commercial Kubernetes distribution, and 28% run a managed “vanilla” Kubernetes. 

The pandemic drove digital services and experiences into overdrive, accelerating the already-in-progress shift of business services online. One result was that cloud deployments escalated, along with application load. Back in June 2018, Kubernetes 1.11 was released, introducing IPVS-based in-cluster load balancing, an advancement that greatly improved production scalability.  
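For reference, IPVS-based load balancing is switched on through kube-proxy’s configuration rather than application code. A minimal configuration fragment looks like this (the round-robin scheduler choice is illustrative; IPVS supports several):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"          # use IPVS instead of the default iptables mode
ipvs:
  scheduler: "rr"     # round-robin; other schedulers (lc, sh, ...) exist
```

Because IPVS uses hash tables rather than sequentially evaluated iptables rules, it keeps service routing fast even as the number of services in a cluster grows into the thousands.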

Load balancing is key, as it’s a hands-free means of maintaining infrastructure efficiency, which in turn keeps cloud costs from skyrocketing. According to Cast.ai, companies spent an exorbitant $16.2B on cloud waste in 2022. This overprovisioning bleeds dev teams of their available resources. Automations that save money, time, and manpower are the goal, though that has required teams to implement their own tools for automatic failover or to rely on the commercial support of an operator. 

Operators for Kubernetes 

As mentioned before, the core of Kubernetes has many built-in automations, but thanks to the Kubernetes operator pattern, it’s possible to automate tasks beyond what’s in the Kubernetes software. 

Kubernetes is great at keeping costs low by load-balancing cluster nodes in order to reach their ideal state. An operator goes a step further by adding an extra layer of orchestration, one that steps in to handle failover automatically, monitors resources via a reconciliation loop, or even scales a cluster on its own. 
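The reconciliation loop at the heart of the operator pattern is simple to sketch: compare desired state with observed state and emit corrective actions until they match. The snippet below is an illustrative Python model of that idea, not a real operator SDK; an actual operator would use a framework like controller-runtime or Kopf and watch the Kubernetes API instead of plain dicts, and the `reconcile` name and workload dicts are hypothetical.

```python
def reconcile(desired: dict, actual: dict) -> list:
    """One pass of a reconciliation loop: diff desired vs. observed
    replica counts and return the corrective actions to take."""
    actions = []
    for name, want in desired.items():
        have = actual.get(name, 0)
        if have < want:
            actions.append(f"scale-up {name}: {have} -> {want}")
        elif have > want:
            actions.append(f"scale-down {name}: {have} -> {want}")
    # Anything still running that is no longer desired gets cleaned up.
    for name in actual:
        if name not in desired:
            actions.append(f"delete {name}")
    return actions

# Desired state asks for 3 "web" replicas; we observe 1, plus a leftover
# "old-cache" workload that should be removed.
plan = reconcile({"web": 3}, {"web": 1, "old-cache": 2})
print(plan)  # → ['scale-up web: 1 -> 3', 'delete old-cache']
```

A real operator runs this loop continuously (or on watch events), which is what makes failover “automatic”: when observed state drifts, the next reconcile pass converges it back without human intervention.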

And if you use Redis – or you are considering doing so – we make it even easier. The Redis Enterprise Operator for Kubernetes streamlines and automates the management of the Kubernetes layer. It’s the result of everything learned across millions of cluster deployments.  

https://www.youtube.com/embed/7UBlNsyHSQA

For anyone running Redis on Kubernetes and interested in comparing the Redis operator and Helm charts, read An Introduction to the Helm Tool and Helm Charts.