By Samuel Cadavid, Senior Solutions Consultant

 

In the dynamic world of cloud computing, Kubernetes has emerged as a frontrunner in orchestrating containerized applications. However, as the complexity and scale of deployments grow, managing resources efficiently becomes a daunting task. This is where Artificial Intelligence (AI) steps in, revolutionizing how we approach Kubernetes management, specifically in in-place pod resizing, vertical and horizontal scaling, and power-aware batch scheduling. During my recent trip to KubeCon, I was able to attend a session hosted by Vinay Kulkarni (eBay) and Haoran Qiu (UIUC) that delved into this topic, including how cluster autoscaler currently handles pods pending due to insufficient resources, changes to the autoscaling workflow that right-sizes over-provisioned pods, and the latest research that leverages machine learning to achieve multi-dimensional autoscaling. The session got me thinking about how AI plays a role in pod-resizing, and it inspired me to keep digging into it afterward.  

AI-Driven In-Place Pod Resizing 

Traditionally, resizing pods in Kubernetes meant recreating them with the new size specifications. This process, while effective, leads to downtime and potential service disruptions. AI-driven in-place pod resizing changes this narrative.

How it Works 

AI algorithms continuously monitor the resource usage patterns of each pod. When a pod requires more resources, AI predicts this need and dynamically adjusts CPU and memory allocations without restarting the pod. This approach minimizes downtime and ensures that applications scale seamlessly with fluctuating demands. 
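As a rough illustration of this monitor-predict-adjust loop, here is a minimal sketch in Python. It forecasts a pod's CPU demand with an exponentially weighted moving average and recommends a new request only when the drift is meaningful; the function names, headroom factor, and thresholds are all illustrative assumptions, not a real Kubernetes API.

```python
# Minimal sketch: recommend a new CPU request for an in-place resize.
# All names and thresholds here are illustrative, not a real API.

def ewma_forecast(samples, alpha=0.3):
    """Smooth recent CPU usage samples (millicores) into a forecast."""
    forecast = samples[0]
    for s in samples[1:]:
        forecast = alpha * s + (1 - alpha) * forecast
    return forecast

def recommend_cpu_request(samples, current_request, headroom=1.2, step=50):
    """Return a new CPU request (millicores): forecast plus headroom,
    rounded up to the nearest scheduling step."""
    target = ewma_forecast(samples) * headroom
    # Round up to a clean increment so requests stay schedulable.
    rounded = int(-(-target // step) * step)
    # Only resize when the change is meaningful (>10% drift),
    # to avoid churning the pod on every small fluctuation.
    if abs(rounded - current_request) / current_request > 0.10:
        return rounded
    return current_request

usage = [200, 240, 260, 300, 320]  # recent millicore samples, trending up
print(recommend_cpu_request(usage, current_request=250))
```

In a real controller, the recommendation would then be applied through the Kubernetes API's in-place resize support rather than a pod recreation, which is what keeps the workload serving traffic throughout.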

Benefits

The payoff is twofold: applications stay available while their allocations change, avoiding the restart-induced downtime of the traditional approach, and clusters run leaner because resource requests track actual demand rather than conservative static estimates.

Vertical and Horizontal Scaling: AI at the Helm 

Vertical Scaling with AI 

AI-driven vertical scaling involves adjusting the CPU and memory limits of a pod. Using predictive analytics, AI determines the optimal size for a pod based on historical data and current trends. This proactive resizing prevents resource exhaustion and improves performance. 
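One simple way to turn historical data into a size recommendation, similar in spirit to what the Vertical Pod Autoscaler's recommender does, is to size from a high percentile of observed usage rather than the peak, so a single spike doesn't inflate the pod permanently. This sketch is a hypothetical simplification; the percentile, safety margin, and function names are assumptions for illustration.

```python
# Illustrative percentile-based sizing: recommend a memory request from
# usage history. Parameter choices are assumptions, not a real API.

def percentile(values, pct):
    """Nearest-rank percentile of a list of usage samples."""
    ordered = sorted(values)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def size_pod(history_mib, pct=90, safety_margin=1.15):
    """Recommend a memory request (MiB): the 90th percentile of
    observed usage plus a safety margin, ignoring rare spikes."""
    return round(percentile(history_mib, pct) * safety_margin)

# Ten recent samples; the lone 900 MiB spike does not drive the result.
history = [512, 540, 530, 600, 580, 900, 560, 575, 590, 565]
print(size_pod(history))
```

The design choice worth noting is the percentile itself: sizing to the maximum wastes capacity for the sake of rare events, while sizing to the median under-provisions half the time; a high percentile with a margin splits the difference.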

Horizontal Scaling with AI 

In horizontal scaling, AI plays a pivotal role in deciding when to add or remove pod instances. By analyzing traffic patterns, workload demands, and system health, AI can automate the scaling process, ensuring that the cluster meets the demand without manual intervention. 
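The stock Horizontal Pod Autoscaler already computes desired replicas as `ceil(currentReplicas * currentMetric / targetMetric)`; an AI-driven variant can feed a *predicted* metric into that same formula so the cluster scales ahead of demand rather than behind it. The prediction below is a deliberately naive linear extrapolation, used only to make the idea concrete.

```python
import math

# The HPA scaling rule, fed with a forecast instead of the raw metric.
# The forecast function here is a toy; real systems would use a trained model.

def desired_replicas(current_replicas, current_metric, target_metric,
                     min_replicas=1, max_replicas=20):
    """Kubernetes HPA rule: desired = ceil(current * metric / target),
    clamped to the configured replica bounds."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(desired, max_replicas))

def predict_next(metric_history):
    """Naive forecast: extend the last observed trend one step ahead."""
    if len(metric_history) < 2:
        return metric_history[-1]
    return metric_history[-1] + (metric_history[-1] - metric_history[-2])

cpu_history = [55, 70, 85]                 # average CPU % across pods
predicted = predict_next(cpu_history)      # trend continues to 100
print(desired_replicas(4, predicted, 50))  # scale out before saturation
```

Because the formula is unchanged, swapping the observed metric for a predicted one is a small, low-risk modification to the scaling loop.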

Advantages

Taken together, these techniques make scaling proactive rather than reactive: vertical adjustments keep individual pods from exhausting their resources, while horizontal adjustments match replica counts to demand without manual intervention.

Power-Aware Batch Scheduling with AI 

Energy efficiency is becoming increasingly important in data center operations. AI-driven power-aware batch scheduling in Kubernetes is a game-changer in this realm.

The Concept

This approach involves scheduling batch jobs in a manner that optimizes power usage. AI algorithms analyze the power consumption patterns of nodes and schedule jobs on those consuming less power or during off-peak hours, significantly reducing the overall energy footprint. 
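A placement policy like this can be sketched as two decisions: defer deferrable batch jobs until an off-peak window, and otherwise place them on the fitting node currently drawing the least power. The node data, field names, and off-peak window below are invented for illustration, assuming a scheduler with access to per-node power telemetry.

```python
# Hedged sketch of a power-aware batch placement policy.
# Node fields and the off-peak window (22:00-06:00) are assumptions.

def pick_node(nodes, job_cpu):
    """nodes: dicts with 'name', 'free_cpu' (millicores), and 'watts'
    (current draw). Return the lowest-power node that fits, or None."""
    candidates = [n for n in nodes if n["free_cpu"] >= job_cpu]
    if not candidates:
        return None
    # Among nodes with room, favor the one already drawing less power.
    return min(candidates, key=lambda n: n["watts"])["name"]

def schedule(job, nodes, hour):
    """Defer deferrable batch jobs outside the off-peak window;
    otherwise place them power-aware, or leave them pending."""
    off_peak = hour >= 22 or hour < 6
    if job.get("deferrable") and not off_peak:
        return "deferred"
    return pick_node(nodes, job["cpu"]) or "pending"

nodes = [
    {"name": "node-a", "free_cpu": 2000, "watts": 310},
    {"name": "node-b", "free_cpu": 1500, "watts": 180},
]
print(schedule({"cpu": 1000, "deferrable": True}, nodes, hour=23))
```

The AI contribution in practice is in the inputs to this policy: learned models of per-node power curves and of which jobs can tolerate deferral, rather than the static fields shown here.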

Impact

Scheduling batch work with power in mind shrinks a cluster's energy footprint, cutting both operating costs and environmental impact without changing the jobs themselves.

The integration of AI into Kubernetes management is not just a trend; it’s a necessity for efficient, cost-effective, and sustainable operations. AI’s role in in-place pod resizing, vertical and horizontal scaling, and power-aware batch scheduling marks a significant leap towards smarter, more autonomous cloud infrastructures. As we continue to embrace these AI-driven strategies, we pave the way for more resilient, responsive, and responsible computing environments. Here, I’ve only scratched the surface of what AI can do for Kubernetes management. As technology evolves, we can expect even more innovative solutions to emerge, further simplifying and enhancing the way we manage cloud resources. 

 

Watch the full lecture here!

By Dan Quackenbush

 

In the dynamic world of tech conferences, there exists a gem unlike any other – KubeCon, where the orchestrators of Kubernetes gather to share tales of triumph and innovation. As a seasoned navigator of cluster administration, I found myself immersed in the heartbeat of this symphony of ideas, particularly drawn to the stories that unfolded after the initial deployment – the fascinating Day 2 Operations. 

The first stop, Major League Baseball + Argo CD: A Home Run, showed how MLB implemented GitOps through Argo CD to empower feature-driven development using Helm charts. The stage was set with a compelling case study highlighting how developers were handed the reins to their applications without drowning in a sea of YAML configurations. Through the power of abstraction, with Helm charts deployed through Argo CD, developers can focus on features, such as enabling monitoring, injecting secrets, and exposing their services across two hundred clusters, bringing consistency to the service runtime landscape. 

Next up was a talk on an alpha feature introduced in Kubernetes 1.27: In-Place Resource Resize. It is a meaningful change for administrators dealing with dynamic, resource-intensive applications, especially those built on the JVM. Sustainable Scaling of Kubernetes Workloads with In-Place Pod Resize and Predictive AI unfolded the power of dynamically adjusting pod sizes, unveiling a new level of flexibility for Kubernetes clusters. It was not just about vertical scaling; the speakers showed how ML could drive these configurations without over-allocating resources. 

 

The talk FinOps at Grafana Labs illuminated the path to financial accountability, transforming it into a cultural cornerstone. The speaker painted a vivid picture of a world where accountability, transparency, and a culture of openness were the guiding lights. Through real-world examples, the audience learned the impact of “cash positive chaos testing,” moving to spot instances, weighing cost optimization against service reliability, and continuously stress-testing applications under varied infrastructure conditions. 

 

In a creative twist, Burden to Bliss: Eliminate Patching and Upgrading Toil with Cluster Autoscaler at Scale dived into leveraging Cluster Autoscaler to apply security patches. The ingenious strategy: create new node pools running the patched system, then strategically shift a single pod with a toleration onto a patched node; from there, eventual-consistency mechanisms move the remaining pods to the new nodes through eviction. This approach ensures that security patches are seamlessly applied without affecting ongoing workloads. 

 

The final act in this symphony of talks explored the intersection of Kubernetes, service mesh, and content delivery networks (CDNs). Take It to the Edge: Creating a Globally Distributed Ingress with Istio & K8gb unveiled the critical role of service mesh in handling disruptions during DNS load balancing, offering a solution to the dreaded 502 errors. Through health checks and local failover endpoints, the talk showed how Kubernetes Global Balancer could redefine CDN construction, providing a resilient and scalable solution for distributed applications. 

 

It is always interesting to hear how people are handling similar problems. These talks show how, with a new mindset, a Kubernetes administrator can give developers a central, feature-driven (rather than configuration-driven) workflow, scale those workloads in place or through cheaper means, and reduce the sustainability burden along the way. Once deployed, these applications can then spread across regions, on hardened nodes. I invite you to check out the talks, dive into the dynamic Day 2 Operations, and discover the secrets shared by industry leaders. 

 

By Dani Shirer, Director of Project Management, River Point Technology

 

KubeCon is a major tech conference hosted by the Cloud Native Computing Foundation (CNCF), a Linux Foundation project that helps advance container technology. This year, it was held in Chicago, with three and a half days of sessions, demos, workshops, and networking events packed into the schedule. Although I was excited to attend with other members of the RPT Team, I was unsure of what to expect for my experience as a less technically savvy attendee.

As a project manager in the tech industry, I have the distinct honor of working with my talented colleagues who are subject matter experts in this field, and I am exposed to the seemingly endless stream of cloud native tools they implement or advise on during a project’s lifecycle. However, exposure doesn’t always equate to understanding, and I often find myself struggling to grasp concepts of a solution that appear very basic to my coworkers.

As a result, the first question that popped into my head upon learning I would be attending KubeCon was: Will there be value for me in these sessions, or is everything going to be over my head? A fairly standard rumination from the imposter syndrome many of us in this field grapple with on a daily basis.

However, this isn’t the CNCF’s first rodeo, and they had prepared a full offering for me and the other newbies attending. While planning my schedule on the website, I found a category of sessions labeled “Cloud Native Novice” and suddenly birds were singing and there was a light at the end of the tunnel. With my Sched app filled, I was ready to take on KubeCon. Here is a list of the Cloud Native Novice talks I attended, with my major takeaways:

 

It’s Never Too Late for PKI Fundamentals: Building a Mental Model – Jackie Elliott, Microsoft

This was an excellent session to kick off with. Jackie did an amazing job breaking down the concepts of Public Key Infrastructure and its purposes: facilitating the secure transfer of information, increasing a network’s security, and providing a common framework of practices, policies, and technologies. PKI is a term I’ve heard come up during numerous project discussions, and while I had a general understanding, it was really valuable to take a deep dive to help cement that comprehension.

 

From Non-Tech to CNCF Ambassador: You Can Do It Too! – Julia Furst, Veeam

By far my favorite KubeCon session! Julia walked us through her journey from a non-technical Marketing Manager to a CNCF Ambassador in a span of two years. She touched on pushing past the imposter syndrome and self-doubt that inevitably come when faced with difficulties, and also advocated for public learning, which essentially means putting yourself out there in various support networks (LinkedIn, GitHub Community Discussions, YouTube, Twitter) and not being afraid to ask questions publicly to gain insight from your peers in this industry. I left Julia’s session feeling inspired and have already watched some of the introductory videos on her YouTube channel.

 

Demystifying Service Mesh: Separating Hype from Practicality – Brian Redmond & Ally Ford, Microsoft

Another great deep dive into a tool I regularly work with. RPT often provides consulting services for HashiCorp Consul and all of our customers use some form of a service mesh. Brian and Ally truly did demystify service meshes by breaking down the major pillars of security, observability and traffic management and providing thorough context that will help me speak more confidently during project planning around the subject.

 

Beyond Passwords: Keycloak’s Contributions to IAM (Identity and Access Management) + Security – Soojin Lee & Hoon Jo, Megazone

With RPT leading consulting projects for HashiCorp Vault and AWS Services, Keycloak tends to find its way into project planning and discovery sessions, so I was very much looking forward to this presentation. Soojin and Hoon did not disappoint. They delved into the authentication and authorization cycle in a way that was easy to understand and provided detailed mappings of IAM within the multi-cloud world.

 

Learning Kubernetes by Chaos – Breaking a Kubernetes Cluster to Understand the Components – Ricardo Katz, VMware & Anderson Duboc, Google Cloud

The leaders of this session took a unique yet brilliant approach to explaining the components of a Kubernetes cluster. Typically, I have seen a colleague or customer quickly spin up a cluster in a no-nonsense manner with little discussion of its components. Ricardo and Anderson instead pulled up the code of an existing Kubernetes cluster and systematically broke, then repaired, its individual parts to showcase the purpose and function of each.

Beyond the Novice track, KubeCon offered incredible opportunities for networking with industry leaders, collaboration with new and existing partners, exposure to emerging tools within the cloud native landscape, and enough free knick-knacks on the showroom floor that I had to take a second carry-on with me for the plane ride home.

I’m eager to utilize my new arsenal of knowledge and continue to expand upon it, and hopefully by next year’s KubeCon, I’ll be writing a breakdown of the Cloud Native Expert Track.

Connect with Dani Shirer!