As organizations continue to adopt cloud infrastructure and automate their processes, the need for efficient, scalable, and maintainable infrastructure-as-code (IaC) solutions has never been more pressing. Terraform, HashiCorp’s flagship IaC tool, is at the forefront of this movement, allowing engineers to define, provision, and manage infrastructure with ease. However, building and managing Terraform in a way that is reusable, scalable, and maintainable can be a challenge—especially as organizations grow and their infrastructure becomes more complex. 

That’s where Terraform modules come into play. These self-contained packages of Terraform configuration files enable developers to build infrastructure in a modular, reusable way. Yet, not all modules are created equal. Poor design can lead to inefficiencies, increased complexity, and a host of operational headaches. 

For anyone interested in optimizing their Terraform use or looking to gain an edge in managing large, complex infrastructures, the session “Meet the Experts: Terraform Module Design” at HashiConf 2024 is an absolute must-attend. 

In this dynamic panel discussion, Ned Bellavance (Ned in the Cloud LLC), Drew Mullen (River Point Technology), and Bruno Schaatsbergen (HashiCorp) will share their hard-earned expertise on the topic. Together, these experts will dive deep into best practices, design patterns, and lessons learned from working on some of the most popular Terraform modules available in the public registry. 

Here’s why this session stands out and what you can expect to learn from attending. 

Why Terraform Module Design Matters 

Terraform’s popularity stems from its ability to codify infrastructure in a simple, declarative way. But while it’s easy to get started with Terraform, creating Terraform modules that are flexible, reusable, and maintainable can be tricky. 

As your infrastructure grows and becomes more complex, manually replicating configuration across environments or teams becomes untenable. That’s where the true power of modules comes in. They allow you to abstract away complexities and create reusable building blocks that standardize infrastructure provisioning. 

However, creating well-designed modules requires more than just wrapping code into a reusable block: a well-designed Terraform module must remain flexible, reusable, and easy to maintain as the infrastructure it manages evolves.

This is precisely what the panelists will explore: how to craft Terraform modules that scale effectively and remain easy to maintain over time, without sacrificing flexibility. 

What You’ll Learn from the Experts 

1. Best Practices for Terraform Module Design 
The foundation of any great Terraform module is its design. Get practical advice and expert insights that will help you avoid common pitfalls and create modules that are not only easier to use but also easier to scale and maintain over time. 

2. Design Patterns for Terraform Modules 
Building on the basics, the panel will delve into advanced design patterns that make Terraform modules more powerful and adaptable.  

3. Lessons from the Field 
One of the most valuable aspects of this session is the chance to learn from the personal experiences of the panelists. Each has played a key role in developing some of the most popular Terraform modules in the public registry, and they will share their real-world experiences and the lessons they learned along the way.

Meet the Experts: Your Panelists 

Ned Bellavance (Founder, Ned in the Cloud LLC) 
Ned is a well-known thought leader in the cloud space, with deep expertise in infrastructure-as-code. As the founder of Ned in the Cloud LLC and the host of the popular “Day Two Cloud” podcast, Ned brings a wealth of knowledge on how to scale and automate cloud operations. He’s also an experienced educator and public speaker, known for breaking down complex cloud topics into digestible, actionable advice. 

Drew Mullen (Principal Solutions Architect, River Point Technology) 
Drew is a recognized expert in cloud-native infrastructure, with a focus on helping enterprises adopt and scale cloud technologies. At River Point Technology, Drew works with Fortune 500 companies to design and implement cloud architectures that are reliable, secure, and scalable. His experience with Terraform spans years, making him a key contributor to open-source modules and a mentor for organizations seeking to optimize their IaC practices. 

Bruno Schaatsbergen (Senior Engineer, HashiCorp) 
As a Senior Engineer at HashiCorp, Bruno is deeply involved in the development and maintenance of Terraform itself. His contributions to the Terraform ecosystem have helped shape the way developers think about infrastructure automation. Bruno’s technical depth, combined with his understanding of the challenges faced by enterprises adopting Terraform at scale, makes him an invaluable voice on this panel.

Session detail: Kubernetes is a trendy solution, but in most cases there is a skills gap when deploying and maintaining it. I am looking at Nomad to bridge that gap and provide a simpler solution for running self-hosted platform infrastructure such as runners for CI/CD pipelines, Packer workflows that build images and send build metadata to HCP Packer, and HCP Terraform Agents.

Hashi Areas / technologies covered: Terraform, Nomad, HashiCorp Cloud Platform, Packer, Infrastructure Lifecycle Management 

Scaling Simplified: How Nomad Bridges the Kubernetes Skills Gap 

When it comes to orchestration, Kubernetes is often the go-to choice. Its popularity is undeniable, but many engineering teams quickly find themselves grappling with its complexity. Kubernetes’ steep learning curve and operational challenges have left many asking a crucial question: is there an easier way to scale infrastructure without compromising capability? The answer may lie in Nomad, a lightweight yet powerful alternative that’s steadily gaining traction for its simplicity and efficiency. 

If you’ve ever found yourself struggling with the intricacies of Kubernetes or simply looking for a more straightforward solution to scale your infrastructure, then this upcoming session at HashiConf 2024, hosted by River Point Technology’s very own Ben Lykins, is for you. Titled “A Beginner’s Journey: Scaling Self-Hosted Platform Infrastructure with Nomad”, Ben’s session aims to provide the audience with actionable insights on how to leverage Nomad to overcome the challenges often associated with Kubernetes.

Why Nomad? Understanding the Challenges with Kubernetes 

Kubernetes is widely praised for its ability to manage containerized applications at scale, but the truth is, it often requires significant expertise to run efficiently, and that inherent complexity can create several operational challenges.

While Kubernetes is an excellent solution for many organizations, its complexity can lead to operational bottlenecks. This is where Nomad shines as an alternative—providing a simpler, more flexible platform for managing workloads. 

What is Nomad? 

HashiCorp Nomad is a flexible workload orchestrator that’s designed to run containers, VMs, and other application types on any infrastructure. Unlike Kubernetes, Nomad is lightweight and easy to adopt, yet powerful enough to handle production-grade workloads. 

A few key qualities make Nomad stand out as a strong contender for self-hosted platform infrastructure.

How Nomad Bridges the Skills Gap 

In this session, we’ll explore how Nomad addresses many of the pain points experienced by those who struggle with Kubernetes. Whether you’re new to infrastructure management or simply looking for a more efficient way to run self-hosted platforms, Nomad provides a simpler, more approachable solution. Here’s how: 

1. Simple Configuration Language 

Nomad’s use of HCL (HashiCorp Configuration Language) makes it incredibly simple to define and manage infrastructure. HCL’s human-readable syntax allows users to quickly and clearly define job specifications, configurations, and policies.  

With Nomad, HCL streamlines workflows by offering a declarative approach, making it easy for both beginners and seasoned engineers to understand and use. This simplicity reduces the learning curve and enhances productivity, especially when working with HashiCorp’s broader ecosystem of tools like Terraform and Vault.
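As a concrete sketch, a minimal Nomad job specification in HCL might look like the following; the job, group, task, and image names are illustrative, not taken from the session:

```hcl
# Minimal Nomad job specification (illustrative names).
job "ci-runner" {
  datacenters = ["dc1"]
  type        = "service"

  group "runner" {
    count = 2

    task "agent" {
      driver = "docker"

      config {
        image = "example/ci-runner:latest"
      }

      resources {
        cpu    = 500 # MHz
        memory = 256 # MB
      }
    }
  }
}
```

Submitting this with `nomad job run` is all it takes to schedule the workload; compare that with the multiple manifests (Deployment, Service, and more) a comparable Kubernetes setup would typically require.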

2. Single Binary 

Nomad’s architecture as a single binary makes it remarkably simple to deploy and manage. With no external dependencies or complex setup, Nomad can be easily installed and run on any environment, from local development to large-scale production. This single binary handles all core functions—scheduling, orchestration, and resource management—without needing multiple components or services.  

Its simplicity reduces operational overhead, speeds up installation, and enables quick portability between on-premises and cloud environments, making Nomad an efficient and user-friendly solution for managing workloads at any scale. 

3. Fit in HashiCorp’s Ecosystem 

Nomad’s seamless integrations with other HashiCorp products, like Vault, Consul, and Terraform, make it a powerful part of a unified infrastructure ecosystem. With these integrations, Nomad enhances security, networking, and provisioning workflows. Vault provides dynamic secrets management, ensuring sensitive data remains secure, while Consul offers service discovery and networking automation. Terraform simplifies infrastructure provisioning, allowing teams to define and deploy both their infrastructure and their Nomad configurations as code. These integrations streamline operations, increase efficiency, and create a cohesive, end-to-end solution for managing complex infrastructure environments.

Nomad vs. Kubernetes: Is It Time to Make the Switch? 

While Kubernetes has long been the gold standard for container orchestration, it’s not without its downsides—especially for teams looking for a more straightforward way to manage their platform infrastructure. Nomad’s lightweight architecture and simplicity make it an excellent alternative for those who want to minimize complexity while still scaling effectively. 

In this session, you’ll learn whether Nomad is the right fit for your organization and how you can start using it to build scalable, self-hosted platforms. 

Conclusion: Bridging the Gap with Nomad 

Kubernetes will likely remain a dominant force in the infrastructure space, but for those looking for an alternative that offers ease of use without sacrificing scalability, Nomad is a strong contender. With its ability to run various workloads, integrate with HashiCorp’s suite of tools, and scale efficiently, Nomad provides a simpler solution to managing self-hosted platform infrastructure. 

If you’re ready to take your infrastructure to the next level while reducing operational complexity, be sure to attend “A Beginner’s Journey: Scaling Self-Hosted Platform Infrastructure with Nomad” at HashiConf 2024. Whether you’re new to Nomad or just looking for a more streamlined solution, this session will equip you with the knowledge and tools to make scaling your infrastructure more manageable. 

Don’t miss Ben’s session “A Beginner’s Journey: Scaling Self-Hosted Platform Infrastructure with Nomad” at HashiConf 2024.

For organizations leveraging Kubernetes and Rancher, efficient secret management across multiple clusters is a common concern. This blog post explores a custom operator solution, built on SUSE’s Fleet, that streamlines secret distribution in multi-cluster environments.

The Challenge of Multi-Cluster Secret Management

As organizations scale Kubernetes infrastructure, managing secrets across multiple clusters becomes increasingly complex. While it’s possible to configure each downstream cluster to communicate directly with a central secret store like HashiCorp Vault, this approach can become unwieldy as the number of clusters grows. Using the Kubernetes JWT authentication method with Vault requires careful management of roles and policies for each cluster. Alternatively, using the AppRole authentication method, while more straightforward to set up, falls short of providing the needed level of security.

Leveraging Rancher as a Central Secrets Manager

Clusters managed by Rancher leverage the Fleet agent for various aspects of configuration. Using a custom operator, we can enable replication of secrets in targeted Rancher-managed clusters. This accounts for secrets created manually in the Rancher cluster as well as secrets managed externally by tools like HashiCorp Vault or External Secrets.

Introducing the Fleet Handshake Operator

We’ve developed a custom Kubernetes operator that works with Fleet to distribute secrets across clusters. This operator, the Fleet Handshake Operator, listens for defined Kubernetes secrets and creates Fleet Bundles to distribute them to downstream clusters.

Fleet Handshake utilizes the cluster hosting the Fleet controller, typically deployed within Rancher, as the central source of truth for secrets. Fleet agents then consume these secrets in downstream clusters. This approach solves the “secret zero” problem by using Rancher as the JWT-authenticated client to Vault and Fleet to manage connectivity to downstream clusters. Moreover, it provides a centralized point of control for secret distribution while maintaining security.

Advantages of This Approach

Technical Implementation

The Fleet Handshake Operator is built around a custom resource definition (CRD) called FleetHandshake. This CRD defines the structure for specifying which secrets should be synchronized and to which target clusters. The main controller, FleetHandshakeReconciler, handles the reconciliation loop for these custom resources. When a secret is created or updated, the reconciler picks up the change and updates the Bundle resource it manages for distribution downstream.
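For illustration, a FleetHandshake resource might look like the following. The API group and version, resource names, and selector labels here are hypothetical; the secretName and targets fields mirror what the reconciler reads (Spec.SecretName and Spec.Targets), and targets follow Fleet’s standard BundleTarget shape:

```yaml
# Hypothetical FleetHandshake manifest (API group/version and labels assumed).
apiVersion: rancher.example.com/v1
kind: FleetHandshake
metadata:
  name: db-credentials
  namespace: fleet-default
spec:
  secretName: db-credentials   # the Secret to replicate, in the same namespace
  targets:                     # Fleet BundleTargets selecting downstream clusters
    - clusterSelector:
        matchLabels:
          env: production
```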

Let’s dive into the critical components of the operator:

Reconciliation Process

The Reconcile function implements the operator’s core logic. It first fetches the custom FleetHandshake resource and the secret it references. It then upserts the Bundle. If all succeeds, it updates the FleetHandshake status to Synced.

func (r *FleetHandshakeReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    // Fetch the FleetHandshake resource
    var fleetHandshake rancherv1.FleetHandshake
    if err := r.Get(ctx, req.NamespacedName, &fleetHandshake); err != nil {
        // Handle error, or return if not found
    }

    // Retrieve the target secret
    var secret corev1.Secret
    if err := r.Get(ctx, types.NamespacedName{Namespace: fleetHandshake.Namespace, Name: fleetHandshake.Spec.SecretName}, &secret); err != nil {
        // Handle error; if the secret is not found, a status of "Missing" is set
    }

    // Build the desired Bundle (see the construction below), then check
    // whether a Bundle already exists for this handshake
    var existingBundle fleetv1alpha1api.Bundle
    err := r.Get(ctx, types.NamespacedName{Name: bundle.Name, Namespace: bundle.Namespace}, &existingBundle)
    if err != nil && errors.IsNotFound(err) {
        // No Bundle yet: create it
        if err := r.Create(ctx, bundle); err != nil {
            // Handle bundle error
        }
    } else {
        // Update the existing Bundle only if its content has changed
        if !reflect.DeepEqual(existingBundle.Spec, bundle.Spec) {
            if err := r.Update(ctx, bundle); err != nil {
                // Handle bundle error
            }
        }
    }

    fleetHandshake.Status.Status = "Synced"
    if err := r.Status().Update(ctx, &fleetHandshake); err != nil {
        // Handle error
    }

    return ctrl.Result{}, nil
}

A Bundle, a resource housing the secret’s content, is distributed by Fleet to the respective targets. The owner reference plays a significant role here, tying the Bundle to the FleetHandshake resource so that their lifecycles align seamlessly. Importantly, deleting the handshake prompts Kubernetes garbage collection to remove the secrets downstream.

bundle := &fleetv1alpha1api.Bundle{
    ObjectMeta: metav1.ObjectMeta{
        Name:      fleetHandshake.Name,
        Namespace: fleetHandshake.Namespace,
        OwnerReferences: []metav1.OwnerReference{{
            APIVersion: fleetHandshake.APIVersion,
            Kind:       fleetHandshake.Kind,
            Name:       fleetHandshake.Name,
            UID:        fleetHandshake.UID,
        }},
    },
    Spec: fleetv1alpha1api.BundleSpec{
        Resources: []fleetv1alpha1api.BundleResource{
            {
                Name:    fmt.Sprintf("%s.json", secret.Name),
                Content: string(jsonSecret),
            },
        },
        Targets: fleetHandshake.Spec.Targets,
    },
}

Conclusion

The Fleet Handshake Operator provides a powerful solution for organizations seeking to streamline their secret management across multiple Kubernetes clusters. By leveraging SUSE’s Fleet and implementing a custom operator, we can achieve a scalable, secure, and automated approach to secret distribution. This implementation shows the pivotal role custom operators play in extending and enhancing existing Kubernetes ecosystem tools, providing tailored solutions for complex operational challenges. As Kubernetes environments grow more complex, such operators become increasingly significant for maintaining operational efficiency and security.

To explore this solution and install it within your Rancher instance, visit our GitHub repository at https://github.com/rptcloud/fleet-handshake, or check out our visual walkthrough of the Fleet Handshake Operator’s capabilities.

About River Point Technology: River Point Technology (RPT) is an award-winning cloud consulting, training, and enablement provider, partnering with the Fortune 500 to accelerate their digital transformation and infrastructure automation journeys and redefine the art of the possible. Our world-class team of IT, cloud, and DevOps experts helps organizations leverage the cloud for transformative growth through prescriptive methodologies, best-in-class services, and our trademarked Value Creation Technology process. From consulting and training to comprehensive year-long RPT Accelerator programs, River Point Technology empowers enterprises to achieve Day 2 success in the cloud and maximize their technology investments.