The sky’s the limit for River Point Technology! In a groundbreaking achievement, this past fall RPT became the first global HashiCorp partner to snag all three coveted competency badges: Infrastructure, Security, and Networking. This phenomenal feat, coupled with our recent title of HashiCorp Global Competency Partner of the Year, solidifies RPT’s position as a market leader and trusted advisor for Fortune 500 companies navigating the ever-evolving cloud landscape.
“As the first partner to pursue and achieve the Networking competency, we’re beyond ecstatic to now claim the distinction of holding all three badges,” shared Jeff Eiben, CEO of River Point Technology. “This is a testament to our relentless pursuit of excellence and our team’s deep-seated expertise across the entire cloud spectrum. We’re incredibly proud of their dedication to helping enterprises accelerate their infrastructure automation journeys and maximize their technology investments!”
“We’re excited that River Point Technology is the first partner to achieve certification in Infrastructure, Security and Networking. As HashiCorp’s Global Competency Partner of the Year in 2023, we’re confident they’ll remain dedicated to empowering businesses and customers on their cloud journeys,” said Leon Jones, VP Worldwide Partner Ecosystem at HashiCorp.
RPT’s award-winning team, composed of some of the world’s best IT, cloud, and DevOps experts, delivers a comprehensive suite of consulting offerings.
And it doesn’t stop there! RPT’s 5-star training programs, covering leading cloud platforms like the HashiCorp suite, equip individuals and teams with the skills to become cloud masters. Our private group training is not only top ranked but led by some of the industry’s leading practitioners, who can tailor programs to your organization’s specific needs. Plus, our subscription-based enablement offering, the RPT Accelerator, can help you achieve Day 2 cloud success, ensuring your organization’s ongoing optimization and value realization from its technology investments.
With unparalleled expertise and dedication to customer success, RPT is poised to continue leading the way in cloud consulting and enablement. By empowering organizations to leverage the cloud effectively, we help them achieve their full potential and accelerate their journey towards digital transformation. How can we help you?
In the dynamic realm of container orchestration and secrets management, the integration of Kubernetes with HashiCorp Vault stands as a pivotal undertaking, offering enhanced security and streamlined operational workflows. However, this collaboration is not without its complexities, presenting a set of formidable challenges that organizations must navigate. From intricacies in configuration to ensuring seamless communication between these powerful tools, the journey to successfully integrate Kubernetes with HashiCorp Vault demands a strategic approach.
In this exploration, we delve into the top 10 challenges faced in this integration process, shedding light on the key hurdles that organizations encounter and providing insights into overcoming these obstacles for a robust and secure deployment. Here are the top 10 challenges you might face:
Authentication and Authorization: Configuring proper authentication and authorization mechanisms to control access to Vault secrets for both Kubernetes and traditional applications can be challenging.
Secrets Management: Managing secrets across different platforms, ensuring their security, and automating their lifecycle is a fundamental challenge.
Secret Rotation: Implementing automated secret rotation policies and procedures for secrets stored in Vault can be complex, especially for legacy applications that may not support dynamic secret retrieval.
Networking and Security: Establishing secure communication between Kubernetes pods, traditional applications, and Vault while maintaining network segmentation and firewall rules can be tricky.
Integration Complexity: Integrating Vault with a variety of application types, databases, and cloud services, especially when dealing with legacy systems, can lead to integration complexities.
Compliance and Auditing: Meeting compliance requirements and tracking access and usage of secrets for auditing purposes can be challenging, especially in regulated industries.
Secrets Versioning: Managing different versions of secrets, ensuring backward compatibility, and handling secrets rotation gracefully can be complex.
Backup and Disaster Recovery: Developing and testing robust backup and disaster recovery plans for Vault’s data and configurations is crucial to ensure business continuity.
Monitoring and Alerting: Setting up monitoring and alerting solutions to detect and respond to any issues or breaches in real-time is a significant challenge.
Documentation and Training: Ensuring that your team has the necessary skills and knowledge to manage and troubleshoot the integrated environment is an ongoing challenge, as technologies evolve.
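Several of these challenges surface immediately in the first item, authentication. As a brief illustration, here is a minimal sketch of enabling Vault’s Kubernetes auth method and binding a role to a service account; the role name, policy, namespace, and secret path (`webapp`, `webapp-read`, `secret/data/webapp`) are hypothetical placeholders, not values from any particular deployment:

```shell
# Minimal sketch: enable Kubernetes auth in Vault and bind a role.
# Role, policy, namespace, and secret path below are hypothetical.
vault auth enable kubernetes

# Point Vault at the cluster's API server.
vault write auth/kubernetes/config \
    kubernetes_host="https://kubernetes.default.svc:443"

# Policy granting read access to one secret path.
vault policy write webapp-read - <<EOF
path "secret/data/webapp" {
  capabilities = ["read"]
}
EOF

# Bind the policy to a specific service account in a specific namespace.
vault write auth/kubernetes/role/webapp \
    bound_service_account_names=webapp \
    bound_service_account_namespaces=default \
    policies=webapp-read \
    ttl=1h
```

Pods running under the `webapp` service account can then authenticate to Vault and read only that path, which is the access-control boundary the first two challenges are about.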
The challenges that come with widespread DevOps tooling are real. That’s why many organizations caught in the predicament of managing multiple DevOps platforms are choosing to streamline by consolidating onto a central platform. But what does consolidation involve, and how do you determine the optimal single DevOps platform for migration? Read this Case Study for more answers.
HashiCorp’s Terraform Cloud provides a centralized platform for managing infrastructure as code. It’s a leading provider in remote Terraform management with remote state management, automated VCS integrations, and cost visibility. One of its features, a private registry, can be used to develop internal Terraform providers where control, security, and customizations are paramount.
Let’s explore an example using the Terraform Provider Scaffolding Framework to build a custom Terraform provider and publish it to a private registry. The scaffolding provides a starter kit you can use out of the box and adapt to wrap your own APIs.
Code signing guarantees that the generated artifacts originate from your source, allowing users to verify that authenticity by comparing the produced signature against your publicly available signing key. You’ll need to generate a key pair with the GNU Privacy Guard (GPG) utility. You can do this with the command below; be sure to replace GPG_PASSWORD and the name and email with values that make sense for your organization.
gpg --default-new-key-algo rsa4096 --batch --passphrase "${GPG_PASSWORD}" --quick-gen-key 'Your Name <name@example.com>' default default
With your newly generated key securely stored, the next step is to export it and upload it to Terraform Cloud. This enables the platform to verify your signed artifacts when they are deployed, ensuring their authenticity. The GPG Key API requires the public key to validate the signature. To list your key IDs, run:
gpg --list-secret-keys --keyid-format LONG
The key ID is denoted in the output:
[keyboxd]
---------
sec rsa4096/<KEY ID> 2023-11-22 [SC] [expires: 2026-11-21]
You can then capture your public key as a single string. The awk call escapes the newlines so the key can be embedded in a JSON payload:
KEY=$(gpg --armor --export ${KEY_ID} | awk '{printf "%s\\n", $0}')
You’ll then need to build a payload with that value and POST it to https://app.terraform.io/api/registry/private/v2/gpg-keys. The ORG_NAME is your Terraform Cloud organization.
{
"data": {
"type": "gpg-keys",
"attributes": {
"namespace": "${ORG_NAME}",
"ascii-armor": "${KEY}"
}
}
}
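Putting those pieces together, here is a sketch of building and posting that payload. The default values for ORG_NAME and KEY (`my-org`, a placeholder string) are hypothetical stand-ins; in practice they come from the earlier export step, and the POST requires a valid Terraform Cloud API token:

```shell
# Build the JSON payload from the exported key. ORG_NAME and KEY come
# from the earlier steps; the defaults here are placeholders.
ORG_NAME="${ORG_NAME:-my-org}"
KEY="${KEY:-placeholder-ascii-armor}"

cat > /tmp/gpg-key-payload.json <<EOF
{
  "data": {
    "type": "gpg-keys",
    "attributes": {
      "namespace": "${ORG_NAME}",
      "ascii-armor": "${KEY}"
    }
  }
}
EOF

# Requires a valid API token; uncomment to send:
# curl -sS -H "Authorization: Bearer ${TERRAFORM_CLOUD_API_TOKEN}" \
#      -H "Content-Type: application/vnd.api+json" \
#      --request POST -d @/tmp/gpg-key-payload.json \
#      "https://app.terraform.io/api/registry/private/v2/gpg-keys"
```

The response includes a key-id field; hold on to it, as later steps reference it as KEY_ID.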
If you plan to use this key in a CI platform, you can also export the private key and store it in a secure vault:
gpg --export-secret-keys --armor ${KEY_ID} > /tmp/gpg.pgp
GoReleaser simplifies the process of building and releasing Go binaries, letting us bundle builds for multiple architectures and operating systems.
You will need to create a terraform-registry-manifest.json file, and the protocol version is essential: if you are using the Plugin Framework, use version 6.0; if you are using Plugin SDKv2, use version 5.0.
{
"version": 1,
"metadata": {
"protocol_versions": ["6.0"]
}
}
Ensure your goreleaser.yml configuration includes settings for multi-architecture support and signing. This file should live at the provider’s root, next to your main codebase.
before:
  hooks:
    - go mod tidy
builds:
  - env:
      - CGO_ENABLED=0
    mod_timestamp: '{{ .CommitTimestamp }}'
    flags:
      - -trimpath
    ldflags:
      - '-s -w -X main.version={{ .Version }} -X main.commit={{ .Commit }}'
    goos:
      - freebsd
      - windows
      - linux
      - darwin
    goarch:
      - amd64
      - '386'
      - arm
      - arm64
    ignore:
      - goos: darwin
        goarch: '386'
    binary: '{{ .ProjectName }}_v{{ .Version }}'
archives:
  - format: zip
    name_template: '{{ .ProjectName }}_{{ .Version }}_{{ .Os }}_{{ .Arch }}'
checksum:
  extra_files:
    - glob: 'terraform-registry-manifest.json'
      name_template: '{{ .ProjectName }}_{{ .Version }}_manifest.json'
  name_template: '{{ .ProjectName }}_{{ .Version }}_SHA256SUMS'
  algorithm: sha256
signs:
  - artifacts: checksum
    args:
      - "--batch"
      - "--local-user"
      - "{{ .Env.GPG_FINGERPRINT }}"
      - "--output"
      - "${signature}"
      - "--detach-sign"
      - "${artifact}"
    stdin: '{{ .Env.GPG_PASSWORD }}'
release:
  extra_files:
    - glob: 'terraform-registry-manifest.json'
      name_template: '{{ .ProjectName }}_{{ .Version }}_manifest.json'
changelog:
  skip: true
git tag 0.0.1
git checkout 0.0.1
Your git strategy may differ, but GoReleaser uses git tags to determine versions.
Execute GoReleaser to bundle the binaries locally without publishing. We skipped publishing as we will manually upload them to Terraform Cloud.
export GPG_TTY=$(tty)
export GPG_FINGERPRINT=${KEY_ID}
goreleaser release --skip=publish
Now we have our artifacts.
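Before uploading, it’s worth sanity-checking the signature locally. A small sketch follows; the project name and version in the artifact path (`terraform-provider-example`, `0.0.1`) are hypothetical placeholders for whatever GoReleaser wrote to dist/:

```shell
# Verify the detached signature and checksums goreleaser produced.
# The project name and version below are hypothetical placeholders.
SUMS="dist/terraform-provider-example_0.0.1_SHA256SUMS"

if [ -f "${SUMS}" ]; then
  # Does the signature match the key we generated earlier?
  gpg --verify "${SUMS}.sig" "${SUMS}"
  # Do the archives match their recorded checksums?
  (cd dist && shasum -a 256 -c "$(basename "${SUMS}")" --ignore-missing)
fi
```

If either check fails here, it will also fail for consumers pulling from the registry, so this catches signing mistakes before anything is published.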
Once you have the signed binaries, you can publish them to the Terraform Cloud private registry. HashiCorp provides a guide, which we will follow.
Create a provider config file and POST that body utilizing your Terraform Cloud API token. A provider name is usually a singular descriptor representing a business unit, such as Google or AWS.
curl --header "Authorization: Bearer ${TERRAFORM_CLOUD_API_TOKEN}" \
  --header "Content-Type: application/vnd.api+json" \
  --request POST \
  -d @- \
  "https://app.terraform.io/api/v2/organizations/${ORG_NAME}/registry-providers" <<EOT
{
"data": {
"type": "registry-providers",
"attributes": {
"name": "${PROVIDER_NAME}",
"namespace": "${ORG_NAME}",
"registry-name": "private"
}
}
}
EOT
Create a version shell within the private registry provider:
curl -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/vnd.api+json" \
  --request POST \
  -d @- \
  "https://app.terraform.io/api/v2/organizations/${ORG_NAME}/registry-providers/private/${ORG_NAME}/${PROVIDER_NAME}/versions" <<EOT
{
"data": {
"type": "registry-provider-versions",
"attributes": {
"version": "${VERSION}",
"key-id": "${KEY_ID}",
"protocols": ["6.0"]
}
}
}
EOT
The response will contain upload links that you will use to upload the SHA256SUMS and SHA256SUMS.sig files.
"links": {
"shasums-upload": "https://archivist.terraform.io/v1/object/dmF1b64hd73ghd63",
"shasums-sig-upload": "https://archivist.terraform.io/v1/object/dmF1b37dj37dh33d"
}
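If you’re scripting these steps, you can pull the upload URLs out of the API response with jq (assuming jq is installed). The RESPONSE value below is a stand-in with placeholder URLs shaped like the fragment above; inspect your actual response, as the links may sit under a data wrapper depending on how you capture it:

```shell
# Extract the two upload URLs from the version-creation response.
# RESPONSE is a placeholder stand-in for the JSON returned by the POST.
RESPONSE='{"links":{"shasums-upload":"https://archivist.terraform.io/v1/object/abc","shasums-sig-upload":"https://archivist.terraform.io/v1/object/def"}}'

SHASUM_UPLOAD=$(printf '%s' "${RESPONSE}" | jq -r '.links."shasums-upload"')
SHASUM_SIG_UPLOAD=$(printf '%s' "${RESPONSE}" | jq -r '.links."shasums-sig-upload"')

echo "${SHASUM_UPLOAD}"
echo "${SHASUM_SIG_UPLOAD}"
```

These two variables feed directly into the upload commands in the next step.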
Upload Signatures.
# Replace ${VERSION} and ${PROVIDER_NAME} with actual values
curl -sS -T "dist/terraform-provider-${PROVIDER_NAME}_${VERSION}_SHA256SUMS" "${SHASUM_UPLOAD}"
curl -sS -T "dist/terraform-provider-${PROVIDER_NAME}_${VERSION}_SHA256SUMS.sig" "${SHASUM_SIG_UPLOAD}"
Register Platform for every Architecture and Operating System.
# OS: e.g. darwin/linux/windows
# ARCH: e.g. arm/amd64
# FILENAME: terraform-provider-<PROVIDER_NAME>_<VERSION>_<OS>_<ARCH>.zip, defined through name_template
FILENAME="terraform-provider-${PROVIDER_NAME}_${VERSION}_${OS}_${ARCH}.zip"
SHA=$(shasum -a 256 "dist/${FILENAME}" | awk '{print $1}')
curl -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/vnd.api+json" \
  --request POST \
  -d @- \
  "https://app.terraform.io/api/v2/organizations/${ORG_NAME}/registry-providers/private/${ORG_NAME}/${PROVIDER_NAME}/versions/${VERSION}/platforms" <<EOT
{
"data": {
"type": "registry-provider-version-platforms",
"attributes": {
"shasum": "${SHA}",
"os": "${OS}",
"arch": "${ARCH}",
"filename": "${FILENAME}"
}
}
}
EOT
The response will contain a link for uploading the provider binary:
"links": {
"provider-binary-upload": "https://archivist.terraform.io/v1/object/dmF1b45c367djh45nj78"
}
Upload archived binaries
curl -sS -T "dist/${FILENAME}" "${PROVIDER_BINARY_URL}"
Repeat the platform registration and binary upload steps for every architecture and operating system you built.
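Scripted, that repetition looks roughly like the sketch below, which iterates over the build matrix from the goreleaser.yml above. The API calls themselves are elided as comments since they are shown in full in the preceding steps, and the default PROVIDER_NAME and VERSION (`example`, `0.0.1`) are hypothetical placeholders:

```shell
# Sketch: loop over the build matrix and register/upload each platform.
# PROVIDER_NAME and VERSION defaults are placeholders.
PROVIDER_NAME="${PROVIDER_NAME:-example}"
VERSION="${VERSION:-0.0.1}"

for OS in freebsd windows linux darwin; do
  for ARCH in amd64 386 arm arm64; do
    FILENAME="terraform-provider-${PROVIDER_NAME}_${VERSION}_${OS}_${ARCH}.zip"
    # Skip combinations that weren't built (e.g. darwin/386 is ignored).
    [ -f "dist/${FILENAME}" ] || continue
    SHA=$(shasum -a 256 "dist/${FILENAME}" | awk '{print $1}')
    # 1) POST the platform (same body as the "Register Platform" step)
    # 2) PUT the binary to the returned provider-binary-upload URL
    echo "would register ${OS}/${ARCH}: ${FILENAME} (${SHA})"
  done
done
```

Driving the loop from the same matrix as goreleaser.yml keeps the registry in lockstep with what was actually built.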
Private providers hosted within Terraform Cloud are only available to users within the organization.
When developing locally, ensure you set up credentials through terraform login, which creates a credentials.tfrc.json file.
With authentication set up, you can use the new provider by defining the provider block, substituting in the variables from the earlier steps.
terraform {
required_providers {
${PROVIDER_NAME} = {
source = "app.terraform.io/${ORG_NAME}/${PROVIDER_NAME}"
version = "${VERSION}"
}
}
}
provider "${PROVIDER_NAME}" {
# Configuration options
}
For user consumption, a common practice is to provide documentation for your resources using terraform-plugin-docs. This generator produces markdown from examples and schema definitions, which users can then consume. At the time of publication, this feature is not supported within Terraform Cloud; please talk to your River Point Technology representative for alternative solutions.
To remove the provider from the registry:
curl -H "Authorization: Bearer ${TOKEN}" \
  --request DELETE \
  "https://app.terraform.io/api/v2/organizations/${ORG_NAME}/registry-providers/private/${ORG_NAME}/${PROVIDER_NAME}/versions/${VERSION}"
curl -H "Authorization: Bearer ${TOKEN}" \
  --request DELETE \
  "https://app.terraform.io/api/v2/organizations/${ORG_NAME}/registry-providers/private/${ORG_NAME}/${PROVIDER_NAME}"
curl -H "Authorization: Bearer ${TOKEN}" \
  --request DELETE \
  "https://app.terraform.io/api/registry/private/v2/gpg-keys/${ORG_NAME}/${KEY_ID}"
With a private registry, you get all the benefits of Terraform while restricting consumption to internal users. This may be desirable when public providers don’t meet your use case, and it comes with a host of benefits: increased customization and oversight, enhanced security and compliance, and improved versioning and stability.
Publishing custom Terraform providers to the Terraform Cloud private registry involves bundling, signing, and uploading binaries and metadata through the API. Following these steps, you can effectively manage and distribute your Terraform provider to support various architectures and operating systems.
River Point Technology (RPT) is here to guide you through the intricacies of the dynamic cloud landscape. If you’re facing challenges and need help achieving increased customization and oversight, enhanced security and compliance, or improved versioning and stability, feel free to leave a comment or reach out to us directly.
As the HashiCorp Global Competency Partner of the Year and the only company certified in all three competencies (Security, Networking, and Infrastructure), we stand out as a market leader, trusted by Fortune 500 companies to guide them through the ever-changing cloud terrain. Contact RPT to get started.