Many organizations using the Ansible Automation Platform (AAP) still provision virtual machines manually and then run playbooks afterward. That process often involves waiting for infrastructure tickets, scheduling jobs, and completing lengthy configuration steps. In some cases, it can take hours or even days for a system to be ready for use.
Packer changes that workflow by minting golden images: it talks directly to the cloud or hypervisor, runs a provisioner during the build, and produces a machine image that deploys in minutes, not days. Packer does include native Ansible support, but enterprise architects often point out problems right away: “You are bypassing all the governance we have built around AAP. No centralized logging, no RBAC, no audit trails, and no consistent execution environments.” For teams that rely on AAP for governance and standardized workflows, local Ansible runs create silos or force duplication. To address this, a custom Packer provisioner was built to integrate directly with the AAP API. Instead of running Ansible locally, Packer calls job templates in AAP during image builds, so organizations keep their existing playbooks and execution environments while gaining the speed and repeatability of an image factory.
With this approach, image builds run under the same governance as production workloads and benefit from centralized logging, RBAC, and consistent execution environments. Concerns about bypassing controls disappear since all automation stays inside AAP. The result is faster provisioning, reusable images, and an automation workflow that feels both modern and enterprise-ready.
First, the plugin needs to be added to Packer's required_plugins block:
packer {
  required_plugins {
    ansible-aap = {
      source  = "github.com/rptcloud/ansible-aap"
      version = "1.0.0"
    }
  }
}
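After adding this block, running packer init from the template directory downloads and installs the plugin:

# Install any plugins declared in required_plugins
packer init .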
Then, within the build block, the provisioner is pointed at your AAP instance and given a job template ID; all API orchestration happens inside the plugin. In a Packer template, it looks something like this:
build {
  sources = ["source.amazon-ebs.example"]

  provisioner "ansible-aap" {
    tower_host      = "https://aap.example.com"
    access_token    = vault("secret/data/aap", "access_token")
    job_template_id = 11 # Job template to install Docker on the host
    organization_id = 1

    dynamic_inventory = true

    extra_vars = {
      Name        = "packer-ansible-demo"
      Environment = "production"
      BuiltBy     = "packer"
    }

    timeout       = "15m"
    poll_interval = "10s"
  }
}
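With the source and provisioner defined, kicking off a build is a single command:

# Build the image; Packer calls out to AAP during provisioning
packer build .

The output shows the provisioner orchestrating AAP end to end: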
amazon-ebs.example: output will be in this color.
==> amazon-ebs.example: Prevalidating any provided VPC information
==> amazon-ebs.example: Prevalidating AMI Name: packer-ansible-demo-20250820181254
...
==> amazon-ebs.example: Waiting for SSH to become available...
==> amazon-ebs.example: Connected to SSH!
==> amazon-ebs.example: Setting a 15m0s timeout for the next provisioner...
amazon-ebs.example: 🌐 Attempting to connect to AAP server: https://aap.example.com
amazon-ebs.example: 🔧 Initializing AAP client...
amazon-ebs.example: ✅ AAP client initialized successfully
amazon-ebs.example: 🎯 Creating inventory for target host: 54.146.55.206
amazon-ebs.example: 🗄️ Using organization ID: 1
amazon-ebs.example: ✅ Created inventory with ID: 75
amazon-ebs.example: ✅ Created SSH credential ID: 63
amazon-ebs.example: 🖥️ Adding host 54.146.55.206 to inventory
amazon-ebs.example: ✅ Added host ID: 66
amazon-ebs.example: 🚀 Launching job template ID 10 for target_host=54.146.55.206
amazon-ebs.example: ✅ Job launched https://aap.example.com/execution/jobs/playbook/142/output/. Waiting for completion...
amazon-ebs.example: ⏳ Polling job status...
amazon-ebs.example: 🎉 Job completed successfully!
amazon-ebs.example: Identity added: /runner/artifacts/142/ssh_key_data (packer-aap-key)
amazon-ebs.example:
amazon-ebs.example: PLAY [Install Docker] **********************************************************
amazon-ebs.example:
amazon-ebs.example: TASK [Gathering Facts] *********************************************************
amazon-ebs.example: [WARNING]: Platform linux on host 54.146.55.206 is using the discovered Python
amazon-ebs.example: interpreter at /usr/bin/python3.7, but future installation of another Python
amazon-ebs.example: interpreter could change the meaning of that path. See
amazon-ebs.example: https://docs.ansible.com/ansible-
amazon-ebs.example: core/2.16/reference_appendices/interpreter_discovery.html for more information.
amazon-ebs.example: ok: [54.146.55.206]
amazon-ebs.example:
amazon-ebs.example: TASK [Update package cache] ****************************************************
amazon-ebs.example: ok: [54.146.55.206]
amazon-ebs.example:
amazon-ebs.example: TASK [Install Docker] **********************************************************
amazon-ebs.example: changed: [54.146.55.206]
amazon-ebs.example:
amazon-ebs.example: TASK [Start and enable Docker service] *****************************************
amazon-ebs.example: changed: [54.146.55.206]
amazon-ebs.example:
amazon-ebs.example: TASK [Add ec2-user to docker group] ********************************************
amazon-ebs.example: changed: [54.146.55.206]
amazon-ebs.example:
amazon-ebs.example: PLAY RECAP *********************************************************************
amazon-ebs.example: 54.146.55.206 : ok=5 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
amazon-ebs.example: 🧹 Cleaning up credential 63...
amazon-ebs.example: 🧹 Cleaning up host 66...
amazon-ebs.example: 🧹 Cleaning up inventory 75...
==> amazon-ebs.example: Stopping the source instance...
...
Build 'amazon-ebs.example' finished after 4 minutes 45 seconds.
==> Wait completed after 4 minutes 45 seconds
==> Builds finished. The artifacts of successful builds are:
--> amazon-ebs.example: AMIs were created:
us-east-1: ami-0d056993e3e2be56f
AAP has a well-documented REST API. The provisioner handles the entire lifecycle through API calls. Here’s the workflow:
sequenceDiagram
    participant P as Packer
    participant Prov as AAP Provisioner
    participant AAP as Ansible Automation Platform
    participant VM as Target VM
    P->>Prov: Start provisioning
    Prov->>AAP: Create temporary inventory
    AAP-->>Prov: Inventory ID: 123
    Prov->>AAP: Register target host
    AAP-->>Prov: Host ID: 456
    Prov->>AAP: Create SSH/WinRM credential
    AAP-->>Prov: Credential ID: 789
    Prov->>AAP: Launch job template
    AAP-->>Prov: Job ID: 1001
    loop Poll Status
        Prov->>AAP: Check job status
        AAP-->>Prov: Status: running/successful/failed
    end
    AAP->>VM: Execute playbooks
    VM-->>AAP: Configuration complete
    Prov->>AAP: Delete credential
    Prov->>AAP: Delete host
    Prov->>AAP: Delete inventory
    Prov-->>P: Provisioning complete
First, we create a temporary inventory in AAP. Every build gets its own timestamped inventory, so concurrent builds never step on each other and each run is cleanly isolated.
curl -X POST https://aap.example.com/api/controller/v2/inventories/ \
  -H "Authorization: Bearer $AAP_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "packer-inv-1642684800",
    "description": "Temporary inventory for packer provisioning",
    "organization": 1
  }'
Response:
{
  "id": 123,
  "name": "packer-inv-1642684800",
  "organization": 1,
  "created": "2025-01-20T10:00:00Z"
}
Next, we register the target host that Packer is building. The provisioner retrieves all connection details directly from Packer's communicator: SSH keys, passwords, WinRM credentials, whatever Packer is using to talk to the instance.
curl -X POST https://aap.example.com/api/controller/v2/hosts/ \
  -H "Authorization: Bearer $AAP_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "10.0.1.100",
    "inventory": 123,
    "variables": "{\"ansible_host\": \"10.0.1.100\", \"ansible_port\": 22, \"ansible_user\": \"ec2-user\"}"
  }'
Response:
{
  "id": 456,
  "name": "10.0.1.100",
  "inventory": 123,
  "variables": "{\"ansible_host\": \"10.0.1.100\", \"ansible_port\": 22, \"ansible_user\": \"ec2-user\"}"
}
Each build can also register a temporary credential in AAP, either an SSH key or a username/password pair, matching whatever authentication Packer's communicator uses.
curl -X POST https://aap.example.com/api/controller/v2/credentials/ \
  -H "Authorization: Bearer $AAP_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "packer-ssh-cred-1642684800",
    "description": "SSH credential for Packer builds",
    "credential_type": 1,
    "organization": 1,
    "inputs": {
      "username": "ec2-user",
      "ssh_key_data": "-----BEGIN PRIVATE KEY-----\nMIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQC..."
    }
  }'
Response:
{
  "id": 789,
  "name": "packer-ssh-cred-1642684800",
  "credential_type": 1,
  "organization": 1
}
Finally, we launch the actual job template. For this integration to work properly, your job template in AAP needs to be configured to accept runtime parameters:
{
  "name": "Packer Image Build Template",
  "ask_inventory_on_launch": true,
  "ask_credential_on_launch": true,
  "ask_variables_on_launch": true
}
The ask_inventory_on_launch and ask_credential_on_launch settings are crucial: they allow the provisioner to inject the temporary inventory and credentials at launch time instead of using pre-configured values. Without these settings, the job template would try to use its default inventory and credentials, which won't have access to your Packer-managed instance.
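If an existing template doesn't have these flags set, they can be flipped through the same REST API. A minimal sketch, assuming the job template with ID 42 used below:

curl -X PATCH https://aap.example.com/api/controller/v2/job_templates/42/ \
  -H "Authorization: Bearer $AAP_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "ask_inventory_on_launch": true,
    "ask_credential_on_launch": true,
    "ask_variables_on_launch": true
  }'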
Here’s the launch request:
curl -X POST https://aap.example.com/api/controller/v2/job_templates/42/launch/ \
  -H "Authorization: Bearer $AAP_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "inventory": 123,
    "credentials": [789],
    "extra_vars": {
      "environment": "production",
      "packer_build_name": "amazon-linux-base",
      "packer_build_id": "build-1642684800"
    }
  }'
Response:
{
  "job": 1001,
  "ignored_fields": {},
  "id": 1001,
  "type": "job",
  "url": "/api/controller/v2/jobs/1001/",
  "status": "pending"
}
Then we poll the job status until completion:
curl -X GET https://aap.example.com/api/controller/v2/jobs/1001/ \
  -H "Authorization: Bearer $AAP_TOKEN"
Response when complete:
{
  "id": 1001,
  "status": "successful",
  "finished": "2025-01-20T10:15:30Z",
  "elapsed": 330.5
}
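The provisioner wraps this check in a loop governed by the poll_interval and timeout settings from the template. A rough shell equivalent of the 10-second poll, assuming jq is available:

# Poll job 1001 every 10 seconds until it reaches a terminal state
while true; do
  STATUS=$(curl -s https://aap.example.com/api/controller/v2/jobs/1001/ \
    -H "Authorization: Bearer $AAP_TOKEN" | jq -r '.status')
  case "$STATUS" in
    successful) echo "Job finished"; break ;;
    failed|error|canceled) echo "Job ended with status: $STATUS"; exit 1 ;;
    *) sleep 10 ;;
  esac
done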
One of the most critical steps is making sure temporary resources get cleaned up; nothing is worse than finding 500 orphaned inventories in AAP because builds crashed. The provisioner tracks everything it creates and deletes it in dependency-safe order:
# Delete credential first
curl -X DELETE https://aap.example.com/api/controller/v2/credentials/789/ \
  -H "Authorization: Bearer $AAP_TOKEN"

# Then delete the host (depends on credential being removed)
curl -X DELETE https://aap.example.com/api/controller/v2/hosts/456/ \
  -H "Authorization: Bearer $AAP_TOKEN"

# Finally delete the inventory (depends on hosts being removed)
curl -X DELETE https://aap.example.com/api/controller/v2/inventories/123/ \
  -H "Authorization: Bearer $AAP_TOKEN"
Ready to integrate your Packer workflows with AAP? The provisioner is open source and available on GitHub. Check out the repository for installation instructions, configuration examples, and contribution guidelines:
rptcloud/packer-plugin-ansible-aap
If you’re using this provisioner in your environment or have ideas for improvements, contributions and feedback are welcome!
River Point Technology (RPT), an award-winning cloud consulting, training, and enablement provider, is thrilled to announce that we have been named HashiCorp’s Americas SI Partner of the Year for 2024. This prestigious recognition highlights our ongoing dedication to helping enterprises maximize their technology investments through HashiCorp’s suite of multi-cloud infrastructure automation tools. It also underscores RPT’s relentless commitment to empowering the Fortune 500 to redefine the art of the possible in cloud automation and management.
Sean Toomey, Senior Director, Partners, at HashiCorp, had this to say about the recognition, “River Point Technology has demonstrated a significant commitment to HashiCorp through substantial investments in sales, services, and training.”
He then added, “Notably, River Point Technology is distinguished as one of the few partners holding all three core competencies—Infrastructure, Security, and Networking. They also play a critical role in testing and piloting new products and features, reflecting their proactive involvement in innovation. Their expertise is especially vital in managing some of our most significant Strategic Accounts.”
River Point Technology founder and CEO, Jeff Eiben, highlighted the benefits of the relationship to organizations. “Our partnership with HashiCorp has allowed us to simplify infrastructure automation for our clients, enabling faster, more efficient deployment and scaling.” He went on to say, “Being recognized as the Americas SI Partner of the Year is a testament to our team’s expertise and our clients’ trust in us to deliver results.”
Expertise and Accomplishments
Indeed, the team is quite stacked when it comes to HashiCorp expertise. Beyond the competencies distinction, RPT boasts more HashiCorp Ambassadors than any other global partner, including Core Contributors to HashiCorp software. Recently, we earned the distinction of having the industry's first experts to pass the challenging Terraform Professional Certification exam. Combined, these credentials ensure that our clients are in the hands of industry-leading HashiCorp experts, fully equipped to guide them through their digital transformation journey.
Innovative Solutions for Enterprise Clients
To support RPT's commitment to help clients succeed with HashiCorp products, we recently introduced RPT Bundles for Infrastructure and Security Lifecycle Management, which provide enterprises with tightly scoped, outcome-oriented services structured around HashiCorp Validated Designs (HVDs). These bundles streamline the adoption, scaling, and operation of HashiCorp solutions.
River Point Technology's exclusive Accelerator program offers ongoing enablement that helps clients accelerate their automation journey. Our expert advisors empower and lead your team through every phase of adoption, from discovery to build through ongoing operations. For resource enablement, we deliver custom, private training for HashiCorp products, ensuring that our clients have the knowledge and tools necessary to thrive.
Commitment to Excellence
RPT’s unique combination of best-in-class services, leveraging our HashiCorp Ambassadors and certified experts, positions us to continue delivering exceptional results for organizations across the globe. From our proprietary Value Creation Technology (VCT) process to custom integrations and advanced solutions, we empower enterprises to ‘think big, start small, and scale fast.’
As we congratulate our entire team on the efforts that earned us the prestigious HashiCorp Americas SI Partner of the Year award, we remain focused on delivering cutting-edge, human-centered solutions that empower our clients to achieve sustainable growth and success across their multi-cloud environments. We look forward to further strengthening our relationship with HashiCorp, and we proudly collaborate with other notable companies such as AWS, IBM, SUSE, Microsoft, Google Cloud, and more. This broad, technology-agnostic approach allows us to support our clients' diverse digital transformation and cloud automation needs.
For more information, please contact our team today.
Meet the Experts: Terraform Module Design
October 15, 2024 | 4:15 PM – 5:00 PM ET – Ensemble Ballroom
October 16, 2024 | 3:30 PM – 3:45 PM ET – Hallway track Q&A
Speakers: Ned Bellavance, Drew Mullen & Bruno Schaatsbergen
Companies: Ned in the Cloud LLC, River Point Technology, HashiCorp
As organizations continue to adopt cloud infrastructure and automate their processes, the need for efficient, scalable, and maintainable infrastructure-as-code (IaC) solutions has never been more pressing. Terraform, HashiCorp’s flagship IaC tool, is at the forefront of this movement, allowing engineers to define, provision, and manage infrastructure with ease. However, building and managing Terraform in a way that is reusable, scalable, and maintainable can be a challenge—especially as organizations grow and their infrastructure becomes more complex.
That’s where Terraform modules come into play. These self-contained packages of Terraform configuration files enable developers to build infrastructure in a modular, reusable way. Yet, not all modules are created equal. Poor design can lead to inefficiencies, increased complexity, and a host of operational headaches.
For anyone interested in optimizing their Terraform use or looking to gain an edge in managing large, complex infrastructures, the session “Meet the Experts: Terraform Module Design” at HashiConf 2024 is an absolute must-attend.
In this dynamic panel discussion, Ned Bellavance (Ned in the Cloud LLC), Drew Mullen (River Point Technology), and Bruno Schaatsbergen (HashiCorp) will share their hard-earned expertise on the topic. Together, these experts will dive deep into best practices, design patterns, and lessons learned from working on some of the most popular Terraform modules available in the public registry.
Here’s why this session stands out and what you can expect to learn from attending.
Why Terraform Module Design Matters
Terraform’s popularity stems from its ability to codify infrastructure in a simple, declarative way. But while it’s easy to get started with Terraform, creating Terraform modules that are flexible, reusable, and maintainable can be tricky.
As your infrastructure grows and becomes more complex, manually replicating configuration across environments or teams becomes untenable. That’s where the true power of modules comes in. They allow you to abstract away complexities and create reusable building blocks that standardize infrastructure provisioning.
However, creating well-designed modules requires more than just wrapping code into a reusable block. A well-designed Terraform module must:
– Be reusable across teams and environments.
– Abstract complexity without sacrificing flexibility.
– Remain easy to maintain as infrastructure grows in scale.
This is precisely what the panelists will explore: how to craft Terraform modules that scale effectively and remain easy to maintain over time, without sacrificing flexibility.
What You’ll Learn from the Experts
1. Best Practices for Terraform Module Design
The foundation of any great Terraform module is its design. Get practical advice and expert insights that will help you avoid common pitfalls and create modules that are not only easier to use but also easier to scale and maintain over time.
2. Design Patterns for Terraform Modules
Building on the basics, the panel will delve into advanced design patterns that make Terraform modules more powerful and adaptable.
3. Lessons from the Field
One of the most valuable aspects of this session is the chance to learn from the personal experiences of the panelists. Each has played a key role in developing some of the most popular Terraform modules in the public registry, and they will share their real-world experiences and the lessons learned along the way.
Meet the Experts: Your Panelists
Ned Bellavance (Founder, Ned in the Cloud LLC)
Ned is a well-known thought leader in the cloud space, with deep expertise in infrastructure-as-code. As the founder of Ned in the Cloud LLC and the host of the popular “Day Two Cloud” podcast, Ned brings a wealth of knowledge on how to scale and automate cloud operations. He’s also an experienced educator and public speaker, known for breaking down complex cloud topics into digestible, actionable advice.
Drew Mullen (Principal Solutions Architect, River Point Technology)
Drew is a recognized expert in cloud-native infrastructure, with a focus on helping enterprises adopt and scale cloud technologies. At River Point Technology, Drew works with Fortune 500 companies to design and implement cloud architectures that are reliable, secure, and scalable. His experience with Terraform spans years, making him a key contributor to open-source modules and a mentor for organizations seeking to optimize their IaC practices.
Bruno Schaatsbergen (Senior Engineer, HashiCorp)
As a Senior Engineer at HashiCorp, Bruno is deeply involved in the development and maintenance of Terraform itself. His contributions to the Terraform ecosystem have helped shape the way developers think about infrastructure automation. Bruno’s technical depth, combined with his understanding of the challenges faced by enterprises adopting Terraform at scale, make him an invaluable voice on this panel.
Why You Should Attend
The “Meet the Experts: Terraform Module Design” session is not just for experienced Terraform users—it’s for anyone who wants to improve how they build and manage infrastructure with Terraform. Whether you’re a developer looking to learn best practices or an architect tasked with managing large, multi-cloud environments, this panel will offer valuable insights.
Attendees can expect to leave with practical knowledge on:
– Building modules that scale.
– Avoiding common design pitfalls.
– Contributing to both public and internal Terraform modules.
We hope to see you at HashiConf 2024. If you need help with Terraform or any of the HashiCorp suite of products, contact the experts at River Point Technology today.