i automated my home lab (and CLOUD) with Ansible
Based on NetworkChuck's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Ansible playbooks can automate repeated VM provisioning across AWS and on-prem Proxmox, replacing manual cloud/on-prem setup steps.
Briefing
A home-lab builder turned Ansible into a full “VM delivery pipeline,” automating not just cloud and on-prem provisioning but also Slack notifications and even an Alexa-driven request flow. The core payoff: a new virtual machine can be requested with voice, created across Proxmox, AWS, and a home lab, and then reported back with the VM’s IP address—so the operator doesn’t have to wait, hunt for addresses, or run commands manually.
The automation starts with a simple premise: deploying lab VMs repeatedly is slow and repetitive, so Ansible playbooks replace the manual clicks. For cloud deployments the workflow is straightforward: Ansible's modules and documentation make it easy to spin up fresh instances in AWS and to do similar work on Linode. The same approach works on-prem with Proxmox, but Proxmox introduces a key friction point: VM templates aren't as simple to handle, and the playbook must be adapted to Proxmox's expectations.
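The playbooks themselves aren't shown here, but a thin wrapper around the `ansible-playbook` CLI illustrates how the same provisioning play can be re-run with different variables; the playbook filename and variable names below are hypothetical, not taken from the video:

```python
import shlex

def playbook_command(playbook, extra_vars):
    """Build the ansible-playbook invocation a wrapper script would run.

    extra_vars is a dict rendered as key=value pairs for --extra-vars.
    """
    vars_arg = " ".join(f"{k}={shlex.quote(str(v))}" for k, v in extra_vars.items())
    return ["ansible-playbook", playbook, "--extra-vars", vars_arg]

# Hypothetical playbook and variables, for illustration only:
cmd = playbook_command("provision_proxmox.yml", {"vm_name": "lab-vm", "cores": 2})
print(" ".join(cmd))
# → ansible-playbook provision_proxmox.yml --extra-vars vm_name=lab-vm cores=2
```

The same wrapper would work for the AWS and Linode plays; only the playbook name and variables change.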
A second, more subtle issue appears when the playbook is run more than once. Ansible’s idempotent behavior ensures the desired state exists, meaning it won’t create a duplicate VM if one with the same name already exists. The fix is practical: generate a random VM name so each run produces a new machine. Once that’s in place, the playbook can reliably create “fresh baked” VMs on Proxmox and in the cloud.
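The video leaves the exact naming mechanism to the playbook; a minimal Python sketch of the random-name idea (prefix and suffix length are assumptions) looks like this:

```python
import random
import string

def random_vm_name(prefix="lab", length=6):
    """Append a random alphanumeric suffix so every run yields a unique
    name, sidestepping Ansible's idempotent "already exists" check."""
    alphabet = string.ascii_lowercase + string.digits
    suffix = "".join(random.choices(alphabet, k=length))
    return f"{prefix}-{suffix}"

name = random_vm_name()
print(name)  # e.g. lab-x3k9qa
```

With a unique name on every run, the module sees no existing VM to match and creates a fresh one each time.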
The next leap is operational convenience—getting results without manual follow-up. Slack integration is added so the system can message the operator when provisioning finishes. Cloud environments can provide the IP address directly, but Proxmox doesn’t assign networking details in a way Ansible can immediately read; instead, the VM lands on the network and the router assigns the address. Since direct automation of the router (a Ubiquiti Dream Machine Pro) isn’t available through modules, the solution becomes network detective work.
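Slack's incoming webhooks accept a simple JSON POST. A rough sketch of the notification step, with a placeholder webhook URL (the real URL would come from the Slack app configuration):

```python
import json
from urllib import request

def slack_payload(vm_name, ip_addr):
    """Build the JSON body for a Slack incoming-webhook message."""
    return {"text": f"VM `{vm_name}` is ready: connect at {ip_addr}"}

def notify_slack(webhook_url, vm_name, ip_addr):
    """POST the message to a Slack incoming webhook."""
    body = json.dumps(slack_payload(vm_name, ip_addr)).encode()
    req = request.Request(
        webhook_url, data=body,
        headers={"Content-Type": "application/json"},
    )
    return request.urlopen(req)  # Slack responds with "ok" on success

# Placeholder URL; a real one is issued per Slack workspace:
# notify_slack("https://hooks.slack.com/services/T000/B000/XXXX",
#              "lab-x3k9qa", "192.168.1.50")
```

In the actual playbook this would more likely be Ansible's Slack notification module, but the webhook POST underneath is the same.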
To recover the IP address, the automation uses Proxmox to obtain the VM’s MAC address, then relies on ARP to map MAC to IP. That fails at first because ARP tables only contain entries the router has already learned. The workaround is to pre-populate the ARP cache by scanning the network with Nmap via the Proxmox host, after waiting briefly for the VM to come up. With the ARP cache filled, the playbook can find the IP and include it in the Slack message.
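A sketch of that MAC-to-IP lookup: after an `nmap -sn` ping sweep has populated the ARP cache, `arp -n` output can be parsed for the VM's MAC address. The sample output below is illustrative, not captured from the video:

```python
import re

def ip_for_mac(arp_output, mac):
    """Find the line of `arp -n` output containing the given MAC address
    and return the first IPv4 address on that line, or None if absent."""
    mac = mac.lower()
    for line in arp_output.splitlines():
        if mac in line.lower():
            match = re.search(r"\b(\d{1,3}(?:\.\d{1,3}){3})\b", line)
            if match:
                return match.group(1)
    return None

# Illustrative arp -n output after the Nmap sweep has filled the cache:
sample = """Address        HWtype  HWaddress           Iface
192.168.1.50   ether   aa:bb:cc:dd:ee:ff   vmbr0
192.168.1.1    ether   11:22:33:44:55:66   vmbr0"""

print(ip_for_mac(sample, "AA:BB:CC:DD:EE:FF"))  # → 192.168.1.50
```

If the MAC isn't in the table yet, the function returns None, which is exactly the failure mode the Nmap scan (and the brief wait for the VM to boot) exists to prevent.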
Finally, the system goes beyond “run a playbook” into “ask for a VM.” Alexa can’t natively call Ansible because Ansible lacks a built-in API/webhook interface, so the setup uses the Red Hat Ansible Automation Platform to provide the needed API. Alexa triggers the workflow through Zapier, which runs Python to call the automation platform, launching the same Proxmox/AWS/Linode provisioning and then notifying Slack. The result is a voice-to-VM pipeline designed for maximum laziness: request a machine, let automation handle the rest, and receive the connection details automatically.
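The Ansible Automation Platform (like its upstream, AWX) exposes a REST endpoint for launching job templates. A hedged sketch of the kind of Python call the Zapier step might make; the hostname, template ID, and token are placeholders, and the exact request in the video may differ:

```python
import json
from urllib import request

def launch_request(host, template_id, token):
    """Build the POST that launches an AAP/AWX job template via its
    REST API endpoint: /api/v2/job_templates/<id>/launch/"""
    url = f"https://{host}/api/v2/job_templates/{template_id}/launch/"
    return request.Request(
        url,
        data=json.dumps({}).encode(),  # empty body: no extra survey vars
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# In the Zapier code step, something like (placeholders throughout):
# resp = request.urlopen(launch_request("aap.example.com", 42, MY_TOKEN))
```

The job template on the platform side bundles the playbook, inventory, and credentials, so the caller only needs the template ID and a token.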
Cornell Notes
Ansible is used to automate VM provisioning across AWS and on-prem Proxmox, then extended with Slack notifications so the operator receives the new VM’s connection details automatically. Idempotency required a change: repeated runs won’t create duplicates unless the VM name is unique, so the playbook generates random names. Proxmox networking complicates IP discovery because Ansible can’t directly obtain the assigned IP; the workflow retrieves the VM’s MAC address and uses ARP to find the IP, with Nmap scans used to populate the ARP cache. To make the process voice-driven, Alexa triggers the automation via Zapier and the Red Hat Ansible Automation Platform, which supplies the API Ansible alone lacks.
- Why did the Proxmox playbook stop creating new VMs after the first run, and what fixed it?
- How did the workflow get an IP address for Proxmox VMs when Proxmox/cloud behavior differed?
- What role did Nmap play in making ARP-based IP discovery work?
- Why was Red Hat Ansible Automation Platform necessary for Alexa integration?
- How did Slack fit into the automation pipeline?
Review Questions
- What does idempotency mean in Ansible, and how did it affect VM creation when rerunning the Proxmox playbook?
- Describe the step-by-step method used to determine a Proxmox VM’s IP address without direct router automation.
- What components were added to make voice requests (Alexa) trigger Ansible-driven provisioning, and why couldn’t Ansible alone handle that?
Key Points
1. Ansible playbooks can automate repeated VM provisioning across AWS and on-prem Proxmox, replacing manual cloud/on-prem setup steps.
2. Idempotency prevents duplicate VM creation when names match, so unique VM naming (randomized) is required for repeated lab runs.
3. Proxmox networking can delay or hide IP assignment from Ansible, requiring indirect IP discovery rather than relying on provisioning output.
4. MAC-to-IP resolution used ARP, but ARP only worked after the router learned the VM—solved by running an Nmap scan to populate the ARP cache.
5. Slack notifications were added so provisioning results (including IP addresses) arrive automatically without manual lookup.
6. Alexa-driven provisioning required an API layer; the Red Hat Ansible Automation Platform provided the API that plain Ansible lacks.
7. Zapier acted as the glue between Alexa and the automation platform, running Python to trigger the workflow and then notify Slack.