# Ansible Role: Proxmox VM → Template → Clones (Cloud-Init)
This role automates the full lifecycle of deploying a Debian GenericCloud VM on Proxmox, converting it into a template, and optionally creating multiple Cloud-Init clones with static or dynamic networking.

It supports:

- Downloading the Debian GenericCloud image (qcow2)
- Creating and configuring a base VM
- UEFI + Secure Boot + TPM 2.0 (optional)
- PCI or VirtIO GPU passthrough (optional)
- Disk import + optional resize
- Cloud-Init user and vendor templates
- Template conversion
- Automatic clone deployment with per-VM networking and hostname settings
## Features

- ✔ Automatically downloads the Debian Bookworm GenericCloud image
- ✔ Creates the VM with CPU, RAM, networking, and storage settings
- ✔ Supports DHCP or static IPs
- ✔ Cloud-Init support:
  - Users, SSH keys, passwords
  - Timezone
  - Packages
- ✔ Optional TPM 2.0 + Secure Boot (OVMF)
- ✔ Optional real GPU passthrough and VirtIO GPU
- ✔ Optional disk resize
- ✔ Converts the base VM into a template
- ✔ Creates any number of clones from the template
- ✔ Starts clones after creation
## Folder Structure

```
ANSIBLE_PROXMOX_VM/
  defaults/
    main.yml
  tasks/
    main.yml
  templates/
    cloudinit_userdata.yaml.j2
    cloudinit_vendor.yaml.j2
  README.md
```
## Requirements

### Proxmox API / Environment

This role runs on the Proxmox host itself (via `localhost`), using `qm` CLI commands. Therefore:

- Ansible must have SSH access to the Proxmox node.
- The user must have permission to run `qm` commands (root recommended).
- Proxmox must have a storage pool such as `local-lvm`.
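A minimal YAML inventory entry for the Proxmox node could look like the sketch below; the host alias matches the `hosts: proxmox` used later, but the address is a placeholder, not part of the role:

```yaml
# Hypothetical inventory (inventory.yml); adjust host, address, and user to your setup.
all:
  hosts:
    proxmox:
      ansible_host: 192.168.1.10   # placeholder IP of the Proxmox node
      ansible_user: root
```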
### Debian Cloud Image

Downloaded automatically if not present:

```
/var/lib/vz/template/qemu/debian-genericcloud-amd64.qcow2
```
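The download step can be expressed roughly as the task below; this is a sketch, and the image URL is an assumption (the official Debian Bookworm GenericCloud location), not necessarily what `tasks/main.yml` uses:

```yaml
# Illustrative sketch: fetch the image only if it is not already on the host.
- name: Download Debian GenericCloud image
  ansible.builtin.get_url:
    url: "https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-genericcloud-amd64.qcow2"
    dest: /var/lib/vz/template/qemu/debian-genericcloud-amd64.qcow2
    mode: "0644"
    force: false   # keep an existing file instead of re-downloading
```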
## Variables (`defaults/main.yml`)

### Base VM settings

```yaml
vm_id: 150
hostname: debian-template-base
memory: 4096
cores: 4
bridge: vmbr0
storage: local-lvm
mac_address: "DE:AD:BE:EF:44:55"
cpu_type: host
```
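These settings map roughly onto a `qm create` call like the sketch below; the exact flags and task layout the role uses may differ:

```yaml
# Sketch of how the base VM settings could translate into a qm create call.
- name: Create base VM
  ansible.builtin.command: >
    qm create {{ vm_id }}
    --name {{ hostname }}
    --memory {{ memory }}
    --cores {{ cores }}
    --cpu {{ cpu_type }}
    --net0 virtio,bridge={{ bridge }},macaddr={{ mac_address }}
  args:
    creates: "/etc/pve/qemu-server/{{ vm_id }}.conf"   # skip if the VM already exists
```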
### Networking

Choose DHCP:

```yaml
ip_mode: dhcp
ipconfig0: "ip=dhcp"
```

Or static:

```yaml
ip_mode: static
ip_address: "192.168.1.60/24"
gateway: "192.168.1.1"
dns:
  - "1.1.1.1"
  - "8.8.8.8"
```

The `ipconfig0` value passed to Proxmox is derived from these settings:

```yaml
ipconfig0: "{{ 'ip=dhcp' if ip_mode == 'dhcp' else 'ip=' + ip_address + ',gw=' + gateway }}"
```
### Cloud-Init user

```yaml
ci_user: debian
ci_password: "SecurePass123"
ssh_key_path: "~/.ssh/id_rsa.pub"
timezone: "Europe/Berlin"
```
### Disk

```yaml
resize_disk: true
resize_size: "16G"
```
### GPU passthrough

```yaml
gpu_passthrough: false
gpu_device: "0000:01:00.0"
virtio_gpu: false
```
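With `gpu_passthrough` enabled, the PCI device is attached to the VM. A hedged sketch of the corresponding `qm set` call (whether the role adds `pcie=1` or other options is an assumption):

```yaml
# Sketch: attach the physical GPU via hostpci0 when gpu_passthrough is true.
- name: Attach GPU via PCI passthrough
  ansible.builtin.command: "qm set {{ vm_id }} --hostpci0 {{ gpu_device }},pcie=1"
  when: gpu_passthrough | bool
```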
### UEFI + TPM

```yaml
enable_tpm: false
```
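When `enable_tpm` is true, the VM is switched to OVMF (UEFI) firmware with EFI and TPM 2.0 state volumes. Roughly, as a sketch (storage target and exact options are assumptions, not the role's literal task):

```yaml
# Sketch: switch the VM to OVMF and add EFI + TPM 2.0 state volumes.
- name: Enable UEFI and TPM 2.0
  ansible.builtin.command: >
    qm set {{ vm_id }}
    --bios ovmf
    --efidisk0 {{ storage }}:1,efitype=4m,pre-enrolled-keys=1
    --tpmstate0 {{ storage }}:1,version=v2.0
  when: enable_tpm | bool
```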
### Templates + Clones

```yaml
make_template: true
create_clones: true

clones:
  - id: 301
    hostname: app01
    ip: "192.168.1.81/24"
    gateway: "192.168.1.1"
```
## How to Use

### 1. Include the role in your playbook

```yaml
- hosts: proxmox
  become: true
  roles:
    - ANSIBLE_PROXMOX_VM
```

Or run directly:

```bash
ansible-playbook tasks/main.yml -i inventory
```
## Clone Creation Flow

For each clone you define:

```yaml
clones:
  - id: 301
    hostname: app01
    ip: "192.168.1.81/24"
    gateway: "192.168.1.1"
```
The role will:

1. Clone the VM from the template → `qm clone 150 301`
2. Set the hostname and Cloud-Init network configuration
3. Start the VM
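Expressed as Ansible tasks, the per-clone loop looks roughly like the sketch below. Only `qm clone 150 301` is taken from the description above; the remaining flags and task names are assumptions about the implementation:

```yaml
# Sketch of the per-clone loop: clone, configure, start.
- name: Clone VMs from template
  ansible.builtin.command: "qm clone {{ vm_id }} {{ item.id }} --name {{ item.hostname }} --full"
  loop: "{{ clones }}"

- name: Configure hostname and networking on each clone
  ansible.builtin.command: >
    qm set {{ item.id }}
    --name {{ item.hostname }}
    --ipconfig0 ip={{ item.ip }},gw={{ item.gateway }}
  loop: "{{ clones }}"

- name: Start clones
  ansible.builtin.command: "qm start {{ item.id }}"
  loop: "{{ clones }}"
```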
## Cloud-Init Templates

### User Data

`templates/cloudinit_userdata.yaml.j2`

Defines:

- users
- SSH key
- password
- timezone
- package updates
- custom command execution
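An illustrative shape for this template (a sketch, not the file's literal contents) could be:

```yaml
#cloud-config
# Illustrative sketch of cloudinit_userdata.yaml.j2; the real template may differ.
users:
  - name: "{{ ci_user }}"
    ssh_authorized_keys:
      - "{{ lookup('file', ssh_key_path) }}"
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
password: "{{ ci_password }}"
ssh_pwauth: true
timezone: "{{ timezone }}"
package_update: true
runcmd:
  - echo "provisioned by cloud-init"
```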
### Vendor Data

`templates/cloudinit_vendor.yaml.j2`

Defines:

- default packages
- DNS (optional)
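Similarly, a minimal sketch of the vendor template (illustrative only; package names are examples, and optional DNS settings would also live here):

```yaml
#cloud-config
# Illustrative sketch of cloudinit_vendor.yaml.j2; the real template may differ.
packages:
  - qemu-guest-agent
  - curl
```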
## Notes & Best Practices

- Ensure Proxmox has snippets storage enabled under Datacenter → Storage.
- If cloning fails with an invalid IP error, check the CIDR formatting: `"192.168.1.81/24"`.
- Using SSH keys and password login together is possible (`ssh_pwauth: true`).
- If GPU passthrough is enabled, ensure the host kernel is configured for IOMMU.
## Future Improvements (optional)

- Add API-based module support (`community.general.proxmox`) instead of the CLI
- Add validation for clone IDs
- Automate post-deployment provisioning (Ansible SSH into the clones)
- Create a Galaxy-ready role