# Ansible Role: Proxmox VM → Template → Clones (CloudInit)
Automates the entire lifecycle of a Debian GenericCloud VM on Proxmox:
- Download the Debian image
- Create a base VM
- Optionally enable UEFI, SecureBoot, TPM 2.0, GPU passthrough
- Convert the VM into a template
- Spin up any number of CloudInit clones with static or dynamic networking
## Features
- ✅ Auto-download of the Debian Bookworm GenericCloud image
- ✅ Create VM (CPU, RAM, networking, storage)
- ✅ DHCP or static IP support
- ✅ CloudInit: users, SSH keys, passwords, timezone, packages
- ✅ Optional TPM 2.0 + SecureBoot (OVMF)
- ✅ Optional GPU passthrough or VirtIO GPU
- ✅ Optional disk resize
- ✅ Convert base VM into a template
- ✅ Create multiple clones from template
- ✅ Start clones after creation
## Folder Structure
```
ANSIBLE_PROXMOX_VM/
├─ defaults/
│  └─ main.yml
├─ tasks/
│  └─ main.yml
├─ templates/
│  ├─ cloudinit_userdata.yaml.j2
│  └─ cloudinit_vendor.yaml.j2
└─ README.md
```
## Requirements
- The role runs on the Proxmox host itself (targeting `localhost`), using `qm` CLI commands.
- Ansible needs SSH access to the Proxmox node.
- The executing user must be allowed to run `qm` commands (root recommended).
- A Proxmox storage pool such as `local-lvm`.
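Before running the role, it can be worth verifying that the Proxmox CLI tools are actually available on the target node. A minimal sketch (not part of the role itself):

```shell
# Sanity checks before running the role (a sketch; run on the Proxmox node).
# qm manages VMs, pvesm lists storage pools -- both ship with Proxmox VE.
for cmd in qm pvesm; do
  command -v "$cmd" >/dev/null 2>&1 || echo "missing: $cmd (is this the Proxmox host?)"
done
```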
## Debian Cloud Image
The image is automatically downloaded if not present:
```
/var/lib/vz/template/qemu/debian-genericcloud-amd64.qcow2
```
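In Ansible terms, an idempotent download step might look like this (a sketch, not necessarily the role's actual task; the upstream URL is an assumption based on the official Debian cloud image mirror):

```yaml
- name: Download Debian Bookworm GenericCloud image if missing
  ansible.builtin.get_url:
    url: "https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-genericcloud-amd64.qcow2"
    dest: /var/lib/vz/template/qemu/debian-genericcloud-amd64.qcow2
    mode: "0644"
```

`get_url` skips the download when the destination file already exists (its default `force: no`), which gives the "download if not present" behavior described above.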
## Variables (`defaults/main.yml`)
```yaml
# Base VM settings
vm_id: 150
hostname: debian-template-base
memory: 4096
cores: 4
bridge: vmbr0
storage: local-lvm
mac_address: "DE:AD:BE:EF:44:55"
cpu_type: host
# Networking
ip_mode: dhcp # or 'static'
ip_address: "192.168.1.60/24"
gateway: "192.168.1.1"
dns:
  - "1.1.1.1"
  - "8.8.8.8"
ipconfig0: "{{ 'ip=dhcp' if ip_mode == 'dhcp' else 'ip=' + ip_address + ',gw=' + gateway }}"
# CloudInit user
ci_user: debian
ci_password: "SecurePass123"
ssh_key_path: "~/.ssh/id_rsa.pub"
timezone: "Europe/Berlin"
# Disk
resize_disk: true
resize_size: "16G"
# GPU passthrough
gpu_passthrough: false
gpu_device: "0000:01:00.0"
virtio_gpu: false
# UEFI + TPM
enable_tpm: false
# Templates + Clones
make_template: true
create_clones: true
clones:
  - id: 301
    hostname: app01
    ip: "192.168.1.81/24"
    gateway: "192.168.1.1"
```
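Under the hood the role drives `qm`, so with the defaults above the base-VM creation step corresponds roughly to commands like the following. This is a sketch (printed rather than executed here); the exact flags depend on `tasks/main.yml`:

```shell
# Values taken from defaults/main.yml above
VM_ID=150
VM_NAME=debian-template-base
STORAGE=local-lvm
IMAGE=/var/lib/vz/template/qemu/debian-genericcloud-amd64.qcow2

# Create the VM shell, import the cloud image as its disk, attach a
# CloudInit drive, and boot from the imported disk.
echo "qm create ${VM_ID} --name ${VM_NAME} --memory 4096 --cores 4 --cpu host --net0 virtio,bridge=vmbr0,macaddr=DE:AD:BE:EF:44:55"
echo "qm importdisk ${VM_ID} ${IMAGE} ${STORAGE}"
echo "qm set ${VM_ID} --scsi0 ${STORAGE}:vm-${VM_ID}-disk-0 --ide2 ${STORAGE}:cloudinit --boot order=scsi0"
```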
## Usage
### Include the role in a playbook
```yaml
- hosts: proxmox
  become: true
  roles:
    - ANSIBLE_PROXMOX_VM
```
### Run directly
```bash
# tasks/main.yml is a task file, not a playbook. Save the snippet above
# as a playbook (e.g. site.yml) and run:
ansible-playbook site.yml -i inventory
```
## Clone Creation Flow
For each clone defined in `clones`:
1. `qm clone 150 <clone_id>`
2. Set the hostname and CloudInit network configuration (`--ipconfig0`)
3. Start the VM
### Example `clones` section
```yaml
clones:
  - id: 301
    hostname: app01
    ip: "192.168.1.81/24"
    gateway: "192.168.1.1"
```
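For the example clone above, the per-clone flow maps roughly onto these `qm` invocations (a sketch, printed rather than executed; the role's actual flags may differ):

```shell
TEMPLATE_ID=150   # vm_id of the template
CLONE_ID=301
CLONE_NAME=app01

# 1. Clone the template (--full makes an independent copy rather than a linked clone)
echo "qm clone ${TEMPLATE_ID} ${CLONE_ID} --name ${CLONE_NAME} --full"
# 2. Set hostname and CloudInit network configuration
echo "qm set ${CLONE_ID} --ipconfig0 ip=192.168.1.81/24,gw=192.168.1.1"
# 3. Start the VM
echo "qm start ${CLONE_ID}"
```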
## CloudInit Templates
### `templates/cloudinit_userdata.yaml.j2`
Defines:
- `users`
- SSH key
- password
- timezone
- package updates
- custom commands
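A minimal version of such a template might look like this (a sketch, not the role's actual file; variable names follow `defaults/main.yml`):

```yaml
#cloud-config
users:
  - name: "{{ ci_user }}"
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - "{{ lookup('file', ssh_key_path) }}"
password: "{{ ci_password }}"
chpasswd: { expire: false }
ssh_pwauth: true
timezone: "{{ timezone }}"
package_update: true
```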
### `templates/cloudinit_vendor.yaml.j2`
Defines:
- default packages
- DNS (optional)
## Notes & Best Practices
- Ensure Proxmox has snippets storage enabled (`Datacenter → Storage`).
- If cloning fails with an invalid IP, check the format: `"192.168.1.81/24"`.
- Both SSH keys and password login are supported (`ssh_pwauth: true`).
- If GPU passthrough is enabled, ensure the host kernel is configured for IOMMU.
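One quick way to check the IOMMU prerequisite is to look for the relevant flag on the kernel command line. The helper below is a hypothetical sketch (on recent AMD systems the IOMMU may be enabled even without an explicit cmdline flag, so treat a negative result as a hint, not proof):

```shell
# Hypothetical helper: return 0 if a kernel command line enables IOMMU.
# On the Proxmox host you would call: iommu_enabled "$(cat /proc/cmdline)"
iommu_enabled() {
  echo "$1" | grep -qE 'intel_iommu=on|amd_iommu=on'
}

iommu_enabled "BOOT_IMAGE=/vmlinuz quiet intel_iommu=on iommu=pt" && echo "IOMMU flag present"
```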
## Future Improvements (optional)
- Switch to the `community.general.proxmox_kvm` API module instead of the `qm` CLI
- Validate clone IDs
- Automated post-deployment provisioning (Ansible SSH into clones)
- Publish as a Galaxy-ready role