Home Lab Setup (Part 2: Terraform)
This post is the second in a series explaining how I set up my home lab to run a Kubernetes cluster on a Proxmox hypervisor using infrastructure-as-code tools like Terraform and Puppet.
In the last post (Part 1) we covered the hardware and manually set up Proxmox and a CentOS cloud-init template, which we'll use in this post to provision our required nodes with Terraform. In the next part we'll expand on this by using Puppet Bolt to drive the Terraform process.
Terraform
Terraform, an open-source product by HashiCorp, lets us define and provision our desired infrastructure as code. It integrates nicely with a large range of providers, either officially supported or extended by the community, allowing you to define infrastructure logic that can be used to set up resources across various cloud providers (AWS, Azure, GCP, etc.).
In our case we will be using a community provider to integrate with Proxmox, meaning the Terraform logic defined here will not be easily reusable unless we're targeting Proxmox again. However, if for some reason my home lab server stopped working, all I'd need to do to rebuild it is set up another piece of hardware, as per the last part, and re-run these scripts to recreate the whole environment exactly as before.
Install Terraform
Terraform is available as a binary file. To make it usable you just need to download it and place it in your system's PATH. Refer to the Terraform downloads page for the appropriate build, but if you're following along from the Proxmox node (Debian 10), it is as follows.
wget https://releases.hashicorp.com/terraform/0.12.24/terraform_0.12.24_linux_amd64.zip
unzip terraform_0.12.24_linux_amd64.zip && rm terraform_0.12.24_linux_amd64.zip
mv terraform /usr/local/bin/
terraform --version
Install Proxmox Terraform Provider
As hinted at earlier, there isn't an official way to manage Proxmox with Terraform, but since Terraform is extensible we can use a community-supported Terraform Provider created and shared by the company Telmate on GitHub here.
Following their instructions here, we first need to install Go, which has a similar setup process to Terraform (not surprising since, if you haven't guessed by now, Terraform is written in Go).
wget https://dl.google.com/go/go1.14.2.linux-amd64.tar.gz
tar -C /usr/local -xzf go1.14.2.linux-amd64.tar.gz && rm go1.14.2.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin # Also modify PATH in ~/.profile or /etc/profile
go version # note: "go version", not "go --version"
Next we can actually set up the Provider by cloning the community repo and building the relevant binaries that Terraform can use.
git clone https://github.com/Telmate/terraform-provider-proxmox.git && cd terraform-provider-proxmox
go install github.com/Telmate/terraform-provider-proxmox/cmd/terraform-provider-proxmox
go install github.com/Telmate/terraform-provider-proxmox/cmd/terraform-provisioner-proxmox
make
Finally, create the Terraform plugin directory and copy the Provider binaries into it as below.
mkdir -p ~/.terraform.d/plugins # -p since ~/.terraform.d likely doesn't exist yet
cp bin/terraform-provider-proxmox ~/.terraform.d/plugins
cp bin/terraform-provisioner-proxmox ~/.terraform.d/plugins
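As a quick optional sanity check, you can confirm both binaries are now where Terraform will look for them:
ls ~/.terraform.d/plugins # should list terraform-provider-proxmox and terraform-provisioner-proxmox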
Create Terraform Module
We then created a "terraform" Git repo containing the following structure/files (note: as this contains information specific to my setup, the repo is not publicly shared, but I'll include the rough contents with any sensitive fields changed to something generic). For more on Terraform Modules, refer to the official docs.
|-homelab
| |-main.tf
| |-outputs.tf
| |-provider.tf
| |-variables.tf
|-README.md
main.tf
This file contains the “core logic” of your Proxmox setup (i.e. the node definitions). For fields you can modify, refer to the Provider documentation here.
We will be creating the following nodes:
- 1x Kubernetes Master (4 CPU, 8GB Memory, 25GB Disk)
- 2x Kubernetes Worker Node (2 CPU, 8GB Memory, 25GB Disk)
- 1x Storage Server (2 CPU, 4GB Memory, 100GB Disk)
- 1x Puppet Master (2 CPU, 4GB Memory, 25GB Disk)
Note: Be sure to replace cores / memory / disk with values that are applicable to your setup, as well as the IPs and gateway for ipconfig0.
terraform {
required_version = ">= 0.12"
}
resource "proxmox_vm_qemu" "k8s_server" {
count = 1
name = "${var.target_node}-master-${count.index + 1}"
target_node = var.target_node
clone = var.cloudinit_template
agent = 1
os_type = "cloud-init"
cores = 4
sockets = "1"
cpu = "host"
memory = 8192
scsihw = "virtio-scsi-pci"
bootdisk = "scsi0"
disk {
id = 0
size = 25
type = "scsi"
storage = "local-lvm"
storage_type = "lvm"
iothread = true
}
network {
id = 0
model = "virtio"
bridge = "vmbr0"
}
lifecycle {
ignore_changes = [
network,
]
}
# Cloud Init Settings
ipconfig0 = "ip=192.168.0.1${count.index + 1}/24,gw=<GW>"
sshkeys = <<EOF
${var.ssh_key}
EOF
}
resource "proxmox_vm_qemu" "k8s_agent" {
count = 2
name = "${var.target_node}-node-${count.index + 1}"
target_node = var.target_node
clone = var.cloudinit_template
agent = 1
os_type = "cloud-init"
cores = 2
sockets = "1"
cpu = "host"
memory = 8192
scsihw = "virtio-scsi-pci"
bootdisk = "scsi0"
disk {
id = 0
size = 25
type = "scsi"
storage = "local-lvm"
storage_type = "lvm"
iothread = true
}
network {
id = 0
model = "virtio"
bridge = "vmbr0"
}
lifecycle {
ignore_changes = [
network,
]
}
# Cloud Init Settings
ipconfig0 = "ip=192.168.0.2${count.index + 1}/24,gw=<GW>"
sshkeys = <<EOF
${var.ssh_key}
EOF
}
resource "proxmox_vm_qemu" "storage" {
count = 1
name = "${var.target_node}-storage-${count.index + 1}"
target_node = var.target_node
clone = var.cloudinit_template
agent = 1
os_type = "cloud-init"
cores = 2
sockets = "1"
cpu = "host"
memory = 4096
scsihw = "virtio-scsi-pci"
bootdisk = "scsi0"
disk {
id = 0
size = 100
type = "scsi"
storage = "local-lvm"
storage_type = "lvm"
iothread = true
}
network {
id = 0
model = "virtio"
bridge = "vmbr0"
}
lifecycle {
ignore_changes = [
network,
]
}
# Cloud Init Settings
ipconfig0 = "ip=192.168.0.3${count.index + 1}/24,gw=<GW>"
sshkeys = <<EOF
${var.ssh_key}
EOF
}
resource "proxmox_vm_qemu" "puppet" {
count = 1
name = "${var.target_node}-puppet-${count.index + 1}"
target_node = var.target_node
clone = var.cloudinit_template
agent = 1
os_type = "cloud-init"
cores = 2
sockets = "1"
cpu = "host"
memory = 4096
scsihw = "virtio-scsi-pci"
bootdisk = "scsi0"
disk {
id = 0
size = 25
type = "scsi"
storage = "local-lvm"
storage_type = "lvm"
iothread = true
}
network {
id = 0
model = "virtio"
bridge = "vmbr0"
}
lifecycle {
ignore_changes = [
network,
]
}
# Cloud Init Settings
ipconfig0 = "ip=192.168.0.4${count.index + 1}/24,gw=<GW>"
sshkeys = <<EOF
${var.ssh_key}
EOF
}
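As an aside, the four resource blocks above are almost identical, and Terraform 0.12.6+ supports for_each on resources, so the duplication could be collapsed into a single block driven by a map variable. A rough sketch of the idea (untested with this provider version; the variable name and values below are illustrative):
variable "nodes" {
  # role-index => sizing and address (illustrative values matching the list above)
  default = {
    "master-1"  = { cores = 4, memory = 8192, disk = 25,  ip = "192.168.0.11" }
    "node-1"    = { cores = 2, memory = 8192, disk = 25,  ip = "192.168.0.21" }
    "node-2"    = { cores = 2, memory = 8192, disk = 25,  ip = "192.168.0.22" }
    "storage-1" = { cores = 2, memory = 4096, disk = 100, ip = "192.168.0.31" }
    "puppet-1"  = { cores = 2, memory = 4096, disk = 25,  ip = "192.168.0.41" }
  }
}
resource "proxmox_vm_qemu" "node" {
  for_each    = var.nodes
  name        = "${var.target_node}-${each.key}"
  target_node = var.target_node
  clone       = var.cloudinit_template
  cores       = each.value.cores
  memory      = each.value.memory
  disk {
    id      = 0
    size    = each.value.disk
    type    = "scsi"
    storage = "local-lvm"
  }
  ipconfig0 = "ip=${each.value.ip}/24,gw=<GW>"
  # ... remaining settings (agent, network, lifecycle, sshkeys) as above ...
}
I've kept the separate blocks here since the outputs below rely on the per-resource name lists.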
outputs.tf
Since we intend on using Puppet Bolt to modify the servers once provisioned, we can get Terraform to output these as a "servers" value by creating a map of "<name>":"<ipconfig>" pairs (e.g. "server-master-1":"ip=192.168.0.11/24,gw=192.168.0.1").
Note: there’s probably a cleaner way of collating this dynamically, but I didn’t have much luck :(
locals {
  k8s_server = zipmap(proxmox_vm_qemu.k8s_server.*.name, proxmox_vm_qemu.k8s_server.*.ipconfig0)
  k8s_agent  = zipmap(proxmox_vm_qemu.k8s_agent.*.name, proxmox_vm_qemu.k8s_agent.*.ipconfig0)
  storage    = zipmap(proxmox_vm_qemu.storage.*.name, proxmox_vm_qemu.storage.*.ipconfig0)
  puppet     = zipmap(proxmox_vm_qemu.puppet.*.name, proxmox_vm_qemu.puppet.*.ipconfig0)
}
output "servers" {
  value = merge(local.k8s_server, local.k8s_agent, local.storage, local.puppet)
}
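After an apply, with a target node named "server", terraform output servers should print something roughly like the following (exact formatting varies by Terraform version):
terraform output servers
# {
#   "server-master-1" = "ip=192.168.0.11/24,gw=192.168.0.1"
#   "server-node-1" = "ip=192.168.0.21/24,gw=192.168.0.1"
#   "server-node-2" = "ip=192.168.0.22/24,gw=192.168.0.1"
#   "server-storage-1" = "ip=192.168.0.31/24,gw=192.168.0.1"
#   "server-puppet-1" = "ip=192.168.0.41/24,gw=192.168.0.1"
# }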
provider.tf
This file tells Terraform which providers we'll be using, with the connection details passed in as variables from the variables.tf file.
provider "proxmox" {
pm_parallel = 1
pm_tls_insecure = true
pm_api_url = var.pm_api_url
pm_user = var.pm_user
}
variables.tf
This file lets us declare the expected input variables, allowing them to be passed in from the command line or given defaults. We'll set defaults for everything but the password, which we'll export as the environment variable PM_PASS (e.g. export PM_PASS="<Your Proxmox Password>").
# ---------------------------------------------------------------------------------------------------------------------
# ENVIRONMENT VARIABLES
# Define these secrets as environment variables
# ---------------------------------------------------------------------------------------------------------------------
# PM_PASS # Proxmox password
# ---------------------------------------------------------------------------------------------------------------------
# OPTIONAL PARAMETERS
# ---------------------------------------------------------------------------------------------------------------------
variable "pm_api_url" {
default = "https://<PROXMOX IP>:8006/api2/json"
}
variable "pm_user" {
default = "root@pam"
}
variable "cloudinit_template" {
default = "centos-8-cloudinit-template" # This should match name of template from Part 1
}
variable "target_node" {
default = "<PROXMOX HOSTNAME>"
}
variable "ssh_key" {
default = "<YOUR PUBLIC SSH KEY>"
}
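Note that any of these defaults can also be overridden at run time without editing the file, using -var flags or a terraform.tfvars file. For example (values illustrative):
export PM_PASS="<Your Proxmox Password>"
terraform plan -var "target_node=server" -var "pm_api_url=https://192.168.0.2:8006/api2/json"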
Testing it works (optional)
Since we'll be using Puppet Bolt to drive the provisioning process, this step can be skipped; but if you want to stop at this point, you just need to initialize Terraform, run the plan to confirm it's as expected, and optionally apply it.
terraform init
terraform plan
# Optional: actually provision the nodes with Terraform
terraform apply
Note: If you do run the apply, you should remove the resources afterwards with terraform destroy so you can later test having Puppet Bolt do this step.
Conclusion
So that's it for this part. In this guide we defined our infrastructure setup as code, along with the dependencies Terraform needs to provision it. In the next part (Part 3) we'll continue by using Puppet Bolt to run this logic and set up the base configuration management requirements.