Home Lab Setup (Part 2: Terraform)

Explaining my home lab setup

This post is the second in a series explaining how I set up my home lab to run a Kubernetes cluster on a Proxmox hypervisor using infrastructure-as-code tools like Terraform and Puppet.

In the last post (Part 1) we covered the hardware and manually set up Proxmox and a CentOS cloud-init template, which we’ll use in this post to provision our required nodes with Terraform. In the next part we’ll expand on this by using Puppet Bolt to drive the Terraform process.

Terraform

Terraform, an open-source product by HashiCorp, lets us define and provision our desired infrastructure as code. It integrates with a wide range of providers, either officially supported or maintained by the community, allowing you to define infrastructure logic that can be used to set up resources across various cloud providers (AWS, Azure, GCP, etc.).

In our case we will be using a community provider to integrate with Proxmox, meaning the Terraform logic defined here will not be easily re-usable unless we use Proxmox again. However, if for some reason my home lab server stopped working, all I’d need to do to rebuild it is set up another piece of hardware, as per the last part, and re-run these scripts to recreate the whole environment exactly as before.

Install Terraform

Terraform is distributed as a single binary. To make it usable you just need to download it and place it somewhere on your system’s PATH. Refer to the Terraform downloads page for the appropriate build, but if you’re following along from the Proxmox node (Debian 10), it’s as follows.

wget https://releases.hashicorp.com/terraform/0.12.24/terraform_0.12.24_linux_amd64.zip
unzip terraform_0.12.24_linux_amd64.zip && rm terraform_0.12.24_linux_amd64.zip
mv terraform /usr/local/bin/
terraform --version

Install Proxmox Terraform Provider

As hinted at earlier, there isn’t an official way to manage Proxmox with Terraform, but since Terraform is extensible we can use a community-supported Terraform Provider created and shared by Telmate on GitHub.

Following their instructions, we first need to install Go, which has a similar setup process to Terraform (not surprising since, if you haven’t guessed by now, Terraform is written in Go).

wget https://dl.google.com/go/go1.14.2.linux-amd64.tar.gz
tar -C /usr/local -xzf go1.14.2.linux-amd64.tar.gz && rm go1.14.2.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin # Also modify PATH in ~/.profile or /etc/profile
go version

Next we can set up the Provider itself by cloning the community repo and building the binaries that Terraform can use.

git clone https://github.com/Telmate/terraform-provider-proxmox.git && cd terraform-provider-proxmox
go install github.com/Telmate/terraform-provider-proxmox/cmd/terraform-provider-proxmox
go install github.com/Telmate/terraform-provider-proxmox/cmd/terraform-provisioner-proxmox
make

Finally, create the Terraform plugin directory and copy the Provider binaries into it as below.

mkdir -p ~/.terraform.d/plugins
cp bin/terraform-provider-proxmox ~/.terraform.d/plugins
cp bin/terraform-provisioner-proxmox ~/.terraform.d/plugins

Create Terraform Module

We then create a “terraform” Git repo containing the following structure/files. (Note: as this contains information specific to my setup, it is not publicly shared, but I will include the rough contents with any sensitive fields changed to something generic.) For more on Terraform modules, refer to the official docs.

 |-homelab
 | |-main.tf
 | |-outputs.tf
 | |-provider.tf
 | |-variables.tf
 |-README.md

main.tf

This file contains the “core logic” of your Proxmox setup (i.e. the node definitions). For fields you can modify, refer to the Provider documentation here.

We will be creating the following nodes:

  • 1x Kubernetes Master (4 CPU, 8GB Memory, 25GB Disk)
  • 2x Kubernetes Worker Nodes (2 CPU, 8GB Memory, 25GB Disk)
  • 1x Storage Server (2 CPU, 4GB Memory, 100GB Disk)
  • 1x Puppet Master (2 CPU, 4GB Memory, 25GB Disk)

Note: Be sure to replace cores / memory / disk with values applicable to your setup, as well as the IPs and gateway in ipconfig0.

terraform {
  required_version = ">= 0.12"
}

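# Kubernetes master: 4 cores, 8GB memory, 25GB disk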
resource "proxmox_vm_qemu" "k8s_server" {
  count       = 1
  name        = "${var.target_node}-master-${count.index + 1}"
  target_node = var.target_node

  clone = var.cloudinit_template

  agent    = 1
  os_type  = "cloud-init"
  cores    = 4
  sockets  = "1"
  cpu      = "host"
  memory   = 8192
  scsihw   = "virtio-scsi-pci"
  bootdisk = "scsi0"

  disk {
    id           = 0
    size         = 25
    type         = "scsi"
    storage      = "local-lvm"
    storage_type = "lvm"
    iothread     = true
  }

  network {
    id     = 0
    model  = "virtio"
    bridge = "vmbr0"
  }

  lifecycle {
    ignore_changes = [
      network,
    ]
  }

  # Cloud Init Settings
  ipconfig0 = "ip=192.168.0.1${count.index + 1}/24,gw=<GW>"

  sshkeys = <<EOF
  ${var.ssh_key}
  EOF
}

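# Kubernetes worker nodes: 2 cores, 8GB memory, 25GB disk each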
resource "proxmox_vm_qemu" "k8s_agent" {
  count       = 2
  name        = "${var.target_node}-node-${count.index + 1}"
  target_node = var.target_node

  clone = var.cloudinit_template

  agent    = 1
  os_type  = "cloud-init"
  cores    = 2
  sockets  = "1"
  cpu      = "host"
  memory   = 8192
  scsihw   = "virtio-scsi-pci"
  bootdisk = "scsi0"

  disk {
    id           = 0
    size         = 25
    type         = "scsi"
    storage      = "local-lvm"
    storage_type = "lvm"
    iothread     = true
  }

  network {
    id     = 0
    model  = "virtio"
    bridge = "vmbr0"
  }

  lifecycle {
    ignore_changes = [
      network,
    ]
  }

  # Cloud Init Settings
  ipconfig0 = "ip=192.168.0.2${count.index + 1}/24,gw=<GW>"

  sshkeys = <<EOF
  ${var.ssh_key}
  EOF
}

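# Storage server: 2 cores, 4GB memory, 100GB disk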
resource "proxmox_vm_qemu" "storage" {
  count       = 1
  name        = "${var.target_node}-storage-${count.index + 1}"
  target_node = var.target_node

  clone = var.cloudinit_template

  agent    = 1
  os_type  = "cloud-init"
  cores    = 2
  sockets  = "1"
  cpu      = "host"
  memory   = 4096
  scsihw   = "virtio-scsi-pci"
  bootdisk = "scsi0"

  disk {
    id           = 0
    size         = 100
    type         = "scsi"
    storage      = "local-lvm"
    storage_type = "lvm"
    iothread     = true
  }

  network {
    id     = 0
    model  = "virtio"
    bridge = "vmbr0"
  }

  lifecycle {
    ignore_changes = [
      network,
    ]
  }

  # Cloud Init Settings
  ipconfig0 = "ip=192.168.0.3${count.index + 1}/24,gw=<GW>"

  sshkeys = <<EOF
  ${var.ssh_key}
  EOF
}

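# Puppet master: 2 cores, 4GB memory, 25GB disk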
resource "proxmox_vm_qemu" "puppet" {
  count       = 1
  name        = "${var.target_node}-puppet-${count.index + 1}"
  target_node = var.target_node

  clone = var.cloudinit_template

  agent    = 1
  os_type  = "cloud-init"
  cores    = 2
  sockets  = "1"
  cpu      = "host"
  memory   = 4096
  scsihw   = "virtio-scsi-pci"
  bootdisk = "scsi0"

  disk {
    id           = 0
    size         = 25
    type         = "scsi"
    storage      = "local-lvm"
    storage_type = "lvm"
    iothread     = true
  }

  network {
    id     = 0
    model  = "virtio"
    bridge = "vmbr0"
  }

  lifecycle {
    ignore_changes = [
      network,
    ]
  }

  # Cloud Init Settings
  ipconfig0 = "ip=192.168.0.4${count.index + 1}/24,gw=<GW>"

  sshkeys = <<EOF
  ${var.ssh_key}
  EOF
}

outputs.tf

Since we intend to use Puppet Bolt to configure the servers once provisioned, we can have Terraform output them as a “servers” value: a map of name-to-ipconfig entries (e.g. “server-master-1”:“ip=192.168.0.11/24,gw=192.168.0.1”).

Note: there’s probably a cleaner way of collating this dynamically, but I didn’t have much luck :(

locals {
  k8s_server = zipmap(proxmox_vm_qemu.k8s_server.*.name, proxmox_vm_qemu.k8s_server.*.ipconfig0)
  k8s_agent  = zipmap(proxmox_vm_qemu.k8s_agent.*.name, proxmox_vm_qemu.k8s_agent.*.ipconfig0)
  storage    = zipmap(proxmox_vm_qemu.storage.*.name, proxmox_vm_qemu.storage.*.ipconfig0)
  puppet     = zipmap(proxmox_vm_qemu.puppet.*.name, proxmox_vm_qemu.puppet.*.ipconfig0)
}

output "servers" {
  value = merge(local.k8s_server, local.k8s_agent, local.storage, local.puppet)
}
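
On the note above: one possibly cleaner way to collate this dynamically would be a single for expression over the concatenated resource lists. The sketch below assumes Terraform 0.12 syntax and uses my own placeholder names (all_vms, servers_alt); I haven’t verified it against this provider, so treat it as a starting point rather than a drop-in replacement.

locals {
  # Hypothetical: flatten all VM resources into one list
  all_vms = concat(
    proxmox_vm_qemu.k8s_server,
    proxmox_vm_qemu.k8s_agent,
    proxmox_vm_qemu.storage,
    proxmox_vm_qemu.puppet
  )
}

output "servers_alt" {
  # Build the name => ipconfig0 map in one expression
  value = { for vm in local.all_vms : vm.name => vm.ipconfig0 }
}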

provider.tf

This file tells Terraform which providers we’ll be using. The API URL and user are passed in as variables defined in variables.tf, while the password is supplied via the PM_PASS environment variable (covered below).

provider "proxmox" {
  pm_parallel     = 1
  pm_tls_insecure = true
  pm_api_url      = var.pm_api_url
  pm_user         = var.pm_user
}

variables.tf

This file declares the expected input variables, allowing them to be passed in from the command line or given defaults. We’ll set defaults for everything except the password, which we’ll export as the environment variable PM_PASS (e.g. export PM_PASS="<Your Proxmox Password>").

# ---------------------------------------------------------------------------------------------------------------------
# ENVIRONMENT VARIABLES
# Define these secrets as environment variables
# ---------------------------------------------------------------------------------------------------------------------

# PM_PASS # Proxmox password

# ---------------------------------------------------------------------------------------------------------------------
# OPTIONAL PARAMETERS
# ---------------------------------------------------------------------------------------------------------------------

variable "pm_api_url" {
  default = "https://<PROXMOX IP>:8006/api2/json"
}

variable "pm_user" {
  default = "root@pam"
}

variable "cloudinit_template" {
  default = "centos-8-cloudinit-template" # This should match name of template from Part 1
}

variable "target_node" {
  default = "<PROXMOX HOSTNAME>"
}

variable "ssh_key" {
  default = "<YOUR PUBLIC SSH KEY>"
}
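
Rather than editing the defaults above, you could also keep environment-specific values in a terraform.tfvars file, which Terraform loads automatically. Below is a hypothetical example with placeholder values; note that the password still comes from the PM_PASS environment variable and should not be stored here.

# terraform.tfvars (example values only)
pm_api_url         = "https://<PROXMOX IP>:8006/api2/json"
pm_user            = "root@pam"
cloudinit_template = "centos-8-cloudinit-template"
target_node        = "<PROXMOX HOSTNAME>"
ssh_key            = "<YOUR PUBLIC SSH KEY>"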

Testing it works (optional)

Since we’ll be using Puppet Bolt to drive the provisioning process, this step can be skipped, but if you want to stop at this point you just need to initialize Terraform, run a plan to confirm it looks as expected, and optionally apply it.

terraform init
terraform plan

# Optional: run this to actually provision the infrastructure
terraform apply

Note: If you do run terraform apply, you should tear the environment down again afterwards with terraform destroy so you can later test having Puppet Bolt perform this step.

Conclusion

So that’s it for this part. In this guide we defined our infrastructure as code, along with the dependencies Terraform needs to provision it. In the next part (Part 3) we’ll use Puppet Bolt to run this logic and set up the base configuration management requirements.
