Matchbox Analysis
Analysis of Matchbox network boot service capabilities and architecture
Matchbox Network Boot Analysis
This section contains a comprehensive analysis of Matchbox, a network boot service for provisioning bare-metal machines.
Overview
Matchbox is an HTTP and gRPC service developed by Poseidon that automates bare-metal machine provisioning through network booting. It matches machines to configuration profiles based on hardware attributes and serves boot configurations, kernel images, and provisioning configs.
Primary Repository: poseidon/matchbox
Documentation: https://matchbox.psdn.io/
License: Apache 2.0
Key Features
- Network Boot Support: iPXE, PXELINUX, GRUB2 chainloading
- OS Provisioning: Fedora CoreOS, Flatcar Linux, RHEL CoreOS
- Configuration Management: Ignition v3.x configs, Butane transpilation
- Machine Matching: Label-based matching (MAC, UUID, hostname, serial, custom)
- API: Read-only HTTP API + authenticated gRPC API
- Asset Serving: Local caching of OS images for faster deployment
- Templating: Go template support for dynamic configuration
Use Cases
- Bare-metal Kubernetes clusters - Provision CoreOS nodes for k8s
- Lab/development environments - Quick PXE boot for testing
- Datacenter provisioning - Automate OS installation across fleets
- Immutable infrastructure - Declarative machine provisioning via Terraform
Quick Architecture
┌─────────────┐
│ Machine │ PXE Boot
│ (BIOS/UEFI)│───┐
└─────────────┘ │
│
┌─────────────┐ │ DHCP/TFTP
│ dnsmasq │◄──┘ (chainload to iPXE)
│ DHCP+TFTP │
└─────────────┘
│
│ HTTP
▼
┌─────────────────────────┐
│ Matchbox │
│ ┌──────────────────┐ │
│ │ HTTP Endpoints │ │ /boot.ipxe, /ignition
│ └──────────────────┘ │
│ ┌──────────────────┐ │
│ │ gRPC API │ │ Terraform provider
│ └──────────────────┘ │
│ ┌──────────────────┐ │
│ │ Profile/Group │ │ Match machines
│ │ Matcher │ │ to configs
│ └──────────────────┘ │
└─────────────────────────┘
Technology Stack
- Language: Go
- Config Formats: Ignition JSON, Butane YAML
- Boot Protocols: PXE, iPXE, GRUB2
- APIs: HTTP (read-only), gRPC (authenticated)
- Deployment: Binary, container (Podman/Docker), Kubernetes
Integration Points
- Terraform: terraform-provider-matchbox for declarative provisioning
- Ignition/Butane: CoreOS provisioning configs
- dnsmasq: Reference DHCP/TFTP/DNS implementation (quay.io/poseidon/dnsmasq)
- Asset sources: Can serve local or remote (HTTPS) OS images
1 - Configuration Model
Analysis of Matchbox’s profile, group, and templating system
Matchbox Configuration Model
Matchbox uses a flexible configuration model based on Profiles (what to provision) and Groups (which machines get which profile), with support for templating and metadata.
Architecture Overview
┌─────────────────────────────────────────────────────────────┐
│ Matchbox Store │
│ ┌────────────┐ ┌────────────┐ ┌────────────┐ │
│ │ Profiles │ │ Groups │ │ Assets │ │
│ └────────────┘ └────────────┘ └────────────┘ │
│ │ │ │
│ │ │ │
│ ▼ ▼ │
│ ┌─────────────────────────────────────┐ │
│ │ Matcher Engine │ │
│ │ (Label-based group selection) │ │
│ └─────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────┐ │
│ │ Template Renderer │ │
│ │ (Go templates + metadata) │ │
│ └─────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
│
▼
Rendered Config (iPXE, Ignition, etc.)
Data Directory Structure
Matchbox uses a FileStore (default) that reads from -data-path (default: /var/lib/matchbox):
/var/lib/matchbox/
├── groups/ # Machine group definitions (JSON)
│ ├── default.json
│ ├── node1.json
│ └── us-west.json
├── profiles/ # Profile definitions (JSON)
│ ├── worker.json
│ ├── controller.json
│ └── etcd.json
├── ignition/ # Ignition configs (.ign) or Butane (.yaml)
│ ├── worker.ign
│ ├── controller.ign
│ └── butane-example.yaml
├── cloud/ # Cloud-Config templates (DEPRECATED)
│ └── legacy.yaml.tmpl
├── generic/ # Arbitrary config templates
│ ├── setup.cfg
│ └── metadata.yaml.tmpl
└── assets/ # Static files (kernel, initrd)
├── fedora-coreos/
└── flatcar/
Version control: Poseidon recommends keeping /var/lib/matchbox under git for auditability and rollback.
Profiles
Profiles define what to provision: network boot settings (kernel, initrd, args) and config references (Ignition, Cloud-Config, generic).
Profile Schema
{
"id": "worker",
"name": "Fedora CoreOS Worker Node",
"boot": {
"kernel": "/assets/fedora-coreos/36.20220906.3.2/fedora-coreos-36.20220906.3.2-live-kernel-x86_64",
"initrd": [
"--name main /assets/fedora-coreos/36.20220906.3.2/fedora-coreos-36.20220906.3.2-live-initramfs.x86_64.img"
],
"args": [
"initrd=main",
"coreos.live.rootfs_url=http://matchbox.example.com:8080/assets/fedora-coreos/36.20220906.3.2/fedora-coreos-36.20220906.3.2-live-rootfs.x86_64.img",
"coreos.inst.install_dev=/dev/sda",
"coreos.inst.ignition_url=http://matchbox.example.com:8080/ignition?uuid=${uuid}&mac=${mac:hexhyp}"
]
},
"ignition_id": "worker.ign",
"cloud_id": "",
"generic_id": ""
}
Profile Fields
| Field | Type | Required | Description |
|---|---|---|---|
| id | string | ✅ | Unique profile identifier (referenced by groups) |
| name | string | ❌ | Human-readable description |
| boot | object | ❌ | Network boot configuration |
| boot.kernel | string | ❌ | Kernel URL (HTTP/HTTPS or /assets path) |
| boot.initrd | array | ❌ | Initrd URLs (can specify --name for multi-initrd) |
| boot.args | array | ❌ | Kernel command-line arguments |
| ignition_id | string | ❌ | Ignition/Butane config filename in ignition/ |
| cloud_id | string | ❌ | Cloud-Config filename in cloud/ (deprecated) |
| generic_id | string | ❌ | Generic config filename in generic/ |
Boot Configuration Patterns
Pattern 1: Live PXE (RAM-based, ephemeral)
Boot and run OS entirely from RAM, no disk install:
{
"boot": {
"kernel": "/assets/fedora-coreos/VERSION/fedora-coreos-VERSION-live-kernel-x86_64",
"initrd": [
"--name main /assets/fedora-coreos/VERSION/fedora-coreos-VERSION-live-initramfs.x86_64.img"
],
"args": [
"initrd=main",
"coreos.live.rootfs_url=http://matchbox/assets/fedora-coreos/VERSION/fedora-coreos-VERSION-live-rootfs.x86_64.img",
"ignition.config.url=http://matchbox/ignition?uuid=${uuid}&mac=${mac:hexhyp}"
]
}
}
Use case: Diskless workers, testing, ephemeral compute
Pattern 2: Disk Install (persistent)
PXE boot live image, install to disk, reboot to disk:
{
"boot": {
"kernel": "/assets/fedora-coreos/VERSION/fedora-coreos-VERSION-live-kernel-x86_64",
"initrd": [
"--name main /assets/fedora-coreos/VERSION/fedora-coreos-VERSION-live-initramfs.x86_64.img"
],
"args": [
"initrd=main",
"coreos.live.rootfs_url=http://matchbox/assets/fedora-coreos/VERSION/fedora-coreos-VERSION-live-rootfs.x86_64.img",
"coreos.inst.install_dev=/dev/sda",
"coreos.inst.ignition_url=http://matchbox/ignition?uuid=${uuid}&mac=${mac:hexhyp}"
]
}
}
Key difference: coreos.inst.install_dev triggers disk install before reboot
Pattern 3: Multi-initrd (layered)
Multiple initrds can be loaded (e.g., base + drivers):
{
"initrd": [
"--name main /assets/fedora-coreos/VERSION/fedora-coreos-VERSION-live-initramfs.x86_64.img",
"--name drivers /assets/drivers/custom-drivers.img"
],
"args": [
"initrd=main,drivers",
"..."
]
}
Config References
Ignition Configs
Direct Ignition (.ign files):
{
"ignition_id": "worker.ign"
}
File: /var/lib/matchbox/ignition/worker.ign
{
"ignition": { "version": "3.3.0" },
"systemd": {
"units": [{
"name": "example.service",
"enabled": true,
"contents": "[Service]\nType=oneshot\nExecStart=/usr/bin/echo Hello\n\n[Install]\nWantedBy=multi-user.target"
}]
}
}
Butane Configs (transpiled to Ignition):
{
"ignition_id": "worker.yaml"
}
File: /var/lib/matchbox/ignition/worker.yaml
variant: fcos
version: 1.5.0
passwd:
users:
- name: core
ssh_authorized_keys:
- ssh-ed25519 AAAA...
systemd:
units:
- name: etcd.service
enabled: true
Matchbox automatically:
- Detects Butane format (file doesn’t end in .ign or .ignition)
- Transpiles Butane → Ignition using its embedded library
- Renders templates with group metadata
- Serves as Ignition v3.3.0
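The format-detection step boils down to a filename extension check; a minimal sketch (illustrative — the exact rule in Matchbox's source may differ):

```go
package main

import (
	"fmt"
	"strings"
)

// isIgnition reports whether a config filename should be served verbatim
// (Ignition) or transpiled first (Butane), per the extension rule above.
func isIgnition(name string) bool {
	return strings.HasSuffix(name, ".ign") || strings.HasSuffix(name, ".ignition")
}

func main() {
	fmt.Println(isIgnition("worker.ign"))  // true: served as-is
	fmt.Println(isIgnition("worker.yaml")) // false: treated as Butane, transpiled
}
```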
Generic Configs
For non-Ignition configs (scripts, YAML, arbitrary data):
{
"generic_id": "setup-script.sh.tmpl"
}
File: /var/lib/matchbox/generic/setup-script.sh.tmpl
#!/bin/bash
# Rendered with group metadata
NODE_NAME={{.node_name}}
CLUSTER_ID={{.cluster_id}}
echo "Provisioning ${NODE_NAME} in cluster ${CLUSTER_ID}"
Access via: GET /generic?uuid=...&mac=...
Groups
Groups match machines to profiles using selectors (label matching) and provide metadata for template rendering.
Group Schema
{
"id": "node1-worker",
"name": "Worker Node 1",
"profile": "worker",
"selector": {
"mac": "52:54:00:89:d8:10",
"uuid": "550e8400-e29b-41d4-a716-446655440000"
},
"metadata": {
"node_name": "worker-01",
"cluster_id": "prod-cluster",
"etcd_endpoints": "https://10.0.1.10:2379,https://10.0.1.11:2379",
"ssh_authorized_keys": [
"ssh-ed25519 AAAA...",
"ssh-rsa AAAA..."
]
}
}
Group Fields
| Field | Type | Required | Description |
|---|---|---|---|
| id | string | ✅ | Unique group identifier |
| name | string | ❌ | Human-readable description |
| profile | string | ✅ | Profile ID to apply |
| selector | object | ❌ | Label match criteria (omit for default group) |
| metadata | object | ❌ | Key-value data for template rendering |
Selector Matching
Reserved selectors (automatically populated from machine attributes):
| Selector | Source | Example | Normalized |
|---|---|---|---|
| uuid | SMBIOS UUID | 550e8400-e29b-41d4-a716-446655440000 | Lowercase |
| mac | Primary NIC MAC | 52:54:00:89:d8:10 | Colon-separated |
| hostname | Network hostname | node1.example.com | As reported |
| serial | Hardware serial | VMware-42 1a... | As reported |
Custom selectors (passed as query params):
{
"selector": {
"region": "us-west",
"environment": "production",
"rack": "A23"
}
}
Matching request: /ipxe?mac=52:54:00:89:d8:10&region=us-west&environment=production&rack=A23
Matching logic:
- All selector key-value pairs must match request labels (AND logic)
- Most specific group wins (most selector matches)
- If multiple groups have same specificity, first match wins (undefined order)
- Groups with no selectors = default group (matches all)
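The matching rules above can be sketched in a few lines of Go. Types and function names here are illustrative, not Matchbox's implementation:

```go
package main

import "fmt"

// Group pairs a profile with selector labels (illustrative stand-in).
type Group struct {
	ID       string
	Profile  string
	Selector map[string]string
}

// match reports whether every selector key/value appears in the request labels (AND logic).
func match(selector, labels map[string]string) bool {
	for k, v := range selector {
		if labels[k] != v {
			return false
		}
	}
	return true
}

// pick returns the matching group with the most selector pairs: most-specific
// wins, a nil/empty selector acts as the default group, and ties fall to
// iteration order (i.e. undefined, as the docs warn).
func pick(groups []Group, labels map[string]string) (Group, bool) {
	var best Group
	found := false
	for _, g := range groups {
		if match(g.Selector, labels) && (!found || len(g.Selector) > len(best.Selector)) {
			best, found = g, true
		}
	}
	return best, found
}

func main() {
	groups := []Group{
		{ID: "us-west-workers", Profile: "worker", Selector: map[string]string{"region": "us-west"}},
		{ID: "node-special", Profile: "controller", Selector: map[string]string{"mac": "52:54:00:89:d8:10", "region": "us-west"}},
		{ID: "default", Profile: "worker"}, // no selectors = default group
	}
	g, _ := pick(groups, map[string]string{"mac": "52:54:00:89:d8:10", "region": "us-west"})
	fmt.Println(g.ID) // node-special (2 selector matches beats 1 and 0)
}
```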
Default Groups
Group with empty selector matches all machines:
{
"id": "default-worker",
"name": "Default Worker",
"profile": "worker",
"metadata": {
"environment": "dev"
}
}
⚠️ Warning: Avoid multiple default groups (non-deterministic matching)
Example: Region-based Matching
Group 1: US-West Workers
{
"id": "us-west-workers",
"profile": "worker",
"selector": {
"region": "us-west"
},
"metadata": {
"etcd_endpoints": "https://etcd-usw.example.com:2379"
}
}
Group 2: EU Workers
{
"id": "eu-workers",
"profile": "worker",
"selector": {
"region": "eu"
},
"metadata": {
"etcd_endpoints": "https://etcd-eu.example.com:2379"
}
}
Group 3: Specific Machine Override
{
"id": "node-special",
"profile": "controller",
"selector": {
"mac": "52:54:00:89:d8:10",
"region": "us-west"
},
"metadata": {
"role": "controller"
}
}
Matching precedence:
- Machine with mac=52:54:00:89:d8:10&region=us-west → node-special (2 selectors)
- Machine with region=us-west → us-west-workers (1 selector)
- Machine with region=eu → eu-workers (1 selector)
Templating System
Matchbox uses Go’s text/template for rendering configs with group metadata.
Template Context
Available variables in Ignition/Butane/Cloud-Config/generic templates:
// Group metadata (all keys from group.metadata)
{{.node_name}}
{{.cluster_id}}
{{.etcd_endpoints}}
// Group selectors (normalized)
{{.mac}} // e.g., "52:54:00:89:d8:10"
{{.uuid}} // e.g., "550e8400-..."
{{.region}} // Custom selector
// Request query params (raw)
{{.request.query.mac}} // As passed in URL
{{.request.query.foo}} // Custom query param
{{.request.raw_query}} // Full query string
// Special functions
{{if index . "ssh_authorized_keys"}} // Check if key exists
{{range $element := .ssh_authorized_keys}} // Iterate arrays
Example: Templated Butane Config
Group metadata:
{
"metadata": {
"node_name": "worker-01",
"ssh_authorized_keys": [
"ssh-ed25519 AAA...",
"ssh-rsa BBB..."
],
"ntp_servers": ["time1.google.com", "time2.google.com"]
}
}
Butane template: /var/lib/matchbox/ignition/worker.yaml
variant: fcos
version: 1.5.0
storage:
files:
- path: /etc/hostname
mode: 0644
contents:
inline: {{.node_name}}
- path: /etc/systemd/timesyncd.conf
mode: 0644
contents:
inline: |
[Time]
{{range $server := .ntp_servers}}
NTP={{$server}}
{{end}}
{{if index . "ssh_authorized_keys"}}
passwd:
users:
- name: core
ssh_authorized_keys:
{{range $key := .ssh_authorized_keys}}
- {{$key}}
{{end}}
{{end}}
Rendered Ignition (simplified):
{
"ignition": {"version": "3.3.0"},
"storage": {
"files": [
{
"path": "/etc/hostname",
"contents": {"source": "data:,worker-01"},
"mode": 420
},
{
"path": "/etc/systemd/timesyncd.conf",
"contents": {"source": "data:,%5BTime%5D%0ANTP%3Dtime1.google.com%0ANTP%3Dtime2.google.com"},
"mode": 420
}
]
},
"passwd": {
"users": [{
"name": "core",
"sshAuthorizedKeys": ["ssh-ed25519 AAA...", "ssh-rsa BBB..."]
}]
}
}
Template Best Practices
- Prefer external rendering: Use Terraform + ct_config provider for complex templates
- Validate Butane: Use strict: true in Terraform or butane --strict
- Escape carefully: Go templates use {{}}, Butane uses YAML - mind the interaction
- Test rendering: Request /ignition?mac=... directly to inspect output
- Version control: Keep templates + groups in git for auditability
Warning: .request is reserved for query param access. Group metadata with "request": {...} will be overwritten.
Reserved keys:
- request.query.* - Query parameters
- request.raw_query - Raw query string
API Integration
HTTP Endpoints (Read-only)
| Endpoint | Purpose | Template Context |
|---|---|---|
| /ipxe | iPXE boot script | Profile boot section |
| /grub | GRUB config | Profile boot section |
| /ignition | Ignition config | Group metadata + selectors + query |
| /cloud | Cloud-Config (deprecated) | Group metadata + selectors + query |
| /generic | Generic config | Group metadata + selectors + query |
| /metadata | Key-value env format | Group metadata + selectors + query |
Example metadata endpoint response:
GET /metadata?mac=52:54:00:89:d8:10&foo=bar
NODE_NAME=worker-01
CLUSTER_ID=prod
MAC=52:54:00:89:d8:10
REQUEST_QUERY_MAC=52:54:00:89:d8:10
REQUEST_QUERY_FOO=bar
REQUEST_RAW_QUERY=mac=52:54:00:89:d8:10&foo=bar
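The env-style response above could be produced along these lines. This is an illustrative sketch: the sorted key order is an assumption for determinism, and Matchbox's actual ordering may differ:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// envFormat renders a flat metadata map in the KEY=VALUE style served by
// the /metadata endpoint: keys uppercased, one pair per line.
func envFormat(meta map[string]string) string {
	keys := make([]string, 0, len(meta))
	for k := range meta {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic output (assumption, for readability)

	var b strings.Builder
	for _, k := range keys {
		fmt.Fprintf(&b, "%s=%s\n", strings.ToUpper(k), meta[k])
	}
	return b.String()
}

func main() {
	fmt.Print(envFormat(map[string]string{
		"node_name": "worker-01",
		"mac":       "52:54:00:89:d8:10",
	}))
	// MAC=52:54:00:89:d8:10
	// NODE_NAME=worker-01
}
```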
gRPC API (Authenticated, mutable)
Used by terraform-provider-matchbox for declarative infrastructure:
Terraform example:
provider "matchbox" {
endpoint = "matchbox.example.com:8081"
client_cert = file("~/.matchbox/client.crt")
client_key = file("~/.matchbox/client.key")
ca = file("~/.matchbox/ca.crt")
}
resource "matchbox_profile" "worker" {
name = "worker"
kernel = "/assets/fedora-coreos/.../kernel"
initrd = ["--name main /assets/fedora-coreos/.../initramfs.img"]
args = [
"initrd=main",
"coreos.inst.install_dev=/dev/sda",
"coreos.inst.ignition_url=${var.matchbox_http_endpoint}/ignition?uuid=$${uuid}&mac=$${mac:hexhyp}"
]
raw_ignition = data.ct_config.worker.rendered
}
resource "matchbox_group" "node1" {
name = "node1"
profile = matchbox_profile.worker.name
selector = {
mac = "52:54:00:89:d8:10"
}
metadata = {
node_name = "worker-01"
}
}
Operations:
- CreateProfile, GetProfile, UpdateProfile, DeleteProfile
- CreateGroup, GetGroup, UpdateGroup, DeleteGroup
TLS client authentication required (see deployment docs)
Configuration Workflow
┌─────────────────────────────────────────────────────────────┐
│ 1. Write Butane configs (YAML) │
│ - worker.yaml, controller.yaml │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ 2. Terraform ct_config transpiles Butane → Ignition │
│ data "ct_config" "worker" { │
│ content = file("worker.yaml") │
│ strict = true │
│ } │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ 3. Terraform creates profiles + groups in Matchbox │
│ matchbox_profile.worker → gRPC CreateProfile() │
│ matchbox_group.node1 → gRPC CreateGroup() │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ 4. Machine PXE boots, queries Matchbox │
│ GET /ipxe?mac=... → matches group → returns profile │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ 5. Ignition fetches rendered config │
│ GET /ignition?mac=... → Matchbox returns Ignition │
└─────────────────────────────────────────────────────────────┘
Benefits:
- Rich Terraform templating (loops, conditionals, external data sources)
- Butane validation before deployment
- Declarative infrastructure (can terraform plan before apply)
- Version control workflow (git + CI/CD)
Alternative: Manual FileStore
┌─────────────────────────────────────────────────────────────┐
│ 1. Create profile JSON manually │
│ /var/lib/matchbox/profiles/worker.json │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ 2. Create group JSON manually │
│ /var/lib/matchbox/groups/node1.json │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ 3. Write Ignition/Butane config │
│ /var/lib/matchbox/ignition/worker.ign │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ 4. Restart matchbox (to reload FileStore) │
│ systemctl restart matchbox │
└─────────────────────────────────────────────────────────────┘
Drawbacks:
- Manual file management
- No validation before deployment
- Requires matchbox restart to pick up changes
- Error-prone for large fleets
Storage Backends
FileStore (Default)
Config: -data-path=/var/lib/matchbox
Pros:
- Simple file-based storage
- Easy to version control (git)
- Human-readable JSON
Cons:
- Requires file system access
- Manual reload for gRPC-created resources
Custom Store (Extensible)
Matchbox’s Store interface allows custom backends:
type Store interface {
ProfileGet(id string) (*Profile, error)
GroupGet(id string) (*Group, error)
IgnitionGet(name string) (string, error)
// ... other methods
}
Potential custom stores:
- etcd backend (for HA Matchbox)
- Database backend (PostgreSQL, MySQL)
- S3/object storage backend
Note: Not officially provided by Matchbox project; requires custom implementation
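As a sketch of this extension point, an in-memory backend satisfying the three Get methods shown above might look like the following. The resource types and error values are stand-ins, not Matchbox's:

```go
package main

import (
	"errors"
	"fmt"
)

// Minimal stand-ins for Matchbox's resource types (illustrative only).
type Profile struct{ ID string }
type Group struct{ ID string }

var errNotFound = errors.New("resource not found")

// memStore keeps everything in maps; a real HA backend would talk to etcd
// or a database here instead.
type memStore struct {
	profiles  map[string]*Profile
	groups    map[string]*Group
	ignitions map[string]string
}

func (s *memStore) ProfileGet(id string) (*Profile, error) {
	if p, ok := s.profiles[id]; ok {
		return p, nil
	}
	return nil, errNotFound
}

func (s *memStore) GroupGet(id string) (*Group, error) {
	if g, ok := s.groups[id]; ok {
		return g, nil
	}
	return nil, errNotFound
}

func (s *memStore) IgnitionGet(name string) (string, error) {
	if c, ok := s.ignitions[name]; ok {
		return c, nil
	}
	return "", errNotFound
}

func main() {
	s := &memStore{
		profiles:  map[string]*Profile{"worker": {ID: "worker"}},
		ignitions: map[string]string{"worker.ign": `{"ignition":{"version":"3.3.0"}}`},
	}
	p, _ := s.ProfileGet("worker")
	fmt.Println(p.ID) // worker
}
```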
Security Considerations
gRPC API authentication: Requires TLS client certificates
- ca.crt - CA that signed client certs
- server.crt / server.key - Server TLS identity
- client.crt / client.key - Client credentials (Terraform)
HTTP endpoints are read-only: No auth, machines fetch configs
- Do NOT put secrets in Ignition configs
- Use external secret stores (Vault, GCP Secret Manager)
- Reference secrets via Ignition files.source with auth headers
Network segmentation: Matchbox on provisioning VLAN, isolate from production
Config validation: Validate Ignition/Butane before deployment to avoid boot failures
Audit logging: Version control groups/profiles; log gRPC API changes
Operational Tips
Test groups with curl:
curl 'http://matchbox.example.com:8080/ignition?mac=52:54:00:89:d8:10'
List profiles:
ls -la /var/lib/matchbox/profiles/
Validate Butane:
podman run -i --rm quay.io/coreos/butane:release --strict < worker.yaml
Check group matching:
# Default group (no selectors)
curl http://matchbox.example.com:8080/ignition
# Specific machine
curl 'http://matchbox.example.com:8080/ignition?mac=52:54:00:89:d8:10&uuid=550e8400-e29b-41d4-a716-446655440000'
Backup configs:
tar -czf matchbox-backup-$(date +%F).tar.gz /var/lib/matchbox/{groups,profiles,ignition}
Summary
Matchbox’s configuration model provides:
- Separation of concerns: Profiles (what) vs Groups (who/where)
- Flexible matching: Label-based, multi-attribute, custom selectors
- Template support: Go templates for dynamic configs (but prefer external rendering)
- API-driven: Terraform integration for GitOps workflows
- Storage options: FileStore (simple) or custom backends (extensible)
- OS-agnostic: Works with any Ignition-based distro (FCOS, Flatcar, RHCOS)
Best practice: Use Terraform + external Butane configs for production; manual FileStore for labs/development.
2 - Deployment Patterns
Matchbox deployment options and operational considerations
Matchbox Deployment Patterns
Analysis of deployment architectures, installation methods, and operational considerations for running Matchbox in production.
Deployment Architectures
Single-Host Deployment
┌─────────────────────────────────────────────────────┐
│ Provisioning Host │
│ ┌─────────────┐ ┌─────────────┐ │
│ │ Matchbox │ │ dnsmasq │ │
│ │ :8080 HTTP │ │ DHCP/TFTP │ │
│ │ :8081 gRPC │ │ :67,:69 │ │
│ └─────────────┘ └─────────────┘ │
│ │ │ │
│ └──────────┬───────────┘ │
│ │ │
│ /var/lib/matchbox/ │
│ ├── groups/ │
│ ├── profiles/ │
│ ├── ignition/ │
│ └── assets/ │
└─────────────────────────────────────────────────────┘
│
│ Network
▼
┌──────────────┐
│ PXE Clients │
└──────────────┘
Use case: Lab, development, small deployments (<50 machines)
Pros:
- Simple setup
- Single service to manage
- Minimal resource requirements
Cons:
- Single point of failure
- No scalability
- Downtime during updates
HA Deployment (Multiple Matchbox Instances)
┌─────────────────────────────────────────────────────┐
│ Load Balancer (Ingress/HAProxy) │
│ :8080 HTTP :8081 gRPC │
└─────────────────────────────────────────────────────┘
│ │
├─────────────┬────────────────┤
▼ ▼ ▼
┌──────────┐ ┌──────────┐ ┌──────────┐
│Matchbox 1│ │Matchbox 2│ │Matchbox N│
│ (Pod/VM) │ │ (Pod/VM) │ │ (Pod/VM) │
└──────────┘ └──────────┘ └──────────┘
│ │ │
└─────────────┴────────────────┘
│
▼
┌────────────────────────┐
│ Shared Storage │
│ /var/lib/matchbox │
│ (NFS, PV, ConfigMap) │
└────────────────────────┘
Use case: Production, datacenter-scale (100+ machines)
Pros:
- High availability (no single point of failure)
- Rolling updates (zero downtime)
- Load distribution
Cons:
- Complex storage (shared volume or etcd backend)
- More infrastructure required
Storage options:
- Kubernetes PersistentVolume (RWX mode)
- NFS share mounted on multiple hosts
- Custom etcd-backed Store (requires custom implementation)
- Git-sync sidecar (read-only, periodic pull)
Kubernetes Deployment
┌─────────────────────────────────────────────────────┐
│ Ingress Controller │
│ matchbox.example.com → Service matchbox:8080 │
│ matchbox-rpc.example.com → Service matchbox:8081 │
└─────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────┐
│ Service: matchbox (ClusterIP) │
│ ports: 8080/TCP, 8081/TCP │
└─────────────────────────────────────────────────────┘
│
┌───────────┴───────────┐
▼ ▼
┌─────────────────┐ ┌─────────────────┐
│ Pod: matchbox │ │ Pod: matchbox │
│ replicas: 2+ │ │ replicas: 2+ │
└─────────────────┘ └─────────────────┘
│ │
└───────────┬───────────┘
▼
┌─────────────────────────────────────────────────────┐
│ PersistentVolumeClaim: matchbox-data │
│ /var/lib/matchbox (RWX mode) │
└─────────────────────────────────────────────────────┘
Manifest structure:
contrib/k8s/
├── matchbox-deployment.yaml # Deployment + replicas
├── matchbox-service.yaml # Service (8080, 8081)
├── matchbox-ingress.yaml # Ingress (HTTP + gRPC TLS)
└── matchbox-pvc.yaml # PersistentVolumeClaim
Key configurations:
Secret for gRPC TLS:
kubectl create secret generic matchbox-rpc \
--from-file=ca.crt \
--from-file=server.crt \
--from-file=server.key
Ingress for gRPC (TLS passthrough):
metadata:
annotations:
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
Volume mount:
volumes:
- name: data
persistentVolumeClaim:
claimName: matchbox-data
volumeMounts:
- name: data
mountPath: /var/lib/matchbox
Use case: Cloud-native deployments, Kubernetes-based infrastructure
Pros:
- Native Kubernetes primitives (Deployments, Services, Ingress)
- Rolling updates via Deployment strategy
- Easy scaling (kubectl scale)
- Health checks + auto-restart
Cons:
- Requires RWX PersistentVolume or shared storage
- Ingress TLS configuration complexity (gRPC passthrough)
- Cluster dependency (can’t provision cluster bootstrap nodes)
⚠️ Bootstrap problem: Kubernetes-hosted Matchbox can’t PXE boot its own cluster nodes (chicken-and-egg). Use external Matchbox for initial cluster bootstrap, then migrate.
Installation Methods
1. Binary Installation (systemd)
Recommended for: Bare-metal hosts, VMs, traditional Linux servers
Steps:
Download and verify:
wget https://github.com/poseidon/matchbox/releases/download/v0.10.0/matchbox-v0.10.0-linux-amd64.tar.gz
wget https://github.com/poseidon/matchbox/releases/download/v0.10.0/matchbox-v0.10.0-linux-amd64.tar.gz.asc
gpg --verify matchbox-v0.10.0-linux-amd64.tar.gz.asc
Extract and install:
tar xzf matchbox-v0.10.0-linux-amd64.tar.gz
sudo cp matchbox-v0.10.0-linux-amd64/matchbox /usr/local/bin/
Create user and directories:
sudo useradd -U matchbox
sudo mkdir -p /var/lib/matchbox/{assets,groups,profiles,ignition}
sudo chown -R matchbox:matchbox /var/lib/matchbox
Install systemd unit:
sudo cp contrib/systemd/matchbox.service /etc/systemd/system/
Configure via systemd dropin:
sudo systemctl edit matchbox
[Service]
Environment="MATCHBOX_ADDRESS=0.0.0.0:8080"
Environment="MATCHBOX_RPC_ADDRESS=0.0.0.0:8081"
Environment="MATCHBOX_LOG_LEVEL=debug"
Start service:
sudo systemctl daemon-reload
sudo systemctl start matchbox
sudo systemctl enable matchbox
Pros:
- Direct control over service
- Easy log access (journalctl -u matchbox)
- Native OS integration
Cons:
- Manual updates required
- OS dependency (package compatibility)
2. Container Deployment (Docker/Podman)
Recommended for: Docker hosts, quick testing, immutable infrastructure
Docker:
mkdir -p /var/lib/matchbox/assets
docker run -d --name matchbox \
--net=host \
-v /var/lib/matchbox:/var/lib/matchbox:Z \
-v /etc/matchbox:/etc/matchbox:Z,ro \
quay.io/poseidon/matchbox:v0.10.0 \
-address=0.0.0.0:8080 \
-rpc-address=0.0.0.0:8081 \
-log-level=debug
Podman:
podman run -d --name matchbox \
--net=host \
-v /var/lib/matchbox:/var/lib/matchbox:Z \
-v /etc/matchbox:/etc/matchbox:Z,ro \
quay.io/poseidon/matchbox:v0.10.0 \
-address=0.0.0.0:8080 \
-rpc-address=0.0.0.0:8081 \
-log-level=debug
Volume mounts:
- /var/lib/matchbox - Data directory (groups, profiles, configs, assets)
- /etc/matchbox - TLS certificates (ca.crt, server.crt, server.key)
Network mode:
- --net=host - Required for DHCP/TFTP interaction on the same host
- Bridge mode possible if Matchbox is on a separate host from dnsmasq
Pros:
- Immutable deployments
- Easy updates (pull new image)
- Portable across hosts
Cons:
- Volume management complexity
- SELinux considerations (:Z flag)
3. Kubernetes Deployment
Recommended for: Kubernetes environments, cloud platforms
Quick start:
# Create TLS secret for gRPC
kubectl create secret generic matchbox-rpc \
--from-file=ca.crt=~/.matchbox/ca.crt \
--from-file=server.crt=~/.matchbox/server.crt \
--from-file=server.key=~/.matchbox/server.key
# Deploy manifests
kubectl apply -R -f contrib/k8s/
# Check status
kubectl get pods -l app=matchbox
kubectl get svc matchbox
kubectl get ingress matchbox matchbox-rpc
Persistence options:
Option 1: emptyDir (ephemeral, dev only):
volumes:
- name: data
emptyDir: {}
Option 2: PersistentVolumeClaim (production):
volumes:
- name: data
persistentVolumeClaim:
claimName: matchbox-data
Option 3: ConfigMap (static configs):
volumes:
- name: groups
configMap:
name: matchbox-groups
- name: profiles
configMap:
name: matchbox-profiles
Option 4: Git-sync sidecar (GitOps):
initContainers:
- name: git-sync
image: k8s.gcr.io/git-sync:v3.6.3
env:
- name: GIT_SYNC_REPO
value: https://github.com/example/matchbox-configs
- name: GIT_SYNC_DEST
value: /var/lib/matchbox
volumeMounts:
- name: data
mountPath: /var/lib/matchbox
Pros:
- Native k8s features (scaling, health checks, rolling updates)
- Ingress integration
- GitOps workflows
Cons:
- Complexity (Ingress, PVC, TLS)
- Can’t bootstrap own cluster
Network Boot Environment Setup
Matchbox requires separate DHCP/TFTP/DNS services. Options:
Option 1: dnsmasq Container (Quickest)
Use case: Lab, testing, environments without existing DHCP
Full DHCP + TFTP + DNS:
docker run -d --name dnsmasq \
--cap-add=NET_ADMIN \
--net=host \
quay.io/poseidon/dnsmasq:latest \
-d -q \
--dhcp-range=192.168.1.3,192.168.1.254,30m \
--enable-tftp \
--tftp-root=/var/lib/tftpboot \
--dhcp-match=set:bios,option:client-arch,0 \
--dhcp-boot=tag:bios,undionly.kpxe \
--dhcp-match=set:efi64,option:client-arch,9 \
--dhcp-boot=tag:efi64,ipxe.efi \
--dhcp-userclass=set:ipxe,iPXE \
--dhcp-boot=tag:ipxe,http://matchbox.example.com:8080/boot.ipxe \
--address=/matchbox.example.com/192.168.1.2 \
--log-queries \
--log-dhcp
Proxy DHCP (alongside existing DHCP):
docker run -d --name dnsmasq \
--cap-add=NET_ADMIN \
--net=host \
quay.io/poseidon/dnsmasq:latest \
-d -q \
--dhcp-range=192.168.1.1,proxy,255.255.255.0 \
--enable-tftp \
--tftp-root=/var/lib/tftpboot \
--dhcp-userclass=set:ipxe,iPXE \
--pxe-service=tag:#ipxe,x86PC,"PXE chainload to iPXE",undionly.kpxe \
--pxe-service=tag:ipxe,x86PC,"iPXE",http://matchbox.example.com:8080/boot.ipxe \
--log-queries \
--log-dhcp
Included files: undionly.kpxe, ipxe.efi, grub.efi (bundled in image)
Option 2: Existing DHCP/TFTP Infrastructure
Use case: Enterprise environments with network admin policies
Required DHCP options (ISC DHCP example):
subnet 192.168.1.0 netmask 255.255.255.0 {
range 192.168.1.10 192.168.1.250;
# BIOS clients
if option architecture-type = 00:00 {
filename "undionly.kpxe";
}
# UEFI clients
elsif option architecture-type = 00:09 {
filename "ipxe.efi";
}
# iPXE clients
elsif exists user-class and option user-class = "iPXE" {
filename "http://matchbox.example.com:8080/boot.ipxe";
}
next-server 192.168.1.100; # TFTP server IP
}
TFTP files (place in the TFTP root): undionly.kpxe (BIOS clients), ipxe.efi (UEFI clients)
Option 3: iPXE-only (No PXE Chainload)
Use case: Modern hardware with native iPXE firmware
DHCP config (simpler):
filename "http://matchbox.example.com:8080/boot.ipxe";
No TFTP server needed (iPXE fetches directly via HTTP)
Limitation: Doesn’t support legacy BIOS with basic PXE ROM
TLS Certificate Setup
gRPC API requires TLS client certificates for authentication.
Option 1: Provided cert-gen Script
cd scripts/tls
export SAN=DNS.1:matchbox.example.com,IP.1:192.168.1.100
./cert-gen
Generates:
- ca.crt - Self-signed CA
- server.crt, server.key - Server credentials
- client.crt, client.key - Client credentials (for Terraform)
Install server certs:
sudo mkdir -p /etc/matchbox
sudo cp ca.crt server.crt server.key /etc/matchbox/
sudo chown -R matchbox:matchbox /etc/matchbox
Save client certs for Terraform:
mkdir -p ~/.matchbox
cp client.crt client.key ca.crt ~/.matchbox/
Option 2: Corporate PKI
Preferred for production: Use organization’s certificate authority
Requirements:
- Server cert with SAN DNS:matchbox.example.com
- Client cert issued by the same CA
- CA cert for validation
Matchbox flags:
-ca-file=/etc/matchbox/ca.crt
-cert-file=/etc/matchbox/server.crt
-key-file=/etc/matchbox/server.key
Terraform provider config:
provider "matchbox" {
endpoint = "matchbox.example.com:8081"
client_cert = file("/path/to/client.crt")
client_key = file("/path/to/client.key")
ca = file("/path/to/ca.crt")
}
Option 3: Let’s Encrypt (HTTP API only)
Note: gRPC requires client cert auth (incompatible with Let’s Encrypt)
Use case: TLS for HTTP endpoints only (read-only API)
Matchbox flags:
-web-ssl=true
-web-cert-file=/etc/letsencrypt/live/matchbox.example.com/fullchain.pem
-web-key-file=/etc/letsencrypt/live/matchbox.example.com/privkey.pem
Limitation: Still need self-signed certs for gRPC API
Configuration Flags
Core Flags
| Flag | Default | Description |
|---|---|---|
| -address | 127.0.0.1:8080 | HTTP API listen address |
| -rpc-address | (empty) | gRPC API listen address (empty = disabled) |
| -data-path | /var/lib/matchbox | Data directory (FileStore) |
| -assets-path | /var/lib/matchbox/assets | Static assets directory |
| -log-level | info | Logging level (debug, info, warn, error) |
TLS Flags (gRPC)
| Flag | Default | Description |
|------|---------|-------------|
| `-ca-file` | `/etc/matchbox/ca.crt` | CA certificate for client verification |
| `-cert-file` | `/etc/matchbox/server.crt` | Server TLS certificate |
| `-key-file` | `/etc/matchbox/server.key` | Server TLS private key |
TLS Flags (HTTP, optional)
| Flag | Default | Description |
|------|---------|-------------|
| `-web-ssl` | `false` | Enable TLS for HTTP API |
| `-web-cert-file` | (empty) | HTTP server TLS certificate |
| `-web-key-file` | (empty) | HTTP server TLS private key |
Environment Variables
All flags can be set via environment variables with MATCHBOX_ prefix:
export MATCHBOX_ADDRESS=0.0.0.0:8080
export MATCHBOX_RPC_ADDRESS=0.0.0.0:8081
export MATCHBOX_LOG_LEVEL=debug
export MATCHBOX_DATA_PATH=/custom/path
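For a systemd deployment, the same variables can be set directly in the unit. A minimal sketch; the unit path, user, and binary location are assumptions, not taken from the Matchbox docs:

```ini
# /etc/systemd/system/matchbox.service (sketch)
[Unit]
Description=Matchbox network boot service
After=network-online.target

[Service]
Environment=MATCHBOX_ADDRESS=0.0.0.0:8080
Environment=MATCHBOX_RPC_ADDRESS=0.0.0.0:8081
Environment=MATCHBOX_DATA_PATH=/var/lib/matchbox
User=matchbox
Group=matchbox
ExecStart=/usr/local/bin/matchbox

[Install]
WantedBy=multi-user.target
```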
Operational Considerations
Firewall Configuration
Matchbox host:
firewall-cmd --permanent --add-port=8080/tcp # HTTP API
firewall-cmd --permanent --add-port=8081/tcp # gRPC API
firewall-cmd --reload
dnsmasq host (if separate):
firewall-cmd --permanent --add-service=dhcp
firewall-cmd --permanent --add-service=tftp
firewall-cmd --permanent --add-service=dns # optional
firewall-cmd --reload
Monitoring
Health check endpoints:
# HTTP API
curl http://matchbox.example.com:8080
# Should return: matchbox
# gRPC API
openssl s_client -connect matchbox.example.com:8081 \
-CAfile ~/.matchbox/ca.crt \
-cert ~/.matchbox/client.crt \
-key ~/.matchbox/client.key
Prometheus metrics: Not built-in; consider adding reverse proxy (e.g., nginx) with metrics exporter
Logs (systemd):
journalctl -u matchbox -f
Logs (container):
docker logs -f matchbox
Backup Strategy
What to backup:
- `/var/lib/matchbox/{groups,profiles,ignition}` - Configs
- `/etc/matchbox/*.{crt,key}` - TLS certificates
- Terraform state (if using the Terraform provider)
Backup command:
tar -czf matchbox-backup-$(date +%F).tar.gz \
/var/lib/matchbox/{groups,profiles,ignition} \
/etc/matchbox
Restore:
tar -xzf matchbox-backup-YYYY-MM-DD.tar.gz -C /
sudo chown -R matchbox:matchbox /var/lib/matchbox
sudo systemctl restart matchbox
GitOps approach: Store configs in git repository for versioning and auditability
Updates
Binary deployment:
# Download new version
wget https://github.com/poseidon/matchbox/releases/download/vX.Y.Z/matchbox-vX.Y.Z-linux-amd64.tar.gz
tar xzf matchbox-vX.Y.Z-linux-amd64.tar.gz
# Replace binary
sudo systemctl stop matchbox
sudo cp matchbox-vX.Y.Z-linux-amd64/matchbox /usr/local/bin/
sudo systemctl start matchbox
Container deployment:
docker pull quay.io/poseidon/matchbox:vX.Y.Z
docker stop matchbox
docker rm matchbox
docker run -d --name matchbox ... quay.io/poseidon/matchbox:vX.Y.Z ...
Kubernetes deployment:
kubectl set image deployment/matchbox matchbox=quay.io/poseidon/matchbox:vX.Y.Z
kubectl rollout status deployment/matchbox
Scaling Considerations
Vertical scaling (single instance):
- CPU: Minimal (config rendering is lightweight)
- Memory: ~50MB base + asset cache
- Disk: Depends on cached assets (100MB - 10GB+)
Horizontal scaling (multiple instances):
- Stateless HTTP API (load balance round-robin)
- Shared storage required (RWX PV, NFS, or custom backend)
- gRPC API can be load-balanced with gRPC-aware LB
Asset serving optimization:
- Use CDN or cache proxy for remote assets
- Local asset caching for <100 machines
- Dedicated HTTP server (nginx) for large deployments (1000+ machines)
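For the dedicated-HTTP-server option, a minimal sketch of an nginx vhost that serves the asset directory directly; the port, server name, and paths are assumptions:

```nginx
# Hypothetical nginx vhost offloading /assets from Matchbox
server {
    listen 8082;
    server_name assets.example.com;

    location /assets/ {
        root /var/lib/matchbox;   # serves /var/lib/matchbox/assets/...
        sendfile on;
        tcp_nopush on;
    }
}
```

Profiles would then reference kernel/initrd URLs on this vhost instead of Matchbox's own `/assets/` endpoint.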
Security Best Practices
Don’t store secrets in Ignition configs
- Use Ignition `files.source` with auth headers to fetch from Vault
- Or provision a minimal config and fetch secrets post-boot
Network segmentation
- Provision VLAN isolated from production
- Firewall rules: only allow provisioning traffic
gRPC API access control
- Client cert authentication (mandatory)
- Restrict cert issuance to authorized personnel/systems
- Rotate certs periodically
Audit logging
- Version control groups/profiles (git)
- Log gRPC API changes (Terraform state tracking)
- Monitor HTTP endpoint access
Validate configs before deployment
- `fcct --strict` for Butane configs
- `terraform plan` before apply
- Test in dev environment first
Troubleshooting
Common Issues
1. Machines not PXE booting:
# Check DHCP responses
tcpdump -i eth0 port 67 and port 68
# Verify TFTP files
ls -la /var/lib/tftpboot/
curl tftp://192.168.1.100/undionly.kpxe
# Check Matchbox accessibility
curl http://matchbox.example.com:8080/boot.ipxe
2. 404 Not Found on /ignition:
# Test group matching
curl 'http://matchbox.example.com:8080/ignition?mac=52:54:00:89:d8:10'
# Check group exists
ls -la /var/lib/matchbox/groups/
# Check profile referenced by group exists
ls -la /var/lib/matchbox/profiles/
# Verify ignition_id file exists
ls -la /var/lib/matchbox/ignition/
3. gRPC connection refused (Terraform):
# Test TLS connection
openssl s_client -connect matchbox.example.com:8081 \
-CAfile ~/.matchbox/ca.crt \
-cert ~/.matchbox/client.crt \
-key ~/.matchbox/client.key
# Check Matchbox gRPC is listening
sudo ss -tlnp | grep 8081
# Verify firewall
sudo firewall-cmd --list-ports
4. Ignition config validation errors:
# Validate Butane locally
podman run -i --rm quay.io/coreos/fcct:release --strict < config.yaml
# Fetch rendered Ignition
curl 'http://matchbox.example.com:8080/ignition?mac=...' | jq .
# Validate Ignition spec
curl 'http://matchbox.example.com:8080/ignition?mac=...' | \
podman run -i --rm quay.io/coreos/ignition-validate:latest
Summary
Matchbox deployment considerations:
- Architecture: Single-host (dev/lab) vs HA (production) vs Kubernetes
- Installation: Binary (systemd), container (Docker/Podman), or Kubernetes manifests
- Network boot: dnsmasq container (quick), existing infrastructure (enterprise), or iPXE-only (modern)
- TLS: Self-signed (dev), corporate PKI (production), Let’s Encrypt (HTTP only)
- Scaling: Vertical (simple) vs horizontal (requires shared storage)
- Security: Client cert auth, network segmentation, no secrets in configs
- Operations: Backup configs, GitOps workflow, monitoring/logging
Recommendation for production:
- HA deployment (2+ instances) with load balancer
- Shared storage (NFS or RWX PV on Kubernetes)
- Corporate PKI for TLS certificates
- GitOps workflow (Terraform + git-controlled configs)
- Network segmentation (dedicated provisioning VLAN)
- Prometheus/Grafana monitoring
3 - Network Boot Support
Detailed analysis of Matchbox’s network boot capabilities
Network Boot Support in Matchbox
Matchbox provides comprehensive network boot support for bare-metal provisioning, supporting multiple boot firmware types and protocols.
Overview
Matchbox serves as an HTTP entrypoint for network-booted machines but does not implement DHCP, TFTP, or DNS services itself. Instead, it integrates with existing network infrastructure (or companion services like dnsmasq) to provide a complete PXE boot solution.
Boot Protocol Support
1. PXE (Preboot Execution Environment)
Legacy BIOS support via chainloading to iPXE:
Machine BIOS → DHCP (gets TFTP server) → TFTP (gets undionly.kpxe)
→ iPXE firmware → HTTP (Matchbox /boot.ipxe)
Key characteristics:
- Requires a TFTP server to serve `undionly.kpxe` (iPXE bootloader)
- Chainloads from legacy PXE ROM to modern iPXE
- Supports older hardware with basic PXE firmware
- TFTP only used for initial iPXE bootstrap; subsequent downloads via HTTP
2. iPXE (Enhanced PXE)
Primary boot method supported by Matchbox:
iPXE Client → DHCP (gets boot script URL) → HTTP (Matchbox endpoints)
→ Kernel/initrd download → Boot with Ignition config
Endpoints served by Matchbox:
| Endpoint | Purpose |
|----------|---------|
| `/boot.ipxe` | Static script that gathers machine attributes (UUID, MAC, hostname, serial) |
| `/ipxe?<labels>` | Rendered iPXE script with kernel, initrd, and boot args for the matched machine |
| `/assets/` | Optional local caching of kernel/initrd images |
Example iPXE flow:
1. Machine boots with iPXE firmware
2. DHCP response points to `http://matchbox.example.com:8080/boot.ipxe`
3. iPXE fetches `/boot.ipxe`:
   #!ipxe
   chain ipxe?uuid=${uuid}&mac=${mac:hexhyp}&domain=${domain}&hostname=${hostname}&serial=${serial}
4. iPXE requests `/ipxe?uuid=...&mac=...` with machine attributes
5. Matchbox matches the machine to a group/profile and renders an iPXE script:
#!ipxe
kernel /assets/coreos/VERSION/coreos_production_pxe.vmlinuz \
coreos.config.url=http://matchbox.foo:8080/ignition?uuid=${uuid}&mac=${mac:hexhyp} \
coreos.first_boot=1
initrd /assets/coreos/VERSION/coreos_production_pxe_image.cpio.gz
boot
Advantages:
- HTTP downloads (faster than TFTP)
- Scriptable boot logic
- Can fetch configs from HTTP endpoints
- Supports HTTPS (if compiled with TLS support)
3. GRUB2
UEFI firmware support:
UEFI Firmware → DHCP (gets GRUB bootloader) → TFTP (grub.efi)
→ GRUB → HTTP (Matchbox /grub endpoint)
Matchbox endpoint: /grub?<labels>
Example GRUB config rendered by Matchbox:
default=0
timeout=1
menuentry "CoreOS" {
echo "Loading kernel"
linuxefi "(http;matchbox.foo:8080)/assets/coreos/VERSION/coreos_production_pxe.vmlinuz" \
"coreos.config.url=http://matchbox.foo:8080/ignition" "coreos.first_boot"
echo "Loading initrd"
initrdefi "(http;matchbox.foo:8080)/assets/coreos/VERSION/coreos_production_pxe_image.cpio.gz"
}
Use case:
- UEFI systems that prefer GRUB over iPXE
- Environments with existing GRUB network boot infrastructure
4. PXELINUX (Legacy, via TFTP)
While not a primary Matchbox target, PXELINUX clients can be configured to chainload iPXE:
# /var/lib/tftpboot/pxelinux.cfg/default
timeout 10
default iPXE
LABEL iPXE
KERNEL ipxe.lkrn
APPEND dhcp && chain http://matchbox.example.com:8080/boot.ipxe
DHCP Configuration Patterns
Matchbox supports two DHCP deployment models:
Pattern 1: PXE-Enabled DHCP
Full DHCP server provides IP allocation + PXE boot options.
Example dnsmasq configuration:
dhcp-range=192.168.1.1,192.168.1.254,30m
enable-tftp
tftp-root=/var/lib/tftpboot
# Legacy BIOS → chainload to iPXE
dhcp-match=set:bios,option:client-arch,0
dhcp-boot=tag:bios,undionly.kpxe
# UEFI → iPXE
dhcp-match=set:efi32,option:client-arch,6
dhcp-boot=tag:efi32,ipxe.efi
dhcp-match=set:efi64,option:client-arch,9
dhcp-boot=tag:efi64,ipxe.efi
# iPXE clients → Matchbox
dhcp-userclass=set:ipxe,iPXE
dhcp-boot=tag:ipxe,http://matchbox.example.com:8080/boot.ipxe
# DNS for Matchbox
address=/matchbox.example.com/192.168.1.100
Client architecture detection:
- Option 93 (`client-arch`): identifies BIOS (0), UEFI32 (6), UEFI64 (9)
- User class: detects iPXE clients to skip TFTP chainloading
Pattern 2: Proxy DHCP
Runs alongside existing DHCP server; provides only boot options (no IP allocation).
Example dnsmasq proxy-DHCP:
dhcp-range=192.168.1.1,proxy,255.255.255.0
enable-tftp
tftp-root=/var/lib/tftpboot
# Chainload legacy PXE to iPXE
pxe-service=tag:#ipxe,x86PC,"PXE chainload to iPXE",undionly.kpxe
# iPXE clients → Matchbox
dhcp-userclass=set:ipxe,iPXE
pxe-service=tag:ipxe,x86PC,"iPXE",http://matchbox.example.com:8080/boot.ipxe
Benefits:
- Non-invasive: doesn’t replace existing DHCP
- PXE clients receive merged responses from both DHCP servers
- Ideal for environments where main DHCP cannot be modified
Network Boot Flow (Complete)
Scenario: BIOS machine with legacy PXE firmware
┌──────────────────────────────────────────────────────────────────┐
│ 1. Machine powers on, BIOS set to network boot │
└──────────────────────────────────────────────────────────────────┘
│
▼
┌──────────────────────────────────────────────────────────────────┐
│ 2. NIC PXE firmware broadcasts DHCPDISCOVER (PXEClient) │
└──────────────────────────────────────────────────────────────────┘
│
▼
┌──────────────────────────────────────────────────────────────────┐
│ 3. DHCP/proxyDHCP responds with: │
│ - IP address (if full DHCP) │
│ - Next-server: TFTP server IP │
│ - Filename: undionly.kpxe (based on arch=0) │
└──────────────────────────────────────────────────────────────────┘
│
▼
┌──────────────────────────────────────────────────────────────────┐
│ 4. PXE firmware downloads undionly.kpxe via TFTP │
└──────────────────────────────────────────────────────────────────┘
│
▼
┌──────────────────────────────────────────────────────────────────┐
│ 5. Execute iPXE (undionly.kpxe) │
└──────────────────────────────────────────────────────────────────┘
│
▼
┌──────────────────────────────────────────────────────────────────┐
│ 6. iPXE requests DHCP again, identifies as iPXE (user-class) │
└──────────────────────────────────────────────────────────────────┘
│
▼
┌──────────────────────────────────────────────────────────────────┐
│ 7. DHCP responds with boot URL (not TFTP): │
│ http://matchbox.example.com:8080/boot.ipxe │
└──────────────────────────────────────────────────────────────────┘
│
▼
┌──────────────────────────────────────────────────────────────────┐
│ 8. iPXE fetches /boot.ipxe via HTTP: │
│ #!ipxe │
│ chain ipxe?uuid=${uuid}&mac=${mac:hexhyp}&... │
└──────────────────────────────────────────────────────────────────┘
│
▼
┌──────────────────────────────────────────────────────────────────┐
│ 9. iPXE chains to /ipxe?uuid=XXX&mac=YYY (introspected labels) │
└──────────────────────────────────────────────────────────────────┘
│
▼
┌──────────────────────────────────────────────────────────────────┐
│ 10. Matchbox matches machine to group/profile │
│ - Finds most specific group matching labels │
│ - Retrieves profile (kernel, initrd, args, configs) │
└──────────────────────────────────────────────────────────────────┘
│
▼
┌──────────────────────────────────────────────────────────────────┐
│ 11. Matchbox renders iPXE script with: │
│ - kernel URL (local asset or remote HTTPS) │
│ - initrd URL │
│ - kernel args (including ignition.config.url) │
└──────────────────────────────────────────────────────────────────┘
│
▼
┌──────────────────────────────────────────────────────────────────┐
│ 12. iPXE downloads kernel + initrd (HTTP/HTTPS) │
└──────────────────────────────────────────────────────────────────┘
│
▼
┌──────────────────────────────────────────────────────────────────┐
│ 13. iPXE boots kernel with specified args │
└──────────────────────────────────────────────────────────────────┘
│
▼
┌──────────────────────────────────────────────────────────────────┐
│ 14. Fedora CoreOS/Flatcar boots, Ignition runs │
│ - Fetches /ignition?uuid=XXX&mac=YYY from Matchbox │
│ - Matchbox renders Ignition config with group metadata │
│ - Ignition partitions disk, writes files, creates users │
└──────────────────────────────────────────────────────────────────┘
│
▼
┌──────────────────────────────────────────────────────────────────┐
│ 15. System reboots (if disk install), boots from disk │
└──────────────────────────────────────────────────────────────────┘
Asset Serving
Matchbox can serve static assets (kernel, initrd images) from a local directory to reduce bandwidth and increase speed:
Asset directory structure:
/var/lib/matchbox/assets/
├── fedora-coreos/
│ └── 36.20220906.3.2/
│ ├── fedora-coreos-36.20220906.3.2-live-kernel-x86_64
│ ├── fedora-coreos-36.20220906.3.2-live-initramfs.x86_64.img
│ └── fedora-coreos-36.20220906.3.2-live-rootfs.x86_64.img
└── flatcar/
└── 3227.2.0/
├── flatcar_production_pxe.vmlinuz
├── flatcar_production_pxe_image.cpio.gz
└── version.txt
HTTP endpoint: http://matchbox.example.com:8080/assets/
Scripts provided:
- `scripts/get-fedora-coreos` - Download/verify Fedora CoreOS images
- `scripts/get-flatcar` - Download/verify Flatcar Linux images
Profile reference:
{
"boot": {
"kernel": "/assets/fedora-coreos/36.20220906.3.2/fedora-coreos-36.20220906.3.2-live-kernel-x86_64",
"initrd": ["--name main /assets/fedora-coreos/36.20220906.3.2/fedora-coreos-36.20220906.3.2-live-initramfs.x86_64.img"]
}
}
Alternative: Profiles can reference remote HTTPS URLs (requires iPXE compiled with TLS support):
{
"kernel": "https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/36.20220906.3.2/x86_64/fedora-coreos-36.20220906.3.2-live-kernel-x86_64"
}
OS Support
Fedora CoreOS
Boot types:
- Live PXE (RAM-only, ephemeral)
- Install to disk (persistent, recommended)
Required kernel args:
- `coreos.inst.install_dev=/dev/sda` - Target disk for install
- `coreos.inst.ignition_url=http://matchbox/ignition?uuid=${uuid}&mac=${mac:hexhyp}` - Provisioning config
- `coreos.live.rootfs_url=...` - Root filesystem image
Ignition fetch: During first boot, ignition.service fetches config from Matchbox
Flatcar Linux
Boot types:
- Live PXE (RAM-only)
- Install to disk
Required kernel args:
- `flatcar.first_boot=yes` - Marks first boot
- `flatcar.config.url=http://matchbox/ignition?uuid=${uuid}&mac=${mac:hexhyp}` - Ignition config URL
- `flatcar.autologin` - Auto-login to console (optional, dev/debug)
Ignition support: Flatcar uses Ignition v3.x for provisioning
RHEL CoreOS
Supported as it uses Ignition like Fedora CoreOS. Requires Red Hat-specific image sources.
Machine Matching & Labels
Matchbox matches machines to profiles using labels extracted during boot:
Reserved Label Selectors
| Label | Source | Example | Normalized |
|-------|--------|---------|------------|
| `uuid` | SMBIOS UUID | `550e8400-e29b-41d4-a716-446655440000` | Lowercase |
| `mac` | NIC MAC address | `52:54:00:89:d8:10` | Normalized to colons |
| `hostname` | Network boot program | `node1.example.com` | As-is |
| `serial` | Hardware serial | `VMware-42 1a...` | As-is |
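The "normalized to colons" rule matters because iPXE's `${mac:hexhyp}` variable emits hyphen-separated hex. A small Python sketch of the normalization (not Matchbox's actual Go implementation):

```python
def normalize_mac(mac: str) -> str:
    """Normalize a MAC label to lowercase colon-separated form.

    iPXE's ${mac:hexhyp} yields hyphen-separated hex such as
    "52-54-00-89-D8-10"; normalizing lets group selectors use
    one canonical form regardless of how the label arrived.
    """
    return mac.replace("-", ":").lower()

print(normalize_mac("52-54-00-89-D8-10"))  # 52:54:00:89:d8:10
```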
Custom Labels
Groups can match on arbitrary labels passed as query params:
/ipxe?mac=52:54:00:89:d8:10&region=us-west&env=prod
Matching precedence: Most specific group wins (most selector matches)
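The most-specific-match rule can be sketched in a few lines of Python. This is an illustration of the selection logic, not Matchbox's actual Go implementation; group names and labels here are made up:

```python
from urllib.parse import parse_qsl

def match_group(groups, labels):
    """Return the group whose selector matches the most labels.

    A group matches when every selector key/value pair is present in
    the machine's labels; the group with the most selector matches
    wins, so specific groups override a default (empty-selector) group.
    """
    candidates = [
        g for g in groups
        if all(labels.get(k) == v for k, v in g["selector"].items())
    ]
    if not candidates:
        return None
    return max(candidates, key=lambda g: len(g["selector"]))

# Labels arrive as query params on /ipxe or /ignition requests:
labels = dict(parse_qsl("mac=52:54:00:89:d8:10&region=us-west&env=prod"))

groups = [
    {"name": "default", "selector": {}, "profile": "generic"},
    {"name": "us-west-prod",
     "selector": {"region": "us-west", "env": "prod"},
     "profile": "k8s-worker"},
]
print(match_group(groups, labels)["profile"])  # k8s-worker
```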
Firmware Compatibility
| Firmware Type | Client Arch | Boot File | Protocol | Matchbox Support |
|---------------|-------------|-----------|----------|------------------|
| BIOS (legacy PXE) | 0 | undionly.kpxe → iPXE | TFTP → HTTP | ✅ Via chainload |
| UEFI 32-bit | 6 | ipxe.efi | TFTP → HTTP | ✅ |
| UEFI (BIOS compat) | 7 | ipxe.efi | TFTP → HTTP | ✅ |
| UEFI 64-bit | 9 | ipxe.efi | TFTP → HTTP | ✅ |
| Native iPXE | - | N/A | HTTP | ✅ Direct |
| GRUB (UEFI) | - | grub.efi | TFTP → HTTP | ✅ /grub endpoint |
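The table above maps onto the dnsmasq tag rules shown earlier. A hypothetical Python helper illustrating the dispatch (client-arch values are per RFC 4578; the Matchbox URL is an example):

```python
# Boot file chosen from DHCP option 93 (client-arch).
BOOT_FILES = {
    0: "undionly.kpxe",  # BIOS / legacy PXE -> chainload to iPXE
    6: "ipxe.efi",       # UEFI 32-bit
    7: "ipxe.efi",       # UEFI (BIOS compat)
    9: "ipxe.efi",       # UEFI 64-bit
}

def boot_file(client_arch: int, is_ipxe: bool) -> str:
    """iPXE clients (detected via user class) skip TFTP entirely
    and are handed the Matchbox boot script URL directly."""
    if is_ipxe:
        return "http://matchbox.example.com:8080/boot.ipxe"
    return BOOT_FILES.get(client_arch, "undionly.kpxe")

print(boot_file(9, False))  # ipxe.efi
```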
Network Requirements
Firewall rules on Matchbox host:
# HTTP API (read-only)
firewall-cmd --add-port=8080/tcp --permanent
# gRPC API (authenticated, Terraform)
firewall-cmd --add-port=8081/tcp --permanent
DNS requirement:
- `matchbox.example.com` must resolve to the Matchbox server IP
- Can be configured in dnsmasq, corporate DNS, or `/etc/hosts` on the DHCP server
DHCP/TFTP host (if using dnsmasq):
firewall-cmd --add-service=dhcp --permanent
firewall-cmd --add-service=tftp --permanent
firewall-cmd --add-service=dns --permanent # optional
Troubleshooting Tips
Verify Matchbox endpoints:
curl http://matchbox.example.com:8080
# Should return: matchbox
curl http://matchbox.example.com:8080/boot.ipxe
# Should return iPXE script
Test machine matching:
curl 'http://matchbox.example.com:8080/ipxe?mac=52:54:00:89:d8:10'
# Should return rendered iPXE script with kernel/initrd
Check TFTP files:
ls -la /var/lib/tftpboot/
# Should contain: undionly.kpxe, ipxe.efi, grub.efi
Verify DHCP responses:
tcpdump -i eth0 -n port 67 and port 68
# Watch for DHCP offers with PXE options
iPXE console debugging:
- Press Ctrl+B during iPXE boot to enter console
- Commands: `dhcp`, `ifstat`, `show net0/ip`, `chain http://...`
Limitations
- HTTPS support: iPXE must be compiled with crypto support (larger binary, ~80KB vs ~45KB)
- TFTP dependency: Legacy PXE requires TFTP for initial chainload (can’t skip)
- No DHCP/TFTP built-in: Must use external services or dnsmasq container
- Boot firmware variations: Some vendor PXE implementations have quirks
- SecureBoot: iPXE and GRUB must be signed (or SecureBoot disabled)
Reference Implementation: dnsmasq Container
Matchbox project provides quay.io/poseidon/dnsmasq with:
- Pre-configured DHCP/TFTP/DNS service
- Bundled `ipxe.efi`, `undionly.kpxe`, `grub.efi`
- Example configs for PXE-DHCP and proxy-DHCP modes
Quick start (full DHCP):
docker run --rm --cap-add=NET_ADMIN --net=host quay.io/poseidon/dnsmasq \
-d -q \
--dhcp-range=192.168.1.3,192.168.1.254 \
--enable-tftp --tftp-root=/var/lib/tftpboot \
--dhcp-match=set:bios,option:client-arch,0 \
--dhcp-boot=tag:bios,undionly.kpxe \
--dhcp-match=set:efi64,option:client-arch,9 \
--dhcp-boot=tag:efi64,ipxe.efi \
--dhcp-userclass=set:ipxe,iPXE \
--dhcp-boot=tag:ipxe,http://matchbox.example.com:8080/boot.ipxe \
--address=/matchbox.example.com/192.168.1.2 \
--log-queries --log-dhcp
Quick start (proxy-DHCP):
docker run --rm --cap-add=NET_ADMIN --net=host quay.io/poseidon/dnsmasq \
-d -q \
--dhcp-range=192.168.1.1,proxy,255.255.255.0 \
--enable-tftp --tftp-root=/var/lib/tftpboot \
--dhcp-userclass=set:ipxe,iPXE \
--pxe-service=tag:#ipxe,x86PC,"PXE chainload to iPXE",undionly.kpxe \
--pxe-service=tag:ipxe,x86PC,"iPXE",http://matchbox.example.com:8080/boot.ipxe \
--log-queries --log-dhcp
Summary
Matchbox provides robust network boot support through:
- Protocol flexibility: iPXE (primary), GRUB2, legacy PXE (via chainload)
- Firmware compatibility: BIOS and UEFI
- Modern approach: HTTP-based with optional local asset caching
- Clean separation: Matchbox handles config rendering; external services handle DHCP/TFTP
- Production-ready: Used by Typhoon Kubernetes distributions for bare-metal provisioning
4 - Use Case Evaluation
Evaluation of Matchbox for specific use cases and comparison with alternatives
Matchbox Use Case Evaluation
Analysis of Matchbox’s suitability for various use cases, strengths, limitations, and comparison with alternative provisioning solutions.
Use Case Fit Analysis
✅ Ideal Use Cases
1. Bare-Metal Kubernetes Clusters
Scenario: Provisioning 10-1000 physical servers for Kubernetes nodes
Why Matchbox Excels:
- Ignition-native (perfect for Fedora CoreOS/Flatcar)
- Declarative machine provisioning via Terraform
- Label-based matching (region, role, hardware type)
- Integration with Typhoon Kubernetes distribution
- Minimal OS surface (immutable, container-optimized)
Example workflow:
resource "matchbox_profile" "k8s_controller" {
name = "k8s-controller"
kernel = "/assets/fedora-coreos/.../kernel"
raw_ignition = data.ct_config.controller.rendered
}
resource "matchbox_group" "controllers" {
profile = matchbox_profile.k8s_controller.name
selector = {
role = "controller"
}
}
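The `data.ct_config.controller.rendered` reference in the example assumes the poseidon/ct Terraform provider, which transpiles a Butane/Container Linux Config to Ignition. A sketch (the file path is an assumption):

```hcl
data "ct_config" "controller" {
  content = file("${path.module}/controller.yaml")  # Butane config
  strict  = true
}
```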
Alternatives considered:
- Cloud-init + netboot.xyz: Less declarative, no native Ignition support
- Foreman: Heavier, more complex for container-centric workloads
- Metal³: Kubernetes-native but requires existing cluster
Verdict: ⭐⭐⭐⭐⭐ Matchbox is purpose-built for this
2. Lab/Development Environments
Scenario: Rapid PXE boot testing with QEMU/KVM VMs or homelab servers
Why Matchbox Excels:
- Quick setup (binary + dnsmasq container)
- No DHCP infrastructure required (proxy-DHCP mode)
- Localhost deployment (no external dependencies)
- Fast iteration (change configs, re-PXE)
- Included examples and scripts
Example setup:
# Start Matchbox locally
docker run -d --net=host -v /var/lib/matchbox:/var/lib/matchbox \
quay.io/poseidon/matchbox:latest -address=0.0.0.0:8080
# Start dnsmasq on same host
docker run -d --net=host --cap-add=NET_ADMIN \
quay.io/poseidon/dnsmasq ...
Alternatives considered:
- netboot.xyz: Great for manual OS selection, no automation
- PiXE server: Simpler but less flexible matching logic
- Manual iPXE scripts: No dynamic matching, manual maintenance
Verdict: ⭐⭐⭐⭐⭐ Minimal setup, maximum flexibility
3. Edge/Remote Site Provisioning
Scenario: Provision machines at 10+ remote datacenters or edge locations
Why Matchbox Excels:
- Lightweight (single binary, ~20MB)
- Declarative region-based matching
- Centralized config management (Terraform)
- Can run on minimal hardware (ARM support)
- HTTP-based (works over WAN with reverse proxy)
Architecture:
Central Matchbox (via Terraform)
↓ gRPC API
Regional Matchbox Instances (read-only cache)
↓ HTTP
Edge Machines (PXE boot)
Label-based routing:
{
"selector": {
"region": "us-west",
"site": "pdx-1"
},
"metadata": {
"ntp_servers": ["10.100.1.1", "10.100.1.2"]
}
}
Alternatives considered:
- Foreman: Requires more resources per site
- Ansible + netboot: No declarative PXE boot, post-install only
- Cloud-init datasources: Requires cloud metadata service per site
Verdict: ⭐⭐⭐⭐☆ Good fit, but consider caching strategy for WAN
⚠️ Moderate Fit Use Cases
4. Multi-Tenant Bare-Metal Cloud
Scenario: Provide bare-metal-as-a-service to multiple customers
Matchbox challenges:
- No built-in multi-tenancy (single namespace)
- No RBAC (gRPC API is all-or-nothing with client certs)
- No customer self-service portal
Workarounds:
- Deploy separate Matchbox per tenant (isolation via separate instances)
- Proxy gRPC API with custom RBAC layer
- Use group selectors with customer IDs
Better alternatives:
- Metal³ (Kubernetes-native, better multi-tenancy)
- OpenStack Ironic (purpose-built for bare-metal cloud)
- MAAS (Ubuntu-specific, has RBAC)
Verdict: ⭐⭐☆☆☆ Possible but architecturally challenging
5. Heterogeneous OS Provisioning
Scenario: Need to provision Fedora CoreOS, Ubuntu, RHEL, Windows
Matchbox challenges:
- Designed for Ignition-based OSes (FCOS, Flatcar, RHCOS)
- No native support for Kickstart (RHEL/CentOS)
- No support for Preseed (Ubuntu/Debian)
- No Windows unattend.xml support
What works:
- Fedora CoreOS ✅
- Flatcar Linux ✅
- RHEL CoreOS ✅
- Container Linux (deprecated but supported) ✅
What requires workarounds:
- RHEL/CentOS: Possible via generic configs + Kickstart URLs, but not native
- Ubuntu: Can PXE boot and point to autoinstall ISO, but loses Matchbox templating benefits
- Debian: Similar to Ubuntu
- Windows: Not supported (different PXE boot mechanisms)
Better alternatives for heterogeneous environments:
- Foreman (supports Kickstart, Preseed, unattend.xml)
- MAAS (Ubuntu-centric but extensible)
- Cobbler (older but supports many OS types)
Verdict: ⭐⭐☆☆☆ Stick to Ignition-based OSes or use different tool
❌ Poor Fit Use Cases
6. Windows PXE Boot
Why Matchbox doesn’t fit:
- No WinPE support
- No unattend.xml rendering
- Different PXE boot chain (WDS/SCCM model)
Recommendation: Use Microsoft WDS or SCCM
Verdict: ⭐☆☆☆☆ Not designed for this
7. BIOS/Firmware Updates
Why Matchbox doesn’t fit:
- Focused on OS provisioning, not firmware
- No vendor-specific tooling (Dell iDRAC, HP iLO integration)
Recommendation: Use vendor tools or Ansible with ipmi/redfish modules
Verdict: ⭐☆☆☆☆ Out of scope
Strengths
1. Ignition-First Design
- Native support for modern immutable OSes
- Declarative, atomic provisioning (no config drift)
- First-boot partition/filesystem setup
2. Label-Based Matching
- Flexible machine classification (MAC, UUID, region, role, custom)
- Most-specific-match algorithm (override defaults per machine)
- Query params for dynamic attributes
3. Terraform Integration
- Declarative infrastructure as code
- Plan before apply (preview changes)
- State tracking for auditability
- Rich templating (ct_config provider for Butane)
4. Minimal Dependencies
- Single static binary (~20MB)
- No database required (FileStore default)
- No built-in DHCP/TFTP (separation of concerns)
- Container-ready (OCI image available)
5. HTTP-Centric
- Faster downloads than TFTP (iPXE via HTTP)
- Proxy/CDN friendly for asset distribution
- Standard web tooling (curl, load balancers, Ingress)
6. Production-Ready
- Used by Typhoon Kubernetes (battle-tested)
- Clear upgrade path (SemVer releases)
- OpenPGP signature support for config integrity
Limitations
1. No Multi-Tenancy
- Single namespace (all groups/profiles global)
- No RBAC on gRPC API (client cert = full access)
- Requires separate instances per tenant
2. Ignition-Only Focus
- Cloud-Config deprecated (legacy support only)
- No native Kickstart/Preseed/unattend.xml
- Limits OS choice to CoreOS family
3. Storage Constraints
- FileStore doesn’t scale to 10,000+ profiles
- No built-in HA storage (requires NFS or custom backend)
- Kubernetes deployment needs RWX PersistentVolume
4. No Machine Discovery
- Doesn’t detect new machines (passive service)
- No inventory management (use external CMDB)
- No hardware introspection (use Ironic for that)
5. Limited Observability
- No built-in metrics (Prometheus integration requires reverse proxy)
- Logs are minimal (request logging only)
- No audit trail for gRPC API changes (use Terraform state)
6. TFTP Still Required
- Legacy BIOS PXE needs TFTP for chainloading to iPXE
- Can’t fully eliminate TFTP unless all machines have native iPXE
Comparison with Alternatives
vs. Foreman
| Feature | Matchbox | Foreman |
|---------|----------|---------|
| OS Support | Ignition-based | Kickstart, Preseed, AutoYaST, etc. |
| Complexity | Low (single binary) | High (Rails app, DB, Puppet/Ansible) |
| Config Model | Declarative (Ignition) | Imperative (post-install scripts) |
| API | HTTP + gRPC | REST API |
| UI | None (API-only) | Full web UI |
| Terraform | Native provider | Community modules |
| Use Case | Container-centric infra | Traditional Linux servers |
When to choose Matchbox: CoreOS-based Kubernetes clusters, minimal infrastructure
When to choose Foreman: Heterogeneous OS, need web UI, traditional config mgmt
vs. Metal³
| Feature | Matchbox | Metal³ |
|---------|----------|--------|
| Platform | Standalone | Kubernetes-native (operator) |
| Bootstrap | Can bootstrap k8s cluster | Needs existing k8s cluster |
| Machine Lifecycle | Provision only | Provision + decommission + reprovision |
| Hardware Introspection | No (labels passed manually) | Yes (via Ironic) |
| Multi-tenancy | No | Yes (via k8s namespaces) |
| Complexity | Low | High (requires Ironic, DHCP, etc.) |
When to choose Matchbox: Greenfield bare-metal, no existing k8s
When to choose Metal³: Existing k8s, need hardware mgmt lifecycle
vs. Cobbler
| Feature | Matchbox | Cobbler |
|---------|----------|---------|
| Age | Modern (2016+) | Legacy (2008+) |
| Config Format | Ignition (declarative) | Kickstart/Preseed (imperative) |
| Templating | Go templates (minimal) | Cheetah templates (extensive) |
| Python | Go (static binary) | Python (requires interpreter) |
| DHCP Management | External | Can manage DHCP |
| Maintenance | Active (Poseidon) | Low activity |
When to choose Matchbox: Modern immutable OSes, container workloads
When to choose Cobbler: Legacy infra, need DHCP management, heterogeneous OS
vs. MAAS (Ubuntu)
| Feature | Matchbox | MAAS |
|---------|----------|------|
| OS Support | CoreOS family | Ubuntu (primary), others (limited) |
| IPAM | No (external DHCP) | Built-in IPAM |
| Power Mgmt | No (manual or scripts) | Built-in (IPMI, AMT, etc.) |
| UI | No | Full web UI |
| Declarative | Yes (Terraform) | Limited (CLI mostly) |
| Cloud Integration | No | Yes (libvirt, LXD, VM hosts) |
When to choose Matchbox: Non-Ubuntu, Kubernetes, minimal dependencies
When to choose MAAS: Ubuntu-centric, need power mgmt, cloud integration
vs. netboot.xyz
| Feature | Matchbox | netboot.xyz |
|---------|----------|-------------|
| Purpose | Automated provisioning | Manual OS selection menu |
| Automation | Full (API-driven) | None (interactive menu) |
| Customization | Per-machine configs | Global menu |
| Ignition | Native support | No |
| Complexity | Medium | Very low |
When to choose Matchbox: Automated fleet provisioning
When to choose netboot.xyz: Ad-hoc OS installation, homelab
Decision Matrix
Use this table to evaluate Matchbox for your use case:
| Requirement | Weight | Matchbox Score | Notes |
|-------------|--------|----------------|-------|
| Ignition/CoreOS support | High | ⭐⭐⭐⭐⭐ | Native, first-class |
| Heterogeneous OS | High | ⭐⭐☆☆☆ | Limited to Ignition OSes |
| Declarative provisioning | Medium | ⭐⭐⭐⭐⭐ | Terraform native |
| Multi-tenancy | Medium | ⭐☆☆☆☆ | Requires separate instances |
| Web UI | Medium | ☆☆☆☆☆ | No UI (API-only) |
| Ease of deployment | Medium | ⭐⭐⭐⭐☆ | Binary or container, minimal deps |
| Scalability | Medium | ⭐⭐⭐☆☆ | FileStore limits, need shared storage for HA |
| Hardware mgmt | Low | ☆☆☆☆☆ | No power mgmt, no introspection |
| Cost | Low | ⭐⭐⭐⭐⭐ | Open source, Apache 2.0 |
Scoring:
- ⭐⭐⭐⭐⭐ Excellent
- ⭐⭐⭐⭐☆ Good
- ⭐⭐⭐☆☆ Adequate
- ⭐⭐☆☆☆ Limited
- ⭐☆☆☆☆ Poor
- ☆☆☆☆☆ Not supported
Recommendations
Choose Matchbox if:
- ✅ Provisioning Fedora CoreOS, Flatcar, or RHEL CoreOS
- ✅ Building bare-metal Kubernetes clusters
- ✅ Prefer declarative infrastructure (Terraform)
- ✅ Want minimal dependencies (single binary)
- ✅ Need flexible label-based machine matching
- ✅ Have homogeneous OS requirements (all Ignition-based)
Avoid Matchbox if:
- ❌ Need multi-OS support (Windows, traditional Linux)
- ❌ Require web UI for operations teams
- ❌ Need built-in hardware management (power, BIOS config)
- ❌ Have strict multi-tenancy requirements
- ❌ Need automated hardware discovery/introspection
Hybrid Approaches
Pattern 1: Matchbox + Ansible
- Matchbox: Initial OS provisioning
- Ansible: Post-boot configuration, app deployment
- Works well for stateful services on bare-metal
Pattern 2: Matchbox + Metal³
- Matchbox: Bootstrap initial k8s cluster
- Metal³: Ongoing cluster node lifecycle management
- Gradual migration from Matchbox to Metal³
Pattern 3: Matchbox + Terraform + External Secrets
- Matchbox: Base OS + minimal config
- Ignition: Fetch secrets from Vault/GCP Secret Manager
- Terraform: Orchestrate end-to-end provisioning
Conclusion
Matchbox is a purpose-built, minimalist network boot service optimized for modern immutable operating systems (Ignition-based). It excels in container-centric bare-metal environments, particularly for Kubernetes clusters built with Fedora CoreOS or Flatcar Linux.
Best fit: Organizations adopting immutable infrastructure patterns, container orchestration, and declarative provisioning workflows.
Not ideal for: Heterogeneous OS environments, multi-tenant bare-metal clouds, or teams requiring extensive web UI and built-in hardware management.
For home labs and development, Matchbox offers an excellent balance of simplicity and power. For production Kubernetes deployments, it’s a proven, battle-tested solution (via Typhoon). For complex enterprise provisioning with mixed OS requirements, consider Foreman or MAAS instead.