Proxmox VE Helper Scripts supports both LXC containers (lightweight) and Virtual Machines (full isolation). Understanding when to use each is crucial for optimal performance and security.

Quick Comparison

| Feature | LXC Containers | Virtual Machines |
|---|---|---|
| Overhead | Minimal (shares host kernel) | Higher (full OS) |
| Boot Time | Seconds | Minutes |
| Resource Usage | Very efficient | More resources |
| Isolation | Process-level | Hardware-level |
| OS Support | Linux only | Any OS |
| Performance | Near-native | Slight overhead |
| Security | Good (with unprivileged) | Excellent |
| Use Case | Linux apps, services | Non-Linux OS, full isolation |

LXC Containers

What Are LXC Containers?

LXC (Linux Containers) provides OS-level virtualization: containers share the host kernel while getting their own isolated user spaces. They are similar to Docker containers but designed for running full system services.
Think of containers as lightweight isolated environments that share the same kernel but have their own filesystem, processes, and network stack.
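Outside of the helper scripts, the same kind of container can be created by hand with Proxmox's pct tool. A minimal sketch; the container ID, template filename, and storage names are examples, not values the scripts use:

```shell
# Create and start an unprivileged Debian container (all values are examples):
pct create 101 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname demo-ct \
  --unprivileged 1 \
  --cores 1 --memory 512 \
  --rootfs local-lvm:2 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 101
```

The helper scripts wrap exactly this kind of invocation (plus template download, defaults, and error handling) inside build_container.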

Container Architecture

┌─────────────────────────────────────────────────────────┐
│                    Proxmox VE Host                      │
│                   (Linux Kernel 6.x)                    │
├─────────────────────────────────────────────────────────┤
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐  │
│  │ Container 1  │  │ Container 2  │  │ Container 3  │  │
│  │              │  │              │  │              │  │
│  │ 2FAuth       │  │ Docker       │  │ Pi-hole      │  │
│  │ (Debian 13)  │  │ (Debian 13)  │  │ (Debian 13)  │  │
│  │              │  │              │  │              │  │
│  │ User: 100000 │  │ User: 100000 │  │ User: 100000 │  │
│  │ Unprivileged │  │ Unprivileged │  │ Unprivileged │  │
│  └──────────────┘  └──────────────┘  └──────────────┘  │
│                                                         │
│  Shared: Kernel, Drivers, Core Libraries               │
└─────────────────────────────────────────────────────────┘

Container Scripts Structure

All container scripts follow a consistent two-file pattern:

1. Host Script (ct/*.sh)

Executes on the Proxmox VE host to create the container.
#!/usr/bin/env bash
source <(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)

APP="2FAuth"
var_tags="${var_tags:-2fa;authenticator}"
var_cpu="${var_cpu:-1}"
var_ram="${var_ram:-512}"
var_disk="${var_disk:-2}"
var_os="${var_os:-debian}"
var_version="${var_version:-13}"
var_unprivileged="${var_unprivileged:-1}"

header_info "$APP"
variables
color
catch_errors

function update_script() {
  # Update logic for existing containers
  header_info
  check_container_storage
  check_container_resources
  # ... update application
}

start
build_container  # Creates LXC container
description

msg_ok "Completed successfully!\n"
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:80${CL}"
  • APP: Human-readable application name
  • var_tags: Categorization tags (used for filtering)
  • var_cpu/ram/disk: Resource allocation (defaults)
  • var_os: Operating system (debian, ubuntu, alpine)
  • var_version: OS version number
  • var_unprivileged: Security mode (1=unprivileged, 0=privileged)
  • update_script(): Function called when updating existing containers
  • build_container: Main function that orchestrates container creation
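The `${var_x:-default}` pattern used for every variable above is plain bash parameter expansion: a value already set by the caller wins, otherwise the script's default applies. A minimal illustration:

```shell
#!/usr/bin/env bash
# ${name:-default} keeps an existing value and falls back otherwise.
unset var_cpu
var_cpu="${var_cpu:-1}"   # var_cpu was unset, so the default applies
echo "cpu=${var_cpu}"     # prints: cpu=1

var_ram=1024              # simulates a caller override from the environment
var_ram="${var_ram:-512}"
echo "ram=${var_ram}"     # prints: ram=1024
```

This is why defaults can be overridden from the environment (e.g. `var_cpu=2 var_ram=1024 bash ct/<app>.sh`) without editing the script.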

2. Install Script (install/*-install.sh)

Executes inside the container after creation to install the application.
#!/usr/bin/env bash

source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"
color
verb_ip6
catch_errors
setting_up_container
network_check
update_os

msg_info "Installing Dependencies"
$STD apt install -y nginx
msg_ok "Installed Dependencies"

export PHP_VERSION="8.4"
PHP_FPM="YES" setup_php
setup_composer
setup_mariadb
MARIADB_DB_NAME="2fauth_db" MARIADB_DB_USER="2fauth" setup_mariadb_db

fetch_and_deploy_gh_release "2fauth" "Bubka/2FAuth" "tarball"

msg_info "Setup 2FAuth"
cd /opt/2fauth
cp .env.example .env
sed -i -e "s|^APP_URL=.*|APP_URL=http://$LOCAL_IP|" \
  -e "s|^DB_CONNECTION=$|DB_CONNECTION=mysql|" \
  -e "s|^DB_DATABASE=$|DB_DATABASE=$MARIADB_DB_NAME|" .env
export COMPOSER_ALLOW_SUPERUSER=1
$STD composer install --no-dev --prefer-dist
$STD php artisan key:generate --force
$STD php artisan migrate:refresh
chown -R www-data: /opt/2fauth
msg_ok "Setup 2FAuth"

msg_info "Configure Service"
cat <<EOF >/etc/nginx/conf.d/2fauth.conf
server {
    listen 80;
    root /opt/2fauth/public;
    server_name $LOCAL_IP;
    index index.php;
    
    location / {
        try_files \$uri \$uri/ /index.php?\$query_string;
    }
    
    location ~ \.php\$ {
        fastcgi_pass unix:/var/run/php/php${PHP_VERSION}-fpm.sock;
        fastcgi_param SCRIPT_FILENAME \$realpath_root\$fastcgi_script_name;
        include fastcgi_params;
    }
}
EOF
systemctl reload nginx
msg_ok "Configured Service"

motd_ssh
customize
cleanup_lxc
  • source /dev/stdin: Loads install.func from FUNCTIONS_FILE_PATH
  • setting_up_container: Initializes container environment
  • network_check: Verifies internet connectivity
  • update_os: Updates package lists and base system
  • setup_php/setup_mariadb/setup_composer: Helper functions from core.func
  • fetch_and_deploy_gh_release: Downloads latest release from GitHub
  • motd_ssh: Configures message of the day
  • customize: Applies user customizations
  • cleanup_lxc: Final cleanup steps
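The `source /dev/stdin <<<"$FUNCTIONS_FILE_PATH"` line deserves a note: the variable holds the text of install.func (not a file path), and sourcing it from a here-string defines those functions in the current shell. A stripped-down sketch of the same pattern, with a stand-in variable:

```shell
#!/usr/bin/env bash
# Source shell code held in a variable, as the install scripts do.
FUNCS='msg_ok() { echo "OK: $1"; }'   # stands in for $FUNCTIONS_FILE_PATH
source /dev/stdin <<<"$FUNCS"
msg_ok "functions loaded"             # prints: OK: functions loaded
```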

Privileged vs Unprivileged Containers

By default, the scripts create unprivileged containers (var_unprivileged=1): UID/GID 0 inside the container is mapped to an unprivileged host UID (typically 100000, as shown in the diagram above), so a root escape inside the container lands as an unprivileged user on the host. Privileged containers run container root as host root and should only be used when a feature genuinely requires it, such as certain device access or mounting NFS/CIFS shares inside the container.

When to Use Containers

Use LXC containers for:
  • Linux applications (Debian, Ubuntu, Alpine based)
  • Web services (nginx, Apache, databases)
  • Docker hosts (running Docker inside LXC)
  • Network services (Pi-hole, DNS, VPN)
  • Development environments
  • Microservices that need fast startup
  • Resource-constrained scenarios

Virtual Machines

What Are VMs?

Virtual Machines provide full hardware virtualization with complete OS isolation. Each VM runs its own kernel and can run any operating system.

VM Architecture

┌─────────────────────────────────────────────────────────┐
│                    Proxmox VE Host                      │
│                   (Linux Kernel 6.x)                    │
├─────────────────────────────────────────────────────────┤
│               KVM/QEMU Hypervisor Layer                 │
├─────────────────────────────────────────────────────────┤
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐  │
│  │     VM 1     │  │     VM 2     │  │     VM 3     │  │
│  │              │  │              │  │              │  │
│  │ Home Assist. │  │  OPNsense    │  │  Windows 11  │  │
│  │ (HassOS)     │  │  (FreeBSD)   │  │  (Windows)   │  │
│  │              │  │              │  │              │  │
│  │ Own Kernel   │  │ Own Kernel   │  │ Own Kernel   │  │
│  │ Full Drivers │  │ Full Drivers │  │ Full Drivers │  │
│  └──────────────┘  └──────────────┘  └──────────────┘  │
│                                                         │
│  Isolated: Each VM has complete OS stack               │
└─────────────────────────────────────────────────────────┘

VM Scripts Structure

VM scripts differ from container scripts as they handle full OS installation:
#!/usr/bin/env bash

source /dev/stdin <<<"$(curl -fsSL https://raw.githubusercontent.com/.../misc/api.func)"

GEN_MAC=02:$(openssl rand -hex 5 | awk '{print toupper($0)}' | sed 's/\(..\)/\1:/g; s/.$//')
RANDOM_UUID="$(cat /proc/sys/kernel/random/uuid)"
VERSIONS=(stable beta dev)
METHOD=""
NSAPP="homeassistant-os"
var_os="homeassistant"
DISK_SIZE="32G"

# Fetch available versions
for version in "${VERSIONS[@]}"; do
  eval "$version=$(curl -fsSL https://raw.githubusercontent.com/home-assistant/version/master/${version}.json | grep '"ova"' | cut -d '"' -f 4)"
done

# VM creation logic
function get_valid_nextid() {
  local try_id
  try_id=$(pvesh get /cluster/nextid)
  while true; do
    if [ -f "/etc/pve/qemu-server/${try_id}.conf" ] || [ -f "/etc/pve/lxc/${try_id}.conf" ]; then
      try_id=$((try_id + 1))
      continue
    fi
    break
  done
  echo "$try_id"
}

# Download and import disk image
# Configure VM settings
# Start VM

VM scripts do not use install scripts because they boot complete OS images (OVA, QCOW2, ISO). Configuration happens through cloud-init or first-boot scripts.
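The GEN_MAC line above is worth unpacking: openssl produces five random bytes as hex, awk upper-cases them, and sed inserts the colons; the fixed 02: first octet marks the address as locally administered, so it can never collide with a vendor-assigned MAC:

```shell
#!/usr/bin/env bash
# Generate a random locally administered MAC, as the VM scripts do.
GEN_MAC=02:$(openssl rand -hex 5 | awk '{print toupper($0)}' | sed 's/\(..\)/\1:/g; s/.$//')
echo "$GEN_MAC"   # e.g. 02:3F:A1:0C:9B:7E
```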

When to Use VMs

Use Virtual Machines for:
  • Non-Linux operating systems (Windows, FreeBSD, pfSense)
  • Home Assistant OS (requires full OS control)
  • OPNsense/pfSense (firewall appliances)
  • Maximum security isolation
  • Kernel-level requirements
  • Hardware passthrough (GPU, USB devices)
  • Testing different kernels
  • Legacy applications requiring specific OS versions

Decision Matrix

Answer these questions in order to choose between containers and VMs:
  • Non-Linux OS (Windows, FreeBSD, HassOS)? Use a VM.
  • Need your own kernel, kernel modules, or hardware passthrough? Use a VM.
  • Need maximum security isolation? Use a VM.
  • Otherwise, for a Linux app or service, use an LXC container.

Performance Comparison

Real-World Examples

Pi-hole (Container)

  • Boot time: 3-5 seconds
  • Memory: 100MB active
  • CPU overhead: Less than 1%
  • Disk I/O: Near-native

OPNsense (VM)

  • Boot time: 30-60 seconds
  • Memory: 1GB minimum
  • CPU overhead: 2-5%
  • Disk I/O: Slight overhead

Benchmark Summary

| Operation | Container | VM |
|---|---|---|
| Create | 20-40 seconds | 2-5 minutes |
| Start | 2-5 seconds | 30-90 seconds |
| Stop | Instant | 10-30 seconds |
| Backup | Fast (rsync) | Slower (full disk) |
| Restore | Fast | Slower |
| Snapshot | Instant | Quick (storage dependent) |
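The gap shows up directly in the CLI: containers are managed with pct, VMs with qm, and both are backed up with vzdump. A sketch of equivalent operations; the IDs and storage name are examples:

```shell
# Snapshot: effectively instant for containers, storage-dependent for VMs.
pct snapshot 101 pre-upgrade
qm snapshot 200 pre-upgrade

# Backup via vzdump: container backups copy files, VM backups copy disk images.
vzdump 101 --mode snapshot --storage local
vzdump 200 --mode snapshot --storage local
```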

Migration Considerations

Container to VM

If you need to migrate from a container to a VM:
  1. Backup application data: export databases, configuration files, and user data.
  2. Create a VM with the appropriate OS: use vm/debian-vm.sh or similar.
  3. Restore application data: install the application manually, or use the install script as a reference.

VM to Container

Downgrading from a VM to a container is only possible for Linux VMs; Windows/FreeBSD VMs cannot become containers.
  1. Verify application compatibility: check whether the app can run in an unprivileged container.
  2. Export data: back up all application data and configs.
  3. Create the container: use the appropriate ct/ script.
  4. Restore data: import the data into the new container.

Best Practices

For containers:
  • Always use unprivileged mode unless absolutely required
  • Allocate only necessary resources (easy to increase later)
  • Use bind mounts for data persistence
  • Enable nesting for Docker-in-LXC
  • Configure backups regularly
  • Monitor resource usage with pct commands

For VMs:
  • Enable VirtIO drivers for better performance
  • Use cloud-init for automated configuration
  • Allocate adequate disk space up front (shrinking is difficult)
  • Enable the QEMU guest agent
  • Use thin provisioning when possible
  • Configure NUMA for multi-socket systems
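Two of the container recommendations above map to one-line pct commands; the container ID and paths are examples:

```shell
# Bind-mount host data into the container for persistence (example paths):
pct set 101 -mp0 /tank/appdata,mp=/data

# Enable nesting so Docker can run inside the LXC:
pct set 101 -features nesting=1
```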

Next Steps

Script Structure

Learn how scripts are organized internally

Architecture

Understand the overall system design
