Proxmox Network Example Configurations: Real-World Implementation Guide
This comprehensive guide provides complete, tested configuration examples for various Proxmox networking scenarios. Each example includes network topology, configuration files, firewall rules, and step-by-step implementation instructions.
Home Lab Configuration
Simple Home Lab Setup
Scenario: Single Proxmox node with multiple VMs for learning and testing.
Network Topology:
Internet → Router (192.168.1.1) → Proxmox Host (192.168.1.100) → VMs
Requirements:
- Simple network configuration
- Easy VM access from home network
- Basic security
- Cost-effective setup
The example below walks through the network configuration, VM setup, and firewall rules.
Network Configuration (/etc/network/interfaces):
# Loopback interface
auto lo
iface lo inet loopback
# Physical interface (no IP, bridge member)
auto enp0s3
iface enp0s3 inet manual
# Main bridge for VMs
auto vmbr0
iface vmbr0 inet static
address 192.168.1.100/24
gateway 192.168.1.1
bridge-ports enp0s3
bridge-stp off
bridge-fd 0
dns-nameservers 8.8.8.8 1.1.1.1
dns-search home.local
# Apply the configuration after editing
ifreload -a
Verification Commands:
# Check interface status
ip addr show vmbr0
# Test connectivity
ping -c 4 192.168.1.1
ping -c 4 8.8.8.8
# Verify bridge
brctl show vmbr0
VM Network Configuration:
# Example VM configurations
# Web Server VM (VM ID 100)
net0: virtio=02:00:00:00:01:00,bridge=vmbr0
# Database VM (VM ID 101)
net0: virtio=02:00:00:00:01:01,bridge=vmbr0
# Development VM (VM ID 102)
net0: virtio=02:00:00:00:01:02,bridge=vmbr0
VM IP Assignments:
# Static IP configuration inside VMs
# Web Server: 192.168.1.110
# Database: 192.168.1.111
# Development: 192.168.1.112
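For reference, here is a minimal sketch of a static address inside a Debian-based guest using ifupdown; the interface name ens18 and the addresses are assumptions, so adjust them to the VM at hand.
# /etc/network/interfaces inside the web server VM (ens18 is a placeholder)
auto ens18
iface ens18 inet static
address 192.168.1.110/24
gateway 192.168.1.1
dns-nameservers 8.8.8.8 1.1.1.1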
Router DHCP Reservations (alternative to static addressing inside the guests):
# Map each VM MAC address to a fixed lease on the home router
02:00:00:00:01:00 → 192.168.1.110 (WebServer)
02:00:00:00:01:01 → 192.168.1.111 (Database)
02:00:00:00:01:02 → 192.168.1.112 (Development)
Basic Firewall Configuration:
Node Firewall (/etc/pve/nodes/proxmox/host.fw):
# Note: the firewall must also be enabled at the datacenter level
# (/etc/pve/firewall/cluster.fw), which is also where the default
# policy_in/policy_out policies are set; they are not host-level options.
[OPTIONS]
enable: 1
[RULES]
# Proxmox web interface
IN ACCEPT -p tcp -dport 8006 -source 192.168.1.0/24
# SSH access
IN ACCEPT -p tcp -dport 22 -source 192.168.1.0/24
# VNC console access
IN ACCEPT -p tcp -dport 5900:5999 -source 192.168.1.0/24
# Ping
IN ACCEPT -p icmp
# Corosync cluster communication (if expanding to a cluster later)
IN ACCEPT -p udp -dport 5404:5405 -source 192.168.1.0/24
VM Firewall Examples:
Web Server VM (100.fw):
[OPTIONS]
enable: 1
policy_in: DROP
policy_out: ACCEPT
[RULES]
# Web traffic
IN ACCEPT -p tcp -dport 80
IN ACCEPT -p tcp -dport 443
# SSH from home network
IN ACCEPT -p tcp -dport 22 -source 192.168.1.0/24
# Database access
OUT ACCEPT -p tcp -dport 3306 -dest 192.168.1.111
Advanced Home Lab with VLANs
Scenario: Home lab with network segmentation using VLANs.
Network Topology:
Internet → Router → Switch (VLAN Trunk) → Proxmox Host
├── VLAN 10 (Management)
├── VLAN 20 (Servers)
├── VLAN 30 (Lab/Testing)
└── VLAN 40 (IoT/Guest)
Complete Configuration:
# /etc/network/interfaces
auto lo
iface lo inet loopback
# Physical interface (trunk port)
auto enp0s3
iface enp0s3 inet manual
# VLAN-aware bridge (no IP on the bridge itself)
auto vmbr0
iface vmbr0 inet manual
bridge-ports enp0s3
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 10,20,30,40
# Management access on VLAN 10
auto vmbr0.10
iface vmbr0.10 inet static
address 192.168.10.100/24
gateway 192.168.10.1
dns-nameservers 192.168.10.1 8.8.8.8
VLAN Planning:
| VLAN | Purpose | Network | VMs |
|---|---|---|---|
| 10 | Management | 192.168.10.0/24 | Proxmox, monitoring |
| 20 | Production | 192.168.20.0/24 | Web, database servers |
| 30 | Development | 192.168.30.0/24 | Testing, staging |
| 40 | IoT/Guest | 192.168.40.0/24 | Isolated devices |
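With a VLAN-aware bridge, a VM is placed into a VLAN simply by adding a tag to its virtual NIC. A sketch with placeholder MAC addresses, matching the plan above:
# Production web server on VLAN 20
net0: virtio=02:00:00:00:14:01,bridge=vmbr0,tag=20
# Staging server on VLAN 30
net0: virtio=02:00:00:00:1e:01,bridge=vmbr0,tag=30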
Small Business Configuration
Multi-Server Environment
Scenario: Small business with production services, development environment, and guest access.
Requirements:
- Separate networks for different purposes
- High availability for critical services
- Secure guest network
- Remote access capability
Network Architecture:
Internet → Firewall → Core Switch (Bonded) → Proxmox Cluster
├── Management VLAN (10)
├── Production VLAN (20)
├── Development VLAN (30)
├── Storage VLAN (50)
└── Guest VLAN (99)
The following covers the cluster network configuration, the storage network, and the security setup.
Primary Node Configuration:
# /etc/network/interfaces (Node 1)
auto lo
iface lo inet loopback
# Bond for redundancy
auto bond0
iface bond0 inet manual
bond-slaves enp0s3 enp0s8
bond-miimon 100
bond-mode 802.3ad
bond-lacp-rate fast
bond-xmit-hash-policy layer2+3
# VLAN-aware bridge (no IP on the untagged bridge itself)
auto vmbr0
iface vmbr0 inet manual
bridge-ports bond0
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 10,20,30,50,99
# Management and cluster communication (VLAN 10)
auto vmbr0.10
iface vmbr0.10 inet static
address 10.0.10.101/24
gateway 10.0.10.1
dns-nameservers 10.0.10.1 8.8.8.8
# Storage network (VLAN 50 on the same bond)
auto vmbr0.50
iface vmbr0.50 inet static
address 10.0.50.101/24
Secondary Node Configuration:
# Same as primary but different IPs
# Node 2: 10.0.10.102, storage: 10.0.50.102
# Node 3: 10.0.10.103, storage: 10.0.50.103
Cluster Setup:
# Initialize cluster on first node
pvecm create businesscluster
# Join additional nodes (run on each joining node, pointing at the first node)
pvecm add 10.0.10.101
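After joining, cluster health can be checked from any node:
# Verify cluster membership and quorum
pvecm status
pvecm nodes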
Shared Storage Configuration:
Ceph Storage Network:
# Alternative: a dedicated storage NIC per node instead of VLAN 50 on the bond
auto vmbr-storage
iface vmbr-storage inet static
address 10.0.50.101/24 # Node-specific IP
bridge-ports enp0s9 # Dedicated storage interface
bridge-stp off
bridge-fd 0
NFS Storage Configuration:
# NFS server on storage VLAN
# Add to Proxmox: Datacenter → Storage → Add → NFS
ID: nfs-shared
Server: 10.0.50.200
Export: /export/proxmox
Content: VZDump backup file, Disk image, ISO image, Container template
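The same storage can also be added from the shell with pvesm; this sketch assumes the server and export shown above:
# CLI equivalent of the GUI storage dialog
pvesm add nfs nfs-shared --server 10.0.50.200 --export /export/proxmox \
  --content backup,images,iso,vztmpl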
iSCSI Configuration:
# iSCSI storage setup
# Install open-iscsi
apt install open-iscsi
# Configure iSCSI target
# Add to Proxmox: Datacenter → Storage → Add → iSCSI
Portal: 10.0.50.210
Target: iqn.2023-01.local.storage:lun1
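As with NFS, the iSCSI storage can be added via the CLI, using the portal and target from above:
# CLI equivalent for the iSCSI storage
pvesm add iscsi iscsi-shared --portal 10.0.50.210 --target iqn.2023-01.local.storage:lun1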
Comprehensive Security Configuration:
Datacenter Firewall (/etc/pve/firewall/cluster.fw):
[OPTIONS]
enable: 1
policy_in: DROP
policy_out: ACCEPT
[ALIASES]
MGMT_NET 10.0.10.0/24
PROD_NET 10.0.20.0/24
DEV_NET 10.0.30.0/24
STORAGE_NET 10.0.50.0/24

[IPSET admin_hosts]
10.0.10.10
10.0.10.11

[group management]
IN ACCEPT -p tcp -dport 22 -source MGMT_NET
IN ACCEPT -p tcp -dport 8006 -source MGMT_NET
IN ACCEPT -p icmp -source MGMT_NET

[group webserver]
IN ACCEPT -p tcp -dport 80
IN ACCEPT -p tcp -dport 443
IN ACCEPT -p tcp -dport 22 -source MGMT_NET

[group database]
IN ACCEPT -p tcp -dport 3306 -source PROD_NET
IN ACCEPT -p tcp -dport 22 -source MGMT_NET
IN REJECT -p tcp -dport 3306

[RULES]
# Cluster communication
IN ACCEPT -p udp -dport 5404:5405 -source MGMT_NET
IN ACCEPT -p tcp -dport 22 -source MGMT_NET
Additional Security Groups (defined in cluster.fw):
# Production web server security
[group prod-web]
IN ACCEPT -p tcp -dport 80
IN ACCEPT -p tcp -dport 443
IN ACCEPT -p tcp -dport 22 -source MGMT_NET
OUT ACCEPT -p tcp -dport 3306 -dest PROD_NET
OUT ACCEPT -p tcp -dport 443 # External APIs
IN DROP

# Development environment (more permissive)
[group dev-servers]
IN ACCEPT -p tcp -dport 22 -source MGMT_NET
IN ACCEPT -p tcp -dport 8000:9000 -source DEV_NET
IN ACCEPT -p tcp -dport 3000:3999 -source DEV_NET
OUT ACCEPT
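A security group only takes effect once it is referenced from a guest firewall. A sketch for a hypothetical web server with VM ID 201:
# /etc/pve/firewall/201.fw (VM ID is an example)
[OPTIONS]
enable: 1
[RULES]
GROUP prod-web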
Enterprise Configuration
Large-Scale Multi-Datacenter Setup
Scenario: Enterprise environment with multiple datacenters, strict security requirements, and high availability.
Architecture Overview:
┌─────────────────────────────────────────────────────────────┐
│ Enterprise Network │
│ ┌─────────────────┐ ┌─────────────────┐ │
│ │ Datacenter 1 │ │ Datacenter 2 │ │
│ │ │ │ │ │
│ │ ┌─────────────┐ │ │ ┌─────────────┐ │ │
│ │ │ Proxmox │ │ ┌─────┐ │ │ Proxmox │ │ │
│ │ │ Cluster A │ │───│ WAN │────│ │ Cluster B │ │ │
│ │ │ (3 nodes) │ │ └─────┘ │ │ (3 nodes) │ │ │
│ │ └─────────────┘ │ │ └─────────────┘ │ │
│ │ │ │ │ │
│ └─────────────────┘ └─────────────────┘ │
└─────────────────────────────────────────────────────────────┘
Enterprise VLAN Design
| VLAN | Purpose | Network | Security Zone |
|---|---|---|---|
| 100 | Infrastructure | 10.100.0.0/24 | Management |
| 200 | DMZ Web | 10.200.0.0/24 | Public |
| 300 | Application | 10.30.0.0/24 | Internal |
| 400 | Database | 10.40.0.0/24 | Restricted |
| 500 | Storage | 10.50.0.0/24 | Storage |
| 600 | Backup | 10.60.0.0/24 | Backup |
| 700 | Development | 10.70.0.0/24 | Development |
| 999 | Out-of-Band | 192.168.99.0/24 | OOB Management |
The following covers the enterprise network configuration, the HA setup, and the security zones.
Enterprise Node Configuration:
# /etc/network/interfaces - Primary datacenter node
auto lo
iface lo inet loopback
# Out-of-band management (no gateway here; the default route goes via vmbr0)
auto enp0s3
iface enp0s3 inet static
address 192.168.99.101/24
# Primary data bond (LACP)
auto bond0
iface bond0 inet manual
bond-slaves enp1s0 enp2s0
bond-miimon 100
bond-mode 802.3ad
bond-lacp-rate fast
bond-xmit-hash-policy layer3+4
# Storage bond (dedicated)
auto bond1
iface bond1 inet manual
bond-slaves enp3s0 enp4s0
bond-miimon 100
bond-mode active-backup
# Main VLAN-aware bridge
auto vmbr0
iface vmbr0 inet static
address 10.100.0.101/24
gateway 10.100.0.1
bridge-ports bond0
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 100,200,300,400,700
# Storage bridge (VLAN-aware: storage on VLAN 500, backup on VLAN 600)
auto vmbr1
iface vmbr1 inet manual
bridge-ports bond1
bridge-stp off
bridge-fd 0
bridge-vlan-aware yes
bridge-vids 500,600
# Storage network (VLAN 500)
auto vmbr1.500
iface vmbr1.500 inet static
address 10.50.0.101/24
# Backup network (VLAN 600)
auto vmbr1.600
iface vmbr1.600 inet static
address 10.60.0.101/24
Cluster Configuration:
# Initialize enterprise cluster
pvecm create enterprise-dc1
# Configure corosync for WAN
# /etc/pve/corosync.conf
totem {
version: 2
cluster_name: enterprise-dc1
transport: knet
crypto_cipher: aes256
crypto_hash: sha256
}
nodelist {
node {
name: proxmox-dc1-01
nodeid: 1
quorum_votes: 1
ring0_addr: 10.100.0.101
}
# Additional nodes...
}
quorum {
provider: corosync_votequorum
}
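After any change to the cluster network, corosync link health and quorum should be verified on each node:
# Check corosync ring/link status and cluster quorum
corosync-cfgtool -s
pvecm status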
High Availability Configuration:
HA Groups Setup:
# Create HA groups for different service tiers
pvesh create /cluster/ha/groups --group critical-services \
--nodes "proxmox-dc1-01:100,proxmox-dc1-02:90,proxmox-dc1-03:80" \
--comment "Critical production services"
pvesh create /cluster/ha/groups --group standard-services \
--nodes "proxmox-dc1-01:100,proxmox-dc1-02:100,proxmox-dc1-03:100" \
--comment "Standard production services"
HA Resources Configuration:
# Critical database servers
pvesh create /cluster/ha/resources --sid vm:401 \
--group critical-services \
--max_restart 3 \
--max_relocate 3 \
--comment "Primary database server"
# Web application servers
pvesh create /cluster/ha/resources --sid vm:301 \
--group standard-services \
--max_restart 2 \
--max_relocate 2 \
--comment "Web application server"
Fencing Configuration:
# Proxmox VE HA relies on watchdog-based self-fencing rather than external
# fence agents. By default the software watchdog (watchdog-mux) is used.
# To use a hardware watchdog instead, specify its kernel module in
# /etc/default/pve-ha-manager, e.g.:
WATCHDOG_MODULE=ipmi_watchdog
# Verify the watchdog service on each node
systemctl status watchdog-mux
Security Zone Configuration:
DMZ Zone (VLAN 200):
[group dmz-webserver]
# Public access
IN ACCEPT -p tcp -dport 80
IN ACCEPT -p tcp -dport 443
# Load balancer health checks
IN ACCEPT -p tcp -dport 80 -source 10.200.0.10-10.200.0.20
# Management access only from infrastructure
IN ACCEPT -p tcp -dport 22 -source 10.100.0.0/24
# Application server access
OUT ACCEPT -p tcp -dport 8080 -dest 10.30.0.0/24
# Block outbound to other internal networks, allow internet
OUT DROP -dest 10.0.0.0/8
OUT ACCEPT
Application Zone (VLAN 300):
[group app-servers]
# Web server access
IN ACCEPT -p tcp -dport 8080 -source 10.200.0.0/24
# Database access
OUT ACCEPT -p tcp -dport 3306 -dest 10.40.0.0/24
OUT ACCEPT -p tcp -dport 5432 -dest 10.40.0.0/24
# Management access
IN ACCEPT -p tcp -dport 22 -source 10.100.0.0/24
# Monitoring
IN ACCEPT -p tcp -dport 9100 -source 10.100.0.50
# Block inter-app communication
IN DROP -source 10.30.0.0/24
OUT DROP -dest 10.30.0.0/24
# Block direct internet access
OUT DROP
Database Zone (VLAN 400):
[group database-servers]
# Application server access only
IN ACCEPT -p tcp -dport 3306 -source 10.30.0.0/24
IN ACCEPT -p tcp -dport 5432 -source 10.30.0.0/24
# Backup server access
IN ACCEPT -p tcp -dport 3306 -source 10.60.0.0/24
OUT ACCEPT -p tcp -dport 22 -dest 10.60.0.0/24
# Management access
IN ACCEPT -p tcp -dport 22 -source 10.100.0.0/24
# Monitoring
IN ACCEPT -p tcp -dport 9100 -source 10.100.0.50
# Block everything else: allow internal outbound, no internet
IN DROP
OUT ACCEPT -dest 10.0.0.0/8
OUT DROP
Specialized Configurations
High-Performance Computing (HPC) Setup
Scenario: Scientific computing environment with high-bandwidth requirements.
# Ultra-high performance network configuration
# 25Gb networking with SR-IOV
# /etc/network/interfaces
auto lo
iface lo inet loopback
# High-speed data network (25Gb)
auto enp24s0f0
iface enp24s0f0 inet manual
mtu 9000
auto enp24s0f1
iface enp24s0f1 inet manual
mtu 9000
# LACP bond for maximum throughput
auto bond-hpc
iface bond-hpc inet manual
bond-slaves enp24s0f0 enp24s0f1
bond-mode 802.3ad
bond-lacp-rate fast
bond-xmit-hash-policy layer3+4
bond-miimon 100
mtu 9000
# HPC compute bridge
auto vmbr-hpc
iface vmbr-hpc inet static
address 10.10.0.100/16
bridge-ports bond-hpc
bridge-stp off
bridge-fd 0
mtu 9000
# Enable SR-IOV virtual functions (run as root; not part of /etc/network/interfaces)
echo 8 > /sys/class/net/enp24s0f0/device/sriov_numvfs
echo 8 > /sys/class/net/enp24s0f1/device/sriov_numvfs
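The sysfs writes above do not survive a reboot. One way to persist them, assuming ifupdown2, is a post-up hook on the physical interface; the stanza below is only a sketch:
# Persist VF creation across reboots via an ifupdown2 hook
auto enp24s0f0
iface enp24s0f0 inet manual
mtu 9000
post-up echo 8 > /sys/class/net/enp24s0f0/device/sriov_numvfs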
VM Configuration for HPC:
# High-performance VM configuration (pcie=1 requires the q35 machine type)
# Management NIC on the bridge via virtio; data path via SR-IOV VF passthrough
net0: virtio=02:00:00:00:00:01,bridge=vmbr-hpc,mtu=9000
hostpci0: 0000:24:00.1,pcie=1
# CPU configuration for HPC
cpu: host
cores: 32
numa: 1
vcpus: 32
Service Provider Multi-Tenant Setup
Scenario: Hosting provider with complete tenant isolation.
# Multi-tenant isolation configuration
# /etc/network/interfaces
# Provider management
auto vmbr-mgmt
iface vmbr-mgmt inet static
address 10.0.0.100/24
bridge-ports enp0s3
bridge-vlan-aware yes
bridge-vids 1-4094
# Tenant isolation bridges
auto vmbr-tenant1
iface vmbr-tenant1 inet manual
bridge-ports enp0s8.100
bridge-stp off
bridge-fd 0
auto vmbr-tenant2
iface vmbr-tenant2 inet manual
bridge-ports enp0s8.200
bridge-stp off
bridge-fd 0
Tenant VM Configuration:
# Tenant 1 VMs
net0: virtio=02:01:00:00:00:01,bridge=vmbr-tenant1
# Tenant 2 VMs
net0: virtio=02:02:00:00:00:01,bridge=vmbr-tenant2
# Complete network isolation
# No inter-tenant communication possible
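Per-tenant bandwidth caps can be applied directly on the virtual NIC; the rate option is in MB/s and the value below is only an example:
# Limit a tenant VM to roughly 100 MB/s on its NIC
net0: virtio=02:01:00:00:00:01,bridge=vmbr-tenant1,rate=100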
Performance Optimization Examples
Network Performance Tuning
# System-wide network optimizations
# /etc/sysctl.conf
# Increase network buffer sizes
net.core.rmem_default = 262144
net.core.rmem_max = 16777216
net.core.wmem_default = 262144
net.core.wmem_max = 16777216
# TCP optimization
net.ipv4.tcp_rmem = 4096 65536 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_congestion_control = bbr
# Enable TCP window scaling
net.ipv4.tcp_window_scaling = 1
# Increase max connections
net.core.somaxconn = 1024
net.core.netdev_max_backlog = 5000
# Apply settings
sysctl -p
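Whether BBR actually took effect can be checked afterwards; the tcp_bbr module ships with kernel 4.9 and later:
# Confirm BBR is available and selected
sysctl net.ipv4.tcp_available_congestion_control
sysctl net.ipv4.tcp_congestion_control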
Bridge Performance Optimization
# Bridge-specific optimizations (add under the bridge's iface stanza)
# Disable STP for better performance
bridge-stp off
bridge-fd 0
# Enable multicast snooping
bridge-mcsnoop yes
# Set bridge ageing time (seconds)
bridge-ageing 300
# Enable VLAN filtering
bridge-vlan-aware yes
VM Network Performance
# VM configuration for maximum performance
# Use virtio drivers
net0: virtio=02:00:00:00:00:01,bridge=vmbr0
# Enable multi-queue
net0: virtio=02:00:00:00:00:01,bridge=vmbr0,queues=4
# Set larger MTU if supported
net0: virtio=02:00:00:00:00:01,bridge=vmbr0,mtu=9000
# VM guest optimizations (inside VM)
# Increase ring buffer sizes
ethtool -G eth0 rx 4096 tx 4096
# Enable hardware offloading
ethtool -K eth0 tso on gso on gro on
Troubleshooting Scenarios
Common Network Issues
Network Performance Problems:
# Network performance testing
# Install iperf3 on multiple VMs
apt install iperf3
# Server mode
iperf3 -s
# Client testing
iperf3 -c server_ip -t 60 -P 4
# Test specific interface
iperf3 -c server_ip -B interface_ip
# Monitor network utilization
iftop -i vmbr0
nload
VLAN Communication Issues:
# VLAN troubleshooting checklist
# 1. Verify VLAN awareness
cat /sys/class/net/vmbr0/bridge/vlan_filtering
# 2. Check VLAN configuration
bridge vlan show
# 3. Verify VM VLAN tags
grep "tag=" /etc/pve/qemu-server/*.conf
# 4. Test VLAN connectivity
tcpdump -i enp0s3 vlan 10
# 5. Check switch configuration
# Ensure trunk port is configured on switch
Firewall Blocking Traffic:
# Firewall troubleshooting
# 1. Check if firewall is enabled
pvesh get /cluster/firewall/options
# 2. Monitor dropped packets (requires per-rule or per-guest logging to be enabled)
tail -f /var/log/pve-firewall.log | grep DROP
# 3. Temporarily disable firewall for testing
pvesh set /cluster/firewall/options --enable 0
# 4. Check rule order
pvesh get /nodes/node1/qemu/100/firewall/rules
# 5. Test specific rules
iptables -L -n -v | grep specific_ip
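The pve-firewall CLI can also be queried directly, continuing the checklist:
# 6. Check firewall service status and the compiled ruleset
pve-firewall status
pve-firewall compile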
Migration and Upgrade Scenarios
Network Configuration Migration
# Migrating from simple to VLAN-aware setup
# 1. Backup current configuration
cp /etc/network/interfaces /etc/network/interfaces.backup
# 2. Prepare new VLAN-aware configuration
# Create temporary bridge
auto vmbr-temp
iface vmbr-temp inet static
address 192.168.1.200/24
bridge-ports enp0s8
bridge-stp off
bridge-fd 0
# 3. Migrate VMs one by one
# Change VM network configuration
net0: virtio=XX:XX:XX:XX:XX:XX,bridge=vmbr-temp
# 4. Update to VLAN-aware bridge
auto vmbr0
iface vmbr0 inet static
address 192.168.1.100/24
bridge-ports enp0s3
bridge-vlan-aware yes
bridge-vids 10,20,30
# 5. Migrate VMs to VLAN-aware bridge with tags
net0: virtio=XX:XX:XX:XX:XX:XX,bridge=vmbr0,tag=10
Cluster Network Upgrade
# Upgrading cluster network infrastructure
# 1. Plan maintenance window
# 2. Upgrade one node at a time
# 3. Test connectivity between nodes
# Rolling upgrade process
# Node 1: Upgrade network configuration
# Test: ping other nodes
# Test: VM migration
# Repeat for remaining nodes
# Verify cluster health
pvecm status
pvecm nodes
These examples provide working starting points for Proxmox networking scenarios, from simple home labs to complex enterprise environments. Adapt interface names, IP addresses, and VLAN IDs to your own hardware and addressing plan, and validate each configuration for performance, security, and reliability before putting it into production.