How to Force Proxmox Cluster & VM Replication Traffic to Use a Dedicated LAN Interface

Applies to: Proxmox VE 7.x / 8.x
Published by: W3DATA TECHNOLOGIES LLC
Last Updated: October 2025


Overview

When running a multi-node Proxmox VE cluster, administrators often notice that replication, migration, and cluster synchronization traffic use the public WAN interface instead of the private LAN. This can lead to high latency, unnecessary bandwidth use, and potential exposure of internal communication.

This guide explains how to configure Proxmox so that all inter-node (east–west) communication uses a dedicated private LAN interface while the public interface remains active for management access.


Example Environment

Cluster Name:         Example-Cluster-01
Nodes:                NODE1, NODE2, NODE3, NODE4, NODE5
Public WAN Network:   203.0.113.0/26 (example)
Private LAN Network:  10.0.10.0/27 (dedicated cluster network)
LAN Interface:        eno2np1

Symptoms

  • Cluster and replication traffic flow over the public interface (e.g., vmbr0 / WAN)
  • The LAN interface (e.g., eno2np1) remains idle
  • /etc/hosts and corosync.conf already use LAN IPs
  • qm migrate and replication continue to use WAN routes

Root Cause

Proxmox stores each node’s management IP inside /etc/pve/.members. This file is generated by the cluster filesystem (pmxcfs) from each node’s hostname resolution, so if the names resolve to public WAN addresses, Proxmox uses those for node-to-node SSH tunnels and migration tasks.

{
  "NODE1": { "ip": "203.0.113.10" },
  "NODE2": { "ip": "203.0.113.12" }
}

It should instead use the private LAN subnet:

{
  "NODE1": { "ip": "10.0.10.11" },
  "NODE2": { "ip": "10.0.10.12" }
}

Step-by-Step Solution

1️⃣ Verify Network Interfaces

ip addr show

Confirm that your private interface (for example eno2np1) is configured with a LAN IP such as 10.0.10.x/27.
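For a one-line summary of just the cluster interface (using the example name eno2np1; adjust to your own NIC):

ip -br addr show eno2np1

Expected output similar to:

eno2np1    UP    10.0.10.11/27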


2️⃣ Verify Corosync Network

grep ring0_addr /etc/pve/corosync.conf

Expected output:

ring0_addr: 10.0.10.x
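
To confirm that corosync is actually communicating over those addresses at runtime, check the live link status (output format varies slightly between corosync versions):

corosync-cfgtool -s

The local node's links should report 10.0.10.x addresses and connected peers.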

3️⃣ Check Cluster Node IPs

cat /etc/pve/.members

❌ If WAN IPs appear here (203.0.113.x), continue to the next steps.
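
To list only the registered addresses at a glance, a simple grep works:

grep -o '"ip": *"[^"]*"' /etc/pve/.members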


4️⃣ Correct Hostname Resolution

hostname -f
getent hosts $(hostname -f)

Each node’s FQDN must resolve to its LAN IP (10.0.10.x). If not, edit /etc/hosts as follows:

10.0.10.11 NODE1.local NODE1
10.0.10.12 NODE2.local NODE2
10.0.10.13 NODE3.local NODE3
10.0.10.14 NODE4.local NODE4
10.0.10.15 NODE5.local NODE5
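
To confirm resolution for all five example nodes in one pass:

for n in NODE1 NODE2 NODE3 NODE4 NODE5; do getent hosts ${n}.local; done

Every line of output should show a 10.0.10.x address.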

5️⃣ Re-Register Node with LAN IP

Run the following commands on each node (one at a time):

pvecm updatecerts --force
systemctl restart pve-cluster pvedaemon pveproxy

After 20–30 seconds, check:

cat /etc/pve/.members

✅ Example output:

{
  "NODE1": { "ip": "10.0.10.11" },
  "NODE2": { "ip": "10.0.10.12" }
}

6️⃣ Configure Migration Network

nano /etc/pve/datacenter.cfg

Add or confirm:

migration: type=secure,network=10.0.10.0/27

This forces migration and replication traffic onto the LAN subnet. For faster transfers you may set type=insecure (unencrypted TCP), but only on a fully trusted, isolated LAN.
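
Because /etc/pve is a shared cluster filesystem, this edit only needs to be made on one node and applies cluster-wide. A quick sanity check:

grep '^migration' /etc/pve/datacenter.cfg

Expected output:

migration: type=secure,network=10.0.10.0/27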


7️⃣ Clean Old SSH Host Entries


ssh-keygen -R 203.0.113.10
ssh-keygen -R 203.0.113.12
ssh-keygen -R NODE1
ssh-keygen -R NODE2
ssh NODE2 exit
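
Note that ssh-keygen -R edits ~/.ssh/known_hosts by default, while a Proxmox cluster keeps its host keys in the cluster-wide file /etc/pve/priv/known_hosts (symlinked from /etc/ssh/ssh_known_hosts). If stale WAN entries persist, target that file explicitly:

ssh-keygen -R 203.0.113.10 -f /etc/pve/priv/known_hosts
ssh-keygen -R NODE1 -f /etc/pve/priv/known_hosts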

8️⃣ Verify Migration Path

Start a test migration and monitor the network interfaces with iftop (not part of the default Proxmox install; add it with apt install iftop):

qm migrate <vmid> NODE2 --online
iftop -i eno1np0    # WAN (should be idle)
iftop -i eno2np1    # LAN (should show traffic)

✅ All migration traffic now flows through eno2np1 (LAN).
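
If you prefer not to install extra tools, the kernel's interface byte counters give a rough confirmation:

ip -s link show eno2np1    # RX/TX counters should climb during the migration
ip -s link show eno1np0    # should stay nearly flat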


Cluster-Wide Implementation

Repeat Steps 4, 5, and 7 on every node (Step 6 is cluster-wide and only needs to be applied once):

Node    LAN IP (Example)
NODE1   10.0.10.11
NODE2   10.0.10.12
NODE3   10.0.10.13
NODE4   10.0.10.14
NODE5   10.0.10.15

Optional: Restrict Public Access

Once cluster communication works over the LAN, you can limit public access to trusted IPs or a VPN. Keep in mind that an ACCEPT rule only restricts anything when the Proxmox firewall is enabled and its default input policy is DROP, so add the rule before enabling the firewall to avoid locking yourself out.


Direction: IN
Interface: vmbr0
Action: ACCEPT
Source: <your-admin-ip>/32
Dest Port: 8006,22
Comment: Allow Secure Admin Access Only
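
For reference, the equivalent entry in a node's firewall file (/etc/pve/nodes/<node>/host.fw) would look roughly like the sketch below. The tcp protocol is an assumption (ports 8006 and 22 are TCP services); if in doubt, create the rule in the GUI and inspect the file Proxmox writes.

[RULES]
IN ACCEPT -i vmbr0 -source <your-admin-ip>/32 -p tcp -dport 8006,22 # Allow Secure Admin Access Only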

Verification Checklist

  • /etc/pve/.members → All nodes show 10.0.10.x ✅
  • iftop -i eno2np1 → Active during migration ✅
  • Public Web UI → Still accessible ✅
  • Corosync → Quorate, LAN-only ✅
  • Replication → Stable & fast ✅

Results & Benefits

  • Cluster and replication isolated to the private LAN
  • Improved performance and lower latency
  • Enhanced internal security
  • Reduced WAN utilization and exposure
  • Public Web UI remains available for management

About W3DATA

W3DATA TECHNOLOGIES LLC delivers secure cloud infrastructure, managed hosting, and web security solutions worldwide. We specialize in optimizing open-source platforms like Proxmox for performance, reliability, and compliance.

Visit us: w3data.cloud
Need help with your Proxmox infrastructure? Contact our support team.


© 2025 W3DATA TECHNOLOGIES LLC. All rights reserved.
