Configuring a Virtual IP Address for the LINSTOR HA Controller Database
This article describes how to configure a DRBD® Reactor promoter resource with a virtual IP (VIP) address for the LINSTOR® HA controller database. Adding a VIP address allows LINSTOR clients and the LINSTOR GUI to connect to a single stable address that follows the active controller across failover events.
This article assumes you already have a working LINSTOR HA controller database managed by DRBD Reactor. For instructions on setting up the HA controller database, refer to the LINSTOR User Guide.
The IPaddr2 OCF resource agent manages the VIP address.
Install the resource-agents package on all nodes that can run the LINSTOR controller service (Combined node types for HA database deployments).
Debian family:
On Debian, Proxmox VE, or Ubuntu 22.04 and earlier:
apt -y install resource-agents
On Ubuntu 24.04 and later, IPaddr2 is in the resource-agents-base package:
apt -y install resource-agents-base
SUSE family:
On openSUSE Leap or Tumbleweed:
zypper install -y resource-agents
On SLES, the resource-agents package requires the High Availability Extension (HAE). Enable the extension first, then install.
SUSEConnect -p sle-ha/<VERSION>/x86_64 -r <YOUR_HA_REGCODE>
zypper install -y resource-agents
Replace <VERSION> with your SLES version (for example, 15.5) and <YOUR_HA_REGCODE> with your HAE registration code.
⚠️ WARNING: The SLES High Availability Extension requires a separate paid subscription. Without it, the HA repositories are not available and the resource-agents package cannot be installed.
Red Hat family (with LINBIT customer repositories):
LINBIT® customers do not need to enable any additional high availability repositories.
The resource-agents package is included in the LINBIT drbd-9 customer repository.
Register your nodes through the LINBIT Customer Portal or by using the LINBIT linbit-manage-node.py script.
Then install the resource-agents package.
dnf -y install resource-agents
Red Hat family (without LINBIT customer repositories):
On Red Hat Enterprise Linux (RHEL), enable the HA Add-On:
source /etc/os-release
subscription-manager repos --enable=rhel-${VERSION_ID%%.*}-for-x86_64-highavailability-rpms
dnf -y install resource-agents
⚠️ WARNING: The RHEL High Availability Add-On requires a paid subscription.
On AlmaLinux 9+ or Rocky Linux 9+, enable the highavailability repository:
dnf -y --enablerepo=highavailability install resource-agents
On AlmaLinux 8 or Rocky Linux 8, enable the ha repository:
dnf -y --enablerepo=ha install resource-agents
On Oracle Linux, enable the addons repository first:
source /etc/os-release
dnf config-manager --enable ol${VERSION_ID%%.*}_addons
dnf -y install resource-agents
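Regardless of distribution, you can confirm that the agent is in place before continuing. The path below is the standard OCF agent location; this is a quick sketch, assuming your distribution installs agents there:

```shell
# Check for the IPaddr2 agent in the standard OCF location.
# Adjust the path if your distribution installs OCF agents elsewhere.
agent=/usr/lib/ocf/resource.d/heartbeat/IPaddr2
if [ -x "$agent" ]; then
    echo "found: $agent"
else
    echo "missing: $agent (is the resource-agents package installed?)" >&2
fi
```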
Edit the DRBD Reactor promoter plugin configuration file on all HA database nodes.
The file is typically located at /etc/drbd-reactor.d/linstor_db.toml.
Add an IPaddr2 OCF resource agent entry at the beginning of the start list.
Replace linstor_db with the name of your DRBD resource if it differs.
[[promoter]]
[promoter.resources.linstor_db]
start = [
    "ocf:heartbeat:IPaddr2 ha_db_vip cidr_netmask=24 ip=10.43.70.100",
    "var-lib-linstor.mount",
    "linstor-controller.service",
]
Replace the ip= value with your chosen VIP address and the cidr_netmask= value with your subnet prefix length.
The start list defines the service chain.
DRBD Reactor starts these services in order on the node that wins promotion:
- IPaddr2 brings up the VIP address on the network interface of the active node.
- var-lib-linstor.mount mounts the DRBD-backed /var/lib/linstor file system.
- linstor-controller.service starts the LINSTOR controller service.
On failover, services stop in reverse order: the LINSTOR controller service stops, the file system unmounts, then the VIP address releases. The winning node then starts the same chain.
The IPaddr2 entry in the start list follows this format:
ocf:heartbeat:IPaddr2 <instance-id> <param>=<value> ...
- ocf:heartbeat:IPaddr2 is the OCF agent path (vendor heartbeat, agent IPaddr2).
- ha_db_vip is a user-defined instance ID. DRBD Reactor appends _<resource> internally, so it becomes ha_db_vip_linstor_db.
- ip is the virtual IP address.
- cidr_netmask is the subnet prefix length.
After creating the TOML configuration file on all nodes, reload DRBD Reactor on each node:
drbd-reactorctl reload
Verify the promoter plugin resource is active:
drbd-reactorctl status
The output should show the promoter plugin resource and its current state.
Edit /etc/linstor/linstor-client.conf on all nodes.
Replace the addresses with your own node or VIP addresses.
Without a VIP address, you must either specify the controller node address with each LINSTOR client command, or edit the LINSTOR client configuration to list all potential controller node addresses so that the client can find whichever node is currently active:
[global]
controllers = 10.43.70.1,10.43.70.2,10.43.70.3
With a VIP address, replace the list with the single VIP address:
[global]
controllers = 10.43.70.100
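The single-address configuration can be generated with a short here-document. This sketch writes to a temporary file so you can inspect the result first; copy it to /etc/linstor/linstor-client.conf on each node once verified. The VIP value is the example address used throughout this article:

```shell
# Generate the LINSTOR client configuration for a single VIP address.
# Writes to a temporary file for inspection; deploy it to
# /etc/linstor/linstor-client.conf on every node once verified.
VIP=10.43.70.100
conf=$(mktemp)
cat > "$conf" <<EOF
[global]
controllers = ${VIP}
EOF
cat "$conf"
```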
After applying the configuration, test failover to verify the VIP address moves correctly between nodes.
Check the current primary node:
drbd-reactorctl status
Evict the promoter plugin resource to trigger failover:
drbd-reactorctl evict linstor_db
Verify the VIP address has moved to the new primary node:
ip addr show | grep 10.43.70.100
Verify that you can access the LINSTOR controller service through the VIP address:
linstor node list
Repeat the eviction to test failover to each node in the cluster.
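Between eviction and verification, the controller can take a few seconds to start on the new node, so an immediate check may report a false failure. A small polling helper avoids this; the wait_for name and the 30-attempt default below are illustrative, not part of any LINSTOR tooling:

```shell
# Poll a command until it succeeds or the retry budget is exhausted.
wait_for() {
    cmd=$1
    tries=${2:-30}    # default: 30 one-second attempts
    i=0
    until $cmd >/dev/null 2>&1; do
        i=$((i + 1))
        [ "$i" -ge "$tries" ] && return 1
        sleep 1
    done
    return 0
}

# After each eviction, wait for the controller to answer through the VIP:
#   drbd-reactorctl evict linstor_db
#   wait_for "linstor node list" 30 && echo "controller back online"
wait_for true 3 && echo "helper works"
```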
Without a VIP address, the LINSTOR GUI is only accessible on whichever node is currently running the controller. After a failover, you need to determine which node became the new primary before you can reach the GUI.
With a VIP address, the GUI is always available at a single address:
http://10.43.70.100:3370
Bookmark this URL for consistent access to the LINSTOR GUI regardless of which node is active.
📝 NOTE: The LINSTOR controller service defaults to plain HTTP on port 3370. If you have configured TLS, use https:// instead.
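A quick way to confirm reachability from a client machine is an HTTP probe against the VIP. This sketch assumes plain HTTP on the default port, per the note above:

```shell
# Probe the LINSTOR GUI through the VIP (plain HTTP, default port 3370).
# Switch the scheme to https:// if TLS is configured.
VIP=10.43.70.100
url="http://${VIP}:3370"
if curl -fsS -o /dev/null --max-time 5 "$url"; then
    echo "GUI reachable at $url"
else
    echo "GUI not reachable at $url" >&2
fi
```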
If you are running LINSTOR Gateway, update its configuration to connect through the VIP address.
Edit /etc/linstor-gateway/linstor-gateway.toml on all nodes that run the LINSTOR Gateway server:
[linstor]
controllers = ["10.43.70.100"]
Replace 10.43.70.100 with your VIP address.
Alternatively, you can set the LS_CONTROLLERS environment variable or pass the --controllers flag when starting the LINSTOR Gateway server.
After updating the configuration, restart the LINSTOR Gateway service:
systemctl restart linstor-gateway
If you are running LINSTOR on Proxmox VE, update the controller line in /etc/pve/storage.cfg to use the VIP address.
Without a VIP address, the LINSTOR Proxmox storage plugin lists all controller node IP addresses:
drbd: linstor_storage
resourcegroup pve-rg
content images,rootdir
controller 10.43.70.1,10.43.70.2,10.43.70.3
With a VIP address, replace the comma-separated list with the single VIP address:
drbd: linstor_storage
resourcegroup pve-rg
content images,rootdir
controller 10.43.70.100
📝 NOTE: The /etc/pve/storage.cfg file is shared across all nodes in a Proxmox VE cluster through the pmxcfs cluster file system. Editing it on one node applies the change to all nodes automatically. No service restart is required. The LINSTOR storage plugin reads the configuration on each storage operation.
If you use LINSTOR with other virtualization or cloud platforms, update their controller connection settings to use the VIP address as well. The configuration location and format vary by platform, but some examples are:
- OpenNebula: Set the LINSTOR_CONTROLLERS attribute in your datastore configuration to the VIP address and port, for example 10.43.70.100:3370.
- CloudStack: Enter the VIP address in the Server field when configuring LINSTOR primary storage through the CloudStack UI.
- OLVM or oVirt: Set the linstor_uris driver option to linstor://10.43.70.100 when adding a Managed Block Storage data domain.
Refer to the LINSTOR User Guide for detailed instructions for each platform.
Created 2026/03/24 - RR
Reviewed 2026/03/25 - MAT