How To Set Up FC-SAN With Proxmox VE 9.0 Snapshots

By Joseph Matino | Last updated: August 2, 2025

Complete FC-SAN setup guide for Proxmox VE 9.0: configure shared storage, enable snapshots, set up multipath, and build a cluster step by step.

Setting up FC-SAN with Proxmox VE has long been one of those topics that sent people straight back to VMware. The main blocker was never the FC connectivity itself; that part worked fine. The real problem was snapshots.

For years, anyone wanting to run VMs on FC-SAN storage had to choose between shared storage that works across the cluster and snapshot functionality. You could not have both. This limitation kept many organizations stuck on their existing VMware infrastructure, even when they wanted to explore Proxmox VE as an alternative.

Article Content

  • What This Actually Means
  • Step 1: Installing Fibre Channel HBAs
  • Step 2: SAN Zoning and LUN Presentation
  • Step 3: Configure Multipath for FC Storage
  • Step 4: Proxmox VE Storage Configuration and Snapshots
  • Step 5: Performance Optimization and High Availability Setup
  • Moving Forward

Things just changed with Proxmox VE 9.0 Beta. The development team added snapshot support for thick-provisioned LVM storage using volume chains. This means FC-SAN storage can now handle VM snapshots properly, something the community has been requesting for years.

I have been running Proxmox VE on bare metal for our hosting business, relying on iSCSI to expand storage. When this FC-SAN snapshot functionality became available, it opened up possibilities for anyone with existing SAN infrastructure. The community discussions around this development show how many people have been waiting for exactly this solution.

What This Actually Means

Instead of choosing between shared storage or snapshots, you now get both. Your existing SAN hardware, FC infrastructure, and previous investment all work with proper snapshot functionality.

Setting up FC-SAN with Proxmox VE involves several key components working together. You need proper Fibre Channel Host Bus Adapters installed in your servers, SAN storage that supports FC connectivity, and correct zoning configuration on your FC switches. The hardware compatibility is generally good with most enterprise-grade equipment from vendors like Dell, HP, and NetApp working well with Proxmox VE.

The snapshot functionality requires Proxmox VE 9.0 Beta or newer, and VM disks must be configured in qcow2 format rather than raw format. This is different from how traditional LVM storage works in Proxmox VE, where raw format was typically preferred for performance reasons. The new volume chain approach maintains good performance while enabling the snapshot features that were previously impossible with shared LVM storage.

Hardware That Works With FC-SAN

These are configurations currently running in production environments with the new snapshot functionality working properly. The comparison table below shows verified hardware combinations and their current status with Proxmox VE 9.0 Beta.

Storage System           | HBA Configuration            | User Report Status  | Snapshot Support
Dell MD3820f             | QLogic ISP2532 8Gb Dual Port | Production Verified | Volume Chain Working
HP StorageWorks P2000 G3 | HP 82Q (QLE2562) 8Gb         | Forum Confirmed     | Volume Chain Ready
NetApp MetroCluster      | HPE 32G FC HBA Dual Port     | Production Setup    | Volume Chain Compatible
HP StoreFabric SN1100Q   | Marvell QLogic 16Gb          | Forum Question      | Volume Chain Expected
Generic FC Storage       | QLogic ISP25xx Series        | Widely Compatible   | Volume Chain Ready

Here’s what the detection looks like on a working system:

root@jmhstwekprox:~# lspci -nn | grep -i hba
06:00.0 Fibre Channel [0c04]: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA [1077:2532] (rev 02)
06:00.1 Fibre Channel [0c04]: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA [1077:2532] (rev 02)

root@jmhstwekprox:~# fdisk -l | grep -B1 --no-group-separator FC-SAN
Disk /dev/sdc: 8.19 TiB, 9007199254740992 bytes, 17592186044416 sectors
Disk model: FC-SAN-Vol1
Disk /dev/sdd: 8.19 TiB, 9007199254740992 bytes, 17592186044416 sectors
Disk model: FC-SAN-Vol1

Requirements and Important Notes

The snapshot functionality requires Proxmox VE 9.0 Beta or newer. Earlier versions simply do not have the volume chain support needed for shared LVM snapshots. There are several important points to understand before starting the setup process.

VM disks must be configured in qcow2 format rather than raw format for snapshots to work. This differs from traditional LVM storage where raw format was preferred for performance. Your SAN firmware should be reasonably current, as older firmware versions sometimes have compatibility issues with newer Linux multipath configurations.

The FC infrastructure needs proper zoning configured on your switches, and LUN masking should be set up correctly on the storage side. Configuration is currently done through command line rather than the web interface, so some manual work is required.

Key requirements include:

  • Proxmox VE 9.0 Beta for FC-SAN snapshots (earlier versions will not work)
  • Recent SAN firmware (check with your vendor for compatibility)
  • Proper FC infrastructure (switches, cables, zoning configured correctly)
  • qcow2 disk format (raw disks will not support snapshots)
  • Manual configuration (no GUI options available yet)
  • Testing approach (this is beta software, test thoroughly before production)
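
Before going further, confirm the node is actually running a 9.x release; the storage option used later in this guide does not exist on 8.x. A quick check with pveversion, which ships with every Proxmox VE install:

# Confirm the Proxmox VE release on this node
pveversion
# Package-level detail, including the running kernel
pveversion -v | head -5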

This guide walks through the complete setup process using these hardware configurations. We cover the physical installation of FC HBAs, SAN zoning and LUN presentation, Proxmox VE storage configuration, multipath setup for redundancy, and enabling the new snapshot functionality. The troubleshooting section addresses common issues that come up during implementation, based on real deployment experiences.

Step 1: Installing Fibre Channel HBAs

The physical installation of FC HBAs is straightforward, but there are specific steps that matter for getting them recognized properly by Proxmox VE. Most enterprise servers have PCIe slots that can accommodate standard FC HBAs, but the installation process requires attention to detail.

Power down your Proxmox VE server completely before starting. Hot-swapping FC HBAs is possible on some enterprise hardware, but it can cause system instability and is not worth the risk. Make sure you have anti-static protection when handling the cards.

Choose PCIe Slots

Most FC HBAs require PCIe x8 slots for full performance, though they will work in x16 slots as well. Based on forum reports and hardware documentation, the first PCIe x16 slot typically provides the best compatibility for passthrough scenarios, but for direct host usage, any x8 or x16 slot should work fine.

Check your server documentation for slot specifications. Some servers have specific slots designated for storage controllers that provide better performance and fewer compatibility issues. Users report that boards built on consumer chipsets such as the AMD B550 can only reliably pass through the first x16 PCIe slot, but this mainly affects VM passthrough scenarios.

Install the HBAs in slots that provide adequate airflow. FC HBAs can generate significant heat under load, and poor cooling will cause performance throttling or hardware failures. If you are installing dual HBAs for redundancy, spread them across different PCIe buses when possible to avoid bandwidth contention.

Physical Installation

Remove the server chassis cover and locate the appropriate PCIe slots. Remove the slot covers and set them aside – you will need them if you ever remove the HBAs. Carefully seat each HBA in its slot, ensuring it clicks into place securely. The retention clips should engage automatically.

Connect the FC cables to the HBAs before closing the chassis. FC cables use either LC or SC connectors depending on your HBA model. Make sure the connections are secure and that the cables are not pinched or bent sharply. Poor cable connections cause intermittent connectivity issues that are difficult to troubleshoot later.

Boot and Recognition

Power on the server and watch the boot process. Most HBAs will display their own BIOS information during startup, showing firmware versions and detected ports. This is normal and indicates the cards are being recognized at the hardware level.

After Proxmox VE boots, verify the HBAs are detected:

lspci | grep -i fibre

You should see output like:

root@jmhstwekprox:~# lspci | grep -i fibre
06:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
06:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)

If no devices appear, check that the cards are properly seated and that your server BIOS recognizes them. Some servers require enabling specific PCIe slots in BIOS settings, and network interface names can change when HBA cards are installed.

Check that the FC host adapters are properly initialized:

ls -l /sys/class/fc_host/

Expected output:

root@jmhstwekprox:~# ls -l /sys/class/fc_host/
total 0
lrwxrwxrwx 1 root root 0 Jan 15 10:23 host3 -> ../../devices/pci0000:00/0000:00:07.0/0000:06:00.0/host3/fc_host/host3
lrwxrwxrwx 1 root root 0 Jan 15 10:23 host4 -> ../../devices/pci0000:00/0000:00:07.0/0000:06:00.1/host4/fc_host/host4

If the directory is empty, the FC driver may not have loaded properly. Check dmesg | tail -20 for error messages related to the HBA initialization.

Collect World Wide Names

Each FC HBA port has a unique World Wide Name that identifies it on the SAN fabric. You will need these WWNs for SAN zoning configuration. Extract them using this command:

cat /sys/class/fc_host/host*/port_name

Expected output:

root@jmhstwekprox:~# cat /sys/class/fc_host/host*/port_name
0x21000024ff5cab56
0x21000024ff5cab57

If you get file not found errors, wait a few minutes for the FC subsystem to fully initialize, then try again. Some HBAs take time to complete their startup sequence.

Record these WWNs – you will need them when configuring zoning on your FC switches. The format shown here is the standard 16-character hexadecimal representation that most SAN management tools expect.
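
Most switch and array management tools expect WWNs as colon-separated pairs rather than the raw 0x value that sysfs prints. A small sketch that reformats them, using nothing beyond the tools already installed:

for wwn in /sys/class/fc_host/host*/port_name; do
    # strip the leading 0x, then insert a colon after every two hex digits
    sed 's/^0x//; s/../&:/g; s/:$//' "$wwn"
done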

Install Required Software

Proxmox VE includes most FC drivers by default, but you need additional tools for managing multipath and SCSI devices. Install the necessary packages:

apt update
apt install multipath-tools sg3-utils

If the installation fails with dependency errors, run apt --fix-broken install first, then retry. Sometimes the package cache needs refreshing if you have custom repositories configured.

The sg3-utils package provides useful commands for SCSI device management and troubleshooting. The multipath-tools package is essential for managing redundant paths to your SAN storage.

Update Firmware

Check your HBA firmware versions and update if necessary. Older firmware can have compatibility issues with newer Linux kernels or cause performance problems. Users report that newer firmware often fixes intermittent connectivity issues.

For QLogic HBAs, you can check firmware versions with:

cat /sys/class/fc_host/host*/firmware_version

This should return something like:

root@jmhstwekprox:~# cat /sys/class/fc_host/host*/firmware_version
8.07.00 (d0d5)
8.07.00 (d0d5)

If you see permission denied errors, the HBA driver may not be fully loaded. Wait a few minutes after boot and try again, or check dmesg | grep -i qlogic for driver loading issues.

Visit your HBA vendor website to determine if newer firmware is available. Firmware updates typically require a reboot to take effect, so plan accordingly. Some firmware updates may temporarily cause the HBA to disappear from the system until the reboot completes.

The HBA installation is complete when you can see the cards detected by Linux, the FC host adapters are initialized, and you have collected the WWNs for SAN configuration. The next step involves configuring your SAN to present storage to these newly installed HBAs.

Step 2: SAN Zoning and LUN Presentation

With your FC HBAs installed and recognized, the next critical step is configuring your SAN infrastructure to present storage to your Proxmox VE hosts. This involves two main tasks: setting up proper zoning on your FC switches and configuring LUN presentation on your storage array.

SAN zoning controls which devices can communicate with each other on the FC fabric. Without proper zoning, your Proxmox VE servers will not see any storage devices, even if everything else is configured correctly. Think of zoning as firewall rules for your storage network – it determines what can talk to what.

Zone Types and Configuration

There are two main types of zoning: single initiator zones and multiple initiator zones. For Proxmox VE clusters, single initiator zones are typically preferred because they provide better security and easier troubleshooting. Each Proxmox VE host gets its own zone containing its HBA WWNs and the target storage ports.

Most enterprise FC switches support both hardware-based zoning (preferred) and software-based zoning. Hardware zoning provides better performance and is enforced at the switch level, making it more secure. Always use hardware zoning when available.

Create Zones for Proxmox VE

Log into your FC switch management interface. The exact process varies by vendor, but the concepts are the same across Brocade, Cisco, and other manufacturers. You will need the HBA WWNs you collected in Step 1.

Brocade Switch Configuration

For Brocade switches, the process involves creating aliases first, then zones, then adding zones to a configuration. Start by creating aliases for your HBA WWNs:

# Connect to Brocade switch console
FCSwitch1:admin> alicreate "proxmox-host1-hba1", "21:00:00:24:ff:5c:ab:56"
FCSwitch1:admin> alicreate "proxmox-host1-hba2", "21:00:00:24:ff:5c:ab:57"
FCSwitch1:admin> alicreate "storage-port-a", "50:00:09:7a:00:12:34:56"
FCSwitch1:admin> alicreate "storage-port-b", "50:00:09:7a:00:12:34:57"

Create a zone for the host containing its HBA aliases and the storage target ports:

FCSwitch1:admin> zonecreate "proxmox-host1-zone", "proxmox-host1-hba1;proxmox-host1-hba2;storage-port-a;storage-port-b"

Add the zone to a configuration and activate it:

FCSwitch1:admin> cfgcreate "proxmox-config", "proxmox-host1-zone"
FCSwitch1:admin> cfgsave
FCSwitch1:admin> cfgenable "proxmox-config"

Cisco MDS Configuration

For Cisco MDS switches, the process uses VSANs and different commands. Enter configuration mode and select the appropriate VSAN:

# Connect to Cisco MDS console
switch# config terminal
switch(config)# zone name proxmox-host1-zone vsan 100
switch(config-zone)# member pwwn 21:00:00:24:ff:5c:ab:56
switch(config-zone)# member pwwn 21:00:00:24:ff:5c:ab:57
switch(config-zone)# member pwwn 50:00:09:7a:00:12:34:56
switch(config-zone)# member pwwn 50:00:09:7a:00:12:34:57

Create the zoneset and activate it:

switch(config)# zoneset name proxmox-zoneset vsan 100
switch(config-zoneset)# member proxmox-host1-zone
switch(config-zoneset)# zoneset activate name proxmox-zoneset vsan 100

Verify Zone Configuration

After creating zones, verify they are active and working. For Brocade switches:

FCSwitch1:admin> zoneshow --active

For Cisco switches:

switch# show zoneset active

Zone activation can briefly disrupt existing connections, so plan this during a maintenance window if you have production traffic on the FC fabric. The disruption typically lasts only a few seconds, but some applications are sensitive to even brief storage interruptions.

Configure Storage Array LUNs

Now configure your storage array to present LUNs to the Proxmox VE hosts. The exact process depends on your storage vendor, but the concepts are universal. You need to create LUNs on the storage array and then map them to the appropriate host groups.

NetApp Storage Configuration

For NetApp arrays, create an initiator group (igroup) with your HBA WWNs:

# Create igroup for Proxmox host
igroup create -vserver svm1 -igroup proxmox-host1 -protocol fcp -ostype linux
igroup add -vserver svm1 -igroup proxmox-host1 -initiator 21:00:00:24:ff:5c:ab:56,21:00:00:24:ff:5c:ab:57

Create LUNs and map them to the igroup:

# Create LUN
lun create -vserver svm1 -volume vol1 -lun lun1 -size 500GB -ostype linux

# Map LUN to igroup
lun map -vserver svm1 -volume vol1 -lun lun1 -igroup proxmox-host1

Dell Storage Configuration

For Dell arrays, create host entries and storage groups. The process varies by model, but typically involves:

  1. Register the host with HBA WWNs
  2. Create storage groups with the desired LUNs
  3. Add the host to the storage group

HP Storage Configuration

HP arrays use host groups and virtual disks. Create host entries with your WWNs, then assign virtual disks to the host group containing your Proxmox hosts.

Set Up LUN Masking

Configure LUN masking to present the appropriate LUNs to each host group. Start with smaller LUN numbers (0, 1, 2, etc.) as some older systems have issues with high LUN numbers. Keep LUN numbering consistent across all hosts in your cluster to avoid confusion.

For shared storage scenarios (which you need for Proxmox VE clustering), map the same LUNs to all hosts in your cluster. This is different from some other virtualization platforms where each host might get dedicated LUNs.

If you are setting up a new environment, create test LUNs first before configuring your production storage. A small 10GB test LUN lets you verify connectivity and permissions without risking larger storage volumes.

Verify Storage Visibility

After configuring zoning and LUN presentation, check that Proxmox VE can see the new storage devices. This may take a few minutes as the FC subsystem discovers the newly available LUNs.

Rescan for new SCSI devices:

echo "- - -" > /sys/class/scsi_host/host3/scan
echo "- - -" > /sys/class/scsi_host/host4/scan

Check if new storage devices appeared:

lsblk | grep -v loop

You should see new block devices corresponding to your SAN LUNs. If you configured multiple paths to the same LUN, you might see the same storage appear multiple times with different device names – this is normal and will be handled by multipath configuration in the next step.

Troubleshoot Connection Issues

If no new devices appear, verify your zoning configuration first. Use the FC switch commands to confirm the zones are active and contain the correct WWNs. A single wrong character in a WWN will prevent the zone from working.

Check the storage array logs for access attempts. Most arrays log when hosts attempt to access LUNs, which helps identify whether the problem is with zoning or with LUN presentation.

Some storage arrays require explicit host registration before they will present LUNs. Check your storage documentation for host registration requirements – this is common with NetApp and some Dell arrays.

Check FC link status on your HBAs:

cat /sys/class/fc_host/host*/port_state

This should show “Online” for all ports. If you see “Linkdown” or “Blocked”, check your FC cable connections and switch port status.

Review FC error counters:

cat /sys/class/fc_host/host*/statistics/error_frames
cat /sys/class/fc_host/host*/statistics/link_failure_count

Non-zero error counts indicate physical layer problems with cables, SFPs, or switch ports.
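
To keep these checks together, a small loop that prints the link state and error counters per port can be handy (the counter values come straight from sysfs and are often shown in hex):

for host in /sys/class/fc_host/host*; do
    echo "$(basename "$host"): state=$(cat "$host"/port_state)" \
         "errors=$(cat "$host"/statistics/error_frames)" \
         "link_failures=$(cat "$host"/statistics/link_failure_count)"
done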

Force a fabric login to test connectivity:

echo 1 > /sys/class/fc_host/host3/issue_lip
echo 1 > /sys/class/fc_host/host4/issue_lip

Check system logs for FC-related messages:

dmesg | grep -i "scsi\|fc\|qlogic"
journalctl -u multipathd | tail -20

Keep Track of Your Setup

Record your zone names, WWNs, and LUN mappings in a simple text file. This information becomes critical when troubleshooting connectivity issues or adding new hosts to the cluster. Include the storage array serial numbers and firmware versions in your notes.

Save your switch configuration files and take screenshots of working zone configurations. Most FC switches support configuration backup and restore, which can save hours if you need to rebuild a switch configuration after hardware failure.
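
A minimal sketch that captures the host-side half of this documentation into a dated file (the output path is just an example):

#!/bin/bash
# Snapshot the current FC and LUN layout into a reference file
OUT=/root/fc-san-inventory-$(date +%F).txt
{
    echo "== HBA WWNs =="
    cat /sys/class/fc_host/host*/port_name
    echo "== FC port states =="
    cat /sys/class/fc_host/host*/port_state
    echo "== Block devices =="
    lsblk -o NAME,SIZE,MODEL,WWN | grep -v loop
} > "$OUT"
echo "Saved inventory to $OUT"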

With zoning and LUN presentation configured correctly, your Proxmox VE hosts should now see the SAN storage devices. The next step involves configuring multipath to handle redundant connections and then setting up the Proxmox VE storage configuration to use these devices.

Step 3: Configure Multipath for FC Storage

With your SAN presenting LUNs to Proxmox VE, you likely have multiple paths to the same storage devices. This is exactly what you want for redundancy, but Linux sees each path as a separate block device. Multipath configuration combines these duplicate paths into single logical devices that Proxmox VE can use efficiently.

Multipath provides two critical benefits: automatic failover when one path fails, and load balancing across multiple paths for better performance. Without proper multipath configuration, you might see the same LUN appear as multiple devices, leading to data corruption if different processes write to different paths simultaneously.

Check Current Path Status

First, verify that you can see multiple paths to your SAN storage. Look for duplicate devices with the same size but different device names:

lsblk | grep -E "sd[c-z]"

You should see something like this if multipath is not yet configured:

sdc    8:32   0  8.2T  0 disk 
sdd    8:48   0  8.2T  0 disk
sde    8:64   0  8.2T  0 disk
sdf    8:80   0  8.2T  0 disk

Multiple devices with identical sizes usually indicate the same LUN visible through different FC paths. You can confirm this by checking the SCSI device information:

lsscsi | grep -v cd

This shows the SCSI bus, target, and LUN information for each device. Devices with the same target and LUN but different host numbers represent multiple paths to the same storage.
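
You can also confirm which device nodes are paths to the same LUN by comparing their WWIDs with scsi_id, which ships with udev on Proxmox VE. The device names below match the earlier example and will differ on your system:

# Devices that print the same WWID are different paths to the same LUN
for dev in /dev/sd{c,d,e,f}; do
    echo "$dev: $(/lib/udev/scsi_id -g -u -d "$dev")"
done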

Basic Multipath Configuration

The multipath configuration file controls how paths are grouped and managed. Create or edit /etc/multipath.conf:

nano /etc/multipath.conf

Start with this basic configuration based on current Proxmox recommendations:

defaults {
    user_friendly_names yes
    find_multipaths yes
    polling_interval 2
    path_selector "round-robin 0"
    path_grouping_policy multibus
    failback immediate
    no_path_retry queue
    rr_min_io 100
}

blacklist {
    devnode "^sda"
    devnode "^sdb"
}

blacklist_exceptions {
    property "(ID_SCSI_VPD|ID_WWN|ID_SERIAL)"
}

The user_friendly_names option creates device names like /dev/mapper/mpatha instead of long WWID-based names. The find_multipaths setting tells multipath to automatically detect and manage devices that have multiple paths without requiring manual WWID entry.

Storage-Specific Configuration

Different storage arrays require specific multipath settings for optimal performance. Add a devices section for your storage vendor. Based on the official Proxmox documentation and current working configurations:

For Dell storage arrays:

devices {
    device {
        vendor "DELL"
        product "MD3[0-9].*"
        path_grouping_policy "group_by_prio" 
        path_checker "rdac"
        prio "rdac"
        hardware_handler "1 rdac"
        failback "immediate"
        no_path_retry 30
        rr_min_io 100
    }
}

For HP/HPE storage:

devices {
    device {
        vendor "(HP|HPE)"
        product "MSA [12]0[456]0 (SAN|SAS|FC|iSCSI)"
        path_grouping_policy "group_by_prio"
        path_checker "tur"
        prio "alua"
        failback "immediate"
        no_path_retry 18
    }
}

For NetApp storage (current ONTAP versions):

devices {
    device {
        vendor "NETAPP"
        product "LUN.*"
        path_grouping_policy "group_by_prio"
        path_checker "tur"
        prio "alua"
        failback "immediate"
        no_path_retry "queue"
    }
}

Activate Multipath

Enable and start the multipath service:

systemctl enable multipathd
systemctl start multipathd

Load the new configuration:

multipath -ll

You should see output showing your storage devices with multiple paths:

root@jmhstwekprox:~# multipath -ll
mpatha (36005076801810818000000000000000) dm-2 DELL,MD38xxf
size=8.2T features='1 queue_if_no_path' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 3:0:0:0 sdc 8:32  active ready running
| `- 4:0:0:0 sde 8:64  active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 3:0:1:0 sdd 8:48  active ready running
  `- 4:0:1:0 sdf 8:80  active ready running

This shows one logical device (mpatha) with four physical paths. The active ready running status indicates healthy paths. For NetApp storage, you might see different priority groups with ALUA.

Test Path Failover

Test that failover works by temporarily disabling a path. Find one of the physical paths and simulate a failure:

echo offline > /sys/block/sdc/device/state

Check multipath status:

multipath -ll

You should see the disabled path marked as failed faulty offline. IO should continue flowing through the remaining paths without interruption.
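
One way to verify that I/O really does keep flowing is to run some read traffic against the multipath device while the path is offline, for example:

# Read-only traffic through the multipath device; this should complete
# without errors while one of the underlying paths is offline
dd if=/dev/mapper/mpatha of=/dev/null bs=1M count=2048 status=progress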

Re-enable the path:

echo running > /sys/block/sdc/device/state
multipath -r

Common Multipath Issues

Here are the most frequent problems and their solutions:

  • No multipath devices appear – Check that you have truly redundant paths. Single-path devices will not be managed by multipath with the find_multipaths setting enabled.
  • Permission denied errors when editing /etc/multipath.conf – You need root access. Use sudo or switch to the root user.
  • “Device in use” errors during multipath reload – Something is actively using the raw device paths. Stop any processes accessing /dev/sd* devices before reloading multipath configuration.
  • “Device or resource busy” error – Often occurs after reboots when LVM tries to use individual paths instead of multipath devices. This is typically resolved by properly configuring the LVM filter (covered next).
  • All paths show as active – Some storage arrays report all paths as active even when they are not optimized. This is normal with ALUA (Asymmetric Logical Unit Access) storage. Check your storage documentation for optimal path configuration.

Monitor Multipath Health

Set up basic monitoring to catch path failures:

# Check for failed paths
multipath -ll | grep failed

# Monitor path status changes
journalctl -u multipathd -f

Create a simple monitoring script that alerts when paths fail:

#!/bin/bash
FAILED_PATHS=$(multipath -ll | grep -c failed)
if [ $FAILED_PATHS -gt 0 ]; then
    echo "Warning: $FAILED_PATHS multipath paths have failed"
    multipath -ll | grep failed
fi
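
If you save that script somewhere like /usr/local/sbin/check-multipath.sh (the path is just a suggestion) and have working local mail delivery, a cron.d entry is enough to get alerted, since cron mails any non-empty output to root:

# Run the path check every five minutes; cron mails non-empty output to root
echo '*/5 * * * * root /usr/local/sbin/check-multipath.sh' > /etc/cron.d/check-multipath
chmod 644 /etc/cron.d/check-multipath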

Blacklist Devices

Your boot drives and local storage should be excluded from multipath management. Update the blacklist section in /etc/multipath.conf to include any local devices:

blacklist {
    devnode "^sda"
    devnode "^sdb"
    devnode "^nvme"
    wwid "eui.0025385b71e40e90"
}

Use the wwid from multipath -ll output to blacklist specific devices. This is more reliable than device node names, which can change between reboots.

Configure LVM Filter

Configure LVM to ignore the individual FC paths and only use the multipath devices. Edit /etc/lvm/lvm.conf and find the devices section:

nano /etc/lvm/lvm.conf

Update the filter line to look like this:

filter = [ "a|/dev/mapper/|", "a|/dev/sda.*|", "r|.*|" ]

This filter accepts multipath devices (/dev/mapper/), accepts your boot device (/dev/sda), and rejects everything else. This prevents LVM from getting confused by seeing the same storage through multiple paths.

After changing the LVM filter, update the initramfs:

update-initramfs -u
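
To confirm the filter behaves as intended, list the physical volumes LVM can still see; only multipath devices and the boot disk should appear:

pvs -o pv_name,vg_name
# Anything listed as /dev/sdX other than the boot disk means the filter
# is not being applied; re-check /etc/lvm/lvm.conf and the initramfs update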

With multipath properly configured, your SAN storage appears as reliable, redundant block devices that Proxmox VE can use for shared storage. The next step involves configuring these multipath devices as LVM storage within Proxmox VE and enabling the new snapshot functionality.

Step 4: Proxmox VE Storage Configuration and Snapshots

With multipath devices working properly, the next step involves configuring Proxmox VE to use these devices as shared storage and enabling the new snapshot functionality. This is where the real power of Proxmox VE 9.0 Beta becomes apparent, as you can now combine shared FC-SAN storage with proper VM snapshots.

The process involves creating LVM storage on your multipath devices and then configuring Proxmox VE to recognize this storage with snapshot support enabled. Unlike traditional LVM storage in Proxmox VE, the new volume chain approach requires specific configuration to work properly.

Set Up LVM on Multipath Devices

Start by creating physical volumes on your multipath devices. Use the /dev/mapper/ device names rather than the individual paths:

pvcreate /dev/mapper/mpatha

If you have multiple multipath devices, create physical volumes on each:

pvcreate /dev/mapper/mpatha /dev/mapper/mpathb /dev/mapper/mpathc

Create a volume group that spans your multipath devices:

vgcreate fc-san-vg /dev/mapper/mpatha /dev/mapper/mpathb /dev/mapper/mpathc

Verify the volume group was created correctly:

vgdisplay fc-san-vg

You should see output showing the total size and available space across all your multipath devices. The volume group is now ready for use by Proxmox VE.

Since you will be managing storage space across VMs and containers on this new FC-SAN setup, I also covered disk space management for LXC containers and VMs in Proxmox VE, which becomes more important as your environment grows.

Configure Proxmox VE Storage

The storage configuration requires manual editing of the storage configuration file to enable the new snapshot functionality. Edit /etc/pve/storage.cfg:

nano /etc/pve/storage.cfg

Add the FC-SAN storage configuration:

lvm: fc-san-storage
    vgname fc-san-vg
    content images
    shared 1
    snapshot-as-volume-chain 1

The snapshot-as-volume-chain 1 parameter is crucial – this enables the new volume chain snapshot functionality that was introduced in Proxmox VE 9.0 Beta. This parameter was previously called external-snapshots in early beta versions but was renamed for clarity. Without this parameter, snapshots will not work on shared LVM storage.
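
If you prefer not to edit the file by hand, the same storage can usually be defined through pvesm. The option names below are assumed to mirror the storage.cfg keys above, so confirm them against pvesm help add on your beta version before relying on this:

# CLI sketch equivalent to the storage.cfg entry above; option names are
# assumed to match the config keys, verify with `pvesm help add` first
pvesm add lvm fc-san-storage \
    --vgname fc-san-vg \
    --content images \
    --shared 1 \
    --snapshot-as-volume-chain 1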

How Volume Chain Snapshots Work

The new snapshot system creates volume chains rather than traditional copy-on-write snapshots. Each snapshot becomes a separate logical volume linked to the previous state. This approach provides better performance and more reliable snapshots on shared storage.

When you create a snapshot, Proxmox VE creates a new qcow2-formatted logical volume that uses the previous volume as a backing device. All blocks that are not in the outermost snapshot are searched in the chain of backing devices. This design avoids issues with native LVM snapshots, such as significant I/O penalties and dangerous behavior when running out of pre-allocated space.
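
You can inspect this chain directly with qemu-img once a snapshot exists. The logical volume path below is illustrative; list the real names with lvs fc-san-vg:

# Show the qcow2 backing chain of a VM disk (volume name is an example)
qemu-img info --backing-chain /dev/fc-san-vg/vm-100-disk-0.qcow2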

Create VM Disks with Snapshot Support

When creating VM disks on the FC-SAN storage, they must use qcow2 format for snapshots to work. This is different from traditional raw format disks that cannot use the volume chain feature.

Create a test VM disk using the command line:

qm set 100 -scsi0 fc-san-storage:32,format=qcow2

This creates a 32GB disk on the FC-SAN storage in qcow2 format. Raw format disks will not support the new snapshot functionality.

From the web interface, when adding a new disk to a VM, make sure to select qcow2 format instead of raw. The GUI should now show this option for LVM storage with volume chain snapshots enabled.

Test Snapshot Functionality

Create a test snapshot to verify everything is working:

qm snapshot 100 test-snapshot-1

List the snapshots to confirm it was created:

qm listsnapshot 100

You should see your snapshot listed with the volume chain structure. The snapshot creates a new logical volume and preserves the current state under the snapshot name.

To rollback to a snapshot:

qm rollback 100 test-snapshot-1

To delete a snapshot:

qm delsnapshot 100 test-snapshot-1

Verify Storage Configuration

Check that Proxmox VE recognizes the new storage. In the web interface, go to Datacenter → Storage and you should see your fc-san-storage listed. The status should show as available with the correct capacity.

From the command line, verify the storage is properly configured:

pvesm status

Your FC-SAN storage should appear in the list with the correct total and available space.

Current Limitations

The volume chain snapshot feature has some important limitations to be aware of:

  • Linear chains only – You can only rollback to the most recent snapshot. To rollback to an earlier snapshot, you must first delete all newer snapshots in the chain.
  • qcow2 format required – VM disks must use qcow2 format. Raw format disks do not support volume chain snapshots.
  • Beta software – This is still beta functionality. Test thoroughly before using in production environments.
  • No branching – Unlike some storage systems, you cannot create branched snapshot trees. Each snapshot creates a linear chain.

Performance Considerations

The volume chain snapshots can impact performance during snapshot operations. Monitor performance during peak usage to ensure it meets your requirements:

iostat -x 1

Watch for high wait times or queue depths that might indicate performance bottlenecks. The qcow2 format adds some overhead compared to raw format, but the trade-off enables snapshot functionality on shared storage.

Check multipath statistics to ensure load is distributed properly:

iostat -x 1 | grep -E "dm-|mpath"

Several issues can prevent the snapshot functionality from working properly:

  • Snapshot creation fails with format errors – VM disks must be in qcow2 format, not raw format. Check the VM hardware configuration to verify disk formats and recreate disks with qcow2 if needed.
  • Storage not visible in web interface – Verify that the volume group is active with vgdisplay and check that the snapshot-as-volume-chain parameter is set correctly in the storage configuration file.
  • Permission errors during snapshot operations – Usually indicates that the LVM filter in /etc/lvm/lvm.conf is not correctly configured to include multipath devices while excluding individual paths.
  • “Device in use” errors – Often occurs when trying to reload LVM configuration while VMs are running. Stop VMs temporarily or ensure no processes are actively using the raw device paths.

Monitoring and Maintenance

Set up monitoring for your FC-SAN storage to track both performance and capacity:

# Check volume group usage
vgs fc-san-vg

# Monitor logical volume usage
lvs fc-san-vg

# Check for snapshot volumes
lvs fc-san-vg | grep snap

The volume chain approach creates new logical volumes for each snapshot, so monitor logical volume count and space usage carefully. Unlike thin provisioning, these are thick volumes that consume their full allocated space.
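
Because every volume in the chain is thick-provisioned, running the volume group out of free space is the main operational risk. A simple sketch that warns when free space drops below a threshold (adjust the names and limit to your environment):

#!/bin/bash
# Warn when the FC-SAN volume group runs low on free space
VG="fc-san-vg"
THRESHOLD_GB=100

FREE_GB=$(vgs --noheadings --nosuffix --units g -o vg_free "$VG" | cut -d. -f1 | tr -d ' ')
if [ "$FREE_GB" -lt "$THRESHOLD_GB" ]; then
    echo "Warning: $VG has only ${FREE_GB}G free (threshold ${THRESHOLD_GB}G)"
    lvs "$VG" -o lv_name,lv_size --sort -lv_size | head -15
fi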

With FC-SAN storage properly configured in Proxmox VE with volume chain snapshots, you now have shared storage with snapshot functionality. This provides the foundation for high-availability VM deployments with proper backup and recovery capabilities that were previously impossible with shared LVM storage.

Step 5: Performance Optimization and High Availability Setup

With FC-SAN storage working properly with snapshots, the next phase involves optimizing performance and setting up high availability clustering. This step transforms your single-node FC-SAN setup into a production-ready cluster that can handle node failures and maintain optimal performance under load.

The combination of FC-SAN shared storage with Proxmox VE clustering provides enterprise-grade capabilities. Your VMs can automatically failover between nodes while maintaining access to the same storage, and performance optimizations ensure everything runs efficiently.

Optimize FC-SAN Performance

Start with storage-level optimizations that directly impact FC-SAN performance. The new volume chain snapshots in Proxmox VE 9.0 can affect I/O patterns, so monitoring becomes crucial.

Check current FC-SAN performance baseline:

iostat -x 1 5

Look for high wait times or queue depths on your multipath devices. If you see consistently high values, investigate further:

iostat -dx 1 | grep -E "dm-|mpath"

Optimize multipath I/O patterns by adjusting the rr_min_io value in your multipath configuration. For FC-SAN workloads with mixed I/O sizes, try different values:

nano /etc/multipath.conf

For mostly database workloads, use smaller values:

defaults {
    rr_min_io 10
}

For large sequential workloads, use larger values:

defaults {
    rr_min_io 1000
}

After changing multipath settings:

multipath -r
systemctl reload multipathd

Configure VM Storage Optimization

Configure VMs for optimal FC-SAN performance. The qcow2 format required for snapshots can impact performance, but several settings help:

Enable I/O threads on the disks of storage-heavy VMs (iothread is set per disk, not per VM):

qm set 100 -scsi0 fc-san-storage:vm-100-disk-0,iothread=1

Use VirtIO SCSI with proper settings:

qm set 100 -scsihw virtio-scsi-single

For high-performance workloads, consider disabling cache or using writeback:

qm set 100 -scsi0 fc-san-storage:vm-100-disk-0,cache=none
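
To check whether these settings actually help, a quick fio run gives a repeatable baseline. fio is in the standard Debian repositories; the test logical volume below is hypothetical, and the job only reads from it:

apt install -y fio

# 60-second 4k random-read baseline against a scratch LV (read-only);
# point this at a test volume of your own, never a production disk
fio --name=fcsan-randread \
    --filename=/dev/fc-san-vg/test-lv \
    --rw=randread --bs=4k --iodepth=32 --numjobs=4 \
    --ioengine=libaio --direct=1 \
    --runtime=60 --time_based --group_reporting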

Network Performance for Clustering

Prepare network performance for clustering before adding nodes. FC-SAN clusters benefit from dedicated networks for different traffic types.

Create dedicated network bridges for cluster communication:

nano /etc/network/interfaces

Add a dedicated cluster network interface:

auto ens224
iface ens224 inet manual

auto vmbr1
iface vmbr1 inet static
    address 10.0.200.11/24
    bridge-ports ens224
    bridge-stp off
    bridge-fd 0

Set Up Proxmox VE Clustering

Create the cluster on your first node. Choose a name that reflects your environment:

pvecm create production-cluster

Verify cluster status:

pvecm status

You should see your single node listed as online. The cluster is ready for additional nodes.

Adding Nodes to the Cluster

On additional Proxmox VE nodes that will join the cluster, ensure they can access the same FC-SAN storage. Install and configure multipath exactly as done on the first node.

Then join the cluster from each new node by pointing pvecm at the first node's cluster address:

pvecm add --use_ssh 10.0.200.11

After joining, verify all nodes see the shared storage:

pvesm status

The FC-SAN storage should appear on all cluster nodes with the same configuration.

Configure High Availability

Configure HA to automatically restart VMs on different nodes during failures. This requires proper fencing and shared storage – both of which your FC-SAN setup provides.

Create an HA resource group through the web interface or CLI:

ha-manager groupadd production-group --nodes node1,node2,node3

Add VMs to HA management:

ha-manager add vm:100 --group production-group --max_restart 3

Monitor and Validate Performance

Set up monitoring to track both individual node performance and cluster health. Create simple monitoring scripts for key metrics:

FC-SAN path health monitoring:

#!/bin/bash
echo "=== Multipath Status ==="
multipath -ll | grep -E "policy|status|dm-"

echo "=== FC Host Status ==="
cat /sys/class/fc_host/host*/port_state

echo "=== Storage Performance ==="
iostat -dx 1 1 | grep -E "dm-|mpath"

Cluster health monitoring:

#!/bin/bash
echo "=== Cluster Status ==="
pvecm status

echo "=== HA Status ==="
ha-manager status

echo "=== Storage Status ==="
pvesm status

Performance Testing and Validation

Test VM migration performance between nodes:

time qm migrate 100 node2

This shows how quickly VMs can move between nodes during maintenance or failures.

Test snapshot performance on the new volume chain system:

time qm snapshot 100 performance-test
time qm rollback 100 performance-test
time qm delsnapshot 100 performance-test

Several issues can affect FC-SAN cluster performance. Here are the most frequent problems and solutions:

  • Slow VM migrations – Check network bandwidth between nodes and verify multipath is properly distributing I/O across all available paths.
  • High snapshot creation times – Monitor storage array performance during snapshot operations, as volume chain snapshots create new LVM volumes.
  • Cluster communication delays – Verify dedicated cluster network is configured and not competing with VM traffic or storage traffic.
  • Storage performance degradation – Check FC cable connections, HBA statistics for errors, and verify SAN array is not overloaded.

Production Readiness Checklist

Before considering your FC-SAN cluster production-ready, verify these critical items:

  • Test failover scenarios – Power off nodes and confirm VMs restart automatically on surviving nodes without data loss or extended downtime.
  • Verify snapshot operations under load – Ensure snapshot creation, rollback, and deletion work correctly during normal operations and cluster activities.
  • Confirm monitoring and alerting – Test that alerts trigger appropriately for node failures, storage path issues, and performance degradation.
  • Document recovery procedures – Create step-by-step recovery guides for various failure scenarios specific to your FC-SAN hardware and configuration.

Your FC-SAN with Proxmox VE cluster now provides enterprise-grade capabilities with automatic failover, shared storage, and proper performance optimization. The combination of reliable SAN storage with the new snapshot capabilities gives you both the performance and data protection needed for production workloads.

Moving Forward

Proxmox VE 9.0 Beta finally solved the FC-SAN snapshot problem that kept many organizations stuck on VMware. What was impossible for years is now working in production environments.

This breakthrough removes the technical barrier that kept whole infrastructures in limbo. You can now migrate VMware setups while keeping backup workflows intact, using existing SANs like HP Nimble, NetApp, or Dell storage without rearchitecting for ZFS or Ceph.

Remember this is beta software, so test thoroughly before production use. Network interface names might change during upgrades from Proxmox 8.x, which could break scripts tied to specific interface names.

For standard LVM shared storage cases, this is a game-changer. Proxmox transforms from an alternative you had to compromise for into a true VMware replacement. The SAN headache that forced people back to VMware is finally over.

What’s your experience been with FC-SAN and Proxmox? Are you planning to migrate from VMware now that snapshots work? Share your setup or questions in the comments below.
