Proxmox Virtual Environment 9.0 was released on August 5, 2025, bringing major improvements for anyone running virtualization on physical servers. Built on Debian 13 “Trixie,” this update delivers better hardware compatibility, improved performance, and storage features that change how we manage bare metal deployments.
After testing this release on our bare metal servers for hosting and virtualization, we found the performance improvements on physical hardware immediately noticeable. The official Proxmox announcement highlights how this release tackles real problems we face daily. The Debian 13 foundation means your newer hardware finally gets proper support, while new storage capabilities give you options that were not available before.
Moving to Debian 13 “Trixie” as the base operating system represents the most significant change in Proxmox VE 9.0. This brings Linux kernel 6.14.8-2 as the stable default, delivering better hardware recognition and performance on modern server equipment. From our testing on bare metal servers, the hardware compatibility improvements are immediately noticeable. The newer kernel includes better support for PCIe 5.0 devices, NVMe storage controllers, and newer CPU architectures that were problematic with older kernels.
Debian 13 and Performance Gains
The version differences between Proxmox VE 8.x and 9.0 show the scope of this upgrade across all major components. Each update brings performance improvements and better hardware compatibility that directly benefits bare metal deployments.
| Component | Proxmox VE 8.x | Proxmox VE 9.0 | Key Improvements |
|---|---|---|---|
| Base OS | Debian 12 “Bookworm” | Debian 13 “Trixie” | Better hardware support, security updates |
| Linux Kernel | 6.8 series | 6.14.8-2 | PCIe 5.0, NVMe enhancements, newer CPU support |
| QEMU | 8.1.x series | 10.0.2 | Modern blockdev interface, improved IGD passthrough |
| LXC | 5.0.x series | 6.0.4 | Enhanced resource isolation, cgroup v2 integration |
| ZFS | 2.2.x series | 2.3.3 | RAIDZ expansion with minimal downtime |
| Ceph | Reef 18.2.x | Squid 19.2.3 | Performance improvements, better stability |
| Backup Integration | PBS 3.x compatibility | PBS 4.x support | Forward/backward compatibility maintained |
| Web Interface | ExtJS 7.x based | ExtJS 7.x with improvements | Enhanced performance and usability |
| API Version | API v8 compatibility | API v9 with extensions | New endpoints for snapshots and SDN |
| Storage Drivers | GlusterFS supported | GlusterFS deprecated | Focus on ZFS, Ceph, and LVM improvements |
| Container Runtime | cgroup v1/v2 mixed | cgroup v2 only | Better resource isolation and security |
| Network Stack | Basic SDN support | SDN Fabrics with OSPF | Complex routing and redundancy support |
| Mobile Interface | Legacy framework | Rust-based Yew framework | Better mobile management capabilities |
| High Availability | Basic clustering | Resource affinity rules | Fine-grained VM placement control |
Linux kernel 6.14.8-2 brings enhanced PCIe 5.0 support that improves compatibility with modern storage controllers and network cards. NVMe enhancements provide better performance with high-speed storage devices commonly used in hosting environments. QEMU 10.0.2 introduces the modern blockdev interface which gives better control during storage migrations between nodes. The improved IGD passthrough now works reliably with Intel Gen 11 and 12 integrated graphics devices.
LXC 6.0.4 provides enhanced resource isolation through improved cgroup v2 integration. This makes container workloads more predictable and stable when running multiple services on shared hardware. Ceph Squid 19.2.3 delivers better performance and stability for distributed storage environments. The update includes optimizations that reduce latency in multi-node storage clusters.
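With cgroup v2 as the only supported controller, container limits map directly onto the kernel's resource controllers and behave more predictably. A quick sketch, assuming a hypothetical container with ID 101, of how limits are applied with the standard `pct` tool:

```shell
# Cap the (hypothetical) container 101 at 2 GiB RAM, 512 MiB swap, and 2 cores;
# under cgroup v2 these map to memory.max, memory.swap.max, and cpu.max
pct set 101 --memory 2048 --swap 512 --cores 2

# Give it a higher CPU weight relative to other containers (default is 100)
pct set 101 --cpuunits 150

# Verify the limits in the container configuration
pct config 101
```

The limits take effect without restarting the container, which makes it practical to tune noisy neighbors on shared hardware.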
The new network interface pinning tool fixes a real problem we have encountered during upgrades. Network interface names can change when the newer kernel recognizes additional hardware features, potentially locking you out of remote systems. This network stability becomes crucial during major upgrades, especially when working with complex storage setups.
Storage and Networking Improvements
The most requested feature is now here – snapshot support for thick-provisioned LVM shared storage. This functionality works with iSCSI and Fibre Channel SAN environments, implementing snapshots as volume chains where only differences are stored compared to parent volumes. This improvement helps environments using traditional storage infrastructure without relying on vendor-specific solutions.
When setting up Fibre Channel SAN configurations with Proxmox, these new snapshot capabilities create a layered approach where each snapshot only stores differences from its parent, making storage usage more efficient. The snapshot functionality follows ZFS snapshot best practices for managing point-in-time recovery options, which becomes particularly valuable for environments running critical workloads where quick recovery times matter more than storage efficiency.
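On storage backends that support it, the standard `qm` snapshot commands drive these volume chains; a sketch, using a hypothetical VM with ID 100:

```shell
# Take a point-in-time snapshot before risky maintenance (VM 100 is hypothetical)
qm snapshot 100 pre-maintenance --description "Before storage migration"

# List the snapshot chain for the VM
qm listsnapshot 100

# Roll back if something goes wrong; only the delta since the snapshot is discarded
qm rollback 100 pre-maintenance

# Remove the snapshot once the change is confirmed good
qm delsnapshot 100 pre-maintenance
```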
ZFS received an important update that allows adding new devices to existing RAIDZ pools with minimal downtime. Storage expansion becomes much more practical for production environments where you cannot afford service interruptions. The volume chain snapshots work alongside existing disk management practices, and if you already resize disks for LXC containers and VMs regularly, these new features provide additional tools for maintaining and expanding storage capacity.
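ZFS 2.3's RAIDZ expansion reuses `zpool attach`, which previously only worked for mirrors. A sketch, assuming a pool named `tank` with a single `raidz1-0` vdev and a spare disk at `/dev/sdd` (both hypothetical):

```shell
# Add /dev/sdd to the existing raidz1 vdev; data is redistributed across
# all member disks in the background while the pool stays online
zpool attach tank raidz1-0 /dev/sdd

# Watch the expansion progress and confirm the new disk joined the vdev
zpool status tank
```

Note that existing data keeps its original parity ratio until rewritten, so space efficiency improves gradually as blocks are rewritten after the expansion.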
The new SDN Fabrics feature simplifies the configuration and management of complex routed networks. This addition supports both OpenFabric and OSPF routing protocols, making it easier to build redundant network architectures with automatic failover capabilities. For hosting environments that require multiple network paths and high availability, these networking improvements reduce the complexity of setting up spine-leaf architectures and full-mesh networks.
Proxmox VE 9.0 also introduces HA resource affinity rules that provide fine-grained control over how virtual machines and resources are distributed across cluster nodes. These rules allow administrators to keep related services together on the same physical node to minimize latency, or ensure critical services are separated across different nodes for maximum redundancy. This level of control matters for hosting environments where application performance and fault tolerance requirements vary between different workloads.
Should You Upgrade Right Now?
With VMware’s aggressive licensing changes under Broadcom pushing costs up by 150% to over 1,000% for many organizations, Proxmox VE 9.0 arrives at precisely the right time. The transition to subscription-only models, 72-core minimum licensing requirements, and forced product bundles has many enterprises reconsidering their virtualization strategy. The licensing model differences become stark when compared side by side:
| Aspect | VMware (Broadcom) | Proxmox VE 9.0 |
|---|---|---|
| Licensing Model | Subscription-only, 72-core minimum | Open-source, optional enterprise support |
| Cost Structure | $4,780+ per CPU per year | Free community, €800/CPU/year enterprise |
| Product Bundling | Forced bundles (VCF, VVF) | Modular components, use what you need |
| Performance | Baseline reference | 50% higher IOPS, 30% lower latency |
| Vendor Lock-in | Proprietary ecosystem | Open standards, no lock-in |
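To put those figures in perspective, here is a back-of-the-envelope comparison for a hypothetical four-node cluster with two CPUs per node, using the lower-bound per-CPU prices from the table (note the currencies differ, so this is a rough order-of-magnitude comparison, not an exact one):

```shell
# Hypothetical cluster: 4 nodes x 2 CPUs = 8 licensed CPUs
CPUS=8
VMWARE_PER_CPU=4780     # USD/year, lower bound from the table
PROXMOX_PER_CPU=800     # EUR/year, enterprise subscription tier

vmware_total=$((CPUS * VMWARE_PER_CPU))
proxmox_total=$((CPUS * PROXMOX_PER_CPU))

echo "VMware:  \$${vmware_total}/year"
echo "Proxmox: EUR ${proxmox_total}/year"
```

Even ignoring exchange rates, the annual spend differs by roughly a factor of six for this cluster size, and the Proxmox community edition drops the subscription cost to zero entirely.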
The decision to upgrade depends on your current setup and specific needs. Users running Proxmox VE 8.4 or older versions have access to extensively tested upgrade paths that make the transition smoother. Independent performance testing shows Proxmox VE consistently outperforming VMware ESXi in storage-intensive workloads, delivering nearly 50% higher IOPS and 30% lower latency while achieving 38% higher bandwidth during peak conditions.
The hardware compatibility improvements make this update particularly attractive for environments with newer server equipment that may not be fully supported by older kernel versions.
Testing this on some of our bare metal servers showed that the upgrade process requires careful planning due to the major changes in the underlying operating system. Network interface name changes present a real challenge during upgrades. I have seen this happen on production systems where the primary network interface changed names, breaking the network configuration and requiring physical console access to fix.
When dealing with additional IPs like BYOIP blocks that need to be routed, your whole network configuration becomes critical because interface name changes can break routing tables and leave entire IP blocks unreachable. The newer kernel in Proxmox VE 9.0 recognizes more hardware features, which can cause interface names to shift from something like eno1 to enp1s0 during the upgrade process.
Remote management becomes problematic when this happens. Most hosting environments rely on SSH access for server management, and losing network connectivity during an upgrade can lock you out completely. Proxmox VE 9.0 addresses this with the pve-network-interface-pinning tool. This utility lets you bind MAC addresses to stable interface names like nic0 and nic1, ensuring your network configuration remains consistent across kernel upgrades.
```
root@hwnode:~# pve-network-interface-pinning generate
This will generate name pinning configuration for all interfaces - continue (y/N)? y
Name for link 'enp3s0' will change to 'nic0'
Name for link 'enp4s0' will change to 'nic1'
Name for link 'enp5s0f0np0' will change to 'nic2'
Name for link 'enp5s0f1np1' will change to 'nic3'
Generating link files
Successfully generated .link files in '/usr/local/lib/systemd/network/'
Updating /etc/pve/nodes/hwnode/host.fw.new
Updating /etc/network/interfaces.new
Successfully updated Proxmox VE configuration
```
Interface pinning becomes particularly important when managing complex networking setups or multiple network cards. For environments with redundant connections or specific firewall rules tied to interface names, this tool prevents configuration drift that could impact network security or performance. Running the pinning tool before upgrading from Proxmox VE 8 saves troubleshooting time and reduces the risk of losing remote access during the upgrade process.
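After running the pinning tool and rebooting, it is worth confirming that the stable names actually took effect before making any other changes. A quick sanity check with standard iproute2 tools and grep:

```shell
# Confirm the pinned nicX names are present and links are up
ip -br link

# Make sure the network config references the new names,
# not the old kernel-assigned enpXsY ones
grep -E 'iface|bridge-ports' /etc/network/interfaces
```

If the bridge definitions still point at the old names, the node will boot without networking, so this check is cheap insurance before the major version upgrade.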
For production environments, the upgrade should be planned during maintenance windows. Testing the upgrade process in a non-production environment first is recommended to identify any potential compatibility issues with specific hardware or configurations. The platform continues to support installation on top of Debian for users who prefer that approach, and can be installed fresh on bare metal using the complete ISO image. After running this upgrade on several test systems, the storage improvements are valuable for any setup using shared storage infrastructure.
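Proxmox ships a checklist script for exactly this purpose, analogous to `pve7to8` from the previous release cycle. A sketch of the usual pre-upgrade sequence on a Proxmox VE 8.4 node (the repository file paths assume the standard Debian layout on your system):

```shell
# Run the built-in upgrade checker; --full enables all checks,
# including storage, cluster, and known-incompatibility tests
pve8to9 --full

# Point APT at the Debian 13 / PVE 9 repositories, then upgrade
sed -i 's/bookworm/trixie/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
apt update
apt dist-upgrade
```

Resolve every warning the checker reports before starting `apt dist-upgrade`; the script exists precisely because problems like interface renames and deprecated storage plugins are much cheaper to fix before the reboot than after.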
Bottom Line
Proxmox VE 9.0 delivers exactly what the virtualization market needs right now. While VMware customers face subscription-only licensing, forced bundling, and dramatic cost increases under Broadcom’s ownership, Proxmox continues advancing its open-source platform with real technical improvements that solve actual problems. The Debian 13 foundation provides better hardware support and performance, while new storage and networking features address operational needs that many running Proxmox VE 8 have been waiting for.
The snapshot support for thick LVM storage removes a limitation that has affected shared storage environments for years, making Proxmox viable for traditional SAN infrastructure that previously required VMware. The combination of improved hardware compatibility, enhanced storage options, and better networking tools makes Proxmox VE 9.0 a compelling upgrade for most bare metal deployments. One thing I particularly appreciate about Proxmox is the high IOPS you can get when the storage stack is configured correctly, and the ZFS and Ceph integration options give you control over performance tuning that proprietary solutions simply don’t offer.
Most hosting environments are still running Proxmox VE 8, which continues to work well for production workloads. When you’re ready to upgrade to version 9, plan it carefully with proper testing and backup strategies in place. The improved networking capabilities make complex setups more manageable, and the storage improvements are valuable for any setup using shared storage infrastructure. For those currently evaluating virtualization platforms, the timing has never been better to consider alternatives before your next VMware licensing renewal cycle hits.