Running out of disk space on Proxmox containers and virtual machines happens more often than you might expect. Learning how to resize Proxmox disk space properly ensures you can handle storage expansion and reduction on both LXC containers and VMs without data loss.
The Proxmox disk resize process varies depending on what you are working with, but the core principles remain the same. Proper disk resizing prevents downtime and keeps your virtualization infrastructure running smoothly. A step-by-step video tutorial is also available at the end of this guide for visual learners.
I encountered this situation running multiple containers on a private DHCP server. The main container had filled up its allocated 32GB space, and I needed to expand it without affecting the running services. Since the server was also using CyberPanel for management, I had to coordinate the expansion with ongoing backup operations to avoid conflicts.
When working with more complex storage setups like Fibre Channel SANs, disk management becomes even more critical for maintaining performance and reliability. I have found that environments using shared FC SAN storage with Proxmox require extra attention during resize operations since multiple nodes may be accessing the same storage volumes.
Pre-Resize Planning
Before making any changes to disk space in Proxmox VE, proper planning prevents problems that could lead to data loss or extended downtime. From my experience working with production systems, taking time for preparation saves hours of troubleshooting later.
- Create and test backups: Always create a complete backup using Proxmox VE built-in backup tools or Proxmox Backup Server. Test your backup by checking file integrity and making sure you can actually restore from it if needed. I learned this the hard way after assuming a backup was working, only to discover during an emergency that the backup files were corrupted and unusable.
- Check current disk usage: Log into your container or VM and run df -h to see exactly how much space is being used. This information helps you determine how much additional space you actually need rather than just guessing. Clean up unnecessary files first, since expanding storage when you could just free up existing space wastes resources.
- Identify disk image paths: Use the mount command for containers or qm config [vmid] for virtual machines. This step becomes important later when you need to work with the underlying storage files directly (see the sketch after this list).
- Plan downtime carefully: Reducing disk space requires the container or VM to be stopped, and stopping is also the safest approach for expansion. Schedule this work during maintenance windows to minimize impact on users and running services. For production environments, consider whether you can migrate services temporarily to other servers. The official Proxmox resize documentation provides additional technical details about the underlying processes.
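As a minimal illustration, these checks boil down to a few commands; the IDs 300 and 301 are the example container and VM used throughout this guide:
# Inside the container or VM: check how much space is actually used
df -h
# On the Proxmox host: locate the disk and note its current size
pct config 300 | grep rootfs
qm config 301 | grep -E 'scsi|virtio|sata|ide'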
Increase LXC Container Disk Space
Expanding disk space for LXC containers follows a straightforward process that works reliably across different storage backends. I typically use this method for containers that have grown beyond their initial allocation.
Step 1: Stop the Container
First, identify the container you want to resize. In this example, we will work with container ID 300. Stop the container to prevent active processes from interfering with the resize operation:
pct stop 300
Wait for the container to completely stop before proceeding. You can verify the status through the Proxmox web interface or by running pct status 300.
Step 2: Resize Container Storage
Use the pct resize command to increase the container root filesystem size:
pct resize 300 rootfs 60G
Here is what each part means:
- pct resize – Command for resizing container storage
- 300 – Container ID you are working with
- rootfs – The root filesystem of the container
- 60G – New total size (not additional space)
The system will show confirmation that the logical volume has been resized. This process usually completes quickly since it only involves updating storage allocation.
Step 3: Start and Verify
Start the container and verify the new disk size:
pct start 300
pct enter 300
df -h
The df -h command displays filesystem disk space usage in human-readable format. You should see the new total space available in the root filesystem. The expansion happens automatically for most filesystem types.
Reduce LXC Container Disk Space
Reducing container disk space requires more careful steps since you risk data loss if the current usage exceeds the new size. I only recommend this for containers where you are certain about the space requirements. This process is more complex than expansion because you are working with live data that must fit in the smaller space.
Step 1: Prepare the Container
Log into the container first to clean up files and reduce actual usage:
pct enter 300
df -h
# Clean up unnecessary files, logs, caches
apt autoremove
apt clean
exit
Make sure the used space is well below your target size. If you want to shrink to 20GB, ensure used space is under 15GB to allow for filesystem overhead.
Stop the container before proceeding with the resize:
pct stop 300
Step 2: Find Storage Path
Determine the disk image path by checking the container configuration with pct config 300, or by examining the mount points from inside the container before stopping it:
pct enter 300
mount
Look for the entry showing where the root filesystem is mounted from. In my setups, this is typically something like /var/lib/vz/images/300/vm-300-disk-0.raw.
Step 3: Shrink Filesystem and Storage
The reduction process depends on your storage backend: LVM volumes are handled differently from raw image files on directory storage. Here are both approaches:
For LVM storage, which is the most common setup, follow this specific order to avoid data corruption:
# First, check filesystem for errors
e2fsck -fy /dev/pve/vm-300-disk-0
# Shrink the filesystem FIRST
resize2fs /dev/pve/vm-300-disk-0 20G
# Then reduce the logical volume
lvreduce -L 20G /dev/pve/vm-300-disk-0
# Update container configuration
nano /etc/pve/lxc/300.conf
# Change: rootfs: local-lvm:vm-300-disk-0,size=32G
# To: rootfs: local-lvm:vm-300-disk-0,size=20G
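If you want an extra safety margin, a common variant (not shown above) is to shrink the filesystem slightly below the target, reduce the volume, and then let the filesystem grow back to fill it exactly:
# Shrink the filesystem below the 20G target to leave a buffer
resize2fs /dev/pve/vm-300-disk-0 19G
# Reduce the logical volume to the actual target
lvreduce -L 20G /dev/pve/vm-300-disk-0
# Grow the filesystem back to fill the 20G volume exactly
resize2fs /dev/pve/vm-300-disk-0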
For directory storage with raw images, use the loop device method:
losetup -f /var/lib/vz/images/300/vm-300-disk-0.raw
losetup -a # Note which loop device was assigned
# Check and shrink the filesystem (it must stay unmounted, since ext4 cannot be shrunk while mounted)
e2fsck -fy /dev/loop0
resize2fs /dev/loop0 20G
losetup -d /dev/loop0
# Reduce the image file size (--shrink is required when reducing)
qemu-img resize --shrink /var/lib/vz/images/300/vm-300-disk-0.raw 20G
# Update the size= value in /etc/pve/lxc/300.conf, as in the LVM case
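To confirm the image actually shrank before starting the container, inspect it with qemu-img:
qemu-img info /var/lib/vz/images/300/vm-300-disk-0.raw
# The "virtual size" line should now report 20 GiB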
Step 4: Start and Verify
Start the container and check the new size:
pct start 300
pct enter 300
df -h
The filesystem should now show the reduced total capacity.
Increase VM Disk Space
Expanding virtual machine disk space involves resizing both the virtual disk and the filesystem inside the VM. This process works for most Linux and Windows VMs but requires more steps than container expansion since you need to work inside the guest operating system.
Step 1: Check VM Configuration
Identify the VM and examine its current disk configuration:
qm config 301
This shows output similar to:
boot: order=scsi0;ide2;net0
cores: 1
cpu: x86-64-v2-AES
memory: 2048
scsi0: local:301/vm-301-disk-0.qcow2,iothread=1,size=32G
Note the disk identifier (scsi0) and current size (32G).
Step 2: Resize Virtual Disk
Increase the virtual disk size using the qm resize command:
qm resize 301 scsi0 +10G
This adds 10GB to the existing disk. You can also specify an absolute size like 50G instead of +10G.
Step 3: Start VM and Expand Filesystem
Start the virtual machine and expand the filesystem inside the guest operating system:
qm start 301
Log into the VM through the console or SSH to expand the filesystem. For Linux VMs with ext4 filesystems:
# Check current partition table
fdisk /dev/sda
# Press 'p' to print partitions, note the layout
# Press 'q' to quit without changes
# Expand the last partition
growpart /dev/sda 1
# Resize the filesystem
resize2fs /dev/sda1
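If the VM uses XFS rather than ext4 (the default on some distributions), resize2fs does not apply; XFS is grown online through its mount point instead:
# XFS can only be grown, never shrunk, and is resized while mounted
xfs_growfs /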
For VMs with LVM, grow the underlying partition first with growpart as shown above, then:
# Extend physical volume
pvresize /dev/sda1
# Extend logical volume (adjust path as needed)
lvextend -l +100%FREE /dev/mapper/vg-root
# Resize filesystem
resize2fs /dev/mapper/vg-root
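If the guest still reports the old disk size after qm resize, the kernel may not have noticed the change. A sketch for forcing a rescan, assuming the disk is sda on a SCSI controller:
# Ask the kernel to re-read the disk capacity without rebooting
echo 1 > /sys/class/block/sda/device/rescan
# Confirm the new size is visible
lsblk /dev/sda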
Verify the changes:
df -h
Reduce VM Disk Space
Reducing VM disk space involves shrinking both the filesystem inside the VM and the virtual disk file. This process requires extreme caution since you are working with live data and multiple layers of storage. Any mistake during this process can result in complete data loss, so triple-check your backups before starting.
Step 1: Prepare VM Storage
Log into the VM and clean up unnecessary files:
ssh root@[VM-IP]
df -h
apt clean
apt autoremove
# Remove unnecessary files, logs
exit
Step 2: Shrink VM Filesystem
For ext4 filesystems, shrink the filesystem before reducing the partition. A mounted root filesystem cannot be shrunk online, so boot the VM from a live or rescue ISO first (or attach the disk to another VM). The steps must be done in this order to prevent data corruption:
# Force filesystem check first
e2fsck -f /dev/sda1
# Shrink filesystem to desired size
resize2fs /dev/sda1 20G
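Before recreating the partition, it helps to verify the filesystem's exact new size so the partition is not made too small. One way to read it, assuming ext4 on /dev/sda1:
# Block count x block size = current filesystem size in bytes
dumpe2fs -h /dev/sda1 | grep -E 'Block count|Block size'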
Use fdisk to delete and recreate the partition with a smaller size:
fdisk /dev/sda
# Press 'p' to view current partitions
# Press 'd' to delete the partition
# Press 'n' to create new partition
# Use same starting sector, but end at smaller size
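# If fdisk warns about an existing ext4 signature and offers to remove it,
# answer No, since removing the signature would destroy the filesystem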
# Press 'w' to write changes
Step 3: Reduce Virtual Disk
If using LVM storage, reduce the logical volume on the Proxmox host:
lvreduce -L 25G /dev/mapper/pve-vm--301--disk--0
Note that qm resize only supports growing a disk, so it cannot be used to shrink one. Instead, have Proxmox pick up the new size in the VM configuration (the command is qm disk rescan on newer releases):
qm rescan --vmid 301
Verify the changes inside the VM:
df -h
Troubleshooting Common Issues
From managing numerous Proxmox environments, several issues come up regularly during disk resize operations.
- Container fails to start after reduction: This happened to me once when I rushed the filesystem check. The container configuration showed the correct smaller size, but the container refused to boot. Running e2fsck -f /dev/pve/vm-300-disk-0 revealed filesystem errors that needed manual fixing. Always run the filesystem check even if it seems to complete without errors initially.
- LVM reports size mismatch after reduction: Sometimes the logical volume shows the correct size but the container still sees the old disk space. This usually means the container configuration file was not updated properly. Check /etc/pve/lxc/[container-id].conf and ensure the rootfs line reflects the new size exactly.
- Filesystem does not show increased space: The virtual disk may have expanded but the partition table or filesystem was not updated. Use growpart for partition expansion and resize2fs for filesystem expansion inside the VM.
- Network connectivity lost after VM disk operations: Some resize operations can affect network interface naming, especially when working with complex storage setups covered in setting up shared FC SAN with Proxmox. Check network configuration and restart networking services if needed.
- LVM resize fails on cluster storage: Ensure all cluster nodes have consistent LVM metadata. Use vgscan and lvscan to refresh LVM information across cluster nodes, as sketched below. The Proxmox community forums have extensive discussions about cluster-specific resize challenges.
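For the cluster case, the refresh is simply a matter of re-running the LVM scans on every node that can see the shared volume group:
# Run on each cluster node that accesses the shared storage
vgscan
lvscan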
Storage Backend Considerations
Different storage backends in Proxmox handle resize operations differently. Understanding these differences helps avoid problems; a quick way to confirm which backend you are using is sketched after the list below.
- LVM-Thin storage allows online expansion for containers but VMs still require shutdown. The thin provisioning means storage space is only allocated as needed, making expansion more efficient.
- ZFS storage provides built-in compression and deduplication that affects actual disk usage calculations. ZFS datasets can be expanded online, but filesystem expansion inside VMs still requires the standard process.
- Ceph storage in clustered environments distributes data across multiple nodes. Resize operations work the same way, but network performance affects operation speed.
- Directory storage on NFS or CIFS shares may have different performance characteristics during resize operations. Network latency can significantly impact resize speed for large disks.
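A quick check using the standard Proxmox tools:
# List configured storages and their types (lvmthin, zfspool, dir, ...)
pvesm status
# See which storage a given container or VM disk lives on
pct config 300 | grep rootfs
qm config 301 | grep scsi0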
Best Practices and Recommendations
Managing Proxmox storage effectively requires following practices that prevent problems and improve efficiency.
Monitor disk usage proactively rather than waiting for full disks. I set up monitoring alerts at 80% usage to plan expansion before running out of space. This prevents those emergency 3 AM resize operations when production services start failing due to disk space issues.
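As a rough illustration of that threshold check, a small script along these lines could run from cron; this is a hypothetical sketch, not a substitute for a real monitoring stack:
#!/bin/bash
# Hypothetical example: warn for any filesystem at or above 80% usage
THRESHOLD=80
df -h --output=pcent,target | tail -n +2 | while read pcent target; do
  usage=${pcent%\%}
  if [ "$usage" -ge "$THRESHOLD" ]; then
    echo "WARNING: $target is at $pcent usage"
  fi
done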
Keep spare storage capacity available for emergency expansion. After running into situations where urgent expansions failed due to insufficient underlying storage, I now plan storage pools with at least 20% free space to handle unexpected growth.
Test your backup and restore procedures regularly, not just the backup creation. I schedule monthly restore tests to ensure the entire backup chain actually works when needed. This has saved me multiple times when dealing with resize operations that went wrong.
For a visual walkthrough of the Proxmox disk resize process, I created this step-by-step video tutorial that covers both container and VM expansion:
The video demonstrates the actual commands and shows the Proxmox interface during resize operations, making it easier to follow along with the written steps above.
Final Thoughts
Resizing disk space in Proxmox gets easier once you understand the underlying processes and plan operations carefully. The key difference between containers and VMs is that containers typically handle filesystem expansion automatically, while VMs require manual intervention inside the guest system.
Most Proxmox disk resize operations complete successfully when you follow proper procedures and maintain good backups. The process has become more reliable with recent Proxmox versions, especially when working with modern storage backends like ZFS and LVM-thin.
Regular monitoring and proactive management prevent emergency resize situations. Planning disk space requirements during initial VM and container creation saves time and reduces risk later.
If you have experienced specific resize challenges or have questions about storage configurations that behaved differently than expected, share your experience in the comments. Every Proxmox environment has unique characteristics, and community knowledge helps solve problems that official documentation might not cover.