Finally managed to get my hands on 2x 1TB NVMes. Budgets are tight these days … :-) They are Crucial P310 … hope they are reliable, though I suspect they're nowhere near Samsung's stuff.
I have a little Proxmox installation running a VM on a 256GB NVMe, which as you can imagine is tight. Is there a way of cloning this install onto one of the new NVMes?
The reason I have two new NVMes is that I want to eventually get myself to Proxmox HA, so that the two machines (two little Optiplex 5070s, one of which has the 256GB install) provide me with redundancy.
First thing is to clone the 256GB install to the larger NVMe. Would it be an idea to go this way:
a) install a new 1TB NVMe in the spare Optiplex
b) install Proxmox on this new machine
c) find a way to replicate the whole 256GB install onto the second machine (need to read the docs to see if/how this can happen)
d) once the second machine is up and running as a clone, take down the current 256GB machine and install the other 1TB NVMe in it
e) repeat the same process the other way around
Do you think this will work or am I going to hit a wall? Is there a simpler way of doing this?
I recently had to increase my Proxmox storage as well, from an old 256GB to 1TB. What I did was make a copy of /etc via PVE Host Backup and save that to my NAS/external storage. Almost everything is in /etc/pve. Then I created backups of all the VMs and stored those on the same external storage. I then installed Proxmox as normal, compared configs between the backup and the new install, and restored the VMs from backup. I did it this way because: 1) I had installed Proxmox a while ago, and new config > old config for stability after adding some necessary PVE scripts (e.g. for the Intel chip), and 2) I've had weird issues before when cloning drives, and a fresh install was easier than risking some weird edge-case troubleshooting. It also let me keep the old SSD as a backup in case something went wrong.
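Roughly, that backup/restore flow looks like the sketch below. This assumes a backup storage named `nas` is already configured in Proxmox and a guest with VMID 100; all names and paths here are placeholders, so adjust to your setup:

```shell
# On the old host: save the host config (almost everything lives in /etc/pve)
tar czf /mnt/nas/pve-etc-backup.tar.gz /etc

# Back up each guest to the external storage (snapshot mode avoids downtime)
vzdump 100 --storage nas --mode snapshot --compress zstd

# On the fresh install: diff the saved /etc against the new one by hand,
# then restore each VM (the exact dump filename depends on the timestamp)
qmrestore /mnt/nas/dump/vzdump-qemu-100-....vma.zst 100 --storage local-zfs
```

Restoring with `qmrestore` recreates the VM config from the dump, so you mostly only need to hand-merge host-level things (network, storage definitions, any custom scripts) from the /etc backup.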
Edit: Also recommend going with a ZFS mirror on the new install (during setup: Target Disk options, then ZFS RAID1). ZFS offers some benefits vs the default LVM.
I like @pgo_lemmy’s answer best, but instead of rebuilding the original system, (assuming you did the default ZFS installation) you can add the bigger device as part of a mirror, let it resilver, install the boot loader, and then detach the smaller device from the mirror. It should automatically grow to the bigger size once the smaller device is removed and the only downtime you’d have is from installing the bigger device. Check the PVE wiki and you should find some details on this method.
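A rough sketch of that attach/resilver/detach method, assuming the default PVE ZFS layout (pool `rpool` on partition 3, ESP on partition 2, UEFI boot); the `/dev/disk/by-id/...` paths are placeholders you'd fill in from your own system:

```shell
# Copy the partition table from the old disk to the new one, then give the
# new disk fresh partition GUIDs
sgdisk /dev/disk/by-id/OLD-256GB -R /dev/disk/by-id/NEW-1TB
sgdisk -G /dev/disk/by-id/NEW-1TB

# Attach the new disk's ZFS partition as a mirror of the existing one
zpool attach rpool /dev/disk/by-id/OLD-256GB-part3 /dev/disk/by-id/NEW-1TB-part3

# Wait for the resilver to finish before going further
zpool status rpool

# Make the new disk bootable
proxmox-boot-tool format /dev/disk/by-id/NEW-1TB-part2
proxmox-boot-tool init /dev/disk/by-id/NEW-1TB-part2

# Once resilvered, drop the small disk and let the pool grow to full size
zpool detach rpool /dev/disk/by-id/OLD-256GB-part3
zpool set autoexpand=on rpool
zpool online -e rpool /dev/disk/by-id/NEW-1TB-part3
```

Note the mirror only grows to the new disk's size after the small device is detached, and since the partition table was copied from the 256GB disk you'd also want to grow partition 3 on the new disk first; the PVE wiki's "ZFS on Linux" page covers the boot-disk replacement steps in detail.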
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
- HA: Home Assistant automation software / High Availability
- LXC: Linux Containers
- NAS: Network-Attached Storage
- NVMe: Non-Volatile Memory Express interface for mass storage
- SATA: Serial AT Attachment interface for mass storage
- SSD: Solid State Drive mass storage
- ZFS: Solaris/Linux filesystem focusing on data integrity
[Thread #299 for this comm, first seen 17th May 2026, 14:40]
I have a little Proxmox installation running a VM on a 256GB NVMe, which as you can imagine is tight
Not as tight as I was imagining a 256 MB installation to be.
I know nothing about Proxmox, but since it's quiet in here: I imagine cloning the original drive, then expanding the install to take over the whole new drive, is the easier thing to do. It's a fairly standard process and generally nondestructive, because you can just put the old drive back in if something breaks.
So, I would probably go e) unless you really want to set everything up again from scratch (which is sometimes nice to do)
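A minimal sketch of that clone-and-expand route, assuming the default LVM layout with root on partition 3; the device names are assumptions, so double-check them with `lsblk` before running anything destructive:

```shell
# Whole-disk clone from the 256GB disk (nvme0n1) to the 1TB disk (nvme1n1)
dd if=/dev/nvme0n1 of=/dev/nvme1n1 bs=4M status=progress conv=fsync

# Booted from the new disk: move the backup GPT header to the end of the
# larger disk, then grow the partition, PV, and thin pool
sgdisk -e /dev/nvme1n1
growpart /dev/nvme1n1 3
pvresize /dev/nvme1n1p3
lvextend -l +100%FREE /dev/pve/data
```

A ZFS install would skip the LVM steps and use `zpool online -e` on the grown partition instead.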
Add a second node using the new drive, move all VMs to the new node, decommission the old node, then rebuild the old node with the other new drive.
You can get away with a disk clone but in my opinion a vm move is the proper way to go.
Adding a new node, you start with a clean install: any quirk you have on the old hardware will finally be washed away (or will bite you back and get properly documented), you have a quick way back should anything go sideways (the clone also provides a quick way back, but I like this way much more ^^), and you get some hands-on multi-node experience that will be useful for the HA setup.
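The join-and-migrate flow can be sketched like this; node names, VMIDs, and the IP are placeholders, and `--online` assumes shared or replicated storage (otherwise offline migration with local disks):

```shell
# On the new node, after a clean install: join the existing cluster
pvecm add <ip-of-old-node>

# Move guests over (VM 100 and container 200 as examples)
qm migrate 100 newnode --online --with-local-disks
pct migrate 200 newnode --restart

# Once the old node is empty, remove it from the cluster
# (run this from a node that stays)
pvecm delnode oldnode
```

Removing a node with `pvecm delnode` is one-way: the old machine should be wiped and reinstalled before rejoining under any name.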
Ok, but I assume this means that I have to configure the new node from scratch, adding the storage, etc. Correct? So the steps would be:
a) build the new node with the spare Optiplex + one new NVMe and install Proxmox from scratch
b) configure the new node and add it to the cluster
c) migrate the VM from the old node to the new node
d) decommission the old node, install the 2nd NVMe, and install Proxmox from scratch
e) add the rebuilt node back to the cluster
Did I get this right?
Agreed. It helps that with Proxmox the cluster is a first-class feature and every install is a cluster, even if it's only a single node. That really removes a lot of potential pain points from operations like this.
That depends on what level of HA you want to end up with.
If you want proper HA, you’ll want to plan on adding a (small, like a Raspberry Pi) third node for quorum. If you are already taking backups and you just want “I can restore on the second system” then it’s slightly simpler, but mostly the same process:
- Setup new node, add to cluster
- Migrate all VMs and LXCs to new node
- Remove and upgrade other node
- Add rebuilt node to cluster
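For the quorum piece, a small third device can join as a QDevice rather than a full node. A rough sketch, assuming a Debian-based Pi reachable from both nodes (the IP is a placeholder):

```shell
# On the Pi: install the external vote daemon
apt install corosync-qnetd

# On one of the two PVE nodes: install the client side and register the qdevice
apt install corosync-qdevice
pvecm qdevice setup <pi-ip-address>

# Verify: the cluster should now report 3 expected votes
pvecm status
```

With three votes, either Optiplex can fail (or be rebooted for updates) and the survivor still has quorum, which is what makes automatic HA failover safe.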
If you’re planning on proper HA, I’d strongly advise putting the Proxmox installation on a second small drive in each node and leaving your 1TB drives as data only.
This article half-explains one option for a two node setup (zfs replication), which is functional but not ideal. If you want to get your feet wet with Ceph then I can give you some pointers.
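For reference, the ZFS replication option is driven by `pvesr`. A minimal sketch, assuming VM 100 on ZFS storage with the same pool name on both nodes (`othernode` is a placeholder):

```shell
# Create a replication job that syncs VM 100's disks to the other node
# every 15 minutes; "100-0" is the job ID (<vmid>-<number>)
pvesr create-local-job 100-0 othernode --schedule "*/15"

# Check replication state and last sync time
pvesr status
```

The caveat with replication (vs Ceph) is that a failover can lose up to one sync interval of writes, which is why it's functional but not ideal for strict HA.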
Yeah, but these little Optiplex machines only take one NVMe at a time I think. In your article it sounds like you have a tiny NVMe drive for Proxmox and an SSD (on the SATA port presumably) for the storage. Is this right? I think the Optiplex 5070 allows both NVMe and SSD at the same time, but not 100% sure.
I personally would go the other way: use your NVMe for storage and a second small drive for Proxmox, since that part doesn’t really need speed. That said, if you go with ZFS replication it doesn’t matter; just have the one NVMe drive hold both the Proxmox install and the storage pool. A separate drive only matters for Ceph.

