I recently decided to rebuild my homelab after a nasty double hard drive failure (no important files were lost, thanks to ddrescue). The new setup uses one SSD as the PVE root drive, and two IronWolf HDDs in a RAID 1 MD array (which I’ll probably grow to RAID 5 in the near future).

Previously the storage array had a simple ext4 filesystem mounted at /mnt/storage, which was then bind-mounted into the LXC containers running my services. It worked well enough, but figuring out permissions between the host, the container, and potentially nested containers was a bit of a challenge. Now that I’m starting fresh with brand-new hard drives, I want to get the first steps right.

The host is an old PC living a new life: i3-4160 with 8 GB DDR3 non-ECC memory.

  • Option 1 would be to do what I did before: format the array as ext4, mount it on the host, and bind-mount it into the containers (a rough sketch is just below this list). I don’t use VMs much because the system is memory-constrained, but if I did, I’d probably have to use NFS or something similar to give the VMs access to the disk.

  • Option 2 is to create an LVM volume group on the RAID array, then use Proxmox to manage LVs. This would be my preferred option from an administration perspective, since privileges would become a non-issue and I could hand LVs directly to VMs as virtual disks, but I have some concerns:

    • If the host were to break irrecoverably, is it possible to open LVs created by Proxmox on a different system? If I need to back up some LVM config files to make that happen, which files are those? I’ve tried following several guides to mount the LVs, but have never been successful (the procedure I’ve been attempting is sketched just below this list).
    • I’m planning to put things on the server that will grow over time, like game installers, media files, and Git LFS storage. Is it better to use thin pools, or should I just allocate appropriately huge LVs to those services? (A thin-pool example is included in the LVM sketch below.)
  • Option 3 is to forget mdadm and use Proxmox’s ZFS support to set up redundancy. My main concern here, on top of everything in option 2, is that ZFS needs a lot of memory for caching. Right now I can dedicate 4 GB to it, which is less than the recommendation – is it reasonable to run a ZFS pool with that? (A sketch of capping the ARC is at the end of this post.)
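
For reference, Option 1 end to end would look roughly like this on the new disks; the device names and the container ID are placeholders:

    # Create the RAID 1 array and put ext4 on it (hypothetical /dev/sdX names)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
    mkfs.ext4 /dev/md0
    mkdir -p /mnt/storage
    mount /dev/md0 /mnt/storage          # plus a matching /etc/fstab entry for boot
    # Bind-mount the path into an LXC container (CT 101 is a made-up ID)
    pct set 101 -mp0 /mnt/storage,mp=/mnt/storage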

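On the Option 2 questions, this is roughly the procedure I’ve been attempting for opening the volume group on another machine, plus what creating a thin pool would look like (the VG and LV names are placeholders). As far as I can tell, LVM keeps text copies of the VG metadata in /etc/lvm/backup and /etc/lvm/archive, but the same metadata also lives on the PVs themselves, so activating the VG elsewhere shouldn’t strictly need any config files from the old host:

    # Reassemble the MD array and activate the volume group on another system
    mdadm --assemble --scan
    vgscan                                  # should find the VG from the PV metadata
    vgchange -ay pve-data                   # "pve-data" is a placeholder VG name
    lvs                                     # list the LVs Proxmox created
    mount /dev/pve-data/vm-101-disk-0 /mnt/recovery
    # (a VM disk image usually has a partition table inside the LV, so it may
    #  need kpartx -av /dev/pve-data/vm-101-disk-0 first; container volumes
    #  hold a filesystem directly and mount as above)

    # Thin pool vs. big fixed LVs: a thin pool lets volumes overcommit and grow on demand
    lvcreate --type thin-pool -L 1T -n thinpool pve-data
    lvcreate -V 500G --thinpool pve-data/thinpool -n media
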
My primary objective is data resilience. Obviously nothing can replace a good backup solution, but that’s not something I can afford at the moment. I want to be able to reassemble and mount the array on a different system if the server falls to pieces. Option 1 seems the most conducive to that (I’ve had to do it once), but if LVM on RAID or ZFS can offer the same resilience without any major drawbacks (like difficulty mounting LVs or other issues I might run into), I’m open to them. I’d like to know what others use or recommend.
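
For completeness, if I do end up on Option 3 my plan would be to cap the ARC with the standard OpenZFS tunable so it can’t take more than the ~4 GB I can spare (the value is in bytes; 4 GiB here as an example):

    # Limit the ZFS ARC to 4 GiB (4 * 1024^3 bytes)
    echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
    update-initramfs -u -k all              # so the limit applies at boot
    # or apply it immediately without rebooting:
    echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max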

  • toebert@piefed.social

    I don’t use it so I can’t recommend it, but if you’re interested in other options to research there’s a mergerfs+snapraid combo.

    I currently pass my disks through to an Unraid VM and then mount them over NFS, which works (but from the sounds of it that’s probably not a valid option for you, nor would I recommend it), but I want to try replacing it with mergerfs at some point.

    The thing that has mainly turned me off of ZFS is (from what I understand) that you kinda need to plan how you’re going to expand when you set it up, which really doesn’t work for me with a random collection of disks of varying sizes.

    Another note for option 1: since Proxmox 8.4 there is virtiofs, which lets you mount a host folder into a VM without going through NFS. You may have to mess with SELinux in the VM depending on what you do in there, but just FYI, it’s a thing.
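
    Inside the guest a virtiofs share just shows up as a filesystem you mount by its tag, no network stack involved (the tag name below is made up; the host side is configured through the VM’s directory mapping):

        # Mount a virtiofs share inside the VM by its tag
        mount -t virtiofs my-share /mnt/storage
        # or persistently via /etc/fstab:
        # my-share  /mnt/storage  virtiofs  defaults  0  0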