
# ZFS

To set up ZFS on the spinning disks, I followed steps derived from these sources (but without encryption):

```shell
# zpool status
# DISK=<disk from /dev/disk/by-id>
# POOL=<a name for the pool>
# zpool create "${POOL}" "${DISK}"
# zfs create -o compression=on -o mountpoint=legacy "${POOL}/main"
```

Then add the ZFS filesystem to `hardware-configuration.nix` (use the dataset name passed to `zfs create`, e.g. `${POOL}/main`, as the device, not a `/dev` path) and reboot.
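A minimal sketch of what that entry can look like; the pool name `tank`, dataset `main`, and mount point `/storage` are assumptions, and `networking.hostId` is required by the NixOS ZFS module:

```nix
{
  fileSystems."/storage" = {
    device = "tank/main";  # the ZFS dataset name, not a /dev path
    fsType = "zfs";
  };

  # ZFS refuses to import pools without a stable host id;
  # any unique 8-hex-digit value works.
  networking.hostId = "deadbeef";
}
```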

## Adding a new disk to the pool

Based on https://unix.stackexchange.com/questions/530968/adding-disks-to-zfs-pool, it should be straightforward as long as what you're trying to do is simply extend a pool's storage by adding a new vdev:

```shell
# fdisk -l
# ls -l /dev/disk/by-id
# DISK=<the new disk, based on which by-id entry points at the right /dev/sdX>
# fdisk "${DISK}"
# zpool status
# POOL=<the pool name>
# zpool add "${POOL}" "${DISK}"
```
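Worth noting: `zpool add` takes a `-n` flag for a dry run, which prints the resulting pool layout without changing anything. Since an accidentally-added vdev is a pain to undo, previewing first is cheap insurance:

```shell
# zpool add -n "${POOL}" "${DISK}"   # dry run: only print the would-be layout
# zpool add "${POOL}" "${DISK}"      # the real thing
# zpool list -v "${POOL}"            # confirm the new vdev and the extra capacity
```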

Since we've chosen single disks (no redundancy) for the storage pool, it's going to be annoying if we ever decide to move to something redundant like raidz. We'd have to buy quite a few disks and copy over all the data 😭. That is a good prompt to cull un-valued data!

Someday, it might be possible to grow a raidz vdev by adding disks, instead of by replacing disks: https://github.com/openzfs/zfs/pull/12225

# NFS

Don't forget to `chown` the mounted filesystem so that non-root users can read/write there.
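For example (the user, group, and mount point here are hypothetical):

```shell
# chown -R me:users /storage
# chmod -R u+rwX,g+rwX /storage   # capital X: execute bit on directories only
```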

If your NFS clients are able to mount the store but can't `ls` it or see any contents, make sure that the store is still, in fact, readable to non-owners (and that directories have the execute bit, or traversal will fail)!
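For reference, a minimal NixOS NFS-server sketch; the export path and subnet are assumptions:

```nix
{
  services.nfs.server = {
    enable = true;
    # Export the storage mount read/write to the local subnet:
    exports = ''
      /storage 192.168.1.0/24(rw,no_subtree_check)
    '';
  };
}
```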