A Weird Imagination

Splitting ZFS datasets

The problem#

ZFS datasets are a powerful way to organize your filesystems. At first glance, datasets look a lot like filesystems, so you may default to just one or at most a handful per pool. But unlike traditional filesystems, where you have to decide how much disk space each one gets when it's created, ZFS datasets share the space available to the entire pool. And since datasets are the granularity at which ZFS operations like snapshots and zfs send/recv work, having more of them gives you finer control, such as applying different backup policies to different subsets of your data. ZFS scales just fine to hundreds or thousands of datasets, so you don't really have to worry about creating too many.
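
For example (pool and dataset names here are invented for illustration), carving a pool into a few per-purpose datasets is just a handful of commands, and they all draw on the same pool-wide free space:

zfs create tank/photos
zfs create tank/photos/raw
zfs create tank/projects
# Absent quotas or reservations, every dataset reports the same pool-wide AVAIL.
zfs list -r tank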

But if you're me (well, not just me) and you realize this after you already have months of snapshots of a few terabytes of data, how do you reorganize your ZFS pool into more datasets without either losing the snapshot history or ending up wasting a lot of disk space on redundant copies of data?

The solution#

Before doing anything with real data, make backups and confirm you can restore from them.

I do not have a one-size-fits-all solution here; instead I'll outline the general process and recommend you review your work at each step to make sure things look correct, and be ready to zfs rollback and retry if you make a mistake or notice a more space-efficient way you could have done something.

  1. Create the new dataset hierarchy. I'll refer to the old dataset as tank/old and the new dataset root as tank/new.
  2. Do an initial copy of the earliest snapshot you want to keep from the .zfs directory. If it's @first, then the copy command will be rsync -avhxPHS /tank/old/.zfs/snapshot/first/ /tank/new/.
  3. Check your work and possibly delete or dedup files.
  4. zfs snapshot -r tank/new@first
  5. Do an incremental copy of the next snapshot. If it's @second, this may be as simple as rsync -avhxPHS@-1 --delete /tank/old/.zfs/snapshot/second/ /tank/new/, but that will waste space if you have moved files or modified small sections of large files.
  6. Check your work, and make any necessary changes.
  7. zfs snapshot -r tank/new@second
  8. Repeat steps 5-7 for each snapshot you want to keep (see the sketch after this list).
  9. zfs rename tank/old tank/legacy && zfs rename tank/new tank/old
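
Putting steps 2-8 together, the loop over snapshots might be scripted roughly like the following sketch. The snapshot names @first, @second, @third and the default mountpoints /tank/old and /tank/new are placeholders, and it skips the manual review and dedup passes recommended above:

#!/bin/bash
# Sketch only: replay tank/old's snapshots onto tank/new via rsync.
old=/tank/old
new=/tank/new
rsync -avhxPHS "$old/.zfs/snapshot/first/" "$new/"
zfs snapshot -r tank/new@first
for snap in second third; do    # remaining snapshots, oldest to newest
  rsync -avhxPHS@-1 --delete "$old/.zfs/snapshot/$snap/" "$new/"
  zfs snapshot -r "tank/new@$snap"
done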

The details#

Read more…

Recreate moves from zfs diff

The problem#

When doing an incremental backup, any moved file on the source filesystem usually results in recopying the file to the destination filesystem. For a large file, this can be both slow and wasteful of space if the destination keeps deleted files around (e.g. ZFS holding on to old snapshots). If both sides are ZFS, then you can get zfs send/recv to handle all of the details efficiently. But if only the source filesystem is ZFS or the ZFS datasets are not at the same granularity on both sides, that doesn't apply.
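
For reference, when both sides are ZFS with datasets at the same granularity, an incremental transfer that handles moves efficiently is just a pair of commands (dataset names invented):

# One-time full copy of the base snapshot, then only the blocks that changed
# between @base and @new; a renamed file's data blocks are not resent.
zfs send tank/data@base | zfs recv backuppool/data
zfs send -i @base tank/data@new | zfs recv backuppool/data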

zfs diff gives the information about file moves from a snapshot, but its output format is a little awkward for scripting.
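
For instance, with -H the fields are tab-separated and a rename shows up as an R line carrying both the old and new path (output invented for illustration):

$ zfs diff -H tank/dataset@base tank/dataset@new
M	/tank/dataset/docs
R	/tank/dataset/docs/draft.txt	/tank/dataset/docs/final.txt
+	/tank/dataset/docs/notes.txt
-	/tank/dataset/docs/obsolete.txt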

The solution#

Download the script I wrote, zfs-diff-move.sh, and run it like:

zfs-diff-move.sh /path/ /tank/dataset/ tank/dataset@base @new

The following is an abbreviated version of it:

#!/bin/bash
# Usage: zfs-diff-move.sh /path/ /tank/dataset/ tank/dataset@base @new
# Applies the renames reported by zfs diff between the two snapshots to the
# copy of the data under /path/, rewriting the /tank/dataset/ prefix.
zfs diff -H "$3" "$4" | grep '^R' | while read -r line
do
  # Extract one tab-separated field from the rename line, unescape it, and
  # rewrite the old prefix ($2) to the new prefix ($1).
  get_path() {
    path="$(echo -e "$(echo "$line" | cut -d$'\t' "-f$3")")"
    echo "${path/#$2/$1}"
  }

  from="$(get_path "$1" "$2" 2)"   # field 2: path before the rename
  to="$(get_path "$1" "$2" 3)"     # field 3: path after the rename
  mkdir -vp -- "$(dirname "$to")"
  mv -vn -- "$from" "$to" || echo "Unable to move $from"
done

The details#

Read more…

Hardlink identical directory trees

The problem#

I will often make copies of important files onto multiple devices, and then later make backups of all of those devices onto the same drive. At that point, I have multiple redundant copies of those files within my backup. Tools like rdfind, fdupes, and jdupes exist to efficiently search a collection of files for duplicates, but none of them support only comparing files whose names and/or paths match, so they end up doing a lot of extra work in this case.

The solution#

Download the script I wrote, hardlink-dups-by-name.sh, and run it as follows:

hardlink-dups-by-name.sh a_backup/ another_backup/

Then all files like a_backup/some/path that are identical to the corresponding file another_backup/some/path will get hard-linked together so there will only be one copy of the data taking up space.
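
The core idea is just to walk one tree and compare each file against the same relative path in the other tree; here is a minimal sketch of that idea (not the actual script, and assuming GNU cmp and ln):

#!/bin/bash
# Sketch only: hard-link files in $2 that are identical to the
# same-named file in $1.
first="$1"; second="$2"
find "$first" -type f -print0 | while IFS= read -r -d '' f; do
  counterpart="$second${f#"$first"}"    # same relative path in the other tree
  if [ -f "$counterpart" ] && cmp -s -- "$f" "$counterpart"; then
    ln -f -- "$f" "$counterpart"        # replace the duplicate with a hard link
  fi
done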

The details#

Read more…