A Weird Imagination

Installing an OS

The problem#

You know how to make a bootable flash drive, but you want to actually use it to install a permanent operating system (OS) onto your computer.

The solution#

Luckily, modern OS installs are very straightforward: download the installation image (e.g. Debian Linux or Windows 11), put it on a flash drive, boot off it, click "Next" a few times, and wait a bit. The defaults will usually erase all data on the computer, but if it's a new computer, there's nothing to erase.
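
For the "put it on a flash drive" step, the exact command depends on your setup, but on Linux a minimal sketch looks like the following (debian.iso and /dev/sdX are placeholder names; double-check the device, since this overwrites the flash drive):

# WARNING: verify the device name first; this erases the flash drive
sudo dd if=debian.iso of=/dev/sdX bs=4M status=progress conv=fsync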

Of course, you can make things more complicated if you don't like the defaults, have a special situation, or just want to know more precisely what's going on.

The details#

Read more…

Copy on save

The problem#

I was running a Factorio multiplayer server and was being paranoid about making sure I didn't lose any save data. But I also didn't want to put the saves directory on my ZFS file system as it's on a hard drive, not an SSD, and saves taking too long can cause lag for the players (although with non-blocking saving this is much less of an issue).

The solution#

The following script watches the saves/ directory for any new files being written, immediately copies them to the ZFS dataset tank/factorio mounted at /tank/factorio/, and creates a snapshot named with the current date and time. The result is a snapshot of the save data corresponding to every time the game saved.

#!/bin/sh
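# requires inotify-tools (for inotifywait) and the ZFS dataset tank/factorio mounted at /tank/factorio/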
while true
do
  inotifywait -r saves/ -e close_write
  sleep 0.1s  # write is to *.tmp.zip, wait for rename
  rsync -avhx saves/ /tank/factorio/
  now="$(date +%Y-%m-%d_%H-%M-%S)"
  zfs snapshot tank/factorio@save-"$now"
done
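
Since every snapshot's contents stay browsable read-only under the .zfs directory, recovering an old save is just a copy. A sketch (the snapshot timestamp and save file name here are made-up examples):

# list the snapshots created by the script above
zfs list -t snapshot tank/factorio
# copy a save back out of a snapshot into the live saves/ directory
cp /tank/factorio/.zfs/snapshot/save-2024-01-01_00-00-00/mygame.zip saves/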

The details#

Read more…

Experimenting with ZFS

The problem#

For my recent posts on ZFS, I wanted to quickly try out a bunch of variants of my proposed operations without worrying about accidentally modifying my real ZFS filesystems. Specifically, I wanted to know which ways of copying files would result in more efficiently reusing blocks from existing snapshots where possible.

The solution#

WARNING: The instructions below will modify the ZFS pool tank, which is the default name used in many ZFS examples, and therefore may be a real ZFS pool on your computer.

I strongly recommend doing all of this inside a VM to be sure you are not affecting any real filesystems. I used a VirtualBox VM that I installed Debian on and used the guest additions to share a directory between the VM and my actual machine.

First create a 1 GiB virtual ZFS pool (i.e. backed by a file instead of a physical device) to run tests on:

fallocate -l 1G /root/tank
zpool create tank /root/tank
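
Since the pool is backed by a plain file, it's cheap to throw away and recreate between experiments. A sketch of resetting it (safe only because tank here is the file-backed test pool inside the VM):

zpool destroy tank
rm /root/tank
fallocate -l 1G /root/tank
zpool create tank /root/tank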

Then perform various filesystem operations and inspect the result of zfs list -o space to determine whether they use more (or less) space than you expect. To be consistent and to make it easier to test multiple variations, I wrote some scripts:

git clone https://git.aweirdimagination.net/perelman/zfs-test.git
cd zfs-test/bin
# run each create-*/copy-* combination, measure it, and dump logs into ../logs/
./measure-all
# read ../logs/ and print space used as Markdown table
./logs-to-table --links
| Create script | orig | rsync-ahvx | rsync-ahvx-sparse | rsync-inplace | rsync-inplace-no-whole-file | rsync-no-whole-file | zfs-diff-move-then-rsync |
|---|---|---|---|---|---|---|---|
| empty | 24K | 24K✅ | 24K✅ | 24K✅ | 24K✅ | 24K✅ | 24K✅ |
| random-1M-file | 1.03M | 1.03M✅ | 1.03M✅ | 1.03M✅ | 1.03M✅ | 1.03M✅ | 1.03M✅ |
| zeros-1M-file | 24K | 1.03M❌ | 24K✅ | 1.03M❌ | 1.03M❌ | 1.03M❌ | 1.03M❌ |
| move-file | 1.04M | 2.04M❌ | 2.04M❌ | 2.04M❌ | 2.04M❌ | 2.04M❌ | 1.04M✅ |
| edit-part-of-file | 1.16M | 2.04M❌ | 2.04M❌ | 2.04M❌ | 1.17M✅ | 2.04M❌ | 1.17M✅ |

The details#

Read more…

Splitting ZFS datasets

The problem#

ZFS datasets are a powerful way to organize your filesystems. At first glance, datasets look a lot like filesystems, so you may default to just one or at most a handful per pool. But unlike traditional filesystems, where you have to decide how much disk space each one gets when it's created, ZFS datasets share the space available to the entire pool. Since datasets are the granularity at which ZFS operations like snapshots and zfs send/recv work, having more of them gives you finer control, such as applying different backup policies to different subsets of your data. And ZFS scales just fine to hundreds or thousands of datasets, so you don't really have to worry about creating too many.

But if you're me (well, not just me) and you realize this after you already have months of snapshots of a few terabytes of data, how do you reorganize your ZFS pool into more datasets without either losing the snapshot history or ending up wasting a lot of disk space on redundant copies of data?

The solution#

Before doing anything with real data, make backups and confirm you can restore from them.

I do not have a one-size-fits-all solution here; instead, I'll outline the general process. Review your work at each step to make sure things look correct, and be ready to zfs rollback and retry if you make a mistake or notice a more space-efficient way you could have done something.

  1. Create the new dataset hierarchy. I'll refer to the old dataset as tank/old and the new dataset root as tank/new. (A command sketch of the first few steps follows this list.)
  2. Do an initial copy of the earliest snapshot you want to keep from the .zfs directory. If it's @first, then the copy command will be rsync -avhxPHS /tank/old/.zfs/snapshot/first/ /tank/new/.
  3. Check your work and possibly delete or dedup files.
  4. zfs snapshot -r tank/new@first
  5. Do an incremental copy of the next snapshot. If it's @second, this may be as simple as rsync -avhxPHS@-1 --delete /tank/old/.zfs/snapshot/second/ /tank/new/, but that will waste space if you have moved files or modified small sections of large files.
  6. Check your work, and make any necessary changes.
  7. zfs snapshot -r tank/new@second
  8. Repeat steps 5-7 for each snapshot you want to keep.
  9. zfs rename tank/old tank/legacy && zfs rename tank/new tank/old
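
As a concrete sketch of steps 1-4, assuming the data splits into two hypothetical child datasets named photos and music (your hierarchy will differ):

# step 1: create the new dataset hierarchy
zfs create tank/new
zfs create tank/new/photos
zfs create tank/new/music
# step 2: initial copy of the earliest snapshot
rsync -avhxPHS /tank/old/.zfs/snapshot/first/ /tank/new/
# step 3: check the copy, delete or dedup files as needed
# step 4: snapshot the entire new hierarchy at once
zfs snapshot -r tank/new@first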

The details#

Read more…

Status of long-running copy

The problem#

When running an incremental backup with rsync with the --progress flag, it often spends a lot of time outputting nothing as it scans through many unchanged files. If you think of it before starting the transfer, --info=progress2 or the name2/skip2 --info flags would give more detail, but once the transfer has been going for a while, you probably don't want to cancel and restart it just to add those flags.
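
For reference, starting the transfer with overall progress reporting looks something like this (src/ and dst/ are placeholder paths):

rsync -avh --info=progress2 src/ dst/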

The solution#

The documentation and this StackExchange answer say you can send a SIGVTALRM signal to rsync version 3.2.0+ and it will output its current progress, but that wasn't working for me.
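
In case it works for you, sending that signal looks something like this (only meaningful with rsync 3.2.0 or newer):

# unquoted so that each rsync pid gets the signal if there are several
kill -s VTALRM $(pidof rsync)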

As a workaround, you can use strace to get a running log of which files rsync is looking at, which includes files it skips without actually opening:

strace --attach="$(pidof rsync)" --trace=openat

(If that's not showing anything, try removing the --trace=openat filter and seeing if there's other syscalls with paths to filter on.)

Alternatively, this StackExchange answer suggests a way to see the files rsync currently has open, along with their sizes (this includes directories, but not unchanged files that are merely being inspected):

watch lsof -p"$(pidof rsync | tr ' ' ',')"

(The same should work for a recursive cp/mv/rm.)

Similarly, for getting the status of a transfer of a single large file, this answer attempts to read the files cp is reading/writing to give a running percentage of how much it has copied; a similar approach might work for rsync.
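
That answer's script isn't reproduced here, but the general idea is to compare the kernel-reported offset in /proc/PID/fdinfo against the size of each file the process has open. A rough sketch (assumes Linux and a single cp process; swap in rsync to try it there):

#!/bin/sh
pid="$(pidof -s cp)"  # or: pidof -s rsync
for fd in /proc/"$pid"/fd/*
do
  file="$(readlink "$fd")"
  [ -f "$file" ] || continue  # skip pipes, sockets, and directories
  pos="$(awk '/^pos:/ {print $2}' /proc/"$pid"/fdinfo/"${fd##*/}")"
  size="$(stat -c %s "$file")"
  [ "$size" -gt 0 ] && printf '%s: %d%%\n' "$file" "$((pos * 100 / size))"
done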

The details#

Read more…