A Weird Imagination

Status of long-running copy

The problem#

When running an incremental backup with rsync with the --progress flag, it often spends a lot of time outputting nothing as it scans through many unchanged files. If you think of it before starting the transfer, --info=progress2 or the name2/skip2 --info flags would give more detail, but once the transfer has been going for a while, you probably don't want to cancel and restart it just to add those flags.
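For example, starting a fresh transfer with overall progress information might look like the following (the paths are placeholders):

rsync --archive --info=progress2 src/ host:/dest/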

The solution#

The documentation and this StackExchange answer say you can send a SIGVTALRM signal to rsync version 3.2.0+ and it will output its current progress, but that wasn't working for me.
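Per those sources, sending the signal looks like this (the command substitution is left unquoted so it signals every running rsync process if pidof returns several PIDs):

kill -VTALRM $(pidof rsync)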

As a workaround, you can use strace to get a running log of which files rsync is looking at, which includes files it skips without actually opening:

strace --attach="$(pidof rsync)" --trace=openat

(If that's not showing anything, try removing the --trace=openat filter and see if there are other syscalls with paths to filter on.)
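Alternatively, strace has a %file class that matches every syscall taking a filename, which avoids guessing at syscall names entirely:

strace --attach="$(pidof rsync)" --trace=%file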

Alternatively, this StackExchange answer suggests a way to see the currently open files and their sizes (this shows directories, but not unchanged files that are merely being inspected):

watch lsof -p"$(pidof rsync | tr ' ' ',')"

(The same should work for a recursive cp/mv/rm.)

Similarly, for getting the status of a transfer of a single large file, this answer attempts to read the files cp is reading/writing to give a running percentage of how much it has copied; a similar approach might work for rsync.
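As a rough sketch of that approach for cp (this assumes the source file is open on file descriptor 3; check ls -l /proc/PID/fd to find the right descriptor, and note you may need to be the same user as the cp process or root):

pid="$(pidof cp)"
fd=3  # descriptor of the file being read; verify with: ls -l "/proc/$pid/fd"
pos="$(awk '/^pos:/ {print $2}' "/proc/$pid/fdinfo/$fd")"    # current read offset
size="$(stat --dereference --format=%s "/proc/$pid/fd/$fd")" # total file size
echo "$((100 * pos / size))% copied"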

The details#

Read more…

Limit processor usage of multiple processes


The problem#

In last week's post, I discussed using cpulimit on multiple processes in the special case of web browsers, but I wanted a more general solution.

The solution#

cpulimit-all.sh is a wrapper around cpulimit that calls it repeatedly to cover multiple processes of the same name, as well as their subprocesses.

Using that script, the following is the equivalent of the script from last week to limit all browser processes to 10% CPU:

cpulimit-all.sh --limit=10 --max-depth=1 \
    -e firefox -e firefox-esr -e chromium -e chrome

We can also add a couple of options to include any grandchild processes and to check for new processes to limit every minute:

cpulimit-all.sh --limit=10 --max-depth=2 \
    -e firefox -e firefox-esr -e chromium -e chrome \
    --watch-interval=1m

The details#

Read more…

Limit web browser processor usage


The problem#

cpulimit is a useful utility for stopping a program from wasting CPU, but it only limits a single process. As all modern web browsers use process isolation, limiting just a single process doesn't accomplish much; we actually want to limit all of the browser processes.
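For example, counting a running browser's processes shows how many would need limiting (process names vary by distribution):

pgrep --count firefox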

The solution#

The following script will limit the CPU usage of every browser process to $LIMIT percent. Note that the limit is per process, not a total across all processes, so you may want to set it quite low to actually have an effect.

LIMIT=10 # Hard-code a limit of 10% CPU as an example.

# Kill child processes (stop limiting CPU) on script exit.
cleanup() {
  pkill -P $$
}
for sig in INT QUIT HUP TERM; do
  trap "
    cleanup
    trap - $sig EXIT
    kill -s $sig "'"$$"' "$sig"
done
trap cleanup EXIT

# Find and limit all child processes of all browsers.
for name in firefox firefox-esr chromium chrome
do
    for ppid in $(pgrep "$name")
    do
        for pid in "$ppid" $(pgrep --parent "$ppid")
        do
            cpulimit --pid="$pid" --limit="$LIMIT" &
        done
    done
done

The details#

Read more…

Listing files into a file


The problem#

$ ls > file

doesn't do what you expect:

$ touch foo
$ touch bar
$ ls > filelist
$ cat filelist
bar
filelist
foo

You probably didn't expect, or want, filelist to be listed in filelist. It shows up because the shell creates the redirection target before running ls, so the empty filelist already exists by the time ls reads the directory.

The solution#

$ filelist=$(ls); echo "$filelist" >filelist
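This works because the command substitution finishes (and ls exits) before the shell opens filelist for writing. If you have GNU ls, another option not mentioned above is to exclude the output file from the listing by name:

$ ls --ignore=filelist > filelist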

Read more…