A Weird Imagination

Fullscreen mode on part of screen

The problem#

Many applications have a fullscreen mode that has a different interface from their windowed mode. For example, many media players will show just the video in fullscreen mode but include media controls in windowed mode. But, especially if you have a large monitor, you may want to use that interface while only having the application take up part of your monitor.

The solution#

I could not find a solution that works on every window manager.

The window manager handles resizing the application when it switches to fullscreen, so the most straightforward way to accomplish this is to not run a window manager. Problem solved! Unfortunately, window managers are really useful, so outside of some niche cases where you're positioning windows with xdotool, that's probably not what you want.
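
If you do go without a window manager, xdotool can take over the job of positioning windows. A minimal sketch (assuming xdotool is installed; the window name is an example):

# Move the first window whose name matches "mpv" to the top-left
# corner and resize it to 960x540.
xdotool search --name mpv windowmove 0 0 windowsize 960 540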

There's a "fakefullscreen" option in some forks of the very configurable window manager dwm: base dwm with the fakefullscreen patch always does fullscreen that way, the instantWM fork has a hotkey Super+Shift+F that toggles fake fullscreen for a window, and awesome can be configured to do the same.

For more common window managers, there is a solution, although with caveats: using more than two virtual monitors requires an xorg-server newer than 21.1.10 (the most recent release at the time of writing, so you would have to compile it yourself), and in my tests it only worked on Compiz, not on Mutter, KWin, or Xfwm. Use xrandr 1.5+ to define virtual monitors covering sections of your physical monitors, and then maximizing or fullscreening applications should respect those boundaries:

xrandr --setmonitor lefthalf 960/217x1080/132+0+0 LVDS-1
# This is a hack, should be "LVDS-1", not "none".
xrandr --setmonitor righthalf 960/217x1080/132+960+0 none

where the geometry specification format is w/mmwxh/mmh+x+y (mmw/h = "millimeters width/height") and LVDS-1 is the name xrandr gives to my physical monitor. Note that xorg-server 21.1.10 and older have a limit of one virtual monitor per physical monitor, which we can circumvent by putting the second virtual monitor on "none".
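
To confirm the virtual monitors took effect, or to undo them later, xrandr's other monitor subcommands work as expected:

# List physical and virtual monitors.
xrandr --listmonitors
# Remove the virtual monitors when done.
xrandr --delmonitor lefthalf
xrandr --delmonitor righthalf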

The details#

Read more…

Status of long-running copy

The problem#

When running an incremental backup with rsync with the --progress flag, it often spends a lot of time outputting nothing as it scans through many unchanged files. If you think of it before starting the transfer, --info=progress2 or the name2/skip2 --info flags would give more detail, but once the transfer has been going for a while, you probably don't want to cancel and restart it just to add those flags.
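
For reference, if you do get to choose the flags up front, the invocation might look like this (paths are examples):

rsync -a --info=progress2 /source/ /destination/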

The solution#

The documentation and this StackExchange answer say you can send a SIGVTALRM signal to rsync version 3.2.0+ and it will output its current progress, but that wasn't working for me.
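
In case it does work for your rsync, sending the signal looks like this:

# Unquoted so that every rsync process gets the signal.
kill -VTALRM $(pidof rsync)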

As a workaround, you can use strace to get a running log of which files rsync is looking at, which includes files it skips without actually opening:

strace --attach="$(pidof rsync)" --trace=openat

(If that's not showing anything, try removing the --trace=openat filter and seeing if there are other syscalls with paths to filter on.)

Alternatively, this StackExchange answer suggests a way to watch the currently open files and their sizes (this shows directories, but not unchanged files that are merely being inspected):

watch lsof -p"$(pidof rsync | tr ' ' ',')"

(The same should work for a recursive cp/mv/rm.)

Similarly, for getting the status of a transfer of a single large file, this answer reads the progress through the files cp is reading/writing to give a running percentage of how much it has copied; a similar approach might work for rsync.
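
The underlying trick is that /proc exposes the current offset of every open file descriptor, so a rough manual version looks like this (descriptor 3 is an example; check /proc/PID/fd to find the right one):

pid="$(pidof cp)"
# See which descriptor points at the file being copied.
ls -l "/proc/$pid/fd"
# The "pos:" field is the current byte offset into that file.
grep pos "/proc/$pid/fdinfo/3"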

The details#

Read more…

Hardlink identical directory trees

The problem#

I will often make copies of important files onto multiple devices, and then later make backups of all of those devices onto the same drive, at which point I have multiple redundant copies of those files within my backup. Tools like rdfind, fdupes, and jdupes exist to deal with the general problem of efficiently searching a collection of files for duplicates, but none of them support restricting the comparison to files whose names and/or paths match, so they end up doing a lot of extra work in this case.

The solution#

Download the script I wrote, hardlink-dups-by-name.sh, and run it as follows:

hardlink-dups-by-name.sh a_backup/ another_backup/

Then all files like a_backup/some/path that are identical to the corresponding file another_backup/some/path will get hard-linked together so there will only be one copy of the data taking up space.
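
The core idea is just comparing each pair of same-path files and hard-linking them when they match; a minimal sketch of that loop (the real script handles more edge cases):

cd a_backup
find . -type f | while IFS= read -r f; do
    # If the same path exists in the other tree with identical
    # contents, replace that copy with a hard link.
    if cmp -s -- "$f" "../another_backup/$f"; then
        ln -f -- "$f" "../another_backup/$f"
    fi
done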

The details#

Read more…

Deleting deeply nested OPFS directories

The problem#

The straightforward OPFS API for deleting a directory

await parentDir.removeEntry(name, {recursive: true});

doesn't work if the directory contains too many (several hundred) levels of nested directories. The sensible workaround is to never create such a directory structure in the first place, and to wipe the OPFS storage entirely if you ever do so by accident, but as discussed previously, I did get in that situation due to a bug and wrote some helpers to deal with it.

The solution#

For any reasonable real-world case, what you actually want is probably removeEntry():

await parentDir.removeEntry(name, {recursive: true});

or to delete everything from the root directory:

const root = await navigator.storage.getDirectory();
for await (const handle of root.values()) {
  await root.removeEntry(handle.name, {recursive: true});
}

or possibly to simply use your browser's settings to delete all site data, which should include OPFS data.

In case you do still want to delete a directory in OPFS without worrying about how deeply nested the directory structure is, you can use

async function removeDirectoryFast(dir) {
  const toDelete = [];
  let maxDepth = 0;
  for await (const fileHandle
             of getFilesNonRecursively(dir)) {
    maxDepth = Math.max(maxDepth, fileHandle.depth);
    toDelete.push(fileHandle);
  }
  async function deleteAtDepth(depth) {
    for (const f of toDelete) {
      if (f.depth === depth) {
        await f.parentDir.removeEntry(f.name,
                            {recursive: true});
      }
    }
  }
  const increment = 500; // Works empirically in Firefox.
  for (let depth = maxDepth; depth > 1; depth -= increment) {
    await deleteAtDepth(depth);
  }
  await deleteAtDepth(1);
}

This depends on the getFilesNonRecursively() helper from my previous blog post.
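
Usage is then just a matter of getting a handle to the offending directory and emptying it (the directory name here is hypothetical):

const root = await navigator.storage.getDirectory();
await removeDirectoryFast(
  await root.getDirectoryHandle("deeply-nested-junk"));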

The details#

Read more…

Debugging OPFS

The problem#

While the web developer tools in Firefox and Chrome provide a Storage/Application tab for inspecting the local data stored by a web app, neither shows OPFS files there, making it difficult to tell what's going wrong when you have a bug (which was a problem when writing my recent blog posts about OPFS). There are open Firefox and Chromium bugs about the missing feature, so if it's been a while since this was posted when you're reading this, hopefully this is no longer a problem.

Additionally, the tools I did find all use recursion, so they fail on the deeply nested directory tree I created by accident.

The solution#

If you don't have directories nested several hundred levels deep, you can just use this Chrome extension or this script (or probably this web component, although I couldn't get it to install), all named "opfs-explorer".

The following AsyncIterator returns all of the files in OPFS without using recursion and adds properties to include their full path and parent directory:

async function* getFilesNonRecursively(dir) {
  const stack = [[dir, "", undefined, 0]];
  while (stack.length) {
    const [current, prefix, parentDir, depth] = stack.pop();
    current.relativePath = prefix + current.name;
    current.parentDir = parentDir;
    current.depth = depth;
    yield current;

    if (current.kind === "directory") {
      for await (const handle of current.values()) {
        stack.push([handle,
                    prefix + current.name + "/",
                    current,
                    depth + 1]);
      }
    }
  }
}

And here's the simple HTML display function I've been using that calls it (you will likely want to modify this to your preferences):

async function displayOPFSFileList() {
  const existing = document.getElementById("opfs-file-list");
  const l = document.createElement('ol');
  l.id = "opfs-file-list";
  if (existing) existing.replaceWith(l);
  else document.body.appendChild(l);

  const root = await navigator.storage.getDirectory();
  for await (const fileHandle
             of getFilesNonRecursively(root)) {
    const i = document.createElement("li");
    i.innerText = fileHandle.kind + ": "
                  + (fileHandle.relativePath || "(root)");
    if (fileHandle.kind === "file") {
      const content = await fileHandle.getFile();
      const contentStr = content.type.length === 0
                      || content.type.startsWith("text/")
        ? ("\"" + (await content.slice(0, 100).text()).trim()
          + "\"")
        : content.type;
      i.innerText += ": (" + content.size + " bytes) "
                     + contentStr;
    }
    l.appendChild(i);
  }
}

The details#

Read more…

Loading multiple files without ZIP

The problem#

Last time, I showed how you can let a user have control over their data stored in a web app's OPFS by transferring directories in or out of the browser as ZIP files. But it would be more convenient if the user could just transfer folders directly, without the extra step of going through an archive manager. There's no shortcut for getting data out of the browser: we can only save one file at a time (unless we use the Chrome-only File System Access API). But there is a cross-browser way to load multiple files, or even nested directories, into the browser.

The solution#

The HTML Drag and Drop API supports transferring multiple files and directories, although the details are a bit messy:

// Here, "dir" is assumed to be the destination directory handle,
// e.g. from navigator.storage.getDirectory().
const target = document.getElementById("dropTarget");
// Required to make drop work.
target.addEventListener("dragover", (e) => e.preventDefault());
target.addEventListener("drop", async (e) => {
  e.preventDefault();
  await Promise.allSettled([...e.dataTransfer.items]
    .map(async (item) => {
      if (item.getAsFileSystemHandle) {
        await processFileSystemHandle(dir,
          await item.getAsFileSystemHandle());
      } else {
        await processFileSystemEntry(dir,
          item.webkitGetAsEntry());
      }
    })
  );
});

As you can see, for cross-browser support, we need to handle both getAsFileSystemHandle() and webkitGetAsEntry(). And, unfortunately, they return different types, so those two process*() functions really are pretty different:

async function processFileSystemHandle(dir, handle) {
  if (handle.kind === "directory") {
    const subdir = await dir.getDirectoryHandle(handle.name,
                             {create: true});
    for await (const entry of handle.values()) {
      await processFileSystemHandle(subdir, entry);
    }
  } else /* handle.kind === "file" */ {
    await writeFile(await dir.getFileHandle(handle.name,
                              {create: true}),
                    await handle.getFile());
  }
}
async function processFileSystemEntry(dir, entry) {
  async function readDirectory(directory) {
    let dirReader = directory.createReader();
    let getEntries = async () => {
      const results = await (new Promise((resolve, reject) =>
        dirReader.readEntries(resolve, reject)));
      if (results.length) {
        return [...results, ...await getEntries()];
      }
      return [];
    };

    return await getEntries();
  }

  if (entry.isDirectory) {
    const subdir = await dir.getDirectoryHandle(entry.name,
                             {create: true});
    for (const el of await readDirectory(entry)) {
      await processFileSystemEntry(subdir, el);
    }
  } else /* entry.isFile */ {
    const file = new Promise((resolve, reject) =>
      entry.file(resolve, reject));
    await writeFile(await dir.getFileHandle(entry.name,
                              {create: true}),
                    await file);
  }
}

(These assume the writeFile() helper from last week's post to handle writing inside a Web Worker as necessary.)

The details#

Read more…

ZIP web app local data

The problem#

In my previous post, I gave some tips for making a web app save and load its data as a file to give the user control over their data. But for many applications, it's useful to think of the user's data as multiple files, possibly organized into directories. OPFS lets a web app store local data with a filesystem-like API, but, due to security concerns, there's no direct access to the user's real filesystem, so there's no straightforward way for the user to view or manipulate that data.

The solution#

The common way to deal with this kind of issue is to stuff all of the files into one file, reducing it to a solved problem. We'll use the ZIP archive file format as it's pretty universally supported, so the user can likely already open such files with tools they have. In these examples, I use the zip.js library, so you'll have to import zip-fs.min.js (or equivalent) to use them.

In these functions, dir is an OPFS directory: either the root directory from navigator.storage.getDirectory() or a subdirectory's FileSystemDirectoryHandle. They input/output the ZIP files as Blobs; use the helpers from my previous post to actually connect to the user's filesystem.
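
For instance, getting a subdirectory handle to pass in as dir looks like this (the directory name is an example):

const root = await navigator.storage.getDirectory();
const dir = await root.getDirectoryHandle("documents",
                        {create: true});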

Downloading a directory as a ZIP is simple:

async function zipDirectory(dir) {
  const zipFs = new zip.fs.FS();
  await zipFs.root.addFileSystemHandle(dir);
  return await zipFs.exportBlob();
}
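
Combined with the saveFile() helper from the previous post, downloading all of OPFS as a ZIP is then a two-liner:

const root = await navigator.storage.getDirectory();
saveFile("backup.zip", await zipDirectory(root),
         "application/zip");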

Reading a ZIP file into OPFS is more complicated and must be done inside a Web Worker (due to using createSyncAccessHandle()):

async function unzipToDirectory(zipfile, dir) {
  const z = new zip.fs.FS();
  await z.importBlob(zipfile);

  async function extract(z, dir) {
    if (z.directory) {
      const childDir = z.name
        ? await dir.getDirectoryHandle(z.name,
                    { create: true })
        : dir;
      for (const child of z.children) {
        await extract(child, childDir);
      }
    } else {
      await writeFile(
        await dir.getFileHandle(z.name, { create: true }),
        await (await z.getBlob()).arrayBuffer());
    }
  }

  await extract(z.root, dir);
}
async function writeFile(file, contents) {
  const handle = await file.createSyncAccessHandle();
  handle.truncate(0);
  if (contents.arrayBuffer) contents = await contents.arrayBuffer();
  handle.write(contents);
  handle.flush();
  handle.close();
}
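
A sketch of how that might be wired up (assuming unzipToDirectory() and writeFile() above are also defined in worker.js; the message protocol is hypothetical):

// worker.js: expects the ZIP file as a Blob in the message.
importScripts("zip-fs.min.js");
self.onmessage = async (e) => {
  const dir = await navigator.storage.getDirectory();
  await unzipToDirectory(e.data, dir);
  self.postMessage("done");
};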

The details#

Read more…

Keeping web app data local

The problem#

Users don't tend to have a lot of control over their data in web apps. Most often, the data is stored on a server the user does not control—or, if they do control it, we're talking about self-hosting, which is much more involved than just navigating to a web app in a browser. Alternatively, the data may be stored locally, but using various browser-specific mechanisms which make it difficult for the user to share, backup, or otherwise reason about the data the web app manipulates.

While desktop apps can replicate these problems, usually they store data in files either explicitly chosen by the user or in well-known locations.

The solution#

Files are a flexible interface to let users do whatever they want with their data, so let's use them for web apps, too.

To save a file to the user's computer, modified from this example:

function saveFile(filename, data, mimeType) {
  const element = document.createElement("a");
  const url = URL.createObjectURL(new Blob([data],
                                  { type: mimeType }));
  element.setAttribute("href", url);
  element.setAttribute("download", filename);
  element.click();
  URL.revokeObjectURL(url);
}
// Save a JSON file:
saveFile("hello.json",
  JSON.stringify({"Hello": "World!"}, null, 2),
  "application/json");

(Consider using beforeunload if the user has unsaved changes to make sure they really do have their data in the file, and not just in the browser.)
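
A minimal sketch of that check (hasUnsavedChanges is a hypothetical flag your app would maintain):

window.addEventListener("beforeunload", (e) => {
  // Asks the browser to show an "are you sure?" prompt.
  if (hasUnsavedChanges) e.preventDefault();
});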

To load a file from the user's computer:

function loadFile() {
  const element = document.createElement("input");
  element.type = "file";
  return new Promise((resolve, reject) => {
    element.click();
    element.addEventListener("change",
      () => resolve(element.files[0]));
    element.addEventListener("cancel",
      () => reject("User canceled."));
  });
}
// loadFile() must be called from a real user click.
myButton.addEventListener('click',
  async (e) => myLoadFunc(await loadFile()));

The details#

Read more…

Generating specialized word lists

The problem#

I've been playing Codenames online a lot lately (using my fork of codenames.plus), and a friend suggested it might be fun to have themed word lists. Specifically, they suggested Star Trek as a theme as it's a fandom that's fairly widely known. They left it up to me to figure out what should be in a Star Trek themed word list.

The solution#

If you just want to play Codenames with the list, go to my Codenames web app and select one or both of the Star Trek card packs. If you just want the word lists, you can download the Star Trek: The Next Generation words and the Star Trek: Deep Space 9 words.

To generate a word list yourself (I used this source for the Star Trek scripts), you will need a common words list like en_50k.txt, which I mentioned in my previous post on anagram games, and then to pipe the corpus through the following script (which you will likely have to modify for the idiosyncrasies of your data):

#!/bin/bash
set -euo pipefail

NUM_COMMON=2000 # Filter out the most common 2000 words
COMMON_WORDS="$(mktemp)"
<en_50k.txt head "-$NUM_COMMON" | cut -d' ' -f1 |\
    sort | tr '[:lower:]' '[:upper:]' >"$COMMON_WORDS"

# Select only dialogue lines (in Star Trek scripts)
grep -aP '^\t\t\t[^\t]' |\
    # Split words
    tr ' .,:()\[\]!?;"/\t[:cntrl:]' '[\n*]' |\
    sed 's/--/\n/' |\
    # Strip whitespace
    sed 's/^\s\+//' | sed 's/\s\+$//' |\
    grep -av '^\s*$' |\
    # Strip quotes
    sed "s/^'//" | sed "s/'$//" |\
    # Filter out numbers
    grep -av '^[[:digit:]]*$' |\
    tr '[:lower:]' '[:upper:]' |\
    # Fix for contractions not being in wordlist
    sed "s/'\(S\|RE\|VE\|LL\|M\|D\)$//" |\
    grep -av "'T$" |\
    # Remove some more non-words
    grep -avF '-' |\
    grep -avF '&' |\
    # Count
    sort | uniq -c |\
    # Only keep words with >25 occurrences
    awk '{ if ($1 > 25) { print } }' |\
    # Remove common words
    join -v2 -22 -o 2.1,2.2 "$COMMON_WORDS" - |\
    # Sort most common words first
    sort -rn

rm "$COMMON_WORDS"

The output of the script will require some manual effort to decide which words really belong in the final list, but it's a good start.
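
Running it looks something like the following (filenames are examples; note the script reads the corpus on stdin and expects en_50k.txt in the current directory):

cat star-trek-scripts/*.txt | ./make-wordlist.sh > candidate-words.txt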

The details#

Read more…

Useful global keyboard shortcuts

Most desktop environments provide options for customizing keyboard shortcuts. In XFCE, there are settings panels for both window manager shortcuts and application shortcuts. While the term "application shortcuts" suggests using them for launching applications, and many keyboards do have special keys for launching a music player or a calculator that I do have set up, I don't find myself using those much. I have buttons on my panel for applications that I launch often; if I'm going to be clicking away into a new application, I don't find clicking on the panel to be an additional inconvenience.

On the other hand, "application shortcuts" can be used for launching arbitrary scripts, including ones that don't involve switching contexts.

Keys to use#

Many keyboards have extra keys intended for global commands labeled with various symbols. If you have them, you can be creative about what you want them to mean and even combine them with modifiers (Shift, Ctrl, etc.) to get more inputs. On the other hand, if you have a more traditional keyboard layout (which is likely the case on a laptop), your choices are more limited. To avoid confusion, it's generally best to use the Windows key (usually called the Super key in Linux) for global shortcuts as it is not usually used for anything else.
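
As a concrete example, in XFCE a custom application shortcut can also be created from the command line with xfconf (the key combination and script path are examples):

xfconf-query -c xfce4-keyboard-shortcuts \
    -p '/commands/custom/<Super>m' \
    -n -t string -s "$HOME/bin/toggle-music.sh"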

Shortcut ideas#

Read more…