A Weird Imagination

Displaying Factorio history

The problem#

Last week, I got all of the Factorio saves I had been keeping around into a single directory, ordered by the time they were created. But what should we do with that data? We could load arbitrary saves to see what our base looked like in the past, but loading the saves individually isn't a great way to do that when there are a lot of them.

The solution#

Luckily, I'm not the only one who wants screenshots of their Factorio base, so there are existing mods for doing exactly that.

The FactorioMaps Timelapse mod takes a list of saves and generates a web page like this demo that lets you look around your base across time. As the documentation explains, this is not actually a mod you enable for your save, but a script that you place in your mods/ directory, which then runs Factorio repeatedly to generate screenshots.

To set it up, use a non-Steam install of Factorio and put the saves you want under its saves directory (in a subdirectory named to_screenshot/ in this example). If you downloaded the mod as a ZIP file (e.g., by installing the mod from within Factorio), unzip it; alternatively, you can clone the GitHub repo or my fork, which includes a few minor improvements for displaying a lot of saves, especially ones less than an hour apart.

# get to Factorio install /mods/ directory
cd factorio/mods
# clone the git repo with the mod's internal name
git clone https://github.com/dperelman/FactorioMaps.git L0laapk3_FactorioMaps
cd L0laapk3_FactorioMaps
# install dependencies
python -m venv .venv
. .venv/bin/activate
pip install --upgrade -r requirements.txt
# add saves in /saves/to_screenshot/ to timelapse "mybase"
python auto.py --standalone mybase to_screenshot/*

Then you can find the timelapse in the directory script-output/FactorioMaps/mybase/ of your Factorio install; just open index.html in any web browser. Especially if you have a lot of saves, consider adding the --dayonly option so the script doesn't take twice as long by also generating screenshots of the night view.
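
For example, continuing from the mod directory used above (a sketch; --dayonly is the option just mentioned, and the relative script-output path assumes the standard non-Steam install layout):

# day view only: skip the second rendering pass for night screenshots
python auto.py --standalone --dayonly mybase to_screenshot/*
# open the generated timelapse in your default browser
xdg-open ../../script-output/FactorioMaps/mybase/index.html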

Generating screenshots as you play#

Note that if you just want to take screenshots automatically as you play, you can use the Screenshot Toolkit mod, which is what I use for single-player games. But due to the way Factorio multiplayer works, doing so impacts every player, so in a multiplayer game it may be better to just copy the saves as the game runs and generate the screenshots later.

The details#

Read more…

Ordering saves by date

The problem#

Last week, I shared a script which continuously backed up game saves whenever the game saved. The result is a series of directories that contain snapshots of the game saves from every autosave. But to view this data, we really want a list of unique files in order marked with the time they were created.

The solution#

The following will create symbolic links to the unique files named after their modification date:

for i in /tank/factorio/.zfs/snapshot/*/*.zip
do
  ln -sf "$i" "$(stat --printf=%y "$i").zip"
done

or if you want a custom date format (stat's %y includes fractional seconds and a timezone offset, which makes for unwieldy filenames), you can use date:

  ln -sf "$i" "$(date -r "$i" +%Y-%m-%d_%H-%M-%S).zip"

Alternatively, the following will just list the unique files with their timestamps (the first 30 characters of each line are find's %T+ timestamp, so uniq --check-chars=30 collapses entries with identical modification times):

find /tank/factorio/.zfs/snapshot/ -printf "%T+ %p\n" \
    | sort | uniq --check-chars=30

The details#

Read more…

Copy on save

The problem#

I was running a Factorio multiplayer server and was being paranoid about making sure I didn't lose any save data. But I also didn't want to put the saves directory on my ZFS file system as it's on a hard drive, not an SSD, and saves taking too long can cause lag for the players (although with non-blocking saving this is much less of an issue).

The solution#

The following script watches the saves/ directory for any new files being written, immediately copies them to the ZFS dataset tank/factorio mounted at /tank/factorio/, and creates a snapshot named with the current date and time. The result is a snapshot of the save data corresponding to every time the game saved.

#!/bin/sh
while true
do
  inotifywait -r saves/ -e close_write
  sleep 0.1s  # write is to *.tmp.zip, wait for rename
  rsync -avhx saves/ /tank/factorio/
  now="$(date +%Y-%m-%d_%H-%M-%S)"
  zfs snapshot tank/factorio@save-"$now"
done
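
To check that snapshots are accumulating as expected, standard ZFS commands will list them (dataset name from the script above; the snapshot name shown is hypothetical, following the script's date format):

# list the per-save snapshots
zfs list -t snapshot -r tank/factorio
# browse the contents of one snapshot
ls /tank/factorio/.zfs/snapshot/save-2024-01-01_00-00-00/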

The details#

Read more…

Troubleshooting ZFS upgrade

The problem#

I had recently done an apt upgrade that included upgrading ZFS and noticed zpool status showed a weird "(non-allocating)" message, which seemed concerning:

$ zpool status
  pool: tank
 state: ONLINE
config:

    NAME         STATE     READ WRITE CKSUM
    tank         ONLINE       0     0     0
      mirror-0   ONLINE       0     0     0
        ata-***  ONLINE       0     0     0  (non-allocating)
        ata-***  ONLINE       0     0     0  (non-allocating)

errors: No known data errors

The solution#

This forum thread suggested the error may be due to a version mismatch between the ZFS tools and the kernel module. I confirmed there was a mismatch:

$ zpool --version
zfs-2.2.3-2
zfs-kmod-2.1.14-1

The easy way to load the new version of a kernel module after an update is to reboot the computer. But if you don't want to do that, here's the general outline of the commands I ran to unload and reload ZFS (run as root):

# Stop using ZFS
$ zfs umount -a
$ zpool export tank
$ service zfs-zed stop
# Remove modules
$ rmmod zfs
$ rmmod spl
# will show error: rmmod: ERROR: Module spl is in use by: ...
# repeatedly rmmod dependencies until spl is removed.

# Reload ZFS
$ modprobe zfs
$ service zfs-zed start
$ zpool import tank
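
Afterwards, the tool and kernel module versions should match. Something like this is what I'd expect to see (illustrative, not captured output):

$ zpool --version
zfs-2.2.3-2
zfs-kmod-2.2.3-2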

The details#

Read more…

Troubleshooting KeePassXC browser extension

The problem#

I use KeePassXC as my password manager in Firefox. Sometimes the connection between Firefox and KeePassXC drops and I have to explicitly click reconnect, but recently it stopped working entirely.

The solution#

Install the keepassxc-full package instead of the keepassxc package. If you get the browser extension via the webext-keepassxc-browser package, then your package manager will automatically get the right one.

(This only applies to Debian Sid and Trixie or newer.)

The details#

Read more…

Resolving apt full-upgrade problems

The problem#

My personal desktop runs Debian Unstable ("Sid"). The nature of running a bleeding edge distro is that things break sometimes. I use Debian Testing/Stable or Ubuntu on my other machines to make my life easier, but I often want access to the latest version of some piece of software and running Debian Unstable is one way to do that. Admittedly, I also do it partially just because fixing things that break is a good way of learning how things work.

The most common kind of problem I run into is that upgrades are not straightforward. For their unstable distro, Debian doesn't make any promises about package dependencies not changing. This is less of a problem when there's just an additional package that needs to be installed, but it can be complicated when there are conflicts that require removing packages to get an upgrade to go through.

Recently I ran into an extreme version of this problem: trying to upgrade, apt proposed uninstalling nearly everything I had installed. Worse, while trying to resolve the issue, I got a scary-sounding warning that I had uninstalled libssl3:

dpkg: libssl3:amd64: dependency problems, but removing anyway as you requested:
 [...]
 systemd depends on libssl3 (>= 3.0.0).
 sudo depends on libssl3 (>= 3.0.0).
 [...]

Both of those sound important.

The solution#

Luckily, it wasn't as bad as it sounded. Looking at the message, it turned out I had replaced libssl3 with libssl3t64, which is actually the exact same thing, although the package manager doesn't know that. The reason for the different package name is the Debian project to transition to 64-bit time_t, which is required to fix the Year 2038 problem. While on AMD64 and other 64-bit architectures everything already uses 64-bit time_t, that's not true of all platforms that Debian supports. The way Debian handles ABI transitions like this is to rename the library packages with a suffix (t64 for this one) to ensure the old and new ABIs don't get mixed accidentally. Since all of the architectures share the package names, the rename also happens on AMD64, even though there's no actual change there, to match the rename on the platforms where the ABI did change.
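
If you want to confirm that a renamed package really is a drop-in replacement, standard apt/dpkg commands can compare the two (my own illustration, not from the original troubleshooting session):

# compare versions and origins of the old and new packages
apt-cache policy libssl3 libssl3t64
# list installed packages that went through the t64 rename
dpkg-query -W | grep t64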

Presumably the upgrade will be smoother when done between stable versions, but it really confused apt (which I usually use via wajig):

$ wajig install libssl-dev
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 libegl1 : Depends: libegl-mesa0 but it is not going to be installed
 libreoffice-core : Depends: libgstreamer-plugins-base1.0-0 (>= 1.0.0) but it is not going to be installed
                    Depends: libgstreamer1.0-0 (>= 1.4.0) but it is not going to be installed
                    Depends: liborcus-0.18-0 (>= 0.19.2) but it is not going to be installed
                    Depends: liborcus-parser-0.18-0 (>= 0.19.2) but it is not going to be installed
 wine-development : Depends: wine64-development (>= 8.21~repack-1) but it is not going to be installed or
                             wine32-development (>= 8.21~repack-1)
                    Depends: wine64-development (< 8.21~repack-1.1~) but it is not going to be installed or
                             wine32-development (< 8.21~repack-1.1~)
E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.

Yeah, no idea what libegl1, libreoffice-core, or wine-development have to do with upgrading libssl-dev, but apt was showing those same packages in the error messages no matter what I tried to upgrade and trying to upgrade those packages didn't work either. Luckily, aptitude was able to handle it somewhat better:

$ sudo aptitude install libssl-dev
The following packages will be upgraded:
  libssl-dev{b}
1 packages upgraded, 0 newly installed, 0 to remove and 1459 not upgraded.
Need to get 2,699 kB of archives. After unpacking 1,122 kB will be used.
The following packages have unmet dependencies:
 libssl-dev : Depends: libssl3t64 (= 3.2.1-3) but it is not going to be installed
The following actions will resolve these dependencies:

     Remove the following packages:
1)     libssl3 [3.1.4-2 (now)]
2)     libssl3:i386 [3.1.4-2 (now)]

     Install the following packages:
3)     libssl3t64 [3.2.1-3 (testing, unstable)]
4)     libssl3t64:i386 [3.2.1-3 (testing, unstable)]



Accept this solution? [Y/n/q/?] y
The following NEW packages will be installed:
  libssl3t64{a} libssl3t64:i386{a}
The following packages will be REMOVED:
  libssl3{a} libssl3:i386{a}
The following packages will be upgraded:
  libssl-dev
1 packages upgraded, 2 newly installed, 2 to remove and 1457 not upgraded.
Need to get 7,177 kB of archives. After unpacking 2,294 kB will be used.
Do you want to continue? [Y/n/?]

Getting the packages to upgrade involved a lot of calls to aptitude that looked like that: removing a list of libraries and installing a matching list of new libraries whose names were identical to those removed except with t64 at the end.

The details#

Read more…

Shell script over characters

The problem#

I wanted to find a specific Unicode character I had used somewhere in some previous blog post. But by the nature of not knowing exactly what character it was, I wasn't sure how to search for it.

The solution#

Instead, I wrote a script based on this post to simply list all of the characters appearing in any file in a given directory:

cat * | sed 's/./&\n/g' | sort -u

Although that output is already small, to further reduce the noise, this version strips out the English letters, numbers, and common symbols:

cat * \
    | tr -d 'a-zA-Z0-9!@#$%^&*()_+=`~,./?;:"[]{}<>|\\'"'-" \
    | sed 's/./&\n/g' | sort -u
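
If you're specifically hunting for non-ASCII characters, GNU grep can also locate them in place, including file names and line numbers (a variant I'm adding here, not from the original post):

# print every line containing a byte outside the ASCII range
grep -P -n -H '[^\x00-\x7F]' *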

The details#

Read more…

Remote graphical troubleshooting

The problem#

For various reasons you might want graphical access to another computer, since some things can't be done over a text interface, including actually designing and troubleshooting what the graphical interface looks like. The other computer might be in a remote location across the internet, in a different room, or simply have a less convenient form factor like a tablet or television, so it's easier to use your desktop's keyboard, mouse, and monitor.

The solution#

The standard solution for this is VNC, specifically the x11vnc VNC server.

To keep a VNC server open to the current X11 session:

x11vnc -usepw -nevershared -forever -localhost -loop &
#... (run one or more graphical applications that block)
# When done, kill everything.
rkill $$

Then to connect to it, assuming the hostname is tablet and you're set up to connect to it via SSH:

$ vncviewer -via tablet -passwd ~/.vnc/tab-passwd localhost

This assumes you've created a ~/.vnc/passwd password file on the server by running

$ x11vnc -storepasswd

and entering something at the prompt from your favorite password generator. No need to save the password anywhere as the file itself is the actual password; just copy it to the client at ~/.vnc/tab-passwd to match the path used in the example above.
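
For example, copying the file into place might look like this (using the tablet hostname and the paths from above):

# on the client: fetch the password file created on the server
scp tablet:.vnc/passwd ~/.vnc/tab-passwd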

The details#

Read more…

Streams and sockets and pipes, oh my

You know, like "lions and tigers and bears, oh my"… okay, not funny, moving on…

The problem#

There are a lot of different ways to transmit streams of bytes between applications on the same host or different hosts, with various reasons you might want to use each one. And sometimes the two endpoints might disagree on which one they want to be using.

The solution#

As it turns out, there actually is a single answer to bridging any two byte streams: socat. The documentation has plenty of examples. Here are a few I made up involving named pipes and Unix sockets to go along with my recent posts:

Bridge a pair of named pipes to a Unix socket#

socat UNIX-LISTEN:test.sock 'PIPE:pipe_in!!PIPE:pipe_out'

Builds a bridge such that a client sees a Unix socket test.sock while the server communicates through two named pipes: pipe_in to send data over the socket and pipe_out to read the data received over the socket.
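
As a quick illustration (my own; assumes the bridge above is running and pipe_in/pipe_out exist as FIFOs), the two sides could exchange messages like this:

# server side: queue a reply and watch for incoming data
echo 'reply from server' > pipe_in &
cat pipe_out
# client side, in another terminal: send a line and read the reply
echo 'hello from client' | socat - UNIX-CONNECT:test.sock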

Connect to Unix socket HTTP server via TCP#

socat TCP-LISTEN:8042,fork,bind=localhost \
    UNIX-CONNECT:http.sock

For an HTTP server accepting connections via the Unix socket http.sock, this makes it also accept connections via the TCP socket localhost:8042.
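
To check the bridge, the same request should work over both transports (curl's --unix-socket option also appears later in this post; socket and port names are from the example above):

# via the new TCP listener...
curl http://localhost:8042/
# ...and directly via the Unix socket
curl --unix-socket http.sock http://localhost/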

Forward a Unix socket over an SSH connection#

socat EXEC:"ssh remote 'socat UNIX-CLIENT:service.sock -'" \
    UNIX-LISTEN:proxy-to-remote.sock

Note ssh can do the same without socat (including supporting either side being a TCP port):

ssh -N -L ./proxy-to-remote.sock:./service.sock remote

But that demonstrates combining socat and ssh to get access to streams only accessible from a remote computer.

The details#

Read more…

HTTP over Unix sockets

The problem#

Previously, I wrote about using named pipes for IPC to allow controlling a process by another process running on the same computer possibly as a different user, with the access control set by file permissions. But I observed that the restricted unidirectional communication mechanism limited how useful it could be, suggesting another design might be better in settings where bidirectional communication including confirmation of commands may be useful.

Is there a good general solution to this problem without losing the convenience of access control via file permissions?

The solution#

Let's use everyone's favorite RPC mechanism: HTTP. But HTTP normally runs over TCP, and even if we bind to localhost, the HTTP server would still be accessible to any user on the same computer and would require selecting a port number that's not already in use. Instead, we can bind the HTTP server to a Unix socket, which, like a named pipe, looks a lot like a file but allows communication like a network socket.

Python's built-in HTTP server doesn't directly support binding to a Unix socket, but the following is slightly modified from an example I found of how to get it to:

import http.server
import json
import os
import socket
import sys
import traceback

def process_cmd(cmd, *args):
    print(f"In process_cmd({cmd}, {args})...")

class HTTPHandler(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        size = int(self.headers.get('Content-Length', 0))
        body = self.rfile.read(size)
        args = json.loads(body) if body else []
        try:
            result = process_cmd(self.path[1:], *args)
            self.send(200, result or 'Success')
        except Exception:
            self.send(500, str(traceback.format_exc()))

    def do_GET(self):
        self.do_POST()

    def send(self, code, reply):
        # avoid exception in server.py address_string()
        self.client_address = ('',)
        self.send_response(code)
        self.end_headers()
        self.wfile.write(reply.encode('utf-8'))

sock_file = sys.argv[1]
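# Clean up any stale socket file left over from a previous run.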
try:
    os.remove(sock_file)
except OSError:
    pass
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.bind(sock_file)
sock.listen(0)

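# Create the server with bind_and_activate=False so it doesn't bind a
# TCP socket, then substitute the already-bound Unix socket for it.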
server = http.server.HTTPServer(sock_file, HTTPHandler,
                                False)
server.socket = sock
server.serve_forever()

sock.shutdown(socket.SHUT_RDWR)
sock.close()
os.remove(sock_file)

Then you can query the server using curl:

$ ./server.py http.socket &
$ curl --unix-socket http.socket http://anyhostname/foo
[GET response...]
$ curl --unix-socket http.socket http://anyhostname/foo \
    --json '["some", "args"]'
[POST response...]

or in Python, using requests-unixsocket:

import requests_unixsocket
session = requests_unixsocket.Session()
host = "http+unix//http.socket/"
r = session.get(host + "foo")
# Inspect r.status_code and r.text or r.json()
r = session.post(host + "foo", json=["some", "args"])
# Inspect r.status_code and r.text or r.json()

The details#

Read more…