Steve's Blog

SPDIF Optical Keepalive with Pipewire

For years, I’ve run a set of Logitech Z-5500 speakers from an optical port on my PC. It gives good quality 5.1 audio, and supports AC3 + DTS digital passthrough as well as 44.1, 48, and 96 kHz sample rates.

The problem is, the speakers go into a ‘sleep’ mode where it takes nearly a second to bring the amp back online to play audio - so notification sounds are often not played at all.

To correct this, I’ve previously run a simple systemd service using sox to output a sine wave below the audible level: /usr/bin/play -q -n -c2 synth sin gain -95
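For reference, the systemd side of that old approach was just a user unit wrapping the play command - something like this sketch (the unit name is a placeholder, and it assumes sox is installed):

```
[Unit]
Description=Inaudible keepalive tone for SPDIF

[Service]
ExecStart=/usr/bin/play -q -n -c2 synth sin gain -95
Restart=always

[Install]
WantedBy=default.target
```

Saved as e.g. ~/.config/systemd/user/spdif-keepalive.service and enabled with systemctl --user enable --now spdif-keepalive.service.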

Now however, we can do this directly within pipewire itself.

Firstly, we need to identify the output device using pw-top. Play some audio, and look for which sink it is being played on - eg:

S   ID  QUANT   RATE    WAIT    BUSY   W/Q   B/Q  ERR FORMAT           NAME                                                                                                                                                                   
S   28      0	   0    ---     ---   ---   ---     0                  Dummy-Driver
S   29      0	   0    ---     ---   ---   ---     0                  Freewheel-Driver
S   36      0	   0    ---     ---   ---   ---     0                  Midi-Bridge
S   42      0	   0    ---     ---   ---   ---     0                  alsa_output.usb-Kingston_HyperX_Cloud_Stinger_Core_Wireless___7.1_000000000000-00.analog-stereo
S   49      0	   0    ---     ---   ---   ---     0                  alsa_input.usb-Kingston_HyperX_Cloud_Stinger_Core_Wireless___7.1_000000000000-00.mono-fallback
R   40   1024  48000  32.3us   4.3us  0.00  0.00    0    S16LE 2 48000 alsa_output.usb-Generic_USB_Audio-00.HiFi_5_1__hw_Audio_3__sink
R  106   1024  48000  20.5us   5.1us  0.00  0.00    0    F32LE 2 48000  + Brave
S   50      0	   0    ---     ---   ---   ---     0                  alsa_output.usb-Generic_USB_Audio-00.HiFi_5_1__hw_Audio_1__sink
S   51      0	   0    ---     ---   ---   ---     0                  alsa_output.usb-Generic_USB_Audio-00.HiFi_5_1__hw_Audio__sink
S   52      0	   0    ---     ---   ---   ---     0                  alsa_input.usb-Generic_USB_Audio-00.HiFi_5_1__hw_Audio_2__source
S   53      0	   0    ---     ---   ---   ---     0                  alsa_input.usb-Generic_USB_Audio-00.HiFi_5_1__hw_Audio_1__source
S   54      0	   0    ---     ---   ---   ---     0                  alsa_output.pci-0000_2f_00.1.hdmi-stereo

In my case, the audio device is alsa_output.usb-Generic_USB_Audio-00.HiFi_5_1__hw_Audio_3__sink.
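If you’d rather script the lookup than eyeball the table, note that running nodes are marked R in the state column and the node name is always the last field. A small sketch against a sample pw-top line (on a live system you could feed the same awk from pw-top’s batch output instead):

```shell
# Sample "running sink" line as printed by pw-top; the state column comes
# first, and the node name is the last whitespace-separated field.
sample='R   40   1024  48000  32.3us   4.3us  0.00  0.00    0    S16LE 2 48000 alsa_output.usb-Generic_USB_Audio-00.HiFi_5_1__hw_Audio_3__sink'
sink=$(awk '$1 == "R" && $NF ~ /^alsa_output/ { print $NF }' <<< "$sample")
echo "$sink"
```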

Now we create a file at ~/.config/wireplumber/main.lua.d/spdif-noise.lua with:

rule = {
  matches = {
    {
      { "node.name", "matches", "alsa_output.usb-Generic_USB_Audio-00.HiFi_5_1__hw_Audio_3__sink" }
    },
  },
  apply_properties = {
    ["dither.noise"] = 2,
    ["node.pause-on-idle"] = false,
    ["session.suspend-timeout-seconds"] = 0
  }
}

table.insert(alsa_monitor.rules,rule)

You’ll need to swap the node name for the sink you identified earlier with pw-top.

Restart pipewire now: systemctl --user restart pipewire.service.

Now, when your first sound plays, pipewire will continue to output sub-audible noise to keep everything alive - which is a much better solution than using sox!

Training spam with doveadm

A while ago, I posted about training SpamAssassin’s Bayes filter with Proxmox Mail Gateway. That’s really easy when you’re using Maildir, as each email message is its own file.

At this point, we could easily just cat out each file, treating emails in folders as plain files and ignoring the fact they were part of an IMAP mailbox. However, what happens if you use something other than Maildir - like the newer mailbox formats? We can’t use the same approach, as each email is likely no longer just a file.
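To make that concrete, the Maildir-era approach really is just a loop over files. Sketched here with a placeholder action and hypothetical paths - the real version would pipe each file to the PMG report script over ssh:

```shell
# With Maildir, one file == one message, so "learning" is just iterating files.
learn_maildir() {
	local spamdir=$1
	local msg
	for msg in "$spamdir"/*; do
		[ -f "$msg" ] || continue
		# Real version: ssh root@my.pmg.install.example.com report < "$msg"
		echo "feeding: $msg"
	done
}

learn_maildir "/var/vmail/example.com/jdoe/Maildir/.Spam/cur"
```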

For example, dbox is Dovecot’s own high-performance mailbox format.

If we use mdbox, we can no longer read each message as its own file, nor can we tell which folders are which from the on-disk layout. So we have to get smarter.

Using doveadm, we can search for messages in a mailbox, and fetch them to feed into our previously configured script and feed them into PMG as before. The main advantage is that this will work with any mail storage backend.
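For context, doveadm search -A prints one line per matching message - username, mailbox GUID, message UID - which is exactly what the read in the loop below consumes. A quick sketch with a made-up line (the GUID here is hypothetical):

```shell
# One line of `doveadm search -A` output: user, mailbox GUID, message UID.
line='jdoe@example.com 5a1fc03f9ab14f60b7220008c6b2e3a1 42'
read -r user guid uid <<< "$line"
echo "$user / $guid / $uid"
```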

This simple bash script will go through every user’s Spam or INBOX/Spam folder, fetch each message, feed it into the learning system, and then remove it from the user’s mailbox.

#!/bin/bash
MAILFILTER=my.pmg.install.example.com

doveadm search -A mailbox Spam OR mailbox INBOX/Spam | while read -r user guid uid; do
	doveadm fetch -u "$user" text mailbox-guid "$guid" uid "$uid" | tail -n+2 > "/tmp/spam.$guid.$uid"
	if ! ssh "root@$MAILFILTER" report < "/tmp/spam.$guid.$uid"; then
		echo "Error running sa-learn. Aborting."
		exit 1
	fi
	rm -f "/tmp/spam.$guid.$uid"
	doveadm expunge -u "$user" mailbox-guid "$guid" uid "$uid"
done

Use it with the scripts / general configuration from the previous article, and this will work across all mail storage methods supported by Dovecot.

Cron it to run every 5 minutes or so, and you’re done! Nice and easy.
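For example, an /etc/cron.d entry along these lines (the script path is a placeholder for wherever you saved it):

```
*/5 * * * * root /usr/local/bin/train-spam.sh
```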

Multiple Watchdog handler

Recently, I’ve been having a problem with kernel panics on kernels newer than 6.3.7, which cause a hard hang of the system.

So, the first thing to do was set up a watchdog to reset the system after 60 seconds with nothing feeding it. At that point, the system would reset and wouldn’t need me to manually reboot it each time.

The problem is, the default watchdog daemon can only handle a single watchdog - and I want to activate two.

Sounds like time for another simple perl script!

#!/usr/bin/perl
use strict;
use warnings;
use POSIX qw(nice);
$|++;

my @watchdogs = glob ("/dev/watchdog?");

## Find the lowest timeout...
print "Finding lowest watchdog timeout...\n";
my $sleep_timeout = 60;
my @wd_timeouts = glob("/sys/class/watchdog/*/timeout");
for my $wd_timeout ( @wd_timeouts ) {
	open my $fh, '<', $wd_timeout or die "Cannot open $wd_timeout: $!";
	my $timeout = do { local $/; <$fh> };
	close $fh;
	chomp $timeout;
	print "Timeout $wd_timeout = $timeout\n";
	if ( $timeout < $sleep_timeout ) {
		$sleep_timeout = $timeout;
	}
}

## Halve the timeout to ensure reliability
$sleep_timeout = $sleep_timeout / 2;
print "Using final timeout of $sleep_timeout\n";

nice(-19);
$SIG{INT}  = \&signal_handler;
$SIG{TERM} = \&signal_handler;

## Open the file handles...
my @fhs;
for my $watchdog ( @watchdogs ) {
	print "Opening: $watchdog\n";
	open(my $fh, ">", $watchdog) or die "Cannot open $watchdog: $!";
	$fh->autoflush(1);
	my $device = {
		device	=> $watchdog,
		fh	=> $fh,
	};
	push @fhs, $device;
}

## Start feeding the watchdogs.
while (1) {
    for my $watchdog ( @fhs ) {
        #print "Feeding: " . $watchdog->{"device"} . "\n";
        my $fh = $watchdog->{"fh"};
        print $fh ".\n";
    }
    #print "Sleeping $sleep_timeout seconds...\n";
    sleep $sleep_timeout;
}

sub signal_handler {
    for my $watchdog ( @fhs ) {
        print "Sending STOP to " . $watchdog->{"device"} . "\n";
        my $fh = $watchdog->{"fh"};
        print $fh "V";
    }
    exit 0;
}

This script scans for the lowest timeout across all watchdogs installed in the system, and then feeds them all at half that interval.

It can be started with a simple systemd unit:

[Unit]
Description=Run watchdog feeder

[Service]
Type=simple
ExecStart=/usr/local/bin/watchdog.pl
Restart=always
CPUSchedulingPolicy=fifo
CPUSchedulingPriority=99

[Install]
WantedBy=multi-user.target

When the program stops, it writes the magic close character ('V') to each watchdog, so a stopped service won’t trigger a system reset.

Nice and simple.

Setting up MariaDB Replicas - the easy way

The MariaDB page on replication is good, but I feel it lacks a few details that would make things easier - specifically, around moving the data between the master and slave so the replica can be brought up with as little effort as possible.

If we assume that the Master has been configured with the steps in the MariaDB Guide, we can then look at how to get data to the slave for the initial replication to happen.
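As a refresher, the master side from that guide boils down to a couple of lines in its config - shown here as an illustrative sketch (using the guide’s example values), not a substitute for following the guide:

```
[mariadb]
log-bin
server_id = 1
log-basename = master1
```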

In my configuration, I use a master server already configured with SSL - you should really do the same for your master BEFORE you set up any replication. I use a LetsEncrypt certificate and this reference.

Using the script below, we can skip the Getting the Master’s Binary Log Co-ordinates step and export the GTID in the dump, importing it into the new slave in one command. When run, mariadb-dump will automatically lock the database tables, and unlock them again after the transfer has completed.

#!/bin/bash
declare -x MYSQL_PWD="MySqlRootPassword"
declare -x MARIA_MASTER="my.master.server.example.com"
declare -x MARIA_REPL_USER="replication_user"
declare -x MARIA_REPL_PASS="replication_password"

echo "Stopping the local slave (if running)..."
mysql -e "stop slave;"

echo "Transferring initial dataset... Please wait..."
mariadb-dump -A --gtid \
	--add-drop-database \
	--compress=true \
	--master-data=1 \
	--include-master-host-port \
	--ssl=true \
	-h $MARIA_MASTER \
	-u root | mysql

echo "Configuring slave..."
mysql -e "
CHANGE MASTER TO
    MASTER_HOST=\"$MARIA_MASTER\",
    MASTER_USER=\"$MARIA_REPL_USER\",
    MASTER_PASSWORD=\"$MARIA_REPL_PASS\",
    MASTER_PORT=3306,
    MASTER_SSL=1,
    MASTER_SSL_CA=\"/etc/pki/tls/certs/ca-bundle.trust.crt\",
    MASTER_SSL_VERIFY_SERVER_CERT=1;
start slave;
"

After this script completes, you can then check the status of the slave - and confirm the values of Slave_IO_Running and Slave_SQL_Running with:

SHOW SLAVE STATUS \G
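A quick way to eyeball just those two fields is to grep them out of the status output. Sketched here against a hypothetical sample; on a live replica you would pipe mysql -e "SHOW SLAVE STATUS \G" into the same grep:

```shell
# Hypothetical sample of the two health fields from SHOW SLAVE STATUS \G.
status='             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes'
healthy=$(grep -cE 'Slave_(IO|SQL)_Running: Yes' <<< "$status")
echo "$healthy"   # 2 = both replication threads are running
```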

Keep this script handy: if replication breaks for whatever reason, you can run it again to resync from the master, and the existing databases on the slave will be dropped and recreated as the import happens. Keep in mind, though, that it won’t drop databases that no longer exist on the master.

NOTE: If you have a large or busy database, you might be better served using the mariabackup tool. It makes a local export of all the data, allowing you to transfer it out-of-band and therefore reduce the amount of time the master database is locked. MariaDB have a guide to using it. While it’s more steps, your locking time will be greatly reduced.

I also use the following on the replica in /etc/my.cnf.d/replication.cnf to configure the slave:

[mariadb]
slave_compressed_protocol = 1
log-basename = <slave hostname>

Change <slave hostname> to the hostname of the configured slave. slave_compressed_protocol enables compression for replication traffic, which is helpful over WAN connections, and setting log-basename ensures that replication won’t break if the slave host’s name changes at some point in the future.

Automating Secondary DNS servers

When running several name-servers, it can be difficult to manage which domains end up on them. There are multiple ways - copying config files, regularly fetching a config snippet from a web site, or having a deployment script - all of which will break at some point and leave you with a semi-functional name server.

Wouldn’t it be great if we could use DNS to configure DNS?

This is a great use for TXT records - or misuse - depending on how pure you want to be ;)

How good would it be if a secondary DNS server could discover which zones belong in its config file simply by looking up a TXT record?
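A TXT record can hold multiple quoted strings, so one record comfortably carries a whole list of zone names. Here’s a sketch of pulling the individual names back out of an answer, using sample data in the same format as the record shown later in this post:

```shell
# A TXT answer is a series of quoted strings; strip the quotes to get zone names.
answer='"domain1.com" "domain2.com" "domain3.com"'
zones=$(for z in $answer; do echo "${z//\"/}"; done)
echo "$zones"
```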

So here’s a script to do just that.

#!/usr/bin/perl
# vim: set ts=4:
use strict;
use warnings;
use Net::DNS;

my $outputfile = "/etc/named/secondary_domains.conf";
my $output = "";

my $header = '
masters dns-masters {
	1.2.3.4;
};
';

my $entry_template = '
zone "ZONE" IN {
	type		slave;
	file		"/var/named/slaves/FILE";
	masters		{ dns-masters; };
};
';

my $resolver = Net::DNS::Resolver->new;
$resolver->nameservers("8.8.8.8");
my $reply = $resolver->query("secondary_domains.example.com", "TXT");

if ($reply) {
	$output = $header;
	foreach my $rr ($reply->answer) {
		foreach my $txt ( $rr->txtdata ) {
			my $entry = $entry_template;
			$entry =~ s/ZONE/$txt/g;

			## Have a sane filename...
			$txt =~ s@/@_@g;
			$entry =~ s/FILE/$txt/g;
			$output = $output . $entry;
		}
	}

	## Write file to disk.
	open(my $out_fh, '>', $outputfile) or die $!;
	print $out_fh $output;
	close($out_fh);

	## Find which systemd unit we use...
	my $service = "named-chroot.service";
	if ( -f "/etc/systemd/system/multi-user.target.wants/named.service" ) {
		$service = "named.service";
	}
	system("systemctl reload $service");
} else {
	warn "query failed: ", $resolver->errorstring, "\n";
}

Then in your /etc/named.conf config file, include the generated /etc/named/secondary_domains.conf with the following at the bottom of the file.

include "/etc/named/secondary_domains.conf";

Get cron or a systemd timer to run the perl script once an hour or so, and you’ll be quickly adding / removing entire zones from your secondary DNS servers with ease.
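If you’d rather use a systemd timer than cron, a pair of units along these lines would do it (unit and script names are placeholders):

```
# /etc/systemd/system/secondary-dns.service
[Unit]
Description=Regenerate secondary DNS zone list

[Service]
Type=oneshot
ExecStart=/usr/local/bin/secondary_domains.pl

# /etc/systemd/system/secondary-dns.timer
[Unit]
Description=Hourly secondary DNS zone refresh

[Timer]
OnCalendar=hourly
RandomizedDelaySec=10m

[Install]
WantedBy=timers.target
```

Enable with systemctl enable --now secondary-dns.timer.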

Secondly, because this queries the 8.8.8.8 name-server for the TXT record, as long as one DNS server can respond with the correct entry, your secondary will be able to generate a new configuration file.

On your primary (normally a hidden master), you will add a TXT record to the zone file as follows:

secondary_domains.example.com.   1800 IN  TXT "domain1.com" "domain2.com" "domain3.com"

You can adjust your TTL, paths and other items to reflect your implementations. This is also simple enough that it will allow you to run a secondary DNS server on the free-tier cloud platforms like GCP.

What are the limitations of this approach? Once you get over 64KB worth of domain names (the maximum size of a DNS message over TCP), you’ll have to either split the TXT records and implement something like a "include:record_b.example.com" and loop over that as well for another 64KB worth of text, or loop over a counter like secondary-1.example.com, secondary-2.example.com until you get an NXDOMAIN reply.
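The counter variant is simple to sketch. Here lookup is stubbed with sample data so the loop logic is self-contained; on a real server it would be something like dig +short TXT "secondary-$1.example.com":

```shell
# Walk secondary-1, secondary-2, ... collecting TXT chunks until the lookup
# fails (NXDOMAIN). lookup() is a stub standing in for a real DNS query.
lookup() {
	case "$1" in
		1) echo '"domain1.com" "domain2.com"' ;;
		2) echo '"domain3.com"' ;;
		*) return 1 ;;  # simulate NXDOMAIN
	esac
}

zones=""
i=1
while chunk=$(lookup "$i"); do
	zones="$zones $chunk"
	i=$((i + 1))
done
echo "$zones"
```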

You could also change the domain name to be something that only exists on the hidden master server - and not on the wider internet and query the master directly to get the list of domains. This has the advantage that the list of domain names included can’t become public.

There are probably more variations that could further fine-tune this approach, but this is a good, functional start.