News items for tag linux - Koos van den Hout

2019-08-21 Comparing yfktest and tlf for linux-based amateur radio contesting 3 days ago
Episode 295 of Linux in the Ham Shack is about the TLF Contest Logger. I wrote to Linux in the Ham Shack about my experiences with both programs. In 2017 I participated in the IARU-HF contest using yfktest and in 2019 I participated in the IARU-HF contest using TLF.
My opinion about both is clearly formed by my style of contesting. Phone contesting is rare for me, and I am a very casual contester. I operate in search and pounce mode, where I search for other stations calling CQ.

My experiences:

Both are textmode programs which try to mimic DOS-based contest programs. There is no dragging around of windows; you'll have to deal with how the makers decided to set up the screen. Also, on a graphical system, try to find the biggest and baddest monospace font to fill as much of your screen as possible with the contesting software.

The role of contest logging software is making it easier to log contacts in a contest. It does this by automating a lot of the tasks in a CW contest, by keeping the log and by showing the outgoing serial number (if needed). It's a plus when the contest logger can keep the live claimed score in the contest and when it can connect to a DX cluster and show possible contacts being spotted. Both packages can do the basic contesting and scorekeeping; tlf is the only one that supports DX clusters.

yfktest is written in Perl, tlf in C. For adding a new contest to yfktest you will soon have to do some programming in Perl to handle the score calculations. For a new contest in tlf you may have to do some C programming.

yfktest has no cluster support, but tlf does have it. This is a huge difference to me. With tlf I could open a cluster window showing me where new calls were spotted and on what frequencies recent contacts were, so I could hunt for interesting new calls and multipliers.

Specific to the IARU-HF contest and my use of the packages: yfktest supports the IARU-HF contest out of the box, so it gets the multipliers right. When I did the IARU-HF contest with tlf, I asked about it on the list and someone shared a configuration right at the beginning of the contest, so it mostly worked: it did not count the multipliers correctly, so I had no idea of the claimed score during the contest.

Both are open source and welcome any additions. Looking at the commit history tlf is somewhat more active recently.

If you want to really add a contest to either of them you'll probably have to start thinking about that months before the contest and take your time to debug your rules/scoring configuration if you want good scoring during the contest.

I will probably stick with tlf because of the cluster support.
Linux in the Ham Shack took my shallow dive a lot further and went into a deep dive with installing, configuring and running TLF. Awesome episode, I really enjoyed it!

Links to all the stuff: Show Notes #295: TLF Contest Logger Deep Dive - Linux in the Ham Shack
yfktest, a Linux-based ham radio contest logger; TLF, a Linux-based ham radio contest logger.

2019-08-13 Decompiling zonefiles 1 week ago
The authoritative nameserver on the homeserver 2017 is using bind9 version 9.10.3 (from Devuan packages). I wanted to look up something in a secondary zonefile and noticed it was a binary file.

Using 'file' to determine what to do next wasn't much help:
$ file secondary.domain-zone
secondary.domain-zone: data
But a search found an explanation at Reading a binary zone file from Bind - The Linux Page. With named-compilezone a zonefile can be 'uncompiled' to a readable file.
$ /usr/sbin/named-compilezone -f raw -F text -o /tmp/secondary.domain-zone.txt secondary.domain secondary.domain-zone
zone secondary.domain/IN: loaded serial 2018122523
dump zone to /tmp/secondary.domain-zone.txt...done
OK
$ file /tmp/secondary.domain-zone.txt
/tmp/secondary.domain-zone.txt: ASCII text
Which is a readable zonefile.
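Bind can also be told to keep secondary zones in text form so this conversion step isn't needed at all, at the cost of slightly slower zone loading. A sketch of the relevant zone statement, with the zone name, master address and file name as placeholders:
zone "secondary.domain" {
        type slave;
        masters { 192.0.2.1; };
        file "secondary.domain-zone";
        masterfile-format text;
};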

2019-07-05 I tested the randomness setup 1 month ago
Doing some more reading on haveged made me decide to test the actual randomness of my setup with haveged and randomsound, which I created to fix the lack of entropy for dnssec signing operations. So I booted the same testing virtual machine which can tap from the host /dev/random and ran rngtest until it was time to shut down the laptop that was showing the output. The result:
$ rngtest < /dev/random 
rngtest 2-unofficial-mt.14
Copyright (c) 2004 by Henrique de Moraes Holschuh
This is free software; see the source for copying conditions.  There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

rngtest: starting FIPS tests...
^Crngtest: bits received from input: 4999640
rngtest: FIPS 140-2 successes: 249
rngtest: FIPS 140-2 failures: 0
rngtest: FIPS 140-2(2001-10-10) Monobit: 0
rngtest: FIPS 140-2(2001-10-10) Poker: 0
rngtest: FIPS 140-2(2001-10-10) Runs: 0
rngtest: FIPS 140-2(2001-10-10) Long run: 0
rngtest: FIPS 140-2(2001-10-10) Continuous run: 0
rngtest: input channel speed: (min=303.011; avg=543.701; max=5684.774)bits/s
rngtest: FIPS tests speed: (min=43.251; avg=64.587; max=84.771)Mibits/s
rngtest: Program run time: 9194254192 microseconds
I ratelimited the virtio-rng-pci driver from the host, so the test took a really long time. Given earlier tries with dnssec-signzone this is fast enough.

No need to buy a hardware random generator, although they are way cool and it would be an idea to have a source of correctness (NTP) next to a source of randomness.

Update: I ran rngtest on /dev/urandom too, and I had to ask for a really big number of blocks to get failures. The first test with 249 blocks gave the same result as above, just at a much higher bit rate. So now I know less about the correct randomness of my setup, but at least the test shows that I can safely run dnssec-signzone, which was the original idea.
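For repeating such a test without waiting for a manual interrupt, rngtest can be told to stop after a fixed number of 20000-bit FIPS blocks; a sketch with an arbitrary block count:
$ rngtest -c 1000 < /dev/urandom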

2019-07-04 First tests with dnssec show a serious lack of entropy 1 month ago
I was looking at the options for implementing DNSSEC on the domains I have, and started doing this on a domain name that is just used for web redirects, so I won't break anything serious when I make an error. And I am looking at monitoring options at the same time.

Looking for usable documentation I found DNSSEC signatures in BIND named - sidn.nl which shows and explains a lot of the options for doing this with bind9, including full automation. I want to take steps I understand, so I will start with careful minimal automation on a domain name that I can 'break'.

Following that documentation I created a key-signing key (KSK) and a zone-signing key (ZSK). I used the /etc/bind/keys directory which is the standard location.
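The key generation from that documentation comes down to something like the following sketch; the domain name is a placeholder and the algorithm and key sizes are the commonly suggested ones, not necessarily exactly what I used. Note that dnssec-keygen itself also reads from /dev/random by default; its -r option can point it at another random device.
$ cd /etc/bind/keys
$ dnssec-keygen -a RSASHA256 -b 2048 -f KSK example.org
$ dnssec-keygen -a RSASHA256 -b 1024 example.org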

The first dnssec-signzone action took 54 minutes. After waiting for a bit I started wondering what was happening and it turned out to be a problem with entropy: the signing uses a lot of data from /dev/random. I have the virtio-rng module loaded but the host wasn't making randomness available to the guest operating system. The host server does run randomsound to get more entropy since there is no hardware random number generator available.

Documentation on how to 'forward' randomness from the host to the client virtual machine: Random number generator device - Domain XML format
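The relevant bit of the guest definition, as I understand that documentation, goes in the <devices> section and looks roughly like this; the <rate> element is optional and the values here are only an illustration:
    <rng model='virtio'>
      <rate bytes='1024' period='1000'/>
      <backend model='random'>/dev/random</backend>
    </rng>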

So I did some tests with a test virtual machine with a similar configuration. The results:
  • Just software kernel rng in the virtual machine: 54 minutes.
  • Offering virtio-rng randomness from the host from /dev/urandom running randomsound: less than 1 second.
  • Offering virtio-rng randomness from the host from /dev/random running randomsound: 11 minutes 10 seconds.
  • Offering virtio-rng randomness from the host from /dev/random running randomsound and haveged: less than 1 second.
Installing haveged, which gathers entropy from timing jitter in the hardware, fixes the whole problem.

Now to implement the same settings for the virtual machine running the production nameserver and I'll be able to take the next step.

2019-07-03 Unix printing isn't what it used to be 1 month ago
My wife bought a new inkjet printer because the previous one was failing. The new one is an HP Deskjet 2630, and it has wifi support. Out of the box it was playing access point on the busy 2.4 GHz band, making it even more crowded, so I asked her to disable the wifi. She used the printer nicely with the USB cable and asked me to look into putting it on the network so it can be in a different room and not in the way.

Today I had a look into that. I hoped it could be a wifi client. Yes it can. The first two explanations on how to set that up started with 'using the windows HP software'. The third one had 'press and hold the wifi button to connect using wps'.

So I enabled WPS on the wifi network, did the WPS mating and saw arpwatch note the new IPv4 address in use.

For a laugh I tried whether it has an IPP server running. It has. So adding it under Linux should not be completely impossible. A search for 'linux hp deskjet 2630' shows it needs the hplip package, which is already installed in my recent Ubuntu.

So I just opened the cups printer browser, saw the HP deskjet show up, selected that and printed a test page. Which came out correctly.
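For a machine without the GUI tools, the same thing can probably be done on the command line with the driverless 'everywhere' model in CUPS; a sketch, with a made-up queue name and printer address:
$ lpadmin -p deskjet2630 -E -v ipp://192.168.1.50/ipp/print -m everywhere
$ lpstat -p deskjet2630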

Typing this took longer than the actual steps I took, and searching websites with explanations took most of the time.

I'm still in the "what just happened?" stage, remembering long fights with printer drivers, network printing and losing everything at upgrades.

Update: Adding the printer in Windows 10 was harder, we needed to use the HP software to add it which tried to sell us "HP instant ink" service before allowing the printer to be used in Windows.

2019-06-19 Looking at the wrong side of a mirrored disk 2 months ago
Due to recent kernel updates I rebooted the home server and found only older kernels available. Some searching later I found out it had booted from a different disk than the one the update manager was maintaining /boot on.

The solution was to mirror the /boot partition by hand and change the EFI boot setup to try a boot from both disks, so the machine will still boot when one half of the mirror is completely unavailable. I did buy mirrored disks to have the machine available with one disk unavailable.

Changing the EFI boot setup with efibootmgr was somewhat complicated, but I got it all done. I found how to add a second disk via Partitioning EFI machine with two SSD disks in mirror - Unix & Linux Stack Exchange and how to interpret the numbers in the efibootmgr -v output via "efibootmgr -v" output question.
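For reference, creating the boot entry for the second disk comes down to something like this; the disk, partition number, label and loader path are only examples for a setup like mine, so check them against the existing entries in the efibootmgr -v output first:
# efibootmgr -c -d /dev/sdb -p 1 -L "linux (mirror disk 2)" -l '\EFI\debian\grubx64.efi'
# efibootmgr -v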

The ideal solution would be to have /boot and /boot/efi on mirrored partitions without metadata (so they are readable too from the efi loader as an unmirrored partition). According to what I read this is possible in Linux with devicemapper but there is not a lot of experience shared.

2019-06-02 Trying to backup to a cloudservice again 2 months ago
After the migration to the new homeserver was finished I found out I had to run backups on a separate computer (misconfigured backups), so the old idea of backups to a cloud service is on my mind again. I've looked into this before in Backup to .. the cloud! and I still want to back up to a cloud-based service which has a webdav interface and is based on owncloud. With some searching I came across How to synchronize your files with TransIP’s STACK using the commandline.

I'd like the outgoing bandwidth to be limited so the VDSL uplink isn't completely filled with the backup traffic. Installing owncloud-client-cmd still pulls in a lot of dependencies on graphical stuff, but doesn't install the GUI of the owncloud client. In owncloud-client-cmd I can't set the bandwidth limits, but I can set those in the graphical client. However, a test shows that owncloud-client-cmd doesn't read .local/share/data/ownCloud/owncloud.cfg for the bandwidth settings.

At least with the VDSL uplink speed and the wondershaper active the responsiveness of other applications at home never suffered. Maybe specific rules for the IP addresses of the cloud service could ratelimit the uploads.
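One way to do that would be a tc filter on the uplink interface that puts only traffic to the backup service in a slow class; a rough sketch, assuming interface eth0, a made-up service address and made-up speeds. This would have to be merged with the existing wondershaper setup rather than just added next to it:
tc qdisc add dev eth0 root handle 1: htb default 20
tc class add dev eth0 parent 1: classid 1:1 htb rate 50mbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 2mbit ceil 2mbit
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 48mbit ceil 50mbit
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dst 203.0.113.10/32 flowid 1:10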

2019-05-06 Making checking SSL certificates before installing them a bit more robust 3 months ago
With all the automated updates of certificates as described in Enabling Server Name Indication (SNI) on my webserver and Automating Let's Encrypt certificates further I wondered what would happen when some things got corrupted, most likely as a result of a full disk. A simple test showed that the checkcert utility would happily say two empty files are a match, because the sha256sum of two empty public keys is the same.

The solution: do something with the exit status from openssl. New version of checkcert:
#!/bin/sh

# check that ssl private key ($1) and pem encoded x509 certificate ($2)
# contain the same public key

# hash the extracted public keys; when openssl fails, fall back to two
# different marker strings so unreadable or empty files can never match
SUMPRIVPUBKEY=$( { openssl pkey -in "$1" -pubout -outform pem || echo privkey; } | sha256sum )
SUMCERTPUBKEY=$( { openssl x509 -in "$2" -noout -pubkey -outform pem || echo pubkey; } | sha256sum )

if [ "${SUMPRIVPUBKEY}" = "${SUMCERTPUBKEY}" ]; then
        exit 0
else
        exit 1
fi
And now:
koos@gosper:~$ /usr/local/bin/checkcert /dev/null /dev/null
unable to load key
139636148224064:error:0906D06C:PEM routines:PEM_read_bio:no start line:../crypto/pem/pem_lib.c:686:Expecting: ANY PRIVATE KEY
unable to load certificate
139678825668672:error:0906D06C:PEM routines:PEM_read_bio:no start line:../crypto/pem/pem_lib.c:686:Expecting: TRUSTED CERTIFICATE
koos@gosper:~$ echo $?
1

2019-05-04 Considering enabling Server Name Indication (SNI) on my webserver 3 months ago
While making a lot of my websites available via HTTPS I started wondering about enabling Server Name Indication (SNI), because the list of hostnames in the one certificate (subjectAltName parameter) keeps growing and they aren't all related.

So on a test system with haproxy I created two separate private keys, two separate certificate signing requests and requested two separate certificates: one for the variants of camp-wireless.org and one for most of the idefix.net names. The whole requesting procedure happened on the system where my automated renewal and deployment of LetsEncrypt certificates with dehydrated happens, so the request went fine. For the configuration of haproxy I was following HAProxy SNI, where 'terminating SSL on the haproxy with SNI' gets a short mention.

So I implemented the configuration as shown in that document and got greeted with an error:
haproxy[ALERT] 123/155523 (3435) : parsing [/etc/haproxy/haproxy.cfg:86] : 'bind :::443' unknown keyword '/etc/haproxy/ssl/webserver-idefix-main.pem'.
And found out that the crt keyword has to be repeated.

This is why I like having a test environment for things like this. Making errors in the certificate configuration on the 'production' server will give visitors scary and/or incomprehensible errors.

So the right configuration for my test is now:
frontend https-in
    bind :::443 v4v6 ssl crt /etc/haproxy/ssl/webserver-campwireless.pem crt /etc/haproxy/ssl/webserver-idefix-main.pem
Testing it with the -servername parameter for openssl s_client shows the different certificates in use.
$ openssl s_client -connect testrouter.idefix.net:443 -servername idefix.net -showcerts -verify 3
..
Server certificate
subject=/CN=idefix.net
issuer=/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
..
Verification: OK
$ openssl s_client -connect testrouter.idefix.net:443 -servername camp-wireless.org -showcerts -verify 3
..
Server certificate
subject=/CN=www.camp-wireless.org
issuer=/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
..
Verification: OK
The certificates are quite separate. Generating the certificate signing requests with a separate private key for each request works fine.

So if I upgrade my certificate management to renew, transport, test and install multiple certificates for the main webserver, it would work.
Read the rest of Considering enabling Server Name Indication (SNI) on my webserver

2019-04-25 Accepting multiple passwords for IMAPS access 4 months ago
After upgrading to the new homeserver my old setup to allow two passwords for IMAPS logins stopped working. I use a separate password for IMAPS access for those devices that insist on saving a password without asking.

I have the following PAM libraries:
ii  libpam-modules 1.1.8-3.6    amd64        Pluggable Authentication Modules
I debugged the problem using the pamtester program, which makes this a lot easier than constantly changing the configuration and restarting the imap server.
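A pamtester run exercises exactly the PAM stack the IMAP daemon uses; something along these lines, assuming the PAM service name is imap (matching the file in /etc/pam.d) and with my username as an example:
$ pamtester imap koos authenticate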

The relevant configuration now is:
# PAM configuration file for Courier IMAP daemon

#@include common-auth
# here are the per-package modules (the "Primary" block)
auth    required    pam_succeed_if.so quiet user ingroup users
#auth   [success=1 default=ignore]      pam_unix.so nullok_secure
auth    sufficient      pam_unix.so nullok_secure
auth    sufficient  pam_userdb.so db=/etc/courier/extrausers crypt=crypt use_first_pass
# here's the fallback if no module succeeds
auth    requisite                       pam_deny.so
# prime the stack with a positive return value if there isn't one already;
# this avoids us returning an error just because nothing sets a success code
# since the modules above will each just jump around
auth    required                        pam_permit.so
# and here are more per-package modules (the "Additional" block)
# end of pam-auth-update config
@include common-account
@include common-password
@include common-session
And now both my unix login password and the extra password are accepted.

2019-02-05 Starting tcpdump causes bluetooth drivers to be loaded .. on a virtual machine 6 months ago
I noticed something really weird in the kernel log of a virtual machine:
Feb  5 11:46:54 server kernel: [2936066.990621] Bluetooth: Core ver 2.22
Feb  5 11:46:54 server kernel: [2936067.005355] NET: Registered protocol family 31
Feb  5 11:46:54 server kernel: [2936067.005901] Bluetooth: HCI device and connection manager initialized
Feb  5 11:46:54 server kernel: [2936067.006404] Bluetooth: HCI socket layer initialized
Feb  5 11:46:54 server kernel: [2936067.006838] Bluetooth: L2CAP socket layer initialized
Feb  5 11:46:54 server kernel: [2936067.007280] Bluetooth: SCO socket layer initialized
Feb  5 11:46:54 server kernel: [2936067.009650] Netfilter messages via NETLINK v0.30.
Feb  5 11:46:54 server kernel: [2936067.056017] device eth0 entered promiscuous mode
The last two lines are the giveaway about what really happened: I started tcpdump to debug a problem, and libpcap enumerating the possible capture interfaces pulled in the bluetooth protocol family. But I did not expect (and do not need) bluetooth drivers on a virtual machine; it will never have access to a bluetooth dongle.

After setting up /etc/modprobe.d/local-config.conf with
blacklist bluetooth
tcpdump still works fine and no bluetooth drivers are loaded.

Update: Most recommendations are to disable the bluetooth network family:
alias net-pf-31 off

2019-01-30 Misconfigured backups 6 months ago
I have "always" been running amanda for backups on linux. Or rather, I can't find any indication when I started doing that several homeserver versions ago, it's just still running.

Or it was running, but first I had to tackle a hardware problem: all SCSI controllers I have are PCI and the newest homeserver has no PCI slots. So I searched for a solution. The first solution was to try using the desktop system for the tape drive, but the power supply in that system has no 4-lead Molex connectors so I can't connect the tape drive.

For now I use an old 'test' system with some software upgrades to run amanda, and I shut it down when all backups are done and flushed to tape. But amanda had a serious problem writing stuff to tape. With some debugging this turned out to be caused by the variable blocksize I used on the previous systems, set with
# mt -f /dev/nst0 setblk 0
and I can't even find out why this seemed like a good idea years ago. But now amanda really wants to use 32768 byte blocks, and it filled a DDS-3 tape (12 GB without compression) with about 1.8 GB of data before reaching the end of the tape.

Why this default has changed isn't clear to me, but I found a way to re-initialize the tapes so the backups fit again. Based on block size mismatch - backup central I created a script to do this. I did not get the error about the blocksize, but I searched specifically for 'amanda 3.3.6 blocksize'.
#!/bin/sh

# re-initialize a tape for amanda with a fixed 32768 byte blocksize

if [ "$1" = "" ]; then
        echo "Usage: $0 <tapename>"
        exit 1
fi

# set the new blocksize and overwrite the start of the tape so nothing
# written with the old variable blocksize is left
mt -f /dev/nst0 setblk 32768
mt -f /dev/nst0 compression 1
mt -f /dev/nst0 rewind
dd if=/dev/zero of=/dev/nst0 bs=32768 count=200
mt -f /dev/nst0 setblk 32768
mt -f /dev/nst0 compression 1
mt -f /dev/nst0 rewind
amlabel -f kzdoos "$1"
And now normal amounts of data fit on a tape again. I just have to initialize every tape before using it for the first time in this setup.
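To check which blocksize a drive is currently set to before amanda touches a tape, mt can report it:
# mt -f /dev/nst0 status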

2019-01-02 Migration to new server finished 7 months ago
More than a year after I started migrating from homeserver greenblatt to the new homeserver conway the last migration is done and the old server is switched off. The new server is in a good position in the rack, and the old server is still taking up space in there too. It has taken a lot of time, I decided to stop some websites and other unused services in the process and my energy levels haven't always been that great. I have improved several things in the process, which also caused delays.

One thing hasn't changed (which I did expect to change): the power usage of the new server isn't lower! The UPS tells me the output load is about the same. Ok, the new hardware has a lot more CPU power, a lot more memory and faster storage, but I expected the power use to go down a bit.

2019-01-01 Switching to 1-wire over USB and forwarding a USB device to a guest VM 7 months ago
The new hardware for the homeserver has no external serial ports, so I could not use the old serial / 1-wire interface that has been doing the home monitoring for years. But I had a spare USB DS2490 interface. So I plugged this into the server and wanted to forward the USB device to the guest VM that runs all the monitoring.

First I had to blacklist all the loaded drivers to have the device available to kvm as-is. In /etc/modprobe.d/local-config.conf:
blacklist w1_smem
blacklist ds2490
blacklist wire
The next step was to attach the device to the right VM. I followed the hints at How to auto-hotplug usb devices to libvirt VMs (Update 1) and edited the definition of the VM to get the host device like:
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x04fa'/>
        <product id='0x2490'/>
      </source>
    </hostdev>
But that did not get the usb device attached to the running VM and I did not feel like rebooting it. So I created an extra file with the above and did a
root@conway:~# virsh attach-device --live gosper /tmp/onewire.xml 
Device attached successfully
And then I had to do the same blacklisting as above in the virtual machine. After doing that I could detach and attach it from the VM without touching it, simply with:
root@conway:~# virsh detach-device --live gosper /tmp/onewire.xml 
Device detached successfully

root@conway:~# virsh attach-device --live gosper /tmp/onewire.xml 
Device attached successfully
After that I had to set up rules for the telemetry user to have enough access to the USB device:
#
SUBSYSTEMS=="usb", GOTO="usb_w1_start"
GOTO="usb_w1_end"
LABEL="usb_w1_start"
ATTRS{idVendor}=="04fa", ATTRS{idProduct}=="2490", GROUP="telemetry", MODE="0666"
LABEL="usb_w1_end"
And now it all works:
telemetry@gosper:~$ digitemp_DS2490 -a
DigiTemp v3.7.1 Copyright 1996-2015 by Brian C. Lane
GNU General Public License v2.0 - http://www.digitemp.com
Found DS2490 device #1 at 002/003
Jan 01 21:53:11 Sensor 10A8B16B0108005D C: 9.500000
Jan 01 21:53:12 Sensor 28627F560200002F C: 17.062500
Jan 01 21:53:14 Sensor 10BC428A010800F4 C: 19.562500
Jan 01 21:53:15 Sensor 1011756B010800F1 C: 11.937500
Jan 01 21:53:16 Sensor 10B59F6B01080016 C: 16.312500
Jan 01 21:53:17 Sensor 1073B06B010800AC C: 18.687500
Jan 01 21:53:18 Sensor 102B2E8A010800F0 C: 29.250000
Jan 01 21:53:20 Sensor 28EF71560200002D C: 16.687500
Working house temperatures again!

2018-12-30 New GcmWin for Linux 7 months ago
The author of GcmWin for Linux responded quickly to my report of being unable to install gcmwin after installing a new Linux version and made a new version available which does run fine on Ubuntu 18.04. Again my thanks to Roger Hedin SM3GSJ for making GcmWin available.

2018-12-30 First annoyance with systemd on thompson 7 months ago
On reinstalling thompson I was not sure whether to pick Ubuntu (with lots of package support for amateur radio) or Devuan (without systemd). I chose Ubuntu to keep access to lots of amateur radio packages, but as expected the first systemd problem already got me: names in the internal network with RFC1918 addresses weren't resolvable.

After some searching I found out systemd-resolved had decided the last nameserver advertised via IPv6 was the one to use. As I could not find a lot of information on how to influence that ordering, I decided to kick it all out and switch to normal resolving. Some searching found How to disable systemd-resolved in Ubuntu? - Ask Ubuntu, which has the right steps. Back to somewhat normal; the next step is to convince NetworkManager to use IPv6 resolving before IPv4.
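The steps from that answer come down to roughly the following; the name of the NetworkManager configuration snippet is my own choice:
# systemctl disable --now systemd-resolved.service
# printf '[main]\ndns=default\n' > /etc/NetworkManager/conf.d/no-systemd-resolved.conf
# rm /etc/resolv.conf
# systemctl restart NetworkManager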

2018-12-23 I upgraded the 'radio workstation' thompson 8 months ago
As mentioned in New 2 meter distance: 506 kilometers I was still running the old wsjt-x because a newer version requires a newer Linux environment. With a bit of time available in the Christmas holidays and more and more things depending on this upgrade, I ordered a new disk from Azerty so the reinstallation would be easier. The old Linux installation on the radio workstation was several Ubuntu versions old, it was still a 32-bit installation because of earlier hardware compatibility issues, and something in D-Bus communication gave lots of errors at bootup, so I expected another upgrade to give me an unavailable system.

The new disk came faster than expected, and I did an install with Xubuntu because I'm ok with the Xfce environment.

One problem is back: the system starts with the two monitors swapped and after the screensaver kicks in the monitors somehow end up in mirrored mode.

And Gcmwin for linux failed in the upgrade since it depends on older libraries. Already reported to the author.

Lots of software was upgraded. The most important one for amateur radio is CQRLOG, which showed the well-known MySQL problems until I used the version from the CQRLOG ppa. Everything now works fine and all the earlier confirmations of PSK contacts have been imported. And WSJT-X, the trigger that started this whole upgrade, has been upgraded using the WSJTX General Availability Release ppa.

2018-12-19 New 2 meter distance: 506 kilometers 8 months ago
Today I had a listen on the 2 meter band with FT8 from wsjt-x 1.9.1, which is currently the near-ancient version but I can't upgrade yet (wsjt-x 2.0.0 requires newer Qt libraries which require a newer linux environment).

But I decoded some signals, including a new callsign from Germany. It's always nice to work a new callsign so I answered it and the contact was made after a few tries. Only when I checked the gridsquare and the map did I see that DK1FG is a new 2 meter band distance record for me: 506 kilometers. Looking at that qrz page makes clear why that was possible: on that end 8 stacked 12 element antennas are available for 2 meter DX.

Update 2018-12-21: I just saw wsjt-x packages for other ubuntu versions are available in the WSJTX General Availability Release ppa but the 'oldest' Ubuntu version supported is Ubuntu 16.04.5 LTS 'Xenial'.

2018-11-28 Using mice adapted to my hands 8 months ago
The old rsi problem was acting up again, just like I had RSI in 1999.

One of the things I now did was add a left-side mouse on the linux desktop at home. I have used a left-side mouse for a number of years on a linux desktop and used the instructions from the xmodmap manpage:
       Many  pointers are designed such that the first button is pressed using
       the index finger of the right hand.  People who  are  left-handed  fre‐
       quently  find  that  it is more comfortable to reverse the button codes
       that get generated so that the primary  button  is  pressed  using  the
       index  finger  of  the  left  hand.   This  could be done on a 3 button
       pointer as follows:
       %  xmodmap -e "pointer = 3 2 1"
But I now have two USB mice, one with a forward/backward button and a clearly right-handed design and one simple one on the left. And it is possible to selectively swap mouse buttons on only one input device with xinput.

The list of all inputs:
koos@thompson:~$ xinput list
⎡ Virtual core pointer                          id=2    [master pointer  (3)]
⎜   ↳ Virtual core XTEST pointer                id=4    [slave  pointer  (2)]
⎜   ↳ Logitech USB-PS/2 Optical Mouse           id=9    [slave  pointer  (2)]
⎜   ↳ Logitech Optical USB Mouse                id=10   [slave  pointer  (2)]
⎣ Virtual core keyboard                         id=3    [master keyboard (2)]
    ↳ Virtual core XTEST keyboard               id=5    [slave  keyboard (3)]
    ↳ Power Button                              id=6    [slave  keyboard (3)]
    ↳ Power Button                              id=7    [slave  keyboard (3)]
    ↳ Burr-Brown from TI               USB Audio CODEC  id=8    [slave  keyboard (3)]
    ↳ VIA Technologies Inc. USB Audio Device    id=11   [slave  keyboard (3)]
    ↳ daskeyboard                               id=12   [slave  keyboard (3)]
    ↳ daskeyboard                               id=13   [slave  keyboard (3)]
    ↳ Dell WMI hotkeys                          id=14   [slave  keyboard (3)]
Setting the button order happens with xinput set-button-map which needs an ID. Solution in .xsession:
xinput set-button-map $(xinput list --id-only "Logitech Optical USB Mouse") 3 2 1

Oh, and in that other operating system I use (Windows) one of the problems is the user can't set mouse button order per device. And technical specifications of left-handed mice do not list whether the buttons are swapped in hardware.

2018-11-23 Automatic ls colours can be slow 9 months ago
I noticed certain commands taking a while to start, including a simple ls. At last I got annoyed enough to diagnose the whole situation, and the problem turned out to be a combination of things: symbolic links in the listed directory pointing to filesystems behind the automounter, one of those mounted filesystems coming from a NAS with a sleeping disk, and ls --color doing a stat() on the target of each symbolic link to find the type of the target file so it can select a colour.

My solution: find the source of the alias and disable it.
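If tracking down where the alias is set isn't worth the effort, overriding it also works; for example:
$ unalias ls                  # drop the distribution's 'ls --color=auto' alias for this shell
$ alias ls='ls --color=never' # or keep an alias but turn the colouring off
A single \ls also bypasses the alias for one invocation.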
