News items for tag linux - Koos van den Hout

2023-09-13 I bought an RTL-SDR blog v4 dongle, and it's not working in Linux .. yet
A few weeks ago I saw 'buzz' all around about the RTL-SDR v4 dongle coming out: RTL-SDR Blog V4 Dongle Initial Release! and lots of people reporting clicking "buy now". I did the same, without even having a good reason to buy one. It is the third RTL-SDR dongle in the house, but the first one from RTL-SDR.COM. RTL-SDR dongles allow reception of radio signals across a wide range of frequencies, with all the signal processing done in the computer.

I ordered it through AliExpress, making sure I got the right version by buying from the RTLSDRBlog Store on AliExpress.

It arrived earlier, but I can't get it to work with the Linux SDR software stack I use, even on the newest laptop, which has:
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name             Version        Architecture Description
ii  gqrx-sdr         2.15.8-1build1 amd64        Software defined radio receiver
ii  gr-osmosdr       0.2.3-5build2  amd64        Gnuradio blocks from the OsmoSDR project
ii  librtlsdr0:amd64 0.6.0-4        amd64        Software defined radio receiver for Realtek RTL2832U (library)
The dongle is recognized, but there is just noise, no signal to decode, even when I try strong broadcast stations. The previous RTL-SDR dongle receives the same stations fine, so it's an amplification or tuning problem.

Checking the web finds librtlsdr/librtlsdr: Software to turn the RTL2832U into an SDR - GitHub, which has a recent commit: add rtl-sdr blog v4 support · librtlsdr/librtlsdr@fe22586 · GitHub, which sounds like exactly what I need. So it's not working.. yet.
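The packaged librtlsdr0 0.6.0 predates that commit, so building the library from source looks like the way to try it. A sketch of the usual build steps for this project (not verified against the v4 dongle here):

```shell
# Build librtlsdr from the git master that includes rtl-sdr blog v4 support
# (sketch; assumes git, cmake, build-essential and libusb-1.0-0-dev installed)
git clone https://github.com/librtlsdr/librtlsdr.git
cd librtlsdr
mkdir build && cd build
cmake .. -DINSTALL_UDEV_RULES=ON
make
sudo make install
sudo ldconfig   # make the freshly installed library visible
```

The packaged gqrx/gr-osmosdr would still have to pick up the new library, so removing or diverting the distribution's librtlsdr0 may also be needed.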

2023-08-29 Re-enabling grafana deb updates.. again
I did a manual apt update and saw an error message again for the grafana packages, which confirmed a posting I saw about grafana having to issue a new GPG key again.
root@gosper:~# apt update
Get:1 stable InRelease [5,984 B]
Err:1 stable InRelease
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 963FA27710458545
Hit:2 beowulf InRelease
Hit:3 beowulf-security InRelease
Hit:4 beowulf-updates InRelease
Reading package lists... Done
Building dependency tree
Reading state information... Done
All packages are up to date.
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: https://packages.gra stable InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 963FA27710458545
W: Failed to fetch  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 963FA27710458545
W: Some index files failed to download. They have been ignored, or old ones used instead.
And again the solution was to update the GPG key for grafana packages, as mentioned in Grafana security update: GPG signing key rotation | Grafana Labs.

I followed the same steps as in the update of the grafana signing key in June 2023 to get things working again.
root@gosper:~# cd /etc/apt/trusted.gpg.d/
root@gosper:/etc/apt/trusted.gpg.d# wget -q -O -  | gpg --dearmor > grafana.gpg
I also filed a bug with cron-apt because it hides the 'problem with this repository' error from me. Logged as #778 - cron-apt does not report repositories with GPG problems.

Found via James Tucker: "In case you missed it, grafana…"

2023-08-13 Going down the rabbit hole of DJ mixing
1980s style Retro wave image with 'Playing DJ with Mixxx open source' I had a heavy case of 'Oh Shiny!' this weekend. Recently I've been viewing and listening to some DJ mixes on YouTube, most of them with music from the 1980s, which I appreciate a lot.

Seeing those DJs mix live in those videos made me wonder 'how do they do it'. In one or more of these mixes I really noticed that a transition must have happened between one well-known song and another, but I wasn't aware of how and when it happened. The DJ was so good at mixing the two records together that I couldn't hear the point where it happened. Seeking through the video I saw that other viewers had been wondering the same: there was a clear peak in viewing time at the transitions. It was also clear from the look on the DJ's face that he was happy with what he accomplished with that transition!

In the 1980s the DJ had an audio mixer and two turntables, almost always the Technics SL1200 with pitch control and fast start/stop. Nowadays this can all be done in software, mixing 2 or 4 tracks from a music collection on hard disk, with effects, equalizer and speed control. The modern DJ has a laptop!

I soon found out there is open source DJ mixing software that supports Linux! Mixxx - Free DJ Mixing Software App is open source and multiplatform. It is available as an Ubuntu package, so I gave it a spin (pun intended). Having only one audio device is 'supported', but it took me some trying to find a setup where I could work 'split', with the master mix in one ear and the headphone mix in the other. So I loaded some music and tried to make it into a bit of a DJ mix. I'm not very good at it, but I enjoyed trying.

Mixxx really prefers JACK audio since it likes having a lot of audio channels. I tried installing JACK audio in Linux but couldn't quickly get it to do what I wanted. Mixxx also supports the ALSA drivers, and I managed to set it up to route the main audio to a USB audio device and the headphone audio to the internal headphone jack. But I had nothing connected to the USB audio device, and I didn't want to annoy my family with the noises of trying to make a good cutover from one song to the next. Mixxx has a 'Split' option to play the master output to one ear of the headphones and the headphone output to the other, which is good for practicing.

Control of all the mixing functions in Mixxx can be done with mouse and keyboard, but the good part is it also supports all kinds of hardware DJ controllers. And some of them aren't too expensive... and available on the second hand market for an even better price.

2023-08-07 Trying to understand bonnie++ output
In preparation for a migration at work I wanted to do actual benchmarking of Linux filesystem performance. I think I used bonnie in the last century, so I gave bonnie++ a spin for this.

I have little idea of what 'good' or 'bad' numbers are from bonnie++. I could only compare a "local" filesystem with an NFS filesystem. I use local in quotes because this was in a virtual machine, so it's SSD storage in raid-1, with LVM on top of it, with a logical volume assigned to a KVM-based virtual machine, which uses the virtio disk driver for an ext4 filesystem.
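For reference, this is roughly the kind of invocation used (directory, size and user are examples; bonnie++ wants a test size of at least twice RAM so the page cache doesn't skew the results, hence 32G here):

```shell
# Benchmark the filesystem mounted under /srv/benchmark (example path),
# 32 GB of sequential data and 16*1024 small files, as a non-root user
bonnie++ -d /srv/benchmark -s 32g -n 16 -u koos
```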

The numbers for the "local" filesystem:
Version  1.98       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Name:Size etc        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
gosper          32G  809k  98  440m  38  215m  22 1590k  99  410m  30  4639 135
Latency             25688us     317ms     143ms    9332us   39208us    2089us
Version  1.98       ------Sequential Create------ --------Random Create--------
gosper              -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
Latency               488us     684us     762us     236us      87us     262us
And for NFS, a Synology NAS with spinning disks in raid-5:
Version  1.98       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Name:Size etc        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
gosper          32G 1054k  98 78.7m   7 68.4m  13 1483k  99  109m  10 432.2  12
Latency             11138us     408ms   13261ms   16434us     212ms     274ms
Version  1.98       ------Sequential Create------ --------Random Create--------
gosper              -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 16384  10 16384  16 16384  15 16384   9 16384  18 16384  15
Latency             69194us   53194us   98927us   69144us    1240us   94317us
Now I am somewhat confused: per-character sequential output to NFS is even slightly faster than to the "local" filesystem, although block output is much slower.

Update 2023-08-08

At work I got different but comparable numbers for iSCSI-attached storage versus VMware storage (and the layers in between). Those numbers helped make decisions about the storage.

2023-06-24 Time to replace half of a mirrored disk (again)
Error messages like this make me fix things fast:
Jun 24 13:42:59 conway kernel: [6925745.388604] sd 0:0:0:0: [sda] tag#6 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_TIMEOUT
Jun 24 13:42:59 conway kernel: [6925745.389388] sd 0:0:0:0: [sda] tag#6 CDB: Synchronize Cache(10) 35 00 00 00 00 00 00 00 00 00
Jun 24 13:42:59 conway kernel: [6925745.390157] print_req_error: I/O error, dev sda, sector 616464
Jun 24 13:42:59 conway kernel: [6925745.390923] md: super_written gets error=10
Jun 24 13:42:59 conway kernel: [6925745.391705] md/raid1:md127: Disk failure on sda3, disabling device.
Jun 24 13:42:59 conway kernel: [6925745.391705] md/raid1:md127: Operation continuing on 1 devices.
Jun 24 13:42:59 conway mdadm[2559]: Fail event detected on md device /dev/md127, component device /dev/sda3
The part that makes me go 'hmmm' is that this was another Kingston A400 SSD, just like the one that failed in December 2021, for which I ordered a replacement from a different brand. Since that disk failed under warranty it was replaced with another Kingston A400, which I still had available in its packaging. That one is now in use and the failed SSD has been removed. I wonder how long this replacement disk will keep working.

I did all the bits to replace the disk and recreate the software raid mirror. This went fine, and all my earlier work to make sure the system can boot from either disk of the mirror paid off.
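'All the bits' boil down to the usual mdadm replacement dance; a sketch with assumed device names (failed member /dev/sda3, surviving disk /dev/sdb):

```shell
# Remove the failed member from the raid1 array (device names assumed)
mdadm /dev/md127 --remove /dev/sda3
# ...swap the physical SSD, then copy the partition table over:
sfdisk -d /dev/sdb | sfdisk /dev/sda
# Add the new partition back; the rebuild starts automatically
mdadm /dev/md127 --add /dev/sda3
cat /proc/mdstat                 # check resync progress
grub-install /dev/sda            # make the new disk bootable too
```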

2023-06-14 Looking at web caching options
Somewhere on IRC the phrase "don't host your website on a wet newspaper" is sometimes used when a URL that gets a bit of serious traffic starts responding really slowly or giving errors.

So I looked at my own webservers at home and what would happen if one of the sites got hit with the Slashdot Effect. As I don't like guessing I played with ab - Apache HTTP server benchmarking tool to get some idea of what happens under load and/or highly concurrent access.

Especially highly concurrent access turns out to be an issue, because there are only so many database connections available for the webservers. The load average does go up, but the main problem is clients getting a database connection error.
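Reproducing this with ab is simple: pick a concurrency above the database connection limit (URL and numbers below are examples, not my real setup):

```shell
# 1000 requests total, 50 concurrent, against a dynamic page
ab -n 1000 -c 50 https://www.example.org/cgi-bin/page.cgi
```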

I started looking at caching options to allow the dynamic pages to be cached for short periods. This would give high amounts of traffic the advantages of a cached version without losing the advantages of dynamic pages.

By now this has cost me more time and energy than surviving a high amount of valid traffic would ever save. And to be honest, the chances of a DDoS attack on my site because someone didn't like something I wrote are higher than the chances of a lot of people suddenly liking something I wrote.

This was all tested with the test and development servers, so actual production traffic was never affected by the tests.

Apache built-in memory cache with memcached

I first tried the Apache module socache_module with socache_memcache_module as backend. This did not cache the dynamic pages, just .css and other static files, which originate from disk cache or ssd storage anyway. All kinds of fiddling with the caching headers did not make this work. With debugging enabled all I could see was that the dynamic pages coming from cgid or modperl were not candidates for caching.
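For reference, the attempted setup was along these lines (directives from mod_cache_socache and mod_socache_memcache; host, port and size are example values):

```apache
# Cache responses in a memcached-backed shared object cache
CacheSocache memcache:localhost:11211
CacheSocacheMaxSize 102400
CacheEnable socache /
```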

I could have used memcached from the web applications directly, but that would mean I would have to rewrite every script to handle caching. I was hoping to add the caching in a layer between the outside world and the web applications, so I can just detour the traffic via a caching proxy when needed.

Haproxy cache

Between the outside world and the webservers is a haproxy installation anyway, so I looked at that option. But the haproxy cache will not cache pages that have a Vary: header, and even after removing that header in Apache the next problem was that the Content-Length: HTTP header has to be set in the answer from the webserver. With my current setup that header is missing in dynamic pages.

Varnish cache

Using varnish cache means I really have to 'detour' web traffic through another application before it goes on to the final webserver. This turned out to be the working combination. But it caused confusion, as Varnish adds to the X-Forwarded-For header, while I had an entire setup based on this header being added by haproxy and listing the correct external IP address as seen by haproxy. It took a few tries and some reading to find the right incantation to mangle the X-Forwarded-For header back to the right state in the outgoing request to the backend server. The varnish cache runs on the same virtual machine as the test haproxy, so the rule was to delete , ::1 from the header.
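The incantation ended up being a small VCL snippet along these lines (a sketch; it assumes, as described above, that Varnish appended ", ::1" for its localhost client and that nothing else matches that pattern):

```vcl
sub vcl_backend_fetch {
    # Varnish appended ", ::1" (localhost, where haproxy connects from)
    # to X-Forwarded-For; strip it again so the backend only sees the
    # external address that haproxy recorded.
    set bereq.http.X-Forwarded-For =
        regsub(bereq.http.X-Forwarded-For, ", ::1$", "");
}
```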

Tuning haproxy to avoid overloading a backend

In looking at things and testing I also found out that haproxy has a maxconn parameter for backend servers, setting the maximum number of open connections to the backend. By changing this number to something lower than the maximum number of database connections, the site starts to respond slowly under a high number of concurrent requests, but it keeps working and doesn't give database errors.
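In haproxy configuration terms that is one keyword on the server line; a sketch with assumed names and numbers (here pretending the database allows 50 connections):

```
backend webservers
    # Allow fewer concurrent backend connections than the database
    # allows; excess requests queue in haproxy instead of erroring out.
    server web1 192.0.2.10:80 maxconn 40
```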

2023-06-05 Re-enabling grafana deb updates
I noticed grafana hadn't updated in a while. Normally cron-apt does the prefetching of updates and notifies me when new updates are available, so I can make sure updating doesn't break running stuff or I can resolve it quickly.

But cron-apt kept an error message from apt update away from me, which I only saw when running it by hand:
root@gosper:~# apt update
Get:1 stable InRelease [5,983 B]
Err:1 stable InRelease
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 9E439B102CF3C0C6
Get:2 beowulf InRelease [33.5 kB]
Get:3 beowulf-security InRelease [26.1 kB]
Get:4 beowulf-updates InRelease [26.1 kB]
Fetched 85.7 kB in 3s (28.4 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
All packages are up to date.
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: stable InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 9E439B102CF3C0C6
W: Failed to fetch  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 9E439B102CF3C0C6
W: Some index files failed to download. They have been ignored, or old ones used instead.
I partly followed the instructions in Problem with debian repository key - Grafana / Installation - Grafana Labs Community Forums to get things going again. I used /etc/apt/trusted.gpg.d because this is the standard directory, it is already available, and the remark about Ubuntu suggests this is the only supported directory for GPG keys.
root@gosper:~# cd /etc/apt/trusted.gpg.d/
root@gosper:/etc/apt/trusted.gpg.d# wget -q -O -  | gpg --dearmor > grafana.gpg
By putting the grafana.gpg in this directory it gets detected and used automatically. No need for a pointer in /etc/apt/sources.list.d/grafana.list.

Now apt update doesn't complain, so I will be notified of new grafana versions available.

2023-05-10 Repetitive SSH attempts are still on
I noticed in 2016 that putting services like ssh on a different port does not change much in the attacks, and the last few days I noticed this is as true as ever.

I use fail2ban for sshd and other services that are prone to brute-force attempts. I've been using influxdb and grafana to visualize measurements and I use telegraf to gather a lot of system data.

I recently enabled gathering fail2ban statistics and it's interesting to see that the number of blocked addresses is very similar for the sshd on port 22 and the sshd on port 2022. It's not exactly the same number, and interestingly not the same attackers, but the numbers are within 5%. And yes, the numbers are high enough to make the output of fail2ban-client status sshd several screenfuls of IP addresses.
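Several screenfuls are easier to count than to read; a pipeline sketch (the sample line stands in for real fail2ban-client output, whose exact format may differ per version):

```shell
# Count the banned IPv4 addresses instead of scrolling through them.
# The sample variable stands in for `fail2ban-client status sshd` output.
sample='   |- Banned IP list: 192.0.2.1 192.0.2.2 198.51.100.7'
count=$(printf '%s\n' "$sample" | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' | wc -l)
echo "$count"
```

On a live system the first line would be `fail2ban-client status sshd` piped straight into the grep.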

2023-04-28 Fixing settings/drivers for Digitus Gigabit Ethernet adapter USB-C
I recently bought a Digitus Gigabit Ethernet adapter USB-C, mainly because my work laptop has no wired ethernet connection which I really want sometimes.

As I don't like having Windows-only hardware I did check before ordering that it can also be used with Linux. It contains a Realtek r8152 chip so I searched and found Fixing performance issues with Realtek RTL8156B 2.5GbE USB dongle in Ubuntu - CNX Software which mentions that loading the listed udev rules makes Linux select the right driver and improves performance.

And indeed the 'wrong' driver was chosen initially. I fetched r8152/50-usb-realtek-net.rules at master · bb-qq/r8152 · GitHub like:
root@moore:~# cd /etc/udev/rules.d/
root@moore:/etc/udev/rules.d# wget
root@moore:/etc/udev/rules.d# cd
root@moore:~# udevadm control --reload-rules
root@moore:~# udevadm trigger
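The entries in that rules file are of this shape (an illustrative reconstruction, not a verbatim copy; the real file covers several vendor/product IDs):

```
# Switch the adapter to USB configuration 1 so the kernel r8152 driver
# binds instead of the generic cdc_ncm driver (illustrative IDs)
ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="0bda", ATTR{idProduct}=="8156", ATTR{bConfigurationValue}!="1", ATTR{bConfigurationValue}="1"
```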
And now things are as I wish, the right driver is loaded:
  Device-3: Realtek USB 10/100/1G/2.5G LAN type: USB driver: r8152
  IF: enx3c49deadbeef state: down mac: 3c:49:de:ad:be:ef

2023-04-14 Teaching courier-imapd-ssl to use up-to-date encryption
Encrypt all the things meme A discussion on IRC about how hard it is to set TLS options in some programs made me recall I still wanted courier-imap-ssl to use the right SSL settings (only TLS 1.2 and 1.3, and no weak algorithms). This has bothered me for a while but I couldn't find the right answers. Most documentation assumes courier-imap-ssl is compiled with OpenSSL; in Debian/Ubuntu/Devuan it is compiled with GnuTLS.

Searching this time found me Bug #1808649 “TLS_CIPHER_LIST and TLS_PROTOCOL Ignored” : Bugs : courier package : Ubuntu, which points at debian-server-tools/mail/courier-check at master · szepeviktor/debian-server-tools · GitHub, which lists the right parameter: TLS_PRIORITY. That page has usable answers for up to TLS v1.2; with some reading of the output of gnutls-cli --list I could imagine the TLS v1.3 settings.

So with a minor adjustment to the given example to allow for TLS v1.3 I set this in /etc/courier/imapd-ssl:
# GnuTLS setting only
# Set TLS protocol priority settings (GnuTLS only)
# This setting is also used to select the available ciphers.
# The actual list of available ciphers depend on the options GnuTLS was
# compiled against. The possible ciphers are:
# AES256, 3DES, AES128, ARC128, ARC40, RC2, DES, NULL
# Also, the following aliases:
# HIGH -- all ciphers that use more than a 128 bit key size
# MEDIUM -- all ciphers that use a 128 bit key size
# LOW -- all ciphers that use fewer than a 128 bit key size, the NULL cipher
#        is not included
# ALL -- all ciphers except the NULL cipher
# See GnuTLS documentation, gnutls_priority_init(3) for additional
# documentation.
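The quoted comments end where the actual setting starts; the priority line itself isn't shown above. A GnuTLS priority string that allows only TLS 1.2 and 1.3 with strong ciphers could look like this (an illustration of the syntax, not necessarily the exact value used):

```
# GnuTLS priority: drop all protocol versions, re-enable TLS 1.2/1.3 only
TLS_PRIORITY="SECURE256:+SECURE128:-VERS-ALL:+VERS-TLS1.2:+VERS-TLS1.3"
```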

And now things are good! All green in sslscan:
  SSL/TLS Protocols:
SSLv2     disabled
SSLv3     disabled
TLSv1.0   disabled
TLSv1.1   disabled
TLSv1.2   enabled
TLSv1.3   enabled

  TLS Fallback SCSV:
Server supports TLS Fallback SCSV

  TLS renegotiation:
Session renegotiation not supported

  TLS Compression:
Compression disabled

TLSv1.3 not vulnerable to heartbleed
TLSv1.2 not vulnerable to heartbleed

  Supported Server Cipher(s):
Preferred TLSv1.3  128 bits  TLS_AES_128_GCM_SHA256        Curve P-256 DHE 256
Accepted  TLSv1.3  256 bits  TLS_AES_256_GCM_SHA384        Curve P-256 DHE 256
Accepted  TLSv1.3  256 bits  TLS_CHACHA20_POLY1305_SHA256  Curve P-256 DHE 256
Preferred TLSv1.2  256 bits  ECDHE-ECDSA-AES256-GCM-SHA384 Curve P-256 DHE 256
Accepted  TLSv1.2  256 bits  ECDHE-ECDSA-CHACHA20-POLY1305 Curve P-256 DHE 256
Accepted  TLSv1.2  128 bits  ECDHE-ECDSA-AES128-GCM-SHA256 Curve P-256 DHE 256
Accepted  TLSv1.2  256 bits  ECDHE-ECDSA-AES256-SHA384     Curve P-256 DHE 256
Accepted  TLSv1.2  128 bits  ECDHE-ECDSA-AES128-SHA256     Curve P-256 DHE 256

  Server Key Exchange Group(s):
TLSv1.3  128 bits  secp256r1 (NIST P-256)
TLSv1.3  192 bits  secp384r1 (NIST P-384)
TLSv1.3  260 bits  secp521r1 (NIST P-521)
TLSv1.2  128 bits  secp256r1 (NIST P-256)
TLSv1.2  192 bits  secp384r1 (NIST P-384)
TLSv1.2  260 bits  secp521r1 (NIST P-521)

  SSL Certificate:
Signature Algorithm: sha256WithRSAEncryption
ECC Curve Name:      secp384r1
ECC Key Strength:    192
