News items for tag homeserver - Koos van den Hout

2019-12-24 First tries with DNSSEC on subzones: no success 1 month ago
I tried adding subzones with DNSSEC by adding the DS record to the parent zone, but both tries gave errors from DNSViz, and different errors at that: in one case the signature on the DS record was seen as invalid, and in the other there was no signature at all. The errors are reproducible, even after waiting for caches to empty.

2019-12-19 Removing an RRTYPE for a DNS name causes an expired RRSIG for that record 1 month ago
I kept seeing warnings about an expired signature when running named-checkzone or dnssec-signzone and it took some searching before I found the reason.

Recently I removed the records with type SPF from my zones, since the recommended approach is to use TXT records with SPF data. The RRSIG records for the SPF records were left in the signed zonefile but not updated, so they expired and started to give warnings.
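In zonefile terms the change amounts to this; the policy shown is only an example, not my actual record:

```
; SPF data now lives in a TXT record; the separate SPF RRTYPE is deprecated (RFC 7208)
example.com.    3600    IN      TXT     "v=spf1 mx -all"
```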

The SPF records were for names that also had other data, which seems to be what triggers this. Removing a name completely (no RRTYPEs left for the name) removes all its signatures.
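A rough way to spot this situation can be sketched as an awk one-liner over the signed zonefile (the filename is a placeholder): it lists RRSIG records whose covered RRTYPE no longer exists for that owner name.

```shell
# list RRSIGs in a signed zonefile that cover an RRTYPE no longer present
# for that owner name (assumes one record per line, TTL and class present)
awk '$4 == "RRSIG" { sig[$1 " " $5] = 1 }
     $4 != "RRSIG" && $3 == "IN" { have[$1 " " $4] = 1 }
     END { for (k in sig) if (!(k in have)) print "stale RRSIG for", k }' db.example.com.signed
```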

The things in DNSSEC I haven't tested yet are a signed subzone, a ZSK rollover and a KSK rollover. Those will eventually happen too.

2019-12-14 Moved the first domain registration to TransIP 1 month ago
The machine moved so I had to do the whole update dance with the glue records again. Since the IPv6 glue records 'vanished' when I added DNSSEC to I decided to move to a different registrar where IPv6 glue records and DNSSEC are normal and don't require an extra support call.

Since I already have an account with TransIP for the Stack storage service, I just had to add (and pay for) domain services. The interesting bit is that TransIP says I have to pay again next year, while according to the registry the domain is currently registered until 11 August 2024.

Adding DNSSEC gave problems at first: the format TransIP expects is the public part of the key-signing key, which is a different format from the file generated by dnssec-signzone. After some tries and searching I found the right source and format. The error message was about the Key Tag, which was confusing as that is a number where there isn't much to go wrong.

2019-12-12 Adding the first TLSA records for secured services 1 month ago
Now I have DNSSEC running OK on my domains, I can start looking at security innovations that rely on DNSSEC.

The first one is DANE for the mailserver, in which a hash of the public key is published in a DNS record secured with DNSSEC, giving a separate path to verify the public key during the SMTP session.
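The TLSA record lives under a name that encodes port and protocol; a sketch with a hypothetical mailserver name (hash shortened):

```
; DANE TLSA for STARTTLS on tcp/25 of mail.example.com:
; usage 3 (DANE-EE), selector 1 (public key), matching type 2 (SHA-512)
_25._tcp.mail.example.com. IN TLSA 3 1 2 2B55764A99A47AEC...
```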

The public key of the mailserver is also signed by LetsEncrypt as described in Automating Let's Encrypt certificates further and Automating Let's Encrypt certificates with DNS-01 protocol so there are two completely independent paths to verify the identity of the mail server.

To find the public key of the mailserver for a given domain:
$ dig +short mx
$ dig +short tlsa
3 1 2 2B55764A99A47AEC5B66D8EB4E741F2646BF6352CABC9BE3F37D2F42 0BD7EF56B5BE3058E7B10964BA963777364443057E45599E07A82375 7A812F1A7014356A
I found the tlsa tool from the hash-slinger package by Paul Wouters to create these records. It can generate them either from a live connection (which has certain risks if that connection is intercepted) or from the public key file. Or via the web tool Generate TLSA Record by Shumon Huque.

TLSA records are tied to a specific TCP or UDP port. The next step will probably be to start adding records for other public TLS services such as https. There was a time when some people were convinced DANE was going to replace certificate authorities for https, but at this moment support is very limited. I have added TLSA records for https (tcp/443) for and for now and I'm testing with these. For now one of my favourite checkers isn't convinced.

This does increase the chances of things going wrong. The tlsa program can also verify records, so I use it to check the published TLSA records:
$ tlsa --verify -6 --starttls smtp --port 25
SUCCESS (Usage 3 [DANE-EE]): Certificate offered by the server matches the TLSA record (2001:980:14ca:1::23)
Although this certificate is a valid LetsEncrypt certificate, DNS-based Authentication of Named Entities (DANE) does not support usage 1 (check the certificate public key and verify the certificate chain to a known root) for SMTP with STARTTLS, so it has to be usage 3 (just check the certificate public key). The tlsa program does not check for this specifically, but the web checker at DANE TLSA Server checker found the issue, so I corrected it.

I use selector 1 to check just the public key, because the complete certificate changes with every LetsEncrypt renewal. My choice of mtype 2 (SHA-512) is simply a wish for a strong hashing algorithm.
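The same '3 1 2' payload can also be produced with plain openssl instead of hash-slinger; a sketch, with cert.pem as a placeholder for the certificate file:

```shell
# hash the SubjectPublicKeyInfo (selector 1) of a certificate with SHA-512
# (matching type 2); the result is the data field of a "3 1 2" TLSA record
openssl x509 -in cert.pem -noout -pubkey \
  | openssl pkey -pubin -outform DER \
  | openssl dgst -sha512 \
  | awk '{print toupper($NF)}'
```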

This also makes the link between service configuration and DNS contents a lot stronger. Maybe this needs secure automated updates.

2019-10-17 Tested incremental DNSSEC signing 3 months ago
I noticed some really unused records in one zone which is now DNSSEC signed. For example, there was still a record pointing at my Google+ page.

So I removed them and re-signed after increasing the serial number. Indeed, the records without updates kept their original signatures, and the records that were changed (such as the SOA, because of the serial number) got new signatures.

2019-10-16 The signatures for the first DNSSEC signed zone expired, and I signed the rest 3 months ago
Today I was reminded of the first zone I signed with DNSSEC and ran the check with DNSViz again. I saw a lot of error messages. Some searching found that I had let all the signatures expire (after the default time of 30 days).

Solution: re-sign the zone and have a careful look at when I need to re-sign the zones. Strictly that is just before the expiry time of the signature (default 30 days) minus the TTL of the record.

Obviously this process has to be automated. At first I decided to force new signatures after 21 days. But after some later testing I decided to go for more regular checks of the ages of the signatures, refreshing the signatures that are about to expire. This incremental approach is usually reserved for 'big' zones with lots of resolvers querying them, but I decided to implement it myself to avoid problems, and to learn more about DNSSEC.
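Checking the ages of the signatures can be sketched by pulling the expiration timestamp (field 9 of an RRSIG record) out of the signed zonefile; the filename is a placeholder:

```shell
# print the earliest RRSIG expiration (YYYYMMDDHHMMSS) in a signed zonefile;
# anything close to the current date means it is time to re-sign
awk '$4 == "RRSIG" { print $9 }' db.example.com.signed | sort -n | head -1
```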

The magic signing command is now:
    named-checkzone $* $^
    ./ $^
    dnssec-signzone -S -K /etc/bind/keys -g -a -r /dev/random -D -S -e +2592000 -i 604800 -j 86400 -o $* $^
    rndc reload $*
    touch $@
The expiry is set with -e at 30 days, the check interval with -i at 7 days, and the jitter with -j at 1 day.

Now there is a special part in the Makefile to be called from cron on a regular basis. It won't produce any output when there is nothing to update.
    @for zone in $(SIGNEDZONES); do if [ `find $${zone}-signedserial -mtime +7 -print` ]; then touch $${zone}-zone ; $(MAKE) --no-print-directory $${zone}-signedserial; fi ;done
The Make variable SIGNEDZONES is filled with the zone names of the zones that have to be kept DNSSEC signed. The file structure for each forward zone is as listed in First zone with valid DNSSEC signatures.

So now almost all my domains are DNSSEC signed. A learning experience and a good level of security.

2019-09-11 First zone with valid DNSSEC signatures 4 months ago
My previous test with DNSSEC zone signing showed a problem with entropy in virtual machines. Today I had time to reboot the home server running the virtual machines including the virtual machine with the nameserver, based on bind9.

Now I can create DNSSEC signatures for zonefiles at high speed (0.028 seconds) with enough entropy available. My first test is with a domain name used for redirecting to Camp Wireless: since that variant was mentioned somewhere, I had to generate the redirects to the right version.

The next step was to upload the DS records for the zone to my registrar and get them entered into the top level domain. This failed on the first attempt: the DS records have to be entered very carefully at the registrar.

I tested the result with DNSViz and found an error in the first try: I had updated the serial after signing the zone, so the SOA record wasn't signed correctly anymore.

I updated my zonefile Makefile to do the steps in the right order:
        named-checkzone $* $^
        ./ $^
        dnssec-signzone -S -K /etc/bind/keys -g -a -r /dev/random -D -S -o $* $^
        rndc reload $*
        touch $@
For each zone the original data is in one file and the DNSSEC signatures in another. And make will abort when one of the commands returns an error, so it will for example stop completely when I make a typo in the zonefile, which makes named-checkzone fail. The -D option makes dnssec-signzone write the signatures to a separate file to be used with $INCLUDE in the original zonefile. This does create a circular dependency: named-checkzone will fail when the -signedserial file isn't available on the first run, so the first run has to be done manually.
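Breaking that circular dependency on the very first run can be as simple as handing $INCLUDE an empty file to read (the filename here is hypothetical):

```shell
# first run only: create an empty signatures file so named-checkzone succeeds;
# the next make run overwrites it with real signatures
touch db.example.com-signedserial
```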

So now the zone is signed correctly. The next developments will be to find out how to monitor this extensively so I won't be surprised by problems and to redo the signing from time to time to make DNSSEC zone walking very hard.

And when I trust all of this I will implement it on other domain names that I manage.

2019-07-05 I tested the randomness setup 6 months ago
Doing some more reading on haveged made me decide to test the actual randomness of my setup with haveged and randomsound, which I created to fix the lack of entropy for DNSSEC signing operations. So I booted the same testing virtual machine which can tap from the host /dev/random and ran rngtest until it was time to shut down the laptop showing the output. The result:
$ rngtest < /dev/random 
rngtest 2-unofficial-mt.14
Copyright (c) 2004 by Henrique de Moraes Holschuh
This is free software; see the source for copying conditions.  There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

rngtest: starting FIPS tests...
^Crngtest: bits received from input: 4999640
rngtest: FIPS 140-2 successes: 249
rngtest: FIPS 140-2 failures: 0
rngtest: FIPS 140-2(2001-10-10) Monobit: 0
rngtest: FIPS 140-2(2001-10-10) Poker: 0
rngtest: FIPS 140-2(2001-10-10) Runs: 0
rngtest: FIPS 140-2(2001-10-10) Long run: 0
rngtest: FIPS 140-2(2001-10-10) Continuous run: 0
rngtest: input channel speed: (min=303.011; avg=543.701; max=5684.774)bits/s
rngtest: FIPS tests speed: (min=43.251; avg=64.587; max=84.771)Mibits/s
rngtest: Program run time: 9194254192 microseconds
I ratelimited the virtio-rng-pci driver from the host, so the test took a really long time. Given earlier tries with dnssec-signzone this is fast enough.

No need to buy a hardware random generator, although they are way cool and it would be an idea to have a source of correctness (NTP) next to a source of randomness.

Update: I ran rngtest on /dev/urandom and I had to ask for a really big number of blocks to get failures. The first test with 249 blocks gave the same result as above, just at a much higher bit rate. So now I know less about the correct randomness of my setup, but at least the test shows that I can safely run dnssec-signzone, which was the original idea.

2019-07-04 First tests with dnssec show a serious lack of entropy 6 months ago
I was looking at the options for implementing DNSSEC on the domains I have, and started doing this on a domain name that is just used for web redirects, so I won't break anything serious when I make an error. And I am looking at monitoring options at the same time.

Looking for usable documentation I found DNSSEC signatures in BIND named - which shows and explains a lot of the options for doing this with bind9, including full automation. I want to take steps I understand, so I will start with careful minimal automation on a domain name that I can 'break'.

Following that documentation I created a key-signing key (KSK) and a zone-signing key (ZSK). I used the /etc/bind/keys directory which is the standard location.

The first dnssec-signzone action took 54 minutes. After waiting for a bit I started wondering what was happening, and it turned out to be a problem with entropy: the signing uses a lot of data from /dev/random. I had the virtio-rng module loaded, but the host wasn't making randomness available to the guest operating system. The host server does run randomsound to gather extra entropy, since there is no hardware random number generator available.
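A quick way to see whether a machine is starved for entropy (meaningful on older kernels; since the Linux 5.18 random rework the estimate is effectively always 256):

```shell
# the kernel's current entropy estimate, in bits; persistently low values
# explain blocking reads from /dev/random
cat /proc/sys/kernel/random/entropy_avail
```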

Documentation on how to 'forward' randomness from the host to the guest virtual machine: Random number generator device - Domain XML format
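Per that documentation, the guest gains a virtio RNG device with a fragment along these lines in the domain XML (the rate element is an optional ratelimit; the values here are examples):

```xml
<rng model='virtio'>
  <!-- optional: limit the guest to 1024 bytes per 1000 ms -->
  <rate bytes='1024' period='1000'/>
  <backend model='random'>/dev/random</backend>
</rng>
```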

So I did some tests with a test virtual machine with a similar configuration. The results:
  • Just software kernel rng in the virtual machine: 54 minutes.
  • Offering virtio-rng randomness from the host from /dev/urandom running randomsound: less than 1 second.
  • Offering virtio-rng randomness from the host from /dev/random running randomsound: 11 minutes 10 seconds.
  • Offering virtio-rng randomness from the host from /dev/random running randomsound and haveged: less than 1 second.
Installing haveged which gathers entropy from hardware processes fixes the whole problem.

Now to implement the same settings for the virtual machine running the production nameserver and I'll be able to take the next step.

2019-06-19 Looking at the wrong side of a mirrored disk 7 months ago
Due to recent kernel updates I rebooted the home server and found only older kernels available at boot. Some searching later I found out it had booted from a different disk than the one on which the update manager was maintaining /boot.

The solution was to mirror the /boot partition by hand and change the EFI boot setup to try a boot from both disks, so the machine will still boot when one half of the mirror is completely unavailable. I did buy mirrored disks to have the machine available with one disk unavailable.

Changing the EFI boot setup with efibootmgr was somewhat complicated, but I got it all done. How to add a second disk I found via Partitioning EFI machine with two SSD disks in mirror - Unix & Linux Stack Exchange, and an explanation of the numbers in the efibootmgr -v output via "efibootmgr -v" output question.

The ideal solution would be to have /boot and /boot/efi on mirrored partitions without metadata (so the EFI loader can also read them as unmirrored partitions). According to what I read this is possible in Linux with device-mapper, but there is not a lot of shared experience with it.

2019-06-02 Trying to backup to a cloudservice again 7 months ago
After the migration to the new homeserver was finished I found out I had to run backups on a separate computer (see misconfigured backups), so the old idea of backing up to a cloud service is on my mind again. I've looked into this before in Backup to .. the cloud! and I still want to back up to a cloud-based service which has a WebDAV interface and is based on ownCloud. With some searching I came across How to synchronize your files with TransIP's STACK using the commandline.

I'd like the outgoing bandwidth to be limited so the VDSL uplink isn't completely filled with backup traffic. Installing owncloud-client-cmd still pulls in a lot of graphical dependencies, although it doesn't install the GUI of the ownCloud client. In owncloud-client-cmd I can't set bandwidth limits. I can set those in the graphical client, but a test shows that owncloud-client-cmd doesn't read .local/share/data/ownCloud/owncloud.cfg for the bandwidth settings.

At least with the VDSL uplink speed and the wondershaper active the responsiveness of other applications at home never suffered. Maybe specific rules for the IP addresses of the cloud service could ratelimit the uploads.

2019-05-06 Making checking SSL certificates before installing them a bit more robust 8 months ago
With all the automated updates of certificates as described in Enabling Server Name Indication (SNI) on my webserver and Automating Let's Encrypt certificates further, I wondered what would happen when some things got corrupted, most likely as a result of a full disk. A simple test showed that the checkcert utility would happily say two empty files are a match, because the sha256sum of two empty public keys is the same.

The solution: actually use the exit status from openssl. New version of checkcert:

#!/bin/sh
# check ssl private key $1 against the public key in pem encoded x509 certificate $2

SUMPRIVPUBKEY=`(openssl pkey -in $1 -pubout -outform pem || echo privkey) | sha256sum`
SUMCERTPUBKEY=`(openssl x509 -in $2 -noout -pubkey || echo pubkey) | sha256sum`

if [ "${SUMPRIVPUBKEY}" = "${SUMCERTPUBKEY}" ]; then
        exit 0
else
        exit 1
fi
And now:
koos@gosper:~$ /usr/local/bin/checkcert /dev/null /dev/null
unable to load key
139636148224064:error:0906D06C:PEM routines:PEM_read_bio:no start line:../crypto/pem/pem_lib.c:686:Expecting: ANY PRIVATE KEY
unable to load certificate
139678825668672:error:0906D06C:PEM routines:PEM_read_bio:no start line:../crypto/pem/pem_lib.c:686:Expecting: TRUSTED CERTIFICATE
koos@gosper:~$ echo $?
1
2019-05-04 Considering enabling Server Name Indication (SNI) on my webserver 8 months ago
While making a lot of my websites available via HTTPS I started wondering about enabling Server Name Indication (SNI), because the list of hostnames in the one certificate (the subjectAltName parameter) keeps growing and they aren't all related.

So on a test system with haproxy I created two separate private keys, two separate certificate signing requests, and requested two separate certificates: one for the variants of and one for most of the other names. The whole requesting procedure happened on the system where my automated renewal and deployment of LetsEncrypt certificates with dehydrated happens, so the request went fine. For the configuration of haproxy I followed HAProxy SNI, where 'terminating SSL on the haproxy with SNI' gets a short mention.

So I implemented the configuration as shown in that document and got greeted with an error:
haproxy[ALERT] 123/155523 (3435) : parsing [/etc/haproxy/haproxy.cfg:86] : 'bind :::443' unknown keyword '/etc/haproxy/ssl/webserver-idefix-main.pem'.
It turned out that the crt keyword has to be repeated for each certificate file.

This is why I like having a test environment for things like this. Making errors in the certificate configuration on the 'production' server will give visitors scary and/or incomprehensible errors.

So the right configuration for my test is now:
frontend https-in
    bind :::443 v4v6 ssl crt /etc/haproxy/ssl/webserver-campwireless.pem crt /etc/haproxy/ssl/webserver-idefix-main.pem
And testing shows the different certificates in use when I pass the -servername parameter to openssl s_client.
$ openssl s_client -connect -servername -showcerts -verify 3
Server certificate
issuer=/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
Verification: OK
$ openssl s_client -connect -servername -showcerts -verify 3
Server certificate
issuer=/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
Verification: OK
The certificates are quite separate. Generating the certificate signing requests with a separate private key for each request works fine.

So if I upgrade my certificate management to renew, transport, test and install multiple certificates for the main webserver, it will work.

2019-04-25 Accepting multiple passwords for IMAPS access 9 months ago
After upgrading to the new homeserver, my old setup to allow two passwords for IMAPS logins stopped working. I use a separate password for IMAPS access on those devices that insist on saving a password without asking.

I have the following PAM libraries:
ii  libpam-modules 1.1.8-3.6    amd64        Pluggable Authentication Modules
And I debugged the problem using the pamtester program, which makes this a lot easier than constantly changing the configuration and restarting the IMAP server.

The relevant configuration now is:
# PAM configuration file for Courier IMAP daemon

#@include common-auth
# here are the per-package modules (the "Primary" block)
auth    required        pam_succeed_if.so quiet user ingroup users
#auth   [success=1 default=ignore]      pam_unix.so nullok_secure
auth    sufficient      pam_unix.so nullok_secure
auth    sufficient      pam_userdb.so db=/etc/courier/extrausers crypt=crypt use_first_pass
# here's the fallback if no module succeeds
auth    requisite       pam_deny.so
# prime the stack with a positive return value if there isn't one already;
# this avoids us returning an error just because nothing sets a success code
# since the modules above will each just jump around
auth    required        pam_permit.so
# and here are more per-package modules (the "Additional" block)
# end of pam-auth-update config
@include common-account
@include common-password
@include common-session
And now both my unix login password and the extra password are accepted.

2019-01-30 Misconfigured backups 12 months ago
I have "always" been running amanda for backups on linux. Or rather, I can't find any indication of when I started doing that, several homeserver versions ago; it's just still running.

Or it was running, but first I had to tackle a hardware problem: all SCSI controllers I have are PCI, and the newest homeserver has no PCI slots. So I searched for a solution. The first solution was to try using the desktop system for the tapedrive, but the power supply in that system has no 4-lead Molex connectors, so I can't connect the tapedrive there.

For now I use an old 'test' system with some software upgrades to run amanda, shutting it down when all backups are done and flushed to tape. But amanda had a serious problem writing to tape. Some debugging showed this was caused by the variable blocksize I used on the previous systems, set with
# mt -f /dev/nst0 setblk 0
and I can't even find out why this seemed like a good idea years ago. But now amanda really wants to use 32768-byte blocks, and it filled a DDS-3 tape (12 GB without compression) with about 1.8 GB of data before reaching the end of the tape.

Why this default changed isn't clear to me, but I found a way to re-initialize the tapes so the backups fit again. Based on block size mismatch - Backup Central I created a script to do this. I did not get the error about the blocksize mentioned there, but I searched specifically for 'amanda 3.3.6 blocksize'.

#!/bin/sh
if [ "$1" = "" ]; then
        echo "Usage: $0 <tapename>"
        exit 1
fi

mt -f /dev/nst0 setblk 32768
mt -f /dev/nst0 compression 1
mt -f /dev/nst0 rewind
dd if=/dev/zero of=/dev/nst0 bs=32768 count=200
mt -f /dev/nst0 setblk 32768
mt -f /dev/nst0 compression 1
mt -f /dev/nst0 rewind
amlabel -f kzdoos $1
And now normal amounts of data fit on a tape again. I just have to initialize every tape before using it for the first time in this setup.

2019-01-02 Migration to new server finished 1 year ago
More than a year after I started migrating from homeserver greenblatt to the new homeserver conway the last migration is done and the old server is switched off. The new server is in a good position in the rack, and the old server is still taking up space in there too. It has taken a lot of time, I decided to stop some websites and other unused services in the process and my energy levels haven't always been that great. I have improved several things in the process, which also caused delays.

One thing hasn't changed (which I did expect to change): the power usage of the new server isn't lower! The UPS tells me the output load is about the same. OK, the new hardware has a lot more CPU power, a lot more memory and faster storage, but I expected the power use to go down a bit.

2019-01-01 Switching to 1-wire over USB and forwarding a USB device to a guest VM 1 year ago
The new hardware for the homeserver has no external serial ports, so I could not use the old serial / 1-wire interface that has been doing the home monitoring for years. But I had a spare USB DS2490 interface. So I plugged this into the server and wanted to forward the USB device to the guest VM that runs all the monitoring.

First I had to blacklist all the loaded drivers to have the device available to kvm as-is. In /etc/modprobe.d/local-config.conf:
blacklist w1_smem
blacklist ds2490
blacklist wire
Next step was to attach the device to the right VM. I followed the hints at How to auto-hotplug usb devices to libvirt VMs (Update 1) and edited the definition of the VM to add the host device:
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x04fa'/>
        <product id='0x2490'/>
      </source>
    </hostdev>
But that did not get the USB device attached to the running VM, and I did not feel like rebooting it. So I created an extra file with the above XML and did a
root@conway:~# virsh attach-device --live gosper /tmp/onewire.xml 
Device attached successfully
And then I had to do the same blacklisting as above in the virtual machine. After doing that I could detach and attach the device from the VM without touching it, simply with:
root@conway:~# virsh detach-device --live gosper /tmp/onewire.xml 
Device detached successfully

root@conway:~# virsh attach-device --live gosper /tmp/onewire.xml 
Device attached successfully
After that I had to set up udev rules to give the telemetry user enough access to the USB device:
SUBSYSTEMS=="usb", GOTO="usb_w1_start"
ATTRS{idVendor}=="04fa", ATTRS{idProduct}=="2490", GROUP="telemetry", MODE="0666"
And now it all works:
telemetry@gosper:~$ digitemp_DS2490 -a
DigiTemp v3.7.1 Copyright 1996-2015 by Brian C. Lane
GNU General Public License v2.0 -
Found DS2490 device #1 at 002/003
Jan 01 21:53:11 Sensor 10A8B16B0108005D C: 9.500000
Jan 01 21:53:12 Sensor 28627F560200002F C: 17.062500
Jan 01 21:53:14 Sensor 10BC428A010800F4 C: 19.562500
Jan 01 21:53:15 Sensor 1011756B010800F1 C: 11.937500
Jan 01 21:53:16 Sensor 10B59F6B01080016 C: 16.312500
Jan 01 21:53:17 Sensor 1073B06B010800AC C: 18.687500
Jan 01 21:53:18 Sensor 102B2E8A010800F0 C: 29.250000
Jan 01 21:53:20 Sensor 28EF71560200002D C: 16.687500
Working house temperatures again!

2018-12-04 Really ending a domain name and the web presence 1 year ago
On 25 December 2004 there was a special deal giving me the .info names and for free for the first year. Since that moment I kept the names registered and redirected all web traffic to the right version. So the deal worked from a 'selling domain names' perspective: Christmas is a bad moment to review the need for domain names, so the easy solution is to renew. My decision to stop with these names was made in January 2018.

Traffic to the .info versions is minimal. Given the cost of the domain registration I decided to stop, and devised an exit strategy which should result in a domain name that attracts no traffic and is not linked to my other web projects. On the next renewal date the domain will expire. I have done this before in a different context: when we ended the students' personal webspace at

The solution is to start returning HTTP status 410 Gone for search engines, while at the same time returning a somewhat user-friendly error page.

Relevant bit of apache 2.4 configuration:
<VirtualHost *:80>

    DocumentRoot /home/httpd/campwireless-expire/html

    <Directory "/home/httpd/campwireless-expire/html">
        Require all granted
    </Directory>

    RewriteEngine On
    RedirectMatch 410 ^/(?!gone.html|robots.txt)
    ErrorDocument 410 /gone.html
</VirtualHost>
The gone page is simple: it has an explanation for human visitors and a meta refresh tag to redirect the browser eventually. But to a search engine, the status 410 on almost any URL is a clear flag that the page is gone and should be flushed from the index.
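A minimal sketch of such a gone page (the redirect target is an example URL, not the real one):

```html
<!DOCTYPE html>
<html>
<head>
  <title>This website has been discontinued</title>
  <!-- humans get a few seconds to read the notice, then a redirect -->
  <meta http-equiv="refresh" content="10; url=https://www.example.org/">
</head>
<body>
  <p>This website has been discontinued. You will be taken to the current
  site in a few seconds.</p>
</body>
</html>
```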

2018-11-23 Automatic ls colours can be slow 1 year ago
I noticed certain commands taking a while to start, including a simple ls. At last I got annoyed enough to diagnose the whole situation. The problem is a combination of factors: symbolic links in the listed directory point to filesystems behind the automounter, one of those filesystems comes from a NAS with sleeping disks, and ls --color does a stat() on the target of each symbolic link to find the type of the target file so it can select a colour.

My solution: find the source of the alias and disable it.

2018-10-12 Serious slowness with rrdgraph from rrdtool 1 year ago
One of the things still needing migration is the NTP server stats, which obviously use rrdtool. Because I want to keep the history I migrated the datasets with:
/usr/local/rrdtool/bin/rrdtool dump \
| ssh newhost /usr/bin/rrdtool restore -f -
And then create a graph of the plloffset for example using:
/usr/bin/rrdtool graph /tmp/ \
--title " pll offset (last 24 hours)" --imginfo \
'<img src="tmpgraphs/%s" WIDTH="%lu" HEIGHT="%lu" alt="Graph">' \
--start -24hours --end now --vertical-label="Seconds" --color BACK#0000FF \
--color CANVAS#c0e5ff --color FONT#ffffff --color GRID#ffffff \
--color MGRID#ffffff --alt-autoscale --imgformat PNG --lazy \ \
CDEF:wipeout=offset,UN,INF,UNKN,IF CDEF:wipeoutn=wipeout,-1,* \
LINE1:offset#000000:"Offset\:" \
GPRINT:offset:LAST:"Current\:%.3lf%s" \
GPRINT:offset:MIN:"Min\:%.3lf%S" \
GPRINT:offset:MAX:"Max\:%.3lf%S" \
GPRINT:offset:AVERAGE:"Average\:%.3lf%S" \
AREA:wipeout#e0e0e0 AREA:wipeoutn#e0e0e0
But where this takes 0.026 seconds on the old server, it takes 3 minutes 47.46 seconds on the new server. No idea what is happening: strace shows nothing strange, and rrdtool uses one CPU at 100% all that time.
