News items for tag computersarebetterat - Koos van den Hout

2019-05-06 Making checking SSL certificates before installing them a bit more robust
With all the automated updates of certificates as described in Enabling Server Name Indication (SNI) on my webserver and Automating Let's Encrypt certificates further I wondered what would happen when some things got corrupted, most likely as a result of a full disk. A simple test showed that the checkcert utility would happily say two empty files are a match, because the sha256sum of two empty public keys is the same.

Solution: do something with the errorlevel from openssl. New version of checkcert:

#!/bin/sh
# check ssl private key ($1) against the public key in the
# pem encoded x509 certificate ($2)

SUMPRIVPUBKEY=$( (openssl pkey -in "$1" -pubout -outform pem || echo privkey) | sha256sum )
SUMCERTPUBKEY=$( (openssl x509 -in "$2" -noout -pubkey || echo certpubkey) | sha256sum )

if [ "${SUMPRIVPUBKEY}" = "${SUMCERTPUBKEY}" ]; then
        exit 0
else
        exit 1
fi
And now:
koos@gosper:~$ /usr/local/bin/checkcert /dev/null /dev/null
unable to load key
139636148224064:error:0906D06C:PEM routines:PEM_read_bio:no start line:../crypto/pem/pem_lib.c:686:Expecting: ANY PRIVATE KEY
unable to load certificate
139678825668672:error:0906D06C:PEM routines:PEM_read_bio:no start line:../crypto/pem/pem_lib.c:686:Expecting: TRUSTED CERTIFICATE
koos@gosper:~$ echo $?
1

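The same comparison can be exercised end to end with a freshly generated throwaway key and certificate (a sketch: the /tmp paths and the demo subject name are made up, and the openssl command line tool is assumed present):

```shell
# generate a throwaway private key with a matching self-signed certificate
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
    -out /tmp/demo.pem -subj "/CN=demo.example" -days 1 2>/dev/null
# compare the public key hashes the way checkcert does
SUM1=$( (openssl pkey -in /tmp/demo.key -pubout -outform pem || echo privkey) | sha256sum )
SUM2=$( (openssl x509 -in /tmp/demo.pem -noout -pubkey || echo certpubkey) | sha256sum )
[ "$SUM1" = "$SUM2" ] && echo "key and certificate match"
```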
2018-07-27 Automating Let's Encrypt certificates with DNS-01 protocol
After thoroughly automating Let's Encrypt certificate renewal and installation I wanted to get the same level of automation for systems that do not expose an http service to the outside world. That means the DNS-01 challenge within the ACME protocol has to be used.

I found out dehydrated Let's Encrypt certificate management supports DNS-01 and I found a sample on how to do this with bind9 at Example hook script using Dynamic DNS update utility for dns-01 challenge which looks like it can do the job.

It took me a few failed tries to find out for which name it will request the TXT record to make me prove that I have control over the right bit of DNS. My first assumption about that name turned out wrong. So the bind9 config in /etc/bind/named.conf.local has:
zone "" {
        type master;
        file "/var/cache/bind/";
        masterfile-format text;
        allow-update { key "acmekey-turing"; };
        allow-query { any; };
        allow-transfer {
                // secondaries not shown here
        };
};
And in the zone there is just one delegation:
_acme-challenge.turing  IN      NS      ns2
I created and used a dnskey with something like:
# dnssec-keygen -r /dev/random -a hmac-sha512 -b 128 -n HOST acmekey-turing
This gives 2 files, both with the right secret:
# ls Kacmekey-turing.+157+53887.*
Kacmekey-turing.+157+53887.key  Kacmekey-turing.+157+53887.private
# cat Kacmekey-turing.+157+53887.key
acmekey-turing. IN KEY 512 3 157 c2V0ZWMgYXN0cm9ub215
and configured it in /etc/bind/named.conf.options:
key "acmekey-turing" {
        algorithm hmac-md5;
        secret "c2V0ZWMgYXN0cm9ub215";
};
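The dehydrated hook then only has to push the challenge into that delegated zone with a dynamic update. A sketch of the nsupdate input such a hook builds (the domain, token value and server are example values; the linked example hook script is the complete version):

```shell
# values normally passed to the deploy_challenge hook by dehydrated
DOMAIN="turing.example.net"
TOKEN_VALUE="example-acme-challenge-token"
# batch of update commands, to be fed to:
#   nsupdate -k Kacmekey-turing.+157+53887.private
NSUPDATE_BATCH="server localhost
update add _acme-challenge.${DOMAIN}. 300 IN TXT \"${TOKEN_VALUE}\"
send"
echo "$NSUPDATE_BATCH"
```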
And now I can request a certificate for that name and use it to generate sendmail certificates. And the net result:
        (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256          
SMTP between systems with TLS working and good certificates.

2018-07-08 Automating Let's Encrypt certificates further
Over two years ago I started using Let's Encrypt certificates. Recently I wanted to automate this a step further and found dehydrated, which helps a lot in automating certificate renewal with minimal hassle.

First thing I fixed was http-based verification. The webserver has been set up to make all .well-known/acme-challenge directories end up in one place on the filesystem and it turns out this works great with dehydrated.

I created a separate user for dehydrated, gave that user write permissions for the /home/httpd/html/.well-known/acme-challenge directory. It also needs write access to /etc/dehydrated for its own state. I changed /etc/dehydrated/config with:
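The changed lines are not quoted here, but the relevant dehydrated option is WELLKNOWN, so the change was probably along these lines (a sketch, reusing the directory mentioned above):

```shell
# /etc/dehydrated/config
WELLKNOWN="/home/httpd/html/.well-known/acme-challenge"
```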
Now it was possible to request certificates based on a .csr file. I used this to get a new certificate for the home webserver, and it turned out to be easier than the previous setup based on letsencrypt-nosudo.

2016-12-23 Getting video to play just right with vlc
I wanted to project a video file with a black screen before and after, with no visible controls on the screen where the video plays, with manual control of when the video starts and with the video starting on the second monitor.

The 'why' is simple: I want to use a video projector which has no option to turn the screen black itself and I want the smoothest video playback possible with no visible controls.

The how was a bit more work, but vlc has enough command line options. I could not find a count online so I made an estimated count myself:
$ vlc -H --advanced | grep -ce '--'
VLC media player 2.2.2 Weatherwax (revision 2.2.2-0-g6259d80)
1525
This counts 1525 command line options. So I had to find the right options. Not too much of a problem either:
vlc --image-duration -1 --no-qt-fs-controller --qt-fullscreen-screennumber 1 --no-video-title-show --qt-notification 0 -f --disable-screensaver Downloads/black.png Downloads/VID_20161210_104822.mp4 Downloads/black.png
This lets me use the vlc controls in the systray, starts playing fullscreen on the right screen, plays the static black image until I select 'next', leaves out all the indicators and ends with the other static black image.

The only thing left is that vlc has to select the right audio output device too. It turns out vlc plays audio via the alsa emulation in pulseaudio, and I need to change that preference via the pavucontrol program.

2014-04-27 (#)
I had a look at creating a simpler QSL card which I could print with my own printer. I still want 4 cards per page. The earlier qsl card designs are nice and an inspiration for when I get around to having cards printed. But I want a few things different, like a mention of my amateur radio website on the card, and space for notes about contacts. And when I use my own printer and heavy enough paper I want to print 4 cards per A4 page.

Having 4 of the same card on one page meant using LaTeX and a \newcommand, so I define the card once and use it four times, all of them on the same printed page. I found A QSL card backside made in LaTeX - DJ1YFK's Ham Radio Stuff which has a nice QSL card design in LaTeX which I could use with some adjustments. This LaTeX file defines the page size as 14cm*9cm landscape, the official size of a QSL card. I first tried changing this to an a4 page with 4 14cm*9cm \fbox commands in it, but this didn't give me the right result. I now create 4 pages of 14cm*9cm and create an A4 page from this with:
$ pstops -pa4 "4:0L@1.0(30cm,0)+1L@1.0(30cm,14.85cm)+2L@1.0(40cm,0)+3L@1.0(40cm,14.85cm)"
Which has about the right result: 4 cards on one page. No frames around the cards yet.
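The define-once, use-four-times idea can be sketched like this (hypothetical macro name and placeholder contents; the real card is the adjusted DJ1YFK design):

```latex
\documentclass{article}
\usepackage[paperwidth=14cm,paperheight=9cm,margin=5mm]{geometry}
\pagestyle{empty}
% define the card once ...
\newcommand{\qslcard}{%
  \noindent
  \begin{minipage}[t][8cm][t]{\textwidth}
    \textbf{CALLSIGN} \hfill amateur radio website here\par
    \vfill
    Remarks: \hrulefill
  \end{minipage}\newpage}
\begin{document}
% ... and emit four 14cm x 9cm pages, combined onto one A4 page by pstops
\qslcard \qslcard \qslcard \qslcard
\end{document}
```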

I use the coloured Veron logo, but it prints fine in grayscale on my black and white printer.

2014-04-14 (#)
When documenting something I have to look up the full path of some file and have it ready for cut-and-paste. I found out the easy way to canonicalize a filename:
koos@greenblatt:~$ readlink -f ../../etc/radvd.conf
/etc/radvd.conf

2014-04-06 (#)
The fact that I can't get status information such as the linespeed from the Fritz!Box in a way I can use in scripts annoys me, especially since the linespeed changed tonight (to 22381 down 1402 up). I'd like to at least have access to those statistics for my pretty graphs again. I did find Universal Plug and Play How to get Status-Information from the FRITZ!Box which uses the Perl Net::UPnP::ControlPoint module. The downside is this module wants to discover upnp devices by itself via multicast. So I need to set up a specific route for the multicast address from the server. It does discover the Fritz!Box, but thinks it has no further information:
$ ./get_upnp_info.mcast .
Device = FRITZ!Box Fon WLAN 7360
No possible actions.
Digging a bit into the code reveals the problem is probably in the XML parsing bit. Changing the xml parser to search in namespace urn:dslforum-org:service-1-0 gives a tiny bit more:
$ ./get_upnp_info.mcast .
Device = FRITZ!Box Fon WLAN 7360
urn:any-com:serviceId:l2tpv31::GetInfo:ServerInstanceId = 0000001F8BF6F4502F99CFB2F71DC374ECD623A957E08803247CDC9AD3856FF4DDA943C535C22E937DE07643AB2A6BBFEC45DED2FBF0E95AC5C2B3B28699F07
urn:any-com:serviceId:l2tpv31::GetInfo:ServerIP =
urn:any-com:serviceId:l2tpv31::GetInfo:RemoteEndIds =
But no DSL upstream and downstream yet.

2013-12-30 The wonderful world of week number standards
The wonderful thing about standards:
$ date "+%u %w %U %V %W"
1 1 52 01 52
And the explanations:

%u day of week (1..7); 1 is Monday

%w day of week (0..6); 0 is Sunday

%U week number of year, with Sunday as first day of week (00..53)

%V ISO week number, with Monday as first day of week (01..53)

%W week number of year, with Monday as first day of week (00..53)
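With GNU date's -d option the same format string works for any date, for example the first day in the list below:

```shell
# day of week plus the three week numbers for 31 December 1990
date -d 1990-12-31 "+%u %w %U %V %W"
# prints: 1 1 52 01 53 (a Monday: Sunday-based week 52, ISO week 01, Monday-based week 53)
```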

And it's easy to find days with 3 different week numbers:
31 dec 1990 is 52 01 53
03 jan 1993 is 01 53 00
02 jan 1994 is 01 52 00
01 jan 1995 is 01 52 00
30 dec 1996 is 52 01 53
31 dec 1996 is 52 01 53
03 jan 1999 is 01 53 00
02 jan 2000 is 01 52 00
02 jan 2005 is 01 53 00
01 jan 2006 is 01 52 00
31 dec 2007 is 52 01 53
03 jan 2010 is 01 53 00
02 jan 2011 is 01 52 00
01 jan 2012 is 01 52 00
03 jan 2016 is 01 53 00
01 jan 2017 is 01 52 00
31 dec 2018 is 52 01 53
03 jan 2021 is 01 53 00
02 jan 2022 is 01 52 00
01 jan 2023 is 01 52 00
30 dec 2024 is 52 01 53
31 dec 2024 is 52 01 53
03 jan 2027 is 01 53 00
02 jan 2028 is 01 52 00
31 dec 2029 is 52 01 53
Calendaring software, including that from a software developer quite known for not following standards, has converged on the ISO week number.

2013-11-05 (#)
I heard today about Windows Server 2012 R2 "desired state configuration" which made me think a bit of puppet. The general idea is to get systems configured to a desired state with whichever changes are needed to get to that state. See Desired State Configuration in Windows Server 2012 R2 PowerShell - YouTube for a presentation.

But when I see a bit of configuration sample in the above video it makes me think a lot of puppet:
Configuration FourthCoffeeWebsite
{
    Node ("WebServer1","WebServer2")
    {
        # Install the IIS role
        WindowsFeature IIS
        {
            Ensure  = "Present"
            Name    = "Web-Server"
        }
        # Install the ASP .NET 4.5 role
        WindowsFeature AspNet45
        {
            Ensure  = "Present"
            Name    = "Web-Asp-Net45"
        }
    }
}
Funny how system administration in the Windows and Linux/Unix world is converging. Just like Windows PowerShell makes me think of scripting languages and the unix commandline.

2013-08-09 (#)
I've been working on managing Linux systems with puppet for a while. Until now puppet was a tool to manage part of the configuration with still work to be done on each host. But the last two weeks I worked on a (test) webserver completely configured from puppet. With a complete separation of configuration (from puppet), input data (web content), output data (logging) and installed applications it is possible to reduce a webserver to a puppet recipe and an amount of storage. This means adding new webservers to a cluster or rebuilding systems in the cluster is easy. As a test I 'broke' the webserver (wiped the disk), reinstalled basic CentOS (nothing configured) and let puppet deliver a running webserver again, all within 15 minutes.

The new bit for me was using puppet templates to write centos ifcfg-ethX files and apache virtualhost configurations. Apache virtualhosts get a number of parameters (the hostname, aliases, directory index settings, needing php, needing ssl). I started with different templates for 'real' virtualhosts and 'special' virtualhosts like a host which gives a 410 Gone error on all urls but I noticed the templates were still mostly the same so now the type of virtualhost is also set using variables and one template has conditional parts depending on the type of virtualhost.
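Such a template with conditional parts can be sketched like this (all variable names are made up for the sketch, not taken from my actual puppet code):

```erb
<VirtualHost *:80>
    ServerName <%= @vhostname %>
<% if @vhosttype == "gone" -%>
    # this type of virtualhost answers 410 Gone on all urls
    Redirect gone /
<% else -%>
    DocumentRoot /home/httpd/<%= @vhostname %>/html
    DirectoryIndex <%= @indexfiles %>
<% end -%>
</VirtualHost>
```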

This does mean I'm learning bits of Ruby, Yet Another Scripting Language (for me).

In general, using puppet makes it very easy to install/remove packages, add scripts, schedule tasks, configure the monitoring setup (zabbix) and do other 'checklist' items to each system in a consistent way. Which in my opinion improves security and general quality.

2013-07-29 (#)
I use amanda for backups, all scheduled automatically, including automatically waking up and shutting down systems for backups but I also want the effort to put in the right tape minimal. To eliminate waiting for the previous tape to rewind and eject I tried an extra check which ejects the tape when it's not the 'correct' tape.
$ amcheck -t kzdoos > /dev/null || mt -f /dev/nst0 eject 2>/dev/null
The amcheck command will give an errorlevel on the wrong tape, but also on no tape at all, so I need to ignore the errors from mt. The above commandline now has a place in the crontab for the account the backups are run on.

2013-04-29 (#)
We like our Linux kernels chatty during boot, seeing stuff in the startup messages like
serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
serial8250: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
00:06: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A
00:07: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A
is perfectly fine with us. Defaults with several linux distributions are going the other way. For CentOS we already disable the plymouth splash screen, but to disable more eyecandy and get real kernel messages the commandline options rhgb and quiet need to be removed from the kernel commandline in the grub config. Option rhgb enables 'red hat graphic boot' and option quiet disables most kernel messages.

Via How do I set the default kernel parameters in CentOS for all existing and future kernels? - Server Fault I found the right way. The next step was to turn this into a puppet recipe so this is done automatically:
class serverpackages::fixgrubconfig {
        exec { "Clean grub default options":
                path => "/sbin:/bin",
                onlyif => 'egrep -c \'(rhgb|quiet)\' /boot/grub/grub.conf',
                command => '/usr/local/sbin/normalizegrubconfig',
                require => File["normalizegrubconfig"];
        }
        file { "normalizegrubconfig":
                path => '/usr/local/sbin/normalizegrubconfig',
                ensure => present,
                owner => 'root',
                group => 'root',
                mode => 0700,
                content => '#!/bin/sh
# reset grub config for all kernels
for KERNEL in /boot/vmlinuz-* ; do
        grubby --update-kernel="$KERNEL" --remove-args="rhgb quiet"
done
';
        }
}
Problem solved, yet another thing puppet adds to the baseline configuration. The upside of using grubby to manage this is that 'creating correct grub config files' is built into grubby.

2013-03-26 (#)
Interesting clash between the bind 9.8.2 package for CentOS 6.4 and puppet: When puppet updates /etc/named.conf it's not visible in the chroot setup for named. The named startup script uses bind mounts to make configuration files visible within the chroot environment.
root@geodns01:~# mount | grep named.conf
/etc/named.conf on /var/named/chroot/etc/named.conf type none (rw,bind)
root@geodns01:~# md5sum /etc/named.conf /var/named/chroot/etc/named.conf
d028cfee6cf1a1f77993da7c769273ad  /etc/named.conf
82d1717bb34db23804f67ad855e090ea  /var/named/chroot/etc/named.conf
I first thought this was some form of caching, but a suggestion was that it is the way the files are replaced by puppet: if puppet creates a new file and then renames the new one over the old one, the file will have a different inode after that action. I tested for this:
root@geodns01:~# mkdir test
root@geodns01:~# touch file.conf
root@geodns01:~# touch /root/test/file.conf
root@geodns01:~# mount --bind file.conf /root/test/file.conf
root@geodns01:~# ls -il /root/file.conf /root/test/file.conf
652873 -rw-r--r-- 1 root root 0 Mar 26 19:20 /root/file.conf
652873 -rw-r--r-- 1 root root 0 Mar 26 19:20 /root/test/file.conf
root@geodns01:~# vim --cmd 'set backup' file.conf
root@geodns01:~# ls -li file.conf* test/file.conf
652876 -rw-r--r-- 1 root root 7 Mar 26 19:25 file.conf
652873 -rw-r--r-- 1 root root 0 Mar 26 19:20 file.conf~
652873 -rw-r--r-- 1 root root 0 Mar 26 19:20 test/file.conf
This confirms that replace-by-rename will clash with bind mounts being actually inode based. The workaround isn't that hard: the startup script for named explicitly tests for an existing non-zero-byte /var/named/chroot/etc/named.conf and will skip the mount --bind part in that case. Time to teach puppet about this: puppet now manages both /etc/named.conf and /var/named/chroot/etc/named.conf.
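In puppet that comes down to managing the file in both places; a minimal sketch (module and service names are assumptions, not my actual manifest):

```puppet
file { '/etc/named.conf':
        ensure => present,
        source => 'puppet:///modules/bind/named.conf',
        notify => Service['named'],
}
# manage the chroot copy as well, since the bind mount keeps
# pointing at the old inode after a replace-by-rename
file { '/var/named/chroot/etc/named.conf':
        ensure => present,
        source => 'puppet:///modules/bind/named.conf',
        notify => Service['named'],
}
```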

2013-01-02 (#)
GNU date can display different times than the current time, with the -d or --date parameter:
  -d, --date=STRING         display time described by STRING, not `now'
But with a bit of experimenting I found out I can use this for calculating times in other timezones too:
koos@greenblatt:~$ date --date=20:00\ UTC
Wed Jan  2 21:00:00 CET 2013
koos@greenblatt:~$ date --date=20:00\ US/Central
date: invalid date `20:00 US/Central'
koos@greenblatt:~$ date --date=20:00\ CST
Thu Jan  3 03:00:00 CET 2013
So you can use the short timezone name, but not the long version.
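GNU date does accept the long zone name when it is given as a TZ= prefix inside the date string (documented GNU date behaviour; output is shown in the local timezone):

```shell
# 20:00 US/Central on 2013-01-02, converted to the local timezone
date --date='TZ="US/Central" 2013-01-02 20:00'
```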

Or, for day calculations:
koos@greenblatt:~$ date --date="396 days"
Fri Feb 28 15:36:15 CET 2014

2012-12-18 (#)
I updated the zabbix ssl certificate test script to be able to use starttls services and made some other changes (tests work better with the value in days left). The current version, which can also check smtp tls and returns the certificate time left in days, makes for easier checks:
#!/usr/bin/perl -w

# monitor the number of days left on the SSL certificate on a publicly
# reachable service
# usage in zabbix, create an item in a template
# - Type: External check
# - Key:  ssl-expiry-left.monitor[443]
#   change this for other services and use ssl-expiry-left.monitor[587,"-smtp"]
#   for smtp+tls. Yes, you will need to set up a separate item (/template)
#   for each ssl port combination
# - Type of information: Numeric (unsigned)
# - Data type: Decimal
# - Units: Days
# - Update interval (in sec): 43200
# - Application: SSL+service
# possible trigger values:
# 0: certificate already expired or invalid or not retrievable
# you can add tests for less than 30 or 60 days left

use strict;
use Date::Parse;

my $protoadd="";

if (defined $ARGV[2]){
        if ($ARGV[2] eq "-smtp"){
                $protoadd="-starttls smtp ";
        }
}

my ($host,$port) = ($ARGV[0],$ARGV[1]);

open(SSLINFO,"echo \"\" | openssl s_client -connect $host:$port $protoadd 2>/dev/null | openssl x509 -enddate -noout 2>/dev/null |");

my $expiry=0;

while (<SSLINFO>){
        if (/^notAfter=(.+)\n$/){
                $expiry=str2time($1);
        }
}
close(SSLINFO);

if ($expiry>0){
        my $daysleft=($expiry-time())/86400;
        printf "%d\n",$daysleft>=0?$daysleft:0;
} else {
        print "0\n";
}
Assumes a reasonably recent openssl.

And yes, this script has helped me avoid embarrassment over expired certificates.

2012-08-17 (#)
At my current work I am also introducing zabbix monitoring. I chose zabbix at my previous work because I like the approach: measure a lot of values and store those, and next you decide whether to draw graphs or run triggers based on those values. Monitoring, graphing and alerting in one system.

The installation of the zabbix agent got puppetized instantly. I found out the rpm from epel leaves a few things to fix, so puppet to the rescue to fix that on installation. By simply configuring those fixes to depend on the package and to notify the service the start of the service will be postponed until those fixes have been done and the agent will start correctly.

The firewall on the monitored machines still needs to be fixed by hand, which is still a problem. Bringing the firewall under puppet control would be great, but that is quite a project.

2012-08-13 (#)
At work I recently introduced puppet for automated system management, after hearing about it from people with very good experiences with it.

Slowly but surely we start to manage the first tasks with puppet: system accounts, ssh configuration, ntp configuration, package removal/addition, postfix configuration and other things we want configured to our standards on all machines. Puppet helps a lot in making configurations standard and making sure (complicated) configuration tasks have been done on every system.

The fact that we are currently setting up quite a number of new virtual machines helps: lots of room to start off with a 'puppetized' config.

Configuration choices can be made based on classes assigned to nodes but also based on 'facts' derived from the machine itself. For example I install package smartmontools on machines with real hardware, it doesn't make sense to install it in virtual machines. Or I can use a variable from a fact in a configuration, which is great if you want mail from machines to be readable when it's in a big mailbox. A sample from the logwatch config:
        file { 'logwatch.conf':
                path => '/etc/logwatch/conf/logwatch.conf',
                ensure => present,
                owner => 'root',
                group => 'root',
                mode => 0444,
                content => "# This file is under puppet control
# Generated by $Id: logwatch.pp 67 2012-08-14 08:14:49Z XXXXX $
# Do not edit on this machine
MailFrom = Logwatch@$fqdn (Logwatch on $hostname)
",
                require => Package['logwatch'];
        }
And the logwatch mailfolder gets more readable.

With more than a few machines to manage with puppet I like puppetdashboard to see whether all changes have been rolled out to all machines. The 'radiator view', with colors denoting system states, gives a great visual hint whether you need to look at your puppet dashboard for more info or everything is fine, so we use that view on our system monitor screen. And puppetdashboard gives nice counters showing just how many configuration items you are controlling: the current count for our setup is 812 items already, and we're just getting started.

2012-03-01 (#)
Even gdm has the option to allow sessions via XDMCP over the network, but it is (rightfully) disabled by default. I used Xnest to debug an issue with gdm. The configuration (at least this bit) is in /etc/gdm/custom.conf:
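The actual snippet is not quoted above, but enabling XDMCP uses the standard gdm [xdmcp] section in custom.conf (a sketch):

```ini
# /etc/gdm/custom.conf
[xdmcp]
Enable=true
```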
And now I can debug some gdm settings with
$ Xnest :1 -query thompson
And see the results in an X session in an X session.

And I debugged the problem: the minimal uid needed to get an account listed in the gdm greeter is taken from /etc/login.defs. The documentation for gdm lists the MinimalUID option but this gdm (version from ubuntu 10.04.4 LTS) ignores that option.

2011-07-13 (#)
Trying to clear out an old e-mail archive (13215 messages) with the Thunderbird e-mail client (selecting all messages older than a month, pressing shift-delete) makes Thunderbird unresponsive for hours and in the end the mail is still not deleted.

Doing the same in the right place on the server with
# find . -mtime +31 | xargs rm
takes less than 30 seconds and Thunderbird rereads the folder fine.
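A slightly more careful variant (assuming GNU find) limits itself to files and avoids the xargs quoting issues; a demo on a scratch directory so nothing real is at risk:

```shell
# set up a scratch 'mail folder' with one old and one new message
mkdir -p /tmp/maildemo
touch -d "60 days ago" /tmp/maildemo/old.eml
touch /tmp/maildemo/new.eml
# delete only plain files older than 31 days; -delete needs no xargs
find /tmp/maildemo -type f -mtime +31 -delete
ls /tmp/maildemo
# prints: new.eml
```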

2011-04-06 (#)
Tip: when searching DNS answers for certain IP addresses, use the -n flag for tcpdump. Otherwise tcpdump will 'helpfully' resolve the IP back to a name.

You may need to scroll the output below to the right to see what I mean.
# tcpdump -r zorin.pcap port 53 -v | grep webcam
14:02:27.731039 IP (tos 0x0, ttl 128, id 24132, offset 0, flags [none], proto 17, length: 63) >  41099+ A? (35)
14:02:27.734230 IP (tos 0x0, ttl  64, id 0, offset 0, flags [DF], proto 17, length: 241) >  41099 1/3/5 A (213)
And what I was testing for:
# tcpdump -nr zorin.pcap port 53 -v | grep webcam
reading from file zorin.pcap, link-type EN10MB (Ethernet)
14:02:27.731039 IP (tos 0x0, ttl 128, id 24132, offset 0, flags [none], proto 17, length: 63) >  41099+ A? (35)
14:02:27.734230 IP (tos 0x0, ttl  64, id 0, offset 0, flags [DF], proto 17, length: 241) >  41099 1/3/5 A xx.xx.xx.xx (213)
That gives output I can grep for a specific IP address.

  Older news items for tag computersarebetterat ⇒