News items for tag homeserver - Koos van den Hout

2019-07-05 I tested the randomness setup
Doing some more reading on haveged made me decide to test the actual randomness of the setup with haveged and randomsound, which I created to fix the lack of entropy for dnssec signing operations. I booted the same testing virtual machine, which can tap from the host /dev/random, and ran rngtest until it was time to shut down the laptop that was showing the output. The result:
$ rngtest < /dev/random 
rngtest 2-unofficial-mt.14
Copyright (c) 2004 by Henrique de Moraes Holschuh
This is free software; see the source for copying conditions.  There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

rngtest: starting FIPS tests...
^Crngtest: bits received from input: 4999640
rngtest: FIPS 140-2 successes: 249
rngtest: FIPS 140-2 failures: 0
rngtest: FIPS 140-2(2001-10-10) Monobit: 0
rngtest: FIPS 140-2(2001-10-10) Poker: 0
rngtest: FIPS 140-2(2001-10-10) Runs: 0
rngtest: FIPS 140-2(2001-10-10) Long run: 0
rngtest: FIPS 140-2(2001-10-10) Continuous run: 0
rngtest: input channel speed: (min=303.011; avg=543.701; max=5684.774)bits/s
rngtest: FIPS tests speed: (min=43.251; avg=64.587; max=84.771)Mibits/s
rngtest: Program run time: 9194254192 microseconds
I rate-limited the virtio-rng-pci driver from the host, so the test took a really long time. Given earlier tries with dnssec-signzone this is fast enough.
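
That rate limit lives in the libvirt domain XML on the host. A minimal sketch of what I mean (the bytes/period values here are made up for illustration):
<rng model='virtio'>
  <rate bytes='1024' period='1000'/>
  <backend model='random'>/dev/random</backend>
</rng>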

No need to buy a hardware random generator, although they are way cool and it would be an idea to have a source of correct time (NTP) next to a source of randomness.

Update: I ran rngtest on /dev/urandom and I had to ask for a really big number of blocks to get any failures. The first test with 249 blocks gave the same result as above, just at a much higher bit rate. So now I know less about the correct randomness of my setup, but at least the test shows that I can safely run dnssec-signzone, which was the original idea.
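
For reference, rngtest can be told to stop after a fixed number of blocks with the -c option, which saves the Ctrl-C; the block count here is just an example:
$ rngtest -c 249 < /dev/urandom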

2019-07-04 First tests with dnssec show a serious lack of entropy
I was looking at the options for implementing DNSSEC on the domains I have, and started doing this on a domain name that is just used for web redirects, so I won't break anything serious when I make an error. And I am looking at monitoring options at the same time.

Looking for usable documentation I found DNSSEC signatures in BIND named - sidn.nl which shows and explains a lot of the options for doing this with bind9, including full automation. I want to take steps I understand, so I will start with careful minimal automation on a domain name that I can 'break'.

Following that documentation I created a key-signing key (KSK) and a zone-signing key (ZSK). I used the /etc/bind/keys directory which is the standard location.
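
The generation steps look something like this sketch; the algorithm, key sizes and domain name are assumptions for illustration, the linked documentation has the authoritative versions:
# cd /etc/bind/keys
# dnssec-keygen -a RSASHA256 -b 2048 -n ZONE -f KSK example.org
# dnssec-keygen -a RSASHA256 -b 1024 -n ZONE example.org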

The first dnssec-signzone action took 54 minutes. After waiting for a bit I started wondering what was happening and it turned out to be a problem with entropy: the signing uses a lot of data from /dev/random. I have the virtio-rng module loaded but the host wasn't making randomness available to the guest operating system. The host server does run randomsound to get more entropy since there is no hardware random number generator available.

Documentation on how to 'forward' randomness from the host to the client virtual machine: Random number generator device - Domain XML format
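
A minimal sketch of the relevant guest device definition from that documentation, feeding the guest from the host /dev/random:
<rng model='virtio'>
  <backend model='random'>/dev/random</backend>
</rng>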

So I did some tests with a test virtual machine with a similar configuration. The results:
  • Just the software kernel rng in the virtual machine: 54 minutes.
  • virtio-rng randomness from the host /dev/urandom, host running randomsound: less than 1 second.
  • virtio-rng randomness from the host /dev/random, host running randomsound: 11 minutes 10 seconds.
  • virtio-rng randomness from the host /dev/random, host running randomsound and haveged: less than 1 second.
Installing haveged, which gathers entropy from processor timing jitter, fixes the whole problem.
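
On a Debian-style system (an assumption for this sketch) that is quickly done:
# apt-get install haveged
# systemctl enable --now haveged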

Now to implement the same settings for the virtual machine running the production nameserver and I'll be able to take the next step.

2019-06-19 Looking at the wrong side of a mirrored disk
Due to recent kernel updates I rebooted the home server and ran into only older kernels being available. Some searching later I found out it booted from a different disk than the one on which the update manager was maintaining /boot.

The solution was to mirror the /boot partition by hand and change the EFI boot setup to try booting from both disks, so the machine will still boot when one half of the mirror is completely unavailable. I did buy mirrored disks to have the machine available with one disk unavailable.

Changing the EFI boot setup with efibootmgr was somewhat complicated, but I got it all done. How to add a second disk I found via Partitioning EFI machine with two SSD disks in mirror - Unix & Linux stackexchange, and understanding the numbers in the efibootmgr -v output came from "efibootmgr -v" output question.
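
For the record, creating a boot entry for the second disk looks something like this; the disk, partition number and loader path are placeholders for this sketch:
# efibootmgr -c -d /dev/sdb -p 1 -L "linux (second disk)" -l '\EFI\debian\grubx64.efi'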

The ideal solution would be to have /boot and /boot/efi on mirrored partitions without metadata, so the EFI loader can also read them as unmirrored partitions. According to what I read this is possible in Linux with devicemapper, but there is not a lot of shared experience.

2019-06-02 Trying to backup to a cloudservice again
After the migration to the new homeserver was finished I found out I had to run backups on a separate computer: Misconfigured backups. So the old idea of backups to a cloud service is on my mind again. I've looked into this before: Backup to .. the cloud! and I still want to backup to a cloud-based service that has a webdav interface and is based on owncloud. With some searching I came across How to synchronize your files with TransIP’s STACK using the commandline.

I'd like the outgoing bandwidth to be limited so the VDSL uplink isn't completely filled with backup traffic. Installing owncloud-client-cmd still pulls in a lot of graphical dependencies, but doesn't install the GUI of the owncloud client. In owncloudcmd I can't set the bandwidth limits; I can set those in the graphical client. But a test shows that owncloudcmd doesn't read .local/share/data/ownCloud/owncloud.cfg for the bandwidth settings.
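
For reference, a one-shot sync with owncloudcmd looks like this; the URL and paths are placeholders:
$ owncloudcmd -u backupuser -p 'secret' /home/koos/backup \
    https://stack.example.net/remote.php/webdav/backup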

At least with the VDSL uplink speed and the wondershaper active, the responsiveness of other applications at home never suffered. Maybe specific rules for the IP addresses of the cloud service could rate-limit the uploads.

2019-05-06 Making checking SSL certificates before installing them a bit more robust
With all the automated updates of certificates as described in Enabling Server Name Indication (SNI) on my webserver and Automating Let's Encrypt certificates further, I wondered what would happen when some things got corrupted, most likely as a result of a full disk. A simple test showed that the checkcert utility would happily say two empty files are a match, because the sha256sum of two empty public keys is the same.

Solution: do something with the exit status from openssl. The new version of checkcert:
#!/bin/sh

# check that ssl private key ($1) matches the public key in the
# pem encoded x509 certificate ($2); on openssl failure a sentinel
# string is hashed instead so the sums can never match

SUMPRIVPUBKEY=`{ openssl pkey -in "$1" -pubout -outform pem || echo privkey; } | sha256sum`
SUMCERTPUBKEY=`{ openssl x509 -in "$2" -noout -pubkey -outform pem || echo pubkey; } | sha256sum`

if [ "${SUMPRIVPUBKEY}" = "${SUMCERTPUBKEY}" ]; then
        exit 0
else
        exit 1
fi
And now:
koos@gosper:~$ /usr/local/bin/checkcert /dev/null /dev/null
unable to load key
139636148224064:error:0906D06C:PEM routines:PEM_read_bio:no start line:../crypto/pem/pem_lib.c:686:Expecting: ANY PRIVATE KEY
unable to load certificate
139678825668672:error:0906D06C:PEM routines:PEM_read_bio:no start line:../crypto/pem/pem_lib.c:686:Expecting: TRUSTED CERTIFICATE
koos@gosper:~$ echo $?
1
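And a quick positive test with a throwaway self-signed pair, generated just for this check:
$ openssl req -x509 -newkey rsa:2048 -nodes -subj /CN=test -days 1 \
    -keyout test.key -out test.crt
$ /usr/local/bin/checkcert test.key test.crt && echo match
match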

2019-05-04 Considering enabling Server Name Indication (SNI) on my webserver
While making a lot of my websites available via HTTPS I started wondering about enabling Server Name Indication (SNI), because the list of hostnames in the one certificate (the subjectAltName parameter) keeps growing and they aren't all related.

So on a test system with haproxy I created two separate private keys, two separate certificate signing requests and requested two separate certificates: one for the variants of camp-wireless.org and one for most of the idefix.net names. The whole requesting procedure happened on the system where my automated renewal and deployment of LetsEncrypt certificates with dehydrated happens, so the request went fine. For the configuration of haproxy I followed HAProxy SNI, where 'terminating SSL on the haproxy with SNI' gets a short mention.

So I implemented the configuration as shown in that document and got greeted with an error:
haproxy[ALERT] 123/155523 (3435) : parsing [/etc/haproxy/haproxy.cfg:86] : 'bind :::443' unknown keyword '/etc/haproxy/ssl/webserver-idefix-main.pem'.
It turned out that the crt keyword has to be repeated for each certificate.

This is why I like having a test environment for things like this. Making errors in the certificate configuration on the 'production' server will give visitors scary and/or incomprehensible errors.

So the right configuration for my test is now:
frontend https-in
    bind :::443 v4v6 ssl crt /etc/haproxy/ssl/webserver-campwireless.pem crt /etc/haproxy/ssl/webserver-idefix-main.pem
And testing it shows the different certificates in use when I use the -servername parameter for openssl s_client to test things.
$ openssl s_client -connect testrouter.idefix.net:443 -servername idefix.net -showcerts -verify 3
..
Server certificate
subject=/CN=idefix.net
issuer=/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
..
Verification: OK
$ openssl s_client -connect testrouter.idefix.net:443 -servername camp-wireless.org -showcerts -verify 3
..
Server certificate
subject=/CN=www.camp-wireless.org
issuer=/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
..
Verification: OK
The certificates are quite separate. Generating the certificate signing requests with a separate private key for each request works fine.

So if I upgrade my certificate management to renew, transport, test and install multiple certificates for the main webserver, it would work.
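
An aside I have not tested in this setup: haproxy can also load every certificate in a directory when crt points at one, which avoids repeating the keyword as the list grows:
frontend https-in
    bind :::443 v4v6 ssl crt /etc/haproxy/ssl/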

2019-04-25 Accepting multiple passwords for IMAPS access
After upgrading to the new homeserver, my old setup to allow two passwords for IMAPS logins stopped working. I use a separate password for IMAPS access for those devices that insist on saving a password without asking.

I have the following PAM libraries:
ii  libpam-modules 1.1.8-3.6    amd64        Pluggable Authentication Modules
I debugged the problem using the pamtester program, which makes this a lot easier than constantly changing the configuration and restarting the imap server.
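
A pamtester session for the imap PAM service looks like this; the username is a placeholder and the exact output may differ:
$ pamtester imap koos authenticate
Password:
pamtester: successfully authenticated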

The relevant configuration now is:
# PAM configuration file for Courier IMAP daemon

#@include common-auth
# here are the per-package modules (the "Primary" block)
auth    required    pam_succeed_if.so quiet user ingroup users
#auth   [success=1 default=ignore]      pam_unix.so nullok_secure
auth    sufficient      pam_unix.so nullok_secure
auth    sufficient  pam_userdb.so db=/etc/courier/extrausers crypt=crypt use_first_pass
# here's the fallback if no module succeeds
auth    requisite                       pam_deny.so
# prime the stack with a positive return value if there isn't one already;
# this avoids us returning an error just because nothing sets a success code
# since the modules above will each just jump around
auth    required                        pam_permit.so
# and here are more per-package modules (the "Additional" block)
# end of pam-auth-update config
@include common-account
@include common-password
@include common-session
And now both my unix login password and the extra password are accepted.

2019-01-30 Misconfigured backups
I have "always" been running amanda for backups on linux. Or rather, I can't find any indication of when I started doing that, several homeserver versions ago; it's just still running.

Or it was running, but first I had to tackle a hardware problem: all SCSI controllers I have are PCI and the newest homeserver has no PCI slots. So I searched for a solution. The first idea was to use the desktop system for the tape drive, but the power supply in that system has no 4-lead Molex connectors so I can't connect the tape drive.

For now I use an old 'test' system with some software upgrades to run amanda, and shut it down when all backups are done and flushed to tape. But amanda had a serious problem writing stuff to tape. With some debugging this turned out to be caused by the variable blocksize I used on the previous systems, set with
# mt -f /dev/nst0 setblk 0
and I can't even find out why this seemed like a good idea years ago. But now amanda really wants to use 32768-byte blocks and filled a DDS-3 tape (12 GB without compression) with about 1.8 GB of data before reaching the end of the tape.

Why this default changed isn't clear to me, but I found a way to re-initialize the tapes so the backups fit again. Based on block size mismatch - backup central I created a script to do this. I did not get the error about the blocksize mentioned there, but I searched specifically for 'amanda 3.3.6 blocksize'.
#!/bin/sh

if [ "$1" = "" ]; then
        echo "Usage: $0 <tapename>"
        exit 1
fi

mt -f /dev/nst0 setblk 32768
mt -f /dev/nst0 compression 1
mt -f /dev/nst0 rewind
dd if=/dev/zero of=/dev/nst0 bs=32768 count=200
mt -f /dev/nst0 setblk 32768
mt -f /dev/nst0 compression 1
mt -f /dev/nst0 rewind
amlabel -f kzdoos $1
And now normal amounts of data fit on a tape again. I just have to initialize every tape before using it for the first time in this setup.

2019-01-02 Migration to new server finished
More than a year after I started migrating from homeserver greenblatt to the new homeserver conway, the last migration is done and the old server is switched off. The new server is in a good position in the rack; the old server is still taking up space in there too. It has taken a lot of time: I decided to stop some websites and other unused services in the process, and my energy levels haven't always been that great. I have also improved several things along the way, which caused further delays.

One thing hasn't changed (which I did expect to change): the power usage of the new server isn't lower! The UPS tells me the output load is about the same. OK, the new hardware has a lot more CPU power, a lot more memory and faster storage, but I expected the power use to go down a bit.

2019-01-01 Switching to 1-wire over USB and forwarding a USB device to a guest VM
The new hardware for the homeserver has no external serial ports, so I could not use the old serial / 1-wire interface that has been doing the home monitoring for years. But I had a spare USB DS2490 interface, so I plugged it into the server and wanted to forward the USB device to the guest VM that runs all the monitoring.

First I had to blacklist all the loaded drivers to have the device available to kvm as-is. In /etc/modprobe.d/local-config.conf:
blacklist w1_smem
blacklist ds2490
blacklist wire
The next step was to attach the device to the right VM. I followed the hints at How to auto-hotplug usb devices to libvirt VMs (Update 1) and edited the definition of the VM to add the host device:
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x04fa'/>
        <product id='0x2490'/>
      </source>
    </hostdev>
But that did not get the USB device attached to the running VM, and I did not feel like rebooting it. So I put the above definition in an extra file and did a
root@conway:~# virsh attach-device --live gosper /tmp/onewire.xml 
Device attached successfully
And then I had to do the same blacklisting as above in the virtual machine. After doing that I could detach and re-attach the device from the VM without physically touching it, simply with:
root@conway:~# virsh detach-device --live gosper /tmp/onewire.xml 
Device detached successfully

root@conway:~# virsh attach-device --live gosper /tmp/onewire.xml 
Device attached successfully
After that I had to set up udev rules for the telemetry user to have enough access to the USB device:
#
SUBSYSTEMS=="usb", GOTO="usb_w1_start"
GOTO="usb_w1_end"
LABEL="usb_w1_start"
ATTRS{idVendor}=="04fa", ATTRS{idProduct}=="2490", GROUP="telemetry", MODE="0666"
LABEL="usb_w1_end"
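New rules can be activated without a reboot, assuming the rules file lives under /etc/udev/rules.d/:
# udevadm control --reload-rules
# udevadm trigger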
And now it all works:
telemetry@gosper:~$ digitemp_DS2490 -a
DigiTemp v3.7.1 Copyright 1996-2015 by Brian C. Lane
GNU General Public License v2.0 - http://www.digitemp.com
Found DS2490 device #1 at 002/003
Jan 01 21:53:11 Sensor 10A8B16B0108005D C: 9.500000
Jan 01 21:53:12 Sensor 28627F560200002F C: 17.062500
Jan 01 21:53:14 Sensor 10BC428A010800F4 C: 19.562500
Jan 01 21:53:15 Sensor 1011756B010800F1 C: 11.937500
Jan 01 21:53:16 Sensor 10B59F6B01080016 C: 16.312500
Jan 01 21:53:17 Sensor 1073B06B010800AC C: 18.687500
Jan 01 21:53:18 Sensor 102B2E8A010800F0 C: 29.250000
Jan 01 21:53:20 Sensor 28EF71560200002D C: 16.687500
Working house temperatures again!

2018-12-04 Really ending a domain name and the web presence
On 25 December 2004 there was a special deal giving me the .info names camp-wireless.info and campwireless.info for free for the first year. Since that moment I kept the names registered and redirected all web traffic to the right version: https://www.camp-wireless.org/. So the deal worked from a 'selling domain names' perspective: Christmas is a bad moment to review the need for domain names, so the easy solution is to renew them. My decision to stop with these names was made in January 2018.

Traffic to the .info versions is minimal. Given the cost of the domain registration I decided to stop and devised an exit strategy which would result in a domain name that attracts no traffic and is not linked to my other web projects. On the next renewal date the domains will expire. I have done this before in a different context: when we ended the students' personal webspace at www.students.cs.uu.nl.

The solution is to start returning HTTP status 410 Gone for search engines while at the same time returning a somewhat user-friendly error page.

Relevant bit of apache 2.4 configuration:
<VirtualHost *:80>
    ServerName www.camp-wireless.info
    ServerAlias www.campwireless.info
    ServerAlias camp-wireless.info
    ServerAlias campwireless.info

    DocumentRoot /home/httpd/campwireless-expire/html

    <Directory "/home/httpd/campwireless-expire/html">
        Require all granted
    </Directory>

    RewriteEngine On
    RedirectMatch 410 ^/(?!gone.html|robots.txt)
    ErrorDocument 410 /gone.html
</VirtualHost>
The gone page is simple: it has an explanation for human visitors and a meta refresh tag to eventually redirect the browser. But to a search engine the status 410 on almost any URL is a clear flag that the page is gone and should be flushed from the cache.
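
A quick check that the configuration behaves as intended (the exact response lines may differ per Apache version):
$ curl -sI http://www.camp-wireless.info/some-old-page | head -1
HTTP/1.1 410 Gone
$ curl -sI http://www.camp-wireless.info/gone.html | head -1
HTTP/1.1 200 OK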

2018-11-23 Automatic ls colours can be slow
I noticed certain commands taking a while to start, including a simple ls. At last I got annoyed enough to diagnose the whole situation, and the problem turned out to be a combination of factors: symbolic links in the listed directory pointing to filesystems behind the automounter, one of the mounted filesystems coming from a NAS with a sleeping disk, and ls --color doing a stat() on the target of each symbolic link to find the type of the target file in order to select a colour.

My solution: find the source of the alias and disable it.
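
Finding the culprit is a matter of something like this; output will vary per system:
$ type ls
ls is aliased to `ls --color=auto'
$ grep -rn "alias ls" ~/.bashrc ~/.bash_aliases /etc/profile.d/ 2>/dev/null
$ unalias ls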

2018-10-12 Serious slowness with rrdgraph from rrdtool
One of the things still needing migration is the NTP server stats, which obviously use rrdtool. Because I want to keep the history I migrated the datasets with:
/usr/local/rrdtool/bin/rrdtool dump ntpvals-stardate.cs.uu.nl.rrd \
| ssh newhost /usr/bin/rrdtool restore -f - ntpvals-stardate.cs.uu.nl.rrd
And then I create a graph of the plloffset, for example using:
/usr/bin/rrdtool graph /tmp/plloffset-stardate.cs.uu.nl-24hours.png \
--title "stardate.cs.uu.nl pll offset (last 24 hours)" --imginfo \
'<img src="tmpgraphs/%s" WIDTH="%lu" HEIGHT="%lu" alt="Graph">' \
--start -24hours --end now --vertical-label="Seconds" --color BACK#0000FF \
--color CANVAS#c0e5ff --color FONT#ffffff --color GRID#ffffff \
--color MGRID#ffffff --alt-autoscale --imgformat PNG --lazy \
DEF:offset=ntpvals-stardate.cs.uu.nl.rrd:plloffset:AVERAGE \
CDEF:wipeout=offset,UN,INF,UNKN,IF CDEF:wipeoutn=wipeout,-1,* \
LINE1:offset#000000:"Offset\:" \
GPRINT:offset:LAST:"Current\:%.3lf%s" \
GPRINT:offset:MIN:"Min\:%.3lf%S" \
GPRINT:offset:MAX:"Max\:%.3lf%S" \
GPRINT:offset:AVERAGE:"Average\:%.3lf%S" \
AREA:wipeout#e0e0e0 AREA:wipeoutn#e0e0e0
But on the old server this takes 0.026 seconds; on the new server, 3 minutes and 47.46 seconds. No idea what is happening: strace shows nothing strange, and rrdtool uses 1 CPU at 100% all that time.

2018-09-26 Made the big bang to the new homeserver
So for months and months I had the hardware ready for the new homeserver, I was testing bits and pieces in the new environment, and I still did not get around to making the big bang. Part of the time the new system was running and using electricity.

And a few weeks ago I had time for the big bang and forgot to mention it!

So one free day I just did the last sync of the home directories and started migrating all services in a big bang. No more but, if, when, is-it-done-yet: it's a homeserver, not a complete operational datacenter. Although with everything running it sometimes does look that way!

The new setup, more completely documented at Building - and maintaining home server conway 2017, is now running almost all tasks. The main migration covered home directories, mail, news and the webservers. Things are now split over several virtual machines, and the base system running the kvm virtual machines is as minimal as possible.

One thing I just noticed is that the new virtual machine with the kernel-mode pppoe driver and updated software is doing great: the bigger MTU works by default, and kernel-mode pppoe does not show up as using CPU when a 50 mbit download is active. I looked at CPU usage with htop and at the network traffic with iptraf, and the result was that iptraf itself was using the most CPU.

There are still some things left to migrate, including a few public websites that currently give 50x errors. But I will find the time eventually.

2018-09-24 After 25 years with sendmail there was still something to improve
I still like running sendmail on my own systems. But sendmail evolves with time, and my configuration improves slightly sometimes, such as with the introduction of authenticated smtp with secondary passwords.

After the recent upgrades to the home server there is a new mail server with some other new details and suddenly other systems at home could not relay. A bit of searching found Best practice: sendmail and SMTP auth with the right flags for the DAEMON_OPTIONS to only offer authentication on port 587 (submission).
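
The gist of those flags, as a sketch (see the linked article for the authoritative version; M=a requires authentication, M=E disables ETRN):
DAEMON_OPTIONS(`Port=smtp, Name=MTA')dnl
DAEMON_OPTIONS(`Port=submission, Name=MSA, M=Ea')dnl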

I noticed the local systems tried relaying via port 587, so I changed this to port 25, where IP-based relaying is allowed. No idea why I set this up to use port 587 previously.

And yes, I checked it: I started with sendmail in 1993, so that's 25 years of sendmail on port 25. I did start out writing my own sendmail.cf rules, but I switched to .mc-based configurations.

2018-09-21 Setting my bash prompt PS1 to remind me I'm in screen
With some systems constantly running screen and others not, I started to get confused. Solution: change the visual indication in the prompt inside screen.

I decided to just change the username color in PS1 when I'm in screen. So now:
PS1='${STY:+\[\e[1;36m\]}\u${STY:+\[\e[0m\]}@\h:\w\$ '
In bash, ${STY:+..} expands to its content only when the shell variable STY is set. So I add the colour set/unset commands to the prompt when STY, a variable typically set by screen, is present. The result is dark cyan, a colour that works (for me) on my usual light-grey background xterm/putty sessions.
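
The effect of the ${STY:+..} expansion is easy to check by hand:
$ echo "[${STY:+in screen}]"
[]
$ screen
$ echo "[${STY:+in screen}]"
[in screen]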

Oh, and for root things are different:
PS1='\[\e[1;91m\]\u@\h\[\e[0m\]:\w\$ '
Which gives a light red user@hostname.

In the above, \e causes an escape character to be printed. Wrapping parts of the prompt between \[ and \] makes bash ignore them when counting the length of the prompt, so it doesn't get confused about redrawing the prompt when editing the command line.

Samples of colours and other formatting at FLOZz' MISC » bash:tip_colors_and_formatting.

2018-09-06 Weird interface names in snmp due to virtio driver
I want to measure network traffic so I decided to copy most of my rrdtool setup from the old home server.

But with virtio network cards snmpd gets confused:
IF-MIB::ifDescr.1 = STRING: lo
IF-MIB::ifDescr.2 = STRING: Red Hat, Inc Device 0001
IF-MIB::ifDescr.3 = STRING: Red Hat, Inc Device 0001
IF-MIB::ifDescr.4 = STRING: Red Hat, Inc Device 0001
IF-MIB::ifDescr.5 = STRING: dummy0
IF-MIB::ifDescr.6 = STRING: dumhost
IF-MIB::ifDescr.7 = STRING: dumdh6
Fix: use the IF-MIB::ifName snmp variables instead, found in oid 1.3.6.1.2.1.31.1.1.1:
IF-MIB::ifName.1 = STRING: lo
IF-MIB::ifName.2 = STRING: eth0
IF-MIB::ifName.3 = STRING: eth1
IF-MIB::ifName.4 = STRING: eth2
IF-MIB::ifName.5 = STRING: dummy0
IF-MIB::ifName.6 = STRING: dumhost
IF-MIB::ifName.7 = STRING: dumdh6
Those are easier to tell apart, and now my snmp scripts are gathering data again.
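
For reference, such a listing comes from a walk like this; the community string and hostname are placeholders:
$ snmpwalk -v2c -c public gosper IF-MIB::ifName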

2018-07-19 Configuring sendmail authentication like imaps access to allow secondary passwords
I needed to configure sendmail authenticated access because I want a strict SPF record for idefix.net, which means I always have to make outgoing mail originate from the right server.

For the sendmail authenticated smtp bit I used How to setup and test SMTP AUTH within Sendmail with some configuration details from Setting up SMTP AUTH with sendmail and Cyrus-SASL. To get this running saslauthd is needed to get authentication at all and I decided to let it use the pam authentication mechanism. The relevant part of sendmail.mc:
include(`/etc/mail/sasl/sasl.m4')dnl
define(`confAUTH_OPTIONS', `A p')dnl
define(`confAUTH_MECHANISMS', `LOGIN PLAIN')dnl
TRUST_AUTH_MECH(`LOGIN PLAIN')dnl
And now I can log in to sendmail only in an encrypted session. And since sendmail and other services now have valid certificates, I can set up all devices to fully check the certificate, which makes it difficult to intercept this password.
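
For completeness: on Debian (an assumption for this sketch) pointing saslauthd at pam is done in /etc/default/saslauthd:
START=yes
MECHANISMS="pam"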

And after I got that working I decided I wanted 'secondary passwords', just like the extra passwords I configured for IMAPS access, so I set up /etc/pam.d/smtp to allow other passwords than the unix password and to restrict access to the right class of users.
auth    required    pam_succeed_if.so quiet user ingroup users
auth    [success=1 default=ignore]      pam_unix.so nullok_secure
auth    sufficient  pam_userdb.so db=/etc/courier/extrausers crypt=crypt use_first_pass
# here's the fallback if no module succeeds
auth    requisite                       pam_deny.so
Now I can set up my devices that insist on saving the password for outgoing smtp and if it ever gets compromised I just have to change that password without it biting me too hard.

2018-06-17 Apache 2.2 Proxy and default block for everything but the .well-known/acme-challenge urls
I'm setting up a website on a new virtual machine on the new homeserver and I want a valid letsencrypt certificate. It's a site I don't want to migrate so I'll have to use the Apache proxy on the 'old' server to allow the site to be accessed via IPv4/IPv6 (for consistency I am now setting up everything via a proxy).

So first I set up a proxy to pass all requests for the new server to the backend, something like:
        ProxyPass / http://newsite-back.idefix.net/
        ProxyPassReverse / http://newsite-back.idefix.net/
But now the requests for /.well-known/acme-challenge also go there, and they are blocked by the username/password requirement since the new site is not open yet.

So to set up the proxy correctly AND avoid the username checks for /.well-known/acme-challenge, the order has to be correct: in the ProxyPass rules the rule for the specific URL has to come first, and in the Location setup it has to come last.
        ProxyPass /.well-known/acme-challenge !
        ProxyPass / http://newsite-back.idefix.net/
        ProxyPassReverse / http://newsite-back.idefix.net/

        <Location />
        Deny from all
        AuthName "Site not open yet"
        [..]
        </Location>

        <Location /.well-known/acme-challenge>
            Order allow,deny
            Allow from all
        </Location>
And now the acme-challenge is done locally on the server and all other requests get forwarded to the backend after authentication.
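
A quick way to verify the ordering does what I want; the hostname is a placeholder, the 404 comes from the local server because no challenge file exists, and the 401 shows the authentication still guards the rest:
$ curl -sI http://newsite.example.net/.well-known/acme-challenge/test | head -1
HTTP/1.1 404 Not Found
$ curl -sI http://newsite.example.net/ | head -1
HTTP/1.1 401 Unauthorized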

2018-05-03 The preferring IPv6 policy is working
Yesterday I changed some IPv4 addresses on virtual machines on the new homeserver to make autofs work. This is a known issue with autofs: autofs does not appear to support IPv6 hostname lookups for NFS mounts - Debian Bug #737679, and for me the easy solution is to do the NFS mounts over rfc1918 IPv4 addresses. I prefer autofs over 'fixed' NFS mounts for those filesystems that are nice to have available but aren't needed constantly.
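
What such a mount looks like in an autofs map, with the rfc1918 IPv4 address instead of a hostname; the map file, key and address are placeholders:
# /etc/auto.data: mount over the IPv4 address so autofs
# does not trip over IPv6 hostname lookups
archive  -fstype=nfs,rw  172.16.0.10:/export/archive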

It took about 9 hours before arpwatch on the central router noticed the new activity. I guess the policy of trying to do everything over IPv6 is working.
