News items for tag linux - Koos van den Hout

2018-08-17 Trying (and failing) to correlate security logs
Since activating sendmail authentication with secondary passwords I see a number of attempts to guess credentials to send mail via my system. This is not very surprising, given the constant attack levels on the wider Internet.

For work I am looking at log correlation and monitoring, and with that in mind I noted that finding out from the sendmail logs where and when an attempt came from is quite hard: several processes are involved and it's hard to correlate their logging. The failed attempt is logged by saslauthd in /var/log/auth.log:
Aug 16 12:28:57 greenblatt saslauthd[32648]: pam_unix(smtp:auth): check pass; user unknown
Aug 16 12:28:57 greenblatt saslauthd[32648]: pam_unix(smtp:auth): authentication failure; logname= uid=0 euid=0 tty= ruser= rhost=
Aug 16 12:28:59 greenblatt saslauthd[32648]: do_auth         : auth failure: [user=monster] [service=smtp] [] [mech=pam] [reason=PAM auth error]
Aug 16 12:29:00 greenblatt saslauthd[32649]: pam_unix(smtp:auth): check pass; user unknown
Aug 16 12:29:00 greenblatt saslauthd[32649]: pam_unix(smtp:auth): authentication failure; logname= uid=0 euid=0 tty= ruser= rhost=
Aug 16 12:29:02 greenblatt saslauthd[32649]: do_auth         : auth failure: [user=monster] [service=smtp] [realm=] [mech=pam] [reason=PAM auth error]
This is probably related to this sendmail log information:
Aug 16 12:28:56 greenblatt sm-mta[20716]: STARTTLS=server, [] (may be forged), version=TLSv1/SSLv3, verify=NO, cipher=DHE-RSA-AES256-SHA, bits=256/256
Aug 16 12:29:02 greenblatt sm-mta[20716]: w7GASspx020716: [] (may be forged) did not issue MAIL/EXPN/VRFY/ETRN during connection to MSP-v6
But I can't be sure as there are multiple 'did not issue MAIL/EXPN/VRFY/ETRN' messages in the logs. So I can't build a fail2ban rule based on this.
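The correlation step I would like is exactly what the logs make hard. A minimal sketch of the idea, pairing saslauthd failures with nearby sm-mta lines by timestamp (the 10-second window is an arbitrary choice of mine, and real syslog correlation would also need the year and host):

```python
from datetime import datetime, timedelta

def parse_ts(line, year=2018):
    # syslog lines start with e.g. "Aug 16 12:28:57" (15 characters)
    return datetime.strptime(f"{year} {line[:15]}", "%Y %b %d %H:%M:%S")

def correlate(auth_lines, mta_lines, window=timedelta(seconds=10)):
    """Pair each saslauthd auth failure with sm-mta lines close in time."""
    pairs = []
    failures = [l for l in auth_lines if "auth failure" in l]
    for f in failures:
        t = parse_ts(f)
        near = [m for m in mta_lines if abs(parse_ts(m) - t) <= window]
        pairs.append((f, near))
    return pairs
```

With the log lines above, the 12:28:59 auth failure picks up both sm-mta lines (12:28:56 and 12:29:02), but that is exactly the weakness: a busy server would match multiple unrelated connections in the same window.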

2018-07-27 Automating Let's Encrypt certificates with DNS-01 protocol
After thoroughly automating Let's Encrypt certificate renewal and installation I wanted to get the same level of automation for systems that do not expose an http service to the outside world. So that means the DNS-01 challenge within the ACME protocol has to be used.

I found out that dehydrated, the Let's Encrypt certificate management tool, supports DNS-01, and I found a sample of how to do this with bind9 at 'Example hook script using Dynamic DNS update utility for dns-01 challenge', which looks like it can do the job.

It took me a few failed tries to find out that for the name I want a certificate for, the TXT record is requested at _acme-challenge under that name, to make me prove that I have control over the right bit of DNS. My first assumption turned out wrong. So the bind9 config in /etc/bind/named.conf.local has:
zone "" {
        type master;
        file "/var/cache/bind/";
        masterfile-format text;
        allow-update { key "acmekey-turing"; };
        allow-query { any; };
        allow-transfer {
And in the zone there is just one delegation:
_acme-challenge.turing  IN      NS      ns2
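For reference, the record the hook has to publish in that delegated zone is fully determined by RFC 8555: the TXT value is the base64url-encoded (unpadded) SHA-256 digest of the key authorization. A sketch, with a made-up domain and key authorization:

```python
import base64
import hashlib

def dns01_record(domain, key_authorization):
    """Return the (name, value) pair for an ACME DNS-01 challenge (RFC 8555)."""
    name = "_acme-challenge." + domain
    digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
    # base64url without padding, as the ACME spec requires
    value = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return name, value
```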
I created and used a TSIG key with something like:
# dnssec-keygen -r /dev/random -a hmac-md5 -b 128 -n HOST acmekey-turing
This gives 2 files, both with the right secret:
# ls Kacmekey-turing.+157+53887.*
Kacmekey-turing.+157+53887.key  Kacmekey-turing.+157+53887.private
# cat Kacmekey-turing.+157+53887.key
acmekey-turing. IN KEY 512 3 157 c2V0ZWMgYXN0cm9ub215
and configured it in /etc/bind/named.conf.options:
key "acmekey-turing" {
        algorithm hmac-md5;
        secret "c2V0ZWMgYXN0cm9ub215";
};
And now I can request a certificate for the name and use it to generate sendmail certificates. The net result:
        (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256          
SMTP between systems with TLS working and good certificates.

2018-07-19 Configuring sendmail authentication like imaps access to allow secondary passwords
I needed to configure sendmail authenticated access because I want a strict SPF record for my domain, which means I always have to make outgoing mail originate from the right server.

For the sendmail authenticated smtp bit I used 'How to setup and test SMTP AUTH within Sendmail', with some configuration details from 'Setting up SMTP AUTH with sendmail and Cyrus-SASL'. To get this running saslauthd is needed for authentication at all, and I decided to let it use the pam authentication mechanism. The relevant part of the sendmail configuration:
define(`confAUTH_OPTIONS', `A p')dnl
And now I can login to sendmail only in an encrypted session. And since sendmail and other services now have valid certificates, I can set up all devices to fully check the certificate, making it difficult to intercept this password.

And after I got that working I decided I wanted 'secondary passwords', just like I configured extra passwords for IMAPS access. So I set up /etc/pam.d/smtp to allow other passwords than the unix password and to restrict access to the right class of users.
auth    required        pam_succeed_if.so quiet user ingroup users
auth    [success=1 default=ignore]      pam_unix.so nullok_secure
auth    sufficient      pam_userdb.so db=/etc/courier/extrausers crypt=crypt use_first_pass
# here's the fallback if no module succeeds
auth    requisite       pam_deny.so
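If I read the control flags right (and assuming the elided modules are the usual Debian suspects: pam_succeed_if, pam_unix, pam_userdb, pam_deny), the stack behaves roughly like this toy model. This is not real PAM, and the [success=N default=ignore] skip syntax is left out; it only shows how required/requisite/sufficient interact:

```python
def run_stack(stack):
    """Toy evaluation of a PAM auth stack: list of (control, result) pairs.
    Much simplified compared to real PAM semantics."""
    ok = None
    for control, result in stack:
        if control == "required":
            if not result:
                ok = False          # failure is remembered, stack continues
            elif ok is None:
                ok = True
        elif control == "requisite":
            if not result:
                return False        # failure ends the stack immediately
        elif control == "sufficient":
            if result and ok is not False:
                return True         # success ends the stack
    return bool(ok)
```

So a correct extra password (sufficient module succeeding) ends the stack with success, while a user outside the allowed group fails no matter what password is given.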
Now I can set up the devices that insist on saving the password for outgoing smtp, and if that password ever gets compromised I just have to change it, without it biting me too hard.

2018-07-08 Automating Let's Encrypt certificates further
Over two years ago I started using Let's Encrypt certificates. Recently I wanted to automate this a step further and found dehydrated automated certificate renewal, which helps a lot in automating certificate renewal with minimal hassle.

First thing I fixed was http-based verification. The webserver has been set up to make all .well-known/acme-challenge directories end up in one place on the filesystem and it turns out this works great with dehydrated.

I created a separate user for dehydrated, gave that user write permissions for the /home/httpd/html/.well-known/acme-challenge directory. It also needs write access to /etc/dehydrated for its own state. I changed /etc/dehydrated/config with:
Now it was possible to request certificates based on a .csr file. I used this to get a new certificate for the home webserver, and it turned out to be easier than the previous setup based on letsencrypt-nosudo.
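As far as I can tell, dehydrated's renewal decision ultimately boils down to a date comparison against the certificate's notAfter, with the threshold set by RENEW_DAYS in its config (30 by default). A rough sketch of that logic:

```python
from datetime import datetime, timedelta

def needs_renewal(not_after, now, renew_days=30):
    """Renew when less than renew_days of validity remain
    (my reading of dehydrated's RENEW_DAYS behaviour)."""
    return not_after - now < timedelta(days=renew_days)
```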
Read the rest of Automating Let's Encrypt certificates further

2018-06-23 SMART can be wrong
Someone brought me a 'WD My Cloud' that does not respond at all. So I took it apart and found out how to access the disk in an i386 Linux system: mount the 4th partition as ext4. Once the disk was available I ran a SMART test:
SMART overall-health self-assessment test result: PASSED
But while trying to find out how much data is actually on the disk, I get:
[  866.165641] Sense Key : Medium Error [current] [descriptor]
[  866.165645] Descriptor sense data with sense descriptors (in hex):
[  866.165647]         72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00 
[  866.165659]         b0 90 ea 60 
[  866.165664] sd 2:0:0:0: [sda]  
[  866.165668] Add. Sense: Unrecovered read error - auto reallocate failed
So the disk isn't very healthy. But rerunning the SMART check still shows nothing is wrong. It is a Western Digital 'RED' harddisk, made especially for NAS systems, so it should report errors to the operating system early. But this disk is bad, which is probably related to why the 'My Cloud' enclosure isn't working.
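The failing sector is actually in that sense dump: descriptor-format sense data carries the LBA in an information descriptor. If I read the SPC descriptor layout right, it can be decoded like this (a sketch, not a complete sense parser):

```python
def failing_lba(sense_hex):
    """Extract the LBA from descriptor-format sense data (response code 0x72).
    Walks the sense descriptors looking for the information descriptor (type
    0x00), whose last 8 bytes hold the LBA."""
    b = bytes.fromhex(sense_hex)
    assert b[0] & 0x7f == 0x72           # descriptor format, current error
    i = 8                                # descriptors start after the header
    while i < 8 + b[7]:                  # b[7] = additional sense length
        dtype, dlen = b[i], b[i + 1]
        if dtype == 0x00:                # information descriptor
            return int.from_bytes(b[i + 4:i + 12], "big")
        i += dlen + 2
    return None
```

Feeding it the hex bytes from the kernel message gives LBA 0xb090ea60, the sector the 'auto reallocate failed' error is about.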
Read the rest of SMART can be wrong

2018-06-17 Apache 2.2 Proxy and default block for everything but the .well-known/acme-challenge urls
I'm setting up a website on a new virtual machine on the new homeserver and I want a valid Let's Encrypt certificate. It's a site I don't want to migrate, so I'll have to use the Apache proxy on the 'old' server to allow the site to be accessed via IPv4/IPv6 (for consistency I am now setting up everything via a proxy).

So first I set up a proxy to pass all requests for the new server to the backend, something like:
        ProxyPass /
        ProxyPassReverse /
But now the requests for /.well-known/acme-challenge also go there, and they are blocked with a username/password requirement since the new site is not open yet.

So to set up the proxy correctly AND avoid the username checks for /.well-known/acme-challenge the order has to be correct. In the ProxyPass rules the rule for the specific URL has to come first and in the Location setup it has to come last.
        ProxyPass /.well-known/acme-challenge !
        ProxyPass /
        ProxyPassReverse /

        <Location />
        Deny from all
        AuthName "Site not open yet"
        </Location>

        <Location /.well-known/acme-challenge>
            Order allow,deny
            Allow from all
        </Location>
And now the acme-challenge is done locally on the server and all other requests get forwarded to the backend after authentication.
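The ordering requirement can be pictured as first-match routing: Apache walks the ProxyPass rules in configuration order, so the `!` rule only shields the challenge path if it is seen before the catch-all. A sketch of that behaviour (the backend name is made up):

```python
def route(path, rules):
    """First-match routing like Apache ProxyPass: rules is an ordered list of
    (prefix, target); target None means 'do not proxy' (the ProxyPass ! rule)."""
    for prefix, target in rules:
        if path.startswith(prefix):
            return target
    return None

rules = [
    ("/.well-known/acme-challenge", None),   # ProxyPass ... ! comes first
    ("/", "http://backend.example/"),        # catch-all to the backend
]
```

Swap the two rules and the catch-all matches everything, which is exactly the failure mode I saw.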

2018-05-03 The preferring IPv6 policy is working
Yesterday I changed some IPv4 addresses on virtual machines on the new homeserver to make autofs work. This is a known issue with autofs: autofs does not appear to support IPv6 hostname lookups for NFS mounts - Debian Bug #737679 and for me the easy solution is to do NFS mounts over rfc1918 ipv4 addresses. I prefer autofs over 'fixed' NFS mounts for those filesystems that are nice to be available but aren't needed constantly.

It took about 9 hours before arpwatch on the central router noticed the new activity. I guess the policy to try to do everything over IPv6 is working.

2018-04-24 KVM and os-specific defaults
Today I wanted to install a new virtual machine on the new homeserver and virt-install gave me a new warning:
WARNING  No operating system detected, VM performance may suffer. Specify an OS with --os-variant for optimal results.
According to the virt-install manpage the --os-variant values can be listed with osinfo-query os, which I can't find in Devuan jessie. But the same information is available via 'Installing Virtual Machines with virt-install, plus copy pastable distro install one-liners'.

I chose debian7 as that is probably the closest to Devuan jessie, to be upgraded to Devuan ascii immediately.

The interesting change is that the resulting linux suddenly has virtio network cards and a disk /dev/vda. That last bit is quite different from earlier virtual machines.

2018-04-06 Keeping squid webproxy running for network mismatches
I considered stopping the use of squid when upgrading to the new homeserver, but I have now reversed that decision: I need to keep it running for applications that want to make http connections to IPv6-only systems but can't handle IPv6 themselves. Some old scripts need it, but it's also the way to fix the problem I noticed with linuxcounter.

2018-04-06 25 years of Linux use
In looking at a problem with the linuxcounter script I noticed I am now passing the 25-years-with-Linux mark. I first saw Linux in the beginning of 1993, when part of my internship happened at the 'expa' lab (if I recall correctly) of Hogeschool Utrecht, with SLS Linux.

Anyway, still using Linux a lot. It's been an interesting 25 years!

2018-01-27 I caused an interesting problem with the VDSL pppoe session
Normally being active on certain HF bands causes one-time VDSL disconnects, but what I have currently done seems to have triggered something else: after the connection dropped it refuses to come back. The entire session looks like:
22:49:28.466922 PPPoE PADI [Service-Name]
22:49:28.490394 PPPoE PADO [AC-Name "dr12.d12"] [Service-Name] [AC-Cookie 0xA3FE109A222CE73945C23FCE85E03F83] [EOL]
22:49:28.490603 PPPoE PADR [Service-Name] [AC-Cookie 0xA3FE109A222CE73945C23FCE85E03F83]
22:49:28.517063 PPPoE PADS [ses 0x40c] [Service-Name] [AC-Name "dr12.d12"] [AC-Cookie 0xA3FE109A222CE73945C23FCE85E03F83] [EOL]
22:49:28.575266 PPPoE  [ses 0x40c] LCP, Conf-Request (0x01), id 72, length 16
22:49:28.575776 PPPoE  [ses 0x40c] LCP, Conf-Request (0x01), id 99, length 22
22:49:28.575798 PPPoE  [ses 0x40c] LCP, Conf-Reject (0x04), id 72, length 10
22:49:28.589161 PPPoE  [ses 0x40c] LCP, Conf-Ack (0x02), id 99, length 22
22:49:28.589164 PPPoE  [ses 0x40c] LCP, Conf-Request (0x01), id 73, length 12
22:49:28.589666 PPPoE  [ses 0x40c] LCP, Conf-Ack (0x02), id 73, length 12
22:49:28.589682 PPPoE  [ses 0x40c] LCP, Echo-Request (0x09), id 0, length 10
22:49:28.589693 PPPoE  [ses 0x40c] CCP, Conf-Request (0x01), id 89, length 17
22:49:28.589702 PPPoE  [ses 0x40c] IPCP, Conf-Request (0x01), id 89, length 18
22:49:28.589711 PPPoE  [ses 0x40c] IP6CP, Conf-Request (0x01), id 89, length 16
22:49:28.603265 PPPoE  [ses 0x40c] LCP, Echo-Reply (0x0a), id 0, length 10
22:49:28.603267 PPPoE  [ses 0x40c] LCP, Term-Request (0x05), id 74, length 6
22:49:28.604033 PPPoE  [ses 0x40c] LCP, Term-Ack (0x06), id 74, length 6
22:49:31.623454 PPPoE PADT [ses 0x40c] [Generic-Error "RP-PPPoE: System call error: Input/output error"] [AC-Cookie 0xA3FE109A222CE73945C23FCE85E03F83]
So in the end the router at my ISP decides to terminate the connection. When the connection failed I decided to change the configuration to use the kernel-mode pppoe driver, but after this problem started showing I reverted that change. That made no difference; the connection is still not coming up.

Update: I went looking at other changes I made to enable the pppoe server test and reverting the /etc/ppp/pap-secrets file to its original format fixed the problem. I guess I somehow started to authenticate the remote end.

And changing from user-mode pppoe to kernel-mode pppoe does lower the MTU to 1492, so that test is also finished. Back to user-mode pppoe.

2018-01-25 Building a testing server for pppoe
The new homeserver will have to run the same pppoe client setup as the current server. But I want to get the whole setup tested before the migration to minimize disruption.

Since I'm not going to get a free extra vdsl line and vdsl modem to test with, and the complicated part is in the pppoe and ppp client part, I decided to use a test vlan and set up a pppoe-server and ppp server on that vlan.

The pppoe server part is started with
# pppoe-server -I eth0.99 -C kzdoos -L -R
And it's indeed available from the client:
# pppoe-discovery -I eth2
Access-Concentrator: kzdoos
Got a cookie: 84 39 c6 51 13 fe 32 00 2c 06 2a b4 38 0e 30 87 46 7b 00 00
AC-Ethernet-Address: 00:1f:c6:59:76:f6
So that part works. Next is to get an actual ppp session working over it.

The server part was a bit of work as I want to get the whole configuration including password checks. Server configuration in /etc/ppp/pppoe-server-options on the server system:
lcp-echo-interval 10
lcp-echo-failure 2
ipv6 ,
And the client configuration in /etc/ppp/peers/dray-vdsl:
user testkees
password topsecret
ipv6 ,
maxfail 0
ipparam xs4all
lcp-echo-interval 10
lcp-echo-failure 6
pty "pppoe -I eth2"
Lots of options to make the setup exactly the same as the current one. It took a lot of tries before password authentication was working. I could not get the client-side password in /etc/ppp/pap-secrets to work, but as shown above the password in the ppp configuration did work.

And the setup in /etc/network/interfaces on the client is just the same as the known configuration:
iface pppdray inet ppp
        provider dray-vdsl

And it works!
# ifup pppdray
Plugin loaded.
# ifconfig ppp0
        inet  netmask  destination
        inet6 fe80::5254:ff:fe3c:2014  prefixlen 10  scopeid 0x20<link>
        ppp  txqueuelen 3  (Point-to-Point Protocol)
        RX packets 9  bytes 252 (252.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 9  bytes 202 (202.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
# ping -c 3
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=0.721 ms
64 bytes from icmp_seq=2 ttl=64 time=0.436 ms
64 bytes from icmp_seq=3 ttl=64 time=0.449 ms

--- ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2029ms
rtt min/avg/max/mdev = 0.436/0.535/0.721/0.132 ms
The MTU is not yet what I want, but the session is alive.
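The 1492 that plain PPPoE ends up with follows directly from the framing overhead: the PPPoE header (6 bytes) plus the PPP protocol field (2 bytes) have to fit inside the 1500-byte Ethernet payload:

```python
ETHERNET_MTU = 1500
PPPOE_HEADER = 6   # version/type, code, session id, length fields
PPP_PROTOCOL = 2   # PPP protocol field

ppp_mtu = ETHERNET_MTU - PPPOE_HEADER - PPP_PROTOCOL
print(ppp_mtu)  # 1492
```

Getting back to a full 1500 would need baby jumbo frames on the Ethernet side.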

2018-01-23 Avoiding the linux stateful firewall for some traffic
I was setting up a linux-based firewall on a busy ntp server, and to make sure everything worked as designed I added the usual:
iptables -A INPUT -j ACCEPT --protocol all -m state --state ESTABLISHED,RELATED
And after less than half an hour the system log started filling with
nf_conntrack: table full, dropping packet
nf_conntrack: table full, dropping packet
nf_conntrack: table full, dropping packet
nf_conntrack: table full, dropping packet
It is indeed a busy server. The solution is to exclude all the ntp traffic from the stateful firewall, which means I have to allow all ntp traffic (outgoing and incoming) with explicit rules.
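A back-of-the-envelope estimate shows why the table filled. Assuming stock defaults (nf_conntrack_max of 65536, and the 180-second nf_conntrack_udp_timeout_stream once traffic has been seen in both directions, as it is for NTP request/reply; both values are assumptions about this kernel):

```python
conntrack_max = 65536        # assumed default nf_conntrack_max
udp_stream_timeout = 180     # assumed timeout (s) for bidirectional UDP flows
requests_per_second = 1000   # from the post: somewhat over 1000 ntp req/s

# steady-state table entries if every client creates a tracked flow
steady_state = requests_per_second * udp_stream_timeout
print(steady_state)  # 180000 -- far more than the table can hold
```

So even at the stated rate the table is oversubscribed almost three to one, which matches it filling up within half an hour.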

The specific ruleset:
iptables -t raw -A PREROUTING --protocol udp --dport 123 -j NOTRACK
iptables -t raw -A OUTPUT --protocol udp --sport 123 -j NOTRACK

iptables -A INPUT -j ACCEPT --protocol udp --destination-port 123
I also made sure the rules for the ntp traffic are the first rules.

Traffic at this server is somewhat over 1000 ntp requests per second. So the counters of the NOTRACK rules go up fast.
# iptables -t raw -L -v
Chain PREROUTING (policy ACCEPT 1652K packets, 126M bytes)
 pkts bytes target     prot opt in     out     source               destination 
9635K  732M CT         udp  --  any    any     anywhere             anywhere             udp dpt:ntp NOTRACK
1650K  125M CT         udp  --  any    any     anywhere             anywhere             udp dpt:ntp NOTRACK

Chain OUTPUT (policy ACCEPT 1522K packets, 117M bytes)
 pkts bytes target     prot opt in     out     source               destination 
9029K  686M CT         udp  --  any    any     anywhere             anywhere             udp spt:ntp NOTRACK
1520K  116M CT         udp  --  any    any     anywhere             anywhere             udp spt:ntp NOTRACK
But no packets are dropped, which is good as this server is supposed to be under a constant DDoS.

2018-01-14 Recovering firmware on the Draytek Vigor 130 VDSL2 modem with linux / macosx
Note beforehand: I have not tested this procedure, every time I needed it it was faster to boot Windows to run the utility Draytek has available.

I needed the recovery procedure again: there was a new firmware 3.8.12 with newer VDSL modem driver and the standard update via the webinterface failed.

I just want to keep the notes from "OzCableguy" since his shop and blog have gone. I found the saved version of 'Updating Draytek firmware using the MacOS X or UNIX command line and TFTP - OzCableguy'.

Draytek modems have several methods available to update their firmware.

You can use the Firmware Upgrade Utility under Windows, load it from the web interface via HTTP, FTP the file to the modem or use the TFTP (Trivial File Transfer Protocol) service built into the box.

If your modem has been bricked you can’t use FTP or HTTP. If you don’t want to use Windows or go through the web interface, then this TFTP method is a viable alternative. Note that unlike a lot of other boxes using TFTP to load firmware, the Draytek is acting as a TFTP server, the UNIX/MacOS box as a client and you PUT the file onto the modem. It is normally the other way around, but that needs some extra setup steps that are conveniently avoided with this method.

The firmware comes in two pieces. Use the .rst version of the file if you want to change the modem settings back to factory defaults, use the .all file to keep the current settings (.all may not be a good option if the modem is bricked).

Secondly you need an ethernet interface on your Mac or UNIX box set to the subnet (eg: with IP address so that you can talk to the modem at its default IP address of

If the modem is up and running (and not bricked), you should now be able to ping it ..
$ ping
PING ( 56 data bytes
64 bytes from icmp_seq=0 ttl=255 time=0.309 ms
64 bytes from icmp_seq=1 ttl=255 time=0.421 ms
64 bytes from icmp_seq=2 ttl=255 time=0.409 ms
—- PING Statistics—-
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.309/0.380/0.421/0.050 ms
If your modem is really bricked then the ping will only work when the modem is actually in TFTP upload mode as below. You can ignore this step, it just demonstrates that the ethernet cable is working.

Now we can upload the firmware. With the modem powered off, press and hold the factory reset button, then power up the modem. Continue to hold the button down until ’some’ of the lights flash together. On the Vigor2820Vn ’some’ is the left column of three. On the 2800 and 2910 the left two LEDs flash.

Release the button and on your UNIX/MacOS box type the following commands (note that the modem only stays in TFTP mode for a short time, you can actually type right up to the end of the put command and just press return when the left-hand modem lights start flashing).

The name of the firmware and the number of bytes transmitted depend on the product you are trying to recover.
$ tftp
tftp> binary
tftp> put v2820_v03301_211011_A.rst
Sent 4973144 bytes in 13.1 seconds
tftp> quit
There will be a pause after the ‘put’ command, but your modem ethernet port light should be flashing madly. The transfer is done when you get the “Sent” message. Quit the TFTP client and perhaps your Terminal session, there’s nothing more to see.

What happens next isn’t really documented but we presume that the modem has to unpack the firmware and load it into flash. On our 2820Vn the column of 3 lights continued to flash, but gradually slowed down, speeded up, then slowed again. Eventually after a minute or two the modem rebooted in the normal fashion. Just be patient.
And this last bit is where the windows utility is better: it will tell you when the recovery is done and successful. With a command-line tool you'll just have to wait for the LEDs to blink right.

After all the recovery and the waiting the modem works again and the line is stable. I chose the 'modem6' version again. I may try the 'modem5' and 'modem4' version too to see whether I can get lower latency without losing stability. Although the improvement may be in the single digit millisecond range so it would be a lot of work for very little improvement.

2017-12-28 Learning Apache 2.4 access control
Before I expose anything to the outside world I want the access controls to work as I expect, but things have changed a lot in Apache 2.4.

The standard configuration for a site that's normally available is now, in 2.4:
        <Directory "/home/httpd/idefix/html">
                Require all granted
        </Directory>
(and any other needed options). But for development systems I want a username/password request to access them. This part took a bit of work to get right. First I found that 'Upgrading to 2.4 from 2.2 - Apache HTTP Server Version 2.4' has a repeating typo in the authorization samples:
AuthBasicProvider File
isn't going to work, giving
Unknown Authn provider: File
error messages. The right bit is:
AuthBasicProvider file
The difference one letter makes.

That also did not give me a working configuration, leading to interesting errors in the log of type:
AH00027: No authentication done but request not allowed without authentication for /. Authentication not configured?
Which turned out to be a missing bit in the samples in the same document: the AuthType is needed too.

The full, now working, access rule is:
    <Location "/">
        AuthType Basic
        AuthBasicProvider file
        AuthUserFile /home/httpd/data/sitemanagers
        AuthName "Koos z'n Doos beheer"
        <RequireAny>
            Require valid-user
        </RequireAny>
    </Location>
The use of RequireAny allows me to add trusted IP ranges so that the site is reachable from a trusted IP address or after using http basic authentication.
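What AuthType Basic actually checks is simple: per RFC 7617, the client sends an Authorization header containing base64 of user:password. A sketch with made-up credentials:

```python
import base64

def basic_auth_header(user, password):
    """Build the Authorization header a client sends for HTTP Basic auth
    (RFC 7617): base64 of 'user:password'."""
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return f"Authorization: Basic {token}"
```

Which is also why Basic auth is only acceptable over an encrypted connection: the base64 is trivially reversible.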

The good news is that the samples in Authentication and Authorization - Apache HTTP Server Version 2.4 are correct.

2017-12-28 Getting haproxy to do what I want
In the new homeserver I want haproxy running on the "router" so it can route http requests to the right backend. At the moment I am testing this, and after the 'http' config I'm now testing the 'https' part. To keep things consistent, requests that come in via https are also requested via https from the backends.

For testing I have some ports on the main server forwarded to haproxy so I can test all aspects of host-header based routing. After some searching I found out that when I visit the forwarded port, the Host header is set to something that does not exactly match the bare site name. And this wasn't routed to the 'development' server. The production server is the 'default', so I searched for the right incantation to test the domain name part and found:
acl devsite hdr_dom(host) -i
And now it's a config that I can test on port 8080 and that will run on port 80 too. I like configurations that I can test before bringing them into production.
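My understanding of why hdr_dom works where an exact header match failed: hdr_dom does a domain-boundary match, so extra labels on the left and a :port on the right don't break it. A toy version of that matcher (my approximation, not haproxy code):

```python
def hdr_dom_match(host_header, pattern):
    """Approximate haproxy's hdr_dom: case-insensitive match of 'pattern' as
    a complete domain suffix, ignoring an optional :port."""
    host = host_header.lower().split(":")[0]
    pattern = pattern.lower()
    return host == pattern or host.endswith("." + pattern)
```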

2017-12-28 Non-predictable interface names biting me
While doing some upgrades on the new homeserver I ran into a problem with the tun/tap network driver, which is needed for virtual machines; it gave the error message
Dec 27 21:41:51 conway kernel: [  266.832675] tun: Unknown symbol dev_get_valid_name (err 0)
Since virtual machines are the main thing to run on this machine I needed this driver to work. Searching for solutions turned up the suggestion to reinstall the linux kernel image, which I did:
# apt-get install --reinstall linux-image-$(uname -r)
# apt-mark auto linux-image-$(uname -r)
After that the system came up fine, but seemingly without a network connection. This is irritating as the homeserver is in the attic, and I found out the VGA screen up there does not cooperate with the new server. So another VGA screen got dragged up there to fix it.

Some searching later I found the eth2 and eth3 interfaces got swapped from what I expected. These are the two mainboard interfaces, both Intel interfaces but with different chipsets. There is a /etc/udev/rules.d/70-persistent-net.rules which sets this up but it isn't working at the moment:

In the system logs:
[    2.833442] udevd[542]: Error changing net interface name eth2 to eth3: File exists
[    2.834309] udevd[542]: could not rename interface '4' from 'eth2' to 'eth3': File exists
[    2.866356] udevd[538]: Error changing net interface name eth3 to eth2: File exists
[    2.868197] udevd[538]: could not rename interface '5' from 'eth3' to 'eth2': File exists
Maybe different names that don't start with eth will work better to get truly persistent names, as the current situation isn't very stable and reliable.

After all the work the tun/tap driver works again so the virtual machines now start fine.

2017-11-13 Linux and enabling NFSv4 name mapping
Note: even with full name mapping enabled you will still have problems. To get this mapping fully working you will need to establish trust relations via kerberos.

When I shared my article on NFSv4 on the synology I noticed I left out the fundamentals about Linux and NFSv4 with name mapping. All kernels I nowadays run into have the same default of disabling the use of names over NFSv4, because somewhere the decision was made to assume most Linux systems will be in an environment with centralized UID/GID management.

In any environment with devices with their own UID/GID management (such as synology devices without central LDAP) this will not be true. So the defaults need an override.

The runtime way to change this is, for the nfs client kernel process:
# echo N > /sys/module/nfs/parameters/nfs4_disable_idmapping
And for the nfsd server kernel process:
# echo N > /sys/module/nfsd/parameters/nfs4_disable_idmapping
Notice the one letter difference.

To make this change more permanent, set up a file with a name like /etc/modprobe.d/local-config.conf with
options nfs nfs4_disable_idmapping=0
options nfsd nfs4_disable_idmapping=0
And you still need to set /etc/idmapd.conf on all systems involved (both clients and servers) with the same value for the 'Domain'. I obviously have:
[General]
Verbosity = 0
Pipefs-Directory = /run/rpc_pipefs
# set your own domain here, if it differs from FQDN minus hostname
Domain = 

[Mapping]
Nobody-User = nobody
Nobody-Group = nogroup
And enable idmapd. How you enable this depends on your Linux distribution. In ubuntu server it's in /etc/default/nfs-common with
# Do you want to start the idmapd daemon? It is only needed for NFSv4.
NEED_IDMAPD=yes
2017-11-10 Really disabling framebuffer on a modern linux
Framebuffer is nice but I want it really disabled on my new homeserver 2017 because that will end up in the attic where I don't want a repeat of the earlier Linux-related radio interference problem. And for virtual machines it's a bit of overkill too.

To disable framebuffer in both grub and the running Linux it has to be disabled twice. Both in /etc/default/grub which now has these two lines:


2017-11-10 NFSv4 on the synology isn't complete NFSv4 until you do some special configuration
This solution fails the moment I start using rsync to sync directories to the Synology. An update will follow when I find out where that goes wrong.

I am now using a synology for storage in the home network. Linux clients use NFS to access the Synology, and nowadays the default NFS version is version 4, which does things quite differently from version 3. NFS version 4 is supposed to use user names with NFS domain names and rpc.idmapd instead of numeric user and group IDs.

After serious debugging I found out NFSv4 with the synology doesn't use names as I expected. I kept looking at nfs client settings but eventually I used tcpdump, wireshark and tshark to find out owner names aren't used at all. Numerical UIDs are used as text in the NFSv4 answers, even for files that have an owner that is known in the synology. As if the nfs4_disable_idmapping=0 is never set for the NFS server.

I confirmed this with capturing the NFS traffic with tcpdump and analyzing the pcap files with wireshark and tshark. I indeed see:
                        reco_attr: Owner (36)
                            fattr4_owner: 1026
                                length: 4
                                contents: 1026

A lot of google searching confirms this, including 'anyone have nfsv4 actually working? - Synology Forum'. The next step is to adjust the idmapping in the running kernel on the synology, using:
# echo N > /sys/module/nfsd/parameters/nfs4_disable_idmapping
Now I indeed see the right strings in the NFSv4 traffic, but the idmapd on the client doesn't translate for some reason. Fixing the /etc/idmapd.conf file helped.

The next step is to make this change permanent on the synology. Adding a file /etc/modules.local.conf with
options nfsd nfs4_disable_idmapping=0
does the trick. This I learned from reading the startup file /etc/rc.subr which loads the kernel modules.

And now I see the right data in the NFS traffic:
                        reco_attr: Owner (36)
                                length: 15
And the user mapping works. On an older system I have UID 501, on the synology I have UID 1026 and on a new system I have UID 1000, and I'm owner of the files everywhere.
