2018-07-08 Automating Let's Encrypt certificates further
Over two years ago I started using Let's Encrypt certificates. Recently I wanted to automate this a step further and found dehydrated, which helps a lot in automating certificate renewal with minimal hassle. The first thing I fixed was http-based verification. The webserver has been set up to make all .well-known/acme-challenge directories end up in one place on the filesystem, and it turns out this works great with dehydrated. I created a separate user for dehydrated and gave that user write permission on the /home/httpd/html/.well-known/acme-challenge directory. It also needs write access to /etc/dehydrated for its own state. I changed /etc/dehydrated/config with:

```
CHALLENGETYPE="http-01"
WELLKNOWN="/home/httpd/html/.well-known/acme-challenge"
```

Now it was possible to request certificates based on a .csr file. I used this to get a new certificate for the home webserver, and it turned out to be easier than the previous setup based on letsencrypt-nosudo.
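Requesting a certificate from an existing CSR with dehydrated can be sketched as below; the file paths are hypothetical examples and the one-time account registration step is an assumption, so check the dehydrated documentation for your version.

```shell
# One-time: register an account with the ACME server
dehydrated --register --accept-terms

# Sign an existing CSR; dehydrated writes the certificate to stdout
# (paths are hypothetical examples)
dehydrated --signcsr /etc/ssl/private/www.example.org.csr > /etc/ssl/certs/www.example.org.pem
```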
2018-06-23 SMART can be wrong
Someone brought me a 'WD My Cloud' that does not respond at all. So I took it apart and found out how to access the disk in an i386 Linux system: mount the fourth partition as ext4. When the disk was available I ran a SMART test:

```
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
```

But while trying to find out how much data is actually on the disk, I get:

```
[  866.165641] Sense Key : Medium Error [current] [descriptor]
[  866.165645] Descriptor sense data with sense descriptors (in hex):
[  866.165647]         72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00
[  866.165659]         b0 90 ea 60
[  866.165664] sd 2:0:0:0: [sda]
[  866.165668] Add. Sense: Unrecovered read error - auto reallocate failed
```

So the disk isn't very healthy, but rerunning the SMART check still shows nothing wrong. It is a Western Digital 'Red' harddisk, made especially for NAS systems, so it should return errors to the operating system earlier. But this disk is bad, which is probably related to why the 'My Cloud' enclosure isn't working.
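The overall health verdict can pass while individual sectors are already failing; the raw SMART attributes and a surface-reading self-test usually tell more. A sketch with smartmontools (the device name is an assumption):

```shell
# Overall health verdict (this was 'PASSED' here despite the read errors)
smartctl -H /dev/sda

# Raw attributes: reallocated and pending sectors are the interesting ones
smartctl -A /dev/sda | egrep -i 'Reallocated|Pending|Uncorrect'

# A long self-test actually reads the surface and can expose such errors
smartctl -t long /dev/sda
```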
2018-06-17 Apache 2.2 Proxy and default block for everything but the .well-known/acme-challenge urls
I'm setting up a website on a new virtual machine on the new homeserver and I want a valid letsencrypt certificate. It's a site I don't want to migrate, so I'll have to use the Apache proxy on the 'old' server to allow the site to be accessed via IPv4/IPv6 (for consistency I am now setting up everything via a proxy). So first I set up a proxy to pass all requests for the new server to the backend, something like:

```
ProxyPass / http://newsite-back.idefix.net/
ProxyPassReverse / http://newsite-back.idefix.net/
```

But now the requests for /.well-known/acme-challenge also go there, and they are blocked by a username/password prompt since the new site is not open yet. To set up the proxy correctly AND avoid the username checks for /.well-known/acme-challenge the order has to be correct: in the ProxyPass rules the rule for the specific URL has to come first, and in the Location setup it has to come last.

```
ProxyPass /.well-known/acme-challenge !
ProxyPass / http://newsite-back.idefix.net/
ProxyPassReverse / http://newsite-back.idefix.net/

<Location />
    Deny from all
    AuthName "Site not open yet"
    [..]
</Location>

<Location /.well-known/acme-challenge>
    Order allow,deny
    Allow from all
</Location>
```

And now the acme-challenge is done locally on the server and all other requests get forwarded to the backend after authentication.
2018-05-03 The preferring IPv6 policy is working
Yesterday I changed some IPv4 addresses on virtual machines on the new homeserver to make autofs work. This is a known issue with autofs: autofs does not appear to support IPv6 hostname lookups for NFS mounts - Debian Bug #737679, and for me the easy solution is to do NFS mounts over RFC1918 IPv4 addresses. I prefer autofs over 'fixed' NFS mounts for those filesystems that are nice to have available but aren't needed constantly. It took about 9 hours before arpwatch on the central router noticed the new activity. I guess the policy to try to do everything over IPv6 is working.
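The autofs side of such a setup can be sketched as below; the mount point, map file name, server address and export path are all hypothetical examples.

```
# /etc/auto.master: hand the /mnt/nfs mount point to the auto.nfs map
/mnt/nfs  /etc/auto.nfs  --timeout=300

# /etc/auto.nfs: mount the export on demand over an RFC1918 IPv4
# address, sidestepping the IPv6 hostname lookup issue
data  -fstype=nfs4,soft  192.168.99.10:/volume1/data
```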
2018-04-24 KVM and os-specific defaults
Today I wanted to install a new virtual machine on the new homeserver and virt-install gave me a new warning:

```
WARNING  No operating system detected, VM performance may suffer. Specify an OS with --os-variant for optimal results.
```

According to the virt-install manpage the value for --os-variant can be found with osinfo-query os, which I can't find in Devuan jessie. But the same information is available via Installing Virtual Machines with virt-install, plus copy pastable distro install one-liners. I chose debian7 as that is probably the closest to a Devuan jessie that will be upgraded to Devuan ascii immediately. The interesting change is that the resulting Linux guest suddenly has virtio network cards and a disk /dev/vda. That last bit is quite different from earlier virtual machines.
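Such an install might be invoked as in the sketch below; the machine name, memory and disk sizes and the install medium are hypothetical examples, only the --os-variant value comes from the text above.

```shell
virt-install \
    --name testvm \
    --ram 1024 \
    --disk size=10 \
    --os-variant debian7 \
    --cdrom /var/lib/libvirt/images/devuan_ascii_netinst.iso
```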
2018-04-06 Keeping squid webproxy running for network mismatches
I considered stopping using squid when upgrading to the new homeserver but I have now changed that decision: I need to keep it running for applications that want to do http connections to IPv6-only systems but can't handle those. There are some old scripts running that need it but it's also the way to fix the problem I noticed with linuxcounter.
2018-04-06 25 years of Linux use
In looking at a problem with the linuxcounter script I noticed I am now passing the 25 years with Linux mark. I first saw it in the beginning of 1993 when part of my internship happened at the 'expa' lab (if I recall correctly) of Hogeschool Utrecht with SLS Linux. Anyway, still using Linux a lot. It's been an interesting 25 years!
2018-01-27 I caused an interesting problem with the VDSL pppoe session
Normally being active on certain HF bands causes one-time VDSL disconnects, but what I have done now seems to have triggered something else: after the connection dropped it refuses to come back. The entire session looks like:

```
22:49:28.466922 PPPoE PADI [Service-Name]
22:49:28.490394 PPPoE PADO [AC-Name "dr12.d12"] [Service-Name] [AC-Cookie 0xA3FE109A222CE73945C23FCE85E03F83] [EOL]
22:49:28.490603 PPPoE PADR [Service-Name] [AC-Cookie 0xA3FE109A222CE73945C23FCE85E03F83]
22:49:28.517063 PPPoE PADS [ses 0x40c] [Service-Name] [AC-Name "dr12.d12"] [AC-Cookie 0xA3FE109A222CE73945C23FCE85E03F83] [EOL]
22:49:28.575266 PPPoE [ses 0x40c] LCP, Conf-Request (0x01), id 72, length 16
22:49:28.575776 PPPoE [ses 0x40c] LCP, Conf-Request (0x01), id 99, length 22
22:49:28.575798 PPPoE [ses 0x40c] LCP, Conf-Reject (0x04), id 72, length 10
22:49:28.589161 PPPoE [ses 0x40c] LCP, Conf-Ack (0x02), id 99, length 22
22:49:28.589164 PPPoE [ses 0x40c] LCP, Conf-Request (0x01), id 73, length 12
22:49:28.589666 PPPoE [ses 0x40c] LCP, Conf-Ack (0x02), id 73, length 12
22:49:28.589682 PPPoE [ses 0x40c] LCP, Echo-Request (0x09), id 0, length 10
22:49:28.589693 PPPoE [ses 0x40c] CCP, Conf-Request (0x01), id 89, length 17
22:49:28.589702 PPPoE [ses 0x40c] IPCP, Conf-Request (0x01), id 89, length 18
22:49:28.589711 PPPoE [ses 0x40c] IP6CP, Conf-Request (0x01), id 89, length 16
22:49:28.603265 PPPoE [ses 0x40c] LCP, Echo-Reply (0x0a), id 0, length 10
22:49:28.603267 PPPoE [ses 0x40c] LCP, Term-Request (0x05), id 74, length 6
22:49:28.604033 PPPoE [ses 0x40c] LCP, Term-Ack (0x06), id 74, length 6
22:49:31.623454 PPPoE PADT [ses 0x40c] [Generic-Error "RP-PPPoE: System call error: Input/output error"] [AC-Cookie 0xA3FE109A222CE73945C23FCE85E03F83]
```

So in the end the router at my ISP decides to terminate the connection. When the connection started failing I changed the configuration to use the kernel-mode pppoe driver, but after this kept showing up I reverted that change.
That made no difference: the connection is still not coming up. Update: I went looking at other changes I made to enable the pppoe server test, and reverting the /etc/ppp/pap-secrets file to its original format fixed the problem. I guess I had somehow started to authenticate the remote end. And changing from user-mode pppoe to kernel-mode pppoe does lower the MTU to 1492, so that test is also finished. Back to user-mode pppoe.
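For reference, an /etc/ppp/pap-secrets entry has the format client, server, secret and optional allowed IP addresses; a minimal client-side sketch, using the test credentials that also appear in the pppoe test setup:

```
# /etc/ppp/pap-secrets
# client    server  secret
testkees    *       "topsecret"
```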
2018-01-25 Building a testing server for pppoe
The new homeserver will have to run the same pppoe client setup as the current server, but I want the whole setup tested before the migration to minimize disruption. Since I'm not going to get a free extra vdsl line and vdsl modem to test with, and the complicated part is in the pppoe and ppp client part, I decided to use a test vlan and set up a pppoe server and ppp server on that vlan. The pppoe server part is started with:

```
# pppoe-server -I eth0.99 -C kzdoos -L 172.16.19.1 -R 172.16.21.19
```

And it's indeed available from the client:

```
# pppoe-discovery -I eth2
Access-Concentrator: kzdoos
Got a cookie: 84 39 c6 51 13 fe 32 00 2c 06 2a b4 38 0e 30 87 46 7b 00 00
--------------------------------------------------
AC-Ethernet-Address: 00:1f:c6:59:76:f6
```

So that part works. Next is to get an actual ppp session working over it. The server part was a bit of work as I want to get the whole configuration working, including password checks. Server configuration in /etc/ppp/pppoe-server-options on the server system:

```
require-pap
lcp-echo-interval 10
lcp-echo-failure 2
hide-password
noipx
ipv6 ,
```

And the client configuration in /etc/ppp/peers/dray-vdsl:

```
user testkees
password topsecret
+pap
noauth
noipdefault
ipv6 ,
ipv6cp-use-persistent
defaultroute
persist
maxfail 0
noproxyarp
ipparam xs4all
lcp-echo-interval 10
lcp-echo-failure 6
pty "pppoe -I eth2"
```

Lots of options to make the setup exactly the same as the current one. It took a lot of tries before password authentication was working: I could not get the client-side password in /etc/ppp/pap-secrets to work, but as shown above the password in the ppp configuration did work. And the setup in /etc/network/interfaces on the client is just the same as the known configuration:

```
iface pppdray inet ppp
    provider dray-vdsl
```

And it works!

```
# ifup pppdray
Plugin rp-pppoe.so loaded.
```
```
# ifconfig ppp0
ppp0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1492
        inet 172.16.21.45  netmask 255.255.255.255  destination 172.16.19.1
        inet6 fe80::5254:ff:fe3c:2014  prefixlen 10  scopeid 0x20<link>
        ppp  txqueuelen 3  (Point-to-Point Protocol)
        RX packets 9  bytes 252 (252.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 9  bytes 202 (202.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
# ping -c 3 172.16.19.1
PING 172.16.19.1 (172.16.19.1) 56(84) bytes of data.
64 bytes from 172.16.19.1: icmp_seq=1 ttl=64 time=0.721 ms
64 bytes from 172.16.19.1: icmp_seq=2 ttl=64 time=0.436 ms
64 bytes from 172.16.19.1: icmp_seq=3 ttl=64 time=0.449 ms

--- 172.16.19.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2029ms
rtt min/avg/max/mdev = 0.436/0.535/0.721/0.132 ms
```

The MTU is not yet what I want, but the session is alive.
2018-01-23 Avoiding the Linux stateful firewall for some traffic
I was setting up a Linux-based firewall on a busy ntp server, and to make sure everything worked as designed I added the usual:

```
iptables -A INPUT -j ACCEPT --protocol all -m state --state ESTABLISHED,RELATED
```

And after less than half an hour the system log started filling with:

```
nf_conntrack: table full, dropping packet
nf_conntrack: table full, dropping packet
nf_conntrack: table full, dropping packet
nf_conntrack: table full, dropping packet
```

It is indeed a busy server. The solution is to exclude all the ntp traffic from the stateful firewall, which means I have to allow all kinds of ntp traffic (outgoing and incoming) explicitly. The specific ruleset:

```
iptables -t raw -A PREROUTING --protocol udp --dport 123 -j NOTRACK
iptables -t raw -A OUTPUT --protocol udp --sport 123 -j NOTRACK
iptables -A INPUT -j ACCEPT --protocol udp --destination-port 123
```

I also made sure the rules for the ntp traffic are the first rules. Traffic at this server is somewhat over 1000 ntp requests per second, so the counters of the NOTRACK rules go up fast:

```
# iptables -t raw -L -v
Chain PREROUTING (policy ACCEPT 1652K packets, 126M bytes)
 pkts bytes target prot opt in  out source   destination
9635K  732M CT     udp  --  any any anywhere anywhere     udp dpt:ntp NOTRACK
1650K  125M CT     udp  --  any any anywhere anywhere     udp dpt:ntp NOTRACK

Chain OUTPUT (policy ACCEPT 1522K packets, 117M bytes)
 pkts bytes target prot opt in  out source   destination
9029K  686M CT     udp  --  any any anywhere anywhere     udp spt:ntp NOTRACK
1520K  116M CT     udp  --  any any anywhere anywhere     udp spt:ntp NOTRACK
```

But no packets are dropped, which is good as this server is supposed to be under a constant DDoS.
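To see how close the connection tracking table is to overflowing, and what the alternative of just raising the limit would involve, the conntrack sysctls can be inspected; a sketch (the raised value is an arbitrary example):

```shell
# Current number of tracked connections versus the table limit
sysctl net.netfilter.nf_conntrack_count
sysctl net.netfilter.nf_conntrack_max

# Raising the limit would be the other option, at the cost of kernel memory
sysctl -w net.netfilter.nf_conntrack_max=262144
```

Excluding the high-volume traffic from tracking, as above, scales better than raising the limit, since every tracked ntp flow would otherwise occupy a conntrack entry until it times out.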
2018-01-14 Recovering firmware on the Draytek Vigor 130 VDSL2 modem with linux / macosx
I needed the recovery procedure again: there was a new firmware 3.8.12 with a newer VDSL modem driver and the standard update via the webinterface failed. I just want to keep the notes from "OzCableguy" since his shop and blog have gone; I found the saved version via archive.org, Updating Draytek firmare using the MacOS X or UNIX command line and TFTP - OzCableguy.

Draytek modems have several methods available to update their firmware. You can use the Firmware Upgrade Utility under Windows, load it from the web interface via HTTP, FTP the file to the modem or use the TFTP (Trivial File Transfer Protocol) service built into the box. If your modem has been bricked you can't use FTP or HTTP. If you don't want to use Windows or go through the web interface, then this TFTP method is a viable alternative. Note that unlike a lot of other boxes using TFTP to load firmware, the Draytek acts as a TFTP server, the UNIX/MacOS box as a client, and you PUT the file onto the modem. It is normally the other way around, but that needs some extra setup steps that are conveniently avoided with this method. The firmware comes in two pieces: use the .rst version of the file if you want to change the modem settings back to factory defaults, use the .all file to keep the current settings (.all may not be a good option if the modem is bricked). Secondly you need an ethernet interface on your Mac or UNIX box set to the subnet 192.168.1.0 (e.g. with IP address 192.168.1.10) so that you can talk to the modem at its default IP address of 192.168.1.1. If the modem is up and running (and not bricked), you should now be able to ping it.

And this last bit is where the Windows utility is better: it will tell you when the recovery is done and a success. With a commandline tool you'll just have to wait for the leds to blink right.
After all the recovery and the waiting the modem works again and the line is stable. I chose the 'modem6' version again. I may try the 'modem5' and 'modem4' versions too to see whether I can get lower latency without losing stability, although the improvement may be in the single digit millisecond range, so it would be a lot of work for very little improvement.

```
$ ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1): 56 data bytes
64 bytes from 192.168.1.1: icmp_seq=0 ttl=255 time=0.309 ms
64 bytes from 192.168.1.1: icmp_seq=1 ttl=255 time=0.421 ms
64 bytes from 192.168.1.1: icmp_seq=2 ttl=255 time=0.409 ms
^C
---- 192.168.1.1 PING Statistics ----
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.309/0.380/0.421/0.050 ms
$
```

If your modem is really bricked then the ping will only work when the modem is actually in TFTP upload mode as below. You can ignore this step, it just demonstrates that the ethernet cable is working. Now we can upload the firmware. With the modem powered off, press and hold the factory reset button, then power up the modem. Continue to hold the button down until 'some' of the lights flash together. On the Vigor2820Vn 'some' is the left column of three; on the 2800 and 2910 the left two LEDs flash. Release the button and on your UNIX/MacOS box type the following commands (note that the modem only stays in TFTP mode for a short time; you can actually type right up to the end of the put command and just press return when the left-hand modem lights start flashing). The name of the firmware file and the number of bytes transmitted depend on the product you are trying to recover.

```
$ tftp 192.168.1.1
tftp> binary
tftp> put v2820_v03301_211011_A.rst
Sent 4973144 bytes in 13.1 seconds
tftp> quit
$
```

There will be a pause after the 'put' command, but your modem ethernet port light should be flashing madly. The transfer is done when you get the "Sent" message.
Quit the TFTP client and perhaps your Terminal session, there's nothing more to see. What happens next isn't really documented, but we presume that the modem has to unpack the firmware and load it into flash. On our 2820Vn the column of 3 lights continued to flash, but gradually slowed down, sped up, then slowed again. Eventually after a minute or two the modem rebooted in the normal fashion. Just be patient.
2017-12-28 Learning Apache 2.4 access control
Before I expose anything to the outside world I want the access controls to work as I expect, but things have changed a lot in Apache 2.4. The standard for a site that's normally available is now in 2.4:

```
<Directory "/home/httpd/idefix/html">
    Require all granted
</Directory>
```

(and any other needed options). But for development systems I want a username/password request to access them. This part took a bit of work to get right. First I found Upgrading to 2.4 from 2.2 - Apache HTTP Server Version 2.4 has a repeating typo in the authorization samples:

```
AuthBasicProvider File
```

isn't going to work, giving

```
Unknown Authn provider: File
```

error messages. The right bit is:

```
AuthBasicProvider file
```

The difference one letter makes. That still did not give me a working configuration, leading to interesting errors in the log of the type:

```
AH00027: No authentication done but request not allowed without authentication for /. Authentication not configured?
```

Which turned out to be a missing bit in the samples in the same document: the AuthType is needed too. The full, now working, access rule is:

```
<Location "/">
    AuthType Basic
    AuthBasicProvider file
    AuthUserFile /home/httpd/data/sitemanagers
    AuthName "Koos z'n Doos beheer"
    <RequireAny>
        Require valid-user
    </RequireAny>
</Location>
```

The use of RequireAny allows me to add trusted IP ranges, so that the site is reachable from a trusted IP address or after using http basic authentication. The good news is that the samples in Authentication and Authorization - Apache HTTP Server Version 2.4 are correct.
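The AuthUserFile itself can be created with the htpasswd tool from the Apache utilities; the username here is a hypothetical example:

```shell
# -c creates the file; omit -c when adding further users to an existing file
htpasswd -c /home/httpd/data/sitemanagers sitemanager
```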
2017-12-28 Getting haproxy to do what I want
In the new homeserver I want an haproxy running on the "router" so it can route http requests to the right backend. At the moment I am testing this: after the 'http' config I'm now testing the 'https' part. To keep things consistent, things that come in via https also get requested via https from the backends. For testing I have some ports on the main server forwarded to haproxy so I can test all aspects of host-header based routing. After some searching I found out that when I visit http://developer.urlurl.org:8080/ the header is set to:

```
Host: developer.urlurl.org:8080
```

And this wasn't routed to the 'development' server. The production server is the 'default', so I searched for the right incantation to test only the domain name part and found:

```
acl devsite hdr_dom(host) -i developer.urlurl.org
```

And now it's a config that will test on developer.urlurl.org port 8080 and will run on port 80 too. I like configurations that I can test before bringing them into production.
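In context, the acl fits into a frontend/backend layout roughly like the sketch below; the backend names and server addresses are hypothetical, only the acl line comes from the text above.

```
# haproxy.cfg fragment (sketch)
frontend http-in
    bind :80
    bind :8080
    # hdr_dom(host) matches the domain part, so a :port suffix
    # in the Host header does not break the match
    acl devsite hdr_dom(host) -i developer.urlurl.org
    use_backend development if devsite
    default_backend production

backend development
    server dev 192.0.2.10:80

backend production
    server prod 192.0.2.20:80
```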
2017-12-28 Non-predictable interface names biting me
While doing some upgrades on the new homeserver I ran into a problem with the tun/tap network driver which is needed for virtual machines, giving the error message:

```
Dec 27 21:41:51 conway kernel: [  266.832675] tun: Unknown symbol dev_get_valid_name (err 0)
```

Since virtual machines are the main thing to run on this machine I needed this driver to work. Searching for solutions found the suggestion to reinstall the linux kernel image, which I did:

```
# apt-get install --reinstall linux-image-$(uname -r)
# apt-mark auto linux-image-$(uname -r)
```

After which the system came up fine, but seemingly without a network connection. This is irritating as the homeserver is in the attic and I found out the VGA screen up there does not cooperate with the new server, so another VGA screen got dragged up there to fix it. Some searching later I found the eth2 and eth3 interfaces got swapped from what I expected. These are the two mainboard interfaces, both Intel interfaces but with different chipsets. There is a /etc/udev/rules.d/70-persistent-net.rules which sets this up, but it isn't working at the moment. In the system logs:

```
[    2.833442] udevd: Error changing net interface name eth2 to eth3: File exists
[    2.834309] udevd: could not rename interface '4' from 'eth2' to 'eth3': File exists
[    2.866356] udevd: Error changing net interface name eth3 to eth2: File exists
[    2.868197] udevd: could not rename interface '5' from 'eth3' to 'eth2': File exists
```

Maybe different names that don't start with eth will work better to get truly persistent names, as the current situation isn't very stable and reliable. After all the work the tun/tap driver works again, so the virtual machines now start fine.
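Renaming into a namespace the kernel doesn't use itself avoids exactly this 'File exists' collision, since the target name can never be occupied by a kernel-assigned ethN. A sketch of such rules in /etc/udev/rules.d/70-persistent-net.rules; the MAC addresses and target names are hypothetical examples:

```
# Names outside the kernel's ethN namespace cannot collide with
# the probe-order names the kernel hands out at boot
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:33:44:55", NAME="lan0"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:33:44:56", NAME="lan1"
```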
2017-11-13 Linux and enabling NFSv4 name mapping
Note: even with full name mapping enabled you will still have problems; to get this mapping fully working you will need to establish trust relations via kerberos. When I shared my article on NFSv4 on the synology I noticed I left out the fundamentals about Linux and NFSv4 with name mapping. All kernels I nowadays run into have the same preference to disable using names over NFSv4, because somewhere the decision was made to assume most Linux systems will be in an environment with centralized UID/GID management. In any environment with devices that do their own UID/GID management (such as synology devices without central LDAP) this will not be true, so the defaults need an override. The runtime way to change this, for the nfs client kernel process:

```
# echo N > /sys/module/nfs/parameters/nfs4_disable_idmapping
```

And for the nfsd server kernel process:

```
# echo N > /sys/module/nfsd/parameters/nfs4_disable_idmapping
```

Notice the one letter difference. To make this change more permanent, set up a file with a name like /etc/modprobe.d/local-config.conf with:

```
options nfs nfs4_disable_idmapping=0
options nfsd nfs4_disable_idmapping=0
```

And you still need to set up /etc/idmapd.conf on all systems involved (both clients and servers) with the same value for the 'Domain'. I obviously have:

```
[General]
Verbosity = 0
Pipefs-Directory = /run/rpc_pipefs
# set your own domain here, if id differs from FQDN minus hostname
Domain = idefix.net

[Mapping]
Nobody-User = nobody
Nobody-Group = nogroup
```

And enable idmapd. How you enable this depends on your Linux distribution; in Ubuntu server it's in /etc/default/nfs-common with:

```
# Do you want to start the idmapd daemon? It is only needed for NFSv4.
NEED_IDMAPD=yes
```
2017-11-10 Really disabling framebuffer on a modern linux
Framebuffer is nice, but I want it really disabled on my new homeserver 2017 because that will end up in the attic, where I don't want a repeat of the earlier Linux-related radio interference problem. And for virtual machines it's a bit of overkill too. To disable framebuffer in both grub and the running Linux it has to be disabled twice. Both in /etc/default/grub, which now has these two lines:

```
GRUB_CMDLINE_LINUX_DEFAULT="nomodeset"
GRUB_TERMINAL=console
```
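Changes to /etc/default/grub only take effect after regenerating the grub configuration; a sketch (run as root):

```shell
# Regenerate /boot/grub/grub.cfg from /etc/default/grub
update-grub
# equivalent to: grub-mkconfig -o /boot/grub/grub.cfg
```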
2017-11-10 NFSv4 on the synology isn't complete NFSv4 until you do some special configuration
This solution fails the moment I start using rsync to sync directories to the synology; update when I find out where that goes wrong. I am now using a synology for storage in the home network. Linux clients use NFS to access the synology, and nowadays the default NFS version is version 4, which does things quite differently from version 3: NFS version 4 is supposed to use user names with NFS domain names and rpc.idmapd instead of numeric user and group IDs. After serious debugging I found out NFSv4 with the synology doesn't use names as I expected. I kept looking at nfs client settings, but eventually I used tcpdump, wireshark and tshark to find out owner names aren't used at all: numerical UIDs are used as text in the NFSv4 answers, even for files that have an owner that is known on the synology, as if nfs4_disable_idmapping=0 is never set for the NFS server. I confirmed this by capturing the NFS traffic with tcpdump and analyzing the pcap files with wireshark and tshark. I indeed see:

```
reco_attr: Owner (36)
    fattr4_owner: 1026
        length: 4
        contents: 1026
```

A lot of google searching confirms this, including anyone have nfsv4 actually working? - Synology Forum. The next step is to adjust the idmapping in the running kernel on the synology, using:

```
# echo N > /sys/module/nfsd/parameters/nfs4_disable_idmapping
```

Now I indeed see the right strings in the NFSv4 traffic, but the idmapd on the client doesn't translate them for some reason; fixing the /etc/idmapd.conf file helped. The next step is to make this change permanent on the synology. Adding a file /etc/modules.local.conf with

```
module_nfsd_args="nfs4_disable_idmapping=0"
```

does the trick. This I learned from reading the startup file /etc/rc.subr which loads the kernel modules. And now I see the right data in the NFS traffic:

```
reco_attr: Owner (36)
    fattr4_owner: firstname.lastname@example.org
        length: 15
        contents: email@example.com
```

And the user mapping works.
On an older system I have UID 501, on the synology I have UID 1026 and on a new system I have UID 1000, and I'm owner of the files everywhere.
2017-10-15 Getting to play VIC-20 games again
Ages ago my first home computer was a Commodore VIC-20. I did BASIC programming on it and played some games; I remember the game Centipede and loading games from audio cassette. These days games seem to be enormously complex and expensive, or filled with advertisements. I don't like these; the last time I seriously invested time in a game was Pinball Dreams. I found out about the VIC-20 emulator xvic, part of the vice package. I even bought a cheap USB joystick to use: I never had a joystick with my VIC-20, so it was about time to get one. This joystick is a DragonRise Inc. Generic USB Joystick (yes, including the spaces) and I noticed today it wasn't working right: up and down on the joystick did not work. I found out eventually that left and right on the second stick mapped to up and down, thanks to a simple joystick tester from Joystick - Denialwiki in 7 lines of BASIC. Some searching found DragonRise USB Driver Issue - RetroPie which mentions this issue in hid-dr.ko happened in Linux 4.4 - 4.9. I did not feel like going back to compiling my own kernels for this laptop, but there is a simple solution in Ubuntu 16.04: use hwe (hardware enablement) kernels. These seem to be aimed at the long term support server versions, but they fix my joystick problem and I can play Centipede.
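On the Linux side the axis mapping can also be checked with jstest from the joystick package; the device node is the usual first-joystick default and may differ on your system:

```shell
# Shows all axes and buttons with their live values; moving the stick
# reveals which logical axis each physical direction ends up on
jstest --normal /dev/input/js0
```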
2017-10-11 Haproxy on the new home server and devuan upgrades
I got around again to working on the new homeserver 2017, and I worked on the installation of a 'testing' virtual machine with virt-install. This test machine also runs Devuan linux. The first application I was testing on there is haproxy. I noticed some defaults I did not expect (such as preferring IPv4 over IPv6); it seems 'stable' Devuan has the same age issues as 'stable' Debian. Otherwise haproxy does what it is supposed to do and I may standardize on it. Upgrading was easy: I looked at Upgrading Devuan Jessie to Ascii and just changed jessie to ascii in /etc/apt/sources.list and did an apt-get dist-upgrade. The only minor issue afterwards is that the system now insists on using framebuffer video, which I find overkill for a virtual machine; VGA 80x25 is fine.
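The upgrade steps can be sketched as below (run as root); as always with a dist-upgrade, review the changed sources file before confirming the upgrade.

```shell
# Point apt at the next release and upgrade
sed -i 's/jessie/ascii/g' /etc/apt/sources.list
apt-get update
apt-get dist-upgrade
```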
2017-10-09 Interesting NFS exports problem
I am used to being unable to unmount filesystems as long as they are NFS exported, but it took me a while to find out how to correctly unexport filesystems before trying to unmount them. The easy solution would be to unexport everything and then just export the other filesystems again, but I'd rather not interrupt NFS availability of the other filesystems. So it was time to check some large filesystems again, and I'd rather not do that during boot as it can delay booting for up to an hour. Currently those filesystems are exported via IPv4 and IPv6. Removing the export for IPv4 is easy:

```
# exportfs -u 192.168.1.0/255.255.255.0:/export
```

But for IPv6 it gets harder:

```
# exportfs -u 2001:db8:a::/64:/export
exportfs: Invalid unexporting option: 2001
```

So it is still exported via IPv6. And the next thing I notice is that it is apparently fine to unmount a filesystem that is only exported via IPv6. I guess this shows some interesting bug.
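The parsing problem is presumably that exportfs splits its argument on a colon, which an IPv6 address itself contains. A workaround sketch that does not interrupt the other exports: take the entry out of /etc/exports and resynchronize the kernel export table with the file.

```shell
# After commenting out or removing the /export line in /etc/exports,
# resync the kernel export table with the file; other exports stay up
exportfs -ra

# Verify the resulting export list
exportfs -v
```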