News items for tag web - Koos van den Hout

2019-05-04 Considering enabling Server Name Indication (SNI) on my webserver
While making a lot of my websites available via HTTPS I started wondering about enabling Server Name Indication (SNI), because the list of hostnames in the single certificate (the subjectAltName parameter) keeps growing and they aren't all related.

So on a test system with haproxy I created two separate private keys, two separate certificate signing requests and requested two separate certificates: one for the variants of camp-wireless.org and one for most of the idefix.net names. The whole requesting procedure happened on the system where my automated renewal and deployment of Let's Encrypt certificates with dehydrated runs, so the request went fine. For the haproxy configuration I followed HAProxy SNI, where 'terminating SSL on the haproxy with SNI' gets a short mention.
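As a sketch, generating a separate private key and certificate signing request per certificate could look like this. The file names and subjectAltName lists are examples, not the exact ones used here, and -addext needs OpenSSL 1.1.1 or newer:

```shell
# Hypothetical example: one private key + CSR per certificate group.
# File names and subjectAltName lists are illustrative.
openssl req -new -newkey rsa:2048 -nodes \
    -keyout webserver-campwireless.key -out webserver-campwireless.csr \
    -subj "/CN=camp-wireless.org" \
    -addext "subjectAltName=DNS:camp-wireless.org,DNS:www.camp-wireless.org"

openssl req -new -newkey rsa:2048 -nodes \
    -keyout webserver-idefix-main.key -out webserver-idefix-main.csr \
    -subj "/CN=idefix.net" \
    -addext "subjectAltName=DNS:idefix.net,DNS:www.idefix.net"

# Quick sanity check of a CSR before submitting it
openssl req -in webserver-idefix-main.csr -noout -verify
```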

So I implemented the configuration as shown in that document and got greeted with an error:
haproxy[ALERT] 123/155523 (3435) : parsing [/etc/haproxy/haproxy.cfg:86] : 'bind :::443' unknown keyword '/etc/haproxy/ssl/webserver-idefix-main.pem'.
It turned out that the crt keyword has to be repeated for each certificate.

This is why I like having a test environment for things like this. Making errors in the certificate configuration on the 'production' server will give visitors scary and/or incomprehensible errors.

So the right configuration for my test is now:
frontend https-in
    bind :::443 v4v6 ssl crt /etc/haproxy/ssl/webserver-campwireless.pem crt /etc/haproxy/ssl/webserver-idefix-main.pem
And testing it shows the different certificates in use when I use the -servername parameter for openssl s_client to test things.
$ openssl s_client -connect testrouter.idefix.net:443 -servername idefix.net -showcerts -verify 3
..
Server certificate
subject=/CN=idefix.net
issuer=/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
..
Verification: OK
$ openssl s_client -connect testrouter.idefix.net:443 -servername camp-wireless.org -showcerts -verify 3
..
Server certificate
subject=/CN=www.camp-wireless.org
issuer=/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
..
Verification: OK
The certificates are quite separate. Generating the certificate signing requests with a separate private key for each request works fine.

So if I upgrade my certificate management to renew, transport, test and install multiple certificates for the main webserver, it will work.

2019-01-12 Enabling some old web userdirs
I received a "complaint" that a very old site on the webserver wasn't working anymore. I am not one to just stop something without planning it, so this was an oversight. It was one of the userdirs on idefix.net: Ivo van der Wijk, who hasn't updated the page since 1994. No, really, not even the broken links.

While restoring this one and the others I found that PHP in userdirs is disabled by default nowadays, found via PHP not working in userdir (public_html) - devPlant. Maybe a good idea, but I only enable PHP on virtual hosts where I want it, so I disabled that rule. I hadn't missed it in my own webspace yet, but a site like Het online dagboek van hester (Renate) in Australie (en daar in de buurt) depends completely on PHP.

While I was looking for the reason PHP failed I noticed that /etc/apache2/mods-available/userdir.conf also has some configuration I do not appreciate: it enables userdirs globally when the module is loaded:
<IfModule mod_userdir.c>
        UserDir public_html
        UserDir disabled root

        <Directory /home/*/public_html>
                AllowOverride FileInfo AuthConfig Limit Indexes
                Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec
                Require method GET POST OPTIONS
        </Directory>
</IfModule>
I disabled that part: I only want the userdir to work on specific virtual hosts.
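A minimal sketch of what enabling userdirs inside one specific virtual host could look like instead; the ServerName and the paths are examples, not my actual configuration:

```apache
<VirtualHost *:80>
    ServerName example.idefix.net
    # Userdirs enabled only inside this virtual host
    UserDir public_html
    UserDir disabled root

    <Directory /home/*/public_html>
        AllowOverride FileInfo AuthConfig Limit Indexes
        Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec
    </Directory>
</VirtualHost>
```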

2019-01-08 Seeing the 451: Unavailable due to legal reasons in the wild
Today I tried to follow a link to http://www.independentri.com/ but I got an error message:
451: Unavailable due to legal reasons

We recognize you are attempting to access this website from a country belonging to the European Economic Area (EEA) including the EU which enforces the General Data Protection Regulation (GDPR) and therefore access cannot be granted at this time
And indeed in the headers:
$ lynx -head -dump http://www.independentri.com/
HTTP/1.1 451 Unavailable For Legal Reasons
I see the real reason as 'not wanting to comply with European consumer protection laws'. I have no idea how many visitors the site is missing due to this region block, but since it's a regional weekly newspaper in the United States of America: probably not a lot of the intended audience.

2018-12-04 Really ending a domain name and the web presence
On 25 December 2004 there was a special deal giving me the .info names camp-wireless.info and campwireless.info for free for the first year. Since that moment I kept the names registered and redirected all web traffic to the right version: https://www.camp-wireless.org/. So the deal worked from a 'selling domain names' perspective: Christmas is a bad moment to review the need for domain names, so the easy solution is to renew them. My decision to stop with these names was made in January 2018.

Traffic to the .info versions is minimal. Given the cost of the domain registrations I decided to stop renewing them and devised an exit strategy that would result in domain names that attract no traffic and are not linked to my other web projects. On the next renewal date the domains will expire. I have done this before in a different context: when we ended the students' personal webspace at www.students.cs.uu.nl.

The solution is to start returning HTTP status 410 Gone for search engines while at the same time returning a somewhat user-friendly error page.

Relevant bit of apache 2.4 configuration:
<VirtualHost *:80>
    ServerName www.camp-wireless.info
    ServerAlias www.campwireless.info
    ServerAlias camp-wireless.info
    ServerAlias campwireless.info

	DocumentRoot /home/httpd/campwireless-expire/html

    <Directory "/home/httpd/campwireless-expire/html">
        Require all granted
    </Directory>

    RewriteEngine On
    RedirectMatch 410 ^/(?!gone.html|robots.txt)
    ErrorDocument 410 /gone.html
</VirtualHost>
The gone page is simple: it has an explanation for human visitors and a meta refresh tag to eventually redirect the browser. But to a search engine the status 410 on almost any URL gives a clear signal that the page is gone and should be flushed from the cache.
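A sketch of what such a gone page could look like, assuming a meta refresh after 30 seconds; the actual page is not shown here:

```html
<!DOCTYPE html>
<html>
<head>
  <title>This domain is gone</title>
  <!-- humans get redirected after 30 seconds; search engines act on the 410 status -->
  <meta http-equiv="refresh" content="30; url=https://www.camp-wireless.org/">
</head>
<body>
  <p>This domain is no longer in use. You will be redirected to
  <a href="https://www.camp-wireless.org/">www.camp-wireless.org</a>.</p>
</body>
</html>
```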

2018-11-20 Fixing old deeplinks to twitter
Remember the twitter #! hashbang URLs? I'd rather not. Those URLs were active from 2010 to 2012 and have been eliminated. But I was reminded of them today, as it seems they are now silently failing. I checked the archive of my own website to fix all those links.

I try to keep all old URLs working. Unless the content completely goes away.

2018-09-26 Made the big bang to the new homeserver
So for months and months I had the hardware ready for the new homeserver, I was testing bits and pieces in the new environment, and I still did not get around to making the big bang. Part of the time the new system was running and using electricity.

And a few weeks ago I had time for the big bang and forgot to mention it!

So one free day I just did the last sync of home directories and migrated all services in one big bang. No more 'but', 'if', 'when' or 'is it done yet'. It's a homeserver, not a complete operational datacenter. Although with everything running it sometimes does look that way!

The new setup, more completely documented at Building - and maintaining home server conway 2017, is now running almost all tasks. The main migration was home directories, mail, news and webservers. Things are now split over several virtual machines, and the base system running the kvm virtual machines is as minimal as possible.

One thing I just noticed is that the new virtual machine with kernel mode pppoe drivers and updated software is doing great: the bigger MTU works by default, and kernel mode pppoe does not show up as using CPU when a 50 mbit download is active. I looked at CPU usage with htop and at the network traffic with iptraf, and the result was that iptraf itself was using the most CPU.

There are still some things left to migrate, including a few public websites that currently give 50x errors. But I will find the time eventually.

2018-07-08 Automating Let's Encrypt certificates further
Over two years ago I started using Let's Encrypt certificates. Recently I wanted to automate this a step further and found dehydrated, which helps a lot in automating certificate renewal with minimal hassle.

First thing I fixed was http-based verification. The webserver has been set up to make all .well-known/acme-challenge directories end up in one place on the filesystem and it turns out this works great with dehydrated.

I created a separate user for dehydrated and gave that user write permission for the /home/httpd/html/.well-known/acme-challenge directory. It also needs write access to /etc/dehydrated for its own state. I changed /etc/dehydrated/config with:
CHALLENGETYPE="http-01"
WELLKNOWN="/home/httpd/html/.well-known/acme-challenge"
Now it was possible to request certificates based on a .csr file. I used this to get a new certificate for the home webserver, and it turned out to be easier than the previous setup based on letsencrypt-nosudo.
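For reference, signing an existing CSR can be done with dehydrated's --signcsr option, which prints the certificate on stdout; the paths in this sketch are assumptions, not the real layout on this server:

```shell
# Sign an existing CSR; dehydrated writes the certificate to stdout.
# The paths are examples only.
dehydrated --signcsr /etc/dehydrated/webserver-home.csr > webserver-home.crt
```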

2018-06-17 Apache 2.2 Proxy and default block for everything but the .well-known/acme-challenge urls
I'm setting up a website on a new virtual machine on the new homeserver and I want a valid letsencrypt certificate. It's a site I don't want to migrate so I'll have to use the Apache proxy on the 'old' server to allow the site to be accessed via IPv4/IPv6 (for consistency I am now setting up everything via a proxy).

So first I set up a proxy to pass all requests for the new server to the backend, something like:
        ProxyPass / http://newsite-back.idefix.net/
        ProxyPassReverse / http://newsite-back.idefix.net/
But now requests for /.well-known/acme-challenge also go there, and they are blocked with a username/password prompt since the new site is not open yet.

So to set up the proxy correctly AND avoid the username checks for /.well-known/acme-challenge the order has to be correct. In the ProxyPass rules the rule for the specific URL has to come first and in the Location setup it has to come last.
        ProxyPass /.well-known/acme-challenge !
        ProxyPass / http://newsite-back.idefix.net/
        ProxyPassReverse / http://newsite-back.idefix.net/

        <Location />
        Deny from all
        AuthName "Site not open yet"
        [..]
        </Location>

        <Location /.well-known/acme-challenge>
            Order allow,deny
            Allow from all
        </Location>
And now the acme-challenge is done locally on the server and all other requests get forwarded to the backend after authentication.

2018-04-07 I'm glad you read my newsitem
Apr  6 23:41:16 greenblatt sshd[25116]: Invalid user squid from 139.99.122.129
Apr  7 01:44:09 greenblatt sshd[3495]: Invalid user squid from 110.10.189.108
Apr  7 08:21:37 greenblatt sshd[7106]: Invalid user squid from 118.24.100.11
I'm glad you read my newsitem about keeping squid running.

2018-03-15 Working on having the right IP address in the apache logs
I noticed the access_log for various websites being tested on the new homeserver all showed the IPv6 address of the haproxy I configured, not the original client IP address.

The fun bit is that I had set up the right Apache mod_remoteip settings, RemoteIPHeader and RemoteIPInternalProxy, and this was tested and working with Require ip rules. But it turns out the default logging formats use the %h variable, which is not changed by mod_remoteip. Since I want IPv6/IPv4 addresses in the logs that can be resolved later, I changed to the %a variable, the client IP address, which is changed by mod_remoteip.

Changed options:
LogFormat "%a %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%a %l %u %t \"%r\" %>s %O" common
LogFormat "%a %{HOST}i %l %u %t \"%r\" %s %b %{User-agent}i %{Referer}i -> %U" vcommon
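The mod_remoteip settings mentioned above could look like this; the header name and the proxy address are assumptions for a typical haproxy setup (haproxy must then add the X-Forwarded-For header):

```apache
# Take the client address from X-Forwarded-For,
# but only trust it when the request comes from the haproxy host
RemoteIPHeader X-Forwarded-For
RemoteIPInternalProxy 2001:db8::25
```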

2018-01-03 Fixing stuff in The Virtual Bookcase for PHP 7
After spending an evening fixing scripts on The Virtual Bookcase to make them run on PHP 7, and to make them safer at the same time, I came to the conclusion that I still don't like PHP.

My conclusion is that if I want to maintain sites I'd rather redo them in perl. I noticed the last serious maintenance on the scripts of The Virtual Bookcase was 9 years ago (!). That was also when I had the habit of writing maintenance scripts in perl and web code in PHP. The upside is that part of the page-generating code is already available in perl.

But a rewrite is a task for another day. For now the site works cleanly in PHP 7 (and 5) and I can go on to the next task for moving the homeserver.

2018-01-01 Making my own web stuff more robust
In building the new homeserver there is also time to test things and improve robustness a bit (although I should not overdo it).

The one thing that forces me to look at some web code again is that the new servers run PHP version 7. Some of my code is giving warnings; time to fix that. But I haven't written any serious PHP in ages, I just rewrote sites in mod_perl. So my PHP is rusty and needs work, especially with PHP 7.

It's a good thing I use version management, which allows me to test the fixes on the development version(s) of the site and push them to the production version when I'm happy with the results.

Some of the things I notice that could improve go on the todo list. One thing I did notice and fix right away was that the CVS metadata inside the web directories could be requested too. Although I find no serious security information in there, it is still an unwanted information leak.

2017-08-19 Moving virtualbookcase.com to https
I received a notification from the Google webmaster program that the Chrome browser would start showing security warnings on http://www.virtualbookcase.com/ due to the search box there.

The simple solution: make the site correctly available via https and redirect to the https version. I found out I already started doing the first bit and therefore the conversion was easy. Now with encrypted connections: The Virtual Bookcase.

2017-03-26 It was Groundhog day again!
I have Google Sightseeing on my 'regular visit' list because they found really interesting places all over the world and I liked to make a virtual visit to those places myself.

But lately the site hasn't been updated much and now I notice it has three 'Groundhog day' articles on the front page: Groundhog Day for 2017, Groundhog Day for 2016 and Groundhog Day for 2015. The last non-Groundhog Day article is from May 2015.

2016-11-03 Speeding up my website(s) with mod_perl
I am currently working on a new version of one of the sites I manage in perl, rewriting it from php. I noticed loading times were slower and gave mod_perl a try.

The basic configuration of mod_perl is quite simple, but that alone did not give me the big advantage in web server speed. That came when I added:
PerlModule Apache::DBI
to the apache2 config. The Apache::DBI module caches database connections for supported drivers, which speeds up database-dependent scripts. The module comes from the Ubuntu package libapache-dbi-perl, and Apache will throw really bad errors at you when a module you want to load is not available.

This is now enabled for my homepage site too. The processing times of the pages don't change much, but the startup of the perl interpreter, modules and scripts is much faster so the waiting time is a lot less.

2016-10-25 Speeding up apache by not resolving for access
I was testing something on my own webserver and noticed the loading time of the page was over 10 seconds. Browsing the log showed me that the hostname of the client was logged, which was not what I wanted, and the IPv4 address I had at that moment was slow to resolve. It turned out this was caused by an authentication check on the part I was visiting, which looked like:
    <Location />
        Order deny,allow
        Deny from all
        Allow from localhost
        AuthName "Restricted access"
        AuthType basic
        AuthUserFile /...
        AuthGroupFile /dev/null
        Require valid-user
        Satisfy Any
    </Location>
Using the name 'localhost' triggered the resolver. A big speedup came from changing to:
    <Location />
        Order deny,allow
        Deny from all
        Allow from 127.0.0.1
        AuthName "Restricted access"
        AuthType basic
        AuthUserFile /...
        AuthGroupFile /dev/null
        Require valid-user
        Satisfy Any
    </Location>
Which let me concentrate on other methods to speed up the site.

2016-10-20 Being way behind in webdesign...
I recently started pondering making the text font on my homepage slightly less black, because I saw a lot of pages with different shades of grey looking (to me!) easier on the eyes and more 'modern'. So I finally updated the stylesheet of my homepage (still HTML4, so already outdated) to use not completely black (#000000) for all text but something slightly lighter: #202020.
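The stylesheet change itself boils down to something like this; the selector is an assumption, my actual stylesheet is not shown here:

```css
/* slightly softer than pure black (#000000) */
body { color: #202020; }
```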

And one of the first things I saw right after testing and implementing that change (of course the css file of my homepage is under version control to move it from the development version to the production version) was... How the Web Became Unreadable - Kevin Marks.

I guess I missed the cycle completely. I'll stick with the current colour for a while. I'm not a graphic designer, I am just lagging in sometimes updating design things.

2016-10-13 A few pictures added to The Transmission Gallery
I am a fan and regular visitor of The Transmission Gallery and a photographer. But it is not very often I can submit pictures for The Transmission Gallery as it is aimed at transmitters in the United Kingdom.

But on our recent holiday in the UK lake district I noticed on one campsite I had a direct view of a TV transmitter tower. And good, fast mobile data from the same tower. So I took a walk to photograph the transmitter so I could add something to The Transmission Gallery.

So, now available to the general public: Keswick - Pictures taken August 2016 - The Transmission Gallery.

My previous addition to the gallery was in 2010: Wooler - The Transmission Gallery. We have visited the United Kingdom a few times in between but never got close to a transmitter site or the weather hid the site completely in clouds or fog.

2016-06-02 Not filling my disk with .well-known/acme-challenge directories
I am slowly gaining trust in my Let's Encrypt setup and today I renewed my certificate. One thing I noticed on the first tries was that the whole process left me with a .well-known/acme-challenge directory in every website. Solution: use a general configuration item in Apache, which is then inherited by all virtual hosts. So now I have in the general configuration in /etc/apache2/apache2.conf:
Alias /.well-known/acme-challenge/ "/home/httpd/html/.well-known/acme-challenge/"

<Directory "/home/httpd/html/.well-known/acme-challenge/">
        AllowOverride None
        Order allow,deny
        Allow from all
</Directory>
So now there is only one directory filling up with challenge-response files, which is easier to clean out. I have seen challenge-response filenames starting with a -, so rm * started to complain.
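A file name starting with a - looks like an option to rm; ending option parsing with -- (or prefixing the name with ./) avoids that. A small illustration:

```shell
# Create a file with a leading dash, like some challenge-response files
cd "$(mktemp -d)"
touch -- '-acme-token-example'
# Plain 'rm -acme-token-example' would be parsed as options;
# '--' tells rm to stop parsing options first
rm -- '-acme-token-example'
```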

The first complete change to https is on Camp Wireless, Wireless Internet access on campsites.

2016-05-24 Updating the Electronic QSL collection for SWL reports
In the Electronic QSL received at PD4KH / PE4KH page I have some SWL reports received via eQSL, and I decided I should note these correctly. So I updated the script that generates this page, and now NL12621, DL-SWL/DE1PCE, R4A-1227 and others are properly noted.

I haven't found a conclusive list of all SWL 'callsigns' so I may miss some.
