News items for tag web - Koos van den Hout

2018-09-26 Made the big bang to the new homeserver 1 month ago
For months and months I had the hardware for the new homeserver ready, I was testing bits and pieces in the new environment, and I still did not get around to making the big bang. Part of that time the new system was just running and using electricity.

And a few weeks ago I had time for the big bang and forgot to mention it!

So on a free day I did the last sync of the home directories and started migrating all services in one big bang. No more 'but', 'if', 'when' or 'is it done yet'. It's a homeserver, not a complete operational datacenter, although with everything running it sometimes does look that way!

The new setup, documented more completely at Building - and maintaining home server conway 2017, is now running almost all tasks. The main migration covered home directories, mail, news and the webservers. Things are now split over several virtual machines, and the base system running the kvm virtual machines is kept as minimal as possible.

One thing I just noticed is that the new virtual machine with kernel mode pppoe drivers and updated software is doing great: the bigger MTU works by default and kernel mode pppoe does not show up as using CPU when a 50 mbit/s download is active. I looked at CPU usage with htop and at the network traffic with iptraf, and the result was that iptraf itself was using the most CPU.

There are still some things left to migrate, including a few public websites that currently give 50x errors. But I will find the time eventually.

2018-07-08 Automating Let's Encrypt certificates further 4 months ago
Over two years ago I started using Let's Encrypt certificates. Recently I wanted to automate the process a step further and found dehydrated, which helps a lot in automating certificate renewal with minimal hassle.

The first thing I fixed was http-based verification. The webserver was already set up to make all .well-known/acme-challenge directories end up in one place on the filesystem, and it turns out this works great with dehydrated.

I created a separate user for dehydrated and gave that user write permission on the /home/httpd/html/.well-known/acme-challenge directory. It also needs write access to /etc/dehydrated for its own state. I changed /etc/dehydrated/config to:
CHALLENGETYPE="http-01"
WELLKNOWN="/home/httpd/html/.well-known/acme-challenge"
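The matching Apache side is the central Alias setup from the 2016 item further down this page; roughly:
# central location for all acme challenges (same Alias as in apache2.conf, see the 2016 item below)
Alias /.well-known/acme-challenge/ "/home/httpd/html/.well-known/acme-challenge/"
<Directory "/home/httpd/html/.well-known/acme-challenge/">
        AllowOverride None
        Order allow,deny
        Allow from all
</Directory>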
Now it was possible to request certificates based on a .csr file. I used this to get a new certificate for the home webserver, and it turned out to be easier than the previous setup based on letsencrypt-nosudo.
Read the rest of Automating Let's Encrypt certificates further

2018-06-17 Apache 2.2 Proxy and default block for everything but the .well-known/acme-challenge urls 5 months ago
I'm setting up a website on a new virtual machine on the new homeserver and I want a valid Let's Encrypt certificate. It's a site I don't want to migrate, so I'll have to use the Apache proxy on the 'old' server to allow the site to be accessed via IPv4/IPv6 (for consistency I am now setting up everything via a proxy).

So first I set up a proxy to pass all requests for the new server to the backend, something like:
        ProxyPass / http://newsite-back.idefix.net/
        ProxyPassReverse / http://newsite-back.idefix.net/
But now requests for /.well-known/acme-challenge also go there, and they get blocked behind a username/password prompt since the new site is not open yet.

So to set up the proxy correctly AND avoid the username check for /.well-known/acme-challenge, the order has to be correct: in the ProxyPass rules the rule for the specific URL has to come first, and in the Location setup it has to come last.
        ProxyPass /.well-known/acme-challenge !
        ProxyPass / http://newsite-back.idefix.net/
        ProxyPassReverse / http://newsite-back.idefix.net/

        <Location />
            Deny from all
            AuthName "Site not open yet"
            [..]
        </Location>

        <Location /.well-known/acme-challenge>
            Order allow,deny
            Allow from all
        </Location>
And now the acme-challenge requests are handled locally on the server and all other requests are forwarded to the backend after authentication.

2018-04-07 I'm glad you read my newsitem 7 months ago
Apr  6 23:41:16 greenblatt sshd[25116]: Invalid user squid from 139.99.122.129
Apr  7 01:44:09 greenblatt sshd[3495]: Invalid user squid from 110.10.189.108
Apr  7 08:21:37 greenblatt sshd[7106]: Invalid user squid from 118.24.100.11
I'm glad you read my newsitem about keeping squid running.

2018-03-15 Working on having the right IP address in the apache logs 8 months ago
I noticed that the access_log files for the various websites being tested on the new homeserver all showed the IPv6 address of the haproxy I configured, and not the original client IP address.

The fun bit is that I had set up the right Apache mod_remoteip settings, RemoteIPHeader and RemoteIPInternalProxy, and this was tested and working with Require ip rules. But it turns out the default logging formats use the %h variable, which is not changed by mod_remoteip. Since I want IPv6/IPv4 addresses in the logs that can be resolved later, I changed to the %a variable, the client IP address, which is rewritten by mod_remoteip.
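For reference, the mod_remoteip part looks roughly like this; the header name and the proxy address (a documentation-prefix placeholder) are assumptions and depend on the haproxy setup:
# header name and proxy address are assumptions, adjust to the haproxy configuration
RemoteIPHeader X-Forwarded-For
RemoteIPInternalProxy 2001:db8::80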

Changed options:
LogFormat "%a %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%a %l %u %t \"%r\" %>s %O" common
LogFormat "%a %{HOST}i %l %u %t \"%r\" %s %b %{User-agent}i %{Referer}i -> %U" vcommon

2018-01-03 Fixing stuff in The Virtual Bookcase for PHP 7 10 months ago
After spending an evening fixing scripts on The Virtual Bookcase to make them run in PHP 7 and make them safer at the same time, I came to the conclusion that I still don't like PHP.

My conclusion is that if I want to keep maintaining sites, I'd rather redo them in Perl. I noticed the last serious maintenance on the scripts of The Virtual Bookcase was 9 years ago (!). That was also when I had the habit of writing maintenance scripts in Perl and web code in PHP. The upside is that part of the page-generating code is already available in Perl.

But a rewrite is a task for another day. For now the site works cleanly in PHP 7 (and 5) and I can go on to the next task for moving the homeserver.
Read the rest of Fixing stuff in The Virtual Bookcase for PHP 7

2018-01-01 Making my own web stuff more robust 10 months ago
While building the new homeserver there is also time to test things and improve robustness a bit (although I should not overdo it).

The one thing that forces me to look at some web code again is that the new servers run PHP version 7. Some of my code is giving warnings; time to fix that. But I haven't written any serious PHP in ages, since lately I have rewritten sites in mod_perl instead. So my PHP is rusty and needs work, especially with PHP 7.

It's a good thing I use version management, which allows me to test the fixes on the development version(s) of the site and push them to the production version when I'm happy with the results.

Some of the things I notice that could be improved go on the todo list. One thing I noticed and fixed right away was that the CVS metadata inside the web directories could be requested too. Although there is no serious security information in there, it is still an unwanted information leak.
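A minimal sketch of what such a block can look like in the Apache configuration (Apache 2.2 style access rules, matching the other examples on this page; the exact match used is an assumption):
# deny direct requests for CVS metadata directories (sketch)
<DirectoryMatch "/CVS(/|$)">
        Order allow,deny
        Deny from all
</DirectoryMatch>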

2017-08-19 Moving virtualbookcase.com to https 1 year ago
I received a notification from the Google webmaster program that the Chrome browser would start showing security warnings on http://www.virtualbookcase.com/ due to the search box there.

The simple solution: make the site correctly available via https and redirect to the https version. I found out I had already started on the first part, so the conversion was easy. Now with encrypted connections: The Virtual Bookcase.
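A minimal sketch of what such an http-to-https redirect can look like in the Apache configuration (the exact virtual host setup is an assumption):
# plain-http virtual host only redirects to the https version (sketch)
<VirtualHost *:80>
        ServerName www.virtualbookcase.com
        Redirect permanent / https://www.virtualbookcase.com/
</VirtualHost>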

2017-03-26 It was Groundhog day again! 1 year ago
I have Google Sightseeing on my 'regular visit' list because they found really interesting places all over the world and I liked to pay a virtual visit to those places myself.

But lately the site hasn't been updated much and now I notice it has three 'Groundhog day' articles on the front page: Groundhog Day for 2017, Groundhog Day for 2016 and Groundhog Day for 2015. The last non-Groundhog Day article is from May 2015.

2016-11-03 Speeding up my website(s) with mod_perl 2 years ago
I am currently working on a new version of one of the sites I manage, rewriting it from PHP to Perl. I noticed loading times were slower than I wanted and gave mod_perl a try.

The basic configuration of mod_perl is quite simple. That alone did not give me the big advantage in web server speed; that came when I added:
PerlModule Apache::DBI
to the apache2 config. The Apache::DBI module caches database connections for supported drivers, which speeds up database-dependent scripts. The module comes from the Ubuntu package libapache-dbi-perl, and Apache will throw really bad errors at you when a module you want to load is not available.
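A minimal sketch of a typical ModPerl::Registry setup for running existing CGI-style Perl scripts persistently (the /perl/ alias and path here are assumptions):
# sketch: run CGI-style Perl scripts persistently under ModPerl::Registry
# the /perl/ alias and filesystem path are assumptions
PerlModule Apache::DBI
Alias /perl/ "/home/httpd/perl/"
<Location /perl/>
        SetHandler perl-script
        PerlResponseHandler ModPerl::Registry
        PerlOptions +ParseHeaders
        Options +ExecCGI
</Location>
Apache::DBI needs to be loaded before the modules and scripts that use DBI for the persistent database connections to work.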

This is now enabled for my homepage site too. The processing time of the pages doesn't change much, but the startup of the Perl interpreter, modules and scripts is much faster, so the waiting time is a lot less.

2016-10-25 Speeding up apache by not resolving for access 2 years ago
I was testing something on my own webserver and noticed the loading time of the page was over 10 seconds. Browsing the log showed me that the hostname of the client was being logged, which was not what I wanted, and the IPv4 address I had at that moment was slow to resolve. It turned out this was caused by the authentication check on the part of the site I was visiting, which looked like:
    <Location />
        Order deny,allow
        Deny from all
        Allow from localhost
        AuthName "Restricted access"
        AuthType basic
        AuthUserFile /...
        AuthGroupFile /dev/null
        Require valid-user
        Satisfy Any
    </Location>
Using the name 'localhost' triggered the resolver. A big speedup was caused by changing to:
    <Location />
        Order deny,allow
        Deny from all
        Allow from 127.0.0.1
        AuthName "Restricted access"
        AuthType basic
        AuthUserFile /...
        AuthGroupFile /dev/null
        Require valid-user
        Satisfy Any
    </Location>
That let me concentrate on other methods to speed up the site.
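As a side note: if local requests also come in over IPv6, the same resolver-free approach would need the IPv6 loopback address as well, something like:
        # also allow the IPv6 loopback address (sketch)
        Allow from 127.0.0.1 ::1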

2016-10-20 Being way behind in webdesign... 2 years ago
I recently started pondering making the text on my homepage slightly less black, because I saw a lot of pages using different shades of grey that looked (to me!) easier on the eyes and more 'modern'. So I finally updated the stylesheet of my homepage (still HTML4, so already outdated) to use not a completely black (#000000) color for all text but something slightly lighter: #202020.

And one of the first things I saw right after testing and implementing that change (of course the css file of my homepage is under version control to move it from the development version to the production version) was... How the Web Became Unreadable - Kevin Marks.

I guess I missed the cycle completely. I'll stick with the current colour for a while. I'm not a graphic designer; I just lag behind in updating design things now and then.

2016-10-13 A few pictures added to The Transmission Gallery 2 years ago
I am a fan and regular visitor of The Transmission Gallery and a photographer. But it is not very often I can submit pictures for The Transmission Gallery as it is aimed at transmitters in the United Kingdom.

But on our recent holiday in the UK Lake District I noticed that at one campsite I had a direct view of a TV transmitter tower, and good, fast mobile data from the same tower. So I took a walk to photograph the transmitter so I could add something to The Transmission Gallery.

So, now available to the general public: Keswick - Pictures taken August 2016 - The Transmission Gallery.

My previous addition to the gallery was in 2010: Wooler - The Transmission Gallery. We have visited the United Kingdom a few times in between, but we either never got close to a transmitter site or the weather hid the site completely in clouds or fog.

2016-06-02 Not filling my disk with .well-known/acme-challenge directories 2 years ago
I am slowly gaining trust in my Let's Encrypt setup and today I renewed my certificate. One thing I noticed on the first tries was that the whole process left me with a .well-known/acme-challenge directory in every website. The solution: use a general configuration item in Apache, which is then inherited by all virtual hosts. So now I have this in the general configuration in /etc/apache2/apache2.conf:
Alias /.well-known/acme-challenge/ "/home/httpd/html/.well-known/acme-challenge/"

<Directory "/home/httpd/html/.well-known/acme-challenge/">
        AllowOverride None
        Order allow,deny
        Allow from all
</Directory>
So now there is only one directory filling up with challenge-response files, which is easier to clean out. I have seen challenge filenames starting with a -, so rm * started to complain.

The first site completely switched to https is Camp Wireless, Wireless Internet access on campsites.

2016-05-24 Updating the Electronic QSL collection for SWL reports 2 years ago
In the Electronic QSL received at PD4KH / PE4KH I have some SWL reports received via eQSL and I decided I should note these correctly. So I updated the script that generates this page and now NL12621, DL-SWL/DE1PCE, R4A-1227 and others are properly noted.

I haven't found a conclusive list of all SWL 'callsigns' so I may miss some.

2016-04-29 Now available as TLS encrypted website 2 years ago
I consider it to be in testing at the moment, but you can visit https://idefix.net/. The mixed-content warning will not go away soon since I partly depend on images and audio files from sources not (yet) available via https.

2016-04-28 First tries with letsencrypt certificates 2 years ago
A while ago I already pondered preparing links in my websites for https. With Let's Encrypt I can get free domain-validated certificates for TLS encrypting my traffic. Even the subjectAltName extension is supported, to get multiple domain names on one certificate. But it took me a while to really get around to implementing the rest and testing the results.

The standard way of using letsencrypt is a bit too much 'for dummies' for my taste. The suggested and supported method uses the standard Let's Encrypt client, which is very good at modifying Apache configurations on its own.

I would like free certificates, but not at the price of letting that script do things to my webserver configuration. So I asked around and someone pointed me at letsencrypt-nosudo with the brilliant introduction:
I love the Let's Encrypt devs dearly, but there's no way I'm going to trust their script to run on my server as root, be able to edit my server configs, and have access to my private keys. I'd just like the free ssl certificate, please.
Exactly my thoughts. So I used that script, got my brain around what was happening and now I have a TLS certificate for a number of my private domains.
Read the rest of First tries with letsencrypt certificates

2016-03-31 Interesting report from pskreporter 2 years ago
Interesting report on the pskreporter psk map today: a negative time at which the signal was reported. I guess the reported time is taken from the original spotter: I had EB4DDQ in the log at 18:12 UTC, he had me in the log at 19:12 UTC.

2016-02-02 Humor from nu.nl 2 years ago
In lynx I also get the warning from nu.nl about an ad blocker. No, I don't have an ad blocker in lynx!

2016-01-20 Testing protocol-relative hyperlinks with letsencrypt in mind 2 years ago
I am pondering making my websites available via https using Let's Encrypt certificates, which are free and support multiple servernames. Currently I have one HTTPS site running with a certificate signed by my own CA, which is only trusted by my own systems.

Chances are that I will find lots of places where I will get mixed-content warnings and things that will break. So switching to https-only will have to wait.

But the good news is that it's possible to omit the protocol from a hyperlink, leading to the following bit of HTML code in Nice APRS track this morning:

<img src="//idefix.net/~koos/pics/aprs-PD4KH-20160108.png" alt="APRS track PD4KH 20160108" title="APRS track PD4KH 20160108"><br>
This will keep working when idefix.net becomes reachable via https and will not give a mixed-content warning. I just have to make sure the http and https versions of idefix.net work exactly the same.

At the moment this works fine, even when viewing the RSS feed using sage. According to Can I change all my http:// links to just //? on stackoverflow the number of browsers that don't support this is very small.

  Older news items for tag web ⇒
Koos van den Hout, reachable as koos+website@idefix.net. PGP encrypted e-mail preferred.

PGP key 5BA9 368B E6F3 34E4 (local copy, via keyservers, key statistics for 0x5BA9368BE6F334E4)
RSS
Other web projects: Camp Wireless, wireless Internet access at campsites, The Virtual Bookcase, book reviews