Last week, on 8 and 9 February, I attended the SURF Security and Privacy conference. SURFcert, the incident response team of SURF, had its own 'side event' within this conference: an escape room. Since the members of SURFcert like to visit escape rooms themselves, the idea was to build our own escape room. A simple one, as teams of 2 or 3 people had to solve it within 15 minutes. The best scores were indeed just over 5 minutes, so it was doable.

The theme of this escape room was the trip Snowden made: from the US to Hong Kong to Moscow. Each location had a puzzle, and like Snowden the only thing you could take to the next location was knowledge, in this case a 4-digit code to open a lock. Someone else in the SURFcert team did most of the hardware work and I decided to dive into some programming to support this effort. The escape room needed a countdown clock that could only be stopped by the right code. So I installed a Raspberry Pi with a Raspbian desktop and found out how to set things up so my program would be started when the user 'pi' logs in automatically. This was done by starting it from ~/.config/lxsession/LXDE-pi/autostart. The program I wrote had three inputs:

[Image: The escape room clock]

- A reset switch connected to GPIO pin 11 and ground
- A start button connected to GPIO pin 03 and ground
- Entering the right barcode to stop the time. In the end this was the barcode of a real Russian bottle of vodka, so my program needed vodka as input

For the barcodes I used a USB barcode scanner I have lying around. It behaves like a USB keyboard, so scanning a barcode causes the code to be entered as keystrokes with an enter key at the end. But all programming I do is sequential. This was different: I needed to write an event-based program. It needs to react to timer events and enter events, check the state of GPIO bits on timer events, and on certain events change the global state (reset, running, stopped). The last time I did any event-based programming was an irc-bot written in Perl 4. So with a lot of Google searches, copy-pasting bits of code, a lot of searching for which input bits would default high and go low when connected to ground, and a lot of trying, I wrote a program. It uses WxPerl to have a graphical interface and use events. I'm not saying it's a good program, but it did the job. Notable things:

- The OnInit function sets up everything: a window with minimal decorations that tries to go full-screen, a static text box that shows the time and starts at 15:00, a handler for timer events that is called 10 times per second, and an input box with a handler for when the enter key is pressed.
- The onTimer function looks at the global state, decides which inputs are valid in that state and handles them.
- The onenter function calculates a sha256 hash of the input line and checks which inputs can change the global state. The hash was there to make sure that someone who could have a look at the source still had no idea what the commands were to control it all via keyboard. And no keyboard was connected anyway. The input for a shutdown is the barcode from one of the loyalty cards I carry around.
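The hash check in onenter can be sketched like this. A minimal sketch, assuming the core Digest::SHA module; the input strings and action names below are made up for the example, not the real barcodes or commands:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Digest::SHA qw(sha256_hex);

# Map sha256 hashes of accepted inputs to actions. In the real program
# only the literal hex digests would be in the source, so reading the
# code reveals nothing about the barcodes themselves. The example
# strings here are hypothetical.
my %action_for = (
    sha256_hex('vodka-barcode-example') => 'stop',
    sha256_hex('loyalty-card-example')  => 'shutdown',
);

# Called with the line the barcode scanner 'typed', terminated by enter.
sub handle_input {
    my ($line) = @_;
    my $hash = sha256_hex($line);
    return $action_for{$hash} // 'ignore';
}
```

Anything that does not hash to a known value is simply ignored, so random scans or keystrokes cannot disturb the clock.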
After spending an evening fixing scripts on The Virtual Bookcase to make them run in PHP 7 and making them safer at the same time, I came to the conclusion that I still don't like PHP. My conclusion is that if I want to maintain sites I'd rather redo them in Perl. I noticed the last serious maintenance on the scripts of The Virtual Bookcase was 9 years ago (!). That was also when I had the habit of writing maintenance scripts in Perl and web code in PHP. The upside is that part of the page-generating code is already available in Perl. But a rewrite is a task for another day. For now the site works cleanly in PHP 7 (and 5) and I can go on to the next task for moving the homeserver.
I am currently working on a new version of one of the sites I manage, rewriting it from PHP to Perl. I noticed loading times were slower and gave mod_perl a try. The basic configuration of mod_perl is quite simple. But this did not give me the big advantage in web server speed; that came when I added PerlModule Apache::DBI to the apache2 config. The Apache::DBI module caches database connections for supported drivers, which speeds up database-dependent scripts. The module comes from the Ubuntu package libapache-dbi-perl, and Apache will throw really bad errors at you when a module you want to load is not available. This is now enabled for my homepage site too. The processing times of the pages don't change much, but the startup of the Perl interpreter, modules and scripts is much faster, so the waiting time is a lot less.
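For reference, a minimal setup along these lines might look like this in the Apache 2 config. The directory path is illustrative, not copied from my actual config:

```apache
# Load the persistent DBI layer before any scripts run, so database
# handles are cached per Apache child process.
PerlModule Apache::DBI

# Run existing CGI-style Perl scripts under the embedded interpreter.
<Directory /var/www/example/cgi-bin>
    SetHandler perl-script
    PerlResponseHandler ModPerl::Registry
    PerlOptions +ParseHeaders
    Options +ExecCGI
</Directory>
```

Apache::DBI only has to be loaded; the scripts keep calling DBI->connect as before and the connection caching happens transparently for drivers that support it.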
From the spambox: "How are you doing today, I am miracle 24 yearls old girl, i saw your profile today at googlesearch cpan.cse.msu.edu - i like it, then i decided to contact you for going into deep rellastionship between me and you". I know CPAN is a lot, but I never saw it as a dating site.
I introduced a MediaWiki at work (science ICT department) to use for internal documentation. One of the things I wanted to try is pages in the wiki created or maintained from other sources.
I created a special namespace for pages with information from other sources, where normal users have no rights to edit pages. This is to make sure nobody tries to edit something which is maintained by a script from another source.

I started with something simple: the list of printers. The Windows printserver is leading, so I want to fetch the list there and massage it to generate a list of printers and comments. The weapon of choice is Perl and MediaWiki::Bot. The output of smbclient -N -L printserver takes one regexp to find print queue names and descriptions. For the overview of CUPS queues I can parse the output of lpstat -a. With a bit more digging into IPP it should also be possible to get a list of details of printers to link CUPS queues and their Windows counterparts.
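The parsing step can be sketched like this. A minimal sketch with made-up share names; the real script feeds smbclient output in and posts the result to the wiki with MediaWiki::Bot:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Turn the share list from 'smbclient -N -L printserver' into MediaWiki
# table markup. Printer shares show up in that output as lines of the
# form: <sharename> Printer <comment>.
sub printer_wikitable {
    my ($smb_output) = @_;
    my @rows;
    for my $line (split /\n/, $smb_output) {
        if ($line =~ /^\s+(\S+)\s+Printer\s+(.*?)\s*$/) {
            push @rows, "|-\n| $1 || $2";
        }
    }
    return join("\n",
        '{| class="wikitable"',
        '! Queue !! Description',
        @rows,
        '|}');
}
```

The generated markup then goes to the bot's edit call (page name and summary being whatever fits the wiki), so the page is rewritten from scratch on every run and MediaWiki's history shows the differences.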
I can run this script from crontab each day and the history tracking in MediaWiki will start to help document when something changed. Another thing we can stop worrying about.

I have visions of the future: automatically linking Zabbix (which has a JSON interface) and MediaWiki, and maybe a further future with a good database of stuff which is the source of entries in both Zabbix and the wiki. Double work is unneeded; computers are much better at working from one canonical source and importing that in a lot of places.
More than one visitor of my homepage saw an intricate XML parsing error instead of the page you all want to see. I never saw the problem myself, but my best guess so far is that the twitter RSS feed was malformed, because that is the only XML parsing happening for the page. I fetch the twitter feed automatically every 6 hours, but sometimes twitter is a bit overloaded and probably serves an internal error page (the famous fail whale) instead of the valid RSS feed.
Solution: fetch the file to a temporary file, run the parser on it, and only when the parser does not fail, copy it to where the webserver reads it:

#!/bin/sh
wget -O wwwdata/twitter.rss.pre -o /dev/null http://twitter.com/statuses/user_timeline/19301166.rss
perl -MXML::RSS -e 'my $rss=new XML::RSS; $rss->parsefile("wwwdata/twitter.rss.pre");'
if [ "$?" = "0" ]; then
    cp wwwdata/twitter.rss.pre wwwdata/twitter.rss
fi
Some Perl code to read the balance and the last 5 transactions from a Chipknip card with Chipcard::PCSC. Always handy if you happen to have a laptop with a chipcard slot. The fun part is that you can also see the identifier of the payment terminal where you did each transaction (SAM_ID) and the sequence number (SAM_STAN), which lets you see where your Chipknip has been and how many transactions the terminals get to process. Perl code to read a Chipknip balance and the latest transactions. Update 2012-04-10: No, unfortunately I no longer have the original Java code from which I derived the field sizes and field names. Bad of me, I should have referred to it.
I noticed a few malformed characters in the RSS feed of my homepage that weren't there in the original database entries and showed up fine in the web version. Again, UTF-8 problems, although all data (postgres - script - xml - browser) should be UTF-8. After lots of testing and searching I finally found The Perl UTF-8 and utf8 Encoding Mess by Jeremy Zawodny. He is right: it is a mess. And the post itself demonstrates it by being filled with � characters.
So to make sure everything in the RSS generating process understands that what comes out of PostgreSQL is valid UTF-8 and imports it into the XML::RSS module as the same valid UTF-8, I need to decode it explicitly. Uh.. ok. The bit of code:

my $body = Encode::decode('UTF-8', $row);

And now I can use ÜTF-8 çħáräćtërs!
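The underlying issue is that the database driver hands back raw octets, while character-aware modules want decoded strings. A small round-trip, assuming the column really does contain UTF-8 bytes, shows the difference:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Encode qw(decode encode);

# Pretend this came out of PostgreSQL: the UTF-8 octets for the
# three characters 'çħá'. As far as Perl knows, it is just bytes.
my $raw = "\xc3\xa7\xc4\xa7\xc3\xa1";

my $bytes_len = length $raw;    # 6: counted as octets

# After decoding it is a real 3-character string, safe to hand to
# XML::RSS or any other character-aware code.
my $text = decode('UTF-8', $raw);
my $char_len = length $text;    # 3: counted as characters

# Encode back to octets right before output, on the webserver side.
my $out = encode('UTF-8', $text);
```

Decode at every boundary where bytes come in, encode at every boundary where bytes go out, and the � characters disappear.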
A new version of my homepage, rewritten in Perl because PHP was starting to irritate me. More database-driven in the background, which allows me to add things like the tags. And a minor change in the colour scheme, because someone remarked that the black-on-cyan was hard to read for people above a certain age.
One of the little irritations at work was trying to find out what the exact error on the printer was when the helpdesk ticket just says 'printer problems'. Since HP LaserJets will divulge everything via SNMP, I thought the complete information must be available. It is, and I cobbled together a Perl script for our NOC webserver. Public version on the perl noc stuff page.
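The interesting part of reading printer errors over SNMP is decoding hrPrinterDetectedErrorState from the Host Resources MIB (RFC 2790), which comes back as an octet-string bitmap. A sketch of just the decoding step, with the bit names taken from the RFC; the actual SNMP fetch would go through a module such as Net::SNMP and is not shown:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Condition names for hrPrinterDetectedErrorState (RFC 2790), in bit
# order: condition 0 is the most significant bit of the first octet.
my @conditions = qw(
    lowPaper noPaper lowToner noToner doorOpen jammed offline
    serviceRequested inputTrayMissing outputTrayMissing
    markerSupplyMissing outputNearFull outputFull inputTrayEmpty
    overduePreventMaint
);

# Decode the octet string returned by SNMP into a list of conditions.
sub printer_errors {
    my ($octets) = @_;
    my @bytes = unpack 'C*', $octets;
    my @found;
    for my $i (0 .. $#conditions) {
        my ($byte, $bit) = (int($i / 8), $i % 8);
        next if $byte > $#bytes;
        push @found, $conditions[$i] if $bytes[$byte] & (0x80 >> $bit);
    }
    return @found;
}
```

For example, an error state octet of 0x48 decodes to noPaper and doorOpen, which is a lot more useful in a ticket than 'printer problems'.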
My first CPAN upload. I uploaded Geo::METAR 1.15 to CPAN just now. Time to find out if I did stuff right.
Played a bit with Geo::METAR so now my Webcam knows the almost-local weather and pastes it into the image.