I introduced a MediaWiki at work (science ICT department) to use for internal documentation. One of the things I wanted to try is having pages in the wiki created or maintained from other sources.
I created a special namespace for pages with information from other sources, where normal users have no rights to edit pages. This is to make sure nobody tries to edit something which is maintained by a script from another source.

I started with something simple: the list of printers. The Windows print server is leading, so I want to fetch the list there and massage it into a list of printers and comments. The weapon of choice is Perl and MediaWiki::Bot. The output of smbclient -N -L printserver takes one regexp to find print queue names and descriptions. For the overview of CUPS queues I can parse the output of lpstat -a. With a bit more digging into IPP it should also be possible to get a list of printer details to link CUPS queues and their Windows counterparts.
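A minimal sketch of that first script, assuming a hypothetical wiki host, bot account and target page, and a regexp that matches the usual smbclient share listing:

#!/usr/bin/perl
# Sketch: parse smbclient output and push a printer table to the wiki.
# Hostname, credentials and the target page are hypothetical.
use strict;
use warnings;
use MediaWiki::Bot;

my @lines = `smbclient -N -L printserver`;

# smbclient lists shares as "  name  Printer  comment"; one regexp
# picks out the print queue names and their descriptions.
my $text = "{| class=\"wikitable\"\n! Queue !! Description\n";
for my $line (@lines) {
    if ($line =~ /^\s+(\S+)\s+Printer\s+(.*?)\s*$/) {
        $text .= "|-\n| $1 || $2\n";
    }
}
$text .= "|}\n";

my $bot = MediaWiki::Bot->new({ host => 'wiki.example.org', path => 'w' });
$bot->login({ username => 'PrinterBot', password => 'secret' })
    or die "login failed\n";
$bot->edit({
    page    => 'Sources:Printers',
    text    => $text,
    summary => 'Automatic update from the print server',
});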
I can run this script from crontab each day, and the history tracking in MediaWiki will start to document when something changed. Another thing we can stop worrying about.

I have visions of a future where Zabbix (which has a JSON interface) and MediaWiki are linked automatically, and maybe a further future with a good database of stuff which is the source of entries in both Zabbix and the wiki. Doing the same work twice is unnecessary; computers are much better at working from one canonical source and importing it in a lot of places.
More than one visitor of my homepage saw an intricate XML parsing error and not the page you all want to see. I never saw the problem myself, but my best guess so far is that the Twitter RSS feed was malformed, because that is the only XML parsing happening for the page. I fetch the Twitter feed automatically every 6 hours, but sometimes Twitter is a bit overloaded and probably serves an internal error page (the famous fail whale) instead of the valid RSS feed.
Solution: fetch the feed to a temporary file, run the parser on it, and only when the parser does not fail, copy it to where the webserver reads it:

#!/bin/sh
wget -O wwwdata/twitter.rss.pre -o /dev/null http://twitter.com/statuses/user_timeline/19301166.rss
perl -MXML::RSS -e 'my $rss=new XML::RSS; $rss->parsefile("wwwdata/twitter.rss.pre");'
if [ "$?" = "0" ]; then
    cp wwwdata/twitter.rss.pre wwwdata/twitter.rss
fi
Some Perl code to read the balance and the last 5 transactions of a chipknip with Chipcard::PCSC. Always handy if you happen to have a laptop with a chipcard slot. The fun part is that you can also see the unique number of the payment terminal where you did the transaction (SAM_ID) and its sequence number (SAM_STAN), so you can see where your chipknip has been and how many transactions those terminals get to process. Perl code to read a chipknip balance and the last transactions. Update 2012-04-10: No, unfortunately I no longer have the original Java code from which I took the field sizes and field names. Bad of me, I should have referenced it.
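For reference, a minimal sketch of talking to a card with Chipcard::PCSC; the APDU in it is only a placeholder, since the real chipknip commands and field layouts came from that lost Java code:

#!/usr/bin/perl
# Sketch: connect to the first PC/SC reader and send one APDU.
# The APDU below is a placeholder, not a real chipknip command.
use strict;
use warnings;
use Chipcard::PCSC;
use Chipcard::PCSC::Card;

my $context = Chipcard::PCSC->new()
    or die "Can't create PC/SC context\n";
my @readers = $context->ListReaders();
die "No card reader found\n" unless @readers;

my $card = Chipcard::PCSC::Card->new($context, $readers[0],
    $Chipcard::PCSC::SCARD_SHARE_SHARED)
    or die "Can't connect to card in '$readers[0]'\n";

# Placeholder APDU; substitute the real SELECT/READ commands here.
my $apdu  = Chipcard::PCSC::ascii_to_array('00 A4 00 00 02 3F 00');
my $reply = $card->Transmit($apdu)
    or die "Transmit failed\n";
print Chipcard::PCSC::array_to_ascii($reply), "\n";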
I noticed a few malformed characters in the RSS feed of my homepage that weren't there in the original database entries and showed up fine in the web version. Again UTF-8 problems, although all data (postgres - script - xml - browser) should be UTF-8. After lots of testing and searching I finally found The Perl UTF-8 and utf8 Encoding Mess by Jeremy Zawodny. He is right: it is a mess. And the post itself demonstrates it by being filled with � characters.
So to make sure everything in the RSS-generating process understands that what comes out of PostgreSQL is valid UTF-8 and should go into the XML::RSS module as the same valid UTF-8, I need to explicitly decode it. Uh.. ok. The bit of code:

my $body = Encode::decode('UTF-8', $row);

And now I can use ÜTF-8 çħáräćtërs!
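In context, a sketch of where that decode sits in the feed script; the database name and columns are made up for the example:

#!/usr/bin/perl
# Sketch: rows come out of PostgreSQL as UTF-8 bytes and are decoded
# to Perl characters before XML::RSS sees them.
use strict;
use warnings;
use DBI;
use Encode;
use XML::RSS;

my $dbh = DBI->connect('dbi:Pg:dbname=homepage', '', '',
    { RaiseError => 1 });
my $rss = XML::RSS->new(version => '2.0');
$rss->channel(
    title       => 'Example feed',
    link        => 'https://example.org/',
    description => 'Recent items',
);

my $sth = $dbh->prepare(
    'SELECT title, body FROM items ORDER BY posted DESC LIMIT 15');
$sth->execute();
while (my $row = $sth->fetchrow_hashref) {
    $rss->add_item(
        title       => Encode::decode('UTF-8', $row->{title}),
        description => Encode::decode('UTF-8', $row->{body}),
    );
}
print $rss->as_string;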
A new version of my homepage, rewritten in Perl because PHP was starting to irritate me. It is more database-driven in the background, which allows me to add things like the tags. And a minor change in the colour scheme, because someone remarked that the black-on-cyan was hard to read for people above a certain age.
One of the little irritations at work was trying to find out what the exact error on a printer was when the helpdesk ticket just says 'printer problems'. Since HP laserjets will divulge everything via SNMP, I figured the complete information must be available. It is, and I cobbled together a perl script for our noc webserver. A public version is on the perl noc stuff page.
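The core of it is something like this sketch: ask the printer for its front-panel message over SNMP. The display-text OID is the HP-specific one; the hostname and community string are made up:

#!/usr/bin/perl
# Sketch: read the front-panel display text from an HP LaserJet.
use strict;
use warnings;
use Net::SNMP;

my $host = shift || 'printer.example.org';

my ($session, $error) = Net::SNMP->session(
    -hostname  => $host,
    -community => 'public',
    -version   => 'snmpv1',
);
die "SNMP session failed: $error\n" unless $session;

# HP-specific OID for the LaserJet front-panel display text
my $display_oid = '1.3.6.1.4.1.11.2.3.9.1.1.3.0';
my $result = $session->get_request(-varbindlist => [$display_oid]);
die 'SNMP get failed: ' . $session->error . "\n" unless $result;

print "$host says: $result->{$display_oid}\n";
$session->close;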
My first CPAN upload. I uploaded Geo::METAR 1.15 to CPAN just now. Time to find out if I did stuff right.
Played a bit with Geo::METAR, so now my webcam knows the almost-local weather and pastes it into the image.
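Roughly what the webcam script does, as a sketch; the station (EHAM) and the NOAA URL are assumptions for the example:

#!/usr/bin/perl
# Sketch: fetch a METAR report and turn it into a caption line.
use strict;
use warnings;
use LWP::Simple;
use Geo::METAR;

my $station = 'EHAM';
my $report  = get("https://tgftp.nws.noaa.gov/data/observations/metar/stations/$station.TXT")
    or die "Can't fetch METAR for $station\n";

# The file has a date line followed by the raw METAR.
my (undef, $metar) = split /\n/, $report;

my $m = Geo::METAR->new;
$m->metar($metar);

printf "%s: %s C, wind %s kts\n", $m->SITE, $m->TEMP_C, $m->WIND_KTS;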