2023-08-07 Trying to understand bonnie++ output
In preparation for a migration at work I wanted to do actual benchmarking of Linux filesystem performance. I think I used bonnie in the last century, so I gave bonnie++ a spin for this. I have little idea of what 'good' or 'bad' numbers are from bonnie++. I could only compare a "local" filesystem with an NFS filesystem. I put local in quotes because this was in a virtual machine, so it's SSD storage in raid-1, with LVM on top of it, with a logical volume assigned to a KVM-based virtual machine, which uses the virtio disk driver for an ext4 filesystem.

The numbers for the "local" filesystem:

Version 1.98       ------Sequential Output------ --Sequential Input- --Random-
                   -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Name:Size etc       /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
gosper          32G  809k  98  440m  38  215m  22 1590k  99  410m  30  4639 135
Latency             25688us     317ms     143ms    9332us   39208us    2089us

Version 1.98       ------Sequential Create------ --------Random Create--------
gosper              -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
Latency               488us     684us     762us     236us      87us     262us

1.98,1.98,gosper,1,1691401899,32G,,8192,5,809,98,450230,38,220385,22,1590,99,419827,30,4639,135,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,25688us,317ms,143ms,9332us,39208us,2089us,488us,684us,762us,236us,87us,262us

And for NFS, a Synology NAS with spinning disks in raid-5:

Version 1.98       ------Sequential Output------ --Sequential Input- --Random-
                   -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Name:Size etc       /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
gosper          32G 1054k  98 78.7m   7 68.4m  13 1483k  99  109m  10 432.2  12
Latency             11138us     408ms   13261ms   16434us     212ms     274ms

Version 1.98       ------Sequential Create------ --------Random Create--------
gosper              -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16   705  10 16184  16  2800  15   693   9  3703  18  2226  15
Latency             69194us   53194us   98927us   69144us    1240us   94317us

1.98,1.98,gosper,1,1691398200,32G,,8192,5,1054,98,80605,7,70058,13,1483,99,111574,10,432.2,12,16,,,,,705,10,16184,16,2800,15,693,9,3703,18,2226,15,11138us,408ms,13261ms,16434us,212ms,274ms,69194us,53194us,98927us,69144us,1240us,94317us

Now I am somewhat confused. Sequential per-character write to NFS is slightly faster.
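The machine-readable CSV lines bonnie++ prints after each run are easier to compare than the formatted tables. A minimal sketch that picks out the throughput fields; the field positions are assumptions based on the bonnie++ 1.98 output above, and the helper name is mine:

```python
# Sketch: pick the throughput numbers out of the machine-readable CSV line
# that bonnie++ prints after each run. Field positions are assumed from the
# bonnie++ 1.98 CSV lines above; speeds are in KB/s.
def parse_bonnie_csv(line):
    f = line.split(",")
    def num(i):
        return float(f[i]) if f[i] else None
    return {
        "host": f[2],
        "putc_kbps": num(9),          # per-character write
        "block_write_kbps": num(11),  # sequential block write
        "rewrite_kbps": num(13),
        "getc_kbps": num(15),         # per-character read
        "block_read_kbps": num(17),   # sequential block read
        "seeks_per_sec": num(19),
    }

local = parse_bonnie_csv("1.98,1.98,gosper,1,1691401899,32G,,8192,5,809,98,450230,38,220385,22,1590,99,419827,30,4639,135,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,25688us,317ms,143ms,9332us,39208us,2089us,488us,684us,762us,236us,87us,262us")
nfs = parse_bonnie_csv("1.98,1.98,gosper,1,1691398200,32G,,8192,5,1054,98,80605,7,70058,13,1483,99,111574,10,432.2,12,16,,,,,705,10,16184,16,2800,15,693,9,3703,18,2226,15,11138us,408ms,13261ms,16434us,212ms,274ms,69194us,53194us,98927us,69144us,1240us,94317us")
print(local["block_write_kbps"] / nfs["block_write_kbps"])  # block write: local is ~5.6x faster
print(nfs["putc_kbps"] > local["putc_kbps"])                # per-char write: NFS wins slightly
```

Parsed this way, the block write is over five times faster locally, while per-character output, which is mostly CPU-bound, is indeed the one number where the NFS run comes out slightly ahead.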
Update 2023-08-08

At work I got different but comparable numbers for iSCSI-attached storage versus VMware storage (and the layers in between). Those numbers helped make decisions about the storage.
2023-01-21 2022 in amateur radio for me
Time for an overview of what happened in amateur radio in 2022 for me. Like previous years I will look back at the plans and what happened. Looking back at Closing 2021 in amateur radio the following results are clear:

Read the rest of 2022 in amateur radio for me
- The morse exam finally happened and I passed it.
- More morse contacts in contests and in general
- 18 new countries/entities in the log
- More countries/entities in morse in the log
- Satellite contacts: none
- Used the improving propagation

And one thing is both a result of 2022 and an item for 2023: at the end of 2022 I ordered a new radio, a Yaesu FT-991A, HF, VHF, UHF all mode, and it was delivered last week. That will be a separate post.

The plans for 2023:
- Try to get more countries/entities, especially in morse. I am working towards DXCC in morse: 100 entities confirmed.
2022-10-09 I moved the 1-wire interface to a Raspberry Pi
After the problem that forced me to detach and reattach the USB 1-wire interface from a KVM virtual machine to fix an interference issue showed up again, I decided to move the USB 1-wire interface to a different machine, one where KVM virtualisation isn't in the mix. The closest available machine that can deal with the 1-wire interface is a Raspberry Pi which also has other monitoring tasks. This move worked fine and the 1-wire temperatures are showing up again in influxdb. I decided not to update the rrdtool temperature database. I will have to find time to migrate the rrdtool history to influxdb. Ideally there will be some aggregation for older measurements, but I'd like an "infinite" archive of a daily average.
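For the rrdtool-to-influxdb migration, one possible approach is to export the history with `rrdtool fetch` and rewrite the samples as InfluxDB line protocol. A minimal sketch of the parsing step; the measurement and tag names are made up for the example:

```python
# Sketch: turn the text output of `rrdtool fetch <file> AVERAGE` into
# InfluxDB line protocol. Measurement/tag names here are hypothetical.
def rrd_fetch_to_line_protocol(text, measurement="temperature", tag="sensor=attic"):
    lines = []
    for row in text.splitlines():
        if ":" not in row:
            continue  # skip the header line with the data source names
        ts, value = row.split(":", 1)
        value = value.strip().split()[0]
        if value in ("nan", "-nan"):
            continue  # unknown samples in the RRD become gaps in influxdb
        # line protocol: measurement,tags field=value timestamp-in-ns
        lines.append(f"{measurement},{tag} value={float(value)} {int(ts) * 10**9}")
    return lines
```

The daily-average "infinite" archive could then be handled on the influxdb side, with a retention policy plus continuous query in influxdb 1.x or a downsampling task in 2.x.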
2022-06-15 Grafana 9.0.0 available, and downgraded back to 8.5.6 and back up...
I saw an upgrade of Grafana available, which turned out to be 9.0.0. When upgrading to 9.0.0 I get:

An unexpected error happened
TypeError: Object(...) is not a function
t@[..]public/plugins/grafana-clock-panel/module.js:2:15615
WithTheme(undefined)

So maybe the grafana-clock-panel plugin isn't compatible with 9.0.0 somehow. Downgrading to 8.5.6 and reloading everything makes it work again.

Update: I checked the grafana-clock-panel plugin and noticed it hadn't been updated. So I updated the plugin, retried Grafana 9.0.0, and this time everything ran smoothly.
2022-05-11 SolarEdge inverters 'THROTTLING'
After upgrading the network to the shed to gigabit I was of course also testing the monitoring of the SolarEdge inverter via modbus/tcp. And then I noticed something: the inverter was in mode THROTTLING, which I had not seen before. The output then is:

$ ./sunspec-status -v se-schuur -m 0
 INVERTER:
              Model: SolarEdge SE2200
   Firmware version: 3.2537
      Serial Number: xxxxxxxx
             Status: THROTTLING
  Power Output (AC):        342 W
   Power Input (DC):        348 W
         Efficiency:      98.50 %
   Total Production:   3964.313 kWh
       Voltage (AC):     237.40 V (49.94 Hz)
       Current (AC):       1.53 A
       Voltage (DC):     378.80 V
       Current (DC):       0.92 A
        Temperature:      42.75 C (heatsink)

I could not find the reason for the reduction of the output power. I am now logging the status value of the inverters to see whether this happens more often.

Update: In hindsight I think this happened because I had rebooted the inverter in the shed to get the correct IPv4 address for monitoring. This was at a rather sunny moment. After the reboot I was quickly testing whether the modbus/tcp monitoring worked with the new address, and an inverter does not jump to full power output in one go but ramps it up slowly.
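Since the status value of the inverters is being logged now, a small decode table is handy. This is the SunSpec inverter operating-state enumeration as I understand it; the THROTTLING that sunspec-status shows corresponds to the THROTTLED state, value 5:

```python
# Decode the SunSpec inverter operating state ("St") register into a name.
# Enumeration per the SunSpec inverter models as I understand them.
SUNSPEC_INVERTER_STATES = {
    1: "OFF",
    2: "SLEEPING",
    3: "STARTING",
    4: "MPPT",           # normal power production
    5: "THROTTLED",      # output power is being limited
    6: "SHUTTING_DOWN",
    7: "FAULT",
    8: "STANDBY",
}

def decode_inverter_state(value):
    return SUNSPEC_INVERTER_STATES.get(value, f"UNKNOWN({value})")
```

Logging the decoded name instead of the raw number makes the influxdb series a lot easier to read back later.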
2022-05-09 Grafana alerts working again
After reverting to Grafana 8.4.7 for a while because alerts were failing in Grafana 8.5.0 I had a look at the available version today and saw version 8.5.2. I assumed the problem with DataSourceNoData errors was fixed by now and did the upgrade. Indeed the alerts are seeing data fine now and I trust they will work when needed.
2022-04-23 Grafana alerts failing in 8.5.0
I installed Grafana from their debian repository, so I get updates via the normal apt update / apt dist-upgrade process. Since upgrading to version 8.5.0 the alerts were all firing because of 'DatasourceNoData' errors. According to Alert Rule returned no data (after upgrade to 8.5.0) #48128 other people are seeing this too. For now I downgraded to version 8.4.7 where things work fine and I'll see if a newer version shows up.
2022-03-18 Using grafana for alerting too
I've been playing with grafana for about a year, since I started updating my statistics gathering, and I keep seeing new options and updates in grafana. Grafana recently got some new options for alerting and I am trying a few of those. Alerts for things that are a real problem and can cause other problems are a good start: based on some earlier problems I keep an eye on filesystems that are over 90% full.

Today I read Three DDoS attacks on my personal website, found via Three DDoS attacks on my personal website : r/homelab on reddit, and this made me wonder about overloads on my webserver. The easiest way to detect problems with web serving I could think of is to look at the queue size in haproxy, which is monitored in influxdb/grafana anyway for nice graphs of website traffic. I did have a period with too-high queues for backend webservers, but that was when the backend server was completely broken due to a filesystem problem, so there was a logical reason.

It would be nice if I could iterate alerts, like 'for the root filesystem of every monitored system', or at least copy them, changing only the system name in the rules and alerts.
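Grafana's unified alerting can get close to iterating alerts: when the alert query returns one series per host, each series is evaluated as its own alert instance, so one rule covers every monitored system. A sketch in InfluxQL, assuming a Telegraf-style disk measurement with a host tag (all names here are assumptions about the setup):

```sql
-- One rule, one series per monitored system: Grafana raises a separate
-- alert instance for every "host" value the query returns.
-- Measurement and tag names are assumptions (Telegraf disk plugin layout).
SELECT last("used_percent")
FROM "disk"
WHERE "path" = '/' AND time > now() - 5m
GROUP BY "host"
```

With a threshold condition of "above 90" on this query, adding a new monitored system should not require touching the alert rule at all.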
2022-02-16 Closing 2021 in amateur radio
I noticed I didn't do a "Closing 2021 in amateur radio" post yet, so time to catch up. Looking back at the Review of 2020 in amateur radio with plans for 2021 I can say:
- Practising morse has happened! Just no exam yet, but that is mainly due to the current circumstances
- Satellite contacts: none.
- Morse and phone in contest: yes!
- New qsl cards ordered and in use

And the plans for 2022:

- More and more morse, and that exam. There is an exam date now and it will be possible to get the wanted 'CW included' on my radio amateur identification
- Again satellites
- In contests: try to get more morse and phone contacts.
- Use the better propagation to get contacts on different bands
More detailed statistics over 2021

And I had to check my own notes again to see how I got these numbers last year, so I'm adding the sql queries I typed at the mysql/mariadb client. With the database behind cqrlog available I can make all kinds of queries.
By month

The influence of months with (digital) contests isn't as strong as in previous years.

+-------+-----+
| month | cnt |
+-------+-----+
|     1 | 234 |
|     2 | 204 |
|     3 | 238 |
|     4 | 161 |
|     5 | 131 |
|     6 | 111 |
|     7 | 211 |
|     8 |  19 |
|     9 | 232 |
|    10 | 204 |
|    11 | 191 |
|    12 | 101 |
+-------+-----+

Query: select month(qsodate) as month,count(id_cqrlog_main) as cnt from cqrlog_main where year(qsodate)=2021 group by month order by month;
By band

No real surprises there. And the feeling that 10 meter was improving isn't showing in the statistics yet.

+------+-----+
| band | cnt |
+------+-----+
| 40M  | 699 |
| 20M  | 849 |
| 17M  | 151 |
| 15M  |  40 |
| 10M  | 243 |
| 2M   |  51 |
| 70CM |   4 |
+------+-----+

Query: select band,count(id_cqrlog_main) as cnt from cqrlog_main where year(qsodate)=2021 group by band order by freq;
By mode

Almost double the number of morse contacts compared to the previous year.

+-------+-----+
| mode  | cnt |
+-------+-----+
| JT65  |   2 |
| PSK31 |   3 |
| FM    |  19 |
| FT4   |  35 |
| PSK63 | 226 |
| CW    | 240 |
| SSB   | 267 |
| RTTY  | 386 |
| FT8   | 859 |
+-------+-----+

Query: select mode,count(id_cqrlog_main) as cnt from cqrlog_main where year(qsodate)=2021 group by mode order by cnt;
2021-11-12 More magic numbers in SunSpec Modbus
After a while it turned out that the value 65534 (0xFFFE) in SunSpec modbus responses can also be a form of 'no valid reading'. I have adapted the scripts that fetch the data and process it into influxdb accordingly.
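A minimal sketch of the kind of filtering the scripts now do, assuming unsigned 16-bit registers; the function and set names are mine:

```python
# Sketch: treat SunSpec sentinel values as missing data before writing to
# influxdb. 0xFFFF is the usual "not implemented" value for a uint16
# register; 0xFFFE turned out to also mean "no valid reading" here.
INVALID_UINT16 = {0xFFFF, 0xFFFE}

def clean_uint16(raw):
    """Return None for sentinel values, the raw register value otherwise."""
    return None if raw in INVALID_UINT16 else raw
```

Dropping the sample (returning None) instead of writing the sentinel keeps impossible spikes like 65534 out of the influxdb graphs.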