Today, I found out one of my switches had a hiccup, causing about 80% packet loss. This prompted me to build meshping, a distributed ping service that I'm going to integrate with FluxMon in order to notice such stuff more quickly in the future. Of course, I could just use any monitoring tool, but I just can't be arsed to configure one. I need something that configures itself.
Today I found out that after rebooting my Samba4 DC based on Debian Jessie, systemd starts up smbd/nmbd/winbind again instead of samba-ad-dc. This has to stop, I want my domain back! So I dug around a bit and found that while there is an upstart config and an init script for samba-ad-dc, there is no systemd service file, so systemd just doesn't know what to do. The init script is nice enough to just go "hey, I'mma call systemd for this!", and that call then goes straight to nirvana.
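A unit file along these lines should fill that gap. This is a minimal sketch from memory, not an official unit; the binary path, PID file location and forking behaviour are assumptions to double-check against the init script:

```
[Unit]
Description=Samba Active Directory Domain Controller
After=network.target

[Service]
Type=forking
PIDFile=/run/samba/samba.pid
ExecStart=/usr/sbin/samba -D
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target
```

Drop it into /etc/systemd/system/samba-ad-dc.service, run `systemctl daemon-reload`, then mask the three stray services (`systemctl mask smbd nmbd winbind`) and enable samba-ad-dc, and the next reboot should bring the domain back up.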
One thing I've been wanting to do for quite some time now is calculating confidence intervals for the metrics measured by FluxMon. I still have to figure out whether or not percentiles are really the same thing, but for starters I'd like to calculate them for those values.
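Percentiles at least are cheap to get at once the samples are in memory. A quick sketch using only Python's stdlib (the sample values are made up):

```python
import statistics

# made-up measurement samples, e.g. response times in ms
samples = [12.0, 15.3, 11.8, 19.2, 14.1, 13.7, 16.5, 12.9]

# quantiles(n=100) returns the cut points for the 1st..99th percentile
pct = statistics.quantiles(samples, n=100, method="inclusive")
p05, p50, p95 = pct[4], pct[49], pct[94]
```

The "inclusive" method treats the data as the whole population rather than a sample, which matches how FluxMon's stored measurements would be used here.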
I use FluxMon to query statistics from the inverter hooked up to our solar collectors. One of the values the inverter spits out is today's energy production in Wh. This value is reset every day and counts up over the course of the day. Fair enough, but I'd be way more interested in the derivative of that value, which would indicate how much power is currently being produced (Wh per hour is just W, after all).
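That derivative is easy enough to approximate from successive samples. A sketch (the function name and the data are made up for illustration); the daily reset shows up as a negative delta and simply gets skipped:

```python
def power_from_counter(samples):
    """samples: time-ordered list of (timestamp_seconds, wh_today) tuples.
    Returns a list of (timestamp, watts) derived from successive deltas.
    The counter resets to zero each day; a negative delta marks a reset
    and is skipped."""
    rates = []
    for (t0, e0), (t1, e1) in zip(samples, samples[1:]):
        delta_wh = e1 - e0
        if delta_wh < 0:  # daily counter reset
            continue
        hours = (t1 - t0) / 3600.0
        rates.append((t1, delta_wh / hours))  # Wh per hour is W
    return rates
```

So two samples an hour apart reading 500 Wh and 1200 Wh would yield 700 W for that interval.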
I'm currently looking into loads of maths stuff for FluxMon, so I've eventually reached the point where I'd like to try a few things. I've even collected quite a nice amount of data by now, so I have something to work with. But how to store and analyze it?
At the Chemnitz Linux Days, Hans-Jürgen Schönig gave a talk named PostgreSQL: Daten superschnell analysieren ("analyzing data super fast") which I had attended, so I wanted to get the data into PostgreSQL and try out some of the fancy stuff he showed. Googling for postgres time series, I stumbled upon this blog post showing a couple of nice things too, and since the schema used by this guy is radically simple, I just went ahead.
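The idea boils down to one table of (timestamp, sensor, value) rows plus window functions for the analysis. A rough sketch, using Python's sqlite3 module so it's self-contained; the window-function SQL works the same in PostgreSQL (the table and column names here are my own, not necessarily the blog post's):

```python
import sqlite3

# sqlite3 stands in for PostgreSQL; the SQL below is portable
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements (ts INTEGER, sensor TEXT, value REAL)")
rows = [(i * 300, "power", v)
        for i, v in enumerate([100.0, 220.0, 310.0, 280.0, 150.0])]
conn.executemany("INSERT INTO measurements VALUES (?, ?, ?)", rows)

# moving average over the current and the two preceding samples
result = conn.execute("""
    SELECT ts, value,
           AVG(value) OVER (ORDER BY ts ROWS 2 PRECEDING) AS avg3
    FROM measurements
    ORDER BY ts
""").fetchall()
```

Once the data sits in a table like this, things like moving averages, percentiles and per-day aggregates are one `OVER (...)` clause away.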
Every now and then, I need to modify files hosted on a rented web space where the hoster provides access via SFTP. Since unencrypted connections with plaintext passwords in them suck, of course I'm using SFTP. But this turns out to be a royal pain in the ass, because the Linux SFTP client can only be used interactively and does not provide any means of saving the password, because hey, use Public Key Authentication. Sadly, I can't, because it's just not available. So every time I want to do a file operation, I need to find the damn credentials in some text file, start an interactive sftp session, do whatever I want to do (interrupted by lots of cd'ing around), and then close the session again to get my bash back.
Obviously, this sucks. I can't establish any kind of workflow because I keep getting interrupted by having to do trivial stuff in a non-trivial way.
After getting my Snom phones to load their directories from my home server instead of using their local ones, I quickly noticed that scrolling through a directory with 96 entries on a one-line display is no fun at all, and the thing definitely needs to be searchable.
I recently started fiddling around with Asterisk and Snom VOIP phones, and one of the main features I want my phones to have is cool CallerID handling, which means:
Contacts need to be in sync with my mobile, which syncs with a Zarafa instance.
When the phone rings, I want to see who's calling.
When I type a number (without selecting it from the directory), I want my phone to display the name of the person I'm calling.
I want my phones' directories to be synced without having to worry about it.
So here's what I did.
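The heart of the two CallerID points above is a number-to-name lookup against the synced directory. A minimal sketch of that lookup (the normalization rules assume German dialing conventions, and the directory entry is made up):

```python
def normalize(number: str) -> str:
    """Reduce a dialed number to canonical +<cc>... form, assuming
    German conventions: strip separators, rewrite the 00<cc> prefix
    to +<cc>, and prepend the home country code to 0<area> numbers."""
    digits = "".join(ch for ch in number if ch.isdigit() or ch == "+")
    if digits.startswith("00"):
        return "+" + digits[2:]
    if digits.startswith("0"):
        return "+49" + digits[1:]  # assumed home country code
    return digits

# toy directory; the real one would be synced from Zarafa
directory = {"+495551234567": "Alice Example"}

def callerid_name(number: str) -> str:
    """Name for the display, falling back to the normalized number."""
    return directory.get(normalize(number), normalize(number))
```

With that in place, "0555 123 4567", "0049 555 1234567" and "+495551234567" all resolve to the same contact, which is exactly what the phone display needs.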