Processing Prometheus alerts using NodeRED

I'm currently building some home automation stuff using NodeRED. One of the things I need to do is trigger home automation actions from Prometheus: I'm monitoring our solar panels with it, and I want the shutters to go down whenever it's sunny outside. I have a Prometheus alert rule that tells me when that is. But how can I trigger NodeRED from there?

As an extra challenge, I wanted to see if I could actually do this without writing a single line of code. I do need a couple more nodes this way, but it works and it's not all that complicated. Here's how it's done.
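
The post has the full flow; as a rough sketch of the general idea (not necessarily the exact setup described there), Alertmanager can push firing and resolved alerts to any HTTP endpoint via a webhook receiver, and NodeRED can expose such an endpoint with an http-in node. The host name and path below are made up:

    # alertmanager.yml excerpt -- illustrative only, host and path are assumptions
    route:
      receiver: nodered
    receivers:
      - name: nodered
        webhook_configs:
          - url: http://nodered.local:1880/prometheus-alert
            send_resolved: true

On the NodeRED side, an http-in node listening on that path followed by a switch node on the alert's status field would be enough to react to it, still without any actual code.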

Read more…

NodeRED-based MQTT Browser

I've recently started building stuff with NodeRED, which I really enjoy. Since I'm using MQTT as the central message bus for my system, I regularly find myself needing to know the current state of my MQTT topics. There are a couple of tools around that are built specifically for this purpose, for instance MQTT.fx; but I found that MQTT.fx likes to cause high CPU load when its topic scanner is running. So I was looking for an alternative and thought: why not build it in NodeRED? So that's what I did.
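
The full flow is in the post; the central trick behind such a browser boils down to subscribing an mqtt-in node to # and keeping the latest payload per topic around, e.g. in flow context. A minimal sketch of what such a function node could look like (the context key here is made up, not taken from the post):

    // NodeRED function node (sketch): keep the newest payload for every topic
    // "mqttState" is an arbitrary flow-context key chosen for this example
    const state = flow.get('mqttState') || {};
    state[msg.topic] = msg.payload;
    flow.set('mqttState', state);
    return msg;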

Read more…

Asterisk + Sipgate + IPv6

My internet provider force-migrated me to Dual-Stack Lite the other day, so now I have IPv6 at home. It did cost me my IPv4 address though, so I'm not sure yet whether I should be stoked or not. But anyway, now that I have it, I wanted to at least try it out.

Read more…

Manually creating a Ceph OSD

When setting up the Ceph Server scenario for Proxmox, the PVE guide suggests using the pveceph createosd command for creating OSDs. Unfortunately, this command assumes that you want to dedicate a complete hard drive to your OSD and format it using ZFS. I tend to disagree: not only do I prefer RAIDs because their caches eliminate latency, I also always have LVM in between so that I stay flexible with disk space allocation. And I haven't really been a huge fan of ZFS ever since it bit me, even though they've fixed that issue by now. Still, I'm staying with my trusty XFS.

That of course means that I'll have to create my OSDs differently because pveceph createosd isn't going to work. Here's how I do it.
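
The post has the details; for orientation, manually registering a filestore OSD on an existing LVM volume roughly follows the generic manual-deployment steps from the Ceph documentation. A sketch with made-up volume group names and sizes, not necessarily identical to what the post does:

    # carve an LV out of an existing VG and put XFS on it (names are examples)
    lvcreate -L 200G -n osd0 vg0
    mkfs.xfs /dev/vg0/osd0

    # allocate an OSD id and mount the volume where Ceph expects it
    OSD_ID=$(ceph osd create)
    mkdir -p /var/lib/ceph/osd/ceph-$OSD_ID
    mount /dev/vg0/osd0 /var/lib/ceph/osd/ceph-$OSD_ID

    # initialize the data directory, register the key, place it in the CRUSH map
    ceph-osd -i $OSD_ID --mkfs --mkkey
    chown -R ceph:ceph /var/lib/ceph/osd/ceph-$OSD_ID
    ceph auth add osd.$OSD_ID osd 'allow *' mon 'allow profile osd' \
        -i /var/lib/ceph/osd/ceph-$OSD_ID/keyring
    ceph osd crush add osd.$OSD_ID 1.0 host=$(hostname -s)
    systemctl start ceph-osd@$OSD_ID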

Read more…

Ceph CRUSH map with multiple storage tiers

At work, we're running a virtualization server that has two kinds of storage built in: an array of fast SAS disks, and another of slow but huge SATA disks. We're running OSDs on both of them, and I wanted to distinguish between them when creating RBD images, so that I could choose the performance characteristics of the pool. I'm not sure whether this post is already outdated (as of Jan 2018), since CRUSH maps now suddenly have a device "class" attribute. However, here's what we're currently running.
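
Before device classes existed, the usual way to do this was to give each tier its own root in the CRUSH map (with per-tier host buckets) plus a rule that starts from that root, and then point each pool at the matching rule. A stripped-down sketch with invented names, not the exact map from our cluster:

    # decompiled CRUSH map excerpt (illustrative)
    host node1-sas {
            id -11
            alg straw
            hash 0  # rjenkins1
            item osd.0 weight 1.000
    }
    root sas {
            id -10
            alg straw
            hash 0  # rjenkins1
            item node1-sas weight 1.000
    }
    rule sas {
            ruleset 1
            type replicated
            min_size 1
            max_size 10
            step take sas
            step chooseleaf firstn 0 type host
            step emit
    }

A pool is then tied to a tier with something like ceph osd pool set fast-pool crush_ruleset 1 (newer releases call the option crush_rule).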

Read more…