Manually creating a Ceph OSD

When setting up the Ceph Server scenario for Proxmox, the PVE guide suggests using the `pveceph createosd` command for creating OSDs. Unfortunately, this command assumes that you want to dedicate a complete hard drive to your OSD and format it using ZFS. I tend to disagree: not only do I prefer RAIDs because their caches eliminate latency, I also always have LVM in between so that I'm flexible with disk space allocation. And I'm not really a huge fan of ZFS ever since it bit me, although they've fixed that issue by now. Still, I'm staying with my trusty XFS.

That of course means that I'll have to create my OSDs differently, because `pveceph createosd` isn't going to work. Here's how I do it.
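In outline, the approach looks something like this. This is only a sketch of the classic filestore-era manual-OSD procedure, not necessarily the exact commands from the full post; the volume group `vg0`, the LV name, and the size are invented placeholders.

```sh
# Hypothetical names: VG "vg0", LV "osd-data"; adjust to your layout.
lvcreate -L 100G -n osd-data vg0
mkfs.xfs /dev/vg0/osd-data

# Ask the cluster for a new OSD ID, then mount the volume in place.
ID=$(ceph osd create)
mkdir -p /var/lib/ceph/osd/ceph-$ID
mount /dev/vg0/osd-data /var/lib/ceph/osd/ceph-$ID

# Initialize the OSD data directory and register its key.
ceph-osd -i $ID --mkfs --mkkey
ceph auth add osd.$ID osd 'allow *' mon 'allow profile osd' \
    -i /var/lib/ceph/osd/ceph-$ID/keyring

# Place it in the CRUSH map and start it.
ceph osd crush add osd.$ID 1.0 host=$(hostname -s)
service ceph start osd.$ID
```

The point of the LVM layer is visible in the first two lines: the OSD gets a logical volume of whatever size you like instead of a whole disk.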

Read more…

Ceph CRUSH map with multiple storage tiers

At work, we're running a virtualization server that has two kinds of storage built in: an array of fast SAS disks, and another of slow-but-huge SATA disks. We're running OSDs on both, and I wanted to distinguish between them when creating RBD images, so that I could choose the performance characteristics of each pool. This post may be outdated by now (as of January 2018, CRUSH maps have grown a "class" concept), but here's what we're currently running.
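Roughly sketched, in the pre-device-class style: each tier gets its own root bucket in the CRUSH map plus a rule that draws from it, and each pool is pointed at the matching rule. The bucket and host names below are invented for illustration.

```
# Hypothetical excerpt: one root bucket per storage tier.
root sas {
        id -10
        alg straw
        hash 0  # rjenkins1
        item node1-sas weight 1.000
}
root sata {
        id -11
        alg straw
        hash 0  # rjenkins1
        item node1-sata weight 1.000
}

# A rule that places replicas only on the SAS tier.
rule sas {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take sas
        step chooseleaf firstn 0 type host
        step emit
}
```

A pool is then tied to a tier with `ceph osd pool set <pool> crush_ruleset 1` (called `crush_rule` on newer releases).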

Read more…

Rust, meet Python

Out of mere curiosity, I wanted to try out bindgen to generate a Rust interface to a C library. So I ran it against libpython, not really expecting it to work, but you don't know until you try, right? The fun part is: it does work, after defeating a few errors.

Read more…