Resilvering a ZFSonLinux disk

The other day, a disk died in our ZFSonLinux NAS. So we ordered a new one and installed the replacement today. The installation was pretty straightforward: Remove the old disk, put in the new one, zpool replace the damn thing, and voilà, here goes the resilvering. And it says it's just gonna take about 58 hours, or two and a half days, for a 500 GB disk.

scan: resilver in progress since Thu Jun 18 14:25:34 2015
    124G scanned out of 1,35T at 6,10M/s, 58h39m to go
        14,3G resilvered, 9,01% done

Dafuq.
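
For the record, the replace step itself is a single command. A sketch, with an assumed pool name and assumed device paths (adjust to your setup):

    # "tank" and the disk IDs below are assumptions
    zpool replace tank ata-OLDDISK ata-NEWDISK
    zpool status tank    # shows resilver progress like the output above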

Read more…

rsnapshot-based previous versions

The other day, a friend of mine who runs a small business asked me how to avoid the risk of losing files when hardware dies. Of course, the obvious choice is storing important data on a little NAS system that tolerates disk failure. But on top of that, I wanted to add snapshot-based backups for a little extra safety, integrated with Samba's shadow_copy2 VFS module so that the Restore Previous Versions feature found in recent Windows versions would work. Naturally, there are unexpected caveats.
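
For a rough idea of the moving parts, a minimal smb.conf share using shadow_copy2 could look like the sketch below. The paths and the timestamp format are assumptions, and note that rsnapshot's default daily.0-style directory names don't encode a timestamp at all, which is presumably where the caveats begin:

    [data]
        path = /srv/data
        vfs objects = shadow_copy2
        # where the snapshots live (assumed path)
        shadow:snapdir = /srv/data/.snapshots
        # directory name format the module expects (assumption)
        shadow:format = %Y-%m-%d_%H-%M-%S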

Read more…

Passwordless LDAP authentication

If you're running boxes in an Active Directory or Samba4 domain, you might get to the point where you need to access the domain's LDAP directory, which means you need some kind of authentication.

The classical way involves either using your own domain account, ideally only temporarily; using the Administrator account (which is strongly discouraged, meaning people do it all the time); or creating a dummy user with an even dummier password to use in the application. I admit to having used these approaches before, but they always felt somewhat ugly and unclean to me. So finally, I went looking for a better way.
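
One common passwordless approach (not necessarily the one this article settles on) is Kerberos: authenticate as the machine account from the host's keytab and bind via SASL/GSSAPI. A sketch, with assumed host and domain names:

    # assumes the box is joined to the domain and /etc/krb5.keytab exists
    kinit -k 'MYHOST$'
    ldapsearch -Y GSSAPI -H ldap://dc1.example.com \
        -b 'dc=example,dc=com' '(sAMAccountName=someuser)'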

Read more…

Configuring NTPd for a Samba 4 Domain

If you're using Samba 4 to run an Active Directory Domain, you should also configure NTPd on your domain controller hosts. Windows clients are automatically configured to sync their clocks with the DC, and if the clocks drift apart by more than five minutes, logins will simply stop working, which tends to make users unhappy.

However, Windows clients require NTP packets to be signed by the domain controller; otherwise, they'll refuse to sync their clocks with the server. Here's how you can configure ntpd to sign its responses.
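
The gist is one directive plus a restrict flag. A sketch of the relevant ntp.conf bits, where the socket path is Samba's default and may differ on your system:

    # /etc/ntp.conf on the domain controller
    # socket of Samba's signing helper (default path, may vary)
    ntpsigndsocket /var/lib/samba/ntp_signd/
    # mssntp lets clients request signed (MS-SNTP) responses
    restrict default kod nomodify notrap nopeer mssntp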

Read more…

Why I started working on Fluxmon

Monitoring is one of those fields where the temptation for tech ejac is pretty strong. You can collect data from everywhere, not only using pretty standard scripts that query the Linux kernel, but also using home-brewed hardware, say a Raspberry Pi measuring the water temperature in your aquarium. You can draw graphs, and you get to build your own parser for a minilanguage that lets you graph wr_sectors[sct/s] * 512[B/sct] / wr_ios[IO/s] and have the language infer that this calculation yields bytes per IO operation. You can do loads of fancy BI stuff using CrystalReports and friends to generate reports. Oh, and do make sure you can handle a flaky connection without losing measurements. And while you're at it, cluster the whole thing, and be sure to have a system underneath it that can easily take the load.
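
The unit inference in that example boils down to simple cancellation:

    wr_sectors[sct/s] * 512[B/sct] / wr_ios[IO/s]
      = (sct/s * B/sct) / (IO/s)
      = (B/s) / (IO/s)
      = B/IO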

All of this is actually pretty fun. That's why it is so tempting: We're nerds, so doing this stuff is just plain awesome. But once you get to the point where you've figured out how to measure things, there's a much tougher question waiting: What do you measure, and what for?

Read more…

Building a storage abstraction layer

During the development process for the next version of openATTIC, we came across the problem that our previous design — which had been pretty straightforward — was being challenged by the fact that there are lots of ways to architect a storage system, and using filesystems on top of LVM logical volumes, shared via CIFS, NFS or FTP, was just one of them.
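
To illustrate, that "classic" stack boils down to a few commands; the volume group, size and share names here are assumptions:

    # carve out a logical volume, put a filesystem on it, mount it
    lvcreate -L 100G -n projects vg00
    mkfs.xfs /dev/vg00/projects
    mount /dev/vg00/projects /srv/projects
    # then export /srv/projects via smb.conf (CIFS), /etc/exports (NFS) or an FTP daemon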

Read more…

Storage Performance Howto

Virtualization is no longer a trend, it's simply the reality in nearly every data center, especially the ones that are newly built. And it makes sense, too: These days, computers have such an insane amount of processing power that no single user will ever be able to use it to full capacity. (Check your load average if you don't believe me. Tip: 100% usually equals the number of CPU cores you have.)
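
A quick way to check is comparing the load average against the core count:

    nproc               # number of CPU cores
    cat /proc/loadavg   # 1-, 5- and 15-minute load averages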

But if you're running a multitude of virtual machines on a single piece of hardware, that hardware has to be able to take the load. With regard to CPU and RAM, that's easy: Just buy enough of 'em. In terms of network bandwidth, that works too: a simple Gigabit connection can already get you quite far, and ten-gigabit Ethernet is readily available.

Unfortunately, it's not that easy when it comes to storage. Scaling up a single system gets pretty expensive beyond a certain point, and you don't want to build yourself a very expensive single point of failure. Scaling out to a bunch'a systems is fair enough price-wise, but at the end of the day, it always comes down to a single user waiting for a single disk to do stuff, so this approach does nothing to improve the performance that particular user actually experiences.

So in order to speed that up, we'll go and buy some SSDs, right?

Read more…

Open Source is not an option

Up until now, I had always considered open source and closed source software to be two competing licensing (and business) models, each with its own pros and cons. This week, I experienced a couple of things that changed my mind. Right now, I'm convinced that open source is the only sane way to go.

There are a couple of reasons for that.

Read more…