Running Path MTU Discovery from Windows

When debugging network connectivity issues, it is sometimes helpful to know the largest packet that can travel from one host to another without being fragmented. This is especially useful when Jumbo Frames or VPNs are involved. The method for finding this size is called Path MTU Discovery (PMTUD). Your OS of choice performs it in the background, but you can also do it manually. On Linux this is easy, but it’s also possible from Windows. Here’s how.
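
In a nutshell, the Windows probe looks like this (www.example.com is just a placeholder host):

```
:: Set Don't Fragment (-f) and an explicit payload size (-l).
:: 1472 bytes of ICMP payload + 8 bytes ICMP header + 20 bytes IP header
:: = 1500 bytes, i.e. exactly a standard Ethernet MTU.
ping -f -l 1472 www.example.com

:: If the path MTU is smaller, the ping fails with a fragmentation error.
:: Lower -l until it succeeds - that payload size plus 28 is your path MTU.
```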

Read more…

A nicer way to access Windows file servers from Linux

I recently came across smbnetfs. Initially I had only been looking for a way to mount a Windows file share from userspace, so that I wouldn’t have to deal with permissions for my non-root user accessing a mountpoint. smbnetfs can do a lot more than just that though: It gives you the equivalent of Windows’s “Network” feature, allowing you to access the whole network via a single mount. Since the documentation is a bit sparse, here’s how I set it up.
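
As a quick preview, basic usage is as simple as this (the mountpoint and server name are just examples; credentials and tuning go into ~/.smb/smbnetfs.conf, which is where the sparse documentation comes in):

```
# Create a mountpoint your own user owns.
mkdir -p ~/network

# Mount the whole network there - FUSE, no root required.
smbnetfs ~/network

# Servers and shares now appear as plain directories.
ls ~/network/fileserver01/public

# Unmount when done.
fusermount -u ~/network
```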

Read more…

Sysadmin Life Hack: Syslog and NodeRED, a match made in heaven

I recently wanted to react to events detected by some obscure hardware monitoring device, using NodeRED. I was not exactly sure how to achieve this, because the device doesn’t offer an API or any comparable way of accessing it. Of course, these devices can often send mail or SNMP traps, but I didn’t want to involve a mail server or deal with OIDs, so I kept looking for a different solution.

Another protocol that is actually quite widely supported is syslog. Since I had a syslog-ng instance running to send logs to Loki, I wondered if it would be possible to also send the logs to MQTT. Indeed this works, and it allowed me to implement my use case quite easily. And ever since, I have kept finding new applications for this: The syslog protocol is ubiquitous, and even the cheapest routers support it.

I’m going to lay out a few details on how it works.
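
As a first taste, here’s a minimal sketch of the syslog-ng side. The broker address and topic are examples, and the mqtt() destination needs a reasonably recent syslog-ng (it appeared around version 3.33):

```
# Receive BSD-style syslog from devices on the network.
source s_net {
    network(transport("udp") port(514));
};

# Publish each message to the MQTT broker, one topic per sending host.
destination d_mqtt {
    mqtt(
        address("tcp://127.0.0.1:1883")
        topic("syslog/${HOST}")
        template("${MESSAGE}")
    );
};

log { source(s_net); destination(d_mqtt); };
```

In NodeRED, an MQTT-in node subscribed to syslog/# then picks the messages up.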

Read more…

Ceph BlueStore/FileStore latency

We’ve been looking deeply into Ceph storage latency, comparing BlueStore and FileStore, and looking for ways to get below the magic 2ms write latency mark in our Proxmox clusters. Here’s what we found.

The endeavour was sparked by our desire to run ZooKeeper on our Proxmox clusters. ZooKeeper is highly sensitive to IO latency: If writes are too slow, it will log messages like this one:

fsync-ing the write ahead log in SyncThread:1 took 1376ms which will adversely effect operation latency.File size is 67108880 bytes. See the ZooKeeper troubleshooting guide

Subsequently, ZooKeeper nodes will consider themselves broken and restart. If the thing that’s slow is your Ceph cluster, this means that all three ZooKeeper VMs will be affected at the same time, and you’ll end up losing your ZooKeeper cluster altogether.

We mitigated this by moving ZooKeeper to local disks, and getting rid of the Ceph layer in between. But that is obviously not a satisfactory solution, so we’ve spent some time looking into Ceph latency.

Unfortunately, there’s not a lot of advice to be found other than “buy faster disks”. This didn’t seem to cut it for us: Our hosts were reporting 0.1ms of disk latency, while the VMs measured 2ms. If our hosts had weighed in at 1.8ms, I’d be willing to believe we had a disk latency issue - but not with the discrepancy we were seeing. So let’s dive in and see if we can find other issues.
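
For reference, the kind of measurement behind numbers like these is a queue-depth-1 synchronous write test. A fio run along these lines reproduces it (file name and runtime are arbitrary, and this is not necessarily our exact invocation); the figure to watch in the output is the completion latency (clat):

```
fio --name=writelatency \
    --filename=/tmp/fio-testfile --size=1G \
    --ioengine=libaio --direct=1 --sync=1 \
    --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 \
    --runtime=60 --time_based
```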

Read more…

Enforcing correct DNS upstreams for internal zones

When you frequently work with the internal DNS zones of a company whose DNS server sits behind a VPN, you’ll probably soon encounter DNS shenanigans: Resolving internal domain names turns out to be a lot trickier than it should be. I’ve found an approach that works using dnsmasq, but I also found that you need to be careful to keep an overly eager NetworkManager in check.
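
The gist of the dnsmasq part looks like this; the zone name and the VPN DNS server address are made up for illustration:

```
# /etc/dnsmasq.d/corp.conf (example values)
# Queries for the internal zone go to the DNS server behind the VPN ...
server=/corp.example.com/10.8.0.1
# ... and everything else goes to a public resolver.
server=9.9.9.9
```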

Read more…

Revisiting Samba RODC + BIND

Ok, so here’s another step in the evolution of my Samba4-RODC-based DNS setup. The first step was setting up a Samba4 read-only DC in my remote locations, so that DNS records would be replicated there and name resolution wouldn’t fail in case the VPN connection dies. Then we discovered that the SAMBA_INTERNAL DNS backend does not support caching, which unsurprisingly led to performance problems, so we switched to Samba AD DC with BIND as the DNS backend. This setup is quite a bit more complex though, and it seems a bit unstable: For some reason, Samba occasionally loses its ability to update records in BIND, and we have to “fix” that manually by re-joining the RODC to the domain. Rumor has it that the SAMBA_INTERNAL backend is a lot more stable. So, here’s step three in our evolution: Let’s allow Samba to use SAMBA_INTERNAL, but have it listen only on 127.0.0.1, while communication with the outside world is handled by a BIND instance that takes care of caching and forwards queries for the company domain records to Samba.
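
The BIND side of this boils down to a forward zone. Here’s a minimal sketch; the zone name and addresses are made up, and it assumes Samba’s DNS really does listen only on 127.0.0.1:

```
// named.conf fragment (example values)
options {
    listen-on { 192.0.2.53; };   // the LAN address - 127.0.0.1:53 belongs to Samba
    recursion yes;               // BIND does the caching for all clients
};

zone "ad.example.com" {
    type forward;
    forward only;
    forwarders { 127.0.0.1; };   // hand queries for the AD zone to Samba
};
```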

Read more…

Let’s Encrypt DNS verification using a local BIND instance

I’ve been looking into Let’s Encrypt DNS verification for a while. Not only can you obtain wildcard certificates through this method, freeing you from having to obtain an individual certificate for every single one of your subdomains: It also allows you to get a certificate for things running on your LAN, provided you run them on a subdomain that belongs to you. The problem is, though: How do you enable Certbot to automate the DNS update without putting a credential in place that would allow full access to all your domains? And what do you do if you’re running a server for a domain that doesn’t even belong to you: How can the owner delegate permission for the verification TXT records to you without having to give you full access to all their domains? Today I stumbled across a solution: Delegate the _acme-challenge subdomain to a local BIND instance and have Certbot update that. Here’s how.
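
To sketch the idea (all names and addresses here are placeholders, and the certbot-dns-rfc2136 plugin is assumed): The public zone delegates only the challenge label to the BIND instance under your control:

```
; in the public example.com zone:
_acme-challenge.example.com.  IN  NS  acme.example.com.
acme.example.com.             IN  A   203.0.113.10
```

Certbot can then update that zone via RFC 2136 dynamic updates, authenticated with a TSIG key that the local BIND is configured to accept:

```
certbot certonly --dns-rfc2136 \
    --dns-rfc2136-credentials /etc/letsencrypt/rfc2136.ini \
    -d example.com -d '*.example.com'
```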

Read more…