When setting up the Ceph Server scenario for Proxmox, the PVE guide suggests using the `pveceph createosd` command to create OSDs. Unfortunately, this command assumes that you want to dedicate a complete hard drive to your OSD and format it using ZFS. I tend to disagree: not only do I prefer RAIDs because their caches eliminate latency, I also always put LVM in between so that I stay flexible with disk space allocation. And I'm not really a huge fan of ZFS ever since it bit me, even though that issue has been fixed by now. I'm staying with my trusty XFS.
That of course means that I have to create my OSDs differently, because `pveceph createosd` isn't going to work. Here's how I do it.
The following is a bash script, ready to be copy-pasted into a shell running on your PVE host:
OSDID="$(ceph osd create)"                            # allocate a new OSD id
ln -s /media/osd1/ /var/lib/ceph/osd/ceph-$OSDID      # point the OSD data dir at our volume
ceph-osd -i $OSDID --mkfs --mkkey                     # initialize the data dir and create the key
chown -R ceph:ceph /var/lib/ceph/osd/ceph-$OSDID/
ceph auth add osd.$OSDID osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-$OSDID/keyring
service ceph-osd@$OSDID start
It assumes that the volume on which you'd like to place your OSD is mounted at
/media/osd1. Feel free to adapt that path;
the rest should pretty much just work as-is.
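For completeness, here's how such a volume can be prepared in the first place — a minimal sketch, assuming a volume group named pve (the PVE default); the LV name, size, and mount point are hypothetical, so adapt them to your setup:

```shell
# Hypothetical names: VG "pve", LV "osd1", 100G size -- adjust to taste.
lvcreate -L 100G -n osd1 pve        # carve a logical volume out of the existing VG
mkfs.xfs /dev/pve/osd1              # format it with XFS
mkdir -p /media/osd1
mount /dev/pve/osd1 /media/osd1     # mount it where the script above expects it
echo '/dev/pve/osd1 /media/osd1 xfs defaults 0 2' >> /etc/fstab   # survive reboots
```

Once the OSD script has run, `ceph osd tree` should list the new OSD as up after the service starts.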