Setting up Ceph FS on a Proxmox cluster
Proxmox apparently does not yet support running CephFS, but it can be done using a bunch of manual steps. Here’s how.
Install the file system
- Create RADOS pools for your data and metadata:
ceph osd pool create cephfs 64 64
ceph osd pool create cephfs_metadata 64 64
ceph osd pool application enable cephfs_metadata cephfs
- Create the actual file system:
ceph fs new cephfs cephfs_metadata cephfs
- Configure the MDS in /etc/ceph/ceph.conf:
[mds.dev-vh001]
host = dev-vh001
keyring = /var/lib/ceph/mds/ceph-$id/keyring
Add multiple sections like these for each host where you want an MDS running.
- Set up the actual MDS on the host(s). On each MDS host, run:
mkdir -p /var/lib/ceph/mds/ceph-$HOSTNAME
ceph-authtool --create-keyring /var/lib/ceph/mds/ceph-$HOSTNAME/keyring --gen-key -n mds.$HOSTNAME
ceph auth add mds.$HOSTNAME osd "allow rwx" mds "allow" mon "allow profile mds" -i /var/lib/ceph/mds/ceph-$HOSTNAME/keyring
cp /var/lib/ceph/mds/ceph-$HOSTNAME/keyring /etc/pve/priv/ceph.mds.$HOSTNAME.keyring
chown -R ceph. /var/lib/ceph/mds/ceph-$HOSTNAME
systemctl enable ceph-mds@$HOSTNAME
systemctl start ceph-mds@$HOSTNAME
Rinse and repeat for all the hosts. Note that you won’t have to replace the $HOSTNAME variable manually; bash does that for you.
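For reference, with an MDS on all three hosts of this example cluster (dev-vh001 through dev-vh003, the same names that show up in the ceph -s output below), the MDS part of /etc/ceph/ceph.conf would end up looking something like this; substitute your own host names:
[mds.dev-vh001]
host = dev-vh001
keyring = /var/lib/ceph/mds/ceph-$id/keyring

[mds.dev-vh002]
host = dev-vh002
keyring = /var/lib/ceph/mds/ceph-$id/keyring

[mds.dev-vh003]
host = dev-vh003
keyring = /var/lib/ceph/mds/ceph-$id/keyring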
ceph -s should now look something like this:
  cluster:
    id:     de4364ad-0e8a-449e-be65-7d9600c0f67a
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum 2,1,0
    mgr: dev-vh003(active), standbys: dev-vh002
    mds: cephfs-1/1/1 up {0=dev-vh001=up:active}, 2 up:standby
    osd: 3 osds: 3 up, 3 in
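If you want a more focused view of the new file system and its MDS daemons, the standard ceph fs ls and ceph mds stat commands should confirm the same picture:
ceph fs ls
ceph mds stat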
And you should be able to mount CephFS:
apt-get install ceph-fuse
mkdir /media/cephfs
ceph-fuse /media/cephfs
It should show up in df too:
# df -h /media/cephfs
Filesystem Size Used Avail Use% Mounted on
ceph-fuse 173G 0 173G 0% /media/cephfs
So far, so good.
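Optionally, a quick write test through the ceph-fuse mount (the path is simply the mount point used above) confirms that data can actually be written:
touch /media/cephfs/write-test
ls -l /media/cephfs
rm /media/cephfs/write-test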
Get a client ready
- On the host, prepare the CephFS side of the equation:
# ceph-authtool --create-keyring /etc/ceph/ceph.client.test_client.keyring --gen-key -n client.test_client
# vi /etc/ceph/ceph.client.test_client.keyring
[client.test_client]
key = AQAX4PBZw5tcGhAaaaaaBCSJR8qZ25uQB3yYA2gw==
caps mds = "allow r path=/, allow rw path=/test_client"
caps mon = "allow r"
caps osd = "allow class-read object_prefix rbd_children, allow rw pool=cephfs"
# ceph auth import -i /etc/ceph/ceph.client.test_client.keyring
Make sure the /test_client path actually exists by mounting CephFS as shown above and running a quick mkdir.
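For example, assuming CephFS is still mounted at /media/cephfs from the previous section:
mkdir /media/cephfs/test_client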
- Prepare the client:
mkdir /media/test_client
apt-get install ceph-common ceph-fs-common
mkdir /etc/ceph
Copy /etc/ceph/ceph.client.test_client.keyring and /etc/ceph/ceph.conf from the host to the client.
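One way to do that, assuming root SSH access and a client host named test-client (a placeholder name; use your client’s actual hostname):
scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.test_client.keyring root@test-client:/etc/ceph/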
- Try ceph --id test_client -s. It likely complains that it can’t find the key, because PVE changes the default keyring location in ceph.conf. To fix this, edit /etc/ceph/ceph.conf and remove the keyring line from the [global] section. Then it should work. If not, you can also try adding an explicit section for the client, like so:
[client.test_client]
keyring = /etc/ceph/ceph.client.test_client.keyring
- Extract the secret from the keyring:
ceph-authtool -p -n client.test_client /etc/ceph/ceph.client.test_client.keyring > /etc/ceph/ceph.client.test_client.secret
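Since the resulting file contains the bare key, you may want to tighten its permissions on whichever machine it ends up on:
chmod 600 /etc/ceph/ceph.client.test_client.secret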
- Add an entry to fstab such as this:
dev-vh001,dev-vh002,dev-vh003:/test_client /media/test_client ceph name=test_client,secretfile=/etc/ceph/ceph.client.test_client.secret 0 0
mount /media/test_client
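If you prefer to test the kernel mount by hand before relying on fstab, the equivalent one-off invocation (same monitors, name and secretfile as the fstab entry above) would be something like:
mount -t ceph dev-vh001,dev-vh002,dev-vh003:/test_client /media/test_client -o name=test_client,secretfile=/etc/ceph/ceph.client.test_client.secret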
If everything goes right, df now shows the mountpoint:
# df -h /media/test_client/
Filesystem Size Used Avail Use% Mounted on
192.168.14.91,192.168.14.92,192.168.14.93:/test_client 1,2T 566G 574G 50% /media/test_client
If it doesn’t mount (or just hangs), check dmesg -T for a hint as to what’s wrong. Make sure your hosts agree with each other about their respective IP addresses, and that clients can actually reach the addresses your services are running on.
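A quick way to verify both points from the client, assuming the default monitor port of 6789 (the address below is one of the monitor IPs from this example cluster; use your own):
grep -E 'mon_host|mon addr' /etc/ceph/ceph.conf
nc -zv 192.168.14.91 6789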