MicroCeph is a lightweight way of deploying a Ceph cluster using just a few commands, resulting in reliable and resilient distributed storage. MicroCeph is aimed at small-scale private clouds and edge computing environments.
A minimum of three systems is required, along with the usual recommended system resources for each OSD, MDS, or RGW daemon.
RBD, CephFS and RGW are supported, along with full-disk encryption and upgrades.
Getting started
Check out MicroCeph's documentation at https://canonical-microceph.readthedocs-hosted.com/en/latest/.
To get started, install the MicroCeph snap with the following command on each node to be used in the cluster:
snap install microceph
Connect the microceph snap to the hardware-observe interface:
snap connect microceph:hardware-observe
Then bootstrap the cluster from the first node:
microceph cluster bootstrap
On the first node, add other nodes to the cluster:
microceph cluster add node[x]
Copy the resulting output, then use it to join the cluster from node[x]:
microceph cluster join pasted-output-from-node1
Repeat these steps for each additional node you would like to add to the cluster.
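The add/join sequence above can be sketched as a small shell loop. This is a dry-run illustration only: it echoes the commands rather than executing them, and the hostnames node1..node3 are hypothetical placeholders for your own machines.

```shell
#!/bin/sh
# Dry-run sketch of the MicroCeph node-addition flow.
# Hostnames are hypothetical; replace with your own.
# In a real deployment, "microceph cluster add <node>" (run on the
# first node) prints a join token, which is then passed to
# "microceph cluster join" on the new node. Here we only echo the
# commands to illustrate the back-and-forth.
for node in node2 node3; do
    # On the first node: generate a join token for $node.
    echo "node1\$ microceph cluster add $node"
    # On $node: join the cluster using the token from above.
    echo "$node\$ microceph cluster join <token-from-node1>"
done
```

The token printed by `cluster add` is single-use and tied to the named node, so a fresh `add`/`join` pair is needed for each machine.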
Check the cluster status with the following command:
microceph.ceph status
You should see that all the nodes you added have joined the cluster in the familiar Ceph status output.
Next, add some disks to each node that will be used as OSDs:
microceph disk add /dev/sd[x] --wipe
Repeat for each disk you would like to use as an OSD, on this node and on the other nodes in the cluster. Verify the cluster status using:
microceph.ceph status
microceph.ceph osd status
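When a node has several blank disks to enlist, the per-disk `microceph disk add` step can be wrapped in a loop. The sketch below is a dry run (it echoes rather than executes), and the device names are hypothetical; substitute your own blank disks.

```shell
#!/bin/sh
# Dry-run sketch: enlist several disks as OSDs on one node.
# Device names are hypothetical; list your own blank disks here.
# Note: --wipe destroys any existing data on the disk, so
# double-check the device list before running the real command.
for disk in /dev/sdb /dev/sdc /dev/sdd; do
    echo "microceph disk add $disk --wipe"
done
```

Running the loop on each node, then re-checking `microceph.ceph osd status`, should show one OSD per enlisted disk.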