Install Ceph on Ubuntu
Ceph is a storage system designed for excellent performance, reliability, and scalability. However, installing and managing Ceph can be challenging. The Ceph-on-Ubuntu solution takes the administration minutiae out of the equation through the use of Juju charms. With charms, deploying a Ceph cluster becomes trivial, as does scaling the cluster's storage capacity.
Single-node deployment
- Uses MicroCeph
- Works on a workstation or VM
- Suitable for testing and development
These installation instructions use MicroCeph - Ceph in a snap. MicroCeph is a pure upstream Ceph distribution designed for small scale and edge deployments, which can be installed and maintained with minimal knowledge and effort.
To get started, install the MicroCeph snap with the following command on each node to be used in the cluster:
sudo snap install microceph
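Alternatively, if you prefer to stay on a specific Ceph release rather than the default channel, the snap can be pinned and held. This is a sketch only: the channel name below is an example and should be checked against the channels currently published for the microceph snap:
sudo snap install microceph --channel=squid/stable
sudo snap refresh --hold microceph
Holding the snap prevents unattended refreshes from updating the cluster at an unplanned time.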
Then bootstrap the cluster:
sudo microceph cluster bootstrap
Check the cluster status with the following command:
sudo microceph.ceph status
Here you should see that there is a single node in the cluster.
By default, Ceph places replicas on separate hosts, so to use MicroCeph on a single node the default CRUSH rules need to be modified to use the OSD as the failure domain:
sudo microceph.ceph osd crush rule rm replicated_rule
sudo microceph.ceph osd crush rule create-replicated single default osd
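To confirm the change, list the CRUSH rules; the new single rule should now appear in place of replicated_rule:
sudo microceph.ceph osd crush rule ls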
Next, add some disks that will be used as OSDs:
sudo microceph disk add /dev/sd[x] --wipe
Repeat for each disk you would like to use as an OSD. Cluster status can be verified using:
sudo microceph.ceph status
sudo microceph.ceph osd status
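As an additional check, MicroCeph can list the disks it has taken into use as OSDs (this uses the microceph CLI rather than the Ceph client):
sudo microceph disk list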
Multi-node deployment
- Uses MicroCeph
- Minimum of 4 nodes, full-HA Ceph cluster
- Suitable for small-scale production environments
To get started, install the MicroCeph snap with the following command on each node to be used in the cluster:
sudo snap install microceph
Then bootstrap the cluster from the first node:
sudo microceph cluster bootstrap
On the first node, add other nodes to the cluster:
sudo microceph cluster add node[x]
Copy the resulting output (the join token) and use it on node[x]:
sudo microceph cluster join pasted-output-from-node1
Repeat these steps for each additional node you would like to add to the cluster.
Check the cluster status with the following command:
sudo microceph.ceph status
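MicroCeph itself can also summarise the deployment, which is a quick way to confirm that every node has joined and which services each node is running:
sudo microceph status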
Next, add some disks to each node that will be used as OSDs:
sudo microceph disk add /dev/sd[x] --wipe
Repeat for each disk you would like to use as an OSD on that node, and additionally on the other nodes in the cluster. Cluster status can be verified using:
sudo microceph.ceph status
sudo microceph.ceph osd status
Containerised deployment
- Uses a Canonical-supplied and maintained rock (OCI image)
- Works with cephadm and Rook
- Suitable for all types of containerised deployments
This deployment option uses the Canonical-produced and -supplied Ceph rock, an OCI-compliant image that provides a drop-in replacement for the upstream Ceph OCI image.
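As a rough sketch of how the rock can be consumed with cephadm, an alternative image can be passed via cephadm's global --image flag at bootstrap time; the image reference and monitor IP below are placeholders, not the actual published location of the rock:
sudo cephadm --image <canonical-ceph-rock-image> bootstrap --mon-ip <host-ip>
For Rook, the equivalent is pointing the cluster's Ceph image setting at the rock instead of the upstream image.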
Large-scale deployment
- Uses Charmed Ceph
- Uses MAAS for bare metal orchestration
- Suitable for large-scale production environments
Charmed Ceph is Canonical's fully automated, model-driven approach to installing and managing Ceph. Charmed Ceph is generally deployed on bare-metal hardware that is managed by MAAS.
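As an illustrative sketch only (unit counts, storage constraints, and options are assumptions, not sizing advice), a minimal Charmed Ceph deployment against a MAAS-backed Juju controller looks roughly like this:
juju deploy -n 3 ceph-mon
juju deploy -n 3 ceph-osd --storage osd-devices=32G,2
juju integrate ceph-osd:mon ceph-mon:osd
Scaling the cluster's storage capacity is then a matter of adding units, for example with juju add-unit ceph-osd.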