Ceph
Ceph is a distributed, unified, and resilient storage platform that provides object, block, and file storage over the network. Ceph is developed by Red Hat, Intel, CERN, Cisco, Fujitsu, SanDisk, Canonical, and SUSE. It's designed to be a flexible storage solution that runs on commodity hardware, typically on-premises/bare-metal, where it can perform well.
![Ceph architecture: RBD, RADOS GW, and Ceph FS built on LIBRADOS over RADOS, backed by Mon, MDS, MGR, and OSD daemons](../../../../_images/graphviz-787f1f9561364bb2eb9ab9fb2757451c5d6550d8.png)
Ceph Key Services:
Object Storage via the RADOS Gateway (RGW): Provides data as objects over an S3-compatible HTTP API, filling the same role as AWS S3, Cloudflare R2, and Google GCS (see the sketch after this list).
RADOS Block Device (RBD): Thinly provisioned block devices that are ideal for running virtualized systems such as KVM virtual machines or Kubernetes workloads.
CephFS File Storage: A global filesystem that can be mounted by any service on the network, similar to NFS and GlusterFS.
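Because RGW exposes an S3-compatible API, any ordinary S3 client can talk to it. The minimal sketch below uses Python with boto3; the endpoint URL, credentials, and bucket name are placeholders for whatever your RGW deployment provides (access keys are typically created with `radosgw-admin user create`).

```python
import boto3

# All connection details below are placeholders for a hypothetical RGW endpoint.
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",   # your RGW endpoint
    aws_access_key_id="ACCESS_KEY",               # from radosgw-admin user create
    aws_secret_access_key="SECRET_KEY",
)

# Create a bucket, upload an object, and read it back, exactly as with AWS S3.
s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"Hello, RGW!")
obj = s3.get_object(Bucket="demo-bucket", Key="hello.txt")
print(obj["Body"].read())
```

RBD images can likewise be created and written programmatically through the rbd Python bindings (shipped with Ceph as python3-rbd, alongside python3-rados); the config path, pool name, and image name below are assumptions for illustration.

```python
import rados
import rbd

# Connect to the cluster and open an I/O context on a pool (pool name assumed).
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("rbd")

# Create a 4 GiB thin-provisioned image and write some data to it.
rbd.RBD().create(ioctx, "demo-image", 4 * 1024**3)
image = rbd.Image(ioctx, "demo-image")
image.write(b"hello block device", 0)
image.close()

ioctx.close()
cluster.shutdown()
```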
Ceph Components
OSD: Object Storage Daemons, the processes that store the data on individual storage devices at the lowest level.
MON: Monitor daemons that track cluster health and membership and maintain the cluster maps clients use to locate data (see the librados sketch after this list).
MDS: Metadata Server daemons that manage the CephFS file system namespace, coordinating client access to the shared OSD cluster.
MGR: Manager daemons that run alongside the monitor daemons and provide additional monitoring and interfaces to external systems.
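To make the division of labor concrete, here is a minimal librados sketch using the python3-rados bindings: the client reads the monitor addresses from ceph.conf, learns the cluster map from the MONs, and then reads and writes objects directly against the OSDs. The config path and pool name are assumptions.

```python
import rados

# Contact the MONs listed in ceph.conf to learn the cluster map,
# then talk to the OSDs directly for data I/O. (Config path is an assumption.)
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
print("connected to cluster", cluster.get_fsid())

# Store and fetch a raw RADOS object in an existing pool (pool name assumed).
ioctx = cluster.open_ioctx("demo-pool")
ioctx.write_full("greeting", b"hello from librados")
print(ioctx.read("greeting"))

ioctx.close()
cluster.shutdown()
```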
To Deploy Ceph You Need:
Network engineer(s) with experience in high-speed network hardware and Linux
High-speed network equipment, typically in the 100 GbE or greater range
Beefy Linux systems with fast storage, such as SSDs or better
See https://en.wikipedia.org/wiki/Ceph for more details.