From our conversation:
- ZFS replication is scheduled (minimum interval: 1 minute).
- With Ceph, each write is streamed over the network as it happens.
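The practical consequence of the two replication models is the staleness of the replica. Here is a toy Python model (not real ZFS or Ceph code; the interval and delay numbers are illustrative assumptions): with scheduled snapshot replication, the replica can lag by up to the interval, while per-write streaming keeps the lag near the network delay.

```python
# Toy model of replication lag (illustration only, not real ZFS/Ceph code).
# Writes arrive once per second; we measure how stale the replica can be
# under each replication model.

SNAPSHOT_INTERVAL = 60  # seconds; scheduled (ZFS-style) replication, min ~1 min

def scheduled_lag(write_times, interval):
    """Worst-case staleness when the replica only updates every `interval` seconds."""
    return max(t - (t // interval) * interval for t in write_times)

def streamed_lag(write_times, network_delay=0.005):
    """With per-write streaming, staleness is roughly just the network delay."""
    return network_delay

writes = list(range(1, 300))  # one write per second for ~5 minutes
print(scheduled_lag(writes, SNAPSHOT_INTERVAL))  # up to 59 s of unreplicated data
print(streamed_lag(writes))                      # ~0.005 s
```

The gap between the two numbers is the window of data you can lose on a failure, which is why scheduled replication and streamed replication suit different workloads.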
- ZFS has a fixed size and a fixed hardware configuration.
- With Ceph, the hardware is fluid.
- ZFS is limited to the speed of a single server.
- Ceph can be massively parallel, with as much bandwidth as the NVMe drives and NICs allow.
- ZFS vdevs (mirror, raidz, draid) need to be split somewhere around 10 or 12 devices, and data is striped across vdevs.
- A single Ceph pool can span hundreds of drives.
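A rough intuition for how one pool spans hundreds of drives: Ceph hashes each object to a placement group, and maps each placement group to a set of OSDs. The sketch below is a deliberately simplified Python stand-in (the PG count, OSD count, and consecutive-OSD mapping are assumptions for illustration; real Ceph uses the CRUSH algorithm, not this):

```python
# Toy sketch of hash-based object placement across many OSDs (Ceph-like).
# NOT the real CRUSH algorithm; names and sizes are illustrative assumptions.
import hashlib

NUM_PGS = 128                            # placement groups (assumption)
OSDS = [f"osd.{i}" for i in range(100)]  # one pool spanning 100 drives
REPLICAS = 3

def place(obj_name):
    """Map an object to a PG, then map the PG to a set of OSDs."""
    pg = int(hashlib.md5(obj_name.encode()).hexdigest(), 16) % NUM_PGS
    # Real Ceph uses CRUSH here; we just pick consecutive OSDs per PG.
    return [OSDS[(pg + r) % len(OSDS)] for r in range(REPLICAS)]

print(place("rbd_data.abc123"))  # three OSDs, deterministic per object name
```

Because placement is computed rather than stored in a central table, adding drives just changes the map; there is no single server whose vdev layout has to be re-planned.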
- ZFS is the better fit for as many disks and as much data as you can run from a single server.
- Ceph is what they run at CERN, where storage needs to span many servers.
- ZFS only runs on a single server.
- Ceph can also handle lots of data; it just handles it differently.
- You wouldn't use ZFS where data on the network needs to be atomically consistent the instant it is written.
- You wouldn't use Ceph where you need the maximum benefit of local speeds and the fastest reconciliation of transactional commits.
- If you want to be able to grow a single pool without any pain, you'd use Ceph.
- If you want to maximize per-application performance with different tiers of disks, you'd use ZFS.
- If your primary use case is "as many disk queue actions as possible" - such as a typical hypervisor host running many VMs - Ceph is a great choice.
- If your primary use case is "committed as fast as possible" - such as a typical database - ZFS is a great choice.
- ZFS parallelizes writes across vdevs, and reads across vdevs (and across the mirrored drives within a vdev, where available).
- ZFS reads groups of blocks from its disks much like a traditional hardware RAID.
- Ceph is more like BitTorrent: data is fetched in parallel from many machines.
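The contrast in read paths can be sketched as a toy model in Python. The stripe size, vdev count, OSD count, and hash placement below are illustrative assumptions (4 MiB is the default RADOS object size; the rest is made up for the sketch):

```python
# Toy contrast of read paths (illustration only):
# - ZFS-style: a file's blocks are striped across a few local vdevs and
#   read back in contiguous groups, like hardware RAID.
# - Ceph-style: a file is chunked into objects scattered across many
#   servers and fetched in parallel, like a torrent swarm.

STRIPE = 128 * 1024       # ZFS-like stripe unit (assumption)
OBJECT = 4 * 1024 * 1024  # 4 MiB, the default RADOS object size

def zfs_read_plan(size, vdevs=3):
    """Which local vdev serves each stripe-sized chunk of a sequential read."""
    return [f"vdev{(off // STRIPE) % vdevs}" for off in range(0, size, STRIPE)]

def ceph_read_plan(size, osds=100):
    """Which OSD serves each object of the same read (toy hash placement)."""
    return [f"osd.{hash(off) % osds}" for off in range(0, size, OBJECT)]

print(zfs_read_plan(1024 * 1024))        # 8 chunks cycling over 3 local vdevs
print(ceph_read_plan(16 * 1024 * 1024))  # 4 objects, potentially 4 servers
```

In the ZFS plan every chunk comes from the same box, so latency is local but total bandwidth is capped by that server; in the Ceph plan each object can come from a different machine, which is where the torrent-like parallelism comes from.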