My friend and I bought three Dell PowerEdge R740xd servers (128GB of RAM each), along with 11 1TB SSDs and 14 14TB HDDs. The servers are interconnected through two switches on two separate networks via Gigabit Ethernet interfaces.
We are trying to wrap our heads around how to get the best out of this storage inventory when setting up a decent Ceph-powered Proxmox cluster.
First off, I should say we have little to no background in this. Here is the arrangement we currently have in mind (a rough capacity check follows the list):
2 SSDs per server in RAID1, mirroring the Proxmox installation at the hardware level.
1 SSD per server for running containers and VMs.
4 HDDs per server as OSDs for the Ceph pools.
The remaining 2 HDDs for Proxmox backups.
God only knows what to use the remaining 2 SSDs for.
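To sanity-check the numbers, here is a quick back-of-the-envelope calculation of what that layout yields. The 3-way replication factor is an assumption on our part (it is Ceph's default for replicated pools), and the ~32GB OS footprint is the figure from the Proxmox docs:

```python
# Back-of-the-envelope capacity check for the proposed layout.
# Assumptions: replicated Ceph pools with size=3 (Ceph's default),
# ~32GB OS footprint per the Proxmox docs. All figures in TB.

NODES = 3
REPLICATION = 3  # assumed replicated pool size

os_mirror_raw = NODES * 2 * 1.0   # 2x 1TB SSD in RAID1 per node
vm_store      = NODES * 1 * 1.0   # 1x 1TB SSD per node for VMs/CTs
ceph_raw      = NODES * 4 * 14.0  # 4x 14TB HDD per node as Ceph OSDs
backup_raw    = 2 * 14.0          # the 2 leftover HDDs for backups
spare_ssd     = 2 * 1.0           # the 2 unassigned SSDs

# Usable Ceph capacity before near-full safety margins are applied.
ceph_usable = ceph_raw / REPLICATION

print(f"SSD behind OS mirrors: {os_mirror_raw:.0f} TB "
      f"(vs ~{NODES * 0.032:.1f} TB the OS actually needs)")
print(f"Ceph raw: {ceph_raw:.0f} TB -> ~{ceph_usable:.0f} TB usable at size={REPLICATION}")
print(f"Backup HDDs (raw): {backup_raw:.0f} TB; unassigned SSD: {spare_ssd:.0f} TB")
```

That works out to roughly 56TB of usable Ceph space, 28TB of raw backup space, and 6TB of SSD sitting behind the OS mirrors, which is what motivates my objection below.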
I have to say I don't agree with the RAID1 idea. Yes, you get single-disk fault tolerance, but at the cost of roughly 6TB of raw SSD across the cluster (2TB per node), while the OS only needs a recommended 32GB per the Proxmox docs.
Also, again according to the docs, Ceph managers, monitors, and MDS daemons (in case we set up CephFS) perform heavy read and write activity, so I think they are best placed on SSDs; in our layout that presumably means the OS SSDs, since their data lives under /var/lib/ceph by default.
Regarding shared storage for exchanging files among the three servers, I was wondering whether it would be best to format a disk and share the filesystem over NFS (with NFS-Ganesha?). From what I have read, I concluded that NFS is better suited than CephFS for this: it is a more robust, performant, and battle-tested protocol.
So my question is: if you were us, how would you make the best of this storage for running Proxmox with Ceph? Consider also that we want to use Proxmox Backup Server, you know, for backups.
Asked by d3vr10
(1 rep)
Jun 3, 2024, 01:34 AM
Last activity: Jul 2, 2024, 01:31 AM