
OS disk size requirements

Hi,

We are currently testing an all-flash cluster with 3 nodes, each with 10 x 2TB SSD drives...

Each node uses:

  • RAID1 for OS: 2 x 2TB SSDs
  • 8 x SSD OSDs

Below is the OS disk usage, taken while raw storage usage is around 70%:

root@CEPH-23:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             63G     0   63G   0% /dev
tmpfs            13G  6.4M   13G   1% /run
/dev/sda3        15G  4.5G   11G  31% /
tmpfs            63G  400K   63G   1% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            63G     0   63G   0% /sys/fs/cgroup
/dev/sda4        30G  123M   30G   1% /var/lib/ceph
/dev/sda5       1.7T  187M  1.7T   1% /opt/petasan/config
/dev/sda2       127M   512  127M   1% /boot/efi

Would it be safe to replace the OS disk drives with smaller ones, something like 480GB SSDs, so we could reuse these 2TB disks as OSDs? Is there any circumstance where the OS disk usage might grow and use all of this space?

Thanks!


256GB is enough

We boot our nodes from 76GB SAS drives. You need a hair over 64GB once formatted for the image to be laid onto; after that it's just log space, which we ship to a syslog server in real time and then delete the local log files weekly. (They get big fast and quickly become unmanageable.)
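
For reference, a minimal sketch of that kind of setup, assuming rsyslog and logrotate are in use (the server hostname, port, and log paths below are placeholders, not our exact config, and any log already handled by another logrotate rule should be excluded to avoid duplicate entries):

# /etc/rsyslog.d/90-forward.conf -- forward all messages to the central syslog server over TCP
*.* @@logserver.example.com:514

# /etc/logrotate.d/local-cleanup -- rotate weekly and keep no old copies locally
/var/log/*.log {
    weekly
    rotate 0
    missingok
    notifempty
}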

I still haven't found a way to copy the graph database; I can export it, but that removes all the old data too. Still, 10GB is plenty of graph space, since if everything is running well it doesn't use much. A 6-month run netted roughly 400 MB of graph data, so 10GB is a couple of years' worth.
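
If the graph data is stored as Graphite whisper files (an assumption on my part; the source path below is hypothetical), the raw files can be copied off directly rather than exported, which leaves the originals and their history in place:

# Assumption: graph data lives as whisper files under this hypothetical path
du -sh /var/lib/graphite/whisper                                    # check how much graph data has accumulated
rsync -a /var/lib/graphite/whisper/ backuphost:/backup/whisper/     # copy it elsewhere without touching the originals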