OS disk size requirements
wailer
75 Posts
December 17, 2019, 10:17 pm
Hi,
We are currently testing an all-flash cluster with 3 nodes and 10 x 2TB SSD drives...
Each node is using:
- RAID1 for OS: 2 x 2TB SSD
- 8 x SSD OSDs
This is the OS disk usage when raw storage usage is around 70%:
root@CEPH-23:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 63G 0 63G 0% /dev
tmpfs 13G 6.4M 13G 1% /run
/dev/sda3 15G 4.5G 11G 31% /
tmpfs 63G 400K 63G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 63G 0 63G 0% /sys/fs/cgroup
/dev/sda4 30G 123M 30G 1% /var/lib/ceph
/dev/sda5 1.7T 187M 1.7T 1% /opt/petasan/config
/dev/sda2 127M 512 127M 1% /boot/efi
Would it be safe to replace the OS disk drives with smaller ones, something like 480GB SSDs, so we could reuse these 2TB disks as OSDs? Is there any circumstance where the OS disk might grow and use all this space?
Thanks!
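A quick way to see which directories on the OS disk are actually growing is to run something like the following on a node. This is a minimal sketch: the paths are simply the mount points shown in the df output above, and the commands assume GNU coreutils; nothing here is a PetaSAN-specific tool.
# per-directory usage on the root filesystem only (-x does not cross into other mounts)
du -xh --max-depth=1 / 2>/dev/null | sort -h | tail -n 10
# the usual growth candidates on the OS disk partitions
du -sh /var/log /var/lib/ceph /opt/petasan/config 2>/dev/null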
admin
2,930 Posts
December 17, 2019, 10:51 pm
256GB is enough
Shiori
86 Posts
January 28, 2020, 6:55 pm
We are booting nodes on 76GB SAS drives. You need a hair over 64GB once formatted for the image to be laid onto; after that it's just log space, which we upload to a syslog server in real time and delete the local log files weekly. (These get big fast and become unmanageable easily.)
Still haven't found a way to copy the graph database. I can export it, but that removes all the old data too. Even so, 10GB is a huge graph space, since if everything is going well it doesn't use much: a six-month run netted 400-ish MB of graph data, so 10GB is a couple of years' worth.
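For reference, the kind of log handling described above can be sketched with rsyslog forwarding plus a tight logrotate policy. This assumes rsyslog and logrotate are in use; the server name, port, and retention below are placeholders, not anything PetaSAN ships.
# /etc/rsyslog.d/90-forward.conf -- forward all messages to a remote syslog server over TCP
*.* @@syslog.example.com:514
# /etc/logrotate.d/local-logs -- rotate local log copies daily and keep only a week
/var/log/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
}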