Is there a way to measure the cache effectiveness (or utilization) when using kernel cache?

Good morning team,

I have a technical question: when using the SSDs for caching, is there a way (a command, set of commands, etc.) to measure the cache utilization and effectiveness?

The best way is to measure IOPS and latency and compare the results with and without the cache; this is the best measurement of effectiveness.
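
For example, you could benchmark small random writes with fio, once with the cache enabled and once without, and compare the reported numbers. A minimal sketch, assuming /dev/rbd0 is a scratch device you can safely overwrite (the path is just a placeholder; adjust block size and queue depth to match your workload):

    # 4K random writes, direct I/O, 60 seconds - raw writes are destructive,
    # so point this only at a test device or file
    fio --name=cache-test --filename=/dev/rbd0 \
        --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 \
        --ioengine=libaio --direct=1 \
        --runtime=60 --time_based --group_reporting

The summary fio prints at the end includes IOPS and completion latency percentiles; those are the numbers to compare between the two runs.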

Note that dm-writecache is a writeback cache, unlike dm-cache, which is mainly a read cache that does promotion/demotion and can use different algorithms for this, so dm-cache keeps stats of things like cache hits and hit ratios.

dm-writecache does keep some simple status; you can view it via the dmsetup status command.
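
For instance, you can print the status line of every writecache target with:

    # filter dmsetup output to writecache mappings only
    dmsetup status --target writecache

Per the kernel's dm-writecache documentation, the fields reported are an error indicator, the total number of cache blocks, the number of free blocks, and the number of blocks under writeback; on recent kernels additional read/write hit counters may be appended. Comparing free blocks against the total gives a rough view of cache utilization.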

Does dm-writecache have a read cache function? Can the read caching be done in memory? What is the recommended dm-writecache disk size?

dm-writecache is focused on writes. It does, however, serve reads from the cache if the data was recently written, based on LRU (the least recently used data is written back first, so the most recently written data stays cached). It does not, however, promote data from the slow device back to the fast device. So in this case a larger cache partition will keep more recent data in cache to serve reads.

Read caching, however, is already done at the OSD level; the default is a 4 GB RAM cache, which can be set via osd_memory_target, so you can increase it if you want. This is a true read cache that does promotion/demotion, etc. There are also other read-ahead caches at the LVM level.
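
For example, to raise the target to 8 GB for all OSDs (the value is in bytes; make sure the nodes actually have the spare RAM before doing this):

    # set the per-OSD memory target cluster-wide to 8 GiB
    ceph config set osd osd_memory_target 8589934592

You can confirm the effective value for a given OSD with ceph config get osd.0 osd_memory_target.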

Generally the main bottleneck in Ceph is small random writes; reads are usually fine.

We have tested dm-cache and bcache; neither gave us good results. We also tuned dm-writecache to work well with Ceph, for example adding support for FUA, and we added metadata mirroring to make it robust.

For size: actually the number of partitions (i.e. active backend OSDs) is more important; we allow 1-8, but generally 2-4 should be used. For size per partition, use at least 50 GB; 100-200 GB is better. Note that there is a memory overhead of 2% of the partition size, used by the dm-writecache data structures in RAM.
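
For example, with an assumed layout of four 200 GB cache partitions on a node, the dm-writecache RAM overhead would be roughly 4 x 200 GB x 2% = 16 GB, which should be budgeted into the node's memory sizing.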

I have three nodes; each node has two Intel S4600 960 GB SATA SSDs and ten 4 TB Seagate HDDs. I want to use both the DB and dm-writecache for caching. What is the best practice? Should one SSD hold all the DBs while the other holds the dm-writecache cache partitions? Or should one SSD hold the DB partitions for 5 of the HDDs, with its remaining space divided into 5 dm-writecache partitions for those same 5 HDDs (and the second SSD set up the same way for the other 5)? Can this be done through the interface, or can it be done manually? Is there a command line? Thank you.