Some thoughts about SSD speed, file systems and fragmentation
Post date: Nov 26, 2017 6:10:01 AM
Had long discussions about the SSD fragmentation topic with a researcher. Here's a very short summary. I used small files to reserve disk space, then deleted those to free space while growing one larger file. This was all done at a very high level. I used the filefrag tool to confirm proper, I mean, almost total fragmentation. It wasn't perfect, but at least 99% of the blocks were non-contiguous. The read and write tests used 4 KB blocks and seeked within the file to read data. I could have bypassed the file system and accessed the storage media directly, but then it would have been just a seek test and would have had nothing to do with file fragmentation.
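As a very rough illustration of the idea, here's a minimal Python sketch of such a high level test. The mount point, file counts and sizes are placeholders rather than the values from my runs, and the fill/delete/grow sequence is simplified:

```python
import os
import random
import subprocess
import time

MOUNT = "/mnt/testdrive"      # placeholder mount point of the drive under test
SMALL = 64 * 1024             # filler file size (placeholder)
BLOCK = 4096                  # test block size, as in the tests above

def fragment_free_space(count):
    """Reserve free space with many small filler files, then delete every
    other one so the remaining free space is badly fragmented."""
    names = []
    for i in range(count):
        path = os.path.join(MOUNT, f"filler_{i:06d}")
        with open(path, "wb") as f:
            f.write(os.urandom(SMALL))
        names.append(path)
    for path in names[::2]:
        os.remove(path)

def grow_target(path, blocks):
    """Grow the target file into the fragmented free space."""
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(os.urandom(BLOCK))
            f.flush()
            os.fsync(f.fileno())

def read_test(path, order):
    """Read 4 KB blocks in the given order, seeking within the file."""
    t0 = time.time()
    with open(path, "rb") as f:
        for i in order:
            f.seek(i * BLOCK)
            f.read(BLOCK)
    return time.time() - t0

if __name__ == "__main__":
    target = os.path.join(MOUNT, "target.bin")
    fragment_free_space(20000)
    grow_target(target, 100000)
    # Confirm fragmentation; filefrag reports the number of extents.
    print(subprocess.run(["filefrag", target], capture_output=True, text=True).stdout)
    n = 100000
    print("sequential:", read_test(target, range(n)))
    print("reversed:  ", read_test(target, range(n - 1, -1, -1)))
    print("random:    ", read_test(target, random.sample(range(n), n)))
```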
Actually my tests don't directly measure the effect of file system overhead. It's just one part of the whole. Defining what's non-negligible is an extremely relative matter. Of course, it would be interesting to measure the file system overhead separately, using some underlying device, or virtual device, which always provides constant random / linear access latency.
The whole point was that even with modern file systems, fragmentation does play a role in decreasing read / write performance, on the file system level as well as on the storage media level.
Since doing those SSD tests I've also gotten a bunch of cheap high capacity flash drives. With these the difference is even more drastic. The reason is that the drives do not use an advanced FTL, which means that when data in a block is changed, it's always a read-modify-write operation. Doesn't sound bad yet. But when you hear that the block size is 8 megabytes, you'll realize that any 'random' or fragmented non-contiguous writes will make writing to the flash memory very slow.
When the drive fills up and the remaining space between extents gets filled, it takes ages to get data stored. The drive writes at 40 MB/s, but after serious fragmentation only 4 KB out of each 8192 KB rewrite is new payload, resulting in an about 2000x slowdown at the very end, when the last totally fragmented voids in the available space are being filled.
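Worked out as plain arithmetic it looks like this. Note that this is only the payload ratio model from above; it ignores the time spent reading and erasing each block, so the real slowdown is at least this bad:

```python
# Back-of-the-envelope model of the no-FTL worst case described above.
SEQ_WRITE_MBS = 40           # drive's sequential write speed
ERASE_BLOCK = 8192 * 1024    # 8 MB read-modify-write unit
PAYLOAD = 4 * 1024           # useful new data per scattered 4 KB write

amplification = ERASE_BLOCK / PAYLOAD          # 2048x
effective_mbs = SEQ_WRITE_MBS / amplification  # ~0.02 MB/s

print(f"write amplification:   {amplification:.0f}x")
print(f"effective write speed: {effective_mbs * 1024:.0f} KB/s")
```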
When people say that a full flash drive is slow, it's not. The slowness with full flash (without FTL / trim) comes only from file system + fragmentation overhead.
Actually the 4 KB read requests I used in the tests are probably extended to larger read requests by the OS anyway, which also does read-ahead caching. If that didn't happen, the random read and the reversed read should take the same time. But because the random read was faster than the reversed read, the random read must have been hitting data that previous requests had already pulled into the RAM cache.
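One way to check that explanation would be to repeat the read test with read-ahead hinted off and the cached pages dropped between runs. This is just a sketch of how that could be done on Linux with posix_fadvise, not something the tests above did; the path is a placeholder:

```python
import os
import random
import time

BLOCK = 4096

def timed_reads(path, order):
    """Read 4 KB blocks in the given order, with this file's cached pages
    dropped first and read-ahead discouraged."""
    fd = os.open(path, os.O_RDONLY)
    try:
        # Drop whatever this file already has in the page cache.
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
        # Hint that access is random, so the kernel should not read ahead.
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_RANDOM)
        t0 = time.time()
        for i in order:
            os.pread(fd, BLOCK, i * BLOCK)
        return time.time() - t0
    finally:
        os.close(fd)

if __name__ == "__main__":
    path = "/mnt/testdrive/target.bin"        # placeholder test file
    blocks = os.path.getsize(path) // BLOCK
    print("reversed:", timed_reads(path, range(blocks - 1, -1, -1)))
    print("random:  ", timed_reads(path, random.sample(range(blocks), blocks)))
```

If the reversed and random orders then take roughly the same time, the earlier difference really came from read-ahead and the RAM cache rather than from the drive itself.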
This is exactly why it's worth doing high level tests: there are several complex underlying layers which all affect the end result.
With a high end SSD, I guess the FTL works so fast that it doesn't play a real role here. I don't know the exact details of high end SSDs, but I assume they support extreme internal fragmentation without any visible performance loss, at least on sequential reads. Allowing a high degree of internal fragmentation also drastically reduces the amount of write amplification, especially when the device starts to become nearly full.
On the speed side, maybe there's also something like TLB cache issues with the FTL, so some reads might hit FTL mapping caches and be faster than others, just like with CPU and RAM caches and so on. But that is implementation specific and therefore really hard to predict. It's also highly likely that sequential reads work pretty well with an FTL, because it does look-ahead processing of its own, getting the internally fragmented data ready and available for subsequent requests. This benefit is lost when the data is logically fragmented on the file system level. - Therefore file system level fragmentation also reduces read speed on SSD devices. It's just like RAM fragmentation: usually it's not that bad, but in some specific cases it can make things a lot slower.