The SSD cache test: how much faster will my Synology become thanks to WD Blue?
The four 8-TB Seagate IronWolf hard drives in my Synology DS918+ NAS have a transfer rate of up to 210 megabytes per second. That's 1.68 gigabits per second, which at first glance can easily saturate my gigabit network. However, both HDDs and SSDs only deliver fast transfer rates if, firstly, the data connection is designed for it and, secondly, the files to be delivered aren't too small – ideally larger than 128 kilobytes.
By the way, I'm not pulling the 128-kilobyte figure out of thin air. It has been shown again and again in tests, such as the following SSD review by my colleague Kevin:
Since hard drives and solid state drives incur overhead at the start of each new file, many small files copy much more slowly than a few large ones. Nevertheless, the theoretical write and read speeds of SSDs are much higher than those of HDDs. And that's where two WD Blue SN550 SSDs come into play. On paper, they can achieve more than ten times the speed of my NAS HDDs: up to 19.2 gigabits per second for reads and up to 14 for writes. But only in theory.
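The conversions between megabytes per second and gigabits per second are simple arithmetic. A quick sketch – the 2400 and 1750 MB/s figures are the SN550 spec-sheet values implied by the gigabit numbers above:

```python
def mbps_to_gbit(megabytes_per_second: float) -> float:
    """Convert a transfer rate in megabytes per second to gigabits per second."""
    return megabytes_per_second * 8 / 1000

# HDD: 210 MB/s from the IronWolf spec sheet
print(mbps_to_gbit(210))    # 1.68 Gbit/s
# WD Blue SN550: roughly 2400 MB/s read, 1750 MB/s write on paper
print(mbps_to_gbit(2400))   # 19.2 Gbit/s
print(mbps_to_gbit(1750))   # 14.0 Gbit/s
```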
If you'd like to know in more detail what an SSD cache is good for before this test, please read the following:
In summary, the goal of the SSD cache during normal operation is to make a small portion of the total stored data – the part that is accessed frequently – available more quickly. Network storage uses a Least Recently Used (LRU) algorithm to determine which files to cache.
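Synology doesn't publish its cache internals, but the LRU principle itself is simple: when the cache is full, evict whatever was accessed least recently. A minimal sketch in Python (the file names and tiny capacity are illustration values only):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: keeps the most recently accessed items,
    evicts the least recently used one when full."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None                  # cache miss -> would read from HDD
        self.items.move_to_end(key)      # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict least recently used

cache = LRUCache(capacity=2)
cache.put("a.jpg", b"...")
cache.put("b.jpg", b"...")
cache.get("a.jpg")          # "a.jpg" is now most recently used
cache.put("c.jpg", b"...")  # evicts "b.jpg", not "a.jpg"
print(list(cache.items))    # ['a.jpg', 'c.jpg']
```

The point is that a file you keep opening stays in the fast tier, while rarely touched data falls back to the HDDs.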
Test preparations: cache installation and configuration
My NAS can be equipped with up to two SSDs – the minimum number required to create a read-write cache on a Synology device. With only one SSD, I could create a read-only cache, but that doesn't speed up uploads to the device.
I got two 500-gigabyte WD Blue SSDs, which can be installed in a minute – two plastic covers on the bottom of the NAS enclosure expose the M.2 slots and make installation easy.
If, after reading this review, you decide to upgrade to an SSD cache, consult Synology's SSD cache guide for additional guidance. Among other things, it recommends a cache size based on your access statistics from the past few days.
In my case, the recommended size of 4.6 terabytes is probably not accurate: I copied considerably more files than usual for testing purposes over the past few days, so Synology's evaluation is skewed. I hope my test results will give me more insight into how much the SSDs would actually help in my case – and whether the upgrade is even worth it.
Once the SSDs are installed and the network storage is restarted, the cache can be created in DiskStation Manager as follows:
1. Start Storage Manager.
2. Select menu item «SSD Cache» and click on «Create».
3. Follow the instructions: select the cache mode (read-only or read-write cache), the volume, the SSDs you want to use and the [RAID type](/en/page/nasgeforscht-welcher-raid-typ-passt-zu-mir-11323). For me, only RAID 1 works with a read-write cache – RAID 5 and 6 are greyed out, as the setup wizard automatically detects that nothing else will fit my RAID 10 NAS.
4. In the final step, select the desired cache size. Choosing less than the maximum only makes sense if you want to divide the SSD space between different volumes.
The SSD cache is then mounted into the system, which took a little over a minute for me.
During the tests that follow, I kept an eye on the SSD cache usage display shown in the image above to make sure the data really arrived where it was supposed to. Now everything is ready for testing. Almost:
After creating a new SSD cache, you'll find the option «Skip sequential I/O» under «Configure». It's enabled by default, and rightly so: in contrast to mixed or random reads and writes, sequential accesses – copying a large movie file, for example – gain only a small speed advantage from the SSD cache. For this reason, and above all to preserve the SSDs' lifespan, sequential operations go straight to the HDDs when this option is active. For this review, however, I deactivated the option so that everything runs through the cache.
SSD cache test: data transfer via Gigabit LAN and USB 3.0 (5 gigabits)
Since my network only delivers one gigabit per second, but I also want to show the impact of the cache on faster connections, I'm testing in three ways: first, with an external Samsung SSD attached directly to the USB 3.0 port of the NAS, which delivers up to 5 gigabits per second; second, over the Gigabit LAN; and last but not least, the speed of internal copying.
To cover different scenarios, I chose the following four tests to determine the average upload and download speed, both with and without SSD cache:
- Data transfer of a large video file – UHD version of «Matrix Revolutions» as MKV at 50.5 gigabytes.
- Data transfer of many photos in RAW – 2215 ARW files from a Sony RX100, on average 19.8 megabytes in size.
- Data transfer of many JPG files – a total of 2349 photos with an average size of 4.4 megabytes.
- Data transfer of many small files – 14,380 different fonts in TTF format with an average of 59 kilobytes.
The 50.5-gigabyte test with a UHD movie
To warm up, and to sound out the maximum average speed of this configuration, I start the tests with a UHD film: 50.5 gigabytes, uploaded and downloaded. Depending on the connection and transfer direction, this takes between 3 minutes 41 seconds and 7 minutes 39 seconds per attempt.
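The average speeds reported below are simply total size divided by duration. A quick sketch of that arithmetic, using the two extreme durations from this test:

```python
def avg_speed_mbps(gigabytes: float, minutes: int, seconds: int) -> float:
    """Average transfer rate in megabytes per second (1 GB = 1000 MB)."""
    return gigabytes * 1000 / (minutes * 60 + seconds)

# 50.5 GB in 3 min 41 s vs. 7 min 39 s
print(round(avg_speed_mbps(50.5, 3, 41), 1))  # fastest run: 228.5 MB/s
print(round(avg_speed_mbps(50.5, 7, 39), 1))  # slowest run: 110.0 MB/s
```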
In greater detail, the NAS (Network Attached Storage) performs as follows – please note that I coloured the SSD cache results green for easier viewing. The results are in megabytes per second:
Putting the results in relation, copying a 50.5-gigabyte file yields the following speed increase with the SSD cache:
| Connection | Speed increase | Absolute gain |
| --- | --- | --- |
| USB download | +12.86% | +26.1 MBps |
| USB upload | +8.59% | +16.7 MBps |
| Gigabit-LAN download | +0.44% | +0.5 MBps |
| Gigabit-LAN upload | +0.44% | +0.5 MBps |
| Internal copy | +47.05% | +74.9 MBps |
When transferring very large files, the internal copy benefits most from the cache. That doesn't surprise me, though I'd have expected a much bigger jump. Maybe the SSD's own cache is already full, or the temperature is too high for more speed? In any case, 234.1 megabytes per second, or 1.87 gigabits per second, represents an increase of 47.05 per cent over HDD-only operation.
The SSD on the USB port gains about a tenth in speed. The LAN connection barely benefits from the cache at all: 113 megabytes per second is simply the maximum my current network configuration can achieve.
The 19.8-megabyte test with RAW photo data
Time to tighten the thumbscrews a little. 42.8 gigabytes of RAW photo data with an average file size of 19.8 megabytes are sent on their way. This takes anywhere from 3 minutes 23 seconds to 9 minutes 6 seconds per run.
Here are the results:
The speed increase with SSD cache when transferring files with a size of 19.8 megabytes:
| Connection | Speed increase | Absolute gain |
| --- | --- | --- |
| USB download | +12.25% | +23.5 MBps |
| USB upload | +0.91% | +1.8 MBps |
| Gigabit-LAN download | +21.89% | +17.6 MBps |
| Gigabit-LAN upload | +4.80% | +3.9 MBps |
| Internal copy | +57.88% | +79.7 MBps |
Compared to the large MKV file, speeds drop across the board by about 15 to 20 megabytes per second. But now all connections benefit from the cache. The speed increase of over 20 per cent for LAN downloads makes me happy, as I often load RAW data directly from the network storage into Photoshop. When saving, there's a speed increase of almost five per cent.
The increase when copying via the USB port is only about half as big. The internal copy, on the other hand, goes off like a rocket with the SSD cache, gaining a full plus of almost 80 megabytes per second.
The 4.4-megabyte test with JPG files
With JPG files averaging 4.4 megabytes, I say goodbye to speeds in excess of 200 megabytes per second. The 2349 files are a combined 10.1 gigabytes and take between 54 seconds and 3 minutes 21 seconds per test run.
The speed increase with SSD cache when transferring files with a size of 4.4 megabytes:
| Connection | Speed increase | Absolute gain |
| --- | --- | --- |
| USB download | +49.25% | +52.7 MBps |
| USB upload | +11.89% | +18.7 MBps |
| Gigabit-LAN download | +64.60% | +33.4 MBps |
| Gigabit-LAN upload | +22.18% | +12.0 MBps |
| Internal copy | +122.31% | +105.8 MBps |
Even smaller files, even slower processing, and an even greater speed increase through the SSD cache. For 4.4-megabyte files, downloads see a substantial uptick of almost 50 per cent over USB and 65 per cent over LAN. When uploading, my LAN is a little more than a fifth faster. The internal copy sets the benchmark with a performance increase of 122.31 per cent.
The 59-kilobyte test with 14,380 fonts
Masses of small accesses can make storage media sweat. A folder from 2010 with 14,380 TTF files should do nicely. Watch out, NAS, you're headed for disaster.
The total size of only 839 megabytes seems relatively small before the test, but a single run takes between 57 seconds and 6 minutes 59 seconds.
My NAS performs as follows:
The speed increase with SSD cache when transferring 59-kilobyte files:
| Connection | Speed increase | Absolute gain |
| --- | --- | --- |
| USB download | +22.50% | +2.7 MBps |
| USB upload | +11.97% | +1.4 MBps |
| Gibabit-LAN download | +9.09% | +0.3 MBps |
| Gigabit-LAN upload | +10.00% | +0.2 MBps |
| Internal copy | +35.00% | +2.1 MBps |
The smaller the file, the slower the transfer and the greater the speed increase? Turns out that isn't true after all. For almost every connection, the increase is greater with the 4.4-megabyte files than with the 59-kilobyte fonts – in some cases considerably so. Nevertheless, the fonts benefit from a 9 to 35 per cent afterburner.
The devastatingly low speeds for small files are what really catches my eye. Even with SSD cache, my NAS won't get up to speed. And looking at the results of the Gigabit LAN, I wonder if I could get something out of it with other network devices. I can't shake the feeling that either the router, a switch or the network chip of my PC is causing a bottleneck.
By the way: for the first time in this test, the internal copy loses out to the connected USB SSD.
Overview: speed per connection
Since it can be helpful to see the determined speeds from a different angle, I have also sorted the results by connection. This results in the following picture.
USB file transfer (up to 5 Gigabits per second)
Speed increase through SSD Cache with 5 Gigabit connection:
| Test | Speed increase | Absolute gain |
| --- | --- | --- |
| 50.5 GB download | +12.86% | +26.1 MBps |
| 50.5 GB upload | +8.59% | +16.7 MBps |
| 19.8 MB download | +12.25% | +23.5 MBps |
| 19.8 MB upload | +0.91% | +1.8 MBps |
| 4.4 MB download | +49.25% | +52.7 MBps |
| 4.4 MB upload | +11.89% | +18.7 MBps |
| 59 KB download | +22.50% | +2.7 MBps |
| 59 KB upload | +11.97% | +1.4 MBps |
LAN file transfer (up to 1 Gigabit per second)
Speed increase through SSD cache for Gigabit LAN:
| Test | Speed increase | Absolute gain |
| --- | --- | --- |
| 50.5 GB download | +0.44% | +0.5 MBps |
| 50.5 GB upload | +0.44% | +0.5 MBps |
| 19.8 MB download | +21.89% | +17.6 MBps |
| 19.8 MB upload | +4.80% | +3.9 MBps |
| 4.4 MB download | +64.60% | +33.4 MBps |
| 4.4 MB upload | +22.18% | +12.0 MBps |
| 59 KB download | +9.09% | +0.3 MBps |
| 59 KB upload | +10.00% | +0.2 MBps |
Speed increase with SSD cache during internal copying:
| Test | Speed increase | Absolute gain |
| --- | --- | --- |
| 50.5 GB copy | +47.05% | +74.9 MBps |
| 19.8 MB copy | +57.88% | +79.7 MBps |
| 4.4 MB copy | +122.31% | +105.8 MBps |
| 59 KB copy | +35.00% | +2.1 MBps |
Conclusion: an SSD cache can be recommended, under certain conditions
After testing, I realise how rarely I'll actually benefit from an SSD cache – after all, my NAS mostly streams movies. Then there's the Gigabit LAN connection, whose greatest performance increase with cache is 64.6 per cent, reached when downloading 4.4-megabyte files. For JPG or MP3 files of that size, that means I can download about 19 items per second with an SSD cache instead of 12. For RAW photo data of around 20 megabytes, the download is only about a fifth faster; with small files, only about a tenth.
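The files-per-second figures follow directly from throughput divided by average file size. A quick check with the Gigabit-LAN download numbers from the 4.4-megabyte test (+33.4 MBps at a +64.6 per cent increase implies the before and after throughput):

```python
avg_file_mb = 4.4
gain_mbps, gain_pct = 33.4, 64.6        # Gigabit-LAN download, 4.4 MB test

without_cache = gain_mbps / (gain_pct / 100)   # roughly 51.7 MBps
with_cache = without_cache + gain_mbps         # roughly 85.1 MBps

print(round(without_cache / avg_file_mb))  # about 12 files per second without cache
print(round(with_cache / avg_file_mb))     # about 19 files per second with cache
```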
All in all, I'd recommend investing the money in better or larger HDDs rather than in an SSD cache. But if I were a photographer, or had considerably more users and ideally a faster LAN, things could look different. In a corporate network, where every second of delay costs money, an SSD cache shouldn't be missing. If I had my own company, that is.
I really hope this review gives you an idea of what an SSD cache can do – and whether you need one yourself. If you're already using one, leave a comment about how much of a speed increase you're getting, or share other experiences with caches. For example, have you ever had to swap out an SSD because it was nearing the end of its life?