{"id":2573,"date":"2017-09-19T13:50:51","date_gmt":"2017-09-19T12:50:51","guid":{"rendered":"http:\/\/www.nivas.hr\/blog\/?p=2573"},"modified":"2017-09-19T13:50:51","modified_gmt":"2017-09-19T12:50:51","slug":"measuring-disk-io-performance-macos","status":"publish","type":"post","link":"https:\/\/www.nivas.hr\/blog\/2017\/09\/19\/measuring-disk-io-performance-macos\/","title":{"rendered":"Measuring Disk IO Performance on MacOS"},"content":{"rendered":"<p>Over time and numerous hardware updates around the office, I collected a vast number of 2.5&#8243; HDDs in my &#8220;hardware junk&#8221; box. The other day, I noticed two Kingston SSDNow V200 128GB SSDs just sitting there doing nothing, so I decided to make them usable again. I have a really BAD track record with non-SSD 2.5&#8243; travelling external disks: 99% of them broke or started showing serious problems within the first year of use (travelling with the notebook). I wanted to see how an SSD would behave under the same conditions.<\/p>\n<p>I visited my local hardware store to get a USB3 2.5&#8243; HDD enclosure. Being a geek, I did my homework and decided to get a no-name enclosure for 15 EUR with semi-rubber protection.<br \/>\nThe good lady at the counter suggested that instead of the 15 EUR one, I get the 13 EUR no-name enclosure, since &#8220;it was better&#8221;. <\/p>\n<p>Sceptic that I am, I bought both and decided to run a test to prove to her that she was wrong. The one with the higher price had to be better. :)<\/p>\n<p>After fitting the disks into the enclosures, the first issue I stumbled upon was the lack of a disk benchmarking tool on MacOS. On Windows I used <a href=\"http:\/\/www.hdtune.com\/\">hdtune<\/a> for ages and was happy with it. 
On MacOS, however, <a href=\"https:\/\/itunes.apple.com\/us\/app\/blackmagic-disk-speed-test\/id425264550?mt=12\">Blackmagic Disk Speed Test<\/a> in the Mac App Store did not inspire confidence in me (black magic, c&#8217;mon?), nor did the 11-year-old <a href=\"http:\/\/www.xbench.com\/\">Xbench<\/a> or the <a href=\"https:\/\/sourceforge.net\/projects\/jdiskmark\/\">jDiskMark beta<\/a> (written in Java).<\/p>\n<p>In Ubuntu\/Debian\/RHEL land I&#8217;ve benchmarked device IO before and had a good experience with FIO. FIO is a popular tool for measuring IOPS on Linux servers. <\/p>\n<blockquote><p>\n<strong><br \/>\nDo not make the mistake of benchmarking (or running dd against, for example) the \/dev\/disk device.<br \/>\nOn MacOS you should always use the \/dev\/rdisk device.<br \/>\n<\/strong><\/p>\n<p><strong>\/dev\/disk<\/strong> &#8211; buffered access, used for kernel filesystem calls, broken into 4KB chunks; it takes the more expensive route.<br \/>\n<strong>\/dev\/rdisk<\/strong> &#8211; &#8220;raw&#8221; in the BSD sense; it forces block-aligned I\/O. These devices are closer to the physical disk than the buffered ones.<br \/>\nIf you do a read or write larger than one sector to \/dev\/rdisk, that request will be passed straight through. The lower layers may break it up (e.g., USB breaks it up into 128KB pieces due to the maximum payload size in the USB protocol), but you generally get bigger and more efficient I\/Os. When streaming, like via dd, 128KB to 1MB are pretty good sizes to get near-optimal performance on current non-RAID hardware. (<a href=\"https:\/\/superuser.com\/questions\/631592\/why-is-dev-rdisk-about-20-times-faster-than-dev-disk-in-mac-os-x\">source<\/a>)<\/p>\n<\/blockquote>\n<p><strong>1. Install FIO<\/strong><\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">brew install fio<\/pre>\n<p><strong>2. 
Check the correct disk number<\/strong><\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">diskutil list<\/pre>\n<p><strong>Everything from this step forward can and will delete data on your disk, so BE VERY CAREFUL about which disk you use. You have been warned.<\/strong><\/p>\n<p><strong>3. Precondition the SSD<\/strong><br \/>\nWe precondition each drive the same way before each measurement, bringing the drive to the same performance state so that the test process is deterministic.<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">sudo dd if=\/dev\/zero of=\/dev\/rdisk2 bs=1m<\/pre>\n<p><strong>4. Running the tests<\/strong><\/p>\n<p><strong>Random read\/write performance<\/strong><\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">.\/fio --randrepeat=1 --ioengine=posixaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75<\/pre>\n<p><strong>Random read performance<\/strong><\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">.\/fio --randrepeat=1 --ioengine=posixaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randread<\/pre>\n<p><strong>Random write performance<\/strong><\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">.\/fio --randrepeat=1 --ioengine=posixaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randwrite<\/pre>\n<p>(<em>On MacOS we must use the posixaio ioengine. If you are running some different flavour of Unix, just replace &#8211;ioengine=<strong>posixaio<\/strong> with e.g. &#8211;ioengine=<strong>libaio<\/strong> on Ubuntu<\/em>)<\/p>\n<p><strong>5. The results<\/strong><\/p>\n<p>The lady at the store was right! Using the same SSDs, the cheaper enclosure gave better results. 
<strong>It was faster by almost 35%<\/strong>.<\/p>\n<table border=\"1\" cellpadding=\"5\">\n<tr style=\"font-weight: bold\">\n<td>tray<\/td>\n<td>read MiB\/s<\/td>\n<td>write MiB\/s<\/td>\n<td>read IOPS<\/td>\n<td>write IOPS<\/td>\n<\/tr>\n<tr>\n<td>ASMT (\/dev\/disk)<\/td>\n<td>10.9MiB\/s<\/td>\n<td>11.9MiB\/s<\/td>\n<td>86 IOPS<\/td>\n<td>94 IOPS<\/td>\n<\/tr>\n<tr>\n<td>ASMT<\/td>\n<td>69.7MiB\/s<\/td>\n<td>72.8MiB\/s<\/td>\n<td>552 IOPS<\/td>\n<td>576 IOPS<\/td>\n<\/tr>\n<tr>\n<td>PATRIOT<\/td>\n<td>92.4MiB\/s<\/td>\n<td>93.5MiB\/s<\/td>\n<td>738 IOPS<\/td>\n<td>747 IOPS<\/td>\n<\/tr>\n<\/table>\n<p>If you are interested in the exact values I got, here they are.<\/p>\n<p>The first set of benchmarks (done on the buffered \/dev\/disk device) revealed really poor performance [r=10.9MiB\/s,w=11.9MiB\/s][r=86,w=94 IOPS].<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">\r\nsudo fio --filename=\/dev\/disk2 --direct=1 --rw=randrw --rwmixwrite=50 --refill_buffers --norandommap --randrepeat=0 --ioengine=posixaio --bs=128k --rate_iops=1280  --iodepth=16 --numjobs=1 --time_based --runtime=86400 --group_reporting --name=benchtest\r\nfio-2.18\r\nStarting 1 thread\r\n^Cbs: 1 (f=1), 0-2560 IOPS: [m(1)][0.5%][r=10.9MiB\/s,w=11.9MiB\/s][r=86,w=94 IOPS][eta 23h:52m:35s]\r\nfio: terminating on signal 2\r\n\r\nbenchtest: (groupid=0, jobs=1): err= 0: pid=3075: Fri Mar 24 20:14:55 2017\r\n   read: IOPS=94, BW=11.8MiB\/s (12.4MB\/s)(5234MiB\/445379msec)\r\n    slat (usec): min=0, max=303, avg= 0.40, stdev= 2.28\r\n    clat (msec): min=47, max=228, avg=100.40, stdev=14.81\r\n     lat (msec): min=47, max=228, avg=100.40, stdev=14.81\r\n    clat percentiles (msec):\r\n     |  1.00th=[   74],  5.00th=[   82], 10.00th=[   85], 20.00th=[   90],\r\n     | 30.00th=[   93], 40.00th=[   96], 50.00th=[   98], 60.00th=[  102],\r\n     | 70.00th=[  105], 80.00th=[  111], 90.00th=[  119], 95.00th=[  127],\r\n     | 99.00th=[  151], 99.50th=[  161], 99.90th=[  184], 
99.95th=[  192],\r\n     | 99.99th=[  208]\r\n  write: IOPS=94, BW=11.8MiB\/s (12.4MB\/s)(5237MiB\/445379msec)\r\n    slat (usec): min=0, max=296, avg= 0.53, stdev= 2.81\r\n    clat (msec): min=25, max=177, avg=69.66, stdev= 9.52\r\n     lat (msec): min=25, max=177, avg=69.66, stdev= 9.52\r\n    clat percentiles (msec):\r\n     |  1.00th=[   51],  5.00th=[   58], 10.00th=[   61], 20.00th=[   63],\r\n     | 30.00th=[   66], 40.00th=[   68], 50.00th=[   69], 60.00th=[   71],\r\n     | 70.00th=[   73], 80.00th=[   76], 90.00th=[   80], 95.00th=[   86],\r\n     | 99.00th=[  105], 99.50th=[  114], 99.90th=[  133], 99.95th=[  137],\r\n     | 99.99th=[  151]\r\n    lat (msec) : 50=0.44%, 100=76.81%, 250=22.76%\r\n  cpu          : usr=0.46%, sys=0.41%, ctx=283619, majf=3, minf=6\r\n  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=50.0%, 16=50.0%, 32=0.0%, &gt;=64=0.0%\r\n     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, &gt;=64=0.0%\r\n     complete  : 0=0.0%, 4=98.3%, 8=1.7%, 16=0.1%, 32=0.0%, 64=0.0%, &gt;=64=0.0%\r\n     issued rwt: total=41875,41894,0, short=0,0,0, dropped=0,0,0\r\n     latency   : target=0, window=0, percentile=100.00%, depth=16\r\n\r\nRun status group 0 (all jobs):\r\n   READ: bw=11.8MiB\/s (12.4MB\/s), 11.8MiB\/s-11.8MiB\/s (12.4MB\/s-12.4MB\/s), io=5234MiB (5489MB), run=445379-445379msec\r\n  WRITE: bw=11.8MiB\/s (12.4MB\/s), 11.8MiB\/s-11.8MiB\/s (12.4MB\/s-12.4MB\/s), io=5237MiB (5491MB), run=445379-445379msec\r\n<\/pre>\n<p>Repeating the benchmark on the same enclosure, but using the raw device (\/dev\/rdisk), revealed much nicer numbers &#8211; about six times faster than the buffered device:<br \/>\n[m(1)][0.3%][r=69.7MiB\/s,w=72.8MiB\/s][r=552,w=576 IOPS][eta 23h:55m:54s]<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">\r\nsudo fio --filename=\/dev\/rdisk2 --direct=1 --rw=randrw --rwmixwrite=50 --refill_buffers --norandommap --randrepeat=0 --ioengine=posixaio --bs=128k --rate_iops=1280  --iodepth=16 --numjobs=1 --time_based --runtime=86400 
--group_reporting --name=benchtest\r\nfio-2.18\r\nStarting 1 thread\r\n^Cbs: 1 (f=1), 0-2560 IOPS: [m(1)][0.3%][r=69.7MiB\/s,w=72.8MiB\/s][r=552,w=576 IOPS][eta 23h:55m:54s]\r\nfio: terminating on signal 2\r\n\r\nbenchtest: (groupid=0, jobs=1): err= 0: pid=3075: Fri Mar 24 21:13:39 2017\r\n   read: IOPS=538, BW=67.3MiB\/s (70.6MB\/s)(16.2GiB\/245308msec)\r\n    slat (usec): min=0, max=47, avg= 0.45, stdev= 1.02\r\n    clat (msec): min=8, max=45, avg=15.05, stdev= 2.70\r\n     lat (msec): min=8, max=45, avg=15.05, stdev= 2.70\r\n    clat percentiles (usec):\r\n     |  1.00th=[11200],  5.00th=[12224], 10.00th=[12736], 20.00th=[13376],\r\n     | 30.00th=[13888], 40.00th=[14400], 50.00th=[14784], 60.00th=[15168],\r\n     | 70.00th=[15680], 80.00th=[16320], 90.00th=[17280], 95.00th=[18048],\r\n     | 99.00th=[23936], 99.50th=[36608], 99.90th=[39680], 99.95th=[40192],\r\n     | 99.99th=[42240]\r\n  write: IOPS=538, BW=67.4MiB\/s (70.7MB\/s)(16.2GiB\/245308msec)\r\n    slat (usec): min=0, max=65, avg= 0.46, stdev= 0.67\r\n    clat (msec): min=6, max=45, avg=14.56, stdev= 2.71\r\n     lat (msec): min=6, max=45, avg=14.57, stdev= 2.71\r\n    clat percentiles (usec):\r\n     |  1.00th=[10560],  5.00th=[11712], 10.00th=[12224], 20.00th=[12864],\r\n     | 30.00th=[13376], 40.00th=[13888], 50.00th=[14272], 60.00th=[14784],\r\n     | 70.00th=[15168], 80.00th=[15808], 90.00th=[16768], 95.00th=[17536],\r\n     | 99.00th=[23680], 99.50th=[36096], 99.90th=[39168], 99.95th=[40192],\r\n     | 99.99th=[42240]\r\n    lat (msec) : 10=0.22%, 20=98.34%, 50=1.44%\r\n  cpu          : usr=3.48%, sys=2.40%, ctx=531264, majf=3, minf=5\r\n  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=50.0%, 16=50.0%, 32=0.0%, &gt;=64=0.0%\r\n     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, &gt;=64=0.0%\r\n     complete  : 0=0.0%, 4=97.9%, 8=1.8%, 16=0.3%, 32=0.0%, 64=0.0%, &gt;=64=0.0%\r\n     issued rwt: total=132027,132160,0, short=0,0,0, dropped=0,0,0\r\n     latency   : target=0, window=0, 
percentile=100.00%, depth=16\r\n\r\nRun status group 0 (all jobs):\r\n   READ: bw=67.3MiB\/s (70.6MB\/s), 67.3MiB\/s-67.3MiB\/s (70.6MB\/s-70.6MB\/s), io=16.2GiB (17.4GB), run=245308-245308msec\r\n  WRITE: bw=67.4MiB\/s (70.7MB\/s), 67.4MiB\/s-67.4MiB\/s (70.7MB\/s-70.7MB\/s), io=16.2GiB (17.4GB), run=245308-245308msec\r\n<\/pre>\n<p>Finally, the second HDD tray I benchmarked revealed the best results, almost 35% faster than the first cheap enclosure.<br \/>\n[m(1)][0.5%][r=92.4MiB\/s,w=93.5MiB\/s][r=738,w=747 IOPS][eta 23h:52m:50s]<\/p>\n<pre class=\"brush: plain; title: ; notranslate\" title=\"\">\r\nsudo fio --filename=\/dev\/rdisk3 --direct=1 --rw=randrw --rwmixwrite=50 --refill_buffers --norandommap --randrepeat=0 --ioengine=posixaio --bs=128k --rate_iops=1280  --iodepth=16 --numjobs=1 --time_based --runtime=86400 --group_reporting --name=benchtest\r\nfio-2.18\r\nStarting 1 thread\r\n^Cbs: 1 (f=1), 0-2560 IOPS: [m(1)][0.5%][r=92.4MiB\/s,w=93.5MiB\/s][r=738,w=747 IOPS][eta 23h:52m:50s]\r\nfio: terminating on signal 2\r\n\r\nbenchtest: (groupid=0, jobs=1): err= 0: pid=3075: Fri Mar 24 20:37:26 2017\r\n   read: IOPS=761, BW=95.2MiB\/s (99.8MB\/s)(39.2GiB\/430198msec)\r\n    slat (usec): min=0, max=310, avg= 0.55, stdev= 2.23\r\n    clat (msec): min=1, max=48, avg=11.43, stdev= 2.84\r\n     lat (msec): min=1, max=48, avg=11.43, stdev= 2.84\r\n    clat percentiles (usec):\r\n     |  1.00th=[ 6880],  5.00th=[ 8256], 10.00th=[ 8896], 20.00th=[ 9536],\r\n     | 30.00th=[10048], 40.00th=[10560], 50.00th=[11072], 60.00th=[11584],\r\n     | 70.00th=[12224], 80.00th=[12864], 90.00th=[14016], 95.00th=[15296],\r\n     | 99.00th=[22912], 99.50th=[28800], 99.90th=[35584], 99.95th=[37120],\r\n     | 99.99th=[40704]\r\n  write: IOPS=762, BW=95.3MiB\/s (99.9MB\/s)(40.3GiB\/430198msec)\r\n    slat (usec): min=0, max=767, avg= 0.96, stdev= 3.58\r\n    clat (usec): min=492, max=45310, avg=9422.63, stdev=2869.71\r\n     lat (usec): min=493, max=45311, avg=9423.59, stdev=2869.68\r\n    clat 
percentiles (usec):\r\n     |  1.00th=[ 5024],  5.00th=[ 6240], 10.00th=[ 6944], 20.00th=[ 7712],\r\n     | 30.00th=[ 8256], 40.00th=[ 8640], 50.00th=[ 9024], 60.00th=[ 9536],\r\n     | 70.00th=[10048], 80.00th=[10688], 90.00th=[11712], 95.00th=[13120],\r\n     | 99.00th=[21888], 99.50th=[27264], 99.90th=[35072], 99.95th=[37120],\r\n     | 99.99th=[40704]\r\n    lat (usec) : 500=0.01%\r\n    lat (msec) : 2=0.01%, 4=0.08%, 10=49.48%, 20=49.08%, 50=1.35%\r\n  cpu          : usr=4.59%, sys=2.86%, ctx=1256049, majf=0, minf=11\r\n  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=57.4%, 16=42.6%, 32=0.0%, &gt;=64=0.0%\r\n     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, &gt;=64=0.0%\r\n     complete  : 0=0.0%, 4=98.2%, 8=1.8%, 16=0.1%, 32=0.0%, 64=0.0%, &gt;=64=0.0%\r\n     issued rwt: total=327551,327861,0, short=0,0,0, dropped=0,0,0\r\n     latency   : target=0, window=0, percentile=100.00%, depth=16\r\n\r\nRun status group 0 (all jobs):\r\n   READ: bw=95.2MiB\/s (99.8MB\/s), 95.2MiB\/s-95.2MiB\/s (99.8MB\/s-99.8MB\/s), io=39.2GiB (42.1GB), run=430198-430198msec\r\n  WRITE: bw=95.3MiB\/s (99.9MB\/s), 95.3MiB\/s-95.3MiB\/s (99.9MB\/s-99.9MB\/s), io=40.3GiB (42.1GB), run=430198-430198msec\r\n<\/pre>\n<p><strong>Conclusion<\/strong><br \/>\nfio is a pretty robust utility for IO testing. Beware of the quality of the onboard electronics when buying HDD trays: trays within the same price range can vary 15-30% in speed.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Over time and numerous hardware updates around the office, I collected a vast number of 2.5&#8243; HDDs in my &#8220;hardware junk&#8221; box. The other day, I noticed two Kingston SSDNow V200 128GB SSDs just sitting there doing nothing, so I decided to make them usable again. 
I have a really BAD track record of broken&#8230;<\/p>\n","protected":false},"author":3,"featured_media":2587,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[51,1,38,52],"tags":[],"_links":{"self":[{"href":"https:\/\/www.nivas.hr\/blog\/wp-json\/wp\/v2\/posts\/2573"}],"collection":[{"href":"https:\/\/www.nivas.hr\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.nivas.hr\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.nivas.hr\/blog\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.nivas.hr\/blog\/wp-json\/wp\/v2\/comments?post=2573"}],"version-history":[{"count":21,"href":"https:\/\/www.nivas.hr\/blog\/wp-json\/wp\/v2\/posts\/2573\/revisions"}],"predecessor-version":[{"id":2814,"href":"https:\/\/www.nivas.hr\/blog\/wp-json\/wp\/v2\/posts\/2573\/revisions\/2814"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.nivas.hr\/blog\/wp-json\/wp\/v2\/media\/2587"}],"wp:attachment":[{"href":"https:\/\/www.nivas.hr\/blog\/wp-json\/wp\/v2\/media?parent=2573"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.nivas.hr\/blog\/wp-json\/wp\/v2\/categories?post=2573"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.nivas.hr\/blog\/wp-json\/wp\/v2\/tags?post=2573"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}