Server tasting

Here’s a page about how to test Linode’s network performance at a given data center, and also the disk read and write speeds of a Linux server.  I’ve been looking for an easy way to evaluate and compare how virtual servers will hold up under load.  Since Jacktrip is all about moving audio data around, I’ve landed on disk performance as a good proxy: it exercises all the parts of the server (CPU, I/O buses, etc.) that Jacktrip is going to use to do that.

Linode has introduced a very helpful network performance testing tool, which is available here.  I would use this if a given player is having trouble, as it is meant to test performance to a specific player location; results will vary widely depending on where the player is.

I titled this web page “Server Tasting” because that’s how I do it.  I build a gaggle of servers and conduct a little tasting session that compares the results of these two command-line tests (thanks to Linode support for these).  As soon as I find a tasty one in a location that will work for the session, I stop building new ones and delete the rejects.  I can taste a lot of servers for not much money this way, and the results have been Pretty Good.

More recently I’ve started using a series of support tickets in which I ask Linode to migrate a server to a less-crowded host and network segment when these tests aren’t coming back the way I want them.  They’ve been very responsive and willing to do those migrations.

The Two Tests

I got these from Linode Support — who have always been there when I needed help.  Thanks folks!

This “dd” command tests how fast the disk can write one great big file.  This version writes a 16 GB file (4,000 blocks of 4 MB each), which can tip over the disk of a 25 GB Nanode if it’s got other stuff already there.  Consider reducing those numbers if the disk isn’t empty.  It’s a good idea to hunt down and remove that test_file_DELETE file if the process aborts in the middle.

dd if=/dev/zero of=test_file_DELETE bs=4M count=4000
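The dd line above exercises the write path.  A session’s recordings get read back too, so here’s a minimal sketch of a read-back check plus the cleanup I mentioned.  The tiny count is just to keep the example quick (scale it back up for a real test), and the cache-drop line is my own assumption about wanting uncached reads — it needs root:

```shell
# Write a small test file (scale bs/count back up for a real test)
dd if=/dev/zero of=test_file_DELETE bs=4M count=4

# Optionally drop the page cache first so the read hits the disk, not RAM (root only):
#   sync && echo 3 > /proc/sys/vm/drop_caches

# Read the file back; dd reports elapsed time and throughput on stderr
dd if=test_file_DELETE of=/dev/null bs=4M

# Clean up -- do this by hand too if a run aborts partway through
rm -f test_file_DELETE
```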

My speedy Nanode (smallest/cheapest Linode) is returning 1.2 GB/second just now.  I would trigger a migration with Linode support if that dropped below 600 MB/second.  Here’s how the 1.2 GB/second result looks.

dd if=/dev/zero of=test_file_DELETE bs=4M count=4000
4000+0 records in
4000+0 records out
16777216000 bytes (17 GB, 16 GiB) copied, 13.7779 s, 1.2 GB/s
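That 1.2 GB/s figure is just the byte count divided by the elapsed time, which makes an easy sanity check on any dd transcript (using the numbers from the run above, and dd’s decimal gigabytes):

```shell
# dd's reported rate is bytes copied / elapsed seconds
awk 'BEGIN { printf "%.1f GB/s\n", 16777216000 / 13.7779 / 1e9 }'
# prints: 1.2 GB/s
```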

This “fio” command runs a similar test that produces more information and stresses the server a little more.  You may need to add it to the server first – use “apt install fio” to do that.

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=random_read_write.fio --bs=4k --iodepth=64 --size=500M --readwrite=randrw --rwmixread=75

Here are the results, on Speedy Nanode, showing lower throughput (reading 373 MiB/s and writing 124 MiB/s).   I would trigger a migration with Linode support if either of these drops below 150 MB/sec.  I’m especially sensitive to the speed with which the server writes data, as a bottleneck here corrupts the audio of a session that’s being recorded, adding weird-sounding periodic “gremlin” buzzes every 3-5 seconds.

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=random_read_write.fio --bs=4k --iodepth=64 --size=500M --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.25
Starting 1 process
test: Laying out IO file (1 file / 500MiB)
Jobs: 1 (f=1)
test: (groupid=0, jobs=1): err= 0: pid=460525: Mon Sep 5 16:29:40 2022
read: IOPS=95.5k, BW=373MiB/s (391MB/s)(375MiB/1005msec)
bw ( KiB/s): min=362784, max=401816, per=100.00%, avg=382300.00, stdev=27599.79, samples=2
iops : min=90696, max=100454, avg=95575.00, stdev=6899.95, samples=2
write: IOPS=31.9k, BW=124MiB/s (130MB/s)(125MiB/1005msec); 0 zone resets
bw ( KiB/s): min=120952, max=134176, per=100.00%, avg=127564.00, stdev=9350.78, samples=2
iops : min=30238, max=33544, avg=31891.00, stdev=2337.70, samples=2
cpu : usr=6.47%, sys=45.92%, ctx=7617, majf=0, minf=8
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwts: total=95984,32016,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
READ: bw=373MiB/s (391MB/s), 373MiB/s-373MiB/s (391MB/s-391MB/s), io=375MiB (393MB), run=1005-1005msec
WRITE: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=125MiB (131MB), run=1005-1005msec

Disk stats (read/write):
sda: ios=74188/24613, merge=0/0, ticks=33804/9000, in_queue=42804, util=90.34%
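When I’m tasting several servers I don’t need the whole transcript, just the two “Run status” lines.  Here’s a little awk sketch that pulls out the throughput figures; I’ve fed it the two summary lines from the run above directly, but in practice you’d save the fio output to a file and point awk at that instead:

```shell
# Pull the throughput out of fio's "Run status" READ/WRITE summary lines.
# Splitting on parentheses makes the decimal-MB figure field 2.
awk -F'[()]' '/READ:|WRITE:/ {
    split($1, a, ":")
    gsub(/^[ \t]+/, "", a[1])
    print a[1], $2
}' <<'EOF'
READ: bw=373MiB/s (391MB/s), 373MiB/s-373MiB/s (391MB/s-391MB/s), io=375MiB (393MB), run=1005-1005msec
WRITE: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=125MiB (131MB), run=1005-1005msec
EOF
```

This prints one line per direction, e.g. “READ 391MB/s” and “WRITE 130MB/s”, which is all I compare between servers.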

Results from a recent tasting:

I was preparing a server to host a public 7-person performance that was also recorded full multi-track.

I built five servers of two types (virtual and dedicated CPUs) in two locations (California and New Jersey).  I had a duplicate sneak in because I wasn’t paying attention to what I was doing.  The Nanode rounds out the bottom of the table.

The “dd” test results are the most startling in their difference, ranging over a factor of ten in speed.  The more-complex “fio” test doesn’t show as much spread.  But the “dd” and “fio” tests agree on which server is fastest.  I chose the East-location dedicated-CPU server and was happy with the performance.

But also note how well the tiny little last-row Speedy Nanode stacks up.  That rascal is a terrific server — and can run all month for five dollars.

location  type            dd elapsed (sec)  dd (MB/s)  fio read (MB/s)  fio write (MB/s)
west      dedicated8 (1)         170             98          225               75
west      dedicated8 (2)         429             39          207               69
west      virtual8               125            134          124               41
east      dedicated8              12           1400          236               78
east      virtual8               208             80          112               37
east      NANODE                  14           1200          393              131
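Once the tasting results are collected, picking the winner is a one-liner.  Here’s a sketch that sorts the table above by the dd throughput column, fastest first — the inline data and its column layout are just my table, not any standard format:

```shell
# Columns: location  type  dd-elapsed-sec  dd-MB/s  fio-read-MB/s  fio-write-MB/s
# Sort numerically on column 4 (the dd MB/s figure), descending: fastest first.
sort -k4,4 -nr <<'EOF'
west dedicated8-1 170 98 225 75
west dedicated8-2 429 39 207 69
west virtual8 125 134 124 41
east dedicated8 12 1400 236 78
east virtual8 208 80 112 37
east nanode 14 1200 393 131
EOF
```

The East dedicated-CPU box sorts to the top at 1400 MB/s, with the little Nanode right behind it at 1200.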