Today I've been experimenting with the VMware I/O Analyzer, a useful tool for driving storage performance tests from a fairly even baseline.
The tool is essentially an Ubuntu VM with IOMeter and a web front end (which is OK, but doesn't help much), wrapped up in a nice OVF package.
The first step is to download the package, which is available from the VMware Flings website (http://labs.vmware.com/flings/io-analyzer).
Next we need to push the OVF into the environment. As this is the controller VM, we are not looking to monitor its own back-end performance stats, so it should NOT be deployed on the datastore that we wish to test. Instead, deploy the VM onto a local datastore or another shared volume. This is mostly because the workload being generated may saturate the disks, SAN controllers or network; if the controller VM sits on the volume under test, that contention could affect the VM itself, which would be detrimental to your test and would not give a fair result.
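If you prefer the command line to the vSphere Client wizard, the same deployment can be scripted with VMware's ovftool. This is just a sketch: the VM name, datastore, OVF filename and vCenter inventory path below are placeholders for illustration, so substitute your own. The key flags are --datastore (pointing at the local datastore, NOT the volume under test) and --diskMode.

```shell
# Sketch only - VM name, datastore, OVF filename and vi:// path are placeholders.
# --datastore targets the LOCAL datastore, NOT the datastore being tested.
# --diskMode=eagerZeroedThick matches the "Thick Provision, Eager Zero" best practice.
ovftool \
  --name="IO-Analyzer" \
  --datastore="Nimble-IOAnalyser" \
  --diskMode=eagerZeroedThick \
  io-analyzer.ovf \
  "vi://administrator@vcenter.example.local/Datacenter/host/esx01/"
```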
In my case "Nimble-IOAnalyser" was my local datastore, and VMFS-01 was my datastore to be tested:
|Select a datastore which is NOT to be tested by the I/O Analyzer for your OVF deployment.|
VMware's best practice for deploying virtual machines is to provision the disks as "Thick Provision Eager Zeroed", so go ahead and use that profile.
Once the VM has been deployed, it's important to take a look at its settings and the disks it has created. You'll notice that the test disk (Hard Disk 2) has been created at only 100MB in size, which means 100% of the testing will reside in memory or in controller cache. This is something we need to avoid, as it does not provide a fair, real-world result. This hard disk is also provisioned onto the wrong datastore by default.
|The default testing drive needs to be deleted (Hard Disk 2)|
Delete this disk and create a new one of at least 100GB. This disk should be placed on the datastore whose performance you wish to test, and again use "Thick Provision Eager Zeroed".
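If you'd rather create the replacement disk from the ESXi shell than through the vSphere Client, vmkfstools can do the same job. The VMDK path below is a placeholder for illustration; the important parts are -c for the size and -d eagerzeroedthick for the eager-zeroed format recommended above.

```shell
# Sketch only - adjust the datastore and VM folder in the path to your own.
# -c sets the disk size; -d eagerzeroedthick pre-zeroes the whole disk up front.
vmkfstools -c 100G -d eagerzeroedthick \
  /vmfs/volumes/VMFS-01/IO-Analyzer/IO-Analyzer_1.vmdk
```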
Once deployed, power the VM on. You should be greeted with the familiar VMware appliance screen. Change the timezone to whatever is relevant to you (GMT for me). You could also change the network settings here, but I left mine on DHCP as it's not really needed. Log in to the VM with the default credentials of root/vmware to continue.
Note: we're not going to use the web client provided with this tool; whilst it's OK, it doesn't allow you to change any of the default values in the I/O testing phase of this workload.
Once logged in you'll see an Ubuntu desktop with a terminal window open. Right-click on the desktop and open a second one (Terminals -> xterm).
In the first terminal, type "/usr/bin/dynamo". This starts the back-end IOMeter worker and thread engine.
In the second window, type "wine /usr/bin/Iometer.exe". This opens the IOMeter application, which should tie into the dynamo engine you just started.
Note: ignore the "Fail to open kstat..." message underneath; as long as you started the engine before IOMeter, it'll be OK.
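For reference, the two launch steps above boil down to the following, run inside the appliance in this order (the paths are those shipped in the appliance as described above):

```shell
# Terminal 1: start the IOMeter worker/thread engine first
/usr/bin/dynamo

# Terminal 2: then launch the IOMeter GUI under Wine; it attaches to the
# already-running dynamo engine
wine /usr/bin/Iometer.exe
```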
From here onwards it's down to how you use IOMeter. There are plenty of ways to use this tool that won't give representative results, so here are my settings:
- Two workers, both mapped to the new 100GB volume (sdb).
- Maximum disk size of 200000000 sectors (which translates to roughly 95GB).
- 32 Outstanding I/Os per target (if this is left to 1 the test is not adequately driving the storage array!).
- A new workload policy created for 4K blocks: 100% random, 100% write. (100% write should ALWAYS be the very first workload run against the volume; otherwise there is no data to actually read back, and read tests won't reflect real-world results.)
- Align the I/Os to the block size of your volume to remove performance discrepancies caused by disk misalignment. In my case it's 4K.
- Assign this workload to each of your workers to ensure you're consistent with your tests.
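As a quick sanity check on the numbers above (assuming the standard 512-byte sector size), the sector count and the alignment rule work out as follows:

```shell
# 200,000,000 sectors at 512 bytes each, expressed in GiB
SECTORS=200000000
echo "$((SECTORS * 512 / 1024 / 1024 / 1024)) GiB"   # prints "95 GiB"

# An I/O is aligned when its byte offset is an exact multiple of the block size
BLOCK=4096                 # 4K alignment, as used above
OFFSET=$((BLOCK * 25))     # example offset of 102,400 bytes
echo $((OFFSET % BLOCK))   # prints 0, i.e. aligned
```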
Some people may ask why 100% random: if you wish to test the IOPS performance of your storage, random I/O is the key to generating those statistics. If you wish to test network throughput (rather than IOPS), sequential I/O should be used instead. You should NOT mix these workloads together, as you will get inconsistent and inconclusive results for both disk and network stats.
- Before running the test, ensure you set a ramp-up period (I use 15 seconds) and a standard run-time (I use 10 minutes). Ensure all your workers are selected.
- Click the Green Flag to start the test!
I hope the above has been useful to you. Please feel free to run this test and let me know your stats in the comments box on this page, along with the make/model of your storage array - would be a fun survey!
PS - if you wish to see my results: