Identifying a disk bottleneck in Linux
One of my DBAs asked for a virtual machine with 350GB of disk space. Since he said it was a “jump box” for accessing the other Oracle DB servers, I didn't think anything of it and simply built out a Linux system with the appropriate space. It wasn't until a few days later, when they started complaining that their DB performance was horrible, that I realized I had given them utility-class SATA disks shared with several other VMs, and everything on that array was being killed.
I then built out a RAID10 with four 300GB disks and moved the LUN to it. Things got better, and the other VMs recovered, but the DBAs were still complaining about performance. I found the following article and began doing some testing: http://it.toolbox.com/blogs/database-soup/testing-disk-speed-the-dd-test-31069.
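The test in that article boils down to timing dd against a scratch file. Here's a minimal sketch of the write side (file name and sizes are my own arbitrary choices, not from the article); `conv=fdatasync` makes dd flush to disk before reporting a rate, so you measure the disks rather than the page cache:

```shell
# Sequential-write throughput test: write a 64MB file in 8k blocks
# (8k matches the DB block size; bump count for a longer run).
# conv=fdatasync forces data to physical disk before dd reports MB/s.
dd if=/dev/zero of=ddtest.img bs=8k count=8192 conv=fdatasync
# dd prints bytes copied, elapsed time, and throughput on stderr.
rm -f ddtest.img
```

For the read side you'd want to drop or bypass the page cache first (e.g. `iflag=direct`), otherwise you're just benchmarking RAM.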
It turned out the disk transfers were being limited by the number of IOPS the disks could sustain. Using Navisphere Analyzer, I quickly saw that IOPS were exceeding 500 even though throughput was below 100 MB/s. The large IOPS count comes from the DB's 8k block size, much smaller than the block sizes typical on many Windows systems.
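A quick back-of-envelope check shows why small blocks make IOPS the bottleneck long before bandwidth: at an 8k block size, 500 operations per second moves only a few MB/s.

```shell
# 500 IOPS * 8 KB per op, converted to MB/s.
# The array saturates on operations while its bandwidth sits nearly idle.
awk 'BEGIN { printf "%.1f MB/s\n", 500 * 8 / 1024 }'
```

So a disk group rated for hundreds of MB/s of sequential transfer can still choke on a database doing lots of small random reads; you size for IOPS (spindle count), not capacity.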
I migrated the LUN to a larger RAID10 array with more spindles and, poof! Performance skyrocketed.