Measuring the Performance of Your Storage System

IOPS vs Throughput vs Latency

Three metrics play into the performance of your storage system: IOPS (input/output operations per second), throughput, and latency. How these three factors work together determines how well your system performs. In this post, we will define each of them individually, and then look at how they combine to produce good or poor performance.

 

What are IOPS?

IOPS is the standard unit of measurement for the maximum number of reads and writes a drive can carry out every second. Generally speaking, the higher the IOPS number, the better the drive performs, but there's more to it than that. IOPS results can be influenced by factors such as the size of the data blocks and the queue depth (i.e., how many data requests are waiting to be processed).
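If you want a rough feel for how block size changes the IOPS a drive can deliver, a minimal Python sketch along these lines will do on a Unix-like system. The file name testfile.bin is just a placeholder for a large, pre-created file, and note that reads served from the operating system's cache will greatly overstate the drive's true IOPS, so real benchmark tools bypass the cache:

import os
import random
import time

# Rough random-read IOPS measurement at a chosen block size.
# 'testfile.bin' is a placeholder for a large file created ahead of time.
PATH = "testfile.bin"
BLOCK_SIZE = 4 * 1024      # try 4 KB, 64 KB and 512 KB, as in the chart below
DURATION = 5               # seconds to run the test

fd = os.open(PATH, os.O_RDONLY)
file_size = os.fstat(fd).st_size
ops = 0
start = time.monotonic()
while time.monotonic() - start < DURATION:
    offset = random.randrange(0, file_size - BLOCK_SIZE)
    os.pread(fd, BLOCK_SIZE, offset)   # one random read = one I/O operation
    ops += 1
os.close(fd)

elapsed = time.monotonic() - start
print(f"{ops / elapsed:.0f} IOPS at {BLOCK_SIZE // 1024} KB blocks")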

The chart below (source: Wikipedia) shows how different block sizes affect the actual performance of a drive.

 

Drive (Type / RPM)    IOPS (4KB block, random)    IOPS (64KB block, random)    IOPS (512KB block, random)
SAS / 15K             188 – 203                   175 – 192                    115 – 135
FC / 15K              163 – 178                   151 – 169                    97 – 123
FC / 10K              142 – 151                   130 – 143                    80 – 104
SAS / 10K             142 – 151                   130 – 143                    80 – 104
SATA / 7200           73 – 79                     69 – 76                      47 – 63
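Those small-block numbers roughly follow a classic rule of thumb for rotational drives: random IOPS tops out at about 1 divided by the sum of the average seek time and the average rotational latency. A quick Python sketch, using typical published seek times as assumptions rather than measured values:

# Rule-of-thumb estimate of a rotational drive's small-block random IOPS:
# roughly 1 / (average seek time + average rotational latency).
# Seek times below are typical published figures, used here as assumptions.
def estimated_iops(avg_seek_ms, rpm):
    rotational_latency_ms = 60_000 / rpm / 2   # half a revolution, on average
    return 1000 / (avg_seek_ms + rotational_latency_ms)

print(round(estimated_iops(3.5, 15_000)))   # ~182, in line with the 15K rows above
print(round(estimated_iops(9.0, 7_200)))    # ~76, in line with the SATA / 7200 row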

 

When manufacturer sales literature boasts about the IOPS of a system, it is similar to car manufacturers boasting about horsepower. Of course, IOPS plays some role, but it doesn't determine the performance or efficacy of the storage system as a whole.

What is Throughput?

Throughput goes hand in hand with bandwidth. Throughput is the volume of data actually transferred from a source in a given amount of time. Bandwidth is how much data could theoretically be transferred from that source in the same amount of time.

Think of throughput as the cars and bandwidth as the freeway. Throughput can easily be bottlenecked by bandwidth. These metrics deal primarily with the networking side of the storage system: how quickly the storage system can transmit data to endpoints on a local area network or a wide area network.
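One simple relationship worth keeping in mind on the drive side: throughput is roughly IOPS multiplied by block size. A quick sketch using the SAS / 15K figures from the chart above:

# Throughput is roughly IOPS multiplied by block size.
# Figures taken from the SAS / 15K rows in the chart above.
def throughput_mb_per_s(iops, block_size_kb):
    return iops * block_size_kb / 1024

print(f"{throughput_mb_per_s(200, 4):.1f} MB/s")    # ~0.8 MB/s at 4 KB blocks
print(f"{throughput_mb_per_s(125, 512):.1f} MB/s")  # ~62.5 MB/s at 512 KB blocks

This is why a drive that looks impressive on a small-block IOPS chart can still deliver modest throughput, and vice versa.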

What is Latency?

Latency is how long it takes data packets inside the pipe to travel from client to server and back. Relating it back to the previous example, latency is how quickly the cars are travelling on the freeway.

Latency is a synonym for delay, and is often measured in milliseconds. The lower the latency, the better the performance. Like throughput, latency is more network-related than drive- or system-related. Several factors can contribute to high latency, the primary one being the distance the data has to travel across the network.

To check your latency, you can open the command prompt (CMD) on Windows and type tracert followed by the website you'd like to test. It would look like this: tracert spectra.com

Your results will be shown in milliseconds, and it will be easy to tell whether or not you have a latency-based issue.
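If you'd rather check latency from a script than the command prompt, a minimal Python sketch like the one below times how long a TCP connection to a host takes to be established. The host and port here are just examples, and because this includes connection setup it will read slightly higher than a raw ping or a single tracert hop:

import socket
import time

# Rough latency check: time how long a TCP connection takes to be established.
# The host and port are examples only.
HOST, PORT = "spectra.com", 443

samples = []
for _ in range(5):
    start = time.monotonic()
    with socket.create_connection((HOST, PORT), timeout=5):
        pass                                   # connection established; close it right away
    samples.append((time.monotonic() - start) * 1000)

print(f"min / avg / max: {min(samples):.1f} / {sum(samples) / len(samples):.1f} / {max(samples):.1f} ms")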

 
