iSCSI performance tuning: 380 MB/s within a VM with openfiler and ESXi

Hi folks,
There is a large gap between entry-level storage appliances like QNAP, Thecus or Buffalo and "large" SAN storages regarding performance, features and, above all, price. We have found a solution to fill this gap for our datacenter. Using open source, of course!

(Figure: our test environment)
For quite a while we have been using Openfiler, a Linux-based storage software. However, we were not content with its performance, which was just a notch above the entry-level storages despite the decent server hardware used. An ESXi cluster waiting for roll-out was just the opportunity we needed to tune up I/O performance. So we hit the lab and found out that the order of the tuning steps was crucial to finding the right configuration.

Our test bed:

  • 1 ESXi 4.1 server (Fujitsu RX 200 S6, 8 cores @ 2.4 GHz, 48 GB RAM, 6 Intel GBit NICs)
  • 1 Openfiler 2.3 server (Supermicro, 8 cores @ 2.0 GHz, 8 GB RAM, 3ware 9650SE 8-port RAID controller, 6 Intel GBit NICs, 8 x 2 TB SATA disk drives at 7,200 rpm)
  • 1 Cisco Catalyst 2960S switch (24 GBit ports)

We used four GBit NICs on either side for four iSCSI VLANs with different IP address ranges. The NICs have to be assigned to the iSCSI software initiator so that all paths are addressed (a command sketch follows below). We created an iSCSI volume in Openfiler, then connected and formatted it in ESXi with the VMFS file system. Path selection was configured as "round robin".
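
A rough sketch of that binding on the ESXi 4.1 console. The vmk interface numbers and the vmhba name of the software initiator are placeholders and will differ in other setups, and the esxcli namespace changed in later ESXi versions:

  # Bind the four VMkernel ports to the software iSCSI HBA (ESXi 4.x syntax).
  esxcfg-vmknic -l                             # list the VMkernel interfaces
  esxcli swiscsi nic add -n vmk1 -d vmhba33
  esxcli swiscsi nic add -n vmk2 -d vmhba33
  esxcli swiscsi nic add -n vmk3 -d vmhba33
  esxcli swiscsi nic add -n vmk4 -d vmhba33
  esxcli swiscsi nic list -d vmhba33           # verify that all four NICs are bound

After a rescan of the software iSCSI adapter, each VLAN should show up as its own path to the LUN.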

Virtual machine for benchmarking:
We created a VM with 1 CPU, 2 GB RAM and a hard disk of 15 GB plus twice the size of the storage server's RAM, and installed Windows 7. Then we installed IOMETER together with the standardized test configuration from the VMware forums, so that our results can be compared with other forum posts.
1st step: Where do we start?

First we measure the performance of the unoptimized system. The test file we use is smaller than the storage server's RAM, so all data that is read and written comes from the cache and we measure only the iSCSI connection.

Test                       Latency (ms)   IO/s     MB/s
Max Throughput-100%Read    14.6           4096.1   128.0
RealLife-60%Rand-65%Read   412.3          141.4    1.1
Max Throughput-50%Read     13.4           4302.2   134.4
Random-8k-70%Read          546.1          107.6    0.8

Not impressive, is it? Accessing the disks locally yields 420 MB/s. What a performance loss on the way to the VM!
2nd step: iSCSI path optimization

The next thing to do is tune all parameters associated with the iSCSI paths. In this step it is okay to use unsafe settings (e.g. enabling a write buffer without having a BBU). The test file is still smaller than the storage server's RAM, because we want to measure the iSCSI connection's speed.
Things to look at (a command sketch follows the list):

  • Parameters of the network interface cards (jumbo frames, TCP offload)
  • iSCSI parameters (round-robin path switching)
  • RAID controller (enable write cache)
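
A minimal sketch of how settings like these can be applied in a setup like ours. These are not our exact commands: the interface name eth2, the vSwitch name vSwitch1 and the 3ware unit /c0/u0 are placeholders, so check everything against your own hardware and documentation:

  # Openfiler (Linux) side: jumbo frames and TCP segmentation offload on an iSCSI NIC.
  ifconfig eth2 mtu 9000
  ethtool -K eth2 tso on          # other offloads can be inspected with "ethtool -k eth2"
  # 3ware 9650SE: enable the controller write cache (in production only with a BBU!).
  tw_cli /c0/u0 set cache=on
  # ESXi 4.1 side: jumbo frames on the iSCSI vSwitch; VMkernel ports created before
  # this change may have to be recreated with MTU 9000.
  esxcfg-vswitch -m 9000 vSwitch1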

The boost in performance was obvious:

Test                       Latency (ms)   IO/s       MB/s
Max Throughput-100%Read    4.69           12624.71   394.52
RealLife-60%Rand-65%Read   318.66         181.52     1.42
Max Throughput-50%Read     8.08           7030.93    219.72
Random-8k-70%Read          369.51         150.26     1.17

This is close to the theoretical limit.
(Figure: load on the iSCSI paths during sequential access)
Beware: don't continue until you are satisfied with these numbers, because from now on they will only get worse!
Watching the load of the iSCSI paths in the vSphere client should show an equal share of traffic across all paths.
Things we stumbled upon:

  • Bad jumbo frame configuration. Test it with ping using large packets and the "don't fragment" bit set. Our switch needed a "set mtu jumbo 9000".
  • VMware's "round robin" policy switches paths only every 1000 I/O operations by default. You have to use "esxcli" to change that (see the commands sketched after this list).
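
The two fixes from the list, roughly as they look on the ESXi 4.1 console; the IP addresses and the device ID naa.xxxx are placeholders, and the esxcli namespace changed in later ESXi versions:

  # Verify jumbo frames end to end: 8972 bytes of payload plus 28 bytes of headers
  # gives 9000, and the "don't fragment" flag makes any hop without jumbo support fail.
  ping -M do -s 8972 10.0.1.20        # from the Openfiler box (Linux ping)
  vmkping -s 8972 10.0.1.10           # from the ESXi console
  # Switch the path selection policy to round robin and make it change paths
  # after every I/O instead of every 1000 I/Os.
  esxcli nmp device setpolicy --device naa.xxxx --psp VMW_PSP_RR
  esxcli nmp roundrobin setconfig --device naa.xxxx --iops 1 --type iops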

3rd step: optimizing storage parameters

Now we set everything to safe values for the production environment. This time the IOMETER test file is twice as big as the storage server's RAM. Caution: in IOMETER the size is given in blocks of 512 bytes.
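
For example, with the 8 GB of RAM in our filer, a test file of twice that size works out to the following value for IOMETER's maximum disk size field (assuming 8 GiB; a quick shell calculation):

  # 2 x 8 GiB expressed in 512-byte sectors, the unit IOMETER expects:
  echo $(( 2 * 8 * 1024 * 1024 * 1024 / 512 ))    # prints 33554432
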
We compared different RAID levels (getting some practice in online RAID migration while doing so) and different numbers of disk drives.
RAID-10 with 4 disks (2 TB SATA, 7,200 rpm):

Test                       Latency (ms)   IO/s       MB/s
Max Throughput-100%Read    4.93           12089.4    377.79
RealLife-60%Rand-65%Read   333.02         171.66     1.34
Max Throughput-50%Read     8.15           6857.19    214.29
Random-8k-70%Read          454.2          129.76     1.01

RAID-10 with 8 disks:

Test                       Latency (ms)   IO/s       MB/s
Max Throughput-100%Read    4.8            12331.0    385.3
RealLife-60%Rand-65%Read   443.6          138.0      1.1
Max Throughput-50%Read     9.1            6305.3     197.0
Random-8k-70%Read          504.0          121.4      0.9

Increasing the number of spindles didn’t improve performance, although we expected better IOPS. So it would be better to use two independent datastores with 4 drives each.

Using RAID-6 with 8 drives gave worse IOPS.

Summary:
Almost 400 MB/s and more than 10,000 IOPS make us very happy. Our x86 server with Openfiler (about $5,500) closes the gap between inexpensive entry-level storage appliances (around $1,500 for 70 MB/s and 2,000 IOPS) and large SAN storages at $15k and up.
Further IOPS and latency improvements could be achieved with more and better drives (SSD, SAS). We haven't tried storage replication yet, but we have read reports of users successfully implementing "drbd".

IT stays exciting,
Christian Eich

This article is based on research by Christian Eich, Richard Schunn and Toni Eimansberger.
Author: Christian Eich

Comments (10)

  1. Buy Flamenco shoes says:

    I enjoy what you guys are usually up to. This type of clever work and coverage!
    Keep up the amazing work, guys; I've added you to my blogroll.

  2. That is very interesting, you're an exceptionally professional blogger. I've joined your RSS feed and look forward to seeing more of your wonderful posts. Also, I have shared your site in my social networks.

    1. Thank you very much. We do our best to share our learnings.

  3. warpitaly says:

    Why not bond the 4 NICs?

    1. Because we wanted to distribute the connections over several independent switches. In our setup we couldn't get bonding to work, while having separate iSCSI VLANs worked well.

      1. warpitaly says:

        Understood! We will try bonding over two Cisco 3750 switches, stacked into a single virtual switch… if it works, we will let you know 🙂

  4. Thanks for this. We had a similarly excellent result. We are writing to an EMC VNX 5300 over a 10 GbE iSCSI connection into a Cisco UCS fabric. With MTU at 1500 we had 405 MB/s throughput and 50 MB/s real-world in IOMETER, and with MTU at 9000 we had 688 MB/s and 150 MB/s. This is a single VM writing to a single iSCSI LUN. We didn't think this fancy stuff would care about the MTU. Now, getting it to do that MTU is another thing altogether.

  5. Alex says:

    Quite useful post, close to what I was looking for, even though I'm not using VMware. Thanks for sharing!

