Some quick HDD and SSD benchmarks

I have been able to run some benchmarks on various hard drives and a solid state drive, mostly for my own amusement, to see how old drives compare to new ones. There are some desktop drives as well as some enterprise drives. Perhaps the numbers can be useful to someone.

The drives

Listed roughly in order from old/slow to new/fast:

  • Seagate Barracuda 7200.11 (ST31500341AS), 1.5TB
  • Seagate Barracuda 7200.12 (ST500DM002), 500GB
  • Samsung SpinPoint F3 (HD502HJ), 500GB
  • Hitachi GST Deskstar 7K1000.D (HDS721010DLE630), 1TB
  • Western Digital Red (WD20EFRX), 2TB
  • Seagate SV35.6 Series (ST2000VX000), 2TB
  • Seagate Constellation CS (ST2000NC000), 2TB
  • Seagate Constellation ES (ST3000NM0033), 3TB
  • Intel 520 SSD (SSDSC2CW240A3), 240GB

Test system

  • Asus P8Z68-V (Intel Z68 chipset)
  • Intel 2600K
  • 2x4GB RAM
  • Ubuntu Desktop 10.04.3
  • Ubuntu Disk application used for the benchmarks

The system is kind of old, but I have collected the numbers over some time and wanted to run all the drives on the same platform. The drives were connected to the onboard SATA III/6G ports on the Z68 chipset.

*Update* I believe I screwed up and actually used the SATA 3G ports for some of the drives. I will rerun the benchmarks with the Constellation ES and the SSD and update this post. The other drives are in production and I’m unable to retest them.
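If you want to double check which link speed a drive has actually negotiated before benchmarking, the kernel log is one way to do it. A quick sketch (the exact wording of the messages can differ between kernel versions):

$ dmesg | grep -i "SATA link up"

A drive on a 6G port should report "SATA link up 6.0 Gbps", while a drive on a 3G port reports 3.0 Gbps.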

Results

Benchmark screenshots for each drive: ST31500341AS, ST500DM002, HD502HJ, HDS721010DLE630, WD20EFRX, ST2000VX000, ST2000NC000, ST3000NM0033 and SSDSC2CW240A3.

Reflections

I am not going to do an in-depth analysis of the results, since I realize the procedure was way too sloppy. There are some really strange write results for the Constellation ES drive shown here. I tried running the same benchmark with Ubuntu 12.04 and it was more consistent, with fewer spikes/dips.

Hopefully I will be able to post some other interesting benchmarks soon.

Moving an Ubuntu Server installation to a new partition scheme

My previous post covered how to clone an Ubuntu Server installation to a new drive. That method covered cloning between identical drives with identical partition schemes. However, I have run out of space on one of the file servers, which has the following layout:

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048  2146435071  1073216512   83  Linux
/dev/sda2      2146435072  2147483647      524288    5  Extended
/dev/sda5      2146437120  2147479551      521216   82  Linux swap

The issue now is that if I increase the size of the drive, I cannot grow the filesystem, since the swap is at the end of the disk. Fine, I figured I could remove the swap, grow the sda1 partition and then add the swap at the end again. I booted the VM to a Live CD, launched GParted and tried the operation. This failed with the following output:

resize2fs: /dev/sda: The combination of flex_bg and 
!resize_inode features is not supported by resize2fs

After some searching and new attempts with a standalone GParted Live CD I still got the same result. Therefore, I figured I could try to copy the installation to a completely new partition layout.

Process

Setup the new filesystem

Add the new (destination) drive and the old (source) drive to the VM and boot it to a Live CD, preferably of the same version as the source OS. Set up the new drive the way you want it. Here is the layout of my new drive (sda) and old drive (sdb):

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048     8390655     4194304   82  Linux swap / Solaris
/dev/sda2         8390656  4294965247  2143287296   83  Linux
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *        2048  2145386495  1072692224   83  Linux
/dev/sdb2      2145388542  2147481599     1046529    5  Extended
/dev/sdb5      2145388544  2147481599     1046528   82  Linux swap / Solaris
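Before the copy, the new partitions need filesystems and both drives need to be mounted. A minimal sketch from the Live CD, assuming the device names from the layout above and the mount points /mnt/old and /mnt/new used by the rsync command in the next section:

$ sudo mkswap /dev/sda1
$ sudo mkfs.ext4 /dev/sda2
$ sudo mkdir -p /mnt/old /mnt/new
$ sudo mount /dev/sdb1 /mnt/old
$ sudo mount /dev/sda2 /mnt/new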

Copy the installation to the new partition

For this I use the following rsync command:

$ sudo rsync -ahvxP /mnt/old/* /mnt/new/
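For reference: -a preserves permissions, ownership, timestamps and symlinks, -h prints human readable numbers, -v is verbose, -x keeps rsync from crossing filesystem boundaries and -P shows progress and keeps partially transferred files. Using /mnt/old/ (trailing slash, no wildcard) as the source would also pick up any hidden files sitting directly in the root of the old filesystem.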

Time to wait…

Fix fstab on the new drive

Identify the UUIDs of the new partitions:

$ ll /dev/disk/by-uuid/

5de1...9831 -> ../../sda1
f038...185d -> ../../sda2

Remember, we changed the partition layout so make sure you pick the right ones. In this example, sda1 is swap and sda2 is the ext4 filesystem.
Change the UUID in fstab:

$ sudo nano /mnt/new/etc/fstab

Look for the following lines and change the old UUIDs to the new ones. Once again, the comments in the file are from the initial installation; do not be confused by them.

# / was on /dev/sda1 during installation
UUID=f038...185d / ext4 discard,errors=remount-ro 0 1

# swap was on /dev/sda5 during installation
UUID=5de1...9831 none swap sw 0 0

Save (CTRL+O) and exit (CTRL+X) nano.
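If you would rather do the substitution non-interactively, a sketch with sed also works. OLD_UUID_ROOT and the other names are placeholders here; use the full UUIDs, not the shortened ones shown above:

$ sudo sed -i 's/OLD_UUID_ROOT/NEW_UUID_ROOT/' /mnt/new/etc/fstab
$ sudo sed -i 's/OLD_UUID_SWAP/NEW_UUID_SWAP/' /mnt/new/etc/fstab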

Setup GRUB on the new drive

This is the same process as in the previous post, only the mount points are slightly different this time:

$ sudo mount --bind /dev /mnt/new/dev
$ sudo mount --bind /dev/pts /mnt/new/dev/pts
$ sudo mount --bind /proc /mnt/new/proc
$ sudo mount --bind /sys /mnt/new/sys
$ sudo chroot /mnt/new

With the help of chroot, the grub tools will operate on the virtual drive rather than the live session. Run the following commands to reinstall the boot loader:

# grub-install /dev/sda
# grub-install --recheck /dev/sda
# update-grub

Let’s exit the chroot and unmount the directories:

# exit
$ sudo umount /mnt/new/dev/pts
$ sudo umount /mnt/new/dev
$ sudo umount /mnt/new/proc
$ sudo umount /mnt/new/sys
$ sudo umount /mnt/new

Cleanup and finalizing

Everything should now be in place to boot the OS from the new drive. Shut down the VM, remove the old virtual drive from the VM and remove the virtual Live CD. Fire up the VM in a console and verify that it boots correctly.

As a friendly reminder: now that the VM has been removed and re-added to the inventory, it has also been removed from the list of automatically started virtual machines. If you use that feature, head over to the host configuration – Software – Virtual Machine Startup/Shutdown and re-enable it.

Cloning a virtual hard disk to a new ESXi datastore

One physical drive of my ESXi host is starting to act strangely and after a couple of years I think it is a good idea to start migrating to a new drive. Unfortunately, I do not do this often enough to remember the process. Therefore, I intend to document it here and it could hopefully be of help to someone else.

Precondition

  • ESXi 5.1
  • Ubuntu Server 12.04 virtual machine on “Datastore A”
  • VM hard disk is thin provisioned

Goal

  • Move the VM to “Datastore B”
  • Reduce the used space of the VM on the new datastore

The VM was set up with a 1TB thin provisioned drive and the initial usage grew to about 400GB. Later on I moved the majority of the used storage to a separate VM and the usage is now around 35GB. However, the previously used space is not freed up on the datastore, and I intend to accomplish this by cloning the VM to a new virtual disk. As far as I know, there are other methods to free up the space, but I have not tried any of those yet. To be investigated…

Process

  1. Add a new thin provisioned virtual hard disk of the same size to the VM, placed on the new datastore
  2. Boot the VM to a cloning tool (I have used Acronis, but there are other free competent alternatives)
  3. Clone the old drive to the new one keeping the same partition setup
  4. Shut down the VM and copy the .vmx-file to the same folder as the .vmdk on the new datastore (created in step 1)
  5. Remove the VM from the inventory. Do not delete the files from the datastore
  6. Browse the new datastore, right click on the copied .vmx-file and select Add to inventory
  7. Edit the settings of the VM to remove the old virtual drive.
  8. Select an Ubuntu Live CD image (preferably the same version as the VM) for the virtual CD drive.
  9. Start the VM. vSphere will pop up a dialogue asking if the VM was moved or copied, select moved.
  10. Boot the VM to an Ubuntu Live CD to fix the mounting and grub
  11. Boot into the new VM

Let’s explain some steps in greater detail.

4. Copy the VMX file

If this is the initial state:

DatastoreA/VM/VM.vmdk
DatastoreA/VM/VM.vmx

After adding a second drive, VM_B.vmdk, on the other datastore (step 1), cloning VM.vmdk to VM_B.vmdk (step 3) and copying VM.vmx to the VM folder on DatastoreB (step 4), the layout would be the following:

DatastoreA/VM/VM.vmdk
DatastoreA/VM/VM.vmx

DatastoreB/VM/VM_B.vmdk
DatastoreB/VM/VM.vmx
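If SSH is enabled on the host, the copy in step 4 can also be done from the ESXi shell instead of the datastore browser. A sketch, assuming the paths above:

cp /vmfs/volumes/DatastoreA/VM/VM.vmx /vmfs/volumes/DatastoreB/VM/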

10. Boot the VM to a Ubuntu Live CD to fix mounts and grub

This section is heavily dependent on the guest OS. Ubuntu 12.04 uses UUIDs to mount drives and to decide which drive to boot from. The new virtual drive will have a different UUID than the original drive, and the OS will therefore not boot. This is where the Live CD comes in.

Once inside the Live CD, launch a terminal and orient yourself. To identify the UUIDs of the partitions use:

$ ll /dev/disk/by-uuid/

5de1...9831 -> ../../sda1
f038...185d -> ../../sda2

Next, let’s mount the drive:

$ sudo mount /dev/sdXY /mnt

where X is the drive letter and Y is the number of the root partition. (If you have some special partitioning setup you might need to mount other partitions as well to follow these steps, but then again, you probably know what you are doing anyway.)

Change the UUIDs in fstab:

$ sudo nano /mnt/etc/fstab

Look for the following lines and change the old UUIDs to the new ones.

# / was on /dev/sda2 during installation
UUID=f038...185d / ext4 discard,errors=remount-ro 0 1

# swap was on /dev/sda1 during installation
UUID=5de1...9831 none swap sw 0 0

Next up is changing the grub device mapping. I have borrowed the following steps from HowToUbuntu.org (How to repair, restore or reinstall Grub).

$ sudo mount --bind /dev /mnt/dev
$ sudo mount --bind /dev/pts /mnt/dev/pts
$ sudo mount --bind /proc /mnt/proc
$ sudo mount --bind /sys /mnt/sys
$ sudo chroot /mnt

With the help of chroot, the grub tools will operate on the virtual drive rather than the live OS. Run the following commands to reinstall the boot loader:

# grub-install /dev/sda
# grub-install --recheck /dev/sda
# update-grub

Let’s exit the chroot and unmount the directories:

# exit
$ sudo umount /mnt/dev/pts
$ sudo umount /mnt/dev
$ sudo umount /mnt/proc
$ sudo umount /mnt/sys
$ sudo umount /mnt

Now we should be all set. Shut down the VM, remove the virtual Live CD and boot up the new VM.

Two new IBM ServeRAID M1015 cards

I found two additional IBM ServeRAID cards on a Swedish forum at a price too good to pass up. These were server pulls and did not come with any PCI brackets. I had a box of old computer parts and found two FireWire cards whose brackets had one hole that lined up with the M1015. This is good enough, and better than paying $10×2 for two brackets on Ebay. As for the cables, my previous experience with Deconn, also on Ebay, was only positive, so I ordered four cables to fully equip the new cards.

cable

Of course, the first thing I did was to flash the cards to the latest LSI P16 firmware. This time around though, I flashed one card with the IT firmware, omitting the BIOS, and the other with the IR firmware including the BIOS. The IT firmware simply passes the disks straight through to the OS, while the IR firmware makes it possible to set up RAID 0, 1 or 10 as well as pass non-RAID disks through to the OS. This combination of RAID and pass-through disks is something the IBM firmware cannot do.
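As a rough sketch of what the flashing step looks like with sas2flsh (the file names 2118it.bin, 2118ir.bin and mptsas2.rom come from the LSI 9211-8i firmware package and may differ depending on what you download; follow the ServeTheHome guide for the full erase/flash procedure):

IT firmware, no boot BIOS:
sas2flsh -o -f 2118it.bin

IR firmware with boot BIOS:
sas2flsh -o -f 2118ir.bin -b mptsas2.rom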

As soon as I get some decent disks I will see how the card behaves in a Windows computer.

VT-d Verification on ASRock Z77 Pro4-M

Strike while the iron is hot

I’m building a workstation for a friend and he chose an ASRock Z77 Pro4-M motherboard. I have had all the parts for a week and it wasn’t until today that it struck me that there are some Z77 motherboards that support VT-d. There has been some conflicting information on whether or not VT-d is supported by the Z77 chipset. According to the latest information on the Intel site, VT-d is supported on Z77. However, many motherboard manufacturers have not implemented it (yet?).

Since I had all the parts on hand, I decided to test this myself while I had the chance. The specifications of the workstation are as follows:

  • Motherboard: ASRock Z77 Pro4-M (specifications)
  • Processor: Intel Core i7 3770K
  • Memory: Corsair XMS 2x8GB
  • Storage: Intel SSD 520 240GB
  • PSU: beQuiet Straight Power 400W 80+ Gold
  • Case: Fractal Design Define Mini

Now you’re thinking: “this won’t turn out well with a K-processor”… Absolutely right, the Core i7 3770K does not support VT-d. After asking around I managed to find a Core i5 2400 for this test. As you can see on the Intel Ark page, VT-d is supported for that model.

Here are some shots of the ASRock mobo which is really good looking in my opinion.

pci_express

socket_area

io_ports

Let’s hook it up and see if we can get some DirectPath I/O device passthrough going.

There are two settings for virtualization in the BIOS: one (VT-x) is found under CPU configuration and the other (VT-d) under Northbridge configuration. ESXi 5.1 installs just fine to a USB stick and detects the onboard NIC, which by the way is a Realtek 8168; an Intel NIC would have been preferred. Once ESXi is installed we can connect to it with the vSphere client and see that DirectPath can be enabled.

directpathIO

Unfortunately, I didn’t have any other PCI-Express card available to make a more extensive test. The device I have selected, which vSphere fails to detect, is the ASMedia SATA controller. This controller is used for one internal SATA port and either one internal or the E-SATA port on the back I/O panel.

Create a virtual machine, make all the settings changes you want and save them. Then launch the settings again and add the PCI device:

add_PCI

add_PCI_choose

Once the PCI device is connected, some settings can no longer be changed. It is, however, possible to remove the PCI device, change the settings and re-add it. Also, adding the PCI device and changing other settings at the same time might throw some error messages.

I chose to fire up an Ubuntu 12.04 Live CD just to see if it works. Here is what the controller looks like. I hooked up an old spare drive to the ASMedia controller and as we can see it is correctly detected.

asm_controller
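A quick way to confirm from the Live CD terminal that the passed-through controller and its drive are visible to the guest (a sketch):

$ lspci | grep -i sata
$ lsblk

The ASMedia controller should show up in the lspci output and the attached drive in lsblk.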

This was a really quick test but I will definitely give ASRock boards another try for an upcoming build. Please, ASRock, send me your Z77 Extreme11 board for evaluation. Z77 with LSI onboard SATA is a real killer!

To summarize my short experience with this board:

  • VT-d on Z77 is working!
  • non-K CPU overclocking
  • 3x PCI-Express x16 slots for add-in cards.
  • Power-on to boot is really quick
  • Realtek NIC is a slight negative. Intel would have been better.

SATA hotswap drive in mdadm RAID array

I needed to replace a SATA drive in an mdadm RAID1 array and figured I could try a hot swap. Before the step-by-step guide, here is how the system is set up, for orientation:

  • 2x1TB physical disks; /dev/sdb and /dev/sdc
  • Each drive contains one single partition; /dev/sdb1 and /dev/sdc1 respectively
  • /dev/sdb1 and /dev/sdc1 together make up the /dev/md0 RAID1 array

Here is what the array looks like:

$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb1[0] sdc1[1]
976629568 blocks super 1.2 [2/2] [UU]

In the following steps we will remove one drive from the array, remove it physically, add the new physical drive and make mdadm rebuild the array.

Important note!
Since we’re removing one of the drives in a RAID1 set, we do not have any redundancy anymore. If there is critical data on the array, this is the time to make a proper backup of it. The drive I’m removing is not faulty in any way and therefore I will not take any backups.

Step-by-step guide

  1. Mark the drive as failed
    $ sudo mdadm --manage /dev/md0 --fail /dev/sdb1
  2. Remove the drive from the array
    $ sudo mdadm --manage /dev/md0 --remove /dev/sdb1
  3. View the mdadm status
    $ cat /proc/mdstat
  4. If you prefer to shut down the system for a cold swap, do it now. Otherwise, before the hot swap, put the drive into standby with the following command:
    $ sudo hdparm -Y /dev/sdb
    Make sure you know which drive you are going to remove before issuing this command. Any operation on the disk will wake the drive up again.
  5. Remove the SATA signal cable first and then the SATA power cable.
  6. Mount the new drive and connect SATA power. I let the drive spin up for 5-10 seconds before connecting the SATA signal cable. If you did a cold swap, power on the system at this point.
  7. Identify the new drive and what device name it has. In my case, the new drive was conveniently named /dev/sdb, the same as the old one.
  8. Copy the partitioning setup from the other drive in the array to the new disk (this assumes MBR partition tables; see the note on GPT after this list).
    Make sure the order is correct, otherwise we will erase the operational drive!
    $ sudo sfdisk -d /dev/sdc | sudo sfdisk /dev/sdb
  9. Add the new drive to the RAID array
    $ sudo mdadm --manage /dev/md0 --add /dev/sdb1
  10. The RAID array will now be rebuilt and the progress is indicated by the output of
    $ cat /proc/mdstat
    To get a continuously updating view of the progress, use the following:
    $ watch cat /proc/mdstat
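A note on step 8: the sfdisk dump/restore trick works for MBR partition tables, which is what these 1TB drives use. For GPT disks (common on 2TB+ drives), a sketch with sgdisk would be:

$ sudo sgdisk -R=/dev/sdb /dev/sdc
$ sudo sgdisk -G /dev/sdb

Note the somewhat confusing argument order: the disk given with -R is the destination and the final argument is the source. The second command randomizes the disk and partition GUIDs so the two disks do not share identifiers.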

When the rebuild is done the status will look something like this again:

$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb1[2] sdc1[1]
976629568 blocks super 1.2 [2/2] [UU]

Enjoy and be careful with your data!

VMware vSphere client over a SSH tunnel

Reaching a virtual host remotely with the vSphere client through an SSH tunnel is not as straightforward as one could hope. However, it is possible with a few simple steps.

How to connect to a remote ESXi server through an SSH tunnel

I will present two examples, one using the Putty GUI and the other with command line arguments for Putty.

A. Putty GUI

  1. Create or open an existing session to a machine located on the ESXi management network.
  2. Go to Connection – SSH – Tunnels and add local port forwards for the three ports the vSphere client uses: source port 443 with destination 10.0.0.254:443, source port 902 with destination 10.0.0.254:902 and source port 903 with destination 10.0.0.254:903.

    Replace 10.0.0.254 with the IP of your ESXi host.
  3. Return to the session settings and make sure you save them. It is always a pain to realize you forgot once you have pressed Open.
  4. Connect to the machine with the newly created session
  5. Due to some issues in the vSphere client we need to add an entry to the Windows hosts file. Open Notepad with administrator privileges (needed to make changes to the hosts file) and open the file (it has no file extension):
    C:\Windows\System32\drivers\etc\hosts
    (hint: copy-paste the above line into the Open file dialogue)
  6. Add a line at the end of the file
    127.0.0.1 esxiserver
    Save the file and exit Notepad
  7. Fire up the vSphere client, enter esxiserver as “IP address / Name” and log in with your credentials.

B. Putty command line

  1. Instead of setting up a Putty session with the GUI, launch Putty from the command line:
    putty.exe -L 443:destIP:443 -L 902:destIP:902 -L 903:destIP:903 user@local_machine
    (all of the above on one line) where destIP is the IP of the ESXi server, local_machine is a machine on the ESXi management network and user is your username on that machine. Hit Enter to launch the SSH session and log in.
  2. Set up the hosts file as described in steps 5-6 in the previous section
  3. Launch the vSphere client and connect as described in step 7 above.
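If you prefer OpenSSH over Putty (for example through Cygwin, or if you are tunneling from a Linux machine), the equivalent command would look something like this, with the same placeholders as above:

$ ssh -L 443:destIP:443 -L 902:destIP:902 -L 903:destIP:903 user@local_machine

The hosts file entry from steps 5-6 is still needed on the machine running the vSphere client.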

Raw Device Mappings in ESXi 5.1

Raw Device Mapping (RDM) is a method to pass a physical drive (that is detected by ESXi) to a virtual machine without first creating a datastore and a virtual hard disk inside it.

Here is how to set up a drive as an RDM for a VM:

  1. Enable SSH on the host. Log in to the physical ESXi host and, under Troubleshooting Options, select Enable SSH.
  2. Log in to the host with your favorite SSH client
  3. Find out what the disk is called by issuing the command:
    ls -l /vmfs/devices/disks/
    The device is called something like:
    t10.ATA_____SAMSUNG_HD103SJ___________S246JDWS90XXXX______
    Make sure you determine the correct drive to use for the RDM. Entries with the same beginning as above but ending with :1 are partitions. That is not what you want; you want to map the entire drive.
  4. Find your datastore by issuing the command:
    ls -l /vmfs/volumes/

    It should be named something like:
    Datastore1 -> 509159-bd99-…
  5. Go to your Datastore by issuing the command:
    cd /vmfs/volumes/Datastore1
    (Type ls and you will see the content of this Datastore)
  6. Here comes the actual mapping. Issue the command:
    vmkfstools -z /vmfs/devices/disks/<name of disk from step 3> <RDM>.vmdk
    Where you replace the <name of disk from step 3> with the actual disk name and <RDM> with what you would like to call the RDM. A concrete example:
    vmkfstools -z /vmfs/devices/disks/t10.ATA_DISKNAME_ SamsungRDM.vmdk
  7. Log in to the vSphere client
  8. Shutdown the virtual machine that you want to add the RDM to
  9. Open the settings for the virtual machine
  10. Under hardware tab, click Add…
  11. Select Hard Disk and press Next
  12. Select Use an existing virtual disk
  13. Press Browse, go to the datastore we found in step 4 and select the SamsungRDM.vmdk we created in step 6.
  14. Press Next, Next and Finish to complete the Add Hardware wizard.

The hard drive is now added to the VM. Just start it up and start using the drive.
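To verify from inside an Ubuntu guest that the raw disk really is visible, something like this will do:

$ lsblk
$ sudo fdisk -l

The RDM disk shows up as an ordinary /dev/sdX device alongside the regular virtual disks.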

Note 1: I was trying to be a smart ass by using the path /dev/disks/ instead of /vmfs/devices/disks/ to point vmkfstools at the disk, but it refused to accept it.

Note 2: SMART monitoring does not work on drives used as RDM.

Cross Flashing of IBM ServeRAID M1015 to LSI SAS9211-8i

Finally, the RAID card and SATA cables have arrived. The card was back ordered from the retailer I decided to buy from. Now they’re in stock though and can be found here: IBM SERVERAID M1015 6GB SAS/SATA

Why on earth did you buy an OEM card?

Please read my post SATA Expansion Card Selection.

I also scored some neat cables from Ebay. Here in Sweden, one SFF-8087 cable in the standard red SATA color would cost 150SEK ($22) if I bought it at the same time as the RAID card (no extra shipping cost). Two cables of the same type, but with black sleeves over silver cables, cost 100SEK ($15) including shipping from Singapore. The choice was simple…

In my prestudy for a suitable controller I found that it is possible to flash certain OEM cards with the original manufacturer’s firmware and BIOS to change the behavior of the card. The IBM ServeRAID M1015 is equivalent to an LSI SAS9240 card and it can be flashed into an LSI SAS9211-8i.

All information on how to flash the M1015 card into an LSI SAS9211-8i can be found in this excellent article at ServeTheHome. The content and instructions there are kept up to date, so I am not going to repeat them here in case some major changes occur.

As it happens, the process of flashing these cards does not work on just any motherboard. The sas2flsh tool refused to work in the following two boards (PAL initialization error):

  • Intel DQ77MK (Q77 chipset, socket 1155, tried PCIe 16x slot)
  • Intel DG965RY (G965 chipset, socket 775, tried PCIe 16x slot)

The following board worked for me:

  • Asus P5QPL-AM (G41 chipset, socket 775, PCIe 16x slot)

For more information and experiences of the flash process, visit this LaptopVideo2Go Forum Post

Here is some additional information that might be useful to an interested flasher.

The card is up and running in the ESXi host now and I will run some benchmarks to compare the performance difference between a Datastore image, a Raw Device Mapped (RDM) drive and a drive connected to the LSI controller and passed through to the VM.

SATA Expansion Card Selection

In my previous article, The Storage Challenge, I explained my thinking when it comes to storage for the virtual environment. The conclusion was that I need some kind of SATA controller to set up the storage how I want.

The requirements I have for the controller are as follows:

  • SATA III / 6G support
  • Support for 3+TB drives
  • 8 SATA ports
  • RAID 1
  • Drive hotswap

 Let’s break them down.

  • SATA III/6G is not really needed today, as I will start off with only mechanical SATA II/3G drives. However, I might add an SSD to the mix later on, which would make use of the added bandwidth.
  • 3+TB drives. Once again, I will begin with only 1TB drives, but since the main purpose of this storage solution is bulk storage and backup, I will most likely increase the amount of storage at a later stage.
  • 8 SATA ports. The initial plan is to connect 4 drives to the controller so 8 is pretty much the next step.
  • RAID 1 is not really needed as I plan on beginning with software RAID 1. This option is more of an educational decision. RAID 0 or RAID 1 does not demand that much from the controller, so those cards do not get that expensive. Step up to RAID 5 or 6 and we’re talking about a completely different price point.
  • Drive hot swap. The goal of this all-in-one machine is for it to always be online. Hence, I would like to be able to add a hot swap HDD bay down the road. I’m also interested in seeing how the OS handles hot swapping when the drives are in a software RAID array. Another educational aspect.

With the requirement list set I started searching for a suitable card. Since I run ESXi on the hardware I would like native support for the card, in case I decide to use RAID below ESXi. Looking at the VMware whitelist for storage adapters (link), the safest bet is to use a card based on an LSI controller. As I have come to understand, many large hardware companies rebrand LSI cards under their own name, and these cards happen to be cheaper than LSI’s own. ServeTheHome has some excellent articles and summaries on which controllers the cards from different manufacturers are based on. Here in Sweden it seems like IBM and their ServeRAID cards are the cheapest ones with comparable controllers.

Using the excellent sorting features of Prisjakt.nu, I boiled it all down to these four cards:

  • IBM ServeRAID BR10i
  • IBM ServeRAID M1015
  • IBM ServeRAID M1115
  • IBM ServeRAID M5110

Here is a great summary on the IBM website regarding the ServeRAID adapters. The BR10i is not capable of SATA 6G or 3TB drives, so it is out of the list. The M5110 is a nice card capable of RAID 5 but is also roughly 50% more expensive than the M1X15 cards. The M1015 seems to be a really popular card to flash with LSI’s own firmware to enable different operating modes. The M1115 seems to be pretty much the same card, but I have not yet found any information on flashing it with LSI firmware. The M1015 is slightly cheaper, so I decided to go for it.

ServeTheHome also happens to have a pointer to a pretty good deal for people in the United States on an IBM ServeRAID M1015.