ESXi 5.5 U2 on ASRock Z97 Extreme6 with dual NIC support

I finally got around to trying out the ASRock Z97 Extreme6 motherboard and how well it is supported by ESXi 5.5 U2. There seem to be quite a few issues with getting the Intel I218 to work on some Z97 boards, while the very same driver has been tested to work well on Z87 boards with the Intel I217 controller, for example the ASRock Z87 Extreme6.

I recently found that a VMware forum user called GLRoman had managed to compile an updated Intel e1000e driver that is required for this NIC. By the way, I encourage everyone interested in getting drivers to work for ESXi to read through this thread and the threads it links to. Very interesting!

How to add Intel and Realtek drivers to an ESXi 5.5 U2 ISO

While I was at it, I also decided to try to add Realtek drivers to the ESXi 5.5 U2 ISO in an attempt to get both NICs on the ASRock Z97 Extreme6 board to work. Please see my previous blog post on adding the updated Intel driver to an ISO; I used the very same method for this test, with the addition of the Realtek drivers. In summary, these are the packages, including links to them, that are required to get both NICs operational.

The Intel driver is an offline bundle while the Realtek driver packages are VIB files. The Realtek VIBs are compressed into a zip file and need to be extracted for the ESXi-Customizer-PS script.
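For reference, the ISO build then boils down to a single ESXi-Customizer-PS command along these lines (a sketch, not verbatim: the file names are placeholders and flag names may differ between script versions, so check the script's built-in help):

.\ESXi-Customizer-PS.ps1 -izip .\ESXi-5.5U2-offline-bundle.zip -pkgDir .\driver-vibs\

where -izip points to the ESXi offline bundle used as a base and -pkgDir to a folder containing the extracted driver VIBs.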

How to make ASRock Z97 Extreme 6 boot ESXi when installed on USB

As I have said earlier, I'm fond of installing ESXi to a USB stick to keep it separate from the datastores. From what I have understood, ESXi uses GPT by default and I did not manage to get it to boot with UEFI that way. However, I found that it is possible to add an option to the installation process which uses MBR instead of GPT.

During boot of the installation media, press Shift+O when prompted. A prompt containing "runweasel" will appear. Press space, add "formatwithmbr" and press enter to continue the installation as normal.
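The complete boot option line should then read:

runweasel formatwithmbr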

ESXi support for onboard AHCI SATA controller

In the previous article I covered the issue of VMware removing support for some onboard SATA controllers. I did not test whether or not ESXi 5.5 U2 would detect the onboard SATA AHCI controller on this motherboard without the sata-xahci driver package, since I decided to include it right away. As can be seen from the screenshot below, the onboard Intel SATA controller is detected and it is possible to connect HDDs/SSDs to these ports and use them as datastores. ESXi does not support the onboard SATA RAID since it is a kind of software RAID.
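If you would rather add the package to a host that is already running than bake it into the ISO, something like this should work from the ESXi shell (a sketch; the bundle path is a placeholder, the acceptance level must allow community packages, and a reboot is needed afterwards):

# esxcli software acceptance set --level=CommunitySupported
# esxcli software vib install -d /tmp/sata-xahci-offline-bundle.zip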

[Screenshot: sata_ahci]

VT-d verification on the ASRock Z97 Extreme 6

Unfortunately, at the time of this test I did not have a VT-d capable CPU installed in the system. Therefore, I cannot verify that VT-d is working with this board. From what I have read, it is possible to get VT-d to work on the Z97 chipset, and others have managed to get it working on similar boards. Hopefully, I will have the chance to confirm this at a later point.

Conclusion

It is possible to make both onboard NICs on the ASRock Z97 Extreme6 available to ESXi 5.5 U2 by adding drivers for them to the ESXi ISO image. It is also possible to use the onboard SATA controller to connect drives and use them as datastores.

[Screenshots: extreme6_nics, drivers]

Storage performance: Intel Z87 vs. ASMedia ASM1062 vs. LSI 9211-8i

During my VT-d verification on the ASRock Z87 Extreme6 I took the opportunity to compare the performance of three different storage controllers, namely:

  • Intel Z87 (onboard)
  • ASMedia ASM1062 (onboard)
  • LSI 9211-8i (PCI-Express x8 add-in card)

Below is a summary of the test setup and the results of the tests.

Test System

Native performance

Comparison of the three controllers is done with the simple hard disk benchmark tool in Ubuntu 13.10.
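For a quick command-line cross-check of the sequential read numbers (this is not the tool used for the results below), hdparm can be pointed at the drive behind each controller:

$ sudo hdparm -t /dev/sda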

SSD Performance

Controller         Average read [MB/s]   Average write [MB/s]   Average access time [ms]
Intel Z87          516.6                 527.4                  0.03
ASMedia ASM1062    402.2                 398.4                  0.04
LSI 9211-8i        546.9                 521.8                  0.04

[Benchmark screenshots: Intel-SSD, ASMedia-SSD, LSI-SSD]

HDD Performance

Controller         Average read [MB/s]   Average write [MB/s]   Average access time [ms]
Intel Z87          140.3                 136.1                  12.4
ASMedia ASM1062    140.3                 136.0                  12.5
LSI 9211-8i        140.3                 136.6                  12.4

[Benchmark screenshots: Intel-HDD, ASMedia-HDD, LSI-HDD]

Passthrough Performance

Passthrough performance is measured with ESXi 5.5 installed on a USB stick and the LSI card passed through to a VM. The VM runs the same OS as in the benchmarks above, Ubuntu 13.10. Only the LSI card is benchmarked in passthrough mode. I really tried to get passthrough working with the ASMedia controller, as this would open up some interesting storage possibilities with this board, but while Ubuntu recognized the controller it did not find any disk attached to it. Also, now that I think about it, I have no idea why I did not try to pass through the Z87 controller. Anyway, here is the comparison, SSD and HDD combined.

Configuration        Average read [MB/s]   Average write [MB/s]   Average access time [ms]
SSD – Native         546.9                 521.8                  0.04
SSD – Passthrough    519.3                 520.2                  0.06
HDD – Native         140.3                 136.6                  12.4
HDD – Passthrough    140.3                 136.4                  12.4

[Benchmark screenshots: LSI-SSD vs. LSI-SSD-passt, LSI-HDD vs. LSI-HDD-passt]

Final thoughts

The ASMedia controller is not capable of handling the performance of modern SSDs. For mechanical drives there is practically no difference between the three different controllers.

I had an idea of using the Intel controller for the ESXi datastore and passing the ASMedia controller through to a VM. Then it would be possible to set up software RAID for the drives connected to the ASMedia controller. This solution works very well for me today with the LSI card, but it would have been nice to have an all-in-one solution.

There is some performance impact on reads when passing the LSI card through to a VM. I have not investigated this further, but there might very well be benchmark-technical reasons behind it.

VT-d Verification on ASRock Z87 Extreme6 with ESXi 5.5

This is one of the best enthusiast ESXi virtualization motherboards I have come across so far: the ASRock Z87 Extreme6 (maybe second to the ASRock Z77 Extreme11).
Some of the highlights:

  • Onboard USB header
  • VT-d passthrough
  • Debug LED
  • Dual Intel NICs

I’m very fond of installing ESXi on a USB stick to separate it from the rest of the storage. Previously, I have removed the metal bracket from one of those onboard-USB-header-to-rear-I/O brackets (whatever they are called) that used to come with motherboards a couple of years back, just to get an internal USB header.

Secondly, ASRock is one of the few manufacturers that supports VT-d on as many of their motherboards as possible, even on boards with a chipset not officially supporting VT-d, like the Z77. Here is my previous test of the ASRock Z77 Pro4-M. See below for some VT-d tests.

Finally, the debug code LED. This brings some lovely overclocking memories back to life. One of the main reasons for choosing Abit over Asus back in the P3 and P4 days was this very tiny feature. The POST codes proved to be extremely valuable when pushing the very last bit of performance out of a system. Anyway, I’m way off track. Let’s look at some pictures and some of my other findings.

[Photos: Extreme6-overview, USB-header-post-code-led, intel-i211, sata-ports]

Specifications

  • Intel Z87 chipset
  • Intel Haswell CPU support, socket 1150
  • Four DIMM slots, supporting 32GB RAM
  • Dual Intel Gigabit LAN (I217V + I211AT)
  • USB socket directly on the board
  • POST debug code display
  • 8x USB3

The dual Intel network controllers are a real plus for us ESXi people, and I was really excited about this. But neither of these controllers is supported by ESXi 5.5 out of the box. The I217 NIC can be made to work if a newer driver is supplied. However, the I211 NIC is a scaled-down desktop version, as far as I have read, and I have not yet found a way to make it work.

Apart from the Z87-based SATA controller, there are also four additional ports connected to two controllers made by ASMedia, two ports per controller, where one port doubles as an eSATA port. The controllers are named "ASM1062".

In total there are eight USB3 ports, four connected to the Z87 chipset and four to a controller also made by ASMedia. The four rear I/O ones are connected to the ASMedia controller and the four internal ones to the Z87 chipset. I am not entirely sure this was the best move by ASRock; I can argue either way.

VT-d testing

While gathering facts about the board on the Internet, I came across some comments fearing the removal of the VT-d setting on some boards and with some BIOS versions. This board, with the shipping BIOS (P2.10), has the VT-d option. According to some BIOS release notes, even where the VT-d option is removed it is supposed to be enabled by default if all criteria are met.

ESXi 5.5 is capable of utilizing the VT-d functionality with this board, or more specifically DirectPath I/O, as VMware calls it. Here are some of my findings:

  • Both NICs can be passed through to a VM, even though neither of them is supported out of the box by ESXi 5.5, and only one can be made operational with additional drivers.
  • Both ASMedia SATA controllers show up and can be passed through to a VM. However, when I tried this with an Ubuntu 13.10 VM I could not get any connected drive to show up; the controller is detected in the VM but that is all. The controller is detected by Ubuntu 13.10 when run natively (see the quick guest-side check after this list).
  • The ASMedia USB3 controller does not show up in the passthrough view. There are two Z87 USB controllers showing up, but it is not clear to me whether either or both of them are USB3.
  • An LSI 9211-8i card, actually a flashed M1015, inserted into the topmost PCI-Express 16x slot did not show up as available for passthrough. The card works in this slot when running Ubuntu 13.10 natively. Inserting the card in the second slot made it available for passthrough, and it worked well with Ubuntu 13.10.
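A quick way to verify from inside the guest whether a passed-through controller and its drives are visible (a minimal check; adjust the grep pattern to the device in question):

$ lspci | grep -i asmedia
$ lsblk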

Here is a screenshot of the default available passthrough devices:

[Screenshot: ESXi-ASRockZ87Extreme6]

Summary

All in all, a board with great potential. If drivers arrive for the second Intel NIC and some of the passthrough issues are sorted out, it will be a killer motherboard. Nothing critical by any means, but the flexibility is reduced if you need to use a PCI(-Express) slot for another NIC, all SATA ports or a separate RAID controller.

I will hopefully be able to present upcoming articles on how to customize an ESXi image for this board and present some storage benchmarks.

*Update 2014-01-04*
Here is how to include a driver for one of the Intel network controllers on this board:
Include Intel network drivers in an ESXi ISO
Also, here are some storage benchmarks related to this board:
Storage performance of ASRock Z87 Extreme6

Cloning a virtual hard disk to a new ESXi datastore

One physical drive of my ESXi host is starting to act strangely, and after a couple of years I think it is a good idea to start migrating to a new drive. Unfortunately, I do not do this often enough to remember the process. Therefore, I intend to document it here, and hopefully it can be of help to someone else.

Precondition

  • ESXi 5.1
  • Ubuntu Server 12.04 virtual machine on “Datastore A”
  • VM hard disk is thin provisioned

Goal

  • Move the VM to “Datastore B”
  • Reduce the used space of the VM on the new datastore

The VM was set up with a 1TB thin provisioned drive and the initial usage was up to 400GB. Later on I moved the majority of the used storage to a separate VM and the usage is now around 35GB. However, the previously used space is not freed up, and I intend to accomplish this by cloning the VM to a new virtual disk. As far as I know there are other methods to free up space, but I have not tried any of those yet. To be investigated…
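One such method, for reference, is zeroing out the free space inside the guest and then hole-punching the thin disk, but as said I have not tried it myself (a sketch; the paths are placeholders and vmkfstools -K requires a reasonably recent ESXi version). Inside the guest, fill the free space with zeros (dd will stop when the disk is full, which is expected) and remove the file:

$ sudo dd if=/dev/zero of=/zerofill bs=1M
$ sudo rm /zerofill

Then, with the VM powered off, reclaim the zeroed blocks from the ESXi shell:

# vmkfstools -K /vmfs/volumes/DatastoreA/VM/VM.vmdk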

Process

  1. Add a new thin provisioned virtual hard disk of the same size to the VM, placed on the new datastore
  2. Boot the VM to a cloning tool (I used Acronis, but there are other competent free alternatives)
  3. Clone the old drive to the new one, keeping the same partition setup
  4. Shut down the VM and copy the .vmx file to the same folder as the .vmdk on the new datastore (created in step 1)
  5. Remove the VM from the inventory. Do not delete the files from the datastore
  6. Browse the new datastore, right-click the copied .vmx file and select Add to inventory
  7. Edit the settings of the VM to remove the old virtual drive
  8. Select an Ubuntu Live CD image (preferably the same version as the VM) for the virtual CD drive
  9. Start the VM. vSphere will pop up a dialogue asking if the VM was moved or copied; select moved
  10. Boot the VM to an Ubuntu Live CD to fix the mounting and grub
  11. Boot into the new VM

Let’s explain some steps in greater detail.

4. Copy the VMX file

If this is the initial state:

DatastoreA/VM/VM.vmdk
DatastoreA/VM/VM.vmx

After adding a second drive, VM_B.vmdk, on the other datastore (step 1), cloning VM.vmdk to VM_B.vmdk (step 3) and copying VM.vmx to the VM folder on DatastoreB (step 4), the layout would be the following:

DatastoreA/VM/VM.vmdk
DatastoreA/VM/VM.vmx

DatastoreB/VM/VM_B.vmdk
DatastoreB/VM/VM.vmx
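For reference, the copy in step 4 can also be done from the ESXi shell (if SSH access is enabled), since the datastores are mounted under /vmfs/volumes:

# cp /vmfs/volumes/DatastoreA/VM/VM.vmx /vmfs/volumes/DatastoreB/VM/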

10. Boot the VM to an Ubuntu Live CD to fix mounts and grub

This section is heavily dependent on the guest OS. Ubuntu 12.04 uses UUIDs to mount drives and to decide which drive to boot from. The new virtual drive will have different UUIDs than the original drive and will therefore not be able to boot the OS. This is where the Live CD comes in.

Once inside the Live CD, launch a terminal and orient yourself. To identify the UUIDs of the partitions use:

$ ll /dev/disk/by-uuid/

5de1...9831 -> ../../sda1
f038...185d -> ../../sda2
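Alternatively, blkid prints the full UUIDs together with the filesystem types:

$ sudo blkid /dev/sda1 /dev/sda2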

Next, let’s mount the drive:

$ sudo mount /dev/sdXY /mnt

where X is the drive letter and Y is the root partition number. (If you have a more elaborate partitioning setup you might need to mount the other partitions to be able to follow these steps. But then again, you probably know what you are doing anyway.)

Change the UUIDs in fstab:
$ sudo nano /mnt/etc/fstab
Look for the following lines and change the old UUIDs to the new ones.

# / was on /dev/sda2 during installation
UUID=f038...158d / ext4 discard,errors=remount-ro 0 1

# swap was on /dev/sda1 during installation
UUID=5de1...5831 none swap sw 0 0
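If you prefer, the same substitutions can be made with sed, using the old and new UUIDs from above (shown here with the truncated UUIDs; use the full strings on a real system):

$ sudo sed -i 's/f038...158d/f038...185d/; s/5de1...5831/5de1...9831/' /mnt/etc/fstab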

Next up is changing the grub device mapping. I have borrowed the following steps from HowToUbuntu.org (How to repair, restore or reinstall Grub).

$ sudo mount --bind /dev /mnt/dev
$ sudo mount --bind /dev/pts /mnt/dev/pts
$ sudo mount --bind /proc /mnt/proc
$ sudo mount --bind /sys /mnt/sys
$ sudo chroot /mnt

With the help of chroot, the grub tools will operate on the virtual drive rather than on the live OS. Run the following commands to reinstall the boot loader.

# grub-install /dev/sda
# grub-install --recheck /dev/sda
# update-grub

Let’s exit the chroot and unmount the directories:

# exit
$ sudo umount /mnt/dev/pts
$ sudo umount /mnt/dev
$ sudo umount /mnt/proc
$ sudo umount /mnt/sys
$ sudo umount /mnt

Now we should be all set. Shut down the VM, remove the virtual Live CD and boot up the new VM.

VT-d Verification on ASRock Z77 Pro4-M

Strike while the iron is hot

I’m building a workstation for a friend and he chose an ASRock Z77 Pro4-M motherboard. I have had all the parts for a week, and it wasn’t until today that it struck me that there are some Z77 motherboards that support VT-d. There has been some conflicting information on whether or not VT-d is supported by the Z77 chipset. According to the latest information on the Intel site, VT-d is supported on Z77. However, many motherboard manufacturers have not implemented it (yet?).

Since I had all the parts at hand, I decided to try it out while I had the chance. The specifications of the workstation are as follows:

  • Motherboard: ASRock Z77 Pro4-M (specifications)
  • Processor: Intel Core i7 3770K
  • Memory: Corsair XMS 2x8GB
  • Storage: Intel SSD 520 240GB
  • PSU: beQuiet Straight Power 400W 80+ Gold
  • Case: Fractal Design Define Mini

Now you’re thinking: “this won’t turn out well with a K-processor”… Absolutely right, the Core i7 3770K does not support VT-d. After asking around, I happened to find a Core i5 2400 for this test. As you can see on the Intel Ark page, VT-d is supported for that model.

Here are some shots of the ASRock mobo, which is really good looking in my opinion.

[Photos: pci_express, socket_area, io_ports]

Let’s hook it up and see if we can get some DirectPath I/O device passthrough going.

There are two settings for virtualization in the BIOS: one (VT-x) is found under CPU configuration and the other (VT-d) under Northbridge configuration. ESXi 5.1 installs just fine to a USB stick and detects the onboard NIC, which by the way is a Realtek 8168; an Intel NIC would have been preferred. Once ESXi is installed, we can connect to it with the vSphere client and see that we can enable DirectPath.

[Screenshot: directpathIO]

Unfortunately, I didn’t have any other PCI-Express card available to make a more extensive test. The device I have selected, which vSphere fails to identify, is the ASMedia SATA controller. This controller drives one internal SATA port and either a second internal port or the eSATA port on the rear I/O panel.

Create a virtual machine, change all its settings and save the changes. Then launch the settings again and add the PCI device:

[Screenshots: add_PCI, add_PCI_choose]

Once the PCI device is connected, some settings can no longer be changed. It is, however, possible to remove the PCI device, change the settings and re-add the PCI device. Also, adding the PCI device and changing settings at the same time might throw some error messages.

I chose to fire up an Ubuntu 12.04 Live CD just to see if it works. Here is what the controller looks like. I hooked up an old spare drive to the ASMedia controller and, as we can see, it is correctly detected.

[Screenshot: asm_controller]

This was a really quick test, but I will definitely give ASRock boards another try for an upcoming build. Please, ASRock, send me your Z77 Extreme11 board for evaluation. Z77 with onboard LSI SATA is a real killer!

To summarize my short experience with this board:

  • VT-d on Z77 is working!
  • Non-K CPU overclocking
  • 3x PCI-Express 16x slots for add-in cards
  • Power-on-to-boot is really quick
  • The Realtek NIC is a slight negative; Intel would have been better

VMware vSphere client over an SSH tunnel

Reaching a virtual host remotely with the vSphere client through an SSH tunnel is not as straightforward as one could hope. However, it is possible with a few simple steps.

How to connect to a remote ESXi server through an SSH tunnel

I will present two examples: one using the Putty GUI and the other using command line arguments for Putty.

A. Putty GUI

  1. Create or open an existing session to a machine located on the ESXi management network.
  2. Go to Connection – SSH – Tunnels and add local forwards for ports 443, 902 and 903, each pointing to the ESXi host on the same port:

    L443  10.0.0.254:443
    L902  10.0.0.254:902
    L903  10.0.0.254:903

    Replace 10.0.0.254 with the IP of your ESXi host.
  3. Return to the session settings and make sure you save them. It is always a pain to realize you forgot after pressing Open too early.
  4. Connect to the machine with the newly created session
  5. Due to some issues in the vSphere client, we need to add an entry to the Windows hosts file. Open Notepad with administrator privileges (needed to make changes to the hosts file) and open the file (it has no file extension):
    C:\Windows\System32\drivers\etc\hosts
    (hint: copy-paste the above line into the Open file dialogue)
  6. Add a line at the end of the file:
    127.0.0.1 esxiserver
    Save the file and exit Notepad
  7. Fire up the vSphere client and enter esxiserver as “IP address / Name” together with your login credentials.

B. Putty command line

  1. Instead of setting up a Putty session with the GUI, launch Putty directly from the command line:
    putty.exe -L 443:destIP:443 -L 902:destIP:902 -L 903:destIP:903 user@local_machine
    (all the above on one line) where destIP is the IP of the ESXi server, local_machine is the machine on the local network you tunnel through, and user is your username on local_machine. Hit enter to launch the SSH session and log in. (An OpenSSH equivalent is shown after this list.)
  2. Set up the hosts file as described in steps 5–6 in the previous section
  3. Launch the vSphere client and connect as described in step 7 above.
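For completeness, the equivalent tunnel with a standard OpenSSH client would look like this (note that binding local port 443 may require administrator/root privileges):

$ ssh -L 443:destIP:443 -L 902:destIP:902 -L 903:destIP:903 user@local_machine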

New virtualization hardware in the house

I am launching this blog now that I have got my hands on some new hardware.

The idea is to step up the game a notch or two from the previous system. Here is an article I wrote for NordicHardware about my previous system, the one that is now being upgraded: Vi bygger lågeffektsserver med virtualiseringsteknik (“Building a low-power server with virtualization technology”)

The article is in Swedish, but its main point was to demonstrate how to build a low-power ESXi server for home usage. The specifications of the system are:

A really cheap system consuming between 45W and 70W depending on load.

The next build will be slightly more powerful and the following components have arrived so far:

The PSU is still up in the air as I’m trying to find a solid 80+ Gold unit for decent money. The case will most likely be a sound-dampened Fractal Design Define Mini. I will dedicate another post to some of the motivation behind this hardware and also to how I intend to solve the storage part. Component and storage selection are restricted by several factors in this build, which I also intend to cover in that post.