Align GPT partitions on 4K sector disks

I’m upgrading the storage in an offsite backup server to two new disks. The new disks are 3TB each, which poses some challenges when it comes to partitioning. Here is a quick background to the issue.

Why is it an issue to partition disks larger than 2TB?

Historically, data on disks has been stored in 512-byte chunks, called sectors. 32-bit addressing of sectors creates the following limit:
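2^32 sectors × 512 bytes/sector = 2,199,023,255,552 bytes = 2TiB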

And there you have it. Newer disks have transitioned to “4K”/4096-byte physical sectors, which extends this limit to 16TiB. But…

Why is partition alignment crucial to storage performance?

To complicate things further, disks often expose 512-byte logical sectors to the operating system for legacy support. This might lead tools to believe it is okay to begin and end a partition on any 512-byte sector boundary, which might not line up with the 4K physical sector boundaries on the disk.

Hardware.info has a good article illustrating this.
Wikipedia on 4K / Advanced Format

How do you align partitions in Ubuntu with GNU Parted?

GNU Parted is a tool that supports GUID Partition Table (GPT) setups under Linux. Parted has some parameters to aid in the alignment of partition starts and ends. Let’s launch parted with:
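$ sudo parted --align optimal /dev/sdX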

Where sdX is the drive we intend to view and/or modify. The --align optimal option is the aid in the alignment. In parted we can view the current partition table with the command print:
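(parted) print
Model: ATA <drive model> (scsi)
Disk /dev/sdX: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
(the output above is illustrative for a 3TB drive; the sector size line is the interesting one)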

As we can see, the drive has 4K physical sectors but presents 512-byte logical sectors. A tricky part I struggled with for hours was calculating the partition sizes with the unit set to sectors. In my opinion, parted could be clearer about which sector size it presents to the user. To figure this out I issued the following:
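(parted) unit B
(parted) print
Disk /dev/sdX: 3000592982016B
(parted) unit s
(parted) print
Disk /dev/sdX: 5860533168s
(only the disk size lines are shown; figures are illustrative for a 3TB drive)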

Making the calculation, bytes per sector:
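3000592982016 bytes / 5860533168 sectors = 512 bytes per sector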

So, even though this is a 4K drive, parted is using 512-byte sectors for presenting partition starts, ends and sizes.

Setting up partitions with parted

First, let’s set up a GPT partition table with the following command:
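(parted) mklabel gpt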

This was the partition layout I wanted to achieve:

Partition  Size    Usage
sdX1       8GB     swap
sdX2       250GB   /
sdX3       1200GB  raid
sdX4       1542GB  raid

Initially, I tried calculating the partition sizes using the sector unit to make sure that each partition border aligned with the physical sectors. Often, parted complained about the alignment with:
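Warning: The resulting partition is not properly aligned for best performance.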

What helped was to use the unit MB for the starts and ends. Here are the final parted commands, along these lines (the MB figures are derived from the table above; adjust for your own disk):
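(parted) mkpart primary linux-swap 0% 8000MB
(parted) mkpart primary ext4 8000MB 258000MB
(parted) mkpart primary 258000MB 1458000MB
(parted) mkpart primary 1458000MB 100%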

Notes: Using 0% defaults to the first 1MB boundary, which is correctly aligned. The same goes for 100%, which makes sure the last partition aligns with the end of the disk. Here is the resulting partition layout:
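(parted) print
Number  Start   End     Size    File system     Name     Flags
 1      1049kB  8000MB  7999MB  linux-swap(v1)  primary
 2      8000MB  258GB   250GB   ext4            primary
 3      258GB   1458GB  1200GB                  primary
 4      1458GB  3001GB  1543GB                  primary
(illustrative output matching the commands above)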

To verify that the partitions are aligned, the following command can be executed, with P being the partition number:
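$ sudo parted /dev/sdX
(parted) align-check optimal P
P aligned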

This became a long post. In the future I will try to cover handling alignment between the filesystem layer and the partitions.

Let me know how it goes for you!

Some quick HDD and SSD benchmarks

I have been able to run some benchmarks on various hard drives and a solid state drive, mostly for my own amusement, to see how old drives compare to new ones. There are some desktop drives as well as some enterprise drives. Perhaps the numbers can be useful to someone.

The drives

Listed roughly from old/slow to new/fast:

  • Seagate Barracuda 7200.11 (ST31500341AS), 1.5TB
  • Seagate Barracuda 7200.12 (ST500DM002), 500GB
  • Samsung SpinPoint F3 (HD502HJ), 500GB
  • Hitachi GST Deskstar 7K1000.D (HDS721010DLE630), 1TB
  • Western Digital Red (WD20EFRX), 2TB
  • Seagate SV35.6 Series (ST2000VX000), 2TB
  • Seagate Constellation CS (ST2000NC000), 2TB
  • Seagate Constellation ES (ST3000NM0033), 3TB
  • Intel 520 SSD (SSDSC2CW240A3), 240GB

Test system

  • Asus P8Z68-V (Intel Z68 chipset)
  • Intel 2600K
  • 2x4GB RAM
  • Ubuntu Desktop 10.04.3
  • Ubuntu Disk application used for the benchmarks

The system is getting old, but I have collected the numbers over some time and wanted to run all the drives on the same platform. The drives were connected to the onboard SATA-III/6G ports of the Z68 chipset.

*Update* I believe I have screwed up and actually used the SATA 3G ports for some of the drives. I will rerun the benchmark with the Constellation ES and SSD drive and update this post. The other drives are in production and I’m unable to test them.

Results

[Benchmark graphs per drive: ST31500341AS, ST500DM002, HD502HJ, HDS721010DLE630, WD20EFRX, ST2000VX000, ST2000NC000, ST3000NM0033, SSDSC2CW240A3]

Reflections

I am not going to do an in-depth analysis of the results, since I realize the procedure was way too sloppy. There are some really strange write results for the Constellation ES drive shown here. I tried running the same benchmark with Ubuntu 12.04 and it was more consistent, with fewer spikes/dips.

Hopefully I will be able to post some other interesting benchmarks soon.

Moving an Ubuntu Server installation to a new partition scheme

My previous post covered how to clone an Ubuntu Server installation to a new drive. That method covered cloning between identical drives and identical partition schemes. However, I have grown out of space on one of the file servers, which has the following layout:
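In essence (a sketch, not the actual tool output):
sda1  ext4  /     (most of the disk)
sda2  swap        (at the end of the disk)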

The issue now is that if I increase the size of the drive, I cannot grow the filesystem, since the swap is at the end of the disk. Fine, I figured I could remove the swap, grow the sda1 partition and then add the swap at the end again. I booted the VM to a Live CD, launched GParted and tried the operation. This failed with the following output:

After some searching and new attempts with a standalone GParted Live CD I still got the same results. Therefore, I figured I could try to copy the installation to a completely new partition layout.

Process

Setup the new filesystem

Add the new (destination) drive and the old (source) drive to the VM and boot it to a Live CD, preferably of the same version as the source OS. Set up the new drive the way you want it. Here is the layout of my new drive (sda) and old drive (sdb):
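A sketch of the layouts (the partition numbers match the fstab step below):

sda (new drive):
  sda1  swap
  sda2  ext4  /

sdb (old drive):
  sdb1  ext4  /
  sdb2  swap  (at the end)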

Copy the installation to the new partition

For this I use the following rsync command:
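Assuming the old root is mounted at /mnt/src and the new one at /mnt/dst (mount points of my own choosing):

$ sudo mkdir -p /mnt/src /mnt/dst
$ sudo mount /dev/sdb1 /mnt/src
$ sudo mount /dev/sda2 /mnt/dst
$ sudo rsync -avxHAX /mnt/src/ /mnt/dst/

The trailing slashes matter: they make rsync copy the contents of /mnt/src into /mnt/dst. -x keeps rsync from crossing filesystem boundaries, and -HAX preserves hard links, ACLs and extended attributes.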

Time to wait…

Fix fstab on the new drive

Identify the UUIDs of the partitions:
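$ sudo blkid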

Remember, we changed the partition layout so make sure you pick the right ones. In this example, sda1 is swap and sda2 is the ext4 filesystem.
Change the UUID in fstab:
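$ sudo nano /mnt/dst/etc/fstab
(with the new root still mounted at /mnt/dst, as above)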

Look for the following lines and change the old UUIDs to the new ones. Once again, the comments in the file are from the initial installation; do not get confused by them.
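# / was on /dev/sda1 during installation
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /    ext4  errors=remount-ro  0  1
# swap was on /dev/sda2 during installation
UUID=yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy none swap  sw                 0  0
(placeholder UUIDs and comments; yours will differ)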

Save (CTRL+O) and exit (CTRL+X) nano.

Setup GRUB on the new drive

This is the same process as in the previous post, only the mount points are slightly different this time:
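Reusing the /mnt/dst mount from the rsync step:

$ sudo mount --bind /dev /mnt/dst/dev
$ sudo mount --bind /proc /mnt/dst/proc
$ sudo mount --bind /sys /mnt/dst/sys
$ sudo chroot /mnt/dst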

With the help of chroot, the grub tools will operate on the virtual drive rather than the live session. Run the following commands to reinstall the boot loader:
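# grub-install /dev/sda
# update-grub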

Let’s exit the chroot and unmount the directories:
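# exit
$ sudo umount /mnt/dst/sys /mnt/dst/proc /mnt/dst/dev
$ sudo umount /mnt/dst /mnt/src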

Cleanup and finalizing

Everything should now be in place to boot the OS from the new drive. Shut down the VM, remove the old virtual drive from the VM and remove the virtual Live CD. Fire up the VM in a console and verify that it boots correctly.

As a friendly reminder – now that the VM has been removed and re-added to the inventory, it is removed from the list of automatically started virtual machines. If you use that feature, head over to the host configuration – Software – Virtual Machine Startup/Shutdown and add the VM back to the list.

Cloning a virtual hard disk to a new ESXi datastore

One physical drive of my ESXi host is starting to act strange and after a couple of years I think it is a good idea to start migrating to a new drive. Unfortunately, I do not do this often enough to remember the process. Therefore, I intend to document it here, and hopefully it can be of help to someone else.

Precondition

  • ESXi 5.1
  • Ubuntu Server 12.04 virtual machine on “Datastore A”
  • VM hard disk is thin provisioned

Goal

  • Move the VM to “Datastore B”
  • Reduce the used space of the VM on the new datastore

The VM was set up with a 1TB thin-provisioned drive and the initial usage was up to 400GB. Later on I moved the majority of the used storage to a separate VM and the usage now is around 35GB. However, the previously used space is not freed up, and I intend to accomplish that by cloning the VM to a new virtual disk. As far as I know, there are other methods to free up space but I have not tried any of those yet. To be investigated…

Process

  1. Add a new thin-provisioned virtual hard disk of the same size to the VM, placed on the new datastore
  2. Boot the VM to a cloning tool (I have used Acronis, but there are other free competent alternatives)
  3. Clone the old drive to the new one keeping the same partition setup
  4. Shut down the VM and copy the .vmx-file to the same folder as the .vmdk on the new datastore (created in step 1)
  5. Remove the VM from the inventory. Do not delete the files from the datastore
  6. Browse the new datastore, right click on the copied .vmx-file and select Add to inventory
  7. Edit the settings of the VM to remove the old virtual drive.
  8. Select an Ubuntu Live CD image (preferably the same version as the VM) for the virtual CD drive.
  9. Start the VM. vSphere will pop up a dialogue asking if the VM was moved or copied; select moved.
  10. Boot the VM to an Ubuntu Live CD to fix the mounting and grub
  11. Boot into the new VM

Let’s explain some steps in greater detail.

4. Copy the VMX file

If this is the initial state:
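/vmfs/volumes/datastoreA/VM/VM.vmx
/vmfs/volumes/datastoreA/VM/VM.vmdk
(paths simplified)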

After adding a second drive, VM_B.vmdk, on the other datastore (step 1), cloning VM.vmdk to VM_B.vmdk (step 3) and copying the VM.vmx to the VM-folder on datastoreB (step 4), the layout would be the following:
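/vmfs/volumes/datastoreA/VM/VM.vmx
/vmfs/volumes/datastoreA/VM/VM.vmdk
/vmfs/volumes/datastoreB/VM/VM.vmx
/vmfs/volumes/datastoreB/VM/VM_B.vmdk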

10. Boot the VM to a Ubuntu Live CD to fix mounts and grub

This section is heavily dependent on the guest OS. Ubuntu 12.04 uses UUIDs to mount drives and to decide which drive to boot from. The new virtual drive will have a different UUID than the original drive and will therefore not be able to boot the OS. This is where the Live CD comes in.

Once inside the Live CD, launch a terminal and orient yourself. To identify the UUIDs of the partitions, use:
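$ sudo blkid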

Next, let’s mount the drive:
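$ sudo mount /dev/sdXY /mnt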

where X is the drive letter and Y is the root partition. (If you have some unusual partitioning setup you might need to mount the other partitions to be able to follow these steps. But then again, you probably know what you are doing anyway.)

Change the UUID in fstab:
$ sudo nano /mnt/etc/fstab
Look for the following lines and change the old UUIDs to the new ones.
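# / was on /dev/sda1 during installation
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /    ext4  errors=remount-ro  0  1
# swap was on /dev/sda2 during installation
UUID=yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy none swap  sw                 0  0
(placeholder UUIDs and comments; yours will differ)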

Next up is changing the grub device mapping. The following steps I have borrowed from HowToUbuntu.org (How to repair, restore or reinstall Grub).

With the help of chroot, the grub tools will operate on the virtual drive rather than the live OS. Run the following commands to reinstall the boot loader:
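$ sudo mount --bind /dev /mnt/dev
$ sudo mount --bind /proc /mnt/proc
$ sudo mount --bind /sys /mnt/sys
$ sudo chroot /mnt
# grub-install /dev/sdX
# update-grub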

Let’s exit the chroot and unmount the directories:
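# exit
$ sudo umount /mnt/sys /mnt/proc /mnt/dev
$ sudo umount /mnt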

Now we should be all set. Shut down the VM, remove the virtual Live CD and boot up the new VM.