

Save & Optimise Virtual Disk Storage for FREE.

This article demonstrates a FREE way of saving and optimising virtual disk storage space using freely available software. Included are:

1. How to report which disks are over-allocated using our very own free vdisk waste finder application, available from here > Vdisk Waste Finder
2. How to resize over-allocated vdisks using a free open source application called “gparted”
3. How to align vdisks, including system disks, to a 64k start sector. This increases performance and reduces disk latency, again using “gparted”.

So to get started, first download all the software by clicking on the application names above.

Disclaimer: Virtualizeplanet will not be liable if anything goes wrong while performing the following operations. Do this at your own risk; we will not be responsible for data loss or downtime.
How to report which disks are over-allocated.

Update on this post: watch how to do this > here <

Firstly, use Vdisk Waste Finder by pointing the application at your ESX server or vCenter server, providing credentials, specifying the percentage of free space you want to look for and then clicking go.
You will then be presented with a list of VMs and their disk details. On the far right you will see a column titled “Wasted Disk”. Any disk marked “Needs Resize” falls within the allowed free space threshold, and you could consider that drive a candidate for a resize.
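As a rough cross-check you can also look inside an individual Linux guest, although Vdisk Waste Finder does this per-vdisk across your whole ESX/vCenter inventory for you. A minimal sketch, assuming a Linux guest with the standard df and awk tools and using 60% free as an example threshold:

# List local filesystems with more than 60% free space (rough resize candidates).
df -P -l | awk 'NR > 1 { used = $5 + 0; free = 100 - used; if (free > 60) printf "%-25s %3d%% free\n", $6, free }'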

VWF screenshot

How to resize over-allocated vdisks.

Secondly, add a vdisk of the new optimal size to the candidate VM using the VI client.
Now boot from the Gparted live CD ISO image; once booted it’s easy to resize a disk’s partition. Right-click the drive you want to resize, then click Resize/Move from the menu:

Gparted screenshot

Then resize the original drive to the same size as the newly added drive:

Gparted screenshot
Next you’ll have to click on “Apply All Operations”.
Next, right-click the original drive and select Copy.
Now select the new drive, right-click and select Paste. Before you can paste you will be prompted to create a partition table; make sure you do this, but there is no need to create a partition or format it.
Next you’ll have to click on “Apply All Operations” again.
This will now copy the data from the old drive to the new disk.
Now right-click the newly created partition and select Manage Flags.
Make sure the boot flag is selected or the VM won’t boot.

Gparted screenshot

Back in the VI client, in the VM’s settings, remove the old drive.
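Before removing the old drive it is worth a quick sanity check from the Gparted live CD terminal. A minimal sketch, assuming the new vdisk shows up as /dev/sdb and the copied data is partition 1 (check with parted -l first):

# Show the partition table, sizes and flags of the new disk.
parted /dev/sdb print
# The boot flag should appear under Flags; if not, set it:
parted /dev/sdb set 1 boot on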

Job done.

How to align the vdisk including system disks to a 64k start sector.

The idea here is to make sure your partition starts on a sector number divisible by 64, for example 64, 128 or 256. This will increase your VM disk performance and reduce latency.
This issue is fully described in the following VMware document:
http://www.vmware.com/pdf/esx3_partition_align.pdf
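If you just want to check whether a disk is misaligned in the first place, the start sector is easy to read from the Gparted live CD terminal. A quick sketch, assuming the disk in question is /dev/sda:

# Show partition start/end positions in 512-byte sectors.
parted /dev/sda unit s print
# A start sector of 63 (the traditional DOS default) is misaligned;
# a start sector divisible by 64 (64, 128, 256...) is aligned.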

A new update and “how to” video can be found >HERE<

Follow these steps:
Again boot the system with the Gparted live CD.
Right-click the partition and select Resize/Move.
Shrink the partition by 10 MB.
Move the partition to the right by a few MB to free up space at the beginning of the disk.
Next you’ll have to click on “Apply All Operations”
When finished, exit Gparted (not the entire live CD, just the Gparted application, so don’t reboot).
Start a terminal window.
At the command prompt type ‘parted /dev/sda’ (substitute your actual device here) to start the command line parted editor.
Create a new partition at the start of the disk to fill the space up to the sector where you want to align your partition; a fuller parted session is sketched after these steps. For example, if you want your system partition to start at sector 128, create a very small partition that takes up the space from sectors 63-127 using the command:
>mkpart primary 63s 127s

parted will create a new primary partition from sector 63 to sector 127.  That means the very next sector available is 128.
Exit parted and restart the Gparted GUI by clicking the Gparted icon.
Use the Resize/Move option to resize the partition to fill the entire remaining space. You MUST uncheck the “Round to Cylinders” option, otherwise the start sector will be rounded back to a cylinder boundary and the alignment lost.
Next you’ll have to click on “Apply All Operations”
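For reference, the parted part of the procedure looks roughly like this. A minimal sketch, assuming the disk is /dev/sda and you are aiming for a start sector of 128 (64KB with 512-byte sectors); adjust the device and sector numbers to suit your own setup:

parted /dev/sda
# Then, inside the interactive parted session:
#   unit s                     (work in sectors)
#   print                      (note the free space at the start of the disk)
#   mkpart primary 63s 127s    (tiny filler partition ending at sector 127)
#   print                      (confirm the next free sector is 128)
#   quit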

Job Done.

Your partition should now start on an optimised sector.

Gparted screenshot

But how do you find misaligned disks in the first place? Click >HERE < to find out

Last Updated ( Mar 07, 2010 at 03:47 PM )

vSwitch Local Speed

A client told me today that he thought he was having poor backup performance over a virtual network, but when he started debugging he noticed it wasn’t just backup that was the problem: it was all network IO through a local vswitch. He started the conversation by asking me what I would expect to see in network throughput between 2 VMs localised on the same vswitch. He informed me that no matter what he tried, using a vswitch with or without a pNIC and trying different types of vNIC, he couldn’t push more than 240mbits/s. I asked him the obvious question, “Is there anything else happening on the system that could be impacting network performance, like CPU overhead?”, and the answer was “no”.
So I decided to test it for myself.

Like my client, I used the Netperf tool to test throughput between the 2 VMs. To use Netperf you run:
C:\Netserver.exe on one VM
and then
C:\Netclient.exe -H hostname (of the first system)
on the other VM.
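If your test VMs are Linux rather than Windows, the equivalent test with the stock netperf package looks roughly like this. A sketch, assuming netperf is installed in both guests and using 192.168.1.10 as a placeholder address for the VM running the server side:

# On the first VM, start the listener:
netserver
# On the second VM, run a 30 second TCP stream test against the first VM:
netperf -H 192.168.1.10 -l 30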

After a few seconds you should get a result displayed in mbits/s.
Just like my client, I tested a vswitch with and without a pNIC and different types of vNIC. I also tested different hosts to get varied results. The problem is I saw exactly the results I’d expect to see: on average, speeds of around 500mbits/s.

Flexible vNIC (mbits/s)       Run 1   Run 2   Average
ESX1  vswitch +pNIC            502     594     548
ESX1  vswitch -pNIC            608     565     586.5
ESX2  vswitch +pNIC            461     482     471.5
ESX2  vswitch -pNIC            261     495     378

VMXNet3 vNIC (mbits/s)        Run 1   Run 2   Average
ESX1  vswitch +pNIC            641     621     631
ESX1  vswitch -pNIC            628     593     610.5
ESX2  vswitch +pNIC            465     496     480.5
ESX2  vswitch -pNIC            531     533     532

Long Distance vMotion not New!

A few weeks back I was listening to a discussion about the concept of vMotion over long distance as if it were a new thing, which I instantly disagreed with, because I had personally been describing the concept to students when I was a VMware instructor 3 years ago. I knew it was possible to do by:

A. Having the right infrastructure in place – put simply, a fast bridged network.

B. Using storage virtualisation solutions like DataCore.

DataCore has long had the ability to synchronously mirror an active/active virtualised volume, which means 2 sets of ESX servers can see the same volume in a read/write state as if the volume were local to each ESX server. This feature was nothing new for DataCore, so in fact long distance vMotion has been achievable from the day vMotion went GA.

There were some caveats though: one being that it was only feasible when latency and bandwidth were not an issue, and secondly, at the time it technically wasn’t supported by VMware. I tested the theory in the lab at the time but didn’t record my research, so as an alternative I’ve asked a friend within DataCore to describe a real world case study to prove the point. This concept wasn’t manufactured by DataCore but is a side effect of combining the 2 technologies. What follows is a real world implementation of what DataCore refer to as a stretched cluster:

 

Mike Beevor (DataCore) says:


Let’s start by looking at a real life application of SANSymphony. IoMart, a highly reputable hosting company offering 100% uptime, approached DataCore with the vision of being able to provide High Availability in a potentially heterogeneous storage environment to its hosted customers, and DataCore were more than happy to oblige. The solution was delivered using industry standard software, based on readily available server and storage hardware. What followed was a solution that provided not only the HA they were looking for, but managed to deliver it on scales more akin to DR!

IoMart has 5 datacentres located around the UK, but the two we are particularly interested in are in The City and in Maidenhead, a distance of approximately 20 miles, which I think you’ll agree is more than suitable to satisfy most companies’ DR strategies. The environment was built on standard x86 hardware, highly specified due to its hosting nature: 128GB RAM for caching, 4 quad core processors and 8Gb HBAs for connectivity. The disk behind the server was also considered low end commodity disk by the major manufacturer that was chosen. Naturally we can’t divulge the full details of the environment, as we wouldn’t want to give away IoMart’s competitive advantage, but we can say that the cost was less than a third of the equivalent software and hardware from a well known manufacturer. DataCore SANSymphony was used to virtualise and manage the environment; the datacentres have a fibre link between them and a DataCore SANSymphony server in each location.

What we have achieved using this configuration is synchronous replication between the 2 sites, over a distance of 21 miles. This extends not only through the storage layer replication, but also through the application server layer, in this instance ESX. Now, site to site replication is nothing new, but where this gets very interesting is that the failover is seamless and automated at the storage layer, and the failback is automated and seamless too. Because we are grid based storage, rather than clustered, we are in no danger of a quorum issue, making this an extremely efficient and effective solution. It is also worth noting that the performance metrics within a grid present a linear performance growth model, as each node is able to dedicate its full computing power to the performance of the system, rather than having to aggregate performance throughout a cluster and also dedicate some power to the arbiter within the cluster.

Essentially, we have created a Stretch Virtual Storage Grid, or SVSG as you will hear it referred to in future. The benefit of this type of infrastructure is that you can distribute the environment across several locations and ensure that, unless a major city is taken out (possibly by Godzilla), you have a fully distributed HA model over DR geographies.

DataCore’s synchronous replication functionality operates on a forced cache coherency model and is based on a grid architecture, replicating the I/O block between the cache on each DataCore server before sending the acknowledgement to the application server and committing the data to disk. By doing it this way, we obviate the problems associated with clustered storage and allow a greater degree of performance and flexibility.
 


I didn’t want this to come across as a dig at Cisco, as it’s Cisco who are currently branding this as a new infrastructure application; far from it, but if someone tells me this is a new concept then I have to disagree. What I will say, though, is that Cisco are also doing a good job of providing a complete solution, with tools to achieve things like extended VLANs, a good IO virtualisation platform, and standard hardware and protocols to drive it. Importantly, Cisco has a good hook into VMware in more than one way 😉. So you can see all the right tools to architect this concept are there. As for Cisco playing a big part in the cloud, that was always going to be a given, and you can see how these kinds of solutions are going to help.
