[AccessD] Running four VMs on Windows 2003 Server

Jim Lawrence accessd at shaw.ca
Sun Jun 28 01:09:46 CDT 2009


Impressive John... all those nice new toys as well... I hope I do not sound
too jealous. ;-) ...but there are many medium-sized companies that do not have
the hardware you do.

Jim

-----Original Message-----
From: accessd-bounces at databaseadvisors.com
[mailto:accessd-bounces at databaseadvisors.com] On Behalf Of jwcolby
Sent: Saturday, June 27, 2009 8:10 PM
To: Access Developers discussion and problem solving
Subject: [AccessD] Running four VMs on Windows 2003 Server

I finally got around to fixing the issue I was having running multiple VMs
on my Windows 2003 X64
servers, now running 16 gigs of RAM.  The first problem I was having, which
was a real b**** to solve,
was that the VMs simply would not connect to the network.  It turns out that
I had Hamachi installed
on the server.  Apparently what happens is that Hamachi installs a new
(virtual) NIC, and
when the VMs fire up they grab the Hamachi NIC instead of the physical
NIC.  As soon as I
uninstalled Hamachi that problem went away.  BTW, I had been googling this
problem for MONTHS and
finally found this tip as the very last post in one of the threads about VMs
not connecting.
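For anyone hitting the same symptom: one way to guard against a virtual
adapter being auto-picked is to pin each VM's adapter to the bridged network
explicitly.  This is just a sketch -- it assumes a VMware-style .vmx config
file, which the original post doesn't actually name as the VM host software:

```
# Fragment of a VM's .vmx file (illustrative, not from the post).
# "bridged" maps to vmnet0; the host's Virtual Network Editor controls
# which physical NIC vmnet0 is bound to, so a Hamachi adapter can be
# excluded there instead of uninstalling Hamachi outright.
ethernet0.present = "TRUE"
ethernet0.connectionType = "bridged"
```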

So... I now have four VMs running, each VM with three gigs of ram.

I run a specific software package which does address validation.  A couple
of weeks ago I bought a 
new Vertex Solid State Disk:

http://www.newegg.com/Product/Product.aspx?Item=N82E16820227393

I create four partitions on that and then assign one of the partitions to
each virtual machine.  I
then copy to each partition all of the database files that Accuzip uses for
the address processing.  These files are
read-only, BTW.
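The partitioning step can be scripted.  Here is a sketch using the Windows
diskpart tool -- the disk number, sizes, and drive letters are placeholders,
since the post doesn't give the SSD's capacity or layout:

```
rem four-parts.txt -- run with: diskpart /s four-parts.txt
rem WARNING: "clean" wipes the selected disk; check the disk number first.
select disk 2
clean
create partition primary size=15000
assign letter=Q
create partition primary size=15000
assign letter=R
create partition primary size=15000
assign letter=S
create partition primary
assign letter=T
rem Format each volume afterwards, e.g.: format Q: /fs:ntfs /q
```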

I used to use an iRam (hardware) RAM disk with 4 gigs total and do the same
thing: partition it
into four 1 GB partitions and give each VM a partition.  That worked for one
VM, but the performance
was awful for any more than that.  The iRam has a total bandwidth of about
125 Mbytes / sec (it was
SATA I) and it just wasn't up to the job.

Just as a benchmark, I was getting about 1 million records / hour running on
a raid 6 disk array, so 
even the iRam was a big improvement, at least for one instance.  At any
rate, I would get about 2.5 
million records / hour processing in my one VM using the iRam.  Using the
new SSD I get about 4.1 
million records per hour, and I am getting that in FOUR virtual machines
running simultaneously!
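Those speedups are easy to sanity-check.  A quick back-of-envelope script,
using only the figures quoted above:

```python
# Throughput figures from the post, in millions of records per hour
raid6_rate = 1.0       # one instance on the RAID 6 array
iram_rate = 2.5        # one VM using the iRam RAM disk
ssd_rate_per_vm = 4.1  # per VM on the new SSD
vms = 4                # all four VMs running simultaneously

aggregate = ssd_rate_per_vm * vms
print(aggregate)                 # 16.4 million records/hour in total
print(aggregate / raid6_rate)    # ~16x the RAID 6 baseline
print(iram_rate / raid6_rate)    # the iRam alone was already 2.5x
```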

I upgraded one of my servers to the new AMD Phenom II X4:

http://www.newegg.com/Product/Product.aspx?Item=N82E16819103471

And on that machine, running only one VM (and using that SSD), I achieved
about 6.4 million records /
hour.  That processor is about 40% faster, so it makes sense that I would get
a much higher records-per-hour rate.  I am going to order a new processor for
the server that I am setting
up as my VM server and
see if I can jack the four VMs up to something close to that rate as well.
'Twould be nice if that
happens!

I originally had SQL Server running on this machine and had assigned 7 gigs
to it.  With four VMs
trying to use 3 gigs each, performance on the VMs slowed to worse than a
crawl.  Once I remembered
that SQL Server was there, I stopped the service, stopped all of the VMs,
closed the VM host
software, then reopened the host and all of the VMs, and the performance
is stellar.
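Stopping the service works; an alternative, if SQL Server needs to stay up
alongside the VMs, is to cap its memory instead of letting it claim 7 gigs.
A sketch (the 2048 MB figure is illustrative, not from the post):

```sql
-- Cap SQL Server's buffer pool so RAM is left for the VMs
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 2048;
RECONFIGURE;
```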

The thing to understand is that I often have to validate tens of millions of
records.  My total
processing time for a two million record chunk was about 40 minutes on the
faster machine, so doing
50 million records (25 files) would take most of a 24-hour day.  If I can
split those 25 files out
over four machines I will drop the total turnaround down to a more
reasonable 6 hours or so,
especially if I can get the faster processor going on the VM server.
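The turnaround arithmetic above works out like this (40 minutes per chunk and
25 files are the post's figures; the four-way split assumes the files are
spread evenly across the VMs):

```python
from math import ceil

minutes_per_chunk = 40   # one two-million-record file on the faster machine
files = 25               # 50 million records total

# One machine working serially through all 25 files
serial_hours = files * minutes_per_chunk / 60
print(round(serial_hours, 1))    # 16.7 -- "most of a 24 hour day"

# The same files spread across four VMs, each working its share in turn
vms = 4
parallel_hours = ceil(files / vms) * minutes_per_chunk / 60
print(round(parallel_hours, 1))  # 4.7 -- in the ballpark of the 6-hour estimate
```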

I will be a happy camper.

-- 
John W. Colby
www.ColbyConsulting.com
-- 
AccessD mailing list
AccessD at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/accessd
Website: http://www.databaseadvisors.com



