Jurgen Welz
jwelz at hotmail.com
Fri Mar 5 10:05:32 CST 2010
John/Drew: Thanks for the comments. For the most part, a great deal of common sense has been applied. Their web site currently says they have 180 job sites and about 20 offices in Omaha, Nebraska; Washington; and most of the provinces in Canada. Their servers in Calgary, Alberta support over 1,500 users, all using the same basic environment that runs our system. The basic setup is well thought out now that they have a team of over 30 full-time professionals and consultants on staff.

Our 65-odd users now consume a disproportionately high percentage of the CPU, memory and bandwidth resources, and it appears that reliance on Access and the Pervasive data services for our estimating system are the only differences between their 1,500 and our 65. Their management interfaces are built in PowerBuilder and ours are in Access. We use their stuff for payroll and a good deal of the project controls. Everyone's data is hosted in SQL Server. Their IT reporting tools tell us that Access is the problem.

Our former environment was 4 VMware servers for over 60 users and is now 7 servers on better physical hardware. As far as I know, the only real changes are that we have upgraded estimating (Sage) a couple of versions, we have a bunch of new Windows service packs installed (without a bank of servers to test on live, they were very leery of messing with a working system, so they never applied service packs), and we now have access to some applications that we haven't started using, such as P3 and P6 (Primavera). We deployed to the new environment 3 weeks ago and there is no going back, as I had to upgrade over 200,000 estimate files and the downgrade path is ugly. We have been ironing out a few loose ends: shadow copy, anti-virus, load balancer optimization, time zone awareness... For the most part, things are very nice. Nearly all our users run Wyse thin client terminals with dual screens via the Cisco AnyConnect VPN Client and Remote Desktop.
Division- and company-wide, they are making record profits in the North American construction market notwithstanding the economy. They had a bunch of hardware and support dollars to throw at our issues, and fingers are now being pointed at Access as the likely culprit.

My philosophy has always been that a parent form be populated by a single-record recordset and that subforms fill on demand (John calls it Just in Time) as a subform takes focus. It is exceedingly rare that a continuous subform contains more than 20 records, and I will frequently use single-recordset non-continuous subforms to limit memory and bandwidth requirements. I can't imagine why a form that is sitting open on static data, while the user is writing an email and working on an Excel workbook in the same session, would pin the CPU at 100%. No one at their end cares to know much about our application, so they don't really understand the measures I went to in order to make this work with mdb data. The mdb limitations went away 1.5 years ago. Much of the tiny data is cached in memory and displayed via callback, so the server never gets hit for tiny lookups, and even so, we know that memory is not the bottleneck.

I'm told that they have enough MSDN licensing to give me access to the works, so if I want to rewrite the whole shooting match, I can choose my weapon. Also, for the first time since Access 97 was brand new, I have Admin rights on our servers. I also have the SQL Server tools, so I no longer have to make data structure changes via DDL using MSQuery from Excel. Unfortunately, my title has been Safety Manager for the last few years, and in that role Access is more and more a part-time amusement. I was pulled off other duties to coordinate the migration to the new environment and build a few new enhancements.
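For anyone unfamiliar with the single-record-parent / Just-in-Time subform pattern described above, a minimal Access VBA sketch follows. The names (subProjectDetail, frmProjectDetail, ProjectID) are hypothetical placeholders, and this is only an illustration of the idea, not our actual code:

```vba
' JIT subform loading: the subform control starts unbound, so opening the
' parent form pulls only the single parent record across the wire.
' Control, form and field names here are hypothetical.

Private Sub Form_Load()
    ' Parent form opens on a one-record recordset; the subform stays empty.
    Me.subProjectDetail.SourceObject = ""
End Sub

Private Sub subProjectDetail_Enter()
    ' Bind the subform only when the user actually tabs or clicks into it.
    If Me.subProjectDetail.SourceObject = "" Then
        Me.subProjectDetail.SourceObject = "frmProjectDetail"
        Me.subProjectDetail.LinkMasterFields = "ProjectID"
        Me.subProjectDetail.LinkChildFields = "ProjectID"
    End If
End Sub
```

Because SourceObject is left blank at load, the subform's recordset is never requested from the server until the control actually takes the focus.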
Ciao
Jürgen Welz
Edmonton, Alberta
jwelz at hotmail.com

> Date: Thu, 4 Mar 2010 20:33:57 -0500
> From: jwcolby at colbyconsulting.com
> To: accessd at databaseadvisors.com
> Subject: Re: [AccessD] A 2003 on VM Ware
>
> Jurgen,
>
> I think Drew might be right in that Access is supposed to sense "other applications" wanting the processor and release the cycles. If Access is all that is running inside of that VM, then it never senses "other applications" wanting the cycles, because those other applications are isolated by the walls of the VM. Thus the VM ends up requesting real CPU cycles to service the Access FE, essentially tying up an entire core per FE.
>
> As for whether Access still does this, I have not seen it do it continuously; however, I have seen it do it for a "long period", where a long period is 30 seconds or more, and then eventually stop.
>
> I have three virtual machines running on a quad core "server". I put "server" in quotes simply because it is just a reasonably powerful quad core AMD, NOT a true SERVER machine. After much research I discovered that VMs are not all they appear. For example, the recommendation is to NEVER give a single VM multiple "CPUs", even though it is possible to do so. Likewise, the recommendation is to always leave a core not assigned to any VM; iow, if it is a quad core machine, only run three VMs and leave the fourth core to run the VMware host software (and Windows, of course).
>
> I have a third-party application which is written in FoxPro for Windows. It runs well in the VM, however... it eventually locks up the VM. No idea why, but if I allow it to do its processing for 24 or 48 hours it will eventually lock up the VM. The VM responds, but "responds" as in 2 minutes to react to a mouse click and so forth. Once this happens, it is damned difficult and sometimes impossible to regain control of the VM. I end up just "removing power" to the VM (the equivalent of hitting the power button).
> I have seen it do something to the VMware host software such that a reboot of the actual server was required to get the VMs working again. SOMETIMES I can simply close VMware and restart it; sometimes that doesn't work and a reboot of the physical machine is required.
>
> All of this happens with an application other than Access, so that indicates that application software running in a VM is quite capable of bringing the entire server to its knees. In my mind that should not be possible, and VMware needs to figure this out and fix it from their side. So far they have not, and there are complaints about this on their forums.
>
> I have just fired up my server and will do a little testing. I do not have Office installed on the VMs, but I will install Office 2003 and then get a simple FE running, talking to a BE up on the VMware server machine. I will then be able to tell you if I see anything like what you are seeing. I suspect that I will not, however.
>
> John W. Colby
> www.ColbyConsulting.com
>
> Drew Wutka wrote:
> > Jurgen, I am no expert in VMware. I use Virtual PC and Virtual Server, both MS-based systems. And for 'remote clients' that want to use a 'desktop' here, they use Remote Desktop (it used to be MS's Terminal Server), so if you are using Citrix for that, I don't have much experience there either.
> >
> > HOWEVER, I think I may know where your problem lies... at least the direction it's in. I think the problem is twofold. First, Access 97 used to use up CPU processing during idle time, and MS swears that went away with Access 2000, but I've seen A2k and up do that 'maxing the CPU' thing. Now, on a normal machine, it's no big deal, because Access is willing and ready to give up the processing time. But in a virtual machine, you have to realize that your VM programs aren't getting direct processor time. Instead, they are getting 'virtual processor' time.
> > On top of that, in a Citrix/Terminal Server setting, you are practically running a virtual machine inside a virtual machine. So with all of those lines getting tangled, Access may not be releasing the 'virtual CPU time' as readily as it should.
> >
> > http://insights.oetiker.ch/linux/vmware.html That is a link to a google search about terminal services inside a VM. Try those pointers, see if that helps.
> >
> > Personally,....
> >
> > <soap box>
> > Virtual machines are great. They are a great way to have multiple 'computers' running without having to have hardware for each one. HOWEVER, a virtual machine is going to inherently run slower than its host, even with just one VM running on the host. Each VM added to the host is going to divvy up the already 'limited' resources. For single-purpose machines, this isn't a problem, on hefty hardware. But the key is 'single purpose'. Terminal services, by their very nature, are not 'single purpose'. And even some items that may seem single purpose are really too large and complex to truly host inside a VM. Exchange servers, DB servers, etc. can all require massive resources, so putting them on anything but a capable box by themselves can be detrimental. The real confusion lies in the lack of understanding of what many 'server' roles require. Plus there is the inherent 'coolness' of virtualization.
> >
> > But there has to be some common sense applied. I'd say the best rule of thumb would be to ask yourself whether what you want to do would work on a server that is 5 years old. If not, it won't work well in a virtual environment and should be put on its own server.
> > </soap box>
> >
> > ;)
> >
> > Drew