Jurgen Welz
jwelz at hotmail.com
Wed Feb 21 14:19:57 CST 2007
People:

The organization I work for has decided that the database I built for them over the years will be going from a maximum of 41 to about 50 users in the course of this year. The complexity and size of our database has been growing at an ever accelerating rate and we have been suffering from the occasional corruption, so the question of what we can do has been put to me. We are running individual DAO FE files via a pair (soon to be four) of servers against a file server MDB on a gigabit LAN.

Corruptions started happening a couple of years back when users copied the back end MDB file to their local laptops. When we were on a local network this never caused any grief, but when we switched to a terminal server environment the copy duration went to several minutes and, unhappily, the original server data file frequently became corrupted. Any users who logged on after the corruption were given an error about a damaged database, while existing logins proceeded without issue. The only resolution was to bump everyone off, repair the BE file and then allow everyone back on again. Typically, one record in a single table would be found to be at fault, with a ton of invalid junk in the fields. That record would usually have to be deleted; curiously, it often contained out of range numbers even after repair. We put a stop to copying open MDB files to remote laptops (you could still make a copy on the server and then copy that copy without issue). Notwithstanding that, we occasionally get a corrupt MDB BE file showing a similar kind of damage. Of course I have no way of knowing whether anyone tried to copy the file or whether user or MDB limits had simply been exceeded. Corruption may happen once every 4 to 8 months.

When I started all this in about '96, my boss was told that our parent company would look at our data needs in 5 to 7 years. My boss was told today that it would be another 4 years at least, and I suspect that target will be significantly exceeded. We could be up to 100 users by that time, with a 200 megabyte BE file.

The whole discussion about DAO/ADO caught my attention as I haven't been doing much with our system in the past 4 years other than some refinements and enhancements; Access development has become more of a sideline. I wrote the basics at a time when DAO was my only option and wasn't given the resources to convert to ADO. SQL Server has never been an option, even though the parent company, which grosses over a billion in annual revenue, relies on it for their systems. If I can provide sufficient justification, I may get the go-ahead to upsize.

The question I am posed is: what is the next step for the system I built? My boss is complaining about the performance of some of our forms, but the fact is, our forms display considerably more information on a screen than any of the parent company forms, and they still open and populate a great deal faster. When I pull a safety stats report on screen from the parent company for a single manager on perhaps 20 projects, it may take 20 to 30 seconds to load, yet my slowest project form will load within 10 seconds. My load time is a bit slow because of a number of lists and combos that retrieve a large number of records, or that retrieve from tables I could reduce in size by archiving data, but the form shows a great deal more information from a broader variety of sources. I know I can gain significant performance by denormalizing.
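To illustrate the kind of denormalization I have in mind with our address data (more on that below; the table and field names here are only placeholders, not our actual schema), compare a fully normalized address lookup with a flattened one:

Normalized (four tables and three joins just to display one address):

    SELECT a.StreetAddress, ci.CityName, p.ProvinceName, co.CountryName
    FROM ((tblAddress AS a
      INNER JOIN tblCity AS ci ON a.CityID = ci.CityID)
      INNER JOIN tblProvince AS p ON ci.ProvinceID = p.ProvinceID)
      INNER JOIN tblCountry AS co ON p.CountryID = co.CountryID;

Denormalized (one table, no joins at runtime):

    SELECT StreetAddress, City, Province, Country
    FROM tblAddressFlat;

The flattened version trades some storage and update discipline for read speed, which is what matters on the forms.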
I've always split my addresses into tables for country, province/state, city and address. Storing all of that data in a single field rather than in separate tables processes significantly faster than the joins. I have used callback functions for many lists and combos to avoid hitting the BE repeatedly, but for continuous forms using combos filled from a table with a large number of records, the forms populate quite slowly, much more slowly than when the combo is filled with an SQL rowsource. I could make just-in-time combos and use arrays of controls (as I have done in various situations), but I think my best option is to cut down the size of the data file, giving users the option to see current data only, or current data unioned with archived data, and let them wait the small percentage of the time when that is necessary (there's a rough sketch of what I mean in the P.S. below).

Given that we are moving from 2 servers to 4 over the next few weeks, it seems that people are starting to realize that our division has increasingly significant needs and may be more amenable to suggestions. I welcome any input from the list.

Ciao

Jürgen Welz
Edmonton, Alberta
jwelz at hotmail.com
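P.S. This is roughly what I mean by current data versus current plus archive; the table and field names are placeholders only, not our actual schema.

Current data only (the default rowsource, kept small by archiving):

    SELECT ProjectID, ProjectName, StatusDate
    FROM tblProject;

Current data unioned with the archive (used only when someone asks for history):

    SELECT ProjectID, ProjectName, StatusDate FROM tblProject
    UNION ALL
    SELECT ProjectID, ProjectName, StatusDate FROM tblProjectArchive;

UNION ALL skips the duplicate-elimination pass, so it is the cheaper choice when the live and archive tables cannot contain the same rows.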