[AccessD] MDB to SQL Server

Jurgen Welz jwelz at hotmail.com
Wed Feb 21 21:05:23 CST 2007


Arthur:  I appreciate the suggestions.  However, all our servers are in a 
single office remote from every user.  All are connected via some form of 
terminal services, so the conventional advantages of replication do not 
really apply.  Quite frankly, I've got a Dell XPS laptop with a fast Core 
Duo processor, two gigs of fast RAM and a 7200 RPM drive, and the 
application is not significantly faster on that laptop.  It's a bit faster 
in Access 97, but the fact is, even with local data and a single user, my 
boss is going to bemoan the lack of speed because it used to run faster.  
Remember the bitwise querying, storing data in arrays to avoid hitting the 
drive and other such stuff I did?  It's not enough anymore because there 
are so many tables from which related data must be retrieved.

In the past couple of years, I have ceased to store quite so much in arrays 
because I thought we'd be hitting a RAM limit with the number of users we 
have on a single, and more recently two, terminal servers.  As a result, 
the more recently added tables and types of data are more normalized.  I 
had even dropped some of the bitwise querying I used to store multiple 
attributes in a single field (byte, integer or long) and gone to joined 
tables.  That was a performance hit.
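
For anyone who hasn't seen the technique, it boils down to something like 
this (table, field and flag names invented for illustration; Jet's 
expression service treats AND bitwise on numeric operands, the way VBA 
does):

    'Hypothetical yes/no attributes packed into one Long, tblEmployee.Flags
    Const FLG_CSTS As Long = 1        'bit 0
    Const FLG_FORKLIFT As Long = 2    'bit 1
    Const FLG_SWINGSTAGE As Long = 4  'bit 2

    'One indexed Long column stands in for a join to an attributes table
    Dim strSQL As String
    strSQL = "SELECT EmployeeID, LastName FROM tblEmployee " & _
             "WHERE (Flags AND " & (FLG_CSTS Or FLG_FORKLIFT) & ") = " & _
             (FLG_CSTS Or FLG_FORKLIFT) & ";"

The joined-table equivalent needs an extra table and an extra join to get 
the same answer, which is where the performance hit comes from.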

I suspect I'm going to run into even worse performance with SQL Express or 
the MSDE (that's what it was called in A2K, wasn't it?).  I could try 
upsizing on my personal PC (I only have A2K) and presumably get performance 
similar to SQL Express.  At least I could determine whether it will help or 
hinder performance.

I would really like to normalize further than I have, but as it stands, 
that would probably kill an MDB with the size and number of users I 
currently serve.  As much as I'd love to have a table of phone numbers with 
a table of number types, and addresses with address types, all related to a 
single table of entities of both corporate and people types, in all their 
permutations and combinations, this has proved to be far slower than having 
separate tables for corporate and individual entities and just keeping 
phone numbers and addresses with them.
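
The fully normalized shape I have in mind is roughly this (all names 
invented; Jet DDL executed through DAO):

    Dim db As DAO.Database
    Set db = CurrentDb
    'One row per entity, corporate or person
    db.Execute "CREATE TABLE tblEntity (EntityID COUNTER CONSTRAINT " & _
               "pkEntity PRIMARY KEY, EntityType TEXT(1), " & _
               "EntityName TEXT(100))", dbFailOnError
    'Phone numbers hang off the entity, typed by a lookup table
    db.Execute "CREATE TABLE tblPhone (PhoneID COUNTER CONSTRAINT " & _
               "pkPhone PRIMARY KEY, EntityID LONG CONSTRAINT " & _
               "fkPhoneEntity REFERENCES tblEntity (EntityID), " & _
               "PhoneTypeID LONG, PhoneNumber TEXT(20))", dbFailOnError

Retrieving a phone number then costs a join (two, counting the type 
table), and the same again for addresses and every other attribute.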

As far as named queries go, mine are few and far between.  I manipulate the 
record and row sources with filters based on numerous variables and usually 
set them in code.  I understand that the query execution plan can't be 
compiled ahead of time with this approach, but my experience has been that 
throwing a variable number of parameters, say 4, 5, 6 or 10, at a saved 
query slows it down just as much.  Practically all of my recordsets and 
rowsources are generated in code at runtime in the Open event of the form.  
Not only the subforms, but the forms themselves are Just in Time, and the 
same applies to a large percentage of the lists and combos.  This actually 
makes changes in things like field and table names quite simple, because 
they can be done with a find and replace in the code window.
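
Boiled down, the pattern looks something like this (control, table and 
variable names invented for the example):

    Private mstrRegion As String  'set elsewhere from the user's choices

    Private Sub Form_Open(Cancel As Integer)
        Dim strWhere As String
        'Build the filter from whatever the user picked before opening
        If Len(mstrRegion) > 0 Then
            strWhere = " WHERE Region = '" & mstrRegion & "'"
        End If
        Me.RecordSource = "SELECT EmployeeID, LastName, Phone " & _
                          "FROM tblEmployee" & strWhere
        'Combos get their rowsources the same way, just in time
        Me.cboTrade.RowSource = "SELECT TradeID, TradeName " & _
                                "FROM tblTrade ORDER BY TradeName"
    End Sub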

Those saved queries that I do have frequently have their .SQL properties 
set at runtime.  This allows me to include or exclude unions and join 
disparate tables depending on the user's choices.  The unions allow me to 
split off current from archival data, yet display both when necessary.  
This is a tactic I could exploit further.  Another example: a user may need 
an employee report showing contact information, where filtering on a 
combination of training certifications requires joins to completely 
different tables than when the training records are not a basis for 
retrieval.  For instance, I may want a list of contact info for Saskatoon 
office-based employees, versus a list of Alberta regional employees who 
have CSTS, a current forklift certificate and Swingstage training, and who 
also fall into a trade category from yet another table.  It's the same 
report, but involving equal, and sometimes unequal, joins to an 
indeterminate variety of tables.  Sometimes 1 table, sometimes 5 or 6, 
sometimes with parameters on some fields, at other times with parameters 
on others.  I couldn't imagine doing this with saved queries.
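
The mechanics are simple enough; roughly this (query, report, table and 
field names invented for illustration):

    Private Sub OpenContactReport(ByVal blnIncludeArchive As Boolean)
        Dim db As DAO.Database
        Dim qdf As DAO.QueryDef
        Dim strSQL As String

        strSQL = "SELECT EmployeeID, LastName, Phone FROM tblEmployee"
        If blnIncludeArchive Then
            'Splice in the archival rows only when the user wants them
            strSQL = strSQL & " UNION ALL SELECT EmployeeID, " & _
                     "LastName, Phone FROM tblEmployeeArchive"
        End If

        Set db = CurrentDb
        Set qdf = db.QueryDefs("qryEmployeeContact")
        qdf.SQL = strSQL  'the report is bound to this saved query
        DoCmd.OpenReport "rptEmployeeContact", acViewPreview
    End Sub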

I take it I would lose much of the flexibility I've given my users by 
upsizing?  And that while I could normalize more without hurting 
performance, the performance would likely be no better than what I'm 
getting now?

Ciao
Jürgen Welz
Edmonton, Alberta
jwelz at hotmail.com





>From: artful at rogers.com
>
>My suggestions:
>
>1. If you wish to stay with an MDB back end (undoubtedly your least 
>expensive choice), then I suggest replication as the way to go. You can 
>even replicate a copy to every individual user. I did that with 70 users in 
>4 branch offices and it all worked nicely. Each branch had a server in 
>charge of local replication, and HQ's server handled both HQ replication 
>and also replication between the branches. This all occurred on a WAN. 
>Every user everywhere was guaranteed to see the changes from everyone else 
>within 15 minutes. A significant advantage of this approach is the dramatic 
>cutdown on net traffic -- everyone has all the tables locally, so the only 
>net traffic is the changes, not the data itself.
>
>2. If you want to go beyond MDB, then install a copy of SQL Express. This 
>would involve "translating" your app, but the result will ultimately be 
>better. One thing you definitely should do is identify all your row and 
>record sources that do not use named queries, and fix them to use named 
>queries. If you don't do this before upsizing, you'll end up with dozens of 
>objects whose names were manufactured and will be utterly meaningless to 
>you.
>
>
>Arthur Fuller
>Technical Writer, Data Modeler, SQL Sensei
>Artful Databases Organization
>www.artfulsoftware.com
