[AccessD] MDB to SQL Server

William Hindman wdhindman at dejpolsystems.com
Wed Feb 21 22:02:06 CST 2007


Jurgen

"As much as I'd love to have a table of phone numbers, with a table
of number types, and addresses and address types, and relate them to a
single table of entities of both corporate and people types, in all their
permutations and combinations, this has proved to be far slower than having
a separate table of corporate and individual entities and just keeping phone
numbers and addresses with them."

...this is the template I moved to more than two years ago ...what I've 
found is that the address info and contact means are not required for most 
processes and fully normalizing them with fk to a super entity ID allows me 
to call them only when required rather than every time a contact or company 
record is called ...I display them only in JIT subforms when needed ...and 
that makes the critical data records much shorter and thus quicker to 
process ...while some processes take more time to accomplish, the overall 
application performance is at least as good as the prior approach ...the 
overall size of the first database I did this with dropped considerably 
because of the "full" data normalization ...i.e., instead of having 
Loxahatchee, FL, USA, 33470, lat, long repeated in numerous company and 
contact records, it now exists only once in the db.
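The layout William describes can be sketched as a tiny SQLite schema in Python. This is only an illustration of the idea, not his actual schema: every table, column, and value below (including the coordinates) is a hypothetical stand-in.

```python
import sqlite3

# Minimal sketch of the "fully normalized" layout: one shared Location row,
# referenced by addresses that hang off a single super-entity table.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Location (
    LocationID INTEGER PRIMARY KEY,
    City TEXT, Region TEXT, Country TEXT, PostalCode TEXT,
    Lat REAL, Long REAL
);
CREATE TABLE Entity (
    EntityID INTEGER PRIMARY KEY,
    EntityName TEXT
);
CREATE TABLE Address (
    AddressID INTEGER PRIMARY KEY,
    EntityID INTEGER REFERENCES Entity(EntityID),    -- fk to the super entity
    LocationID INTEGER REFERENCES Location(LocationID),
    Street TEXT
);
""")

# 'Loxahatchee, FL, USA, 33470, lat, long' is stored exactly once ...
con.execute("INSERT INTO Location VALUES (1,'Loxahatchee','FL','USA','33470',26.68,-80.27)")
con.executemany("INSERT INTO Entity VALUES (?,?)", [(1, 'Acme Corp'), (2, 'J. Smith')])
# ... and both a company and an individual merely reference it.
con.executemany("INSERT INTO Address VALUES (?,?,?,?)",
                [(1, 1, 1, '123 Main St'), (2, 2, 1, '456 Oak Ave')])

rows = con.execute("""
    SELECT e.EntityName, a.Street, l.City, l.PostalCode
    FROM Address a JOIN Entity   e USING (EntityID)
                   JOIN Location l USING (LocationID)
    ORDER BY a.AddressID
""").fetchall()
print(rows)
```

The address details are then pulled in by a join only when a subform actually needs them, which is the "call them only when required" point above.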

...one of the drivers behind adopting this approach was the ease with which 
it deters user data entry errors, and thus the money it saves in marketing 
campaigns ...the returned mail has dropped drastically.

...I don't pretend to have your expertise in Access, but based upon my own 
experience with that data model, I can't say my results match the far slower 
performance you posited.

William Hindman

----- Original Message ----- 
From: "Jurgen Welz" <jwelz at hotmail.com>
To: <accessd at databaseadvisors.com>
Sent: Wednesday, February 21, 2007 10:05 PM
Subject: Re: [AccessD] MDB to SQL Server


Arthur:  I appreciate the suggestions.  However, all our servers are in a
single office remote to every user.  All are connected via some form of
terminal services so the conventional advantages of replication do not
really apply.  Quite frankly, I've got a Dell XPS laptop with a fast core
duo processor, two gigs of fast RAM, and a 7200 RPM drive and the
application is not significantly faster on that laptop.  It's a bit faster
in Access 97, but the fact is, even with local data and a single user, my
boss is going to bemoan the lack of speed because it used to run faster.
Remember the bitwise querying, storing data in arrays to avoid hitting the
drive, and other such things I did?  It's not enough anymore because there are
so many tables from which related data must be retrieved.

In the past couple of years, I have ceased to store quite so much in arrays
because I thought we'd be hitting a RAM limit with the number of users we
have on a single and, more recently, two terminal servers.  As a result, the
more recently added tables and types of data are more normalized.
  I had even dropped some of my bitwise join querying used to store
multiple attributes in a single byte (integer or long) and gone to joined
tables.  That was a performance hit.
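The bitwise technique mentioned above packs several yes/no attributes into one integer field and tests them with AND masks, instead of joining out to an attribute table. A rough Python sketch of the idea follows; the flag names and employee data are invented for illustration:

```python
# Hypothetical certification flags, each occupying one bit of a single
# integer column (the "multiple attributes in a single byte/long" idea).
CSTS       = 0x1
FORKLIFT   = 0x2
SWINGSTAGE = 0x4

# One integer per record replaces a joined attribute table entirely.
employees = {
    "Anne":  CSTS | FORKLIFT,
    "Bob":   CSTS | SWINGSTAGE,
    "Carol": CSTS | FORKLIFT | SWINGSTAGE,
}

def has_all(flags, mask):
    """True when every attribute in mask is set -- the SQL equivalent
    is a criterion like WHERE (Flags AND mask) = mask, with no join."""
    return flags & mask == mask

wanted = CSTS | FORKLIFT
qualified = [name for name, flags in employees.items() if has_all(flags, wanted)]
print(qualified)  # Anne and Carol hold both certifications
```

The trade-off Jürgen describes is exactly this: the masked test is a cheap in-row comparison, while the normalized alternative pays for a join on every retrieval.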

I suspect I'm going to run into even worse performance with SQL Express or
the MSDE (that's what it was called in A2K, wasn't it?).  I could try
upsizing on my personal PC (I only have A2K) and presumably get performance
similar to SQL Express.  At least I could determine whether it will
help or hinder performance.

I would really like to normalize further than I have and as it stands, that
would probably kill an MDB of the size and number of users I currently
serve.  As much as I'd love to have a table of phone numbers, with a table
of number types, and addresses and address types, and relate them to a
single table of entities of both corporate and people types, in all their
permutations and combinations, this has proved to be far slower than having
a separate table of corporate and individual entities and just keeping phone
numbers and addresses with them.

As far as using named queries goes, these are few and far between.  I manipulate the
record and row sources with filters based on numerous variables and usually
set them in code.  I understand that there can be no optimization of a query
execution plan with this approach, but my experience has been that throwing
a variable number of something like 4, 5, 6 or 10 parameters at a query will
slow it down as much.  Practically all of my recordsets and rowsources are
generated in code at runtime in the open event of the form.  Not only the
subforms, but the forms themselves are Just in Time, and the same applies to
a large percentage of the lists and combos.  This actually makes changes in
things like field and table names quite simple because it can be done by a
find and replace in the code window.
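Building a record source in code at runtime, as described above, amounts to assembling a WHERE clause from whichever filter variables happen to be set when the form opens. A minimal Python sketch of that pattern follows; the table and field names are invented, and a real implementation would parameterize the values rather than concatenate them:

```python
def build_record_source(office=None, region=None, trade=None):
    """Assemble a SQL string from whichever optional filters are set,
    mimicking setting a form's RecordSource in its Open event."""
    sql = "SELECT EmployeeID, LastName, Phone FROM tblEmployee"
    clauses = []
    if office is not None:
        clauses.append(f"Office = '{office}'")          # illustration only
    if region is not None:
        clauses.append(f"Region = '{region}'")
    if trade is not None:
        clauses.append(f"TradeCategory = '{trade}'")
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return sql

print(build_record_source(office="Saskatoon"))
print(build_record_source(region="Alberta", trade="Carpenter"))
```

Because the SQL lives in code rather than in saved query objects, a table or field rename really is a find-and-replace in the code window, at the cost of the engine never caching an execution plan.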

Those saved queries that I do have frequently have their .SQL properties set
at runtime.  This allows me to include or exclude unions and to join
disparate tables depending on the user's choices.  The unions allow me to
split current from archival data, yet display both when necessary.  This is
a tactic I could exploit further.  Another example: a user may need an
employee report showing contact information, where he may or may not need to
filter the report on a combination of training certifications; that involves
joins with completely different tables than when the training records are
not a basis for retrieval.  For example, "I want a list of contact info for
the Saskatoon office-based employees" versus "I want a list of Alberta
regional employees who have CSTS, a current forklift certificate, and
Swingstage training, and who also fall into a trade category from yet
another table."  Same report, but involving joins, sometimes equal and other
times unequal, to an indeterminate variety of tables.  Sometimes 1 table,
sometimes 5 or 6, sometimes with parameters on some fields, at other times
with parameters on others.  I couldn't imagine doing this with saved
queries.
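Swapping a query's .SQL text to splice the archive back in on demand can be sketched with SQLite in Python. The table names and rows below are hypothetical stand-ins, not the actual schema:

```python
import sqlite3

# Current rows live in one table, archived rows in a twin table with the
# same columns -- the precondition for a clean UNION ALL.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE tblJob        (JobID INTEGER PRIMARY KEY, JobName TEXT);
CREATE TABLE tblJobArchive (JobID INTEGER PRIMARY KEY, JobName TEXT);
INSERT INTO tblJob        VALUES (3, 'Current job');
INSERT INTO tblJobArchive VALUES (1, 'Old job');
""")

def job_source(include_archive):
    """Return the .SQL text: current rows only, or current plus archive."""
    sql = "SELECT JobID, JobName FROM tblJob"
    if include_archive:
        sql += " UNION ALL SELECT JobID, JobName FROM tblJobArchive"
    return sql + " ORDER BY JobID"

current_only = con.execute(job_source(False)).fetchall()
both = con.execute(job_source(True)).fetchall()
print(current_only, both)
```

Everyday queries scan only the small current table; the union is paid for only when the user asks to see history.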

I take it I would lose much of the flexibility I've given my users by
upsizing?  I could normalize more without hurting performance, but the
performance would likely not be better than what I'm getting?

Ciao
Jürgen Welz
Edmonton, Alberta
jwelz at hotmail.com





>From: artful at rogers.com
>
>My suggestions:
>
>1. If you wish to stay with an MDB back end (undoubtedly your least
>expensive choice), then I suggest replication as the way to go. You can
>even replicate a copy to every individual user. I did that with 70 users in
>4 branch offices and it all worked nicely. Each branch had a server in
>charge of local replication, and HQ's server handled both HQ replication
>and also replication between the branches. This all occurred on a WAN.
>Every user everywhere was guaranteed to see the changes from everyone else
>within 15 minutes. A significant advantage of this approach is the dramatic
>cutdown on net traffic -- everyone has all the tables locally, so the only
>net traffic is the changes, not the data itself.
>
>2. If you want to go beyond MDB, then install a copy of SQL Express. This
>would involve "translating" your app, but the result will ultimately be
>better. One thing you definitely should do is identify all your row and
>record sources that do not use named queries, and fix them to use named
>queries. If you don't do this before upsizing, you'll end up with dozens of
>objects whose names were manufactured and will be utterly meaningless to
>you.
>
>
>Arthur Fuller
>Technical Writer, Data Modeler, SQL Sensei
>Artful Databases Organization
>www.artfulsoftware.com







> -- 
> AccessD mailing list
> AccessD at databaseadvisors.com
> http://databaseadvisors.com/mailman/listinfo/accessd
> Website: http://www.databaseadvisors.com
> 






