[AccessD] Clients and money

Arthur Fuller fuller.artful at gmail.com
Thu Jan 10 09:57:41 CST 2008


I've mentioned this before, but another approach that costs nothing but
your time is to set up Access Replication on the network. The scenario goes
approximately like this (a rough DAO code sketch follows the list):

1. Convert the database on your development machine into the Design Master
(Access's term for the master replica).
2. Set up the Synchronizer either on the server or on any other
always-available machine.
3. Create a replica for each machine that taps into the app.
4. Set up the Synchronizer to sync the replicas at a suitable frequency.
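
You can do all of this through the Tools | Replication menus, but it can
also be scripted in DAO. A minimal sketch, with invented paths and replica
names (adjust to your own layout):

    ' Convert a database into the Design Master, then create one
    ' replica per workstation. All paths here are hypothetical.
    Sub SetUpReplication()
        Dim db As DAO.Database

        ' Open exclusively; converting to replicable requires it.
        Set db = DBEngine.OpenDatabase("C:\Dev\CallCenter.mdb", True)

        ' Appending the Replicable property turns this copy into the
        ' Design Master.
        db.Properties.Append db.CreateProperty("Replicable", dbText, "T")

        ' One replica for each machine that taps into the app.
        db.MakeReplica "\\ServerS\Data\ReplicaA.mdb", "Replica for machine A"
        db.MakeReplica "\\ServerS\Data\ReplicaB.mdb", "Replica for machine B"

        db.Close
    End Sub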

This eliminates about 90% of the network traffic. The entire database
resides locally on each machine that needs it. The synchronizer kicks in at
the specified interval and copies data in both directions. That is, in a
simplified case where we have local machines A and B, and server S:

At the beginning, A, B and S all have the same data.
A adds some rows.
B adds some rows.
Synchronizer kicks in and exchanges data with A, then B. At this point, B
receives A's new data, but A won't receive B's new data until the next
synchronization.
And so on.
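
In code terms, each exchange is just a Synchronize call per replica, run
against the hub copy on S. A sketch of one pass (paths again invented):

    ' One synchronization pass in a star topology: the hub on server S
    ' exchanges with A, then with B. dbRepImpExpChanges asks for a
    ' two-way exchange (import and export changes).
    Sub SynchronizeAll()
        Dim db As DAO.Database
        Set db = DBEngine.OpenDatabase("D:\Data\Hub.mdb")

        db.Synchronize "\\MachineA\Data\ReplicaA.mdb", dbRepImpExpChanges
        db.Synchronize "\\MachineB\Data\ReplicaB.mdb", dbRepImpExpChanges

        db.Close
    End Sub

Because the exchange with A happens before the exchange with B, B ends the
pass holding A's new rows, while A waits one more pass for B's, which is
exactly the behavior described above.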

Whether this can work in your scenario depends on the granularity of
synchronization. If A needs to see B's changes immediately, then this
scenario is inappropriate. But if A can wait a few minutes to see B's data,
then this approach delivers much better performance than the classic FE/BE
scenario, in which a whole whack of data is sent over the wire frequently
(to populate subforms, dropdowns, etc.). The reason it's so much faster is
that only the new and changed data is transmitted. Regardless of how fast
your data-entry people are, how many rows can they enter every 15 minutes
or so (depending on the interval you set)? Further, when viewed this way,
how much data is one new row? Typically, due to foreign keys etc., a row is
a collection of longs, a couple of text fields, a few dates, a currency
value or three... in total, maybe 1 KB per row. So the data exchange that
occurs per synchronization (local to server and back) is a few KB at most.
How long does that take? Answer: a second or two.
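
To put rough numbers on that (a back-of-envelope estimate, not a
measurement): a fast operator entering two rows a minute produces 30 rows
in a 15-minute interval, or roughly 30 KB at 1 KB per row. Even an old
10 Mbit network moves that in a small fraction of a second, versus dragging
whole recordsets across the wire every time a bound form opens.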

This approach is simple to try, and if you don't like the results it's
simple to undo.

Arthur

On 1/10/08, jwcolby <jwcolby at colbyconsulting.com> wrote:
>
> I had an interesting "problem" with the database at a client this week,
> where the database response time went to hell very suddenly.  This is the
> disability insurance call center software where many users spend their day
> taking calls, opening a very complex form to view and edit claim info for
> the person they are talking to.
>
> On Friday of last week, the time to open this very complex form went from
> 4 or 5 seconds to 20 or 30 seconds.  There are old machines where the form
> went from 8-10 seconds to 60 or 80 seconds.
>
> Long ago I had a similar problem in this database and I had developed a
> class (of course) and a table to log how long the form takes to open, the
> time of day, the workstation trying to open the form, how many users are
> in the database etc.  So every time this main form opens it logs all this
> information in a table.  I then developed a set of queries (long ago) to
> show me averages by day / workstation etc.
>
> So... times to open have gone through the roof, it happened on a specific
> day last week, and they have remained there.  Of course the client is
> calling me with "did you do anything..." kinds of questions.  I had not,
> and could tell that by my billing records, where I record what I do on
> what day for whom.
>
> Long story short, after a few days of poking around, the user rebooting
> the server, compacting / repairing the BE, decompiling / compacting /
> repairing the FE, etc., I noticed that the disk volume holding the
> database was down to about 15% remaining space (on a 60 gig drive).  I
> told the client to look at this and he quickly went in and deleted all
> kinds of old trash and got us up to about 50% remaining.  This did make
> some small impact, but the database was still abysmally slow.  Last night
> I went in, rebooted the server, defragged the C: drive and the D: drive
> (where the database resides) and voila, this morning the times are back
> to normal.
>
> It turns out that the real problem was twofold.  First, the drive was
> horribly fragmented; additionally, when the client did a compact/repair,
> something went wrong and Access created two of those "DB1.MDB" things
> that it creates when a compact fails.  The database is about 800 megs
> compacted, and the drive was so full that suddenly, with two additional
> 800 meg files in there, there was just "no room left".  When I say "no
> room left", there was actually about 6 gigs left even after the DB1
> copies were created, but the remaining space was tiny little fragments
> scattered all over the disk, which meant that the database itself was
> already horribly fragmented and couldn't find any room to put new pieces
> as needed.
>
> So, just an FYI, DEFRAG THE DISK!!!  And do not allow the disk to get too
> low on space.
>
> Now to the money thing.  I use a 4 gig RAM drive on one of my servers here
> at my office to hold a set of files for the address validation software
> that one of my servers runs.  It speeds up that process by 50%, allowing
> me to move from about 2.5 million addresses per hour processed up to
> about 4.5 million.  A startling and impressive increase in speed.  So I
> advised this same client (a year ago) to look at doing this for this call
> center database.  The main database file is about 800 megs.  In looking
> over the "time to open" records this last week I noticed that various
> employees are opening claim records using this complex form every 20 to
> 60 seconds or so (950 records yesterday).  That is a LOT of data being
> pulled (and I use JIT subforms to hold it down).  So I again advised the
> client to try a couple of these 4 gig boards in RAID 0 to put just the BE
> files on, in order to speed up the database.  I am convinced, with this
> number of transactions per hour, with the size of the database, and with
> the way that a RAM disk works, that a RAM disk could boost this specific
> application's usability.
>
> The board costs about $150 and another $200 for 4 gigs of memory to put
> on it.  $400 shipped to their door for one, $800 for two.  The client
> just told me that "due to costs and ... " they will "consider this in the
> future".  We are talking about an $800 expense (plus implementation) for
> a company of 60 employees where 30 or so users are in the database all
> day every day, and they are deferring it to later.
>
> Clients really are cost conscious, and the smaller the client, the more
> that is so IMHO.
>
> John W. Colby
> Colby Consulting
> www.ColbyConsulting.com
>


