[AccessD] Alternatives to domain aggregate functions

Mark Simms marksimms at verizon.net
Sun Aug 31 15:56:48 CDT 2014


Re: "Obviously, the domain aggregate approach is not working for this
> situation, so I'm looking for more efficient approaches to fulfill the
> aggregating/incrementing requirements."

No need to change databases... just replace the domain aggregate functions
with standard DAO queries.
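
For instance, something along these lines (a minimal sketch; the table and
field names tblPayments, Amount, StampSeq, and BatchID are invented for
illustration) replaces three separate DCount/DSum/DMax calls with a single
grouped DAO query:

    Dim db As DAO.Database
    Dim rs As DAO.Recordset

    Set db = CurrentDb
    ' One round trip returns all three aggregates at once
    Set rs = db.OpenRecordset( _
        "SELECT Count(*) AS PmtCount, Sum(Amount) AS PmtTotal, " & _
        "Max(StampSeq) AS LastStamp " & _
        "FROM tblPayments WHERE BatchID = " & Me!BatchID, dbOpenSnapshot)
    If Not rs.EOF Then
        Me!txtCount = rs!PmtCount
        Me!txtTotal = Nz(rs!PmtTotal, 0)
        Me!txtNextStamp = Nz(rs!LastStamp, 0) + 1
    End If
    rs.Close
    Set rs = Nothing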

> -----Original Message-----
> From: accessd-bounces at databaseadvisors.com
> [mailto:accessd-bounces at databaseadvisors.com] On Behalf Of Arthur Fuller
> Sent: Friday, August 29, 2014 11:00 AM
> To: Access Developers discussion and problem solving
> Subject: Re: [AccessD] Alternatives to domain aggregate functions
>
> Jim.
>
> Excellent summary, and I would add a couple of points:
>
> If at all possible, lose the Access back end (BE) and replace it with the
> free edition of MS SQL Server, MySQL, or MariaDB. In all cases the increase
> in performance is significant, and many times dramatic. Basically, the
> larger the number of users, the more dramatic the increase in performance.
>
> Having made that move, the next logical step is stored procedures rather
> than Access queries. In my many years as a developer, I have seen very few
> occasions where dynamic construction of a SQL query was the only solution.
> About 99% of the time it is unnecessary; a little thought and the use of
> optional parameters to an SP is the better way.
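
As an illustration of Arthur's point (a sketch only: the procedure name,
parameters, and connection string are all invented, and this assumes a
reference to the Microsoft ActiveX Data Objects library): a procedure
declared with defaults, such as
CREATE PROC usp_PaymentSummary @BatchID INT = NULL, @FromDate DATE = NULL,
can be called from Access passing only the parameters actually needed:

    Dim cn As ADODB.Connection
    Dim cmd As ADODB.Command
    Dim rs As ADODB.Recordset

    Set cn = New ADODB.Connection
    cn.Open "Provider=SQLOLEDB;Data Source=MyServer;" & _
            "Initial Catalog=MyDb;Integrated Security=SSPI;"

    Set cmd = New ADODB.Command
    With cmd
        .ActiveConnection = cn
        .CommandType = adCmdStoredProc
        .CommandText = "usp_PaymentSummary"
        ' Only @BatchID is supplied; the SP's defaults cover the rest
        .Parameters.Append .CreateParameter("@BatchID", adInteger, _
            adParamInput, , 123)
    End With
    Set rs = cmd.Execute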
>
> Arthur
>
>
> On Fri, Aug 29, 2014 at 9:02 AM, Jim Dettman <jimdettman at verizon.net>
> wrote:
>
> > Don,
> >
> >   Just to add a bit to my post from last night (I couldn't see the
> > original post): you've got to be careful where you use domain functions.
> >
> >   What they do is just wrap up a SQL statement so you can execute SQL in
> > places where you ordinarily cannot. For example, DLookup() is "SELECT <x>
> > FROM <y> WHERE <z>". So anywhere you can use SQL directly, you should;
> > a domain function is not appropriate there.
> >
> >  For example, totaling a field on a sub form: just place a control in the
> > footer and set its control source to =Sum(<expression>).
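
(If the main form needs that total as well, a main-form control can
reference the footer control, e.g. =[subPayments].[Form]![txtTotal]; the
control names here are hypothetical.)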
> >
> >  One place where domain functions are really a no-no is inside a query.
> > They are totally un-optimizable by the query processor, and you are
> > guaranteeing yourself poor performance for anything over a couple of
> > hundred records.
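
To make that concrete (hypothetical names): a query column like
DLookup("CustName", "tblCust", "CustID=" & [CustID]) fires a separate
lookup for every row, while an INNER JOIN to tblCust on CustID returns the
same value and lets the query engine optimize it.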
> >
> >  Alternatives are executing SQL in code, performing operations on a
> > recordset directly, etc.
> >
> >  Domain functions are handy to use, but in general you'll find that you
> > don't use them often, because there are other ways to do things.
> >
> > Jim.
> >
> >
> >
> > -----Original Message-----
> > From: accessd-bounces at databaseadvisors.com
> > [mailto:accessd-bounces at databaseadvisors.com] On Behalf Of McGillivray,
> > Don
> > Sent: Thursday, August 28, 2014 06:28 PM
> > To: accessD at databaseadvisors.com
> > Subject: [AccessD] Alternatives to domain aggregate functions
> >
> > Hi Listers,
> >
> > I have a form in an Access application (users have the Access 2010
> > runtime only, on Win 7; split FE/BE, with the BE on a network share)
> > where I need to display and continuously evaluate the cumulative count
> > and value of items, and assign an incrementing serial number to child
> > records as they are entered. At this point, the form uses domain
> > aggregate functions to determine the count and sum of the transactions,
> > and the max plus 1 for the next serial number. At modest volumes (<=200
> > records) performance is acceptable. Beyond that, performance becomes
> > increasingly degraded, to the point of being almost unusable by the time
> > 400 records have been entered.
> >
> > Obviously, the domain aggregate approach is not working for this
> > situation, so I'm looking for more efficient approaches to fulfill the
> > aggregating/incrementing requirements. Can anybody point me in a better
> > direction?
> >
> > Thanks!
> >
> > Don McGillivray
> >
> > ---------------------------------------------------------------------
> > For those interested in further detail, please read on . . .
> >
> > The application records customer payment data. Ultimately, the payment
> > data collected by the system will be imported into an Oracle database.
> > The payments, each consisting of a header record and one or more GL
> > distribution records, are entered in batches that correspond to daily
> > bank deposits. The payment header record contains the payment date,
> > check number, and amount, and the GL distribution record contains line
> > item detail including an application number and the GL account to which
> > the revenue is booked. Data entry begins with the creation of a batch
> > (deposit) header record that includes the date, number of payments,
> > number of applications paid for, and total deposit amount. (The item
> > count and amount are used as "target" values during data entry to ensure
> > that everything balances in the end.) Once the batch header record is
> > complete, the payment records are entered one at a time, along with
> > their GL details. So I have a simple hierarchy of data entities: Deposit
> > Header - Payment Header - Payment Details, and the entry form is
> > designed (main form, sub-form, sub-form) to accommodate this structure.
> > That's the quick and dirty description of the basic setup.
> >
> > A further wrinkle is that each payment record (child of the top level)
> > is associated to one or more stamp numbers that are affixed to the check
> > upon receipt, and those numbers must be included in the distribution
> > records (grandchildren to the top level) for each payment. The stamp
> > numbers are composed of a date string (YYMMDD) followed by an
> > incrementing serial number from 1 to n each day, according to the number
> > of applications being paid for. So, based on the application count of
> > the batch header, a list of stamp numbers can be generated for each
> > batch, and, assuming the checks are entered in stamp number order, the
> > next available number may be assigned automatically to each payment
> > until all are used up. The entry form allows the user to select from a
> > combo box any valid stamp number for any payment if necessary, but the
> > typical scenario is to allow the system to assign the numbers in
> > sequence to the distribution records automatically.
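
Generating a stamp number of that shape is a one-liner in VBA; for instance
(assuming, purely for illustration, that the serial part is not zero-padded):

    Dim depositDate As Date, n As Long, stampNum As String
    depositDate = #8/28/2014#
    n = 17
    ' YYMMDD date string followed by the day's serial number
    stampNum = Format(depositDate, "yymmdd") & n   ' -> "14082817"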
> >
> > To help ensure accurate data entry, the user is required to have entered
> > values that agree with the target values. For example, on a given
> > payment, before a new payment may be entered, the value of the
> > distribution records must equal the check amount as entered in the
> > payment header. Similarly, the batch itself may not be posted until the
> > sum of its payment records equals the previously entered batch totals
> > (item count and amount). These rules are enforced by the form by
> > comparing the batch and payment header entries to the sums of the line
> > items as data is entered.
> >
> > The application has been in production since May, and has functioned
> > well as designed. However, this week an unusually large volume of
> > payments (>600) arrived on a single day, and as the batch entry
> > progressed beyond about 300 records, the form's performance became
> > increasingly degraded, until it was essentially unusable. The
> > degradation is probably due to the methods I've employed to determine
> > the next available stamp number, compounded by the tabulation of the
> > item count and cumulative total for the batch as data entry progresses.
> > In both cases, I'm using domain aggregate functions to return the count
> > and sum of payments entered, along with the max plus 1 for the next
> > available stamp number. As the number of records in the batch grows,
> > these calculations take longer and longer. The effect was observed
> > during development, leading me to modify the table structure so that new
> > records land in temporary tables before being posted by the user to the
> > permanent ones. This improved things quite a bit by avoiding the impact
> > of aggregating over a table whose content continues to grow over time,
> > but apparently that move is not enough when the batch volume rises
> > beyond a certain point.
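
One pattern worth trying for exactly this situation (a sketch only; every
table, field, and control name below is invented): compute the aggregates
once when the batch is opened, cache them in form-level variables, and
increment them in code as each record is saved, so nothing is re-aggregated
on every insert:

    ' In the payment sub-form's module
    Private mlngCount As Long
    Private mcurTotal As Currency
    Private mlngNextStamp As Long

    Private Sub Form_Load()
        Dim rs As DAO.Recordset
        Set rs = CurrentDb.OpenRecordset( _
            "SELECT Count(*) AS C, Sum(Amount) AS S, Max(StampSeq) AS M " & _
            "FROM tblTempPayments WHERE BatchID = " & Me.Parent!BatchID, _
            dbOpenSnapshot)
        mlngCount = rs!C
        mcurTotal = Nz(rs!S, 0)
        mlngNextStamp = Nz(rs!M, 0) + 1
        rs.Close
    End Sub

    Private Sub Form_AfterInsert()
        ' Maintain the running figures in memory instead of re-querying
        mlngCount = mlngCount + 1
        mcurTotal = mcurTotal + Nz(Me!Amount, 0)
        mlngNextStamp = mlngNextStamp + 1
    End Sub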
>
>
>
> --
> Arthur
> --
> AccessD mailing list
> AccessD at databaseadvisors.com
> http://databaseadvisors.com/mailman/listinfo/accessd
> Website: http://www.databaseadvisors.com



