[AccessD] Handling concurrent updates

Gustav Brock gustav at cactus.dk
Sat Feb 6 05:09:43 CST 2016


Hi Bill

Please bear in mind that the functions here are not a magic bullet to solve all concurrency issues.

The main purpose is to take advantage of the fact that if you try to edit a record, and that record has been updated by another process in the interval between when you read the record and when you try to update it, your edit or update will fail. However, at the same time the record, as you read it, has been refreshed, so you will now be able to carry out your edit and update.

Usually, this trial and error takes only one round before it succeeds, as you can see if you run the ready-made test function in the download. Even with three processes running on the same record, I have never seen a count of more than two.

So the task is not to count or collect failed updates; it is to repeat the update until it succeeds, without raising any errors.

Of course, if the processes update the same fields, you must plan carefully as usual; but in the case of updating different fields (which the functions were originally developed for), the implementation is trivial.
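For readers who have not pulled the download yet, a minimal sketch of the retry-until-success idea could look like the following. This is not the code from the download; the table, field, and key names (SomeTable, Id, Status) are invented for illustration, and it assumes a DAO dynaset with optimistic locking:

Public Sub UpdateWithRetry(ByVal lngId As Long, ByVal strStatus As String)

    Dim dbs         As DAO.Database
    Dim rst         As DAO.Recordset
    Dim booSuccess  As Boolean

    Set dbs = CurrentDb
    Set rst = dbs.OpenRecordset( _
        "Select * From SomeTable Where Id = " & lngId, dbOpenDynaset)

    Do
        On Error Resume Next
        rst.Edit
        rst!Status.Value = strStatus
        rst.Update
        booSuccess = (Err.Number = 0)
        If Not booSuccess Then
            ' Typically error 3197 (write conflict): another process updated
            ' the record. The local copy has been refreshed, so cancel the
            ' pending edit and simply try again.
            ' (A real implementation would check Err.Number, where 3197 is
            ' the write-conflict error, and cap the number of retries.)
            If rst.EditMode <> dbEditNone Then
                rst.CancelUpdate
            End If
            Err.Clear
        End If
        On Error GoTo 0
    Loop Until booSuccess

    rst.Close
    Set dbs = Nothing

End Sub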

/gustav

From: Bill Benson <bensonforums at gmail.com>
Sent: 6 February 2016 11:16
To: Access Developers discussion and problem solving <accessd at databaseadvisors.com>
Subject: Re: [AccessD] Handling concurrent updates

I did not mean to imply that in all cases you would pare down to just *one*
record before bailing; that was a single use case.

I meant all records that cannot be forced into Edit mode within a
reasonable period.

I think I would probably continue using pessimistic locking in my
professional work if it seems the same records are at risk of requiring
edits.
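As a point of reference, a minimal sketch, not code from this thread, of how pessimistic locking is requested on a DAO recordset: setting LockEdits to True locks the record at Edit instead of checking at Update. The table and field names (SomeTable, Status) are invented:

    Dim dbs As DAO.Database
    Dim rst As DAO.Recordset

    Set dbs = CurrentDb
    Set rst = dbs.OpenRecordset("Select * From SomeTable", dbOpenDynaset)
    rst.LockEdits = True    ' Pessimistic: the record/page is locked at Edit.

    rst.Edit                ' Fails here (for example error 3260) if another
                            ' user already holds the lock.
    rst!Status.Value = "Reviewed"
    rst.Update              ' Releases the lock.

    rst.Close
    Set dbs = Nothing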

On Sat, Feb 6, 2016 at 5:09 AM, Bill Benson <bensonforums at gmail.com> wrote:

> Another *possible* improvement on this scheme is to establish a collection
> of the records which are not editable (assuming this is possible), proceed
> to edit those which *can* be edited, and then come back through the
> collection and try to edit each one again: if successful, remove its key
> from the collection; if not, leave it in the collection (a sketch of this
> scheme follows the quoted message below). I think this might be more
> efficient in situations with many concurrent updates in a single system,
> because in the same amount of time that a single Edit is being tried and
> retried and retried, many other edits can be applied.
>
> Of course one could argue that "leaving a record alone for a bit" is no
> guarantee that it will free up later, but if you start with a relatively
> large collection of records that cannot be edited and successively pare
> that down as you perform retry loops, eventually you will pick off the
> problem records and be left with a single record that cannot be updated ...
> at that point you could test whether the collection size stays above zero,
> unchanged, for more than N tries, and if so, perhaps it is because
> the records have been deleted.
>
> I am not going to test this myself, but I think it could be more efficient
> than what you are doing (feel free to rebut and debunk this supposition).
> Hard to say because I don't know how often there are real concurrency
> issues.
>
> Regards,
>
> Bill
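A minimal sketch of the collection-based scheme quoted above, as I read it. It is untested and not code from the thread; the table, field, and key names (SomeTable, Id, Status) and the stall limit are invented for illustration:

Public Sub UpdateAllWithDeferredRetry(ByVal strStatus As String)

    ' Give up when a full pass over the remaining keys makes no progress
    ' this many times in a row (the records may have been deleted or are
    ' locked for good).
    Const cMaxStalledPasses As Long = 10

    Dim dbs              As DAO.Database
    Dim rst              As DAO.Recordset
    Dim colRetry         As Collection
    Dim lngIndex         As Long
    Dim lngStalledPasses As Long
    Dim lngLastCount     As Long

    Set dbs = CurrentDb
    Set rst = dbs.OpenRecordset("Select * From SomeTable", dbOpenDynaset)
    Set colRetry = New Collection

    ' First pass: update what can be updated, park the keys of the rest.
    Do While Not rst.EOF
        If Not TryUpdate(rst, strStatus) Then
            colRetry.Add rst!Id.Value
        End If
        rst.MoveNext
    Loop

    ' Retry passes over the parked keys.
    lngLastCount = colRetry.Count
    Do While colRetry.Count > 0 And lngStalledPasses < cMaxStalledPasses
        For lngIndex = colRetry.Count To 1 Step -1
            rst.FindFirst "Id = " & colRetry(lngIndex)
            If rst.NoMatch Then
                ' The record has been deleted by another process; drop it.
                colRetry.Remove lngIndex
            ElseIf TryUpdate(rst, strStatus) Then
                colRetry.Remove lngIndex
            End If
        Next
        If colRetry.Count < lngLastCount Then
            lngStalledPasses = 0
            lngLastCount = colRetry.Count
        Else
            lngStalledPasses = lngStalledPasses + 1
        End If
    Loop

    rst.Close
    Set dbs = Nothing

End Sub

' One optimistic attempt: True on success, False on a write conflict.
Private Function TryUpdate(ByVal rst As DAO.Recordset, ByVal strStatus As String) As Boolean

    On Error Resume Next
    rst.Edit
    rst!Status.Value = strStatus
    rst.Update
    TryUpdate = (Err.Number = 0)
    If Not TryUpdate Then
        If rst.EditMode <> dbEditNone Then
            rst.CancelUpdate
        End If
        Err.Clear
    End If
    On Error GoTo 0

End Function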


