[AccessD] Handling concurrent updates

Bill Benson bensonforums at gmail.com
Sat Feb 6 04:15:43 CST 2016


I did not mean to imply that in all cases you would pare down to just *one*
record before bailing; that was a single use case.

I meant all records that the code is not able to force into Edit mode in a
reasonable period.

I think I would probably continue using pessimistic locking in my
professional work if it seems the same records are likely to need editing
concurrently.
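
Roughly what I have in mind by "forcing into Edit mode in a reasonable
period" is something like the untested sketch below, assuming a DAO
dynaset opened with pessimistic locking (LockEdits = True); the retry
count is just a placeholder:

Public Function TryEdit(rs As DAO.Recordset, ByVal maxTries As Long) As Boolean
    ' Untested sketch: attempt rs.Edit under pessimistic locking, retrying
    ' a few times before giving up. maxTries is a placeholder value.
    Dim attempt As Long
    On Error Resume Next
    For attempt = 1 To maxTries
        Err.Clear
        rs.Edit                      ' errors 3260/3188 if another user holds the lock
        If Err.Number = 0 Then
            On Error GoTo 0
            TryEdit = True           ' we hold the lock; caller changes fields, then rs.Update
            Exit Function
        End If
        DoEvents                     ' give the other session a moment to release it
    Next attempt
    On Error GoTo 0
    TryEdit = False                  ' could not lock it in a "reasonable period"
End Function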

On Sat, Feb 6, 2016 at 5:09 AM, Bill Benson <bensonforums at gmail.com> wrote:

> Another *possible* improvement on this scheme is to establish a collection
> of the records which are not currently editable (assuming this is
> possible), and to proceed to edit those which *can* be edited before
> coming back through the collection and trying the locked records again:
> if an edit succeeds, remove that record's key from the collection; if not,
> leave it there. I think this might be more efficient when there is a lot
> of concurrent activity in a single system, because in the time a single
> Edit would otherwise be tried and retried and retried, many other edits
> can be applied.
>
> Of course one could argue that "leaving a record alone for a bit" is no
> guarantee that it will free up later, but if you start with a relatively
> large collection of records that cannot be edited and successively pare it
> down as you perform retry loops, eventually you will pick off the problem
> records and be left with a single record that cannot be updated. At that
> point you could test whether the collection size stays greater than zero
> and unchanged for more than N tries, and if so, perhaps it is because the
> remaining records have been deleted.
>
> I am not going to test this myself, but I think it could be more efficient
> than what you are doing (feel free to rebut and debunk this supposition).
> It is hard to say, because I don't know how often real concurrency issues
> actually occur.
>
> Regards,
>
> Bill
>

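For what it's worth, here is a rough, untested sketch of the collection
idea from the quoted message, reusing the TryEdit helper above; the ID key
field, the ApplyMyChange routine, and the ten-pass cutoff are placeholders,
not anything from a real application:

Public Sub EditAllWithRetry(rs As DAO.Recordset)
    ' Untested sketch of the scheme above: edit what you can on a first
    ' pass, collect the keys of locked records, then loop back through the
    ' collection, paring it down as records free up.
    Dim locked As New Collection     ' keys of records we could not lock
    Dim stuckPasses As Long
    Dim prevCount As Long
    Dim i As Long

    Do While Not rs.EOF              ' first pass over the whole set
        If TryEdit(rs, 1) Then
            ApplyMyChange rs         ' placeholder for the real field changes
            rs.Update
        Else
            locked.Add rs!ID.Value, CStr(rs!ID.Value)
        End If
        rs.MoveNext
    Loop

    prevCount = locked.Count
    Do While locked.Count > 0        ' retry passes over the leftovers
        For i = locked.Count To 1 Step -1
            rs.FindFirst "ID = " & locked(i)
            If rs.NoMatch Then
                locked.Remove i      ' no longer found - perhaps deleted meanwhile
            ElseIf TryEdit(rs, 1) Then
                ApplyMyChange rs
                rs.Update
                locked.Remove i
            End If
        Next i
        If locked.Count = prevCount Then
            stuckPasses = stuckPasses + 1
            If stuckPasses > 10 Then Exit Do   ' size unchanged for N passes: give up
        Else
            stuckPasses = 0
            prevCount = locked.Count
        End If
        DoEvents
    Loop
End Sub

FindFirst here assumes a dynaset-type recordset and a numeric key; records
deleted by another user may not actually drop out of the search until an
rs.Requery, so treat the deletion check as approximate.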
