James Button
jamesbutton at blueyonder.co.uk
Thu Feb 19 14:50:35 CST 2015
It seems to me, from the description posted, that the primary concern should be the robustness of the transfer process. Whatever process is implemented, it should cope with loss of connection - and in my time as a systems tester I would have gone to substantial effort to generate failures in various combinations.

Hard-wiring the connections may well, and should, deal with excessive dropout. Dropout can be caused by blockage of the (near) line-of-sight between transmitters and receivers (people walking by), or by something as simple as a fan motor producing interference, a powered door, road traffic, or a fridge pump putting 'noise' onto the power supplies to devices. And in a manufacturing environment there are lots of sources of interference, both to the electrical supplies and to the transmission signals.

Certainly a starting point would be to hard-wire the connections. Then augment that with a logging facility recording the start and end times of each upload process, to see whether uploads are overlapping with the processing of each other's data.

With 20+ systems feeding the back end, is there a robust locking facility to ensure that only one set of data is 'imported' at any time, and that each set of data is (pre)processed before the import or process facility starts on the next set? Another consideration is the locking structure within the database that is applicable to the app's data.

In the past, when dealing with probably similar needs, I designed input to be flat files presented at the back end by a messaging facility - queue handling, or auto-processing of email attachments - where the input-handling facility created files with input timestamp details in the filenames.
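As a rough illustration of that flat-file batch scheme, here is a minimal sketch in Python (used purely for illustration - the real apps are Access/VBA). The filename layout, the `.lock` convention, and all function names here are my own assumptions, not anything from the setup described:

```python
import os
import re
from datetime import datetime, timezone

# Hypothetical filename layout: <device_id>_<batch_seq>_<timestamp>.csv
# e.g. press07_000123_20150219T145035Z.csv
BATCH_RE = re.compile(r"^(?P<device>\w+)_(?P<seq>\d{6})_(?P<ts>\d{8}T\d{6}Z)\.csv$")

def batch_filename(device_id: str, seq: int) -> str:
    """Build a unique, traceable batch filename (sender-side timestamp,
    which may be wrong - the sequence number is what fixes the order)."""
    ts = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    return f"{device_id}_{seq:06d}_{ts}.csv"

def batches_in_source_order(filenames):
    """Sort pending batch files by (device, sequence) so each source's
    batches are imported - or re-imported after a database restore - in
    the order the source generated them, regardless of clock skew."""
    parsed = []
    for name in filenames:
        m = BATCH_RE.match(name)
        if m:  # ignore anything that is not a well-formed batch file
            parsed.append((m.group("device"), int(m.group("seq")), name))
    parsed.sort()
    return [name for _, _, name in parsed]

def import_serially(filenames, import_one, lockfile="import.lock"):
    """Crude lock so only one importer touches the back end at a time.
    O_CREAT | O_EXCL makes the lock-file creation atomic on a local
    filesystem (this is NOT reliable on some network shares)."""
    fd = os.open(lockfile, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    try:
        for name in batches_in_source_order(filenames):
            import_one(name)  # finish one batch fully before the next
    finally:
        os.close(fd)
        os.remove(lockfile)
```

The point of the sketch is that each batch is self-identifying from its name alone: a connection dropped mid-transfer just means the batch file is re-sent and re-imported, and the per-device sequence number, not the (possibly wrong) device clock, determines the processing order.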
The alternative to that is to have the sending systems include their own (possibly wrong) timestamp in the filename, along with their own unique device/operator id and a batch-sequence number. That means that data batches can be uniquely identified and traced back, and that the batches will be processed in the sequence generated by each source - and can be re-processed in the same order against a restore of a backup instance of the database.

JimB

-----Original Message-----
From: accessd-bounces at databaseadvisors.com [mailto:accessd-bounces at databaseadvisors.com] On Behalf Of Janet Erbach
Sent: Thursday, February 19, 2015 8:02 PM
To: Database Advisors
Subject: [AccessD] Backend database corruption

Hello!

It's been years since I've addressed this group, so please be patient with me while I get back into the swing of this. I've been an Access developer for the last 15 years or so. Until recently I created straightforward apps used on a small group of hardwired, networked computers that had 5 or 6 users in the app at the same time.

Last year I took a job with a large manufacturing plant, and just deployed a very complex app that I co-wrote with one of the Access-fluent production supervisors. It is supposed to run non-stop on 20+ machines, all with WiFi connections. It writes machine production data to a set of front-end tables; every 15 minutes the app checks to see if there is network connectivity - if there is, the front-end table data is posted to the back-end tables on the network, the front-end tables are emptied, and the loop begins again.

The app worked pretty well when it was running on one or two machines. Now that it's up on 20 machines, the back end is corrupting multiple times during the day - which, of course, brings the whole show to a halt. The error log seems to indicate that loss of a network connection during the back-end write operation precedes the corruption.

I have two questions.
Will hard-wiring the network connection to these machines go a long way towards stopping the corruption? Is there anything else that could be contributing to this that I need to be aware of?

Thank you for your help.

Janet Erbach
--
AccessD mailing list
AccessD at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/accessd
Website: http://www.databaseadvisors.com