S D
accessd667 at yahoo.com
Thu Oct 28 15:02:35 CDT 2004
Jim, Gustav, thanks for the replies. While I was fiddling around, looking at the code and the network stats and doing some prototyping, I came up with the following:

Load the file (which I first copied to the desktop) into an array, then open 3 recordsets. Loop through the array up to UBound(array), do an AddNew + Update for each 'tag' found, exit the loop, close the recordsets, and voila! (A sketch of this loop follows at the end of this message.)

Remember the 25 hours? How about 3 seconds!!! Then I skipped the copying part and left the file on the network: still only 14 seconds. Maybe it can be done faster, but I think going from 25 hours to 3 seconds (a factor of about 30,000) will do just fine :-)

The problem was that the 'programmer' who designed and built the app did the following: while looping through the file, he opened and closed a recordset every time he found a 'tag'. That could be about 5,500 times, over a slow Novell network, with an 800 MB front-end/back-end app... Well, it works.

Now all I have to do is create the transaction thingy Gustav suggested. Just read about it in the bible... aka the Access 2000 Developer's Handbook by Litwin, Gilbert and Getz... get it ;-)

Sjee, it's getting a bit late.

Sander

Hi Sander:

I was working on a contract this summer to roll out an updated inventory database for a number of stores. Each computer had its own full copy of the database and could run fully independently (in the event of a catastrophic failure of the LAN, the POS system had to keep running). After the database was rebuilt on the server, it was transferred to each station across the LAN. That late-night process could take 20 to 30 minutes per station, and since some sites had ten terminals, it would have been very time-consuming. We sped up the process by zipping the master DB and then broadcasting the result: it then took only three minutes per station, since a database, and in your case a flat file, compresses down very nicely.

HTH
Jim

Gustav Brock <gustav at cactus.dk> wrote:

Hi Sander

First, reading a 0.25 MB file off the network should be fast; there is no need to copy the file first. But that is easy to test ... just try with a local copy.

Second, you can read the file directly with a query, as you did for exporting (sketched below). Then use this query as the source for three append queries, one for each table. Further, you can wrap this step in a BeginTrans/CommitTrans to give you the option of rolling back (also sketched below).

/gustav

> Hi group,
> I've got an A2K app (that needs to migrate to XP) that needs to import a flat file.
> A 267 KB file takes about 25-28 HOURS(!!) to process. Yes, it's a slow network.
> I need to improve this.
> This is how it 'works' now:
> 1 - read the file on the network to determine the number of lines
> 2 - start a For x = 0 To NumberOfLines loop
> 3 - check the first few characters of the line (the 'tag')
> 4 - create a new record in a table (which table depends on step 3; there are 5 different tables to store the records)
> 5 - execute an update
> 6 - go to the next record
> Here's how I want to do it:
> 1 - copy the file from the network to the desktop
> 2 - import the complete file into a staging table
> 3 - use an UpdateBatch mechanism to update the BE
> Question:
> How can I implement this UpdateBatch? I mean, I have to store the records for each table,
> and after the complete file is processed I want to update all tables.
> Question:
> I need to implement a transaction. If the processing fails somewhere, I need to roll back the changes (if I use an UpdateBatch this is probably not necessary, but it's good programming, I think).
> Any tips/advice is greatly appreciated!!
> Regards,
> Sander
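For reference, Sander's approach boils down to something like the sketch below, written with DAO (in A2K this needs a reference to the Microsoft DAO 3.6 Object Library). The table names, the two-character tags and the RawLine field are invented for illustration; the thread doesn't give the real ones.

    Sub ImportFlatFile(strPath As String)
        Dim db As DAO.Database
        Dim rstA As DAO.Recordset, rstB As DAO.Recordset, rstC As DAO.Recordset
        Dim varLines As Variant
        Dim lngX As Long
        Dim intFile As Integer
        Dim strBuffer As String

        ' Read the whole file in one go and split it into an array of lines.
        intFile = FreeFile
        Open strPath For Input As #intFile
        strBuffer = Input(LOF(intFile), #intFile)
        Close #intFile
        varLines = Split(strBuffer, vbCrLf)

        Set db = CurrentDb
        ' Open the recordsets ONCE, before the loop -- not once per line,
        ' which is what cost the original code its 25 hours.
        Set rstA = db.OpenRecordset("tblOrders", dbOpenDynaset)
        Set rstB = db.OpenRecordset("tblLines", dbOpenDynaset)
        Set rstC = db.OpenRecordset("tblTotals", dbOpenDynaset)

        For lngX = 0 To UBound(varLines)
            Select Case Left$(varLines(lngX), 2)   ' the record 'tag'
                Case "01"
                    rstA.AddNew
                    rstA!RawLine = varLines(lngX)
                    rstA.Update
                Case "02"
                    rstB.AddNew
                    rstB!RawLine = varLines(lngX)
                    rstB.Update
                Case "03"
                    rstC.AddNew
                    rstC!RawLine = varLines(lngX)
                    rstC.Update
            End Select                             ' unknown tags are skipped
        Next lngX

        rstA.Close: rstB.Close: rstC.Close
        Set db = Nothing
    End Sub

The whole improvement lives in where the recordsets are opened: once per import instead of once per matching line (about 5,500 opens over a slow Novell LAN in the original code).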
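Gustav's "read the file directly with a query" suggestion can be done through the Jet Text ISAM, which lets a query address a text file as if it were a table. The path, file name and tag value below are placeholders, and a fixed-width file would also need a Schema.ini describing its columns.

    Sub AppendDirectFromTextFile()
        Dim db As DAO.Database
        Dim strSQL As String

        Set db = CurrentDb
        ' With HDR=No, Jet names the columns F1, F2, ... automatically.
        strSQL = "INSERT INTO tblOrders (RawLine) " & _
                 "SELECT F1 " & _
                 "FROM [Text;FMT=Delimited;HDR=No;DATABASE=C:\ImportFiles].[flatfile#txt] " & _
                 "WHERE Left(F1, 2) = '01';"
        db.Execute strSQL, dbFailOnError
    End Sub

One such statement per target table gives the three append queries Gustav describes.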
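The BeginTrans/CommitTrans wrapper Gustav mentions looks roughly like this in DAO; the three saved append query names are placeholders. Because the queries run with dbFailOnError, any failure raises a trappable error and the handler rolls back everything appended so far.

    Sub RunImportInTransaction()
        Dim wrk As DAO.Workspace
        Dim db As DAO.Database
        Dim blnInTrans As Boolean

        Set wrk = DBEngine.Workspaces(0)
        Set db = wrk.Databases(0)

        On Error GoTo ErrHandler
        wrk.BeginTrans
        blnInTrans = True

        db.Execute "qappTableA", dbFailOnError
        db.Execute "qappTableB", dbFailOnError
        db.Execute "qappTableC", dbFailOnError

        wrk.CommitTrans
        blnInTrans = False
        Exit Sub

    ErrHandler:
        If blnInTrans Then wrk.Rollback   ' undo all appends on any failure
        MsgBox "Import failed and was rolled back: " & Err.Description
    End Sub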
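Finally, the UpdateBatch mechanism asked about in the original post is ADO's batch-optimistic locking: changes accumulate in a client-side recordset until a single UpdateBatch call sends them to the back end. A minimal sketch, reusing the same hypothetical tblOrders:

    Sub BatchAppendWithADO()
        Dim cnn As ADODB.Connection
        Dim rst As ADODB.Recordset

        Set cnn = CurrentProject.Connection
        Set rst = New ADODB.Recordset
        rst.CursorLocation = adUseClient
        rst.Open "tblOrders", cnn, adOpenStatic, adLockBatchOptimistic, adCmdTable

        ' AddNew/Update here only buffer the row locally ...
        rst.AddNew
        rst!RawLine = "01 example line"
        rst.Update

        ' ... and UpdateBatch sends all buffered rows to the back end in one go.
        rst.UpdateBatch adAffectAll

        rst.Close
        Set rst = Nothing
        Set cnn = Nothing
    End Sub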