[AccessD] Backend database corruption

James Button jamesbutton at blueyonder.co.uk
Fri Feb 20 06:08:10 CST 2015


I'll add a bit from some annoying experience -

The NTFS file system holds all of its file metadata in one aggregated structure - the
Master File Table (MFT) - and that table can grow so large that it no longer fits in
the memory cache the OS allocates for it, at which point getting the details of a
folder, or even of a single file, can take an annoyingly long time.

So set up your processing so that the reading facility is not told a file exists
while that file is still being written. Have the creating facility write the file
under a temporary name of its own, and only rename it to the name the reading
facility will be 'informed of' once the write is complete - a sketch of that
pattern follows below. Also make sure the file is not buried deep in a folder
structure, or on a partition holding many tens, let alone hundreds, of thousands
of files.
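
For what it's worth, a minimal VBA sketch of that write-then-rename pattern might
look like the following. The routine name ExportToTempThenPublish, the .tmp suffix
and the sample rows are my own illustration, not anything from Access or Jet itself:

Public Sub ExportToTempThenPublish(ByVal strFinalPath As String)
    Dim strTempPath As String
    Dim intFile As Integer

    ' Write under a temporary name the reading facility never looks for.
    strTempPath = strFinalPath & ".tmp"
    intFile = FreeFile
    Open strTempPath For Output As #intFile
    Print #intFile, "col1,col2"
    Print #intFile, "value1,value2"
    Close #intFile

    ' Only now expose the file under the name the reader watches for.
    ' The rename stays on the same partition, so the reader never sees
    ' a half-written file.
    If Len(Dir$(strFinalPath)) > 0 Then Kill strFinalPath
    Name strTempPath As strFinalPath
End Sub

Because the Name ... As rename happens on the same volume, the switch-over is
effectively instantaneous from the reader's point of view.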

FAT was so much simpler - all the names of the files in a folder were held in one
directory 'file', kept separate from the entries for other folders.

JimB

-----Original Message-----
From: accessd-bounces at databaseadvisors.com
[mailto:accessd-bounces at databaseadvisors.com] On Behalf Of Susan Harkins
Sent: Friday, February 20, 2015 1:18 AM
To: Access Developers discussion and problem solving
Subject: Re: [AccessD] Backend database corruption

I was wondering if someone would recommend SQL Server Express. Janet, it's
free, and you're definitely up to the learning curve.

Susan H.

On Thu, Feb 19, 2015 at 8:08 PM, John W. Colby <jwcolby at gmail.com> wrote:

> Yes, if you do go the CSV route (or other data file), do yourself a favor
> and compute a line count and checksum, and write them into the file.
>
> Better still, just go to SQL Server.  This is free, easy and way powerful,
> and will absolutely prevent the corruption issues.
>
> John W. Colby
>
> O
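
A minimal VBA sketch of the line-count-plus-checksum idea John suggests above -
the routine name WriteCsvWithTrailer, the #TRAILER marker and the simple additive
checksum are my own illustration, not his:

Public Sub WriteCsvWithTrailer(ByVal strPath As String, ByVal varLines As Variant)
    Dim intFile As Integer
    Dim lngCount As Long
    Dim dblSum As Double
    Dim i As Long, j As Long
    Dim strLine As String

    intFile = FreeFile
    Open strPath For Output As #intFile
    For i = LBound(varLines) To UBound(varLines)
        strLine = varLines(i)
        Print #intFile, strLine
        lngCount = lngCount + 1
        ' Simple additive checksum over the characters of each data line.
        For j = 1 To Len(strLine)
            dblSum = dblSum + Asc(Mid$(strLine, j, 1))
        Next j
    Next i
    ' Trailer record the reading side can recount and recompute against.
    Print #intFile, "#TRAILER," & lngCount & "," & dblSum
    Close #intFile
End Sub

The reading side repeats the same count and sum over the data lines and rejects
the file if either value disagrees with the trailer.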
-- 
AccessD mailing list
AccessD at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/accessd
Website: http://www.databaseadvisors.com


