[AccessD] SQL Server Speed testing - Simple update to null.

jwcolby jwcolby at colbyconsulting.com
Wed Sep 30 08:37:28 CDT 2009


Well, I am still looking at this.

Last night I purchased PerfectDisk Server and defragged all of my disks.  They were not terribly 
fragmented but that is out of the way.

Late yesterday afternoon I was finally able to detach the two databases I was working on and rename 
the log files so as to force SQL Server to build new ones.  Remember that for whatever reason the 
log files were huge (roughly 35 gigs) and I couldn't shrink them.
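For anyone wanting to do the same, a minimal sketch of the detach / re-attach dance is below.  The 
database and file names here are placeholders, not my actual names:

```sql
-- Detach, rename/move the old .ldf out of the way, then re-attach
-- and let SQL Server rebuild a fresh, minimal log file.
EXEC sp_detach_db @dbname = N'AddressDB';

-- (rename the old AddressDB_log.ldf at the OS level here)

CREATE DATABASE AddressDB
    ON (FILENAME = N'D:\Data\AddressDB.mdf')
    FOR ATTACH_REBUILD_LOG;
```

ATTACH_REBUILD_LOG only works if the database was cleanly shut down (which a normal detach gives 
you).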

BTW I verified last night that I am working in the simple recovery model.  All of my work on this 
server is now backed up regularly, and the databases are not updated except when I send tables out 
for address validation.
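If you want to check this on your own server, the recovery model is visible in sys.databases.  
Again, the database name is a placeholder:

```sql
-- Confirm the recovery model (SIMPLE vs FULL / BULK_LOGGED)
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'AddressDB';

-- Switch to simple recovery if it isn't already
ALTER DATABASE AddressDB SET RECOVERY SIMPLE;
```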

This AM I got my log files back out on a dedicated drive (array - set of spindles) specifically for 
logging.  All of my databases share this log volume but this is not a transaction processing kind of 
system.

So this morning I started testing: updating the AddrValid field back to a NULL value.  What I have 
discovered is interesting, at least to me.

Using a view defined as SELECT TOP(N) ... WHERE AddrValid IS NOT NULL, I am updating AddrValid back 
to null.  This is just testing how long a simple update of a single field takes.
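I'm doing it through a view plus a stored proc, but the work being timed boils down to something 
like this (table and object names here are made up for illustration):

```sql
-- Equivalent inline form of the timed update: touch at most N rows,
-- setting the single AddrValid field back to NULL.
UPDATE TOP (500000) dbo.tblAddresses
SET AddrValid = NULL
WHERE AddrValid IS NOT NULL;
```

Note that UPDATE TOP (N) grabs an arbitrary N qualifying rows, which is fine for this kind of 
throughput test.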

Rows		First pass	Second pass
500K		10 seconds	4 seconds
1 million	20 seconds	8 seconds
2 million	30 seconds	16 seconds
4 million	55 seconds	32 seconds

The first pass takes a lot longer than the second and subsequent passes.  I understand plan 
compilation adds some overhead, but whatever is going on here is not just a few milliseconds - it 
is many seconds, and the extra time grows almost linearly with N, i.e. it is not a fixed overhead 
of, say, two or three seconds.

Notice also that each time I change the N in the TOP(N) view, I have to go through the longer 
first-pass cycle again.  All of these readings were taken back to back, in the time it takes to 
change the N, save the view and rerun the stored proc.

Even so, the times are not unreasonable: roughly a million records in 8 seconds in every "second 
pass" case.  So it certainly appears that about 125K records/second is the real maximum speed given 
my current hardware and settings.  The unsettling part is that the first pass takes roughly 70% 
longer for the large recordsets, and more than 100% longer for the small ones.
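That pattern (a linearly growing first-pass penalty that disappears on reruns) looks more like data 
pages being read from disk into the buffer pool on the first pass than like compilation cost.  One 
way to test that theory on a non-production box is to force a cold pass on demand; the table name 
below is again a placeholder:

```sql
-- Flush dirty pages, then empty the clean buffer cache (test servers only!),
-- so the next run has to read its pages from disk like a "first pass".
CHECKPOINT;
DBCC DROPCLEANBUFFERS;

-- Time the update cold, then rerun it warm and compare.
SET STATISTICS TIME ON;
UPDATE TOP (1000000) dbo.tblAddresses
SET AddrValid = NULL
WHERE AddrValid IS NOT NULL;
SET STATISTICS TIME OFF;
```

If the cold run reproduces the first-pass times, the overhead is I/O, not compilation.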

BTW after all of this testing, my log file is 3.164 gigs.  When I go through the shrink files 
dialog and look at shrinking the log file for this database, it says the file is 3.089 gigs with 
68% available free space (2.116 gigs).  I am still confused as to what is in the log file such that 
it cannot shrink back to .5 meg as it used to.  Is the log file acting as some sort of "backup" to 
the main database, storing all the changes made or something?  I thought that simple mode did not 
do that.
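My understanding, for what it's worth: in simple recovery the log is *truncated* (space marked 
reusable) at each checkpoint, but the physical file never shrinks on its own - it stays at the high-
water mark the biggest single transaction needed.  You have to shrink it explicitly.  A sketch, 
with the logical log file name as an assumption (check sys.database_files for yours):

```sql
-- Find the logical file names and current sizes (size is in 8 KB pages)
SELECT name, size * 8 / 1024 AS size_mb
FROM sys.database_files;

-- Truncate the active log, then shrink the file to a 512 MB target
CHECKPOINT;
DBCC SHRINKFILE (N'AddressDB_log', 512);
```

The 3 gigs of "used" space is presumably what the largest of these multi-million-row updates needed 
as a single transaction.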

Anyway, all of my records' AddrValid fields are set back to null.  Time to test how long it takes 
to set them to the codes I use.

-- 
John W. Colby
www.ColbyConsulting.com


