[dba-SQLServer] SQL Server speed issues

Robert Stewart robert at webedb.com
Thu Oct 1 13:56:19 CDT 2009


Here is the code to run as an SP to truncate your log file. Truncation 
is not automatic with a shrink.

CREATE PROCEDURE [dbo].[usp_Truncate_Log] AS
BEGIN
      -- Temporarily turn on "truncate log on checkpoint" for the database
      EXEC sp_dboption 'SSMS_BCAM', 'trunc. log on chkpt.', 'TRUE'

      -- Force a checkpoint; with the option on, the inactive log is truncated
      CHECKPOINT

      -- Turn the option back off
      EXEC sp_dboption 'SSMS_BCAM', 'trunc. log on chkpt.', 'FALSE'

      -- Shrink the physical log file (target size is in MB)
      DBCC SHRINKFILE(SSMS_Log, 1)
END

GO

Replace SSMS_BCAM with the name of your database.
Replace SSMS_Log with the logical name of the log file (the name column 
in sys.database_files, not the physical file name).
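Once created, it is called like any other stored procedure; scheduling it 
as a SQL Agent job step is just one option, not something the procedure 
requires:

```sql
-- Run manually, or from a SQL Agent job step after a big load finishes
EXEC [dbo].[usp_Truncate_Log];
```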

Running the SP above every so often in your code will keep the log files small.

Also, structuring the work so that you can use explicit transactions might 
allow you to commit more often, reducing how much space tempdb needs to 
hold everything before it is committed at the end of the process.
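A minimal sketch of that batch-commit pattern, assuming a hypothetical work 
table dbo.Addresses with a Validated flag (both names are made up for the 
example):

```sql
-- Process rows in bounded batches, committing each batch so the
-- log and tempdb footprint stays small instead of growing all at once.
DECLARE @BatchSize INT;
SET @BatchSize = 50000;

DECLARE @Rows INT;
SET @Rows = 1;

WHILE @Rows > 0
BEGIN
    BEGIN TRANSACTION;

    UPDATE TOP (@BatchSize) dbo.Addresses
    SET Validated = 1
    WHERE Validated = 0;

    SET @Rows = @@ROWCOUNT;  -- 0 once every row has been processed

    COMMIT TRANSACTION;
END
```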

The tempdb is where all of the intermediate work is stored. It shrinks back 
down to the 600+ MB after it is finished doing what it does. You need to 
check its size while the processing is running to see how big it 
really gets.
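One way to check is the standard catalog view; the size column is reported 
in 8 KB pages, so it has to be converted:

```sql
-- Current size of each tempdb file, converted from 8 KB pages to MB
SELECT name,
       type_desc,
       size * 8 / 1024 AS size_mb
FROM tempdb.sys.database_files;
```

Run it periodically during the heavy processing to catch the peak size.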


At 12:00 PM 10/1/2009, you wrote:
>Date: Wed, 30 Sep 2009 15:14:14 -0400
>From: jwcolby <jwcolby at colbyconsulting.com>
>Subject: Re: [dba-SQLServer] SQL Server speed issues - was server
>         locking
>To: Discussion concerning MS SQL Server
>         <dba-sqlserver at databaseadvisors.com>
>Message-ID: <4AC3AE06.7080407 at colbyconsulting.com>
>Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
>Sorry Robert, I didn't answer the questions.
>
>The log files get huge pretty quickly.  In use I don't really care 
>as that is what they are for.
>However as I have mentioned, the application is really static 
>data.  I get tables of addresses, send
>them out for address validation, pull the resulting data (includes 
>but more than the original
>addresses), process the data and then... just use the data for 
>"quite awhile", then send out for
>address validation again, process the results and then... just use 
>the data for "quite awhile".
>There are zero "transactions" in the normal sense.  The resulting 
>data are read-only for weeks or
>months at a time.
>
>So under normal circumstances I want to use the log files, then "do 
>whatever" to get them to as
>close as possible to zero bytes.  Truncate, shrink, doesn't matter 
>to me as long as it works.
>
>In the past the log files were always 99% available and I just used 
>the shrink wizard.  I do need to
>automate this process so that when the address validation process 
>finishes, it TRUNCATES the log
>file as the final process.  The log file will not be used again for 
>weeks or more so why leave it huge.
>
>  >> How big is your log file?
>
>That depends on what is going on, how many records in the table 
>being processed etc.  For this
>specific process for this specific database, for 20 million records, 
>about 15 GB when it is all
>finished.
>
>  >> Have you truncated it?
>
>Not in so many words.  I have never run any code with the TRUNCATE 
>keyword.  I have always used the
>"shrink files" dialog from inside of SS.  My understanding is that 
>the wizard performed a TRUNCATE
>behind the scenes but who am I to say?  I will almost certainly do 
>so eventually.
>
>  >> How big is your space for the tempdb file?
>
>The temp db runs on a separate raid0 array, two 320g drives, 
>apparently waaaayyyy too big for the
>requirements.  The array is 600 gb and the temp db (after all this 
>horsing around) is currently 640
>mb.  The temp db is the only thing on that array.
>
>My log files run on another array.  Raid1, using two 500gb drives 
>for a total of 500gb available.
>That array is used only for the log files, though all of the log 
>files are on that one array.  At
>this instant the total space used for logs is 16 GB, though I have 
>seen a single log file get as big
>as 100 GB under heavy processing as I am setting up a large (100
>million record) database.  When
>done I just shrink the log back down though.
>
>The main DATA array is a raid6 array of six 1 tb drives, divided 
>into two 2 tb volumes.  All the
>data files are on one of the two volumes.  The other volume is used 
>for general data storage.  Given
>that both of the two terabyte volumes are on the same array (set of 
>spindles) my understanding is
>that it doesn't matter if they are all on the same volume.
>
>The system drive is on its own array, Raid1, using two 1 tb 
>drives.  That array is divided into two
>volumes, 300g for boot (C:) and 700g for data.  The main SQL Server 
>installation is on that C:
>drive, with the master database etc on that volume.
>
>And then there is a 1 tb backup drive sitting on the end of an ESATA 
>cable in an external enclosure.
>
>All of the RAID arrays are on a single ARECA controller card with 2 
>gigs of RAM onboard.  One thing
>I want to try is to move the raid arrays for the logs and tempdb off 
>to a second controller card
>which I have sitting on the shelf.  That would put it on a separate 
>co-processor, and a separate
>PCI-Express x8 interface to the processor.  Unfortunately that 
>second controller has only a fixed
>256 MB of memory available to it, and I don't know whether to put 
>the data or the logs on the bigger
>card.  And of course just touching stuff that is working is always a risk.
>
>John W. Colby
>www.ColbyConsulting.com
>


