[dba-SQLServer] SQL Server speed issues - was server locking up

jwcolby jwcolby at colbyconsulting.com
Tue Sep 29 15:55:06 CDT 2009


Francisco,

I did reorganize.  The tempdb is out on a single drive, IIRC.  The log files are on their own
RAID 5 array.  The database itself is on a RAID 6 array.

The log files are HUGE, as in 45 gigs.  And more importantly, one of them won't shrink down; it
says only 24% is available.  I shrank the log file, and sure enough it is now 33 gigs with 0% free
space.  So what is in the log file?
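
For what it's worth, something like this should show how full the log really is and what, if
anything, is pinning it (the database name below is just a placeholder):

    -- Percent of each database's log actually in use
    DBCC SQLPERF(LOGSPACE);

    -- What is preventing log truncation, e.g. ACTIVE_TRANSACTION,
    -- or LOG_BACKUP under the full recovery model
    SELECT name, recovery_model_desc, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = 'MyBigDatabase';   -- placeholder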

I have a database that is really nothing but the raw data imported from CSV files.  I created an
index, and I decided to try updating this same field in it - just to see how the times compare,
and whether the problem is in one of the databases or is system-wide.

20 million records, update a field to NULL, all records.  FOUR HOURS LATER... it still was not
completed.  However, the log file is enormous: 33 gigs.  It was 45, but I could only shrink about
25% of it.  Is there any way to see what is in the log file that is so almighty important?  And is
it trying desperately to apply whatever is in the log file to the database?
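
If it is one giant transaction that is killing me, a batched update along these lines ought to
keep each transaction - and the log - small.  The table and column names are placeholders, and
TOP in an UPDATE needs SQL Server 2005 or later:

    WHILE 1 = 1
    BEGIN
        -- Update one million rows per transaction instead of all 20 million at once
        UPDATE TOP (1000000) dbo.RawImport
        SET SomeField = NULL
        WHERE SomeField IS NOT NULL;   -- lets the loop terminate

        IF @@ROWCOUNT = 0 BREAK;

        CHECKPOINT;   -- under simple recovery, lets the log space be reused
    END;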

In the past I would get tremendously sized log files, but I could ALWAYS shrink them to just a meg
or so.  Not this time.
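
The shrink itself is just something like this; the logical log file name is a placeholder
(sys.database_files shows the real one):

    USE MyBigDatabase;   -- placeholder
    -- Find the log file's logical name and size
    SELECT name, type_desc, size FROM sys.database_files;
    CHECKPOINT;
    -- Try to shrink the log file down to about 100 MB
    DBCC SHRINKFILE (MyBigDatabase_log, 100);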

WTF over?

John W. Colby
www.ColbyConsulting.com


Francisco Tapia wrote:
> If your transaction log file is not big enough to house all the data, then
> you will end up paging... that's normal, but it brings other issues as well:
> your log file will want to grow (if you've set it to autogrow), especially if
> you're not in the simple recovery model.  And even in simple recovery you'll
> see the log file grow a good amount based on how much data you push through
> your system.  Paging and working with only 1 million rows is different from
> working on all 100 million rows; it's all about resources and how you've
> allocated them.  A small tlog file will require the server to stop what it
> was doing in order to grow the tlog first, and then it will continue on its
> merry way...
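> 
> Something along these lines pre-sizes the log once, up front, so the server
> never has to stop mid-update to autogrow it (the database and logical file
> names are just placeholders, and MODIFY FILE changes one property at a time):
> 
>     ALTER DATABASE MyBigDatabase
>     MODIFY FILE (NAME = MyBigDatabase_log, SIZE = 50GB);
> 
>     ALTER DATABASE MyBigDatabase
>     MODIFY FILE (NAME = MyBigDatabase_log, FILEGROWTH = 1GB);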
> 
> I remember you were going to reorganize the location of the tempdb and the
> tlog files... How did that pan out, and how much space did you allocate for
> them?
> 
> --
> -Francisco
> http://sqlthis.blogspot.com | Tsql and More...
> 
> 
> On Tue, Sep 29, 2009 at 10:44 AM, Robert Stewart <robert at webedb.com> wrote:
> 
>> How big is your log file?
>> Have you truncated it?
>> How big is your space for the tempdb file?
>>
>>
>>
>> At 12:00 PM 9/29/2009, you wrote:
>>> Date: Tue, 29 Sep 2009 12:29:35 -0400
>>> From: jwcolby <jwcolby at colbyconsulting.com>
>>> Subject: [dba-SQLServer] SQL Server speed issues - was server locking
>>>         up.
>>> To: Dba-Sqlserver <dba-sqlserver at databaseadvisors.com>, Access
>>>         Developers discussion and problem solving
>>>         <accessd at databaseadvisors.com>
>>> Message-ID: <4AC235EF.2040002 at colbyconsulting.com>
>>> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>>>
>>> Well, it appears that I have issues with SQL Server, above and
>>> beyond the "locking up" issue.
>>>
>>> When I rebooted the server this morning (it never did finish the
>>> update process last night), my database being updated was NOT
>>> corrupted, indicating no writes were happening to it when I
>>> rebooted.  Furthermore it did not take off and try to do "gotta
>>> finish log file stuff", so it appears that it wasn't even doing
>>> that stuff any more.  Strange in itself.
>>>
>>> I have spent a confusing morning trying to discover what exactly
>>> is going on.  The first thing is that just updating a single field
>>> to NULL in 21 million records is taking 24 minutes.  That is
>>> roughly a million records a minute, which is not stellar IMHO.
>>> The field is not indexed, so it is not an "updating the index"
>>> kind of problem.
>>>
>>> I can tell you that I fed the "update" stored procedure a "TOP()
>>> one million" kind of query and it took only 17 seconds to update
>>> one million records, that same field.  If you do the math, 17
>>> seconds per million times 21 million records is only about 6
>>> minutes to update the field in every record.  So why does it take
>>> 24 minutes to just do a simple "set that field in every record to
>>> NULL"?
>>>
>>> This just makes no sense to me, but I am not a SQL Server kind of
>>> guy.


