[dba-SQLServer] log file size

jwcolby jwcolby at colbyconsulting.com
Fri Oct 5 22:28:30 CDT 2012


I am confused though.  I thought I was told long ago that when you shrank a database file it 
literally started at the end, moving the last records toward the front to create empty space at the 
end of the file so that the file could be shrunk by pulling in its end.  So when you were done the 
file was all hosed internally in terms of record order.

All of which makes no sense if you are using a clustered index, where the data records themselves 
are stored in key order.  I always put an autoincrement PK on the table and make it the clustered 
index.  So I fail to see how you can take records and insert them "out of order" if by definition 
they have to stay in order.
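For what it's worth, the usual explanation is that a clustered index keeps rows in logical key 
order through page linkage, not at fixed physical positions in the file; DBCC SHRINKFILE relocates 
whole pages from the end of the file into free space near the front without regard to that order, 
which is exactly why a shrink leaves the index logically fragmented.  A minimal sketch, with 
placeholder database, file, and table names (JCData, JCData_Data, dbo.BigTable):

    -- Shrink the data file, then inspect logical fragmentation of the
    -- table's indexes; a shrink typically leaves them heavily fragmented.
    USE JCData;
    DBCC SHRINKFILE (JCData_Data, 10240);  -- target size in MB

    SELECT index_id, avg_fragmentation_in_percent, page_count
    FROM sys.dm_db_index_physical_stats(
             DB_ID('JCData'), OBJECT_ID('dbo.BigTable'),
             NULL, NULL, 'LIMITED');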

Obviously I do not understand all I know about this situation.

So if I do end up with a lot of free space I do this "data migration" to a brand new container, 
ordered by the PK, with the clustered index on the PKID, etc.  I do end up with 0.2% fragmentation 
or so and almost no free space, so it appears that I achieve the intended result at the logical 
(database structure) level, though it may indeed be fragmented at the physical level.  However, as 
mentioned, I am putting the files out on an SSD RAID array / volume.
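A minimal sketch of that migration, with placeholder names (OldDB, NewDB, dbo.BigTable, PKID); 
inserting in clustered-key order fills pages densely, and the TABLOCK hint allows minimal logging 
under the simple or bulk-logged recovery model:

    -- New container with the clustered PK already in place.
    CREATE TABLE NewDB.dbo.BigTable (
        PKID    INT          NOT NULL,
        Payload VARCHAR(200) NULL,   -- stand-in for the real columns
        CONSTRAINT PK_BigTable PRIMARY KEY CLUSTERED (PKID)
    );

    -- Insert in ascending PK order so pages fill front-to-back.
    INSERT INTO NewDB.dbo.BigTable WITH (TABLOCK) (PKID, Payload)
    SELECT PKID, Payload
    FROM   OldDB.dbo.BigTable
    ORDER BY PKID;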

I use page compression, which yields anywhere from 15% to 45% compression depending on the data 
and structure under discussion.  The biggest reason I do that is that the data is pulled off the 
disks compressed and stored in memory compressed, which means that I can get that much more of the 
data in memory at once.  I have a lot of memory, but these tables are huge, and I am almost always 
joining two huge tables together and pulling result sets.  So the more I can keep loaded the 
better, even given the SSD storage.
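For reference, a sketch of enabling that (table name is a placeholder); 
sp_estimate_data_compression_savings gives a preview before committing to the rebuild:

    -- Estimate the savings first...
    EXEC sp_estimate_data_compression_savings
         @schema_name      = 'dbo',
         @object_name      = 'BigTable',
         @index_id         = NULL,
         @partition_number = NULL,
         @data_compression = 'PAGE';

    -- ...then rebuild with page compression. Compressed pages stay
    -- compressed in the buffer pool, so more rows fit in the same RAM.
    ALTER TABLE dbo.BigTable
    REBUILD WITH (DATA_COMPRESSION = PAGE);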

IOW I don't know jack about SQL Server so throw hardware at it.  ;)

John W. Colby
Colby Consulting

Reality is what refuses to go away
when you do not believe in it

On 10/5/2012 7:31 PM, Asger Blond wrote:
> Then the problem seems to narrow down to the way JC is inserting the
> rows. To prevent excessive logging and execution time I would suggest
> executing this statement before the insert:
>     ALTER TABLE <tablename> NOCHECK CONSTRAINT ALL
> and after the insert:
>     ALTER TABLE <tablename> CHECK CONSTRAINT ALL
> Asger
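Asger's pattern spelled out, with a placeholder table name.  One caveat worth noting: re-enabling 
with a plain CHECK CONSTRAINT leaves the constraints marked untrusted; adding WITH CHECK 
revalidates the existing rows so the optimizer can rely on them again:

    ALTER TABLE dbo.BigTable NOCHECK CONSTRAINT ALL;

    -- ... bulk insert here ...

    ALTER TABLE dbo.BigTable WITH CHECK CHECK CONSTRAINT ALL;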
> ----- Original message -----
>
>> From: Francisco Tapia <fhtapia at gmail.com>
>> To: Discussion concerning MS SQL Server
>> <dba-sqlserver at databaseadvisors.com>
>> Date: Sat, 06 Oct 2012 00:53
>> Subject: Re: [dba-SQLServer] log file size
>>
>> Excellent point... I didn't remember he was all SSD'ed out... In that
>> case simply defragging the indexes should suffice, but if that is not
>> important, just performing a db shrinkfile will be enough and he can
>> save all the overhead of dropping data into a new container...
>>
>> Doing an OS defrag on SSDs isn't good practice, but I have noticed
>> that index defragging on these volumes still yields improved
>> performance. We use a NetApp SAN, which is really a hybrid SSD /
>> spinning disk array; generally a SAN handles all defragmentation
>> internally and away from the OS, but as I stated, index defragging is
>> huge in keeping performance.
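A sketch of the fragmentation-driven maintenance Francisco describes, with placeholder names; the 
5%/30% thresholds are the commonly cited rule of thumb for choosing REORGANIZE versus REBUILD:

    -- Check logical fragmentation per index...
    SELECT i.name, s.avg_fragmentation_in_percent, s.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.BigTable'),
                                        NULL, NULL, 'LIMITED') AS s
    JOIN sys.indexes AS i
      ON i.object_id = s.object_id AND i.index_id = s.index_id;

    -- ...then defrag: REORGANIZE for light fragmentation (5-30%),
    -- REBUILD when it is heavy (>30%).
    ALTER INDEX ALL ON dbo.BigTable REORGANIZE;
    -- ALTER INDEX ALL ON dbo.BigTable REBUILD;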
>>
>> -Francisco
>> --------------------------
>> You should follow me on twitter here <http://twitter.com/seecoolguy>
>> Blogs: SqlThis! <http://bit.ly/sqlthis> |
>> XCodeThis!<http://bit.ly/xcodethis>
>>
>> <http://db.tt/JeXURAx>
>>
>>
>>
>>
>> On Fri, Oct 5, 2012 at 3:06 PM, Stuart McLachlan
>> <stuart at lexacorp.com.pg> wrote:
>>
>>> JC says he is using 900GB of SSD.
>>>
>>> In which case, the whole concept of fragmentation is immaterial.
>>>
>>> He shouldn't even think about doing an OS defrag.
>>>
>>>
>>> --
>>> Stuart
>>>
>>> On 5 Oct 2012 at 11:51, Francisco Tapia wrote:
>>>
>>>> John,
>>>> While performing a shrinkfile can cause OS file-level fragmentation,
>>>> unless you are pre-growing your database files you are already
>>>> causing file fragmentation. Every time Windows creates a new file,
>>>> its NTFS filesystem will be super efficient and begin writing to all
>>>> available blocks, even when they are not contiguous... it's the
>>>> inherent nature of the filesystem. In order to avoid fragmentation
>>>> at all costs, you will want to perform an OS-level defragmentation
>>>> prior to creating any new large file in Windows.
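Pre-growing, for reference, is a one-liner; the logical file name and sizes below are 
placeholders.  Sizing the file once up front lets NTFS hand SQL Server one large allocation 
instead of many scattered growth increments:

    ALTER DATABASE JCData
    MODIFY FILE (NAME = JCData_Data, SIZE = 500GB, FILEGROWTH = 10GB);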
>


