[AccessD] 4 gig DIMMs (relatively) cheap

jwcolby jwcolby at colbyconsulting.com
Mon Jun 15 16:32:32 CDT 2009


 > Is that your "database from hell"?  My sympathies!

ROTFL.  With the proper hardware and the proper knowledge (my biggest problem) it is manageable.

I got the 4 gig DIMMs today, installed and in use - 16 gigs of RAM on each server.  I use virtual
machines to run an address validation process.  In the past, because of the limited RAM I had, I would
lower the amount of memory that SQL Server could use from 7 gigs to 4 gigs whenever I needed to run
the VM.  Now I can just leave that memory available for the VM and permanently assign about 12 gigs
to SQL Server.
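
(A minimal sketch of that change, assuming it is done through SQL Server's 'max server memory'
setting - 12288 MB here simply stands in for the roughly 12 gigs mentioned above:)

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    -- cap SQL Server at roughly 12 gigs, leaving the rest for the OS and the VM
    EXEC sp_configure 'max server memory (MB)', 12288;
    RECONFIGURE;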

I am working on this stuff now.  I really need to be logging all kinds of things, including the date /
time and the duration of each stored procedure.  If I can get that logging happening, then I can start
to analyze the impact of hardware changes.  I have orders which I can use as a test bed for timing how
long each step takes.
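
(A minimal sketch of that kind of logging, assuming a simple log table plus a start / end timestamp
around each call - the table and procedure names below are only placeholders:)

    CREATE TABLE dbo.tblProcLog (
        LogID      int IDENTITY(1,1) PRIMARY KEY,
        ProcName   sysname  NOT NULL,
        StartTime  datetime NOT NULL,
        EndTime    datetime NOT NULL,
        DurationMS int      NOT NULL
    );

    DECLARE @Start datetime, @End datetime;
    SET @Start = GETDATE();
    EXEC dbo.MyStoredProc;   -- whichever step is being timed
    SET @End = GETDATE();
    INSERT INTO dbo.tblProcLog (ProcName, StartTime, EndTime, DurationMS)
    VALUES ('dbo.MyStoredProc', @Start, @End, DATEDIFF(ms, @Start, @End));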

John W. Colby
www.ColbyConsulting.com


Steve Erbach wrote:
> John,
> 
> Is that your "database from hell"?  My sympathies!
> 
> Steve Erbach
> Neenah, WI
> 
> 
> On Sat, Jun 13, 2009 at 6:52 PM, jwcolby<jwcolby at colbyconsulting.com> wrote:
>> Steve,
>>
>> This is for 4 gig DIMMs.  I have 8 gigs in my servers right now, but that is (4) sticks of 2 gigs each.
>> I am now going to replace that with (4) sticks of 4 gigs each.
>>
>> I run SQL Server on these machines and I have largish databases - about 180 gig database files, 50
>> million records, 640 fields wide.  I have to build cover indexes on 14 or more fields at a time to
>> get my speeds up to snuff.  If I can get twice the memory, it should help with the processing speed
>> a bit.
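
(A rough sketch of a cover index of the sort described above - the table and field names are made up,
and in practice the key / INCLUDE lists would run to a dozen or more fields:)

    CREATE NONCLUSTERED INDEX IX_tblBigTable_Order1234
    ON dbo.tblBigTable (LastName, FirstName, Zip5)
    INCLUDE (Addr1, City, State, Income, HomeOwner, Age);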
>>
>> My next move is to place these databases on a raid of solid state disks, probably a raid (5) using
>> three (to start) of these:
>>
>> http://www.newegg.com/Product/Product.aspx?Item=N82E16820227395
>>
>> This is a review of the drive comparing it to other drives, both SSDs and normal hard drives.
>>
>> http://benchmarkreviews.com/index.php?option=com_content&task=view&id=299&Itemid=60
>>
>> Perhaps of most interest to me are these three tests, which show access times and read / write
>> speeds (I rarely write to my drives except when generating cover indexes) as well as IOPS.
>>
>> Access times:
>>
>> http://benchmarkreviews.com/index.php?option=com_content&task=view&id=299&Itemid=60&limit=1&limitstart=5
>>
>> Read / write speeds:
>>
>> http://benchmarkreviews.com/index.php?option=com_content&task=view&id=299&Itemid=60&limit=1&limitstart=5
>>
>> IOPS:
>>
>> http://benchmarkreviews.com/index.php?option=com_content&task=view&id=299&Itemid=60&limit=1&limitstart=8
>>
>> My databases are the exception to the usual database workload in that they are almost entirely
>> read-only.  This provides an advantage in that I will not be wearing out the SSDs by writing to them
>> all of the time, and for a database, the scorching read data rates and the truly scorching IOPS rate
>> MIGHT provide truly scorching query processing rates.  And to top it off, SSDs do not suffer from
>> the effects of disk fragmentation - reads to any location are equally fast.
>>
>> Basically I set up this "database from hell", which is the very wide table, on a raid 6 array of
>> normal hard drives.  However, I do not USE that database directly.  I set up cover indexes, many of
>> which are static; they cover the most common fields that are used over and over from order to order.
>> I then (very occasionally now) have to generate cover indexes for specific sets of fields from that
>> big wide table.
>>
>> If I can place JUST those cover indexes into database files out on a raid 5 array of these SSDs, I
>> should get phenomenal performance.  Given disk access times of around 100 microseconds, a sustained
>> raid data transfer rate of as much as 500 mbytes / second (and even more if I add disks to the
>> array), and IOPS in the many thousands per second, it would SEEM that even large queries should
>> simply fly.  Talk about keeping the processor fed...
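
(One way to put just the cover indexes on the SSD array, assuming it shows up as its own drive letter,
is a dedicated filegroup - the database name, path and index names below are only placeholders:)

    ALTER DATABASE MyBigDB ADD FILEGROUP SSD_Indexes;
    ALTER DATABASE MyBigDB
        ADD FILE (NAME = 'SSD_Indexes1',
                  FILENAME = 'S:\SQLData\SSD_Indexes1.ndf',
                  SIZE = 50GB)
        TO FILEGROUP SSD_Indexes;

    -- build the cover index on the SSD filegroup instead of the default one
    CREATE NONCLUSTERED INDEX IX_tblBigTable_Common
    ON dbo.tblBigTable (LastName, FirstName, Zip5)
    INCLUDE (Addr1, City, State)
    ON SSD_Indexes;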
>>
>> We shall see.
>>
>>
>> John W. Colby
> 


