[AccessD] Needs analysis

jwcolby jwcolby at colbyconsulting.com
Tue Jun 22 10:02:55 CDT 2010


All I see discussed there is AWE.  I am running x64 Windows and SQL Server, and AFAIK AWE does not apply.
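
For anyone curious, here is a quick way to check what the server is actually
configured for.  This is just a sketch; it assumes SQL Server 2005/2008,
where these sp_configure options exist:

   -- 'awe enabled' exists on these versions but is ignored on x64 builds;
   -- 'max server memory (MB)' is the setting that actually matters there.
   EXEC sp_configure 'show advanced options', 1;
   RECONFIGURE;
   EXEC sp_configure 'awe enabled';
   EXEC sp_configure 'max server memory (MB)';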

John W. Colby
www.ColbyConsulting.com


Jim Dettman wrote:
> <<I don't think there is an upper limit to that outside of what the system
> can address.>>
> 
>   FYI: 64GB is the upper limit
> 
> Memory Architecture
> http://msdn.microsoft.com/en-us/library/ms187499.aspx 
> 
>   This is on SQL 2008
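> 
> A rough way to confirm which edition (and therefore which memory ceiling)
> applies, as a sketch (SERVERPROPERTY is available on SQL 2005 and up):
> 
>   SELECT SERVERPROPERTY('Edition') AS edition,
>          SERVERPROPERTY('ProductVersion') AS version;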
> 
> Jim.
> 
> -----Original Message-----
> From: accessd-bounces at databaseadvisors.com
> [mailto:accessd-bounces at databaseadvisors.com] On Behalf Of Jim Dettman
> Sent: Monday, June 21, 2010 10:18 PM
> To: 'Access Developers discussion and problem solving'
> Subject: Re: [AccessD] Needs analysis
> 
> 
> Always, always, always, the biggest bang for your buck is memory.
> 
> I would not call myself a SQL Server expert, but I believe SQL Server will
> consume as much memory as it can to execute things.  I don't think there is
> an upper limit to that outside of what the system can address.
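> 
> You can cap it yourself so it doesn't starve the OS.  A sketch (the 12 GB
> figure is just an example value, not a recommendation):
> 
>   EXEC sp_configure 'show advanced options', 1;
>   RECONFIGURE;
>   -- cap the buffer pool at 12 GB, leaving the rest for Windows
>   EXEC sp_configure 'max server memory (MB)', 12288;
>   RECONFIGURE;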
> 
> Jim. 
> 
> -----Original Message-----
> From: accessd-bounces at databaseadvisors.com
> [mailto:accessd-bounces at databaseadvisors.com] On Behalf Of jwcolby
> Sent: Monday, June 21, 2010 2:28 PM
> To: Access Developers discussion and problem solving
> Subject: Re: [AccessD] Needs analysis
> 
> LOL, yep one terabyte, not one gigabyte.
> 
> As for RAID 10, I got it.  Stripe to do the RAID 0, then mirror to do the
> RAID 1.  So that is 2X.
> 
> I went back in and looked at what I actually have.  I have eight 1 terabyte
> drives.  I use two of them in RAID 1.  That leaves six, which I use to
> create a single RAID 6 array.  Since RAID 6 spends two drives on parity,
> that leaves four terabytes of data storage.  Due to the limitations of
> Windows, I break that into two 2 terabyte volumes.
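> 
> The arithmetic, just to double-check myself.  A sketch in T-SQL (drive
> count and size pulled out as variables):
> 
>   DECLARE @drives int = 6, @tb_per_drive int = 1;
>   -- RAID 6 spends two drives on parity, so usable space is (N - 2) drives
>   SELECT (@drives - 2) * @tb_per_drive AS raid6_usable_tb;  -- returns 4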
> 
>  > You said you had a budget for upgrading your machine.  John, I'm
>  > serious, when reading and writing from a database, your absolute most
>  > limiting factor is going to be drive speed!
> 
> Of course this is true.  However, there is more to the story.  For example:
> 
> 
> Imagine I have a database.  The whole thing is only 500 megs, data, indexes
> and all.  I have 16 gigs of RAM.  Yes, the slowest thing is the disk, but...
> SQL Server loads the whole damn thing into RAM and then executes ten
> thousand queries pulling this, that, and the other, all "instantly".
> 
> Next scenario: imagine the same server, but the database is 40 gigs.  Now
> only a piece can be loaded at a time.
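> 
> You can actually watch this happening.  Something along these lines shows
> how much of each database is sitting in the buffer pool.  A sketch; the
> DMV exists on SQL 2005 and up and needs VIEW SERVER STATE permission:
> 
>   SELECT DB_NAME(database_id) AS db_name,
>          COUNT(*) * 8 / 1024  AS cached_mb   -- pages are 8 KB each
>   FROM sys.dm_os_buffer_descriptors
>   GROUP BY database_id
>   ORDER BY cached_mb DESC;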
> 
> You get the point.  At what point does it pay to go to more memory vs the
> disk thing?  If I invest 
> heavily in memory, now I have the whole thing loaded in memory but I have 47
> potential threads but 
> only 3 available cores.  At what point does it pay to invest in a ton of
> cores?
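> 
> For what it's worth, you can at least see what the engine thinks it has to
> work with, and cap parallelism if the thread count gets silly.  A sketch
> (the MAXDOP value of 4 is just an example):
> 
>   SELECT cpu_count, scheduler_count FROM sys.dm_os_sys_info;
>   EXEC sp_configure 'show advanced options', 1;
>   RECONFIGURE;
>   EXEC sp_configure 'max degree of parallelism', 4;
>   RECONFIGURE;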
> 
> Drew, I understand the disk thing, and I know that I absolutely am hitting
> an I/O bottleneck.  However, it is not useful to just say "get faster
> disks".  I know that needs to happen, but there is much, much more to this
> than "more disks".  Understand that these databases are freakin huge.
> Just reading the data off a RAID 0 or 10 will still take a ton of time if
> the disks in the RAID are magnetic.
> 
> OTOH I do not write back to these big databases.  During processing they
> are literally read-only.  And much of the time I hit two specific
> databases, and then write a "small" order table (a few thousand to a few
> million records).  My plan (in the absence of explicit bottleneck data) is
> to go get some SSDs to hold those two databases, probably two 60 gig SSDs
> for each database in RAID 0 (120 gigs total per database).
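> 
> Since they really are read-only during processing, I may as well tell SQL
> Server so; it skips the locking overhead.  A sketch (the database name is
> made up):
> 
>   ALTER DATABASE [BigListDb] SET READ_ONLY WITH NO_WAIT;
>   -- and back again when a rebuild is needed:
>   ALTER DATABASE [BigListDb] SET READ_WRITE WITH NO_WAIT;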
> 
> That effectively does what you are discussing, but just for my two most
> frequently used databases. 
> After that, providing a processor / memory pipe that can handle the flood of
> data will still be a 
> requirement.
> 
> John W. Colby
> www.ColbyConsulting.com
> 
> 
> Drew Wutka wrote:
>> Ok, a little fuzzy about your math here.
>>
>> First of all, a RAID 1+0 (RAID 10) is actually 2x used for x storage
>> space.  It's a striped mirror set, so you use up 2 drives for each
>> mirror set, and then stripe those.
>>
>> Second, 1 gig drives...really?  That's VERY small by today's standards!
>> Hopefully you meant terabyte.
>>
>> How much space do you actually need?  From what you are saying, it looks
>> like you have 16 drives: 2 sets of 6-disk RAID 6 (which is 12 drives, as
>> you said), plus a 2-drive striped set, plus a 2-drive mirrored set.
>> The way you SHOULD have this set up is with a 2-drive mirror and a
>> 14-drive RAID 1+0.  That first mirrored set should be your operating
>> system.  Put NOTHING else on there.  Then install everything else to the
>> RAID 1+0.
>>
>> You said you had a budget for upgrading your machine.  John, I'm
>> serious, when reading and writing from a database, your absolute most
>> limiting factor is going to be drive speed!  
>>
>> So what kinda storage space do you need, and how big are your drives?
>> (Again, I'm assuming you're way off on the 1 gig part.)  I guess you could
>> be getting very small solid state drives, but those aren't ideal for a
>> database either.  If you went with fairly cheap drives (260 gig drives are
>> in the $35 range, depending on the interface), for $560 you could have a
>> 260 gig root drive for your OS and then a 1.8 terabyte data drive.
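>>
>> Spelling the math out, as a sketch in T-SQL since that's what is on the
>> box (prices are my ballpark numbers from above):
>>
>>   DECLARE @drive_gb int = 260, @price_usd money = 35;
>>   SELECT (14 / 2) * @drive_gb AS raid10_usable_gb,  -- 1820, about 1.8 TB
>>          16 * @price_usd      AS total_cost_usd;    -- 560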
>>
>> Drew
> 
