[dba-SQLServer] Indexes not all listed

Francisco Tapia fhtapia at gmail.com
Thu Dec 17 07:29:46 CST 2009


Flash drives do not do well with very large files. Flash drives excel
at random access, which for SQL Server makes them good with both tempdb
and transaction log files, because they both have random inserts and
reads. To improve on what you have, you will want to double the number
of HDD spindles. The more spindles you have, the better the
performance. We have an enterprise database riding on 40 spindles; this
in turn is able to crank out searches against a 1.5 terabyte db in
usually 10 seconds to ???, depending on the query my developers need.
We have discussed as an architecture change that we may soon upgrade to
80 spindles.


Sent from my mobile

On Dec 16, 2009, at 6:37 AM, jwcolby <jwcolby at colbyconsulting.com>  
wrote:

> Mark,
>
>> Are you really using all columns to search on?
>
> I have not used every one; with 584 columns it would be difficult to
> do so.  However, the client
> calls and says "I need a count where" and rattles off a list of  
> columns.  He has a spreadsheet with
> every single column listed and he uses these to perform very  
> targeted searches.  So I have used a
> ton of them already and could use any of them on a moment's notice.
>
>> In the case where you have an order for a specific column(s), can
>> you not programmatically create the indexes on those columns... And
>> if it does not take too long, I would run the scripts that the
>> maintenance wizard creates afterwards every time also, in the hope
>> that it keeps the database intact.
>
> Yes, except that you are talking about 50 million records.  You  
> don't just "bang out an index".
>
>> Again, this is just instinct, but I feel you will have other
>> integrity / file size / performance / de-fragmentation / torn page
>> issues, etc., with such enormous data and enormous numbers of
>> indexes.
>
> On any other database table I would probably agree; however, you must
> always remember that this table
> is completely static.  No inserts, updates or deletes EVER.
>
> It is not a relational database; the table does in fact have a
> one-to-one with the name / address table, but it is that other table
> which gets updated.  Thus I don't see how such issues could occur.
>
> To be honest I am such an amateur in SQL Server that I cannot  
> possibly know whether this is the
> right thing to do.  What I do know is that when I started, my  
> searches and sorts would take 20
> minutes to two hours - basically SQL Server was doing all table  
> scans.  Once I discovered "cover
> indexes", where I did as you suggested and indexed a group of  
> related fields, I got my searches down
> to anywhere from 2-10 minutes.  With this "every field indexed"
> paradigm I got a two-minute search
> down to 30 seconds, and went from a bunch of index scans to a bunch  
> of index seeks.
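
For anyone reading along, a covering index of the kind John describes
looks roughly like this; the table and column names are invented for
illustration, not his actual schema:

  CREATE NONCLUSTERED INDEX IX_Demographics_AgeIncomeState
      ON dbo.Demographics (Age, Income, State)
      INCLUDE (PersonID)          -- extra column carried in the leaf level
      WITH (SORT_IN_TEMPDB = ON); -- index build sorts in tempdb

  -- A query whose WHERE and SELECT columns are all in the index can be
  -- answered with an index seek instead of a table scan:
  SELECT COUNT(*)
  FROM dbo.Demographics
  WHERE Age = 42 AND Income = 5 AND State = 'NC';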
>
> All I can say is that it sure seems to be another gigantic leap  
> forward.  When you spend as much
> time waiting on this stuff as I do, you are constantly looking for  
> that next gigantic leap.
>
> I truly don't much care about the file size if the results are  
> worthwhile.  This database / table is
> the center of the client's universe.  It is THE table that he does  
> selects against.  As such I will
> do whatever I can to make it fast.
>
> In fact some day soon I will be placing this file on a Flash drive.   
> You know how much those cost,
> but again, if I can cut the times in 1/2 or 1/4 then it all becomes  
> worthwhile.  When that day comes
> I will be looking to place these indexes out in a separate file so  
> that only these indexes have to
> go on flash.
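
The "separate file" John mentions would be a filegroup in SQL Server
terms. A rough sketch, with every database, filegroup, path and index
name invented for illustration:

  -- Add a filegroup backed by a file on the flash drive.
  ALTER DATABASE BigList ADD FILEGROUP IndexesOnFlash;
  ALTER DATABASE BigList ADD FILE
      (NAME = BigList_ix1,
       FILENAME = 'F:\SQLIndexes\BigList_ix1.ndf',
       SIZE = 50GB)
      TO FILEGROUP IndexesOnFlash;

  -- New nonclustered indexes can then be created on that filegroup:
  CREATE NONCLUSTERED INDEX IX_Demographics_Income
      ON dbo.Demographics (Income)
      ON IndexesOnFlash;

  -- An existing index can be moved by recreating it on the filegroup
  -- with WITH (DROP_EXISTING = ON).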
>
> John W. Colby
> www.ColbyConsulting.com
>
>
> Mark Breen wrote:
>> Hello John,
>>
>> Is it really going to be the best solution to index every column?
>>
>> Are you really using all columns to search on?
>>
>> I instinctively feel that if you place an index on every column, you
>> will lose the benefits of having an index in the first place, or at
>> least you lose some of the benefits that a neat and quick index
>> delivers.  I am imagining Encyclopedia Britannica if it supplied 25
>> different indexes; the indexes would be larger than the books.
>>
>> I always imagined we should have as few indexes as we can get away
>> with, but as many as we need.  In the case where you have an order
>> for a specific column(s), can you not programmatically create the
>> indexes on those columns, then run your queries, and perhaps even go
>> the final step and drop those indexes you just created after the
>> order has been delivered?  And if it does not take too long, I would
>> also run the scripts that the maintenance wizard creates afterwards
>> every time, in the hope that it keeps the database intact.
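
The create-run-drop pattern Mark describes would look something like
this; the table, column and index names are invented for illustration:

  -- Build an index tailored to one order's columns...
  CREATE NONCLUSTERED INDEX IX_OrderTemp
      ON dbo.Demographics (HasBoat, HasPool, State);

  -- ...run the counts/selects for that order...
  SELECT COUNT(*)
  FROM dbo.Demographics
  WHERE HasBoat = 'Y' AND HasPool = 'Y' AND State = 'FL';

  -- ...then drop the index once the order has been delivered.
  DROP INDEX IX_OrderTemp ON dbo.Demographics;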
>>
>> Again, this is just instinct, but I feel you will have other
>> integrity / file size / performance / de-fragmentation / torn page
>> issues, etc., with such enormous data and enormous numbers of
>> indexes.
>>
>> What do you think?
>>
>> Mark
>
> _______________________________________________
> dba-SQLServer mailing list
> dba-SQLServer at databaseadvisors.com
> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
> http://www.databaseadvisors.com
>


