[dba-SQLServer] Add records to large file

Bobby Heid bheid at sc.rr.com
Sun Dec 20 17:57:41 CST 2009


John,

If you are going to insert 50 million records into a table, then it should
be much faster to create the table without the indexes, then insert the
data, then create the indexes.  If the indexes exist before the data is
loaded, SQL Server has to update every index each time a record is
inserted.
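A minimal T-SQL sketch of that load-then-index approach (table, column,
and database names are illustrative, not from this thread):

```sql
-- Hypothetical names for illustration only.
-- 1. Create the target table as a heap (no indexes yet).
CREATE TABLE dbo.BigTable (
    PKID INT NOT NULL,
    Col1 VARCHAR(50) NULL
    -- ...remaining columns...
);

-- 2. Load the rows while the table has no indexes to maintain.
INSERT INTO dbo.BigTable (PKID, Col1)
SELECT PKID, Col1
FROM OldDb.dbo.BigTable;

-- 3. Build the index once, after the data is in place.
CREATE UNIQUE CLUSTERED INDEX IX_BigTable_PKID
    ON dbo.BigTable (PKID);
```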

Using a bulk load would be fastest.  IIRC, a bulk load into a heap (a
table with no indexes) can be minimally logged, which is another reason
to add the indexes after the load rather than before.
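For the bulk-load route, something like the following (the file path and
batch size are assumptions; minimal logging also depends on the
database's recovery model):

```sql
-- Hypothetical file path.  TABLOCK lets SQL Server use minimal
-- logging when loading into a heap under the SIMPLE or
-- BULK_LOGGED recovery model.
BULK INSERT dbo.BigTable
FROM 'C:\data\bigtable.dat'
WITH (TABLOCK, BATCHSIZE = 100000);
```

The same effect can be had from the command line with the bcp utility.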

Bobby

-----Original Message-----
From: dba-sqlserver-bounces at databaseadvisors.com
[mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of jwcolby
Sent: Friday, December 11, 2009 11:09 AM
To: Dba-Sqlserver
Subject: [dba-SQLServer] Add records to large file

I don't have a clustered index on my database-from-hell main table, the
50-million-record, 600-field table.

I am going to do that.  What I thought I would do is simply make a new
database, create a new table, 
and append the existing data into the new table.  There is a unique integer
PK which will be the key 
of the index.

My question is whether it is faster to create the clustered index first
and then insert the records, already sorted by PKID, or to load the
records first and then create the index.

Logically it seems that it would be faster to do the insert sorted with
the index already in place, but I have heard opinions that it is "often
faster" to drop the index and recreate it, i.e. build the index after
the records are in place.

-- 
John W. Colby
www.ColbyConsulting.com
_______________________________________________
dba-SQLServer mailing list
dba-SQLServer at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
http://www.databaseadvisors.com
