Michael Maddison
michael at ddisolutions.com.au
Mon Nov 8 23:53:55 CST 2004
John,

I'm not sure EM or DTS will do a transfer with indexes. I'm pretty sure it won't, actually. I think you will have to script the index creation manually.

What I'd do is create a script for each server (QA will do most of the work for you):

create table
go
append data
go
alter table (constraints)
go
create index
go

Then run it from the target server; that way it won't affect your main server too much. (There's a rough T-SQL sketch of this at the bottom of the post.)

I hope you're getting paid the big bucks, because if you think you've had troubles before, I think the fun is just about to begin ;-)))

I read your other posts as well, and I'm not sure if my comments will be all that helpful, as I suspect you'll need a fair amount of trial and error anyway to see what works best for the environment you have to work with...

However, I'm nervous about you normalising the big table. I predict you will only end up hampering performance. Joins on large tables, or many joins, take a considerable amount of time. If you need to create separate files for space reasons, have a look at Partitioned Views in BOL. This will allow you to split the table over multiple drives and multiple servers (rough sketch of that at the bottom as well).

I'd also seriously try and coerce the client into taking an OLAP approach, though this also has its drawbacks as there are so many damn columns. A cube with 600 dimensions would never finish processing, so maybe not :-)))

Just how dynamic do the queries have to be?

cheers

Michael M


Can SQL Server EM copy tables from a database in one SQL Server (Neo2) to another SQL Server (Neo1)?

I am trying to tear down the big 64 million record, 640 field table into sub-tables. The process consists of creating the tables, building an append query to copy the data from the big table to the new sub-table, then creating indexes on every field in the sub-table.

The process of creating the new tables is relatively easy and quick: manually copying the field defs into the paste buffer, then pasting them back in to a new table. The process of creating and executing the append query is relatively painless and fairly quick, though it does take a couple of hours (I think, never actually timed it) to copy the data. However, the process of creating the indexes is time consuming, both in the doing (manually in table design view) and particularly in the execution (saving the table with the new indexes, which can take overnight). The process of creating the indexes pretty much locks up the machine (80-100% CPU utilization) for many, many hours.

I would like to create the sub-tables all at once, then farm out the index creation to my other machines, but in order to do that I have to copy the data-filled tables to another machine, switch to that machine, create the indexes in design view, then save the table in design view, which starts the index creation process. I have no idea whether it is even possible to do this "table copy", but all indexes need to copy as well, at least when I copy them back to the main SQL Server.

On another note, how do I split the database into more files? I built it originally in 4 files, and then as I finished the import I shrank them down into two files (on two physical 250g disks). I am now pushing the size up such that I need to split them back out to at least one more 250g disk (one more file), and perhaps even two. I haven't discovered how to split the database into more physical files.

John W. Colby
www.ColbyConsulting.com
Contribute your unused CPU cycles to a good cause:
http://folding.stanford.edu/
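For what it's worth, a rough T-SQL sketch of the per-server script I mean. Every name here is a placeholder -- substitute the real definitions from the script QA generates for you -- and I'm assuming you pull the data across via a linked server to Neo2; DTS or BCP would do the data-move step just as well.

-- Run on the target server (Neo1). All names are placeholders.
CREATE TABLE dbo.tblSub1 (
    PK int NOT NULL,
    Field001 varchar(50) NULL,
    Field002 varchar(50) NULL
)
GO

-- Append the data, pulling it from the big table on Neo2 (linked server).
INSERT INTO dbo.tblSub1 (PK, Field001, Field002)
SELECT PK, Field001, Field002
FROM Neo2.BigDB.dbo.tblBig
GO

-- Constraints once the data is in.
ALTER TABLE dbo.tblSub1
    ADD CONSTRAINT PK_tblSub1 PRIMARY KEY CLUSTERED (PK)
GO

-- One nonclustered index per field you need to search on; repeat as required.
CREATE INDEX IX_tblSub1_Field001 ON dbo.tblSub1 (Field001)
CREATE INDEX IX_tblSub1_Field002 ON dbo.tblSub1 (Field002)
GO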
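On the partitioned view idea, a bare-bones sketch of a local one, assuming the rows can be ranged on a key (PK here). The member table names, filegroups and ranges are all invented; see the "Partitioned Views" topic in BOL for the full rules (non-overlapping CHECK constraints, partitioning column in the primary key, and so on).

-- Each member table can sit on its own filegroup (and hence its own drive),
-- or on another server entirely for a distributed partitioned view.
-- Filegroups FG1 and FG2 are assumed to already exist.
CREATE TABLE dbo.tblBig_1 (
    PK int NOT NULL CHECK (PK BETWEEN 1 AND 32000000),
    Field001 varchar(50) NULL,
    CONSTRAINT PK_tblBig_1 PRIMARY KEY (PK)
) ON FG1
GO

CREATE TABLE dbo.tblBig_2 (
    PK int NOT NULL CHECK (PK BETWEEN 32000001 AND 64000000),
    Field001 varchar(50) NULL,
    CONSTRAINT PK_tblBig_2 PRIMARY KEY (PK)
) ON FG2
GO

-- The view glues the pieces back together; with trusted CHECK constraints
-- the optimizer can skip member tables that a WHERE clause on PK rules out.
CREATE VIEW dbo.vwBig
AS
SELECT PK, Field001 FROM dbo.tblBig_1
UNION ALL
SELECT PK, Field001 FROM dbo.tblBig_2
GO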
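And on your question at the end about splitting the database over more physical files: ALTER DATABASE ... ADD FILE is the usual way to put another data file on the new 250g drive. The database name, file names, path and sizes below are made up; existing data stays where it is, but new allocations get spread across all the files in the filegroup.

-- Add a third data file on the new drive (names and sizes are placeholders).
ALTER DATABASE BigDB
ADD FILE (
    NAME = BigDB_Data3,
    FILENAME = 'F:\MSSQL\Data\BigDB_Data3.ndf',
    SIZE = 51200MB,
    FILEGROWTH = 2048MB
)
GO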
_______________________________________________
dba-SQLServer mailing list
dba-SQLServer at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
http://www.databaseadvisors.com