jwcolby
jwcolby at colbyconsulting.com
Sat Oct 6 12:57:28 CDT 2012
I am doing tests where I defrag my indexes on some databases where they
are badly fragmented, using the script from here:
http://ola.hallengren.com

It works nicely, but it expands the database quite a lot: approximately
37% on one of my smaller databases, and approximately 32% on another,
much larger db. Even so, given how much work my 'migration' is, this is
probably preferable for all but the very largest databases. (A sketch of
the call is below the quoted message.)

John W. Colby
Colby Consulting

Reality is what refuses to go away
when you do not believe in it

On 10/4/2012 11:13 AM, jwcolby wrote:
> I am migrating a database to a new container. My databases tend to be
> hundreds of gigabytes, and I can often cut the size in half by
> migrating. I also use page compression, because it allows me to get
> much more of the data into memory, and I have cores to throw at the
> decompression task.
>
> I use the simple recovery model because the databases are read-mostly,
> and that works for me.
>
> So I am migrating a couple of tables, each of which contains 225
> million records. The first had 277 fields but very few indexes. It
> took a long time to move the data, and the log file was huge:
> literally over two hundred gigabytes.
>
> The second table has only 22 fields but indexes covering many of
> them. Moving this table is taking much longer, and it appears to be
> the indexing that is taking all of the time. The log file is 450
> gigabytes and growing. I have another 450 GB of space, so I do not
> anticipate running out before it finishes, but I am wondering why it
> takes so much room. I thought that SQL Server committed work and then
> reclaimed the space in the log file to use for the next operation.
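For reference, here is a minimal sketch of invoking the index
maintenance procedure from http://ola.hallengren.com. The parameter
names below are the stock IndexOptimize defaults as I understand them;
check the documentation on that site for your version. Note that a
rebuild writes a complete new copy of each index before dropping the
old pages, which is consistent with the 32-37% growth described above.

    EXECUTE dbo.IndexOptimize
        @Databases = 'USER_DATABASES',
        -- Below the level-1 threshold: leave the index alone.
        @FragmentationLow = NULL,
        -- Between the thresholds: reorganize, fall back to rebuilds.
        @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
        -- Above the level-2 threshold: rebuild (online needs Enterprise).
        @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
        @FragmentationLevel1 = 5,
        @FragmentationLevel2 = 30;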
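On the page compression mentioned in the quoted message, a minimal
sketch using standard SQL Server 2008+ syntax; dbo.BigTable is a
placeholder name, not a table from this thread:

    -- Estimate the savings first, without touching the data:
    EXECUTE sp_estimate_data_compression_savings
        @schema_name      = 'dbo',
        @object_name      = 'BigTable',
        @index_id         = NULL,
        @partition_number = NULL,
        @data_compression = 'PAGE';

    -- Rebuild the heap or clustered index with page compression;
    -- nonclustered indexes must be rebuilt separately via ALTER INDEX.
    ALTER TABLE dbo.BigTable REBUILD WITH (DATA_COMPRESSION = PAGE);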
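On the log question at the end of the quote: in the simple recovery
model the log is indeed truncated at checkpoints, but only behind the
oldest open transaction. A single INSERT ... SELECT of 225 million
rows, or a single index build, is one transaction, so all of its log
records must fit in the log before any of that space can be reused;
loading in batches keeps the active transaction, and therefore the
log, small. A diagnostic sketch (standard commands, nothing specific
to this migration):

    -- How full is each database's log file right now?
    DBCC SQLPERF (LOGSPACE);

    -- What, if anything, is preventing log reuse per database?
    SELECT name, recovery_model_desc, log_reuse_wait_desc
    FROM sys.databases;

    -- In simple recovery, a manual checkpoint marks the space from
    -- committed transactions as reusable:
    CHECKPOINT;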