From davidmcafee at gmail.com Wed Oct 3 12:49:12 2012 From: davidmcafee at gmail.com (David McAfee) Date: Wed, 3 Oct 2012 10:49:12 -0700 Subject: [dba-SQLServer] SSMS 2012, different databases on tabs Message-ID: Is anyone else playing with SQL Server 2012 yet? In 2005, you could open a tab and enter some TSQL against databaseA You could then open a New Query tab and do something else against databaseB. When you would switch back to the first tab, the drop down with the database name would switch back to database A and vice versa. This doesn't seem to be the case with SSMS 2012. Is there a setting that I can't see? Thanks, David From fhtapia at gmail.com Wed Oct 3 12:51:53 2012 From: fhtapia at gmail.com (Francisco Tapia) Date: Wed, 3 Oct 2012 10:51:53 -0700 Subject: [dba-SQLServer] SSMS 2012, different databases on tabs In-Reply-To: References: Message-ID: <-3919432368645617608@unknownmsgid> Wow!! That seems dangerous!! Sent from my mobile device On Oct 3, 2012, at 10:50 AM, David McAfee wrote: > Is anyone else playing with SQL Server 2012 yet? > > In 2005, you could open a tab and enter some TSQL against databaseA > You could then open a New Query tab and do something else against > databaseB. > > When you would switch back to the first tab, the drop down with the > database name would switch back to database A and vice versa. > > This doesn't seem to be the case with SSMS 2012. > > Is there a setting that I can't see? > > Thanks, > David > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > From davidmcafee at gmail.com Wed Oct 3 13:02:09 2012 From: davidmcafee at gmail.com (David McAfee) Date: Wed, 3 Oct 2012 11:02:09 -0700 Subject: [dba-SQLServer] SSMS 2012, different databases on tabs In-Reply-To: <-3919432368645617608@unknownmsgid> References: <-3919432368645617608@unknownmsgid> Message-ID: Exactly! 
I'm working on one, and get a call to modify something in another database. I go back to the code I was working on and hit F5 and get a "table doesn't exist" message, but it could easily have been a delete or update script. Scary! :) D On Wed, Oct 3, 2012 at 10:51 AM, Francisco Tapia wrote: > Wow!! That seems dangerous!! > > Sent from my mobile device > > On Oct 3, 2012, at 10:50 AM, David McAfee wrote: > > > Is anyone else playing with SQL Server 2012 yet? > > > > In 2005, you could open a tab and enter some TSQL against databaseA > > You could then open a New Query tab and do something else against > > databaseB. > > > > When you would switch back to the first tab, the drop down with the > > database name would switch back to database A and vice versa. > > > > This doesn't seem to be the case with SSMS 2012. > > > > Is there a setting that I can't see? > > > > Thanks, > > David > > _______________________________________________ > > dba-SQLServer mailing list > > dba-SQLServer at databaseadvisors.com > > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > > http://www.databaseadvisors.com > > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > From davidmcafee at gmail.com Wed Oct 3 13:07:01 2012 From: davidmcafee at gmail.com (David McAfee) Date: Wed, 3 Oct 2012 11:07:01 -0700 Subject: [dba-SQLServer] SSMS 2012, different databases on tabs In-Reply-To: References: <-3919432368645617608@unknownmsgid> Message-ID: OK, now this is weird. Now it is working the way it is supposed to. I tried to recreate the problem by going to database1 and opening a sproc from database2. It now changes the db name, it wasn't doing that before either. I'm going to have to keep an eye on it and see if I am doing something to make this happen. It's happened to me a couple of times. 
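The wrong-database accident described above can be blunted by pinning the database context at the top of every script rather than trusting the SSMS dropdown; a minimal sketch, where the database and table names are placeholders:

```sql
-- Pin the database context explicitly so the statement runs against the
-- intended database no matter which tab or dropdown setting is active.
-- DatabaseA and dbo.Customers are placeholder names.
USE DatabaseA;
GO

UPDATE dbo.Customers
SET    IsActive = 0
WHERE  CustomerID = 42;
```

With the USE at the top, rerunning the tab with F5 after visiting another database still targets the database the script was written for.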
D On Wed, Oct 3, 2012 at 11:02 AM, David McAfee wrote: > Exactly! > > I'm working on one, and get a call to modify something in another database. > I go back to the code I was working on and hit F5 and get a "table doesn't > exist" message, but it could easily have been a delete or update script. > > Scary! > > :) > > D > > > On Wed, Oct 3, 2012 at 10:51 AM, Francisco Tapia wrote: > >> Wow!! That seems dangerous!! >> >> Sent from my mobile device >> >> On Oct 3, 2012, at 10:50 AM, David McAfee wrote: >> >> > Is anyone else playing with SQL Server 2012 yet? >> > >> > In 2005, you could open a tab and enter some TSQL against databaseA >> > You could then open a New Query tab and do something else against >> > databaseB. >> > >> > When you would switch back to the first tab, the drop down with the >> > database name would switch back to database A and vice versa. >> > >> > This doesn't seem to be the case with SSMS 2012. >> > >> > Is there a setting that I can't see? >> > >> > Thanks, >> > David >> >> From jwcolby at colbyconsulting.com Thu Oct 4 10:13:06 2012 From: jwcolby at colbyconsulting.com (jwcolby) Date: Thu, 04 Oct 2012 11:13:06 -0400 Subject: [dba-SQLServer] log file size Message-ID: <506DA782.2080901@colbyconsulting.com> I am migrating a database to a new container. My databases tend to be hundreds of gigabytes and often times I can cut the size in half by migration. I also use page compression because it allows me to get much more of the data into memory and I have cores to throw at the decompression task. I make the recovery model simple because they are read-mostly and that works for me. So I am migrating a couple of tables, each of which contains 225 million records. The first was 277 fields but very few indexes. It took a looong time to move the data and the log file was huge, literally over two hundred gigabytes. The second table was only 22 fields but has indexes covering many of the fields. 
Moving this table is taking much longer and it appears to be the indexing that is taking all of the time. The log file is 450 gigabytes and growing. I have another 450 gb of space so I do not anticipate running out before it finishes but I am wondering why it takes so much room? I thought that SQL Server committed stuff and then reclaimed the space in the log file to use in the next operation. -- John W. Colby Colby Consulting Reality is what refuses to go away when you do not believe in it From fuller.artful at gmail.com Thu Oct 4 12:03:12 2012 From: fuller.artful at gmail.com (Arthur Fuller) Date: Thu, 4 Oct 2012 13:03:12 -0400 Subject: [dba-SQLServer] log file size In-Reply-To: <506DA782.2080901@colbyconsulting.com> References: <506DA782.2080901@colbyconsulting.com> Message-ID: John, A few questions. 1. What exactly do you mean by "migrate"? Do you really mean "move", or are you actually migrating, as in from one version or instance to another? There's a large difference between them. 2. Since your database is, except for the occasional mass-update, for all intents and purposes, R/O, why bother with the log in either your backups or your migration or move? Arthur On Thu, Oct 4, 2012 at 11:13 AM, jwcolby wrote: > I am migrating a database to a new container. My databases tend to be > hundreds of gigabytes and often times I can cut the size in half by > migration. I also use page compression because it allows me to get much > more of the data into memory and I have cores to throw at the decompression > task. > > I make the recovery model simple because they are read-mostly and that > works for me. > > So I am migrating a couple of tables, each of which contains 225 million > records. The first was 277 fields but very few indexes. It took a looong > time to move the data and the log file was huge, literally over two hundred > gigabytes. > > The second table was only 22 fields but has indexes covering many of the > fields. 
Moving this table is taking much longer and it appears to be the > indexing that is taking all of the time. The log file is 450 gigabytes and > growing. I have another 450 gb of space so I do not anticipate running out > before it finishes but I am wondering why it takes so much room? I thought > that SQL Server committed stuff and then reclaimed the space in the log > file to use in the next operation. > > From jwcolby at colbyconsulting.com Thu Oct 4 13:49:55 2012 From: jwcolby at colbyconsulting.com (jwcolby) Date: Thu, 04 Oct 2012 14:49:55 -0400 Subject: [dba-SQLServer] log file size In-Reply-To: References: <506DA782.2080901@colbyconsulting.com> Message-ID: <506DDA53.3060807@colbyconsulting.com> > 1. What exactly do you mean by "migrate"? Do you really mean "move", or are you actually migrating, as in from one version or instance to another? There's a large difference between them. 1) I think I am not doing either by your standards, it is really just recreating the database from scratch, moving the data from old to new. I am scripting every object in db ABC and recreating that in (new) DB XYZ. I then append the data from the tables in ABC to the same tables in XYZ. Not being a guru I don't know a more efficient way to do this. I have 900 gbytes of RAID 6 SSD which I hold these things on. I had two specific databases which had grown to almost 500 gigs all by themselves. By 'shrinking' them down as described above I got the total file size for the two back under 200g, a worthwhile task given my limited SSD space. >why bother with the log in either your backups or your migration or move? 2) I wasn't aware that you could not have a log file. AFAIK when I create the db the process automatically creates the db file and the log file simultaneously. If I delete the log file (after detaching that is possible with a 'simple' database) it automatically creates a new one. I pretty much come along behind and shrink the logs after major update operations. 
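The shrink-the-log-afterwards step John mentions can be sketched like this; 'MyDb' and 'MyDb_log' are placeholder names (the real logical log file name is listed in sys.database_files):

```sql
-- Under the SIMPLE recovery model, a checkpoint marks the inactive part
-- of the log as reusable; DBCC SHRINKFILE then releases physical space.
-- 'MyDb' and 'MyDb_log' are placeholders.
USE MyDb;
GO
CHECKPOINT;
DBCC SHRINKFILE (MyDb_log, 1024);  -- target size in MB
```

Note the log can only shrink past pages that are no longer part of an active transaction, which is why a single huge index build keeps the whole span of log alive until it commits.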
I have been told to never shrink the data files because to do so fragments them, so I do this 'data migration' process if the data files ever get outsized with major empty space. That doesn't happen often because these are 'read-mostly' but once in a while I have to do something which balloons them up. John W. Colby Colby Consulting Reality is what refuses to go away when you do not believe in it On 10/4/2012 1:03 PM, Arthur Fuller wrote: > John, > > A few questions. > > 1. What exactly do you mean by "migrate"? Do you really mean "move", or are > you actually migrating, as in from one version or instance to another? > There's a large difference between them. > 2. Since your database is, except for the occasional mass-update, for all > intents and purposes, R/O, why bother with the log in either your backups > or your migration or move? > > Arthur > From fhtapia at gmail.com Fri Oct 5 13:51:20 2012 From: fhtapia at gmail.com (Francisco Tapia) Date: Fri, 5 Oct 2012 11:51:20 -0700 Subject: [dba-SQLServer] log file size In-Reply-To: <506DDA53.3060807@colbyconsulting.com> References: <506DA782.2080901@colbyconsulting.com> <506DDA53.3060807@colbyconsulting.com> Message-ID: John, While performing a shrinkfile can cause OS file-level fragmentation, unless you are pre-growing your database files you're already causing file fragmentation. Every time Windows creates a new file, its NTFS filesystem will be super efficient and begin writing to all available blocks, even when they are not contiguous... it's the inherent nature of the filesystem. In order to avoid fragmentation at all costs, you will want to perform an OS-level defragmentation prior to creating any new large file in Windows. 
You also need to grow your database files to the optimum size, so if your db is 700gb you want to pre-grow to a suitable level slightly above that (since it's mostly read-only). You can properly defrag your existing db w/o having to move your data into a new db container, but it will require some downtime and defragging of your RAID volume. It is up to you to believe what you want when it comes to RAID5/6 and OS-level fragmentation, but the general consensus says that the OS still treats this as a single volume and therefore a defrag on this volume will yield positive results. Downtime... that is generally why most people do not defrag; in order to defrag the database files at the OS level, you need to take the Sql Server engine offline, otherwise those files are in use. So to be the most effective on your read-only database, you will want to 1) take the system offline, 2) perform an OS-level defrag, 3) then follow that up with a database defrag (indexes etc), 4) perform a shrinkfile, 5) truncate your transaction log to acceptable levels, 6) take the system offline a second time, and 7) defrag at the OS level once more. * Defragging at the end here is simply housekeeping: since you've moved, defragged, and shrunk files, you will want contiguous space any time your log or db files need to grow. If your database is a read-only database you should not experience any growth; if you are adding data to your database infrequently, then you will want to manually pre-grow the database file before you import new records. All other temp selections should be done to ##temp tables OR, if you need to keep the data longer, to a separate container that you can return to, such as ResultsDB; that way ResultsDB will grow and shrink as you dump report data to it. It seems that you are unnecessarily introducing a chance to corrupt things the way you are doing things. -Francisco -------------------------- You should follow me on twitter here Blogs: SqlThis! | XCodeThis! 
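The pre-growing step recommended above might look like the following; the database and logical file names are placeholders and the sizes are purely illustrative:

```sql
-- Grow the data and log files once, up front, instead of relying on
-- repeated autogrow events that scatter the file across the disk.
-- MyDb, MyDb_data, and MyDb_log are placeholder names; check
-- sys.database_files for the actual logical names.
ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_data, SIZE = 750GB);

ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_log, SIZE = 50GB);
```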
On Thu, Oct 4, 2012 at 11:49 AM, jwcolby wrote: > > 1. What exactly do you mean by "migrate"? Do you really mean "move", or > are you actually migrating, as in from one version or instance to another? > There's a large difference between them. > > 1) I think I am not doing either by your standards, it is really just > recreating the database from scratch, moving the data from old to new. > > I am scripting every object in db ABC and recreating that in (new) DB XYZ. > I then append the data from the tables in ABC to the same tables in XYZ. > Not being a guru I don't know a more efficient way to do this. > > I have 900 gbytes of RAID 6 SSD which I hold these things on. I had two > specific databases which had grown to almost 500 gigs all by themselves. > By 'shrinking' them down as described above I got the total file size for > the two back under 200g, a worthwhile task given my limited SSD space. > > > >why bother with the log in either your backups or your migration or move? > > 2) I wasn't aware that you could not have a log file. AFAIK when I create > the db the process automatically creates the db file and the log file > simultaneously. If I delete the log file (after detaching that is possible > with a 'simple' database) it automatically creates a new one. > > I pretty much come along behind and shrink the logs after major update > operations. I have been told to never shrink the data files because to do > so fragments them so I do this 'data migration' process if the data files > ever get outsized with major empty space. That doesn't happen often > because these are 'read-mostly' but once in awhile I have to do something > which balloons them up. > > > John W. Colby > Colby Consulting > > Reality is what refuses to go away > when you do not believe in it > > On 10/4/2012 1:03 PM, Arthur Fuller wrote: > >> John, >> >> A few questions. >> >> 1. What exactly do you mean by "migrate"? 
Do you really mean "move", or >> are >> you actually migrating, as in from one version or instance to another? >> There's a large difference between them. >> 2. Since your database is, except for the occasional mass-update, for all >> intents and purposes, R/O, why bother with the log in either your backups >> or your migration or move? >> >> Arthur >> >> > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > From stuart at lexacorp.com.pg Fri Oct 5 17:06:02 2012 From: stuart at lexacorp.com.pg (Stuart McLachlan) Date: Sat, 06 Oct 2012 08:06:02 +1000 Subject: [dba-SQLServer] log file size In-Reply-To: References: <506DA782.2080901@colbyconsulting.com>, <506DDA53.3060807@colbyconsulting.com>, Message-ID: <506F59CA.8470.3D40A48A@stuart.lexacorp.com.pg> JC says he is using 900GB of SSD In which case, the whole concept of fragmentation is immaterial. He shouldn't even think of doing an OS defrag. -- Stuart On 5 Oct 2012 at 11:51, Francisco Tapia wrote: > John, > While performing a shrinkfile can cause OS file-level fragmentation, > unless you are pre-growing your database files you're already causing file > fragmentation. Every time Windows creates a new file, its NTFS filesystem will > be super efficient and begin writing to all available blocks, even when > they are not contiguous... it's the inherent nature of the filesystem. In > order to avoid fragmentation at all costs, you will want to perform an OS > level defragmentation prior to creating any new large file in > Windows. 
From stuart at lexacorp.com.pg Fri Oct 5 17:12:26 2012 From: stuart at lexacorp.com.pg (Stuart McLachlan) Date: Sat, 06 Oct 2012 08:12:26 +1000 Subject: [dba-SQLServer] log file size In-Reply-To: <506DDA53.3060807@colbyconsulting.com> References: <506DA782.2080901@colbyconsulting.com>, , <506DDA53.3060807@colbyconsulting.com> Message-ID: <506F5B4A.13961.3D4683B1@stuart.lexacorp.com.pg> Fragmentation is not a problem with SSD disks. Go ahead and shrink the data files. (google "SSD fragmentation") -- Stuart On 4 Oct 2012 at 14:49, jwcolby wrote: > I pretty much come along behind and shrink the logs after major update operations. I have been told > to never shrink the data files because to do so fragments them so I do this 'data migration' process > > John W. Colby > Colby Consulting From fhtapia at gmail.com Fri Oct 5 17:53:28 2012 From: fhtapia at gmail.com (Francisco Tapia) Date: Fri, 5 Oct 2012 15:53:28 -0700 Subject: [dba-SQLServer] log file size In-Reply-To: <506F59CA.8470.3D40A48A@stuart.lexacorp.com.pg> References: <506DA782.2080901@colbyconsulting.com> <506DDA53.3060807@colbyconsulting.com> <506F59CA.8470.3D40A48A@stuart.lexacorp.com.pg> Message-ID: Excellent point... I didn't remember he was all SSD'ed out... In that case simply defragging the indexes should suffice, but if that is not important just performing a db shrinkfile will be enough and he can save all the overhead of dropping data into a new container... Doing a defrag on SSDs isn't good practice, but I have noticed that index defragging on these volumes still does yield improved performance. We use a NetApp SAN, which is really a hybrid SSD w/ spinning disk array; generally a SAN handles all defragmentation internally and away from the OS, but as I stated, index defragging is huge in keeping performance. -Francisco -------------------------- You should follow me on twitter here Blogs: SqlThis! | XCodeThis! 
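The index defragging discussed above is typically driven from sys.dm_db_index_physical_stats; a minimal sketch, where the table name is a placeholder and the thresholds are the commonly cited rules of thumb:

```sql
-- List indexes in the current database whose logical fragmentation
-- exceeds 30%, the usual rebuild threshold (5-30% suggests REORGANIZE).
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30;

-- Rebuild everything on one table; dbo.MyBigTable is a placeholder.
ALTER INDEX ALL ON dbo.MyBigTable REBUILD;
```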
On Fri, Oct 5, 2012 at 3:06 PM, Stuart McLachlan wrote: > JC says he is using 900GB of SSD > > In which case, the whole concept of fragmentation is immaterial. > > He shouldn't even think of doing an OS defrag. > > > -- > Stuart > > On 5 Oct 2012 at 11:51, Francisco Tapia wrote: > > > John, > > > While performing a shrinkfile can cause OS file-level fragmentation, > > > unless you are pre-growing your database files you're already causing > file > > > fragmentation. Every time Windows creates a new file, its NTFS filesystem > will > > > be super efficient and begin writing to all available blocks, > even when > > > they are not contiguous... it's the inherent nature of the > > filesystem. In > > > order to avoid fragmentation at all costs, you will want to > perform an OS > > > level defragmentation prior to creating any new large file > in Windows. > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > From ab-mi at post3.tele.dk Fri Oct 5 18:31:16 2012 From: ab-mi at post3.tele.dk (Asger Blond) Date: Sat, 6 Oct 2012 01:31:16 +0200 Subject: [dba-SQLServer] log file size References: <506DA782.2080901@colbyconsulting.com> <506DDA53.3060807@colbyconsulting.com> <506F59CA.8470.3D40A48A@stuart.lexacorp.com.pg> Message-ID: Then the problem seems to narrow down to the way JC is inserting the rows. To prevent excessive logging and execution time I would suggest executing this statement before the insert: ALTER TABLE <tablename> NOCHECK CONSTRAINT ALL. And after the insert use: ALTER TABLE <tablename> CHECK CONSTRAINT ALL. Asger ----- Original message ----- > From: Francisco Tapia > To: Discussion concerning MS SQL Server > > Date: Sat, 06 Oct 2012 00:53 > Subject: Re: [dba-SQLServer] log file size > > excellent point... I didn't remember he was all SSD'ed out... 
in that > case > simply defragging the indexes should suffice, but if that is not > important > just peforming a db shrinkfile will be enough and he can save all the > overhead of droping data into a new container... > > doing an defrag on SSD's isn't good practice, but I have noticed that > index > defragging on these volumes still does yield improved performance, we > use a > NetApp SAN, which is really a hybrid SSD w/ spining disk array, > generally a > SAN handles all defragmentation internally and away from the OS, but > as I > stated index defragging is huge in keeping performance. > > -Francisco > -------------------------- > You should follow me on twitter here > Blogs: SqlThis! | > XCodeThis! > > > > > > > On Fri, Oct 5, 2012 at 3:06 PM, Stuart McLachlan > wrote: > > > JC says he is using 900GB o\f SSD > > > > In which case, the whole concept of fragmentation is immaterail. > > > > He shouldn't even think doing an OS defrag. > > > > > > -- > > Stuart > > > > On 5 Oct 2012 at 11:51, Francisco Tapia wrote: > > > > > John, > > > While performing a shrinkfile can cause OS file level > fragmentation, > > > unless you are pre-growing your database files, you've already > causing > > file > > > fragmentation, everytime Windows creates a new file, it's NTFS > system > > will > > > be super efficient and begin writing to all available blocks, > even when > > > they are not contiguous... it's the inherent nature of the > > filesystem...in > > > order to avoid fragmentation at all cost, you will want to > perform an OS > > > level defragmentation prior to creating any new large file on the > OS in > > > windows. 
> > _______________________________________________ > > dba-SQLServer mailing list > > dba-SQLServer at databaseadvisors.com > > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > > http://www.databaseadvisors.com > > > > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com From jwcolby at colbyconsulting.com Fri Oct 5 22:28:30 2012 From: jwcolby at colbyconsulting.com (jwcolby) Date: Fri, 05 Oct 2012 23:28:30 -0400 Subject: [dba-SQLServer] log file size In-Reply-To: References: <506DA782.2080901@colbyconsulting.com> <506DDA53.3060807@colbyconsulting.com> <506F59CA.8470.3D40A48A@stuart.lexacorp.com.pg> Message-ID: <506FA55E.2040406@colbyconsulting.com> I am confused though. I thought I was told long ago that when you shrank a dbf file it literally started at the end moving the last record to the front, trying to create empty space at the end of the file so that it could be shrunk by pulling in the end of the file. So when you were done the dbf was all hosed internally in terms of record order. All of which makes no sense if you are doing a clustered index, where the data records themselves are stored in literal order. I always use an autoincrement PK on which is the clustered index. So I fail to see how you can take records and insert them "out of order" if by definition they have to stay in order. Obviously I do not understand all I know about this situation. So if I do end up with large free space I do this "data migration" to a brand new container, order by the PK, clustered index on the PKID etc. I do end up with .2% fragmentation or something and almost no free space so it appears that I achieve the intended result at the logical (database structure) level, though it may indeed be fragmented at the physical level. However as mentioned I am putting them out on an SSD RAID array / volume. 
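The recreate-and-append pattern John describes, sketched as a single ordered insert; the database, table, and column names are placeholders, and note that under the SIMPLE recovery model an insert with a TABLOCK hint into an empty clustered table can qualify for minimal logging:

```sql
-- Script the table (with its clustered PK on PKID) into the new
-- database XYZ first, then load in clustered-key order so the pages
-- are written contiguously. ABC, XYZ, and the columns are placeholders.
INSERT INTO XYZ.dbo.BigTable WITH (TABLOCK)
       (PKID, FName, LName, Addr)
SELECT PKID, FName, LName, Addr
FROM   ABC.dbo.BigTable
ORDER BY PKID;
```

Building the nonclustered indexes after the load, rather than before, avoids logging every index maintenance operation row by row during the copy.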
I use page compression which results in anywhere from 15% - 45% compression depending on the data and structure under discussion. The biggest reason I do that is that the data is pulled off the disks compressed, and stored in memory compressed, which means that I can get that much more of the data in memory at once. I have a lot of memory but these tables are huge obviously and I am almost always joining two huge tables together and pulling result sets. So the more I can keep loaded the better, even given the SSD storage. IOW I don't know jack about SQL Server so throw hardware at it. ;) John W. Colby Colby Consulting Reality is what refuses to go away when you do not believe in it On 10/5/2012 7:31 PM, Asger Blond wrote: > Then the problem seems to narrow down to the way JC is inserting the > rows. To prevent excessive logging and execution time I would suggest > executing this statement before the insert: ALTER TABLE <tablename> > NOCHECK CONSTRAINT ALL. And after the insert use: ALTER TABLE <tablename> > CHECK CONSTRAINT ALL. > Asger > ----- Original message ----- > >> From: Francisco Tapia >> To: Discussion concerning MS SQL Server >> >> Date: Sat, 06 Oct 2012 00:53 >> Subject: Re: [dba-SQLServer] log file size >> >> excellent point... I didn't remember he was all SSD'ed out... in that >> case >> simply defragging the indexes should suffice, but if that is not >> important >> just performing a db shrinkfile will be enough and he can save all the >> overhead of dropping data into a new container... >> >> doing a defrag on SSDs isn't good practice, but I have noticed that >> index >> defragging on these volumes still does yield improved performance, we >> use a >> NetApp SAN, which is really a hybrid SSD w/ spinning disk array, >> generally a >> SAN handles all defragmentation internally and away from the OS, but >> as I >> stated index defragging is huge in keeping performance. >> >> -Francisco >> -------------------------- >> You should follow me on twitter here >> Blogs: SqlThis! 
| >> XCodeThis! >> >> >> >> >> >> >> On Fri, Oct 5, 2012 at 3:06 PM, Stuart McLachlan >> wrote: >> >>> JC says he is using 900GB o\f SSD >>> >>> In which case, the whole concept of fragmentation is immaterail. >>> >>> He shouldn't even think doing an OS defrag. >>> >>> >>> -- >>> Stuart >>> >>> On 5 Oct 2012 at 11:51, Francisco Tapia wrote: >>> >>>> John, >>>> While performing a shrinkfile can cause OS file level >> fragmentation, >>>> unless you are pre-growing your database files, you've already >> causing >>> file >>>> fragmentation, everytime Windows creates a new file, it's NTFS >> system >>> will >>>> be super efficient and begin writing to all available blocks, >> even when >>>> they are not contiguous... it's the inherent nature of the >>> filesystem...in >>>> order to avoid fragmentation at all cost, you will want to >> perform an OS >>>> level defragmentation prior to creating any new large file on the >> OS in >>>> windows. >>> _______________________________________________ >>> dba-SQLServer mailing list >>> dba-SQLServer at databaseadvisors.com >>> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver >>> http://www.databaseadvisors.com >>> >>> >> _______________________________________________ >> dba-SQLServer mailing list >> dba-SQLServer at databaseadvisors.com >> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver >> http://www.databaseadvisors.com > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > From jwcolby at colbyconsulting.com Sat Oct 6 10:49:57 2012 From: jwcolby at colbyconsulting.com (jwcolby) Date: Sat, 06 Oct 2012 11:49:57 -0400 Subject: [dba-SQLServer] Data dictionary to Report Message-ID: <50705325.2010701@colbyconsulting.com> I have a database, 225 million rows, 240 columns. 
Some of the fields apparently map to another database / marketing 'standard' somewhere in the world. My client has provided a spreadsheet which tells me that field ABC of my table 'matches' field XYZ of this other database, sometimes with a 'where value = xyz'. He wants data element counts, i.e. what are the codes and the counts of those codes in our table / fields matching this other database: how many records (the count) in our database have data in the specified field or matching the Where(). There are 225 of these counts in the spreadsheet he provided to me. He wants to be able to provide me a spreadsheet like this and match it against this db but also match it against other databases that we have, one spreadsheet for each of our databases, defining field matches. Obviously this is simply a groupby / count on each specified field, or in some cases a count / where(value = xyz). I have to present the data back to him somehow. My 'programmer' response is to design an application where I can import the spreadsheet he provides into a new table in each database that he gives me a spreadsheet for. He tells me the field name in our db that he wants the counts for, so my application would then generate the SQL statement to do the SELECT / GroupBy / count, execute the count, get the results back into C#, create a sheet in a workbook, and dump the data for each field into a sheet of a spreadsheet. Alternatively (and more realistically) denormalize the count into a comma-delimited list and write the count back into the SQL Server table I just created, then paste the result back into the original spreadsheet he provided in a new column, or generate a new spreadsheet from the table. I don't want to launch into developing this application if SQL Server can do this for me natively. Does SQL Server have this functionality natively? Can I just somehow generate a report of this? Can I push a denormalized string back to an Excel spreadsheet that the client provided? 
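The groupby / count generation, at least, can be done natively with dynamic SQL driven from the imported mapping table; a sketch under assumed names, where dbo.FieldMap (the imported spreadsheet) and dbo.BigTable (the 225-million-row table) are hypothetical:

```sql
-- For each of our fields named in the mapping table, emit the distinct
-- codes and their counts. dbo.FieldMap(OurField) and dbo.BigTable are
-- placeholder names standing in for the imported spreadsheet table and
-- the large source table.
DECLARE @col sysname, @sql nvarchar(max);

DECLARE field_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT OurField FROM dbo.FieldMap;

OPEN field_cur;
FETCH NEXT FROM field_cur INTO @col;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'SELECT ' + QUOTENAME(@col) + N' AS code, COUNT(*) AS cnt '
             + N'FROM dbo.BigTable GROUP BY ' + QUOTENAME(@col)
             + N' ORDER BY cnt DESC;';
    EXEC sp_executesql @sql;
    FETCH NEXT FROM field_cur INTO @col;
END
CLOSE field_cur;
DEALLOCATE field_cur;
```

Each EXEC returns one result set per mapped field; collecting those into a denormalized string or an Excel sheet would still need a small client-side step, which is where the C# application idea comes back in.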
It seems unlikely to me. -- John W. Colby Colby Consulting Reality is what refuses to go away when you do not believe in it From jwcolby at colbyconsulting.com Sat Oct 6 12:57:28 2012 From: jwcolby at colbyconsulting.com (jwcolby) Date: Sat, 06 Oct 2012 13:57:28 -0400 Subject: [dba-SQLServer] log file size In-Reply-To: <506DA782.2080901@colbyconsulting.com> References: <506DA782.2080901@colbyconsulting.com> Message-ID: <50707108.4070301@colbyconsulting.com> I am doing tests where I defrag my indexes on some databases where they are badly fragmented, using the script from here: http://ola.hallengren.com It works nicely, but expands the database "a lot", approximately 37% on one of my smaller databases, approx 32% on another much larger db. Even so, given how much work my 'migration' is, this is probably preferable on all but the very largest databases. John W. Colby Colby Consulting Reality is what refuses to go away when you do not believe in it On 10/4/2012 11:13 AM, jwcolby wrote: > I am migrating a database to a new container. My databases tend to be hundreds of gigabytes and > often times I can cut the size in half by migration. I also use page compression because it allows > me to get much more of the data into memory and I have cores to throw at the decompression task. > > I make the recovery model simple because they are read-mostly and that works for me. > > So I am migrating a couple of tables, each of which contains 225 million records. The first was 277 > fields but very few indexes. It took a looong time to move the data and the log file was huge, > literally over two hundred gigabytes. > > The second table was only 22 fields but has indexes covering many of the fields. Moving this table > is taking much longer and it appears to be the indexing that is taking all of the time. The log > file is 450 gigabytes and growing. I have another 450 gb of space so I do not anticipate running > out before it finishes but I am wondering why it takes so much room? 
I thought that SQL Server > committed stuff and then reclaimed the space in the log file to use in the next operation. > >

From jwcolby at colbyconsulting.com Mon Oct 8 05:37:07 2012 From: jwcolby at colbyconsulting.com (jwcolby) Date: Mon, 08 Oct 2012 06:37:07 -0400 Subject: [dba-SQLServer] How does SQL Server do updates Message-ID: <5072ACD3.5060201@colbyconsulting.com>

Every month I perform a process where name / address records are extracted from SQL Server, updated by a third party app, and then only changed data is updated back into SQL Server. I take changed addresses and write them to an 'old address' table and then literally update the existing record with the changes.

The address table has an integer PK (which never changes) which came from, and is related 1 to 1 to, another table, and that PK is used by itself in a clustered index. There are other indexes for FName, LName, Addr etc., my hash fields and so forth.

So, I pull the updated address info into a custom SQL Server DB created on-the-fly for this purpose, and then only the changes are updated back into the live database. The PK and Name fields never change, but the address fields do change, and other fields which capture information about the address changes also change. The hash fields are updated (in the temp db) and written back into live etc.

I am trying to visualize what goes on behind the scenes in SQL Server in the live database. Most of the data fields are varchar(), the hash fields are varbinary(200). I assume that the data is moved around inside of the dbf file, i.e. moved out to new space on the end if it can no longer fit in the originally allocated space. IOW if the town changed from 'Yuma' to 'Los Angeles', something has to give.

So as things move around, does SQL Server actually go back and reuse the pieces and parts of empty space inside of the file? Or does it just keep expanding the file and doing everything out at the end of the file?

So -- John W.
Colby Colby Consulting Reality is what refuses to go away when you do not believe in it From jlawrenc1 at shaw.ca Mon Oct 8 09:46:05 2012 From: jlawrenc1 at shaw.ca (Jim Lawrence) Date: Mon, 8 Oct 2012 07:46:05 -0700 Subject: [dba-SQLServer] How does SQL Server do updates In-Reply-To: <5072ACD3.5060201@colbyconsulting.com> References: <5072ACD3.5060201@colbyconsulting.com> Message-ID: I believe MS SQL just sticks the records on the end of the table. This is of course done for speed. It is much easier to just add a record at the end of the table than try to insert it in the middle, replacing a deleted record's position. (This is of course how old FoxPro and MS Access works and I am assuming MS SQL works basically, just the same...) Jim -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of jwcolby Sent: Monday, October 08, 2012 3:37 AM To: Sqlserver-Dba; Access Developers discussion and problem solving Subject: [dba-SQLServer] How does SQL Server do updates Every month I perform a process where name / address records are extracted from SQL Server, updated by a third party app and then only changed data is updated back into SQL Server. I take changed addresses and write them to an 'old address' table and then literally update the existing record with the changes. The address table has an integer PK (which never changes) which came from and is related to another table 1 to 1, and that PK is used by itself in a clustered index. There are other indexes for FName, LName Addr ect, my hash fields and so forth. So, I pull the updated address info into a custom SQL Server DB created on-the-fly for this purpose, and then only the changes are updated back into the live database. The PK and Name fields never change but the address fields do change and other fields which capture information about the address changes also change. 
The hash fields are updated (in the temp db) and written back into live etc. I am trying to visualize what goes on behind the scenes in SQL Server in the live database. Most of the data fields are varchar(), the hash fields are varbinary(200). I assume that the data is moved around inside of the dbf file, i.e. moved out to new space on the end if it can no longer fit in the originally allocated space. IOW if the town changed from 'Yuma' to 'Los Angeles', something has to give. So as things move around, does SQL Server actually go back and reuse the pieces and parts of empty space inside of the file? Or does it just keep expanding the file and doing everything out at the end of the file. So -- John W. Colby Colby Consulting Reality is what refuses to go away when you do not believe in it

_______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com

From stuart at lexacorp.com.pg Mon Oct 8 16:13:57 2012 From: stuart at lexacorp.com.pg (Stuart McLachlan) Date: Tue, 09 Oct 2012 07:13:57 +1000 Subject: [dba-SQLServer] How does SQL Server do updates In-Reply-To: References: <5072ACD3.5060201@colbyconsulting.com>, Message-ID: <50734215.17393.4C840A7D@stuart.lexacorp.com.pg>

It also depends on the field type. To oversimplify: if it is a fixed length type, the field will be part of the record; if it is a variable length field, the main record will only hold a pointer to the data and the contents will be stored somewhere else in a paged data area. SQL Server manages the pages, adding them as necessary and reusing them when they are emptied, so it is not just a case of bloating the db every time you edit a varchar.

-- Stuart

On 8 Oct 2012 at 7:46, Jim Lawrence wrote: > I believe MS SQL just sticks the records on the end of the table. > > This is of course done for speed.
It is much easier to just add a record at > the end of the table than try to insert it in the middle, replacing a > deleted record's position. (This is of course how old FoxPro and MS Access > works and I am assuming MS SQL works basically, just the same...) > > Jim > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of jwcolby > Sent: Monday, October 08, 2012 3:37 AM > To: Sqlserver-Dba; Access Developers discussion and problem solving > Subject: [dba-SQLServer] How does SQL Server do updates > > Every month I perform a process where name / address records are extracted > from SQL Server, updated > by a third party app and then only changed data is updated back into SQL > Server. I take changed > addresses and write them to an 'old address' table and then literally update > the existing record > with the changes. > > The address table has an integer PK (which never changes) which came from > and is related to another > table 1 to 1, and that PK is used by itself in a clustered index. There are > other indexes for > FName, LName Addr ect, my hash fields and so forth. > > So, I pull the updated address info into a custom SQL Server DB created > on-the-fly for this purpose, > and then only the changes are updated back into the live database. > > The PK and Name fields never change but the address fields do change and > other fields which capture > information about the address changes also change. The hash fields are > updated (in the temp db) and > written back into live etc. > > I am trying to visualize what goes on behind the scenes in SQL Server in the > live database. Most of > the data fields are varchar(), the hash fields are varbinary(200). I assume > that the data is moved > around inside of the dbf file, i.e. moved out to new space on the end if it > can no longer fit in the > originally allocated space. 
IOW if the town changed from 'Yuma' to 'Los > Angeles', something has to > give. > > So as things move around, does SQL Server actually go back and reuse the > pieces and parts of empty > space inside of the file? Or does it just keep expanding the file and doing > everything out at the > end of the file. > > > So > > -- > John W. Colby > Colby Consulting > > Reality is what refuses to go away > when you do not believe in it > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > >

From stuart at lexacorp.com.pg Mon Oct 8 16:16:14 2012 From: stuart at lexacorp.com.pg (Stuart McLachlan) Date: Tue, 09 Oct 2012 07:16:14 +1000 Subject: [dba-SQLServer] How does SQL Server do updates In-Reply-To: <50734215.17393.4C840A7D@stuart.lexacorp.com.pg> References: <5072ACD3.5060201@colbyconsulting.com>, , <50734215.17393.4C840A7D@stuart.lexacorp.com.pg> Message-ID: <5073429E.25118.4C8621C4@stuart.lexacorp.com.pg>

See http://www.sqlskills.com/BLOGS/PAUL/post/Inside-the-Storage-Engine-Anatomy-of-a-record.aspx

On 9 Oct 2012 at 7:13, Stuart McLachlan wrote: > It also depends on the field type. > > To oversimplify, if it is a fixed length type, the field will be part of the record - if it is a variable > length field, the main record will only hold a pointer to the data and the contents will be stored > somewhere else in a paged data area. SQL Server manages the pages adding them as > necessary and reusing them when they are emptied so it is not just a case of bloating the db > every time you edit a varchar.
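The update-in-place-versus-page-split behaviour discussed in this thread can be sketched with a toy model. This is a gross simplification for illustration only (real pages carry headers, slot arrays, and forwarding pointers; PAGE_SIZE is the only value here taken from SQL Server, and update_record is an invented name):

```python
# Toy model: grow one record on a page; update in place when the page has
# enough free space, otherwise split the page by moving half the records
# to a freshly allocated page.

PAGE_SIZE = 8192  # SQL Server data pages are 8 KB; everything else is toy

def update_record(pages, page_no, slot, new_len):
    """Grow record `slot` on page `page_no`; split if it no longer fits."""
    page = pages[page_no]
    if sum(page) - page[slot] + new_len <= PAGE_SIZE:
        page[slot] = new_len                 # free space available: in place
        return 'in place'
    pages.append(page[len(page) // 2:])      # second half moves to a new page
    del page[len(page) // 2:]
    page[slot % len(page)] = new_len         # simplified record placement
    return 'split'

pages = [[4000, 4000]]                       # a nearly full page
print(update_record(pages, 0, 0, 4100))      # still fits: 'in place'
pages = [[4000, 4000]]
print(update_record(pages, 0, 0, 4300))      # no longer fits: 'split'
print(len(pages))                            # 2
```

The point of the model is the one made above: growing 'Yuma' to 'Los Angeles' only forces movement when the record's page is already too full, and the split allocates (or reuses) one page rather than appending the record to the end of the file.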
> > -- > Stuart > > > On 8 Oct 2012 at 7:46, Jim Lawrence wrote: > > > I believe MS SQL just sticks the records on the end of the table. > > > > This is of course done for speed. It is much easier to just add a record at > > the end of the table than try to insert it in the middle, replacing a > > deleted record's position. (This is of course how old FoxPro and MS Access > > works and I am assuming MS SQL works basically, just the same...) > > > > Jim > > > > -----Original Message----- > > From: dba-sqlserver-bounces at databaseadvisors.com > > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of jwcolby > > Sent: Monday, October 08, 2012 3:37 AM > > To: Sqlserver-Dba; Access Developers discussion and problem solving > > Subject: [dba-SQLServer] How does SQL Server do updates > > > > Every month I perform a process where name / address records are extracted > > from SQL Server, updated > > by a third party app and then only changed data is updated back into SQL > > Server. I take changed > > addresses and write them to an 'old address' table and then literally update > > the existing record > > with the changes. > > > > The address table has an integer PK (which never changes) which came from > > and is related to another > > table 1 to 1, and that PK is used by itself in a clustered index. There are > > other indexes for > > FName, LName Addr ect, my hash fields and so forth. > > > > So, I pull the updated address info into a custom SQL Server DB created > > on-the-fly for this purpose, > > and then only the changes are updated back into the live database. > > > > The PK and Name fields never change but the address fields do change and > > other fields which capture > > information about the address changes also change. The hash fields are > > updated (in the temp db) and > > written back into live etc. > > > > I am trying to visualize what goes on behind the scenes in SQL Server in the > > live database. 
Most of > > the data fields are varchar(), the hash fields are varbinary(200). I assume > > that the data is moved > > around inside of the dbf file, i.e. moved out to new space on the end if it > > can no longer fit in the > > originally allocated space. IOW of the town changed from 'Yuma' to 'Los > > Angeles', something has to > > give. > > > > So as things move around, does SQL Server actually go back and reuse the > > pieces and parts of empty > > space inside of the file? Or does it just keep expanding the file and doing > > everything out at the > > end of the file. > > > > > > So > > > > -- > > John W. Colby > > Colby Consulting > > > > Reality is what refuses to go away > > when you do not believe in it > > > > _______________________________________________ > > dba-SQLServer mailing list > > dba-SQLServer at databaseadvisors.com > > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > > http://www.databaseadvisors.com > > > > _______________________________________________ > > dba-SQLServer mailing list > > dba-SQLServer at databaseadvisors.com > > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > > http://www.databaseadvisors.com > > > > > > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > From lawhonac at hiwaay.net Mon Oct 8 20:32:18 2012 From: lawhonac at hiwaay.net (Alan Lawhon) Date: Mon, 8 Oct 2012 20:32:18 -0500 Subject: [dba-SQLServer] Quick Joke Message-ID: <000101cda5bd$ebbb3910$c331ab30$@net> This is just too good. http://www.sqlskills.com/blogs/paul/post/Quick-joke.aspx Alan C. 
Lawhon

From fuller.artful at gmail.com Mon Oct 8 20:58:01 2012 From: fuller.artful at gmail.com (Arthur Fuller) Date: Mon, 8 Oct 2012 21:58:01 -0400 Subject: [dba-SQLServer] Quick Joke In-Reply-To: <000101cda5bd$ebbb3910$c331ab30$@net> References: <000101cda5bd$ebbb3910$c331ab30$@net> Message-ID:

That's only the first half of it, Alan. Then the waiter says, "Would you like to Order By?"

A.

On Mon, Oct 8, 2012 at 9:32 PM, Alan Lawhon wrote: > This is just too good. > > http://www.sqlskills.com/blogs/paul/post/Quick-joke.aspx > > Alan C. Lawhon > >

From mcp2004 at mail.ru Tue Oct 9 01:45:04 2012 From: mcp2004 at mail.ru (Salakhetdinov Shamil) Date: Tue, 09 Oct 2012 10:45:04 +0400 Subject: [dba-SQLServer] How does SQL Server do updates In-Reply-To: References: <5072ACD3.5060201@colbyconsulting.com> Message-ID: <1349765104.64653813@f167.mail.ru>

Hi Jim --

But JC's records have a clustered index and do not have(?) BLOBs. http://msdn.microsoft.com/en-us/library/ms177443(v=sql.105).aspx

So they should be updated in place if there is enough free space on the record's page to accommodate the increased total record length; otherwise the record's page gets split... That could be the ROW_OVERFLOW_DATA Allocation Unit where parts of records go in case the updated record length exceeds 8KB: http://msdn.microsoft.com/en-us/library/ms189051(v=sql.105).aspx http://msdn.microsoft.com/en-us/library/ms190969(v=sql.105).aspx

The speed of update (insert/update/delete) is important but secondary - the speed of retrieval "is king"...

Please correct me if I'm wrong and I'm missing something obvious. Thank you.

-- Shamil

Mon, 8 Oct 2012 07:46:05 -0700 from "Jim Lawrence" : > > > > >I believe MS SQL just sticks the records on the end of the table. > > This is of course done for speed. It is much easier to just add a record at > the end of the table than try to insert it in the middle, replacing a > deleted record's position.
(This is of course how old FoxPro and MS Access > works and I am assuming MS SQL works basically, just the same...) > > Jim > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of jwcolby > Sent: Monday, October 08, 2012 3:37 AM > To: Sqlserver-Dba; Access Developers discussion and problem solving > Subject: [dba-SQLServer] How does SQL Server do updates > > Every month I perform a process where name / address records are extracted > from SQL Server, updated > by a third party app and then only changed data is updated back into SQL > Server. I take changed > addresses and write them to an 'old address' table and then literally update > the existing record > with the changes. > > The address table has an integer PK (which never changes) which came from > and is related to another > table 1 to 1, and that PK is used by itself in a clustered index. There are > other indexes for > FName, LName Addr ect, my hash fields and so forth. > > So, I pull the updated address info into a custom SQL Server DB created > on-the-fly for this purpose, > and then only the changes are updated back into the live database. > > The PK and Name fields never change but the address fields do change and > other fields which capture > information about the address changes also change. The hash fields are > updated (in the temp db) and > written back into live etc. > > I am trying to visualize what goes on behind the scenes in SQL Server in the > live database. Most of > the data fields are varchar(), the hash fields are varbinary(200). I assume > that the data is moved > around inside of the dbf file, i.e. moved out to new space on the end if it > can no longer fit in the > originally allocated space. IOW of the town changed from 'Yuma' to 'Los > Angeles', something has to > give. 
> > So as things move around, does SQL Server actually go back and reuse the > pieces and parts of empty > space inside of the file? Or does it just keep expanding the file and doing > everything out at the > end of the file. > > > So > > -- > John W. Colby > Colby Consulting > > Reality is what refuses to go away > when you do not believe in it > > _______________________________________________ > dba-SQLServer mailing list >dba-SQLServer at databaseadvisors.com >http://databaseadvisors.com/mailman/listinfo/dba-sqlserver >http://www.databaseadvisors.com > > _______________________________________________ > dba-SQLServer mailing list >dba-SQLServer at databaseadvisors.com >http://databaseadvisors.com/mailman/listinfo/dba-sqlserver >http://www.databaseadvisors.com > > From jwcolby at colbyconsulting.com Tue Oct 9 06:08:00 2012 From: jwcolby at colbyconsulting.com (jwcolby) Date: Tue, 09 Oct 2012 07:08:00 -0400 Subject: [dba-SQLServer] How does SQL Server do updates In-Reply-To: <1349765104.64653813@f167.mail.ru> References: <5072ACD3.5060201@colbyconsulting.com> <1349765104.64653813@f167.mail.ru> Message-ID: <50740590.3080604@colbyconsulting.com> True, no BLOBs, though many of the fields are Varchar(). John W. Colby Colby Consulting Reality is what refuses to go away when you do not believe in it On 10/9/2012 2:45 AM, Salakhetdinov Shamil wrote: > Hi Jim -- > > But JC's records have a clustered index and do not have(?) BLOBs. > > http://msdn.microsoft.com/en-us/library/ms177443(v=sql.105).aspx > > So they should be updated in place if there exists enough free space on a record page to adapt to its increased total record length or record's page should be split... > > That could be ROW_OVERFLOW_DATA Allocation Unit where part of records go in the case of updated record length exceed 8KB? 
> > http://msdn.microsoft.com/en-us/library/ms189051(v=sql.105).aspx > > http://msdn.microsoft.com/en-us/library/ms190969(v=sql.105).aspx > > The speed of update (insert/update/delete) is important but secondary - the speed of retrieval "is a king"... > > Please correct me if I'm wrong and I'm missing something obvious. > > Thank you. > > -- Shamil > > > Mon, 8 Oct 2012 07:46:05 -0700 ?? "Jim Lawrence" : >> >> >> > > >> > > > >> I believe MS SQL just sticks the records on the end of the table. >> >> > This is of course done for speed. It is much easier to just add a record at >> > the end of the table than try to insert it in the middle, replacing a >> > deleted record's position. (This is of course how old FoxPro and MS Access >> > works and I am assuming MS SQL works basically, just the same...) >> >> > Jim >> >> > -----Original Message----- >> > From: dba-sqlserver-bounces at databaseadvisors.com >> > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of jwcolby >> > Sent: Monday, October 08, 2012 3:37 AM >> > To: Sqlserver-Dba; Access Developers discussion and problem solving >> > Subject: [dba-SQLServer] How does SQL Server do updates >> >> > Every month I perform a process where name / address records are extracted >> > from SQL Server, updated >> > by a third party app and then only changed data is updated back into SQL >> > Server. I take changed >> > addresses and write them to an 'old address' table and then literally update >> > the existing record >> > with the changes. >> >> > The address table has an integer PK (which never changes) which came from >> > and is related to another >> > table 1 to 1, and that PK is used by itself in a clustered index. There are >> > other indexes for >> > FName, LName Addr ect, my hash fields and so forth. >> >> > So, I pull the updated address info into a custom SQL Server DB created >> > on-the-fly for this purpose, >> > and then only the changes are updated back into the live database. 
>> >> > The PK and Name fields never change but the address fields do change and >> > other fields which capture >> > information about the address changes also change. The hash fields are >> > updated (in the temp db) and >> > written back into live etc. >> >> > I am trying to visualize what goes on behind the scenes in SQL Server in the >> > live database. Most of >> > the data fields are varchar(), the hash fields are varbinary(200). I assume >> > that the data is moved >> > around inside of the dbf file, i.e. moved out to new space on the end if it >> > can no longer fit in the >> > originally allocated space. IOW of the town changed from 'Yuma' to 'Los >> > Angeles', something has to >> > give. >> >> > So as things move around, does SQL Server actually go back and reuse the >> > pieces and parts of empty >> > space inside of the file? Or does it just keep expanding the file and doing >> > everything out at the >> > end of the file. >> >> >> > So >> >> From mcp2004 at mail.ru Tue Oct 9 06:39:40 2012 From: mcp2004 at mail.ru (=?UTF-8?B?U2FsYWtoZXRkaW5vdiBTaGFtaWw=?=) Date: Tue, 09 Oct 2012 15:39:40 +0400 Subject: [dba-SQLServer] =?utf-8?q?How_does_SQL_Server_do_updates?= In-Reply-To: <50740590.3080604@colbyconsulting.com> References: <5072ACD3.5060201@colbyconsulting.com> <1349765104.64653813@f167.mail.ru> <50740590.3080604@colbyconsulting.com> Message-ID: <1349782779.506058720@f131.mail.ru> Hi John -- But Varchar() could behave as/be handled as BLOB if you'd use Varchar(MAX)? - do you use such a T-SQL DDL construct? 
http://msdn.microsoft.com/en-us/library/ms187752.aspx

Anyway, AFAIU, if you have a clustered index and the clustered index field's value isn't changed (as in your case), then the record can't be moved to a new page. Or, better put: the record can be moved to a new page only if the record's page gets split and the record still doesn't fit the first part of the split page - then it gets moved to the second part of that split page, and some of the page's records could be moved to the third part. If a record's total length exceeds 8KB then its tail gets moved to an overflow extent. And BLOBs are always stored on LOB pages/extents - http://msdn.microsoft.com/en-us/library/ms189051(v=sql.105).aspx

Please correct me if I'm wrong and I'm missing something obvious. Thank you.

-- Shamil

Tue, 09 Oct 2012 07:08:00 -0400 from jwcolby : > > > > >True, no BLOBs, though many of the fields are Varchar(). > > John W. Colby > Colby Consulting > > Reality is what refuses to go away > when you do not believe in it > > On 10/9/2012 2:45 AM, Salakhetdinov Shamil wrote: > > Hi Jim -- > > > > But JC's records have a clustered index and do not have(?) BLOBs. > > > > http://msdn.microsoft.com/en-us/library/ms177443(v=sql.105).aspx > > > > So they should be updated in place if there exists enough free space on a record page to adapt to its increased total record length or record's page should be split... > > > > That could be ROW_OVERFLOW_DATA Allocation Unit where part of records go in the case of updated record length exceed 8KB? > > > > http://msdn.microsoft.com/en-us/library/ms189051(v=sql.105).aspx > > > > http://msdn.microsoft.com/en-us/library/ms190969(v=sql.105).aspx > > > > The speed of update (insert/update/delete) is important but secondary - the speed of retrieval "is a king"... > > > > Please correct me if I'm wrong and I'm missing something obvious. > > > > Thank you. > > > > -- Shamil > > > > > > Mon, 8 Oct 2012 07:46:05 -0700 from
"Jim Lawrence" : > >> > >> > >> > > > > > >> > > > > > > > >> I believe MS SQL just sticks the records on the end of the table. > >> > >> > > This is of course done for speed. It is much easier to just add a record at > >> > > the end of the table than try to insert it in the middle, replacing a > >> > > deleted record's position. (This is of course how old FoxPro and MS Access > >> > > works and I am assuming MS SQL works basically, just the same...) > >> > >> > > Jim > >> > >> > > -----Original Message----- > >> > > From: dba-sqlserver-bounces at databaseadvisors.com > >> > > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of jwcolby > >> > > Sent: Monday, October 08, 2012 3:37 AM > >> > > To: Sqlserver-Dba; Access Developers discussion and problem solving > >> > > Subject: [dba-SQLServer] How does SQL Server do updates > >> > >> > > Every month I perform a process where name / address records are extracted > >> > > from SQL Server, updated > >> > > by a third party app and then only changed data is updated back into SQL > >> > > Server. I take changed > >> > > addresses and write them to an 'old address' table and then literally update > >> > > the existing record > >> > > with the changes. > >> > >> > > The address table has an integer PK (which never changes) which came from > >> > > and is related to another > >> > > table 1 to 1, and that PK is used by itself in a clustered index. There are > >> > > other indexes for > >> > > FName, LName Addr ect, my hash fields and so forth. > >> > >> > > So, I pull the updated address info into a custom SQL Server DB created > >> > > on-the-fly for this purpose, > >> > > and then only the changes are updated back into the live database. > >> > >> > > The PK and Name fields never change but the address fields do change and > >> > > other fields which capture > >> > > information about the address changes also change. 
The hash fields are > >> > > updated (in the temp db) and > >> > > written back into live etc. > >> > >> > > I am trying to visualize what goes on behind the scenes in SQL Server in the > >> > > live database. Most of > >> > > the data fields are varchar(), the hash fields are varbinary(200). I assume > >> > > that the data is moved > >> > > around inside of the dbf file, i.e. moved out to new space on the end if it > >> > > can no longer fit in the > >> > > originally allocated space. IOW of the town changed from 'Yuma' to 'Los > >> > > Angeles', something has to > >> > > give. > >> > >> > > So as things move around, does SQL Server actually go back and reuse the > >> > > pieces and parts of empty > >> > > space inside of the file? Or does it just keep expanding the file and doing > >> > > everything out at the > >> > > end of the file. > >> > >> > >> > > So > >> > >> > > _______________________________________________ > dba-SQLServer mailing list >dba-SQLServer at databaseadvisors.com >http://databaseadvisors.com/mailman/listinfo/dba-sqlserver >http://www.databaseadvisors.com > >

From davidmcafee at gmail.com Thu Oct 11 14:45:53 2012 From: davidmcafee at gmail.com (David McAfee) Date: Thu, 11 Oct 2012 12:45:53 -0700 Subject: [dba-SQLServer] Fwd: [ACCESS-L] Varying results in an A2007/A2010 ADP stored procedure In-Reply-To: References: Message-ID:

So this has come back to haunt me. I've created a button to export the stored procedure's results to an Excel file, but the exported records are inconsistent. I guess when using the following line, the ADP uses an RPC, as it does when double clicking on a sproc in the database window:

DoCmd.OutputTo acOutputStoredProcedure, strMySql, acFormatXLS, strOutPutFile, False

Has anyone else run into this? The only thing I can say about the sproc is that it is using some # temp tables, so maybe Access is having a hard time with those. It shouldn't matter.
I've tried exporting the form, but since I am doing it from the header, I don't get the detail rows. If there were a way of getting the subform's recordset and exporting it, then I would be good. I think I'm going to dump the values into a table and export those. Grrrr

David

--Original thread---

A coworker wrote a new stored procedure that works just fine when run from SSMS. If either of these is run from SSMS:

EXEC stpR6Payouts '1/1/2011','11/30/2011',''
EXEC stpR6Payouts '1/1/2011','11/30/2011',NULL

the data is returned as expected. If the sproc is run from VBA, or directly through the immediate window, we get different results. The sproc run from Access will return a few rows short, and data is calculated incorrectly on the rows that are returned. Hit F5 in Access, and you get a different row count (and different data on the returned rows). I've always assumed SQL was doing all the work and returning a resultset to Access, but it doesn't appear this is actually what happens. Does anyone have any ideas?

-------------

How does it look if you try this in a Pass-Through query? Duane Hookom

------------------------

I created a new mdb and it returns correctly, as expected, via a pass through query. I'm going to try a box with an Access version <2007 to test the ADP.

-----------------------

OK, I tested the ADP on a box with Access 2002. It returned the same, incorrect row count and values. I tried running the stored procedure from a different ADP and it also returns incorrect records. So far the only way to get the correct results besides running it directly in SSMS is to run it from an mdb using a pass through query. What occurs differently between running a pass through vs running the sproc directly from the Access database window? David

----------------------

OK, we ran a Trace on the different ways we are running the sproc. When it is called from the ADP, the sproc is called via an RPC, not directly as a passthrough query (as I'd assumed it was called).
From the ADP, if I run this:

Private Sub Command8_Click()
    Dim rs As Recordset
    Set rs = CurrentProject.Connection.Execute("EXEC RRMS.dbo.stpR6Payouts '1/1/2011','11/30/2011',null")
    Debug.Print rs.RecordCount

I get the correct count!

If I put a break point on the last line above and run this from the immediate window:

rs.MoveFirst
? rs![CustName]
STAR FORD
? rs![IndividualPayCalc]
5368

I get the correct amount (that 5368 is never correct when running the sproc from the immediate window in the ADP). So this tells me the rendering in the ADP is having issues, correct? This is scary. How many other things have I trusted to be correct and weren't?

David

--------------------------

David, Does your stored procedure specify an ORDER BY or are you leaving this to chance? Duane Hookom

-----------------------------------------------------------------------

Yes:

SET NOCOUNT ON
--Do a bunch of crap here
SET NOCOUNT OFF

select A.CustNo, B.cust_name AS CustName, C.State, InvTotal, TotalPayCalc,
    AccuralPayCalc, DealerPayCalc, IndividualPayCalc,
    ARPaymentAmt AS [PaidTo-Dealer], APAmt AS [PaidTo-Individual]
from @tblR6SumTemp A
INNER JOIN salesdb..m_customer B (NOLOCK) ON A.CustNo = B.cust_no
LEFT JOIN salesdb..m_cust_address C (NOLOCK) on B.cust_no = C.cust_no and B.bill_address_id = C.address_id
ORDER BY CustNo

---------------------------------

David, Are there ever two records with the same CustNo in the results? If so, would the TotalPayCalc vary? Duane Hookom MS Access MVP

----------------------------------------

No. One record per customer. This is what I had to do to get it all working: I created a datasheet subform.
In that subform, I placed a public sub:

Public Sub Type6PayoutPopulate(ByVal StartDate As String, ByVal EndDate As String, Optional Customer As String)
    Dim rs As Recordset
    Set rs = CurrentProject.Connection.Execute("EXEC RRMS.dbo.stpR6Payouts '" & Trim(StartDate) & "','" & Trim(EndDate) & "','" & Nz(Customer, "") & "'")
    DoCmd.Hourglass False
    If rs.RecordCount > 0 Then
        Set Me.Form.Recordset = rs
        Me.Visible = True
    Else
        Me.Visible = False
        MsgBox "No records were returned"
    End If
End Sub

In the Parent Form's Header, I created 3 text boxes and a command button. I placed the following code in the button's click event:

Private Sub cmdRun_Click()
    If Nz(Me.txtStart, "") = "" Or Nz(Me.txtEnd, "") = "" Then
        MsgBox "Please enter a Start and End date"
        Exit Sub
    Else
        DoCmd.Hourglass True
        Call Form_frmType6PayOutDet.Type6PayoutPopulate(Me.txtStart, Me.txtEnd, Nz(Me.txtComp, ""))
    End If
End Sub

It all seems to work well now. I still don't know why the stored procedure is being called via an RPC when opened directly from the database window, which is causing it to render differently. Using CurrentProject.Connection.Execute() seems to call it correctly. Weird.

David
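One footnote on the Connection.Execute calls above: they splice the form values straight into the EXEC string, so an embedded single quote (think O'Brien as a customer name) would break the statement. A minimal sketch of the quoting step, in Python for illustration (tsql_quote and build_exec are invented names; in ADO the more robust route would be an ADODB.Command with real parameters):

```python
# Sketch of safely building an EXEC string from user-entered values,
# doubling any embedded single quotes per T-SQL string-literal rules.

def tsql_quote(value):
    """Quote a value as a T-SQL string literal, doubling embedded quotes."""
    return "'" + str(value).replace("'", "''") + "'"

def build_exec(proc, *args):
    """Return an EXEC statement with every argument quoted."""
    return f"EXEC {proc} " + ", ".join(tsql_quote(a) for a in args)

print(build_exec("RRMS.dbo.stpR6Payouts", "1/1/2011", "11/30/2011", ""))
# EXEC RRMS.dbo.stpR6Payouts '1/1/2011', '11/30/2011', ''
```

The same doubling of quotes can be done inline in the VBA with Replace(Customer, "'", "''") before concatenating.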