From jwcolby at colbyconsulting.com  Thu Jan  5 16:43:23 2012
From: jwcolby at colbyconsulting.com (jwcolby)
Date: Thu, 05 Jan 2012 17:43:23 -0500
Subject: [dba-SQLServer] What would you do?
Message-ID: <4F06278B.5010506@colbyconsulting.com>

Guys,

My client has a web page hosted on a RackSpace (the name of the hosting provider) server. It runs Windows 2008 Standard, appears to be a physical (as opposed to virtual) server, and has 12 gigs of RAM and 4 cores. It runs SQL Server 2008 Standard. The C: drive is 300 gigs, of which about 109 gigs are used. The client does email ad campaigns for things like Aleve pain reliever, etc. There is a SQL Server database which stores information about what happens on the web page: hits, clicks, moves between pages, etc.

The specific database that stores all this info reports a size of 60.33 gigs. Of that, the data file is about 5.56 gigs with 67% free, and the log file is 54.77 gigs with 16% free.

So...

I just got involved with this stuff. I didn't build the database or the web site. I was asked to go find out why a specific statistic (time on site) was "out of whack". The statistics are coming from stored procedures, mostly out of one specific table, EVNT_Events. That table has no indexes on it. It does have an autoincrement field but has no PK per se, and of course it is a heap structure.

What I know so far is that events are logged against a session id, which is apparently a unique identifier for a "session", i.e. a specific browser logged into that web site. An event record is logged every 10 seconds, but perhaps more often because there are "event types" such as clicks, back button, etc. The following is a count of events for this campaign between 11/25 and 1/5: roughly 2.4 million events over 40 days, which averages out to about one event every 1.5 seconds. Not exactly a high-traffic campaign. Which is not to say the next one won't be, but let's be real, click-throughs from an email campaign aren't a high-volume business.

2372576    2011-11-25 00:01:50.770    2012-01-05 14:10:08.037

The statistics report takes a while to run. I would bet that some judicious indexing would speed things up, at the very least making a clustered index on the autoincrement EVNT_ID. Of course reporting is not the primary concern here. Still...

So assuming I get permission to do anything, what do I do? I don't want to make any changes without first backing up the database. I highly doubt that anything has ever been backed up, though I will be asking these questions.

1) This appears to be a default SQL Server setup, i.e. max memory is left at the insane default, all cores are assigned to SQL Server, compression is not turned on, and so forth. Is there a reason not to compress the backup?
2) The log file contains a lot of "stuff".
3) Performing a backup to the same physical drive is going to take a while and use a lot of disk space, even if compressed.
4) Once backed up, what happens to that log file? Does it automatically "empty"?
5) Would creating a clustered index on that "PK" help make things faster?
6) Many of the statistics group by the session ID. Obviously indexing that and a few other key fields would speed up reporting but slow down creating these log records, but would it slow things down enough to make any difference? It seems unlikely to me, but I don't have much experience in a situation like this. What I do know is that once created, these event records are apparently never modified.

I am just trying to get some "feel" for what will work and what to recommend. I am to a large extent the tech support for the client. The web site and the campaigns are created by a third party, and that will continue. It just looks to me like they are not spending much time on the database side of things.

-- 
John W. Colby
Colby Consulting

Reality is what refuses to go away
when you do not believe in it
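A rough T-SQL sketch of the remedies discussed in this thread, keyed to questions 1, 4, 5 and 6 above. Only EVNT_Events and EVNT_ID come from the post; the database name, the log file's logical name and the session/date column names are placeholders, and note that backup compression is Enterprise-only on plain SQL Server 2008 (it only reaches Standard edition with 2008 R2):

USE CampaignStats;  -- placeholder database name

-- 1) Full backup to a local file, compressed if the edition allows it.
BACKUP DATABASE CampaignStats
TO DISK = N'C:\Backups\CampaignStats.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;     -- drop COMPRESSION on plain 2008 Standard

-- 4) If point-in-time restore is not needed, switch to SIMPLE recovery so the log
--    truncates on checkpoint, then shrink the bloated log file once.
ALTER DATABASE CampaignStats SET RECOVERY SIMPLE;
DBCC SHRINKFILE (CampaignStats_log, 1024);  -- placeholder logical log name; target size in MB

-- 5) Promote the existing autoincrement column to a clustered primary key.
ALTER TABLE dbo.EVNT_Events
    ADD CONSTRAINT PK_EVNT_Events PRIMARY KEY CLUSTERED (EVNT_ID);

-- 6) Nonclustered index to support the reports that group by session id.
CREATE NONCLUSTERED INDEX IX_EVNT_Events_SessionID
    ON dbo.EVNT_Events (EVNT_SessionID)     -- placeholder column name
    INCLUDE (EVNT_Date);                    -- placeholder column name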
From lawhonac at hiwaay.net  Thu Jan  5 16:57:59 2012
From: lawhonac at hiwaay.net (Alan Lawhon)
Date: Thu, 5 Jan 2012 16:57:59 -0600
Subject: [dba-SQLServer] Semicolon Statement Terminator Enforced in Denali?
Message-ID: <001301cccbfd$78833190$698994b0$@net>

I'm reading the SQL Server 2008 R2 BOL and came across this documentation for batches:

A batch is a group of one or more Transact-SQL statements sent at the same time from an application to SQL Server for execution. SQL Server compiles the statements of a batch into a single executable unit, called an execution plan. The statements in the execution plan are then executed one at a time. Each Transact-SQL statement should be terminated with a semicolon. This requirement is not enforced, but the ability to end a statement without a semicolon is deprecated and may be removed in a future version of Microsoft SQL Server.

I've been writing most of my queries without using the semicolon character to terminate statements. Does the new version (i.e. SQL Server "Denali") rigidly enforce termination of T-SQL statements with the semicolon character?

Just curious.

Alan C. Lawhon

From Gustav at cactus.dk  Fri Jan  6 01:11:59 2012
From: Gustav at cactus.dk (Gustav Brock)
Date: Fri, 06 Jan 2012 08:11:59 +0100
Subject: [dba-SQLServer] What would you do?
Message-ID: 

Hi John

> It just looks to me like they are not spending much time on the database side of things.

They probably don't know how. Such stories always remind me of a presentation I once attended where the project leader talked about a large web project. One thing he had noted was that, among his otherwise skilled developers, there were some who "were afraid of the database". It struck me that I had never looked at it that way and that it could explain why so much of the database work you meet out there is crap. But think about it: if you were not very well educated in database design, what would you do when you meet such a monster? For us it is maybe not a non-issue, but at least it is nothing to be afraid of. If you know very little about SQL, though, it is scary. It can also - to some degree - explain the popularity of NoSQL databases.

What the project leader did was to create and present views to such developers with exactly what they needed and, of course, as read-only if that would do.

As to your task, couldn't the BI parts and tools of Visual Studio and SQL Server be what you are looking for? I haven't worked with these, though.

Also, as this effectively is a write-only database, I guess you could easily clean up the log file to a fraction of the current size.

/gustav

>>> jwcolby at colbyconsulting.com 05-01-2012 23:43 >>>
Guys, My client has a web page hosted on a RackSpace (the name of the hosting provider) server. It runs Windows 2008 Standard, appears to be a physical (as opposed to virtual) server, has 12 gigs and 4 cores. It runs SQL Server 2008 Standard. the C: drive is 300 gigs of which about 109 gigs are used. The client does meail ad campaigns for things like Aleve pain reliever etc. There is a SQL Server database which stores information about what happens on the web page, hits, clicks moves between pages etc.
The specific database that stores all this info says that it has a size of 60.33 gigs. Of that the data file is about 5.56 gigs with 67% free and the log file is 54.77 gigs with 16% free. So... I just got involved with this stuff. I didn't build the database nor the web site. I was asked to go find out why a specific statistic (time on site) was "out of whack". My question really is that the statistics are coming from stored procedures, mostly out of one specific table EVNT_Events. That table has no indexes on it. It does have an autoincrement field but has no PK per se and of course is a heap structure. What I know so far is that events are logged on a session id, that session id is apparently a unique identifier for a "session" or a specific browser logged into that web site. An event record is logged every 10 seconds, but perhaps more because there are "event types" such as clicks, backbutton, etc. The following is a count of events for this campaign between 11/25 and 1/5, roughly 2.4 million events over 40 days, which is an average of about an event every 100 seconds. Not exactly a high traffic campaign. Which is not to say the next one won't be but let's be real, click throughs from an email campaign isn't a high volume business. 2372576 2011-11-25 00:01:50.770 2012-01-05 14:10:08.037 The statistics report that runs takes awhile to run. I would bet that some judicious indexing would speed things up. At the very least making a clustered index on the autoincrement EVNT_ID. Of course reporting is not the primary concern here. Still... So assuming I get permission to do anything, what do I do? I don't want to make any changes without first backing up the database. I would highly doubt that anything has ever been backed up though I will be asking these questions. 1) This appears to be a standard SQL Server setup, i.e max memory is insane, all cores assigned to SQL Server, compression is not turned on and so forty. Is there a reason not to compress the backup? 2) The log file contains a lot of "stuff". 3) Performing a backup to the same physical drive is going to take awhile and use a lot of disk space, even if compressed. 4) Once backed up what happens to that log file? Does it automatically "empty"? 5) Would creating a clustered index on that "PK" help make things faster? 6) Many of the statistics group by the session ID. Obviously indexing that and a few other key fields would speed up reporting but slow down creating these log records but would it slow it down enough to make any difference? It seems unlikely to me but I don't have much experience in a situation like this. What I do know is that once created these event records are apparently never modified. I am just trying to get some "feel" for what will work and what to recommend. I am to a large extent the tech support for the client. The web site and the campaigns are created by a third party and that will continue. It just looks to me like they are not spending much time on the database side of things. -- John W. Colby Colby Consulting Reality is what refuses to go away when you do not believe in it From marklbreen at gmail.com Fri Jan 6 04:19:19 2012 From: marklbreen at gmail.com (Mark Breen) Date: Fri, 6 Jan 2012 10:19:19 +0000 Subject: [dba-SQLServer] What would you do? 
In-Reply-To: References: Message-ID: Hello Gustav and John, Funny, I was discussing John's log file with my wife last night at 11:30 pm :) I said the same thing, unless you need to step back to a point in time, just detach the db, copy the MDF only, zip and and download it and then reattach only the mdf and allow SSMS to automatically recreate an empty log file. I rarely leave databases in Full recovery mode, unless the client is capable of restoring to a point in time. re Databases - I sometimes feel like the poor cousin because I am more into databases than programming? Am I just a script kiddie or can I hold my head proudly? Mark On 6 January 2012 07:11, Gustav Brock wrote: > Hi John > > > It just looks to me like they are not spending much time on the > database side of things. > > They probably don't know how. Such stories always remind me about a > presentation I once attended where the project leader explained about a > large web project. One thing he had noted was, that among his otherwise > skilled developers were some that "were afraid of the database". It struck > me that I had never looked at it that way and that it could explain why so > much database work you meet out there is crap. But think about it; if you > were not very well educated in database design what would you do when you > meet such a monster? For us it is maybe not a non-issue but at least > nothing to be afraid of. But if you know very little about SQL, it is > scary. It can also - to some degree - explain the popularity of NonSQL > databases. > > What the project leader did was to create and present views to such > developers with exactly what they needed and, of course, as read-only if > that would do. > > As to your task, couldn't the BI parts and tools of Visual Studio and SQL > Server be what you are looking for? I haven't worked with these though. > > Also, as this effectively is a write-only database, I guess you could > easily clean up the log file to a fraction of the current size. > > /gustav > > > >>> jwcolby at colbyconsulting.com 05-01-2012 23:43 >>> > Guys, > > My client has a web page hosted on a RackSpace (the name of the hosting > provider) server. It runs > Windows 2008 Standard, appears to be a physical (as opposed to virtual) > server, has 12 gigs and 4 > cores. It runs SQL Server 2008 Standard. the C: drive is 300 gigs of > which about 109 gigs are > used. The client does meail ad campaigns for things like Aleve pain > reliever etc. There is a SQL > Server database which stores information about what happens on the web > page, hits, clicks moves > between pages etc. > > The specific database that stores all this info says that it has a size of > 60.33 gigs. Of that the > data file is about 5.56 gigs with 67% free and the log file is 54.77 gigs > with 16% free. > > So... > > I just got involved with this stuff. I didn't build the database nor the > web site. I was asked to > go find out why a specific statistic (time on site) was "out of whack". > My question really is that > the statistics are coming from stored procedures, mostly out of one > specific table EVNT_Events. > That table has no indexes on it. It does have an autoincrement field but > has no PK per se and of > course is a heap structure. > > What I know so far is that events are logged on a session id, that session > id is apparently a unique > identifier for a "session" or a specific browser logged into that web > site. 
An event record is > logged every 10 seconds, but perhaps more because there are "event types" > such as clicks, > backbutton, etc. The following is a count of events for this campaign > between 11/25 and 1/5, > roughly 2.4 million events over 40 days, which is an average of about an > event every 100 seconds. > Not exactly a high traffic campaign. Which is not to say the next one > won't be but let's be real, > click throughs from an email campaign isn't a high volume business. > > 2372576 2011-11-25 00:01:50.770 2012-01-05 14:10:08.037 > > > The statistics report that runs takes awhile to run. I would bet that > some judicious indexing would > speed things up. At the very least making a clustered index on the > autoincrement EVNT_ID. Of > course reporting is not the primary concern here. Still... > > So assuming I get permission to do anything, what do I do? I don't want > to make any changes without > first backing up the database. I would highly doubt that anything has > ever been backed up though I > will be asking these questions. > > 1) This appears to be a standard SQL Server setup, i.e max memory is > insane, all cores assigned to > SQL Server, compression is not turned on and so forty. Is there a reason > not to compress the backup? > 2) The log file contains a lot of "stuff". > 3) Performing a backup to the same physical drive is going to take awhile > and use a lot of disk > space, even if compressed. > 4) Once backed up what happens to that log file? Does it automatically > "empty"? > 5) Would creating a clustered index on that "PK" help make things faster? > 6) Many of the statistics group by the session ID. Obviously indexing > that and a few other key > fields would speed up reporting but slow down creating these log records > but would it slow it down > enough to make any difference? It seems unlikely to me but I don't have > much experience in a > situation like this. What I do know is that once created these event > records are apparently never > modified. > > I am just trying to get some "feel" for what will work and what to > recommend. I am to a large > extent the tech support for the client. The web site and the campaigns > are created by a third party > and that will continue. It just looks to me like they are not spending > much time on the database > side of things. > > -- > John W. Colby > Colby Consulting > > Reality is what refuses to go away > when you do not believe in it > > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > From fuller.artful at gmail.com Fri Jan 6 10:28:11 2012 From: fuller.artful at gmail.com (Arthur Fuller) Date: Fri, 6 Jan 2012 11:28:11 -0500 Subject: [dba-SQLServer] What would you do? In-Reply-To: References: Message-ID: Can you hold your head high? Good question, but ultimately my answer is Yes. Frankly, I don't understand the FE developers' aversion to DBs. Frankly I don't understand it. SQL, T-SQL, Pl/SQL, etc. are just about the easiest languages to learn: compared with C# or C++, SQL variants are drop-dead simple -- four commands, with clauses and/or predicates! How could it get simpler? But the reluctance to master these commands is what keeps folks like you and me in business. (Don't let the secrets out!) As to PITA (Point In Time Architecture), there are two basic ways to get There from Here: 1. Do incremental backups very often; and 2. 
Do full backups very often; and 3. If you need to transport the changes elsewhere, use Log Shipping. (An old joke: there are 3 kinds of DBAs: those who can count and those who cannot.) Arthur On Fri, Jan 6, 2012 at 5:19 AM, Mark Breen wrote: > Hello Gustav and John, > > Funny, I was discussing John's log file with my wife last night at 11:30 pm > :) > > I said the same thing, unless you need to step back to a point in time, > just detach the db, copy the MDF only, zip and and download it and then > reattach only the mdf and allow SSMS to automatically recreate an empty log > file. > > I rarely leave databases in Full recovery mode, unless the client is > capable of restoring to a point in time. > > re Databases - I sometimes feel like the poor cousin because I am more > into databases than programming? Am I just a script kiddie or can I hold > my head proudly? > > Mark > > -- Cell: 647.710.1314 Prediction is difficult, especially of the future. -- Werner Heisenberg From fhtapia at gmail.com Mon Jan 9 08:16:17 2012 From: fhtapia at gmail.com (Francisco Tapia) Date: Mon, 9 Jan 2012 06:16:17 -0800 Subject: [dba-SQLServer] What would you do? In-Reply-To: <4F06278B.5010506@colbyconsulting.com> References: <4F06278B.5010506@colbyconsulting.com> Message-ID: On Thu, Jan 5, 2012 at 14:43, jwcolby wrote: > 1) This appears to be a standard SQL Server setup, i.e max memory is > insane, all cores assigned to SQL Server, compression is not turned on and > so forty. Is there a reason not to compress the backup? the only issue would be that if the backup process would choke the server. it depends on the hardware. > 4) Once backed up what happens to that log file? Does it automatically > "empty"? > I'm assuming you're asking the list and that you are referring to the transaction log. and the answer is yes, it does empty, it empties all completed checkpoints. however the size of the file won't change. If the customer does not need up to the minute recoverability the recovery model should be set to simple. > 5) Would creating a clustered index on that "PK" help make things faster? > is the pk used for searches? > 6) Many of the statistics group by the session ID. Obviously indexing > that and a few other key fields would speed up reporting but slow down > creating these log records but would it slow it down enough to make any > difference? It seems unlikely to me but I don't have much experience in a > situation like this. What I do know is that once created these event > records are apparently never modified. > depends on the hardware, if there are no updates and simply inserts, then adding the index may be negligible. From fuller.artful at gmail.com Mon Jan 9 08:55:41 2012 From: fuller.artful at gmail.com (Arthur Fuller) Date: Mon, 9 Jan 2012 09:55:41 -0500 Subject: [dba-SQLServer] Format Partitions for SQL Performance Message-ID: Interesting article here: http://www.mssqltips.com/sqlservertip/2138/format-drives-with-correct-allocation-and-offset-for-maximum-sql-server-performance/?utm_source=dailynewsletter&utm_medium=email&utm_content=headline&utm_campaign=201218 (Watch for wrap.) I confess that I haven't given this much if any thought so far. But I'm thinking about it now! -- Cell: 647.710.1314 Prediction is difficult, especially of the future. -- Werner Heisenberg From David at sierranevada.com Mon Jan 9 09:29:56 2012 From: David at sierranevada.com (David Lewis) Date: Mon, 9 Jan 2012 07:29:56 -0800 Subject: [dba-SQLServer] What would you do? 
In-Reply-To: References: Message-ID: <8437387186B192498848F1892A41F780015D625B7066@schwarz.sierranevada.corp> With regard to indexing the table, I would test out the typical queries to see what would help. It is hard to give specific advice like 'add an index on such and such columns, with such and such included' without more information. It may take some time working with the client to figure out exactly what they need. I suspect that clustering on the Identity field will not help queries much. Making it a unique primary key may or may not. My hunch is that an index of sessionid and timestamp (as a datetime) with perhaps other fields included, will vastly speed up queries, but only you can know for sure by testing. Message: 1 Date: Thu, 05 Jan 2012 17:43:23 -0500 From: jwcolby To: Sqlserver-Dba Subject: [dba-SQLServer] What would you do? Message-ID: <4F06278B.5010506 at colbyconsulting.com> Content-Type: text/plain; charset=ISO-8859-1; format=flowed Guys, My client has a web page hosted on a RackSpace (the name of the hosting provider) server. It runs Windows 2008 Standard, appears to be a physical (as opposed to virtual) server, has 12 gigs and 4 cores. It runs SQL Server 2008 Standard. the C: drive is 300 gigs of which about 109 gigs are used. The client does meail ad campaigns for things like Aleve pain reliever etc. There is a SQL Server database which stores information about what happens on the web page, hits, clicks moves between pages etc. The specific database that stores all this info says that it has a size of 60.33 gigs. Of that the data file is about 5.56 gigs with 67% free and the log file is 54.77 gigs with 16% free. So... I just got involved with this stuff. I didn't build the database nor the web site. I was asked to go find out why a specific statistic (time on site) was "out of whack". My question really is that the statistics are coming from stored procedures, mostly out of one specific table EVNT_Events. That table has no indexes on it. It does have an autoincrement field but has no PK per se and of course is a heap structure. What I know so far is that events are logged on a session id, that session id is apparently a unique identifier for a "session" or a specific browser logged into that web site. An event record is logged every 10 seconds, but perhaps more because there are "event types" such as clicks, backbutton, etc. The following is a count of events for this campaign between 11/25 and 1/5, roughly 2.4 million events over 40 days, which is an average of about an event every 100 seconds. Not exactly a high traffic campaign. Which is not to say the next one won't be but let's be real, click throughs from an email campaign isn't a high volume business. 2372576 2011-11-25 00:01:50.770 2012-01-05 14:10:08.037 The statistics report that runs takes awhile to run. I would bet that some judicious indexing would speed things up. At the very least making a clustered index on the autoincrement EVNT_ID. Of course reporting is not the primary concern here. Still... So assuming I get permission to do anything, what do I do? I don't want to make any changes without first backing up the database. I would highly doubt that anything has ever been backed up though I will be asking these questions. 1) This appears to be a standard SQL Server setup, i.e max memory is insane, all cores assigned to SQL Server, compression is not turned on and so forty. Is there a reason not to compress the backup? 2) The log file contains a lot of "stuff". 
3) Performing a backup to the same physical drive is going to take awhile and use a lot of disk space, even if compressed. 4) Once backed up what happens to that log file? Does it automatically "empty"? 5) Would creating a clustered index on that "PK" help make things faster? 6) Many of the statistics group by the session ID. Obviously indexing that and a few other key fields would speed up reporting but slow down creating these log records but would it slow it down enough to make any difference? It seems unlikely to me but I don't have much experience in a situation like this. What I do know is that once created these event records are apparently never modified. I am just trying to get some "feel" for what will work and what to recommend. I am to a large extent the tech support for the client. The web site and the campaigns are created by a third party and that will continue. It just looks to me like they are not spending much time on the database side of things. -- John W. Colby Colby Consulting Reality is what refuses to go away when you do not believe in it The contents of this e-mail message and its attachments are covered by the Electronic Communications Privacy Act (18 U.S.C. 2510-2521) and are intended solely for the addressee(s) hereof. If you are not the named recipient, or the employee or agent responsible for delivering the message to the intended recipient, or if this message has been addressed to you in error, you are directed not to read, disclose, reproduce, distribute, disseminate or otherwise use this transmission. If you have received this communication in error, please notify us immediately by return e-mail or by telephone, 530-893-3520, and delete and/or destroy all copies of the message immediately. From jwcolby at colbyconsulting.com Mon Jan 9 09:44:42 2012 From: jwcolby at colbyconsulting.com (jwcolby) Date: Mon, 09 Jan 2012 10:44:42 -0500 Subject: [dba-SQLServer] What would you do? In-Reply-To: References: <4F06278B.5010506@colbyconsulting.com> Message-ID: <4F0B0B6A.5060009@colbyconsulting.com> Francisco, > the only issue would be that if the backup process would choke the server. it depends on the hardware. In the middle of a campaign I would probably not do a backup. Any other time there is very little traffic. > I'm assuming you're asking the list and that you are referring to the transaction log. and the answer is yes, it does empty, it empties all completed checkpoints. however the size of the file won't change. Yes, I am asking the list and yes, about the transaction log. This database is used to store what happens on a "medium volume" web site. It is an advertising web site where people come to get coupons for products, Aleve pain reliever in the latest campaign. They apparently get as many as 10,000 "visits" a day at the height of a campaign, with a few events (clicks to various pages) per visit. That seems to me to be perhaps 6 visits per minute, less than one "event" per second. >If the customer does not need up to the minute recoverability the recovery model should be set to simple. It doesn't look like "extreme recoverability" is required. >> 5) Would creating a clustered index on that "PK" help make things faster? > is the pk used for searches? No, not used at all as far as I can tell. I guess I am thinking about heap versus ... The specific table I am concerned about is the event table. This records web page events (clicks and stuff). 
There is a "session id" that "groups" events but otherwise nothing specific or unique about a session making a good candidate key. There is already an autoincrement int, it was just never designated the PK. I am proposing turning that into a PK. My question really is whether doing so slows down the insertion of event records in any way. Placing an index on Session ID allows me to gather info about sessions, but does having them in a clustered index make that process any faster? Is it ever preferable to store in a heap vs on a clustered index? > depends on the hardware, if there are no updates and simply inserts, then adding the index may be negligible. Events are what they are, data about events that occur on the web site. Never updated in any way AFAICT. John W. Colby Colby Consulting Reality is what refuses to go away when you do not believe in it On 1/9/2012 9:16 AM, Francisco Tapia wrote: > On Thu, Jan 5, 2012 at 14:43, jwcolby wrote: > >> 1) This appears to be a standard SQL Server setup, i.e max memory is >> insane, all cores assigned to SQL Server, compression is not turned on and >> so forty. Is there a reason not to compress the backup? > > > the only issue would be that if the backup process would choke the server. > it depends on the hardware. > > >> 4) Once backed up what happens to that log file? Does it automatically >> "empty"? >> > > I'm assuming you're asking the list and that you are referring to the > transaction log. and the answer is yes, it does empty, it empties all > completed checkpoints. however the size of the file won't change. If the > customer does not need up to the minute recoverability the recovery model > should be set to simple. > > >> 5) Would creating a clustered index on that "PK" help make things faster? >> > > is the pk used for searches? > > >> 6) Many of the statistics group by the session ID. Obviously indexing >> that and a few other key fields would speed up reporting but slow down >> creating these log records but would it slow it down enough to make any >> difference? It seems unlikely to me but I don't have much experience in a >> situation like this. What I do know is that once created these event >> records are apparently never modified. >> > > depends on the hardware, if there are no updates and simply inserts, then > adding the index may be negligible. > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > From jwcolby at colbyconsulting.com Mon Jan 16 12:09:08 2012 From: jwcolby at colbyconsulting.com (jwcolby) Date: Mon, 16 Jan 2012 13:09:08 -0500 Subject: [dba-SQLServer] Intentional Denormalize? Message-ID: <4F1467C4.2010402@colbyconsulting.com> As I have discussed in the past, I have a set of about 8 tables (which I call AZData) containing name/address info. This table has just a PKID, FNAMe, LName, Addr, City, St, Zip5, Zip4. I have to maintain these records by sending them out every month for CASS / NCOA processing. I currently process about 350 million name/address records a month. The data always comes from bigger tables which contains demographics data, things like "owns a dog, owns a cat" ('Y'/'N' for each of these fields) or Child age 0-3, 4-7 etc (codes 1,2,or 3 indicating m/f/both). 
To this point I have always built a separate database to hold the original table with an autonumber PK, then I break the name/address out into the AZData table, linked back to the demographics table by an FK which is the PK in the demographics table. I then process that AZData table monthly. DogsAndCats contains about 11 million records, Kids contains 21 million records and so forth. I did it this way for a couple of reasons, an attempt to keep the physical size of the database down and to some extent simply history - that is the way I started doing this. Well, with 8 databases and 350 million records it is probably time to change. I am considering building a table with the name/addr fields and then a set of fields with an FK pointing back to each database where that name / address can be found. IOW the same name/addr may be in the kids and DogsAndCats table. So there would be a single name/addr record with an FK back to those two demographics tables. I would have a column for each demographics database's PK. This is denormalized, in the sense that as I add a new demographics table I have to add a new column to the name / address table to hold the PK linking records back to that demogr table. I normally consider any case where I have to add a field like this to be "denormalized" - kinda like Child1, Child2. The upside is that suddenly the demographics table link through a single M-M table (which happens to hold the name/addr info as well), which allows a very simple sql statement to determine whether a person with dogs also has kids etc. Any thoughts on the wisdom of denormalizing in this manner? -- John W. Colby Colby Consulting Reality is what refuses to go away when you do not believe in it From Gustav at cactus.dk Tue Jan 17 01:37:46 2012 From: Gustav at cactus.dk (Gustav Brock) Date: Tue, 17 Jan 2012 08:37:46 +0100 Subject: [dba-SQLServer] Intentional Denormalize? Message-ID: Hi John Not that I've used it, but could Business Intelligence Development Studio (for Microsoft Visual Studio 2008, not 2010) be for you: http://msdn.microsoft.com/en-us/library/ms173709.aspx /gustav >>> jwcolby at colbyconsulting.com 16-01-2012 19:09 >>> As I have discussed in the past, I have a set of about 8 tables (which I call AZData) containing name/address info. This table has just a PKID, FNAMe, LName, Addr, City, St, Zip5, Zip4. I have to maintain these records by sending them out every month for CASS / NCOA processing. I currently process about 350 million name/address records a month. The data always comes from bigger tables which contains demographics data, things like "owns a dog, owns a cat" ('Y'/'N' for each of these fields) or Child age 0-3, 4-7 etc (codes 1,2,or 3 indicating m/f/both). To this point I have always built a separate database to hold the original table with an autonumber PK, then I break the name/address out into the AZData table, linked back to the demographics table by an FK which is the PK in the demographics table. I then process that AZData table monthly. DogsAndCats contains about 11 million records, Kids contains 21 million records and so forth. I did it this way for a couple of reasons, an attempt to keep the physical size of the database down and to some extent simply history - that is the way I started doing this. Well, with 8 databases and 350 million records it is probably time to change. I am considering building a table with the name/addr fields and then a set of fields with an FK pointing back to each database where that name / address can be found. 
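As a purely illustrative sketch of the two shapes being weighed here - the wide table with one FK column per demographics source versus a fully normalized bridge table - with every table and column name below invented for the example:

-- Shape A: one nullable FK column per demographics source (the proposal).
-- Adding a ninth source means adding a ninth column.
CREATE TABLE dbo.AZData (
    PKID          int IDENTITY(1,1) PRIMARY KEY,
    FName         varchar(50),
    LName         varchar(50),
    Addr          varchar(100),
    City          varchar(50),
    St            char(2),
    Zip5          char(5),
    Zip4          char(4),
    DogsAndCatsPK int NULL,   -- PK of the matching row in the DogsAndCats database
    KidsPK        int NULL    -- PK of the matching row in the Kids database
    -- ... one more column per future demographics database
);

-- Shape B: a fully normalized bridge table; a new source is just new rows,
-- at the cost of a join and a source discriminator.
CREATE TABLE dbo.AZDataSource (
    AZDataPK   int NOT NULL,          -- points at dbo.AZData.PKID
    SourceName varchar(50) NOT NULL,  -- e.g. 'DogsAndCats', 'Kids'
    SourcePK   int NOT NULL,          -- PK value inside that source database
    PRIMARY KEY (AZDataPK, SourceName)
);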
IOW the same name/addr may be in the kids and DogsAndCats table. So there would be a single name/addr record with an FK back to those two demographics tables. I would have a column for each demographics database's PK. This is denormalized, in the sense that as I add a new demographics table I have to add a new column to the name / address table to hold the PK linking records back to that demogr table. I normally consider any case where I have to add a field like this to be "denormalized" - kinda like Child1, Child2. The upside is that suddenly the demographics table link through a single M-M table (which happens to hold the name/addr info as well), which allows a very simple sql statement to determine whether a person with dogs also has kids etc. Any thoughts on the wisdom of denormalizing in this manner? -- John W. Colby Colby Consulting From jwcolby at colbyconsulting.com Tue Jan 17 07:11:26 2012 From: jwcolby at colbyconsulting.com (jwcolby) Date: Tue, 17 Jan 2012 08:11:26 -0500 Subject: [dba-SQLServer] How do I see actual compression ratios? Message-ID: <4F15737E.3070104@colbyconsulting.com> I am finding how to see estimated compression ratios, how to setup compression, how to see that compression is set etc, but I am not finding how to see the actual compression ratio of data stored in a table or index. Is there a property that stores an average compression actually achieved as the data was stored into the table / index? -- John W. Colby Colby Consulting Reality is what refuses to go away when you do not believe in it From fuller.artful at gmail.com Tue Jan 17 07:48:26 2012 From: fuller.artful at gmail.com (Arthur Fuller) Date: Tue, 17 Jan 2012 08:48:26 -0500 Subject: [dba-SQLServer] Completely Uninstall SQL Server Message-ID: How can I completely uninstall SQL Server, removing all traces of it, including anything that might lurk in the registry? I have more than one version installed currently (2008 R2, 2012 RC0, and a 2005 that some other product installed. I want to nuke them all and start over. I've tried uninstalling them one by one, but that doesn't seem to result in a complete nuke. Any suggestions, short of reformatting the disks? (Actually, on payday I intend to buy a TB hard disk, copy everything there, and reformat all the existing disks, and start over from scratch.) -- Arthur Cell: 647.710.1314 Prediction is difficult, especially of the future. -- Neils Bohr From Gustav at cactus.dk Tue Jan 17 08:29:17 2012 From: Gustav at cactus.dk (Gustav Brock) Date: Tue, 17 Jan 2012 15:29:17 +0100 Subject: [dba-SQLServer] Completely Uninstall SQL Server Message-ID: Hi Arthur After two days and perusing dozens and dozens of pages - many expressing the same woe - I ended up reinstalling ... with Windows 8 Preview. And promised myself never to install locally anything else than SQL Server Express; all regular SQL Server installs will go to server machines. /gustav PS: Sorry to bother, but professor Bohr was named Niels. An institute of the University of Copenhagen proudly carries his name: http://www.nbi.ku.dk/english/about/ >>> fuller.artful at gmail.com 17-01-2012 14:48 >>> How can I completely uninstall SQL Server, removing all traces of it, including anything that might lurk in the registry? I have more than one version installed currently (2008 R2, 2012 RC0, and a 2005 that some other product installed. I want to nuke them all and start over. I've tried uninstalling them one by one, but that doesn't seem to result in a complete nuke. Any suggestions, short of reformatting the disks? 
(Actually, on payday I intend to buy a TB hard disk, copy everything there, and reformat all the existing disks, and start over from scratch.) -- Arthur Cell: 647.710.1314 Prediction is difficult, especially of the future. -- Neils Bohr _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From jwcolby at colbyconsulting.com Tue Jan 17 08:33:41 2012 From: jwcolby at colbyconsulting.com (jwcolby) Date: Tue, 17 Jan 2012 09:33:41 -0500 Subject: [dba-SQLServer] Completely Uninstall SQL Server In-Reply-To: References: Message-ID: <4F1586C5.4080504@colbyconsulting.com> Well this does bring up the issue of installing the SSMS which does not come with the express version. John W. Colby Colby Consulting Reality is what refuses to go away when you do not believe in it On 1/17/2012 9:29 AM, Gustav Brock wrote: > Hi Arthur > > After two days and perusing dozens and dozens of pages - many expressing the same woe - I ended up reinstalling ... with Windows 8 Preview. And promised myself never to install locally anything else than SQL Server Express; all regular SQL Server installs will go to server machines. > > /gustav > > PS: Sorry to bother, but professor Bohr was named Niels. > An institute of the University of Copenhagen proudly carries his name: > http://www.nbi.ku.dk/english/about/ > > >>>> fuller.artful at gmail.com 17-01-2012 14:48>>> > How can I completely uninstall SQL Server, removing all traces of it, > including anything that might lurk in the registry? I have more than one > version installed currently (2008 R2, 2012 RC0, and a 2005 that some other > product installed. I want to nuke them all and start over. I've tried > uninstalling them one by one, but that doesn't seem to result in a complete > nuke. Any suggestions, short of reformatting the disks? (Actually, on > payday I intend to buy a TB hard disk, copy everything there, and reformat > all the existing disks, and start over from scratch.) > From df.waters at comcast.net Tue Jan 17 08:48:24 2012 From: df.waters at comcast.net (Dan Waters) Date: Tue, 17 Jan 2012 08:48:24 -0600 Subject: [dba-SQLServer] Completely Uninstall SQL Server In-Reply-To: <4F1586C5.4080504@colbyconsulting.com> References: <4F1586C5.4080504@colbyconsulting.com> Message-ID: <002801ccd527$109569d0$31c03d70$@comcast.net> There is a separate download for SQL Express with the express version of SSMS. http://www.microsoft.com/download/en/details.aspx?id=25174 Dan -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of jwcolby Sent: Tuesday, January 17, 2012 8:34 AM To: Discussion concerning MS SQL Server Subject: Re: [dba-SQLServer] Completely Uninstall SQL Server Well this does bring up the issue of installing the SSMS which does not come with the express version. John W. Colby Colby Consulting Reality is what refuses to go away when you do not believe in it On 1/17/2012 9:29 AM, Gustav Brock wrote: > Hi Arthur > > After two days and perusing dozens and dozens of pages - many expressing the same woe - I ended up reinstalling ... with Windows 8 Preview. And promised myself never to install locally anything else than SQL Server Express; all regular SQL Server installs will go to server machines. > > /gustav > > PS: Sorry to bother, but professor Bohr was named Niels. 
> An institute of the University of Copenhagen proudly carries his name: > http://www.nbi.ku.dk/english/about/ > > >>>> fuller.artful at gmail.com 17-01-2012 14:48>>> > How can I completely uninstall SQL Server, removing all traces of it, > including anything that might lurk in the registry? I have more than > one version installed currently (2008 R2, 2012 RC0, and a 2005 that > some other product installed. I want to nuke them all and start over. > I've tried uninstalling them one by one, but that doesn't seem to > result in a complete nuke. Any suggestions, short of reformatting the > disks? (Actually, on payday I intend to buy a TB hard disk, copy > everything there, and reformat all the existing disks, and start over > from scratch.) > _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From Gustav at cactus.dk Tue Jan 17 09:50:20 2012 From: Gustav at cactus.dk (Gustav Brock) Date: Tue, 17 Jan 2012 16:50:20 +0100 Subject: [dba-SQLServer] Completely Uninstall SQL Server Message-ID: Hi John Yes, but as I recall, you can install the SSMS only (without the SQL engine) from both 2005, 2008, and 2008 R2 as well as the upcoming version (not tested). The issue here is, that some of the versions of SSMS cannot coexist while the never versions not always are backward compatible. I think it was you mentioning that a while ago. Can someone confirm, please? /gustav >>> jwcolby at colbyconsulting.com 17-01-2012 15:33 >>> Well this does bring up the issue of installing the SSMS which does not come with the express version. John W. Colby Colby Consulting Reality is what refuses to go away when you do not believe in it On 1/17/2012 9:29 AM, Gustav Brock wrote: > Hi Arthur > > After two days and perusing dozens and dozens of pages - many expressing the same woe - I ended up reinstalling ... with Windows 8 Preview. And promised myself never to install locally anything else than SQL Server Express; all regular SQL Server installs will go to server machines. > > /gustav > > PS: Sorry to bother, but professor Bohr was named Niels. > An institute of the University of Copenhagen proudly carries his name: > http://www.nbi.ku.dk/english/about/ > > >>>> fuller.artful at gmail.com 17-01-2012 14:48>>> > How can I completely uninstall SQL Server, removing all traces of it, > including anything that might lurk in the registry? I have more than one > version installed currently (2008 R2, 2012 RC0, and a 2005 that some other > product installed. I want to nuke them all and start over. I've tried > uninstalling them one by one, but that doesn't seem to result in a complete > nuke. Any suggestions, short of reformatting the disks? (Actually, on > payday I intend to buy a TB hard disk, copy everything there, and reformat > all the existing disks, and start over from scratch.) From jwcolby at colbyconsulting.com Tue Jan 17 09:59:14 2012 From: jwcolby at colbyconsulting.com (jwcolby) Date: Tue, 17 Jan 2012 10:59:14 -0500 Subject: [dba-SQLServer] Completely Uninstall SQL Server In-Reply-To: <002801ccd527$109569d0$31c03d70$@comcast.net> References: <4F1586C5.4080504@colbyconsulting.com> <002801ccd527$109569d0$31c03d70$@comcast.net> Message-ID: <4F159AD2.1090304@colbyconsulting.com> The express version does not supply a bunch of stuff including data import / export and backup stuff (IIRC). John W. 
Colby Colby Consulting Reality is what refuses to go away when you do not believe in it On 1/17/2012 9:48 AM, Dan Waters wrote: > There is a separate download for SQL Express with the express version of > SSMS. > > http://www.microsoft.com/download/en/details.aspx?id=25174 > > > Dan > > -----Original Message----- > From: dba-sqlserver-bounces at databaseadvisors.com > [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of jwcolby > Sent: Tuesday, January 17, 2012 8:34 AM > To: Discussion concerning MS SQL Server > Subject: Re: [dba-SQLServer] Completely Uninstall SQL Server > > Well this does bring up the issue of installing the SSMS which does not come > with the express version. > > John W. Colby > Colby Consulting > > Reality is what refuses to go away > when you do not believe in it > > On 1/17/2012 9:29 AM, Gustav Brock wrote: >> Hi Arthur >> >> After two days and perusing dozens and dozens of pages - many expressing > the same woe - I ended up reinstalling ... with Windows 8 Preview. And > promised myself never to install locally anything else than SQL Server > Express; all regular SQL Server installs will go to server machines. >> >> /gustav >> >> PS: Sorry to bother, but professor Bohr was named Niels. >> An institute of the University of Copenhagen proudly carries his name: >> http://www.nbi.ku.dk/english/about/ >> >> >>>>> fuller.artful at gmail.com 17-01-2012 14:48>>> >> How can I completely uninstall SQL Server, removing all traces of it, >> including anything that might lurk in the registry? I have more than >> one version installed currently (2008 R2, 2012 RC0, and a 2005 that >> some other product installed. I want to nuke them all and start over. >> I've tried uninstalling them one by one, but that doesn't seem to >> result in a complete nuke. Any suggestions, short of reformatting the >> disks? (Actually, on payday I intend to buy a TB hard disk, copy >> everything there, and reformat all the existing disks, and start over >> from scratch.) >> > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > From jwcolby at colbyconsulting.com Tue Jan 17 10:12:46 2012 From: jwcolby at colbyconsulting.com (jwcolby) Date: Tue, 17 Jan 2012 11:12:46 -0500 Subject: [dba-SQLServer] Completely Uninstall SQL Server In-Reply-To: References: Message-ID: <4F159DFE.5010906@colbyconsulting.com> AFAIK SSMS is entirely compatible with all versions of the database engine. I don't really deal with old versions much though. What you can do will be limited by the engine that SSMS is controlling of course. John W. Colby Colby Consulting Reality is what refuses to go away when you do not believe in it On 1/17/2012 10:50 AM, Gustav Brock wrote: > Hi John > > Yes, but as I recall, you can install the SSMS only (without the SQL engine) from both 2005, 2008, and 2008 R2 as well as the upcoming version (not tested). The issue here is, that some of the versions of SSMS cannot coexist while the never versions not always are backward compatible. I think it was you mentioning that a while ago. > Can someone confirm, please? 
> > /gustav > > >>>> jwcolby at colbyconsulting.com 17-01-2012 15:33>>> > Well this does bring up the issue of installing the SSMS which does not come with the express version. > > John W. Colby > Colby Consulting > > Reality is what refuses to go away > when you do not believe in it > > On 1/17/2012 9:29 AM, Gustav Brock wrote: >> Hi Arthur >> >> After two days and perusing dozens and dozens of pages - many expressing the same woe - I ended up reinstalling ... with Windows 8 Preview. And promised myself never to install locally anything else than SQL Server Express; all regular SQL Server installs will go to server machines. >> >> /gustav >> >> PS: Sorry to bother, but professor Bohr was named Niels. >> An institute of the University of Copenhagen proudly carries his name: >> http://www.nbi.ku.dk/english/about/ >> >> >>>>> fuller.artful at gmail.com 17-01-2012 14:48>>> >> How can I completely uninstall SQL Server, removing all traces of it, >> including anything that might lurk in the registry? I have more than one >> version installed currently (2008 R2, 2012 RC0, and a 2005 that some other >> product installed. I want to nuke them all and start over. I've tried >> uninstalling them one by one, but that doesn't seem to result in a complete >> nuke. Any suggestions, short of reformatting the disks? (Actually, on >> payday I intend to buy a TB hard disk, copy everything there, and reformat >> all the existing disks, and start over from scratch.) > > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > From Gustav at cactus.dk Tue Jan 17 10:36:37 2012 From: Gustav at cactus.dk (Gustav Brock) Date: Tue, 17 Jan 2012 17:36:37 +0100 Subject: [dba-SQLServer] Completely Uninstall SQL Server Message-ID: Hi John OK, then I see no issues for your case. /gustav >>> jwcolby at colbyconsulting.com 17-01-2012 17:12 >>> AFAIK SSMS is entirely compatible with all versions of the database engine. I don't really deal with old versions much though. What you can do will be limited by the engine that SSMS is controlling of course. John W. Colby Colby Consulting Reality is what refuses to go away when you do not believe in it On 1/17/2012 10:50 AM, Gustav Brock wrote: > Hi John > > Yes, but as I recall, you can install the SSMS only (without the SQL engine) from both 2005, 2008, and 2008 R2 as well as the upcoming version (not tested). The issue here is, that some of the versions of SSMS cannot coexist while the never versions not always are backward compatible. I think it was you mentioning that a while ago. > Can someone confirm, please? > > /gustav > > >>>> jwcolby at colbyconsulting.com 17-01-2012 15:33>>> > Well this does bring up the issue of installing the SSMS which does not come with the express version. > > John W. Colby > Colby Consulting From jwcolby at colbyconsulting.com Tue Jan 17 11:06:50 2012 From: jwcolby at colbyconsulting.com (jwcolby) Date: Tue, 17 Jan 2012 12:06:50 -0500 Subject: [dba-SQLServer] Completely Uninstall SQL Server In-Reply-To: References: Message-ID: <4F15AAAA.9090405@colbyconsulting.com> LOL, well... life is never simple. I tried to install SQL Server 2008 on my laptop, just the SSMS. The install complained that there was a previous version of Visual Studio that was not up to SP1. It seems the installer tries to install VS2008 AFAICT. I do have VS 2010 install which is at the latest SP. 
So the SSMS install won't let me do the install because of an internal (and apparently bogus) error in the install package. Ya gotta love it. John W. Colby Colby Consulting Reality is what refuses to go away when you do not believe in it On 1/17/2012 11:36 AM, Gustav Brock wrote: > Hi John > > OK, then I see no issues for your case. > > /gustav > > >>>> jwcolby at colbyconsulting.com 17-01-2012 17:12>>> > AFAIK SSMS is entirely compatible with all versions of the database engine. I don't really deal > with old versions much though. What you can do will be limited by the engine that SSMS is > controlling of course. > > John W. Colby > Colby Consulting > > Reality is what refuses to go away > when you do not believe in it > > On 1/17/2012 10:50 AM, Gustav Brock wrote: >> Hi John >> >> Yes, but as I recall, you can install the SSMS only (without the SQL engine) from both 2005, 2008, and 2008 R2 as well as the upcoming version (not tested). The issue here is, that some of the versions of SSMS cannot coexist while the never versions not always are backward compatible. I think it was you mentioning that a while ago. >> Can someone confirm, please? >> >> /gustav >> >> >>>>> jwcolby at colbyconsulting.com 17-01-2012 15:33>>> >> Well this does bring up the issue of installing the SSMS which does not come with the express version. >> >> John W. Colby >> Colby Consulting > > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > From jwcolby at colbyconsulting.com Thu Jan 19 09:07:48 2012 From: jwcolby at colbyconsulting.com (jwcolby) Date: Thu, 19 Jan 2012 10:07:48 -0500 Subject: [dba-SQLServer] Adding a field to every table Message-ID: <4F1831C4.5060003@colbyconsulting.com> I have built several SQL Server databases which are used by Access, and have just recently discovered that I really need a timestamp in every table. So I have been laboriously adding a timestamp field table by table. As everyone knows, TSQL is not my strength, but I went out looking for solutions to doing this programmatically using TSQL. So the challenge is to get every user table in a database, and then execute an alter table statement: alter table EachTableName add timestamp I managed to do it but boy is it *not* elegant! I found the following (by our own Arthur Fuller) which creates a udf to return a table with the names of all the users tables. My first iteration actually created this udf and then called it. http://www.techrepublic.com/article/alter-every-table-in-a-sql-database/5796376 The following discusses iterating through a table. The author was executing a stored procedure for each line of the table, which is close to what I am trying to do. Yes, I know RBAR and all that but after all this is just a minimal number of operations done once per database. http://weblogs.asp.net/jgalloway/archive/2006/04/12/442618.aspx That said, I would like to know if there is an elegant (set based) way to execute a line of code like I am doing for each record in the table. If you promise not to laugh I will show you my cobbled together solution. I built a User Stored Procedure so that I can just copy it over to my server at the client. The stored procedure creates the udf every time it runs, this in case this is the first time I am running the USP. I then declare a table to store the table returned by the UDF, and fill the table. 
After that I iterate the table RBAR, pulling the assembled SQL statement out into a varchar, and then exec() the SQL Statement. -- ============================================= -- Author: -- Create date: <1/19/2012> -- Description: -- ============================================= ALTER PROCEDURE [dbo].[usb_AddFldTake2] @DBName as varchar(250) AS BEGIN -- SET NOCOUNT ON added to prevent extra result sets from -- interfering with SELECT statements. SET NOCOUNT ON; DECLARE @SQL varchar(4000) DECLARE @tblUsrTableNames table (rownum int IDENTITY (1, 1), TblName varchar(250)) --Get all of the table names into a table variable select @SQL = 'SELECT TOP 100 PERCENT name FROM ' + @DBName + '.dbo.sysobjects WHERE type = ''U'' ORDER BY name' print @SQL insert into @tblUsrTableNames (TblName) exec (@SQL) --Set up RBAR table iteration declare @RowCnt int declare @MaxRows int select @RowCnt = 1 select @MaxRows=count(*) from @tblUsrTableNames Declare @TblName varchar(250) while @RowCnt <= @MaxRows begin --Get the name of the table from each row select @TblName = (SELECT TblName from @tblUsrTableNames where rownum = @RowCnt ) --build a sql statement to perform the alter table and add the timestamp select @SQL = 'ALTER TABLE PrisonMinistries.dbo.[' + @TblName + '] ADD timestamp' print @SQL --Execute the sql execute (@SQL) --move to the next row Select @RowCnt = @RowCnt + 1 end -- Insert statements for procedure here END As I mentioned I would like to know if there is an elegant (set based) way to execute a line of code like I am doing for each record in the table. -- John W. Colby Colby Consulting Reality is what refuses to go away when you do not believe in it From mwp.reid at qub.ac.uk Thu Jan 19 09:20:14 2012 From: mwp.reid at qub.ac.uk (Martin Reid) Date: Thu, 19 Jan 2012 15:20:14 +0000 Subject: [dba-SQLServer] Adding a field to every table In-Reply-To: <4F1831C4.5060003@colbyconsulting.com> References: <4F1831C4.5060003@colbyconsulting.com> Message-ID: <631CF83223105545BF43EFB52CB082957BB371F229@EX2K7-VIRT-2.ads.qub.ac.uk> John Look this up sp_msforeachtable Martin -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of jwcolby Sent: 19 January 2012 15:08 To: Sqlserver-Dba; VBA; Access Developers discussion and problem solving Subject: [dba-SQLServer] Adding a field to every table I have built several SQL Server databases which are used by Access, and have just recently discovered that I really need a timestamp in every table. So I have been laboriously adding a timestamp field table by table. As everyone knows, TSQL is not my strength, but I went out looking for solutions to doing this programmatically using TSQL. So the challenge is to get every user table in a database, and then execute an alter table statement: alter table EachTableName add timestamp I managed to do it but boy is it *not* elegant! I found the following (by our own Arthur Fuller) which creates a udf to return a table with the names of all the users tables. My first iteration actually created this udf and then called it. http://www.techrepublic.com/article/alter-every-table-in-a-sql-database/5796376 The following discusses iterating through a table. The author was executing a stored procedure for each line of the table, which is close to what I am trying to do. Yes, I know RBAR and all that but after all this is just a minimal number of operations done once per database. 
From mwp.reid at qub.ac.uk Thu Jan 19 09:20:14 2012
From: mwp.reid at qub.ac.uk (Martin Reid)
Date: Thu, 19 Jan 2012 15:20:14 +0000
Subject: [dba-SQLServer] Adding a field to every table
In-Reply-To: <4F1831C4.5060003@colbyconsulting.com>
References: <4F1831C4.5060003@colbyconsulting.com>
Message-ID: <631CF83223105545BF43EFB52CB082957BB371F229@EX2K7-VIRT-2.ads.qub.ac.uk>

John

Look this up sp_msforeachtable

Martin

-----Original Message-----
From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of jwcolby
Sent: 19 January 2012 15:08
To: Sqlserver-Dba; VBA; Access Developers discussion and problem solving
Subject: [dba-SQLServer] Adding a field to every table

_______________________________________________
dba-SQLServer mailing list
dba-SQLServer at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
http://www.databaseadvisors.com

From David at sierranevada.com Thu Jan 19 09:37:19 2012
From: David at sierranevada.com (David Lewis)
Date: Thu, 19 Jan 2012 07:37:19 -0800
Subject: [dba-SQLServer] add timestamp column to all tables
In-Reply-To:
References:
Message-ID: <8437387186B192498848F1892A41F780015D65768396@schwarz.sierranevada.corp>

More or less that is the approach to use. One could quibble here and there about the specifics of your version, but it gets the job done and you've learned some things in the process so I'd say 'well done'.

Here is another solution that you haven't heard of (by design of MS), but it is simpler: sp_msforeachtable.
A link that shows an application is
http://weblogs.sqlteam.com/joew/archive/2007/10/23/60383.aspx
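For readers who have not met it, sp_MSforeachtable is an undocumented Microsoft procedure that runs a command once per user table, substituting each schema-qualified, bracketed table name for the ? placeholder. A minimal sketch of the kind of call the linked article describes (the database name here is hypothetical, and behaviour of undocumented procedures can change between versions):

USE SomeDatabase;   -- hypothetical database name
GO
-- ? is replaced with a quoted, schema-qualified name such as [dbo].[Customers]
EXEC sp_MSforeachtable @command1 = N'ALTER TABLE ? ADD timestamp;';
GO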
From fuller.artful at gmail.com Thu Jan 19 10:35:38 2012
From: fuller.artful at gmail.com (Arthur Fuller)
Date: Thu, 19 Jan 2012 11:35:38 -0500
Subject: [dba-SQLServer] add timestamp column to all tables
In-Reply-To: <8437387186B192498848F1892A41F780015D65768396@schwarz.sierranevada.corp>
References: <8437387186B192498848F1892A41F780015D65768396@schwarz.sierranevada.corp>
Message-ID:

Somewhere, on some eZine (can't remember which, perhaps TechRepublic or Simple-Talk) I wrote a piece about modifying modeldb, the idea being based on the fact that every new db is created based on modeldb. This, IMO, is a very powerful and exceedingly lazy way to accomplish a lot in a little time. I have investigated this avenue at some length and have come up with this approach:

1. Copy modeldb to a new db, naming it appropriately (see next step).
2. Depending on our niche(s), we all have several models. One might be what we at Artful call YAFOES (yet another f***ing Order Entry System), which involves Customers, Orders, OrderDetails, and Products. Another might be YACAS (yet another accounting system), which involves a Chart of Accounts, four quadrants, and so on.
3. Copy the virgin modeldb and give it a name such as VirginModelDB.
4. For each of your models, import the tables and sprocs and views and udfs that every such app will typically need.
5. When you need to create a new db, rename modeldb and rename ModelYafoesDB to modeldb, then create your new database. Result: a new db containing lots of what you need, built in. The last step is to restore the original modeldb to said name -- unless, of course, you specialize in a particular niche, in which case the customized modeldb will serve as your starting ground for each new project.

However, you also need to be aware of updates to SQL Server, which might potentially wipe out your changes to modeldb. That's why you should always create VirginModelDB, and after an update to SQL, copy the new modeldb to VirginModelDB, and you're ready to go. Lock and load, as it were.

This approach has worked very well for me for the past few years, when it first occurred to me. I copied modeldb to a new db called VirginModeldb, then rolled into modeldb the tables Customers, Orders, OrderDetails and Products, plus some queries and sprocs and views; finally I created a new db based on the customized model and Lo and Behold! Everything carried into my new baby.

I don't take entire credit for this notion. It was inspired in the first place by a chapter of a book on SQL by my good friend Dejan Sunderic. You can find his books on Amazon. I highly recommend them.

--
Arthur
Cell: 647.710.1314

Prediction is difficult, especially of the future.
-- Niels Bohr
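The behaviour Arthur is relying on is that the system database named model acts as the template for every newly created database. A minimal illustration, with hypothetical object and database names (best tried on a test instance, since anything added to model also ends up in every database created afterwards, tempdb included after the next restart):

USE model;
GO
-- Hypothetical starter table; objects added to model are copied into
-- databases created after this point.
CREATE TABLE dbo.Customers (
    CustomerID   int IDENTITY(1, 1) PRIMARY KEY,
    CustomerName varchar(100) NOT NULL,
    RowVer       timestamp
);
GO
CREATE DATABASE NewClientDb;   -- hypothetical new database
GO
USE NewClientDb;
GO
SELECT name FROM sys.tables;   -- returns Customers, inherited from model
GO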
From jwcolby at colbyconsulting.com Thu Jan 19 11:29:14 2012
From: jwcolby at colbyconsulting.com (jwcolby)
Date: Thu, 19 Jan 2012 12:29:14 -0500
Subject: [dba-SQLServer] add timestamp column to all tables
In-Reply-To: <8437387186B192498848F1892A41F780015D65768396@schwarz.sierranevada.corp>
References: <8437387186B192498848F1892A41F780015D65768396@schwarz.sierranevada.corp>
Message-ID: <4F1852EA.2090006@colbyconsulting.com>

If I could get it working this would be magnificent.

http://weblogs.sqlteam.com/joew/archive/2007/10/23/60383.aspx

I tried this:

use InmateCheckout
execute sp_MSforeachtable 'alter table [?] add timestamp'

and it gives the following error

Msg 4902, Level 16, State 1, Line 1
Cannot find the object "[dbo].[tblLocation]" because it does not exist or you do not have permissions.

for each table in that database. It appears that it is in fact iterating all of the tables, but doesn't have the permissions to perform an alter table kind of sql statement?

John W. Colby
Colby Consulting

Reality is what refuses to go away
when you do not believe in it

On 1/19/2012 10:37 AM, David Lewis wrote:
> More or less that is the approach to use. One could quibble here and there about the specifics of your version, but it gets the job done and you've learned some things in the process so I'd say 'well done'.
> Here is another solution that you haven't heard of (by design of MS), but it is simpler.
>
> sp_msforeachtable. A link that shows an application is
> http://weblogs.sqlteam.com/joew/archive/2007/10/23/60383.aspx
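One possible explanation, offered as an assumption rather than anything confirmed later in the thread: sp_MSforeachtable already substitutes a bracketed, schema-qualified name (for example [dbo].[tblLocation]) for the ? token, so wrapping the token in another pair of brackets produces an object name the server cannot resolve, which surfaces as error 4902 rather than as a permissions problem. Dropping the extra brackets would look like this (database name taken from the post above):

USE InmateCheckout;
GO
-- ? already expands to [schema].[table], so no additional brackets are needed
EXEC sp_MSforeachtable @command1 = N'ALTER TABLE ? ADD timestamp;';
GO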
From mwp.reid at qub.ac.uk Thu Jan 19 14:55:08 2012
From: mwp.reid at qub.ac.uk (Martin Reid)
Date: Thu, 19 Jan 2012 20:55:08 +0000
Subject: [dba-SQLServer] add timestamp column to all tables
Message-ID: <631CF83223105545BF43EFB52CB082957BB4232457@EX2K7-VIRT-2.ads.qub.ac.uk>

John

Have a look at this one

http://www.sqlservercentral.com/Forums/Topic1090648-391-1.aspx

Martin

Sent from my Windows Phone

________________________________
From: jwcolby
Sent: 19/01/2012 18:13
To: Discussion concerning MS SQL Server
Subject: Re: [dba-SQLServer] add timestamp column to all tables

_______________________________________________
dba-SQLServer mailing list
dba-SQLServer at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
http://www.databaseadvisors.com

From fhtapia at gmail.com Thu Jan 26 12:00:49 2012
From: fhtapia at gmail.com (Francisco Tapia)
Date: Thu, 26 Jan 2012 10:00:49 -0800
Subject: [dba-SQLServer] db programmer job
Message-ID:

Hey guys,
My brother is looking for a DB programmer / administrator to take on some generally light duty work. I am posting here because I don't have time to respond during normal business hours 8-5 (GMT-8); if you guys are interested, let me know.

Thanks!
-Francisco
http://bit.ly/sqlthis | Tsql and More...

From fuller.artful at gmail.com Thu Jan 26 16:05:51 2012
From: fuller.artful at gmail.com (Arthur Fuller)
Date: Thu, 26 Jan 2012 17:05:51 -0500
Subject: [dba-SQLServer] db programmer job
In-Reply-To:
References:
Message-ID:

Hi Francisco,

If remote computing is an option, I'd be interested.

--
Arthur
Cell: 647.710.1314

Prediction is difficult, especially of the future.
-- Niels Bohr

From fhtapia at gmail.com Thu Jan 26 16:07:18 2012
From: fhtapia at gmail.com (Francisco Tapia)
Date: Thu, 26 Jan 2012 14:07:18 -0800
Subject: [dba-SQLServer] db programmer job
In-Reply-To:
References:
Message-ID:

I'll email his details privately.

Thanks!!
-Francisco
http://bit.ly/sqlthis | Tsql and More...

On Thu, Jan 26, 2012 at 14:05, Arthur Fuller wrote:
> Hi Francisco,
>
> If remote computing is an option, I'd be interested.
From rls at WeBeDb.com Fri Jan 27 11:54:42 2012
From: rls at WeBeDb.com (Robert Stewart)
Date: Fri, 27 Jan 2012 11:54:42 -0600
Subject: [dba-SQLServer] Adding a field to all user tables
In-Reply-To:
References:
Message-ID:

For whatever reason, my original post of this never seems to appear on the results from the list.

While I do not advocate using cursors 99% of the time, there are times that they are what you should use. This will make the code easier to read, and it will work smoothly.

CREATE PROCEDURE dbo.usp_AddTimestamp
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE
        @TableName VARCHAR( 128 )
      , @Sql       NVARCHAR( 4000 )

    DECLARE curTableList CURSOR
        FOR SELECT name
            FROM sysobjects
            WHERE [TYPE] = 'U'
            ORDER BY name

    OPEN curTableList

    FETCH next FROM curTableList INTO @TableName

    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @Sql = N'ALTER TABLE dbo.' + @TableName + ' ADD Timestamp'

        EXEC sp_executesql @Sql

        FETCH next FROM curTableList INTO @TableName
    END

    CLOSE curTableList
    DEALLOCATE curTableList
END
GO

From fuller.artful at gmail.com Sat Jan 28 02:37:34 2012
From: fuller.artful at gmail.com (Arthur Fuller)
Date: Sat, 28 Jan 2012 03:37:34 -0500
Subject: [dba-SQLServer] Adding a field to all user tables
In-Reply-To:
References:
Message-ID:

I'll give you this one, for sure! It's guaranteed to be a smallish cursor, even in a large-ish database. Maybe 600 tables in the largest db I ever worked on! And as cursors go, that's not large. That said, I personally would have gone with a ForEach in SMO, but that's nit-picking. Well done!

P.S.
I also like your formatting conventions.

--
Arthur
Cell: 647.710.1314

Prediction is difficult, especially of the future.
-- Niels Bohr
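For what it's worth, here is the same loop written against the newer catalog view, with the schema taken from the metadata and the name bracket-quoted so unusual table names do not break the dynamic SQL. This is a sketch along the lines of Robert's procedure, not code from the thread:

DECLARE @TableName nvarchar(300),
        @Sql       nvarchar(4000);

DECLARE curTableList CURSOR LOCAL FAST_FORWARD FOR
    SELECT QUOTENAME(SCHEMA_NAME(schema_id)) + N'.' + QUOTENAME(name)
    FROM sys.tables
    WHERE is_ms_shipped = 0
    ORDER BY name;

OPEN curTableList;
FETCH NEXT FROM curTableList INTO @TableName;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- The name is already schema-qualified and quoted, so no dbo. prefix is needed
    SET @Sql = N'ALTER TABLE ' + @TableName + N' ADD Timestamp;';
    EXEC sp_executesql @Sql;
    FETCH NEXT FROM curTableList INTO @TableName;
END;

CLOSE curTableList;
DEALLOCATE curTableList;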
From lawhonac at hiwaay.net Sat Jan 28 19:22:02 2012
From: lawhonac at hiwaay.net (Alan Lawhon)
Date: Sat, 28 Jan 2012 19:22:02 -0600
Subject: [dba-SQLServer] Online Courses Threaten University Model
Message-ID: <000501ccde24$6780e070$3682a150$@net>

http://hereandnow.wbur.org/2012/01/27/online-courses-universities

Colleges and universities have pretty much done this to themselves. By not easily allowing (or accepting) transfer of credit hours from one institution to another, most colleges and universities have tended to operate as if they were a monopoly where their "customers" (i.e. students) have few options and little control. It's been nearly 30 years since I was in college, but I always thought the price of college textbooks was nothing but a racket - legalized extortion and price fixing. This is but one example of how the "education industry" behaves like a monopoly. There is (or has been) little concern over constantly rising prices - until now. (President Obama seemed to put higher education "on notice" the other day, but I'm not convinced he's really serious about it - or merely electioneering.)

Welcome to the future, higher education. This is what is going to happen to every industry where the cost of the "product" gets out of kilter.

I've specifically chosen the option of self-education (and self study) in an effort to get certified in Microsoft SQL Server. The high cost of going back to school was a primary determinant in that decision. If one of these low cost (or free) online universities offered a curriculum in database management or database design, I would be very interested in such a program - especially if potential employers indicated a willingness to accept such a credential.

Alan C. Lawhon

From fhtapia at gmail.com Mon Jan 30 10:20:40 2012
From: fhtapia at gmail.com (Francisco Tapia)
Date: Mon, 30 Jan 2012 08:20:40 -0800
Subject: [dba-SQLServer] Adding a field to all user tables
In-Reply-To:
References:
Message-ID:

out of curiosity would you really make this a sproc? I think more of a maintenance architecture script (again this is nitpicking) but I gather that once you've added your timestamps to all your tables, going forward you'd automatically do that to all new tables created, right?

in the case of a broken design, where some tables have the timestamps, and many do not you'd probably want to skip the timestamped tables. how about a little mod like:

SELECT t.name AS TableName
--    ,sc.name AS SchemaName,
--    c.name AS ColumnName,
--    types.name AS TypeName,
--    st.name AS TypeSchemaName,
--    t.type AS type
FROM sys.all_columns c
INNER JOIN sys.all_objects t WITH (NOLOCK) ON c.object_id=t.object_id
LEFT JOIN sys.schemas sc WITH (NOLOCK) ON t.schema_id=sc.schema_id
LEFT JOIN sys.types types WITH (NOLOCK) ON c.user_type_id=types.user_type_id
LEFT JOIN sys.schemas st WITH (NOLOCK) ON st.schema_id=types.schema_id
WHERE t.type IN ('U')
AND types.NAME <> 'timestamp'
GROUP BY t.NAME
ORDER BY t.NAME

instead of

SELECT name
FROM sysobjects
WHERE [TYPE] = 'U'
ORDER BY name

if you plan on keeping the design with a managed script so you can automatically come back and re-add timestamps on any table you may have forgotten, you can replace the previous cursor select with this modified one, so that you only find the tables without timestamps already.

-Francisco
http://bit.ly/sqlthis | Tsql and More...
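Since the stated goal is to keep only the tables that do not yet have a timestamp column, and a filter of types.name <> 'timestamp' still keeps any table that has at least one column of some other type, a NOT EXISTS test may express that intent more directly. A possible variant (a sketch, not code from the thread):

SELECT sc.name AS SchemaName,
       t.name  AS TableName
FROM sys.tables AS t
JOIN sys.schemas AS sc ON sc.schema_id = t.schema_id
WHERE t.is_ms_shipped = 0
  AND NOT EXISTS (SELECT 1
                  FROM sys.columns AS c
                  JOIN sys.types   AS ty ON ty.user_type_id = c.user_type_id
                  WHERE c.object_id = t.object_id
                    AND ty.name = 'timestamp')   -- no timestamp column yet
ORDER BY t.name;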
_______________________________________________
dba-SQLServer mailing list
dba-SQLServer at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
http://www.databaseadvisors.com