From jwcolby at colbyconsulting.com Fri Jul 1 18:03:37 2011 From: jwcolby at colbyconsulting.com (jwcolby) Date: Fri, 01 Jul 2011 19:03:37 -0400 Subject: [dba-SQLServer] When are log files used Message-ID: <4E0E5249.1040406@colbyconsulting.com> Are log files used for read operations or only data modifications? -- John W. Colby www.ColbyConsulting.com From fhtapia at gmail.com Fri Jul 1 18:16:55 2011 From: fhtapia at gmail.com (Francisco Tapia) Date: Fri, 1 Jul 2011 16:16:55 -0700 Subject: [dba-SQLServer] When are log files used In-Reply-To: <4E0E5249.1040406@colbyconsulting.com> References: <4E0E5249.1040406@colbyconsulting.com> Message-ID: <-7846804006833651449@unknownmsgid> It's for updates and inserts only, read operations may use the tempdb depending on how you constructed the select... Sent from my mobile On Jul 1, 2011, at 4:04 PM, jwcolby wrote: > Are log files used for read operations or only data modifications? > > -- > John W. Colby > www.ColbyConsulting.com > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > From jwcolby at colbyconsulting.com Sat Jul 2 00:36:37 2011 From: jwcolby at colbyconsulting.com (jwcolby) Date: Sat, 02 Jul 2011 01:36:37 -0400 Subject: [dba-SQLServer] Server upgrade Message-ID: <4E0EAE65.90501@colbyconsulting.com> A while back I built a server based on the ASUS KGPE-D16 dual socket G34 http://www.newegg.com/Product/Product.aspx?Item=N82E16813131643 and a single 8 core AMD 6128 http://www.newegg.com/Product/Product.aspx?Item=N82E16819105266 with 32 gigs of RAM using the Kingston KVR1333D3D4R9S/8G http://www.newegg.com/Product/Product.aspx?Item=N82E16820139140 While I was at it I built a 2 drive SSD Raid 0 using OCZ Vertex 2 OCZSSD2-2VTX120G http://www.newegg.com/Product/Product.aspx?Item=N82E16820227705 to place my central databases on. Backup is everything (!) but it has been fault free so far, and pretty darned fast! But I started filling up the disk (in fact twice had a "disk full") so this week I ordered an ASUS PIKE 1068E raid controller http://www.newegg.com/Product/Product.aspx?Item=N82E16816110042 And 4 more SSDs using the Mushkin Enhanced Callisto Deluxe MKNSSDCL120GB-DX http://www.newegg.com/Product/Product.aspx?Item=N82E16820226152 I just finished the install. I created a 4 disk raid 0 on the 1068E controller and moved the data files onto that volume and left the log files on the original raid 0 SSD volume. I understand that I must keep backups in place due to the RAID 0 failure scenario. However these databases are primarily read-only, with approximately monthly updates, so the data is relatively static. Given that they are primarily read-only, the high IOPS and low latency getting at the data make an enormous performance difference with manageable risk. I am getting the typical ATTO graphs you would expect with a 4 drive SSD Raid 0, in the neighborhood of 700 MB/s for the large block transfers, both read and write. I am impatiently waiting for the Interlagos to arrive, though I may not be able to afford them at first. OTOH the price of the RAM has dropped substantially in the 8 months since I built the server, so buying another 32 gigs soon is definitely doable. Doing an A/B comparison with the old system is impractical, but I can tell you I am running processes in a few minutes that used to take a half hour or more. Sometimes much more.
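For anyone who wants to put numbers on that kind of before / after run, a minimal cold-cache timing harness in plain T-SQL looks something like this (dbo.tblYourBigTable is a placeholder, not one of my actual tables):

DBCC DROPCLEANBUFFERS;   -- start from a cold buffer cache

DECLARE @t0 DATETIME;
SET @t0 = GETDATE();

-- the long-running query or process under test goes here
SELECT COUNT(*) FROM dbo.tblYourBigTable;

SELECT DATEDIFF(second, @t0, GETDATE()) AS ElapsedSeconds;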
It is gratifying to watch 8 cores running at 80+% at times. I can also say that backing up a 60 gig database file with high compression from SSD to rotating media is *really* fast. I can't say that SSDs would be the answer for transaction processing systems but for my purposes they really do rock. -- John W. Colby www.ColbyConsulting.com From marklbreen at gmail.com Sat Jul 2 05:25:57 2011 From: marklbreen at gmail.com (Mark Breen) Date: Sat, 2 Jul 2011 11:25:57 +0100 Subject: [dba-SQLServer] When are log files used In-Reply-To: <-7846804006833651449@unknownmsgid> References: <4E0E5249.1040406@colbyconsulting.com> <-7846804006833651449@unknownmsgid> Message-ID: Hello John, Just curious, what prompted your question? Mark On 2 July 2011 00:16, Francisco Tapia wrote: > It's for updates and inserts only, read operations may use the tempdb > depending on how you constructed the select... > > Sent from my mobile > > On Jul 1, 2011, at 4:04 PM, jwcolby wrote: > > > Are log files used for read operations or only data modifications? > > > > -- > > John W. Colby > > www.ColbyConsulting.com > > _______________________________________________ > > dba-SQLServer mailing list > > dba-SQLServer at databaseadvisors.com > > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > > http://www.databaseadvisors.com > > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > From jwcolby at colbyconsulting.com Sat Jul 2 09:28:48 2011 From: jwcolby at colbyconsulting.com (jwcolby) Date: Sat, 02 Jul 2011 10:28:48 -0400 Subject: [dba-SQLServer] When are log files used In-Reply-To: References: <4E0E5249.1040406@colbyconsulting.com> <-7846804006833651449@unknownmsgid> Message-ID: <4E0F2B20.1000709@colbyconsulting.com> Mark, > Just curious, what prompted your question? When I got into this business I bought a 16 port Areca RAID controller and a bunch of 1 TB drives. I built big arrays and RAID 6 volumes for maximum reliability and as much speed as I could muster. I created 2 TB partitions and placed my data files on one and my log files on another. A while back I bought a pair of SSDs http://www.newegg.com/Product/Product.aspx?Item=N82E16820227590 And made a 220 GB RAID 0 array and placed a set of three databases (my "central" databases) on there for speed. This last week I was doing some Update / Append operations on some of these databases and ended up with "disk full" - stopped me cold!!! Luckily I was able to move the logs off to rotating media and let them complete their operations and then finish up what I was doing. Anyway... I upgraded the server last night. I added a very reasonably priced (and reasonably powerful) RAID expansion card called the ASUS PIKE 1068E raid controller. It only supports Raid 0 and 1 but that is perfect for this application since I am using Raid 0 for these volumes. It also has no write cache so it is not appropriate for high-write applications. http://www.newegg.com/Product/Product.aspx?Item=N82E16816110042 ASUS PIKE and four new SSDs to hold the central database files I work with: Mushkin Enhanced Callisto Deluxe MKNSSDCL120GB-DX http://www.newegg.com/Product/Product.aspx?Item=N82E16820226152 > Just curious, what prompted your question? What I was trying to discover was when log files are used, in order to work out how much room I needed to give them. I had all of the databases and their log files on a single RAID0.
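A quick way to watch how big the logs actually get is standard T-SQL; nothing here is specific to my setup:

DBCC SQLPERF(LOGSPACE);  -- log size and percent used for every database on the instance

-- or, run in the context of one database, data and log file sizes in MB
SELECT name, type_desc, size * 8 / 1024 AS SizeMB
FROM sys.database_files;

DBCC SQLPERF(LOGSPACE) reports how full each log really is, which is what I needed to know.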
I was doing some appends / updates and the log files filled up the disk, which is what prompted the expansion. In the end I decided to put the data files on a new RAID0 created from the 4 new SSDs (~440 GB) and leave the log files on the old RAID0 using the old two SSDs (~220 GB). I really only write to these files roughly once per month, but I ended up doing some processing unrelated to the monthly thing. ATM the data disk has 160 GB used (280 GB free) and the log file disk has 18 GB used (204 GB free). That should hold me for a while, but I still have 4 more SATA ports on the Pike controller if I need them. John W. Colby www.ColbyConsulting.com On 7/2/2011 6:25 AM, Mark Breen wrote: > Hello John, > > Just curious, what prompted your question? > > Mark > > > > > On 2 July 2011 00:16, Francisco Tapia wrote: > >> It's for updates and inserts only, read operations may use the tempdb >> depending on how you constructed the select... >> >> Sent from my mobile >> >> On Jul 1, 2011, at 4:04 PM, jwcolby wrote: >> >>> Are log files used for read operations or only data modifications? >>> >>> -- >>> John W. Colby >>> www.ColbyConsulting.com >>> _______________________________________________ >>> dba-SQLServer mailing list >>> dba-SQLServer at databaseadvisors.com >>> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver >>> http://www.databaseadvisors.com >>> >> _______________________________________________ >> dba-SQLServer mailing list >> dba-SQLServer at databaseadvisors.com >> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver >> http://www.databaseadvisors.com >> >> > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > From fhtapia at gmail.com Sat Jul 2 11:23:57 2011 From: fhtapia at gmail.com (Francisco Tapia) Date: Sat, 2 Jul 2011 09:23:57 -0700 Subject: [dba-SQLServer] When are log files used In-Reply-To: <4E0F2B20.1000709@colbyconsulting.com> References: <4E0E5249.1040406@colbyconsulting.com> <-7846804006833651449@unknownmsgid> <4E0F2B20.1000709@colbyconsulting.com> Message-ID: <-5104475497454315775@unknownmsgid> Wow, I like your speedy setup - just remember to back up often. Something else that might help during large operations is to switch the recovery model to simple, which helps keep the log file size down: log space is automatically freed and reused as write operations commit to the db files. The full recovery model is only really needed if you have to be able to restore up to the minute before a failure. Sent from my mobile On Jul 2, 2011, at 7:29 AM, jwcolby wrote: > Mark, > > > Just curious, what prompted your question? > > When I got into this business I bought a 16 port Areca RAID controller and a bunch of 1 TB drives. I built big arrays and RAID06 volumes for maximum reliability and as much speed as I could muster. I created 2 tb partitions and placed my data files on one and my log files on another. Awhile back I bought a pair of SSDs > > http://www.newegg.com/Product/Product.aspx?Item=N82E16820227590 > > And made a 220 GB RAID 0 array and placed a set of three databases (my "central" databases) on there for speed. > > This last week I was doing some Update / Append operations on some of these databases and ended up with "disk full" - stopped me cold!!! Luckily I was able to move the logs off to rotating media and let them complete their operations and then finish up what I was doing.
Anyway... > > > I upgraded the server last night. I added a very reasonably priced (and reasonably powerful) RAID expansion card called the ASUS PIKE 1068E raid controller. It only supports Raid 0 and 1 but that is perfect for this application since I am using Raid 0 for these volumes. It also has no write cache so it is not appropriate for high write applications. > > http://www.newegg.com/Product/Product.aspx?Item=N82E16816110042 ASUS PIKE > > and four new SSDs to hold the central database files I work with: > > Mushkin Enhanced Callisto Deluxe MKNSSDCL120GB-DX > > http://www.newegg.com/Product/Product.aspx?Item=N82E16820226152 > > > Just curious, what prompted your question? > > What I was trying to discover was when log files are used in order to discover how much room I needed to give them. I had all of the databases and their log files on a single RAID0. I was doing some appends / updates and the log files filled up the disk, which is what prompted the expansion. > > In the end I decided to put the data files on a new RAID0 created from the 4 new SSDs (~440 GB) and leave the log files on the old RAID0 using the old two SSDs (~220 GB). > > I really only write to these files roughly once per month, but I ended up doing some processing unrelated to the monthly thing. > > ATM the data disk has 160 GB used (280 GB free) and the log file disk has 18 GB used (204 GB free). That should hold me for awhile, but I still have 4 more SATA ports on the Pike controller if I need them. > > John W. Colby > www.ColbyConsulting.com > > On 7/2/2011 6:25 AM, Mark Breen wrote: >> Hello John, >> >> Just curious, what prompted your question? >> >> Mark >> >> >> >> >> On 2 July 2011 00:16, Francisco Tapia wrote: >> >>> It's for updates and inserts only, read operations may use the tempdb >>> depending on how you constructed the select... >>> >>> Sent from my mobile >>> >>> On Jul 1, 2011, at 4:04 PM, jwcolby wrote: >>> >>>> Are log files used for read operations or only data modifications? >>>> >>>> -- >>>> John W. Colby >>>> www.ColbyConsulting.com >>>> _______________________________________________ >>>> dba-SQLServer mailing list >>>> dba-SQLServer at databaseadvisors.com >>>> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver >>>> http://www.databaseadvisors.com >>>> >>> _______________________________________________ >>> dba-SQLServer mailing list >>> dba-SQLServer at databaseadvisors.com >>> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver >>> http://www.databaseadvisors.com >>> >>> >> _______________________________________________ >> dba-SQLServer mailing list >> dba-SQLServer at databaseadvisors.com >> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver >> http://www.databaseadvisors.com >> >> > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > From jwcolby at colbyconsulting.com Sat Jul 2 12:29:27 2011 From: jwcolby at colbyconsulting.com (jwcolby) Date: Sat, 02 Jul 2011 13:29:27 -0400 Subject: [dba-SQLServer] SQL Server - SSD / Rotating media - Side by side test results Message-ID: <4E0F5577.9060206@colbyconsulting.com> This morning I set out to do a little bit of real world testing to see what my SSD investment buys me. The following are some results. BTW if a SQL Server guru wants to actually gain remote access and run tests on my system I would welcome that. 
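A note on method: each run below starts with DBCC DROPCLEANBUFFERS so that both volumes are read from a cold cache. The fuller version of that recipe - CHECKPOINT first, since DROPCLEANBUFFERS only evicts clean pages - plus the statistics switches, is standard T-SQL:

CHECKPOINT;              -- write any dirty pages so every cached page is clean
DBCC DROPCLEANBUFFERS;   -- evict the now-clean pages from the buffer cache
SET STATISTICS IO ON;    -- physical reads = disk, logical reads = cache
SET STATISTICS TIME ON;  -- CPU and elapsed time per statement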
I am obviously not very talented as a SQL Server DBA and so a real DBA who has an interest is welcome to tune and test. Anyway, I have a pair of databases that I will be testing with. One is my infamous "database from hell" called HSID, containing 51 million records with about 600 fields. The other is my HSIDAllAdults containing about 65 million name and address records. HSIDAllAdults is child to HSID, IOW it has a foreign key which contains the PK of HSID. Both tables have a clustered index on their autonumber PK. So... I have both of these databases on a four SSD RAID 0. I backed them up last night and restored them to the same name + _Rotatingmedia on my RAID6 volumes. So I have identical databases on a 4 SSD Raid 0 and a 6 disk RAID 6. I am now doing some comparative A/B runs of rather standard queries - similar to things I do routinely. I backed up the two databases last night just before upgrading the server. I restored both the HSID and HSIDAllAdults this AM to the same rotating media location where I normally hold the databases. I did not defrag the rotating media before doing the restore. I include the counts so that we can be assured that the actual data is identical between the SSD and rotating media DBs. HSIDAllAdults - has a pair of covering indexes, each of which includes AddrValid DBCC DROPCLEANBUFFERS SELECT AddrValid, COUNT(PK) AS Cnt FROM dbo.tblAllAdultNameAddr GROUP BY AddrValid SSD: 12 seconds ANK 635917 E 2918652 INV 936058 MOV 112093 PO 3780131 V 59074768 Rotating: 52 seconds ANK 635917 E 2918652 INV 936058 MOV 112093 PO 3780131 V 59074768 DBCC DROPCLEANBUFFERS SELECT COUNT(_DataHSID.dbo.tblHSID.PKID) AS Cnt, dbo.tblAllAdultNameAddr.AddrValid FROM dbo.tblAllAdultNameAddr INNER JOIN _DataHSID.dbo.tblHSID ON dbo.tblAllAdultNameAddr.PKHSID = _DataHSID.dbo.tblHSID.PKID GROUP BY dbo.tblAllAdultNameAddr.AddrValid SSD: 35 seconds 635917 ANK 2918652 E 936058 INV 112093 MOV 3780131 PO 59074768 V DBCC DROPCLEANBUFFERS SELECT COUNT(_DataHSID_RotatingMedia.dbo.tblHSID.PKID) AS Cnt, dbo.tblAllAdultNameAddr.AddrValid FROM _DataHSID_RotatingMedia.dbo.tblHSID INNER JOIN dbo.tblAllAdultNameAddr ON _DataHSID_RotatingMedia.dbo.tblHSID.PKID = dbo.tblAllAdultNameAddr.PKHSID GROUP BY dbo.tblAllAdultNameAddr.AddrValid Rotating: 1:00 635917 ANK 2918652 E 936058 INV 112093 MOV 3780131 PO 59074768 V The following appears to be a table scan which would be a "worst case". I just picked a field from HSID which we use occasionally.
DBCC DROPCLEANBUFFERS SELECT COUNT(PKID) AS Cnt, Household_Occupation_code FROM dbo.tblHSID GROUP BY Household_Occupation_code Rotating: 7:06 35481479 NULL 7143021 10 11480 11 9780 12 37452 13 115093 20 2266292 21 501715 22 23724 23 1039660 30 1325728 40 1183311 50 8271 51 70318 52 2566 60 33157 61 28595 62 15305 70 511464 80 739340 90 609317 91 SSD: 1:05 35481479 NULL 7143021 10 11480 11 9780 12 37452 13 115093 20 2266292 21 501715 22 23724 23 1039660 30 1325728 40 1183311 50 8271 51 70318 52 2566 60 33157 61 28595 62 15305 70 511464 80 739340 90 609317 91 DBCC DROPCLEANBUFFERS SELECT COUNT(PKID) AS Cnt, Narrow_Income_Band FROM dbo.tblHSID GROUP BY Narrow_Income_Band SSD: 8 seconds 13824508 NULL 3762511 1 1675853 2 1015899 3 2307736 4 1031640 5 2595759 6 1069374 7 2662509 8 1100049 9 1055216 A 1026910 B 4285629 C 941494 D 862906 E 831573 F 2443917 G 738328 H 676959 I 478582 J 423856 K 1168819 L 371413 M 333796 N 249064 O 204771 P 708189 Q 193265 R 189413 S 2927130 T Rotating media: 10 seconds 13824508 NULL 3762511 1 1675853 2 1015899 3 2307736 4 1031640 5 2595759 6 1069374 7 2662509 8 1100049 9 1055216 A 1026910 B 4285629 C 941494 D 862906 E 831573 F 2443917 G 738328 H 676959 I 478582 J 423856 K 1168819 L 371413 M 333796 N 249064 O 204771 P 708189 Q 193265 R 189413 S 2927130 T I am going to stop for now. I have the rotating media copies and will leave them in place for a while. If any real DBA wants to do some testing let me know. Obviously I have to know you. :) -- John W. Colby www.ColbyConsulting.com From jwcolby at colbyconsulting.com Sat Jul 2 12:48:36 2011 From: jwcolby at colbyconsulting.com (jwcolby) Date: Sat, 02 Jul 2011 13:48:36 -0400 Subject: [dba-SQLServer] When are log files used In-Reply-To: <-5104475497454315775@unknownmsgid> References: <4E0E5249.1040406@colbyconsulting.com> <-7846804006833651449@unknownmsgid> <4E0F2B20.1000709@colbyconsulting.com> <-5104475497454315775@unknownmsgid> Message-ID: <4E0F59F4.5090303@colbyconsulting.com> Francisco, Of course I need to do that. Thanks for the suggestion. John W. Colby www.ColbyConsulting.com On 7/2/2011 12:23 PM, Francisco Tapia wrote: > Wow I like your speedy setup, just remember to backup often, also > something that might help during large operations is to switch the > recovery model to simple mode to help maintain the log file size, > operations are automatically flushed and the space is reused when > write operations commit to the db files. Full recovery models are > only really needed if you have to be able to restore back up to the > minute before failure. > > Sent from my mobile > > On Jul 2, 2011, at 7:29 AM, jwcolby wrote: > >> Mark, >> >>> Just curious, what prompted your question? >> >> When I got into this business I bought a 16 port Areca RAID controller and a bunch of 1 TB drives. I built big arrays and RAID06 volumes for maximum reliability and as much speed as I could muster. I created 2 tb partitions and placed my data files on one and my log files on another. Awhile back I bought a pair of SSDs >> >> http://www.newegg.com/Product/Product.aspx?Item=N82E16820227590 >> >> And made a 220 GB RAID 0 array and placed a set of three databases (my "central" databases) on there for speed. >> >> This last week I was doing some Update / Append operations on some of these databases and ended up with "disk full" - stopped me cold!!! Luckily I was able to move the logs off to rotating media and let them complete their operations and then finish up what I was doing.
>> >> >> I upgraded the server last night. I added a very reasonably priced (and reasonably powerful) RAID expansion card called the ASUS PIKE 1068E raid controller. It only supports Raid 0 and 1 but that is perfect for this application since I am using Raid 0 for these volumes. It also has no write cache so it is not appropriate for high write applications. >> >> http://www.newegg.com/Product/Product.aspx?Item=N82E16816110042 ASUS PIKE >> >> and four new SSDs to hold the central database files I work with: >> >> Mushkin Enhanced Callisto Deluxe MKNSSDCL120GB-DX >> >> http://www.newegg.com/Product/Product.aspx?Item=N82E16820226152 >> >>> Just curious, what prompted your question? >> >> What I was trying to discover was when log files are used in order to discover how much room I needed to give them. I had all of the databases and their log files on a single RAID0. I was doing some appends / updates and the log files filled up the disk, which is what prompted the expansion. >> >> In the end I decided to put the data files on a new RAID0 created from the 4 new SSDs (~440 GB) and leave the log files on the old RAID0 using the old two SSDs (~220 GB). >> >> I really only write to these files roughly once per month, but I ended up doing some processing unrelated to the monthly thing. >> >> ATM the data disk has 160 GB used (280 GB free) and the log file disk has 18 GB used (204 GB free). That should hold me for awhile, but I still have 4 more SATA ports on the Pike controller if I need them. >> >> John W. Colby >> www.ColbyConsulting.com >> >> On 7/2/2011 6:25 AM, Mark Breen wrote: >>> Hello John, >>> >>> Just curious, what prompted your question? >>> >>> Mark >>> >>> >>> >>> >>> On 2 July 2011 00:16, Francisco Tapia wrote: >>> >>>> It's for updates and inserts only, read operations may use the tempdb >>>> depending on how you constructed the select... >>>> >>>> Sent from my mobile >>>> >>>> On Jul 1, 2011, at 4:04 PM, jwcolby wrote: >>>> >>>>> Are log files used for read operations or only data modifications? >>>>> >>>>> -- >>>>> John W. 
Colby >>>>> www.ColbyConsulting.com >>>>> _______________________________________________ >>>>> dba-SQLServer mailing list >>>>> dba-SQLServer at databaseadvisors.com >>>>> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver >>>>> http://www.databaseadvisors.com >>>>> >>>> _______________________________________________ >>>> dba-SQLServer mailing list >>>> dba-SQLServer at databaseadvisors.com >>>> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver >>>> http://www.databaseadvisors.com >>>> >>>> >>> _______________________________________________ >>> dba-SQLServer mailing list >>> dba-SQLServer at databaseadvisors.com >>> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver >>> http://www.databaseadvisors.com >>> >>> >> _______________________________________________ >> dba-SQLServer mailing list >> dba-SQLServer at databaseadvisors.com >> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver >> http://www.databaseadvisors.com >> > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > From jwcolby at colbyconsulting.com Sat Jul 2 13:36:42 2011 From: jwcolby at colbyconsulting.com (jwcolby) Date: Sat, 02 Jul 2011 14:36:42 -0400 Subject: [dba-SQLServer] SQL Server - SSD / Rotating media - Side by side test results In-Reply-To: References: <4E0F5577.9060206@colbyconsulting.com> Message-ID: <4E0F653A.7070503@colbyconsulting.com> Arthur, There are definite cases where the gains are minimal, others where they are significant. The other thing is that I am intentionally clearing the cache before each test. The cache further minimizes the differences as it turns out. That is to be expected of course. This just goes to show the old axiom that throwing memory at SQL Server does a world of good. Without a shadow of a doubt, one thing that SSDs (and faster / better hardware in general) do is minimize the impact of ignorance and sloth. ;) I am not an accomplished DBA, and I simply do not have the time to become one. As a result I am unable to correctly tune my system. By throwing cores, memory and SSDs at the problem I manage to achieve respectable results in spite of myself. Hardware is cheap. My entire server cost somewhere in the neighborhood of $5K. Additionally I have dragged disk and RAID controllers forward through many upgrades. Back around 2005 I spent the then enormous sum of $1600 for three Areca raid controllers which I am still using today. I bought 10 1 TB drives back when they were $150, but I am still using them today. What I upgrade are the motherboards and more frequently the processors. In 2004 I started with single core AMD 3800 processors using Windows2003 X32 and 4 gigs of RAM. I built two systems for $4000! Moving up to dual and then quad cores, and Windows / SQL Server X64 and 8 GB RAM, then 16 gigs of ram. My latest motherboard / processor cost me (my client of course) about $700 (8 cores, with 24 cores possible) and 32 gigs of ram was about $1000. I was looking last night and another 32 GB of RAM (same modules) is now only $600! And... I am using my entire old server (quad core / 16 gigs ram) for a VM server. The point really is that while it is not a trivial amount spent over the years making these upgrades, over that same period I billed a couple of hundred thousand dollars. All these upgrades make me more and more productive, faster and faster getting the results back to the client. 
The client *loves me* precisely because he gets results back in hours instead of a week as his previous provider gave him. I program custom C# solutions (and bill him for the programming) which have enabled me to do orders literally in hours which (back in 2006) took me a full day or even two to get out. Counts which took an hour in 2004 now take my custom program 2 minutes. *AND* I have developed a system which allows him to send emails with zip lists as attachments. A program running on my server strips off the CSV attachment, generates counts, builds a count spreadsheet, attaches it to an email and sends it back to him literally within 5 minutes of him pressing send *without* my doing anything. Again those counts used to take me an hour back when I did everything by hand. Now I log that a count came in and put a small charge in my billing database! The lesson for me is that my time is worth much more than the cost of the electronics and my response time is what makes me valuable to the client. I fully understand that not everyone can solve all their problems by throwing hardware / custom software at it, but for a sole proprietor it just might be the only way! I don't have the time to be good at all the hats I wear! And so I do things like spend a thousand on SSDs on an educated guess that they will make a significant difference for an uneducated sloth. :) And finally, because the client loves me, he is sending me a *ton* more work! MORE CORES! MORE MEMORY! More SSDs! :):):) John W. Colby www.ColbyConsulting.com On 7/2/2011 1:48 PM, Arthur Fuller wrote: > I would be happy to assist. Judging by your IMO rather narrow result-gap (measured in a few > seconds), my initial guess would be that the SSDs are not gaining you much over your investment in > CPU and RAM. However, that remains to be determined. Could be that table-scans or some other factor > are causing this lag. Should be that SSD retrieves ought to be an order of magnitude quicker, but > according to your posted measurements they lag significantly behind that thumbnail benchmark. > > And besides all that, how are you? What's new with you and your family? > > A. From jwcolby at colbyconsulting.com Sun Jul 3 11:56:54 2011 From: jwcolby at colbyconsulting.com (jwcolby) Date: Sun, 03 Jul 2011 12:56:54 -0400 Subject: [dba-SQLServer] SSD, The Game Changer - SQL Man of Mystery - SQLServerCentral.com Message-ID: <4E109F56.809@colbyconsulting.com> -- John W. Colby www.ColbyConsulting.com http://www.sqlservercentral.com/blogs/sqlmanofmystery/archive/2009/04/14/ssd-the-game-changer.aspx From fuller.artful at gmail.com Sun Jul 3 12:01:34 2011 From: fuller.artful at gmail.com (Arthur Fuller) Date: Sun, 3 Jul 2011 13:01:34 -0400 Subject: [dba-SQLServer] SSD, The Game Changer - SQL Man of Mystery - SQLServerCentral.com In-Reply-To: <4E109F56.809@colbyconsulting.com> References: <4E109F56.809@colbyconsulting.com> Message-ID: Huh? On Sun, Jul 3, 2011 at 12:56 PM, jwcolby wrote: > > -- > John W.
Colby > www.ColbyConsulting.com > http://www.sqlservercentral.com/blogs/sqlmanofmystery/archive/2009/04/14/ssd-the-game-changer.aspx > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > From marklbreen at gmail.com Sun Jul 3 12:16:53 2011 From: marklbreen at gmail.com (Mark Breen) Date: Sun, 3 Jul 2011 18:16:53 +0100 Subject: [dba-SQLServer] When are log files used In-Reply-To: <4E0F2B20.1000709@colbyconsulting.com> References: <4E0E5249.1040406@colbyconsulting.com> <-7846804006833651449@unknownmsgid> <4E0F2B20.1000709@colbyconsulting.com> Message-ID: Hello John, With a memory of a goldfish these days, I was reluctant to mention the simple recovery mode in case we had already discussed it in detail. But I would have expected that if you have simple mode enabled, your logs would never grow too large - is that the case? That was my main reason for asking, however, I read your following emails with green envy - I love your setup. Thanks Mark On 2 July 2011 15:28, jwcolby wrote: > Mark, > > > > Just curious, what prompted your question? > > When I got into this business I bought a 16 port Areca RAID controller and > a bunch of 1 TB drives. I built big arrays and RAID06 volumes for maximum > reliability and as much speed as I could muster. I created 2 tb partitions > and placed my data files on one and my log files on another. Awhile back I > bought a pair of SSDs > > http://www.newegg.com/Product/Product.aspx?Item=N82E16820227590 > > And made a 220 GB RAID 0 array and placed a set of three databases (my > "central" databases) on there for speed. > > This last week I was doing some Update / Append operations on some of these > databases and ended up with "disk full" - stopped me cold!!! Luckily I was > able to move the logs off to rotating media and let them complete their > operations and then finish up what I was doing. Anyway... > > > I upgraded the server last night. I added a very reasonably priced (and > reasonably powerful) RAID expansion card called the ASUS PIKE 1068E raid > controller. It only supports Raid 0 and 1 but that is perfect for this > application since I am using Raid 0 for these volumes. It also has no write > cache so it is not appropriate for high write applications. > > http://www.newegg.com/Product/Product.aspx?Item=N82E16816110042 ASUS PIKE > > and four new SSDs to hold the central database files I work with: > > Mushkin Enhanced Callisto Deluxe MKNSSDCL120GB-DX > > http://www.newegg.com/Product/Product.aspx?Item=N82E16820226152 > > > > Just curious, what prompted your question? > > What I was trying to discover was when log files are used in order to > discover how much room I needed to give them. I had all of the databases > and their log files on a single RAID0. I was doing some appends / updates > and the log files filled up the disk, which is what prompted the expansion. > > In the end I decided to put the data files on a new RAID0 created from the > 4 new SSDs (~440 GB) and leave the log files on the old RAID0 using the old > two SSDs (~220 GB). > > I really only write to these files roughly once per month, but I ended up > doing some processing unrelated to the monthly thing. > > ATM the data disk has 160 GB used (280 GB free) and the log file disk has > 18 GB used (204 GB free).
That should hold me for awhile, but I still have > 4 more SATA ports on the Pike controller if I need them. > > > John W. Colby > www.ColbyConsulting.com > > On 7/2/2011 6:25 AM, Mark Breen wrote: > >> Hello John, >> >> Just curious, what prompted your question? >> >> Mark >> >> >> >> >> On 2 July 2011 00:16, Francisco Tapia wrote: >> >> It's for updates and inserts only, read operations may use the tempdb >>> depending on how you constructed the select... >>> >>> Sent from my mobile >>> >>> On Jul 1, 2011, at 4:04 PM, jwcolby> >>> wrote: >>> >>> Are log files used for read operations or only data modifications? >>>> >>>> -- >>>> John W. Colby >>>> www.ColbyConsulting.com >>>> ______________________________**_________________ >>>> dba-SQLServer mailing list >>>> dba-SQLServer@**databaseadvisors.com >>>> http://databaseadvisors.com/**mailman/listinfo/dba-sqlserver >>>> http://www.databaseadvisors.**com >>>> >>>> ______________________________**_________________ >>> dba-SQLServer mailing list >>> dba-SQLServer@**databaseadvisors.com >>> http://databaseadvisors.com/**mailman/listinfo/dba-sqlserver >>> http://www.databaseadvisors.**com >>> >>> >>> ______________________________**_________________ >> dba-SQLServer mailing list >> dba-SQLServer@**databaseadvisors.com >> http://databaseadvisors.com/**mailman/listinfo/dba-sqlserver >> http://www.databaseadvisors.**com >> >> >> ______________________________**_________________ > dba-SQLServer mailing list > dba-SQLServer@**databaseadvisors.com > http://databaseadvisors.com/**mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.**com > > From jwcolby at colbyconsulting.com Sun Jul 3 13:08:28 2011 From: jwcolby at colbyconsulting.com (jwcolby) Date: Sun, 03 Jul 2011 14:08:28 -0400 Subject: [dba-SQLServer] When are log files used In-Reply-To: References: <4E0E5249.1040406@colbyconsulting.com> <-7846804006833651449@unknownmsgid> <4E0F2B20.1000709@colbyconsulting.com> Message-ID: <4E10B01C.8040506@colbyconsulting.com> > That was my main reason for asking, however, I read your following emails with green envy - I love your setup. My envy is folks who have the knowledge to do things right instead of throwing hardware at it. But we all get what we get. It is a nice server. Imagine what it could do with one of our DBAs at the helm. ;) Hardware really is cheap though. I fully expect to just blow it out with 128 GB of RAM and another processor. Basically if I can keep the entire pair of tables in cache... It is strange to think about keeping 50 gb tables entirely in RAM. I keep getting more business though and the reason (I believe) is that I can get it done quickly. John W. Colby www.ColbyConsulting.com On 7/3/2011 1:16 PM, Mark Breen wrote: > Hello John, > > With a memory of a gold fish these days, I was reluctant to mention the > simple recovery mode in case we had already discussed it in detail. > > But I would have expected that if you have simple more enabled, your logs > would never grow too large - is that the case? > > That was my main reason for asking, however, I read your following emails > with green envy - I love your setup. > > Thanks > > Mark > > > On 2 July 2011 15:28, jwcolby wrote: > >> Mark, >> >> >>> Just curious, what prompted your question? >> >> When I got into this business I bought a 16 port Areca RAID controller and >> a bunch of 1 TB drives. I built big arrays and RAID06 volumes for maximum >> reliability and as much speed as I could muster. 
I created 2 tb partitions >> and placed my data files on one and my log files on another. Awhile back I >> bought a pair of SSDs >> >> http://www.newegg.com/Product/**Product.aspx?Item=**N82E16820227590 >> >> And made a 220 GB RAID 0 array and placed a set of three databases (my >> "central" databases) on there for speed. >> >> This last week I was doing some Update / Append operations on some of these >> databases and ended up with "disk full" - stopped me cold!!! Luckily I was >> able to move the logs off to rotating media and let them complete their >> operations and then finish up what I was doing. Anyway... >> >> >> I upgraded the server last night. I added a very reasonably priced (and >> reasonably powerful) RAID expansion card called the ASUS PIKE 1068E raid >> controller. It only supports Raid 0 and 1 but that is perfect for this >> application since I am using Raid 0 for these volumes. It also has no write >> cache so it is not appropriate for high write applications. >> >> http://www.newegg.com/Product/**Product.aspx?Item=**N82E16816110042ASUS PIKE >> >> and four new SSDs to hold the central database files I work with: >> >> Mushkin Enhanced Callisto Deluxe MKNSSDCL120GB-DX >> >> http://www.newegg.com/Product/**Product.aspx?Item=**N82E16820226152 >> >> >>> Just curious, what prompted your question? >> >> What I was trying to discover was when log files are used in order to >> discover how much room I needed to give them. I had all of the databases >> and their log files on a single RAID0. I was doing some appends / updates >> and the log files filled up the disk, which is what prompted the expansion. >> >> In the end I decided to put the data files on a new RAID0 created from the >> 4 new SSDs (~440 GB) and leave the log files on the old RAID0 using the old >> two SSDs (~220 GB). >> >> I really only write to these files roughly once per month, but I ended up >> doing some processing unrelated to the monthly thing. >> >> ATM the data disk has 160 GB used (280 GB free) and the log file disk has >> 18 GB used (204 GB free). That should hold me for awhile, but I still have >> 4 more SATA ports on the Pike controller if I need them. >> >> >> John W. Colby >> www.ColbyConsulting.com >> >> On 7/2/2011 6:25 AM, Mark Breen wrote: >> >>> Hello John, >>> >>> Just curious, what prompted your question? >>> >>> Mark >>> >>> >>> >>> >>> On 2 July 2011 00:16, Francisco Tapia wrote: >>> >>> It's for updates and inserts only, read operations may use the tempdb >>>> depending on how you constructed the select... >>>> >>>> Sent from my mobile >>>> >>>> On Jul 1, 2011, at 4:04 PM, jwcolby> >>>> wrote: >>>> >>>> Are log files used for read operations or only data modifications? >>>>> >>>>> -- >>>>> John W. 
Colby >>>>> www.ColbyConsulting.com >>>>> _______________________________________________ >>>>> dba-SQLServer mailing list >>>>> dba-SQLServer at databaseadvisors.com >>>>> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver >>>>> http://www.databaseadvisors.com >>>>> >>>>> _______________________________________________ >>>> dba-SQLServer mailing list >>>> dba-SQLServer at databaseadvisors.com >>>> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver >>>> http://www.databaseadvisors.com >>>> >>>> >>> _______________________________________________ >>> dba-SQLServer mailing list >>> dba-SQLServer at databaseadvisors.com >>> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver >>> http://www.databaseadvisors.com >>> >>> >> _______________________________________________ >> dba-SQLServer mailing list >> dba-SQLServer at databaseadvisors.com >> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver >> http://www.databaseadvisors.com >> >> > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > From marklbreen at gmail.com Tue Jul 5 02:30:55 2011 From: marklbreen at gmail.com (Mark Breen) Date: Tue, 5 Jul 2011 08:30:55 +0100 Subject: [dba-SQLServer] When are log files used In-Reply-To: <4E10B01C.8040506@colbyconsulting.com> References: <4E0E5249.1040406@colbyconsulting.com> <-7846804006833651449@unknownmsgid> <4E0F2B20.1000709@colbyconsulting.com> <4E10B01C.8040506@colbyconsulting.com> Message-ID: Hi John, >I keep getting more business though and the reason (I believe) is that I can get it done quickly. In today's market, this, IMO, is an enormous differentiator. I try to use this all the time, speed first, quality second. That does not mean quality last, just that it should take second place to getting the job done quickly. Interestingly, I sent this email yesterday morning and it was stopped by the moderation because it was too long. Last night, I spoke with a developer aged 58. Out of the blue, he mentioned that he specialises in what he is fast at, not what is the latest craze. It was a very interesting conversation. He says he has almost no SQL skills, but with HTML, CSS and some clever use of some Dotnetnuke modules, he has been building online applications for the last four years with relative ease. I told him that I also believe in this philosophy - use what we are fast at. Enjoy the hardware :) Mark On 3 July 2011 19:08, jwcolby wrote: > > That was my main reason for asking, however, I read your following emails > with green envy - I love your setup. > > My envy is folks who have the knowledge to do things right instead of > throwing hardware at it. > > But we all get what we get. It is a nice server. Imagine what it could do > with one of our DBAs at the helm. ;) > > Hardware really is cheap though. I fully expect to just blow it out with > 128 GB of RAM and another processor. Basically if I can keep the entire > pair of tables in cache... It is strange to think about keeping 50 gb > tables entirely in RAM. > > I keep getting more business though and the reason (I believe) is that I > can get it done quickly. > > John W.
Colby > www.ColbyConsulting.com > > From jwcolby at colbyconsulting.com Tue Jul 5 03:56:04 2011 From: jwcolby at colbyconsulting.com (jwcolby) Date: Tue, 05 Jul 2011 04:56:04 -0400 Subject: [dba-SQLServer] Sourcegear vault free two developer license Message-ID: <4E12D1A4.1010809@colbyconsulting.com> I ran across this today. http://www.sqlservercentral.com/articles/Red+Gate+Software/74579/ http://promotions.sourcegear.com/vouchers/new/ -- John W. Colby www.ColbyConsulting.com From fuller.artful at gmail.com Tue Jul 5 07:01:00 2011 From: fuller.artful at gmail.com (Arthur Fuller) Date: Tue, 5 Jul 2011 08:01:00 -0400 Subject: [dba-SQLServer] Sourcegear vault free two developer license In-Reply-To: <4E12D1A4.1010809@colbyconsulting.com> References: <4E12D1A4.1010809@colbyconsulting.com> Message-ID: Were you able to connect to it? I've tried several times and I just get a message saying "Oops, one voucher per person. A voucher has already been requested from this email address." So I guess that means you got my message yesterday. But I have not received a voucher yet. A. On Tue, Jul 5, 2011 at 4:56 AM, jwcolby wrote: > I ran across this today. > > http://www.sqlservercentral.com/articles/Red+Gate+Software/74579/ > http://promotions.sourcegear.com/vouchers/new/ > > From jwcolby at colbyconsulting.com Tue Jul 5 07:15:42 2011 From: jwcolby at colbyconsulting.com (jwcolby) Date: Tue, 05 Jul 2011 08:15:42 -0400 Subject: [dba-SQLServer] Sourcegear vault free two developer license In-Reply-To: References: <4E12D1A4.1010809@colbyconsulting.com> Message-ID: <4E13006E.6040704@colbyconsulting.com> I got a message saying "Oops, something went wrong. We will look into it." I guess the link isn't working correctly. John W. Colby www.ColbyConsulting.com On 7/5/2011 8:01 AM, Arthur Fuller wrote: > Were you able to connect to it? I've tried several times and I just get a > message saying "Oops, one voucher per person. A voucher has already been > requested from this email address." So I guess that means you got my message > yesterday. But I have not received a voucher yet. > > A. > > On Tue, Jul 5, 2011 at 4:56 AM, jwcolby wrote: > >> I ran across this today. >> >> http://www.sqlservercentral.com/articles/Red+Gate+Software/74579/ >> http://promotions.sourcegear.com/vouchers/new/ >> >> > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > From jm.hwsn at gmail.com Tue Jul 5 07:38:14 2011 From: jm.hwsn at gmail.com (jm.hwsn) Date: Tue, 5 Jul 2011 07:38:14 -0500 Subject: [dba-SQLServer] Sourcegear vault free two developer license In-Reply-To: <4E13006E.6040704@colbyconsulting.com> References: <4E12D1A4.1010809@colbyconsulting.com> <4E13006E.6040704@colbyconsulting.com> Message-ID: <4e1305b7.09a32a0a.21a8.1641@mx.google.com> I just tried it... it's working now. Jim -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of jwcolby Sent: Tuesday, July 05, 2011 7:16 AM To: Discussion concerning MS SQL Server Subject: Re: [dba-SQLServer] Sourcegear vault free two developer license I got a message saying "Oops, something went wrong. We will look into it." I guess the link isn't working correctly. John W. Colby www.ColbyConsulting.com On 7/5/2011 8:01 AM, Arthur Fuller wrote: > Were you able to connect to it?
I've tried several times and I just get a > message saying "Oops, one voucher per person. A voucher has already been > requested from this email address." So I guess that means you got my message > yesterday. But I have not received a voucher yet. > > A. > > On Tue, Jul 5, 2011 at 4:56 AM, jwcolby wrote: > >> I ran across this today. >> >> http://www.sqlservercentral.com/articles/Red+Gate+Software/74579/ >> http://promotions.sourcegear.com/vouchers/new/ >> >> > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From jwcolby at colbyconsulting.com Wed Jul 6 06:46:07 2011 From: jwcolby at colbyconsulting.com (jwcolby) Date: Wed, 06 Jul 2011 07:46:07 -0400 Subject: [dba-SQLServer] HASHBYTES – A T-SQL Function - SQL Musings - SQLServerCentral.com Message-ID: <4E144AFF.5010306@colbyconsulting.com> We were discussing hashes a while back. -- John W. Colby www.ColbyConsulting.com http://www.sqlservercentral.com/blogs/steve_jones/archive/2011/6/28/hashbytes-_1320_-a-t_2D00_sql-function.aspx From jwcolby at colbyconsulting.com Thu Jul 7 07:20:46 2011 From: jwcolby at colbyconsulting.com (jwcolby) Date: Thu, 07 Jul 2011 08:20:46 -0400 Subject: [dba-SQLServer] [AccessD] SQL Server - Query non-updateable In-Reply-To: <4E15A088.9050801@colbyconsulting.com> References: <4E15A088.9050801@colbyconsulting.com> Message-ID: <4E15A49E.30002@colbyconsulting.com> Further to this, I have discovered that if I build a temp table inside of the access fe and do the join it is un-updateable. However if I use the temp table in the IN() clause it is now updateable. So it is something about using the stored procedure in the IN() that causes the query to become un-updateable. John W. Colby www.ColbyConsulting.com On 7/7/2011 8:03 AM, jwcolby wrote: > I have a selection query for a bound form. If I just do a select xyz from tblInmate it works of > course. I want to select a subset of inmates that reflect those that I work with (specific camps). > > I have a tblVolunteerCamps (the camps that a volunteer works with) and I built a stored procedure > out in SQL Server that selects the IDs of inmates at those camps. I feed the SP the volunteer ID and > back comes the campIDs and the inmateIDs in those camps. > > I had read (on this list) that if I used IN (SELECT ID from QueryXYZ) in the where clause it would > allow the query to be editable but doing so is turning my query into a non-updatable query. > > SELECT TblInmate.* from tblInmate > WHERE (INM_Active <> 0) AND > (INM_Location IN (SELECT CMP_LOCCODE FROM qspVolCampIDs)) > > If I remove the IN clause, the query is updateable. > > I really need to filter to just the camps the volunteer works with and I am wondering how to > accomplish this. In the past I would try to JOIN the main query to the selection filter and that > caused non-updateable. I was told to use the IN(SELECT) which has worked in most cases in the past. > > Any clue why not now and how to go about filtering and keeping it updateable?
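To make the working pattern concrete, here is a sketch of the two-step approach that stayed updateable. tmpVolCampIDs is a hypothetical local Access table, filled from the qspVolCampIDs stored procedure (for example via a pass-through plus append query) before the form opens:

SELECT TblInmate.*
FROM tblInmate
WHERE (INM_Active <> 0)
AND (INM_Location IN (SELECT CMP_LOCCODE FROM tmpVolCampIDs));

With the IN() subquery reading a local table instead of the stored procedure, Access keeps the recordset updateable.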
> From fuller.artful at gmail.com Fri Jul 15 07:14:14 2011 From: fuller.artful at gmail.com (Arthur Fuller) Date: Fri, 15 Jul 2011 08:14:14 -0400 Subject: [dba-SQLServer] Update Foreign Keys Message-ID: I'd like to poll the readership to ask, "Do you permit FKs to be updated, and if so under what circumstances?" I'm asking because a client and I are discussing a situation where this has arisen: A Client may have several locations. A Location has zero or more machines installed. A Machine has related data in at least one table (Assessments and optionally Measurements). From time to time the Client may want to move a Machine from one Location to another. The client suggested that I simply replace the FK LocationID on the Machine record with the LocationID of the new Location. I pointed out that there are two possible results to this operation: a) do a Cascade Update on the tables under Machines. This approach "destroys history", so to speak, in that the data really no longer applies to the relocated Machine. The Assessments and Measurements no longer apply to the new Location. b) Orphan the Assessments and Measurements. This is unacceptable, IMO. So I suggested that rather than change the Machine's LocationID, we instead copy the Machine data (only) to a new row, assigning it the new LocationID and leaving the old row intact, along with its Assessments and Measurements. In a somewhat related topic, "Do you permit Cascade DELETEs, and if so, under what circumstances?" I'll respond to that one first. The only time I permit this is when using staging tables. For example, a wizard may accept new data into several tables. The last step in the wizard is equivalent to "COMMIT" -- it writes the accumulated data to the "real" tables. There is also a "Cancel" button, which if pressed causes a Cascade Delete across all the tables involved. Arthur From jwcolby at colbyconsulting.com Fri Jul 15 07:42:16 2011 From: jwcolby at colbyconsulting.com (jwcolby) Date: Fri, 15 Jul 2011 08:42:16 -0400 Subject: [dba-SQLServer] Update Foreign Keys In-Reply-To: References: Message-ID: <4E2035A8.9090505@colbyconsulting.com> Arthur, FKs are updated all of the time. Since I only use autonumber PKs the FKs do not form part of the PK of the child table and therefore I do not need cascade updates. As for the history, the history belongs to the machine. If the machine moves, then the history moves as well. If the history needs other "location" data to be valid then it needed to have location FKs in that history data, which it apparently does not. If it truly needs that location data then add a new location field to the immediate child of the machine, update with the location where the data was accumulated and off you go. It seems to me that location is probably not what is actually being tracked however but rather instruments (taking the measurements) and you probably already have an instrument id in the measurements. If not you have bigger problems than location data. If you copy the machine and create a new record then you have the same machine in two different locations. Clearly a problem in this universe. You will now be working around a problem that you created. I understand that you like the PIT architecture stuff but unless the system is designed from the ground up to use that it seems unwise to me to be applying it piecemeal. The machine moved. The location ID gets updated. FK updates happen all the time in my world. Think people / cars and so forth. John W.
Colby www.ColbyConsulting.com On 7/15/2011 8:14 AM, Arthur Fuller wrote: > I'd like to poll the readership to ask, "Do you permit FKs to be updated, > and if so under what circumstances?" > > I'm asking because a client and I are discussing a situation where this has > arisen: > > A Client may have several locations. > A Location has zero or more machines installed. > A Machine has related data in at least one table (Assessments and optionally > Measurements). > >> From time to time the Client may want to move a Machine from one Location to > another. > > The client suggested that I simply replace the FK LocationID on the Machine > record with the LocationID of the new Location. I pointed out that there are > two possible results to this operation: > > a) do a Cascade Update on the tables under Machines. This approach "destroys > history", so to speak, in that the data really no longer applies to the > relocated Machine. The Assessments and Measurements no longer apply to the > new Location. > b) Orphan the Assessments and Measurements. This is unacceptable, IMO. > > So I suggested that rather than change the Machine's LocationID, we instead > copy the Machine data (only) to a new row, assigning it the new LocationID > and leaving the old row intact, along with its Assessments and Measurements > > In a somewhat related topic, "Do you permit Cascase DELETEs, and if so, > under what circumstances?" I'll respond to that one first. The only time I > permit this is when using staging tables. For example, a wizard may accept > new data into several tables. The last step in the wizard is equivalent to > "COMMIT" -- it writes the accumulated data to the "real" tables. There is > also a "Cancel" button, which if pressed causes a Cascade Delete across all > the tables involved. > > Arthur > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > From df.waters at comcast.net Fri Jul 15 08:43:19 2011 From: df.waters at comcast.net (Dan Waters) Date: Fri, 15 Jul 2011 08:43:19 -0500 Subject: [dba-SQLServer] Update Foreign Keys In-Reply-To: References: Message-ID: <001d01cc42f5$29750f60$7c5f2e20$@comcast.net> I think I'd do something like: Create a tblMachineHistory table which shows MachineID and LocationID. Have fields for Active, EnteredBy and EnteredDate. When a machine is moved, add a new record for that machine and make that record Active. Keep the historical records which show the location for each machine. When a new record is recorded with your assessment/measurement information, your code can enter the LocationID on that record, taking it from the lookup table. Now, you can recall location history for the machine, and know where the machine was when the record was created. Of course ... don't change the machine ID! HTH, Dan From davidmcafee at gmail.com Fri Jul 15 08:46:49 2011 From: davidmcafee at gmail.com (David McAfee) Date: Fri, 15 Jul 2011 06:46:49 -0700 Subject: [dba-SQLServer] Update Foreign Keys In-Reply-To: References: Message-ID: Create a junction table for installs. tblInstalls InstalledID (PK, INT) LocationID (FK, int) MachineID (fk, int) Installdate EntryDate Entryuserid Every record is an insertion. You never have to overwrite data. Built in history. A simple view/sproc using Max() can show the latest location for a given machine or machines at a given location.
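A minimal T-SQL sketch of that design (column types are assumed, and the FK targets are left as comments since the parent table names weren't given):

CREATE TABLE tblInstalls (
    InstalledID INT IDENTITY(1,1) PRIMARY KEY,
    LocationID  INT NOT NULL,   -- FK to the locations table
    MachineID   INT NOT NULL,   -- FK to the machines table
    InstallDate DATETIME NOT NULL,
    EntryDate   DATETIME NOT NULL DEFAULT GETDATE(),
    EntryUserID INT NOT NULL
);

-- Latest location per machine: the newest install row wins
SELECT i.MachineID, i.LocationID, i.InstallDate
FROM tblInstalls i
INNER JOIN (
    SELECT MachineID, MAX(InstallDate) AS LastInstall
    FROM tblInstalls
    GROUP BY MachineID
) latest
    ON latest.MachineID = i.MachineID
   AND latest.LastInstall = i.InstallDate;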
HTH, David Sent from my Droid phone. On Jul 15, 2011 5:15 AM, "Arthur Fuller" wrote: > I'd like to poll the readership to ask, "Do you permit FKs to be updated, > and if so under what circumstances?" > > I'm asking because a client and I are discussing a situation where this has > arisen: > > A Client may have several locations. > A Location has zero or more machines installed. > A Machine has related data in at least one table (Assessments and optionally > Measurements). > > From time to time the Client may want to move a Machine from one Location to > another. > > The client suggested that I simply replace the FK LocationID on the Machine > record with the LocationID of the new Location. I pointed out that there are > two possible results to this operation: > > a) do a Cascade Update on the tables under Machines. This approach "destroys > history", so to speak, in that the data really no longer applies to the > relocated Machine. The Assessments and Measurements no longer apply to the > new Location. > b) Orphan the Assessments and Measurements. This is unacceptable, IMO. > > So I suggested that rather than change the Machine's LocationID, we instead > copy the Machine data (only) to a new row, assigning it the new LocationID > and leaving the old row intact, along with its Assessments and Measurements > > In a somewhat related topic, "Do you permit Cascase DELETEs, and if so, > under what circumstances?" I'll respond to that one first. The only time I > permit this is when using staging tables. For example, a wizard may accept > new data into several tables. The last step in the wizard is equivalent to > "COMMIT" -- it writes the accumulated data to the "real" tables. There is > also a "Cancel" button, which if pressed causes a Cascade Delete across all > the tables involved. > > Arthur > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > From df.waters at comcast.net Fri Jul 15 08:51:34 2011 From: df.waters at comcast.net (Dan Waters) Date: Fri, 15 Jul 2011 08:51:34 -0500 Subject: [dba-SQLServer] Update Foreign Keys In-Reply-To: References: Message-ID: <001e01cc42f6$50352530$f09f6f90$@comcast.net> Yeah ..... What David said! ;-) -----Original Message----- From: dba-sqlserver-bounces at databaseadvisors.com [mailto:dba-sqlserver-bounces at databaseadvisors.com] On Behalf Of David McAfee Sent: Friday, July 15, 2011 8:47 AM To: Discussion concerning MS SQL Server Subject: Re: [dba-SQLServer] Update Foreign Keys Create a junction table for installs. tblInstalls InstalledID (PK, INT) LocationID (FK, int) MachineID (fk, int) Installdate EntryDate Entryuserid Every record is an insertion. You never have to overwrite data. Built in history. A simple view/sproc using Max() can show the latest location location for a given machine or machines at a given location. HTH, David Sent from my Droid phone. On Jul 15, 2011 5:15 AM, "Arthur Fuller" wrote: > I'd like to poll the readership to ask, "Do you permit FKs to be > updated, and if so under what circumstances?" > > I'm asking because a client and I are discussing a situation where > this has > arisen: > > A Client may have several locations. > A Location has zero or more machines installed. > A Machine has related data in at least one table (Assessments and optionally > Measurements). > > From time to time the Client may want to move a Machine from one > Location to > another. 
> > The client suggested that I simply replace the FK LocationID on the Machine > record with the LocationID of the new Location. I pointed out that > there are > two possible results to this operation: > > a) do a Cascade Update on the tables under Machines. This approach "destroys > history", so to speak, in that the data really no longer applies to > the relocated Machine. The Assessments and Measurements no longer > apply to the new Location. > b) Orphan the Assessments and Measurements. This is unacceptable, IMO. > > So I suggested that rather than change the Machine's LocationID, we instead > copy the Machine data (only) to a new row, assigning it the new > LocationID and leaving the old row intact, along with its Assessments > and Measurements. > > In a somewhat related topic, "Do you permit Cascade DELETEs, and if > so, under what circumstances?" I'll respond to that one first. The > only time I permit this is when using staging tables. For example, a > wizard may accept new data into several tables. The last step in the > wizard is equivalent to "COMMIT" -- it writes the accumulated data to > the "real" tables. There is also a "Cancel" button, which if pressed > causes a Cascade Delete across all > the tables involved. > > Arthur > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > _______________________________________________ dba-SQLServer mailing list dba-SQLServer at databaseadvisors.com http://databaseadvisors.com/mailman/listinfo/dba-sqlserver http://www.databaseadvisors.com From fuller.artful at gmail.com Fri Jul 15 08:53:31 2011 From: fuller.artful at gmail.com (Arthur Fuller) Date: Fri, 15 Jul 2011 09:53:31 -0400 Subject: [dba-SQLServer] Update Foreign Keys In-Reply-To: References: Message-ID: That is my preferred approach. A while back I wrote a piece for Simple-Talk on PITA (Point In Time Architecture, not the other meaning, although it is somewhat appropriate too :). In the case I was discussing, nothing was ever updated, other than its EndDate value. A case in point: throughout your life, you might change family physicians, for any number of reasons. On the other hand, you may need your medical history while you were with doctor 123, from 2004 to 2007. Since then you've had two other family doctors. Sometimes you need a PIT, sometimes you need all the data, A, On Fri, Jul 15, 2011 at 9:46 AM, David McAfee wrote: > Create a junction table for installs. > > tblInstalls > InstalledID (PK, INT) > LocationID (FK, int) > MachineID (fk, int) > Installdate > EntryDate > Entryuserid > > Every record is an insertion. > You never have to overwrite data. > Built in history. > > A simple view/sproc using Max() can show the latest location for a > given machine or machines at a given location. > > HTH, > David > > From fhtapia at gmail.com Fri Jul 15 08:57:07 2011 From: fhtapia at gmail.com (Francisco Tapia) Date: Fri, 15 Jul 2011 06:57:07 -0700 Subject: [dba-SQLServer] Update Foreign Keys In-Reply-To: References: Message-ID: <2171584035414575940@unknownmsgid> What a great concept! Sent from my mobile On Jul 15, 2011, at 6:47 AM, David McAfee wrote: > Create a junction table for installs. > > tblInstalls > InstalledID (PK, INT) > LocationID (FK, int) > MachineID (fk, int) > Installdate > EntryDate > Entryuserid > > Every record is an insertion. > You never have to overwrite data. > Built in history.
> > A simple view/sproc using Max() can show the latest location location for a > given machine or machines at a given location. > > HTH, > David > > Sent from my Droid phone. > On Jul 15, 2011 5:15 AM, "Arthur Fuller" wrote: >> I'd like to poll the readership to ask, "Do you permit FKs to be updated, >> and if so under what circumstances?" >> >> I'm asking because a client and I are discussing a situation where this > has >> arisen: >> >> A Client may have several locations. >> A Location has zero or more machines installed. >> A Machine has related data in at least one table (Assessments and > optionally >> Measurements). >> >> From time to time the Client may want to move a Machine from one Location > to >> another. >> >> The client suggested that I simply replace the FK LocationID on the > Machine >> record with the LocationID of the new Location. I pointed out that there > are >> two possible results to this operation: >> >> a) do a Cascade Update on the tables under Machines. This approach > "destroys >> history", so to speak, in that the data really no longer applies to the >> relocated Machine. The Assessments and Measurements no longer apply to the >> new Location. >> b) Orphan the Assessments and Measurements. This is unacceptable, IMO. >> >> So I suggested that rather than change the Machine's LocationID, we > instead >> copy the Machine data (only) to a new row, assigning it the new LocationID >> and leaving the old row intact, along with its Assessments and > Measurements >> >> In a somewhat related topic, "Do you permit Cascase DELETEs, and if so, >> under what circumstances?" I'll respond to that one first. The only time I >> permit this is when using staging tables. For example, a wizard may accept >> new data into several tables. The last step in the wizard is equivalent to >> "COMMIT" -- it writes the accumulated data to the "real" tables. There is >> also a "Cancel" button, which if pressed causes a Cascade Delete across > all >> the tables involved. >> >> Arthur >> _______________________________________________ >> dba-SQLServer mailing list >> dba-SQLServer at databaseadvisors.com >> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver >> http://www.databaseadvisors.com >> > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > From fuller.artful at gmail.com Fri Jul 15 09:30:00 2011 From: fuller.artful at gmail.com (Arthur Fuller) Date: Fri, 15 Jul 2011 10:30:00 -0400 Subject: [dba-SQLServer] Update Foreign Keys In-Reply-To: <2171584035414575940@unknownmsgid> References: <2171584035414575940@unknownmsgid> Message-ID: If you want a detailed explanation and implementation guide, as it were, visit www.simple-talk.com and download my article on it. The editors didn't like my PITA title so they changed it to PIT Architecture. Myself, I liked the double-entendre. A. On Fri, Jul 15, 2011 at 9:57 AM, Francisco Tapia wrote: > What a great concept! > > From davidmcafee at gmail.com Fri Jul 15 10:28:40 2011 From: davidmcafee at gmail.com (David McAfee) Date: Fri, 15 Jul 2011 08:28:40 -0700 Subject: [dba-SQLServer] Update Foreign Keys In-Reply-To: <2171584035414575940@unknownmsgid> References: <2171584035414575940@unknownmsgid> Message-ID: hmmmm. Where was that concept used before? ;) On Fri, Jul 15, 2011 at 6:57 AM, Francisco Tapia wrote: > What a great concept! 
> > Sent from my mobile > > On Jul 15, 2011, at 6:47 AM, David McAfee wrote: > > > Create a junction table for installs. > > > > tblInstalls > > InstalledID (PK, INT) > > LocationID (FK, int) > > MachineID (fk, int) > > Installdate > > EntryDate > > Entryuserid > > > > Every record is an insertion. > > You never have to overwrite data. > > Built in history. > > > > A simple view/sproc using Max() can show the latest location location for > a > > given machine or machines at a given location. > > > > HTH, > > David > > > > Sent from my Droid phone. > > On Jul 15, 2011 5:15 AM, "Arthur Fuller" > wrote: > >> I'd like to poll the readership to ask, "Do you permit FKs to be > updated, > >> and if so under what circumstances?" > >> > >> I'm asking because a client and I are discussing a situation where this > > has > >> arisen: > >> > >> A Client may have several locations. > >> A Location has zero or more machines installed. > >> A Machine has related data in at least one table (Assessments and > > optionally > >> Measurements). > >> > >> From time to time the Client may want to move a Machine from one > Location > > to > >> another. > >> > >> The client suggested that I simply replace the FK LocationID on the > > Machine > >> record with the LocationID of the new Location. I pointed out that there > > are > >> two possible results to this operation: > >> > >> a) do a Cascade Update on the tables under Machines. This approach > > "destroys > >> history", so to speak, in that the data really no longer applies to the > >> relocated Machine. The Assessments and Measurements no longer apply to > the > >> new Location. > >> b) Orphan the Assessments and Measurements. This is unacceptable, IMO. > >> > >> So I suggested that rather than change the Machine's LocationID, we > > instead > >> copy the Machine data (only) to a new row, assigning it the new > LocationID > >> and leaving the old row intact, along with its Assessments and > > Measurements > >> > >> In a somewhat related topic, "Do you permit Cascase DELETEs, and if so, > >> under what circumstances?" I'll respond to that one first. The only time > I > >> permit this is when using staging tables. For example, a wizard may > accept > >> new data into several tables. The last step in the wizard is equivalent > to > >> "COMMIT" -- it writes the accumulated data to the "real" tables. There > is > >> also a "Cancel" button, which if pressed causes a Cascade Delete across > > all > >> the tables involved. > >> > >> Arthur > >> _______________________________________________ > >> dba-SQLServer mailing list > >> dba-SQLServer at databaseadvisors.com > >> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > >> http://www.databaseadvisors.com > >> > > _______________________________________________ > > dba-SQLServer mailing list > > dba-SQLServer at databaseadvisors.com > > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > > http://www.databaseadvisors.com > > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > From fuller.artful at gmail.com Fri Jul 15 10:32:02 2011 From: fuller.artful at gmail.com (Arthur Fuller) Date: Fri, 15 Jul 2011 11:32:02 -0400 Subject: [dba-SQLServer] Update Foreign Keys In-Reply-To: References: <2171584035414575940@unknownmsgid> Message-ID: I first encountered it in theory in Ralph Kimball's books. 
In practice I encountered it while working on a project called Ontario Labs Information System (OLIS), whose goal is to digitize all the province's medical information, down to and including all the X-rays, MRIs -- the whole kit and kaboodle. A. On Fri, Jul 15, 2011 at 11:28 AM, David McAfee wrote: > hmmmm. Where was that concept used before? ;) > > From davidmcafee at gmail.com Fri Jul 15 10:32:53 2011 From: davidmcafee at gmail.com (David McAfee) Date: Fri, 15 Jul 2011 08:32:53 -0700 Subject: [dba-SQLServer] Update Foreign Keys In-Reply-To: References: Message-ID: Yes. I do it for our rebates and incentives, here at my current job. It makes it so nice to be able to see what a price/incentive was on a given date. I keep trying to get another developer to move his seller/customer assignments over to this model. He feels it is too confusing and just keeps sticking to "live" overwriting data. It sucks when someone needs to interface our systems, as there is no historical data on his end. On Fri, Jul 15, 2011 at 6:53 AM, Arthur Fuller wrote: > That is my preferred approach. A while back I wrote a piece for Simple-Talk > on PITA (Point In Time Architecture, not the other meaning, although it is > somewhat appropriate too :). In the case I was discussing, nothing was ever > updated, other than its EndDate value. A case in point: throughout your > life, you might change family physicians, for any number of reasons. On the > other hand, you may need your medical history while you were with doctor > 123, from 2004 to 2007. Since then you've had two other family doctors. > Sometimes you need a PIT, sometimes you need all the data, > > A, > > On Fri, Jul 15, 2011 at 9:46 AM, David McAfee > wrote: > > > Create a junction table for installs. > > > > tblInstalls > > InstalledID (PK, INT) > > LocationID (FK, int) > > MachineID (fk, int) > > Installdate > > EntryDate > > Entryuserid > > > > Every record is an insertion. > > You never have to overwrite data. > > Built in history. > > > > A simple view/sproc using Max() can show the latest location for > a > > given machine or machines at a given location. > > > > HTH, > > David > > > > > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > From fhtapia at gmail.com Fri Jul 15 10:35:52 2011 From: fhtapia at gmail.com (Francisco Tapia) Date: Fri, 15 Jul 2011 08:35:52 -0700 Subject: [dba-SQLServer] Update Foreign Keys In-Reply-To: References: Message-ID: <6012251177852188278@unknownmsgid> SQL right? Have him create a trigger so that data is also inserted at every overwrite to the audit table. Sent from my mobile On Jul 15, 2011, at 8:33 AM, David McAfee wrote: > Yes. I do it for our rebates and incentives, here at my current job. > > It makes it so nice to be able to see what a price/incentive was on a given > date. > > I keep trying to get another developer to move his seller/customer > assignments over to this model. > > He feels it is too confusing and just keeps sticking to "live" overwriting > data. > > It sucks when someone needs to interface our systems, as there is no > historical data on his end. > > > > On Fri, Jul 15, 2011 at 6:53 AM, Arthur Fuller wrote: > >> That is my preferred approach. A while back I wrote a piece for Simple-Talk >> on PITA (Point In Time Architecture, not the other meaning, although it is >> somewhat appropriate too :). In the case I was discussing, nothing was ever >> updated, other than its EndDate value. A case in point: throughout your >> life, you might change family physicians, for any number of reasons. On the >> other hand, you may need your medical history while you were with doctor >> 123, from 2004 to 2007. Since then you've had two other family doctors. >> Sometimes you need a PIT, sometimes you need all the data, >> >> A, >> >> On Fri, Jul 15, 2011 at 9:46 AM, David McAfee >> wrote: >> >>> Create a junction table for installs. >>> >>> tblInstalls >>> InstalledID (PK, INT) >>> LocationID (FK, int) >>> MachineID (fk, int) >>> Installdate >>> EntryDate >>> Entryuserid >>> >>> Every record is an insertion. >>> You never have to overwrite data. >>> Built in history. >>> >>> A simple view/sproc using Max() can show the latest location for >> a >>> given machine or machines at a given location. >>> >>> HTH, >>> David >>> >>> >> _______________________________________________ >> dba-SQLServer mailing list >> dba-SQLServer at databaseadvisors.com >> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver >> http://www.databaseadvisors.com >> >> > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com >
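A minimal sketch of that trigger idea (the table and column names here are hypothetical, just to show the shape; the real assignments table would substitute its own):

CREATE TRIGGER trgAssignmentsAudit ON dbo.tblSellerCustomerAssignments
AFTER UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- "deleted" holds the pre-change rows, so every overwrite or delete
    -- files a copy of the old values before they are lost.
    INSERT INTO dbo.tblSellerCustomerAssignmentsAudit
        (AssignmentID, SellerID, CustomerID, AuditDate)
    SELECT AssignmentID, SellerID, CustomerID, GETDATE()
    FROM deleted;
END;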
From fhtapia at gmail.com Fri Jul 15 10:39:08 2011 From: fhtapia at gmail.com (Francisco Tapia) Date: Fri, 15 Jul 2011 08:39:08 -0700 Subject: [dba-SQLServer] Update Foreign Keys In-Reply-To: References: <2171584035414575940@unknownmsgid> Message-ID: <-4406565997184781526@unknownmsgid> :). Thanks Arthur, I really love using this PITA method whenever historical data is required. The plus side for me is that even though we have new dba's and I've moved on to a new job description, the new guys always come back to me to ask how they can use this method on new projects. Sent from my mobile On Jul 15, 2011, at 7:30 AM, Arthur Fuller wrote: > If you want a detailed explanation and implementation guide, as it were, > visit www.simple-talk.com and download my article on it. The editors didn't > like my PITA title so they changed it to PIT Architecture. Myself, I liked > the double-entendre. > > A. > > On Fri, Jul 15, 2011 at 9:57 AM, Francisco Tapia wrote: > >> What a great concept! >> >> > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > From fhtapia at gmail.com Fri Jul 15 10:44:00 2011 From: fhtapia at gmail.com (Francisco Tapia) Date: Fri, 15 Jul 2011 08:44:00 -0700 Subject: [dba-SQLServer] Update Foreign Keys In-Reply-To: References: Message-ID: <-7963481768233347922@unknownmsgid> I will tell you this, Arthur. To this day I do not let my clients (be it my current employer or my freelance stuff) dictate the architecture of a database. I will always work with my colleagues and design what makes the most business sense. I guess it's different if the client is also a db developer, but generally my clients are not, which is why I can say it like this. Sent from my mobile On Jul 15, 2011, at 5:15 AM, Arthur Fuller wrote: > I'd like to poll the readership to ask, "Do you permit FKs to be updated, > and if so under what circumstances?" > > I'm asking because a client and I are discussing a situation where this has > arisen: > > A Client may have several locations.
> A Location has zero or more machines installed. > A Machine has related data in at least one table (Assessments and optionally > Measurements). > > From time to time the Client may want to move a Machine from one Location to > another. > > The client suggested that I simply replace the FK LocationID on the Machine > record with the LocationID of the new Location. I pointed out that there are > two possible results to this operation: > > a) do a Cascade Update on the tables under Machines. This approach "destroys > history", so to speak, in that the data really no longer applies to the > relocated Machine. The Assessments and Measurements no longer apply to the > new Location. > b) Orphan the Assessments and Measurements. This is unacceptable, IMO. > > So I suggested that rather than change the Machine's LocationID, we instead > copy the Machine data (only) to a new row, assigning it the new LocationID > and leaving the old row intact, along with its Assessments and Measurements > > In a somewhat related topic, "Do you permit Cascade DELETEs, and if so, > under what circumstances?" I'll respond to that one first. The only time I > permit this is when using staging tables. For example, a wizard may accept > new data into several tables. The last step in the wizard is equivalent to > "COMMIT" -- it writes the accumulated data to the "real" tables. There is > also a "Cancel" button, which if pressed causes a Cascade Delete across all > the tables involved. > > Arthur > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > From fuller.artful at gmail.com Fri Jul 15 10:54:12 2011 From: fuller.artful at gmail.com (Arthur Fuller) Date: Fri, 15 Jul 2011 11:54:12 -0400 Subject: [dba-SQLServer] Update Foreign Keys In-Reply-To: <-7963481768233347922@unknownmsgid> References: <-7963481768233347922@unknownmsgid> Message-ID: In this case, the client is an engineer and also has experience in Access. In fact, he wrote the first version of the app, and then brought me in when the requirements went beyond his skill set. We've been working together for five years, and I consider him a very good friend. He never dictates anything to do with db architecture. We kick subjects around a lot, though; he considers that exercise a learning experience. On Fri, Jul 15, 2011 at 11:44 AM, Francisco Tapia wrote: > I will tell you this arthur. To this day I do not let my clients (be > it my current employer nor my freelance stuff) dictate the > architecture of a database. I will always work with my colleagues and > design what makes the most business sense. > > I guess its different if the client is also a db developer, but > generally my clients are not thus why I can say, it like this. > > From jwcolby at colbyconsulting.com Wed Jul 20 06:51:31 2011 From: jwcolby at colbyconsulting.com (jwcolby) Date: Wed, 20 Jul 2011 07:51:31 -0400 Subject: [dba-SQLServer] How to get top1 of group by Message-ID: <4E26C143.4010206@colbyconsulting.com> The following sql gets me the top N of a group based on rank.

with PR as
(
SELECT Product, PartNo, Buildable, LimitingFactor, rank() over (partition by PRFW.product order by PRFW.LimitingFactor) as LFRank
FROM dbo.VpARTS_REQUIREMENTS_FOR_WORKORDERS_OWBreakout as PRFW
)
select * from PR WHERE PR.LFRank = 1

but if there are several of the same rank it returns all of the records with the top rank. I need to get only 1 item per product. Any suggestions? -- John W. Colby www.ColbyConsulting.com
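For what it's worth, on SQL Server 2005 or later one way to force exactly one row per product is to swap rank() for row_number(), which never produces ties; the second ORDER BY column used as the tie-breaker here is an assumption:

with PR as
(
    SELECT Product, PartNo, Buildable, LimitingFactor,
           row_number() over (partition by PRFW.product
                              order by PRFW.LimitingFactor, PRFW.PartNo) as LFRank
    FROM dbo.VpARTS_REQUIREMENTS_FOR_WORKORDERS_OWBreakout as PRFW
)
select * from PR WHERE PR.LFRank = 1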
From fuller.artful at gmail.com Wed Jul 20 06:57:46 2011 From: fuller.artful at gmail.com (Arthur Fuller) Date: Wed, 20 Jul 2011 07:57:46 -0400 Subject: [dba-SQLServer] How to get top1 of group by In-Reply-To: <4E26C143.4010206@colbyconsulting.com> References: <4E26C143.4010206@colbyconsulting.com> Message-ID: SELECT TOP 1 ... and the rest of your query. A. On Wed, Jul 20, 2011 at 7:51 AM, jwcolby wrote: > The following sql gets me the topN of a group where based on rank. > > with PR as > ( > SELECT Product, PartNo, Buildable, LimitingFactor, rank() over > (partition by PRFW.product order by PRFW.LimitingFactor) as LFRank > FROM dbo.VpARTS_REQUIREMENTS_FOR_WORKORDERS_OWBreakout as PRFW > ) > > select * from PR WHERE PR.LFRank = 1 > > but if there are several of the same rank it returns all of the records > with the top rank. I need to get only 1 item per product. > > Any suggestions? > From jwcolby at colbyconsulting.com Wed Jul 20 07:07:49 2011 From: jwcolby at colbyconsulting.com (jwcolby) Date: Wed, 20 Jul 2011 08:07:49 -0400 Subject: [dba-SQLServer] How to get top1 of group by In-Reply-To: References: <4E26C143.4010206@colbyconsulting.com> Message-ID: <4E26C515.5090300@colbyconsulting.com> No, that gets the top 1 of all the records. The query is returning 1 to N records for every product. I need 1 record for every product. John W. Colby www.ColbyConsulting.com On 7/20/2011 7:57 AM, Arthur Fuller wrote: > SELECT TOP 1 ... and the rest of your query. > > A. > > On Wed, Jul 20, 2011 at 7:51 AM, jwcolby wrote: > >> The following sql gets me the topN of a group where based on rank. >> >> with PR as >> ( >> SELECT Product, PartNo, Buildable, LimitingFactor, rank() over >> (partition by PRFW.product order by PRFW.LimitingFactor) as LFRank >> FROM dbo.VpARTS_REQUIREMENTS_FOR_WORKORDERS_OWBreakout as PRFW >> ) >> >> select * from PR WHERE PR.LFRank = 1 >> >> but if there are several of the same rank it returns all of the records >> with the top rank. I need to get only 1 item per product. >> >> Any suggestions? >> > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > From fuller.artful at gmail.com Wed Jul 20 08:56:19 2011 From: fuller.artful at gmail.com (Arthur Fuller) Date: Wed, 20 Jul 2011 09:56:19 -0400 Subject: [dba-SQLServer] How to get top1 of group by In-Reply-To: <4E26C515.5090300@colbyconsulting.com> References: <4E26C143.4010206@colbyconsulting.com> <4E26C515.5090300@colbyconsulting.com> Message-ID: Sorry, I misunderstood. To get what you want, you have to go "correlated subquery". See this link for more info on setting this up: http://msdn.microsoft.com/en-us/library/ms187638.aspx HTH, A. On Wed, Jul 20, 2011 at 8:07 AM, jwcolby wrote: > No, that gets the top 1 of all the records. The query is returning 1 to N > records for every product. I need 1 record for every product. > > > John W. Colby > >
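A sketch of the correlated-subquery shape that link describes, which also runs on older versions that lack the ranking functions (it assumes PartNo is unique within a Product; otherwise a different tie-breaker column is needed):

SELECT PRFW.Product, PRFW.PartNo, PRFW.Buildable, PRFW.LimitingFactor
FROM dbo.VpARTS_REQUIREMENTS_FOR_WORKORDERS_OWBreakout AS PRFW
WHERE PRFW.PartNo =
      (SELECT TOP 1 P2.PartNo
       FROM dbo.VpARTS_REQUIREMENTS_FOR_WORKORDERS_OWBreakout AS P2
       WHERE P2.Product = PRFW.Product             -- correlate on the group
       ORDER BY P2.LimitingFactor, P2.PartNo);     -- pick the group's top row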
http://mobilefoo.com/ProductDetail.aspx/iSql -Francisco http://bit.ly/sqlthis | Tsql and More... From jwcolby at colbyconsulting.com Thu Jul 21 14:52:04 2011 From: jwcolby at colbyconsulting.com (jwcolby) Date: Thu, 21 Jul 2011 15:52:04 -0400 Subject: [dba-SQLServer] How to get top1 of group by In-Reply-To: References: <4E26C143.4010206@colbyconsulting.com> <4E26C515.5090300@colbyconsulting.com> Message-ID: <4E288364.9090001@colbyconsulting.com> Making matters much worse, they have SQL Server 2000. John W. Colby www.ColbyConsulting.com On 7/20/2011 9:56 AM, Arthur Fuller wrote: > Sorry, I misunderstood. To get what you want, you have to go "correlated > subquery". See this link for more info on setting this up: > > http://msdn.microsoft.com/en-us/library/ms187638.aspx > > HTH, > A. > > On Wed, Jul 20, 2011 at 8:07 AM, jwcolby wrote: > >> No, that gets the top 1 of all the records. The query is returning 1 to N >> records for every product. I need 1 record for every product. >> >> >> John W. Colby >> >> > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > From markamatte at hotmail.com Fri Jul 22 10:39:40 2011 From: markamatte at hotmail.com (Mark A Matte) Date: Fri, 22 Jul 2011 15:39:40 +0000 Subject: [dba-SQLServer] How to get top1 of group by In-Reply-To: <4E26C143.4010206@colbyconsulting.com> References: <4E26C143.4010206@colbyconsulting.com> Message-ID: Could you:

select top 1 pr.*
from (SELECT Product, PartNo, Buildable, LimitingFactor,
             rank() over (partition by PRFW.product order by PRFW.LimitingFactor) as LFRank
      FROM dbo.VpARTS_REQUIREMENTS_FOR_WORKORDERS_OWBreakout as PRFW) pr
where pr.LFRank = 1

Mark A. Matte > Date: Wed, 20 Jul 2011 07:51:31 -0400 > From: jwcolby at colbyconsulting.com > To: dba-sqlserver at databaseadvisors.com > Subject: [dba-SQLServer] How to get top1 of group by > > The following sql gets me the topN of a group where based on rank. > > with PR as > ( > SELECT Product, PartNo, Buildable, LimitingFactor, rank() over (partition by PRFW.product order > by PRFW.LimitingFactor) as LFRank > FROM dbo.VpARTS_REQUIREMENTS_FOR_WORKORDERS_OWBreakout as PRFW > ) > > select * from PR WHERE PR.LFRank = 1 > > but if there are several of the same rank it returns all of the records with the top rank. I need > to get only 1 item per product. > > Any suggestions? > -- > John W. Colby > www.ColbyConsulting.com > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > From jwcolby at colbyconsulting.com Sat Jul 23 00:40:58 2011 From: jwcolby at colbyconsulting.com (jwcolby) Date: Sat, 23 Jul 2011 01:40:58 -0400 Subject: [dba-SQLServer] Manually recreate database structure Message-ID: <4E2A5EEA.30804@colbyconsulting.com> I have a database designed on SQL Server 2008 Express which is a later version (10.50) than my full version (10.0). I cannot backup and restore backwards nor can I detach / attach. Thus I am scripting each table as CREATE and copying the script to a new database (same name) on the server running the older version. My problem is that the scripts have constraints which are PK/FK pairs. Each table has these constraints but one half of the constraint will always be missing as I run these scripts. Is there a way to tell SQL server to create but ignore the constraints as I create the tables and even as I load the tables with the existing data and then "turn on" the constraints at the very end? Or do I need to move the constraint SQL into a separate query, create the tables, load the data and then create the constraints at the end? -- John W. Colby www.ColbyConsulting.com
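There is a switch for part of this: FOREIGN KEY and CHECK constraints can be disabled per table while the data loads and re-validated afterwards (PRIMARY KEY and UNIQUE constraints cannot be disabled this way). A minimal sketch, with a hypothetical table name:

-- Before the load: stop enforcing FK/CHECK constraints on this table.
ALTER TABLE dbo.tblSomeChild NOCHECK CONSTRAINT ALL;

-- ... create the tables and load the data ...

-- After the load: re-enable and re-validate in one pass, so the
-- constraints come back as trusted.
ALTER TABLE dbo.tblSomeChild WITH CHECK CHECK CONSTRAINT ALL;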
From fuller.artful at gmail.com Sat Jul 23 01:55:56 2011 From: fuller.artful at gmail.com (Arthur Fuller) Date: Sat, 23 Jul 2011 02:55:56 -0400 Subject: [dba-SQLServer] Manually recreate database structure In-Reply-To: <4E2A5EEA.30804@colbyconsulting.com> References: <4E2A5EEA.30804@colbyconsulting.com> Message-ID: Yes, or alternatively re-order the script's execution sequence to populate all the lookup tables first, and only then populate the main tables. FWIW, I agree that these MS tools ought to do a far better job at this, and understand in which order the tables ought to be created. A. On Sat, Jul 23, 2011 at 1:40 AM, jwcolby wrote: > I have a database designed on SQL Server 2008 Express which is a later > version (10.50) than my full version (10.0). I cannot backup and restore > backwards nor can I detach / attach. > > Thus I am scripting each table as CREATE and copying the script to a new > database (same name) on the server running the older version. My problem is > that the scripts have constraints which are PK/FK pairs. Each table has > these constraints but one half of the constraint will always be missing as I > run these scripts. Is there a way to tell SQL server to create but ignore > the constraints as I create the tables and even as I load the tables with > the existing data and then "turn on" the constraints at the very end? > > Or do I need to move the constraint SQL into a separate query, create the > tables, load the data and then create the constraints at the end? > From jwcolby at colbyconsulting.com Sat Jul 23 07:43:25 2011 From: jwcolby at colbyconsulting.com (jwcolby) Date: Sat, 23 Jul 2011 08:43:25 -0400 Subject: [dba-SQLServer] Manually recreate database structure In-Reply-To: References: <4E2A5EEA.30804@colbyconsulting.com> Message-ID: <4E2AC1ED.9020408@colbyconsulting.com> I am building the scripts one by one by clicking on the table and right clicking "script table as". Is there some way to script the whole shebang at one time into a single script? John W. Colby www.ColbyConsulting.com On 7/23/2011 2:55 AM, Arthur Fuller wrote: > Yes, or alternatively re-order the script's execution sequence to populate > all the lookup tables first, and only then populate the main tables. > > FWIW, I agree that these MS tools ought to do a far better job at this, and > understand in which order the tables ought to be created. > > A. > > On Sat, Jul 23, 2011 at 1:40 AM, jwcolby wrote: > >> I have a database designed on SQL Server 2008 Express which is a later >> version (10.50) than my full version (10.0). I cannot backup and restore >> backwards nor can I detach / attach. >> >> Thus I am scripting each table as CREATE and copying the script to a new >> database (same name) on the server running the older version. My problem is >> that the scripts have constraints which are PK/FK pairs. Each table has >> these constraints but one half of the constraint will always be missing as I >> run these scripts. Is there a way to tell SQL server to create but ignore >> the constraints as I create the tables and even as I load the tables with >> the existing data and then "turn on" the constraints at the very end?
>> >> Or do I need to move the constraint SQL into a separate query, create the >> tables, load the data and then create the constraints at the end? >> > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > From jwcolby at colbyconsulting.com Sat Jul 23 07:53:04 2011 From: jwcolby at colbyconsulting.com (jwcolby) Date: Sat, 23 Jul 2011 08:53:04 -0400 Subject: [dba-SQLServer] Manually recreate database structure In-Reply-To: <4E2AC1ED.9020408@colbyconsulting.com> References: <4E2A5EEA.30804@colbyconsulting.com> <4E2AC1ED.9020408@colbyconsulting.com> Message-ID: <4E2AC430.5030701@colbyconsulting.com> Never mind, I found it. When run, the script generated about a billion errors but it did apparently build the tables. John W. Colby www.ColbyConsulting.com On 7/23/2011 8:43 AM, jwcolby wrote: > I am building the scripts one by one by clicking on the table and right clicking "script table as". > Is there some way to script the whole shebang at once time into a single script? > > John W. Colby > www.ColbyConsulting.com > > On 7/23/2011 2:55 AM, Arthur Fuller wrote: >> Yes, or alternatively re-order the script's execution sequence to populate >> all the lookup tables first, and only then populate the main tables. >> >> FWIW, I agree that these MS tools ought to do a far better job at this, and >> understand in which order the tables ought to be created. >> >> A. >> >> On Sat, Jul 23, 2011 at 1:40 AM, jwcolbywrote: >> >>> I have a database designed on SQL Server 2008 Express which is a later >>> version (10.50) than my full version (10.0). I cannot backup and restore >>> backwards nor can I detach / attach. >>> >>> Thus I am scripting as create each table and copying the script to a new >>> database (same name) on the older software version Server. My problem is >>> that the scripts have constraints which are PK/FK pairs. Each table has >>> these constraints but one half of the constraint will always be missing as I >>> run these scripts. Is there a way to tell SQL server to create but ignore >>> the constraints as I create the tables and even as I load the tables with >>> the existing data and then "turn on" the constraints at the very end? >>> >>> Or do I need to move the constraint SQL into a separate query, create the >>> tables, load the data and then create the constraints at the end? >>> >> _______________________________________________ >> dba-SQLServer mailing list >> dba-SQLServer at databaseadvisors.com >> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver >> http://www.databaseadvisors.com >> >> > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > From fuller.artful at gmail.com Sat Jul 23 07:55:01 2011 From: fuller.artful at gmail.com (Arthur Fuller) Date: Sat, 23 Jul 2011 08:55:01 -0400 Subject: [dba-SQLServer] Manually recreate database structure In-Reply-To: <4E2AC430.5030701@colbyconsulting.com> References: <4E2A5EEA.30804@colbyconsulting.com> <4E2AC1ED.9020408@colbyconsulting.com> <4E2AC430.5030701@colbyconsulting.com> Message-ID: Only a billion? On Sat, Jul 23, 2011 at 8:53 AM, jwcolby wrote: > Never mind, I found it. When run, the script generated about a billion > errors but it did apparently build the tables. > > > John W. 
Colby > www.ColbyConsulting.com > From jwcolby at colbyconsulting.com Sat Jul 23 08:09:58 2011 From: jwcolby at colbyconsulting.com (jwcolby) Date: Sat, 23 Jul 2011 09:09:58 -0400 Subject: [dba-SQLServer] Manually recreate database structure In-Reply-To: References: <4E2A5EEA.30804@colbyconsulting.com> <4E2AC1ED.9020408@colbyconsulting.com> <4E2AC430.5030701@colbyconsulting.com> Message-ID: <4E2AC826.9070009@colbyconsulting.com> Well maybe a couple of billion. I didn't really count them. ;) I am now trying to populate the tables with the data from the source db. What a PITA. This is a small database so I will just keep on at it until I get it. John W. Colby www.ColbyConsulting.com On 7/23/2011 8:55 AM, Arthur Fuller wrote: > Only a billion? > > On Sat, Jul 23, 2011 at 8:53 AM, jwcolbywrote: > >> Never mind, I found it. When run, the script generated about a billion >> errors but it did apparently build the tables. >> >> >> John W. Colby >> www.ColbyConsulting.com >> > _______________________________________________ > dba-SQLServer mailing list > dba-SQLServer at databaseadvisors.com > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.com > > From fuller.artful at gmail.com Sat Jul 23 09:25:27 2011 From: fuller.artful at gmail.com (Arthur Fuller) Date: Sat, 23 Jul 2011 10:25:27 -0400 Subject: [dba-SQLServer] Manually recreate database structure In-Reply-To: <4E2AC826.9070009@colbyconsulting.com> References: <4E2A5EEA.30804@colbyconsulting.com> <4E2AC1ED.9020408@colbyconsulting.com> <4E2AC430.5030701@colbyconsulting.com> <4E2AC826.9070009@colbyconsulting.com> Message-ID: I could be wrong, and perhaps you already found the thing to which I am about to refer, but in SSMS if you click on the database rather than any given table, and then choose All Tasks and then Script, it comes out in the right order, respecting your FKs. I'll check that in a minute but I think that is the case. A. P.S. Since when do you work on small DBs? I always thought of you as the 60M-rows man. There could be a sequel to The Social Network here, starring oh I don't know, maybe Christopher Walken as JWC, duelling with tables 700 columns wide and unleashing his SSHDs upon his enemies, and delivering the requirements OTAOB (on time and on budget). I'll call Spielberg in the morning; nah, Christopher Nolan is the man to approach. Speaking of whom, have you seen either Memento or Inception? This is one brilliant man. A. On Sat, Jul 23, 2011 at 9:09 AM, jwcolby wrote: > Well maybe a couple of billion. I didn't really count them. ;) > > I am now trying to populate the tables with the data from the source db. > What a PITA. > > This is a small database so I will just keep on at it until I get it. > > > From jwcolby at colbyconsulting.com Mon Jul 25 13:57:36 2011 From: jwcolby at colbyconsulting.com (jwcolby) Date: Mon, 25 Jul 2011 14:57:36 -0400 Subject: [dba-SQLServer] Link odbc tables in Access Message-ID: <4E2DBCA0.6050603@colbyconsulting.com> When I link to SQL Server from Access using a DSN file, I end up with my tables displayed but also about a billion (sorry, I didn't count) tables that start with INFORMATION_SCHEMA_.xyz. At this time I have no use for those tables and would like to filter them out so that I cannot see them. Does anyone know how to do that? -- John W. 
Colby www.ColbyConsulting.com From fuller.artful at gmail.com Mon Jul 25 16:42:45 2011 From: fuller.artful at gmail.com (Arthur Fuller) Date: Mon, 25 Jul 2011 17:42:45 -0400 Subject: [dba-SQLServer] Link odbc tables in Access In-Reply-To: <4E2DBCA0.6050603@colbyconsulting.com> References: <4E2DBCA0.6050603@colbyconsulting.com> Message-ID: They are all views into the meta-data of the database, intended to insulate you from relying on the structure of tables such as SysColumns. As to how to hide them, I have no idea. But if you create an ADP, you don't see them. So why don't you do that instead? Just a question; you may have some valid reason for going the DSN route. A. On Mon, Jul 25, 2011 at 2:57 PM, jwcolby wrote: > When I link to SQL Server from Access using a DSN file, I end up with my > tables displayed but also about a billion (sorry, I didn't count) tables > that start with INFORMATION_SCHEMA_.xyz. At this time I have no use for > those tables and would like to filter them out so that I cannot see them. > Does anyone know how to do that? > From jwcolby at colbyconsulting.com Mon Jul 25 16:47:50 2011 From: jwcolby at colbyconsulting.com (jwcolby) Date: Mon, 25 Jul 2011 17:47:50 -0400 Subject: [dba-SQLServer] SQL Server Security - from Server / From workstation Message-ID: <4E2DE486.2090805@colbyconsulting.com> I am setting up a SQL Server at the client. The OS is Windows 2000 so my choice is SQL Server 2005 Express - SQL Server 2008 won't run on Windows 2000. Sigh already! When I view the server security from the server I see a ton of logins including all of the server logins such as ASPNet, NT Authority\System and so forth. I also see an sa, DiscoAdmin and DiscoApp. I created the DiscoAdmin and DiscoApp. I have created several databases. Several are required because of the max file size throttling on SQL Server 2005. I created tables in the databases and I can see them from SSMS. I can also use ODBC linked tables to link to the tables from my Access application. This is all from the server. From my workstation, I am managing to see the server and databases. I can do everything I can do from the server except see the tables from the Access application. If I click on the links it says they are not available. Also oddly, while I do see the SERVER DiscoAdmin login from my workstation I cannot see the DiscoUser login. I can see BOTH logins in the databases themselves. What do I need to do to see the SERVER logins? And why am I not seeing the linked tables from my workstation but can from the server? -- John W. Colby www.ColbyConsulting.com From jwcolby at colbyconsulting.com Thu Jul 28 07:59:37 2011 From: jwcolby at colbyconsulting.com (jwcolby) Date: Thu, 28 Jul 2011 08:59:37 -0400 Subject: [dba-SQLServer] How to set up Performance Monitor Message-ID: <4E315D39.1040206@colbyconsulting.com> I am running Windows 2008 and SQL Server 2008. I want to set up performance monitor to tell me how I am doing on memory, cache etc. When I go to perf monitor it shows a ton of counters which break down things (apparently) by the service or something. For SQL Server I have:

SQLAgent
:Jobs
:jobSteps
:Statistics

SQLServer
:Access methods
:BackupDevice
:broker Activation

etc.

This is just a ton of stuff and I haven't a clue what is important and what is not. Can anyone point me to something that discusses this in an understandable format along with how to set up Perf Monitor for the basics? Thanks, -- John W.
Colby www.ColbyConsulting.com From pcs.accessd at gmail.com Sun Jul 31 07:59:13 2011 From: pcs.accessd at gmail.com (Borge Hansen) Date: Sun, 31 Jul 2011 20:59:13 +0800 Subject: [dba-SQLServer] ODBC Linked Tables SQL Server 2008 R2 Express to Access 2003 Using SQL Native Client 10 : nvarchar(max) comes across as text(255) - should be Memo Message-ID: Does anyone know the answer to this?

Configuration:
*One Machine*:
OS: Windows Server 2008 R2 (virtual machine)
MS Access 2003 (11.8166.8172) SP3
ODBC Driver: SQL Server Native Client 10.0 : 2009.100.2500.00 SQLNCLI10.DLL 17/06/2011 (Version 10.50.2500)
*accesses SQL Server 2008 R2 Express on other machine via TCP*

Other Machine:
OS: Windows Server 2008 R2 (virtual machine - both machines on same domain)
Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (Intel X86) Apr 2 2010 15:53:02 Copyright (c) Microsoft Corporation Express Edition with Advanced Services on Windows NT 6.0 (Build 6002: Service Pack 2) (VM)

The Access 2003 application starts up, re-links all tables to SQL Server Db ok! The offending linked table is a very small table with only three records. All Memo fields on the table are linked as text(255) - and as a consequence only the last 255 characters of the field come across. We have several other installation configurations, where this is NOT a problem - none of which are SQL Server 2008 R2 Express though. We have for example two SQL Server 2008 R2 Web Edition - with expected behaviour on the linked tables. Anyone with an answer to this? Would be greatly appreciated! (In the past we have used SQL Server 2005 Express with no problems linking tables with nvarchar(max) as memo fields) So far spent / wasted 1 1/2 days on trouble shooting this. Kind regards, Borge From pcs.accessd at gmail.com Sun Jul 31 08:18:57 2011 From: pcs.accessd at gmail.com (Borge Hansen) Date: Sun, 31 Jul 2011 21:18:57 +0800 Subject: [dba-SQLServer] How to set up Performance Monitor In-Reply-To: <4E315D39.1040206@colbyconsulting.com> References: <4E315D39.1040206@colbyconsulting.com> Message-ID: Hi John, I subscribe to sqlservercentral to learn more about sql server. Here is one article that might be of relevance to you: http://www.sqlservercentral.com/articles/Performance+Tuning/monitoringperformance/1007/ Regards, Borge On Thu, Jul 28, 2011 at 8:59 PM, jwcolby wrote: > I am running Windows 2008 and SQL Server 2008. I want to set up > performance monitor to tell me how I am doing on memory, cache etc. When I > go to perf monitor it shows a ton of counters which break down things > (apparently) by the service or something. For SQL Server I have: > > SQLAgent > :Jobs > :jobSteps > :Statistics > > SQLServer > :Access methods > :BackupDevice > :broker Activation > > etc. > > This is just a ton of stuff and I haven't a clue what is important and what > is not. > > Can anyone point me to something that discusses this in an understandable > format along with how to set up Perf Monitor for the basics? > > Thanks, > > -- > John W.
Colby > www.ColbyConsulting.com > ______________________________**_________________ > dba-SQLServer mailing list > dba-SQLServer@**databaseadvisors.com > http://databaseadvisors.com/**mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.**com > > From pcs.accessd at gmail.com Sun Jul 31 08:38:02 2011 From: pcs.accessd at gmail.com (Borge Hansen) Date: Sun, 31 Jul 2011 21:38:02 +0800 Subject: [dba-SQLServer] Link odbc tables in Access In-Reply-To: <4E2DBCA0.6050603@colbyconsulting.com> References: <4E2DBCA0.6050603@colbyconsulting.com> Message-ID: John, How about defining all your tables that you want to link to in your SQL Db in a hidden local table in your frontend and then use some vba re-linker code that travels all the records (table names) in the local table and drops and re-links the SQL Db tables. Then you can forget about the one billion other tables. And using the re-linker code you can link directly to tables in the SQL Db removing the "dbo_" prefix....which may be of benefit if you have a lot of queries that reference tables from back when they perhaps lived as access .mdb backend tables.... Just a suggestion. Regards Borge On Tue, Jul 26, 2011 at 2:57 AM, jwcolby wrote: > When I link to SQL Server from Access using a DSN file, I end up with my > tables displayed but also about a billion (sorry, I didn't count) tables > that start with INFORMATION_SCHEMA_.xyz. At this time I have no use for > those tables and would like to filter them out so that I cannot see them. > Does anyone know how to do that? > > -- > John W. Colby > www.ColbyConsulting.com > ______________________________**_________________ > dba-SQLServer mailing list > dba-SQLServer@**databaseadvisors.com > http://databaseadvisors.com/**mailman/listinfo/dba-sqlserver > http://www.databaseadvisors.**com > > From jwcolby at colbyconsulting.com Fri Jul 1 18:03:37 2011 From: jwcolby at colbyconsulting.com (jwcolby) Date: Fri, 01 Jul 2011 19:03:37 -0400 Subject: [dba-SQLServer] When are log files used Message-ID: <4E0E5249.1040406@colbyconsulting.com> Are log files used for read operations or only data modifications? -- John W. Colby www.ColbyConsulting.com From fhtapia at gmail.com Fri Jul 1 18:16:55 2011 From: fhtapia at gmail.com (Francisco Tapia) Date: Fri, 1 Jul 2011 16:16:55 -0700 Subject: [dba-SQLServer] When are log files used In-Reply-To: <4E0E5249.1040406@colbyconsulting.com> References: <4E0E5249.1040406@colbyconsulting.com> Message-ID: <-7846804006833651449@unknownmsgid> It's for updates and inserts only, read operations may use the tempdb depending on how you constructed the select... Sent from my mobile On Jul 1, 2011, at 4:04 PM, jwcolby wrote: > Are log files used for read operations or only data modifications? > > -- > John W. 
From jwcolby at colbyconsulting.com Sat Jul 2 09:28:48 2011 From: jwcolby at colbyconsulting.com (jwcolby) Date: Sat, 02 Jul 2011 10:28:48 -0400 Subject: [dba-SQLServer] When are log files used In-Reply-To: References: <4E0E5249.1040406@colbyconsulting.com> <-7846804006833651449@unknownmsgid> Message-ID: <4E0F2B20.1000709@colbyconsulting.com> Mark, > Just curious, what prompted your question? When I got into this business I bought a 16 port Areca RAID controller and a bunch of 1 TB drives. I built big arrays and RAID06 volumes for maximum reliability and as much speed as I could muster. I created 2 tb partitions and placed my data files on one and my log files on another. Awhile back I bought a pair of SSDs http://www.newegg.com/Product/Product.aspx?Item=N82E16820227590 And made a 220 GB RAID 0 array and placed a set of three databases (my "central" databases) on there for speed. This last week I was doing some Update / Append operations on some of these databases and ended up with "disk full" - stopped me cold!!! Luckily I was able to move the logs off to rotating media and let them complete their operations and then finish up what I was doing. Anyway... I upgraded the server last night. I added a very reasonably priced (and reasonably powerful) RAID expansion card called the ASUS PIKE 1068E raid controller. It only supports Raid 0 and 1 but that is perfect for this application since I am using Raid 0 for these volumes. It also has no write cache so it is not appropriate for high write applications. http://www.newegg.com/Product/Product.aspx?Item=N82E16816110042 ASUS PIKE and four new SSDs to hold the central database files I work with: Mushkin Enhanced Callisto Deluxe MKNSSDCL120GB-DX http://www.newegg.com/Product/Product.aspx?Item=N82E16820226152 > Just curious, what prompted your question? What I was trying to discover was when log files are used in order to discover how much room I needed to give them. I had all of the databases and their log files on a single RAID0. I was doing some appends / updates and the log files filled up the disk, which is what prompted the expansion. In the end I decided to put the data files on a new RAID0 created from the 4 new SSDs (~440 GB) and leave the log files on the old RAID0 using the old two SSDs (~220 GB). I really only write to these files roughly once per month, but I ended up doing some processing unrelated to the monthly thing. ATM the data disk has 160 GB used (280 GB free) and the log file disk has 18 GB used (204 GB free). That should hold me for awhile, but I still have 4 more SATA ports on the Pike controller if I need them. John W. Colby www.ColbyConsulting.com On 7/2/2011 6:25 AM, Mark Breen wrote: > Hello John, > > Just curious, what prompted your question? > > Mark > > > > > On 2 July 2011 00:16, Francisco Tapia wrote: > >> It's for updates and inserts only, read operations may use the tempdb > >> depending on how you constructed the select... > >> > >> Sent from my mobile > >> > >> On Jul 1, 2011, at 4:04 PM, jwcolby wrote: > >> > >>> Are log files used for read operations or only data modifications? > >>> > >>> -- > >>> John W. Colby > >>> www.ColbyConsulting.com > >>> _______________________________________________ > >>> dba-SQLServer mailing list > >>> dba-SQLServer at databaseadvisors.com > >>> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > >>> http://www.databaseadvisors.com > >>> > >> _______________________________________________ > >> dba-SQLServer mailing list > >> dba-SQLServer at databaseadvisors.com > >> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > >> http://www.databaseadvisors.com > >> > >> > > _______________________________________________ > > dba-SQLServer mailing list > > dba-SQLServer at databaseadvisors.com > > http://databaseadvisors.com/mailman/listinfo/dba-sqlserver > > http://www.databaseadvisors.com > > > > From fhtapia at gmail.com Sat Jul 2 11:23:57 2011 From: fhtapia at gmail.com (Francisco Tapia) Date: Sat, 2 Jul 2011 09:23:57 -0700 Subject: [dba-SQLServer] When are log files used In-Reply-To: <4E0F2B20.1000709@colbyconsulting.com> References: <4E0E5249.1040406@colbyconsulting.com> <-7846804006833651449@unknownmsgid> <4E0F2B20.1000709@colbyconsulting.com> Message-ID: <-5104475497454315775@unknownmsgid> Wow, I like your speedy setup! Just remember to back up often. Also, something that might help during large operations is to switch the recovery model to simple mode to help maintain the log file size: operations are automatically flushed and the space is reused when write operations commit to the db files. Full recovery models are only really needed if you have to be able to restore back up to the minute before failure.
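The switch itself is one statement (the database name here is borrowed from John's posts purely for illustration):

-- Simple recovery: log space is reused after each checkpoint,
-- at the price of giving up point-in-time restores.
ALTER DATABASE HSID SET RECOVERY SIMPLE;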
Sent from my mobile On Jul 2, 2011, at 7:29 AM, jwcolby wrote: > Mark, > > > Just curious, what prompted your question? > > When I got into this business I bought a 16 port Areca RAID controller and a bunch of 1 TB drives. I built big arrays and RAID06 volumes for maximum reliability and as much speed as I could muster. I created 2 tb partitions and placed my data files on one and my log files on another. Awhile back I bought a pair of SSDs > > http://www.newegg.com/Product/Product.aspx?Item=N82E16820227590 > > And made a 220 GB RAID 0 array and placed a set of three databases (my "central" databases) on there for speed. > > This last week I was doing some Update / Append operations on some of these databases and ended up with "disk full" - stopped me cold!!! Luckily I was able to move the logs off to rotating media and let them complete their operations and then finish up what I was doing. Anyway... > > > I upgraded the server last night. I added a very reasonably priced (and reasonably powerful) RAID expansion card called the ASUS PIKE 1068E raid controller. It only supports Raid 0 and 1 but that is perfect for this application since I am using Raid 0 for these volumes. It also has no write cache so it is not appropriate for high write applications. > > http://www.newegg.com/Product/Product.aspx?Item=N82E16816110042 ASUS PIKE > > and four new SSDs to hold the central database files I work with: > > Mushkin Enhanced Callisto Deluxe MKNSSDCL120GB-DX > > http://www.newegg.com/Product/Product.aspx?Item=N82E16820226152 > > > Just curious, what prompted your question?
From jwcolby at colbyconsulting.com Sat Jul 2 12:29:27 2011
From: jwcolby at colbyconsulting.com (jwcolby)
Date: Sat, 02 Jul 2011 13:29:27 -0400
Subject: [dba-SQLServer] SQL Server - SSD / Rotating media - Side by side test results
Message-ID: <4E0F5577.9060206@colbyconsulting.com>

This morning I set out to do a little bit of real-world testing to see what my SSD investment buys me. The following are some results. BTW, if a SQL Server guru wants to actually gain remote access and run tests on my system I would welcome that. I am obviously not very talented as a SQL Server DBA, so a real DBA who has an interest is welcome to tune and test.

Anyway, I have a pair of databases that I will be testing with. One is my infamous "database from hell" called HSID, containing 51 million records with about 600 fields. The other is my HSIDAllAdults, containing about 65 million name and address records. HSIDAllAdults is child to HSID, IOW it has a foreign key which contains the PK of HSID. Both databases have a clustered index on their autonumber PK.

So... I have both of these databases on a four SSD RAID 0. I backed them up last night and restored them to the same name + _RotatingMedia on my RAID 6 volumes.
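Each timed query below was run from a cold cache. One note on the method: DBCC DROPCLEANBUFFERS only removes clean pages, so for a guaranteed cold start it is normally paired with a checkpoint, roughly:

CHECKPOINT;            -- write any dirty pages to disk first
DBCC DROPCLEANBUFFERS; -- now the buffer cache really is empty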
So I have identical databases on a 4 SSD RAID 0 and a 6 disk RAID 6. I am now doing some comparative A/B runs of rather standard queries - similar to things I do routinely. I backed up the two databases last night just before upgrading the server. I restored both the HSID and HSIDAllAdults this AM to the same rotating media location where I normally hold the databases. I did not defrag the rotating media before doing the restore.

I include the counts so that we can be assured that the actual data is identical between the SSD and rotating media DBs.

HSIDAllAdults has a pair of covering indexes, each of which includes AddrValid.

DBCC DROPCLEANBUFFERS
SELECT AddrValid, COUNT(PK) AS Cnt
FROM dbo.tblAllAdultNameAddr
GROUP BY AddrValid

SSD: 12 seconds
ANK 635917
E 2918652
INV 936058
MOV 112093
PO 3780131
V 59074768

Rotating: 52 seconds
ANK 635917
E 2918652
INV 936058
MOV 112093
PO 3780131
V 59074768

DBCC DROPCLEANBUFFERS
SELECT COUNT(_DataHSID.dbo.tblHSID.PKID) AS Cnt, dbo.tblAllAdultNameAddr.AddrValid
FROM dbo.tblAllAdultNameAddr
INNER JOIN _DataHSID.dbo.tblHSID
ON dbo.tblAllAdultNameAddr.PKHSID = _DataHSID.dbo.tblHSID.PKID
GROUP BY dbo.tblAllAdultNameAddr.AddrValid

SSD: 35 seconds
635917 ANK
2918652 E
936058 INV
112093 MOV
3780131 PO
59074768 V

DBCC DROPCLEANBUFFERS
SELECT COUNT(_DataHSID_RotatingMedia.dbo.tblHSID.PKID) AS Cnt, dbo.tblAllAdultNameAddr.AddrValid
FROM _DataHSID_RotatingMedia.dbo.tblHSID
INNER JOIN dbo.tblAllAdultNameAddr
ON _DataHSID_RotatingMedia.dbo.tblHSID.PKID = dbo.tblAllAdultNameAddr.PKHSID
GROUP BY dbo.tblAllAdultNameAddr.AddrValid

Rotating: 1:00
635917 ANK
2918652 E
936058 INV
112093 MOV
3780131 PO
59074768 V

The following appears to be a table scan, which would be a "worst case". I just picked a field from HSID which we use occasionally.

DBCC DROPCLEANBUFFERS
SELECT COUNT(PKID) AS Cnt, Household_Occupation_code
FROM dbo.tblHSID
GROUP BY Household_Occupation_code

Rotating: 7:06
35481479 NULL
7143021 10
11480 11
9780 12
37452 13
115093 20
2266292 21
501715 22
23724 23
1039660 30
1325728 40
1183311 50
8271 51
70318 52
2566 60
33157 61
28595 62
15305 70
511464 80
739340 90
609317 91

SSD: 1:05
35481479 NULL
7143021 10
11480 11
9780 12
37452 13
115093 20
2266292 21
501715 22
23724 23
1039660 30
1325728 40
1183311 50
8271 51
70318 52
2566 60
33157 61
28595 62
15305 70
511464 80
739340 90
609317 91

DBCC DROPCLEANBUFFERS
SELECT COUNT(PKID) AS Cnt, Narrow_Income_Band
FROM dbo.tblHSID
GROUP BY Narrow_Income_Band

SSD: 8 seconds
13824508 NULL
3762511 1
1675853 2
1015899 3
2307736 4
1031640 5
2595759 6
1069374 7
2662509 8
1100049 9
1055216 A
1026910 B
4285629 C
941494 D
862906 E
831573 F
2443917 G
738328 H
676959 I
478582 J
423856 K
1168819 L
371413 M
333796 N
249064 O
204771 P
708189 Q
193265 R
189413 S
2927130 T

Rotating media: 10 seconds
13824508 NULL
3762511 1
1675853 2
1015899 3
2307736 4
1031640 5
2595759 6
1069374 7
2662509 8
1100049 9
1055216 A
1026910 B
4285629 C
941494 D
862906 E
831573 F
2443917 G
738328 H
676959 I
478582 J
423856 K
1168819 L
371413 M
333796 N
249064 O
204771 P
708189 Q
193265 R
189413 S
2927130 T

I am going to stop for now. I have the rotating media copies and will leave them in place for a while. If any real DBA wants to do some testing let me know. Obviously I have to know you. :)

-- 
John W. Colby
www.ColbyConsulting.com
From jwcolby at colbyconsulting.com Sat Jul 2 12:48:36 2011
From: jwcolby at colbyconsulting.com (jwcolby)
Date: Sat, 02 Jul 2011 13:48:36 -0400
Subject: [dba-SQLServer] When are log files used
Message-ID: <4E0F59F4.5090303@colbyconsulting.com>

Francisco,

> ...something that might help during large operations is to switch the recovery model to simple...

Of course I need to do that. Thanks for the suggestion.

John W. Colby
www.ColbyConsulting.com
From jwcolby at colbyconsulting.com Sat Jul 2 13:36:42 2011
From: jwcolby at colbyconsulting.com (jwcolby)
Date: Sat, 02 Jul 2011 14:36:42 -0400
Subject: [dba-SQLServer] SQL Server - SSD / Rotating media - Side by side test results
Message-ID: <4E0F653A.7070503@colbyconsulting.com>

Arthur,

There are definite cases where the gains are minimal, others where they are significant. The other thing is that I am intentionally clearing the cache before each test. The cache further minimizes the differences, as it turns out. That is to be expected, of course. This just goes to show the old axiom that throwing memory at SQL Server does a world of good.

Without a shadow of a doubt, one thing that SSDs (and faster / better hardware in general) do is minimize the impact of ignorance and sloth. ;) I am not an accomplished DBA, and I simply do not have the time to become one. As a result I am unable to correctly tune my system. By throwing cores, memory and SSDs at the problem I manage to achieve respectable results in spite of myself.

Hardware is cheap. My entire server cost somewhere in the neighborhood of $5K. Additionally, I have dragged disks and RAID controllers forward through many upgrades. Back around 2005 I spent the then enormous sum of $1600 for three Areca RAID controllers, which I am still using today. I bought 10 1 TB drives back when they were $150 each, and I am still using them today. What I upgrade are the motherboards and, more frequently, the processors. In 2004 I started with single-core AMD 3800 processors using Windows 2003 x32 and 4 gigs of RAM. I built two systems for $4000! I moved up to dual and then quad cores, Windows / SQL Server x64, 8 GB of RAM, then 16 gigs of RAM.
My latest motherboard / processor cost me (my client, of course) about $700 (8 cores, with 24 cores possible), and 32 gigs of RAM was about $1000. I was looking last night and another 32 GB of RAM (same modules) is now only $600! And... I am using my entire old server (quad core / 16 gigs of RAM) as a VM server.

The point really is that while it is not a trivial amount spent over the years making these upgrades, over that same period I billed a couple of hundred thousand dollars. All these upgrades make me more and more productive, getting the results back to the client faster and faster. The client *loves me* precisely because he gets results back in hours instead of the week his previous provider gave him. I program custom C# solutions (and bill him for the programming) which have enabled me to do orders literally in hours which (back in 2006) took me a full day or even two to get out. Counts which took an hour in 2004 now take my custom program 2 minutes.

*AND* I have developed a system which allows him to send emails with zip lists as attachments. A program running on my server strips off the CSV attachment, generates counts, builds a count spreadsheet, attaches it to an email and sends it back to him literally within 5 minutes of him pressing send, *without* my doing anything. Again, those counts used to take me an hour back when I did everything by hand. Now I just log that a count came in and put a small charge in my billing database!

The lesson for me is that my time is worth much more than the cost of the electronics, and my response time is what makes me valuable to the client. I fully understand that not everyone can solve all their problems by throwing hardware / custom software at them, but for a sole proprietor it just might be the only way! I don't have the time to be good at all the hats I wear! And so I do things like spend a thousand on SSDs on an educated guess that they will make a significant difference for an uneducated sloth. :)

And finally, because the client loves me, he is sending me a *ton* more work! MORE CORES! MORE MEMORY! More SSDs! :):):)

John W. Colby
www.ColbyConsulting.com

On 7/2/2011 1:48 PM, Arthur Fuller wrote:
> I would be happy to assist. Judging by your IMO rather narrow result-gap (measured in a few seconds), my initial guess would be that the SSDs are not gaining you much over your investment in CPU and RAM. However, that remains to be determined. Could be that table-scans or some other factor are causing this lag. SSD retrieves ought to be an order of magnitude quicker, but according to your posted measurements they lag significantly behind that thumbnail benchmark.
>
> And besides all that, how are you? What's new with you and your family?
>
> A.

From jwcolby at colbyconsulting.com Sun Jul 3 11:56:54 2011
From: jwcolby at colbyconsulting.com (jwcolby)
Date: Sun, 03 Jul 2011 12:56:54 -0400
Subject: [dba-SQLServer] SSD, The Game Changer - SQL Man of Mystery - SQLServerCentral.com
Message-ID: <4E109F56.809@colbyconsulting.com>

-- 
John W. Colby
www.ColbyConsulting.com

http://www.sqlservercentral.com/blogs/sqlmanofmystery/archive/2009/04/14/ssd-the-game-changer.aspx

From fuller.artful at gmail.com Sun Jul 3 12:01:34 2011
From: fuller.artful at gmail.com (Arthur Fuller)
Date: Sun, 3 Jul 2011 13:01:34 -0400
Subject: [dba-SQLServer] SSD, The Game Changer - SQL Man of Mystery - SQLServerCentral.com

Huh?
From marklbreen at gmail.com Sun Jul 3 12:16:53 2011
From: marklbreen at gmail.com (Mark Breen)
Date: Sun, 3 Jul 2011 18:16:53 +0100
Subject: [dba-SQLServer] When are log files used

Hello John,

With the memory of a goldfish these days, I was reluctant to mention the simple recovery mode in case we had already discussed it in detail.

But I would have expected that if you have simple mode enabled, your logs would never grow too large - is that the case?

That was my main reason for asking. However, I read your following emails with green envy - I love your setup.

Thanks

Mark
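For reference, the model a database is actually running under can be checked from sys.databases (available since SQL Server 2005); a quick sketch:

SELECT name, recovery_model_desc
FROM sys.databases
WHERE name IN ('HSID', 'HSIDAllAdults');

One caveat: even under the simple model the log must still hold the largest single open transaction, so one huge UPDATE can grow the file regardless.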
From jwcolby at colbyconsulting.com Sun Jul 3 13:08:28 2011
From: jwcolby at colbyconsulting.com (jwcolby)
Date: Sun, 03 Jul 2011 14:08:28 -0400
Subject: [dba-SQLServer] When are log files used
Message-ID: <4E10B01C.8040506@colbyconsulting.com>

> That was my main reason for asking, however, I read your following emails with green envy - I love your setup.

My envy is folks who have the knowledge to do things right instead of throwing hardware at it. But we all get what we get. It is a nice server. Imagine what it could do with one of our DBAs at the helm. ;)

Hardware really is cheap though. I fully expect to just blow it out with 128 GB of RAM and another processor. Basically, if I can keep the entire pair of tables in cache... It is strange to think about keeping 50 GB tables entirely in RAM.

I keep getting more business though, and the reason (I believe) is that I can get it done quickly.

John W. Colby
www.ColbyConsulting.com
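A rough way to see how much of each database is actually sitting in the buffer pool (sys.dm_os_buffer_descriptors exists in SQL Server 2005 and later; the query itself is just a sketch):

SELECT DB_NAME(database_id) AS DatabaseName,
       COUNT(*) * 8 / 1024 AS CachedMB  -- buffer pages are 8 KB
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY CachedMB DESC;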
From marklbreen at gmail.com Tue Jul 5 02:30:55 2011
From: marklbreen at gmail.com (Mark Breen)
Date: Tue, 5 Jul 2011 08:30:55 +0100
Subject: [dba-SQLServer] When are log files used

Hi John,

> I keep getting more business though and the reason (I believe) is that I can get it done quickly.

In today's market this is, IMO, an enormous differentiator. I try to use this all the time: speed first, quality second. That does not mean quality last, just that it should take second place to getting the job done quickly.

Interestingly, I sent this email yesterday morning and it was stopped by moderation because it was too long.

Last night I spoke with a developer aged 58. Out of the blue, he mentioned that he specialises in what he is fast at, not in the latest craze. It was a very interesting conversation. He says he has almost no SQL skills, but with HTML, CSS and some clever use of DotNetNuke modules he has been building online applications with relative ease for the last four years. I told him that I also believe in this philosophy - use what we are fast at.

Enjoy the hardware :)

Mark
From jwcolby at colbyconsulting.com Tue Jul 5 03:56:04 2011
From: jwcolby at colbyconsulting.com (jwcolby)
Date: Tue, 05 Jul 2011 04:56:04 -0400
Subject: [dba-SQLServer] Sourcegear vault free two developer license
Message-ID: <4E12D1A4.1010809@colbyconsulting.com>

I ran across this today.

http://www.sqlservercentral.com/articles/Red+Gate+Software/74579/
http://promotions.sourcegear.com/vouchers/new/

-- 
John W. Colby
www.ColbyConsulting.com

From fuller.artful at gmail.com Tue Jul 5 07:01:00 2011
From: fuller.artful at gmail.com (Arthur Fuller)
Date: Tue, 5 Jul 2011 08:01:00 -0400
Subject: [dba-SQLServer] Sourcegear vault free two developer license

Were you able to connect to it? I've tried several times and I just get a message saying "Oops, one voucher per person. A voucher has already been requested from this email address." So I guess that means you got my message yesterday. But I have not received a voucher yet.

A.

From jwcolby at colbyconsulting.com Tue Jul 5 07:15:42 2011
From: jwcolby at colbyconsulting.com (jwcolby)
Date: Tue, 05 Jul 2011 08:15:42 -0400
Subject: [dba-SQLServer] Sourcegear vault free two developer license
Message-ID: <4E13006E.6040704@colbyconsulting.com>

I got a message saying "Oops, something went wrong. We will look into it." I guess the link isn't working correctly.

John W. Colby
www.ColbyConsulting.com
From jm.hwsn at gmail.com Tue Jul 5 07:38:14 2011
From: jm.hwsn at gmail.com (jm.hwsn)
Date: Tue, 5 Jul 2011 07:38:14 -0500
Subject: [dba-SQLServer] Sourcegear vault free two developer license
Message-ID: <4e1305b7.09a32a0a.21a8.1641@mx.google.com>

I just tried it... it's working now.

Jim

From jwcolby at colbyconsulting.com Wed Jul 6 06:46:07 2011
From: jwcolby at colbyconsulting.com (jwcolby)
Date: Wed, 06 Jul 2011 07:46:07 -0400
Subject: [dba-SQLServer] HASHBYTES – A T-SQL Function - SQL Musings - SQLServerCentral.com
Message-ID: <4E144AFF.5010306@colbyconsulting.com>

We were discussing hashes a while back.

http://www.sqlservercentral.com/blogs/steve_jones/archive/2011/6/28/hashbytes-_1320_-a-t_2D00_sql-function.aspx

-- 
John W. Colby
www.ColbyConsulting.com
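The function itself is a single call taking an algorithm name and a value; a sketch (note that in SQL Server 2005/2008 the input is limited to 8000 bytes):

SELECT HASHBYTES('SHA1', 'hello world');  -- returns varbinary(20) for SHA1
-- against columns, e.g. for change detection (column names illustrative):
-- SELECT PK, HASHBYTES('MD5', FName + LName) FROM dbo.tblAllAdultNameAddr;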
From jwcolby at colbyconsulting.com Thu Jul 7 07:20:46 2011
From: jwcolby at colbyconsulting.com (jwcolby)
Date: Thu, 07 Jul 2011 08:20:46 -0400
Subject: [dba-SQLServer] [AccessD] SQL Server - Query non-updateable
Message-ID: <4E15A49E.30002@colbyconsulting.com>

Further to this, I have discovered that if I build a temp table inside of the Access FE and do the join, the query is un-updateable. However, if I use the temp table in the IN() clause it is updateable. So it is something about using the stored procedure in the IN() that causes the query to become un-updateable.

John W. Colby
www.ColbyConsulting.com

On 7/7/2011 8:03 AM, jwcolby wrote:
> I have a selection query for a bound form. If I just do a select xyz from tblInmate it works of course. I want to select a subset of inmates that reflect those that I work with (specific camps).
>
> I have a tblVolunteerCamps (the camps that a volunteer works with) and I built a stored procedure out in SQL Server that selects the IDs of inmates at those camps. I feed the SP the volunteer ID and back come the camp IDs and the inmate IDs in those camps.
>
> I had read (on this list) that if I used IN (SELECT ID FROM QueryXYZ) in the where clause it would allow the query to be editable, but doing so is turning my query into a non-updateable query.
>
> SELECT TblInmate.* FROM tblInmate
> WHERE (INM_Active <> 0) AND
> (INM_Location IN (SELECT CMP_LOCCODE FROM qspVolCampIDs))
>
> If I remove the IN clause, the query is updateable.
>
> I really need to filter to just the camps the volunteer works with and I am wondering how to accomplish this. In the past I would try to JOIN the main query to the selection filter and that caused non-updateable. I was told to use the IN(SELECT) which has worked in most cases in the past.
>
> Any clue why not now and how to go about filtering and keeping it updateable?
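For reference, the shape that stays updateable is the one below, where the stored procedure's output has first been landed in a local table (tblVolCampIDs is an illustrative name for that table):

SELECT tblInmate.*
FROM tblInmate
WHERE (INM_Active <> 0)
AND (INM_Location IN (SELECT CMP_LOCCODE FROM tblVolCampIDs));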
From fuller.artful at gmail.com Fri Jul 15 07:14:14 2011
From: fuller.artful at gmail.com (Arthur Fuller)
Date: Fri, 15 Jul 2011 08:14:14 -0400
Subject: [dba-SQLServer] Update Foreign Keys

I'd like to poll the readership to ask, "Do you permit FKs to be updated, and if so under what circumstances?"

I'm asking because a client and I are discussing a situation where this has arisen:

A Client may have several locations.
A Location has zero or more machines installed.
A Machine has related data in at least one table (Assessments and optionally Measurements).

From time to time the Client may want to move a Machine from one Location to another.

The client suggested that I simply replace the FK LocationID on the Machine record with the LocationID of the new Location. I pointed out that there are two possible results to this operation:

a) Do a Cascade Update on the tables under Machines. This approach "destroys history", so to speak, in that the data really no longer applies to the relocated Machine. The Assessments and Measurements no longer apply to the new Location.
b) Orphan the Assessments and Measurements. This is unacceptable, IMO.

So I suggested that rather than change the Machine's LocationID, we instead copy the Machine data (only) to a new row, assigning it the new LocationID and leaving the old row intact, along with its Assessments and Measurements.

In a somewhat related topic, "Do you permit Cascade DELETEs, and if so, under what circumstances?" I'll respond to that one first. The only time I permit this is when using staging tables. For example, a wizard may accept new data into several tables. The last step in the wizard is equivalent to "COMMIT" -- it writes the accumulated data to the "real" tables. There is also a "Cancel" button, which if pressed causes a Cascade Delete across all the tables involved.

Arthur
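For reference, the cascade behavior being polled about is declared on the constraint itself; a sketch using the table names from the example above (the constraint name is invented):

ALTER TABLE Machines ADD CONSTRAINT FK_Machines_Locations
    FOREIGN KEY (LocationID) REFERENCES Locations (LocationID)
    ON UPDATE CASCADE;  -- fires only if a Location's key value itself changes

-- Changing Machines.LocationID to point at a different Location is always
-- allowed as long as the new value exists in Locations; no cascade is
-- involved in that operation.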
From jwcolby at colbyconsulting.com Fri Jul 15 07:42:16 2011
From: jwcolby at colbyconsulting.com (jwcolby)
Date: Fri, 15 Jul 2011 08:42:16 -0400
Subject: [dba-SQLServer] Update Foreign Keys
Message-ID: <4E2035A8.9090505@colbyconsulting.com>

Arthur,

FKs are updated all of the time. Since I only use autonumber PKs, the FKs do not form part of the PK of the child table, and therefore I do not need cascade updates.

As for the history, the history belongs to the machine. If the machine moves, then the history moves as well. If the history needs other "location" data to be valid, then it needed to have location FKs in that history data, which it apparently does not. If it truly needs that location data, then add a new location field to the immediate child of the machine, update it with the location where the data was accumulated, and off you go.

It seems to me, however, that location is probably not what is actually being tracked, but rather the instruments (taking the measurements), and you probably already have an instrument ID in the measurements. If not, you have bigger problems than location data.

If you copy the machine and create a new record, then you have the same machine in two different locations. Clearly a problem in this universe. You will now be working around a problem that you created. I understand that you like the PIT architecture stuff, but unless the system is designed from the ground up to use it, it seems unwise to me to apply it piecemeal.

The machine moved. The location ID gets updated. FK updates happen all the time in my world. Think people / cars and so forth.

John W. Colby
www.ColbyConsulting.com

From df.waters at comcast.net Fri Jul 15 08:43:19 2011
From: df.waters at comcast.net (Dan Waters)
Date: Fri, 15 Jul 2011 08:43:19 -0500
Subject: [dba-SQLServer] Update Foreign Keys
Message-ID: <001d01cc42f5$29750f60$7c5f2e20$@comcast.net>

I think I'd do something like this: create a tblMachineHistory table which shows MachineID and LocationID, with fields for Active, EnteredBy and EnteredDate. When a machine is moved, add a new record for that machine and make that record Active. Keep the historical records, which show the location history for each machine.

When a new assessment/measurement record is created, your code can stamp it with the machine's current LocationID from the lookup table. Now you can recall the location history for the machine, and know where the machine was when the record was created.

Of course ... don't change the machine ID!

HTH,
Dan

From davidmcafee at gmail.com Fri Jul 15 08:46:49 2011
From: davidmcafee at gmail.com (David McAfee)
Date: Fri, 15 Jul 2011 06:46:49 -0700
Subject: [dba-SQLServer] Update Foreign Keys

Create a junction table for installs:

tblInstalls
InstalledID (PK, int)
LocationID (FK, int)
MachineID (FK, int)
InstallDate
EntryDate
EntryUserID

Every record is an insertion. You never have to overwrite data. Built-in history.

A simple view/sproc using MAX() can show the latest location for a given machine, or the machines at a given location.
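Air code for the above (the two referenced tables, tblLocations and tblMachines, are assumed to exist; the types are guesses):

CREATE TABLE tblInstalls (
    InstalledID int IDENTITY(1,1) PRIMARY KEY,
    LocationID  int NOT NULL REFERENCES tblLocations (LocationID),
    MachineID   int NOT NULL REFERENCES tblMachines (MachineID),
    InstallDate datetime NOT NULL,
    EntryDate   datetime NOT NULL DEFAULT GETDATE(),
    EntryUserID int NOT NULL
);

-- latest known location per machine:
SELECT i.MachineID, i.LocationID, i.InstallDate
FROM tblInstalls AS i
JOIN (SELECT MachineID, MAX(InstallDate) AS LastInstall
      FROM tblInstalls
      GROUP BY MachineID) AS m
  ON m.MachineID = i.MachineID AND m.LastInstall = i.InstallDate;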
HTH,
David

Sent from my Droid phone.

From df.waters at comcast.net Fri Jul 15 08:51:34 2011
From: df.waters at comcast.net (Dan Waters)
Date: Fri, 15 Jul 2011 08:51:34 -0500
Subject: [dba-SQLServer] Update Foreign Keys
Message-ID: <001e01cc42f6$50352530$f09f6f90$@comcast.net>

Yeah ..... what David said! ;-)

From fuller.artful at gmail.com Fri Jul 15 08:53:31 2011
From: fuller.artful at gmail.com (Arthur Fuller)
Date: Fri, 15 Jul 2011 09:53:31 -0400
Subject: [dba-SQLServer] Update Foreign Keys

That is my preferred approach. A while back I wrote a piece for Simple-Talk on PITA (Point In Time Architecture - not the other meaning, although it is somewhat appropriate too :). In the case I was discussing, nothing was ever updated other than its EndDate value. A case in point: throughout your life you might change family physicians, for any number of reasons. On the other hand, you may need your medical history from the years you were with doctor 123, from 2004 to 2007; since then you've had two other family doctors. Sometimes you need a PIT, sometimes you need all the data.

A.
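The as-of lookup in that scheme looks roughly like this (table, column and parameter names invented for the sketch):

SELECT h.DoctorID, h.StartDate, h.EndDate
FROM tblPatientDoctorHistory AS h
WHERE h.PatientID = @PatientID
  AND h.StartDate <= @AsOf
  AND (h.EndDate > @AsOf OR h.EndDate IS NULL);  -- NULL EndDate = current row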
From fhtapia at gmail.com Fri Jul 15 08:57:07 2011
From: fhtapia at gmail.com (Francisco Tapia)
Date: Fri, 15 Jul 2011 06:57:07 -0700
Subject: [dba-SQLServer] Update Foreign Keys
Message-ID: <2171584035414575940@unknownmsgid>

What a great concept!

Sent from my mobile

From fuller.artful at gmail.com Fri Jul 15 09:30:00 2011
From: fuller.artful at gmail.com (Arthur Fuller)
Date: Fri, 15 Jul 2011 10:30:00 -0400
Subject: [dba-SQLServer] Update Foreign Keys

If you want a detailed explanation and an implementation guide, as it were, visit www.simple-talk.com and download my article on it. The editors didn't like my PITA title, so they changed it to PIT Architecture. Myself, I liked the double-entendre.

A.

From davidmcafee at gmail.com Fri Jul 15 10:28:40 2011
From: davidmcafee at gmail.com (David McAfee)
Date: Fri, 15 Jul 2011 08:28:40 -0700
Subject: [dba-SQLServer] Update Foreign Keys

Hmmmm. Where was that concept used before? ;)
From fuller.artful at gmail.com Fri Jul 15 10:32:02 2011
From: fuller.artful at gmail.com (Arthur Fuller)
Date: Fri, 15 Jul 2011 11:32:02 -0400
Subject: [dba-SQLServer] Update Foreign Keys

I first encountered it in theory in Ralph Kimball's books. In practice I encountered it while working on a project called the Ontario Labs Information System (OLIS), whose goal is to digitize all the province's medical information, down to and including all the X-rays, MRIs -- the whole kit and kaboodle.

A.
From fuller.artful at gmail.com Fri Jul 15 10:32:02 2011
From: fuller.artful at gmail.com (Arthur Fuller)
Date: Fri, 15 Jul 2011 11:32:02 -0400
Subject: [dba-SQLServer] Update Foreign Keys
In-Reply-To:
References: <2171584035414575940@unknownmsgid>
Message-ID:

I first encountered it in theory in Ralph Kimball's books. In practice I
encountered it while working on a project called Ontario Labs Information
System (OLIS), whose goal is to digitize all the province's medical
information, down to and including all the X-rays, MRIs -- the whole kit
and kaboodle.

A.

On Fri, Jul 15, 2011 at 11:28 AM, David McAfee wrote:

> hmmmm. Where was that concept used before? ;)
>
>
From davidmcafee at gmail.com Fri Jul 15 10:32:53 2011
From: davidmcafee at gmail.com (David McAfee)
Date: Fri, 15 Jul 2011 08:32:53 -0700
Subject: [dba-SQLServer] Update Foreign Keys
In-Reply-To:
References:
Message-ID:

Yes. I do it for our rebates and incentives, here at my current job.

It makes it so nice to be able to see what a price/incentive was on a given
date.

I keep trying to get another developer to move his seller/customer
assignments over to this model.

He feels it is too confusing and just keeps sticking to "live" overwriting
of data.

It sucks when someone needs to interface with our systems, as there is no
historical data on his end.

On Fri, Jul 15, 2011 at 6:53 AM, Arthur Fuller wrote:

>> That is my preferred approach. A while back I wrote a piece for Simple-Talk
>> on PITA (Point In Time Architecture, not the other meaning, although it is
>> somewhat appropriate too :). In the case I was discussing, nothing was ever
>> updated, other than its EndDate value. A case in point: throughout your
>> life, you might change family physicians, for any number of reasons. On the
>> other hand, you may need your medical history while you were with doctor
>> 123, from 2004 to 2007. Since then you've had two other family doctors.
>> Sometimes you need a PIT, sometimes you need all the data.
>>
>> A,
>>
>> On Fri, Jul 15, 2011 at 9:46 AM, David McAfee wrote:
>>
>>> Create a junction table for installs.
>>>
>>> tblInstalls
>>> InstalledID (PK, INT)
>>> LocationID (FK, int)
>>> MachineID (fk, int)
>>> Installdate
>>> EntryDate
>>> Entryuserid
>>>
>>> Every record is an insertion.
>>> You never have to overwrite data.
>>> Built in history.
>>>
>>> A simple view/sproc using Max() can show the latest location for a
>>> given machine or machines at a given location.
>>>
>>> HTH,
>>> David
>>>
>>>
>> _______________________________________________
>> dba-SQLServer mailing list
>> dba-SQLServer at databaseadvisors.com
>> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
>> http://www.databaseadvisors.com
>>
>>
> _______________________________________________
> dba-SQLServer mailing list
> dba-SQLServer at databaseadvisors.com
> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
> http://www.databaseadvisors.com
>
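To make David's "what was the price/incentive on a given date" lookup concrete, here is a hedged sketch against a hypothetical tblIncentives table that follows Arthur's convention of closing rows with an EndDate instead of overwriting them:

-- Hypothetical PIT table: the open-ended row (EndDate IS NULL) is current
DECLARE @AsOf datetime;
SET @AsOf = '20060315';

SELECT i.CustomerID, i.IncentiveAmount, i.StartDate, i.EndDate
FROM tblIncentives AS i
WHERE i.StartDate <= @AsOf
  AND (i.EndDate > @AsOf OR i.EndDate IS NULL);

Dropping the date filter returns the whole history, which is the other half of Arthur's "sometimes you need a PIT, sometimes you need all the data".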
From fhtapia at gmail.com Fri Jul 15 10:35:52 2011
From: fhtapia at gmail.com (Francisco Tapia)
Date: Fri, 15 Jul 2011 08:35:52 -0700
Subject: [dba-SQLServer] Update Foreign Keys
In-Reply-To:
References:
Message-ID: <6012251177852188278@unknownmsgid>

This is SQL Server, right? Have him create a trigger so that the data is
also inserted into the audit table at every overwrite.

Sent from my mobile

On Jul 15, 2011, at 8:33 AM, David McAfee wrote:

> Yes. I do it for our rebates and incentives, here at my current job.
>
> It makes it so nice to be able to see what a price/incentive was on a given
> date.
>
> I keep trying to get another developer to move his seller/customer
> assignments over to this model.
>
> He feels it is too confusing and just keeps sticking to "live" overwriting
> of data.
>
> It sucks when someone needs to interface with our systems, as there is no
> historical data on his end.
>
> On Fri, Jul 15, 2011 at 6:53 AM, Arthur Fuller wrote:
>
>> That is my preferred approach. A while back I wrote a piece for Simple-Talk
>> on PITA (Point In Time Architecture, not the other meaning, although it is
>> somewhat appropriate too :). In the case I was discussing, nothing was ever
>> updated, other than its EndDate value. A case in point: throughout your
>> life, you might change family physicians, for any number of reasons. On the
>> other hand, you may need your medical history while you were with doctor
>> 123, from 2004 to 2007. Since then you've had two other family doctors.
>> Sometimes you need a PIT, sometimes you need all the data.
>>
>> A,
>>
>> On Fri, Jul 15, 2011 at 9:46 AM, David McAfee wrote:
>>
>>> Create a junction table for installs.
>>>
>>> tblInstalls
>>> InstalledID (PK, INT)
>>> LocationID (FK, int)
>>> MachineID (fk, int)
>>> Installdate
>>> EntryDate
>>> Entryuserid
>>>
>>> Every record is an insertion.
>>> You never have to overwrite data.
>>> Built in history.
>>>
>>> A simple view/sproc using Max() can show the latest location for a
>>> given machine or machines at a given location.
>>>
>>> HTH,
>>> David
>>>
>>>
>> _______________________________________________
>> dba-SQLServer mailing list
>> dba-SQLServer at databaseadvisors.com
>> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
>> http://www.databaseadvisors.com
>>
>>
> _______________________________________________
> dba-SQLServer mailing list
> dba-SQLServer at databaseadvisors.com
> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
> http://www.databaseadvisors.com
>
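A rough sketch of the trigger Francisco suggests; the table and column names are hypothetical stand-ins for the other developer's seller/customer assignments, and the audit table is assumed to mirror the live one plus a timestamp:

-- The "deleted" pseudo-table holds the pre-overwrite row images,
-- so history is captured on every UPDATE or DELETE
CREATE TRIGGER trSellerAssignments_Audit
ON tblSellerAssignments
AFTER UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO tblSellerAssignments_Audit
        (SellerID, CustomerID, AssignedDate, AuditDate)
    SELECT d.SellerID, d.CustomerID, d.AssignedDate, GETDATE()
    FROM deleted AS d;
END;

The trade-off versus the insert-only PIT design is that the live table keeps its familiar "one current row" shape while the history accumulates off to the side.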
From fhtapia at gmail.com Fri Jul 15 10:39:08 2011
From: fhtapia at gmail.com (Francisco Tapia)
Date: Fri, 15 Jul 2011 08:39:08 -0700
Subject: [dba-SQLServer] Update Foreign Keys
In-Reply-To:
References: <2171584035414575940@unknownmsgid>
Message-ID: <-4406565997184781526@unknownmsgid>

:). Thanks Arthur, I really love using this PITA method whenever historical
data is required. The plus side for me is that even though we have new DBAs
and I've moved on to a new job description, the new guys always come back
to me to ask how they can use this method on new projects.

Sent from my mobile

On Jul 15, 2011, at 7:30 AM, Arthur Fuller wrote:

> If you want a detailed explanation and implementation guide, as it were,
> visit www.simple-talk.com and download my article on it. The editors didn't
> like my PITA title so they changed it to PIT Architecture. Myself, I liked
> the double-entendre.
>
> A.
>
> On Fri, Jul 15, 2011 at 9:57 AM, Francisco Tapia wrote:
>
>> What a great concept!
>>
>>
> _______________________________________________
> dba-SQLServer mailing list
> dba-SQLServer at databaseadvisors.com
> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
> http://www.databaseadvisors.com
>
From fhtapia at gmail.com Fri Jul 15 10:44:00 2011
From: fhtapia at gmail.com (Francisco Tapia)
Date: Fri, 15 Jul 2011 08:44:00 -0700
Subject: [dba-SQLServer] Update Foreign Keys
In-Reply-To:
References:
Message-ID: <-7963481768233347922@unknownmsgid>

I will tell you this, Arthur. To this day I do not let my clients (be it my
current employer or my freelance work) dictate the architecture of a
database. I will always work with my colleagues and design what makes the
most business sense.

I guess it's different if the client is also a db developer, but generally
my clients are not, which is why I can say it like this.

Sent from my mobile

On Jul 15, 2011, at 5:15 AM, Arthur Fuller wrote:

> I'd like to poll the readership to ask, "Do you permit FKs to be updated,
> and if so under what circumstances?"
>
> I'm asking because a client and I are discussing a situation where this has
> arisen:
>
> A Client may have several locations.
> A Location has zero or more machines installed.
> A Machine has related data in at least one table (Assessments and optionally
> Measurements).
>
> From time to time the Client may want to move a Machine from one Location to
> another.
>
> The client suggested that I simply replace the FK LocationID on the Machine
> record with the LocationID of the new Location. I pointed out that there are
> two possible results to this operation:
>
> a) do a Cascade Update on the tables under Machines. This approach "destroys
> history", so to speak, in that the data really no longer applies to the
> relocated Machine. The Assessments and Measurements no longer apply to the
> new Location.
> b) Orphan the Assessments and Measurements. This is unacceptable, IMO.
>
> So I suggested that rather than change the Machine's LocationID, we instead
> copy the Machine data (only) to a new row, assigning it the new LocationID
> and leaving the old row intact, along with its Assessments and Measurements
>
> In a somewhat related topic, "Do you permit Cascade DELETEs, and if so,
> under what circumstances?" I'll respond to that one first. The only time I
> permit this is when using staging tables. For example, a wizard may accept
> new data into several tables. The last step in the wizard is equivalent to
> "COMMIT" -- it writes the accumulated data to the "real" tables. There is
> also a "Cancel" button, which if pressed causes a Cascade Delete across all
> the tables involved.
>
> Arthur
> _______________________________________________
> dba-SQLServer mailing list
> dba-SQLServer at databaseadvisors.com
> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
> http://www.databaseadvisors.com
>
From fuller.artful at gmail.com Fri Jul 15 10:54:12 2011
From: fuller.artful at gmail.com (Arthur Fuller)
Date: Fri, 15 Jul 2011 11:54:12 -0400
Subject: [dba-SQLServer] Update Foreign Keys
In-Reply-To: <-7963481768233347922@unknownmsgid>
References: <-7963481768233347922@unknownmsgid>
Message-ID:

In this case, the client is an engineer and also has experience in Access.
In fact, he wrote the first version of the app, and then brought me in when
the requirements went beyond his skill set. We've been working together for
five years, and I consider him a very good friend. He never dictates
anything to do with db architecture. We kick subjects around a lot, though;
he considers that exercise a learning experience.

On Fri, Jul 15, 2011 at 11:44 AM, Francisco Tapia wrote:

> I will tell you this, Arthur. To this day I do not let my clients (be
> it my current employer or my freelance work) dictate the
> architecture of a database. I will always work with my colleagues and
> design what makes the most business sense.
>
> I guess it's different if the client is also a db developer, but
> generally my clients are not, which is why I can say it like this.
>
From jwcolby at colbyconsulting.com Wed Jul 20 06:51:31 2011
From: jwcolby at colbyconsulting.com (jwcolby)
Date: Wed, 20 Jul 2011 07:51:31 -0400
Subject: [dba-SQLServer] How to get top1 of group by
Message-ID: <4E26C143.4010206@colbyconsulting.com>

The following SQL gets me the top N of a group based on rank.

with PR as
(
     SELECT Product, PartNo, Buildable, LimitingFactor, rank() over
(partition by PRFW.product order by PRFW.LimitingFactor) as LFRank
     FROM dbo.VpARTS_REQUIREMENTS_FOR_WORKORDERS_OWBreakout as PRFW
)

select * from PR WHERE PR.LFRank = 1

but if there are several of the same rank it returns all of the records
with the top rank.
I need to get only 1 item per product.

Any suggestions?
--
John W. Colby
www.ColbyConsulting.com

From fuller.artful at gmail.com Wed Jul 20 06:57:46 2011
From: fuller.artful at gmail.com (Arthur Fuller)
Date: Wed, 20 Jul 2011 07:57:46 -0400
Subject: [dba-SQLServer] How to get top1 of group by
In-Reply-To: <4E26C143.4010206@colbyconsulting.com>
References: <4E26C143.4010206@colbyconsulting.com>
Message-ID:

SELECT TOP 1 ... and the rest of your query.

A.

On Wed, Jul 20, 2011 at 7:51 AM, jwcolby wrote:

> The following SQL gets me the top N of a group based on rank.
>
> with PR as
> (
>      SELECT Product, PartNo, Buildable, LimitingFactor, rank() over
> (partition by PRFW.product order by PRFW.LimitingFactor) as LFRank
>      FROM dbo.VpARTS_REQUIREMENTS_FOR_WORKORDERS_OWBreakout as PRFW
> )
>
> select * from PR WHERE PR.LFRank = 1
>
> but if there are several of the same rank it returns all of the records
> with the top rank. I need to get only 1 item per product.
>
> Any suggestions?
>
From jwcolby at colbyconsulting.com Wed Jul 20 07:07:49 2011
From: jwcolby at colbyconsulting.com (jwcolby)
Date: Wed, 20 Jul 2011 08:07:49 -0400
Subject: [dba-SQLServer] How to get top1 of group by
In-Reply-To:
References: <4E26C143.4010206@colbyconsulting.com>
Message-ID: <4E26C515.5090300@colbyconsulting.com>

No, that gets the top 1 of all the records. The query is returning 1 to N
records for every product. I need 1 record for every product.

John W. Colby
www.ColbyConsulting.com

On 7/20/2011 7:57 AM, Arthur Fuller wrote:
> SELECT TOP 1 ... and the rest of your query.
>
> A.
>
> On Wed, Jul 20, 2011 at 7:51 AM, jwcolby wrote:
>
>> The following SQL gets me the top N of a group based on rank.
>>
>> with PR as
>> (
>>      SELECT Product, PartNo, Buildable, LimitingFactor, rank() over
>> (partition by PRFW.product order by PRFW.LimitingFactor) as LFRank
>>      FROM dbo.VpARTS_REQUIREMENTS_FOR_WORKORDERS_OWBreakout as PRFW
>> )
>>
>> select * from PR WHERE PR.LFRank = 1
>>
>> but if there are several of the same rank it returns all of the records
>> with the top rank. I need to get only 1 item per product.
>>
>> Any suggestions?
>>
> _______________________________________________
> dba-SQLServer mailing list
> dba-SQLServer at databaseadvisors.com
> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
> http://www.databaseadvisors.com
>
From fuller.artful at gmail.com Wed Jul 20 08:56:19 2011
From: fuller.artful at gmail.com (Arthur Fuller)
Date: Wed, 20 Jul 2011 09:56:19 -0400
Subject: [dba-SQLServer] How to get top1 of group by
In-Reply-To: <4E26C515.5090300@colbyconsulting.com>
References: <4E26C143.4010206@colbyconsulting.com> <4E26C515.5090300@colbyconsulting.com>
Message-ID:

Sorry, I misunderstood. To get what you want, you have to go "correlated
subquery". See this link for more info on setting this up:

http://msdn.microsoft.com/en-us/library/ms187638.aspx

HTH,
A.

On Wed, Jul 20, 2011 at 8:07 AM, jwcolby wrote:

> No, that gets the top 1 of all the records. The query is returning 1 to N
> records for every product. I need 1 record for every product.
>
>
> John W. Colby
>
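For reference, here is one hedged sketch of the correlated-subquery route Arthur points to, written against John's view. The PartNo tiebreaker is an assumption (any column unique within a product would do); with it, exactly one row per product comes back even when LimitingFactor ties:

SELECT p.Product, p.PartNo, p.Buildable, p.LimitingFactor
FROM dbo.VpARTS_REQUIREMENTS_FOR_WORKORDERS_OWBreakout AS p
WHERE p.PartNo =
      (SELECT TOP 1 p2.PartNo
       FROM dbo.VpARTS_REQUIREMENTS_FOR_WORKORDERS_OWBreakout AS p2
       WHERE p2.Product = p.Product             -- correlate on the group
       ORDER BY p2.LimitingFactor, p2.PartNo);  -- lowest factor wins, ties broken by PartNo

This form also avoids the ranking functions entirely, which matters on older servers.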
From fhtapia at gmail.com Thu Jul 21 14:19:04 2011
From: fhtapia at gmail.com (Francisco Tapia)
Date: Thu, 21 Jul 2011 12:19:04 -0700
Subject: [dba-SQLServer] iOS and Sql Server
Message-ID:

Is anybody here working on mobile apps? Today I ran into this link from
redgate for helping to get access to sql servers in your organization
using iOS's mobile platform...

http://mobilefoo.com/ProductDetail.aspx/iSql

-Francisco
http://bit.ly/sqlthis | Tsql and More...

From jwcolby at colbyconsulting.com Thu Jul 21 14:52:04 2011
From: jwcolby at colbyconsulting.com (jwcolby)
Date: Thu, 21 Jul 2011 15:52:04 -0400
Subject: [dba-SQLServer] How to get top1 of group by
In-Reply-To:
References: <4E26C143.4010206@colbyconsulting.com> <4E26C515.5090300@colbyconsulting.com>
Message-ID: <4E288364.9090001@colbyconsulting.com>

Making matters much worse, they have SQL Server 2000.

John W. Colby
www.ColbyConsulting.com

On 7/20/2011 9:56 AM, Arthur Fuller wrote:
> Sorry, I misunderstood. To get what you want, you have to go "correlated
> subquery". See this link for more info on setting this up:
>
> http://msdn.microsoft.com/en-us/library/ms187638.aspx
>
> HTH,
> A.
>
> On Wed, Jul 20, 2011 at 8:07 AM, jwcolby wrote:
>
>> No, that gets the top 1 of all the records. The query is returning 1 to N
>> records for every product. I need 1 record for every product.
>>
>>
>> John W. Colby
>>
>>
> _______________________________________________
> dba-SQLServer mailing list
> dba-SQLServer at databaseadvisors.com
> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
> http://www.databaseadvisors.com
>
From markamatte at hotmail.com Fri Jul 22 10:39:40 2011
From: markamatte at hotmail.com (Mark A Matte)
Date: Fri, 22 Jul 2011 15:39:40 +0000
Subject: [dba-SQLServer] How to get top1 of group by
In-Reply-To: <4E26C143.4010206@colbyconsulting.com>
References: <4E26C143.4010206@colbyconsulting.com>
Message-ID:

Could you:

select top 1 pr.*
from (SELECT Product, PartNo, Buildable, LimitingFactor, rank() over
      (partition by PRFW.product order by PRFW.LimitingFactor) as LFRank
      FROM dbo.VpARTS_REQUIREMENTS_FOR_WORKORDERS_OWBreakout as PRFW) pr
where pr.LFRank = 1

Mark A. Matte

> Date: Wed, 20 Jul 2011 07:51:31 -0400
> From: jwcolby at colbyconsulting.com
> To: dba-sqlserver at databaseadvisors.com
> Subject: [dba-SQLServer] How to get top1 of group by
>
> The following SQL gets me the top N of a group based on rank.
>
> with PR as
> (
>      SELECT Product, PartNo, Buildable, LimitingFactor, rank() over
> (partition by PRFW.product order by PRFW.LimitingFactor) as LFRank
>      FROM dbo.VpARTS_REQUIREMENTS_FOR_WORKORDERS_OWBreakout as PRFW
> )
>
> select * from PR WHERE PR.LFRank = 1
>
> but if there are several of the same rank it returns all of the records
> with the top rank. I need to get only 1 item per product.
>
> Any suggestions?
> --
> John W. Colby
> www.ColbyConsulting.com
> _______________________________________________
> dba-SQLServer mailing list
> dba-SQLServer at databaseadvisors.com
> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
> http://www.databaseadvisors.com
>
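A caution on Mark's version: TOP 1 there returns a single row overall, not one per product. The usual fix, on SQL Server 2005 or later (so not for the 2000 box John mentions), is to swap rank() for row_number(), which never produces ties; the PartNo in the ORDER BY is an assumed tiebreaker to make the choice deterministic:

with PR as
(
    SELECT Product, PartNo, Buildable, LimitingFactor,
           row_number() over (partition by PRFW.Product
                              order by PRFW.LimitingFactor, PRFW.PartNo) as LFRank
    FROM dbo.VpARTS_REQUIREMENTS_FOR_WORKORDERS_OWBreakout as PRFW
)
-- row_number() assigns 1..N within each product, so LFRank = 1
-- is guaranteed to match exactly one row per product
select * from PR WHERE PR.LFRank = 1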
From jwcolby at colbyconsulting.com Sat Jul 23 00:40:58 2011
From: jwcolby at colbyconsulting.com (jwcolby)
Date: Sat, 23 Jul 2011 01:40:58 -0400
Subject: [dba-SQLServer] Manually recreate database structure
Message-ID: <4E2A5EEA.30804@colbyconsulting.com>

I have a database designed on SQL Server 2008 Express which is a later
version (10.50) than my full version (10.0). I cannot backup and restore
backwards, nor can I detach / attach.

Thus I am scripting each table as CREATE and copying the script to a new
database (same name) on the server running the older version. My problem is
that the scripts contain constraints which are PK/FK pairs. Each table has
these constraints, but one half of the constraint will always be missing as
I run these scripts. Is there a way to tell SQL Server to create but ignore
the constraints as I create the tables, and even as I load the tables with
the existing data, and then "turn on" the constraints at the very end?

Or do I need to move the constraint SQL into a separate query, create the
tables, load the data and then create the constraints at the end?

--
John W. Colby
www.ColbyConsulting.com

From fuller.artful at gmail.com Sat Jul 23 01:55:56 2011
From: fuller.artful at gmail.com (Arthur Fuller)
Date: Sat, 23 Jul 2011 02:55:56 -0400
Subject: [dba-SQLServer] Manually recreate database structure
In-Reply-To: <4E2A5EEA.30804@colbyconsulting.com>
References: <4E2A5EEA.30804@colbyconsulting.com>
Message-ID:

Yes, or alternatively re-order the script's execution sequence to populate
all the lookup tables first, and only then populate the main tables.

FWIW, I agree that these MS tools ought to do a far better job at this, and
understand in which order the tables ought to be created.

A.

On Sat, Jul 23, 2011 at 1:40 AM, jwcolby wrote:

> I have a database designed on SQL Server 2008 Express which is a later
> version (10.50) than my full version (10.0). I cannot backup and restore
> backwards, nor can I detach / attach.
>
> Thus I am scripting each table as CREATE and copying the script to a new
> database (same name) on the server running the older version. My problem is
> that the scripts contain constraints which are PK/FK pairs. Each table has
> these constraints, but one half of the constraint will always be missing as
> I run these scripts. Is there a way to tell SQL Server to create but ignore
> the constraints as I create the tables, and even as I load the tables with
> the existing data, and then "turn on" the constraints at the very end?
>
> Or do I need to move the constraint SQL into a separate query, create the
> tables, load the data and then create the constraints at the end?
>
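For the "create now, enforce later" route John asks about, a minimal sketch with a hypothetical table name; note that ALTER TABLE ... NOCHECK applies to FOREIGN KEY and CHECK constraints only (primary keys are enforced by their index and cannot be switched off this way):

-- Stop checking FK/CHECK constraints on one table before the load
ALTER TABLE dbo.SomeTable NOCHECK CONSTRAINT ALL;

-- ... load the data here ...

-- WITH CHECK re-validates the loaded rows so the constraints stay trusted
ALTER TABLE dbo.SomeTable WITH CHECK CHECK CONSTRAINT ALL;

-- The undocumented but widely used sp_MSforeachtable can apply this
-- across the whole database in one shot
EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';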
From jwcolby at colbyconsulting.com Sat Jul 23 07:43:25 2011
From: jwcolby at colbyconsulting.com (jwcolby)
Date: Sat, 23 Jul 2011 08:43:25 -0400
Subject: [dba-SQLServer] Manually recreate database structure
In-Reply-To:
References: <4E2A5EEA.30804@colbyconsulting.com>
Message-ID: <4E2AC1ED.9020408@colbyconsulting.com>

I am building the scripts one by one by clicking on the table and right
clicking "script table as". Is there some way to script the whole shebang
at one time into a single script?

John W. Colby
www.ColbyConsulting.com

On 7/23/2011 2:55 AM, Arthur Fuller wrote:
> Yes, or alternatively re-order the script's execution sequence to populate
> all the lookup tables first, and only then populate the main tables.
>
> FWIW, I agree that these MS tools ought to do a far better job at this, and
> understand in which order the tables ought to be created.
>
> A.
>
> On Sat, Jul 23, 2011 at 1:40 AM, jwcolby wrote:
>
>> I have a database designed on SQL Server 2008 Express which is a later
>> version (10.50) than my full version (10.0). I cannot backup and restore
>> backwards, nor can I detach / attach.
>>
>> Thus I am scripting each table as CREATE and copying the script to a new
>> database (same name) on the server running the older version. My problem is
>> that the scripts contain constraints which are PK/FK pairs. Each table has
>> these constraints, but one half of the constraint will always be missing as
>> I run these scripts. Is there a way to tell SQL Server to create but ignore
>> the constraints as I create the tables, and even as I load the tables with
>> the existing data, and then "turn on" the constraints at the very end?
>>
>> Or do I need to move the constraint SQL into a separate query, create the
>> tables, load the data and then create the constraints at the end?
>>
> _______________________________________________
> dba-SQLServer mailing list
> dba-SQLServer at databaseadvisors.com
> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
> http://www.databaseadvisors.com
>
From jwcolby at colbyconsulting.com Sat Jul 23 07:53:04 2011
From: jwcolby at colbyconsulting.com (jwcolby)
Date: Sat, 23 Jul 2011 08:53:04 -0400
Subject: [dba-SQLServer] Manually recreate database structure
In-Reply-To: <4E2AC1ED.9020408@colbyconsulting.com>
References: <4E2A5EEA.30804@colbyconsulting.com> <4E2AC1ED.9020408@colbyconsulting.com>
Message-ID: <4E2AC430.5030701@colbyconsulting.com>

Never mind, I found it. When run, the script generated about a billion
errors but it did apparently build the tables.

John W. Colby
www.ColbyConsulting.com

On 7/23/2011 8:43 AM, jwcolby wrote:
> I am building the scripts one by one by clicking on the table and right
> clicking "script table as". Is there some way to script the whole shebang
> at one time into a single script?
>
> John W. Colby
> www.ColbyConsulting.com
>
> On 7/23/2011 2:55 AM, Arthur Fuller wrote:
>> Yes, or alternatively re-order the script's execution sequence to populate
>> all the lookup tables first, and only then populate the main tables.
>>
>> FWIW, I agree that these MS tools ought to do a far better job at this, and
>> understand in which order the tables ought to be created.
>>
>> A.
>>
>> On Sat, Jul 23, 2011 at 1:40 AM, jwcolby wrote:
>>
>>> I have a database designed on SQL Server 2008 Express which is a later
>>> version (10.50) than my full version (10.0). I cannot backup and restore
>>> backwards, nor can I detach / attach.
>>>
>>> Thus I am scripting each table as CREATE and copying the script to a new
>>> database (same name) on the server running the older version. My problem is
>>> that the scripts contain constraints which are PK/FK pairs. Each table has
>>> these constraints, but one half of the constraint will always be missing as
>>> I run these scripts. Is there a way to tell SQL Server to create but ignore
>>> the constraints as I create the tables, and even as I load the tables with
>>> the existing data, and then "turn on" the constraints at the very end?
>>>
>>> Or do I need to move the constraint SQL into a separate query, create the
>>> tables, load the data and then create the constraints at the end?
>>>
>> _______________________________________________
>> dba-SQLServer mailing list
>> dba-SQLServer at databaseadvisors.com
>> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
>> http://www.databaseadvisors.com
>>
>>
From fuller.artful at gmail.com Sat Jul 23 07:55:01 2011
From: fuller.artful at gmail.com (Arthur Fuller)
Date: Sat, 23 Jul 2011 08:55:01 -0400
Subject: [dba-SQLServer] Manually recreate database structure
In-Reply-To: <4E2AC430.5030701@colbyconsulting.com>
References: <4E2A5EEA.30804@colbyconsulting.com> <4E2AC1ED.9020408@colbyconsulting.com> <4E2AC430.5030701@colbyconsulting.com>
Message-ID:

Only a billion?

On Sat, Jul 23, 2011 at 8:53 AM, jwcolby wrote:

> Never mind, I found it. When run, the script generated about a billion
> errors but it did apparently build the tables.
>
>
> John W. Colby
> www.ColbyConsulting.com
>
From jwcolby at colbyconsulting.com Sat Jul 23 08:09:58 2011
From: jwcolby at colbyconsulting.com (jwcolby)
Date: Sat, 23 Jul 2011 09:09:58 -0400
Subject: [dba-SQLServer] Manually recreate database structure
In-Reply-To:
References: <4E2A5EEA.30804@colbyconsulting.com> <4E2AC1ED.9020408@colbyconsulting.com> <4E2AC430.5030701@colbyconsulting.com>
Message-ID: <4E2AC826.9070009@colbyconsulting.com>

Well maybe a couple of billion. I didn't really count them. ;)

I am now trying to populate the tables with the data from the source db.
What a PITA.

This is a small database so I will just keep on at it until I get it.

John W. Colby
www.ColbyConsulting.com

On 7/23/2011 8:55 AM, Arthur Fuller wrote:
> Only a billion?
>
> On Sat, Jul 23, 2011 at 8:53 AM, jwcolby wrote:
>
>> Never mind, I found it. When run, the script generated about a billion
>> errors but it did apparently build the tables.
>>
>>
>> John W. Colby
>> www.ColbyConsulting.com
>>
> _______________________________________________
> dba-SQLServer mailing list
> dba-SQLServer at databaseadvisors.com
> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
> http://www.databaseadvisors.com
>
From fuller.artful at gmail.com Sat Jul 23 09:25:27 2011
From: fuller.artful at gmail.com (Arthur Fuller)
Date: Sat, 23 Jul 2011 10:25:27 -0400
Subject: [dba-SQLServer] Manually recreate database structure
In-Reply-To: <4E2AC826.9070009@colbyconsulting.com>
References: <4E2A5EEA.30804@colbyconsulting.com> <4E2AC1ED.9020408@colbyconsulting.com> <4E2AC430.5030701@colbyconsulting.com> <4E2AC826.9070009@colbyconsulting.com>
Message-ID:

I could be wrong, and perhaps you already found the thing to which I am
about to refer, but in SSMS if you click on the database rather than any
given table, and then choose All Tasks and then Script, it comes out in the
right order, respecting your FKs. I'll check that in a minute but I think
that is the case.

A.

P.S. Since when do you work on small DBs? I always thought of you as the
60M-rows man. There could be a sequel to The Social Network here, starring
oh I don't know, maybe Christopher Walken as JWC, duelling with tables 700
columns wide and unleashing his SSHDs upon his enemies, and delivering the
requirements OTAOB (on time and on budget). I'll call Spielberg in the
morning; nah, Christopher Nolan is the man to approach. Speaking of whom,
have you seen either Memento or Inception? This is one brilliant man.

A.

On Sat, Jul 23, 2011 at 9:09 AM, jwcolby wrote:

> Well maybe a couple of billion. I didn't really count them. ;)
>
> I am now trying to populate the tables with the data from the source db.
> What a PITA.
>
> This is a small database so I will just keep on at it until I get it.
>
>
From jwcolby at colbyconsulting.com Mon Jul 25 13:57:36 2011
From: jwcolby at colbyconsulting.com (jwcolby)
Date: Mon, 25 Jul 2011 14:57:36 -0400
Subject: [dba-SQLServer] Link odbc tables in Access
Message-ID: <4E2DBCA0.6050603@colbyconsulting.com>

When I link to SQL Server from Access using a DSN file, I end up with my
tables displayed but also about a billion (sorry, I didn't count) tables
that start with INFORMATION_SCHEMA_.xyz. At this time I have no use for
those tables and would like to filter them out so that I cannot see them.
Does anyone know how to do that?

--
John W. Colby
www.ColbyConsulting.com
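Those INFORMATION_SCHEMA entries are ANSI metadata views rather than real tables, as Arthur explains in the next message; a quick probe from any query window shows the sort of thing they return:

-- Lists only your own base tables; the views describe the schema,
-- they do not store any of your data
SELECT TABLE_SCHEMA, TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
ORDER BY TABLE_SCHEMA, TABLE_NAME;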
From fuller.artful at gmail.com Mon Jul 25 16:42:45 2011
From: fuller.artful at gmail.com (Arthur Fuller)
Date: Mon, 25 Jul 2011 17:42:45 -0400
Subject: [dba-SQLServer] Link odbc tables in Access
In-Reply-To: <4E2DBCA0.6050603@colbyconsulting.com>
References: <4E2DBCA0.6050603@colbyconsulting.com>
Message-ID:

They are all views into the meta-data of the database, intended to insulate
you from relying on the structure of tables such as SysColumns. As to how
to hide them, I have no idea. But if you create an ADP, you don't see them.
So why don't you do that instead? Just a question; you may have some valid
reason for going the DSN route.

A.

On Mon, Jul 25, 2011 at 2:57 PM, jwcolby wrote:

> When I link to SQL Server from Access using a DSN file, I end up with my
> tables displayed but also about a billion (sorry, I didn't count) tables
> that start with INFORMATION_SCHEMA_.xyz. At this time I have no use for
> those tables and would like to filter them out so that I cannot see them.
> Does anyone know how to do that?
>
From jwcolby at colbyconsulting.com Mon Jul 25 16:47:50 2011
From: jwcolby at colbyconsulting.com (jwcolby)
Date: Mon, 25 Jul 2011 17:47:50 -0400
Subject: [dba-SQLServer] SQL Server Security - from Server / From workstation
Message-ID: <4E2DE486.2090805@colbyconsulting.com>

I am setting up a SQL Server at the client. The OS is Windows 2000 so my
choice is SQL Server 2005 Express - SQL Server 2008 won't run on Windows
2000. Sigh already!

When I view the server security from the server I see a ton of logins,
including all of the server logins such as ASPNet, NT Authority\System and
so forth. I also see an sa, DiscoAdmin and DiscoApp. I created the
DiscoAdmin and DiscoApp.

I have created several databases. Several are required because of the max
file size throttling on SQL Server 2005. I created tables in the databases
and I can see them from SSMS. I can also use ODBC linked tables to link to
the tables from my Access application. This is all from the server.

From my workstation, I can see the server and databases. I can do
everything I can do from the server except see the tables from the Access
application. If I click on the links it says they are not available.

Also oddly, while I do see the SERVER DiscoAdmin login from my workstation,
I cannot see the DiscoUser login. I can see BOTH logins in the databases
themselves.

What do I need to do to see the SERVER logins? And why am I not seeing the
linked tables from my workstation but can from the server?

--
John W. Colby
www.ColbyConsulting.com

From jwcolby at colbyconsulting.com Thu Jul 28 07:59:37 2011
From: jwcolby at colbyconsulting.com (jwcolby)
Date: Thu, 28 Jul 2011 08:59:37 -0400
Subject: [dba-SQLServer] How to set up Performance Monitor
Message-ID: <4E315D39.1040206@colbyconsulting.com>

I am running Windows 2008 and SQL Server 2008. I want to set up Performance
Monitor to tell me how I am doing on memory, cache etc. When I go to Perf
Monitor it shows a ton of counters which break things down (apparently) by
service or something. For SQL Server I have:

SQLAgent
    :Jobs
    :jobSteps
    :Statistics

SQLServer
    :Access methods
    :BackupDevice
    :broker Activation

etc.

This is just a ton of stuff and I haven't a clue what is important and what
is not.

Can anyone point me to something that discusses this in an understandable
format, along with how to set up Perf Monitor for the basics?

Thanks,

--
John W. Colby
www.ColbyConsulting.com
From pcs.accessd at gmail.com Sun Jul 31 07:59:13 2011
From: pcs.accessd at gmail.com (Borge Hansen)
Date: Sun, 31 Jul 2011 20:59:13 +0800
Subject: [dba-SQLServer] ODBC Linked Tables SQL Server 2008 R2 Express to Access 2003 Using SQL Native Client 10 : nvarchar(max) comes across as text(255) - should be Memo
Message-ID:

Does anyone know the answer to this?

Configuration:

*One Machine*:
OS: Windows Server 2008 R2 (virtual machine)
MS Access 2003 (11.8166.8172) SP3
ODBC Driver: SQL Server Native Client 10.0 : 2009.100.2500.00 SQLNCLI10.DLL
17/06/2011 (Version 10.50.2500)
*accesses SQL Server 2008 R2 Express on other machine via TCP*

Other Machine:
OS: Windows Server 2008 R2 (virtual machine - both machines on same domain)
Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (Intel X86) Apr 2 2010
15:53:02 Copyright (c) Microsoft Corporation Express Edition with Advanced
Services on Windows NT 6.0 (Build 6002: Service Pack 2) (VM)

The Access 2003 application starts up and re-links all tables to the SQL
Server Db OK!

The offending linked table is a very small table with only three records.
All Memo fields on the table are linked as text(255) - and as a consequence
only the last 255 characters of the field come across.

We have several other installation configurations where this is NOT a
problem - none of which are SQL Server 2008 R2 Express, though. We have,
for example, two SQL Server 2008 R2 Web Edition installations with the
expected behaviour on the linked tables.

Anyone with an answer to this? It would be greatly appreciated!

(In the past we have used SQL Server 2005 Express with no problems linking
tables with nvarchar(max) as memo fields.)

So far I have spent / wasted 1 1/2 days troubleshooting this.

Kind regards,
Borge

From pcs.accessd at gmail.com Sun Jul 31 08:18:57 2011
From: pcs.accessd at gmail.com (Borge Hansen)
Date: Sun, 31 Jul 2011 21:18:57 +0800
Subject: [dba-SQLServer] How to set up Performance Monitor
In-Reply-To: <4E315D39.1040206@colbyconsulting.com>
References: <4E315D39.1040206@colbyconsulting.com>
Message-ID:

Hi John,

I subscribe to sqlservercentral to learn more about sql server.

Here is one article that might be of relevance to you:
http://www.sqlservercentral.com/articles/Performance+Tuning/monitoringperformance/1007/

Regards,
Borge

On Thu, Jul 28, 2011 at 8:59 PM, jwcolby wrote:

> I am running Windows 2008 and SQL Server 2008. I want to set up
> Performance Monitor to tell me how I am doing on memory, cache etc. When I
> go to Perf Monitor it shows a ton of counters which break things down
> (apparently) by service or something. For SQL Server I have:
>
> SQLAgent
>     :Jobs
>     :jobSteps
>     :Statistics
>
> SQLServer
>     :Access methods
>     :BackupDevice
>     :broker Activation
>
> etc.
>
> This is just a ton of stuff and I haven't a clue what is important and what
> is not.
>
> Can anyone point me to something that discusses this in an understandable
> format, along with how to set up Perf Monitor for the basics?
>
> Thanks,
>
> --
> John W. Colby
> www.ColbyConsulting.com
> _______________________________________________
> dba-SQLServer mailing list
> dba-SQLServer at databaseadvisors.com
> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
> http://www.databaseadvisors.com
>
>
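As a complement to the PerfMon article Borge links, the same counters can be sampled from inside SSMS on SQL Server 2005 and later; a minimal sketch for two commonly watched memory-health counters:

-- SQL Server exposes its PerfMon counters through a DMV
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Page life expectancy',
                       'Buffer cache hit ratio',
                       'Buffer cache hit ratio base');

Page life expectancy is reported directly in seconds; the buffer cache hit ratio has to be divided by its "base" counter to yield a percentage.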
From pcs.accessd at gmail.com Sun Jul 31 08:38:02 2011
From: pcs.accessd at gmail.com (Borge Hansen)
Date: Sun, 31 Jul 2011 21:38:02 +0800
Subject: [dba-SQLServer] Link odbc tables in Access
In-Reply-To: <4E2DBCA0.6050603@colbyconsulting.com>
References: <4E2DBCA0.6050603@colbyconsulting.com>
Message-ID:

John,

How about defining all the tables that you want to link to in your SQL Db
in a hidden local table in your frontend, and then using some VBA re-linker
code that walks all the records (table names) in the local table and drops
and re-links the SQL Db tables.

Then you can forget about the one billion other tables.

And using the re-linker code you can link directly to tables in the SQL Db,
removing the "dbo_" prefix....which may be of benefit if you have a lot of
queries that reference tables from back when they perhaps lived as Access
.mdb backend tables....

Just a suggestion.

Regards,
Borge

On Tue, Jul 26, 2011 at 2:57 AM, jwcolby wrote:

> When I link to SQL Server from Access using a DSN file, I end up with my
> tables displayed but also about a billion (sorry, I didn't count) tables
> that start with INFORMATION_SCHEMA_.xyz. At this time I have no use for
> those tables and would like to filter them out so that I cannot see them.
> Does anyone know how to do that?
>
> --
> John W. Colby
> www.ColbyConsulting.com
> _______________________________________________
> dba-SQLServer mailing list
> dba-SQLServer at databaseadvisors.com
> http://databaseadvisors.com/mailman/listinfo/dba-sqlserver
> http://www.databaseadvisors.com
>
>